{"text":"\\section{Introduction}\n\nThe robot exploration problem is defined as making a single robot or a multi-robot team autonomously explore unknown cluttered environments (e.g., office environments, forests, ruins) with specific goals. \nThe goals can be classified as: (1) maximizing the knowledge of the unknown environment, i.e., acquiring a map of the world \\cite{thrun2002probabilistic}, \\cite{28_background_info}; (2) searching for a static or moving object without prior information about the world. 
\nWhile solving the exploration problems with the second goal can combine the prior information of the target object, such as the semantic information, it also needs to fulfill the first goal. \n\nThe frontier-based methods \\cite{21_yamauchi1997frontier}, \\cite{22_gonzalez2002navigation} have been widely used to solve the robot exploration problem. \n\\cite{21_yamauchi1997frontier} adopts a greedy strategy, which may lead to an inefficient overall path.\nMany approaches have been proposed to consider more performance metrics (i.e. information gain, etc.) \\cite{22_gonzalez2002navigation}, \\cite{23_bourgault2002information}, \\cite{33_basilico2011exploration}. \nHowever, these approaches are designed and evaluated in a limited number of environments. Therefore, they may fail to generalize to other environments whose layouts are different.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.2]{.\/imgs\/4D_pointclouds.png}\n\t\\caption{The example of 4D point-clouds-like information. (a) shows a map example in $HouseExpo$, where the black, gray, and white areas denote unknown space, free space, and obstacle space, respectively. (b) shows the global map in our algorithm, where the map is obtained by homogeneous transformations. The map's center is the robot start location, and the map's x-coordinate is the same as the robot start orientation. (c) shows the 4D point-clouds-like information generated based on the global map. The x, y, z coordinate denote x location, y location and distance information, respectively. 
The color denotes frontier information, where red denotes obstacles and blue denotes frontiers.}\n\t\\label{fig:label_pointcloud}\n\\end{figure}\n\nCompared with these classical methods, machine learning-based methods exhibit the advantages of learning from various data.\nDeep Reinforcement Learning (DRL), which uses a neural network to approximate the agent's policy during the interactions with the environments \\cite{sutton2018reinforcement}, gains more and more attention in the application of games \\cite{DQN2015}, robotics \\cite{rl_env_robo}, etc.\nWhen applied to robot exploration problems, most research works \\cite{MaxM2018}, \\cite{31_chen2019self} {design the state space as the form of image} and use Convolution Neural Networks (CNN).\nFor example, in \\cite{MaxM2018}, CNN is utilized as a mapping from the local observation of the robot to the next optimal action.\n\nHowever, the size of CNN's input images is fixed, which results in the following limitations: \n(1) If input images represent the global information, the size of the overall explored map needs to be pre-defined to prevent input images from failing to fully containing all the map information. \n(2) If input images represent the local information, which fails to convey all the state information, the recurrent neural networks (RNN) or memory networks need to be adopted. \nUnfortunately, the robot exploration problem in this formulation requires relatively long-term planning, which is still a tough problem that has not been perfectly solved.\n\nIn this paper, to deal with the aforementioned problems, we present a novel state representation method, which relies on 4D point-clouds-like information of variable size. 
\nThis information has the same data structure as a point cloud and consists of 2D point locations together with the corresponding 1D frontier and 1D distance information, as shown in Fig.~\\ref{fig:label_pointcloud}.\nWe also design the corresponding training framework, which is based on the deep Q-learning method with a variable action space.\nBy replacing the image observation with 4D point-clouds-like information, our proposed exploration model can deal with unknown maps of arbitrary size.\nBased on dynamic graph CNN (DGCNN) \\cite{wang2019dynamic}, a typical neural network structure for processing point cloud information, our proposed neural network takes 4D point-clouds-like information as input and outputs the expected value of each frontier, which guides the robot to the frontier with the highest value.\nThis neural network is trained in a way similar to DQN in the $HouseExpo$ environment \\cite{li2019houseexpo}, a fast exploration simulation platform that includes many 2D indoor layouts. The experiments show that our exploration model achieves relatively good performance compared with the baseline in \\cite{li2019houseexpo}, the state of the art in \\cite{Weight2019}, the classical methods in \\cite{21_yamauchi1997frontier}, \\cite{22_gonzalez2002navigation} and a random method.\n\n\\subsection{Original Contributions}\nThe contributions of this paper are threefold. 
\nFirst, we propose a novel state representation method using 4D point-clouds-like information to solve the aforementioned problems in Section \\ref{sec:relatedwork}.\nAlthough point clouds have been utilized in motion planning and navigation (\\cite{27_2Dpointclouds}, \\cite{30_ObstacleResp}), our work differs from these two papers in two main aspects: \n(1) We use point clouds to represent the global information, while they represent the local observation.\n(2) Our action is to select a frontier point from a frontier set of variable size, while their action spaces contain control commands.\nSecond, we design the corresponding state-action value network based on DGCNN \\cite{wang2019dynamic}, and the training framework based on DQN \\cite{DQN2015}. \nThe novelty is that the size of our action space is variable, which makes our neural network converge faster.\nThird, we demonstrate the performance of the proposed method on a wide variety of environments that the model has not seen before, including maps whose size is much larger than the maps in the training set. \n\nThe remainder of this paper is organized as follows. \nWe first introduce the related work in Section \\ref{sec:relatedwork}.\nThen we formulate the frontier-based robot exploration problem and the DRL exploration problem in Section \\ref{sec:formulation}. \nAfter that, the framework of our proposed method is detailed in Section \\ref{sec:alg}.\nIn Section \\ref{sec:exp}, we demonstrate the performance of our proposed method through a series of simulation experiments. 
\nFinally, we conclude the work of this paper and discuss directions for future work in Section \\ref{sec:conclude}.\n\n\\section{Related Work}\n\\label{sec:relatedwork}\nIn \\cite{21_yamauchi1997frontier}, the classical frontier method is defined, where an occupancy map is utilized in which each cell is placed into one of three classes: open, unknown and occupied.\nFrontiers are then defined as the boundaries between open areas and unknown areas. \nThe robot can constantly gain new information about the world by moving to successive frontiers, while the problem of selecting which frontier to visit at a specific stage remains to be solved. \nTherefore, in a frontier-based setting, solving the exploration problem is equivalent to finding an efficient exploration strategy that can determine the optimal frontier for the robot to explore. \nA greedy exploration strategy is utilized in \\cite{21_yamauchi1997frontier} to select the nearest unvisited, accessible frontier. \nThe experimental results in that paper show that the greedy strategy is short-sighted and can waste a lot of time, especially when it misses a nearby frontier that would disappear at once if selected (this case is illustrated in the experiment part).\n\nMany DRL techniques have been applied to the robot exploration problem in several previous works.\nIn a typical DRL framework \\cite{sutton2018reinforcement}, the agent interacts with the environment by taking actions and receiving rewards from the environment. \nIn this trial-and-error manner, the agent can eventually learn an optimal policy.\nIn \\cite{MLiu2016}, a CNN is trained under the DQN framework with RGB-D sensor images as input to make the robot learn obstacle avoidance during exploration. \nAlthough avoiding obstacles is important, that paper does not apply DRL to learn the exploration strategy.\nThe work in \\cite{Weight2019} combines frontier-based exploration with DRL to learn an exploration strategy directly. 
\nThe state information includes the global occupancy map, robot locations and frontiers, while the action is to output the weight of a cost function that evaluates the goodness of each frontier. \nThe cost function includes distance and information gain terms. \nBy adjusting the weight, the relative importance of each term can be changed. \nHowever, the terms of the cost function rely on human knowledge and may not be applicable in other situations. \nIn \\cite{2_li2019deep}, the state space is similar to the one in \\cite{Weight2019}, while the action is to select a point from the global map.\nHowever, the map size can vary dramatically from one environment to the next. \nGenerality is lost when the maximum map size must be set before exploring the environments. \n\nIn \\cite{MaxM2018} and \\cite{31_chen2019self}, a local map, which is centered at the robot's current location, is extracted from the global map to represent the current state information.\nUsing a local map, \\cite{MaxM2018} trains the robot to select actions among ``turn left, turn right, move forward'', while \\cite{31_chen2019self} learns to select points in the free space of the local map. \nTaking the local map as the state eliminates the limitation on the global map size, but the current local map fails to contain all the state information. \nIn \\cite{MaxM2018}, the robot tends to get trapped in an explored room when there is no frontier in the local map, because the robot has no idea where the frontiers are.\nThe training process in \\cite{31_chen2019self} needs to drive the robot to the nearest frontier when the local map contains no frontier information, although an RNN is integrated into their framework. \nThis human intervention adopts a greedy strategy and cannot guarantee an optimal or near-optimal solution. 
\nWhen utilizing local observations, the robot exploration problem requires the DRL approach to have a long-term memory.\nNeural map in \\cite{17_parisotto2018neural} is proposed to tackle simple memory architecture problems in DRL.\nBesides, Neural SLAM in \\cite{13_zhang2017neural} embeds traditional SLAM into attention-based external memory architecture.\nHowever, the memory architectures in \\cite{17_parisotto2018neural} and \\cite{13_zhang2017neural} are based on the fixed size of the global map.\nIt is difficult to be applied to unknown environments whose size may be quite large compared with maps in training sets.\nUnlike the aforementioned methods, our method uses the 4D point-clouds-like information to represent the global state information, which does not suffer from both the map size limitation and the simple memory problem. \nAs far as we know, our method is the first to apply point clouds to robot exploration problems. \nTherefore, we also design a respective DRL training framework to map the exploration strategy directly from point clouds.\n\n\n\\section{Problem Formulation}\n\\label{sec:formulation}\n\nOur work aims to develop and train a neural network that can take 4D point-clouds-like information as input and generate efficient policy to guide the exploration process of a robot equipped with a laser scanner. \nThe network should take into account the information about explored areas, occupied areas, and unexplored frontiers. \nIn this paper, the robot exploration problem is to make a robot equipped with a limited-range laser scanner explore unknown environments autonomously.\n\n\\subsection{Frontier-based Robot Exploration Problem}\n\\label{sec:formulation_A}\nIn the robot exploration problem, a 2D occupancy map is most frequently adopted to store the explored environment information. 
\nDefine the explored 2D occupancy map at step ${t}$ as ${M_t}$.\nEach grid in ${M_t}$ can be classified into one of the following three states: free grid ${E_t}$, occupied grid ${O_t}$, and unknown grid ${U_t}$. \nAccording to \\cite{21_yamauchi1997frontier}, frontiers ${F_t}$ are defined as the boundaries between the free space ${E_t}$ and unknown space ${U_t}$.\nMany existing DRL exploration frameworks learn a mapping from ${M_t}$ and ${F_t}$ to robot movement commands which can avoid obstacles and navigate to specific locations.\nAlthough this end-to-end framework has a simple structure, it is difficult to train.\nInstead, our method learns a policy network that can directly determine which frontier to explore, which is similar to \\cite{Weight2019} and \\cite{2_li2019deep}.\nAt step $t$, a target frontier is selected from ${F_t}$ based on an exploration strategy and the current explored map ${M_t}$. \nOnce the change of the explored map is larger than a threshold, the robot is stopped, and the explored map ${M_{t+1}}$ at step $t+1$ is obtained.\nBy constantly moving to selected frontiers, the robot will explore more unknown areas until no accessible frontier exists. 
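As a concrete illustration of the frontier definition above (a free cell bordering unknown space), a minimal sketch of frontier extraction on a small occupancy grid; the cell encoding, grid values and function name are illustrative, not from the paper:

```python
# Frontier extraction on a 2D occupancy grid, following the classical
# definition: a frontier cell is a free cell adjacent to an unknown cell.
# Illustrative cell encoding: 0 = unknown, 1 = free, 2 = occupied.

def find_frontiers(grid):
    """Return the set of (row, col) frontier cells of `grid` (list of lists)."""
    rows, cols = len(grid), len(grid[0])
    frontiers = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 1:  # only free cells can be frontiers
                continue
            # 8-connected neighborhood, as in the modified Dijkstra search
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                        frontiers.add((r, c))
    return frontiers
```

Repeatedly navigating to a cell returned by such a function, re-mapping, and re-extracting frontiers until the set is empty is exactly the exploration loop sketched above.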
\n\nBecause ${M_t}$ can be represented as an image, it is commonly used to directly represent the state information.\nAs explained in Section \\ref{sec:relatedwork}, a novel state representation method with 4D point-clouds-like information is proposed instead.\nThe 4D point-clouds-like information at step $t$ is defined as a 4-dimensional point cloud set with $n$ points, denoted by $X_t = \\{ x_1^t, ..., x_n^t \\} \\subset \\mathbb{R}^4$.\nEach point contains 4D coordinates $x_i^t = \\left( x_i, y_i, b_i, d_i\\right)$, where $x_i, y_i$ denotes the location of the point, $d_i$ denotes the distance from the point to the robot location without collision, $b_i \\in \\{ 0, 1\\}$ denotes whether point $(x_i, y_i)$ in $M_t$ belongs to frontier or not.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.53]{imgs\/MDP_framework.png}\n\t\\caption{The framework of the proposed method, which consists of five components: (a) A simulator adopted from $HouseExpo$ \\cite{li2019houseexpo}, which receives and executes a robot movement command and outputs the new map in $HouseExpo$ coordinate; (b) The modified Dijkstra algorithm to extract the contour of the free space; (c) The state represented by 4D point-clouds-like information; (d) The policy network which processes the state information and estimates the value of frontiers; (e) The A* algorithm that finds a path connecting the current robot location to the goal frontier and a simple path tracking algorithm that generates the corresponding robot movement commands.}\n\t\\label{fig:label_framework}\n\\end{figure}\n\n\\subsection{DRL exploration formulation}\n\\label{sec:DRL_formulation}\nThe robot exploration problem can be formulated as a Markov Decision Process (MDP), which can be modeled as a tuple $(\\mathcal{S}, \\mathcal{A}, \\mathcal{T}, \\mathcal{R}, \\gamma)$.\nThe state space ${\\mathcal{S}}$ at step $t$ is defined by 4D point cloud $X_t$, which can be divided into frontier set $F_t$ and obstacle set 
$O_t$.\nThe action space ${\\mathcal{A}_t}$ at step $t$ is the frontier set $F_t$, and the action is to select a point $f_t$ from $F_t$, which becomes the goal of the navigation module implemented by $A^{*}$ \\cite{Hart1968Astar}.\nWhen the robot takes an action $f_t$ from the action space, the state $X_t$ transitions to state $X_{t+1}$ according to the stochastic state transition probability ${\\mathcal{T}(X_t, f_t, X_{t+1})} = p(X_{t+1} | X_t, f_t)$.\nThen the robot receives an immediate reward $r_t = \\mathcal{R}(X_t, f_t)$.\nThe discount factor $\\gamma \\in [0, 1]$ adjusts the relative importance of immediate and future rewards.\nThe objective of DRL algorithms is to learn a policy $\\pi (f_t | X_t)$ that selects actions to maximize the expected return, which is defined as the accumulated $\\gamma-$discounted rewards over time.\n\nBecause the action space varies with the size of the frontier set $F_t$, it is difficult to design a neural network that maps the state to the action directly.\nValue-based RL is more suitable for this formulation. \nIn value-based RL, a vector of action values, which are the expected returns after taking actions in state $X_t$ under policy $\\pi$, can be estimated by a deep Q network (DQN) $Q_{\\pi}(X_t, f_t; \\theta) = \\mathbb{E}[\\sum_{i=t}^{\\infty}{\\gamma}^{i-t}r_i | X_t, f_t]$, where $\\theta$ are the parameters of the multi-layered neural network.\nThe optimal policy is to take the action that has the highest action value:\n$f_{t}^{*}=\\mathop{\\text{argmax}}_{f_t}Q_{\\pi}(X_t,f_t; \\theta)$.\nDQN \\cite{DQN2015} is a novel variant of Q-learning, which utilizes two key ideas: \nexperience replay and a target network. 
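The argmax policy over a variable-size action set can be sketched as follows; `q_values` is a plain list standing in for the network output $Q_{\pi}(X_t, f_t; \theta)$, and the epsilon-greedy wrapper is an assumption of ours for the exploration phase of training, not a detail from the paper:

```python
import random

def select_frontier(q_values, frontiers, epsilon=0.0):
    """Epsilon-greedy selection over a variable-size frontier set.

    `q_values[i]` is the estimated value of `frontiers[i]` (a stand-in
    for the network output Q(X_t, f; theta)).
    """
    if not frontiers:
        return None                      # no accessible frontier: exploration ends
    if random.random() < epsilon:
        return random.choice(frontiers)  # explore a random frontier
    best = max(range(len(frontiers)), key=lambda i: q_values[i])
    return frontiers[best]               # f* = argmax_f Q(X_t, f; theta)
```

Note that the function works for any number of frontiers, which is the point of the variable action space: the network scores whatever frontier set the current map produces.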
\nThe DQN tends to overestimate action values, which can be tackled by double DQN \\cite{doubleDQN}.\nDouble DQN selects an action in the same way as DQN, but estimates this action's value with the target network.\n\n\\section{Algorithm}\n\\label{sec:alg}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.3]{.\/imgs\/PointCloud_Generation2.png}\n\t\\caption{The illustration of the 4D point-clouds-like information generation process. (a) presents the map, where the black, gray, white and blue points denote obstacles, unknown space, free space, and the robot location. In (b), the contour of free space, which is denoted by green points, is generated by the modified Dijkstra algorithm. The number on each point indicates the distance from this point to the robot location. In (c), the points in the contour set are divided into frontier or obstacle sets, which are denoted in dark green and light green, respectively. In (d), the 4D point-clouds-like information is extracted from image (c). The point clouds include location, frontier flag, and distance information.}\n\t\\label{fig:label_illustration_h}\n\\end{figure}\n\nIn this section, we present the framework of our method and illustrate its key components in detail. \n\n\\subsection{Framework Overview}\nThe typical DRL framework is adopted, where the robot interacts with the environment step by step to learn an exploration strategy. \nThe environment in this framework is based on $HouseExpo$ \\cite{li2019houseexpo}.\nThe state and action space of the original $HouseExpo$ environment are the local observation and robot movement commands, respectively. \nWhen incorporated into our framework, $HouseExpo$ receives a sequence of robot movement commands and outputs the global map once the change of the explored map is larger than a threshold, as detailed in Section \\ref{sec:formulation_A}.\nAs shown in Fig. 
\\ref{fig:label_framework}, the 4D point-clouds-like information can be obtained by a modified Dijkstra algorithm. \nAfter the policy network outputs the goal point, the $A^{*}$ algorithm is applied to find a path connecting the robot location to the goal point. \nThen a simple path tracking algorithm is applied to generate the sequence of robot movement commands.\n\n\n\\subsection{Frontier Detection and Distance Computation}\n\\label{sec:frontier_dectect}\nComputing the collision-free distances from the robot location to points in the frontier set would be time-consuming if each distance were obtained by running the $A^{*}$ algorithm once. \nInstead, we modify the Dijkstra algorithm to detect frontiers and compute distances at the same time by sharing the search information. \nDenote the ``open list'' and ``close list'' as $L_o$ and $L_c$, respectively.\nThe open list contains points that need to be searched, while the close list contains points that have already been searched.\nDefine the contour list as $L_f$, which contains the location and the cost of points that belong to the frontier or obstacle sets.\nOnly points in the free space of map $M_t$ are walkable. \nThe goal of this modified algorithm is to extract the contour of the free space and obtain the distance information simultaneously.\nAs shown in Algorithm 1, the start point with a cost of zero, which is decided by the robot location, is added to $L_o$. 
\nWhile the open list is not empty, the algorithm repeatedly searches the 8 points, denoted by $p_{near}$, adjacent to the current point $p_{cur}$.\nThe differences from the standard Dijkstra algorithm are: \n(1) If $p_{near}$ belongs to occupied or unknown space, $p_{cur}$ is added to the contour list $L_f$, as shown in line 10 of Algorithm 1.\n(2) Instead of stopping when a goal is found, the algorithm terminates only when $L_o$ is empty.\nAfter the algorithm ends, the contour list contains the points that are frontiers or boundaries between free space and obstacle space.\nPoints in the contour list can be classified by their neighboring information into the frontier or obstacle set, as shown in Fig. \\ref{fig:label_illustration_h}.\n\n\\begin{algorithm}[h]\n\\caption{Modified Dijkstra Algorithm}\n\\begin{algorithmic}[1]\n\\STATE $L_o \\leftarrow \\{ p_{start}\\}, Cost( p_{start}) = 0;$ \n\\WHILE {$L_o \\neq \\emptyset$}\n\\STATE $p_{cur} \\leftarrow minCost(L_o)$;\n\\STATE $L_c \\leftarrow L_c \\cup p_{cur}, L_o \\leftarrow L_o \\backslash p_{cur}$;\n\\FOR{$p_{near}$ in 8 points adjacent to $p_{cur}$}\n\\STATE $cost_{near} = Cost(p_{cur}) + distance(p_{near}, p_{cur})$\n\\IF{$p_{near} \\in L_c$}\n\\STATE continue;\n\\ELSIF{$p_{near}$ is not walkable}\n\\STATE $L_f \\leftarrow L_f \\cup p_{cur}$;\n\\ELSIF{$p_{near} \\notin L_o$}\n\\STATE $L_o \\leftarrow L_o \\cup p_{near}$\n\\STATE $Cost(p_{near}) = cost_{near}$;\n\\ELSIF{$p_{near} \\in L_o$ and $Cost(p_{near})>cost_{near}$}\n\\STATE $Cost(p_{near})=cost_{near}$\n\\ENDIF\n\\ENDFOR\n\\ENDWHILE \n\\label{code:recentEnd}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{figure*}[t]\n\t\n\t\\centering\n\t\\includegraphics[scale=0.48]{imgs\/network_structure2.png}\n\t\\caption{The architecture of our proposed neural network. The input point clouds are classified into two categories: the frontier set denoted by dark blue and the obstacle set denoted by red. 
The edge convolution operation $EdgeConv$ is denoted by a light blue block, which is used to extract local features around each point in the input set. The feature sets of frontiers and obstacles, which are generated by $EdgeConv$ operations, are denoted by light green and green. The MLP operation is to extract one point's feature by only considering this point's information. After several $EdgeConv$ operations and one MLP operation, a max-pooling operation is applied to generate the global information, which is shown in light yellow.}\n\t\\label{fig:label_network_structure}\n\\end{figure*}\n\n\\subsection{Network with Point Clouds as Input}\n\\label{sec:DQN_architecture}\nIn this section, the architecture of the state-action value network with 4D point-clouds-like information as input is detailed.\nThe architecture is modified from DGCNN in Segmentation task \\cite{wang2019dynamic}, which proposes edge convolution (EdgeConv) operation to extract the local information of point clouds.\nThe EdgeConv operation includes two steps for each point in the input set: (1) construct a local graph including the center point and its k-nearest neighbors; (2) apply convolution-like operations on edges which connect each neighbor to the center point.\nThe $(D_{in}, D_{out})$ EdgeConv operation takes the point set $(N, D_{in})$ as input and outputs the feature set $(N, D_{out})$, where $D_{in}$ and $D_{out}$ denote the dimension of input and output set, and $N$ denotes the number of points in the input set.\nDifferent from DGCNN and other typical networks processing point cloud such as PointNet \\cite{qi2017pointnet}, which have the same input and output point number, our network takes the frontier and obstacle set as input and only outputs the value of points from the frontier set.\nThe reason for this special treatment is to decrease the action space's size to make the network converge in a faster manner.\n\nThe network takes as input $N_f + N_w$ points at time step $t$, which 
includes $N_f$ frontier points and $N_w$ obstacle points, which are denoted as $F_t = \\{ x_1^t, ..., x_{N_f}^t \\}$ and $O_t = \\{ x_{N_f+1}^t, ..., x_{N_f+N_w}^t \\}$ respectively.\nThe output is a vector of estimated value of each action $Q_{\\pi}(F_t, O_t, \\cdot; \\theta)$.\nThe network architecture contains multiple EdgeConv layers and multi-layer perceptron (mlp) layers.\nAt one EdgeConv layer, the feature set is extracted from the input set. \nThen all features in this edge feature set are aggregated to compute the output EdgeConv feature for each corresponding point.\nAt a mlp layer, the data of each point is operated on independently to obtain the information of one point, which is the same as the mlp in PointNet \\cite{qi2017pointnet}.\nAfter 4 EdgeConv layers, the outputs of all EdgeConv layers are aggregated and processed by a mlp layer to generate a 1D global descriptor, which encodes the global information of input points.\nThen this 1D global descriptor is concatenated with the outputs of all EdgeConv layers.\nAfter that, points that belong to the obstacle set are ignored, and the points from the frontier set are processed by three mlp layers to generate scores for each point.\n\n\n\\subsection{Learning framework based on DQN}\nAs described in Section \\ref{sec:DRL_formulation}, the DQN is a neural network that for a given state $X_t$ outputs a vector of action values.\nThe network architecture of DQN is detailed in Section \\ref{sec:DQN_architecture}, where the state $X_t$ in point clouds format contains frontier and obstacle sets.\nUnder a given policy $\\pi(f_t|F_t,O_t)$, the true value of an action $f_t$ in state $X_t = \\{F_t, O_t \\}$ is: $Q_{\\pi}(X_t, f_t; \\theta) \\equiv \\mathbb{E}[\\sum_{i=t}^{\\infty}{\\gamma}^{i-t}r_i | X_t, f_t]$. \nThe target is to make the estimate from the frontier network converge to the true value. 
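Under the double DQN rule discussed earlier (the online network selects the next action, the target network evaluates it), the scalar target for one transition could be computed as in this sketch; `next_q_online` and `next_q_target` are plain lists standing in for the two networks' outputs over the next state's frontier set:

```python
def double_dqn_target(reward, next_q_online, next_q_target,
                      gamma=0.99, done=False):
    """Double-DQN target G_t for one transition.

    The online network (theta) picks the best next frontier, and the
    target network (theta') evaluates that choice, which reduces the
    overestimation of plain DQN.
    """
    if done or not next_q_online:
        return reward  # terminal state or empty frontier set: no bootstrap term
    best = max(range(len(next_q_online)), key=lambda i: next_q_online[i])
    return reward + gamma * next_q_target[best]
```

Because the frontier set changes size between states, both Q-value lists simply have the length of the next state's frontier set; no fixed action dimension is needed.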
\nThe parameters of the action-value function can be updated in a gradient descent way, after taking action $f_t$ in state $X_t$ and observing the next state $X_{t+1}$ and immediate reward $r_{t+1}$: \n\\begin{eqnarray}\\label{update_Q}\n\t\\theta_{t+1} = \\theta_t + \\alpha \\left(G_t - Q_{\\pi}(X_t, f_t; \\theta_t)\\right) \\nabla_{\\theta_t} Q_{\\pi}(X_t, f_t; \\theta_t),\n\\end{eqnarray}\nwhere $\\alpha$ is the learning rate and the target $G_t$ is defined as \n\\begin{eqnarray}\\label{target}\n\tG_t = r_{t+1}+\\gamma Q_{\\pi}(X_{t+1}, \\mathop{\\text{argmax}}_{f}Q(X_{t+1}, f; \\theta_t); \\theta_t^{'}),\n\\end{eqnarray}\nwhere $\\theta^{'}$ denotes the parameters of the target network, which are updated periodically by $\\theta^{'} = \\theta$.\n\nTo make the estimate from our network converge to the true value, a typical DQN framework \\cite{DQN2015} can be adopted. \nAt each step $t$, the tuple $(X_t, f_t, X_{t+1}, r_{t+1})$ is saved in a replay buffer. \nThe parameters can be updated by equations \\ref{update_Q} and \\ref{target} given tuples sampled from the replay buffer.\n\n\nThe reward signal in the DRL framework helps the robot know whether an action $f_t$ is appropriate to take in a certain state $X_t$.\nTo make the robot able to explore unknown environments successfully, we define the following reward function:\n\n\\begin{equation}\\label{reward_all}\nr_t = r_{area}^t + r_{frontier}^t + r_{action}^t.\n\\end{equation}\nThe term $r_{area}^t$ equals the newly discovered area at time $t$, which is designed to encourage the robot to explore unknown areas.\nThe term $r_{action}^t$ provides a consistent penalization signal to the robot when a movement command is taken. 
\nThis reward encourages the robot to explore unknown environments with a relatively short length of overall path.\nThe term $r_{frontier}^t$ is designed to guide the robot to reduce the number of frontiers, as defined in equation \\ref{reward_f}:\n\\begin{equation}\\label{reward_f}\nr_{frontier}^t = \\left\\{\n\\begin{aligned}\n 1, N_{frontier}^t < N_{frontier}^{t-1}\\\\\n 0, N_{frontier}^t \\geq N_{frontier}^{t-1}.\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $N_{frontier}^t$ denotes the number of frontier group at time $t$.\n\n\n\\section{Training Details and Simulation Experiments}\n\\label{sec:exp}\nIn this section, we first detail the training process of our DRL agent. \nThen we validate the feasibility of our proposed method in robot exploration problem by two sets of experiments:\n(1) a comparison with five other exploration methods, (2) a test of scalability where maps of larger size compared with training set are to be explored.\n\n\\subsection{Training Details}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.3]{.\/imgs\/HouseExpo_traindata25.png}\n\t\\caption{Some map samples from training set. The black and white pixels represent obstacle and free space respectively.}\n\t\\label{fig:label_mapsample}\n\\end{figure}\n\nTo learn a general exploration strategy, the robot is trained in the $HouseExpo$ environment where 100 maps with different shapes and features are to be explored.\nThe robot is equipped with a 2m range laser scanner with a 180 degree field of view. 
\nThe noise of laser range measurement is simulated by the Gaussian distribution with a mean of 0 and a standard deviation of 0.02m.\nAs an episode starts, a map is randomly selected from the training set, and the robot start pose, including the location and pose, is also set randomly.\nIn the beginning, the circle area centered at the start location is explored by making the robot rotate for 360 degree.\nThen at each step, a goal frontier point is selected from the frontier set under the policy of our proposed method.\nA* algorithm is applied to find a feasible path connecting the current robot location to the goal frontier. \nA simple path tracking algorithm is used to find the robot commands to follow the planned path: moving to the nearest unvisited path point $p_{near}$, and replanning the path if the distance between the robot and $p_{near}$ is larger than a fixed threshold.\nAn episode ends if the explored ratio of the whole map is over ${95\\%}$.\n\n\\begin{table}[!htbp]\n\\caption{Parameters in Training}\n\\centering\n\\begin{tabular}{cccc}\n \\toprule\n \\multicolumn{2}{c}{HouseExpo} & \\multicolumn{2}{c}{Training} \\\\\n \\hline\n Laser range & $2m$ & Discount factor $\\gamma$ & 0.99 \\\\\n \\hline\n Laser field of view & $180^{\\circ}$ & Target network update f & 4000 steps \\\\\n \\hline\n Robot radius & $0.15m$ & Learning rate & 0.001 \\\\\n \\hline\n Linear step length & $0.3m$ & Replay buffer size & 50000 \\\\\n \\hline\n Angular step length & $15^{\\circ}$ & Exploration rate $\\epsilon$ & 15000\\\\\n \\hline\n Meter2pixel & $16$ & Learning starts & 3000\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nSome training map samples are shown in Fig. \\ref{fig:label_mapsample}. 
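The simple path tracking rule described above (head to the nearest unvisited path point and replan when too far from it) could be sketched as follows; the function name, data layout and default threshold are illustrative, not values from the paper:

```python
import math

def track_path(robot_xy, path, visited, replan_threshold=1.0):
    """Return the next waypoint to head for, or 'replan' if the robot has
    drifted more than `replan_threshold` from the nearest unvisited
    waypoint (threshold value is illustrative), or None when done."""
    unvisited = [p for p in path if p not in visited]
    if not unvisited:
        return None  # whole path consumed
    nearest = min(unvisited, key=lambda p: math.dist(robot_xy, p))
    if math.dist(robot_xy, nearest) > replan_threshold:
        return "replan"  # trigger a new A* query
    return nearest
```

In the training loop, a `"replan"` result would correspond to re-running A* from the current robot pose to the goal frontier.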
\nThe largest size of a map in the training set is 256 by 256.\nBecause the size of the state information $X_t$ changes at each time step, while batch updates require point clouds of the same size, it is currently impractical to train the model in a batch-update fashion.\nIn a typical point-cloud classification training process, all point clouds are pre-processed to the same size.\nHowever, such operations would alter the original data's spatial information in our method.\nInstead, for each step, the network parameters are updated 32 times with a batch size equal to 1.\nThe learning parameters are listed in Table 1. \nThe training process is performed on a computer with an Intel i7-7820X CPU and a GeForce GTX 1080Ti GPU.\nThe training starts at 2000 steps and ends after 90000 update steps, which takes 72 hours.\n\n\\subsection{Comparison Study}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.65]{imgs\/HouseExpo_test_data.png}\n\t\\caption{Four maps for testing. The sizes of map1, map2, map3 and map4 are (234, 191), (231, 200), (255, 174) and (235, 174), respectively. }\n\t\\label{fig:test_data}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.34]{imgs\/comparision.png}\n\t\\caption{The path-length data of each method on the four test maps. For each map, each method is tested 100 times with the same set of randomly initialized robot start locations.}\n\t\\label{fig:length_comparision}\n\\end{figure}\n\nBesides our proposed method, we also test the performance of the weight tuning method in \\cite{Weight2019}, a cost-based method in \\cite{22_gonzalez2002navigation}, a method with a greedy strategy in \\cite{21_yamauchi1997frontier}, a method utilizing a random policy, and the baseline in \\cite{li2019houseexpo}, which we denote as the weight method, cost method, greedy method, random method and baseline, respectively.\nTo compare the performance of different methods, we use 4 maps as a testing set, as shown in Fig. 
\\ref{fig:test_data}.\nFor each test map, we conduct 100 trials, with the robot's initial location set randomly in each trial.\n\nThe \\emph{random method} selects a frontier point from the frontier set randomly.\nThe \\emph{greedy method} chooses the nearest frontier point.\nThe \\emph{baseline} utilizes a CNN which directly determines the robot movement commands from the current local observation.\n\nThe \\emph{cost method} evaluates each frontier by a cost function that combines distance and information gain: \n\\begin{equation}\\label{cost_method}\ncost = wd+(1-w)(1-g),\n\\end{equation}\nwhere $w$ is the weight that adjusts the relative importance of the two terms. \n$d$ and $g$ denote the normalized distance and information gain of a frontier.\nAt each step, after obtaining the frontier set as detailed in Section \\ref{sec:frontier_dectect}, the k-means method is adopted to cluster the points in the frontier set to find frontier centers. \nTo reduce the runtime of computing information gains, we only compute the information gain for each frontier center.\nThe information gain is computed as the area of the unknown space that would be explored if this frontier center were selected \\cite{22_gonzalez2002navigation}.\nThe weight in the cost method is fixed at 0.5, which is not optimal across environments with different structures and features.\n\nThe \\emph{weight method} can learn to adjust the value of the weight in equation \\ref{cost_method} under the same training framework as our proposed method.\nThe structure of the neural network in the weight method is presented in Fig. \\ref{fig:label_network_structure}, which takes the 4D point-clouds-like information as input and outputs a scalar value.\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.9]{imgs\/comparision_result2.png}\n\t\\caption{Representative exploration trials of the six methods on map1 with the same robot start location and pose. 
The bright red denotes the start state and the dark red denotes the end state. The green points denote the locations where the robot decided which frontier to explore next. As the step count increases, the green points become darker. The baseline's result has no green points because its actions are robot movement commands.}\n\t\\label{fig:map_compare}\n\\end{figure}\n\nWe select the length of the overall path, recorded once $95\\%$ of the area is explored, as the metric to evaluate the relative performance of the six exploration methods. \nThis length indicates the efficiency of an exploration method.\nThe box plot in Fig. \\ref{fig:length_comparision} visualizes the path-length data of each exploration method on the four test maps. \nThe baseline is not considered here because it sometimes fails to explore the whole environment, as explained later. \nThree statistics of this metric are used to analyze the experimental results: (1) the average, (2) the minimum, and (3) the variance.\nOur proposed method has the smallest average path length on all four test maps.\nIts minimum path length on each map is also smaller than that of the other five methods.\nThis indicates that the exploration strategy of our proposed method is more effective and efficient than the other five methods.\nThe random method has the largest variance and average value on each map because it is more likely to oscillate between two or more frontier groups. \nThat is why the overall path of the random method in Fig. \\ref{fig:map_compare} is the most disorganized.\nThe weight method has a lower average and minimum value than the cost method on all test maps, owing to the advantage of learning from rich data.\nHowever, the weight method only adjusts a single weight that changes the relative importance of distance and information gain. 
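As a concrete illustration of how such a cost could be evaluated, the sketch below normalizes distance and information gain over the current frontier centers and picks the cheapest one; the data layout and max-normalization are assumptions for illustration, not the paper's implementation:

```python
def select_frontier(frontiers, w=0.5):
    """Pick the frontier center minimizing cost = w*d + (1-w)*(1-g),
    where d and g are the distance and information gain normalized to [0,1].
    `frontiers` maps each center id to its raw (distance, info_gain) pair."""
    max_d = max(d for d, _ in frontiers.values()) or 1.0
    max_g = max(g for _, g in frontiers.values()) or 1.0

    def cost(item):
        d, g = item[1]
        return w * (d / max_d) + (1 - w) * (1 - g / max_g)

    return min(frontiers.items(), key=cost)[0]
```

With `w = 0.5` a close frontier with high gain dominates; sweeping `w` toward 1 reduces the method to the greedy (nearest-frontier) strategy.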
\nIf the computed information gain is inaccurate, or other relevant costs exist, the weight method cannot fully exploit the advantages of learning.\nIn contrast, our proposed method can learn useful information, including information gain, by learning to estimate the value of frontiers, which is why it outperforms the weight method.\n\nFig. \\ref{fig:map_compare} shows an example of the overall paths of the six different methods when exploring map1 with the same starting pose.\nEach episode ended once the explored ratio exceeded 0.95.\nThe proposed method explored the environment with the shortest path.\nThe explored ratio of the total map as a function of the current path length is shown in Fig. \\ref{fig:label_explored_ratio}.\nThe greedy method's curve has a horizontal segment when the explored ratio is near 0.95. \nThis is because the greedy method missed a nearby frontier that would have disappeared at once if selected. \nInstead, the greedy method chose the nearest frontier, which forced the robot to travel back to the missed frontier later and resulted in a longer path.\nFor example, in Fig. \\ref{fig:map_compare}, the greedy method chose to explore point C instead of point A when the robot was at point B. This decision forced the robot to travel to point A later, lengthening the overall path.\nThe baseline's curve also exhibits a horizontal segment in Fig. \\ref{fig:label_explored_ratio}, which is quite long.\nThe baseline's local observation contained no frontier information when the surroundings were all explored (in the left part of the environment shown in Fig. \\ref{fig:map_compare}).\nTherefore, in this situation, the baseline could only take ``random'' actions (i.e. 
travelling along the wall) to find the remaining frontiers, which wastes considerable travel distance and may fail to explore the whole environment.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.38]{imgs\/explored_ratio.png}\n\t\\caption{The ratio of the explored area to the area of the whole map with respect to the current path length. The test map is map1 and the start location is the same as in Fig. \\ref{fig:map_compare}.}\n\t\\label{fig:label_explored_ratio}\n\n\\end{figure}\n\n\\subsection{Scalability Study}\n\nIn this section, a map of size (531, 201) is used to test the performance of our proposed method in environments larger than the maps in the training set. \nIf the network's input is a fixed-size image, the map needs to be padded into a (531, 531) image, which is an inefficient state representation. \nThe image then needs to be downscaled to the input size required by the neural network, e.g., (256, 256).\nAlthough the neural network can process the state data after downscaling, the quality of the input data decreases.\nTherefore, the network fails to work once the scaled input contains much less necessary information than the original image.\nFig. 
\\ref{fig:label_scalabilityTest} presents the overall path generated by our method without downscaling the map size.\nOur proposed method, which takes point clouds as input, is more robust on large-scale maps for the following two reasons.\nFirst, we incorporate the distance information into the point clouds, which can help the neural network learn which parts of the point cloud are walkable.\nAlthough the image representation can also have a fourth channel as distance information, the scaling operation can make some important obstacle or free points disappear, which changes the structure of the map.\nSecond, the number of pixels in an image grows quadratically with the image's side length.\nThe number of points in the point cloud equals the number of pixels that represent an obstacle or frontier in the map, which grows much more slowly unless nearly all the pixels in the map are obstacles or frontiers.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=1.2]{.\/imgs\/path_1998.png}\n\t\\caption{The result of the scalability test. The map size is (531, 201). The meaning of the points' colors is the same as in Fig. 
\\ref{fig:map_compare}}\n\t\\label{fig:label_scalabilityTest}\n\\end{figure}\n\n\\section{Conclusions And Future Work}\n\\label{sec:conclude}\n\n\n\nIn this paper, we present a novel state representation method using 4D point-clouds-like information and design a framework to learn an efficient exploration strategy.\nOur proposed method avoids the problems that come with using images as observations.\nThe experiments demonstrate the effectiveness of our proposed method compared with five other commonly used methods.\nIn future work, other network structures and RL algorithms can be adapted and applied to the robot exploration problem with point clouds as input.\nThe convergence speed of training may also be improved by refining the training techniques.\nMoreover, the multi-robot exploration problem may also use point clouds to represent the global information. \n\n\n\\addtolength{\\textheight}{-8cm} \n \n \n \n \n \n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:INTRO}\nRelational Reinforcement Learning (RRL) has been investigated since the early 2000s in works such as \\cite{bryant1999combining,dvzeroski1998relational,dvzeroski2001relational}, among others. \nThe main idea behind RRL is to describe the environment in terms of objects and relations. \nOne of the first practical implementations of this idea was proposed by \\cite{dvzeroski1998relational} and later improved in \\cite{dvzeroski2001relational}, based on a modification of the Q-Learning algorithm \\cite{watkins1992q} via the standard relational tree-learning algorithm TILDE \\cite{blockeel1998top}. As shown in \\cite{dvzeroski2001relational}, the RRL system allows for very natural and human-readable decision making and policy evaluation. More importantly, the use of variables in the ILP system makes it possible to learn generally formed policies and strategies. 
Since these policies and actions are not directly associated with any particular instance or entity, this approach leads to a generalization capability beyond what is possible in most typical RL systems.\nGenerally speaking, the RRL framework offers several benefits over traditional RL: (i) The learned policy is usually human-interpretable, and hence can be viewed, verified and even tweaked by an expert observer. (ii) The learned program can generalize better than its classical RL counterpart. (iii) Since the language for the state representation is chosen by the expert, it is possible to incorporate inductive biases into learning. This can be a significant improvement in complex problems, as it might be used to guide the agent toward certain actions without accessing the reward function. (iv) It allows for the incorporation of higher-level concepts and prior background knowledge. \n\nIn recent years, with the advent of new deep learning techniques, significant progress has been made over the classical Q-learning RL framework. By using algorithms such as deep Q-learning and its variants \\cite{mnih2013playing,van2016deep}, as well as policy learning algorithms such as A2C and A3C \\cite{mnih2016asynchronous}, much more complex problems are now being tackled. However, the classical RRL framework cannot be easily employed to tackle the large-scale and complex scenes that exist in recent RL problems. \nIn particular, none of the inherent benefits of RRL have been materialized in deep learning frameworks thus far. 
This is because existing RRL frameworks are usually not designed to learn from complex visual scenes and cannot be easily combined with differentiable deep neural networks.\nIn~\\cite{payani2019Learning}, a novel ILP solver was introduced which uses the Neural-Logical Network (NLN)~\\cite{payani2018} for constructing a differentiable neural-logic ILP solver (dNL-ILP). The key aspect of this dNL-ILP solver is a differentiable deduction engine, which is at the core of the proposed RRL framework. \nAs such, the resulting differentiable RRL framework can be used, similar to deep RL, in an end-to-end learning paradigm trainable via typical gradient optimizers. Further, in contrast to the early RRL frameworks, this framework is flexible and can learn from ambiguous and fuzzy information. Finally, it can be combined with deep learning techniques such as CNNs to extract relational information from visual scenes. \nIn the next section we briefly introduce the differentiable dNL-ILP solver. In Section \\ref{sec:RRL}, we show how this framework can be used to design a differentiable RRL framework. Experiments are presented next, followed by the conclusion.\n\n\\section{Differentiable ILP via neural logic networks}\n\\label{sec:dNL-ILP}\n \nIn this section, we briefly present the basic design of the differentiable dNL-ILP, which is at the core of the proposed RRL. A more detailed presentation of dNL-ILP can be found in \\cite{payani2019Learning}.\nLogic programming is a paradigm in which we use formal logic (usually first-order logic) to describe relations between facts and rules of a program domain. In logic programming, rules are usually written as clauses of the form $H \\leftarrow B_1,\\,B_2,\\,\\dots,\\,B_m$, \nwhere $H$ is called the \\texttt{head} of the clause and $B_1,\\,B_2,\\,\\dots,\\,B_m$ the \\texttt{body} of the clause. 
A clause of this form expresses that if all the atoms $B_i$ in the \\texttt{body} are true, the \\texttt{head} is necessarily true.\nEach of the terms $H$ and $B$ is made of \\texttt{atoms}. Each \\texttt{atom} is created by applying an $n$-ary Boolean function called \\texttt{predicate} to some constants or variables. A \\texttt{predicate} states the relation between some variables or constants in the logic program. We use lowercase letters for constants (instances) and uppercase letters for variables. \nTo avoid technical details, we consider a simple logic program. Assume that a directed graph is defined using a series of facts in the form of \\texttt{edge(X,Y)} where for example \\texttt{edge(a,b)} states that there is an edge from node \\texttt{a} to the node \\texttt{b}. As an example, the graph in Fig. \\ref{fig:connected_graph} can be represented as \\texttt{\\{edge(a,b), edge(b,c), edge(c,d), edge(d,b)\\}}.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.2\\textwidth]{cnt.png}\n\t\\caption{Connected graph example}\n\t\\label{fig:connected_graph}\n\t\\vspace{-.16in}\n\\end{figure}\nAssume that our task is to learn the \\texttt{cnt(X,Y)} predicate from a series of examples, where \\texttt{cnt(X,Y)} is true if there is a directed path from node \\texttt{X} to node \\texttt{Y}. The set of positive examples in graph depicted in Fig. \\ref{fig:connected_graph} is $\\mathcal{P}=$ \\texttt{\\{cnt(a,b), cnt(a,c), cnt(a,d), cnt(b,b),cnt(b,c), cnt(b,d),\\dots\\}}. 
Similarly, the set of negative examples $\\mathcal{N}$ includes atoms such as \\texttt{\\{cnt(a,a),cnt(b,a),\\dots\\}}.\n\nIt is easy to verify that the predicate \\texttt{cnt} defined as below satisfies all the provided examples (entails the positive examples and rejects the negative ones):\n\\begin{align}\n\\text{cnt(X,Y)} &\\leftarrow \\text{edge(X,Y)} \\nonumber\\\\\n\\text{cnt(X,Y)} &\\leftarrow \\text{edge(X,Z),\\,\\,cnt(Z,Y)}\n\\label{eq:cnt}\n\\end{align}\nIn fact, by applying each of the above two rules to the constants in the program, we can produce all the consequences of such a hypothesis.\nIf we allow for formulas with 3 variables (\\texttt{X,Y,Z}) as in (\\ref{eq:cnt}), we can easily enumerate all the possible symbolic atoms that could be used in the body of each clause. In our working example, this corresponds to $\\mathbb{I}_{cnt}=$\\texttt{\\{edge(X,X), edge(X,Y), edge(X,Z), \\dots, cnt(Z,Y), cnt(Z,Z)\\}}. \nAs the size of the problem grows, considering all the possibilities becomes infeasible. Consequently, almost all ILP systems use some form of rule template to reduce the possible combinations. For example, the dILP \\cite{evans2018learning} model allows for at most two atoms in the body of each clause. \nIn \\cite{payani2019Learning}, a novel approach was introduced to alleviate the above limitation and to allow for learning arbitrarily complex predicate formulas. The main idea behind this approach is to use multiplicative neurons \\cite{payani2018} that are capable of learning and representing Boolean logic. Consider the fuzzy notion of Boolean algebra where fuzzy Boolean values are represented as real values in the range $[0,1]$, where True and False are represented by 1 and 0, respectively. Let $\\bar{x}$ be the logical `NOT' of $x$. 
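As a sanity check, the two clauses defining \texttt{cnt} can be executed by naive forward chaining over the example graph. This plain Python sketch mimics the deduction only; it is not the differentiable dNL-ILP machinery:

```python
def forward_chain_cnt(edges):
    """Naive forward chaining for:
       cnt(X,Y) <- edge(X,Y)
       cnt(X,Y) <- edge(X,Z), cnt(Z,Y)"""
    cnt = set(edges)  # first clause: every edge is a path
    changed = True
    while changed:  # apply the recursive clause until fixpoint
        changed = False
        for (x, z) in edges:
            for (z2, y) in list(cnt):
                if z == z2 and (x, y) not in cnt:
                    cnt.add((x, y))
                    changed = True
    return cnt

# the graph of Fig. 1: edge(a,b), edge(b,c), edge(c,d), edge(d,b)
graph = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "b")}
```

Running it reproduces exactly the positive examples listed above (e.g. \texttt{cnt(a,d)} and \texttt{cnt(b,b)}) while never deriving the negative ones such as \texttt{cnt(b,a)}.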
\n\\begin{figure}[tb]\n\t\\centering\n\t\\subfloat[][]{\n\t\t\\small\n\t\t\\vspace{-5mm}\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline \n\t\t\t$x_i$ & $m_i$ & $F_c$ \\\\ \t\\toprule \n\t\t\t0 & 0 & 1 \\\\ \\hline\n\t\t\t0 & 1 & 0 \\\\ \\hline\n\t\t\t1 & 0 & 1 \\\\ \\hline\n\t\t\t1 & 1 & 1 \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\label{fig:Fc}%\n\t}%\n\t\\qquad\n\t\\subfloat[][]{\n\t\t\\small\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline\t \n\t\t\t$x_i$ & $m_i$ & $F_d$ \\\\ \t\\toprule\n\t\t\t0 & 0 & 0 \\\\ \\hline\n\t\t\t0 & 1 & 0 \\\\ \\hline\n\t\t\t1 & 0 & 0 \\\\ \\hline\n\t\t\t1 & 1 & 1 \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\label{fig:Fd}%\n\t}\n\t\\caption{Truth tables of the $F_c(\\cdot)$ and $F_d(\\cdot)$ functions}%\n\t\\label{fig:FcFd}%\n\\end{figure}\nLet $\\boldsymbol{x}^n \\in \\{0,1\\}^n$ be the input vector for a logical neuron. We can associate a trainable Boolean membership weight $m_i$ to each input element $x_i$ from vector $\\boldsymbol{x}^n$. Consider the Boolean function $F_c(x_i,m_i)$ with the truth table as in Fig. \\ref{fig:Fc}, which is able to include (exclude) each element $x_i$ in (out of) the conjunction function $f_{conj}(\\boldsymbol{x}^n)$. This design ensures the incorporation of each element $x_i$ in the conjunction function only when the corresponding membership weight $m_i$ is $1$. Consequently, the neural conjunction function $f_{conj}$ can be defined as:\n\\vspace{-2mm}\n\\begin{align}\n\\label{eq:conj}\nf_{conj}(\\boldsymbol{x}^n) &= \\prod_{i=1}^{n} F_c(x_i,m_i) \\nonumber \\\\\n\\text{where, } \\quad F_c(x_i,m_i) &= \\overline{\\overline{x_i} m_i } = 1 - m_i ( 1 - x_i) \n\\end{align}\nLikewise, a neural disjunction function $f_{disj}(\\boldsymbol{x}^n) $ can be defined using the auxiliary function $F_d$ with the truth table as in Fig. ~\\ref{fig:Fd}. 
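A minimal NumPy sketch of these two fuzzy gates follows; here the membership weights are plain values in $[0,1]$, whereas in dNL they would be sigmoids of trainable weights:

```python
import numpy as np

def f_conj(x, m):
    """Neural conjunction: product over F_c(x_i, m_i) = 1 - m_i * (1 - x_i)."""
    return np.prod(1.0 - m * (1.0 - x))

def f_disj(x, m):
    """Neural disjunction: with F_d(x_i, m_i) = x_i * m_i, an OR is built
    via De Morgan's law as 1 - prod(1 - F_d)."""
    return 1.0 - np.prod(1.0 - m * x)
```

Inputs with membership weight 0 are ignored by both gates, matching the truth tables of $F_c$ and $F_d$.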
\nBy cascading a layer of $N$ neural conjunction functions with a layer of $N$ neural disjunction functions, we can construct a differentiable function to be used for representing and learning a Boolean Disjunctive Normal Form (DNF). \n\ndNL-ILP employs these differentiable Boolean functions (e.g. dNL-DNF) to represent and learn predicate functions. Each dNL function can be seen as a parameterized symbolic formula where the (fuzzy) contribution of each symbol (atom) in the learned hypothesis is controlled by the trainable membership weights (e.g., $w_i$ where $m_i = sigmoid(w_i)$). If we start from the background facts (e.g., all the groundings of predicate \\texttt{edge(X,Y)} in the graph example) and apply the parameterized hypothesis, we arrive at some new consequences (i.e., forward chaining). After repeating this process to obtain all possible consequences, we can update the parameters in dNL by minimizing the cross-entropy between the desired outcome (the provided positive and negative examples) and the deduced consequences. \n\nAn ILP description of a problem in this framework consists of the following elements:\n\\begin{enumerate}\n\t\\item The set of constants in the program. In the example of Fig. \\ref{fig:connected_graph}, this consists of $\\mathcal{C}=$\\texttt{\\{a,b,c,d\\}}\n\t\\item The set of background facts. In the graph example above, this consists of the groundings of predicate \\texttt{edge(X,Y)}, i.e., $\\mathcal{B}=$\\texttt{\\{edge(a,b), edge(b,c), edge(c,d), edge(d,b)\\}}\n\t\n\t\\item The definition of auxiliary predicates. In the simple graph example we did not include any auxiliary predicates. However, in more complex examples they can greatly reduce the complexity of the problem.\n\t\n\t\\item The signature of the target hypothesis. In the graph example, this signature indicates that the target hypothesis is the 2-ary predicate \\texttt{cnt(X,Y)} and that in the symbolic representation of this Boolean function we are allowed to use three variables \\texttt{X,Y,Z}. 
\n\t\n\n\n\t\n\\end{enumerate}\nIn addition to the aforementioned elements, some parameters, such as the initial values of the membership weights ($m_i=sigmoid(w_i)$) and the number of forward-chaining steps, should be provided. Furthermore, in dNL-ILP the memberships are fuzzy Boolean values between 0 and 1. As shown in \\cite{payani2019Learning}, for ambiguous problems where a definite Boolean hypothesis satisfying all the examples may not be found, there is no guarantee that the membership weights converge to 0 or 1. In applications where our only goal is to find a successful hypothesis, this result is satisfactory. However, if the interpretability of the learned hypothesis is by itself a goal in learning, we may need to encourage the membership weights to converge to 0 and 1 by adding a penalty term:\n\\begin{equation}\n\\text{interpretability penalty} \\propto m_i(1-m_i)\\label{eq:interpret}\n\\end{equation}\n\n\\section{Relational Reinforcement Learning via dNL-ILP}\n\\label{sec:RRL}\n \n\nEarly works on RRL \\cite{dvzeroski2001relational,van2005survey} mainly relied on access to the explicit representation of states and actions in terms of a relational predicate language. In the most successful instances of these approaches, a regression tree algorithm is usually used in combination with a modified version of the Q-Learning algorithm. \nThe fundamental limitation of the traditional RRL approaches is that the employed ILP solvers are not differentiable. Therefore, those approaches are typically only applicable to problems for which an explicit relational representation of states and actions is provided. Meanwhile, deep RL models, owing to recent advancements in deep networks, have been successfully applied to much more complex problems. These models are able to learn from raw images without relying on any access to the explicit representation of the scene. 
However, the existing RRL counterparts have fallen behind these desirable developments in deep RL. \n\nIn this paper, we establish that differentiable dNL-ILP provides a platform to combine RRL with deep learning methods, constructing a new RRL framework with the best of both worlds. This new RRL system allows the model to learn from the complex visual information received from the environment and to extract intermediate explicit relational representations from the raw images by using typical deep learning models such as convolutional networks. \n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.85\\textwidth]{boxworld_states2.png}\n\t\\caption{State representation in the form of predicates in the BoxWorld game, before and after an action}\n\t\\label{fig:boxworld}\n\\end{figure*}\nAlthough dNL-ILP can also be used to formulate RL algorithms such as deep Q-learning, we focus only on the deep policy gradient learning algorithm. This formulation is very desirable because it makes the learned policy interpretable by humans. \nAnother advantage of using policy gradient in our RRL framework is that it enables us to restrict actions according to rules obtained either from human preferences or from problem requirements. This in turn makes it possible to account for human preferences or to avoid certain pitfalls, e.g., as in safe AI.\n\nIn our RRL framework, although we use the generic formulation of the policy gradient with the ability to learn a stochastic policy, certain key aspects differ from traditional deep policy gradient methods, namely state representation, language bias and action representation. In the following, we explain these concepts in the context of the BoxWorld game. In this game, the agent's task is to learn how to stack the boxes on top of each other (in a certain order). 
For illustration, consider the simplified version of the game as in Fig.~\\ref{fig:boxworld} where there are only three boxes labeled as \\texttt{a,b}, and \\texttt{c}. A box can be on top of another or on the \\texttt{floor}. A box can be moved if it is not covered by another box, and it can be placed either on the floor or on top of another uncovered box. For this game, the environment state can be fully explained via the predicate \\texttt{on(X,Y)}. Fig. \\ref{fig:boxworld} shows the state representation of the scene before and after an action (indicated by the predicate \\texttt{move(c,b)}). \nFig.~\\ref{fig:ilp_rrl_diag} displays the overall design of our proposed RRL framework. In the following, we discuss each distinct element of this RRL system using the BoxWorld environment.\n\n\\begin{figure*}[h]\n\t\\centering\n\t\\includegraphics[width=.95\\textwidth]{.\/rrl_diag.png}\n\t\\vspace{-5mm}\n\t\\caption{Learning explicit relational information from images in our proposed RRL; images are processed to obtain an explicit representation, and the dNL-ILP engine learns and expresses the desired policy (actions)}\n\t\\label{fig:ilp_rrl_diag}\n\\end{figure*}\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{.\/language_state6.png}\n\n\n\t\\vspace{-10mm}\n\t\\caption{Transforming low-level state representation to high-level form via auxiliary predicates}\n\t\\label{fig:ilp_rrl_state}\n\\end{figure*}\n\\begin{figure*} \n\t\\centering \n\t\\subfloat[A sample from the CLEVR dataset]{\n\t\t\\includegraphics[width=.55\\textwidth]{.\/clevr.png}\n\t\n\t}\n\t\\subfloat[A sample from the Sort-of-CLEVR dataset]{\n\t\t\\includegraphics[width=.28\\textwidth]{.\/sclevr.png}\n\t\t\\label{subfig:sortofclever}\n\t\n\t}\n\t\\vspace{-2mm}\n\t\\caption{Extracting relational information from a visual scene \\cite{santoro2017simple}}\n\t\\label{fig:ilp_RELATIONAL}\n\t\n\\end{figure*}\n\\subsection{State 
Representation}\n\\label{subsec:STATE-REP}\nIn the previous approaches to the RRL \\cite{dvzeroski1998relational,dvzeroski2001relational,jiang2019neural}, state of the environment is expressed in an explicit relational format in the form of predicate logic. This significantly limits the applicability of RRL in complex environments where such representations are not available. Our goal in this section is to develop a method in which the explicit representation of states can be learned via typical deep learning techniques in a form that will support the policy learning via our differentiable dNL-ILP. \nAs a result, we can utilize the various benefits of the RRL discipline without being restricted only to the environments with explicitly represented states.\n\nFor example, consider the BoxWorld environment explained earlier where the predicate\n\\texttt{on(X,Y)} is used to represent the state explicitly in the relational form (as shown in Fig.\\ref{fig:boxworld}). \nPast works in RRL relied on access to explicit relational representation of states, i.e.,\nall the groundings of the state representation predicates. \nSince this example has 4 constants, i.e. $\\mathcal{C}=$\\texttt{\\{a,b,c,floor\\}}, these groundings would be the binary values (`true' or `false') for the atoms \\texttt{on(a,a), on(a,b), on(a,c), on(a,floor), \\dots, on(floor,floor)}. \nIn recent years, extracting relational information from visual scenes has been investigated.\nFig. \\ref{fig:ilp_RELATIONAL} shows two types of relational representation extracted from images in~\\cite{santoro2017simple}. \nThe idea is to first process the images through multiple CNN networks. The last layer of the convolutional network chain is treated as the feature vector and is usually augmented with some non-local information such as absolute position of each point in the final layer of the CNN network. 
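A common way to add such absolute-position information is to append normalized coordinate channels to the CNN feature map. The sketch below illustrates this generic technique; it is an assumption for illustration, not the paper's exact architecture:

```python
import numpy as np

def append_coords(feature_map):
    """Append two channels holding each cell's normalized (x, y) position.
    feature_map: array of shape (H, W, C) -> returns (H, W, C + 2)."""
    h, w, _ = feature_map.shape
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                         indexing="ij")
    return np.concatenate([feature_map, xs[..., None], ys[..., None]],
                          axis=-1)
```

After this step, a relational unit operating on per-cell feature vectors can distinguish otherwise identical local patches by their location in the scene.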
This feature map is then fed into a relational learning unit which is tasked with extracting non-local features.\nVarious techniques have since been introduced for learning this non-local information from the local feature maps, namely self-attention network models \\cite{vaswani2017attention,santoro2017simple} as well as graph networks \\cite{narayanan2017graph2vec,allamanis2017learning}. Unfortunately, none of the resulting representations from past works is in the form of the predicates needed in ILP.\n\n\nIn our approach, we use networks similar to those discussed earlier to extract non-local information. However, given the relational nature of the state representation in our RRL model, we consider three strategies to facilitate learning the desired relational state from images, namely:\n\\begin{enumerate}\n\t\\item \\textbf{Finding a suitable state representation:} In our BoxWorld example, we used the predicate \\texttt{on(X,Y)} to represent the state of the environment. However, learning this predicate requires inferring relations among various objects in the scene. As shown by previous works (e.g., \\cite{santoro2017simple}), this is a difficult task even in the context of a fully supervised setting (i.e., all the labels are provided), which is not applicable here. Alternatively, we propose to use lower-level relations for the state representation and build higher-level representations via the predicate language. In the game of BoxWorld, for example, we can describe states by the respective position of each box. In particular, we define two predicates \\texttt{posH(X,Y)} and \\texttt{posV(X,Y)} such that variable $X$ is associated with the individual box, whereas $Y$ indicates the horizontal or vertical coordinate of the box, respectively. 
Fig.~\\ref{fig:ilp_rrl_state} shows how these new lower-level representations can be transformed into the higher-level description by an appropriate predicate language: \n\t\\begin{align}\n\t\\text{on(X, Y)} &\\leftarrow \\text{posV(X, Z)}, \\text{posV(Y, T)}, \\nonumber\\\\ &\\text{inc(T, Z)}, \\text{sameH(X, Y)} \\nonumber\\\\\n\t\\text{sameH(X, Y)} &\\leftarrow \\text{posH(X, Z)}, \\text{posH(Y, Z)}\n\t\\label{eq:on}\n\t\\end{align}\n\t\\item \\textbf{State constraints:} When applicable, we may incorporate relational constraints in the form of a penalty term in the loss function. For example, in our BoxWorld example we can notice that \\texttt{posV(floor,0)} should always be true. In general, the choice of relational language makes it possible to pose constraints based on our knowledge regarding the scene. Enforcing these constraints does not necessarily speed up the learning, as we will show in the BoxWorld experiment in Section \\ref{subsec:BoxWorld}. However, it will ensure that the (learned) state representation, and consequently the learned relational policy, resemble our desired structure of the problem.\n\t\\item \\textbf{Semi-supervised setting:} While it is not desirable to label every single scene that may occur during learning, in most cases it is possible to provide a few labeled scenes to help the model learn the desired state representation faster. These reference points can then be incorporated into the loss function to encourage the network to learn a representation that matches the provided labeled scenes. We have used a similar approach in the Asterix experiment (see Appendix \\ref{app:asterix}) to significantly increase the speed of learning.\n\\end{enumerate}\n\n\n\n\\subsection{Action Representation}\n\\label{subsec:ACTION-REP}\nWe formulate the policy gradient in a form that allows the learning of actions via one (or multiple) target predicates. 
These predicates exploit the background facts, the state representation predicates, as well as auxiliary predicates to incorporate higher-level concepts. \nIn typical deep policy gradient (DPG) learning, the probability distribution over actions is usually learned by applying a multilayer perceptron with a \\texttt{softmax} activation function in the last layer. In our proposed RRL, the action probability distributions can usually be directly associated with groundings of an appropriate predicate. For example, in the BoxWorld example in Fig.~\\ref{fig:boxworld}, we define a predicate \\texttt{move(A,B)} and associate the actions of the agent with the groundings of this predicate. In an ideal case, where there is a deterministic solution to the RRL problem, the predicate \\texttt{move(A,B)} may be learned in such a way that, at each state, only the grounding corresponding to the correct action would evaluate to 1 ('true') and all the other groundings of this predicate become 0. In such a scenario, the agent will follow the learned logic deterministically. Alternatively, we may get more than one grounding with value equal to 1, or we may get fuzzy values in the range $[0,1]$. \nIn those cases, we estimate the probability distribution over actions, similar to standard deep policy learning, by applying a \\texttt{softmax} function to the valuation vector of the learned predicate \\texttt{move} (i.e., the values of \\texttt{move(X,Y)} for \\texttt{X,Y}$\\in$ \\texttt{\\{a,b,c,floor\\}}). \n\n\\section{Experiments}\n\\label{sec:EXPERIMENTS}\nIn this section, we explore the features of the proposed RRL framework via several examples. 
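As a concrete illustration of the mapping just described, here is a minimal NumPy sketch (the function name and array shapes are ours, not part of the paper's implementation) that turns fuzzy groundings of \texttt{move(X,Y)} into an action distribution via a scaled \texttt{softmax}:

```python
import numpy as np

def action_distribution(move_valuations, c=10.0):
    """Turn fuzzy groundings of move(X,Y) in [0, 1] into action
    probabilities: scale by a large constant c and apply a softmax."""
    logits = c * np.asarray(move_valuations, dtype=float).ravel()
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

# 4 constants {a, b, c, floor} -> 16 groundings of move(X,Y)
valuations = np.zeros((4, 4))
valuations[0, 1] = 1.0  # only move(a, b) is 'true'
probs = action_distribution(valuations)
```

In the ideal deterministic case (a single grounding equal to 1), the resulting distribution concentrates almost all mass on the corresponding action, so the agent follows the learned logic nearly deterministically.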
We have implemented\\footnote{The Python implementation of the algorithms in this paper\n\tis available at \\url{https:\/\/github.com\/dnlRRL2020\/RRL}} the models using TensorFlow \\cite{abadi2016tensorflow}.\n\\subsection{BoxWorld Experiment}\n\\label{subsec:BoxWorld}\nThe BoxWorld environment has been widely used as a benchmark in past RRL systems \\cite{dvzeroski2001relational,van2005survey,jiang2019neural}. In these systems, the state of the environment is usually given as explicit relational data via groundings of the predicate \\texttt{on(X,Y)}. While ILP-based systems are usually able to solve variations of this environment, they rely on an explicit representation of the state and cannot infer the state from the image. Here, we consider the task of stacking boxes on top of each other. We increase the difficulty of the problem compared to the previous examples \\cite{dvzeroski2001relational,van2005survey,jiang2019neural} by considering the order of boxes and requiring that the stack is formed on top of the blue box (the blue box should be on the floor). To make sure the models learn to generalize, we randomly place boxes on the floor at the beginning of each episode. We consider up to 5 boxes. Hence, the scene constants in our ILP setup are the set \\texttt{\\{a,b,c,d,e,floor\\}}. The dimension of the observation images is $64\\times64\\times3$ and no explicit relational information is available to the agents. The action space for the problem involving $n$ boxes is $(n+1)\\times (n+1)$, corresponding to all possibilities of moving a box (or the floor) on top of another box or the floor. Obviously, some of the actions are not permitted, e.g., placing the floor on top of a box or moving a box that is already covered by another box. \n\n\\paragraph{Comparing to Baseline:}\nIn the first experiment, we compare the performance of the proposed RRL technique to a baseline. 
For the baseline, we consider standard deep A2C (with up to 10 agents) and use the implementation in the \\texttt{stable-baselines} library \\cite{stable-baselines}. We considered both MLP and CNN policies for the deep RL baseline, but we report the results for the CNN policy because of its superior performance. \nFor the proposed RRL system, we use two convolutional layers with a kernel size of 3, a stride of 2, and $\\tanh$ activation functions. We apply two layers of MLP with \\texttt{softmax} activation functions to learn the groundings of the predicates \\texttt{posH(X,Y)} and \\texttt{posV(X,Y)}. Our presumed grid is $(n+1)\\times(n+1)$ and we allow for positional constants \\texttt{\\{0,1,\\dots,n\\}} to represent the locations in the grid in our ILP setting. \nAs a constraint, we add penalty terms to ensure that \\texttt{posV(floor,0)} is true. We use vanilla policy gradient learning, and to generate actions we define a learnable hypothesis predicate \\texttt{move(X,Y)}. Since we have $n+1$ box constants (including the floor), the groundings of this hypothesis correspond to the $(n+1)\\times(n+1)$ possible actions. Since the value of these groundings in dNL-ILP will be between 0 and 1, we generate \\texttt{softmax} logits by multiplying these outputs by a large constant $c$ (e.g., $c=10$). For the target predicate \\texttt{move(X,Y)}, we allow for 6 rules in learning (corresponding to a dNL-DNF function with 6 disjunctions). The complete list of auxiliary predicates and the parameters and weights used in the two models are given in Appendix \\ref{app:box}. As indicated in Fig.~\\ref{fig:ilp_rrl_state} and defined in (\\ref{eq:on}), we introduce the predicate \\texttt{on(X,Y)} as a function of the low-level state representation predicates \\texttt{posV(X,Y)} and \\texttt{posH(X,Y)}. We also use these predicates to introduce higher-level concepts such as aboveness (i.e., \\texttt{above(X,Y)}) as well as \\texttt{isCovered(X,Y)}. \nFig. 
\\ref{fig:box_cmp} compares the average success per episode for the two models for the two cases of $n=4$ and $n=5$. The results show that for the case of $n=4$, both models are able to learn a successful policy after around 7000 episodes. For the more difficult case of $n=5$, our proposed approach converges after around 20K episodes, \nwhereas it takes more than 130K episodes for the A2C approach to converge, and even then it fluctuates and does not always succeed.\n\\paragraph{Effect of background knowledge:}\nContrary to standard deep RL, in an RRL approach we can introduce our prior knowledge into the problem via the powerful predicate language. By defining the structure of the problem via ILP, we can explicitly introduce inductive biases \\cite{battaglia2018relational} which restrict the possible form of the solution. We can speed up the learning process or shape the possible learnable actions even further by incorporating background knowledge. \nTo examine the impact of the background knowledge on the speed of learning, \nwe consider three cases for the BoxWorld problem involving $n=4$ boxes. The baseline model (RRL1) is as described before. In RRL2, we add another auxiliary predicate which defines the movable states as:\n\n\\begin{align*}\n\\text{movable(X,Y)} \\leftarrow \\neg \\text{isCovered(X)}, \\neg \\text{isCovered(Y)}, \\\\\\neg \\text{same(X,Y)}, \\neg \\text{isfloor(X)}, \\neg \\text{on(X,Y)}\n\\end{align*}\nwhere $\\neg$ indicates the negation of a term. In the third model (RRL3), we go one step further and force the target predicate \\texttt{move(X,Y)} to incorporate the predicate \\texttt{movable(X,Y)} in each of its conjunction terms. \nFig. 
\\ref{fig:box_bk} compares the learning performance of these models in terms of average success rate (between $[0,1]$) versus the number of episodes.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.380\\textwidth]{box_mlp.png}\n\t\\vspace{-2mm}\n\t\\caption{Comparing deep A2C and the proposed model on the BoxWorld task}\n\t\\label{fig:box_cmp}\n\\end{figure}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.380\\textwidth]{b4_bk.png}\n\t\\vspace{-2mm}\n\t\\caption{Effect of background knowledge on learning BoxWorld}\n\t\\label{fig:box_bk}\n\\end{figure}\n\\paragraph{Interpretability:}\nIn the previous experiments, we did not consider the interpretability of the learned hypothesis. Since all the weights are fuzzy values, even though the learned hypothesis is still a parameterized symbolic function, it does not necessarily represent a valid Boolean formula. \nTo achieve an interpretable result, we add a small penalty as described in (\\ref{eq:interpret}). We also add a few more state constraints to make sure the learned representation follows our presumed grid notation (see Appendix \\ref{app:box} for details). The learned action predicate is found to be: \n\\begin{align*}\n\\text{move(X, Y)} &\\leftarrow \\text{movable(X, Y)},\\, \\neg \\text{lower(X, Y)} \\\\\n\\text{move(X, Y)} &\\leftarrow \\text{movable(X, Y)},\\, \\text{isBlue(Y)} \\\\\n\\text{lower(X, Y)} &\\leftarrow \\text{posV(X, Z)},\\, \\text{posV(Y, T)},\\, \\text{lessthan(Z, T)} \n\\end{align*}\n\n\\subsection{GridWorld Experiment}\n\\label{subsec:GridWorld}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{gridworld.png}\n\t\\caption{GridWorld environment \\cite{zambaldi2018relational}} \\label{fig:ilp_gridworld}\n\t\\label{fig:ilp_boxworldaction}\n\\end{figure}\nWe use the GridWorld environment introduced in \\cite{zambaldi2018relational} for this experiment. This environment consists of a $12\\times12$ grid with keys and boxes randomly scattered. 
It also has an agent, represented by a single dark gray square. The boxes are represented by two adjacent colors. The square on the right represents the box's lock type, whose color indicates which key can be used to open that lock. The square on the left indicates the content of the box, which is inaccessible while the box is locked. The agent must collect the key before accessing the box.\nWhen the agent has a key, provided that it walks over the lock box with the same color as its key, it can open the lock box; it must then enter the left box to acquire the new key inside it.\nThe agent cannot get the new key prior to successfully opening the lock box on the right side of the key box.\nThe goal is for the agent to open the gem box, colored white. We consider two difficulty levels. In the simple scenario, there is no (dead-end) branch. In the more difficult version, there can be one dead-end branch. An example of the environment and the branching scenarios is depicted in Fig.~\\ref{fig:ilp_gridworld}.\nThis is a very difficult task involving complex reasoning. Indeed, in the original work it was shown that a multi-agent A3C combined with a non-local attention learning model could only start to learn after processing $5\\times10^8$ episodes. To make this problem easier to tackle, \nwe modify the action space to include the location of any point in the grid instead of directional actions. Given this definition of the problem, the agent's task is to give the location of the next move inside the rectangular grid. Hence, the dimension of the action space is $144=12\\times12$. \nFor this environment, we define the predicates \\texttt{color(X,Y,C)}, where $X,Y\\in\\{1,\\dots,12\\}$, $C\\in\\{1,\\dots,10\\}$, and \\texttt{hasKey(C)} to represent the state. Here, variables $X,Y$ denote the coordinates, and the variable $C$ is for the color. 
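To make this encoding concrete, the following sketch (our own illustration; the helper names are hypothetical and not from the paper) shows one way to ground \texttt{color(X,Y,C)} and \texttt{hasKey(C)} as binary tensors and to flatten a target cell into one of the $144 = 12\times12$ actions:

```python
import numpy as np

GRID, COLORS = 12, 10

def state_valuation(color_grid, key_color):
    """Groundings of color(X,Y,C) and hasKey(C) as binary tensors.
    `color_grid` is a 12x12 integer array of color indices in [0, 10)."""
    color = np.zeros((GRID, GRID, COLORS))
    color[np.arange(GRID)[:, None], np.arange(GRID)[None, :], color_grid] = 1.0
    has_key = np.zeros(COLORS)
    has_key[key_color] = 1.0
    return color, has_key

def action_index(x, y):
    """Flatten a target cell (x, y) into one of the 144 actions."""
    return x * GRID + y

grid = np.random.default_rng(0).integers(0, COLORS, size=(GRID, GRID))
color, has_key = state_valuation(grid, key_color=3)
```

The one-hot layout makes each grounding a value in $\{0, 1\}$, matching the fuzzy-valuation interface expected by the dNL-ILP layers.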
Similar to the BoxWorld game, we included a few auxiliary predicates such as \\texttt{isBackground(X,Y)}, \\texttt{isAgent(X,Y)} and \\texttt{isGem(X,Y)} as part of the background knowledge. The representational power of ILP allows us to incorporate our prior knowledge about the problem into the model. As such, we can include some higher-level auxiliary helper predicates such as:\n\\begin{align*}\n\\text{isItem(X, Y)}&\\leftarrow \\ \\neg \\text{isBackground(X, Y)}, \\neg \\text{isAgent(X, Y)} \\\\\n\\text{locked(X, Y)}&\\leftarrow \\ \\text{isItem(X, Y)}, \\text{isItem(X,Z)}, \\text{inc(Y, Z)}\n\\end{align*}\nwhere the predicate \\texttt{inc(X,Y)} defines increments for integers (i.e., \\texttt{inc(n,n+1)} is true for every integer $n$). \nThe list of all auxiliary predicates as well as the parameters of the neural networks used in this experiment are given in Appendix \\ref{app:grid}. \nSimilar to the previous experiments, we consider two models: an A2C agent as the baseline and our proposed RRL model using the ILP language described in Appendix \\ref{app:grid}.\n\\begin{table}[ht]\n\t\\caption{Number of training episodes required for convergence}\n\t\\label{tbl:results_block}\n\t\\centering \n\t\\begin{tabular} { l c c }\n\t\t\\toprule\n\t\tmodel & Without Branch & With Branch\\\\\n\t\t\\midrule\n\t\tproposed RRL & 700 & 4500 \\\\\n\t\tA2C & $> 10^8$ & $> 10^8$ \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.35\\textwidth]{bl_bk.png}\n\t\\vspace{-2mm}\n\t\\caption{Effect of background knowledge on learning GridWorld}\n\t\\label{fig:grid_bk}\n\\end{figure}\nWe list the number of episodes required to converge in each setting in Table~\\ref{tbl:results_block}. As the results suggest, the proposed approach can learn the solution in both settings very quickly. In contrast, the standard deep A2C was not able to converge after $10^8$ episodes. 
\nThis example reaffirms that incorporating our prior knowledge regarding the problem can significantly speed up the learning process.\n\nFurther, similar to the BoxWorld experiment, we study the importance of our background knowledge in the learning. In the first task (RRL1), we evaluate our model on the non-branching task by enforcing the action to include the \\texttt{isItem(X,Y)} predicate. In RRL2, we do not enforce this. As shown in Fig.~\\ref{fig:grid_bk}, the RRL1 model learns 4 times faster than RRL2. Arguably, this is because enforcing the inclusion of \\texttt{isItem(X,Y)} in the action hypothesis reduces the possibility of exploring irrelevant moves (i.e., moving to a location without any item).\n\\subsection{Relational Reasoning}\n\\label{subsec:SORTOFCLEVER}\nCombining dNL-ILP with standard deep learning techniques is not limited to RRL settings. In fact, the same approach can be used in other \nareas in which we wish to reason about the relations of objects.\nTo showcase this, we consider the relational reasoning task involving the Sort-of-CLEVR \\cite{santoro2017simple} dataset. This dataset (see Fig.~\\ref{subfig:sortofclever}) consists of 2D images of some colored objects. The shape of each object is either a rectangle or a circle, and each image contains up to 6 objects. The questions are hard-coded as fixed-length binary strings. Questions are either non-relational (e.g., \"what is the color of the green object?\") or relational (e.g., \"what is the shape of the nearest object to the green object?\"). In \\cite{santoro2017simple}, the authors combined a CNN-generated feature map with a special type of attention-based non-local network in order to solve the problem. We use the same CNN network and, similar to the GridWorld experiment, we learn the state representation using the predicate \\texttt{color(X,Y,C)} (the color of each cell in the grid) as well as \\texttt{isCircle(X,Y)}, which learns whether the shape of an object is a circle or not. 
Our proposed approach reaches an accuracy of 99\\% on this dataset, compared to 94\\% for the non-local approach presented in \\cite{santoro2017simple}. The details of the model and the list of predicates in our ILP implementation are given in Appendix \\ref{app:relational}.\n\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\nIn this paper, we proposed a novel deep Relational Reinforcement Learning (RRL) model based on differentiable Inductive Logic Programming (ILP) that can effectively learn relational information from images. We showed how this model can take expert background knowledge and incorporate it into the learning problem using appropriate predicates. The differentiable ILP allows an end-to-end optimization of the entire framework for learning the policy in RRL. We showed the performance of the proposed RRL framework using environments such as BoxWorld and GridWorld.\n\n\n\n\n\n\n\\nocite{langley00}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn many data science problems, data are available through different views. Generally, the views represent different measurement modalities such as audio and video, or the same text that may be available in different languages. Our main interest here is neuroimaging, where recordings are made from multiple subjects. In particular, it is of interest to find common patterns or responses that are shared between subjects when they receive the same stimulation or perform the same cognitive task \\cite{chen2015reduced,richard2020modeling}. \n\nA popular line of work to perform such shared response modeling is group Independent Component Analysis (ICA) methods. The fastest methods~\\cite{calhoun2001method, varoquaux2009canica} are among the most popular, yet they are not grounded on principled probabilistic models for the multiview setting. 
\nMore principled approaches exist~\\cite{richard2020modeling, guo2008unified}, but they do not model subject-specific deviations from the shared response. However, such deviations are expected in most neuroimaging settings, as the magnitude of the response may differ from subject to subject \\cite{penny2007random}, as may any noise due to heartbeats, respiratory artefacts or head movements~\\cite{liu2016noise}.\nFurthermore, most GroupICA methods are unable to separate components whose density is close to a Gaussian.\n\nIndependent vector analysis (IVA)~\\cite{lee2008independent, anderson2011joint} is a powerful framework where components are independent within views, but each component of a given view can depend on the corresponding component in other views. \nHowever, current implementations such as IVA-L~\\cite{lee2008independent},\nIVA-G~\\cite{anderson2011joint}, IVA-L-SOS~\\cite{bhinge2019extraction}, IVA-GGD~\\cite{anderson2014independent} or\nIVA with the Kotz distribution~\\cite{anderson2013independent} estimate only the\nview-specific components, and do not model or extract a shared response, which is\nthe main focus of this work.\n\nOn the other hand, the shared response model~\\cite{chen2015reduced} is a popular approach to perform shared response modeling, yet it imposes orthogonality constraints that are restrictive and not biologically plausible.\n\nIn this work, we introduce Shared ICA (ShICA), where each view is modeled as a linear transform of shared independent components contaminated by additive Gaussian noise. ShICA allows the principled extraction of the shared components (or responses) in addition to view-specific components. \nSince it is based on a statistically sound noise model, it enables optimal inference (minimum mean square error, MMSE) of the shared responses.\n\nLet us note that ShICA is no longer the method of choice when the concept of a common response is either not useful or not applicable. 
\nNevertheless, we believe that the ability to extract a common response is an important feature in most contexts because it highlights a stereotypical brain response to a stimulus. Moreover, finding commonality between subjects reduces the often unwanted inter-subject variability.\n\nThe paper is organized as follows.\nWe first analyse the theoretical properties of the ShICA model, before providing inference algorithms.\nWe exhibit necessary and sufficient conditions for the ShICA model to be identifiable (previous work only shows local identifiability~\\cite{anderson2014independent}), in the presence of Gaussian or non-Gaussian components. \nWe then use Multiset CCA to fit the model when all the components are assumed to\nbe Gaussian. We exhibit necessary and sufficient conditions for Multiset CCA to\nbe able to recover the unmixing matrices (previous work only gives sufficient\nconditions~\\cite{li2009joint}). In addition, we provide instances of the problem where Multiset CCA cannot recover the mixing matrices although the model is identifiable.\nWe next point out a practical problem: even small sampling noise\ncan lead to large errors in the estimation of the unmixing matrices when Multiset CCA is used. To\naddress this issue and recover the correct unmixing matrices, we propose to\napply joint diagonalization to the result of Multiset CCA, yielding a new method\ncalled ShICA-J.\nWe further introduce ShICA-ML, a maximum likelihood estimator of ShICA that models non-Gaussian components using a Gaussian mixture model. \nWhile ShICA-ML yields more accurate components, ShICA-J is significantly faster and offers a great initialization to ShICA-ML.\nExperiments on fMRI and MEG data demonstrate that the method outperforms existing GroupICA and IVA methods.\n\n\n\\section{Shared ICA (ShICA): an identifiable multi-view model}\n\\paragraph{Notation} We write vectors in bold letters $\\vb$ and scalars in lower case $a$. Upper case letters $M$ are used to denote\nmatrices. 
We denote $|M|$ the absolute value of the determinant of $M$. $\\xb \\sim \\Ncal(\\mub, \\Sigma)$ means that $\\xb \\in \\mathbb{R}^k$ follows\na multivariate normal distribution of mean $\\mub \\in \\mathbb{R}^k$ and\ncovariance $\\Sigma \\in \\mathbb{R}^{k \\times k}$. The $j, j$ entry of a diagonal matrix $\\Sigma_i$ is denoted $\\Sigma_{ij}$, the $j$ entry of $\\yb_i$ is denoted $y_{ij}$. Lastly, $\\delta$ is the Kronecker delta.\n\n\\paragraph{Model Definition} In the following, $\\xb_1, \\dots ,\\xb_m \\in \\bbR^p$ denote the $m$ observed random vectors obtained from the $m$ different views. We posit the following generative model, called Shared ICA (ShICA): for $i= 1\\dots m$\n\\begin{equation}\n \\label{eq:model}\n \\xb_i = A_i(\\sbb + \\nb_i)\n\\end{equation}\nwhere $\\sbb \\in \\mathbb{R}^{p}$ contains the latent variables called \\emph{shared components}, $A_1,\\dots, A_m\\in\\bbR^{p\\times p}$ are the invertible mixing matrices, and $\\nb_i \\in\n\\mathbb{R}^{p}$ are \\emph{individual noises}. The individual noises model both the deviations of a view from the mean ---i.e.\\ individual differences--- and measurement noise. Importantly, we explicitly model both the shared components and the individual differences in a probabilistic framework to enable an optimal inference of the parameters and the responses.\n\nWe assume that the shared components are statistically independent, and that the individual noises are Gaussian and independent from the shared components:\n$p(\\sbb) = \\prod_{j=1}^p p(s_j)$ and $\\nb_i \\sim\\mathcal{N}(0, \\Sigma_i)$, where the matrices $\\Sigma_i$ are assumed diagonal and positive. Without loss of generality, components are assumed to have unit variance $\\bbE[\\sbb \\sbb^{\\top}] = I_p$. We further assume that there are at least 3 views: $m \\geq 3$. 
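The generative model above is easy to simulate. The following NumPy sketch (our own; Laplace sources are an arbitrary non-Gaussian choice) draws data from $\xb_i = A_i(\sbb + \nb_i)$ and checks the implied cross-covariance structure $\bbE[\xb_i\xb_j^\top] = A_i A_j^\top$ for $i \neq j$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, n = 3, 4, 20_000  # views, components, samples

# ShICA generative model: x_i = A_i (s + n_i), with unit-variance
# independent shared components s and diagonal Gaussian noise n_i.
S = rng.laplace(size=(p, n)) / np.sqrt(2)  # Laplace(0,1) has variance 2
A = [rng.standard_normal((p, p)) for _ in range(m)]  # invertible almost surely
sigmas = [rng.uniform(0.1, 1.0, size=p) for _ in range(m)]  # diagonals of Sigma_i

X = [A[i] @ (S + np.sqrt(sigmas[i])[:, None] * rng.standard_normal((p, n)))
     for i in range(m)]

# Noises are independent across views and of the sources, so the
# empirical cross-covariance for i != j approaches A_i A_j^T.
C01 = X[0] @ X[1].T / n
```

The same covariance identities are the sufficient statistics used by the second-order estimation procedure in the next section.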
\n\nIn contrast to almost all existing works, we assume that some components (possibly all of them) may be Gaussian, and denote $\\mathcal{G}$ the set of Gaussian components: $\\sbb_j \\sim \\mathcal{N}(0, 1)$ for $j \\in \\mathcal{G}$. The other components are non-Gaussian: for $j\\notin \\mathcal{G}$, $\\sbb_j$ is non-Gaussian.\n\n\n\\paragraph{Identifiability} The parameters of the model are $\\Theta = (A_1, \\dots, A_m, \\Sigma_1, \\dots, \\Sigma_m)$. We are interested in the identifiability of this model: given observations $\\xb_1,\\dots, \\xb_m$ generated with parameters $\\Theta$, are there some other $\\Theta'$ that may generate the same observations?\nLet us consider the following assumption, which requires that the individual noises for Gaussian components are sufficiently diverse:\n\\begin{assumption}[Noise diversity in Gaussian components]\n\\label{ass:diversity}\nFor all $j, j' \\in \\mathcal{G}, j \\neq j'$, the sequences $(\\Sigma_{ij})_{i=1 \\dots m}$ and $(\\Sigma_{ij'})_{i=1 \\dots m}$ are different, where $\\Sigma_{ij}$ is the $j, j$ entry of $\\Sigma_i$.\n\\end{assumption}\n\nIt is readily seen that there is one trivial set of indeterminacies in the problem: if $P \\in \\mathbb{R}^{p \\times p}$ is a sign and permutation matrix (i.e., a matrix which has one $\\pm 1$ coefficient on each row and column, and $0$'s elsewhere), the parameters $(A_1 P, \\dots, A_m P, P^{\\top}\\Sigma_1 P, \\dots, P^{\\top} \\Sigma_m P)$ also generate $\\xb_1,\\dots, \\xb_m$. The following theorem shows that under the above assumption, these are the only indeterminacies of the problem.\n\n\\begin{theorem}[Identifiability]\n\\label{thm:identif}\nWe make Assumption~\\ref{ass:diversity}. We let $\\Theta'=(A_1', \\dots, A_m', \\Sigma_1', \\dots,\\Sigma_m')$ be another set of parameters, and assume that they also generate $\\xb_1,\\dots, \\xb_m$. 
Then, there exists a sign and permutation matrix $P$ such that for all $i$, $A_i'=A_iP$, and $\\Sigma_i'= P^{\\top} \\Sigma_i P$.\n\\end{theorem}\nThe proof is in Appendix~\\ref{proof:identif}. Identifiability in the Gaussian case is a consequence of the identifiability results in~\\cite{via2011joint}, and in the general case, local identifiability results can be derived from the work of~\\cite{anderson2014independent}. \nHowever, local identifiability only shows that for a given set of parameters there exists a neighborhood in which no other set of parameters can generate the same observations~\\cite{rothenberg1971identification}. In contrast, the proof of Theorem~\\ref{thm:identif} shows global identifiability.\n\nTheorem~\\ref{thm:identif} shows that the task of recovering the parameters from the observations is a well-posed problem, under the sufficient condition of Assumption~\\ref{ass:diversity}. We also note that Assumption~\\ref{ass:diversity} is necessary for identifiability. For instance, if $j$ and $j'$ are two Gaussian components such that $\\Sigma_{ij} = \\Sigma_{ij'}$ for all $i$, then a global rotation of the components $j, j'$ yields the same covariance matrices. The current work assumes $m \\geq 3$; in Appendix~\\ref{app:identifiability} we give an identifiability result for $m=2$ under stronger conditions.\n\n\n\n\\section{Estimation of components with noise diversity via joint-diagonalization}\n\nWe now consider the computational problem of efficient parameter inference. 
This section considers components with noise diversity, while the next section deals with non-Gaussian components.\n\n\n\\subsection{Parameter estimation with Multiset CCA}\nIf we assume that the components are all Gaussian,\nthe covariances of the observations, given by\n$C_{ij}= \\bbE[\\xb_i\\xb_j^\\top] = A_i(I_p + \\delta_{ij}\\Sigma_i)A_j^{\\top}\\enspace\n$, are sufficient statistics, and methods using only second-order information, like Multiset CCA, are candidates to estimate the parameters of the model.\nConsider the\nmatrix $\\mathcal{C} \\in \\bbR^{pm \\times pm}$ containing $m \\times m$ blocks of size $p\n\\times p$\nsuch that the block $i,j$ is given by $C_{ij}$. Consider the matrix $\\mathcal{D}$ identical to $\\mathcal{C}$ except that the non-diagonal blocks are filled with zeros:\n\\begin{equation}\n \\mathcal{C} = \\begin{bmatrix}\n C_{11} & \\dots & C_{1m}\\\\\n \\vdots & \\ddots & \\vdots \\\\\n C_{m1} &\\dots & C_{mm} \n \\end{bmatrix}\n ,\\enspace\n \\mathcal{D} = \\begin{bmatrix}\n C_{11} & \\dots & 0\\\\\n \\vdots & \\ddots & \\vdots \\\\\n 0 &\\dots & C_{mm} \n \\end{bmatrix}. \n\\end{equation} \nGeneralized CCA consists of the following generalized eigenvalue problem:\n\\begin{equation}\n\\label{eq:eigv}\n \\mathcal{C} \\ub = \\lambda \\mathcal{D}\\ub,\\enspace \\lambda > 0,\\enspace \\ub\\in\\bbR^{pm} \\enspace .\n\\end{equation}\n \nConsider the matrix $U = [\\ub^1, \\dots, \\ub^p] \\in \\mathbb{R}^{mp \\times p}$ formed by concatenating the $p$ leading eigenvectors of the previous problem ranked in decreasing eigenvalue order. Then, consider $U$ to be formed of $m$ blocks of size $p \\times p$ stacked vertically and define $(W^i)^{\\top}$ to be the $i$-th block. These $m$ matrices are the output of Multiset CCA. 
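A compact sketch of this Multiset CCA step (our code, not the paper's; it solves the generalized eigenvalue problem $\mathcal{C}\ub = \lambda\mathcal{D}\ub$ with SciPy, with sample covariances standing in for the exact ones):

```python
import numpy as np
from scipy.linalg import eigh

def multiset_cca(X):
    """Multiset CCA as the generalized eigenvalue problem C u = lambda D u.
    X is a list of m arrays of shape (p, n); returns the m matrices W_i
    (one p x p block of U transposed per view) and the p leading eigenvalues."""
    m, (p, n) = len(X), X[0].shape
    C = np.block([[Xi @ Xj.T / n for Xj in X] for Xi in X])
    D = np.zeros_like(C)
    for i, Xi in enumerate(X):
        D[i*p:(i+1)*p, i*p:(i+1)*p] = Xi @ Xi.T / n
    lam, U = eigh(C, D)                        # eigh returns ascending order
    lam, U = lam[::-1][:p], U[:, ::-1][:, :p]  # keep the p leading pairs
    W = [U[i*p:(i+1)*p].T for i in range(m)]   # (W^i)^T is the i-th block of U
    return W, lam

rng = np.random.default_rng(0)
p, n, m = 2, 5000, 3
S = rng.laplace(size=(p, n))
A = [rng.standard_normal((p, p)) for _ in range(m)]
X = [A[i] @ (S + 0.1 * (i + 1 + np.arange(p))[:, None]
             * rng.standard_normal((p, n))) for i in range(m)]
W, lam = multiset_cca(X)
```

With small, diverse noise as above, the $p$ leading eigenvalues exceed $1$, consistent with the characterization of the leading eigenvalues given later in this section.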
We also denote by $\\lambda_1 \\geq \\dots \\geq \\lambda_p$ the $p$ leading eigenvalues of the problem.\n \n\nAn application of the results of \\cite{li2009joint} shows that Multiset CCA recovers the mixing matrices of ShICA under some assumptions.\n\\begin{proposition}[Sufficient condition for solving ShICA via Multiset CCA~\\cite{li2009joint}]\nLet $r_{ijk} = (1 + \\Sigma_{ik})^{-\\frac12} (1 + \\Sigma_{jk})^{-\\frac12}$.\nAssume that $(r_{ijk})_k$ is non-increasing. Assume that the maximum eigenvalue $\\nu_k$ of the matrix $R^{(k)}$ with general element $(r_{ijk})_{ij}$ is such that $\\nu_k = \\lambda_k$.\nAssume that $\\lambda_1, \\dots, \\lambda_p$ are distinct.\nThen, there exist scale matrices $\\Gamma_i$ such that $W_i = \n\\Gamma_i A_i^{-1}$ for all $i$.\n\\end{proposition}\nThis proposition gives a sufficient condition for solving ShICA with Multiset CCA. It requires a particular structure for the noise covariances as well as a specific ordering of the eigenvalues. The next theorem shows that we only need $\\lambda_1, \\dots, \\lambda_p$ to be distinct for Multiset CCA to solve ShICA:\n\\begin{assumption}[Unique eigenvalues]\n \\label{ass:uniqueeig}\n$\\lambda_1, \\dots, \\lambda_p$ are distinct.\n\\end{assumption}\n\\begin{theorem}\n \\label{th:eig}\n We only make\n Assumption~\\ref{ass:uniqueeig}. Then, there exist a permutation matrix $P$ and scale matrices $\\Gamma_i$ such that $W_i = P\\Gamma_i A_i^{-1}$ for all $i$.\n\\end{theorem}\nThe proof is in Appendix~\\ref{proof:eig}. This theorem means that solving the generalized eigenvalue problem~\\eqref{eq:eigv} makes it possible to recover the mixing matrices up to scaling and permutation: this form of generalized CCA recovers the parameters of the statistical model.\nNote that Assumption~\\ref{ass:uniqueeig} is also a necessary condition. 
Indeed, if two eigenvalues are identical, the eigenvalue problem is not uniquely determined.\n\nWe have two different Assumptions, \\ref{ass:diversity} and \\ref{ass:uniqueeig}: the first guarantees theoretical identifiability as per Theorem~\\ref{thm:identif}, and the second guarantees consistent estimation by Multiset CCA as per Theorem~\\ref{th:eig}. Next, we discuss their connections and show some limitations of the Multiset CCA approach. To begin with, we have the following result about the eigenvalues of problem~\\eqref{eq:eigv} and the $\\Sigma_{ij}$.\n\\begin{proposition}\n \\label{prop:eigvals_from_noise}\n For $j\\leq p$, let $\\lambda_j$ be the largest solution of $ \\sum_{i=1}^m\\frac{1}{\\lambda_j(1 + \\Sigma_{ij}) -\\Sigma_{ij}}=1$. Then, $\\lambda_1, \\dots, \\lambda_p$ are the $p$ largest eigenvalues of problem~\\eqref{eq:eigv}.\n\\end{proposition}\nIt is easy to see that we then have $\\lambda_1, \\dots, \\lambda_p$ greater than $1$, while the remaining eigenvalues are smaller than $1$.\nFrom this proposition, two things appear clearly. First, Assumption~\\ref{ass:uniqueeig} implies Assumption~\\ref{ass:diversity}.\nIndeed, if the $\\lambda_j$'s are distinct, then by the previous proposition the sequences $(\\Sigma_{ij})_i$ must also be different.\nThis is expected since, from Theorem~\\ref{th:eig}, Assumption~\\ref{ass:uniqueeig} implies identifiability, which in turn implies Assumption~\\ref{ass:diversity}.\n\nProp.~\\ref{prop:eigvals_from_noise} also allows us to derive cases where Assumption~\\ref{ass:diversity} holds but not Assumption~\\ref{ass:uniqueeig}. The following proposition gives a simple case where the model is identifiable but cannot be solved using Multiset CCA:\n\\begin{proposition}\n\\label{counter}\nAssume that for two integers $j, j'$, the sequence $(\\Sigma_{ij})_i$ is a permutation of $(\\Sigma_{ij'})_i$, i.e. 
that there exists a permutation $\\pi$ of $\\{1,\\dots, m\\}$ such that for all $i$, $\\Sigma_{ij} = \\Sigma_{\\pi(i)j'}$. Then, $\\lambda_j = \\lambda_{j'}$.\n\\end{proposition}\nIn this setting, Assumption~\\ref{ass:diversity} holds, so ShICA is identifiable, while Assumption~\\ref{ass:uniqueeig} does not hold, so Multiset CCA cannot recover the unmixing matrices.\n\n\\subsection{Sampling noise and improved estimation with joint diagonalization} \\label{sec:samplingnoise}\n\nThe consistency theory for Multiset CCA developed above assumes that the\ncovariances $C_{ij}$ are the true covariances of the model, and not\napproximations obtained from observed samples. In practice, however, a serious limitation of Multiset CCA is that even a slight estimation error on the covariances, due to ``sampling noise'', can yield a large error in the estimation of the unmixing matrices, as will be shown next.\n\nWe begin with an empirical illustration. We take $m=3$, $p=2$, and $\\Sigma_i$ such that $\\lambda_1 = 2 + \\varepsilon$ and $\\lambda_2 = 2$ for $\\varepsilon > 0$.\nIn this way, we can control the \\emph{eigen-gap} $\\varepsilon$ of the problem.\nWe take $W_i$ to be the outputs of Multiset CCA applied to the true covariances $C_{ij}$.\nThen, we generate a perturbation $\\Delta = \\delta \\cdot S$, where $S$ is a random positive symmetric $pm \\times pm$ matrix of norm $1$, and $\\delta >0$ controls the scale of the perturbation.
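The illustration that follows is scored with the Amari distance; a minimal implementation is sketched below (normalization conventions vary across papers, so the constant factor here is one common choice, not necessarily the paper's):

```python
import numpy as np

def amari_distance(W, A):
    """Distance between an unmixing estimate W and a mixing matrix A
    that vanishes iff W A is a scaled permutation matrix."""
    P = np.abs(W @ A)
    # each row (and column) of P should be dominated by a single entry
    row = (P.sum(axis=1) / P.max(axis=1) - 1).sum()
    col = (P.sum(axis=0) / P.max(axis=0) - 1).sum()
    n = P.shape[0]
    return (row + col) / (2 * n * (n - 1))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
scale_perm = np.diag([2.0, -1.0, 0.5, 3.0]) @ np.eye(4)[[2, 0, 3, 1]]
print(amari_distance(np.linalg.inv(A), A))               # ~0
print(amari_distance(scale_perm @ np.linalg.inv(A), A))  # still ~0
```

The second call shows the key invariance: rescaling and permuting the rows of a perfect unmixing matrix leaves the distance at zero.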
\nWe take $\\Delta_{ij}$ to be the $p\\times p$ block of $\\Delta$ in position $(i, j)$, and $\\tilde{W}_i$ to be the output of Multiset CCA applied to the perturbed covariances $C_{ij} + \\Delta_{ij}$.\nWe finally compute the sum of the Amari distances between the $W_i$ and the $\\tilde{W}_i$: the Amari distance measures how close two matrices are, up to scale and permutation~\\cite{amari1996new}.\n\\begin{wrapfigure}{r}{.4\\textwidth}\n \n \\centering\n \\includegraphics[width=.99\\linewidth]{figures\/multicca_gap_jd.pdf}\n \\caption{Amari distance between true mixing matrices and estimates of Multiset\n CCA when covariances are perturbed. Different solid curves correspond to different\n eigen-gaps. The black dotted line shows the chance level. When the gap is small, a small perturbation can lead to complete mixing. Joint-diagonalization (colored dotted lines) fixes the problem.}\n \\label{fig:cca_gap}\n \n \\end{wrapfigure}\nFig~\\ref{fig:cca_gap} displays the median Amari distance over 100 random repetitions, as the perturbation scale $\\delta$ increases. The different curves correspond to different values of the eigen-gap $\\varepsilon$. We see clearly that the robustness of Multiset CCA critically depends on the eigen-gap: when it is small, even a small perturbation of the input (due, for instance, to sampling noise) leads to large estimation errors.\n\n\nThis problem is very general and well studied~\\cite{stewart1973error}: the mapping from matrices to (generalized) eigenvectors is highly non-smooth.\nHowever, the gist of our method is that the \\emph{span} of the leading $p$ eigenvectors is smooth, as long as there is a large enough gap between $\\lambda_p$ and $\\lambda_{p+1}$.\nFor our specific problem, we have the following bounds, derived from Prop.~\\ref{prop:eigvals_from_noise}.\n\\begin{proposition}\n We let $\\sigma_{\\max} = \\max_{ij}\\Sigma_{ij}$ and $\\sigma_{\\min} = \\min_{ij}\\Sigma_{ij}$.
Then, $\\lambda_p \\geq 1 + \\frac{m-1}{1+\\sigma_{\\max}}$, while $\\lambda_{p+1}\\leq 1 - \\frac{1}{1 + \\sigma_{\\min}}$.\n\\end{proposition}\nAs a consequence, we have $\\lambda_{p} -\\lambda_{p+1} \\geq \\frac{m-1}{1+\\sigma_{\\max}} + \\frac{1}{1+ \\sigma_{\\min}}\\geq \\frac m{1+ \\sigma_{\\max}}$: the gap between these eigenvalues increases with $m$, and decreases with the noise power.\n\n\\begin{wrapfigure}{l}{.45\\textwidth}\n\\begin{minipage}{.45\\textwidth}\n \\begin{algorithm}[H]\n \\caption{ShICA-J}\n \\label{algo:shicaj}\n \\begin{algorithmic}\n \\STATE {\\bfseries Input :} Covariances $\\tilde{C}_{ij} = \\bbE[\\xb_i\\xb_j^{\\top}]$\n \\STATE $(\\tilde{W}_i)_i \\leftarrow \\mathrm{MultisetCCA}((\\tilde{C}_{ij})_{ij})$\n \\STATE $Q \\leftarrow \\mathrm{JointDiag}((\\tilde{W}_i\\tilde{C}_{ii}\\tilde{W}_i^{\\top})_i)$\n \\STATE $\\Gamma_{ij} \\leftarrow Q\\tilde{W}_i\\tilde{C}_{ij}\\tilde{W}_j^\\top Q^\\top$\n \\STATE $(\\Phi_i)_i \\leftarrow \\mathrm{Scaling}((\\Gamma_{ij})_{ij})$\n \\STATE \\textbf{Return : } Unmixing matrices $(\\Phi_iQ\\tilde{W}_i)_i$.\n \\end{algorithmic}\n \\end{algorithm}\n\\end{minipage}\n\\end{wrapfigure}\nIn this setting, when the magnitude of the perturbation $\\Delta$ is smaller than $\\lambda_{p}-\\lambda_{p+1}$, the perturbation theory of~\\cite{stewart1973error} indicates that $\\mathrm{Span}([W_1, \\dots, W_m]^{\\top})\\simeq \\mathrm{Span}([\\tilde{W}_1,\\dots, \\tilde{W}_m]^\\top)$, where $[W_1, \\dots, W_m]^{\\top}\\in\\bbR^{pm\\times p}$ is the vertical concatenation of the $W_i$'s.\nIn turn, this shows that there exists a matrix $Q\\in\\bbR^{p\\times p}$ such that\n\\begin{equation}\n \\label{eq:justif_jd}\n W_i \\simeq Q\\tilde{W}_i\\enspace \\text{for all} \\enspace i.\n\\end{equation}\nWe propose to use joint-diagonalization to recover the matrix $Q$.
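The paper relies on a fast joint-diagonalization solver for the general case; as a simplified illustration of the idea, two symmetric positive definite matrices sharing a common (non-orthogonal) diagonalizing basis can be jointly diagonalized exactly via a generalized eigendecomposition (the case of more than two matrices requires an approximate criterion, as discussed below):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
p = 3
B = rng.standard_normal((p, p))            # common mixing basis
K1 = B @ np.diag([1.0, 2.0, 3.0]) @ B.T    # stand-ins for the K_i's
K2 = B @ np.diag([4.0, 1.0, 2.0]) @ B.T

# K1 v = lambda K2 v yields V with V^T K2 V = I and V^T K1 V diagonal,
# so Q = V^T diagonalizes both matrices at once
_, V = eigh(K1, K2)
Q = V.T
for K in (K1, K2):
    M = Q @ K @ Q.T
    print(np.abs(M - np.diag(np.diag(M))).max())  # off-diagonals, ~0
```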
Given the $\\tilde{W}_i$'s, we consider the set of symmetric matrices $\\tilde{K}_i = \\tilde{W}_i\\tilde{C}_{ii}\\tilde{W}_i^{\\top}$, where $\\tilde{C}_{ii}$ is the contaminated covariance of $\\xb_i$. Following Eq.~\\eqref{eq:justif_jd}, we have $Q\\tilde{K}_iQ^{\\top} \\simeq W_i \\tilde{C}_{ii}W_i^{\\top}$, and using Theorem~\\ref{th:eig}, we have $Q\\tilde{K}_iQ^{\\top} \\simeq P\\Gamma_i A_i^{-1}\\tilde{C}_{ii}A_i^{-\\top}\\Gamma_iP^{\\top}$. Since $\\tilde{C}_{ii}$ is close to $C_{ii} = A_i (I_p + \\Sigma_i)A_i^\\top$, the matrix $P\\Gamma_i A_i^{-1}\\tilde{C}_{ii}A_i^{-\\top}\\Gamma_iP^{\\top}$ is almost diagonal.\nIn other words, the matrix $Q$ is an approximate diagonalizer of the $\\tilde{K}_i$'s, and we approximate $Q$ by joint-diagonalization of the $\\tilde{K}_i$'s. In Fig~\\ref{fig:cca_gap}, we see that this procedure mitigates the problems of Multiset CCA, and achieves uniformly better performance regardless of the eigen-gap.\nIn practice, we use a fast joint-diagonalization algorithm~\\cite{ablin2018beyond} to minimize a joint-diagonalization criterion for positive symmetric matrices~\\cite{pham2001joint}. The estimated unmixing matrices $U_i = Q\\tilde{W}_i$ correspond to the true unmixing matrices only up to scalings that may differ from subject to subject: the information that the components are of unit variance is lost. As a consequence, naive averaging of the recovered components may lead to inconsistent estimation. We now describe a procedure to recover the correct scale of the individual components across subjects.\n\n\\textbf{Scale estimation}\nWe form the matrices $\\Gamma_{ij} = U_i\\tilde{C}_{ij}U_j^\\top$.
In order to estimate the scalings, we solve $\n\\min_{(\\Phi_i)} \\sum_{i\\neq j} \\| \\Phi_i \\diag(\\Gamma_{ij}) \\Phi_j - I_p \\|_F^2$\nwhere the $\\Phi_i$ are diagonal matrices.\nThis function is readily minimized with respect to one of the $\\Phi_i$ by the formula\n$\\Phi_i = \\frac{\\sum_{j \\neq i} \\Phi_j \\diag(\\Gamma_{ij})}{\\sum_{j \\neq i} \\Phi_j^2 \\diag(\\Gamma_{ij})^2}$ (derivations in Appendix~\\ref{app:fixedpoint}). We then iterate this formula over $i$ until convergence.\nThe final estimates of the unmixing matrices are given by\n$(\\Phi_i U_i)_{i=1}^m$. The full procedure, called ShICA-J, is summarized in Algorithm~\\ref{algo:shicaj}.\n\n\\subsection{Estimation of noise covariances}\n\nIn practice, it is important to estimate the noise covariances $\\Sigma_i$ in order to take advantage of the fact that some views are noisier than others. As is well known in classical factor analysis, modelling noise variances allows the model to virtually discard variables, or subjects, that are particularly noisy. \n\nUsing the ShICA model with Gaussian components, we derive an estimate for the noise covariances directly from maximum likelihood. We use an expectation-maximization (EM) algorithm, which is especially fast because noise updates are in closed-form.
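The scale-estimation fixed point described above can be sanity-checked on a toy problem in which $\diag(\Gamma_{ij})$ is constructed with entries $g_i g_j$ (a hypothetical construction, so that $\phi_i = 1/g_i$ is the exact solution):

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 4, 3
g = rng.uniform(0.5, 2.0, size=(m, p))     # hidden per-view scales
# diag(Gamma_ij) entries g_i * g_j, so phi_i = 1/g_i makes
# Phi_i diag(Gamma_ij) Phi_j = I for all pairs
Y = {(i, j): g[i] * g[j] for i in range(m) for j in range(m) if i != j}

phi = np.ones((m, p))
for _ in range(200):                       # fixed-point sweeps over i
    for i in range(m):
        num = sum(phi[j] * Y[i, j] for j in range(m) if j != i)
        den = sum(phi[j] ** 2 * Y[i, j] ** 2 for j in range(m) if j != i)
        phi[i] = num / den

err = max(np.abs(phi[i] * Y[i, j] * phi[j] - 1).max()
          for i in range(m) for j in range(m) if i != j)
print(err)  # ~0: the objective is driven to its minimum
```

Note that the diagonal entries decouple, so each of the $p$ components is rescaled independently.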
Following derivations given in Appendix~\\ref{conditional_density}, the sufficient statistics in the E-step are given by \n\\begin{align}\n\\label{mmse1}\n\\EE[\\sbb|\\xb]= \\left(\\sum_{i=1}^m \\Sigma_i^{-1} + I \\right)^{-1} \\sum_{i=1}^m \\left(\\Sigma_i^{-1} \\yb_i \\right)\n && \\VV[\\sbb|\\xb]= (\\sum_{i=1}^m \\Sigma_i^{-1} + I)^{-1}\n\\end{align}\nIncorporating the M-step, we get the following updates, which only depend on the covariance matrices:\n$\n\\Sigma_i \\leftarrow \\diag(\\hat{C}_{ii} - 2 \\VV[\\sbb | \\xb] \\sum_{j=1}^m \\Sigma_j^{-1} \\hat{C}_{ji} + \\VV[\\sbb | \\xb] \\sum_{j = 1}^m \\sum_{l = 1}^m \\left(\\Sigma_j^{-1} \\hat{C}_{jl} \\Sigma_l^{-1} \\right) \\VV[\\sbb | \\xb] + \\VV[\\sbb | \\xb])\n$.\n\n\\section{ShICA-ML: Maximum likelihood for non-Gaussian components}\nShICA-J only uses second-order statistics. However, the ShICA model~\\eqref{eq:model} allows for non-Gaussian components. We now propose an algorithm for fitting the ShICA model that combines covariance information with non-Gaussianity in the estimation to optimally separate both Gaussian and non-Gaussian components.\nWe estimate the parameters by maximum likelihood. Since most non-Gaussian\ncomponents in real data are super-Gaussian~\\cite{delorme2012independent, calhoun2006unmixing}, we assume that the non-Gaussian components $\\sbb$ have the super-Gaussian density \\\\ $p(s_j) = \\frac12\\left(\\mathcal{N}( s_j; 0, \\frac12) + \\mathcal{N}( s_j; 0, \\frac{3}{2})\\right) \\enspace.$\n\nWe propose to maximize the log-likelihood using a generalized\nEM~\\cite{neal1998view, dempster1977maximum}. Derivations are available in Appendix~\\ref{app:emestep}.
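As a quick check of the Gaussian E-step statistics of Eq. (mmse1) above: with diagonal noise covariances the components decouple, and each coordinate reduces to the familiar precision-weighted scalar Gaussian posterior. This sketch (with made-up inputs) verifies the reduction:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 3, 2
Sig = [np.diag(rng.uniform(0.2, 1.5, p)) for _ in range(m)]  # noise covs
y = [rng.standard_normal(p) for _ in range(m)]               # y_i = W_i x_i

Sinv = [np.linalg.inv(S) for S in Sig]
V_post = np.linalg.inv(sum(Sinv) + np.eye(p))                # V[s|x]
E_post = V_post @ sum(Si @ yi for Si, yi in zip(Sinv, y))    # E[s|x]

# scalar sanity check on component k: precision-weighted average of y_ik
k = 0
prec = 1 + sum(1.0 / S[k, k] for S in Sig)
mean = sum(yi[k] / S[k, k] for yi, S in zip(y, Sig)) / prec
print(np.allclose(V_post[k, k], 1 / prec), np.allclose(E_post[k], mean))
```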
As in the previous section, the E-step is in closed form, yielding the following sufficient statistics:\n\\begin{align}\n\\label{mmse2}\n \\EE[s_j | \\xb] = \\frac{\\sum_{\\alpha \\in \\{\\frac12, \\frac32\\}} \\theta_{\\alpha} \\frac{\\alpha \\bar{y}_{j}}{\\alpha + \\bar{\\Sigma}_{j}}}{\\sum_{\\alpha \\in \\{\\frac12, \\frac32\\}} \\theta_{\\alpha}} \\enspace \\text{ and } \\enspace \\VV[s_j | \\xb] = \\frac{\\sum_{\\alpha \\in \\{\\frac12, \\frac32\\}} \\theta_{\\alpha} \\frac{\\bar{\\Sigma}_{j}\\alpha}{\\alpha + \\bar{\\Sigma}_{j}}}{\\sum_{\\alpha \\in \\{\\frac12, \\frac32\\}} \\theta_{\\alpha}} \n\\end{align}\n where $\\theta_{\\alpha} = \\Ncal(\\bar{y}_{j}; 0 , \\bar{\\Sigma}_{j} + \\alpha)$, \n $\\bar{y}_j = \\frac{\\sum_i \\Sigma_{ij}^{-1} y_{ij}}{ \\sum_i\n \\Sigma_{ij}^{-1}}$ and $\\bar{\\Sigma}_{j} = (\\sum_i\n \\Sigma_{ij}^{-1})^{-1}$ with $\\yb_i = W_i \\xb_i$.\nNoise updates are in closed-form and given by:\n$\\Sigma_i \\leftarrow \\diag((\\yb_i - \\EE[\\sbb | \\xb]) (\\yb_i - \\EE[\\sbb | \\xb])^{\\top}+ \\VV[\\sbb | \\xb])$.\nHowever, no closed-form is available for the updates of the unmixing matrices. We therefore perform quasi-Newton updates given by\n$W_i \\leftarrow (I - \\rho (\\widehat{\\mathcal{H}^{W_i}})^{-1} \\mathcal{G}^{W_i}) W_i$ where $\\rho \\in \\mathbb{R}$ is chosen by backtracking line-search,\n$\\widehat{\\mathcal{H}^{W_i}_{a, b, c, d}} = \\delta_{ad} \\delta_{bc} +\n\\delta_{ac} \\delta_{bd}\\frac{(y_{ib})^2}{\\Sigma_{ia}}$\nis an approximation of the Hessian\nof the negative complete likelihood and $\\mathcal{G}^{W_i} = -I + (\\Sigma_i)^{-1}(\\yb_i - \\mathbb{E}[\\sbb|\\xb])(\\yb_i)^{\\top}$ is the gradient.\n\nWe alternate between computing the statistics $\\mathbb{E}[\\sbb|\\xb]$, \n$\\mathbb{V}[\\sbb|\\xb]$ (E-step) and updates of parameters $\\Sigma_i$ and $W_i$ for $i=1 \\dots m$ (M-step). Let us highlight that our EM algorithm and in particular the E-step resembles the one used in~\\cite{moulines1997maximum}.
However, because they assume noise on the sensors and not on the components, their formula for $\\EE[\\sbb| \\xb]$ involves a sum with $2^p$ terms, whereas ours has only $2$ terms. The resulting method is called ShICA-ML.\n\n\\paragraph{Minimum mean square error estimates in ShICA}\nIn ShICA-J as well as in ShICA-ML, we have a closed-form for the expected components given the data $\\EE[\\sbb | \\xb]$, shown in equations~\\eqref{mmse1} and~\\eqref{mmse2}, respectively. This provides minimum mean square error estimates of the shared components, and is an important benefit of explicitly modelling shared components in a probabilistic framework.\n\n\\section{Related Work}\nShICA combines theory and methods coming from different branches of ``component analysis''. It can be viewed as a GroupICA method, as an extension of Multiset CCA, as an Independent Vector Analysis method or, crucially, as an extension of the shared response model. In the setting studied here, ShICA improves upon all existing methods.\n\n\\paragraph{GroupICA}\nGroupICA methods extract independent components from multiple datasets. In its original form~\\cite{calhoun2001method}, views are concatenated and then a PCA is applied, yielding reduced data on which ICA is applied. One can also reduce the data using Multiset CCA instead of PCA, giving a method called \\emph{CanICA}~\\cite{varoquaux2009canica}. Other works~\\cite{Esposito05NI, Hyva11NI} apply ICA separately on the datasets and attempt to match the decompositions afterwards.\nAlthough these works provide very fast methods, they do not rely on a well-defined model like ShICA.\nOther GroupICA methods impose some structure on the mixing matrices, such as the tensorial method of~\\cite{beckmann2005tensorial} or the group tensor model in~\\cite{guo2008unified} (which assumes identical mixing matrices up to a scaling) or \\cite{svensen2002ica} (which assumes identical mixing matrices but different components).
In ShICA, the mixing matrices are only constrained to be invertible.\nLastly, maximum-likelihood-based methods exist, such as\n\\emph{MultiViewICA}~\\cite{richard2020modeling} (MVICA) or the full model\nof~\\cite{guo2008unified}.\nThese methods are weaker than ShICA as they use the same noise covariance across views and lack a principled method for shared response inference.\n\n\\paragraph{Multiset CCA}\nIn its basic formulation, CCA identifies a shared space between two datasets.\nThe extension to more than two datasets is ambiguous, and many\ndifferent generalized CCA methods have been proposed. \\cite{kettenring1971canonical} introduces 6 objective functions that reduce to CCA when $m=2$, and \\cite{nielsen2002multiset} considers 4 different possible constraints, leading to 24 different formulations of Multiset CCA. The formulation used in ShICA-J is referred to in~\\cite{nielsen2002multiset} as SUMCORR with constraint 4, which is one of the fastest as it reduces to solving a generalized eigenvalue problem. The fact that CCA solves a well-defined probabilistic model was first studied in~\\cite{bach2005probabilistic}, where it is shown that CCA is identical to multiple battery factor analysis~\\cite{browne1980factor} (restricted to 2 views). This latter formulation differs from our model in that the noise is added on the sensors and not on the components, which makes the model unidentifiable. Identifiable variants and\ngeneralizations can be obtained by imposing sparsity on the mixing matrices such as in~\\cite{archambeau2008sparse, klami2014group, witten2009extensions} or non-negativity~\\cite{DELEUS2011143}.\nThe work in~\\cite{li2009joint} exhibits a set of sufficient (but not necessary) conditions under which a well-defined model can be learnt by the formulation of Multiset CCA used in ShICA-J. The set of conditions we exhibit in this work is necessary and sufficient.
We further emphasize that basic Multiset CCA provides a poor estimator, as explained in Section~\\ref{sec:samplingnoise}.\n\n\\paragraph{Independent vector analysis}\nIndependent vector analysis~\\cite{lee2008independent} (IVA) models the data as a linear mixture of independent components $\\xb_i = A_i \\sbb_i$, where each component $s_{ij}$ of a given view $i$ can depend on the corresponding component in other views ($(s_{ij})_{i=1}^m$ are not independent).\nPractical implementations of this very general idea assume a distribution for\n$p((s_{ij})_{i=1}^m)$. In IVA-L~\\cite{lee2008independent}, $p((s_{ij})_{i=1}^m)\n\\propto \\exp(-\\sqrt{\\sum_i (s_{ij})^2})$ (so the variance of each component in\neach view is assumed to be the same), in IVA-G~\\cite{anderson2011joint} or\nin~\\cite{via2011maximum}, $p((s_{ij})_{i=1}^m) \\sim \\mathcal{N}(0, R_{ss})$,\nand~\\cite{engberg2016independent} proposes a normal inverse-Gamma density.\nLet us also mention IVA-L-SOS~\\cite{bhinge2019extraction}, IVA-GGD~\\cite{anderson2014independent} and\nIVA with Kotz distribution~\\cite{anderson2013independent}, which assume a\nnon-Gaussian density general enough that both second- and higher-order\nstatistics can be used to extract view-specific components.\nThe model of ShICA can be seen as an instance of IVA\nthat specifically enables extraction of shared components from the subject-specific components, unlike previous versions of IVA. In fact, ShICA comes with minimum mean square error estimates for the shared components,\nwhich are often the quantity of interest.\nThe IVA theory provides global identifiability conditions in the Gaussian case (IVA-G)~\\cite{via2011joint} and local identifiability conditions in the general case~\\cite{anderson2014independent}, from which local identifiability conditions of ShICA could be derived.
However, in this work, we provide global identifiability conditions for ShICA.\nLastly, IVA can be performed using joint diagonalization of cross-covariances~\\cite{li2011joint, congedo2012orthogonal}, although multiple matrices have to be learnt and the cross-covariances are not necessarily symmetric positive definite, which makes the algorithm slower and less principled.\n\n\\paragraph{Shared response model}\nShICA extracts shared components from multiple datasets, which is also the goal\nof the shared response model (SRM)~\\cite{chen2015reduced}. The robust\nSRM~\\cite{turek2018capturing} also allows capturing subject-specific noise.\nHowever, these models impose orthogonality constraints on the mixing matrices,\nwhile ShICA does not.\nDeep variants of SRM exist, such\nas~\\cite{chen2016convolutional}, but while they relax the orthogonality\nconstraint, they are not easy to train or interpret and have many\nhyper-parameters to tune. ShICA leverages ICA theory to provide a much more powerful model of shared responses.\n\n\\paragraph{Limitations}\nThe main limitation of this work is that the model cannot reduce the dimension inside each view: there are as many estimated sources as sensors. This might be problematic when the number of sensors is very high. In line with other methods, view-specific dimension reduction has to be done by some external method, typically view-specific PCA. Using specialized methods for the estimation of covariances should also be of interest for ShICA-J, which currently relies only on sample covariances.
Finally, ShICA-ML uses a simple super-Gaussian distribution; modelling the non-Gaussianities in more detail should further improve its performance.\n\n\\section{Experiments}\nExperiments used Nilearn~\\cite{abraham2014machine} and MNE~\\cite{gramfort2013meg} for fMRI and MEG data\nprocessing respectively, as well as the scientific Python ecosystem:\nMatplotlib~\\cite{hunter2007matplotlib}, Scikit-learn~\\cite{pedregosa2011scikit},\nNumpy~\\cite{harris2020array} and Scipy~\\cite{2020SciPy-NMeth}. We use the Picard algorithm for non-Gaussian ICA~\\cite{ablin2018faster}, and mvlearn for multi-view ICA~\\cite{perry2020mvlearn}. The above libraries use open-source licenses. fMRI experiments used the following datasets: sherlock~\\cite{chen2017shared}, forrest~\\cite{hanke2014high}, raiders~\\cite{ibc} and gallant~\\cite{ibc}. The data we use do not contain offensive content or identifiable information and consent was obtained before data collection. Computations were run on a large server using up to 100 GB of RAM and 20 CPUs in parallel.\n\\paragraph{Separation performance}\n\\label{sec:rotation}\nIn the following synthetic experiments, data are generated according to model~\\eqref{eq:model} with $p=4$ components and $m=5$ views, and mixing matrices are generated by sampling coefficients from a standardized Gaussian.\nGaussian components are generated from a standardized Gaussian and their noise\nhas standard deviation $\\Sigma_i^{\\frac12}$ (obtained by sampling from a uniform\ndensity between $0$ and $1$), while non-Gaussian components are generated from a\nLaplace distribution and their noise standard deviations are equal.
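The generation procedure just described can be sketched as follows; the value 0.5 for the common non-Gaussian noise standard deviation is an assumption (the text only states that these standard deviations are equal):

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, n = 5, 4, 5000
gaussian = np.array([True, True, False, False])   # the half/half case

# unit-variance shared components: Gaussian or Laplace rows
s = np.where(gaussian[:, None],
             rng.standard_normal((p, n)),
             rng.laplace(scale=1 / np.sqrt(2), size=(p, n)))

X = []
for i in range(m):
    A_i = rng.standard_normal((p, p))             # random mixing matrix
    # noise diversity for Gaussian components; equal stds (assumed 0.5)
    # for the non-Gaussian components
    std = np.where(gaussian, rng.uniform(0, 1, p), 0.5)
    X.append(A_i @ (s + std[:, None] * rng.standard_normal((p, n))))
print(len(X), X[0].shape)
```

The Laplace scale $1/\sqrt{2}$ makes the non-Gaussian components unit-variance, matching the standardized Gaussian components.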
We study 3 cases where either all components are Gaussian, all components are non-Gaussian, or half of the components are Gaussian and half are non-Gaussian.\nWe vary the\nnumber of samples $n$ between $10^2$ and $10^5$ and display in\nFig~\\ref{exp:rotation} the mean Amari distance across subjects between the true unmixing\nmatrices and the estimates of the algorithms as a function of $n$.\nThe experiment is repeated $100$ times using different seeds. We report the median result, and error bars represent the first and last deciles.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/figures\/identifiability2.pdf}\n \\caption{\\textbf{Separation performance}: Algorithms are fit on data following model~\\eqref{eq:model}. \\textbf{(a)} Gaussian components with noise diversity \\textbf{(b)} Non-Gaussian components without noise diversity \\textbf{(c)} Half of the components are Gaussian with noise diversity, the other half is non-Gaussian without noise diversity. }\n \\label{exp:rotation}\n\\end{figure}\n\nWhen all components are Gaussian (Fig.~\\ref{exp:rotation}~(a)), CanICA cannot\nseparate the components at all. In contrast, ShICA-J, ShICA-ML, Multiset CCA and\nMVICA are able to separate them, but Multiset CCA needs many more samples than\nShICA-J or ShICA-ML to reach a low Amari distance, which shows that correcting for the rotation due to sampling noise improves the results.
Looking at error bars, we also see that the performance of Multiset CCA varies quite a lot with the random seeds: this shows that depending on the sampling noise, the rotation can be very different from identity.\nMVICA needs even more samples than Multiset CCA to reach a low Amari distance but\nstill outperforms CanICA.\n\nWhen none of the components are Gaussian (Fig.~\\ref{exp:rotation}~(b)), only\nCanICA, ShICA-ML and MVICA are able to separate the components, as other methods do not make use of non-Gaussianity.\nFinally, in the hybrid case (Fig.~\\ref{exp:rotation}~(c)), ShICA-ML is able to\nseparate the components as it can make use of both non-Gaussianity and noise\ndiversity. MVICA is a lot less reliable than ShICA-ML: it is uniformly worse, and\nthe error bars are very large, showing that for some seeds it gives poor results.\nCanICA, ShICA-J and Multiset CCA cannot separate the components at all.\nAdditional experiments illustrating the separation power of the algorithms are available in Appendix~\\ref{app:separation}.\n\n\nAs we can see, MVICA can separate Gaussian components to some extent and therefore does not completely fail when Gaussian and non-Gaussian components are present.
\n\n\\paragraph{Computation time}\n\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/figures\/synthetic_gaussian_timings.pdf}\n \\caption{}\n \\label{exp:syn_timings}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/figures\/inter_subject_stability.pdf}\n \\caption{}\n \\label{fig:eeg_intragroup_variability}\n\\end{subfigure}\n \\caption{\\textbf{Left: Computation time.} Algorithms are fit on data generated from model~\\eqref{eq:model} with a super-Gaussian density. For different values of the number of samples, we plot the Amari distance and the fitting time. Thick lines link median values across seeds. \\textbf{Right: Robustness w.r.t intra-subject variability in MEG.}\n (\\textbf{top}) $\\ell_2$ distance between shared components corresponding to the same stimuli in different trials. (\\textbf{bottom}) Fitting time.}\n\\end{figure}\nWe generate components using a slightly super-Gaussian density: $s_j = d(x)$ with $d(x) = x |x|^{0.2}$ and $x \\sim \\mathcal{N}(0, 1)$. We vary the number of samples $n$ between $10^2$ and $10^4$. We compute the mean Amari distance across subjects and record the computation time. The experiment is repeated $40$ times. We plot the Amari distance as a function of the computation time in Fig~\\ref{exp:syn_timings}. Each point corresponds to the Amari distance\/computation time for a given number of samples and a given seed. We then consider, for a given number of samples, the median Amari distance and computation time across seeds and plot them in the form of a thick line.
From Fig~\\ref{exp:syn_timings}, we see that ShICA-J is the method of choice when speed is a concern, while ShICA-ML yields the best performance in terms of Amari distance at the cost of an increased computation time. The thick lines for ShICA-J and Multiset CCA are quasi-flat, indicating that the number of samples does not have a strong impact on the fitting time, as these methods only work with covariances. On the other hand, the computation times of CanICA and MVICA are more sensitive to the number of samples.\n\n\\paragraph{Robustness w.r.t intra-subject variability in MEG}\n\nIn the following experiments, we consider the Cam-CAN\ndataset~\\cite{taylor2017cambridge}. We use the magnetometer data from the MEG of\n$m=100$ subjects chosen randomly among 496.\nIn Appendix~\\ref{app:preprocessing},\nwe give more information about the Cam-CAN dataset.\nEach subject is repeatedly presented three audio-visual stimuli. \nFor each stimulus, we divide the trials into two sets, and within each set,\nthe MEG signal is averaged across trials to isolate the evoked response. This\nprocedure yields 6 chunks of individual data (2 per stimulus).\nWe study the similarity between shared components corresponding to repetitions of the same stimulus.
This gives a measure of robustness of each ICA algorithm with respect\nto intra-subject variability.\nData are first reduced using a subject-specific PCA with $p=10$ components.\nThe initial dimensionality of the data before PCA is $102$, as we only use the 102 magnetometers.\nAlgorithms are run 10 times with different seeds on the 6 chunks of data,\nand shared components are extracted.\nWhen two chunks of data correspond to repetitions of the same stimulus, they should yield similar\ncomponents.\nFor each component and for each stimulus, we therefore measure the $\\ell_2$\ndistance between the two repetitions of the stimulus.\n This yields $300$ distances per algorithm, which are\nplotted in Fig~\\ref{fig:eeg_intragroup_variability}.\n\nThe components recovered by ShICA-ML have much lower variability than those of the other approaches. The performance of ShICA-J is competitive with MVICA while being much faster to fit. Multiset CCA yields satisfying results compared with ShICA-J; however, we see that the number of components that do not match at all across trials is greater for Multiset CCA.\n \nAdditional experiments on MEG data are available in Appendix~\\ref{app:phantom}.\n\n\n\n\\paragraph{Reconstructing the BOLD signal of missing subjects}\n\\begin{wrapfigure}{l}{.42\\textwidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{.\/figures\/reconstruction.pdf}\n \\includegraphics[width=0.99\\linewidth]{.\/figures\/reconstruction_timings.pdf}\n \n \\caption{\\textbf{Reconstructing the BOLD signal of\n missing subjects}. (\\textbf{top}) Mean $R^2$ score between reconstructed data and true\n data. (\\textbf{bottom}) Fitting time.\n \n }\n \\label{fig:reconstruction}\n\\end{wrapfigure}\nWe reproduce the experimental pipeline of~\\cite{richard2020modeling} to benchmark GroupICA methods using their ability to reconstruct fMRI data of a left-out subject.\nThe preprocessing involves a dimension reduction step performed using the shared response model~\\cite{chen2015reduced}.
The detailed preprocessing pipeline is described in Appendix~\\ref{app:preprocessing}. We call an \\emph{unmixing operator} the product of the dimension\nreduction operator and an unmixing matrix, and a \\emph{mixing operator} its pseudoinverse. There is one unmixing operator and one mixing operator per view.\nThe unmixing operators are learned using all subjects\nand $80\\%$ of the runs. Then they are applied to the remaining $20\\%$ of the runs using $80\\%$\nof the subjects, yielding unmixed data from which shared components are\nextracted.\nThe unmixed data are combined by averaging (for SRM and other baselines) or using the MMSE estimate for ShICA-J and ShICA-ML.\nWe\nthen apply the mixing operators of the remaining $20\\%$ of subjects to the shared components to reconstruct their data.\nReconstruction accuracy is measured via the coefficient of determination, \\aka $R^2$ score, which\nyields for each voxel the relative discrepancy between the true time course and the predicted one.\nFor each compared algorithm, the experiment is run 25 times with different seeds to obtain error bars. We report the mean $R^2$ score across voxels in a region of interest (see Appendix~\\ref{app:preprocessing} for details)\n and display the results in Fig~\\ref{fig:reconstruction}. The error bars represent a $95\\%$ confidence interval.\nThe chance level is given by the $R^2$ score of an algorithm that samples the\ncoefficients of its unmixing matrices and dimension reduction operators from a\nstandardized Gaussian. The median chance level is below $10^{-3}$ on all\ndatasets.\nShICA-ML yields the best $R^2$ score in all datasets and for any number of\ncomponents. ShICA-J yields competitive results with respect to MVICA\nwhile being much faster to fit.
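For reference, the per-voxel $R^2$ score used above can be computed as in this sketch (in practice `sklearn.metrics.r2_score` provides the same quantity):

```python
import numpy as np

def r2_per_voxel(y_true, y_pred):
    """Coefficient of determination computed row-wise, one value per
    voxel: 1 - SS_res / SS_tot."""
    ss_res = ((y_true - y_pred) ** 2).sum(axis=1)
    ss_tot = ((y_true - y_true.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(0)
y = rng.standard_normal((5, 100))         # 5 voxels, 100 time points
print(r2_per_voxel(y, y))                 # perfect prediction: all ones
print(r2_per_voxel(y, np.zeros_like(y)))  # constant prediction: near 0
```

A score of $1$ means perfect reconstruction, while predicting a constant can never exceed $0$, which is why chance-level scores sit near or below zero.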
A popular benchmark, especially in the SRM\ncommunity, is the time-segment matching experiment~\\cite{chen2015reduced}: we\ninclude such experiments in Appendix~\\ref{app:timesegment}.\nIn Appendix~\\ref{app:table}, we give the performance of ShICA-ML, ShICA-J and MVICA\nin the form of a table.\n\n\\section{Conclusion, Future work and Societal impact}\n\nWe introduced the ShICA model as a principled unifying solution to the problems of shared response modelling and GroupICA. ShICA is able to use both the diversity of Gaussian variances and non-Gaussianity for optimal estimation. We presented two algorithms to fit the model: ShICA-J, a fast algorithm that uses noise diversity, and ShICA-ML, a maximum likelihood approach that can use non-Gaussianity on top of noise diversity. ShICA algorithms come with principled procedures for shared component estimation, as well as adaptation and estimation of noise levels in each view (subject) and component. On simulated data, ShICA clearly outperforms all competing methods in terms of the trade-off between statistical accuracy and computation time. On brain imaging data, ShICA gives more stable decompositions for comparable computation times, and more accurately predicts the data of one subject from the data of other subjects, making it a good candidate to perform transfer learning. \nOur code is available at \\url{https:\/\/github.com\/hugorichard\/ShICA}.\\footnote{Regarding the ethical aspects of this work, we think this work presents exactly the same issues as any brain imaging analysis method related to ICA.}\n\\clearpage\n\\paragraph{Acknowledgement and funding disclosure}\nThis work has received funding\nfrom the European Union's Horizon 2020 Framework Programme for Research and Innovation under\nthe Specific Grant Agreement No. 945539 (Human Brain Project SGA3), the KARAIB AI chair\n(ANR-20-CHIA-0025-01), the Grant SLAB ERC-StG-676943 and the BrAIN AI chair (ANR-20-CHIA-0016).
PA acknowledges funding by the French government under management of Agence Nationale de la Recherche as part of the \"Investissements d'avenir\" program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute). AH received funding from a CIFAR Fellowship. \n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn quantum information science,\nmany figures of merit such as fidelity and von Neumann entropy \\cite{Nielsen2010} are utilized to characterize a quantum state.\nQuantum state tomography (QST) \\cite{James2001},\nby which a quantum density operator of an unknown quantum state is identified,\nis the most comprehensive method for deriving them.\nRecently, QST for photonic high-dimensional quantum states (qudits) \\cite{Thew2002} has been intensively investigated for\nentanglements based on orbital angular momentum \\cite{Agnew2011},\nfrequency bins \\cite{Bernhard2013},\nand time-energy uncertainty \\cite{Richart2014}.\nObservation of high-dimensional multipartite entanglement has also been reported \\cite{Malik2016}.\nFor time-bin qudits, which are promising candidates for transmission over an optical fiber,\nQST based on the conversion between time-bin states and polarization states has been performed \\cite{Nowierski2015}.\nQST generally requires $(d^2 - 1)$ different measurements for a state in $d$ dimensional Hilbert spaces\nbecause a general mixed state is characterized by $(d^2 - 1)$ real numbers.\nThus, it is important to reduce the number of measurement settings for high-dimensional QST.\nFor time-bin qubits,\nQST has been performed with a single delay Mach-Zehnder interferometer (MZI) \\cite{Takesue2009},\nwhich simultaneously constructed measurements projecting on two time-bin basis states and a superposition state of the time-bin basis.\nIn this paper, we propose an efficient scheme to implement QST for time-bin qudits utilizing cascaded delay MZIs \\cite{Ikuta2016a,Richart2012}.\nThanks 
to the simultaneous construction of the different measurements,\nthe number of measurement settings scales linearly with dimension $d$.\n\n\n\n\n\\section{\\label{sec:QSTDetail}Measurements with cascaded MZIs}\n\\subsection{\\label{sec:BasicQST}Basic concept}\n\nFirst, we give a general description of QST.\nA $d$-dimensional density operator $\\op{\\rho}$ can be expressed as $\\op{\\rho} = \\sum_{i=0}^{d^2-1} g_i \\op{G}_i$,\nwhere $\\op{G}_i$ is the generalized Gell-Mann matrix defined in \\cite{Thew2002}\nand $g_i$ is a real number.\n$g_0$ is usually fixed to $1\/d$ so that $\\mathrm{Tr}\\left( \\op{\\rho} \\right) = 1$,\nbecause $\\op{G}_i$ is traceless for $i \\geq 1$ and $\\op{G}_0$ is the identity operator $\\op{I}_d$.\nWhen we repeat a measurement represented by a projector $\\op{P}_j$ for $N$ photons,\nthe expected value of the photon count $n_j^E$ is given by\n\\begin{equation}\nn_j^E = N \\mathrm{Tr} \\left( \\op{P}_j \\op{\\rho} \\right) = N \\sum_{i = 0}^{d^2 - 1} A_{ij} g_i\t,\t\\label{eq:ConceptOfQST}\n\\end{equation}\nwhere $A_{ij} = \\mathrm{Tr} \\left( \\op{P}_j \\op{G}_i \\right)$.\nWe can estimate $N$ and $g_i$ by multiplying \\eref{eq:ConceptOfQST} from the left by the inverse of the matrix $A_{ij}$.\nThus, the remaining problem in completing QST is how to prepare a set of measurements\ncorresponding to $\\op{P}_j$ that constructs $A_{ij}$ with rank $d^2$.\n\nTo prepare such a set of measurements for time-bin qudits,\nwe use cascaded MZIs.\n\\Fref{fig:CMZI} shows the concept of the measurements with the cascaded MZIs for a four-dimensional time-bin state.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure01_ConceptOfCMZI.eps}\n\\caption{Concept of QST utilizing cascaded MZIs.}\n\\label{fig:CMZI}\n\\end{figure}\nThe 2-bit delay MZI has time delay $2T$ and phase difference $\\theta_2$,\nwhere $T$ denotes the temporal interval of time slots constituting the time-bin basis\nand $\\theta_2$ is the phase difference between 
the short and the long arms of the 2-bit delay MZI.\nThe 1-bit delay MZI has time delay $T$ and phase difference $\\theta_1$.\nThe output ports of the 2-bit delay MZI, $p_{2x}$ and $p_{2y}$, are connected to the input port of the 1-bit delay MZI and photon detector D2, respectively.\nThe output port of the 1-bit delay MZI, $p_{1x}$, is connected to photon detector D1,\nand the other output port, $p_{1y}$, is terminated.\nWhen the time-bin qudit is launched into the cascaded MZIs,\nD1 can detect a photon in a superposition of four different input states.\nOn the other hand,\nD2 cannot,\nbut it can detect a photon projected on the time-bin basis, which D1 cannot.\nTherefore, the information obtained from D1 and D2 are intrinsically different.\nWe utilize the number of photons detected by D1 and D2 at different detection times as $n_j^E$ in \\eref{eq:ConceptOfQST}.\n\nIn what follows, we describe the measurements by the cascaded MZIs in more detail.\nThe basis for the four-dimensional time-bin state is given by state $\\ket{k} (k \\in [0, 3])$\nin which a photon exists in the $k$th time slot.\nWhen pure state $\\ket{k}$ is launched into the 2-bit delay MZI,\nthe output state at port $p_{2x}$ is $\\op{M}_{2x} \\ket{k}$,\nwhere generalized measurement operator $\\op{M}_{2x}$ is given by\n\\begin{equation}\n\\op{M}_{2x} = \\frac{1}{2} \\sum_{k=0}^3 \\left( \\ket{k} + e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}\t\t.\n\\end{equation}\nSimilarly,\nwe can obtain the operators representing the measurements of each MZI at ports $p_{2y}, p_{1x}$, and $p_{1y}$ as follows.\n\\begin{eqnarray}\n\\op{M}_{2y} &=& \\frac{1}{2} \\sum_{k=0}^3 \\left( - \\ket{k} + e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}\t\t,\t\\\\\n\\op{M}_{1x} &=& \\frac{1}{2} \\sum_{k=0}^5 \\left( \\ket{k} + e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}\t\t,\t\\\\\n\\op{M}_{1y} &=& \\frac{1}{2} \\sum_{k=0}^5 \\left( -\\ket{k} + e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}\t\t.\n\\end{eqnarray}\n\nPhoton detectors 
D1 and D2 detect a photon at different detection times, $t_l$, for $l \\in [0, 6]$,\nwhich correspond to the projection measurements $\\op{M}_D = \\ket{l} \\! \\bra{l}$.\nTherefore,\nthe expected value $n^E_{D1 l \\theta_1 \\theta_2}$ of the photons detected by D1 at time $t_l$ is given by\n\\begin{eqnarray}\nn^E_{D1 l \\theta_1 \\theta_2} &=& N \\mathrm{Tr} \\left(\n\t\\op{M}_D\n\t\t\\op{M}_{1x}\n\t\t\t\\op{M}_{2x}\n\t\t\t\t\\op{\\rho}\n\t\t\t\\op{M}_{2x}^{\\dag}\n\t\t\\op{M}_{1x}^{\\dag}\n\t\\op{M}_D^{\\dag}\n\\right)\t\t\\\\\n\t&=& N \\mathrm{Tr} \\left( \\op{E}_{l \\theta_1 \\theta_2}^{D1} \\op{\\rho} \\right)\t\t,\t\\label{eq:E(Count)ltt}\n\\end{eqnarray}\nwhere we define the element of the positive operator valued measure\n$\\op{E}_{l \\theta_1 \\theta_2}^{D1} = \\op{M}_{2x}^{\\dag} \\op{M}_{1x}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{1x} \\op{M}_{2x}$.\nThe element of the positive operator valued measure for D2 is similarly defined as \n$\\op{E}_{l \\theta_1 \\theta_2}^{D2} = \\op{M}_{2y}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{2y}$.\nTo see what measurement is performed by $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ for $DX \\in \\{D1, D2\\}$,\nit is convenient to estimate the simplified forms of $\\op{M}_D \\op{M}_{1x} \\op{M}_{2x}$ and $\\op{M}_D \\op{M}_{2y}$.\nFortunately,\n$\\op{M}_D$ is the projection onto the $l$th time slot for output states;\nthus, these products take the simplified forms $w_{DX l} \\ket{l} \\bra{\\psi^{DX}_{l \\theta_1 \\theta_2}}$,\nwhere $w_{DX l}$ is a complex weight\nand $\\ket{\\psi^{DX}_{l \\theta_1 \\theta_2}}$ is a normalized state in the four-dimensional Hilbert space.\nAll the simplified forms of the measurement operators are summarized in \\tref{tab:OpList}.\nTherefore,\n$\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ returns the measurement result of the projector $\\ket{\\psi^{DX}_{l \\theta_1 \\theta_2}} \\bra{\\psi^{DX}_{l \\theta_1 \\theta_2}}$,\nup to the difference in weight $|w_{DX l}|^2$.\nThe simplified forms are easier to 
understand,\nbut the multiplication forms like $\\op{M}_D \\op{M}_{1x} \\op{M}_{2x}$ are more convenient for expanding the dimension\nor compensating for the imperfections due to measurement equipment as described later.\n\n\n\\begin{table}\n\t\\caption{\\label{tab:OpList}Measurement operators at different detection times and detector.}\n\t\\begin{indented}\n\t\\lineup\n\t\\item[]\n\t\t\\begin{tabular}{@{}ccr@{}r@{}r@{}r@{}r@{}r}\n\t\t\t\\br\n\t\t\tDetector & Detection time & \\multicolumn{6}{c}{Measurement operator}\t\\\\\n\t\t\t\\mr\n\t\t\tD1 & $t_0$ & $\\frac{1}{4} \\ket{0}$ & & $\\bra{0}$ &&& \\\\\n\t\t\t & $t_1$ & $\\frac{1}{4} \\ket{1}$ & $($ & $\\bra{1}$ & $+e^{i \\theta_1}\\bra{0}$ && $)$\t\\\\\n\t\t\t & $t_2$ & $\\frac{1}{4} \\ket{2}$ & $($ & $\\bra{2}$ & $+e^{i \\theta_1}\\bra{1}$ & $+e^{i \\theta_2}\\bra{0}$ & $)$\t\\\\\n\t\t\t & $t_3$ & $\\frac{1}{4} \\ket{3}$ & $($ & $\\bra{3}$ & $+e^{i \\theta_1}\\bra{2}$ & $+e^{i \\theta_2}\\bra{1}$ & $+e^{i (\\theta_1 + \\theta_2)}\\bra{0})$\t\\\\\n\t\t\t & $t_4$ & $\\frac{1}{4} \\ket{4}$ & $($ && $e^{i \\theta_1}\\bra{3}$ & $+e^{i \\theta_2}\\bra{2}$ & $+e^{i (\\theta_1 + \\theta_2)}\\bra{1})$\t\\\\\n\t\t\t & $t_5$ & $\\frac{1}{4} \\ket{5}$ & $($ &&& $e^{i \\theta_2}\\bra{3}$ & $+e^{i (\\theta_1 + \\theta_2)}\\bra{2})$\t\\\\\n\t\t\t & $t_6$ & $\\frac{1}{4} \\ket{6}$ & $($ &&&& $e^{i (\\theta_1 + \\theta_2)}\\bra{3})$\t\\\\\n\t\t\t\\cline{2-8}\n\t\t\tD2 & $t_0$ & $-\\frac{1}{2} \\ket{0}$ && $\\bra{0}$ &&&\t\t\\\\\n\t\t\t & $t_1$ & $-\\frac{1}{2} \\ket{1}$ && $\\bra{1}$ &&&\t\t\\\\\n\t\t\t & $t_2$ & $-\\frac{1}{2} \\ket{2}$ & $($ & $\\bra{2}$ && $-e^{i \\theta_2}\\bra{0}$ & $)$\t\\\\\n\t\t\t & $t_3$ & $-\\frac{1}{2} \\ket{3}$ & $($ & $\\bra{3}$ && $-e^{i \\theta_2}\\bra{1}$ & $)$\t\\\\\n\t\t\t & $t_4$ & $-\\frac{1}{2} \\ket{4}$ & $($ &&& $-e^{i \\theta_2}\\bra{2}$ & $)$\t\\\\\n\t\t\t & $t_5$ & $-\\frac{1}{2} \\ket{5}$ & $($ &&& $-e^{i \\theta_2}\\bra{3}$ & 
$)$\t\\\\\n\t\t\t\\br\n\t\t\\end{tabular}\n\t\\end{indented}\n\\end{table}\n\n\n\nAs in the QST for qubits,\nwe need to rotate $\\theta_1$ and $\\theta_2$ to complete the QST for qudits.\nWe use the same combinations of phase differences $\\theta_1$ and $\\theta_2$ utilized\nfor the time-energy entangled qudits \\cite{Richart2014}.\nThe total Hilbert space of the time-energy entangled qudits is spanned by two different logical qubits.\nOne is the qubit defined by the short and the long arms of the 1-bit delay MZI,\nand the other is the qubit defined by the short and the long arms of the 2-bit delay MZI.\nTherefore,\nthe high-dimensional QST is performed by the combination of the QST for logical qubits.\nSetting the phase differences between the arms at $0$ and $\\pi\/2$ corresponds\nto the measurements by the Pauli matrices $\\sigma_x$ and $\\sigma_y$ \\cite{Nielsen2010} for logical qubits, respectively.\nTherefore, combinations of phase differences $(\\theta_1, \\theta_2) = (0,0), (0, \\pi\/2), (\\pi\/2, 0)$, and $(\\pi\/2, \\pi\/2)$\nare sufficient to obtain the information about the phase of the qudits.\n\nOn the other hand,\nQST for qubits usually requires a measurement corresponding to the Pauli matrix $\\sigma_z$,\nwhich implies that it requires measurements without interference.\nThe measurement corresponding to $\\sigma_z$ for both the logical qubits are performed by D2 at $t_0, t_1, t_4$ and $t_5$,\nbecause the states $\\ket{\\psi^{D2}_{l \\theta_1 \\theta_2}}$ at these times are single time-bin basis states that correspond to eigenstates of $\\sigma_z$.\nHowever,\nwe need to prepare not only a $\\sigma_z \\otimes \\sigma_z$ measurement for logical qubits that doesn't completely interfere\nbut also measurements that partially interfere like a $\\sigma_z \\otimes \\sigma_x$ measurement.\nFrom this point,\nthe measurements by D1 at different detection times play an important role in the proposed scheme,\nbecause the interference pattern of the measurement 
$\\op{E}_{l \\theta_1 \\theta_2}^{D1}$ depends on detection time $t_l$ as shown in \\fref{fig:CMZI} and \\tref{tab:OpList}.\nIn other words,\nthe combination of the time-bin basis constituting $\\ket{\\psi^{D1}_{l \\theta_1 \\theta_2}}$ varies depending on the detection time.\nThe measurement at $t_0$ by D1 corresponds to the projection onto the single time-bin basis $\\ket{0}$,\nthe measurement at $t_1$ by D1 corresponds to the projection onto a superposition of $\\ket{0}$ and $\\ket{1}$, and so on.\n\n\nConsidering these characteristics of $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ described above,\nit is expected that the QST for time-bin qudits can be performed only by switching $\\theta_1$ and $\\theta_2$,\nwhich is confirmed by comparing \\eref{eq:ConceptOfQST} and \\eref{eq:E(Count)ltt}\nand by estimating the rank of $A_{ij}$.\nThe proposed scheme can be extended to general $d$-dimensional QST by adding extra MZIs.\nThe number of the MZIs for $d$-dimensional QST is $K$ given by $\\lceil \\log_2 d \\rceil$,\nwhere $\\lceil x \\rceil$ is the ceiling function for $x \\in \\mathbb{R}$.\nThe $K$ delay MZIs have different delay times $2^{i-1} T$ and phase differences $\\theta_i$ for $1 \\leq i \\leq K$.\nEach $\\theta_i$ takes $0$ and $\\pi \/ 2$ independently;\nthus, the number of measurement settings scales linearly with $d$.\n\nIt should be noted that we can implement QST for time-bin qudits without D2,\nwhich is confirmed from the rank of $A_{ij}$.\nHowever,\nD2 not only detects the photon which would be lost without it\nbut also collects information different from that obtained by D1.\nFor example,\nD1 cannot implement the measurement corresponding to the projection onto $\\ket{1}$, which D2 can.\nThis implies that D2 observes the same state from a different angle on the high-dimensional Bloch sphere.\nTherefore, the addition of D2 effectively improves the accuracy of the QST in the same measurement time.\n\n\n\n\\subsection{\\label{sec:LossComp}Compensation for 
imperfections}\n\nThe measurements described in subsection \\ref{sec:BasicQST} are ideal ones without imperfection.\nIn practice,\nthere are no ideal 50 : 50 beam splitters and no photon detectors with $100 \\%$ detection efficiency.\nFurthermore, when we utilize delay MZIs made with planar light wave circuit technology (PLC),\nthe difference in the optical path length between the long and the short arms causes imperfection due to medium loss.\nHowever,\nthe following modifications of the measurement operators can compensate for such imperfections:\n\\begin{eqnarray}\n\\op{M}_{2x} &=& \\frac{\\sum_{k=0}^3 \\left( \\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{2x} } e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{2x} \\right)}} \t\t,\t\\label{eq:CompM2x}\t\\\\\n\\op{M}_{2y} &=& \\frac{\\sum_{k=0}^3 \\left( - \\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{2y} } e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{2y} \\right)}} \t\t,\t\\label{eq:CompM2y}\t\\\\\n\\op{M}_{1x} &=& \\frac{\\sum_{k=0}^5 \\left( \\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{1x} } e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{1x} \\right)}} \t\t,\t\\label{eq:CompM1x}\t\\\\\n\\op{M}_{1y} &=& \\frac{\\sum_{k=0}^5 \\left( -\\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{1y} } e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{1y} \\right)}} \t\t,\t\\label{eq:CompM1y}\t\\\\\n\\op{E}_{l \\theta_1 \\theta_2}^{D1} &=& \\mathit{\\Delta} \\eta_{1} \\op{M}_{2x}^{\\dag} \\op{M}_{1x}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{1x} \\op{M}_{2x}\t,\t\\label{eq:CompED1}\t\\\\\n\\op{E}_{l \\theta_1 \\theta_2}^{D2} &=& \\op{M}_{2y}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{2y}\t,\t\\label{eq:CompED2}\n\\end{eqnarray}\nwhere $\\mathit{\\Delta} \\eta_{2x}, \\mathit{\\Delta} \\eta_{2y}, \\mathit{\\Delta} \\eta_{1x}, \\mathit{\\Delta} \\eta_{1y},$ and 
$\\mathit{\\Delta} \\eta_{1}$ are relative transmittances.\nRelative transmittances are the ratios between the transmittances depending on the optical paths and detectors.\nWe utilize the relative values rather than absolute ones for experimental and theoretical convenience.\nThe use of the relative values decreases the expected value of the total photon number $N$ obtained by QST;\nthus, it is not an accurate modification in this sense.\nHowever,\nthe expected density operator $\\op{\\rho}$ will not change because $\\op{\\rho}$ is determined by the relative values of the photon counts.\nTherefore,\nthe use of the relative values is justified for the purpose of QST.\n\n\n\n\\subsection{\\label{sec:MLE}Maximum likelihood estimation}\n\nAs we mentioned in subsection \\ref{sec:BasicQST},\nQST for time-bin qudits can be performed by linear conversion of \\eref{eq:ConceptOfQST}.\nHowever,\nit is well known that\nthe density operator obtained by linear conversion often does not satisfy positivity,\nwhich implies that the estimated density operator is unphysical \\cite{James2001}.\nMaximum likelihood estimation (MLE) is often used to avoid this problem \\cite{Agnew2011,James2001,Richart2014,Takesue2009}.\nFirst,\nwe use another representation of $\\op{\\rho}$ to enforce positivity as follows:\n\\begin{eqnarray}\n\\op{\\rho} &=& \\frac{\\op{R}^\\dag \\op{R}}{ \\mathrm{Tr} \\left( \\op{R}^\\dag \\op{R} \\right)}\t\t,\t\\\\\nN &=& \\mathrm{Tr} \\left( \\op{R}^\\dag \\op{R} \\right)\t,\n\\end{eqnarray}\nwhere $\\op{R}$ is an operator having a triangular form \\cite{James2001}.\nMLE is performed by finding $\\op{R}$ that minimizes the likelihood function $L\\left(\\op{R}\\right)$ given by\n\\begin{equation}\nL\\left(\\op{R}\\right) = \\sum_j \\left[ \\frac{\\left(n_j^M - n_j^E \\right)^2}{n_j^E} + \\ln n_j^E\t\\right]\t,\t\\label{eq:MLE}\n\\end{equation}\nwhere $n_j^M$ is the measured photon count and $n_j^E$ is the expected photon count in \\eref{eq:ConceptOfQST}.\nThe summation 
over $j$ is calculated for $j$ indicating different measurements.\nNote that we add $\\ln n_j^E$ to the likelihood function given in \\cite{James2001}.\nThe likelihood function is derived from the probability of obtaining a set of photon counts $n_j^M$,\nwhich is given by\n\\begin{equation}\nP = \\frac{1}{N_{norm}} \\prod_j \\exp \\left[ - \\frac{\\left(n_j^M - n_j^E \\right)^2}{2 \\sigma_j^2} \\right]\t,\n\\end{equation}\nwhere $N_{norm}$ is the normalization constant and $\\sigma_j \\approx \\sqrt{n_j^E}$ is the standard deviation for the $j$th measurement.\nHowever,\nthe normalization constant $N_{norm}$ can be approximated by $\\prod_j \\sqrt{2\\pi}\\sigma_j$ with Gaussian approximation,\nwhich leads to the additional term $\\ln n_j^E$.\nTo perform MLE according to \\eref{eq:MLE},\nwe need to precisely map $n^E_{D1 l \\theta_1 \\theta_2}$ and $n^E_{D2 l \\theta_1 \\theta_2}$ to $n_j^E$\nbecause the intrinsically same measurements exist in the measurement settings.\nFor example,\nthe measurement at $t_0$ by D1 corresponding to the projection onto $\\ket{0}$ does not depend on $\\theta_1$ and $\\theta_2$.\nFor this purpose,\nwe introduce space $V_j$,\nwhich satisfies the following conditions:\n\\begin{eqnarray}\n^\\forall \\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j \\ , \\ ^\\forall \\left( DX', l', \\theta_1', \\theta_2' \\right) \\in V_{j'}\t\\nonumber\\\\\n\\frac{\\op{E}_{l \\theta_1 \\theta_2}^{DX}}{\\mathrm{Tr} \\left( \\op{E}_{l \\theta_1 \\theta_2}^{DX} \\right)}\n=\n\\frac{\\op{E}_{l' \\theta_1' \\theta_2'}^{DX'}}{\\mathrm{Tr} \\left( \\op{E}_{l' \\theta_1' \\theta_2'}^{DX'} \\right)}\n\\qquad\n\\mbox{for}\n\\qquad\nj=j'\t,\t\t\\label{eq:Vcondition1}\n\\\\\n\\frac{\\op{E}_{l \\theta_1 \\theta_2}^{DX}}{\\mathrm{Tr} \\left( \\op{E}_{l \\theta_1 \\theta_2}^{DX} \\right)}\n\\neq\n\\frac{\\op{E}_{l' \\theta_1' \\theta_2'}^{DX'}}{\\mathrm{Tr} \\left( \\op{E}_{l' \\theta_1' \\theta_2'}^{DX'} \\right)}\n\\qquad\n\\mbox{for}\n\\qquad\n j \\neq 
j'\t.\t\\label{eq:Vcondition2}\n\\end{eqnarray}\nSpace $V_j$ is numerically generated via a comparison according to \\eref{eq:Vcondition1} and \\eref{eq:Vcondition2}.\nBy utilizing $V_j$,\nwe can map $n^E_{D1 l \\theta_1 \\theta_2}$ and $n^E_{D2 l \\theta_1 \\theta_2}$ to $n_j^E$ as follows:\n\\begin{eqnarray}\nn_j^E &=& \\sum_{\\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j} n^E_{D1 l \\theta_1 \\theta_2}\t\t\\\\\n&=& N \\mathrm{Tr} \\left( \\op{E}_{j} \\op{\\rho} \\right)\t,\n\\end{eqnarray}\nwhere $\\op{E}_{j} = \\sum_{\\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j} \\op{E}_{l \\theta_1 \\theta_2}^{DX}$.\nSimilarly, we obtain $n_j^M$,\nand now we can perform the QST for time-bin qudits by MLE.\n\n\n\\subsection{\\label{SumOfProc}Summary}\n\n\nHere,\nwe summarize the proposed QST procedure.\n\n\nFirst,\nwe measure the relative transmittances\n$\\mathit{\\Delta} \\eta_{2x}, \\mathit{\\Delta} \\eta_{2y}, \\mathit{\\Delta} \\eta_{1x}, \\mathit{\\Delta} \\eta_{1y},$ and $\\mathit{\\Delta} \\eta_{1}$,\nwith which we estimate the measurement operators $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ according to\n\\eref{eq:CompM2x}--\\eref{eq:CompED2}.\nThen,\nwe generate space $V_j$ from $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ according to \\eref{eq:Vcondition1} and \\eref{eq:Vcondition2}\nand prepare $\\op{E}_{j} = \\sum_{\\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j} \\op{E}_{l \\theta_1 \\theta_2}^{DX}$.\n\n\n\nNext,\nwe perform photon count measurement\nby switching combinations of phase differences $(\\theta_1, \\theta_2) = (0,0), (0, \\pi\/2), (\\pi\/2, 0)$, and $(\\pi\/2, \\pi\/2)$\nand obtain $n^M_{DX l \\theta_1 \\theta_2}$.\nAfter the measurement,\n$n^M_{DX l \\theta_1 \\theta_2}$ is reduced into $n_j^M$ by using space $V_j$.\n\n\nFinally,\nwe find $\\op{R}$ minimizing the likelihood function $L\\left(\\op{R}\\right)$ with $n_j^M$ and $\\op{E}_{j}$\nand obtain the reconstructed density operator $\\op{\\rho}$.\nWhen we perform the QST for the 
multi-photon state,\nwe extend the procedure as in \\cite{James2001,Thew2002}\nby replacing $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ and $n^M_{DX l \\theta_1 \\theta_2}$ with their tensor products and coincidence counts,\nrespectively.\n\n\n\\section{Experimental setup}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure02_ExpSetup.eps}\n\\caption{Experimental setup.\nCW: Continuous wave laser.\nIM: Intensity modulator.\nEDFA: Erbium-doped fiber amplifier.\nPC: Polarization controller.\nFBG: Fiber Bragg grating filter.\nVATT: Optical variable attenuator.\nPPLN: Periodically poled lithium niobate waveguide.\nBPF: Optical band-pass filter.\nWDM: Wavelength demultiplexing filter.\nPol: Polarizer.\n2-bit delay MZI, 1-bit delay MZI (Delay Mach-Zehnder interferometers were fabricated using PLC technology.)\nSNSPD: Superconducting nanowire single-photon detector.\n}\n\\label{fig:ExpSetup}\n\\end{figure}\n\n\\Fref{fig:ExpSetup} shows the experimental setup.\nFirst,\nwe generate continuous-wave light with a wavelength of 1551.1 nm and a coherence time of $\\sim$10 $\\mu$s,\nwhich is modulated into four sequential pulses by an intensity modulator.\nThe repetition frequency, the temporal interval, and the pulse duration are 125 MHz, 1 ns, and 100 ps, respectively.\nThese pulses are amplified by an erbium-doped fiber amplifier (EDFA),\nand then the average power of the pulses is adjusted by an optical variable attenuator.\nThey are launched into a periodically poled lithium niobate (PPLN) waveguide,\nwhere 780-nm pump pulses are generated via second harmonic generation.\nThe 780-nm pump pulses are launched into another PPLN waveguide to generate a four-dimensional maximally entangled state through spontaneous parametric down-conversion.\nA fiber Bragg grating filter and two optical band-pass filters are located after the EDFA and the PPLN waveguides, respectively.\nThe fiber Bragg grating filter eliminates amplified spontaneous emission noise 
from the EDFA,\nand the first and the second band-pass filters eliminate the 1551.1- and the 780-nm pump pulses, respectively.\nThe generated entangled photons are separated by a wavelength demultiplexing filter into a signal and an idler photon whose wavelengths are 1555 and 1547 nm, respectively.\nEach separated photon is launched into the cascaded MZIs followed by two superconducting nanowire single-photon detectors (SNSPDs),\nwhere the QST described in \\sref{sec:QSTDetail} is performed.\nThe cascaded MZIs are composed of a 2-bit delay MZI and a 1-bit delay MZI fabricated by using PLC technology.\nThe phase differences of the 2- and 1-bit delay MZIs are controlled via the thermo-optic effect caused by electrical heaters attached to the waveguides.\nEach MZI shows a $>20$-dB extinction ratio thanks to the stability of the PLC \\cite{Takesue2005, Honjo2004}.\nPolarization controllers and polarizers are located in front of each MZI to operate the MZIs for one polarization.\nChannels 1 and 2 (3 and 4) of the SNSPDs are connected to the 1- and the 2-bit delay MZIs for the signal (idler) photon, respectively.\nThe photon detection events from the SNSPDs are recorded by a time-interval analyzer\nand analyzed by a conventional computer.\nThe detection efficiencies of the SNSPDs for channels 1, 2, 3, and 4 are $40, 56, 34$, and $43$ \\%, respectively,\nand the dark count rate for all channels is $<30$ cps.\n\n\\section{Results}\n\n\\subsection{Measurement of relative transmittance}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure03_Marge.eps}\n\\caption{Histograms of single counts for single photon generated by single pump pulse\nfor the detector's (a) channel 1, (b) channel 2, (c) channel 3, and (d) channel 4.}\n\\label{fig:DiffEta}\n\\end{figure}\n\nWe first measured the relative transmittances between the arms of the MZIs---$\\mathit{\\Delta} \\eta_{2x}, \\mathit{\\Delta} \\eta_{2y},$ and $\\mathit{\\Delta} \\eta_{1x}$---for the signal 
and the idler photon.\nTo measure these values,\nwe generated a single pulse using the intensity modulator instead of four-sequential ones,\nbecause the photons generated by the single pulse don't interfere at the MZIs.\n\\Fref{fig:DiffEta} shows the histograms of single photon counts for each detector channel.\nThe four peaks in \\fref{fig:DiffEta}(a) and (c) correspond to the single counts for finding a photon in detection times $t_0$, $t_1$, $t_2$, and $t_3$, respectively.\nSimilarly,\nthe two peaks in \\fref{fig:DiffEta}(b) and (d) correspond to the single counts for finding a photon in detection times $t_0$ and $t_2$, respectively.\nWe calculated the relative transmittances from these single counts.\nFor example,\nsingle count $S^1_l$ at detection time $t_l$ for channel 1 satisfies the following relation:\n\\begin{equation}\nS^1_0 : S^1_1 : S^1_2 : S^1_3 = 1 : \\mathit{\\Delta} \\eta_{1x}^s : \\mathit{\\Delta} \\eta_{2x}^s : \\mathit{\\Delta} \\eta_{1x}^s \\mathit{\\Delta} \\eta_{2x}^s\t,\n\\end{equation}\nwhere $\\mathit{\\Delta} \\eta_{2x}^s$ and $\\mathit{\\Delta} \\eta_{1x}^s$ are the relative transmittances for the signal photon.\nTherefore,\nthe relative transmittances were estimated as\n$\\mathit{\\Delta} \\eta_{2x}^s = \\left( S^1_2 + S^1_3 \\right) \/ \\left( S^1_0 + S^1_1 \\right)$\nand\n$\\mathit{\\Delta} \\eta_{1x}^s = \\left( S^1_1 + S^1_3 \\right) \/ \\left( S^1_0 + S^1_2 \\right)$.\nSimilarly,\nwe calculated the other relative transmittances,\nwhich are summarized in \\tref{tab:DiffEta}.\nWe didn't measure $\\mathit{\\Delta} \\eta_{1y}$\nbecause output port $p_{1y}$ was terminated\nand thus didn't affect the result of our experiment.\nThe values summarized in \\tref{tab:DiffEta} were utilized for the QST described in the next section.\n\n\\begin{table}\n\t\\caption{\\label{tab:DiffEta}Summary of the relative transmittance.}\n\t\\begin{indented}\n\t\\lineup\n\t\\item[]\n\t\t\\begin{tabular}{@{}crr}\n\t\t\t\\br\n\t\t\t & Signal & 
Idler\t\\\\\n\t\t\t\\mr\n\t\t\t$\\mathit{\\Delta} \\eta_{2x}$ & 1.009\\0 & 0.8495\t\\\\\n\t\t\t$\\mathit{\\Delta} \\eta_{2y}$ & 0.8300 & 0.8302\t\\\\\n\t\t\t$\\mathit{\\Delta} \\eta_{1x}$ & 1.063\\0 & 0.9669\t\\\\\n\t\t\t\\br\n\t\t\\end{tabular}\n\t\\end{indented}\n\\end{table}\n\n\\subsection{QST for the time-bin entangled qudits}\n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure04_rho_merge.eps}\n\\caption{(a) Real parts and (b) imaginary parts of measured density operator $\\op{\\rho}$.}\n\\label{fig:Rho}\n\\end{figure}\n\nWe then generated the four-dimensional maximally entangled state $\\ket{\\Psi_{MES}^4 (\\phi)}$ by utilizing the four-sequential pump pulses.\nThe state is given by\n\\begin{equation}\n\\ket{\\Psi_{MES}^4 (\\phi)} = \\frac{1}{2} \\sum_{k=0}^3 \\exp (i \\phi k ) \\ket{k}_s \\otimes \\ket{k}_i \t,\n\\end{equation}\nwhere $\\ket{k}_s$ and $\\ket{k}_i$ denote the time-bin basis for the signal and idler photon, respectively,\nand $\\phi$ denotes the relative phase between the product states $\\ket{k}_s \\otimes \\ket{k}_i$\ndue to the phases of the pump pulses for SPDC.\nThe pump pulses were generated from the CW laser;\nthus, the phase is proportional to $k$ and determined by the frequency and the temporal interval of the time slots.\nIt should be noted that we can control the phase of the entangled state by modulating that of the pump pulses.\nIn our setup,\nthe CW laser had a coherence time of $\\sim$10 $\\mu$sec,\nwhich implies that, in principle, we can extend the dimension of the entangled photons $d$ up to $10^3\\sim10^4$.\nThe measured single photon count rates for detector channels 1, 2, 3, and 4 were\n17.1, 72.4, 20.6, and 82.1 kcps, respectively.\nFrom these single photon count rates,\nthe relative transmittances between the detectors $\\mathit{\\Delta} \\eta_{1}$ for the signal and idler photon\nwere estimated to be 0.474 and 0.501, respectively.\nThe average photon number per qudit was 0.02,\nand the 
measurement time for one measurement setting was 10 sec.\nWe employed coincidence counts for arbitrary combinations of detection times between the signal and the idler photon with 16 measurement settings,\nwith which the QST for a single qudit described in \\sref{sec:QSTDetail} was extended to the QST for two qudits.\n\n\nWe performed the QST for the entangled qudits fifteen times.\n\\Fref{fig:Rho} shows one of the measured density operators $\\op{\\rho}$.\nAll measured coincidence counts and reconstructed operators in the fifteen trials are provided in the supplementary material.\nNote that we utilized $\\op{U}\\op{\\rho}\\op{U}^\\dag$ instead of $\\op{\\rho}$ so that the visualized operator would be close to $\\ket{\\Psi_{MES}^4 (0)}$,\nwhere the local unitary operator $\\op{U}$ for the signal photon is given by $\\sum_k \\exp (-i \\phi' k) \\ket{k}_s\\bra{k}_s$.\nBoth the real and the imaginary parts of the measured operator showed characteristics close to $\\ket{\\Psi_{MES}^4 (0)}$,\nand the elements of the operator that were 0 for $\\ket{\\Psi_{MES}^4 (0)}$ were suppressed.\n\n\n\\begin{table}\n\t\\caption{\\label{tab:FigOfMerit}Average quantities derived from measured $\\op{\\rho}$ for the fifteen experimental trials.\n\tThe critical values to violate the CGLMP inequality are also summarized.}\n\t\\begin{indented}\n\t\\lineup\n\t\\item[]\n\t\t\\begin{tabular}{@{}crr}\n\t\t\t\\br\n\t\t\t & Measured & Critical \\\\\n\t\t\t\\mr\n\t\t\tFidelity & $F(\\op{\\rho}, \\op{\\sigma}) = \\m $ 0.950 $\\pm$ 0.003 & $> 0.710$ \\\\\n\t\t\tTrace distance & $D(\\op{\\rho}, \\op{\\sigma}) = \\m $ 0.068 $\\pm$ 0.003 & $< 0.290$ \\\\\n\t\t\tLinear entropy & $H_{lin}(\\op{\\rho}) = \\m $ 0.093 $\\pm$ 0.006 & $< 0.490$ \\\\\n\t\t\tVon Neumann entropy & $H_{vn}(\\op{\\rho}) = \\m $ 0.343 $\\pm$ 0.016 & $< 2.002$ \\\\\n\t\t\t\\multirow{2}{*}{Conditional entropy} & $H_{c}(\\op{\\rho}|s) = - $ 1.654 $\\pm$ 0.016 & $< 0.002$ \\\\\n\t\t\t & $H_{c}(\\op{\\rho}|i) = - $ 1.653 $\\pm$ 0.016 
& $< 0.002$ \\\\\n\t\t\t\\br\n\t\t\\end{tabular}\n\t\\end{indented}\n\\end{table}\n\nTo evaluate the measured operators more quantitatively,\nwe derived five figures of merit from $\\op{\\rho}$:\nfidelity $F(\\op{\\rho}, \\op{\\sigma})$,\ntrace distance $D(\\op{\\rho}, \\op{\\sigma})$,\nlinear entropy $H_{lin}(\\op{\\rho})$,\nvon Neumann entropy $H_{vn}(\\op{\\rho})$,\nand conditional entropy $H_{c}(\\op{\\rho}|X)$ \\cite{Nielsen2010,James2001}.\nHere, we employed the following definitions:\n\\begin{eqnarray}\nF(\\op{\\rho}, \\op{\\sigma}) &=& \\left[ \\Tr \\sqrt{ \\sqrt{\\op{\\sigma}} \\op{\\rho} \\sqrt{\\op{\\sigma}}}\\right]^2\t,\t\\\\\nD(\\op{\\rho}, \\op{\\sigma}) &=& \\frac{1}{2} \\Tr \\sqrt{\\left(\\op{\\rho} - \\op{\\sigma}\\right)^2}\t,\t\\\\\nH_{lin}(\\op{\\rho}) &=& 1 - \\Tr \\left( \\op{\\rho}^2 \\right)\t,\t\\\\\nH_{vn}(\\op{\\rho}) &=& - \\Tr \\left( \\op{\\rho} \\log_2 \\op{\\rho}\\right)\t,\t\\\\\nH_{c}(\\op{\\rho}|X) &=& H_{vn}(\\op{\\rho}) - H_{vn}(\\op{\\rho}_X)\t,\n\\end{eqnarray}\nwhere $\\op{\\sigma}$ is given by $\\ket{\\Psi_{MES}^4 (\\phi)} \\bra{\\Psi_{MES}^4 (\\phi)}$ with the $\\phi$ that\nmaximizes $F(\\op{\\rho}, \\op{\\sigma})$ or minimizes $D(\\op{\\rho}, \\op{\\sigma})$,\n$X \\in \\{s, i \\}$ denotes the signal and idler photon, respectively,\nand $\\op{\\rho}_X$ is the reduced density operator for $X$.\nThe average values of these quantities are summarized in \\tref{tab:FigOfMerit}.\nThe errors in \\tref{tab:FigOfMerit} were estimated as standard deviations over the fifteen experimental trials.\nTherefore,\nthey include the statistical characteristics of the coincidence counts as well as all the effects due to the experimental imperfections.\nThe measured fidelity and trace distance showed that the reconstructed operators were close to the target state $\\ket{\\Psi_{MES}^4 (\\phi)}$.\nNote that this is the first time fidelity $>0.90$ has been reported for entangled qudits \\cite{Agnew2011,Bernhard2013,Nowierski2015,Richart2014}.\nThe 
measured linear entropy and von Neumann entropy were low,\nwhich implies that the reconstructed operators were close to the pure state and that small disturbances occurred in the proposed QST scheme.\nFurthermore,\nthe measured conditional entropies were negative,\nwhich confirmed that the signal and the idler photons were entangled \\cite{Horodecki1996a,Horodecki1996}.\n\nTo evaluate the quality of entangled qudits,\nmany previous experiments employed the Collins-Gisin-Linden-Massar-Popescu (CGLMP) inequality test,\nwhich is a generalized Bell inequality for entangled qudits \\cite{Collins2002,Dada2011a}.\nIf we assume symmetric noise,\nthe depolarized entangled state $\\op{\\rho}_{mix}$ is given by\n\\begin{equation}\n\\op{\\rho}_{mix} = p \\ket{\\Psi_{MES}^4 (0)} \\bra{\\Psi_{MES}^4 (0)} + (1 - p) \\frac{\\op{I}_{16}}{16}\t,\t\t\\label{eq:MixedMES}\n\\end{equation}\nwhere $p$ is a probability and $\\op{I}_{16}$ is the identity operator in the 16-dimensional Hilbert space.\nThe condition $p > 0.69055$ is a criterion to violate the CGLMP inequality.\nTherefore,\nthe quantities derived from $\\op{\\rho}_{mix}$ with $p = 0.69055$ can be considered as the critical values for the evaluation of the entangled qudits.\nThese critical values are also summarized in \\tref{tab:FigOfMerit},\nwhich shows that all of the measured values satisfied the conditions to violate the CGLMP inequality.\nThus,\nwe confirmed that the proposed QST scheme based on cascaded MZIs successfully reconstructed the quantum density operator of the time-bin entangled qudits\nwith only 16 measurement settings.\n\n\n\n\\section{Conclusion}\nWe proposed QST for time-bin qudits based on cascaded MZIs,\nwith which the number of measurement settings scales linearly with dimension $d$.\nWe generated a four-dimensional maximally entangled time-bin state\nand confirmed that the proposed scheme successfully reconstructed the density operator with only 16 measurement settings.\nAll the quantities derived from the 
reconstructed state were close to the ideal ones,\nand the fidelity of 0.950 marks the first time a fidelity $>0.90$ has been achieved for entangled qudits.\nWe hope that our result will lead to advanced quantum information processing utilizing high-dimensional quantum systems.\n\n\n\n\n\\ack\nWe thank T. Inagaki and F. Morikoshi for fruitful discussions.\n\n\n\n\n\\section*{References}\n\n\\bibliographystyle{iopart-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA natural description of the dynamics of chaotic systems is in \nterms of evolving probability densities~\\cite{Lasota}. \nOn this level the time evolution in maps is governed by the linear Frobenius--Perron\noperator and the dynamical problem is solved by the determination of the spectral decomposition of this\noperator. In recent years, several authors have constructed complete and explicit spectral\ndecompositions of the Frobenius--Perron operator of a variety of model\nsystems~\\cite{firstgenspec,scndgenspec}. The most useful decompositions contain in\ntheir spectrum the decay rates characterizing the approach to equilibrium of the system. \nFor one-dimensional piecewise-linear Markov maps such decompositions\nare constructed in function spaces spanned by polynomials. The dual spaces of these polynomials\nare generalized function spaces and so the decompositions are known as generalized\nspectral decompositions~\\cite{Deanbook}. \n\nChaotic systems often contain a control parameter that characterizes\nthe strength of the chaos. A simple model system with such a control\nparameter is the well-known tent map with varying height~\\cite{Schuster}. 
\nThe tent map with height $h$ on the unit interval is given by\n\\begin{equation} \\label{generaltent}\n{\\rm T}(x) = \\left\\{ \\begin{array}{lc} \n\\alpha x & 0 \\leq x < \\frac{1}{2} \\\\ \\noalign{\\vskip4pt} \n\\alpha(1-x) & \\frac{1}{2} \\leq x < 1,\n\\end{array} \\right.\n\\end{equation}\nwhere the parameter $\\alpha \\equiv 2 h$. In Figure 1 the map (\\ref{generaltent}) with \nheight $\\sqrt{2}\/2$ is shown. Note that the map acts on the unit interval $[0,1)$ but\nhas images on $[0,h]$. \n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig13.eps}}\n\\parbox{5in}{\\caption{\\small The tent map at $\\alpha = \\protect\\sqrt{2}$. For this value of\n$\\alpha$ iterates of the critical point define four intervals around which the dynamics is\norganized, as discussed in Section 2.}}\n\\end{center}\n\\end{figure}\nAs characterized by its Lyapunov exponent, $\\log\\alpha$,\nthe map (\\ref{generaltent}) becomes abruptly chaotic as the\nheight is raised past $1\/2$. We are interested in the chaotic regime where $1\/2 < h \\leq 1$, i.e.,\n$1 < \\alpha \\leq 2$. Similar to the well-known universality of the quadratic map, it has recently \nbeen reported~\\cite{Moon} that the tent map also governs the low-dimensional behavior of a\nwide class of nonlinear phenomena. Specifically, it was found that the dynamics of the\nGinzburg--Landau equation, in its description of the modulational instability of a wave train, is\nreducible to the tent map. Under this reduction, varying the height in\n(\\ref{generaltent}) corresponds to varying the wavelength of the initial modulational instability. \n\nOur interest is in the\nstatistical properties of the iterates of the tent map and in evolving probability\ndensities. 
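As a quick numerical illustration (ours, not part of the original argument; the helper name `tent` is an assumption), one can iterate the map at $\alpha=\sqrt{2}$ and confirm that, after a transient, the orbit is confined to $[\alpha(1-\alpha/2),\alpha/2]$, while the Lyapunov exponent is exactly $\log\alpha$ because $|{\rm T}'(x)|=\alpha$ almost everywhere:

```python
import math

def tent(x, alpha):
    # One step of the tent map (generaltent) with alpha = 2h.
    return alpha * x if x < 0.5 else alpha * (1.0 - x)

alpha = math.sqrt(2.0)
x = 0.1234                      # arbitrary initial point in [0, 1)
for _ in range(100):            # discard a transient
    x = tent(x, alpha)
orbit = []
for _ in range(1000):
    x = tent(x, alpha)
    orbit.append(x)

# After the transient the orbit lies between the second and first
# iterates of the critical point x_c = 1/2.
lo, hi = alpha * (1.0 - alpha / 2.0), alpha / 2.0
assert all(lo - 1e-9 <= y <= hi + 1e-9 for y in orbit)

# |T'(x)| = alpha almost everywhere, so the Lyapunov exponent is log(alpha).
lyapunov = math.log(alpha)
```

The containment interval here is exactly the support $[\alpha(1-\alpha/2),\alpha/2]$ of the invariant density discussed below.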
The Frobenius--Perron operator, $U$, corresponding to a map, ${\\rm S}(x)$, defined on the\nunit interval evolves a probability density, $\\rho(x,t)$ by one time step as\n\\begin{equation}\n\\rho(x,t+1) = U \\rho(x,t) \\equiv \\int_0^1 dx'\\, \\delta(x - {\\rm S}(x')) \\, \\rho(x',t).\n\\end{equation}\nEvaluating the integral gives a sum of contributions from the inverse branches of ${\\rm S}(x)$. \nThe Frobenius--Perron operator corresponding to the map (\\ref{generaltent}) acts explicitly on a\ndensity $\\rho(x)$ as\n\\begin{equation} \\label{gententfpop}\nU_{\\rm T} \\rho(x) = \\frac{1}{\\alpha}\\left[ \\rho\\left(\n\\frac{x}{\\alpha} \\right) + \\rho\\left(\\frac{\\alpha - x}{\\alpha} \n\\right) \\right]\\Theta \\left(\\frac{\\alpha}{2} - x \\right), \n\\end{equation}\nwhere \n\\begin{equation} \n\\Theta(a-x) = \\left\\{ \\begin{array}{lc} \n1 & x \\leq a \\\\\n0 & x > a.\n\\end{array} \\right. \n\\end{equation}\nThe step function appears here because the map has\nno inverse images for $x>\\alpha\/2$.\nFor $\\alpha=2$ the map has images on the\nwhole unit interval. For this value of $\\alpha$ the invariant density (being the stationary\nsolution of (\\ref{gententfpop})) is\nuniform on the whole unit interval. As $\\alpha$ is lowered the\ninvariant density is supported only on a subset of the interval\n$[\\alpha(1-\\alpha\/2),\\alpha\/2]$, i.e., from the\nsecond iterate to the first iterate of the critical point $x_c \\equiv 1\/2$. The invariant density is\ndiscontinuous at all values of the trajectory of the critical point. If the critical trajectory is\nperiodic (or eventually periodic) there will be a finite number of discontinuities. \n\nFor $\\alpha \\geq \\sqrt{2}$ the invariant density has nonvanishing support on all of\n$[\\alpha(1-\\alpha\/2),\\alpha\/2]$. As $\\alpha$ is decreased past this critical value,\n$\\alpha_{1}$, the invariant density breaks up into two bands with a gap in the\nmiddle. 
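The special role of $\alpha_1=\sqrt{2}$ can be checked directly (a sketch of ours, not the authors' computation): the critical trajectory is eventually periodic, with the third iterate of $x_c=1/2$ landing on the fixed point $x^*=2-\sqrt{2}$ and staying there.

```python
import math

def tent(x, alpha):
    return alpha * x if x < 0.5 else alpha * (1.0 - x)

# At alpha_1 = sqrt(2) the trajectory of the critical point x_c = 1/2
# is eventually periodic: its third iterate is the fixed point
# x* = 2 - sqrt(2), on which it then remains.
alpha1 = math.sqrt(2.0)
x, orbit = 0.5, []
for _ in range(6):
    x = tent(x, alpha1)
    orbit.append(x)

x_star = 2.0 - math.sqrt(2.0)
assert abs(orbit[2] - x_star) < 1e-12          # T^(3)(x_c) = x*
assert all(abs(y - x_star) < 1e-12 for y in orbit[2:])
```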
Decreasing $\\alpha$ past $\\alpha_{2} = 2^{1\/4}$ causes the invariant\ndensity to break up into 4 bands. In general the\ninvariant density has\n$2^n$ bands as $\\alpha$ is decreased past $\\alpha_{n} = 2^{2^{-n}}$. \nThese values of $\\alpha$ are called the band-splitting \npoints~\\cite{bsps}. \nThis is illustrated in Figure 2.\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{0.73}[0.73]{\\includegraphics{bif.eps}}\n\\parbox{5in}{\\caption{\\small Bifurcation plot of the tent map with varying height showing\nthe formation of bands as the height, parametrized by $\\alpha=2h$ on the horizontal axis,\nis lowered. The vertical axis is the subset $[\\alpha(1-\\alpha\/2),\\alpha\/2]$ of the\none-dimensional phase space. In (a) the shaded regions indicate where the invariant\ndensity has nonvanishing support. At this resolution we see up to the formation of the\nfour bands at $\\alpha=2^{1\/4}$ but higher band-splitting points are not resolved. In (b) iterates\nof the critical trajectory as a function of $\\alpha$ are plotted. These are seen to\ndetermine the band structure in (a).}}\n\\end{center}\n\\end{figure}\nAt the band-splitting points the critical trajectory is eventually \nperiodic and the invariant density is constant in each band and thus piecewise-constant\nover the unit interval. But a general initial density will also have persistent oscillating\ncomponents among the bands. This feature is known as asymptotic periodicity~\\cite{Lasota}.\n\nWe want to determine the spectral decomposition of the Frobenius--Perron operator so that we may\nexpand a density or correlation function in terms of its eigenmodes. In order to do this we need\nthe dual states or left eigenstates of $U$. These correspond to the right eigenstates of the\nadjoint of $U$, which is known as the Koopman operator~\\cite{Lasota}. 
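Since everything below rests on the Frobenius--Perron operator (\ref{gententfpop}), a minimal numerical sanity check (ours; the quadrature routine is an illustrative choice) confirms that each application of $U_{\rm T}$ conserves total probability:

```python
import math

def perron_frobenius(rho, alpha):
    # U rho for the tent map: sum over the two inverse branches,
    # cut off at x = alpha/2, beyond which there are no inverse images.
    def new_rho(x):
        if x > alpha / 2.0:
            return 0.0
        return (rho(x / alpha) + rho((alpha - x) / alpha)) / alpha
    return new_rho

def integrate(f, n=20000):
    # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

alpha = math.sqrt(2.0)
rho = lambda x: 6.0 * x * (1.0 - x)        # a smooth normalized density
u_rho = perron_frobenius(rho, alpha)
u2_rho = perron_frobenius(u_rho, alpha)

# Probability is conserved by each application of U.
assert abs(integrate(rho) - 1.0) < 1e-3
assert abs(integrate(u_rho) - 1.0) < 1e-3
assert abs(integrate(u2_rho) - 1.0) < 1e-3
```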
The Koopman operator,\n$K=U^\\dagger$, acts on a phase space function $A(x)$ as\n\\begin{equation} \\label{koopman}\nK A(x) = A({\\rm T}(x)),\n\\end{equation}\nwhere ${\\rm T}(x)$ is the rule for the map, such as~(\\ref{generaltent}).\nFor decompositions of one-dimensional, chaotic, Markov maps in spaces spanned by polynomials the\nKoopman operator has eigenstates that are generalized functions or\neigenfunctionals~\\cite{firstgenspec,scndgenspec,Deanbook}. \n\nThe decay of the $x$-autocorrelation function for the tent map at the \nband-splitting points and at values of $\\alpha$\nclose to band-splitting points was calculated in~\\cite{Mori}. \nThe tent map has also been studied by\nDorfle~\\cite{Dorfle} at arbitrary values of $\\alpha$ in several function spaces\nbut he does not provide explicit complete spectral decompositions.\nThe asymptotic periodicity of the system has been studied by Provatas and Mackey~\\cite{ProvMac}. \nSince as $t \\to \\infty$ the density is supported only within $[\\alpha(1-\\alpha\/2),\\alpha\/2]$\nall these authors have only considered the map in that region and neglected transient behavior onto\nit. That is sufficient if one only considers the behavior of time correlation functions; but for\nthe general evolution of densities and observables in a nonequilibrium statistical mechanics context\nthis transient behavior must not be neglected since, as will be seen, the slowest decay\nmodes originate from this part of the dynamics.\n\nIn the next section we construct the spectral decomposition of the\nFrobenius--Perron operator at the first band-splitting point. In Section 3 the decomposition at all\nthe band-splitting points is constructed using the self-similarity of the map at higher\nband-splitting points to the map at lower band-splitting points. 
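The duality between the Frobenius--Perron and Koopman operators, $\langle U\rho, A\rangle = \langle \rho, KA\rangle$ with $KA = A\circ{\rm T}$, can be verified numerically; the following sketch (ours, with an arbitrary smooth observable) checks it for the map at $\alpha=\sqrt{2}$:

```python
import math

alpha = math.sqrt(2.0)
tent = lambda x: alpha * x if x < 0.5 else alpha * (1.0 - x)

def U(rho):
    # Frobenius--Perron operator of the tent map at alpha = sqrt(2).
    def new_rho(x):
        if x > alpha / 2.0:
            return 0.0
        return (rho(x / alpha) + rho((alpha - x) / alpha)) / alpha
    return new_rho

K = lambda A: (lambda x: A(tent(x)))   # Koopman operator: K A = A o T

def inner(f, g, n=40000):
    # midpoint rule for the pairing <f, g> on [0, 1]
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

rho = lambda x: 6.0 * x * (1.0 - x)    # a normalized density
A = lambda x: math.cos(math.pi * x)    # an arbitrary observable

lhs = inner(U(rho), A)                 # <U rho, A>
rhs = inner(rho, K(A))                 # <rho, K A>
assert abs(lhs - rhs) < 1e-3
```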
\\section{The first band-splitting point}\n\nFrom (\\ref{generaltent}) the map at the first band-splitting point corresponding to the\nheight $h=\\sqrt{2}\/2$ or $\\alpha=\\sqrt{2}$ is\n\\begin{equation} \\label{mapatfbsp}\n{\\rm{T}}_{1}(x) = \\left\\{ \\begin{array}{lc} \n\\sqrt{2} \\, x & 0 \\leq x < \\frac{1}{2} \\\\ \\noalign{\\vskip4pt} \n\\sqrt{2} \\, (1-x) & \\frac{1}{2} \\leq x < 1,\n\\end{array} \\right.\n\\end{equation}\nwhich is shown in Figure 1.\nThe dynamics is organized around four\nintervals determined by the trajectory of the critical point. At the third iteration\nthe critical trajectory settles onto the fixed point, $x^*=2-\\sqrt{2}$. The four\nintervals: ${\\rm{I}}=[0,{\\rm{T}}_{1}^{(2)}(x_{\\rm c}))$, \n${\\rm{II}}=[{\\rm{T}}_{1}^{(2)}(x_{\\rm c}),{\\rm{T}}_{1}^{(3)}(x_{\\rm c}))$,\n${\\rm{III}}=[{\\rm{T}}_{1}^{(3)}(x_{\\rm c}),{\\rm{T}}_{1}^{(1)}(x_{\\rm c}))$ and\n${\\rm{IV}}=[{\\rm{T}}_{1}^{(1)}(x_{\\rm c}),1)$, define a minimal Markov partition\nfor the map. (These intervals are indicated in Figure 1.) Any point in the interior of\ninterval ${\\rm{IV}}$ is mapped onto some point in interval ${\\rm{I}}$ in one\niteration. Under successive iterations all points in the interior of ${\\rm{I}}$ are\neventually mapped into ${\\rm{II}}$. Any point in interval $\\rm{II}$ maps into interval\n$\\rm{III}$ in one iteration and any point in interval $\\rm{III}$ maps into\n$\\rm{II}$ in one iteration. Thus the union of the intervals $\\rm{II}$ and $\\rm{III}$ \nforms the attracting set $\\Omega$. \n\nThe Frobenius--Perron operator for the map (\\ref{mapatfbsp}) acts on \na density as \n\\begin{equation} \\label{op}\nU_{\\rm T_1} \\rho(x) = \\frac{1}{\\sqrt{2}}\\left[ \\rho\\bigg(\n\\frac{x}{\\sqrt{2}} \\bigg) + \\rho\\bigg(\\frac{\\sqrt{2} - x}{\\sqrt{2}} \n\\bigg) \\right]\\Theta \\bigg( \\frac{\\sqrt{2}}{2} - x \\bigg). 
\n\\end{equation}\nA general initial density continuous over the unit interval develops\ndiscontinuities at the endpoints of the four intervals described above. \nWe thus choose a function space to consider $U_{\\rm T_1}$ in as the space\nof piecewise-polynomial functions where the pieces are the four\nintervals described above. The invariant density has support only in\n$\\Omega$. \n\nThe fact that points oscillate between the intervals $\\rm{II}$ and $\\rm{III}$ means\nthat a general density will have a persistent oscillating component between these two\nintervals under time evolution. This property we will sometimes refer to in the\npresent context as the ``flip property\", since the part of the density in $\\rm{II}$\nwill all be in $\\rm{III}$ (with stretching) in the next time step and vice-versa. The parts of the\ninitial density with support in the intervals\n${\\rm{I}}$ and $\\rm{IV}$ will decay onto $\\Omega$. Since we will use the eigenfunctions on\n$\\Omega$ to determine those on its complement, we first consider the decomposition on $\\Omega$.\n\n\\subsection{Evolution on the attractor}\n\nFor convenience $\\Omega$ is stretched\nonto the interval $[0,1)$. At the end of the computation we will rescale all\nresults back to $\\Omega$. The linear function that makes the stretch is\n\\begin{equation} \\label{phi}\n\\phi(x) = \n(2\/x^{*}) x - \\sqrt{2},\n\\end{equation}\nwhere ${\\rm{T}}_{1}^{(2)}(x_{\\rm c}) \\leq x < {\\rm{T}}_{1}^{(1)}(x_{\\rm c})$.\nThe transformation (\\ref{phi}) is a homeomorphism so that it begets a new\nmap ${\\rm R_1}$ topologically conjugate to the part of ${\\rm T_1}$ on $\\Omega$ as\n$\\rm{R}_{1} = \\phi \\circ {\\rm{T}}_{1} \\circ \\phi^{-1}$\ngiven by \n\\begin{equation} \\label{rescaled}\n{\\rm{R}}_{1}(x) = \\left\\{ \\begin{array}{lc}\n\\sqrt{2} \\, x + x^{*} & 0 \\leq x < x^{*} \\! \/2 \\\\ \\noalign{\\vskip4pt}\n\\sqrt{2} \\, (1-x) & x^{*} \\! 
\/2 \\leq x < 1, \n\\end{array} \\right.\n\\end{equation}\nwhere $x^{*} = 2 - \\sqrt{2}$ is the fixed point of the map ${\\rm{R}}_{1}(x)$,\nwhich is the same as\nthe fixed point of the map ${\\rm{T}}_{1}(x)$. Under this transformation, the\nintervals $\\rm{II}$ and $\\rm{III}$\nare stretched to the intervals ${\\rm{A}} \\equiv [0,x^{*})$ and ${\\rm{B}} \\equiv\n[x^{*},1)$ respectively. The map ${\\rm{R}}_{1}(x)$ is shown in Figure 3. \n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig21.eps}}\n\\parbox{5in}{\\caption{\\small The rescaled Tent map at the first band-splitting point.}}\n\\end{center}\n\\end{figure}\n\nThe Frobenius--Perron operator corresponding to the\nrescaled map\n$\\rm{R}_{1}$ acts on a density as\n\\begin{equation} \\label{ur1}\nU_{\\rm{R}_{1}} \\rho(x) = \\frac{1}{\\sqrt{2}}\\left[ \\rho\\bigg(\n\\frac{\\sqrt{2} - x}{\\sqrt{2}} \\bigg) + \\rho\\bigg(\\frac{x - x^{*}}{\\sqrt{2}} \n\\bigg)\\Theta(x - x^{*}) \\right]. \n\\end{equation}\nThe flip property of $\\rm T_1$ is inherited by $\\rm R_1$ in that the inverse image of\n$\\rm A$ is $\\rm B$ and vice-versa. This suggests that a simpler analysis will be\nobtained by considering the map corresponding to two iterations of $\\rm R_1$. This map,\n${\\rm{G}}_{1} \\equiv \\rm{R}_{1} \\circ \\rm{R}_{1}$, is given by \n\\begin{equation}\n{\\rm G}_{1}(x) = \\left\\{ \n\\begin{array}{lc} \n-2x + x^{*} & 0 \\leq x < x^{*} \\! \/2 \\\\ \\noalign{\\vskip4pt}\n 2x - x^{*} & x^{*} \\! \/2 \\leq x < (1+x^*)\/2 \\\\ \\noalign{\\vskip4pt}\n-2x + (2 + x^{*}) & (1+x^*)\/2 \\leq x < 1.\n\\end{array} \\right. \n\\end{equation}\nThe flip property of $\\rm{R}_{1}$ means that ${\\rm{G}}_{1}$\nis metrically decomposable into two independent maps on the intervals\n$\\rm{A}$ and\n$\\rm{B}$, as is clear from Figure 4. 
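The flip property and the resulting metric decomposability of ${\rm G}_1$ are easy to confirm numerically; the sketch below (our check, on sampled interior points) verifies that ${\rm R}_1$ exchanges ${\rm A}=[0,x^*)$ and ${\rm B}=[x^*,1)$, so that ${\rm G}_1={\rm R}_1\circ{\rm R}_1$ leaves each interval invariant:

```python
import math

x_star = 2.0 - math.sqrt(2.0)  # fixed point of R_1

def R1(x):
    # Rescaled map at the first band-splitting point.
    if x < x_star / 2.0:
        return math.sqrt(2.0) * x + x_star
    return math.sqrt(2.0) * (1.0 - x)

def G1(x):
    return R1(R1(x))

# Sample interior points of A = [0, x*) and B = [x*, 1).
A = [0.01 + 0.57 * k / 50 for k in range(50)]
B = [x_star + 0.005 + 0.40 * k / 50 for k in range(50)]

# Flip property: R_1 sends A into B and B into A ...
assert all(x_star < R1(x) < 1.0 for x in A)
assert all(0.0 < R1(x) < x_star for x in B)
# ... so G_1 = R_1 o R_1 leaves A and B separately invariant.
assert all(0.0 <= G1(x) < x_star for x in A)
assert all(x_star <= G1(x) < 1.0 for x in B)
```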
From now on in this section we shall\ndrop the subscript $1$ on the maps $\\rm R_1$ and $\\rm G_1$; it being\nunderstood that we are referring to these maps at the first\nband-splitting point. \n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig31.eps}}\n\\end{center}\n\\parbox{5in}{\\caption{\\small The map ${\\rm{G_{1}}}={\\rm{R_{1} \\circ R_{1}}}$ is metrically\ndecomposable into two parts each conjugate to the tent map with unit height.} }\n\\end{figure}\n\n\nThe map ${\\rm G}$ restricted to $\\rm{A}$ is just a rescaling (with a\nflip) of the tent map with full height, ${\\rm T}_0$, i.e., the map\n(\\ref{generaltent}) with $\\alpha=2$. This is expressed in terms of a\ntopological conjugacy as\n\\begin{equation}\n{\\rm{G}}_{\\rm{A}}(x) = \\phi^{-1}_{\\rm{A}}(x) \\circ {\\rm{T_0}}(x) \\circ\n\\phi_{\\rm{A}}(x),\n\\end{equation}\nwhere\nthe conjugating function $\\phi_{\\rm{A}}(x)$ is \n\\begin{equation}\n\\phi_{\\rm{A}}(x) = 1 - \\frac{x}{x^*},\n\\end{equation}\nand $x \\in [0, x^{*})$.\nSimilarly the map on $\\rm B$ is topologically conjugate to ${\\rm T}_0$ as\n\\begin{equation}\n{\\rm{G}}_{\\rm{B}}(x) = \\phi^{-1}_{\\rm{B}}(x) \\circ {\\rm{T}}_{0}(x) \\circ\n\\phi_{\\rm{B}}(x),\n\\end{equation}\nwhere the conjugating function $\\phi_{\\rm B}(x)$ is \n\\begin{equation}\n\\begin{array}{lc}\n\\phi_{\\rm{B}}(x) = \n(\\sqrt{2}\/x^{*})x - \\sqrt{2} ,\n\\end{array} \n\\end{equation}\nand here $x \\in [x^{*}, 1)$.\nThese conjugacies are useful for us because the\nspectral decompositions of maps that are topologically conjugate are simply\nrelated, as is reviewed in Appendix A.\nThe generalized spectral decomposition of $\\rm{T_{0}}$ has been\npreviously determined~\\cite{Gonzalo,fox} and is reviewed\nin Appendix B. 
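The conjugacy ${\rm G}_{\rm A}=\phi_{\rm A}^{-1}\circ{\rm T}_0\circ\phi_{\rm A}$ amounts to the identity $\phi_{\rm A}({\rm G}_{\rm A}(x))={\rm T}_0(\phi_{\rm A}(x))$ on ${\rm A}$, which the following sketch (ours) checks pointwise:

```python
import math

x_star = 2.0 - math.sqrt(2.0)

def T0(y):
    # Tent map with full height (alpha = 2).
    return 2.0 * y if y < 0.5 else 2.0 * (1.0 - y)

def G_A(x):
    # G restricted to A = [0, x*): two linear branches.
    return -2.0 * x + x_star if x < x_star / 2.0 else 2.0 * x - x_star

phi_A = lambda x: 1.0 - x / x_star

# Conjugacy G_A = phi_A^{-1} o T0 o phi_A, i.e. phi_A(G_A(x)) = T0(phi_A(x)).
for k in range(1, 200):
    x = k * x_star / 200.0
    assert abs(phi_A(G_A(x)) - T0(phi_A(x))) < 1e-12
```

The analogous identity with $\phi_{\rm B}$ holds on ${\rm B}$.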
\n\nFollowing the discussion in those appendices gives the right eigenvectors of\n$\\rm{G}_{\\rm{A}}$ and $\\rm{G}_{\\rm{B}}$ as\n\\alpheqn\n\\begin{eqnarray} \\label{Gvecsaa}\n| 2^{-2j} \\rangle_{\\rm{G}_{\\rm{A}}} & = & \\frac{1}{x^{*}}\nB_{2j}\\left( \\frac{x^{*} - x}{2x^{*}} \\right)\\chi_{\\rm{A}} \\\\ \\label{Gvecsab}\n| 0_{2j+1} \\rangle_{\\rm{G}_{\\rm{A}}} & = & \\frac{1}{x^{*}}\nE_{2j+1}\\left( \\frac{x}{x^{*}} \\right)\\chi_{\\rm{A}}, \n\\end{eqnarray}\n\\reseteqn\n\\alpheqn\n\\begin{eqnarray} \\label{Gvecsba}\n| 2^{-2j} \\rangle_{\\rm{G}_{\\rm{B}}} & = & \\frac{\\sqrt{2}}{x^{*}}\nB_{2j}\\left( \\frac{x - x^{*}}{\\sqrt{2}x^{*}} \\right)\\chi_{\\rm{B}} \\\\ \\label{Gvecsbb}\n| 0_{2j+1} \\rangle_{\\rm{G}_{\\rm{B}}} & = & \\frac{\\sqrt{2}}{x^{*}}\nE_{2j+1}\\left( \\frac{\\sqrt{2}}{x^{*}}\\left( x-x^{*} \\right) \\right)\n\\chi_{\\rm{B}},\n\\end{eqnarray}\n\\reseteqn\nwhere the associated eigenvalue is the argument of the\nket vector with $|0_{2j+1}\\rangle$ meaning a null eigenpolynomial of degree $2j+1$\nand $\\chi_{\\rm{A}}$ and $\\chi_{\\rm{B}}$ are indicator functions on the intervals\n$\\rm{A}$ and $\\rm{B}$ respectively. 
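These expressions rest on the decomposition of ${\rm T}_0$ reviewed in Appendix B, whose right eigenvectors are even Bernoulli polynomials and whose null vectors are odd Euler polynomials. A quick numerical sanity check for the lowest cases (ours, using the standard normalizations $B_2(y)=y^2-y+1/6$ and $E_1(y)=y-1/2$):

```python
def U0(f):
    # Frobenius--Perron operator of the full tent map T_0 (alpha = 2).
    return lambda x: 0.5 * (f(x / 2.0) + f(1.0 - x / 2.0))

B2 = lambda y: y * y - y + 1.0 / 6.0        # Bernoulli polynomial B_2
E1 = lambda y: y - 0.5                      # Euler polynomial E_1

f = lambda x: B2(x / 2.0)
g = lambda x: E1(x)

xs = [k / 100.0 for k in range(101)]
# B_2(x/2) is a right eigenvector with eigenvalue 2^{-2} ...
assert all(abs(U0(f)(x) - 0.25 * f(x)) < 1e-12 for x in xs)
# ... and the odd Euler polynomial E_1(x) is a null vector.
assert all(abs(U0(g)(x)) < 1e-12 for x in xs)
```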
Due to the metric decomposability of $\\rm G$ these\nstates are eigenstates of $U_{\\rm G}$ as well.\n\nSimilarly, the left eigenvectors of\n$\\rm{G}_{\\rm{A}}$ and $\\rm{G}_{\\rm{B}}$ are the generalized functions\n\\alpheqn \n\\begin{eqnarray} \\label{lgvecaa}\n\\langle 2^{-2j} |_{\\rm{G}_{\\rm{A}}} & = & \n\\frac{(-1)^{2j-1}\\left( 2x^{*} \\right)^{2j}}{(2j)!}\\left[\n\\delta^{(2j-1)}_{-}(x - x^{*}) - \\delta^{(2j-1)}_{+}(x) \\right] \\\\ \\label{lgvecab}\n\\langle 0_{2j+1} |_{\\rm{G}_{\\rm{A}}} & = & -\\frac{\\left(x^{*}\\right)^{2j+2}}\n{(2j+1)!}\\delta^{(2j+1)}_{+}\\left( x \\right), \n\\end{eqnarray}\n\\reseteqn\n\\alpheqn\n\\begin{eqnarray} \\label{lgvecba}\n\\langle 2^{-2j} |_{\\rm G_B} \n& = & \\frac{(-1)^{2j-1}\\left( \\sqrt{2} \\, x^{*} \\right)^{2j}}{(2j)!}\\left[\n\\delta^{(2j-1)}_{-}(x - 1) - \\delta^{(2j-1)}_{+}\n(x - x^{*}) \\right] \\\\ \\label{lgvecbb}\n\\langle 0_{2j+1} |_{\\rm G_B} & = & \\frac{-1}{(2j+1)!}\n\\left( \\frac{x^{*}}{\\sqrt{2}} \\right)^{2j+2}\n\\delta^{(2j+1)}_{-}\\left( x - 1 \\right),\n\\end{eqnarray}\n\\reseteqn\nwhere the definitions of $\\delta_{\\pm}$ are given in Appendix B.\n\nSince $U_{\\rm{G}} = U^{2}_{\\rm{R}}$ the spectrum of $U_{\\rm{R}}$ is a subset of\n$\\{0,\\pm 2^{-j} \\}$. Consider a non-zero eigenvalue, \n${2}^{-2j}$, of $U_{\\rm{G}}$. There are two\neigenvectors associated with this eigenvalue, each a polynomial of\norder $2j$. Since the function space on which $U_{\\rm{R}}$ acts has two basis\nelements for each degree $j$, i.e., a $j^{\\rm th}$ degree polynomial in ${\\rm A}$ and a\n$j^{\\rm th}$ degree polynomial in\n${\\rm B}$, there should be either two eigenvectors or one eigenvector and one Jordan\nvector that are polynomials of degree $2j$, associated with either one or both the\neigenvalues $\\{ +2^{-j},-2^{-j} \\}$. Since $U_{\\rm{G}}$ does not have any Jordan vectors\nit follows that $U_{\\rm{R}}$ doesn't either (for non-zero eigenvalues). 
\nThe eigenvalues of $U_{\\rm R}$ cannot be twofold degenerate since that\nwould imply that all the eigenvectors of\n$U_{\\rm{G}}$ are also eigenvectors of $U_{\\rm{R}}$, which is impossible since $\\rm{G}$ is\nmetrically decomposable and $\\rm{R}$ has the flip property. Therefore the\nnon-zero eigenvalues of $\\rm{R}$ are $+2^{-j}$ and $-2^{-j}$.\n\nThe eigenvectors of\n$U_{\\rm{R}}$ with eigenvalue $\\pm 2^{-j}$ are in the eigenspace \nspanned by the two eigenvectors of $U_{\\rm{G}}$ corresponding to\n$+ 2^{-2j}$. Thus they will be linear combinations as\n\\alpheqn \n\\begin{equation} \\label{reigenforma}\n|{+2^{-j}} \\rangle_{\\rm{R}} = \\frac{1}{2} \\left( |{+2^{-2j}}\n\\rangle_{\\rm{G}_{\\rm{A}}} + c_{j}|{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{B}}} \\right)\n\\end{equation}\nand\n\\begin{equation}\n|{-2^{-j}} \\rangle_{\\rm{R}} = \\frac{1}{2} \\left( |{+2^{-2j}} \\label{reigenformb}\n\\rangle_{\\rm{G}_{\\rm{A}}} + d_{j}|{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{B}}} \\right) , \n\\end{equation}\n\\reseteqn\nwhere the coefficient of $1\/2$ is included for convenient normalization. To determine\n$c_j$ we use that $|{+2^{-j}} \\rangle_{\\rm R}$ is an eigenvector of \n$U_{\\rm{R}}$ as\n\\begin{equation}\nU_{\\rm{R}}\\left[ |{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{A}}} + \nc_{j}|{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{B}}} \\right] =\n2^{-j}\\left[ |{+2^{-2j}}\\rangle_{\\rm{G}_{\\rm{A}}} + \nc_{j}|{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{B}}} \\right]. \n\\end{equation}\nThe flip property tells us that \n\\begin{equation} \\label{nn}\nU_{\\rm{R}}\\left[ c_{j}|{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{B}}} \\right]\n = 2^{-j}|{+2^{-2j}}\\rangle_{\\rm{G}_{\\rm{A}}}. \n\\end{equation}\nSubstituting the explicit form (\\ref{Gvecsaa}) of $|{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{A}}}$\nand (\\ref{Gvecsba}) of $|{+2^{-2j}} \\rangle_{\\rm{G}_{\\rm{B}}}$ in (\\ref{nn}) and solving\nfor $c_{j}$ we find that\n$c_{j} = 2^{-j}$. A similar analysis shows that $d_{j} = -2^{-j}$. 
Hence \n\\begin{equation} \\label{fbspurstates}\n|{\\pm 2^{-j}} \\rangle_{\\rm R} = \\frac{1}{2 x^*}\\left( B_{2j}\\left( \\frac\n{x^{*} - x}{2x^{*}} \\right)\\chi_{\\rm{A}} \\pm \\frac{\\sqrt{2}}{2^{j}}B_{2j}\\left(\n\\frac {x - x^{*}}{\\sqrt{2}x^{*}} \\right)\\chi_{\\rm{B}} \\right). \n\\end{equation}\n\nThe invariant state, corresponding to the invariant density of $U_{\\rm R}$, is\n\\begin{equation}\n|{+1}\\rangle_{\\rm R} = \\frac{1}{2 x^*}(\\chi_{\\rm A} + \\sqrt{2} \\, \\chi_{\\rm B}).\n\\end{equation}\nThis state carries all the probability under evolution of $U_{\\rm R}$ and any density\nwill have this component. The state\n\\begin{equation}\n|{-1}\\rangle_{\\rm R} = \\frac{1}{2 x^*}(\\chi_{\\rm A} - \\sqrt{2} \\, \\chi_{\\rm B})\n\\end{equation}\nis the asymptotically periodic state. Only it and the invariant density survive as $t\n\\to \\infty$; but $|{-1}\\rangle_{\\rm R}$, like decaying states, doesn't carry any probability. \nIt does, however, keep the memory of the projection of the initial density on\n$|{-1}\\rangle_{\\rm R}$, which is a special property of asymptotically periodic\nsystems~\\cite{Lasota}.\n \n\nNow we consider the null space of $\\rm{U_{R}}$. The map $\\rm{G}$ has two independent\nnull vectors (one in $\\rm A$ and one in $\\rm B$) for each odd degree. \nThis implies that\n$\\rm{R}$ can have either a corresponding $2 \\times 2$ Jordan block \nor have 2 independent eigenvectors for each odd degree. 
The latter case is not possible\nsince null vectors of $\\rm{R}$ cannot have support in interval $\\rm{B}$ because only one\nof the terms on the rhs of (\\ref{ur1}) acts on functions in $\\rm B$.\nThus there is a $2 \\times 2$ Jordan block for each odd degree associated with\neigenvalue $0$.\n\nConsider the action of $U_{\\rm G}\n=U^{2}_{\\rm{R}}$ on a null state, $| 0_{2j+1}\\rangle_{\\rm{G_{A}}}$, of\n$\\rm{G_A}$ as \n\\begin{equation} \nU_{\\rm{R}}\\Big[ U_{\\rm{R}}|0_{2j+1} \\rangle_{\\rm{G}_A} \\Big] = 0.\n\\end{equation} \nThe function inside the square brackets has support only in $\\rm B$, and\n$U_{\\rm{R}}$ acting on any non-zero function with support in $\\rm B$ cannot vanish\nin one iteration. Thus $|0_{2j+1} \\rangle_{\\rm R}=|0_{2j+1} \\rangle_{\\rm\nG_A}$ is a null vector of $U_{\\rm{R}}$ with explicit form given in\n(\\ref{Gvecsab}).\n\nThe Jordan vector, $|0_{J_{2j+1}} \\rangle_{\\rm R}$, associated with this eigenvector\nsatisfies\n\\begin{equation} \\label{q}\n U_{\\rm{R}}| 0_{J_{2j+1}} \\rangle_{\\rm{R}} = | 0_{2j+1}\n\\rangle_{\\rm{R}}.\n\\end{equation} \nWe may choose the Jordan vector to have support only\nin ${\\rm B}$ as\n\\begin{equation} \\label{qq}\n| 0_{J_{2j+1}} \\rangle_{\\rm{R}} = \\eta_{2j+1}| 0_{2j+1}\n\\rangle_{\\rm{G}_{\\rm{B}}},\n\\end{equation} \nwhere $\\eta_{2j+1}$ is a constant to be determined. \nTo determine $\\eta_{2j+1}$ we apply $U_{\\rm R}$ to (\\ref{qq}) and use (\\ref{q}) and the\nexplicit forms (\\ref{Gvecsab}) and (\\ref{Gvecsbb}) (remembering that\n$|0_{2j+1} \\rangle_{\\rm R}=|0_{2j+1} \\rangle_{\\rm G_A}$) to obtain\n$\\eta_{2j+1} = -1$.\n\n\\subsubsection{Left eigenstates of $U_{\\rm{R}}$}\n\nThe left eigenstates of $U_{\\rm{R}}$ with non-zero eigenvalues may be determined by an\napproach similar to the one used to find the right eigenstates. 
\nThe result is\n\\begin{eqnarray}\n\\langle \\pm 2^{-j}|_{\\rm{R}} & = & \\frac{\\left( 2x^{*} \\right)^{2j}}\n{(2j)!} \\left[ \\delta^{(2j-1)}_{+}(x) - \\delta^{(2j-1)}_{-}(x - x^{*})\n\\right. \\nonumber \\\\ & & \\left. \\hskip50pt\n\\pm \\delta^{(2j-1)}_{+}(x - x^{*}) \\mp \\delta^{(2j-1)}_{-}(x - 1) \\right].\n\\end{eqnarray} \nThe left states with zero eigenvalues are, like the right states, identical to those\nof $U_{\\rm G}$ (except for the factor of $-1$ for the dual state of the Jordan vector) as\n\\alpheqn \n\\begin{eqnarray}\n\\left\\langle 0_{2j+1} \\right|_{\\rm R} & = & \n\\left\\langle 0_{2j+1} \\right|_{\\rm G_A} \\\\\n\\langle 0_{J_{2j+1}} |_{\\rm{R}} & = & \n- \\left\\langle 0_{2j+1} \\right|_{\\rm G_B}. \n\\end{eqnarray}\n\\reseteqn\nNote that $\\langle 0_{J_{2j+1}} |_{\\rm{R}}$ is an eigenstate of the Koopman operator\nand $\\left\\langle 0_{2j+1} \\right|_{\\rm{R}}$ is a Jordan state. \n\n\n\n\\subsection{Decay onto the attractor}\n\nAs noted before, initial densities with support in the intervals $\\rm{I}$ and\/or\n$\\rm{IV}$ will decay into $\\Omega$. Consider a density with support\nonly in $\\rm{IV}$ at $t=0$. At $t=1$ the density has support only in ${\\rm{I}}$. Thus\nany eigenvector of $U_{\\rm T_1}$ with support in $\\rm{IV}$ can only have\neigenvalue\n$0$. To determine such an eigenvector we write an ansatz for it as \n\\begin{equation} \\label{aa}\n| 0_j \\rangle_{\\rm{IV}} =\nf_{{\\rm{I}},j}(x)\\chi_{{\\rm{I}}} +\nf_{{\\rm{II}},j}(x)\\chi_{\\rm{II}}\n+ f_{{\\rm{III}},j}(x)\\chi_{\\rm{III}} + f_{{\\rm{IV}},j}(x)\\chi_{\\rm{IV}},\n\\end{equation}\nwhere the subscript $\\rm IV$ on the ket denotes that it\ndescribes decay out of interval $\\rm IV$ and we take $f_{{\\rm IV},j}(x)$ as a\npolynomial of order $j$. 
Applying $U_{\\rm T_1}$ to (\\ref{aa}) and collecting terms that are\nmultiplied by the same indicator function we get\n\\begin{eqnarray} \\label{aaa}\nU_{\\rm T_1}|0_j \\rangle_{\\rm{IV}} & = & \n\\left[ f_{{\\rm{I}},j} \\left( \\frac{x}{\\sqrt{2}} \n\\right) + f_{{\\rm{IV}},j}\\left(1- \\frac{x}{\\sqrt{2}} \n\\right) \\right]\\chi_{{\\rm{I}}} \\nonumber \\\\\n & & \\mbox{} + \\left[ f_{{\\rm{I}},j} \\left(\n\\frac{x}{\\sqrt{2}} \n\\right) + f_{{\\rm{III}},j}\\left(1- \\frac{x}{\\sqrt{2}} \\right) \\right]\n\\chi_{\\rm{II}} \\nonumber \\\\ \n& & \\mbox{} + \\left[ f_{{\\rm{II}},j} \\left( \\frac{x}{\\sqrt{2}} \n\\right) + f_{{\\rm{II}},j}\\left(1- \\frac{x}{\\sqrt{2}} \\right)\n\\right]\\chi_{\\rm{III}}.\n\\end{eqnarray} \nSince $|0_j \\rangle_{\\rm{IV}}$ is a null eigenstate the coefficients of each of the\nindicator functions must vanish. Since\n$f_{{\\rm{IV}},j}$ is a polynomial of degree\n$j$, it follows that \n$f_{{\\rm{I}},j}$ and\n$f_{{\\rm{III}},j}$ are also polynomials of degree $j$. We choose\n$f_{{\\rm{II}},j}$ to be zero. Clearly,\n$f_{{\\rm{III}},j}=f_{{\\rm{IV}},j}$ and choosing it as\n$(x-1)^j$ fixes $f_{{\\rm{I}},j}$. We thus have \n\\begin{equation} \\label{rt1} \n| 0_j \\rangle_{\\rm{IV}} = (-1)^{j+1}x^j\\chi_{\\rm{\\rm{I}}} +\n(x-1)^j(\\chi_{\\rm{III}} + \\chi_{\\rm{IV}}). \n\\end{equation} \nWe note that because of the degeneracy associated\nwith eigenvalue $0$ in interval $\\rm{IV}$, this choice of eigenvectors is not\nunique. 
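That these piecewise-polynomial functions are indeed annihilated by $U_{\rm T_1}$ can be confirmed numerically; the sketch below (our check, with the interval endpoints computed from the critical trajectory) verifies $U_{\rm T_1}|0_j\rangle_{\rm IV}=0$ for the lowest degrees:

```python
import math

r2 = math.sqrt(2.0)
# Endpoints of the Markov intervals from the critical trajectory:
# T^(2)(x_c), T^(3)(x_c), T^(1)(x_c).
t2, t3, t1 = r2 - 1.0, 2.0 - r2, r2 / 2.0

def null_IV(j):
    # The null vector built from interval IV, as a plain function.
    def f(x):
        if x < t2:                      # interval I
            return (-1.0) ** (j + 1) * x ** j
        if x < t3:                      # interval II
            return 0.0
        return (x - 1.0) ** j           # intervals III and IV
    return f

def U_T1(rho):
    # Frobenius--Perron operator of T_1.
    def new_rho(x):
        if x > r2 / 2.0:
            return 0.0
        return (rho(x / r2) + rho((r2 - x) / r2)) / r2
    return new_rho

xs = [k / 997.0 for k in range(998)]
for j in (0, 1, 2):
    Uf = U_T1(null_IV(j))
    assert all(abs(Uf(x)) < 1e-12 for x in xs)
```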
The left eigenstates (given below) associated with this choice of\nright eigenvectors are unique and therefore when we expand an arbitrary \ndensity in terms of the right eigenvectors, the expansion coefficients \nare uniquely defined.\n\nThe action of $U_{\\rm T_1}$ on a function with support only \non $\\rm{\\rm{I}}$ is given by\n\\begin{equation} U_{\\rm T_1} [f(x)\\chi_{\\rm{I}}] = \\frac{1}{\\sqrt{2}}\nf(x\/\\sqrt{2}) ( \\chi_{\\rm{I}} + \\chi_{\\rm{II}}).\n\\end{equation} \nActing with $U_{\\rm T_1}$ on a monomial in $\\rm{I}$ gives\n\\begin{equation} \\label{ind}\n U_{\\rm T_1} [x^{j}\\chi_{\\rm I}] =\n\\frac{x^{j}}{(\\sqrt{2})^{j+1}} (\\chi_{\\rm{\\rm{I}}} + \\chi_{\\rm{II}}), \n\\end{equation}\nso that the eigenvectors of $U_{\\rm T_1}$ with support in $\\rm{I}$\nare monomials in $\\rm{I}$.\nTo determine their form in the other intervals we again write an ansatz as\nwe did for the eigenvectors with support in $\\rm{IV}$ as\n\\begin{equation} \\label{haha} \n| 2^{-(j+1)\/2}\n\\rangle_{{\\rm{I}}} = x^{j}\\chi_{{\\rm{I}}} + g_{{\\rm II},j}(x) \\chi_{\\rm{II}} +\ng_{{\\rm III},j}(x)\\chi_{\\rm{III}},\n\\end{equation}\nwhere the associated eigenvalue, appearing as the argument of the ket, is seen from (\\ref{ind}). In\nequation (\\ref{haha}) and below the subscript $\\rm I$ on a ket implies that\nit describes decay out of region ${\\rm I}$. Note that the\n$j=0$ mode here is the slowest decay mode in this system. Since $U_{\\rm T_1}$\ndoes not raise the degree of the polynomial it acts on, \n$g_{{\\rm II},j}(x)$ and $g_{{\\rm III},j}(x)$ are polynomials of degree $j$. 
Applying\n$U_{\\rm T_1}$ to (\\ref{haha}) and using (\\ref{ind}) and the fact that the function on\nthe rhs of (\\ref{haha}) is an eigenvector with eigenvalue\n$2^{-(j+1)\/2}$, we find that \n\\begin{equation} \\label{form1}\nU_{\\rm T_1} ( g_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm III},j}(x)\\chi_{\\rm{III}}) =\n2^{-(j+1)\/2} (g_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm III},j}(x)\\chi_{\\rm{III}}\n-x^{j}\\chi_{\\rm{II}}). \n\\end{equation}\nThe determination of $g_{{\\rm II},j}(x)$ \nand $g_{{\\rm III},j}(x)$ is described in Appendix C.\n\nThe explicit form of the first few of these eigenvectors is\n\\alpheqn\n\\begin{eqnarray} \\label{firstfew}\n| 2^{-1\/2}\\rangle_{\\rm I} & = & \\chi_{{\\rm{I}}} +\n\\frac{1}{2(1-\\sqrt{2})}|1\\rangle_{\\Omega} + \\frac{1}{2(1+\\sqrt{2})}|{-1\\rangle_\\Omega} \\\\\n| 2^{-1}\\rangle_{\\rm I} & = & x \\, \\chi_{\\rm{I}} -\n\\frac{1}{4(x^*)}|1\\rangle_\\Omega + \\frac{1}{12}|{-1\\rangle_\\Omega} +\n\\frac{(x^*)^2}{2}|0\\rangle_\\Omega \\\\ \n|2^{-3\/2}\\rangle_{\\rm I} & = & x^2 \\,\\chi_{\\rm{I}} -\n\\frac{\\sqrt{2}x^*}{12}|1\\rangle_\\Omega +\n\\frac{(9\\sqrt{2}-8)x^*}{84}|-1\\rangle_\\Omega + \n\\frac{\\sqrt{2}(x^*)^3}{2}|0\\rangle_{\\Omega} \\\\ \n& & \\mbox{}\\hspace{1cm} - \\frac{\\sqrt{2}(x^*)^3}{2}|2^{-1}\\rangle_\\Omega +\n\\frac{(x^*)^4}{2(1+\\sqrt{2})}|{-2^{-1}\\rangle_\\Omega}.\n\\end{eqnarray}\n\\reseteqn \nThe subscript $\\Omega$ on a ket implies that it is a right state of \n$U_{\\rm R}$ which has been rescaled into the attractor, $\\Omega$, of $U_{\\rm T_1}$.\nThe explicit form of these states is given in Table~1.\n\n\\subsection{Left eigenstates of $U_{\\rm T_1}$}\n\nIt is easily seen that the left states given by \n\\begin{equation} \\label{lt1}\n\\langle 0_j |_{\\rm{IV}} = \\frac{(-1)^j}{j!}\\delta_{-}^{(j)}(x-1)\n\\end{equation}\nform a bi-orthonormal set with the null states in~(\\ref{rt1}).\nThese left states are also orthogonal to all the other right states of $U_{\\rm T_1}$.\nThe left states 
associated with the transient decay states out of $\\rm{I}$ are given by\n\\begin{equation} \\label{l1}\n\\langle {2}^{-(j+1)\/2}|_{\\rm I} = \\frac{1}{j!}\n\\left[ (-1)^j\\delta_{+}^{(j)}(x) + \\delta_{-}^{(j)}(x-1) \\right]. \n\\end{equation}\n\nThe left states of $U_{\\rm R}$ are not the left states of $U_{\\rm T_1}$, even though\nthe right states of $U_{\\rm R}$ are also right states of $U_{\\rm T_1}$. This is\nbecause $U_{\\rm T_1}$ acting on a density contained within $\\Omega$ will\ncontinue to be in $\\Omega$. But in general the\nKoopman operator, $K_{\\rm T_1}$, acting on a function with support only in $\\Omega$\nwill result in the function having support outside $\\Omega$ too.\nThe left states of $U_{\\rm R}$ scaled back to $\\Omega$ form a bi-orthonormal set with the\nright states contained in $\\Omega$, but they are not orthogonal to the transient\nstates that decay into $\\Omega$. To make them so we use Gram--Schmidt orthogonalization. The\nresults are given in Table 1. \n\n\\subsection{The spectral decomposition}\n\nUsing all the eigenstates and eigenvalues given in Table~1 we may write\nthe action of $U_{\\rm T_1}^t$ in terms of its spectral decomposition as\n\\begin{eqnarray} \\label{specdec}\nU_{\\rm T_1}^{t} & = & | 1 \\rangle\\langle 1 |_{\\Omega} + \n(-1)^t|{-1 \\rangle}\\langle{-1 |_\\Omega} + \\sum_{j=0}^{\\infty}\n\\left( 2^{-(2j+1)\/2} \\right)^t|2^{-(2j+1)\/2}\\rangle\\langle\n2^{-(2j+1)\/2}|_{\\rm{I}} \\nonumber \\\\ \n& & + \\sum_{j=1}^{\\infty}(2^{-j})^t \\left[ | {+2^{-j}}\n\\rangle\\langle {+2^{-j} |_\\Omega} +\n|2^{-j}\\rangle\\langle 2^{-j}|_{\\rm{I}} \\right] \\nonumber \\\\ \n& & + \\sum_{j=1}^{\\infty}(-2^{-j})^t | {-2^{-j}}\n\\rangle\\langle {-2^{-j} |_\\Omega} +\n\\sum_{j=0}^{\\infty}\\delta_{1,t} | 0_{2j+1}\n\\rangle\\langle 0_{J_{2j+1}} |_\\Omega \\nonumber \\\\ \n& & + \\sum_{j=0}^{\\infty}\\delta_{0,t} \\left[ |0_{2j+1}\n\\rangle\\langle 0_{2j+1} |_{\\Omega} + |0_{J_{2j+1}}\n\\rangle\\langle 0_{J_{2j+1}} |_{\\Omega} + 
|0_j\n\\rangle\\langle 0_j|_{\\rm{IV}} \\right], \n\\end{eqnarray}\nwhere the subscript on the bra states also identifies their dual ket states.\n\n\\clearpage\n\n \n\\begin{table}[p!] \n\\[\n\\begin{array}{|c|c|l|l|} \\hline \n\\mbox{eigenvalue} & \\mbox{degeneracy} & \\mbox{symbol} & \\mbox{eigenvector} \\\\ \\hline \n1 & 1 & | 1 \\rangle_{\\Omega} & \\chi_{\\rm{II}} + \\sqrt{2}\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle 1 |_{\\Omega} & \\frac{(x^*)^2}{2}(\\chi_{\\rm{II}} + \\chi_{\\rm{III}}) \\\\\n& & & \\mbox{}-\\sum_{k=0}^{\\infty}\\left(a_{0,k}\\langle 2^{-(k+1)\/2}|_{\\rm{I}} +\n\\alpha_{0,k}'\\left\\langle 0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n\\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n-1 & 1 & | {-1 \\rangle_\\Omega} & \\chi_{\\rm{II}} - \\sqrt{2}\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle -1 |_{\\Omega} & \\frac{(x^*)^2}{2}(\\chi_{\\rm{II}} - \\chi_{\\rm{III}}) \\\\\n& & &\\mbox{}-\n\\sum_{k=0}^{\\infty}\\left( b_{0,k}\\langle 2^{-(k+1)\/2}|_{\\rm{I}} -\n\\alpha_{0,k}'\\left\\langle 0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n\\hline\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n+\\frac{1}{2^j} & 1 & |{+ 2^{-j} \\rangle_\\Omega} &\nB_{2j}\\left( \\frac{x^* - \\phi(x)}{2x^*} \\right)\\chi_{\\rm{II}} +\n\\frac{\\sqrt{2}}{2^j}B_{2j}\\left( \\frac{\\phi(x) -\nx^*}{\\sqrt{2}x^*}\\right)\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle +2^{-j} |_{\\Omega} & \\frac{\\left( x^{*} \\right)^{4j-2}}\n{(2j)!} \\left[ \\delta^{(2j-1)}_{+}(x-{\\rm{T}}^{(2)}(x_c))\n - \\delta^{(2j-1)}_{-}(x -x^{*}) \\right. \\\\ \n& & & \\mbox{} \\left. 
\n + \\delta^{(2j-1)}_{+}(x - x^{*}) -\n\\delta^{(2j-1)}_{-}(x - {\\rm{T}}^{(1)}(x_c)) \\right] \\\\ & & & - \n\\sum_{k=2j}^{\\infty}\\left( a_{j,k}\\langle 2^{-(k+1)\/2}|_{\\rm{{I}}} +\n\\alpha_{j,k}'\\left\\langle 0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n \\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n-\\frac{1}{2^j} & 1 & |{- 2^{-j} \\rangle_\\Omega} &\nB_{2j}\\left( \\frac{x^* - \\phi(x)}{2x^*} \\right)\\chi_{\\rm{II}} -\n\\frac{\\sqrt{2}}{2^j}B_{2j}\\left( \\frac{\\phi(x) -\nx^*}{\\sqrt{2}x^*}\\right)\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle -2^{-j} |_{\\Omega} & \\frac{\\left( x^{*}\n\\right)^{4j-2}} {(2j)!} \\left[ \\delta^{(2j-1)}_{+}(x-{\\rm{T}}^{(2)}(x_c))\n- \\delta^{(2j-1)}_{-}(x - x^{*}) \\right. \\\\\n& & & \\mbox{} \\left. \n - \\delta^{(2j-1)}_{+}(x - x^{*}) +\n\\delta^{(2j-1)}_{-}(x - {\\rm{T}}^{(1)}(x_c)) \\right] \\\\ & & & - \n\\sum_{k=2j}^{\\infty}\\left( b_{j,k}\\langle 2^{-(k+1)\/2}|_{\\rm{I}} - \\alpha_{j,k}'\\left\\langle\n0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n \\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n0 & \\infty & | 0_{2j+1} \\rangle_{\\Omega} & E_{2j+1}\\left(\n\\frac{\\phi(x)}{x^*} \\right)\\chi_{\\rm{II}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & ( 2 \\times 2 & \\left\\langle 0_{2j+1} \\right|_{\\Omega} &\n-\\left(\\frac{x^*}{\\sqrt{2}}\\right)^{4j+2}\\frac{1}{(2j+1)!}\n\\delta^{(2j+1)}_{+}(x-{\\rm{T}}^{(2)}(x_c)) \\\\\n& \\mbox{Jordan} & &\\mbox{} - \\sum_{k=2j+1}^{\\infty}c_{j,k}\n\\langle 2^{-(k+1)\/2}|_{\\rm{I}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & \\mbox{blocks}) & | 0_{J_{2j+1}} \\rangle_{\\Omega} &\n-\\sqrt{2}\\left(E_{2j+1}\\left(\n\\frac{\\sqrt{2}}{x^*} (\\phi(x) - 
x^*)\\right)\\right)\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\left\\langle 0_{J_{2j+1}} \\right|_{\\Omega} &\n\\frac{(x^*)^{4j+2}}{\\sqrt{2}(2\\sqrt{2})^{2j+1}}\\frac{1}{(2j+1)!}\n\\delta^{(2j+1)}_{-}(x-{\\rm{T}}^{(1)}(x_c)) \\\\\n& & & \\mbox{}-\n\\sum_{k=2j+1}^{\\infty}\\gamma^{'}_{j,k}\\left\\langle 0_{k}\\right|_{\\rm{IV}}\n\\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n \\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n\\left( \\frac{1}{\\sqrt{2}}\\right)^{j+1} & 1 & |2^{-(j+1)\/2}\\rangle_{\\rm{I}} &\nx^j\\chi_{\\rm{I}} +\n\\sum_{i=0}a_{i,j}| 2^{-i}\\rangle + b_{i,j}|\n-2^{-i}\\rangle + c_{i,j}|0_i\\rangle \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle 2^{-(j+1)\/2}|_{\\rm{I}} & \\frac{1}{j!}\n\\left( (-1)^j\\delta^{(j)}(x) + \\delta^{(j)}(x-1) \\right) \\\\ \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n\\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n0 & \\infty & |\n0_j \\rangle_{\\rm{IV}} & (-1)^{j+1}x^j\\chi_{\\rm{I}} + (x-1)^{j}\\left(\n\\chi_{\\rm{III}} + \\chi_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\left\\langle 0_j \\right|_{\\rm{IV}} & \\frac{(-1)^j}{j!}\n\\delta^{(j)}(x-1) \\\\ \\hline\n \\end{array} \\]\n\\caption{\\small Elements of the spectral decomposition of the tent map at the first band splitting\npoint. The constants $a_{i,j}\\equiv\\langle\n{2^{-i}}|_{\\Omega}(2)^{-(j+1)\/2}\n\\rangle_{\\rm{I}}$, $b_{i,j}\\equiv\\langle\n-{2^{-i}}|_{\\Omega}(2)^{-(j+1)\/2}\n\\rangle_{\\rm{I}}$ and $c_{i,j}\\equiv\\langle\n0_i|_{\\Omega}(2)^{-(j+1)\/2}\n\\rangle_{\\rm{I}}$ and are given in Appendix C. 
The\nconstants $\\alpha_{i,j}'\\equiv\\langle\n+{2^{-i}}|_\\Omega 0_j \\rangle_{\\rm{IV}}$, $\\beta_{i,j}'\\equiv\\langle -{2^{-i}}|_\\Omega 0_j\n\\rangle_{\\rm{IV}}=-\\alpha_{i,j}'$ and $\\gamma_{i,j}'\\equiv\\langle 0_{J_i}|_\\Omega 0_j\n\\rangle_{\\rm{IV}}$ and\n$\\phi(x)$ is defined in (\\ref{phi}).}\n\\end{table}\n\n\n\\clearpage\n \n\n\n\n\n\n\n\n\\section{Higher band-splitting points}\n\nWe can determine the spectral decomposition of the tent map at any\nband-splitting point (bsp) by generalizing the approach used in the previous\nsection for the first bsp. For the rescaled map at\n$\\alpha = \\sqrt{2}$ we found that by considering its square\nit separated into two parts that were\ndirectly related by a simple change of scale (including a reflection for one\npart) to the tent map at full height where\n$\\alpha = 2$. This relationship also holds generally~\\cite{ProvMac,Heidel}\nbetween the map at the\n$n^{\\rm{th}}$ bsp,\n$(\\alpha_{n} = 2^{2^{-n}})$, and the $(n-1)^{\\rm{th}}$ bsp, \n$(\\alpha_{n-1} = 2^{2^{-(n-1)}} = \\alpha_{n}^2)$.\n\nThe tent map at $\\alpha_n$ on the interval\n$[ {\\rm{T}}^{(2)}_n (x_c), {\\rm{T}}^{(1)}_n (x_c))$,\nwhich contains the attractor, is first stretched to the \nunit interval $[0,1)$ to make the rescaled map ${\\rm R}_n$. \nThe linear function\nthat makes this stretch is \n\\begin{equation} \\label{genphi}\n\\phi_n (x) = \\frac{2x - \\alpha_n (2 - \\alpha_n)}{\\alpha_n(\\alpha_n - 1)}.\n\\end{equation} \nUnder (\\ref{genphi}) the map ${\\rm{T}}_{n}$ transforms to \n${\\rm{R}}_n = \\phi_n \\circ {\\rm{T}}_n \\circ \\phi^{-1}_n$ and\nis given by \n\\begin{equation} \\label{genr}\n{\\rm{R}}_n (x) = \\left\\{ \\begin{array}{lc}\n2-\\alpha_n (1-x) & 0 \\leq x < (\\alpha_n -1)\/ \\alpha_n \\\\\n\\noalign{\\vskip4pt}\n\\alpha_n (1- x) & (\\alpha_n -1)\/ \\alpha_n \\leq x < 1. \n\\end{array} \\right.\n\\end{equation}\n\nWe then compose ${\\rm{R}}_n$ with itself to obtain\n${\\rm{G}}_n \\equiv {\\rm{R}}_n \\circ {\\rm{R}}_n$. 
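The stretch (\ref{genphi}) and the resulting rescaled map (\ref{genr}) can be checked numerically. The following sketch (illustrative only) verifies that $\phi_n$ sends the band edges ${\rm T}^{(2)}_n(x_c)=\alpha_n(2-\alpha_n)/2$ and ${\rm T}^{(1)}_n(x_c)=\alpha_n/2$ to $0$ and $1$, and that ${\rm R}_n = \phi_n \circ {\rm T}_n \circ \phi^{-1}_n$:

```python
def alpha(n):
    """Tent-map height at the n-th band-splitting point, alpha_n = 2**(2**-n)."""
    return 2.0 ** (2.0 ** (-n))

def tent(x, a):
    """T_n(x) = a*x for x < 1/2 and a*(1-x) otherwise."""
    return a * x if x < 0.5 else a * (1.0 - x)

def phi(x, a):
    """The stretch (genphi) of [T^(2)(x_c), T^(1)(x_c)) onto [0, 1)."""
    return (2.0 * x - a * (2.0 - a)) / (a * (a - 1.0))

def phi_inv(u, a):
    return (a * (a - 1.0) * u + a * (2.0 - a)) / 2.0

def R(u, a):
    """The rescaled map (genr)."""
    return 2.0 - a * (1.0 - u) if u < (a - 1.0) / a else a * (1.0 - u)

for n in (1, 2, 3):
    a = alpha(n)
    # the band edges map to 0 and 1 under phi_n
    assert abs(phi(a * (2.0 - a) / 2.0, a)) < 1e-12
    assert abs(phi(a / 2.0, a) - 1.0) < 1e-12
    # conjugacy R_n = phi_n o T_n o phi_n^{-1} on [0, 1)
    for k in range(1, 20):
        u = k / 20.0
        assert abs(R(u, a) - phi(tent(phi_inv(u, a), a), a)) < 1e-9
```

Note that $\phi_n(1/2) = (\alpha_n-1)/\alpha_n$, so the branch points of ${\rm T}_n$ and ${\rm R}_n$ correspond under the conjugacy.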
\nAs is illustrated in Figure 5 for the $2^{\rm{nd}}$ bsp, it can easily be shown that in general: \n\newline\n(a) ${\rm{G}}_n(x)$ in the interval \n$X_{n, {\rm A}} \equiv [0,{\rm{R}}_n^{(4)}(x_c))$ is\ntopologically conjugate to\n${\rm{R}}_{n-1}$ (the rescaled map at higher height) in the interval\n$[0,1)$ as \n\begin{equation} \label{imp1}\n{\rm{R}}_{n-1} = \phi_{n,\rm{A}}\n\circ {\rm{G}}_{n,\rm{A}} \circ \phi_{n,\rm{A}}^{-1} , \n\end{equation}\nwhere\n\begin{equation} \label{impc1}\n\phi_{n,\rm{A}} = 1 - \frac{x}{\alpha_n (\alpha_n -1)},\n \;\;\;\;\; x \in X_{n, {\rm A}}.\n\end{equation}\n(b) ${\rm{G}}_n(x)$ in the interval \n$X_{n, {\rm B}} \equiv [{\rm{R}}_n^{(3)}(x_c),1)$ is\ntopologically conjugate to\n${\rm{R}}_{n-1}$ in the interval $[0,1)$ as\n\begin{equation} \label{imp2}\n{\rm{R}}_{n-1} = \phi_{n,\rm{B}} \circ\n{\rm{G}}_{n,\rm{B}} \circ \phi_{n,\rm{B}}^{-1}, \n\end{equation}\nwhere\n\begin{equation} \label{impc2}\n\phi_{n,\rm{B}} = \frac{x-(2-\alpha_n)}{\alpha_n -1},\n \;\;\;\;\; x \in X_{n, {\rm B}}. \n\end{equation}\n\begin{figure}[htb]\n\begin{center}\n\scalebox{.5}[.5]{\includegraphics{fig41.eps}}\n\parbox{5in}{\caption{\small At the second band-splitting point, ${\rm{G_{2}}}$ \nis conjugate in $X_{2,{\rm A}}$ and $X_{2,{\rm B}}$ to\n$\rm{R_{1}}$ shown in Figure 3. The central transient region \nis shown as $X_{2,{\rm C}}$.}}\n\end{center}\n\end{figure}\n\n\nAt the $n^{\rm{th}}$ bsp the critical trajectory is eventually periodic with period\n$2^{n-1}$ ($n \geq 1$), and therefore the number of discontinuities that appear\nunder time evolution of an initially smooth density is finite. 
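The conjugacies (\ref{imp1})--(\ref{impc2}) admit a direct numerical check at the second bsp. In the sketch below (illustrative only) the inverse maps are obtained by inverting (\ref{impc1}) and (\ref{impc2}) explicitly:

```python
def alpha(n):
    """alpha_n = 2**(2**-n)."""
    return 2.0 ** (2.0 ** (-n))

def R(u, a):
    """The rescaled map (genr)."""
    return 2.0 - a * (1.0 - u) if u < (a - 1.0) / a else a * (1.0 - u)

def G(u, a):
    """G_n = R_n o R_n."""
    return R(R(u, a), a)

n = 2
a, a_prev = alpha(n), alpha(n - 1)

phi_A = lambda x: 1.0 - x / (a * (a - 1.0))        # (impc1), on X_{n,A}
phi_A_inv = lambda u: a * (a - 1.0) * (1.0 - u)
phi_B = lambda x: (x - (2.0 - a)) / (a - 1.0)      # (impc2), on X_{n,B}
phi_B_inv = lambda u: (a - 1.0) * u + (2.0 - a)

for k in range(1, 20):
    u = k / 20.0
    # (imp1): R_{n-1} = phi_A o G_n o phi_A^{-1} on X_{n,A}
    assert abs(R(u, a_prev) - phi_A(G(phi_A_inv(u), a))) < 1e-9
    # (imp2): R_{n-1} = phi_B o G_n o phi_B^{-1} on X_{n,B}
    assert abs(R(u, a_prev) - phi_B(G(phi_B_inv(u), a))) < 1e-9
```

Here $\phi_{n,{\rm A}}^{-1}$ maps $[0,1)$ onto $X_{n,{\rm A}} = [0,\alpha_n(\alpha_n-1))$ with a reflection, and $\phi_{n,{\rm B}}^{-1}$ maps it onto $X_{n,{\rm B}} = [2-\alpha_n,1)$.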
Equations (\\ref{imp1}) --\n(\\ref{impc2}) imply that the band structure at the $n^{\\rm{th}}$ bsp consists of 2 scaled\ncopies of the band structure at the $(n-1)^{\\rm{th}}$ bsp separated by the\ninterval $[ {\\rm{R}}_{n}^{(4)}(x_c),{\\rm{R}}_{n}^{(3)}(x_c)) \\equiv\nX_{n, {\\rm C}}$, which makes one band. Therefore we have the following\n recursion relation for $S_n$, the number of bands at the $n^{{\\rm th}}$ bsp\nfor the map ${\\rm R}_{n}$,\n\\begin{equation}\nS_{n} = 2 S_{n-1} + 1 , \n\\end{equation}\nfor $n \\geq 2$,\nwhere $ S_{1} = 2$. Solving this recursion relation gives \n\\begin{equation}\\label{bandno}\nS_{n} = 2^n + 2^{n-1} - 1, \n\\end{equation}\nfor $n \\geq 1$.\nOf the $S_n$ bands, the invariant density has support only on $2^n$\nbands. The density is transient on the other\n$2^{n-1} -1$ bands. The function space we consider at the $n^{\\rm{th}}$ bsp is\npiecewise polynomial, with each piece extending over one band. \n\n\\subsection{Decomposition on the attractor}\n\nAssociated with the $2^n$ bands that make up \n$\\Omega_{n}$, the attractor at the $n^{\\rm{th}}$ bsp, are $2^n$\n eigen\/Jordan vectors of each polynomial degree\n$j$. We denote an element in the spectrum of\n$U_{{\\rm R}_n}$ associated with states on $\\Omega_{n}$ as $\\lambda^{n}_{k,j}$ and the right\neigen\/Jordan vector associated with it as $ |\\lambda^{n}_{k,j} \\rangle$. The\nsuperscript $n$ stands for the order of the band-splitting point. The index\n$k$ is an integer from $1$ to $2^n$ and distinguishes between the $2^n$\nindependent eigen\/Jordan vectors of degree $j$, at the $n^{\\rm{th}}$ bsp. \n\nSince ${\\rm G}_{n,{\\rm A}}$ and ${\\rm G}_{n,{\\rm B}}$ are topologically conjugate \nto ${\\rm R}_{n-1}$ all three share the same spectrum. 
The map ${\\rm G}_n$ is\nthe union of ${\\rm G}_{n,{\\rm A}}$, ${\\rm G}_{n,{\\rm B}}$ and the \npart of ${\\rm{G}}_{n}$ on the central transient interval $X_{n,{\\rm C}}$.\nThe spectrum of ${\\rm{R}}_{n}$ on $\\Omega_{n}$ is determined from the\nspectrum of ${\\rm{G}}_{n}$ on $\\Omega_{n}$. Consider a non-zero eigenvalue\n$\\lambda^{n-1}_{k,2j}$ of ${\\rm R}_{n-1}$ (we will show later that as at the\nfirst bsp the right states corresponding to non-zero eigenvalues\non the attractor are even-order polynomials), which is\nalso an eigenvalue of ${\\rm{G}}_{n}$ with degeneracy 2 \n(one for ${\\rm G}_{n,{\\rm A}}$ and one for ${\\rm G}_{n,{\\rm B}}$). \nUsing arguments parallel to those that we used at the first bsp\nwe deduce that ${\\rm{R}}_{n}$ has two distinct eigenvalues \n$+\\sqrt{\\lambda^{n-1}_{k,2j}}$ and $-\\sqrt{\\lambda^{n-1}_{k,2j}}$. By induction \n${\\rm{R}}_{n+1}$ on $\\Omega_{n+1}$ has in its spectrum the eigenvalues\n$+(\\lambda^{n-1}_{k,2j})^{1\/4}$, $-(\\lambda^{n-1}_{k,2j})^{1\/4}$,\n$+i(\\lambda^{n-1}_{k,2j})^{1\/4}$ and $-i(\\lambda^{n-1}_{k,2j})^{1\/4}$.\nThus from\nthe non-zero eigenvalues at the $0^{\\rm{th}}$ bsp we can determine \nthe non-zero eigenvalues at the\n$n^{\\rm {th}}$ bsp by taking square roots of $2^{-2j}$\nrecursively, $n$ times. Thus \n\\begin{equation} \\label{eig1}\n\\lambda^{n}_{k,2j} =\n\\left(\\frac{1}{2}\\right)^{{2j}\/{2^n}}\\mbox{exp}\\left(\\frac{2\\pi\nik}{2^n}\n\\right),\n\\end{equation}\nwhere $k=1,2,\\ldots,2^n$ and $j=0,1,2,\\ldots$ .\nNote that for $k \\leq 2^{n-1}$ \n\\begin{equation} \\label{minus}\n\\lambda^{n}_{k,2j} = -\\lambda^{n}_{k+2^{n-1},2j}.\n\\end{equation}\n\nAt the $0^{\\rm{th}}$ bsp the eigenvalue $0$ is infinitely degenerate with\nan associated eigenpolynomial of odd degree for each occurence of the\neigenvalue. In the previous section we saw that at the $1^{\\rm st}$ bsp \nthere are an infinite number of $2 \\times 2$ Jordan blocks associated with\neigenvalue zero, one each for each odd degree. 
We will show by explicit\nconstruction below that this trend continues and at the $n^{\rm{th}}$ bsp, ${\rm R}_{n}$\nhas a $2^n \times 2^n$ Jordan block for every odd degree. \n\n\subsubsection{Right states on the attractor}\n\nTo determine the right states on $\Omega_{n}$ with non-zero eigenvalues we begin similarly to\n$(20)$ by writing\nthe right states at the\n$n^{\rm{th}}$ bsp in terms of those at the $(n-1)^{\rm th}$ bsp as \n\alpheqn \n\begin{eqnarray}\n |\lambda^{n}_{k,2j} \rangle & = & |\lambda^{n-1}_{k,2j}\n\rangle_{{\rm G}_{n,{\rm A}}} +c^{n}_{k,2j} |\lambda^{n-1}_{k,2j}\n\rangle_{{\rm G}_{n,{\rm B}}} \\ \n |\lambda^{n}_{k+2^{n-1},2j} \rangle & = & |\lambda^{n-1}_{k,2j}\n\rangle_{{\rm G}_{n,{\rm A}}} +d^{n}_{k,2j} |\lambda^{n-1}_{k,2j}\n\rangle_{{\rm G}_{n,{\rm B}}},\n\end{eqnarray}\n\reseteqn\nwhere $k = 1,2,\ldots,2^{n-1}$. In equation (52) and\nin the remaining part of this subsection kets without\nany subscript denote right states of ${\rm{R}}$ at the appropriate bsp. \nConjugacies (\ref{imp1}) and (\ref{imp2}) imply that the eigenvectors \nof ${\rm G}_{n,{\rm A}}$ and ${\rm G}_{n,{\rm B}}$ are related to those \nof ${\rm R}_{n-1}$ as\n\alpheqn \n\begin{eqnarray}\n|\lambda^{n-1}_{k,2j}\rangle_{{\rm G}_{n,{\rm A}}} & = & \nU_{\phi^{-1}_{n,\rm{A}}} |\lambda^{n-1}_{k,2j} \rangle \\ \n|\lambda^{n-1}_{k,2j}\rangle_{{\rm G}_{n,{\rm B}}} & = & \nU_{\phi^{-1}_{n,\rm{B}}} |\lambda^{n-1}_{k,2j} \rangle. \n\end{eqnarray}\n\reseteqn\nUsing this in (52) gives\n\alpheqn \n\begin{eqnarray} \label{onea} \n |\lambda^{n}_{k,2j} \rangle & =\n& U_{\phi^{-1}_{n,\rm{A}}} |\n\lambda^{n-1}_{k,2j} \rangle +c^{n}_{k,2j}U_{\phi^{-1}_{n,\rm{B}}} \n |\lambda^{n-1}_{k,2j} \rangle \\ \label{oneb}\n |\lambda^{n}_{k+2^{n-1},2j} \rangle & =\n& U_{\phi^{-1}_{n,\rm{A}}} |\n\lambda^{n-1}_{k,2j} \rangle +d^{n}_{k,2j}U_{\phi^{-1}_{n,\rm{B}}} \n|\lambda^{n-1}_{k,2j} \rangle. 
\n\\end{eqnarray}\n\\reseteqn\nThe state $U_{\\phi^{-1}_{n,\\rm{A}}}|\\lambda^{n-1}_{k,2j} \\rangle$\nhas support only in $X_{n,{\\rm{A}}}$ and \n$U_{\\phi^{-1}_{n,\\rm{B}}} |\\lambda^{n-1}_{k,2j}\n\\rangle$ has support only in $X_{n,{\\rm{B}}}$. \n\nWe now use the relation\n\\begin{equation} \\label{pmyst}\n {\\rm{R}}_{n}\\circ \\phi_{n,\\rm{B}}^{-1} = \\phi_{n,\\rm{A}}^{-1}, \n\\end{equation}\nwhich implies that \n\\begin{equation} \\label{myst}\nU_{{\\rm{R}}_{n}}U_{\\phi_{n,\\rm{B}}^{-1}} =\nU_{\\phi_{n,\\rm{A}}^{-1}}.\n\\end{equation} \n(Even though (\\ref{pmyst}) and (\\ref{myst}) are written \nspecifically at the band-splitting points, they are valid for all\n$\\alpha \\in (1,2]$.)\nActing on (54) by $U_{{\\rm{R}}_n}$ and using (\\ref{myst}) we get\n\\alpheqn\n\\begin{eqnarray} \\label{xiaoa}\n\\lambda^{n}_{k,2j} | \\lambda^{n}_{k,2j} \\rangle & =\n& U_{{\\rm{R}}_n}U_{\\phi^{-1}_{n,\\rm{A}}}|\n\\lambda^{n-1}_{k,2j} \\rangle +c^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle \\\\ \\label{xiaob} \n-\\lambda^{n}_{k,2j}| \\lambda^{n}_{k+2^{n-1},2j} \\rangle & =\n& U_{{\\rm{R}}_n}U_{\\phi^{-1}_{n,\\rm{A}}} |\n\\lambda^{n-1}_{k,2j} \\rangle +d^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle, \n\\end{eqnarray}\n\\reseteqn\nwhere (\\ref{minus}) has been used on the lhs of (\\ref{xiaob}). 
Since\n$U_{{\\rm{R}}_n}$ has the flip property that any function with support \nonly in $X_{n,\\rm{B}}$ will go entirely to\n$X_{n,\\rm{A}}$ in one iteration and vice-versa we know that\n$U_{{\\rm{R}}_n}U_{\\phi^{-1}_{n,\\rm{A}}}|\\lambda^{n-1}_{k,2j} \\rangle$\n has support only in $X_{n,{\\rm{B}}}$ and $U_{\\phi^{-1}_{n,\\rm{A}}} \n|\\lambda^{n-1}_{k,2j} \\rangle$ has support only in $X_{n,\\rm{A}}$.\nMultiplying (\\ref{onea}) by $\\lambda^{n}_{k,2j}$ and (\\ref{oneb}) by\n$-\\lambda^{n}_{k,2j}$ and identifying the components with support only\nin $X_{n,{\\rm A}}$ with the corresponding components in (57)\ngives\n\\alpheqn\n\\begin{eqnarray}\nc^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle & = & \\lambda^{n}_{k,2j}\nU_{\\phi^{-1}_{n,\\rm{A}}} |\\lambda^{n-1}_{k,2j} \\rangle \\\\ \nd^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle & = & -\\lambda^{n}_{k,2j}\nU_{\\phi^{-1}_{n,\\rm{A}}}|\\lambda^{n-1}_{k,2j} \\rangle,\n\\end{eqnarray}\n\\reseteqn\nshowing that $c^{n}_{k,2j} = \\lambda^{n}_{k,2j}$ and $d^{n}_{k,2j} = -\\lambda^{n}_{k,2j}$. \nUsing this result gives the pair of recursion relations\n\\alpheqn\n\\begin{eqnarray} \\label{three}\n| \\lambda^{n}_{k,2j} \\rangle & =\n& \\left( U_{\\phi^{-1}_{n,\\rm{A}}}\n+\\lambda^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{B}}} \\right) \n |\\lambda^{n-1}_{k,2j} \\rangle \\\\ \n| \\lambda^{n}_{k+2^{n-1},2j} \\rangle & =\n& \\left( U_{\\phi^{-1}_{n,\\rm{A}}} \n-\\lambda^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{B}}} \\right)\n |\\lambda^{n-1}_{k,2j} \\rangle ,\n\\end{eqnarray}\n\\reseteqn \nwhich express the right states at the $n^{\\rm{th}}$ bsp in terms \nof those at the $(n-1)^{\\rm{th}}$ bsp. \n\nThese recursion relations can be solved to write the right\nstates at the\n$n^{\\rm{th}}$ bsp in terms of the right states at the $0^{\\rm{th}}$ ($\\alpha =\n2$) bsp. 
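The relation (\ref{pmyst}) underlying this argument is easy to confirm numerically at the band-splitting heights $\alpha_n \leq \sqrt{2}$, where the falling branch of ${\rm R}_n$ covers all of $X_{n,{\rm B}}$ (illustrative sketch):

```python
def alpha(n):
    """alpha_n = 2**(2**-n), the heights at the band-splitting points."""
    return 2.0 ** (2.0 ** (-n))

def R(u, a):
    """The rescaled map (genr)."""
    return 2.0 - a * (1.0 - u) if u < (a - 1.0) / a else a * (1.0 - u)

for n in (1, 2, 3, 4):
    a = alpha(n)
    phi_A_inv = lambda u, a=a: a * (a - 1.0) * (1.0 - u)  # inverse of (impc1)
    phi_B_inv = lambda u, a=a: (a - 1.0) * u + (2.0 - a)  # inverse of (impc2)
    for k in range(20):
        u = k / 20.0
        # (pmyst): R_n o phi_{n,B}^{-1} = phi_{n,A}^{-1}
        assert abs(R(phi_B_inv(u), a) - phi_A_inv(u)) < 1e-12
```

Analytically, $\alpha_n(1 - [(\alpha_n-1)u + 2-\alpha_n]) = \alpha_n(\alpha_n-1)(1-u)$, which is exactly $\phi_{n,{\rm A}}^{-1}(u)$.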
For notational convenience we define\n\\begin{equation} \\label{four}\n\\begin{array}{lcl}\n\\widehat{{\\rm{A}}}_{i} & \\equiv & U_{\\phi^{-1}_{i,{\\rm{A}}}} \\\\\n\\widehat{{\\rm B}}_{i} & \\equiv & U_{\\phi^{-1}_{i,{\\rm{B}}}} \\\\\n\\widehat{{\\rm B}}^{n}_{k,2j,i} & \\equiv &\n\\left(\\lambda^{n}_{k,2j}\\right)^{i}\\widehat{{\\rm{B}}}_{i}\n\\end{array}\\end{equation}\nLet $\\sigma_i$ ($i=1,2,\\dots,n$) be either $0$ or $1$ and we define $\\widehat\n{\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}$ to be an ordered \n$n$-product of $\\widehat{{\\rm{A}}}_{i}$'s and $\\widehat{{\\rm\nB}}^{n}_{k,2j,i}$'s, where if $\\sigma_i = 1$ then the $i^{\\rm {th}}$\nlocation in the $n$-product (counting from the right) will be taken by\n$\\widehat{{\\rm B}}^{n}_{k,2j,i}$ and if $\\sigma_i = 0$ then the $i^{\\rm {th}}$\nlocation will be taken by $\\widehat{{\\rm{A}}}_{i}$. Solving (59)\ngives \n\\begin{equation} \\label{five}\n| \\lambda^{n}_{k,2j} \\rangle = \n\\sum_{\\{\\sigma \\}}\\widehat{\\Pi}_{\\sigma_n\\sigma_{n-1}...\\sigma_1} |\n\\lambda^{0}_{1,2j} \\rangle.\n\\end{equation}\nThe sum in (\\ref{five}) is over all possible $\\sigma$-strings \nof $0$'s and $1$'s of length $n$ and so consists of\n$2^n$ terms ($n$-products). The order of the operators in each $n$-product \nmust be strictly observed \nsince the operators involved do not commute. \n\nTo illustrate (\\ref{five}) we write it out explicitly for $n = 1$ and $2$.\nFor $n=1$ (\\ref{five}) gives\n\\begin{equation} \\label{example1}\n| \\lambda^{1}_{k,2j} \\rangle = \\widehat{\\rm{A}}_{1} |\n\\lambda^{0}_{1,2j} \\rangle + \\widehat{\\rm B}^{1}_{k,2j,1} |\n\\lambda^{0}_{1,2j} \\rangle.\n\\end{equation}\nThis agrees with the expression (\\ref{fbspurstates})\n(corresponding to $k=1$ and $k=2$) we had for the\nright eigenstates at the first bsp. 
\nFor $n=2$ (\\ref{five}) gives\n\\begin{eqnarray}\n| \\lambda^{2}_{k,2j}\\rangle & = &\n\\widehat{\\rm{A}}_{2}\\widehat{\\rm{A}}_{1} |\n\\lambda^{0}_{1,2j} \\rangle + \\widehat{\\rm{A}}_{2}\\widehat{\\rm B}^{2}_{k,2j,1} |\n\\lambda^{0}_{1,2j} \\rangle \\nonumber \\\\\n& & \\mbox{} + \\widehat{\\rm B}^{2}_{k,2j,2} \\widehat{\\rm A}_{1} |\n\\lambda^{0}_{1,2j} \\rangle + \\widehat{\\rm B}^{2}_{k,2j,2}\\widehat{\\rm B}^{2}_{k,2j,1}\n | \\lambda^{0}_{1,2j} \\rangle.\n\\end{eqnarray}\n\nNow we prove by induction that there is a $2^n \\times\n2^n$ Jordan block associated with the eigenvalue $0$ at the $n^{\\rm{th}}$ bsp for\neach odd order $2j+1$. In section 2 it was shown that this statement is\ntrue for the first bsp. Assume that this statement is true at the $(n-1)^{\\rm{th}}$ bsp.\nWe denote the Jordan vectors as $| 0^{n-1}_{k,2j+1} \\rangle$ where \n$k=2,3,\\dots,2^{n-1}$ ($| 0^{n-1}_{1,2j+1} \\rangle$ is the eigenvector of the block). They satisfy\n\\alpheqn\n\\begin{eqnarray} \\label{jorddef}\nU_{{\\rm R}_n}| 0^{n-1}_{k,2j+1} \\rangle & = & | 0^{n-1}_{k-1,2j+1}\n\\rangle, \\;\\;\\;\\;\\;\\;\\;\\; k \\neq 1 \\\\ \nU_{{\\rm R}_n} | 0^{n-1}_{1,2j+1} \\rangle & = & 0.\n\\end{eqnarray}\n\\reseteqn \nSince we are assuming that $U_{{\\rm{R}}_{n-1}}$ has $2^{n-1} \\times 2^{n-1}$ \nJordan blocks, the conjugacies \n(\\ref{imp1}) and (\\ref{imp2}) imply that ${\\rm{G}}_{n,{\\rm A}}$ and ${\\rm{G}}_{n,{\\rm B}}$\nboth have $2^{n-1} \\times 2^{n-1}$ Jordan blocks with states given by\n\\alpheqn \n\\begin{eqnarray} \n| 0^{n}_{k,2j+1} \\rangle_{{\\rm{G}}_{n,A}} & = & \n\\widehat{{\\rm{A}}}_{n}| 0^{n-1}_{k,2j+1} \\rangle \\\\ \n| 0^{n}_{k,2j+1} \\rangle_{{\\rm{G}}_{n,B}} & = & \\label{jzerob}\n\\widehat{{\\rm{B}}}_{n}| 0^{n-1}_{k,2j+1} \\rangle,\n\\end{eqnarray}\n\\reseteqn\nwhere $k=1,2,\\dots,2^{n-1}$. 
Since \n${{\\rm{G}}_{n}}$ on $\\Omega_{n}$ has two $2^{n-1} \\times 2^{n-1}$ Jordan blocks\nfor each $j$, ${{\\rm{R}}_{n}}$ on $\\Omega_{n}$ can either have two \n$2^{n-1} \\times 2^{n-1}$ Jordan blocks for each $j$ or have one $2^{n}\n\\times 2^{n}$ Jordan block for each $j$. The first case implies that \n$U_{{\\rm R}_n}$ have two null vectors $| 0^{n}_{1,2j+1}\n\\rangle_{{\\rm{G}}_{n,{\\rm A}}}$ and $| 0^{n}_{1,2j+1}\n\\rangle_{{\\rm{G}}_{n,{\\rm B}}}$for each $j$. But $U_{{\\rm R}_n}\n| 0^{n}_{1,2j+1} \\rangle_{{\\rm{G}}_{n,{\\rm B}}} \\neq 0$, since no function with\nsupport in $X_{n,\\rm{B}}$ can vanish in one iteration under\n$U_{{\\rm R}_n}$. Therefore \n$U_{{\\rm R}_n}$ has a $2^n \\times 2^n$ Jordan block for \neach odd degree $2j+1$. This\ncompletes the proof by induction. \n\nA null state of $U_{{\\rm R}_n}$ has to be a null state of\n$U_{{\\rm G}_n}$ too. Therefore the null state of\n$U_{{\\rm R}_n}$ for each $j$ is given by \n\\begin{equation} \\label{z1}\n| 0^{n}_{1,2j+1} \\rangle = \n\\widehat{{\\rm{A}}}_{n}| 0^{n-1}_{1,2j+1} \\rangle.\n\\end{equation}\nWe use the relation \n\\begin{equation}\n{\\rm R}_n \\circ \\phi^{-1}_{n, {\\rm A}} = \n\\phi^{-1}_{n, {\\rm B}} \\circ {\\rm R}_{n-1}\n\\end{equation}\nwhich implies that\n\\begin{equation} \\label{myst1}\nU_{{\\rm R}_n}\\widehat{{\\rm{A}}}_{n} = \n\\widehat{{\\rm{B}}}_{n}U_{{\\rm R}_{n-1}}.\n\\end{equation}\nUnlike (\\ref{myst}), equation (\\ref{myst1}) \nis true only at the band-splitting points. Equation (\\ref{myst1}) can be used to\nverify that if we act on both sides of (\\ref{z1}) by $U_{{\\rm R}_n}$ its rhs reduces to zero.\n\nThe right state $| 0^{n}_{1,2j+1} \\rangle_{{\\rm G}_{n,{\\rm B}}}$ \nis a good candidate for the Jordan state $| 0^{n}_{2,2j+1} \\rangle$ since under one iteration by \n$U_{{\\rm R}_n}$ it will have support only in $X_{n, \\rm{A}}$ and under \ntwo iterations of $U_{{\\rm R}_n}$ it will vanish. 
Tentatively, from \n(\ref{jzerob}) we write \n\begin{equation} \label{z2}\n| 0^{n}_{2,2j+1} \rangle = \n\widehat{{\rm{B}}}_{n} | 0^{n-1}_{1,2j+1} \rangle.\n\end{equation}\nThis guess for the Jordan state can be verified by using \nrelation (\ref{myst}). Similarly (\ref{myst1})\ncan be used to show that \n\begin{equation} \label{z3}\n| 0^{n}_{3,2j+1} \rangle = \n\widehat{{\rm{A}}}_{n}| 0^{n-1}_{2,2j+1} \rangle,\n\end{equation}\nand equation (\ref{myst}) can be used to show that \n\begin{equation} \label{z4}\n| 0^{n}_{4,2j+1} \rangle = \n\widehat{{\rm{B}}}_{n}| 0^{n-1}_{2,2j+1} \rangle.\n\end{equation}\nIn general, we find \n\begin{eqnarray} \label{r1}\n| 0^{n}_{k,2j+1} \rangle & = & \n\widehat{{\rm{A}}}_{n} | 0^{n-1}_{\lceil k\/2 \rceil,2j+1} \rangle \;\;\;\;\n\mbox{for} \; k \; \mbox{odd} \\ \nonumber\n| 0^{n}_{k,2j+1} \rangle & = & \n\widehat{{\rm{B}}}_{n} | 0^{n-1}_{k\/2 ,2j+1} \rangle \;\;\;\;\;\;\n\mbox{for} \; k \; \mbox{even},\n\end{eqnarray}\nwhere $k=1,2,\dots,2^n$ and $\lceil q \rceil$ is the ceiling function ($q$ if $q$ is an integer, otherwise the smallest integer greater than $q$). 
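The recursion (\ref{r1}) can be unwound mechanically. The sketch below (illustrative only) reconstructs the ordered operator strings and confirms that the branch choices are exactly the binary digits of $k-1$:

```python
import math

def op_string(n, k):
    """Unwind recursion (r1): the ordered operators Op_n ... Op_1 (left to right)
    that build |0^n_{k,2j+1}> from the full-height null vector |0^0_{1,2j+1}>."""
    ops = []
    for level in range(n, 0, -1):
        if k % 2 == 1:                    # k odd  -> A-hat, k -> ceil(k/2)
            ops.append("A%d" % level)
            k = math.ceil(k / 2)
        else:                             # k even -> B-hat, k -> k/2
            ops.append("B%d" % level)
            k //= 2
    assert k == 1                         # every chain ends on |0^0_{1,2j+1}>
    return " ".join(ops)

# n = 2 reproduces the four paths of Figure 6
assert [op_string(2, k) for k in (1, 2, 3, 4)] == ["A2 A1", "B2 A1", "A2 B1", "B2 B1"]

# k - 1 in binary encodes the operator string: sigma_i = 1 iff B-hat sits at
# level i, read here with sigma_1 (level 1) as the most significant bit
for n in (2, 3, 4):
    for k in range(1, 2 ** n + 1):
        sigmas = [1 if s[0] == "B" else 0 for s in op_string(n, k).split()]
        assert int("".join(str(b) for b in reversed(sigmas)), 2) + 1 == k
```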
\nThese recursions are illustrated in Figure~6 up to $n=2$.\n\\setlength{\\unitlength}{1mm}\n\\begin{figure}\n\\begin{center}\n\\begin{picture}(67,60)\n\\put(67,33){$|0^{0}_{1,2j+1}\\rangle$}\n\\put(66,34){\\vector(-1,1){15}}\n\\put(66,34){\\vector(-1,-1){15}}\n\\put(58,43){${\\widehat{\\rm A}}_{1}$}\n\\put(58,22){${\\widehat{\\rm B}}_{1}$}\n\\put(36,49){$|0^{1}_{1,2j+1}\\rangle$}\n\\put(36,18){$|0^{1}_{2,2j+1}\\rangle$}\n\\put(35,50){\\vector(-2,1){20}}\n\\put(35,50){\\vector(-2,-1){20}}\n\\put(35,19){\\vector(-2,1){20}}\n\\put(35,19){\\vector(-2,-1){20}}\n\\put(24,56.5){${\\widehat{\\rm A}}_{2}$}\n\\put(24,9){${\\widehat{\\rm B}}_{2}$}\n\\put(24,40){${\\widehat{\\rm B}}_{2}$}\n\\put(24,25){${\\widehat{\\rm A}}_{2}$}\n\\put(0,60){$|0^{2}_{1,2j+1}\\rangle$}\n\\put(0,39){$|0^{2}_{2,2j+1}\\rangle$}\n\\put(0,28){$|0^{2}_{3,2j+1}\\rangle$}\n\\put(0,8){$|0^{2}_{4,2j+1}\\rangle$}\n\\put(2,0){$n=2$}\n\\put(38,0){$n=1$}\n\\put(69,0){$n=0$}\n\\end{picture}\n\\parbox{5in}{\\caption{\\small States associated with eigenvalue zero obtained from the\naction of $\\widehat{{\\rm{A}}}_{n}$ and $\\widehat{{\\rm{B}}}_{n}$ on $|0^{n-1}_{k,2j+1}\\rangle$. }}\n\\end{center}\n\\end{figure}\nThis recursion relation can then be solved to write the eigen\/Jordan\nvectors at the $n^{\\rm{th}}$ bsp in terms of the null vectors at the $0^{\\rm{th}}$ bsp.\n To write down a compact solution we define $\\Pi_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}$, which is\nsimilar to the $\\widehat {\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}$ previously defined. \n We define $\\Pi_{\\sigma_n\\sigma_{n-1}...\\sigma_1}$ to be an ordered \n$n$-product of $\\widehat{{\\rm{A}}}_{i}$'s and $\\widehat{{\\rm{B}}}_{i}$'s.\nIf $\\sigma_i = 1$ then the $i^{\\rm {th}}$ location in the $n$-product\n(counting from the right) will be taken by $\\widehat{{\\rm{B}}}_{i}$ and if\n$\\sigma_i = 0$ then the $i^{\\rm {th}}$ location will be taken by \n$\\widehat{{\\rm{A}}}_{i}$. 
With each $\\Pi_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}$\nwe associate a binary\nnumber formed from the string of $1$'s and $0$'s as $\\kappa = \\sigma_n\\sigma_{n-1}\\dots\\sigma_1 +1$.\nSolving the recursion relation (\\ref{r1}) we get\n\\begin{equation} \\label{jord}\n| 0^{n}_{\\kappa,2j+1} \\rangle = \\Pi_{\\sigma_n\\sigma_{n-1}...\\sigma_1}\n| 0^{0}_{1,2j+1} \\rangle,\n\\end{equation}\nwhere $| 0^{0}_{1,2j+1} \\rangle$ is the null vector of degree $2j+1$ of the\ntent map with full height, and $\\kappa$ here is the decimal equivalent of the binary $\\kappa$,\nwhich ranges from $1$ to $2^n$. \n\n\\subsubsection{Left states on the attractor}\nWe obtain the left states, $ \\langle \\lambda^{n}_{k,j}\n|$, which are orthonormal to the right states given by\n(\\ref{five}) and (\\ref{jord})\nby taking the duals of those expressions.\nThe dual expression of (\\ref{five}) gives the left \nstates corresponding to the non-zero eigenvalues as \n\\begin{equation}\n\\langle \\lambda^{n}_{k,2j} | =\n\\sum_{\\{\\sigma\\}}\\widehat{\\Pi}^{\\dag}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}\n\\frac{\\langle\n\\lambda^{0}_{1,2j}|}{2^n},\n\\end{equation}\nwhere the $n$-product here is of $(\\widehat{{\\rm{A}}}_{i}^{-1})^{\\dag}$'s and \n$((\\widehat{{\\rm B}}^{n}_{k,2j,i})^{-1})^{\\dag}$'s and the factor of $1\/2^n$ is put for\nnormalization. The left states corresponding to the Jordan vectors associated with the\neigenvalue $0$ are \n\\begin{equation} \\label{leftjord}\n\\langle 0^{n}_{\\kappa,2j+1} | =\n\\Pi^{\\dag}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1} \\langle 0^{0}_{1,2j+1}|,\n\\end{equation}\nwhere ${\\Pi^{\\dag}_{\\sigma_n\\sigma_{n-1}...\\sigma_1}}$ is an \nordered $n$-product of $(\\widehat{{\\rm{A}}}_{i}^{-1})^{\\dag}$'s and \n$(\\widehat{{\\rm B}}_{i}^{-1})^{\\dag}$'s.\n\n\\subsection{Decay onto the attractor of the rescaled map}\n\nWe saw in section 2.1 that $\\rm{R}_{1}$ has no transient bands. 
\nThe band structure of $\\rm{G}_2$\nconsists of two scaled copies of that of $\\rm{R}_{1}$ separated by the central interval \n$X_{2,{\\rm C}}$. This central interval is a transient band of $\\rm{R}_{2}$. Since\n${\\rm{G}}_{3,{\\rm A}}$ and ${\\rm{G}}_{3,{\\rm B}}$ have a band structure similar to\nthat of ${\\rm{R}}_{2}$ both of them have a transient interval also. We refer\nto these two transient bands as peripheral transients since in\naddition there is a central transient in the interval $X_{3,{\\rm C}}$. \nIn general, as discussed below (\\ref{bandno}),\nat the $n^{\\rm{th}}$ bsp ${\\rm{R}}_{n}$ has $2^{n-1} - 1$ transient bands, of which \n$2^{n-1} - 2$ are peripheral transient bands. At each bsp\nall transient bands except the central one are rescaled versions of\nthe central transient bands at previous band-splitting points.\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig51.eps}}\n\\parbox{5in}{\\caption{\\small Band structure at the $1^{\\rm{st}}$, $2^{\\rm{nd}}$ and \n$3^{\\rm{rd}}$ band-splitting points. The central transient $X_{2,{\\rm C}}$ at the $2^{\\rm{nd}}$\nbsp transforms into two peripheral transients at the $3^{\\rm{rd}}$ bsp.}}\n\\end{center}\n\\end{figure} \n\nUnder the map ${\\rm{R}}_{n}$ the inverse image of any point in the central\ninterval $X_{n,{\\rm C}}$ is contained within the interval itself. This implies that if\na function initially has no support in $X_{n,{\\rm C}}$, it will continue to have no\nsupport in $X_{n,{\\rm C}}$ under repeated iterations by $U_{{\\rm R}_n}$.\nThe spectrum and the form of the eigen\/Jordan vector in\nthe central interval at all the band-splitting points can be obtained by\ninspection. 
We notice that \n\\begin{equation} \\label{trans1}\nU_{{\\rm R}_n}\\left[ \\left(x-x^{*}_{n}\\right)^{j}\\chi_{n,{\\rm C}}\\right] =\n(-1)^{j}\\left(\\frac{1}{\\alpha_{n}}\\right)^{j+1}\\left\\{\\left(x-x^{*}_{n}\\right)^{j}\n\\chi_{n,{\\rm C}} +\n\\left(x-x^{*}_{n}\\right)^{j}\\chi_{n,b} \\right\\},\n\\end{equation}\nwhere $\\chi_{n,{\\rm C}}$ is the indicator function on \n$X_{n,{\\rm C}}$ and $\\chi_{n,b} =1$ if \n$x \\in [{\\rm{R}}^{(3)}_{n}(x_c), {\\rm{R}}^{(5)}_{n}(x_c) )$ and $0$ otherwise. The\nform of the eigen\/Jordan vector in $X_{n,{\\rm C}}$ \nassociated with decay out of the central interval is thus\n$\\left(x-x^{*}_{n}\\right)^{j}$. The complete form will be determined below.\nThe eigenvalues associated with decay out of\nthe central transient at the $n^{\\rm{th}}$ bsp are seen by (\\ref{trans1}) to be\n\\begin{equation} \\label{trans2}\n\\phi^{n,0}_{1,j} = (-1)^{j}\\left(\\frac{1}{\\alpha_{n}}\\right)^{j+1}\n\\end{equation}\n \nNext we obtain the complete spectrum at the $n^{\\rm{th}}$ bsp associated with\nthe decay out of all the transient regions onto $\\Omega_{n}$. This is done\nby transforming all the central transients at band-splitting points of\norder less than $n$. 
Of all the peripheral transient bands at the $n^{\\rm th}$\nbsp, $2^{n-2}$ of these are\n$(n-2)$ times rescaled versions of the central transient at ${\\rm{R}}_{2}$, $2^{n-3}$\nof these are $(n-3)$ times rescaled versions of the central transient at \n${\\rm{R}}_{3}$ and so on up to $2$ of these peripheral transients being rescaled\nversions of the central transient at the $(n-1)^{\\rm{th}}$ bsp.\nThis is shown in Figure~7 up to $n=3$.\nWe denote the transient eigenvalues at the $n^{\\rm{th}}$ bsp by\n$\\phi^{n,l}_{k,j}$, where $l$ indicates that it was obtained from the\ncentral transient at the $(n-l)^{\\rm th}$ bsp, where \n$l=0,1,2,\\dots,n-2$, $k$ is\nan integer from $1$ to $2^l$, $j=0,1,2,3,\\dots,$ and as before the right state \n$|\\phi^{n,l}_{k,j} \\rangle$ is piecewise-polynomial of degree $j$ in each of\nthe pieces. The eigenvalues are obtained in a similar fashion to (\\ref{eig1}) as \n\\begin{equation} \\label{eig2} \\phi^{n,l}_{k,j} =\n\\left(\\phi^{n-l,0}_{1,j}\\right)^{{1}\/{2^l}}\\mbox{exp}\\left(\n\\frac{2\\pi ik}{2^l}\\right),\n\\end{equation}\nwhere the $\\phi^{n,0}_{1,j}$ are given by (\\ref{trans2}). \nFor even values of $j$ there are degeneracies in the spectrum while for odd\nvalues of $j$ there are no degeneracies. We consider first the states associated\nwith the degenerate eigenvalues.\n\n\\subsubsection{Transient right states of even degree}\nAs seen in (\\ref{eig2}), for a given $l$ (with $n$ and $2j$ fixed) there are $2^l$ distinct\neigenvalues indexed by $k$. At each integer step ($l$) all\nthe eigenvalues of the previous step ($l-1$) are present and $2^{l-1}$ new eigenvalues \nappear. 
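The nesting of the transient spectrum described here can be checked numerically. The sketch below (Python, an illustration rather than part of the derivation) evaluates (\ref{eig2}) together with (\ref{trans2}) for $j=0$; it assumes the scaling $\alpha_{n-l} = \alpha_n^{2^l}$, which is what Table 2 implies when all entries share the modulus $1/\alpha_5$, and verifies that each step $l$ retains every eigenvalue of step $l-1$ while $2^{l-1}$ new ones appear.

```python
import cmath

def transient_eigs(n, j, l, alpha_n):
    """Transient eigenvalues phi^{n,l}_{k,j}, k = 1..2^l, from (eig2),
    with phi^{n-l,0}_{1,j} = (-1)**j (1/alpha_{n-l})**(j+1) from (trans2).
    alpha_{n-l} = alpha_n**(2**l) is an assumption read off from Table 2."""
    alpha_nl = alpha_n ** (2 ** l)
    phi_central = (-1) ** j * (1.0 / alpha_nl) ** (j + 1)
    root = complex(phi_central) ** (1.0 / 2 ** l)  # principal 2^l-th root
    return [root * cmath.exp(1j * 2 * cmath.pi * k / 2 ** l)
            for k in range(1, 2 ** l + 1)]

# At the n-th bsp (n = 5 here, any alpha_n > 1): each step l keeps every
# eigenvalue of step l-1, 2^(l-1) new ones appear, and for j = 0 all
# eigenvalues have modulus 1/alpha_n.
n, alpha_n = 5, 1.1
prev = transient_eigs(n, 0, 0, alpha_n)
for l in range(1, n - 1):
    cur = transient_eigs(n, 0, l, alpha_n)
    assert all(min(abs(e - c) for c in cur) < 1e-9 for e in prev)
    new = [c for c in cur if min(abs(c - e) for e in prev) > 1e-9]
    assert len(new) == 2 ** (l - 1)
    prev = cur
assert all(abs(abs(c) - 1 / alpha_n) < 1e-9 for c in prev)
```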
But identical eigenvalues from different steps have disparate $k$ values.\nIt is convenient to rearrange the $k$ index so that degenerate eigenvalues share\nthe same $k$.\\footnote{This can be accomplished by choosing the new $k$'s associated\nwith the $l^{\\rm th}$ step as $k_{2i-1}^l = k_i^{l-1}$ and \n$k_{2i}^l = k_{2i-1}^l + 2^{l-1}$, where $i=1,\\dots,2^{l-1}$ denotes the order of its appearance and\n$k_1^0=1$.} Table 2 contains $\\phi^{5,l}_{k,0}$ for all possible values of \n$l$ and $k$, arranged to illustrate the reordering in $k$.\n\n\\begin{table}[htbp!]\n\\[\n\\begin{array}{|l|cccccccc|} \\hline\n & k=1 & k=2 & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\\\ \\hline\n& \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} \n& \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\\\\n\\phi^{5,0}_{k,0} & \\varphi & & & & & & & \\\\\n& \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} \n& \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\\\\n\\phi^{5,1}_{k,0} & \\varphi & -\\varphi & & & & & & \\\\ \n& \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} \n& \\vspace{-0.4mm} & \\vspace{-0.4mm} &\\\\\n\\phi^{5,2}_{k,0} & \\varphi & -\\varphi & i\\varphi & -i\\varphi & & & & \\\\ \n& \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\vspace{-0.4mm} \n& \\vspace{-0.4mm} & \\vspace{-0.4mm} & \\\\\n\\phi^{5,3}_{k,0} & \\varphi & -\\varphi & i\\varphi & -i\\varphi & \n\\varphi\\exp\\big( \\frac{\\pi i}{4} \\big) &\n\\varphi\\exp\\big( \\frac{3\\pi i}{4} \\big) &\n\\varphi\\exp\\big( \\frac{5\\pi i}{4} \\big)& \n\\varphi\\exp\\big( \\frac{7\\pi i}{4} \\big) \\\\\n\\hline\n\\end{array}\n\\]\n\\caption{Transient spectrum at the $5^{\\rm{th}}$ bsp with $j=0$. The\nconstant $\\varphi \\equiv 1\/\\alpha_{5}$.} \\end{table}\n\nNext we ask if there are independent eigenvectors\nassociated with the degenerate eigenvalues. 
\nUsing the same procedure used to find the right eigenvectors\ndecaying out of region $\\rm{I}$ it can be shown that in the rescaled map,\nfor $n > 2$ there is no eigenpolynomial of even degree with support in the central\ntransient. This means that the right states associated with $\\phi^{n,0}_{1,2j}$\nare Jordan vectors, for $n > 2$. For $n = 2$, decay out of the central\ntransient is described by eigenvectors for all values of $j$. Since the right states associated\nwith \n$\\phi^{3,1}_{k,j}$, $\\phi^{4,2}_{k,j}$,\\dots,$\\phi^{n,n-2}_{k,j}$ are transformed\nversions of $| \\phi^{2,0}_{1,j} \\rangle$, these are also eigenvectors. For\n$n \\geq 3$ decay out of the central transient is described by Jordan \nvectors for even values\nof $j$. Therefore all the peripheral transients which are related by conjugacies\n(\\ref{imp1}) and (\\ref{imp2}) to these central transients are also Jordan\nvectors. Hence associated with the transient spectrum are Jordan blocks whose sizes correspond to the\nalgebraic multiplicity of the eigenvalues. Thus from Table 2 we see that at the $5^{\\rm{th}}$ bsp\nthere is a $4 \\times 4$ Jordan block associated with\n$\\phi^{5,0}_{1,2j}$, a $3 \\times 3$ Jordan block associated with $\\phi^{5,1}_{2,2j}$ and so on. At\nthe $n^{\\rm{th}}$ bsp the largest Jordan block in the transient spectrum is\nassociated with $\\phi^{n,0}_{1,2j}$ and is of size $n-1 \\times n-1$. Since \n$\\phi^{n,0}_{1,0}$ is the largest eigenvalue with modulus less than 1 in the\nspectrum of $U_{\\rm R_n}$, it corresponds to the slowest decay mode. 
Since there is a Jordan block\nassociated with $\\phi^{n,0}_{1,0}$ this decay is modified exponential (polynomial factors in $t$\ntimes exponential decay).\n\nIn general (for even $j$) we have Jordan vectors for $l\\leq n-3$ and eigenvectors for $l=n-2$ as\n\\begin{eqnarray} \\label{t5}\nU_{{\\rm R}_n}|\\phi^{n,l}_{k,2j} \\rangle & = & \\phi^{n,l}_{k,2j}\n|\\phi^{n,l}_{k,2j} \\rangle + |\\phi^{n,l+1}_{k,2j}\n\\rangle \\;\\;\\;\\;\\;\\;\\;\\; n \\geq 3, \\; l \\leq n-3 \\nonumber \\\\ \\noalign{\\vskip4pt}\nU_{{\\rm R}_n}|\\phi^{n,n-2}_{k,2j} \\rangle & = & \\phi^{n,n-2}_{k,2j}\n|\\phi^{n,n-2}_{k,2j} \\rangle \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; n \\geq 2. \n\\end{eqnarray}\nBy inspection we have\nalready obtained the form of the Jordan state in the central region in\n(\\ref{trans1}). We have\n\\begin{equation} \\label{t6} | \\phi^{n,0}_{1,2j} \\rangle =\na_{n,2j}\\left(x-x^*_n\\right)^{2j}\\chi_{n,{\\rm C}} + f_{n,2j}(x) \\end{equation}\nwhere $f_{n,2j}(x)$ is piecewise polynomial with degree $2j$ over each of the\n$S_{n}$ intervals (excluding the central interval\n$X_{n, {\\rm C}}$) at the $n^{\\rm{th}}$ bsp. Since a Jordan vector multiplied by a\nscalar doesn't remain a Jordan vector (with respect to the same eigenvector in the block)\nwe do not have the freedom to choose\nthe $a_{n,2j}$'s to be $1$ as we did in (\\ref{haha}). 
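As an aside, the modified exponential decay produced by a Jordan block can be made concrete: for an $m \times m$ block with eigenvalue $\lambda$, the entries of its $t^{\rm th}$ power are binomial coefficients times powers of $\lambda$, i.e. polynomial factors in $t$ multiplying $\lambda^{t}$. A small numerical sketch (generic $\lambda$, not tied to any particular bsp):

```python
import numpy as np
from math import comb

def jordan_block(lam, m):
    """m x m Jordan block: lam on the diagonal, 1's on the superdiagonal."""
    return np.diag([lam] * m) + np.diag([1.0] * (m - 1), k=1)

lam, m, t = 0.8, 4, 25
Jt = np.linalg.matrix_power(jordan_block(lam, m), t)

# (J^t)_{0,r} = C(t,r) * lam**(t-r): a polynomial in t times exponential decay
for r in range(m):
    assert abs(Jt[0, r] - comb(t, r) * lam ** (t - r)) < 1e-8
```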
Using (\\ref{trans1})\nand (\\ref{t5}) we know that if (\\ref{t6}) is to be a Jordan vector the\narbitrary functions $f_{n,2j}(x)$ must satisfy \n\\begin{equation} \\label{t1} U_{{\\rm R}_n}\nf_{n,2j}(x) = \\phi^{n,0}_{1,2j}\\left[ f_{n,2j}(x) -\na_{n,2j}\\left(x-x^{*}_{n}\\right)^{2j}\\chi_{n, b}\n\\right] + | \\phi^{n,1}_{1,2j} \\rangle\n\\end{equation}\nA formal approach to determine the $a_{n, 2j}$'s and $f_{n, 2j}(x)$ is \ndescribed in Appendix D.\n\nTo find the eigenstates corresponding to the peripheral transients\nwe transform the central transient eigenstates at the $2^{\\rm{nd}}$ bsp\nto all the higher band-splitting points. We define \n\\begin{equation}\n\\check{\\rm B}^{n,n-2}_{k,2j,i} = \\left( \\phi^{n,n-2}_{k,2j} \\right)^{i}\\widehat{\\rm B}_{i},\n\\end{equation}\nand $\\check{\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_{3}}$ is an ordered product of \n$n-2$ operators $\\widehat{\\rm A}_{i}$ and $\\check{\\rm B}^{n,n-2}_{k,2j,i}$ ($i\n= 3,4,\\dots,n$). If $\\sigma_i = 0$ the $i^{\\rm th}$ place in the\nproduct is taken by $\\widehat{\\rm A}_{i}$ and if $\\sigma_i = 1$ the $i^{\\rm\nth}$ place in the product is taken by $\\check{\\rm B}^{n,n-2}_{k,2j,i}$.\nFollowing the same procedure used to obtain (\\ref{five}) we find that the transient\neigenvectors, $| \\phi^{n,n-2}_{k,2j}\\rangle$, $(n\\geq 3)$, are given by\n\\begin{equation} \\label{transeig}\n| \\phi^{n,n-2}_{k,2j} \\rangle = \\sum_{\\{ \\sigma\\}}\n\\check{\\Pi}_{\\sigma_n \\sigma_{n-1}\\dots\\sigma_3}\n| \\phi^{2,0}_{1,j} \\rangle\n\\end{equation}\n\n\nTo find the Jordan states corresponding to the peripheral transients, \n$| \\phi^{n,l}_{k,2j} \\rangle$ for $1 \\leq l \\leq n-3$, \nwe use an approach similar to that used to find the\nJordan vectors describing decay out of the central transient, \n$| \\phi^{n,0}_{1,2j} \\rangle $. 
To clarify the notation we \nnote that $| \\phi^{n,1}_{k,2j} \\rangle $ does not\nhave support in the central transient band, but has \nsupport over all the other transient and attracting bands. Similarly\n$| \\phi^{n,2}_{k,2j} \\rangle $ has support on all bands except the\ncentral transient band and the two transient bands that are related by a single\ntransformation to the central transient at the $(n-1)^{\\rm{th}}$ bsp. This\npattern continues and $| \\phi^{n,n-2}_{k,2j} \\rangle$ has support\non all the attracting bands and the $2^{n-2}$ transient bands that are related\nby transformations to the central transient at the $2^{\\rm{nd}}$ bsp. Following\n(\\ref{t6}) and using the fact that the peripheral transients at the\n$n^{\\rm{th}}$ bsp are transformed versions of the transients at the\n$(n-1)^{\\rm{th}}$ bsp we write\n\\begin{equation} \\label{jord10}\n| \\phi^{n,l}_{k,2j} \\rangle = a^{n,l}_{k,2j}\\left( \n\\widehat{\\rm A}_n| \\phi^{n-1,l-1}_{\\lceil k\/2 \\rceil,2j} \\rangle +\n\\phi^{n,l}_{k,2j} \\widehat{\\rm B}_n | \\phi^{n-1,l-1}_{\\lceil k\/2 \\rceil,2j}\n\\rangle \\right) + g^{n,l}_{k,2j}(x)\n\\end{equation}\nwhere $g^{n,l}_{k,2j}(x)$ is piecewise polynomial of degree $2j$. The\npolynomial $g^{n,l}_{k,2j}(x)$ can be obtained using the procedure similar to\nthe one described in Appendix D. We present here the\nsolution only for the simplest case,\n\\begin{equation} \\label{jord14}\n| \\phi^{n,n-3}_{k,2j} \\rangle = 2 \\, \\phi^{n,n-3}_{k,2j} \\left(\n\\widehat{\\rm A}_n| \\phi^{n-1,n-4}_{\\lceil k\/2 \\rceil,2j} \\rangle\n + \\phi^{n,n-3}_{k,2j}\n\\widehat{\\rm B}_n| \\phi^{n-1,n-4}_{\\lceil k\/2 \\rceil,2j} \\rangle\n\\right) - \\frac{1}{2 \\, \\phi^{n,n-3}_{k,2j}}\n| \\phi^{n,n-2}_{k',2j} \\rangle\n\\end{equation}\nwhere $k'=k+1$ if $k$ is odd and $k'=k-1$ if $k$ is even. 
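The dependencies visible in (\ref{t5}) and (\ref{jord14}), where the state at level $l$ involves states at level $l+1$ and states at the $n^{\rm th}$ bsp involve those at the $(n-1)^{\rm th}$, fix the order in which the Jordan states can be determined. A small sketch of that bookkeeping (an illustration only):

```python
def jordan_state_order(n_max):
    """(n, l) labels of the transient states |phi^{n,l}_{k,2j}> in the order
    they can be determined: bsp by bsp, and within the n-th bsp with l
    descending, since the state at level l involves states at level l + 1."""
    return [(n, l) for n in range(2, n_max + 1)
                   for l in range(n - 2, -1, -1)]

# |phi^{2,0}>, |phi^{3,1}>, |phi^{3,0}>, |phi^{4,2}>, |phi^{4,1}>, |phi^{4,0}>
assert jordan_state_order(4) == [(2, 0), (3, 1), (3, 0), (4, 2), (4, 1), (4, 0)]
```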
The recursion\nrelations for the other Jordan vectors involve more terms, and in general\nto find all the Jordan vectors at the $n^{\\rm{th}}$ bsp one must know\nall the Jordan vectors at the $(n-1)^{\\rm{th}}$ bsp. Also at a particular\nbsp to determine $| \\phi^{n,l}_{k,2j} \\rangle $ one must know \nall the $| \\phi^{n,l'}_{k,2j} \\rangle $ for $l' > l$. So\ngenerally one proceeds in the following order\n$| \\phi^{2,0}_{1,2j} \\rangle $, $|\n\\phi^{3,1}_{k,2j}\\rangle $, $| \\phi^{3,0}_{1,2j}\\rangle $,\n$| \\phi^{4,2}_{k,2j} \\rangle $, $|\n\\phi^{4,1}_{k,2j}\\rangle \\dots $. The transient eigenvectors can be\nobtained directly from (\\ref{transeig}) without regard to this order.\n\n\\subsubsection{Transient right states of odd degree}\n\nFor odd values of $j$ the transient spectrum at the $n^{\\rm th}$ bsp, \ngiven by equation (\\ref{eig2}), is nondegenerate. We first find the \neigenvectors with support in the central transient bands, at all the\nband-splitting points. By inspection we have already obtained the form \nof the eigenvector in the central transient in (\\ref{trans1}) so that\n\\begin{equation} \\label{oddj}\n| \\phi^{n,0}_{1,2j+1} \\rangle = \\left( x-x^{*}_{n} \\right)^{2j+1} \\chi_{n,C} \n+ f_{n,2j+1}(x),\n\\end{equation}\nwhere $f_{n,2j+1}(x)$ is a polynomial of degree $2j+1$ in each of the \n$S_n$ intervals excluding the central interval. For this expression to be an \neigenvector $f_{n,2j+1}(x)$ must satisfy\n\\begin{equation} \\label{oddj1}\nU_{{\\rm R}_n}\nf_{n,2j+1}(x) = \\phi^{n,0}_{1,2j+1}\\left[ f_{n,2j+1}(x) -\n\\left(x-x^{*}_{n}\\right)^{2j+1}\\chi_{n, b}\n\\right]. 
\n\\end{equation}\nThis equation for $f_{n,2j+1}(x)$ is similar to equation (\\ref{t1}), except\nthat it has one term less on the rhs, and may be solved using the procedure\noutlined in Appendix~D.\n\nOnce the $| \\phi^{n,0}_{1,2j+1} \\rangle$ are known the eigenvector\ncorresponding to any $| \\phi^{n,l}_{k,2j+1} \\rangle$ can be written as \n\\begin{equation}\n| \\phi^{n,l}_{k,2j+1} \\rangle = \\sum_{\\{ \\sigma \\}\n}\\check{\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_{n-l+1}}\n|\\phi^{n-l,0}_{1,2j+1} \\rangle,\n\\end{equation}\nwhere $\\check{\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_{n-l+1}}$ \n is an ordered product of \n$l$ operators $\\widehat{\\rm A}_{i}$ and $\\check{\\rm B}^{n,l}_{k,2j,i}$ ($i =\nn-l+1,\\dots,n$). The $\\check{\\rm B}^{n,l}_{k,2j,i}$ operators are defined\nhere as \n\\begin{equation}\n\\check{\\rm B}^{n,l}_{k,2j,i} \\equiv \\left(\\phi^{n,l}_{k,2j+1}\\right)^i\n \\widehat{\\rm B}_i\n\\end{equation}\nIf $\\sigma_i = 0$ the\n$i^{\\rm th}$ place in the product is taken by \n$\\widehat{\\rm A}_{i}$ and if $\\sigma_i = 1$ the $i^{\\rm th}$ place in the \nproduct is taken by $\\check{\\rm B}^{n,l}_{k,2j,i}$. 
The sum is over all\npossible strings of $0$'s and $1$'s of length $l$.\n\n\\subsubsection{Left States}\n \nThe left states $\\langle \\phi^{n,l}_{k,j} |$ form a bi-orthonormal set with all\nthe previously obtained right states at the $n^{\\rm{th}}$ bsp as\n\\alpheqn\n\\begin{eqnarray}\n\\langle \\phi^{n,l}_{k,j} | \\lambda^{n}_{k'j'} \n\\rangle & = & 0 \\\\ \n\\langle \\phi^{n,l}_{k,j} | \\phi^{n,l'}_{k'j'} \n\\rangle & = & \\delta_{ll'}\\delta_{kk'}\\delta_{jj'}.\n\\end{eqnarray}\n\\reseteqn\nAmong the right states only the states with $l=0$ have support in the central transient band.\nFrom (\\ref{t6}) we see that the associated left states are \n\\alpheqn\n\\begin{eqnarray}\n\\langle \\phi^{n,0}_{1,2j} | & = & \\frac{1}\n{a_{n,2j}}\\delta^{(2j)}\\left( x - x^{*}_{n} \\right) \\\\ \n\\langle \\phi^{n,0}_{1,2j+1} | & = & \n-\\delta^{(2j+1)}\\left( x - x^{*}_{n} \\right).\n\\end{eqnarray}\n\\reseteqn\nThe left states corresponding to the peripheral transients at \nthe $n^{\\rm{th}}$ bsp, $\\langle \\phi^{n,l}_{k,j} |$,\nare found by transforming the left states at the \n$(n-1)^{\\rm{th}}$ bsp, \n$\\langle \\phi^{n-1,l-1}_{\\lceil k\/2 \\rceil,j} |$.\nThen the entire set at the $n^{\\rm{th}}$ bsp has to be orthonormalized\nusing a Gram--Schmidt procedure.\n\n\\subsection{Back to ${\\rm T}_n(x)$}\n\nGoing back to the tent map ${\\rm{T}}_{n}(x)$, we transform\nall the right states of ${\\rm{R}}_{n}(x)$ by $U_{\\phi^{-1}_{n}}$ and the left states by\n$K_{\\phi_{n}}$. The states describing decay out of \n$[ {\\rm{T}}_{n}^{(3)}(x_{c}),1 ]$ are null states. \nThe eigenvalues describing decay out of \n$[ 0,{\\rm{T}}_{n}^{(4)}(x_{c})]$ are \n$\\phi_j^n \\equiv \\left( \\frac{1}{\\alpha_{n}}\\right)^{j+1}$ and have\nassociated polynomial eigenvectors of degree $j$. For $j$ even this part of the spectrum overlaps\nwith the spectrum describing decay of transients of ${\\rm R}_n$. 
There are Jordan vectors associated\nwith this part of the spectrum; denoting them as\n$| \\phi^{n}_{2j} \\rangle$ we have\n\\alpheqn\n\\begin{eqnarray}\nU_{{\\rm T}_n}| \\phi^{n}_{2j} \\rangle = \\phi^{n}_{2j}\n| \\phi^{n}_{2j} \\rangle +| \\phi^{n,0}_{1,2j} \n\\rangle \\\\ \nU_{{\\rm T}_n}| \\phi^{n}_{2j+1}\\rangle = \\phi^{n}_{2j+1}\n| \\phi^{n}_{2j+1} \\rangle .\n\\end{eqnarray}\n\\reseteqn\nThese right states can be determined by an extension of the \nmethods used in\nSection 2.2 to determine the eigenstates describing \ndecay out of interval $\\rm{I}$ for the tent map at the first bsp.\n\n\\section{Conclusion}\n\nWe have presented the generalized spectral\ndecomposition of the Frobenius--Perron\noperator of the tent map at the\nband-splitting points. The right eigenstates are\npiecewise-polynomial functions and the left eigenstates\nare generalized functions. The \nspectrum is discrete and gives the\ncharacteristic decay times of the map. From the decomposition\none can calculate correlations of arbitrary polynomials (as\nwell as functions expandable in terms of the polynomial \neigenstates). Furthermore, since the modes corresponding\nto transient decay onto the attractor have been obtained, the\nfull nonequilibrium dynamics of initial probability densities\nis accessible.\n\nThe slowest decay mode, corresponding to the\neigenvalue $\\alpha_{n}^{-1}$ at the\n$n^{\\rm th}$ bsp, describes decay onto the\nattractor. At the $n^{\\rm th}$ bsp there is an \n$n \\times n$ Jordan block associated with this\neigenvalue and therefore the decay is modified\nexponential. The asymptotic periodicity\nof the map is clearly reflected in the\nspectrum as at the $n^{\\rm th}$ bsp, all the \n$2^{n}$-th roots of unity are part of the\nspectrum. 
Our analytic solution of density \nevolution in this system may be useful for \ncomparison with the behavior of systems governed by the Ginzburg--Landau\nequation since a component of its dynamics~\\cite{Moon} can be reduced to the tent map.\n\n\\section*{Acknowledgements}\n\nWe thank I.~Prigogine for his support and encouragement and G.E.~Ord\\'{o}\\~{n}ez\nfor several useful discussions and his comments on the manuscript.\nWe acknowledge US\nDepartment of Energy grant no. FG03-94ER14465, \nthe Welch Foundation\ngrant no. F-0365 and the European Communities Commission (contract no.\n27155.1\/BAS) for support of this work.\n\\section*{Appendix A: Topological conjugacy}\n\nIn this appendix we review the spectral decompositions \nof maps related by a coordinate \ntransformation~\\cite{Deanbook}. Let ${\\rm{T}}: X \\rightarrow X$ be a map defined on the\ninterval\n$X$. Transforming the interval $X$ by the one-to-one, \nonto, continuous function $\\phi : X \\rightarrow Y$ gives a new map, ${\\rm{S}}:Y\n\\rightarrow Y$. This map is determined as\n\\begin{equation}\ny_{t+1} = \\phi(x_{t+1}) = \\phi({\\rm T}(x_t)) \\equiv {\\rm S}(y_t).\n\\end{equation}\nUsing that $\\phi$ has an inverse gives\n\\begin{equation}\n\\phi({\\rm T}(x_t)) = \\phi({\\rm T}(\\phi^{-1}(\\phi(x_t)))) \n= \\phi \\circ {\\rm T} \\circ \\phi^{-1}(y_t),\n\\end{equation}\nso that\n\\begin{equation} \\label{strel}\n{\\rm S} = \\phi \\circ {\\rm T} \\circ \\phi^{-1}.\n\\end{equation}\nThe maps ${\\rm{T}}$ and ${\\rm{S}}$ are said to be topologically \nconjugate to each other. \n\nThe Koopman operator, $K_{\\rm{S}}$,\ncorresponding to ${\\rm{S}}$ is given from (\\ref{strel}) by \n\\begin{equation} \\label{kooprel}\n K_{\\rm{S}} = K^{-1}_{\\phi}K_{\\rm{T}}K_{\\phi},\n\\end{equation}\nwhere we have used the fact that $K_{\\phi^{-1}} = K^{-1}_{\\phi}$.\nThe Frobenius--Perron operator, $U_{\\rm{S}}$, corresponding to ${\\rm{S}}$\nis the adjoint of $K_{\\rm{S}}$. 
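A concrete instance of the conjugacy (\ref{strel}), useful as a numerical sanity check (this is the textbook tent--logistic conjugacy, not one of the maps studied in this paper): the full-height tent map ${\rm T}(x) = 1 - |1-2x|$ on $[0,1]$ is conjugate to the logistic map ${\rm S}(y) = 4y(1-y)$ under $\phi(x) = \sin^{2}(\pi x/2)$.

```python
import math

T = lambda x: 1.0 - abs(1.0 - 2.0 * x)            # full-height tent map
S = lambda y: 4.0 * y * (1.0 - y)                 # logistic map
phi = lambda x: math.sin(math.pi * x / 2.0) ** 2  # conjugating coordinate change

# S = phi o T o phi^{-1}  <=>  phi(T(x)) = S(phi(x)) for all x in [0, 1]
for i in range(1001):
    x = i / 1000.0
    assert abs(phi(T(x)) - S(phi(x))) < 1e-12
```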
Taking the adjoint of (\\ref{kooprel})\nand using $(K_{\\phi}^{-1})^{\\dagger} = (K_{\\phi}^{\\dagger})^{-1}$ gives\n\\begin{equation} \\label{frobrel}\nU_{\\rm{S}} = U_{\\phi} U_{\\rm{T}} U^{-1}_{\\phi}.\n\\end{equation}\nSince $U_{\\rm S}$ and $U_{\\rm T}$ are related by the similarity (\\ref{frobrel})\nthe spectrum of $U_{\\rm S}$ is identical to that of $U_{\\rm T}$ and eigenstates\ntransform as\n\\begin{equation} \\label{app2}\n\\left| \\lambda_{n} \\right\\rangle_{\\rm{S}} = U_{\\phi}\\left| \\lambda_{n}\n\\right\\rangle_{\\rm{T}},\n\\end{equation}\nwhere we use a Dirac-style bra-ket notation for the states.\nFrom (\\ref{kooprel}) the left states transform as \n\\begin{equation} \\label{app3}\n\\left\\langle \\lambda_{n} \\right|_{\\rm{S}} = \nK_{\\phi^{-1}} \\left\\langle \\lambda_{n} \\right|_{\\rm{T}}.\n\\end{equation}\nJordan states of the maps are also related as in (\\ref{app2}) and (\\ref{app3}) and\nboth the algebraic and geometric multiplicities of the eigenvalues are preserved \nunder conjugacy.\n\n\\section*{Appendix B: The tent map with unit height}\n\nThe Frobenius--Perron operator of the tent map with unit height is given by \n\\begin{equation}\nU_{\\rm{T_{0}}}\\rho(x) = \\frac{1}{2}\\left[ \\rho \\left( \\frac{x}{2} \\right)\n + \\rho\\left( \\frac{2-x}{2} \\right) \\right].\n\\end{equation}\nThe operator $U_{\\rm T_0}$ admits polynomial eigenstates with support on the\nwhole unit interval. Associated with eigenpolynomials of order\n$2j$ are the nonzero eigenvalues $2^{-2j}$. There\nis an infinite degeneracy of the eigenvalue $0$ with an independent\nodd-order eigenpolynomial associated with each occurrence of the eigenvalue. 
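These statements admit a direct numerical check. The sketch below applies $U_{\rm T_0}$ to $B_{2}(x/2)$ (with $B_{2}(y)=y^{2}-y+1/6$, a degree-2 Bernoulli polynomial), confirming the eigenvalue $2^{-2}$, and to the odd Euler polynomial $E_{1}(x)=x-1/2$, confirming that it is annihilated:

```python
def U(f, x):
    """Frobenius-Perron operator of the unit-height tent map."""
    return 0.5 * (f(x / 2.0) + f((2.0 - x) / 2.0))

B2 = lambda x: (x / 2.0) ** 2 - (x / 2.0) + 1.0 / 6.0   # B_2(x/2)
E1 = lambda x: x - 0.5                                   # E_1(x)

for i in range(11):
    x = i / 10.0
    assert abs(U(B2, x) - 0.25 * B2(x)) < 1e-12   # eigenvalue 2^{-2j}, j = 1
    assert abs(U(E1, x)) < 1e-12                  # eigenvalue 0, odd degree
```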
\nThus, the odd-order eigenpolynomials are not unique but we choose them as Euler\npolynomials so that the associated left eigendistributions take a simple form.\nThe right eigenvectors of $U_{\\rm T_0}$ are~\\cite{Gonzalo,fox}\n\\alpheqn \n\\begin{eqnarray}\n|2^{-2j}\\rangle_{{\\rm{T}}_{0}} & = & \nB_{2j}(x\/2) \\\\\n| 0_{2j+1} \\rangle_{\\rm T_0} & = & E_{2j+1}(x), \n\\end{eqnarray}\n\\reseteqn\nwhere $B_{j}(x)$ is the Bernoulli polynomial of order $j$ and \n$E_{j}(x)$ is the Euler polynomial of order $j$~\\cite{Absteg}.\nThe corresponding left states are\n\\alpheqn\n\\begin{eqnarray}\n\\langle {2^{-2j}}|_{\\rm{T}_{0}} & = & \n2^{2j}\\widetilde B_{2j} (x) \\\\\n\\left\\langle 0_{2j+1} \\right|_{\\rm{T}_{0}} & = & \n\\frac{-1}{(2j+1)!}\\delta^{(2j+1)}_{-}(x-1),\n\\end{eqnarray}\n\\reseteqn\nwhere $\\tilde B_0(x) = 1$ and for $j\\geq 1$\n\\begin{equation} \n\\widetilde B_{2j} (x) \\equiv \\frac{(-1)^{2j-1}}{(2j)!}\\left[ \n\\delta^{(2j-1)}_{-}(x-1) - \\delta^{(2j-1)}_{+}(x) \\right], \n\\end{equation}\nwhere the action of $\\delta^{(m)}_\\pm (x-c)$ on a sufficiently differentiable\nfunction $f(x)$ is given by \n\\begin{equation}\n\\int_a^b dx \\, \\delta^{(m)}_{\\pm}(x-c) f(x) =\n\\lim_{\\epsilon \\rightarrow 0} (-1)^{m} f^{(m)}(c \\pm \\epsilon),\n\\end{equation}\nfor $a \\leq c \\leq b$ and $\\epsilon$ is a positive infinitesimal.\n\nThe time evolution of a density is expressed in terms of the spectral decomposition of\n$U_{\\rm T_0}$ as\n\\begin{equation}\nU_{\\rm T_0}^t \\, \\rho(x) = \\sum_{j=0}^\\infty\n\\left[ (2^{-2j})^t |2^{-2j}\\rangle \\langle 2^{-2j}| \\rho \\rangle\n+ \\delta_{t,0} | 0_{2j+1} \\rangle \\langle 0_{2j+1}| \\rho \\rangle \\right],\n\\end{equation}\nwhere the bilinear form is defined by\n\\begin{equation}\n\\langle f | g \\rangle \\equiv \\int_0^1 dx \\, f^*(x) g(x).\n\\end{equation}\n\n\\section*{Appendix C: Calculation of transient right states}\n\nTo determine the functions \n$g_{{\\rm II},j}(x)$ and $g_{{\\rm III},j}(x)$, which 
appear\non the rhs of (\\ref{form1}) we expand \n$g_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm III},j}(x)\\chi_{\\rm{III}}$ in terms of the\neigenstates given in Table 1 of $U_{\\rm T_1}$ on the attractor as \n\\begin{eqnarray} \\label{expand1}\ng_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm III},j}(x)\\chi_{\\rm{III}} & = &\n\\sum_{i=1}^{\\lfloor j\/2\n\\rfloor} a_{i,j}|{+2^{-i}} \\rangle_{\\Omega} + \n\\sum_{i=1}^{\\lfloor j\/2\\rfloor}\n b_{i,j}|{-2^{-i}} \\rangle_{\\Omega} \\nonumber \\\\\n & & \\mbox{} + \\sum_{i=1}^{\\lfloor\n\\frac{j-1}{2} \\rfloor} c_{i,j}| 0_{2i+1} \\rangle_{\\Omega} +\n\\sum_{i=1}^{\\lfloor\n\\frac{j-1}{2} \\rfloor} d_{i,j}| 0_{J_{2i+1}} \\rangle_{\\Omega}, \n\\end{eqnarray}\nwhere $\\lfloor x \\rfloor$ denotes the integer\npart (floor) of the real number $x$.\nThen acting with $U_{\\rm T_1}$ gives\n\\begin{eqnarray} \\label{expand2}\nU_{{\\rm{T_1}}} \\left( g_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm\nIII},j}(x)\\chi_{\\rm{III}}\n\\right) & = &\n\\sum_{i=1}^{\\lfloor j\/2\n\\rfloor} \\frac{a_{i,j}}{2^i}|{+2^{-i}} \\rangle_{\\Omega} - \n\\sum_{i=1}^{\\lfloor j\/2\\rfloor}\n \\frac{b_{i,j}}{2^i}|{-2^{-i}} \\rangle_{\\Omega} \\nonumber \\\\\n& & \\mbox{} + \\sum_{i=1}^{\\lfloor\n\\frac{j-1}{2} \\rfloor} d_{i,j}| 0_{2i+1} \\rangle_{\\Omega}. \n\\end{eqnarray} \nWe substitute (\\ref{expand1}) \nand (\\ref{expand2}) into (\\ref{form1}) and act on (\\ref{form1}) by all the left states\non the attractor in succession. 
Using orthonormality we obtain the\nfollowing equations for the expansion coefficients:\n\\alpheqn\n\\begin{eqnarray} \\label{form2}\n\\frac{a_{i,j}}{2^i} + \\alpha_{i,j} & = & 2^{-(j+1)\/2} \\, a_{i,j} \\\\\n\\frac{b_{i,j}}{2^i} - \\beta_{i,j} & = & -2^{-(j+1)\/2} \\, b_{i,j} \\\\ \nd_{i,j} + \\gamma_{i,j} & = & 2^{-(j+1)\/2} \\, c_{i,j} \\\\\nd_{i,j} & = & 0,\n\\end{eqnarray}\n\\reseteqn \nwhere\n\\alpheqn\n\\begin{eqnarray} \\label{def}\n\\alpha_{i,j} & \\equiv & 2^{-(j+1)\/2} \n\\langle{+2^{-i}}| x^j\\chi_{\\rm{II}} \\rangle \\\\ \n\\beta_{i,j} & \\equiv & 2^{-(j+1)\/2}\n\\langle{-2^{-i}}| x^j\\chi_{\\rm{II}} \\rangle = \n\\alpha_{i,j} \\\\ \n\\gamma_{i,j} & \\equiv & 2^{-(j+1)\/2}\n\\langle 0_i | x^j\\chi_{\\rm{II}} \\rangle,\n\\end{eqnarray}\n\\reseteqn\nand we used $\\langle 0_{J_i}| x^j\\chi_{\\rm{II}} \\rangle = 0$ because \n$\\langle 0_{J_i}|$ has support only in $\\chi_{\\rm III}$. \nExplicit evaluation of $\\langle +\\frac{1}{2^i}|\nx^j\\chi_{\\rm{II}} \\rangle$ gives\n\\begin{equation}\n\\langle +\\frac{1}{2^i}| x^j\\chi_{\\rm{II}} \\rangle = \n\\left\\{ \\begin{array}{lc} \\frac{\\sqrt{2}}{2(\\sqrt{2}-1)(j+1)} \n((2 - \\sqrt{2})^j -1 )\n& i=0 \\\\ \\noalign{\\vskip4pt}\n0 & 2i-1 \\geq j \\\\ \\noalign{\\vskip4pt}\n-\\frac{j!(2 - \\sqrt{2})^{j+2i-1}}{(2i)!(j-2i+1)!}(2^{(j-2i+1)\/2} -1) & 2i-1 < j. \n\\end{array} \\right.\n\\end{equation}\nEvaluation of $\\langle 0_i | x^j\\chi_{\\rm{II}} \\rangle$ gives\n\\begin{equation}\n\\langle 0_i | x^j\\chi_{\\rm{II}} \\rangle = \n\\left\\{ \\begin{array}{lc} \\frac{j!}{(2i+1)!(j-2i-1)!}(\n\\sqrt{2} -1 )^{j+2i+1} & 2i+1 \\leq j \\\\ \\noalign{\\vskip4pt}\n0 & \\mbox{otherwise} .\n\\end{array} \\right.\n\\end{equation}\nThese results are then\nused in (\\ref{form2}) to determine the expansion coefficients for the transient\neigenstates with support in region $\\rm I$. 
\n\n\n\\section*{Appendix D: Transient right states at higher bsps}\n\nWe expand the arbitrary functions $f_{n,2j}(x)$ in terms of the\ntransient and non-transient eigenvectors of $U_{{\\rm R}_n}$. The\nnon-transient eigenvectors are given in (\\ref{five}) and (\\ref{jord}). The\ntransient eigenvectors will be transformed versions of the central\neigenvectors at the previous band-splitting points. The expansion is\n\\begin{eqnarray} \\label{t2}\nf_{n,2j}(x) & = &\n\\sum_{k=1}^{2^n}\\sum_{j^{'}=1}^{j} b^{n,2j}_{k,2j^{'}}\n| \\lambda^{n}_{k,2j^{'}} \\rangle +\n\\sum_{k=1}^{2^n}\\sum_{j^{'}=1}^{j-1} c^{n,2j}_{k,j^{'}}\n| 0^{n}_{k,2j^{'}+1} \\rangle \\nonumber \\\\\n & & + \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\\sum_{j^{'}=1}^{j}\nd^{n,l,2j}_{k,j^{'}}\n| \\phi^{n,l}_{k,2j^{'}} \\rangle + \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\n\\sum_{j^{'}=1}^{j-1} e^{n,l,2j}_{k,j^{'}}\n| \\phi^{n,l}_{k,2j^{'}+1} \\rangle.\n\\end{eqnarray}\nSince $d^{n,n-2,2j}_{1,j}$ is the coefficient of the eigenvector of the\nJordan block it can be set to zero. 
\nApplying $U_{{\\rm R}_n}$ to the function $f_{n,2j}(x)$ we get\n\\begin{eqnarray} \\label{t3}\n\\lefteqn{U_{{\\rm R}_n} f_{n,2j}(x) = \n\\sum_{k=1}^{2^n}\\sum_{j^{'}=1}^{j}b^{n,2j}_{k,j^{'}}\n\\lambda^{n}_{k,2j^{'}}| \\lambda^{n}_{k,2j^{'}}\\rangle \n+\\sum_{k=1}^{2^{n}-1}\\sum_{j^{'}=1}^{j-1}\nc^{n,2j}_{k+1,j^{'}}\n| 0^{n}_{k,2j^{'}+1} \\rangle } \\hspace{20pt} \\nonumber \\\\\n& & \\mbox{} \n + \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\\sum_{j^{'}=1}^{j}\nd^{n,l,2j}_{k,j^{'}}\n\\phi^{n,l}_{k,2j^{'}}| \\phi^{n,l}_{k,2j^{'}} \\rangle + \\sum_{l=2}^{n-2}\\sum_{k=1}^{2^{l-1}}\\sum_{j^{'}=1}^{j}\nd^{n,l-1,2j}_{k,j^{'}}\n\\phi^{n,l}_{k,2j^{'}}| \\phi^{n,l}_{k,2j^{'}} \\rangle \\nonumber \\\\\n & &\\mbox{} \n+ \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\\sum_{j^{'}=1}^{j}\ne^{n,l,2j}_{k,j^{'}}\n\\phi^{n,l}_{k,2j^{'}+1}| \\phi^{n,l}_{k,2j^{'}+1} \\rangle\n\\end{eqnarray} \nWe substitute (\\ref{t3}) and (\\ref{t2}) into (\\ref{t1}) and\nhit both sides of the equation with $\\langle \\lambda^{n}_{k,2j^{'}}|$,\n$\\langle 0^{n}_{k,2j^{'}+1} |$, $\\langle\n\\phi^{n,l}_{k,2j^{'}+1} |$ and $\\langle\n\\phi^{n,l}_{k,2j^{'}} |$ successively. 
Letting\n\\alpheqn\n\\begin{eqnarray}\n\\alpha^{n,2j}_{k,j'} & \\equiv & \\langle \\lambda^{n}_{k,2j'} |\n\\left(x-x^*_n \\right)^{2j}\\chi_{b,n} \\rangle \\\\\n\\beta^{n,2j}_{k,j'} & \\equiv & \\langle 0^{n}_{k,2j'+1} |\n\\left(x-x^*_n \\right)^{2j}\\chi_{b,n} \\rangle \\\\\n\\gamma^{n,l,2j}_{k,j'} & \\equiv & \\langle \\phi^{n,l}_{k,2j'}|\n\\left(x-x^*_n \\right)^{2j}\\chi_{b,n} \\rangle \n\\end{eqnarray}\n\\reseteqn \nwe obtain the following equations for the expansion coefficients\n$a_{n,2j}$, $b^{n,2j}_{k,j^{'}}$, $c^{n,2j}_{k,j^{'}}$,\n$d^{n,l,2j}_{k,j^{'}}$ and $e^{n,l,2j}_{k,j'}$\n\\begin{eqnarray} \\label{t4}\nb^{n,2j}_{k,j'}\\lambda^{n}_{k,2j'} & = & \\phi^{n,0}_{1,2j}\\left(\nb^{n,2j}_{k,j'} - \\alpha^{n,2j}_{k,j'} \\right) \\nonumber \\\\\nc^{n,2j}_{k+1,j^{'}} & = & \\phi^{n,0}_{1,2j}\\left(\nc^{n,2j}_{k,j'} - \\beta^{n,2j}_{k,j'} \\right) \\nonumber \\\\\na_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,1,2j}_{1,j} & = & 1 \\nonumber \\\\\ne^{n,l,2j}_{k,j'}\\phi^{n,l}_{k,2j'+1} & = & \\phi^{n,0}_{1,2j}\\left(\ne^{n,l,2j}_{k,j'} - \\delta^{n,l,2j}_{k,j'} \\right).\n\\end{eqnarray}\nThe equations for $d^{n,l,2j}_{k,j'}$ differ depending on the values of\n$l,k$ and $j'$. For $k=1$, $j'=j$ and $l=2,3,\\dots,n-2$\n\\begin{eqnarray} \n d^{n,l-1,2j}_{1,j} & = & a_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,l,2j}_{1,j} \\nonumber \\\\\n d^{n,n-2,2j}_{1,j} & = & 0.\n\\end{eqnarray}\nWhen $j \\neq j'$, $k \\neq 1$ and $l=1$ we have\n\\begin{equation} \nd^{n,1,2j}_{k,j'}\\left(\n\\phi^{n,0}_{1,2j} - \\phi^{n,1}_{k,2j'} \\right)\n= a_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,1,2j}_{k,j'}. \n\\end{equation}\nWhen $j \\neq j'$, $k \\neq 1$ and $l=2,3,\\dots,n-2$ we have\n\\begin{equation} \\label{t7}\n d^{n,l,2j}_{k,j'}\\left(\n\\phi^{n,0}_{1,2j} - \\phi^{n,l}_{k,2j'} \\right) - d^{n,l-1,2j}_{k,j'} =\na_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,l,2j}_{k,j'}. 
\n\\end{equation}\n\nThe equations (\\ref{t4}) -- (\\ref{t7}) for $a_{n,2j}$, \n$b^{n,2j}_{k,j'}$, $c^{n,2j}_{k,j'}$, \n$d^{n,l,2j}_{k,j'}$ and $e^{n,l,2j}_{k,j'}$ are either uncoupled or are coupled in a simple\nmanner and can be solved explicitly to find the expansion coefficients. Plugging\nthese coefficients into (\\ref{t2}) and (\\ref{t6}) we get all the Jordan\nstates with support in the central transient, for all $n > 2$. For $n = 2$\nset all the $d^{2,l,2j}_{k,j'} = 0$, $e^{n,l,2j}_{k,j'} = 0$ and $a_{2,2j} = 1$ and solve for\n$b^{2,2j}_{k,j'}$, $c^{2,2j}_{k,j'}$ from (\\ref{t7}) to obtain the\neigenvectors with support in the central transient at the second bsp.\n\n\\section{Introduction}\n\nThe Space Telescope Imaging Spectrograph (STIS) (Kimble et al.\\ \\markcite{kimble97}1997;\nWoodgate et al.\\ \\markcite{woodgate98}1998; Walborn \\& Baum \\markcite{walborn98}1998) was used during the Hubble\nDeep Field -- South (HDF--S) (Williams et al.\\ \\markcite{williams99}1999) observations\nfor ultraviolet spectroscopy (Ferguson et al.\\ \\markcite{ferguson99}1999) and ultraviolet\nand optical imaging. In this paper we present the imaging data.\n\nThe Hubble Deep Field -- North (HDF--N) (Williams et al.\\ \\markcite{williams96}1996) is the\nbest studied field on the sky, with $>$1~Msec of Hubble Space\nTelescope (HST) observing time (including follow-up observations\nby Thompson et al.\\ \\markcite{thompson99}1999 and Dickinson et al.\\ \\markcite{dickinson99}1999), and countless\nobservations with ground-based telescopes (e.g., Cohen et al.\\ \\markcite{cohen96}1996;\nConnolly et al.\\ \\markcite{connolly97}1997). 
Results obtained to date include a measurement\nof the ultraviolet luminosity density of the universe at $z>2$\n(Madau et al.\\ \\markcite{madau96}1996), the morphological distribution of faint galaxies\n(Abraham et al.\\ \\markcite{abraham96}1996), galaxy-galaxy lensing (Hudson et al.\\ \\markcite{hudson98}1998), and\nhalo star counts (Elson, Santiago \\& Gilmore \\markcite{elson96}1996). See Ferguson \\markcite{ferguson98}(1998) and\nLivio, Fall \\& Madau \\markcite{livio98}(1998) for reviews and further references. The HDF--S\ndiffers from the HDF--N in several ways. First, the installation\nof STIS and NICMOS on HST in 1997 February has enabled parallel\nobservations with three cameras. In addition to the STIS data, the\nHDF--S dataset includes deep WFPC2 imaging (Casertano et al.\\ \\markcite{casertano99}1999),\ndeep near-infrared imaging (Fruchter et al.\\ \\markcite{fruchter99}1999), and wider-area\nflanking field observations (Lucas et al.\\ \\markcite{lucas99}1999). Second, the STIS\nobservations were centered on QSO J2233-606, at $z \\approx 2.24$,\nto obtain spectroscopy. Finally, the field was chosen in the southern\nHST continuous viewing zone in order to enable follow-up observations\nwith ground-based telescopes in the southern hemisphere.\n\nIn section 2 we describe the observations. In section 3 we describe\nthe techniques we used to reduce the CCD images. In section 4 we\ndescribe the reduction of the MAMA images. In section 5 we describe\nthe procedures used to catalog the images. In section 6 we present\nsome statistics of the data, including galaxy number counts and color\ndistributions. Our purpose in this paper is to produce a useful\nreference for detailed analysis of the STIS images. Thus for the\nmost part we refrain from model comparisons and speculation on the\nsignificance of the results. 
We expect the STIS images to be useful\nfor addressing a wide variety of astronomical topics, including\nthe sizes of the faintest galaxies, the ultraviolet-optical color\nevolution of galaxies, the number of faint stars and white dwarfs\nin the galactic halo, and the relation between absorption line\nsystems seen in the QSO spectrum and galaxies near to the line of\nsight. We also expect the observations to be useful for studying\nsources very close to the quasar, and perhaps for detecting the\nhost galaxy of the quasar. However, this may require a re-reduction\nof the images, as the quasar is saturated in all of the CCD exposures,\nand there are significant problems with scattered light and\nreflections.\n\n\\section{Description of the observations}\n\nThe images presented here were taken in 4 different modes, 50CCD\n(Figure~\\ref{logclr}), F28X50LP (Figure~\\ref{loglp}), NUVQTZ\n(Figure~\\ref{lognuv}), and FUVQTZ (Figure~\\ref{logfuv}). The 50CCD\nand F28X50LP modes used the Charge Coupled Device (CCD) detector.\nThe 50CCD is a clear, filterless mode, while the F28X50LP mode uses\na long-pass filter beginning at about 5500{\\AA}. The FUVQTZ and\nNUVQTZ used the Multi-Anode Microchannel Array (MAMA) detectors as\nimagers with the quartz filter. The quartz filter was selected to\nreduce the sky noise due to airglow to levels below the dark noise. The\neffective areas of the 4 modes are plotted in Figure~\\ref{filttrans},\nalong with a pseudo-$B_{430}$ bandpass constructed from the 50CCD\nand F28X50LP fluxes. The MAMA field of view is a square, $25\\arcsec$\non a side, and was dithered so that the observations include data\non a field approximately $30\\arcsec$ square. The 50CCD mode is\nfilterless imaging with a CCD. The field of view is a square\n$50\\arcsec$ on a side, and the dithering extends to a square\n$60\\arcsec$ on a side. The F28X50LP is a long-pass filter that\nvignettes the field of view of the CCD to a rectangle $28\\times\n50\\arcsec$. 
The observations were dithered to image the entire\nfield of view of the 50CCD observations, although the exposure time\nper point on the sky is thus approximately half the total exposure\ntime spent in this mode. The original pixel scale is\n$0.0244\\arcsec$~pix$^{-1}$ for the MAMA images, and\n$0.05071\\arcsec$~pix$^{-1}$ for the CCD images. The final combined\nimages have a scale of $0.025\\arcsec$~pix$^{-1}$ in all cases.\nTable~\\ref{obstab} describes the observations. The filterless 50CCD\nobservations correspond roughly to V+I, and reach a depth of 29.4\nAB magnitudes at $10\\sigma$ in a 0.2 square arcsecond aperture (320\ndrizzled pixels). This is the deepest exposure ever made in the\nUV-optical wavelength region.\n\n\\subsection{Selection of the Field}\n\nSelection of the field is described by Williams et al.\\ \\markcite{williams99}(1999). The QSO\nis at RA~=~$\\rm 22^h33^m37.5883^s$, Dec~=~$-60^{\\circ} 33\\arcmin\n29.128\\arcsec$ (J2000). The errors on this position are estimated\nto be less than 40 milli-arcseconds (Zacharias et al.\\ \\markcite{zacharias98}1998). The\nposition of the QSO on the 50CCD and F28X50LP images is x=1206.61,\ny=1206.32, and on the MAMA images is x=806.61, y=806.32.\n\n\\subsection{Test Data}\n\nTest observations of the field were made in 1997 October. \nThese data are not used in the present analysis. While the test\nexposures do not add significantly to the exposure time, they would\nprovide a one-year baseline for proper motion studies of the brighter\nobjects.\n\n\\subsection{Observing Plan}\n\nThe STIS observations were scheduled so that the CCD was used in the\norbits that were impacted by the South Atlantic Anomaly, and the\nMAMAs were used in the clear orbits. The observations were made in the\ncontinuous viewing zone, and therefore were all made close to the\nlimb of the Earth. 
The G430M spectroscopy, all of which was read-noise\nlimited, was done during the day or bright part of the orbit, while\nthe CCD imaging was all done during the night or dark part of the\norbit. The MAMA imaging, done with the quartz filter, is insensitive\nto scattered Earth light, and was therefore done during bright time. A\nmore detailed discussion of the scheduling issues is given by\nWilliams et al.\\ \\markcite{williams99}(1999). The sky levels in the 50CCD images were\napproximately twice the square of the read noise, so these data are\nmarginally sky noise limited. The MAMA images are limited by the dark\nnoise.\n\n\\subsection{Dithering and Rotation}\n\nThe images were dithered in right ascension (RA) and declination\n(Dec) in order to sample the sky at the sub-pixel level. In addition,\nvariations in rotation of about $\\pm 1$ degree were used to provide\nadditional dithering for the WFPC2 and NICMOS fields during the\nSTIS spectroscopic observations. The STIS imaging observations were\ninterspersed with the STIS spectroscopic observations; therefore,\nall of the images were dithered in rotation as well as RA and Dec.\n\n\\subsection{CR-SPLIT and pointing strategy}\n\nThe CCD exposures were split into 2 or 3 {\\sc cr-split}s that each\nhave the same RA, Dec, and rotation. This facilitates cosmic ray\nremoval, although as discussed below, this was only used in the\nfirst iteration of the data reduction. The final 50CCD image is\nthe combination of 193 exposures making up 67 {\\sc cr-split}\npointings. After standard pipeline processing, (including bias and\ndark subtraction, and flatfielding), each exposure is given a {\\sc\nflt} file extension, and the cosmic-ray rejected combinations of\neach {\\sc cr-split} is given a {\\sc crj} file extension. The final\nF28X50LP image is the combination of 66 exposures making up 23 {\\sc\ncr-split} pointings. 
The F28X50LP image included 12 pointings at\nthe northern part of the field, one pointing at the middle of the\nfield, and 10 pointings at the southern half of the field.\n\n\\subsection{PSF observations}\n\nIn order to allow for PSF subtraction of the QSO present in the\ncenter of the STIS 50CCD image, two SAO stars of about 10~mag were\nobserved in the filterless 50CCD mode before and after the main\nHDF-S campaign. The stars are SAO 255267, a G2 star, and SAO 255271,\nan F8 star, respectively. These targets have spectral energy\ndistributions in the STIS CCD sensitivity range similar to that of\nthe QSO. For each star, 32 different {\\sc cr-split} exposures were taken.\nThe following strategy was used: (i) four different exposure times\nbetween 0.1 s and 5 s for each {\\sc cr-split} frame, to ensure high\nsignal-to-noise in the wings while not saturating the center; (ii)\na four-position dither pattern with quarter-pixel sampling and\n{\\sc cr-split} at each pointing with each exposure time; (iii) use of\ngain=4, to insure no saturation in the A-to-D conversion. During\nthe observations for SAO255267, a failure in the guide star\nacquisition procedure caused the loss of its long-exposure (5~s)\nimages. Gain=4 has a well-documented large scale pattern noise that\nmust be removed, e.g., by Fourier filtering, before a reliable PSF\ncan be produced. These data are not discussed further in this paper,\nbut are available from the HST archive for further analysis.\n\n\\section{Reduction of the CCD Images}\n\n\\subsection{Bias, Darks, Flats and Masks}\n\nStandard processing of CCD images involves bias and dark subtraction,\nflatfielding, and masking of detector defects. The bias calibration\nfile used for the HDF-S was constructed from 285 individual exposures,\ncombined together with cosmic-ray and hot-pixel trail rejection.\n\nThe dark file was constructed from a ``superdark'' frame and a\n``delta'' dark frame. 
The superdark is the cosmic-ray rejected\ncombination of over 100 individual 1200~s dark exposures taken\nover the several months preceding the HDF-S campaign. The delta\ndark adds into this high S\/N dark frame the pixels that are more\nthan $5\\sigma$ from the mean in the superdark-subtracted combination\nof 14 dark exposures taken during the HDF-S campaign. Calibration\nof the images with this dark frame removes most of the hot pixels\nbut still leaves several hundred in each image.\n\nAn image mask was constructed to remove the remaining hot pixels\nand detector features. The individual cosmic-ray rejected HDF-S\n50CCD exposures were averaged together without registration. The\nremaining hot pixels were identified with the IRAF\\footnote[12]{IRAF\nis distributed by the National Optical Astronomy Observatories,\nwhich are operated by the Association of Universities for Research\nin Astronomy, Inc., under cooperative agreement with the National\nScience Foundation.} {\\sc cosmicrays} task. These pixels were\nincluded in a mask that was used to reject pixels during the\n{\\sc drizzle} phase. Pixels that were more than $5\\sigma$ below the mean\nsky background were also masked, as were the 30 worst hot pixel\ntrails, and the unilluminated portions of the detector around the\nedges. Hot pixel trails run along columns and are caused by high\ndark current in a single pixel along the column.\n\nFlatfielding was carried out by the IRAF\/STSDAS {\\sc calstis}\npipeline using two reference files. The first, the {\\sc pflat}\ncorrects for small-scale pixel-to-pixel sensitivity variations,\nbut is smooth on large scales. This file was created from ground-test\ndata but comparisons to a preliminary version of the on-orbit flat\nrevealed only a few places where the difference was more than 1\\%.\nThe CCD also shows a 5-10\\% decrease in sensitivity near the edges\ndue to vignetting. 
This illumination pattern was corrected by a\nlow-order fit to a sky flat constructed from the flanking field\nobservations.\n\n\\subsection{Shifts and rotations}\n\nAfter pipeline processing, the CCD images were reduced using the\nIRAF\/STSDAS package {\\sc dither}, and test versions called {\\sc\nxdither}, and {\\sc xditherii}. These packages include the {\\sc\ndrizzle} software (Fruchter \\& Hook \\markcite{fruchterhook98}1998; Fruchter et al.\\ \\markcite{fruchteretal98}1998;\nFruchter \\markcite{fruchter98}1998). We used {\\sc drizzle} version 1.2, dated 1998\nFebruary. The test versions differ from the previously released\nversion primarily in their ability to remove cosmic rays from each\nindividual exposure, and include tasks that have not yet been\nreleased.\n\nThe {\\sc xditherii} package uses an iterative process to reject cosmic\nrays and determine the x and y sub-pixel shifts, which we summarize\nhere. The standard pipeline rejects cosmic rays using each {\\sc\ncr-split} of 2 or 3 images. The resulting {\\sc crj} files are used\nas the first iteration, we determine the x and y shifts, and the\nfiles are median combined. The resulting preliminary combination\nis then shifted back into the frame of each of the original exposures\n({\\sc flt} files), and a new cosmic ray mask is made. By comparing\neach exposure to a high signal-to-noise combination of all of the\ndata, we are less likely to leave cosmic ray residuals. The x and\ny shifts are determined at each iteration as well.\n\nThe rotations used in combining the data were determined from the\n{\\sc roll\\_avg} parameter in the jitter files, using the program\n{\\sc bearing}. We did not seek to improve on these rotations via\ncross-correlation or any other method. We did use cross-correlation\nto determine the x and y shifts.\n\nDetermination of the sub-pixel x and y shifts was done with an\niterative procedure. 
The first iteration was obtained by determining\nthe centroid of the bright point source just west of the QSO, using\nthe pipeline cosmic-ray rejected {\sc crj} files. We could not use\ncross-correlation in this first iteration, because the very bright\nstar on the southern edge of the field was present on images taken\nat some, but not all, dither positions, which corrupted the\ncross-correlation. The source we used for centroiding was clearly\nvisible on all of the 50CCD and F28X50LP frames.\n\nUsing these shifts (which were accurate to better than 1 pixel),\nwe created a preliminary combined image. After pipeline processing\nand cosmic ray rejection, the {\sc drizzle} program was used to\nshift and rotate each {\sc crj} file onto individual outputs, without\ncombining them. We then used the task {\sc imcombine} to create a\nmedian combination of the files. This preliminary image was then\nshifted and rotated back into the frame of each individual exposure\nusing the {\sc xdither} task, {\sc blot}, ready for the next iteration\nof the cosmic-ray rejection procedure. \n\n\subsection{Cosmic ray rejection}\n\nIn this iteration, we discarded the {\sc crj} files, and went back\nto the {\sc flt} files, in which each exposure had undergone bias\nand dark subtraction and flatfielding, but not cosmic-ray rejection.\nEach exposure was compared to the blotted image, and a cosmic-ray\nmask for that exposure was created from all of the pixels that\ndiffered (positively or negatively) by more than a given threshold\nfrom the blotted image. In the version 1.0 released 50CCD image,\nthis threshold was set to be $5\sigma$. 
However, we believe that\na small error in the sky level determination, introduced by the\namplifier ringing correction discussed below, meant that our \nrejection was approximately at the $3\\sigma$ level.\nThe cosmic ray masks were multiplied by the hot pixel masks discussed\nabove, and resulted in about 8\\% of the pixels being masked as\neither cosmic rays or hot pixels. This is, perhaps, overly\nconservative. A less conservative cut (after correcting the error\nin the sky value) would result in slightly higher exposure time\nper pixel, and thus an improvement of 1-2\\% in the signal to noise\nratio. The cosmic ray mask was combined with the hot pixel and\ncosmetic defect mask.\n\nThis problem with the sky value was corrected in the F28X50LP image,\nand a $3\\sigma$ level was used in the cosmic ray rejection.\n\n\\subsection{Amplifier ringing correction}\n\nHorizontal features due to amplifier ringing, varying in pattern\nfrom image to image, were present in most of the STIS CCD frames.\nWhen a pixel saw a highly saturated signal, the bias level was\ndepressed in the readout for the next few rows. The very high\nsignals causing this ringing came from hot pixels and from the\nsaturated QSO. The signal-to-noise ratio in the overscan region of\nthe detector was not sufficient to remove these features well. We\nremoved them with a procedure that subtracted on a row-by-row basis,\nfrom each individual image, the weighted average of the background\nas derived from the innermost 800 columns after masking and rejecting\n``contaminated'' pixels. The masks included all visible sources,\nhot pixels, and cosmic-ray hits. The source mask was determined\nfrom the initial registered median-combined image, shifted back to\nthe reference frame of each of the individual images. 
For the\nunmasked pixels in each row, the 50 highest and lowest were rejected\nand the mean of the remaining pixels was subtracted from each\npixel in that row.\n\nHeavily smoothing the images reveals very slight horizontal residuals\nthat were not removed by the present choice of parameters in this\nprocess.\n\n\subsection{Drizzling it all together}\n\nThe final image combination was done by drizzling the amplifier-ringing\ncorrected pipeline products together onto a single output image.\nThe exposures were weighted by the square of the exposure time,\ndivided by the variance, which is (sky+rn$^2$+dark). The rotations\nwere corrected so that North is in the +y direction, and the scale\nused was 0.492999 original CCD pixels per output pixel so that the\nfinal pixel scale is exactly 0.025 arcsec\/pixel. For the 50CCD data\nwe used a {\sc pixfrac}=0.1, which is approximately equivalent to\ninterleaving, where each input pixel falls on a single output pixel.\nFor the F28X50LP data we used {\sc pixfrac}=0.6, as a smaller {\sc\npixfrac} left visible holes in the final image. See Fruchter \& Hook \markcite{fruchterhook98}(1998)\nfor a discussion of the meaning of the {\sc drizzle} parameters. The\npoint spread functions of bright, non-saturated point sources are\nshown in Figure~\ref{psf}. The sources selected are the point source\njust to the west of the quasar in the 50CCD and F28X50LP images,\nand the QSO in the MAMA images.\n\nThe final image is given in counts per second, which can be converted\nto magnitudes on the {\sc stmag} system using the photometric\nzeropoints given by the {\sc photflam} parameter supplied in the\nimage headers. We used the pipeline photometric zeropoints for the\n50CCD and MAMA images, but revised the F28X50LP zeropoint by 0.1\nmagnitude based on a comparison of STIS photometry of the HST\ncalibration field in $\omega$ Centauri with the ground-based\nphotometry of Walker \markcite{walker94}(1994). 
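\nTwo quantities in this subsection can be written out explicitly. The weight described above for exposure $i$ is\n\begin{equation}\nw_i = \frac{t_i^2}{{\rm sky}_i + {\rm rn}^2 + {\rm dark}_i},\n\end{equation}\nwhere $t_i$ is the exposure time and the denominator is the per-pixel variance. The conversion from a count rate $C$ (counts~s$^{-1}$) to an {\sc stmag} magnitude takes the standard form\n\begin{equation}\nm_{\rm ST} = -2.5 \log_{10} \left( C \times {\rm photflam} \right) - 21.10,\n\end{equation}\nwhere the constant $-21.10$ is the usual {\sc stmag} zero point (quoted here for reference, not taken from the image headers) and {\sc photflam} converts counts~s$^{-1}$ to $f_{\lambda}$.\n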
The zeropoints in the AB magnitude\nsystem which we used are 26.386, 25.291, 23.887, and 21.539, for\nthe 50CCD, F28X50LP, NUVQTZ and FUVQTZ respectively. We also supply\nthe weight image, which is the sum of the weights falling on each\npixel. For the F28X50LP image, we supply an exposure-time image,\nwhich is the total exposure time contributing to each pixel. We\nhave multiplied this image by the area of the output pixels. The\nworld coordinate system in the headers was corrected so that North\nis exactly in the +y direction, and the pixel scale is exactly\n0.025 arcsec\/pixel.\n\n\\subsection{Window reflection}\n\nA window in the STIS CCD reflects slightly out-of-focus light from\nbright sources to the +x, $-$y direction (SW on the HDF-S images).\nThe QSO is saturated in every 50CCD and F28X50LP exposure. The\nwindow reflection of the QSO is clearly visible in the F28X50LP\nimage, but has been partially removed from the 50CCD image by the\ncosmic-ray rejection procedure. We wish to emphasize that it has\nonly been partially removed, and there are remaining residuals.\nThese residuals should not be mistaken for galaxies near the QSO,\nnor should they be mistaken for the host galaxy of the QSO. There\nis additional reflected light from the QSO (and from the bright\nstar at the southern edge) evident in the images. We believe that\nthe version 1.0 released images are not appropriate for searching\nfor objects very close to or underlying the QSO, and that such a\nsearch would require re-processing the raw data with particular\nattention paid to the window reflection, other reflected light,\nand to the PSF of the QSO. The diffraction spikes of the QSO are\nsmeared in the final images by the rotation of the individual\nexposures.\n\n\\section{Reduction of the MAMA Images}\n\nThe near-UV and far-UV images are respectively the weighted averages\nof 12 and 25 registered frames, with total exposure times of 22616~s\nand 52124~s. 
The MAMAs do not suffer from read noise or cosmic\nrays, and the quasar is not saturated in any of the UV data. However,\nthe MAMAs do have calibration issues that must be addressed.\n\n\\subsection{Flats, Dark Counts, and Geometric Correction}\n\nPrior to combination, all frames were processed with CALSTIS,\nincluding updated high-resolution pixel-to-pixel flat field files\nfor both UV detectors. Geometric correction and rescaling were\napplied in the final combinations via the {\\sc drizzle} program. The\nquartz filter changes the far-UV plate scale relative to that in\nthe far-UV clear mode, and so the relative scale between MAMA\nimaging modes was determined from calibration images of the globular\ncluster NGC~6681.\n\nDark subtraction for the near-UV image was done by subtracting a\nscaled and flat-fielded dark image from each near-UV frame. The\nscale for the dark image was determined by inspection of the\nright-hand corners of the near-UV image, because these portions of\nthe detector are occulted by the aperture mask and thus only register\ndark counts. For the far-UV images, {\\sc calstis} removes a nearly flat\ndark frame, but the upper left-hand quadrant of STIS far-UV frames\ncontains a residual glow in the dark current after nominal calibration.\nThis glow varies from frame to frame and also appears to change\nshape slightly with time. To remove the residual dark current, the\n16 far-UV frames with the highest count rates in the glow region\nwere co-added without object registration but with individual object\nmasks for the only two obvious objects in the far-UV frames (the\nquasar and bright spiral NNE of the quasar). We then fit the result\nwith a cubic spline to produce a glow profile. This profile was\nthen scaled to the residual glow in each processed frame and\nsubtracted prior to the final drizzle. 
Even during observations\nwith a strong dark glow, where the dark count rate is an order of\nmagnitude higher than normal, it is still very low, reaching rates\nno higher than $6\\times 10^{-5}$cts~sec$^{-1}$~pix$^{-1}$. The glow\nthus appears as a higher concentration of ones in a sea of zeros,\nand the subtraction of a smooth glow profile from such quantized\ndata over-subtracts from the zeros and under-subtracts from the\nones. These effects are visible in the corrected data, even when\nsmoothed out considerably in the final drizzled far-UV image. A\nlow-resolution flat-field correction was applied to the far-UV\nframes after subtraction of the residual dark glow. The near-UV\nframes require no low-resolution flat field correction.\n\n\\subsection{Shifts and Rotations}\n\nCurrently, geometrically corrected NUVQTZ and FUVQTZ frames do not\nhave the same plate scale. Although geometric correction, rotation,\nand rescaling is applied during the final summation of individual\ncalibrated frames, we first produced a set of calibrated frames that\nincluded these corrections, in order to accurately determine the\nrelative shifts between them; this information was then used in\nconjunction with these corrections in the final drizzle. All near-UV\nand far-UV frames were geometrically corrected, rescaled to\n$0.025\\arcsec~pix^{-1}$, and rotated to align North with the +y image\naxis. The roll angle specified in the jitter files was used to\ndetermine the relative roll between frames, and the mean difference\nbetween the planned roll and the jitter roll determined the absolute\nrotation. It is difficult to determine accurate roll angles from the\nimages themselves, because of the scarcity of objects in the MAMA\nimages. All near-UV and far-UV frames were then cross-correlated\nagainst one of the far-UV frames to provide shifts in the output\ncoordinate system. 
Note that centroiding on the quasar in all far-UV\nand near-UV frames yields the same shifts as cross-correlation, within\n0.1 pixel.\n\n\subsection{Drizzling}\n\nThe calibrated frames were drizzled to a $1600 \times 1600$ pixel\nimage, including the above corrections, rescaling, rotations, and\nshifts. We updated the world coordinate system in the image headers\nto exactly reflect the plate scale, alignment, and the astrometry\nof the quasar.\n\nFor both the far-UV and near-UV frames, individual pixels in each\nframe were weighted by the ratio of the exposure time squared to\nthe dark count variance; this weights the exposures by (S\/N)$^2$\nfor sources that are fainter than the background. Although the\nvariations in the far-UV dark profile are smooth, the near-UV dark\nprofile is an actual sum of dark frames, and so we smoothed the\nnear-UV dark profile to determine the weights. With this weighting\nalgorithm, pixels in the upper left-hand quadrant of a given far-UV\nimage contribute less when the dark glow is high, and contribute\nmore when it is low. The statistical errors (cts~s$^{-1}$) in the\nfinal drizzled image, for objects below the background (e.g.,\nobjects other than the quasar), are given by the inverse square root\nof the final drizzled weights file.\n\nThe drizzle ``dropsize'' ({\sc pixfrac}) was 0.6, thus improving the\nresolution over a {\sc pixfrac} of 1.0 (which would be equivalent to\nsimple shift-and-add). The $1600 \times 1600$ pixel format contains\nall dither positions, and pixels outside of the dither pattern\nare at a count rate of zero. The pixel mask for each near-UV input\nframe included the occulted corners of the detector, a small number\nof hot pixels, and pixels with relatively low response (those with\nvalues $\le$ 0.75 in the high-resolution flat field). The pixel\nmask for each far-UV frame included hot pixels and all pixels\nflagged in the data quality file for that frame. 
When every input\npixel drizzled onto a given output pixel was masked, that\npixel was set to zero.\n\n\subsection{Window Reflection}\n\nAs with the CCD, a window reflection of the QSO\nappears in the near-UV image. This reflection appears $\approx 0.2\arcsec$\neast of the QSO itself, and should not be considered an astronomical object.\n \n\section{Cataloging}\n\n\subsection{Cataloging the Optical Images}\n\nThe catalog was created using the {\sc SExtractor} package\n(Bertin \& Arnouts \markcite{bertin96}1996), revision of 1998 November 19, with some minor\nmodifications that were done for this application. We used two\nseparate runs of {\sc SExtractor}, and manually merged the resulting\noutput catalogs. The first run used a set of parameters selected\nto optimize the detection of faint sources while not splitting what\nappeared to the eye to be substructure in a single object. We varied\nthe parameters {\sc detect\_thresh}, {\sc deblend\_mincont}, {\sc\nback\_size}, and {\sc back\_filtersize}. We decided to use a\ndetection threshold corresponding to an isophote of $0.65\sigma$.\nSources were required to have a minimum area of 16 connected pixels above\nthis threshold. Deblending was done when the flux in the fainter\nobject was a minimum of 0.03 times the flux in the brighter object.\nThe background map was constructed on a grid of 60 pixels, and\nsubsequently filtered with a $3\times3$ median filter. Prior to\ncataloging, the image was convolved with a Gaussian kernel with\nfull width half maximum of 3.4 pixels. As discussed in\nFruchter \& Hook \markcite{fruchterhook98}(1998), the effects of drizzling on the photometry\nare no more than 2\%, and in our well-sampled 50CCD field, the\neffects should be much less than this. 
This effect is smaller than\nother uncertainties in the photometry of extended objects.\n\nThe second run of {\\sc SExtractor} was optimized to detect objects\nthat lay near the QSO and the bright star at the southern edge of\nthe image. These objects tend to be blended in with the point source\nat the lower detection threshold. Although our catalog might include\ngalaxies that are associated with absorption lines in the quasar\nspectrum, we did not attempt to subtract the quasar light from the\nimage, and so the catalog does not include objects within $3\\arcsec$\nof the quasar. The parameters used for the second run were the same\nas for the first run, with the exception of the {\\sc detect\\_thresh}\nparameter, which was set to $3.25\\sigma$. This parameter not only\nsets the minimum flux level for detection, but also is the isophote\nused to determine the extent of the object. Several objects fall\nbetween the $0.65\\sigma$ isophote and the $3.25\\sigma$ isophote of\nthe quasar. These are not deblended on the first {\\sc SExtractor}\nrun, because their fluxes are below 0.03 of the quasar flux, but\nare detected (without the need for deblending) on the second run.\nObjects near the quasar detected in the second run were added to\nthe catalog generated by the first run, and flagged accordingly.\nObjects from the second run that were not confused with the quasar\nor the bright star were not included. The isophotal photometry of\nobjects from the second run will not be consistent with the photometry\nof objects from the first run, because a different isophote was\nused. Eight objects were added to the catalog in this way.\n\nIn addition, 26 objects from the first {\\sc SExtractor} run \nwere clearly spurious due to the diffraction spikes of the QSO and \nthe bright star. These were manually deleted from the catalog. 
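\nFor reference, the first-run detection parameters described above can be collected into a single {\sc SExtractor} configuration fragment (a sketch for illustration only; the keyword names are the standard {\sc SExtractor} parameters, with the detection threshold expressed in units of the background $\sigma$):\n\begin{verbatim}\nDETECT_MINAREA   16     # minimum of 16 connected pixels\nDETECT_THRESH    0.65   # detection isophote, in sigma\nDEBLEND_MINCONT  0.03   # minimum flux fraction for deblending\nBACK_SIZE        60     # background mesh size, in pixels\nBACK_FILTERSIZE  3      # 3x3 median filter on the background map\n\end{verbatim}\n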
\n\nPhotometry of the F28X50LP image was done with {\\sc SExtractor}\nrun in two-image mode, in which the objects were detected and\nidentified on the 50CCD image, but the photometry was done in the\nother band. Isophotes and elliptical apertures are thus determined\nby the extent of the objects on the 50CCD images. Objects detected\nin the F28X50LP image but not on the 50CCD image are impossible,\nsince it has a lower throughput and shorter exposure time.\n\n\n\\subsection{Cataloging the Ultraviolet Images}\n\nFluxes in the UV were calculated outside of {\\sc SExtractor} because\nit had some problems handling quantized low-signal data. To determine\nthe gross flux, we summed the countrate within the area for each\nobject appearing in the {\\sc SExtractor} 50CCD segmentation map.\nWe then created an object mask by ``growing'' each object in the\nsegmentation map, using the IDL routine {\\sc dilate}, until it\nsubtended an area three times its original size. The resulting mask\nexcludes faint emission outside of the {\\sc SExtractor} isophotes\nfor all known objects in the field. The sky was calculated from\nthose exposed pixels within a $151 \\times 151$ pixel box centered\non each object, excluding pixels from the mask. The mean countrate\nper pixel in this sky region was used to determine the background\nfor each object (the median is not a useful quantity when dealing\nwith very low quantized signals), and thus the net flux. 
Statistical\nerrors per pixel for objects at or below the background are determined\nfrom the {\\sc drizzle} weight image raised to the $-1\/2$ power.\nThe statistical errors for the gross flux and sky flux were calculated\nusing this pixel map of statistical errors, and thus underestimate\nthe errors for bright objects such as the quasar.\n\nSome objects that are fully-exposed in the CCD image do not fall\nentirely within the exposed area of the MAMA images; for these\nobjects, we calculated the UV flux in the exposed area only, without\ncorrecting for the incomplete exposure, and flagged such objects\naccordingly. Objects were also flagged if the sky-box described\nabove did not contain at least 100 pixels (e.g., the quasar). For\nthese objects, we calculated a global sky value from a larger $685\n\\times 670$ pixel box, roughly centered in each MAMA image, that\nonly includes areas fully exposed in the dither pattern, and excludes\npixels in the object mask. When the net flux incorporates this\nglobal sky value, they have been flagged accordingly. We do not\nexpect or see any evidence for objects in the ultraviolet images\nthat do not appear on the 50CCD image.\n\n\\subsection{The Catalog}\n\nThe catalog is presented in Table~\\ref{cattab}, which contains a\nsubset of the photometry. The full catalogs are available on the World\nWide Web. For each object we report the following parameters:\n\n{\\bf ID:} The {\\sc SExtractor} identification number. The objects\nin the list have been sorted by right ascension (first) and\ndeclination (second), and thus are no longer in catalog order. In\naddition, the numbers are no longer continuous, as some of the object\nidentifications from the first {\\sc SExtractor} run have been\nremoved. Objects from the second {\\sc SExtractor} run have had\n10000 added to their identification numbers. 
These identification\nnumbers provide a cross-reference to the segmentation maps.\n\n{\bf HDFS\_J22r$-$60d:} The minutes and seconds of right ascension\nand declination, from which can be constructed the catalog name of\neach object. To these must be added 22 hours (RA) and $-60$ degrees\n(Dec). The first object in the catalog is HDFS\_J223333.69$-$603346.0,\nat RA 22$^h$ 33$^m$ 33.69$^s$, Dec $-$60$\deg$ 33$\arcmin$\n46.0$\arcsec$, epoch J2000.\n\n{\bf x, y:} The x and y pixel positions of the object on the 50CCD and\nF28X50LP images. To get the x and y pixel positions on the MAMA\nimages, subtract 400 from each.\n\n{\bf $m_i$, $m_a$:} The isophotal ($m_i$) and ``mag\_auto'' ($m_a$)\n50CCD magnitudes. The magnitudes are given in the AB system\n(Oke \markcite{oke71}1971), where $m = -2.5 \log f_{\nu} - 48.60$. The isophotal\nmagnitude is determined from the sum of the counts within the\ndetection isophote, set to be 0.65$\sigma$. The ``mag\_auto'' is\nan elliptical Kron \markcite{kron80}(1980) magnitude, determined from the sum of\nthe counts in an elliptical aperture. The semi-major axis of\nthis aperture is defined by 2.5 times the first\nmoments of the flux distribution within an ellipse roughly twice the\nisophotal radius. However, if the aperture defined this way\nwould have a semi-major axis smaller than 3.5 pixels, a\n3.5 pixel value is used.\n\n{\bf clr-lp:} Isophotal color, 50CCD$-$F28X50LP, in the AB magnitude\nsystem, as determined in the 50CCD isophote. {\sc SExtractor} was\nrun in two-image mode to determine the photometry in the F28X50LP\nimage, using the 50CCD image as the detection image. When the\nmeasured F28X50LP flux is less than $2\sigma$, we determine an\nupper limit to the color using the flux plus $2\sigma$ when the\nmeasured flux is positive, and $2\sigma$ when the measured flux is\nnegative. 
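The AB conversion and the $2\sigma$ upper-limit rule above can be sketched as follows (a Python stand-in, not the original pipeline code; function names are ours):

```python
import math

def ab_mag(f_nu):
    """AB magnitude (Oke 1971): m = -2.5 log10 f_nu - 48.60,
    with f_nu in erg s^-1 cm^-2 Hz^-1."""
    return -2.5 * math.log10(f_nu) - 48.60

def lp_flux_for_color(flux, sigma):
    """Flux used in the 50CCD-F28X50LP color when the measured
    F28X50LP flux is below 2 sigma (the upper-limit rule above)."""
    if flux >= 2.0 * sigma:
        return flux                       # significant detection: use as is
    return flux + 2.0 * sigma if flux > 0 else 2.0 * sigma
```

As a sanity check, the AB zero-point flux of 3631 Jy ($3.631\times10^{-20}$ erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$) maps to magnitude zero.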
We did not clip the 50CCD photometry.\n\n{\bf nuv-clr, fuv-clr:} Isophotal colors, NUVQTZ-50CCD and\nFUVQTZ-50CCD, in the AB magnitude system. Photometry in the MAMA\nimages is discussed above. Photometry of objects falling partially\noutside the MAMA image is flagged and should not be considered\nreliable. When the measured flux is less than $2\sigma$, we give\nlower limits to the color as discussed above.\n\n{\bf $r_h$:} The half-light radius of the object in the 50CCD image,\ngiven in milli-arcseconds. The half-light radius was determined by\n{\sc SExtractor} to be the radius at which a circular aperture\ncontains half of the flux in the ``mag\_auto'' elliptical aperture.\n\n{\bf s\/g:} A star-galaxy classification parameter determined by a\nneural network within {\sc SExtractor}, and based upon the morphology\nof the object in the 50CCD images (see Bertin \& Arnouts \markcite{bertin96}1996 for a detailed\ndescription of the neural network). Classifications near 1.0 are\nmore like a point source, while classifications near 0.0 are more\nextended.\n\n{\bf flags:} Flags are explained in the table notes, and include both\nthe flags returned by {\sc SExtractor}, and additional flags we\nadded while constructing the catalog.\n\n\section{Statistics}\n\nIn this section we present several statistics of the data compiled\nfrom the catalog.\n\n\subsection{Source Counts}\n\nThe source counts in the 50CCD image are given in Table~\ref{nctable},\nand plotted as a function of AB magnitude in Figure~\ref{numcts},\nwhere they are compared with the galaxy counts from the HDF-N WFPC2\nobservations, as compiled by Williams et al.\ \markcite{williams96}(1996). The counts are\ncompiled directly from the catalog, although all flagged regions\nhave been excluded, so that the counts do not include objects near\nthe edge of the image, or near the quasar. We plot only the Poissonian\nerrors, although there might be an additional component due to\nlarge-scale structure. 
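Compiling the raw counts and their Poissonian errors amounts to binning the catalog magnitudes (a hedged numpy sketch; the bin edges are illustrative):

```python
import numpy as np

def source_counts(mags, edges):
    """Raw counts per magnitude bin and their Poissonian errors.
    Any large-scale-structure contribution to the error budget is
    ignored here, as in the text."""
    n, _ = np.histogram(mags, bins=edges)
    return n, np.sqrt(n)
```
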
We plot all sources, including both galaxies\nand stars, although we do not expect stars to contribute substantially\nto the source counts. No corrections for detection completeness\nhave been made, and the counts continue to rise until fainter than\n30~mag. The turnover fainter than this is due to incompleteness;\nthe counts do not turn over for astrophysical or cosmological\nreasons. \n\n\\subsection{Colors and Dropouts}\n\nThe 50CCD-F28X50LP colors of objects in the STIS images are plotted\nas points in Figure~\\ref{lpcolor}. Flagged objects have been removed\nfrom the sample. For comparison, we plot K--corrected (no-evolution)\ncolors of the template galaxies in the Kinney et al.\\ \\markcite{kinney96}(1996) sample\nas a function of redshift on the left of the figure. The LP filter is\nable to distinguish blue galaxies at $z<2.5$, but becomes\ndominated by the noise for blue galaxies fainter than 28~mag, and\nloses color resolution at $z>3$, where the Ly$\\alpha$ forest\ndominates the color in these bandpasses.\n\nBecause the F28X50LP bandpass is entirely contained within the\n50CCD bandpass, it is possible, by subtracting an appropriately\nscaled version of the measured F28X50LP flux from the 50CCD flux,\nto construct a pseudo-$B_{430}$ measurement (see Figure~\\ref{filttrans}).\nThis pseudo-$B_{430}$ is combined with the NUVQTZ and the F28X50LP\nmeasurements in a color-color diagram in Figure~\\ref{nuvdrop}. NUV\ndrop-outs, indicated on this figure by the dashed line, are those\nobjects with blue colors in the visible, but red colors in the UV,\nindicative of galaxies at $z>\\sim 1.5$. These galaxies show blue\ncolors characteristic of rapid star formation, while the red NUV\nto optical color is due to the Lyman break and absorption by the\nLy$\\alpha$ forest. The selection criteria were determined using\nthe models of Madau et al.\\ \\markcite{madau96}(1996). 
In an inset to Figure~\ref{nuvdrop},\nwe plot the efficiency of these criteria for selecting galaxies of\nhigh redshift. The solid line is the fraction of all of the models\nthat meet these criteria, while the dotted line is the fraction of\nthose models with ages $<10^8$ years and foreground-screen extinction\nless than $A_B = 2$. These criteria are very efficient at finding\nyoung, star-forming galaxies at $1.5 < z < 3.5$. We have removed\npoint sources from this figure, including the bright object just\nwest of the QSO, which is extremely red and is likely to be an M\nstar.\n\nIn Figure~\ref{fuvdrop} we give a FUV-NUV vs NUV-50CCD color-color\nplot showing FUV dropouts, where the Lyman break is passing through\nthe FUV bandpass at $z>0.6$. Of the 17 galaxies in the MAMA field with\nNUV magnitudes brighter than 28.4, only 3 have a clear signature of a\nLyman break at $z>0.6$.\n\n\section{Conclusions}\n\nWe have presented the STIS imaging observations that were done as\npart of the Hubble Deep Field -- South campaign. The 50CCD image is\nthe deepest image ever made in the UV-optical wavelength region,\nand achieves a point source resolution near the diffraction limit\nof the HST. We have presented the catalog, and some statistics of\nthe data. These data will be useful for the study of the number and\nsizes of faint galaxies, the UV-optical color evolution of galaxies,\nthe number of faint stars and white dwarfs in the galactic halo, and\nthe relation between absorption line systems seen in the QSO spectrum\nand galaxies near to the line of sight. 
Follow-up observations of\nthe HDF-South fields by southern hemisphere ground-based telescopes,\nby HST, and by other space missions will also greatly increase our\nunderstanding of the processes of galaxy formation and evolution.\n\nThe images and catalog presented here are available on the World Wide\nWeb at: $<$http:\/\/www.stsci.edu\/ftp\/science\/hdfsouth\/hdfs.html$>$.\n\n\\bigskip\n\n\\acknowledgments\n\nWe would like to thank all of the people who contributed to making the\nHDF-South campaign a success, including those who helped to identify a\ntarget quasar in the southern CVZ, and those who helped in planning\nand scheduling the observations. JPG, TMB, and HIT wish to acknowledge\nfunding by the Space Telescope Imaging Spectrograph Investigation\nDefinition Team through the National Optical Astronomical Observatories,\nand by the Goddard Space Flight Center. CLM and CMC wish to acknowledge\nsupport by NASA through Hubble Fellowship grants awarded by STScI.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{References}}\n\\bibpunct{(}{)}{;}{a}{,}{;}\n\n\\newcommand{\\citepos}[1]{}\n\\renewcommand{\\citepos}[1]{\\citeauthor{#1}'s (\\citeyear{#1})}\n\n\\begin{document}\n\n\n\n\n\\title{Yet Another Statistical Analysis of Bob Ross Paintings}\n\n\\author{Christopher Steven Marcum, PhD \\\\ National Institutes of Health}\n\\date{\\today{}}\n\\maketitle\n\n\\abstract{\nIn this paper, we analyze a sample of clippings from paintings by the late artist Bob Ross. Previous work focused on the qualitative themes of his paintings \\citep{hickey2014sawb}; here, we expand on that line of research by considering the colorspace and luminosity values as our data. Our results demonstrate the subtle aesthetics of the average Ross painting, the common variation shared by his paintings, and the structure of the relationships between each painting in our sample. 
We reveal, for the first time, renderings of the average paintings and introduce ``eigenross'' components to identify and evaluate shared variance. Additionally, all data and code are embedded in this document to encourage future research, and, in the spirit of Bob Ross, to teach others how to do so.\n{\\{\linebreak \bf Keywords}: art, Bob Ross, paintings, linear subspace}\n}\n\n\doublespace\n\n\section{Introduction}\nPainter Bob Ross (1942--1995) was an icon of American art education. For over a decade, his television program, \emph{The Joy of Painting}, taught and entertained millions of Americans tuning in to the half-hour show on PBS. In the course of his art career, Ross is estimated to have painted upward of $25,000$ paintings. As a master of the Alexander ``wet-on-wet'' oil painting technique, Ross's iconic ``happy'' clouds, mountains, streams, and, of course, trees were laid down on canvas in a matter of seconds. Recently, his set of paintings became the subject of a popular blog post titled ``A Statistical Analysis of the Work of Bob Ross'' by Walt Hickey on Nate Silver's pop-stat site \href{http:\/\/fivethirtyeight.com\/features\/a-statistical-analysis-of-the-work-of-bob-ross\/}{fivethirtyeight.com}. The post ``went viral'' on social media sites and is the inspiration for the current work. \n\nAs a commendable digest, Hickey's approach to Ross's work should rightly be characterized as a statistical analysis of qualitative features, subjects, and themes of the paintings. He enumerates the frequency distribution of various aesthetic elements (trees, rocks, hills, etc.) and, with great levity, calculates and describes the conditional probability that Ross paints one element given that he's already painted another. Moreover, Hickey delved into the voluminous library of episodes of \emph{The Joy of Painting} to determine such statistical anomalies as the presence of humans (n=2) and chimneys (n=1) in Ross's work. 
Hickey also employed the k-means clustering algorithm on the data represented by these features to determine unique subsets of paintings. In this paper, we take a different approach that advances this prior work in a quantitative analysis of digital representations of Ross's paintings.\n\nIn particular, we consider a different set of research questions to be addressed by formal statistical analysis. First, what does the ``average'' Bob Ross painting look like? Here, we diverge from Hickey's approach, which describes the ``typical'' Ross painting, the features of which may be quantified using conditional probabilities as he did. Instead, we specifically want to describe the average tendency in the red-green-blue colorspace of digital representations of Ross's work. That is, can we render a representation of the central tendency for Ross's work by averaging over his paintings? Second, we ask what is the common variation shared across Bob Ross's paintings? The answer to this question will shed light on the commonly held belief that Ross has a relatively standardized theme. Finally, we ask to what extent separate Bob Ross paintings are correlated with one another: what is the relationship \emph{between} Bob Ross paintings? \n\nBeyond these questions, this manuscript serves a didactic purpose: it illustrates how to conduct comparative quantitative research with completely reproducible results from image data packaged with the manuscript. To this end, this paper was prepared using the Sweave interface between \LaTeX{} and R on a Linux operating system, and the code generating the analysis is embedded in both the compiled PDF and the source. Additionally, an archive of the data can be found deposited online at the journal.\n\n\section{Data}\nThe first thirty images returned from a Google image search of \linebreak ``bob+ross+paintings'' in large format (as defined by Google) and attributed to Bob Ross were downloaded on November 1st, 2014. 
The selection criteria also included that the \emph{Ross} signature be present or, alternatively, that the painting could be verified against the catalog of known Bob Ross paintings from the \emph{Joy of Painting} television program which is validated by comparison with the archive on \href{http:\/\/www.tv.com\/shows\/the-joy-of-painting\/forums\/pictures-of-every-painting-15711-691600\/}{tv.com}. Each image was saved using a br\%d.jp*g naming convention, where br stands for bobross, \%d is an integer from $1$ to $30$ and $*$ is either null or the letter $e$ depending on the image source, which is used by the following embedded script.\n\nNext, each image was cropped down to a square 550 by 550 pixels. This was automatically achieved by drawing the clipping window about the Cartesian center pixel (detected via gravity method) of each image using imagemagick and a Bourne-shell loop. The following snippet demonstrates the code, which was saved to a file called ConvertAllImages.sh. Thus, comparisons made between images are done on the pixel subset contained within this clipping window.\n\n\begin{verbatim}\n#!\/bin\/sh\nlist=`ls br*jp*`\ni=1\nfor image in $list; do\n  convert $image -gravity center -crop 550x550+0+0 $i.jpg\n  i=$((i+1))\ndone\n\end{verbatim}\n\nThe resulting library of clipped images can be found in Table~\ref{mat}. The intensity values of each image's three channels (red, green, and blue) were read on a pixel-by-pixel basis and stored as a three-dimensional array (with dimensions $[550,550,3]$) using the ``jpeg'' library for \texttt{R}. These arrays contain the data used in the subsequent analysis. The images are sampled at 100 dpi.\n\n\section{Analysis}\n\nOur primary research question is, ``what does the average Bob Ross painting look like?'' To address this, we integrate over the respective channel indexes in the data array to obtain the mean value for each red, green, and blue channel among the 30 clippings. 
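The channel-wise averaging just described, together with pixel-wise 95\% confidence bounds, can be sketched in numpy terms (the paper's actual analysis is in R; the array shape follows the text, with images assumed already loaded and scaled to $[0,1]$):

```python
import numpy as np

def average_painting(stack):
    """Pixel-wise mean and 95% confidence band over a stack of
    paintings with shape (n_images, H, W, 3), values in [0, 1]."""
    mean = stack.mean(axis=0)
    # Standard error of the mean across the n_images paintings.
    se = stack.std(axis=0, ddof=1) / np.sqrt(stack.shape[0])
    lo = np.clip(mean - 1.96 * se, 0.0, 1.0)
    hi = np.clip(mean + 1.96 * se, 0.0, 1.0)
    return mean, lo, hi
```
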
The resulting figure is rendered as a raster image and displayed in Figure~\\ref{res1}. Despite considerable apparent variation in the supporting set of images, this average (while quite abstract in detail) clearly shows a preference gradient for blues and pinks at the top of the image and greens and browns at the bottom. One can also detect the faint gray outlines of the trunks and branches in Ross' ``Happy Trees'' rising from the bottom toward the top of the image, suggesting that Happy Trees have low alignment variance as we'd expect. The lower and upper 95\\% confidence range in these images only validates the consistency in the pixel-by-pixel averages; though, we note that the darker saturation of browns, grays, and blacks Ross used in his buildings is readily apparent in the lower bound image rendering.\n\nSecond, while the average is interesting, it fails to account for the variation in Ross' bucolic landscapes and cannot address the second research question: ``what is the common variation shared across Bob Ross' paintings?'' To examine this, we consider the eigenspace among the covariances across the dataset. We derive a set of orthonormal vectors that best describe the shared variance across the distribution of the data---we'll call these vectors the eigenrosses. Specifically, using simple principal components analysis, we project the data back onto the eigenrosses and compare shared variances in the highly loading eigenross components. This classic approach is used in a wide variety of data reduction and statistical applications including, factor analysis, spectral analysis, and face-detection software (i.e., vis-a-vis eigenfaces methods). 
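The eigenross decomposition for a single channel can be sketched as follows (a numpy stand-in for the R analysis; we take the SVD of the centred data rather than forming the covariance matrix explicitly, which yields the same eigenvectors):

```python
import numpy as np

def eigenross(stack, k=5):
    """First k principal components ('eigenrosses') of one colour
    channel, where stack has shape (n_images, H, W).  Returns the
    components reshaped to images plus the proportion of shared
    variance each explains."""
    n, h, w = stack.shape
    X = stack.reshape(n, h * w)
    X = X - X.mean(axis=0)               # centre each pixel across paintings
    # SVD of the centred data matrix gives the covariance eigenvectors.
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (s**2).sum()            # proportion of shared variance
    return vt[:k].reshape(-1, h, w), var[:k]
```
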
We conduct this for each of the three color channels as well as a flattened (monochromatic) version of the covariances---the flattened version is derived by averaging each channel with respect to each pixel, which is the Gaussian method of converting to grayscale used by most image manipulation software.\n\nThe proportion of shared variances from the eigenross components is plotted in Figure~\ref{res2}. Interestingly, despite the commonly held belief that Ross' paintings are relatively similar, the plot demonstrates considerable variation---it is not until the fifth eigenross component that 50\% of the common variation across the whole set is attained for all channels, including the flattened version (in gray). However, the first two components jointly explain more than 30\% of the variance. We can explore this further by rendering images from these components.\n\nEach of the first five eigenross components is displayed in Figure~\ref{eigenross}. As these are channel-independent orthonormal transformations we cannot recombine the red, green, and blue eigenross components in a reasonable way; thus, the components are plotted separately. Lighter colored areas indicate lower pixel-by-pixel shared variance across the set of clippings in that channel. Examination of the first two components demonstrates a clear preference for upper sky and lower foreground shared variation in the red and blue channels, and a clear foreground preference in the greens. The remainder of the eigenross components appear to differentiate trees, mountains, and buildings in an order different for each channel.\n\nFinally, to address the third question, ``what is the relationship between Bob Ross paintings?'' we simply examine the correlation structure using a network perspective. 
Specifically, we posit a relationship between two paintings if the product of red, green, and blue channel correlations is greater than or equal to $0.3^3$; in other words, two paintings are said to be related if the total correlation between them is moderate by classical standards \citep{cohen1988spab}. The resulting network is depicted in Figure~\ref{network}. \n\nNearly half (n=12) of the data are isolates in this network. The isolates include the three paintings that feature buildings. In the connected component, there are two clusters, bridged by the relationship between paintings 16 and 28. Paintings 1 and 16 appear to be the most central. These qualitative interpretations of the plot are confirmed quantitatively in Table~\ref{net}, which reports degree (number of connections), betweenness (number of non-redundant shortest-paths), and closeness (measure of being in the middle of the network) centrality scores for the paintings in the connected component of the network \citep{wasserman&faust1994snam}. \n\n\section{Conclusion}\nAs we mark the $20^{th}$ anniversary of Bob Ross's death this year, the popularity of \textit{The Joy of Painting} is again on the rise. Various public tributes have surfaced recently, including a video mash-up set to music called ``Happy Trees'' by PBS on YouTube, the selection of a Bob Ross-themed costume as the winner of the Smithsonian National Zoo's Annual ``Night of the Living Zoo'' Costume Contest in 2013, a weekly ``Bob Ross Night'' in Missoula \url{http:\/\/www.zootownarts.org\/bobross}, and now two statistical analyses (one qualitative and one quantitative) of his work. \n\nIn this paper, we've conducted yet another statistical analysis of Bob Ross paintings. Rather than examine the qualitative features of the subjects as prior work has done, we used the quantitative values of the colorspace in digital representations of the paintings as our data. 
We've demonstrated the subtle aesthetics of the average Bob Ross painting, the common variance shared by a set of Ross's work, and the structure of relationships between this sample using relatively simple quantitative techniques. \n\nFinally, there are a number of limitations with the current approach. First, we consider only a very small sample (n=30) of the publicly available set of Ross's work, a corpus many thousands of paintings in number. Future work may wish to expand this dataset. Indeed, the techniques employed here may be used to identify specific works attributed to an artist in a larger corpus of mixed artists' work \\citep{cutzu3&2005dpp}. Second, to standardize the dataset we take only the innermost central region; thus, we have a sample within a sample. It's possible that this strategy does not fully represent the variety of work encompassed by the whole lot---however, given the high spectral variability reported in our results, we believe this strategy is indeed representative of his works. Third, the source digital images that we collected from the internet were not rendered in a uniform manner; in an ideal data scenario, digital reproductions would have been obtained using the same high-resolution equipment in a controlled lighting environment. This gives rise to random errors in the channel values. However, these limitations did not impede evaluation of the research questions set forth here. As a proof-of-concept, the fact that our methods were able to recover discernible features in both the average and the variance of the set of paintings lends confidence to our results. We leave it to future research to further this approach by mitigating these limitations with a larger, more standardized, sample. 
Additionally, future research should take a mixed-methods approach to statistically combine the results of qualitative and quantitative analysis of a body of art in this manner.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Introduction}\nFew-shot learning~\\cite{fei2006one, fink2005object, wu2010towards, lake2015human, wang2020generalizing} is the learning problem where a learner experiences only a limited number of examples as supervision.\nIn computer vision, it has been most actively studied for the tasks of image classification~\\cite{alexnet, vgg, resnet} and semantic segmentation~\\cite{deeplab, fcn, deconvnet, unet} among many others~\\cite{han2021query, ojha2021few, ramon2021h3d, yue2021prototypical, zhao2021few}.\nFew-shot classification (FS-C\\xspace) aims to classify a query image into target classes when a few support examples are given for each target class. \nFew-shot segmentation (FS-S\\xspace) is to segment out the target class regions on the query image in a similar setup.\nWhile being closely related to each other~\\cite{li2009towards, yao2012describing, zhou2019collaborative},\nthese two few-shot learning problems have so far been treated individually.\nFurthermore, the conventional setups for the few-shot problems, FS-C\\xspace and FS-S\\xspace, are limited and do not reflect realistic scenarios; \nFS-C\\xspace~\\cite{matchingnet, ravi2016optimization, koch2015siamese} presumes that the query always contains one of the target classes in classification, while FS-S\\xspace~\\cite{shaban2017oslsm, rakelly2018cofcn, hu2019amcg} allows the presence of multiple classes but does not handle the absence of the target classes in segmentation.\nThese respective limitations prevent few-shot learning from generalizing to and evaluating on more realistic cases in the wild. 
\nFor example, when a query image without any target class is given as in \\figref{fig:teaser}, FS-S\\xspace learners typically segment out arbitrary salient objects in the query.\n\nTo address the aforementioned issues, we introduce the \\textit{integrative task of few-shot classification and segmentation} (FS-CS\\xspace) that combines the two few-shot learning problems into a multi-label and background-aware prediction problem. \nGiven a query image and a few-shot support set for target classes, FS-CS\\xspace aims to \\textit{identify the presence of each target class} and \\textit{predict its foreground mask} from the query.\nUnlike FS-C\\xspace and FS-S\\xspace, it does not presume either the class exclusiveness in classification or the presence of all the target classes in segmentation.\n\n\n\nAs a learning framework for FS-CS\\xspace, we propose {\\em integrative few-shot learning} (iFSL\\xspace) that learns to construct shared foreground maps for both classification and segmentation.\nIt naturally combines multi-label classification and pixel-wise segmentation by sharing class-wise foreground maps and also allows to learn with class tags or segmentation annotations. \nFor effective iFSL\\xspace, we design the {\\em attentive squeeze network} (ASNet\\xspace) that computes semantic correlation tensors between the query and the support image features and then transforms the tensor into a foreground map by strided self-attention. \nIt generates reliable foreground maps for iFSL\\xspace by leveraging multi-layer neural features~\\cite{hpf, hsnet} and global self-attention~\\cite{transformers, vit}. 
\nIn experiments, we demonstrate the efficacy of the iFSL\\xspace framework on FS-CS\\xspace and compare ASNet\\xspace with recent methods~\\cite{xie2021few, wu2021learning, hsnet, xie2021scale}.\nOur method significantly improves over the other methods on FS-CS\\xspace in terms of classification and segmentation accuracy and also outperforms the recent FS-S\\xspace methods on the conventional FS-S\\xspace.\nWe also cross-validate the task transferability between the FS-C\\xspace, FS-S\\xspace, and FS-CS\\xspace learners, and show the FS-CS\\xspace learners effectively generalize when transferred to the FS-C\\xspace and FS-S\\xspace tasks.\n\n\n\nOur contribution is summarized as follows:\n\\begin{itemize}\n \\item We introduce the task of \\textit{integrative few-shot classification and segmentation} (FS-CS\\xspace), which combines few-shot classification and few-shot segmentation into an integrative task by addressing their limitations.\n \\item We propose the \\textit{integrative few-shot learning framework} (iFSL\\xspace), which learns to both classify and segment a query image using class-wise foreground maps.\n \\item We design the \\textit{attentive squeeze network} (ASNet\\xspace),\n which squeezes semantic correlations into a foreground map for iFSL\\xspace via strided global self-attention.\n \\item We show in extensive experiments that the framework, iFSL\\xspace, and the architecture, ASNet\\xspace, are both effective, achieving a significant gain on FS-S\\xspace as well as FS-CS\\xspace.\n\\end{itemize}\n\n\n\n\n\\section{Related work}\n\\smallbreakparagraph{Few-shot classification (FS-C\\xspace)}.\nRecent FS-C\\xspace methods typically learn neural networks that maximize positive class similarity and suppress the rest to predict the most probable class.\nSuch a similarity function is obtained by a) meta-learning embedding functions~\\cite{koch2015siamese, matchingnet, protonet, allen2019infinite, tewam, can, feat, deepemd, renet}, b) meta-learning to 
optimize classifier weights~\\cite{maml, leo, mtl}, or c) transfer learning~\\cite{closer, rfs, dhillon2019baseline, wang2020few, negmargin, gidaris2018dynamic, qi2018low, rodriguez2020embedding}, all of which aim to generalize to unseen classes.\nThis conventional formulation is applicable if a query image corresponds to no less or more than a single class among target classes.\nTo generalize FS-C\\xspace to classify images associated with either none or multiple classes,\nwe employ the multi-label classification~\\cite{mccallum1999multi, boutell2004learning, cole2021multi, lanchantin2021general, durand2019learning}.\nWhile the conventional FS-C\\xspace methods make use of the class uniqueness property via using the categorical cross-entropy, we instead devise a learning framework that compares the binary relationship between the query and each support image individually and estimates a binary presence of the corresponding class.\n\n\n\n\n\\smallbreakparagraph{Few-shot semantic segmentation (FS-S\\xspace)}.\nA prevalent FS-S\\xspace approach is learning to match a query feature map with a set of support feature embeddings that are obtained by collapsing spatial dimensions at the cost of spatial structures~\\cite{wang2019panet, zhang2021self, siam2019amp, yang2021mining, liu2021anti, dong2018few, nguyen2019fwb, zhang2019canet, gairola2020simpropnet, yang2020pmm, liu2020ppnet}.\nRecent methods~\\cite{zhang2019pgnet, xie2021scale, xie2021few, wu2021learning, tian2020pfenet} focus on learning structural details by leveraging dense feature correlation tensors between the query and each support.\nHSNet~\\cite{hsnet} learns to squeeze a dense feature correlation tensor and transform it to a segmentation mask via high-dimensional convolutions that analyze the local correlation patterns on the correlation pyramid.\nWe inherit the idea of learning to squeeze correlations and improve it by analyzing the spatial context of the correlation with effective global 
self-attention~\\cite{transformers}. \nNote that several methods~\\cite{yang2020brinet, wang2020dan, sun2021boosting} adopt non-local self-attention~\\cite{nlsa} of the query-key-value interaction for FS-S\\xspace, but they are distinct from ours in the sense that they learn to transform image feature maps, whereas our method focuses on transforming dense correlation maps via self-attention.\n\nFS-S\\xspace has been predominantly investigated as an one-way segmentation task, \\ie, foreground or background segmentation, since the task is defined so that every target (support) class object appears in query images, thus being not straightforward to extend to a multi-class problem in the wild.\nConsequently, most work on FS-S\\xspace except for a few~\\cite{wang2019panet, tian2020differentiable, liu2020ppnet, dong2018few} focuses on the one-way segmentation, where the work of \\cite{tian2020differentiable, dong2018few} among the few presents two-way segmentation results from person-and-object images only, \\eg, images containing (person, dog) or (person, table).\n\n\n\\smallbreakparagraph{Comparison with other few-shot approaches.}\nHere we contrast FS-CS\\xspace with other loosely-related work for generalized few-shot learning.\nFew-shot open-set classification~\\cite{liu2020few} brings the idea of the open-set problem~\\cite{scheirer2012toward, fei2016breaking} to few-shot classification by allowing a query to have no target classes.\nThis formulation enables background-aware classification as in FS-CS\\xspace, whereas multi-label classification is not considered.\nThe work of \\cite{tian2020generalized, ganea2021incremental} generalizes few-shot segmentation to a multi-class task, but it is mainly studied under the umbrella of incremental learning~\\cite{mccloskey1989catastrophic, rebuffi2017icarl, castro2018end}.\nThe work of \\cite{siam2020weakly} investigates weakly-supervised few-shot segmentation using image-level vision and language supervision, while 
FS-CS\\xspace uses visual supervision only.\nThe aforementioned tasks generalize few-shot learning but differ from FS-CS\\xspace in the sense that FS-CS\\xspace integrates two related problems under more general and relaxed constraints.\n\\section{Problem formulation}\n\\label{sec:ourtask}\nGiven a query image and a few support images for target classes, we aim to {\\em identify the presence} of each class and {\\em predict its foreground mask} from the query (\\figref{fig:teaser}), which we call the {\\em integrative few-shot classification and segmentation} (FS-CS\\xspace). \nSpecifically, let us assume a target (support) class set $\\mathcal{C}_{\\text{s}}$ of $N$ classes and its support set $\\mathcal{S}=\\{ (\\mathbf{x}_{\\text{s}}^{(i)}, y_{\\text{s}}^{(i)}) | y_{\\text{s}}^{(i)} \\in \\mathcal{C}_{\\text{s}} \\}^{NK}_{i=1}$, which contains $K$ labeled instances for each of the $N$ classes, \\ie, $N$-way $K$-shot~\\cite{matchingnet, ravi2016optimization}. \nThe label $y_{\\text{s}}^{(i)}$ is either a class tag (weak label) or a segmentation annotation (strong label). \nFor a given query image $\\mathbf{x}$, we aim to identify the multi-hot class occurrence $\\mathbf{y}_\\text{C}$ and also predict the segmentation mask $\\mathbf{Y}_\\text{S}$ corresponding to the classes. \nWe assume the class set of the query $\\mathcal{C}$ is a subset of the target class set, \\ie, $\\mathcal{C} \\subseteq \\mathcal{C}_{\\text{s}}$, thus it is also possible to obtain $\\mathbf{y}_\\text{C} = \\varnothing$ and $\\mathbf{Y}_\\text{S} = \\varnothing$. \nThis naturally generalizes the existing few-shot classification~\\cite{matchingnet, protonet} and few-shot segmentation~\\cite{shaban2017oslsm, rakelly2018cofcn}. 
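For illustration, the multi-hot class-occurrence target of an FS-CS episode can be written as a small helper (a hypothetical sketch, not the authors' code; class names are placeholders):

```python
def class_occurrence(query_classes, support_classes):
    """Ground-truth multi-hot occurrence vector for an FS-CS episode:
    entry n is 1 iff the n-th support class appears in the query.
    An empty query class set yields the all-zero ('background') vector."""
    present = set(query_classes)
    return [1 if c in present else 0 for c in support_classes]
```
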
\n\n\n\n\\smallbreakparagraph{Multi-label background-aware prediction.}\nThe conventional formulation of few-shot classification (FS-C\\xspace)~\\cite{matchingnet, protonet, maml} assigns the query to one class among the target classes exclusively and ignores the possibility of the query belonging to none or multiple target classes. \nFS-CS\\xspace tackles this limitation and generalizes FS-C\\xspace to multi-label classification with a background class. \nA multi-label few-shot classification learner $f_{\\text{C}}$ compares semantic similarities between the query and the support images and estimates class-wise occurrences: $\\hat{\\mathbf{y}}_{\\text{C}} = f_{\\text{C}}(\\mathbf{x}, \\mathcal{S}; \\theta)$, where $\\hat{\\mathbf{y}}_{\\text{C}}$ is an $N$-dimensional multi-hot vector, each entry of which indicates the occurrence of the corresponding target class.\nNote that the query is classified into a \\textit{background} class if none of the target classes is detected. \nThanks to the relaxed constraint on the query, \\ie, the query not always belonging to exactly one class, FS-CS\\xspace is more general than FS-C\\xspace.\n\n\n\n\\smallbreakparagraph{Integration of classification and segmentation.}\nFS-CS\\xspace integrates multi-label few-shot classification with semantic segmentation by adopting pixel-level spatial reasoning.\nWhile the conventional FS-S\\xspace~\\cite{shaban2017oslsm, rakelly2018cofcn, wang2019panet, siam2019amp, nguyen2019fwb} assumes the query class set exactly matches the support class set,\n\\ie, $\\mathcal{C} = \\mathcal{C}_{\\text{s}}$,\nFS-CS\\xspace relaxes the assumption such that the query class set can be a subset of the support class set,\n\\ie, $\\mathcal{C} \\subseteq \\mathcal{C}_{\\text{s}}$.\nIn this generalized segmentation setup along with classification, an integrative FS-CS\\xspace learner $f$ estimates both class-wise occurrences and their semantic segmentation maps: $\\{ \\hat{\\mathbf{y}}_{\\text{C}}, 
\\hat{\\mathbf{Y}}_{\\text{S}}\\} = f(\\mathbf{x}, \\mathcal{S} ; \\theta)$.\nThis combined and generalized formulation gives a high degree of freedom to both of the few-shot learning tasks, which has been missing in the literature;\nthe integrative few-shot learner can predict multi-label background-aware class occurrences and segmentation maps simultaneously under a relaxed constraint on the few-shot episodes.\n\n\\section{Integrative Few-Shot Learning (iFSL\\xspace)}\n\\label{sec:ourmethod}\nTo solve the FS-CS\\xspace problem, we propose an effective learning framework, \\textit{integrative few-shot learning (iFSL\\xspace)}.\nThe iFSL\\xspace framework is designed to jointly solve few-shot classification and few-shot segmentation using either a class tag or a segmentation supervision.\nThe integrative few-shot learner $f$ takes as input the query image $\\mathbf{x}$ and the support set $\\mathcal{S}$ and then produces as output the class-wise foreground maps.\nThe set of class-wise foreground maps $\\mathcal{Y}$ is comprised of $\\mathbf{Y}^{(n)} \\in \\mathbb{R}^{H \\times W}$ for $N$ classes: \n\\begin{align}\n\\mathcal{Y} = f(\\mathbf{x}, \\mathcal{S}; \\theta) = \\{ \\mathbf{Y}^{(n)}\\}_{n=1}^{N},\n\\label{eq:foreground_mask}\n\\end{align}\nwhere $H \\times W$ denotes the size of each map and $\\theta$ is parameters to be meta-learned.\nThe output at each position on the map represents the probability of the position being on a foreground region of the corresponding class.\n\n\\smallbreakparagraph{Inference.}\niFSL\\xspace infers both class-wise occurrences and segmentation masks on top of the set of foreground maps $\\mathcal{Y}$.\nFor class-wise occurrences, a multi-hot vector $\\hat{\\mathbf{y}}_\\text{C} \\in \\mathbb{R}^{N}$ is predicted via max pooling followed by thresholding:\n\\begin{align}\n\\hat{\\mathbf{y}}_{\\text{C}}^{(n)} &= \n\\begin{cases}\n 1 \\text{\\; if \\,} \\max_{\\mathbf{p} \\in [H] \\times [W]} \\mathbf{Y}^{(n)}(\\mathbf{p}) \\geq 
\\delta,\\\\\n 0 \\text{\\; otherwise,}\n\\end{cases}\n\\label{eq:predict_class}\n\\end{align}\nwhere $\\mathbf{p}$ denotes a 2D position, $\\delta$ is a threshold, and $[ k ]$ denotes a set of integers from 1 to $k$, \\ie, $[k] = \\{1,\\! 2,\\! \\cdots,\\! k\\}$.\nWe find that inference with average pooling is prone to miss small objects in multi-label classification and thus choose to use max pooling.\nThe detected class at any position on the spatial map signifies the presence of the class. \n\nFor segmentation, a segmentation probability tensor $\\mathbf{Y}_{\\text{S}} \\in \\mathbb{R}^{ H \\times W \\times (N + 1)}$ is derived from the class-wise foreground maps.\nAs the background class is not given as a separate support, we estimate the background map in the context of the given supports; we combine $N$ class-wise background maps into \\textit{an episodic background map} on the fly. \nSpecifically, we compute the episodic background map $\\mathbf{Y}_{\\text{bg}}$ by averaging the probability maps of not being foreground and then concatenate it with the class-wise foreground maps to obtain a segmentation probability tensor $\\mathbf{Y}_{\\text{S}}$: \n\\begin{align}\n\\mathbf{Y}_{\\text{bg}} &= \\frac{1}{N} \\sum_{n=1}^{N}(\\mathbf{1} - \\mathbf{Y}^{(n)}), \\label{eq:bg_mask}\\\\\n\\mathbf{Y}_{\\text{S}} &= \\left[ \\mathbf{Y} || \\mathbf{Y}_{\\text{bg}} \\right] \\in \\mathbb{R}^{ H \\times W \\times (N + 1)}.\n\\label{eq:merge_mask}\n\\end{align}\nThe final segmentation mask $\\hat{\\mathbf{Y}}_\\text{S} \\in \\mathbb{R}^{H \\times W}$ is obtained by computing the most probable class label for each position:\n\\begin{equation}\n\\hat{\\mathbf{Y}}_\\text{S} = \\argmax_{n \\in [N + 1]} \\mathbf{Y}_{\\text{S}}.\n\\label{eq:predict_mask}\n\\end{equation}\n\n\n\\smallbreakparagraph{Learning objective.}\nThe iFSL\\xspace framework allows a learner to be trained using a class tag or a segmentation annotation using the classification loss or segmentation loss, 
respectively.\nThe classification loss is formulated as the average binary cross-entropy between the spatially average-pooled class scores and its ground-truth class label:\n\\begin{align}\n \\mathcal{L}_{\\text{C}} &= -\\frac{1}{N}\\sum_{n=1}^{N}\\mathbf{y}_{\\text{gt}}^{(n)} \\log \\frac{1}{H W}\\sum_{\\scriptscriptstyle{\\mathbf{p} \\in [H] \\! \\times \\! [W]}} \\mathbf{Y}^{(n)}(\\mathbf{p}), \n \\label{eq:loss_cls_final}\n\\end{align}\nwhere $\\mathbf{y}_\\text{gt}$ denotes the multi-hot encoded ground-truth class.\n\nThe segmentation loss is formulated as the average cross-entropy between the class distribution at each individual position and its ground-truth segmentation annotation:\n\\begin{align}\n \\mathcal{L}_{\\text{S}} &= - \\frac{1}{(N + 1)}\\frac{1}{H W} \\sum_{n=1}^{N + 1}\\sum_{\\scriptscriptstyle{\\mathbf{p} \\in [H] \\! \\times \\! [W]}} \\mathbf{Y}_\\text{gt}^{(n)}(\\mathbf{p}) \\log \\mathbf{Y}_{\\text{S}}^{(n)}(\\mathbf{p}), \n \\label{eq:loss_seg_final}\n\\end{align} where $\\mathbf{Y}_\\text{gt}$ denotes the ground-truth segmentation mask. \n\nThese two losses share a similar goal of classification but differ in whether to classify each \\textit{image} or each \\textit{pixel}. 
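The two objectives can be sketched in NumPy as follows (a minimal sketch; array shapes and the stabilizing epsilon are assumptions, and the classification loss keeps only the positive term, exactly as written above):

```python
import numpy as np

def classification_loss(fg_maps, y_gt, eps=1e-8):
    """L_C: average of ground-truth-weighted log of the spatially
    average-pooled class scores, as in the formula above.

    fg_maps: (N, H, W) foreground probability maps; y_gt: (N,) multi-hot.
    """
    pooled = fg_maps.mean(axis=(1, 2))              # average pooling over H x W
    return float(-(y_gt * np.log(pooled + eps)).mean())

def segmentation_loss(seg_probs, seg_gt, eps=1e-8):
    """L_S: average cross-entropy between the (N+1)-way class distribution
    at each pixel and the one-hot ground-truth segmentation mask.

    seg_probs, seg_gt: (N + 1, H, W).
    """
    return float(-(seg_gt * np.log(seg_probs + eps)).mean())
```

A perfect segmentation prediction drives the second loss to (numerically) zero, while the first loss depends only on the pooled per-class scores.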
Either of them is thus chosen according to the given level of supervision for training.\n\n\\section{Model architecture}\nIn this section, we present \\textit{Attentive Squeeze Network} (ASNet\\xspace) of an effective iFSL\\xspace model.\nThe main building block of ASNet\\xspace is the attentive squeeze layer (AS\\xspace layer), which is a high-order self-attention layer that takes a correlation tensor and returns another level of correlational representation.\nASNet\\xspace takes as input the pyramidal cross-correlation tensors between a query and a support image feature pyramids, \\ie, a hypercorrelation~\\cite{hsnet}.\nThe pyramidal correlations are fed to pyramidal AS\\xspace layers that gradually squeeze the spatial dimensions of the support image, and the pyramidal outputs are merged to a final foreground map in a bottom-up pathway~\\cite{hsnet, fpn, refinenet}.\n\\figureref{fig:overview}~illustrates the overall process of ASNet\\xspace.\nThe $N$-way output maps are computed in parallel and collected to prepare the class-wise foreground maps in \\eqref{eq:foreground_mask} for iFSL\\xspace.\n\n\n\\subsection{Attentive Squeeze Network (ASNet)}\n\\smallbreakparagraph{Hypercorrelation construction.}\nOur method first constructs $NK$ hypercorrelations~\\cite{hsnet} between a query and each $NK$ support image and then learns to generate a foreground segmentation mask \\wrt each support input.\nTo prepare the input hypercorrelations, an episode, \\ie, a query and a support set, is enumerated into a paired list of the query, a support image, and a support label: $\\{(\\mathbf{x}, (\\mathbf{x}_{\\text{s}}^{(i)}, y_{\\text{s}}^{(i)})) \\}_{i=1}^{NK}$.\nThe input image is fed to stacked convolutional layers in a CNN and its mid- to high-level output feature maps are collected to build a feature pyramid $\\{\\mathbf{F}^{(l)}\\}_{l=1}^{L}$, where $l$ denotes the index of a unit layer, \\eg, $\\texttt{Bottleneck}$ layer in ResNet50~\\cite{resnet}.\nWe then compute cosine 
similarity between each pair of feature maps from the pair of query and support feature pyramids to obtain 4D correlation tensors of size $H_{\\text{q}}^{(l)} \\times W_{\\text{q}}^{(l)} \\times H_{\\text{s}}^{(l)} \\times W_{\\text{s}}^{(l)}$, which is followed by ReLU~\\cite{relu}:\n\\begin{equation}\n\\mathbf{C}^{(l)}(\\mathbf{p}_{\\text{q}}, \\mathbf{p}_{\\text{s}}) = \\mathrm{ReLU}\\left( \\frac{\\mathbf{F}_{\\text{q}}^{(l)}(\\mathbf{p}_{\\text{q}}) \\cdot \\mathbf{F}_{\\text{s}}^{(l)}(\\mathbf{p}_{\\text{s}})}{||\\mathbf{F}_{\\text{q}}^{(l)}(\\mathbf{p}_{\\text{q}})|| \\, ||\\mathbf{F}_{\\text{s}}^{(l)}(\\mathbf{p}_{\\text{s}})||} \\right).\n\\end{equation}\nThese $L$ correlation tensors are grouped by $P$ groups of the identical spatial sizes, and then the tensors in each group are concatenated along a new channel dimension to build a hypercorrelation pyramid: $\\{\\mathbf{C}^{(p)} | \\mathbf{C}^{(p)} \\in \\mathbb{R}^{ H_{\\text{q}}^{(p)} \\times W_{\\text{q}}^{(p)} \\times H_{\\text{s}}^{(p)} \\times W_{\\text{s}}^{(p)} \\times C_{\\text{in}}^{(p)}} \\}_{p=1}^{P}$ such that the channel size $C_{\\text{in}}^{(p)}$ corresponds to the number of concatenated tensors in the $p_{\\text{th}}$ group. \nWe denote the first two spatial dimensions of the correlation tensor, \\ie, $\\mathbb{R}^{H_{\\text{q}} \\times W_{\\text{q}}}$, as query dimensions, and the last two spatial dimensions, \\ie, $\\mathbb{R}^{H_{\\text{s}} \\times W_{\\text{s}}}$, as support dimensions hereafter.\n\n\n\\smallbreakparagraph{Attentive squeeze layer (AS\\xspace layer).}\nThe AS\\xspace layer transforms a correlation tensor to another with a smaller support dimension via strided self-attention. 
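Before detailing the AS\\xspace layer, the correlation tensors it consumes can be sketched as follows (a minimal NumPy sketch of one hypercorrelation level; function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def correlation_4d(feat_q, feat_s, eps=1e-8):
    """ReLU'ed cosine similarity between every query and support position,
    as in the correlation formula above.

    feat_q: (Hq, Wq, C) query feature map; feat_s: (Hs, Ws, C) support map.
    Returns a 4D correlation tensor of shape (Hq, Wq, Hs, Ws).
    """
    q = feat_q / (np.linalg.norm(feat_q, axis=-1, keepdims=True) + eps)
    s = feat_s / (np.linalg.norm(feat_s, axis=-1, keepdims=True) + eps)
    corr = np.einsum("ijc,klc->ijkl", q, s)  # all pairwise cosine similarities
    return np.maximum(corr, 0.0)             # ReLU clamps negative matches
```

Stacking such tensors from same-resolution layers along a new channel axis then yields one pyramid level of the hypercorrelation.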
\nThe tensor is recast as a matrix with each element representing a support pattern.\nGiven a correlation tensor $\\mathbf{C} \\in \\mathbb{R}^{H_{\\text{q}} \\times W_{\\text{q}} \\times H_{\\text{s}} \\times W_{\\text{s}} \\times C_{\\text{in}} }$ in a hypercorrelation pyramid, we start by reshaping the correlation tensor as a block matrix of size $H_{\\text{q}} \\times W_{\\text{q}}$ with each element corresponding to a correlation tensor of $\\mathbf{C}(\\mathbf{x}_{\\text{q}}) \\in \\mathbb{R}^{H_{\\text{s}} \\times W_{\\text{s}} \\times C_{\\text{in}} }$ on the query position $\\mathbf{x}_{\\text{q}}$ such that \n\\begin{equation}\n\\mathbf{C}^{\\text{block}} = \n\\begin{bmatrix} \n \\mathbf{C}((1, 1)) & \\hdots & \\mathbf{C}((1, W_{\\text{q}})) \\\\\n \\vdots & \\ddots & \\vdots \\\\\n \\mathbf{C}((H_{\\text{q}}, 1)) & \\hdots & \\mathbf{C}((H_{\\text{q}}, W_{\\text{q}})) \n\\end{bmatrix}\n\\label{eq:block_matrix}\n.\n\\end{equation}\nWe call each element a \\textit{support correlation tensor}.\nThe goal of an AS\\xspace layer is to analyze the global context of each support correlation tensor and extract a correlational representation with a reduced support dimension while the query dimension is preserved: \n$\\mathbb{R}^{ H_{\\text{q}} \\times W_{\\text{q}} \\times H_{\\text{s}} \\times W_{\\text{s}} \\times C_{\\text{in}} } \\rightarrow \\mathbb{R}^{H_{\\text{q}} \\times W_{\\text{q}} \\times H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{out}} }$, where $H_{\\text{s}}' \\leq H_{\\text{s}}$ and $W_{\\text{s}}' \\leq W_{\\text{s}}$.\nTo learn a holistic pattern of each support correlation, we adopt the global self-attention mechanism~\\cite{transformers} for correlational feature transform.\nThe self-attention weights are shared across all query positions and processed in parallel.\n\n\nLet us denote a support correlation tensor on any query position $\\mathbf{x}_{\\text{q}}$ by $\\mathbf{C}^{{\\text{s}}} = 
\\mathbf{C}^{\\text{block}}(\\mathbf{x}_{\\text{q}})$ for notational brevity as all positions share the following computation.\nThe self-attention computation starts by embedding a support correlation tensor $\\mathbf{C}^{{\\text{s}}}$ to a target\n\\footnote{\nIn this section, we adopt the term ``target'' to indicate the ``query'' embedding in the context of self-attention learning~\\cite{transformers, vit, lsa, pvt, lrnet} to avoid homonymous confusion with the ``query'' image to be segmented.\n}\n, key, value triplet: \n$\n\\mathbf{T}, \\mathbf{K}, \\mathbf{V} \\in \\mathbb{R}^{ H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{hd}}},\n$\nusing three convolutions whose strides, greater than or equal to one, govern the output size.\nThe resultant target and key correlational representations, $\\mathbf{T}$ and $\\mathbf{K}$, are then used to compute an attention context.\nThe attention context is computed as the following matrix multiplication:\n\\begin{equation}\n\\mathbf{A} = \\mathbf{T} \\mathbf{K}^{\\top} \\in \\mathbb{R}^{H_{\\text{s}}' \\times W_{\\text{s}}' \\times H_{\\text{s}}' \\times W_{\\text{s}}'}.\n\\label{eq:sixd_attn}\n\\end{equation}\nNext, the attention context is normalized by softmax such that the votes on key foreground positions sum to one, masking the attention with the support mask annotation $\\mathbf{Y}_{\\text{s}}$, if available, to attend more to the foreground region:\n\\begin{align*}\n\\bar{\\mathbf{A}}(\\mathbf{p}_{\\text{t}}, \\mathbf{p}_{\\text{k}}) = \\frac{\\exp \\left( \\mathbf{A}(\\mathbf{p}_{\\text{t}}, \\mathbf{p}_{\\text{k}}) \\mathbf{Y}_{\\text{s}}(\\mathbf{p}_{\\text{k}}) \\right)}{\\sum_{\\mathbf{p}_{\\text{k}}'}\\exp \\left( \\mathbf{A}(\\mathbf{p}_{\\text{t}}, \\mathbf{p}'_{\\text{k}}) \\mathbf{Y}_{\\text{s}}(\\mathbf{p}'_{\\text{k}}) \\right)},\n\\end{align*}\n\\begin{equation}\n\\text{where\\, } \\mathbf{Y}_{\\text{s}}(\\mathbf{p}_{\\text{k}}) =\n\\begin{cases}\n 1 \\quad \\; \\; \\text{if} \\; 
\\mathbf{p}_{\\text{k}} \\in [H'_{\\text{s}}] \\times [W'_{\\text{s}}] \\text{ is foreground,} \\\\ \n - \\infty \\; \\text{otherwise.}\n\\end{cases} \n\\label{eq:masked_attn}\n\\end{equation}\nThe masked attention context $\\bar{\\mathbf{A}}$ is then used to aggregate\nthe value embedding $\\mathbf{V}$:\n\\begin{equation}\n\\mathbf{C}^{{\\text{s}}}_{{\\text{A}}} = \\bar{\\mathbf{A}} \\mathbf{V} \\in \\mathbb{R}^{ H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{hd}} }.\n\\label{eq:agg}\n\\end{equation}\nThe attended representation is fed to an MLP layer, $\\mathbf{W}_{\\text{o}}$, and added to the input.\nIn case the input and output dimensions mismatch, the input is optionally fed to a convolutional layer, $\\mathbf{W}_{\\text{I}}$.\nThe addition is followed by an activation layer $\\varphi(\\cdot)$ consisting of a group normalization~\\cite{groupnorm} and a ReLU activation~\\cite{relu}:\n\\begin{equation}\n\\mathbf{C}^{{\\text{s}}}_{\\text{o}} = \\varphi(\\mathbf{W}_{\\text{o}}(\\mathbf{C}^{{\\text{s}}}_{\\text{A}}) + \\mathbf{W}_{\\text{I}}(\\mathbf{C}^{{\\text{s}}}) ) \\in \\mathbb{R}^{ H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{out}}}.\n\\end{equation}\nThe output is then fed to another MLP that concludes a unit operation of an AS\\xspace layer:\n\\begin{equation}\n\\mathbf{C}^{{\\text{s}} \\prime} = \\varphi(\\mathbf{W}_\\text{FF}(\\mathbf{C}^{{\\text{s}}}_{\\text{o}}) + \\mathbf{C}^{{\\text{s}}}_{\\text{o}}) \\in \\mathbb{R}^{ H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{out}}},\n\\end{equation}\nwhich is embedded to the corresponding query position in the block matrix of \\eqref{eq:block_matrix}.\nNote that the AS\\xspace layer can be stacked to progressively reduce the size of support correlation tensor, $H_{\\text{s}}' \\times W_{\\text{s}}'$, to a smaller size.\nThe overall pipeline of AS\\xspace layer is illustrated in the supplementary material.\n\n\n\n\\smallbreakparagraph{Multi-layer fusion.}\nThe pyramid correlational 
representations are merged from the coarsest to the finest level by cascading a pair-wise operation of the following three steps: upsampling, addition, and non-linear transform.\nWe first bi-linearly upsample the bottommost correlational representation to the query spatial dimension of its adjacent earlier one and then add the two representations to obtain a mixed one $\\mathbf{C}^{\\text{mix}}$.\nThe mixed representation is fed to two sequential AS\\xspace layers until it becomes a point feature of size $H_{\\text{s}}' = W_{\\text{s}}'=1$, which is fed to the subsequent pyramidal fusion.\nThe output from the earliest fusion layer is fed to a convolutional decoder, which consists of interleaved 2D convolution and bi-linear upsampling that map the $C$-dimensional channel to 2 (foreground and background) and the output spatial size to the input query image size.\nSee \\figref{fig:overview} for the overall process of multi-layer fusion.\n\n\n\n\n\n\n\\smallbreakparagraph{Class-wise foreground map computation.}\nThe $K$-shot output foreground activation maps are averaged to produce a mask prediction for each class.\nThe averaged output map is normalized by softmax over the two channels of the binary segmentation map to obtain a foreground probability prediction $\\mathbf{Y}^{(n)} \\in \\mathbb{R}^{H \\times W}$.\n\n\n\\section{Experiments}\nIn this section we report our experimental results regarding the FS-CS\\xspace task, the iFSL\\xspace framework, as well as the ASNet\\xspace after briefly describing implementation details and evaluation benchmarks.\nSee the supplementary material for additional results, analyses, and experimental details.\n\n\n\n\n\n\\subsection{Experimental setups}\n\\smallbreakparagraph{Experimental settings.}\nWe select ResNet50 and ResNet-101~\\cite{resnet} pretrained on ImageNet~\\cite{russakovsky2015imagenet} as our backbone networks for a fair comparison with other methods and freeze the backbone during training as similarly as the previous 
work~\\cite{tian2020pfenet, hsnet}.\nWe train models using the Adam~\\cite{adam} optimizer with learning rates of $10^{-4}$ and $10^{-3}$ for the classification loss and the segmentation loss, respectively.\nWe train all models with 1-way 1-shot training episodes and evaluate the models on arbitrary $N$-way $K$-shot episodes.\nFor inferring class occurrences, we use a threshold $\\delta = 0.5$.\nAll the AS\\xspace layers are implemented as multi-head attention with 8 heads. \nThe number of correlation pyramid levels is set to $P=3$.\n\n\n\n\\smallbreakparagraph{Dataset.}\nFor the new task of FS-CS\\xspace,\nwe construct a benchmark adopting the images and splits from the two widely-used FS-S\\xspace datasets, Pascal-5$^{i}$~\\cite{shaban2017oslsm, pascal} and COCO-20$^{i}$~\\cite{nguyen2019fwb, coco}, which are also suitable for multi-label classification~\\cite{wang2017multi}.\nWithin each fold, we construct an episode by randomly sampling a query and an $N$-way $K$-shot support set that annotates the query with $N$-way class labels and an $(N\\!+\\!1)$-way segmentation mask in the context of the support set.\nFor the FS-S\\xspace task, we also use Pascal-5$^{i}$ and COCO-20$^{i}$ following the same data splits as \\cite{shaban2017oslsm} and \\cite{nguyen2019fwb}, respectively.\n\n\\smallbreakparagraph{Evaluation.}\nEach dataset is split into four mutually disjoint class sets and cross-validated.\nFor multi-label classification evaluation metrics, we use the 0\/1 exact ratio $\\mathrm{ER} = \\mathbbm{1} [\\mathbf{y}_{\\text{gt}} = \\mathbf{y}_{\\text{C}} ]$~\\cite{durand2019learning}.\nIn the supplementary material, we also report the results in accuracy $\\mathrm{acc} = \\frac{1}{N} \\sum_n \\mathbbm{1} [\\mathbf{y}_{\\text{gt}}^{(n)} = \\mathbf{y}^{(n)}_{\\text{C}} ]$.\nFor segmentation, we use mean IoU $\\mathrm{mIoU} = \\frac{1}{C}\\sum_c \\mathrm{IoU}_c$~\\cite{shaban2017oslsm, wang2019panet}, where $\\mathrm{IoU}_c$ denotes the IoU value of the $c_{\\text{th}}$ 
class.\n\n\n\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\small\n \\includegraphics[width=0.97\\linewidth]{fig\/qual2way.pdf}\n \\vspace{-1mm}\n\t\\caption{2-way 1-shot segmentation results of ASNet\\xspace on FS-CS\\xspace.\n\tThe examples cover all three cases of $\\mathcal{C} = \\varnothing$, $\\mathcal{C} \\subset \\mathcal{C}_{\\text{s}}$, and $\\mathcal{C} = \\mathcal{C}_{\\text{s}}$.\n\tThe images are resized to a square shape for visualization.\n \\vspace{-4mm}\n}\n\\label{fig:qual2way}\n\\end{figure}\n\n\n\n\n\n\n\\subsection{Experimental evaluation of iFSL\\xspace on FS-CS\\xspace}\nIn this subsection, we investigate the iFSL\\xspace learning framework on the FS-CS\\xspace task.\nAll ablation studies are conducted using ResNet50 on Pascal-5$^{i}$ and evaluated in the 1-way 1-shot setup unless specified otherwise.\nNote that it is difficult to present a fair and direct comparison between the conventional FS-C\\xspace and our few-shot classification task since FS-C\\xspace is always evaluated on single-label classification benchmarks~\\cite{matchingnet, tieredimagenet, cifarfs, metaoptnet, metadataset}, whereas our task is evaluated on multi-label benchmarks~\\cite{pascal, coco}, which are irreducible to a single-label one in general. \n\n\n\n\\smallbreakparagraph{Effectiveness of iFSL\\xspace on FS-CS\\xspace.}\nWe validate the iFSL\\xspace framework on FS-CS\\xspace and also compare the performance of ASNet\\xspace with those of three recent state-of-the-art methods, PANet~\\cite{wang2019panet}, PFENet~\\cite{tian2020pfenet}, and HSNet~\\cite{hsnet}, which were originally proposed for the conventional FS-S\\xspace task; all the models are trained by iFSL\\xspace for a fair comparison. 
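The evaluation metrics defined above can be sketched as follows (a minimal NumPy sketch; which class labels enter the mIoU average is left as a parameter here):

```python
import numpy as np

def exact_ratio(y_gt, y_pred):
    """0/1 exact ratio: 1 iff the predicted multi-hot vector matches exactly."""
    return int(np.array_equal(y_gt, y_pred))

def mean_iou(mask_gt, mask_pred, classes):
    """Mean IoU over the given class labels; classes absent from both
    ground truth and prediction are skipped to avoid a 0/0 ratio."""
    ious = []
    for c in classes:
        inter = np.logical_and(mask_gt == c, mask_pred == c).sum()
        union = np.logical_or(mask_gt == c, mask_pred == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```

Note that the exact ratio is stricter than per-class accuracy: a single wrong entry in the multi-hot vector zeroes the episode's score.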
\nNote that we exclude the background merging step (Eqs.~\\ref{eq:bg_mask} and \\ref{eq:merge_mask}) for PANet as its own pipeline produces a multi-class output including background.\nTables \\ref{table:ipa} and \\ref{table:ico} validate the iFSL\\xspace framework on the FS-CS\\xspace task quantitatively, where our ASNet\\xspace surpasses other methods on both 1-way and 2-way setups in terms of few-shot classification as well as the segmentation performance.\nThe 2-way segmentation results are also qualitatively demonstrated in \\figref{fig:qual2way} visualizing exhaustive inclusion relations between a query class set $\\mathcal{C}$ and a target (support) class set $\\mathcal{C}_{\\text{s}}$ in a 2-way setup.\n\n\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\small\n \\includegraphics[width=0.9\\linewidth]{fig\/multiway_legend.pdf}\n \\vspace{-3mm}\n \\includegraphics[width=0.49\\linewidth]{fig\/multiway_er_bar.pdf} \\hfill\n \\includegraphics[width=0.49\\linewidth]{fig\/multiway_miou_bar.pdf}\n \\caption{$N$-way 1-shot FS-CS\\xspace performance comparison of four methods by varying $N$ from 1 to 5.\n\t}\n\\label{fig:multiway}\n\\end{figure}\n\n\\smallbreakparagraph{Weakly-supervised iFSL\\xspace.}\nThe iFSL\\xspace framework is versatile across the level of supervision: weak labels (class tags) or strong labels (segmentation masks).\nAssuming weak labels are available but strong labels are not, ASNet\\xspace is trainable with the classification learning objective of iFSL\\xspace (Eq.~\\ref{eq:loss_cls_final}) and its results are presented as $\\text{ASNet\\xspace}_{\\text{w}}$ in \\tableref{table:ipa}.\n$\\text{ASNet\\xspace}_{\\text{w}}$ performs on par with ASNet\\xspace in terms of classification ER (82.0\\% \\textit{vs.}~84.9\\% on 1-way 1-shot), but performs ineffectively on the segmentation task (15.0\\% \\textit{vs.}~52.3\\% on 1-way 1-shot).\nThe result implies that the class tag labels are sufficient for a model to recognize the class occurrences, but are 
weak to endorse model's precise spatial recognition ability.\n\n\n\n\n\\smallbreakparagraph{Multi-class scalability of FS-CS\\xspace.}\nIn addition, FS-CS\\xspace is extensible to a multi-class problem with arbitrary numbers of classes, while FS-S\\xspace is not as flexible as FS-CS\\xspace in the wild.\nFigure~\\ref{fig:multiway} compares the FS-CS\\xspace performances of four methods by varying the $N$-way classes from one to five, where the other experimental setup follows the same one as in \\tableref{table:ipa}.\nOur ASNet\\xspace shows consistently better performances than other methods on FS-CS\\xspace in varying number of classes.\n\n\n\n\n\\smallbreakparagraph{Robustness of FS-CS\\xspace against task transfer.}\nWe evaluate the transferability between FS-CS\\xspace, FS-C\\xspace, and FS-S\\xspace by training a model on one task and evaluating it on the other task.\nThe results are compared in \\figref{fig:task_transfer} in which `$\\text{FS-S\\xspace} \\rightarrow$ FS-CS\\xspace' represents the result where the model trained on the $\\text{FS-S\\xspace}$ task (with the guarantee of support class presence) is evaluated on the FS-CS\\xspace setup.\nTo construct training and validation splits for FS-C\\xspace or FS-S\\xspace, we sample episodes that satisfy the constraint of support class occurrences~\\footnote{We sample 2-way 1-shot episodes having a single positive class for training on FS-C\\xspace or evaluating on FS-C\\xspace.\nWe collect 1-way 1-shot episodes sampled from the same class for training on FS-S\\xspace or evaluating on FS-S\\xspace.}. \nFor training FS-C\\xspace models, we use the class tag supervision only. 
\nAll the other settings are fixed the same, \\eg, we use ASNet\\xspace with ResNet50 and Pascal-5$^{i}$.\n\nThe results show that FS-CS\\xspace learners, \\ie, models trained on FS-CS\\xspace, are transferable to the two conventional few-shot learning tasks and yet overcome their shortcomings.\nThe transferability between few-shot classification tasks, \\ie, FS-C\\xspace and $\\text{FS-CS\\xspace}_{\\text{w}}$, is presented in \\figref{fig:task_transfer}(a).\nIn this setup, the $\\text{FS-CS\\xspace}_{\\text{w}}$ learner is evaluated by predicting the higher class response between the two classes, although it is trained using the multi-label classification objective.\nThe FS-CS\\xspace learner closely competes with the FS-C\\xspace learner on FS-C\\xspace in terms of classification accuracy.\nIn contrast, the task transfer between segmentation tasks, FS-S\\xspace and FS-CS\\xspace, results in asymmetric outcomes as shown in \\figref{fig:task_transfer}(b)~and~(c).\nThe FS-CS\\xspace learner shows a relatively small performance drop on FS-S\\xspace, whereas the FS-S\\xspace learner suffers a severe performance drop on FS-CS\\xspace.\nQualitative examples in \\figref{fig:teaser} demonstrate that the FS-S\\xspace learner predicts a vast number of false-positive pixels and results in poor performance.\nIn contrast, the FS-CS\\xspace learner successfully distinguishes the region of interest by analyzing the semantic relevance between the query objects and the support set.\n\\input{table\/task_transfer}\n\\input{table\/f1pa}\n\\input{table\/f2co}\n\n\\input{table\/ablation}\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Comparison with recent FS-S\\xspace methods on FS-S\\xspace}\nTables~\\ref{table:fpa} and \\ref{table:fco} compare the results of the recent few-shot semantic segmentation methods and ASNet\\xspace on the conventional FS-S\\xspace task.\nAll model performances in the tables are taken from the corresponding papers, and the numbers of learnable parameters are either taken 
from papers or counted from their official sources of implementation.\nFor a fair comparison, the performances of methods that incorporate extra unlabeled images~\\cite{yang2021mining, liu2020ppnet} are reported as measured in the absence of the extra data.\nNote that ASNet\\xspace in Tables~\\ref{table:fpa} and \\ref{table:fco} is trained and evaluated following the FS-S\\xspace setup, not the proposed FS-CS\\xspace one.\n\nThe results verify that ASNet\\xspace outperforms the existing methods including the most recent ones~\\cite{wu2021learning, xie2021few, yang2021mining}.\nIn particular, the methods that cast few-shot segmentation as the task of correlation feature transform, ASNet and HSNet~\\cite{hsnet}, outperform other visual feature transform methods, indicating that learning correlations is beneficial for both FS-CS\\xspace and FS-S\\xspace.\nNote that ASNet\\xspace is the most lightweight among them, as ASNet\\xspace processes correlation features that have smaller channel dimensions, \\eg, at most 128, than visual features, \\eg, at most 2048 in ResNet50.\n\n\n\n\n\n\n\\subsection{Analyses on the model architecture}\nWe perform ablation studies on the model architecture to reveal the benefit of each component.\nWe replace the global self-attention in the ASNet\\xspace layer with the local self-attention~\\cite{lsa} to see the effect of the global self-attention~(\\tableref{table:ablation_architecture}\\texttt{a}).\nThe local self-attention variant is comparable to the global ASNet\\xspace in terms of the classification exact ratio but degrades the segmentation mIoU significantly, signifying the importance of learning the global context of feature correlations.\nNext, we ablate the attention masking in \\eqref{eq:masked_attn}, which verifies that the attention masking prior is effective~(\\tableref{table:ablation_architecture}\\texttt{b}).\nLastly, we replace the multi-layer fusion path with spatial average pooling over the 
support dimensions followed by element-wise addition~(\\tableref{table:ablation_architecture}\\texttt{c}), and the result indicates that it is crucial to fuse outputs from the multi-layer correlations to precisely estimate class occurrence and segmentation masks.\n\n\n\n\\section{Discussion}\nWe have introduced the integrative task of few-shot classification and segmentation (FS-CS\\xspace) that generalizes two existing few-shot learning problems.\nOur proposed integrative few-shot learning (iFSL\\xspace) framework is shown to be effective on FS-CS\\xspace, in addition, our proposed attentive squeeze network (ASNet\\xspace) outperforms recent state-of-the-art methods on both FS-CS\\xspace and FS-S\\xspace.\nThe iFSL\\xspace design allows a model to learn either with weak or strong labels, that being said, \nlearning our method with weak labels achieves low segmentation performances. \nThis result opens a future direction of effectively boosting the segmentation performance leveraging weak labels in the absence of strong labels for FS-CS\\xspace.\n\n\n\n\\section{Supplementary Material}\n\n\n\\subsection{Detailed model architecture}\nThe comprehensive configuration of attentive squeeze network is summarized in \\tableref{table:asnet}, and its building block, attentive squeeze layer, is depicted in \\figref{fig:aslayer}.\nThe channel sizes of the input correlation $\\{C_\\text{in}^{(1)}, C_\\text{in}^{(2)}, C_\\text{in}^{(3)}\\}$ corresponds to $\\{4, 6, 3\\}$, $\\{4, 23, 3\\}$, $\\{3, 3, 1\\}$ for ResNet50~\\cite{resnet}, ResNet101, VGG-16~\\cite{vgg}, respectively.\n\n\n\\subsection{Implementation details}\nOur framework is implemented on PyTorch~\\cite{pytorch} using the PyTorch Lightning~\\cite{falcon2019pytorch} framework.\nTo reproduce the existing methods, we heavily borrow publicly available code bases.~\\footnote{PANet~\\cite{wang2019panet}: \\url{https:\/\/github.com\/kaixin96\/PANet} \\\\ PFENet~\\cite{tian2020pfenet}: 
\\url{https:\/\/github.com\/dvlab-research\/PFENet} \\\\\nHSNet~\\cite{hsnet}: \\url{https:\/\/github.com\/juhongm999\/hsnet}}\nWe set the officially provided hyper-parameters for each method while sharing generic techniques across all the methods, \\eg, excluding images of small support objects from support sets or switching the roles of the query and the support during training. \nNVIDIA GeForce RTX 2080 Ti GPUs or NVIDIA TITAN Xp GPUs are used in all experiments, where we train models using two GPUs on Pascal-$5^{i}$~\\cite{shaban2017oslsm} and four GPUs on COCO-$20^{i}$~\\cite{nguyen2019fwb}.\nModel training is halted either when it reaches the maximum $500^{\\text{th}}$ epoch or when it starts to overfit.\nWe resize input images to $400 \\times 400$ without any data augmentation strategies during both training and testing for all methods.\nFor segmentation evaluation, we recover the two-channel output foreground map to the original image size by bilinear interpolation.\nPascal-$5^{i}$ and COCO-$20^{i}$ are derived from Pascal Visual Object Classes 2012~\\cite{pascal} and Microsoft Common Objects in Context 2014~\\cite{coco}, respectively.\nTo construct episodes from the datasets, we sample support sets such that one of the query classes is included in the support set with probability 0.5 to balance the ratio of background episodes across arbitrary benchmarks.\n\n\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\small\n \\includegraphics[width=\\linewidth]{supp\/fig\/aslayer.pdf}\n \n \\vspace{-5mm}\n\t\\caption{Illustration of the proposed attentive squeeze layer (Sec.~5.1. 
in the main paper).\n\tThe shape of each output tensor is denoted next to arrows.\n }\n\\label{fig:aslayer}\n\\end{figure}\n\n\\newcommand{\\plh}{%\n {\\ooalign{$\\phantom{0}$\\cr\\hidewidth$\\scriptstyle\\times$\\cr}}%\n}\n\n\\begin{table}[t!]\n \\centering\n \\setlength{\\extrarowheight}{-2.0pt}\n \\scalebox{0.70}{\n \\begin{tabular}{ccc}\n \\toprule\n $p = 1$ & $p = 2$ & $p = 3$ \\\\ \n $\\frac{H}{8}\\plh\\frac{H}{8}\\plh\\frac{H}{8}\\plh\\frac{H}{8}\\plh C_\\text{in}^{(1)}$ & $\\frac{H}{16}\\plh\\frac{H}{16}\\plh\\frac{H}{16}\\plh\\frac{H}{16}\\plh C_\\text{in}^{(2)}$ & $\\frac{H}{32}\\plh\\frac{H}{32}\\plh\\frac{H}{32}\\plh\\frac{H}{32}\\plh C_\\text{in}^{(3)}$ \\\\ \n \\midrule\n {[pool support dims. by half]} & & \\\\\n $\\mathrm{AS}(C_\\text{in}^{(1)}\\rightarrow32, 5, 4, 2)$ & $\\mathrm{AS}(C_\\text{in}^{(2)}\\rightarrow32, 5, 4, 2)$ & $\\mathrm{AS}(C_\\text{in}^{(3)}\\rightarrow32, 5, 4, 2)$ \\\\\n $\\mathrm{AS}(32\\rightarrow128, 5, 4, 2$) & $\\mathrm{AS}(32\\rightarrow128, 5, 4, 2$) & $\\mathrm{AS}(32\\rightarrow128, 3, 2, 1$) \\\\\n {[pool support dims.]} & & \\\\\n {[upsample query dims.]} & & \\\\\n \\multicolumn{2}{c}{ {[element-wise addition]} } & \\\\\n \\multicolumn{2}{c}{ $\\mathrm{AS}(128\\rightarrow128, 1, 1, 0)$ } & \\\\\n \\multicolumn{2}{c}{ $\\mathrm{AS}(128\\rightarrow128, 2, 1, 0)$ } & \\\\\n \\multicolumn{2}{c}{ {[upsample query dims.]} } \\\\\n & \\multicolumn{2}{c}{ {[element-wise addition]} } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{AS}(128\\rightarrow128, 1, 1, 0)$ } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{AS}(128\\rightarrow128, 2, 1, 0)$ } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{conv}(128\\rightarrow128, 3, 1, 1)$ } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{ReLU}$ } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{conv}(128\\rightarrow64, 3, 1, 1)$ } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{ReLU}$ } \\\\\n & \\multicolumn{2}{c}{ {[upsample query dims.]} } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{conv}(64\\rightarrow64, 3, 1, 1)$ } \\\\\n & 
\\multicolumn{2}{c}{ $\\mathrm{ReLU}$ } \\\\\n & \\multicolumn{2}{c}{ $\\mathrm{conv}(64\\rightarrow2, 3, 1, 1)$ } \\\\\n & \\multicolumn{2}{c}{ {[interpolate query dims. to the input size]} } \\\\\n \\bottomrule\n \\end{tabular}\n } \n \\caption{Comprehensive configuration of ASNet\\xspace, whose overview is illustrated in Fig.~2 in the main paper.\n The top of the table shows the input of the model, and the detailed architecture of the model is given below it.\n $\\mathrm{AS}(C_{\\text{in}} \\rightarrow C_{\\text{out}}, k, s, p)$ denotes an AS\\xspace layer with kernel size ($k$), stride ($s$), and padding size ($p$) for the convolutional embedding with the input channel ($C_{\\text{in}}$) and output channel~($C_{\\text{out}}$). \\label{table:asnet}\n }\n\\end{table}\n\n\n\n\n\\subsection{Further analyses}\nIn this subsection we provide supplementary analyses on the iFSL\\xspace framework and ASNet\\xspace.\nAll experimental results are obtained using ResNet50 on Pascal-$5^{i}$ and evaluated with 1-way 1-shot episodes unless specified otherwise.\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\small\n \\includegraphics[width=0.9\\linewidth]{supp\/fig\/cls_thr.pdf}\n\t\\caption{Classification threshold $\\delta$ and its effects.\n }\n\\label{fig:cls_thr}\n\\vspace{-3mm}\n\\end{figure}\n\n\n\\smallbreakparagraph{The classification occurrence threshold $\\delta$.}\nEquation~2 in the main paper describes the process of detecting object classes on the shared foreground map by thresholding the highest foreground probability response on each foreground map.\nAs the foreground probability is bounded between 0 and 1, we set the threshold $\\delta=0.5$ for simplicity.\nA high threshold value makes the classifier reject insufficiently confident probabilities as class presences.\nFigure~\\ref{fig:cls_thr} shows the classification 0\/1 exact ratios obtained by varying the threshold; the classification performance is highest around $\\delta=0.5$ and $0.6$.\nFine-tuning the threshold for the best classification 
performance is not the focus of this work; we thus opt for the most straightforward threshold $\\delta=0.5$ for all experiments.\n\n\n\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\small\n\t\\includegraphics[width=0.99\\linewidth]{supp\/fig\/bg.pdf}\n\t\\caption{Visualization of the background map for each support class and the merged background map $\\mathbf{Y}_{\\text{bg}}$ for the query.\n\tHigh background response is illustrated in black.\n\t}\n\\label{fig:bg}\n\\vspace{-3mm}\n\\end{figure}\n\n\\smallbreakparagraph{Visualization of $\\mathbf{Y}_{\\text{bg}}$.}\nFigure~\\ref{fig:bg} visually demonstrates the background merging step of iFSL\\xspace in Eq.~(3) in the main paper.\nThe background maps are taken from the 2-way 1-shot episodes.\nThe background response of the negative class is relatively even, \\ie, the majority of pixels are estimated as background, whereas the background response of the positive class highly contributes to the merged background map.\n\n\n\n\n\n\\input{supp\/table\/clsplussegloss}\n\n\\input{supp\/table\/ipa101}\n\n\n\n\\smallbreakparagraph{iFSL\\xspace with weak labels, strong labels, and both.}\n\\tableref{table:clsplussegloss} compares FS-CS\\xspace performances of three ASNets, each of which is trained with the classification loss (Eq.~(6) in the main paper), the segmentation loss (Eq.~(7) in the main paper), or both.\nThe loss is chosen according to the level of supervision on support sets: classification tags (weak labels) or segmentation annotations (strong labels).\nWe observe that neither the classification nor segmentation performances deviate significantly between $\\mathcal{L}_{\\text{S}}$ and $\\mathcal{L}_{\\text{C}} + \\mathcal{L}_{\\text{S}}$;\ntheir performances are not even 0.3\\%p different.\nAs a segmentation annotation is a dense form of classification tag, the classification loss has an insignificant influence when the segmentation loss is used for training.\nWe thus choose to use the segmentation loss exclusively in the 
presence of segmentation annotations.\n\n\n\n\n\\subsection{Additional results}\nHere we provide several extra experimental results that were omitted from the main paper due to lack of space.\nThe contents include results using other backbone networks, another evaluation metric, and $K$ shots where $K > 1$.\n\n\n\n\\smallbreakparagraph{iFSL\\xspace on FS-CS\\xspace using ResNet101.}\nWe include the FS-CS\\xspace results of the iFSL\\xspace framework on Pascal-$5^{i}$ using ResNet101~\\cite{resnet} in \\tableref{table:ipa101}, which is missing in the main paper due to the page limit. \nAll other experimental setups are matched with those of Table~1 in the main paper except for the backbone network.\nASNet\\xspace again outperforms the previous methods on both the classification and segmentation tasks with this backbone.\n\n\n\n\\input{supp\/table\/acc}\n\\smallbreakparagraph{FS-CS\\xspace classification metrics: 0\/1 exact ratio and accuracy.}\n\\tableref{table:ipa50_acc} presents the results of two classification evaluation metrics of FS-CS\\xspace: 0\/1 exact ratio~\\cite{durand2019learning} and classification accuracy.\nThe classification accuracy metric takes the average of correct predictions for each class for each query, while the 0\/1 exact ratio measures the binary correctness for all classes for each query, thus being stricter than the accuracy; the exact formulations are in Sec.~6.1. 
of the main paper.\nASNet\\xspace shows higher classification performance than the others on both classification metrics.\n\n\\input{supp\/table\/ipa50_5shot}\n\\input{supp\/table\/ipa101_5shot}\n\n\n\n\\smallbreakparagraph{iFSL\\xspace on 5-shot FS-CS\\xspace.}\nTables~\\ref{table:ipa50_5shot} and \\ref{table:ipa101_5shot} compare four different methods on the 1-way 5-shot and 2-way 5-shot FS-CS\\xspace setups, which are missing in the main paper due to the page limit.\nAll other experimental setups are matched with those of Table~1 in the main paper except for the number of support samples for each class, \\ie, varying $K$ shots.\nASNet\\xspace also outperforms the other methods on the multi-shot setups.\n\n\n\n\n\n\n\n\\input{supp\/table\/fpavgg}\n\n\n\\input{supp\/table\/multiway}\n\n\n\\smallbreakparagraph{ASNet\\xspace on FS-S\\xspace using VGG-16.}\n\\tableref{table:fpavgg} compares the recent state-of-the-art methods and ASNet\\xspace on FS-S\\xspace using VGG-16~\\cite{vgg}.\nWe train and evaluate ASNet\\xspace with the FS-S\\xspace problem setup to compare fairly with the recent methods.\nAll the other experimental variables are detailed in Sec.~6.3. 
and Table~3 of the main paper.\nASNet\\xspace consistently shows outstanding performance with the VGG-16 backbone network, as observed in the experiments using ResNets.\n\n\n\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\small\n\t\\includegraphics[width=\\linewidth]{supp\/fig\/2way_extra_coco_v1.pdf}\n\t\\caption{2-way 1-shot FS-CS\\xspace segmentation prediction maps on the COCO-$20^{i}$ benchmark.}\n\\label{fig:2way_extra_coco}\n\\end{figure*}\n\n\n\\smallbreakparagraph{Qualitative results.}\nWe attach additional segmentation predictions of ASNet\\xspace learned with the iFSL\\xspace framework on the FS-CS\\xspace task in \\figref{fig:2way_extra_coco}.\nWe observe that ASNet\\xspace successfully predicts segmentation maps in challenging scenarios in the wild, such as a) segmenting tiny objects, b) segmenting non-salient objects, c) segmenting multiple objects, and d) segmenting a query given a small support object annotation.\n\n\n\n\n\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\small\n\t\\includegraphics[width=0.80\\linewidth]{supp\/fig\/fssfscs.pdf}\n\t\\caption{2-way 1-shot FS-CS\\xspace segmentation prediction maps of $\\text{ASNet}_{\\text{FS-S\\xspace}}$ and $\\text{ASNet}_{\\text{FS-CS\\xspace}}$.}\n\\label{fig:fssfcss}\n\\end{figure*}\n\n\n\\smallbreakparagraph{Qualitative results of $\\text{ASNet}_{\\text{FS-S\\xspace}}$.}\nFigure~\\ref{fig:fssfcss} visualizes typical failure cases of the $\\text{ASNet}_{\\text{FS-S\\xspace}}$ model in comparison with $\\text{ASNet}_{\\text{FS-CS\\xspace}}$; these examples qualitatively show the severe performance drop of $\\text{ASNet}_{\\text{FS-S\\xspace}}$ on FS-CS\\xspace, which is quantitatively presented in Fig.~5~(b) of the main paper.\nSharing the same ASNet\\xspace architecture, each model is trained on either the FS-S\\xspace or the FS-CS\\xspace setup and evaluated on the 2-way 1-shot FS-CS\\xspace setup.\nThe results demonstrate that $\\text{ASNet}_{\\text{FS-S\\xspace}}$ is unaware of object classes and gives 
foreground predictions on any existing objects, whereas $\\text{ASNet}_{\\text{FS-CS\\xspace}}$ effectively distinguishes the object classes based on the support classes and produces clean and adequate segmentation maps.\n\n\n\n\n\n\n\\input{supp\/table\/ico50_foldwise}\n\\input{supp\/table\/fco50_foldwise}\n\\smallbreakparagraph{Fold-wise results on COCO-$\\mathbf{20^{i}}$.}\nTables~\\ref{table:ico50_foldwise} and \\ref{table:fco50_foldwise} present fold-wise performance comparison on the FS-CS\\xspace and FS-S\\xspace tasks, respectively.\nWe validate that ASNet\\xspace outperforms the competitors by large margins in both the FS-CS\\xspace and FS-S\\xspace tasks on the challenging COCO-$20^{i}$ benchmark.\n\n\n\n\n\n\\smallbreakparagraph{Numerical performances of Fig.~4 in the main paper.}\nWe report the numerical performances of the Fig.~4 in the main paper in \\tableref{table:multiway} as a reference for following research.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDealing with nonlinear phenomena has become one of the predominant issues for mechanical engineers, in the objective of virtual testing. Whether they are geometrical or related to the material behavior, nonlinearities can be treated by a combination of Newton and linear solvers. Newton algorithms can be modified, secant, quasi-Newton \\cite{crisfield1979faster,deuflhard1975relaxation,dennis1977quasi,zhang1982modified}, depending mostly on the complexity of tangent operators computation. 
If the meshed structure has a large number of degrees of freedom, linear solvers are chosen to be iterative and parallel, belonging for instance to the class of Domain Decomposition Methods \\cite{mandel1993balancing,le1994domain,rixen1999simple,farhat2001feti,gosselet2006non}.\n\nThis article focuses on the nonlinear substructuring and condensation method, which has been investigated in previous studies \\cite{cresta2007nonlinear,pebrel2008nonlinear,bordeu2009balancing,negrello2016substructured}. The substructured formulation involves a choice of interface transmission conditions, which can be either primal, dual or mixed, referring either to interface displacements, nodal interface reactions, or a linear combination of the two previous types -- i.e. Robin interface conditions. In this context, the mixed formulation has shown good efficiency \\cite{cresta2008decomposition,pebrel2009etude,hinojosa2014domain,negrello2016substructured}, mostly due to a sound choice of the parameter introduced in the linear combination of interface conditions. Being homogeneous to a stiffness, and often referred to as an \\textit{interface impedance}, this parameter can indeed be optimized, depending on the mechanical problem \\cite{lions1990schwarz}. However, computing the optimal value generally involves the storage and manipulation of global matrices, and is consequently not affordable in the framework of parallel computations. \n\nThe interface impedance, in DDM methods for structural mechanics, should model, from the point of view of one substructure, its interactions with the complement of the whole structure. In order to achieve good convergence rates without degrading computational speed, the interface impedance can generally be approximated either by \\textit{short scale} or \\textit{long scale} formulations, depending on the predominant phenomena which must be accounted for. 
In the mechanical context, for instance, a common \\textit{short scale} approximation can be built by assembling the interface stiffnesses of the neighbors \\cite{cresta2008decomposition, pebrel2009etude, hinojosa2014domain, negrello2016substructured}.\n \nHowever, filtering out long range interactions gives quite a coarse approximation of the interface impedance, and does not give an accurate representation of the environment of each substructure. A good evaluation of the remainder of the structure should indeed couple these two strategies. Starting from this consideration, we propose here a new construction process for the interface impedance, based on a ``springs in series'' modeling of the structure, which couples the \\textit{long} and \\textit{short} range interactions with the structure. The heuristic we develop is strongly influenced by the availability of the various terms involved in our approximation. \n\nThe first section of this paper introduces the reference (mechanical or thermal) problem and the notations used in the following. A succinct presentation of the nonlinear substructuring and condensation method then recalls the principles of its mixed formulation: how the interface nonlinear condensed problem is built from nonlinear local equilibriums, and the basics of the whole solving process, involving a global Newton algorithm combined with two internal solvers (parallel local Newton algorithms and a multi-scale linear preconditioned Krylov solver for the tangent interface system). In Section~\\ref{sec:new_heuristic}, the question of finding a relevant Robin parameter for mixed interface conditions is developed, mainly based on the observation that for each substructure, the optimal interface impedance is the nonlinear discretized \\textit{Dirichlet-to-Neumann} operator of its complementary part. The new heuristic is then introduced, starting from the model of two springs in series, and a possible nonlinear multi-scale interpretation is given. 
The details of the two-scale approximation can be found in subsection \\ref{ssec:two_scale}; its efficiency is evaluated in the last section on several academic numerical examples.\n\n\\section{Reference problem, notations}\n\n\\subsection{Global nonlinear problem}\n\nWe consider here a nonlinear partial differential equation on a domain $\\Omega$, representative of a structural mechanical or thermal problem, with Dirichlet conditions on the part $\\partial \\Omega_u \\neq \\varnothing$ of its boundary, and Neumann conditions on the complementary part $\\partial \\Omega_F$. After discretization with the Finite Element method, the problem to be solved reads:\n\\begin{equation}\\label{eq:nl_problem}\nf_{int}(u) + f_{ext} = 0\n\\end{equation}\nVector $f_{ext}$ takes into account boundary conditions (Dirichlet or Neumann) and dead loads, while operator $f_{int}$ refers to the discretization of the homogeneous partial differential equation.\n\n\\begin{remark} In linear elasticity, under the small perturbations hypothesis, one has:\n\\begin{equation*}\nf_{int}(u) = -Ku\n\\end{equation*}\nwith $K$ the stiffness matrix of the structure.\n\\end{remark}\n\n\\subsection{Substructuring}\n\nClassical DDM notations will be used -- see figure \\ref{fig:notations}: the global domain $\\Omega$ is partitioned into $N_s$ subdomains $\\Omega\\ensuremath{^{(s)}}$. 
For each subdomain, a trace operator $t\\ensuremath{^{(s)}}$ restricts local quantities $x\\ensuremath{^{(s)}}$ defined on $\\Omega\\ensuremath{^{(s)}}$ to boundary quantities $x\\ensuremath{^{(s)}}_b$ defined on $\\Gamma\\ensuremath{^{(s)}} \\equiv \\partial \\Omega\\ensuremath{^{(s)}} \\backslash \\partial \\Omega$:\n\\begin{equation*}\nx\\ensuremath{^{(s)}}_b = t\\ensuremath{^{(s)}} x\\ensuremath{^{(s)}} = x\\ensuremath{^{(s)}}_{\\vert \\Gamma\\ensuremath{^{(s)}}}\n\\end{equation*}\nQuantities defined on internal nodes (belonging to $\\Omega\\ensuremath{^{(s)}} \\backslash \\Gamma\\ensuremath{^{(s)}}$) are written with subscript $i$: $x\\ensuremath{^{(s)}}_i$.\n\nThe global primal (resp. dual) interface is noted $\\Gamma_A$ (resp. $\\Gamma_B$).\nPrimal assembly operators $A\\ensuremath{^{(s)}}$ are defined as canonical prolongation operators from $\\Gamma\\ensuremath{^{(s)}}$ to $\\Gamma_A$: $A\\ensuremath{^{(s)}}$ is a full-ranked boolean matrix of size $n_A \\times n_b\\ensuremath{^{(s)}}$ -- where $n_A$ is the size of the global primal interface $\\Gamma_A$ and $n_b\\ensuremath{^{(s)}}$ the number of boundary degrees of freedom belonging to subdomain $\\Omega\\ensuremath{^{(s)}}$. 
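As a minimal numerical illustration of these definitions (not taken from the paper: the two-subdomain geometry, sizes, and local numberings below are assumptions), the trace operators $t^{(s)}$ and primal assembly operators $A^{(s)}$ can be written as boolean matrices:

```python
import numpy as np

# Toy two-subdomain setup: each subdomain has 3 local dofs, and the last dof
# of Omega(1) coincides with the first dof of Omega(2) on the interface.
n1, n2 = 3, 3                # local dofs of Omega(1) and Omega(2)
bnd1, bnd2 = [2], [0]        # local interface dofs on Gamma(s)

def trace(n, bnd):
    """Boolean trace operator t(s): x_b = t(s) x restricts x to Gamma(s)."""
    t = np.zeros((len(bnd), n))
    t[np.arange(len(bnd)), bnd] = 1.0
    return t

t1, t2 = trace(n1, bnd1), trace(n2, bnd2)

# Primal assembly operators A(s): full-ranked boolean prolongations from the
# local interface Gamma(s) to the global primal interface Gamma_A (n_A = 1).
A1 = np.ones((1, 1))
A2 = np.ones((1, 1))

x1 = np.array([0.0, 0.5, 1.0])   # a local field on Omega(1)
x2 = np.array([1.0, 0.8, 0.3])   # a local field on Omega(2)

# boundary restrictions x_b(s) = t(s) x(s)
assert np.allclose(t1 @ x1, [1.0]) and np.allclose(t2 @ x2, [1.0])
# these two fields agree on the interface: A(1) x_b(1) - A(2) x_b(2) = 0
assert np.allclose(A1 @ (t1 @ x1) - A2 @ (t2 @ x2), 0.0)
```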
\n\\begin{figure}[ht]\n\\begin{center}\n\\hspace{-0.5cm}\\subfloat[Subdomains]{\\includegraphics[width=4cm]{graph_2a.pdf}}\n\\subfloat[Local interface]{\\includegraphics[width=4cm]{graph_2b.pdf}}\n\\subfloat[Interface nodes]{\\includegraphics[width=3.5cm]{graph_2c.pdf}}\n\\subfloat[Interface connections]{\\includegraphics[width=3.5cm]{graph_2d.pdf}}\n\n\\hspace{-0.75cm}\\subfloat{\\includegraphics[width=14.5cm]{matrices.pdf}}\n\\end{center}\n\\caption{Local numberings, interface numberings, trace and assembly operators}\n\\label{fig:notations}\n\\end{figure}\n\nDiamond notations are used in the following: for a domain $\\Omega$ substructured into $N_s$ subdomains $\\left( \\Omega\\ensuremath{^{(s)}} \\right)$, concatenated local variables are superscripted $\\ensuremath{^{\\diamondvert}}$, $\\ensuremath{^{\\diamondminus}}$ or $\\ensuremath{^{\\diamondbackslash}}$, depending on the alignment.\n\\begin{equation*}\n\\begin{aligned}\nx\\ensuremath{^{\\diamondvert}} = & \\begin{pmatrix} \nx^{(1)} \\\\\n\\vdots \\\\\n x^{(N_{s})} \\end{pmatrix},\\qquad x\\ensuremath{^{\\diamondminus}} = \\begin{pmatrix}\n{x^{(1)}} \\, \\ldots \\, {x^{(N_{s})}} \\end{pmatrix},\\qquad\n M\\ensuremath{^{\\diamondbackslash}} = \\begin{pmatrix} \nM^{(1)} & 0 & 0 \\\\\n0 & \\ddots & 0 \\\\\n0 & 0 & M^{(N_{s})} \\\\\n\\end{pmatrix} \\\\\n\\end{aligned}\n\\end{equation*}\nAny matrix $B\\ensuremath{^{(s)}}$ satisfying $\\text{Range}(B\\ensuremath{^{\\diamondminus^T}}) = \\text{Ker}(A\\ensuremath{^{\\diamondminus}})$ can serve as a dual assembly operator -- see figure~\\ref{fig:notations} for the most classical choice. 
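The compatibility condition between the primal and dual assembly operators can be checked numerically on a small example; the two-subdomain setup with a two-node interface and the signed boolean choice of B below are illustrative assumptions, corresponding to the classical construction:

```python
import numpy as np

# Two subdomains sharing a two-node interface (sizes are assumptions).
I = np.eye(2)
A = np.hstack([I, I])    # primal assembly: both local interfaces map to Gamma_A
B = np.hstack([I, -I])   # classical signed choice: B u_b = interface jump

# Range(B^T) is contained in Ker(A): A B^T = 0
assert np.allclose(A @ B.T, 0.0)
# and the dimensions agree, so Range(B^T) = Ker(A)
assert np.linalg.matrix_rank(B.T) == A.shape[1] - np.linalg.matrix_rank(A)

# continuity B u_b = 0 holds exactly when the two local traces coincide
u_cont = np.concatenate([[1.0, 2.0], [1.0, 2.0]])
assert np.allclose(B @ u_cont, 0.0)
```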
\n\n\n\\section{Nonlinear substructuring and condensation: mixed formulation} \n\nThis section recalls the principle of nonlinear substructuring and condensation, which is explained in detail in \\cite{negrello2016substructured}.\n\n\\subsection{Formulation of the condensed problem}\n\n The nonlinear problem \\eqref{eq:nl_problem} is decomposed into $N_s$ nonlinear subproblems:\n\\begin{equation*}\nf_{int}\\ensuremath{^{\\diamondvert}}(u\\ensuremath{^{\\diamondvert}}) + f_{ext}\\ensuremath{^{\\diamondvert}} + t\\ensuremath{^{\\diamondbackslash^T}} \\lambda\\ensuremath{^{\\diamondvert}}_b = 0\\ensuremath{^{\\diamondvert}}\n\\end{equation*}\nwhere $\\lambda\\ensuremath{^{(s)}}_b$ is the unknown local interface nodal reaction, introduced to represent the interactions of the subdomain $\\Omega\\ensuremath{^{(s)}}$ with the neighboring subdomains.\n\nThe transmission conditions hold:\n\\begin{equation*}\n\\left\\lbrace \\begin{aligned}\nB\\ensuremath{^{\\diamondminus}} u\\ensuremath{^{\\diamondvert}}_b = 0 \\\\\nA\\ensuremath{^{\\diamondminus}} \\lambda\\ensuremath{^{\\diamondvert}}_b = 0\n\\end{aligned} \\right.\n\\end{equation*}\n\nThe mixed formulation consists in introducing a new interface unknown:\n\\begin{equation*}\n\\mu\\ensuremath{^{\\diamondvert}}_b = \\lambda\\ensuremath{^{\\diamondvert}}_b + Q\\ensuremath{^{\\diamondbackslash}}_b u\\ensuremath{^{\\diamondvert}}_b\n\\end{equation*}\nwhere the matrix $Q\\ensuremath{^{\\diamondbackslash}}_b$ is a parameter of the method. It has to be symmetric positive definite, and can be interpreted as a stiffness added to the interface, per subdomain: $Q\\ensuremath{^{\\diamondbackslash}}_b$ is called the \\textit{interface impedance}. 
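To see how the mixed unknown encodes both transmission conditions at once, here is a minimal scalar sketch (the single-dof interface, impedance values, and field values are illustrative assumptions, not from the paper):

```python
import numpy as np

# Scalar interface between two subdomains. Transmission conditions:
# continuity u1 = u2 and balance lam1 + lam2 = 0.
Q1, Q2 = 3.0, 5.0            # SPD interface impedances (positive scalars here)
u, lam = 2.0, 7.0            # interface displacement and reaction on side 1
u1 = u2 = u
lam1, lam2 = lam, -lam       # balanced reactions

# Mixed unknowns mu(s) = lam(s) + Q(s) u(s)
mu1 = lam1 + Q1 * u1
mu2 = lam2 + Q2 * u2

# Both transmission conditions can be recovered from (mu1, mu2) alone:
u_rec = (mu1 + mu2) / (Q1 + Q2)     # continuity-consistent displacement
lam1_rec = mu1 - Q1 * u_rec
lam2_rec = mu2 - Q2 * u_rec
assert np.isclose(u_rec, u)                  # continuity recovered
assert np.isclose(lam1_rec + lam2_rec, 0.0)  # balance recovered
```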
\\medskip\n\nLocal equilibriums can then be reformulated as:\n\\begin{equation}\\label{eq:local_eq}\nf_{int}\\ensuremath{^{\\diamondvert}}(u\\ensuremath{^{\\diamondvert}}) + f_{ext}\\ensuremath{^{\\diamondvert}} + t\\ensuremath{^{\\diamondbackslash^T}} \\left( \\mu_b\\ensuremath{^{\\diamondvert}} - Q\\ensuremath{^{\\diamondbackslash}}_b u\\ensuremath{^{\\diamondvert}}_b \\right) = 0\n\\end{equation}\nWe assume the existence, at least locally, of a nonlinear mixed analogue $H\\ensuremath{^{\\diamondvert}}_{nl}$ of the Schur complement (i.e., a discrete Robin-to-Dirichlet operator):\n\\begin{equation}\\label{eq:local_eq_H}\nu\\ensuremath{^{\\diamondvert}}_b = H\\ensuremath{^{\\diamondvert}}_{nl}\\left( \\mu\\ensuremath{^{\\diamondvert}}_b; Q\\ensuremath{^{\\diamondbackslash}}_b, f\\ensuremath{^{\\diamondvert}}_{ext} \\right)\n\\end{equation}\n\\begin{prop} The tangent operator~$H\\ensuremath{^{\\diamondbackslash}}_{t}$ to~$H\\ensuremath{^{\\diamondvert}}_{nl}$ can be explicitly computed as a function of the tangent stiffness~$K\\ensuremath{^{\\diamondbackslash}}_t$:\n\\begin{equation*}\n\\begin{aligned}\n H\\ensuremath{^{\\diamondbackslash}}_t = \\frac{\\partial H\\ensuremath{^{\\diamondvert}}_{nl}}{\\partial \\mu\\ensuremath{^{\\diamondvert}}_b} = t\\ensuremath{^{\\diamondbackslash}} \\left( K_t\\ensuremath{^{\\diamondbackslash}} + t\\ensuremath{^{\\diamondbackslash^T}} Q\\ensuremath{^{\\diamondbackslash}}_b t\\ensuremath{^{\\diamondbackslash}} \\right)^{-1} t\\ensuremath{^{\\diamondbackslash^T}} \\\\\n\\end{aligned}\n\\end{equation*}\nMoreover, in the linear case, the Robin-to-Dirichlet operator written $H\\ensuremath{^{\\diamondvert}}_{l}$ is affine, with the constant term associated with external forces:\n\\begin{equation*}\n\\begin{aligned}\n& H\\ensuremath{^{\\diamondvert}}_l \\left( \\mu\\ensuremath{^{\\diamondvert}}_b; Q\\ensuremath{^{\\diamondbackslash}}_b, f\\ensuremath{^{\\diamondvert}}_{ext} \\right) = H\\ensuremath{^{\\diamondbackslash}}_t 
\\mu\\ensuremath{^{\\diamondvert}}_b + b\\ensuremath{^{\\diamondvert}}_m \\\\\n\\text{with }& b\\ensuremath{^{\\diamondvert}}_m = t\\ensuremath{^{\\diamondbackslash}} \\left( K\\ensuremath{^{\\diamondbackslash}} + t\\ensuremath{^{\\diamondbackslash^T}} Q\\ensuremath{^{\\diamondbackslash}}_b t\\ensuremath{^{\\diamondbackslash}} \\right)^{-1} f\\ensuremath{^{\\diamondvert}}_{ext}\n\\end{aligned}\n\\end{equation*}\n\\end{prop}\n\n\\begin{remark} \nFor the upcoming discussion, we will make use of the nonlinear primal Schur complement (Dirichlet-to-Neumann, noted $S_{nl}\\ensuremath{^{(s)}}$) which is such that $\\lambda_b\\ensuremath{^{(s)}}=S_{nl}\\ensuremath{^{(s)}}(u_b\\ensuremath{^{(s)}};f_{ext}\\ensuremath{^{(s)}})$. The tangent primal Schur complement can be computed from the tangent stiffness matrix:\n\\begin{equation*}\nS_t\\ensuremath{^{(s)}}=K\\ensuremath{^{(s)}}_{t_{bb}}-K\\ensuremath{^{(s)}}_{t_{bi}}{K\\ensuremath{^{(s)}}_{t_{ii}}}^{-1}K\\ensuremath{^{(s)}}_{t_{ib}}\n\\end{equation*}\nand we have $H_t\\ensuremath{^{\\diamondbackslash}} = (S\\ensuremath{^{\\diamondbackslash}}_t + Q\\ensuremath{^{\\diamondbackslash}}_b)^{-1}$. Note that the tangent dual Schur complement (Neumann-to-Dirichlet) can be written as ${S_t\\ensuremath{^{(s)}}}^{\\dagger} = t\\ensuremath{^{(s)}}{K_t\\ensuremath{^{(s)}}}^{\\dagger}{t\\ensuremath{^{(s)}}}^T$. 
In the linear case, the primal Schur complement is an affine operator with the constant term due to the external load:\n\\begin{equation*}\n\\begin{aligned}\n& S\\ensuremath{^{\\diamondvert}}_l \\left( u\\ensuremath{^{\\diamondvert}}_b; f\\ensuremath{^{\\diamondvert}}_{ext} \\right) = S\\ensuremath{^{\\diamondbackslash}}_t u\\ensuremath{^{\\diamondvert}}_b + b\\ensuremath{^{\\diamondvert}}_p \\\\\n\\text{with }& b\\ensuremath{^{\\diamondvert}}_p = f_{ext_b}\\ensuremath{^{\\diamondvert}} - K\\ensuremath{^{\\diamondbackslash}}_{t_{bi}}{K\\ensuremath{^{\\diamondbackslash}}_{t_{ii}}}^{-1}f_{ext_i}\\ensuremath{^{\\diamondvert}} = (S_t\\ensuremath{^{\\diamondbackslash}}+Q_b\\ensuremath{^{\\diamondbackslash}})b_m\\ensuremath{^{\\diamondvert}}\n\\end{aligned}\n\\end{equation*}\n\\end{remark}\n\nThanks to the complementarity between balanced and continuous quantities, and to the symmetric positive definiteness of $Q\\ensuremath{^{\\diamondbackslash}}_b$, any boundary displacement (defined independently on neighboring subdomains) can be split in a unique way into a continuous field belonging to $\\text{Ker}\\left( B\\ensuremath{^{\\diamondminus}} \\right)$ and a balanced field belonging to $\\text{Ker}\\left( A\\ensuremath{^{\\diamondminus}} Q\\ensuremath{^{\\diamondbackslash}}_b \\right)$. 
Thus, the transmission conditions can be written in terms of $\\mu\\ensuremath{^{\\diamondvert}}_b$ and $u\\ensuremath{^{\\diamondvert}}_b$, and gathered in a single equation:\n\\begin{equation*}\nA\\ensuremath{^{\\diamondminus^T}} \\left( A\\ensuremath{^{\\diamondminus}} Q\\ensuremath{^{\\diamondbackslash}}_b A\\ensuremath{^{\\diamondminus^T}} \\right)^{-1} A\\ensuremath{^{\\diamondminus}} \\mu\\ensuremath{^{\\diamondvert}}_b - u\\ensuremath{^{\\diamondvert}}_b = 0\n\\end{equation*}\n\nFinally, the interface condensed problem reads:\n\\begin{equation}\\label{eq:interf_pb}\nR\\ensuremath{^{\\diamondvert}}_b(\\mu_b\\ensuremath{^{\\diamondvert}}) \\equiv A\\ensuremath{^{\\diamondminus^T}} \\left( A\\ensuremath{^{\\diamondminus}} Q\\ensuremath{^{\\diamondbackslash}}_b A\\ensuremath{^{\\diamondminus^T}} \\right)^{-1} A\\ensuremath{^{\\diamondminus}} \\mu\\ensuremath{^{\\diamondvert}}_b - H\\ensuremath{^{\\diamondvert}}_{nl} \\left( \\mu\\ensuremath{^{\\diamondvert}}_b; Q\\ensuremath{^{\\diamondbackslash}}_b, f\\ensuremath{^{\\diamondvert}}_{ext} \\right) = 0\n\\end{equation} \n\n\\subsection{Solving strategy}\n\n\\subsubsection{Newton-Krylov algorithm}\n\nNonlinear substructuring and condensation results in applying a global Newton algorithm to the interface problem \\eqref{eq:interf_pb} instead of problem \\eqref{eq:nl_problem}. 
Three steps are then involved in the solving process:\n\\begin{enumerate}[label=(\\roman*)]\n\\item Local solutions of nonlinear equilibriums \\eqref{eq:local_eq} are computed by applying local Newton algorithms.\n\\item The interface mixed residual is assembled.\n\\item The interface tangent problem is solved by a DDM solver.\n\\end{enumerate}\nThe global Newton algorithm can be written, with the previous notations:\n\\begin{equation*}\n\\left\\lbrace \\begin{aligned}\n&\\frac{\\partial R\\ensuremath{^{\\diamondvert}}_b}{\\partial \\mu_b\\ensuremath{^{\\diamondvert}}} d\\mu\\ensuremath{^{\\diamondvert}}_b + R\\ensuremath{^{\\diamondvert}}_b = 0 \\\\\n&\\mu\\ensuremath{^{\\diamondvert}}_b += d\\mu\\ensuremath{^{\\diamondvert}}_b\n\\end{aligned} \\right.\n\\end{equation*}\nThe tangent problem then reads:\n\\begin{equation}\\label{eq:tg_pb}\n\\left( A\\ensuremath{^{\\diamondminus^T}} \\left( A\\ensuremath{^{\\diamondminus}} Q\\ensuremath{^{\\diamondbackslash}}_b A\\ensuremath{^{\\diamondminus^T}} \\right)^{-1} A\\ensuremath{^{\\diamondminus}} - H\\ensuremath{^{\\diamondbackslash}}_t \\right) d\\mu\\ensuremath{^{\\diamondvert}}_b = H\\ensuremath{^{\\diamondvert}}_{nl} \\left( \\mu\\ensuremath{^{\\diamondvert}}_b, Q\\ensuremath{^{\\diamondbackslash}}_b, f\\ensuremath{^{\\diamondvert}}_{ext} \\right) - A\\ensuremath{^{\\diamondminus^T}} \\left( A\\ensuremath{^{\\diamondminus}} Q\\ensuremath{^{\\diamondbackslash}}_b A\\ensuremath{^{\\diamondminus^T}} \\right)^{-1} A\\ensuremath{^{\\diamondminus}} \\mu\\ensuremath{^{\\diamondvert}}_b\n\\end{equation}\n\n\\subsubsection{Alternative formulation}\\label{sec:altern_formul}\nThe tangent problem \\eqref{eq:tg_pb} could be treated by a FETI-2LM solver \\cite{roux2009feti}. 
An equivalent formulation of problem \\eqref{eq:interf_pb} is also possible, where the boundary interface unknown $\\mu\\ensuremath{^{\\diamondvert}}_b$ is replaced by a couple of interface unknowns $\\left( f_B, v_A \\right)$, $f_B$ being a nodal reaction and $v_A$ an interface displacement. The couple $\\left( f_B, v_A \\right)$ is made unique by imposing the following three conditions:\n\\begin{itemize}[label=$\\circ$]\n\\item $f_B$ is balanced\n\\item $v_A$ is continuous \n\\item $\\mu\\ensuremath{^{\\diamondvert}}_b = B\\ensuremath{^{\\diamondminus^T}} f_B + Q\\ensuremath{^{\\diamondbackslash}}_b A\\ensuremath{^{\\diamondminus^T}} v_A$\n\\end{itemize}\nWith this formulation, the tangent problem is expressed by:\n\\begin{equation}\\label{eq:pb_tg2}\n\\begin{aligned}\n&\\left(A\\ensuremath{^{\\diamondminus}} S\\ensuremath{^{\\diamondbackslash}}_t A\\ensuremath{^{\\diamondminus^T}}\\right) dv_A = A\\ensuremath{^{\\diamondminus}} \\left( Q\\ensuremath{^{\\diamondbackslash}}_b + S\\ensuremath{^{\\diamondbackslash}}_t \\right) b\\ensuremath{^{\\diamondvert}}_m \\\\\n\\text{with }&b\\ensuremath{^{\\diamondvert}}_m = H\\ensuremath{^{\\diamondvert}}_{nl} \\left( \\mu\\ensuremath{^{\\diamondvert}}_b; Q\\ensuremath{^{\\diamondbackslash}}_b, f\\ensuremath{^{\\diamondvert}}_{ext} \\right) - A\\ensuremath{^{\\diamondminus^T}} v_A\n\\end{aligned}\n\\end{equation}\nEquation \\eqref{eq:pb_tg2} has the exact form of a BDD \\cite{mandel1993balancing} problem. It can thus conveniently be solved with the usual preconditioner and coarse problem. 
The following quantities can then be deduced:\n\\begin{equation}\\label{eq:recup_mu_u_lam}\n\\begin{aligned}\n& d\\mu_b\\ensuremath{^{\\diamondvert}} = S_t\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} dv_A - A\\ensuremath{^{\\diamondminus}} \\left( Q\\ensuremath{^{\\diamondbackslash}}_b + S_t\\ensuremath{^{\\diamondbackslash}} \\right) b_m\\ensuremath{^{\\diamondvert}} \\\\\n& du\\ensuremath{^{\\diamondvert}} = \\left( K_t\\ensuremath{^{\\diamondbackslash}} + t\\ensuremath{^{\\diamondbackslash^T}} Q_b\\ensuremath{^{\\diamondbackslash}} t\\ensuremath{^{\\diamondbackslash}} \\right)^{-1} t\\ensuremath{^{\\diamondbackslash^T}} \\left( A\\ensuremath{^{\\diamondminus}} \\left[ Q\\ensuremath{^{\\diamondbackslash}}_b + S_t\\ensuremath{^{\\diamondbackslash}} \\right] b_m\\ensuremath{^{\\diamondvert}} + d\\mu_b\\ensuremath{^{\\diamondvert}} \\right) \\\\\n& du_b\\ensuremath{^{\\diamondvert}} = t\\ensuremath{^{\\diamondbackslash}} du\\ensuremath{^{\\diamondvert}} \\\\\n& d\\lambda_b\\ensuremath{^{\\diamondvert}} = S_t\\ensuremath{^{\\diamondbackslash}} du_b\\ensuremath{^{\\diamondvert}} - A\\ensuremath{^{\\diamondminus}} \\left( Q\\ensuremath{^{\\diamondbackslash}}_b + S_t\\ensuremath{^{\\diamondbackslash}} \\right) b_m\\ensuremath{^{\\diamondvert}} = d\\mu_b\\ensuremath{^{\\diamondvert}} - Q\\ensuremath{^{\\diamondbackslash}}_b du_b\\ensuremath{^{\\diamondvert}}\n\\end{aligned}\n\\end{equation}\n\n\n\\subsubsection{Typical algorithm}\n\nAlgorithm~\\ref{alg:robin-bdd} sums up the main steps of the method with the mixed nonlinear local problems and primal tangent solver. 
For simplicity reasons, only one load increment was considered.\n\nAs can be seen in this algorithm, several convergence thresholds are needed:\n\\begin{itemize}\n\\item Global convergence criterion $\\varepsilon_{NG}$: since our approach is mixed, the criterion not only controls the quality of the subdomains balance (as in a standard Newton approach) but also the continuity of the interface displacement which is measured by an appropriate norm written $\\|\\cdot\\|_B$. \n\\item Local nonlinear thresholds $\\varepsilon_{NL}\\ensuremath{^{\\diamondvert}}$, which are associated with the Newton processes carried out independently on subdomains.\n\\item The global linear threshold of the domain decomposition (Krylov) solver $\\varepsilon_{K}$ (here BDD). \n\\end{itemize}\nThe other parameters of the method are the initializations of the various iterative solvers and the choice of the impedance matrices $Q_b\\ensuremath{^{\\diamondbackslash}}$. \n\n\\begin{algorithm2e}[!ht]\n\\DontPrintSemicolon\n\\KwSty{Define:}\\;\n$r_{nl}^{m\\diamondvert}(u\\ensuremath{^{\\diamondvert}},\\mu_b\\ensuremath{^{\\diamondvert}}) = f_{int}\\ensuremath{^{\\diamondvert}}(u\\ensuremath{^{\\diamondvert}})-t\\ensuremath{^{\\diamondbackslash^T}} Q_b\\ensuremath{^{\\diamondbackslash}} t\\ensuremath{^{\\diamondbackslash}} u\\ensuremath{^{\\diamondvert}} + t\\ensuremath{^{\\diamondbackslash^T}} \\mu_b\\ensuremath{^{\\diamondvert}}+ f\\ensuremath{^{\\diamondvert}}_{ext}$\\;\n\\BlankLine\n\\KwSty{Initialization:}\\;\n$(u_0\\ensuremath{^{\\diamondvert}},\\lambda_{b_0}\\ensuremath{^{\\diamondvert}})$ such that $B\\ensuremath{^{\\diamondminus}} t\\ensuremath{^{\\diamondbackslash}} u_0\\ensuremath{^{\\diamondvert}}=0$ and $A\\ensuremath{^{\\diamondminus}} \\lambda_{b_0}\\ensuremath{^{\\diamondvert}}=0$\\;\n\\KwSty{Set} $k=0$\\;\n\\KwSty{Define} $\\mu_{b_k}\\ensuremath{^{\\diamondvert}}= \\lambda_{b_k}\\ensuremath{^{\\diamondvert}} + Q_b\\ensuremath{^{\\diamondbackslash}} 
t\\ensuremath{^{\\diamondbackslash}} u_k\\ensuremath{^{\\diamondvert}}$\\;\n\\While{$\\| r_{nl}^{m\\diamondvert}(u_k\\ensuremath{^{\\diamondvert}},\\mu_{b_k}\\ensuremath{^{\\diamondvert}})\\|+ \\| B\\ensuremath{^{\\diamondminus}} t\\ensuremath{^{\\diamondbackslash}} u\\ensuremath{^{\\diamondvert}}\\|_{B}>\\varepsilon_{NG}$ }{%\n \\KwSty{Local nonlinear step}: \\\n \\KwSty{Set} $u_{k,0}\\ensuremath{^{\\diamondvert}}=u_{k}\\ensuremath{^{\\diamondvert}}$ and $j=0$\\; \n \\While{$\\| r_{nl}^{m\\diamondvert}(u_{k,j}\\ensuremath{^{\\diamondvert}},\\mu_{b_k}\\ensuremath{^{\\diamondvert}})\\|>\\varepsilon\\ensuremath{^{\\diamondvert}}_{NL}$}{\n $u_{k,j+1}\\ensuremath{^{\\diamondvert}}=u_{k,j}\\ensuremath{^{\\diamondvert}}-\\left(K_{t_{k,j}}\\ensuremath{^{\\diamondbackslash}} + t\\ensuremath{^{\\diamondbackslash^T}} Q_b\\ensuremath{^{\\diamondbackslash}} t\\ensuremath{^{\\diamondbackslash}} \\right)^{-1} r_{nl}^{m\\diamondvert}( u_{k,j}\\ensuremath{^{\\diamondvert}}, \\mu_{b_k}\\ensuremath{^{\\diamondvert}})$ \\;\n \\KwSty{Set} $j=j+1$\n }\n \\KwSty{Linear right-hand side}:\\;\n $b_{m_k}\\ensuremath{^{\\diamondvert}} = A\\ensuremath{^{\\diamondminus^T}} \\left( A\\ensuremath{^{\\diamondminus}} Q_b\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} \\right)^{-1} A\\ensuremath{^{\\diamondminus}} \\mu_{b_k}\\ensuremath{^{\\diamondvert}} -t\\ensuremath{^{\\diamondbackslash}} u_{k,j}\\ensuremath{^{\\diamondvert}} $\\;\n $b_{p_k}\\ensuremath{^{\\diamondvert}} = (S_{t_{k,j}}\\ensuremath{^{\\diamondbackslash}} + Q_b\\ensuremath{^{\\diamondbackslash}})b_{m_k}\\ensuremath{^{\\diamondvert}} $\\;\n \\KwSty{Global linear step}:\\;\n \\KwSty{Set} $dv_A^{0}=0$ and $i=0$\\; \n \\While{$\\|b\\ensuremath{^{\\diamondvert}}_{p_k}-\\left( A\\ensuremath{^{\\diamondminus}}\\; S_{t_{k,j}}\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} \\right) dv^i\\|>\\varepsilon_{K}$}{\n Make BDD iterations (index $i$)\n }\n \\KwSty{Set} 
$u_{k+1}\\ensuremath{^{\\diamondvert}} = u_k\\ensuremath{^{\\diamondvert}} + du_k^{i\\diamondvert}$ and $ \\lambda_{b_{k+1}}\\ensuremath{^{\\diamondvert}} = \\lambda_{b_k}\\ensuremath{^{\\diamondvert}} + d\\lambda_{b_k}^{i\\diamondvert}$ using \\eqref{eq:recup_mu_u_lam}\\;\n \\KwSty{Set} $k=k+1$\\;\n}\n\\caption{Mixed nonlinear approach with BDD tangent solver}\n\\label{alg:robin-bdd}\n\\end{algorithm2e}\n\n\n\n\\section{New heuristic for the interface impedance}\\label{sec:new_heuristic}\n\n\\subsection{Motivation}\\label{ssec:motivation}\n\n\nThe parameter $Q\\ensuremath{^{\\diamondbackslash}}_b$ is involved throughout the solution process, and special care should be paid to its computation. \n\n\\medskip\n\nTo fix ideas, let us consider the Robin-Robin algorithm with stationary iteration, in the nonlinear case, with nonlinear impedance. Starting from the initial guess $\\mu_b\\ensuremath{^{\\diamondvert}}=0$, we have the iterations of Algorithm~\\ref{alg:robin-robin}.\n\n\\SetNlSty{texttt}{(}{)}\n\\begin{algorithm2e}[ht]\n\t\\DontPrintSemicolon\n\\nl Parallel solve: $S_{nl}\\ensuremath{^{\\diamondvert}}(u_b\\ensuremath{^{\\diamondvert}};f\\ensuremath{^{\\diamondvert}}_{ext})+Q_{nl}\\ensuremath{^{\\diamondvert}}(u_b\\ensuremath{^{\\diamondvert}})= \\mu_b\\ensuremath{^{\\diamondvert}}$\\;\n\\nl Parallel post-processing: $\\lambda_b\\ensuremath{^{\\diamondvert}} = S_{nl}\\ensuremath{^{\\diamondvert}}(u_b\\ensuremath{^{\\diamondvert}};f\\ensuremath{^{\\diamondvert}}_{ext}) = \\mu_b\\ensuremath{^{\\diamondvert}}-Q_{nl}\\ensuremath{^{\\diamondvert}}(u_b\\ensuremath{^{\\diamondvert}})$\\;\n\\nl Assembly: $\\bar{u}_b\\ensuremath{^{\\diamondvert}} = A\\ensuremath{^{\\diamondminus^T}} \\tilde{A}\\ensuremath{^{\\diamondminus}} u_b\\ensuremath{^{\\diamondvert}}$, and $\\bar{\\lambda}_b\\ensuremath{^{\\diamondvert}} = \\left( I - \\tilde{A}\\ensuremath{^{\\diamondminus^T}} A\\ensuremath{^{\\diamondminus}} \\right) \\lambda_b\\ensuremath{^{\\diamondvert}}$\\;\n\\nl 
Parallel update of interface unknown: $\\mu_b\\ensuremath{^{\\diamondvert}} = Q_{nl}\\ensuremath{^{\\diamondvert}}( \\bar{u}_b\\ensuremath{^{\\diamondvert}}) + \\bar{\\lambda}_b\\ensuremath{^{\\diamondvert}}$\\;\n\\caption{Robin-Robin stationary iteration}\n\\label{alg:robin-robin}\n\\end{algorithm2e}\t\n\n\\medskip\n\nAssembled quantities $\\bar{u}_b\\ensuremath{^{\\diamondvert}}$ and $\\bar{\\lambda}_b\\ensuremath{^{\\diamondvert}}$ are defined such that the interface conditions can be written as:\n\\begin{equation}\\label{eq:new_interf_cond}\nu_b\\ensuremath{^{\\diamondvert}} = \\bar{u}_b\\ensuremath{^{\\diamondvert}} \\text{ and }\n\\lambda_b\\ensuremath{^{\\diamondvert}} = \\bar{\\lambda}_b\\ensuremath{^{\\diamondvert}}\n\\end{equation}\nand we assume the nonlinear local operators $Q_{nl}\\ensuremath{^{\\diamondvert}}$ to be such that the equivalence between \\eqref{eq:new_interf_cond} and the following equation is ensured:\n\\begin{equation}\\label{eq:intQnl}\n\\left(Q_{nl}\\ensuremath{^{\\diamondvert}}(u_b\\ensuremath{^{\\diamondvert}}) - Q_{nl}\\ensuremath{^{\\diamondvert}}(\\bar{u}_b\\ensuremath{^{\\diamondvert}})\\right) + \\left(\\lambda_b\\ensuremath{^{\\diamondvert}}-\\bar{\\lambda}_b\\ensuremath{^{\\diamondvert}}\\right)=0\n\\end{equation}\n\n\nConsidering a given subdomain $\\Omega\\ensuremath{^{(j)}}$, and writing $\\Omega\\ensuremath{^{(\\overline{j})}}$ its complement, we can condense the whole problem on its interface; the boundary displacement $u_b\\ensuremath{^{(j)}}$ must then be the solution to:\n\\begin{equation*}\nS_{nl}\\ensuremath{^{(j)}}(u_b\\ensuremath{^{(j)}};f_{ext}\\ensuremath{^{(j)}}) + S_{nl}\\ensuremath{^{(\\overline{j})}}(u_b\\ensuremath{^{(j)}};f_{ext}\\ensuremath{^{(\\overline{j})}}) =0\n\\end{equation*}\nComparing this equation with line~(1) of algorithm~\\ref{alg:robin-robin}, one can see that, starting from a zero initial guess $\\mu_b\\ensuremath{^{(j)}}=0$, the method converges in only one iteration with 
$Q_{nl}\\ensuremath{^{(j)}} = S_{nl}\\ensuremath{^{(\\overline{j})}}$: the ideal impedance is the Dirichlet-to-Neumann operator of the complement. \\medskip\n\nTo discuss the problem further, we now consider the linear case, and we recall that we have: $S_{l}\\ensuremath{^{(\\overline{j})}}(u_b\\ensuremath{^{(j)}}) = S_{t}\\ensuremath{^{(\\overline{j})}} u_b\\ensuremath{^{(j)}} + b_p\\ensuremath{^{(\\overline{j})}}$. In that case, the optimal impedance is thus an affine operator whose linear part (which we will write $Q_b\\ensuremath{^{(j)}}$ in agreement with the development of the previous section) accounts for the stiffness of the complement domain, whereas the constant part accounts for the external load on the complement part. Note that another point of view is to use a strictly linear impedance together with a good (non-zero) initialization for $\\mu_b\\ensuremath{^{\\diamondvert}}$ which should account for the external load on the complement domain.\n\nThe construction of a good constant part for the impedance is usually realized, in the linear case, by the introduction of a well-chosen coarse problem; this is discussed in subsection~\\ref{ssec:multi}. \nIn the nonlinear case, building a coarse problem which would connect all subdomains during their inner Newton loop seems complex; moreover, it would break the independence of the local computations. It seems simpler to rely on a good initialization in order to propagate the right-hand side: this can be done at low cost by easing accuracy constraints in the first inner Newton loop (adapting $\\varepsilon_{NL}$), and then using the multiscale solver of the global linear step. Note that in \\cite{klawonn2017new} a coarse problem is built for nonlinear versions of FETIDP and BDDC but, again, it mainly serves to find a good initialization before independent parallel nonlinear solves.\n\nWe now focus on the construction of $Q_b\\ensuremath{^{\\diamondbackslash}}$, i.e. the linear part of the impedance. 
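The ideal-impedance property can be checked on a toy linear case (a clamped 1D bar of unit springs, two subdomains sharing one node; hypothetical data, a sketch rather than the paper's implementation): taking the impedance equal to the Schur complement of the complement subdomain, with the right-hand side carrying the complement's condensed load, a single local Robin solve returns the exact interface displacement.

```python
import numpy as np

def chain(n):
    """Stiffness of a chain of n unit springs (n + 1 nodes)."""
    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        K[e:e + 2, e:e + 2] += [[1.0, -1.0], [-1.0, 1.0]]
    return K

def condense(K, f, b):
    """Schur complement and condensed load of a local problem on dofs b."""
    i = [d for d in range(K.shape[0]) if d not in b]
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    S = K[np.ix_(b, b)] - Kib.T @ np.linalg.solve(Kii, Kib)
    bp = f[b] - Kib.T @ np.linalg.solve(Kii, f[i])
    return S, bp

# Subdomain: left half of a clamped 4-spring bar; complement: loaded right half.
S1, b1 = condense(chain(2)[1:, 1:], np.zeros(2), [1])        # the subdomain itself
S2, b2 = condense(chain(2), np.array([0.0, 0.0, 1.0]), [0])  # its complement
# Ideal impedance Q = S2 (Schur of the complement); mu = complement condensed load.
u_b = np.linalg.solve(S1 + S2, b1 + b2)   # one Robin solve: exact interface value
```

Here the single Robin solve reproduces the condensed global problem exactly, which is the discrete counterpart of the one-iteration convergence statement above.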
In the linear case, one can show \\cite{magoules2004optimal,gander2011optimal} that for a slab-wise decomposition of the structure (or a tree-like decomposition, i.e. whose connectivity graph has no cycles), the setting $Q_b\\ensuremath{^{(j)}} = S_t\\ensuremath{^{(\\overline{j})}}$ is optimal, in the sense that convergence is reached in a number of iterations at most equal to the number of subdomains (iterations are only needed to propagate the right-hand side). If a suitable coarse grid is added, convergence can be extremely fast. For an arbitrary decomposition, the optimality of such a setting can theoretically be lost, because of the unclear propagation rate of the right-hand side \\cite{nier1998remarques,gander2011optimal}. However, this value still seems pertinent, especially given the difficulty of defining a more relevant setting for a matrix operator $Q_t\\ensuremath{^{\\diamondbackslash}}$.\n\n\\medskip\n\n\nStarting from these considerations, let us further analyze the terms of the following expression of the interface impedance for a given subdomain $\\Omega\\ensuremath{^{(j)}}$:\n\\begin{equation}\\label{eq:opti_qb_lin}\nQ_t\\ensuremath{^{(j)}} = S_t\\ensuremath{^{(\\overline{j})}} = K_{bb}\\ensuremath{^{(\\overline{j})}} - K_{bi}\\ensuremath{^{(\\overline{j})}} K_{ii}\\ensuremath{^{(\\overline{j})^{-1}}} K_{ib}\\ensuremath{^{(\\overline{j})}}\n\\end{equation}\n\n\nThe first term, $ K\\ensuremath{^{(\\overline{j})}}_{bb}$, accounts for very local interactions. It is sparse, and exactly has the fill-in of matrix $K_{bb}\\ensuremath{^{(j)}}$. The second term, $K\\ensuremath{^{(\\overline{j})}}_{bi} {K\\ensuremath{^{(\\overline{j})}}_{ii}}^{-1} K\\ensuremath{^{(\\overline{j})}}_{ib}$, accounts for long range interactions: it depends on the whole structure (geometry and material), and couples all degrees of freedom together via in-depth interactions. 
It is thus a full matrix; this property can be seen as the consequence of the pseudo-differentiability of the underlying Steklov-Poincar\\'e operator of which the Schur complement is the discretization. It is important to note the minus sign: the short range part is very stiff and the global effects mitigate it. \n\nObviously, formula~\\eqref{eq:opti_qb_lin} is intractable in a distributed environment. However, different strategies have been investigated to compute approximations at low cost -- see the next subsection for a quick review.\n\n\\medskip\nIn the nonlinear context, the use of a linear impedance is of course non-optimal. Moreover, the best linear impedance probably resembles the Schur complement of the remainder of the subdomain in the final configuration, which is of course unknown a priori. Our aim is thus to find a heuristic that gives an easy-to-compute approximation of formula~\\eqref{eq:opti_qb_lin} to be applied to the initial tangent stiffness. \n\n\n\n\\subsection{Quick review}\\label{ssec:review}\n\nThe question of finding a good approximation of the Schur complement of a domain is at the core of mixed domain decomposition methods like optimized Schwarz methods \\cite{gander2006osm} or the Latin method \\cite{ladeveze2000micro}. Studies have shown that such approximations need to reproduce short-range effects (like local heterogeneity) but also structural effects (like the anisotropy induced by the slenderness of plate structures \\cite{SAAVEDRA.2012.1}). When one wishes to choose an invariant scalar (or tensor in case of anisotropy) for each interface, it can be beneficial to use a coarse model for its estimation \\cite{SAAVEDRA.2016.hal.1}. A possibility to better model short-range interactions between interface nodes is to use Ventcell conditions instead of simple Robin conditions \\cite{Hoang2014}; this recovers the same sparsity for the impedance as for the stiffness of the subdomain. 
An extreme strategy is to use (scalar) Robin conditions on the Riesz' image of the normal flux leading to a fully populated impedance matrix \\cite{DESMEURE.2011.3.1}. A more reasonable strategy is to use a strip approximation of the Schur complement \\cite{magoules_algebraic_2006}, which can also be computed by adding elements to the subdomains \\cite{Oumaziz2017}, in the spirit of restricted additive Schwarz methods \\cite{cai1999restricted}. \n\n\nFrom an algebraic point of view, short range approximation $ K\\ensuremath{^{(\\overline{j})}}_{t_{bb}}$ (or even $ \\operatorname{diag}(K\\ensuremath{^{(\\overline{j})}}_{t_{bb}})$) is sometimes used for FETI's preconditioner \\cite{Far94bis}, where it is called lumped approximation. \nLet $\\text{neigh}(j)$ be the set of the neighbors of subdomain $j$, we have\n\\begin{equation}\\label{eq:ktbb_neigh}\n\\text{lumped: }K_{t_{bb},l}\\ensuremath{^{\\text{neigh}(j)}} \\equiv K_{t_{bb}}\\ensuremath{^{(\\overline{j})}} = A\\ensuremath{^{(j)^T}} \\left( \\sum_{s \\in \\text{neigh}(j)} A\\ensuremath{^{(s)}} K\\ensuremath{^{(s)}}_{t_{bb}} A\\ensuremath{^{(s)^T}} \\right) A\\ensuremath{^{(j)}}\n\\end{equation}\nor even:\n\\begin{equation}\\label{eq:ktbb_neigh_diag}\n\\text{superlumped: }K_{t_{bb},sl}\\ensuremath{^{\\text{neigh}(j)}} \\equiv \\operatorname{diag}\\left(K_{t_{bb}}\\ensuremath{^{(\\overline{j})}}\\right) = A\\ensuremath{^{(j)^T}} \\left( \\sum_{s \\in \\text{neigh}(j)} A\\ensuremath{^{(s)}} \\operatorname{diag}\\left(K\\ensuremath{^{(s)}}_{t_{bb}}\\right) A\\ensuremath{^{(s)^T}} \\right) A\\ensuremath{^{(j)}}\n\\end{equation}\nBeing an assembly among a few subdomains of sparse block-diagonal matrices, this term is quite cheap to compute, and does not require any extra-computations, since local tangent stiffnesses are calculated anyway at each iteration of the solving process. 
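As an algebraic illustration, the assembled neighbor blocks \eqref{eq:ktbb_neigh} and \eqref{eq:ktbb_neigh_diag} can be mimicked with boolean assembly operators on a tiny interface (all matrices below are hypothetical toy data):

```python
import numpy as np

def A(dofs, n_glob=3):
    """Boolean assembly operator mapping local boundary dofs to global ones."""
    M = np.zeros((n_glob, len(dofs)))
    for col, d in enumerate(dofs):
        M[d, col] = 1.0
    return M

# Subdomain j touches global interface dofs {0, 1}; its neighbors s1 and s2
# touch dofs {0, 1, 2} and {1, 2} and contribute their boundary tangent blocks.
Aj, A1, A2 = A([0, 1]), A([0, 1, 2]), A([1, 2])
K1 = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
K2 = np.array([[3.0, -1.0], [-1.0, 3.0]])
# "lumped": assemble the neighbor boundary blocks, restrict to j's boundary.
lumped = Aj.T @ (A1 @ K1 @ A1.T + A2 @ K2 @ A2.T) @ Aj
# "superlumped": keep only the diagonals of the neighbor blocks before assembly.
superlumped = Aj.T @ (A1 @ np.diag(np.diag(K1)) @ A1.T
                      + A2 @ np.diag(np.diag(K2)) @ A2.T) @ Aj
```

Both quantities reuse blocks that are formed anyway when the local tangent stiffnesses are factorized, which is why they come essentially for free.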
The efficiency of the simple approximation \\eqref{eq:ktbb_neigh} has been studied, in the context of nonlinear substructuring and condensation, in some research works \\cite{cresta2007nonlinear, hinojosa2014domain, negrello2016substructured}, and has given good results when tested on rather homogeneous structures of standard shape. \\medskip\n\n\nIn the domain decomposition framework for linear problems, long range interactions are taken into account thanks to the coarse grid problems \\cite{Far94bis,mandel1993balancing,nouy03b}, which enable the method to comply with Saint-Venant's principle. These are closely related to projection techniques inspired by homogenization \\cite{ibrahimbegovic2003strong, feyel2000fe, ladeveze2000micro, guidault2007two, guidault2008multiscale} in order to get low rank approximations. Let $U$ be an orthonormal basis of a well-chosen subspace of displacements; the approximation can be written as:\n\\begin{equation}\\label{eq:long_scale}\nS\\ensuremath{^{(\\overline{j})}} \\simeq U (U^T S\\ensuremath{^{(\\overline{j})}} U) U^T\n\\end{equation}\nSaint-Venant's principle requires $U$ to contain at least the rigid body motions of $\\Omega\\ensuremath{^{(j)}}$; for computational efficiency it can be complemented by affine deformation modes or by displacements defined independently by interfaces.\\medskip\n\n\nHowever, if \\textit{short range} approximations do not provide enough information to give a good representation of the faraway structure influence on a substructure $\\Omega\\ensuremath{^{(j)}}$, neither do \\textit{long range} approximations give a good estimate of the near-field response to a solicitation. 
Besides, in the context of small displacements, a lack of precision on the close structure is more problematic than the filtering of long range interactions: predominant mechanical reactions usually come from nearby elements of the mesh.\n \nThe best strategy for $Q\\ensuremath{^{(j)}}_b$ would combine both \\textit{short} and \\textit{long} range formulations; however, such a combination has not been much investigated yet. In particular it is not that easy to ensure the positivity of the impedance if the two approximations are computed independently. In \\cite{gendre2011two} an expensive scale separation was introduced in the context of non intrusive global\\/local computations where $\\Omega\\ensuremath{^{(\\overline{j})}}$ was somehow available (which is not the case in our distributed framework). We propose here a new expression for parameter $Q\\ensuremath{^{(j)}}_b$, in the context of nonlinear substructuring and condensation with mixed interface conditions, which combines short and long scale formulations, at low computational cost.\n\n\n\\subsection{Spring in series model}\\label{ssec:springs}\n\nOur heuristic for the impedance relies on the simple observation that finding a two-scale approximation of the flexibility of $\\Omega\\ensuremath{^{(\\overline{j})}}$ may be more natural than for the stiffness. It is inspired by the simple model of two springs assembled in series: one spring models the stiffness of the neighboring subdomains whereas the second models the stiffness of the faraway subdomains (see figure \\ref{fig:springs_series}). The resulting equivalent flexibility is the sum of the two flexibilities. 
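In terms of stiffnesses this is the classical series rule; a one-line numerical check with hypothetical values:

```python
# Two springs in series: the equivalent flexibility is the sum of the
# flexibilities (hypothetical near-field and far-field stiffnesses).
k_near, k_far = 10.0, 2.0
flex_eq = 1.0 / k_near + 1.0 / k_far   # additive flexibilities
k_eq = 1.0 / flex_eq                   # equivalent stiffness of the series
```

The equivalent stiffness is dominated by the softer spring, consistent with the observation that the global (far-field) effects mitigate the stiff short-range part.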
\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics{spring_series.pdf}\n\\end{center}\n\\caption{Springs in series model}\n\\label{fig:springs_series}\n\\end{figure}\nIn practice, in order to recover the structure of \\eqref{eq:opti_qb_lin}, while remaining tractable, we propose the local flexibility ${S_t\\ensuremath{^{\\text{neigh}(j)}}}^{-1}$ to be the inverse of a sparse matrix, and the long-range flexibility ${S_t^{\\text{far}(j)}}^{-1}$ to be low-rank. The latter condition is also motivated by \\cite{bebendorf2003existence,amestoy2016complexity}, where it is shown that low-rank approximants of fully populated inverse operators, arising from FE discretization of elliptic problems, can be derived from the hierarchical-matrices theory. Typically we have:\n\\begin{equation}\\label{eq:flex2terms}\n\\begin{aligned}\n{Q_b\\ensuremath{^{(j)}}}^{-1} = {K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}}}^{-1} + A\\ensuremath{^{(j)^T}} V F V^T A\\ensuremath{^{(j)}}\n\\end{aligned}\n\\end{equation}\nwhere $K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}}$ can refer for instance to expressions \\eqref{eq:ktbb_neigh} or \\eqref{eq:ktbb_neigh_diag}, $F$ is a small-sized $m \\times m$ square matrix, and $V$ an interface vectors basis of size $n_A \\times m$. 
Writing $V\\ensuremath{^{(j)}} = A\\ensuremath{^{(j)^T}} V$ the local contribution of basis $V$, expression \\eqref{eq:flex2terms} can be inverted using the Sherman-Morrison formula:\n\\begin{equation}\n\\begin{aligned}\nQ_b\\ensuremath{^{(j)}} = K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}} - \\underset{W_b\\ensuremath{^{(j)}}}{\\underbrace{ K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}} V\\ensuremath{^{(j)}}}} \\, \\, \\underset{ {M\\ensuremath{^{(j)}}}^{-1} }{\\underbrace{ \\left( F^{-1} + V\\ensuremath{^{(j)^T}} K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}} V\\ensuremath{^{(j)}} \\right)^{-1}}} \\, \\, \\underset{W_b\\ensuremath{^{(j)^T}}}{\\underbrace{ V\\ensuremath{^{(j)^T}} K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}} }}\n\\end{aligned}\n\\end{equation} \nThis stiffness is a sparse matrix corrected by a low-rank term; then, when solving the (generalized) Robin problems, the Sherman-Morrison formula can be used again:\n\\begin{equation}\n\\begin{aligned}\n\\text{let } \\tilde{K}_t\\ensuremath{^{(j)}} & \\equiv (K_t\\ensuremath{^{(j)}}+ t\\ensuremath{^{(j)^T}} {K_{t_{bb}}^{\\text{neigh}(j)} } t\\ensuremath{^{(j)}}) \n\\text{ and } W\\ensuremath{^{(j)}} \\equiv t\\ensuremath{^{(j)^T}} W_b\\ensuremath{^{(j)}}:\\\\\n(K_t\\ensuremath{^{(j)}}+ t\\ensuremath{^{(j)^T}} Q_b\\ensuremath{^{(j)}} t\\ensuremath{^{(j)}})^{-1} & = \\tilde{K}_t^{(j)^{-1}} + \\tilde{K}_t^{(j)^{-1}} W\\ensuremath{^{(j)}} \\left(M\\ensuremath{^{(j)}} - W\\ensuremath{^{(j)^T}} \\tilde{K}_t^{(j)^{-1}} W\\ensuremath{^{(j)}} \\right)^{-1} W\\ensuremath{^{(j)^T}} \\tilde{K}_t^{(j)^{-1}}\n\\end{aligned}\n\\end{equation} \nThe short-range term regularizes the problem without impairing the sparsity of the stiffness matrix.\n\n\\subsection{A multi-scale interpretation}\\label{ssec:multi}\n\n\nIn the spirit of \\cite{oumaziz2}, we can derive a multi-scale interpretation of the additive form \\eqref{eq:flex2terms} adopted for the interface impedance. 
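Before turning to the multi-scale interpretation, the Sherman-Morrison (Woodbury) inversion of the additive flexibility \eqref{eq:flex2terms} can be checked numerically; the sketch below uses random data and a dense stand-in for the sparse short-range stiffness (all names and values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 2
B = rng.standard_normal((n, n))
Kn = B @ B.T + n * np.eye(n)       # SPD stand-in for the sparse K_tbb^neigh(j)
V = rng.standard_normal((n, m))    # interface basis V^(j)
F = np.eye(m)                      # small long-range flexibility core (m x m)
# Additive two-scale flexibility: sparse inverse plus low-rank term.
Q_inv = np.linalg.inv(Kn) + V @ F @ V.T
# Woodbury inversion: sparse stiffness minus a low-rank correction.
W = Kn @ V                                   # plays the role of W_b^(j)
M = np.linalg.inv(F) + V.T @ Kn @ V          # plays the role of M^(j)
Q = Kn - W @ np.linalg.solve(M, W.T)
```

In the method itself only the sparse factorization and the small $m \times m$ core are ever formed; the dense inverses above appear solely to verify the identity.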
\n\nStarting from Algorithm~\\ref{alg:robin-robin}, a macroscopic condition, inspired from the Latin method, can be imposed on the nodal reactions: the nodal reactions should satisfy a weak form of the interface balance, defined by a macroscopic basis $C_A$:\n\\begin{equation}\\label{eq:macro_const}\nC_A^T A\\ensuremath{^{\\diamondminus}} \\lambda_b\\ensuremath{^{\\diamondvert}} = 0\n\\end{equation}\nIn the linear case, this condition can be enforced by the introduction of a Lagrange multiplier $\\alpha$ (details can be found in \\cite{oumaziz2}) in the interface condition \\eqref{eq:intQnl}:\n\\begin{equation*}\n\\lambda_b\\ensuremath{^{\\diamondvert}} - \\bar{\\lambda}_b\\ensuremath{^{\\diamondvert}} + Q_b\\ensuremath{^{\\diamondbackslash}} \\left( u_b\\ensuremath{^{\\diamondvert}} - \\bar{u}_b\\ensuremath{^{\\diamondvert}} \\right) + Q_b\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} C_A \\alpha = 0\n\\end{equation*}\nAfter algebraic calculations, writing local equilibriums with this new condition leads to:\n\\begin{equation*}\n\\left[ K\\ensuremath{^{\\diamondbackslash}} + t\\ensuremath{^{\\diamondbackslash^T}} Q_b\\ensuremath{^{\\diamondbackslash}} \\left( I\\ensuremath{^{\\diamondbackslash}} - P_{C_A}\\ensuremath{^{\\diamondbackslash}} \\right) t\\ensuremath{^{\\diamondbackslash}} \\right] u\\ensuremath{^{\\diamondvert}} = f_{ext}\\ensuremath{^{\\diamondvert}} + t\\ensuremath{^{\\diamondbackslash^T}} \\left[ \\bar{\\lambda}_b\\ensuremath{^{\\diamondvert}} + Q_b\\ensuremath{^{\\diamondbackslash}} \\left( I\\ensuremath{^{\\diamondbackslash}} - P_{C_A}\\ensuremath{^{\\diamondbackslash}} \\right) \\bar{u}_b\\ensuremath{^{\\diamondvert}} \\right]\n\\end{equation*}\nwhere $P_{C_A}\\ensuremath{^{\\diamondbackslash}} = A\\ensuremath{^{\\diamondminus^T}} C_A \\left( C_A^T A\\ensuremath{^{\\diamondminus}} Q_b\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} C_A \\right)^{-1} C_A^T A\\ensuremath{^{\\diamondminus}} 
Q_b\\ensuremath{^{\\diamondbackslash}}$ is a projector on the low-dimensional subspace $\\operatorname{Range}(A\\ensuremath{^{\\diamondminus^T}} C_A)$.\n\nThe coarse space associated with the macroscopic constraint \\eqref{eq:macro_const} results not only in the propagation of the right-hand side over the whole structure ($P_{C_A}\\ensuremath{^{\\diamondbackslash}}$ is not sparse), but also in the modification of the impedance by the symmetric negative low-rank term $-Q_b\\ensuremath{^{\\diamondbackslash}} P_{C_A}\\ensuremath{^{\\diamondbackslash}}$.\n\n\\medskip\n\n\nIn our nonlinear context, considering the basic setting $Q_b\\ensuremath{^{(j)}} = K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}}$ \\eqref{eq:ktbb_neigh} or \\eqref{eq:ktbb_neigh_diag}, the modification $\\bar{Q}_b\\ensuremath{^{\\diamondbackslash}} = K_{t_{bb}}\\ensuremath{^{\\text{neigh}(j)}} - W_b\\ensuremath{^{(j)}} M\\ensuremath{^{(j)^{-1}}} W_b\\ensuremath{^{(j)^T}}$ proposed in \\eqref{eq:flex2terms} can be seen as the introduction of a multi-scale computation inside the mixed nonlinear substructuring and condensation method. As said earlier, the propagation of the right-hand side is ensured by a well-built initialization, which can be realized by adapting the inner Newton criterion $\\epsilon_{NL}$ at each global iteration.\n\n\n\n\n\\subsection{Two-scale approximation of the flexibility}\\label{ssec:two_scale}\n\n\\subsubsection{General idea}\n\nFrom the previous analysis, we try to derive an approximation of the (linear) optimal flexibility \\eqref{eq:opti_qb_lin} which takes the additive form of \\eqref{eq:flex2terms}. 
Given a substructure $\\Omega\\ensuremath{^{(j)}}$, we write $S_A\\ensuremath{^{(\\overline{j})}} = \\sum_{s \\neq j} A\\ensuremath{^{(s)}} S_t\\ensuremath{^{(s)}} A\\ensuremath{^{(s)^T}}$ for the assembly of local tangent Schur complements on the remainder $\\Omega\\ensuremath{^{(\\overline{j})}}$.\n\nUsing the quotient and the inverse formulas for the Schur complement, we have:\n\\begin{equation}\\label{eq:flexSt}\n{S_t\\ensuremath{^{(\\overline{j})}}}^{-1} = \\left( {S_A\\ensuremath{^{(\\overline{j})}}}^{-1} \\right)_{bb} = A\\ensuremath{^{(j)^T}} {S_A\\ensuremath{^{(\\overline{j})}}}^{-1} A\\ensuremath{^{(j)}}\n\\end{equation}\n\n\\begin{remark}\nWe here assume a substructuring ensuring the invertibility of $S_A\\ensuremath{^{(\\overline{j})}}$ and $S_t\\ensuremath{^{(\\overline{j})}}$, i.e. Dirichlet conditions are not concentrated on only one subdomain, and the complementary part of each subdomain is connected. In practice, this is almost always the case; if not, a simple subdivision can overcome the problem.\n\\end{remark}\n\n\\medskip\n\nClassical preconditioners of the BDD algorithm can then be used as approximations of the inverse of $S_A\\ensuremath{^{(\\overline{j})}}$. \nWe hence introduce $\\hat{G}_A\\ensuremath{^{(\\overline{j})}} = \\left[\\, \\ldots, \\, \\hat{A}\\ensuremath{^{(s)}}_j R_b\\ensuremath{^{(s)}}, \\, \\ldots \\, \\right]_{s \\neq j}$, the concatenation of the scaled local traces of rigid body motions ($R_b\\ensuremath{^{(s)}}$) of subdomains belonging to $\\Omega\\ensuremath{^{(\\overline{j})}}$, with $\\hat{A}\\ensuremath{^{(s)}}_j$ scaled assembly operators taking into account the absence of matter inside subdomain $\\Omega\\ensuremath{^{(j)}}$. 
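Identity \eqref{eq:flexSt} rests on the standard block-inverse property of SPD matrices: the inverse of the Schur complement on a set of dofs equals the corresponding block of the full inverse. A quick numerical check on a random SPD stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nb = 9, 3
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)   # random SPD stand-in for S_A^(jbar)
b = list(range(nb))           # dofs on the boundary of subdomain j
i = list(range(nb, n))        # remaining interface dofs
# Schur complement of M on the dofs b:
S = M[np.ix_(b, b)] - M[np.ix_(b, i)] @ np.linalg.solve(M[np.ix_(i, i)],
                                                        M[np.ix_(i, b)])
# inv(S) should equal the (b, b) block of inv(M), i.e. A^(j)T inv(M) A^(j).
```

This is what allows the flexibility of the remainder to be read off directly from (approximations of) the inverse of the assembled operator.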
Considering the classical definition of scaled assembly operators $\\tilde{A}\\ensuremath{^{(s)}}$ \\cite{klawonn2001feti}, modified operators $\\hat{A}\\ensuremath{^{(s)}}_j$ can be defined as:\n\\begin{equation*}\n\\begin{aligned}\n \\hat{A}\\ensuremath{^{(s)}}_j = & \\left\\lbrace \\,\\,\\, \\begin{aligned} \\left( A\\ensuremath{^{\\diamondminus}} \\Delta\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} - A\\ensuremath{^{(j)}} \\Delta\\ensuremath{^{(j)}} A\\ensuremath{^{(j)^T}} \\right)^{-1} A\\ensuremath{^{(s)}} \\Delta\\ensuremath{^{(s)}} \\quad \\text{ if } s \\neq j \\\\\n0 \\quad \\text{ if } s = j \\end{aligned} \\right. \\\\\n& \\hspace{0.5cm} \\text{ with } \\,\\, \\Delta\\ensuremath{^{(s)}} \\equiv \\operatorname{diag} \\left( K_{t_{bb}}\\ensuremath{^{(s)}} \\right)\n\\end{aligned}\n\\end{equation*}\n\n\nLet $P_A\\ensuremath{^{(\\overline{j})}}$ be the $S_A\\ensuremath{^{(\\overline{j})}}$-orthogonal projector on $\\operatorname{Ker}\\left( \\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} S_A\\ensuremath{^{(\\overline{j})}} \\right)$:\n\\begin{align*}\n P_A\\ensuremath{^{(\\overline{j})}} = I - \\hat{G}_A\\ensuremath{^{(\\overline{j})}} \\left( \\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} S_A\\ensuremath{^{(\\overline{j})}} \\hat{G}_A\\ensuremath{^{(\\overline{j})}} \\right)^{-1} \\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} S_A\\ensuremath{^{(\\overline{j})}} \n\\end{align*}\nwe have:\n\\begin{equation}\\label{eq:sep_inv_proj}\nS_A\\ensuremath{^{(\\overline{j})^{-1}}} = P_A\\ensuremath{^{(\\overline{j})}} S_A\\ensuremath{^{(\\overline{j})^{-1}}} P_A\\ensuremath{^{(\\overline{j})^T}} + \\left( I - P_A\\ensuremath{^{(\\overline{j})}} \\right) S_A\\ensuremath{^{(\\overline{j})^{-1}}} \\left( I - P_A\\ensuremath{^{(\\overline{j})}} \\right)^T \n\\end{equation}\nThe BDD-theory states that in the first term, $S_A\\ensuremath{^{(\\overline{j})^{-1}}}$ can be conveniently approximated by a scaled sum of local inverses\\footnote{The GENEO theory 
\\cite{SPILLANE:2013:FETI_GenEO_IJNME} states that, if needed, computable extra modes shall be inserted in $\\hat{G}_A\\ensuremath{^{(\\overline{j})}}$ in order to maintain the quality of the approximation.}.\nAfter developing and factorizing, we have a first approximation of the flexibility:\n\\begin{equation}\\label{eq:expr_inv_st}\n\\begin{aligned}\nQ_{BDD}\\ensuremath{^{(j)^{-1}}} \\equiv A\\ensuremath{^{(j)^T}} \\left( P_A\\ensuremath{^{(\\overline{j})}} \\sum_{s \\neq j} \\hat{A}\\ensuremath{^{(s)}}_j {S_t\\ensuremath{^{(s)}}}^{\\dagger} \\hat{A}\\ensuremath{^{(s)^T}}_j P_A\\ensuremath{^{(\\overline{j})^T}} + \\hat{G}_A\\ensuremath{^{(\\overline{j})}} \\left( \\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} S_A\\ensuremath{^{(\\overline{j})}} \\hat{G}_A\\ensuremath{^{(\\overline{j})}} \\right)^{-1} \\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} \\right) A\\ensuremath{^{(j)}} \\\\\n\\end{aligned}\n\\end{equation}\n\n\n\n\\subsubsection{Long range interactions term}\n\nThe second term of expression \\eqref{eq:expr_inv_st}, written $\\hat{F}_{A,2}\\ensuremath{^{(j)}}$, is a matrix of low rank $m^{(j)}$, where $m^{(j)}$ is the number of rigid body motions of the neighboring subdomains. It could be used as is; however, its computation involves the inversion of the quantity $\\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} S_A\\ensuremath{^{(\\overline{j})}} \\hat{G}_A\\ensuremath{^{(\\overline{j})}}$, an interface matrix of rank $m\\ensuremath{^{(\\overline{j})}}$, where $m\\ensuremath{^{(\\overline{j})}}$ is the number of local rigid body modes of the whole remainder $\\Omega\\ensuremath{^{(\\overline{j})}}$. In the context of large structures with a high number of subdomains, $m^{(\\overline{j})}$ can increase drastically; avoiding the computation and factorization of such a matrix then becomes quite attractive. 
Moreover, during the computation of the structure coarse problem, a closely related quantity is already assembled and factorized: the matrix $\\tilde{G}_A^T S_A \\tilde{G}_A$ -- with $S_A \\equiv \\sum_{s=1}^{N_s} A\\ensuremath{^{(s)}} S\\ensuremath{^{(s)}} A\\ensuremath{^{(s)^T}}$ and $\\tilde{G}_A \\equiv \\left[ \\ldots , \\, \\tilde{A}\\ensuremath{^{(s)}} R_b\\ensuremath{^{(s)}}, \\, \\ldots \\right]$. Compared to $\\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} S_A\\ensuremath{^{(\\overline{j})}} \\hat{G}_A\\ensuremath{^{(\\overline{j})}}$, the addition of the local term linked to $\\Omega\\ensuremath{^{(j)}}$ in $\\tilde{G}_A^T S_A \\tilde{G}_A$ somewhat balances the classical scaling on its boundary (taking into account non-existent matter inside $\\Omega\\ensuremath{^{(j)}}$); we thus propose:\n\\begin{equation*}\n\\hat{F}_{A,2}\\ensuremath{^{(j)}} \\simeq A\\ensuremath{^{(j)^T}} \\hat{G}_A\\ensuremath{^{(\\overline{j})}} \\left( \\tilde{G}_A^T S_A \\tilde{G}_A \\right)^{-1} \\hat{G}_A\\ensuremath{^{(\\overline{j})^T}} A\\ensuremath{^{(j)}} \\equiv \\tilde{F}_{A,2}\\ensuremath{^{(j)}}\n\\end{equation*}\n\n\\subsubsection{Short range interactions term}\n\nThe first term of expression \\eqref{eq:expr_inv_st}, written $\\hat{F}_{A,1}\\ensuremath{^{(j)}}$, can also be simplified. First, for numerical efficiency, a diagonal lumping technique is used to approximate the local Schur complements (as explained in section \\ref{ssec:review}). Then, in order to preserve sparsity, the projectors are removed. 
Assuming stiffness scaling is used we then directly recover the inverse of the superlumped stiffness of the neighbors:\n\\begin{equation}\\label{eq:tild_ktbb}\n\\begin{aligned}\n\\hat{F}_{A,1}\\ensuremath{^{(j)}} & \\simeq A\\ensuremath{^{(j)^T}} \\sum_{s \\in \\text{neigh}(j)} \\hat{A}\\ensuremath{^{(s)}}_j \\operatorname{diag}\\left( K_{t_{bb}}\\ensuremath{^{(s)}} \\right)^{-1} \\hat{A}\\ensuremath{^{(s)^T}}_j A\\ensuremath{^{(j)}} \\\\ \n& = A\\ensuremath{^{(j)^T}} \\left( \\sum_{s \\in \\text{neigh}(j)} A\\ensuremath{^{(s)}} \\operatorname{diag} \\left( K_{t_{bb}}\\ensuremath{^{(s)}} \\right) A\\ensuremath{^{(s)^T}} \\right)^{-1} A\\ensuremath{^{(j)}} = K_{t_{bb},\\, sl}\\ensuremath{^{\\text{neigh}(j)^{-1}}}\n\\end{aligned}\n\\end{equation}\n\n\\subsubsection{Scaling issue}\n\nA way to avoid building the modified scaled assembly operators $\\hat{A}\\ensuremath{^{(s)}}_j$ is to notice that for $s \\neq j$, the following relation holds between modified and classical scaling operators $\\tilde{A}\\ensuremath{^{(s)}}$ \\cite{klawonn2001feti}:\n\\begin{equation*}\n\\begin{aligned}\nA\\ensuremath{^{(j)^T}} \\hat{A}_j\\ensuremath{^{(s)}} & = \\tilde{D}\\ensuremath{^{(j)}} A\\ensuremath{^{(j)^T}} \\tilde{A}\\ensuremath{^{(s)}} \\\\\n\\text{with }\\tilde{D}\\ensuremath{^{(j)}} \\equiv A\\ensuremath{^{(j)^T}} \\left( A\\ensuremath{^{\\diamondminus}} \\Delta\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} \\right) & \\left( A\\ensuremath{^{\\diamondminus}} \\Delta\\ensuremath{^{\\diamondbackslash}} A\\ensuremath{^{\\diamondminus^T}} - A\\ensuremath{^{(j)}} \\Delta\\ensuremath{^{(j)}} A\\ensuremath{^{(j)^T}} \\right)^{-1} A\\ensuremath{^{(j)}}\n\\end{aligned}\n\\end{equation*}\nand we observe that the local diagonal matrix $\\tilde{D}\\ensuremath{^{(j)}}$ can be \nextracted without cost from $\\tilde{A}\\ensuremath{^{(j)}}$:\n\\begin{equation*}\n\\tilde{D}\\ensuremath{^{(j)}} = A\\ensuremath{^{(j)^T}} \\left( I - A\\ensuremath{^{(j)}} 
\\tilde{A}\\ensuremath{^{(j)^T}} \\right)^{-1} A\\ensuremath{^{(j)}}\n\\end{equation*}\n\\begin{remark}\nWith evident notations, for a scaling based on the material stiffness, the diagonal coefficient of $\\tilde{D}\\ensuremath{^{(j)}}$ associated with degree of freedom $x$ is equal to:\n\\begin{equation*}\n\\begin{aligned} \n\\tilde{D}\\ensuremath{^{(j)}}_{xx} = \\dfrac{ \\sum_s K_{t_{xx}}\\ensuremath{^{(s)}} }{ \\sum_{s \\neq j } K_{t_{xx}}\\ensuremath{^{(s)}} } = \\left( 1 - \\dfrac{K_{t_{xx}}\\ensuremath{^{(j)}} }{ \\sum_s K_{t_{xx}}\\ensuremath{^{(s)}}} \\right)^{-1} = \\left( 1 - \\tilde{A}\\ensuremath{^{(j)}}_x \\right)^{-1}\n\\end{aligned}\n\\end{equation*}\\qed\n\\end{remark}\n\\medskip\n\n\n\\noindent \\textit{Final expression.} To conclude, we propose the following two-scale impedance:\n\\begin{equation}\\label{eq:final_expr_qb}\n\\left( Q\\ensuremath{^{(j)}}_{b, \\,2s} \\right)^{-1} = K_{t_{bb},\\,sl}\\ensuremath{^{\\text{neigh}(j)^{-1}}} + \\tilde{D}\\ensuremath{^{(j)}} A\\ensuremath{^{(j)^T}} \\tilde{G}_A\\ensuremath{^{(\\overline{j})}} \\left( \\tilde{G}_A^T S_A \\tilde{G}_A \\right)^{-1} \\tilde{G}_A\\ensuremath{^{(\\overline{j})^T}} A\\ensuremath{^{(j)}} \\tilde{D}\\ensuremath{^{(j)}}\n\\end{equation}\n\n\n\\subsection{Attempt to enrich the short-range approximation}\\label{ssec:Qritz}\n\nThe short range part of the impedance, corresponding to the sparse approximation of $\\hat{F}_{A,1}\\ensuremath{^{(j)}}$ by $K_{t_{bb},\\,sl}\\ensuremath{^{\\text{neigh}(j)^{-1}}}$, seems very crude. 
In particular, we most probably underestimate the flexibility of the neighbors by using a diagonal operator.\n\nWe believe it is worth mentioning the tentative improvement which consisted in adding another low rank term:\n\\begin{equation*}\n\\hat{F}_{A,1}\\ensuremath{^{(j)}} \\simeq K_{t_{bb},\\,sl}\\ensuremath{^{\\text{neigh}(j)^{-1}}} + \\tilde{D}\\ensuremath{^{(j)}} A\\ensuremath{^{(j)^T}} V_k \\Theta_k V_k^T A\\ensuremath{^{(j)}} \\tilde{D}\\ensuremath{^{(j)}}\n\\end{equation*}\nwhere $\\Theta_k$ is a diagonal matrix and $V_k$ an orthonormal basis, approximations of the eigen-elements of $S_A\\ensuremath{^{(\\overline{j})^{-1}}}$ associated with the higher part of the spectrum. They could be obtained at a moderate cost by post-processing the tangent BDD iterations in the spirit of \\cite{gosselet2013total} (but considering the classical eigenvalues instead of the generalized ones). \n\nThis low rank term could be concatenated with the one associated with rigid body motions $\\tilde{F}_{A,2}\\ensuremath{^{(j)}}$, and thus did not modify the usability of the approximation. \nWe observed that it led to a stiffness which was closer to our reference $S_t\\ensuremath{^{(\\overline{j})^{-1}}}$ (measured with the Frobenius norm). But in practice, when using it as the impedance in our numerical experiments, the reduction achieved in iteration numbers was not worth the additional cost of the enrichment term -- this is why we do not present it in detail. This ``improvement'' may be more useful on other classes of nonlinear problems for which it would be important not to overestimate the stiffness of the remainder of the structure.\n\n\n\n\n\\section{Results}\n\n\\subsection{Two test cases}\\label{sec:two_test_cases}\n\n\nThe efficiency of the expression \\eqref{eq:final_expr_qb} is evaluated on two numerical test cases. The first test case is a bi-material beam under a bending load, represented in figure~\\ref{fig:test_case_bimat}. 
Material and geometrical parameters are given in table \\ref{tab:params_tests_case}: one of the two materials is chosen to be elastoplastic with linear hardening, the other one is chosen to remain elastic. Load is applied via an imposed displacement on the edge defined by $x=L$.\n\nThe second test case is a homogeneous multiperforated beam under a bending load, represented in figure~\\ref{fig:test_case_multiperf}. Material and geometrical parameters are given in table \\ref{tab:params_tests_case}: the material is chosen to be elastoplastic with linear hardening. Load is applied via an imposed displacement $u_D$ on the edge defined by $x=L$.\n\n\n\\begin{figure}[!ht]\n\\includegraphics[width=\\textwidth]{test_case_betarme2.pdf}\n\\caption{Bi-material beam: partition and loading}\n\\label{fig:test_case_bimat}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[width=15cm]{test_case_multiperf.pdf}\n\\end{center}\n\\caption{Multiperforated beam: partition and loading}\n\\label{fig:test_case_multiperf}\n\\end{figure}\n\n\n\\begin{table}[ht]\n\\begin{center}\n\\begin{minipage}{0.56\\linewidth}\n\\begin{tabular}{|c:c|c|}\n\\hline \\hline\n\\multicolumn{3}{|c|}{Bi-material beam} \\\\\n\\hline \\hline\n\\multicolumn{3}{|c|}{ Material parameters} \\\\\n\\hline \\hline\n & Material 1 & Material 2 \\\\\nYoung modulus & $E_1 = 420e2$ & $E_2 = 210e6$ \\\\\nPoisson coefficient & $\\nu_1 = 0.3$ & $\\nu_2 = 0.3$ \\\\\nElastic limit & & $\\sigma_{0_2} = 420e3$ \\\\\nHardening coefficient & & $h_2 = 1e3$ \\\\\n\\hline \\hline\n\\multicolumn{3}{|c|}{Geometrical parameters} \\\\\n\\hline \\hline\nTotal length & \\multicolumn{2}{:c|}{L $ = 13$} \\\\\nTotal height & \\multicolumn{2}{:c|}{H $ = 2$} \\\\\nHeight of an armature & \\multicolumn{2}{:c|}{H$_\\text{a} = 0.25$} \\\\\n\\hline \\hline\n\\end{tabular}\n\\end{minipage}\n\\begin{minipage}{0.43\\linewidth}\n\\begin{tabular}{|c:c|}\n\\hline \\hline\n\\multicolumn{2}{|c|}{Multiperforated beam} \\\\\n\\hline 
\\hline\n\\multicolumn{2}{|c|}{ Material parameters} \\\\\n\\hline \\hline\n & \\\\\nYoung modulus & $E = 210e6$ \\\\\nPoisson coefficient & $\\nu = 0.3$ \\\\\nElastic limit & $\\sigma_0 = 420e3$ \\\\\nHardening coefficient & $h = 1e6$ \\\\\n\\hline \\hline\n\\multicolumn{2}{|c|}{Geometrical parameters} \\\\\n\\hline \\hline\nLength & L $ = 10$ \\\\\nHeight & H $ = 1$ \\\\\nHole radius & r $ = 2\/30$ \\\\\n\\hline \\hline\n\\end{tabular}\n\\end{minipage}\n\\caption{Material and geometrical parameters}\n\\label{tab:params_tests_case}\n\\end{center}\n\\end{table}\n\n\n\n\n\n\\subsection{Elastic analysis}\n\nThe ultimate goal of this paper is to assess the performance of the new impedance \\eqref{eq:final_expr_qb} in the nonlinear multi-scale distributed context. Before we reach that point, a preliminary mono-scale elastic study is performed in order to verify that the heuristic developed in the previous sections is actually able to capture both short- and long-range interactions within the structure.\n\nLoads are kept low enough here to remain in the elastic domain of every material: the bi-material beam and the multiperforated beam are both submitted to a bending load of intensity $u_D = 1.5 \\,\\, 10^{-3}$. Moreover, the decomposition is for now only performed along the $x$-axis (multiple points will be involved in the next section, where the nonlinear multi-scale context is considered). One interest of the elastic linear case with slab-wise decomposition lies in the ability to express the optimal interface impedance: $Q_b\\ensuremath{^{(j)}} = S_t\\ensuremath{^{(\\overline{j})}}$ (see \\ref{ssec:motivation}). Even though the computational cost of this parameter would be, in a real situation, absolutely unaffordable in the context of parallel resolutions, it was calculated here for the purpose of our analysis. 
A comparison with an optimal reference can thus be made for the two following expressions:\n\\begin{itemize}[label=$\\circ$]\n\\item a classical choice $K_{bb,l}\\ensuremath{^{\\text{neigh}(j)}}$: see \\eqref{eq:ktbb_neigh}\n\\item the new expression $Q_{b,\\,2s}\\ensuremath{^{(j)}}$: see \\eqref{eq:final_expr_qb}\n\\end{itemize}\nGiven the alternative formulation we chose for the mixed nonlinear substructuring and condensation method (see section \\ref{sec:altern_formul}), an elastic resolution would be strictly equivalent to a primal BDD resolution. Therefore, no comparison of different interface impedances is possible with Algorithm~\\ref{alg:robin-bdd}. A mono-scale FETI-2LM solver \\cite{roux2009feti} was hence implemented, corresponding to the first formulation of the mixed interface problem with the $\\mu_b\\ensuremath{^{\\diamondvert}}$ unknown \\eqref{eq:tg_pb}. This algorithm makes it possible to solve linear problems with Robin interface transmission conditions. \n\nNote that an optimal coarse problem could be added in order to recover an efficient multi-scale solver \\cite{dubois2012optimized,haferssas2015robust,loisel2015optimized}. However, this augmentation strategy would make it impossible to discern the efficiency of the long-range interaction term of our two-scale impedance. Again, our aim is not to compete with augmented Krylov solvers for linear problems but to find an alternative way, compatible with nonlinear problems, to introduce long-range effects. The mono-scale formulation is thus preserved in order to evaluate the ability of \\eqref{eq:final_expr_qb} to introduce into local equilibria information related to the interactions with the faraway structure, in a linear context where the optimal parameter is known. 
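To make the comparison concrete, the new impedance \eqref{eq:final_expr_qb} can be assembled on a toy interface. The NumPy sketch below is our own illustration (all operators are random stand-ins, not actual finite element data); the scaling matrix $\tilde{D}\ensuremath{^{(j)}}$ is built from per-subdomain diagonal stiffness contributions exactly as in the remark above.

```python
import numpy as np

rng = np.random.default_rng(1)
nb, m = 8, 3          # interface dofs of subdomain j, rigid body modes kept

# Diagonal interface stiffness contributions K_t_xx^(s): row 0 is subdomain j,
# remaining rows are its neighbors -- illustrative values only.
K = rng.uniform(1.0, 5.0, (3, nb))
total = K.sum(axis=0)
D = np.diag(total / (total - K[0]))   # D~(j)_xx = (1 - A~(j)_x)^(-1)

k_sl = K[1:].sum(axis=0)              # superlumped neighbor stiffness (diagonal)
G = rng.standard_normal((nb, m))      # stand-in for traces of rigid body modes
M = rng.standard_normal((nb, nb))
S = M @ M.T + nb * np.eye(nb)         # SPD stand-in for the assembled Schur operator

# Two-scale impedance inverse: short-range diagonal + long-range low-rank term
Q_inv = np.diag(1.0 / k_sl) + D @ G @ np.linalg.solve(G.T @ S @ G, G.T) @ D
assert np.allclose(Q_inv, Q_inv.T)            # symmetric
assert np.all(np.linalg.eigvalsh(Q_inv) > 0)  # and positive definite
```

The long-range term has rank at most $m$, so applying the two-scale impedance inverse costs one diagonal scaling plus a small dense solve, which is the point of the construction.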
\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\subfloat[Bi-material beam]{\\includegraphics[width=10cm]{results_lin_bimat.pdf}\\label{subtab:results_lin_bimat}} \\qquad\n\\subfloat[Multiperforated beam]{\\includegraphics[width=10cm]{results_lin_multiperf.pdf}\\label{subtab:results_lin_multiperf}}\n\\end{center}\n\\caption{Comparison of the three interface impedances: linear behavior}\n\\label{tab:results_lin}\n\\end{table}\n\nResults are given in table \\ref{tab:results_lin} for the two previously introduced test cases. \n\n\nAs expected, for both test cases, the optimal interface impedance $S_t^{(\\overline{j})}$ completes the resolution in a number of iterations equal to the number of subdomains minus one. Given the distribution of the subdomains (no multiple points) and the absence of a coarse problem, this is the best convergence rate that can be achieved: mixed transmission conditions with interface impedance $S_t^{(\\overline{j})}$ are optimal.\n\nThe classical choice $K_{bb,l}^{\\text{neigh}(j)}$ does not inject into local equilibria any information on the long-range interactions of a subdomain with the faraway structure: the number of iterations drastically increases along with the number of substructures. \n\nThe new expression $Q_{b,2s}^{(j)}$ introduced in this paper greatly reduces the number of FETI-2LM iterations compared to the classical choice $K_{bb,l}^{\\text{neigh}(j)}$: gains are between 77 and 98\\%. This should mostly be due to the additive form of expression \\eqref{eq:final_expr_qb}, with the introduction of a long-range interaction term in the flexibility -- obviously, the absence of a coarse problem in the resolution reinforces the benefits of this term. The forthcoming nonlinear study, based on algorithm \\ref{alg:robin-bdd}, will place this expression in a multiscale computation context. 
\n\nThe performance of expression $Q_{b,2s}^{(j)}$ is evidently not as good as that of the optimal expression $S_t^{(\\overline{j})}$, but its iteration count is only about ten times the optimal one (while it reaches about a hundred times the optimal one for $K_{bb,l}^{\\text{neigh}(j)}$). We also recall that the interface impedance $S_t^{(\\overline{j})}$ cannot be computed in parallel resolutions: expression $Q_{b,2s}^{(j)}$, on the contrary, is fully and easily tractable.\n\n\nThe expression introduced here to evaluate the interface impedance thus seems, at least in the linear case, to achieve great performance at very low cost. \n\n\\begin{remark} As said earlier, the second effect of multiscale approaches (besides modifying the Robin condition) lies in the instantaneous propagation of the right-hand side. In our approach, the absence of a coarse problem is somehow compensated by the presence of tangent interface systems (solved with a state-of-the-art multi-scale BDD method). As an example, we initialized our linear FETI-2LM solver with the fields resulting from one BDD iteration. For the multiperforated beam split into 15 subdomains, the number of FETI-2LM iterations goes from 113 to 89, which is significant (for fewer subdomains, the coarse problem is too small to bring any valuable piece of information). In the spirit of \\cite{negrello2016substructured}, a tuned setting of the solvers' thresholds (synchronized with the evolution of the global residual, i.e. the precision of the global solution) could achieve a good compromise between a global spread of the information and independent computations.\n\n\\end{remark}\n\n\\subsection{Plastic analysis}\\label{sec:it_numbers}\n\nThe evaluation of the performance of expression \\eqref{eq:final_expr_qb} is continued with a plastic evolution study. The two test cases are submitted to bending loads, applied incrementally. 
The bi-material beam loading is decomposed as follows:\n\\begin{align}\nu_D & = \\left[ 0.05, \\, \\, 0.1, \\, \\, 0.15, \\, \\, 0.2, \\, \\, 0.25, \\, \\, 0.3, \\, \\, 0.35, \\, \\, 0.375, \\, \\, 0.4, \\, \\, 0.425, \\, \\, 0.45 \\right] u_{max} \\label{eq:load_incre_bimat}\\\\\nu_{max} & = 7.1 \\nonumber\n\\end{align}\nFor the multiperforated beam loading, the incremental decomposition is set to:\n\\begin{align}\nu_D & = \\left[ 0.4, \\, \\, 0.6, \\, \\, 0.8, \\, \\, 1, \\, \\, 1.15, \\, \\, 1.3, \\, \\, 1.45, \\, \\, 1.5 \\right] u_{max} \\label{eq:load_incre_multiperf} \\\\\nu_{max} & = 0.275 \\nonumber\n\\end{align}\n\\begin{remark} \nFor the sake of clarity, every other load increment of \\eqref{eq:load_incre_bimat} and \\eqref{eq:load_incre_multiperf} is represented in the forthcoming results tables. \n\\end{remark}\n\nThe substructuring of the bi-material beam involves 13 subdomains along the $x$-axis, while the multiperforated beam is decomposed into 30 subdomains with multiple points (see figures~\\ref{fig:test_case_bimat} and~\\ref{fig:test_case_multiperf}).\n\n\nNumbers of Krylov iterations, cumulated over global Newton loops and load increments, are stored for the three interface impedances $S_t^{(\\overline{j})}$, $K_{bb,l}^{\\text{neigh}(j)}$ and $Q_{b,2s}^{(j)}$ and the two test cases in tables \\ref{tab:iter_bimat} and \\ref{tab:iter_multiperf}. Indeed, the performance of the solver is in particular linked to the number of processor communications, which are directly proportional to the number of Krylov iterations. \n\nThe computation of local tangent operators, at each global iteration, is also a costly operation. The numbers of global Newton iterations, cumulated over load increments, are thus also stored for each expression of the interface impedance and the two test cases. 
Note that in these cases, the number of Krylov iterations is almost constant per linear system, the cumulated numbers of Krylov iterations are thus nearly proportional to the numbers of global Newton iterations; the latter are therefore only stored for the last load increment. \n\nA fourth approach has been added to the study, written NKS in both tables, and corresponding to the ``classical'' resolution process used in nonlinear structural mechanical problems: a global Newton algorithm, combined with a linear DD solver for the tangent systems. The main difference between the nonlinear substructuring and condensation method and this classical technique resides in the nonlinear\/linear algorithms used for local resolutions. The resulting comparisons with approaches $S_t^{(\\overline{j})}$, $K_{bb,l}^{\\text{neigh}(j)}$ and $Q_{b,2s}^{(j)}$ hence represent the gains that can be achieved with the mixed nonlinear substructuring and condensation method, in the more general framework of nonlinear solvers. \\medskip\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{minipage}{\\linewidth}\n\\begin{center}\n\\begin{tabular}{|c|cccccc|c|}\n\\hline\n\\multicolumn{7}{|c|}{Krylov} & Global Newton \\\\\n\\hline\nload inc. & 0.05 & 0.15 & 0.25 & 0.35 & 0.4 & 0.45 & 0.45\\\\\n\\hline\n$S_t^{(\\overline{j})}$ & 37 & 229 & x & & & & x \\\\\n$K_{bb,l}^{\\text{neigh}(j)}$ & 36 & 259 & 598 & 978 & 1357 & 1772 & 47 \\\\\n$Q_{b,2s}^{(j)}$ & 37 & 266 & 580 & 891 & 1204 & 1514 & 39 \\\\\n\\hline\n\\hline\nNKS & 74 & 296 & 633 & 970 & 1344 & 1795 & 48 \\\\\n\\hline\n\\multicolumn{8}{c}{ } \\\\\n\\hline\n\\multicolumn{8}{|c|}{Gains (\\%) } \\\\\n\\hline\n$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\\text{neigh}(j)}$ & -3 & -3 & 3 & 9 & 11 & 15 & 17 \\\\\n\\hline\n\\hline\n$Q_{b,2s}^{(j)}$ vs. 
NKS & 50 & 10 & 8 & 8 & 10 & 16 & 19 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\end{center}\n\\caption{Bi-material beam: Krylov cumulated iterations over load increments, global Newton cumulated iterations}\n\\label{tab:iter_bimat}\n\\end{table}\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{minipage}{\\linewidth}\n\\begin{center}\n\\begin{tabular}{|c|cccc|c|}\n\\hline\n\\multicolumn{5}{|c|}{Krylov} & Global Newton\\\\\n\\hline\nload inc. & 0.6 & 1 & 1.3 & 1.5 & 1.5\\\\\n\\hline\n$S_t^{(\\overline{j})}$ & 48 & 172 & 322 & 481 & 30 \\\\\n$K_{bb,l}^{\\text{neigh}(j)}$ & 61 & 212 & 373 & 548 & 35 \\\\\n$Q_{b,2s}^{(j)}$ & 48 & 170 & 300 & 438 & 28 \\\\\n\\hline\n\\hline\nNKS & 73 & 222 & 385 & 561 & 36 \\\\\n\\hline\n\\multicolumn{6}{c}{ } \\\\\n\\hline\n\\multicolumn{6}{|c|}{Gains (\\%)} \\\\\n\\hline\n$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\\text{neigh}(j)}$ & 21 & 20 & 20 & 20 & 20 \\\\\n$Q_{b,2s}^{(j)}$ vs. $S_t^{(\\overline{j})}$ & 0 & 1 & 7 & 9 & 7 \\\\\n\\hline\n\\hline\n$Q_{b,2s}^{(j)}$ vs. NKS & 34 & 23 & 22 & 22 & 22 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\end{center}\n\\caption{Multiperforated beam: Krylov cumulated iterations over load increments, global Newton cumulated iterations}\n\\label{tab:iter_multiperf}\n\\end{table}\n\n\nA first preliminary observation compares results for interface impedance $S_t^{(\\overline{j})}$ in the linear and the nonlinear cases: our initial guess -- that $S_t^{(\\overline{j})}$ is the best approximation of the interface impedance we could define analytically -- proved mistaken in the nonlinear formulation. For the bi-material beam, for instance, the resolution ended up with a divergence in the local Newton solvers, caused by spurious high levels of plasticity inside subdomains (artifacts of the resolution) -- this may be due to an excessively soft interface impedance, which lets the material deform more than necessary. 
\n\nSecondly, although our first guess was apparently misguided, the additive expression we derived from it seems to behave very satisfactorily: the best performance is now achieved -- in the nonlinear process -- with the new expression of interface impedance $Q_{b,2s}^{(j)}$. Gains in terms of Krylov cumulated iterations, compared to the classical interface impedance $K_{bb,l}^{\\text{neigh}(j)}$, vary from 15\\% to 20\\% at the end of the resolution: a benefit which should represent a non-negligible decrease in CPU time for large structure problems (where each communication operation can be highly time-consuming). Compared to the interface impedance $S_t^{(\\overline{j})}$ -- which is not computationally affordable in practice -- only the multiperforated beam can be effectively studied (convergence was not reached for the bi-material beam): gains for approach $Q_{b,2s}^{(j)}$ reach up to 9\\% at the end of the resolution, in terms of cumulated numbers of Krylov iterations. \n\n\\begin{remark}\nThe bi-material beam was meshed with 25~789 degrees of freedom, and its substructuring into 13 subdomains involved 984 interface degrees of freedom. The multiperforated beam was meshed with 30~515 degrees of freedom, and its substructuring into 30 subdomains involved 1641 interface degrees of freedom. Despite the relative smallness of these test cases, we expect them to be representative of computations on larger structures. Unfortunately our Octave-based code did not allow meaningful time measurements and large-scale computations. Moreover, limiting communication as we try to do would be even more beneficial on computations involving many processors. The number of Krylov iterations seems to be the fairest and most reliable performance measurement.\n\\end{remark}\n\nComparison with the classical method shows similar results for both test cases: at the end of the resolution, gains vary from 16 to 22\\% for Krylov cumulated iterations, and from 19 to 22\\% for global Newton cumulated iterations. 
This gain corresponds to the overall performance of the nonlinear substructuring and condensation method that can be achieved with mixed approach, compared to classical procedures. \n\n\n\\begin{remark}\nThe rather limited performance of mixed nonlinear substructuring and condensation method with classical interface impedance $K_{bb,l}^{\\text{neigh}(j)}$, compared to the classical resolution method, can be noticed in the above two examples. This lack of efficiency can probably be imputed to the difficulty of giving full account of long range phenomena with a short-scale interface impedance, whereas they prevail in the case of local heterogeneity (bi-material beam) and slenderness of plate structures (multiperforated beam).\n\\end{remark}\n\n\\subsection{Coupling with SRKS-method}\n\n\nAn augmentation strategy of Krylov subspaces, at each global nonlinear iteration, is possible by extracting Ritz vectors and values at the end of each Krylov solving and re-using them to construct an augmentation basis for the following Krylov iterations. The so-called TRKS method \\cite{gosselet2013total} reuses all of the produced Ritz vectors, while SRKS method \\cite{gosselet2013total} consists in selecting the Ritz values which are good enough approximations of tangent operator eigenvalues, and the corresponding Ritz vectors. SRKS method was implemented and its coupling with nonlinear substructuring and condensation method was studied for both test cases defined at section \\ref{sec:two_test_cases}. \n\nResults are given in tables \\ref{tab:iter_bimat_SRKS} and \\ref{tab:iter_multiperf_SRKS}. \n\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|cccccc:c|}\n\\hline\n\\multicolumn{8}{|c|}{Krylov}\\\\\n\\hline\n\\multicolumn{7}{|c:}{ with SRKS } & wo SRKS \\\\\n\\hline\nload inc. 
& 0.05 & 0.15 & 0.25 & 0.35 & 0.4 & 0.45 & 0.45 \\\\\n\\hline\n$S_t^{(\\overline{j})}$ & 37 & 97 & x & & & & x \\\\\n$K_{bb,l}^{\\text{neigh}(j)}$ & 36 & 113 & 218 & 339 & 452 & 578 & 1772 \\\\ \n$Q_{b,2s}^{(j)}$ & 37 & 98 & 197 & 304 & 398 & 492 & 1514 \\\\\n\\hline\n\\hline\nNKS & 53 & 130 & 245 & 370 & 492 & 648 & 1795 \\\\\n\\hline\n\\multicolumn{8}{c}{ } \\\\\n\\hline\n\\multicolumn{8}{|c|}{Gains (\\%)} \\\\\n\\hline\n$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\\text{neigh}(j)}$ & -3 & 13 & 10 & 10 & 12 & 15 & 15 \\\\\n\\hline\n\\hline\n$Q_{b,2s}^{(j)}$ vs. NKS & 30 & 25 & 20 & 18 & 19 & 24 & 16 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Bi-material beam, coupling with SRKS: Krylov cumulated iterations over load increments}\n\\label{tab:iter_bimat_SRKS}\n\\end{table}\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|cccc:c|}\n\\hline\n\\multicolumn{6}{|c|}{Krylov} \\\\\n\\hline\n\\multicolumn{5}{|c:}{with SRKS} & wo SRKS \\\\\n\\hline\nload inc. & 0.6 & 1 & 1.3 & 1.5 & 1.5 \\\\\n\\hline\n$S_t^{(\\overline{j})}$ & 48 & 164 & 304 & 445 & 481 \\\\\n$K_{bb,l}^{\\text{neigh}(j)}$ & 61 & 201 & 351 & 506 & 548 \\\\\n$Q_{b,2s}^{(j)}$ & 47 & 159 & 280 & 410 & 438 \\\\\n\\hline\n\\hline\nNKS & 72 & 212 & 362 & 517 & 561 \\\\\n\\hline\n\\multicolumn{6}{c}{ } \\\\\n\\hline\n\\multicolumn{6}{|c|}{Gains (\\%) } \\\\\n\\hline\n$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\\text{neigh}(j)}$ & 20 & 20 & 20 & 20 & 20 \\\\\n$Q_{b,2s}^{(j)}$ vs. $S_t^{(\\overline{j})}$ & 2 & 3 & 8 & 8 & 9 \\\\\n\\hline\n\\hline\n$Q_{b,2s}^{(j)}$ vs. NKS & 35 & 25 & 23 & 21 & 22 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Multiperforated beam, coupling with SRKS: Krylov cumulated iterations over load increments}\n\\label{tab:iter_multiperf_SRKS}\n\\end{table}\n\n\n\n\n\nAs expected, SRKS leads to a global decrease of the number of Krylov iterations, observable by comparing the columns ``with'' and ``without'' SRKS of the results tables. 
For the bi-material beam, Krylov iterations are reduced on average by 67\\% at the last load increment; for the multiperforated beam the average reduction is only close to 8\\% (a small number of Krylov iterations implies a small number of post-processed Ritz vectors: this could partly explain the less impressive efficiency of the SRKS method on this test case). \n\nConcerning the global Newton solver, the cumulated numbers of iterations remained constant with and without SRKS, as expected; they are thus not presented again in this section. \n\nTables \\ref{tab:iter_bimat_SRKS} and \\ref{tab:iter_multiperf_SRKS} confirm the observations of the previous section. Even if the cumulated numbers of Krylov iterations are decreased thanks to SRKS, the overall gains generated by the new expression $Q_{b,2s}^{(j)}$ remain rather constant, and are even better for the bi-material beam (indeed, the classical method NKS suffered from a slight degradation of its overall performance, and the gain of impedance $Q_{b,2s}^{(j)}$ compared to NKS then reaches 24\\% at the end of the resolution, in terms of Krylov cumulated iterations). \n\n\\section{Conclusion}\n\nA new approximation of the interface impedance has been developed, in the context of nonlinear substructuring and condensation methods with a mixed approach. The expression of the interface impedance introduced here couples both short- and long-range interaction terms.\n\nThe procedure for building such a parameter consists in evaluating, for a given subdomain, the Schur tangent operator of the remainder of the structure (i.e. the optimal value in a linear context), which was originally the best analytic expression we could produce to approximate the optimal interface impedance in the nonlinear context. 
This evaluation involves a short-range term, essentially consisting in the stiffness of the considered subdomain's neighbors, and a long-range low-rank term, composed of the projection of the Schur tangent operator onto the space spanned by rigid body modes, thereby capturing long-range interactions with the faraway structure. \n\nThe performance of a FETI-2LM solver was studied on a linear case, where the Schur tangent operator of the remainder is exactly the optimal value for the interface impedance -- despite its intractability in practice in parallel resolution processes. Although, as expected, the new additive expression of the impedance did not produce results as good as this optimal value, it achieved quite impressive gains, in particular compared to the classical choice made in this framework -- i.e. the stiffness assembled over the neighbors of a subdomain. \n\nThe performance of the mixed nonlinear substructuring and condensation method was also studied, on a plasticity case. Not only is the exact computation of the Schur tangent operator unaffordable in the framework of parallel distributed computations -- unlike the new expression we build, which was chosen to be inexpensively computable in parallel -- but it was also found to achieve poorer results than this new expression. This suggests that the additive expression of the interface impedance introduced here actually improves the accuracy of the representation of a substructure's environment. 
\n\nFinally, a study of the coupling of the resolution process with a selective reuse procedure of Krylov solver Ritz vectors (SRKS) suggests that the performance of this new expression is maintained while the numbers of Krylov iterations are decreased.\nAll these considerations are rather promising for implementations at larger scales.\n\n\n\\bibliographystyle{ieeetr}\n\n\n\\section{Introduction}\n \nThe BESIII\\xspace experiment is located at the Institute of High Energy Physics in Beijing. Symmetric \\ensuremath{e^+e^-}\\xspace collisions from the Beijing Electron-Positron Collider (BEPCII) in an energy range between \\SI{2.0}{\\si{\\giga\\electronvolt}\\xspace} and \\SI{4.6}{\\si{\\giga\\electronvolt}\\xspace} are analyzed. The maximum luminosity of BEPCII of \\SI{1e33}{\\Lumi} at $\\sqrt{s}=$\\SI{3.773}{\\si{\\giga\\electronvolt}\\xspace} was surpassed in April 2016. \n\nThe detector measures charged track momenta with a relative precision of \\SI{0.5}{\\percent} (@\\SI{1.0}{\\si{\\giga\\electronvolt}\\xspace\/\\ensuremath{c}\\xspace}) using a multi-wire drift chamber in a \\SI{1}{\\tesla} magnetic field. Electromagnetic showers are measured in a caesium iodide calorimeter with a relative precision of \\SI{2.5}{\\percent} (@\\SI{1.0}{\\si{\\giga\\electronvolt}\\xspace}), and good particle identification is achieved by combining information from energy loss in the drift chamber, from the time-of-flight system and from the calorimeter. Muons can be identified using 9 layers of resistive plate chambers integrated in the magnet return yoke. Details are provided elsewhere \\cite{Ablikim:2009aa}.\nBESIII\\xspace has collected large data samples in the tau-charm region. Samples that are interesting for the study of charmed hadrons are usually taken at a center-of-mass energy close to a threshold. 
The samples of interest for the analyses described in the following were recorded at the \\ensuremath{\\Dz {\\kern -0.16em \\Dzb}}\\xspace\/\\ensuremath{\\Dp {\\kern -0.16em \\Dm}}\\xspace threshold $(\\sqrt{s}=\\SI{3.773}{\\si{\\giga\\electronvolt}\\xspace})$ and at the \\ensuremath{\\Dsp{\\kern -0.16em \\Dsm}}\\xspace threshold $(\\sqrt{s}=\\SI{4.009}{\\si{\\giga\\electronvolt}\\xspace})$. Integrated luminosities of \\SI{2.81}{\\invfb} and \\SI{0.482}{\\invfb} were recorded, respectively.\n\n\\begin{figure}[tbp]\n \\centering\n \\begin{tikzpicture}\n\t \n\t \\draw[draw=none, use as bounding box](0,1.5) rectangle (10.2cm,6cm);\n\t \\begin{scope}[scale=0.5,color=black!70!white]\n\t\t \\coordinate (origin) at (9,9);\n\t\t \\draw[->] (origin) -- +(0:1) node[above] (n1) {z};\n\t\t \\draw[->] (origin) -- +(90:1) node[left] (n2) {y};\n\t\t\n\t \\end{scope}\n\t \\begin{scope}[line width=2pt]\n\t\t \\node[draw,black,rectangle,rounded corners=3pt,minimum size=3pt] (pv) at (5,3) {\\ensuremath{\\psi(3770)}\\xspace};\n\t\t \\draw[blue,->] (1,3) node[below] {\\ensuremath{e^+}\\xspace} -- (pv);\n\t\t \\draw[blue,->] (9,3) node[above] {\\en} -- (pv);\n\t\t\n\t\t \\node[draw,red,circle,minimum size=3pt, label=below:$\\ensuremath{\\kern 0.2em\\overline{\\kern -0.2em D}\\rule{0pt}{1.5ex}^0}\\xspace_{tag}$] (Dbarvtx) at (6.5,2.5) {};\n\t\t \\draw[black,dashed,-] (pv) -- (Dbarvtx);\n\t\t \\draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(5:2cm); \n\t\t \\draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(-5:2cm) node[right] {hadrons}; \n\t\t \\draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(-15:2cm); \n\t\t \\draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(-25:2cm); \n\t\t\n\t\t \\node[draw,red,circle,minimum size=3pt, label=above right:$\\ensuremath{D^0}\\xspace$] (Dvtx) at (3.5,3.5) {};\n\t\t \\node[draw,black!60!white,fill,circle,minimum size=0pt] (KSvtx) at ($ (Dvtx) + (182:1.5) $) {};\n\t\t \\draw[black!60!white,dashed,-] (Dvtx) -- (KSvtx);\n\t\t 
\\draw[black,dashed,-] (pv) -- (Dvtx);\n\t\t \\draw[black!60!white] (KSvtx) -- +(190:1) node[left] {$l^+$};\n\t\t \\draw[black!60!white,dashed] (KSvtx) -- +(170:1) node[left] {$\\nu_l$};\n\t\t \\draw[black!60!white] (Dvtx) -- +(150:2) node[left] {};\n\t\t \\draw[black!60!white,-] (Dvtx) -- +(120:2) node[left] {hadrons};\n\t \\end{scope}\n \\end{tikzpicture}\n \\caption{\\ensuremath{\\psi(3770)}\\xspace decay topology in the \\ensuremath{\\psi(3770)}\\xspace rest frame. An undetected particle track can be reconstructed using the constrained kinematics of the decay. Typical tag modes for \\ensuremath{C\\!P}\\xspace and flavour eigenstates are listed.}\n \\label{fig:psiprprdecay}\n\\end{figure}\nThe at-threshold decay topology at a center-of-mass energy of \\SI{3.773}{\\si{\\giga\\electronvolt}\\xspace} is illustrated in \\cref{fig:psiprprdecay}. A pair of mesons is produced, and it is possible to infer properties of the second decay from the decay of one meson (the so-called tag meson). For instance, in the case of neutral \\ensuremath{D}\\xspace decays the flavour or the \\ensuremath{C\\!P}\\xspace quantum numbers of the signal decay can be measured, even if the signal final state does not provide this information. In the case of charged \\ensuremath{D}\\xspace decays the reconstruction of both decays is used to reduce the background; furthermore, if undetected particles are involved in the signal decay, their four-momenta can be reconstructed. In particular the study of leptonic and semi-leptonic decays benefits from this. 
The reconstruction of both decays in each event is referred to as the double tag technique.\n\nIn the following we present the measurements of the \\ensuremath{D^+_s}\\xspace decay constant (\\cref{sec:dsmunu}), the first evidence for the decay $\\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau\\ensuremath{\\nu_\\tau}\\xspace$ (\\cref{sec:dptaunu}) and the analysis of the decay $\\ensuremath{D^0}\\xspace\\ensuremath{\\rightarrow}\\xspace\\KSL\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)$ (\\cref{sec:dzkspizpiz}).\n\\section{Pure leptonic $\\ensuremath{D_{s(d)}}\\xspace^+$ decays}\n\\label{sec:pureLeptonicD}\nThe pure leptonic decay of charged \\ensuremath{D_{s(d)}}\\xspace mesons proceeds via the annihilation of \\ensuremath{c}\\xspace and \\ensuremath{\\overline s}\\xspace(\\ensuremath{\\overline d}\\xspace) to a virtual \\ensuremath{W^\\pm}\\xspace boson and its decay to $l^+\\nu_l$. The decay rate can be parametrized as:\n\\begin{align}\n\t\\Gamma(\\ensuremath{D_{s(d)}}\\xspace\\ensuremath{\\rightarrow}\\xspace l^+\\nu_l) = \\frac{G_F^2}{8\\pi} f_{\\ensuremath{D_{s(d)}}\\xspace}^2 m_l^2 m_{\\ensuremath{D_{s(d)}}\\xspace} \\left( 1-\\frac{m_l^2}{m^2_{\\ensuremath{D_{s(d)}}\\xspace}}\\right)^2 \\abs{V_{cs(d)}}^2.\n \\label{eqn:ds:decayRate}\n\\end{align}\nHere $G_F$ denotes the Fermi constant, $m_l$ the lepton mass, $\\abs{V_{cs(d)}}^2$ the corresponding CKM matrix element, $m_{\\ensuremath{D_{s(d)}}\\xspace}$ the \\ensuremath{D_{s(d)}}\\xspace mass and $f_{\\ensuremath{D_{s(d)}}\\xspace}$ the decay constant. The decay constant parametrizes the \\ensuremath{\\mathrm{QCD}}\\xspace effects on the decay. From the measurement of the decay width $\\Gamma(\\ensuremath{D_{s(d)}}\\xspace\\ensuremath{\\rightarrow}\\xspace l^+\\nu_l)$ the decay constant $f_{\\ensuremath{D_{s(d)}}\\xspace}$ can be extracted. \n\nThe branching fraction can be measured via the previously described double tag technique. 
In each event the tag decay is reconstructed via numerous decay channels. The number of events that contain a tag candidate is denoted by $N_{\\text{tag}}$. Among those events the signal decay is reconstructed and the number of events that contain a tag decay and a signal decay is denoted by $N_{\\text{sig,tag}}$. The branching fraction is given by:\n\\begin{align}\n {\\ensuremath{\\mathcal B}}\\xspace(\\ensuremath{D_{s(d)}}\\xspace\\ensuremath{\\rightarrow}\\xspace l^+\\nu_l) = \\frac{N_{\\text{sig,tag}}}{\\epsilon_{\\text{sig,tag}}}\\times\\frac{\\epsilon_{\\text{tag}}}{N_{\\text{tag}}}.\n \\label{eqn:ds:bf}\n\\end{align}\nThe efficiencies for reconstruction and selection $\\epsilon_i$ are obtained from simulation.\nSince the final state contains a neutrino which is not detected the signal yield is determined using the missing mass:\n\\begin{align}\n MM^2 = \\frac{\\left(E_{\\text{beam}}-E_\\mu\\right)^2}{c^4} - \\frac{\\left(-\\vec{p}_{\\ensuremath{D_{s(d)}}\\xspace}-\\vec{p}_{\\ensuremath{\\mu^+}\\xspace}\\right)^2}{\\ensuremath{c}\\xspace^2}.\n\\end{align}\nThe beam energy is denoted by $E_{\\text{beam}}$ and the reconstructed momentum of the tag \\ensuremath{D_{s(d)}}\\xspace decay candidate by $\\vec{p}_{\\ensuremath{D_{s(d)}}\\xspace}$.\n\n\\pagebreak\n\\subsection{$\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\mu^+\\ensuremath{\\nu_\\mu}\\xspace$ and $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau^+\\ensuremath{\\nu_\\tau}\\xspace$}\n\\label{sec:dsmunu}\nThe distribution of $MM^2$ of $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\mu^+\\ensuremath{\\nu_\\mu}\\xspace$ and $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau^+\\ensuremath{\\nu_\\tau}\\xspace$ is shown in \\cref{fig:ds:missingMass}. The \\ensuremath{\\tau^+}\\xspace is reconstructed via its decay to $\\ensuremath{\\pi^+}\\xspace\\ensuremath{\\nub_\\tau}\\xspace$. 
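The missing-mass variable defined above can be checked with a small numerical sketch (natural units, c = 1; the PDG-like masses and the beam energy at the Ds+Ds- threshold run are assumed inputs, not values taken from the fit described here). For an ideal two-body Ds -> mu nu signal event the missing mass squared equals the neutrino mass squared, i.e. zero:

```python
import math

# Assumed inputs (GeV, natural units c = 1): PDG-like masses,
# beam energy at the Ds+Ds- threshold run (sqrt(s) = 4.009 GeV)
E_beam = 4.009 / 2        # each Ds carries the beam energy
m_Ds   = 1.9683
m_mu   = 0.1056584

# Ds momentum in the lab from E^2 = p^2 + m^2
p_Ds = math.sqrt(E_beam**2 - m_Ds**2)

# Two-body decay: muon energy/momentum in the Ds rest frame
E_mu_star = (m_Ds**2 + m_mu**2) / (2 * m_Ds)
p_mu_star = (m_Ds**2 - m_mu**2) / (2 * m_Ds)

# Boost the muon into the lab (emitted along the Ds flight direction)
beta, gamma = p_Ds / E_beam, E_beam / m_Ds
E_mu = gamma * (E_mu_star + beta * p_mu_star)
p_mu = gamma * (p_mu_star + beta * E_mu_star)

# Missing mass squared: the tag Ds flies back-to-back, so -p_tag = +p_Ds
MM2 = (E_beam - E_mu)**2 - (p_Ds - p_mu)**2
```

Up to floating-point rounding MM2 vanishes, which is why signal events accumulate in a peak at zero in the $MM^2$ distribution.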
The yield is determined via a simultaneous fit to signal and sideband regions, where the sideband regions are defined in the \\ensuremath{\\Dbar^+_s}\\xspace mass spectrum of the tag candidate. The $\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace$ signal is shown as a red dotted curve and the $\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace$ signal as a black dot-dashed curve. Background from misreconstructed tag \\ensuremath{D^+_s}\\xspace decays and background from non-\\ensuremath{\\Dsp{\\kern -0.16em \\Dsm}}\\xspace events are shown as green short dashed and violet long dashed curves, respectively.\nWithin a sample of \\num{15127(321)} events which contain a tag candidate we find \\num{69.3(93)} $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace$ decays and \\num{32.5(43)} $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace$ decays. In the fitting procedure the ratio of $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace$ to $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace$ was constrained to its Standard Model prediction. 
The yields are corrected for radiative effects and we obtain:\n\\begin{align}\n {\\ensuremath{\\mathcal B}}\\xspace(\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace) &= \\SIerrs{0.495}{0.067}{0.026}{\\percent} \\nonumber\\\\\n {\\ensuremath{\\mathcal B}}\\xspace(\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace) &= \\SIerrs{4.83}{0.65}{0.26}{\\percent}.\n\\end{align}\n\\begin{wrapfigure}[21]{r}{0.35\\textwidth}\n \\centering\n \\includegraphics[width=0.35\\textwidth]{Dslnu}\n \\caption{$MM^2$ distribution of $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\mu^+\\ensuremath{\\nu_\\mu}\\xspace$ and $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau^+\\ensuremath{\\nu_\\tau}\\xspace$. Signal (a) and sideband (b) regions are shown.}\n \\label{fig:ds:missingMass}\n\\end{wrapfigure}\nThe branching fractions {\\ensuremath{\\mathcal B}}\\xspace($\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace$) and {\\ensuremath{\\mathcal B}}\\xspace($\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace$) are consistent with the world average within \\num{1} and \\num{1.5} standard deviations, respectively.\nFurthermore, the branching fractions are consistently determined using a fitting method which does not rely on the ratio of $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace$ to $\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace$. 
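Both the Standard Model ratio used to constrain the fit and the decay-constant extraction that follows from the rate formula can be checked numerically. A minimal sketch (PDG-like masses, lifetime and CKM element are assumed inputs; radiative corrections are ignored):

```python
import math

# Assumed inputs (GeV and seconds), PDG-like values
G_F    = 1.1663787e-5    # Fermi constant [GeV^-2]
hbar   = 6.58212e-25     # [GeV s]
m_mu, m_tau, m_Ds = 0.1056584, 1.77686, 1.9683
tau_Ds = 5.00e-13        # Ds+ lifetime [s]
V_cs   = 0.97425
B_munu = 0.00495         # measured B(Ds+ -> mu+ nu)

def helicity_factor(m_l, m_D):
    # the m_l^2 (1 - m_l^2/m_D^2)^2 dependence of the leptonic rate
    return m_l**2 * (1.0 - m_l**2 / m_D**2)**2

# Standard Model ratio Gamma(tau nu) / Gamma(mu nu): all other factors cancel
R_SM = helicity_factor(m_tau, m_Ds) / helicity_factor(m_mu, m_Ds)

# Decay constant from the measured branching fraction: Gamma = B * hbar / tau
Gamma = B_munu * hbar / tau_Ds
f_Ds = math.sqrt(8.0 * math.pi * Gamma /
                 (G_F**2 * helicity_factor(m_mu, m_Ds) * m_Ds * V_cs**2))
```

This gives a ratio near 9.74, compatible with the measured 4.83/0.495, and a decay constant near 241 MeV, as quoted below.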
For further details we refer to \\cite{Ablikim:2016duz}.\n\nUsing ${\\ensuremath{\\mathcal B}}\\xspace(\\ensuremath{D^+_s}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace)$ the decay constant $f_{\\ensuremath{D^+_s}\\xspace}$ is determined using \\cref{eqn:ds:decayRate}:\n\\begin{align}\n f_{\\ensuremath{D^+_s}\\xspace} = \\SIerrs{241.0}{16.3}{6.5}{\\si{\\mega\\electronvolt}\\xspace}.\n\\end{align}\nThe CKM matrix element $\\abs{V_{cs}}=\\num{0.97425(22)}$ \\cite{Agashe:2014kda} and the \\ensuremath{D^+_s}\\xspace lifetime \\cite{Agashe:2014kda} are used. A good agreement with LQCD calculations is found. Results are published in \\cite{Ablikim:2016duz}.\n\n\\subsection{$\\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau^+\\ensuremath{\\nu_\\tau}\\xspace$}\n\\label{sec:dptaunu}\nThe $MM^2$ distribution of $\\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau^+\\ensuremath{\\nu_\\tau}\\xspace$ is shown in \\cref{fig:MMsqDptaunu}. The most severe background to the signal channel is $\\mu^+\\ensuremath{\\nu_\\mu}\\xspace$. To distinguish signal and background in a fitting procedure we use the difference in energy deposit of pions and muons in the electromagnetic calorimeter (EMC). We split the sample into events with an energy deposit larger and smaller than \\SI{300}{\\si{\\mega\\electronvolt}\\xspace}. 
As shown in \\cref{fig:MMsqDptaunu}(b), above \\SI{300}{\\si{\\mega\\electronvolt}\\xspace} the number of $\\mu^+\\ensuremath{\\nu_\\mu}\\xspace$ events is reduced compared to the number of $\\tau^+\\ensuremath{\\nu_\\tau}\\xspace$ events.\n\\begin{figure}[tb]\n\t\\centering\n\t\\subfloat[$E_{EMC} \\leq \\SI{300}{\\si{\\mega\\electronvolt}\\xspace}$]{\n\t\\includegraphics[width=0.45\\textwidth]{dptaunuLow}}\n\t\\subfloat[$E_{EMC} > \\SI{300}{\\si{\\mega\\electronvolt}\\xspace}$]{\n\t\\includegraphics[width=0.45\\textwidth]{dptaunuHigh}}\n\t\\caption{$MM^2$ distribution for the decay $\\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau^+\\ensuremath{\\nu_\\tau}\\xspace$. The signal is shown as a solid orange line. Background comes mainly from \\ensuremath{D^+}\\xspace decays to $\\mu^+\\ensuremath{\\nu_\\mu}\\xspace$ (solid black) and to $\\ensuremath{\\pi^+}\\xspace\\KL$ (dashed blue).}\n\t\\label{fig:MMsqDptaunu}\n\\end{figure}\n\nWe obtain a preliminary signal yield of \\num{137(27)} events. The significance of the signal is larger than \\SI{4}{\\stdDev}. The preliminary branching fraction is given by:\n\\begin{align}\n\t{\\ensuremath{\\mathcal B}}\\xspace(\\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\tau^+\\ensuremath{\\nu_\\tau}\\xspace) = \\SI{1.20(24)}{\\timesten{-3}}.\n\\end{align}\nFurthermore, we extract the ratio of $\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace$ to $\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace$ decays:\n\\begin{align}\n\tR := \\frac{\\Gamma(\\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace)}{\\Gamma(\\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace)} = \\num{3.21(64)}. 
\n\\end{align}\nThe result is consistent with the Standard Model prediction.\n\n\\section{Analysis of the decay $\\ensuremath{D^0}\\xspace\\ensuremath{\\rightarrow}\\xspace\\KSL\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)$}\n\\label{sec:dzkspizpiz}\nWe present preliminary results of the branching fraction measurement of the decays $\\ensuremath{D^0}\\xspace\\ensuremath{\\rightarrow}\\xspace\\KSL\\ensuremath{\\pi^0}\\xspace$ and $\\ensuremath{D^0}\\xspace\\ensuremath{\\rightarrow}\\xspace\\KSL\\ensuremath{\\pi^0}\\xspace\\piz$. Furthermore we determine the \\ensuremath{D^0}\\xspace mixing parameter $y_{\\ensuremath{C\\!P}\\xspace}$ using the \\ensuremath{C\\!P}\\xspace eigenstates $\\KS\\ensuremath{\\pi^0}\\xspace$ and $\\KL\\ensuremath{\\pi^0}\\xspace$. The challenge in this channel is the reconstruction of the \\KL decay: due to its long lifetime, the \\KL rarely leaves signals of its decay products in the drift chamber. We use the constrained kinematics at the \\ensuremath{\\Dz {\\kern -0.16em \\Dzb}}\\xspace threshold to predict the \\KL four-momentum and furthermore require a certain energy deposit in the electromagnetic calorimeter.\n\nThe branching fraction of a \\ensuremath{C\\!P}\\xspace eigenstate can be measured in a self-normalizing way using Cabibbo favoured (CF) tag channels. 
We define:\n\\begin{align}\n\tM^\\pm = \\frac{N_{CF,CP\\pm}}{\\epsilon_{CF,CP\\pm}}\\frac{\\epsilon_{CF}}{N_{CF}}.\n\\end{align}\nThe yields of double and single tag events are denoted by $N_{CF,CP\\pm}$ and $N_{CF}$ and the corresponding reconstruction efficiencies by $\\epsilon_{CF,CP\\pm}$ and $\\epsilon_{CF}$.\nThe branching fraction is given by:\n\\begin{align}\n\t{\\ensuremath{\\mathcal B}}\\xspace_{\\ensuremath{C\\!P}\\xspace\\pm} = \\frac{1}{1\\mp C_{f}} M^\\pm, \\qquad C_f = \\frac{M^- - M^+}{M^- + M^+}.\n\\end{align}\n\nWe use the flavour tag channels $\\ensuremath{K^-}\\xspace\\ensuremath{\\pi^+}\\xspace$, $\\ensuremath{K^-}\\xspace\\ensuremath{\\pi^+}\\xspace\\ensuremath{\\pi^-}\\xspace\\ensuremath{\\pi^+}\\xspace$ and $\\ensuremath{K^-}\\xspace\\ensuremath{\\pi^+}\\xspace\\ensuremath{\\pi^0}\\xspace$. The double tag yields and the preliminary branching fractions are listed in \\cref{tab:dzkspizpizBF}. The branching fractions of the final states $\\KSL\\ensuremath{\\pi^0}\\xspace$ and $\\KS\\ensuremath{\\pi^0}\\xspace\\piz$ are consistent with the PDG average \\cite{Agashe:2014kda}, and for $\\KS\\ensuremath{\\pi^0}\\xspace\\piz$ this is the first accurate measurement.\n\nFrom the branching fractions we can calculate the asymmetry between the \\ensuremath{C\\!P}\\xspace eigenstates:\n\\begin{align}\n\tR_{\\ensuremath{K^0}\\xspace\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)} = \\frac{{\\ensuremath{\\mathcal B}}\\xspace_{\\KS\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)}-{\\ensuremath{\\mathcal B}}\\xspace_{\\KL\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)}}{{\\ensuremath{\\mathcal B}}\\xspace_{\\KS\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)}+{\\ensuremath{\\mathcal B}}\\xspace_{\\KL\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)}}.\n\\end{align}\nThe results are also listed in \\cref{tab:dzkspizpizBF}.\n\\begin{table}[tbp]\n\t\\centering\n\t\\caption{Double tag 
yields and branching fractions of \\ensuremath{C\\!P}\\xspace eigenstates $\\KSL\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)$. Uncertainties are statistical only.}\n\t\\label{tab:dzkspizpizBF}\n\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\\hline\n\t\tChannel & \\ensuremath{C\\!P}\\xspace &$N_{CF,CP\\pm}$& ${\\ensuremath{\\mathcal B}}\\xspace_{\\ensuremath{C\\!P}\\xspace\\pm}$& R\\\\\n\t\t\\hline\n\t\t\\KS\\ensuremath{\\pi^0}\\xspace & $+$ &\\num{7141(91)}& \\num{1.230(020)} & \\multirow{2}{*}{\\num{0.1077(125)}} \\\\\n\t\t\\KL\\ensuremath{\\pi^0}\\xspace & $-$ &\\num{6678(118)}& \\num{0.991(019)} & \\\\\n\t\t\\hline\n\t\t\\KS\\ensuremath{\\pi^0}\\xspace\\piz & $+$ &\\num{2623(60)}& \\num{0.975(24)} & \\multirow{2}{*}{\\num{-0.0929(209)}} \\\\\n\t\t\\KL\\ensuremath{\\pi^0}\\xspace\\piz & $-$ &\\num{2136(69)}& \\num{1.18(04)} & \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\pagebreak\n\\subsection{Measurement of $y_{CP}$}\nUsing the final states $\\KS\\ensuremath{\\pi^0}\\xspace$ and $\\KL\\ensuremath{\\pi^0}\\xspace$ we determine the \\ensuremath{D^0}\\xspace mixing parameter $y_{\\ensuremath{C\\!P}\\xspace}$. The branching ratio of a \\ensuremath{C\\!P}\\xspace eigenstate is connected to the branching ratio of a pure flavour eigenstate via:\n\\begin{align}\n\t{\\ensuremath{\\mathcal B}}\\xspace_{\\ensuremath{C\\!P}\\xspace} \\approx {\\ensuremath{\\mathcal B}}\\xspace_{\\text{flavour}} (1\\mp y_{\\ensuremath{C\\!P}\\xspace}). \t\n\\end{align}\nThe parameter $y_{\\ensuremath{C\\!P}\\xspace}$ is then given by the asymmetry of branching ratios of \\ensuremath{C\\!P}\\xspace even and odd states to pure flavour states f:\n\\begin{align}\n\ty_{\\ensuremath{C\\!P}\\xspace} = \\frac{{\\ensuremath{\\mathcal B}}\\xspace_{-;f} - {\\ensuremath{\\mathcal B}}\\xspace_{+;f}}{{\\ensuremath{\\mathcal B}}\\xspace_{-;f} + {\\ensuremath{\\mathcal B}}\\xspace_{+;f}}.\n\\end{align}\nThe previously mentioned Cabibbo favoured final states are not pure flavour eigenstates. 
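As a quick consistency check, the asymmetry R quoted in the table can be reproduced from the listed branching fractions (rounded table values, so only approximate agreement is expected):

```python
# Branching fractions in percent, read off the table above (rounded)
B_KS_pi0 = 1.230   # CP-even final state KS pi0
B_KL_pi0 = 0.991   # CP-odd  final state KL pi0

# Asymmetry between the CP eigenstates, as defined in the text
R_K0_pi0 = (B_KS_pi0 - B_KL_pi0) / (B_KS_pi0 + B_KL_pi0)
```

The same expression applies to the KS pi0 pi0 / KL pi0 pi0 pair in the lower half of the table.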
Therefore, we use the semi-leptonic decay to $\\ensuremath{K^-}\\xspace e^+\\ensuremath{\\nu_e}\\xspace$.\nWe obtain a preliminary value of:\n\\begin{align}\n\ty_{\\ensuremath{C\\!P}\\xspace} = \\SI{0.98(243)}{\\percent}.\n\\end{align}\nThe result is preliminary and we quote the statistical uncertainty only. It is in agreement with a previous measurement of BESIII\\xspace \\cite{Ablikim:2015hih} as well as with the HFAG average \\cite{arXiv:1612.07233}.\n\n\\section{Summary}\nThe BESIII\\xspace experiment has collected large data samples at charm-related thresholds. The constrained kinematics at those energies allow the reconstruction of (semi-) leptonic decays with low background. Furthermore, the quantum entanglement of \\ensuremath{\\Dz {\\kern -0.16em \\Dzb}}\\xspace at threshold provides a unique laboratory for the analysis of \\ensuremath{C\\!P}\\xspace eigenstates. \nWe present the analysis of the leptonic decays of \\ensuremath{D^+_s}\\xspace to $\\ensuremath{\\mu^+}\\xspace\\ensuremath{\\nu_\\mu}\\xspace$ and $\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace$ with the measurement of the branching fractions and the derived \\ensuremath{D^+_s}\\xspace decay constant. Recently, BESIII\\xspace has found preliminary evidence for the decay \\ensuremath{D^+}\\xspace\\ensuremath{\\rightarrow}\\xspace\\ensuremath{\\tau^+}\\xspace\\ensuremath{\\nu_\\tau}\\xspace with a statistical significance above \\SI{4}{\\stdDev}. 
\nThe analysis of the $\\ensuremath{D^0}\\xspace\\ensuremath{\\rightarrow}\\xspace\\KSL\\ensuremath{\\pi^0}\\xspace(\\ensuremath{\\pi^0}\\xspace)$ includes the measurement of the branching fractions and using the decays to $\\KSL\\ensuremath{\\pi^0}\\xspace$ the measurement of the \\ensuremath{D^0}\\xspace mixing parameter $y_{\\ensuremath{C\\!P}\\xspace}$.\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzascm b/data_all_eng_slimpj/shuffled/split2/finalzzascm new file mode 100644 index 0000000000000000000000000000000000000000..a427072b205660462ccda25887e23d4f69b548fc --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzascm @@ -0,0 +1,5 @@ +{"text":"\\section{Sample}\n\\label{sec:sample}\n\nThe sample is composed of 70 archival {\\it Hubble Space Telescope}\n({\\it HST}) images of low-redshift QSOs. They have redshifts between $0.06 \\leq z \\leq 0.46$ and total\n(host plus nucleus) absolute magnitudes brighter than $M_V \\leq -23$.\nFurthermore, they must have been observed with the\n{\\it HST}'s Wide-Field Planetary Camera 2 (WFPC2), using broad-band\nfilters, and have images publicly available in the {\\it HST} archives\nas of 1999. This brings our sample to 70 QSOs.\nRather than restrict our study to a specific class of QSOs, we impose\nno physical criteria on the QSOs beyond those of magnitude and\nredshift. Thus we are able to study a broad range of properties and\ndraw general conclusions. The images are reduced and the physical parameters fitted \nas described by Hamilton et al.~(2002).\n\n\\section{The ``Fundamental Plane'' of QSOs}\n\\label{sec:fp}\n\nFor our Principal Components Analysis (PCA), we use a restricted sample of those QSOs for which we\nhave all of the following parameters: $M_V\\mathrm{(nuc)}$, $L_X$, \n$r_{1\/2}$, and $\\mu_e$, where $\\mu_e$ is the effective\nsurface magnitude of the galactic bulge. 
We further\nrequire that each QSO have a modeled, spheroidal bulge (the entire\ngalaxy, in the case of elliptical hosts). These qualifications\nrestrict the sample to 42 QSOs.\n\n\nWe can perform two PCAs, an optical one using $M_V\\mathrm{(nuc)}$, \n$\\log r_{1\/2}$, and $\\mu_e$ as the parameters, and an x-ray one that \nsubstitutes $\\log L_X$ for the nuclear luminosity.\nFrom the optical PCA performed on this sample of 42 objects, \nwe find that 96.1\\% of the variance can be explained with just the first two\nprincipal axes, and therefore the QSOs mostly lie in a plane within this parameter\nspace. This we consider to be a fundamental plane (FP) for QSOs. \nFor the corresponding x-ray results, the first two principal axes explain \n95.2\\% of the variance in the sample, and we find here an x-ray QSO fundamental plane.\nThe individual subsamples of QSOs (radio-loud or radio-quiet, with spiral or elliptical hosts, and all \ncombinations of these) \nare also examined in this way, and they show fundamental planes, as well.\n\n\nWe obtain the optical and x-ray formulae for the full sample's fundamental plane:\n\\begin{equation}\n\tM_V\\mathrm{(nuc)} = -77.5 + 3.14 \\mu_e - 14.2 \\log\n\tr_{1\/2}\n\t\\label{equ:oall-physical}\n\\end{equation}\n\\begin{equation}\n\t\\log L_X = 79.3 - 2.03 \\mu_e + 8.74 \\log\n\tr_{1\/2} \\mbox{ .} \n\t\\label{equ:xall-physical}\n\\end{equation}\nViews of the optical and x-ray fundamental planes, with the QSO data points superimposed, \nare displayed in Figure~\\ref{fig:fp-phys}. Note that the host properties describe the horizontal and the nuclear luminosity the vertical in these plots. Figure~\\ref{fig:fp-rms} illustrates the precision of the \nQSO fundamental plane in both forms, with the plane plotted against \nthe measured host sizes. 
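The PCA step can be sketched with standard tools. The following example uses synthetic data, not the paper's table: 42 points are generated near a plane of the quoted optical form, illustrating how the fraction of variance explained by the first two principal axes is obtained (all scales and scatter values are illustrative assumptions):

```python
import numpy as np

# Synthetic stand-in for the 42-QSO parameter table (NOT the paper's data)
rng = np.random.default_rng(0)
mu_e  = rng.normal(20.0, 1.5, size=42)            # effective surface magnitude
log_r = rng.normal(0.5, 0.3, size=42)             # log half-light radius
M_nuc = -77.5 + 3.14 * mu_e - 14.2 * log_r \
        + rng.normal(0.0, 0.5, size=42)           # scatter about the plane

X = np.column_stack([M_nuc, mu_e, log_r])
X = X - X.mean(axis=0)                            # centre each parameter

# PCA = eigendecomposition of the covariance matrix
eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]   # descending order
explained_two = eigvals[:2].sum() / eigvals.sum()
```

For data that genuinely lie near a plane, `explained_two` comes out close to 1; this is the sense in which the quoted 96.1\% identifies a fundamental plane.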
Its highest precision is found when solving for $\\log r_{1\/2}$.\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\\subsection{Possible Derivation}\nThe fundamental plane for QSOs shows a relationship between the\nnuclear and host features that goes beyond the simple (and weak) correlation of nuclear and\nhost luminosities. This behavior may be connected to other, known\nrelations between the objects.\nFor example, there is already a well-studied\nfundamental plane for normal, elliptical galaxies (Djorgovski \\&\nDavis~1987; Dressler et al.~1987) that incorporates galaxy size, $r_{1\/2}$, central velocity dispersion,\n$\\sigma_c$, and effective surface magnitude,\n$\\mu_e$.\n\nLet us take a $V$-band measurement of the normal galaxy fundamental plane \n(Scodeggio et al.~1998), \n$\\log r_{1\/2} = 1.35 \\log \\sigma_c + 0.35 \\mu_e + \\mathit{Constant}$. \nThe ratio of the coefficients of $\\log r_{1\/2}$ to $\\mu_e$ differs by about 37\\% between the QSO\noptical fundamental plane and the normal galaxy FP, and the QSO x-ray FP\nshows a 34\\% difference. Still, there is a formal similarity between\nthe QSO and normal fundamental planes, which might point to a link between \nthe host galaxy's central velocity\ndispersion and the nuclear luminosity of the QSO. This\ncould derive from the fueling mechanism of QSOs, if the movement of gas\nto the center of the galaxy and the black hole is related to the\nvelocity dispersion. \n\n\nIt is therefore tempting to try to derive the\nQSO fundamental plane directly from the elliptical galaxy fundamental\nplane, but we find two problems with this approach, both arising from the relation of black hole mass to nuclear luminosity. 
Using the\nvelocity dispersion to black hole mass relation of Merritt \\& Ferrarese~(2001),\n$ \\mathcal{M}_{BH}=1.3 \\times 10^8 \n\t\\left( \\sigma_c \/ 200 \\mbox{ km s}^{-1} \\right)^{4.72} \n\t\\mathcal{M}_{\\odot} $\nwe can put the elliptical galaxy fundamental plane in terms of black hole mass. \nUsing the observed (but weak) correlation between black hole mass and \nnuclear luminosity in our sample, $M_V\\mathrm{(nuc)} = -1.98 \\log \\left( \\mathcal{M}_{\\mathrm{BH}} \/ \\mathcal{M}_{\\odot} \\right) -6.90$ and \n$\\log L_X = 2.77 \\log \\left( \\mathcal{M}_{\\mathrm{BH}} \/ \\mathcal{M}_{\\odot} \\right) + 19.8$,\nwe obtain\n\\begin{equation}\n\tM_V\\mathrm{(nuc)} = \\mathit{Constant} + 2.5 \\mu_e - 7.14 \\log r_{1\/2}\n\\end{equation}\n\\begin{equation}\n\t\\log L_X = \\mathit{Constant} - 3.5 \\mu_e + 10 \\log r_{1\/2} \\mbox{ .}\n\\end{equation}\nThese are our attempts to derive the QSO optical fundamental plane from the normal galaxy FP.\nThe optical form differs from the actual QSO FP, equation~(\\ref{equ:oall-physical}), \ncompletely outside the propagated errors, but \nthe x-ray form is within the errors of equation~(\\ref{equ:xall-physical}).\n\n\nBut any derivation of the QSO fundamental plane has an additional problem. \nAs mentioned before, the QSO fundamental plane for the full sample is \ncomposed of individual FPs of the several subsamples. Some subsample FPs \nactually slope in the opposite direction from the overall QSO FP.\nFor example, in the optical form, the FP of radio-quiets in elliptical hosts \nslopes in the opposite direction. \nAnd in the x-ray form, the radio-quiet subsamples slope oppositely \nfrom the overall sample.\nYet these differences cannot be accounted for by different correlations of \nnuclear luminosity with black hole mass. \nThe poor correlation of black hole mass with nuclear luminosity lies in contrast with the \nrelatively thin QSO fundamental plane. 
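The substitution outlined above can be reproduced step by step. A minimal numerical sketch (coefficients rounded as in the cited relations, so small departures from the quoted planes are expected):

```python
# Normal-galaxy FP (Scodeggio et al.):
#   log r = 1.35 log sigma + 0.35 mu_e + const
# solved for log sigma (constants dropped throughout):
a_r, a_mu = 1.0 / 1.35, -0.35 / 1.35   # log sigma = a_r log r + a_mu mu_e

# M_BH ~ sigma^4.72 (Merritt & Ferrarese) => log M_BH = 4.72 log sigma + const
# M_V(nuc) = -1.98 log M_BH + const ;  log L_X = 2.77 log M_BH + const
coef_MV_logr = -1.98 * 4.72 * a_r    # coefficient of log r in M_V(nuc)
coef_MV_mu   = -1.98 * 4.72 * a_mu   # coefficient of mu_e  in M_V(nuc)
coef_LX_logr =  2.77 * 4.72 * a_r    # coefficient of log r in log L_X
coef_LX_mu   =  2.77 * 4.72 * a_mu   # coefficient of mu_e  in log L_X
```

This yields roughly 2.42 mu_e - 6.92 log r for the optical form and -3.39 mu_e + 9.68 log r for the x-ray form, consistent with the quoted 2.5 mu_e - 7.14 log r and -3.5 mu_e + 10 log r once rounding of the input coefficients is taken into account.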
\nFurthermore, Woo \\& Urry~(2002) suggest that the apparent correlations \nbetween black hole mass and nuclear luminosity are merely artifacts of \nsample selection.\nRegardless of how we take this interpretation, the relationship between \nblack hole mass and nuclear luminosity remains the missing link in any \nderivation of the QSO fundamental plane.\n\n\\subsection{Arrangement of Subsample Planes}\n\nInsight into the origins of this new fundamental plane relationship might come from\nthe comparison of the QSO subsample FPs. The thickness of the overall QSO\nfundamental plane appears partly to be the result of the superposition of\nthe subsamples' planes.\nBecause the QSO FP mathematically describes a link between the host and the \nnucleus, it seems reasonable to suppose that the slope of the plane depends on the \nphysical nature of this link. The fueling mechanism at a QSO's core would seem to be the most directly related to this, depending on how we define ``fueling mechanism.'' \nWe could encompass within this term the details of the structure and dynamics of the accretion disk, as well as the question of whether the QSO is efficiently or inefficiently fueled.\n\nIt is intriguing that as we change from one class to another, the fundamental plane \nessentially pivots about an axis, so the differences are mostly reduced to a single dimension, \nthe slope (or gradient) relative to the $\\mu_e$--$\\log r_{1\/2}$ plane.\nThe gradient directions, projected onto the $\\mu_e$--$\\log r_{1\/2}$ plane, are almost all \neither aligned with that of the full sample or anti-aligned (for those with opposite slope). \nThe optical subsample gradient directions are never more \nthan 3.8 degrees away from that of the full sample, and in the x-ray form, they never exceed \na 6.4 degree deviation.\n\nRadio-loudness has the strongest effect on the slopes. In the x-ray form, the subsample FPs \nare almost evenly divided between those aligned with the full sample and those anti-aligned. 
\nIn the optical form, only radio-quiets in elliptical hosts tilt opposite to the full sample.\nThis effect is interesting because we are seeing a stark difference between the radio-loud and \nradio-quiet nuclei in the hosts of the same morphology.\n\nIt would be interesting to find if the different QSO FP orientations described above come \nabout from different fueling mechanisms that might be found in the various subsamples.\nWe see, for instance, that radio-quiet and radio-loud QSOs are characterized by very \ndifferent slopes in their x-ray FPs, but the understanding of what makes these QSO types differ \nis still too limited to speculate further here. In our ongoing research, we are expanding the \nfundamental plane study to other types of AGN. \nWe can then compare their FP orientations with those of the different QSO\nsubsamples, which may teach us more about the physics underlying the QSO fundamental plane.\n\n\\section{Future Work}\n\nWe should ask if lower luminosity classes of AGN (such as Seyferts or LLAGN) also have \nfundamental planes of this sort. If they do, how do they compare with that of QSOs? We can \nimagine four possibilities:\n\\begin{enumerate}\n\\item{They share the same fundamental plane as QSOs. This would indicate that AGN power \nscales with the host properties, even across AGN types, and would support some form of \nunification.}\n\n\\item{The plane is parallel to that of QSOs, but shifted to lower nuclear luminosities. This would \nshow that these host properties don't determine the AGN class, and a given galaxy could \nhost different types.}\n\n\\item{The plane is tilted with respect to that of QSOs. Then the fundamental plane slope would \nbe characteristic of the AGN type, possibly supporting the idea that the slope is tied to the \naccretion mechanism.}\n\n\\item{There is no fundamental plane whatsoever. In that case, this type of fundamental plane \nwould be a unique property of QSOs. 
High-luminosity objects would then be the ones most closely \nconnected with their host properties.}\n\\end{enumerate}\n\nAny of these outcomes would teach us something useful.\n\n\\begin{figure}\n\\scalebox{0.675}{\\includegraphics{ofp2.eps}}\n\\scalebox{0.675}{\\includegraphics{xfp2.eps}}\n\\caption{\nViews of the optical (left) and x-ray (right) QSO\nfundamental planes, showing the individual QSOs (points) and the plane fitted to the overall sample. The host properties are the horizontal axes, while nuclear luminosity is vertical. \nThe shading of the planes is proportional to nuclear luminosity, ranging from black (faint) to white (bright). \nNote that only those points lying above the plane are visible here. \nIn the axis labels, ``M'' is $M_V\\mathrm{(nuc)}$, ``X'' is $\\log L_X$, \n``$\\mu$'' is $\\mu_e$, and ``r'' is $\\log r_{1\/2}$.\n}\n\\label{fig:fp-phys}\n\\end{figure}\n\n\n\\begin{figure}\n\\scalebox{0.34}{\\includegraphics{fp-errors-rx.eps}}\n\\scalebox{0.34}{\\includegraphics{fp-errors-ro.eps}}\n\\caption{Overall QSO fundamental plane, solved for host size (vertical\naxis), plotted against the measured host galaxy size ($\\log r_{1\/2}$, horizontal\naxis). Points on the diagonal line show perfect correspondence. \nThe left figure uses the QSO fundamental plane in its optical form, while the right\nfigure uses the x-ray form. 
The QSO fundamental plane is most precise when solved for the host size.}\n\\label{fig:fp-rms}\n\\end{figure}\n\n\n\\section{Introduction}\n\nThree types of fourth order Painlev\\'{e}-type ordinary differential equations have been studied \\cite{FS,NY1,S}.\nThey are extensions of the Painlev\\'{e} equations $P_{\\rm{II}},\\ldots,P_{\\rm{VI}}$ and are expressed as Hamiltonian systems\n\\[\n\t\\mathcal{H}^{X_n^{(1)}}:\\quad\n\t\\frac{dq_i}{dt} = \\frac{\\partial H^{X_n^{(1)}}}{\\partial p_i},\\quad\n\t\\frac{dp_i}{dt} = -\\frac{\\partial H^{X_n^{(1)}}}{\\partial q_i}\\quad\n\t(i=1,2),\n\\]\nwith the Coupled Hamiltonians\n\\[\\begin{split}\n\tH^{A_4^{(1)}} &= H_{\\rm{IV}}(q_1,p_1;\\alpha_2,\\alpha_1)\n\t+ H_{\\rm{IV}}(q_2,p_2;\\alpha_4,\\alpha_1+\\alpha_3) + 2q_1p_1p_2,\\\\\n\ttH^{A_5^{(1)}} &= H_{\\rm{V}}(q_1,p_1;\\alpha_2,\\alpha_1,\\alpha_1+\\alpha_3)\\\\\n\t&\\quad + H_{\\rm{V}}(q_2,p_2;\\alpha_4,\\alpha_1+\\alpha_3,\\alpha_1+\\alpha_3)\n\t+ 2q_1p_1(q_2-1)p_2,\\\\\n\tt(t-1)H^{D_6^{(1)}} &= H_{\\rm{VI}}(q_1,p_1;\\alpha_0,\\alpha_3+\\alpha_5,\n\t\\alpha_3+\\alpha_6,\\alpha_2(\\alpha_1+\\alpha_2))\\\\\n\t&\\quad + H_{\\rm{VI}}(q_2,p_2;\\alpha_0+\\alpha_3,\\alpha_5,\\alpha_6,\n\t\\alpha_4(\\alpha_1+2\\alpha_2+\\alpha_3+\\alpha_4))\\\\\n\t&\\quad + 2(q_1-t)p_1q_2\\{(q_2-1)p_2+\\alpha_4\\},\n\\end{split}\\]\nwhere\n\\[\\begin{split}\n\tH_{\\rm{IV}}(q,p;a,b) &= qp(p-q-t) - aq - bp,\\\\\n\tH_{\\rm{V}}(q,p;a,b,c) &= q(q-1)p(p+t) + atq + bp - cqp,\\\\\n\tH_{\\rm{VI}}(q,p;a,b,c,d) &= q(q-1)(q-t)p^2 - \\{(a-1)q(q-1)\\\\\n\t&\\quad +bq(q-t)+c(q-1)(q-t)\\}p + dq.\n\\end{split}\\]\nHowever, a complete classification of fourth order Painlev\\'{e} systems has not been achieved, so the existence of unknown ones is expected.\nIn this article, we derive a class of fourth order Painlev\\'{e} systems from the Drinfeld-Sokolov hierarchies of type $A_n^{(1)}$ by similarity reductions.\n\nThe Drinfeld-Sokolov hierarchies are 
extensions of the KdV (or mKdV) hierarchy for the affine Lie algebras \\cite{DS}.\nFor type $A_n^{(1)}$, they imply several Painlev\\'{e} systems by similarity reductions \\cite{AS,KIK,KK1,KK2,NY1}; {\\it see Table 1}.\\DSPainleveKnown\nThis fact clarifies the origins of several properties of the Painlev\\'{e} systems: Lax pairs, affine Weyl group symmetries and particular solutions in terms of the Schur polynomials.\n\nThe Drinfeld-Sokolov hierarchies are characterized by the Heisenberg subalgebras, that is, maximal nilpotent subalgebras, of the affine Lie algebras.\nThe isomorphism classes of the Heisenberg subalgebras are in one-to-one correspondence with the conjugacy classes of the finite Weyl group {\\rm\\cite{KP}}.\nIn this article, we choose the {\\it regular} conjugacy classes of $W(A_n)$ and consider their associated hierarchies, called {\\it type I hierarchies} \\cite{GHM}.\nIn the notation of \\cite{DF}, the regular conjugacy classes of $W(A_n)$ correspond to the partitions $(p,\\ldots,p)$ and $(p,\\ldots,p,1)$.\nFor the derivation of fourth order Painlev\\'{e} systems, we investigate the partitions $(2,2)$, $(3,1)$, $(4,1)$, $(2,2,1)$ and $(3,3)$; {\\it see Table 2}.\\DSPainleveResult\n\nOne of the important results in this article is the derivation of a new Painlev\\'{e} system.\nIt is expressed as a Hamiltonian system\n\\begin{equation}\\label{Eq:CP6}\n\t\\frac{dq_i}{dt} = \\frac{\\partial H_c}{\\partial p_i},\\quad\n\t\\frac{dp_i}{dt} = -\\frac{\\partial H_c}{\\partial q_i}\\quad (i=1,2),\n\\end{equation}\nwith a Coupled Hamiltonian\n\\begin{equation}\\begin{split}\\label{Eq:CP6_Ham}\n\tt(t-1)H_c &= H_{\\rm{VI}}(q_1,p_1;\\alpha_2,\\alpha_0+\\alpha_4,\n\t\\alpha_3+\\alpha_5-\\eta,\\eta\\alpha_1)\\\\\n\t&\\quad + H_{\\rm{VI}}(q_2,p_2;\\alpha_0+\\alpha_2,\\alpha_4,\n\t\\alpha_1+\\alpha_3-\\eta,\\eta\\alpha_5)\\\\\n\t&\\quad\n\t+ (q_1-t)(q_2-1)\\left\\{(q_1p_1+\\alpha_1)p_2+p_1(p_2q_2+\\alpha_5)\\right\\}.\n\\end{split}\\end{equation}\nThis system admits 
affine Weyl group symmetry of type $A_5^{(1)}$; see Appendix \\ref{Sec:AffWey}.\nOn the other hand, the system $\\mathcal{H}^{D_6^{(1)}}$ admits one of type $D_6^{(1)}$.\nThe relation between those two coupled Painlev\\'{e} VI systems has not yet been clarified.\n\n\\begin{rem}\nFor the partition $(1,\\ldots,1)$ of $n+2$, we have the Garnier system in $n$-variables {\\rm\\cite{KK2}}.\nAlso for each partition $(5,1)$ and $(2,2,2)$, a system of sixth order is derived{\\rm;} we do not give the explicit formula here.\nThus we conjecture that no further fourth order Painlev\\'{e} systems arise from the type I hierarchy.\n\\end{rem}\n\nThis article is organized as follows.\nIn Section \\ref{Sec:AffLie}, we recall the affine Lie algebra of type $A^{(1)}_n$ and realize it in a framework of a central extension of the loop algebra $\\mathfrak{sl}_{n+1}[z,z^{-1}]$.\nIn Section \\ref{Sec:Heisenberg}, the Heisenberg subalgebra of $\\widehat{\\mathfrak{sl}}_{n+1}$ corresponding to the partition $\\mathbf{n}$ is introduced.\nIn Section \\ref{Sec:D-S}, we formulate the Drinfeld-Sokolov hierarchies and their similarity reductions.\nIn Sections \\ref{Sec:Deri_CP6} and \\ref{Sec:Deri_Others}, the Painlev\\'{e} systems are derived from the Drinfeld-Sokolov hierarchies.\nIn Appendix \\ref{Sec:Lax}, we give explicit descriptions of Lax pairs by means of a basis of $\\widehat{\\mathfrak{sl}}_{n+1}$.\nIn Appendix \\ref{Sec:AffWey}, we discuss a group of symmetries for the system \\eqref{Eq:CP6} with \\eqref{Eq:CP6_Ham}.\n\n\n\\section{Affine Lie algebra}\\label{Sec:AffLie}\n\nIn this section, we recall the affine Lie algebra of type $A^{(1)}_n$ and realize it in a framework of a central extension of the loop algebra $\\mathfrak{sl}_{n+1}[z,z^{-1}]$.\n\nIn the notation of \\cite{Kac}, the affine Lie algebra $\\mathfrak{g}=\\mathfrak{g}(A^{(1)}_n)$ is generated by the Chevalley generators $e_i,f_i,\\alpha_i^{\\vee}$ $(i=0,\\ldots,n)$ and the scaling element $d$ with the fundamental 
relations\n\\[\\begin{split}\n\t&(\\mathrm{ad}e_i)^{1-a_{i,j}}(e_j)=0,\\quad\n\t(\\mathrm{ad}f_i)^{1-a_{i,j}}(f_j)=0\\quad (i\\neq j),\\\\\n\t&[\\alpha_i^{\\vee},\\alpha_j^{\\vee}]=0,\\quad\n\t[\\alpha_i^{\\vee},e_j]=a_{i,j}e_j,\\quad\n\t[\\alpha_i^{\\vee},f_j]=-a_{i,j}f_j,\\quad\n\t[e_i,f_j]=\\delta_{i,j}\\alpha_i^{\\vee},\\\\\n\t&[d,\\alpha_i^{\\vee}]=0,\\quad [d,e_i]=\\delta_{i,0}e_0,\\quad\n\t[d,f_i]=-\\delta_{i,0}f_0,\n\\end{split}\\]\nfor $i,j=0,\\ldots,n$.\nThe generalized Cartan matrix $A=\\left[a_{i,j}\\right]_{i,j=0}^{n}$ for $\\mathfrak{g}$ is defined by\n\\[\\begin{array}{llll}\n\ta_{i,i}=2& (i=0,\\ldots,n),\\\\[4pt]\n\ta_{i,i+1}=a_{n,0}=a_{i+1,i}=a_{0,n}=-1& (i=0,\\ldots,n-1),\\\\[4pt]\n\ta_{i,j}=0& (\\text{otherwise}).\n\\end{array}\\]\nWe denote the Cartan subalgebra of $\\mathfrak{g}$ by\n\\[\n\t\\mathfrak{h} = \\mathbb{C}\\alpha_0^{\\vee}\\oplus\\mathbb{C}\\alpha_1^{\\vee}\n\t\\oplus\\cdots\\oplus\\mathbb{C}\\alpha_n^{\\vee}\\oplus\\mathbb{C}d\n\t= \\mathfrak{h}'\\oplus\\mathbb{C}d.\n\\]\nThe normalized invariant form $(\\cdot|\\cdot):\\mathfrak{g}\\times\\mathfrak{g}\\to\\mathbb{C}$ is determined by the conditions\n\\[\\begin{array}{lll}\n\t(\\alpha_i^{\\vee}|\\alpha_j^{\\vee}) = a_{i,j},& (e_i|f_j) = \\delta_{i,j},&\n\t(\\alpha_i^{\\vee}|e_j) = (\\alpha_i^{\\vee}|f_j) = 0,\\\\[4pt]\n\t(d|d) = 0,& (d|\\alpha_j^{\\vee}) = \\delta_{0,j},& (d|e_j) = (d|f_j) = 0,\n\\end{array}\\]\nfor $i,j=0,\\ldots,n$.\n\nLet $\\mathfrak{n}_{+}$ and $\\mathfrak{n}_{-}$ be the subalgebras of $\\mathfrak{g}$ generated by $e_i$ and $f_i$ $(i=0,\\ldots,n)$ respectively.\nThen the Borel subalgebra $\\mathfrak{b}_{+}$ of $\\mathfrak{g}$ is defined by $\\mathfrak{b}_{+}=\\mathfrak{h}\\oplus\\mathfrak{n}_{+}$.\nNote that we have the triangular decomposition\n\\[\n\t\\mathfrak{g} = \\mathfrak{n}_{-}\\oplus\\mathfrak{h}\\oplus\\mathfrak{n}_{+}\n\t= \\mathfrak{n}_{-}\\oplus\\mathfrak{b}_{+}.\n\\]\nThe corresponding infinite dimensional groups are defined by\n\\[\n\tN_{\\pm} = 
\\exp(\\mathfrak{n}_{\\pm}^*),\\quad H = \\exp(\\mathfrak{h}'),\\quad\n\tB_{+} = HN_{+},\n\\]\nwhere $\\mathfrak{n}_{\\pm}^*$ are completions of $\\mathfrak{n}_{\\pm}$ respectively.\n\nLet $\\mathbf{s}=(s_0,\\ldots,s_n)$ be a vector of non-negative integers.\nWe consider a gradation $\\mathfrak{g}=\\bigoplus_{k\\in\\mathbb{Z}}\\mathfrak{g}_k(\\mathbf{s})$ of type $\\mathbf{s}$ by setting\n\\[\n\t\\deg\\mathfrak{h}=0,\\quad \\deg e_i=s_i,\\quad \\deg f_i=-s_i\\quad\n\t(i=0,\\ldots,n).\n\\]\nWith an element $\\vartheta(\\mathbf{s})\\in\\mathfrak{h}$ such that\n\\[\n\t(\\vartheta(\\mathbf{s})|\\alpha_i^{\\vee}) = s_i\\quad (i=0,\\ldots,n),\n\\]\nthis gradation is defined by\n\\[\n\t\\mathfrak{g}_k(\\mathbf{s}) = \\left\\{x\\in\\mathfrak{g}\\bigm|\n\t[\\vartheta(\\mathbf{s}),x]=kx\\right\\}\\quad (k\\in\\mathbb{Z}).\n\\]\nWe denote by\n\\[\n\t\\mathfrak{g}_{\\geq k}(\\mathbf{s}) = \\bigoplus_{l\\geq k}\\mathfrak{g}_l(\\mathbf{s}),\\quad\n\t\\mathfrak{g}_{<k}(\\mathbf{s}) = \\bigoplus_{l<k}\\mathfrak{g}_l(\\mathbf{s})\\quad\n\t(k\\in\\mathbb{Z}).\n\\]\n\n\n\\section{Heisenberg subalgebra}\\label{Sec:Heisenberg}\n\nIn this section, we introduce the Heisenberg subalgebra of $\\widehat{\\mathfrak{sl}}_{n+1}$ corresponding to a partition $\\mathbf{n}$ of $n+1$.\n\nLet $\\mathbf{n}=(n_1,\\ldots,n_s)$ be a partition of $n+1$ such that $n_1\\geq\\ldots\\geq n_r>n_{r+1}=\\ldots=n_s=1$.\nConsider a partition of matrices corresponding to $\\mathbf{n}$\n\\[\n\t\\begin{bmatrix}\n\t\tB_{11}& B_{12}& \\cdots& B_{1s}\\\\\n\t\tB_{21}& B_{22}& \\cdots& B_{2s}\\\\\n\t\t\\vdots& \\vdots& \\ddots& \\vdots\\\\\n\t\tB_{s1}& B_{s2}& \\cdots& B_{ss}\n\t\\end{bmatrix},\n\\]\nwhere each block $B_{ij}$ is an $n_i\\times n_j$-matrix.\nWith this block form, we define matrices $\\Lambda_i'\\in\\widehat{\\mathfrak{sl}}_{n+1}$ $(i=1,\\ldots,r)$ by\n\\[\n\t\\Lambda_i' = \\begin{bmatrix}\n\t\tO& & \\cdots& & O\\\\ & & & & \\\\ \\vdots& & B_{ii}& & \\vdots\\\\\n\t\t& & & & \\\\ O& & \\cdots& & O\n\t\\end{bmatrix},\\quad\n\tB_{ii} = \\begin{bmatrix}\n\t\t0& 1& 0& \\cdots& 0\\\\ 0& 0& 1& & 0\\\\ \\vdots& \\vdots& & \\ddots& \\\\\n\t\t0& 0& 0& & 1\\\\ z& 0& 0& \\cdots& 0\n\t\\end{bmatrix},\n\\]\ndiagonal matrices $H_j'\\in\\widehat{\\mathfrak{sl}}_{n+1}$ $(j=1,\\ldots,s-1)$ by\n\\[\n\tH_j' = n_{j+1}z^{-1}(\\Lambda_j')^{n_j}\n\t- n_jz^{-1}(\\Lambda_{j+1}')^{n_{j+1}},\n\\]\nand a diagonal matrix $\\eta_{\\mathbf{n}}'\\in\\widehat{\\mathfrak{sl}}_{n+1}$ whose $i$-th diagonal block is given by\n\\[\n\tB_{ii} = 
\\frac{1}{2n_i}\\mathrm{diag}(n_i-1,n_i-3,\\ldots,-n_i+1)\\quad\n\t(i=1,\\ldots,r).\n\\]\n\nDenoting the matrix $\\eta_{\\mathbf{n}}'$ by $\\mathrm{diag}(\\eta_1',\\eta_2',\\ldots,\\eta_{n+1}')$, we consider a permutation\n\\[\n\t\\sigma = \\left(\\begin{array}{llll}\n\t\t\\eta_1'& \\eta_2'& \\ldots& \\eta_{n+1}'\\\\[4pt]\n\t\t\\eta_1& \\eta_2& \\ldots& \\eta_{n+1}\n\t\\end{array}\\right),\n\\]\nsuch that $\\eta_1\\geq\\eta_2\\geq\\ldots\\geq\\eta_{n+1}$.\nThis permutation can be lifted to the transformation $\\sigma$ acting on the matrices $\\Lambda_i'$ and $H_j'$.\nWe set\n\\[\n\t\\Lambda_i = \\sigma(\\Lambda_i')\\quad (i=1,\\ldots,r),\\quad\n\tH_j = \\sigma(H_j')\\quad (j=1,\\ldots,s-1).\n\\]\nThen the Heisenberg subalgebra of $\\widehat{\\mathfrak{sl}}_{n+1}$ corresponding to the partition $\\mathbf{n}$ is defined by\n\\[\n\t\\mathfrak{s}_\\mathbf{n} = \\bigoplus_{i=1}^{r}\n\t\\bigoplus_{k\\in\\mathbb{Z}\\setminus n_i\\mathbb{Z}}\\mathbb{C}\\Lambda_i^k\n\t\\oplus\\bigoplus_{j=1}^{s-1}\\bigoplus_{k\\in\\mathbb{Z}\\setminus\\{0\\}}\n\t\\mathbb{C}z^kH_j\\oplus\\mathbb{C}K.\n\\]\n\nLet $N_{\\mathbf{n}}'$ be the least common multiple of $n_1,\\ldots,n_s$.\nAlso let\n\\[\n\tN_{\\mathbf{n}} = \\left\\{\\begin{array}{ll}\n\t\tN_{\\mathbf{n}}'&\n\t\t\\text{if $\\displaystyle N_{\\mathbf{n}}'\\left(\\frac{1}{n_i}+\\frac{1}{n_j}\\right)\n\t\t\\in2\\mathbb{Z}$ for all $(i,j)$}\\\\[12pt]\n\t\t2N_{\\mathbf{n}}'& \\text{otherwise}\n\t\\end{array}\\right..\n\\]\nWe consider an operator corresponding to $\\mathbf{n}$\n\\[\n\t\\vartheta_{\\mathbf{n}}\n\t= N_{\\mathbf{n}}\\left(z\\frac{d}{dz}+\\mathrm{ad}\\eta_{\\mathbf{n}}\\right),\n\\]\nwhere $\\eta_{\\mathbf{n}}=\\sigma(\\eta_{\\mathbf{n}}')$.\nThen the operator $\\vartheta_{\\mathbf{n}}$ induces a gradation $\\mathbf{s}=(s_0,\\ldots,s_n)$ as follows:\n\\[\n\t\\vartheta_{\\mathbf{n}}(e_i) = s_ie_i\\quad (i=0,\\ldots,n).\n\\]\nNote that the Heisenberg subalgebra $\\mathfrak{s}_\\mathbf{n}$ admits the gradation $\\mathbf{s}$ 
defined by $\\vartheta_{\\mathbf{n}}$.\n\n\n\\section{Drinfeld-Sokolov hierarchy}\\label{Sec:D-S}\n\nIn this section, we formulate the Drinfeld-Sokolov hierarchy associated with the Heisenberg subalgebra $\\mathfrak{s}_\\mathbf{n}$.\nIts similarity reduction is also formulated.\n\nLet $\\Lambda_i$ and $H_j$ be the generators for $\\mathfrak{s}_\\mathbf{n}$ given in Section \\ref{Sec:Heisenberg}.\nIntroducing time variables $t_{i,k}$ $(i=1,\\ldots,r;k\\in\\mathbb{N})$, we consider an $N_{-}B_{+}$-valued function $G=G(t_{1,1},t_{1,2},\\ldots)$ defined by\n\\[\n\tG = \\exp\\left(\\sum_{i=1}^r\\sum_{k=1}^{\\infty}t_{i,k}\\Lambda_i^k\\right)G(0).\n\\]\nHere we assume the $\\mathbf{n}$-reduced condition\n\\[\n\tt_{i,l}=0\\quad (i=1,\\ldots,r;l\\in n_i\\mathbb{N}).\n\\]\nThen we have a system of partial differential equations\n\\begin{equation}\\label{Eq:DS_exp}\n\t\\partial_{i,k}(G) = \\Lambda_i^kG\\quad (i=1,\\ldots,r;k\\in\\mathbb{N}),\n\\end{equation}\nwhere $\\partial_{i,k}=\\partial\/\\partial t_{i,k}$.\nVia the triangular decomposition\n\\[\n\tG = W^{-1}Z,\\quad W\\in N_{-},\\quad Z\\in B_{+},\n\\]\nthe system \\eqref{Eq:DS_exp} implies {\\it a Sato equation}\n\\begin{equation}\\label{Eq:Sato}\n\t\\partial_{i,k}(W) = B_{i,k}W - W\\Lambda_i^k\\quad\n\t(i=1,\\ldots,r;k\\in\\mathbb{N}),\n\\end{equation}\nwhere $B_{i,k}$ stands for the $\\mathfrak{b}_{+}$-component of $W\\Lambda_i^kW^{-1}$.\nThe compatibility condition of \\eqref{Eq:Sato} gives the Drinfeld-Sokolov hierarchy\n\\begin{equation}\\label{Eq:DS}\n\t\\left[\\partial_{i,k}-B_{i,k},\\partial_{j,l}-B_{j,l}\\right] = 0\\quad\n\t(i,j=1,\\ldots,r;k,l\\in\\mathbb{N}).\n\\end{equation}\n\nUnder the system \\eqref{Eq:Sato}, we consider an equation\n\\begin{equation}\\label{Eq:Sato_SR}\n\t(\\vartheta_{\\mathbf{n}}-\\mathrm{ad}\\rho)(W)\n\t= \\sum_{i=1}^r\\sum_{k=1}^{\\infty}d_ikt_{i,k}\\partial_{i,k}(W),\n\\end{equation}\nwhere $d_i=\\deg\\Lambda_i$ $(i=1,\\ldots,r)$ and $\\rho=\\sum_{j=1}^{s-1}\\rho_jH_j$.\nNote that each 
$\\rho_j$ is independent of the time variables $t_{i,k}$.\nThe compatibility condition of \\eqref{Eq:Sato} and \\eqref{Eq:Sato_SR} gives\n\\begin{equation}\\label{Eq:DS_SR}\n\t\\left[\\vartheta_{\\mathbf{n}}-M,\\partial_{i,k}-B_{i,k}\\right] = 0\\quad\n\t(i=1,\\ldots,r;k\\in\\mathbb{N}),\n\\end{equation}\nwhere\n\\[\n\tM = \\rho + \\sum_{i=1}^r\\sum_{k=1}^{\\infty}d_ikt_{i,k}B_{i,k}.\n\\]\nWe call the systems \\eqref{Eq:DS} and \\eqref{Eq:DS_SR} a similarity reduction of the Drinfeld-Sokolov hierarchy.\n\n\\begin{rem}\\label{Rem:Lax}\nThe similarity reduction can be regarded as the compatibility condition of a Lax form\n\\[\n\t\\partial_{i,k}(\\Psi) = B_{i,k}\\Psi\\quad (i=1,\\ldots,r;k\\in\\mathbb{N}),\\quad\n\t\\vartheta_{\\mathbf{n}}(\\Psi) = M\\Psi.\n\\]\nHere an $N_{-}B_{+}$-valued function $\\Psi$ is given by\n\\[\n\t\\Psi = W\\exp\\left(\\sum_{i=1}^r\\sum_{k=1}^{\\infty}t_{i,k}\\Lambda_i^k\\right).\n\\]\n\\end{rem}\n\n\n\\section{Derivation of Coupled $P_{\\rm{VI}}$}\\label{Sec:Deri_CP6}\n\nIn this section, we derive the Painlev\\'{e} system \\eqref{Eq:CP6} with \\eqref{Eq:CP6_Ham} from the Drinfeld-Sokolov hierarchies for $\\mathfrak{s}_{(3,3)}$ and $\\mathfrak{s}_{(2,2,1)}$ by similarity reductions.\n\n\n\\subsection{For the partition $(3,3)$}\\label{Sec:System33}\n\nFirst, we define the Heisenberg subalgebra $\\mathfrak{s}_{(3,3)}$ of $\\mathfrak{g}(A^{(1)}_5)$.\nLet\n\\[\n\t\\Lambda_1 = e_{1,2} + e_{3,4} + e_{5,0},\\quad\n\t\\Lambda_2 = e_{0,1} + e_{2,3} + e_{4,5},\\quad\n\tH_1 = \\alpha^{\\vee}_1 + \\alpha^{\\vee}_3 + \\alpha^{\\vee}_5,\n\\]\nwhere\n\\[\n\te_{i_1,i_2,\\ldots,i_{n-1},i_n} = \\mathrm{ad}e_{i_1}\\mathrm{ad}e_{i_2}\n\t\\ldots\\mathrm{ad}e_{i_{n-1}}(e_{i_n}).\n\\]\nThen we have\n\\[\n\t\\mathfrak{s}_{(3,3)} = 
\\bigoplus_{k\\in\\mathbb{Z}\\setminus3\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_1^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus3\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_2^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus\\{0\\}}\n\t\\mathbb{C}z^kH_1\\oplus\\mathbb{C}K.\n\\]\nThe grade operator for $\\mathfrak{s}_{(3,3)}$ is given by\n\\[\n\t\\vartheta_{(3,3)} = 3\\left(z\\frac{d}{dz}+\\mathrm{ad}\\eta_{(3,3)}\\right),\n\\]\nwhere\n\\[\n\t\\eta_{(3,3)} = \\frac{1}{3}(\\alpha^{\\vee}_1+2\\alpha^{\\vee}_2+2\\alpha^{\\vee}_3\n\t+2\\alpha^{\\vee}_4+\\alpha^{\\vee}_5).\n\\]\nIt follows that $\\mathfrak{s}_{(3,3)}$ admits the gradation of type $\\mathbf{s}=(1,0,1,0,1,0)$, namely\n\\[\n\t\\vartheta_{(3,3)}(e_i) = e_i\\quad (i=0,2,4),\\quad\n\t\\vartheta_{(3,3)}(e_j) = 0\\quad (j=1,3,5).\n\\]\nNote that\n\\[\n\t\\mathfrak{g}_{\\geq0}(1,0,1,0,1,0) = \\mathbb{C}f_1\\oplus\\mathbb{C}f_3\n\t\\oplus\\mathbb{C}f_5\\oplus\\mathfrak{b}_{+}.\n\\]\n\nWe now assume $t_{2,1}=1$ and $t_{1,k}=t_{2,k}=0$ $(k\\geq2)$.\nThen the similarity reduction \\eqref{Eq:DS} and \\eqref{Eq:DS_SR} for $\\mathfrak{s}_{(3,3)}$ is expressed as\n\\begin{equation}\\label{Eq:DS_SR_33_b}\n\t\\left[\\vartheta_{(3,3)}-M,\\partial_{1,1}-B_{1,1}\\right] = 0.\n\\end{equation}\nHere the $\\mathfrak{b}_{+}$-valued functions $M$ and $B_{1,1}$ are defined by\n\\begin{equation}\\begin{split}\\label{Eq:DS_SR_33_b_BM}\n\tM &= \\vartheta_{(3,3)}(W)W^{-1}\n\t+ W(\\rho_1H_1+t_{1,1}\\Lambda_1+\\Lambda_2)W^{-1},\\\\\n\tB_{1,1} &= \\partial_{1,1}(W)W^{-1} + W\\Lambda_1W^{-1},\n\\end{split}\\end{equation}\nwhere $W$ is an $N_{-}$-valued function; its explicit formula is given below.\nIn the following, we derive the Painlev\\'{e} system from the system \\eqref{Eq:DS_SR_33_b} with \\eqref{Eq:DS_SR_33_b_BM}.\n\nWe denote by\n\\[\n\tW = \\exp(\\omega_0)\\exp(\\omega_{-1})\\exp(\\omega_{<-1}),\n\\]\nwhere\n\\[\\begin{split}\n\t\\omega_0 &= -w_1f_1 - w_3f_3 - w_5f_5,\\\\\n\t\\omega_{-1} &= -w_0f_0 - w_2f_2 - w_4f_4 - w_{0,1}f_{0,1} - 
w_{1,2}f_{1,2}\n\t- w_{2,3}f_{2,3} - w_{3,4}f_{3,4}\\\\\n\t&\\quad - w_{4,5}f_{4,5} - w_{5,0}f_{5,0} - w_{1,2,3}f_{1,2,3}\n\t- w_{3,4,5}f_{3,4,5} - w_{5,0,1}f_{5,0,1},\n\\end{split}\\]\nand $\\omega_{<-1}\\in\\mathfrak{g}_{<-1}(1,0,1,0,1,0)$.\nThen the $\\mathfrak{b}_{+}$-valued function $M$ is described as\n\\[\\begin{split}\n\tM &= \\kappa_0\\alpha^{\\vee}_0 + \\kappa_1\\alpha^{\\vee}_1\n\t+ \\kappa_2\\alpha^{\\vee}_2 + \\kappa_3\\alpha^{\\vee}_3\n\t+ \\kappa_4\\alpha^{\\vee}_4 + \\kappa_5\\alpha^{\\vee}_5 - (t_{1,1}w_5-w_1)e_0\n\t+ \\varphi_1e_1\\\\\n\t&\\quad - (t_{1,1}w_1-w_3)e_2 + \\varphi_3e_3 - (t_{1,1}w_3-w_5)e_4\n\t+ \\varphi_5e_5 + t_{1,1}\\Lambda_1 + \\Lambda_2,\\\\\n\\end{split}\\]\nwith dependent variables\n\\[\n\t\\varphi_1 = t_{1,1}w_2 - w_0,\\quad \\varphi_3= t_{1,1}w_4 - w_2,\\quad\n\t\\varphi_5 = t_{1,1}w_0 - w_4,\n\\]\nand parameters\n\\[\\begin{split}\n\t&\\kappa_0 = -t_{1,1}w_{5,0} - w_{0,1},\\quad\n\t\\kappa_1 = t_{1,1}(w_1w_2-w_{1,2}) - (w_0w_1+w_{0,1}) + \\rho_1,\\\\\n\t&\\kappa_2 = -t_{1,1}w_{1,2} - w_{2,3},\\quad\n\t\\kappa_3 = t_{1,1}(w_3w_4-w_{3,4}) - (w_2w_3+w_{2,3}) + \\rho_1,\\\\\n\t&\\kappa_4 = -t_{1,1}w_{3,4} - w_{4,5},\\quad\n\t\\kappa_5 = t_{1,1}(w_0w_5-w_{5,0}) - (w_4w_5+w_{4,5}) + \\rho_1.\n\\end{split}\\]\nNote that\n\\[\n\t\\partial_{1,1}(\\kappa_i) = 0\\quad (i=0,\\ldots,5).\n\\]\nWe also remark that\n\\[\n\tw_1\\varphi_1 + w_3\\varphi_3 + w_5\\varphi_5 + \\kappa_0 - \\kappa_1\n\t+ \\kappa_2 - \\kappa_3 + \\kappa_4 - \\kappa_5 + 3\\rho_1 = 0.\n\\]\nThe $\\mathfrak{b}_{+}$-valued function $B_{1,1}$ is described as\n\\[\\begin{split}\n\tB_{1,1} &= u_0K + (u_1+w_1x_1)\\alpha^{\\vee}_1 + u_2\\alpha^{\\vee}_2\n\t+ (u_3+w_3x_3)\\alpha^{\\vee}_3 + u_4\\alpha^{\\vee}_4\\\\\n\t&\\quad + w_5x_5\\alpha^{\\vee}_5 - w_5e_0 + x_1e_1 - w_1e_2 + x_3e_3\n\t- w_3e_4 + x_5e_5 + \\Lambda_1,\n\\end{split}\\]\nwhere\n\\[\\begin{split}\n\t&u_1 = 
\\frac{-2w_1\\varphi_1+w_3\\varphi_3+w_5\\varphi_5\n\t-2\\kappa_0+2\\kappa_1+\\kappa_2-\\kappa_3+\\kappa_4-\\kappa_5}{3t_{1,1}},\\\\\n\t&u_2 = -\\frac{w_1\\varphi_1+\\kappa_0-\\kappa_1+\\rho_1}{t_{1,1}},\\\\\n\t&u_3 = \\frac{-w_1\\varphi_1-w_3\\varphi_3+2w_5\\varphi_5\n\t-\\kappa_0+\\kappa_1-\\kappa_2+\\kappa_3+2\\kappa_4-2\\kappa_5}{3t_{1,1}},\\\\\n\t&u_4 = \\frac{w_5\\varphi_5+\\kappa_4-\\kappa_5+\\rho_1}{t_{1,1}},\\quad\n\tx_1 = \\frac{t_{1,1}^2\\varphi_1+t_{1,1}\\varphi_5+\\varphi_3}{t_{1,1}^3-1},\\\\\n\t&x_3 = \\frac{t_{1,1}^2\\varphi_3+t_{1,1}\\varphi_1+\\varphi_5}\n\t{t_{1,1}^3-1},\\quad\n\tx_5 = \\frac{t_{1,1}^2\\varphi_5+t_{1,1}\\varphi_3+\\varphi_1}{t_{1,1}^3-1}.\n\\end{split}\\]\nHence the system \\eqref{Eq:DS_SR_33_b} with \\eqref{Eq:DS_SR_33_b_BM} can be expressed as a system of ordinary differential equations in terms of the variables $\\varphi_1,\\varphi_5,w_1,w_3,w_5$; we do not give its explicit formula.\n\nLet\n\\[\n\tq_1 = \\frac{w_1}{t_{1,1}^2w_3},\\quad\n\tp_1 = \\frac{t_{1,1}^2w_3\\varphi_1}{3},\\quad\n\tq_2 = \\frac{w_5}{t_{1,1}w_3},\\quad\n\tp_2 = \\frac{t_{1,1}w_3\\varphi_5}{3},\\quad t = \\frac{1}{t_{1,1}^3}.\n\\]\nWe also set\n\\[\\begin{split}\n\t&\\alpha_0 = \\frac{1}{3}(1-2\\kappa_0+\\kappa_1+\\kappa_5),\\quad\n\t\\alpha_1 = \\frac{1}{3}(\\kappa_0-2\\kappa_1+\\kappa_2),\\\\\n\t&\\alpha_2 = \\frac{1}{3}(1+\\kappa_1-2\\kappa_2+\\kappa_3),\\quad\n\t\\alpha_3 = \\frac{1}{3}(\\kappa_2-2\\kappa_3+\\kappa_4),\\\\\n\t&\\alpha_4 = \\frac{1}{3}(1+\\kappa_3-2\\kappa_4+\\kappa_5),\\quad\n\t\\alpha_5 = \\frac{1}{3}(\\kappa_0+\\kappa_4-2\\kappa_5),\n\\end{split}\\]\nand\n\\[\n\t\\eta = \\rho_1 + \\frac{1}{2}(\\alpha_1+\\alpha_3+\\alpha_5).\n\\]\nThen we have\n\n\\begin{thm}\nThe system \\eqref{Eq:DS_SR_33_b} with \\eqref{Eq:DS_SR_33_b_BM} gives the Painlev\\'{e} system \\eqref{Eq:CP6} with \\eqref{Eq:CP6_Ham}.\nFurthermore, $w_3$ satisfies the completely integrable Pfaffian equation\n\\[\\begin{split}\n\tt(t-1)\\frac{d}{dt}\\log w_3 &= 
-(q_1-1)(q_1-t)p_1 - (q_2-1)(q_2-t)p_2\\\\\n\t&\\quad - \\alpha_1q_1 - \\alpha_5q_2\n\t+ \\frac{1}{3}(\\alpha_1+\\alpha_2-\\alpha_3-\\alpha_4+2\\eta)t\\\\\n\t&\\quad - \\frac{1}{3}(\\alpha_1+\\alpha_2+2\\alpha_3-\\alpha_4-4\\eta).\n\\end{split}\\]\n\\end{thm}\n\n\n\\subsection{For the partition $(2,2,1)$}\\label{Sec:System221}\n\nThe Heisenberg subalgebra $\\mathfrak{s}_{(2,2,1)}$ of $\\mathfrak{g}(A^{(1)}_4)$ is defined by\n\\[\n\t\\mathfrak{s}_{(2,2,1)} = \\bigoplus_{k\\in\\mathbb{Z}\\setminus2\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_1^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus2\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_2^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus\\{0\\}}\n\t\\mathbb{C}z^kH_1\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus\\{0\\}}\n\t\\mathbb{C}z^kH_2\\oplus\\mathbb{C}K,\n\\]\nwith\n\\[\\begin{array}{ll}\n\t\\Lambda_1 = e_{4,0} + e_{1,2,3},& \\Lambda_2 = e_{0,1} + e_{2,3,4},\\\\[4pt]\n\tH_1 = \\alpha^{\\vee}_1 + \\alpha^{\\vee}_2 - \\alpha^{\\vee}_3,&\n\tH_2 = -\\alpha^{\\vee}_2 + \\alpha^{\\vee}_3 + \\alpha^{\\vee}_4.\n\\end{array}\\]\nThe subalgebra $\\mathfrak{s}_{(2,2,1)}$ admits the gradation of type $\\mathbf{s}=(2,0,1,1,0)$ with the grade operator\n\\[\n\t\\vartheta_{(2,2,1)}\n\t= 4\\left(z\\frac{d}{dz}+\\mathrm{ad}\\eta_{(2,2,1)}\\right),\\quad\n\t\\eta_{(2,2,1)} = \\frac{1}{4}\n\t(\\alpha^{\\vee}_1+2\\alpha^{\\vee}_2+2\\alpha^{\\vee}_3+\\alpha^{\\vee}_4).\n\\]\nNote that\n\\[\n\t\\mathfrak{g}_{\\geq0}(2,0,1,1,0)\n\t= \\mathbb{C}f_1\\oplus\\mathbb{C}f_4\\oplus\\mathfrak{b}_{+}.\n\\]\n\nWe now assume $t_{2,1}=1$ and $t_{1,k}=t_{2,k}=0$ $(k\\geq3)$.\nThen the similarity reduction \\eqref{Eq:DS_SR} for $\\mathfrak{s}_{(2,2,1)}$ is expressed as\n\\begin{equation}\\label{Eq:DS_SR_221_b}\n\t\\left[\\vartheta_{(2,2,1)}-M,\\partial_{1,1}-B_{1,1}\\right] = 0,\n\\end{equation}\nwith\n\\begin{equation}\\begin{split}\\label{Eq:DS_SR_221_b_BM}\n\tM &= \\vartheta_{(2,2,1)}(W)W^{-1}\n\t+ 
W(\\rho_1H_1+\\rho_2H_2+2t_{1,1}\\Lambda_1+2\\Lambda_2)W^{-1},\\\\\n\tB_{1,1} &= \\partial_{1,1}(W)W^{-1} + W\\Lambda_1W^{-1}.\n\\end{split}\\end{equation}\n\nLet\n\\[\n\tW = \\exp(\\omega_0)\\exp(\\omega_{-1})\\exp(\\omega_{-2})\\exp(\\omega_{<-2}),\n\\]\nwhere\n\\[\\begin{split}\n\t\\omega_0 &= -w_1f_1 - w_4f_4,\\\\\n\t\\omega_{-1} &= -w_2f_2 - w_3f_3 - w_{1,2}f_{1,2} - w_{3,4}f_{3,4},\\\\\n\t\\omega_{-2} &= -w_0f_0 - w_{0,1}f_{0,1} - w_{2,3}f_{2,3} - w_{4,0}f_{4,0}\\\\\n\t&\\quad - w_{1,2,3}f_{1,2,3} - w_{2,3,4}f_{2,3,4} - w_{4,0,1}f_{4,0,1}\n\t- w_{1,2,3,4}f_{1,2,3,4},\n\\end{split}\\]\nand $\\omega_{<-2}\\in\\mathfrak{g}_{<-2}(2,0,1,1,0)$.\nThen the system \\eqref{Eq:DS_SR_221_b_BM} gives explicit formulas of $M,B_{1,1}$ as follows:\n\\[\\begin{split}\n\tM &= \\kappa_0\\alpha^{\\vee}_0 + \\kappa_1\\alpha^{\\vee}_1\n\t+ \\kappa_2\\alpha^{\\vee}_2 + \\kappa_3\\alpha^{\\vee}_3\n\t+ \\kappa_4\\alpha^{\\vee}_4 + 2(w_1-t_{1,1}w_4)e_0\\\\\n\t&\\quad + \\varphi_1e_1 + (\\varphi_2-w_1\\varphi_{1,2})e_2\n\t+ (\\varphi_3+w_4\\varphi_{3,4})e_3 + \\varphi_4e_4\\\\\n\t&\\quad + \\varphi_{1,2}e_{1,2} + 2(t_{1,1}w_1-w_4)e_{2,3}\n\t- \\varphi_{3,4}e_{3,4} + 2t_{1,1}\\Lambda_1 + 2\\Lambda_2,\\\\\n\tB_{1,1} &= u_0K + (u_2+w_1x_1)\\alpha^{\\vee}_1 + u_2\\alpha^{\\vee}_2\n\t+ u_3\\alpha^{\\vee}_3 + w_4x_4\\alpha^{\\vee}_4 - w_4e_0\\\\\n\t&\\quad + x_1e_1 - w_1x_{1,2}e_2 + \\frac{\\varphi_3}{2t_{1,1}}e_3 + x_4e_4\n\t+ x_{1,2}e_{1,2} - w_1e_{2,3} + \\Lambda_1,\n\\end{split}\\]\nwhere\n\\[\\begin{split}\n\t&\\varphi_1 = -2w_0 + t_{1,1}w_2w_3 - 2t_{1,1}w_{2,3},\\quad\n\t\\varphi_2 = -2w_{3,4},\\quad \\varphi_3 = 2t_{1,1}w_{1,2},\\\\\n\t&\\varphi_4 = 2t_{1,1}w_0 + w_2w_3 + 2w_{2,3},\\quad\n\t\\varphi_{1,2} = 2t_{1,1}w_3,\\quad \\varphi_{3,4} = -2w_2,\n\\end{split}\\]\nand\n\\[\\begin{split}\n\t&u_2 = -\\frac{w_1\\varphi_1+\\kappa_0-\\kappa_1+\\rho_1}{2t_{1,1}},\\quad\n\tu_3 = \\frac{w_4\\varphi_4+\\kappa_3-\\kappa_4+\\rho_1}{2t_{1,1}},\\\\\n\t&x_1 = 
\\frac{(t_{1,1}\\varphi_1+\\varphi_4)\\varphi_3\n\t+(w_1\\varphi_1+w_4\\varphi_4+\\kappa_0-\\kappa_1+\\kappa_3-\\kappa_4+2\\rho_1)\n\t\\varphi_{3,4}}{2(t_{1,1}^2-1)\\varphi_3},\\\\\n\t&x_4 = \\frac{(\\varphi_1+t_{1,1}\\varphi_4)\\varphi_3+t_{1,1}\n\t(w_1\\varphi_1+w_4\\varphi_4+\\kappa_0-\\kappa_1+\\kappa_3-\\kappa_4+2\\rho_1)\n\t\\varphi_{3,4}}{2(t_{1,1}^2-1)\\varphi_3},\\\\\n\t&x_{1,2} = \\frac{w_1\\varphi_1+w_4\\varphi_4+\\kappa_0-\\kappa_1+\\kappa_3\n\t-\\kappa_4+2\\rho_1}{\\varphi_3}.\n\\end{split}\\]\nNote that $\\kappa_0,\\ldots,\\kappa_4$ are constants.\nWe also remark that\n\\[\\begin{split}\n\t&\\varphi_2\\varphi_{3,4} + 2(w_1\\varphi_1+w_4\\varphi_4+\\kappa_0-\\kappa_1\n\t+\\kappa_2-\\kappa_4+2\\rho_2) = 0,\\\\\n\t&\\varphi_3\\varphi_{1,2} - 2t_{1,1}(w_1\\varphi_1+w_4\\varphi_4+\\kappa_0\n\t-\\kappa_1+\\kappa_3-\\kappa_4+2\\rho_1) = 0.\n\\end{split}\\]\nHence the system \\eqref{Eq:DS_SR_221_b} can be expressed as a system of ordinary differential equations in terms of the variables $\\varphi_1,\\varphi_3,\\varphi_4,\\varphi_{3,4},w_1,w_4$.\n\nLet\n\\[\\begin{split}\n\t&q_1 = -\\frac{t_{1,1}^2\\varphi_{3,4}w_4}{\\varphi_3},\\quad\n\tp_1 = -\\frac{\\varphi_3\\varphi_4}{4t_{1,1}^2\\varphi_{3,4}},\\\\\n\t&q_2 = -\\frac{t_{1,1}\\varphi_{3,4}w_1}{\\varphi_3},\\quad\n\tp_2 = -\\frac{\\varphi_3\\varphi_1}{4t_{1,1}\\varphi_{3,4}},\\quad\n\tt = t_{1,1}^2.\n\\end{split}\\]\nWe also set\n\\[\\begin{split}\n\t&\\alpha_0 = \\frac{1}{4}(2-2\\kappa_0+\\kappa_1+\\kappa_4),\\quad\n\t\\alpha_1 = \\frac{1}{4}(\\kappa_0+\\kappa_3-2\\kappa_4),\\\\\n\t&\\alpha_2 = \\frac{1}{4}(1+\\kappa_2-2\\kappa_3+\\kappa_4),\\quad\n\t\\alpha_3 = \\frac{1}{4}(-\\kappa_2+\\kappa_3+2\\rho_1-2\\rho_2),\\\\\n\t&\\alpha_4 = \\frac{1}{4}(1+\\kappa_1-\\kappa_2-2\\rho_1+2\\rho_2),\\quad\n\t\\alpha_5 = \\frac{1}{4}(\\kappa_0-2\\kappa_1+\\kappa_2),\\\\\n\t&\\eta\n\t= \\frac{1}{4}(2\\kappa_0-2\\kappa_1+2\\kappa_3-2\\kappa_4+3\\rho_1-\\rho_2).\n\\end{split}\\]\nThen we have\n\n\\begin{thm}\nThe system 
\\eqref{Eq:DS_SR_221_b} with \\eqref{Eq:DS_SR_221_b_BM} gives the Painlev\\'{e} system \\eqref{Eq:CP6} with \\eqref{Eq:CP6_Ham}.\nFurthermore, $\\varphi_3$ and $\\varphi_{3,4}$ satisfy the completely integrable Pfaffian equations\n\\[\\begin{split}\n\tt(t-1)\\frac{d}{dt}\\log\\varphi_3 &= -q_1(q_1-t)p_1 - q_2(q_2-t)p_2\n\t- \\alpha_1q_1 - \\alpha_5q_2\\\\\n\t&\\quad + \\frac{1}{4}\n\t(1+2\\alpha_2-2\\alpha_3-2\\alpha_4-2\\alpha_5+6\\eta)t\\\\\n\t&\\quad - \\frac{1}{4}\n\t(1+2\\alpha_2+2\\alpha_3-2\\alpha_4-2\\alpha_5+2\\eta),\\\\\n\tt(t-1)\\frac{d}{dt}\\log\\varphi_{3,4} &= -(q_1-t)p_1 - (q_2-t)p_2 - \\eta.\n\\end{split}\\]\n\\end{thm}\n\n\n\\section{Derivation of other systems}\\label{Sec:Deri_Others}\n\nIn this section, we discuss the derivation of the Painlev\\'{e} systems for $\\mathfrak{s}_{(2,2)}$, $\\mathfrak{s}_{(3,1)}$ and $\\mathfrak{s}_{(4,1)}$ in a similar manner to Section \\ref{Sec:Deri_CP6}.\n\n\n\\subsection{For the partition $(2,2)$}\\label{Sec:System22}\n\nThe Heisenberg subalgebra $\\mathfrak{s}_{(2,2)}$ of $\\mathfrak{g}(A^{(1)}_3)$ is defined by\n\\[\n\t\\mathfrak{s}_{(2,2)} = \\bigoplus_{k\\in\\mathbb{Z}\\setminus2\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_1^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus2\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_2^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus\\{0\\}}\n\t\\mathbb{C}z^kH_1\\oplus\\mathbb{C}K,\n\\]\nwith\n\\[\n\t\\Lambda_1 = e_{1,2} + e_{3,0},\\quad\n\t\\Lambda_2 = e_{0,1} + e_{2,3},\\quad\n\tH_1 = \\alpha^{\\vee}_1 + \\alpha^{\\vee}_3.\n\\]\nThe subalgebra $\\mathfrak{s}_{(2,2)}$ admits the gradation of type $\\mathbf{s}=(1,0,1,0)$ with the grade operator\n\\[\n\t\\vartheta_{(2,2)}\n\t= 2\\left(z\\frac{d}{dz}+\\mathrm{ad}\\eta_{(2,2)}\\right),\\quad\n\t\\eta_{(2,2)}\n\t= \\frac{1}{2}(\\alpha^{\\vee}_1+2\\alpha^{\\vee}_2+\\alpha^{\\vee}_3).\n\\]\nNote that\n\\[\n\t\\mathfrak{g}_{\\geq0}(1,0,1,0) = \\mathbb{C}f_1\\oplus\\mathbb{C}f_3\\oplus\\mathfrak{b}_{+}.\n\\]\n\nWe now assume $t_{2,1}=1$ and 
$t_{1,k}=t_{2,k}=0$ $(k\\geq3)$.\nThen the similarity reduction \\eqref{Eq:DS_SR} for $\\mathfrak{s}_{(2,2)}$ is expressed as\n\\begin{equation}\\label{Eq:DS_SR_22_b}\n\t\\left[\\vartheta_{(2,2)}-M,\\partial_{1,1}-B_{1,1}\\right] = 0,\n\\end{equation}\nwith\n\\begin{equation}\\begin{split}\\label{Eq:DS_SR_22_b_BM}\n\tM &= \\vartheta_{(2,2)}(W)W^{-1}\n\t+ W(\\rho_1H_1+t_{1,1}\\Lambda_1+\\Lambda_2)W^{-1},\\\\\n\tB_{1,1} &= \\partial_{1,1}(W)W^{-1} + W\\Lambda_1W^{-1}.\n\\end{split}\\end{equation}\n\nLet\n\\[\n\tW = \\exp(\\omega_0)\\exp(\\omega_{-1})\\exp(\\omega_{<-1}),\n\\]\nwhere\n\\[\\begin{split}\n\t\\omega_{0} &= -w_1f_1 - w_3f_3,\\\\\n\t\\omega_{-1} &= -w_0f_0 - w_2f_2 - w_{0,2}f_{0,2} - w_{1,2}f_{1,2}\\\\\n\t&\\quad - w_{2,3}f_{2,3} - w_{3,0}f_{3,0} - w_{1,2,3}f_{1,2,3}\n\t- w_{3,0,1}f_{3,0,1},\n\\end{split}\\]\nand $\\omega_{<-1}\\in\\mathfrak{g}_{<-1}(1,0,1,0)$.\nThen the system \\eqref{Eq:DS_SR_22_b_BM} gives explicit formulas of $M,B_{1,1}$ as follows:\n\\[\\begin{split}\n\tM &= \\kappa_0\\alpha^{\\vee}_0 + \\kappa_1\\alpha^{\\vee}_1\n\t+ \\kappa_2\\alpha^{\\vee}_2 + \\kappa_3\\alpha^{\\vee}_3 + (w_1-t_{1,1}w_3)e_0\\\\\n\t&\\quad + \\varphi_1e_1 + (w_3-t_{1,1}w_1)e_2 + \\varphi_3e_3\n\t+ t_{1,1}\\Lambda_1 + \\Lambda_2,\\\\\n\tB_{1,1} &= u_0K + u_1\\alpha^{\\vee}_1 + u_2\\alpha^{\\vee}_2\n\t+ w_3x_3\\alpha^{\\vee}_3 + w_1e_0 + x_1e_1 + w_3e_2 + x_3e_3 + \\Lambda_1,\n\\end{split}\\]\nwhere\n\\[\n\t\\varphi_1 = t_{1,1}w_2 - w_0,\\quad \\varphi_3 = t_{1,1}w_0 - w_2,\n\\]\nand\n\\[\\begin{split}\n\tu_1 &= \\frac{w_1}{t_{1,1}}x_3\n\t- \\frac{\\kappa_0-\\kappa_1+\\rho_1}{t_{1,1}},\\quad\n\tu_2 = \\frac{w_3\\varphi_3+\\kappa_2-\\kappa_3+\\rho_1}{t_{1,1}},\\\\\n\tx_1 &= \\frac{(w_1-t_{1,1}w_3)\\varphi_3\n\t-(\\kappa_0-\\kappa_1+\\kappa_2-\\kappa_3+2\\rho_1)t_{1,1}}{(t_{1,1}^2-1)w_1},\\\\\n\tx_3 &= \\frac{(t_{1,1}w_1-w_3)\\varphi_3\n\t-(\\kappa_0-\\kappa_1+\\kappa_2-\\kappa_3+2\\rho_1)}{(t_{1,1}^2-1)w_1}.\n\\end{split}\\]\nNote that $\\kappa_0,\\ldots,\\kappa_3$ 
are constants.\nWe also remark that\n\\[\n\tw_1\\varphi_1 + w_3\\varphi_3 + \\kappa_0 - \\kappa_1 + \\kappa_2 - \\kappa_3\n\t+ 2\\rho_1 = 0.\n\\]\nHence the system \\eqref{Eq:DS_SR_22_b} can be expressed as a system of ordinary differential equations in terms of the variables $\\varphi_3,w_1,w_3$.\n\nLet\n\\[\n\tp = \\frac{w_1\\varphi_3}{2t_{1,1}},\\quad q = \\frac{t_{1,1}w_3}{w_1},\\quad\n\tt = t_{1,1}^2.\n\\]\nWe also set\n\\[\\begin{split}\n\t&\\alpha_0 = \\displaystyle\\frac{1}{2}(1+\\kappa_1-2\\kappa_2+\\kappa_3),\\quad\n\t\\alpha_1 = \\displaystyle\\frac{1}{2}(-\\kappa_1+\\kappa_3+2\\rho_1),\\\\\n\t&\\alpha_2 = \\kappa_0 + \\kappa_2 - 2\\kappa_3,\\quad\n\t\\alpha_3 = \\displaystyle\\frac{1}{2}(1-2\\kappa_0+\\kappa_1+\\kappa_3),\\\\\n\t&\\alpha_4 = \\displaystyle\\frac{1}{2}(-\\kappa_1+\\kappa_3-2\\rho_1),\n\\end{split}\\]\nand\n\\[\n\ta = \\alpha_0,\\quad b = \\alpha_3,\\quad c = \\alpha_4,\\quad\n\td = \\alpha_2(\\alpha_1+\\alpha_2).\n\\]\nThen we have\n\n\\begin{thm}\nThe system \\eqref{Eq:DS_SR_22_b} with \\eqref{Eq:DS_SR_22_b_BM} gives the sixth Painlev\\'{e} equation.\nFurthermore, $w_1$ satisfies the completely integrable Pfaffian equation\n\\[\\begin{split}\n\tt(t-1)\\frac{d}{dt}\\log w_1 &= -(q-1)(q-t)p - \\alpha_2q\\\\\n\t&\\quad + \\frac{1}{4}(1+2\\alpha_1-2\\alpha_3-4\\alpha_4)t\n\t- \\frac{1}{4}(1-2\\alpha_1-4\\alpha_2-2\\alpha_3).\n\\end{split}\\]\n\\end{thm}\n\n\n\\subsection{For the partition $(3,1)$}\\label{Sec:System31}\n\nThe Heisenberg subalgebra $\\mathfrak{s}_{(3,1)}$ of $\\mathfrak{g}(A^{(1)}_3)$ is defined by\n\\[\n\t\\mathfrak{s}_{(3,1)} = \\bigoplus_{k\\in\\mathbb{Z}\\setminus3\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_1^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus\\{0\\}}\n\t\\mathbb{C}z^kH_1\\oplus\\mathbb{C}K,\n\\]\nwith\n\\[\n\t\\Lambda_1 = e_0 + e_1 + e_{2,3},\\quad\n\tH_1 = \\alpha^{\\vee}_1 + 2\\alpha^{\\vee}_2 - \\alpha^{\\vee}_3.\n\\]\nThe subalgebra $\\mathfrak{s}_{(3,1)}$ admits the gradation of type $\\mathbf{s}=(1,1,0,1)$ with 
the grade operator\n\\[\n\t\\vartheta_{(3,1)}\n\t= 3\\left(z\\frac{d}{dz}+\\mathrm{ad}\\eta_{(3,1)}\\right),\\quad\n\t\\eta_{(3,1)} = \\frac{1}{3}(\\alpha^{\\vee}_1+\\alpha^{\\vee}_2+\\alpha^{\\vee}_3).\n\\]\nNote that\n\\[\n\t\\mathfrak{g}_{\\geq0}(1,1,0,1) = \\mathbb{C}f_2\\oplus\\mathfrak{b}_{+}.\n\\]\n\nWe now assume $t_{1,2}=1$ and $t_{1,k}=0$ $(k\\geq3)$.\nThen the similarity reduction \\eqref{Eq:DS_SR} for $\\mathfrak{s}_{(3,1)}$ is expressed as\n\\begin{equation}\\label{Eq:DS_SR_31_b}\n\t\\left[\\vartheta_{(3,1)}-M,\\partial_{1,1}-B_{1,1}\\right] = 0,\n\\end{equation}\nwith\n\\begin{equation}\\begin{split}\\label{Eq:DS_SR_31_b_BM}\n\tM &= \\vartheta_{(3,1)}(W)W^{-1}\n\t+ W(\\rho_1H_1+t_{1,1}\\Lambda_1+2\\Lambda_1^2)W^{-1},\\\\\n\tB_{1,1} &= \\partial_{1,1}(W)W^{-1} + W\\Lambda_1W^{-1}.\n\\end{split}\\end{equation}\n\nLet\n\\[\n\tW = \\exp(-w_2f_2)\\exp(\\omega_{-1})\\exp(\\omega_{-2})\\exp(\\omega_{<-2}),\n\\]\nwhere\n\\[\\begin{split}\n\t\\omega_{-1} &= -w_0f_0 - w_1f_1 - w_3f_3 - w_{1,2}f_{1,2}\n\t- w_{2,3}f_{2,3},\\\\\n\t\\omega_{-2} &= -w_{0,1}f_{0,1} - w_{3,0}f_{3,0} - w_{0,1,2}f_{0,1,2} \n\t- w_{1,2,3}f_{1,2,3} - w_{2,3,0}f_{2,3,0},\n\\end{split}\\]\nand $\\omega_{<-2}\\in\\mathfrak{g}_{<-2}(1,1,0,1)$.\nThen the system \\eqref{Eq:DS_SR_31_b_BM} gives explicit formulas of $M,B_{1,1}$ as follows:\n\\[\\begin{split}\n\tM &= \\kappa_0\\alpha^{\\vee}_0 + \\kappa_1\\alpha^{\\vee}_1\n\t+ \\kappa_2\\alpha^{\\vee}_2 + \\kappa_3\\alpha^{\\vee}_3 + \\varphi_0e_0\n\t+ (\\varphi_1+w_2\\varphi_{1,2})e_1\\\\\n\t&\\quad + \\varphi_2e_2 + (\\varphi_3-w_2\\varphi_{2,3})e_3\n\t+ \\varphi_{1,2}e_{1,2} + \\varphi_{2,3}e_{2,3} - 2w_2e_{3,0}\n\t+ 2\\Lambda_1^2,\\\\\n\tB_{1,1} &= u_3K - \\frac{\\varphi_1-t_{1,1}}{2}\\alpha^{\\vee}_0\n\t+ \\frac{\\varphi_0-t_{1,1}}{2}\\alpha^{\\vee}_1\n\t+ \\frac{w_2\\varphi_{1,2}}{2}\\alpha^{\\vee}_2\n\t+ \\frac{\\varphi_{1,2}}{2}e_2 - w_2e_3 + \\Lambda_1,\n\\end{split}\\]\nwhere\n\\[\\begin{split}\n\t&\\varphi_0 = 2w_1 + 2w_{2,3} + 
t_{1,1},\\quad\n\t\\varphi_1 = -2w_0 - 2w_{2,3} + t_{1,1},\\\\\n\t&\\varphi_2 = (w_0-2w_1+t_{1,1})w_3 - 2w_{3,0},\\quad \\varphi_3 = 2w_{1,2},\\\\\n\t&\\varphi_{1,2} = 2w_3,\\quad \\varphi_{2,3} = 2w_0 - 2w_1 + t_{1,1}.\n\\end{split}\\]\nNote that $\\kappa_0,\\ldots,\\kappa_3$ are constants.\nWe also remark that\n\\[\n\t2w_2\\varphi_2 - \\varphi_3\\varphi_{1,2} = 2(\\kappa_2-\\kappa_3-3\\rho_1),\\quad\n\t\\varphi_0 + \\varphi_1 + \\varphi_{2,3} = 3t_{1,1}.\n\\]\nHence the system \\eqref{Eq:DS_SR_31_b} can be expressed as a system of ordinary differential equations in terms of the variables $\\varphi_0,\\varphi_1,\\varphi_2,\\varphi_{1,2},w_2$.\n\nLet\n\\[\n\tq_1 = -\\frac{w_2\\varphi_{1,2}}{\\sqrt{6}},\\quad\n\tp_1 = -\\frac{2\\varphi_2}{\\sqrt{6}\\varphi_{1,2}},\\quad\n\tq_2 = \\frac{\\varphi_1}{\\sqrt{6}},\\quad\n\tp_2 = -\\frac{\\varphi_0}{\\sqrt{6}},\\quad t = -\\frac{\\sqrt{6}t_{1,1}}{2}.\n\\]\nWe also set\n\\[\\begin{split}\n\t&\\alpha_1 = \\displaystyle\\frac{1}{3}(\\kappa_2-\\kappa_3-3\\rho_1),\\quad\n\t\\alpha_2 = \\displaystyle\\frac{1}{3}(\\kappa_1-2\\kappa_2+\\kappa_3),\\\\\n\t&\\alpha_3 = \\displaystyle\\frac{1}{3}(1+\\kappa_0-2\\kappa_1+\\kappa_2),\\quad\n\t\\alpha_4 = \\displaystyle\\frac{1}{3}(1-2\\kappa_0+\\kappa_1+\\kappa_3).\n\\end{split}\\]\nThen we have\n\n\\begin{thm}\nThe system \\eqref{Eq:DS_SR_31_b} with \\eqref{Eq:DS_SR_31_b_BM} gives the Painlev\\'{e} system $\\mathcal{H}^{A_4^{(1)}}$.\nFurthermore, $\\varphi_{1,2}$ satisfies the completely integrable Pfaffian equation\n\\[\n\t\\frac{d}{dt}\\log\\varphi_{1,2} = p_1 + p_2 - \\frac{2}{3}t.\n\\]\n\\end{thm}\n\n\n\\subsection{For the partition $(4,1)$}\\label{Sec:System41}\n\nThe Heisenberg subalgebra $\\mathfrak{s}_{(4,1)}$ of $\\mathfrak{g}(A^{(1)}_4)$ is defined by\n\\[\n\t\\mathfrak{s}_{(4,1)} = 
\\bigoplus_{k\\in\\mathbb{Z}\\setminus4\\mathbb{Z}}\n\t\\mathbb{C}\\Lambda_1^k\\oplus\\bigoplus_{k\\in\\mathbb{Z}\\setminus\\{0\\}}\n\t\\mathbb{C}z^kH_1\\oplus\\mathbb{C}K,\n\\]\nwith\n\\[\n\t\\Lambda_1 = e_0 + e_1 + e_4 + e_{2,3},\\quad\n\tH_1 = \\alpha^{\\vee}_1 + 2\\alpha^{\\vee}_2 - 2\\alpha^{\\vee}_3\n\t- \\alpha^{\\vee}_4.\n\\]\nThe subalgebra $\\mathfrak{s}_{(4,1)}$ admits the gradation of type $\\mathbf{s}=(2,2,1,1,2)$ with the grade operator\n\\[\n\t\\vartheta_{(4,1)}\n\t= 8\\left(z\\frac{d}{dz}+\\mathrm{ad}\\eta_{(4,1)}\\right),\\quad\n\t\\eta_{(4,1)} = \\frac{1}{8}\n\t(3\\alpha^{\\vee}_1+4\\alpha^{\\vee}_2+4\\alpha^{\\vee}_3+3\\alpha^{\\vee}_4).\n\\]\nNote that\n\\[\n\t\\mathfrak{g}_{\\geq0}(2,2,1,1,2) = \\mathfrak{b}_{+}.\n\\]\n\nWe now assume $t_{1,2}=1$ and $t_{1,k}=0$ $(k\\geq3)$.\nThen the similarity reduction \\eqref{Eq:DS_SR} for $\\mathfrak{s}_{(4,1)}$ is expressed as\n\\begin{equation}\\label{Eq:DS_SR_41_b}\n\t\\left[\\vartheta_{(4,1)}-M,\\partial_{1,1}-B_{1,1}\\right] = 0,\n\\end{equation}\nwith\n\\begin{equation}\\begin{split}\\label{Eq:DS_SR_41_b_BM}\n\tM &= \\vartheta_{(4,1)}(W)W^{-1}\n\t+ W(\\rho_1H_1+2t_{1,1}\\Lambda_1+4\\Lambda_1^2)W^{-1},\\\\\n\tB_{1,1} &= \\partial_{1,1}(W)W^{-1} + W\\Lambda_1W^{-1}.\n\\end{split}\\end{equation}\n\nLet\n\\[\n\tW = \\exp(\\omega_{-1})\\exp(\\omega_{-2})\\exp(\\omega_{-3})\\exp(\\omega_{-4})\n\t\\exp(\\omega_{<-4}),\n\\]\nwhere\n\\[\\begin{split}\n\t\\omega_{-1} &= -w_2f_2 - w_3f_3,\\\\\n\t\\omega_{-2} &= -w_0f_0 - w_1f_1 - w_4f_4 - w_{2,3}f_{2,3},\\\\\n\t\\omega_{-3} &= -w_{1,2}f_{1,2} - w_{3,4}f_{3,4},\\\\\n\t\\omega_{-4} &= -w_{0,1}f_{0,1} - w_{4,0}f_{4,0} - w_{1,2,3}f_{1,2,3}\n\t- w_{2,3,4}f_{2,3,4},\n\\end{split}\\]\nand $\\omega_{<-4}\\in\\mathfrak{g}_{<-4}(2,2,1,1,2)$.\nThen the system \\eqref{Eq:DS_SR_41_b_BM} gives explicit formulas of $M,B_{1,1}$ as follows:\n\\[\\begin{split}\n\tM &= \\kappa_0\\alpha^{\\vee}_0 + \\kappa_1\\alpha^{\\vee}_1\n\t+ \\kappa_2\\alpha^{\\vee}_2 + 
\\kappa_3\\alpha^{\\vee}_3\n\t+ \\kappa_4\\alpha^{\\vee}_4 + \\varphi_0e_0 + \\varphi_1e_1\\\\\n\t&\\quad + \\varphi_2e_2 + \\varphi_3e_3 + \\varphi_4e_4\n\t+ \\varphi_{1,2}e_{1,2} + \\varphi_{2,3}e_{2,3} + \\varphi_{3,4}e_{3,4}\n\t+ 4\\Lambda_1^2,\\\\\n\tB_{1,1} &= u_4K + u_0\\alpha^{\\vee}_0\n\t+ \\frac{\\varphi_0-2t_{1,1}}{4}\\alpha^{\\vee}_1 + u_2\\alpha^{\\vee}_2\n\t+ u_3\\alpha^{\\vee}_3 + \\frac{\\varphi_{1,2}}{4}e_2\n\t+ \\frac{\\varphi_{3,4}}{4}e_3 + \\Lambda_1,\n\\end{split}\\]\nwhere\n\\[\\begin{split}\n\t&\\varphi_0 = 4w_1 - 4w_4 + 2t_{1,1},\\quad\n\t\\varphi_1 = -4w_0 + 2w_2w_3 - 4w_{2,3} + 2t_{1,1},\\\\\n\t&\\varphi_2 = -2(2w_1-w_4-t_{1,1})w_3 - 4w_{3,4},\\quad\n\t\\varphi_3 = 2(w_1-2w_4-t_{1,1})w_2 + 4w_{1,2},\\\\\n\t&\\varphi_{1,2} = 4w_3,\\quad \\varphi_{2,3} = -4w_1 + 4w_4 + 2t_{1,1},\\quad\n\t\\varphi_{3,4} = -4w_2,\n\\end{split}\\]\nand\n\\[\\begin{split}\n\t64t_{1,1}u_0 &= (\\varphi_0-4t_{1,1})(4\\varphi_1+\\varphi_{1,2}\\varphi_{3,4})\n\t+ 4\\varphi_2\\varphi_{3,4}\\\\\n\t&\\quad + 16t_{1,1}^2 + 16(\\kappa_0-\\kappa_1+\\kappa_2-\\kappa_4-2\\rho_1),\\\\\n\t64t_{1,1}u_2 &= \\varphi_0(4\\varphi_1+\\varphi_{1,2}\\varphi_{3,4})\n\t+ 4(\\varphi_2-t_{1,1}\\varphi_{1,2})\\varphi_{3,4}\\\\\n\t&\\quad - 16t_{1,1}^2 + 16(\\kappa_0-\\kappa_1+\\kappa_2-\\kappa_4-2\\rho_1),\\\\\n\t64t_{1,1}u_3 &= \\varphi_0(4\\varphi_1+\\varphi_{1,2}\\varphi_{3,4})\n\t+ 4\\varphi_2\\varphi_{3,4}\\\\\n\t&\\quad - 16t_{1,1}^2 + 16(\\kappa_0-\\kappa_1+\\kappa_2-\\kappa_4-2\\rho_1).\n\\end{split}\\]\nNote that $\\kappa_0,\\ldots,\\kappa_4$ are constants.\nWe also remark that\n\\[\\begin{split}\n\t&(\\varphi_0-4t_{1,1})\\varphi_{1,2}\\varphi_{3,4} + 4\\varphi_3\\varphi_{1,2}\n\t+ 4\\varphi_2\\varphi_{3,4} = 16(-\\kappa_2+\\kappa_3+4\\rho_1),\\\\\n\t&4\\varphi_1 + 4\\varphi_4 + \\varphi_{1,2}\\varphi_{3,4} = 16t_{1,1},\\quad\n\t\\varphi_0 + \\varphi_{2,3} = 4t_{1,1}.\n\\end{split}\\]\nHence the system \\eqref{Eq:DS_SR_41_b} can be described as a system of ordinary differential 
equations in terms of the variables $\\varphi_0,\\varphi_1,\\varphi_2,\\varphi_{1,2},\\varphi_{3,4}$.\n\nLet\n\\[\\begin{split}\n\t&q_1 = \\frac{\\varphi_0}{4t_{1,1}},\\quad p_1 = \\frac{t_{1,1}\\varphi_1}{8},\\\\\n\t&q_2 = \\frac{\\varphi_0}{4t_{1,1}}\n\t+ \\frac{\\varphi_2}{t_{1,1}\\varphi_{1,2}},\\quad\n\tp_2 = \\frac{t_{1,1}\\varphi_{1,2}\\varphi_{3,4}}{32},\\quad\n\tt = -\\frac{t_{1,1}^2}{2}.\n\\end{split}\\]\nWe also set\n\\[\\begin{split}\n\t&\\alpha_1 = \\frac{1}{8}(2-2\\kappa_0+\\kappa_1+\\kappa_4),\\quad\n\t\\alpha_2 = \\frac{1}{8}(2+\\kappa_0-2\\kappa_1+\\kappa_2),\\\\\n\t&\\alpha_3 = \\frac{1}{8}(1+\\kappa_1-2\\kappa_2+\\kappa_3),\\quad\n\t\\alpha_4 = \\frac{1}{8}(\\kappa_2-\\kappa_3-4\\rho_1),\\\\\n\t&\\alpha_5 = \\frac{1}{8}(1-\\kappa_3+\\kappa_4+4\\rho_1).\n\\end{split}\\]\nThen we have\n\n\\begin{thm}\nThe system \\eqref{Eq:DS_SR_41_b} with \\eqref{Eq:DS_SR_41_b_BM} gives the Painlev\\'{e} system $\\mathcal{H}^{A_5^{(1)}}$.\nFurthermore, $\\varphi_{1,2}$ satisfies the completely integrable Pfaffian equation\n\\[\n\tt\\frac{d}{dt}\\log\\varphi_{1,2} = -q_1p_1 - q_2p_2 + tq_2 - \\frac{3}{4}t\n\t- \\frac{1+2\\alpha_1+2\\alpha_3+2\\alpha_5}{4}.\n\\]\n\\end{thm}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe Edelweiss experiment searches for the WIMP candidates of the Dark Matter. The set-up is located in LSM in the French Alps which provides a shielding factor of $\\sim$4800~m.w.e. The detection principle is based on measuring energy of the recoil nucleus originating from the WIMP elastic scattering. Bolometers of pure natural Ge are used both as the detectors and the target material. These detectors cooled down to about 20~mK allow to measure simultaneously heat and ionization signals. Due to the quenching of the ionization signal present for nuclear recoils one achieves a very high discrimination of $\\beta$ and $\\gamma$ background from the recoil candidates \\cite{defay08}. 
However, neutrons coming from the natural radioactivity or induced by the remaining muons can still mimic the nuclear recoil signal of WIMP events and thus can not be discriminated in the same way as $\\beta$'s and $\\gamma$'s. Therefore, this type of background requires special careful investigation. The knowledge of it also becomes highly important in view of large 1-tonne scale experiments like EURECA \\cite{kraus08}.\n\n\\section{Neutron background}\nFor kinematic reasons, not every neutron can mimic the WIMP nuclear recoil event but only those who have an energy of 0.5\\--10~MeV when they reach the Ge bolometers. Such neutrons appearing due to natural radioactivity in the surrounding (e.g. U\/Th contamination) can be avoided by using a passive hydrogen-rich moderator (50~cm polyethylene shield in case of Edelweiss) and in addition by radiopurity selection of materials to be used. Monitoring of the ambient neutron flux in proximity of the Edelweiss experimental set-up is performed with the help of $^3$He gas detectors. This measurement yields a flux of about $2\\cdot10^{-6}$~n\/cm$^2$\/day \\cite{yakushev08} which is in good agreement with the previously measured value \\cite{fiorucci07}. Another part of the neutron background is caused by muon interactions in the rock and in fact everywhere in the set-up (especially in high-Z materials such as the gamma shield based on lead). High energy neutrons (well above 10~MeV) created in such deep inelastic scattering (DIS) processes further lead to the production of secondary neutrons with energies below 10~MeV. The effect of this $\\mu-$induced neutron component is commonly reduced by tagging the original muons. In Edelweiss experiment the plastic scintillator modules covering the full bolometer set-up act as the muon veto \\cite{chantelauze07}. Full simulations of the Edelweiss set-up including the muon veto were performed in GEANT4 in order to estimate the influence of muons for the Dark Matter search. 
These simulations involve muon generation to reproduce the muon flux specific for LSM and allow to get complete event topology \\cite{horn07}. It was shown then that muons which miss the veto can still induce some neutrons reaching the bolometers and giving rise to WIMP-like events not vetoed by the muon system. To verify these simulations one has to normalize them to the experimental data, i.e. one needs explicit $\\mu-$induced neutron measurements. One way to achieve this is to check the rate of events which are in coincidence between muon veto and the bolometers. This rate currently measured in Edelweiss is about 0.03~events\/kg\/day and it is reasonably well reproduced in the simulations. However, the rareness of these coincidence events makes it hard to get enough statistics to draw a reliable conclusion. A dedicated detector based on liquid scintillator was thus designed and installed in 2008 in LSM.\n\n\\section{A detector for muon-induced neutrons}\n\\label{sec:nc-detector}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\t\\includegraphics[width=0.72\\textwidth]{figs\/nc-detector-fin.pdf}\n\t\\caption{General scheme of the neutron counter (side view): 1-tonne of Gd-loaded liquid scintillator viewed by 16 PMTs of 8-inch and 6 PMTs of 2-inch diameter. A layer of lead bricks below the liquid scintillator volume acts as an effective target for muons and high energy neutrons.}\n\t\\label{fig:nc-detector}\t\n\\end{figure}\n\nThe measuring principle of $\\mu$-induced neutrons is based on registering thermalized neutrons in coincidence with the incoming muon or by detecting a multiple neutron event (secondary neutrons produced in a $\\mu$-induced particle showers). To efficiently observe neutrons, a liquid scintillator of 1~m$^3$ volume (50x100x200~cm$^3$) loaded with Gadolinium (St.~Gobain Bicron BC525) is used as a core of the detector. The neutron capture process on Gd results in several gammas with 8~MeV sum energy. 
The scintillator volume is viewed from each of the two module ends by 8 photomultiplier tubes (PMT) of 8-inch diameter (Fig.~\\ref{fig:nc-detector}). These PMTs are optimized to register the light produced after the neutron capture. In addition, the system is equipped with 6 smaller PMTs (2-inch diameter) to register muons crossing the neutron detector (these muons create much more light and thus the 8-inch PMTs will saturate). The scintillator and PMTs are placed in one plexiglass container divided into three parts: the central one for the scintillator itself and two side ones filled with paraffin in which the PMTs are immersed. This plexiglass chamber is then placed in an aluminum vessel as secondary safety container. Finally, the system is surrounded by iron plates to reflect a fraction of neutrons back to the scintillator. In order to enhance the neutron production (up to a factor of 10 comparing to rock) a 10~cm thick layer of lead bricks is put underneath the detector. On top of the counter, a plastic scintillator module (same type as the muon veto of Edelweiss) is installed. The complete system is positioned right near the western wall of the Edelweiss muon veto. Based on the currently measured muon flux in the lab, the expected count rate of muon-induced neutrons is about few counts per day. \n\nThe GEANT4 simulations mentioned above were extended to optimize the neutron set-up before going for construction. Additionally, a smaller prototype (25x25x250~cm$^3$) was built beforehand in Karlsruhe in order to test mechanical properties, handling of liquids and gas as well as to study light collection, PMTs and overall performance. This prototype also allowed to develop a LED system to monitor over time the light properties of the scintillator and stability of PMTs. There are in total 8 LEDs ($\\lambda=$425~nm) placed at different positions. These LEDs are operated via VME-based PC commands and regularly fired one by one. 
The data from the groups of opposite PMTs are then analyzed. \n\nThe neutron counter is also equipped with safety sensors because of the pseudocumene based scintillator. This includes vapor sensors to check the internal and surrounding atmosphere, two leak sensors in the aluminum vessel, one temperature meter immersed in the paraffin volume and one outside of the counter. Signals of the vapor sensors are incorporated into LSM safety system which takes care of an alarm activation in the lab. One can as well monitor these sensors using the LabVIEW$^{\\tt{TM}}$-based program (\\underline{Ka}rlsruhe \\underline{C}ontrol of \\underline{S}afety or KA-CS) installed on Linux computer (SuSE~10.3) (Fig.~\\ref{fig:nc-ka-cs}). This software notifies users by email in case of an alarm due to a failure or passing of specified thresholds.\n\n\nThe neutron detector described was successfully installed in LSM in September 2008 and as for the time of writing, it is under intensive commissioning.\n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=0.62\\textwidth]{figs\/nc-ka-cs.pdf}\n\t\\caption{Principle scheme of KA-CS system for safety monitoring: sensors are continuously read via NI-6221 DAQ card by the LabVIEW program installed on Linux computer.}\n\t\\label{fig:nc-ka-cs}\n\\end{figure}\n\n\\section{Conclusion}\nImproved sensitivity of Dark Matter search experiments requires much better knowledge of the background conditions. Activity of the Edelweiss collaboration concerning the neutron background studies was presented, in particular the new detector for the $\\mu$-induced neutrons was described. 
\n\n\\section{Acknowledgements}\nThis work is in part supported by the German Research Foundation (DFG) through the Trans\\-regional Collaborative Research Center SFB-TR27 as well as by the EU contract RII3-CT-2004-506222.\n\n\n\\section{...}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $G$ be a finite group which is a transitive subgroup of a certain symmetry group $S_{d+1}$. A number field $K$ of degree $d+1$ is called a $G$-field if its Galois closure $\\widehat{K}$ over $\\mathbb{Q}$ is a $G$-Galois extension. For a $G$-field $K$, we attach the Artin L-function\n$$\nL(s,\\rho,K)=\\frac{\\zeta_K(s)}{\\zeta(s)}=\\sum_{n=1}^\\infty a_{\\rho}(n)n^{-s},\n$$\nwhere $\\rho$ is $d$-dimensional representation of $G$. Note that $-1\\leq a_{\\rho}(p)\\leq d$.\nIf $G=S_{d+1}$, $\\rho$ is the $d$-dimensional standard representation of $S_{d+1}$.\nLet $L(X)^{r_2}$ be the set of $G$-fields $K$ with $ |d_K| x^a$.\n\nBy Theorem 25.8 and Theorem 30.2 (the method of moments) in \\cite{B}, it is enough to consider $h(x)=x^r$.\nConsider\n\\begin{equation}\\label{central-limit}\n\\sum_{\\rho\\in L(X)} \\left(\\frac {\\sum_{p\\leq x} a_\\rho(p)}{\\sqrt{\\pi(x)}}\\right)^r.\n\\end{equation}\n\nBy multinomial formula,\n$$\\left(\\sum_{p\\leq x} a_{\\rho}(p)\\right)^r=\\sum_{u=1}^r {\\sum}^{(1)}_{(r_1,...,r_u)} \\frac {r!}{r_1!\\cdots r_u!} \\frac 1{u!} {\\sum}^{(2)}_{(p_1,...,p_u)} a_{\\rho}(p_1)^{r_1}\\cdots a_{\\rho}(p_u)^{r_u},\n$$\nwhere $\\sum_{(r_1,...,r_u)}^{(1)}$ means the sum over the $u$-tuples $(r_1,...,r_u)$ of positive integers such that $r_1+\\cdots+r_u=r$, and $\\sum_{(p_1,...,p_u)}^{(2)}$ means the sum over the $u$-tuples $(p_1,...,p_u)$ of distinct primes such that $p_i\\leq x$ for each $i$.\nThen\n$$(\\ref{central-limit})=\\pi(x)^{-\\frac r2} \\sum_{u=1}^r \\frac 1{u!} {\\sum}^{(1)}_{(r_1,...,r_u)} \\frac {r!}{r_1!\\cdots r_u!} {\\sum}^{(2)}_{(p_1,...,p_u)} \\left(\\sum_{\\rho\\in L(X)} a_{\\rho}(p_1)^{r_1}\\cdots 
a_{\\rho}(p_u)^{r_u}\\right).\n$$\n\nNow we claim that except when $r$ is even, $u=\\frac r2$, and $r_1=\\cdots=r_u=2$, it gives rise to the error term.\n\nNow suppose $r_i\\geq 2$ for all $i$, and $r_j>2$ for some $j$. Then since $r_1+\\cdots+r_u=r$, $u\\leq\\frac {r-1}2.$ Hence by the trivial estimate,\nsuch term is majorized by\n$$\\pi(x)^{-\\frac r2} \\sum_{u=1}^r \\frac 1{u!} {\\sum}_{(r_1,...,r_u)}^{(1)}\\frac {r!}{r_1!\\cdots r_u!} d^{r_1+\\cdots+r_u} |L(X)| \\pi(x)^u\n\\ll_r \\pi(x)^{-\\frac 12} |L(X)| \\sum_{u=1}^r \\frac 1{u!} d^r \\ll_{r,d} X \\pi(x)^{-\\frac 12}.\n$$\nThis gives rise to the error term.\n\nSuppose $r_i\\leq 2$ for all $i$.\nSuppose $r_i=1$ for some $i$. We may assume that $r_1=1$.\n\nLet $N$ be the number of conjugacy classes of $G$, and partition the sum $\\sum_{\\rho\\in L(X)}$ into $(N+w)^{u}$ sums, namely, given\n$(\\mathcal S_1,...,\\mathcal S_{u})$, where $\\mathcal S_i$ is either $\\mathcal S_{p_i,C}$ or $\\mathcal S_{p_i,r_j}$,\nwe consider the set of $\\rho\\in L(X)$ with the local conditions $\\mathcal S_i$ for each $i$. Note that in each such partition, $a_{\\rho}(p_1)^{r_1}\\cdots a_{\\rho}(p_u)^{r_u}$ remains a constant.\n\nSuppose $p_1$ is unramified, and fix the splitting types of $p_2,\\cdots,p_u$, and let $\\text{Frob}_{p_1}$ runs through the conjugacy classes of $G$. Then by (\\ref{estimate1}), the sum of such $N$ partitions is\n$$\\sum_C \\left(\\frac{|C|a_\\rho(p_1)}{|G|(1+f(p_1))} A(\\mathcal S_2,...,\\mathcal S_{u})X + O((p_1\\cdots p_u)^\\gamma X^\\delta) \\right),\n$$\nfor a constant $A(\\mathcal S_2,...,\\mathcal S_u)$.\nLet $\\chi_\\rho$ be the character of $\\rho$. Then $a_{\\rho}(p)=\\chi_{\\rho}(g)$, where $g=\\text{Frob}_p$. By orthogonality of characters,\n$\\sum_C |C| a_{\\rho}(p_1)=\\sum_{g\\in G} \\chi_\\rho(g)=0$. 
Hence the above sum is\n$O((p_1\\cdots p_u)^\\gamma X^\\delta)$, and it is majorized by $\\pi(x)^{-\\frac r2+u}x^{\\gamma u} X^{\\delta}.$\n\nHence we can assume that $r_i\\leq 2$ for each $i$, and $p_j$ is ramified when $r_j=1$. Suppose $r_1+\\cdots+r_{v}+r_{v+1}+\\cdots+r_u=r$, $r_1=\\cdots=r_v=1$ and $r_{v+1}=\\cdots=r_u=2$. Then $u-v\\leq \\frac {r-1}2$, and $p_1,...,p_v$ are ramified.\nThe partition of fixed splitting types of $p_{v+1},...,p_u$ is majorized by\n$$\n\\prod_{i=1}^v \\frac {f(p_i)}{1+f(p_i)} B(\\mathcal S_{v+1},...,\\mathcal S_u)X + O((p_1\\cdots p_u)^\\gamma X^\\delta),\n$$\nfor some constant $B(\\mathcal S_{v+1},...,\\mathcal S_u)$.\nSince $\\frac {f(p)}{1+f(p)}\\ll \\frac 1p$, it contributes to\n$$\\pi(x)^{u-v-\\frac r2}(\\log\\log x)^v X+ \\pi(x)^{-\\frac r2+u}x^{\\gamma u} X^{\\delta}\\ll X (\\log\\log x)^v \\pi(x)^{-\\frac 12}+\\pi(x)^{-\\frac r2+u}x^{\\gamma u} X^{\\delta}.\n$$\n\nNow let $r$ be even, $u=\\frac r2$, and $r_1=\\cdots=r_u=2$. If one of $p_1,p_2, \\cdots , p_u$ is ramified, their contribution is\nmajorized by $X \\pi(x)^{-1}\\log\\log x.$\nNow we assume that all primes are unramified. 
Then the corresponding term is\n\\begin{equation}\\label{main}\n\\pi(x)^{-\\frac r2} \\frac 1{u!} \\frac {r!}{2^u} \\sum_{(p_1,...,p_u)}^{(2)} \\left(\\sum_{L(s,\\rho)\\in L(X)} a_{\\rho}(p_1)^2\\cdots a_{\\rho}(p_u)^2\\right).\n\\end{equation}\n\nLet $N$ be the number of conjugacy classes of $G$, and partition the sum $\\sum_{\\rho\\in L(X)}$ into $N^u$ sums\nwhere $(C_1,...,C_u)$ is the set of $\\rho\\in L(X)$ such that $\\text{Frob}_{p_i}\\in C_i$ for each $i$.\nThen,\n\\begin{eqnarray*}\n&& \\sum_{\\rho\\in L(X)} a_{\\rho}(p_1)^2\\cdots a_{\\rho}(p_u)^2=\\sum_{(C_1,...,C_u)} \\chi_{\\rho}(p_1)^2\\cdots \\chi_{\\rho}(p_u)^2\n\\left(\\sum_{\\rho\\in L(X)\\atop \\text{Frob}_{p_i}\\in C_i} 1\\right) \\\\\n&& =\\sum_{(C_1,...,C_u)} \\chi_{\\rho}(p_1)^2\\cdots \\chi_{\\rho}(p_u)^2 \\left(\\prod_{i=1}^u \\frac {|C_i|}{|G|(1+f(p_i))} |L(X)|+O((p_1\\cdots p_u)^\\gamma X^{\\delta})\\right).\n\\end{eqnarray*}\nNow\n$$\n\\sum_{(C_1,...,C_u)} \\chi_{\\rho}(p_1)^2\\cdots \\chi_{\\rho}(p_u)^2 \\prod_{i=1}^u \\frac {|C_i|}{|G|(1+f(p_i))}=\\prod_{i=1}^u \\left(\\sum_{C_i} \\frac {\\chi_{\\rho}(p_i)^2|C_i|}{|G|(1+f(p_i))}\\right).\n$$\nHere $\\chi_{\\rho}(p)^2=\\chi_{\\rho^2}(p)=\\chi_{Sym^2\\rho}(p)+\\chi_{\\wedge^2\\rho}(p)$. We observed in \\cite{CK} that since $\\rho$ is an irreducible real self-dual representation,\n$Sym^2\\rho$ contains the trivial representation and\n$\\wedge^2\\rho$ does not contain the trivial representation (\\cite{JL}, page 274). Hence $\\chi_{\\rho}(p)^2=1+\\sum_{j=1}^l \\eta_j(p)$, where $\\eta_j$'s are non-trivial irreducible characters of $G$. By the orthogonality of characters, for each $j$, $\\sum_C |C|\\eta_j(p)=\\sum_{g\\in G} \\eta_i(g)=0$. 
Hence $\\sum_{C} \\chi_{\\rho}(p)^2|C|=|G|.$\nTherefore,\n\\begin{eqnarray*}\n&& {\\sum}_{(p_1,...,p_u)}^{(2)} \\left(\\sum_{\\rho\\in L(X)} a_{\\rho}(p_1)^2\\cdots a_{\\rho}(p_u)^2\\right) \\\\\n&=& \\pi(x)^u |L(X)| + O(\\pi(x)^{u-1} |L(X)|\\log\\log x) + O(\\pi(x)^u x^{\\gamma u} X^{\\delta}).\n\\end{eqnarray*}\n\nNote\n$$\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty} t^r e^{-\\frac{t^2}{2}}dt = \\begin{cases} \\frac{r!}{(r\/2)! 2^{r\/2}}, & \\text{if $r$ is even,}\\\\\n0, & \\text{if $r$ is odd}\\end{cases}.\n$$\nHence we have proved\n\\begin{theorem}\\label{Artin}\nSuppose $\\frac {\\log X}{\\log x}\\longrightarrow \\infty$ as $x\\to\\infty$. Then\n$$\n\\frac 1{|L(X)|} \\sum_{\\rho\\in L(X)} \\left(\\frac {\\sum_{p\\leq x} a_{\\rho}(p)}{\\sqrt{\\pi(x)}}\\right)^r\n=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty} t^r e^{-\\frac{t^2}{2}}dt + O\\left(\\frac {(\\log\\log x)^r}{\\pi(x)^{\\frac 12}}\\right).\n$$\n\\end{theorem}\n\nThis proves (\\ref{main-id}).\n\n\\section{Central Limit Theorem for Hecke eigenforms; Level aspect}\n\nIn this section, in analogy to (\\ref{N}),\nwe consider central limit theorem for modular form $L$-functions with the trivial central character with respect to congruence subgroups as the level goes to infinity. We follow \\cite{N} closely. 
For $k\\geq 2$, let $S_k(N)$ be the set of normalized Hecke eigen cusp forms of weight $k$ with respect to $\\Gamma_0(N)$ with the trivial central character.\nLet $f(z)=\\sum_{n=1}^\\infty a_f(n)n^{\\frac {k-1}2} e^{2\\pi inz}$; $a_f(mn)=a_f(m)a_f(n)$, if $(m,n)=1$; $a_f(1)=1$; $a_f(p^j)=a_f(p)a_f(p^{j-1})-a_f(p^{j-2})$.\n\nWe show\n\\begin{theorem} \\label{Hecke}\nFor a continuous real function $h$ on $\\Bbb R$, (assume that $\\frac {\\log N}{\\log x}\\longrightarrow \\infty$ as $x\\to\\infty$.)\n\n$$\n\\frac 1{\\#S_k(N)} \\sum_{f\\in S_k(N)} h\\left(\\frac {\\sum_{p\\leq x} a_f(p)}{\\sqrt{\\pi(x)}}\\right)\\longrightarrow \\frac 1{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty h(t) e^{-\\frac {t^2}2}\\, dt\\quad \\text{as $x\\to\\infty$}.\n$$\n\\end{theorem}\n\nWe have, from \\cite{Se},\n\n\\begin{lemma} Suppose $k\\geq 2$. Let $S_k(N,\\chi)$ be the set of normalized Hecke eigen cusp forms of weight $k$ with respect to $\\Gamma_0(N)$ with a character $\\chi$ (mod $N$). Then\n$$\\sum_{f\\in S_k(N,\\chi)} a_f(n)=\\frac {k-1}{12} \\chi(\\sqrt{n}) n^{-\\frac 12} \\psi(N)+O(n^{c}N^{\\frac 12} d(N)),\n$$\nfor some constant $c$, independent of $n, N$.\n\\end{lemma}\n\nHere $\\psi(N)=N \\prod_{l | N} (1+\\frac 1l)$,\nand $d(N)$ is the number of positive divisors of $N$. Note that $\\psi(N)=|SL_2(\\Bbb Z): \\Gamma_0(N)|$.\nHere $\\chi(x)=0$ if $x$ is not a positive integer prime to $N$. 
In particular, if $n$ is not a square,\n$\\sum_{f\\in S_k(N,\\chi)} a_f(n)=O(n^{c}N^{\\frac 12} d(N)).$ Taking $n=1$ and $\\chi=1$, we have\n$$\\#S_k(N)=\\frac {k-1}{12} \\psi(N)+O(N^{\\frac 12}d(N)).\n$$\n\nWe need to compute, for a positive integer $r$,\n\\begin{equation}\\label{central-limit-h}\n\\sum_{f\\in S_k(N)} \\left(\\frac {\\sum_{p\\leq x} a_f(p)}{\\sqrt{\\pi(x)}}\\right)^r.\n\\end{equation}\n\nBy multinomial formula,\n$$(\\ref{central-limit-h})=\\pi(x)^{-\\frac r2} \\sum_{u=1}^r \\frac 1{u!} {\\sum}_{(r_1,...,r_u)}^{(1)} \\frac {r!}{r_1!\\cdots r_u!} {\\sum}_{(p_1,...,p_u)}^{(2)} \\left(\\sum_{f\\in S_k(N)} a_f(p_1)^{r_1}\\cdots a_f(p_u)^{r_u}\\right).\n$$\n\nNow we claim that except when $r$ is even, $u=\\frac r2$, and $r_1=\\cdots=r_u=2$, it gives rise to the error term.\n\nBy \\cite{N}, Lemma 2, we can show that\n$a_f(p)^n=\\sum_{j=0}^n h_n(j)a_f(p^j)$, where $h_n(j)=0$ if $n$ is odd and $j$ is even, or if $n$ is even and $j$ is odd.\nFor $u$-tuples $(r_1,...,r_u)$ and $(p_1,...,p_u)$, we define\n\\begin{eqnarray*}\nA(r_1,...,r_u) &=& {\\sum}_{(p_1,...,p_u)}^{(2)} B(r_1,...,r_u; p_1,...,p_u),\\\\\n B(r_1,...,r_u; p_1,...,p_u) &=& \\sum_{f\\in S_k(N)} a_f(p_1)^{r_1}\\cdots a_f(p_u)^{r_u}.\n\\end{eqnarray*}\n\nThen\n$$\nB(r_1,...,r_u; p_1,...,p_u)=\\sum_{0\\leq j_r\\leq r_1,...,0\\leq j_u\\leq r_u} h_{r_1}(j_1)\\cdots h_{r_u}(j_u) \\sum_{f\\in S_k(N)} a_f(p_1^{j_1}\\cdots p_u^{j_u}).\n$$\n\nAs in \\cite{N}, if $r_l$ is odd for some $l$, $A(r_1,...,r_u)\\ll N^{\\frac 12}d(N) \\pi(x)^u x^{cur}.$\n\t\nNow let $r_1=\\cdots=r_u=2$. Then $r$ is even, and $u=\\frac r2$.\n\n$$A(r_1,...,r_u)=\\pi(x)^{\\frac r2} \\#S_k(N)+O(\\pi(x)^{\\frac r2-1}(\\log\\log x)^{\\frac r2}\\#S_k(N)).\n$$\n\nNow suppose that all $r_i$'s are even, and $r_i>2$ for some $i$. 
Then $u\\leq \\frac r2-1$.\nThen\n$$A(r_1,...,r_u)\\ll \\pi(x)^{-1}\\#S_k(N).\n$$\n\nHence, as in Theorem \\ref{Artin}, we have\n\\begin{prop} Assume that $\\frac {\\log N}{\\log x}\\longrightarrow \\infty$ as $x\\to\\infty$. Then\n$$\n\\frac{1}{\\#S_k(N)} \\sum_{f\\in S_k(N)} \\left(\\frac {\\sum_{p\\leq x} a_f(p)}{\\sqrt{\\pi(x)}}\\right)^r\n=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty} t^r e^{-\\frac{t^2}{2}}dt + O\\left(\\frac {(\\log\\log x)^{\\frac r2}}{\\pi(x)}\\right).\n$$\n\\end{prop}\nThis proves Theorem \\ref{Hecke}\n\n\\section{Analogues of Sato-Tate distribution}\n\nFor a Hecke eigenform $f\\in \\mathcal F_k$, Sato-Tate conjecture says that for a continuous real function $h$ on $[-2,2]$,\n$$\n\\frac 1{\\pi(x)} \\sum_{p\\leq x} h(a_{f}(p))\\longrightarrow \\frac 1{2\\pi}\\int_{-2}^2 h(t) \\sqrt{4-t^2}\\, dt,\\quad \\text{as $x\\to\\infty$}.\n$$\nLet $a_f(p)=2\\cos\\theta_f(p)$ for $\\theta_f(p)\\in [0,\\pi]$. Then\n$\\{\\theta_f(p)\\}$ is uniformly distributed with respect to the measure $\\frac 2{\\pi}\\sin^2\\theta \\, d\\theta$ on $[0,\\pi]$. 
This is proved in \\cite{BGHT}.\n\nFor a vertical Sato-Tate distribution, one can consider, for a fixed prime $p$,\n\\begin{equation} \\label{CDF}\n\\sum_{f\\in \\mathcal F_k} a_{f}(p)^n.\n\\end{equation}\n\nConrey-Duke-Farmer \\cite{CDF} proved, for a holomorphic form of weight $k$,\n$$\\sum_{f\\in \\mathcal F_k} a_f(p)^n=\\frac k{6\\pi} \\left(1+\\frac 1p\\right)\\int_0^\\pi 2^n \\cos^n\\theta \\frac {\\sin^2\\theta}{(1-\\frac 1{p})^2+\\frac 4p \\sin^2\\theta} \\, d\\theta+O(p^{\\frac n2+\\epsilon}).\n$$\n\nThis implies that $\\{\\theta_f(p), f\\in \\mathcal F_k\\}$ is uniformly distributed with respect to the measure\n$$\\frac 2{\\pi} \\left(1+\\frac 1p\\right)\\frac {\\sin^2\\theta}{(1-\\frac 1p)^2+\\frac 4p \\sin^2 \\theta} \\, d\\theta.\n$$\n\nFor an Artin $L$-function analogue of the Sato-Tate distribution, we consider, for $r\\geq 1$,\n\n\\begin{equation}\\label{Sato-Tate}\n\\frac 1{\\pi(x)} \\sum_{p\\leq x} a_{\\rho}(p)^r.\n\\end{equation}\n\nIn our case, note that $-1\\leq a_{\\rho}(p)\\leq d$.\nBy the effective Chebotarev density theorem (cf. \\cite{Se1}, page 132), for $\\log x\\gg |G|(\\log \\left|d_{\\widehat{K}}\\right| )^2$,\n\\begin{equation*}\\label{chebo}\n\\sum_{p\\leq x \\atop \\text{Frob}_p\\in C} 1=\\frac {|C|}{|G|} \\pi(x)+O\\left(\\pi(x^{\\beta})\\right)+O\\left(x e^{-c |G|^{-\\frac 12} (\\log x)^{\\frac 12}}\\right),\n\\end{equation*}\nwhere $\\beta$ is an exceptional zero of $\\zeta_{\\widehat{K}}(s)$ such that $1-\\beta\\leq \\frac 1{4\\log d_{\\widehat{K}}}$, if it exists. 
Hence\n\n$$\\sum_{p\\leq x} a_{\\rho}(p)^r=\\sum_C a_{\\rho}(p)^r \\left(\\sum_{p\\leq x\\atop \\text{Frob}_p\\in C} 1\\right)\n=\\sum_C a_{\\rho}(p)^r \\frac {|C|}{|G|} \\pi(x)+ O(\\pi(x^{\\beta})+x e^{-c |G|^{-\\frac 12} (\\log x)^{\\frac 12}}).\n$$\n\nNow $\\sum_C |C| a_{\\rho}(p)^r=\\sum_{g\\in G} \\chi_{\\rho}(g)^r$ and $\\chi_{\\rho}(g)^r=\\chi_{\\rho^r}(g)$.\nNote that\n$$\\frac 1{|G|}\\sum_{g\\in G} \\chi_{\\rho^r}(g)=n_r,\n$$\nwhich is the multiplicity of the trivial representation in $\\rho^r$. Hence\n$$\\sum_{p\\leq x} a_{\\rho}(p)^r=n_r \\pi(x)+O(\\pi(x^{\\beta})+x e^{-c |G|^{-\\frac 12} (\\log x)^{\\frac 12}}).\n$$\nTherefore,\n$$\\frac 1{\\pi(x)} \\sum_{p\\leq x} a_{\\rho}(p)^r\\longrightarrow n_r, \\quad \\text{as $x\\to\\infty$}.\n$$\n\nFor a vertical Sato-Tate distribution, for a fixed prime $p$, consider\n$$\\frac 1{|L(X)|} \\sum_{\\rho\\in L(X)} a_{\\rho}(p)^r.\n$$\nThen by (\\ref{estimate1}),\n\\begin{eqnarray*}\n&& \\sum_{\\rho\\in L(X)} a_{\\rho}(p)^r=\\sum_C a_{\\rho}(p)^r \\left(\\sum_{\\rho\\in L(X)\\atop \\text{Frob}_p\\in C} 1\\right)+ a_{\\rho}(p)^r\n\\left(\\sum_{\\rho\\in L(X)\\atop \\text{$p$ is ramified}} 1 \\right) \\\\\n&&\n=\\frac {|L(X)|}{|G|(1+f(p))} \\sum_C |C| a_{\\rho}(p)^r +O(p^\\gamma X^\\delta)+O\\left(\\frac Xp\\right)\n=\\frac {|L(X)| n_r}{1+f(p)}+O(p^\\gamma X^\\delta)+O\\left(\\frac Xp\\right).\n\\end{eqnarray*}\nSo if $X>p^{\\frac {1+\\gamma}{1-\\delta}}$,\n$$\\frac 1{|L(X)|} \\sum_{\\rho\\in L(X)} a_{\\rho}(p)^r=\\frac {n_r}{1+f(p)}+O(p^{-1}).\n$$\n\n\\section{Counting $S_5$ quintic fields with local conditions}\\label{S_5}\n\nShankar and Tsimerman \\cite{ST} recently counted $S_5$ quintic fields with a power saving error term. For $i=0,1,2$, let $N_5^{(i)}(X)$ be the number of $S_5$ quintic fields of signature $(5-2i,i)$ with $|d_K| < X$.
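The identity $\\frac 1{|G|}\\sum_{g\\in G} \\chi_{\\rho^r}(g)=n_r$ used above is easy to test on a small example; the sketch below (our illustration, not from the text) uses the standard two-dimensional representation of $S_3$, whose character is $\\chi(g)=\\#\\{\\text{fixed points of $g$}\\}-1$:

```python
from itertools import permutations

# Standard 2-dimensional representation of S_3: chi(g) = fix(g) - 1
group = list(permutations(range(3)))

def chi(g):
    return sum(1 for i, x in enumerate(g) if x == i) - 1

def n_r(r):
    # average of chi^r over the group = multiplicity of the trivial
    # representation in the r-th tensor power rho^r
    s = sum(chi(g) ** r for g in group)
    assert s % len(group) == 0  # multiplicities are integers
    return s // len(group)

print([n_r(r) for r in range(1, 5)])  # → [0, 1, 1, 3]
```

These values are exactly the limiting moments $n_r$ that the theorem predicts for $\\frac 1{\\pi(x)}\\sum_{p\\leq x} a_{\\rho}(p)^r$ when $\\rho$ is this representation.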
Then they showed\n\\begin{eqnarray*}\nN_5^{(i)}(X)= D_i X+ O_{\\epsilon}\\left( X^{\\frac{399}{400}+\\epsilon }\\right),\n\\end{eqnarray*}\nwhere $D_i=d_i \\prod_p (1+ p^{-2} -p^{-4} -p^{-5})$ and $d_0,d_1,d_2$ are $\\frac{1}{240}, \\frac{1}{24}$ and $\\frac{1}{16}$, respectively.\n\nWe can count quintic fields with finitely many local conditions. Let $C$ be a conjugacy class of $S_5$ and $f(p)=p^{-1}+2p^{-2}+2p^{-3}+p^{-4}$. Let $\\mathcal S=\\{ LC_p \\}$ be a finite set of local conditions. Define\n $|\\mathcal S_{p,C}|=\\frac {|C|}{|G|(1+f(p))}$, $|\\mathcal S_{p,r_i} |=\\frac {c_i(p)}{(1+f(p))}$, and $|\\mathcal S|=\\prod_p |LC_p|$, where $c_i(p)$'s are given explicitly at the end of this section.\n\n\\begin{theorem} Let $N_5^{(i)}(X,\\mathcal S)$ be the number of $S_5$ quintic fields of signature $(5-2i,i)$ with $|d_K| < X$, and with the local condition $\\mathcal S$. Then\n\\begin{eqnarray*}\nN_5^{(i)}(X,\\mathcal S)= |\\mathcal S| D_i X+ O_{\\epsilon}\\left(\\left(\\prod_{p\\in \\mathcal S} p \\right)^{2-\\epsilon} X^{\\frac{199}{200}+\\epsilon }\\right).\n\\end{eqnarray*}\n\\end{theorem}\n\nWe follow the notations in \\cite{ST}. Let $V_{\\mathbb{Z}}$ be the space of $4$-tuples of $5 \\times 5$ alternating matrices with integer coefficients. The group $G_\\mathbb{Z} = GL_4(\\mathbb{Z}) \\times SL_5(\\mathbb{Z})$ acts on $V_\\mathbb{Z}$ via\n$$\n(g_4,g_5) \\cdot ( A, B, C, D)^t = g_4 ( g_5 A g_5^t, g_5 B g_5^t,g_5 C g_5^t,g_5 D g_5^t)^t.\n$$\nHere $g_4\\cdot (A,B,C,D)^t$ means $(a_1 (A,B,C,D)^t, a_2 (A,B,C,D)^t, a_3 (A,B,C,D)^t, a_4 (A,B,C,D)^t)$, where $a_i$ is the $i$th row of $g_4$.\n\nThere is a canonical bijection between the set of $G_\\mathbb{Z}$-equivalence classes of elements $(A,B,C,D) \\in V_{\\mathbb{Z}}$, and the set of isomorphism classes of pairs of $(R,R')$, where $R$ is a quintic ring and $R'$ is a sextic resolvent ring of $R$. (See \\cite{B08}.)\nLet $\\mathcal V$ be an element of $V_{\\mathbb{Z}}$. 
Over the residue field $\\mathbb{F}_p$, the element $\\mathcal V$ determines a quintic\n$\\mathbb{F}_p$-algebra $R(\\mathcal V)\/(p)$. Let us define the splitting symbol $(\\mathcal V,p)$ by\n$$\n(\\mathcal V,p)=(f_1^{e_1}f_2^{e_2} \\cdots ),\n$$\nwhenever $R(\\mathcal V)\/(p) \\cong \\mathbb{F}_{p^{f_1}}[t_1]\/(t_1^{e_1}) \\oplus \\mathbb{F}_{p^{f_2}}[t_2]\/(t_2^{e_2}) \\oplus \\cdots$. Then there are 17 possible splitting types for $(\\mathcal V,p)$; $(11111),$ $(1112),$ $(122),$ $(113),$ $(23),$ $(14),$ $(5),$ $(1^2111),$ $(1^212),$ $(1^23),$ $(1^21^21),$ $(2^21),$ $(1^311),$ $(1^32),$ $(1^31^2),$ $(1^41),$ and $(1^5)$. Let $\\sigma$ be one of the 17 splitting types. Then define $T_p(\\sigma)$ to be the set of $\\mathcal V\\in V_{\\mathbb{Z}}$ such that $(\\mathcal V,p)=\\sigma$ and $U_p(\\sigma)$ to be the set of elements in $T_p(\\sigma)$ corresponding to quintic rings that are maximal at $p$. The set $U_p(\\sigma)$ is defined by congruence conditions on coefficients of $\\mathcal V$ modulo $p^2$. Let $\\mu(U_p(\\sigma))$ be the $p$-adic density of $U_p(\\sigma)$ in $V_{\\mathbb{Z}_p}$. These densities are computed in Lemma 4 in \\cite{B08}. Let ${U}_p$ denote the union of the 17 $U_p(\\sigma)$. Then Lemma 20 of \\cite{B08} implies that\n\\begin{eqnarray*}\n\\mu({U_p})=(p-1)^8p^{12}(p+1)^4(p^2+1)^2(p^2+p+1)^2(p^4+p^3+p^2+p+1)(p^4+p^3+2p^2+2p+1)\/p^{40}.\n\\end{eqnarray*}\n\nNote that\n$$d_i \\zeta(2)^2\\zeta(3)^2\\zeta(4)^2\\zeta(5) \\prod_p \\mu({U_p}) = d_i \\prod_p (1+p^{-2}-p^{-4}-p^{-5}),\n$$\nwhich is the coefficient of the main term in counting quintic fields. Here we need to interpret $\\mu({U_p})$ in the following way:\n$U_p$ can be considered as a subset of $(\\mathbb{Z}\/p^2 \\mathbb{Z})^{40}$, or the union of $k$ translates of $p^2 V_{\\mathbb{Z}}$, where $k$ is the size of the set. Here $k$ is $\\mu(U_p)p^{80}$.\nLet ${W}_p$ be the complement of ${U_p}$ in $V_{\\mathbb{Z}}$, then $\\mu({W}_p)=1- \\mu({U_p})$.
Then $W_p$ is the union of $\\mu({W}_p)p^{80}$ translates of $p^2V_{\\mathbb{Z}}$.\n\nFor $q$ square-free, let $W_q \\subset V_{\\mathbb{Z}}$ be the set of elements corresponding to quintic rings that are not maximal at each prime dividing $q$. Then $W_q$ is the union of $\\prod_{p \\mid q} \\mu(W_p) \\cdot q^{80}$ translates of $q^2 V_{\\mathbb{Z}}$ by the Chinese Remainder Theorem.\n\nLet $V_{\\mathbb{Z}}^{ndeg}$ denote the set of elements in $V_{\\mathbb{Z}}$ that correspond to orders in $S_5$-fields, and let $V_{\\mathbb{Z}}^{deg}$ be the complement of $V_{\\mathbb{Z}}^{ndeg}$.\nA point in $V_{\\mathbb{Z}}$ corresponds to a maximal order in an $S_5$ quintic field precisely if it is in $ \\cap_p U_p \\cap V_{\\mathbb{Z}}^{ndeg}$. For a $G_\\mathbb{Z}$-invariant subset $S$ of $V_{\\mathbb{Z}}$, let $N(S,X)$ denote the number of irreducible $G_\\mathbb{Z}$-orbits in $S^{ndeg}:=S \\cap V_{\\mathbb{Z}}^{ndeg}$ having discriminant bounded by $X$. For a set $S$ which is not $G_\\mathbb{Z}$-invariant, we can define $N^*(S,X)$ which also counts the orbits of degenerate points in $S$.\n\nNow we choose a finite set of primes $\\{p_1,p_2, \\cdots, p_n \\}$ and choose a splitting type $\\sigma_{p_k}$ for each $p_k$, $k=1,2,\\cdots, n$. Define $U'_p$ to be $U_p$ if $p \\neq p_k$, $k=1,2,\\cdots, n$. If $p=p_k$ for some $k$, then $U'_p=U_p(\\sigma_{p_k})$. Let $W'_p$ be the complement of $U'_p$.
Then $W'_q$ is the union of $\\prod_{p \\mid q} \\mu(W'_p) \\cdot q^{80}$ translates of $q^2 V_{\\mathbb{Z}}$.\n\nLet $N_5^{(i)}(X, \\{ \\sigma_{p_k} \\}_{k=1,2,\\cdots,n})$ be the number of $S_5$ quintic fields of signature $(5-2i,i)$ with $|d_K| < X$ whose splitting type at $p_k$ is $\\sigma_{p_k}$ for $k=1,2,\\cdots,n$. Then\n\\begin{eqnarray*}\n&& N_5^{(i)}(X, \\{ \\sigma_{p_k} \\}_{k=1,2,\\cdots,n})=\\sum_{q \\leq Q} \\mu(q) N(W'_q \\cap V_{\\mathbb{Z}}^{(i)},X)+\\sum_{q > Q} O_\\epsilon \\left( (p_1 p_2 \\cdots p_n)^{2-\\epsilon} \\frac{X}{q^{2-\\epsilon} } \\right)\\\\\n&&=\\sum_{q \\leq Q} \\left( \\mu(q)\\mu(W'_q) \\cdot c_i X - \\mu(q)N^*(W_q \\cap V_{\\mathbb{Z}}^{deg,(i)},X) \\right)\n+ O_\\epsilon \\left( (p_1 p_2 \\cdots p_n)^{2-\\epsilon} X\/Q^{1-\\epsilon} + X^{\\frac{39}{40}}Q^{3+\\epsilon} \\right)\\\\\n&&= \\sum_q c_i \\mu(q)\\mu(W'_q)X + (p_1 p_2 \\cdots p_n)^{2-\\epsilon} O_\\epsilon \\left( X\/Q^{1-\\epsilon}+X^{\\frac{39}{40}}Q^{3+\\epsilon} + X^{\\frac{199}{200}}Q^{1+\\epsilon}\\right)\\\\\n&&= c_i \\prod_p (1- \\mu(W'_p)) X + (p_1 p_2 \\cdots p_n)^{2-\\epsilon} O_\\epsilon \\left( X\/Q^{1-\\epsilon}+X^{\\frac{39}{40}}Q^{3+\\epsilon} + X^{\\frac{199}{200}}Q^{1+\\epsilon}\\right)\\\\\n&&= c_i \\prod_p \\mu(U'_p) X + (p_1 p_2 \\cdots p_n)^{2-\\epsilon}O_\\epsilon \\left( X\/Q^{1-\\epsilon}+X^{\\frac{39}{40}}Q^{3+\\epsilon} + X^{\\frac{199}{200}}Q^{1+\\epsilon}\\right).\n\\end{eqnarray*}\n\nPutting $Q=X^{\\frac{1}{400}}$, we have\n$$(p_1 p_2 \\cdots p_n)^{2-\\epsilon}O_\\epsilon \\left( X\/Q^{1-\\epsilon}+X^{\\frac{39}{40}}Q^{3+\\epsilon} + X^{\\frac{199}{200}}Q^{1+\\epsilon}\\right) \\ll_\\epsilon (p_1 p_2 \\cdots p_n)^{2-\\epsilon} X^{\\frac{399}{400}+\\epsilon}.\n$$\n Note that\n \\begin{eqnarray*}\n c_i \\prod_p \\mu(U'_p) &=& \\prod_{k=1}^n \\frac{\\mu(U_{p_k}(\\sigma_{p_k}))}{\\mu(U_{p_k})} c_i \\prod_p \\mu(U_p)= \\prod_{k=1}^n \\frac{\\mu(U_{p_k}(\\sigma_{p_k}))}{\\mu(U_{p_k})} d_i \\prod_p \\left( 1+ p^{-2} -p^{-4} -p^{-5} \\right).\n\\end{eqnarray*}\n\nFrom Lemma 20 in \\cite{B08}, we can see that, for $f(p)=p^{-1}+2p^{-2}+2p^{-3}+p^{-4}$,\n$$\n\\frac{\\mu(U_p(\\sigma))}{\\mu(U_p)} = 
\\frac{1\/120}{1+f(p)},\\:\\frac{1\/12}{1+f(p)},\\:\\frac{1\/8}{1+f(p)},\\:\\frac{1\/6}{1+f(p)},\\: \\frac{1\/6}{1+f(p)},\\:\\frac{1\/4}{1+f(p)},\\:\\mbox{and }\\frac{1\/5}{1+f(p)}\n$$\nfor $\\sigma=(11111),(1112),(122),(113),(23),(14),(5),$ respectively, and\n\\begin{eqnarray*}\n\\frac{\\mu(U_p(\\sigma))}{\\mu(U_p)}& =& \\frac{1\/6 \\cdot 1\/p}{1+f(p)},\\:\\frac{1\/2 \\cdot 1\/p}{1+f(p)},\\:\\frac{1\/3 \\cdot 1\/p}{1+f(p)},\\:\\frac{1\/2 \\cdot 1\/p^2}{1+f(p)},\\: \\frac{1\/2 \\cdot 1\/p^2}{1+f(p)},\\:\\frac{1\/2 \\cdot 1\/p^2}{1+f(p)},\\frac{1\/2 \\cdot 1\/p^2}{1+f(p)},\\\\\n & & \\frac{1\/p^3}{1+f(p)},\\:\\frac{1\/p^3}{1+f(p)},\\:\\mbox{ and } \\frac{1\/p^4}{1+f(p)}\n\\end{eqnarray*}\nfor $\\sigma=(1^2111),(1^212),(1^23),(1^21^21),(2^2 1),(1^3 11),(1^3 2),(1^3 1^2), (1^4 1), (1^5),$ respectively. Hence we have proved the theorem. \n\n\\section{Counting $S_4$ quartic fields with local conditions}\\label{S_4}\n\nIn \\cite{BBP}, Belabas, Bhargava and Pomerance obtained a power saving error term for counting $S_4$ quartic fields. For $i=0,1$,\nlet $N_4^{(i)}(X)$ be the number of isomorphism classes of $S_4$-quartic fields of signature $(4-2i,i)$ with $|d_K| 7$ Gyr) and are born near the galactic center, but separate as a function of age (see Figures \\ref{fig:contour_3panel} and \\ref{fig:enlink_clusters}). \n \n \\item Using a simple second order polynomial regression, we quantify the relationship between observable abundance labels and birth property outputs (Section \\ref{sec:results_regression}). We model age as a function of (\\mbox{$\\rm [Fe\/H]$}, \\mbox{$\\rm [X\/Fe]$}), and can infer a star's age to a precision of $\\pm 0.52$ Gyr for the 2-dimensional abundance simulation and $\\pm 0.06$ Gyr for the 15-dimensional abundance simulation. We also model $R_\\text{birth}$\\ as a function of (\\mbox{$\\rm [Fe\/H]$}, age), and infer it to a precision of $\\pm 1.24$ kpc and $\\pm 1.17$ kpc for the 2- and 15-dimensional abundance simulations respectively. 
\n \n \\item The ability to reconstruct stellar groups born in different times and places from their abundances is determined by the formation history of the galaxy. When formation conditions lead to age and $R_\\text{birth}$\\ trends in the abundance plane with small dispersion, we find that there is a simple connection between clustered abundances and discrete birth times and places. Under clumpy star formation, however, the simple relationship vanishes (Section \\ref{sec:victor}). \n \n \\item Our comparison of three simulations implies that the low dispersion of age across the \\mbox{$\\rm [\\alpha\/Fe]$}-\\mbox{$\\rm [Fe\/H]$}\\ plane of the Milky Way indicates that the Milky Way's star formation history is sufficiently quiet that clustering in abundance will correspond to birth associations in time and location (Figure \\ref{fig:ageDisp}).\n \n\\end{itemize}\n\nWe seek to examine how abundance structure links to birth properties. We find that there is a simple relationship between age and chemical abundances, which agrees with previous work \\citep[e.g.][]{Ness2019}. $R_\\text{birth}$\\ cannot be tested in the same way as age --- we never have direct access to this quantity in observations. From our regression, however, we see that age and ([Fe\/H], \\mbox{$\\rm [X\/Fe]$}) link us to $R_\\text{birth}$\\ in the simulations. Indeed this analytical formalism has been adopted in models of radial migration \\citep[e.g.][]{Frankel2018,2019minchev}. We examine the $R_\\text{birth}$--age distribution further using the idea of abundance clustering, to see whether it links to underlying physical processes. \n\nThis work highlights how we might use clustering of high dimensional abundance measurements in large surveys to infer groups of different birth place and time, and the impact of measurement uncertainty in working with the observational data. \n\n\\section{Acknowledgements} \nMelissa K Ness acknowledges support from the Sloan Foundation Fellowship.
\nTobias Buck acknowledges support from the European Research Council under ERC-CoG grant CRAGSMAN-646955. This research made use of {\\sc{pynbody}} \\citep{pynbody}.\nWe gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (www.lrz.de).\nThis research was carried out on the High Performance Computing resources at New York University Abu Dhabi. We greatly appreciate the contributions of all these computing allocations.\nK.V.J. is supported by NSF grant AST-1715582.\nB.S. is supported by NSF grant DMS-2015376.\nVPD and LBS are supported by STFC Consolidated grant \\#ST\/R000786\/1.\n\n\n\\section{Appendix - additional figures}\nHere we include abundance--age plots colored by $R_\\text{birth}$\\ for both the 2d and 15d simulations. These plots are similar to Figures \\ref{fig:Rbirthxfe_2d} and \\ref{fig:Rbirthxfe_hd}; however, the coloring and x-axis are switched. \n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{buck_AgeXfe.pdf}\n\\caption{(\\textbf{Left}) The \\mbox{$\\rm [Fe\/H]$}--age plane colored by $R_\\text{birth}$\\ for the 2-dimensional simulation. The black lines and grey area mark off the solar metallicity stars, which we consider to be $\\pm$0.05 dex in \\mbox{$\\rm [Fe\/H]$}. (\\textbf{Right}) the running mean of [O\/Fe] of the solar metallicity stars across age colored by $R_\\text{birth}$\\ selected from within the horizontal lines at left. For a given bin of metallicity, stars clearly have a polynomial trend in \\mbox{$\\rm [X\/Fe]$}--age.}\n\\label{fig:Agexfe}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.92\\textwidth]{buckHD_AgeXfe.pdf}\n\\caption{(\\textbf{Top left}) The \\mbox{$\\rm [Fe\/H]$}--age plane colored by $R_\\text{birth}$\\ for the 15-dimensional simulation.
The black lines and grey area mark off the solar metallicity stars, which we consider to be $\\pm$0.05 dex in \\mbox{$\\rm [Fe\/H]$}. All other plots show the running mean of \\mbox{$\\rm [X\/Fe]$}\\ of the solar metallicity stars across age colored by $R_\\text{birth}$. Similar to the 2-dimensional simulation shown in Figure \\ref{fig:Agexfe}, solar metallicity stars of different ages separate into different polynomial curves.}\n\\label{fig:Agexfe_hd}\n\\end{figure*}\n\n\\pagebreak\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzetup b/data_all_eng_slimpj/shuffled/split2/finalzzetup new file mode 100644 index 0000000000000000000000000000000000000000..51dd9def5ddb0ffa385f8844472624695288ae3e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzetup @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n The very innovative proposal by F. Wilczek \\cite{W} to look for quantum time crystals has raised great interest, but their possible existence has been denied \\cite{B, WO}. The aim of this note is to partly refine Wilczek's proposal, in such a way as to overcome the objections of Refs. \\cite{B, WO}, and to exhibit explicit examples of quantum time crystals. \n\n In our opinion, misleading prejudices affect the issue of possible breaking of time translations. One critical point is the widespread belief that the only criterion of spontaneous symmetry breaking is the non-invariance of a ground state correlation function and this would a priori preclude the breaking of time translations. \n\nAs a matter of fact, quite generally, a symmetry $\\b$ defined as a mapping of the quantum canonical variables, more generally of (the algebra of) the observables generated by them, fails to define a symmetry of the states of a given realization or phase of the system (i.e.
it does not define a Wigner symmetry), if it cannot be described by a unitary operator in the corresponding Hilbert space.\n\n For an explicit easily checkable criterion of symmetry breaking, in terms of non-invariant expectations, one has to consider pure states (since invariant expectations of a mixed state do not exclude symmetry breaking) and therefore the characterization of a pure phase as an irreducible (more generally factorial) representation of the algebra of observables enters in a decisive way. Irreducibility implies that all the operators which commute with the observables must be represented by multiples of the identity. In order to cover the finite temperature case (where irreducibility fails) this condition is only required to apply to the set of observables which commute with all the observables (briefly the {\\em center} of the algebra of the observables); in this more general case the representation is said to be {\\em factorial} and provides the general definition of a pure phase. \n\nClearly, in order to relate the breaking of time translations to non-invariant expectations one must consider a state which is not an eigenstate of the (finite volume) Hamiltonian, e.g. a superposition of eigenstates, \nbut obviously additional conditions are needed. \n\nThey are easily identified if \n \\,\\,i) a group $\\T$ of translations is defined on the observables and ii) the following minimal locality condition (asymptotic abelianess) is satisfied for any pair of observables $A, \\,B$ and any translation $T \\in \\T$\n$$ \\limn [\\,T^n(A), \\,B\\,] = 0.$$ In the following, we shall always consider pure phases of quantum systems satisfying i) and ii). 
\n\nIn this case, if a state $\\omega$ invariant under $\\T$ belongs to the pure phase $\\Gamma$, it is the unique $\\T$-invariant state, equivalently the cluster property holds \\cite{FSB}\n$$ \\limn \\langle T^n(A)\\,B\\rangle = \\langle A \\rangle\\langle B \\rangle, $$\nwhere $\\langle C \\rangle \\eqq \\omega(C)$ denotes the expectation of the state $\\omega$.\n\nThis allows for a simple criterion of spontaneous breaking of a symmetry $\\b$ which commutes with the group of translations $\\T$ (called an internal symmetry): \n\n\\noindent {\\em an internal symmetry $\\b$ is spontaneously broken in the pure phase $\\Gamma$ if and only if one of the correlation functions of (the translationally invariant state) $\\omega$ $\\in \\Gamma$ is not invariant under} $\\b$, \n\\be{\\langle \\b(A) \\rangle \\neq \\langle A \\rangle,}\\ee equivalently $$\\limn \\langle T^n (\\b(A)) B \\rangle - \\langle A \\rangle \\langle B \\rangle \\neq 0.$$ \n In fact, if $\\omega$ is invariant under $\\b$, by a general result $\\b$ may be described by a unitary operator $U_\\b$ in the Hilbert space $\\H_\\Gamma$ which describes $\\Gamma$, i.e. $\\b$ defines a Wigner symmetry of $\\Gamma$; conversely, the transformed state $\\omega_\\b$, defined by $\\omega_\\b(A) \\eqq \\omega(\\b(A)) \\neq \\omega(A)$, is invariant under $\\T$ (because so is $\\omega$ and $\\b$ commutes with $\\T$), and by eq.\\,(1) and the uniqueness of the translationally invariant state in $\\Gamma$, $\\omega_\\b$ must belong to a different phase (and therefore $\\b $ cannot be described by a unitary operator). If such a uniqueness property (crucially related to the pure phases) does not hold, one cannot exclude the existence of a unitary operator $U_\\b$ which relates the corresponding state vectors $\\Psi_{\\omega_\\b} = U_\\b \\Psi_\\omega$, in $\\H_\\Gamma$.
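A finite-size caricature of this criterion (an illustration of ours, not from the text): for a product state with all spins along $x$, the internal symmetry $\\b: \\sigma_x \\mapsto -\\sigma_x$ (rotation by $\\pi$ around $z$) has a non-invariant expectation, while correlations of translated observables factorize exactly, as in the cluster property above:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
plus_x = np.array([1., 1.]) / np.sqrt(2.)   # sigma_x eigenstate, eigenvalue +1

N = 6
psi = plus_x
for _ in range(N - 1):
    psi = np.kron(psi, plus_x)              # translation-invariant product state

def site_op(op, i):
    # op acting on site i of the N-site chain (identity elsewhere)
    out = np.eye(1)
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def expect(op):
    return float(psi @ op @ psi)

a = expect(site_op(sx, 0))                   # order parameter, approximately 1
b = expect(site_op(-sx, 0))                  # <beta(sigma_x)>, approximately -1
ab = expect(site_op(sx, 4) @ site_op(sx, 0)) # two-point function
print(a, b, abs(ab - a * expect(site_op(sx, 4))))  # last value essentially 0
```

Since $\\langle\\b(\\sigma_x)\\rangle\\neq\\langle\\sigma_x\\rangle$ while the state is translation invariant and clustering, the criterion would declare $\\b$ broken in the corresponding infinite-volume phase.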
\n\nIt is worthwhile to stress that such a criterion of symmetry breaking, which crucially uses invariance under a group of translations, does not require that $\\omega$ is the ground state, even if invariance under a group of translations (not necessarily the full group of translations!) is generically shared by ground states, and therefore it may be used for the breaking of time translations in a pure phase. \n\n\\section{A mechanism for realizing quantum time crystals. A spin model} \nThe next issue is the identification of pure phases defined by a translationally invariant state, which is not invariant under time translations. A possible practical way is to consider a quantum system defined by a Hamiltonian $H$ and a corresponding translationally invariant ground state $\\omega$, which defines a pure phase. Then, one switches on an interaction $V$, typically with an external field, formally $H \\mapsto H + V$, such that $\\omega $ is no longer invariant under the time evolution $\\a_t$ defined on the observables $A$\n by the infinite volume limit of (volume-)cutoffed Hamiltonian groups \n\\be{ \\at(A) = \\limn e^{i (H + V)_n t} \\,A\\,e^{-i (H + V)_n t},}\\ee\n $n$ indexing the cutoff. The crucial condition is that $\\at$ maps observables into observables. \n In this way, one may obtain a spontaneous breaking of time translations. 
\n\nTo explicitly check this, the following criterion may be useful: \n\n{\\em In a pure phase defined by a state $\\om$ invariant under a group of translations $\\T$, an internal symmetry $\\b$ is spontaneously broken if and only if there is \na symmetry breaking order parameter $\\om(\\b(A)) \\neq \\om(A)$, for some observable $A$ or, equivalently, \nthere is an average or macroscopic observable\n\\be{ A_{av} \\eqq \\limN \\frac{1}{N} \\sum_{i = 1}^N T^i(A), \\text{ for }T \\in \\T, }\\ee with non-invariant expectation $\\omega(\\b(A_{av})) \\neq \\omega(A_{av}) = \\omega(A)$.} \n\n\nEq.\\,(3) immediately implies symmetry breaking, since, by asymptotic abelianess, $A_{av}$ commutes with all the observables so that it is represented by a multiple of the identity in each pure phase and therefore $\\b$ cannot be implemented by a unitary operator $U_\\b$, which would leave $A_{av}$ invariant. \n\nIt is worthwhile to note that the non-invariance of expectations of average\/macroscopic observables on a translationally invariant state is much easier to control, in particular for detecting the breaking of time translations. \n\n\nA simple prototype of quantum time crystals is provided by a Heisenberg (lattice) ferromagnet described by a spin one-half Hamiltonian with nearest neighbor coupling $H$. The translationally invariant state $\\omega_x$ with all the spins pointing in the $x$-direction defines an irreducible representation of the observables and a pure phase $\\Gamma$, satisfying i) and ii).\n\nThen, one introduces the interaction $H_1$ with a uniform magnetic field ${\\bf h}$ pointing in the $z$-direction.
Since $H$ and $H_1$ commute, one gets \n\\be{ \\langle \\a_t(\\sigma_{x, av}) \\rangle = \\cos(|{\\bf h}| t) = \\langle \\sigma^i_{x}(t) \\rangle.}\\ee\nBy the above criterion, this implies the breaking of time translations, leaving unbroken the group of time translations with $t = 2 n \\pi \/|{\\bf h}|$, with $\\,\\,n \\in \\mathbb Z,$ ({\\em quantum time crystal}). For such a conclusion, a crucial role has been played by the thermodynamical limit and by the translational invariance of the state, a point which has not been realized by Watanabe and Oshikawa in their discussion of a simplified version of such a model. Clearly, the impossibility of describing $\\a_t$ by a unitary operator in $\\H_\\Gamma$ prevents the definition of a Hamiltonian there.\n\nThe general lesson from this relatively simple model is that if in a given pure phase the average observables undergo a periodic motion, one has a breaking of time translations with residual invariance under periodic translations, exactly as it happens for the breaking of space translations in a crystal. \nThus, the occurrence of time crystals does not appear as a very odd, if not impossible, phenomenon as argued in the literature \\cite{B,WO}.\n\n\n\\section{An example of time periodicity enforced by topology}\n A similar mechanism may be realized in the model, discussed by Wilczek \\cite{W}, of a charged particle with charge $q$ and unit mass confined to a ring of unit radius that is threaded by (magnetic) flux $2 \\pi \\alpha\/q$.\n\nWhen $\\a = 0$ the model becomes the familiar quantum mechanical model of a particle on a ring.
In the textbook presentations, it is not sufficiently emphasized that the non-trivial topology of the circle has a strong impact on the identification of the algebra of observables $\\A$, which is restricted to be the algebra generated by the canonical momentum $p$ and by periodic functions of the angle $\\ph$, conveniently by $U(n \\ph) \\eqq e^{ i n \\ph},$ $ n \\in \\mathbb{ Z}$ \\cite{WA}.\n\n Since the observable $V( 2 \\pi) \\eqq e^{ i 2 \\pi \\,p}$ commutes with all the observables, its spectrum, i.e. the expectation $\\langle V(2 \\pi) \\rangle_\\theta = e^{i \\theta}$, $\\theta \\in [ 0, 2 \\pi)$ mod $2 \\pi$, labels the irreducible representations of (the algebra of) the observables hereafter called {\\em $\\theta$-sectors}, all $\\theta$'s being allowed \\cite{Autoagg}.\nThe mapping \n$$\\rho^\\theta: U(n) \\,\\mapsto \\,U(n), \\,\\,V(\\b) \\eqq e^{ i \\b \\,p} \\,\\mapsto \\,V(\\b)\\,e^{ i \\tilde{\\theta}\\, \\b}, \\,\\,\\,\\tilde{\\theta} \\eqq \\theta\/2 \\pi,$$ defines a symmetry of the observables, which is broken in each $\\theta$ sector, since it does not leave the central element $V(2 \\pi)$ invariant. The structure is very similar to that of QCD, with $V(2 n \\pi)$ playing the role of the large gauge transformations and $\\rho^\\theta$ the chiral transformations \\cite{FS, FS2}.\n \nFor $\\a \\neq 0$ the dynamics $\\a^\\a(t)$ is given by \n$\\a^\\a(t) = \\rho^{ -2 \\pi\\,\\a}\\,\\a^0(t)\\,\\rho^{ 2 \\pi\\,\\a}$, with $\\a^0(t)$ the dynamics induced by $H_0 = \\ume p^2$.
The spectrum of the corresponding Hamiltonian $H^\\a$ in each $\\theta$ sector coincides with the spectrum of $H^0$ in the sector $\\theta - \\a$.\n\nAs remarked above, to realize a spontaneous symmetry breaking of time translations it is enough to introduce a coupling of $\\ph$ with an external field $h$:\n$H^{\\a,h} = H^\\a + h \\ph.$\nSuch a Hamiltonian does not belong to the algebra of observables, but nevertheless the corresponding time evolution $\\a^{\\a,h}(t)$ maps the algebra of observables $\\A$ into itself, as required, for physical consistency.\nIn fact, by using Zassenhaus' formula, one easily gets \n $$\\a^{\\a, h}(t)(U(n)) = e^{- i \\ume t^2 h (p - \\a)}\\,\\a^\\a(t)(U(n))\\, e^{ i \\ume t^2 h (p - \\a)} \\in \\A,$$\n \\be{\\a^{\\a, h}(t)(V(\\b)) = V(\\b)\\,e^{i \\b\\, h\\,t}, \\,\\,\\,\n\\a^{\\a, h}(t)(V(2 \\pi)) = V(2 \\pi)\\,e^{i 2 \\pi h\\,t}.}\\ee \nHence, $\\a^{\\a, h}(t)$ is broken in each $\\theta$ sector, leaving as residual invariance group the time translations with $t = n\/h, \\,n \\in \\mathbb Z$.\n\nThe non-trivial topology of the circle plays a crucial role for the symmetry breaking in such a finite dimensional model; in the (reducible) representation with a decompactified $\\ph$, given by $L^2(\\mathbb R, d \\ph)$, the dynamics $\\a^{\\a, h}(t)$ is implemented by unitary operators, which, however, do not leave each $\\theta$ sector invariant. \n\n\\section{Breaking of time translations in many-particle Wilczek model}\nFor a many particle extension of the model discussed above, consider $N$ copies of the Wilczek model, e.g. by considering $N$ coaxial rings with the same unit radius, labeled by the index $i$, with the following Hamiltonian:\n$ H^{\\a, V, N} = \\sum_{i = 1}^N( H^\\a_i + V(\\ph_i)),$\nwith $V(\\ph_i)$ a periodic function, to guarantee that the corresponding time evolution $\\a^{\\a, V, N}(t)$ maps observables into observables \\cite{WF}. 
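The spectral statement above can be made concrete in the Fourier basis, where $H^\\a = \\ume (p - \\a)^2$ is diagonal on the quasi-periodic modes $e^{i(n + \\tilde{\\theta})\\ph}$; a small numerical sketch (with arbitrary illustrative values of $\\a$ and $\\theta$, and the convention that the sector label shifts by $2\\pi\\a$) checking that the spectrum of $H^\\a$ in the $\\theta$-sector matches that of $H^0$ in the shifted sector:

```python
import math

def ring_spectrum(alpha, theta, nmax=50):
    # eigenvalues of H = (1/2)(p - alpha)^2 on the unit circle in the
    # theta-sector: p acts as n + theta/(2*pi) on e^{i(n + theta/(2*pi))*phi}
    tt = theta / (2.0 * math.pi)
    return sorted(0.5 * (n + tt - alpha) ** 2 for n in range(-nmax, nmax + 1))

alpha, theta = 0.3, 1.1
with_flux = ring_spectrum(alpha, theta)
free      = ring_spectrum(0.0, theta - 2.0 * math.pi * alpha)
print(max(abs(a - b) for a, b in zip(with_flux, free)))  # essentially 0
```

The agreement is exact here because the flux enters only through the combination $n + \\tilde{\\theta} - \\a$, which is the content of the conjugation $\\a^\\a(t) = \\rho^{-2\\pi\\a}\\,\\a^0(t)\\,\\rho^{2\\pi\\a}$.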
\n\nIn order to find phases with broken time translations in the $N \\ra \\infty$ limit, consider the product state $\\Psi_0^N \\eqq \\prod_{i = 1}^N \\,\\Psi_0^i$, with $\\Psi_0^i = (2 \\pi)^{-\\ume} \\,e^{i \\a \\ph_i}$ the ground state of $H^{\\a, i}$ (one might as well take the product of the same $\\theta$ state for each index $i$). Clearly, in the thermodynamical limit such a state defines a pure phase and it is invariant under the group of ``translations'' of the index $i$, so that the criterion of Section 2 applies. \n\nTo this purpose, one may consider \n the following observable\n$$ P_N = \\frac{1}{N} \\sum_{i = 1}^N f(\\ph_i)\\,p_i\\,f(\\ph_i) \\eqq \\frac{1}{N}\\sum_{i = 1}^N p_i^f,$$ \nwith $f$ a regular function of compact support in $[0, 2 \\pi]$, with periodic extension $f(\\ph_i + 2 \\pi) = f(\\ph_i)$; the observable $p_i^f$ describes a momentum localized in the support of $f$ and its expectation on the corresponding $\\Psi_0^i $ is $\\langle p_i^f \\rangle_i = \\a\\,(2 \\pi)^{-1} \\int _0^{2 \\pi} d \\ph_i\\,|f(\\ph_i)|^2 \\neq 0$, and is independent of the index $i$. \n\n Furthermore, denoting by $\\a^{\\a, V, N}(t)$ the time evolution defined by $H^{\\a, V, N}$ and by $\\langle \\ldots \\rangle_{\\a, N}$ the expectation on the above product state, one has \n\n$$ \\frac{d}{d t} \\langle \\a^{\\a, V, N}(t)(p_i^f) \\rangle_{\\a, N}|_{t = 0} = \\int _0^{2 \\pi} \\frac{d \\ph_i}{2 \\pi} \\,|f(\\ph_i)|^2 \\,\\frac{d V(\\ph_i)}{d \\ph_i},$$\nhaving used that $\\Psi_0^i$ is the ground state of the kinetic Hamiltonian $H_i^{\\a}$. For a given potential one can always find $f$ such that the right hand side does not vanish (thanks to the fact that $\\Psi_0^i$ is not an eigenstate of $H_i^{\\a, V}$). 
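The expectation $\\langle p_i^f \\rangle_i = \\a\\,(2 \\pi)^{-1} \\int_0^{2 \\pi} d \\ph\\,|f(\\ph)|^2$ can be checked by direct quadrature; below, a sketch with the illustrative (and assumed) choice $f(\\ph)=\\sin^2\\ph$, which vanishes at the endpoints, writing $g = f\\,\\Psi_0$ and using a central-difference momentum on a periodic grid:

```python
import cmath, math

alpha = 0.7
N = 4096
h = 2.0 * math.pi / N
f   = [math.sin(k * h) ** 2 for k in range(N)]
psi = [cmath.exp(1j * alpha * k * h) / math.sqrt(2.0 * math.pi) for k in range(N)]
g   = [f[k] * psi[k] for k in range(N)]      # g = f * Psi_0

# <Psi_0| f p f |Psi_0> = \int conj(g) * (-i dg/dphi) dphi
val = sum(g[k].conjugate() * (-1j) * (g[(k + 1) % N] - g[(k - 1) % N]) / (2.0 * h)
          for k in range(N)) * h

expected = alpha / (2.0 * math.pi) * sum(x * x for x in f) * h  # alpha/(2pi) \int f^2
print(abs(val.real - expected) < 1e-4, abs(val.imag) < 1e-8)    # True True
```

In particular the result is proportional to $\\a$ and nonzero, as claimed; the grid size and the specific $f$ are arbitrary choices for the sketch.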
\n\nAs in the example of spin systems, in the infinite particle limit the average observable\n$$P_{av} = \\limN \\frac{1}{N} \\sum_{i = 1}^N p_i^f$$ commutes with the observables, it is not zero in the (factorial) representation defined by the above product state in the limit $N \\ra \\infty$, $\\langle P_{av} \\rangle_{\\a, \\infty} = \\langle p_i^f \\rangle_{\\a, \\infty}$, and \n\\be{ d\/dt \\langle \\a^{\\a, V, \\infty}(t)(P_{av}) \\rangle_{\\a, \\infty}|_{t = 0} \\neq 0.}\\ee\nThus, such an average observable has a non-trivial time evolution and therefore time translations are broken \n in such a representation.\n\nAnother possibility is to consider the average\/macroscopic observable \n$$ F(\\ph)_{av} \\eqq \\limN \\frac{1}{N} \\sum_{i=1}^N F(\\ph_i),$$\nwith $F$ a regular periodic function of compact support.\n\nProceeding as before one gets \n$$d\/dt \\langle \\a^{\\a, V, N}(t)(F(\\ph_i)) \\rangle_{\\a, N}|_{t = 0} = 0, $$\n\\be{(d\/dt)^2 \\langle \\a^{\\a, V, N}(t)(F(\\ph_i)) \\rangle_{\\a, N}|_{t = 0} = - \\langle V'(\\ph_i)\\,F'(\\ph_i) \\rangle_{\\a, N },\n}\\ee where the prime denotes the derivative with respect to $\\ph_i$. The right hand side is independent of $i$ and different from zero for all periodic functions $F$, with derivative which is not orthogonal to $V'$; therefore the same result holds for $F(\\ph)_{av}$ in the $N \\ra \\infty$ limit. By the above criterion, this implies the breaking of time translations. \n\nThe conclusions do not change if the interaction $V$ also contains a particle-particle interaction term $V_{p-p} = \\sum_{i|\\alpha^{\\exparen{s_k}}_i|,\n\\]\nwe see that~$w$ must be a factor of~$\\alpha^{\\exparen{s_k}}_j$ for some $j\\ge i+1$. All such words are composed of concatenations of~$\\alpha$ with its complement~$\\overline{\\alpha}$, and thus~$w$ must contain either~$\\alpha$ or~$\\overline{\\alpha}$ as a factor, since~$w$ is at least twice as long as both. 
The result follows because each of~$\alpha$ and~$\overline{\alpha}$ contains~$\alpha^{\exparen{s_k}}_i$ as a factor.\n\end{proof}\n\nThat these languages are wqo follows readily from Proposition~\ref{prop-contains-alpha}.\n\n\begin{proposition}\n\label{prop-Lsk-wqo}\nFor every sequence $(s_k)$ of positive integers, the set~$\L^{\exparen{s_k}}\subseteq\{0,1\}^\ast$ of words is wqo under the factor order.\n\end{proposition}\n\n\n\begin{proof}\nSuppose to the contrary that there were an infinite antichain~$w_1$,~$w_2$, $\dots$ of words from~$\L^{\exparen{s_k}}$.\nThe word~$w_1$ is contained in~$\alpha^{\exparen{s_k}}_i$ for some index $i$ because it lies in~$\L^{\exparen{s_k}}$. Letting $f^{\exparen{s_k}}(i)$ denote the function from Proposition~\ref{prop-contains-alpha}, we see that every word in~$\L^{\exparen{s_k}}$ of length at least $f^{\exparen{s_k}}(i)$ contains~$\alpha^{\exparen{s_k}}_i$ as a factor, and hence also contains~$w_1$ as a factor. Since there are only finitely many words of any given length, there is some index~$j$ such that~${|w_j|\geq f^{\exparen{s_k}}(i)}$, from which we conclude that~$w_1$ is a factor of~$w_j$, which contradicts our assumption that these words form an infinite antichain.\n\end{proof}\n\nPouzet~\cite{pouzet:sur-la-theorie-:} established that if $(s_k)$ and $(t_k)$ are distinct sequences, then~$\L^{\exparen{s_k}}$ and~$\L^{\exparen{t_k}}$ are distinct languages. As there are uncountably many sequences of positive integers, there must then be uncountably many wqo factor-closed languages over a binary alphabet. For our enumerative goal, we require something a bit more precise: not only must~$\L^{\exparen{s_k}}$ and~$\L^{\exparen{t_k}}$ be distinct languages, but there must be some length where one language contains more words of that length than the other.
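This phenomenon can be observed computationally. The sketch below is ours, not part of the paper's argument; in particular it assumes the seed $\alpha_1^{\exparen{s_k}} = 01$ together with the recursion $\alpha_{i+1}^{\exparen{s_k}} = (\alpha_i^{\exparen{s_k}})^{s_i}(\overline{\alpha}_i^{\exparen{s_k}})^{s_i}$, and approximates each language by the factors of a single long word:

```python
def complement(w):
    # exchange the 0s and 1s of a binary word
    return w.translate(str.maketrans("01", "10"))

def alpha(s, j):
    # alpha_j for the sequence s = (s_1, s_2, ...), assuming alpha_1 = 01
    w = "01"
    for i in range(j - 1):
        w = w * s[i] + complement(w) * s[i]
    return w

def factors(w, n):
    # all length-n factors of w
    return {w[i:i + n] for i in range(len(w) - n + 1)}

ws = alpha([1] * 12, 12)        # the sequence (1, 1, 1, ...)
wt = alpha([2] + [1] * 11, 10)  # the sequence (2, 1, 1, ...)

# find the first length at which the two languages differ
for n in range(1, 9):
    Ls, Lt = factors(ws, n), factors(wt, n)
    if Ls != Lt:
        break

# the languages agree below that length, and there one set of words
# is a proper subset of the other
assert n == 4 and Lt < Ls
```

For the sequence $(1,1,1,\dots)$ the words $\alpha_j$ are the iterates of the Thue--Morse substitution, and the two factor sets here first differ at length $4$.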
In fact, the result we prove is stronger still: the set of words of some given length in one language is a \emph{proper subset} of the words of that length in the other.\n\nBefore we state and prove this, however, we require a technical characterisation of the possible embeddings of~$\alpha^{\exparen{s_k}}_i$ in~$\alpha^{\exparen{s_k}}_j$ for given $i$ and any $j>i$. We again adopt the viewpoint that~$\alpha_j^{\exparen{s_k}}$ can be regarded as a word over the alphabet $\{\alpha_i^{\exparen{s_k}},\overline{\alpha}_i^{\exparen{s_k}}\}$, but to ease exposition let us use~$\alpha_i^\ast$ and~$\overline{\alpha}_i^\ast$ to denote the \emph{letters}, and~$\alpha_j^\ast$ the word over $\{\alpha_i^\ast,\overline{\alpha}_i^\ast\}$, with the property that the binary word~$\alpha_j^{\exparen{s_k}}$ is equal to~$\alpha_j^\ast$ after performing the substitutions~$\alpha_i^\ast\mapsto \alpha_i$ and~$\overline{\alpha}_i^\ast\mapsto\overline{\alpha}_i$. \n\n\begin{proposition}\label{prop-alpha-i-embeddings}%\nFor every sequence $(s_k)$ of positive integers and all integers $j>i\geq 1$, the only occurrences of~$\alpha_i$ in~$\alpha_j$ are those corresponding to a letter~$\alpha_i^\ast$ of~$\alpha_j^\ast$ and those lying in the middle of the binary word corresponding to a pair of letters~$\overline{\alpha}_i^\ast\overline{\alpha}_i^\ast$; symmetrically, the only occurrences of~$\overline{\alpha}_i$ in~$\alpha_j$ are those corresponding to a letter~$\overline{\alpha}_i^\ast$ and those lying in the middle of a pair of letters~$\alpha_i^\ast\alpha_i^\ast$.\n\end{proposition}\n\n\begin{proof}\nWe proceed by induction on $i$. For the base case $i=1$, recall that~$\alpha_1=01$ and~$\overline{\alpha}_1=10$. Given $j>1$, fix an occurrence of $01$ in~$\alpha_j$. If this occurrence starts on an odd-numbered letter of~$\alpha_j$, then in~$\alpha_j^\ast$ this occurrence corresponds either to the letter~$\alpha_1^\ast$ or~$\overline{\alpha}_1^\ast$, but it clearly cannot be the second of these. If, on the other hand, this occurrence starts on an even-numbered letter of~$\alpha_j$, then this occurrence straddles two letters of~$\alpha_j^\ast$. By inspection, the only possibility is~$\overline{\alpha}_1^\ast\overline{\alpha}_1^\ast$. A similar argument applies for the occurrences of~$\overline{\alpha}_1$ in~$\alpha_j$, and this completes the base case.\n\nSuppose that the proposition is true for some $i\geq 1$. Take $j> i+1$, and consider an occurrence of~$\alpha_{i+1}$ in~$\alpha_j$.
Since~$\\alpha_{i+1} = \\alpha_{i}^{s_{i}}\\overline{\\alpha}_{i}^{s_{i}}$, we begin by considering the occurrences of~$\\alpha_{i}^{s_{i}}$ and~$\\overline{\\alpha}_{i}^{s_{i}}$ in~$\\alpha_j$. \n\nBy induction, the only occurrences of~$\\alpha_i$ in~$\\alpha_j$ are as given in the proposition. Consequently, the only occurrences of~$\\alpha_i^{s_i}$ in~$\\alpha_j$ correspond to factors $(\\alpha_i^\\ast)^{s_i}$ in~$\\alpha_j^\\ast$, or the middle terms of the binary word corresponding to $(\\overline{\\alpha}_i^\\ast)^{s_i+1}$. A similar statement holds for the occurrences of~$\\overline{\\alpha}_i^{s_i}$ in~$\\alpha_j$.\n\nWe now consider the possible positions for our occurrence of~$\\alpha_{i+1} = \\alpha_{i}^{s_{i}}\\overline{\\alpha}_{i}^{s_{i}}$ in~$\\alpha_j$. If the first half (the word~$\\alpha_{i}^{s_{i}}$) appears in the middle of some factor of the form~$\\overline{\\alpha}_i^{s_i+1}$, then the second half of the word,~$\\overline{\\alpha}_i^{s_i}$ does not embed. Similarly, if the second half of the word embeds in the middle of a factor of the form~$\\alpha_i^{s_i+1}$ then the first half cannot embed. Therefore, the only embeddings of~$\\alpha_{i+1}$ in~$\\alpha_j$ must correspond precisely to instances of the factor $(\\alpha_i^\\ast)^{s_i}(\\overline{\\alpha}_i^\\ast)^{s_i}$ in~$\\alpha_j^\\ast$. A similar statement applies to~$\\overline{\\alpha}_{i+1}$. \n\nFinally, by construction,~$\\alpha_j^\\ast$ is composed of factors of the form $(\\alpha_i^\\ast)^{s_i}(\\overline{\\alpha}_i^\\ast)^{s_i}$ and $(\\overline{\\alpha}_i^\\ast)^{s_i}(\\alpha_i^\\ast)^{s_i}$. These correspond to the words~$\\alpha_{i+1}$ and~$\\overline{\\alpha}_{i+1}$ that can be used to make up~$\\alpha_j$, from which the inductive step follows. 
\n\\end{proof}\n\nWe now state the main result of this section; we prove it after considering an example.\n\n\\begin{proposition}\n\\label{prop-words-distinct-enum}\nSuppose that $(s_k)$ and $(t_k)$ are distinct sequences of positive integers, and that~$(s_k)$ lexicographically precedes $(t_k)$. Then there exists an integer $M\\ge 3$ such that~$\\L^{\\exparen{s_k}}_n = \\L^{\\exparen{t_k}}_n$ for all~${nI$ be such that~$\\alpha_j^{\\exparen{s_k}}$ contains~$w$. Now view~$\\alpha_j^{\\exparen{s_k}}$ as a sequence of concatenations of~$\\alpha^{s_I}$ and~$\\overline{\\alpha}^{s_I}$. Since~$w$ has length at most $s_I|\\alpha|+1$, it follows that~$w$ embeds in~$\\alpha_j^{\\exparen{s_k}}$ in at most two terms of this concatenation. That is,~$w$ appears as a factor in one of the following words:\n\\[\n\t\\alpha^{s_I}\\alpha^{s_I},\\quad\n\t\\alpha^{s_I}\\overline{\\alpha}^{s_I},\\quad\n\t\\overline{\\alpha}^{s_I}\\alpha^{s_I},\\quad\n\t\\overline{\\alpha}^{s_I}\\overline{\\alpha}^{s_I}\n\\]\nIn fact, containment in one of these four words is a precise characterization of the words of length at most $M-1$ in~$\\L^{\\exparen{s_k}}$, since all four words appear in~$\\alpha_{I+3}^{\\exparen{s_k}}$:\n\\[\n\t\\alpha_{I+3}^{\\exparen{s_k}} = \\big(\n\t\t\\left(\\alpha^{s_I}\\overline{\\alpha}^{s_I}\\right)^{s_{I+1}}\n\t\t\\left(\\overline{\\alpha}^{s_I}\\alpha^{s_I}\\right)^{s_{I+1}}\\big)^{s_{I+2}}\n\t\\big(\n\t\t\\left(\\overline{\\alpha}^{s_I}\\alpha^{s_I}\\right)^{s_{I+1}}\n\t\t\\left(\\alpha^{s_I}\\overline{\\alpha}^{s_I}\\right)^{s_{I+1}}\\big)^{s_{I+2}}\n\\]\nFurthermore, since $t_I > s_I$, all four of these words also appear in~$\\L^{\\exparen{t_k}}$, and this establishes that~$\\L^{\\exparen{s_k}}$ and~$\\L^{\\exparen{t_k}}$ contain the same words up to length $M-1$.\n\nWe now turn our attention to the words of length $M$. 
First, any word~$w$ of length $M$ that embeds in one of the four words\n\\[\n\t\\alpha^{s_I}\\alpha^{s_I},\\quad\n\t\\alpha^{s_I}\\overline{\\alpha}^{s_I},\\quad\n\t\\overline{\\alpha}^{s_I}\\alpha^{s_I},\\quad\n\t\\overline{\\alpha}^{s_I}\\overline{\\alpha}^{s_I}\n\\]\nwill lie in both~$\\L^{\\exparen{s_k}}$ and~$\\L^{\\exparen{t_k}}$, by the same argument as before. \nAny other word of length $M$ must be formed from a copy of~$\\alpha^{s_I}$ or~$\\overline{\\alpha}^{s_I}$, with exactly one letter before and one after. That is, the only remaining words to consider are the following:\n\\begin{gather*} \n0\\alpha^{s_I}0,\\quad\n1\\alpha^{s_I}0,\\quad\n0\\alpha^{s_I}1,\\quad\n1\\alpha^{s_I}1,\\\\\n0\\overline{\\alpha}^{s_I}0,\\quad\n1\\overline{\\alpha}^{s_I}0,\\quad\n0\\overline{\\alpha}^{s_I}1,\\quad\n1\\overline{\\alpha}^{s_I}1.\t\n\\end{gather*}\nLet us assume that $I$ is even; the case where $I$ is odd is analogous. By Observation~\\ref{obs-last-letters}, the word~$\\alpha$ both begins and ends with 0. 
The following table summarizes whether each of the eight words above belongs to~$\L^{\exparen{s_k}}$ and~$\L^{\exparen{t_k}}$, and if so illustrates how it arises (note that the words specified in the second and third columns are factors of~$\alpha_{I+3}^{\exparen{s_k}}$ and~$\alpha_{I+3}^{\exparen{t_k}}$, respectively).\n\n\begin{center}\n\begin{tabular}{c@{\qquad}l@{\qquad}l}\nword \t\t\t\t&in~$\L^{\exparen{s_k}}$, factor of\t&in~$\L^{\exparen{t_k}}$, factor of\\\n\hline\\\\[-10pt]\n$0\alpha^{s_I}0$\t&$\overline{\alpha}^{s_I}\overline{\alpha}^{s_I}$ (see below)\t\t\t\t&$\alpha^{t_I}\alpha^{t_I}$\\\n$1\alpha^{s_I}0$\t&$\overline{\alpha}^{s_I}\alpha^{s_I}\alpha^{s_I}$\t&$\overline{\alpha}^{t_I}\alpha^{t_I}$\\\n$0\alpha^{s_I}1$\t&$\alpha^{s_I}\alpha^{s_I}\overline{\alpha}^{s_I}$\t&$\alpha^{t_I}\overline{\alpha}^{t_I}$\\\n$1\alpha^{s_I}1$\t&$\overline{\alpha}^{s_I}\alpha^{s_I}\overline{\alpha}^{s_I}$&Not in set (see below)\\\n$0\overline{\alpha}^{s_I}0$\t&$\alpha^{s_I}\overline{\alpha}^{s_I}\alpha^{s_I}$\t&Not in set (see below)\\\n$1\overline{\alpha}^{s_I}0$\t&$\overline{\alpha}^{s_I}\overline{\alpha}^{s_I}\alpha^{s_I}$\t&$\overline{\alpha}^{t_I}\alpha^{t_I}$\\\n$0\overline{\alpha}^{s_I}1$\t&$\alpha^{s_I}\overline{\alpha}^{s_I}\overline{\alpha}^{s_I}$\t&$\alpha^{t_I}\overline{\alpha}^{t_I}$\\\n$1\overline{\alpha}^{s_I}1$\t&$\alpha^{s_I}\alpha^{s_I}$ (see below)\t\t&$\overline{\alpha}^{t_I}\overline{\alpha}^{t_I}$\n\end{tabular}\n\end{center}\nThere remain four entries in the above table to consider. Let us begin first with the word $0\alpha^{s_I}0$. Proposition~\ref{prop-alpha-i-embeddings} tells us that~$\alpha$ embeds in the middle of~$\overline{\alpha}\alphabar$, and thus~$\alpha^{s_I}$ embeds in the middle of~$\overline{\alpha}^{s_I}\overline{\alpha}^{s_I}$.
Furthermore, the letter immediately to the left of this embedding is the last letter of~$\\overline{\\alpha}_{I-1}^{\\exparen{s_k}}$, which is 0 by Observation~\\ref{obs-last-letters} (since we are assuming that $I-1$ is odd). Similarly, the first letter after this embedding is the first letter of~$\\overline{\\alpha}_{I-1}^{\\exparen{s_k}}$, which is also 0, and hence $0\\alpha^{s_I}0$ is a factor of~$\\overline{\\alpha}^{s_I}\\overline{\\alpha}^{s_I}$.\n\nA similar argument can be applied to show that $1\\overline{\\alpha}^{s_I}1\\in\\L^{\\exparen{s_k}}$, and this establishes that~${\\L_M^{\\exparen{s_k}}\\supseteq \\L_M^{\\exparen{t_k}}}$.\n\nOur final task is to show that neither $1\\alpha^{s_I}1$ nor $0\\overline{\\alpha}^{s_I}0$ lies in~$\\L^{\\exparen{t_k}}$. We consider only $1\\alpha^{s_I}1$, the case for the other word being entirely analogous. \n\nBy Proposition~\\ref{prop-alpha-i-embeddings}, the only factors of~$\\alpha_j^{\\exparen{t_k}}$ that are equal to~$\\alpha$ are either given by the \\emph{letter}~$\\alpha$, or appear in the middle of the pair of letters~$\\overline{\\alpha}\\alphabar$, when we express~$\\alpha_j^{\\exparen{t_k}}$ as a word over $\\{\\alpha,\\overline{\\alpha}\\}$. Thus, the binary word~$\\alpha^{s_I}$ appears as a factor of~$\\alpha_j^{\\exparen{t_k}}$ only as the sequence of letters~$\\alpha^{s_I}$, or in the middle of~$\\overline{\\alpha}^{s_I+1}$.\nIn this latter embedding, the letter in~$\\overline{\\alpha}^{s_I+1}$ that lies immediately to the left of such an embedding is the last letter of~$\\overline{\\alpha}_{I-1}^{\\exparen{s_k}}$, which we have already established is equal to 0. Thus $1\\alpha^{s_I}1$ does not appear in the middle of~$\\overline{\\alpha}^{s_I+1}$.\n\nThe only remaining possibility is that $1\\alpha^{s_I}1$ embeds into~$\\alpha_j^{\\exparen{t_k}}$ precisely as~$\\alpha^{s_I}$, together with one extra letter on either side. 
Now~$\\alpha_j^{\\exparen{t_k}}$ can be written as a word comprising factors of the form~$\\alpha^{t_I}$ and~$\\overline{\\alpha}^{t_I}$. Since $t_I > s_I$, we are forced to embed either the letter 1 on the left of $1\\alpha^{s_I}1$ as the rightmost letter of~$\\alpha$, or the letter 1 on the right of $1\\alpha^{s_I}1$ as the leftmost letter of~$\\alpha$. Since~$\\alpha$ both begins and ends with 0, however, neither case is possible. Thus $1\\alpha^{s_I}1$ does not embed into~$\\alpha_j^{\\exparen{t_k}}$, from which we conclude that $1\\alpha^{s_I}1\\not\\in\\L^{\\exparen{t_k}}$, as required.\n\\end{proof}\n\n\n\\section{Rightward-Yearning Pin Sequences}\n\\label{sec-rightward-yearning}\n\nOur tool for translating the results about Pouzet's languages~$\\L^{\\exparen{s_k}}$ to the permutation context are the pin sequences first introduced by Brignall, Huczynska, and Vatter~\\cite{brignall:decomposing-sim:}. These are best described with a pictorial description of the permutation pattern order. The \\emph{plot} of the permutation~$\\pi$ is the set $\\{(i,\\pi(i))\\}$ of points. Clearly every plot of a permutation is \\emph{generic} in the sense that no two of its points share the same $x$- or $y$-coordinate. Conversely, every finite generic set of points in the plane is order isomorphic to the plot of a unique permutation, in the sense that two sets of points in the plane are \\emph{order isomorphic} if the axes can be stretched and shrunk to transform one of the sets into the other.\n\nAn \\emph{axis-parallel rectangle} is any rectangle in the plane with sides parallel to the $x$- and $y$-axes. The \\emph{rectangular hull} of a set of points in the plane is defined as the smallest axis-parallel rectangle containing them. 
Given a sequence $(p_1, \\dots, p_i)$ of points in the plane, a \\emph{proper pin} for this sequence is a point $p$ that lies outside their rectangular hull and \\emph{separates} $p_i$ from $\\{p_1,\\dots,p_{i-1}\\}$, meaning that $p$ lies either horizontally or vertically between $p_i$ and the rectangular hull of $\\{p_1,\\dots,p_{i-1}\\}$. A \\emph{proper pin sequence} is then constructed by starting with two points $p_1$ and $p_2$ (whose placement we discuss later), choosing $p_3$ to be a proper pin for $(p_1,p_2)$, then choosing $p_4$ to be a proper pin for $(p_1,p_2,p_3)$, and so on. We describe pins as either \\emph{left}, \\emph{right}, \\emph{up}, or \\emph{down} based on their position relative to the rectangular hull of $\\{p_1,\\dots,p_{i}\\}$. Note that the direction of a pin uniquely specifies its position relative to the previous points in a pin sequence. We specify pin sequences with the alphabet $\\{\\mathsf{l},\\r,\\u,\\d\\}$.\n\nIt follows from their definition that proper pin sequences must turn by $90^\\circ$ with each pin. In other words, an up pin may be immediately followed by a left or a right pin, but not by another up pin or by a down pin. As our goal is to encode binary strings as proper pin sequences, we \\emph{could} therefore translate each $0$ into left or down and each $1$ into right or up, with the choices determined by the previous pin. If we were to do this, we might for example have the correspondence\n\\[\n\t01101001\n\t\\mapsto\n\t\\mathsf{drululdr}.\n\\]\n\nWhile we suspect that our results would remain true with this translation from binary words to proper pin sequences, they would undoubtably be more troublesome to prove. Instead, we restrict our attention to \\emph{rightward-yearning pin sequences}, which we define to be the pin sequences made up only of right, up, and down pins. 
This means, because proper pin sequences must turn by $90^\circ$ with every pin, that every second pin in a rightward-yearning pin sequence must be a right pin. For the other pins, we translate $0$ to be a down pin and $1$ to be an up pin. \nThus in this correspondence, we would have\n\[\n\t01101001\n\t\mapsto\n\t\mathsf{drururdrurdrdrur}.\n\]\n\nIn fact, we encode our pin sequences slightly differently than this. For reasons that will become apparent later, we subscript the letter encoding each right pin by the type of pin immediately preceding it. This enlarges our alphabet to $\{\rd,\ru,\d,\u\}$, and our previous example becomes\n\[\n\t01101001\n\t\mapsto\n\t\mathsf{d r_d u r_u u r_u d r_d u r_u d r_d d r_d u r_u}.\n\]\n(This pin sequence is plotted in Figure~\ref{fig-rightward-yearning}.) We stress that the subscripts on our encodings of right pins do not affect the actual pin sequences; these subscripts instead inform us where the corresponding points lie in the plane.\n\n\begin{figure}\n\begin{center}\n\begin{tikzpicture}[scale=0.3]\n\node[openpoint] (orig) at (0,-0.5) {};\n\foreach \y [count=\x] in {-2,1,-1,3,0,-4,2,5,-3,-6,4,-8,-5,7,-7,6}\n\t\node[point] (\y) at (\x,\y) {};\n\foreach \node [count=\n,remember=\node as \prevnode (initially orig)] in {-2,-1,1,0,3,2,-4,-3,5,4,-6,-5,-8,-7,7,6} {\n\t\ifodd\n\n\t\t\draw (\prevnode-|\node) -- ++(0,0.5)-- ++(0,-1) -- (\node);\n\t\else\n\t\t\draw (\prevnode|-\node) -- ++(-0.5,0) -- (\node);\n\t\fi\n\t}\n\end{tikzpicture}\n\end{center}\n\caption{The rightward-yearning pin sequence associated to the word $\mathsf{d r_d u r_u u r_u d r_d u r_u d r_d d r_d u r_u}$.}\n\label{fig-rightward-yearning}\n\end{figure}\n\nIt remains to explain how to start a pin sequence. In the present work, we start every pin sequence with a point called an \emph{origin} and labelled by $p_0$.
\\emph{Importantly, we do not consider the origin $p_0$ to be part of the resulting rightward-yearning pin sequence.} \n\nFor the first real pin, $p_1$, if it has encoding $\\u$ or $\\mathsf{r_u}$, then $p_1$ appears above and to the right of $p_0$. Analogously, if $p_1$ has encoding $\\d$ or $\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace$, then it is placed below and to the right of $p_0$. In either case, the second pin $p_2$ slices the rectangular hull of $(p_0,p_1)$ in the direction indicated by its encoding.\n\nAs our pin sequences progress only to the right, this origin $p_0$ lies to the left of all other pins of the pin sequence, and will lie below all pins whose encoding is $\\mathsf{u}$ or $\\mathsf{r_u}$, and above all pins whose encoding is $\\mathsf{d}$ or $\\mathsf{r_d}$. In this way, the origin partitions each of the entries of the pin sequence into two groups: those above and below the $x$-axis. This means that our pin sequences could be considered a very simple type of \\emph{grid pin sequence} (as first considered by Brignall~\\cite{brignall:grid-classes-an:}), but we do not adopt this viewpoint.\n\n\nGiven any word~$w\\in\\{\\rd,\\ru,\\d,\\u\\}^n$ in which the letters alternate between $\\{\\d,\\u\\}$ and $\\{\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace,\\mathsf{r_u}\\}$, we take the \\emph{rightward-yearning pin sequence defined by~$w$} to be the permutation $\\tau_w^\\circ$ that is order isomorphic to the set $\\{p_1,p_2,\\dots,p_n\\}$ of points defined by the word~$w$. Thus $|\\tau_w^\\circ|=|w|=n$.\n\nAs is demonstrated in Section~\\ref{sec-decomp}, in order to discuss subpermutations of rightward-yearning pin sequences, it is necessary to also introduce the permutations $\\tau_w^\\bullet$ that include the origin as an extra point. 
We take $\\tau^\\bullet_w$ to be the permutation that is order isomorphic to the set $\\{p_0,p_1,p_2,\\dots,p_n\\}$ of points defined by the word~$w$, so $|\\tau_w^\\bullet|=|w|+1=n+1$.\n\n\n\\section{The Construction}\\label{sec-construction}\n\nWe now have the necessary background to describe the family of permutation classes used to prove Theorem~\\ref{thm-wqo-not-algebraic}.\n\nGiven a binary word~$w\\in\\{0,1\\}^\\ast$, we denote by $\\rho(w)$ the word (of twice the length as~$w$) over the alphabet $\\{\\rd,\\ru,\\d,\\u\\}$ that is obtained by performing the substitutions\n\\[\n\t0\\leftarrow \\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\n\t\\quad\n\t\\text{and}\n\t\\quad\n\t1\\leftarrow \\u\\mathsf{r_u};\n\\]\nthat is, replacing occurrences of $0$ by $\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace$ and occurrences of $1$ by $\\u\\mathsf{r_u}$.\nWe are frequently interested only in the image of $\\rho$ and factors of those words, and so we define the language\n\\begin{align*}\n\t\\P\t\t\t&=\n\t\t\\text{the factor closure of $\\rho(\\{0,1\\}^\\ast)$}.\\\\\n\\intertext{Our construction itself requires us to restrict to the binary words of Section~\\ref{sec-pouzet-factors}, for which we recall that}\n\t\\L^{\\exparen{s_k}}\t&=\n\t\t\\text{the factor-closed wqo languages of Section~\\ref{sec-pouzet-factors}}.\\\\\n\\intertext{We then consider the factors of the image under $\\rho$ of those languages, and the class of permutations to which they correspond:}\n\t\\P^{\\exparen{s_k}}\t&=\n\t\t\\text{the factor closure of $\\rho(\\L^{\\exparen{s_k}})$ and}\\\\\n\t\\mathcal{C}^{\\exparen{s_k}}\t&=\n\t\t\\text{the downward closure of $\\{\\psi_w^\\circ\\::\\: w\\in\\P^{\\exparen{s_k}}\\}$}.\n\\end{align*}\n\nIt follows from the definitions that $\\P$,~$\\L^{\\exparen{s_k}}$, and $\\P^{\\exparen{s_k}}$ are closed under taking factors, while $\\mathcal{C}^{\\exparen{s_k}}$ is closed under taking subpermutations, and is thus a permutation class. 
It remains to establish that the classes $\mathcal{C}^{\exparen{s_k}}$ are wqo, and that if the sequences $(s_k)$ and $(t_k)$ differ, then the resulting classes $\mathcal{C}^{\exparen{s_k}}$ and $\mathcal{C}^{\exparen{t_k}}$ have different enumerations. These two results will follow by lifting the analogous results about the languages~$\L^{\exparen{s_k}}$, Propositions~\ref{prop-Lsk-wqo} and \ref{prop-words-distinct-enum}, to this context. In order to do this, we must first establish a decomposition result for the members of $\mathcal{C}^{\exparen{s_k}}$ in the next section. Before that, we make a simple observation now that we have the terminology to express it.\n\n\begin{proposition}\n\label{prop-Psk-to-Csk}\nSuppose that $v$ is a factor of~$w$ for words $v,w\in\P$. Then the permutation~$\psi_w^\circ$ contains the permutation~$\psi_v^\circ$ and the permutation~$\psi_w^\bullet$ contains the permutation~$\psi_v^\bullet$.\n\end{proposition}\n\begin{proof}\nFix an occurrence of~$v$ in~$w$. The pins of $\psi_w^\circ$ corresponding to this occurrence are in the same relative position to each other as the pins of~$\psi_v^\circ$, which verifies the claim for this pair of permutations. For the version of the result with origins, we note that the origin in~$\psi_w^\bullet$ is in the same relative position to the pins that correspond to the occurrence of~$v$ in~$w$ as the origin in~$\psi_v^\bullet$ is relative to the rest of the pins of that permutation.\n\end{proof}\n\n\n\section{A Decomposition}\n\label{sec-decomp}\n\nWe frame the discussion in this section as considering the effect of deleting points from rightward-yearning pin sequences. There are essentially three types of pins we can delete: the first pin, the last pin, or an interior pin.
We handle these cases below in order of their difficulty, for a word~$w\in\{\rd,\ru,\d,\u\}^n$.\n\begin{itemize}\n\item To delete the last pin from a pin sequence, we simply don't create it in the first place. Thus the permutation obtained from~$\psi^\circ_w$ by deleting the point corresponding to $p_n$ is~$\psi^\circ_{w(1)\cdots w(n-1)}$.\n\item The same argument as above holds for the first pin, so the permutation obtained from~$\psi^\circ_w$ by deleting the point corresponding to the first pin is~$\psi^\circ_{w(2)\cdots w(n)}$. \n\item Deleting the $i$\th pin from~$\psi^\circ_w$ for some index $2\le i\le n-1$ corresponds to replacing the origin in the permutation~$\psi^\bullet_{w(i+1)\cdots w(n)}$ with the permutation~$\psi^\circ_{w(1)\cdots w(i-1)}$.\n\end{itemize}\nWe call the operation in this last case \emph{inflating the origin}, and this case is the reason we introduced the permutations~$\psi^\bullet_w$ in Section~\ref{sec-rightward-yearning} in the first place. The process of removing pins from the permutations~$\psi^\bullet_w$ is also needed, but is entirely analogous to the above, and will be handled once we have introduced some additional notation.\n\nInflating origins is similar to taking the sum of two permutations. Recall that given a permutation~$\sigma$ of length $m$ and another permutation~$\tau$ of length $n$, their \emph{sum} is the permutation denoted by~$\sigma\oplus\tau$ and defined by\n\[\n\t(\sigma\oplus\tau)(i)\n\t=\n\t\left\{\begin{array}{ll}\n\t\sigma(i)&\text{if $1\le i\le m$,}\\\n\t\tau(i-m)+m&\text{if $m+1\le i\le m+n$.}\n\t\end{array}\right.\n\]\n\nThe origin in~$\psi^\bullet_w$ is always the leftmost point, so we define the more general operation of inflating the first entry of a permutation. Suppose that~$\sigma$ is a permutation of length $m$ and that $\tau$ is a permutation of length $n+1$.
Then~$\\sigma\\boxplus\\tau$ is the permutation of length $m+n$ obtained by inflating the first entry of $\\tau$ by~$\\sigma$. Formulaically, it is defined by\n\\[\n\t(\\sigma\\boxplus\\tau)(i)\n\t=\n\t\\left\\{\\begin{array}{ll}\n\t\\sigma(i)+\\tau(1)-1&\\text{if $1\\le i\\le m$,}\\\\\n\t\\tau(i-m+1)&\\text{if $m< i\\le m+n$ and $\\tau(i-m+1)<\\tau(1)$, and}\\\\\n\t\\tau(i-m+1)+m-1&\\text{if $m< i\\le m+n$ and $\\tau(i-m+1)>\\tau(1)$.}\n\t\\end{array}\\right.\n\\]\nSee Figure~\\ref{fig-interior-pin} for an example.\n\n\\begin{figure}\n{\\centering\n\\begin{tikzpicture}[scale=0.25]\n\\node[openpoint] (orig) at (0,0) {};\n\\foreach \\y [count=\\x] in {-2,2,-1,-4,1,4,-3,-6,3,-5}\n\t\\node[point] (\\y) at (\\x,\\y) {};\n\\foreach \\node [count=\\n,remember=\\node as \\prevnode (initially orig)] in {-2,-1,2,1,-4,-3,4,3,-6,-5} {\n\t\\ifodd\\n\n\t\t\\draw (\\prevnode-|\\node) -- ++(0,0.5)-- ++(0,-1) -- (\\node);\n\t\\else\n\t\t\\draw (\\prevnode|-\\node) -- ++(-0.5,0) -- (\\node);\n\t\\fi\n\t}\n\\node[openpoint,minimum width=4pt+3pt] at (-4) {};\n\\node[draw=none,fill=none] at (12,-1) {$\\rightarrow$};\n\\begin{scope}[shift={(15,-0.5)}]\n\\draw[fill=black!10,draw=none] (-0.5,-2.5) rectangle (4.5,2.5);\n\\node[openpoint] (orig) at (0,0) {};\n\\coordinate (-40) at (4,-4);\n\\foreach \\y [count=\\x] in {-2,2,-1,1\n\t\\node[point] (\\y) at (\\x,\\y) {};\n\\foreach \\y [count=\\x] in {4,-3,-5,3,-4}\n\t\\node[point] (\\y) at (\\x+4,\\y) {};\n\\foreach \\node [count=\\n,remember=\\node as \\prevnode (initially orig)] in {-2,-1,2,1,4,3} \n\t\\ifodd\\n\n\t\t\\draw (\\prevnode-|\\node) -- ++(0,0.5)-- ++(0,-1) -- (\\node);\n\t\\else\n\t\t\\draw (\\prevnode|-\\node) -- ++(-0.5,0) -- (\\node);\n\t\\fi\n\t}\n\\foreach \\node [count=\\n,remember=\\node as \\prevnode (initially -40)] in {-3,4,3,-5,-4} {\n\t\\ifodd\\n\n\t\t\\draw (\\prevnode|-\\node) -- ++(-0.5,0) -- (\\node);\n\t\\else\n\t\t\\draw (\\prevnode-|\\node) -- ++(0,0.5)-- ++(0,-1) -- 
(\\node);\n\t\\fi\n\t}\n\\end{scope}\n\\end{tikzpicture}\\par}\n\\caption{Deleting the circled interior pin from~$\\psi^\\circ_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u}\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u}\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace}$ results in~$\\psi^\\circ_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u}}\\boxplus\\psi^\\bullet_{\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u}\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace}$.}\\label{fig-interior-pin}\n\\end{figure}\n\nFrom our previous discussion, it follows that if we delete the $i$th pin from~$\\psi^\\circ_w$, where~$w\\in\\{\\rd,\\ru,\\d,\\u\\}^{n}$ and $2\\le i\\le n-1$, then we obtain the permutation\n\\[\n\t\\psi^\\circ_{w(1)\\cdots w(i-1)}\\boxplus \\psi^\\bullet_{w(i+1)\\cdots w(n)}.\n\\]\nIndeed, letting $\\varepsilon$ denote the empty word or empty permutation (as dictated by the context), and with the understanding that~$\\psi^\\circ_\\varepsilon=\\varepsilon$ while~$\\psi^\\bullet_\\varepsilon=1$, we see that for \\emph{any} $1\\le i\\le n$, the result of deleting the point corresponding to the $i$th pin $p_i$ from~$\\psi^\\circ_w$ is~$\\psi^\\circ_{w(1)\\cdots w(i-1)}\\boxplus\\psi^\\bullet_{w(i+1)\\cdots w(n)}$.\n\nAs noted earlier, we must also describe how to delete points from the permutations~$\\psi^\\bullet_w$, which follows by the same analysis. If we delete the origin $p_0$ from~$\\psi^\\bullet_w$, we obviously obtain the permutation~$\\psi^\\circ_w$.\nOtherwise, if~$w\\in\\{\\rd,\\ru,\\d,\\u\\}^{n}$ and $1\\le i\\le n$, then the result of deleting the point corresponding to the $i$th pin $p_i$ from~$\\psi^\\bullet_w$ is\n\\[\n\t\\psi^\\bullet_{w(1)\\cdots w(i-1)}\\boxplus\\psi^\\bullet_{w(i+1)\\cdots w(n)}.\n\\]\n\nOur next result puts this decomposition in the form we need it later. 
Note that we needn't include parentheses in the statement of this result because the operation $\\boxplus$ is associative.\n\n\\begin{proposition}\\label{prop-boxplus-decomp}\nFor any subpermutation~$\\pi$ of the rightward-yearning pin sequence~$\\psi^\\circ_w$, there exist nonempty words~$w_1$, $\\dots$,~$w_k\\in\\P$, each appearing as a factor in~$w$, such that\n\\[\n\t\\pi=\n\t\\psi^\\circ_{w_1}\\boxplus\\psi^\\bullet_{w_2}\\boxplus\\cdots\\boxplus\\psi^\\bullet_{w_k}.\n\\]\n\\end{proposition}\n\n\\begin{proof}\n\tFix a particular embedding of~$\\pi$ in~$\\psi_w^\\circ$, and delete the entries of~$\\psi_w^\\circ$ that are not involved in this embedding one at a time. At each step of this process, we have a permutation of the form~$\\psi^\\circ_{u_1}\\boxplus\\psi^\\bullet_{u_2}\\boxplus\\cdots\\boxplus\\psi^\\bullet_{u_\\ell}$ for some choice of words $u_1$, $\\dots$, $u_\\ell\\in \\P$ (which we may assume, by induction, appear as factors in the word~$w$), from which we must delete some specific entry. This entry must be a non-origin pin of either~$\\psi^\\circ_{u_1}$ or one of the~$\\psi^\\bullet_{u_j}$, and we've shown above that any such deletion results in another permutation of the desired form. The fact that each of the resulting words is a factor of~$w$ also follows by this inductive argument.\n\\end{proof}\n\n\nWe are now able to describe the decomposition of all permutations in some class $\\mathcal{C}^{\\exparen{s_k}}$ in terms of permutations defined by words from the language $\\P^{\\exparen{s_k}}$.\n\n\\begin{theorem}\n\\label{thm-Csk-description}\nLet $(s_k)$ be a sequence of positive integers. 
The class $\\mathcal{C}^{\\exparen{s_k}}$ consists precisely of those permutations~$\\pi$ that can be expressed as\n\\[\n\t\\pi=\\psi^\\circ_{v_1}\\boxplus\\psi^\\bullet_{v_2}\\boxplus\\cdots\\boxplus\\psi^\\bullet_{v_\\ell}\n\\]\nfor words $v_1$, $v_2$, $\\dots$, $v_\\ell\\in\\P^{\\exparen{s_k}}$ with $|v_1|+|v_2|+\\cdots+|v_\\ell|=|\\pi|$.\n\\end{theorem}\n\n\\begin{proof}\nLet~$\\pi\\in\\mathcal{C}^{\\exparen{s_k}}_n$. By the definition of $\\mathcal{C}^{\\exparen{s_k}}$, there is some word~$w\\in\\P^{\\exparen{s_k}}$ such that~$\\pi$ is contained in the permutation~$\\psi^\\circ_w$. By Proposition~\\ref{prop-boxplus-decomp}, it follows that there exist words $v_1$, $v_2$, $\\dots$, $v_\\ell$, each appearing as a factor of~$w$, such that \n\\[\n\t\\pi=\\psi^\\circ_{v_1}\\boxplus\\psi^\\bullet_{v_2}\\boxplus\\cdots\\boxplus\\psi^\\bullet_{v_\\ell}.\n\\]\nSince each $v_i$ is a factor of~$w\\in\\P^{\\exparen{s_k}}$, it follows that each $v_i\\in\\P^{\\exparen{s_k}}$. It is similarly clear that $|v_1|+|v_2|+\\cdots+|v_\\ell|=|\\pi|$.\n\nConversely, we consider a permutation~$\\pi$ as in the statement of the theorem and appeal to the properties of the language~$\\L^{\\exparen{s_k}}$. By definition, for each $v_i\\in\\P^{\\exparen{s_k}}$, there exists an integer $j_i$ such that $v_i$ is a factor of $\\rho(\\alpha_{j_i}^{\\exparen{s_k}})$. Set $m=\\max\\{j_1,\\dots,j_\\ell\\}$, so that each of $v_1$, $\\dots$, $v_\\ell$ appears as a factor in $\\rho(\\alpha_m^{\\exparen{s_k}})$.\n\nLetting $f^{\\exparen{s_k}}$ denote the function from Proposition~\\ref{prop-contains-alpha}, we see from that result that every (binary) word of length at least $f^{\\exparen{s_k}}(m)$ in~$\\L^{\\exparen{s_k}}$ contains $\\alpha_m^{\\exparen{s_k}}$. It follows that every word in $\\P^{\\exparen{s_k}}$ of length at least $2f^{\\exparen{s_k}}(m)+2$ contains $\\rho(\\alpha_m^{\\exparen{s_k}})$, and thus also contains all of $v_1$, $v_2$, $\\dots$, $v_\\ell$. 
(The ``$+2$'' here is due to the fact that the first and last letter of a word in $\\P$ need not be part of a factor of the form $\\rho(u)$.) It follows that every word in $\\P^{\\exparen{s_k}}$ of length at least $\\ell\\cdot(2f^{\\exparen{s_k}}(m)+2)+(\\ell-1)$ must contain a word~$w$ of the form\n\\[\n\tw = v_1x_1v_2x_2\\cdots x_{\\ell-1}v_\\ell\n\\]\nwhere $x_1$, $\\dots$, $x_{\\ell-1}$ are arbitrary non-empty words. Thus~$\\pi$ is a subpermutation of~$\\psi^\\circ_w$.\n\\end{proof}\n\nIt is tempting to conclude from Theorem~\\ref{thm-Csk-description} that each $\\mathcal{C}^{\\exparen{s_k}}$ is $\\boxplus$-closed in the sense that~${\\sigma,\\tau\\in\\mathcal{C}^{\\exparen{s_k}}}$ implies that~$\\sigma\\boxplus\\tau\\in\\mathcal{C}^{\\exparen{s_k}}$, but we need to be careful: if we simply inflate the first entry of $\\tau$ by~$\\sigma$, then what results is not guaranteed to lie in $\\mathcal{C}^{\\exparen{s_k}}$, since it is not necessarily the case that $\\tau$ can be described in such a way that its first entry can act as a (non-phantom) origin. However, since~${\\tau\\in\\mathcal{C}^{\\exparen{s_k}}}$, Theorem~\\ref{thm-Csk-description} tells us that we can write\n\\[\n\t\\tau=\\psi^\\circ_{v_1}\\boxplus\\psi^\\bullet_{v_2}\\boxplus\\cdots\\boxplus\\psi^\\bullet_{v_\\ell}\n\\]\nfor some $v_1$, $v_2$, $\\dots$, $v_\\ell$ in $\\P^{\\exparen{s_k}}$, and then Theorem~\\ref{thm-Csk-description} tells us that\n\\[\n\t\\sigma\\boxplus \\psi^\\bullet_{v_1}\\boxplus\\psi^\\bullet_{v_2}\\boxplus\\cdots\\boxplus\\psi^\\bullet_{v_\\ell}\n\t\t\\in\\mathcal{C}^{\\exparen{s_k}}.\n\\]\n\n\n\\section{Indecomposable Permutations}\\label{sec-indecomposable}\n\nOne calls a permutation \\emph{sum decomposable} if it can be expressed as the sum of two shorter permutations, and \\emph{sum indecomposable} otherwise. 
We analogously call a permutation \\emph{$\\boxplus$-decomposable} if it can be expressed as~$\\sigma\\boxplus\\tau$ for two shorter permutations each of length at least two, and \\emph{$\\boxplus$-indecomposable} otherwise. (We must require that both~$\\sigma$ and $\\tau$ have length at least $2$ to avoid trivial decompositions, because $1\\boxplus\\pi=\\pi\\boxplus 1=\\pi$.)\n\n\\begin{table}\n\\[\n\t\\begin{array}{@{}c@{\\quad}c@{}}\n\t\t\\begin{array}{rcll}\n\t\t\\hline\\\\[-10pt]\n\t\t1\t\t&=&\\psi_\\u^\\circ=\\psi_\\d^\\circ=\\psi_\\r^\\circ\t\t&\\text{$\\boxplus$-indecomposable}\\\\[2pt]\n\t\t\\hline\\\\[-10pt]\n\t\t12\t\t&=&\\psi_{\\r\\d}^\\circ=\\psi_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace}^\\circ\t\t&\\text{$\\boxplus$-indecomposable}\\\\\n\t\t21\t\t&=&\\psi_{\\r\\u}^\\circ=\\psi_{\\u\\mathsf{r_u}}^\\circ\t\t\t&\\text{$\\boxplus$-indecomposable}\\\\[2pt]\n\t\t\\hline\\\\[-10pt]\n\t\t123\t\t&=&12\\boxplus 12\\\\\n\t\t132\t\t&=&\\psi_{\\r\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace}^\\circ=\\psi_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u}^\\circ\t\t\t&\\text{$\\boxplus$-indecomposable}\\\\\n\t\t213\t\t&=&21\\boxplus 12=\\psi_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d}^\\circ\\\\\n\t\t231\t\t&=&12\\boxplus 21=\\psi_{\\u\\mathsf{r_u}\\u}^\\circ\\\\\n\t\t312\t\t&=&\\psi_{\\u\\mathsf{r_u}\\d}^\\circ=\\psi_{\\r\\u\\mathsf{r_u}}^\\circ\t\t\t&\\text{$\\boxplus$-indecomposable}\\\\\n\t\t321\t\t&=&21\\boxplus 21\\\\[2pt]\n\t\t\\hline\\\\[19.625pt]\n\t\t\\end{array}\n\t&\n\t\t\\begin{array}{rcll}\n\t\t\\hline\\\\[-10pt]\n\t\t1234\t&=&12\\boxplus 12\\boxplus 12\\\\\n\t\t1243\t&=&12\\boxplus 132\\\\\n\t\t1324\t&=&132\\boxplus 12\\\\\n\t\t1342\t&=&\\psi_{\\r\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u}^\\circ\t\t\t\t\t&\\text{$\\boxplus$-indecomposable}\\\\\n\t\t1423\t&=&\\psi_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u}}^\\circ\t\t&\\text{$\\boxplus$-indecomposable}\\\\\n\t\t1432\t& 
&\t\t\t\t\t\t\t\t\t&\\text{$\\boxplus$-indecomposable}\\\\[2pt]\n\t\t\\hline\\\\[-10pt]\n\t\t2134\t&=&21\\boxplus 12\\boxplus 12\\\\\n\t\t2143\t&=&21\\boxplus 132\\\\\n\t\t2314\t&=&12\\boxplus 21\\boxplus 12\\\\\n\t\t2341\t&=&12\\boxplus 12\\boxplus 21\\\\\n\t\t2413\t&=&\\psi_{\\u\\mathsf{r_u}\\u\\mathsf{r_u}}^\\circ=\\psi_{\\r\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d}^\\circ&\\text{$\\boxplus$-indecomposable}\\\\\n\t\t2431\t&=&132\\boxplus 21\\\\[2pt]\n\t\t\\hline\n\t\t\\end{array}\n\t\\end{array}\n\\]\n\\caption{Decompositions of permutations of lengths at most four. An initial letter $\\r$ corresponds to either $\\mathsf{r_u}$ or $\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace$.}\n\\label{table-short-decomps}\n\\end{table}\n\nTable~\\ref{table-short-decomps} shows the decomposition of permutations of lengths at most four, as well as their expressions of the form of~$\\psi_w^\\circ$, for those permutations that can be expressed that way. To cut the number of cases in half, when considering permutations of length four we utilize the fact that these concepts are invariant under \\emph{complementation} of permutations (flipping their plots upside down).\n\nFrom Table~\\ref{table-short-decomps} we see that there are eight $\\boxplus$-indecomposable permutations of length four ($1342$, $1423$, $1432$, $2413$, and their complements). In this table, an initial letter $\\r$ corresponds to either $\\mathsf{r_u}$ or $\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace$, so of the permutations of length four, only $1423$ and its complement $4132$ have unique representations of the form~$\\psi_w$, while the other six $\\boxplus$-indecomposable permutations do not. The permutation $1432$ and its complement $4123$ in fact cannot be expressed in terms of $\\boxplus$ and permutations of the form~$\\psi^\\circ_w$ at all. 
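The decompositions recorded in Table~\ref{table-short-decomps} can also be checked mechanically. The following Python sketch (an illustration, not part of the paper's formal development; the function names are ours) implements $\sigma\boxplus\tau$ as the inflation of the first entry of $\tau$ by $\sigma$, as described above, together with the resulting brute-force test for $\boxplus$-decomposability.

```python
# Illustrative sketch: sigma (+) tau inflates the first entry of tau by sigma,
# so |sigma (+) tau| = |sigma| + |tau| - 1, and 1 (+) pi = pi (+) 1 = pi.
def boxplus(sigma, tau):
    """Inflate the first entry of tau by sigma (one-line notation, 1-based)."""
    t0 = tau[0]
    # sigma occupies the first |sigma| positions, its values forming the
    # interval [t0, t0 + len(sigma) - 1]; larger values of tau are shifted up
    inflated = [t0 + s - 1 for s in sigma]
    rest = [t + len(sigma) - 1 if t > t0 else t for t in tau[1:]]
    return inflated + rest

def is_boxplus_decomposable(pi):
    """pi = sigma (+) tau with |sigma|, |tau| >= 2 holds exactly when the
    first k entries of pi form an interval of values, 2 <= k <= len(pi)-1."""
    n = len(pi)
    return any(max(pi[:k]) - min(pi[:k]) == k - 1 for k in range(2, n))
```

For instance, \texttt{boxplus([1,3,2], [2,1])} returns \texttt{[2,4,3,1]}, matching the entry $2431=132\boxplus 21$ of the table, and \texttt{is\_boxplus\_decomposable} reports \texttt{False} precisely for the entries marked $\boxplus$-indecomposable, in particular for $1432$ and its complement $4123$.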
This means that these two permutations are not subpermutations of any rightward-yearning pin sequence, and thus they do not arise in the classes we construct.\n\nWe show next that all sufficiently long rightward-yearning pin sequences are $\\boxplus$-indecomposable. The bound $|w|\\ge 4$ is necessary because, as shown in Table~\\ref{table-short-decomps},~$\\psi_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d}^\\circ=213=21\\boxplus 12$ and, symmetrically,~$\\psi_{\\u\\mathsf{r_u}\\u}^\\circ=231=12\\boxplus 21$.\n\n\\begin{lemma}\\label{lem-pins-box-indecomposable}\nSuppose that~$w\\in\\P$. If $|w|\\ge 3$, then~$\\psi_w^\\bullet$ is $\\boxplus$-indecomposable. If $|w|\\ge 4$, then~$\\psi_w^\\circ$ is also $\\boxplus$-indecomposable.\n\\end{lemma}\n\n\n\\begin{proof}\nWe proceed by induction on the length of~$w$. The base case for the claim about~$\\psi_w^\\circ$ follows from an examination of Table~\\ref{table-short-decomps}. For the claim about~$\\psi_w^\\bullet$, the base case follows from the following computations:~$\\psi^\\bullet_{\\u\\mathsf{r_u}\\u}=1342$, $\\psi^\\bullet_{\\u\\mathsf{r_u}\\d}=\\psi^\\bullet_{\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u}}=2413$, $\\psi^\\bullet_{\\mathsf{r_u}\\u\\mathsf{r_u}}=1423$, $\\psi^\\bullet_{\\mathsf{r_u}\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace}=\\psi^\\bullet_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u}=3142$, $\\psi^\\bullet_{\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d}=4213$, and $\\psi^\\bullet_{\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace}=4132$.\n\nNow let $v$ denote the prefix of~$w$ comprising all but the last letter, and suppose by induction that~$\\psi_v^\\circ$ (resp.,~$\\psi_v^\\bullet$) is $\\boxplus$-indecomposable.\nThe only way in which~$\\psi_w^\\circ$ (resp.,~$\\psi_{w}^\\bullet$) could be $\\boxplus$-decomposable is if the last letter corresponds to a point that is inserted 
northeast or southeast of all of~$\\psi_{v}^\\circ$ (resp.,~$\\psi_{v}^\\bullet$), or if it is inserted next to the origin. Neither case is possible since pins must separate the predecessor pin from all the earlier ones. This means that the final pin cannot be placed in the top-right or bottom-right corner of~$\\psi_w^\\circ$ (resp.\\,$\\psi_{w}^\\bullet$), and also, since $|\\psi_v^\\circ|\\ge 4$ (resp., $|\\psi_v^\\bullet|\\ge 4$), there are at least two other pins whose positions come between the origin and the final pin. Thus~$\\psi_w^\\circ$ and~$\\psi_w^\\bullet$ are $\\boxplus$-indecomposable.\n\\end{proof}\n\nNext we show that, except for a small number of shorter permutations, all of the $\\boxplus$-indecomposable permutations that arise in our classes $\\mathcal{C}^{\\exparen{s_k}}$ are of the form~$\\psi^\\circ_w$.\n\n\\begin{proposition}\\label{prop-boxplus-indecomp}\nA subpermutation~$\\pi$ of a rightward-yearning pin sequence is $\\boxplus$-indecomposable if and only if~$\\pi\\in\\{1,12,21,132,312\\}$ or~$\\pi=\\psi_w^\\circ$ for some word~$w\\in\\P$ with $|w|\\ge 4$.\n\\end{proposition}\n\n\\begin{proof}\nFor permutations of lengths at most three, the result follows from an examination of Table~\\ref{table-short-decomps}. Now suppose that~$\\pi$ is a subpermutation of a rightward-yearning pin sequence and $|\\pi|\\ge 4$. If~$\\pi=\\psi^\\circ_w$ for some word~$w\\in\\P$, then $|w|\\ge 4$, so Lemma~\\ref{lem-pins-box-indecomposable} shows that~$\\pi$ is $\\boxplus$-indecomposable. Conversely, suppose that~$\\pi$ is $\\boxplus$-indecomposable. 
Since~$\\pi$ is a subpermutation of a rightward-yearning pin sequence, Proposition~\\ref{prop-boxplus-decomp} shows that there are nonempty words~$w_1$, $\\dots$,~$w_k\\in\\P$ such that\n\\[\n\t\\pi=\n\t\\psi_{w_1}^\\circ\\boxplus\\psi_{w_2}^\\bullet\\boxplus\\cdots\\boxplus\\psi_{w_k}^\\bullet.\n\\]\nObviously, the only way that~$\\pi$ could be $\\boxplus$-indecomposable in this case would be if all of the words~$w_i$ except one were empty. If~$w_j$ is the nonempty word then, by noting that~$\\psi_\\varepsilon^\\bullet=1$ and~$\\psi_\\varepsilon^\\circ\\boxplus\\psi_w^\\bullet =\\psi_w^\\circ$, we have~$\\pi=\\psi_{w_j}^\\circ$, and thus $|w_j|=|\\pi|\\ge 4$, completing the proof.\n\\end{proof}\n\nFinally, we investigate the uniqueness of the words encoding these $\\boxplus$-indecomposable permutations. In this direction, we are primarily interested in the uniqueness of words for permutations without an origin, but it is easier to first establish the result for permutations with an origin. 
Note that the bound $|w|\\geq 4$ is best possible because~$\\psi^\\bullet_{\\u\\mathsf{r_u}\\d}=\\psi^\\bullet_{\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u}}=2413$.\n\n\\begin{table}\n\\[\n\t\\begin{array}{c|c}\n\t\tw&\\psi_w^\\bullet\\\\\n\t\t\\hline\\\\[-10pt]\n\t\t\\mathsf{ur_uur_u}&13524\\\\\n\t\t\\mathsf{ur_udr_d}&35142\\\\\n\t\t\\mathsf{r_uur_uu}&14253\\\\\n\t\t\\mathsf{r_uur_ud}&25314\\\\\n\t\t\\mathsf{r_udr_du}&31452\\\\\n\t\t\\mathsf{r_udr_dd}&42513\n\t\\end{array}\n\t\\quad\\quad\\quad\n\t\\begin{array}{c|c}\n\t\tw&\\psi_w^\\bullet\\\\\n\t\t\\hline\\\\[-10pt]\n\t\t\\mathsf{dr_dur_u}&31524\\\\\n\t\t\\mathsf{dr_ddr_d}&53142\\\\\n\t\t\\mathsf{r_dur_uu}&24153\\\\\n\t\t\\mathsf{r_dur_ud}&35214\\\\\n\t\t\\mathsf{r_ddr_du}&41352\\\\\n\t\t\\mathsf{r_ddr_dd}&52413\n\t\\end{array}\n\\]\n\\caption{The 12 words of length 4 in $\\P$ and their corresponding permutations~$\\psi_w^\\bullet$.}\n\\label{table-short-words-with-origin}\n\\end{table}%\n\n\\begin{proposition}\\label{prop-long-perms-have-unique-words}\nIf~$\\psi_v^\\bullet=\\psi_w^\\bullet$ for words $v,w\\in\\P$ satisfying $|v|=|w|\\ge 4$, then $v=w$.\n\\end{proposition}%\n\n\\begin{proof}\nWe proceed by induction on $|v|=|w|$. The base case of $|v|=|w|=4$ follows from an examination of Table~\\ref{table-short-words-with-origin}. Now suppose that $|v|=|w|\\ge 5$. Note that the first entry of~$\\psi_v^\\bullet$ must correspond to the origin $p_0$, and every entry of~$\\psi_v^\\bullet$ that lies above the origin must correspond to $\\u$ or $\\mathsf{r_u}$, while every entry that lies below the origin must correspond to $\\d$ or $\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace$.\n\nConsider the rightmost entry of~$\\psi_v^\\bullet$, which we may assume lies above the origin as the other case follows by a symmetrical argument. 
In this case, the last right step in both $v$ and~$w$ is encoded by the letter $\\mathsf{r_u}$, and this letter is either the final or penultimate letter of both $v$ and~$w$.\n\nIf $\\mathsf{r_u}$ is the final letter of both $v$ and~$w$, then we can remove it---the permutations are still equal, and thus by induction, so are the words $v$ and~$w$ with the last letter removed from each. Similarly, if $\\mathsf{r_u}$ is the penultimate letter in both $v$ and~$w$, then the final letter of each word must be the same (corresponding to the second to last entry of~$\\psi_v^\\bullet=\\psi_w^\\bullet$), and again we can remove it and apply induction. In either case, we conclude that $v=w$.\n\nIt remains to consider the case where, without loss of generality, $\\mathsf{r_u}$ is the final letter of $v$, and the penultimate letter of~$w$. Thus we have, say, $v=x\\mathsf{r_u}$ and~$w=y\\mathsf{r_u} z$ where $x$ and $y$ are words of lengths at least four and three, respectively, and $z\\in\\{\\u,\\d\\}$. Removing the rightmost point of~$\\psi_v^\\bullet=\\psi_{w}^\\bullet$ corresponds in each case to removing this final $\\mathsf{r_u}$. In the case of~$\\psi_v^\\bullet$, this leaves us with~$\\psi_x^\\bullet$, since $\\mathsf{r_u}$ was the last pin, whereas for~$\\psi_{w}^\\bullet$ we obtain~$\\psi_y^\\bullet\\boxplus\\psi_z^\\bullet$ since $\\mathsf{r_u}$ was an interior pin. However, we must have~$\\psi_x^\\bullet=\\psi_y^\\bullet\\boxplus\\psi_z^\\bullet$, and this is impossible since~$\\psi_x^\\bullet$ is $\\boxplus$-indecomposable by Lemma~\\ref{lem-pins-box-indecomposable}, while~$\\psi_y^\\bullet\\boxplus\\psi_z^\\bullet$ is not.\n\\end{proof}\n\nFor pin sequences without an origin, the analogous result must account for the fact that~${\\psi^\\circ_{\\mathsf{r_u} x}=\\psi^\\circ_{\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace x}}$ for all words $x\\in\\P$. 
Additionally, we note that~$\\psi_{\\u\\mathsf{r_u}\\u\\mathsf{r_u}}^\\circ=\\psi_{\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d}^\\circ=2413$, so our result must start with words of length $5$.\n\n\\begin{proposition}\\label{prop-long-perms-have-unique-words-norigin}\nIf~$\\psi^\\circ_v=\\psi^\\circ_w$ for words $v,w\\in\\P$ satisfying $|v|=|w|\\ge 5$, then either~${v=w}$, or~${v=\\mathsf{r_u} x}$ and ${w=\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace x}$ for some word~$x\\in\\P$.\n\\end{proposition}\n\n\\begin{proof}\nConsider a word $v\\in\\P$ of length at least 5. We first claim that there exists a function~${\\phi \\::\\: \\P\\to\\P}$ with $|\\phi(v)|=|v|-1$ such that~$\\psi^\\circ_v=\\psi^\\bullet_{\\phi(v)}$.\n\nTo establish the claim, we recall that~$\\psi^\\circ_v=\\psi^\\circ_\\varepsilon\\boxplus\\psi^\\bullet_v$, and consider the effect of deleting the origin from~$\\psi^\\bullet_v$. The second-leftmost entry of~$\\psi^\\bullet_v$ becomes the new origin, and its position relative to the rest of the permutation depends upon the first few letters of $v$. 
The following chart defines the function~$\\phi$ for words of length at least 3, and in each case it is straightforward to verify that~$\\psi_v^\\circ=\\psi_{\\phi(v)}^\\bullet$.\n\\[\\begin{array}{ccc}\n\tv&\\quad&\\phi(v)\\\\\n\t\\hline\n\t\\u\\mathsf{r_u} x&&\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace x\\\\\n\t\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace x&&\\mathsf{r_u} x\\\\\n\t\\begin{aligned} \\mathsf{r_u}\\u\\mathsf{r_u} x\\\\[-3pt] \\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\u\\mathsf{r_u} x\\end{aligned}&\\Big\\}&\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace x\\\\\n\t\\begin{aligned} \\mathsf{r_u}\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace x\\\\[-3pt] \\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace\\d\\ensuremath{^{\\text{\\scriptsize rd}}}\\xspace x\\end{aligned}&\\Big\\}&\\u\\mathsf{r_u} x\n\\end{array}\\]\n(It would be possible to define~$\\phi$ explicitly for shorter words as well, but we do not need to do so.)\n\nNow consider words $v,w$ of length $\\ge 5$ such that~$\\psi^\\circ_v=\\psi^\\circ_w$. We have \n\\[\n\t\\psi^\\bullet_{\\phi(v)}=\\psi^\\circ_v=\\psi^\\circ_w=\\psi^\\bullet_{\\phi(w)},\n\\]\nand since $|\\phi(v)|=|v|-1 \\ge 4$, Proposition~\\ref{prop-long-perms-have-unique-words} implies that~$\\phi(v)=\\phi(w)$. By the definition of~$\\phi$, it now follows that either $v=w$, or $v$ and~$w$ differ only in their first letter, as required. \n\\end{proof}\n\n\n\nThe combination of Theorem~\\ref{thm-Csk-description} with Propositions~\\ref{prop-long-perms-have-unique-words} and~\\ref{prop-long-perms-have-unique-words-norigin} provides us with a guarantee that any permutation~$\\pi\\in\\mathcal{C}^{\\exparen{s_k}}$ corresponds to an almost-unique collection of words $v_1$, $v_2$, $\\dots$, $v_\\ell\\in\\P^{\\exparen{s_k}}$, provided that $|v_1|\\ge 5$ and $|v_i|\\ge 4$ for $i\\ge 2$. 
If such criteria are met, then the only possible ambiguity arises in the first letter of $v_1$.\n\n\n\\section{Well-Quasi-Order}\n\\label{sec-wqo}\n\nThe fact that the classes $\\mathcal{C}^{\\exparen{s_k}}$ are, in a sense, ``$\\boxplus$ closed'' enables us to establish that these classes are all wqo, although this takes a bit of preparation. Recall that the binary languages~$\\L^{\\exparen{s_k}}$ are wqo under the factor order by Proposition~\\ref{prop-Lsk-wqo}. First we must lift this property to our pin sequence languages $\\P^{\\exparen{s_k}}$.\n\n\\begin{proposition}\n\\label{prop-Psk-wqo}\nFor every sequence $(s_k)$ of positive integers, the set $\\P^{\\exparen{s_k}}\\subseteq\\{\\rd,\\ru,\\d,\\u\\}^\\ast$ of words is wqo under the factor order.\n\\end{proposition}\n\\begin{proof}\nGiven any word~$w$, we define $\\Delta_L(w)$ to be the word obtained by removing the first letter of~$w$ (assuming that~$w$ is nonempty). If $u$ is contained as a factor in~$w$, it follows that $\\Delta_L(u)$ is contained as a factor in $\\Delta_L(w)$. Therefore, if the language $\\mathcal{S}$ is wqo under the factor order, then the language $\\Delta_L(\\mathcal{S})$ is wqo under the factor order. We similarly define $\\Delta_R(w)$ to be the word obtained by removing the last letter of~$w$.\n\nBy definition, for every word $v\\in\\P^{\\exparen{s_k}}$, there is some word~$w\\in\\L^{\\exparen{s_k}}$ for which $v$ is a factor of $\\rho(w)$. Indeed, if we take~$w$ to be minimal, then $v$ comprises all but possibly the first and last letter of $\\rho(w)$. 
Thus we can express $\\P^{\\exparen{s_k}}$ as\n\\[\n\t\\P^{\\exparen{s_k}}\n\t=\n\t\\rho(\\L^{\\exparen{s_k}})\n\t\\cup\\Delta_L(\\rho(\\L^{\\exparen{s_k}}))\n\t\\cup\\Delta_R(\\rho(\\L^{\\exparen{s_k}}))\n\t\\cup\\Delta_L(\\Delta_R(\\rho(\\L^{\\exparen{s_k}}))).\n\\]\nThis shows that $\\P^{\\exparen{s_k}}$ is the union of four wqo posets, and is therefore wqo itself.\n\\end{proof}\n\nTo go from the languages $\\P^{\\exparen{s_k}}\\subseteq\\{\\rd,\\ru,\\d,\\u\\}^\\ast$ to the permutation classes $\\mathcal{C}^{\\exparen{s_k}}$, we need to first recall the setting and statement of Higman's lemma. Given a poset $(X,\\le)$, we denote by $X^\\ast$ the set (or language) of all words with letters from $X$. The \\emph{generalized subword order} on $X^\\ast$ is defined by stipulating that the word $v=v(1)\\cdots v(k)$ is contained in the word~$w=w(1)\\cdots w(n)$ if and only if~$w$ has a subsequence~$w({i_1})w({i_2})\\cdots w({i_k})$ such that $v(j)\\le w({i_j})$ for all indices $j$. The following is a weakened version of Higman's original result.\n\n\\newtheorem*{higmans-lemma}{\\rm \\textbf{Higman's lemma}~\\cite{higman:ordering-by-div:}}\n\\begin{higmans-lemma}\nIf $(X,\\le)$ is wqo, then $X^\\ast$ is also wqo, under the generalized subword order.\n\\end{higmans-lemma}\n\nHigman's lemma immediately implies (via Proposition~\\ref{prop-Psk-wqo}) that the poset $(\\P^{\\exparen{s_k}})^\\ast$ is wqo under the generalized subword order. 
Note that in this poset, the ``letters'' of a ``word'' are in fact words from~$\\P^{\\exparen{s_k}}$ (which are themselves defined over the alphabet $\\{\\rd,\\ru,\\d,\\u\\}$).\n\nNow define a mapping~$\\Phi\\::\\: (\\P^{\\exparen{s_k}})^\\ast\\to\\mathcal{C}^{\\exparen{s_k}}$ by\n\\[\n\t\\Phi(w)\n\t=\n\t\\psi_{w(1)}^\\circ\\boxplus\\psi_{w(2)}^\\bullet\\boxplus\\cdots\\boxplus\\psi_{w(k)}^\\bullet.\n\\]\nProposition~\\ref{prop-Psk-to-Csk} shows that the mappings~$w\\mapsto\\psi_w^\\circ$ and~$w\\mapsto\\psi_w^\\bullet$ are both order-preserving, and it then follows from the definition of $\\boxplus$ that~$\\Phi$ is order-preserving. Theorem~\\ref{thm-Csk-description} further implies that~$\\Phi$ maps surjectively onto $\\mathcal{C}^{\\exparen{s_k}}$.\n\nThe main result of this section then follows from the general fact that if the domain of an order-preserving mapping is wqo, then its range must be as well. (This fact is easily proved by contradiction, for one could pull back any infinite antichain in the range of such a mapping to find an infinite antichain in its domain.)\n\n\\begin{proposition}\n\\label{prop-Csk-wqo}\nFor every sequence $(s_k)$ of positive integers, the permutation class $\\mathcal{C}^{\\exparen{s_k}}$ is wqo.\n\\end{proposition}\n\n\n\\section{Distinct Enumerations}\n\\label{sec-enum}\n\nHaving shown that the classes $\\mathcal{C}^{\\exparen{s_k}}$ are all wqo, we now finish the proof of our main theorem by establishing that they have distinct enumerations.\n\n\\begin{theorem}\n\\label{thm-distinct-enum}\n\tSuppose that $(s_k)$ and $(t_k)$ are distinct sequences of positive integers, and that $(s_k)$ lexicographically precedes $(t_k)$. Then there exists an integer $N$ such that \n\t\\[\n\t\t\\bigcup_{n\\le N}\\mathcal{C}_n^{\\exparen{t_k}} \\subsetneq \\bigcup_{n\\le N}\\mathcal{C}_n^{\\exparen{s_k}}.\n\t\\]\n\tIn particular, the classes $\\mathcal{C}^{\\exparen{s_k}}$ and $\\mathcal{C}^{\\exparen{t_k}}$ have distinct enumeration sequences. 
\n\\end{theorem}\n\n\\begin{proof}\nLet $M$ be the integer from Proposition~\\ref{prop-words-distinct-enum}, so~$\\L^{\\exparen{s_k}}_n = \\L^{\\exparen{t_k}}_n$ for all $n<M$.\n\\end{proof}\n\n\\begin{figure*}[bpht!]\n\t\\caption{\\footnotesize \\textbf{Total spin polarization.} $S^y$ as a function of time in the adiabatic regime $\\gamma_0=0.1$. (a) Rashba coefficients $(\\alpha_1,\\,\\alpha_2)$, with $\\alpha_1>\\alpha_2$, from the upper to the lower curve are blue (1.00001, 0.99999), orange (1.05, 0.95), green (2, 1), red (3, 1) and purple (4, 1). (b) Rashba coefficients $(\\alpha_1,\\,\\alpha_2)$, with $\\alpha_1<\\alpha_2$, from the upper to the lower curve are blue (0.99, 1.01), orange (1, 1.5), green (1, 2), red (1, 3) and purple (1, 4).}\n\t\\label{Total}%\n\\end{figure*}\n\n\\begin{figure}[bpht!]\n\t\\includegraphics[width=8.5cm, height=5.2cm]{Fig3.pdf}\\\\\n\t\\caption{\\footnotesize \\textbf{Adiabatic approximation of the spin polarization.} $S^y_{ad}$ as a function of time with fixed adiabatic parameter $\\gamma_0=0.1$ and different ratios of Rashba coefficients $k=\\frac{\\alpha_1}{\\alpha_2}$ with components of magnetic field {\\bf B} = (0.1, 0.2, 0.3) for the purple ($k=0.98$), red ($k=2$), orange ($k=3$), green ($k=0.5$) and yellow ($k=4$) curves. The blue curve is taken at {\\bf B} = 0 and $\\alpha_1=\\alpha_2$, which corresponds to the characteristic result for 2DEG.}\n\t\\label{Adiabatic}\n\\end{figure}\n\nWe now turn our attention to the total spin polarization. Differently from Vignale and Tokatly~\\cite{vignale2016theory}, who kept the momentum from Eq.~(\\ref{momentum}) fixed at the Fermi level, assuming that the difference between $p_1$ and $p_2$ is small, we do not fix the momentum in our computation, i.e. the anisotropic case that we consider has the additional variable $p$. We consider the integration within the asymmetric region limited by the energies $E_1(p(\\theta,p_F))$ and $E_2(p(\\theta,p_F))$, and the Fermi level in the absence of SOC lies in this region. 
Thus, the total spin polarization must be integrated as\n\n\\begin{equation}\n\tS^y (\\tau)=\\sum_{s=1,2} \\int \\ \\frac{d^2p}{(2\\pi)^2} \\ (-1)^{s-1} \\ S^y_{\\bf p}(\\tau) \\Theta \\left(\\frac{p_F^2}{2m}-E_s({\\bf p}) \\right),\n\t\\label{Integr}\n\\end{equation} \nwhere $\\Theta \\left(\\frac{p_F^2}{2m}-E_s({\\bf p}) \\right)$ is the Heaviside step function. This means that the total spin polarization can be computed by integrating $S^y_{\\bf p}$ over all the angles and averaging over the momentum $p(\\theta,p_0)$ as\n\t\\begin{equation}\n\t\tS^y ( \\tau)= \\int_{0}^{2\\pi} \\ \\frac{d\\theta}{2\\pi}\\int_{p_2(\\theta,p_0)}^{p_1(\\theta,p_0)} \\ \\frac{p\\,dp}{2\\pi} \\ S^y_{\\bf p}(\\tau,p,\\theta).\n\t\t\\label{Integr1}\n\t\\end{equation}\n\n\n\nFig.\\ref{Total} shows the total spin polarization $S^y$ for the two ratios of Rashba coefficients $\\alpha_1>\\alpha_2$ (Fig.\\ref{Total}a) and $\\alpha_1<\\alpha_2$ (Fig.\\ref{Total}b) in the adiabatic regime $\\gamma_0=0.1$. To evaluate Eq.(\\ref{Integr}) we consider $m=1$, $p_F=2$ and {\\bf B}=(0.1, 0.2, 0.3).\nThe blue and orange curves in Fig.\\ref{Total}a, corresponding to the case $\\alpha_1\\simeq\\alpha_2$, quickly reach a steady state, meaning that the spin direction follows that induced by the electric field. Moreover, this confirms the theoretical prediction of the spin polarization for a 2DEG with Rashba spin-orbit coupling and the Edelstein effect \\cite{vignale2016theory}. The green, red, and purple curves in Fig.\\ref{Total}a are the total spin polarizations for $\\alpha_1=2\\alpha_2$, $\\alpha_1=3\\alpha_2$, and $\\alpha_1=4\\alpha_2$: they respond very slowly to the influence of an external electric field $E$, hence the steady state of the atomic spin takes longer to build up, depending on the ratio of Rashba coefficients. 
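The double integral in Eq.(\ref{Integr1}) is straightforward to evaluate by quadrature. The sketch below is illustrative only: the actual integrand $S^y_{\bf p}(\tau,p,\theta)$ and the $\theta$-dependent momentum limits $p_{1,2}(\theta,p_0)$ follow from the exact solution and are replaced here by simple stand-ins.

```python
import math

# Midpoint-rule evaluation of the structure of Eq. (Integr1):
#   S^y = int_0^{2pi} dtheta/(2pi) int_{p2(theta)}^{p1(theta)} p dp/(2pi) S^y_p
def total_spin_polarization(S_y_p, p_lower, p_upper, n_theta=200, n_p=200):
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * 2.0 * math.pi / n_theta
        a, b = p_lower(theta), p_upper(theta)
        dp = (b - a) / n_p
        for j in range(n_p):
            p = a + (j + 0.5) * dp
            total += p * S_y_p(p, theta) * dp * (2.0 * math.pi / n_theta)
    return total / (4.0 * math.pi ** 2)

# Stand-in check (placeholder integrand and limits, not the physical ones):
# a constant integrand with constant limits p2 = 1, p1 = 2 gives
# (p1^2 - p2^2)/(4*pi) = 3/(4*pi)
S_y = total_spin_polarization(lambda p, theta: 1.0,
                              lambda theta: 1.0, lambda theta: 2.0)
```
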
From Fig.\\ref{Total}b we observe that all the curves respond faster than in the case $\\alpha_1>\\alpha_2$, since saturation occurs in shorter times: only the blue ($1.01\\alpha_1=0.99\\alpha_2$), orange ($1.5\\alpha_1=\\alpha_2$) and green ($2\\alpha_1=\\alpha_2$) curves have a longer build-up time.\n\n\n\n\nNow let us consider the adiabatic approximation and compare it with the solutions in the adiabatic regime of weak electric field $E$. Note that we calculate the spin polarization $S^y_{{\\bf p},ad}$ with respect to the lower eigenstate of the system, which means that at all times the system, under the adiabatic action of the external electric field, remains in the instantaneous lower eigenstate. This approach allows us to obtain the result of the evolution considering the time $\\tau$ as a parameter in the calculations. For this purpose we solve the system of equations (\\ref{SysEq}) taking into account only the eigenvalue of the lowest state. The component of the approximated spin polarization in the adiabatic regime can be found as\n\t\n\\begin{equation}\nS^y_{{\\bf p},ad}=\\frac{1}{2}\\frac{|\\tilde\\Delta_p|^2-\\left(\\tau-\\tilde\\tau_p+\\sqrt{|\\tilde\\Delta_p|^2+\\tilde\\tau_p^2}\\right)^2}{|\\tilde\\Delta_p|^2+\\left(\\tau-\\tilde\\tau_p+\\sqrt{|\\tilde\\Delta_p|^2+\\tilde\\tau_p^2}\\right)^2}.\n\\end{equation}\n\nFig.\\ref{Adiabatic} shows the evolution of the total spin polarization for a cold atomic system, with $k=\\frac{\\alpha_1}{\\alpha_2}$, together with that in a 2DEG (blue curve, {\\bf B} = 0 and $\\alpha_1=\\alpha_2$); the curves are retrieved by integrating numerically over the angles and averaging over the momentum $p$ as in Eq.(\\ref{Integr1}), with $S^y_{\\bf p}(\\tau,p,\\theta)$ replaced by its adiabatic expression $S^y_{{\\bf p},ad}$.\n\n\nAs we can see from Fig.\\ref{Adiabatic}, the blue curve coincides at long times with the purple one ($k=0.98$), but at small times the presence of the \\textcolor{black}{Rashba SOC field}\nmakes the latter lie higher 
than the one for the 2DEG. The green curve ($k=0.5$) saturates quite fast at small times, even slightly faster than the 2DEG curve, which means that the Edelstein field is more significant than the Rashba field. The red ($k=2$), orange ($k=3$) and yellow ($k=4$) curves decline more slowly, and their saturation happens much later than for the purple and green curves, showing that the Rashba field still prevails. \n\n\nThe results obtained from the adiabatic approximation qualitatively coincide with those calculated from the exact solution, and show identical subsiding behavior for both small and large ratios of the SOC parameter $k$. \nThe artificial pseudospin states in cold atoms with Rashba spin-orbit coupling show a longer lifetime, highlighting their better tunability compared with a 2DEG. \n\n\n\n\n\n\n\n\n\\section{Discussion} In contrast to solid-state matter, where Rashba spin-orbit coupling depends on the SOC of the material, in cold atoms this type of coupling can be generated synthetically and manipulated externally. The application of artificial gauge fields in quantum gases of neutral atoms provides a series of advantages: lack of disorder compared to 2DEG, convenience for studying many-body systems, external tunability, and a longer lifetime of the pseudospin states. Moreover, a synthetic magnetic field cannot be influenced by a real field due to the absence of dynamical degrees of freedom. In addition, the gauge fields allow one to observe non-equilibrium spin dynamics in many-particle interactions without complications due to disorder, as occur in 2DEG or other condensed-matter systems. 
\n\nWe described analytically the model of cold atoms in the presence of RSOC and DSOC, \\textcolor{black}{a weak electric field, and an external magnetic field represented as a Zeeman field.} After finding the solutions in terms of parabolic cylinder functions and the component of the spin polarization, we integrate over all the angles and average over the momentum. Results show that when the ratio of SOC coefficients is $\\alpha_1>\\alpha_2$ the total spin polarization relaxes much more slowly than for the inverse ratio. Moreover, the adiabatic approximation shows that the total spin polarization $S^y_{ad}$ is much more tunable, and the atomic spin state has a longer lifetime than the corresponding spin state in a 2DEG.\n\nWhile our scheme does not suffer from the heating problem, it may have a problem with the limited lifetime of the pseudospin states, due to atomic collisions which induce a decay of the degenerate dark states into those of lower energy. This difficulty can be addressed, however, by introducing external magnetic fields and weak electric fields. Our results can enable further investigations of spin currents, the spin Hall effect in cold atoms, or other effects requiring stable spin states and large tunability. \n\n{\\bf Acknowledgements} We thank G. Vignale, C. Tassi, and A. Sheikhabadi for discussions.\n\n\n\\bigskip\n\n\\section{Why hadron-hadron scattering?}\n\n\nIn 1966, Weinberg considered pion scattering off hadrons\nusing current algebra techniques~\\cite{Weinberg:1966kf}.\nFor pion scattering on a target with mass $m_t$ and isospin $T_t$,\nthe corresponding scattering lengths for total isospin $T$ read\n\\begin{equation}\na_T = -\\frac{L}{1+ M_\\pi\/m_t}\\, \\left[T(T+1) -T_t(T_t+1) -2 \\right]~,\n\\end{equation}\nwith $M_\\pi$ the charged pion mass. 
For pion scattering on a \npion [``the more complicated case''], he found\n\\begin{equation}\na_0 = \\frac{7}{4}L ~, ~~~ a_2 =\n-\\frac{1}{2}L~, ~~~\nL = \\frac{g_V^2 M_\\pi}{8\\pi F_\\pi^2} \\simeq 0.1 \\, M_\\pi^{-1}~,\n\\end{equation}\nwith $g_V \\simeq 1$ the vector coupling and $F_\\pi \\simeq 92\\,$MeV the\nweak pion decay constant. The predictions were on the one hand amazing \ndue to their extreme simplicity, and on the other hand surprising,\nas one believed that scattering lengths should be of the order\nof 1~fm, the typical hadronic length scale - and not much smaller,\nas given by these equations. The physics behind this suppression\nis well understood - the Goldstone boson nature of the pions requires\ndecoupling from any given field as external momenta go to zero (in the\nlimit of vanishing quark \/ pion masses). Corrections to these \npredictions can be worked out consistently in chiral perturbation theory\n(CHPT) and variants thereof, as will be discussed in the next sections. \n CHPT is the effective field theory (EFT) of the Standard Model at low energies\nand allows one to systematically explore the consequences of the\nspontaneous and explicit chiral symmetry breaking in QCD. Given the aims\nand scope of this meeting, it is therefore natural to ask: what have \nwe learned since the seminal work of Weinberg? In the following, \nI will give a very personal answer to this question and hopefully \nconvince the reader\nthat hadron-hadron scattering is a fine tool to gain insight into the\nstrong interactions in the nonperturbative regime.\n\n\\section{Chiral symmetry and the essence of chiral perturbation theory}\n\nThis section serves as a warm up -- as CHPT and extensions thereof will be\nused to analyze various hadron-hadron scattering processes, a few remarks are\nin order.\nAs already stated, CHPT explores the consequences of chiral symmetry breaking\nin QCD. 
As in any effective field theory, a power counting based on the scale\nseparation lets one organize the string of contributions to a given matrix\nelement or Green's function. The power of chiral symmetry manifests itself in\nthe relations between many processes. A particularly nice example of this is\nprovided by the next-to-leading order low-energy constants (LECs) of the chiral effective\npion-nucleon Lagrangian, commonly denoted as $c_i$. These can e.g. be\ndetermined from data on low-energy pion-nucleon scattering to some precision.\nThey can then be used in the chiral EFT description of the nuclear \nforces, as they\nplay an important role in the two-pion exchange contributions of the\ntwo-nucleon forces and also in the leading long-range part of the \nthree-nucleon force. It is also important to stress that while the number\nof LECs increases with higher orders, this proliferation is not necessarily \nsevere for basic processes. E.g., elastic $\\pi\\pi$ scattering to\none loop features four LECs; at two loops there are only two \nnew LECs, as all other ${\\cal O}(p^6)$ local operators \nsimply lead to quark mass renormalizations of the ${\\cal O}(p^4)$ \nLECs. Further, the predictions of CHPT can \nbe sharpened by combining it with other methods such as dispersion relations,\ncoupled-channel approaches, lattice simulations and so on. As I will show,\nthis can lead to incredibly precise predictions in some cases, but one\nshould be aware that there is no free lunch - as will be discussed in the following. \n\n\n\\section{Lesson 1: Pion-pion scattering}\n\n Elastic pion-pion scattering ($\\pi\\pi\\to\\pi\\pi$) is the purest process \nin two-flavor chiral dynamics as the up and the down quark masses are\nreally small compared to any other hadronic mass scale. At threshold \nthe scattering amplitude is given in terms of two numbers, the \nscattering lengths $a_0$ and $a_2$. 
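These two numbers follow at tree level from Weinberg's formulas quoted in Sec.~1; a quick numerical evaluation (a sketch, assuming $g_V=1$, $M_\\pi = 139.57$~MeV and $F_\\pi = 92.2$~MeV as inputs):

```python
import math

# Assumed inputs (MeV); g_V = 1.
M_pi, F_pi = 139.57, 92.2

# L in units of 1/M_pi: L * M_pi = M_pi^2 / (8*pi*F_pi^2).
L = M_pi**2 / (8 * math.pi * F_pi**2)

a0 = 7 / 4 * L     # Weinberg tree-level isospin-0 scattering length
a2 = -L / 2        # Weinberg tree-level isospin-2 scattering length

print(round(a0, 3), round(a2, 4))  # 0.16 -0.0456
```

The tree-level values $a_0 \\simeq 0.16$ and $a_2 \\simeq -0.046$ (in units of $M_\\pi^{-1}$) set the scale against which the chiral corrections are measured.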
Most interesting is the history\nof the prediction for $a_0$: At leading order (LO) (tree graphs in CHPT)\none has $a_0 = 0.16$ \\cite{Weinberg:1966kf}. The NLO (one-loop) corrections\nwere worked out by Gasser and Leutwyler in 1983, $a_0 = 0.20 \\pm 0.01$\n\\cite{Gasser:1983kx}. The fairly large correction can be understood\nin terms of the strong $\\pi\\pi$ final-state interactions in this partial\nwave. At NNLO (2-loops), one finds $a_0 = 0.217 \\pm 0.009$ \n\\cite{Bijnens:1995yn} and this was considerably sharpened by \nmatching the 2-loop representation to the Roy equation solution,\nresulting in $a_0 = 0.220 \\pm 0.005$ \\cite{Colangelo:2000jc}. This is an\namazingly precise prediction for a low-energy QCD observable. Incidentally,\na similarly accurate prediction was achieved for $a_2$, \n$a_2 = -0.0444 \\pm 0.0010$, but here the corrections \nto the LO result, $a_2^{\\rm LO} = -0.045$, are very small due to \nthe very weak $\\pi\\pi$ interactions\nin isospin~2. This visible improvement in accuracy compared to the two-loop\npredictions was due to the inclusion of higher-energy data in the\ndispersion relations, which constrain the amplitude better than is possible through the\nfinite number of LECs. Also, the two scattering lengths serve as subtraction\nconstants in the Roy equations and are thus much more tightly constrained than\nin the pure chiral expansion. \n\n\\begin{SCfigure}[][t]\n\\includegraphics[width=0.54\\textwidth]{a0a2_201202.pdf}~~~~~\n\\caption{Theoretical predictions for the S-wave $\\pi\\pi$\n scattering lengths in comparison to experimental data \n as well as direct and indirect lattice determinations. Figure\ncourtesy of Heiri Leutwyler \\cite{Heiri}.\n}\n\\label{fig:pipi}\n\\end{SCfigure}\n\nGiven such precise predictions - how about\nexperiment? The analysis of the $\\pi\\pi$ final-state interactions in\n$K_{e4}$ and the cusp in $K^0\\to 3\\pi^0$ decays has proven to lead to the most precise\ndeterminations of the scattering lengths. 
An alternative is the measurement\nof the lifetime of pionium, but this is experimentally more difficult and thus\nless accurate. From kaon decays using a particularly tailored non-relativistic\nEFT \\cite{Gasser:2011ju}, one obtains ~$a_0^0 = 0.2210\\pm \n0.0047_{\\rm stat}\\pm 0.0040_{\\rm sys}$ and $a_0^2 = -0.0429\\pm \n0.0044_{\\rm stat}\\pm 0.0028_{\\rm sys}$ \\cite{Batley:2010zza}.\nThe pionium lifetime measurement leads to $|a_0^0-a_0^2| = \n0.2533^{+0.0080}_{-0.0078}{}^{+0.0078}_{-0.0073}$, where the first\/second error\nis statistical\/systematic \\cite{Adeva:2011tc}. The agreement with the\nprediction from Ref.~\\cite{Colangelo:2000jc} is stunning. In addition, there\nare direct and indirect lattice determinations of these fundamental\nparameters. Here, ``direct'' refers to using the L\\\"uscher method \nand extracting the scattering length from the measured energy shift \nwhile ``indirect'' means that\nthe LECs $\\ell_3$ and $\\ell_4$ have been extracted from the pion decay constant\nand mass whereas $\\ell_{1,2}$ have been taken from other sources. The grand\npicture is presented in Fig.~\\ref{fig:pipi} and shows a \nbeautiful consistency.\nThis is truly one of the finest tests of the Standard Model at low energies.\nHowever, not all is well -- a direct lattice determination of $a_0$ is still\nmissing and the lattice practitioners are urged to provide this important\nnumber. Such a calculation is, of course, technically challenging because\nof the disconnected diagrams, but the time is ripe for doing it.\n\n\n\\section{Lesson 2: Pion-kaon scattering}\n\nThe purest scattering process in chiral dynamics involving strange\nquarks is elastic pion-kaon scattering, $\\pi K\\to\\pi K$. Again, at \nthreshold the scattering amplitude is given in terms of two numbers,\nnamely the scattering lengths $a_{1\/2}$ and $a_{3\/2}$. Before \ndiscussing the status of the $\\pi K$ scattering lengths, I want to\naddress a few mysteries surrounding the $s$ quark. 
In standard \nthree-flavor CHPT, it is treated like the $u,d$ quarks. However: \nis the strange quark really light as $m_s \\sim 100~{\\rm MeV}\n\\sim \\Lambda_{\\rm QCD}$? This is reflected in the expansion parameter: \n$\\xi_s = {M_K^2}\/{(4\\pi F_\\pi)^2} \\simeq\n0.18$, which is much bigger than its SU(2) equivalent: \n$\\xi = {M_\\pi^2}\/{(4\\pi F_\\pi)^2} \\simeq 0.014$.\nIn fact, many predictions of SU(3) CHPT work quite well, but\nthere are also indications of bad convergence in some recent \nlattice calculations, see e.g. Refs.~\\cite{Boyle:2007qe,Allton:2008pn}.\nA possible solution to this is offered by reordering techniques,\nsee~\\cite{Bernard:2010ex}.\nAlso, since many years there have been speculations that the\nthree-flavor condensate $\\Sigma(3)$ is very much suppressed compared to its\ntwo-flavor cousin $\\Sigma(2)$. E.g. Moussallam performed a sum rule study\nand found a sizeable suppression, $\\Sigma (3) = \\Sigma (2) \n[1-0.54\\pm 0.27]$ \\cite{Moussallam:1999aq}\nwhereas a recent lattice study finds a more standard relation\n$\\Sigma (3) = \\Sigma(2) [1-0.23\\pm 0.39]$ \\cite{Fukaya:2010na}.\nNote that in both cases the uncertainties are large.\nThe history of the CHPT predictions for the scattering lengths\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{|l|c|c|c|c|}\n\\hline\n & Tree \\cite{Weinberg:1966kf,Griffith:1969ph} \n & 1-loop \\cite{Bernard:1990kx,Bernard:1990kw}\n & 2-loop \\cite{Bijnens:2004bu}\n & RS \\cite{Buettiker:2003pp} \\\\\n\\hline\n$a_0^{1\/2}$ & +0.14 & $+0.18\\pm 0.03$ & +0.220 [0.17 \\ldots 0.225] &\n $+0.224\\pm 0.022$\\\\\n$a_0^{3\/2}$ & $-0.07$ & $-0.05\\pm 0.02$ & $-0.047 [-0.075 \\ldots -0.04]$ \n & $-0.0448\\pm 0.0077$\\\\\n\\hline\n\\end{tabular}\n\\caption{Chiral predictions at LO, NLO and NNLO for the $\\pi K$\nscattering lengths and from Roy-Steiner (RS) equations. 
The \nuncertainty for the 2-loop results is taken from the various\nparameter variations discussed in \\cite{Bijnens:2004bu}.}\n\\label{tab:piK}\n\\vspace{-0.5cm}\n\\end{center}\n\\end{table}\nis given in Tab.~\\ref{tab:piK} together with the results of the\nRoy-Steiner (RS) equations \\cite{Buettiker:2003pp}. The agreement\nof the two-loop predictions (central values) is quite satisfactory, \nbut the precision achieved in the RS framework is not as good as\nin the case of $\\pi\\pi$ scattering. This is largely due to the worse\nand partly inconsistent data base. Similarly, the scattering lengths\nextracted from these data are not very precise and fairly scattered,\nsee e.g. Fig.~1 in Ref.~\\cite{Bernard:1990kx}. More recent measurements\nin D- and B-meson decays with high statistics \n\\cite{Poluektov:2004mf,Aitala:2005yh,Link:2009ng,Aubert:2008bd}\nallow in principle for a better determination of $a_{1\/2}$ and $a_{3\/2}$,\nbut none of these has performed an isospin decomposition. This is\nurgently called for. In principle, lattice QCD can have some impact here.\nHowever, the present situation as summarized for $a_{1\/2}$ in\nFig.~\\ref{fig:Kpi} (the data are taken from the\nrecent paper \\cite{Lang:2012sv}, see also the references therein) is anything\nbut satisfactory: there is a large spread in the results. For $a_{3\/2}$,\nthere is better agreement between the lattice results but also some\ntension with the value obtained from the RS analysis. Clearly, more\nwork is needed to clear up the situation for this process.\n\\begin{SCfigure}[][t]\n\\includegraphics[width=0.54\\textwidth]{a_piK_overview.pdf}~~~~~\n\\caption{The S-wave $\\pi K$ scattering lengths from the lattice \nin comparison to LO and NLO CHPT and Roy-Steiner determinations. \nData as collected in Ref.~\\cite{Lang:2012sv}. 
For better comparison, \nthe scattering lengths are normalized to the reduced mass \n$\\mu_{\\pi K}$ of the pion-kaon system.\n}\n\\label{fig:Kpi}\n\\end{SCfigure}\n \n\n\n\\section{Lesson 3: Pion-nucleon scattering}\n\nLet me now turn to elastic pion-nucleon scattering, $\\pi N\\to\\pi N$, which\nis the simplest (and also the most fundamental) scattering process \ninvolving nucleons. As in the case of\npion-kaon scattering, one has total isospin 1\/2 and 3\/2. Often used\nare the isoscalar and isovector scattering lengths, $a^+$ and $a^-$,\nrespectively, with $a_{3\/2} = a^+ - a^-$ and $a_{1\/2} = a^+ + 2a^-$.\nThe LO prediction for the isoscalar\/isovector scattering length\nis quite intriguing: \n\\begin{equation}\na_{\\rm LO}^+ = 0~,~~ a_{\\rm LO}^- =\n\\frac{1}{1+M_\\pi\/m_p}\\frac{M_\\pi^2}{8\\pi F_\\pi^2} = 79.5 \\cdot 10^{-3}\/M_\\pi~.\n\\end{equation}\nMuch is known about the chiral corrections to these results that were\nfirst addressed in \\cite{Bernard:1993fp}. While the chiral expansion \nfor $a^-$ converges fast \\cite{Bernard:1995pa},\nthere are large cancellations in $a^+$, so that even its sign is \nnot known from scattering data, see e.g. table~3 in \\cite{Fettes:2000xg}\n(see also the more recent work in Ref.~\\cite{Alarcon:2012kn}). \nHowever, there is a wonderful\nalternative to get a handle on these scattering lengths, namely hadronic atoms.\nThese are electromagnetic bound states of two oppositely charged hadrons.\nDue to the large spatial extent of these objects, the strong interactions \nlead to small perturbations in the observed level spectrum. In particular,\nthere is a shift of the ground state energy ($\\Delta E_{1s}$) and further, \ndue to channel coupling, this level acquires a width $\\Gamma_{1s}$. 
\nDue to the small three-momenta in such a system, we are dealing essentially with \nscattering at zero energy or, stated differently, the energy shift and \nwidth can be expressed in terms of the corresponding scattering lengths.\nThere are many species of such hadronic atoms, like pionium ($\\pi^+\\pi^-$)\nalready discussed, pionic hydrogen ($\\pi^- p$), pionic deuterium ($\\pi^- d$)\nand their kaonic cousins ($K^- p, K^- d$). Hadronic atoms can be analyzed \nmost systematically and precisely in suitably tailored non-relativistic\nEFTs, see \\cite{Gasser:2007zt} for a comprehensive review. For the case\nof pion-nucleon scattering, the corresponding high-precision theoretical\nframework for pionic hydrogen and pionic deuterium has been provided\nin Refs.~\\cite{Gasser:2002am} and \\cite{Baru:2011bw}, respectively (see\nalso the contribution from Martin Hoferichter to these proceedings). On the\nexperimental side, superb experiments have been performed at PSI for\nmany years, culminating in precise measurements of the energy shift and\nwidth of pionic hydrogen \\cite{Gotta:2008zza} and the energy shift \nin pionic deuterium \\cite{Strauch:2010vu}. 
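For orientation, the LO value of $a^-$ quoted above can be reproduced numerically; a sketch assuming $M_\\pi = 139.57$~MeV, $m_p = 938.27$~MeV and $F_\\pi = 92.2$~MeV as inputs (the quoted $79.5\\cdot 10^{-3}$ is recovered up to rounding of the input parameters):

```python
import math

# Assumed inputs (MeV).
M_pi, m_p, F_pi = 139.57, 938.27, 92.2

# LO isovector pi-N scattering length, in units of 1/M_pi:
# a^-_LO = [1/(1 + M_pi/m_p)] * M_pi^2 / (8*pi*F_pi^2).
a_minus_LO = (1 / (1 + M_pi / m_p)) * M_pi**2 / (8 * math.pi * F_pi**2)

print(round(1e3 * a_minus_LO, 1))  # 79.4
```

Comparing with the extracted value $a^- = 86.1\\cdot 10^{-3}\/M_\\pi$ given below shows directly the smallness of the chiral corrections to $a^-$.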
\n\\begin{SCfigure}[][t]\n\\includegraphics[width=0.40\\textwidth]{bands1_mod.pdf}~~~~~~~\n\\caption{Extraction of the isoscalar and isovector\nscattering lengths from pionic hydrogen and deuterium.\nHere, $\\tilde{a}^+ = a^+ +\\frac{1}{1+M_\\pi\/m_p}\\times $\n$\\left\\{\\frac{\\Delta M_\\pi^2}{\\pi F_\\pi}c_1 - \n2\\alpha f_1\\right\\},$ with $\\Delta M_\\pi^2 = M_\\pi^2 -\nM_{\\pi^0}^2$, $\\alpha$ the fine structure constant and\n$c_1$\/$f_1$ are strong\/electromagnetic NLO LECs with\n$c_1 \\simeq 0.9\\,$GeV$^{-1}$ and $|f_1| \\leq 1.4$~GeV$^{-1}$.\nFigure courtesy of Martin Hoferichter.\n}\n\\label{fig:piN}\n\\end{SCfigure}\nThe analysis of these data within these EFTs leads to\na very precise extraction of the scattering lengths \\cite{Baru:2011bw}\n\\begin{equation}\na^+ =(7.6 \\pm 3.1) \\cdot 10^{-3}\/M_\\pi~, ~~~\na^-=(86.1 \\pm 0.9) \\cdot 10^{-3}\/M_\\pi~.\n\\end{equation}\nNote that the value for the isovector scattering length differs by just 8\\% from\nthe LO value, whereas the isoscalar one is positive and only slightly larger in\nvalue than various contributions to it. This underlines that {\\sl only} within a\nconsistent framework like the employed EFT is one able to extract this value.\nAlso, using the GMO sum rule, the authors of \\cite{Baru:2011bw} find a precise\nvalue for the pion-nucleon coupling constant, ${g_{\\pi N}^2}\/(4\\pi) =\n13.69(12)(15)$, where the first error stems from the scattering lengths and\nthe second one from the integral over the $\\pi^- p$ cross section. This is\nconsistent with other determinations from pion-nucleon scattering or from\nperipheral nucleon-nucleon phase shifts.\n \n\n\\section{Lesson 4: Antikaon-nucleon scattering}\n\nNext, I consider the reaction $K^-p \\to K^- p$. It is a fundamental scattering \nprocess with strange quarks involving baryons. 
The dynamics of this process\nis driven by channel couplings and leads to the dynamic generation of the\n$\\Lambda (1405)$ resonance \\cite{Dalitz:1959dn,Dalitz:1960du}\nthat resides between the $\\pi \\Sigma$ and $\\bar K\nN$ thresholds and is certainly not a simple three-quark state. This reaction\nis therefore a major playground of {\\em unitarized CHPT (UCHPT)}. Due to the open\nchannels below the $K^-p$ threshold, the two scattering lengths $a_0$ and $a_1$\nare complex-valued quantities, which means that in this case we deal with four\nnumbers. Before continuing,\nit is important to point out the differences between CHPT and unitarized\nversions thereof. CHPT is an exact representation of the chiral Green's functions of\nQCD, which expands matrix elements in powers of small momenta and quark masses.\nCrossing and analyticity are in general fulfilled (if a proper regularization\nscheme is employed), whereas\ndue to the underlying power counting, unitarity is fulfilled perturbatively.\nCHPT is formulated in terms of the lowest-lying hadronic degrees of freedom\nand all effects from resonances are subsumed in the low-energy constants.\nIf one wants to describe resonances explicitly -- as is the case here --\nsome resummation scheme that fulfills 2-body unitarity exactly is needed. \nThere are various formalisms available\nto achieve that, mostly based on Lippmann-Schwinger or Bethe-Salpeter equations.\nIn general, one gives up crossing symmetry and also some\nmodel-dependence is induced as there is a cornucopia of unitarization schemes.\nFurther, the power counting is not applied to the reaction amplitude but\nrather to the kernel of the scattering equation, as\nit is e.g. frequently and successfully done in chiral nuclear effective field\ntheory. 
Having said that, it is also important to stress that UCHPT has been\nquite successful in describing various scattering processes and the dynamic\ngeneration of resonances like the $\\Lambda(1405)$, $S_{11}(1535)$,\n$S_{11}(1650)$, and others. \n\nBut back to $K^-p$ scattering. A long-standing puzzle in this field,\nnamely the inconsistency of the DEAR kaonic hydrogen data with the\nscattering data (first pointed out in \\cite{Meissner:2004jr} and then \nconfirmed by many others) was recently resolved by the fine experiment\nof the SIDDHARTA collaboration, who measured the properties of\nkaonic hydrogen with high precision, $\\Delta E_{1s} = -283 \\pm 36 {\\rm (stat)} \n\\pm 6 {\\rm (syst)}$~eV and $\\Gamma_{1s} = 541 \\pm 89 {\\rm (stat)} \\pm \n22 {\\rm (syst)}$~eV \\cite{Bazzi:2011zj}. Based on this, the\n kaonic hydrogen and the older scattering data can now be analyzed\n consistently, using the chiral Lagrangian at NLO, as shown by the\nthree groups \\cite{Ikeda:2011pi,Ikeda:2012au,Mai:2012dt,Guo:2012vv}\n(see also \\cite{Cieply:2011nq}). Here, I report the results from \nRef.~\\cite{Mai:2012dt}. A total of 14 LECs and 3 subtraction constants were fitted\nto scattering data in the channels $K^-p \\to K^-p$, $\\bar K^0n$,\n$\\Sigma^\\pm\\pi^\\mp$, and $\\Sigma^0\\pi^0$ for laboratory momenta\n${p_{\\rm lab} \\leq 300}$~MeV together with the SIDDHARTA data.\nThis allows for a good description of the antikaon-proton cross section data \nand an accurate determination of the scattering lengths, \n\\begin{equation}\\label{eq:a}\na_0 = -1.81^{+0.30}_{-0.28} + i~ 0.92^{+0.29}_{-0.23}~{\\rm\n fm}~, ~~~\na_1 = +0.48^{+0.12}_{-0.11} + i~ 0.87^{+0.26}_{-0.20}~{\\rm fm}~.\n\\end{equation}\nThe improvement as compared to using scattering data only \n\\cite{Borasoy:2006sr} is clearly visible in Fig.~\\ref{fig:KN} (left panel).\nThese numbers are similar to the ones reported by Ikeda et al.\n\\cite{Ikeda:2011pi,Ikeda:2012au}. 
Therefore, \nthese fundamental chiral SU(3) parameters have now been\ndetermined with about an accuracy of $\\sim~15\\%$.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.44\\textwidth]{a0a1-nur.pdf}\n~~~~~~~~\\includegraphics[width=0.4\\textwidth]{PWA_nur.pdf}\n\\caption{Left panel: Real and imaginary part of the isospin $T=0$ \n and $T=1$ ${KN\\to KN}$\n scattering lengths. The light shaded (green) areas correspond to the \n $1\\sigma$ region of \\cite{Mai:2012dt} around the central value (full circles). \n The darker (blue) areas correspond to the $1\\sigma$ region around central \n value (empty circle) from Ref.~\\cite{Borasoy:2006sr}.\n Right panel: Real and imaginary part of the $K^-p\\to K^-p$ scattering\n amplitude. The shaded band indicates the uncertainty of the calculation. \n The data point at $W_{\\rm cms}=M_K+m_p$ is determined from the energy\n shift and width of kaonic hydrogen from the SIDDHARTA experiment.\n}\n\\label{fig:KN}\n\\end{figure}\nOne can extrapolate the amplitudes of elastic $K^-p$\nscattering to the subthreshold region, i.e. center-of-mass energies\n$1330 \\leq W_{\\rm cms} \\leq 1450$ MeV. The result is presented in the right\npanel of Fig.~\\ref{fig:KN}.\nFor both real and imaginary parts of the amplitude the maximum lies close\nto the $KN$ threshold and is quite narrow, which indicates the presence of a close-by pole. \nIt is also worth mentioning that the error band gets smaller for lower\nenergies, different to the recent analysis by Ikeda et al.~\\cite{Ikeda:2011pi,Ikeda:2012au}.\nWe note that although Ikeda et al. and \\cite{Mai:2012dt} describe the scattering and bound\nstate data equally well, the subthreshold amplitude is very different. 
This is\npresumably due to the different approximations made in these two approaches.\nTherefore, a truly model-independent determination of this subthreshold\namplitude is not yet available.\n\n\\section{Lesson 5: Goldstone boson scattering off {\\boldmath$D$} and\n{\\boldmath$D^\\star$}-mesons}\n\nFinally, let us consider the scattering of the Goldstone boson octet $(\\pi, K ,\\eta)$ \noff the $D$-meson triplet $(D^0, D^+, D_s^+)$. This involves the\npositive-parity scalar charm-strange meson $D_{s0}^*(2317)$ which\nhas a very narrow, isospin-violating width and is interpreted by \nsome groups as a molecular $DK$ state. Here, we are mostly interested \nin the scattering length in the channel with $(S,I)=(1,0)$, as its \nvalue can tell us something about the possible nature of the scalar \nmeson. As I will show, this field enjoys a healthy interplay of \nlattice QCD and UCHPT. But a few general remarks on the calculation\nof the scattering process $\\phi D \\to\\phi D$ are in order. This is \nan interesting problem, as it involves a variety of scales and is \nalso multi-faceted. First, light particles related to the chiral \nsymmetry of QCD are involved, thus we can perform a chiral expansion \nin momenta and quark masses. Second, as the $D$-mesons contain charm \nquarks, we can exploit heavy quark symmetry and perform an expansion \nin $1\/m_c$. Third, one must consistently include isospin-violation\neffects. These are on the one hand generated by the strong interactions \n($m_d \\neq m_u$) and on the other hand of electromagnetic origin ($q_u \\neq q_d$).\nIn total, one has to consider 16 channels with different total strangeness and\nisospin. Some of these are perturbative, but most are non-perturbative and\nrequire resummation, which can lead to the dynamical generation of\nmolecules. 
\n\nBefore addressing the actual calculations, let me discuss the relation\nbetween the scattering length $a$ and the nature of the state under\nconsideration, see Refs.~\\cite{Weinberg:1965zz,Baru:2003qq}\nfor a derivation:\n\\begin{equation}\n a = -2 \\left( \\frac{1-Z}{2-Z} \\right) \\frac1{\\sqrt{2\\mu\\epsilon}} \\left(\n 1+\\mathcal{O}(\\sqrt{2\\mu\\epsilon}\/\\beta) \\right),\n \\label{eq:weinberg}\n\\end{equation}\nwhere $\\mu$ and $\\epsilon$ are the reduced mass of the two-hadron \nsystem and the binding energy, respectively, and $Z$ is the wave function\nrenormalization constant ($0 \\leq Z\\leq 1$).\n Corrections of the above equation come from neglecting the range of\nforces, $1\/\\beta$, which contains information of the $D_s\\eta$ channel.\nWere the $D_{s0}^*(2317)$ a pure $DK$ bound state ($Z=0$), the value of \nthe $DK(I=0)$ scattering length would be $a=-1.05$~fm.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.84\\textwidth]{fit5p3data.pdf}~~~~~\n\\caption{Fit to lattice data for various channels \nwith 5 parameters. The subtraction constant is \ndetermined from fixing the pole in the $(S,I)=(1,0)$ channel to \n2317.8~MeV. The points at the highest pion mass in each channel are not fitted.\n}\n\\label{fig:Dpi}\n\\end{figure}\nIn \\cite{Liu:2012zya}, new lattice data using the MILC plus Fermilab actions\nfor the channels $D\\bar K (-1,1), (-1,0)$, $D_sK(2,1\/2)$, $D\\pi(0,3\/2)$ and\n$D\\pi(1,1)$ were analyzed based on the UCHPT formalism developed in\n\\cite{Guo:2009ct} (for related work, see \n\\cite{Liu:2009uz,Geng:2010vw,Cleven:2010aw,Wang:2012bu}).\nThese are more data than previously available allowing \nin particular for the inclusion of $N_c$-suppressed operators\nof the NLO effective Lagrangian (there are 5 LECs at this\norder from which 3 are formally subleading in $1\/N_c$ and one subtraction\nconstant).\nIf one requires the $D_{s0}^* (2317)$ to be a $DK$-molecule by a proper\nchoice of the subtraction constant (i.e. 
having \na pole at the proper mass in this channel), \none has 5 fit parameters that describe the lattice data well, \nsee Fig.~\\ref{fig:Dpi}. In this fit, all parameters come out of natural \nsize and the large-$N_c$ hierarchy is obeyed. The scattering length \nin the $DK(I=0)$ channel comes out as\n$a(DK(I=0)) = -0.85^{+0.07}_{-0.05}~\\mbox{fm}$, which is consistent\nwith the molecular interpretation (for a more detailed discussion,\nalso concerning a different fit procedure, see \\cite{Liu:2012zya}).\nHaving pinned down all the LECs, one finds an improved prediction \nfor the isospin-violating width \n$\\Gamma(D_{s0}^* (2317)^+ \\to D_s^+ \\pi^0) = (89\\pm 27)\\,{\\rm keV}\\,$\n(for earlier work in the molecular picture, see \n\\cite{Faessler:2007gv,Lutz:2007sk,Guo:2008gp}).\nThis is very different from typical quark model predictions,\nwhich find this width to be about a few keV \\cite{Godfrey:2003kg},\nand thus an accurate measurement of this quantity is called for.\nI wish to end with a few remarks on the $D_{s1}(2460)$. As\n$M_{D_{s1}(2460)}{-}M_{D_{s0}^*(2317)} \\simeq M_{D^*}{-}M_D$, it is most\nlikely a $D^\\star K$ molecule (if the $D^\\star_{s0} (2317)$\nis a $DK$ molecule). Goldstone boson scattering off $D$- \nand $D^\\star$-mesons was considered in \\cite{Cleven:2010aw}.\nThe most interesting observation made concerns the\nmass and binding energy of a molecular state of a heavy meson $(H)$\nand a kaon. As its mass is given by $M_{\\rm mol} = M_K + M_H - \\epsilon$,\nthe mass should depend linearly on the kaon mass, very different\nfrom a genuine multi-quark state, whose mass depends linearly on the\nstrange quark mass and thus quadratically on the kaon mass. 
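The value $a=-1.05$~fm quoted above for a pure $DK$ bound state can be checked from Eq.~(\\ref{eq:weinberg}); a sketch with $Z=0$ and isospin-averaged $D$ and $K$ masses as assumed inputs:

```python
import math

hbarc = 197.327            # MeV fm
M_D, M_K = 1867.2, 495.6   # isospin-averaged masses (MeV), assumed inputs
M_mol = 2317.8             # D_s0*(2317) mass (MeV)

eps = M_D + M_K - M_mol            # binding energy
mu = M_D * M_K / (M_D + M_K)       # reduced mass of the DK system
kappa = math.sqrt(2 * mu * eps)    # binding momentum sqrt(2*mu*eps)

Z = 0.0                            # pure DK molecule
a = -2 * (1 - Z) / (2 - Z) / kappa * hbarc   # scattering length in fm

print(round(a, 2))  # -1.05
```

For $Z=0$ the prefactor $(1-Z)\/(2-Z)$ reduces to $1\/2$, so the scattering length is simply $-1\/\\sqrt{2\\mu\\epsilon}$, up to the range corrections mentioned above.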
This behaviour\ncan, of course, be investigated on the lattice.\n\n\\section{Short summary \\& outlook}\n\nAs I have shown, there has been much progress in our understanding of\nhadron-hadron scattering lengths since Weinberg's seminal paper in 1966.\nTheory and experiment are most advanced in the case of pion-pion scattering.\nHere, lattice QCD still has to provide an accurate number for $a_0$.\nMatters are much less satisfactory for pion-kaon scattering - as both \nexperiment and lattice QCD have to deliver more precise values for \nboth scattering lengths. A measurement of the properties of $\\pi K$ atoms\nwould certainly be very welcome \\cite{Adeva:2009zz,Schacher}.\nIn contrast, the combination of\nchiral EFTs with precision data has led to a high precision determination\nof the pion-nucleon scattering lengths $a^+$ and $a^-$. This should \nchallenge lattice practitioners to provide a similarly accurate ab initio\ncalculation. In the case of antikaon-nucleon scattering, there has been\nrecent progress by analysing scattering data and kaonic hydrogen data\nprovided by SIDDHARTA and the corresponding scattering lengths are now\nknown with an accuracy of about 15\\%. Here, a measurement of kaonic deuterium\nwould provide further stringent constraints \n\\cite{Doring:2011xc,Shevchenko:2012np,Faber:2010iw}. 
Finally, including \ncharm quarks, the interplay of unitarized CHPT and lattice QCD complemented\nby experiment can deepen our understanding of heavy-light systems, thus\nproviding a bridge from chiral dynamics to the physics of heavy quark flavors.\n\n\\section*{Acknowledgements}\n\nI thank all my collaborators for sharing their insight into the\ntopics discussed here.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzflia b/data_all_eng_slimpj/shuffled/split2/finalzzflia new file mode 100644 index 0000000000000000000000000000000000000000..08f5d9474a2231a6188248ab1f4aecc48927860e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzflia @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nOver the past decades, much attention has focused on multitemporal Synthetic Aperture Radar (SAR) image change detection, since SAR can operate at all times and in all weather, unaffected by bad weather and clouds. \nIn the past decade, most traditional SAR image change detection methods have focused on how to extract changed areas from a difference image (DI), which is supposed to contain the information on the changed regions. The DI calculated by the log-ratio (LR) \\cite{bazi2006automatic} is usually corrupted by speckle, and it is challenging to extract accurate and clear information on the changed regions. To tackle this issue, sparse learning\\cite{wang2016sar} was recently proposed to learn robust features from the noisy DI. Wang et al.\\cite{wang2019can} analyzed the effects of SAR image speckle on change detection and proposed a sparse learning-based method for SAR image change detection.\n\nRecently, deep neural networks have been successfully employed in computer vision and remote sensing image analysis due to their ability to exploit essential and robust structural features of object categories. They have also been introduced into the field of change detection. 
Gong et al.\\cite{Gong2017Change} proposed a deep neural network for SAR image change detection. Gao et al. \\cite{gao2016automatic} proposed a simple convolutional neural network, known as PCA-Net, to explore robust features of the changed regions from the noisy DI. However, the performance of these two unsupervised methods is limited without correct guidance. To tackle this issue, Wang et al. \\cite{wang2018imbalanced} proposed a supervised PCA-Net that improves the performance by carefully collecting typical training samples, and it obtains state-of-the-art performance in bitemporal SAR image change detection. However, this two-layer convolutional neural network is inefficient, since the convolutional kernels are trained or generated by a Principal Component Analysis (PCA) decomposition. Recently, Li et al. \\cite{li2019deep} proposed a convolutional neural network (CNN) for SAR image change detection based on both unsupervised and supervised learning. Zhao et al. \\cite{zhao2017novel} proposed a bitemporal PolSAR image change detection method based on joint classification and similarity measurement. Currently, it is still an open problem to extract the changed regions from the noisy DI. \n{Nowadays, a large volume of SAR images is acquired by satellites, and it is imperative to develop an efficient model that can produce promising results for SAR image interpretation. Most of the above networks are heavy and incur a large computational burden. It is thus strongly desirable to develop a lightweight convolutional neural network.}\n\n {Recently, several lite networks have been proposed to improve inference efficiency. Howard et al. and Sandler et al. proposed two lite networks, MobileNetV1 \\cite{howard2017mobilenets} and MobileNetV2 \\cite{sandler2018mobilenetv2}, for visual categorization. Recently, Howard et al. proposed MobileNetV3 via network architecture search \\cite{howard2019searching}. Tan et al. \\cite{tan2019efficientnet} proposed an efficient network for visual categorization. 
These lite networks have been extensively employed for visual categorization, and experimental results show that they can achieve performance comparable to heavy networks, but with lower latency and smaller capacity. They can potentially be deployed on low-power edge devices. Most recently, Chen et al.\\cite{chen2020a} proposed a lightweight multiscale spatial pooling network for bitemporal SAR image change detection. \n\nFollowing the idea of the lightweight neural network, in this letter, we focus on the application of lite networks to SAR image change detection. To achieve this, we propose a lite CNN for SAR image change detection. In the proposed network, bottleneck layers are introduced to reduce the number of output channels. Furthermore, dilated convolutional layers\\cite{li2018csrnet} are introduced to enlarge the receptive field with only a few non-zero entries in the kernel, which reduces the number of network parameters. We verify the proposed network by comparing it with conventional CNNs. Compared with the lightweight network in \\cite{chen2020a}, the proposed network is more robust owing to its residual bottleneck structure. Experimental results on four sets of bitemporal SAR images show that our proposed method obtains performance comparable to conventional CNNs while being much more efficient. }\n\nThe rest of the paper is organized as follows. Section 2 introduces the proposed method. Section 3 verifies it on four datasets. Finally, Section 4 draws conclusions. \n\n\n\\section{Proposed Method}\n{Given bitemporal SAR images ${\\bf I}_1$ and ${\\bf I}_2$, the DI can be generated as follows}\n \\begin{equation}\\label{di}\n {\\bf I}_{DI} = {\\bf I}_1 \\ominus {\\bf I}_2\n \\end{equation}\n {where $\\ominus$ denotes the difference operator. However, most existing difference operators are subject to speckle. 
\nThen we will propose a lightweight convolutional neural network to extract the changed regions from the noisy DI.}\n\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{fig1.png}\n\\caption{The framework of the proposed network. (a) Network architecture. (b) Bottleneck for the encoder. (c) Bottleneck for the decoder.}\n\\label{framework}\n\\end{figure}\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{The tensors of all the layers.}\n\\label{tab:network}\n\\begin{tabular}{cccccccc}\n\\toprule\n Layer Name & Tensor Size & & Layer Name & Tensor Size & & Layer Name & Tensor Size \\\\\n \\hline\n \\multicolumn{8}{c}{Initial Block} \\\\\nInput& (32,32,1) \\\\ \n Conv&(16,16,13) & & \n Max-pooling & (16,16,1) && \n Concatenation & (16,16,14) \\\\\n \\hline\n \\multicolumn{8}{c}{Group 1} \\\\\nBottleNeck 1.0&(8,8,64) & &\nBottleNeck 1.1&(8,8,64) & &\nBottleNeck 1.2&(8,8,64)\\\\\nBottleNeck 1.3&(8,8,64) & &\nBottleNeck 1.4&(8,8,64) \\\\\n\\hline\n \\multicolumn{8}{c}{Group 2} \\\\\nBottleNeck 2.0&(4,4,128) & &\nBottleNeck 2.1&(4,4,128) & &\nBottleNeck 2.2&(4,4,128)\\\\\nBottleNeck 2.3&(4,4,128) & &\nBottleNeck 2.4&(4,4,128) & &\nBottleNeck 2.5&(4,4,128) \\\\\nBottleNeck 2.6&(4,4,128) & &\nBottleNeck 2.7&(4,4,128)& &\nBottleNeck 2.8&(4,4,128) \\\\\n\\hline\n \\multicolumn{8}{c}{Group 3} \\\\\nBottleNeck 3.0&(4,4,128) & &\nBottleNeck 3.1&(4,4,128) & &\nBottleNeck 3.2&(4,4,128)\\\\\nBottleNeck 3.3&(4,4,128) & &\nBottleNeck 3.4&(4,4,128) & &\nBottleNeck 3.5&(4,4,128)\\\\\nBottleNeck 3.6&(4,4,128) & &\nBottleNeck 3.7&(4,4,128)\\\\\n\\hline\n\\multicolumn{8}{c}{Group 4} \\\\\nBottleNeck 4.0&(8,8,64) & &\nBottleNeck 4.1&(8,8,64) & &\nBottleNeck 4.2&(8,8,64)\\\\\n\\hline\n\\multicolumn{8}{c}{Group 5} \\\\\nBottleNeck 5.0&(16,16,16) & &\nBottleNeck 5.1&(16,16,16) \\\\\n\\hline\n\\multicolumn{8}{c}{Output} \\\\\nConv&(32,32,2) \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\n\nThe whole framework of the proposed network is illustrated in Fig.\\ref{framework}(a). 
It is shown that the network consists of five groups of bottleneck layers \\cite{lin2013network} with 1$\\times$1 kernels, illustrated by the colored bars; the first three groups work as the encoder and the last two as the decoder. {The tensors of all layers are listed in Table \\ref{tab:network}.}\n\n\nIn the forward process, the network takes a patch of the DI as input. Firstly, the input data go through a normal convolutional layer and a max-pooling (MP) layer in parallel, and their outputs are concatenated. Next, the concatenated activations go through the encoder, which comprises three groups of bottleneck layers. The structure of an encoder bottleneck is illustrated in Fig.\\ref{framework}(b). A bottleneck layer is constructed as a small residual block, including a max-pooling path and a convolutional path. More specifically, the convolutional path consists of two 1$\\times$1 convolutions and one main convolution. The main convolution varies with the function of the bottleneck: it can be a normal convolution, a dilated convolution \\cite{li2018csrnet} or an asymmetric convolution \\cite{szegedy2016rethinking}. 
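The parameter saving from dilation can be quantified: a $k\times k$ kernel with dilation $d$ covers an effective extent of $k + (k-1)(d-1)$ pixels while keeping only $k^2$ weights per channel pair. A quick check (our own illustration, not from the letter):

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Effective receptive extent of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

# A dilated 3x3 kernel with d = 8 covers the same extent as a dense
# 17x17 kernel, but still has only 3 * 3 = 9 weights per channel pair.
for d in (1, 2, 4, 8):
    print(d, effective_kernel_size(3, d))  # 1 3 / 2 5 / 4 9 / 8 17
```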
{The tensors inside the encoding bottleneck are listed in Table \\ref{tab:bo_en}.} \n\\begin{table}[!htbp]\n\\centering\n\\caption{The tensors inside an encoding bottleneck layer.}\n\\label{tab:bo_en}\n\\begin{tabular}{ccccc}\n\\toprule\n Layer Name & Tensor Size & & Layer Name & Tensor Size \\\\\n \\hline\n Input& (16,16,14) \\\\\n \\hline\n \\multicolumn{5}{c}{Branch 1} \\\\\n Conv1& (8,8,16) &&\nConv2&(8,8,16) \\\\\nConv3&(8,8,64) &&\nDropout&(8,8,64) \\\\\n\\hline\n\\multicolumn{5}{c}{Branch 2} \\\\\nMax-pooling&(8,8,14) & &\nPadding&(8,8,64)\\\\\n\\multicolumn{5}{c}{Output} \\\\\n\\hline\nAddition & (8,8,64)\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\\begin{table}[!htbp]\n\\centering\n\\caption{The tensors inside a decoding bottleneck layer.}\n\\label{tab:bo_de}\n\\begin{tabular}{ccccc}\n\\toprule\n Layer Name & Tensor Size & & Layer Name & Tensor Size \\\\\n \\hline\n Input& (4,4,128) \\\\\n \\hline\n \\multicolumn{5}{c}{Branch 1} \\\\\nConv1& (4,4,16) &&\nConv2&(8,8,16) \\\\\nConv3&(8,8,64) \\\\\n\\hline\n\\multicolumn{5}{c}{Branch 2} \\\\\nConv&(4,4,64) & &\nUp-Sampling&(8,8,64)\\\\\n\\multicolumn{5}{c}{Output} \\\\\n\\hline\nAddition & (8,8,64)\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\n The first group consists of a down-sampling bottleneck and four normal convolutional bottleneck layers. In the normal bottleneck layers, the main convolution component is, by default, a normal convolutional layer. In the down-sampling bottleneck, in particular, the main convolutional component is a normal convolution with a 3$\\times$3 kernel, and the 1$\\times$1 convolution component is replaced by a 2$\\times$2 one. In the next two groups, to exploit the spatial context, we insert bottleneck layers with asymmetric and dilated convolution layers of various kernel sizes among the normal bottlenecks, where the kernel sizes are set to 2, 4, 8 and 16, respectively, as shown by the digits below the green bars. 
In these bottleneck layers, the main convolutional layers are replaced by dilated convolutional layers, whose kernels are sparse with most entries equal to zero. Furthermore, bottleneck layers with asymmetric convolution layers are also inserted between the dilated and normal convolutional layers, illustrated by the blue bars. After encoding, the feature maps are reduced to one quarter of the original image size. \n\nIn the decoding part, the context information collected by the encoder is propagated back to the pixel level. To achieve this, inspired by the idea of U-Net \\cite{ronneberger2015u}, we schedule an upsampling layer and two bottleneck layers. The structure of the decoder bottleneck is illustrated in Fig.\\ref{framework}(c). Similar to the encoder bottleneck, the decoder bottleneck contains a pooling path and a convolution path. The former includes a max-pooling layer and a 1$\\times$1 convolution, while the latter includes two 1$\\times$1 convolutional layers and a 3$\\times$3 convolutional layer. In particular, when the bottleneck layer is used for upsampling, the 3$\\times$3 convolution component, illustrated by the yellow bar, is replaced by a 3$\\times$3 transposed convolution \\cite{dumoulin2016guide}, so that the bottleneck performs 2$\\times$ upsampling. Through two groups of decoding bottleneck layers, the feature map is recovered to the same size as the input image. {The tensors inside the decoding bottleneck are listed in Table \\ref{tab:bo_de}.} Finally, we apply a 2$\\times$2 convolutional layer to obtain the probability map of the two categories. \n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{fig2.png}\n\\caption{Four sets of bitemporal SAR images. The first two rows are the bitemporal images and the last row shows the DIs. (a) YR-A. (b) YR-B. (c) Sendai-A. (d) Sendai-B. 
}\n\\label{fig2}\n\\end{figure}\n\n\\section{Experimental Results}\n\\label{sec:experiment}\n\\subsection{Experiment Datasets}\nIn this paper, the proposed method is verified on four sets of bitemporal SAR images. Two scenes (YR-A and YR-B) are from bitemporal Yellow River SAR images \\cite{Gong2017Change} acquired by the Radarsat-2 satellite in 2008 and 2009, respectively. Their image sizes are 306 $\\times$ 291 and 400 $\\times$ 350, respectively. The other two are parts of TerraSAR-X images acquired prior to (on Oct. 20, 2010) and after (on May 6, 2011) the Sendai earthquake in Japan \\cite{Cui2016A}. Their sizes (Sendai-A and Sendai-B) are 590 $\\times$ 687 and 689 $\\times$ 734, respectively. These four datasets are shown in Fig.\\ref{fig2}. They are quite challenging, featuring, e.g., the linear-shaped changed regions in the YR-B dataset and the complex scenes in the Sendai-A and Sendai-B datasets. \n\n\\begin{table}[!htbp]\n\\centering\n\\caption{The number of training samples.}\n\\label{tab:nos}\n\\begin{tabular}{ccccc}\n\\toprule\n Dataset & YR-A & YR-B & Sendai-A & Sendai-B \\\\\n \\hline\n No. Samples& 3596 & 6205 & 15375 & 20294 \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{fig3.png}\n\\caption{The variations of loss and accuracies.}\n\\label{fig:loss}\n\\end{figure}\n\n\\subsection{Implementations}\n{\nWe apply the proposed lite CNN to bitemporal SAR image change detection as follows. We first generate the DI by Eq.(\\ref{di}), where the difference operator is implemented by the neighborhood-based LR operator \\cite{gong2011neighborhood}. \nTo train the network, we collect a training dataset according to the method in \\cite{wang2018imbalanced}. The numbers of training samples for each dataset are listed in Table \\ref{tab:nos}.\n\nMore specifically, the patch size of each sample is set to 32 $\\times$ 32 and 8 samples are fed in each training step. 
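The sampling just described (32 × 32 patches, mini-batches of 8) can be sketched as follows; the helper names are ours, and this is a simplified grid sampler, not the authors' exact collection procedure from the cited work:

```python
import numpy as np

def extract_patches(di: np.ndarray, patch: int = 32, stride: int = 32):
    """Cut the DI into patch x patch training samples (simplified sampler)."""
    h, w = di.shape
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            yield di[i:i + patch, j:j + patch]

def batches(samples, batch_size: int = 8):
    """Group samples into mini-batches, as fed in each training step."""
    buf = []
    for s in samples:
        buf.append(s)
        if len(buf) == batch_size:
            yield np.stack(buf)
            buf = []
    if buf:  # final partial batch
        yield np.stack(buf)
```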
Additionally, the network is trained end-to-end by back-propagation, and the loss is set as {the binary entropy function defined in \\cite{sadowski2016notes}}. \nTraining is optimized by the Adam algorithm \\cite{kingma2014adam}, where the initial learning rate is set to 0.005. \nTraining is performed on the PyTorch platform under Ubuntu 16.04, installed on a PC with 16 GB of DDR memory and an NVIDIA TITAN Xp Graphics Processing Unit with 11 GB of memory. The training process converges after around 15 epochs. We show the evolution of the loss values and accuracies during training in Fig.\\ref{fig:loss}.\n}\n\\subsection{Comparison Experiments}\nTo verify the benefits of the proposed method, it is compared with the unsupervised PCA-Net (U-PCA-Net) \\cite{gao2016automatic} and the supervised PCA-Net (S-PCA-Net) \\cite{wang2018imbalanced}, which achieves state-of-the-art performance in SAR image change detection. We also compare the proposed method with the deep neural network (DNN) method \\cite{Gong2017Change} and a CNN \\cite{li2019deep}. Among these methods, DNN and U-PCA-Net are unsupervised, while S-PCA-Net, CNN and the proposed method are supervised. \n\nThe performance of the compared methods is evaluated by the probabilistic Missed Alarm (pMA), the probabilistic False Alarm (pFA) and the kappa coefficient, where pFA (pMA) is calculated as the ratio between FA (MA) and the number of Non-Changed pixels (NC) \\cite{wang2019can}. \n\n\\begin{figure}[!thb]\n\\centering\n\\includegraphics[width=\\textwidth]{fig4.png}\n\\caption{The visual comparison results. (a) U-PCA-Net. (b) S-PCA-Net. (c) DNN. (d) CNN. (e) Lite CNN. (f) Reference. }\n\\label{fig3}\n\\end{figure}\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{fig5.png}\n\\caption{The quantitative evaluations of the compared methods. (a) MA. (b) FA. (c) Kappa. 
}\n\\label{eva}\n\\end{figure}\n\\subsection{Experiment Results on Individual Dataset Change Detection}\nIn this experiment, for the supervised learning methods, we collect the training samples from an individual dataset, covering 30\\% of the whole image frame. The rest is used for testing. \n\nThe visual comparison results are shown in Fig.\\ref{fig3}. For the YR-A dataset, S-PCA-Net, DNN and Lite CNN obtain less noisy and more complete changed regions, and Lite CNN recovers clearer boundaries of the changed regions than DNN. For the YR-B dataset, Lite CNN recovers more complete changed regions, especially the line at the bottom of the image, which the other methods cannot recover completely. For the Sendai-A dataset, with its complex scene, S-PCA-Net and Lite CNN are less affected by the speckle and the background and obtain clearer changed regions, while the other compared methods are strongly affected by the speckle and background and almost fail. Moreover, compared with S-PCA-Net, Lite CNN achieves better inner-region consistency. For the Sendai-B dataset, Lite CNN obtains more accurate changed regions than the other methods. \n\n\nMoreover, we show the quantitative evaluations in Fig.\\ref{eva}. {Fig.\\ref{eva}(a) shows that the proposed Lite CNN obtains a lower pMA on the YR-A, Sendai-A and Sendai-B datasets, while the CNN obtains a lower pMA on the YR-B dataset. Fig.\\ref{eva}(b) shows that the proposed Lite CNN obtains a lower pFA on the Sendai-B dataset, while S-PCA-Net obtains a lower pFA on the YR-A, YR-B and Sendai-A datasets. }\nFig.\\ref{eva}(c) shows that on the YR-A dataset S-PCA-Net obtains the best kappa among all the methods, while Lite CNN obtains kappas comparable to the remaining methods. On the YR-B dataset, Lite CNN obtains kappas comparable to the other methods. However, on both the Sendai-A and Sendai-B datasets, Lite CNN performs better than the other methods in terms of pMA and kappa. 
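For reference, these metrics can be computed from the confusion counts as sketched below. The normalization of pMA by the number of changed pixels is our assumption (the letter cites \cite{wang2019can} for the exact definitions); kappa is computed as Cohen's kappa over the two classes:

```python
def change_detection_metrics(pred, ref):
    """Confusion-based metrics for binary change maps (sequences of 0/1).

    pFA = FA / #non-changed pixels; pMA = MA / #changed pixels (the pMA
    normalization is an assumption, not taken from the letter).
    """
    tp = fp = fn = tn = 0
    for p, r in zip(pred, ref):
        if p and r:
            tp += 1      # correctly detected change
        elif p and not r:
            fp += 1      # false alarm (FA)
        elif not p and r:
            fn += 1      # missed alarm (MA)
        else:
            tn += 1
    n = tp + fp + fn + tn
    nc, c = tn + fp, tp + fn
    p_fa = fp / nc if nc else 0.0
    p_ma = fn / c if c else 0.0
    po = (tp + tn) / n   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    return p_fa, p_ma, kappa
```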
\n\n\\begin{figure*}[!hbt]\n\\centering\n\\includegraphics[width=\\textwidth]{fig6.png}\n\\caption{The visual comparison results. (a) S-PCA-Net. (b) CNN. (c) Lite CNN. (d) Reference. }\n\\label{fig4}\n\\end{figure*}\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{fig7.png}\n\\caption{The quantitative evaluations of the compared methods. (a) MA. (b) FA. (c) Kappa.}\n\\label{eva2}\n\\end{figure*}\n\\subsection{Experiment Results on Cross-dataset Change Detection}\nTo further compare the proposed method with the other supervised learning methods, S-PCA-Net and CNN, we perform comparisons on cross-dataset change detection, where a network trained on several datasets is applied to an unseen testing dataset. More specifically, this experiment is conducted in a leave-one-out manner, i.e., each dataset is alternately selected as the testing dataset while the others serve as training datasets. \n\nThe visual comparisons are shown in Fig.\\ref{fig4}. On the YR-A dataset, Lite CNN obtains a cleaner visual result with fewer noisy spots. On the YR-B dataset, Lite CNN performs better than CNN, but not better than S-PCA-Net; there are many missed alarms in the results of Lite CNN. On both the Sendai-A and Sendai-B datasets, Lite CNN obtains better results than the other two methods. \n\n{The quantitative evaluations in terms of pFA, pMA and Kappa are shown in Fig.\\ref{eva2}. Fig.\\ref{eva2}(a) shows that S-PCA-Net obtains a lower pMA on all datasets. Fig.\\ref{eva2}(b) shows that Lite CNN obtains a lower pFA on the Sendai-A dataset, while S-PCA-Net obtains a lower pFA on the YR-A dataset and CNN obtains a lower pFA on the YR-B and Sendai-B datasets.} Fig.\\ref{eva2}(c) shows that Lite CNN performs better than CNN and comparably with S-PCA-Net on the YR-A and YR-B datasets. However, Lite CNN shows clear advantages over the other two methods on the Sendai-A and Sendai-B datasets. 
This indicates that Lite CNN generalizes better than the other two methods, especially on challenging datasets with complex scenes.\n\\begin{table}[!htbp]\n\\centering\n\\caption{The training times of the compared methods.}\n\\label{time}\n\\begin{tabular}{lccc}\n\\toprule\n Methods & S-PCA-Net & CNN& Lite CNN \\\\\n \\hline\nTimes& ~3 h & 30 mins. &15 mins. \\\\ \n \\bottomrule\n\\end{tabular}\n\\end{table}\n\\subsection{Discussion}\nThe above comparisons show that Lite CNN obtains performance comparable to the other methods on the YR-A and YR-B datasets, while on the challenging datasets, e.g. Sendai-A and Sendai-B, it outperforms the other methods and generalizes better. Moreover, Lite CNN is more computationally efficient than S-PCA-Net and CNN. The training times of the three supervised learning methods are compared in Table \\ref{time}. Lite CNN is easy to train and takes less time than S-PCA-Net and CNN, whereas S-PCA-Net takes the longest, since its convolutional kernels are generated by a principal component analysis decomposition. \n\nOverall, Lite CNN obtains comparable or even better performance than S-PCA-Net and CNN while being more computationally efficient, which makes it more practical for change detection, especially when real-time detection is required.\n\n\\section{Conclusion}\n\\label{sec:conlusion}\nIn this paper, we develop a lightweight convolutional neural network for bitemporal SAR image change detection. The proposed network consists of groups of bottleneck layers that exploit the image features. To verify the benefits of our proposed method, we compare it with several conventional neural networks on four sets of bitemporal SAR images. 
The experimental results show that our proposed method Lite CNN performs better than other two methods on cross-dataset change detection, especially when the scene is complex. Furthermore, Lite CNN is a lightweight network, which is more computationally efficient than CNN and S-PCA-Net. In the future, we will further optimize Lite CNN and make it more efficient on the edge device. \n\n \\section*{Acknowledgment}\n This work was supported by the State Key Program of National Natural Science of China (No. 61836009), the National Natural Science Foundation of China (No. 61701361, 61806154), the Major Research Plan of National Natural Science Foundation of China (No. 91838303), the Open Fund of Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University (Grant No. IPIU2019006), the Natural Science Basic Research Plan in Shaanxi Province of China (No.2018JM6083) and the Open Fund of State Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University (No. 17E02).\n\n\n\\bibliographystyle{spiejour} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nA function $g:\\mathcal{I}\\subseteq\\mathbb{R}\\to\\mathbb{R}$ is said to be convex on the\ninterval $\\mathcal{I}$, if the inequality\n\\begin{align}\\label{eq:11}\ng(\\eta\\,x+(1-\\eta)y)\\leq \\eta\\,g(x)+(1-\\eta)g(y)\n\\end{align}\nholds for all $x,y\\in\\mathcal{I}$ and $\\eta\\in[0,1]$. 
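As a quick numerical sanity check of \eqref{eq:11} (our own illustration, not part of the original text), one can verify the defining inequality on a grid of sample points and weights:

```python
def is_convex_on_samples(g, xs, etas):
    """Check g(t*x + (1-t)*y) <= t*g(x) + (1-t)*g(y) on sampled points."""
    tol = 1e-12
    for x in xs:
        for y in xs:
            for t in etas:
                lhs = g(t * x + (1 - t) * y)
                rhs = t * g(x) + (1 - t) * g(y)
                if lhs > rhs + tol:
                    return False
    return True

xs = [i / 4 for i in range(-8, 9)]          # sample points in [-2, 2]
etas = [i / 10 for i in range(11)]          # weights in [0, 1]
assert is_convex_on_samples(lambda x: x * x, xs, etas)         # convex
assert not is_convex_on_samples(lambda x: -(x * x), xs, etas)  # concave
```

Such a sampled check can of course only refute convexity, not prove it.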
We say that $g$ is concave, provided $-g$ is convex.\n\nFor convex functions \\eqref{eq:11}, many equalities and inequalities have been established, {\\em e.g.},\nthe Ostrowski type inequality \\cite{7}, the Opial inequality \\cite{Farid}, the Hardy type inequality \\cite{8}, the Olsen type \ninequality \\cite{9}, the Gagliardo-Nirenberg type inequality \\cite{10}, midpoint and trapezoidal type inequalities \n\\cite{6,Mohammed9} and the Hermite--Hadamard type (HH-type) inequality \\cite{5} that will be used in our study, \nwhich is defined by:\n\\begin{align}\\label{eq:12}\ng\\left(\\frac{u+v}{2}\\right)&\\leq \\frac{1}{v-u}\\int_u^v g(x)dx\\leq \\frac{g(u)+g(v)}{2},\n\\end{align}\nwhere $g:\\mathcal{I}\\subseteq\\mathbb{R}\\to\\mathbb{R}$ is assumed to be a convex function on \n$\\mathcal{I}$ and $u, v\\in \\mathcal{I}$ with $u<v$. The fractional version of \\eqref{eq:12} for $\\mu>0$ reads:\n\\begin{align}\\label{eq:13}\ng\\left(\\frac{u+v}{2}\\right)&\\leq \\frac{\\Gamma(\\mu+2)}{2(v-u)^\\mu}\\left[I^{\\mu}_{u^+}g(v)\n+I^{\\mu}_{v^-}g(u)\\right]\\leq \\frac{g(u)+g(v)}{2},\n\\end{align}\nwhere $I^{\\mu}_{u^+}$ and $I^{\\mu}_{v^-}$ denote the left-sided and right-sided \nRiemann-Liouville fractional integrals of order $\\mu>0$, respectively, defined as~\\cite{11}:\n\\begin{align}\\label{eq:14}\n\\begin{aligned}\nI^{\\mu}_{u^+}g(x)=\\frac{1}{\\Gamma(\\mu)}\\int_u^x (x-t)^{\\mu-1}g(t)dt, \\quad x>u,\n\\\\\nI^{\\mu}_{v^-}g(x)=\\frac{1}{\\Gamma(\\mu)}\\int_x^v (t-x)^{\\mu-1}g(t)dt, \\quad x<v.\n\\end{aligned}\n\\end{align}\n\n\\begin{definition}\nLet $\\mu>0$. Let $\\psi(x)$ be an increasing and positive monotone function on the interval $(u,v]$ with a \ncontinuous derivative $\\psi'(x)$ on the interval $(u,v)$. 
Then the left- and right-sided \n$\\psi$-Riemann--Liouville fractional integrals of a function $g$ with respect to another function $\\psi(x)$ \non $[u,v]$ are defined by \\cite{11,19,20}:\n\\begin{align}\\label{eq:15}\n\\begin{aligned}\nI^{\\mu:\\psi}_{u^+}g(x)=\\frac{1}{\\Gamma(\\mu)}\\int_{u}^{x} \\psi'(t)(\\psi(x)-\\psi(t))^{\\mu-1}g(t)dt,\n\\\\\nI^{\\mu:\\psi}_{v^-}g(x)=\\frac{1}{\\Gamma(\\mu)}\\int_{x}^{v} \\psi'(t)(\\psi(t)-\\psi(x))^{\\mu-1}g(t)dt.\n\\end{aligned}\n\\end{align}\nIt is important to note that if we set $\\psi(x)=x$ in \\eqref{eq:15}, then the $\\psi$-Riemann--Liouville \nfractional integral reduces to the Riemann--Liouville fractional integral \\eqref{eq:14}.\n\\end{definition}\n\nAs mentioned above, in this study we investigate several inequalities of midpoint type for Riemann--Liouville \nfractional integrals of twice differentiable convex functions with respect to increasing functions.\n\\section{Main Results}\nOur main results are based on the following lemma:\n\\begin{lemma}\\label{lem:31}\nLet $g:[u,v]\\subseteq\\mathbb{R}\\to\\mathbb{R}$ be a differentiable function and $g''\\in L_1[u,v]$ with \n$0\\leq u<v$. 
Using inequality of \\eqref{eq:36}, convexity of $|g''|^q$ and the power--mean's \ninequality for $q>1$, we have\n\\begin{align}\\label{eq:314}\n\\int_{0}^{1}t^{\\mu+1}\\left|g''\\left(\\frac{t}{2}u+\\frac{2-t}{2}v\\right)\\right|dt\n&=\\int_{0}^{1}t^{\\mu+1-\\frac{\\mu+1}{q}}\\left[t^{\\frac{\\mu+1}{q}}\n\\left|g''\\left(\\frac{t}{2}u+\\frac{2-t}{2}v\\right)\\right|\\right]dt\n\\nonumber \\\\\n&\\leq \\left(\\int_{0}^{1}t^{\\mu+1}\\right)^{1-\\frac{1}{q}}\n\\left(\\int_{0}^{1}t^{\\mu+1}\\left|g''\\left(\\frac{t}{2}u+\\frac{2-t}{2}v\\right)\\right|^{q}dt\n\\right)^{\\frac{1}{q}}\n\\nonumber \\\\\n&\\leq \\left(\\frac{1}{\\mu+2}\\right)^{1-\\frac{1}{q}}\n\\left(\\int_{0}^{1}\\left(\\frac{t^{\\mu+2}}{2}|g''(u)|^{q}+\\frac{2t^{\\mu+1}-t^{\\mu+2}}{2}|g''(v)|^{q}\n\\right)dt\\right)^{\\frac{1}{q}}\n\\nonumber \\\\\n&=\\left(\\frac{1}{\\mu+2}\\right)^{1-\\frac{1}{q}}\n\\left[\\frac{1}{2(\\mu+3)}|g''(u)|^q+\\left(\\frac{1}{\\mu+2}-\\frac{1}{2(\\mu+3)}\\right)\n|g''(v)|^q\\right]^{\\frac{1}{q}}.\n\\end{align}\nIn the same manner, we get\n\\begin{align}\\label{eq:315}\n&\\int_{0}^{1}t^{\\mu+1}\\left|g''\\left(\\frac{2-t}{2}u+\\frac{t}{2}v\\right)\\right|dt\n\\leq \\left(\\frac{1}{\\mu+2}\\right)^{1-\\frac{1}{q}}\n\\left[\\left(\\frac{1}{\\mu+2}-\\frac{1}{2(\\mu+3)}\\right)|g''(u)|^q\n+\\frac{1}{2(\\mu+3)}|g''(v)|^q\\right]^{\\frac{1}{q}}.\n\\end{align}\nUsing \\eqref{eq:314} and \\eqref{eq:315} in \\eqref{eq:36} we obtain \\eqref{eq:34} for $q>1$.\nThus the proof of theorem \\ref{th:31} is completed.\n\\end{proof}\n\\begin{corollary} \\label{cor:32}\nWith the similar assumptions of Theorem \\ref{th:31} if\n\\begin{enumerate}\n\\item $\\psi(x)=x$, we 
have\n\\begin{align*}\n&\\left|\\frac{2^{\\mu-1}\\Gamma(\\mu+2)}{(v-u)^\\mu}\\left[I^{\\mu}_{\\left(\\frac{u+v}{2}\\right)^+}g(v)\n+I^{\\mu}_{\\left(\\frac{u+v}{2}\\right)^-}g(u)\\right]-(\\mu+1)g\\left(\\frac{u+v}{2}\\right)\\right|\n\\\\\n&\\leq\\frac{(v-u)^2}{8}\\left(\\frac{1}{\\mu+2}\\right)^{1-\\frac{1}{q}}\n\\Biggl\\{\\left[\\frac{1}{2(\\mu+3)}|g''(u)|^q+\\left(\\frac{1}{\\mu+2}-\\frac{1}{2(\\mu+3)}\\right)\n|g''(v)|^q\\right]^{\\frac{1}{q}}\n\\\\\n&+\\left[\\left(\\frac{1}{\\mu+2}-\\frac{1}{2(\\mu+3)}\\right)|g''(u)|^q\n+\\frac{1}{2(\\mu+3)}|g''(v)|^q\\right]^{\\frac{1}{q}}\\Biggr\\},\n\\end{align*}\nwhich is obtained by Tomar et al. \\cite{21}.\n\\item $\\psi(x)=x$ and $\\mu=1$, we have\n\\begin{align*}\n&\\left|\\frac{1}{v-u}\\int_{u}^{v}g(x)dx-g\\left(\\frac{u+v}{2}\\right)\\right|\n\\leq\\frac{(v-u)^2}{48}\\Biggl[\\left(\\frac{3|g''(u)|^q+5|g''(v)|^q}{8}\\right)^{\\frac{1}{q}}\n+\\left(\\frac{5|g''(u)|^q+3|g''(v)|^q}{8}\\right)^{\\frac{1}{q}}\\Biggr],\n\\end{align*}\nwhich is obtained by Sarikaya et al. \\cite{23}.\n\\item $\\psi(x)=x$ and $q=1$, we have\n\\begin{align*}\n&\\left|\\frac{2^{\\mu-1}\\Gamma(\\mu+2)}{(v-u)^\\mu}\\left[I^{\\mu}_{\\left(\\frac{u+v}{2}\\right)^+}g(v)\n+I^{\\mu}_{\\left(\\frac{u+v}{2}\\right)^-}g(u)\\right]-(\\mu+1)g\\left(\\frac{u+v}{2}\\right)\\right|\n\\leq\\frac{(v-u)^2}{8(\\mu+2)}\\biggl(|g''(u)|+|g''(v)|\\biggr),\n\\end{align*}\nwhich is obtained by Tomar et al. \\cite{21}.\n\\item $\\psi(x)=x, \\mu=1$ and $q=1$, we have\n\\begin{align*}\n&\\left|\\frac{1}{v-u}\\int_{u}^{v}g(x)dx-g\\left(\\frac{u+v}{2}\\right)\\right|\n\\leq\\frac{(v-u)^2}{24}\\left(\\frac{|g''(u)|+|g''(v)}{2}\\right),\n\\end{align*}\n\\end{enumerate}\nwhich is obtained by Sarikaya et al. 
\\cite{23}.\n\\end{corollary}\n\\begin{theorem}\\label{th:32}\nLet $g:[u,v]\\subseteq\\mathbb{R}\\to\\mathbb{R}$ be a differentiable function and $g''\\in L_1[u,v]$ with \n$0\\leq u<v$, and suppose that $|g''|^q$ is convex on $[u,v]$. Then\n\\begin{align}\\label{eq:37}\n\\left|\\sigma_{\\mu,\\psi}(g;u,v)\\right|\n&\\leq\\frac{(v-u)^2}{8}\\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\n\\Biggl[\\left(\\frac{|g''(u)|^q+3|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\n+\\left(\\frac{3|g''(u)|^q+|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\\Biggr]\n\\nonumber \\\\\n&\\leq\\frac{(v-u)^2}{8}\\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\\bigl(|g''(u)|+|g''(v)|\\bigr),\n\\end{align}\nwhere $q>1$ and $\\frac{1}{p}+\\frac{1}{q}=1$.\n\\end{theorem}\n\\begin{proof}\nBy using Hölder's inequality, we have\n\\begin{align}\\label{eq:38}\n\\int_{0}^{1}t^{\\mu+1}\\left|g''\\left(\\frac{t}{2}u+\\frac{2-t}{2}v\\right)\\right|dt\n&\\leq \\left(\\int_{0}^{1}t^{(\\mu+1)p}dt\\right)^{\\frac{1}{p}}\n\\left(\\int_{0}^{1}\\left|g''\\left(\\frac{t}{2}u+\\frac{2-t}{2}v\\right)\\right|^{q}dt\\right)^{\\frac{1}{q}}\n\\nonumber \\\\\n&\\leq \\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\n\\left(\\int_{0}^{1}\\left(\\frac{t}{2}|g''(u)|^{q}+\\frac{2-t}{2}|g''(v)|^{q}\\right)\ndt\\right)^{\\frac{1}{q}}\n\\nonumber \\\\\n&=\\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\n\\left(\\frac{|g''(u)|^{q}+3|g''(v)|^{q}}{4}\\right)^{\\frac{1}{q}}.\n\\end{align}\nSimilarly, we have\n\\begin{align}\\label{eq:39}\n\\int_{0}^{1}t^{\\mu+1}\\left|g''\\left(\\frac{2-t}{2}u+\\frac{t}{2}v\\right)\\right|dt\n&\\leq \\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\n\\left(\\frac{3|g''(u)|^{q}+|g''(v)|^{q}}{4}\\right)^{\\frac{1}{q}}.\n\\end{align}\nThus, the inequalities \\eqref{eq:36}, \\eqref{eq:38} and \\eqref{eq:39} complete the proof of\nthe first inequality of \\eqref{eq:37}.\n\nTo prove the second inequality of \\eqref{eq:37}, we apply the formula \n\\begin{align*}\n\\sum_{i=1}^{n}\\left(c_i+d_i\\right)^m\\leq \\sum_{i=1}^{n}c_i^m+\\sum_{i=1}^{n}d_i^m, \\quad 0\\leq m<1\n\\end{align*}\nfor $c_{1}=3|g''(u)|^{q}, c_{2}=|g''(u)|^{q}, d_{1}=|g''(v)|^{q}, d_{2}=3|g''(v)|^{q}$ and $m=\\frac{1}{q}$.\nThen \\eqref{eq:36} 
gives\n\\begin{align*}\n\\left|\\sigma_{\\mu,\\psi}(g;u,v)\\right|\n&\\leq\\frac{(v-u)^2}{8}\\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\n\\Biggl[\\left(\\frac{|g''(u)|^q+3|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\n+\\left(\\frac{3|g''(u)|^q+|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\\Biggr]\n\\\\\n&\\leq\\frac{(v-u)^2\\left(3^{\\frac{1}{q}}+1\\right)}{16}\\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\n\\bigl[|g''(u)|+|g''(v)|\\bigr]\n\\\\\n&\\leq\\frac{(v-u)^2}{8}\\left(\\frac{1}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\\bigl(|g''(u)|+|g''(v)|\\bigr).\n\\end{align*}\nHence the proof of Theorem \\ref{th:32} is completed.\n\\end{proof}\n\\begin{corollary}\\label{cor:33}\nWith the similar assumptions of Theorem \\ref{th:32}, if\n\\begin{enumerate}\n\\item $\\psi(x)=x$, we have\n\\begin{align*}\n&\\left|\\frac{2^{\\mu-1}\\Gamma(\\mu+2)}{(v-u)^\\mu}\\left[I^{\\mu}_{\\left(\\frac{u+v}{2}\\right)^+}g(v)\n+I^{\\mu}_{\\left(\\frac{u+v}{2}\\right)^-}g(u)\\right]-(\\mu+1)g\\left(\\frac{u+v}{2}\\right)\\right|\n\\\\\n&\\leq\\frac{(v-u)^2}{8}\\left(\\frac{2}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\n\\Biggl[\\left(\\frac{|g''(u)|^q+3|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\n+\\left(\\frac{3|g''(u)|^q+|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\\Biggr]\n\\\\\n&\\leq\\frac{(v-u)^2}{8}\\left(\\frac{2}{(\\mu+1)p+1}\\right)^{\\frac{1}{p}}\\bigl(|g''(u)|+|g''(v)|\\bigr),\n\\end{align*}\nwhich is obtained by Tomar et al. \\cite{21}.\n\\item $\\psi(x)=x$ and $\\mu=1$, we have\n\\begin{align*}\n\\left|\\frac{1}{v-u}\\int_{u}^{v}g(x)dx-g\\left(\\frac{u+v}{2}\\right)\\right|\n&\\leq\\frac{(v-u)^2}{16(2p+1)^{\\frac{1}{p}}}\n\\Biggl[\\left(\\frac{|g''(u)|^q+3|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\n+\\left(\\frac{3|g''(u)|^q+|g''(v)|^q}{4}\\right)^{\\frac{1}{q}}\\Biggr]\n\\\\\n&\\leq\\frac{(v-u)^2}{2^{2+\\frac{2}{q}}(2p+1)^{\\frac{1}{p}}}\\bigl(|g''(u)|+|g''(v)|\\bigr),\n\\end{align*}\n\\end{enumerate}\nwhich is obtained by Sarikaya et al. 
\\cite{23}.\n\\end{corollary}\n\\begin{corollary}\\label{cor:34}\nFrom Theorems \\ref{th:31}--\\ref{th:32}, we obtain the following inequality for $\\psi(x)=x, \\mu=1$ and $q>1$:\n\\begin{align*}\n\\left|\\frac{1}{v-u}\\int_{u}^{v}g(x)dx-g\\left(\\frac{u+v}{2}\\right)\\right|\n&\\leq(v-u)^2\\min\\{\\delta_{1},\\delta_{2}\\}\\bigl(|g''(u)|+|g''(v)|\\bigr),\n\\end{align*}\nwhere $\\delta_{1}=\\frac{1}{24}$ and $\\delta_{2}=\\frac{1}{2^{2+\\frac{2}{q}}(2p+1)^{\\frac{1}{p}}}$\nsuch that $p=\\frac{q}{q-1}$.\n\\end{corollary}\n\\section{Applications}\nIn this section some applications are presented to demonstrate usefulness of our obtained results in the\nprevious sections. \n\\subsection{Applications to special means}\nLet $u$ and $v$ be two arbitrary positive real numbers, then consider the following special means:\n\\begin{enumerate}\n\\item[(i)] The arithmetic mean:\n\\[A=A(u,v)=\\frac{u+v}{2}.\\]\n\\item[(ii)] The inverse arithmetic mean:\n\\[H=H(u,v)=\\frac{2}{\\frac{1}{u}+\\frac{1}{v}}, \\quad u,v\\neq 0.\\]\n\\item[(iii)] The geometric mean:\n\\[G=G(u,v)=\\sqrt{u\\,v}.\\]\n\\item[(iv)] The logarithmic mean:\n\\[L(u,v)=\\frac{v-u}{\\log(v)-\\log(u)}, \\quad u\\neq v.\\]\n\\item[(v)] The generalized logarithmic mean:\n\\[L_{n}(u,v)=\\left[\\frac{v^{n+1}-u^{n+1}}{(v-u)(n+1)}\\right]^{\\frac{1}{n}}, \n\\quad n\\in\\mathbb{Z}\\setminus\\{-1,0\\}.\\]\n\\end{enumerate}\n\\begin{proposition}\\label{prop:1}\nLet $|n|\\geq 3$ and $u, v\\in\\mathbb{R}$ with $0-1$ we have\n\\begin{align}\\label{eq:prop64}\n&\\left|\\frac{\\mathcal{I}_{p}(v)-\\mathcal{I}_{p}(u)}{v-u}\n-\\frac{a+b}{4(p+1)}\\mathcal{I}_{p+1}\\left(\\frac{u+v}{2}\\right)\\right|\n\\leq (v-u)^{2}\\min\\{\\delta_{1},\\delta_{2}\\}\\, 2^{3-2p}\\sqrt{\\pi}\\Gamma(p+1)\n\\nonumber \\\\\n&\\times\\Biggl(\\left|a\\right|^{p-3}\n\\left|\\,_{2}F_{3}\\left(\\frac{p+1}{2},\\frac{p+2}{2};\\frac{p+1-n}{2},\\frac{p+2-n}{2},p+1;\\frac{a^2}{4}\\right)\n\\right|\n\\nonumber 
\\\\\n&+\\left|b\\right|^{p-3}\n\\left|\\,_{2}F_{3}\\left(\\frac{p+1}{2},\\frac{p+2}{2};\\frac{p+1-n}{2},\\frac{p+2-n}{2},p+1;\\frac{b^2}{4}\\right)\n\\right|\\Biggr).\n\\end{align}\n\\end{proposition}\n\\begin{proof}\nLet $g(x)=\\mathcal{I}'_{p}(x)$. Note that the function $x\\mapsto\\mathcal{I}'''_{p}(x)$ is convex on the \ninterval $[0,\\infty)$ for each $p>-1$. Using Corollary \\ref{cor:34} and \\eqref{eq:prop61}--\\eqref{eq:prop62}, \nwe obtain the desired inequality \\eqref{eq:prop64} immediately.\n\\end{proof}\n\\section{Conclusion}\nIn this paper, we established some new integral inequalities of midpoint type for convex functions with \nrespect to increasing functions involving Riemann--Liouville fractional integrals. It can be noted from \nCorollary \\ref{cor:31}--\\ref{cor:33} that our results are a generalization of all obtained results in\n\\cite{21,22,23}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nIn the current paradigm of galaxy formation, it is believed that virtually all\ngalaxies initially form as `disks' owing to the cooling of gas with non-zero\nangular momentum in virialized dark matter haloes. This smooth gas accretion\ndominates the galactic gas supply and hence the fuel for star formation.\nGalaxies that reside in the centers of lower mass halos, those with masses\nless than $M_{halo} \\lower.7ex\\hbox{\\ltsima} 3\\times 10^{11} \\>h^{-1}\\rm M_\\odot$, accrete gas through very\nefficient cold mode accretion, i.e. gas that is never heated (Keres et\nal. 2005, Keres et al. 2008). 
The central galaxies that reside in larger\nhalos accrete their gas through the classic, but less efficient, hot mode of\naccretion where the gas is shock heated to near the virial temperature near\nthe virial radius and then must cool to be accreted by the central galaxy.\nHence the naive expectation would be that dwarf galaxies should be actively\nstar forming and blue.\n\nHowever, when a small halo is accreted by a larger halo, i.e. when it becomes\na subhalo, the central galaxy that formed in the small halo becomes a\nsatellite galaxy and may experience a number of environmental effects that may\nchange its properties. For instance, the diffuse gas originally associated\nwith the subhalo may be stripped, thus removing the fuel for future\nstar-formation (e.g. Larson, Tinsley \\& Caldwell 1980). This process,\nreferred to as strangulation (Balogh \\& Morris 2000), can result in a gradual\ndecline of the star formation rate in the satellite galaxy, making it redder\nwith the passage of time. If the external pressure is sufficiently high,\nram-pressure stripping may also be able to remove the entire cold gas\nreservoir of the satellite (e.g. Gunn \\& Gott 1972), causing a fast quenching\nof its star formation. A satellite galaxy is also subject to tidal heating\nand stripping and galaxy harassment (Moore et al 1996), which may also cause\nthe satellite to lose its fuel for star formation. These processes are\nbelieved to have played an important role in the evolution of satellite\ngalaxies, and to be responsible, to a large extent, for the relation between\ngalaxy properties and their environment. Indeed, satellite galaxies are\ngenerally found to be redder and somewhat more concentrated than central\ngalaxies with similar stellar masses (e.g. van den Bosch et al. 2008a;\nWeinmann et al. 2008; Yang et al. 2008b; c; Guo et al. 
2009).\n\nFurthermore, recent investigations based on cosmological $N$-body simulations\nhave shown that a significant fraction of dark matter haloes that are close\nto, but beyond the virial radius of, a more massive neighboring halo, are\nphysically connected to their neighbor. As shown by Lin et al. (2003) and\nmore recently by Ludlow et al. (2008), some low-mass halos are {\\it\nphysically} associated with more massive halos, in the sense that they were\nonce subhalos within the virial radii of these more massive progenitors and\nhave subsequently been ejected. This population of halos was found to extend\nbeyond three times the virial radii of their host halos, and represents about\n$10\\%$ of the entire population of low-mass halos (Wang, Mo \\& Jing 2008). If\ngalaxies have managed to form in the progenitors of these ejected halos, it is\nlikely that the same environmental processes operating on satellite galaxies\nmay also have affected the properties of these galaxies. In particular, we\nwould expect the presence of a red population of faint galaxies that are\nclosely associated with massive halos that once hosted them.\n\nGalaxies are observed to be bimodal in the color-magnitude plane: red galaxies\nwith very little star formation (the red sequence) and blue star forming\ngalaxies that are typically disky (the blue cloud) (e.g. Kauffmann et al. 2003,\nBaldry et al. 2004). Extrapolating the observed division line (Yang et\nal. 2008a) to dwarf galaxies, we surprisingly find that for central dwarf\ngalaxies in the SDSS, with $r$-band magnitudes between -14.46 and -17.05, just\nover 1\/4 are red. In this paper we will investigate the nature of these red\ndwarf galaxies. 
Quantifying the spatial distribution of this population of\ngalaxies is clearly important, because it allows us to determine whether or\nnot they can be explained as a population of satellite galaxies that were\nejected from larger halos.\n\nIn this paper, we use the galaxy group catalogue constructed by Yang et al.\n(2007) from the Sloan Digital Sky Survey Data Release 4 (SDSS DR4;\nAdelman-McCarthy {et al.~} 2006) to study the distribution of central dwarf\ngalaxies around massive halos. The structure of this paper is as follows. In\n\\S\\ref{sec_data} we briefly describe the criteria used to select galaxies and\ngalaxy groups. In \\S\\ref{sec_analyze} we study the radial distribution of\ndwarf galaxies around their nearest neighbor halos and its dependence on\ngalaxy color and concentration. Some systematic effects that may change our\nresults are discussed in \\S\\ref{sec_systematics}. In \\S\\ref{sec_mock}, we use\nmock catalogues to test the reliability of our results and to quantify their\nimplications. Finally, in \\S\\ref{sec_discussion}, we present some further\ndiscussion regarding our results.\n\n\\section{Observational Data}\n\\label{sec_data}\n\n\\subsection{Samples of Galaxy Groups}\n\nOur analysis uses the galaxy group catalogues of Yang {et al.~} (2007), which were\nconstructed from the New York University Value-Added Galaxy Catalog (NYU-VAGC,\nsee Blanton {et al.~} 2005b) based on the Sloan Digital Sky Survey Data Release 4\n(SDSS DR4; Adelman-McCarthy {et al.~} 2006). Only galaxies in the Main Galaxy\nSample with redshifts in the range $0.01 \\leq z \\leq 0.20$ and with a redshift\ncompleteness ${\\cal C} > 0.7$ were used. Three sets of group catalogues were\nconstructed using a modified version of an adaptive halo-based group finder,\nwhich was optimized to assign galaxies into groups according to their common\ndark matter halos (Yang {et al.~} 2005). 
For our study here, we use group sample\nII, in which only galaxies with spectroscopic redshifts (either provided by\nthe SDSS or taken from alternative surveys) are used. We have tested, though,\nthat using group sample III, which also includes galaxies that have been\nmissed owing to fiber collisions, does not have a significant impact on any of\nour results.\n\nFor each group in the catalogue, Yang {et al.~} (2007) estimated the corresponding\nhalo mass using either the ranking of its characteristic luminosity (this mass\nis denoted by $M_L$) or using the ranking of its stellar mass (this mass is\ndenoted by $M_S$). Throughout this paper, we use $M_S$ as our halo masses.\nWe have also tested that using $M_L$ instead does not change any of our\nresults. As described in Yang {et al.~} (2007), the characteristic luminosity and\nstellar mass of a group are defined to be the total luminosity and total\nstellar mass of all group members, respectively, with $\\>^{0.1}{\\rm M}_r-5\\log h \\leq -19.5$.\nThus, groups whose member galaxies are all fainter than $\\>^{0.1}{\\rm M}_r-5\\log h = -19.5$ cannot\nbe assigned halo masses according to the ranking. For these groups, the halo\nmasses are estimated in the following way. In Yang {et al.~} (2008b) it is shown\nthat the stellar masses of central galaxies are tightly correlated with the\nmasses of their host haloes. The mean of this relation is well described by\n\\begin{equation}\\label{eq:Ms_fit}\nM_{\\ast} = M_0\n\\frac { (M_h\/M_1)^{\\alpha +\\beta} }{(1+M_h\/M_1)^\\beta } \\,,\n\\end{equation}\nwhere $M_{\\ast}$ and $M_h$ are the central galaxy stellar mass and the host\nhalo mass of the group, respectively, and ($\\log M_0$, $\\log M_1$, $\\alpha$,\n$\\beta$) = (10.306, 11.040, 0.315, 4.543). 
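Since this mean relation is strictly increasing in $M_h$, the inversion used for the faint groups can be done with a one-dimensional root finder. A minimal sketch (the function names and bracketing interval are ours for illustration, not part of the catalogue pipeline):

```python
import numpy as np
from scipy.optimize import brentq

# Best-fit parameters of the stellar mass - halo mass relation:
# (log M0, log M1, alpha, beta) = (10.306, 11.040, 0.315, 4.543)
LOG_M0, LOG_M1, ALPHA, BETA = 10.306, 11.040, 0.315, 4.543

def log_mstar(log_mh):
    """log10 of the mean central stellar mass implied by the relation."""
    x = 10.0 ** (log_mh - LOG_M1)  # M_h / M_1
    return LOG_M0 + (ALPHA + BETA) * np.log10(x) - BETA * np.log10(1.0 + x)

def log_mhalo(log_ms, lo=9.0, hi=16.0):
    """Invert the relation: the halo mass whose mean central stellar mass
    equals log_ms. Well defined because the relation is monotonic."""
    return brentq(lambda lm: log_mstar(lm) - log_ms, lo, hi)
```

For example, a central with $\log M_{\ast} = 9.5$ is assigned $\log M_h \approx 11.3$ under this mean relation.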
For groups that cannot be assigned\na halo mass according to the stellar-mass (luminosity) ranking, we use the\nabove relation to obtain $M_h$ through the stellar masses of their central\ngalaxies.\n\n\n\\subsection{Galaxy Samples}\n\\label{sec:samples}\n\\begin{deluxetable*}{lccccccccc}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{Galaxy Samples \\label{tab1}} \\tablewidth{0pt} \\tablehead{ID &\n $\\>^{0.1}{\\rm M}_r-5\\log h$ & $N_{\\rm total}$ & $N_{\\rm cent}$ & $N_{\\rm sat}$ &\n $f_{\\rm red,cent}$ & $f_{\\rm red,sat}$ & $f^b_{\\rm red,cent}$ & $f^b_{\\rm\n red,sat}$ & $f^b(r_{\\rm p}\/R_{180})\\le 3$ \\\\\n (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10)} \\startdata\nS1 & (-14.46,-16.36] & 1500 & 1103 & 397 & 13.69\\% & 37.53\\% & 33.79\\% &\n 53.42\\% & 34.60\\%\\\\\nS2 & (-16.36,-16.78] & 1500 & 1081 & 419 & 13.69\\% & 36.28\\% & 22.50\\% &\n 47.02\\% & 47.46\\%\\\\\nS3 & (-16.78,-17.05] & 1500 & 1008 & 492 & 10.20\\% & 40.24\\% & 19.51\\% &\n 52.64\\% & 47.72\\%\\\\\nS1+S2+S3 & (-14.46,-17.05] & 4500 & 3192 & 1308 & 12.59\\% & 38.15\\% & 25.49\\% &\n 51.08\\% & 41.63\\% \\\\\n\\cline{1-10}\\\\\nS4 & & 1500 & 1080 & 420 & 13.51\\% & 37.02\\% \\enddata\n\n\n\\tablecomments{Column 1 indicates the sample ID. Column 2 lists the absolute\n magnitude range of each sample. Columns 3 to 5, indicate the number of\n total, central and satellite galaxies in each sample, respectively. Columns\n 6 and 7 list the red fractions among central and satellite galaxies,\n respectively, where the red galaxies are defined to be the reddest 20\\% of\n all the galaxies. Column 8 lists the red fraction of central dwarf galaxies\n where the red galaxies are defined by extrapolating the division between red\n sequence and blue cloud galaxies from Yang et al. (2008a). Column 9 lists\n also this red fraction, but for the satellite dwarf galaxies. Column 10\n lists the fraction of those (in Column 8) central red dwarf galaxies that\n have $r_{\\rm p}\/R_{180}\\le 3$. 
}\n\\end{deluxetable*}\n\n\nGroup catalogue II consists of 369447 galaxies, which are assigned into 301237\ngroups. The majority, 271420, of the groups contain only one member, i.e.,\nall of them are the central galaxies of the groups. The remaining 98027\ngalaxies are in groups with more than one member, and 29817 of them are {\\it\ncentral} galaxies (the brightest one in each group). We refer to the other\n68210 galaxies as {\\it satellites} (Yang et al. 2007).\n\nFrom our galaxy sample, we select three subsamples of dwarf galaxies as\nfollows. We rank-order all galaxies according to their absolute magnitudes\n(in the $r$-band, $K$- and evolution-corrected to redshift $z=0.1$), starting\nwith the faintest galaxy. The 1500 galaxies with the highest rank (i.e. the\n1500 faintest galaxies) make up our first sample, called S1. The galaxies with\nranks 1501-3000 make up sample S2, and those with ranks 3001-4500 sample\nS3. Table~\\ref{tab1} lists the (sequential) absolute magnitude ranges of all\nthree samples, as well as the numbers of central and satellite galaxies. Note\nthat all the galaxies in S1, S2 and S3 are fainter than $\\>^{0.1}{\\rm M}_r-5\\log h = -17.05$. In\nwhat follows we refer to all galaxies in these three samples as dwarf\ngalaxies.\n\n\\begin{figure} \\plotone{f1.eps}\n \\caption{The color-magnitude distribution of dwarf galaxies (including\n central and satellite galaxies). In the four panels the dwarf galaxies are\n separated into 20\\%, 30\\%, 40\\%, 50\\% red galaxies as indicated,\n respectively. The vertical dashed lines in the upper-left panel show the\n separation criteria of samples S1, S2 and S3.} \\label{fig:col}\n\\end{figure}\n\nTo study how the spatial distribution of dwarf galaxies depends on galaxy\ncolor, we separate each of the samples, S1, S2, and S3, into red and blue\nsubsamples. 
In particular, we define a color cut\n\\begin{equation}\\label{colcut}\n^{0.1}(g-r) = a + b~(\\>^{0.1}{\\rm M}_r-5\\log h)\\,,\n\\end{equation}\nand we adjust the parameters $a$ and $b$ such that samples S1, S2, and S3\nroughly have the same fractions of galaxies, $f_{\\rm red}$, redder than this\nparticular cut. We consider four values for $f_{\\rm red}$: 20\\%, 30\\%, 40\\%,\nand 50\\%, for which we obtain [a, b]=[-0.421,-0.060], [-0.423,-0.055],\n[-0.375,-0.049] and [-0.313,-0.043], respectively. Thus, if $f_{\\rm red} =\n20$\\% it means that the red subsamples of S1, S2 and S3 each consist of the\n20\\% reddest galaxies in their particular samples, etc. Fig.~\\ref{fig:col}\nshows the color-magnitude relations of galaxies in S1, S2 and S3 (delineated\nby vertical dashed lines). The four panels correspond to the four different\nvalues of $f_{\\rm red}$, as indicated, and the solid line in each panel\ncorresponds to the color cut of Eq.~(\\ref{colcut}) used in each case.\n\n\nIn Table~\\ref{tab1} we list, for each sample, the red fractions of central\n($f_{\\rm red,cent}$) and satellite galaxies ($f_{\\rm red,sat}$). Here red\ngalaxies are defined to be the reddest 20\\% of all galaxies (both centrals and\nsatellites) in our sample of dwarf galaxies. Clearly, dwarf galaxies that are\nsatellites have a much higher red fraction than central dwarf galaxies. For\ncomparison, extrapolating the observed division line between the red sequence\nand the blue cloud from Yang et al. (2008a) down to dwarf galaxies one would\nfind that 33.8\\%, 22.5\\%, and 19.5\\% of the central galaxies in the S1, S2,\nand S3 samples, respectively, were red. In total, then, there are more red\ncentral dwarfs than red satellite dwarfs, a result that is so far not\npredicted, e.g. by halo occupation models (Brown et al. 2008). Note that a\ndifferent definition of red galaxies may change these fractions (e.g. 
with respect to\nthe galaxies of similar stellar masses), but not the general results we\nfind in this paper.\n\n\n\n\\section{The distribution of central dwarfs}\n\\label{sec_analyze}\n\n\\begin{figure} \\plotone{f2.eps}\n \\caption{Fractions of the red central galaxies near more massive halos as a\n function of projected distance $r_{\\rm p}\/R_{180}$. The four panels show the\n results of the color subsamples with 20\\%, 30\\%, 40\\% and 50\\% red\n galaxies, respectively. The related color subsample separation criteria\n are shown in Fig. \\ref{fig:col}. The different lines correspond to the\n different samples as indicated. The long-dashed, green line shows the fit\n described in the text. For comparison, we show the fraction of red {\\it\n satellite} galaxies that are within the same luminosity ranges as the\n central galaxies at $r_{\\rm p}\/R_{180}=0$. The small window within each panel\n shows the number counts of dwarf central (or satellite) galaxies in S1 as\n a function of $r_{\\rm p}\/R_{180}$: black histogram for all galaxies and shaded,\n red histogram for red galaxies. } \\label{fig:f_r}\n\\end{figure}\n\n\nIn this section, we investigate how central dwarf galaxies are distributed\nwith respect to their nearest more massive halo (i.e. more massive than their\nown halo). Since the distances of galaxies based on redshifts suffer from\nredshift distortions, we separate the distance between a central dwarf galaxy\nand its nearest more massive halo into two components: $\\pi$, which is the\nseparation along the line-of-sight, and $r_{\\rm p}$, which is the separation in the\nperpendicular direction.\n\nFor each group in the catalogue, we use the assigned halo mass, $M_S$, to\nestimate its halo radius, $R_{180}=[3M_{S}\/(4\\pi\\times 180\\bar{\\rho})]^{1\/3}$, which\nfollows from defining the mean mass density within a halo as $180$ times the\naverage density of the universe, $\\bar{\\rho}$. 
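Writing $\bar{\rho}=\Omega_m\rho_{\rm crit}$, this definition translates directly into code. A short sketch in little-$h$ units, where $\rho_{\rm crit}\simeq 2.775\times 10^{11}\,(h^{-1}{\rm M}_\odot)/(h^{-1}{\rm Mpc})^3$; the constant and the choice $\Omega_m=0.238$ (the value adopted later for the mock catalogues) are our assumptions for illustration:

```python
import numpy as np

RHO_CRIT = 2.775e11  # critical density in (h^-1 Msun) / (h^-1 Mpc)^3
OMEGA_M = 0.238      # matter density; value adopted for the mocks below

def r_180(m_s):
    """R_180 in h^-1 Mpc for a group of mass M_S in h^-1 Msun: the radius
    within which the mean density is 180 times the mean cosmic density."""
    rho_bar = OMEGA_M * RHO_CRIT
    return (3.0 * m_s / (4.0 * np.pi * 180.0 * rho_bar)) ** (1.0 / 3.0)
```

A $10^{12}\,h^{-1}{\rm M}_\odot$ halo then has $R_{180}\approx 0.27\,h^{-1}$Mpc.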
We search around each central\ndwarf galaxy, within a line-of-sight separation $|\\pi| = 15\\>h^{-1}{\\rm {Mpc}}$,\n\\footnote{Tests have shown that changing the line-of-sight separation for the\nsearch from $|\\pi| \\le 15\\>h^{-1}{\\rm {Mpc}}$ to $|\\pi| \\le 10\\>h^{-1}{\\rm {Mpc}}$ or to $|\\pi| \\le\n20\\>h^{-1}{\\rm {Mpc}}$ does not have a significant impact on any of our results.} for the\ngroup that has (i) a halo mass larger than that of the dwarf galaxy in\nquestion and (ii) the lowest value of $r_{\\rm p}\/R_{180}$ (with $R_{180}$ the halo\nradius of the group). The central galaxy is then said to be at a scaled\n`distance' $r_{\\rm p}\/R_{180}$ from a massive halo, and the halo is referred to as\nthe nearest halo of the galaxy. We use this scaled distance because $R_{180}$\nis the only important length scale related to the dynamics of a virialized\nhalo.\n\n\\subsection{Color Dependence}\n\nFig.~\\ref{fig:f_r} shows the fraction $N_{\\rm red}\/N_{\\rm total}$ of central\ndwarf galaxies that are red as a function of the scaled distance,\n$r_{\\rm p}\/R_{180}$, to their nearest halos. Here $N_{\\rm total}$ is the total\nnumber of dwarf galaxies in each sample (S1, S2 or S3) in that bin of\n$r_{\\rm p}\/R_{180}$, and $N_{\\rm red}$ is the number of those galaxies that are red\naccording to the criterion used. The numbers are shown in the small window in\neach panel (the black histogram for $N_{\\rm total}$, and the red, hatched\nhistogram for $N_{\\rm red}$). The four panels show the results for $f_{\\rm\nred} = 20$\\%, 30\\%, 40\\% and 50\\%. In each panel, the different lines show\nthe results obtained for the three samples, S1, S2, and S3, as indicated in\nthe upper-left panel. The error-bars are obtained using 100 bootstrap\nresamplings (Barrow, Bhavsar, \\& Sonoda 1984; Mo, Jing \\& B\\\"orner 1992).\nGalaxies are counted in bins specified by $N-0.5 \\leq r_{\\rm p}\/R_{180} \\leq N+0.5$\n(for $N=2, 3,... 
,7$), and $0\\leq r_{\\rm p}\/R_{180} \\leq N+0.5$ for $N=1$. For\ncomparison, the data point at $r_{\\rm p}\/R_{180} =0$, indicates the fraction of red\n{\\it satellite} galaxies within the same luminosity range as the central\ngalaxies. Note that the results obtained for S1, S2, and S3 are almost\nidentical, indicating that the spatial distribution of dwarf galaxies around\nmassive haloes does not depend on their luminosities. However, if we consider\nmuch brighter galaxies, e.g. at $\\>^{0.1}{\\rm M}_r-5\\log h \\sim -18.5$, the radial dependence\nstarts to level off.\n\nThere is a clear trend that the fraction of red central dwarf galaxies\nincreases with decreasing scaled distance to the nearest halo. The fraction\nof the 20\\% reddest population at $r_{\\rm p}\/R_{180}\\ga 4$ is around 5\\% to 10\\%,\nincreases systematically to $\\sim 25$\\% at $r_{\\rm p}\/R_{180}\\sim 1$, and to $\\sim\n40$\\% at $r_{\\rm p}\/R_{180} = 0$ for the satellite galaxies. For the other three\ncases (with $f_{\\rm red} =30$\\%, 40\\% and 50\\%), the fraction also decreases\nwith $r_{\\rm p}\/R_{180}$, but reaches a higher level at large $r_{\\rm p}\/R_{180}$. This\nindicates that the less red galaxies in these subsamples are not strongly\nassociated with massive halos. We quantify the association of red dwarf\ngalaxies with massive halos by fitting the data obtained from S3 shown in the\nfour panels simultaneously with a function $f=a+b\\times\\exp(-x\/2)$, where\n$x=r_{\\rm p}\/R_{180}$. In the fit we subtract a constant of, $0.1$, $0.2$ and\n$0.3$, from the data for the 30\\%, 40\\% and 50\\% reddest subsamples,\nrespectively, to account for the component that is not closely associated with\nmassive halos. The best fit results, with $a=0.045$ and $b=0.356$, are shown\nin each panel of Fig. \\ref{fig:f_r} as the green, long-dashed lines. 
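The fit itself is a standard two-parameter least-squares problem. A sketch on synthetic measurements (the data points here are generated from the best-fit curve for illustration, not the measured fractions; for the 30\%, 40\% and 50\% subsamples the constants 0.1, 0.2 and 0.3 would first be subtracted as described above):

```python
import numpy as np
from scipy.optimize import curve_fit

def red_fraction_model(x, a, b):
    # x = r_p / R_180; constant floor a plus an exponential excess
    return a + b * np.exp(-x / 2.0)

# Synthetic measurements scattered about the best-fit (a, b) = (0.045, 0.356)
rng = np.random.default_rng(0)
x = np.arange(1.0, 8.0)  # bin centers r_p / R_180 = 1 ... 7
f_red = red_fraction_model(x, 0.045, 0.356) + rng.normal(0.0, 0.005, x.size)

(a_fit, b_fit), _ = curve_fit(red_fraction_model, x, f_red, p0=(0.05, 0.3))
```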
We have\nalso checked the radial distribution of the central dwarf galaxies when using\n$f_{\\rm red} = 15$\\% to define the subsample of red dwarfs. In this case, we\nfind less than a 5\\% decrease in the red fraction at large $r_{\\rm p}\/R_{180}$,\nindicating that the 5\\% least red galaxies in the 20\\% reddest subsample are\nnot randomly but more closely distributed relative to the massive halos. Thus\nthe overall results suggest that the $15\\%$ - $20\\%$ reddest dwarf galaxies\nare quite distinct from the other dwarfs, in that they reveal a clear\npreference to reside close to their nearest more massive dark matter halo. In\naddition, the non-zero red fraction at large $r_{\\rm p}\/R_{180}$ indicates that\nthere is a $\\sim 5\\%$ tail of red dwarfs randomly distributed throughout the\nbackground population, especially in the voids, due to some processes that\nshut down the star formation. A similar trend has also been found by Cooper et\nal. (2007) from the DEEP2 survey, however for more massive galaxies in low\ndensity regions at higher redshifts. In \\S\\ref{sec_mock}, we use mock galaxy\nredshift surveys constructed from cosmological $N$-body simulations to\nquantify this connection.\n\n\n\\subsection{Concentration Dependence}\n\nApart from a color dependence, we also check whether the radial distribution\nof dwarf galaxies with respect to their nearest more massive dark matter halos\ndepends on their surface brightness profiles. To this end, we split our dwarf\ngalaxies into two subsamples according to the value of their concentration\nparameter $C=r_{90}\/r_{50}$. Here $r_{90}$ and $r_{50}$ are the radii that\ncontain 90 and 50 percent of the Petrosian $r$-band flux, respectively. As\nshown in Strateva {et al.~} (2001), $C$ is a reasonable proxy for the Hubble type,\nwith $C>2.6$ corresponding to early-type galaxies. 
We, therefore, separate\ngalaxies into high ($C>2.6$) and low ($C\\le 2.6$) concentrations, as\nillustrated in the lower-right panel of Fig.~\\ref{fig:concen}. Roughly 20\\%\nof the dwarf galaxies thus end up in the high-concentration subsample.\n\nThe lower-right panel of Fig.~\\ref{fig:f_con} shows the fraction of galaxies\nin this high-concentration subsample as a function of the scaled distance to\nthe nearest more massive halo. Unlike the reddest galaxies, the most\nconcentrated galaxies have a radial distribution that is similar to that of\nthe total population of central dwarf galaxies. Note however, for brighter\ngalaxies, especially in and around clusters, there is a so called\nmorphology-radius relation (e.g., Dressler et al. 1997), which according to\nPark \\& Hwang (2008) may be largely induced by the interaction of the target\ngalaxy with its nearest (early-type) neighbor galaxy.\n\n\n\\section{Systematics}\n\\label{sec_systematics}\n\nBefore we proceed to study the origin of the central red dwarf galaxies, there\nare a number of issues that need to be addressed. An obvious worry is that the\ngroup finder is not perfect, and has misclassified a number of satellite\ngalaxies as central galaxies. We will discuss this issue in more detail with\nthe help of mock galaxy and group catalogs in \\S\\ref{sec_mock}. In this\nsection we discuss systematics that can be addressed without the need for mock\ncatalogs.\n\n\n\\subsection{Stellar Mass Dependence}\n\\begin{figure} \\plotone{f3.eps}\n \\caption{Upper-left panel: the color-stellar mass distribution of the 20\\%\n red (`x'-crosses) and 80\\% blue (`+'-crosses) dwarf galaxies in terms of\n {\\it similar stellar masses}. Upper-right panel: the same set of dwarf\n galaxies as in the upper-left panel, but separated into 20\\% blue\n (`x'-crosses) and 80\\% red (`+'-crosses). 
Lower-left panel: the same set\n of dwarf galaxies as in the upper-left panel, but randomly selected and\n separated into 20\\% (`x'-crosses) and 80\\% (`+'-crosses) populations.\n Lower-right panel: the concentration-magnitude distribution of the dwarf\n galaxies, which are separated into $\\sim 20\\%$ high and $\\sim 80\\%$ low\n concentration populations by $C=2.6$. } \\label{fig:concen}\n\\end{figure}\n\n\\begin{figure} \\plotone{f4.eps}\n \\caption{Similar to Fig. \\ref{fig:f_r}, but with related subsample\n separation criteria shown in Fig. \\ref{fig:concen}. Upper-left panel:\n fraction of the red central galaxies near more massive halos as a function\n of $r_{\\rm p}\/R_{180}$, where the 20\\% red population is defined with respect to\n the similar stellar mass galaxies. Upper-right panel: similar to the\n upper-left panel but for the 20\\% blue population. Lower-left panel:\n similar to the upper-left panel but for the 20\\% random population.\n Lower-right panel: fraction of the high concentration $C>2.6$ central\n galaxies near more massive halos as a function of\n $r_{\\rm p}\/R_{180}$. } \\label{fig:f_con}\n\\end{figure}\n\nFor a given luminosity, redder galaxies are expected to have a larger stellar\nmass. The color separation used above may thus introduce a bias in the sense\nthat galaxies in the redder subsample are systematically more massive. To\ncheck whether or not the color distribution we obtained in the previous\nsection is robust when the dwarf galaxies are selected in a similar stellar\nmass bin, we construct a controlled subsample S4, where stellar masses for\ngalaxies are estimated using the relation between the stellar mass-to-light\nratio and color obtained by Bell et al.(2003). Note that the survey {\\it\nmagnitude limit} of the SDSS observation corresponds to a higher (lower) {\\it\nstellar mass limit} for the redder (bluer) galaxies. 
In general, one can\nconstruct a stellar mass limit sample (and hence the subsample S4) for all\n(including both red and blue) galaxies by adopting the stellar mass limit (as\na function of redshift) for the reddest galaxies (see Appendix of van den\nBosch et al. 2008b), which, however, may significantly reduce the number of\ndwarf galaxies in our sample. Instead, as a rough approximation, we construct\nsubsample S4 as follows (with the hidden assumption that if the galaxies are\ncomplete in both luminosity and stellar mass they have similar color\ndistributions and thus similar red and blue fractions). First, we separate all\nthe dwarf galaxies into red and blue populations: the reddest 20\\% being red\nand the rest being blue, using the separation line shown in the upper-left\npanel of Fig ~\\ref{fig:col}. Next, for each of the (1500$\\times$20\\%$=300$)\nred galaxies in S1, we randomly select four blue galaxies from the blue\npopulation with stellar masses within $\\Delta \\log M_{\\ast}=0.025$ of the red\ngalaxy. This yields a blue control sample of (1500$\\times$80\\%$=1200$) dwarf\ngalaxies, which has the same stellar mass distribution as the 20\\% reddest\ngalaxies. The control subsample S4 so constructed has exactly the same red\nfraction of dwarf galaxies as S1, but now with respect to blue galaxies with\nsimilar stellar masses. The upper-left panel of Fig.~\\ref{fig:concen} shows\nthe color-stellar mass relation for S4, split into red and blue galaxies.\n\n\nFor comparison, we also form the following two subsamples from S4. In one, we\nrandomly select 20\\% of the galaxies from S4; in the other, we select the 20\\%\nbluest galaxies that have the same stellar mass distribution as all the\ngalaxies in sample S4. The color-stellar mass relations of these two\nsubsamples are shown in the lower-left and upper-right panels of Fig.\n\\ref{fig:concen}, respectively.\n\nThe upper-left panel of Fig. 
\\ref{fig:f_con} shows $N_{\\rm red}\/N_{\\rm total}$\nas a function of $r_{\\rm p}\/R_{180}$ obtained using sample S4. Fitting the data\nagain with the function $f=a+ b \\times\\exp(-x\/2)$, we obtain $a=0.053$ and\n$b=0.309$, and the corresponding model is shown as the long-dashed curve.\nThis dependence on the scaled distance, $x\\equiv r_{\\rm p}\/R_{180}$, is only\nslightly weaker than that for the corresponding luminosity sample S1,\nindicating that the bias caused by the stellar-mass difference between the red\nand blue subsamples is not important.\n\nThe upper-right and lower-left panels of Fig. \\ref{fig:f_con} show the\nresults obtained for the 20\\% bluest galaxies and for the 20\\% random galaxies\n(as defined above). For these two cases there is no significant radial\ndependence. Although one expects such a lack of radial dependence for the\nrandom subsample, it does indicate that there are no significant systematic\nerrors in our analysis. The lack of a radial dependence for the 20\\% bluest\nsubsample is due to the fact that only the $\\sim 15-20\\%$ reddest galaxies\nreveal a radial distribution that is peaked towards smaller $r_{\\rm p}\/R_{180}$.\n\n\n\\subsection{Dependence on the Mass of the Nearest Neighbor}\n\n\\begin{figure} \\plotone{f5.eps}\n \\caption{The nearest neighbor to host halo mass ratio $M_n\/M_h$ - projected\n distance $r_{\\rm p}\/R_{180}$ distribution of the dwarf central galaxies in\n samples S1+S2+S3. The triangles and crosses show the red and blue central\n galaxies, respectively, where the red population is defined to be the 20\\%\n reddest all galaxies. } \\label{fig:mass_rp}\n\\end{figure}\n\\begin{figure} \\plotone{f6.eps}\n \\caption{Similar to Fig. \\ref{fig:f_r}, but for all galaxies in samples\n S1+S2+S3 within different central-nearest halo systems. In each panel the\n selection criteria, $M_n\/M_h$, is indicated. Here the results are shown\n for the fraction of the 20\\% red population. 
For comparison, in each\n panel, we also show as the symbols with solid line the results for all\n central-nearest halo systems. } \\label{fig:M_n}\n\\end{figure}\n\nIn our analysis above, the ``nearest more massive halo'' of a central dwarf\ngalaxy is defined as the halo with a line-of-sight separation $|\\pi| \\le\n15\\>h^{-1}{\\rm {Mpc}}$ which has (i) a mass that is more massive than that of the dwarf\ngalaxy, and (ii) the smallest value of $r_{\\rm p}\/R_{180}$ (see\nsection~\\ref{sec_analyze}). This implies that some of these nearest more\nmassive halos may have masses that are only slightly larger than that of the\ndwarf galaxy under consideration.\n\nIn what follows we use $M_h$ to refer to the halo mass of the central dwarf\ngalaxy, and $M_n$ to refer to the mass of its nearest more massive halo. For\nour combined sample (S1 + S2 + S3) the average value of $M_h$ is about\n$10^{10.9}\\>h^{-1}\\rm M_\\odot$. Fig.~\\ref{fig:mass_rp} shows the ratio $M_n\/M_h$ as a\nfunction of the scaled distance $r_{\\rm p}\/R_{180}$ for all central dwarf galaxies\nin S1+S2+S3. Here the results for the 20\\% reddest galaxies are shown as red\ntriangles, while the other 80\\% are indicated by blue crosses. Note that there\nis a very large amount of scatter in $M_n\/M_h$, ranging from unity to well in\nexcess of 1000.\n\nIt is interesting to investigate whether the color dependence of the radial\ndistribution of central dwarfs with respect to their nearest more massive halo\ndepends on $M_n$. This can provide valuable insight into the actual origin of\nthis color dependence. We therefore proceed as follows. We first combine\nsamples S1+S2+S3, and then calculate the fraction of central red dwarf\ngalaxies that belong to the 20\\% reddest subsample as we did in\n\\S\\ref{sec_analyze}. However, now we only select systems for which $M_n\/M_h$\nis restricted to [1,2], [2,4], [4,8] or $\\log [M_n\/ \\>h^{-1}\\rm M_\\odot]\\ge 12.0$,\nrespectively. 
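Given arrays of host halo masses, nearest-neighbor masses, and a red/blue flag per central dwarf, this selection reduces to binning in the mass ratio. A sketch (the array and function names are ours):

```python
import numpy as np

def red_fraction_by_mass_ratio(m_n, m_h, is_red, edges=(1.0, 2.0, 4.0, 8.0)):
    """Red fraction of central dwarfs in bins of M_n / M_h, here
    (1,2], (2,4] and (4,8]. The fourth selection in the text,
    log M_n >= 12, is a cut on M_n itself rather than on the ratio."""
    ratio = np.asarray(m_n) / np.asarray(m_h)
    is_red = np.asarray(is_red)
    fractions = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (ratio > lo) & (ratio <= hi)
        fractions.append(is_red[in_bin].mean() if in_bin.any() else np.nan)
    return fractions
```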
The results are shown in the four panels of Fig.~\\ref{fig:M_n}\nas indicated. For comparison we also show, in each panel, the results obtained\nfor all systems (i.e. $M_n\/M_h > 1$). A comparison of all four panels shows\nthat there is a clear, albeit somewhat weak, dependence of the trend on\n$M_n\/M_h$. Overall, the colors of central dwarf galaxies are most strongly\naffected by nearest neighbor halos that are more massive. In the case of $1 <\nM_n\/M_h \\leq 2$ (upper-left panel), the central dwarf galaxies have a $N_{\\rm\nred}\/N_{\\rm total}$ that is almost independent of $r_{\\rm p}\/R_{180}$, and much\nlower than that of the dwarf satellites. On the other hand, the central\ndwarfs that are distributed around halos more massive than $10^{12.0} \\>h^{-1}\\rm M_\\odot$\n(lower-right panel), have a radial dependence that is somewhat stronger than\nthat for all systems.\n\n\\subsection{Survey Edge Effect}\n\\begin{figure} \\plotone{f7.eps}\n \\caption{Similar to Fig. \\ref{fig:f_r}, but here we compare the results for\n all galaxies in samples S1+S2+S3, with and without edge effects by\n removing the central dwarf galaxies that are near the survey\n edge. } \\label{fig:edge}\n\\end{figure}\n\nSince the SDSS is not a full-sky survey, and since our group catalogue is\nconstructed using only galaxies with redshifts $0.01 \\leq z \\leq 0.2$, our\nresults may be influenced by edge effects of the survey: for central dwarf\ngalaxies near an edge of the survey, there is an enhanced probability that it\nis actually a satellite (or central) galaxy in a more massive group (halo),\nbut for which all other members just happen to lie beyond the edges of the\nsurvey. Although we tried to take these effects into account when assigning\nhalo masses to our groups (see Yang et al. 
2007 for details), it could still\nbe that a significant fraction of our central dwarf galaxies are in reality\nmisclassified centrals or satellites owing to the survey geometry.\n\nTo check the impact of these edge effects, we follow Yang et al. (2007) by\nmeasuring the edge parameter $f_{\\rm edge}$. For each central dwarf galaxy in\nS1+S2+S3, we randomly distribute $500$ points within a radius $1\\>h^{-1}{\\rm {Mpc}}$. Next\nwe apply the SDSS DR4 survey mask and remove those random points that fall\noutside of the region where the completeness ${\\cal C} > 0.7$. For each central\ndwarf galaxy we then compute the number of remaining points, $N_{\\rm remain}$,\nand we define $f_{\\rm edge}=N_{\\rm remain}\/500$ as a measure for the volume\naround the central dwarf galaxy that lies within the survey edges. To test\nthe impact of edge effects on our measurements in \\S\\ref{sec_analyze}, we\nremove those central dwarf galaxies with $f_{\\rm edge}\\le 0.8$ (about 13\\%)\nand recalculate the radial distribution of the remaining central galaxies. The\nresult is shown in Fig. \\ref{fig:edge} (dashed line), compared to the results\nfor all the central dwarfs, independent of their value of $f_{\\rm edge}$\n(solid line). Clearly, the two curves are almost indistinguishable,\nindicating that our results are not an artifact of survey edge effects.\n\n\\section{Test with mock samples}\n\\label{sec_mock}\n\n\\begin{figure} \\plotone{f8.eps}\n \\caption{Similar to Fig. \\ref{fig:f_r}, but here we compare the observational\n and mock results. The upper-left, upper-right and lower-left panels show\n the results for all host-nearest halo systems using different color\n models: Case I, II and III as indicated (see text). 
In the lower-right\n panel, we show results for those central-nearest halo systems with $M_n\\ge\n 10^{12.0}\\>h^{-1}\\rm M_\\odot$ using color model Case III but with different parameters.\n In each panel, the symbols connected with dashed lines are results\n obtained from the mock galaxy and group catalogues where the halos are\n assumed to be spherical. The symbols connected with dot-dashed lines are\n results obtained from the mock galaxy and group catalogues where the halos\n are assumed to follow a triaxial Jing \\& Suto (2002) profile. For\n reference, in each panel we also show, as the dots connected with solid\n lines, the results we obtained for the SDSS samples S1+S2+S3. See text\n for details. } \\label{fig:mock}\n\\end{figure}\n\nOne potential problem with the results presented above is that the group\nfinder used to identify galaxy groups is not perfect. Hence, some of the\ndwarf galaxies classified as central galaxies may in fact be satellite\ngalaxies. To test the severity of such effects and to quantify the true\nassociation between central dwarf galaxies and their nearby massive halos, we\napply the same analysis to mock samples and compare the results with the\nobservational data that we have obtained. Here we use the mock SDSS DR4\ngalaxy and group catalogues that are constructed by Yang et al. (2007) to\ntest the performance of the group finder. Following Yang et al. (2004), the\nmock galaxy catalogue is constructed by populating dark matter haloes in\nnumerical simulations of the standard $\\Lambda$CDM model with galaxies of\ndifferent luminosities, using the conditional luminosity function (CLF) model\nof Cacciato et al. (2008). The cosmological parameters adopted here are\nconsistent with the three-year data release of the WMAP mission: $\\Omega_m =\n0.238$, $\\Omega_{\\Lambda}=0.762$, $n_s=0.951$, $h=0.73$ and $\\sigma_8=0.75$\n(Spergel et al. 2007). 
This CLF describes the halo occupation statistics of\nSDSS galaxies, and accurately matches the SDSS luminosity function, as well as\nthe clustering and galaxy-galaxy lensing data of SDSS galaxies as a function\nof their luminosity. Next a mock redshift survey is constructed mimicking the\nsky coverage of the SDSS DR4 and taking detailed account of the angular\nvariations in the magnitude limits and completeness of the data (see Li {et al.~}\n2007 for details). Finally we construct a group catalogue from this mock\nredshift survey, using the same halo-based group finder as for the real SDSS\nDR4.\n\nTo test the impact of contamination on our observational results obtained\nabove, we consider three models for the distribution of red dwarf galaxies:\n\begin{itemize}\n\item Case I: Here we assume that a fraction, $f_{\rm red, sat}$, of true {\it\n satellite} dwarf galaxies are red, but that all true central dwarf\n galaxies are blue.\n\item Case II: Same as Case I, but here we assume that a fraction, $f_{\rm\n red, cent}$, of true central dwarf galaxies are also red and have the same\n spatial distribution as blue central dwarfs.\n\item Case III: Similar to Case II, but here we assume that $f_{\rm red,\n cent}$ depends on the distance of the central galaxy to its nearest more\n massive halo, according to $f_{\rm red, cent}(r)= a+b\times\exp(-(y-1)\/2)$.\n Here $a$ and $b$ are constants and $y=r\/R_{180}$.\n\end{itemize}\nThese models are used to assign a color to each of the mock dwarf galaxies\naccording to their positions in real space and their true membership of host\nhalos.\n\nAs for the observational data, we select the 4500 faintest galaxies from the\nmock group catalogue, using the same criteria as described in\nSubsection~\ref{sec:samples}. We choose the observational result for the\ncentral galaxies in S1+S2+S3 (shown as the dots with error bars in\nFig.~\ref{fig:M_n}) to compare with our models. 
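The three colour-assignment cases can be illustrated with a short numerical sketch. The functional form $f_{\rm red, cent}(r)= a+b\times\exp(-(y-1)/2)$ and the parameter values used below come from the text; the function names and the random-number handling are our own, purely illustrative choices:

```python
import numpy as np

def f_red_cent(r, R180, a=0.05, b=0.36):
    """Case III red fraction for true central dwarfs:
    f_red,cent(r) = a + b * exp(-(y - 1) / 2), with y = r / R180."""
    y = np.asarray(r, dtype=float) / R180
    return a + b * np.exp(-(y - 1.0) / 2.0)

def assign_colours(r, R180, is_satellite, f_red_sat=0.38, rng=None):
    """Assign red (True) or blue (False) to mock dwarfs: satellites are red
    with fixed probability f_red_sat (Cases I-III), while centrals follow the
    distance-dependent Case III fraction f_red_cent(r)."""
    if rng is None:
        rng = np.random.default_rng(0)
    is_satellite = np.asarray(is_satellite, dtype=bool)
    p_red = np.where(is_satellite, f_red_sat, f_red_cent(r, R180))
    return rng.random(p_red.shape) < p_red
```

Cases I and II are limits of the same routine: Case I corresponds to a central red fraction of zero, and Case II to a constant $f_{\rm red, cent}$ independent of $r$.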
As discussed in the previous\nsection, this result is representative of the distribution of red dwarf\ngalaxies with respect to their nearest more massive halos. For all Cases (I,\nII and III), we adopt $f_{\rm red, sat}=0.38$ so that the red fraction of the\nsatellite galaxies is consistent with the SDSS data (repeated in Fig.\n\ref{fig:mock} as a solid line). In Case II we set $f_{\rm red, cent}=0.07$,\nso that the red fraction of the central galaxies at large projected distance,\n$r_{\rm p}\/R_{180}\ge 4$, is roughly the same as the observational data. The\ncorresponding results obtained from Case I and Case II are shown in the\nupper-left and upper-right panels of Fig. \ref{fig:mock}, respectively. It\nis clear that in Case I, in which the only red dwarfs are satellites, the\nfraction of false central galaxies in the mock catalogue is too small to\nmatch the observational data at $r_{\rm p}\/R_{180}\ga 1$. For Case II, although by\nconstruction the fraction of red central galaxies matches the observational\nresults at large $r_{\rm p}\/R_{180}\ge 4$, the model underestimates the fraction of\nred central galaxies at intermediate $r_{\rm p}\/R_{180}$. These results indicate\nthat (i) not all red dwarf galaxies are satellites, and (ii) the central red\ndwarf galaxies have a different distribution than the total central dwarf\npopulation.\n\nNow, let us look at Case III. We have experimented with different values for\n$a$ and $b$, and found that the following set of parameters matches the\nobservational data reasonably well: $(a,b)=(0.05, 0.36)$. The results\nobtained from the mock catalogue using this set of model parameters are shown\nin the lower-left panel of Fig. \ref{fig:mock}. This shows that the central\nred dwarf galaxies are correlated with massive halos on scales given by $(y-1)\n\la 2$ (i.e. 
$r\la 3 R_{180}$).\n\nFinally, as we did for the observational sample, we can also obtain $f_{\rm\nred, cent}(r)$ for dwarf galaxies near halos of different masses using the\nmock sample Case III. As an illustration, we consider the case with $M_n\ge\n10^{12.0}\>h^{-1}\rm M_\odot$. The model for $f_{\rm red, cent}(r)$ that best matches the\nobservational result has $(a,b)=(0.05, 0.45)$ and is slightly steeper than\nthat obtained for the case without any restrictions on $M_n$. The model\nprediction obtained from the mock sample is shown in the lower-right panel of\nFig. \ref{fig:mock} along with the corresponding observational data.\n\nIn the mock catalogue considered above, the distribution of satellite galaxies\nin individual halos is assumed to be spherically symmetric and to follow the\nNFW (Navarro, Frenk \& White 1997) profile (see Yang et al. 2004 for\ndetails). In reality, the distribution of satellite galaxies in individual\nhalos may not be spherical, which may cause further contamination of the\ngroup memberships selected by the group finder. To test this, we have\nconstructed a mock catalogue assuming that the distribution of satellite\ngalaxies in individual halos is triaxial, with axis ratios given by the model\nof Jing \& Suto (2002) for CDM halos. We found a slightly higher level of\ncontamination in this new mock catalogue, but it does not change any of our\nresults significantly. As an illustration, in each panel of\nFig. \ref{fig:mock} we also show the results for non-spherical halos using\nsymbols connected with dot-dashed lines. 
We obtain for Case III slightly\nweaker radial dependence with $(a,b)=(0.05,0.34)$ and $(a,b)=(0.05,0.43)$, for\nthe cases shown in the lower-left and lower-right panels, respectively.\n\n\section{Discussion}\n\label{sec_discussion}\n\n\nWe set out to understand why about 1\/4 of {\it central} dwarf galaxies, those\nwith $r$-band magnitudes between $-14.46$ and $-17.05$, are red when one defines a\ndwarf galaxy to be red by extrapolating the division between red sequence\ngalaxies and the blue cloud, as determined by Yang et al. (2008a) for brighter\ngalaxies, down to dwarf magnitudes. Current models of galaxy formation would\nnaively expect such galaxies to be blue since they would be efficiently\naccreting gas through cold mode accretion and rapidly converting it into\nstars.\n\nIn a recent study, Ludlow et al. (2008; see also Lin et al. 2003) analysed\nthe properties of subhalos in galaxy-sized cold dark matter halos using a\nsuite of cosmological N-body simulations. The subhalos in their definition\nrefer to the whole population of subhalos physically associated with the main\nsystem, including both subhalos that are found within the virial radius of the\nhost halo at the present time, and halos that were once within the virial\nradius of the main progenitor of the host and have survived as self-bound\nentities until $z=0$. They found that such populations can extend beyond {\it\nthree times} the virial radius, and contain objects on extreme orbits, with\nsome approaching the nominal escape speed from the system. On average the\nsubhalos identified within the virial radius represent only about {\it one\nhalf} of all associated subhalos, and many halos that now appear to be\nrelatively isolated may actually have been ejected in the past from a more\nmassive system. 
Since galaxies\nare assumed to form in dark matter halos, it is interesting to see if the\nresults we obtain here can be understood in terms of galaxy formation in this\npopulation of subhalos.\n\nAccording to the current theory of galaxy formation, satellite galaxies can\nexperience various environmental effects that can quench their star formation\nand make them red (e.g., van den Bosch et al. 2008b and references therein).\nBecause the galaxies in ejected subhalos have also been satellite galaxies, at\nleast for some period of time, they are likely to have been subjected to\nsimilar environmental effects, and thus to have experienced some quenching of\ntheir star formation rates. It is thus likely that the association of red\ndwarf galaxies with massive halos presented here is produced by the\nassociation of ejected subhalos with their (former) hosts.\n\nAs shown in Table~\ref{tab1}, about 30\% of the dwarf galaxies are satellite\ngalaxies. According to the results obtained by Ludlow et al. (2008), there\nshould thus also be a significant fraction of dwarf galaxies that are\nphysically associated with nearby more massive halos out to about three times\nthe virial radius. If these associated galaxies (now outside their hosts)\nhave properties similar to the satellite galaxies, we would expect an enhanced\nfraction of red dwarf galaxies that are distributed outside massive halos.\nThis is qualitatively consistent with our findings presented above and with\nthe results shown in Table~\ref{tab1}. 
However, since the observational data are obtained\nin redshift space and based on galaxy groups that may contain interlopers and\nmay be incomplete, a detailed comparison between the data and the models\nrequires the construction of mock catalogues that make use of the subhalo\npopulation and contain all the observational selection effects.\n\nThe fact that central dwarf galaxies have concentrations that are independent\nof their distances to the nearest massive halos indicates that the processes\nthat cause them to become red do not have a significant impact on their\nstructure. This is similar to the results obtained by van den Bosch et al.\n(2008a) who found that the transformation mechanisms operating on satellites\naffect color more than structure (see also Kauffmann et al. 2004; Blanton et\nal. 2005a; Ball, Loveday \& Brunner 2008; Weinmann et al. 2008). Once again,\nthis similarity between red dwarfs that are satellites and those that are\ncentrals suggests that both populations may have experienced similar kinds of\nenvironmental effects.\n\nHowever, as shown in Table~\ref{tab1}, in the S1+S2+S3 sample less than 42\%\nof the red dwarf central galaxies have $r_{\rm p}\/R_{180}\le 3$. In other words,\nmore than 58\% of the red dwarf central galaxies are not close enough to a\nlarger halo so that they could have been preprocessed there, becoming red and\nthen subsequently being ejected. The origin of this population of central red\ndwarf galaxies, which is almost 10\% of the combined S1+S2+S3 dwarf sample,\nstill remains a mystery within the standard paradigm of galaxy formation.\nFurthermore, if we had used stellar mass instead of $r$-band magnitude to\ndefine our dwarf sample, the percentage of galaxies in this population would\nlikely increase. This population of isolated red dwarfs is not merely a\ndust-reddened star-forming population seen near edge-on, because their axis\nratios are consistent with a randomly oriented population. 
Croton \&\nFarrar (2008) probed the origin of the red dwarfs in voids using\nsemi-analytical models, but found that only $\sim 0.4\%$ of the dwarfs in the\ntotal population are red centrals in voids, whereas here we find that $\sim\n10\%$ of the dwarfs are red centrals without close neighbours with $r_{\rm p}\/R_{180}\le\n3$.\n\nAn outstanding problem for all galaxy formation models concerns the low mass\nslope of the galaxy mass function. CDM models in general predict too many\nlow-mass dark matter haloes compared to the number of low mass galaxies. The\nmass function of dark matter haloes, $n(M)$, scales with halo mass roughly as\n$n(M)\propto M^{-2}$ at the low-mass end. This is in strong contrast with the\nobserved luminosity function of galaxies, $\Phi (L)$, which has a rather\nshallow shape at the faint end, with $\Phi(L) \propto L^{-1}$. To reconcile\nthis difference one usually invokes some form of feedback within these low\nmass halos. If the feedback mechanism were to prevent gas from entering these\nhalos at late times, such galaxies would appear red. For example, the\npreheating mechanism of Mo et al. (2005), where gas is preheated by gas shocks\nwithin the forming large scale structures in which the low mass dark matter\nhalos themselves are forming, has this feature. Therefore, this population of\nisolated, red, central dwarf galaxies could represent the tail of the process\nthat prevents the vast majority of low mass dark matter halos from forming\ngalaxies, and their further study could shed new light on the mechanism\nresponsible.\n\n\n\acknowledgements We thank the referee Darren Croton for helpful comments that\nimproved the presentation of this paper. YW acknowledges the support of the China\nPostdoctoral Science Foundation. This work is supported by the {\it One\nHundred Talents} project, Shanghai Pujiang Program (No. 07pj14102), 973\nProgram (No. 
2007CB815402), the CAS Knowledge Innovation Program (Grant No.\nKJCX2-YW-T05) and grants from NSFC (Nos. 10533030, 10673023, 10821302). HJM\nwould like to acknowledge the support of NSF AST-0607535, NASA AISR-126270 and\nNSF IIS-0611948. NSK and D.H.M. would like to acknowledge the support of NASA\nLTSA NAG5-13102.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Figures}\n\n\\begin{figure}[H]\n \\includegraphics[width=1.\\linewidth]{overlay}\n \\caption{\n Panels \\textbf{(a) and (b)} are all-sky views\n in Mollweide projection in\n Galactic coordinates of longitude $\\ell$ and latitude $b$ with east to the left, with the ROI marked by the dotted box.\n Panels \\textbf{(c)--(e)} are in a cylindrical projection and zoom in on the ROI.\n %\n \\textbf{{\\bf The Fermi Bubbles, including the Cocoon sub-structure, and the Sgr dSph galaxy}.\n Panels (a) and (c)} display \n the $\\gamma$-ray spatial template for the Fermi Bubbles\\cite{Ackermann2014} in arbitrary units with linear colour scale, highlighting the cocoon.\n \\textbf{Panels (b) and (d)}\n show the angular density of RR Lyrae stars with line-of-sight distances $>20$ kpc from the {\\it Gaia} Data Release 2 (DR2), in arbitrary units with logarithmic scaling; the Sgr dSph, Sgr stream, and the Large and Small Magellanic Clouds are clearly visible. The proper motion of the {Sgr~dSph~} is upwards in this figure. The dashed ellipses in panels (a)-(d) mark the same coordinates in each panel, and highlight both the cocoon and the Sgr dSph.\n \\textbf{Panel (e)} shows contours of RR Lyrae surface density overlaid on the Fermi Bubbles template shown as the coloured background.\n }\n \\label{fig:SgrdSphOverlay}\n\\end{figure}\n\n\n\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth]{plotSgrSpectrumDataPlusFit}\n\\caption{ \n{\\bf Measured $\\gamma$-ray spectral brightness distributions of the {Sgr~dSph~} and the surrounding Fermi Bubbles}. 
\nThe black, dashed line \nshows a differential number flux obeying\n$dN_\\gamma\/dE_\\gamma \\propto E_\\gamma^{-2.1}$.\nThese data \nare as obtained by us in our {{\\em Fermi}-LAT} \\ data analysis as described in Methods. \nWe have converted luminosities to surface brightnesses \nadopting source solid angles of $\\Omega_{\\rm Sgr \\ dSph} = 9.6 \\times 10^{-3}$ sr, \nand $\\Omega_{\\rm FB} = 0.49$ sr, with the latter set by the $40^\\circ \\times 40^\\circ$ region of interest (ROI), not the intrinsic sizes of the Bubbles (which are larger than the ROI).\nError bars show $1\\sigma$ errors; for the Sgr dSph, the error bars incorporate both statistical and systematic errors added in quadrature.\nThe smooth blue curves show (solid) the best fit combined (magnetospheric + IC) and (dashed) the best fit magnetospheric spectra.\n}\n\\label{fig:luminosities}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n\\includegraphics[width=\\linewidth]{plotLgammaOvrMstar.pdf}\n\\caption{{\\bf {$\\gamma$-ray} \\ luminosity\nnormalised to stellar mass for various structures whose emission is plausibly dominated by MSPs.}\nThe `Sgr magneto.' datum shows our\nbest-fit magnetospheric luminosity per stellar mass\n(the spectrum shown as the dashed blue curve in \\autoref{fig:luminosities})\nwhile the `Sgr tot' datum is the total, directly-measured luminosity. Globular cluster (`GC') measurements are from ref.~\\cite{Song2021}, while the remaining data (collated by ref.~\\cite{Song2021}) are from ref.~\\cite{Macias2019} (nuclear bulge of the Milky Way, `NB'), ref.~\\cite{Ackermann2017} (M31), and ref.~\\cite{Bartels2018} (Milky Way disc). 
\nError bars show 1$\\sigma$ errors.\nThe horizontal, dashed, grey curves show\nthe predicted total \n{$\\gamma$-ray} \\ luminosity per unit stellar mass\nat the nominated efficiencies, $f_{\\rm \\gamma,tot} = \\{0.1, 0.9\\}$, \ngiven an MSP spin-down power per unit stellar mass of\n$2 \\times 10^{28}$ erg\/s\/$M_{\\odot}$ as\nwe infer from ref.~\\cite{Sudoh2020}.}\n\\label{fig:LgammaOvrMstar}\n\\end{figure}\n\n\n\\begin{table}\n \\centering\n \\small\n \\begin{tabular}{llll@{\\qquad\\qquad}rrrr}\n \\hline\\hline\n \\multicolumn{4}{c}{Template choices} & \\multicolumn{4}{c}{Results} \\\\\n Hadr. \/ Bremss. & IC & FB & Sgr dSph &\n $-\\log(\\mathcal{L}_{\\rm Base})$ & $-\\log(\\mathcal{L}_{{\\rm Base}+{\\rm Sgr}})$ & $\\mbox{TS}_{\\rm Source}$& Significance \\\\[0.5ex] \\hline \n \\multicolumn{8}{c}{Default model} \\\\[0.5ex]\n HD & 3D & S & Model I & 866680.6 &866633.0 & 95.2 & $8.1\\;\\sigma$ \\\\[0.5ex] \\hline\n \\multicolumn{8}{c}{Alternative background templates} \\\\[0.5ex]\n HD & 2D A & S & Model I & 866847.1 & 866810.9 & 72.3 & $6.9\\;\\sigma$ \\\\\n HD & 2D B & S & Model I & 867234.9 &867192.1 & 85.8 & $7.8\\;\\sigma$ \\\\\n HD & 2D C & S & Model I & 866909.4 & 866868.5 & 81.7 & $7.4\\;\\sigma$ \\\\\n Interpolated & 3D & S & Model I & 867595.4 & 867567.4 & 56.0 & $5.8\\;\\sigma$ \\\\\n GALPROP & 3D & S & Model I & 866690.5 & 866640.8 & 99.5 & $8.3\\;\\sigma$ \\\\[0.5ex] \\hline\n \\multicolumn{8}{c}{Flat FB template} \\\\[0.5ex]\n HD & 3D & U & Model I & 867271.7 & 867060.1 & 423.2 & $19.1\\;\\sigma$ \\\\\n HD & 2D A & U & Model I & 867284.2 &867122.9 & 322.5 & $16.5\\;\\sigma$ \\\\\n HD & 2D B & U & Model I & 867624.3 & 867464.0& 320.7 & $16.4\\;\\sigma$ \\\\\n HD & 2D C & U & Model I & 867322.7 &867158.2 &329.0 & $16.6\\;\\sigma$ \\\\\n Interpolated & 3D & U & Model I & 867287.4 & 867081.2& 412.4 & $18.9\\;\\sigma$ \\\\\n GALPROP & 3D & U & Model I & 868214.6 & 868040.9& 347.6 & $17.2\\;\\sigma$ \\\\[0.5ex]\\hline\n \\multicolumn{8}{c}{Alternative Sgr 
dSph templates} \\\\[0.5ex]\n HD & 3D & S & Model II & 866680.6 & 866626.3 & 108.5 & $8.7\\;\\sigma$ \\\\\n HD & 3D & S & Model III & 866680.6 & 866647.5 & 66.1 & $6.4\\;\\sigma$ \\\\\n HD & 3D & S & Model IV & 866680.6 & 866678.2 & 4.8 & $0.4\\;\\sigma$ \\\\\n HD & 3D & S & Model V & 866680.6 &866644.9 & 71.5 & $6.7\\;\\sigma$ \\\\\n HD & 3D & U & Model II & 867271.7 & 866970.7 & 602.1 & $23.2\\;\\sigma$ \\\\\n HD & 3D & U & Model III & 867271.7 & 866994.1 & 555.3 & $22.2\\;\\sigma$ \\\\\n HD & 3D & U & Model IV & 867271.7 & 867152.2 & 239.1 & $14.0\\;\\sigma$ \\\\\n HD & 3D & U & Model V & 867271.7 & 866993.3 & 556.9 & $22.2\\;\\sigma$ \\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{Template analysis results comparing {\\it baseline} to {\\it baseline + Sgr dSph} models. \n Columns (1) - (3) specify the {baseline} templates used for Galactic hadronic \/ bremsstrahlung emission, inverse Compton emission, and the Fermi Bubbles, respectively.\n %\n Column (4) specifies {source} templates\n describing the Sgr dSph (see Methods for details). Columns (5) and (6) give the log likelihood for the baseline model (without the Sgr dSph) and the baseline + Sgr dSph model, and columns (7) and (8) give the test statistic with which the baseline + Sgr dSph model is preferred, and the corresponding statistical significance of that preference. \n %\n The improvement in TS going from $\\{\\rm HD, 3D, U , Model \\ I\\}$ to $\\{\\rm HD, 3D, S , Model \\ I\\}$ is $\\Delta$TS = 854.2, equivalent to 28.0 $\\sigma$.\n Note that Sgr dSph model IV -- which generates a statistically insignificant improvement to the baseline for one particular combination in the last cluster -- is the sparsest stellar template, containing only 675 stars.\n }\n \\label{tab:loglikelihood}\n\\end{table}\n\n\n\\clearpage\n\n\\newrefsegment\n\n\\section*{Methods}\n\\label{sec:Methods}\n\nOur analysis pipeline consists of three steps: (1) data and template selection, (2) fitting, and (3) spectral modeling. 
\n\n\\subsection*{\nData and template selection\n}\n\\label{sec:fermidata}\n\nWe use eight years of LAT data, selecting \\texttt{Pass 8 UltraCleanVeto} class events in the energy range from 500 MeV to 177.4 GeV. We choose the limit at low energy to mitigate both the impact of {$\\gamma$-ray} \\ leakage from the Earth's limb and the increasing width of the point-spread function at lower energies.\nWe spatially bin the data to a resolution of $0.2^\\circ$, and divide it into 15 energy bins; the 13 lowest-energy of these are equally spaced in log energy, while the 2 highest-energy are twice that width in order to improve the signal to noise.\nWe select data obtained over the same observation period as that used in the construction of the Fourth Fermi Catalogue (4FGL)\\cite{Fermi-LAT:4FGL} (August 4, 2008 to August 2, 2016).\nThe region of interest (ROI) of our analysis is a square region defined by $-45^\\circ\\leq b \\leq -5^\\circ$, and $30^\\circ \\geq \\ell \\geq -10^\\circ$ (\\autoref{fig:SgrdSphOverlay}). \nThis sky region fully contains the Fermi cocoon substructure but avoids the Galactic plane ($|b|\\leq 5^\\circ$) where uncertainties are largest. Because the ROI is of modest size, we allow the Galactic diffuse emission (GDE)\ntemplates greater freedom to reproduce potential features in the data. \nWe carry out all data reduction and analysis using the standard \n\\textsc{Fermitools v1.0.1}\\footnote{\\url{https:\/\/github.com\/fermi-lat\/Fermitools-conda\/wiki}} \nsoftware package.\nWe model the performance of the LAT with the \\texttt{P8R3\\_ULTRACLEANVETO\\_V2} Instrument Response Functions (IRFs).\n\nWe fit the spatial distribution of the ROI data as the sum of a series of templates for different components of the emission. For all the templates we consider, we define a ``baseline'' model that includes only known point and diffuse emission sources, to which we compare a ``baseline + Sgr dSph'' model that includes those templates plus the Sgr dSph. 
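The energy binning described above is internally consistent: 13 narrow bins plus 2 double-width bins spanning 500 MeV to 177.4 GeV implies a common logarithmic width of $\log_{10}(177.4\,{\rm GeV}/500\,{\rm MeV})/17 \simeq 0.15$ dex per narrow bin. A minimal reconstruction of the bin edges (our own sketch, not the analysis code; the placement of the two wide bins at the high-energy end follows the text):

```python
import numpy as np

# Reconstruct the 15-bin energy grid: 13 equal log-width bins plus
# 2 double-width bins at the high-energy end, from 500 MeV to 177.4 GeV.
E_MIN_MEV, E_MAX_MEV = 500.0, 177.4e3
N_NARROW, N_WIDE = 13, 2

# dex per narrow bin (~0.15), from the total log-span of 17 narrow-bin widths
width = np.log10(E_MAX_MEV / E_MIN_MEV) / (N_NARROW + 2 * N_WIDE)
steps = [width] * N_NARROW + [2.0 * width] * N_WIDE
edges = E_MIN_MEV * 10.0 ** np.cumsum([0.0] + steps)  # 16 edges -> 15 bins
```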
Our baseline models, following the approach of Ref.~\cite{Abazajian:2020}, contain the following templates: (1) diffuse isotropic emission, (2) point sources, (3) emission from the Sun and Moon, (4) Loop I, (5) the Galactic Centre Excess, (6) Galactic cosmic ray-driven hadronic and bremsstrahlung emission, (7) inverse Compton emission, and (8) the \textit{Fermi} Bubbles; baseline + Sgr dSph models also include a Sgr dSph template.\n\n\nOur templates for the first five emission sources are straightforward, and we adopt a single template for each of them throughout our analysis. Since our data selection is identical to that used to construct the 4FGL, we adopt the standard isotropic background and point source models provided as part of the catalogue \cite{Fermi-LAT:4FGL}, \texttt{iso$_{-}$P8R3$_{-}$ULTRACLEANVETO$_{-}$V2$_{-}$v1.txt}, and \texttt{gll\_psc\_v20.fit}, respectively; the latter includes 177 $\gamma$-ray point sources within our ROI. We similarly adopt the standard Sun and Moon templates provided. For the foreground structure Loop I, we adopt the model of Ref.~\cite{Wolleben:2007}. Finally, given that the low-latitude boundary of our ROI overlaps with the spatial tail of the Galactic Centre Excess (GCE), we include the `Boxy Bulge' template of Ref.~\cite{Freudenreich1998}, which has been shown \cite{Macias2018,Bartels2018,Macias2019} to provide a good description of the observed GCE away from the nuclear Bulge region (which is outside our ROI). The inclusion of this template in our ROI model has only a small impact on our results.\n\nThe remaining templates require more care. The dominant source of $\gamma$-rays within the ROI is hadronic and bremsstrahlung emission resulting from the interaction of Milky Way cosmic ray (CR) protons and electrons with interstellar gas; the emission rate is proportional to the product of the gas density and the CR flux. We model this distribution using three alternative approaches. 
Our preferred approach follows that described in Ref.~\\cite{Macias2018}. We assume that the spatial distribution of $\\gamma$-ray emission traces the gas distribution from the hydrodynamical model of Ref.~\\cite{Pohl2008}, which gives a more realistic description of the inner Galaxy than alternatives. To normalise the emission, we divide the Galaxy into four rings spanning the radial ranges $0-3.5$ kpc, $3.5-8.0$ kpc, $8.0-10.0$ kpc, and $10.0-50.0$ kpc, within which we treat the emission per unit gas mass in each of our 15 energy bins as a constant to be fit. We refer to the template produced in this way as the ``HD'' model. Our first alternative is to use the same procedure of dividing the Galaxy into rings, but describe the gas distribution within those rings using a template constructed from interpolated maps of Galactic H~\\textsc{i} and H$_2$, following the approach described in Appendix B of Ref.~\\cite{Ackermann2012}; we refer to this as the ``Interpolated'' approach. Our third alternative, the ``GALPROP'' model, is the SA50 model described by Ref.~\\cite{Johannesson:2018bit}, which prescribes the full-sky hadronic CR emission distribution.\n\nWe similarly need a model for diffuse, Galactic IC emission -- the second largest source of background -- which is a product of the CR electron flux and the interstellar radiation field (ISRF). As with hadronic emission, we consider four alternative distributions. Our default choice is the SA50 model described by Ref.~\\cite{Johannesson:2018bit}, which includes 3D models for the ISRF~\\cite{Porter:2017vaa}. We therefore refer to this as the ``3D'' model. However, unlike in Ref.~\\cite{Johannesson:2018bit}, we use this model only to obtain the spatial distribution of the emission, not its normalisation or energy dependence. 
Instead, we obtain these in the same way as for our baseline hadronic emission model, i.e., we divide the Galaxy into four rings and leave the total amount of emission in each ring at each energy as a free parameter to be fit to the data; this approach reduces the sensitivity of our results to uncertainties in the electron injection spectrum and ISRF normalisation. Our three alternatives to this are models ``2D A'', ``2D B'', and ``2D C'', corresponding to models A, B, and C as described by Ref.~\\cite{Ackermann:2014usa}, which model IC emission over the full sky under a variety of assumptions about CR injection and propagation, but rely on a 2D model for the ISRF.\n\nThe final component of our baseline template is a model for the Fermi Bubbles themselves, which are one of the strongest sources of foreground emission in high latitude regions of the ROI. The FBs are themselves defined as highly statistically-significant and spatially-coherent residuals in the inner Galaxy that remain once other sources are modelled out in all-sky {$\\gamma$-ray} \\ analyses. The FBs are not reliably traced by emission at any other wavelength, so we do not have an {\\it a priori} model with which to guide the construction of a spatial template of these structures. However, one characteristic that renders the FBs distinct from other large angular scale diffuse $\\gamma$-ray structures is their hard {$\\gamma$-ray} \\ spectrum. Indeed, the state-of-the-art, {\\it structured} spatial template for them generated by the {\\it Fermi} Collaboration\\cite{Ackermann2014} -- the templates one would normally employ in large ROI, inner Galaxy {{\\em Fermi}-LAT} \\ analyses -- were constructed using a spectral component analysis. 
That study recovered a number of regions of apparent substructure within the solid angle of the FBs, most notably substructure overlapping the previously-discovered\\cite{Su2012,Selig2015} ``cocoon'' which, as we have discussed here, is largely coincident with the Sgr dSph. Of course, a potential issue with constructing a phenomenological, spectrally-defined model for the FBs is that, if there happens to be an extended, spectrally-similar source coincident with the FBs, it will tend to be incorporated into the template. For this reason Ref.~\\cite{Ackermann2014} suggest using a flat FB template when searching for new structures. Despite this proposal, our default analysis uses the more conservative choice of a structured FB template. However, we also run tests using an unstructured template for comparison, and to understand the systematic uncertainties associated with the choice of template. We refer to these two cases as the ``U'' (Unstructured) and ``S'' (Structured) FB templates, respectively.\n\nFinally, our baseline + Sgr dSph models require a template for the Sgr dSph. Our templates trace the distribution of bright stars in the dwarf, which we construct from five alternative stellar catalogues, all based on different selections from \\textit{Gaia} Data Release 2; we refer to the resulting templates as models I - V, and show them in E.D.~\\autoref{fig:Stellartemplates}. Full details on how we construct each of these templates are provided in S.I.~sec.~2. Model I, our default choice, comes from the catalogue of $2.26\\times 10^5$ Sgr dSph candidate member stars from Ref.~\\cite{Vasiliev2020}; the majority of the catalogue consists of red clump stars. Model II uses the catalogue of RR Lyrae stars in the Sagittarius Stream from Ref.~\\cite{Ibataetal:2020}, which we have down-selected to a sample of 2369 stars whose kinematics are consistent with being members of the Sgr dSph itself. 
Model III uses the catalogue of $1.31\\times 10^4$ RR Lyrae stars belonging to the Sgr dSph provided by Ref.~\\cite{Iorio2019}. Finally, models IV and V come from the nGC3 and Strip catalogues of RR Lyrae stars from Ref.~\\cite{Ramosetal:2020}; the former contains 675 stars with higher purity but lower completeness, while the latter contains 4812 stars of higher completeness but lower purity.\n\n\\subsection*{Fitting procedure}\n\nOur fitting method follows that introduced in Refs.~\\cite{Macias2018, Macias2019}, and treats each of the 15 energy bins as independent, thereby removing the need to assume any particular spectral shape for each component and allowing the spectra to be determined solely by the data. Our data to be fit consist of the observed $\\gamma$-ray photon counts in each spatial pixel $i$ and energy bin $n$, which we denote $\\Phi_{n,i,\\rm obs}$, where\n$n$ goes from 1 to 15, and the index $i$ runs over the positions $(\\ell_i,b_i)$ of all spatial pixels within the ROI. For a given choice of template, we write the corresponding model-predicted $\\gamma$-ray counts as $\\Phi_{n,i,\\rm mod} = \\sum_c \\mathcal{N}_{n,c} R_{n,i} \\Phi_{c,i}$, where $R_{n,i}$ is the instrument response for each pixel and energy bin (computed assuming an $E^{-2}$ spectrum within the bin), and $\\Phi_{c,i}$ is the value of template component $c$ evaluated at pixel $i$; for baseline models, we have a total of 8 components, while for baseline + Sgr dSph models we have 9. Note that $\\Phi_{c,i}$ is a function of $i$ but not of $n$, i.e., we assume that the spatial distribution of each template component is the same at all energies, except for the IC templates, for which an energy-dependent morphology is predicted by our GALPROP simulations. 
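The forward model just defined is, per energy bin, a linear combination of spatial templates scaled by the instrument response. A schematic sketch (our own illustration of the stated formula $\Phi_{n,i,\rm mod} = \sum_c \mathcal{N}_{n,c} R_{n,i} \Phi_{c,i}$; array names are hypothetical):

```python
import numpy as np

def model_counts(norms, response, templates):
    """Predicted counts in one energy bin n:
    Phi_mod[i] = sum_c norms[c] * response[i] * templates[c, i].

    norms:     (C,) photon normalisations N_{n,c} (the fit parameters)
    response:  (I,) instrument response R_{n,i} per pixel
    templates: (C, I) spatial templates Phi_{c,i}, each row normalised
               to unit sum over pixels
    """
    return response * (norms @ templates)
```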
Without loss of generality we further normalise each template component as $\sum_i \Phi_{c,i} = 1$, in which case $\mathcal{N}_{n,c}$ is simply the total number of photons contributed by component $c$ in energy bin $n$, integrated over the full ROI; the values of $\mathcal{N}_{n,c}$ are the parameters to be fit. We find the best fit by maximising the usual Poisson log-likelihood\n\begin{equation}\n \ln\mathcal{L}_n = \sum_{i} \left[ \Phi_{n,i,\rm obs} \ln \Phi_{n,i,\rm mod} - \Phi_{n,i,\rm mod} - \ln\left(\Phi_{n,i,\rm obs}!\right) \right],\n \label{eq:log-likelihood}\n\end{equation}\nusing the \texttt{pylikelihood} routine, the standard maximum-likelihood method in \texttt{FermiTools}. Note that, since each energy bin $n$ is independent, we carry out the likelihood maximisation bin-by-bin.\n\n\nWe perform all fits in pairs, one for a baseline model containing only known emission sources, and one for a baseline + Sgr dSph model containing the same known sources plus a component tracing the Sgr dSph. The set of paired fits we perform in this manner is shown in \autoref{tab:loglikelihood}. We compare the quality of these baseline and baseline + Sgr dSph fits by defining the test statistic $\mathrm{TS}_n = -2\ln(\mathcal{L}_{n,\rm base}\/\mathcal{L}_{n,\rm base+Sgr})$; the total test statistic for all energy bins is simply $\mathrm{TS} = \sum_n \mathrm{TS}_n$. We can assign a $p$-value to a particular value of the TS by noting that baseline + Sgr dSph models have 15 additional degrees of freedom compared to baseline models: the value of $\mathcal{N}_{n,c}$ for the component $c$ corresponding to the Sgr dSph, evaluated at each of the \n15 energy bins. 
In this case, the mixture distribution formula gives\\cite{Macias2018}\n\\begin{equation}\n p(\\mathrm{TS}) = 2^{-N} \\left[\\delta(\\mathrm{TS}) + \\sum_{n=1}^N \\binom{N}{n} \\chi^2_n(\\mathrm{TS})\\right],\n\\end{equation}\nwhere $N = 15$ is the difference in number of degrees of freedom, \n$\\binom{N}{n}$ is the binomial coefficient, $\\delta$ is the Dirac delta function, and $\\chi^2_n$ is the usual $\\chi^2$ distribution with $n$ degrees of freedom. The corresponding statistical significance (in $\\sigma$ units) is\\cite{Macias2018}:\n\\begin{equation}\n\\label{eq:numberofsigmas}\n\\mbox{Number of $\\sigma$}\\equiv \\sqrt{\\rm InverseCDF\\left(\\chi_1^2,{\\rm CDF}\\left[p(\\mbox{TS}),\\hat{{\\rm TS}}\\right]\\right)},\n\\end{equation}\nwhere CDF (InverseCDF) is the (inverse) cumulative distribution function; the first argument of each function specifies the distribution, the second is the value at which it is evaluated, and the total TS is denoted by $\\hat{\\rm TS}$. For 15 extra degrees of freedom, a 5$\\sigma$ detection corresponds to $\\mbox{TS}=46.1$. (Additional details of these formulae are given in S.I. Sec.~2 of Ref.~\\cite{Macias2018}.) We report values of $\\mathcal{L}_{\\rm base}$, $\\mathcal{L}_{\\rm base+Sgr}$, $\\mathrm{TS}$, and the significance level for all the templates we try in \\autoref{tab:loglikelihood}.\n\nA final step in our fitting chain is to assess the uncertainties. For our default choice of baseline + Sgr dSph model (first row in \\autoref{tab:loglikelihood}), our maximum likelihood analysis returns the central value $\\mathcal{N}^{\\rm def}_n$ on the total $\\gamma$-ray flux in the $n$th energy bin attributed to the Sgr dSph, and also yields an uncertainty $\\sigma^{\\rm def}_{\\mathcal{N},n}$ on this quantity. This represents the statistical error arising from measurement uncertainties. 
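Before accounting for systematics, the TS-to-significance conversion above can be verified numerically with a short standard-library sketch; here \texttt{chi2\_sf} is the $\chi^2_n$ survival function built from the regularised incomplete gamma function (standard series and continued-fraction forms), and the final bisection inverts the $\chi^2_1$ CDF:

```python
import math

def _gser(a, x, itmax=300, eps=1e-12):
    # Series expansion for the regularised lower incomplete gamma P(a, x).
    ap, term = a, 1.0 / a
    total = term
    for _ in range(itmax):
        ap += 1.0
        term *= x / ap
        total += term
        if abs(term) < abs(total) * eps:
            break
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def _gcf(a, x, itmax=300, eps=1e-12):
    # Lentz continued fraction for the regularised upper incomplete gamma Q(a, x).
    tiny = 1e-300
    b = x + 1.0 - a
    c, d = 1.0 / tiny, 1.0 / b
    h = d
    for i in range(1, itmax + 1):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        d = tiny if abs(d) < tiny else d
        c = b + an / c
        c = tiny if abs(c) < tiny else c
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h * math.exp(-x + a * math.log(x) - math.lgamma(a))

def chi2_sf(n, x):
    # Survival function of a chi^2 distribution with n degrees of freedom.
    a = 0.5 * n
    return 1.0 - _gser(a, 0.5 * x) if 0.5 * x < a + 1.0 else _gcf(a, 0.5 * x)

def mixture_pvalue(ts, N=15):
    # Tail probability of the chi^2 mixture; the delta function at TS = 0
    # contributes nothing for ts > 0.
    return sum(math.comb(N, n) * chi2_sf(n, ts) for n in range(1, N + 1)) / 2.0**N

def significance(ts, N=15):
    # Number of sigma = sqrt(2) * erfcinv(p), found by bisection on erfc.
    p = mixture_pvalue(ts, N)
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\mathrm{TS}=46.1$ and $N=15$ this returns a $p$-value of $\approx 5.6\times 10^{-7}$, i.e.\ $\approx 5.0\sigma$, matching the threshold quoted above.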
However, there are also systematic uncertainties stemming from our imperfect knowledge of the templates characterising the other emission sources. To estimate these, we examine the five alternative models listed in \\autoref{tab:loglikelihood} as ``Alternative background templates'', where we use different templates for the hadronic plus bremsstrahlung and inverse Compton backgrounds. Each of these models $m$ also returns a central value $\\mathcal{N}_n^m$ and an uncertainty $\\sigma_{\\mathcal{N},n}^m$ on the Sgr dSph flux. We use the uncertainty-weighted dispersion of these models as an estimate of the systematic uncertainty (e.g., ref.~\\cite{Ackermann2018}):\n\\begin{equation}\n \\delta\\mathcal{N}_n = \\sqrt{\\frac{1}{\\sum_m \\left(\\sigma_{\\mathcal{N},n}^m\\right)^{-2}} \\sum_m \\left(\\sigma_{\\mathcal{N},n}^m\\right)^{-2} \\left(\\mathcal{N}_n^{\\rm def} - \\mathcal{N}^m_n\\right)^2},\n\\end{equation}\nwhere the sums run over the five alternative models. We take the total uncertainty on the Sgr dSph flux in each energy bin to be a quadrature sum of the systematic and statistical uncertainties, i.e., $(\\sigma^{\\rm def,tot}_{\\mathcal{N},n})^2 = (\\sigma^{\\rm def}_{\\mathcal{N},n})^2 + \\delta\\mathcal{N}_n^2$. We plot the central values and uncertainties of the fluxes for the default model derived in this manner in \\autoref{fig:luminosities}.\n\nWe have carried out several validation tests of this pipeline, which we describe in the Supplementary Information (SI).\n\n\n\n\n\\subsection*{Spectral modelling}\n\n\nWe model the observed Sgr dSph $\\gamma$-ray spectrum as a combination of prompt magnetospheric MSP emission and IC emission from {$e^\\pm$}~escaping MSP magnetospheres. We construct this model as follows. The prompt component is due to curvature radiation from {$e^\\pm$}~within MSP magnetospheres. 
\nThe {$e^\\pm$}~energy distribution can be approximated as an exponentially-truncated power law \\cite{Abdo2013,Song2021}\n\\begin{equation}\n \\frac{dN_{\\mathrm{MSP},e^\\pm}}{dE_{e^\\pm}} \\propto E_{e^\\pm}^{\\gamma_\\mathrm{MSP}} \\exp\\left(-\\frac{E_{e^\\pm}}{E_{\\mathrm{cut},e^\\pm}}\\right),\n\\label{eq:promptSpec}\n\\end{equation}\nand curvature radiation from these particles has a rate of photon emission per unit energy per unit time\n\\begin{equation}\n \\frac{d\\dot{N}_{\\rm \\gamma,prompt}}{dE_\\gamma} = \\mathcal{N}\\left(L_{\\gamma,\\mathrm{prompt}}\\right) E_\\gamma^{\\alpha} \\exp\\left(-\\frac{E_\\gamma}{E_{\\rm cut, prompt}} \\right),\n\\label{eq:cutoffpwrlaw}\n\\end{equation}\nwhere $E_\\gamma$ is the photon energy, $\\mathcal{N}(L_{\\gamma,\\mathrm{prompt}})$ is a normalisation factor chosen so that the prompt component has total luminosity $L_{\\gamma,\\mathrm{prompt}}$, the index $\\alpha$ is related to that of the {$e^\\pm$}~distribution by $\\alpha = (\\gamma_{\\rm MSP} - 1)\/3$, and the photon cutoff energy is related to the {$e^\\pm$}~cutoff energy by \\cite{Baring2011}\n\\begin{equation}\nE_{\\rm cut,prompt} = \\frac{3 \\hbar c}{2 \\rho_c} \\left(\\frac{E_{\\rm cut,e^\\pm}}{m_e}\\right)^3 \\simeq 2.0 \\ {\\rm GeV} \\ \\left(\\frac{\\rho_c}{\\rm 30 \\ km}\\right)^{-1} \\left(\\frac{E_{\\rm cut,e^\\pm}}{\\rm 3 \\ TeV}\\right)^3\n\\label{eq:EcutPrompt}\n\\end{equation}\nwhere $m_e$ is the electron mass, $\\rho_c$ is the radius of curvature of the magnetic field lines, and the other symbols have the usual meanings. Given the rather small magnetospheres, we expect $\\rho_c$ to be a small multiple of the $\\sim$ 10 km neutron star characteristic radius; henceforth we set $\\rho_c =$ 30 km. Empirically, $L_{\\gamma,\\rm prompt}$ is $\\sim 10\\%$ of the total MSP spin-down power \\cite{Abdo2013}.\n\n\n\nA larger proportion of the spin-down power goes into a wind of {$e^\\pm$}~escaping the magnetosphere. 
In the ultra-low density environment of the Sgr dSph, ionization and bremsstrahlung losses for this population, which occur at a rate proportional to the gas density, are negligible. Synchrotron losses, which scale as the magnetic energy density, will also be negligible; as noted in the main text, observed magnetic fields in dwarf galaxies are very weak \\cite{Regis2015}, and we can also set a firm upper limit on the Sgr dSph magnetic field strength simply by noting that the magnetic pressure cannot exceed the gravitational pressure provided by the stars since, if it did, that magnetic field, and the gas to which it is attached, would blow out of the galaxy in a dynamical time. The gravitational pressure is $P \\approx (\\pi\/2) G \\Sigma^2$, where $\\Sigma = M\/(\\pi R^2)$ is the surface density, and using our fiducial numbers $M = 10^8$ M$_{\\odot}$ and $R = 2.6$ kpc gives an upper limit on the magnetic energy density of 0.06 eV \/ cm$^3$; non-zero gas or cosmic ray pressure would lower this estimate even further. This is a factor of four smaller than the energy density of the CMB, implying that synchrotron losses are at most a 20\\% effect, and can therefore be neglected.\n\nThis analysis implies that the only significant loss mechanism for these {$e^\\pm$}~is IC emission, resulting in a steady-state {$e^\\pm$}~energy distribution\n\\begin{equation}\n\\frac{dN_{e^\\pm}}{dE_{e^\\pm}} \\propto E_{e^\\pm}^\\gamma \\exp\\left(-\\frac{E_{e^\\pm}}{E_{\\mathrm{cut},e^\\pm}}\\right),\n\\end{equation}\nwhere $\\gamma = \\gamma_{\\rm MSP}-1$. 
We compute the IC photon distribution produced by these particles following ref.~\\cite{Khangulyan2014}, assuming that the ISRF of the {Sgr~dSph~} is the sum of the CMB \nand two subdominant contributions, one consisting of light escaping from the Milky Way and the other a dilute stellar blackbody radiation field due to the stars of the dwarf.\nWe estimate the Milky Way contribution to the photon field at the position of the dwarf using GALPROP \\cite{Porter:2017vaa},\nwhich predicts a total energy density of $0.095$ eV\/cm$^3$ (compared to 0.26 eV\/cm$^3$ for the CMB), composed of five dilute black bodies\nwith colour temperatures and dilution factors\n$\\{ T_{\\rm rad},\\kappa \\}$ as follows: \n$\\{40 \\ {\\rm K }, 1.4 \\times 10^{-6} \\},\n\\{430 \\ {\\rm K }, 3.0 \\times 10^{-11} \\},\n\\{3400 \\ {\\rm K }, 4.3 \\times 10^{-14} \\},\n\\{6400 \\ {\\rm K }, 4.0 \\times 10^{-15} \\},$\nand\n$ \\{26000 \\ {\\rm K }, 8.0 \\times 10^{-18} \\}\n$.\nWe characterise the intrinsic light field of the dwarf as having a\ncolour temperature of 3500 K and a dilution factor of $7.0\\times 10^{-15}$ (giving energy density $0.005$ eV cm$^{-3}$; these choices are those expected for a spherical region of radius 2.6 kpc and stellar luminosity $2\\times 10^8$ $L_\\odot$, the approximate parameters of the Sgr dSph).\nThis yields an IC spectrum\n\\begin{equation}\n \\frac{d\\dot{N}_{\\gamma,\\mathrm{IC}}}{dE_\\gamma} = \\mathcal{N}\\left(L_{\\gamma,\\rm IC}\\right) F\\left(\\gamma,E_{\\mathrm{cut},e^\\pm}\\right),\n\\end{equation}\nwhere $\\mathcal{N}\\left(L_{\\gamma,\\rm IC}\\right)$ is again a normalisation chosen to ensure that the total IC luminosity is $L_{\\gamma,\\rm IC}$, and $F\\left(\\gamma,E_{\\mathrm{cut},e^\\pm}\\right)$ is the functional form given by equation 14 of ref.~\\cite{Khangulyan2014}, which depends on the {$e^\\pm$}~spectral index $\\gamma$ and cutoff energy $E_{\\mathrm{cut},e^\\pm}$.\n\n\n\nCombining the prompt and IC components, we may therefore write the complete 
emission spectrum as\n\\begin{equation}\n \\frac{d\\dot{N}_\\gamma}{dE_\\gamma} = \\mathcal{N}\\left(L_{\\gamma,\\mathrm{prompt}}\\right) E_\\gamma^{\\alpha} \\exp\\left(-\\frac{E_\\gamma}{E_{\\rm cut, prompt}} \\right) + \\mathcal{N}\\left(L_{\\gamma,\\rm IC}\\right) F\\left(\\gamma,E_{\\mathrm{cut},e^\\pm}\\right).\n\\end{equation}\nThis model is characterised by four free parameters: the total prompt plus IC luminosity $L_{\\gamma,\\rm tot} = L_{\\gamma,\\rm prompt} + L_{\\gamma,\\rm IC}$, the ratio of the prompt and IC luminosities $f = L_{\\gamma,\\rm prompt}\/L_{\\gamma,\\rm IC}$, the spectral index $\\alpha$ of the prompt component (which in turn fixes the other two spectral indices $\\gamma_{\\rm MSP}$ and $\\gamma$), and the cutoff energy for the prompt component $E_{\\rm cut, prompt}$ (which then fixes the {$e^\\pm$}~cutoff energy $E_{\\mathrm{cut},e^\\pm}$).\nNote that we make the simplest assumption that $\\alpha$ and $E_{\\rm cut, prompt}$ are uniform across the MSP population.\nIn reality, there may be a distribution of these properties, but the parametric form of \\autoref{eq:promptSpec}\nprovides a good description, in general, \nof both individual MSP spectra and the\naggregate spectra of GC MSP populations \\cite{Song2021}.\n\n\nWe fit the observed {Sgr~dSph~} spectrum to this model via a standard $\\chi^2$ minimisation, using the combined statistical plus systematic uncertainty.\nWe obtain an excellent fit: the minimum $\\chi^2$ is 7.7 for 15 (data points) $-$ 4 (fit parameters) $=$ 11 (degrees of freedom, dof), i.e., a reduced $\\chi^2$ of 0.70.\nWe report the best-fitting parameters in E.D.~\\autoref{tab:table1}, and plot the resulting best-fit spectra over the data in \\autoref{fig:luminosities}; we show the best-fit estimate (with $\\pm 1\\sigma$ confidence region) for the magnetospheric luminosity per stellar mass of the {Sgr~dSph~} MSPs in \\autoref{fig:LgammaOvrMstar}. 
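The four-parameter model can be sketched as follows. This is a schematic implementation: the luminosity normalisations are computed numerically on a log-spaced energy grid, \texttt{ic\_shape} is a caller-supplied stand-in for the IC spectral form $F(\gamma, E_{\mathrm{cut},e^\pm})$ of ref.~\cite{Khangulyan2014} (not reproduced here), and units are arbitrary:

```python
import math

def log_grid(e_min, e_max, npts=2000):
    # Logarithmically spaced energy grid.
    step = (math.log(e_max) - math.log(e_min)) / (npts - 1)
    return [math.exp(math.log(e_min) + step * i) for i in range(npts)]

def trapz(xs, ys):
    # Simple trapezoidal quadrature.
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

def prompt_shape(E, alpha, E_cut):
    # Exponentially cut-off power law for the prompt component.
    return E**alpha * math.exp(-E / E_cut)

def combined_spectrum(E_grid, L_tot, f, alpha, E_cut_prompt, ic_shape):
    # dN/dE of the two-component model; each normalisation is chosen so the
    # component's integrated energy flux equals its requested luminosity.
    L_ic = L_tot / (1.0 + f)          # since f = L_prompt / L_ic
    L_prompt = L_tot - L_ic
    sp = [prompt_shape(E, alpha, E_cut_prompt) for E in E_grid]
    si = [ic_shape(E) for E in E_grid]
    n_p = L_prompt / trapz(E_grid, [E * y for E, y in zip(E_grid, sp)])
    n_i = L_ic / trapz(E_grid, [E * y for E, y in zip(E_grid, si)])
    return [n_p * yp + n_i * yi for yp, yi in zip(sp, si)]
```

With any placeholder `ic_shape`, integrating $E \, dN/dE$ over the grid recovers $L_{\gamma,\rm tot}$ by construction; the parameter values used in the test below are the best-fit values from E.D.~\autoref{tab:table1}.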
\n\nWe also carry out an additional consistency check, by comparing our best-fit parameters describing the prompt emission -- $\\alpha$ and $E_{\\rm cut,prompt}$ -- to direct measurements of the prompt component from nearby, resolved MSPs \\cite{Abdo2013,Song2021}, and to measurements of GCs, whose emission is likely dominated by unresolved MSPs \\cite{Song2021}. We carry out this comparison in E.D.~\\autoref{fig:FitContours}. In this figure, we show joint confidence intervals on $\\alpha$ and $E_{\\rm cut,prompt}$ from our fit. For comparison, we construct confidence intervals for $\\alpha$ and $E_{\\rm cut,prompt}$ from observations using the sample of ref.~\\cite{Song2021}, who fit the prompt emission from 40 GCs and 110 individually-resolved MSPs. We draw 100,000 Monte Carlo samples from these fits, treating the stated uncertainties as Gaussian, and construct contours in the $(E_{\\rm cut,prompt}, \\alpha)$ plane containing 68\\%, 95\\%, and 99\\% of the sample points. As the plot shows, the confidence region from our fit is fully consistent with the confidence regions from the observations, indicating that our best-fit parameters are typical of those observed for MSPs and GCs.\n\n\n\\section*{Data availability}\n\nAll data analysed for this study are publicly available.\nIn particular, {{\\em Fermi}-LAT} \\ data are available from \\url{https:\/\/fermi.gsfc.nasa.gov\/ssc\/data\/} and Gaia data are available from \\url{https:\/\/gea.esac.esa.int\/archive\/}.\nThe statistical pipeline, astrophysical templates, and gamma-ray observations necessary to reproduce our main results are publicly available in the following Zenodo repository: \\url{https:\/\/doi.org\/10.5281\/zenodo.6210967}.\n\n\n\\section*{Code availability}\n\n{{\\em Fermi}-LAT} \\ data used in our study were reduced and analysed using the standard \n\\textsc{Fermitools v1.0.1}\nsoftware package available from \\url{https:\/\/github.com\/fermi-lat\/Fermitools-conda\/wiki}.\nThe performance of the 
{{\\em Fermi}-LAT} \\ was modelled with the \\texttt{P8R3\\_ULTRACLEANVETO\\_V2} Instrument Response Functions (IRFs).\nSpectral analysis and fitting was performed using custom \\textsc{MATHEMATICA}\ncode created by the authors which is available upon reasonable request.\n\n\\clearpage\n\n\n\\section*{Acknowledgements}\n\nRMC acknowledges \nsupport from the Australian Government through the Australian Research Council, award\nDP190101258 (shared with MRK)\nand hospitality from the Virginia Institute of Technology, the Max-Planck Institut f\\\"ur Kernphysik, and the GRAPPA Institute at the University of Amsterdam supported by the Kavli IPMU at the University of Tokyo. \nO.M. is supported by the GRAPPA Prize Fellowship and JSPS KAKENHI Grant Numbers JP17H04836, JP18H04340, JP18H04578, and JP20K14463.\nThis work was supported by World Premier International Research Centre Initiative (WPI Initiative), MEXT, Japan. \nADM acknowledges support from the Australian Government through a Future Fellowship from the Australian Research Council, award FT160100206.\nM.R.K. acknowledges support from the Australian Government through the Australian Research Council, award\nDP190101258 (shared with RMC) and\nFT180100375.\nThe work of S.H.\\ is supported by the U.S.\\ Department of Energy Office of Science under award number DE-SC0020262 and NSF Grant No.\\ AST-1908960 and No.\\ PHY-1914409. The work of DS is supported by the U.S.\\ Department of Energy Office of Science under award number DE-SC0020262. \nT.V. and A.R.D. acknowledge the support of the Australian Research Council's Centre of Excellence for Dark Matter Particle Physics (CDM) CE200100008.\nAJR acknowledges support from the Australian Government through the Australian Research Council, award FT170100243.\nRMC thanks Elly Berkhuijsen, Rainer Beck, Ron Ekers, Matt Roth, and Thomas Siegert for useful communications.\n\n\n\\section*{Author contributions statement}\n\nR.M.C. 
initiated the project and led the spectral analysis and theoretical interpretation. \nO.M. constructed the astrophysical templates, designed the analysis pipeline, and performed the data analysis of $\\gamma$-ray observations.\nD.M., M.R.K., C.G., R.J.T., F.A., J.A.H., S.A., S.H., A.G., M.R., L.F., and A.R. provided theoretical insights and interpretation, and advice about statistical analysis.\nT.V. and A.R.D. provided insights on the expected distribution of dark matter.\nR-Z.Y. performed an initial {$\\gamma$-ray} \\ data analysis.\nM.D.F. helped with radio data.\nThe main text was written by RMC, MRK, and O.M. and the Methods section was written by O.M., R.M.C., and MRK.\nAll authors were involved in the interpretation of the results and all reviewed the manuscript.\n\n\n\n\\section*{Additional information}\n\n\\subsection*{Competing interests}\n\nThe authors declare no competing interests.\n\n\n\n\\clearpage\n\n\n\\section*{Extended Data}\n\\setcounter{figure}{0}\n\\setcounter{table}{0}\n\n\\renewcommand{\\figurename}{Extended Data Figure}\n\\renewcommand{\\tablename}{Extended Data Table}\n\n\n\\begin{figure*}[ht!]\n\\centering\n\\begin{tabular}{lll}\n\\includegraphics[width=0.33\\textwidth]{Sgr_Stream_VasilevBelokurov.pdf} & \\includegraphics[width=0.33\\textwidth]{Sgr_Stream_Strasbourg.pdf} & \\includegraphics[width=0.33\\textwidth]{Sgr_Stream_Iorio.pdf}\\\\\n\\includegraphics[width=0.33\\textwidth]{Sgr_Stream_BarcelonaI.pdf} & \\includegraphics[width=0.33\\textwidth]{Sgr_Stream_BarcelonaII.pdf} & \\\\\n\\end{tabular}\n\\caption{The stellar density templates for the Sgr dSph used in this study. Each map has been normalized, so the units are arbitrary; the color scale is logarithmic. Morphological differences among the templates are due to different stellar candidates (red clump or RR Lyrae), search algorithms, and search target (the dwarf remnant or the stream). 
\nData sources are as follows: \nModel I, ref.~\\cite{Vasiliev2020};\nModel II, ref.~\\cite{Ibataetal:2020};\nModel III, ref.~\\cite{Iorio2019};\nModel IV and Model V, ref.~\\cite{Ramosetal:2020}.\nDetailed descriptions of these templates are given in the S.I.~sec.~2.}\n\\label{fig:Stellartemplates}\n\\end{figure*}\n\n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[scale=0.3]{grid_MC_loglikes.pdf} \\\\\n\\caption{Goodness of fit computation for the best-fitting baseline + Sgr dSph model using our preferred set of templates (first entry in \\autoref{tab:loglikelihood}). In each of the 15 panels, one for each of the energy bins in our analysis pipeline, the blue histograms show the distribution of $-\\ln\\mathcal{L}$ values produced in 100 Monte Carlo trials where we use our pipeline to fit a mock data set produced by drawing photons from the same set of templates used in the fit; orange dashed vertical lines show the 68\\% confidence range of this distribution, and black dashed vertical lines show the mean. Under the null hypothesis that our best-fitting model for the real \\textit{Fermi} observations is a true representation of the data, and that disagreements between the model and the data are solely the result of photon counting statistics, the log-likelihood values for our best-fitting model should be drawn from the distributions shown by the blue histograms. For comparison, the red vertical line shows the actual measured log likelihoods for our best fit. 
The fact that these measured values are well within the range spanned by the Monte Carlo trials\nindicates that we cannot rule out the null hypothesis, and hence that our model is as good a fit to the data as could be expected given the finite number of photons that \\textit{Fermi} has observed.\n}\\label{fig:fitvalidation}\n\\end{figure}\n\n\n\\begin{figure*}[ht!]\n\\centering\n\\begin{tabular}{lll}\n\\includegraphics[width=0.32\\textwidth]{row_1a.pdf} & \\includegraphics[width=0.294\\textwidth]{row_1b.pdf} & \\includegraphics[width=0.294\\textwidth]{row_1c.pdf}\\\\\n\\includegraphics[width=0.32\\textwidth]{row_2a.pdf} & \\includegraphics[width=0.294\\textwidth]{row_2b.pdf} & \\includegraphics[width=0.294\\textwidth]{row_2c.pdf}\\\\\n\\includegraphics[width=0.32\\textwidth]{row_3a.pdf} & \\includegraphics[width=0.294\\textwidth]{row_3b.pdf} & \\includegraphics[width=0.294\\textwidth]{row_3c.pdf}\n\\end{tabular}\n\\caption{Measured photon counts (left), best-fit baseline + {Sgr~dSph~} model (middle), and the fractional residuals $(\\mathrm{Data}-\\mathrm{Model})\/\\mathrm{Model}$ (right). The images were constructed by summing the corresponding energy bins over the energy ranges displayed on top of each panel: [0.5, 1.0] GeV, [1.0, 4.0] GeV, [4.0, 15.8] GeV, from top to bottom. The maps have been smoothed with Gaussian filters of radii $1.0^\\circ$, $0.8^\\circ$, and $0.5^\\circ$ for each energy range displayed, respectively\n(where these angular scales are determined by the {{\\em Fermi}-LAT} \\ point spread function at the low-edge of the energy interval for the former two, while the latter is determined by the angular resolution of the gas maps).\nThe spectrum of baseline + {Sgr~dSph~} model components shown here can be seen in~\\autoref{fig:totalspectra}. 
The 4FGL~\\cite{Fermi-LAT:4FGL} {$\\gamma$-ray} \\ point sources included in the baseline model are represented by the red circles.\n}\n\\label{fig:Residuals}\n\\end{figure*}\n\n\\begin{figure}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[scale=0.7]{Injection_mismodeling_GDE.pdf} &\\includegraphics[scale=0.7]{Injection_MC_StrucFB_Fit_FlatFB.pdf}\\\\\n\\includegraphics[scale=0.7]{RecoveredSpectra_GDE.pdf} & \\includegraphics[scale=0.7]{RecoveredSpectra_FBs.pdf} \n\\end{tabular}\\caption{\nResults from our template mismatch tests. Each of the coloured lines shows the results of a test where we generate synthetic data with one set of templates, and attempt to recover the Sgr dSph in those data using a different set. In the upper two panels, the horizontal axis shows the true, energy-integrated Sgr dSph photon flux in the synthetic data, while the vertical axis shows the value (with $1 \\sigma$ statistical error bars) retrieved by our pipeline; the black dashed lines indicate perfect recovery of the input, and the vertical bands show the photon flux we measure for the Sgr dSph in the real \\textit{Fermi} data. In the bottom two panels we plot the recovered energy flux in each energy bin (with $1 \\sigma$ statistical error bars), for the case where the injected photon flux most closely matches the real Sgr dSph flux; the black dashed line again shows perfect recovery of the injected signal. The left panels show experiments where we mismatch the Galactic hadronic and IC templates, while the right panels show experiments where we mismatch the FB templates; see Methods for details.\n}\\label{fig:injectionrecovery}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\begin{tabular}{lll}\n\\includegraphics[scale=0.65]{Results_rotation_around_center.pdf} & \\includegraphics[scale=0.65]{Results_rotation_around_GC.pdf} & \\includegraphics[scale=0.123]{translation_StructFBs.pdf}\n\\end{tabular}\\caption{\nResults of our rotation and translation tests. 
\\textit{Left:} change in TS when repeating the analysis using the default baseline + Sgr dSph model, but with the Sgr dSph rotated about its centre by the indicated angle (blue points); TS values $>0$ indicate an improved fit (dashed grey line), with $\\mbox{TS} = 46.1$ corresponding to a $5\\sigma$-significant improvement (red dashed line). \n\\textit{Centre:} same as the left panel, but for tests with the Sgr dSph template rotated about the Milky Way centre, rather than its own centre. \\textit{Right:} tests for translation of the Sgr dSph template. The true position of the Sgr dSph centre is the center of the plot, and the colour in each pixel indicates the change in TS if we displace the Sgr dSph centre to the indicated position; the maximum shown, at a displacement $\\Delta b \\approx -4^\\circ$, has $\\mbox{TS} = 40.8$, corresponding to $4.5\\sigma$ significance. For comparison, white contours show the original, unshifted Sgr dSph template, and the green arrow shows the direction anti-parallel to the Sgr dSph's proper motion, back along its past trajectory; red arrows show the projection of the green arrow in the $\\ell$ and $b$ directions.\n}\\label{fig:rotationAndTranslationTests} \n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=1]{Sgr_Sph_StructuredFBs_spectra_systematics.pdf}\\\\\n\\caption{{Sgr~dSph~} spectra derived from template analysis using different Galactic diffuse emission models; in all cases the spectrum shown is the flux averaged over the entire ROI, not the flux within the footprint of the Sgr dSph template. The fiducial model is our default choice (first entry in Table~\\ref{tab:loglikelihood}), while other lines correspond to alternate foregrounds -- models 2D A (red), 2D B (black), and 2D C (blue) for the Galactic IC foreground, and models Interpolated (dark green) and GALPROP 3D-gas (light green) for the Galactic hadronic + bremsstrahlung foreground. The error bars display $1\\sigma$ statistical errors. 
See Table~\\ref{tab:loglikelihood} and text for details.}\n\\label{fig:SgrSpecVar}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=1]{Total_measured_spectra_RoI.pdf}\\\\ \n\\caption{\nContribution of each template component to the $\\gamma$-ray spectrum averaged over the entire ROI, for our default baseline + {Sgr~dSph~} model. Components shown are as follows: $\\pi^0+\\mbox{brems}$ is the Galactic hadronic plus bremsstrahlung foreground, ICS is the Galactic inverse Compton foreground, 4FGL indicates point sources from the 4th \\textit{Fermi} catalogue, Fermi Bubbles indicates the structured Fermi Bubble template, isotropic is the isotropic $\\gamma$-ray background, ``other'' includes the Sun and Moon, Loop I, and the Galactic Centre Excess, and Sgr stream indicates the Sgr dSph.\nThe error bars display $1\\sigma$ statistical errors.\n}\n\\label{fig:totalspectra}\n\\end{figure}\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\linewidth]{plotSpectralFitContours.pdf}\n\\caption{ \nFilled contours indicate the best-fit region for the spectral parameters $E_{\\rm cut,prompt}$ and $\\alpha$\nthat determine the shape of the magnetospheric emission from the Sgr dSph; the outer, coloured region shows the 2$\\sigma$ region, the inner shows the 1$\\sigma$ region, and the red point marks the best fit.\nThe dotted and dashed contours describe the 1, 2, and 3 $\\sigma$ confidence regions measured in ref~\\cite{Song2021} for globular clusters (GCs) and individual resolved MSPs, respectively, constructed from the observations as described in Methods.\n}\n\\label{fig:FitContours}\n\\end{figure}\n\n\\begin{figure}\n \\includegraphics[width=1.\\linewidth]{plotLgammaOvrMstarVstSimple}\n \\caption{\n{\\bf Data:} $\\gamma$-ray luminosity per stellar mass for a number of stellar systems (cf.~fig.~3 main text) versus mean stellar age of those systems. 
The mean stellar ages have been determined from empirically-determined star formation histories for all these objects (data sources as follows: {Sgr~dSph~} \\cite{Weisz2014}, M31 \\cite{Williams2017}, Galactic Bulge \\cite{Bernard2018}, NB \\cite{Nogueras-Lara2020} and ref.~\\cite{Weisz2013} for the LMC). The globular cluster datum (`GCs') is plotted at the mean measured {$\\gamma$-ray} \\ luminosity for the 27 systems analysed in ref.~\\cite{Song2021} divided by their stellar masses, and the age is the luminosity-weighted mean age for the 31 systems analysed in ref.~\\cite{Wu2022} (while the error bars for this datum show the standard deviations of these measurements for each population). The purple datum shows the secondary electron plus positron luminosity of the Milky Way (`disk $e^\\pm$') as inferred in ref.~\\cite{Strong2010} and adopting a disk stellar mass of $5.2 \\times 10^{10} \\ M_{\\odot}$\\cite{Bland_Hawthorn_2016}. {\\bf Model curve:} The solid blue curve shows the evolution with time (since the initial, burst-like star formation event) of the total spin-down power generated by a population of MSPs (normalised to the stellar mass expected to host that same population) according to the recent binary population synthesis modelling presented in ref.~\\cite{Gautam2021} (with the blue band indicating the estimated $\\pm 1 \\sigma$ error on this quantity, dominated by the uncertainties in the overall stellar binarity fraction). The {\\bf dashed red line} is an approximate fit to the solid blue line described by $5.0 \\times 10^{28} \\exp(-t\/t_{\\rm decay})$~erg\/s\/$M_{\\odot}$ with $t_{\\rm decay} = 3$ Gyr. The dashed blue curve shows 10\\% of the mass-normalised spin-down power (with the error band suppressed for clarity). 
The {\\bf brown, dashed, horizontal line} shows the total power (per unit stellar mass) from MSP spin-down we infer from the study by Sudoh et al.~\\cite{Sudoh2020} of radio continuum emission from massive, quiescent galaxies (with expected mean stellar ages $>8-10$ Gyr; see the main text for more details).\n}\n\\label{fig:plotLgammaOvrMstarVstSimple}\n\\end{figure}\n\n\\begin{table}[ht!]\n \\centering\n \\begin{tabular}{ccccc}\n \\hline\n quantity & best-fit & 68\\% c.l. & units & literature \\\\\n &&&& value(s) \\\\\n \\hline\n $l_0 \\equiv L_{\\rm \\gamma,tot}\/M_\\star$ & $5.2 $ & $[4.4 , 6.0]$ & $10^{28}$ erg\/s\/$M_{\\odot}$ \n & $\\sim (1-10)$ \\cite{Sudoh2020} \\\\\n $f = L_{\\rm \\gamma,prompt}\/L_{\\rm \\gamma,IC}$ & $0.83 $ & $[0.59, 1.3]$ & --- & $\\sim 0.1$\\cite{Sudoh2020} \\\\\n $\\alpha$ & $0.039 $ & $[-0.38,0.62]$ & --- & $-0.88 \\pm 0.44$\\cite{Song2021} \\\\\n $E_{\\rm cut, prompt}$ & $1.0 $ & $[0.74, 1.3]$ & GeV & \n $1.91^{+0.85}_{-0.59} \\pm 0.44$ \\cite{Song2021} \\\\\n \\hline\n \\end{tabular}\n \\caption{Best fit spectral parameters with $\\pm 1\\sigma$ confidence regions as determined from $\\chi^2$ fitting to the measured {$\\gamma$-ray} \\ spectrum of the Sgr dSph. \n %\n The parameter $l_0$ is calculated using a stellar mass\n $M_\\star = 10^8 M_{\\odot}$ \\cite{Vasiliev2021} for the Sgr dSph.\n %\n See also E.D. 
\\autoref{fig:FitContours}\n %\n }\n \\label{tab:table1}\n\\end{table}\n\n\n\n\n\\clearpage\n\n\n\\printbibliography[segment=\\therefsegment,title={Methods and Extended Data References}, check=onlynew]\n\n\\clearpage\n\n\\newrefsegment\n\n\\renewcommand{\\figurename}{Supplementary Information Figure}\n\\renewcommand{\\tablename}{Supplementary Information Table}\n\n\\setcounter{figure}{0}\n\\setcounter{table}{0}\n\n\\section*{Supplementary Information}\n\n\\section{Chance overlap calculation}\n\\label{sec:overlap}\n\nIn the main text we estimate the probability of a chance overlap between the cocoon {$\\gamma$-ray} \\ structure and the {Sgr~dSph~} to be $\\approx 1\\%$.\nThis follows simply from noting that the solid angle of the Bubbles is around 0.7 sr\\cite{Ackermann2014} and the cocoon covers $\\lesssim$20\\% of this solid angle, so the chance probability for an overlap if these objects were placed randomly on the sky is $\\lesssim 0.2 \\times 0.7\/(4 \\pi) \\sim 0.012$. However, this is a generous upper limit; it does not take into account that, as revealed by the template analysis, there is a much more detailed correspondence between the {$\\gamma$-ray} \\ substructure and the stellar distribution. Moreover, the naive 1\\% estimate does not include a `look-elsewhere' correction: the Milky Way is surrounded by satellite galaxies and there are apparently other regions of sub-structure within the Fermi Bubbles.\nHowever, not only is the cocoon the brightest and first-discovered region of sub-structure \\cite{Su2012}, it is also the only region that has been reliably detected by independent analyses \\cite{Selig2015,Ackermann2014}, and is visibly-evident in \nindependently-produced\n{$\\gamma$-ray}\\ maps \\cite{Yang2014,deBoer2015}.\nThe {Sgr~dSph~} is also a special object: it is the\nbrightest MW satellite not yet (prior to this work) detected in $\\gamma$-rays. 
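The solid-angle arithmetic behind this estimate is simply:

```python
import math

# Chance-overlap estimate: the Fermi Bubbles subtend ~0.7 sr and the cocoon
# covers <~20% of that solid angle; for an object placed randomly on the sky,
# the probability of landing on the cocoon is then
omega_bubbles = 0.7      # sr, from Ackermann et al. 2014
cocoon_fraction = 0.2    # upper limit on the cocoon's share of the Bubbles
p_chance = cocoon_fraction * omega_bubbles / (4.0 * math.pi)  # ~0.011
```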
\n(In fact, not only is the {Sgr~dSph~} the brightest satellite undiscovered in $\\gamma$-rays, it is substantially brighter than the next brightest galaxy\\footnote{The list of all the MW satellites with apparent magnitude $m<10$ includes 8 objects.\nThe brightest two, the LMC and SMC, with $m \\sim 0.3$ and $\\sim 2.1$, respectively, are already detected in $\\gamma$-rays.\nThe next brightest is the {Sgr~dSph~} with $m \\sim 3$; after that come Fornax, Sculptor, and Leo I with $m \\sim 7.3, 8.7$ and 10.0\nand angular diameters of $0.24^\\circ, 0.51^\\circ$ and $0.11^\\circ$, respectively, which, even assuming they could be detected, would at best only appear marginally extended to {{\\em Fermi}-LAT}.}.)\nOverall, we have a spatial overlap\n(and detailed morphological correspondence as argued elsewhere) between \nthe brightest region of substructure within the {\\it Fermi} Bubbles\nand the Sgr dSph, the\nsecond closest, third-most massive, third brightest, and third most angularly extended satellite galaxy of the MW.\n\n\\begin{comment}\n\\section{Full description of templates}\n\\label{sec:templates}\n\n\n\\begin{table*}[!htbp]\\caption{{\\bf List of (baseline) spatial templates considered in our maximum likelihood runs. \\label{Tab:templates}}}\n\\begin{adjustbox}{width=1.0\\textwidth, center}\n\\centering\\begin{threeparttable}\n \\scriptsize\n\\begin{tabular}{llr}\n\\hline\\hline\nTemplates & Summary description & Reference\\\\\\hline \nHadronic and Bremsstrahlung & Three alternative models: (i) 3D templates predicted by \\texttt{GALPROP v56} (``{\\bf GALPROP}'')\n & \\\\\n$\\gamma$ rays & (ii) hydrodynamical gas (``{\\bf HD}''), and (iii) interpolated gas templates (``{\\bf Interpolated}''). & \\\\\n & These consist of H{\\tiny I}, H\\boldmath$_{2}$, and dust correction column density maps. & \\\\\n& The H{\\tiny I} and H\\boldmath$_{2}$ maps are divided in four rings each. 
In the case of the dust maps,\\\\\n& we use two total residual maps with different $E(B-V)$ magnitude cuts. & ~\\cite{Macias2018, Macias2019}\\\\\n &&\\\\\nInverse Compton & Used various alternative models: (i) three kinds of 2D ICS maps; a standard one (``{\\bf 2D A}''), & \\\\\n$\\gamma$ rays & another that assumes spatially variable diffusion (``{\\bf 2D B}''), and one including a central source &\\\\\n& of electrons (``{\\bf 2D C}''), (ii) a 3D ICS map$^\\dagger$ divided in four rings (``{\\bf 3D}'') & \\cite{Porter:2017vaa,Ackermann:2014usa}\\\\\n&&\\\\\n\\textit{Fermi} bubbles& (i) Flat\/unstructured FBs template (``{\\bf U}''), and (ii) structured FBs template (``{\\bf S}'') &~\\cite{Macias2019}\\\\\n&&\\\\\nLoop I& Analytical model & \\cite{Wolleben:2007}\\\\\nGalactic centre excess& Stellar distribution model based on Freudenreich 1998 (F98) & \\cite{Freudenreich1998}\\\\\nPoint sources & 4FGL catalogue of $\\gamma$-ray point sources \n({\\it gll\\textunderscore psc\\textunderscore v20.fit})& \\cite{Fermi-LAT:4FGL}\\\\\nSun and Moon & Models constructed in the 4FGL catalogue&\\cite{Fermi-LAT:4FGL} \\\\\nIsotropic emission& \\texttt{iso$_{-}$P8R3$_{-}$ULTRACLEANVETO$_{-}$V2$_{-}$v1.txt}&\\\\\n\\hline\\hline\n\\end{tabular}\n\\begin{tablenotes}\n\\item The interstellar gas maps are divided in four rings of sizes ($0-3.5$, $3.5-8.0$, $8.0-10.0$, and $10.0-50.0$ kpc). The 3D ICS maps are divided in rings of the same size as the gas maps. The 2D ICS maps correspond to ICS (Model A), ICS (Model B), and ICS (Model C) introduced in Ref.~\\cite{Ackermann:2014usa}. The baseline model considered in this work includes: (a) the hydrodynamical gas maps divided in rings, (b) the 3D ICS maps divided in rings, (c) the structured FBs template, (d) Loop I, (e) the F98 stellar template, (f) tailor-made maps for the Sun and the Moon, (g) an isotropic emission template, and (h) the 4FGL point sources. 
Note that in our bin-by-bin analysis procedure, only the normalisation of each template is varied in the fit. Since the energy bins are small, the fit results are independent of the assumed template spectra. \\end{tablenotes}\n\\end{threeparttable}\n\\end{adjustbox}\n\\end{table*}\n\n\n\n\n\nHere we provide full details for how we construct all the templates that we use in our analysis.\n\n\\subsection{Hadronic plus bremsstrahlung models}\n\\label{sec:GDEdescrip}\n\nAs discussed in Methods, the dominant source of {$\\gamma$-ray} s from the ROI is hadronic and bremsstrahlung emission resulting from the interaction of CR protons and electrons with interstellar gas.\nWe model this component with three alternative templates: HD (hydrodynamic), Interpolated, and 3D (GALPROP); \nof these, the hydrodynamical template provides the best fit to the inner Galaxy {$\\gamma$-ray} \\ sky and is used in our baseline analysis.\n\nIn detail, the models we investigated for the gas-correlated {$\\gamma$-ray} \\ emission are as follows:\n\\begin{itemize}\n \\item {\\bf Interpolated gas maps}: these assume that the hadronic and bremsstrahlung components can be phenomenologically modelled with a linear combination of atomic hydrogen, molecular hydrogen, and dust residual maps. These are divided in four Galactocentric rings (four rings of H$_{\\rm I}$, four rings of H$_{2}$, and two residual dust maps---see also Table~\\ref{Tab:templates}) to account for potential uncertainties in the CR densities. The method used to create the interpolated gas maps is given in Appendix B of Ref.~\\cite{Ackermann2012}, which we have faithfully reproduced in~\\cite{Macias2018}. 
\n %\n The main objective of this method is to estimate the gas column density in the direction of the inner Galaxy\n where the non-axisymmetric gravitational potential of the Galactic bar induces non-circular gas orbits.\n %\n Such interpolated gas maps are the standard interstellar gas distribution models employed in most studies by the Fermi team.\n \n \\item {\\bf Hydrodynamical gas maps}: these were constructed using a suite of hydrodynamical simulations of interstellar gas flow~\\cite{Pohl2008} that generate physically-motivated solutions for the gas kinematics in the direction of the inner Galaxy. \n The hydrodynamical gas maps provide a much better fit to the Galactic centre data than the interpolated gas maps\\cite{Macias2018}. Nevertheless, the main purpose of using alternative gas models in our study is evaluating the impact that these have in the inferred properties of the Sgr dSph. We note that the hydrodynamical maps are divided in the same ring scheme (four rings of H$_{\\rm I}$, four rings of H$_{2}$, and two residual dust maps) as the interpolated gas maps.\n \n \n \\item {\\bf 3D gas (GALPROP)}: to generate alternative templates for the hadronic and bremsstrahlung gamma-ray emission, we reproduced one of the models proposed in Ref.~\\cite{Johannesson:2018bit} (Model SA50 in Table 5 of that reference) using \\texttt{GALPROP V56}~\\cite{Porter:2017vaa,Johannesson:2018bit}. The latest release of this software contains new 3D spatial density models of atomic and molecular hydrogen. These include the effects of several Galactic structures such as the spiral arms, and the Galactic disk. Note that in this case, we do not divide the resulting hadronic and bremsstrahlung maps in rings so that we are able to explore the effects that a less flexible Galactic diffuse emission model has in our results. 
\n\\end{itemize}\n\nFor the IC background component, we again generated and tested alternative maps, in particular, 2D and 3D variants.\nUncertainties in the IC component become more important in the high latitude regions \\cite{Ackermann:2014usa}. \n\nIn detail, models we investigated for diffuse IC emission are:\n\\begin{itemize}\n \\item {\\bf 3D IC} maps divided in four rings: for these we utilized \\texttt{GALPROP~V56}, the propagation parameter setup SA50 (see Table 5 in Ref.~\\cite{Johannesson:2018bit}), and the same ring subdivisions as those of the interstellar gas maps (see Table~\\ref{Tab:templates}). The advantage of the new 3D IC maps (over the ones constructed with previous versions of the code), is that it now incorporates fully 3D models for the interstellar radiation fields (ISRF)~\\cite{Porter:2017vaa}; hence avoiding potential biases introduced by the previously implicitly-assumed Galactocentric symmetry. Also, since the 3D IC maps can be divided in rings, we are able to reduce the impact of modelling assumptions such as the characteristics of the electron injection spectrum, and the normalisation of the ISRF.\n \n \\item {\\bf 2D IC} maps: we used the three different IC maps constructed in Ref.~\\cite{Ackermann:2014usa} (Model A, B, and C). These were computed with an older version of the CR propagation code (\\texttt{GALPROP V54}), assume Galactocentric symmetry of the CR halo, and are monolithic (i.e., not divided in rings). 
These models encapsulate a wide range of uncertainties in the CR source distribution, CR injection spectra, the diffusion coefficient, Galactic magnetic fields, and a central source of electrons.\n\\end{itemize}\n\\end{comment}\n\n\n\\section{Construction of the Sgr dSph templates}\n\\label{sec:stellarmapsDetails}\n\nHere we provide detailed descriptions of \nhow we construct the Sgr dSph templates shown in E.D.~\\autoref{fig:Stellartemplates}.\n\n\\subsubsection*{Model I:} \n\nWe extract this template from the stellar catalogue constructed in Ref.~\\cite{Vasiliev2020}, which was derived using photometric and astrometric data from {\\it Gaia} Data Release 2 (DR2), and kinematic measurements from various other surveys. The catalogue consists of a list of $2.26\\times 10^5$ candidate member stars of the {Sgr~dSph~} remnant, which are reliably separated from the field stars. Every object in the catalogue has an extinction-corrected G-band magnitude larger than 18, and more than half of the objects in this catalogue are classified as red clump stars. Note that Ref.~\\cite{Vasiliev2020} adapted their procedure to reproduce the observed properties of the {Sgr~dSph~} remnant, not the stream, which is why the first panel of E.D.~\\autoref{fig:Stellartemplates} only shows the remnant. We show profiles of stellar number count along the long and short axes of the dwarf for this template in S.I.~\\autoref{fig:profile_longaxis}. \n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[scale=0.15]{Long_axis_star_profile.pdf}\n \\includegraphics[scale=0.15]{Short_axis_star_profile.pdf}\n \\caption{Star count profiles for Model I, showing number of stars measured in bins of angular distance from the gravitational centre of the dwarf along its long (top) and short (bottom) axes. 
In the inset images (which are identical to the first panel of \\autoref{fig:Stellartemplates}), we mark the gravitational centre of the dwarf with a cyan circle, and show the long and short axes along which we measure the profiles as white bands.}\n \\label{fig:profile_longaxis}\n\\end{figure*}\n\n\n\\subsubsection*{Model II:}\n\nOur second template comes from Ref.~\\cite{Ibataetal:2020}. Instead of red clump stars, this study selected a sample of RR Lyrae stars from {\\it Gaia} DR2 data, for which distances are accurately measured. Also, rather than focusing on member stars of the {Sgr~dSph~} remnant, Ref.~\\cite{Ibataetal:2020} used the \\textsc{streamfinder} algorithm to single out stars with high probability of belonging to the Sagittarius Stream. By using the kinematic properties of the stars in that study, we constructed a template containing 2369 RR Lyrae stars (cf. \\autoref{fig:Stellartemplates}) in our ROI. Note that the stellar number count in this map is approximately two orders of magnitude smaller than that in Model I. \n\n\\subsubsection*{Model III:}\n\nRef.~\\cite{Iorio2019} performed an all-sky analysis of RR Lyrae stars (in {\\it Gaia} DR2 data) belonging to globular clusters, dwarf spheroidal galaxies, streams, and the Magellanic Clouds. Our Model III template is a subset of their data identified as belonging to the Sgr dSph, selected to reproduce their Fig.~1 (bottom-right). It includes $1.31\\times 10^4$ RR Lyrae stars in our ROI. \n\n\\subsubsection*{Model IV and Model V:}\n\nRef.~\\cite{Ramosetal:2020} developed two empirical catalogues of RR Lyrae stars in {\\it Gaia} DR2 data, which form the basis for our final two templates. The first (Model IV) corresponds to the nGC3 sample, which is characterized by its lower completeness and higher purity. This template contains 675 stars in our ROI. The second (Model V) is the Strip sample, which has higher completeness but lower purity. 
The total number of stars in our ROI for this model is 4812.\n\n\n\n\n\n\n\\section{Validation tests}\n\\label{ssec:validation}\n\n\nWhile our template analysis indicates a strong statistical preference for emission tracing the Sgr dSph, we also carry out five further validation tests to check the robustness of the result.\n\nFirst, we check whether the residuals between the baseline + {Sgr~dSph~} source model and the {\\it Fermi} data from our ROI are consistent with the level expected simply as a result of photon counting statistics, using a method similar to that of Ref.~\\cite{Buschmann2020}. Under the null hypothesis that the {\\it Fermi} data are a Poisson draw from our best-fit baseline + {Sgr~dSph~} model (i.e., that our model is correct, and any differences between it and the actual data are simply due to shot noise), we can determine the expected distribution of $\\ln\\mathcal{L}_n$ values via Monte Carlo. For each Monte Carlo trial, we draw a set of mock photon counts $\\Phi_{n,i,\\rm mock}$ in each pixel and energy bin from our best-fitting model (multiplied by the instrument response function), and then compute the energy-dependent log-likelihood for this mock data set using the same pipeline we use on the real data. We repeat this procedure 100 times, and plot the distribution of log-likelihood values it produces \nas the blue histograms\nin E.D. \\autoref{fig:fitvalidation}. These\nhistograms represent the expected log likelihood in each energy bin under the null hypothesis. We then compare this to the actual value of $\\ln\\mathcal{L}_n$ we measure for our model as compared to the real \\textit{Fermi} data. The plot shows that our measured log-likelihood falls squarely within the range expected under the null hypothesis, and we therefore conclude that the residuals between our model and the real data are consistent with being solely the result of photon counting statistics. 
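The Monte Carlo procedure described above can be sketched as follows; this is a toy illustration with invented expected counts, not the actual pipeline or instrument response:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical best-fit expected counts per pixel for one energy bin
# (illustrative numbers only -- the real model comes from the template fit).
mu = rng.uniform(1.0, 20.0, size=500)

def poisson_loglike(counts, model):
    """Poisson log-likelihood, dropping the model-independent ln(k!) term."""
    return float(np.sum(counts * np.log(model) - model))

# Null hypothesis: the data are a Poisson draw from the model itself.
# Repeating the draw builds the expected distribution of log-likelihoods.
null_ll = [poisson_loglike(rng.poisson(mu), mu) for _ in range(100)]
lo, hi = min(null_ll), max(null_ll)

# A log-likelihood measured on the real data that falls inside [lo, hi]
# is consistent with the residuals being pure shot noise.
```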
\n\nIn addition to testing whether the residuals between model and data are consistent with simply being shot noise when we sum over all pixels (which is what the likelihood measures), we can also examine the residuals as a function of position. We do so in E.D.~\\autoref{fig:Residuals}, which shows the measured \\textit{Fermi} counts in our ROI (summed in three energy bins) in the first column, our best-fitting baseline + {Sgr~dSph~} model in the second column, and fractional residuals [$(\\rm{Data}-\\rm{Model})\/\\rm{Model}$] in the third column. The images are smoothed with a $0.5^\\circ$ Gaussian kernel, since this is roughly the resolution of our interstellar gas maps~\\cite{Macias2018,Macias2019}. The plot shows that, on a point-by-point basis, our models reproduce the data within $\\sim 10\\%$ over most of the ROI. \nThere are, however, a few small patches of correlated residuals, which are only at the $\\sim 30\\%$ level, and are far from the Sgr dSph region. \nThis points to the existence of real structure in the Fermi Bubbles that is not yet perfectly modelled, but given the small level of the residuals and the distance between them and the signal in which we are interested, this modelling imperfection has little impact on our results.\n\n\n\n\n\nAs our second validation test, we evaluate the sensitivity of our pipeline to uncertainties in our templates for Galactic diffuse emission, and we verify that our pipeline can recover synthetic signals similar to the Sgr dSph even when our templates are imperfect. Recall that we have three components of Galactic diffuse emission for which the templates are at least somewhat uncertain: hadronic + bremsstrahlung emission (for which our template can be HD, Interpolated, or GALPROP), Galactic IC emission (for which the template can be 3D, 2D A, 2D B, or 2D C), and the Fermi Bubbles (for which the template can be S, structured, or U, unstructured). 
\nWe test the sensitivity of our fits to these template choices as follows. First, we generate a set of mock background data by drawing a random realisation of photon counts from one combination of these templates, and on top of this we add a synthetic Sgr dSph signal; the Sgr dSph photons follow the spatial morphology of our Sgr dSph model I template, have a spectral shape $dN_\\gamma\/dE_\\gamma \\propto E_\\gamma^{-2}$, and have a normalisation that we vary systematically from $\\approx 10^{-11}$ ph cm$^{-2}$ s$^{-1}$ (integrated over all energies) to $\\approx 10^{-5}$ ph cm$^{-2}$ s$^{-1}$; our best-fit Sgr dSph photon flux falls in the middle of this range, $\\approx 2\\times 10^{-8}$ ph cm$^{-2}$ s$^{-1}$. Then we use our pipeline to recover the flux of the Sgr dSph from the synthetic map, but using a \\textit{different} set of templates for Galactic diffuse emission to the ones used to generate the synthetic data. Comparing the recovered Sgr dSph spectrum to the injected one reveals how well our pipeline performs when the input diffuse emission templates are not exactly correct. We carry out this experiment with four diffuse emission template combinations: (1) synthetic data generated from GALPROP + 3D + S, analysed using HD + 3D + S; (2) synthetic data generated from HD + 2D A + S, analysed using HD + 3D + S; (3) synthetic data generated from HD + 3D + S, analysed using HD + 3D + U; (4) synthetic data generated using HD + 3D + S, analysed using HD + 3D but no template for the FBs at all.\n\nWe show the results for the first two of these experiments in the two left panels of E.~D.~\\autoref{fig:injectionrecovery}; the top left panel shows the recovered energy-integrated photon flux compared to the injected flux, while the bottom left shows the recovered spectra when the input flux is $\\approx 2 \\times 10^{-8}$ ph cm$^{-2}$ s$^{-1}$. 
The plot shows that our pipeline yields excellent agreement between the injected and recovered signals for both the integrated flux and the spectrum unless the Sgr dSph signal is $\\sim 1$ order of magnitude weaker than our estimate. In no circumstance does our pipeline produce a false signal comparable in magnitude to our observed one. The two right panels of E.~D.~\\autoref{fig:injectionrecovery} show the third and fourth tests, where we mismatch the FB template. Here the effects are somewhat larger, but still relatively minor: if we create synthetic data with the S Fermi Bubble template (so that there is structure corresponding to the cocoon), and then analyse it using either the U template or no FB template at all, then we make an error of a factor of $\\sim 2-3$ in the absolute flux, but no substantial error in the spectral shape. This test suggests that our detection of the Sgr dSph is very robust, but that we have a factor of $\\sim 2-3$ uncertainty in its absolute flux, stemming from our imperfect knowledge of the foreground FBs.\n\n\n\n\nThe third validation test we perform is to check whether a fit using the observed stellar distribution of the Sgr dSph as a template performs better than one using a purely geometric template placed at the same position; if the emission really is tracing the stars of the dwarf, and is not merely a chance overlap, a template matching the shape of the dwarf should perform better than a purely geometric distribution. For this purpose we consider disc-shaped templates of varying radii, centred at Galactic coordinates $(\\ell,b)=(5.61^\\circ, -14.09^\\circ)$ --- the dynamical centre of the Sgr dSph --- and repeat our standard procedure of comparing baseline models to baseline + Sgr dSph models, using these geometric templates in place of the Sgr dSph stellar templates. 
We use our fiducial choices for all other templates (hadronic and bremsstrahlung emission, Galactic IC emission, and the Fermi Bubbles).\n\nWe show the results of this experiment in S.I.~\\autoref{tab:geometric_templates}. We find that geometric templates do perform better than baseline models with no Sgr dSph component, but, as expected, even the best geometric template (for a disc of radius $r=2.0^\\circ$) yields significantly less fit improvement ($TS=63.8$) than our fiducial stellar template ($TS=95.2$); this difference in test statistic, $\\Delta\\, TS = 31.4$, corresponds to the Sgr dSph template being preferred at $3.7\\sigma$ significance. Moreover, this result becomes even stronger if we notice two additional points. First, because we tried a wide range of radii for the geometric models, the geometric templates effectively provide an extra degree of freedom that the Sgr dSph template, which is fixed by observations, lacks. Because we fix the template radius while performing each fit, we do not treat the varying radius as an extra degree of freedom when computing the test statistic, but if we did so, then the difference in performance between the geometric and stellar templates would be even larger. Second, the geometric model that gives the best fit to the data is in fact the one whose radius most closely approximates the actual size of the core of the Sgr dSph. Indeed, Fig.~\\ref{fig:profile_longaxis} (bottom) shows that, in the direction of the short axis, the Sgr dSph stellar profile falls off steeply $\\sim 2^\\circ-3^\\circ$ away from the Sgr dSph centre. Thus the geometric template that gives the best match to the observations happens to be the one that most closely approximates the actual distribution of stars in the Sgr dSph.\n\n\\begin{table}[h!]\n \\centering\n \\small\n \\begin{tabular}{llll@{\\qquad\\qquad}rrrr}\n \\hline\\hline\n \\multicolumn{4}{c}{Template choices} & \\multicolumn{4}{c}{Results} \\\\\n Hadr. \/ Bremss. 
& IC & FB & Sgr dSph &\n $-\\log(\\mathcal{L}_{\\rm Base})$ & $-\\log(\\mathcal{L}_{{\\rm Base}+{\\rm Sgr}})$ & $\\mbox{TS}_{\\rm Source}$& Significance \\\\[0.5ex] \\hline \n \\multicolumn{8}{c}{Default model} \\\\[0.5ex]\n HD & 3D & S & Model I & 866680.6 &866633.0 & 95.2 & $8.1\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=0.5^\\circ$) & 866680.6 & 866666.1 & 28.9 & $3.5\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=1.0^\\circ$) & 866680.6 & 866661.3 & 38.6 & $4.4\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=2.0^\\circ$) & 866680.6 & 866648.7 & 63.8 & $6.3\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=3.0^\\circ$) & 866680.6 & 866654.9 & 51.4 & $5.4\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=4.0^\\circ$) & 866680.6 & 866658.1 & 45.0 & $4.9\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=5.0^\\circ$) & 866680.6 & 866661.3 & 38.6 & $4.4\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=6.0^\\circ$) & 866680.6 & 866669.3 & 22.7 & $2.8\\;\\sigma$\n \\\\[0.5ex]\n HD & 3D & S & Disc ($r=7.0^\\circ$) & 866680.6 & 866670.4 & 20.4 & $2.6\\;\\sigma$\n \\\\[0.5ex]\n \n HD & 3D & S & Disc ($r=9.0^\\circ$) & 866680.6 & 866664.9 & 31.4 & $3.7\\;\\sigma$ \\\\[0.5ex]\n HD & 3D & S & Disc ($r=11.0^\\circ$) & 866680.6 & 866665.8 & 29.6 & $3.6\\;\\sigma$ \\\\[0.5ex]\n HD & 3D & S & Disc ($r=13.0^\\circ$) & 866680.6 & 866673.0 & 15.2 & $1.9\\;\\sigma$ \\\\[0.5ex]\n HD & 3D & S & Disc ($r=15.0^\\circ$) & 866680.6 & 866676.5 & 8.3 & $0.9\\;\\sigma$ \\\\[0.5ex]\n \\hline\\hline\n \\end{tabular}\n \\caption{Same as \\autoref{tab:loglikelihood} in the main letter, except that here we compare the results for our fiducial stellar template for the Sgr dSph (Model I, top row) to results using disc templates of various angular radii centred at the dynamical centre of the Sgr dSph. 
}\n \\label{tab:geometric_templates}\n\\end{table}\n\n\nOur \nfourth\nvalidation test is to check whether our fit degrades if we artificially rotate or translate the Sgr dSph template; if the signal we are detecting really does come from the Sgr dSph, the best fit should be for a template that traces its actual orientation and position, while rotated or shifted templates should produce progressively worse fits. This check is significant in part because Ref.~\\cite{Ackermann2014} performed similar rotation analysis for the hypothesis that the cocoon is tracing a jet from Sgr A$^*$, and found that there was no preference for a jet oriented toward Sgr A$^*$ over one oriented in some other way; they took this as evidence against the jet hypothesis. To check if the Sgr dSph template performs better on this test, we first rerun our analysis pipeline for our default set of templates (first line in \\autoref{tab:loglikelihood}), but with the Sgr dSph template rotated about its core. For each rotation angle we compute the TS, and compare to the TS of the original, unrotated model. We plot the result of this experiment in the left panel of E.D.~\\autoref{fig:rotationAndTranslationTests}. It is clear that, as expected, the fit is best when we use the actual orientation of the Sgr dSph, and degrades as we increase the rotation. Next, we carry out a similar procedure, but this time rather than rotating the Sgr dSph template about its core, we rotate around the centre of the Galaxy, thereby both translating and rotating the template. (This latter test was motivated by the particular alignment of the Sgr Stream with the previously claimed collimated jets from the Galaxy's supermassive black hole~\\cite{Su2012}.) We show the results in the middle panel of E.D.~\\autoref{fig:rotationAndTranslationTests}, and, again as expected, the TS strongly favours the true location and orientation of the Sgr dSph. Finally, we translate the Sgr dSph while leaving its orientation unchanged. 
We show the TS for displaced Sgr dSph in the right panel of E.D.~\\autoref{fig:rotationAndTranslationTests}. In this case the fit improves if we do displace the Sgr dSph from its true position by $\\approx 4^\\circ$ south. The amount by which the shift is favoured is fairly significant -- the TS improved by 40.8, which corresponds to $4.5\\sigma$ significance. Interestingly, the direction of the displacement is within a few degrees of the direction anti-parallel to the Sgr dSph's proper motion, suggesting that the dwarf's $\\gamma$-ray signal trails it slightly on its orbit.\nIf IC-emitting CR {$e^\\pm$} \\ are largely responsible for the observed {Sgr~dSph~} {$\\gamma$-ray} \\ signal as suggested by our spectral modelling, a systematic displacement of this signal southward by $\\sim 4^\\circ$ from the stars of {Sgr~dSph~} is quite reasonable as we have explained elsewhere (and see \\autoref{sec:CRtrans}).\n\n\n\n\n\n\n\\section{Transport of IC-emitting CR $e^\\pm$}\n\\label{sec:CRtrans}\n\nWe have seen that, while our pipeline detects a signal from the Sgr dSph at very high statistical significance, the fit improves even more (by $\\approx 4.5\\sigma$) if we displace the Sgr dSph template $\\approx 4^\\circ$ from its actual position (corresponding to 1.9 kpc at the distance of the Sgr dSph), in a direction very close to anti-parallel to the dwarf's proper motion. Here we demonstrate that a displacement of this type is expected in a model where the $\\gamma$-ray signal from the Sgr dSph is powered by MSPs. Part of the MSP signal emerges directly from the MSP magnetospheres, and thus traces the stellar component of the Sgr dSph. However, the majority of the observed signal is, in our model, IC emission powered by {$e^\\pm$}~escaping MSP magnetospheres and interacting with the CMB. 
The time between when {$e^\\pm$}~leave MSPs and when they IC scatter to produce $\\gamma$-ray photons is non-negligible: the CMB is dominated by photons with energies $\\sim k_{\\rm B}T_{\\rm CMB}$ (with $T_{\\rm CMB}=2.7$ K), so IC photons with energies of $\\sim 1-100$ GeV must be produced by {$e^\\pm$}~with energies $E_{e^\\pm} \\sim 0.6-6$ TeV. The characteristic IC loss time for such particles is\n\\begin{equation}\n t_{\\rm IC} = \\frac{3 m_e^2 c^3}{4 \\sigma_{\\rm T} E_{e^\\pm} U_{\\rm CMB}} = 1.2\\left(\\frac{E_{e^\\pm}}{\\mbox{TeV}}\\right)^{-1}\\mbox{ Myr},\n\\end{equation}\nwhere $m_e$ is the electron mass, $c$ is the speed of light, $\\sigma_{\\rm T}$ is the Thomson cross section, and $U_{\\rm CMB} = a_R T_{\\rm CMB}^4 = 0.25$ eV cm$^{-3}$ is the energy density of the CMB.\n\nDuring this time, the {$e^\\pm$}~will have the opportunity to move a significant distance prior to producing $\\gamma$-rays, due to both bulk gas motion and CR flow relative to the gas. \nWith regard to bulk advection, we note that the proper speed of the {Sgr~dSph~} is $\\approx 260$ km s$^{-1}$, and we therefore expect an effective wind of Galactic halo gas to be blowing through (or, at least, around) the dwarf at approximately this speed.\nThis wind would advect the IC-radiating {$e^\\pm$} \\ southward.\nQuantitatively, the extent of the angular displacement of an IC {$\\gamma$-ray} \\ signal at $E_\\gamma$ is\n\\begin{equation}\n \\Delta \\theta_{\\rm adv}(E_\\gamma) \\simeq 1.0^\\circ \\left(\\frac{E_\\gamma}{\\rm GeV}\\right)^{-1\/2} \n \\left(\\frac{v_{\\rm prop}}{\\rm 260 \\ km\/s}\\right),\n\\end{equation}\nwhere $v_{\\rm prop}$ is the proper motion on the sky.\nThus advection is expected to generate a \nsouthward displacement of $\\sim 1^\\circ$.\n\n\nThis is less than the displacement we observe, but advection is also likely less important than CR transport through the gas. 
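The numbers above can be checked directly; a sketch in CGS units, where the 0.64 TeV electron energy (the energy that upscatters CMB photons to $\sim 1$ GeV) and the 26.5 kpc distance to the Sgr dSph are assumed values consistent with the ranges quoted in the text:

```python
import math

# CGS constants
m_e  = 9.109e-28        # electron mass, g
c    = 2.998e10         # speed of light, cm/s
sigT = 6.652e-25        # Thomson cross section, cm^2
eV   = 1.602e-12        # erg
Myr  = 3.156e13         # s
kpc  = 3.086e21         # cm

U_cmb = 0.25 * eV       # CMB energy density, erg/cm^3

def t_ic_myr(E_TeV):
    """IC cooling time on the CMB in the Thomson regime, in Myr."""
    E = E_TeV * 1e12 * eV
    return 3 * m_e**2 * c**3 / (4 * sigT * E * U_cmb) / Myr

# Advective displacement for ~1 GeV photons (upscattered by ~0.64 TeV e+-),
# assuming a 260 km/s halo wind and an assumed 26.5 kpc distance.
v_wind = 260e5                                   # cm/s
d_sgr  = 26.5 * kpc                              # cm
theta_deg = math.degrees(v_wind * t_ic_myr(0.64) * Myr / d_sgr)
# t_ic_myr(1.0) comes out at ~1.2 Myr and theta_deg at ~1 degree,
# matching the two equations above.
```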
While the diffusion coefficient for CRs in the galactic halo is very poorly known, we can make an order of magnitude estimate by adopting\nthe functional form for the diffusion coefficient given in ref~\\cite{Gabici2007} which is normalised to $3 \\times 10^{27}$ cm$^2$ s$^{-1}$ for a 1 GeV CR in a 3 $\\mu$G field. Then the expected diffusive displacement of the IC-radiating {$e^\\pm$}~is\n\\begin{equation}\n \\Delta \\theta_{\\rm diff}(E_\\gamma) \\simeq 3.5^\\circ \\left(\\frac{E_\\gamma}{\\rm GeV}\\right)^{-0.12} \n \\left(\\frac{B}{\\rm 0.1 \\ \\mu G}\\right)^{-0.27} .\n\\end{equation}\nWhile this is roughly the correct amount of displacement to reproduce what we observe, if the diffusion were isotropic then we would still not have explained the systematic offset between the dwarf and the displaced location picked out by our template analysis. However, we do not expect isotropic diffusion in the environment of the Sgr dSph. Simulations of objects plunging through diffuse halo gas indicate that a generic outcome of such interactions is the development of a coherent magneto-tail back along the objects' direction of motion \\cite{Dursi2008}. 
Such a structure formed by the Sgr dSph plunging through the Milky Way's halo would naturally explain why, rather than being isotropic, the diffusive transport is primarily backwards along the dwarf's trajectory.\n\n\n\\section{Energetics of the Sgr dSph MSP population}\n\\label{ssec:energetics}\n\n\nAs discussed in the main text, \nthe {$\\gamma$-ray} \\ luminosity per stellar mass we measure for the {Sgr~dSph~} \\ is substantially brighter than we measure for the Galactic Bulge, Galactic disk, or M31, but is substantially dimmer than is observed for globular clusters.\nIndeed, in \\autoref{fig:LgammaOvrMstar}, {Sgr~dSph~} \\ appears as a transition object between gas-poor, low metallicity, low star formation rate, and relatively low stellar mass systems on the left side and relatively gas rich and massive systems (some with appreciable star formation) on the right side.\nIn order to investigate more deeply how the $\\gamma$-ray luminosity of the Sgr dSph compares to that of other observed systems, and to theoretical expectations, in \\autoref{fig:plotLgammaOvrMstarVstSimple} we collect measurements of $\\gamma$-ray luminosity per unit stellar mass versus approximate age for a range of observed systems, and compare these measurements to model predictions. For the observed systems we include M31, the Milky Way bulge and nuclear bulge, the mean of Milky Way globular clusters, and the Milky Way disc; for the latter we have included both the $\\gamma$-ray emission directly measured from MSPs, and the observed {$e^\\pm$}~luminosity of the disc, which may include a significant MSP contribution. As in \\autoref{fig:LgammaOvrMstar}, we see that the Sgr dSph is intermediate between the metal-rich galactic systems -- M31, the Milky Way disc and bulge -- and the globular clusters (GCs). 
However, the figure also reveals a clear trend that galactic systems dim as a function of age, with Sgr dSph as both the youngest and the most luminous of the galactic systems.\n\n\nThe trend with age is consistent with theoretical expectations, indicated by the blue band in \\autoref{fig:plotLgammaOvrMstarVstSimple} which shows the prediction of a binary population synthesis (BPS) model \\cite{Gautam2021} for the total spin-down power per unit stellar mass liberated by magnetic braking of MSPs. Some of this power should emerge as prompt emission, and some as {$e^\\pm$} \\ injected into the ISM; the lower dashed blue line shows 10\\% of the total spin-down power, a rough estimate for the prompt component. In this particular calculation, the MSPs derive from Accretion Induced Collapse, the population is assumed to be of Solar metallicity, and each binary evolves independently (i.e., the `field star' limit is assumed). Based on the predictions of this model, and the estimated age of the Sgr dSph, we estimate that the $\\gamma$-ray signal we have detected can be explained by the presence of $\\approx 650$ MSPs in the galaxy. Given that the overall {$\\gamma$-ray} \\ luminosity of an MSP population is bounded by the spin-down power, it is evident from the figure that the expected energetics appear to be elegantly sufficient to power the signal from {Sgr~dSph~} \\ given the (relatively young) mean age of its stars; this age difference naturally explains why the Sgr dSph should be more luminous per unit mass than M31 or components of the Milky Way.\n\n\nIt is also noteworthy that the GCs are considerably more luminous per unit mass than both the BPS model and the Sgr dSph. 
The extremely high brightness of GCs is plausibly explained by some combination of dynamical effects, which lead to dynamical hardening of binaries and thence a higher production rate of MSPs, and metallicity effects, which lead to higher MSP production because metal-poor stars have weaker winds and thus experience less mass loss during their main sequence lifetimes than Solar-metallicity stars \\cite{Ruiter2019}. The former effect would not occur in the Sgr dSph, but the latter would, since the Sgr dSph has a metallicity $\\log_{10}(Z_{\\rm Sgr}\/Z_\\odot) \\simeq -0.9$ \\cite{Vasiliev2020}, where $Z_\\odot$ is the solar metallicity, which is comparable to typical GC metallicities.\n\n\n\n\nThe final comparison we show in \\autoref{fig:plotLgammaOvrMstarVstSimple} is with the MSP power inferred by Sudoh et al.\\cite{Sudoh2020} in massive, quiescent galaxies ($M_*>10^{9.5}$ M$_\\odot$, star formation rate $< 0.1$ M$_\\odot$ yr$^{-1}$). Such galaxies typically have stellar population ages $\\gtrsim 8-10$ Gyr \\cite{McDermid2015,Pacifici2016,Sudoh2020}, and Sudoh et al.~show that they produce anomalously-large synchrotron emission, which they attribute to radiation from {$e^\\pm$} \\ injected by MSPs; they infer an injection power $1.8\\times 10^{28}$ erg$\/$s$\/M_{\\odot}$, which we show as the brown dashed line in \\autoref{fig:plotLgammaOvrMstarVstSimple}. We see that this estimate is both consistent with the BPS model and comparable to the luminosity we infer for the Sgr dSph.\n\n\nOur overall conclusion is that the MSP luminosity we have derived for the Sgr dSph is fully consistent with both theoretical expectations and a wide variety of observed systems. 
The Sgr dSph is more luminous per unit mass than the Milky Way or M31, but this is easily explained by its youth and low metallicity, and it is comparably- or less-luminous than other observed systems that are of comparable age or metallicity.\n\n\n\n\n\\section{Astrophysical {$\\gamma$-ray} \\ emission from other dSphs}\n\nOn the basis of the normalisation ($L_{\\gamma}\/M_\\star$) supplied by the Sgr dSph $\\gamma$-ray detection, we can make \nrough predictions for the astrophysical $\\gamma$-ray fluxes from a number of other dSph systems, simply by assuming this normalisation applies to them as well; future work should be based on full theoretical models including metallicity and age effects, but the simple calculation we present here can serve as a guide to the systems for which such investigations are likely to be fruitful.\nOur predictions can, in turn, be compared to i) actual observational upper limits to the {$\\gamma$-ray} \\ fluxes from these dSphs and ii) (model-dependent) predictions for the (WIMP) dark-matter-driven {$\\gamma$-ray} \\ fluxes from the same satellite galaxies. For this purpose we use the data assembled in Winter et al.~\\cite{Winter2016} for the distances, stellar masses, \nand MSP- and DM-driven fluxes for a population of 30 dSph satellites of the Milky Way. These authors derive their MSP fluxes by extrapolating the $\\gamma$-ray luminosity function of resolved Milky Way MSPs; their result implies that at energies above 500 MeV, galaxies should produce an MSP photon flux per unit stellar mass of $\\approx 6.3\\times 10^{29}$ s$^{-1}$ M$_\\odot^{-1}$, roughly a factor of 40 smaller than the $\\approx 2.5\\times 10^{31}$ s$^{-1}$ M$_\\odot^{-1}$ we detect for the Sgr dSph. We report our revised estimates of the dwarf spheroidals' MSP luminosities in S.I.~\\autoref{tab:dsph_fluxes}. 
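The factor-of-40 comparison above is simple arithmetic on the two quoted per-unit-mass photon rates; a minimal check, using only the numbers quoted in the text:

```python
# Compare the MSP photon rate per unit stellar mass extrapolated by
# Winter et al. with the value measured for the Sgr dSph
# (both quoted in the text, for E_gamma > 500 MeV).
winter_rate = 6.3e29  # photons s^-1 Msun^-1 (Winter et al. extrapolation)
sgr_rate = 2.5e31     # photons s^-1 Msun^-1 (Sgr dSph)

ratio = sgr_rate / winter_rate
print(f"Sgr dSph / Winter et al. = {ratio:.1f}")  # roughly 40
```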
This finding has two implications, which we explore below: first, for some dwarfs this brings the predicted {$\\gamma$-ray}~flux close to current observational upper limits, suggesting that a more detailed analysis of \\textit{Fermi}-LAT data might yield a detection. Second, in some dSph galaxies, the predicted MSP flux is comparable to or exceeds the {$\\gamma$-ray}~fluxes that might be expected from dark matter annihilation.\n\n\n\n\\begin{table}\n\\begin{tabular}{lcc}\n\\hline\nGalaxy name & Predicted MSP flux & Predicted DM flux \\\\\n& (cm$^{-2}$ s$^{-1}$) & (cm$^{-2}$ s$^{-1}$) \\\\\n\\hline\nFornax & $2.38\\times 10^{-10}$ & $2.06\\times 10^{-11}$ \\\\\nSculptor & $1.10\\times 10^{-10}$ & $5.18\\times 10^{-11}$ \\\\\nSextans & $1.96\\times 10^{-11}$ & $3.27\\times 10^{-11}$ \\\\\nUrsa Minor & $1.95\\times 10^{-11}$ & $8.20\\times 10^{-11}$ \\\\\nLeo I & $1.59\\times 10^{-11}$ & $6.52\\times 10^{-12}$ \\\\\nDraco & $1.18\\times 10^{-11}$ & $8.20\\times 10^{-11}$ \\\\\nCarina & $8.12\\times 10^{-12}$ & $1.64\\times 10^{-11}$ \\\\\nLeo II & $4.54\\times 10^{-12}$ & $5.18\\times 10^{-12}$ \\\\\nBootes I & $1.36\\times 10^{-12}$ & $2.06\\times 10^{-11}$ \\\\\nCanes Ven. I & $1.33\\times 10^{-12}$ & $6.52\\times 10^{-12}$ \\\\\nUrsa Major II & $1.10\\times 10^{-12}$ & $2.59\\times 10^{-10}$ \\\\\nReticulum II & $5.27\\times 10^{-13}$ & $2.59\\times 10^{-10}$ \\\\\nComa Ber. 
& $5.19\\times 10^{-13}$ & $1.30\\times 10^{-10}$ \\\\\nHercules & $4.48\\times 10^{-13}$ & $1.64\\times 10^{-11}$ \\\\\nUrsa Major I & $4.25\\times 10^{-13}$ & $2.59\\times 10^{-11}$ \\\\\nTucana III & $2.67\\times 10^{-13}$ & $2.59\\times 10^{-10}$ \\\\\nGrus II & $2.53\\times 10^{-13}$ & $6.52\\times 10^{-11}$ \\\\\nTucana IV & $1.99\\times 10^{-13}$ & $6.52\\times 10^{-11}$ \\\\\nTucana II & $1.88\\times 10^{-13}$ & $8.20\\times 10^{-11}$ \\\\\nEridanus II & $1.60\\times 10^{-13}$ & $2.59\\times 10^{-12}$ \\\\\nWillman I & $1.45\\times 10^{-13}$ & $1.64\\times 10^{-10}$ \\\\\nSegue 1 & $1.34\\times 10^{-13}$ & $4.11\\times 10^{-10}$ \\\\\nLeo IV & $7.53\\times 10^{-14}$ & $1.03\\times 10^{-11}$ \\\\\nHorologium I & $6.65\\times 10^{-14}$ & $3.27\\times 10^{-11}$ \\\\\nPhoenix II & $6.55\\times 10^{-14}$ & $3.27\\times 10^{-11}$ \\\\\nCanes Ven. II & $6.51\\times 10^{-14}$ & $1.03\\times 10^{-11}$ \\\\\nReticulum III & $4.95\\times 10^{-14}$ & $2.06\\times 10^{-11}$ \\\\\nColumba I & $3.91\\times 10^{-14}$ & $5.18\\times 10^{-12}$ \\\\\nIndus I & $3.50\\times 10^{-14}$ & $2.59\\times 10^{-11}$ \\\\\nIndus II & $2.24\\times 10^{-14}$ & $3.27\\times 10^{-12}$ \\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\label{tab:dsph_fluxes}\nPredicted MSP photon and dark matter annihilation photon fluxes at energies $E_\\gamma > 500$ MeV from nearby dSph galaxies, taken from the sample of ref.~\\cite{Winter2016}. 
Column 1: galaxy name; column 2: predicted MSP photon flux based on the Sgr dSph (see SI for details); column 3: DM annihilation flux predicted by ref.~\\cite{Winter2016}.\n}\n\\end{table}\n\n\n\n\n\n\n\n\\subsection{Comparison with existing upper bounds}\n\n\n\nTo estimate whether other dwarf spheroidals might be detectable, we compare our differential flux predictions (incorporating both prompt and IC emission where, for simplicity, we make the approximation that the CMB-dominated ISRF of the {Sgr~dSph~} \\ also pertains in each other dSph under consideration)\nagainst the results from ref.~\\cite{Mazziotta2012}\\footnote{This is the most recent publication we can find that explicitly tabulates bin-by-bin, numerical 95\\% confidence upper limits on the differential flux received by a number of dSphs that also appear in the compilation of ref.~\\cite{Winter2016}}. On the basis of this comparison, we do not predict {$\\gamma$-ray} \\ emission from any dSph that surpasses the upper limits from ref.~\\cite{Mazziotta2012}. However, two dSphs reach a significant fraction of the relevant upper limit in at least one energy bin (of width $\\log_{10}(\\Delta E\/\\mathrm{GeV}) = 0.5$): Fornax (which reaches 0.24 of the upper limit for the energy bin centred at 1.36 GeV) and Sculptor (which reaches 0.09 of the upper limit for the energy bin centred at 2.46 GeV). Furthermore, the results of ref.~\\cite{Mazziotta2012} were obtained using Pass7 {{\\em Fermi}-LAT} \\ data accumulated over only the first 3 years of {{\\em Fermi}-LAT} \\ operation. On the basis of, e.g., the results of ref.~\\cite{Ackermann2015} we expect that updated upper limits (Pass8, 15 years of data) should be at least a factor of 4 more stringent. 
This makes Fornax and Sculptor both very interesting targets for a future study, though we remind the reader that our predictions are predicated on a normalisation obtained from the {Sgr~dSph~} \\ detection that may be somewhat over-optimistic because it ignores the stellar age effects evidenced in \\autoref{fig:plotLgammaOvrMstarVstSimple}\\footnote{The stellar population of Sculptor, in particular, is significantly older \\cite{Bettinelli2019} than that of Sagittarius, giving the MSP population more time to have spun down, though Fornax, on the other hand, has experienced some significant and relatively recent star formation \\cite{Rusakov2021}, like Sagittarius, qualifying it as a particularly compelling target for {$\\gamma$-ray} \\ observation.}. After these two, the brightest expected dSphs are Sextans, Ursa Minor, Leo I, and Draco. These may also be interesting targets, though we expect them to be almost one order of magnitude dimmer than Fornax and Sculptor.\n\n\n\\subsection{Comparison with predicted DM annihilation fluxes}\n\n\nWinter et al.\\cite{Winter2016} estimate DM annihilation fluxes for nearby dSphs using a DM annihilation cross section derived by assuming that the Galactic Centre Excess (GCE) is a DM signal. We caution that this is likely only an upper limit, since of course our finding for the Sgr dSph suggests that some or all of the GCE is in fact due to MSPs (see also ref.~\\cite{Gautam2021}). Nonetheless, we proceed with our calculation using the Winter et al.~estimate precisely because it represents an upper limit on the DM signal. 
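The entries in the MSP column of \autoref{tab:dsph_fluxes} follow from a simple inverse-square scaling of the Sgr dSph normalisation; a sketch of the calculation, where the stellar mass and distance adopted for Fornax are illustrative round numbers rather than the values actually used in this work:

```python
import math

SGR_RATE = 2.5e31    # photons s^-1 Msun^-1 above 500 MeV (Sgr dSph normalisation)
KPC_TO_CM = 3.086e21  # cm per kpc

def predicted_msp_flux(mstar_msun, dist_kpc):
    """Predicted >500 MeV MSP photon flux (cm^-2 s^-1): the per-unit-mass
    photon rate times the stellar mass, diluted over 4*pi*d^2."""
    d_cm = dist_kpc * KPC_TO_CM
    return SGR_RATE * mstar_msun / (4.0 * math.pi * d_cm**2)

# Illustrative inputs for Fornax: M* ~ 2.5e7 Msun at d ~ 147 kpc.
flux_fornax = predicted_msp_flux(2.5e7, 147.0)
print(f"{flux_fornax:.2e} photons cm^-2 s^-1")
```

With these illustrative inputs the result lands near the tabulated Fornax value of $2.38\times 10^{-10}$ cm$^{-2}$ s$^{-1}$.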
Comparing the MSP and DM signals estimated in S.I.~\\autoref{tab:dsph_fluxes} leads us to the important finding that, in contrast to the results obtained by Winter et al., there are three dSphs for which the MSP-driven $> 500$ MeV photon number flux exceeds the predicted DM flux (viz., Fornax by a factor of $\\sim$12; Leo I by $\\sim$2.4; and Sculptor by $\\sim$2.1) and three more where it exceeds $\\sim 1\/2$ the DM flux (viz., Leo II with 0.89; Sextans with 0.60; and Carina with 0.50 of the DM flux).\nA clear implication of these, albeit preliminary, results is that these targets should be avoided in the quest to better constrain putative WIMP DM self-annihilation cross-sections. By contrast, there remain other dSphs where the expected DM signal is comfortably larger than the MSP signal; these are more promising targets.\n\n\n\n\n\n\n\\clearpage\n\n\\printbibliography[segment=\\therefsegment,title={Supplementary Information References}, check=onlynew]\n\n\n\\end{document}\n\n\\section{Introduction}\n\nObservations of high-redshift quasars probe the growth of supermassive\nblack holes (SMBH) and their connection to galaxy formation at the earliest\ncosmic epochs. \nThe discovery of strong submillimeter\/millimeter [(sub)mm] dust \ncontinuum in about 30\\% of the quasars known at z$\\sim$6 provides the \nfirst evidence of active star formation in young quasar host \ngalaxies at the end of the reionization era \\citep{bertoldi03a,bertoldi03b,petric03,priddey03,\nrobson04,wang07,wang08}. The star formation rates \nestimated from the FIR luminosities (a few $\\rm 10^{12}$ \nto $\\rm 10^{13}\\, L_{\\odot}$) are on the order of $\\rm 10^{2}$ \nto $\\rm 10^{3}\\,M_{\\odot}\\,yr^{-1}$, which are comparable to the typical \nvalues found in so-called submillimeter galaxies at $\\rm z=2\\sim3$ \n\\citep{scott02,greve05,kovacs06}. 
The spatially resolved [C {\\small II}] \nline emission from one of the most FIR luminous z$\\sim$6 quasars, SDSS \nJ114816.64+525150.3 (hereafter J1148+5251), further suggests a high star formation surface density \nof $\\rm \\sim1000\\,M_{\\odot}\\,yr^{-1}\\,kpc^{-2}$ over the central \n1.5 kpc region of the quasar host galaxy \\citep{maiolino05,walter09}. \n\nMolecular CO (6-5) line emission has been detected in ten of the \nFIR luminous z$\\sim$6 quasars (\\citealp{bertoldi03b,walter03,carilli07,\nwang10}, 2011, in prep.), \nindicating the existence of highly-excited molecular \ngas in the quasar hosts. The CO (3-2), (6-5), and (7-6) transitions \ndetected in the z=6.42 quasar J1148+5251 reveal a molecular gas \ncomponent on scales of $\\sim$5 kpc in the host galaxy with CO \nexcitation conditions similar to those found in local starburst \ngalaxies and CO-detected quasars at lower redshifts \\citep{bertoldi03b,walter04,riechers09}. \n\nEmission in the low-order CO transitions ($\\rm J\\leq2$) from the z$\\sim$6 quasar host \ngalaxies is poorly constrained due to the limited sensitivity and \nfrequency coverage of the previous instruments \\citep{wagg08,wang10}. \nThe new Ka band receivers on the Expanded Very Large Array (EVLA, \\citealp{perley11}) open an \nimportant frequency window for studies of the cold molecular gas in high-redshift \ngalaxies (e.g., \\citealp{ivison10,ivison11,riechers10}). In this paper, we report \nour EVLA observations of the CO (2-1) line emission in five z$\\sim$6 \nquasars \\citep{fan04,fan06,willott10a,willott10b}. \nThree of them are from the Sloan Digital Sky Survey (SDSS, \\citealp{fan04,fan06}), \nwith two objects, SDSS J084035.09+562419.9 and SDSS J092721.82+200123.7, \npreviously detected in strong ($\\rm >3\\,mJy$) 250 \nGHz dust continuum and molecular CO (6-5) and (5-4) line emission. 
\nAnother object, SDSS J162331.81+311200.5, was detected in the [C {\\small II}] \nline, but undetected in millimeter dust \ncontinuum and high-J CO transitions (Bertoldi et al. 2011, in prep.).\nThe other two objects are from the Canada-France High-z Quasar\nSurvey (CFHQS, \\citealp{willott10a,willott10b}) and do not have published CO observations yet. \nOne of them, CFHQS 142952.17+544717.6, was detected in 250 GHz dust continuum \n(Omont et al. 2011, in prep.). We describe the observations in Section 2, present \nthe results in Section 3, and discuss the CO excitation and host galaxy \nevolution properties of the detections in Section 4. A $\\rm \\Lambda$-CDM\ncosmology with $\\rm H_{0}=71km\\ s^{-1}\\ Mpc^{-1}$, $\\rm\n\\Omega_{M}=0.27$ and $\\rm \\Omega_{\\Lambda}=0.73$ is adopted throughout this\npaper \\citep{spergel07}.\n\n\\section{Observations}\n\nThe observations were carried out using the Ka-band receiver on the EVLA \nin 2010 in the D, DnC, and C configurations. \nThe WIDAR correlator in Open Shared Risk Observing mode provided a maximum \nbandwidth of 128 MHz and a resolution of 2 MHz in each of the two \nbasebands (A\/C and B\/D intermediate frequency [IF] bands). The A\/C IFs \ncould not be tuned below 32 GHz. The redshifts and observing frequencies \nof the CO (2-1) line of the five targets are estimated with previous detections of the CO (6-5), \n[C {\\small II}], or quasar UV lines (\\citealt{carilli07,wang10}; \nBertoldi et al 2011, in prep.). For the \nthree sources with redshifts of $\\rm z\\leq6.2$ [corresponding to redshifted CO (2-1) line \nfrequencies of $\\rm \\nu_{obs}\\geq32$ GHz], we use the two 128 MHz IF pairs overlapped \nby 30 MHz and cover a total bandwidth of 226 MHz (i.e., $\\sim$2000 $\\rm km\\,s^{-1}$ \nin velocity and $\\rm \\sim0.05$ in redshift at $\\rm z=6$)\\footnote{10 MHz overlap and a total \nbandwidth of 246 MHz for J1429+5447.}. 
\nFor the other two objects with $\\rm z>6.2$, we \ncentered the 128 MHz window of the B\/D IF pairs on the line frequency \nand observed the continuum at $\\geq$32 GHz with the other window. \nThe observing time is 15 to 20 hours for each of the five targets (see Table 1).\nFlux calibrations were performed using the standard VLA \ncalibrators, 3C286 and 3C48, and we used 5-minute \nscan loops between targets and phase calibrators to calibrate the phase. \nThe data were reduced with AIPS, and the \nspatial resolutions (FWHM) of the final images are typically 2$''$ for data taken in the \nD configuration and 0.7$''$ for the C configuration. \n\n\\section{Results}\n\nCO (2-1) line emission has been detected in two of the \nfive z$\\sim$6 quasars, J0927+2001 and J1429+5447, and marginally \ndetected in J0840+5624. We present all the observing parameters and \nmeasurements in Table 1. The detailed results are listed below.\n\n{\\bf J0927+2001} Toward this source strong dust continuum \nat 850 GHz, 250 GHz, and 85 GHz, and CO (6-5) and (5-4) line emission \nwere detected \\citep{carilli07,wang10}. We have \ndetected the CO (2-1) line; the emission distribution (averaged \nover a velocity range of 880 $\\rm km\\,s^{-1}$), along with a spectrum, is \nshown in Figure 2. The centroid of the line emission peak is consistent with the optical \nquasar position and the peaks of the high-J CO lines. \nThe line width (FWHM) and redshift fitted with a single Gaussian \nprofile are $\\rm 590\\pm130\\,km\\,s^{-1}$ and $\\rm 5.7716\\pm0.0012$,\nwhich are in good agreement with the measurements \nfrom the high-order CO transitions ($\\rm z=5.7722\\pm0.0006$ and \n$\\rm FWHM=600\\pm70\\,km\\,s^{-1}$, \\citealt{carilli07}). 
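The redshifts quoted from the Gaussian fits follow directly from the fitted line centre and the CO (2-1) rest frequency of 230.538 GHz; a small sketch, where the line centre is back-computed from the quoted redshift purely for illustration:

```python
CO21_REST_GHZ = 230.538  # CO (2-1) rest frequency
C_KMS = 2.998e5          # speed of light in km/s

def co21_redshift(nu_obs_ghz):
    """Redshift implied by an observed CO (2-1) line centre."""
    return CO21_REST_GHZ / nu_obs_ghz - 1.0

def fwhm_to_kms(dnu_ghz, nu_obs_ghz):
    """Convert a fitted frequency FWHM into a velocity width."""
    return C_KMS * dnu_ghz / nu_obs_ghz

# J0927+2001: z = 5.7716 places the CO (2-1) line near 34.0 GHz.
nu_obs = CO21_REST_GHZ / (1.0 + 5.7716)
z_fit = co21_redshift(nu_obs)
print(f"nu_obs = {nu_obs:.3f} GHz, z = {z_fit:.4f}")
```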
\nThe line emission appears marginally \nresolved by the $\\rm 2.19''\\times1.96''$ synthesized beam with a peak \nsurface brightness of $\\rm 147\\pm21\\, \\mu Jy\\,beam^{-1}$ and a total \nintensity of $\\rm 230\\pm45\\,\\mu Jy$, with a source size of \n$\\rm (2.7''\\pm0.4'')\\times(2.4''\\pm0.3'')$ determined from a fit with a two-dimensional \nGaussian distribution (the deconvolved source size \nis about $\\rm 1.7''\\times 1.4''$, or $\\rm 10\\,kpc\\times8\\,kpc$). \nThe corresponding line fluxes and luminosities (Table 1) are higher than the upper limits estimated from\nprevious GBT observations, but are still consistent given the large\nuncertainties and baseline feature contamination in the GBT data \\citep{wagg08,wang10}.\n\n{\\bf J1429+5447} Toward this object strong radio continuum \nemission was detected in the FIRST survey \\citep{becker95} \nand recent VLBI observations \\citep{frey11}, making it the\nstrongest radio source among the known z$\\sim$6 quasars and the \nmost distant radio-loud quasar. It has also been detected in \ndust continuum at 250 GHz with a flux density of $\\rm \\sim3$ mJy \n(Omont et al. 2011, in prep.). \nWe have detected both CO (2-1) line emission and \ncontinuum emission at the line frequency. The \ncontinuum source is unresolved by the $\\rm 0.71''\\times0.67''$ synthesized \nbeam and the flux density averaged over the line-free channels at 32 GHz \nis $\\rm 257\\pm15$ $\\mu$Jy. We subtract the continuum by performing \nlinear fitting to the visibility data, \nusing the UVLIN task in AIPS. The CO line emission is resolved into two \npeaks with a spatial separation of $\\sim$1.2$''$ (6.9 kpc at the\nquasar redshift), and the optical and radio quasar positions are \nconsistent with the west peak (Figure 3). 
\nA Gaussian fit to the spectra yields a redshift \nof $\\rm z=6.1831\\pm0.0007$ and a line width \nof $\\rm FWHM=280\\pm70\\,km\\,s^{-1}$ for the west source, \nand $\\rm z=6.1837\\pm0.0015$ and $\\rm FWHM=400\\pm140\\,km\\,s^{-1}$ for the east source. \nThe line fluxes estimated with the peak surface brightness on \nthe velocity-averaged map, averaged over a velocity range of \n$\\rm \\sim450\\,km\\,s^{-1}$, are $\\rm 0.065\\pm0.011\\,Jy\\,km\\,s^{-1}$ \nand $\\rm 0.050\\pm0.013\\,Jy\\,km\\,s^{-1}$\nfor the west and east components, respectively. However, \na two-dimensional Gaussian distribution fitted to the east component suggests \na possible extension with a source size of $\\rm (1.1''\\pm0.2'')\\times(0.7''\\pm0.2'')$, \nwhich should be checked with deeper observations at higher spatial resolution.\n\n{\\bf J0840+5624} This source was detected\nin (sub)mm dust continuum emission and CO\n(6-5) and (5-4) line emission; it has the broadest line width,\n$\\rm FWHM=860\\,km\\,s^{-1}$, among the CO-detected\nz$\\sim$6 quasars \\citep{wang07,wang10}. We observed the line\nat the redshift of $\\rm z=5.8441\\pm0.0013$ derived\nfrom the high-order CO detections and found no clear detection\nin a velocity-averaged map averaged over 1070 $\\rm km\\,s^{-1}$ made at the\nfull resolution of $\\rm 1.09''\\times0.76''$. At a lower resolution\nof $\\rm 2.19''\\times1.96''$, marginal signal ($\\rm 2.8\\sigma$)\nappears on the map (Figure 1), with a double-peaked\nmorphology along the east-west direction. The optical quasar position\nis 0.8$''$ away from the east peak. 
We plot the spectrum at the position\nof the east peak in the right panel of Figure 1, and there is only very\nmarginal signal (1 to 2$\\sigma$) over $\\rm \\sim-500$\nto $\\rm 500\\,km\\,s^{-1}$, i.e., the typical velocity range of the CO (6-5)\nand (5-4) line emission \\citep{wang10}.\nThe CO (2-1) line flux estimated with the surface\nbrightness of the east peak is $\\rm 0.062\\pm0.022\\,Jy\\,km\\,s^{-1}$ (Table 1).\nHowever, the signal is indeed marginal and deeper observations with a wider\nbandwidth are required to improve the measurement.\n\n{\\bf J0210$-$0456} This object is the highest\nredshift quasar known to date with $\\rm z=6.438\\pm0.004$ determined\nfrom the object's $\\rm Mg\\,{\\small II}\\,\\lambda2798\\AA$ line emission \\citep{willott10b}.\nWe searched for CO (2-1) line emission in the 128 MHz window centered at\nthe $\\rm Mg\\,{\\small II}$ redshift but did not detect it.\nHere we assume a line width of $\\rm 800\\,km\\,s^{-1}$, which is the\ntypical full width at zero intensity ($\\rm v_{FWZI}$) value found for\nsamples of high-z CO-detected quasars \\citep{coppin08,wang10}, \nto estimate the upper limit of the line intensity.\nThe $\\rm 1\\sigma$ rms noise level on the map averaged over this velocity range\nis $\\rm \\sigma_{rms} =16\\,\\mu Jy\\,beam^{-1}$, and the $\\rm 3\\sigma$ upper limit of the line\nflux is estimated as $\\rm 3\\sigma_{rms} v_{FWZI}=0.038\\,Jy\\,km\\,s^{-1}$.\nThe corresponding 3$\\sigma$ upper limit of\nthe line luminosity is $\\rm {L'}_{CO(2-1)}<1.28\\times10^{10}\\,K\\,km\\,s^{-1}\\,pc^2$ \n(see equation (3) in \\citealt{solomon05}). However, we cannot rule out that \nthe $\\rm Mg\\,{\\small II}$ line emission is significantly offset \nfrom the quasar host galaxy redshift and that the CO (2-1) line falls outside the 128 MHz window. 
\nThe continuum emission is \nalso undetected with the other window centered at 32.1 GHz, and the \nchannel-averaged map yields a 3$\\sigma$ upper limit of $\\rm <54\\,\\mu Jy$.\n\n{\\bf J1623+3112} This object is detected in $\\rm [C\\,{\\small II}]$ 158$\\mu$m \nfine structure line emission by Bertoldi et al. (2011, in prep.), \nbut undetected in 250 GHz dust continuum \\citep{wang07}. We searched for the CO (2-1) line in \nthe 128 MHz-bandwidth window centered at the $\\rm [C\\,{\\small II}]$ \nredshift of $\\rm z=6.2605\\pm0.0005$ and did not detect the line. \nThe rms on the map averaged over a velocity range of \n$\\rm 800\\,km\\,s^{-1}$ is $\\rm 26\\,\\mu Jy\\,beam^{-1}$. This yields \na 3$\\sigma$ upper limit of $\\rm <0.062\\,Jy\\,km\\,s^{-1}$ for the line flux \nand $\\rm <2.0\\times10^{10}\\,K\\,km\\,s^{-1}\\,pc^2$ for the line luminosity. \nThe 3$\\sigma$ upper limit of the continuum emission at 35 GHz \nmeasured with A\/C IFs is $\\rm <75\\,\\mu Jy$. \n\n\\section{Discussion}\n\nWe have observed molecular CO (2-1) line emission toward five quasars \nat z$\\sim$6 using the EVLA, and detections\/marginal detection have been \nobtained from the three objects that have strong FIR dust \ncontinuum emission. This is consistent with the picture of massive star \nformation fueled by huge amounts of molecular gas in these young \nquasar hosts. The detection of $\\rm [C\\,{\\small II}]$ in J1623+3112 is \nalso likely to be a sign of star formation, but the current sensitivity \nof our EVLA observations is not sufficient to detect molecular CO from the host galaxy. \nJ0927+2001 and J0840+5624 were \npreviously detected strongly in the CO (6-5) and (5-4) transitions. \nCO (2-1) line emission has been detected and marginally \nresolved in the host galaxy of J0927+2001 over a scale of $\\sim$10 kpc. 
\nThe molecular gas masses ($\\rm M_{gas}$) estimated from the CO (2-1) line\npeak surface brightness and the total intensity on the velocity-averaged\nmap are listed in Table 1, assuming a CO luminosity-to-gas mass conversion factor\nof $\\rm \\alpha=0.8 M_{\\odot}\\,(K\\,km\\,s^{-1}\\,pc^{2})^{-1}$ appropriate\nfor local ultraluminous infrared galaxies \\citep{solomon97,downes98}.\nThese estimates are 1.7 and 2.5 times higher than the \nvalue of $\\rm (1.8\\pm0.3)\\times10^{10}\\,M_{\\odot}$ estimated\nfrom the high-order CO transitions \\citep{carilli07,wang10}. \nWe plot the CO excitation ladder of this source in Figure 4,\ntogether with the results of Large Velocity Gradient (LVG) modeling\nof the highly-excited molecular gas components \n(gas densities of \norder $\\rm 10^{4}\\,cm^{-3}$, kinetic temperatures of 50 to 60 K, \npeaking at $\\rm J\\geq6$) \nfound in other high-z FIR and CO luminous\nquasars and nearby starburst galaxies \\citep{riechers06,riechers09,gusten06}.\nWe normalize the models to the high-order CO transitions. \nThe CO (2-1) line flux measured with the peak surface \nbrightness on the velocity-averaged map is consistent\/marginally consistent \nwith the values expected from these single-component models, while the total line flux \nintegrated over the line-emitting area falls above all the models. This may suggest \nthe existence of additional low excitation gas in the central $\\rm \\sim10\\,kpc$ region \nas was found in the submillimeter galaxy AzTEC-3\nat z=5.3 \\citep{riechers10} and the nearby starburst \ngalaxy M82 \\citep{weiss05}. However, there are still large \nuncertainties in the measurements of all three transitions, \nand observations of other CO transitions are necessary to \naddress whether there are multiple CO excitation components in the quasar host galaxy. \nOur observations show no evidence of excess CO (2-1) line emission or of an additional \nlow-excitation component in the host galaxy of J0840+5624. 
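The line-luminosity and gas-mass conversions used above can be sketched with the J0210$-$0456 numbers as input; the luminosity distance below is an assumed illustrative value for $z = 6.438$ in the adopted cosmology, not a quantity quoted in this work:

```python
def co_line_luminosity(sdv_jykms, nu_obs_ghz, dl_mpc, z):
    """L'_CO in K km/s pc^2, following eq. (3) of Solomon & Vanden Bout (2005):
    L' = 3.25e7 * S*dv * nu_obs^-2 * D_L^2 * (1+z)^-3."""
    return 3.25e7 * sdv_jykms * nu_obs_ghz**-2 * dl_mpc**2 * (1.0 + z)**-3

z = 6.438
nu_obs = 230.538 / (1.0 + z)  # redshifted CO (2-1) frequency, ~31 GHz
DL_MPC = 6.4e4                # assumed luminosity distance, ~64 Gpc

lprime = co_line_luminosity(0.038, nu_obs, DL_MPC, z)  # 3-sigma flux limit
mgas = 0.8 * lprime           # alpha = 0.8 Msun (K km/s pc^2)^-1
print(f"L' < {lprime:.2e} K km/s pc^2, M_gas < {mgas:.2e} Msun")
```

With this assumed distance the result lands close to the $\rm {L'}_{CO(2-1)}<1.28\times10^{10}\,K\,km\,s^{-1}\,pc^2$ limit quoted for J0210$-$0456.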
\n\nThe C array imaging of the CO (2-1) line emission from J1429+5447 has \nresolved the molecular gas into two distinct peaks, with a spatial separation \nof $\\sim$6.9 kpc; the quasar position is consistent with the west peak. \nThere is no clear velocity offset ($\\rm 26\\pm60\\,km\\,s^{-1}$) between \nthe two components. These results suggest a gas-rich, major merging \nsystem with two distinct components that are comparable in CO luminosity and \nmolecular gas mass. The west component of this system \nis in a radio-loud quasar phase. Similar quasar-starburst systems with \nmultiple CO emission peaks were previously found in \nthe CO luminous quasars BRI 1202$-$0725 at z=4.7 \\citep{omont96,carilli02}, \nBRI 1335$-$0417 at z=4.4 \\citep{riechers08} \nand J1148+5251 at z=6.42 \\citep{walter04}. These systems demonstrate \nthe early phase of quasar-galaxy formation in which both AGN and starburst \nactivities are triggered by major mergers and the molecular gas in the \nnuclear region is not fully coalesced \\citep{narayanan08}. We expect \nfuture high-resolution observations with the EVLA in the C or B \narray to constrain the gas surface density and dynamics, \nand observations with ALMA or the PdBI to resolve the dust continuum and distributed \nstar formation in these young quasar host galaxies.\n\n\\acknowledgments \nThis work is based on observations carried out with the Expanded Very Large \nArray (NRAO). The National\nRadio Astronomy Observatory (NRAO) is a facility of the National\nScience Foundation operated under cooperative agreement by Associated\nUniversities, Inc. We acknowledge support from the Max-Planck Society\nand the Alexander von Humboldt Foundation through the Max-Planck-Forschungspreis\n2005. Dominik A. 
Riechers acknowledges support from NASA through Hubble\nFellowship grant HST-HF-51235.01 awarded by the Space Telescope Science\nInstitute, which is operated by the Association of Universities for\nResearch in Astronomy, Inc., for NASA, under contract NAS 5-26555.\nM. A. Strauss acknowledges the support of NSF grant AST-0707266.\n{\\it Facilities:} \\facility{EVLA}\n\n\\section{Introduction}\n\\subsection{Summary of Results}\n \nThis paper proposes an approach to the Crepant Resolution Conjecture for open Gromov-Witten invariants, and supports it with a series of results and verifications about threefold $A_n$-singularities and their resolutions.\\\\\n\nLet $\\mathcal{Z}$ be a smooth toric Calabi--Yau Deligne--Mumford stack with\ngenerically trivial stabilizers and let $L$ be an Aganagic-Vafa brane\n(Sec. \\ref{sec:ogw}). Fix a Calabi--Yau torus action $T$ on $\\mathcal{Z}$ and\ndenote by $\\Delta_\\mathcal{Z}$ the free module over $H^\\bullet(BT)$ spanned by\nthe $T$-equivariant lifts of orbifold cohomology classes\nof Chen--Ruan degree at most two. We define\n(Sec. \\ref{ssec:dcrc}) a family of elements of Givental space,\n\\beq\n\\widehat{\\mathcal{F}}_{L,\\mathcal{Z}}^{\\rm disk}: H_T^\\bullet(\\mathcal{Z}) \\to \\mathcal{H}_\\mathcal{Z} = H_T^\\bullet(\\mathcal{Z})((z^{-1})),\n\\eeq\nwhich we call the \\textit{winding neutral disk potential}. Upon appropriate\nspecializations of the variable $z$, $\\widehat{\\mathcal{F}}^{\\rm disk}_{L,\\mathcal{Z}}$ encodes disk\ninvariants of $(\\mathcal{Z},L)$ at any winding $d$. 
\\\\\n\nConsider a \\textit{crepant\n resolution diagram} $\\mathcal{X} \\to X \\leftarrow Y$, where $X$ is the coarse moduli\nspace of $\\mathcal{X}$ and $Y$ is a crepant resolution of the singularities of $X$. A\nLagrangian boundary condition $L$ is chosen on $\\mathcal{X}$ and we denote by $L'$\nits transform in $Y$. \nOur\nversion of the open crepant resolution conjecture is a comparison of the (restricted) winding neutral disk potentials.\n\n\\begin{proposal}[The OCRC]\nThere exists a $\\mathbb{C}((z^{-1}))$-linear map of Givental spaces\n$\\mathbb{O}: \\mathcal{H}_\\mathcal{X} \\to \\mathcal{H}_Y$ and analytic functions $\\mathfrak{h}_\\mathcal{X}: \\Delta_\\mathcal{X}\n\\to \\mathbb{C}$, $\\mathfrak{h}_Y: \\Delta_Y \\to \\mathbb{C}$ such that\n\\beq\n\\mathfrak{h}_Y^{1\/z}{\\widehat \\mathcal{F}_{L,Y}^{\\rm disk}}\\big|_{\\Delta_Y}= \\mathfrak{h}_\\mathcal{X}^{1\/z} \\mathbb{O}\\circ \\widehat \\mathcal{F}_{L,\\mathcal{X}}^{\\rm disk}\\big|_{\\Delta_\\mathcal{X}}\n\\eeq\nupon analytic continuation of quantum cohomology parameters.\n\\end{proposal}\n\nFurther, we conjecture (Conjecture \\ref{conj:iri}) that both $\\mathbb{O}$ and\n$\\mathfrak{h}_\\bullet$ are completely determined\nby the classical toric geometry of $\\mathcal{X}$ and $Y$. In particular, we give a \nprediction for the transformation $\\mathbb{O}$ depending on a choice of identification of the\n$K$-theory lattices of $\\mathcal{X}$ and $Y$. \\\\\n\nWhen $\\mathcal{X}$ is a Hard Lefschetz\nCalabi--Yau orbifold, the OCRC extends to functions on all of $H_T^\\bullet(\\mathcal{Z})$.\nTogether with WDVV, this gives a Bryan--Graber-type statement for potentials encoding invariants from genus $0$ maps with an arbitrary number of boundary components:\n\n\\begin{prop1}\nLet $\\mathcal{X} \\rightarrow X \\leftarrow Y$ be a Hard Lefschetz diagram for which the OCRC holds. 
Defining $\\mathbb{O}^{\\otimes n}= \\mathbb{O}(z_1)\\otimes\\ldots\\otimes \\mathbb{O}(z_n)$, we have:\n\\beq\n{\\widehat \\mathcal{F}_{L',Y}^{n}}= \\mathbb{O}^{\\otimes n}\\circ \\widehat \\mathcal{F}_{L,\\mathcal{X}}^{n},\n\\eeq\nwhere $\\widehat \\mathcal{F}^{n}$ is the $n$-boundary components analog of $\\widehat\n\\mathcal{F}^{\\rm disk}$ defined in \\eqref{mholepot}. \\\\\n\\end{prop1}\n\n\nConsider now the family of threefold $A_n$ singularities, where $\\mathcal{X}=[\\mathbb{C}^2\/\\mathbb{Z}_{n+1}]\\times \\mathbb{C}$ and $Y$ is its canonical minimal\nresolution. \n\\begin{mt1}\nThe OCRC and Conjecture \\ref{conj:iri} hold for the $A_n$-singularities for any choice of Aganagic-Vafa brane on $\\mathcal{X}$.\n\\end{mt1}\nThe main theorem is an immediate consequence of Proposition \\ref{prop:wncrc}\nand Theorem \\ref{thm:sympl}. From it we deduce a series of comparisons of\ngenerating functions in the spirit of Bryan-Graber's formulation of the CRC. \\\\\n\nIn \\eqref{cohdp} we define the \\textit{cohomological disk potential}\n$\\mathcal{F}_{L}^{\\rm disk}$ - a cohomology valued generating function for disk\ninvariants that ``remembers\" the twisting and the attaching fixed point of an\norbi-disk map. We also consider the coarser \\textit{scalar disk potential}\n(see \\eqref{sdp}), which keeps track of the winding of the orbimaps but\nforgets the twisting and attaching point.\nThere are essentially two different choices for the Lagrangian boundary condition on $\\mathcal{X}$; the simpler case occurs when $L$ intersects one of the effective legs of the orbifold. In this case we have the following result.\n\\begin{cocrceff}\nIdentifying identically the winding parameters and setting $\\mathbb{O}_\\mathbb{Z}(\\mathbf{1_k})=P_{n+1}$ for every $k$, we have:\n\\beq\n\\mathcal{F}_{L',Y}^{\\rm disk}(t,y,\\vec{w}) = \\mathbb{O}_\\mathbb{Z} \\circ \\mathcal{F}_{L,\\mathcal{X}}^{\\rm\n disk}(t,y,\\vec{w}). 
\n \\eeq\n\\end{cocrceff}\nIt is immediate to observe that the scalar disk potentials coincide (Corollary \\ref{cor:esc}).\\\\\n\nThe case when $L$ intersects the ineffective leg of the orbifold is more subtle.\n\\begin{cocrcgerby}\nWe exhibit a matrix $\\mathbb{O}_\\mathbb{Z}$ of roots of unity and a specialization of the winding parameters depending on the equivariant weights such that\n\\beq\n\\mathcal{F}_{L',Y}^{\\rm disk}(t,y,\\vec{w}) = \\mathbb{O}_\\mathbb{Z} \\circ \\mathcal{F}_{L,\\mathcal{X}}^{\\rm disk}(t,y,\\vec{w}) .\n\\eeq\n\\end{cocrcgerby}\nThe comparison of scalar potentials in this case does not hold\nanymore. Because of the special form of the matrix $\\mathbb{O}_\\mathbb{Z}$ we deduce in\nCorollary \\ref{cor:sc} that the scalar disk potential for $Y$ corresponds to\nthe contribution to the potential for $\\mathcal{X}$ by the untwisted disk maps. \nAs the $A_n$-singularities satisfy the Hard Lefschetz condition, it is\nan exercise in book-keeping to extend the statements of Theorems \\ref{thm:dcrccoh}\nand \\ref{thm:dcrccoheff} to compare generating functions for arbitrary genus\nzero open invariants, even treating all boundary Lagrangian conditions at the\nsame time. \\\\\n\nIn order to prove our main theorem, we must establish a fully equivariant\nversion of the symplectomorphism of Givental spaces which verifies the closed\nCRC for the $A_n$ geometries. 
Our analysis is centered on a new global\ndescription of the gravitational quantum cohomology of these targets which enjoys a number of remarkable features, and\nmay be of independent interest.\n\n\n\\begin{thmmir}\nBy identifying the $A$-model moduli space with a genus zero double Hurwitz space, we construct a global quantum $D$-module $(\\mathcal{F}_{\\lambda,\\phi}, T\\mathcal{F}_{\\lambda,\\phi}, \\nabla^{(g,z)},H(,)_{g})$ which is locally isomorphic to $\\mathrm{QDM}(\\mathcal{X})$ and $\\mathrm{QDM}(Y)$ in appropriate neighborhoods of the orbifold and large complex structure points.\n\\end{thmmir} \n \n\n\\subsection{Context, Motivation and Further Discussion}\n\nOpen Gromov--Witten (GW) theory \nstudies holomorphic maps from bordered Riemann surfaces, where the image of the boundary is constrained\n to lie in a Lagrangian submanifold of the target. While some general foundational\n work has been done \\cite{Solomon:2006dx, MR2425184}, at this point most\n of the results in the theory rely on additional structure. In \\cite{hht1, hht2} Lagrangian Floer theory is employed to study the case when the boundary condition is a fiber of the moment map.\nIn the toric context, a mathematical approach\n\\cite{Katz:2001vm, Diaconescu:2003qa, MR2861610,r:lgoa} to operatively construct\na virtual counting theory of open maps is via localization. 
\nA variety of striking relations have been verified connecting open GW theory and several other types of invariants,\nincluding open $B$-model invariants and matrix models \\cite{Aganagic:2000gs,\n Aganagic:2001nx, Lerche:2001cw, Bouchard:2007ys, fang2012open}, quantum knot invariants\n\\cite{Gopakumar:1998ki, Marino:2001re}, and ordinary\nGromov--Witten and Donaldson--Thomas theory via ``gluing along the boundary'' \\cite{Aganagic:2003db,\n Li:2004uf, moop}.\\\\\n\nSince Ruan's influential conjecture \\cite{MR2234886}, an intensely studied\nproblem in Gromov--Witten theory has been to determine the relation between GW invariants of target spaces\nrelated by a crepant birational transformation; this is the content of the Crepant Resolution Conjecture (CRC). The most general\nformulation of the CRC is framed in terms of Givental's formalism\n(\\cite{MR2529944}, \\cite[Conj 4.1]{coates2007quantum}); the conjecture has been proved in\na number of examples \\cite{MR2510741, MR2529944, MR2486673} and has by now gained folklore status, with\na general proof in the toric setting announced for some time \\cite{ccit2}. A natural question one can ask is whether\nsimilar relations exist in the context of open Gromov--Witten theory. Within\nthe toric realm, physics arguments based on open mirror symmetry\n\\cite{Bouchard:2007ys, Bouchard:2008gu, Brini:2008rh} have given strong indications that\nsome version of the Bryan--Graber \\cite{MR2483931} statement of the crepant\nresolution conjecture should hold at the level of disk invariants. This was\nproven explicitly for the crepant resolution of the Calabi--Yau orbifold $[\\mathbb{C}^3\/\\mathbb{Z}_2]$\nin \\cite{cavalieri2011open}. \nAround the same time, it was suggested \n\\cite{Brini:2011ij, talk-banff} that a general statement of a Crepant Resolution\nConjecture for open invariants should have a natural formulation within\nGivental's formalism, as in \\cite{MR2510741, coates2007quantum}. 
Some implications of this\nphilosophy were verified in\n\\cite{Brini:2011ij} for the crepant resolution $\\mathcal{O}_{\\mathbb{P}^2}(-3)$ of the orbifold\n$[\\mathbb{C}^3\/\\mathbb{Z}_3]$. \\\\\n\nThe OCRC we propose here is a natural\nextension to open Gromov--Witten theory of the Coates--Corti--Iritani--Tseng\napproach \\cite{MR2529944} to Ruan's conjecture.\nThe observation that the disk function of \\cite{MR2861610,r:lgoa} can be interpreted as an endomorphism of Givental space makes the OCRC statement follow almost tautologically from the Coates--Corti--Iritani--Tseng\/Ruan picture of the ordinary \nCRC via toric mirror symmetry \\cite{MR2510741}. \nThe more striking aspect of our conjecture is then that the linear function $\\mathbb{O}$ comparing the winding neutral disk potentials is considerably simpler than the symplectomorphism $\\mathbb{U}_{\\rho}^{\\X,Y}$ in the closed CRC, and is characterized in terms of {\\it purely classical data}: essentially, the equivariant Chern characters of $\\mathcal{X}$ and $Y$. This is closely related to Iritani's proposal \\cite{MR2553377} that the analytic continuation for the flat sections of the global quantum $D$-module is realized via the composition of $K$-theoretic central charges; our disk endomorphisms are very close to just being inverses to the $\\Gamma$ factors appearing in Iritani's central charges and therefore ``undo\" most of the transcendentality of $\\mathbb{U}_{\\rho}^{\\X,Y}$. \\\\\n\nIritani's proposal is inspired by, and consistent with, the idea of global mirror\nsymmetry, i.e. that there should be a global quantum $D$-module on the\n$A$-model moduli space which locally agrees with the Frobenius structure given\nby quantum cohomology. In order to verify Iritani's proposal in the fully\nequivariant setting, we construct explicitly such a global structure. 
Motivated by the connection of the Gromov--Witten theory of $A_n$ to certain integrable systems \\cite{agps}, we realize the Dubrovin\nlocal system as a system of one-dimensional hypergeometric periods. As a\nspecial feature of this case, structure constants of quantum cohomology\nare rational in exponentiated flat coordinates (or, equivalently, the inverse\nmirror map is a rational function of the $B$-model variables). Moreover, the $n$-dimensional\noscillating integrals describing the periods of the system reduce to Euler--Pochhammer line integrals in the\ncomplex plane. As a consequence, the computation of the analytic\ncontinuation of flat sections is drastically\nsimplified with respect to the standard toric mirror symmetry methods. Furthermore, in this context integral structures in\nequivariant cohomology emerge naturally from the interpretation of flat\nsections of the Dubrovin connection as twisted period maps. The\nDeligne--Mostow monodromy of hypergeometric periods translates then to an\naction of the colored braid group \nin equivariant\n$K$-theory. \nAn enticing speculation is that, upon mirror symmetry, this may correspond to\nautoequivalences of $D_T^b(Y)$ and surject to the\nSeidel--Thomas braid group action \\cite{MR1831820} in the\nnon-equivariant limit.\n\n\\begin{comment}\n Indeed, in\n\\cite{MR1831820} the authors famously constructed a faithful\nrepresentation of the braid group $B_{n+1}$ in terms of derived equivalences of $D^b(Y)$\ninduced by spherical twists. It is natural to speculate that our monodromy\ndescription of the global quantum $D$-module in Section~\\ref{sec:monodromy}\nsurjects to the Seidel--Thomas braid group, and recovers it in the non-equivariant limit.\n\\end{comment}\n\n\n\\subsection*{Acknowledgements} We are particularly grateful to Tom Coates for\nhis collaboration at the initial stages of this project, and the many\nenlightening conversations that followed. 
We would also like to thank Hiroshi\nIritani, Yunfeng Jiang, \\'Etienne Mann, Stefano Romano and Ed Segal for useful\ndiscussions and\/or\ncorrespondence. This project originated from discussions at the Banff Workshop on ``New recursion\nformulae and integrability for Calabi--Yau manifolds'', October 2011; we are\ngrateful to the organizers for the kind invitation and the great scientific\natmosphere at BIRS. A.~B.~has been supported by a Marie Curie Intra-European Fellowship\nunder Project n$^\\circ$ 274345 (GROWINT). R.~C.~ has been supported by NSF grant DMS-1101549. Partial support from the GNFM-INdAM under the\nProject ``Geometria e fisica dei sistemi integrabili'' is also acknowledged. \\\\\n\n\n\\section{Background}\n\nThis section gathers background for the formulation of\nthe open string Crepant Resolution Conjecture of\nSection~\\ref{sec:ocrc} and its proof in Section~\\ref{sec:j}. We give a self-contained account of the\nquantum $D$-module\/Givental space approach to the study of the closed string\nCrepant Resolution Conjecture in genus zero along the lines of\nCoates--Corti--Iritani--Tseng \\cite{MR2510741} and Iritani\n\\cite{MR2553377} (Section~\\ref{sec:qdm}). Section~\\ref{sec:ogw} provides an overview of open Gromov--Witten theory for\ntoric Calabi--Yau threefolds \\`a la Katz--Liu as well as its extension to toric\norbifolds. Section~\\ref{sec:an} collects relevant material on\nthe classical and quantum geometry of $A_n$-resolutions. \\\\\n\nThe content of Section~\\ref{sec:qdm} is surveyed in Iritani's\nexcellent review article \\cite{MR2683208}, to which the reader is referred for\nfurther details. For a more comprehensive introduction to the open\nGromov--Witten theory for toric orbifolds, see e.g. 
\\cite{MR2861610, r:lgoa}.\n\n\n\\subsection[Quantum $D$-modules and the CRC]{Quantum $D$-modules and the Crepant Resolution Conjecture}\n\\label{sec:qdm}\n\n\n\n\nLet $\\mathcal{Z}$ be a smooth Deligne--Mumford stack with coarse moduli\nspace $Z$ and suppose that $\\mathcal{Z}$ carries an algebraic $T\\simeq\\mathbb{C}^*$ action with\nzero-dimensional \nfixed loci. Write $I\\mathcal{Z}$ for the inertia stack of $\\mathcal{Z}$, \n$\\mathrm{inv}:I\\mathcal{Z}\\to I\\mathcal{Z}$ for its canonical involution and\n$i:I\\mathcal{Z}^T\\hookrightarrow I\\mathcal{Z}$ for\nthe inclusion of the $T$-fixed loci into $I\\mathcal{Z}$. \nThe equivariant Chen--Ruan cohomology ring $H(\\mathcal{Z}) \\triangleq H^{\\bullet}_{T,CR}(\\mathcal{Z})$ of $\\mathcal{Z}$ is a finite rank free module over\nthe $T$-equivariant cohomology of a point $H_T(\\mathrm{pt})\\simeq\n\\mathbb{C}[\\nu]$, where $\\nu=c_1(\\mathcal{O}_{BT}(1))$; we define\n$N_\\mathcal{Z} \\triangleq\\operatorname{rank}_{\\mathbb{C}[\\nu]} H(\\mathcal{Z})$. We furthermore suppose\nthat odd cohomology groups vanish in all degrees. \\\\\n\n\nThe $T$-action on $\\mathcal{Z}$ gives a non-degenerate inner product on\n$H(\\mathcal{Z})$ via the equivariant orbifold Poincar\\'e pairing\n\\beq\n\\eta(\\theta_1,\\theta_2)_{\\mathcal{Z}} \\triangleq \\int_{I\\mathcal{Z}^T}\\frac{i^*(\\theta_1 \\cup \\mathrm{inv}^*\n \\theta_2)}{e(N_{I\\mathcal{Z}^T\/I\\mathcal{Z}})},\n\\label{eq:pair}\n\\eeq\nand it induces a torus action on the moduli\nspace $\\overline{\\mathcal M_{g,n}}(\\mathcal{Z}, \\beta)$ of degree $\\beta$ twisted stable maps\n\\cite{MR2450211, MR1950941} from genus $g$ orbicurves to $\\mathcal{Z}$. 
For classes $\\theta_1, \\dots, \\theta_n\\in H(\\mathcal{Z})$ and\nintegers $r_1, \\dots, r_n \\in \\mathbb{N}$, the Gromov--Witten\ninvariants of $\\mathcal{Z}$\n\\bea\n\\left\\langle \\sigma_{r_1}(\\theta_1) \\dots \\sigma_{r_n}(\\theta_n) \\right\\rangle_{g,n,\\beta}^\\mathcal{Z}\n& \\triangleq & \\int_{[\\overline{\\mathcal M_{g,n}}(\\mathcal{Z},\n \\beta)]_T^{\\rm vir}} \\prod_{i=1}^n \\mathrm{\\operatorname{ev}}^*_i \\theta_i\n\\psi_i^{r_i}, \\label{eq:gwdesc} \\\\\n\\left\\langle \\theta_1 \\dots \\theta_n \\right\\rangle_{g,n,\\beta}^\\mathcal{Z} & \\triangleq & \\left\\langle \\sigma_{0}(\\theta_1) \\dots\n\\sigma_{0}(\\theta_n) \\right\\rangle_{g,n,\\beta}^\\mathcal{Z}, \n\\label{eq:gwprim}\n\\end{eqnarray}\ndefine a sequence of multi-linear functions on $H(\\mathcal{Z})$ with values in the\nfield of fractions $\\mathbb{C}(\\nu)$ of $H_T({\\rm pt})$. The correlators \\eqref{eq:gwprim}\n(respectively, \\eqref{eq:gwdesc} with $r_i>0$) are the {\\it\n primary} (respectively, {\\it descendent}) Gromov--Witten invariants of\n$\\mathcal{Z}$. \\\\\n\nFix a basis\n$\\{\\phi_i\\}_{i=0}^{N_\\mathcal{Z}-1}$ of $H(\\mathcal{Z})$ such that $\\phi_0=\\mathbf{1}_\\mathcal{Z}$\nand $\\phi_j$, $1\\leq j \\leq b_2(Z)$ are untwisted Poincar\\'e duals of $T$-equivariant divisors\nin $Z$. Denote by $\\{\\phi^i\\}_{i=0}^{N_\\mathcal{Z}-1}$ the dual basis with respect to the pairing \\eqref{eq:pair}. Let $\\tau=\\sum\\tau_i\\phi_i$ denote a general point of $H(\\mathcal{Z})$. 
The WDVV equation for primary Gromov--Witten invariants \\eqref{eq:gwprim} defines a family of associative\ndeformations $\\circ_\\tau$ of the $T$-equivariant Chen--Ruan cohomology ring of $\\mathcal{Z}$ via\n\\beq\n\\eta\\l(\\theta_1 \\circ_\\tau \\theta_2, \\theta_3\\r)_{\\mathcal{Z}} \\triangleq \\left\\langle\\bra \\theta_1, \\theta_2, \\theta_3 \\right\\rangle\\ket_{0,3}^\\mathcal{Z}(\\tau)\n\\eeq\nwhere\n\\beq\n\\left\\langle\\bra \\theta_1, \\dots, \\theta_k \\right\\rangle\\ket_{0,k}^\\mathcal{Z}(\\tau) \\triangleq \\sum_{\\beta}\\sum_{n\\geq 0} \\frac{\\big\\langle \\theta_1,\\dots,\\theta_k,\n \\overbrace{\\tau,\\tau,\\ldots,\\tau}^{\\text{$n$\n times}} \\big\\rangle_{0,n+k,\\beta}^\\mathcal{Z}}{n!} \\in \\mathbb{C}((\\nu)) ,\n\\eeq\nand the index $\\beta$ ranges over the cone of effective curve classes\n$\\mathrm{Eff}(\\mathcal{Z}) \\subset H_2(Z, \\mathbb{Q})$; we denote by $l_\\mathcal{Z} \\triangleq b_2(Z)$\nits dimension. \\\\\n\n\nBy the Divisor Axiom \\cite{MR2450211} this can be rewritten as\n\\beq\n\\eta\\l(\\theta_1 \\circ_\\tau \\theta_2, \\theta_3\\r)_{\\mathcal{Z}}= \\sum_{\\beta\\in \\mathrm{Eff}(\\mathcal{Z}), n\\geq 0} \\frac{\\big\\langle \\theta_1,\\theta_2,\\theta_3,\n \\overbrace{\\tau',\\tau',\\ldots,\\tau'}^{\\text{$n$\n times}} \\big\\rangle_{0,n+3,\\beta}^\\mathcal{Z}}{n!}\\mathrm{e}^{\\tau_{0,2} \\cdot \\beta}\n\\label{eq:qprod2}\n\\eeq\nwhere we have decomposed $\\tau=\\sum_{i=0}^{N_\\mathcal{Z}-1} \\tau_i \\phi_i = \\tau_{0,2}+\\tau'$ as\n\\bea\n\\tau_{0,2} &=& \\sum_{i=1}^{l_\\mathcal{Z}} \\tau_{i} \\phi_{i}, \\\\\n\\tau' &=& \\tau_0 \\mathbf{1}_\\mathcal{Z} + \\sum_{i=l_\\mathcal{Z}+1}^{N_\\mathcal{Z}-1} \\tau_i \\phi_i.\n\\label{eq:Tprime}\n\\end{eqnarray}\n\nThe quantum product \\eqref{eq:qprod2} is a formal Taylor series in $(\\tau',\n\\mathrm{e}^{\\tau_{0,2}})$. 
Suppose that it is actually {\\it convergent} in a contractible\nopen set $U \\ni (0,0)$; this is the case for many toric orbifolds\n\\cite{MR1653024, Coates:2012vs} and,\nas we see explicitly, for\nall the examples of Section~\\ref{sec:an}. Then the quantum product $\\circ_\\tau$ is an\nanalytic deformation of the Chen--Ruan cup product $\\cup_{\\rm CR}$, to which\nit reduces in the limit $\\tau' \\to 0$, $\\mathfrak{Re}(\\tau_{0,2}) \\to -\\infty$. Thus, the holomorphic\nfamily of rings $H(\\mathcal{Z}) \\times U \\to U$, together with the inner pairing \\eqref{eq:pair} and the\nassociative product \\eqref{eq:qprod2}, gives $U$ the structure of a\n(non-conformal) Frobenius manifold $QH(\\mathcal{Z})\\triangleq(U, \\eta, \\circ_\\tau)$\n\\cite{Dubrovin:1994hc}; this is the {\\it quantum cohomology ring} of $\\mathcal{Z}$. We refer to the Chen--Ruan limit $\\tau' \\to 0$, $\\mathfrak{Re}(\\tau_{0,2})\n\\to -\\infty$ as the {\\it large radius limit point} of $\\mathcal{Z}$. \\\\\n\n\nAssigning a Frobenius structure on $U$ is tantamount to endowing the trivial\ncohomology bundle $TU \\simeq H(\\mathcal{Z}) \\times U \\to U$ with a flat\npencil of affine connections \\cite[Lecture\n 6]{Dubrovin:1994hc}. Denote by $\\nabla^{(\\eta)}$ the Levi--Civita connection\nassociated to the Poincar\\'e pairing on $H(\\mathcal{Z})$; in Cartesian coordinates\nfor $U\\subset H(\\mathcal{Z})$ this reduces to the ordinary de Rham differential\n$\\nabla^{(\\eta)}=d$. Consider then the one parameter family of covariant\nderivatives on $TU$\n\\beq\n\\nabla^{(\\eta,z)}_X \\triangleq \\nabla^{(\\eta)}_X+z^{-1} X \\circ_\\tau.\n\\label{eq:defconn1}\n\\eeq\nThe fact that the quantum product is commutative, associative and integrable implies that\n$R_{\\nabla^{(\\eta,z)}}=T_{\\nabla^{(\\eta,z)}}=0$ identically in $z$; this is equivalent to the WDVV\nequations for the genus zero Gromov--Witten potential. 
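Concretely (a standard computation), writing $C_i$ for the operator of quantum multiplication by $\\phi_i$, the curvature of \\eqref{eq:defconn1} decomposes by powers of $z^{-1}$ as\n\\beq\n\\l[\\nabla^{(\\eta,z)}_{{\\partial}_i},\\nabla^{(\\eta,z)}_{{\\partial}_j}\\r] = \\frac{1}{z}\\l({\\partial}_i C_j - {\\partial}_j C_i\\r)+\\frac{1}{z^2}\\l[C_i,C_j\\r],\n\\eeq\nso that flatness identically in $z$ amounts to the integrability condition ${\\partial}_i C_j={\\partial}_j C_i$ together with the commutativity of the operators $C_i$. 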
The equation for the horizontal\nsections of $\\nabla^{(\\eta,z)}$,\n\\beq\n\\nabla^{(\\eta,z)} \\omega =0,\n\\label{eq:QDE}\n\\eeq\nis a rank-$N_\\mathcal{Z}$ holonomic system of\ncoupled linear PDEs. We denote by $\\mathcal{S}_\\mathcal{Z}$ the vector space of solutions\nof \\eqref{eq:QDE}: a $\\mathbb{C}((z))$-basis of $\\mathcal{S}_\\mathcal{Z}$ is by definition given by the gradient of\na flat frame $\\tilde \\tau (\\tau,z)$ for the deformed connection\n$\\nabla^{(\\eta,z)}$. The Poincar\\'e\npairing induces a non-degenerate inner product $H(s_1,s_2)_{\\mathcal{Z}}$ on $\\mathcal{S}_\\mathcal{Z}$ via\n\\beq\nH(s_1, s_2)_\\mathcal{Z} \\triangleq \\eta(s_1(\\tau, -z),s_2(\\tau,z))_\\mathcal{Z}.\n\\label{eq:pairDmod}\n\\eeq\nThe triple $\\mathrm{QDM}(\\mathcal{Z})\\triangleq(U,\\nabla^{(\\eta,z)}, H(,)_\\mathcal{Z})$ defines a {\\it\n quantum D-module} structure on $U$, and the system \\eqref{eq:QDE} is the {\\it quantum differential\n equation} (in short, QDE) of $\\mathcal{Z}$. \n\\begin{rmk}\n\\label{rmk:fuchsLR}\nNotice that the assumption that the quantum product\n \\eqref{eq:qprod2} is analytic in $(\\tau',\\mathrm{e}^{\\tau_{0,2}})$ around the large radius\n limit point translates into the statement that the QDE \\eqref{eq:QDE} has a\n Fuchsian singularity along $\\cup_{i=1}^{l_\\mathcal{Z}} \\{q_i\\triangleq\\mathrm{e}^{\\tau_i}=0\\}$. \\\\\n\\end{rmk}\nIn the same way in which the genus zero primary theory of $\\mathcal{Z}$ defines a quantum\n$D$-module structure on $H(\\mathcal{Z}) \\times U$, the genus zero gravitational\ninvariants \\eqref{eq:gwdesc} furnish a basis of horizontal sections\nof $\\nabla^{(\\eta,z)}$ \\cite{MR1408320}. 
For every $\\theta\\in\nH(\\mathcal{Z})$, a flat section of the $D$-module is given by an\n$\\mathrm{End}(H(\\mathcal{Z}))$-valued function $S_\\mathcal{Z}(\\tau,z):H(\\mathcal{Z})\\to \\mathcal{S}_\\mathcal{Z}$ defined as\n\\beq\nS_\\mathcal{Z}(\\tau,z)\\theta \\triangleq \\theta-\\sum_{k=1}^{N_\\mathcal{Z}}\\phi^k\\left\\langle\\bra\\phi_k,\\frac{\\theta}{z+\\psi}\\right\\rangle\\ket_{0,2}^\\mathcal{Z}(\\tau)\n\\label{eq:fundsol}\n\\eeq\nwhere $\\psi$ is a cotangent line class and we expand the denominator as a\ngeometric series\n$\\frac{1}{z+\\psi}=\\frac{1}{z}\\sum\\left(-\\frac{\\psi}{z}\\right)^k$. We call the\npair $(\\mathrm{QDM}(\\mathcal{Z}), S_\\mathcal{Z})$ a {\\it calibration} of the Frobenius structure\n$(H(\\mathcal{Z}), \\circ_\\tau, \\eta)$. \\\\\n\nThe flows of coordinate vectors for the flat frame of $TH(\\mathcal{Z})$ induced by\n$S_\\mathcal{Z}(\\tau,z)$ give a basis of deformed flat coordinates\n of\n$\\nabla^{(\\eta,z)}$, which is defined uniquely up to an additive $z$-dependent\n constant. A canonical basis is obtained upon applying the String Axiom:\ndefine the {\\it $J$-function} $J^\\mathcal{Z}(\\tau,z):U \\times \\mathbb{C} \\to H(\\mathcal{Z})$ by\n\\beq\nJ^\\mathcal{Z}(\\tau,z) \\triangleq zS_\\mathcal{Z}(\\tau,-z)^*\\mathbf{1}_\\mathcal{Z}\n\\label{eq:Jfun1}\n\\eeq\nwhere $S_\\mathcal{Z}(\\tau,z)^*$ denotes the adjoint to $S_\\mathcal{Z}(\\tau,z)$ under $H(-,-)_\\mathcal{Z}$. Explicitly, \n\\beq\n\\label{eq:resj}\nJ^\\mathcal{Z}(\\tau,z) = (z+\\tau_0)\\mathbf{1}_\\mathcal{Z}+\\tau_1\\phi_1+\\dots+\\tau_{N_\\mathcal{Z}-1} \\phi_{N_\\mathcal{Z}-1}+\\sum_{k=1}^{N_\\mathcal{Z}} \\phi^k\\left\\langle\\bra\n\\frac{\\phi_k}{z-\\psi_{n+1}}\\right\\rangle\\ket_{0,1}^\\mathcal{Z}(\\tau).\n\\eeq\nComponents of $J^\\mathcal{Z}(\\tau,z)$ in the $\\phi$-basis give flat coordinates of\n\\eqref{eq:defconn1}; this is a consequence of \\eqref{eq:Jfun1} combined with\nthe String Equation. 
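Expanding the geometric series in \\eqref{eq:fundsol} gives the large-$z$ asymptotics (a standard check)\n\\beq\nS_\\mathcal{Z}(\\tau,z)\\theta = \\theta-\\frac{1}{z}\\sum_{k=1}^{N_\\mathcal{Z}}\\phi^k\\left\\langle\\bra\\phi_k,\\theta\\right\\rangle\\ket_{0,2}^\\mathcal{Z}(\\tau)+\\mathcal{O}(z^{-2}),\n\\eeq\nwhose leading terms, applied to $\\mathbf{1}_\\mathcal{Z}$ as in \\eqref{eq:Jfun1}, recover the first terms of \\eqref{eq:resj}. 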
From \\eqref{eq:resj}, the undeformed flat coordinate system is obtained in the\nlimit $z\\to\\infty$ as\n\\beq\n\\lim_{z\\to \\infty} \\l(J^\\mathcal{Z}(\\tau,z)-z \\mathbf{1}_\\mathcal{Z}\\r) = \\tau.\n\\eeq\n\\\\\n\nBy Remark~\\ref{rmk:fuchsLR}, a loop around the origin in the variables $q_i=\\mathrm{e}^{\\tau_i}$\ngives a non-trivial monodromy action on the $J$-function. Setting $\\tau'=0$ in \\eqref{eq:resj} and applying the Divisor\nAxiom then gives \\cite[Proposition~10.2.3]{MR1677117}\n\\bea\n& J^{\\mathcal{Z}, \\rm small}(\\tau_{0,2},z) \\triangleq J^\\mathcal{Z}(\\tau,z)\\Big|_{\\tau'=0} \\nn \\\\\n=& z \\mathrm{e}^{\\tau_1 \\phi_1\/z}\\dots\\mathrm{e}^{\\tau_{l_\\mathcal{Z}} \\phi_{l_\\mathcal{Z}}\/z}\n\\l(\\mathbf{1}_\\mathcal{Z}+ \\sum_{\\beta,k}\\mathrm{e}^{\\tau_1 \\beta_1}\\dots\\mathrm{e}^{\\tau_{l_\\mathcal{Z}}\\beta_{l_\\mathcal{Z}}}\\phi^k\\left\\langle\n\\frac{\\phi_k}{z(z-\\psi_{1})}\\right\\rangle_{0,1,\\beta}^\\mathcal{Z}\\r).\n\\label{eq:Jred}\n\\end{eqnarray}\nIn our situation\nwhere the $T$-action has only zero-dimensional fixed loci $\\{P_i\\}_{i=1}^{N_\\mathcal{Z}}$, write \n\\beq\n\\phi_i \\to \\sum_{j=1}^{N_\\mathcal{Z}} c_{ij}(\\nu) P_j, \\quad i=1, \\dots, l_\\mathcal{Z},\n\\eeq\nfor the image of $\\{\\phi_i \\in H^2(\\mathcal{Z}, \\mathbb{C})\\}_{i=1}^{l_\\mathcal{Z}}$ under the\nAtiyah--Bott isomorphism.\n \n The image of each $\\phi_i$ is concentrated on the fixed point cohomology classes with trivial isotropy which \n are idempotents of the\nclassical Chen-Ruan cup\n product on $H(\\mathcal{Z})$. 
Therefore, the components of the $J$-function in the fixed points basis\n\\beq\nJ^{\\mathcal{Z}, \\rm small}(\\tau_{0,2},z) =: \\sum_{j=1}^{N_\\mathcal{Z}} J_j^{\\mathcal{Z}, \\rm small}(\\tau_{0,2},z) P_j\n\\eeq\nsatisfy\n\\beq\nJ_j^{\\mathcal{Z}, \\rm small}(\\tau_{0,2},z) = z \\mathrm{e}^{\\sum_{i=1}^{l_\\mathcal{Z}} \\tau_i\n c_{ij}\/z}\\l(1+\\mathcal{O}\\l(\\mathrm{e}^{\\tau_{0,2}}\\r)\\r)\n\\label{eq:Jloc}\n\\eeq\nwhere the $\\mathcal{O}\\l(\\mathrm{e}^{\\tau_{0,2}}\\r)$ term on the right hand side is an analytic power\nseries around $\\mathrm{e}^{\\tau_{0,2}}=0$ by \\eqref{eq:Jred} and the assumption of convergence\nof the quantum product. The localized basis $\\{P_j\\}_{j=1}^{N_\\mathcal{Z}}$ therefore\ndiagonalizes the monodromy around large radius: by \\eqref{eq:Jloc}, each\n$J_j^{\\mathcal{Z}, \\rm small}(\\tau_{0,2},z)$ is an eigenvector of the monodromy around a loop in the\n$q_i$-plane encircling the large radius\nlimit of $\\mathcal{Z}$ with eigenvalue $\\mathrm{e}^{2\\pi\\mathrm{i} c_{ij}\/z}$.\n\n\n\\subsubsection{Global mirror symmetry and the closed CRC}\n\n\n\n\n \nConsider a toric Gorenstein orbifold $\\mathcal{X}$, and let $X \\leftarrow Y$ be a crepant resolution of its coarse moduli space.\nRuan's Crepant Resolution Conjecture can be phrased as the existence of a {\\it global quantum $D$-module}\nunderlying the quantum differential systems of $\\mathcal{X}$ and $Y$. 
This is a 4-tuple\n$(\\mathcal M_A, F, \\nabla, H(,)_F)$ with\n\\bit\n\\item $\\mathcal M_A$ a complex quasi-projective variety;\n\\item $F\\to \\mathcal M_A$ a rank-$N_\\mathcal{Z}$ holomorphic vector bundle on $\\mathcal M_A$; \n\\item $\\nabla$ a flat $\\mathcal{O}_{\\mathcal M_A}$-connection on $F$;\n\\item $H(,)_F \\in \\mathrm{End}(F)$ a non-degenerate $\\nabla$-flat inner product.\n\\end{itemize}\nIn the quantum $D$-module picture, the Crepant Resolution\nConjecture states that there exist open subsets $V_\\mathcal{X}$, $V_Y \\subset \\mathcal M_A$\nand functions $\\mathfrak{h}_\\mathcal{X}, \\mathfrak{h}_Y \\in \\mathcal{O}_{\\mathcal M_A}$\nsuch that \nthe global\n$D$-module $(\\mathcal M_A, F, \\nabla, H(,)_F)$ is locally isomorphic to $\\mathrm{QDM}(\\mathcal{X})$ and\n$\\mathrm{QDM}(Y)$:\n\\bea\n(\\mathcal M_A, F, \\nabla \\circ \\mathfrak{h}_\\mathcal{X}^{1\/z} , H(,)_F)\\big|_{V_\\mathcal{X}} &\\simeq &\\mathrm{QDM}(\\mathcal{X}), \\\\\n(\\mathcal M_A, F, \\nabla \\circ \\mathfrak{h}_Y^{1\/z} , H(,)_F)\\big|_{V_Y} &\\simeq &\\mathrm{QDM}(Y).\n\\end{eqnarray}\nNotice that the Dubrovin connections on $TH(\\mathcal{X})$ and $TH(Y)$ correspond to\ndifferent trivializations of the global flat system $\\nabla$ when $\\mathfrak{h}_\\mathcal{X}\\neq\n\\mathfrak{h}_Y$. \nAny 1-chain $\\rho$ in $\\mathcal M_A$ gives an analytic continuation map\nof $\\nabla$-flat sections \n$\\mathbb{U}^{\\mathcal{X}, Y}_{\\mathcal{S},\\rho}:\\Gamma(V_Y, \\mathcal{O}(F)) \\to \\Gamma(V_\\mathcal{X}, \\mathcal{O}(F))$,\nwhich is an isometry of $H(,)_F$ \nand identifies the quantum $D$-modules of\n$\\mathcal{X}$ and $Y$.\n\\begin{rmk}\nWhen $\\mathfrak{h}_{\\mathcal{X}}\\neq \\mathfrak{h}_{Y}$, the induced Frobenius structures on $H(\\mathcal{X})$ and\n$H(Y)$ are inequivalent. 
A sufficient condition \\cite{MR2529944} for the two Frobenius\nstructures to coincide is given by the Hard Lefschetz criterion for $\\mathcal{X} \\to X$:\n\\beq\n\\mathrm{age}(\\theta) - \\mathrm{age}(\\mathrm{inv}^*\\theta) = 0\n\\eeq\nfor any class $\\theta\\in H(\\mathcal{X})$.\n\\end{rmk}\n\n\\begin{rmk}\nSuppose that\n$c_1(\\mathcal{Z})\\geq 0$ and that the coarse moduli space $Z$ is a\nsemi-projective toric variety given by a GIT quotient of $\\mathbb{C}^{\\dim\\mathcal{Z}+l_\\mathcal{Z}}$ by $(\\mathbb{C}^*)^{l_\\mathcal{Z}}$.\nIn this setting, the global quantum $D$-module arises naturally in the\nform of the GKZ system associated to $\\mathcal{Z}$ \\cite{MR1653024,\n MR2271990, ccit2}. The scaling factor $\\mathfrak{h}_\\mathcal{Z}^{1\/z}$ then measures the\ndiscrepancy between the small $J$-function and the canonical basis-vector of\n solutions of the GKZ system (the {\\it $I$-function}), restricted to zero\n twisted insertions:\n\\beq\n\\mathfrak{h}_\\mathcal{Z}^{1\/z}(\\tau_{0,2}) J^{\\mathcal{Z}, \\rm small}(\\tau_{0,2}, z) =\nI^\\mathcal{Z}({\\frak a}(\\tau_{0,2}),z),\n\\label{eq:scalingIJ}\n\\eeq\nwhere ${\\frak a}(\\tau_{0,2})$ is the inverse mirror map. As a consequence of \\eqref{eq:scalingIJ}, the\nscaling factor $\\mathfrak{h}_{\\mathcal{Z}}$ is \ndetermined by the toric data defining $\\mathcal{Z}$ \\cite{MR1653024,\n MR2529944, ccit2}. Let $\\Xi_i\\in H^2(Z)$ be the $T$-equivariant Poincar\\'e dual of the reduction to the quotient of the $i^{\\rm th}$\ncoordinate hyperplane in $\\mathbb{C}^{\\dim\\mathcal{Z}+l_\\mathcal{Z}}$ and write\n$\\zeta^{(j)}_i=\\mathrm{Coeff}_{\\phi_j}\\Xi_i \\in \\mathbb{C}[\\nu]$ for the coefficient of the projection of\n$\\Xi_i$ along $\\phi_j\\in H(\\mathcal{Z})$ for $j=0, \\dots, l_\\mathcal{Z}$. 
Defining, for every\n$\\beta$, $D_i(\\beta) \\triangleq \\int_\\beta \\Xi_i$ and $J^\\pm_\\beta\\triangleq\\l\\{j \\in\n\\{1,\\dots,\\dim\\mathcal{Z}+l_\\mathcal{Z} \\} | \\pm D_j(\\beta)>0\\r\\}$, we have\n\\bea\n\\tau_l &=& \\log{{\\frak a}_l} + \\sum_{\\beta\\in \\mathrm{Eff}(\\mathcal{Z})}{\\frak a}^\\beta \\frac{\\prod_{j_{-}\\in\n J^-_\\beta}(-1)^{D_{j_{-}}(\\beta)} |D_{j_{-}}(\\beta)|!}{\\prod_{j_{+}\\in\n J^+_\\beta}D_{j_{+}}(\\beta)!}\\sum_{k_{-}\\in\n J^-_\\beta}\\frac{-\\zeta^{(l)}_{k_{-}}}{D_{k_{-}}(\\beta)}, \\quad l=1,\\dots,\nl_\\mathcal{Z}, \n\\end{eqnarray}\n\\bea\n\\mathfrak{h}_\\mathcal{Z} &=& \\exp\\l[\\sum_{\\beta\\in \\mathrm{Eff}(\\mathcal{Z})}{\\frak a}^\\beta \\frac{\\prod_{j_{-}\\in\n J^-_\\beta}(-1)^{D_{j_{-}}(\\beta)} |D_{j_{-}}(\\beta)|!}{\\prod_{j_{+}\\in\n J^+_\\beta}D_{j_{+}}(\\beta)!}\\sum_{k_{-}\\in J^-_\\beta}\\frac{-\\zeta^{(0)}_{k_{-}}}{D_{k_{-}}(\\beta)}\\r].\n\\end{eqnarray}\n\\end{rmk}\n\n\n\\subsubsection{Givental's symplectic formalism}\n\\label{sec:givental}\n\nThe global quantum D-module picture is intimately connected to\n the CRC statement of \\cite{MR2510741, coates2007quantum}. In view of our\n statement of the OCRC in Section \\ref{sec:ocrc}, we find it useful to spell\n it out here. 
Givental's symplectic space $(\\mathcal{H}_\\mathcal{Z},\\Omega_\\mathcal{Z})$ is the infinite dimensional vector space\n\\beq\n\\mathcal{H}_\\mathcal{Z}\\triangleq H(\\mathcal{Z})\\otimes\\mathcal{O}(\\mathbb{C}^*)\n\\eeq\nalong with the symplectic form\n\\beq\n\\Omega_\\mathcal{Z}(f,g)\\triangleq \\Res_{z=0} \\eta(f(-z),g(z))_\\mathcal{Z}.\n\\label{eq:sympform}\n\\eeq\nA general point of $\\mathcal{H}_\\mathcal{Z}$ can be written as\n\\beq\n\\sum_{k\\geq 0}\\sum_{\\alpha=0}^{N_\\mathcal{Z}-1} q_{k,\\alpha} \\phi_\\alpha z^k+\\sum_{l\\geq 0}\\sum_{\\beta=0}^{N_\\mathcal{Z}-1} p_{l,\\beta} \\phi_\\beta z^{-l-1}.\n\\eeq\nNotice that $\\{q_{k,\\alpha}, p_{l,\\beta}\\}$ are Darboux coordinates for\n\\eqref{eq:sympform}; call $\\mathcal{H}_\\mathcal{Z}^+$ the Lagrangian subspace spanned by\n$q_{k,\\alpha}$. The generating function of genus zero descendent Gromov--Witten invariants of\n$\\mathcal{Z}$,\n\\beq\n\\mathcal{F}_0^\\mathcal{Z} \\triangleq \\sum_{n=0}^\\infty \\sum_{\\beta \\in \\mathrm{Eff}(\\mathcal{Z})}\\sum_{\\substack{a_1, \\dots a_n \\\\ r_1\n \\dots r_n}} \\frac{\\prod_{i=1}^n \\tau_{a_i,r_i}}{n!}\\left\\langle\n\\sigma_{r_1}(\\phi_{a_1}) \\dots \\sigma_{r_n}(\\phi_{a_n}) \\right\\rangle_{0,n,\\beta}^\\mathcal{Z},\n\\label{eq:descpot}\n\\eeq\nis the germ of an analytic function on $\\mathcal{H}_\\mathcal{Z}^+$ upon identifying\n$\\tau_{0,0}=q_{0,0}+1$, $\\tau_{\\alpha,n}=q_{\\alpha,n}$; under the assumption of convergence\nof the quantum product, coefficients of monomials in $\\tau_{\\alpha,n}$ with\n$\\deg_{\\rm CR} \\phi_\\alpha \\neq 0$, $n > 0$ are analytic functions of $\\mathrm{e}^{\\tau_{0,2}}$\nin a neighbourhood of the origin. 
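A short residue computation confirms the Darboux property: for $k,l\\geq 0$,\n\\beq\n\\Omega_\\mathcal{Z}\\l(\\phi_\\alpha z^k,\\phi_\\beta z^{-l-1}\\r)=(-1)^k\\delta_{kl}\\,\\eta(\\phi_\\alpha,\\phi_\\beta)_\\mathcal{Z}, \\qquad \\Omega_\\mathcal{Z}\\l(\\phi_\\alpha z^k,\\phi_\\beta z^{l}\\r)=0,\n\\eeq\nsince the residue in \\eqref{eq:sympform} only picks up the coefficient of $z^{-1}$; in particular, $\\mathcal{H}_\\mathcal{Z}^+$ is a Lagrangian subspace. 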
The graph of the differential of\n\\eqref{eq:descpot}, \n\\beq\np_{l,\\beta}=\\frac{{\\partial}\\mathcal{F}_0^\\mathcal{Z}}{{\\partial} q^{l,\\beta}},\n\\eeq\nthen yields a formal germ of a Lagrangian submanifold $\\mathcal{L}_\\mathcal{Z}$ (in\nfact, a ruled cone, as a consequence of the genus zero Gromov--Witten axioms),\ndepending analytically on the small quantum cohomology variables\n$\\tau_{0,2}$. By the equations defining the cone, the $J$-function $J^\\mathcal{Z}(\\tau,-z)$ yields a family of\nelements of $\\mathcal{L}_\\mathcal{Z}$ parameterized by $\\tau \\in H(\\mathcal{Z})$, which is uniquely\ndetermined by its large $z$ asymptotics $J(\\tau, -z)=-z+\\tau + \\mathcal{O}(z^{-1})$. Conversely, the genus zero topological recursion relations imply that $\\mathcal{L}_\\mathcal{Z}$ can be reconstructed entirely from $J^\\mathcal{Z}(\\tau, z)$.\n\\\\\n\nThe Crepant Resolution Conjecture has a natural formulation in terms of\nmorphisms of Givental spaces, as pointed out by\nCoates--Corti--Iritani--Tseng (CCIT) \\cite{MR2510741} and further explored by Coates--Ruan\n\\cite{coates2007quantum}. 
\n\\begin{conj}[\\cite{MR2510741}, \\cite{coates2007quantum}]\nThere exists a $\\mathbb{C}((z^{-1}))$-linear symplectic\nisomorphism of Givental spaces $\\mathbb{U}_\\rho^{\\mathcal{X},Y}:\\mathcal{H}_\\mathcal{X}\\rightarrow \\mathcal{H}_Y,$\nmatching the Lagrangian cones of $\\mathcal{X}$ and $Y$ upon a suitable analytic\ncontinuation of small quantum cohomology parameters:\n\\beq\n\\mathbb{U}_{\\rho}^{\\X,Y}(\\mathcal{L}_\\mathcal{X})=\\mathcal{L}_Y.\n\\eeq\n\\end{conj}\nThis version of the CRC is equivalent to the quantum $D$-module approach via the\nfundamental solutions, which give a canonical $z$-linear identification\n\\beq\\label{eq:givetosect}\nS_\\mathcal{Z}(\\tau,z):\\mathcal{H}_\\mathcal{Z}\\stackrel{\\cong}{\\longrightarrow}\\mathcal{S}_\\mathcal{Z},\n\\eeq\ntranslating the analytic continuation map $\\mathbb{U}_{\\mathcal{S},\\rho}^{\\mathcal{X},Y}$ to a\nlinear isomorphism of Givental spaces which is symplectic, as\n$\\mathbb{U}_{\\mathcal{S},\\rho}^{\\mathcal{X},Y}$ preserves the pairing \\eqref{eq:pairDmod}. \n\\\\\n\nSuppose now that $c_1(\\mathcal{X})=0$, $\\mathrm{dim}_\\mathbb{C}\\mathcal{X}=3$ and assume further that\nthe $J$-functions $J^\\mathcal{Z}$, for $\\mathcal{Z}$ either $\\mathcal{X}$ or $Y$, and $\\mathbb{U}_{\\rho}^{\\X,Y}$ admit well-defined non-equivariant limits,\n\\beq\nJ_{\\rm n-eq}^\\mathcal{Z}(\\tau,z) \\triangleq \\lim_{\\nu\\to 0}J^\\mathcal{Z}(\\tau,z), \\qquad \\mathbb{U}^{\\mathcal{X},Y}_{\\rho,0} \\triangleq \\lim_{\\nu\\to 0} \\mathbb{U}_{\\rho}^{\\X,Y}. 
\n\\eeq\nBy homogeneity, $\\mathrm{e}^{-\\tau_0\/z} J_{\\rm n-eq}^\\mathcal{Z}(\\tau,z)$ is a Laurent\npolynomial of the form \\cite[\\S10.3.2]{MR1677117}\n\\beq\nJ_{\\rm n-eq}^\\mathcal{Z}(\\tau,z) = \\mathrm{e}^{-\\tau_0\/z}\\l(z + \\sum_{i=1}^{N_\\mathcal{Z}-1}\\l(\\tau_i +\n\\frac{\\mathfrak{f_i}^\\mathcal{Z}(\\tau)}{z}\\r)\\phi_i + \\frac{\\mathfrak{g}^\\mathcal{Z}(\\tau)}{z^2}\\mathbf{1}_\\mathcal{Z}\\r),\n\\eeq\nwhere $\\mathfrak{f}^\\mathcal{Z}(\\tau)$ and $\\mathfrak{g}^\\mathcal{Z}(\\tau)$ are\nanalytic functions around the large radius limit point of $\\mathcal{Z}$. Restricting $J_{\\rm n-eq}^\\mathcal{Z}(\\tau,z)$\nto $\\Delta_\\mathcal{Z}$ and picking up a branch $\\rho$ of analytic continuation of the\nquantum parameters, the vector valued analytic function $\\mathcal{I}_\\rho^{\\mathcal{X},Y}$\ndefined by\n\\beq\n\\begin{xy}\n(0,20)*+{\\Delta_\\mathcal{X}}=\"a\"; (40,20)*+{\\Delta_Y}=\"b\";\n(0,0)*+{\\mathcal{H}_\\mathcal{X}}=\"c\"; (40,0)*+{\\mathcal{H}_Y}=\"d\";\n{\\ar^{\\mathcal{I}_\\rho^{\\mathcal{X},Y}} \"a\";\"b\"};\n{\\ar_{J_{\\rm n-eq}^\\mathcal{X}\\big|_{\\Delta_\\mathcal{X}}} \"a\";\"c\"};{\\ar^{J_{\\rm n-eq}^Y\\big|_{\\Delta_Y}} \"b\";\"d\"};\n{\\ar^{ \\mathfrak{h}_\\mathcal{X}^{1\/z}\\mathbb{U}^{\\mathcal{X},Y}_{\\rho,0} \\mathfrak{h}_Y^{-1\/z}} \"c\";\"d\"};\n\\end{xy}\n\\label{eq:iddelta}\n\\eeq\ngives an analytic\nisomorphism\\footnote{Explicitly, matrix entries $(\\mathbb{U}^{\\mathcal{X},Y}_{\\rho,0})_{ij}$ of\n$\\mathbb{U}^{\\mathcal{X},Y}_{\\rho,0}$ are monomials in $z$; call $\\mathfrak{u}_{ij}$ the\ncoefficient of such monomial. 
Then \\eqref{eq:iddelta} boils down to the\nstatement that quantum cohomology parameters \n$\\tau^\\bullet_i$ in $\\Delta_\\bullet$ for $i=1, \\dots, l_Y$ are identified as \n\\beq\n\\tau^Y_i = (\\mathcal{I}^{\\mathcal{X},Y}_\\rho \\tau^\\mathcal{X})_i \\triangleq\n\\mathfrak{u}_{i0}+\n\\sum_{j=1}^{l_Y}\\mathfrak{u}_{ij} \\tau_{j}^\\mathcal{X}+\n\\sum_{k=l_Y+1}^{N_Y-1}\\mathfrak{u}_{ik} \\mathfrak{f}^\\mathcal{X}_k(\\tau^\\mathcal{X}).\n\\label{eq:changevargen}\n\\eeq\nSince $\\deg (\\mathbb{U}^{\\mathcal{X},Y}_{\\rho,0})_{ij}>0$ for $j>l_Y$, in the Hard Lefschetz\ncase the condition that the coefficients of $\\mathbb{U}_{\\rho}^{\\X,Y}$ are Taylor series in $1\/z$\nimplies that $\\mathfrak{u}_{ik}=0$ for $k>l_Y$.\n} between neighbourhoods $V_\\mathcal{X}$, $V_Y$ of the\nprojections of the large radius points of $\\mathcal{X}$ and $Y$ to $\\Delta_\\mathcal{X}$ and\n$\\Delta_Y$. \nWhen $\\mathcal{X}$ satisfies the Hard--Lefschetz condition, the coefficients of $\\mathbb{U}_{\\rho}^{\\X,Y}$ contain\nonly non-positive powers of $z$ \\cite{coates2007quantum} and the non-equivariant limit coincides with the\n$z\\to\\infty$ limit; then the isomorphism\n$\\mathcal{I}_\\rho^{\\mathcal{X},Y}$ extends to an affine linear\n change of variables\n$\\widehat{\\mathcal{I}}_\\rho^{\\mathcal{X},Y}:H(\\mathcal{X})\\to H(Y)$ at the level of the full\ncohomology rings of $\\mathcal{X}$ and $Y$, which is \nan isomorphism of Frobenius\nmanifolds.\n\n\n\n\\subsubsection{Integral structures and the CRC}\n\\label{sec:intstr}\n\nIn \\cite{MR2553377}, Iritani uses $K$-groups to define an integral structure\nin the quantum D-module associated to the Gromov--Witten theory of a smooth\nDeligne--Mumford stack $\\mathcal{Z}$; we recall \n the discussion in \\cite{MR2553377, MR2683208}, adapting\nit to the equivariant setting. 
\\\\\n\nWrite $K(\\mathcal{Z})$ for the Grothendieck group of topological vector bundles\n$V\\to\\mathcal{Z}$ and consider the map $\\Psi:K(\\mathcal{Z})\\to H(\\mathcal{Z})\\otimes\\mathbb{C}((z^{-1}))$ given by \n\\beq\\label{eq:stackymukai}\n\\Psi(V)\\triangleq (2\\pi)^{-\\frac{\\dim\\mathcal{Z}}{2}}z^{-\\mu} \\widehat\\Gamma_\\mathcal{Z}\\cup(2\\pi\\mathrm{i})^{\\deg\/2}\\mathrm{inv}^*\\mathrm{ch}(V),\n\\eeq\nwhere $\\mathrm{ch}(V)$ is the orbifold Chern character, $\\cup$ is the topological cup\nproduct on $I\\mathcal{Z}$, and \n\\bea\n\\label{eq:gammaT}\n\\widehat\\Gamma_\\mathcal{Z} &\\triangleq & \\bigoplus_v\\prod_f\\prod_\\delta\\Gamma(1-f+\\delta), \\\\\n\\mu & \\triangleq &\\left(\\frac{1}{2}\\deg(\\phi)-\\frac{3}{2}\\right)\\phi,\n\\end{eqnarray}\nwhere the sum in \\eqref{eq:gammaT} is over all connected components of the inertia stack, the left\nproduct is over the eigenbundles in a decomposition of the tangent bundle $T\\mathcal{Z}$\nwith respect to the stabilizer action (with $f$ the weight of the action on the eigenspace), and the\nright product is over all of the Chern roots $\\delta$ of the\neigenbundle. Via the fundamental solution \\eqref{eq:fundsol} this induces a map\nto the space of flat sections of $\\mathrm{QDM}(\\mathcal{Z})$; its image is a lattice \\cite{MR2553377}\nin $\\mathcal{S}_\\mathcal{Z}$, which Iritani dubs the {\\it $K$-theory integral structure} of\n $QH(\\mathcal{Z})=(H(\\mathcal{Z}) , \\eta, \\circ_\\tau)$. This implies the existence of an integral\n local system underlying $\\mathrm{QDM}(\\mathcal{Z})$ induced by the $K$-theory of\n $\\mathcal{Z}$. \\\\\n\nIritani's theory has important implications for the Crepant Resolution\nConjecture. 
At the level of integral structures, the analytic continuation map\n$\mathbb{U}_{\mathcal{S},\rho}^{\mathcal{X}, Y}$ of flat sections should be induced by an isomorphism\n$\mathbb{U}_{K,\rho}^{\mathcal{X},Y}: K(\mathcal{X}) \to K(Y)$ at the $K$-group level,\n\beq\label{eq:intstructure}\n\begin{xy}\n(0,20)*+{K(\mathcal{X})}=\"a\"; (40,20)*+{K(Y)}=\"b\";\n(0,0)*+{\mathcal{S}_\mathcal{X}}=\"c\"; (40,0)*+{\mathcal{S}_Y}=\"d\";%\n{\ar^{\mathbb{U}_{K,\rho}^{\mathcal{X},Y}} \"a\";\"b\"};\n{\ar_{S_\mathcal{X}(x,z)\Psi_\mathcal{X}} \"a\";\"c\"};{\ar^{S_Y(t,z)\Psi_Y} \"b\";\"d\"};\n{\ar^{ \mathfrak{h}_Y^{1\/z}\mathbb{U}_{\mathcal{S},\rho}^{\mathcal{X},Y} \mathfrak{h}_\mathcal{X}^{-1\/z}} \"c\";\"d\"};\n\end{xy}\n\eeq\n\n\nThe Crepant Resolution Conjecture can then be phrased in terms of the\nexistence of an identification of the integral structures underlying\nquantum cohomology. In \cite{MR2553377}, it is conjectured that\n$\mathbb{U}_{K,\rho}^{\mathcal{X},Y}$ should be induced by a natural geometric\ncorrespondence between $K$-groups (see also \cite{MR2271990} for earlier work\nin this context). In terms of Givental's symplectic formalism, we have \n\beq\label{eq:iritanisymp}\n\mathbb{U}_\rho^{\mathcal{X},Y}=\Psi_Y\circ\mathbb{U}_{K,\rho}^{\mathcal{X},Y}\circ\Psi_\mathcal{X}^{-1}.\n\eeq\n\n\n\subsection{Open Gromov--Witten theory}\n\label{sec:ogw}\n\nFor a three-dimensional toric Calabi--Yau variety, open Gromov-Witten invariants \nare defined ``via\nlocalization'' in \cite{Katz:2001vm, Diaconescu:2003qa}. This theory\nwas first introduced for orbifold targets in \cite{MR2861610} and developed in\nfull generality in \cite{r:lgoa} (see also \cite{fang2012open} for recent\nresults in this context). \n Boundary conditions are given by choosing a special type of Lagrangian\n submanifold introduced by Aganagic--Vafa in\n \cite{Aganagic:2000gs}. 
These Lagrangians are defined locally in a formal neighborhood of each torus invariant line: in particular, if $p$ is a torus fixed point adjacent to the torus fixed line $l$, and the local coordinates at $p$ are $(z,u,v)$, then $L$ is defined to be the fixed points of the anti-holomorphic involution\n \beq\n (z,u,v)\rightarrow (1\/\overline{z}, \overline{zu}, \overline{zv})\n \eeq\n defined away from $z=0$. Boundary conditions can then be thought of as ``formal'' ways\n of decorating the web diagram of the toric target. \\\n \n Loci of fixed maps are described in terms of closed\ncurves mapping to the compact edges of the web diagram in the usual way and disks mapping rigidly to\nthe torus invariant lines with Lagrangian conditions. Besides the Hodge integrals coming from the contracting\ncurves, the contribution of each fixed locus to the invariants has a factor\nfor each disk, which is constructed as follows. The map from the disk to a neighborhood\nof its image is viewed as the quotient by an involution of a map of a\nrational curve to a canonical target. The obstruction theory in ordinary\nGromov-Witten theory admits a natural $\mathbb{Z}_2$ action, and the equivariant Euler\nclass of the involution invariant part of the obstruction theory is chosen as\nthe localization contribution from the disk \cite[Section~2.2]{MR2861610}, \cite[Section~2.4]{r:lgoa}. This construction is efficiently\nencoded via the introduction of a ``disk function'', which we now review in the\ncontext of cyclic isotropy (see \cite[Section~3.3]{r:lgoa} for the general\ncase of finite abelian isotropy groups). \\\n\nLet $\mathcal{Z}$ be a three-dimensional CY toric orbifold, $p$ a fixed point such\nthat a neighborhood is isomorphic to $[\mathbb{C}^3\/\mathbb{Z}_{n+1}]$, with representation\nweights $(m_1, m_2,m_3)$ and CY torus weights $(w_1,w_2,w_3)$. Define ${n_{e}}=\n(n+1)\/\gcd(m_1,n+1)$ to be the size of the effective part of the action along\nthe first coordinate axis. 
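As a quick sanity check of the shape of these boundary conditions (a sketch of a standard identification, not needed in the sequel), the fixed locus of the involution above is a solid torus:

```latex
z=1/\bar{z} \;\Longleftrightarrow\; |z|=1,\ z=\mathrm{e}^{\mathrm{i}\theta};
\qquad
u=\overline{zu} \;\Longleftrightarrow\; u\in \mathrm{e}^{-\mathrm{i}\theta/2}\,\mathbb{R},
\qquad
v=\overline{zv} \;\Longleftrightarrow\; v\in \mathrm{e}^{-\mathrm{i}\theta/2}\,\mathbb{R},
```

so that $L\simeq S^1\times\mathbb{R}^2$, fibered over the unit circle in the $z$-axis with real two-plane fibers.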
\n\n There exists a map from an orbi-disk to the first coordinate axis with winding $d$ and twisting $k$ if the compatibility condition \n\beq\n\frac{d}{{n_{e}}}-\frac{km_1}{n+1}\in \mathbb{Z}\n\label{compat}\n\eeq\nis satisfied. In this case, the positively oriented disk function is\n\begin{equation}\nD_k^+(d;\vec{w})=\n\left( \frac{ {n_{e}}w_1}{d} \right)^{\text{age}(k)-1}\frac{{n_{e}}}{d(n+1)\left\lfloor \frac{d}{{n_{e}}} \right\rfloor !}\frac{\Gamma\left( \frac{dw_{2}}{{n_{e}}w_1}+\left\langle \frac{k m_{3}}{n+1} \right\rangle + \frac{d}{{n_{e}}} \right)}{\Gamma\left( \frac{dw_{2}}{{n_{e}}w_1}-\left\langle \frac{k m_{2}}{n+1} \right\rangle +1 \right)}.\n\end{equation}\nThe negatively oriented disk function is obtained by switching the indices $2$\nand $3$. By renaming the coordinate axes this definition applies to the\ngeneral boundary condition. \\\n\nIn \cite{r:lgoa} the disk function is used to construct the GW orbifold\ntopological vertex, a building block for open and closed GW invariants of\n$\mathcal{Z}$. The disk potential is efficiently expressed in terms of the\ndisk function and of the $J$-function of $\mathcal{Z}$. Fix a Lagrangian boundary condition $L$,\nwhich we assume to be on the first coordinate axis in the local chart ($\cong [\mathbb{C}^3\/\mathbb{Z}_{n+1}]$) around the\npoint $p$. 
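To anticipate the $A_n$ geometry of the next subsection: in a chart where $\mathbb{Z}_{n+1}$ acts on the coordinates with representation weights $(m_1,m_2,m_3)=(1,n,0)$, a boundary condition on the first axis has $n_e=(n+1)/\gcd(1,n+1)=n+1$, and the compatibility condition \eqref{compat} reads

```latex
\frac{d}{n+1}-\frac{k}{n+1}\in\mathbb{Z}
\quad\Longleftrightarrow\quad
d\equiv k \pmod{n+1},
```

i.e. the winding of an orbi-disk determines its twisting.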
Denote by $\\{\\mathbf{1_{p,k}}\\}_{k=1,...,n+1}$ the part of the localized basis for $H(\\mathcal{Z})$ supported at $p$.\nRaising indices using the orbifold\nPoincar\\'e pairing, and extending the disk function to be a cohomology valued\nfunction\n\\begin{equation}\n\\mathcal{D}^+(d;\\vec{w})=\\sum_{k=1}^{n+1}D_k^+(d;\\vec{w}) \\mathbf{1_p^k},\n\\end{equation}\nthe (genus zero) \\textit{scalar disk potential} is obtained by contraction with the $J$ function:\n\\bea\nF_{L}^{\\rm disk}(\\tau,y,\\vec{w}) &\\triangleq & \\sum_d\\frac{y^d}{d!}\\sum_n\n\\frac{1}{n!}\\langle \\tau, \\ldots, \\tau \\rangle_{0,n}^{L,d} \\nn \\\\\n&=& \\sum_d \\frac{y^d}{d!}\\left( \\mathcal{D}^+(d;\\vec{w}), J^\\mathcal{Z}\\left(\\tau,\\frac{gw_1}{d}\\right)\\right)_{\\mathcal{Z}},\n\\label{sdp}\n\\end{eqnarray}\nwhere we denoted by $\\langle \\tau, \\ldots, \\tau \\rangle_{0,n}^{L,d}$ the disk\ninvariants with boundary condition $L$, winding $d$\nand $n$ general cohomological insertions.\n\\begin{rmk}\nWe may consider the disk potential relative to multiple Lagrangian boundary conditions. In that case, we define the disk function by adding the disk functions for each Lagrangian, and we introduce a winding variable for each boundary condition. \n\\end{rmk}\n\n\\begin{rmk}\nIt is not conceptually difficult (but book-keeping intensive) to express the general genus zero open potential in terms of appropriate contractions of arbitrary copies of these disk functions with the full descendant Gromov-Witten potential of $\\mathcal{Z}$.\n\\end{rmk}\n\n\n\\subsection{$A_n$ resolutions}\n\\label{sec:an}\n\n\\subsubsection{GIT Quotients}\n\\label{sec:GIT}\n\nHere we review the relevant toric geometry concerning our targets. 
Let\n$\\mathcal{X}\\triangleq[\\mathbb{C}^3\/\\mathbb{Z}_{n+1}]$ be the 3-fold $A_n$ singularity and $Y$ its resolution.\nThe toric fan for $\\mathcal{X}$ has rays $(0,0,1)$, $(1,0,0)$, and $(1,n+1,0)$, while\nthe fan for $Y$ is obtained by adding the rays $(1,1,0)$, $(1,2,0)$,...,\n$(1,n,0)$. The divisor class group is described by the short exact sequence\n\\beq\n0\\longrightarrow\\mathbb{Z}^{n}\\stackrel{M^T}{\\longrightarrow}\\mathbb{Z}^{n+3}\\stackrel{N}{\\longrightarrow}\\mathbb{Z}^3\\longrightarrow\n0,\n\\label{eq:divclass}\n\\eeq\nwhere\n\\beq\nM=\\left[ \\begin{array}{cccccccc}\n1 & -2 & 1 & 0 & 0 &... &0 & 0\\\\\n0 & 1 & -2 & 1 & 0 &... &0 & 0\\\\\n\\vdots & &\\ddots & &\\ddots & && \\vdots\\\\\n0 &... & 0 & 0 & 1 & -2 & 1 & 0\n\\end{array}\n\\right]\n,\\hspace{.5cm} N=\\left[ \\begin{array}{cccccc}\n1 & 1 & 1 & & 1 & 0\\\\\n0 & 1 & 2 & ... & n+1 & 0\\\\\n0 & 0 & 0 & & 0 & 1\n\\end{array}.\n\\right]\n\\label{eq:MN}\n\\eeq\n\\\\\nBoth $\\mathcal{X}$ and $Y$ are GIT quotients: \n\\bea\\label{orbgit}\n\\mathcal{X} &=& \\left[\\frac{\\mathbb{C}^{n+3}\\setminus V(x_1\\cdot...\\cdot\n x_n)}{(\\mathbb{C}^*)^n}\\right], \\\\\nY &=& \\frac{\\mathbb{C}^{n+3}\\setminus V(I_1, \\dots, I_n),\n}{(\\mathbb{C}^*)^n}\n\\label{resgit}\n\\end{eqnarray}\nwhere \n\\beq\nI_i=\\prod_{j=0, j \\neq i-1, i}^{n+1} x_i,\n\\eeq\nand the torus action is specified by $M$. \nFrom the quotient \\eqref{orbgit}, we can compute pseudo-coordinates on the orbifold\n\\begin{equation}\\label{orbcoords}\n\\left[\\begin{array}{c}\nz_1\\\\\nz_2\\\\\nz_3\n\\end{array}\\right]\n=\n\\left[\\begin{array}{c}\nx_0x_1^{\\frac{n}{n+1}}x_2^{\\frac{n-1}{n+1}}\\cdot...\\cdot x_n^{\\frac{1}{n+1}}\\\\\nx_1^{\\frac{1}{n+1}}x_2^{\\frac{2}{n+1}}\\cdot...\\cdot x_n^{\\frac{n}{n+1}}x_{n+1}\\\\\nx_{n+2}\n\\end{array}\\right].\n\\end{equation}\nThese coordinates are only defined up to a choice of $(n+1)^{\\rm st}$\nroot of unity for each $x_i$. 
This accounts for a residual $\mathbb{Z}_{n+1}\subset\n(\mathbb{C}^*)^n$ acting with dual representations on the first two coordinates. We\nidentify this residual $\mathbb{Z}_{n+1}$ as the subgroup generated by \n$\left(\omega,\omega^2, \dots, \omega^n\right)\in(\mathbb{C}^*)^n$, where\n $\omega=\mathrm{e}^{\frac{2\pi \mathrm{i} }{n+1}}$. This realizes the quotient\n\eqref{orbgit} as the 3-fold $A_n$ singularity where $\mathbb{Z}_{n+1}=\langle \omega\n\rangle$ acts by $\omega \cdot(z_1,z_2,z_3)=(\omega z_1,\omega^{-1} z_2,z_3)$. \n\n\begin{rmk}\label{dualrmk}\nThe weights of the $\mathbb{Z}_{n+1}$ action on the corresponding fibers of $T\mathcal{X}$ are\ninverse to the weights on the local coordinates because a local trivialization\nof the tangent bundle is given by $\frac{\partial}{\partial z^\alpha}$ where\n$z^\alpha$ are the local coordinates. \\\n\end{rmk}\n\n\n\begin{figure}\n\centering\n\includegraphics{webdiagrama3.pdf}\n\caption{The toric web diagrams for $Y$ and $\mathcal{X}$ for $n=3$. Fixed points and invariant lines are labelled, together with the relevant torus and representation weights.}\n\label{fig:web}\n\end{figure}\n\nThe geometry of the space $Y$ is captured by the toric web diagram in Figure\n\ref{fig:web}. In particular, $Y$ has $n+1$ torus fixed points (corresponding\nto the $n+1$ 3-dimensional cones in the fan) and a chain of $n$ torus\ninvariant lines connecting these points. We label the points\n$p_1$,...,$p_{n+1}$ where $p_i$ corresponds to the cone spanned by $(0,0,1)$,\n$(1,i-1,0)$, and $(1,i,0)$, and we label the torus invariant lines by\n$L_1$,...,$L_n$ where $L_i$ connects $p_i$ to $p_{i+1}$. We also denote by\n$L_0$ and $L_{n+1}$ the torus invariant (affine) lines corresponding to the\n2-dimensional cones spanned by the rays $(1,0,0), (0,0,1)$ and $(1,n,0),\n(0,0,1)$, respectively. 
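In the simplest case $n=1$, the matrices \eqref{eq:MN} and the residual action can be checked by hand:

```latex
M=\left[\begin{array}{cccc} 1 & -2 & 1 & 0 \end{array}\right],\qquad
N=\left[\begin{array}{cccc} 1 & 1 & 1 & 0\\ 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 1\end{array}\right],
\qquad N M^{T}=0,
```

since the rows of $N$ pair with $(1,-2,1,0)$ to give $1-2+1$, $0-2+2$ and $0$; the residual $\mathbb{Z}_2$ is generated by $\omega=-1$, acting as $(z_1,z_2,z_3)\mapsto(-z_1,-z_2,z_3)$, i.e. $\mathcal{X}$ is the threefold $A_1$ singularity.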
From the quotient \\eqref{resgit} we compute\nhomogeneous coordinates on the line $L_i$\n\\begin{equation}\\label{projcoords}\n\\left[\\begin{array}{c}\nx_0^ix_1^{i-1}\\cdot...\\cdot x_{i-1}\\\\\nx_{n+1}^{n+1-i}x_{n-1}^{n-i}\\cdot...\\cdot x_{i+1}\n\\end{array}\\right]\n\\end{equation}\nwhere $p_i\\leftrightarrow[0:1]$ and $p_{i+1}\\leftrightarrow[1:0]$. \\\\\n\n\nOn the resolution, $H_2(Y)$ is generated by the torus invariant lines $L_i$.\nDefine $\\gamma_i\\in H^2(Y)$ to be dual to $L_i$. The $\\gamma_i$ form a\nbasis of $H^2(Y)$; denote the corresponding line bundles by $\\mathcal{O}(\\gamma_i)$.\nNote that $\\mathcal{O}(\\gamma_i)$ restricts to $\\mathcal{O}(1)$ on $L_i$ and $\\mathcal{O}$ on $L_j$ if\n$j\\neq i$ and this uniquely determines the line bundle $\\mathcal{O}(\\gamma_i)$. On the\norbifold, line bundles correspond to $\\mathbb{Z}_{n+1}$ equivariant line bundles on\n$\\mathbb{C}^3$. We denote $\\mathcal{O}_k$ the line bundle where $\\mathbb{Z}_{n+1}$ acts on fibers\nwith weight $\\omega^k$; then, for example, $T_\\mathcal{X}=\\mathcal{O}_{-1}\\oplus\\mathcal{O}_{1}\\oplus\\mathcal{O}_0$ where the subscripts are computed modulo $n+1$ (c.f. Remark \\ref{dualrmk}).\n\n\\subsubsection{Classical equivariant geometry}\\label{sec:elb}\n\nGiven that we are working with noncompact targets, all of our quantum\ncomputations utilize Atiyah-Bott localization with respect to an additional\n$T=\\mathbb{C}^*$ action on our spaces. Let $T$ act on $\\mathbb{C}^{n+3}$ with weights $(\\alpha_1,0,...,0,\\alpha_2,-\\alpha_1-\\alpha_2)$. Then the induced action on the orbifold and resolution can be read off from the local coordinates in \\eqref{orbcoords} and \\eqref{projcoords}. In particular, the three weights on the fibers of $T_\\mathcal{X}$ are $-\\alpha_1,-\\alpha_2, \\alpha_1+\\alpha_2$. The $T$-equivariant Chen-Ruan cohomology $H(\\mathcal{X})$ is\nby definition the $T$-equivariant cohomology of the inertia stack\n$\\mathcal{I}\\mathcal{X}$. 
The latter has components $\mathcal{X}_1, \dots, \mathcal{X}_n, \mathcal{X}_{n+1}$, the last being the untwisted sector\footnote{While it is more common to index the untwisted sector by $0$, we make this choice of notation for the sake of the computations of Section \ref{sec:j}, where certain matrices are triangular with this ordering.}: \n\bea\n\mathcal{X}_k &=& [\mathbb{C}\/\mathbb{Z}_{n+1}], \quad 1\leq k \leq n, \nonumber \\\n\mathcal{X}_{n+1} &=& [\mathbb{C}^3\/\mathbb{Z}_{n+1}].\n\end{eqnarray}\nWriting $\mathbf{1}_k$, $k=1, \dots, n+1$ for the fundamental class of\n$\mathcal{X}_k$, we obtain a $\mathbb{C}(\nu)$ basis of $H(\mathcal{X})$; the age-shifted grading\nassigns degree $0$ to the fundamental class of the untwisted sector, and\ndegree $1$ to every twisted sector. \nThe Atiyah-Bott localization isomorphism is trivial, i.e. the fundamental class on each twisted sector is identified with the unique $T$-fixed point on that sector. We abuse notation and use $\mathbf{1}_k$ to also denote the fixed point basis. \nThe equivariant Chen-Ruan pairing in orbifold cohomology is\n\beq\n\eta\l(\mathbf{1}_i, \mathbf{1}_j\r)_\mathcal{X} = \frac{\delta_{i,n+1}\delta_{j,n+1}+\alpha_1\alpha_2 \delta_{i+j,n+1}}{\alpha_1 \alpha_2(\alpha_1+\alpha_2)(n+1)}.\n\eeq\n\\\n\nOn the resolution $Y$, the three weights on the tangent bundle at $p_i$ are \beq(w_i^-,w_i^+,\alpha_1+\alpha_2)\triangleq((i-1)\alpha_1+(-n+i-2)\alpha_2,-i\alpha_1+(n+1-i)\alpha_2,\alpha_1+\alpha_2).\eeq\nMoreover, $\mathcal{O}(\gamma_j)$ is canonically linearized via the homogeneous coordinates in \eqref{projcoords}. The weight of $\mathcal{O}(\gamma_j)$ at the fixed point $p_i$ is\n\begin{equation}\label{canwts}\n\begin{cases}\n(n+1-j)\alpha_2 & i\leq j,\\\nj\alpha_1 & i>j.\n\end{cases}\n\end{equation}\nDenote by $\{P_i\}_{i=1}^{n+1}$ the equivariant cohomology classes\ncorresponding to the fixed points of $Y$. 
Choosing the canonical\nlinearization given in \eqref{canwts}, the Atiyah-Bott localization\nisomorphism on $Y$ is given by\n\bea\n\label{eq:ab1}\n\gamma_j &\longrightarrow & \sum_{i\leq j}(n+1-j)\alpha_2 P_i +\n\sum_{i>j}j\alpha_1P_i, \\\n\gamma_{n+1} & \longrightarrow &\sum_{i=1}^{n+1}P_i,\n\label{eq:ab2}\n\end{eqnarray}\nwhere $\gamma_{n+1}$ is the fundamental class on $Y$. \nGenus zero, degree zero GW invariants are given by equivariant triple\nintersections on $Y$,\n\beq\n\left\langle \gamma_i, \gamma_j, \gamma_k \right\rangle^Y_{0,3,0} = \int_Y\gamma_i\cup\gamma_j\cup\gamma_k.\n\eeq\nWith $i\leq j \leq k$, these triple intersections can be evaluated by localization using \eqref{eq:ab1}-\eqref{eq:ab2}.\n\n\begin{defn}\nA genus zero {\it double Hurwitz space} $\mathcal{H}_\lambda$ is a moduli space of covers $\lambda:C\to\mathbb{P}^1$ of the projective line by a genus zero curve $C$, with prescribed branching behaviour at $0$ and $\infty$; it comes with a universal family $\pi:\mathcal{U}\to\mathcal{H}_\lambda$, a universal map $\lambda$, and marked sections $P_i$, $\Sigma_i$:\n\beq\n\xymatrix{\nC_{[\lambda]} \ar@{^{(}->}[r]& \mathcal{U}\ar[d]^\pi \ar[r]^{\lambda} & \mathbb{P}^1 \\\n [\lambda] \ar@{^{(}->}[r]^{pt.} \ar@\/^1pc\/[u]^{P_i}& \mathcal{H}_\lambda \ar@\/^1pc\/[u]^{\Sigma_i}& \n}\n\eeq\n\end{defn}\n\n\begin{rem}\nA genus zero double Hurwitz space is naturally isomorphic to $M_{0,n+3}$, and is therefore an open set in affine space $\mathbb{A}^n$. This is the only case that we utilize and it may seem overly sophisticated to use the language of moduli spaces to then work on such a simple object. We choose to do so to connect to the work of Dubrovin \cite{Dubrovin:1992eu, Dubrovin:1994hc} and Romano \cite{2012arXiv1210.2312R} (after Saito \cite{MR723468}; see also \cite{Krichever:1992qe}), who studied\nexistence and construction of Frobenius structures on arbitrary double Hurwitz spaces. 
\n\\end{rem}\nLet \n$\\phi\\in \\Omega^1_{C}(\\log (\\lambda))$ be a meromorphic one form having simple poles at the support of $(\\lambda)$ with\nconstant residues; we call $(\\lambda, \\phi)$ respectively the {\\it\n superpotential} and the {\\it quasi-momentum differential} of $\\mathcal{H}_\\lambda$.\nBorrowing the terminology from \\cite{2012arXiv1210.2312R, phdthesis-romano},\nwe say that an analytic Frobenius manifold structure $(\\mathcal{F}, \\circ, \\eta)$ on\na complex manifold $\\mathcal{F}$ is\n{\\it weak} if\n\\ben\n\\item the $\\circ$-multiplication gives a commutative and associative\nunital $\\mathcal{O}$-algebra structure\non the space of holomorphic vector fields on $\\mathcal{F}$;\n\\item the metric $\\eta$ provides a flat\n pairing which is Frobenius w.r.t. to $\\circ$;\n\\item the algebra structure\nadmits a {\\it potential}, meaning that the 3-tensor\n\\beq\nR(X,Y,Z) \\triangleq \\eta(X,Y \\circ Z)\n\\eeq\nsatisfies the integrability condition\n\\beq\n(\\nabla^{(\\eta)} R)_{[\\alpha \\beta] \\gamma\\delta}=0.\n\\eeq\n\\end{enumerate}\nIn particular, this encompasses non-quasihomogeneous solutions of\nWDVV, and solutions without a flat identity element.\n\n\\begin{prop}[\\cite{2012arXiv1210.2312R}]\nFor vector fields $X$, $Y$, $Z \\in \\mathfrak{X}(\\mathcal{H}_\\lambda)$, define the\nnon-degenerate symmetric pairing $g$ and quantum product $\\star$ as\n\\bea\n\\label{eq:gmetr}\ng(X,Y) &\\triangleq & \\sum_{P\\in\\mathrm{supp}(\\lambda)}\\Res_P\\frac{X(\\log\\lambda)\n Y(\\log\\lambda)}{\\mathrm{d}_\\pi \\log\\lambda}\\phi^2, \\\\\ng(X,Y \\star Z) &\\triangleq & \\sum_{P\\in\\mathrm{supp}(\\lambda)}\\Res_P\\frac{X(\\log\\lambda)\n Y(\\log\\lambda) Z(\\log\\lambda)}{\\mathrm{d}_\\pi \\log\\lambda}\\phi^2,\n\\label{eq:star}\n\\end{eqnarray}\nwhere $\\mathrm{d}_\\pi$ denotes the relative differential with respect to the\nuniversal family (i.e. the differential in the fiber direction). 
Then the triple $\\mathcal{F}_{\\lambda,\\phi}=\\l(\\mathcal{H}_{\\lambda}, \\star, g\\r)$ endows\n$\\mathcal{H}_{\\lambda}$ with a weak Frobenius manifold structure.\n\\end{prop}\n\\begin{rmk}\n\\label{rmk:adual}\nEquations \\eqref{eq:gmetr}-\\eqref{eq:star} are the\nDijkgraaf--Verlinde--Verlinde formulae \\cite{Dijkgraaf:1990dj} for a\ntopological Landau--Ginzburg model on a sphere with $\\log\\lambda(q)$ as its\nsuperpotential. The case in which $\\lambda(q)$ itself is used as the\nsuperpotential gives rise to a {\\it different} Frobenius manifold structure,\nwhich is the case originally studied in \\cite[Lecture 5]{Dubrovin:1994hc}; the\nsituation at hand is its Dubrovin-dual\nin the sense of \\cite{MR2070050}, where $g$ plays the role of the\nintersection form and $\\star$ the dual product. \n\\end{rmk}\n\n\\subsubsection[Twisted periods and the QDE]{Twisted periods and the quantum\n differential equation}\n\nThe quantum $D$-module associated to $\\mathcal{F}_{\\lambda, \\phi}$,\n\\beq\n\\nabla^{(g,z)} \\omega =0,\n\\label{eq:QDELG}\n\\eeq\nwhere\n\\beq\n\\label{eq:defconn}\n\\nabla^{(g,z)}_X(Y,z) \\triangleq \\nabla^{(g)}_X Y+z^{-1} X \\star Y\n\\eeq\nenjoys a neat description in terms of the Landau--Ginzburg data $(\\lambda, \\phi)$:\nin particular, flat frames for \\eqref{eq:QDELG} can be computed from the\ntwisted Picard--Lefschetz theory of $\\lambda$ \\cite{MR936695, MR2070050, Brini:2011ff}.\nIn contrast with the classical Picard--Lefschetz theory,\nthis corresponds to\nconsidering cycles $\\gamma \\in H_1(\\mathbb{C} \\setminus H,\n\\mathbf{L})$ in the {\\it complement} of the zero-dimensional hypersurface\n$H=\\lambda^{-1}(0)$ cut\nby $\\lambda$,\nwhere the linear local system $\\mathbf{L}$ is defined by multiplication by\n$\\mathrm{e}^{2\\pi\\mathrm{i}\/z}$ when moving along a simple loop around\nany single point of $H$. Elements $\\gamma$ of the homology group with coefficients twisted by ${\\bf L}$\nare the {\\it twisted cycles} of $\\lambda$. 
\\\\\n\nOscillating integrals around a basis of twisted cycles of\nthe form\n\\beq\n\\Pi_{\\lambda, \\phi, \\gamma}(z) \\triangleq \\int_\\gamma \\lambda^{1\/z} \\phi\n\\label{eq:periods}\n\\eeq\nare called {\\it\n twisted periods}\\footnote{To be completely consistent with\n \\cite{MR2070050} we should more correctly call these the {\\it\n twisted periods} of $\\mathcal{F}_{\\mathrm{e}^\\lambda,\\phi}$. See Remark \\ref{rmk:adual}.}\nof $\\mathcal{F}_{\\lambda,\\phi}$. Denote by $\\mathrm{Sol}_{\\lambda,\n \\phi}$ the solution space of \\eqref{eq:QDELG},\n\\beq\n\\mathrm{Sol}_{\\lambda,\n \\phi} = \\{s \\in \\mathfrak{X}(\\mathcal{F}_{\\lambda,\\phi}), \\nabla^{(g,z)}s=0 \\}.\n\\eeq\nWe have the following\n\\begin{prop}[Dubrovin, \\cite{MR2070050}]\n\\label{thm:tp}\nThe solution space of the quantum differential equations of\n$\\mathcal{F}_{\\lambda,\\phi}$ is generated by gradients of the twisted periods\n\\eqref{eq:periods}\n\\beq\n\\mathrm{Sol}_{\\lambda, \\phi} = \\mathrm{span}_{\\mathbb{C}((z))}\n\\{\\nabla^{(g)} \\Pi_{\\lambda,\\phi,\\gamma} \\}_{\\gamma \\in H_1(\\mathbb{C} \\setminus H,\n\\mathbf{L})}\n\\eeq\n\\end{prop}\nIn particular, Proposition \\ref{thm:tp} implies that the quantum $D$-modules arising from weak Frobenius structures on genus zero\ndouble Hurwitz spaces are described by systems of period integrals of\ngeneralized hypergeometric type. \\\\\n\n\\begin{rmk}\nSince $\\lambda$ is a genus zero covering map, in an affine chart parametrized by $q\\in\\mathbb{C}$ its logarithm takes the\nform \n\\beq\n\\log\\lambda = \\sum_{i}a_i \\log(q-q_i),\n\\label{eq:logl}\n\\eeq\nwhere $a_i\\in \\mathbb{Z}$. In fact, the existence of the weak Frobenius structure\n\\eqref{eq:gmetr}-\\eqref{eq:star} extends \\cite{phdthesis-romano} to the case where $\\mathrm{d}_\\pi\\log\\lambda$\nis a meromorphic function on $C$; this in particular encompasses the case where $a_i\\in\n\\mathbb{C}$ in \\eqref{eq:logl}. 
As far as flat coordinates of the deformed connection\n$\\nabla^{(g,z)}$ are concerned, Proposition~\\ref{thm:tp} continues to hold,\n the only proviso being that the locally constant sheaf ${\\bf L}$ be replaced\n with the unique local system specified by the monodromy weights $a_i\/z$ in\n \\eqref{eq:periods}, \\eqref{eq:logl}. \\\\\n\\end{rmk}\n\n\\subsection{A one-dimensional Landau--Ginzburg mirror}\n\nIt is known that the quantum $D$-modules associated to the equivariant Gromov--Witten theory of\nthe $A_n$-singularity $\\mathcal{X}$ and its resolution $Y$ admit a Landau--Ginzburg\ndescription in terms of $n$-dimensional oscillating integrals\n\\cite{MR1408320, MR1328251, MR2700280, MR2529944}. We provide here an alternative description\nvia one-dimensional twisted periods of a genus zero double Hurwitz space\n$\\mathcal{F}_{\\lambda, \\phi}$. \\\\\n\n\nLet $\\mathcal M_A$ be $M_{0,n+3}$. By choosing the last three sections to be the constant sections $0, 1, \\infty$, we realize $\\mathcal M_A$ as an open subset of $\\mathbb{A}^{n}$ and trivialize the universal family. \nIn homogeneous coordinates $[u_0:\\dots:u_n]$ for $\\mathbb{P}^n$,\n\\beq\n\\mathcal M_A= \\mathbb{P}^n\\setminus \\mathrm{Proj} \\frac{\\mathbb{C}[u_0, \\dots, u_n]}{{\\left\\langle\n u_i(u_j-u_k)\\right\\rangle}} \\triangleq \\mathbb{P}^n\\setminus \\mathrm{discr} \\mathcal M_A .\n \\label{eq:discr}\n\\eeq\nLet\n$\\kappa_i=u_i\/u_0$, $i=1, \\dots, n$ be a set of global coordinates on $\\mathcal M_A$\nand $q$ be an affine coordinate on the fibers of the universal\nfamily. 
We give $\\mathbb{C}\\times \\mathcal M_A$ the structure of a one parameter family of double Hurwitz spaces by specifying the pair $(\\lambda, \\phi)$; we call \n$\\kappa_0$ the coordinate in the first factor, and define \n\\beq\n\\lambda(\\kappa_0, \\ldots \\kappa_n, q) = C_n(\\kappa)\n\\frac{q^{(n+1)\\alpha_1}}{\\left(1-q\\right)^{\\alpha_1+\\alpha_2}} \\prod _{k=1}^{n}\n\\left(1-q\\kappa_k\\right)^{-\\alpha_1-\\alpha_2}, \n\\label{eq:superpot}\n\\eeq\n\\beq\n\\phi(q) = \\frac{1}{\\alpha_1+\\alpha_2}\\frac{\\mathrm{d} q}{q},\n\\label{eq:primeform}\n\\eeq\nand\n\\bea\nC_n(\\kappa) &\\triangleq& \\prod_{j=0}^n \\kappa_j^{\\alpha_1}.\n\\end{eqnarray}\nThen Eqs.~\\eqref{eq:gmetr}-\\eqref{eq:star} and\n\\eqref{eq:superpot}-\\eqref{eq:primeform} define a Frobenius structure $\\mathcal{F}_{\\lambda,\n \\phi}$ on $\\mathbb{C}\\times\\mathcal M_A$; the discriminant ideal in \\eqref{eq:discr}\ncoincides with the locus where the $D$-module \\eqref{eq:defconn} is singular,\nand the irreducible components $V(\\kappa_i-\\kappa_j)$, for $i,j>0$, correspond to the loci where the\n$\\star$-product \\eqref{eq:star} blows-up. We have the following\n\\begin{thm}\n\\label{thm:mirror}\n\\ben\n\\item Let \n\\bea\n\\label{eq:kappa0Y}\n\\kappa_0 &=& \\mathrm{e}^{(t_{n+1}+\\delta_Y)\/\\alpha_1}, \\\\\n\\label{eq:kappaY}\n\\kappa_j &=& \\prod_{i=j}^n \\mathrm{e}^{t_i}, \\quad 1\\leq j\\leq n.\n\\end{eqnarray}\nwhere $\\delta_Y$ is an arbitrary constant. Then, in a neighbourhood $V_Y$ of $\\{ \\mathrm{e}^{t_i}=0\\}$, \n\\beq\n \\mathcal{F}_{\\lambda,\\phi} \\simeq QH_T(Y).\n\\eeq\n\\item Let\n\\bea\n\\label{eq:kappa0X}\n\\kappa_0 &=& \\mathrm{e}^{(x_{n+1}+\\delta_\\mathcal{X})\/\\alpha_1}, \\\\\n\\kappa_j &=& \\exp\\l[-\\frac{2\\mathrm{i}}{n+1}\\l(\\pi j+ \\sum_{k=1}^n\n \\mathrm{e}^{-\\frac{\\mathrm{i} \\pi k (j-1)}{n+1}} \\sin \\left(\\frac{\\pi j\n k}{n+1}\\right)x_k\\r)\\r], \\quad 1\\leq k\\leq n. 
\n\\label{eq:kappakX}\n\\end{eqnarray}\nwhere $\\delta_\\mathcal{X}$ is an arbitrary constant. Then, in a neighbourhood $V_\\mathcal{X}$ of $\\{x_i=0\\}$,\n\\beq\n \\mathcal{F}_{\\lambda,\\phi} \\simeq QH_T(\\mathcal{X}).\n\\eeq\n\\end{enumerate}\n\\end{thm}\n\\begin{proof} The proof is a straightforward computation from the\n Landau--Ginzburg formulae \\eqref{eq:gmetr}-\\eqref{eq:star}.\n\\ben\n\\item \nConsider the three-point\ncorrelator $R(\\kappa_i {\\partial}_i, \\kappa_j {\\partial}_j, \\kappa_k {\\partial}_k)$, where\n${\\partial}_k \\triangleq \\frac{{\\partial}}{{\\partial} \\kappa_k}$, and define\n\\bea\nR^{(l)}_{i,j,k} &\\triangleq& \\Res_{q=\\kappa_l^{-1}} \\frac{\\kappa_i \\frac{{\\partial} \\ln\\lambda}{{\\partial} \\kappa_i}\n \\kappa_j\\frac{{\\partial} \\ln\\lambda}{{\\partial} \\kappa_j} \\kappa_k \\frac{{\\partial}\n \\ln\\lambda}{{\\partial} \\kappa_k} }{(\\alpha_1+\\alpha_2)^2 q \\frac{{\\partial} \\ln\\lambda}{{\\partial} q}}\\frac{\\mathrm{d} q}{q}.\n\\end{eqnarray}\nInspection shows that $R^{(l)}_{ijk}=0$ unless $l=i=j$, $l=i=k$ or\n$l=j=k$. Assume w.l.o.g. $l=j=i$, and suppose that $i,k>0$. 
We compute\n\\bea\n\\label{eq:lgqu1}\nR^{(i)}_{i,i,k}\n&=& \n\\frac{\\kappa_i}{\\kappa_k-\\kappa_i}+\\frac{\\alpha_2}{\\alpha_1+\\alpha_2}, \\\\\n\\label{eq:lgqu2}\nR^{(i)}_{i,i,i}\n\\begin{comment}\n&=& \\Res_{q=\\kappa_i^{-1}} \\Bigg\\{\\frac{(\\alpha_1+\\alpha_2\n q\\kappa_i)^3}{(\\alpha_1+\\alpha_2)^3(1-q\\kappa_i)^2}\n \\frac{1}{\\frac{(n+1)\\alpha_1 (1-\\kappa_i\n q)}{\\alpha_1+\\alpha_2}+\\frac{q(1-\\kappa_i q)}{1-q}+\\kappa_i q+\\sum_{l\\neq\n i}^n\\frac{\\kappa_l\n q(1-\\kappa_i q)}{1-\\kappa_l q}}\\frac{\\mathrm{d} q}{q}\\Bigg\\} \\nn \\\\\n&=& \\Res_{q=\\kappa_i^{-1}} \\Bigg\\{\\frac{(\\alpha_1+\\alpha_2\n q\\kappa_i)^3}{(\\alpha_1+\\alpha_2)^3(\\kappa_i^{-1}-q)^2 \\kappa_i^2}\n \\frac{1}{\\kappa_i q^2}\n\\frac{1}{1-(q-\\kappa_i^{-1})\n\\l[ \\frac{(n+1)\\alpha_1}{q(\\alpha_1+\\alpha_2)}\n+\\frac{1}{1-q}\n +\\sum_{l\\neq i}^n\\frac{\\kappa_l}{1-\\kappa_l q}\\r]}\\mathrm{d} q \\Bigg\\} \n\\nn \\\\\n&=& \\Res_{q=\\kappa_i^{-1}} \\Bigg\\{\\frac{(\\alpha_2-2\\alpha_1)(\\alpha_1+\\alpha_2)^2}{(\\alpha_1+\\alpha_2)^3(q-\\kappa_i^{-1})}\n \\mathrm{d} q \n+\\frac{(\\alpha_1+\\alpha_2)^3}{(\\alpha_1+\\alpha_2)^3(q-\\kappa_i^{-1}) \\kappa_i^2}\n \\frac{1}{\\kappa_i q^2} \\nn \\\\\n& & \n\\l[\\frac{(n+1)\\alpha_1}{q(\\alpha_1+\\alpha_2)}\n+\\frac{1}{1-q}\n +\\sum_{l\\neq i}^n\\frac{\\kappa_l}{1-\\kappa_l q}\\r] \\mathrm{d} q \\Bigg\\} \n\\nn \\\\\n\\end{comment}\n&=& \\frac{(n-1) \\alpha_1+\\alpha_2}{\\alpha_1+\\alpha_2}+\n\\sum_{l\\neq i}^{n+1}\\frac{\\kappa_l}{\\kappa_i-\\kappa_l}, \\\\\nR^{(i)}_{0,i,i} \n&=& -\\frac{1}{\\alpha_1+\\alpha_2}.\n\\end{eqnarray}\nMoreover, for all $i$, $j$ and $k$ we have\n\\bea\nR^{(0)}_{i,j,k} &\\triangleq& \\Res_{q=0} \\frac{\\kappa_i \\frac{{\\partial} \\ln\\lambda}{{\\partial} \\kappa_i}\n \\kappa_j\\frac{{\\partial} \\ln\\lambda}{{\\partial} \\kappa_j} \\kappa_k \\frac{{\\partial}\n \\ln\\lambda}{{\\partial} \\kappa_k} }{(\\alpha_1+\\alpha_2)^2 q \\frac{{\\partial} \\ln\\lambda}{{\\partial} q}}\\frac{\\mathrm{d}\n q}{q},\\nn 
\\\\\n&=& \\frac{\\alpha_1^{2-\\delta_{i,n+1}-\\delta_{j,n+1}-\\delta_{k,n+1}}}{(n+1)(\\alpha_1+\\alpha_2)^2} \\\\\nR^{(\\infty)}_{i,j,k} &\\triangleq& \\Res_{q=\\infty} \\frac{\\kappa_i \\frac{{\\partial} \\ln\\lambda}{{\\partial} \\kappa_i}\n \\kappa_j\\frac{{\\partial} \\ln\\lambda}{{\\partial} \\kappa_j} \\kappa_k \\frac{{\\partial}\n \\ln\\lambda}{{\\partial} \\kappa_k} }{(\\alpha_1+\\alpha_2)^2 q \\frac{{\\partial} \\ln\\lambda}{{\\partial} q}}\\frac{\\mathrm{d}\n q}{q},\\nn \\\\\n&=& -\\frac{(-\\alpha_2)^{2-\\delta_{i,n+1}-\\delta_{j,n+1}-\\delta_{k,n+1}}}{(n+1)(\\alpha_1+\\alpha_2)^2}. \n\\label{eq:resinf}\n\\begin{comment}\n\\\\\nR^{(0)}_{0,0,0} &\\triangleq& \\Res_{q=0} \\frac{1}{(\\alpha_1+\\alpha_2)^2}\\frac{1}{(n+1)\\alpha_1+\\l(\\alpha_1+\\alpha_2\\r) \\sum_{l=1}^{n+1}\\frac{\\kappa_l\n q}{1-\\kappa_l q}}\\frac{\\mathrm{d}\n q}{q},\\nn \\\\\n&=& \\frac{1}{(\\alpha_1+\\alpha_2)^2 (n+1)\\alpha_1} \\\\\nR^{(\\infty)}_{0,0,0} &\\triangleq& \\Res_{q=\\infty} \\frac{1}{(\\alpha_1+\\alpha_2)^2}\\frac{1}{(n+1)\\alpha_1+\\l(\\alpha_1+\\alpha_2\\r) \\sum_{l=1}^{n+1}\\frac{\\kappa_l\n q}{1-\\kappa_l q}}\\frac{\\mathrm{d}\n q}{q},\\nn \\\\\n&=& \\frac{1}{(\\alpha_1+\\alpha_2)^2 (n+1)\\alpha_2}\n\\\\\nR^{(0)}_{0,0,i} &\\triangleq& \\Res_{q=0} \\frac{\\alpha_1+\\alpha_2 q\\kappa_i}{(\\alpha_1+\\alpha_2)^2(1-q\\kappa_i)}\\frac{1}{(n+1)\\alpha_1+\\l(\\alpha_1+\\alpha_2\\r) \\sum_{l=1}^{n+1}\\frac{\\kappa_l\n q}{1-\\kappa_l q}}\\frac{\\mathrm{d}\n q}{q},\\nn \\\\\n&=& \\frac{1}{(\\alpha_1+\\alpha_2)^2 (n+1)} \\\\\nR^{(\\infty)}_{0,0,i} &\\triangleq& \\Res_{q=\\infty} \\frac{\\alpha_1+\\alpha_2 q\\kappa_i}{(\\alpha_1+\\alpha_2)^2(1-q\\kappa_i)}\\frac{1}{(n+1)\\alpha_1+\\l(\\alpha_1+\\alpha_2\\r) \\sum_{l=1}^{n+1}\\frac{\\kappa_l\n q}{1-\\kappa_l q}}\\frac{\\mathrm{d}\n q}{q},\\nn \\\\\n&=& -\\frac{1}{(\\alpha_1+\\alpha_2)^2 (n+1)} \n\\\\\nR^{(0)}_{0,j,i} &\\triangleq& \\Res_{q=0} \\frac{(\\alpha_1+\\alpha_2 q\\kappa_i)(\\alpha_1+\\alpha_2 
q\\kappa_j)}{(\\alpha_1+\\alpha_2)^2(1-q\\kappa_i)(1-q\\kappa_j)}\\frac{1}{(n+1)\\alpha_1+\\l(\\alpha_1+\\alpha_2\\r) \\sum_{l=1}^{n+1}\\frac{\\kappa_l\n q}{1-\\kappa_l q}}\\frac{\\mathrm{d}\n q}{q},\\nn \\\\\n&=& \\frac{\\alpha_1}{(\\alpha_1+\\alpha_2)^2 (n+1)} \\\\\nR^{(\\infty)}_{0,j,i} &\\triangleq& \\Res_{q=\\infty} \\frac{(\\alpha_1+\\alpha_2 q\\kappa_i)(\\alpha_1+\\alpha_2 q\\kappa_j)}{(\\alpha_1+\\alpha_2)^2(1-q\\kappa_i)(1-q\\kappa_j)}\\frac{1}{(n+1)\\alpha_1+\\l(\\alpha_1+\\alpha_2\\r) \\sum_{l=1}^{n+1}\\frac{\\kappa_l\n q}{1-\\kappa_l q}}\\frac{\\mathrm{d}\n q}{q},\\nn \\\\\n&=& \\frac{\\alpha_2}{(\\alpha_1+\\alpha_2)^2 (n+1)} \n\\\\\n\\end{comment}\n\\end{eqnarray}\nIt is immediate to see that \\eqref{eq:lgqu1}-\\eqref{eq:lgqu2} under the\nidentification \\eqref{eq:kappaY} imply that the quantum part of the\nthree-point correlator $R({\\partial}_{t_{i_1}}{\\partial}_{t_{i_2}}{\\partial}_{t_{i_3}})$\ncoincides with that of $\\left\\langle\\bra p_{i_1}, p_{i_2}, p_{i_3} \\right\\rangle\\ket^Y_{0}$ in\n\\eqref{eq:yukY}. A tedious, but straightforward computation shows that\n\\eqref{eq:lgqu1}-\\eqref{eq:resinf} yield the expressions\nfor the classical triple intersection numbers of $Y$. \\\\\n\\item This is a consequence of the computation above and Theorem \\ref{thm:crc}.\n\\end{enumerate}\n\\end{proof}\n\\begin{figure}\n\\includegraphics{pochcont.pdf}\n\\caption{The double loop contour $\\gamma_4$ for $n=4$.}\n\\label{fig:pochcont}\n\\end{figure}\n\n\\begin{rmk}\nThe freedom of shift by $\\delta_\\mathcal{X}$ and $\\delta_Y$ respectively along\n$H^0(\\mathcal{X})$ and $H^0(Y)$ in \\eqref{eq:kappa0Y},\n\\eqref{eq:kappa0X} is a consequence of the restriction of the String Axiom to the small phase\nspace. 
We set $\\delta_\\mathcal{X}=\\delta_Y=0$ throughout this section, but it will turn\nout to be useful to reinstate the shifts in the computations of Section~\\ref{sec:compsymp}.\n\\label{rmk:string}\n\\end{rmk}\n\n\n\\begin{rmk}\nIt should be possible to infer the form of the superpotential\n\\eqref{eq:superpot} from the equivariant GKZ system of $\\mathcal{X}$ and $Y$ by\narguments similar to the non-equivariant case (see e.g. \\cite[Appendix\n A]{MR2510741}). The conceptual path we followed to conjecture the form\n\\eqref{eq:superpot} for a candidate dual Landau--Ginzburg model parallels the study of\nthe equivariant local $\\mathbb{CP}^1$ theory in \\cite{Brini:2011ff}; there, the\nexistence of a relation with a reduction of the 2-dimensional Toda hierarchy allows one to derive\na Landau--Ginzburg mirror model through the dispersionless Lax formalism for\n2-Toda. More generally, $(n,m)$-graded reductions \\cite{phdthesis-romano} of $2$-Toda are believed to be relevant for the\nequivariant Gromov--Witten theory of local $\\mathbb{P}(n,m)$ \\cite{agps}; the\ndegenerate limit $m=0$ corresponds to the threefold $A_n$ singularity. In this case, the\ndispersionless 2-Toda Lax function reduces to \\eqref{eq:superpot}. \\\\\n\\end{rmk}\n\n\\subsection{The global quantum $D$-module}\n\\label{sec:globdpic}\n\nAn immediate corollary of Theorem~\\ref{thm:mirror} and Proposition~\\ref{thm:tp} is a concrete description of a global quantum $D$-module $(\\mathcal M_A, F, \\nabla,\nH(,)_g)$ interpolating between $\\mathrm{QDM}(\\mathcal{X})$ and $\\mathrm{QDM}(Y)$. Let\n$F \\triangleq T\\mathcal{F}_{\\lambda, \\phi}$ be\nendowed with the family of connections $\\nabla=\\nabla^{(g,z)}$ as in\n\\eqref{eq:defconn} and for $\\nabla$-flat sections $s_1$, $s_2$ let\n\\beq\nH(s_1, s_2)_g = g(s_1(\\kappa, -z),s_2(\\kappa,z)).\n\\eeq\nLet now $V_\\mathcal{X}$ and $V_Y$ be neighbourhoods of $\\{\\kappa_i=\\omega^{-i}\\}$ and\n$\\{\\kappa_i=0\\}$ respectively. 
Then Theorem~\\ref{thm:mirror} can be rephrased\nas\n\\bea\n(\\mathcal{F}_{\\lambda,\\phi}, T\\mathcal{F}_{\\lambda,\\phi}, \\nabla^{(g,z)},H(,)_{g})|_{V_\\mathcal{X}} &\\simeq &\n\\mathrm{QDM}(\\mathcal{X}), \\\\\n(\\mathcal{F}_{\\lambda,\\phi}, T\\mathcal{F}_{\\lambda,\\phi}, \\nabla^{(g,z)},H(,)_{g})|_{V_Y} &\\simeq &\n\\mathrm{QDM}(Y),\n\\end{eqnarray}\nthat is, the twisted period system of $\\mathcal{F}_{\\lambda,\\phi}$ is a global quantum\n$D$-module connecting the genus zero descendent theory of $\\mathcal{X}$ and $Y$; the\ntwisted periods \\eqref{eq:periods} thus define a global flat frame for the\nquantum differential equations of $\\mathcal{X}$ and $Y$ upon analytic continuation in\nthe $\\kappa$-variables, \n\\beq\n\\mathrm{Sol}_{\\lambda,\\phi}|_{V_\\mathcal{X}} = \\mathcal{S}_\\mathcal{X}, \\quad \n\\mathrm{Sol}_{\\lambda,\\phi}|_{V_Y} = \\mathcal{S}_Y. \n\\eeq\n\n\nA canonical basis of $\\mathrm{Sol}_{\\lambda,\\phi}$ can be\n constructed as follows.\nFor the superpotential \\eqref{eq:superpot}, the twisted homology $H_1(\\mathbb{C}\n \\setminus \\lambda^{-1}(0), \\mathbf{L})$ is generated \\cite{MR1424469} by Pochhammer double\n loop contours $\\{\\xi_i\\}_{i=1}^{n+1}$ encircling the origin\n $q=0$ and $q=\\kappa_i^{-1}$, $i=1, \\dots, n+1$, as in Figure~\\ref{fig:pochcont} (alternatively $\\xi_i=[\\rho_0,\\rho_i]$, where the $\\rho$'s are simple oriented loops around each of the punctures). 
Then the integrals\n\n\n\\bea\n\\Pi_i^{(n)}(\\kappa,z) & \\triangleq & \\frac{1}{(1-\\mathrm{e}^{2\\pi\\mathrm{i} a})(1-\\mathrm{e}^{-2\\pi\\mathrm{i} b})}\\int_{\\xi_i} \\lambda^{1\/z}(q) \\frac{\\mathrm{d} q}{q} \\nn\n\\\\ &=& \\frac{C_n(\\kappa)^{\\frac{1}{z}}}{(1-\\mathrm{e}^{2\\pi\\mathrm{i} a})(1-\\mathrm{e}^{-2\\pi\\mathrm{i} b})} \\int_{\\xi_i}\nq^{a} (1-q)^{-b} \\prod _{k=1}^n \\left(1-q\n\\kappa_k\\right)^{-b}\\frac{\\mathrm{d} q}{q} \\nn \\\\\n&=& \n\\frac{C_n(\\kappa)^{\\frac{1}{z}} \\kappa_i^{-a}}{(1-\\mathrm{e}^{2\\pi\\mathrm{i} a})(1-\\mathrm{e}^{-2\\pi\\mathrm{i} b})} \n \\int_{\\xi_{n+1}} q^{a} (1-q)^{-b}\\left(1-q\/\\kappa_i\\right)^{-b}\n\\prod_{k\\neq i}^n \\left(1-q \\kappa_k\/\\kappa_i\\right)^{-b} \\frac{\\mathrm{d} q}{q} \\nn \\\\\n\\label{eq:eulerint}\n\\end{eqnarray}\nwhere we defined\n\\bea\na & \\triangleq & \\frac{(n+1) \\alpha_1}{z}, \\\\\nb & \\triangleq & \\frac{\\alpha_1+\\alpha_2}{z},\n\\end{eqnarray}\ngive a basis of twisted periods of $\\mathcal{F}_{\\lambda, \\phi}$; when $\\Re(a)>0$, $\\Re\n(b)<1$ they reduce to line integrals along chains connecting $q=0$ to\n$q=\\kappa_i^{-1}$. \\\\\n\n\nThe integrals \\eqref{eq:eulerint} can be given very explicit expressions in\nterms of known generalized hypergeometric functions \\cite{MR0422713}. 
Namely, we have\n\\bea\n\\Pi_i^{(n)}(\\kappa,z) &=& \n\\frac{\\Gamma(a)\\Gamma(1-b)}{\\Gamma(1+a-b)} C_n(\\kappa)^{\\frac{1}{z}}\n\\kappa_i^{-a}\n \\nn \\\\\n&\\times & \n\\Phi^{(n)}\\l(a,b,1+a-b;\n\\frac{1}{\\kappa_i}, \\frac{\\kappa_1}{\\kappa_i}, \\dots,\n\\frac{\\kappa_n}{\\kappa_i}\\r), \\quad 1\\leq i\\leq n, \\label{eq:pilaur1} \\\\\n\\Pi_{n+1}^{(n)}(\\kappa,z) &=& \\frac{\\Gamma(a)\\Gamma(1-b)}{\\Gamma(1+a-b)} C_n(\\kappa)^{\\frac{1}{z}}\n\\Phi^{(n)}(a,b,1+a-b;\n\\kappa_1, \\dots, \\kappa_n),\n\\label{eq:pilaur2}\n\\end{eqnarray}\nwhere we defined\n\\beq\n\\label{eq:Phi}\n\\Phi^{(M)}(a, b, c, w_1, \\dots, w_M) \\triangleq F_D^{(M)}(a; b, \\dots, b; c; w_1, \\dots, w_{M}),\n\\eeq\nand $F_D^{(M)}(a; b_1, \\dots, b_M; c; w_1, \\dots, w_M)$ in \\eqref{eq:Phi} is the\ngeneralized hypergeometric Lauricella function of type $D$ \\cite{lauric}:\n\\beq\nF_D^{(M)}(a; b_1, \\dots, b_M; c; w_1, \\dots, w_M) \\triangleq \\sum_{i_1, \\dots, i_M}\n\\frac{(a)_{\\sum_j i_j}}{(c)_{\\sum_j i_j}}\\prod_{j=1}^M \\frac{(b_j)_{i_j} w_j^{i_j}}{i_j!}.\n\\label{eq:FD}\n\\eeq\nIn \\eqref{eq:FD}, we used the Pochhammer symbol $(x)_m$ to denote the ratio $(x)_m=\n\\Gamma(x+m)\/\\Gamma(x)$. \\\\\n\n\n\\begin{rmk}\nThat flat sections of $\\mathrm{QDM}(\\mathcal{X})$ and $\\mathrm{QDM}(Y)$ are solutions of a GKZ-type\nsystem, and therefore take the form of generalized hypergeometric functions in\n$B$-model variables, is a direct consequence of equivariant mirror symmetry for toric\nDeligne--Mumford stacks; see \\cite[Appendix A]{MR2510741} for the case under\nstudy here, and \\cite{ccit2} for the general case. Less expected, however, is the fact that flat sections of $\\mathrm{QDM}(\\mathcal{X})$ and $\\mathrm{QDM}(Y)$ are\nhypergeometric functions in {\\it exponentiated flat variables} for \\eqref{eq:pair},\nthat is, in $A$-model variables. 
This is a consequence of the particular form\n\\eqref{eq:yukY}, \\eqref{eq:lgqu1}-\\eqref{eq:lgqu2}\nof the quantum product: this depends {\\it rationally} on the variables\nin the K\\\"ahler cone for $Y$ in such a way that the quantum differential equation\n\\eqref{eq:QDE} for $Y$ (and therefore $\\mathcal{X}$, via \\eqref{eq:changevar}) becomes a\ngeneralized hypergeometric system in exponentiated flat coordinates. From the vantage\npoint of mirror symmetry, the rational dependence of the $A$-model three-point\ncorrelators on the quantum parameters can be regarded as an epiphenomenon of the Hard Lefschetz\ncondition, which ensures that the inverse mirror map is a rational\nfunction of the $B$-model variables. \n\\end{rmk}\n\n\\begin{rmk}\nAs a further surprising peculiarity of the\ncase of $A_n$ singularities,\nintegral representations of the flat sections have a {\\it simpler}\ndescription in $A$-model variables: the one-dimensional Euler integrals\n\\eqref{eq:eulerint} replace here the $n$-fold Mellin-Barnes contour integrals\nthat represent solutions of the corresponding GKZ system \\cite{MR2510741,\n MR2700280}. This technical advantage is crucial for our\ncalculations of Section~\\ref{sec:compsymp}.\n The reader may find a comparison of the Hurwitz mirror with the traditional\napproach of toric mirror symmetry in \\cite{bcr2013}. \\\\\n\\end{rmk}\n\n\\subsubsection{Example: $n=2$ and the Appell system}\n\\label{sec:appell}\n\nIn this case the quantum $D$-module has rank three. 
We factor out the dependence on $C_2(\\kappa)$ in\n\\eqref{eq:pilaur1}-\\eqref{eq:pilaur2} for the flat coordinates of the deformed\nconnection as\n\\beq\nf(\\kappa_1,\\kappa_2,z) \\triangleq (\\kappa_0\\kappa_1\\kappa_2)^{-a\/3} \\tilde t(\\kappa_0,\n\\kappa_1, \\kappa_2, z).\n\\label{eq:fdefn2}\n\\eeq\nThe flatness equations for $\\nabla^{(g,z)}$ for $n=2$ reduce to a\nhypergeometric Appell $F_1$ system \\cite{MR0422713} for $f$:\n\\bea\n\\label{eq:F1eq1}\n(\\kappa_1-\\kappa_2){\\partial}_1 {\\partial}_2 f -b ({\\partial}_1-{\\partial}_2)f &=& 0, \\\\\n\\bigg[\\kappa_1(1-\\kappa_1)\\theta_1^2 +\\kappa_2(1-\\kappa_1){\\partial}_{12} \n+(a+1-2b){\\partial}_1 &+& \\nn \\\\ -(a+1+2b) \\kappa_1 {\\partial}_1 -b \\kappa_2{\\partial}_2 -a b \\bigg]f &=&\n0. \n\\label{eq:F1eq2}\n\\end{eqnarray}\nFor $n=2$, the twisted periods \\eqref{eq:pilaur1}-\\eqref{eq:pilaur2} reduce to\nAppell $F_1$ functions \\cite{MR0422713}\n\\bea\n\\Pi_1^{(2)}(\\kappa_0,\\kappa_1,\\kappa_2,z) &=& \\frac{\\Gamma(a)\\Gamma(1-b)}{\\Gamma(1+a-b)} C_2(\\kappa)^{\\frac{1}{z}}\n\\kappa_1^{-a}\n\\Phi^{(2)}\\l(a,b, 1+a-b;\n\\frac{1}{\\kappa_1}, \\frac{\\kappa_2}{\\kappa_1} \\r) \\nn \\\\\n&=& \\frac{\\Gamma(a)\\Gamma(1-b)}{\\Gamma(1+a-b)} (\\kappa_0\\kappa_2)^{a\/3} \\kappa_1^{-2a\/3}\n\\, F_1\\l(a,b,b,1+a-b,\\frac{1}{\\kappa_1},\n\\frac{\\kappa_2}{\\kappa_1}\\r) \\label{eq:twistn2a} \\\\\n\\Pi_2^{(2)}(\\kappa_0,\\kappa_1,\\kappa_2,z) &=&\n\\Pi_1^{(2)}(\\kappa_0,\\kappa_2,\\kappa_1,z) \\\\\n\\Pi_3^{(2)}(\\kappa_0,\\kappa_1,\\kappa_2,z)\n&=& \\frac{\\Gamma(a)\\Gamma(1-b)}{\\Gamma(1+a-b)} (\\kappa_0 \\kappa_1 \\kappa_2)^{a\/3} \\,\nF_1\\l(a,b,b, 1+a-b,\\kappa_1, \\kappa_2\\r) \\label{eq:twistn2b}\n\\end{eqnarray}\nwhere\n\\beq\nF_1\\l(a,b_1,b_2, c,x,y\\r) \\triangleq \\sum_{i_1, i_2 \\geq 0}\n\\frac{(a)_{i_1+i_2}}{(c)_{i_1+i_2}}\\frac{(b_1)_{i_1}\n x^{i_1}}{i_1!}\\frac{(b_2)_{i_2} y^{i_2}}{i_2!}.\n\\eeq\nIt is straightforward to check that \\eqref{eq:twistn2a}-\\eqref{eq:twistn2b} yield a complete set 
of\nsolutions of \\eqref{eq:F1eq1}-\\eqref{eq:F1eq2}. \\\\\n\nIn this case, irreducible components of the discriminant locus are given by the lines $\\kappa_1=\\kappa_2$ and\n$\\kappa_i=0,1,\\infty$, $i=1,2$. The moduli space $\\mathcal M_A$ is depicted in\nFigure~\\ref{fig:modspace2}. The orbifold point of $\\mathcal{X}$,\n$(\\kappa_1,\\kappa_2)=(\\mathrm{e}^{4\\pi \\mathrm{i}\/3},\\mathrm{e}^{2\\pi \\mathrm{i}\/3})$, denoted OP in Figure~\\ref{fig:modspace2}, is a regular point of the quantum $D$-module\n\\eqref{eq:F1eq1}-\\eqref{eq:F1eq2}, and the Fuchsian singularities\n$(\\kappa_1,\\kappa_2)=(0,0)$ and $(\\infty, \\infty)$ correspond to two\ncopies of the large radius point (henceforth, LR) of $Y$, referred to as LR1 and LR2 in\nFigure~\\ref{fig:modspace2}. The Frobenius structures induced around the latter two\npoints are canonically isomorphic to $QH_T(Y)$, and they are related to one another\nby the involution $\\kappa_i \\to -\\kappa_i$. In contrast with the $n=1$ case\n\\cite{cavalieri2011open, bcr2013}, where the Appell system reduces to the\nGauss ${}_2 F_1$-system, it is\nimpossible here \\cite{MR0422713} to provide a local solution around LR of the Appell system\n\\eqref{eq:F1eq1}-\\eqref{eq:F1eq2} in terms of Appell $F_1$-functions only;\nsee Appendix~\\ref{sec:anFD} for a discussion of this point. 
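As a numerical aside (not part of the original text): the first equation \eqref{eq:F1eq1} is, for the bare Appell function, the classical relation $(x-y)\,{\partial}_x{\partial}_y F_1 = b_2\,{\partial}_x F_1 - b_1\,{\partial}_y F_1$ specialized to $b_1=b_2=b$, and it can be sanity-checked on the truncated double series defining $F_1$, differentiated term by term; all parameter values below are arbitrary test choices.

```python
from math import factorial

def poch(x, m):
    """Pochhammer symbol (x)_m = x (x+1) ... (x+m-1)."""
    out = 1.0
    for k in range(m):
        out *= x + k
    return out

def f1_partial(a, b1, b2, c, x, y, dx=0, dy=0, order=40):
    """Partial derivative d^dx_x d^dy_y of the truncated F_1 double series,
    obtained by differentiating each monomial x^m y^n exactly."""
    total = 0.0
    for m in range(dx, order + 1):
        for n in range(dy, order + 1 - m):
            coeff = (poch(a, m + n) / poch(c, m + n)
                     * poch(b1, m) / factorial(m)
                     * poch(b2, n) / factorial(n))
            # d^dx/dx^dx x^m = m (m-1) ... (m-dx+1) x^(m-dx), and likewise in y
            total += (coeff * poch(m - dx + 1, dx) * x ** (m - dx)
                            * poch(n - dy + 1, dy) * y ** (n - dy))
    return total

# Arbitrary parameters with b1 = b2 = b and |x|, |y| < 1 for convergence
a, b, c = 0.7, 0.45, 1.3
x, y = 0.2, -0.15
lhs = (x - y) * f1_partial(a, b, b, c, x, y, dx=1, dy=1)
rhs = b * (f1_partial(a, b, b, c, x, y, dx=1) - f1_partial(a, b, b, c, x, y, dy=1))
assert abs(lhs - rhs) < 1e-10
```

The residual is bounded by the series truncation error, which at these argument values is far below the tolerance used.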
\nRepresenting eigenvectors of the monodromy around LR in\ngeneral in terms of the twisted period basis will be the subject of the first\npart of the proof of Theorem \\ref{thm:sympl} in the next section.\n\n\n\\begin{figure}[t]\n\\includegraphics{modspace2.pdf}\n\\caption{The K\\\"ahler moduli space of the $A_2$ singularity in $A$-model\n coordinates.}\n\\label{fig:modspace2}\n\\end{figure}\n\n\\subsection{Proof of Theorem \\ref{thm:sympl}\n}\n\\label{sec:compsymp}\nLet $\\rho$ be a straight line in $\\mathcal M_A$ connecting the large radius point\n$\\{\\kappa_j=0\\}$ of $Y$ to\nthe one of $\\mathcal{X}$, given by $\\{\\kappa_j=\\omega^{-j}\\}$, with zero winding number\naround all irreducible components of the discriminant locus of $\\mathcal M_A$. We compute the analytic\ncontinuation map $\\mathbb{U}_\\rho^{\\mathcal{X}, Y} : \\mathcal{H}_\\mathcal{X} \\to \\mathcal{H}_Y$ that identifies the\ncorresponding flat frames and Lagrangian cones upon analytic continuation\nalong $\\rho$. \\\\\n\nDefine the period map $\\Omega$:%\n\\beq\n\\bary{ccccc}\n\\Omega &:& H_1\\l(\\mathbb{C} \\setminus (\\lambda), \\mathbf{L}\\r) & \\to &\n\\mathcal{O}_{\\mathcal{F}_{\\lambda, \\phi}}, \\\\\n& & \\xi & \\to & \\int_\\xi \\lambda^{1\/z} \\phi,\n\\eary\n\\label{eq:periodmap}\n\\eeq\nand denote by $\\Pi^{(n)}$ as in \\eqref{eq:eulerint} the image of the\nbasis $\\xi$ of twisted cycles of Section~\\ref{sec:globdpic} under the period map. The horizontality \\eqref{eq:fundsol}-\\eqref{eq:Jfun1} of the $J$-functions of\n$\\mathcal{X}$ and $Y$, the String Equation for $\\mathcal{X}$ and $Y$, and Proposition~\\eqref{thm:tp} together state that $J^\\mathcal{X}$,\n$J^Y$ and $\\Pi^{(n)}$ are three\ndifferent $\\mathbb{C}(\\mathrm{e}^{\\mathrm{i} \\pi a}, \\mathrm{e}^{\\mathrm{i}\\pi b}, z)$-bases of deformed flat coordinates of $\\nabla^{(g,z)}$ under the identifications\n\\eqref{eq:kappa0Y}-\\eqref{eq:kappaY},\n\\eqref{eq:kappa0X}-\\eqref{eq:kappakX}. 
This entails, for every $\\rho$, the\nexistence of two \n$\\mathbb{C}(\\mathrm{e}^{\\mathrm{i} \\pi a}, \\mathrm{e}^{\\mathrm{i} \\pi b}, z)$-linear maps $A$, $B$\n\\beq\n\\bary{ccccc}\n\\nabla^{(\\eta_Y)} A \\Omega & : & H_1\\l(\\mathbb{C} \\setminus (\\lambda), \\mathbf{L}\\r) & \\to & \\mathcal{S}_Y, \\\\\n\\nabla^{(\\eta_\\mathcal{X})}B^{-1} \\Omega & : & H_1\\l(\\mathbb{C} \\setminus (\\lambda), \\mathbf{L}\\r) & \\to &\n\\mathcal{S}_\\mathcal{X},\n\\label{eq:AB}\n\\eary\n\\eeq\nsuch that\n\\bea\n\\label{eq:pijy}\nA \\Pi^{(n)} &=& J_Y, \\\\\nB J_\\mathcal{X} &=& \\Pi^{(n)}.\n\\label{eq:pijx}\n\\end{eqnarray}\nIn particular,\n\\beq\n\\mathbb{U}_\\rho^{\\mathcal{X}, Y} = A B. \n\\label{eq:UBA}\n\\eeq\n\\\\\n\n$A$ sends the twisted period basis $\\Pi^{(n)}$ to a basis of eigenvectors of the\nmonodromy around the large radius point of $Y$ normalized as in\n\\eqref{eq:Jloc}. We compute $A$\nby investigating the leading asymptotics of the twisted periods\n\\eqref{eq:pilaur1}-\\eqref{eq:pilaur2} around the large radius point of $Y$; as in the example of\nSection~\\ref{sec:appell}, we denote the latter by LR. \\\\\n\n\nIn $\\mathbb{C}^m$ with coordinates $(w_1, \\dots, w_m)$, let $\\chi_i$, for\nevery $i=1, \\dots, m$, be a path connecting the point at\ninfinity $W^\\infty_i$,\n\\beq\nW_i^\\infty\\triangleq(\\overbrace{0,\\dots, 0}^{\\text{$i$ times}}, \\overbrace{\\infty,\\dots,\n \\infty}^{\\text{$m-i$ times}}),\n\\eeq\nwith zero winding number along $w_i=w_j$ ($i \\neq j$) and $w_i=0,1$. We want\nto compute the analytic continuation along $\\chi_i$ of the\nLauricella function \n$F_D^{(m)}(a,b_1, \\dots, b_m, c, w_1, \\dots, w_i, w_{i+1}^{-1}, \\dots, w_m^{-1})$\nfrom an open ball centered on $W^\\infty_i$ to the origin \n$W^\\infty_0=(0, \\dots, 0)$ in the sector where $w_i \\ll 1$,\n$w_i\/w_j \\ll 1$ for $i<j$; this is done by resumming the series over the indices $i_j$ with $j>i$ appearing in \\eqref{eq:FD} through an iterated\nuse of Goursat's identity \\eqref{eq:2F1conn}. 
The final result is\n\\eqref{eq:fdinf}; we refer the reader to Appendix \\ref{sec:anFD} for the\ndetails of the derivation. \\\\\n\nIn our case, Eq.~\\eqref{eq:fdinf} (see also Remark~\\ref{rmk:relabel}) implies, around $w_i=\\infty$, that\n\\bea\n\\Phi^{(m)}(a, b, c; w_1, \\dots, w_m) & \\sim & \n\\sum_{j=0}^{m-1}\\Gamma\\l[\\bary{ccc}c, & a-j b, & (j+1)\n b-a \\\\ a, & b, & c-a \\eary\\r] \\nn \\\\ & & \\prod_{i=1}^j (-w_{m-i+1})^{-b}\n(-w_{m-j})^{-a+j b}\\nn \\\\ &+& \\prod_{j=1}^m (-w_j)^{-b}\n\\Gamma\\l[\\bary{cc}c, & a-m b \\\\ a, & c-m b \\eary\\r].\n\\end{eqnarray}\nwhen $w_i \\sim 0$, $w_i\/w_j \\sim 0$ for $j>i$. In particular, at the level of twisted periods this entails\n\\bea\n\\Pi_{n-k}^{(n)} & \\sim & C_n(\\kappa)^{\\frac{1}{z}}\n\\kappa_{n-k}^{-a}\n\\frac{\\Gamma(a)\\Gamma(1-b)}{\\Gamma(1+a-b)} \n\\Phi^{(k+1)}\\l(a,b,1+a-b, \\frac{\\kappa_{n-k+1}}{\\kappa_{n-k}},\n \\dots, \\frac{\\kappa_{n}}{\\kappa_{n-k}}, \\frac{1}{\\kappa_{n-k}} \\r)\n\\nn \\\\\n& \\sim & \nC_n(\\kappa)^{\\frac{1}{z}}\n\\kappa_{n-k}^{-a}\n\\Bigg\\{\n\\frac{\\Gamma(a)\\Gamma(b-a)}{\\Gamma(b)} \\l(-\\kappa_{n-k}\\r)^{a} \\nn \\\\ &+& \n\\sum_{j=1}^{k}\\frac{\\Gamma(a-j b)\\Gamma((j+1) b-a)}{\\Gamma(b)}\n\\l(-\\frac{\\kappa_{n+1-j}}{\\kappa_{n-k}}\\r)^{-a+j b} (-\\kappa_{n-k})^{b} \\prod_{i=1}^{j-1} \\l(-\\frac{\\kappa_{n+1-i}}{\\kappa_{n-k}}\\r)^{-b}\n\\nn \\\\ &+& \\kappa_{n-k}^{(k+1) b} \n\\frac{\\Gamma(1-b)\\Gamma(a-(k+1) b)}{\\Gamma(1+a-(k+2) b)} \\prod_{j=n-k+1}^{n} (-\\kappa_j)^{-b}\\Bigg\\}\n\\nn \\\\\n& \\sim & \nC_n(\\kappa)^{\\frac{1}{z}}\n\\Bigg\\{\n\\sum_{j=0}^{k}\\frac{\\Gamma(a-j b)\\Gamma((j+1) b-a)}{\\Gamma(b)} (-1)^a \\l(\\kappa_{n+1-j}\\r)^{-a+j b} \\prod_{i=1}^{j-1} \\l(\\kappa_{n+1-i}\\r)^{-b}\n\\nn \\\\ &+& (-1)^{(k+1) b}\n\\frac{\\Gamma(1-b)\\Gamma(a-(k+1) b)}{\\Gamma(1+a-(k+2) b)}\\kappa_{n-k}^{(k+1)\n b-a} \\prod_{j=n-k+1}^{n} (\\kappa_j)^{-b} \\Bigg\\}.\n\\label{eq:dectp}\n\\end{eqnarray}\nin a neighbourhood of $\\kappa=0$ given by $|\\kappa_i| \\ll 
1$,\n$\\kappa_i\/\\kappa_j \\ll 1$ for $j>i$; notice that in cohomology coordinates\n\\eqref{eq:kappaY} for $Y$, this becomes an actual open ball $|q|\\ll 1$ around the point\nof classical limit $q_i = \\mathrm{e}^{t_i}=0$. Now, from the discussion of\nSection~\\ref{sec:GIT} and Eqns.~\\eqref{eq:Jred}, \\eqref{eq:kappaY}, around\nthe limit point of classical cohomology the $J$-function of $Y$ behaves as\n\\beq\nJ^Y_{p_i} = z C_n(\\kappa)^{\\frac{1}{z}} \\kappa_i^{(n-i+1)b-a}\\prod_{j=i+1}^n(\\kappa_j)^{-b}\\l(1+\\mathcal{O}(\\mathrm{e}^{t})\\r).\n\\label{eq:Jred2}\n\\eeq\nThen we can read off from \\eqref{eq:dectp}-\\eqref{eq:Jred2} the decomposition of each twisted period\n$\\Pi_i^{(n)}$ in terms of eigenvectors of the monodromy around LR, and in\nparticular, in terms of the localized components of the $J$-function. Explicitly,\n\\beq\n\\Pi^{(n)} = A^{-1} J^Y,\n\\eeq\nwhere\n\\beq\nA^{-1}_{ji} = \\left\\{\\bary{cl} (-1)^{(n-i+1) b}\n\\frac{\\Gamma(1-b)\\Gamma(a-(n-i+1) b)}{z \\Gamma(1+a-(n-i+2) b)} & \\mathrm{for}\n\\quad i=j, \\\\\n(-1)^a\\frac{\\Gamma(a-(n-i+1) b)\\Gamma((n-i+2) b-a)}{z \\Gamma(b)} & \\mathrm{for}\n\\quad j<i, \\\\\n0 & \\mathrm{for} \\quad j>i.\n \\eary\\right. \n\\label{eq:matrAINV}\n\\eeq\nIts inverse reads\n\\beq\nA_{ij} = \\left\\{\\bary{cl} \\mathrm{e}^{\\pi\\mathrm{i} (n-i+1) b}\n\\frac{z \\Gamma(1+a-(n-i+2) b)\\Gamma(1-a+(n-i+1) b)\\sin(a+(n-i+1) b)}{\\Gamma(1-b)\\pi} & i=j, \\\\\n\\mathrm{e}^{-i \\pi (a-b (2 n-2j+3))} \\frac{z \\sin (\\pi b) \\Gamma (1-a+b (n+1-i)) \\Gamma (1+a-b (n-i+2))}{\\pi \\Gamma (1-b)}\n & j>i, \\\\\n0 & j<i.\n\\eary\\right.\n\\eeq\nIn order to apply \\eqref{eq:2F1conn} we fix a path of\nanalytic continuation by choosing the principal branch for both the power functions\n$(-z)^{-a}$ and $(-z)^{-b}$ in \\eqref{eq:2F1conn} and continue $\\,\n_2F_1(a,b;c;z)$ to $|z|>1$ along a path that has winding number zero around\nthe Fuchsian singularity at $z=1$. 
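The connection formula \eqref{eq:2F1conn} itself is not displayed in this excerpt; assuming it denotes the classical Gauss connection formula (DLMF 15.8.2), $\,_2F_1(a,b;c;z)=\frac{\Gamma(c)\Gamma(b-a)}{\Gamma(b)\Gamma(c-a)}(-z)^{-a}\,_2F_1(a,a-c+1;a-b+1;1/z)+(a\leftrightarrow b)$, the branch prescription is unambiguous at a real point $z<-1$ and can be tested numerically there. The sketch below (not from the paper; parameter values are arbitrary, subject to $\Re(c)>\Re(b)>0$ for the Euler integral) evaluates the left-hand side by a midpoint-rule Euler integral and the right-hand side by the Gauss series at $1/z$:

```python
from math import gamma

def hyp2f1_series(a, b, c, z, terms=200):
    # Gauss hypergeometric series, valid for |z| < 1
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return s

def hyp2f1_euler(a, b, c, z, n=100000):
    # Euler integral representation, Re(c) > Re(b) > 0, real z < 1,
    # evaluated with a composite midpoint rule
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - z * t) ** (-a) * h
    return total * gamma(c) / (gamma(b) * gamma(c - b))

a, b, c, z = 0.3, 1.2, 2.5, -3.0      # z < -1: outside the unit disc, off the cut [1, oo)
lhs = hyp2f1_euler(a, b, c, z)
rhs = (gamma(c) * gamma(b - a) / (gamma(b) * gamma(c - a)) * (-z) ** (-a)
       * hyp2f1_series(a, a - c + 1, a - b + 1, 1 / z)
       + gamma(c) * gamma(a - b) / (gamma(a) * gamma(c - b)) * (-z) ** (-b)
       * hyp2f1_series(b, b - c + 1, b - a + 1, 1 / z))
assert abs(lhs - rhs) < 1e-5
```

For $z<0$ the powers $(-z)^{-a}$, $(-z)^{-b}$ are positive reals, so the principal-branch choice discussed above introduces no phase ambiguity in this test.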
As a power series in $w_N$ the analytic continuation\nof \n\\eqref{eq:FD2F1} around $w_N=\\infty$ then reads\n\\bea\n& & F_D^{(N)}(a; b_1, \\dots, b_N; c; w_1, \\dots, w_N) = (-w_N)^{-a}\n\\Gamma\\l[\\bary{cc}c, & b_N-a \\\\ b_N, & c-a \\eary\\r] \\nn \\\\ & & F_D^{(N)}\\l(a; b_1, \\dots,\nb_{N-1}, 1-c+a; 1-b_N+a;\\frac{w_1}{w_N}, \\dots, \\frac{1}{w_N}\\r) \n+\n(-w_N)^{-b_N}\n\\Gamma\\l[\\bary{cc}c, & a-b_N \\\\ a, & c-b_N \\eary\\r] \\nn \\\\ & & C_N^{(N-1)}\\l(b_1, \\dots,\nb_{N}, 1-c+b_N; a-b_N,-w_1,-w_2, \\dots, \\frac{1}{w_N}\\r),\n\\label{eq:FDcont1}\n\\end{eqnarray}\nwhere we defined \\cite[Chapter~3]{MR0422713}\n\\bea\nC_N^{(k)}\\l(b_1, \\dots,\nb_{N}, a; a',x_1, \\dots, x_N\\r) & \\triangleq & \\sum_{i_1, \\dots, i_{N}}\n(a)_{\\alpha_N^{(k)}(\\mathbf{i})}(a')_{-\\alpha_N^{(k)}(\\mathbf{i})} \\prod_{j=1}^{N}\n\\frac{(b_j)_{i_j} x_j^{i_j}}{i_j!} \n\\end{eqnarray}\nand \n\\bea\n\\alpha_N^{(k)}(\\mathbf{i}) &\\triangleq& \\sum_{j=k+1}^{N} i_j-\\sum_{j=1}^{k} i_j,\\\\\n\\Gamma\\l[\\bary{ccc}a_1, & \\dots, & a_m \\\\ b_1, & \\dots, & b_n \\eary\\r] &\\triangleq&\n\\frac{\\prod_{i=1}^m \\Gamma(a_i)}{\\prod_{i=1}^n\\Gamma(b_i)}.\n\\end{eqnarray}\nNow, notice that the $F_D^{(N)}$ function in the r.h.s. of \\eqref{eq:FDcont1} is\nanalytic in $\\Omega_N$; there is nothing more that should be done there. 
The\nanalytic continuation of the $C_N^{(N-1)}$ function is instead much more involved (see\n\\cite{MR0422713} for a complete treatment of the $N=3$ case); but since all we\nare interested in is the leading term of the expansion around $P$ in\n$\\Omega_N$, we isolate the $\\mathcal{O}(1)$ term in its $1\/w_N$ expansion to find\n\\bea\n& & C_N^{(N-1)}\\l(b_1, \\dots,\nb_{N}, 1-c+b_N; a-b_N,-w_1,-w_2, \\dots, \\frac{1}{w_N}\\r) = \\nn \\\\\n&=& F_D^{(N-1)}\\l(a-b_N; b_1, \\dots,\nb_{N-1}; c-b_N; w_1, \\dots, w_{N-1}\\r)+ \\mathcal{O}\\l(\\frac{1}{w_{N}}\\r).\n\\label{eq:CNFD}\n\\end{eqnarray}\nWe are done: by \\eqref{eq:CNFD}, the form of the leading terms in the expansion of\n$F_D^{(N)}$ inside $\\Omega_N$ can be found recursively by iterating $N$ times the\nprocedure we have followed in \\eqref{eq:FD2F1}-\\eqref{eq:CNFD}; as at each step \\eqref{eq:2F1conn}-\\eqref{eq:CNFD} generate\none additional term, we end up with a sum of $N+1$ monomials each having\npower-like monodromy around $P$. Explicitly:\n\\bea\nF_D^{(N)}(a; b_1, \\dots, b_N; c; w_1, \\dots, w_N) & \\sim & \n\\sum_{j=0}^{N-1}\\Gamma\\l[\\bary{ccc}c, & a-\\sum_{i=N-j+1}^N b_i, & \\sum_{i=N-j}^N\n b_i-a \\\\ a, & b_{N-j}, & c-a \\eary\\r] \\nn \\\\ & & \\prod_{i=1}^j (-w_{N-i+1})^{-b_{N-i+1}}\n(-w_{N-j})^{-a+\\sum_{i=N-j+1}^Nb_i}\\nn \\\\ &+& \\prod_{i=1}^N (-w_i)^{-b_i}\n\\Gamma\\l[\\bary{cc}c, & a-\\sum_{i=1}^N b_i \\\\ a, & c-\\sum_{i=1}^N b_i \\eary\\r].\n\\label{eq:fdinf}\n\\end{eqnarray}\n\n\\begin{rmk}\nThe analytic continuation to some other sectors of the ball $B(P,\\epsilon)$ is\nstraightforward. In particular we can replace the condition $w_i\/w_j \\sim 0$\nfor $j>i$ by its reciprocal $w_j\/w_i \\sim 0$; this amounts to relabeling $b_i\n\\to b_{N-i+1}$ in \\eqref{eq:fdinf}.\n\\label{rmk:relabel}\n\\end{rmk}\n\n\\begin{rmk} When $a=-d$ for $d\\in \\mathbb{Z}^+$, the function $F_D^{(N)}$ reduces to\n a polynomial in $w_1, \\dots, w_N$. 
In this case the arguments above reduce\n to a formula of Toscano \\cite{MR0340663} for Lauricella polynomials:\n\\bea\n& & F_D^{(N)}(-d; b_1, \\dots, b_N; c; w_1, \\dots, w_N) \\nn \\\\ &=& (-w_N)^{d}\\frac{(b)_d}{(c)_d}\n F_D^{(N)}\\l(-d; b_1, b_2 \\dots,\nb_{N-1};1-d-c, 1-d-b_N,\\frac{w_1}{w_N}, \\dots, \\frac{1}{w_N}\\r).\n\\label{eq:FDtosc}\n\\end{eqnarray}\n\\label{rmk:toscano}\n\\end{rmk}\n\\begin{comment}\n\\section{GKZ mirror symmetry}\n\\label{sec:tms}\n\\subsection{$I$-functions and the Picard--Fuchs system}\n\nIt is instructive to compare the hypergeometric form of the $\\nabla$-flat\nsections \\eqref{eq:pilaur1}-\\eqref{eq:pilaur2} in $A$-model variables with the\ngeneralized hypergeometric functions that arise from solutions of the\nmirror GKZ system. \\\\\n\nLet us start from $n=1$. If we dualize \\eqref{eq:divclass},\n\\beq\n0\\longrightarrow\\mathbb{Z}^3\\stackrel{\\l(\\bary{ccc} 1 & 0 & 0 \\\\ 1 & 1 & 0 \\\\ 1 & 2 & 0 \\\\\n 0 & 0 & 1\\eary\\r)}{\\longrightarrow}\\mathbb{Z}^{4}\\stackrel{\\l( \\bary{cccc} 1 & -2 &\n 1 & 0 \\eary\\r)}{\\longrightarrow}\\mathbb{Z}\\longrightarrow\n0,\n\\label{eq:divclass2}\n\\eeq\ngives us the chamber decomposition depicted in Figure~\\ref{fig:bmod1} (the\n{\\it secondary fan} of $\\mathcal{X}$ and $Y$).\n\\begin{figure}[h]\n\\centering\n\\includegraphics{bmod1.pdf}\n\\caption{The secondary fan for the case $n=1$.}\n\\label{fig:bmod1}\n\\end{figure}\nThe {\\it B-model moduli space} $\\mathcal M_B$ is the toric orbifold corresponding to the\none-dimensional fan in Figure~\\ref{fig:bmod1}. From \\eqref{eq:divclass2}, we have\n$\\mathcal M_B \\simeq \\mathbb{P}(1,2)$: the northern (smooth) hemisphere of the orbifold projective line\ncorresponds to the right hand chamber in Figure~\\ref{fig:bmod1}, which gives rise\nto the resolution $Y = (K\\oplus\\mathcal{O})_{\\mathbb{P}^1}$; conversely for the southern\nhemisphere and the singularity $\\mathcal{X}=[\\mathbb{C}^3\/\\mathbb{Z}_2]$. 
Let $y_s$ be a coordinate\npatch for the smooth patch, and $y_o$ for the orbifold patch, so that\n$y_s=1\/y_o^2$; the large volume points for $Y$ and $\\mathcal{X}$ correspond to $y_s=0$\nand $y_o=0$ respectively. The Picard--Fuchs operator $\\mathcal{D}$ for $\\mathcal{X}$ and $Y$ reads \\cite{MR2510741}\n\\bea\n\\label{eq:GKZX1}\n\\mathcal{D} &=& z \\theta_o (z\\theta_o -z)- y_o^2 \\l(\\alpha_1-\\frac{z}{2}\n\\theta_o\\r)\\l(\\alpha_2-\\frac{z}{2} \\theta_o\\r), \\qquad \\theta_o =\n\\frac{{\\partial}}{{\\partial} y_o}, \\\\\n&=& 2 z \\theta_s (2 z\\theta_s +z)- y_s^{-1}\\l(\\alpha_1+z \\theta_s\\r)\\l(\\alpha_2+z \\theta_s\\r), \\qquad \\theta_s =\n\\frac{{\\partial}}{{\\partial} y_s}.\n\\label{eq:GKZY1}\n\\end{eqnarray}\nConsider the patch of $\\mathcal M_B$ first and define the $I$-function\nof $Y$ as the cohomology valued series\n\\bea\nI^{K_{\\mathbb{P}^1}\\oplus\\mathcal{O}_{\\mathbb{P}^1}}(y_s,z) & \\triangleq & z y_s^{p\/z}\\left[1+\\sum_{d>0}\n2 p y_s^{d} \\frac{\\Gamma(\\frac{2 p}{z}+2d)}{\\Gamma(\\frac{2\n p}{z}+1)}\n\\frac{\\Gamma(\\frac{p+z+\\alpha_1}{z})}{\\Gamma(\\frac{p+z+\\alpha_1}{z}+d)} \n\\frac{\\Gamma(\\frac{p+z+\\alpha_2}{z})}{\\Gamma(\\frac{p+z+\\alpha_2}{z}+d)}\\right],\n\\nn \\\\\n&=& y_s^{\\alpha_1\/z}\\left[z+2\\alpha_1\\sum_{d>0}\n \\frac{y_s^{d}}{d!} \\frac{\\Gamma(2d+\\frac{2\\alpha_1}{z})}{\\Gamma(1+\\frac{2\n \\alpha_1}{z})}\n\\frac{\\Gamma(\\frac{z-\\alpha_2+\\alpha_1}{z})}{\\Gamma(\\frac{z-\\alpha_2+\\alpha_1}{z}+d)}\\right]\nP_2 + (1 \\leftrightarrow 2), \\nn \\\\\n&=& z y_s^{\\alpha_1\/z} \\, _2F_1\\l(\\frac{\\alpha_1}{z}, \\frac{1}{2}+\\frac{\\alpha_1}{z}, \\frac{\\alpha_1-\\alpha_2}{z}, 4 y_s\\r) P_2\n+ (1 \\leftrightarrow 2),\n\\label{eq:IY2F1}\n\\end{eqnarray}\nwhere $p=c_1(\\mathcal{O}_{\\mathbb{P}^1}(1))=\\alpha_2 P_1+\\alpha_1 P_2$ is the hyperplane class and\n$\\{P_1, P_2\\}$ is the localized basis for $H_T(Y)$ of Section~\\ref{sec:GIT}\n(that is, the equivariant classes corresponding to the North and the South\npole of 
the base $\\mathbb{P}^1$). Then the components of $I^Y$ are a basis of\nsolutions of \\eqref{eq:GKZY1}, and under the mirror map\n\\beq\ny_s = \\frac{\\mathrm{e}^{t}}{(1+\\mathrm{e}^{t})^2}\n\\label{eq:mirmap1}\n\\eeq\nit gives \\cite{MR2276766} a family of elements of the cone $\\mathcal{L}_Y$ of $Y$ such that\n\\beq\nI^Y(y_s(t),-z) = -z + t + \\mathcal{O}\\l(\\frac{1}{z}\\r).\n\\eeq\nThen\n\\beq\nJ^Y(t,z)|_{t_0=0} = I^Y(y_s(t),z),\n\\eeq\nas can be verified upon plugging \\eqref{eq:mirmap1} into \\eqref{eq:IY2F1},\nusing\n\\beq\n _2F_1(a,b;a-b+1;z)=(1-z)^{-a} \\,\n _2F_1\\left(\\frac{a}{2},\\frac{a+1}{2}-b;a-b+1;-\\frac{4\n z}{(1-z)^2}\\right)\n\\eeq\nand comparing with \\eqref{eq:t1LR1}-\\eqref{eq:t1LR2}. \\\\\n\nSimilarly, in the orbifold chamber we define\n\\bea\nI^{[\\mathbb{C}^3\/\\mathbb{Z}_2]}(y_o,z) & \\triangleq & z \\Bigg[\\sum_{k\\geq 0}\n \\frac{y_o^{2k}}{(2k)!}\\prod_{r=0}^{k-1}\\l(\\frac{\\alpha_1}{z}-r\\r)\\l(\\frac{\\alpha_2}{z}-r\\r) \\mathbf{1}_0\n \\nn \\\\\n&+& \\sum_{k\\geq 0}\n \\frac{y_o^{2k+1}}{(2k+1)!}\\prod_{r=0}^{k-1}\\l(\\frac{\\alpha_1}{z}-r-\\frac{1}{2}\\r)\\l(\\frac{\\alpha_2}{z}-r-\\frac{1}{2}\\r) \\mathbf{1}_{1\/2}\n\\Bigg], \\nn \\\\\n &=& z \\, _2F_1\\l(\\alpha_1, \\alpha_2, \\frac{1}{2}, \\frac{y_o^2}{4}\\r) \\mathbf{1}_0 + y_o\n\\, _2F_1\\l(\\alpha_1+\\frac{1}{2}, \\alpha_2+\\frac{1}{2}, \\frac{3}{2}, \\frac{y_o^2}{4}\\r) \\mathbf{1}_{1\/2}.\n\\end{eqnarray}\nAs before,\n\\beq\nJ^\\mathcal{X}(x,z)|_{x_0=0} = I^\\mathcal{X}(y_o(x),z).\n\\eeq\n\\begin{rmk}\nNotice that the orbifold point $y_o=0$ in the $B$-model moduli space is a\nFuchsian singularity for \\eqref{eq:GKZX1} with critical exponents\n$(0,1\/2)$, and therefore $\\mathbb{Z}_2$ monodromy. 
This is unlike the orbifold point in the $A$-model, which is a\nsmooth point for the Dubrovin connection, since the projection map\n$y_s:\\mathcal M_A\\to\\mathcal M_B$ realizes $\\mathcal M_A$ as a double cover of $\\mathcal M_B$ branched at the conifold and the orbifold point.\n\\end{rmk}\nFor general $n$, the $B$-model moduli space is the projective toric\nDeligne--Mumford stack associated to the simplicial stacky fan with rays given\nby the columns of $M$ in \\eqref{eq:MN}. The large volume chamber,\ncorresponding to the stability condition \\eqref{resgit}, is the chamber with\nmaximal cones \n\\beq\n\\sigma_i=\\l\\{ \\bary{ccc} M_{\\bullet,i} & \\mathrm{for} & 1\\leq i 0$, the\nresulting series converges for $|y_{s,n}|R_{d_1, \\dots, d_{n-1}}$. Iterating this procedure gives analytic\ncontinuation formulae for $I^Y(y_s,z)$ to other chambers of $\\mathcal M_B$. \\\\\n\nFor example, start from $n=1$ and rewrite \n\\beq\nI^{K_{\\mathbb{P}^1}\\oplus\\mathcal{O}_{\\mathbb{P}^1}}(y_s,z) =\n\\Theta(\\alpha_1,\\alpha_2,z) y_s^{-\\alpha_1\/z} \n \\sum_{d\\geq 0}\n \\frac{y_s^{d}}{d!}\n \\frac{\\Gamma(2d-\\frac{2\\alpha_1}{z})}{\\Gamma(\\frac{z+\\alpha_2-\\alpha_1}{z}+d)}\n P_2 + (1 \\leftrightarrow 2)\n\\eeq\nwhere $\\Theta(\\alpha_1,\\alpha_2,z)=-2 \\alpha_1 \n \\frac{\\Gamma(\\frac{z+\\alpha_2-\\alpha_1}{z})}{\\Gamma(1-\\frac{2\n \\alpha_1}{z})}$. This converges when $|y_s|<1\/4$. Let $C$ be the contour in\n the complex $s$-plane depicted in Fig. \\ref{fig:contc3z2}, and consider the integral\n\\beq\n\\mathcal{I}(y_s,z) = \n\\int_{C} \\mathrm{d} s\\frac{y_s^{s}}{\\Gamma(s+1)} \\frac{\\Gamma(s)\\Gamma(1-s)\n \\Gamma(2s-\\frac{2\\alpha_1}{z})}{\\Gamma(\\frac{z+\\alpha_2-\\alpha_1}{z}+s)} P_2 + (1 \\leftrightarrow 2)\n\\eeq\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.8]{analytic.pdf}\n\\caption{The contour $C$.}\n\\label{fig:contc3z2}\n\\end{figure}\nWhen $|y_s|<1\/4$, we close the contour to the right and pick up residues at\n$s=n\\geq 0$. 
This gives\n$I^N_{K_{\\mathbb{P}^1}\\oplus\\mathcal{O}_{\\mathbb{P}^1}}(y_s,z) y_s^{\\alpha_1\/z} \/\n\\Theta(\\alpha_1,\\alpha_2,z)$. When $|y_s|>1\/4$, we close the contour to the right\nand pick up residues at $s=-n\/2+\\alpha_1\/z$. We obtain\n\\bea\n\\tilde I_{K_{\\mathbb{P}^1}\\oplus\\mathcal{O}_{\\mathbb{P}^1}}(y_s,z) &=& -\\frac{\\Theta(\\alpha_1,\\alpha_2,z)}{2}\n \\sum_{n\\geq 0}\n \\frac{(-y_o)^{n}}{n!} \\frac{\\Gamma(n\/2-\\alpha_1\/z)}{\\Gamma(1-n\/2+\\alpha_2\/z)} P_2 + (1 \\leftrightarrow 2)\n\\end{eqnarray}\nThe symplectomorphism $\\mathbb{U}^{\\mathcal{X}, Y}: \\mathcal{L}_{[\\mathbb{C}^3\/\\mathbb{Z}_2]}\\to\n\\mathcal{L}_{(K+\\mathcal{O})_{\\mathbb{P}^1}}$ relating the cones of $\\mathcal{X}$ and $Y$ is designed so that\n$U(I^{[\\mathbb{C}^3\/\\mathbb{Z}_2]}(y_o, -z)) = \\tilde\nI^{K_{\\mathbb{P}^1}\\oplus\\mathcal{O}_{\\mathbb{P}^1}}(y_s,-z)$. As $U$ is\nindependent of $y_o$, we equate the first few powers of $y_o$ on both sides. We\nhave\n\\beq\nI^{[\\mathbb{C}^3\/\\mathbb{Z}_2]}(y_o, -z) = -z \\mathbf{1}_0 - z y_o \\l(\\frac{\\alpha_1}{z}+\\frac{1}{2}\\r)\\l(\\frac{\\alpha_2}{z}+\\frac{1}{2}\\r)\\mathbf{1}_{\\frac{1}{2}}+\\mathcal{O}\\l(y_o^2\\r)\n\\eeq\nand thus\n\\bea\n\\mathbb{U}^{\\mathcal{X},Y}\\l(\\mathbf{1}_0\\r) &=& \n\\frac{\\alpha_1 \\Gamma \\left(\\frac{\\alpha_1}{z}\\right) \\Gamma \\left(\\frac{z+\\alpha_1-\\alpha_2}{z}\\right)}{\\Gamma \\left(\\frac{2 \\alpha_1}{z}+1\\right) \\Gamma \\left(1-\\frac{\\alpha_2}{z}\\right)}P_2+\\frac{\\alpha_2\n \\Gamma \\left(\\frac{\\alpha_2}{z}\\right) \\Gamma \\left(\\frac{z-\\alpha_1+\\alpha_2}{z}\\right)}{\\Gamma\n \\left(1-\\frac{\\alpha_1}{z}\\right) \\Gamma \\left(\\frac{2\n \\alpha_2}{z}+1\\right)} P_1, \\\\\n\\mathbb{U}^{\\mathcal{X},Y}\\l(\\mathbf{1}_{\\frac{1}{2}}\\r) &=& \n\\sqrt{\\pi } z^2 \\left(\\frac{4^{-\\frac{\\alpha_1}{z}} \\Gamma \\left(\\frac{z+\\alpha_1-\\alpha_2}{z}\\right)}{\\Gamma\n \\left(\\frac{\\alpha_1}{z}\\right) \\Gamma 
\\left(\\frac{1}{2}-\\frac{\\alpha_2}{z}\\right)}P_2+\\frac{ 4^{-\\frac{\\alpha_2}{z}}\n \\Gamma \\left(\\frac{z-\\alpha_1+\\alpha_2}{z}\\right)}{\\Gamma \\left(\\frac{1}{2}-\\frac{\\alpha_1}{z}\\right) \\Gamma\n \\left(\\frac{\\alpha_2}{z}\\right)}P_1\\right)\n\\end{eqnarray}\n\n\\begin{rmk}\nWorking parametrically in $n$, the analytic continuation of the $n$-fold Mellin--Barnes\nintegral representation of \\eqref{eq:IYn} becomes practically unwieldy, as it\nwould require to iterate this procedure for every chamber that is crossed in\nthe process (which are $n$ in our case). \n\\end{rmk}\n\\end{comment}\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{secIntro}\nThe class of adaptive importance sampling (AIS) methods is a key Monte Carlo methodology for estimating integrals that cannot be obtained in closed form \\citep{robert2004monte}. This problem arises in many settings, such as Bayesian signal processing and machine learning \\citep{bugallo2015adaptive, bugallo2017adaptive} or optimal control, \\citep{kappen2016adaptive} where the quantities of interest are usually defined as intractable expectations. Adaptive importance samplers are versions of classical importance samplers (IS) which iteratively improve the proposals to generate samples better suited to the estimation problem at hand. Its variants include, for example, \\textit{population Monte Carlo} methods \\citep{cappe2004population} and adaptive mixture importance sampling \\citep{cappe2008adaptive}. Since there has been a surge of papers on the topic of AIS recently, a comprehensive review is beyond the scope of this article; see e.g. \\cite{bugallo2017adaptive} for a recent review.\n\nDue to the popularity of the adaptive importance samplers, their theoretical performance has also received attention in the past few years. 
Like conventional IS methods, AIS schemes enjoy the classical $\\mathcal{O}(1\/\\sqrt{N})$ convergence rate of the $L_2$ error, where $N$ is the number of Monte Carlo samples used in the approximations; see, e.g., \\cite{robert2004monte} and \\cite{agapiou2017importance}. However, since an adaptation is performed over the iterations and the goal of this adaptation is to improve the proposal quality, an insightful convergence result would provide a bound which explicitly depends on the number of iterations $t$ (which we sometimes refer to as \\textit{time}) and the number of samples $N$. Although there are convergence results for adaptive methods (see \\cite{douc2007convergence} for a convergence theory for population Monte Carlo based on minimizing the Kullback-Leibler divergence), none of the available results yields an explicit error bound in terms of the number of iterations and the number of particles at the same time.\n\nOne difficulty in proving such a result for adaptive mixture samplers is that the adaptive mixtures form an interacting particle system, and it is unclear what kind of adaptation they perform or whether the adapted proposals actually get closer to the target in some metric. An alternative to adaptation using mixtures is the idea of minimizing a cost function in order to adapt the proposal. This idea has been popular in the literature; in particular, minimizing the variance of the weight function has received significant attention, see, e.g., \\citet{arouna2004adaptative,arouna2004robbins, kawai2008adaptive, lapeyre2011framework, ryu2014adaptive, kawai2017acceleration, kawai2018optimizing}. Relevant to us, in particular, is the work of \\citet{ryu2014adaptive}, who proposed an algorithm called Convex Adaptive Monte Carlo (Convex AdaMC). This scheme is based on minimizing the variance of the IS estimator, which is a quantity related to the $\\chi^2$ divergence between the target and the proposal. 
\\citet{ryu2014adaptive} have shown that the variance of the IS estimator is a convex function of the parameters of the proposal when the latter is chosen within the exponential family. Based on this observation, \\citet{ryu2014adaptive} have formulated Convex AdaMC, which draws one sample at each iteration and constructs the IS estimator, which requires access to the normalised target. They proved a central limit theorem (CLT) for the resulting sampler. The idea has been further extended to self-normalised importance samplers by \\citet{ryu2016convex}, who considered minimising the $\\alpha$-divergence between the target and an exponential family. Similarly, \\citet{ryu2016convex} proved a CLT for the resulting sampler. Similar ideas were also considered by \\citet{kawai2017acceleration, kawai2018optimizing}, who also aimed at minimizing the variance expression. In particular, \\citet{kawai2018optimizing} showed that the variance of the weight function is convex when the proposal family is suitably chosen and provided general conditions for such proposals. \\citet{kawai2018optimizing} has also developed an adaptation technique based on stochastic approximation, which is similar to the scheme we analyse in this paper. {There have been other results also considering the $\\chi^2$ divergence and relating it to the necessary sample size of IS methods, see, e.g., \\citet{sanz2018importance}. Following the approach of \\citet{chatterjee2018sample}, \\citet{sanz2018importance} ties the necessary sample size to the $\\chi^2$-divergence and, in particular, shows that the necessary sample size grows with the $\\chi^2$-divergence, hence implying that minimizing it can lead to more efficient importance sampling procedures.}
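As a simple closed-form illustration of this dependence (our own example, not taken from the cited works), let the target be $\\pi = \\mathcal{N}(0,1)$ and the proposal $q_\\sigma = \\mathcal{N}(0,\\sigma^2)$. A direct Gaussian integral gives\n\\begin{align*}\n\\chi^2(\\pi || q_\\sigma) + 1 = \\int \\frac{\\pi^2(x)}{q_\\sigma(x)} {\\mathrm{d}} x = \\frac{\\sigma^2}{\\sqrt{2\\sigma^2 - 1}}, \\qquad \\sigma^2 > \\frac{1}{2},\n\\end{align*}\nwhich attains its minimum value $1$ at $\\sigma = 1$ (where $q_\\sigma = \\pi$) and diverges as $\\sigma^2 \\downarrow 1\/2$, so the sample size necessary for a fixed accuracy grows without bound as the proposal becomes too narrow for the target.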
We adapt the proposal with respect to a quantity (essentially the $\\chi^2$-divergence between the target and the proposal) that also happens to be the constant in the error bounds of the IS (see, e.g., \\citep{agapiou2017importance}). Assuming that proposal distributions belong to the exponential family, we recast the adaptation of the proposal as a convex optimisation problem and then develop a procedure which essentially optimises the $L_2$ error bound of the algorithm. By using results from convex optimisation, we obtain error rates depending on the number of iterations, denoted as $t$, and the number of Monte Carlo samples, denoted as $N$, together. In this way, we explicitly display the trade-off between these two essential quantities. To the best of our knowledge, none of the papers on the topic provides convergence rates depending explicitly on the number of iterations and the number of particles together, as we do herein.\n\nThe paper is organised as follows. In Sec.~\\ref{sec:AISintro}, we introduce the problem definition, the IS and the AIS algorithms. In Sec.~\\ref{sec:theAlg}, we introduce the OAIS algorithms. In Sec.~\\ref{sec:analysis}, we provide the theoretical results regarding optimised AIS and show its convergence using results from convex optimisation. Finally, we make some concluding remarks in Sec.~\\ref{sec:conc}.\n\n\\subsection*{Notation}\n\nFor $L\\in{\\mathbb N}$, we use the shorthand $[L] = \\{1,\\ldots,L\\}$. We denote the state space as ${\\mathsf X}$ and assume ${\\mathsf X} \\subseteq {\\mathbb R}^{d_x}$, $d_x \\ge 1$. The space of bounded real-valued functions and the set of probability measures on space ${\\mathsf X}$ are denoted as $B({\\mathsf X})$ and ${\\mathcal P}({\\mathsf X})$, respectively. Given $\\varphi\\in B({\\mathsf X})$ and $\\pi\\in{\\mathcal P}({\\mathsf X})$, the expectation of $\\varphi$ with respect to (w.r.t.) 
$\\pi$ is written as $(\\varphi,\\pi) = \\int \\varphi(x) \\pi(\\mbox{d}x)$ or ${\\mathbb E}_\\pi[\\varphi(X)]$. The variance of $\\varphi$ w.r.t. $\\pi$ is defined as $\\var_\\pi(\\varphi) = (\\varphi^2,\\pi) - (\\varphi,\\pi)^2$. If $\\varphi\\in B({\\mathsf X})$, then $\\|\\varphi\\|_\\infty = \\sup_{x\\in{\\mathsf X}} |\\varphi(x)| < \\infty$. The unnormalised density associated to $\\pi$ is denoted with $\\Pi(x)$. We denote the proposal as $q_\\theta \\in {\\mathcal P}({\\mathsf X})$, with an explicit dependence on the parameter $\\theta\\in\\Theta$. The parameter space is assumed to be a subset of $d_\\theta$-dimensional Euclidean space, i.e., $\\Theta \\subseteq {\\mathbb R}^{d_\\theta}$. \n\nWhenever necessary we denote both the probability measures, $\\pi$ and $q_\\theta$, and their densities with the same notation. To be specific, we assume that both $\\pi(\\mbox{d}x)$ and $q_\\theta(\\mbox{d}x)$ are absolutely continuous with respect to the Lebesgue measure and we denote their associated densities as $\\pi(x)$ and $q_\\theta(x)$. The use of either the measure or the density will be clear from both the argument (sets or points, respectively) and the context.\n\n\\section{Background}\\label{sec:AISintro}\n\nIn this section, we review importance and adaptive importance samplers.\n\n\\subsection{Importance sampling}\n\nConsider a target density $\\pi \\in {\\mathcal P}({\\mathsf X})$ and a bounded function $\\varphi \\in B({\\mathsf X})$. Often, the main interest is to compute an integral of the form\n\\begin{align}\\label{eq:ProbDefn}\n(\\varphi,\\pi) = \\int_{\\mathsf X} \\varphi(x) \\pi(x) \\mbox{d}x.\n\\end{align}\nWhile perfect Monte Carlo can be used to estimate this expectation when it is possible to sample exactly from $\\pi(x)$, this is in general not tractable. 
Hereafter, we consider the cases when the target can be evaluated exactly and up to a normalising constant, respectively.\n\nImportance sampling (IS) uses a proposal distribution which is easy to sample and evaluate. The method consists in weighting these samples, in order to correct the discrepancy between the target and the proposal, and finally constructing an estimator of the integral. To be precise, let $q_\\theta\\in{\\mathcal P}({\\mathsf X})$ be the proposal which is parameterized by the vector $\\theta\\in\\Theta$. The unnormalised target density is denoted as $\\Pi:{\\mathsf X} \\to {\\mathbb R}_+$. Therefore, we have\n\\begin{align*}\n\\pi(x) = \\frac{\\Pi(x)}{Z_\\pi},\n\\end{align*}\nwhere $Z_\\pi :=\\int_{\\mathsf X} \\Pi(x) {\\mathrm{d}} x < \\infty$. Next, we define functions $w_\\theta, W_\\theta:{\\mathsf X} \\times \\Theta \\to {\\mathbb R}_+$ as\n\\begin{align*}\nw_\\theta(x) = \\frac{\\pi(x)}{q_\\theta(x)} \\quad \\textnormal{and} \\quad W_\\theta(x) = \\frac{\\Pi(x)}{q_\\theta(x)},\n\\end{align*}\nrespectively. For a chosen proposal $q_\\theta$, the IS proceeds as follows. First, a set of independent and identically distributed (iid) samples $\\{x^{(i)}\\}_{i=1}^N$ is generated from $q_\\theta$. When $\\pi(x)$ can be evaluated, one constructs the empirical approximation of the probability measure $\\pi$, denoted $\\pi_\\theta^N$, as\n\\begin{align*}\n\\pi_\\theta^N(\\mbox{d}x) = \\frac{1}{N} \\sum_{i=1}^N w_\\theta(x^{(i)})\\delta_{x^{(i)}}(\\mbox{d}x),\n\\end{align*}\nwhere $\\delta_{x'}(\\mbox{d}x)$ denotes the Dirac delta measure that places unit probability mass at $x=x'$. 
For this case, the IS estimate of the integral in \\eqref{eq:ProbDefn} can be given as\n\\begin{align}\\label{eq:ISestimate}\n(\\varphi,\\pi^N_\\theta) = \\frac{1}{N} \\sum_{i=1}^N w_\\theta(x^{(i)}) \\varphi(x^{(i)}).\n\\end{align}\nHowever, in most practical cases, the target density $\\pi(x)$ can only be evaluated up to an unknown normalizing constant (i.e., we can evaluate $\\Pi(x)$ but not $Z_\\pi$). In this case, we construct the empirical measure $\\pi^N_\\theta$ as\n\\begin{align*}\n\\pi_\\theta^N(\\mbox{d}x) = \\sum_{i=1}^N \\mathsf{w}_\\theta^{(i)} \\delta_{x^{(i)}}(\\mbox{d}x),\n\\end{align*}\nwhere\n\\begin{align*}\n\\mathsf{w}_\\theta^{(i)} = \\frac{W_\\theta(x^{(i)})}{\\sum_{j=1}^N W_\\theta(x^{(j)})}.\n\\end{align*}\nFinally, this construction leads to the so-called self-normalizing importance sampling (SNIS) estimator\n\\begin{align}\\label{eq:SNISestimate}\n(\\varphi,\\pi^N_\\theta) = \\sum_{i=1}^N \\mathsf{w}_\\theta^{(i)} \\varphi(x^{(i)}).\n\\end{align}\nAlthough the IS estimator \\eqref{eq:ISestimate} is unbiased, the SNIS estimator \\eqref{eq:SNISestimate} is in general biased. However, the bias and the MSE vanish with a rate $\\mathcal{O}(1\/N)$, therefore providing guarantees of convergence as $N\\to\\infty$. Crucially for us, the MSE of both estimators can be bounded by the same quantity. {For clarity, below we present an MSE bound for the (more general) SNIS estimator \\eqref{eq:SNISestimate} which is adapted from \\citet{agapiou2017importance}.}\n\\begin{thm}\\label{thm:ISfund}\nAssume that $(W_\\theta^2,q_\\theta) < \\infty$. 
Then for any $\\varphi\\in B({\\mathsf X})$, we have\n\\begin{align}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\theta}^N)\\right)^2\\right] \\leq \\frac{c_\\varphi \\rho(\\theta)}{{N}},\n\\label{eqThm1-1}\n\\end{align}\nwhere $c_\\varphi = 4\\|\\varphi\\|_\\infty^2$ and the function $\\rho:\\Theta \\to [\\rho^\\star,\\infty)$ is defined as\n\\begin{align}\n\\rho(\\theta) = {\\mathbb E}_{q_\\theta}\\left[\\frac{\\pi^2(X)}{q^2_\\theta(X)}\\right],\n\\label{eqThm1-2}\n\\end{align}\nwhere $\\rho^\\star := \\inf_{\\theta\\in\\Theta} \\rho(\\theta) \\geq 1$.\n\\end{thm}\n\\begin{proof}\nSee Appendix \\ref{app:proofIS} for a self-contained proof.\n\\end{proof}\n\\begin{rem} For the IS estimator \\eqref{eq:ISestimate}, this bound can be improved so that $c_\\varphi~=~\\|\\varphi\\|_\\infty^2$. However, this improvement does not affect our results in this paper, hence we present a single bound of the form in \\eqref{eqThm1-1} for the estimators \\eqref{eq:ISestimate} and \\eqref{eq:SNISestimate} for conciseness. $\\square$\n\\end{rem}\n\\begin{rem}\\label{rem:relationToChi} As pointed out by \\cite{agapiou2017importance}, the function $\\rho$ is essentially the $\\chi^2$ divergence between $\\pi$ and $q_\\theta$, i.e.,\n\\begin{align*}\n\\rho(\\theta) := \\chi^2(\\pi || q_\\theta) + 1.\n\\end{align*}\nNote that $\\rho(\\theta)$ can also be expressed in terms of the variance of the weight function $w_\\theta$, which coincides with the $\\chi^2$-divergence, i.e.,\n\\begin{align*}\n\\rho(\\theta) = \\var_{q_\\theta}(w_\\theta(X)) + 1.\n\\end{align*}\nTherefore, minimizing $\\rho(\\theta)$ is equivalent to minimizing the $\\chi^2$-divergence and the variance of the weight function $w_\\theta$, i.e., $\\var_{q_\\theta}(w_\\theta(X))$. 
$\\square$\n\\end{rem}\n\\begin{rem} Remark~\\ref{rem:relationToChi} implies that, when both $\\pi$ and $q_\\theta$ belong {to the same parametric family (i.e., there exists $\\theta \\in \\Theta$ such that $\\pi=q_\\theta$),} one readily obtains\n\\begin{align*}\n\\rho^\\star := \\inf_{\\theta\\in\\Theta} \\rho(\\theta) = 1. \\quad \\square\n\\end{align*}\n\\end{rem}\n\\begin{rem} For the IS estimator \\eqref{eq:ISestimate}, the bound in Theorem~\\ref{thm:ISfund} can be modified so that it holds for unbounded test functions $\\varphi$ as well; see, e.g. \\citet{ryu2014adaptive}. Therefore, a similar quantity to $\\rho(\\theta)$, which includes $\\varphi$ whilst still retaining convexity, can be optimised for this case. Unfortunately, obtaining such a bound is not straightforward for the SNIS estimator \\eqref{eq:SNISestimate} as shown by \\citet{agapiou2017importance}. In order to significantly simplify the presentation, we restrict ourselves to the class of bounded test functions, i.e., we assume $\\|\\varphi\\|_\\infty < \\infty$. $\\square$\n\\end{rem}\n{Finally, we present a bias result from \\citet{agapiou2017importance}.\n\\begin{thm}\\label{thm:SNISbias}\nAssume that $(W_\\theta^2,q_\\theta) < \\infty$. Then for any $\\varphi\\in B({\\mathsf X})$, we have\n\\begin{align*}\n\\left| {\\mathbb E}\\left[(\\varphi,\\pi_{\\theta}^N)\\right] - (\\varphi,\\pi) \\right| \\leq \\frac{\\bar{c}_\\varphi \\rho(\\theta)}{{N}},\n\\end{align*}\nwhere $\\bar{c}_\\varphi = 12\\|\\varphi\\|_\\infty^2$ and the function $\\rho:\\Theta \\to [\\rho^\\star,\\infty)$ is the same as in Theorem~\\ref{thm:ISfund}.\n\\end{thm}\n\\begin{proof}\nSee Theorem 2.1 in \\citet{agapiou2017importance}.\n\\end{proof}}\n\\subsection{Parametric adaptive importance samplers}\nStandard importance sampling may be inefficient in practice when the proposal is poorly calibrated with respect to the target. 
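This inefficiency is easy to reproduce numerically. The following self-contained sketch (our own illustration; the Gaussian target, the two proposals, and the bounded test function $\\varphi(x)=\\cos(x)$ are assumptions, not taken from the paper) compares the empirical MSE of the SNIS estimator \\eqref{eq:SNISestimate} under a well-matched and a badly mismatched proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target pi = N(0,1), available only up to a constant: Pi(x) = exp(-x^2/2), Z_pi = sqrt(2*pi).
Pi = lambda x: np.exp(-x**2 / 2)
phi = lambda x: np.cos(x)          # bounded test function; (phi, pi) = exp(-1/2) exactly
true_val = np.exp(-0.5)

def snis(mean, std, N):
    """SNIS estimate of (phi, pi) using the proposal q = N(mean, std^2)."""
    x = rng.normal(mean, std, N)
    q = np.exp(-(x - mean)**2 / (2 * std**2)) / (std * np.sqrt(2 * np.pi))
    W = Pi(x) / q                  # unnormalised weights W_theta(x)
    w = W / W.sum()                # self-normalised weights
    return np.sum(w * phi(x))

def mse(mean, std, N=100, trials=500):
    return np.mean([(snis(mean, std, N) - true_val) ** 2 for _ in range(trials)])

mse_good = mse(0.0, 1.5)           # proposal roughly calibrated to the target
mse_poor = mse(3.0, 1.5)           # mismatched proposal: much larger chi^2 divergence
assert mse_poor > 2 * mse_good
```

The mismatched proposal has a substantially larger $\\chi^2$-divergence from the target, and the empirical MSE inflates accordingly, in line with the bound of Theorem~\\ref{thm:ISfund}.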
In particular, as implied by the error bound provided in Theorem~\\ref{thm:ISfund}, the error made by the IS estimator can be high if the $\\chi^2$-divergence between the target and the proposal is large. Therefore, it is more common to employ an iterative version of importance sampling, also called \\textit{adaptive importance sampling} (AIS). The AIS algorithms are importance sampling methods which aim at iteratively improving the proposal distributions. More specifically, the AIS methods specify a sequence of proposals $(q_t)_{t\\geq 1}$ and perform importance sampling at each iteration. The aim is to improve the proposal so that the samples are better matched with the target, which results in less variance and more accuracy in the estimators. There are several variants, the most popular one being population Monte Carlo methods \\citep{cappe2004population}, which use previous samples in the proposal.\n\n\\begin{algorithm}[t]\n\\begin{algorithmic}[1]\n\\caption{Parametric AIS}\\label{alg:ParametricAIS}\n\\State Choose a parametric proposal $q_{\\theta}$ with initial parameter $\\theta=\\theta_0$.\n\\For{$t\\geq 1$}\n\\State Adapt the proposal,\n\\begin{align*}\n\\theta_t = \\mathcal{T}_t(\\theta_{t-1}),\n\\end{align*}\n\\State Sample,\n\\begin{align*}\nx_t^{(i)} \\sim q_{\\theta_t}, \\quad \\textnormal{for } i = 1,\\ldots,N,\n\\end{align*}\n\\State Compute weights,\n\\begin{align*}\n\\mathsf{w}_{\\theta_t}^{(i)} = \\frac{W_{\\theta_t}(x_t^{(i)})}{\\sum_{j=1}^N W_{\\theta_t}(x_t^{(j)})}, \\quad \\textnormal{where} \\quad W_{\\theta_t}(x_t^{(i)}) = \\frac{\\Pi(x_t^{(i)})}{q_{\\theta_t}(x_t^{(i)})}.\n\\end{align*}\n\\State Report the point-mass probability measure\n\\begin{align*}\n{\\pi}_{\\theta_t}^N({\\mathrm{d}} x) = \\sum_{i=1}^N \\mathsf{w}_{\\theta_t}^{(i)} \\delta_{x_t^{(i)}}({\\mathrm{d}} x),\n\\end{align*}\nand the estimator\n\\begin{align*}\n(\\varphi,{\\pi}_{\\theta_t}^N) = \\sum_{i=1}^N \\mathsf{w}_{\\theta_t}^{(i)} 
\\varphi(x_t^{(i)}).\n\\end{align*}\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nIn this section, we review one particular AIS, which we refer to as \\textit{parametric AIS}. In this variant, the proposal distribution is a parametric distribution, denoted $q_\\theta$. Over time, this parameter $\\theta$ is updated (or \\textit{optimised}) with respect to a predefined criterion resulting in a sequence $(\\theta_t)_{t\\geq 1}$. This yields a sequence of proposal distributions denoted as $(q_{\\theta_t})_{t\\geq 1}$.\n\nOne iteration of the algorithm goes as follows. Assume at time $t-1$ we are given a proposal distribution $q_{\\theta_{t-1}}$. At time $t$, we first update the parameter of this proposal,\n\\begin{align*}\n\\theta_t = \\mathcal{T}_t(\\theta_{t-1}),\n\\end{align*}\nwhere $\\{\\mathcal{T}_t:\\Theta \\to \\Theta, t\\geq 1\\}$, is a sequence of (deterministic or stochastic) maps, e.g., gradient mappings, constructed so that they minimise a certain cost function. Then, in the same way we have done in conventional IS, we sample\n\\begin{align*}\nx_t^{(i)} \\sim q_{\\theta_t}({\\mathrm{d}} x), \\quad \\textnormal{for } i = 1,\\ldots,N,\n\\end{align*}\ncompute weights\n\\begin{align*}\n\\mathsf{w}_{\\theta_t}^{(i)} = \\frac{W_{\\theta_t}(x_t^{(i)})}{\\sum_{i=1}^N W_{\\theta_t}(x_t^{(i)})},\n\\end{align*}\nand finally construct the empirical measure\n\\begin{align*}\n\\pi_{\\theta_t}^N({\\mathrm{d}} x) = \\sum_{i=1}^N \\mathsf{w}_{\\theta_t}^{(i)} \\delta_{x_t^{(i)}}({\\mathrm{d}} x).\n\\end{align*}\nThe estimator of the integral \\eqref{eq:ProbDefn} is then computed as in Eq. \\eqref{eq:SNISestimate}. \n\nThe full procedure of the parametric AIS method is summarized in Algorithm~\\ref{alg:ParametricAIS}. Since this is a valid IS scheme, this algorithm enjoys the same guarantee provided in Theorem~\\ref{thm:ISfund}. 
In particular, we have the following theorem.\n\\begin{thm}\\label{thm:ISfundAIS}\nAssume that, given a sequence of proposals $(q_{\\theta_t})_{t\\geq 1} \\in {\\mathcal P}({\\mathsf X})$, we have $(W_{\\theta_t}^2,q_{\\theta_t}) < \\infty$ for every $t$. Then for any $\\varphi\\in B({\\mathsf X})$, we have\n\\begin{align*}\n{\\mathbb E}\\left[\\left|(\\varphi,\\pi) - (\\varphi,\\pi_{\\theta_t}^N)\\right|^2\\right] \\leq \\frac{c_\\varphi \\rho(\\theta_t)}{{N}},\n\\end{align*}\nwhere $c_\\varphi = 4 \\|\\varphi\\|_\\infty^2$ and the function $\\rho:\\Theta \\to [\\rho^\\star,\\infty)$ is defined as in Eq. \\eqref{eqThm1-2}.\n\\end{thm}\n\\begin{proof}\nThe proof is identical to the proof of Theorem~\\ref{thm:ISfund}. We have just re-stated the result to introduce the iteration index $t$.\n\\end{proof}\nHowever, this theorem does not give any insight into how the bound behaves as the number of iterations increases, i.e., when $t\\to\\infty$. Ideally, the adaptation of the AIS should improve this bound with time. In other words, in the ideal case, the error should decrease as $t$ grows. Fortunately, Theorem~\\ref{thm:ISfundAIS} suggests that the maps $\\mathcal{T}_t:\\Theta\\to\\Theta$ can be chosen so that the function $\\rho$ is minimised over time. More specifically, the sequence $(\\theta_t)_{t\\geq 1}$ can be chosen so that it leads to a decreasing sequence (at least in expectation) $(\\rho(\\theta_t))_{t\\geq 1}$. In the following sections, we will summarize the deterministic and stochastic strategies to achieve this aim.\n\\begin{rem}\\label{remR} We define the unnormalised version of $\\rho(\\theta)$ and denote it as $R(\\theta)$. 
It is characterised as follows\n\\begin{align*}\n\\rho(\\theta) = \\frac{R(\\theta)}{Z_\\pi^2} \\quad \\textnormal{where} \\quad Z_\\pi = \\int_{\\mathsf X} \\Pi(x) {\\mathrm{d}} x < \\infty.\n\\end{align*}\nHence, $R(\\theta)$ can also be expressed as\n\\begin{align}\\label{eq:Rtheta}\nR(\\theta) = {\\mathbb E}_{q_{\\theta}} \\left[\\frac{\\Pi^2(X)}{q_{\\theta}^2(X)}\\right].\n\\end{align} $\\square$\n\\end{rem}\n\n\n\\subsection{AIS with exponential family proposals}\\label{sec:expFamily}\n\nFollowing \\cite{ryu2014adaptive}, we note that when $q_\\theta$ is chosen as an exponential family density, the function $\\rho(\\theta)$ is convex. In particular, we define\n\\begin{align}\\label{eq:PropDefineExp}\nq_\\theta(x) = \\exp(\\theta^\\top T(x) - A(\\theta)) h(x),\n\\end{align}\nwhere $A: {\\mathbb R}^{d_\\theta}\\to{\\mathbb R} \\cup \\{\\infty\\}$ is the log of the normalization constant, i.e.,\n\\begin{align*}\nA(\\theta) = \\log \\int \\exp(\\theta^\\top T(x)) h(x) \\mbox{d}x,\n\\end{align*}\nwhile $T:{\\mathbb R}^{d_x}\\to{\\mathbb R}^{d_\\theta}$ and $h:{\\mathbb R}^{d_x}\\to{\\mathbb R}_+$. Then we have the following lemma adapted from \\cite{ryu2014adaptive}.\n\\begin{lem}\\label{prop:rhoconvex} Let $q_\\theta$ be chosen as in \\eqref{eq:PropDefineExp}. Then $\\rho:\\Theta \\to [\\rho^\\star,\\infty)$ is convex, i.e., for any $\\theta_1,\\theta_2\\in\\Theta$ and $\\lambda \\in [0,1]$, the following inequality holds\n\\begin{align*}\n\\rho(\\lambda\\theta_1 + (1-\\lambda) \\theta_2) \\leq \\lambda \\rho(\\theta_1) + (1-\\lambda) \\rho(\\theta_2).\n\\end{align*}\n\\end{lem}\n\\begin{proof}\nSee Appendix \\ref{app:proofLemma1} for a self-contained proof.\n\\end{proof}\nLemma~\\ref{prop:rhoconvex} shows that $\\rho$ is a convex function, therefore, optimising it could give us provably convergent algorithms (as $t$ increases). 
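The convexity claim of Lemma~\\ref{prop:rhoconvex} can be checked numerically. The sketch below is our own sanity check (the standard Gaussian target and the natural parametrisation with $T(x) = (x, x^2)$ and $h(x)=1$ are assumptions chosen for illustration); it evaluates $\\rho$ by quadrature and verifies midpoint convexity in the natural parameter:

```python
import numpy as np

# Target pi = N(0,1); proposal family written in exponential-family form
# q_theta(x) = exp(theta1*x + theta2*x^2 - A(theta)) with h(x) = 1, T(x) = (x, x^2), theta2 < 0.
grid = np.linspace(-25.0, 25.0, 200001)
dx = grid[1] - grid[0]
pi = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)

def rho(theta):
    t1, t2 = theta
    A = -t1**2 / (4 * t2) + 0.5 * np.log(np.pi / (-t2))  # log-normaliser A(theta)
    q = np.exp(t1 * grid + t2 * grid**2 - A)
    return np.sum(pi**2 / q) * dx                        # quadrature for E_q[pi^2 / q^2]

# q_theta equals pi at theta = (0, -1/2), where rho attains its minimum rho* = 1.
assert abs(rho(np.array([0.0, -0.5])) - 1.0) < 1e-6

# Midpoint convexity, as guaranteed by Lemma 1:
ta, tb = np.array([0.0, -0.3]), np.array([0.6, -0.45])
assert rho(0.5 * (ta + tb)) <= 0.5 * (rho(ta) + rho(tb))
```

The check also confirms $\\rho^\\star = 1$ when the target belongs to the proposal family, consistent with the remark after Theorem~\\ref{thm:ISfund}.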
Next lemma, borrowed from \\citet{ryu2014adaptive}, shows that $\\rho$ is differentiable and its gradient can indeed be computed as an expectation.\n{\\begin{lem}\\label{lem:GradientRho} The gradient $\\nabla\\rho(\\theta)$ can be written as\n\\begin{align}\\label{eq:gradRho}\n\\nabla \\rho(\\theta) = {\\mathbb E}_{q_\\theta} \\left[(\\nabla A(\\theta) - T(X)) \\frac{\\pi^2(X)}{q_\\theta^2(X)}\\right].\n\\end{align}\n\\end{lem}}\n\\begin{proof}\nThe proof is straightforward since $q_\\theta$ is from an exponential family and $A(\\theta)$ is differentiable.\n\\end{proof}\n\\begin{rem} Note that Eqs.~\\eqref{eq:Rtheta} and \\eqref{eq:gradRho} together imply that\n\\begin{align}\\label{eq:gradR}\n\\nabla R(\\theta) = {\\mathbb E}_{q_\\theta} \\left[(\\nabla A(\\theta) - T(X)) \\frac{\\Pi^2(X)}{q_\\theta^2(X)}\\right].\n\\end{align}\nWe also note (see Remark~\\ref{remR}) that\n\\begin{align}\\label{eq:RelGrads}\n\\nabla R(\\theta) = Z_\\pi^2 \\nabla \\rho(\\theta).\n\\end{align}\n$\\square$\n\\end{rem}\nIn the following sections, we assume that $\\rho(\\theta)$ is a convex function. Thus Lemma~\\ref{prop:rhoconvex} constitutes an important motivation for our approach. We leave general proposals which lead to nonconvex $\\rho(\\theta)$ for future work.\n\n\n\\section{Algorithms}\\label{sec:theAlg}\n\nIn this section, we describe adaptation strategies based on minimizing $\\rho(\\theta)$. In particular, we design maps $\\mathcal{T}_t:\\Theta\\to\\Theta$, for $t\\geq 1$, for scenarios where\n\\begin{itemize}\n\\setlength{\\itemindent}{2em}\n\\item[(i)] the gradient of $\\rho(\\theta)$ can be exactly computed,\n\\item[(ii)] an unbiased estimate of the gradient of $\\rho(\\theta)$ can be obtained, and\n\\item[(iii)] an unbiased estimate of the gradient of $R(\\theta)$ can be obtained.\n\\end{itemize}\nScenario (i) is unrealistic in practice but gives us a guideline in order to further develop the idea. 
{In particular, the error bounds for the more complicated cases follow the same structure as this case. Therefore, the results obtained in case (i) provide a good qualitative understanding of the results introduced later.} Scenario (ii) can be realized in cases where it is possible to evaluate $\\pi(x)$, in which case the IS leads to unbiased estimators. Scenario (iii) is what a practitioner would most often encounter: the target can only be evaluated up to the normalizing constant, i.e., $\\Pi(x)$ can be evaluated but $\\pi(x)$ cannot.\n\n{We finally remark that, for the cases where we assume a stochastic gradient can be obtained for $\\rho$ and $R$ (namely, the case (ii) and the case (iii) respectively), we consider two possible algorithms to perform adaptation. The first method is a \\textit{vanilla} SGD algorithm \\citep{bottou2016optimization} and the second method is a SGD scheme with iterate averaging \\citep{schmidt2017minimizing}. While vanilla SGD is easier to implement and algorithmically related to population-based Monte Carlo methods, iterate averaged SGD results in a better theoretical bound and it has some desirable variance reduction properties.}\n\n\n\n\\subsection{Exact gradient OAIS}\n\nWe first introduce the OAIS scheme where we assume that the exact gradients of $\\rho(\\theta)$ are available. Since $\\rho$ is defined as an expectation (an integral), this assumption is unrealistic. However, the results we can prove for this procedure shed light onto the results that will be proved for practical scenarios in the following sections.\n\nIn particular, in this scheme, given $\\theta_{t-1}$, we specify $\\mathcal{T}_t$ as\n\\begin{align}\\label{eq:exactOAIS}\n\\theta_t = \\mathcal{T}_t(\\theta_{t-1}) = \\mathsf{Proj}_\\Theta(\\theta_{t-1} - \\gamma \\nabla \\rho(\\theta_{t-1})),\n\\end{align}\nwhere $\\gamma > 0$ is the step-size parameter of the map and $\\mathsf{Proj}_\\Theta$ denotes projection onto the compact parameter space $\\Theta$. 
This is a classical gradient descent scheme on $\\rho(\\theta)$. In Section \\ref{ssErrorsExactGrad}, we provide non-asymptotic results for this scheme. However, as we have noted, this idea does not lead to a practical scheme, as the gradients of $\\rho$ are rarely available in exact form.\n\\begin{rem} {We use a projection operator in Eq.~\\eqref{eq:exactOAIS} because we assume throughout the analysis in Section~\\ref{sec:analysis} that the parameter space $\\Theta$ is compact.}\n$\\square$\n\\end{rem}\n\n{\\subsection{Stochastic gradient OAIS}}\n\n{Although it has a nice and simple form, exact-gradient OAIS is often intractable as, in most practical cases, the gradient can only be estimated. In this section, we first look at the case where $\\pi(x)$ can be evaluated, which means that an unbiased estimate of $\\nabla \\rho(\\theta)$ can be obtained. Then we consider the general case, where one can only evaluate $\\Pi(x)$ and can obtain an unbiased estimate of $\\nabla R(\\theta)$.}\n\n{In the following subsections, we consider an algorithm where the gradient is estimated using samples which can also be used to construct importance sampling estimators. The procedure is outlined in Algorithm \\ref{alg:vanillaSGDAIS} for the case in which only $\\Pi(x)$ can be evaluated and $\\nabla R(\\theta)$ is estimated.}\n\n{\\subsubsection{Normalised case}}\n\n{If we assume that the density $\\pi(x)$ can be evaluated exactly, then the algorithm can be described as follows. Given $(\\theta_k)_{1\\leq k\\leq t-1}$, at iteration $t$ we compute the next parameter iterate as\n\\begin{align*}\n\\theta_t = \\mathsf{Proj}_{\\Theta}(\\theta_{t-1} - \\gamma_t g_t),\n\\end{align*}\nwhere $g_t$ is an unbiased estimator of $\\nabla \\rho(\\theta_{t-1})$. We note that, due to the analytical form of $\\nabla \\rho$ (see Eq. 
\\eqref{eq:gradRho}), the samples and weights generated at iteration $t-1$, i.e., $\\left\\{ x_{t-1}^{(i)}, w_{\\theta_{t-1}}(x_{t-1}^{(i)}) \\right\\}_{i=1}^N$ can be reused to estimate the gradient. This makes an algorithmic connection to the population Monte Carlo methods where previous samples and weights are used to adapt the proposal \\citep{cappe2004population}.}\n\n{Given the updated parameter $\\theta_t$, the algorithm first samples from the updated proposal $x_t^{(i)} \\sim q_{\\theta_t}$, $i=1, \\ldots, N$, and then proceeds to construct the IS estimator as in \\eqref{eq:ISestimate}. Namely, \n\\begin{align}\n(\\varphi,\\pi^N_{{\\theta}_t}) = \\frac{1}{N} \\sum_{i=1}^N w_{{\\theta}_t}({x}_t^{(i)}) \\varphi({x}_t^{(i)}).\n\\label{eqEstimator_1}\n\\end{align}}\n\n{\\begin{algorithm}[tb!]\n\\begin{algorithmic}[1]\n\\caption{Stochastic gradient OAIS}\\label{alg:vanillaSGDAIS}\n\\State Choose a parametric proposal $q_{\\theta}$ with initial parameter $\\theta=\\theta_0$.\n\\For{$t\\geq 1$}\n\\State Update the proposal parameter,\n\\begin{align*}\n\\theta_t = \\mathsf{Proj}_\\Theta(\\theta_{t-1} - \\gamma_t \\tilde{g}_t)\n\\end{align*}\nwhere $\\tilde{g}_t$ is computed by approximating the expectation in Eq. 
\\eqref{eq:gradR} using the samples $x_{t-1}^{(i)}$ and weights $W_{\\theta_{t-1}}( x_{t-1}^{(i)} ) = \\Pi( x_{t-1}^{(i)} ) q_{\\theta_{t-1}}(x_{t-1}^{(i)})^{-1}$, $i=1,\\ldots,N$.\n\\State Sample,\n\\begin{align*}\n{x}_t^{(i)} \\sim q_{{\\theta}_t}, \\quad \\textnormal{for } i = 1,\\ldots,N,\n\\end{align*}\n\\State Compute weights,\n\\begin{align*}\n\\mathsf{w}_{{\\theta}_t}^{(i)} = \\frac{W_{{\\theta}_t}({x}_t^{(i)})}{\\sum_{j=1}^N W_{{\\theta}_t}({x}_t^{(j)})}.\n\\end{align*}\n\\State Report,\n\\begin{align*}\n{\\pi}_{{\\theta}_t}^N({\\mathrm{d}} x) = \\sum_{i=1}^N \\mathsf{w}_{{\\theta}_t}^{(i)} \\delta_{{x}_t^{(i)}}({\\mathrm{d}} x),\n\\end{align*}\nand\n\\begin{align*}\n(\\varphi,{\\pi}_{{\\theta}_t}^N) = \\sum_{i=1}^N \\mathsf{w}_{{\\theta}_t}^{(i)} \\varphi({x}_t^{(i)}).\n\\end{align*}\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}}\n\n{\\subsubsection{Self-normalised case}}\n\n{For the general case, where we can only evaluate $\\Pi(x)$, the algorithm proceeds similarly. Given $(\\theta_k)_{1\\leq k\\leq t-1}$, we first update the parameter\n\\begin{align*}\n\\theta_t = \\mathsf{Proj}_{\\Theta}(\\theta_{t-1} - \\gamma_t \\tilde{g}_t),\n\\end{align*}\nwhere $\\tilde{g}_t$ is an unbiased estimator of $\\nabla R(\\theta_{t-1})$. 
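In code, the projected update $\\theta_t = \\mathsf{Proj}_{\\Theta}(\\theta_{t-1} - \\gamma_t \\tilde{g}_t)$ can be sketched in a few lines (a minimal sketch, assuming a box-shaped parameter space $\\Theta$, for which the Euclidean projection is coordinatewise clipping, and the step-size schedule $\\gamma_t = \\beta\/\\sqrt{t}$ used later in the analysis; the box bounds are arbitrary illustrative values):

```python
import numpy as np

def proj_box(theta, lo=-3.0, hi=3.0):
    # Euclidean projection onto a box Theta = [lo, hi]^d is coordinatewise clipping.
    return np.clip(theta, lo, hi)

def sgd_step(theta, g, t, beta=0.1):
    # One projected (stochastic) gradient step with step size gamma_t = beta / sqrt(t).
    return proj_box(theta - (beta / np.sqrt(t)) * g)
```

For instance, `sgd_step(np.array([2.9]), np.array([-5.0]), t=1, beta=0.2)` is pulled back to the boundary value `3.0` by the projection. The analysis only requires $\\Theta$ to be compact (and convex, so that the projection is well defined); the box is a convenient special case.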
Given the updated parameter, we first sample $x_t^{(i)} \\sim q_{\\theta_t}$, $i=1,\\ldots,N$, and then construct the SNIS estimate as in \\eqref{eq:SNISestimate}, i.e., \n\\begin{align*}\n(\\varphi,\\pi^N_{{\\theta}_t}) = \\sum_{i=1}^N \\mathsf{w}^{(i)}_{{\\theta}_t} \\varphi({x}_t^{(i)}),\n\\end{align*}\nwhere\n\\begin{align*}\n\\mathsf{w}_{{\\theta}_t}^{(i)} = \\frac{W_{{\\theta}_t}({x}_t^{(i)})}{\\sum_{j=1}^N W_{{\\theta}_t}({x}_t^{(j)})}.\n\\end{align*}}\n\n\\subsection{Stochastic gradient OAIS with averaged iterates}\n\n{Next, we describe a variant of the stochastic gradient OAIS that uses averages of the iterates generated by the SGD scheme \\citep{schmidt2017minimizing} in order to compute the proposal densities, generate samples and compute weights. In Section \\ref{sec:analysis} we show that the convergence rate for this method is better than the rate that can be guaranteed for Algorithm \\ref{alg:vanillaSGDAIS}.}\n\n\\subsubsection{Normalised case}\n\nWe assume first that the density $\\pi(x)$ can be evaluated. At the beginning of the $t$-th iteration, the algorithm has generated the sequence $(\\theta_k)_{1\\leq k \\leq t-1}$. First, in order to perform the adaptive importance sampling steps, we set\n\\begin{align}\\label{eq:AveragingSGD}\n\\bar{\\theta}_t = \\frac{1}{t}\\sum_{k=0}^{t-1} \\theta_k\n\\end{align}\nand sample $\\bar{x}_{t}^{(i)} \\sim q_{\\bar{\\theta}_t}$ for $i = 1,\\ldots,N$. 
Following the standard parametric AIS procedure (Algorithm~\\ref{alg:ParametricAIS}), we obtain the estimate of $(\\varphi,\\pi)$ as,\n\\begin{align*}\n(\\varphi,\\pi^N_{\\bar{\\theta}_t}) = \\frac{1}{N} \\sum_{i=1}^N w_{\\bar{\\theta}_t}(\\bar{x}_t^{(i)}) \\varphi(\\bar{x}_t^{(i)}).\n\\end{align*}\nNext, we update the parameter vector using the projected stochastic gradient step\n\\begin{align}\\label{eq:recSgdAdaptNorm}\n\\theta_t = \\mathcal{T}_t(\\theta_{t-1}) = \\mathsf{Proj}_\\Theta(\\theta_{t-1} - \\gamma_t g_t),\n\\end{align}\nwhere $g_t$ is an unbiased estimate of $\\nabla\\rho(\\theta_{t-1})$, i.e., ${\\mathbb E}[g_t] = \\nabla \\rho(\\theta_{t-1})$ and $\\mathsf{Proj}_\\Theta$ denotes projection onto the set $\\Theta$. Note that, in order to estimate this gradient, we sample $x_t^{(i)} \\sim q_{\\theta_{t-1}}$ for $i = 1, \\ldots, N$, and approximate the expectation in \\eqref{eq:gradRho}. It is worth noting that the samples $\\{ x_t^{(i)} \\}_{i=1}^N$ are different from the samples $\\{ \\bar x_t^{(i)} \\}_{i=1}^N$ used to estimate $(\\varphi,\\pi)$.\n\n\\subsubsection{Self-normalised case}\n\n\\begin{algorithm}[tb!]\n\\begin{algorithmic}[1]\n\\caption{Stochastic gradient OAIS with averaged iterates}\\label{alg:SGDAIS}\n\\State Choose a parametric proposal $q_{\\theta}$ with initial parameter $\\theta = \\theta_0$.\n\\For{$t\\geq 1$}\n\\State Compute the average parameter vector\n\\begin{align*}\n\\bar{\\theta}_t = \\frac{1}{t} \\sum_{k=0}^{t-1} \\theta_k\n\\end{align*}\n\\State Sample,\n\\begin{align*}\n\\bar{x}_t^{(i)} \\sim q_{\\bar{\\theta}_t}, \\quad \\textnormal{for } i = 1,\\ldots,N,\n\\end{align*}\n\\State Compute weights,\n\\begin{align*}\n\\mathsf{w}_{\\bar{\\theta}_t}^{(i)} = \\frac{W_{\\bar{\\theta}_t}(\\bar{x}_t^{(i)})}{\\sum_{j=1}^N W_{\\bar{\\theta}_t}(\\bar{x}_t^{(j)})}.\n\\end{align*}\n\\State Report the point-mass probability measure\n\\begin{align*}\n{\\pi}_{\\bar{\\theta}_t}^N({\\mathrm{d}} x) = \\sum_{i=1}^N 
\\mathsf{w}_{\\bar{\\theta}_t}^{(i)} \\delta_{\\bar{x}_t^{(i)}}({\\mathrm{d}} x),\n\\end{align*}\nand the estimator\n\\begin{align*}\n(\\varphi,{\\pi}_{\\bar{\\theta}_t}^N) = \\sum_{i=1}^N \\mathsf{w}_{\\bar{\\theta}_t}^{(i)} \\varphi(\\bar{x}_t^{(i)}).\n\\end{align*}\n\\State Update the parameter vector,\n\\begin{align*}\n\\theta_t = \\mathsf{Proj}_\\Theta(\\theta_{t-1} - \\gamma_t \\tilde{g}_t),\n\\end{align*}\nwhere $\\tilde g_t$ is an estimate of $\\nabla R(\\theta_{t-1})$ computed by approximating the expectation in Eq. \\eqref{eq:gradR} using a set of iid samples ${x}_t^{(i)} \\sim q_{\\theta_{t-1}}$, $i=1,\\ldots,N$.\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\nIn general, $\\pi(x)$ cannot be evaluated exactly, hence an unbiased stochastic estimate of $\\nabla\\rho(\\theta)$ cannot be obtained. When the target can only be evaluated up to a normalisation constant, i.e., only $\\Pi(x)$ can be computed, we can use the SNIS procedure as explained in Section~\\ref{sec:AISintro}. Therefore, we introduce here the most general version of the stochastic method, coined \\textit{stochastic gradient OAIS}, which uses the averaged iterates in \\eqref{eq:AveragingSGD} to construct the proposal functions. The scheme is outlined in Algorithm \\ref{alg:SGDAIS}.\n\nTo run this algorithm, given the parameter vector $\\bar{\\theta}_t$ in \\eqref{eq:AveragingSGD}, we first generate a set of samples $\\{\\bar{x}_t^{(i)}\\}_{i=1}^N$ from the proposal $q_{\\bar{\\theta}_t}$. 
Then the integral estimate given by the SNIS can be written as,\n\\begin{align*}\n(\\varphi,\\pi^N_{\\bar{\\theta}_t}) = \\sum_{i=1}^N \\mathsf{w}_{\\bar{\\theta}_t}^{(i)} \\varphi(\\bar{x}_t^{(i)}),\n\\end{align*}\nwhere\n\\begin{align*}\n\\mathsf{w}_{\\bar{\\theta}_t}^{(i)} = \\frac{W_{\\bar{\\theta}_t}(\\bar{x}_t^{(i)})}{\\sum_{j=1}^N W_{\\bar{\\theta}_t}(\\bar{x}_t^{(j)})}.\n\\end{align*}\nFinally, for the adaptation step, we obtain an unbiased estimate of the gradient $\\nabla R(\\theta)$ and adapt the parameter as\n\\begin{align}\\label{eq:SgdUnnormalizedAdapt}\n\\theta_t = \\mathsf{Proj}_\\Theta(\\theta_{t-1} - \\gamma_t \\tilde{g}_t),\n\\end{align}\nwhere $\\tilde{g}_t$ is an unbiased estimate of $\\nabla R(\\theta_{t-1})$, i.e., ${\\mathbb E}[\\tilde{g}_t] = \\nabla R(\\theta_{t-1})$. Note that, as in the normalised case, this gradient is estimated by approximating the expectation in \\eqref{eq:gradR} using iid samples $x_t^{(i)} \\sim q_{\\theta_{t-1}}$, $i = 1,\\ldots,N$. These samples are different, again, from the set $\\{ \\bar x_t^{(i)} \\}_{i=1}^N$ employed to estimate $(\\varphi,\\pi)$.\n{\n\\begin{rem}\nIn Algorithm \\ref{alg:SGDAIS} the samples $\\{ \\bar x_t^{(i)} \\}_{i=1}^N$ drawn from the proposal distribution $q_{\\bar \\theta_t}({\\mathrm{d}} x)$ are \\textit{not} used to compute the gradient estimator $\\tilde g_t$ which, in turn, is needed to generate the next iterate $\\theta_t$. Therefore, if we can afford to generate $T$ iterates, $\\theta_0, \\ldots, \\theta_{T-1}$, with $T$ known beforehand, and we are only interested in the estimator $(\\varphi,\\pi_{\\bar \\theta_T}^N)$ obtained at the last iteration (once the proposal function has been optimised), then it would be possible to skip steps 3--6 in Algorithm \\ref{alg:SGDAIS} up to time $T-1$. 
Only at time $t=T$, we would compute the average parameter vector $\\bar \\theta_T$, sample $\\bar x_T^{(i)}$ from the proposal $q_{\\bar \\theta_T}({\\mathrm{d}} x)$ and generate the point-mass probability measure $\\pi_{\\bar \\theta_T}^N$ and the estimator $(\\varphi,\\pi_{\\bar \\theta_T}^N)$.\n\\end{rem}\n}\n\n\\section{Analysis}\\label{sec:analysis}\n\nTheorem~\\ref{thm:ISfund} yields an intuitive result about the performance of IS methods in terms of the divergence between the target $\\pi$ and the proposal $q_\\theta$. We now apply ideas from convex optimisation theory in order to minimize $\\rho(\\theta)$ and obtain finite-time, finite-sample convergence rates for the AIS procedures outlined in Section \\ref{sec:theAlg}.\n\n\\subsection{Convergence rate with exact gradients} \\label{ssErrorsExactGrad}\n\nLet us first assume that we can compute the gradient of $\\rho(\\theta)$ exactly. In particular, we consider the update rule in Eq. \\eqref{eq:exactOAIS}. For the sake of the analysis, we impose some regularity assumptions on the function $\\rho(\\theta)$.\n\n{\n\\begin{assumption}\\label{ass:LipschitzCont}\nLet $\\rho(\\theta)$ be a convex function with Lipschitz derivatives in the compact space $\\Theta$. To be specific, $\\rho$ is convex and differentiable, and there exists a constant $L<\\infty$ such that\n\\begin{eqnarray}\n\\|\\nabla \\rho(\\theta) - \\nabla \\rho(\\theta')\\|_2 &\\leq& L \\|\\theta - \\theta'\\|_2 \\nonumber\n\\end{eqnarray}\nfor any $\\theta,\\theta' \\in \\Theta$.\n\\end{assumption}\n}\n\n{\n\\begin{rem} Assumption \\ref{ass:LipschitzCont} holds when the density $q_\\theta(x)$ belongs to an exponential family (see Section~\\ref{sec:expFamily}) and $\\Theta$ is compact \\citep{ryu2014adaptive}, even if it may not hold in general for $\\theta \\in {\\mathbb R}^{d_\\theta}$. 
$\\square$\n\\end{rem}\n}\n\n\\begin{lem}\\label{lem:GDconv} If Assumption~\\ref{ass:LipschitzCont} holds and we set a step-size $\\gamma \\leq 1\/L$, then the inequality\n\\begin{align}\n\\rho(\\theta_t) - \\rho^\\star \\leq \\frac{\\|\\theta_0 - \\theta^\\star\\|_2^2}{2\\gamma t},\n\\label{eqLem3-1}\n\\end{align}\nis satisfied for the sequence $\\{\\theta_t\\}_{t\\ge 0}$ generated by the recursion \\eqref{eq:exactOAIS}, where $\\theta^\\star$ is a minimum of $\\rho$.\n\\end{lem}\n\\begin{proof}\nSee, e.g., \\cite{nesterov2013introductory}.\n\\end{proof}\n\nThe rate in \\eqref{eqLem3-1} is one of the most fundamental results in convex optimisation. Lemma \\ref{lem:GDconv} enables us to prove the following result for the MSE of the AIS estimator in Eq. \\eqref{eqEstimator_1}, adapted using exact gradient descent.\n\n\\begin{thm}\\label{thm:GD} Let Assumption~\\ref{ass:LipschitzCont} hold and construct the sequence $(\\theta_t)_{t\\geq 1}$ using recursion \\eqref{eq:exactOAIS}, where $(q_{\\theta_t})_{t\\geq 1}$ is the sequence of proposal distributions. Then, the inequality\n\\begin{align}\\label{ineq:gOAISbound}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\theta_t}^N)\\right)^2\\right] &\\leq \\frac{c_\\varphi \\|\\theta_0 - \\theta^\\star\\|_2^2}{2 \\gamma t {N}} + \\frac{c_\\varphi \\rho^\\star}{{N}}\n\\end{align}\nis satisfied, where $c_\\varphi = 4 \\|\\varphi\\|_\\infty^2$, $0 < \\gamma \\leq 1\/L$ and $L$ is the Lipschitz constant of the gradient $\\nabla\\rho(\\theta)$ in Assumption \\ref{ass:LipschitzCont}.\n\\end{thm}\n\n\\begin{proof}\nSee Appendix \\ref{app:thm:GD}.\n\\end{proof}\n\n\\begin{rem}\\label{remGDasymptote} Theorem~\\ref{thm:GD} sheds light on several facts. We first note that $\\rho^\\star$ in the error bound \\eqref{ineq:gOAISbound} can be interpreted as an indicator of the quality of the parametric proposal. We recall that $\\rho^\\star = 1$ when both $\\pi$ and $q_\\theta$ belong to the same exponential family. 
For this special case, Theorem~\\ref{thm:GD} implies that\n\\begin{align*}\n\\lim_{t\\to\\infty} \\left\\|(\\varphi,\\pi) - (\\varphi,\\pi_{\\theta_t}^N)\\right\\|_2 \\leq \\mathcal{O}\\left(\\frac{1}{\\sqrt{N}}\\right).\n\\end{align*}\nIn other words, when the target and the proposal are both from the exponential family, this adaptation strategy leads to an \\textit{asymptotically optimal} Monte Carlo estimator (optimal meaning that we attain the same rate as a Monte Carlo estimator with $N$ iid samples from $\\pi$). On the other hand, when $\\pi$ and $q_\\theta$ do not belong to the same family, we obtain\n\\begin{align*}\n\\lim_{t\\to\\infty} \\left\\|(\\varphi,\\pi) - (\\varphi,\\pi_{\\theta_t}^N)\\right\\|_2 \\leq \\mathcal{O}\\left(\\sqrt{\\frac{\\rho^\\star}{N}}\\right),\n\\end{align*}\ni.e., the $L_2$ rate is again asymptotically optimal, but the constant in the error bound is worse (bigger) by a factor $\\sqrt{\\rho^\\star}>1$. $\\square$\n\\end{rem}\n\nThis bound shows that as $t\\to\\infty$, what we are left with is essentially the minimum attainable IS error for a given parametric family $\\{q_\\theta\\}_{\\theta\\in\\Theta}$. Intuitively, when the proposal $q_\\theta$ is from a different parametric family than $\\pi$, the gradient OAIS optimises the error bound in order to obtain the best possible proposal. In particular, the MSE has two components: first, an $\\mathcal{O}(1\/tN)$ component, which can be made to vanish over time by improving the proposal, and second, an $\\mathcal{O}(1\/N)$ component, which is related to $\\rho^\\star$. The quantity $\\rho^\\star$ is related to the minimum $\\chi^2$-divergence between the target and proposal. This means that the discrepancy between the target and \\textit{optimal} proposal (according to the $\\chi^2$-divergence) can only be tackled by increasing $N$. 
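As a small numerical illustration of the exact-gradient scheme (a hypothetical example, not taken from the analysis: target $\\pi = \\mathcal{N}(\\mu, 1)$ and proposal family $q_\\theta = \\mathcal{N}(\\theta, 1)$, for which $\\rho(\\theta) = \\exp((\\theta - \\mu)^2)$ is available in closed form and $\\rho^\\star = 1$):

```python
import numpy as np

# Hypothetical closed-form example: pi = N(mu, 1), q_theta = N(theta, 1).
# For this pair, rho(theta) = E_q[w_theta^2(X)] = exp((theta - mu)^2), so rho* = 1.
mu = 1.0
rho = lambda th: np.exp((th - mu) ** 2)
grad_rho = lambda th: 2.0 * (th - mu) * np.exp((th - mu) ** 2)

theta = -1.5        # initial parameter theta_0
gamma = 0.05        # fixed step size (illustrative; larger than the worst-case 1/L)
lo, hi = -2.0, 2.0  # compact parameter space Theta = [-2, 2]
for t in range(500):
    theta = np.clip(theta - gamma * grad_rho(theta), lo, hi)  # Proj_Theta(...)

# Gradient descent drives theta to mu, i.e., rho(theta_t) -> rho* = 1.
```

Although $\\gamma = 0.05$ exceeds the worst-case $1\/L$ prescribed by Lemma~\\ref{lem:GDconv} on this box, the iteration still converges for this toy target; the point is only to illustrate $\\rho(\\theta_t) \\to \\rho^\\star$ under the projected recursion.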
This intuition is the same for the schemes we analyse in the next sections, although the rate with respect to the number of iterations necessarily worsens because of the uncertainty in the gradient estimators.\n\n\\begin{rem} When $\\gamma = 1\/L$, Theorem~\\ref{thm:GD} implies that if $t = \\mathcal{O}({L}\/{\\rho^\\star})$ and $N = \\mathcal{O}(\\rho^\\star \/ \\varepsilon)$, for some $\\varepsilon>0$, then we have\n\\begin{align*}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\theta_t}^N)\\right)^2\\right] &\\leq \\mathcal{O}(\\varepsilon).\n\\end{align*}\nWe remark that once we choose the number of samples $N = \\mathcal{O}(\\rho^\\star\/\\varepsilon)$, the number of iterations $t$ for adaptation is independent of $N$ and $\\varepsilon$. $\\square$\n\\end{rem}\n\n\\begin{rem} One can use different maps $\\mathcal{T}_t$ for optimisation. For example, one can use Nesterov's accelerated gradient descent (which has more parameters than just a step size), in which case, one could prove (by a similar argument) the inequality \\citep{nesterov2013introductory}\n\\begin{align*}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\theta_t}^N)\\right)^2\\right] &\\leq \\mathcal{O}\\left(\\frac{1}{t^2 {N}} + \\frac{\\rho^\\star}{{N}}\\right).\n\\end{align*}\nThis is an improved convergence rate, going from $\\mathcal{O}(1\/t)$ to $\\mathcal{O}(1\/t^2)$ in the first term of the bound. $\\square$\n\\end{rem}\n\n\\subsection{Convergence rate with averaged SGD iterates} \\label{ssConvergence-Averaged-Iterates}\n\nWhile, for the purpose of analysis, it is convenient to assume that the minimization of $\\rho(\\theta)$ can be done deterministically, this is rarely the case in practice. The `best' realistic case is that we can obtain an unbiased estimator of the gradient. 
{Hence, we address this scenario, under the assumption that the actual gradient functions $\\nabla \\rho$ and $\\nabla R$ are bounded in $\\Theta$ (i.e., $\\rho(\\theta)$ and $R(\\theta)$ are Lipschitz in $\\Theta$).}\n\n{\\begin{assumption}\\label{ass:BoundedGradient} The gradient functions $\\nabla \\rho(\\theta)$ and $\\nabla R(\\theta)$ are bounded in $\\Theta$. To be specific, there exist finite constants $G_\\rho$ and $G_R$ such that\n\\begin{eqnarray}\n\\sup_{\\theta\\in\\Theta} \\|\\nabla \\rho(\\theta)\\|_2 &<& G_\\rho <\\infty \\quad \\text{and} \\nonumber\\\\\n\\sup_{\\theta\\in\\Theta} \\|\\nabla R(\\theta)\\|_2 &<& G_R < \\infty. \\nonumber\n\\end{eqnarray}\n\\end{assumption}}\n\n{We note that this is a mild assumption in the case of interest in this paper, where $\\Theta \\subset {\\mathbb R}^{d_\\theta}$ is assumed to be compact.}\n\n\n\\subsubsection{Normalised target}\\label{sec:NormalisedIS}\n\nFirst, we assume that we can evaluate $\\pi(x)$, which means that at iteration $t$, we can obtain an unbiased estimate of $\\nabla \\rho(\\theta_{t-1})$, denoted $g_t$. We use the optimisation algorithms called \\textit{stochastic gradient} methods, which use stochastic and unbiased estimates of the gradients to optimise a given cost function \\citep{RobbinsMonro}. Particularly, we focus on optimised samplers using stochastic gradient descent (SGD) as an adaptation strategy.\n\n{We start by proving that the stochastic gradient estimates $(g_t)_{t\\geq 0}$ have a finite mean-squared error (MSE) w.r.t. the true gradients. To prove this result, we need an additional regularity condition.}\n{\\begin{assumption}\\label{ass:SupSupBound} \nThe normalised target and proposal densities satisfy the inequality \n\\begin{align*}\n\\sup_{\\theta\\in\\Theta} {\\mathbb E}_{q_\\theta}\\left[ \\left|\\frac{\\pi^2(X)}{q_\\theta^2(X)} \\frac{\\partial \\log q_\\theta}{\\partial \\theta_j}(X) \\right|^2 \\right] =: D_\\pi^j < \\infty\n\\end{align*}\nfor $j=1, \\ldots, d_\\theta$. 
We denote $D_\\pi := \\max_{j \\in \\{1,\\ldots,d_\\theta\\}} D_\\pi^j < \\infty$.\n\\end{assumption}}\n\n{\n\\begin{rem} \\label{remSupSup}\nLet us rewrite $D_\\pi^j$ in Assumption \\ref{ass:SupSupBound} in terms of the weight function, namely\n\\begin{align*}\nD_\\pi^j = \\sup_{\\theta\\in\\Theta} {\\mathbb E}_{q_\\theta} \\left[ \\left| w_\\theta^2(X) \\frac{\\partial \\log q_\\theta}{\\partial \\theta_j}(X) \\right|^2 \\right].\n\\end{align*}\nWhen $q_\\theta(x)$ belongs to the exponential family, we readily obtain\n\\begin{align*}\nD_\\pi^j = \\sup_{\\theta\\in\\Theta} {\\mathbb E}_{q_\\theta} \\left[ w_\\theta^4(X) \\left( \\frac{\\partial A(\\theta)}{\\partial \\theta_j} -T_j(X) \\right)^2 \\right],\n\\end{align*}\nwhere $T_j(X)$ is the $j$-th sufficient statistic for $q_\\theta(x)$. Let us construct a bounding function for the weights of the form\n$$\nK(\\theta) := \\sup_{x \\in {\\sf X}} w_\\theta(x).\n$$\nIf we choose the compact space $\\Theta$ in such a way that $K(\\theta)$ is bounded, then we readily have\n\\begin{align*}\nD_\\pi^j &\\le \\sup_{\\theta\\in\\Theta} K^4(\\theta) {\\mathbb E}_{q_\\theta} \\left[ \\left( \\frac{\\partial A(\\theta)}{\\partial \\theta_j} -T_j(X) \\right)^2 \\right] \\\\\n&\\le \\| K \\|_\\infty^4 \\sup_{\\theta\\in\\Theta} \\text{Var}_{q_\\theta}( T_j(X) ),\n\\end{align*}\nwhere we have used the fact that $\\frac{\\partial A(\\theta)}{\\partial \\theta_j} = {\\mathbb E}_{q_{\\theta}}\\left[ T_j(X) \\right]$. Therefore, if the weights remain bounded in $\\Theta$, a sufficient condition for Assumption \\ref{ass:SupSupBound} to hold is that the sufficient statistics of the proposal distribution all have variances bounded over $\\Theta$, i.e., $\\max_{j \\in \\{1, \\ldots, d_\\theta\\}} \\sup_{\\theta\\in\\Theta} \\text{Var}_{q_\\theta}( T_j(X) ) < \\infty$.\n\\\\ \\\\\n{There are alternative conditions that, when satisfied, lead to Assumption \\ref{ass:SupSupBound} holding true. 
As an example, in Appendix \\ref{apRho2} we provide an alternative sufficient condition in terms of the function $\\rho_2(\\theta):={\\mathbb E}[ w_\\theta^4(X) ]$.}\n\\end{rem}\n}\n\nNow we show that when $g_t$ is an iid Monte Carlo estimator of $\\nabla \\rho$, we have the following finite-sample bound for the MSE.\n{\\begin{lem}\\label{lem:GradientMonteCarlo}\nIf Assumption~\\ref{ass:SupSupBound} holds, then the inequality\n\\begin{align*}\n{\\mathbb E}[\\| g_t - \\nabla \\rho(\\theta_{t-1})\\|_2^2] \\leq \\frac{d_\\theta c_{\\rho} D_\\pi}{N},\n\\end{align*}\nis satisfied, where $d_\\theta$ is the parameter dimension and $c_\\rho, D_\\pi < \\infty$ are constants w.r.t. $N$.\n\\end{lem}\n\\begin{proof}\nSee Appendix~\\ref{app:lem:GradientMonteCarlo}.\n\\end{proof}}\n\nIn order to obtain convergence rates for the estimator $(\\varphi,\\pi_{\\bar \\theta_t}^N)$, we first recall a classical result for the SGD (see, e.g., \\cite{bubeck2015convex}).\n\\begin{lem}\\label{lem:SGDconv} \n{Let Assumptions \\ref{ass:BoundedGradient} and \\ref{ass:SupSupBound} hold, apply recursion \\eqref{eq:recSgdAdaptNorm} and let $(g_t)_{t\\geq 0}$ be the stochastic gradient estimates in Lemma~\\ref{lem:GradientMonteCarlo}}. 
If we choose the step-size sequence $\\gamma_k = \\alpha \/ \\sqrt{k}$, $1\\leq k \\leq t$, for any $\\alpha > 0$, then\n{\\begin{align}\\label{eq:SGDrate}\n{\\mathbb E}[\\rho(\\bar{\\theta}_t) - \\rho^\\star] \\leq \\frac{{\\mathbb E}\\|\\theta_0 - \\theta^\\star\\|_2^2}{2\\alpha\\sqrt{t}} + \\frac{\\alpha d_\\theta c_\\rho D_\\pi}{\\sqrt{t} N} + \\frac{\\alpha G^2_\\rho}{\\sqrt{t}},\n\\end{align}}\nwhere $\\bar{\\theta}_t = \\frac{1}{t}\\sum_{k=0}^{t-1} \\theta_k$.\n\\end{lem}\n\n\\begin{proof}\nSee Appendix \\ref{app:lem:SGDconv} for a self-contained proof.\n\\end{proof}\n\nWe can now state the first core result of the paper, which is the convergence rate for the AIS algorithm using a SGD adaptation of the parameter vectors $\\theta_t$.\n\\begin{thm}\\label{thm:SGDAIS} \nLet Assumptions \\ref{ass:BoundedGradient} and \\ref{ass:SupSupBound} hold, let the sequence $(\\theta_t)_{t\\geq 1}$ be computed using \\eqref{eq:recSgdAdaptNorm} and construct the averaged iterates $\\bar{\\theta}_t = \\frac{1}{t} \\sum_{k=0}^{t-1} \\theta_k$. 
Then, the sequence of proposal distributions $(q_{\\bar{\\theta}_t})_{t\\geq 1}$ satisfies the inequality\n{\\begin{align}\\label{eq:rateSGDAIS}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\bar{\\theta}_t}^N)\\right)^2\\right] &\\leq \\frac{c_1}{\\sqrt{t}N} + \\frac{c_2}{\\sqrt{t} N^2} + \\frac{c_3}{\\sqrt{t} N} + \\frac{c_4}{N}\n\\end{align}\nfor $t \\ge 1$ and any $\\varphi \\in B({\\sf X})$, where\n\\begin{align*}\nc_1 &= \\frac{c_\\varphi {\\mathbb E}\\|\\theta_0 - \\theta^\\star\\|_2^2}{2 \\alpha}, \\\\ \nc_2 &= {c_\\varphi c_\\rho \\alpha d_\\theta D_\\pi}, \\\\\nc_3 &={c_\\varphi \\alpha G_\\rho^2}, \\\\\nc_4 &={c_\\varphi \\rho^\\star},\n\\end{align*}\nand $c_\\varphi = 4\\|\\varphi\\|_\\infty^2$ are finite constants independent of $t$ and $N$.}\n\\end{thm}\n\n\\begin{proof}\nSee Appendix \\ref{app:thm:SGDAIS}.\n\\end{proof}\n\n\\begin{rem} \nNote that the expectation on the left hand side of \\eqref{eq:rateSGDAIS} is taken w.r.t. the distribution of the measure-valued random variable $\\pi_{\\bar \\theta_t}^N$. $\\square$\n\\end{rem}\n\nTheorem~\\ref{thm:SGDAIS} can be interpreted similarly to Theorem~\\ref{thm:GD}. One can see that the overall rate of the MSE bound is $\\mathcal{O}\\left({1}\/{\\sqrt{t}N} + {1}\/{N}\\right)$. This means that, as $t\\to\\infty$, we are only left with a rate that is optimal for the AIS for a given parametric proposal family. In particular, again, $\\rho^\\star$ is related to the minimal $\\chi^2$-divergence between the target $\\pi$ and the parametric proposal $q_\\theta$. When the proposal and the target are from the same family, we are back to the case $\\rho^\\star = 1$, thus the adaptation leads to the optimal Monte Carlo rate $\\mathcal{O}(1\/\\sqrt{N})$ for the $L_2$ error within this setting as well.\n\n\\subsubsection{Self-normalised estimators}\n\nWe have noted that it is possible to obtain an unbiased estimate of $\\nabla\\rho(\\theta)$ when the normalised target $\\pi(x)$ can be evaluated. 
However, if we can only evaluate the unnormalised density $\\Pi(x)$ instead of $\\pi(x)$ and use the self-normalised IS estimator, the estimate of $\\nabla\\rho(\\theta)$ is no longer unbiased. We refer to Sec.~5 of \\cite{tadic2017asymptotic} for stochastic optimisation with biased gradients for adaptive Monte Carlo, where the discussion revolves around minimising the Kullback-Leibler divergence rather than the $\\chi^2$-divergence. The results presented in \\cite{tadic2017asymptotic}, however, are asymptotic, while herein we are interested in finite-time bounds. Due to the structure of the AIS scheme, it is possible to avoid working with biased gradient estimators. In particular, we can implement the proposed AIS schemes using unbiased estimators of $\\nabla R(\\theta)$ instead of biased estimators of $\\nabla \\rho(\\theta)$. Since optimising the unnormalised function $R(\\theta)$ leads to the same minima as optimising the normalised function $\\rho(\\theta)$, we can simply use $\\nabla R(\\theta)$ for the adaptation in the self-normalised case.\n\nSimilarly to the argument in Section \\ref{sec:NormalisedIS}, we first state the assumption below, which is the obvious counterpart of Assumption \\ref{ass:SupSupBound}.\n{\\begin{assumption}\\label{ass:SupSupBoundPi} \nThe unnormalised target $\\Pi(x)$ and the proposal densities $q_\\theta(x)$ satisfy the inequalities\n\\begin{align*}\n\\sup_{\\theta\\in\\Theta} {\\mathbb E}_{q_\\theta} \\left[ \\left| \\frac{\\Pi^2(X)}{q_\\theta^2(X)} \\frac{\\partial \\log q_\\theta}{\\partial \\theta_j}(X) \\right|^2 \\right] =: D_\\Pi^j < \\infty\n\\end{align*}\nfor $j=1, \\ldots, d_\\theta$. We denote $D_\\Pi := \\frac{1}{d_\\theta} \\sum_{j=1}^{d_\\theta} D_\\Pi^j < \\infty$.\n\\end{assumption}}\n\nRemark \\ref{remSupSup} holds directly for Assumption \\ref{ass:SupSupBoundPi} as long as $Z_\\pi<\\infty$. 
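Before stating the bound, it may help to see what such a gradient estimator looks like in practice. The sketch below assumes (i) a hypothetical 1D setup with an unnormalised Gaussian target and a unit-variance Gaussian proposal, and (ii) that Eq.~\\eqref{eq:gradR} has the standard score-function form $\\nabla R(\\theta) = -{\\mathbb E}_{q_\\theta}\\left[ \\frac{\\Pi^2(X)}{q_\\theta^2(X)} \\nabla_\\theta \\log q_\\theta(X) \\right]$, which follows by differentiating $R(\\theta) = {\\mathbb E}_{q_\\theta}[\\Pi^2(X)\/q_\\theta^2(X)]$ under the integral sign:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.0                                     # hypothetical unnormalised target
Pi = lambda x: np.exp(-0.5 * (x - mu) ** 2)  # Pi(x) = Z_pi * pi(x), Z_pi unknown

def grad_R_estimate(theta, N=100_000):
    # Unbiased score-function estimate of grad R(theta) for q_theta = N(theta, 1):
    # grad R(theta) = -E_q[(Pi(X)/q_theta(X))^2 * (X - theta)], using
    # grad_theta log q_theta(x) = x - theta for a unit-variance Gaussian.
    x = rng.normal(theta, 1.0, size=N)
    q = np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2.0 * np.pi)
    W = Pi(x) / q                            # unnormalised weights W_theta(x)
    return -np.mean(W ** 2 * (x - theta))
```

For this particular pair one can check analytically that $\\nabla R(\\theta) = 4 \\pi \\theta e^{\\theta^2}$, so the estimator is easy to sanity-check; in general only the Monte Carlo estimate is available, and its error is exactly what the assumption below controls.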
{Next, we prove an MSE bound for the stochastic gradients $(\\tilde{g}_t)_{t\\geq 0}$ employed in recursion \\eqref{eq:SgdUnnormalizedAdapt}, i.e., the unbiased stochastic estimates of $\\nabla R(\\theta)$.}\n\n{\n\\begin{lem}\\label{lem:GradientMonteCarloR}\nIf Assumptions \\ref{ass:BoundedGradient} and \\ref{ass:SupSupBoundPi} hold, the inequality\n\\begin{align*}\n{\\mathbb E}[\\| \\tilde{g}_t - \\nabla R(\\theta_{t-1})\\|_2^2] \\leq \\frac{d_\\theta c_R D_\\Pi}{N},\n\\end{align*}\nis satisfied, where $c_R, D_\\Pi < \\infty$ are constants w.r.t. $N$.\n\\end{lem}\n\\begin{proof}\nThe proof is identical to the proof of Lemma~\\ref{lem:GradientMonteCarlo}.\n\\end{proof}\n}\n\nWe can now obtain explicit rates for the convergence of the unnormalised function $R(\\bar \\theta_t)$, evaluated at the averaged iterates $\\bar \\theta_t$. \n\n\\begin{lem}\\label{lem:SGDBiasedconv} \nIf Assumptions \\ref{ass:BoundedGradient} and \\ref{ass:SupSupBoundPi} hold and the sequence $(\\theta_t)_{t\\ge 1}$ is computed via recursion \\eqref{eq:SgdUnnormalizedAdapt}, with step-sizes $\\gamma_k = \\beta \/ \\sqrt{k}$ for $1\\leq k \\leq t$ and $\\beta > 0$, we obtain the inequality\n{\\begin{align}\\label{eq:UnnormRate}\n{\\mathbb E}[R(\\bar{\\theta}_t) - R^\\star] \\leq \\frac{{\\mathbb E} \\|\\theta_0 - \\theta^\\star\\|_2^2}{2 \\beta \\sqrt{t}} + \\frac{\\beta d_\\theta c_R D_\\Pi}{\\sqrt{t} N} + \\frac{\\beta G^2_R}{\\sqrt{t}},\n\\end{align}}\nwhere $c_R,D_\\Pi<\\infty$ are constants w.r.t. $t$ and $N$. 
Relation \\eqref{eq:UnnormRate} implies that\n{\\begin{align}\\label{eq:NormRateWithNormConsts}\n{\\mathbb E}[\\rho(\\bar{\\theta}_t) - \\rho^\\star] \\leq \n\\frac{{\\mathbb E} \\|\\theta_0 - \\theta^\\star\\|_2^2}{2 \\beta Z_\\pi^2 \\sqrt{t}} + \\frac{\\beta d_\\theta c_R D_\\Pi}{Z_\\pi^2 \\sqrt{t} N} + \\frac{\\beta G^2_R}{Z_\\pi^2 \\sqrt{t}}.\n\\end{align}}\n\\end{lem}\n\\begin{proof} The proof of the rate in \\eqref{eq:UnnormRate} is identical to the proof of Lemma~\\ref{lem:SGDconv}. The rate in \\eqref{eq:NormRateWithNormConsts} follows by observing that $\\rho(\\theta) = R(\\theta) \/ Z_\\pi^2$ for every $\\theta\\in\\Theta$.\n\\end{proof}\n\nFinally, using Lemma~\\ref{lem:SGDBiasedconv}, we can state our main result: an explicit error rate for the MSE of Algorithm~\\ref{alg:SGDAIS} as a function of the number of iterations $t$ and the number of samples $N$.\n\n\\begin{thm}\\label{thm:SGDAISUN} \nLet Assumptions \\ref{ass:BoundedGradient} and \\ref{ass:SupSupBoundPi} hold and let the sequence $(\\theta_t)_{t\\ge 1}$ be computed via recursion \\eqref{eq:SgdUnnormalizedAdapt}, with step-sizes $\\gamma_k = \\beta \/ \\sqrt{k}$ for $1\\leq k \\leq t$ and $\\beta > 0$. 
We have the following inequality for the sequence of proposal distributions $(q_{\\bar{\\theta}_t})_{t\\geq 1}$,\n{\\begin{align}\\label{eq:rateUnnormAIS}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\bar{\\theta}_t}^N)\\right)^2\\right] &\\leq \\frac{C_1}{\\sqrt{t} N} + \\frac{C_2}{\\sqrt{t}N^2} + \\frac{C_3}{\\sqrt{t} N} + \\frac{C_4}{N},\n\\end{align}\nwhere\n\\begin{align*}\nC_1 &= \\frac{c_\\varphi {\\mathbb E}\\|\\theta_0 - \\theta^\\star\\|_2^2}{2 \\beta Z_\\pi^2}, \\\\\nC_2 &= \\frac{c_\\varphi \\beta c_R d_\\theta D_\\Pi}{Z_\\pi^2}, \\\\\nC_3 &= \\frac{c_\\varphi \\beta G^2_R}{Z_\\pi^2}, \\\\\nC_4 &= {c_\\varphi \\rho^\\star},\n\\end{align*}\nand $c_\\varphi = 4\\|\\varphi\\|_\\infty^2$ are finite constants independent of $t$ and $N$.}\n\\end{thm}\n\\begin{proof}\nThe proof follows from Lemma~\\ref{lem:SGDBiasedconv} and mimicking the exact same steps as in the proof of Theorem~\\ref{thm:SGDAIS}.\n\\end{proof}\n\n\\begin{rem} \nTheorem~\\ref{thm:SGDAISUN}, as in Remark~\\ref{remGDasymptote}, provides relevant insights regarding the performance of the stochastic gradient OAIS algorithm. In particular, for a general target $\\pi$, we obtain\n\\begin{align*}\n\\lim_{t\\to\\infty} \\left\\|(\\varphi,\\pi) - (\\varphi,\\pi_{\\bar{\\theta}_t}^N)\\right\\|_2 = \\mathcal{O}\\left(\\sqrt{\\frac{\\rho^\\star}{N}}\\right).\n\\end{align*}\nThis result shows that the $L_2$ error is asymptotically optimal. As in previous cases, if the target $\\pi$ is in the exponential family, then the asymptotic convergence rate is $\\mathcal{O}(1\/\\sqrt{N})$ as $t \\to \\infty$. $\\square$\n\\end{rem}\n\n\\begin{rem} \nTheorem~\\ref{thm:SGDAISUN} also yields a practical heuristic to tune the step-size and the number of particles together. Assume that $0 < \\beta < 1$ and let $N = 1\/\\beta$ (which we assume to be an integer without loss of generality). 
In this case, the rate \\eqref{eq:rateUnnormAIS} simplifies to\n\\begin{align*}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\bar{\\theta}_t}^N)\\right)^2\\right] &\\leq \\frac{c_\\varphi {\\mathbb E}\\|\\theta_0 - \\theta^\\star\\|_2^2}{2 Z_\\pi^2 \\sqrt{t}} + \\frac{c_\\varphi \\beta^3 c_R d_\\theta D_\\Pi}{Z_\\pi^2 \\sqrt{t}} + \\frac{c_\\varphi \\beta^2 G^2_R}{Z_\\pi^2\\sqrt{t}} + c_\\varphi \\rho^\\star \\beta.\n\\end{align*}\nNow, if we let $t = \\mathcal{O}(1\/\\beta^2)$ we readily obtain\n\\begin{align*}\n{\\mathbb E}\\left[\\left((\\varphi,\\pi) - (\\varphi,\\pi_{\\bar{\\theta}_t}^N)\\right)^2\\right] &\\leq \\mathcal{O}(\\beta).\n\\end{align*}\nTherefore, one can control the error using the step-size of the optimisation scheme provided that other parameters of the algorithm are chosen accordingly. The same argument also holds for Theorem~\\ref{thm:SGDAIS}. $\\square$\n\\end{rem}\n\n\\begin{rem} \n{It is not straightforward to compare the rates in inequality \\eqref{eq:rateUnnormAIS} (for the unnormalised target $\\Pi(x)$) and inequality \\eqref{eq:rateSGDAIS} (for the normalised target $\\pi(x)$). Even if \\eqref{eq:rateUnnormAIS} may ``look better'' by a constant factor compared to the rate in \\eqref{eq:rateSGDAIS}, this is usually not the case. Indeed, the variance of the errors in the unnormalised gradient estimators is typically higher and this is reflected in the variance of the moment estimators. 
Another way to look at this issue is to realise that, very often, $Z_\\pi \\ll 1$, which makes the bound in \\eqref{eq:rateUnnormAIS} much greater than the bound in \\eqref{eq:rateSGDAIS}.}\n\\end{rem}\n\n{Finally, we can adapt Theorem~\\ref{thm:SNISbias} to our case, providing a convergence rate for the bias of the importance sampler given by Algorithm~\\ref{alg:SGDAIS}.}\n\n{\\begin{thm}\\label{thm:SGDAISbias}\nUnder the setting of Theorem~\\ref{thm:SGDAISUN}, we have\n\\begin{align}\\label{eq:rateUnnormAISbias}\n\\left| {\\mathbb E}\\left[(\\varphi,\\pi_{\\bar{\\theta}_t}^N)\\right] - (\\varphi, \\pi)\\right| &\\leq \\frac{3C_1}{\\sqrt{t} N} + \\frac{3C_2}{\\sqrt{t}N^2} + \\frac{3C_3}{\\sqrt{t} N} + \\frac{3C_4}{N},\n\\end{align}\nwhere $C_1,C_2,C_3,C_4$ are finite constants given in Theorem~\\ref{thm:SGDAISUN} and independent of $t$ and $N$.\n\\end{thm}}\n\\begin{proof}\nThe proof follows from Theorem~\\ref{thm:SNISbias} by mimicking the proof technique used to prove Theorem~\\ref{thm:SGDAISUN}.\n\\end{proof}\n\n\\subsection{Convergence rate with vanilla SGD}\n\n{The arguments of Section \\ref{ssConvergence-Averaged-Iterates} can be carried over to the analysis of Algorithm \\ref{alg:vanillaSGDAIS}, where the proposal functions $q_{\\theta_t}(x)$ are constructed using the iterates $\\theta_t$ rather than the averages $\\bar \\theta_t$. Unfortunately, achieving the optimal $\\mathcal{O}(1\/\\sqrt{t})$ rate for the vanilla SGD is difficult in general. The best available rate without significant restrictions on the step-size is given by \\citet{shamir2013stochastic}. 
In particular, we can adapt \\citet[Theorem~2]{shamir2013stochastic} to our setting in order to state the following lemma.\n\\begin{lem}\\label{lem:vanillaSGDconv} \nApply recursion \\eqref{eq:SgdUnnormalizedAdapt} for the computation of the iterates $(\\theta_t)_{t\\ge 1}$, choose the step-sizes $\\gamma_k = \\beta \/ \\sqrt{k}$ for $1\\leq k \\leq t$, where $\\beta > 0$, and let Assumptions \\ref{ass:BoundedGradient} and \\ref{ass:SupSupBoundPi} hold. Then, we have the inequality\n\\begin{align}\n{\\mathbb E}[R({\\theta}_t) - R ^\\star] \\leq \\left(\\frac{D^2}{\\beta \\sqrt{t}} + \\frac{\\beta d_\\theta c_R D_\\Pi}{\\sqrt{t} N} + \\frac{\\beta G^2_R}{\\sqrt{t}}\\right) (2 + \\log t),\n\\end{align}\nwhere $D := \\sup_{\\theta,\\theta' \\in \\Theta} \\|\\theta - \\theta'\\| < \\infty$. This in turn implies that\n{\\begin{align}\n{\\mathbb E}[\\rho({\\theta}_t) - \\rho^\\star] \\leq \n\\left(\\frac{D^2}{\\beta \\sqrt{t}} + \\frac{\\beta d_\\theta c_R D_\\Pi}{\\sqrt{t} N} + \\frac{\\beta G^2_R}{\\sqrt{t}}\\right)\\frac{(2 + \\log t)}{Z_\\pi^2}.\n\\end{align}}\n\\end{lem}}\n\\begin{proof}\nIt is straightforward to prove this result using \\citet[Theorem~2]{shamir2013stochastic} and the proof of Lemma~\\ref{lem:SGDconv}.\n\\end{proof}\n{The obtained rate is, in general, $\\mathcal{O}\\left( \\frac{\\log t}{\\sqrt{t}}\\right)$. This is known to be suboptimal and it can be improved to the {information-theoretical optimal} $\\mathcal{O}(1\/\\sqrt{t})$ rate by choosing a specific step-size scheduling, see, e.g., \\citet{jain2019making}. 
{However, in this case, the scheduling of $(\gamma_t)_{t\geq 1}$ depends directly on the total number of iterates to be generated, in such a way that the error $\mathcal{O}(1\/\sqrt{t})$ is guaranteed only for the {\em last} iterate, at the final time $t$.}}\n\nWe can extend Lemma \ref{lem:vanillaSGDconv} to obtain the following result.\n\n{\begin{thm}\label{thm:vanillaSGDAIS} \nApply recursion \eqref{eq:SgdUnnormalizedAdapt} for the computation of the iterates $(\theta_t)_{t\ge 1}$, choose the step-sizes $\gamma_k = \beta \/ \sqrt{k}$ for $1\leq k \leq t$, where $\beta > 0$, and let Assumptions \ref{ass:BoundedGradient} and \ref{ass:SupSupBoundPi} hold. If we construct the sequence of proposal distributions $(q_{{\theta}_t})_{t\geq 1}$ from these iterates, we obtain the following MSE bounds\n\begin{align}\n{\mathbb E}\left[\left((\varphi,\pi) - (\varphi,\pi_{{\theta}_t}^N)\right)^2\right] &\le \left(\n\t\frac{C_1}{\sqrt{t} N} + \frac{C_2}{\sqrt{t}N^2} + \n\t\frac{C_3}{\sqrt{t} N}\n\right)(2 + \log t) + \frac{C_4}{N},\n\label{eq:rateUnnormAIS-2}\n\end{align}\nwhere\n\begin{align*}\nC_1 &= \frac{c_\varphi D^2}{2 \beta Z_\pi^2}, \\\nC_2 &= \frac{c_\varphi \beta c_R d_\theta D_\Pi}{Z_\pi^2}, \\\nC_3 &= \frac{c_\varphi \beta G^2_R}{Z_\pi^2}, \\\nC_4 &= {c_\varphi \rho^\star},\n\end{align*}\nand $c_\varphi = 4\|\varphi\|_\infty^2$ are finite constants independent of $t$ and $N$.\n\end{thm}}\n\begin{proof}\nThe proof follows from Lemma~\ref{lem:vanillaSGDconv} with the exact same steps as in the proof of Theorem~\ref{thm:SGDAIS}.\n\end{proof}\n{Finally, it is also straightforward to adapt the bias result in Theorem~\ref{thm:SGDAISbias} to this case, which results in a similar bound. 
We skip it for space reasons and also because it has the same form as in Theorem~\\ref{thm:SGDAISbias} with an extra $\\log t$ factor.}\n\n\\section{Conclusions}\\label{sec:conc}\nWe have presented and analysed \\textit{optimised} parametric adaptive importance samplers and provided non-asymptotic convergence bounds for the MSE of these samplers. Our results display the precise interplay between the number of iterations $t$ and the number of samples $N$. In particular, we have shown that the optimised samplers converge to an optimal proposal as $t\\to\\infty$, leading to an asymptotic rate of $\\mathcal{O}(\\sqrt{\\rho^\\star\/N})$. This intuitively shows that the number of samples $N$ should be set in proportion to the minimum $\\chi^2$-divergence between the target and the exponential family proposal, as we have shown that the adaptation (in the sense of minimising $\\chi^2$-divergence or, equivalently, the variance of the weight function) cannot improve the error rate beyond $\\mathcal{O}(\\sqrt{\\rho^\\star\/N})$. The error rates in this regime may be dominated by how close the target is to the exponential family.\n\nNote that the algorithms we have analysed require constant computational load at each iteration and the computational load does not increase with $t$ as we do not re-use the samples in past iterations. Such schemes, however, can also be considered and analysed in the same manner. More specifically, in the present setup the computational cost of each iteration depends on the cost of evaluating $\\Pi(x)$.\n\nOur work opens up several other paths for research. One direction is to analyse the methods with more advanced optimisation algorithms. Another challenging direction is to consider more general proposals than the natural exponential family, which may lead to non-convex optimisation problems of adaptation. 
Analysing and providing guarantees for this general case would provide foundational insights for general adaptive importance sampling procedures. Also, as shown by \\citet{ryu2016convex}, similar theorems can also be proved for $\\alpha$-divergences.\n\nAnother related piece of work arises from variational inference \\citep{wainwright2008graphical}. In particular, \\citet{dieng2017variational} have recently considered performing variational inference by minimising the $\\chi^2$-divergence, which is close to the setting in this paper. In particular, the variational approximation of the target distribution in the variational setting coincides with the proposal distribution we consider within the importance sampling context in this paper. This also implies that our results may be used to obtain finite-time guarantees for the expectations estimated using the variational approximations of target distributions.\n\nFinally, the adaptation procedure can be modified to handle the non-convex case as well. In particular, the SGD step can be converted into a stochastic gradient Langevin dynamics (SGLD) step. The SGLD method can be used as a global optimiser when $\\rho$ and $R$ are non-convex and a global convergence rate can be obtained using the standard SGLD results, see, e.g., \\citet{raginsky2017non,zhang2019nonasymptotic}. Global convergence results for other adaptation schemes such as stochastic gradient Hamiltonian Monte Carlo (SGHMC) can also be achieved using results from nonconvex optimisation literature, see, e.g., \\citet{akyildiz2020nonasymptotic}.\n\\section*{Acknowledgements}\n\\\"O.~D.~A. is funded by the Lloyds Register Foundation programme on Data Centric Engineering through the London Air Quality project. This work was supported by The Alan Turing Institute for Data Science and AI under EPSRC grant EP\/N510129\/1. J.~M. 
acknowledges the support of the Spanish \\textit{Agencia Estatal de Investigaci\\'on} (awards TEC2015-69868-C2-1-R ADVENTURE and RTI2018-099655-B-I00 CLARA) and the Office of Naval Research (award no. N00014-19-1-2226).\n\n\n\n\n\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Statement of Results}\nIt has long been known that, given an ergodic invertible probability measure preserving system, a Rohlin tower may be constructed with base independent of a given partition of the underlying space(\\cite{Roh:52}, \\cite{Roh:65}). In \\cite{Alp:79}, meanwhile, S. Alpern proved a `multiple' Rohlin tower theorem (see \\cite{EP:97} for an easy proof) whose full statement we will not give, but which has the following corollary of interest: \n\n\\begin{thm}\\label{thm:alp}\nLet $N \\in \\ensuremath{\\mathbb N} $ and \\ensuremath{\\epsilon > 0\\;} be given. For any ergodic invertible measure-preserving transformation $T$ of a Lebesgue probability space $(X, \\mathcal B, \\mu)$, there exists a Rohlin tower of height $N$ with base $B$ and error set $E$ with $\\mu(E) < \\ensuremath{\\epsilon} $, so that $T(E) \\subset B$. \n\\end{thm}\n\nA {\\em Rohlin tower of height} $N$ {\\em with base} $B$ {\\em and error set} $E$ is characterized by the collection of sets \n$\\{B, TB, \\dots, T^{N-1}B, E\\}$ forming a partition of $X$. If in addition $T(E) \\subset B$, we shall say {\\bf Alpern Tower}.\nIt is our goal to show that for ergodic transformations on $(X, \\ensuremath{\\mathcal B} , \\mu)$, given a finite measurable partition $\\mathbb P$ of $X$, an Alpern tower may be constructed with base $B$ independent of $\\mathbb P$. Precisely:\n\n\\begin{bigthm}\n\\label{mainthm}\nLet $(X, \\mathcal B, \\mu)$ be a Lebesgue probability space, and suppose $\\mathbb P$ is a finite measurable partition of $X$. 
For any ergodic invertible measure-preserving transformation $T$ of $X$, $N \\in \\ensuremath{\\mathbb N} $, \nthere exists a Rohlin tower of height $N$ with base $B$ and error set $E$ such that $T(E) \\subset B$ and $B$ is independent of $\\mathbb P$. \n\\end{bigthm}\n\nWe do not specify the size of the error set; but the process of constructing our tower makes it clear that the error set may be made arbitrarily small. \n\n\n\\section{Proof of main result}\n\nFor the remainder of the paper, $(X, \\mathcal B, \\mu)$ will be a fixed Lebesgue probability space and $T:X \\to X$ will be an invertible ergodic measure-preserving transformation on $X$. All mentioned sets will be measurable and we will adopt a cavalier attitude toward null sets. In particular, ``partition'' will typically mean ``measurable partition modulo null sets''. \n\\begin{defn}\nBy a {\\em tower over B} we will mean a set $B \\subset X$, called the {\\em base}, and a countable partition $B = B_1 \\cup B_2 \\cup \\cdots$, together with their images $T^iB_j$, $0 \\le i < j$, such that the family $\\{T^iB_j : 0 \\le i < j\\}$ consists in pairwise disjoint sets. If this family partitions $X$, we will say that the tower is {\\em exhaustive}. \n\\end{defn}\n\nIf a tower over $B$ is exhaustive and $B = B_N \\cup B_{N+1}$, we shall speak of an {\\em exhaustive Alpern tower of height} $\\{N, N+1\\}$, as in such a case, \n$\\{B, TB, \\ldots, T^{N-1}B, E=T^N B_{N+1}\\}$ partitions $X$ with $T(E) \\subset B$. So \nwe may re-phrase Theorem 1 as: \\medskip\n\n\\noindent {\\em {\\bf Theorem \\ref{mainthm}:} Let $(X, \\mathcal B, \\mu)$ be a Lebesgue probability space and suppose $\\mathbb P$ is a finite measurable partition of $X$. For any ergodic invertible measure-preserving transformation $T$ of $X$, $N \\in \\ensuremath{\\mathbb N} $, one may find an exhaustive Alpern tower of height $\\{N, N+1\\}$ having base independent of $\\mathbb P$.} \\medskip\n\n\\noindent We require a lemma (and a corollary). 
\n\n\n\n\\begin{lem}\\label{lem:m}\nLet $M \\in \\ensuremath{\\mathbb N} $ and let $\\ensuremath{\\mathbb P} = \\{P_1, \\dots, P_t\\}$ be a partition of $X$ with $\\mu(P_i) > 0$ for each $i$. There exists a set $S$ of positive measure so that if $x \\in S$ with first return $n(x) = n$, say, then $|\\{x, Tx, \\dots, T^{n-1}x\\} \\cap P_i | \\ge M, 1 \\le i \\le t$. \n\\end{lem}\n\\proof For almost every $x$ we may find $K(x)$ so that for each $i$ between $1$ and $t$ we have $|\\{x, Tx, \\dots, T^{K(x) - 1}x\\} \\cap P_i| \\ge M$. Since almost all of $X$ is the countable union (over $k \\in \\ensuremath{\\mathbb N} $) of $\\{x: K(x) = k\\}$, there exists some fixed $K$ so that the set $A = \\{x: K(x) \\le K\\}$ has positive measure. If $C \\subset A$ has very small measure ($\\mu(C) < 1\/K$) then the average first-return time of $x \\in C$ to $C$ is $\\frac{1}{\\mu(C)} > K$, so we can find $S \\subset C$ with $\\mu(S) > 0$ so that $S, TS, \\dots, T^{K - 1}S$ are pairwise disjoint. \\hfill\\ensuremath{_\\blacksquare} \n\n\n\\begin{cor}\\label{cor:B}\n Let $M \\in \\ensuremath{\\mathbb N} $ and $\\ensuremath{\\mathbb P} = \\{P_1, \\dots, P_t\\}$ be a partition of $X$ with $\\mu(P_i) > 0$ for each $i$. There is a tower having base \n$S = S_{tM} \\cup S_{tM+1} \\cup \\cdots$ where for each $x \\in S_r$, $|\\{x, Tx, \\dots, T^{r - 1}x\\} \\cap P_i| \\ge M$ for all $1 \\le i \\le t$. \n\\end{cor}\n\\proof Let $S$, $K$ be as in Lemma \\ref{lem:m} and choose any $k \\ge K$. \\hfill\\ensuremath{_\\blacksquare} \\medskip\n\nWe turn now to the proof of Theorem \\ref{mainthm}. Fix a partition $\\ensuremath{\\mathbb P} = \\{P_1, \\dots, P_t\\}$, an arbitrary natural number $N$, and $\\epsilon >0$. Set $m_i = \\mu(P_i)$, and assume (without loss of generality) that $0 < m_1 \\le m_2 \\le \\dots \\le m_t$. Select and fix $M > \\frac{3N^3t}{m_1}$. Let $S$ be as in Corollary \\ref{cor:B} for this $M$; hence $S = S_{tM} \\cup S_{tM+1} \\cup \\cdots$. 
(Some $S_i$ may be empty, of course.) For each non-empty $S_R$, partition $S_R$ by $\\ensuremath{\\mathbb P} $-name of length $R$. (Recall that $x, y$ in $S_R$ have the same $\\ensuremath{\\mathbb P} $-name of length $R$ if $T^ix$ and $T^iy$ lie in the same cell of \\ensuremath{\\mathbb P} for $0 \\le i < R$.) Let $C$ be the base of one of the resulting columns; hence, every $x \\in C$ has the same $\\ensuremath{\\mathbb P} $-name of length $R$ (for some $R\\ge tM$), and the length $R$ orbit of each $x \\in C$ meets each $P_i$ at least $M$ times. \n\nPartition $C$ into pieces $C^{(1)}, C^{(2)}\\dots$, $C^{(t)}$ whose measures will be determined later. Then partition each $C^{(i)}$ into $N$ equal measure pieces, $C^{(i)} = C^{(i)}_1 \\cup C^{(i)}_2 \\cup \\dots \\cup C^{(i)}_N$. \n\nNow we fix $(R,C)$ and focus our attention on the height $R$ {\\em column} over a single $C^{(i)}$ and its height $R$ {\\em subcolumns} over $C^{(i)}_j$, $1\\leq j\\leq N$. We refer to the sets $T^rC^{(i)}$, $0 \\le r < R$, as {\\em levels} and to the sets $T^rC^{(i)}_j$ as {\\em rungs}. We are going to build a portion of $B$ by carefully selecting some rungs from the subcolumns under consideration. As we move through the various subcolumns, we need to have gaps of length $N$ or $N+1$ between selections. Now to specifics. We want \nto have our $\\ensuremath{C^{(i)}}$-selections form a ``staircase'' of height $N$ starting at level $N^2 - N$. That is, at height $(N-1)N$, the rung over $C^{(i)}_1$ is the only one selected; at height $N(N-1) + 1$, the rung over $C^{(i)}_2$ is the only one selected; etc., so that at height $N^2-1$, the rung over $C^{(i)}_N$ is the only one selected. \n\nThis is easy to accomplish. First, we select each base rung $C^{(i)}_j$, $j = 1, 2, \\dots, N$ (i.e., the rungs in the zeroth level). Over $C^{(i)}_1$, we then select $N-1$ additional rungs with gaps of length $N$; that is, we select the rungs at heights $N$, $2N, \\dots, (N-1)N$. 
Over $C^{(i)}_2$ we select $N-2$ rungs with gap $N$, then a rung with gap $N+1$. We continue in this fashion, choosing one less gap of length $N$ and one more of length $N+1$ in each subsequent subcolumn. In the last subcolumn (that over $C^{(i)}_N$) we are thus choosing rungs with gaps of length $N+1$ a total of $N-1$ times. See the left side of \nFigure \\ref{pic:bottom} for the case $N = 4$.\n\nNow we perform a similar procedure moving down from the top, so as to obtain a staircase starting at height $R-(N^2-1)$. \nNote that there are either $N$ or $N-1$ unselected rungs at the top of each subcolunm. See the right side of Figure \\ref{pic:bottom}. \n\n\n\n\n\\begin{figure}[hbtp]\n\n\\caption[Bottom of Tower for $N = 4$]\n {Bottom, Top of Tower for $N = 4$}\n\\setlength{\\unitlength}{.2in}\n\n\\begin{picture}(20,20)(0,0) \n\\label{pic:bottom}\n\\put(0, 0){$C^{(i)}_1 \\hspace{4mm} C^{(i)}_2 \\hspace{4mm} C^{(i)}_3 \\hspace{3.3mm} C^{(i)}_4$}\n\\multiput(0,2)(0,1){16}{\\line(1,0){1}}\n\\multiput(2,2)(0,1){16}{\\line(1,0){1}}\n\\multiput(4,2)(0,1){16}{\\line(1,0){1}}\n\\multiput(6,2)(0,1){16}{\\line(1,0){1}}\n\\put(4,18){\\vdots}\n\\linethickness{1mm}\n\\multiput(0,2)(0,4){4}{\\line(1,0){1}}\n\\multiput(2, 2)(0,4){3}{\\line(1,0){1}}\n\\put(2, 15){\\line(1,0){1}}\n\\multiput(4, 2)(0,4){2}{\\line(1,0){1}}\n\\multiput(4, 11)(0, 5){2}{\\line(1, 0){1}}\n\\put(4, 16){\\line(1,0){1}}\n\\multiput(6, 2)(0,5){4}{\\line(1,0){1}}\n\n\\linethickness{.2mm}\n\\put(13, 1.5){\\vdots}\n\\multiput(10,3)(0,1){15}{\\line(1,0){1}}\n\\multiput(12,3)(0,1){15}{\\line(1,0){1}}\n\\multiput(14,3)(0,1){15}{\\line(1,0){1}}\n\\multiput(16,3)(0,1){15}{\\line(1,0){1}}\n\\linethickness{1mm}\n\\multiput(10, 3)(0,5){3}{\\line(1,0){1}}\n\\multiput(12, 4)(0,5){3}{\\line(1,0){1}}\n\\multiput(14, 5)(0,5){2}{\\line(1,0){1}}\n\\put(14, 14){\\line(1,0){1}}\n\\multiput(16,6)(0,4){2}{\\line(1,0){1}}\n\\put(16,14){\\line(1,0){1}}\n\n\\end{picture}\n\n\\end{figure} \\medskip\n\nNext, we want to select rungs 
through the middle of the tower so as to iterate the staircase pattern all the way up, except that we will skip certain levels (i.e. not select any of their rungs), continuing the staircase pattern where we left off with the following rung. As we want to match stride with the staircase already selected at the top, the total number of levels skipped in the middle section will be constrained to a certain residue class modulo $N$, and as we want the selected rungs to form a portion of an Alpern tower of height $\\{N, N+1\\}$, we cannot skip any two levels with fewer than $N$ levels between them.\n\nSome terminology: an {\\em appearance} of $P_j$ in \\ensuremath{C^{(i)}}\\ is just a level of \\ensuremath{C^{(i)}}\\ that is contained in $P_j$. \nA {\\em selection} of $P_j$ is just a selected rung in a subcolumn of \\ensuremath{C^{(i)}}\\ that is contained in $P_j$. The {\\em net skips} of $P_j$ in the tower over \n\\ensuremath{C^{(i)}}\\ is defined as \\[ S_j(\\ensuremath{C^{(i)}}) = (\\# \\textnormal{ of appearances of $P_j$) } - (\\# \\textnormal{ of selections of $P_j$)}.\\] \nFor example, looking at Figure \\ref{pic:bottom}, one sees that $4$ zeroth level rungs are selected. So if the zeroth level belongs to $P_j$, the zeroth level \ncontribution to $S_j(\\ensuremath{C^{(i)}})$ is $-3$ (one appearance and 4 selections).\n\n \n \n\nLet $\\delta = 2(N-1)(N-2)$ and choose $\\gamma$ with \n\\begin{displaymath}{\\delta\\over m_1}+N > \\gamma \\ge {\\delta\\over m_1} \\quad \\mbox{and} \\quad (t-1)\\delta + \\gamma \\equiv R ~(\\hspace{-2.2ex}\\mod N).\n\\end{displaymath} \nOver \\ensuremath{C^{(i)}}, we skip a quantity of ``middle'' levels belonging to each $P_j$ (for $j \\neq i$) sufficient to ensure that $S_j(\\ensuremath{C^{(i)}})=\\delta$ for $j\\neq i$ and $S_i(\\ensuremath{C^{(i)}})=\\gamma$. (Note that $P_j$ cannot have been skipped more than $\\delta$ times in the outer rungs.) This is not delicate; one can just enact the selection greedily. 
That is to say, travel up the tower, beginning at level $N^2$, skipping rungs that belong to cells requiring additional skips whenever there's been no too-recent skip. Since each $P_j$ appears at least $M>{3N^3t\\over m_1}$ times, and we need only $\\gamma +(t-1)\\delta\\le {2N^2t\\over m_1}$ net skips, we'll find all the skips we need. \n\n\nWe have not specified the relative masses of the bases of the columns $\\ensuremath{C^{(i)}}$. Set\n\\begin{equation}\n\\label{bi} b_j = \\frac{\\mu(P_j)(\\gamma + (t-1)\\delta) - \\delta}{\\gamma - \\delta} \\; \n\\end{equation}\nand put $\\mu(\\ensuremath{C^{(i)}})= b_i \\mu(C)$, $1\\leq i\\leq t$. Our choice of $\\gamma$ ensures that $b_i\\ge 0$ for each $i$, and one easily checks that $\\sum b_i=1$, so this is coherent.\n\nLet $B_{C}$ be the union of the rungs selected from the columns over $C$ \n(this includes each of the rungs selected from each of the $N$ subcolumns over $\\ensuremath{C^{(i)}}$, $1\\le i\\le t$) \nand put $B=\\bigcup_{C} B_{C}$. (Here $C$ runs over the bases of the columns corresponding to every $\\ensuremath{\\mathbb P} $-name of length $R$ for every $R\\geq tM$.) It\nis clear that $B$ forms the base of an Alpern tower of height $\\{N, N+1\\}$. It remains to show that $B$ is independent of $\\ensuremath{\\mathbb P} $, which we will do by constructing a set $A$, disjoint from $B$, such that both $A$ and $A\\cup B$ can be shown to be independent of $\\ensuremath{\\mathbb P} $. \n \nHere is how $A$ is constructed. Consider again the tower over $\\ensuremath{C^{(i)}}$. This tower had $R$ levels and $RN$ rungs, some of which were selected for the base $B$. We now choose $\\gamma +(t-1)\\delta$ additional rungs for the set $A$. For each $j\\neq i$, $\\delta$ of these rungs should be contained in $P_j$, with the remaining $\\gamma$ contained in $P_i$. (We don't worry about gaps and whatnot; just choose any such collection of rungs disjoint from the family of $B$ selections.) 
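As a quick consistency check on the bottom staircase of Figure \ref{pic:bottom}, the selected heights in each subcolumn can be generated for general $N$. The sketch below uses our own conventions (heights are offsets within a column, with the base rung at height $0$) and reproduces the left-hand staircase for $N = 4$:

```python
def bottom_staircase(N):
    """Heights of the selected rungs in subcolumn j (j = 1..N): the base rung
    at height 0, then N - j gaps of length N followed by j - 1 gaps of
    length N + 1, so the last selection sits at height N(N-1) + (j-1)."""
    cols = []
    for j in range(1, N + 1):
        heights, h = [0], 0
        for _ in range(N - j):
            h += N
            heights.append(h)
        for _ in range(j - 1):
            h += N + 1
            heights.append(h)
        cols.append(heights)
    return cols

cols = bottom_staircase(4)
print(cols)  # [[0, 4, 8, 12], [0, 4, 8, 13], [0, 4, 9, 14], [0, 5, 10, 15]]
```

Every consecutive pair of selections in a subcolumn is separated by a gap of exactly $N$ or $N+1$, and the top selections form the staircase at heights $N(N-1), \dots, N^2-1$, as required.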
\nDenote the union of these additional rungs (in all of the columns over $\ensuremath{C^{(i)}}$, $1\leq i\leq t$) by $A_{C}$. Finally, put $A=\bigcup _{C} A_{C}$. \n\nThat $A\cup B$ is independent of $\ensuremath{\mathbb P} $ is a consequence of the fact that for each $\ensuremath{C^{(i)}}$, the number of appearances of $P_j$ in the column over $\ensuremath{C^{(i)}}$ is \nprecisely the number of $B$-selections from $P_j$ plus the number of $A$-selections from $P_j$. Accordingly, the relative masses of the cells of $\ensuremath{\mathbb P} $ restricted to $A\cup B$ are equal to the relative frequencies of the appearances of the cells of $\ensuremath{\mathbb P} $ in the column over $\ensuremath{C^{(i)}}$. Therefore, since the proportion of the column that is selected for $A\cup B$ is independent of $\ensuremath{C^{(i)}}$ (in fact is always equal to ${1\over N}$), and since the columns over the various $\ensuremath{C^{(i)}}$ exhaust $X$, $A\cup B$ is independent of $\ensuremath{\mathbb P} $ (in fact $\mu\big(P_j \cap (A\cup B)\big) = {1\over N} \mu(P_j)$, $1\leq j\leq t$).\n\nThat $A$ is independent of $\ensuremath{\mathbb P} $, meanwhile, is a consequence of equation (\ref{bi}). Fixing $C$ and recalling that $b_i = {\mu(\ensuremath{C^{(i)}})\over \mu(C)}$, \nthat there were $\delta$ \n$P_j$-rungs in the column over $\ensuremath{C^{(i)}}$ selected for $A$, $i\neq j$, and that there were $\gamma$ $P_i$-rungs in the column over $\ensuremath{C^{(i)}}$ selected for $A$, \nthe relative mass of $P_i$ among the $A$-selections in the tower over $C$ is \n\[ r_i = \frac{b_i \gamma + (1-b_i)\delta}{\gamma + (t-1)\delta} .\]\nBut, solving for $\mu(P_i)$ in equation (\ref{bi}), one gets that\n\[ \mu(P_i) = \frac{b_i \gamma + (1-b_i)\delta}{\gamma + (t-1)\delta} \]\nas well. So the intersection of $A$ with the column over $C$ is independent of $\ensuremath{\mathbb P} $. 
That this is true for every $C$ gives independence of $A$ from $\ensuremath{\mathbb P} $ simpliciter. \hfill\ensuremath{_\blacksquare} \n\n\n\n\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}\n\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }\n\providecommand{\MRhref}[2]{%\n \href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\providecommand{\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{\bf Introduction}\n\bigskip\n\nDenote by $S^{1} = \mathbb{R}\/\mathbb{Z}$ the circle and $p :\n\mathbb{R}\longrightarrow S^{1}$ the canonical projection. Let $f$\n be an orientation preserving homeomorphism of $S^{1}$. The homeomorphism $f$ admits a lift $\widehat{f} :\n\mathbb{R}\longrightarrow \mathbb{R}$ that is an increasing\nhomeomorphism of $\mathbb{R}$ such that $p\circ\widehat{f} =\nf\circ p$. Conversely, the projection of such a homeomorphism of\n$\mathbb{R}$ is an orientation preserving homeomorphism of\n$S^{1}$. Let $x\in S^{1}$. We call the \emph{orbit} of $x$ by $f$\nthe subset $O_{f}(x) = \{f^{n}(x): n\in\mathbb{Z} \}$. Historically, the study of the dynamics of \ncircle homeomorphisms was\ninitiated by Poincar\'e (\cite{hP}, 1886), who introduced the\nrotation number of a homeomorphism $f$ of $S^{1}$ as $\rho (f) =\n\underset{n\to +\infty}{\lim}\frac{\widehat{f}^{n}(\widehat{x}) -\n\widehat{x}}{n}~(\textrm{mod } 1)$, where $\widehat{x}\in\n\mathbb{R}$ such that $p(\widehat{x})= x$. Poincar\'e shows that\nthis limit exists and depends neither on $x$ nor on the lift\n$\widehat{f}$ of $f$. We say that $f$ is semi-conjugate to the\nrotation $R_{\rho(f)}$ if there exists an orientation preserving\nsurjective continuous map $h: S^{1}\longrightarrow S^{1}$ of degree\none such that $h\circ f = R_{\rho(f)}\circ h$.\n\n{\it Poincar\'e's theorem.} Let $f$ be a homeomorphism of $S^{1}$\nwith irrational rotation number $\rho(f)$. 
Then $f$ is\nsemi-conjugate to the rotation $R_{\rho(f)}$.\n\nA natural question is whether the semi-conjugation $h$ could be\nimproved to be a conjugation, that is, $h$ to be a homeomorphism. In\nthis case, we say that $f$ is topologically conjugate to the\nrotation $R_{\rho(f)}$. In this direction, Denjoy (\cite{aD32})\nproves the following:\n\medskip\n\n{\it Denjoy's theorem \cite{aD32}}. Every $C^{2}$-diffeomorphism $f$\nof $S^{1}$ with irrational rotation number $\rho(f)$ is\ntopologically conjugate to the rotation $R_{\rho(f)}$.\n\medskip\n\nDenjoy asked whether or not $C^{2}$-diffeomorphisms $f$ of $S^{1}$\nare ergodic with respect to the Lebesgue measure $m$ ($f$ is said to\nbe ergodic with respect to $m$ if any $f$-invariant measurable set\n$A$ has measure $m(A)$ equal to $0$ or $1$). Simultaneously, Herman\nand Katok gave a positive answer to this question:\n\medskip\n\n{\it Herman-Katok's theorem} (\cite{yKdO89}, \cite{bHaK95}). Every\n$C^{2}$-diffeomorphism $f$ with irrational rotation number is\nergodic with respect to the Lebesgue measure $m$.\n\medskip\n\nIt is well known that a homeomorphism $f$ of $S^{1}$ with\nirrational rotation number preserves a unique normalized measure on\n$S^{1}$, denoted by $\mu_{f}$. If $\widehat{h}$ is the lift of\n$h$, by taking $\widehat{h}(0) = 0$, the conjugating\nhomeomorphism $h$ is unique and related to $\mu_{f}$ by\n$h(x)=p(\mu_{f}([0,x])) \in S^{1}$ for $x \in S^{1}$. Uniqueness of\n$\mu_{f}$ implies that $\mu_{f}$ is either singular or absolutely\ncontinuous with respect to $m$; in the second case, $h$ is an\nabsolutely continuous function. 
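Poincar\'e's defining limit is easy to approximate numerically. In the sketch below, the lifted map and its parameters are our own toy choices: a lift from the standard Arnold family, $\widehat f(x) = x + \Omega + \frac{\epsilon}{2\pi}\sin(2\pi x)$, which is increasing (hence the lift of an orientation preserving circle homeomorphism) for $0 \le \epsilon < 1$.

```python
import math

def rotation_number(lift, n=100000, x0=0.0):
    """Approximate rho(f) = lim_n (hat f^n(x) - x) / n  (mod 1) from a lift."""
    x = x0
    for _ in range(n):
        x = lift(x)
    return ((x - x0) / n) % 1.0

# A pure rotation: the estimate recovers the angle.
print(rotation_number(lambda x: x + 0.25))  # -> 0.25

# A nonlinear lift from the Arnold family, Omega = (sqrt(5) - 1) / 2, eps = 0.2.
omega = (math.sqrt(5) - 1.0) / 2.0
eps = 0.2
lift = lambda x: x + omega + eps * math.sin(2.0 * math.pi * x) / (2.0 * math.pi)
r = rotation_number(lift)
print(r)  # close to omega: since |hat f(x) - x - omega| <= eps/(2 pi), the
          # rotation number lies within eps/(2 pi) of omega
```

The independence of the limit from $x$ can also be checked empirically by varying `x0`; the estimates agree up to the $\mathcal{O}(1/n)$ truncation error.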
Recall that $\\mu_{f}$ is said to be\n\\textit{singular} with respect to the Lebesgue measure $m$ on\n$S^{1}$ if there exists a measurable subset $E$ of $S^{1}$ such that\n$\\mu_{f}(E) = 1$ and $m(E) = 0$.\n In fact, if $\\mu_{f}$ is absolutely continuous with respect to\n the Lebesgue measure $m$ and $f$ is a $C^{2}$-diffeomorphism,\n $\\mu_{f}$ is necessarily equivalent to $m$ as a consequence of Herman-Katok's theorem\n above (i.e. $m$ is absolutely continuous with respect to $\\mu_{f}$\n and conversely).\n\\medskip\nIn the sequel we deote by $\\mathbb{R}^{\\ast}=\\mathbb{R}\\backslash\n\\{0\\}$ and $\\mathbb{N}^{\\ast}=\\mathbb{N}\\backslash \\{0\\}$.\n\n\\textit{Definition}. A real number $\\alpha\\in ]0, 1[$ is called\n\\textit{Diophantine with exponent $\\delta\\geq 0$} if there is a\nconstant $c(\\alpha)>0$ such that\n$$ \\ (1) \\qquad \\qquad \\arrowvert\\alpha -\\dfrac{p}{q}\\arrowvert\\geq \\dfrac{c(\\alpha)}{q^{2+\\delta}}\\qquad \\textrm{ for any }\\ \\dfrac{p}{q}\\in \\mathbb{Q}.$$\n\\medskip\n\\\n\\\\\nA number that is neither rational nor Diophantine is called a\n\\textit{Liouville number}. \\\n\\\\\n Every real number $\\alpha\\in ]0, 1[$ has a continued fraction expansion\nrepresented by\n$$\\alpha = \\frac{1}{a_{1}+ \\frac{1}{a_{2}+ \\dots}}: =\n[a_{1},a_{2},\\dots,a_{n},\\dots]$$ \\\n\\\\\nwhere $a_{m} \\in \\mathbb{N}^{\\ast}$, $m\\in \\mathbb{N}^{*}$ are\ncalled \\textit{partial quotients} of $\\alpha$. When ($a_{m})_{m\\in\n\\mathbb{N}}$ is bounded, $\\alpha$ is said to be of \\textit{bounded\ntype}. 
This is equivalent to the fact that $(1)$ holds with $\delta\n= 0$.\n\medskip\n\nThe problem of smoothness of the conjugacy $h$ of smooth\ndiffeomorphisms to rotations is now very well understood (see for\ninstance \cite{mH79}, \cite{yKdO89}, \cite{kmKyS89}, \cite{KT09},\n\cite{jcY84}).\n\n\n\n\medskip\n\n\n\nWe refer the reader to the books \cite{aN10} and \cite{wDvS91} for a\nthorough account on circle homeomorphisms.\n\smallskip\n\nThe situation is more complicated for circle homeomorphisms with\nbreak points or, shortly, class $P$-homeomorphisms (see the\ndefinition below). This class is known to satisfy the conclusion of\nDenjoy's theorem (Corollary 2.6) (see also \cite{yKdO89};\n\cite{mH79}, chapter VI) and, with additional regularity, of\nHerman-Katok's theorem (see \cite{kaA06}). However,\nKatznelson-Ornstein's theorem \cite{yKdO89} cannot be extended in\ngeneral to class $P$. The study of the regularity of the invariant\nmeasures of class $P$-homeomorphisms then arises naturally.\n\bigskip\n\n\textbf{Class $P$-homeomorphisms} \\\nThe following definition is due to M.R. Herman.\n\begin{defn}(see \cite{mH79}, p.74) An orientation preserving homeomorphism $f$ of $S^{1}$ is called a \emph{class $P$}-homeomorphism if it is differentiable\nexcept at finitely many points, the so-called \textit{break points}\nof $f$, at which left and right derivatives (denoted, respectively,\nby $\textrm{Df}_{-}$ and $\textrm{Df}_{+}$) exist and such that the\nderivative $\textrm{Df}: S^{1} \longrightarrow \mathbb{R}_{+}^{*}$\nhas the following properties:\n\begin{itemize}\n\item [\textbullet] there exist two constants $01$. 
Assume that:\n\begin{itemize}\n \item [(i)] The rotation numbers $\rho(f_{i})$ of $f_{i},~i=1,2$\nare irrational of bounded type and coincide\n$\rho(f_{1})=\rho(f_{2})=\rho, ~\rho \in \mathbb{R}\backslash\n\mathbb{Q}$,\n \item [(ii)] $\sigma_{f_{1}}(a_{1}) \notin\n\{\sigma_{f_{2}}(a_{2}),~\sigma_{f_{2}}(b_{2})\}$,\n \item [(iii)]\n$\sigma_{f_{1}}(a_{1})~\sigma_{f_{1}}(b_{1})=\sigma_{f_{1}}(a_{2})~\sigma_{f_{1}}(b_{2})$,\n \item [(iv)] The break points of $f_{i},~i=1,2$ do not lie on the\nsame orbit,\n\end{itemize}\nThen the map $h$ conjugating $f_{1}$ to $f_{2}$ is singular.\n\end{cor}\n\medskip\n\n\begin{cor}\label{c:15}\nLet $f$ and $g$ satisfy the assumptions of the main theorem. Then:\n\begin{itemize}\n \item [(i)] If $g$ does not have the $(D)$-property and $f$ has\nthe $(D)$-property then the map conjugating $f$ to $g$ is\nsingular.\n \item [(ii)] If $f$ and $g$ have the $(D)$-property and $D^{2}f,~D^{2}g \in\nL^{p}(S^{1})$ for some $p>1$ then the map conjugating $f$ to $g$ is\nabsolutely continuous.\n \end{itemize}\n\end{cor}\n\medskip\n\nIn particular:\n\medskip\n\n\begin{cor} $($Adouani-Marzougui's theorem $($\cite{am12}, Theorem B$)$$)$\label{c:16}\nLet $f$ satisfy the assumptions of the main theorem. Then:\n\begin{itemize}\n \item [(i)] If $f$ does not have the $(D)$-property then the invariant measure $\mu_{f}$ is singular with respect\nto the Lebesgue measure $m$.\n \item [(ii)] If $f$ has the $(D)$-property and $D^{2}f \in\nL^{p}(S^{1})$ for some $p>1$, then the measure $\mu_{f}$ is\nequivalent to the Lebesgue measure $m$.\n \end{itemize}\n\end{cor}\n\medskip\n\n\begin{cor} $($\cite{am12}, Corollary 1.5$)$ \label{c:17} Let $f\in \mathrm{PL}(S^{1})$\nhave irrational rotation number $\alpha$ of bounded type. 
Then the following are equivalent:\n\begin{itemize}\n \item [(i)] $f$ has the ($D$)-property\n\n \item [(ii)] The measure $\mu_{f}$ is equivalent to the Lebesgue measure $m$.\n\end{itemize}\n\end{cor}\n\medskip\n\n\textbf{Remark.} The main Theorem and in particular Corollary \ref{c:17} cannot be extended to rotation number not of bounded type, since, very recently,\nTeplinsky \cite{aT15} constructed an example of a (PL) circle homeomorphism with $4$ non-trivial break points\nlying on different orbits that has invariant measure equivalent to the Lebesgue measure $m$.\nThe rotation number for such an example can be chosen either Diophantine or Liouvillean, but not of bounded type.\n\n\section{\bf Notations and preliminary results}\n\bigskip\n\n \textit{\bf 2.1. Dynamical partitions}. Let $f$ be a homeomorphism of $S^{1}$ with irrational rotation number $\alpha = \rho(f)$.\nWe identify $\alpha$ with its lift $\widehat{\alpha}$ in $]0, 1[$. Let\n$(a_{n})_{n\in \mathbb{N}^{\ast}}$ be the partial quotients of\n$\alpha$ in the continued fractions expansion. For $n\in\n\mathbb{N}^{\ast}$, the fractions $[a_{1},a_{2},\dots,a_{n}]$ are\nwritten in the form of irreducible fractions $\frac{p_{n}}{q_{n}}$.\nThe sequence $\frac{p_{n}}{q_{n}}$ converges to $\alpha$ and we say\nthat $\frac{p_{n}}{q_{n}}$ are \textit{rational approximations} of\n$\alpha$. Their denominators $q_{n}$ satisfy the following recursion\nrelation:\n$$q_{n} = a_{n}q_{n-1}+q_{n-2}, ~n\geq 2, ~q_{0}=1, ~q_{1}=a_{1}.$$\n\nLet $x_{0}\in S^{1}$ be fixed. 
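The recursion for the denominators $q_n$ stated above is straightforward to check numerically. The sketch below is illustrative only (the example number $\alpha = [1,1,1,\dots] = (\sqrt 5 - 1)/2$, of bounded type, is our own choice); it computes the convergents $p_n/q_n$ and verifies the classical bound $|\alpha - p_n/q_n| < 1/q_n^2$:

```python
import math
from fractions import Fraction

def convergents(partial_quotients):
    """Convergents p_n / q_n of alpha = [a_1, a_2, ...] in ]0, 1[, computed via
    p_n = a_n p_{n-1} + p_{n-2} and q_n = a_n q_{n-1} + q_{n-2}."""
    p_prev, p = 1, 0   # p_{-1} = 1, p_0 = 0
    q_prev, q = 0, 1   # q_{-1} = 0, q_0 = 1
    out = []
    for a in partial_quotients:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        out.append(Fraction(p, q))
    return out

alpha = (math.sqrt(5) - 1.0) / 2.0      # = [1, 1, 1, ...], of bounded type
cs = convergents([1] * 12)
print([c.denominator for c in cs[:6]])  # -> [1, 2, 3, 5, 8, 13] (Fibonacci numbers)
```

For this bounded-type $\alpha$ the denominators grow geometrically (the Fibonacci numbers), which is the slowest possible growth among continued fraction expansions.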
Denote by:\n\n$$\\Delta_{0}^{(n)}(x_{0}) = \\begin{cases}\n{[x_{0}, f^{q_{n}}(x_{0})]}, &\\text{ if $n$ is even } \\\\\n{[f^{q_{n}}(x_{0}), x_{0}]}, &\\text{ if $n$ is odd }\n\\end{cases}$$\n\\\n\\\\\n$$\\Delta_{i}^{(n)}(x_{0}) : \\ = f^{i}\\left(\\Delta_{0}^{(n)}(x_{0})\\right), \\ i\\in \\mathbb{Z}$$\n\\medskip\n\n\\textit{In all the sequel, we deal with the case $n$ odd} (the case\n$n$ even is obtained by reversing the orientation of $S^{1}$).\n\\smallskip\n\nWe have then:\n\\medskip\n\n\\begin{lem}[See \\cite{yS94}]\\label{l:23} The segments $\\Delta_{i}^{(n-1)}(x_{0}) = f^{i}\\left(\\Delta_{0}^{(n-1)}(x_{0})\\right), \\ 0\\leq i<q_{n}$, and $\\Delta_{j}^{(n)}(x_{0}) = f^{j}\\left(\\Delta_{0}^{(n)}(x_{0})\\right), \\ 0\\leq j<q_{n-1}$, cover the circle $S^{1}$ and have mutually disjoint interiors. The collection of these segments is denoted by $\\xi_{n}(x_{0})$ and is called the $n$-th dynamical partition of $S^{1}$ associated to $x_{0}$.\n\\end{lem}\n\\medskip\n\nDenote by $V$ the total variation of $\\log \\textrm{Df}$ over $S^{1}$. Denjoy's inequality yields the following estimate.\n\\medskip\n\n\\begin{cor}\\label{c:24} Let $f\\in \\mathcal{P}(S^{1})$ with irrational rotation number. There exists a constant $C>0$ such that for any $x_{0}\\in S^{1}$, $n\\geq 1$ and any element\n$\\Delta^{(n)}$ of the dynamical partition $\\xi_{n}(x_{0})$, we have\n\\; $m(\\Delta^{(n)})\\leq C\\lambda^{n}$, where $\\lambda =\n(1+e^{-V})^{-\\frac{1}{2}}< 1$.\n\\end{cor}\n\\medskip\n\nFrom Corollary \\ref{c:24} it follows that every orbit of every $x\\in\nS^{1}$ is dense in $S^{1}$ and this implies the following\ngeneralization of the classical Denjoy theorem:\n\\medskip\n\n\\begin{cor}[Denjoy's theorem: the class P] Let $f\\in \\mathcal{P}(S^{1})$ with irrational rotation number $\\alpha = \\rho(f)$.\nThen $f$ is topologically conjugate to the rotation $R_{\\alpha}$.\n\\end{cor}\n\\medskip\n\nIn the following lemma we compare the lengths of the iterates of\ndifferent intervals.\n\\medskip\n\n\\begin{lem}\\label{l:31} Let $f\\in \\mathcal{P}(S^{1})$ with irrational rotation number $\\alpha = \\rho(f)$.\nLet $n\\in \\mathbb{N}^{*}$ and $z_{1}\\in S^{1}$. Set $z_{2} =\nf^{q_{n-1}}(z_{1}), \\ z_{3} = f^{q_{n-1}}(z_{2})$. 
Then for any\nsegments $K_{1}, \\ K_{2}\\subset [z_{1},z_{3}]$, one has:\\\\\n\n$$e^{-2V}\\dfrac{m(K_{1})}{m(K_{2})}\\leq\n\\dfrac{m(f^{j}(K_{1}))}{m(f^{j}(K_{2}))}\\leq\ne^{2V}\\dfrac{m(K_{1})}{m(K_{2})}$$ for all \\; $j=-q_{n}, \\dots\n,0,\\dots, q_{n}$.\n\\end{lem}\n\\medskip\n\n\\begin{proof} If $j= q_{n}$, Lemma \\ref{l:31} is a consequence of Denjoy's inequality.\nWe suppose that $0\\leq j<q_{n}$. The intervals $f^{s}([z_{1},z_{3}]), \\ 0\\leq s<j$, cover each point of $S^{1}$ at most twice, so the total variation of $\\log \\textrm{Df}^{j}$ on $[z_{1},z_{3}]$ is bounded by $2V$, and the estimate follows from the mean value theorem. The case of negative $j$ is analogous.\n\\end{proof}\n\\medskip\n\n\\begin{defn} Let $R>1$ be a real number and $x_{0}\\in S^{1}$. We say that a triple\n$(z_{1},z_{2},z_{3})$~ of~ $S^{1}$ ($z_{1}\\prec z_{2}\\prec z_{3})$\n satisfies the conditions $(a)$~~\\textrm{and}~~$(b)$ for the point $x_{0}$ and\nthe constant $R$ if:\\\\\n\\medskip\n\n\\begin{itemize}\n \\item [(a)]: $R^{-1}\\leq\\dfrac{m([z_{2},~z_{3}])}{m([z_{1},~z_{2}])}\\leq R$\n\\\n\\\\\n\\item [(b)]: $\\underset{1\\leq i\\leq 3}{\\max}~m([x_{0},z_{i}])\\leq R m([z_{1},z_{2}])$\n\\end{itemize}\n\\end{defn}\n\\medskip\n\nWe call two intervals in $S^{1}$ $R$-\\textit{comparable} if the\nratio of their lengths is in $[R^{-1}, R]$.\n\\bigskip\n\\medskip\n\n\\textit{\\bf 2.4. Reduction.} In this subsection, we will reduce any\nhomeomorphism $f\\in \\mathcal{P}(S^{1})$ with several break (or\nnon-break)\n points to one whose break points lie on pairwise distinct orbits.\n\\medskip\n\n\\begin{defn} Let $c\\in C(f)$. A \\emph{maximal $f$-connection} of $c$ is a segment\n\\ $[f^{-p}(c),\\dots, f^{q}(c)]:= \\{f^{s}(c): \\ -p \\leq s\\leq q\\}$\nof the orbit $O_{f}(c)$ which contains all the break points of $f$\nlying on $O_{f}(c)$ and such that $f^{-p}(c)$ (resp.
last) break point of $f$ on\n$O_{f}(c)$.\n\\end{defn}\n\\medskip\n\\\n\\\\\n We have the following properties:\n\\medskip\n\\\n\\\\\n- Two break points of $f$ are on the same maximal $f$-connection, if\nand only if, they are on the same orbit.\n\n- Two distinct maximal $f$-connections are disjoint.\n\\bigskip\n\nDenote by \\\n\\\\\n\\begin{itemize}\n\n\n \\item [-] $ M_{i}(f) = [c_{i},\\dots, f^{N_{i}}(c_{i})], \\ (N_{i}\\in \\mathbb{N}),$ the maximal $f$-connections of\n$c_{i}\\in C(f)$, ($0\\leq i\\leq p$).\n\n \\item [-] M$(f) = \\coprod _{i=0}^{p}M_{i}(f)$.\n\\end{itemize}\n\\medskip\nSo, we have the decomposition: $C(f) = \\coprod_{i=0}^{p} C_{i}(f)$\nwhere, $C_{i}(f) = C(f)\\cap M_{i}(f), \\ 0\\leq i\\leq p$. We also\nhave $$\\underset{d\\in C_{i}(f)}\\prod \\sigma_{f}(d) = \\underset{d\\in\nM_{i}(f)}\\prod \\sigma_{f}(d).$$\n\\medskip\n\\\n\\\\\n\\begin{prop}[\\cite{am15}, Theorem 2.1]\\label{p:623}\nLet $f\\in \\mathcal{P}(S^{1})$ with irrational rotation number, and\nlet $\\big(k_{0},\\dots,k_{p}\\big)\\in \\mathbb{Z}^{p+1}$. Then there\nexists a piecewise quadratic homeomorphism $K\\in \\mathcal{P}(S^{1})$\nsuch that $F:= K \\circ f \\circ K^{-1}\\in \\mathcal{P}(S^{1})$ with\n$C(F) \\subset \\{K(f^{k_{i}}(c_{i}))= F^{k_{i}}(K(c_{i})); i=0,1,\n\\dots, p\\}$ and such that $\\sigma_{F}(F^{k_{i}}(K(c_{i}))) =\n\\pi_{s,O_{f}(c_{i})}(f),~i=0,1, \\dots, p$.\n\\end{prop}\n\\medskip\n\n\\begin{cor}[\\cite{am15}, Corollary 2.3]\\label{c:21} Let $f\\in \\mathcal{P}(S^{1})$ with irrational rotation number.\nSuppose that $f$ satisfies the (KO) condition. 
Then, there exists a\npiecewise quadratic homeomorphism $K\\in \\mathcal{P}(S^{1})$ such\nthat $F = K\\circ f \\circ K^{-1}\\in \\mathcal{P}(S^{1})$ with $C(F)\\subset\\{K(c_{0}),\\dots,K(c_{p})\\}$, where $c_{0},\\dots, c_{p}\\in C(f)$\nare on pairwise distinct orbits.\n\\end{cor}\n\\medskip\n\n\nIn particular, we have:\n\\medskip\n\n\\begin{cor}\\label{c:22} Let $f$, $K$ and $F$ be as in Corollary \\ref{c:21}.\n\\begin{itemize}\n\\item [(i)] If $f$ does not satisfy the ($D$)-property, then there exists\n$0\\leq i\\leq p$ such that $K(c_{i})$ is the unique break point of\n$F$ in its orbit.\n \\item [(ii)] If $f$ satisfies the ($D$)-property then $F$ is a $C^{1}$-diffeomorphism with $\\mathrm{DF}$ absolutely continuous on\n$S^{1}$.\n\\end{itemize}\n\\end{cor}\n\\medskip\n\\bigskip\n\n\\begin{prop}\\label{p:23} Let $f, ~g\\in \\mathcal{P}(S^{1})$ have the same irrational rotation number and suppose that the break points of\n$f$ (resp. $g$) belong to pairwise distinct $f$-orbits (resp.\n$g$-orbits). Let $h$ be the conjugating homeomorphism from $f$ to\n$g$. 
Then there exist an integer $q\\geq p$, points\n$c_{0},c_{1},\\dots,c_{q}$ and $d_{0},d_{1},\\dots,d_{q}$ belonging to pairwise distinct $f$-orbits and $g$-orbits respectively,\nsuch that $C(f)\\subset \\{c_{0},c_{1},\\dots,c_{q}\\}$,\n$C(g)\\subset \\{d_{0},d_{1},\\dots,d_{q}\\}$, and a homeomorphism $u$\nof $S^{1}$ such that $G: = u \\circ f \\circ u^{-1}\\in\n\\mathcal{P}(S^{1})$ has break points belonging to pairwise distinct\n$G$-orbits and satisfies\n\\\\\n\\begin{itemize}\n\n \\item [(i)] $C(G)\\subset \\{u(c_{i}): ~i=0,1, \\dots, q\\}$.\n \\item [(ii)] $\\sigma_{G}(u(c_{i})) = \\sigma_{g}(d_{i})~,~i=0,1, \\dots, q$.\n \\item [(iii)] $\\pi_{s}(G)=\\pi_{s}(g)$.\n \\item [(iv)] $h$ is singular if and only if so is $u$.\n\\end{itemize}\n\\end{prop}\n\\medskip\n\\\n\\\\\n\\textit{From now on we denote by \\ \\ $B:= \\{c_{0},\\dots, c_{q}\\}$} and $c_{0}=c$.\n\\medskip\n\nIn the sequel, we may assume that $f, ~g\\in \\mathcal{P}(S^{1})$ have\nthe same irrational rotation number $\\alpha$ and satisfy the\nfollowing:\n\\medskip\n\n- The points of $B$ belong to pairwise distinct $f$-orbits.\n\n- The break points of $g$ belong to pairwise distinct $g$-orbits.\n\n- $C(f)\\subset B$.\n\n- The maps $f, ~g$ satisfy the Katznelson-Ornstein (KO) condition.\n\n- The conjugating map $h$ is such that $C(g)\\subset h(B)$.\n\\bigskip\n\\medskip\n\n\\section{\\bf Primary Cells}\n\\medskip\n\\medskip\n\nLet $x_{0}\\in S^{1}$ and $f\\in \\mathcal{P}(S^{1})$\nwith irrational rotation number $\\alpha =\\rho(f)$. Let $c\\in B$\nand let $n$ be an odd integer. By Lemma \\ref{l:23}, either $c\\in\n\\Delta^{(n-1)}_{i_{n}(c)}(x_{0})$ for some $0\\leq i_{n}(c) < q_{n}$\nor $c\\in\\Delta^{(n)}_{i_{n}(c)}(x_{0})$ for some $0\\leq i_{n}(c)<\nq_{n-1}$. 
Set\n\n\\[\\ y_{2} = f^{-i_{n}(c)}(c), \\ y_{1} =\nf^{-q_{n-1}}(y_{2}), \\ y_{3} = f^{q_{n-1}}(y_{2}).\\]\n\\smallskip\n\nNotice that $y_{1},y_{2}$ and $y_{3}$ are defined with respect to\n$f,x_{0},c$ and that the number $i_{n}(c)$ depends on $x_{0}$.\n\\medskip\n\nLet $\\delta > 0$ and let $U_{\\delta}(x_{0})$ be a\n$\\delta$-neighbourhood of $x_{0}$.\n\\medskip\n\n\\begin{prop}$($cf. \\cite{am12}, Proposition 3.1$)$ \\label{p:41} Under the notations above, there exists $N = N(x_{0}, \\delta)\\in \\mathbb{N}$ such that for all $n\\geq N$, there is\na triple $(y_{1},y_{2},y_{3})=(y_{1}(n),y_{2}(n),y_{3}(n))_{n\\geq\nN}$ with the following properties:\n\\medskip\n\n \\begin{itemize}\n \\item [(c-0)] $\\big(y_{1},y_{2},y_{3}\\big) ~\\big(\\mathrm{resp}.\n~ (f^{q_{n}}(y_{1}),f^{q_{n}}(y_{2}),f^{q_{n}}(y_{3}))\\big)_{n\\geq N}\\subset\nU_{\\delta}(x_{0})$\\\\\n\n\\item [(c-1)] $~y_{2} \\in \\Delta^{(n-1)}_{0}(x_{0})~\\mathrm{ or }~y_{2}\\in \\Delta^{(n)}_{0}(x_{0})$\\\\\n\n \\item [(c-2)] $~m\\left(f^{j}([y_{1},y_{3}])\\right)\\leq K\\lambda^{n}$, for every $0\\leq j<q_{n}$.\n\\end{itemize}\n\\end{prop}\n\\medskip\n\nThe triple $(y_{1},y_{2},y_{3})$ is called a \\textit{primary cell} associated to $(f,x_{0},c,\\mathbb{N}_{0})$.\n\\medskip\n\nFor $\\gamma>0,$ set\n\\medskip\n\n\\textbullet ~ $~l_{n}: = m(\\Delta^{(n)}(y_{1}))$\\\\\n\n\\textbullet ~ $V^{(f)}_{n,\\gamma}(a):=[a-\\gamma ~l_{n-1},~a+\\gamma\n~l_{n-1}]$ \\\\\n\n \\textbullet ~ $k_{n}(d)=\\begin{cases}\n\n i_{n}(d), & \\textrm{if}~d \\in \\{c\\} \\cup Q_{2}(\\mathbb{N}_{0}) \\\\\n j_{n}(d), & \\textrm{if}~d \\in Q_{3}(\\mathbb{N}_{0})\n \\end{cases}$\n\\medskip\n\\medskip\n\n\\begin{prop}\\label{p:47} There exists a positive constant $\\gamma_{0}>0$ such that:\n\\medskip\n\n\\begin{enumerate}\n \\item For every $d\\in E(c_{1},c)$ and every $0<\\gamma< \\gamma_{0}$, there exists\n$n_{\\gamma} \\in \\mathbb{N}_{0}\n $ such that for any $n \\in \\mathbb{N}_{0},~n \\geq n_{\\gamma}$ there exists a unique integer $0 \\leq k_{n}(d)<q_{n}$ such that $f^{-k_{n}(d)}(d)\\in V^{(f)}_{n,\\gamma}(y_{2})$.\n \\item There exists an infinite subset $M_{2}$ of $\\mathbb{N}_{0}$ such that for every $d\\in Q_{1}(M_{2})$ and every $n \\in M_{2}$, the points $f^{-i_{n}(d)}(d)$ and $f^{-j_{n}(d)}(d)$ lie in $S^{1}\\backslash V^{(f)}_{n,\\gamma_{0}}(y_{2})$.\n\\end{enumerate}\n\\end{prop}\n\\medskip\n\n\\begin{proof}\nAssertion (1): Let $d \\in \\{c\\} \\cup Q_{2}(\\mathbb{N}_{0})$. Then $\\lim_{n \\to +\\infty}~t_{n}(d)=0$, which is equivalent to the assertion:\n$$\\forall ~\\gamma >0,~\\exists ~ n_{\\gamma} \\in \\mathbb{N}_{0}\n~\\textrm{such that for every }~n \\geq n_{\\gamma},~ n \\in\n\\mathbb{N}_{0}~:~f^{-i_{n}(d)}(d) \\in 
V^{(f)}_{n,\\gamma}(y_{2}).\n$$\n$\\bullet$ ~ Similarly, the ratios\n$\\dfrac{m([y_{2},~f^{-j_{n}(d)}(d)])}{m([y_{1},~y_{2}])}$ and\n$\\dfrac{m([y_{2},~f^{-j_{n}(d)}(d)])}{m([y_{2},~f^{i_{n}(d)-j_{n}(d)}(y_{2})])}$\nare comparable, and the latter is comparable to the\nratio\n$\\dfrac{m([f^{j_{n}(d)-i_{n}(d)}(y_{2}),~f^{-i_{n}(d)}(d)])}{m([f^{j_{n}(d)-i_{n}(d)}(y_{2}),~y_{2}])}\n=1-t_{n}(d)\n$ (by Lemma \\ref{l:31}). So, for $d \\in Q_{3}(\\mathbb{N}_{0}),~\\lim_{n \\to +\\infty}\n~t_{n}(d)=1$, which is equivalent to the assertion:\n$$\\forall ~\\gamma >0,~\\exists ~ n_{\\gamma}\\in \\mathbb{N}_{0}\n~\\textrm{such that for every }~n \\geq n_{\\gamma},~ n \\in\n\\mathbb{N}_{0}:~f^{-j_{n}(d)}(d) \\in V^{(f)}_{n,\\gamma}(y_{2}).\n$$\n\nAssertion (2): Let $d \\in Q_{1}(\\mathbb{N}_{0})$. There exists\n$0<\\gamma (d)<1$ such that for every $n \\in \\mathbb{N}_{0},~t_{n}(d)\n\\geq \\gamma (d) ~ \\textrm{and}~1-t_{n}(d) \\geq \\gamma (d) $. Since\n$Q_{1}(\\mathbb{N}_{0})$ is finite, there exist $0<\\gamma_{0}<1$ and\nan infinite subset $M_{2}$ of $\\mathbb{N}_{0}$\n($\\mathbb{N}_{0}\\backslash M_{2}$ is finite) such that the points\n$f^{-i_{n}(d)}(d)$ and $f^{-j_{n}(d)}(d)$ are contained in the set\n$S^{1}\\backslash V^{(f)}_{n,\\gamma_{0}}(y_{2})$ for all $d\\in\nQ_{1}(M_{2})$ and $n \\in M_{2}$. By Lemma \\ref{l:25}, $~ d\\notin f^{i}\n([y_{1},~y_{2}])\\cup f^{j} ([y_{2},~y_{3}])$, for every $0\n\\leq i < q_{n}, ~i \\neq i_{n}(d), ~j = \\varphi_{n}(i)$. Hence, for every $0\n\\leq i<q_{n}$ and every $n\\in M_{2}$, the points $f^{-i_{n}(d)}(d)$ and $f^{-j_{n}(d)}(d)$ stay outside $V^{(f)}_{n,\\gamma_{0}}(y_{2})$; this proves Assertion (2).\n\\end{proof}\n\\medskip\n\n\\begin{prop}\\label{p:46} Let $(z_{1},z_{2},z_{3})$ be a $(\\beta,\\gamma)$-derived cell associated to $(f,x_{0},c,\\mathbb{N}_{0})$, with $0<\\beta<\\gamma<\\gamma_{0}$. Assume that the conjugating homeomorphism $h$ from $f$ to $g$ admits at $x_{0}$ a positive derivative $\\mathrm{Dh}(x_{0}) = \\omega_{0} > 0$. 
Then the following properties hold:\n\n\\begin{itemize}\n \\item [(1)] $(h(y_{1}),h(y_{2}),h(y_{3}))$ is a\nprimary cell associated to $(g,h(x_{0}),h(c),\\mathbb{N}_{0})$.\n\n\\item [(2)] $(h(z_{1}),h(z_{2}),h(z_{3}))$ is a $(\\frac{\\beta}{2},2\\gamma)$-derived cell associated to\n$(g,h(x_{0}),h(c),\\mathbb{N}_{0})$.\n\n\\item [(3)] There is an integer $n_{\\gamma} \\in \\mathbb{N}_{0}$ such\nthat for every $n \\in \\mathbb{N}_{0},~n \\geq n_{\\gamma}$ :\n$$h \\big(V^{(f)}_{n,\\gamma}(y_{2}) \\big) \\subset V^{(g)}_{n,2\\gamma}\\big(h(y_{2})\\big)\n\\subset h \\big(V^{(f)}_{n,4\\gamma}(y_{2}) \\big)$$\n\\end{itemize}\n\\end{prop}\n\\medskip\n\n\\begin{prop}\\label{p:43} Under the notations of Proposition \\ref{p:46}, for every $n\n\\in \\mathbb{N}_{0},~n \\geq n_{\\beta} $ we have:\n\\medskip\n\\begin{itemize}\n\n \\item [(h-0)] $[h(z_{1}),h(z_{3})]\\subset V^{(g)}_{n,2\\gamma}(h(y_{2}))\\subset\n[h(y_{1}),h(y_{3})]$.\n\n \\item [(h-1)] For any $d \\in E(c_{1},c)$, $k_{n}(d)$ is the unique integer in $ [0,~q_{n}[$ such that\n$g^{-k_{n}(d)}(h(d))\\in V^{(g)}_{n,\\frac{1}{2}\\beta}(h(y_{2}))$.\n\n \\item [(h-2)] $~m\\left(g^{j}([h(z_{1}),h(z_{3})])\\right)\\leq K^{\\prime}(\\lambda^{\\prime})^{n}$, for every $0\\leq j<q_{n}$.\n\n \\item [(h-5)] For any $d \\in E(c_{1},c)$,\n$$ \\lim_{n \\to +\\infty} \\dfrac{m\\big([g^{-k_{n}(d)}(h(d)),h(y_{2}) ] \\big)}{m \\big([h(y_{1}),h(y_{2}) ] \\big)}=0.$$\n\\end{itemize}\n\\end{prop}\n\\medskip\n\n\\begin{proof} We prove Assertion ($h$-5). Let $d \\in E(c_{1},c)$ and let $\\gamma>0$. By Proposition\n\\ref{p:41} (6), there is an integer $n_{\\gamma}^{\\prime} \\in \\mathbb{N}_{0}$\nsuch that for every $n \\in \\mathbb{N}_{0},~n \\geq n_{\\gamma}^{\\prime}$ :\n$f^{-k_{n}(d)}(d) \\in V^{(f)}_{n,\\frac{\\gamma}{2}}(y_{2})$. On the other hand, by\nProposition \\ref{p:46}, (3), there is an integer $n_{\\gamma} \\in\n\\mathbb{N}_{0},~n_{\\gamma}\\geq n_{\\gamma}^{\\prime}$ such that for every $n\n\\in \\mathbb{N}_{0},~n \\geq n_{\\gamma}$ : $ h\n\\big(V^{(f)}_{n,\\frac{\\gamma}{2}}(y_{2}) \\big) \\subset\nV^{(g)}_{n,\\gamma}(h(y_{2}))$. As $g^{-k_{n}(d)}(h(d)) =\nh(f^{-k_{n}(d)}(d))$, it follows that $g^{-k_{n}(d)}(h(d))\\in\nV^{(g)}_{n,\\gamma}(h(y_{2}))$ for every $n \\geq n_{\\gamma},~n \\in\n\\mathbb{N}_{0}$. 
Since $\\gamma>0$ is arbitrary, it follows that $$ \\lim_{n \\to\n+\\infty} \\dfrac{m\\big([g^{-k_{n}(d)}(h(d)),h(y_{2}) ] \\big)}{m\n\\big([h(y_{1}),h(y_{2}) ] \\big)}=0\n$$\n$($h-6$)$ is a consequence of Assertions ($h-0$) and ($h-1$).\n\\end{proof}\n\\medskip\n\n\\section{\\bf Control of distortions}\n\\medskip\n\n\\begin{prop}[\\cite{A}, Proposition 5.1]\\label{p:2} Let $\\mathbb{N}_{0}$ and $\\gamma_{0}$ be as in Proposition \\ref{p:47}. Let\n$\\beta,~\\gamma \\in ]0,\\gamma_{0}[~(\\beta <\\gamma)$. Assume that the\nconjugation homeomorphism $h$ from $f$ to $g$ admits at $x_{0}$ a\npositive derivative $\\mathrm{Dh}(x_{0}) = \\omega_{0} >0$. Then there\nexists an integer $n_{\\beta} \\in \\mathbb{N}_{0}$ such that for\nevery $n\\in \\mathbb{N}_{0}, \\ n\\geq n_{\\beta},$ we have\n\n$$\\left|\n\\dfrac{\\mathrm{Dr}_{g^{q_{n}}}\\big(h(z_{1}),h(z_{2}),h(z_{3})\\big)}{\\mathrm{Dr}_{f^{q_{n}}}(z_{1},z_{2},z_{3})}\n- 1\\right|\\leq \\dfrac{2}{1-\\gamma_{0}}\\beta$$\n\\end{prop}\n\\bigskip\n\\\n\\\\\n\nThe next proposition is another distortion control, opposite to\nthat of Proposition \\ref{p:2}; it allows us to\nprove that the conjugation from $f$ to $g$ is singular with respect\nto the Lebesgue measure.\n\n\\begin{prop}\n\\label{p:44} Assume that the irrational rotation number\nof $f$ is of bounded type. Let $c,~c_{1}\\in B(f)$ with $c\\neq c_{1}$, and let $\\mathbb{N}_{0}$ and $\\gamma_{0}$ be as in\nProposition \\ref{p:47}. 
Then\nfor any $\\varepsilon >0$, there exists $0<\\gamma<\\gamma_{0}$ such\nthat for any $\\beta \\in ]0,\\gamma[$ there exists $n_{\\beta} \\in\n\\mathbb{N}_{0}$ such that for all $n \\geq n_{\\beta},~n \\in\n\\mathbb{N}_{0}$ the ($\\beta,\\gamma$)-secondary cell\n$(z_{1},z_{2},z_{3})$ associated to\n$(f,x_{0},c,\\mathbb{N}_{0},\\delta)$ satisfies the following\ninequality\n$$ \\left| \\dfrac{\\textrm{Dr}_{g^{q_{n}}}\\left(h(z_{1}),h(z_{2}),h(z_{3})\\right)}{\\textrm{Dr}_{f^{q_{n}}}(z_{1},z_{2},z_{3})}-\n\\Pi(c_{1},c) \\right|\\leq A \\ \\varepsilon, ~~\\textrm{for some\nconstant} \\; A> 0,$$\n\nwhere\n$$\\Pi(c_{1},c)=\n \\underset{d\\in E (c_{1},c)}\\prod \\dfrac{\\sigma_{g}(h(d))}{\\sigma_{f}(d)}.$$\n\\end{prop}\n\\medskip\n\nThe proof of Proposition \\ref{p:44} is an elaboration of the proof of (\\cite{A}, Proposition 5.3), so we only\ndescribe the changes that are necessary.\n\\medskip\n\n\\begin{lem}\\label{l:49q}\nAssume that the\nrotation number $\\alpha$ of $f$ is of bounded type. Let $c_{1} \\in C(f)\\backslash \\{c\\}$, and let $u_{0}$, $\\gamma_{c,c_{1}}$ and\n$\\mathbb{N}_{0}$ be as in Proposition \\ref{p:47}. Let $h$ be the conjugating map from $f$ to $R_{\\alpha}$:\n$h\\circ f= R_{\\alpha}\\circ h$. Assume that $\\mathrm{Dh}(x_{0})>0$. Then for any $\\varepsilon >0$,\nthere exists $\\eta >0$ such that for any $u\\in ]0,\\eta[$ there exists $n_{u,r} \\in\n\\mathbb{N}_{c,c_{1}}$ such that for any $n \\in \\mathbb{N}_{c,c_{1}},\\ n\\geq\nn_{u,r}$: the $($r,u$)$-derived cell $(z_{1},z_{2},z_{3})$ associated to\n$(f,x_{0},c)$ satisfies\n\\\n\\\\\n$$\\arrowvert\\textrm{Dcr}_{f^{q_{n}}}(z_{1},z_{2},z_{3})-\n\\Pi_{f}(c,c_{1})\\arrowvert\\leq C_{1}\\varepsilon,$$ where $$\\Pi_{f}(c,c_{1})= \\underset{b\\in E (c_{1},c)}\\prod\n\\sigma_{f}(b)$$ and \\; $C_{1}$ is a positive constant.\n\\end{lem}\n\\bigskip\n\n\n\\begin{proof}\nLet $\\varepsilon>0$. 
Since $\\textrm{Df}$ is absolutely continuous on\nevery interval $[c_{i},c_{i+1}]~(0 \\leq i \\leq p)$, there exists a\nreal $0<\\eta_{0}<1$ such that $|\\textrm{Df}(x)-\\textrm{Df}(y)|<\\varepsilon$ whenever $x$ and $y$ lie in the same interval of continuity of $\\textrm{Df}$ and $|x-y|<\\eta_{0}$. With $m_{1} = \\underset{S^{1}}{\\inf}~\\textrm{Df}$ and $M_{1} = \\underset{S^{1}}{\\sup}~\\textrm{Df}$ ~$(m_{1}>0)$, the following properties hold:\\\\\n\n$0\\leq D^{\\ast}_{i}(f)-1 \\leq \\dfrac{1}{m_{1}}\n|\\textrm{Df}(b_{i})-\\textrm{Df}(a_{i})|$.\\\\\n\n$\\sum_{i \\in I} |D^{\\ast}_{i}(f)-1|\\leq \\dfrac{2}{m_{1}}\n\\varepsilon$.\\\\\n\n$ |\\log D_{i}(f)|= \\log D^{\\ast}_{i}(f)\\leq D^{\\ast}_{i}(f)-1$\\\\\n\n$\\dfrac{1}{P^{\\ast}} \\leq P \\leq P^{\\ast}$.\\\\\n\nIt follows that\n\\begin{equation}\\label{(4)}\n e^{-C_{0}\\varepsilon} \\leq P \\leq e^{C_{0}\\varepsilon}\n \\, \\, \\ \\textrm{where} \\, \\, C_{0} = \\dfrac{2}{m_{1}}.\n\\end{equation}\n\\medskip\n\n\\textbf{Step 2}. We consider $D_{k_{n}(d)}(f),~d\\in E(c_{1},c)$. We have $d \\in ~ ]f^{k_{n}(d)}(z_{1}),f^{k_{n}(d)}(z_{3})[$.\\\\\n\n\\textbf{Case 2a}. $d\\in ]f^{k_{n}(d)}(z_{1}),f^{k_{n}(d)}(z_{2})[$\nwith $k_{n}(d)=i_{n}(d)$. By Proposition \\ref{p:42} (5),\n$\\textrm{Df}$ is continuous on the intervals\n$[f^{k_{n}(d)}(z_{1}),d],~[d,f^{k_{n}(d)}(z_{2})]$ and\n$[f^{k_{n}(d)}(z_{2}),f^{k_{n}(d)}(z_{3})]$; by the mean value\ntheorem there exist $t_{1}\\in ]f^{k_{n}(d)}(z_{1}),d[,~t_{2} \\in\n]d,f^{k_{n}(d)}(z_{2})[$ and $t_{3} \\in\n]f^{k_{n}(d)}(z_{2}),f^{k_{n}(d)}(z_{3})[$ such that\n$$D_{k_{n}(d)}(f) = (1-r_{n}(d))\\dfrac{\\textrm{Df}(t_{1})}{\\textrm{Df}(t_{3})}+r_{n}(d)\\dfrac{\\textrm{Df}(t_{2})}{\\textrm{Df}(t_{3})}$$\n~~where~~$$r_{n}(d) =\n\\dfrac{m([d,f^{k_{n}(d)}(z_{2})])}{m([f^{k_{n}(d)}(z_{1}),f^{k_{n}(d)}(z_{2})])}.$$\n\nWe have\n\\medskip\n\n $\\begin{array}{ll}\n |D_{k_{n}(d)}(f)-\\sigma_{f}(d)| & \\leq |\\dfrac{\\textrm{Df}(t_{1})}{\\textrm{Df}(t_{3})}-\\dfrac{D_{-}f(d)}{D_{+}f(d)}|+\nr_{n}(d)\\dfrac{|\\textrm{Df}(t_{1})-\\textrm{Df}(t_{2})|}{\\textrm{Df}(t_{3})}\\end{array}\n$\n\\medskip\n\nSince $$\\begin{array}{ll}\n|\\dfrac{\\textrm{Df}(t_{1})}{\\textrm{Df}(t_{3})}-\\dfrac{D_{-}f(d)}{D_{+}f(d)}|\n\\leq \\dfrac{M_{1}}{m_{1}^{2}} 
\\big(\n|\\textrm{Df}(t_{1})-D_{-}f(d)|+|\\textrm{Df}(t_{3})-D_{+}f(d)|\\big),\n\\end{array}$$\n\\medskip\n\nand by Lemma \\ref{l:31}\n\\medskip\n\n$$\\begin{array}{ll}\n r_{n}(d) & = \\dfrac{m \\big(f^{k_{n}(d)}([f^{-k_{n}(d)}(d),z_{2}]) \\big)}{m \\big(f^{k_{n}(d)}([z_{1},z_{2}])\n\\big)}\\\\\\\\\n & \\leq e^{2V}\\dfrac{m ([f^{-k_{n}(d)}(d),z_{2}]) }{m ([z_{1},z_{2}])} \\\\\\\\\n& \\leq e^{2V}\\dfrac{m ([y_{1},y_{2}]) }{m ([z_{1},z_{2}])}\\times\n\\dfrac{m ([f^{-k_{n}(d)}(d),z_{2}]) }{m ([y_{1},y_{2}])} \\\\\\\\\n& \\leq \\dfrac{e^{2V}}{\\beta} s_{n}(d)\n \\end{array}$$\n\nwith ~~~~\n$$s_{n}(d):= \\dfrac{m ([f^{-k_{n}(d)}(d),z_{2}]) }{m\n([y_{1},y_{2}])}.$$\n\\medskip\n\nIt follows that:\n\n$$|D_{k_{n}(d)}(f)-\\sigma_{f}(d)| \\leq K_{0} \\big(\n\\dfrac{s_{n}(d)}{\\beta}+|\\textrm{Df}(t_{1})-D_{-}f(d)|+|\\textrm{Df}(t_{3})-D_{+}f(d)|\n \\big)$$\nwhere\n\n$K_{0}=\\max(\\dfrac{M_{1}}{m_{1}}e^{2V},\\dfrac{M_{1}}{m_{1}^{2}})$.\n\\medskip\n\nSince $\\underset{n\\to +\\infty}\\lim s_{n}(d)=0$ (Corollary\n\\ref{c:37}), there exists $n_{d,\\beta} \\in \\mathbb{N}_{0}$ such that\nfor any $n \\in \\mathbb{N}_{0},~ n \\geq n_{d,\\beta}$, one has\n$s_{n}(d)< \\beta\\varepsilon $. 
The intervals\n$(]t_{1},d[;~]d,t_{3}[)_{d\\in E(c_{1},c)}$ are disjoint intervals of\ncontinuity of $\\textrm{Df}$, they satisfy by Lemma \\ref{l:31}:\n\\bigskip\n\n $\n\\begin{array}{ll}\n m([t_{1},d])+m([d,t_{3}]) & \\leq 2 m([f^{k_{n}(d)}(z_{1}),f^{k_{n}(d)}(z_{2})])+m([f^{k_{n}(d)}(z_{2}),f^{k_{n}(d)}(z_{3})])\\\\\\\\\n\n & \\leq 3 e^{2V}\\gamma m([f^{k_{n}(d)}(y_{1}),f^{k_{n}(d)}(y_{2})])\\\\\\\\\n\n & \\leq 3 e^{2V} \\gamma < \\eta_{0}\n\\end{array}\n$\n\n So, $$ |\\textrm{Df}(t_{1})-D_{-}f(d)|+|\\textrm{Df}(t_{3})-D_{+}f(d)|< \\varepsilon $$\n\\medskip\n\nHence, there exists $n_{d,\\beta} \\in \\mathbb{N}_{0}$ such that for\nevery $n \\in \\mathbb{N}_{0}$, $n \\geq n_{d,\\beta}$,\n\n\\begin{equation}\\label{(5)}\n |D_{k_{n}(d)}(f)-\\sigma_{f}(d)|\\leq 2 K_{0}\\varepsilon\n\\end{equation}\n\\smallskip\n\n\\textbf{Case 2b}. $d\\in ]f^{k_{n}(d)}(z_{2}),f^{k_{n}(d)}(z_{3})[$\nwith $k_{n}(d)=j_{n}(d)$. By Proposition \\ref{p:42} (b-5),\n$\\textrm{Df}$ is continuous on the intervals\n$[f^{k_{n}(d)}(z_{1}),f^{k_{n}(d)}(z_{2})],~[f^{k_{n}(d)}(z_{2}),d]$\nand $[d,f^{k_{n}(d)}(z_{3})]$. 
By the mean value theorem, there\nexist $s_{1} \\in ]d, f^{k_{n}(d)}(z_{3})[,~s_{2} \\in\n]f^{k_{n}(d)}(z_{2}),d[$ and $s_{3} \\in\n]f^{k_{n}(d)}(z_{1}),f^{k_{n}(d)}(z_{2})[$ such that\n$$D_{k_{n}(d)}^{-1}(f)=(1-r^{\\prime}_{n}(d))\\dfrac{\\textrm{Df}(s_{1})}{\\textrm{Df}(s_{3})}+r^{\\prime}_{n}(d)\\dfrac{\\textrm{Df}(s_{2})}{\\textrm{Df}(s_{3})}$$\nwhere\n$$r^{\\prime}_{n}(d) = \\dfrac{m([f^{k_{n}(d)}(z_{2}),d])}{m([f^{k_{n}(d)}(z_{2}),f^{k_{n}(d)}(z_{3})])}.\n$$\n\\medskip\n\nAs in Case 2a, we have\n\\medskip\n\n $\\begin{array}{ll}\n |D_{k_{n}(d)}(f)^{-1}-\\sigma_{f}(d)^{-1}| & \\leq |\\dfrac{\\textrm{Df}(s_{1})}{\\textrm{Df}(s_{3})}-\\dfrac{D_{+}f(d)}{D_{-}f(d)}|+\nr^{\\prime}_{n}(d)\n\\dfrac{|\\textrm{Df}(s_{1})-\\textrm{Df}(s_{2})|}{\\textrm{Df}(s_{3})}\\end{array}\n$\n\\bigskip\n\nSince $\\begin{array}{ll}\n|\\dfrac{\\textrm{Df}(s_{1})}{\\textrm{Df}(s_{3})}-\\dfrac{D_{+}f(d)}{D_{-}f(d)}|\\leq\n\\dfrac{M_{1}}{m_{1}^{2}}\n\\big(|\\textrm{Df}(s_{1})-D_{+}f(d)|+|\\textrm{Df}(s_{3})-D_{-}f(d)|\n\\big),\\end{array} $ \\\n\\\\\n\\medskip\n\nand by Lemma \\ref{l:31}\n\\medskip\n\n$$\\begin{array}{ll}\n r^{\\prime}_{n}(d) & = \\dfrac{m \\big(f^{k_{n}(d)}([z_{2},f^{-k_{n}(d)}(d)]) \\big)}{m \\big(f^{k_{n}(d)}([z_{2},z_{3}])\n\\big)}\\\\\\\\\n & \\leq e^{2V}\\dfrac{m ([z_{2},f^{-k_{n}(d)}(d)]) }{m ([z_{2},z_{3}])} \\\\\\\\\n& \\leq e^{2V}\\dfrac{m ([y_{1},y_{2}]) }{m([z_{2},z_{3}])}\\times\n\\dfrac{m ([z_{2},f^{-k_{n}(d)}(d)]) }{m ([y_{1},y_{2}])} \\\\\\\\\n& \\leq \\dfrac{e^{2V}}{\\beta} s_{n}^{\\prime}(d),\\end{array}\n$$ where $$s_{n}^{\\prime}(d):= \\dfrac{m ([z_{2},f^{-k_{n}(d)}(d)]) }{m\n([y_{1},y_{2}])}.$$ \\\n\\\\\n\nIt follows that $$|D_{k_{n}(d)}(f)^{-1}-\\sigma_{f}(d)^{-1}| \\leq\nK_{0} \\big(\n\\dfrac{s_{n}^{\\prime}(d)}{\\beta}+|\\textrm{Df}(s_{1})-D_{+}f(d)|+|\\textrm{Df}(s_{3})-D_{-}f(d)|\n \\big)$$\nwhere $$K_{0} =\n\\max(\\dfrac{M_{1}}{m_{1}}e^{2V},\\dfrac{M_{1}}{m_{1}^{2}}).$$ By\nCorollary \\ref{c:37} : $\\underset{n\\to 
+\\infty}\\lim\ns_{n}^{\\prime}(d)=0$. Hence, there exists $n_{d,\\beta} \\in\n\\mathbb{N}_{0}$ such that for every $n\\in \\mathbb{N}_{0}, \\ n \\geq\nn_{d,\\beta}$,~~ $s_{n}^{\\prime}(d)< \\beta\\varepsilon $. The\nintervals $(]d,s_{1}[;~]s_{3},d[)_{d\\in E(c_{1},c)}$ are disjoint\nintervals of\ncontinuity of $\\textrm{Df}$, they satisfy, by Lemma \\ref{l:31}, \\\\\n\\medskip\n\n $\\begin{array}{ll}\n m([d,s_{1}])+m([s_{3},d]) & \\leq m([f^{k_{n}(d)}(z_{1}),f^{k_{n}(d)}(z_{2})])+ 2m([f^{k_{n}(d)}(z_{2}),f^{k_{n}(d)}(z_{3})])\\\\\\\\\n\n & \\leq 3e^{2V}\\gamma m([f^{k_{n}(d)}(y_{1}),f^{k_{n}(d)}(y_{2})])\\\\\\\\\n\n & \\leq 3e^{2V}\\gamma < \\eta_{0}\n\\end{array}\n$\n\n So, $$|\\textrm{Df}(s_{1})-D_{+}f(d)|+|\\textrm{Df}(s_{3})-D_{-}f(d)|< \\varepsilon $$\n\\medskip\n\nHence, there exists $n_{d,\\beta}\\in \\mathbb{N}_{0}$ such that for\nevery $n \\in \\mathbb{N}_{0}, n\\geq n_{d,\\beta}$,\n\n\\begin{equation}\\label{(6)}\n |D_{k_{n}(d)}(f)^{-1}-\\sigma_{f}(d)^{-1}| \\leq 2 K_{0}\\varepsilon\n\\end{equation}\n\\medskip\n\nTherefore from the cases 2a and 2b, we conclude that there exists\n$n_{d,\\beta} \\in \\mathbb{N}_{0}$ such that for every $n \\in\n\\mathbb{N}_{0},~n \\geq n_{d,\\beta}$,\n\\medskip\n\n\\begin{equation}\\label{(7)}\n |D_{k_{n}(d)}(f)-\\sigma_{f}(d)| \\leq K_{0}\\varepsilon\n\\end{equation}\n\\medskip\n\nSince $E(c_{1},c)$ is finite, there exists $n_{\\beta} \\in\n\\mathbb{N}_{0}$ such that for every $n \\in \\mathbb{N}_{0}, ~n \\geq\nn_{\\beta}$,\n\n\\begin{equation}\\label{(8)}\n |\\prod_{d \\in E(c_{1},c)}~D_{k_{n}(d)}(f)-\\prod_{d \\in\nE(c_{1},c)}~\\sigma_{f}(d)| \\leq \\varepsilon\n\\end{equation}\n\\medskip\n\nHence, (\\ref{(4)}) and (\\ref{(8)}) imply that: there exist $n_{\\beta} \\in \\mathbb{N}_{0}$ such that for every $n \\in\n\\mathbb{N}_{0},~n \\geq n_{\\beta}$,\n\n\\begin{equation}\\label{(9)}\n |Dcr_{f^{q_{n}}}(z_{1},z_{2},z_{3})-\\nu(c) | \\leq C_{1}\\varepsilon\n\\end{equation}\n\\medskip\n\nwhere $\\nu(c) = \\prod_{d\\in 
E(c_{1},c)}~\\sigma_{f}(d)$ and $C_{1}$ is a\npositive constant.\n\\end{proof}\n\\bigskip\n\n\\begin{lem}\n\\label{l:49} Under the hypothesis of Proposition \\ref{p:44}, for any $\\varepsilon >0$, there exists\n$0<\\gamma<\\gamma_{0}$ such that for any $\\beta \\in ]0,\\gamma[$,\nthere exists $n_{\\beta}\\in \\mathbb{N}_{0}$ such that for any $n\\in\n\\mathbb{N}_{0}, ~n\\geq n_{\\beta}$, the ($\\beta,\\gamma$)-secondary\ncell $(z_{1},z_{2},z_{3})$ associated to $(f,x_{0},c,\\mathbb{N}_{0}, \\delta)$ satisfies\n\n$$\\arrowvert\\textrm{Dr}_{g^{q_{n}}}(h(z_{1}),h(z_{2}),h(z_{3}))-\n\\Pi_{g}(h(c), h(c_{1}))\\arrowvert\\leq C_{2}\\varepsilon$$\n\\medskip\n\nwhere \\ $\\Pi_{g}(h(c), h(c_{1})) = \\underset{d\\in E (c_{1},c)}\\prod \\sigma_{g}(h(d))$ and \\; $C_{2}$ is a\npositive constant.\n\\end{lem}\n\\medskip\n\n\\begin{proof} The proof is a consequence of Lemma \\ref{l:49q} and (Proposition \\ref{p:43}, (h-5)) applied to\nthe ($\\frac{\\beta}{2},\\gamma$)-derived cell\n$(h(z_{1}),h(z_{2}),h(z_{3}))$ associated to $(g,h(x_{0}),h(c),\n\\mathbb{N}_{0})$ instead of the ($\\beta,\\gamma$)-derived cell\n$(z_{1},z_{2},z_{3})$ associated to $(f,x_{0},c,\\mathbb{N}_{0})$.\n \\end{proof}\n\\medskip\n\\\n\\\\\n{\\it Proof of Proposition \\ref{p:44}}. The proof results from\nLemmas \\ref{l:49q} and \\ref{l:49}. \\qed\n\\bigskip\n\n \\section{\\bf Proof of the Main Theorem}\n \\medskip\n\nLet $f, ~g\\in \\mathcal{P}(S^{1})$ with the same irrational rotation number $\\alpha$ of bounded type.\nSuppose that $f$ and $g$ satisfy the (KO) condition. By Corollary \\ref{c:21}, there exist two\n piecewise quadratic homeomorphisms $K, L\\in \\mathcal{P}(S^{1})$ such that $F= L \\circ f \\circ L^{-1}$ and\n $G = K \\circ g \\circ K^{-1}$ have the following properties:\n\n \\begin{enumerate}\n \\item $F, G\\in \\mathcal{P}(S^{1})$ and have the same irrational rotation number $\\alpha$,\n\n \\item The break points of $F$ (resp. $G$) are on \\textit{pairwise distinct} $F$-orbits (resp. 
$G$-orbits),\n\n \\item $F$ and $G$ satisfy the (KO) condition,\n\n \\item Let $h$ be the conjugating map between $f$ and $g$, i.e. $h \\circ f = g \\circ h$, and set $v = K \\circ h \\circ L^{-1}$. Then\n\n \\subitem - $v$ is an absolutely continuous (resp. a singular) function if and only if so is $h$.\n\n \\subitem - $v\\circ F= G\\circ v$.\n \\end{enumerate}\n\n Therefore, we may assume that all break points of $f$ (resp. $g$) are on \\textit{pairwise distinct} $f$-orbits\n (resp. $g$-orbits). Now by Proposition \\ref{p:23}, there exists a\n homeomorphism $u$ of $S^{1}$ such that $G= u\\circ f\\circ u^{-1}\\in \\mathcal{P}(S^{1})$ with the following properties:\n\n \\\n \\\\\n \\textbullet ~ $C(f)\\subset B:= \\{c_{i}: ~i=0,1, \\dots, q\\}$, ($q\\geq p$). \\\\\n \\textbullet ~ $C(G)\\subset u(B):=\\{u(c_{i}): ~i=0,1, \\dots, q\\}$. \\\\\n \\textbullet ~ $\\sigma_{G}(u(c_{i})) = \\sigma_{g}(d_{i})~,~i=0,1, \\dots, q$.\\\\\n \\textbullet ~ $\\pi_{s}(G)=\\pi_{s}(g)$.\\\\\n \\textbullet ~ $h$ is singular if and only if so is $u$.\n \\medskip\n\n So we may assume that $u=h$ and $G=g$.\n\\\n\\\\\n To show that the conjugation homeomorphism $h$ from $f$ to $g$ is singular with respect to the Lebesgue measure $m$, it\n suffices to prove that its derivative $\\textrm{Dh}$ is zero on\n a set of full Lebesgue measure. Assume on the contrary that $h$ admits at a point $x_{0}$ a positive derivative $\\mathrm{Dh}(x_{0})>0$.\n\\bigskip\n\nLet $\\varepsilon >0, ~c \\in B$. 
For $0<\\beta<\\gamma <\\gamma_{0}$,\nwrite:\n\n$$D_{n}(\\beta, \\gamma):= \\dfrac{\\textrm{Dr}_{g^{q_{n}}}\\big(h(z_{1}),h(z_{2}), h(z_{3}) \\big)}{\\textrm{Dr}_{f^{q_{n}}}\\big(z_{1},z_{2}, z_{3} \\big)} $$\n\\medskip\n\nBy Proposition \\ref{p:2}, there exists $n_{\\beta} \\in\n\\mathbb{N}_{0}$ such that for every $n\\in \\mathbb{N}_{0},~n\\geq\nn_{\\beta}$:\n\n\\begin{equation}\\label{(10)}\n |D_{n}(\\beta, \\gamma)-1| \\leq \\dfrac{2}{1-\\gamma_{0}}\\beta\n\\end{equation}\n\\medskip\n\nBy Proposition \\ref{p:44}, there exists\n$\\gamma_{\\varepsilon}<\\gamma_{0}$ such that for every\n$0<\\beta<\\gamma_{\\varepsilon}$ there exists\n$n_{\\beta}(\\varepsilon)\\in \\mathbb{N}_{0}$ such that for every $n\\in\n\\mathbb{N}_{0},~~n\\geq n_{\\beta}(\\varepsilon)$, the $(\\beta,\\gamma)$-secondary cell $(z_{1},z_{2},z_{3})$ $(\\gamma<\\gamma_{\\varepsilon})$ associated to $(f,x_{0},c,\\mathbb{N}_{0},\\delta)$ satisfies:\n\n\\begin{equation}\\label{(11)}\n |D_{n}(\\beta, \\gamma)-\\Pi(c_{1},c)| \\leq A \\varepsilon,\n\\end{equation}\n\\\n\\\\\nwhere $$\\Pi(c_{1},c):= \\underset{d \\in\nE(c_{1},c)}\\prod~\\dfrac{\\sigma_{g}(h(d))}{\\sigma_{f}(d)}$$ and $A$\nis a positive\nconstant.\\\\\n\nLet $\\beta < \\gamma <\\gamma_{\\varepsilon}$ and $n \\in\n\\mathbb{N}_{0}, ~n \\geq n_{\\beta}(\\varepsilon)$. Then (\\ref{(10)})\nand (\\ref{(11)}) imply that\n\\bigskip\n\n$ \\begin{array}{lll}\n |\\Pi(c_{1},c)-1| & \\leq |D_{n}(\\beta, \\gamma)-1|+|D_{n}(\\beta,\n\\gamma)-\\Pi(c_{1},c)| & \\leq \\dfrac{2\\beta}{1-\\gamma_{0}}+A\n\\varepsilon\n\n \\end{array}$\n\\bigskip\n\nSince $\\varepsilon$ and $ \\beta $ are arbitrary, it follows that $\\Pi(c_{1},c)=1$. 
i.e.\n\\begin{equation}\\label{(12)}\n \\prod_{d\\in E(c_{1},c)} ~\\dfrac{\\sigma_{g}(h(d))}{\\sigma_{f}(d)}=1\n\\end{equation}\nSince $c_{1},~c \\in B$, $c_{1}\\neq c$, are arbitrary, for every\n$(i,k)\\in \\{0,1, \\dots,~q \\}^{2}~~(i \\neq k),$ one has\n$$\\underset{d\\in E(c_{k}, c_{i})}\\prod~\\dfrac{\\sigma_{g}(h(d))}{\\sigma_{f}(d)}=1$$\n\nSince, by Proposition \\ref{p:461}, $c_{k} \\notin E(c_{k},c_{i})$, we have $$\\underset{0 \\leq j \\leq q\n}\\prod~\\big (\\dfrac{\\sigma_{g}(h(c_{j}))}{\\sigma_{f}(c_{j})}\n\\big)^{e(i,k,j)}=1,$$\n\nwhere $e(i,k,j) \\in \\{0,1\\}$ with\n$$e(i,k,i)=1 ~~\\textrm{and}~~ e(i,k,k)=0.$$\nIt follows that for every $(i,k)\\in \\{0,1, \\dots,~q \\}^{2}, \\ i\\neq\nk$, we have\n\n$$\\underset{0 \\leq j \\leq q}\\sum~ e (i,k,j)~\\log\\left(\\dfrac{\\sigma_{g}(h(c_{j}))}{\\sigma_{f}(c_{j})}\\right)=0.$$\n\\medskip\n \\\n\\\\\n\nFor $(i,j,k) \\in \\{0, \\dots, q\\}^{3}$, set\n$$\\varepsilon_{k+i(q+1),j}\n =\\left\\{\n \\begin{array}{ll}\n e (i,k,j) & \\textrm{if} ~~i \\neq k,~~j \\neq i ~~\\textrm{and}~~j\\neq\nk,\n\\\\\\\\\n 1 & \\textrm{if} ~~i \\neq k ~~\\textrm{and}~~j=\ni,\n\\\\\\\\\n 0 & \\textrm{if} ~~i \\neq k\n~~\\textrm{and}~~j=k,\n\\\\\\\\\n0 & \\textrm{if} ~~i = k\n \\end{array}\n \\right.\n$$\n\nIt follows that for every $(i,k) \\in \\{0, \\dots,\nq\\}^{2}$,~~$$\\sum_{j=0}^{q}\\varepsilon_{k+i(q+1),j}\\log\\left(\\frac{\\sigma_{g}(h(c_{j}))}{\\sigma_{f}(c_{j})}\\right)=0\n$$\n\nSet $A_{q+1}=(a_{l,j})_{0\\leq l<(q+1)^{2},~0 \\leq j\\leq q}$, where $a_{l,j}=\\varepsilon_{l,j}$. The matrix $A_{q+1}$ has rank $q+1$, so the linear system above forces $\\sigma_{g}(h(c_{j}))=\\sigma_{f}(c_{j})$ for every $0\\leq j\\leq q$, which contradicts the hypothesis of the main theorem. Hence $\\textrm{Dh}$ vanishes Lebesgue almost everywhere and $h$ is singular. \\qed\n\\medskip\n\\\n\\\\\n\\textit{Proof of Corollary \\ref{c:15}}. Assertion (i) follows directly from the main theorem. For Assertion (ii), suppose that $f$ and $g$ have the $(D)$-property and that $D^{2}f,~D^{2}g \\in L^{p}(S^{1})$ for some $p>1$. By\nKatznelson-Ornstein's theorem, $\\mu_{F}$ is equivalent to the\nLebesgue measure $m$ and so is $\\mu_{f}$. Hence there exists an\nabsolutely continuous map $\\varphi$ such that $\\varphi \\circ f=\nR_{\\rho(f)} \\circ \\varphi$. Similarly, there exists an absolutely\ncontinuous map $\\psi$ such that $\\psi \\circ g = R_{\\rho(f)} \\circ\n\\psi$. In addition, $\\psi^{-1}$ is absolutely continuous. 
It\nfollows that $(\\psi \\circ h\\circ\\varphi^{-1}) \\circ\nR_{\\rho(f)}=R_{\\rho(f)}\\circ (\\psi\\circ h\\circ \\varphi^{-1})$. As\n$\\rho(f)$ is irrational, $\\psi\\circ h\\circ\n\\varphi^{-1}=R_{\\beta}$ is a rotation, for some $\\beta \\in S^{1}$.\nTherefore $h= \\psi^{-1}\\circ R_{\\beta}\\circ \\varphi$, which is an\nabsolutely continuous map. This completes the proof. \\qed\n\\\n\\\\\n\n\\textit{Proof of Corollary \\ref{c:16}}. The proof follows easily from\nCorollary \\ref{c:15} by taking $g$ to be the rotation $R_{\\rho(f)}$. \\qed\n\n\n\n\\bigskip\n\n\\bibliographystyle{amsplain}\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nIn recent years, the atmospheric~\\cite{sk}, solar~\\cite{solar}, \nreactor~\\cite{kamland}, and accelerator~\\cite{k2k, minos_first} experiments \nhave provided convincing evidence of neutrino oscillations and \ntherefore have demonstrated that neutrinos have non-zero masses. This phenomenon \nis the first clear example of new physics beyond the Standard Model, which \nassumes that neutrinos are massless particles. Three-generation neutrino oscillations are described by six independent parameters: \nthree mixing angles $\\theta_{12},~\\theta_{23},~\\theta_{13}$, two mass-squared \ndifferences $\\Delta m^2_{21} = m^2_2 - m^2_1$ and \n$\\Delta m^2_{23} = m^2_3 - m^2_2$, and one complex phase $\\delta$. 
Another important goal of these experiments is to \nmeasure the known mixing parameters more precisely.\n\n\\section{Principles of T2K}\n\\label{sec:t2k}\nThe T2K (Tokai--to--Kamioka) experiment~\\cite{t2k} will use a \nhigh intensity off--axis neutrino \nbeam generated by a 50 GeV (initially 30 GeV) proton beam at JPARC \n(Japan Proton Accelerator Research Complex), SuperKamiokande as a far \nneutrino \ndetector, and a set of dedicated neutrino detectors located \nat a distance of 280 m from the pion production target to\nmeasure the properties of the unoscillated neutrino beam. The schematic view \nof the T2K setup is shown in Fig.~\\ref{fig:t2k_setup}.\n\\begin{figure}[h]\n\\centering\\includegraphics[width=14cm,angle=0]{t2k_setup.eps}\n \\caption{General layout of the T2K experiment. The basic elements are \n the neutrino beam line, muon monitor, near neutrino detector at 280 meters \n from the pion production target, and the far neutrino detector \n SuperKamiokande. A possible future 2 km near detector is also shown.}\n\\label{fig:t2k_setup}\n\\end{figure}\n The first phase of \n T2K has two main goals: a sensitive \nmeasurement of ${\\rm\\theta}_{13}$ and a more accurate determination of the \nparameters ${\\rm sin^22\\theta}_{23}$ and $\\Delta m^2_{23}$ than any previous\nexperiment. \n\nThe probability of the $\\nu_{\\mu}$ transition to $\\nu_e$ is approximately \ngiven by\n\\begin{equation}\nP(\\nu_{\\mu}\\to \\nu_e) \\approx 4{\\rm cos}^2\\theta_{13}{\\rm sin}^2\\theta_{13}\n{\\rm sin}^2\\theta_{23}{\\rm sin}^2\\Bigl\n(\\frac{1.27\\Delta m^2_{13}({\\rm eV}^2)L({\\rm km})}{E_{\\nu}({\\rm GeV})}\\Bigr),\n\\label{eq:p(numu-nue)}\n\\end{equation}\nwhere $L$ is the $\\nu$ flight distance, and $E_{\\nu}$ is the neutrino energy.\nIt follows from this expression that the maximum sensitivity to the\n$\\nu_{\\mu}\\to \\nu_e$ transition is expected around the oscillation maximum \nfor $\\Delta m^2_{13} \\simeq \\Delta m^2_{23} = \\Delta m^2_{atm} \\simeq 2.5\\times\n10^{-3}~{\\rm eV}^2$. 
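The tuning that follows from the appearance probability above can be cross-checked numerically. The sketch below locates the first oscillation maximum for $L = 295$ km; the mixing values $\sin^2 2\theta_{13} = 0.1$ and $\sin^2\theta_{23} = 0.5$ are illustrative assumptions for the sketch, not T2K measurements.

```python
import math

def p_mu_e(e_nu, l_km=295.0, dm2=2.5e-3, sin2_2th13=0.1, sin2_th23=0.5):
    """Leading-order nu_mu -> nu_e appearance probability.

    Uses 4 cos^2(th13) sin^2(th13) = sin^2(2 th13); the mixing values
    are illustrative assumptions, not measured inputs.
    """
    return sin2_2th13 * sin2_th23 * math.sin(1.27 * dm2 * l_km / e_nu) ** 2

# First oscillation maximum: 1.27 * dm2 * L / E = pi/2
e_peak = 1.27 * 2.5e-3 * 295.0 / (math.pi / 2)
print(round(e_peak, 2))  # -> 0.6 (GeV), consistent with tuning the peak below 1 GeV
```

At this energy the sine factor is unity, so the probability reduces to $\sin^2 2\theta_{13}\,\sin^2\theta_{23}$.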
Based on this value, the neutrino peak energy in T2K should be tuned \nto $\\leq 1$ GeV to\nmaximize the sensitivity to muon neutrino \noscillations for a baseline of 295 km. \n\nT2K will adopt an \noff--axis beam configuration in which the neutrino energy is almost independent of \nthe pion energy and a quasi-monochromatic neutrino spectrum can be achieved. \nThe neutrino beam is produced \nfrom pion decays in a 94 m decay tunnel filled with 1 atm He gas at an\nangle of \n$2.5^{\\circ}$ with respect to the proton beam axis, providing a narrow neutrino \nspectrum with mean neutrino energies from 0.7 to 0.9 GeV, as shown in \nFig.~\\ref{fig:t2k_nu_beam}. \n\\begin{figure}[h]\n\\centering\\includegraphics[width=10cm,angle=0]{nu_beam.eps}\n\\caption{Neutrino energy spectra at $0^{\\circ}$ and different \noff-axis angles.}\n\\label{fig:t2k_nu_beam}\n\\end{figure}\nThe high energy tail is considerably reduced at $2.5^{\\circ}$ in \ncomparison with\nthe standard on-axis wide-band beam. This minimizes the neutral \ncurrent $\\pi^0$ background in the $\\nu_e$ appearance search.\nMoreover, the intrinsic contamination of $\\nu_e$'s from muon and\nkaon decays is expected to be about 0.4\\% around the peak energy.\n\nTo achieve the T2K goals, precise measurements \nof the neutrino flux, spectrum and \ninteraction cross sections are needed. For these purposes, the near detector \ncomplex (ND280)~\\cite{nd280} will \nbe built at a distance of 280 m from the target along the line defined by the \naverage pion decay point and SK (see Fig.~\\ref{fig:t2k_setup}). \nThis complex has two detectors: an on-axis detector (neutrino beam\nmonitor) and an off--axis detector. The physics \nrequirements for ND280 \ncan be briefly summarized as \nfollows: the absolute energy scale of the neutrino spectrum must\nbe calibrated with 2\\% precision, and the neutrino flux monitored \nwith \nbetter than 5\\% accuracy. 
The momentum resolution of muons from the \ncharged current quasi-elastic interactions~(CCQE) should be less than 10\\%, and the \nthreshold for \ndetection of recoil protons is required to be about 200~MeV\/c. The \n$\\nu_e$ fraction should be measured with an uncertainty of $\\leq 10$\\%.\nA measurement of the neutrino beam direction, with a precision much better \nthan 1 mrad, is required from the on-axis detector. \n\n\\section{Near neutrino detectors}\n\\label{sec:nd280}\n\\subsection{On-axis neutrino monitor}\nThe role of the on-axis neutrino detector~(INGRID) is to monitor the neutrino \nbeam direction and profile on a day to day basis. It consists of $7 + 7$ \nidentical modules, arranged to form a cross-configuration, and 2 diagonal modules, \n as shown in Fig.~\\ref{fig:ingrid}.\n \\begin{figure}[h]\n\\centering\\includegraphics[width=13cm,angle=0]{ingrid_combined.eps} \n\\caption{(a) schematic view of INGRID; (b) segmented iron-scintillator \nsandwich module; (c) a charged current neutrino interaction with muon \ntrack.}\n\\label{fig:ingrid}\n\\end{figure}\nINGRID samples the neutrino beam profile with an area of $9\\times 9$ m$^2$.\nEach iron-scintillator sandwich module covers an area of $125 \\times 125$ cm$^2$ \nand weighs 10 tons. The module consists of ten 6.5 cm-thick \niron layers and 11\nscintillator tracking planes, and is surrounded by four veto counters. Each\ntracking plane has one vertical and one horizontal scintillator layer \ncomposed of $5\\times 1\\times\n121$ cm$^3$ scintillator slabs.\nEach scintillator has a central hole to insert a wavelength shifting~(WLS) \nfiber for light \ncollection and routing to a photosensor. \n The total mass of INGRID is $\\sim 160$ tons.\nA typical event rate detected by the center module every spill is expected \nto be about 0.5 per ton, i.e. the whole INGRID will detect more than $10^5$\nneutrino events\/day. 
In order to minimize the systematic error arising from the uncertainty \nin the off-axis angle, the neutrino beam direction will be monitored by \nINGRID with a precision of $\\ll 1$ mrad each day at the design intensity.\n \n\\subsection{Off-axis near detector}\nThe off-axis detector (Fig.~\\ref{fig:nd280})\n\\begin{figure}[h]\n\\centering\\includegraphics[width=10cm,angle=0]{nd2.eps} \n\\caption{The cutaway view of the T2K near detector.}\n\\label{fig:nd280}\n\\end{figure}\n includes the UA1 \nmagnet operating with a magnetic field of 0.2 T, \na Pi-Zero detector (POD), a tracking detector which includes time projection \nchambers (TPC's) and fine grained scintillator detectors (FGD's), an \nelectromagnetic calorimeter\n(Ecal), and a side muon range detector~(SMRD). \n\n \\subsubsection{Photosensors}\n Wavelength \nshifting fibers will be widely used for the readout of all \nscintillator detectors, which are the main active element of the ND280 \ndetector. \n The magnetic field environment and the limited space inside the \nUA1 magnet place serious constraints on the use of standard photodetectors \nsuch as traditional multi-anode photomultipliers. Since the ND280 has about \n60k individual readout channels, the cost of \nphotosensors is also very important. \n After studying several candidates, \na multi-pixel avalanche photo-diode operating in the \nlimited Geiger multiplication mode was selected as the baseline detector. These \n novel devices \nare compact, well matched to the spectral emission of WLS fibers, and insensitive \nto magnetic fields~\\cite{gm1,gm2,andreev}. The required parameters for these\nphotosensors from all ND280 subdetectors can be summarized as follows: \nan active area diameter of $\\sim 1$ mm, a photon detection efficiency for green \nlight\nof $\\geq 20$\\%, a number of pixels $> 400$, and a dark rate at operating \nconditions of $\\leq 1$ \nMHz. 
The gain should be $(0.5-1.0)\\times 10^6$, \nthe cross-talk $\\sim10$\\%, and \nthe pulse width should be less than 100 ns to match the spill structure of the \nJPARC proton beam. For \ncalibration and control purposes, it is very desirable to obtain well separated \nsingle photoelectron peaks in amplitude spectra at the operating temperature.\n\nAfter an R\\&D study of 3 years, a Hamamatsu MPPC was chosen as the \nphotosensor for ND280. The description of this device\nand its parameters can be found in Ref.~\\cite{hamamatsu}. \nThe final T2K version is a 667 pixel MPPC with a sensitive area \nof $1.3\\times 1.3$ mm$^2$ (Fig.~\\ref{fig:mppc}). \n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=9cm,angle=0]{mppc_new_a.eps}\n\\includegraphics[width=6cm,angle=0]{mppc_new_b.eps}\n\\end{center}\n\\caption{(a) The photograph of a 667 pixel MPPC: a magnified face view of \nan MPPC with a sensitive area of $1.3\\times 1.3$ mm$^2$ (left),\nthe package of this MPPC (right); (b) ADC spectrum from an LED \nsignal. Clearly separated peaks at equal intervals correspond to 0, 1, 2, \n3... fired pixels.}\n\\label{fig:mppc}\n\\end{figure}\n These devices\ndemonstrated good performance at room temperature: a low cross-talk value of about 10\\%, \na photon detection \nefficiency for green light of $\\geq 25$\\%, a low dark rate of $\\sim 0.3$ \nMHz at the operating voltage, a high gain of about \n $0.7\\times 10^6$, and a pulse width of less than 50 ns. \n \n\\subsubsection{POD}\nThe POD is optimized for measurement of the inclusive $\\pi^0$ production by \n$\\nu_{\\mu}$ interactions on oxygen and will be installed at the upstream end of the \nmagnet. 
The cross section measurements on an oxygen target will be achieved by \nusing the following POD geometry: the upstream and downstream regions are\nconfigured as electromagnetic calorimeters providing energy containment and\nactive veto, and the central region of the POD provides the fiducial mass for \nthe $\\pi^0$ measurements (Fig.~\\ref{fig:pod}(a)).\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=6.2cm,angle=0]{pod_a.eps}\n\\includegraphics[width=7.5cm,angle=0]{pod_b_c.eps}\n\\end{center}\n\\caption{(a) POD schematic view: the central region is constructed of\nalternating water target and scintillator tracking layers; (b) one layer of \nthe target region; (c) a neutral current\n$\\pi^0$ event in POD. The horizontal and vertical axes are in centimeters. }\n\\label{fig:pod}\n\\end{figure}\n The schematic view of a plane of the POD target is shown in \n Fig.~\\ref{fig:pod}(b).\nIt consists of alternating water target layers about 3 cm thick and \ntracking layers composed of X-Y extruded triangular shaped scintillator bars, \n17 mm in height and 32.5 mm at the base, with a central hole for a WLS fiber. \n A thin sheet of brass\n($\\sim 1.6$ mm-thick) is sandwiched in the 26 X-Y tracking layers of the target\nregion. The upstream and downstream regions have 7 X-Y scintillator layers \nwith 4 mm-thick lead radiators between them.\n The POD has a total mass \nof approximately 17 tons with a fiducial mass of about 3 tons of water and \n8 tons of other materials. \n The tests of the light yield of the POD\nscintillators with MPPC's connected to one end of a Y11 WLS fiber \n showed good results. The light yields for minimum \nionizing particles~(MIP) are 19.8 p.e.\/MeV and 8.7 p.e.\/MeV at 25 and 205 cm \nfrom the MPPC, respectively. 
With the mirrored far end of the WLS fiber, \nthe light yield \nof 15.7 p.e.\/MeV is much greater than the 5 p.e.\/MeV required for \nefficient reconstruction of electromagnetic showers.\n\nOxygen cross\nsection measurements will be made by comparing the interaction rate of events \ncollected with water in the target region versus similar running \nperiods with water removed from the target region. A typical neutral current \nevent with a $\\pi^0$ is shown in Fig.~\\ref{fig:pod}(c) in which a neutral pion is\naccompanied by a neutron.\nThe energy resolution for events fully contained in the active\ntarget is expected to be about 10\\% + 3.5\\%\/$\\sqrt{{\\rm GeV}}$ and \nthe reconstruction efficiency of a $\\pi^0$ with a momentum $\\geq 200$ MeV\/c\nis expected to be approximately 33\\%.\n\n\\subsubsection{ND280 tracker}\nThe ND280 tracker consists of three TPC's and two FGD's, as shown in \nFig.~~\\ref{fig:nd280}. Its main function is \nto measure the muon and electron neutrino beam fluxes and energy spectra, \nplus\nvarious \ncharged current cross sections. The tracker is designed to accurately \nmeasure CCQE events, the main \nprocess at the T2K peak neutrino energy, \n\\begin{equation}\n\\nu_{\\mu} + n \\to \\mu^- + p.\n\\label{eq:ccqe}\n\\end{equation}\n In order to measure this process the reconstruction of both\n proton and muon is useful. The proton will be primarily identified and measured \n by the FGD while the muon will be measured by the TPC. The initial neutrino energy \n will be reconstructed from the muon momentum. The measurements of CCQE events \n will be used for flux normalization in the oscillation analysis. \\\\\n \n{\\it FGD.} The ND280 will \ncontain two FGD's, each \nwith dimensions $1.84\\times 1.84\\times 0.3$ m$^3$ resulting in a total target \nmass of about 2.0 tons. \nThe first FGD will be an active scintillator detector, similar to the \nSciBar detector~\\cite{scibar} of the K2K experiment. 
\nIt consists of thirty scintillator layers of 192 \n$0.96\\times 0.96 \\times 184$ cm$^3$ extruded \nscintillator bars which are arranged \nin alternating vertical and horizontal layers perpendicular to the beam\ndirection. The second FGD is composed of seven X-Y \nsandwiches of scintillator \nlayers alternating with six 3-cm thick layers of water. The weight of the \nscintillator is 0.56 ton and of water, 0.44 ton. The readout of each\nscintillator bar is provided by an MPPC connected to one end of a WLS fiber \ninserted in \na central hole. \nA beam test of scintillator bars performed at TRIUMF showed that the light yield\nfor 120 MeV\/c pions, muons, electrons will be more than 10 p.e. at the far end \nfrom a photosensor without mirroring the far end of the fiber. The \nmirroring increases the light yield by $\\geq 80$\\%,\nguaranteeing a detection efficiency of more \nthan 99\\% for minimum ionizing particles. The FGD allows a clear\nseparation between protons and pions using dE\/dx information and tagging \nMichel electrons from the decay of short-ranged pions.\nComparing the interaction rates in both FGD's permits separate measurement \nof neutrino cross sections on carbon and on water.\nAbout $4\\times 10^5$ \nneutrino interactions are expected in both FGD modules for a one year exposure \nwith $10^{21}$ protons on target.\n\n{\\it TPC.} The primary purpose of the TPC is \nto measure the 3-momenta of muons produced in CCQE interactions in the FGD. \n The TPC will use a \nlow diffusion gas to obtain the \nmomentum resolution of $\\leq 10$\\% for particles below 1 GeV\/c. A 700 $\\mu$m \nspace point resolution per ``row'' of pads is required to achieve this momentum resolution. \nThe absolute energy scale will be checked at the 2\\% level using the invariant \nmass of neutral kaons produced in DIS neutrino interactions and decaying in the\nTPC volume. 
A good dE\/dx resolution of $< 10$\\% \nis expected for 72 cm long tracks, which will provide better than 5$\\sigma$ \nseparation between muon and electron tracks \nin the momentum range 0.3-1.0 GeV\/c. \n\nThe three TPC modules are \nrectangular boxes with outer \ndimensions of approximately $2.5\\times 2.5$ m$^2$ in the plane perpendicular \nto the neutrino beam direction, and 0.9 m along the beam direction. A simplified\ndrawing of the TPC is shown in Fig.~\\ref{fig:tpc}.\n\\begin{figure}[h]\n\\centering\\includegraphics[width=8cm,angle=0]{tpc3d.eps} \n\\caption{The layout of TPC showing the inner and outer boxes and the \ncentral cathode.}\n\\label{fig:tpc}\n\\end{figure}\nThe TPC modules are operated at an electric field of 200 V\/cm. \nThe central cathode, which divides the drift space into two halves to limit the \nmaximum drift distance to $\\sim 1$ m, will be at a potential of -25 kV. The\nbaseline gas choice is Ar(95\\%)--CF$_4$(3\\%)--iC$_4$H$_{10}$(2\\%).\n\n \nThe `bulk' Micromegas detectors will be used to instrument the TPC readout\nplane. The active surface area of the \nMicromegas \nmodule is $359\\times 342$ mm$^2$ with 1726 active pads of $9.7\\times 6.9$ \nmm$^2$. 12 Micromegas modules will be used for each\nreadout plane of the TPC.\n In total, the 3 TPC's will consist of 72 modules with $\\sim 124000$ readout \nchannels. \n\nThe first prototypes of Micromegas detectors have been tested with cosmic \nmuons in the former HARP \nfield cage setup with a magnetic field~\\cite{micromegas} and demonstrated a good\nmomentum resolution of 8.3\\% at 1 GeV\/c. A dE\/dx resolution of \nabout 12\\% for track lengths of about 40 cm and a good \nuniformity of $\\sigma = 3.4$\\% for a gain of $\\sim 1000$ have been obtained. 
\n\nA typical CCQE event for neutrino interaction in FGD1 is shown in \nFig.~\\ref{fig:ccqe_track}.\n\\begin{figure}[h]\n\\centering\\includegraphics[width=9cm,angle=0]{ccqe_track_new.eps} \n\\caption{Typical CCQE event in the tracker.}\n\\label{fig:ccqe_track}\n\\end{figure}\nThe reconstruction efficiency of CCQE \nevents produced in the FGD with a track in the TPC is estimated to be about \n50\\% at a $E_{\\nu} \\sim 0.7$ GeV.\n\n\\subsubsection{Electromagnetic calorimeter}\nThe Electromagnetic calorimeter (Ecal) shown in Fig.~\\ref{fig:ecal} \n\\begin{figure}[h]\n\\centering\\includegraphics[width=7cm,angle=0]{ecal.eps} \n\\caption{Basic structure of the electromagnetic calorimeter.}\n\\label{fig:ecal}\n\\end{figure}\nconsists of two sections. \nOne surrounds the POD (POD Ecal) for detectioning photons and muons escaping the \nPOD, and the second section, surrounding the FGD's and TPC's (TEcal), detects \nparticles leaving the tracking volume. \n\nTEcal modules are made of 4 cm-wide, 1 cm-thick plastic\nscintillator bars arranged in 32 layers and separated by 31 layers of 1.75\nmm-thick lead sheets. The orientation of the bars alternates between layers \nso that the bars in any layer are perpendicular to the bars in the two adjacent \nones. This bar width allows good $\\pi^0$ reconstruction efficiency and provides \nthe spatial resolution required for reconstruction of the direction of detected\nphotons. The active length of the TEcal along the neutrino beam is 384 cm and \nthe total depth is 50 cm corresponding to 10.5$X_0$. TEcal has two side\nmodules, one on each side of the UA1 iron yokes, one top and one \nbottom modules, each is split into two (left and right) so that they can move \nwith the magnet yoke when the magnet opens.\nAll scintillator \nbars have a hole in the center with a 1 mm WLS fiber inserted in it. 
All \nlong bars running along the neutrino beam are read out by an MPPC at each end\n(double-end readout), while all shorter bars (perpendicular to the neutrino\nbeam) are mirrored at one end and read out by an MPPC at the other (single-end\nreadout). The downstream Ecal is a single module with the same granularity \nas the TEcal modules with an effective depth of 11$X_0$. It is located at the \ndownstream end of the magnet and covers \nan active surface area of $2\\times 2$ m$^2$. All bars have double-ended readout. \nThe total weight of the TEcal and downstream Ecal is 28.3 tons. \n\n The POD Ecal has modules with coarser segmentation and less total $X_0$ and \n does not provide the good energy and spatial resolution required for $\\pi^0$ \n reconstruction. These modules consist of 6 scintillator layers separated by \n 5 layers of 5 mm-thick lead converters resulting in an effective depth of \n $4.5X_0$. \n \n The energy resolution of TEcal, dominated by sampling fluctuations, is\n estimated to be about 7.5\\%\/$\\sqrt{E(\\rm GeV)}$ for energies up to \n 5 GeV. TEcal is expected to provide good electron\/pion separation. An\n efficiency of 90\\% for electrons is expected with 95\\% pion rejection. \n \n\\subsubsection{Side muon range detector}\nMuons which escape at large angles with respect to the neutrino \nbeam cannot be measured by the TPC's. However, they will intersect the iron yoke, \nand a muon's momentum can therefore be obtained from its range by \ninstrumenting the iron at various depths. About 40\\% of muons from \nCCQE reactions and about 15\\% of muons from \ncharged current \nnon-quasi-elastic reactions are expected to enter the SMRD. In addition, \nthe SMRD will be used to veto events from \nneutrino interactions in the magnet and in the walls of the ND280 pit and will \nprovide a cosmic trigger \nfor calibration of the inner detectors. 
\n\nThe UA1 iron yoke \nconsists of 16 C-shaped elements made of sixteen 5 cm thick iron plates, with \n1.7 cm air gap between the plates and is segmented in 12 azimuthal sections. \nThe active component of the SMRD will consist of 0.7 cm thick scintillator slabs\nsandwiched between the iron plates of the magnet yokes. \nDetails of the extrusion of the scintillator slabs and the method of \netching the plastic surface by a chemical agent can be found \nin Ref.~\\cite{extrusion}. For the readout, we employ a \nsingle WLS fiber embedded in a serpentine shaped (S--shape)\ngroove, as shown in Fig.~\\ref{fig:smrd_counter}.\n\\begin{figure}[h]\n\\centering\\includegraphics[width=12cm,angle=0]{smrd_counter_1.eps} \n\\caption{The SMRD counter with embedded Kuraray Y11 WLS fiber.}\n\\label{fig:smrd_counter}\n\\end{figure}\n Such a \nshape allows the fiber to collect the scintillation light over the whole \nsurface of a scintillator slab~\\cite{smrd_nim}. Two MPPC's are coupled to \nboth ends of a \nWLS fiber glued into the S--shape \ngroove. The detector performance has been tested using cosmic muons. Typical \nADC spectra for \nMIP's obtained with $1.0\\times 1.0$ mm$^2$ MPPC's are \nshown in Fig.~\\ref{fig:adc_smrd_spectra}. \n\\begin{figure}[h]\n\\centering\\includegraphics[width=11cm,angle=0]{good_spectra.ps} \n\\caption{The ADC spectra of the SMRD counter for minimum ionizing \nparticles measured at 22$^{\\circ}$C. 
The light yield (sum of both ends) is \nequal to 58 p.e.}\n\\label{fig:adc_smrd_spectra}\n\\end{figure}\nTests of the SMRD counters yielded a detection efficiency \nof \ngreater than 99\\%, a time resolution of about 1 ns, and a spatial resolution \nalong the slab of about 8 cm for minimum ionizing particles.\n\\section{Conclusion}\nThe T2K experiment has a rich physics potential and provides an excellent \nopportunity to greatly extend our understanding of neutrino properties.\nTo achieve the physics goals of T2K, the complex of near neutrino detectors \nneeded for measurement of the unoscillated neutrino beam properties \nis under construction. The on-axis detector will be ready to accept the first \nneutrino beam in April 2009, and the installation and commissioning of the whole \noff-axis detector will be finished during 2009. The T2K experiment is \nexpected to start data taking in 2009. \n \n{\\bf Acknowledgments.} I thank M.~Gonin, A.~Grant, D.~Karlen, T.~Nakaya, \nV.~Paolone and M.~Yokoyama for providing material for the talk and useful \ncomments on the manuscript. This work was supported in part by \nthe ``Neutrino Physics'' Program \n of the Russian Academy of Sciences and by the RFBR (Russia)\/JSPS (Japan) \n grant \\#08-02-91206.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzziazr b/data_all_eng_slimpj/shuffled/split2/finalzziazr new file mode 100644 index 0000000000000000000000000000000000000000..491bb18e6ca3d1d4c1a402c7ce79d0554d8b6682 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzziazr @@ -0,0 +1,5 @@ +{"text":"\\section{Unconventional d.c. 
and optical conductivities}\n\\label{sec:intro}\n\nIt is an observed fact that several chemically diverse families of unconventional materials exhibit a low temperature resistivity that is linear in temperature, when tuned to what appear to be quantum critical points.\nIllustrative recent experimental data can be found for cuprates, pnictides, heavy fermions, ruthenates and organic superconductors -- see \\cite{Sachdev:2011cs} for an overview and references. While it is likely that there may be more than one explanation for this behavior, the universality of the temperature dependence of the resistivity seen in the phase diagrams of these materials strongly motivates the search for robust physical mechanisms that can reproduce the observations.\n\nIn this paper we show, in the context of the holographic correspondence, that the structure of superradiant instabilities of extremal black hole horizons leads universally to a linear in temperature resistivity when these systems are tuned to the quantum critical point mediating the instability. This mechanism will be described in some generality. We go on to illustrate the process in a particular model that captures additional features of several of the unconventional materials of interest: the resistivity will be due to scattering off modes that are becoming unstable, and which are supported at nonzero momentum. One can take these modes to model the spin density and charge density wave instabilities appearing in the phase diagrams of \\cite{Sachdev:2011cs}.\n\nThat the resistivity is linear rather than, say, quadratic in temperature is only half of the mystery. Unlike conventional metals, many of the materials of interest fail to exhibit resistivity saturation as the temperature is increased. The resistivity increases unabated with temperature through the Mott-Ioffe-Regel limit. The materials are consequently known as bad metals \\cite{Emery:1995zz}. 
This fact becomes particularly confusing when considered in conjunction with the dependence of the in-plane (frequency dependent) optical conductivity on temperature. In conventional metals, the optical conductivity exhibits a Drude peak that broadens as the temperature is increased. Resistivity saturation occurs when the spectral weight is smeared out over the whole of the bandwidth and the Drude peak effectively disappears \\cite{hussey}. In several classes of unconventional materials that do not exhibit saturation, the Drude peak is accompanied by an extended tail that falls off slowly at large frequencies up to the bandwidth scale \\cite{basov}. At the would-be saturation temperature, the Drude peak is observed to melt into this broader feature \\cite{hussey, basov}. The d.c.$\\,$resistivity, however, continues to increase linearly in temperature with the same slope, irrespective of whether or not there is an associated Drude peak! The conundrum is served: Does the linear in temperature resistivity originate in Drude-like (i.e. momentum-relaxing) scattering or not?\n\nThe answer to this question is again likely to be material dependent. The picture of resistivity saturation outlined above may not be correct. Nonetheless, the characterization of bad metals as metals that do not exhibit a zero frequency collective mode encourages the notion that the resistivity might be controlled by quantum critical physics, presumably responsible for the extended tail in the spectral density, rather than be sensitive to the mechanism of momentum relaxation. Such a picture is likely to have trouble with the fact that at lower temperatures a very sharp Drude peak is observed on top of the broader feature, see e.g. \\cite{boris} for measurements in optimally doped YBCO. 
The mechanism of linear resistivity presented in this paper will be quantum critical in nature, and we will assume that the Drude peak has been swamped by the critical degrees of freedom.\n\nFigure \\ref{fig:optical} below illustrates the above discussion.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width = \\textwidth]{OpticalConductivity.pdf}\\caption{Theorist's schematic view of the optical conductivity in bad metals at lower temperatures (left) and higher temperatures (right). As the temperature is raised, spectral weight is shifted from the Drude peak into the broad tail and to interband energy scales. The linear in temperature d.c.$\\,$resistivity does not notice the melting of the Drude peak.\n\\label{fig:optical}}\n\\end{center}\n\\end{figure}\n\n\\section{Extremal horizons and BKT transitions}\n\\label{sec:bkt}\n\nIn quantum theories with a holographically dual description, the strongly interacting physics is described by classical gravitational dynamics in a dual spacetime with one extra spatial dimension. The extra dimension geometrically implements the renormalization group flow. In particular, the far interior of the spacetime describes the far IR, or lowest energy scales, of the dual quantum system \\cite{Hartnoll:2011fn}. In many holographic models placed at a finite charge density, $\\langle J^t \\rangle \\neq 0$, the far IR geometry is characterized by an emergent scaling symmetry that indicates a power law specific heat at low temperatures, and hence the presence of gapless degrees of freedom \\cite{Hartnoll:2011fn}. 
Working in two spatial dimensions for concreteness, the general scale invariant form of the metric is \\cite{Kachru:2008yh}\n\\begin{equation}\\label{eq:scalingIR}\nds^2_{\\text{IR}} = L^2_{\\text{IR}} \\left(\\frac{- dt^2 + d\\r^2}{\\r^2} + \\frac{dx^2 + dy^2}{\\r^{2\/z}} \\right) \\,.\n\\end{equation}\nThis spacetime admits the scaling symmetry $\\{t,\\r\\} \\to \\lambda \\{t,\\r\\}, \\vec x \\to \\lambda^{1\/z} \\vec x$. Therefore $z$ is the dynamical critical exponent. For simplicity we will not discuss here the more general class of metrics describing hyperscaling violation \\cite{Charmousis:2010zz, Huijse:2011ef}, although our considerations apply to those spacetimes also. Holographic IR scaling spacetimes typically arise over a finite range of parameter space and therefore describe critical phases.\n\nIn a scaling geometry of the form (\\ref{eq:scalingIR}), all the operators ${\\mathcal{O}}$ in the low energy theory have an energy scaling dimension $\\Delta$. This dimension determines the spectral weight (imaginary part of the retarded Green's function) in the regime $\\w \\ll T \\ll \\mu$ to be\n\\begin{equation}\\label{eq:im}\n\\lim_{\\w \\to 0}\\frac{1}{\\w} \\text{Im} \\, G_{{\\mathcal{O}} {\\mathcal{O}}}^R(\\w,T) \\sim T^{2 \\Delta - 2 - 2\/z}\\,.\n\\end{equation}\nHere the chemical potential $\\mu$ has been used to indicate the UV scale. Physics well below that scale is captured by the low energy spacetime (\\ref{eq:scalingIR}) and is amenable to dimensional analysis. We also used the fact that the spectral weight is odd in frequency for bosonic operators. In general, the above spectral weight is evaluated at zero momentum, $k=0$. A slight generalization is possible in the case of $z = \\infty$. In this case, space does not scale and so the momentum $k$ is dimensionless and can be nonzero. 
The scaling dimension becomes\nmomentum dependent: $\\Delta(k)$.\n\nAccording to the basic holographic dictionary \\cite{Witten:1998qj, Gubser:1998bc}, each operator ${\\mathcal{O}}$ is dual to a field $\\phi$ in the IR spacetime. For simplicity, to start with, consider the case in which $\\phi$ is a scalar field with mass $m$ that is not coupled to other fields at a linearized level. By solving the bulk wave equation in the geometry (\\ref{eq:scalingIR}) and reading off the scaling behavior from the solution as $\\rho \\to 0$, one\nimmediately finds\n\\begin{equation}\\label{eq:delta}\n\\Delta = \\frac{2 + z}{2 z} + \\nu \\equiv \\frac{2 + z}{2 z} + \\sqrt{L^2_\\text{IR} m^2 + \\frac{(2+z)^2}{4 z^2}} \\,.\n\\end{equation}\nThe parameter $\\nu$ introduced here will appear repeatedly below.\nIn the case of $z = \\infty$, the mass $m^2$ can be momentum dependent. In this expression we see that if the mass squared satisfies the generalized Breitenlohner-Freedman bound\n\\begin{equation}\\label{eq:bf}\nL^2_\\text{IR} m^2 > - \\frac{(2+z)^2}{4 z^2} \\,,\n\\end{equation}\nthen the scaling dimension $\\Delta$ is real.\n\nThe quantity $L^2_\\text{IR} m^2$ can be tuned by varying UV parameters. In particular, we can imagine tuning these parameters such that the mass squared drops below the bound (\\ref{eq:bf}), and the scaling dimension becomes complex. It is well understood by now that this triggers an interesting quantum phase transition in which the operator ${\\mathcal{O}}$ condenses once the mass squared of $\\phi$ becomes too negative. 
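A quick way to see where the dimension formula turns complex is to evaluate it directly. The sketch below is an illustrative numerical check of the generalized Breitenlohner-Freedman bound, not part of the holographic model itself:

```python
import cmath

def scaling_dimension(m2_L2, z):
    """Delta = (2+z)/(2z) + sqrt(m^2 L_IR^2 + (2+z)^2/(4 z^2)).

    The square root becomes imaginary -- and Delta complex -- once
    the generalized BF bound m^2 L_IR^2 > -(2+z)^2/(4 z^2) is violated.
    """
    return (2 + z) / (2 * z) + cmath.sqrt(m2_L2 + (2 + z) ** 2 / (4 * z ** 2))

# For z = 2 the bound is m^2 L_IR^2 > -1
print(scaling_dimension(-0.5, 2).imag == 0.0)  # above the bound: real Delta
print(scaling_dimension(-2.0, 2).imag != 0.0)  # below the bound: complex Delta
```

For a massless scalar at $z = 1$ this reproduces the familiar relativistic value $\Delta = 3$ in two spatial dimensions.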
We will return to the nature of the transition very shortly, but first we notice that precisely at the critical point, where the square root in (\\ref{eq:delta}) vanishes and $\\nu = 0$, then from (\\ref{eq:im}) and (\\ref{eq:delta}) we have\n\\begin{equation}\\label{eq:critical}\n\\lim_{\\w \\to 0}\\frac{1}{\\w} \\text{Im} \\, G_{{\\mathcal{O}} {\\mathcal{O}}}^R(\\w) \\sim \\frac{1}{T} \\,.\n\\end{equation}\nThis expression, which has been previously emphasized in \\cite{Iqbal:2011aj, Vegh:2011aa}, will be at the heart of the linear resistivity above a quantum critical point that we will discuss in the following section. Previous interest in this expression is due to the fact that it resembles the spectral weight of the bosonic mode underlying the marginal Fermi liquid phenomenology of the cuprates \\cite{Varma:1989zz}. It is not quite the same, however, because in general (\\ref{eq:critical}) only holds at $k=0$. Even in the case of $z = \\infty$, because the dimension $\\Delta$ is $k$ dependent -- this is why these systems were termed semi-locally critical in \\cite{Iqbal:2011in}, rather than fully locally critical -- the full spectral density will have a nontrivial $k$ dependence, and (\\ref{eq:critical}) will only hold for one value of $k$ at a time. The absence of a $k$ dependence is crucial for the marginal Fermi liquid, as the mode is coupled to a Fermi surface, which has spectral weight at a nonzero $k=k_F$ that is set by UV dynamics. In contrast, our framework will operate entirely at the level of currents, which control the holographic charge dynamics at leading classical order in the bulk, and will not involve explicit discussion of Fermi surfaces.\n\nThe association of complex IR scaling dimensions to instabilities was first made in the context of holographic superconductors \\cite{Hartnoll:2008kx, Denef:2009tp, Gubser:2008pf}. 
In cases where the operator ${\\mathcal{O}}$ carries a charge, the instability can be understood as a cousin of the superradiant instabilities of charged black holes, driven by pair production of quanta near the horizon and leading to a discharging of the black hole \\cite{Hartnoll:2011fn}.\nIt was later realized that the quantum phase transition mediating this instability was of Berezinskii-Kosterlitz-Thouless (BKT) type. For instance, when the mass squared is just below the bound (\\ref{eq:bf}), the temperature below which the instability occurs scales like\n\\begin{equation}\\label{eq:tc}\nT_c \\sim \\mu \\, e^{- \\pi\/\\sqrt{-\\nu^2}} \\,.\n\\end{equation}\nSimilar exponential hierarchies control quantities such as the condensate just below the critical mass squared.\nSuch zero temperature BKT transitions were first discussed in \\cite{Kaplan:2009kr}.\nUnlike the conventional BKT transition, they can occur in any dimension and are not tied to an interpretation of vortex unbinding, but rather describe the merger of a UV and IR fixed point. These transitions were subsequently noted to be rather generic in holographic settings \\cite{Jensen:2010ga, Iqbal:2010eh} and to admit a `semi-holographic' description \\cite{Jensen:2011af, Iqbal:2011aj}, in which the only role of holography is to provide a critical IR sector in which the transition occurs \\cite{Faulkner:2010tq}. Our discussion in the following section will be essentially semi-holographic in nature, being independent of most details of the UV region of the bulk geometry. In \\S \\ref{sec:broader} of the discussion section we will take a further step back from holography and comment on the validity of the result we have just described for general BKT transitions.\n\nThere are two key points we would like the reader to take from the above. 
Firstly, that given a `critical phase' with an IR scaling symmetry described by the metric (\\ref{eq:scalingIR}) one can induce a quantum phase transition by tuning the dimension of an operator to become complex. Secondly, at the corresponding quantum critical point, the spectral density of this operator has the temperature dependence (\\ref{eq:critical}). This last statement holds at $k=0$ for finite $z$, and at some specific $k_\\star$ when $z = \\infty$.\n\n\\section{Mechanism of linear resistivity}\n\\label{sec:mechanism}\n\nThe d.c.$\\,$conductivity is given by\n\\begin{equation}\\label{eq:sigma}\n\\sigma = \\lim_{\\w \\to 0} \\frac{1}{\\w} \\, \\text{Im} \\, G^R_{J^x J^x}(\\w,T) \\,.\n\\end{equation}\nAt a nonzero charge density $\\langle J^t \\rangle$ and if momentum is conserved, this quantity is problematic because, in addition to the contribution (\\ref{eq:sigma}), there is a delta function in the dissipative conductivity at $\\w = 0$. In order to relax momentum over experimental timescales, the charge carriers must either be parametrically diluted or must interact with parametrically heavier degrees of freedom \\cite{Hartnoll:2012rj}. The result is the broadening of the delta function into a Drude peak. The contribution of the Drude peak to the d.c. conductivity is intimately connected to the mechanism by which momentum is relaxed. For instance, umklapp scattering in a Fermi liquid gives rise to the celebrated $T^2$ dependence of the resistivity. Instead, we would like the d.c.$\\,$conductivity to be dominated by the universal quantum critical dynamics underlying the spectral weight (\\ref{eq:critical}). This can happen if the Drude peak contribution is swamped by the zero frequency limit of an extended tail that arises from scattering off critical modes. 
As we discussed in \\S \\ref{sec:intro} above, this may be the case in bad metallic regimes.\n\nIt is sometimes asserted that the presence of a Drude peak is synonymous with a quasiparticle description of transport. This is not quite correct. The essential requirement for a Drude peak is a hierarchy of time scales, whereby the momentum relaxation timescale is much longer than any other timescale in the system. This is particularly clear in hydrodynamic or memory matrix approaches, e.g. \\cite{Hartnoll:2007ih, Hartnoll:2012rj}. In the presence of such a hierarchy, strongly correlated systems without a quasiparticle description will still exhibit a Drude peak. Conversely, \nthe absence of a Drude peak simply requires that momentum is being dumped by the charge carriers at a rate comparable to all other interactions. In such circumstances, the spectral weight from the delta function is transferred into the critical tail or indeed to interband energy scales. Strong interactions are presumably important here to maintain a metallic character and avoid localization \\cite{Emery:1995zz}. In the concrete model we consider in the following section, we assume that such a process is occurring in our system, without disrupting the momentum-conserving interactions that we consider, so that in effect we can ignore the delta function contribution to the conductivity. More generally, we can imagine that momentum-relaxing processes are already part of the critical system that is undergoing the quantum critical BKT transition.\n\nIf $J^x$ itself were the operator undergoing the BKT transition, then combining (\\ref{eq:critical}) and (\\ref{eq:sigma}) would directly give a linear in temperature resistivity at the critical point. Such a phase transition would correspond to the spontaneous generation of a uniform current and presumably requires spontaneous symmetry breaking of the global $U(1)$ symmetry. We are not aware of holographic, or other, models where this occurs. 
However, it is easy for the IR critical behavior (\\ref{eq:critical}) to get communicated to the current operator via operators that are irrelevant from the IR quantum critical point of view.\n\nThe IR scaling geometry (\\ref{eq:scalingIR}) is generically deformed by irrelevant operators that drive a renormalization group flow up towards the finite density UV fixed point or cutoff. In the IR scaling regime, before these irrelevant operators kick in, we can imagine diagonalizing the equations of motion for a generic perturbation of the background to obtain decoupled gauge invariant fields $\\Phi_I$. Near the boundary ($\\rho \\to 0$) of the IR geometry, these satisfy\n\\begin{equation}\\label{eq:GIR}\n\\Phi^I(\\rho) \\sim c^I \\Big( \\rho^{1+2\/z-\\Delta_I} + \\rho^{\\Delta_I} \\, {\\mathcal G}^R_I(\\w,T) \\Big) \\,.\n\\end{equation}\nHere ${\\mathcal G}^R_I(\\w,T)$ is the IR Green's function, the $c^I$ are constants, and $\\Delta_I$ is the IR dimension of the operator. We have allowed a temperature $T \\ll \\mu$ that only affects the IR spacetime. One of these dimensions $\\Delta_I$ is assumed to undergo a BKT transition of the form described in the previous section.\n\nAway from the IR geometry, the irrelevant operators will typically couple these perturbations. However, in the regime of interest $\\w,T \\ll \\mu$, this coupling does not introduce any additional non-analytic temperature or frequency dependence. In appendix \\ref{eq:matching} we show that a generalization of the usual matching procedure of e.g.\n\\cite{Faulkner:2011tm} implies that to leading order at low frequencies and temperatures\\footnote{To be precise, as explained in the appendix, equation (\\ref{eq:imgsum}) holds when the $\\nu_I$'s are real, that is, on the stable side of the quantum critical point. 
When some of the $\\nu_I$'s are imaginary, the general expression has a complicated logarithmic dependence on $T$ and $\\w$.}\n\\begin{equation}\\label{eq:imgsum}\n\\text{Im} \\, G^R_{J^x J^x}(\\w,T) = \\sum_I d^I \\, \\text{Im} \\, {\\mathcal G}^R_I(\\w,T) \\,.\n\\end{equation}\nHere the real coefficients $d^I$ will generically be nonzero if the irrelevant operators mix the IR mode $\\Phi_I$ with the gauge field $A_x$. This is by no means automatic; the model of the following sections will achieve the required mixing by combining several interesting ingredients. In more familiar condensed matter language, we can think of the direct coupling in (\\ref{eq:imgsum}) between the current and a fluctuating order parameter as a cousin of the Aslamazov-Larkin process \\cite{al}, in this case mediated by irrelevant operators. The key fact is that we couple the fluctuating field directly to the current, rather than going via e.g. a fermionic self-energy. It is now immediate that if one of these IR operators undergoes a BKT transition, then the spectral weight (\\ref{eq:critical}), plugged into the matching formula (\\ref{eq:imgsum}), leads to a linear in temperature resistivity at the critical point\n\\begin{equation}\\label{eq:linearT}\nr = \\frac{1}{\\sigma} \\sim T \\,.\n\\end{equation}\nNote that, according to (\\ref{eq:im}) and (\\ref{eq:delta}), the contribution from a general IR operator ${\\mathcal{O}}_I$ to the conductivity is\n\\begin{equation}\\label{eq:2nu}\n\\sigma \\sim T^{-1 + 2 \\nu_I} \\,.\n\\end{equation}\nRecall that $\\nu$ was defined in (\\ref{eq:delta}). Assuming we are in a stable phase, we see that the contributions of all the other operators with $\\nu_I > 0$ are subleading in the sum (\\ref{eq:imgsum}) compared to the critical operator with $\\nu = 0$. The above discussion is depicted in figure \\ref{fig:phased} below.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=270pt]{PhaseDiagram.pdf}\\caption{Schematic phase diagram.
The BKT quantum phase transition occurs at the boundary of a quantum critical phase when the scaling dimension of an operator becomes complex, signaling a condensation instability. If the unstable operator is coupled to the current via irrelevant operators, then above the critical point the quantum critical contribution to the resistivity is linear in temperature. Close to the critical point, the mode becoming unstable has a strong effect on the d.c.$\\,$and optical conductivities.\n\\label{fig:phased}}\n\\end{center}\n\\end{figure}\n\nWhile (\\ref{eq:2nu}) allows the critical operator to dominate at the critical point, away from the critical point, on the disordered side, it implies that the exponent of the resistivity will be $1 - 2 \\nu < 1$. Here $\\nu$ is the exponent of the critical operator away from the critical point. This is in contrast to experiments in all the unconventional materials of interest, which show that when detuned from criticality, the exponent of the resistivity increases towards the Fermi liquid $T^2$ behavior \\cite{Sachdev:2011cs}. To capture this behavior we would need to increase the scope of our model. The simplest way to do this would be to combine the critical tail contribution we are discussing with a more conventional Fermi liquid Drude peak contribution. It may be that by coupling the critical operator to the Fermi liquid, along the marginal Fermi liquid lines of \\cite{Iqbal:2011aj, Vegh:2011aa} or otherwise, one can remove the $T^2$ contribution to the resistivity in the critical region of the phase diagram. For instance, very schematically, a form like\n\\begin{equation}\\label{eq:fudged}\n\\sigma \\sim \\nu \\, T^{-2} + T^{-1 + 2 \\nu} \\,,\n\\end{equation}\nwould start to look closer to the data.\n\nComplications with the Drude peak are removed if one looks at the optical conductivity.
The scaling arguments above now imply that the dissipative conductivity scales like\n\\begin{equation}\\label{eq:sw}\n\\sigma \\sim \\w^{-1 + 2 \\nu} \\,.\n\\end{equation}\nThis gives the marginal Fermi liquid form\n\\begin{equation}\\label{eq:1w}\n\\sigma \\sim \\frac{1}{\\w} \\,,\n\\end{equation}\nat the critical point. This may be compatible with observations in e.g. YBCO \\cite{Varma:1989zz} and LSCO \\cite{lsco}. We discuss the optical conductivity in more detail below. It would be interesting to measure in detail the doping dependence of the power law tail of the optical conductivity in these materials, to assess the plausibility of a dependence like (\\ref{eq:sw}) close to the critical point. As we noted above, the expressions (\\ref{eq:sw}) and (\\ref{eq:2nu}) are strictly applicable only on the $\\nu > 0$ side of the quantum phase transition. It is a feature of these models that the d.c.$\\,$and optical conductivities have asymmetric behavior on opposite sides of the quantum critical point.\n\nThere may seem to be a tension in the fact that $J^x$ does not acquire a vacuum expectation value and yet its correlator couples at the lowest frequencies to the unstable mode according to (\\ref{eq:imgsum}). We will verify explicitly in our concrete model below that these two statements are compatible.\n\nThe mechanism of linear in temperature resistivity we have described is universal in the sense that it only depends on the onset of a holographic BKT quantum phase transition at the boundary of a quantum critical phase, combined with the presence of irrelevant operators that couple the IR critical operator to the current. The mechanism is independent of the details of the theory undergoing the transition and also of the UV completion of the critical theory. Beyond the tuning to the quantum critical point, no additional specification of dimensions of operators or dynamical critical exponents is necessary. 
In the remainder of this paper we describe a specific holographic realization of the scenario that we have just outlined.\n\n\\section{Lattices and finite wavevector instabilities}\n\\label{sec:lattice}\n\nA concrete holographic model realizing the mechanism outlined above can be achieved by combining three interesting ingredients that have been the focus of recent discussion:\n\n\\begin{itemize}\n\n\\item Vectorial instabilities occurring at finite wavevector $k_\\star > 0$ \\cite{Nakamura:2009tf, Donos:2011bh, Donos:2011qt}.\n\n\\item A (semi-) locally quantum critical sector in which all momenta are critical\n\\cite{Sachdev:2010um, McGreevy:2010zz, Iqbal:2011in}.\n\n\\item A lattice that is irrelevant in the IR scaling regime \\cite{Hartnoll:2012rj, Horowitz:2012ky, Liu:2012tr}.\n\n\\end{itemize}\n\n\\noindent The essential idea is that the irrelevant coupling to the lattice will communicate the finite wavevector instability to the $k=0$ electrical current. The instability at $k_\\star$ can have critical scaling because of the local quantum criticality.\n\nThe most important effect of a lattice is to broaden the delta function in the conductivity \\cite{Hartnoll:2012rj} into a Drude peak. Impurities will also achieve this effect \\cite{Hartnoll:2008hs}. Nonetheless, many of the interesting materials are believed to be very clean, and indeed, the Fermi liquid $T^2$ resistivity observed over large ranges of temperatures in these materials, away from the critical regions, presumably originates from umklapp scattering off the lattice rather than impurities plus interactions \\cite{maslov}. As we have stressed repeatedly, however, in this work we are not interested in the Drude peak contribution to the conductivity. 
The role of the lattice for us will be to mix modes with different wavevectors.\n\nWe will consider the following 3+1 dimensional bulk model \\cite{Donos:2011bh}\n\\begin{align}\\label{lagra1}\n\\mathcal{L}=&{\\textstyle{\\frac12}} R\\,\\ast 1-{\\textstyle{\\frac12}} \\,\\ast d\\varphi\\wedge d\\varphi-V\\left(\\varphi\\right) \\ast 1- {\\textstyle{\\frac12}} \\,\\tau\\left(\\varphi\\right) F\\wedge\\ast F-{\\textstyle{\\frac12}} \\,\\vartheta\\left(\\varphi\\right) F\\wedge F\\, ,\n\\end{align}\nwhere $F=dA$. This Lagrangian describes Einstein-Maxwell theory coupled to a pseudoscalar field with both dilatonic and axionic couplings to the field strength. The corresponding equations of motion are given by\n\\begin{align}\\label{eomi}\n&R_{\\mu\\nu}=\\partial_\\mu \\varphi\\partial_\\nu \\varphi+g_{\\mu\\nu}\\,V-\\tau\\,\\left(\\frac{1}{4}g_{\\mu\\nu}\\,F_{\\lambda\\rho}F^{\\lambda\\rho} -F_{\\mu\\rho}F_{\\nu}{}^{\\rho}\\right) \\,, \\nonumber \\\\\n&d\\left(\\tau\\ast F+\\vartheta F\\right)=0 \\,, \\nonumber \\\\\n&d\\ast d\\varphi+V'*1+\\frac{1}{2}\\tau'\\,F\\wedge\\ast F+\\frac{1}{2}\\vartheta'\\,F\\wedge F=0\\, .\n\\end{align}\nWe will assume that the three functions $V$, $\\tau$ and $\\vartheta$ have the following expansions\n\\begin{align}\\label{expans}\nV=-6+\\frac{1}{2}m_{s}^{2}\\,\\varphi^{2}+\\dots,\\qquad\n\\tau=1-\\frac{n}{12}\\,\\varphi^{2}+\\dots,\\qquad\n\\vartheta=\\frac{c_{1}}{2\\sqrt{3}}\\,\\varphi+\\dots\\, .\n\\end{align}\nIt is sufficient to know the action for the pseudoscalar to quadratic order, as it will only appear as a perturbation of the background. In (\\ref{expans}) we have furthermore set the asymptotic $AdS_4$ radius to $L^2 = 1\/2$. 
In the Lagrangian (\\ref{lagra1}) we already set both the Newton and Maxwell constants to unity.\nAll of these quantities can be scaled out of the equations of motion in this model.\n\nThe key feature of the theory (\\ref{lagra1}), to be described immediately below, is that it can develop vectorial instabilities, which involve the electric current. Closely related to this fact is that the instabilities of interest occur at nonzero wavevector $k_\\star$, avoiding the spontaneous generation of a homogeneous current, as we would expect. By subsequently introducing a lattice, that becomes irrelevant in the IR and does not disrupt the critical phase, we will communicate the effects of this instability to the spectral weight of the current operator, leading to strong temperature and frequency dependence in the conductivity.\n\n\\subsection{IR spectrum of excitations and instability}\n\\label{sec:irspectrum}\n\nThe equations of motion (\\ref{eomi}) admit the following $AdS_{2}\\times \\mathbb{R}^{2}$ black hole solution\n\\begin{equation}\\label{eq:AdS2_bh}\nds_{4}^{2} = -f\\,dt^{2}+\\frac{1}{12} \\left(\\frac{dr^{2}}{f}+dx^{2}+dy^{2}\\right) \\,, \\qquad A=\\left(r-r_{+}\\right)\\,dt \\,, \\qquad\n\\varphi=0 \\,.\n\\end{equation}\nHere the emblackening factor\n\\begin{equation}\nf=r^{2}-r_{+}^{2} \\,.\n\\end{equation}\nThe horizon of the black hole \\eqref{eq:AdS2_bh} is at $r=r_{+}$ and the temperature is $T = \\sqrt{3}\\,r_{+}\/\\pi$.\nThe boundary of $AdS_2$ is at $r \\to \\infty$. This will serve as our near horizon scaling geometry,\n as per our discussion in \\S \\ref{sec:bkt} above, which we have further\nplaced at a finite temperature. That the effects of the temperature are restricted to the IR scaling geometry implies that $T \\ll \\mu$, the UV scale. The scaling regime described by $AdS_{2}\\times \\mathbb{R}^{2}$ has dynamical critical exponent $z=\\infty$ and an associated ground state entropy density. 
This ground state entropy has tainted the reputation of $AdS_{2}\\times \\mathbb{R}^{2}$ as a ubiquitous tool in the applied holography effort. (Semi) local quantum criticality with $z=\\infty$ is in fact compatible with a vanishing ground state entropy density if hyperscaling is violated \\cite{Hartnoll:2012wm}. Holographic scaling geometries with $z=\\infty$ are the only currently known holographic duals (away from the probe limit) that share at leading bulk classical order the basic property of Fermi liquids of having spectral weight at low energies but finite momentum. It seems conceivable that $AdS_{2}\\times \\mathbb{R}^{2}$ may have the last laugh.\n\nAs usual, perturbations about the background can be decomposed into transverse and longitudinal sectors that decouple from each other. The finite wavevector instability occurs in the transverse channel. We can write the transverse perturbations around the exact solution \\eqref{eq:AdS2_bh} as\n\\begin{align}\\label{fluctuations}\n\\delta g_{ty}=h_{ty}\\left(t,r\\right)\\sin\\left(kx\\right) \\,, \\qquad \\delta g_{xy}=h_{xy}\\left(t,r\\right)\\cos\\left(kx\\right) \\,, \\nonumber \\\\\n\\delta A_{y}=a\\left(t,r\\right)\\sin\\left(kx\\right) \\,, \\qquad \\qquad \\delta \\varphi= w\\left(t,r\\right)\\cos\\left(kx\\right) \\,.\n\\end{align}\nThe equations of motion \\eqref{eomi} then yield the system of linear coupled equations\n\\begin{align}\n-k\\,\\partial_{t}h_{xy}+k^{2}\\,h_{ty}-f \\left(2\\,\\partial_{r}a+\\partial_{r}^{2}h_{ty}\\right)=0 \\,, \\nonumber \\\\\n2\\,\\partial_{t}a+12kf\\,\\partial_{r}h_{xy}+\\partial_{t}\\partial_{r}h_{ty}=0 \\,, \\nonumber \\\\\n-f^{-1}\\,\\partial_{t}^{2}h_{xy}+12\\,\\partial_{r}\\left(f\\partial_{r}h_{xy} \\right)+kf^{-1}\\,\\partial_{t}h_{ty}=0 \\,, \\nonumber \\\\\n-f^{-1}\\,\\partial_{t}^{2}a+12\\,\\partial_{r}\\left(f\\partial_{r}a \\right)-12k^{2}\\,a+c_{1}k\\,w+12\\,\\partial_{r}h_{ty}=0 \\,, \\nonumber 
\\\\\n-f^{-1}\\,\\partial_{t}^{2}w+12\\,\\partial_{r}\\left(f\\partial_{r}w \\right)-\\left(12k^{2}+m_{s}^{2}+n\\right)\\,w+12c_{1}k\\,a=0 \\,.\n\\end{align}\nIntroducing the new field $\\phi_{xy}$ through\n\\begin{equation}\n\\partial_{t}\\phi_{xy}=f\\,\\partial_{r}h_{xy} \\,,\n\\end{equation}\nthe system of equations \\eqref{fluctuations} leads to the linear system of equations\n\\begin{equation}\\label{eq:system}\n\\left(\\Box_{2}-M^{2}\\right)\\mathbf{v}=0 \\,,\n\\end{equation}\nwith ${\\bf v}=(\\phi_{xy},a,w)$ and the mass matrix\n\\begin{equation}\\label{mmfirstmod}\nM^{2}=\\left(\\begin{array}{ccc}12\\, k^{2} & 2\\,k & 0 \\\\144\\,k & 24+12\\,k^{2} & -c_{1}k \\\\0 & -12\\,c_{1}k & 12\\,k^{2}+m_{s}^{2} + n\\end{array}\\right)\\, .\n\\end{equation}\nNote that the Maxwell field fluctuation decouples from the remainder at zero wavenumber, $k=0$.\nThe Laplacian appearing in equation \\eqref{eq:system} is with respect to the two dimensional metric\n\\begin{equation}\nds_{2}^{2}=-f\\,dt^{2}+\\frac{dr^{2}}{12\\,f} \\,.\n\\end{equation}\n\nThe linearly coupled system of equations \\eqref{eq:system} can be diagonalized to yield three independent modes\n\\begin{eqnarray}\n\\lefteqn{\\left(\\Box_{2}-\\mu_{i}^{2}\\right)\\,g_{i}=0}\\nonumber \\\\\n&& \\Rightarrow \\qquad \\partial_{r}^{2}g_{i}+\\frac{f^{\\prime}}{f}\\,\\partial_{r}g_{i}-\\left(\\frac{1}{12\\,f^{2}}\\partial_{t}^{2}+\\frac{1}{12\\,f}\\mu_{i}^{2} \\right)g_{i}=0 \\,,\n\\end{eqnarray}\nwith $\\mu_{i}^{2}$ being the three eigenvalues of the mass matrix \\eqref{mmfirstmod}. 
If we go to frequency space by writing $g_{i}(t,r)=e^{-\\imath\\omega t} u_{i}(r)$, the solution with infalling boundary conditions at the horizon \\cite{Son:2002sd, Hartnoll:2009sz} is then\n\\begin{equation}\nu_{i}(r)=\\left(\\frac{2}{r_{+}}\\right)^{\\nu_{i}}\\,\\Gamma\\left(a_{i} \\right)\\Gamma\\left(1+\\nu_{i}\\right)f(r){}^{-\\imath \\frac{\\omega}{4\\sqrt{3}r_{+}}}r^{-a_{i}}\\,{}_{2}F_{1}\\left(\\frac{a_{i}}{2},\\frac{a_{i}+1}{2},1-\\nu_{i};\\frac{r_{+}^{2}}{r^{2}} \\right)-\\left(\\nu_{i}\\leftrightarrow -\\nu_{i}\\right) \\,,\n\\end{equation}\nwhere $a_{i}=\\frac{1}{2}-\\imath \\frac{\\omega}{2\\sqrt{3}r_{+}}-\\nu_{i}$ and $\\nu_{i}=\\frac{1}{2}\\sqrt{1+\\frac{1}{3}\\mu_{i}^{2}}$.\nBy expanding the above solution near the boundary of $AdS_{2}$, i.e. as $r \\to \\infty$, we can read off the $AdS_{2}$ retarded Green's functions in the usual way (e.g. \\cite{Faulkner:2011tm}) to obtain\\footnote{The case $\\nu_i = 0$, which will be of particular interest below, should properly be treated independently, with the solution to the wave equation expressed in terms of a Legendre function. Correct results are obtained by continuation of the general results to $\\nu_i \\to 0$. In particular, the spectral density $\\text{Im} G^R \\sim \\w\/T$ at $\\nu_i = 0$.}\n\\begin{equation}\n\\mathcal{G}^R_{i}\\left(\\omega, T \\right)=-\\left(\\frac{\\pi}{2\\sqrt{3}} T \\right)^{2\\nu_{i}}\\frac{\\Gamma\\left(1-\\nu_{i}\\right)\\Gamma\\left(\\frac{1}{2}-\\imath\\frac{\\omega}{2\\pi T}+\\nu_{i}\\right)}{\\Gamma\\left(1+\\nu_{i}\\right)\\Gamma\\left(\\frac{1}{2}-\\imath\\frac{\\omega}{2\\pi T}-\\nu_{i}\\right)} \\,. \\label{eq:gads2}\n\\end{equation}\nThese have the expected form that is determined by the $SL(2,{{\\Bbb R}})$ symmetry of $AdS_{2}$. Again, see for instance \\cite{Faulkner:2011tm}.
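The low-frequency temperature scaling of (\\ref{eq:gads2}) is easy to verify numerically. The following Python sketch (ours, not from the paper; it uses a standard Lanczos approximation for the complex Gamma function) evaluates the Green's function at small $\\w$ and checks that $\\lim_{\\w\\to 0} \\text{Im}\\,\\mathcal{G}^R\/\\w \\sim T^{2\\nu-1}$, as anticipated by (\\ref{eq:im}) with $z=\\infty$:

```python
import cmath, math

# Lanczos (g = 7) coefficients for Gamma(z) with complex argument
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    if z.real < 0.5:                       # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0] + sum(c / (z + i) for i, c in enumerate(_C[1:], 1))
    t = z + 7.5
    return cmath.sqrt(2 * math.pi) * t**(z + 0.5) * cmath.exp(-t) * x

def G_R(w, T, nu):
    # AdS2 retarded Green's function, eq. (gads2)
    z0 = 0.5 - 1j * w / (2 * math.pi * T)
    pref = -(math.pi / (2 * math.sqrt(3)) * T)**(2 * nu)
    return pref * cgamma(1 - nu) * cgamma(z0 + nu) / (cgamma(1 + nu) * cgamma(z0 - nu))

nu, w = 0.3, 1e-7
T1, T2 = 0.01, 0.02
s1 = G_R(w, T1, nu).imag / w               # spectral weight Im G / w
s2 = G_R(w, T2, nu).imag / w
exponent = math.log(s2 / s1) / math.log(T2 / T1)
assert abs(exponent - (2 * nu - 1)) < 1e-3  # Im G / w ~ T^(2 nu - 1)
```

At $\\nu = 0$ the measured exponent tends to $-1$, the marginal behavior of (\\ref{eq:critical}).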
These are the IR Green's functions that will appear in the matching formula (\\ref{eq:imgsum}).\nThe perturbation includes the transverse current mode $\\d A_y(x)$, at nonzero momentum $k$. Below we will couple this mode to the homogeneous current using a lattice.\n\nFor small $\\omega$ we have\n\\begin{eqnarray}\n\\lefteqn{\\mathcal{G}^R_{i}\\left(\\omega\\right)=-\\left(\\frac{\\pi}{2\\sqrt{3}} T \\right)^{2\\nu_{i}}\\frac{\\Gamma\\left(1-\\nu_{i}\\right)\\Gamma\\left(\\frac{1}{2}+\\nu_{i}\\right)}{\\Gamma\\left(1+\\nu_{i}\\right)\\Gamma\\left(\\frac{1}{2}-\\nu_{i}\\right)}}\\nonumber \\\\\n&& +\\imath\\omega \\frac{1}{2}\\,\\left(\\frac{\\pi}{2\\sqrt{3}} \\right)^{2\\nu_{i}}\\,T^{2\\nu_{i}-1}\\frac{\\Gamma\\left(\\frac{1}{2}+\\nu_{i} \\right)\\Gamma\\left(1-\\nu_{i}\\right)}{\\Gamma\\left(\\frac{1}{2}-\\nu_{i}\\right)\\Gamma\\left(1+\\nu_{i}\\right)}\\,\\tan\\pi\\nu_{i} \\,.\n\\end{eqnarray}\nIf $\\nu_i$ is real, then taking the imaginary part we recover the scaling with temperature that we anticipated in (\\ref{eq:im}) above, using the expression (\\ref{eq:delta}) for $\\Delta$ in terms of $\\nu$.\nIn general, the eigenvalues of the matrix \\eqref{mmfirstmod} are slightly complicated functions of the wavenumber $k$, but are easily found numerically.\n\nTo illustrate the features of the spectrum of our theory, consider the special case where $m_{s}^{2}+n=0$.\n(We will in fact consider a different case for the numerics below.) 
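Such a numerical diagonalization is simple to set up. The following numpy sketch (ours, not from the paper) builds the mass matrix (\\ref{mmfirstmod}), computes $\\nu_i = \\frac{1}{2}\\sqrt{1+\\mu_i^2\/3}$ from its eigenvalues, and exhibits, in the special case $m_{s}^{2}+n=0$, the imaginary $\\nu_2$ that signals the finite wavenumber instability for $c_1 > 2\\sqrt{6}$:

```python
import numpy as np

def nus(k, c1, ms2_plus_n=0.0):
    # transverse-channel mass matrix M^2 of eq. (mmfirstmod)
    M2 = np.array([[12*k**2,  2*k,           0.0],
                   [144*k,    24 + 12*k**2,  -c1*k],
                   [0.0,      -12*c1*k,      12*k**2 + ms2_plus_n]])
    mu2 = np.linalg.eigvals(M2).astype(complex)
    return 0.5 * np.sqrt(1 + mu2 / 3)       # nu_i = (1/2) sqrt(1 + mu_i^2 / 3)

# for ms^2 + n = 0, nu_1 = (1/2) sqrt(1 + 4 k^2) is always one of the exponents
k, c1 = 0.7, 3.0
assert min(abs(n - 0.5 * np.sqrt(1 + 4 * k**2)) for n in nus(k, c1)) < 1e-9

# instability: for c1 > 2 sqrt(6), nu_2 is imaginary around k_min
c1 = 6.0
k_min = c1 * np.sqrt(48 + c1**2) / (2 * np.sqrt(12) * np.sqrt(24 + c1**2))
assert max(abs(n.imag) for n in nus(k_min, c1)) > 0.1
```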
In that case we have\n\\begin{align}\n\\nu_{1}=&\\frac{1}{2}\\sqrt{1+4\\,k^{2}} \\,, \\nonumber \\\\\n\\nu_{2}=&\\frac{1}{2}\\sqrt{5+4\\,k^{2}-\\frac{\\sqrt{12}}{3}\\sqrt{12+\\left(24+c_{1}^{2}\\right)k^{2}}} \\,, \\nonumber \\\\\n\\nu_{3}=&\\frac{1}{2}\\sqrt{5+4\\,k^{2}+\\frac{\\sqrt{12}}{3}\\sqrt{12+\\left(24+c_{1}^{2}\\right)k^{2}}} \\,.\\label{eq:nus}\n\\end{align}\nFrom the expressions above we can see that $\\nu_{2}^{2}$ has two minima at\n\\begin{equation}\nk^{\\pm}_\\text{min} = \\pm\\frac{1}{2\\sqrt{12}}\\,\\frac{c_{1}\\sqrt{48+c_{1}^{2}}}{\\sqrt{24+c_{1}^{2}}} \\,,\n\\end{equation}\nat which\n\\begin{equation}\n\\nu_{2\\,\\text{min}} = \\frac{1}{4}\\sqrt{12-\\frac{c_{1}^{2}}{3}-\\frac{192}{24+c_{1}^{2}}} \\,.\n\\end{equation}\nFor $c_{1}>2\\sqrt{6}$ we see that there is a range of $k$ for which $\\nu_{2}$ is imaginary.\nThese are the finite wavenumber instabilities that we shall use. For our purposes, it does not matter whether or not the range of unstable momenta extends down to $k=0$. What is important is that there are finite wavenumber instabilities involving the Maxwell field $\\d A_y$.\n\nFor the set of valid occupancy measures $\\caD = \\{\\rho_\\pi : \\pi \\in \\Pi \\}$, \\lm{lm:occupancy} shows the one-to-one correspondence between $\\Pi$ and $\\caD$:\n\n\\begin{lemma}[Theorem 2 of \\cite{syed2008apprenticeship}]\\label{lm:occupancy}\n If $\\rho \\in \\caD$, then $\\rho$ is the occupancy measure for $\\pi_\\rho(a|s) \\triangleq \\frac{\\rho(s,a)}{\\sum_{a'}\\rho(s,a')}$, and $\\pi_\\rho$ is the only policy whose occupancy measure is $\\rho$.\n\\end{lemma}\n\\vspace{-2pt}\n\nThus, the occupancy measure can be used as an alternative in the general IL objective shown in \\eq{eq:il}.
Applying KL divergence as the distance metric, we can construct the IL objective as minimizing the reverse KL divergence $\\kld$ between the agent's occupancy measure and the expert's:\n\\begin{equation}\\label{eq:kl-il}\n \\pi^* = \\argmin_{\\pi} \\kld ( \\rho_\\pi \\| \\rho_{{\\pi_E}})~.\n\\end{equation}\n\\begin{proposition}\\label{prop:kl-il}\n The optimal solution of the reverse KL divergence objective of IL shown in \\eq{eq:kl-il} is $\\pi^*={\\pi_E}$.\n\\end{proposition}\n\\vspace{-6pt}\n\\begin{proof}\n The solution of \\eq{eq:kl-il} is unique since $\\kld$ is convex in $\\rho_\\pi$ and achieves its optimal value iff $\\rho_\\pi = \\rho_{{\\pi_E}}$. According to \\lm{lm:occupancy}, we can recover the policy ${\\pi_E}$ if we can recover the occupancy measure of the expert policy.\n Thus the optimal solution is $\\pi^* = {\\pi_E}$.\n\\end{proof}\nModeling the normalized occupancy measure with a Boltzmann distribution, the density can be represented by an EBM as\n\\begin{equation}\\label{eq:ebm}\n (1-\\gamma)\\rho_\\pi(s,a) = \\frac{1}{Z}\\exp(-E(s,a))~,\n\\end{equation}\nwhich leads to the following proposition.\n\\begin{proposition}\\label{prop:eb-il}\n The reverse KL divergence objective of IL \\eq{eq:kl-il} is equivalent to the following Energy-Based Imitation Learning (EBIL) objective \\eq{eq:eb-il}:\n\\iffalse\nThus we take \\eq{eq:ebm} into \\eq{eq:kl-il} for both policy $\\pi$ and ${\\pi_E}$, we get that: \n\\begin{equation}\\label{eq:kl-induce}\n \\begin{aligned}\n \\kld ( \\rho_\\pi \\| \\rho_{{\\pi_E}}) &= \\sum_{s,a} \\rho_\\pi(s,a) \\log \\frac{\\rho_\\pi(s,a)}{\\rho_{\\pi_E}(s,a)}\\\\\n &= \\sum_{s,a}\\rho_\\pi\\left( \\log{\\rho_\\pi(s,a)} - \\log{\\frac{e^{-E_{\\pi_E}(s,a)}}{Z'}} \\right )\\\\\n &= \\sum_{s,a}\\rho_\\pi\\left(E_{\\pi_E}(s,a)+\\log{\\rho_\\pi(s,a)} + \\log{Z'}\\right )\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\rho_\\pi(s,a)} + \\text{const}\\\\\n &=
\\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\rho_\\pi(s,a)} -\\log{\\sum_{s,a'}\\rho(s,a')}+ \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\left [ \\rho_\\pi(s,a)\/\\sum_{a'}\\rho(s,a')\\right ]}+ \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ] - \\overline{H}\\left (\\rho_\\pi(s,a) \\right ) + \\text{const}\n \n \n \n \n \n ~,\n \\end{aligned}\n\\end{equation}\nwhere $\\overline{H}=-\\sum_{s,a}\\rho_\\pi\\log{\\rho_\\pi(s,a)\/\\sum_{a'}\\rho(s,a')}$ is the entropy of the occupancy measure, $E_{\\pi_E}$ is the EBM of policy ${\\pi_E}$ and $Z'$ is its partition function.\n\\begin{lemma}[Lemma 3 of \\cite{ho2016generative}]\n $\\overline{H}$ is strictly concave, and for all $\\pi \\in \\Pi$ and $\\rho \\in \\caD$, we have $H(\\pi) = \\overline{H}(\\rho_\\pi)$ and $\\overline{H}(\\rho)=H(\\pi_\\rho)$.\n\\end{lemma}\nThus, \\eq{eq:kl-il} in the end lead to the objective function of Energy-Based Imitation Learning (EBIL) as:\n\\fi\n\\begin{equation}\\label{eq:eb-il}\n\\pi^* = \\argmax_{\\pi} \\bbE_\\pi \\left[-E_{\\pi_E}(s,a)\\right ] + H(\\pi)~.\n\\end{equation}\n\\end{proposition}\n\\vspace{-6pt}\nThe proof of \\prop{prop:eb-il} is in \\ap{ap:eb-il}. A key observation is that \\eq{eq:eb-il} provides exactly the same form as the objective of MaxEnt RL shown in \\eq{eq:maxent-rl}, and thus we can just estimate the energy of expert data as a surrogate reward function without involving the intractable partition function $Z$\\footnote{One may notice that according to \\lm{lm:occupancy}, we can simply recover the expert policy directly through $\\pi^*=\\exp(-E(s,a))\/\\sum_{a'}{\\exp{(-E(s,a'))}}$. However, this may be hard to generalize to continuous or high-dimensional action space in practice. For more discussions one can refer to \\se{ap:ebil-sac}.}. 
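The footnote's direct recovery is easy to illustrate in a small discrete setting; a toy numpy sketch (ours, with made-up energies for two states and three actions):

```python
import numpy as np

# hypothetical estimated energies E(s, a): rows are states, columns actions
E = np.array([[0.1, 2.0, 3.0],
              [1.5, 0.2, 2.5]])

# direct recovery: pi*(a|s) = exp(-E(s,a)) / sum_a' exp(-E(s,a'))
pi = np.exp(-E) / np.exp(-E).sum(axis=1, keepdims=True)

assert np.allclose(pi.sum(axis=1), 1.0)               # valid conditionals
assert (pi.argmax(axis=1) == E.argmin(axis=1)).all()  # low energy = high probability
```

In continuous or high-dimensional action spaces this normalization over actions is intractable, which is why EBIL instead feeds the energy into a MaxEnt RL learner as a reward.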
Generally, we can choose the reward function as $r(s,a)=h(-E_{\\pi_E}(s,a))$, where $h$ is a monotonically increasing linear function, so that the objective of \\eq{eq:eb-il} remains valid. \n\nTherefore, EBIL, which learns with MaxEnt RL using the expert energy function as the reward, aims to minimize the reverse KL divergence between the agent's occupancy measure and the expert's. It is worth noting that if we remove the entropy term to construct a standard RL objective, the problem collapses into minimizing the cross entropy of the occupancy measures rather than the KL divergence.\n\n\\subsection{Expert Energy Estimation with Demonstrations}\n\nAs described above, our reward function is determined by $E_{{\\pi_E}}(s,a)$, a learned energy function of the occupancy measure. In this section, we elaborate on how to estimate $E_{{\\pi_E}}(s,a)$ from expert demonstrations. \nSpecifically, we leverage Deep Energy Estimator Networks (DEEN)~\\cite{saremi2018deep}, a scalable and efficient algorithm that directly estimates the energy of the expert's occupancy measure via a differentiable framework, without the intractable estimation of the normalizing constant. We then take the estimated energy as the surrogate reward to train the agent policy.\n\nFormally, let the random variable $X=g(s,a)\\sim g(\\rho_{\\pi_E}(s,a))$. \nLet the random variable $Y$ be the noisy observation of $X$ such that $y\\sim x+N(0, \\sigma^2I)$, \\textit{i.e.}, $y$ is derived from a sample $x$ by adding white Gaussian noise $\\xi\\sim N(0, \\sigma^2I)$. 
\nThe empirical Bayes least-squares estimator, \\textit{i.e.}, the optimal denoising function $g(y)$ for white Gaussian noise, is given by\n\\begin{equation}\n\\begin{aligned}\\label{eq:denoise}\n g(y) = y + \\sigma^2\\nabla_y\\log{p(y)}~.\n\\end{aligned}\n\\end{equation}\nSuch a function can be trained as a feedforward neural network $\\hat{g}$ by denoising each sample $y_i \\in Y$ to recover $x_i \\in X$, which can be implemented with a Denoising Autoencoder (DAE)~\\cite{vincent2008extracting}. We can then use the DAE $\\hat{g}$ to approximate the score function $\\nabla_y \\log p(y)$ of the corrupted distribution via $\\nabla_y \\log p(y) \\propto \\hat{g}(y) - y$~\\cite{robbins1955empirical,miyasawa1961empirical,raphan2011least}. However, one should note that the DAE does not provide the energy function but only approximates the score function -- the gradient of $\\log \\rho_\\pi(s,a)$ -- which cannot directly serve as the reward function.\n\nThus, in order to estimate the EBM of the expert's state-action pairs provided through demonstration data, we explicitly parameterize the energy function $E_\\theta(y)$ with a neural network. As shown in \\cite{saremi2018deep}, such a network, called DEEN, can be trained by minimizing the following objective:\n\\begin{equation}\\label{eq:deen}\n \\argmin_{\\theta} \\sum_{x_i\\in X, y_i\\in Y} \\left \\| x_i - y_i + \\sigma^2\\frac{\\partial E_\\theta(y=y_i)}{\\partial y} \\right \\|^2~,\n\\end{equation}\nwhich enforces the score-function relation $\\partial E_\\theta(y)\/\\partial y$ shown in \\eq{eq:denoise}. It is worth noting that the EBM estimates the energy of the noisy samples, which can be seen as a Parzen window estimation of $p(x)$ with variance $\\sigma^2$ as the smoothing parameter~\\cite{saremi2019neural,vincent2011connection}. A minor limitation is that \\eq{eq:deen} requires the samples (state-action pairs) to be continuous so that the gradient can be accurately computed. 
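As a concrete illustration of the DEEN objective, the following minimal 1-D sketch (ours, not the paper's implementation) replaces the neural network $E_\theta$ with a hand-chosen quadratic energy family so that the score $\partial E_\theta(y)/\partial y$ is available in closed form; all names and constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expert" samples x ~ N(mu, s^2) and noisy copies y = x + N(0, sigma^2).
mu, s, sigma = 2.0, 1.0, 0.5
x = rng.normal(mu, s, size=20000)
y = x + rng.normal(0.0, sigma, size=x.shape)

# Quadratic energy E_theta(y) = (y - theta)^2 / (2 * tau2); its score
# dE/dy = (y - theta) / tau2 is analytic.  tau2 is fixed to the variance
# of the corrupted distribution; only theta is learned.
tau2 = s**2 + sigma**2
theta, lr = 0.0, 0.5
for _ in range(200):
    dE_dy = (y - theta) / tau2
    resid = x - y + sigma**2 * dE_dy              # residual of the DEEN loss
    grad_theta = np.mean(2 * resid) * (-sigma**2 / tau2)  # d loss / d theta
    theta -= lr * grad_theta
```

Minimizing the denoising loss drives the energy minimum `theta` toward the data mode `mu`, mirroring how DEEN assigns low energy to regions covered by expert samples.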
Actually, EBIL can easily be extended to discrete spaces by using other energy estimation methods, \\textit{e.g.}, Noise Contrastive Estimation~\\cite{gutmann2010noise}. In this work, we concentrate on the continuous case and leave the discrete version for future work.\n\nIn practice, we learn the EBM of the expert data from off-line demonstrations and construct the reward function, which is then kept fixed while the agent learns its policy with a normal RL procedure. Specifically, we construct the surrogate reward function $\\hat{r}(s,a)$ as follows:\n\\begin{equation}\\label{eq:reward}\n \\hat{r}(s,a) = h(-E_{{\\pi_E}}(s,a))~,\n\\end{equation}\nwhere $h(x)$ is a monotonically increasing linear function, which can be specified for different environments.\nFormally, the overall EBIL algorithm is presented in \\alg{alg:EBIL} of \\ap{ap:algo}.\n\\section{Discussions}\n\\label{sec:discussion}\n\nIn this paper, we propose to estimate the energy function directly from expert demonstrations and then regard it as the surrogate reward function that drives the agent to learn a policy matching the occupancy measure of the expert. Interestingly, MaxEnt IRL can be seen as a special implementation of EBMs: it models the expert demonstrations as a Boltzmann distribution of the cost \/ reward function and tries to extract the optimal policy from it. In this section, we theoretically clarify the relation between EBIL and MaxEnt IRL, and show that EBIL can in fact be seen as a simplified and efficient solution to MaxEnt IRL.\n\nIRL aims to recover a cost or reward function under which the given demonstrations are near-optimal. However, the optimal solution is underdetermined. To that end, MaxEnt IRL resolves the reward ambiguity problem of standard IRL by employing the principle of maximum entropy~\\cite{ziebart2008maximum,ziebart2010modeling,bloem2014infinite}, which introduces probabilistic models to explain suboptimal behaviors as noise. 
More specifically, MaxEnt IRL models the paths in demonstrations using a Boltzmann distribution, where the energy is given by the unknown reward function $\\hat{r}^*$:\n\\begin{equation}\n\\label{eq:maxent-irl}\n p_{\\hat{r}^*}(\\tau) = \\frac{1}{Z}\\exp{(\\hat{r}^*(\\tau))}~,\n\\end{equation}\nwhere $\\tau=\\{ s_0, a_0, \\cdots, s_T, a_T \\}$ is a trajectory of state-action pairs and $T$ is its length; $\\hat{r}^*(\\tau)=\\sum_{t}\\hat{r}^*(s_t,a_t)$ is the reward function under which the expert demonstrations are optimal; and $Z$ is the partition function.\\footnote{Note that \\eq{eq:maxent-irl} is formulated under the deterministic MDP setting. A general form for stochastic MDPs is derived in \\cite{ziebart2008maximum,ziebart2010modeling} yet admits a similar analysis: the probability of a trajectory is decomposed as the product of conditional probabilities of the states $s_t$, which factor out of all likelihood ratios since they are not affected by the reward function.} Similar to other EBMs, the parameters of the reward function are optimized to maximize the likelihood of the expert trajectories.\n\nUnder this model, we ultimately hope that our policy generates trajectories with probability increasing exponentially in the return, so that the desired optimal trajectories have the highest likelihood. Following previous work, we focus on maximum causal entropy IRL \\cite{ziebart2008maximum,ziebart2010modeling}, which aims to maximize the entropy of the distribution over paths under feature-matching constraints that can be regarded as matching the reward. 
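For intuition, a minimal numeric sketch of the Boltzmann trajectory model on a toy deterministic MDP with four enumerable trajectories; the returns below are made up purely for illustration:

```python
import numpy as np

# Hypothetical returns r_hat*(tau) for four enumerable trajectories
# of a toy deterministic MDP, sorted from best to worst.
returns = np.array([3.0, 1.0, 0.5, -1.0])

# Boltzmann weighting of the MaxEnt IRL model: p(tau) = exp(r(tau)) / Z.
Z = np.exp(returns).sum()
p_tau = np.exp(returns) / Z
```

The likelihood ratio between two trajectories is the exponential of their return gap, so the highest-return trajectory dominates the distribution exactly as the model intends.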
Formally, maximum causal entropy IRL can be represented as the following optimization problem~\\cite{ho2016generative}:\n\\begin{equation}\\label{eq:irl-cas}\n\\begin{aligned}\n\\hat{r}^* &= \\argmax_{\\hat{r}} \\bbE_{{\\pi_E}}\\left [\\hat{r}(s,a) \\right ] - \\left ( \\max_{\\pi\\in\\Pi} \\bbE_{\\pi}\\left [\\hat{r}(s,a)\\right ] + H(\\pi) \\right ) ~,\n\\end{aligned}\n\\end{equation}\nwhere $H(\\pi) := \\mathbb{E}_\\pi [-\\log \\pi (a|s)]$ is the causal entropy~\\cite{bloem2014infinite} of the policy $\\pi$.\n\n\\subsection{EBIL is a Dual Problem of MaxEnt IRL}\n\nWe first illustrate that EBIL (\\eq{eq:kl-il}) is a dual of the above MaxEnt IRL problem. The proof sketch is to show that EBIL is an instance of the occupancy measure matching problem, which has been proven to be a dual problem of MaxEnt IRL.\n\n\\begin{lemma}\\label{lm:dual}\nIRL is a dual of the following occupancy measure matching problem and the induced optimal policy is the primal optimum which matches the expert's occupancy measure $\\rho_{\\pi_E}$:\n\\begin{equation}\\label{eq:irl-dual}\n \\min_{\\rho_\\pi\\in\\caD} \\overline{H}(\\rho_\\pi) \\text{ subject to } \\rho_\\pi(s,a) = \\rho_{\\pi_E}(s,a) \\text{ } \\forall s \\in \\caS, a \\in \\caA~.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proposition}\\label{prop:ebil-dual}\n The EBIL objective \\eq{eq:kl-il} is an instance of the occupancy measure matching problem \\eq{eq:irl-dual}.\n\\end{proposition}\n\nThe proof of \\lm{lm:dual} can be found in Section 3 of \\cite{ho2016generative}, and the proof of \\prop{prop:ebil-dual} is shown in \\ap{ap:ebil-dual}. Combining \\lm{lm:dual} and \\prop{prop:ebil-dual}, we have that EBIL is a dual problem of MaxEnt IRL. \n\n\\subsection{EBIL is More Efficient than IRL}\nWe now illustrate why EBIL is a more efficient solution.\nRecall that the intuition of MaxEnt IRL is to regard the demonstrations as a Boltzmann distribution of the reward function, and then induce the optimal policy. 
Suppose that we have already recovered the optimal reward function $\\hat{r}^*$; then the optimal policy is induced by the following practical forward MaxEnt RL procedure:\n\\begin{equation}\\label{eq:rl-hr}\n \\pi^* = \\argmax_{\\pi} \\bbE_\\pi\\left [\\hat{r}^*(s,a)\\right ]+ H(\\pi)~. \n\\end{equation}\nNow we further demonstrate that EBIL seeks the same policy, but through a simplified and more efficient training procedure.\n\n\\begin{proposition}\\label{prop:tau-kl}\n Let $\\tau$ and $\\tau_E$ denote trajectories sampled by the agent and the expert, respectively, and suppose we have the optimal reward function $\\hat{r}^*$; then the policy $\\pi^*$ induced by $\\hat{r}^*$ is the optimal solution to the following optimization problem: \n \\begin{equation}\\label{eq:tau-kl}\n \\begin{aligned}\n \\min_\\pi \\kld(p(\\tau)\\|p(\\tau_E))~.\n \\end{aligned}\n \\end{equation}\n\\end{proposition}\n\n\\begin{proposition}\\label{prop:kl-kl}\n The optimization problem \\eq{eq:tau-kl} is equivalent to the KL divergence IL objective \\eq{eq:kl-il} and the EBIL objective \\eq{eq:eb-il}.\n\\end{proposition}\n\nThe proofs of these two propositions are in \\ap{ap:tau-kl} and \\ap{ap:kl-kl}, respectively. Together, they reveal that the optimal policy obtained from EBIL is the same as the one from MaxEnt IRL when the optimal reward function is recovered. However, the latter is indirect and much more computationally expensive. Specifically, as shown in \\eq{eq:irl-cas}, IRL methods in fact aim to recover the optimal reward function (serving as an EBM) by maximum likelihood on the expert trajectories instead of directly learning the policy, and cannot avoid estimating the non-trivial partition function $Z$, which is hard, especially in high-dimensional spaces. The SoTA IRL method AIRL~\\cite{fu2017learning}, as we show in \\se{sec:one-domain} and \\ap{ap:airl}, does not actually recover the energy. 
By contrast, EBIL is essentially a fixed-reward method that learns the policy guided by the pre-estimated energy of the expert's occupancy measure, which is much more efficient and allows more flexible and general choices for training the expert's EBM.\n\nAs a supplementary statement, \\cite{finn2016connection} reveals the close relationship among Guided Cost Learning~\\cite{finn2016guided} (a sample-based algorithm of MaxEnt IRL), GAIL and EBMs. Besides, \\cite{ghasemipour2019divergence} also presents a unified perspective on previous IL algorithms and discusses them from a divergence minimization view similar to ours in \\se{subsec:EBIL}. Therefore, all of these methods show connections with each other.\n\\section{Related Work}\n\nInstead of alternately updating the policy and the reward function as in IRL and GAIL, many recent IL works aim to learn a fixed reward function directly from expert demonstrations and then apply a normal reinforcement learning procedure with that reward function. Although this idea already appears in the earlier GMMIL work~\\cite{kim2018imitation}, which utilizes the maximum mean discrepancy as the distance metric to guide training,\nit was recently revisited by Random Expert Distillation (RED) \\cite{wang2019red}, which employs the idea of Random Network Distillation\n\\cite{burda2018rnd} to estimate the support of the expert policy and computes the reward function from the loss of fitting a random neural network. 
\nIn addition, Disagreement-Regularized Imitation Learning~\\cite{brantley2020disagreementregularized} constructs the reward function from the disagreement among the predictions of an ensemble of policies trained on the demonstration data, which is optimized together with a supervised behavioral cloning cost.\nInstead of using a learned reward function, Soft-Q Imitation Learning~\\cite{reddy2019softqil} applies constant rewards, setting positive rewards for expert state-action pairs and zero rewards for all others, and optimizes with the off-policy SQL algorithm~\\cite{haarnoja2018soft}.\n\nOur EBIL relies heavily on EBMs, which have played an important role in a wide range of tasks including image modeling, trajectory modeling and continual online learning \\cite{Du2019generalization}. \nThanks to their appealing properties, EBMs have been introduced into much of the RL literature, for instance, parameterized as value functions~\\cite{sallans2004valueEBM}, employed in the actor-critic framework~\\cite{heess2012acEBMs}, applied to MaxEnt RL~\\cite{haarnoja2017reinforcement} and to model-based RL regularization~\\cite{boney2019regularizing}. \nHowever, EBMs are notoriously difficult to train due to the partition function \\cite{finn2016connection}. Nevertheless, recent works have tackled the problem of training large-scale EBMs on high-dimensional data, such as DEEN~\\cite{saremi2018deep}, which is applied in our implementation. Beyond DEEN, there remain plenty of choices for efficiently training EBMs \\cite{gutmann2012noise,Du2019generalization,nijkamp2019learn}. \n\\section{Experiments}\n\n\\subsection{Synthetic Task}\n\\label{sec:one-domain}\n\n\\begin{wrapfigure}{r}{0.4\\textwidth}\n\\vspace{-7pt}\n\\includegraphics[width=1\\linewidth]{figs\/KL-crop.pdf}\n\\vspace{-2pt}\n\\caption{The KL divergence between the agent trajectories and the expert's during the learning procedure, which indicates that EBIL is much more stable than the other methods. 
The blue dashed line represents the converged result of EBIL.}\n\\vspace{-14pt}\n\\label{fig:kl-div}\n\\end{wrapfigure}\n\nIn the synthetic task, we evaluate the qualitative performance of different IL methods by displaying heat maps of the learned rewards and sampled trajectories. \nAs analyzed in Section~\\ref{sec:ebil} and \\ref{sec:discussion}, EBIL is capable of guiding the agent to recover the expert policy and correspondingly generate high-quality trajectories. To demonstrate this point, we evaluate EBIL along with its counterparts (GAIL~\\cite{ho2016generative}, AIRL~\\cite{fu2017learning} and RED~\\cite{wang2019red}) on a synthetic environment where the agent moves in a one-dimensional space. \nSpecifically, the state space is $[ -0.5, 10.5 ]$ and the action space is $[-1, 1]$. The environment initializes the state at $0$, and we set the expert policy as a fixed rule-based policy: $\\pi_E = {\\mathcal{N}}(0.25, 0.06)$ when the state $s\\in [ -0.5, 5 )$, and $\\pi_E = {\\mathcal{N}}(0.75, 0.06)$ when $s\\in [ 5, 10.5 ]$. The sampled expert demonstration contains $40$ trajectories with up to $30$ timesteps each. \nFor all methods, we choose SAC \\cite{haarnoja2018soft} as the learning algorithm.\n\nWe plot the KL divergence between the agent's and the expert's trajectories during the training procedure in \\fig{fig:kl-div} and visualize the final estimated rewards with the corresponding induced trajectories in \\fig{fig:heat_maps}.\nAs illustrated in \\fig{fig:ebil_heat} and \\fig{fig:kl-div}, the reward estimated by EBIL successfully captures the likelihood of the expert trajectories, and the induced policy quickly converges to the expert policy. By contrast, GAIL requires a noisy adversarial process to correct the policy. 
As a result, although GAIL achieves comparable final performance to EBIL~(\\fig{fig:gail_heat}), it suffers from slow, unstable training, as shown in \\fig{fig:kl-div}, and assigns arbitrary rewards to some regions of the state-action space.\nIn addition, as suggested in \\fig{fig:airl_heat} and \\fig{fig:red_heat} respectively, under this simple one-dimensional domain, AIRL does not in fact recover the energy as meaningful rewards, and RED suffers from diverged rewards and fails to imitate the expert.\n\n\n\n\\begin{figure*}[!t]\n\\centering\n\\subfigure[Expert Trajectories]{\n\\begin{minipage}[b]{0.28\\linewidth}\n\\label{fig:expert_heat}\n\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/expert-crop.pdf}\n\\vspace{-0.8pt}\n\\end{minipage}\n}\n\\hspace{-10pt}\n\\subfigure[EBIL]{\n\\begin{minipage}[b]{0.155\\linewidth}\n\\label{fig:ebil_heat}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/ebil_reward_heat_40-crop.pdf}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/ebil_agent_heat_40_667-crop.pdf}\n\\end{minipage}\n}\n\\subfigure[GAIL]{\n\\begin{minipage}[b]{0.155\\linewidth}\n\\label{fig:gail_heat}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/gail_heat_40_19890-crop.pdf}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/gail_agent_heat_40_15000-crop.pdf}\n\\end{minipage}\n}\n\\subfigure[AIRL]{\n\\begin{minipage}[b]{0.155\\linewidth}\n\\label{fig:airl_heat}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/airl_heat_40_16000_1-crop.pdf}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/airl_agent_heat_40_16000-crop.pdf}\n\\end{minipage}\n}\n\\subfigure[RED]{\n\\begin{minipage}[b]{0.155\\linewidth}\n\\label{fig:red_heat}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/rnd_heat_40_499-crop.pdf}\n\\includegraphics[width=1\\linewidth]{figs\/heat-crop\/rnd_agent_heat_40_19999-crop.pdf}\n\\end{minipage}\n}\n\\vspace{-7pt}\n\\caption{Heat maps of the expert trajectories (leftmost), the (final) 
\\textit{estimated rewards} recovered by different methods (top) and the corresponding \\textit{induced policies} (bottom). The horizontal and the vertical axis denote the \\textit{state space} and the \\textit{action space} respectively. The red dotted line represents the position where the agent should change its policy. \nIt is worth noting that EBIL and RED both learn fixed reward functions, while GAIL and AIRL iteratively update the reward signals. We do not compare with BC since it learns the policy via supervised learning without recovering any reward signals.}\n\\vspace{-20pt}\n\\label{fig:heat_maps}\n\\end{figure*}\n\n\n\n\\subsection{Mujoco Tasks}\n\nWe further test our method on five continuous control benchmarking Mujoco environments: Humanoid, Hopper, Walker2d, Swimmer and InvertedDoublePendulum. In these experiments, we again compare EBIL against GAIL, AIRL, GMMIL and RED, where we employ Trust Region Policy Optimization (TRPO)~\\cite{schulman2015trust} as the learning algorithm for all evaluated methods. Expert agents are trained with the OpenAI Baselines version~\\cite{dhariwal2016openai} of Proximal Policy Optimization (PPO)~\\cite{schulman2017proximal}. Furthermore, we sample 4 trajectories from the trained expert policy, as \\cite{ho2016generative,wang2019red} do.\nThe training curves along with more experiments are in \\ap{ap:exp}, showing the training stability of the algorithm.\n\nAs shown in \\tb{tab:mujoco}, EBIL achieves the best or comparable performance in all environments, indicating that the energy serves as an excellent fixed reward function that efficiently guides the agent's imitation learning. It is worth noting that we do not apply BC initialization for any task. 
Surprisingly, we find that AIRL fails on several environments even after extensive tuning, similar to the results in \\cite{liu2019state}.\n\n\\begin{table*}[t]\n\\vskip 0.15in\n\\caption{Comparison for different methods of the episodic true rewards on 5 continuous control benchmarks. The means and the standard deviations are evaluated over 50 runs.}\n\\vspace{-5pt}\n\\begin{center}\n\\begin{scriptsize}\n\\begin{tabular}{cccccc}\n\\toprule\n& Humanoid & Hopper & Walker2d & Swimmer & InvertedDoublePendulum\\\\\n\\midrule\nRandom & 100.38 $\\pm$ 28.25 & 14.21 $\\pm$ 11.20 & 0.18 $\\pm$ 4.35 & 0.89 $\\pm$ 10.96 & 49.57 $\\pm$ 16.88 \\\\\n\\hline\nBC & 178.74 $\\pm$ 55.88 & 28.04 $\\pm$ 2.73 & 312.04 $\\pm$ 83.83 & 5.93 $\\pm$ 16.77 & 138.81 $\\pm$ 39.99\\\\\nGAIL & 145.84 $\\pm$ 7.01 & 459.33 $\\pm$ 216.79 & 278.93 $\\pm$ 36.82 & 23.79 $\\pm$ 21.84 & 122.71 $\\pm$ 71.36 \\\\\nAIRL & 286.63 $\\pm$ 6.05 & 126.92 $\\pm$ 62.39 & 215.79 $\\pm$ 23.04 & -13.44 $\\pm$ 2.69 & 76.78 $\\pm$ 19.63 \\\\\nGMMIL & 416.83 $\\pm$ 59.46 & 1000.87 $\\pm$ 0.87 & 1585.91 $\\pm$ 575.72 & -0.73 $\\pm$ 3.28 & 4244.63 $\\pm$ 3228.14\\\\\nRED & 140.23 $\\pm$ 19.10 & 641.08 $\\pm$ 2.24 & 641.13 $\\pm$ 2.75 & -3.55 $\\pm$ 5.05 & 6400.19 $\\pm$ 4302.03 \\\\\n\\textbf{EBIL (Ours)} & \\textbf{472.22 $\\pm$ 107.72} & \\textbf{1040.99 $\\pm$ 0.53} & \\textbf{2334.55 $\\pm$ 633.91} & \\textbf{58.09 $\\pm$ 2.03} & \\textbf{8988.37$\\pm$ 1812.76}\\\\\n\\hline\nExpert (PPO) & 1515.36 $\\pm$ 683.59 & 1407.36 $\\pm$ 176.91 & 2637.27 $\\pm$ 1757.72 & 122.09 $\\pm$ 2.60 & 6129.10 $\\pm$ 3491.47 \\\\\n\\bottomrule\n\\end{tabular}\n\\label{tab:mujoco}\n\\end{scriptsize}\n\\end{center}\n\\vspace{-15pt}\n\\end{table*}\n\\section{Conclusion}\n\nIn this paper, we propose Energy-Based Imitation Learning (EBIL), which shows that it is feasible to compute a fixed reward function by directly estimating the expert's energy to help agents learn from demonstrations. 
We further theoretically discuss the connections of our method with Maximum Entropy Inverse Reinforcement Learning (MaxEnt IRL) and reveal that EBIL is a dual problem of MaxEnt IRL that provides a simplified and efficient solution. We empirically show the comparable or better performance of EBIL against SoTA Imitation Learning (IL) algorithms on multiple tasks. Future work can focus on different energy estimation methods for expert demonstrations and on exploring more properties of EBMs in IL.\n\n\\clearpage\n\\section{Broader Impact}\n\nAs a potential positive impact, EBIL simplifies the learning procedure and increases the efficiency of IL, so it can be applied to practical decision-making problems where agents are required to imitate demonstrated behaviors, such as robotics and autonomous driving. However, negative consequences also exist: advances in automation driven by IL may cause workers engaged in repetitive tasks to be displaced by robots. After all, teaching machines with large amounts of expert demonstrations can be much cheaper than hiring hundreds of teachers and skillful employees. 
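The two-phase structure of EBIL (estimate the expert energy once, then run standard RL on the fixed surrogate reward $h(-E)$) can be sketched as follows; the Gaussian energy fit and the greedy action selection below are stand-ins, assumed purely for illustration, for the DEEN estimator and the RL step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase 1: estimate a (hypothetical) energy of expert state-action pairs.
# A diagonal Gaussian fit stands in for the DEEN estimator.
expert_sa = rng.normal([0.25, 0.75], 0.05, size=(500, 2))
mean, var = expert_sa.mean(axis=0), expert_sa.var(axis=0)

def energy(sa):
    # Negative log-density of the fitted Gaussian (up to a constant).
    return 0.5 * np.sum((sa - mean) ** 2 / var, axis=-1)

# Phase 2: the energy is frozen and used as the surrogate reward r = h(-E),
# with h a monotonically increasing linear map (here, scaling by alpha).
alpha = 1.0
def reward(sa):
    return alpha * -energy(sa)

# Stand-in for the RL step: among candidate actions, the surrogate reward
# prefers the one closest to expert behaviour.
candidates = np.array([[0.25, 0.75], [0.9, 0.1]])
best = candidates[np.argmax(reward(candidates))]
```

Because the reward is fixed before policy learning starts, the second phase is an ordinary (Max-Ent) RL loop with no adversarial inner optimization.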
\n\\section{Algorithm}\n\\label{ap:algo}\n\\begin{algorithm}[!h]\n \\caption{Energy-Based Imitation Learning}\n \\label{alg:EBIL}\n \\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} Expert demonstration data $\\tau_E=\\{ (s_i, a_i) \\}_{i=1}^{N}$, parameterized energy-based model $E_{\\phi}$, parameterized policy $\\pi_{\\theta}$;\n \n \\FOR{$k = 0, 1, 2, \\dotsc$}\n \\STATE Optimize $\\phi$ with the objective in \\eq{eq:deen}.\n \\ENDFOR\n \n \\STATE Compute the surrogate reward function $\\hat{r}$ via \\eq{eq:reward}.\n \n \\FOR{$k = 0, 1, 2, \\dotsc$}\n \\STATE Update $\\theta$ with a normal RL procedure using the surrogate reward function $\\hat{r}$.\n \\ENDFOR\n \n \\RETURN {$\\pi_{\\theta}$}\n \\end{algorithmic}\n\\end{algorithm}\n\n\\section{Proofs}\n\n\\subsection{Proof of \\protect\\prop{prop:eb-il}}\n\\label{ap:eb-il}\nBefore showing the equivalence between the reverse KL divergence objective and the EBIL objective, we first present the following lemma.\n\\begin{lemma}[Lemma 3 of \\cite{ho2016generative}]\n $\\overline{H}$ is strictly concave, and for all $\\pi \\in \\Pi$ and $\\rho \\in \\caD$, we have $H(\\pi) = \\overline{H}(\\rho_\\pi)$ and $\\overline{H}(\\rho)=H(\\pi_\\rho)$~,\nwhere $\\overline{H}\\left (\\rho \\right )=-\\sum_{s,a}\\rho\\log{\\rho(s,a)\/\\sum_{a'}\\rho(s,a')}$ is the entropy of the occupancy measure.\n\\end{lemma}\n\n\\begin{proof}[Proof of \\protect\\prop{prop:eb-il}]\n\nSubstituting \\eq{eq:ebm} into \\eq{eq:kl-il} for the expert policy ${\\pi_E}$, one can obtain: \n\\begin{equation}\\label{eq:kl-induce}\n \\begin{aligned}\n \\kld ( \\rho_\\pi \\| \\rho_{{\\pi_E}}) &= \\sum_{s,a} \\rho_\\pi(s,a) \\log \\frac{\\rho_\\pi(s,a)}{\\rho_{\\pi_E}(s,a)}\\\\\n &= \\sum_{s,a}\\rho_\\pi\\left( \\log{\\rho_\\pi(s,a)} - \\log{\\frac{e^{-E_{\\pi_E}(s,a)}}{(1-\\gamma)Z'}} \\right )\\\\\n &= \\sum_{s,a}\\rho_\\pi\\left(E_{\\pi_E}(s,a)+\\log{\\rho_\\pi(s,a)} + \\log{(1-\\gamma)Z'}\\right )\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\rho_\\pi(s,a)} + \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\rho_\\pi(s,a)} -\\sum_{s,a}\\rho_\\pi\\log{\\sum_{a'}\\rho_\\pi(s,a')} + \\sum_{s,a}\\rho_\\pi\\log{\\sum_{a'}\\rho_\\pi(s,a')} + \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ]+\\sum_{s,a}\\rho_\\pi\\log{\\left [ \\rho_\\pi(s,a)\/\\sum_{a'}\\rho_\\pi(s,a')\\right ]}+ \\text{const}\\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ] - \\overline{H}\\left (\\rho_\\pi \\right ) + \\text{const}\n \\\\\n &= \\bbE_\\pi \\left[E_{\\pi_E}(s,a)\\right ] - H(\\pi) + \\text{const}\n ~,\n \\end{aligned}\n\\end{equation}\nwhere $E_{\\pi_E}$ is the EBM of policy ${\\pi_E}$ and $Z'$ is its partition function. Therefore \\eq{eq:kl-il} leads to the objective function of EBIL \\eq{eq:eb-il}:\n\\begin{equation}\\label{eq:kl-eq-ebil}\n \\begin{aligned}\n \\argmin_{\\pi} \\kld ( \\rho_\\pi \\| \\rho_{{\\pi_E}}) = \\argmax_{\\pi} \\bbE_\\pi \\left[-E_{\\pi_E}(s,a)\\right ] + H(\\pi)\n \\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\n\\subsection{Proof of \\protect\\prop{prop:ebil-dual}}\n\\label{ap:ebil-dual}\n\n\\begin{proof}[Proof of \\protect\\prop{prop:ebil-dual}]\n As in \\cite{ho2016generative}, we relax \\eq{eq:irl-dual} into the following form, motivated by \\eq{eq:rl-irl}:\n \\begin{equation}\n \\min_{\\pi} d_\\psi(\\rho_\\pi,\\rho_{\\pi_E})- H(\\pi)~,\n \\end{equation}\n where we modify the IRL regularizer $\\psi$ so that $d_\\psi(\\rho_\\pi,\\rho_{\\pi_E}) \\triangleq \\psi^*(\\rho_\\pi - \\rho_{\\pi_E})$ is a smooth distance metric that penalizes violations in the difference between the occupancy measures.\n By choosing $\\psi= \\bbE_{\\pi_E} [-1-\\log(r(s, a))+r(s, a)]$, we obtain $d_\\psi(\\rho_\\pi,\\rho_{\\pi_E})=D_{\\text{KL}}(\\rho_\\pi\\|\\rho_{\\pi_E})$\\footnote{Full derivations can be found in Appendix D of \\protect\\cite{ghasemipour2019divergence} and we replace $c(s,a)$ with $-r(s,a)$.}. 
Thus we have:\n \\begin{equation}\\label{eq:kl-h}\n \\min_{\\pi} D_{\\text{KL}}(\\rho_\\pi\\|\\rho_{\\pi_E})- H(\\pi)~.\n \\end{equation}\n Referring to \\eq{eq:kl-induce}, we have:\n \\begin{equation}\n D_{\\text{KL}}(\\rho_\\pi\\|\\rho_{\\pi_E}) = \\bbE_\\pi\\left[E_{\\pi_E}(s,a)\\right ] - H(\\pi) + \\text{const}~.\n \\end{equation}\n Thus, we can rewrite \\eq{eq:kl-h} as the following optimization problem:\n \\begin{equation}\\label{eq:eb-il-2}\n \\pi^* = \\argmax_{\\pi} \\bbE_\\pi \\left[-E_{\\pi_E}(s,a)\\right ] + 2H(\\pi)~,\n \\end{equation}\n which leads to the EBIL objective \\eq{eq:eb-il} with the temperature hyperparameter $\\alpha=2$.\n This indicates that EBIL is a dual problem of the MaxEnt IRL problem.\n\\end{proof}\n\n\\subsection{Proof of \\protect\\prop{prop:tau-kl}}\n\\label{ap:tau-kl}\n\n\\begin{proof}[Proof of \\protect\\prop{prop:tau-kl}]\nSuppose we have recovered the optimal reward function $\\hat{r}^*$; we then reduce the KL divergence between the two trajectory distributions to a forward MaxEnt RL objective.\n\nBy the chain rule, the induced trajectory distribution $p(\\tau)$ is given by\n\\begin{equation}\n p(\\tau) = p(s_0)\\prod_{t=0}^{T}P(s_{t+1}|s_t,a_t)\\pi(a_t|s_t)~.\n\\end{equation}\n\nSuppose the desired expert trajectory distribution $p(\\tau_E)$ is given by\n\\begin{equation}\n\\begin{aligned}\np(\\tau_E) &\\propto p(s_0)\\prod_{t=0}^{T}P(s_{t+1}|s_t,a_t) \\exp(\\hat{r}^*(\\tau))\\\\\n&= p(s_0)\\prod_{t=0}^{T}P(s_{t+1}|s_t,a_t) \\exp(\\sum_{t=0}^{T}\\hat{r}^*(s_t, a_t))~.\n\\end{aligned}\n\\end{equation}\nNow we show that the optimization problem \\eq{eq:tau-kl} is equivalent to a forward MaxEnt RL procedure given the optimal reward $\\hat{r}^*$:\n\\begin{equation}\\label{eq:tau-kl-induce-1}\n\\begin{aligned}\n \\kld(p(\\tau)\\|p(\\tau_E))&= \\sum_{\\tau\\sim\\pi} p(\\tau) \\log \\frac{p(\\tau)}{p(\\tau_E)}\\\\\n &= \\sum_{\\tau\\sim\\pi}p(\\tau)\\left( \\log{p(\\tau)} - \\log{p(\\tau_E)} \\right )\\\\\n &= \\bbE_{\\tau\\sim\\pi} \\left[ \\log p(s_0) + \\sum_{t=0}^T \\left (\\log P(s_{t+1}|s_t,a_t) + \\log \\pi(a_t|s_t)\\right ) - \\right. \\\\\n &~~~~~\\left. \\log p(s_0) - \\sum_{t=0}^T \\left (\\log P(s_{t+1}|s_t,a_t) + \\hat{r}^*(s_t, a_t)\\right ) \\right] + \\text{const}\\\\\n &= -\\bbE_{\\tau \\sim p(\\tau)} \\left[ \\sum_{t=0}^T \\left( \\hat{r}^*(s_t, a_t) - \\log \\pi(a_t|s_t) \\right) \\right] + \\text{const} \\\\\n &= -\\sum_{t=0}^T \\bbE_{(s_t, a_t) \\sim \\rho(s_t, a_t)} [\\hat{r}^*(s_t,a_t) - \\log \\pi(a_t|s_t)] + \\text{const}~.\n\\end{aligned}\n\\end{equation}\n\nApproximating the finite-horizon sum $\\sum_{t=0}^T \\bbE_{(s_t, a_t)}$ with the infinite-horizon expectation $\\bbE_{\\pi}$ by definition, we have\n\\begin{equation}\\label{eq:tau-kl-induce-2}\n\\begin{aligned}\n \\kld(p(\\tau)\\|p(\\tau_E))\n &\\approx -\\bbE_{(s, a) \\sim \\rho(s, a)} [\\hat{r}^*(s,a) - \\log \\pi(a|s)] + \\text{const}\\\\\n &= -\\bbE_{\\pi} [\\hat{r}^*(s,a)] + \\bbE_{\\pi}[\\log \\pi(a|s)] + \\text{const}\\\\\n &= -\\left( \\bbE_{\\pi} [\\hat{r}^*(s,a)] + H(\\pi) \\right) + \\text{const}~.\n\\end{aligned}\n\\end{equation}\n\nThus minimizing the objective \\eq{eq:tau-kl} is equivalent to the following optimization problem:\n\\begin{equation}\n \\max_{\\pi} \\bbE_\\pi\\left [\\hat{r}^*(s,a)\\right ]+ H(\\pi)~, \n\\end{equation}\nwhich is exactly the objective of a forward MaxEnt RL procedure (\\eq{eq:rl-hr}). 
This indicates that when MaxEnt IRL recovers the optimal reward function, a forward RL procedure leading to the optimal (expert) policy is equivalent to minimizing the reverse KL divergence between the trajectories sampled by the agent and by the expert.\n\\end{proof}\n\n\n\\subsection{Proof of \\protect\\prop{prop:kl-kl}}\n\\label{ap:kl-kl}\n\\begin{proof}[Proof of \\protect\\prop{prop:kl-kl}]\n From the derivation in \\eq{eq:tau-kl-induce-2}, we know that the optimization problem \\eq{eq:tau-kl} is equivalent to a forward MaxEnt RL problem \\eq{eq:rl-hr}. Let $r^*(s,a)=-E(s,a)$, and we exactly obtain the EBIL objective \\eq{eq:eb-il}. \n Note that the reverse KL divergence IL objective \\eq{eq:kl-il} can also be derived into the EBIL objective \\eq{eq:eb-il} following \\eq{eq:kl-induce}. Thus \\eq{eq:tau-kl} is equivalent to both the reverse KL divergence IL objective \\eq{eq:kl-il} and the EBIL objective \\eq{eq:eb-il}.\n\\end{proof}\n\n\\section{Experiments}\n\\label{ap:exp}\n\\subsection{Hyperparameters}\n\nWe show the hyperparameters for both DEEN training and policy training on different tasks in \\tb{tab:hyperparameters}. Specifically, we use MLPs for both the DEEN network and the policy network.\n\n\\begin{table*}[htbp]\n\\caption{Important hyperparameters used in our experiments}\n\\label{tab:hyperparameters}\n\\centering\n\\resizebox{0.7\\textwidth}{!}{\n\\begin{tabular}{llcccccc}\n\\toprule\n& Hyperparameter & One-D. & Human. & Hop. & Walk. & Swim. & Invert. 
\\\\\n\\midrule\n\\multirow{4}{*}{Policy} & Hidden layers & 3 & 3 & 3 & 3 & 3 & 3 \\\\\n& Hidden Size & 200 & 200 & 200 & 200 & 200 & 200 \\\\\n& Iterations & 6000 & 6000 & 6000 & 6000 & 6000 & 6000 \\\\\n& Batch Size & 32 & 32 & 32 & 32 & 32 & 32 \\\\\n\\midrule\n\\multirow{6}{*}{DEEN} & Hidden layers & 3 & 3 & 3 & 4 & 3 & 3 \\\\\n& Hidden size & 200 & 200 & 200 & 200 & 200 & 200 \\\\\n& Epochs & 3000 & 3000 & 6000 & 500 & 1900 & 500 \\\\\n& Batch Size & 32 & 32 & 32 & 32 & 32 & 32 \\\\\n& Noise Scale $\\sigma$ & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\\\\n& Reward Scale $\\alpha$ & 1 & 1 & 5 & 1 & 1 & 1000 \\\\\n\\bottomrule\n\\end{tabular}}\n\\end{table*}\n\n\\subsection{Synthetic Task Training Procedure}\n\nWe demonstrate more training slices of the synthetic task in this section. \n\nWe analyze the learned behaviors during the training procedure of the synthetic task, as illustrated by visitation heatmaps in \\fig{fig:train-heat}. For each method, we choose to show four training stages from different training iterations. These figures provide more evidence that although GAIL can finally achieve good results, EBIL provides fast and stable training. By contrast, GMMIL and RED fail to achieve effective results during the whole training time.\\footnote{For better understanding how these methods learn reward signals, we also visualize the changes of estimated rewards during the training procedures. Videos can be seen at \\url{https:\/\/www.dropbox.com\/s\/0mrsoqyu040crdo\/video.zip?dl=0}.}\n\n\\begin{figure*}[htb]\n\\centering\n\\includegraphics[width=0.90\\linewidth]{figs\/policy-iter.pdf}\n\n\\caption{The induced policy during policy training procedures. In each figure the horizontal axis denotes the \\textit{state space}, and the vertical axis represents the \\textit{action space}. Methods from top to bottom are separately EBIL, GAIL, AIRL and RED and each one contains four training stages shown in one line. The color bar is the same as \\fig{fig:heat_maps}. 
The brighter the yellow color, the higher the visitation frequency.}\n\vspace{-15pt}\n\label{fig:train-heat}\n\end{figure*}\n\n\subsection{Training Curves}\n\label{ap:curves-mujoco}\n\nWe plot the episode-averaged return during the training procedure in \fig{fig:curves-mujoco}, where EBIL provides effective guidance that helps the agent learn good policies while remaining stable on all environments, in comparison to the other methods. In our experiments, we find that the training procedure of GAIL suffers from instability.\n\n\begin{figure*}[h!]\n\centering\n\includegraphics[width=1\linewidth]{.\/figs\/curves-mujoco.pdf}\n\n\vspace{-3pt}\n\caption{Training curves of GAIL, GMMIL, RED and EBIL on different continuous control benchmarking tasks, where the solid curves depict the mean and the shaded areas indicate the standard deviation. Each iteration contains 1024 timesteps.}\n\vspace{-15pt}\n\label{fig:curves-mujoco}\n\end{figure*}\n\n\n\subsection{Energy Evaluation}\n\label{curves-energy}\n\nSince the loss function of DEEN cannot be directly used as an indicator to evaluate the quality of the learned energy network, we instead evaluate the average energy value of expert trajectories and random trajectories on different tasks. As shown in \fig{fig:curves-energy}, DEEN eventually converges in all experiments, clearly distinguishing the expert data. It is worth noting that in our experiments, we find that with a well-trained energy network, agents may still find it hard to learn the expert policies in some environments. \nWe regard this as the ``sparse reward signals'' problem, as discussed in \se{sec:one-domain}. By contrast, sometimes a ``half-trained'' model may provide smoother rewards, and can help the agent to learn more efficiently.
A similar phenomenon also occurs when training the discriminator in a GAN.\n\nWe further analyze the training performance with energy models from different epochs in the ablation study in \se{sec:abl-study}.\n\n\begin{figure*}[h!]\n\n\centering\n\includegraphics[width=1\linewidth]{.\/figs\/curves-energy.pdf}\n\n\vspace{-33pt}\n\caption{Energy evaluation curves on different mujoco tasks, where the red line represents the average energy estimation on expert data and the blue line on random trajectories, each set containing 100 trajectories. Note that lower energy values correspond to higher rewards. The min value is -1, and the max is 1 since we use \textit{tanh} for the last layer of the DEEN network.}\n\vspace{-15pt}\n\label{fig:curves-energy}\n\end{figure*}\n\n\n\subsection{Ablation Study}\n\label{sec:abl-study}\n\nTo further understand which energy model is better for learning a good policy, we conduct an ablation study on energy models taken from different training epochs. The results are illustrated in \fig{fig:ablation-energy}, which verifies our intuition that a ``half-trained'' model can provide smoother rewards that alleviate the ``sparse reward'' problem, which is better for imitating the expert policy.\n\n\begin{figure*}[h!]\n\centering\n\includegraphics[width=0.45\linewidth]{.\/figs\/ablation-energy-hopper}\n\includegraphics[width=0.45\linewidth]{.\/figs\/ablation-energy-walker}\n\vspace{-4pt}\n\caption{The average episode rewards evaluated on Hopper-v2 and Walker2d-v2 by agents trained with energy models from different training epochs.}\n\vspace{-15pt}\n\label{fig:ablation-energy}\n\end{figure*}\n\n\section{Further Discussions}\n\n\subsection{Surrogate Reward Functions}\n\nAs discussed in \cite{kostrikov2018discriminator}, the reward function is highly related to the property of the task.
Positive rewards may achieve better performance in ``surviving''-style environments, and negative ones may be advantageous in ``per-step-penalty''-style environments. The different choices are common in imitation learning works based on GAIL, which can use either $\log(D)$ or $-\log(1-D)$, where\n$D\in [0,1]$ is the output of the discriminator, determined by the final ``\textit{sigmoid}'' layer. In our work, we choose ``\textit{tanh}'' as the final layer of the energy network, which restricts the energy to the range $[-1,1]$. In order to adapt to different environments while preserving the good properties of the energy, we can apply a monotonically increasing linear function $h$ as the surrogate reward function, which applies a translation or scaling to the energy outputs. In all of our tasks, the raw energy signal does not show a clear advantage, and thus we choose different $h$ for these tasks.\n\nIn the one-dimensional domain experiment, we choose the following surrogate reward function:\n\begin{equation}\n \hat{r}(s,a)=h(x)=x+1~,\n\end{equation}\nwhere $\hat{r} \in [0, 2]$ and $x=-E(s,a)$ is the negative energy. Thus, the non-expert state-action pairs will get close-to-zero rewards at each step, while the experts' get close-to-two rewards.\n\nIn mujoco tasks, we choose the surrogate reward function as:\n\begin{equation}\n \hat{r}(s,a)=h(x)=(x+1)\/2~,\n\end{equation}\nwhere $x=-E(s,a)$ is the negative energy. Note that we construct this reward function to make a normalized reward $\hat{r} \in [0, 1]$ so that the non-expert's state-action pairs will gain near-zero rewards while the experts' get close-to-one rewards at each step, given that the output range of the energy is $[-1,1]$.
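The two surrogate maps above can be written down directly; a minimal sketch (the function names and the stand-in energy values are ours, not from the paper's code):

```python
import numpy as np

def surrogate_1d(energy):
    """One-dimensional domain: h(x) = x + 1 with x = -E, so rewards lie in [0, 2]."""
    return -np.asarray(energy, dtype=float) + 1.0

def surrogate_mujoco(energy):
    """Mujoco tasks: h(x) = (x + 1) / 2 with x = -E, normalizing rewards to [0, 1]."""
    return (-np.asarray(energy, dtype=float) + 1.0) / 2.0

# tanh-bounded energies: expert-like pairs have energy near -1, non-expert near +1,
# so expert-like pairs map to the top of each reward range and non-expert ones toward 0.
energies = np.array([-1.0, 0.0, 1.0])
rewards_1d = surrogate_1d(energies)
rewards_mj = surrogate_mujoco(energies)
```

Since $h$ is monotonically increasing and linear, both maps preserve the ordering induced by the energy, only shifting and rescaling it to suit the environment's reward convention.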
In our experiments, rewards similar to those of the one-dimensional synthetic environment can also work well.\n\n\subsection{AIRL Does Not Recover the Energy}\n\label{ap:airl}\n\nAdversarial Inverse Reinforcement Learning (AIRL)~\cite{fu2017learning} is a SoTA IRL method that applies an adversarial architecture similar to GAIL to solve the IRL problem. Formally, AIRL constructs the discriminator as \n\begin{equation}\label{eq:airl-dis}\nD(s,a) = \frac{\exp(f(s,a))}{\exp(f(s,a)) + \pi(a|s)}~.\n\end{equation}\nThis is motivated by the earlier GCL-GAN work~\cite{finn2016connection}, which proposes that one can train GCL with a GAN by formulating the discriminator as\n\begin{equation}\label{eq:gcl-dis}\nD(\tau) = \frac{\frac{1}{Z}\exp(c(\tau))}{\frac{1}{Z}\exp(c(\tau)) + \pi(\tau)}~,\n\end{equation}\nwhere $\tau$ denotes the trajectory. AIRL uses a surrogate reward\n\begin{equation}\label{eq:airl-reward}\n\begin{aligned}\nr(s,a) &= \log D(s,a) - \log(1-D(s,a))\\\n&= f(s,a)-\log{\pi(a|s)}~,\n\end{aligned}\n\end{equation}\nwhich can be seen as an entropy-regularized reward function.\n\nHowever, the difference between \eq{eq:airl-dis} and \eq{eq:gcl-dis} indicates that AIRL does not actually recover the expert's energy, since it omits the partition function $Z$, which is important for estimating the energy. Also, the learning signal that drives the agent to learn a good policy is not a pure reward term but contains an entropy term itself.
We visualize the two reward choices ($f(s,a)$ and $f(s,a)-\log{\pi(a|s)}$) in \fig{fig:airlreward} as a comparison, which verifies our intuition that AIRL does not in fact recover the expert's energy as EBIL does.\n\n\begin{figure*}[!t]\n\centering\n\subfigure[Reward as $f(s,a)-\log{\pi(a|s)}$]{\n\begin{minipage}[b]{0.33\linewidth}\n\label{fig:logreward}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/airl_heat_40_16000_1-crop.pdf}\n\end{minipage}\n}\n\subfigure[Reward as $f(s,a)$]{\n\begin{minipage}[b]{0.393\linewidth}\n\label{fig:freward}\n\includegraphics[width=1\linewidth]{figs\/heat-crop\/airl_heat_40_16000_0_bar-crop.pdf}\n\end{minipage}\n}\n\vspace{-7pt}\n\caption{Heat maps of the different estimated rewards recovered by AIRL.}\n\label{fig:airlreward}\n\end{figure*}\n\n\subsection{Discussions with MaxEnt RL Methods}\n\label{ap:ebil-sac}\n\nSoft-Q Learning (SQL)~\cite{haarnoja2017reinforcement} and Soft Actor-Critic (SAC)~\cite{haarnoja2018soft} are two main approaches to MaxEnt RL; in particular, they propose to use a general energy-based policy of the form:\n\begin{equation}\label{eq:softq}\n \pi(a_t|s_t) \propto \exp{(-E(s_t, a_t))}~.\n\end{equation}\nTo connect the policy with soft versions of value functions and Q functions, they set the energy model $E(s_t,a_t) = -\frac{1}{\alpha}Q_{\text{soft}}(s_t,a_t)$, where $\alpha$ is the temperature parameter, such that the policy can be represented with the Q function, assigning the highest probability to the action with the highest Q value; this essentially provides a soft version of the greedy policy.
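Restricted to a finite action set, the energy-based policy in \eq{eq:softq} is simply a softmax over $Q_{\text{soft}}/\alpha$; a minimal illustrative sketch (the Q-values below are made up):

```python
import numpy as np

def energy_policy(q_soft, alpha=1.0):
    """pi(a|s) proportional to exp(-E(s,a)) with E = -Q_soft / alpha, i.e. a softmax over Q / alpha."""
    logits = np.asarray(q_soft, dtype=float) / alpha
    z = np.exp(logits - logits.max())  # subtract the max for numerical stability
    return z / z.sum()

q = np.array([1.0, 3.0, 0.5])            # made-up soft Q-values for one state
pi = energy_policy(q)
assert abs(pi.sum() - 1.0) < 1e-12       # a valid distribution
assert pi.argmax() == q.argmax()         # highest probability at the highest Q-value
assert energy_policy(q, alpha=0.1).max() > pi.max()  # low temperature -> closer to greedy
```

As $\alpha \to 0$ the distribution concentrates on the greedy action, which is the "soft version of the greedy policy" described above.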
Thus, one can choose to optimize the soft Q function to obtain the optimal policy by minimizing the expected KL-divergence:\n\n\begin{equation}\label{eq:softac}\n J(\pi) = \bbE_{s\sim \rho^s_{\caD}}\left[\kld\left( \pi(\cdot|s) \big \| \frac{\exp{(Q(s,\cdot))}}{Z(s)} \right) \right ]~,\n\end{equation}\nwhere $\rho^s_{\caD}$ is the distribution of previously sampled states, e.g., from a replay buffer. Therefore, the second term in the KL-divergence can in fact be regarded as the target or the reference for the policy.\n\nConsider using the KL-divergence as the distance metric in the general objective of IL shown in \eq{eq:il}; then we get:\n\begin{equation}\n\begin{aligned}\n \pi^* &= \argmin_\pi \mathbb{E}_{\pi} \left [\kld \left( \pi(\cdot|s) \big \| {\pi_E}(\cdot|s) \right) \right ]~.\n\end{aligned}\n\end{equation}\nIf we choose to model the expert policy using the energy form of \eq{eq:softq}, then we get:\n\begin{equation}\n\begin{aligned}\label{eq:softil}\n \pi^* &= \argmin_\pi \mathbb{E}_{\pi}\left [\kld \left( \pi(\cdot|s) \big \| \frac{\exp{(-E_{\pi_E}(s,a))}}{Z} \right) \right ]~.\n\end{aligned}\n\end{equation}\n\n\begin{proposition}\n\label{prop:ebil-sac}\n The IL objective shown in \eq{eq:softil} is equivalent to the EBIL objective shown in \eq{eq:eb-il}.\n\end{proposition}\n\begin{proof}\n Since \eq{eq:eb-il} is equivalent to \eq{eq:kl-il}, it has the optimal solution $\pi^*={\pi_E}$ according to \prop{prop:kl-il}. Also, it is easy to see that \eq{eq:softil} has the same optimal solution $\pi^*={\pi_E}$.\n\end{proof}\nThus, \prop{prop:ebil-sac} reveals the relation between MaxEnt RL and EBIL. Specifically, EBIL employs the energy model learned from expert demonstrations as the target policy.
The difference is that MaxEnt RL methods use the Q function to play the role of the energy function, construct it as the target policy, and iteratively update the Q function and the policy, while EBIL directly utilizes the energy function to model the expert occupancy measure and constructs the target policy. \n\nAs a result, it makes sense to directly optimize the policy by taking the energy model as the target policy instead of the reward function, which leads to the optimal solution:\n\n\begin{equation}\n \begin{aligned}\n \pi^*(a|s)&= \frac{\rho_{{\pi_E}}(s,a)}{\sum_{a'}\rho_{{\pi_E}}(s,a')}\\\n &=\frac{\frac{1}{Z}\exp(-E_{{\pi_E}}(s,a))}{\frac{1}{Z}\sum_{a'}\exp{(-E_{{\pi_E}}(s,a'))}}\\\n &=\frac{\exp(-E(s,a))}{\sum_{a'}\exp{(-E(s,a'))}}~.\n \end{aligned}\n\end{equation}\nTherefore, the optimal solution can also be obtained by estimating the energy function and summing it over the action space, which may be intractable for high-dimensional or continuous action spaces. Nevertheless, this can be done in simple scenarios.","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section[Preliminaries]{Preliminaries}\n\nIn this section we discuss a local characterization of conformally flat hypersurfaces $f\colon\,M^3\to\mathbb{Q}^{4}(c)$ with three distinct principal curvatures \n and present examples of minimal conformally flat hypersurfaces in $\mathbb{Q}^{4}(c)$ with three distinct principal curvatures given by generalized cones over Clifford tori. \n\n\subsection{Characterization of conformally flat hypersurfaces}\n\n First we recall the notion of holonomic hypersurfaces. One says that a hypersurface $f\colon M^{n} \to \mathbb{Q}^{n+1}(c)$ is \emph{holonomic} if $M^n$ carries global orthogonal coordinates $(u_1,\ldots, u_{n})$ such that the coordinate vector fields \n$\partial_j=\dfrac{\partial}{\partial u_j}$ diagonalize the second fundamental form $I\!I$ of $f$.
\n\nSet $v_j=\\|\\partial_j\\|$\n and define $V_{j} \\in C^{\\infty}(M)$, $1\\leq j\\leq n$, by \n $I\\!I(\\partial_j, \\partial_j)=V_jv_j$, $1\\leq j\\leq n$. \nThen the first and second fundamental forms of $f$ are\n\\begin{equation} \\label{fundforms}\nI=\\sum_{i=1}^nv_i^2du_i^2\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,I\\!I=\\sum_{i=1}^nV_iv_idu_i^2.\n\\end{equation} \nDenote $v=(v_1,\\ldots, v_n)$ and $V=(V_{1},\\ldots, V_n)$. We call $(v,V)$ the pair associated to $f$. \nThe next result is well known.\n\n\\begin{proposition}\\label{fund}\n The triple $(v,h,V)$, where \n $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i},$\n satisfies the system of PDE's\n \n \\begin{eqnarray}\\label{sistema-hol}\n \\left\\{\\begin{array}{l}\n (i) \\ \\dfrac{\\partial v_i}{\\partial u_j}=h_{ji}v_j,\\,\\,\\,\\,\\,\\,\\,\\,\\,(ii) \\ \\dfrac{\\partial h_{ik}}{\\partial u_j}=h_{ij}h_{jk},\\vspace{1ex}\\\\\n (iii) \\ \\dfrac{\\partial h_{ij}}{\\partial u_i} + \\dfrac{\\partial h_{ji}}{\\partial u_j} + \\sum_{k\\neq i,j} h_{ki}h_{kj} + \n V_{i}V_{j}+cv_iv_j=0,\\vspace{1ex}\\\\\n (iv) \\ \\dfrac{\\partial V_{i}}{\\partial u_j}=h_{ji}V_{j},\\,\\,\\,\\,1\\leq i \\neq j \\neq k \\neq i\\leq n.\n \\end{array}\\right.\n \\end{eqnarray}\nConversely, if $(v,h,V)$ is a solution of $(\\ref{sistema-hol})$ on a simply connected open subset $U \\subset {\\mathbb R}^{n}$, with $v_i> 0$\neverywhere for all $1\\leq i\\leq n$,\nthen there exists a holonomic hypersurface $f\\colon U \\to \\mathbb{Q}^{n+1}(c)$ whose first and second fundamental forms are given by $(\\ref{fundforms}).$\n\\end{proposition}\n\nThe following characterization of conformally flat hypersurfaces $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ with three distinct principal curvatures was given in \\cite{ct}, improving a theorem due to Hertrich-Jeromin \\cite{h-j}. 
\n\n\\begin{theorem}\\label{main3} Let $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ \nbe a holonomic hypersurface whose associated pair $(v, V)$ satisfies \n\\begin{equation} \\label{holo.flat}\\sum_{i=1}^3\\delta_iv_i^2=0, \\,\\,\\,\\,\\,\\,\\sum_{i=1}^3\\delta_iv_iV_i=0\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,\\sum_{i=1}^3\\delta_iV_i^2=1, \n\\end{equation} \nwhere $(\\delta_1, \\delta_2, \\delta_3)= (1,-1, 1)$. Then $M^3$ is conformally flat and $f$ has three distinct principal curvatures. \n\nConversely, any conformally flat hypersurface $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ with three distinct \nprincipal curvatures is locally a \nholonomic hypersurface whose associated pair $(v, V)$ satisfies $(\\ref{holo.flat}).$\n\\end{theorem}\n\n\n\nIt will be convenient to use the following equivalent version of Theorem~\\ref{main3}.\n\n\n\\begin{corollary}\\label{le:asspair}\nLet $f\\colon M^3\\to \\mathbb{Q}^4(c)$ be a holonomic hypersurface whose associated pair $(v,V)$ satisfies \n\\begin{equation}\\label{holo.flat.med.2}\n\\begin{array}{lcl}\nv_2^2=v_1^2+v_3^2, & \\quad &\n\\displaystyle{V_2=-\\frac{1}{3}\\Big(\\frac{v_1}{v_3}-\\frac{v_3}{v_1}\\Big) + \\frac{v_2}{3}H}, \\vspace{.2cm} \\\\\n\\displaystyle{V_1=-\\frac{1}{3}\\Big(\\frac{v_2}{v_3}+\\frac{v_3}{v_2}\\Big) + \\frac{v_1}{3}H}, & \\quad &\n\\displaystyle{V_3=\\frac{1}{3}\\Big(\\frac{v_1}{v_2}+\\frac{v_2}{v_1}\\Big) + \\frac{v_3}{3}H},\n\\end{array}\n\\end{equation}\nwhere $H$ is\nthe mean curvature function of $f$. Then $M^3$ is conformally flat and $f$ has three distinct principal curvatures. 
\n\nConversely, any conformally flat hypersurface $f\\colon\\,M^3\\to\\mathbb{Q}^{4}(c)$ with three distinct \nprincipal curvatures is locally a \nholonomic hypersurface whose associated pair $(v, V)$ satisfies $(\\ref{holo.flat.med.2}).$\n\\end{corollary}\n\\begin{proof} It suffices to show that equations (\\ref{holo.flat}) together with \n\\begin{equation}\\label{lem1.med}\n\\begin{array}{l}\nH=\\sum_{i=1}^3{V_iv_i^{-1}}\n\\end{array}\n\\end{equation}\nare equivalent to (\\ref{holo.flat.med.2}). For that, consider the Minkowski space ${\\mathbb L}^3$ endowed with the Lorentz inner product \n$$\\<(x_1,x_2,x_3), (y_1,y_2,y_3)\\> = x_1y_1-x_2y_2+x_3y_3.$$\nThen the conditions in (\\ref{holo.flat}) say that $v=(v_1, v_2, v_3)$ and $V=(V_1, V_2, V_3)$ are orthogonal with respect to such inner product, $v$ is light-like and $V$ is a unit space-like vector. Since $w=(-v_3,0,v_1)\\in {\\mathbb L}^3$ is orthogonal to $v,$ we have $v^\\perp=\\mbox{span}\\{v,w\\}.$ As $V\\in v^{\\perp},$ we can write $V=av+bw$ for some $a,b \\in C^{\\infty}(M^3)$. 
Note that $V_2=av_2.$ \nUsing $(\\ref{holo.flat})$ we obtain\n $$1=\\langle V,V \\rangle = \\langle av+bw,av+bw \\rangle=b^2\\langle w,w \\rangle=b^2v_2^2.$$ \n Thus $V=\\frac{V_2}{v_2}v+\\frac{\\lambda}{v_2}w,$ with $\\lambda=\\pm 1.$ Therefore \n\\begin{equation}\\label{V1.e.V3.med.}\n\\begin{array}{lll}\n\\displaystyle{V_1=\\frac{1}{v_2}(V_2v_1-\\lambda v_3)}\\qquad \\mbox{and} \\qquad \\displaystyle{V_3=\\frac{1}{v_2}(V_2v_3+\\lambda v_1)}.\n\\end{array}\n\\end{equation}\nSubstituting (\\ref{V1.e.V3.med.}) in (\\ref{lem1.med}) \nwe obtain\n\\begin{equation}\\label{V2.med.}\n\\begin{array}{l}\n\\displaystyle{V_2=-\\frac{\\lambda}{3}\\Big(\\frac{v_1}{v_3}-\\frac{v_3}{v_1}\\Big) + \\frac{v_2}{3}H}.\n\\end{array}\n\\end{equation}\nSubstituting (\\ref{V2.med.}) in (\\ref{V1.e.V3.med.}) yields \n$$V_1=-\\frac{\\lambda}{3}\\Big(\\frac{v_2}{v_3}+\\frac{v_3}{v_2}\\Big) + \\frac{v_1}{3}H \\,\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,\nV_3=\\frac{\\lambda}{3}\\Big(\\frac{v_1}{v_2}+\\frac{v_2}{v_1}\\Big) + \\frac{v_3}{3}H,$$\nand changing the orientation, if necessary, we may assume that $\\lambda=1$.\\end{proof} \n\n\\subsection{Generalized cones over Clifford tori}\n\nFirst we show that, if $g\\colon {\\mathbb R}^2 \\to {\\mathbb S}^3\\subset {\\mathbb R}^4$ is the Clifford torus parametrized by \n\\begin{equation} \\label{eq:g}\n g(x_1,x_2)=\\frac{1}{\\sqrt{2}}(\\cos (\\sqrt{2} x_1), \\sin (\\sqrt{2} x_1),\\cos (\\sqrt{2} x_2), \\sin (\\sqrt{2} x_2)),\n \\end{equation} \n then the standard cone\n$F\\colon (0, \\infty)\\times {\\mathbb R}^2\\to {\\mathbb R}^4$ over $g$ given by\n$$F(s,x)=sg(x),\\,\\,\\,\\,x=(x_1, x_2),$$\nis a minimal conformally flat hypersurface. 
\n\n\nThe first and second fundamental forms of $F$ with respect to the unit normal vector field \n$$\eta(s, x_1, x_2)=\frac{1}{\sqrt{2}}(\cos (\sqrt{2} x_1), \sin (\sqrt{2} x_1),-\cos (\sqrt{2} x_2), -\sin (\sqrt{2} x_2))$$\nare\n$$\n\begin{array}{lll}\nI=ds^2+ s^2(dx_1^2+dx_2^2) \qquad \text{and} \qquad I\!I= s(-dx^2_1+dx^2_2).\n\end{array}\n$$\nIn terms of the new coordinates $u_1, u_2, u_3$, related to $s, x_1, x_2$ by\n$$\nu_2=\log s,\,\,\,\,u_1=\sqrt{2}x_1\,\,\,\,\mbox{and}\,\,\,\,\,u_3=\sqrt{2}x_2,\n$$\nthe first and second fundamental forms of $F$ become\n$$I=\frac{e^{2u_2}}{2}(du_1^2+2du_2^2+du_3^2)\,\,\,\,\,\mbox{and}\,\,\,\,\,I\!I=\frac{e^{u_2}}{2}(-du_1^2+du_3^2),\n$$\nhence $F$ is a minimal conformally flat hypersurface with three distinct principal curvatures, one of which is zero.\vspace{1ex}\n\nThe preceding example can be extended to the case in which the ambient space is any space form, yielding examples of minimal conformally flat hypersurfaces $f\colon M^3\to \mathbb{Q}^4(c)$ with three distinct principal curvatures also for $c\neq 0$.
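As a numerical cross-check (ours, not part of the original text), the pair $(v,V)$ read off from these fundamental forms, namely $v=(e^{u_2}/\sqrt{2},\,e^{u_2},\,e^{u_2}/\sqrt{2})$ and $V_i$ equal to the $du_i^2$-coefficient of $I\!I$ divided by $v_i$, satisfies the relations $\sum_i\delta_iv_i^2=0$, $\sum_i\delta_iv_iV_i=0$, $\sum_i\delta_iV_i^2=1$ with $(\delta_1,\delta_2,\delta_3)=(1,-1,1)$, together with $H=\sum_iV_i/v_i=0$:

```python
import math

u2 = 0.7                                     # arbitrary sample point
e = math.exp(u2)
v = (e / math.sqrt(2), e, e / math.sqrt(2))  # from I = (e^{2u2}/2)(du1^2 + 2 du2^2 + du3^2)
II = (-e / 2, 0.0, e / 2)                    # du_i^2-coefficients of the second fundamental form
V = tuple(II[i] / v[i] for i in range(3))
delta = (1, -1, 1)

assert abs(sum(d * vi ** 2 for d, vi in zip(delta, v))) < 1e-12          # v is light-like
assert abs(sum(d * vi * Vi for d, vi, Vi in zip(delta, v, V))) < 1e-12   # v and V are orthogonal
assert abs(sum(d * Vi ** 2 for d, Vi in zip(delta, V)) - 1.0) < 1e-12    # V is unit space-like
assert abs(sum(Vi / vi for Vi, vi in zip(V, v))) < 1e-12                 # mean curvature H = 0
```

The same check passes at any value of $u_2$, since $V=(-1/\sqrt{2},\,0,\,1/\sqrt{2})$ is constant while $v$ only rescales.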
\\vspace{.5ex}\n\n Start with the Clifford torus $g\\colon {\\mathbb R}^2\\to {\\mathbb S}^3\\subset {\\mathbb R}^4$ parametrized by (\\ref{eq:g}).\n If $c>0$, define $F\\colon (0, \\pi\/\\sqrt{c})\\times {\\mathbb R}^2\\to {\\mathbb S}^4(c)\\subset {\\mathbb R}^5={\\mathbb R}^4\\times {\\mathbb R}$ by\n$$F(s,x)=\\frac{1}{\\sqrt{c}}(\\cos ({\\sqrt{c}}s) e_5+\\sin ({\\sqrt{c}}s) g(x)),$$\nwhere $x=(x_1, x_2)$ and $e_5$ is a unit vector spanning the factor ${\\mathbb R}$ in the orthogonal decomposition ${\\mathbb R}^5={\\mathbb R}^4\\times {\\mathbb R}$.\n\nNotice that, for each fixed $s=s_0$, the map $F_{s_0}\\colon {\\mathbb R}^2\\to {\\mathbb S}^4(c)$, given by $F_{s_0}(x)=F(s_0, x)$, is also a Clifford torus \nin an umbilical hypersurface ${\\mathbb S}^3(\\tilde c)\\subset {\\mathbb S}^4(c)$ with curvature $\\tilde c=c\/\\sin^2(\\sqrt{c}s_0)$, which has \n$$N_{s_0}=F_*\\frac{\\partial}{\\partial s}|_{s=s_0}=-\\sin(\\sqrt{c}s_0)e_5+\\cos(\\sqrt{c}s_0)g$$\nas a unit normal vector field along $F_{s_0}$. 
Notice also that\n$$F(s+s_0, x)=\\cos(\\sqrt{c}s)F_{s_0}(x)+\\sin(\\sqrt{c}s)N_{s_0}(x),$$\nthus $s\\mapsto F(s,x)$ parametrizes the geodesic in ${\\mathbb S}^4(c)$ through $F_{s_0}(x)$ tangent to $N_{s_0}(x)$ at $F_{s_0}(x)$.\nHence $F$ is a generalized cone over $F_{s_0}$.\n\nThe first and second fundamental forms of $F$ with respect to the unit normal vector field \n$$\\eta(s, x_1, x_2)=\\frac{1}{\\sqrt{2}}(\\cos (\\sqrt{2} x_1), \\sin (\\sqrt{2} x_1),-\\cos (\\sqrt{2} x_2), -\\sin (\\sqrt{2} x_2), 0)$$\nare\n$$\n\\begin{array}{lll}\n\\displaystyle{I=ds^2+ \\frac{1}{c}\\sin^2 ({\\sqrt{c}}s)(dx_1^2+dx_2^2)} \\;\\; \\mbox{and} \\;\\; \\displaystyle{I\\!I=\\frac{\\sin ({\\sqrt{c}}s)}{\\sqrt{c}}(-dx^2_1+dx^2_2)}.\n\\end{array}\n$$\nIn terms of the new coordinates $u_1, u_2, u_3$, related to $s, x_1, x_2$ by\n\\begin{equation} \\label{eq:uisxis}\n\\frac{du_2}{ds}=\\frac{\\sqrt{c}}{\\sin(\\sqrt{c}s)},\\,\\,\\,\\,u_1=\\sqrt{2}x_1\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,u_3=\\sqrt{2}x_2,\\end{equation} \nthe first and second fundamental forms of $F$ become\n\\begin{equation} \\label{eq:forms}\nI=\\frac{\\sin^2 \\theta}{2c}(du_1^2+2du_2^2+du_3^2)\\,\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\,I\\!I=\\frac{\\sin \\theta}{2\\sqrt{c}}(-du_1^2+du_3^2),\n\\end{equation} \nwhere $\\theta=\\sqrt{c}s$, which, in view of the first equation in (\\ref{eq:uisxis}), \n satisfies \n$$\\frac{d\\theta}{du_2}=\\sin \\theta.$$\nIt follows from (\\ref{eq:forms}) that $F$ is a minimal conformally flat hypersurface.\n\nIf $c<0$, define\n$F\\colon (0, \\infty)\\times {\\mathbb R}^2\\to \\mathbb{H}^4(c)\\subset {\\mathbb L}^5$ by\n$$F(s,x)=\\frac{1}{\\sqrt{-c}}(\\cosh ({\\sqrt{-c}}s) e_5+\\sinh ({\\sqrt{-c}}s) g(x)),$$\nwhere $x=(x_1, x_2)$, $e_5$ is a unit time-like vector in ${\\mathbb L}^5$ and $e_5^\\perp$ is identified with ${\\mathbb R}^4$. 
\n\nAs in the previous case, for each fixed $s=s_0$ the map $F_{s_0}\colon {\mathbb R}^2\to \mathbb{H}^4(c)$, given by $F_{s_0}(x)=F(s_0, x)$, is also a Clifford torus \nin an umbilical hypersurface ${\mathbb S}^3(\tilde c)\subset \mathbb{H}^4(c)$ with curvature $\tilde c=-c\/\sinh^2(\sqrt{-c}s_0)$, and \n$F$ is a generalized cone over $F_{s_0}$.\n\nNow the first and second fundamental forms of $F$ \nare\n$$I=ds^2+ \frac{1}{-c}\sinh^2 ({\sqrt{-c}}s)(dx_1^2+dx_2^2)$$\nand\n$$I\!I=\frac{\sinh ({\sqrt{-c}}s)}{\sqrt{-c}}(-dx^2_1+dx^2_2).$$\nIn terms of the new coordinates $u_1, u_2, u_3$, related to $s, x_1, x_2$ by\n$$\n\frac{du_2}{ds}=\frac{\sqrt{-c}}{\sinh(\sqrt{-c}s)},\,\,\,\,u_1=\sqrt{2}x_1\,\,\,\,\mbox{and}\,\,\,\,\,u_3=\sqrt{2}x_2,\n$$\nthey become\n\begin{equation} \label{eq:formsb}I=\frac{\sinh^2 \theta}{-2c}(du_1^2+2du_2^2+du_3^2)\,\,\,\,\,\mbox{and}\,\,\,\,\,I\!I=\frac{\sinh \theta}{2\sqrt{-c}}(-du_1^2+du_3^2),\end{equation} \nwhere $\theta(s)=\sqrt{-c}s$\nsatisfies \n$$\frac{d\theta}{du_2}=\sinh \theta.$$\nIt follows from (\ref{eq:formsb}) that $F$ is a minimal conformally flat hypersurface with three distinct principal curvatures,\none of which is zero.\n\n\section{The proofs of Theorems \ref{thm:cmc} and \ref{thm:minimalcneq0}}\n\nFirst we derive a system of PDE's for new unknown functions associated to a conformally flat hypersurface $f\colon M^3\to \mathbb{Q}^4(c)$ with three distinct principal curvatures under the assumption that \n$f$ has constant mean curvature.\n\n\n\n\begin{proposition}\label{flat.med.reduz.result.}\nLet $f\colon M^3\to \mathbb{Q}^4(c)$ be a holonomic hypersurface with constant mean curvature $H$ whose associated pair $(v,V)$ satisfies $(\ref{holo.flat.med.2})$.
Set $$(\\alpha_1,\\alpha_2,\\alpha_3)=\\Big(\\frac{1}{v_2}\\frac{\\partial v_2}{\\partial u_1},\\frac{1}{v_3}\\frac{\\partial v_3}{\\partial u_2},\\frac{1}{v_1}\\frac{\\partial v_1}{\\partial u_3}\\Big).$$ \nThen $v_1, v_2, v_3, \\alpha_1, \\alpha_2, \\alpha_3$ satisfy the differential equations \n\\begin{equation} \\label{eq:e0}\\frac{\\partial v_1}{\\partial u_1}=\\frac{v_1}{v_2^4}(v_2^4+v_2^2v_3^2+v_3^4)\\alpha_1,\\,\\,\\,\\,\\,\\frac{\\partial v_2}{\\partial u_1}=v_2\\alpha_1,\\,\\,\\,\\,\\,\\frac{\\partial v_3}{\\partial u_1}=\\frac{v_3^5}{v_2^4}\\alpha_1,\\end{equation} \n\\begin{equation} \\label{eq:e0a}\\frac{\\partial v_1}{\\partial u_2}=\\frac{v_1^5}{v_3^4}\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial v_2}{\\partial u_2}=\\frac{v_2}{v_3^4}(v_1^4-v_1^2v_3^2+v_3^4)\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial v_3}{\\partial u_2}=v_3\\alpha_2,\\end{equation} \n\\begin{equation} \\label{eq:e0b}\\frac{\\partial v_1}{\\partial u_3}= v_1\\alpha_3,\\,\\,\\,\\,\\,\\frac{\\partial v_2}{\\partial u_3}=\\frac{v_2^5}{v_1^4}\\alpha_3,\\,\\,\\,\\,\\,\\frac{\\partial v_3}{\\partial u_3}=\\frac{v_3}{v_1^4}(v_1^4+v_1^2v_2^2+v_2^4)\\alpha_3,\\end{equation} \n\\begin{equation} \\label{eq:e1}\\begin{array}{l}\n{\\displaystyle \\frac{\\partial \\alpha_1}{\\partial u_1} = \\frac{1}{v_2^4}(3v_2^4-v_3^4)\\alpha_1^2 - \\frac{v_1^6}{v_3^8}(3v_1^2-2v_3^2)\\alpha_2^2 + \\frac{v_2^4}{v_1^2v_3^4}(3v_1^2+2v_2^2)\\alpha_3^2 } \\vspace{.17cm} \\\\ \n{\\displaystyle +\\ \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2)- \\frac{v_1^2v_2^2}{v_3^2}c + \\frac{v_1v_2}{18v_3^3}(v_2^2+v_3^2-2v_1v_2v_3H)H,} \\end{array}\n\\end{equation} \n\\begin{equation} \\label{eq:e1a}\\frac{\\partial \\alpha_2}{\\partial u_1}=2 \\frac{v_1^2}{v_2^2}\\alpha_1\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial \\alpha_3}{\\partial u_1}=\n2 \\frac{v_3^2}{v_2^4}(4v_3^2+v_1^2)\\alpha_1\\alpha_3,\\end{equation} \n\\begin{equation} \\label{eq:e2}\\begin{array}{l}\n{\\displaystyle \\frac{\\partial \\alpha_2}{\\partial u_2} = 
-\\frac{v_3^4}{v_1^4v_2^2}(3v_2^2 + 2v_3^2) \\alpha_1^2 -\\frac{1}{v_3^4}(v_1^4 - 3v_3^4)\\alpha_2^2 - \\frac{v_2^6}{v_1^8}(2v_1^2+3v_2^2)\\alpha_3^2 } \\vspace{.17cm} \\\\ \n{\\displaystyle -\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2) + \\frac{v_2^2v_3^2}{v_1^2}c - \\frac{v_2v_3}{18v_1^3}(v_1^2-v_3^2-2v_1v_2v_3H)H,} \n\\end{array}\n\\end{equation} \n\\begin{equation} \\label{eq:e2a}\\frac{\\partial \\alpha_1}{\\partial u_2}=2 \\frac{v_1^2}{v_3^4}(4v_1^2- v_2^2)\\alpha_1\\alpha_2,\\,\\,\\,\\,\\,\\frac{\\partial \\alpha_3}{\\partial u_2}=2 \\frac{v_2^2}{v_3^2}\\alpha_2\\alpha_3,\\end{equation} \n\\begin{equation} \\label{eq:e2b}\\frac{\\partial \\alpha_1}{\\partial u_3}=-2 \\frac{v_3^2}{v_1^2}\\alpha_1\\alpha_3,\\,\\,\\,\\,\\,\\frac{\\partial \\alpha_2}{\\partial u_3}=2 \\frac{v_2^2}{v_1^4}(4v_2^2- v_3^2)\\alpha_2\\alpha_3\\end{equation} \nand\n\\begin{equation} \\label{eq:e3}\\begin{array}{l}\n{\\displaystyle \\frac{\\partial \\alpha_3}{\\partial u_3} = \\frac{v_3^6}{v_2^8}(2v_2^2+3v_3^2)\\alpha_1^2 + \\frac{v_1^4}{v_2^4v_3^2}(2v_1^2-3v_3^2)\\alpha_2^2 + \\frac{1}{v_1^4}(3v_1^4-v_2^4)\\alpha_3^2 } \\vspace{.17cm} \\\\ \n{\\displaystyle + \\frac{1}{9v_2^4}(5v_3^4+2v_1^2v_2^2) - \\frac{v_1^2v_3^2}{v_2^2}c - \\frac{v_1v_3}{18v_2^3}(v_1^2+v_2^2+2v_1v_2v_3H)H,} \n\\end{array}\n\\end{equation} \nas well as the algebraic relations\n\\begin{equation}\\label{equ.alg.f.e.}\n\\begin{array}{l}\n\\displaystyle{\\Big(\\frac{30v_1}{v_2v_3^9} F - \\frac{4v_1^4v_2^2}{v_3^6}(v_1^2-v_3^2)c + \\frac{v_1^3v_2}{18v_3^7} m_2 H \\Big)\\alpha_2 =0}, \\vspace{-.25cm} \\\\ \\\\\n\\displaystyle{\\Big( \\frac{30v_2}{v_1^9v_3}F + \\frac{4v_2^4v_3^2}{v_1^6}(v_1^2+v_2^2)c + \\frac{v_3v_2^3}{18v_1^7} m_3 H \\Big)\\alpha_3=0} , \\vspace{-.25cm} \\\\ \\\\\n\\displaystyle{\\Big( \\frac{30v_3}{v_1v_2^9} F - \\frac{4v_1^2v_3^4}{v_2^6}(v_2^2+v_3^2)c + \\frac{v_1v_3^3}{18v_2^7}m_1 H \\Big)\\alpha_1=0 ,}\n\\end{array}\n\\end{equation}\n where\n\\begin{equation*}\n\\begin{array}{rcl}\nm_1 &=& 
(v_2^2+4v_3^2)(4v_2^2+v_3^2)-8v_1v_2v_3(v_2^2+v_3^2)H,\\vspace{.2cm} \\\\\nm_2 &=& (v_1^2-4v_3^2)(4v_1^2-v_3^2)-8v_1v_2v_3(v_1^2-v_3^2)H, \\vspace{.2cm}\\\\\nm_3 &=& (v_1^2+4v_2^2)(4v_1^2+v_2^2)+8v_1v_2v_3(v_1^2+v_2^2)H, \\vspace{.2cm}\\\\\nF &=& \\displaystyle{\\frac{1}{27v_1v_2v_3}\\big[9v_1^2v_3^8(v_2^2+v_3^2)\\alpha_1^2 + 9v_1^8v_2^2(v_1^2-v_3^2)\\alpha_2^2 - 9v_2^8v_3^2(v_1^2+v_2^2)\\alpha_3^2} \\vspace{.2cm}\\\\\n && - \\ v_1^2v_2^2v_3^2(2v_1^2v_2^4-2v_1^2v_3^4-2v_2^2v_3^4-v_1^2v_2^2v_3^2)\\big].\n\\end{array}\n\\end{equation*}\n\\end{proposition}\n\\proof\nThe triple $(v,h,V),$ where $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i},$ satisfies the system of PDE's\n\\begin{equation}\\label{hol.flat.f.esp.}\n\\left\\{\\begin{array}{l}\\displaystyle{\n\\!\\!\\!(i) \\frac{\\partial v_i}{\\partial u_j}= h_{ji}v_j, \\,\\,\\,\\,\\,\\,\\, (ii) \\frac{\\partial h_{ij}}{\\partial u_i} + \\frac{\\partial h_{ji}}{\\partial u_j} +h_{ki}h_{kj}+ V_iV_j + cv_iv_j =0}, \\vspace{.18cm}\\\\\n\\!\\!\\! \\displaystyle{(iii) \\frac{\\partial h_{ik}}{\\partial u_j}=h_{ij}h_{jk}, \\,\\,\\,\\,\\,\\,\\, (iv) \\frac{\\partial V_i}{\\partial u_j}=h_{ji}V_j, \\hspace{.5cm} 1\\leq i\\neq j\\neq k\\neq i \\leq 3,\\vspace{.18cm}}\\\\\n\\!\\!\\! \\displaystyle{(v) \\delta_i\\frac{\\partial v_i}{\\partial u_i}+ \\delta_jh_{ij}v_j +\\delta_kh_{ik}v_k=0, \\, \\ \\ (vi) \\delta_i\\frac{\\partial V_i}{\\partial u_i}+ \\delta_jh_{ij}V_j +\\delta_kh_{ik}V_k=0.}\n\\end{array}\\right. 
\\!\\!\\!\\!\\!\\!\\!\n\\end{equation}\nEquations $(i)$, $(ii)$, $(iii)$ and $(iv)$ are due to the fact that $f$ is a holonomic hypersurface with $(v,V)$ as its associated pair, and $(v)$ and $(vi)$ follow by differentiating (\\ref{holo.flat}).\n\nUsing (\\ref{holo.flat.med.2}) and equations $(i)$, $(iv)$, $(v)$ and $(vi)$ in (\\ref{hol.flat.f.esp.}) one can show that\n\\begin{equation}\\label{rel.h-ki-kj}\n v_j^5h_{ki}=v_i^5h_{kj}, \\quad 1\\leq i\\neq j\\neq k\\neq i\\leq 3.\n\\end{equation}\nFrom $(i)$ and $(v)$ of (\\ref{hol.flat.f.esp.}), together with (\\ref{rel.h-ki-kj}), one obtains the formulae for the derivatives $\\frac{\\partial v_i}{\\partial u_j},$ $1\\leq i,j\\leq 3$. In a similar way, using $(i)$, $(iii)$ and $(v)$, together with (\\ref{rel.h-ki-kj}), one finds the derivatives $\\frac{\\partial \\alpha_i}{\\partial u_j},$ $1\\leq i\\neq j\\leq 3$. In order to compute $\\frac{\\partial \\alpha_i}{\\partial u_i},$ $1\\leq i\\leq 3$, we note that equation $(ii)$, together with (\\ref{rel.h-ki-kj}) and the remaining equations in (\\ref{hol.flat.f.esp.}), determines the system of linear equations \n\\begin{equation*}\nMP=-B\n\\end{equation*}\nin the variables $\\frac{\\partial \\alpha_i}{\\partial u_i},$ $1\\leq i\\leq 3$, where\n\\begin{displaymath}\nM= \\left( \\begin{array}{ccc}\n 9v_1^2v_2^4v_3^2 & \\frac{9v_1^8v_2^2}{v_3^2} & 0\\vspace{.17cm}\\\\\n \\frac{9v_1^2v_3^8}{v_2^2} & 0 & 9v_1^4v_2^2v_3^2 \\vspace{.17cm}\\\\\n 0 & 9v_1^2v_2^2v_3^4 & \\frac{9v_2^8v_3^2}{v_1^2}\n\\end{array}\\right)\\!\\!, \\quad P= \\left( \\begin{array}{c}\n\\frac{\\partial \\alpha_1}{\\partial u_1} \\vspace{.17cm}\\\\\n\\frac{\\partial \\alpha_2}{\\partial u_2} \\vspace{.17cm}\\\\\n\\frac{\\partial \\alpha_3}{\\partial u_3}\n\\end{array}\\right)\n\\end{displaymath}\nand\n\\begin{displaymath}\nB= \\left( \\begin{array}{r}\n- 9v_1^2v_3^4(v_2^2+v_3^2)\\alpha_1^2 + \\frac{9v_1^8v_2^2}{v_3^6}(4v_2^2+v_3^2)(v_1^2-v_3^2)\\alpha_2^2 + 9v_2^8\\alpha_3^2 \\\\- 
v_1^2v_2^2(2v_3^4-v_1^2v_2^2) + \\ 9v_1^4v_2^4v_3^2 c - v_1^3v_2^3v_3(v_1^2+v_2^2-v_1v_2v_3H)H. \\vspace{0.32cm}\\\\\n - \\frac{9v_1^2v_3^8}{v_2^6}(4v_1^2+v_2^2)(v_2^2+v_3^2)\\alpha_1^2 + 9v_1^8\\alpha_2^2 - 9v_2^4v_3^2(v_1^2+v_2^2)\\alpha_3^2\\\\ - v_1^2v_3^2(2v_2^4+v_1^2v_3^2) + \\ 9v_1^4v_2^2v_3^4 c + v_1^3v_2v_3^3(v_1^2-v_3^2+v_1v_2v_3H)H. \\vspace{0.32cm}\\\\\n 9v_3^8\\alpha_1^2 - 9v_1^4v_2^2(v_1^2-v_3^2)\\alpha_2^2 + \\frac{9v_2^8v_3^2}{v_1^6}(4v_3^2-v_1^2)(v_1^2+v_2^2)\\alpha_3^2 \\\\ - v_2^2v_3^2(2v_1^4-v_2^2v_3^2) + \\ 9v_1^2v_2^4v_3^4 c + v_1v_2^3v_3^3(v_2^2+v_3^2+v_1v_2v_3H)H.\n\\end{array}\\right)\\!\\!.\n\\end{displaymath}\nOne can check that \nsuch system has a unique solution given by (\\ref{eq:e1}), (\\ref{eq:e2}) and (\\ref{eq:e3}).\n\nFinally, computing the mixed derivatives $\\frac{\\partial^2 \\alpha_i}{\\partial u_j\\partial u_k} = \\frac{\\partial^2 \\alpha_i}{\\partial u_k\\partial u_j},$ $1\\leq i,k,j\\leq 3$, from (\\ref{hol.flat.f.esp.}) we obtain \n$$0=\\frac{\\partial^2 \\alpha_1}{\\partial u_2\\partial u_1} - \\frac{\\partial^2 \\alpha_1}{\\partial u_1\\partial u_2} = \\Big(\\frac{30v_1}{v_2v_3^9} F - \\frac{4v_1^4v_2^2}{v_3^6}(v_1^2-v_3^2)c + \\frac{v_1^3v_2}{18v_3^7} m_2 H \\Big)\\alpha_2, \n$$\n$$\n0=\\frac{\\partial^2 \\alpha_2}{\\partial u_3\\partial u_2} - \\frac{\\partial^2 \\alpha_2}{\\partial u_2\\partial u_3} = \\Big( \\frac{30v_2}{v_1^9v_3}F + \\frac{4v_2^4v_3^2}{v_1^6}(v_1^2+v_2^2)c + \\frac{v_3v_2^3}{18v_1^7} m_3 H \\Big)\\alpha_3 \n$$\nand\n$$\n 0=\\frac{\\partial^2 \\alpha_3}{\\partial u_1\\partial u_3} - \\frac{\\partial^2 \\alpha_3}{\\partial u_3\\partial u_1} = \\Big( \\frac{30v_3}{v_1v_2^9} F - \\frac{4v_1^2v_3^4}{v_2^6}(v_2^2+v_3^2)c + \\frac{v_1v_3^3}{18v_2^7}m_1 H \\Big)\\alpha_1.\\qed\n $$\n\nIn the lemmata that follow we assume the hypotheses of Proposition \\ref{flat.med.reduz.result.} to be satisfied and use the notations therein.\n\n\\begin{lemma} \\label{le:v1equalv3} If $v_1=v_3$ everywhere then 
$H=0$.\n\\end{lemma}\n\\proof By the assumption and the first equation in (\\ref{holo.flat.med.2}) we have $v_2=\\sqrt{2}v_1$. We obtain from any two of the equations in (\\ref{eq:e0}) that $\\alpha_1=0$, whereas any two of the equations in (\\ref{eq:e0b}) imply that $\\alpha_3=0$.\nThen (\\ref{eq:e1}) and (\\ref{eq:e3}) give\n$$\n\\begin{array}{l}\n18 \\alpha_2^2 + 4 v_1^2 H^2 - 3 \\sqrt{2}v_1 H + 36 v_1^2 c -18=0, \\vspace{0.17cm} \\\\\n18 \\alpha_2^2 + 4 v_1^2 H^2 + 3 \\sqrt{2}v_1 H + 36 v_1^2 c -18=0,\n\\end{array}\n$$\nwhich imply that $H=0$. \\qed\n\n\n\\begin{lemma} \\label{le:alphaiszero} The functions $\\alpha_1, \\alpha_2, \\alpha_3$ cannot vanish simultaneously on any open subset of $M^3$. \n\\end{lemma}\n\\proof If $\\alpha_1, \\alpha_2, \\alpha_3$ all vanish on the open subset $U\\subset M^3$, then (\\ref{eq:e1}), (\\ref{eq:e2}) and (\\ref{eq:e3}) become \n\\begin{equation}\\label{cond.a1=a2=a3=0.1}\n\\begin{array}{l}\n2(5v_1^4+2v_2^2v_3^2) - 18v_1^2v_2^2v_3^2c + v_1v_2v_3(v_2^2+v_3^2-2v_1v_2v_3H)H=0,\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{cond.a1=a2=a3=0.2}\n\\begin{array}{l}\n2(5v_2^4-2v_1^2v_3^2) - 18v_1^2v_2^2v_3^2c + v_1v_2v_3(v_1^2-v_3^2-2v_1v_2v_3H)H=0,\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{cond.a1=a2=a3=0.3}\n\\begin{array}{l}\n2(5v_3^4+2v_1^2v_2^2) - 18v_1^2v_2^2v_3^2c - v_1v_2v_3(v_1^2+v_2^2+2v_1v_2v_3H)H=0. \n\\end{array}\n\\end{equation}\nComparing (\\ref{cond.a1=a2=a3=0.1}) with (\\ref{cond.a1=a2=a3=0.2}) and (\\ref{cond.a1=a2=a3=0.3}) yields, respectively, \n\\begin{equation}\\label{cond.a1=a2=a3=0.3.1}\n\\begin{array}{ll}\n\\displaystyle{H = \\frac{2(v_1^2+v_2^2)}{v_1v_2v_3} \\qquad {\\text{and}} \\qquad H=-\\frac{2(v_1^2-v_3^2)}{v_1v_2v_3}},\n\\end{array}\n\\end{equation}\nwhich is a contradiction. \\qed \n\n\\begin{lemma} \\label{le:alpha1alpha3} There does not exist any open subset of $M^3$ where $v_1-v_3$ is nowhere vanishing and $\\alpha_1=0=\\alpha_3$. 
\n\\end{lemma}\n\\proof Assume that $\\alpha_1 = 0=\\alpha_3$ and that $v_1-v_3$ does not vanish on the open subset $U \\subset M^3$.\nBy Lemma \\ref{le:alphaiszero}, $\\alpha_2$ must be nonvanishing on an open dense subset $V\\subset U$. Then equations (\\ref{eq:e0}), (\\ref{eq:e0a}), (\\ref{eq:e0b}), (\\ref{eq:e1a}), (\\ref{eq:e2a}) and (\\ref{eq:e2b}) reduce to the following on $V$:\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{\\frac{\\partial v_i}{\\partial u_1}=\\frac{\\partial v_i}{\\partial u_3}=\\frac{\\partial \\alpha_i}{\\partial u_j}=0}, \\ \\ i,j=1,2,3, \\ \\ \\ \\ i\\neq j, \\hspace{2.08cm}\n\\end{array}\n\\end{equation*}\n\\begin{equation}\\label{col.ex.f.e.1}\n\\begin{array}{l}\n\\displaystyle{\\frac{\\partial v_1}{\\partial u_2}=\\frac{v_1^5}{v_3^4}\\alpha_2, \\quad \n\\frac{\\partial v_2}{\\partial u_2}=\\frac{v_2}{v_3^4}(v_1^4-v_1^2v_3^2+v_3^4) \\alpha_2, \\quad\n\\frac{\\partial v_3}{\\partial u_2}=v_3\\alpha_2},\n\\end{array}\n\\end{equation}\nand, since $\\alpha_1=\\alpha_3=0,$ equations (\\ref{eq:e1}), (\\ref{eq:e2}) and (\\ref{eq:e3}) become, respectively, \n\\begin{equation}\\label{col.ex.f.e.2}\n\\begin{array}{l}\n\\displaystyle{\\frac{v_1^6}{v_3^8}(3v_1^2-2v_3^2)\\alpha_2^2 + \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2) - \\frac{v_1^2v_2^2}{v_3^2}c} \\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{+\\frac{v_1v_2}{18v_3^3}(v_2^2+v_3^2-2v_1v_2v_3H)H =0},\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{col.ex.f.e.3}\n\\begin{array}{l}\n\\displaystyle{\\frac{\\partial \\alpha_2}{\\partial u_2} = -\\frac{1}{v_3^4}(v_1^4 - 3v_3^4)\\alpha_2^2 -\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2) + \\frac{v_2^2v_3^2}{v_1^2}c} \\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{- \\frac{v_2v_3}{18v_1^3}(v_1^2-v_3^2-2v_1v_2v_3H)H},\n\\end{array}\n\\end{equation}\n\\begin{equation}\\label{col.ex.f.e.4}\n\\begin{array}{l}\n\\displaystyle{\\frac{v_1^4}{v_2^4v_3^2}(2v_1^2-3v_3^2)\\alpha_2^2 + \\frac{1}{9v_2^4}(5v_3^4+2v_1^2v_2^2) - \\frac{v_1^2v_3^2}{v_2^2}c} 
\\vspace{1ex}\\\\ \\hspace*{20ex} \\displaystyle{-\\frac{v_1v_3}{18v_2^3}(v_1^2+v_2^2+2v_1v_2v_3H)H=0}. \n\\end{array}\n\\end{equation}\nMultiplying (\\ref{col.ex.f.e.2}) and (\\ref{col.ex.f.e.4}) by $2v_3^8$ and $3v_1^2v_2^4v_3^2$, respectively, and subtracting one\nfrom the other, yield\n\\begin{equation}\n\\begin{array}{lcr}\\label{col.ex.f.e.5}\n\\displaystyle{\\alpha_2^2 = \\frac{1}{90 v_1^6}\\big[-2 v_1^2 v_2^2 v_3^2 (3 v_1^2 + 2 v_3^2)H^2 - v_1 v_2 v_3 (6 v_1^4 + v_1^2 v_3^2 - 4 v_3^4)H} \\vspace{0.15cm} \\\\ \\hspace*{5ex} \\displaystyle{- \\ 18 v_1^2 v_2^2 v_3^2 (3 v_1^2 + 2 v_3^2)c + 2 (6 v_1^6 + 16 v_1^4 v_3^2 + 19 v_1^2 v_3^4 + 4 v_3^6)\\big]}.\n \\end{array}\n\\end{equation} \nOn one hand, substituting (\\ref{col.ex.f.e.5}) in (\\ref{col.ex.f.e.3}) we obtain\n\\begin{equation}\\label{col.ex.f.e.6}\n\\begin{array}{l}\n\\displaystyle{ \\frac{\\partial \\alpha_2}{\\partial u_2} = \\frac{1}{90 v_1^6 v_3^4}\\big[2 v_1^2 v_2^2 v_3^2 (3 v_1^6 + 2 v_1^4 v_3^2 - 4 v_1^2 v_3^4 - 6 v_3^6)H^2} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{+ \\ v_1 v_2 v_3 (6 v_1^8 + v_1^6 v_3^2 - 27 v_1^4 v_3^4 + 2 v_1^2 v_3^6 + 12 v_3^8)H} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{+ \\ 18 v_1^2 v_2^2 v_3^2 (3 v_1^6 + 2 v_1^4 v_3^2 - 4 v_1^2 v_3^4 - 6 v_3^6)c} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{- \\ 4 (v_1^4 - v_3^4) (3 v_1^6 + 8 v_1^4 v_3^2 + 16 v_1^2 v_3^4 + 6 v_3^6) \\big]}.\n\\end{array}\n\\end{equation}\nOn the other hand, differentiating (\\ref{col.ex.f.e.5}) with respect to $u_2$ and using (\\ref{col.ex.f.e.1}) gives\n\\begin{equation}\\label{col.ex.f.e.7}\n\\begin{array}{l}\n\\displaystyle{\\alpha_2\\frac{\\partial \\alpha_2}{\\partial u_2} = -\\frac{\\alpha_2}{180 v_1^6 v_2 v_3^3} \\big[v_2^2(-4 v_1^2 v_2 v_3^3 (5 v_1^4 - 4 v_1^2 v_3^2 - 6 v_3^4)H^2} \\vspace{0.1cm} \\\\\\hspace*{9ex} \\displaystyle{- \\ v_1 v_3^2 (8 v_1^6 - 27 v_1^4 v_3^2 - 8 v_1^2 v_3^4 + 24 v_3^6)H} \\vspace{1ex}\\\\\\hspace*{9ex}\\displaystyle{- \\ 36 v_1^2 v_2 v_3^3 (5 
v_1^4 - 4 v_1^2 v_3^2 - 6 v_3^4)c} \\vspace{1ex}\\\\\\hspace*{9ex}\\ \\displaystyle{+8 v_2 v_3 (v_1^2 - v_3^2) (v_2^2 + v_3^2) (8 v_1^2 + 3 v_3^2)\\big]}.\n\\end{array}\n\\end{equation}\nUsing that $\\alpha_2\\neq 0$ and $v_1-v_3\\neq 0$ on $V$, we obtain from (\\ref{col.ex.f.e.6}) and (\\ref{col.ex.f.e.7}) that\n\\begin{equation}\\label{col.ex.f.e.8}\n\\begin{array}{l}\n\\displaystyle{ H^2 + \\frac{4 v_1^6 - 2 v_1^4 v_3^2 - 9 v_1^2 v_3^4 + 4 v_3^6}{4 v_1^3 v_2 v_3 (v_1^2 - v_3^2)}H - \\frac{2 v_2^2 (v_1^2 - v_3^2)}{v_1^4 v_3^2} + 9 c=0.}\n\\end{array}\n\\end{equation}\nDifferentiating (\\ref{col.ex.f.e.8}) with respect to $u_2,$ and using (\\ref{col.ex.f.e.1}) we obtain\n\\begin{equation}\\label{col.ex.f.e.9}\n\\begin{array}{l}\nv_1 v_3 (22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)H = -16 v_2^3 (v_1^2 - v_3^2)^2.\n\\end{array}\n\\end{equation}\nSince $v_1-v_3\\neq 0,$ we must have $(22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)\\neq 0$ on $V.$ Therefore, (\\ref{col.ex.f.e.9}) implies that\n\\begin{equation}\\label{col.ex.f.e.10}\n\\begin{array}{l}\n\\displaystyle{H = -\\frac{16 v_2^3 (v_1^2 - v_3^2)^2}{v_1 v_3 (22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)}}.\n\\end{array}\n\\end{equation}\nFinally, differentiating (\\ref{col.ex.f.e.10}) with respect to $u_2$ and using (\\ref{col.ex.f.e.1}) we obtain\n\\begin{equation}\\label{col.ex.f.e.11}\n\\begin{array}{l}\n\\displaystyle{0 = -\\frac{1680 v_1^5v_2^3 (v_1^2 - v_3^2)^2}{v_3 (22 v_1^6 - 11 v_1^4 v_3^2 - 4 v_1^2 v_3^4 + 8 v_3^6)^2}\\alpha_2,}\n\\end{array}\n\\end{equation}\nwhich is a contradiction, for the right-hand side of (\\ref{col.ex.f.e.11}) is nonzero.\\qed \n\n\\begin{lemma} \\label{le:alpha2alphaj} There does not exist any open subset of $M^3$ where $\\alpha_2=0=\\alpha_j$ for some $j\\in \\{1, 3\\}$. \n\\end{lemma}\n\\proof We argue for the case in which $j=1$, the other case being similar. So, assume that $\\alpha_1$ and $\\alpha_2$ vanish on an open subset $U \\subset M^3$. 
By Lemma \\ref{le:alphaiszero}, $\\alpha_3$ is nonzero on an open dense subset $V\\subset U$. Equations (\\ref{eq:e1}) and (\\ref{eq:e2}) can be rewritten as follows on $V$: \n\n\\begin{equation}\\label{con.ad.f.e.5}\n\\begin{array}{l}\n\\displaystyle{\\frac{v_2^4}{v_1^2v_3^4}(3v_1^2+2v_2^2)\\alpha_3^2 + \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2) - \\frac{v_1^2v_2^2}{v_3^2}c }\\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{+ \\frac{v_1v_2}{18v_3^3}(v_2^2+v_3^2-2v_1v_2v_3H)H =0}, \\vspace{0.17cm} \\\\\n\n\\displaystyle{- \\frac{v_2^6}{v_1^8}(2v_1^2+3v_2^2)\\alpha_3^2 -\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2) + \\frac{v_2^2v_3^2}{v_1^2}c} \\vspace{1ex}\\\\ \\hspace*{20ex}\\displaystyle{- \\frac{v_2v_3}{18v_1^3}(v_1^2-v_3^2-2v_1v_2v_3H)H =0}.\n\\end{array}\n\\end{equation}\nEliminating $\\alpha_3^2$ from the equations in (\\ref{con.ad.f.e.5}) yields\n\\begin{equation}\\label{con.ad.f.e.6}\n\\begin{array}{ll}\n\\displaystyle{H^2- \\frac{7 v_1^4 + 7 v_1^2 v_3^2 + 2 v_3^4}{2 v_1 v_2 v_3 (v_1^2 + v_2^2)} H - \\frac{ 2 v_3^2}{v_1^2 v_2^2} + 9 c =0}.\n\\end{array}\n\\end{equation}\nDifferentiating (\\ref{con.ad.f.e.6}) with respect to $u_3$ and using that $\\alpha_3\\neq 0$ we obtain\n\\begin{equation}\\label{con.ad.f.e.7}\n\\displaystyle{H=\\frac{8 v_3^3 (v_1^2 + v_2^2)}{v_1 v_2 (21 v_1^4 + 21 v_1^2 v_3^2 + 4 v_3^4)}}.\n\\end{equation}\nFinally, differentiating (\\ref{con.ad.f.e.7}) with respect to $u_3$ and using the fact that $H$ is constant we obtain \n\\begin{equation*}\n\\begin{array}{ll}\n\\displaystyle{0=\\frac{120 v_3^3 v_2 (v_1^2 + v_2^2) (7 v_1^4 + 7 v_1^2 v_3^2 + 2 v_3^4)}{v_1^3 (21 v_1^4 + 21 v_1^2 v_3^2 + 4 v_3^4)^2},}\n\\end{array}\n\\end{equation*}\nwhich is a contradiction. \\qed\n\n\\begin{lemma} \\label{le:alphaialphajneq0} If there exist $p\\in M^3$ and $1\\leq i\\neq j\\leq 3$ such that $\\alpha_i(p)\\neq 0\\neq \\alpha_j(p)$ then $H=0=c$. \n\\end{lemma}\n\\proof We give the proof for the case in which $i=1$ and $j=2$, the remaining ones being similar. 
Let $U\\subset M^3$ be an open neighborhood of $p$ where $\\alpha_1$ and $\\alpha_2$ are nowhere vanishing. Then (\\ref{equ.alg.f.e.}) gives\n$$\\frac{30v_3}{v_1v_2^9} F - \\frac{4v_1^2v_3^4}{v_2^6}(v_2^2+v_3^2)c + \\frac{v_1v_3^3}{18v_2^7}m_1 H=0 $$\nand \n$$ \\frac{30v_1}{v_2v_3^9} F - \\frac{4v_1^4v_2^2}{v_3^6}(v_1^2-v_3^2)c + \\frac{v_1^3v_2}{18v_3^7} m_2 H =0,\n$$\nor equivalently,\n\\begin{equation}\\label{con.ad.f.e.2}\n\\left\\{\\begin{array}{l}\nF=-\\frac{1}{540}v_1^2v_2^2v_3^2[Hm_1-72cv_1v_2v_3(v_2^2+v_3^2)], \\vspace{0.18cm} \\\\\nF=-\\frac{1}{540}v_1^2v_2^2v_3^2[Hm_2-72cv_1v_2v_3(v_1^2-v_3^2)].\n\\end{array}\\right.\n\\end{equation}\nSubtracting one of the equations in (\\ref{con.ad.f.e.2}) from the other we obtain \n\\begin{equation}\\label{con.ad.f.e.3}\n\\begin{array}{lll}\n\\displaystyle{H^2-\\frac{7(v_1^2 \\ + \\ v_2^2)}{8v_1v_2v_3}H+9c=0}.\n\\end{array}\n\\end{equation}\nDifferentiating (\\ref{con.ad.f.e.3}) with respect to $u_1$ we obtain \n\\begin{equation}\\label{con.ad.f.e.4}\n\\begin{array}{lll}\n\\displaystyle{\\frac{21v_3^3}{8v_1v_2^3}H\\alpha_1=0}.\n\\end{array}\n\\end{equation}\nSince $\\alpha_1\\neq 0$, equation (\\ref{con.ad.f.e.4}) implies that $H=0$, and hence $c=0$ by (\\ref{con.ad.f.e.3}). \\vspace{1ex}\\qed\n\n\n\n\\begin{lemma} \\label{le:v1neqv3} If $v_1\\neq v_3$ at some point of $M^3$ then $H=0=c$. \n\\end{lemma}\n\\proof Assume that $v_1(p_0)\\neq v_3(p_0)$ for some $p_0\\in M^3$, and hence that $v_1\\neq v_3$ on some open neighborhood $U\\subset M^3$\nof $p_0$. By Lemma \\ref{le:alphaiszero}, there exist an open subset $U'\\subset U$ and $i\\in\\{1,2,3\\}$ such that $\\alpha_i(p)\\neq 0$ for all $p\\in U'$. It follows from Lemma \\ref{le:alpha1alpha3} and Lemma \\ref{le:alpha2alphaj} that there exist $q\\in U'$ and $j\\in\\{1,2,3\\}, \\ j\\neq i,$ such that $\\alpha_j(q)\\neq 0$. 
Thus there exists $q\\in M^3$ such that $\\alpha_i(q)\\neq 0$ and $\\alpha_j(q)\\neq 0, \\ i\\neq j,$ and the conclusion follows from Lemma \\ref{le:alphaialphajneq0}. \\vspace{2ex}\\qed\n\n\\noindent \\emph{Proof of Theorem $\\ref{thm:cmc}$:} Follows immediately from Lemma \\ref{le:v1equalv3} and Lemma \\ref{le:v1neqv3}.\\vspace{2ex}\\qed\n\n\n\n\n\n\n\n\n\\noindent \\emph{Proof of Theorem $\\ref{thm:minimalcneq0}$:} \nGiven $p\\in M^3$, let $u_1, u_2, u_3$ be local principal coordinates on an open neighborhood $U$ of $p$ as in \nCorollary \\ref{le:asspair}. It follows from Lemma \\ref{le:v1neqv3} that the associated pair $(v,V)$ satisfies $v_1=v_3$ on $U$. Thus $\\lambda_2$ vanishes on $U$, and hence everywhere on $M^3$ by analyticity. The statement is now a consequence of the next proposition. \\qed\n\n\\begin{proposition}\\label{thm:minimalpczero} Let $f\\colon M^{3} \\to \\mathbb{Q}^{4}(c)$ be a conformally flat hypersurface with three distinct principal curvatures. If one of the principal curvatures is everywhere zero, then either $c=0$ and $f$ is locally a cylinder over a surface $g\\colon M^2(\\bar c)\\to {\\mathbb R}^3$ with constant Gauss curvature $\\bar c\\neq 0$ or $f$ is locally a generalized cone over a surface $g\\colon M^2(\\bar c)\\to \\mathbb{Q}^{3}(\\tilde c)$ with constant Gauss curvature $\\bar c\\neq \\tilde c$ in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c\\geq c$, with $\\tilde c>0$ if $c=0$. If, in addition, $f$ is minimal, then $f(M^3)$ is an open subset of a generalized cone over a Clifford torus in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c>0$, with $\\tilde c\\geq c$ if $c>0$.\n\\end{proposition}\n\\proof Let $e_1, e_2, e_3$\ndenote local unit vector fields which are principal directions corresponding to the distinct\nprincipal curvatures $\\lambda_1, \\lambda_2, \\lambda_3$, respectively. 
Then conformal flatness of $M^3$ is\nequivalent to the relations\n\\begin{equation} \\label{eq:uno}\n\\<\\nabla_{e_i}e_j,e_k\\>=0\n\\end{equation} \nand\n\\begin{equation} \\label{eq:dos}\n(\\lambda_j-\\lambda_k)e_i(\\lambda_i)+(\\lambda_i-\\lambda_k)e_i(\\lambda_j)+ (\\lambda_j-\\lambda_i)e_i(\\lambda_k)=0,\n\\end{equation} \nfor all distinct indices $i, j, k$ (see \\cite{la}, p. 84). It follows from Codazzi's equation and (\\ref{eq:uno}) that\n\\begin{equation} \\label{eq:tres}\n\\nabla_{e_i}e_i=\\sum_{j\\neq i}(\\lambda_i-\\lambda_j)^{-1}e_j(\\lambda_i)e_j.\n\\end{equation} \nIf, say, $\\lambda_2=0$, then equation\n(\\ref{eq:dos}) yields\n$$\n\\lambda_3^{-1}e_2(\\lambda_3)=\\lambda_1^{-1}e_2(\\lambda_1):=\\varphi,\n$$\nhence the distribution $\\{e_2\\}^\\perp$ spanned by $e_1$ and $e_3$ is umbilical in $M^3$ by\n(\\ref{eq:tres}).\n\nIf $\\varphi$ is identically zero on $M^3$, then $\\{e_2\\}^\\perp$ is a totally geodesic distribution, and hence \n$M^3$ is locally isometric to a Riemannian product $I\\times M^2$ by the local de Rham theorem. Since $M^3$ is conformally flat, it follows that $M^2$ must have constant Gauss curvature. Moreover, \nby Molzan's theorem (see Corollary $17$ in \\cite{nol}), $f$ is locally an extrinsic product of isometric immersions of the factors, \nwhich is not possible if $c\\neq 0$ because $f$ has three distinct principal curvatures. Therefore $c=0$ and $f$ is locally a cylinder over a surface with constant Gauss curvature in ${\\mathbb R}^3$. \n\nIf $\\varphi$ is not identically zero on $M^3$, given $x\\in M^3$ let $\\sigma$ be the leaf of $\\{e_2\\}^\\perp$ containing $x$ and let $j\\colon\\sigma\\to M^3$ be the inclusion of $\\sigma$ into $M^3$. Denote $\\tilde g=f\\circ j$. 
\nThen the normal bundle $N_{\\tilde g}\\sigma$ of $\\tilde g$ splits as \n\\begin{equation*}\nN_{\\tilde g}\\sigma=f_*N_j\\sigma\\oplus N_fM=\\mbox{span}\\{f_*e_2\\}\\oplus N_fM\n\\end{equation*}\nand\n\\bea\n\\tilde \\nabla_X f_*e_2\\!\\!\\!&=&\\!\\!\\!f_*\\nabla_Xe_2+\\alpha^f(j_*X,e_2)\\\\\n\\!\\!\\!&=&\\!\\!\\!-\\varphi \\tilde{g}_*X\n\\eea\nfor all $X\\in \\mathfrak{X}(\\sigma)$, where $\\tilde \\nabla$ is the induced \nconnection on ${\\tilde g}^*T\\mathbb{Q}^4(c)$. It follows that the normal vector field $\\eta=f_*e_2$ of $\\tilde g$ \nis parallel with respect \nto the normal connection of $\\tilde g$, and that the shape operator of $\\tilde g$ with \nrespect to $\\eta$ is \ngiven by \n$A^{\\tilde g}_\\eta=\\varphi I$. \nIt is a standard fact that this implies $\\tilde g(\\sigma)$ to be contained in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c\\geq c$, that is, there exist an umbilical hypersurface $i\\colon \\mathbb{Q}^{3}(\\tilde c)\\to \\mathbb{Q}^4(c)$ and an isometric immersion \n$g\\colon M^2=\\sigma\\to\\mathbb{Q}^3({\\tilde c})$ such that $\\tilde g=i\\circ g$. \nMoreover, since at any $y\\in\\sigma$ the fiber $L(y)\\!=\\!\\mbox{span}\\{\\eta(y)\\}$ \ncoincides with the normal space of $i$ at $g(y)$, it follows that $f$ coincides with \nthe generalized cone over $g$ in a neighborhood of $x$. \n\nIn particular, $M^3$ is a warped product $I\\times_{\\rho}M^2$, and since $M^3$ is conformally flat, $M^2$ must have constant Gauss curvature. 
If, in addition, $f$ is minimal, then $g$ must be a Clifford torus in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset \\mathbb{Q}^4(c)$, $\\tilde c>0$, with $\\tilde c\\geq c$ if $c>0$, and the preceding argument shows that $f(M^3)$ is an open subset of a generalized cone over~$g$.\\qed\n\n\\section{Proof of Theorem \\ref{thm:minimalceq0}}\nFirst we rewrite Proposition \\ref{flat.med.reduz.result.} when $H=0=c$ and state a converse to it.\n\n\\begin{proposition}\\label{flat.min.reduz.result.}\nLet $f\\colon M^3\\to {\\mathbb R}^4$ be a holonomic hypersurface whose associated pair $(v,V)$ satisfies \n\\begin{equation} \\label{eq:vis}v_2^2=v_1^2+v_3^2\\end{equation} and \n\\begin{equation}\\label{holo.flat.min.2}\n\\begin{array}{l}\n\\displaystyle{V_1=-\\frac{1}{3}\\Big(\\frac{v_2}{v_3}+\\frac{v_3}{v_2}\\Big)},\\quad\n\\displaystyle{V_2=-\\frac{1}{3}\\Big(\\frac{v_1}{v_3}-\\frac{v_3}{v_1}\\Big)}, \\quad\n\\displaystyle{V_3=\\frac{1}{3}\\Big(\\frac{v_1}{v_2}+\\frac{v_2}{v_1}\\Big)}.\n\\end{array}\n\\end{equation}\n Set $$\\alpha=(\\alpha_1,\\alpha_2,\\alpha_3)=\\Big(\\frac{1}{v_2}\\frac{\\partial v_2}{\\partial u_1},\\frac{1}{v_3}\\frac{\\partial v_3}{\\partial u_2},\\frac{1}{v_1}\\frac{\\partial v_1}{\\partial u_3}\\Big).$$\n Then $\\phi=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$ satisfies the system of PDE's \n\\begin{equation}\\label{flat.min.reduz.}\n\\left\\{\\begin{array}{l}\n\\displaystyle{\\frac{\\partial \\phi}{\\partial u_1}=\\Big(\\frac{\\partial v_1}{\\partial u_1},v_2\\alpha_1, \\frac{v_3^5}{v_2^4}\\alpha_1,\\frac{\\partial\\alpha_1}{\\partial u_1},2 \\frac{v_1^2}{v_2^2}\\alpha_1\\alpha_2,2 \\frac{v_3^2}{v_2^4}(4v_3^2+v_1^2)\\alpha_1\\alpha_3 \\Big)}, \\vspace{.25cm}\\\\\n\\displaystyle{\\frac{\\partial \\phi}{\\partial u_2}=\\Big( \\frac{v_1^5}{v_3^4}\\alpha_2, \\frac{\\partial v_2}{\\partial u_2}, v_3\\alpha_2, 2 \\frac{v_1^2}{v_3^4}(4v_1^2- v_2^2)\\alpha_1\\alpha_2,\\frac{\\partial\\alpha_2}{\\partial u_2},2 
\\frac{v_2^2}{v_3^2}\\alpha_2\\alpha_3 \\Big)},\\vspace{.25cm}\\\\\n\\displaystyle{\\frac{\\partial \\phi}{\\partial u_3}=\\Big( v_1\\alpha_3,\\frac{v_2^5}{v_1^4}\\alpha_3, \\frac{\\partial v_3}{\\partial u_3}, -2 \\frac{v_3^2}{v_1^2}\\alpha_1\\alpha_3, 2 \\frac{v_2^2}{v_1^4}(4v_2^2- v_3^2)\\alpha_2\\alpha_3,\\frac{\\partial\\alpha_3}{\\partial u_3} \\Big),}\n\\end{array}\\right.\n\\end{equation}\nwhere\n$$\n\\begin{array}{l}\n\\displaystyle \\frac{\\partial v_1}{\\partial u_1}=\\frac{v_1}{v_2^4}(v_2^4+v_2^2v_3^2+v_3^4)\\alpha_1, \\;\\;\\;\n\\displaystyle \\frac{\\partial v_2}{\\partial u_2}=\\frac{v_2}{v_3^4}(v_1^4-v_1^2v_3^2+v_3^4)\\alpha_2, \\vspace{.2cm}\\\\\n\\displaystyle \\frac{\\partial v_3}{\\partial u_3}=\\frac{v_3}{v_1^4}(v_1^4+v_1^2v_2^2+v_2^4)\\alpha_3, \\vspace{.2cm}\\\\\n\\displaystyle{\\frac{\\partial \\alpha_1}{\\partial u_1} = \\frac{1}{v_2^4}(3v_2^4-v_3^4)\\alpha_1^2 - \\frac{v_1^6}{v_3^8}(3v_1^2-2v_3^2)\\alpha_2^2 + \\frac{v_2^4}{v_1^2v_3^4}(3v_1^2+2v_2^2)\\alpha_3^2} \\vspace{1ex}\\\\\\hspace*{6ex} \\displaystyle{+ \\frac{1}{9v_3^4}(5v_1^4+2v_2^2v_3^2)},\\vspace{.2cm}\\\\\n\\displaystyle{\\frac{\\partial \\alpha_2}{\\partial u_2} =-\\frac{v_3^4}{v_1^4v_2^2}(3v_2^2 + 2v_3^2) \\alpha_1^2 -\\frac{1}{v_3^4}(v_1^4 - 3v_3^4)\\alpha_2^2 - \\frac{v_2^6}{v_1^8}(2v_1^2+3v_2^2)\\alpha_3^2} \\vspace{1ex}\\\\\\hspace*{6ex}\\displaystyle{-\\frac{1}{9v_1^4}(5v_2^4-2v_1^2v_3^2)},\\vspace{.2cm}\\\\\n\\displaystyle{\\frac{\\partial \\alpha_3}{\\partial u_3} = \\frac{v_3^6}{v_2^8}(2v_2^2+3v_3^2)\\alpha_1^2 + \\frac{v_1^4}{v_2^4v_3^2}(2v_1^2-3v_3^2)\\alpha_2^2 + \\frac{1}{v_1^4}(3v_1^4-v_2^4)\\alpha_3^2} \\vspace{1ex}\\\\\\hspace*{6ex}\\displaystyle{+ \\frac{1}{9v_2^4}(5v_3^4+2v_1^2v_2^2)},\n\\end{array}\n$$\nas well as the algebraic equation \n\\begin{equation}\\label{equ.chave}\n\\begin{array}{rcr}\n9v_1^2v_3^8(v_2^2+v_3^2)\\alpha_1^2 + 9v_1^8v_2^2(v_1^2-v_3^2)\\alpha_2^2 - 9v_2^8v_3^2(v_1^2+v_2^2)\\alpha_3^2 \\vspace{.2cm}\\\\\n - \\ 
v_1^2v_2^2v_3^2(2v_1^2v_2^4-2v_1^2v_3^4-2v_2^2v_3^4-v_1^2v_2^2v_3^2) = 0.\n\\end{array}\n\\end{equation}\nConversely, if $\\phi=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$ is a solution of $(\\ref{flat.min.reduz.})$ satisfying (\\ref{eq:vis}) on an open simply-connected subset $U\\subset {\\mathbb R}^3$, then $\\phi$ satisfies (\\ref{equ.chave}) and the triple $(v, h, V)$, where $v=(v_1,v_2, v_3)$, $V=(V_1, V_2, V_3)$ is given by $(\\ref{holo.flat.min.2})$ and $h=(h_{ij})$, with $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i}$, $1\\leq i\\neq j\\leq 3$, satisfies (\\ref{sistema-hol}), and hence gives rise to a holonomic hypersurface $f\\colon U\\to {\\mathbb R}^4$ whose associated pair $(v,V)$ satisfies (\\ref{eq:vis}) and (\\ref{holo.flat.min.2}). \n\\end{proposition}\n\nIn view of Corollary \\ref{le:asspair} and Proposition \\ref{flat.min.reduz.result.}, minimal conformally flat hypersurfaces of ${\\mathbb R}^4$ are in correspondence with solutions $\\phi=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$ of (\\ref{flat.min.reduz.}) satisfying (\\ref{eq:vis}) and (\\ref{equ.chave}). We shall prove that such solutions are, in turn, in correspondence with the leaves of a foliation of codimension one on the algebraic variety constructed in the next result. 
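\nExplicitly, under the identification $(x,y)=(v,\\alpha)$, condition (\\ref{eq:vis}) reads $G(v,\\alpha)=v_2^2-v_1^2-v_3^2=0$, while the algebraic equation (\\ref{equ.chave}) reads $F(v,\\alpha)=0$, where $G$ and $F$ are the functions defined in Proposition \\ref{prop.chave} below; hence every such solution $\\phi=(v,\\alpha)$ takes its values in $F^{-1}(0)\\cap G^{-1}(0)$. 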
\n\n\n\\begin{proposition}\\label{prop.chave} \nDefine $G,F:{\\mathbb R}^6={\\mathbb R}^3\\times {\\mathbb R}^3\\to{\\mathbb R}$ by \n$$G(x,y)= x_2^2-x_1^2-x_3^2$$\n and\n\\begin{equation*}\n\\begin{array}{r}\nF(x,y)= 9x_1^2x_3^8(x_2^2+x_3^2)y_1^2 + 9x_1^8x_2^2(x_1^2-x_3^2)y_2^2 - 9x_2^8x_3^2(x_1^2+x_2^2)y_3^2 \\vspace{.2cm}\\\\\n - x_1^2x_2^2x_3^2(2x_1^2x_2^4-2x_1^2x_3^4-2x_2^2x_3^4-x_1^2x_2^2x_3^2).\n\\end{array}\n\\end{equation*}\nLet\n$M^4:=F^{-1}(0)\\cap G^{-1}(0)\\cap\\{(x,y)\\in {\\mathbb R}^6;x_1>0,x_2>0,x_3>0\\,\\,\\mbox{and}\\,\\,y\\neq 0\\}$\nand let $\\ell_{\\pm}$ be the half lines in $M^4$ given by $$\\ell_{\\pm}=\\{(x,y)\\in M^4\\;:\\; x=s(1, \\sqrt{2},1)\\,\\,\\mbox{for some $s>0$} \\,\\,\\mbox{and}\\,\\, y=(0,\\pm 1,0)\\}.$$ Then $\\tilde{M}^4=M^4\\setminus (\\ell_-\\cup \\ell_+)$ is a regular submanifold of ${\\mathbb R}^6$ and $\\ell_-\\cup \\ell_+$ is the singular set of $M^4$.\n\\end{proposition}\n\\proof If $p\\in M^4$ we have $\\nabla G(p)=\\big(-2x_1,2x_2,-2x_3,0,0,0\\big)$, while the components of $\\nabla F(p)$ are given by \n$$x_1\\frac{\\partial F}{\\partial x_1}(p)= 18x_1^8x_2^2(4x_1^2-3x_3^2)y_2^2 + 18x_2^{10}x_3^2y_3^2-2x_1^4x_2^2x_3^2(2x_2^4-x_2^2x_3^2-2x_3^4),$$\n$$x_2\\frac{\\partial F}{\\partial x_2}(p)=-18x_1^2x_3^{10}y_1^2-18x_2^8x_3^2(3x_1^2+4x_2^2)y_3^2 -2x_1^2x_2^4x_3^2(4x_1^2x_2^2-x_1^2x_3^2-2x_3^4),$$\n$$x_3\\frac{\\partial F}{\\partial x_3}(p)= 18x_1^2x_3^8 (3x_2^2+4x_3^2)y_1^2 - 18x_1^{10}x_2^2y_2^2 +2x_1^2x_2^2x_3^4(x_1^2x_2^2+4x_1^2x_3^2+4x_2^2x_3^2),$$\n$$\\frac{\\partial F}{\\partial y_1}(p)= 18x_1^2x_3^8(x_2^2+x_3^2)y_1,\\,\\,\\,\\,\\,\n\\frac{\\partial F}{\\partial y_2}(p)=18x_1^8 x_2^2(x_1^2-x_3^2)y_2$$\nand\n$$\\frac{\\partial F}{\\partial y_3}(p)=-18x_2^8(x_1^2+x_2^2)x_3^2y_3.$$\nThat $M^4\\setminus (\\ell_+\\cup \\ell_-)$ is a smooth submanifold of ${\\mathbb R}^6$ and $\\ell_-\\cup \\ell_+$ is the singular set of $M^4$ is a consequence of the next two facts.\\vspace{1ex}\n\n\\noindent {\\bf Fact 1:}\n$\\nabla 
F(p)\\neq 0$ for all $p\\in M^4.$ \n\\proof If $\\nabla F(p)=0$ at $p=(x_1,x_2, x_3, y_1, y_2, y_3)$ then from $\\frac{\\partial F}{\\partial y_1}(p)=0$ it follows that $y_1=0$, whereas \n$\\frac{\\partial F}{\\partial y_3}(p)=0$ implies that $y_3=0$. Thus $y_2\\neq 0$, and hence $x_3=x_1$ from $\\frac{\\partial F}{\\partial y_2}(p)=0$. Therefore $x_2=\\sqrt{2}x_1,$ and then $0=\\frac{\\partial F}{\\partial x_2}(p)=-20\\sqrt{2}x_1^{11},$ which contradicts the fact that $x_1>0$. \\vspace{1ex} \\\\\n\\noindent {\\bf Fact 2:} The subset $\\{p\\in M^4\\,:\\,\\nabla F(p)=a\\nabla G(p)\\,\\,\\,\\mbox{for some}\\,\\,\\,a\\in {\\mathbb R}-\\{0\\}\\}$ coincides with $\\ell_{-}\\cup \\ell_{+}.$ \n\\proof Assume that \n\\begin{equation} \\label{eq:nFnG}\\nabla F(p)=a\\nabla G(p)\\end{equation} for some $a\\in {\\mathbb R}-\\{0\\}$. Equation (\\ref{eq:nFnG}) gives us six equations, the last three of which yield $y_1=y_3=0$ and $x_3=x_1$. Since $x_2^2=x_1^2+x_3^2,$ we obtain that $x_2=\\sqrt{2}x_1.$ Using this and the second of such equations we obtain that $a=-10x_1^{10}.$ Finally, the first one implies that $y_2^2=1$. 
\\vspace{1ex} \\qed\n\n\\begin{proposition}\\label{prop.chaveb} \nLet $X_1,X_2,X_3\\colon M^4\\to{\\mathbb R}^6$ be defined by \n\\begin{equation*}\n\\begin{array}{lcl}\n\\displaystyle{X_1(p)=\\frac{1}{x_2^4}\\Big((x_2^4+x_2^2x_3^2+x_3^4)x_1y_1,x_2^5y_1, x_3^5y_1,x_2^4 A_1(p)},\\vspace{1ex}\\\\\\hspace*{13ex} 2 x_1^2x_2^2y_1y_2,(8x_3^2+2x_1^2)x_3^2y_1y_3\\Big), \\vspace{.2cm} \\\\\n\\displaystyle{X_2(p)=\\frac{1}{x_3^4}\\Big( x_1^5y_2,(x_1^4-x_1^2x_3^2+x_3^4)x_2y_2,x_3^5y_2} ,\\vspace{1ex}\\\\\\hspace*{13ex}(8x_1^2- 2x_2^2)x_1^2y_1y_2, x_3^4A_2(p),2 x_2^2x_3^2y_2y_3 \\Big), \\vspace{.2cm} \\\\\n\\displaystyle{X_3(p)=\\frac{1}{x_1^4}\\Big( x_1^5y_3,x_2^5y_3, (x_1^4+x_1^2x_2^2+x_2^4)x_3y_3,-2x_1^2x_3^2y_1y_3},\\vspace{1ex}\\\\\\hspace*{13ex}(8x_2^2-2 x_3^2)x_2^2y_2y_3,x_1^4A_3(p) \\Big),\n\\end{array}\n\\end{equation*}\n where \n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{A_1(p) = \\frac{1}{x_2^4}(3x_2^4-x_3^4)y_1^2 - \\frac{x_1^6}{x_3^8}(3x_1^2-2x_3^2)y_2^2 + \\frac{x_2^4}{x_1^2x_3^4}(3x_1^2+2x_2^2)y_3^2} \\vspace{1ex}\\\\\\hspace*{10ex} \\displaystyle{+ \\frac{1}{9x_3^4}(5x_1^4+2x_2^2x_3^2)}, \n\\end{array}\n\\end{equation*}\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{A_2(p) =-\\frac{x_3^4}{x_1^4x_2^2}(3x_2^2 + 2x_3^2) y_1^2 -\\frac{1}{x_3^4}(x_1^4 - 3x_3^4)y_2^2 - \\frac{x_2^6}{x_1^8}(2x_1^2+3x_2^2)y_3^2} \\vspace{1ex}\\\\\\hspace*{10ex}\\displaystyle{-\\frac{1}{9x_1^4}(5x_2^4-2x_1^2x_3^2)},\n\\end{array}\n\\end{equation*}\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{A_3(p) = \\frac{x_3^6}{x_2^8}(2x_2^2+3x_3^2)y_1^2 + \\frac{x_1^4}{x_2^4x_3^2}(2x_1^2-3x_3^2)y_2^2 + \\frac{1}{x_1^4}(3x_1^4-x_2^4)y_3^2} \\vspace{1ex}\\\\\\hspace*{10ex}\\displaystyle{+ \\frac{1}{9x_2^4}(5x_3^4+2x_1^2x_2^2)}.\n\\end{array}\n\\end{equation*}\nThen the following assertions hold:\n\\begin{itemize}\n\\item[(i)] $X_1(p),X_2(p),X_3(p)$ are linearly independent for all $p\\in~\\tilde M^4$.\n\\item[(ii)] $\\{p\\in M^4;X_1(p)=0\\}=\\ell_-\\cup \\ell_+=\\{p\\in 
M^4;X_3(p)=0\\}$.\n\\item[(iii)] The vector fields $X_1,X_2,X_3$ are everywhere tangent to $\\tilde{M}^4$ and the curves \n $\\gamma_\\pm\\colon {\\mathbb R}\\to {\\mathbb R}^6$ given by $\\gamma_\\pm(t)=(e^t,\\sqrt{2}e^t,e^t,0,\\pm 1,0)$,\n are integral curves of $X_2$ with $\\gamma_\\pm({\\mathbb R})=\\ell_\\pm$.\n \\item[(iv)] $[X_i, X_j]=0$ on $\\tilde{M}^4$ for all $1\\leq i\\neq j\\leq 3$.\n \\end{itemize}\n \\end{proposition}\n \\proof First notice that $X_2(p)=0$ if and only if $y_2=0$ and $A_2(p)=0$. Since $A_2(p)<0$ whenever $y_2=0$, it follows that\n $X_2(p)\\neq 0$ for all $p~\\in~M^4$. \n\nNow observe that $X_1(p)=0$ if and only if $y_1=0$ and $A_1(p)=0$. Thus, if $p=(x_1, x_2, x_3, y_1, y_2, y_3)$ is such that $X_1(p)=0$ then \n\\begin{equation*}\n\\left\\{ \\begin{array}{l}\nay_2^2+by_3^2 =c,\\vspace{.2cm}\\\\\ndy_2^2+ey_3^2 =f,\\vspace{.2cm}\\\\\nx_2^2=x_1^2+x_3^2,\n\\end{array}\\right.\n\\end{equation*}\n where\n$$a=9x_1^8x_2^4(3x_1^2-2x_3^2),\\,\\,\\,\\,b=-9x_2^8x_3^4(3x_1^2+2x_2^2),\\,\\,\\,\\, c= x_1^2x_2^4x_3^4(5x_1^4+2x_2^2x_3^2),$$\n$$ d=9x_1^8x_2^2(x_1^2-x_3^2),\\,\\,\\,\\, e=-9x_2^8x_3^2(x_1^2+x_2^2)$$\nand \n$$ f= x_1^2x_2^2x_3^2(2x_1^2x_2^4-2x_1^2x_3^4-2x_2^2x_3^4-x_1^2x_2^2x_3^2).$$\nSuch system has a unique solution given by\n\\begin{equation*}\ny_2^2=\\frac{x_3^8(x_1^2+x_2^2)^2}{9x_1^{12}} \\quad \\quad \\mbox{and} \\quad \\quad y_3^2=-\\frac{(x_1^2-x_3^2)^2}{9x_1^4}.\n\\end{equation*}\nThus we must have $y_3=0$, and hence $x_1=x_3$ and $y_2=\\pm 1.$ It follows that the subset $\\{p\\in M^4;X_1(p)=0\\}$ coincides with $\\ell_-\\cup \\ell_+$.\n\nIn a similar way one shows that the subset $\\{p\\in M^4;X_3(p)=0\\}$ coincides with $\\ell_-\\cup \\ell_+$, and the proof of $(ii)$ is completed.\n\n\nTo prove $(i)$, first notice that $X_1(p),X_2(p),X_3(p)$ are pairwise linearly independent. 
This already implies that if $\\lambda_1,\\lambda_2,\\lambda_3\\in {\\mathbb R}$ are such that \n\\begin{equation}\\label{combina.linear}\n\\lambda_1X_1(p)+\\lambda_2X_2(p)+\\lambda_3X_3(p)=0\n\\end{equation}\nthen either $\\lambda_1=\\lambda_2=\\lambda_3=0$ or $\\lambda_1\\neq 0,\\lambda_2\\neq 0$ and $\\lambda_3\\neq 0.$ We will show that the last possibility cannot occur. \n\nEquation (\\ref{combina.linear}) gives the system of equations \n\\begin{displaymath}\n\\left\\{ \\begin{array}{ll}\nx_2^2x_3^4y_1\\lambda_1+x_1^6y_2\\lambda_2=0 \\vspace{.2cm}\\\\\nx_1^4x_3^2y_2\\lambda_2+x_2^6y_3\\lambda_3=0 \\vspace{.2cm}\\\\\n\\displaystyle{\\big[ A_1(p)-\\frac{2}{x_2^4}(3x_2^4+x_3^4+2x_2^2x_3^2)y_1^2\\big]\\lambda_1=0}\\vspace{.2cm} \\\\\n\\displaystyle{\\big[A_2(p)-\\frac{2}{x_3^4}(x_1^4+3x_3^4-2x_1^2x_3^2)y_2^2\\big]\\lambda_2=0} \\vspace{.2cm}\\\\\n\\displaystyle{\\big[ A_3(p)-\\frac{2}{x_1^4}(3x_1^4+x_2^4+2x_1^2x_2^2)y_3^2 \\big]\\lambda_3=0.}\n\\end{array}\\right.\n\\end{displaymath}\nThus, it suffices to prove that the system of equations\n\\begin{displaymath}\n\\left\\{ \\begin{array}{ll}\n\\displaystyle{ A_1(p)-\\frac{2}{x_2^4}(3x_2^4+x_3^4+2x_2^2x_3^2)y_1^2=0}\\vspace{.2cm} \\\\\n\\displaystyle{A_2(p)-\\frac{2}{x_3^4}(x_1^4+3x_3^4-2x_1^2x_3^2)y_2^2=0 }\\vspace{.2cm}\\\\\n \\displaystyle{A_3(p)-\\frac{2}{x_1^4}(3x_1^4+x_2^4+2x_1^2x_2^2)y_3^2 =0}\n\\end{array}\\right.\n\\end{displaymath}\nhas no solutions for $p=(x_1, x_2, x_3, y_1, y_2, y_3)\\in \\tilde{M}^4$. 
We write the preceding system as a linear system \n\\begin{displaymath}\n\\left\\{ \\begin{array}{rcl}\na_1y_1^2+a_2y_2^2+a_3y_3^2=a_4 \\vspace{.2cm}\\\\\nb_1y_1^2+b_2y_2^2+b_3y_3^2=b_4 \\vspace{.2cm}\\\\\nc_1y_1^2+c_2y_2^2+c_3y_3^2=c_4\n\\end{array}\\right.\n\\end{displaymath}\nin the variables $y_1^2,y_2^2$ and $y_3^2$, where\n\\begin{equation*}\n\\begin{array}{lll}\na_1= 9x_1^2x_3^8(3x_1^4+10x_2^2x_3^2), \\hspace{.6cm} & b_1= 9x_1^4x_3^8(3x_2^2+2x_3^2), \\vspace{.2cm}\\\\\n\na_2= 9x_1^8x_2^4(3x_1^2-2x_3^2), & b_2= 9x_1^8x_2^2(3x_2^4-10x_1^2x_3^2), \\vspace{.2cm}\\\\\n\na_3= -9x_2^8x_3^4(3x_1^2+2x_2^2), & b_3=9x_2^8x_3^4(2x_1^2+3x_2^2), \\vspace{.2cm}\\\\\n\na_4= x_1^2x_2^4x_3^4(5x_1^4+2x_2^2x_3^2), & b_4= -x_1^4x_2^2x_3^4(5x_2^4-2x_1^2x_3^2), \n\\end{array}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{array}{lll}\n c_1= 9x_1^4x_3^8(2x_2^2+3x_3^2),\\,\\,\\,\\,\\,\\,\\,\nc_2=9x_1^8x_2^4(2x_1^2-3x_3^2),\\vspace{.2cm}\\\\\n\n c_3=-9x_2^8x_3^2(3x_3^4+10x_1^2x_2^2),\\,\\,\\,\\,\\,\\,\\,\nc_4= -x_1^4x_2^4x_3^2(5x_3^4+2x_1^2x_2^2).\n\\end{array}\n\\end{equation*}\nSince\n\\begin{equation*}\n\\begin{array}{l}\n\\det\n\\left(\\!\\!\\begin{array}{ccc}\na_1 & a_2 & a_3 \\\\\nb_1 & b_2 & b_3 \\\\\nc_1 & c_2 & c_3\n\\end{array}\\!\\right)=656100x_1^{12}x_2^{12}x_3^{12}(x_3^4+x_1^2x_2^2)(x_1^2-x_2^2+x_3^2)=0\\vspace{.2cm}\\\\\n\n\\det\n\\left(\\!\\!\\begin{array}{ccc}\na_1 & a_2 & a_4 \\\\\nb_1 & b_2 & b_4 \\\\\nc_1 & c_2 & c_4\n\\end{array}\\!\\right)=-29160x_1^{14}x_3^{18}x_2^6(x_3^4+x_1^2x_2^2)\\neq 0,\n\\end{array}\n\\end{equation*}\nsuch system has no solutions. Thus $(i)$ is proved. 
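The key step above — the coefficient determinant vanishing identically once the relation $x_2^2=x_1^2+x_3^2$ (which holds on $M^4$) is imposed, while the augmented determinant stays nonzero — can also be checked symbolically. The sketch below (an editorial aside in SymPy, not part of the proof) substitutes $s=x_2^2=x_1^2+x_3^2$ directly into the entries $a_i, b_i, c_i$:

```python
import sympy as sp

x1, x3 = sp.symbols('x1 x3', positive=True)
s = x1**2 + x3**2  # plays the role of x_2^2, since x_2^2 = x_1^2 + x_3^2 on M^4

# entries of the linear system in (y_1^2, y_2^2, y_3^2), with x_2^(2k) -> s^k
a1 = 9*x1**2*x3**8*(3*x1**4 + 10*s*x3**2)
a2 = 9*x1**8*s**2*(3*x1**2 - 2*x3**2)
a3 = -9*s**4*x3**4*(3*x1**2 + 2*s)
a4 = x1**2*s**2*x3**4*(5*x1**4 + 2*s*x3**2)
b1 = 9*x1**4*x3**8*(3*s + 2*x3**2)
b2 = 9*x1**8*s*(3*s**2 - 10*x1**2*x3**2)
b3 = 9*s**4*x3**4*(2*x1**2 + 3*s)
b4 = -x1**4*s*x3**4*(5*s**2 - 2*x1**2*x3**2)
c1 = 9*x1**4*x3**8*(2*s + 3*x3**2)
c2 = 9*x1**8*s**2*(2*x1**2 - 3*x3**2)
c3 = -9*s**4*x3**2*(3*x3**4 + 10*x1**2*s)
c4 = -x1**4*s**2*x3**2*(5*x3**4 + 2*x1**2*s)

coeff = sp.Matrix([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]])
aug = sp.Matrix([[a1, a2, a4], [b1, b2, b4], [c1, c2, c4]])

det_coeff = sp.expand(coeff.det())   # vanishes identically on M^4
det_aug = sp.factor(aug.det())       # nonzero for x_1, x_3 > 0

assert det_coeff == 0
assert det_aug.subs({x1: 1, x3: 1}) != 0
```

At $x_1=x_3=1$ the augmented determinant evaluates to $-699840=-29160\cdot 8\cdot 3$, matching the claimed factorization $-29160x_1^{14}x_3^{18}x_2^6(x_3^4+x_1^2x_2^2)$.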
\n\n\n\nNow, for $p\\in M^4$ we have \n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{\\langle \\nabla F(p),X_1(p)\\rangle=\\frac{10y_1}{x_2^4}(x_2^4+x_3^4)F(p)=0=\\langle \\nabla G(p),X_1(p)\\rangle}\\vspace{.2cm} \\\\\n\\displaystyle{\\langle \\nabla F(p),X_2(p)\\rangle=\\frac{10y_2}{x_3^4}(x_1^4+x_3^4)F(p)=0=\\langle \\nabla G(p),X_2(p)\\rangle}\\vspace{.2cm} \\\\\n\\displaystyle{\\langle \\nabla F(p),X_3(p)\\rangle=\\frac{10y_3}{x_1^4}(x_1^4+x_2^4)F(p)=0= \\langle \\nabla G(p),X_3(p)\\rangle},\n\\end{array}\n\\end{equation*}\n hence $X_1,X_2,X_3$ are everywhere tangent to $\\tilde{M}^4$. That $\\gamma_\\pm$ is an integral curve of $X_2$ follows by \nchecking that $X_2(\\gamma_\\pm(t))=(e^t,\\sqrt{2}e^t,e^t,0,0,0)=\\gamma'_\\pm(t)$. Finally, a straightforward computation gives\n\\begin{equation*}\n\\begin{array}{l}\n\\displaystyle{[X_1,X_2](p)=\\Big(0,0,0,\\frac{10y_2}{9x_2^2x_3^{10}}F(p),\\frac{10y_1}{9x_1^6x_2^4x_3^2}F(p),0\\Big)=0} \\vspace{.2cm} \\\\\n\n\\displaystyle{[X_1,X_3](p)=\\Big(0,0,0,-\\frac{10y_3}{9x_1^4x_2^2x_3^6}F(p),0,-\\frac{10y_1}{9x_1^2x_2^{10}}F(p)\\Big)=0} \\vspace{.2cm} \\\\\n\n\\displaystyle{[X_2,X_3](p)=\\Big(0,0,0,0,\\frac{10y_3}{9x_1^{10}x_3^2}F(p),\\frac{10y_2}{9x_1^2x_2^6x_3^4}F(p)\\Big)=0}. 
\\qed\n\\end{array} \n\\end{equation*}\n\n\n\n\n\n\n \n\nThe proof of the next proposition is straightforward.\n\n\\begin{proposition}\\label{prop:involutions} $(i)$ For each $\\epsilon=(\\epsilon_1, \\epsilon_2, \\epsilon_3)$, $\\epsilon_j\\in \\{-1, 1\\}$ for $1\\leq j\\leq 3$, the map $\\Phi^{\\epsilon}\\colon \\tilde{M}^4\\to \\tilde{M}^4$ given by \n$$\\Phi^{\\epsilon}(x_1, x_2, x_3, y_1, y_2, y_3)=(x_1, x_2, x_3, \\epsilon_1y_1, \\epsilon_2y_2, \\epsilon_3y_3)$$\nsatisfies $\\Phi^{\\epsilon}_*X_j(p)=\\epsilon_jX_j(\\Phi^{\\epsilon}(p))$ for all $p\\in \\tilde{M}^4$.\\vspace{1ex}\\\\\n$(ii)$ The map $\\Psi\\colon \\tilde{M}^4\\to \\tilde{M}^4$ given by \n$$\\Psi(x_1, x_2, x_3, y_1, y_2, y_3)=\\bigg(x_3, x_2, x_1, \\frac{x_2^4}{x_1^4}y_3, \\frac{x_1^4}{x_3^4}y_2,\\frac{x_3^4}{x_2^4}y_1\\bigg)$$\nsatisfies \n$$\\Psi_*X_1(p)=X_3(\\Psi(p)),\\;\\;\\Psi_*X_2(p)=X_2(\\Psi(p))\\;\\;\\mbox{and}\\;\\;\\Psi_*X_3(p)=X_1(\\Psi(p))$$\n for all $p=(x_1, x_2, x_3, y_1, y_2, y_3)\\in \\tilde{M}^4$.\\vspace{1ex}\\\\\n$(iii)$ The maps $\\Psi$ and $\\Phi^{\\epsilon}$, $\\epsilon\\in \\{-1, 1\\}\\times \\{-1, 1\\}\\times\\{-1, 1\\}$, generate a group of involutions\nof $\\tilde{M}^4$ isomorphic to $\\mathbb{Z}_2\\times \\mathbb{Z}_2\\times \\mathbb{Z}_2\\times \\mathbb{Z}_2$ that preserves the distribution ${\\cal D}$ spanned by the vector fields $X_1, X_2$ and $X_3$.\n\\end{proposition}\n\n\n\n\n\n\n\\noindent \\emph{Proof of Theorem \\ref{thm:minimalceq0}:} \nFirst we associate to each leaf $\\sigma$ of ${\\cal D}$ a covering map $\\phi_\\sigma\\colon U_\\sigma\\to \\sigma$ from a simply-connected open subset $U_\\sigma\\subset {\\mathbb R}^3$ and a minimal immersion $f_\\sigma\\colon U_\\sigma\\to {\\mathbb R}^4$ with three distinct principal curvatures whose induced metric is conformally flat. 
\n\nFor any $q\\in \\tilde M^4$ and for $1\\leq i\\leq 3$ denote by $\\tau_q^i\\colon J_q^i\\to \\tilde{M}^4$ the maximal integral curve of $X_i$ through $q$, that is, $0\\in J_q^i$, $\\tau^i_q(0)=q$, $(\\tau^i_q)'(t)=X_i(\\tau^i_q(t))$ for all $t\\in J_q^i$, and $J_q^i$ is maximal with these properties. Let $${\\cal D}(X_i)=\\{(t, q)\\in {\\mathbb R}\\times \\tilde{M}^4\\,:\\, t\\in J^i_q\\}$$ and let $\\varphi^i\\colon {\\cal D}(X_i)\\to \\tilde{M}^4$ be the flow of $X_i$, given by $\\varphi^i(t, q)=\\tau^i_q(t)$. For a fixed $p\\in \\sigma$ define $U_\\sigma=U_\\sigma(p)$ by\n$$U_\\sigma=\\{(u_1, u_2, u_3)\\,:\\, u_1\\in J^1_p, \\,u_2\\in J^2_{\\varphi^1(u_1, p)}, \\, u_3\\in J^3_{\\varphi^2(u_2,\\varphi^1(u_1, p))}\\}$$\nand $\\phi_\\sigma=\\phi^p_\\sigma$ by\n$$\\phi_\\sigma(u_1, u_2, u_3)=\\varphi^3(u_3, \\varphi^2(u_2,\\varphi^1(u_1, p))).$$\nThen $0\\in U_\\sigma$, $\\phi_\\sigma(0)=p$, and for all $u\\in U_\\sigma$ we have\n\\begin{equation*\n\\frac{\\partial \\phi_\\sigma}{\\partial u_i}(u)=X_i(\\phi_\\sigma(u)), \\,\\, \\,\\,\\, 1\\leq i\\leq 3.\n\\end{equation*} \n \nWe claim that $\\phi_\\sigma$ is a covering map onto $\\sigma$. Given $x\\in\\sigma$, let $\\tilde B_{2\\epsilon}(0)$ be an open \nball of radius $2\\epsilon$ centered at the origin such that\n$\\phi_\\sigma^x|_{\\tilde B_{2\\epsilon}(0)}$ is a diffeomorphism onto \n$B_{2\\epsilon}(x)=\\phi_\\sigma^x(\\tilde B_{2\\epsilon}(0))$. 
Since\n\\begin{equation} \\label{eq:psix0}\n\\phi_\\sigma^p(t+s)=\\varphi^3(t_3,\\varphi^2(t_2,\\varphi^1(t_1, \\phi_\\sigma^p(s))))=\\phi_\\sigma^{\\phi_\\sigma^p(s)}(t)\n\\end{equation} \nwhenever both sides are defined, where $t=(t_1,t_2,t_3)$ \nand $s=(s_1,s_2,s_3)$, if $x=\\phi_\\sigma^p(s)$, $s=(s_1,s_2, s_3)\\in U_\\sigma$, then for any \n$y=\\phi_\\sigma^x(t)\\in B_{2\\epsilon}(x)$, $t=(t_1,t_2,t_3)\\in \\tilde B_{2\\epsilon}(0)$, we have\n$$ y=\\phi_\\sigma^x(t)=\\phi_\\sigma^{\\phi_\\sigma^p(s)}(t)=\\phi_\\sigma^p(s+t).$$\nThis shows that $B_{2\\epsilon}(x)\\subset \\phi_\\sigma^p(U_\\sigma)$ if $x\\in \\phi_\\sigma^p(U_\\sigma)$, hence $\\phi_\\sigma^p(U_\\sigma)$ is open in $\\sigma$. But since $y=\\phi_\\sigma^x(t)$ if and only if $x=\\phi_\\sigma^y(-t)$, as follows from (\\ref{eq:psix0}), the same argument shows that\n$x\\in \\phi_\\sigma^p(U_\\sigma)$ if $y\\in \\phi_\\sigma^p(U_\\sigma)$ for some $y\\in B_{2\\epsilon}(x)$, and hence $\\sigma\\setminus \\phi_\\sigma^p(U_\\sigma)$ is also open. It follows that $\\phi_\\sigma^p$ is onto $\\sigma$.\n\n\n\n\nNow, for any $x\\in \\sigma$ write \n$$\n(\\phi_\\sigma^p)^{-1}(x)=\\cup_{\\alpha\\in A}\\tilde x_\\alpha,\n$$\n and for each $\\alpha\\in A$ let $\\tilde B_{2\\epsilon}(\\tilde x_\\alpha)$ denote \nthe open ball of radius $2\\epsilon$ centered at $\\tilde x_\\alpha$. Define \na map $\\psi_\\alpha\\colon B_{2\\epsilon}(x)\\to\\tilde B_{2\\epsilon}(\\tilde x_\\alpha)$\nby \n$$\n\\psi_\\alpha(y)=\\tilde x_\\alpha+(\\phi_\\sigma^x)^{-1}(y).\n$$\nBy (\\ref{eq:psix0}) we have \n\\bea\n\\phi_\\sigma^p(\\psi_\\alpha(y))\\!\\!\\!&=&\\!\\!\\!\\phi_\\sigma^p(\\tilde x_\\alpha+(\\phi_\\sigma^x)^{-1}(y))\\\\\n\\!\\!\\!&=&\\!\\!\\!\\phi_\\sigma^{\\phi_\\sigma^p(\\tilde x_\\alpha)}((\\phi_\\sigma^x)^{-1}(y))\\\\\n\\!\\!\\!&=&\\!\\!\\!\\phi_\\sigma^x((\\phi_\\sigma^x)^{-1}(y))\\\\\n\\!\\!\\!&=&\\!\\!\\!y\n\\eea\nfor all $y\\in B_{2\\epsilon}(x)$. 
Thus $\\phi_\\sigma^p$ is a diffeomorphism \nfrom $\\tilde B_{2\\epsilon}(\\tilde x_\\alpha)$ onto \n$B_{2\\epsilon}(x)$ having $\\psi_\\alpha$ as its inverse. \nIn particular, this implies that $\\tilde B_\\epsilon(\\tilde x_\\alpha)$ and \n$\\tilde B_\\epsilon(\\tilde x_\\beta)$ are disjoint if $\\alpha$ and \n$\\beta$ are distinct indices in $A$. Finally, it remains to check that \nif $\\tilde y\\in (\\phi_\\sigma^p)^{-1}(B_\\epsilon(x))$ then \n$\\tilde y\\in \\tilde B_\\epsilon(\\tilde x_\\alpha)$ for some $\\alpha\\in A$. \nThis follows from the fact that \n$$\n\\phi_\\sigma^p(\\tilde y-(\\phi_\\sigma^x)^{-1}(\\phi_\\sigma^p(\\tilde y)))=\\phi_\\sigma^{\\phi_\\sigma^p(\\tilde y)}\n(-(\\phi_\\sigma^x)^{-1}(\\phi_\\sigma^p(\\tilde y)))=x.\n$$\nFor the last equality, observe from (\\ref{eq:psix0}) that for all \n$x, y\\in \\sigma$ we have that $\\phi_\\sigma^x(t)=y$ if and only if $\\phi_\\sigma^y(-t)=x$.\n\n\n \nWriting $\\phi_\\sigma=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$, it follows that $\\phi_\\sigma$ satisfies (\\ref{flat.min.reduz.}), as well as (\\ref{eq:vis}) and (\\ref{equ.chave}). Defining $h_{ij}=\\frac{1}{v_i}\\frac{\\partial v_j}{\\partial u_i}$, $1\\leq i\\neq j\\leq 3$, and $V=(V_1, V_2, V_3)$ by (\\ref{holo.flat.min.2}), it follows from Proposition~\\ref{flat.min.reduz.result.} that the triple $(v, h, V)$, where $v=(v_1,v_2, v_3)$ and $h=(h_{ij})$, satisfies (\\ref{sistema-hol}), and hence gives rise to a \n minimal conformally flat hypersurface $f_\\sigma\\colon U_\\sigma\\to {\\mathbb R}^4$ with three distinct principal curvatures by Corollary \\ref{le:asspair}. 
\n \n Given two distinct leaves $\\sigma$ and $\\tilde \\sigma$ of ${\\cal D}$, the corresponding immersions $f_\\sigma$ and $f_{\\tilde \\sigma}$ are congruent if and only if there exists a diffeomorphism $\\psi\\colon U_\\sigma\\to U_{\\tilde \\sigma}$ such that \n $$\\psi^*I_{\\tilde\\sigma}=I_\\sigma\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,\\psi^*I\\!I_{\\tilde\\sigma}=I\\!I_\\sigma$$\n where $I_\\sigma$ and $I\\!I_{\\sigma}$ are the first and second fundamental forms of $f_\\sigma$, respectively, and $I_{\\tilde\\sigma}$, $I\\!I_{\\tilde\\sigma}$ are those of $f_{\\tilde \\sigma}$. A long but straightforward computation shows that, up to a translation, either $\\psi$ coincides with the map given by\n $$\\psi_\\epsilon(u_1,u_2,u_3)=(\\epsilon_1u_1, \\epsilon_2u_2, \\epsilon_3u_3)$$ \n for some $\\epsilon=(\\epsilon_1, \\epsilon_2, \\epsilon_3)$ with $\\epsilon_i\\in \\{-1, 1\\}$ for $1\\leq i\\leq 3$, or it is the composition of such a map with the map given by\n $$\\theta(u_1, u_2, u_3)=(u_3, u_2, u_1).$$\n It is easy to check that this is the case if and only if there exists $\\Theta\\in G$ such that $\\phi_{\\tilde \\sigma}\\circ \\psi=\\Theta\\circ \\phi_{\\sigma}$. \n \n \n \n \n \n \n Now let $f\\colon M^3\\to {\\mathbb R}^4$ be a minimal isometric immersion with three distinct principal curvatures of a simply connected\n conformally flat Riemannian manifold. We shall prove that either $f(M^3)$ is an open subset of the cone over a Clifford torus in ${\\mathbb S}^3$ or there exist a leaf $\\sigma$ of ${\\cal D}$ and a local diffeomorphism $\\rho\\colon M^3\\to V$ onto an open subset $V\\subset U_\\sigma$ \n such that $f$ is congruent to $f_\\sigma\\circ \\rho$.\n \n First we associate to $f$ a map $\\phi_f\\colon M^3\\to M^4\\subset {\\mathbb R}^6$ as follows.\n Fix a unit normal vector field $N$ along $f$ and denote by $\\lambda_1<\\lambda_2<\\lambda_3$ the distinct principal curvatures of $f$ with respect to $N$. 
For each $1\\leq j\\leq 3$, the eigenspaces $E_{\\lambda_j}=\\ker (A-\\lambda_j I)$ associated to $\\lambda_j$, where $A$ is the shape operator with respect to $N$ and $I$ is the identity endomorphism, form a field of directions along $M^3$, and since $M^3$ is simply connected we can find a smooth global unit vector field $Y_j$ along $M^3$ such that $\\mbox{span}\\{Y_j\\}=E_{\\lambda_j}$. \nLet the functions $v_1, v_2, v_3$ be defined on $M^3$ by\n\\begin{equation} \\label{eq:globalvj}v_j=\\sqrt{\\frac{\\delta_j}{(\\lambda_j-\\lambda_i)(\\lambda_j-\\lambda_k)}},\\,\\,\\,\\delta_j=\\frac{(\\lambda_j-\\lambda_i)(\\lambda_j-\\lambda_k)}{|(\\lambda_j-\\lambda_i)(\\lambda_j-\\lambda_k)|}, \\,\\,\\,i\\neq j\\neq k\\neq i,\\end{equation} \nand let $\\alpha_1, \\alpha_2,\\alpha_3$ be given by\n$$\\alpha_1=\\frac{v_1}{v_2}Y_1(v_2), \\,\\,\\,\\,\\alpha_2=\\frac{v_2}{v_3}Y_2(v_3)\\,\\,\\,\\mbox{and}\\,\\,\\,\\alpha_3=\\frac{v_3}{v_1}Y_3(v_1).$$ \nDefine $\\phi_f\\colon M^3\\to {\\mathbb R}^6$ by $\\phi_f=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$. \n\nNow, it follows from Theorem \\ref{main3} that each point $p\\in M^3$ has a connected open neighborhood $U\\subset M^3$ endowed with principal coordinates $u_1, u_2, u_3$ such that the pair $(v,V)$ associated to $f$ satisfies (\\ref{eq:vis}) and \n(\\ref{holo.flat.min.2}). Let $\\phi_U\\colon U\\to {\\mathbb R}^6$ be given by $\\phi_U=(v_1,v_2,v_3,\\alpha_1,\\alpha_2,\\alpha_3)$,\nwith $(\\alpha_1,\\alpha_2,\\alpha_3)=\\big(\\frac{1}{v_2}\\frac{\\partial v_2}{\\partial u_1},\\frac{1}{v_3}\\frac{\\partial v_3}{\\partial u_2},\\frac{1}{v_1}\\frac{\\partial v_1}{\\partial u_3}\\big)$. It is easy to check that $\\phi_f|_U=\\Theta\\circ \\phi_U$ for some $\\Theta\\in G$. \nOn the other hand, by Proposition \\ref{flat.min.reduz.result.} we have that $\\phi_U$ \n satisfies (\\ref{flat.min.reduz.}), as well as the algebraic equation \n(\\ref{equ.chave}). 
\nIt follows that $\\phi_U(U)\\subset M^4$ and that \n$$\\frac{\\partial \\phi_U}{\\partial u_i}(u)=X_i(\\phi_U(u)), \\,\\,\\, \\,\\,\\, 1\\leq i\\leq 3,$$\nfor all $u\\in U$. \nTherefore either $\\phi_U(U)$ is an open subset of a leaf $\\sigma_{U}$ of the distribution ${\\cal D}$ on $\\tilde M^4$ spanned by $X_1, X_2, X_3$, or $\\phi_U(U)$ is an open segment of either $\\ell_+$ or $\\ell_-$. If the latter possibility holds for some open subset $U\\subset M^3$, then $v_1=v_3$ on $U$, hence $\\lambda_2=0$ on $U$. By analyticity, $\\lambda_2=0$ on $M^3$, and hence Proposition \\ref{thm:minimalpczero} implies that $f(M^3)$ is an open subset of a cone over a Clifford torus in an umbilical hypersurface $\\mathbb{Q}^{3}(\\tilde c)\\subset {\\mathbb R}^4$, $\\tilde c>0$.\n\n\n\n\n\nOtherwise we have that each point $p\\in M^3$ has an open neighborhood $U\\subset M^3$ such that\n$\\phi_f(U)$ is an open subset of a leaf $\\sigma_{U}$ of ${\\cal D}$. It follows that $\\phi_f(M^3)$ is an open subset of a leaf $\\sigma$ of ${\\cal D}$. If $\\rho\\colon M^3\\to U_\\sigma$ is a lift of $\\phi_f$ with respect to $\\phi_\\sigma$, that is, $\\phi_f=\\phi_\\sigma\\circ \\rho$, then $\\rho$ is a local diffeomorphism such that $f$ and $f_\\sigma\\circ \\rho$ have the same first and second fundamental forms.\nTherefore $f$ is congruent to $f_\\sigma\\circ \\rho$.\\qed\n \n \n \n \n \n \n \n \n\n\n\n\n\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLast decade witnessed an increased effort in the deployment of automated vehicle (AV) technology because it can enhance passenger safety, improve travel mobility, reduce fuel consumption, and maximize traffic throughput \\cite{VanderWerfShladover02, Askari_2016, Li_AAP_2017}. Early deployment dates back to research projects, such as DARPA challenges \\cite{Campbell_2010}, PATH program \\cite{Rajam_Shlad_2001}, Grand Cooperative Driving Challenges \\cite{GCDCIntro2011}. 
Meanwhile, the automotive industry has made steady progress in equipping production vehicles with advanced driver-assistance systems (ADAS) technology. Attention is currently shifting to AVs with higher levels of autonomy \\cite{SAE_J3016_2016}.\n\nThe ultimate objective is to navigate and guide AVs to destinations following driving conventions and rules.\nThe key components include perception, estimation, planning and control. Perception algorithms collect and process raw sensor (camera, radar, lidar, etc.) data, and convert them to human-readable physical data. Estimation algorithms \\cite{Farrell_2017, Wischnewski_2019, Bersani_AEIT_2019} typically apply sensor fusion techniques to obtain clean and reliable vehicle state estimates based on sensor characteristics. Planning \\cite{Karaman_IJRR_2011, Paden_TIV_2016} can be further divided into mission planning (or route planning), behavior planning (or decision making), and motion planning: i) mission planning algorithms select routes to destinations through given road networks based on requirements; ii) behavior planning generates appropriate driving behaviors in real time according to driving conventions and rules, to guide interactions with other road users and infrastructure; iii) motion planning translates the generated behavior primitives into a trajectory based on vehicle states. Control algorithms utilize techniques from control theory \\cite{Karl_Richard_Feedback, Khalil_NC, Luenberger_1997, Ioannou_Sun_RAC, Yuri_SMC} that enable vehicles to follow the aforementioned trajectories in the longitudinal and lateral directions.\n\n\\IEEEpubidadjcol\n\nVehicle control performance depends heavily on perception and estimation outcomes, among which lane perception and lane estimation \\cite{AHuang_phd, Fakhfakh_JAIHC_2020} are crucial. In longitudinal control, estimated lanes are needed to identify preceding in-path vehicles, and to obtain the maximum allowable speed based on road curvature at a preview distance. 
In lateral control, estimated lanes serve as desired paths for controllers to follow. In practice, lanes are typically perceived by cameras in real time when high-resolution GPS or high-definition maps are not available. Perception algorithms detect lanes using techniques from computer vision or machine learning and then characterize them as polynomial functions \\cite{Xu_ECCV_2020, Liu_ICCV_2021_CondLaneNetAT, Wang_ECCV_2020, Tabelini_2021} in the vehicle body-fixed frame. However, representing lanes as polynomial functions is inconvenient for practical estimation and control algorithms. On the one hand, estimation algorithms usually predict polynomial coefficients based on a nominal model, among which the Kalman filter \\cite{Klmn_1960, Humpherys_2012, Zarchan_KF_2015, Pei_KF_2019} is the best known. However, it is infeasible to mathematically derive such a model characterizing the evolution of the polynomial coefficients. This is because: i) the vehicle body-fixed frame is translating and rotating as the vehicle moves, and ii) polynomial function representation does not preserve the form in coordinate transformation.\nOn the other hand, iterative methods need to be applied when control algorithms extract attributes (curvature, heading, etc.) from the polynomial function representation of lanes at a preview distance.
Therefore, we are able to derive a mathematical model to characterize the evolution of coefficients that can be used for prediction during locomotion. Moreover, to simulate the whole process of lane perception, lane estimation and control, we set up a novel simulation framework that includes: i) usage of curvature as a function of arc length to represent lanes based on differential geometry, ii) derivation of coefficients for polynomial function representation to simulate perception in the vehicle body-fixed frame, and iii) transformation from absolute dynamics in the earth-fixed frame to relative dynamics observable in camera-based control.\n\n\nThis paper is organized as follows. In Section~\\ref{sec:prelim}, we start with preliminaries on coordinate transformation, the vectorization operator and fundamental theory on curves. In Section~\\ref{sec:curv_repr}, we provide theoretical results on curve representations, and derive transformations between polynomial function representation and arc-length-based parametric representation.\nSection~\\ref{sec:veh_ctrl_setup} investigates the camera-based vehicle control problem, and applies the theoretical results of Section~\\ref{sec:curv_repr} to obtain an intrinsic linear model for lane estimation. Also, the dynamics describing absolute position and orientation in the earth-fixed frame are transformed into those describing relative position and orientation observed in the camera field-of-view (FOV). A controller is introduced from \\cite{Wubing_LC_TIV_2022} to demonstrate that prediction based on the proposed lane estimation approach is adequate for control algorithms. In Section~\\ref{sec:res}, we first describe the setup of the simulation framework, and then conduct experiments to demonstrate the efficacy of the simulation framework and the lane estimation approach. 
In Section~\\ref{sec:conclusion}, conclusions are drawn and future research directions are pointed out.\n\n\\section{Preliminaries\\label{sec:prelim}}\nIn this section, preliminaries are provided. In Section~\\ref{sec:coord_transf_gen}, we provide the coordinate transformation used extensively in later sections. In Section~\\ref{sec:vec_op}, the definition of the vectorization operator for matrices and a related theorem are introduced. In Section~\\ref{sec:curv_repr_def}, polynomial function representation and arc-length-based parametric representation are discussed to approximate curves based on Taylor series.\n\n\\subsection{Coordinate Transformation \\label{sec:coord_transf_gen}}\n\nGiven two coordinate systems ($O$-$xyz$ and $Q$-$\\tau\\eta z$) as shown in Fig.~\\ref{fig:gen_coord_transf}, where the $Q$-$\\tau\\eta z$ frame is obtained by translating the $O$-$xyz$ frame with vector $\\pvec{OQ}$, and then rotating with angle $\\psi$, we recall the following theorem.\n\n\\begin{theorem}[Change of Coordinates]\\label{thm:gen_coord_transf}\n Suppose the coordinates of $Q$ expressed in $O$-$xyz$ frame are\n \\begin{align}\n \\boldsymbol{d} &=\n \\begin{bmatrix}\n x_{\\rm Q} & y_{\\rm Q}\n \\end{bmatrix}^{\\top}\\,,\n \\end{align}\n and the coordinates of an arbitrary point $P$ are\n \\begin{align}\n \\boldsymbol{r} &=\n \\begin{bmatrix}\n x_{\\rm P} & y_{\\rm P}\n \\end{bmatrix}^{\\top}\\,,&\n \\hat{\\boldsymbol{r}} &=\n \\begin{bmatrix}\n \\tau_{\\rm P} & \\eta_{\\rm P}\n \\end{bmatrix}^{\\top}\\,,\n \\end{align}\n expressed in the $O$-$xyz$ and $Q$-$\\tau\\eta z$ frame, respectively, then one can change coordinates from $Q$-$\\tau\\eta z$ frame to $O$-$xyz$ frame by\n \\begin{align}\n \\boldsymbol{r} &= \\bfR\\, \\hat{\\boldsymbol{r}}+\\boldsymbol{d}\\,,\n \\end{align}\n where\n \\begin{align}\n \\bfR &=\n \\begin{bmatrix}\n \\cos\\psi & -\\sin\\psi \\\\ \\sin\\psi &\\cos\\psi\n \\end{bmatrix}\n \\end{align}\n is the rotation matrix.\n\\end{theorem}\n\n\\begin{figure}\n
\\centering\n \n \\includegraphics[scale=0.5]{Fig01.pdf}\\\\\n \\caption{Coordinate transformation.\\label{fig:gen_coord_transf}}\n\\end{figure}\n\n\\subsection{Vectorization of Matrices \\label{sec:vec_op}}\nVectorization of a matrix represents the operation that cuts the matrix into columns and stacks them sequentially as a column vector.\nOne can refer to \\cite{Laub2005} for more details. Here we recall the following definition and theorem.\n\n\\begin{definition}\\label{def:vec_op}\nLet $h_{i}\\in \\mathbb{R}^{n}$ denote the $i$-th column of matrix $\\bfH\\in \\mathbb{R}^{n\\times m}$, i.e., $\\bfH=[h_{1}\\quad h_{2}\\quad\\ldots\\quad\nh_{m}]$. The vector operator is defined as\n\\begin{equation}\\label{eqn:vec_def}\n \\vectr(\\bfH)=\n \\left[\n \\begin{array}{cccc}\n h_{1}^{\\rm T} & h_{2}^{\\rm T} &\\ldots & h_{m}^{\\rm T}\n \\end{array}\n \\right]^{\\rm T}\\in\n\\mathbb{R}^{mn}\\,.\n\\end{equation}\n\\end{definition}\n\n\\begin{theorem}\\label{thm:vec_kron_relation}\nFor any three matrices $\\mathbf{A}$, $\\mathbf{B}$ and $\\bfC$ where the matrix product\n$\\mathbf{ABC}$ is defined, we have\n\\begin{equation}\n \\vectr(\\mathbf{A}\\mathbf{B}\\bfC)=(\\bfC^{\\rm T}\\otimes \\mathbf{A})\\vectr(\\mathbf{B})\\,,\n\\end{equation}\nwhere $\\otimes$ denotes Kronecker product.\n\\end{theorem}\n\n\\subsection{Curves in 2-D Space \\label{sec:curv_repr_def}}\nAccording to Taylor series, curves can be approximated with polynomials.\nIn practice, lanes are typically represented as 2-D curves in vehicle body-fixed frame.\nThus, in this part we discuss Taylor approximation for 2-D curves in $xy$-plane. 
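The identity of Theorem~\\ref{thm:vec_kron_relation} is easy to confirm numerically. The sketch below (an editorial illustration using NumPy; the matrix sizes are arbitrary choices) checks $\\vectr(\\mathbf{A}\\mathbf{B}\\bfC)=(\\bfC^{\\rm T}\\otimes \\mathbf{A})\\vectr(\\mathbf{B})$, implementing the column-stacking $\\vectr$ of Definition~\\ref{def:vec_op} via a Fortran-order reshape:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))

def vec(M):
    # column-stacking vectorization of Definition 1 (Fortran order)
    return M.reshape(-1, order='F')

lhs = vec(A @ B @ C)                 # vec(ABC), length 2*5 = 10
rhs = np.kron(C.T, A) @ vec(B)       # (C^T kron A) vec(B)
assert np.allclose(lhs, rhs)
```

This identity is what later allows matrix-valued coefficient updates to be rewritten as a single linear map acting on a stacked coefficient vector.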
The shorthand notation\n\\begin{align}\n \\dfrac{\\diff^{0} y}{\\diff x^{0}}=y(x)\\,,\n\\end{align}\nis introduced, which will be kept throughout this paper.\n\n\\begin{definition}\n Given a curve $\\mathcal{C}$ in $xy$-plane and a point $P$ on the curve whose coordinates are $(x_{0}, y(x_{0}))$, its $N$-th order polynomial function representation about point $P$ is its Taylor approximation at point $P$ until the $N$-th order, i.e.,\n \\begin{equation}\n \\begin{split}\\label{eqn:def_func_repr_algb_form}\n \\mathcal{C}'&=\\{(x,\\, y)\\,|\\,\n y(x;x_{0})=\\varphi_{0}+\\varphi_{1}\\, (x-x_{0}) +\\cdots \\\\\n &+\\varphi_{N}\\, (x-x_{0})^{N}\\,,\\,x\\in\\mathbb{R}\\}\\,,\n \\end{split}\n \\end{equation}\n where the coefficients are\n \\begin{align}\\label{eqn:func_repr_coeff_def}\n \\varphi_{n} &= \\dfrac{1}{n!}\\dfrac{\\diff^{n} y}{\\diff x^{n}}\\Big|_{x=x_{0}}\\,,\n \\end{align}\n for $n=0, 1, \\ldots, N$.\n\\end{definition}\n\n\\begin{definition}\n Given a curve $\\mathcal{C}$ in $xy$-plane and a point $P$ on the curve, the $N$-th order parametric representation using arc length\n \\begin{align}\\label{eqn:arc_length_def}\n s &= \\int_{x_{0}}^{x}\\sqrt{1+\\Big(\\dfrac{\\diff y}{\\diff x}\\Big)^{2}}\\, \\diff x\\,,\n \\end{align}\n about point $P$ is its Taylor approximation with respect to arc length parameter $s$ at point $P$ until the $N$-th order, i.e.,\n \\begin{equation}\\label{eqn:def_param_repr_algb_form}\n \\begin{split}\n \\mathcal{C}'&=\\{(x,\\, y)\\,|\\,\n x(s; s_{0})=\\bar{\\phi}_{0}+\\bar{\\phi}_{1} \\,(s-s_{0})+\\cdots\\\\\n & +\\bar{\\phi}_{N} \\,(s-s_{0})^{N},\\;\n y(s; s_{0})=\\hat{\\phi}_{0}+\\hat{\\phi}_{1}\\, (s-s_{0})\\\\\n & +\\cdots +\\hat{\\phi}_{N}\\, (s-s_{0})^{N},\\,s\\in\\mathbb{R}\\}\\,\n \\end{split}\n \\end{equation}\n where $s_{0}$ is the corresponding arc length of point $P$, and the coefficients are\n \\begin{align}\\label{eqn:param_repr_coeff_def}\n \\bar{\\phi}_{n}&= \\dfrac{1}{n!}\\dfrac{\\diff^{n} x}{\\diff 
s^{n}}\\Big|_{s=s_{0}}\\,,&\n \\hat{\\phi}_{n}&= \\dfrac{1}{n!}\\dfrac{\\diff^{n} y}{\\diff s^{n}}\\Big|_{s=s_{0}}\\,,\n \\end{align}\n for $n=0, 1, \\ldots, N$.\n\\end{definition}\n\nWe remark that in the remainder of the paper polynomial function representation \\eqref{eqn:def_func_repr_algb_form} and arc-length-based parametric representation \\eqref{eqn:def_param_repr_algb_form} will be referred to as function representation and parametric representation, respectively. To simplify the notation, we rewrite \\eqref{eqn:def_func_repr_algb_form} into the matrix form, that is,\n\\begin{align}\\label{eqn:def_func_repr_matx_form}\n \\mathcal{C}'&=\\{(x,\\, y)\\,|\\,y(x; x_{0})=\\boldsymbol{\\varphi}(x_{0})\\,\\boldsymbol{p}_{N}(x-x_{0})\\,,\\,x\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere\n\\begin{equation}\n\\begin{split}\\label{eqn:func_repr_coeff_vec_def}\n \\boldsymbol{\\varphi} & =\n \\begin{bmatrix}\n \\varphi_{0} &\\varphi_{1} & \\cdots & \\varphi_{N}\n \\end{bmatrix}\\,,\\\\\n \\boldsymbol{p}_{N}(x) & =\n \\begin{bmatrix}\n 1 & x & \\cdots & x^{N}\n \\end{bmatrix}^\\top \\,.\n\\end{split}\n\\end{equation}\nSimilarly, the matrix form of \\eqref{eqn:def_param_repr_algb_form} is\n\\begin{align}\\label{eqn:def_param_repr_matx_form}\n \\mathcal{C}'&=\\{\\boldsymbol{r}\\,|\\,\n \\boldsymbol{r}(s; s_{0})=\\boldsymbol{\\phi}(s_{0})\\,\\boldsymbol{p}_{N}(s-s_{0})\\,,\\,s\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere\n\\begin{equation}\\label{eqn:param_repr_coeff_vec_def}\n\\begin{split}\n \\boldsymbol{r}(s; s_{0}) & =\n \\begin{bmatrix}\n x(s; s_{0}) \\\\ y(s; s_{0})\n \\end{bmatrix}\\,,\\enskip\n \\boldsymbol{\\phi} =\n \\begin{bmatrix}\n \\bar{\\phi}_{0} & \\bar{\\phi}_{1} & \\cdots & \\bar{\\phi}_{N} \\\\\n \\hat{\\phi}_{0} & \\hat{\\phi}_{1} & \\cdots & \\hat{\\phi}_{N}\n \\end{bmatrix}\\,,\n\\end{split}\n\\end{equation}\nand $\\boldsymbol{p}_{N}(\\cdot)$ uses the same definition in \\eqref{eqn:func_repr_coeff_vec_def}. 
Note that the dependency of coefficients ($\\boldsymbol{\\varphi}$ and $\\boldsymbol{\\phi}$) on the coordinates of point P ($x_{0}$ and $s_{0}$) is highlighted in the matrix form.\n\n\\begin{remark}\\label{remark:curv_repr_gen}\n In 2-D space, a curve is uniquely determined if the curvature $\\kappa(s)$ is given for arbitrary arc length position $s$.\n\\end{remark}\n\\begin{IEEEproof}\n For a general point $P$ on the curve, we assume that its coordinates, slope angle, curvature and arc length are $(x, y)$, $\\alpha$, $\\kappa$ and $s$, respectively.\n Based on differential geometry, their relationship is described by\n \\begin{equation}\\label{eqn:EOM_ref_path_ds}\n \\begin{split}\n x' &=\\cos\\alpha\\, ,\n \\\\\n y' &=\\sin\\alpha\\, ,\n \\\\\n \\alpha' &=\\kappa\\, ,\n \\end{split}\n \\end{equation}\n where prime denotes differentiation with respect to $s$. Solving \\eqref{eqn:EOM_ref_path_ds} with initial conditions, one can obtain the tuple $(x, y,\\alpha)$ as functions of $s$ that characterizes the curve.\n\\end{IEEEproof}\n\n\\section{Representations of Curves \\label{sec:curv_repr}}\nThis part presents theoretical results on the representations of curves. Section~\\ref{sec:property_param_repr} shows that parametric representations have property of preserving the form in parameter shifting and coordinate transformation. Section~\\ref{sec:transform_repr} derives the changes of coefficients that reveal the relationship between function representation and parametric representation. 
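The construction in the proof of Remark~\\ref{remark:curv_repr_gen} can be illustrated numerically: integrating \\eqref{eqn:EOM_ref_path_ds} with constant curvature $\\kappa$ should recover a circle of radius $1/\\kappa$. A minimal forward-Euler sketch (step count and tolerance are illustrative choices, not part of the theory):

```python
import numpy as np

def curve_from_curvature(kappa_vals, ds):
    """Recover (x, y, alpha) from sampled curvature by integrating
    x' = cos(alpha), y' = sin(alpha), alpha' = kappa (forward Euler)."""
    alpha = np.concatenate(([0.0], np.cumsum(kappa_vals) * ds))[:-1]
    x = np.cumsum(np.cos(alpha)) * ds
    y = np.cumsum(np.sin(alpha)) * ds
    return x, y, alpha

k = 0.5                               # constant curvature
n = 200000
ds = (2 * np.pi / k) / n              # integrate one full turn
x, y, _ = curve_from_curvature(np.full(n, k), ds)

# starting at the origin heading along +x, the curve should trace the
# circle of radius 1/k centred at (0, 1/k) and return to the start
radii = np.hypot(x, y - 1.0 / k)
assert np.allclose(radii, 1.0 / k, atol=1e-3)
```

With a curvature profile $\\kappa(s)$ in place of the constant, the same integration produces the reference lanes used later in the simulation framework.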
In Section~\\ref{sec:coeff_repr_curves}, we derive the coefficients for both representations for the curve \\eqref{eqn:EOM_ref_path_ds} with known curvature at arbitrary arc length position.\n\n\\subsection{Properties of Parametric Representation \\label{sec:property_param_repr}}\n\\begin{theorem}[Conformal Representation in Shifting]\\label{thm:curve_param_repr_shift}\n The parametric curve \\eqref{eqn:def_param_repr_matx_form} preserves its form when the Taylor expansion is shifted to parameter location $s_{1}=s_{0}+\\tilde{s}$. That is, $\\boldsymbol{r}(s; s_{0})$ can be rewritten into\n \\begin{align}\n \\boldsymbol{r}(s; s_{1}) &= \\boldsymbol{\\phi}(s_{1})\\,\\boldsymbol{p}_{N}(s-s_{1})\\,,\n \\end{align}\n where the change of coefficients is\n \\begin{align}\n \\boldsymbol{\\phi}^{\\top}(s_{1}) &= \\boldsymbol{T}(\\tilde{s})\\,\\boldsymbol{\\phi}^{\\top}(s_{0})\\,,\n \\end{align}\n and\n \\begin{align}\\label{eqn:T_expansion}\n \\boldsymbol{T}(\\tilde{s})&=\n \\begin{bmatrix}\n C_{0}^{0} & C_{1}^{0}\\tilde{s} & C_{2}^{0}\\tilde{s}^{2} & \\cdots & C_{N}^{0}\\tilde{s}^{N}\\\\\n & C_{1}^{1} & C_{2}^{1}\\tilde{s} &\\cdots & C_{N}^{1}\\tilde{s}^{N-1}\\\\\n & & C_{2}^{2} & \\ddots & \\vdots \\\\\n & & & \\ddots & C_{N}^{N-1}\\tilde{s}& \\\\\n & & & & C_{N}^{N}\n \\end{bmatrix}.\n \\end{align}\n Here, $C_{N}^{k}$ represents the binomial coefficients and $C_{0}^{0}=1$.\n\\end{theorem}\n\\begin{IEEEproof}\n \n Substituting ${s_{0}=s_{1}-\\tilde{s}}$ into \\eqref{eqn:def_param_repr_matx_form}, and then utilizing binomial theorem, one can obtain\n\\begin{align}\n \\boldsymbol{r}&(s; s_{0})= \\boldsymbol{\\phi}(s_{0})\\,\\boldsymbol{p}_{N}(s-s_{0}) \\nonumber\\\\\n &=\n \\begin{bmatrix}\n \\sum\\limits_{n=0}^{N} \\bar{\\phi}_{n} (s-s_{0})^{n}\\\\\n \\sum\\limits_{n=0}^{N} \\hat{\\phi}_{n} (s-s_{0})^{n}\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n \\sum\\limits_{n=0}^{N} \\bar{\\phi}_{n} (s-s_{1}+\\tilde{s})^{n}\\\\\n \\sum\\limits_{n=0}^{N} \\hat{\\phi}_{n} 
(s-s_{1}+\\tilde{s})^{n}\n \\end{bmatrix} \\nonumber\\\\\n &=\n \\begin{bmatrix}\n \\sum\\limits_{n=0}^{N} \\bar{\\phi}_{n} \\sum\\limits_{m=0}^{n}C_{n}^{m}\\tilde{s}^{n-m}(s-s_{1})^{m}\\\\\n \\sum\\limits_{n=0}^{N} \\hat{\\phi}_{n} \\sum\\limits_{m=0}^{n} C_{n}^{m}\\tilde{s}^{n-m}(s-s_{1})^{m}\n \\end{bmatrix}\\\\\n &=\n \\begin{bmatrix}\n \\sum\\limits_{m=0}^{N} \\Big(\\sum\\limits_{n=m}^{N} C_{n}^{m}\\tilde{s}^{n-m}\\bar{\\phi}_{n}\\Big)(s-s_{1})^{m}\\\\\n \\sum\\limits_{m=0}^{N} \\Big(\\sum\\limits_{n=m}^{N} C_{n}^{m}\\tilde{s}^{n-m}\\hat{\\phi}_{n}\\Big)(s-s_{1})^{m}\n \\end{bmatrix}\\,,\\nonumber\n\\end{align}\nwhich is the $N$-th order parametric curve about $s_{1}$. Comparing the coefficients, we obtain the change of coefficients.\n\\end{IEEEproof}\n\n\\begin{theorem}[Conformal Representation in Transformation] \\label{thm:curve_param_repr_conformal}\n The parametric curve \\eqref{eqn:def_param_repr_matx_form} preserves its form in coordinate transformation. Suppose we are given two coordinate systems (frame $\\cFE_{1}$ and $\\cFE_{2}$), and the change of coordinates from $\\cFE_{2}$ to $\\cFE_{1}$ is\n \\begin{align}\n \\boldsymbol{r} &= \\bfR\\,\\hat{\\boldsymbol{r}} + \\bfd\\,,\n \\end{align}\n where $\\boldsymbol{r}$ and $\\hat{\\boldsymbol{r}}$ are the coordinates of an arbitrary point expressed in frame $\\cFE_{1}$ and $\\cFE_{2}$, respectively, $\\bfR$ is the rotation matrix, and $\\bfd$ is the origin of frame $\\cFE_{2}$ expressed in frame $\\cFE_{1}$.\n Then the parametric curve \\eqref{eqn:def_param_repr_matx_form} expressed in frame $\\cFE_{1}$ possesses the same form when expressed in frame $\\cFE_{2}$, that is,\n \\begin{align}\n \\hat{\\boldsymbol{r}}(s; s_{0}) &=\\widehat{\\boldsymbol{\\phi}}(s_{0})\\,\\boldsymbol{p}_{N}(s-s_{0})\\,,\n \\end{align}\n where the change of coefficients is\n \\begin{align}\n \\widehat{\\boldsymbol{\\phi}}(s_{0})& = \\bfR^{\\top}\\big(\\boldsymbol{\\phi}(s_{0})-\\bfD \\big)\\,,\n \\end{align}\n and\n 
\\begin{align}\n \\bfD& = \\begin{bmatrix}\n \\bfd & 0 & \\cdots & 0\n \\end{bmatrix}\\in \\mathbb{R}^{2\\times (N+1)}\\,.\n \\end{align}\n\\end{theorem}\n\\begin{IEEEproof}\n \n Based on change of coordinates, the curve \\eqref{eqn:def_param_repr_matx_form} can be expressed in $\\cFE_{2}$ as\n\\begin{equation}\n \\begin{split}\n \\hat{\\boldsymbol{r}}(s; s_{0}) &= \\bfR^{\\top}\\boldsymbol{r}(s; s_{0})- \\bfR^{\\top}\\bfd\\,,\\\\\n &=\\bfR^{\\top}\\boldsymbol{\\phi}(s_{0})\\,\\boldsymbol{p}_{N}(s-s_{0}) - \\bfR^{\\top}\\bfd\\,,\n \\end{split}\n\\end{equation}\nBy noticing that\n\\begin{align}\n \\bfd & = \\bfD\\,\\boldsymbol{p}_{N}(s-s_{0})\\,,\n\\end{align}\none can obtain the change of coefficients given in the theorem.\n\n\\end{IEEEproof}\n\n\\begin{remark}\\label{remark:func_repr_preserve_shift}\n Function representation preserves its form in parameter shifting, but not in coordinate transformation.\n\\end{remark}\n\n\\subsection{Transformations between Function Representation and Parametric Representation \\label{sec:transform_repr}}\n\nIn this part, we derive the transformations between function representation and parametric representation.\nResults are only provided until the fifth-order, which is considered adequate in practice.\nThe following assumptions are made: i) Taylor series are expanded about the intersection between the curve and the $y$-axis (also referred to as $y$-intercept); ii) this $y$-intercept marks the starting point of arc length, i.e., $s=0$; and iii) the positive direction of arc length parameter $s$ corresponds to the positive direction of the $x$-axis. We obtain the following transformations based on those assumptions.\n\n\n\\begin{theorem}\\label{thm:curve_func_2_param_repr}\n Given a curve with function representation \\eqref{eqn:def_func_repr_matx_form}, its parametric representation \\eqref{eqn:def_param_repr_matx_form} is uniquely determined in the same coordinate system. 
In other words, there exists a unique map $\\boldsymbol{\\phi}=\\boldsymbol{f}(\\boldsymbol{\\varphi})$. The coefficients up to the fifth order are\n \\begin{equation}\n \\begin{split}\n \\bar{\\phi}_{0} &= 0\\,, \\qquad\\qquad\\qquad\\qquad\\quad\\enskip\\;\n \\hat{\\phi}_{0} = \\varphi_{0}\\,,\\\\\n \\bar{\\phi}_{1} &= \\dfrac{1}{\\lambda}\\,, \\qquad\\qquad\\qquad\\qquad\\quad\\enskip\n \\hat{\\phi}_{1} = \\dfrac{\\varphi_{1}}{\\lambda}\\,,\\\\\n \\bar{\\phi}_{2} &= -\\dfrac{\\varphi_{1} \\varphi_{2}}{\\lambda^{4}}\\,, \\qquad\\qquad\\qquad\\quad\n \\hat{\\phi}_{2} =\\dfrac{\\varphi_{2}}{\\lambda^{4}}\\,,\\\\\n \\bar{\\phi}_{3} &= \\dfrac{2 \\varphi_{2}^{2}-\\varphi_{1} \\varphi_{3}}{\\lambda^{5}}-\\dfrac{8 \\varphi_{2}^{2}}{3\\lambda^{7}}\\,,\\qquad\n \\hat{\\phi}_{3} =\\dfrac{\\varphi_{3}}{\\lambda^{5}}- \\dfrac{8\\varphi_{1} \\varphi_{2}^{2}}{3\\lambda^{7}}\\,, \\\\\n \\bar{\\phi}_{4} &= \\dfrac{5 \\varphi_{2} \\varphi_{3}-\\varphi_{1} \\varphi_{4}}{\\lambda^{6}}\n -\\dfrac{10 \\varphi_{1} \\varphi_{2}^{3}+13 \\varphi_{2} \\varphi_{3}}{2\\lambda^{8}}\\\\\n &+\\dfrac{28\\varphi_{1} \\varphi_{2}^{3}}{3\\lambda^{10}}\\,,\\\\\n %\n \\hat{\\phi}_{4} &=\\dfrac{\\varphi_{4}}{\\lambda^{6}}\n +\\dfrac{16 \\varphi_{2}^{3}-13 \\varphi_{1} \\varphi_{2}\\varphi_{3}}{2\\lambda^{8}}\n - \\dfrac{28\\varphi_{2}^{3}}{3\\lambda^{10}} \\,,\\\\\n \\bar{\\phi}_{5} &= \\dfrac{3 \\varphi_{3}^{2}+6 \\varphi_{2} \\varphi_{4}-\\varphi_{1} \\varphi_{5}}{\\lambda^{7}}\\\\\n &-\\dfrac{39 \\varphi_{3}^{2}+210 \\varphi_{1}\\varphi_{2}^{2} \\varphi_{3}+76 \\varphi_{2} \\varphi_{4}-140 \\varphi_{2}^{4}}{10\\lambda^{9}}\\\\\n &+\\dfrac{188 \\varphi_{1} \\varphi_{2}^{2}\\varphi_{3}-248 \\varphi_{2}^{4}}{5\\lambda^{11}}\n + \\dfrac{112\\varphi_{2}^{4}}{3\\lambda^{13}}\\,,\\\\\n %\n \\hat{\\phi}_{5} &=\\dfrac{\\varphi_{5}}{\\lambda^{7}}\n +\\dfrac{326 \\varphi_{2}^{2} \\varphi_{3}-76 \\varphi_{1}\\varphi_{2} \\varphi_{4}-39 \\varphi_{1}\\varphi_{3}^{2}}{10\\lambda^{9}}\\\\\n &-\\dfrac{128 
\\varphi_{1}\\varphi_{2}^{4}+188 \\varphi_{2}^{2} \\varphi_{3}}{5\\lambda^{11}}+ \\dfrac{112\\varphi_{1}\\varphi_{2}^{4} }{3\\lambda^{13}}\\,,\n \\end{split}\n \\end{equation}\n where\n \\begin{align}\n \\lambda &= \\sqrt{1+\\varphi_{1}^{2}}\\,.\n \\end{align}\n\\end{theorem}\n\n\\begin{IEEEproof}\n \n Based on the assumptions, we have\n\\begin{align}\\label{eqn:fun_repr_eval_0}\n x (0)&=0\\,, &\n y (0)&=\\varphi_{0}\\,,&\n x '(s) &>0\\,.\n\\end{align}\nwhere $'$ denotes differentiation with respect to $s$, implying $\\bar{\\phi}_{0}$ and $\\hat{\\phi}_{0}$ given in the theorem.\nNotice that \\eqref{eqn:arc_length_def} yields\n\\begin{align}\\label{eqn:ds_comp_equality}\n \\diff s^{2} &=\\diff x ^{2}+\\diff y ^{2} &\\Longrightarrow \\quad\n ( x ')^{2}+( y ')^{2} &=1\\,,\n \n\\end{align}\nand the derivative of \\eqref{eqn:def_func_repr_algb_form} with respect to $s$ yields\n\\begin{align}\\label{eqn:fun_repr_deriv_1}\n y ' &= \\varphi_{1} x '+\\cdots +n\\varphi_{n} x ^{n-1} x '\\,.\n\\end{align}\nEvaluating (\\ref{eqn:ds_comp_equality}, \\ref{eqn:fun_repr_deriv_1}) at $s=0$ and utilizing \\eqref{eqn:fun_repr_eval_0}, we obtain\n\\begin{align}\\label{eqn:fun_repr_deriv1_eval_0}\n x '(0) &=\\dfrac{1}{\\sqrt{1+\\varphi_{1}^{2}}}\\,,&\n y '(0) &=\\dfrac{\\varphi_{1}}{\\sqrt{1+\\varphi_{1}^{2}}}\\,,\n\\end{align}\nwhich implies $\\bar{\\phi}_{1}$ and $\\hat{\\phi}_{1}$ given in the theorem. 
Then taking the derivatives of (\\ref{eqn:ds_comp_equality}, \\ref{eqn:fun_repr_deriv_1}) with respect to $s$ yields\n\\begin{equation}\\label{eqn:fun_repr_deriv2}\n \\begin{split}\n & x ' x ''+ y ' y '' =0\\,,\\\\\n & y '' = (\\varphi_{1} +\\cdots +n\\varphi_{n} x ^{n-1}) x ''\\\\\n &\\quad +\\big(2\\varphi_{2} +\\cdots +n(n-1)\\varphi_{n} x ^{n-2}\\big)( x ')^{2}\\,.\n \\end{split}\n\\end{equation}\nEvaluating (\\ref{eqn:fun_repr_deriv2}) at $s=0$ and utilizing (\\ref{eqn:fun_repr_eval_0}, \\ref{eqn:fun_repr_deriv1_eval_0}), we obtain\n\\begin{align}\\label{eqn:fun_repr_deriv2_eval_0}\n x ''(0) &=-\\dfrac{2\\varphi_{1}\\varphi_{2}}{(1+\\varphi_{1}^{2})^{\\frac{3}{2}}}\\,,&\n y ''(0) &=\\dfrac{2\\varphi_{2}}{(1+\\varphi_{1}^{2})^{\\frac{3}{2}}}\\,,\n\\end{align}\nwhich implies $\\bar{\\phi}_{2}$ and $\\hat{\\phi}_{2}$ given in the theorem.\nSimilarly, evaluating the derivative of (\\ref{eqn:fun_repr_deriv2}) at $s=0$ and then utilizing (\\ref{eqn:fun_repr_eval_0}, \\ref{eqn:fun_repr_deriv1_eval_0}, \\ref{eqn:fun_repr_deriv2_eval_0}), one can obtain an algebraic linear equation about $ x '''(0)$ and $ y '''(0)$ and thus obtain $\\bar{\\phi}_{3}$ and $\\hat{\\phi}_{3}$ . Following this procedure, we can derive all the derivatives and obtain the coefficients given in the theorem.\n\n\\end{IEEEproof}\n\n\\begin{theorem}\\label{thm:curve_param_2_func_repr}\n Given a curve with parametric representation \\eqref{eqn:def_param_repr_matx_form}, its function representation \\eqref{eqn:def_func_repr_matx_form} is uniquely determined in the same coordinate system. In other words, there exists a unique map $\\boldsymbol{\\varphi}=\\boldsymbol{f}^{-1}(\\boldsymbol{\\phi})$. 
The coefficients up to the fifth order are\n \\begin{equation}\n \\begin{split}\n \\varphi_{0} &= \\hat{\\phi}_{0}\\,,\\qquad\\,\n \\varphi_{2} = \\dfrac{\\hat{\\phi}_{2} \\bar{\\phi}_{1}-\\hat{\\phi}_{1} \\bar{\\phi}_{2}}{\\bar{\\phi}_{1}^3}\\,,\\\\\n \\varphi_{1} &= \\dfrac{\\hat{\\phi}_{1}}{\\bar{\\phi}_{1}}\\,,\\qquad\n \\varphi_{3} = \\dfrac{\\hat{\\phi}_{3}}{\\bar{\\phi}_{1}^3}\n -\\dfrac{\\hat{\\phi}_{1} \\bar{\\phi}_{3}+2 \\hat{\\phi}_{2} \\bar{\\phi}_{2}}{\\bar{\\phi}_{1}^4}\n +\\dfrac{2 \\hat{\\phi}_{1} \\bar{\\phi}_{2}^2}{\\bar{\\phi}_{1}^5}\\,,\\\\\n \\varphi_{4} &= \\dfrac{\\hat{\\phi}_{4}}{\\bar{\\phi}_{1}^4}\n -\\dfrac{\\hat{\\phi}_{1} \\bar{\\phi}_{4}+2 \\hat{\\phi}_{2} \\bar{\\phi}_{3}+3 \\hat{\\phi}_{3} \\bar{\\phi}_{2}}{\\bar{\\phi}_{1}^5}\\\\\n &+\\dfrac{5 \\bar{\\phi}_{2}(\\hat{\\phi}_{1} \\bar{\\phi}_{3} + \\hat{\\phi}_{2} \\bar{\\phi}_{2})}{\\bar{\\phi}_{1}^6}\n -\\dfrac{5 \\hat{\\phi}_{1} \\bar{\\phi}_{2}^3}{\\bar{\\phi}_{1}^7}\\,,\\\\\n \\varphi_{5} &=\\dfrac{\\hat{\\phi}_{5}}{\\bar{\\phi}_{1}^5}\n -\\dfrac{\\hat{\\phi}_{1} \\bar{\\phi}_{5}+2 \\hat{\\phi}_{2} \\bar{\\phi}_{4}+3 \\hat{\\phi}_{3} \\bar{\\phi}_{3}+4 \\hat{\\phi}_{4} \\bar{\\phi}_{2}}{\\bar{\\phi}_{1}^6}\\\\\n &+\\dfrac{3 \\hat{\\phi}_{1} (\\bar{\\phi}_{3}^2+2 \\bar{\\phi}_{2} \\bar{\\phi}_{4})+3\\bar{\\phi}_{2}(3 \\hat{\\phi}_{3} \\bar{\\phi}_{2}+4 \\hat{\\phi}_{2} \\bar{\\phi}_{3})}{\\bar{\\phi}_{1}^7}\\\\\n &-\\dfrac{7 \\bar{\\phi}_{2}^{2}(3 \\hat{\\phi}_{1} \\bar{\\phi}_{3} +2 \\hat{\\phi}_{2} \\bar{\\phi}_{2})}{\\bar{\\phi}_{1}^8}\n +\\dfrac{14 \\hat{\\phi}_{1} \\bar{\\phi}_{2}^4}{\\bar{\\phi}_{1}^9}\\,.\n \\end{split}\n \\end{equation}\n\\end{theorem}\n\n\\begin{IEEEproof}\n \n Given the parametric representation \\eqref{eqn:def_param_repr_algb_form} of the curve, one can obtain the derivatives ($x'$, $y'$, $x''$, $y''$, ...) with respect to $s$ until the required order. 
Also, notice that\n\\begin{equation}\\label{eqn:proof_dyx}\n \\dfrac{\\diff y}{\\diff x} =\\dfrac{y'}{x'}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{eqn:proof_dyx_recurv}\n \\dfrac{\\diff^{n+1} y}{\\diff x^{n+1}} =\\dfrac{\\diff}{\\diff x}\\left(\\dfrac{\\diff^{n} y}{\\diff x^{n}}\\right)\n =\\dfrac{\\left(\\frac{\\diff^{n} y}{\\diff x^{n}}\\right)'}{ x'}\\,, \\quad n=1,\\, 2,\\, 3,\\, \\ldots\n\\end{equation}\nCalculating the derivatives in (\\ref{eqn:proof_dyx}, \\ref{eqn:proof_dyx_recurv}) recursively and utilizing \\eqref{eqn:func_repr_coeff_def}, one can obtain the coefficients given in the theorem by noticing that $x_0=0$ implies $s=0$.\n\n\\end{IEEEproof}\n\n\\begin{remark}\nThe transformed representation is not necessarily equal to the original representation due to truncation errors in Taylor series, but can provide a very good approximation. This is because their derivatives are equal up until the specified order at the point where Taylor series are expanded.\n\\end{remark}\n\n\\subsection{Representations of a Given Curve \\label{sec:coeff_repr_curves}}\nIt is useful to derive the coefficients of parametric representation and function representation for curves described by \\eqref{eqn:EOM_ref_path_ds} with given $\\kappa(s)$. On the one hand, these coefficients can be used to validate the transformations derived in Section~\\ref{sec:transform_repr}. On the other hand, they can be used to simulate perception outcomes for camera-based control. In the following, we assume: i) the tuple $(x(s), y(s),\\alpha(s))$ are obtained with the given $\\kappa(s)$ according to \\eqref{eqn:EOM_ref_path_ds}; and ii) the representations are expanded about point $P$ whose arc length position is $s_{0}$. 
Other attributes at point P are\n\\begin{equation}\\label{eqn:gen_curv_P_loc}\n\\begin{split}\n &x(s_{0}) =x_{0}\\,, \\enskip\n y(s_{0}) =y_{0}\\,, \\enskip\n \\alpha(s_{0}) =\\alpha_{0}\\,, \\enskip\n \\kappa(s_{0}) =\\kappa_{0}\\,,\\\\\n %\n &\\dfrac{\\diff \\kappa}{\\diff s}(s_{0})=\\kappa'_{0}\\,,\\quad\n \\dfrac{\\diff^{2} \\kappa}{\\diff s^{2}}(s_{0})=\\kappa''_{0}\\,,\\quad\n \\dfrac{\\diff^{3} \\kappa}{\\diff s^{3}}(s_{0})=\\kappa'''_{0}\\,.\n\\end{split}\n\\end{equation}\n\n\n\\begin{theorem}\\label{thm:gen_curve_param_repr}\n Given a curve in $xy$-plane with known ($x(s)$, $y(s)$, $\\alpha(s)$, $\\kappa(s)$), its parametric representation about point $P$ is unique. The coefficients until fifth-order are\n \\begin{equation}\n \\begin{split}\n \\bar{\\phi}_{0}&=x_{0}\\,,\\qquad \\qquad \\qquad\\qquad\n \\hat{\\phi}_{0}=y_{0}\\,,\\\\\n \\bar{\\phi}_{1}&=\\cos\\alpha_{0}\\,,\\qquad\\hspace{45pt}\n \\hat{\\phi}_{1}=\\sin\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{2}&=-\\tfrac{1}{2}\\kappa_{0}\\sin\\alpha_{0}\\,,\\quad\\hspace{31pt}\n \\hat{\\phi}_{2}=\\tfrac{1}{2}\\kappa_{0}\\cos\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{3}&=-\\tfrac{1}{6}\\kappa_{0}^{2}\\cos\\alpha_{0}-\\tfrac{1}{6}\\kappa_{0}'\\sin\\alpha_{0}\\,,\\\\\n \\hat{\\phi}_{3}&=-\\tfrac{1}{6}\\kappa_{0}^{2}\\sin\\alpha_{0}+\\tfrac{1}{6}\\kappa_{0}'\\cos\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{4}&=-\\tfrac{1}{24}(\\kappa_{0}''-\\kappa_{0}^{3})\\sin\\alpha_{0}-\\tfrac{1}{8}\\kappa_{0}\\kappa_{0}'\\cos\\alpha_{0}\\,,\\\\\n \\hat{\\phi}_{4}&=\\tfrac{1}{24}(\\kappa_{0}''-\\kappa_{0}^{3})\\cos\\alpha_{0}-\\tfrac{1}{8}\\kappa_{0}\\kappa_{0}'\\sin\\alpha_{0}\\,,\\\\\n \\bar{\\phi}_{5}&=\\tfrac{1}{120}\\big(\\kappa_{0}^{4}-3(\\kappa_{0}')^{2}-4\\kappa_{0}\\kappa_{0}''\\big)\\cos\\alpha_{0}\\\\\n &\\quad+\\tfrac{1}{120}(6\\kappa_{0}^{2}\\kappa_{0}'-\\kappa_{0}''')\\sin\\alpha_{0}\\,,\\\\\n \\hat{\\phi}_{5}&=\\tfrac{1}{120}\\big(\\kappa_{0}^{4}-3(\\kappa_{0}')^{2}-4\\kappa_{0}\\kappa_{0}''\\big)\\sin\\alpha_{0}\\\\\n 
&\\quad-\\tfrac{1}{120}(6\\kappa_{0}^{2}\\kappa_{0}'-\\kappa_{0}''')\\cos\\alpha_{0}\\,.\n \\end{split}\n \\end{equation}\n\\end{theorem}\n\\begin{IEEEproof}\n \n Given an arbitrary point $(x, y)$ on the curve (\\ref{eqn:EOM_ref_path_ds}), the derivatives of the curve at that point are\n\\begin{equation}\\label{eqn:xy_derivs}\n \\begin{split}\n x' &= \\cos\\alpha\\,,\\qquad\\qquad\\qquad\\quad\\enskip\n y' =\\sin\\alpha\\,,\\\\\n x''&=-\\kappa\\sin\\alpha\\,,\\qquad\\qquad\\qquad\n y''=\\kappa\\cos\\alpha\\,,\\\\\n x'''&=-\\kappa^{2}\\cos\\alpha-\\kappa'\\sin\\alpha\\,,\\\\\n x^{(4)}&=-(\\kappa''-\\kappa^{3})\\sin\\alpha-3\\kappa\\kappa'\\cos\\alpha\\,,\\\\\n x^{(5)}&=\\big(\\kappa^{4}-3(\\kappa')^{2}-4\\kappa\\kappa''\\big)\\cos\\alpha\\\\\n &+(6\\kappa^{2}\\kappa'-\\kappa''')\\sin\\alpha\\\\\n y''' &=-\\kappa^{2}\\sin\\alpha+\\kappa'\\cos\\alpha\\,,\\\\\n y^{(4)}&=(\\kappa''-\\kappa^{3})\\cos\\alpha-3\\kappa\\kappa'\\sin\\alpha\\,,\\\\\n y^{(5)}&=\\big(\\kappa^{4}-3(\\kappa')^{2}-4\\kappa\\kappa''\\big)\\sin\\alpha\\\\\n &-(6\\kappa^{2}\\kappa'-\\kappa''')\\cos\\alpha\\,.\n \\end{split}\n\\end{equation}\nEvaluating \\eqref{eqn:xy_derivs} at point $P$ and utilizing \\eqref{eqn:param_repr_coeff_def}, we obtain the coefficients given in the theorem.\n\n\\end{IEEEproof}\n\n\n\\begin{theorem}\\label{thm:gen_curve_func_repr}\n Given a curve in $xy$-plane with known ($x(s)$, $y(s)$, $\\alpha(s)$, $\\kappa(s)$), its function representation about point $P$ is unique. 
The coefficients up to the fifth order are\n \\begin{equation}\n \\begin{split}\n \\varphi_{0} &= y_{0}\\,, \\qquad\\hspace{50pt}\n \\varphi_{1} = \\tan\\alpha_{0}\\,,\\\\\n \\varphi_{2} &= \\dfrac{\\kappa_{0}}{2\\cos^{3}\\alpha_{0}}\\,, \\qquad\\qquad\n \\varphi_{3} = \\dfrac{\\kappa_{0}'+3\\kappa_{0}^{2}\\tan\\alpha_{0}}{6\\cos^{4}\\alpha_{0}}\n \\,,\\\\\n %\n \\varphi_{4} &= \\dfrac{5\\kappa_{0}^3 }{8\\cos^{7}\\alpha_{0}}\n +\\dfrac{\\kappa_{0}''-12 \\kappa_{0}^3+10 \\kappa_{0}\\kappa_{0}'\\tan\\alpha_{0} }{24\\cos^{5}\\alpha_{0}} \\,,\\\\\n \\varphi_{5} &= \\dfrac{7\\kappa_{0}^{2} (\\kappa_{0}'+\\kappa_{0}^{2}\\tan\\alpha_{0})}{8\\cos^{8}\\alpha_{0}}\n +\\dfrac{\\kappa_{0}'''-86 \\kappa_{0}^{2} \\kappa_{0}'}{120\\cos^{6}\\alpha_{0}}\\\\\n &+\\dfrac{ \\big(3\\kappa_{0}\\kappa_{0}''+2(\\kappa_{0}')^{2}-12\\kappa_{0}^{4}\\big) \\tan\\alpha_{0} }{24\\cos^{6}\\alpha_{0}}\n \\,.\n \\end{split}\n \\end{equation}\n\\end{theorem}\n\\begin{IEEEproof}\n \n Given an arbitrary point $(x, y)$ on the curve (\\ref{eqn:EOM_ref_path_ds}), the derivatives of the coordinates $x$ and $y$ with respect to $s$ at that point are given in \\eqref{eqn:xy_derivs}. Also, notice that the derivatives of $y$ with respect to $x$ are the same as those given in (\\ref{eqn:proof_dyx}, \\ref{eqn:proof_dyx_recurv}). Substituting \\eqref{eqn:xy_derivs} into (\\ref{eqn:proof_dyx}, \\ref{eqn:proof_dyx_recurv}), one can obtain the derivatives recursively until the required order. 
Then the coefficients can be obtained by evaluating these derivatives at point $P$ and utilizing \\eqref{eqn:func_repr_coeff_def}.\n\\end{IEEEproof}\n\n\\begin{remark}\nOne can verify that coefficients given in Theorem~\\ref{thm:gen_curve_param_repr} and \\ref{thm:gen_curve_func_repr} satisfy the transformations given in Theorem~\\ref{thm:curve_func_2_param_repr} and \\ref{thm:curve_param_2_func_repr}.\n\\end{remark}\n\n\\section{Application in Lateral Control \\label{sec:veh_ctrl_setup}}\nThis section applies arc-length-based parametric representation to camera-based lateral control problem. We first present a typical architecture using function representation, and discuss the related issues in lane estimation and control. Then a new architecture is proposed to use parametric representation that can facilitate and improve lane estimation as well as control-related information extraction.\n\nFig.~\\ref{fig:block_diag}(a) illustrates a typical architecture of camera-based vehicle control. At first, lanes are captured by cameras. Then perception applies lane detection algorithms using computer vision or machine learning techniques, and outputs coefficients $\\boldsymbol{\\varphi}$ that represent lanes with polynomial function representations. In the scenarios where perception is not perfect (noisy or corrupted) or temporarily unavailable, lane estimation becomes extremely important, which is achieved through predictors, observers or estimators. Those techniques require a model on the evolution of coefficients for prediction. For example, the orange box in Fig.~\\ref{fig:block_diag}(a) represents a typical Kalman filter implementation, which is decoupled into two steps: time update and measurement update. 
Time update step utilizes a model to predict a priori estimate of the coefficients, while the measurement update step generates a posteriori estimate based on the newest measurement and signal characteristics.\nHowever, when function representation is used, it is rather difficult to find such a model because: i) the vehicle body-fixed frame is translating and rotating due to vehicle movement; and ii) function representation does not preserve the form in coordinate transformation.\nTherefore, in practice it is common to use simple approximated models for prediction, such as $\\boldsymbol{\\varphi}_{k+1}=\\boldsymbol{\\varphi}_{k}$. These predictions only work for a very short period, and underlying inaccuracies in the model lead to unexpected behaviors due to the aforementioned issues.\nThe other issue appears when control algorithms need to extract state information with respect to lanes at a preview distance, such as lateral deviation, relative heading angle, road curvature, etc. Extracting such information requires numerical iterations for function representation because the preview distance is implicit in the representation.\n\n\\begin{figure}[!t]\n \\centering\n \n \\includegraphics[scale=1.2]{Fig02.pdf}\\\\\n \\caption{Block diagrams of camera-based vehicle control. (a) A typical architecture using function representation. (b) Proposed architecture using arc-length-based parametric representation. }\\label{fig:block_diag}\n\\end{figure}\n\nThese issues can be resolved if arc-length-based parametric representation is utilized to characterize lanes.\nWhen the vehicle moves, the nice properties in Theorem~\\ref{thm:curve_param_repr_shift} and \\ref{thm:curve_param_repr_conformal} indicate an intrinsic linear model on the evolution of coefficients. 
Also, it is straightforward to extract information with respect to lanes at a preview distance since this distance is actually the arc length parameter explicitly used in the representation.\nFig.~\\ref{fig:block_diag}(b) proposes a new architecture for camera-based control problem using arc-length-based parametric representation.\nWe still assume perception outputs coefficients $\\boldsymbol{\\varphi}$ of polynomial function representation to ensure compatibility with current platforms. An additional step is introduced to transform function representation to parametric representation by applying Theorem~\\ref{thm:curve_func_2_param_repr}. Thus, we can derive an intrinsically linear model on the evolution of coefficients $\\boldsymbol{\\phi}$, and easily extract information needed by control algorithms. The details are provided in the remainder of this part. In Section~\\ref{sec:perception}, we introduce notations, and discuss lane representations and transformations. Section~\\ref{sec:estmation_lane} derives the model on the evolution of coefficients $\\boldsymbol{\\phi}$ for lane estimation. In Section~\\ref{sec:ctrl}, we introduce a vehicle dynamic model and a lateral controller as an example to demonstrate how to utilize the derived lane estimation model.\n\n\n\n\\subsection{Perception and Transformation \\label{sec:perception}}\n\n\nFig.~\\ref{fig:camera_control} depicts the camera-based perception of the path represented as the green dashed curve. Remark that in practice perception algorithms output representations of lane markers captured by cameras, which can be transformed into representations of lane centers. For simplicity, in this paper we use paths or lanes to indicate lane centers.\nWe also assume that the camera with FOV angle $2\\delta$ is mounted at point $Q$ along the longitudinal symmetry axis. The wheelbase length is $l$, and the distance from point $Q$ to the rear axle center is $d$. 
The light purple sector region denotes the FOV of the camera, which is also symmetric about the longitudinal axis. The closest on-path point observed in the camera is $\\Omega$. Note that the camera can only capture the segment $\\mathcal{C}$ of the path within the FOV that is highlighted as the solid green curve beyond point $\\Omega$. In this part, we maintain the following notations:\n\\begin{enumerate}\n \\item ${(x,y,z)}$ denotes the earth-fixed frame $\\cFE$.\n\n \\item ${(\\tau, \\eta, z)}$ denotes the vehicle body-fixed frame $\\cFB$ with the origin located at $Q$. Axes $\\tau$ and $\\eta$ are along the longitudinal and lateral directions, with the corresponding unit vectors denoted as $\\unitvec[\\tau]$ and $\\unitvec[\\eta]$, respectively.\n\n \\item The position of $Q$ is ($x_{Q}$, $y_{Q}$) expressed in $\\cFE$, and the vehicle heading angle is $\\psi$ with respect to the $x$-axis.\n\n \\item The position of $\\Omega$ is ${(x_{\\rm \\Omega}, y_{\\rm \\Omega})}$ expressed in $\\cFE$, while the slope angle and curvature at point $\\Omega$ are $\\alpha_{\\rm \\Omega}$ and $\\kappa_{\\rm \\Omega}$, respectively.\n\n \\item When it is needed to distinguish different time instants, subscripts $k$ or $k+1$ indicating time steps will be added to the points, axes, frames and time-varying variables.\n\\end{enumerate}\n\n\\begin{figure}[!t]\n \\centering\n \n \\includegraphics[scale=0.9]{Fig03.pdf}\\\\\n \\caption{Schematics of camera-based vehicle control.\\label{fig:camera_control}}\n\\end{figure}\n\n\nPerception algorithms process the observed path segment $\\mathcal{C}$ and approximate it with a polynomial function\n\\begin{align}\\label{eqn:poly_func_repr_D}\n \\mathcal{C}'=\\{(\\tau, \\eta)\\,|\\,\\eta = \\boldsymbol{\\varphi}(0)\\,\\boldsymbol{p}_{N}(\\tau)\\,,\\,\\tau\\in\\mathbb{R}\\}\\,,\n\\end{align}\nthat is expressed in frame $\\cFB$ and indicated by the red curve in Fig.~\\ref{fig:camera_control}.\nNote that point $D$ corresponding to 
$\\tau=0$ is invisible in the FOV since the FOV is typically less than $180$ degrees. Strictly speaking, \\eqref{eqn:poly_func_repr_D} is not the Taylor approximation of $\\mathcal{C}$ about point $D$, but the expansion of the Taylor approximation of $\\mathcal{C}$ about the closest on-path point (point $\\Omega$) in the FOV, that is,\n\\begin{align}\\label{eqn:poly_func_repr_omega}\n \\mathcal{C}'=\\{(\\tau, \\eta)\\,|\\,\\eta = \\boldsymbol{\\varphi}(\\tau_{\\Omega})\\,\\boldsymbol{p}_{N}(\\tau-\\tau_{\\Omega})\\,,\\,\\tau\\in\\mathbb{R}\\}\\,.\n\\end{align}\nAccording to Remark~\\ref{remark:func_repr_preserve_shift}, the change of coefficients is\n\\begin{align}\n \\boldsymbol{\\varphi}^\\top(0)&=\\boldsymbol{T}(-\\tau_{\\Omega})\\boldsymbol{\\varphi}^{\\top}(\\tau_{\\Omega})\\,,\n\\end{align}\nwhere $\\boldsymbol{T}(\\cdot)$ is the same as that given in \\eqref{eqn:T_expansion}.\n\nThe next step is to transform the function representation \\eqref{eqn:poly_func_repr_D} to an arc-length-based parametric representation.\nIdeally, one should transform \\eqref{eqn:poly_func_repr_omega} to the parametric representation about point $\\Omega$, since \\eqref{eqn:poly_func_repr_D} is the expansion of the approximation \\eqref{eqn:poly_func_repr_omega} about point $\\Omega$.\nHowever, the transformation about point $\\Omega$ is rather complicated (see the proof of Theorem~\\ref{thm:curve_func_2_param_repr}), and it leads to a floating origin of the arc length coordinate $s$ while the vehicle is moving. Therefore, we transform \\eqref{eqn:poly_func_repr_D} to the parametric representation about point $D$, which is the $\\eta$-intercept of $\\mathcal{C}'$ in frame $\\cFB$.\nAs shown in Fig.~\\ref{fig:camera_control}, point $\\Omega$ is close to point $D$ when the following holds: i) the lateral deviation to the path is small; or ii) the camera has a relatively wide view. 
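To make this transformation step concrete, the map $\boldsymbol{\phi}=\boldsymbol{f}(\boldsymbol{\varphi})$ of Theorem~\ref{thm:curve_func_2_param_repr} can be evaluated numerically. The following sketch (Python with NumPy; the function name and the third-order truncation are our illustrative choices, not part of the development above) implements the closed-form coefficients and checks them against the hand-computed arc-length expansion of a parabola:

```python
import numpy as np

def func_to_param(varphi):
    """Map function-representation coefficients varphi = [v0, v1, v2, v3]
    to parametric coefficients (phi_bar, phi_hat) up to the third order,
    following the closed-form expressions of the theorem."""
    v0, v1, v2, v3 = varphi
    lam = np.sqrt(1.0 + v1**2)
    phi_bar = np.array([
        0.0,
        1.0 / lam,
        -v1 * v2 / lam**4,
        (2 * v2**2 - v1 * v3) / lam**5 - 8 * v2**2 / (3 * lam**7),
    ])
    phi_hat = np.array([
        v0,
        v1 / lam,
        v2 / lam**4,
        v3 / lam**5 - 8 * v1 * v2**2 / (3 * lam**7),
    ])
    return np.vstack([phi_bar, phi_hat])  # 2 x (N+1) coefficient matrix

# Sanity check on the parabola eta = a*tau^2 (zero slope at the y-intercept):
# the exact arc-length expansion is tau(s) = s - (2/3) a^2 s^3 + O(s^5)
# and eta(s) = a s^2 + O(s^4).
a = 0.1
phi = func_to_param([0.0, 0.0, a, 0.0])
```

For $\varphi_{1}=0$ the formula indeed reproduces $\bar{\phi}_{3}=-\tfrac{2}{3}\varphi_{2}^{2}$, consistent with the expansion above.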
Applying Theorem~\\ref{thm:curve_func_2_param_repr}, we obtain the parametric representation of $\\mathcal{C}'$ given in \\eqref{eqn:poly_func_repr_D} about point $D$ as\n\\begin{align}\\label{eqn:param_repr_body_frame}\n \\mathcal{C}'&=\\{\\bfr\\,|\\,\\bfr(s; 0)= \\boldsymbol{\\phi}(0)\\,\\boldsymbol{p}_{N}(s)\\,,\\,s\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere $\\bfr = [\\tau, \\; \\eta]^\\top$ is the coordinates in frame $\\cFB$, the location of arc length coordinate $s=0$ corresponds to point $D$, and the change of coefficients is\n\\begin{align}\\label{eqn:param_repr_body_frame_coeff}\n \\boldsymbol{\\phi}(0)&= \\boldsymbol{f}(\\boldsymbol{\\varphi}(0))\\,.\n\\end{align}\n\n\n\\subsection{Lane Estimation \\label{sec:estmation_lane}}\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[scale=0.6]{Fig04.pdf}\\\\\n \\caption{Vehicle movement from step $k$ to $k+1$.\\label{fig:lanepoly_time_evolve}}\n\\end{figure}\n\nFig.~\\ref{fig:lanepoly_time_evolve} depicts vehicle movement from time step $k$ to $k+1$. As mentioned earlier, Fig.~\\ref{fig:lanepoly_time_evolve} uses subscripts $k$ or $k+1$ on points, frames and time-varying variables to distinguish different instants; cf.~Fig.~\\ref{fig:camera_control}. Specifically, point $Q_{k}$, frame $\\cFB[k]$ with $\\tau_{k}\\eta_{k}z$ axis and point $D_{k}$ denote the location of the camera, the body-fixed frame and the $\\eta$-intercept of the lane at step $k$, respectively. The red curve $\\mathcal{C}'$ represents the curve \\eqref{eqn:param_repr_body_frame} at step $k$.\n$(x_{k}, y_{k})$ are the coordinates of $Q_{k}$ in the earth-fixed frame $\\cFE$, while $\\psi_{k}$ is the vehicle heading. $(\\tilde{x}_{k}, \\tilde{y}_{k})$ and $(\\tilde{\\tau}_{k}, \\tilde{\\eta}_{k})$ are the coordinates of the displacement vector $Q_{k}Q_{k+1}$ expressed in the earth-fixed frame $\\cFE$ and body-fixed frame $\\cFB[k]$, respectively. $\\tilde{\\psi}_{k}$ is the change of vehicle heading from step $k$ to step $k+1$. 
$s_{k}$ is the arc length coordinate at step $k$ such that $s_{k}=0$ corresponds to point $D_{k}$, while $\\tilde{s}_{k}$ represents the arc length distance from $D_{k}$ to $D_{k+1}$ along the curve $\\mathcal{C}'$. In summary,\n\\begin{equation}\\label{eqn:term_diff_k_to_k1}\n\\begin{split}\n \\tilde{x}_{k} & = x_{k+1}-x_{k}\\,,\\qquad\n \\tilde{y}_{k} = y_{k+1}-y_{k}\\,,\\\\\n \\tilde{\\psi}_{k} & = \\psi_{k+1}-\\psi_{k}\\,,\\qquad\n \\tilde{s}_{k} = s_{k}-s_{k+1}\\,.\n\\end{split}\n\\end{equation}\nIn the following we assume vehicle state changes $\\tilde{x}_{k}$, $\\tilde{y}_{k}$, $\\tilde{\\psi}_{k}$, $\\tilde{\\tau}_{k}$, $\\tilde{\\eta}_{k}$ and $\\tilde{s}_{k}$ are known, and will investigate the details on how to obtain them in Section~\\ref{sec:ctrl}.\n\nTo derive the evolution of coefficients in \\eqref{eqn:param_repr_body_frame}, we highlight the time-dependency and rewrite it for step $k$ as\n\\begin{align}\\label{eqn:param_repr_body_frame_k_in_k}\n \\mathcal{C}'&=\\{\\bfr_{k}\\,|\\,\\bfr_{k}(s_{k}; 0) = \\boldsymbol{\\phi}_{k}(0)\\,\\boldsymbol{p}_{N}(s_{k})\\,,\\,s_{k}\\in \\mathbb{R}\\}\\,,\n\\end{align}\nwhich is the parametric representation using arc length coordinates $s_{k}$ about point $D_{k}$ in frame $\\cFB[k]$. The objective is to derive the new representation\n\\begin{equation}\\label{eqn:param_repr_body_frame_k+1}\n\\begin{split}\n \\mathcal{C}'&=\\{\\bfr_{k+1}\\,|\\,\n \\bfr_{k+1}(s_{k+1}; 0)= \\boldsymbol{\\phi}_{k+1}(0)\\,\\boldsymbol{p}_{N}(s_{k+1})\\,,\\, s_{k+1}\\in\\mathbb{R}\\}\\,,\n\\end{split}\n\\end{equation}\ngiven \\eqref{eqn:param_repr_body_frame_k_in_k} based on vehicle state changes from step $k$ to step $k+1$ such that the relationship between the coefficients $\\boldsymbol{\\phi}_{k}(0)$ and $\\boldsymbol{\\phi}_{k+1}(0)$ can be obtained. Note that \\eqref{eqn:param_repr_body_frame_k+1} is based on arc length coordinates $s_{k+1}$ about point $D_{k+1}$ in frame $\\cFB[k+1]$. 
Hence, three steps are performed sequentially in the following: i) coordinate transformation from frame $\\cFB[k]$ to frame $\\cFB[k+1]$; ii) shifting expansion point from $D_{k}$ to $D_{k+1}$; iii) changing coordinate from $s_{k}$ to $s_{k+1}$.\n\nAccording to Theorem~\\ref{thm:gen_coord_transf}, the change of coordinates from body-fixed frame $\\cFB[k+1]$ to $\\cFB[k]$ is\n\\begin{align}\\label{eqn:coord_transf_k_k_1}\n\\boldsymbol{r}_{k} &= \\bfR_{k}\\, \\boldsymbol{r}_{k+1}+\\boldsymbol{d}_{k}\\,,\n\\end{align}\nwhere\n\\begin{align}\n \\bfR_{k} &=\n \\begin{bmatrix}\n \\cos\\tilde{\\psi}_{k} & -\\sin\\tilde{\\psi}_{k} \\\\ \\sin\\tilde{\\psi}_{k} &\\cos\\tilde{\\psi}_{k}\n \\end{bmatrix}\\,,&\n \\bfd_{k} &=\n \\begin{bmatrix}\n \\tilde{\\tau}_{k} \\\\ \\tilde{\\eta}_{k}\n \\end{bmatrix}\\,.\n\\end{align}\nApplying Theorem~\\ref{thm:curve_param_repr_conformal} to curve \\eqref{eqn:param_repr_body_frame_k_in_k} with coordinate transformation \\eqref{eqn:coord_transf_k_k_1}, we obtain the parametric representation using arc length coordinates $s_{k}$ about point $D_{k}$ in frame $\\cFB[k+1]$ as\n\\begin{align}\\label{eqn:param_repr_body_frame_k_in_k+1}\n \\mathcal{C}'&=\\{\\bfr_{k+1}\\,|\\,\n \\bfr_{k+1}(s_{k}; 0)= \\widehat{\\boldsymbol{\\phi}}_{k}(0)\\,\\boldsymbol{p}_{N}(s_{k})\\,,\\,s_{k}\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere the change of coefficients is\n\\begin{align}\\label{eqn:coeff_change_step_1}\n \\widehat{\\boldsymbol{\\phi}}_{k}(0) & = \\bfR_{k}^{\\top}\\big(\\boldsymbol{\\phi}_{k}(0)-\\bfD_{k} \\big)\\,,\n\\end{align}\nand\n\\begin{align}\n\\bfD_{k} & = \\begin{bmatrix}\n \\bfd_{k} & 0 & \\cdots & 0\n\\end{bmatrix}\\in \\mathbb{R}^{2\\times (N+1)}\\,.\n\\end{align}\n\nNotice that the expansion point has shifted from $D_{k}$ to $D_{k+1}$ with distance $\\tilde{s}_{k}$.\nApplying Theorem~\\ref{thm:curve_param_repr_shift} to \\eqref{eqn:param_repr_body_frame_k_in_k+1}, we obtain the parametric representation of $\\mathcal{C}'$ using arc length 
coordinates $s_{k}$ about point $D_{k+1}$ in frame $\\cFB[k+1]$ as\n\\begin{align}\\label{eqn:param_repr_body_frame_k_in_k+1_sk}\n \\mathcal{C}'&=\\{\\bfr_{k+1}\\,|\\,\n \\bfr_{k+1}(s_{k}; \\tilde{s}_{k}) = \\widehat{\\boldsymbol{\\phi}}_{k}(\\tilde{s}_{k})\\,\\boldsymbol{p}_{N}(s_{k}-\\tilde{s}_{k})\\,,\\,s_{k}\\in\\mathbb{R}\\}\\,,\n\\end{align}\nwhere the change of coefficients is\n\\begin{align}\\label{eqn:coeff_change_step_2}\n \\widehat{\\boldsymbol{\\phi}}_{k}^\\top(\\tilde{s}_{k})&=\\boldsymbol{T}(\\tilde{s}_{k})\\,\\widehat{\\boldsymbol{\\phi}}_{k}^\\top(0)\\,,\n\\end{align}\nand $\\boldsymbol{T}(\\cdot)$ is the same as that given in \\eqref{eqn:T_expansion}.\n\nNext, utilizing \\eqref{eqn:term_diff_k_to_k1}, we can change coordinates from $s_{k}$ to $s_{k+1}$ to obtain the parametric representation \\eqref{eqn:param_repr_body_frame_k+1}, where the change of coefficients is\n\\begin{align}\\label{eqn:coeff_change_step_3}\n \\boldsymbol{\\phi}_{k+1}(0)&=\\widehat{\\boldsymbol{\\phi}}_{k}(\\tilde{s}_{k})\\,.\n\\end{align}\nIn summary, the evolution of coefficients in the parametric representation from step $k$ to step $k+1$ is\n\\begin{align}\\label{eqn:coeff_dyn_gen_equiv}\n \\boldsymbol{\\phi}_{k+1}(0)&=\\bfR_{k}^{\\top}\\big(\\boldsymbol{\\phi}_{k}(0)-\\bfD_{k} \\big)\\, \\boldsymbol{T}^\\top(\\tilde{s}_{k})\\,,\n\\end{align}\ncf. 
(\\ref{eqn:coeff_change_step_1}, \\ref{eqn:coeff_change_step_2}, \\ref{eqn:coeff_change_step_3}), implying a linear relationship by nature.\n\n\\begin{remark}\nUtilizing the vectorization operator and Theorem~\\ref{thm:vec_kron_relation}, one can rewrite \\eqref{eqn:coeff_dyn_gen_equiv} into the standard form of a linear system\n\\begin{align}\n \\boldsymbol{\\Phi}_{k+1}&=\\mathbf{A}_{k}\\boldsymbol{\\Phi}_{k}+\\mathbf{B}_{k}\\,,\n\\end{align}\nwhere\n\\begin{equation}\n \\begin{split}\n \\boldsymbol{\\Phi}_{k}&=\\vectr(\\boldsymbol{\\phi}_{k}(0))\\,, \\qquad\\qquad\n \\mathbf{A}_{k} =\\boldsymbol{T}(\\tilde{s}_{k})\\otimes \\bfR_{k}^{\\top}\\,,\\\\\n \\mathbf{B}_{k} &= -\\big(\\boldsymbol{T}(\\tilde{s}_{k})\\otimes \\bfR_{k}^{\\top}\\big)\\vectr(\\mathbf{D}_{k})\\,.\n \\end{split}\n\\end{equation}\n\\end{remark}\n\n\n\\subsection{Vehicle Dynamics and Control \\label{sec:ctrl}}\n\nThe proposed architecture and the model on the evolution of coefficients using the arc-length-based parametric representation can be applied to any vehicle model with a reasonable control algorithm that requires lane estimation and information extraction. To demonstrate the workflow, in this part we provide an example based on the model derived in \\cite{Wubing_ND_2022} using the path-following controller proposed in \\cite{Wubing_LC_TIV_2022}.\n\n\\subsubsection{Dynamics and Transformation on States\\label{sec:dyn_estimation_states}}\n\nIn general, vehicle dynamics describe the evolution of absolute position (e.g., $x_{\\rm Q},\\, y_{\\rm Q}$) and orientation (heading angle $\\psi$) in the earth-fixed frame $\\cFE$. 
They have different levels of complexity based on fidelity.\nHere, we consider the model derived in \\cite{Wubing_ND_2022} on the camera location point $Q$, that is\n\\begin{equation}\\label{eqn:EOM_point_Q}\n\\begin{split}\n\\dot{x}_{\\rm Q} &= V\\,\\cos\\psi-d\\,\\dfrac{V}{l}\\tan\\gamma \\sin\\psi\\, , \\\\\n\\dot{y}_{\\rm Q} &= V\\,\\sin\\psi+d\\,\\dfrac{V}{l}\\tan\\gamma \\cos\\psi\\, , \\\\\n\\dot{\\psi} &= \\dfrac{V}{l}\\,\\tan\\gamma\\, ,\n\\end{split}\n\\end{equation}\nwhere $x_{\\rm Q}$, $y_{\\rm Q}$, ${\\psi}$, $l$ and $d$ follow the same notation introduced in Section~\\ref{sec:perception}, $V$ is the constant longitudinal speed, and $\\gamma$ is the steering angle.\n\nTo facilitate control design, the vehicle dynamics \\eqref{eqn:EOM_point_Q} on absolute position ($x_{\\rm Q},\\, y_{\\rm Q}$) and orientation (heading angle $\\psi$) expressed in the earth-fixed frame $\\cFE$ can be transformed to dynamics on relative position and orientation with respect to the path. Such a transformation allows us to obtain the evolution of the arc length position, lateral deviation and relative heading with respect to lanes perceived in the camera. 
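The kinematic model above can also be integrated numerically; the following is a minimal forward-Euler sketch (Python). The function name and step size are illustrative; the default parameter values follow the simulation parameter table used later in the paper.

```python
import numpy as np

def step_Q(state, gamma, V=20.0, l=2.57, d=2.0, T=0.05):
    """One forward-Euler step of the dynamics of camera point Q;
    defaults follow the simulation parameter table (illustrative)."""
    x, y, psi = state
    w = (V / l) * np.tan(gamma)  # yaw rate induced by steering angle gamma
    x_new = x + (V * np.cos(psi) - d * w * np.sin(psi)) * T
    y_new = y + (V * np.sin(psi) + d * w * np.cos(psi)) * T
    return np.array([x_new, y_new, psi + w * T])
```

With zero steering the point moves straight ahead by $V\,T$ per step, e.g. `step_Q(np.zeros(3), 0.0)` returns `[1.0, 0.0, 0.0]` for $V=20$ [m/s] and $T=0.05$ [s].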
In this paper, we choose the following relative position and orientation as new states: i) the arc length $s_{\\Omega}$ that the vehicle has travelled along the lane; ii) the observed distance $\\varepsilon_{\\Omega}$ that characterizes the length of vector $\\pvec{\\Omega Q}$ (cf.~Fig.~\\ref{fig:camera_control}), whose sign is positive when $Q$ is on the left side of the path;\nand iii) the relative heading angle\n\\begin{align}\\label{eqn:rel_heading_def}\n \\theta_{\\Omega} &= \\psi-\\alpha_{\\Omega}\\,,\n\\end{align}\nwith respect to point $\\Omega$ on the path.\nThe details on the state transformation are provided in Appendix~\\ref{append:coord_transf}, and the resulting transformed relative dynamics are\n\\begin{equation}\\label{eqn:transf_xy_se_diff_Q}\n \\begin{split}\n \\dot{s}_{\\Omega} &= \\dfrac{ V }{\\sin(\\delta-\\theta_{\\Omega})}\\big(\\sin\\delta+\\dfrac{d}{l}\\cos\\delta \\tan\\gamma\\big)\\\\\n &+\\dfrac{|\\varepsilon_{\\Omega}|\\cos\\big(\\delta-\\sign(\\varepsilon_{\\Omega})\\,\\delta\\big)}{\\sin(\\delta-\\theta_{\\Omega})}\\dfrac{V}{ l}\\tan\\gamma\\,,\\\\\n %\n \\dot{\\varepsilon}_{\\Omega} &=\\dfrac{ V }{\\sin(\\delta-\\theta_{\\Omega})}\\big(\\sin\\theta_{\\Omega}+\\dfrac{d}{l}\\cos\\theta_{\\Omega} \\tan\\gamma\\big)\\\\\n &+\\dfrac{|\\varepsilon_{\\Omega}|\\cos(\\delta-\\sign(\\varepsilon_{\\Omega})\\,\\theta_{\\Omega})}{\\sin(\\delta-\\theta_{\\Omega})}\\dfrac{V}{l}\\tan\\gamma\\,,\\\\\n %\n \\dot{\\theta}_{\\Omega} &=-\\dfrac{ V\\, \\kappa_{\\Omega}}{\\sin(\\delta-\\theta_{\\Omega})}\\big(\\sin\\delta+\\dfrac{d}{l}\\cos\\delta \\tan\\gamma\\big)\\\\\n &+\\Big(1-\\dfrac{\\kappa_{\\Omega} |\\varepsilon_{\\Omega}|\\cos\\big(\\delta-\\sign(\\varepsilon_{\\Omega})\\,\\delta\\big)}{\\sin(\\delta-\\theta_{\\Omega})}\\Big)\\dfrac{V}{l}\\tan\\gamma\\,.\n \\end{split}\n\\end{equation}\n\n\n\\subsubsection{Control}\n\nThe objective of a path-following controller is to generate desired steering angle $\\gamma_{\\rm des}$ such that point Q can 
follow the given path. In the literature, there are many available controllers demonstrated to be effective in different scenarios. In this part we use the nonlinear controller proposed in \\cite{Wubing_LC_TIV_2022} as an example since cameras are typically mounted at the front of the vehicle.\nWe highlight the key ideas of this controller, and refer readers to \\cite{Wubing_LC_TIV_2022} for more details on the design.\n\nWith the assumption that the steering angle $\\gamma$ can track any desired value $\\gamma_{\\rm des}$, that is, $\\gamma = \\gamma_{\\rm des}$,\nthe path-following controller is\n\\begin{align}\\label{eqn:lateral_controller}\n \\gamma_{\\rm des} & = \\gamma_{\\rm ff} + \\gamma_{\\rm fb}\\,,\n\\end{align}\nwhich consists of a feedforward control law\n\\begin{align}\n \\gamma_{\\rm ff} &= \\arctan \\dfrac{l\\,\\kappa_{\\rm D}}{\\sqrt{1-(d\\,\\kappa_{\\rm D})^{2}}}\\,, \\label{eqn:steer_controller_ff_sum}\n\\end{align}\nand a feedback control law\n\\begin{align}\\label{eqn:steer_controller_fb_sum}\n \\gamma_{\\rm fb} &= \\gamma_{\\rm sat}\\cdot g \\Big( \\tfrac{k_{1}}{\\gamma_{\\rm sat}}\\big(\\theta_{\\rm D}-\\theta_{0}+\\arctan (k_{2}\\,\\varepsilon_{\\rm D})\\big)\\Big)\\,.\n\\end{align}\nHere, $\\kappa_{\\rm D}$ is the road curvature at point $D$, $\\theta_{\\rm D}$ is the relative heading angle with respect to point $D$ (cf.~\\eqref{eqn:rel_heading_def}), and ${\\varepsilon_{\\rm D}=-\\eta_{\\rm D}}$ is the $\\eta$-deviation of point $D$, where $\\eta_{\\rm D}$ indicates the $\\eta$-coordinate of point $D$ in frame $\\cFB$. 
In other words, $\\varepsilon_{\\rm D}$ characterizes the length of vector $\\pvec{D Q}$ (cf.~Fig.~\\ref{fig:camera_control}), and is positive when $\\pvec{D Q}$ points towards the positive $\\eta$-axis.\nAlso,\n\\begin{align}\\label{eqn:theta0_des}\n \\theta_{0} & = -\\arcsin(d\\,\\kappa_{\\rm D})\\,\n\\end{align}\nis the desired yaw angle error, ($k_{1}$, $k_{2}$) are tunable control gains, $\\gamma_{\\rm sat}$ is the maximum allowable steering angle, and $g (x)$ denotes the wrapper function\n\\begin{equation}\n g(x) =\\dfrac{2}{\\pi}\\arctan \\Big(\\dfrac{\\pi}{2} x\\Big)\\,. \\label{eqn:satfunction}\n\\end{equation}\nThe feedforward control essentially provides the estimated steering angle to handle a given road curvature $\\kappa_{\\rm D}$, whereas the feedback control makes corrections based on the lateral deviation $\\varepsilon_{\\rm D}$ and the yaw angle error $\\theta_{\\rm D}-\\theta_{0}$.\n\nThe controller (\\ref{eqn:lateral_controller}-\\ref{eqn:satfunction}) relies on information about, or defined with respect to, point $D$. We remark that point $D$ is neither the closest on-path point to $Q$, as used in \\cite{Wubing_LC_TIV_2022}, nor the closest observable on-path point (point $\\Omega$) to $Q$ in the FOV. One can verify that when the road curvature is constant (i.e., $\\kappa_{\\rm D}=\\kappa^{\\ast}$), the closed-loop system (\\ref{eqn:transf_xy_se_diff_Q}-\\ref{eqn:satfunction}) possesses the desired equilibrium\n\\begin{align}\\label{eqn:equilb}\n s_{\\Omega}^{\\ast}=\\dfrac{V\\,t}{\\sqrt{1-(d\\,\\kappa^{\\ast})^{2}}}\\,,\\;\n \\varepsilon_{\\Omega}^{\\ast}=0\\,,\\;\n \\theta_{\\Omega}^{\\ast}=-\\arcsin(d\\,\\kappa^{\\ast})\\,,\n\\end{align}\nwhich can be stabilized with properly chosen gains $k_{1}$ and $k_{2}$. 
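As a numerical illustration, the feedforward-plus-feedback law above can be sketched as follows (Python). The default gains follow the simulation parameter table used later; setting $\gamma_{\rm sat}$ equal to the physical limit of 30 degrees is an assumption of this sketch.

```python
import numpy as np

def steering_command(kappa_D, theta_D, eps_D,
                     l=2.57, d=2.0, k1=-2.57 / 2.0, k2=0.02,
                     gamma_sat=np.radians(30)):
    """Desired steering angle: feedforward on road curvature plus
    feedback on yaw-angle error and lateral deviation."""
    g = lambda x: (2.0 / np.pi) * np.arctan(0.5 * np.pi * x)  # wrapper g(x)
    gamma_ff = np.arctan(l * kappa_D / np.sqrt(1.0 - (d * kappa_D) ** 2))
    theta_0 = -np.arcsin(d * kappa_D)  # desired yaw angle error
    gamma_fb = gamma_sat * g((k1 / gamma_sat)
                             * (theta_D - theta_0 + np.arctan(k2 * eps_D)))
    return gamma_ff + gamma_fb
```

At the desired equilibrium the feedback term vanishes and only the feedforward term remains, e.g. `steering_command(0.0, 0.0, 0.0)` returns `0.0` on a straight road.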
The equilibrium \\eqref{eqn:equilb} represents the scenario where the vehicle follows the given path perfectly with no lateral deviations.\nTo extract the information ($\\kappa_{\\rm D}$, $\\theta_{\\rm D}$, $\\varepsilon_{\\rm D}$) from representation \\eqref{eqn:param_repr_body_frame}, we provide the following lemma.\n\n\n\\begin{lemma}\\label{thm:terms_from_param_repr}\n Given the parametric representation \\eqref{eqn:param_repr_body_frame} of the lane in frame $\\cFB$, the $\\eta$-deviation $\\varepsilon_{\\rm D}$, the relative heading angle $\\theta_{\\rm D}$, and the curvature $\\kappa_{\\rm D}$ at point $D$ are\n \\begin{equation}\\label{eqn:extract_ctrl_info_from_repr}\n \\begin{split}\n \\varepsilon_{\\rm D} &=-\\hat{\\phi}_{0}\\,,\\,\\qquad\\qquad\\qquad\n \\theta_{\\rm D} = -\\arctan \\dfrac{\\hat{\\phi}_{1}}{\\bar{\\phi}_{1}}\\,,\\,\\\\\n \\kappa_{\\rm D} &= \\dfrac{2\\hat{\\phi}_{2}\\bar{\\phi}_{1}-2\\hat{\\phi}_{1}\\bar{\\phi}_{2}}{\\hat{\\phi}_{1}^{2}+\\bar{\\phi}_{1}^{2}}\\,.\n \\end{split}\n \\end{equation}\n\\end{lemma}\n\\begin{IEEEproof}\n Given \\eqref{eqn:param_repr_body_frame}, we obtain the $\\eta$-deviation $\\varepsilon$, the relative heading angle $\\theta$, and the road curvature $\\kappa$ at an arbitrary point $P$ with a preview distance $s_{\\rm P}$ as\n \\begin{equation}\\label{eqn:lemma_get_terms_from_repr}\n \\begin{split}\n \\varepsilon(s_{\\rm P}) &= -\\eta(s_{\\rm P}) \\,,\\qquad\n \\theta(s_{\\rm P}) = -\\arctan \\dfrac{\\eta'(s_{\\rm P})}{\\tau'(s_{\\rm P})}\\,,\\\\\n \\kappa(s_{\\rm P}) &= \\dfrac{\\eta''(s_{\\rm P})\\,\\tau'(s_{\\rm P})-\\eta'(s_{\\rm P})\\,\\tau''(s_{\\rm P})}{(\\eta'(s_{\\rm P}))^{2}+(\\tau'(s_{\\rm P}))^{2}}\\,,\n \\end{split}\n \\end{equation}\n where the derivatives $\\eta'$, $\\tau'$, $\\eta''$ and $\\tau''$ can be obtained by differentiating \\eqref{eqn:param_repr_body_frame} with respect to $s$.\n Point $D$ marks the origin of the arc length coordinate $s$. 
Thus, evaluating \\eqref{eqn:lemma_get_terms_from_repr} at $s_{\\rm P}=0$ yields \\eqref{eqn:extract_ctrl_info_from_repr} at point $D$.\n\\end{IEEEproof}\n\n\n\\subsubsection{Vehicle State Changes}\nIn the derivation of \\eqref{eqn:coeff_dyn_gen_equiv}, the vehicle state changes $\\tilde{x}_{k}$, $\\tilde{y}_{k}$, $\\tilde{\\psi}_{k}$, $\\tilde{\\tau}_{k}$, $\\tilde{\\eta}_{k}$ and $\\tilde{s}_{k}$ are assumed to be known at each step in Section~\\ref{sec:estmation_lane}. In practice, they can be obtained based on a nominal vehicle dynamic model and sensor data. In this part we take model \\eqref{eqn:EOM_point_Q} as an example, and assume the longitudinal speed $V$ and the yaw rate $\\omega=\\dot{\\psi}$ can be measured by onboard sensors. We rewrite model \\eqref{eqn:EOM_point_Q} into\n\\begin{equation}\\label{eqn:model_w_sensor_data}\n \\begin{split}\n \\dot{x}&= V\\cos\\psi -d\\, \\omega \\sin\\psi\\, ,\\\\\n \\dot{y}&= V\\sin\\psi+d\\, \\omega\\cos\\psi\\,, \\\\\n \\dot{\\psi}&=\\omega\\,,\n \\end{split}\n\\end{equation}\nbased on the measurable data $V$ and $\\omega$.\nApplying the Euler method to integrate \\eqref{eqn:model_w_sensor_data} at step $k$, we obtain\n\\begin{equation}\\label{eqn:changes_x_y_psi}\n \\begin{split}\n \\tilde{x}_{k}&= (V_{k}\\cos\\psi_{k} -d\\, \\omega_{k} \\sin\\psi_{k})T\\,,\\\\\n \\tilde{y}_{k}&= (V_{k}\\sin\\psi_{k}+d\\, \\omega_{k}\\cos\\psi_{k})T\\,,\\\\\n \\tilde{\\psi}_{k}&=\\omega_{k} T\\,,\n \\end{split}\n\\end{equation}\nwhere $T$ is the step size. Notice that $(\\tilde{x}_{k}, \\tilde{y}_{k})$ and $(\\tilde{\\tau}_{k}, \\tilde{\\eta}_{k})$ are the coordinates of the displacement vector from $Q_{k}$ to $Q_{k+1}$ expressed in the earth-fixed frame $\\cFE$ and the body-fixed frame $\\cFB[k]$, respectively. 
Thus, one can apply Theorem~\\ref{thm:gen_coord_transf} on coordinate transformation and obtain\n\\begin{align}\\label{eqn:changes_tau_eta_psi}\n\\tilde{\\tau}_{k}&= V_{k}T\\,,&\n\\tilde{\\eta}_{k}&= d\\, \\omega_{k} T\\,.\n\\end{align}\n\nNote that $\\tilde{s}_{k}$ is the arc length distance that point $D$ has travelled along the curve $\\mathcal{C}'$ from step $k$ to step $k+1$. Realizing that point $D$ coincides with point $\\Omega$ if $\\delta=\\frac{\\pi}{2}$, we can easily derive the evolution of $s_{\\rm D}$ from the relative dynamics \\eqref{eqn:transf_xy_se_diff_Q} that is transformed from \\eqref{eqn:EOM_point_Q}.\nBy setting $\\delta=\\frac{\\pi}{2}$ in \\eqref{eqn:transf_xy_se_diff_Q} and replacing point $\\Omega$ with point $D$, the evolution of $s_{\\rm D}$ becomes\n\\begin{equation}\n \\begin{split}\\label{eqn:transf_xy_se_diff_D}\n \\dot{s}_{\\rm D} &= \\dfrac{ V+\\omega\\,\\varepsilon_{\\rm D} }{\\cos\\theta_{\\rm D}}\\,,\n \\end{split}\n\\end{equation}\nwhere the yaw rate $\\omega=\\tfrac{V}{l}\\,\\tan\\gamma$ is utilized; cf.~\\eqref{eqn:model_w_sensor_data}.\nApplying Lemma~\\ref{thm:terms_from_param_repr} to the parametric representation \\eqref{eqn:param_repr_body_frame},\nthe $\\eta$-deviation $\\varepsilon_{\\rm D}$ and the relative heading angle $\\theta_{\\rm D}$ can be obtained at step $k$. Thus, integrating \\eqref{eqn:transf_xy_se_diff_D} with the Euler method yields\n\\begin{equation}\\label{eqn:changes_s}\n \\tilde{s}_{k}=\\dfrac{V_{k}+\\omega_{k}\\,\\varepsilon_{\\textrm{D}_{k}}}{\\cos(\\theta_{\\textrm{D}_{k}})}\\,T\\,.\n\\end{equation}\n\n\n\n\\section{Results \\label{sec:res}}\nIn this section, simulations of the whole process explained in Section~\\ref{sec:veh_ctrl_setup} are performed to demonstrate the use of the arc-length-based parametric representation in lane estimation and vehicle control. In Section~\\ref{sec:sim_setup}, we start with the details on how we set up the experiment scenarios and simulate lane perception in camera-based control. 
In Section~\\ref{sec:sim_res}, simulations with or without using model \\eqref{eqn:coeff_dyn_gen_equiv} for prediction are conducted for the path-following control problem, and the results indicate the efficacy and large potential of \\eqref{eqn:coeff_dyn_gen_equiv} in facilitating lane estimation in vehicle control.\n\n\n\\subsection{Simulation Setup\\label{sec:sim_setup}}\n\nThe model \\eqref{eqn:coeff_dyn_gen_equiv} integrated with the controller (\\ref{eqn:lateral_controller}-\\ref{eqn:satfunction}) is applicable to lane estimation in path-following problems where the path can have any reasonable shape. Remark~\\ref{remark:curv_repr_gen} indicates that a path can be fully described by $\\kappa(s)$. Also, if $\\kappa(s)$ has an analytical form, we can derive the coefficients for the polynomial function representation \\eqref{eqn:poly_func_repr_D} given the vehicle state with respect to the path. These coefficients can be used as: i) the output of perception algorithms at perception updating instants; and ii) the true values to compare against the predicted values when the perception outcome is not available. 
Therefore, we consider the closed path used in \\cite{Wubing_ND_2022} and derive the coefficients for the polynomial function representation in this part.\nThe curvature of the path is given as\n\\begin{align}\\label{eqn:curv_s_func}\n \\kappa(s) &= \\dfrac{\\kappa_{\\max}}{2}\\left(1-\\cos\\bigg(\\dfrac{2\\pi}{s_{\\rm T}}s\\bigg)\\right)\\ ,\n\\end{align}\nwhere $\\kappa_{\\max}$ is the maximum curvature along the path, and $s_{\\rm T}$ is the arc length period.\nThis path has $N$ corners and perimeter $Ns_{\\rm T}$ if\n\\begin{equation}\\label{eqn:curv_close_cond}\n \\kappa_{\\max}\\,s_{\\rm T}=\\dfrac{4\\pi}{N}\\ , \\quad N=2, 3, \\ldots \\ .\n\\end{equation}\n\nAs discussed in Section~\\ref{sec:perception}, perception algorithms provide representation \\eqref{eqn:poly_func_repr_D} of the lane in real time, which can be viewed as the expansion of the Taylor approximation \\eqref{eqn:poly_func_repr_omega} about point $\\Omega$ in the FOV.\nTo simulate perception, we need to derive the coefficients of the function representation \\eqref{eqn:poly_func_repr_omega} in frame $\\cFB$ given the vehicle states and the path information \\eqref{eqn:curv_s_func}.\n\n\\begin{lemma}\\label{lemma:gen_curve_func_repr}\n Given the current vehicle relative states ($s_{\\Omega}$, $\\varepsilon_{\\Omega}$ and $\\theta_{\\Omega}$) with respect to the path, and the half angle $\\delta$ of the camera FOV, the lane segment $\\mathcal{C}$ can be approximated as the function representation \\eqref{eqn:poly_func_repr_omega} in frame $\\cFB$ by applying a Taylor series about point $\\Omega$, where $\\tau_{\\Omega}:=\\tau(s_{\\Omega})=|\\varepsilon_{\\Omega}| \\cos\\delta$ and\n \\begin{equation}\\label{eqn:gen_curv_func_repr_coeff}\n \\begin{split}\n \\varphi_{0}(\\tau_{\\Omega}) &= -\\varepsilon_{\\Omega}\\sin\\delta\\,, \\quad\n \\varphi_{1}(\\tau_{\\Omega}) = -\\tan\\theta_{\\Omega}\\,, \\\\\n %\n \\varphi_{2}(\\tau_{\\Omega}) &= \\dfrac{\\kappa_{\\Omega}}{2\\cos^{3}\\theta_{\\Omega}}\\,,\\quad\n 
\\varphi_{3}(\\tau_{\\Omega}) = \\dfrac{\\kappa'_{\\Omega}-3\\kappa_{\\Omega}^{2}\\tan\\theta_{\\Omega}}{6\\cos^{4}\\theta_{\\Omega}}\\,,\\\\\n %\n \\varphi_{4}(\\tau_{\\Omega}) &= \\dfrac{5\\kappa_{\\Omega}^3}{8\\cos^{7}\\theta_{\\Omega}}\n +\\dfrac{\\kappa''_{\\Omega}-12 \\kappa_{\\Omega}^{3}-10\\kappa_{\\Omega}\\kappa'_{\\Omega}\\tan\\theta_{\\Omega} }{24\\cos^{5}\\theta_{\\Omega}} \\,,\\\\\n %\n \\varphi_{5}(\\tau_{\\Omega}) &=\n \\dfrac{ 7\\kappa_{\\Omega}^{2} (\\kappa'_{\\Omega}-\\kappa_{\\Omega}^{2}\\tan \\theta_{\\Omega}) }{8\\cos^{8}\\theta_{\\Omega}}\n +\\dfrac{\\kappa'''_{\\Omega}-86\\kappa_{\\Omega}^{2} \\kappa'_{\\Omega}}{120\\cos^{6}\\theta_{\\Omega}} \\\\\n &+\\dfrac{\\big(12\\kappa_{\\Omega}^{4}-3\\kappa_{\\Omega}\\kappa''_{\\Omega}-2(\\kappa'_{\\Omega})^{2}\\big)\\tan\\theta_{\\Omega}}{24\\cos^{6}\\theta_{\\Omega}}\\,.\n \\end{split}\n \\end{equation}\n\\end{lemma}\n\\begin{IEEEproof}\n See Appendix~\\ref{append:path_func_repr_body_frame}.\n\\end{IEEEproof}\n\n\n\\subsection{Simulation Results \\label{sec:sim_res}}\n\n\nIn this part, we simulate the whole process of perception, estimation and control explained in Section~\\ref{sec:veh_ctrl_setup} when the vehicle tries to follow the given path (\\ref{eqn:EOM_ref_path_ds}, \\ref{eqn:curv_s_func}) with camera-based perception. The controller (\\ref{eqn:lateral_controller}-\\ref{eqn:satfunction}) updates commands every $T$ seconds while perception provides the function representation \\eqref{eqn:poly_func_repr_D} every $T_{\\rm p}$ seconds. 
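Between perception updates, the simulation propagates the parametric coefficients by combining the estimated state changes with the coefficient evolution law derived earlier. A minimal Python sketch follows; it assumes the shift matrix $T(\tilde{s})$ has the polynomial re-expansion (Pascal-type) entries $\binom{n}{m}\,\tilde{s}^{\,n-m}$, which is consistent with shifting the polynomial basis but should be checked against the paper's own definition of $T$.

```python
import numpy as np
from math import comb

def shift_matrix(s_t, N):
    """Assumed form of T(s~): entries C(n, m) * s~^(n - m) for n >= m,
    re-expanding an N-th order polynomial about a point shifted by s~."""
    T = np.zeros((N + 1, N + 1))
    for m in range(N + 1):
        for n in range(m, N + 1):
            T[m, n] = comb(n, m) * s_t ** (n - m)
    return T

def predict_coeffs(phi, V, omega, eps_D, theta_D, T_step, d=2.0):
    """One prediction step of the coefficient evolution; phi is the
    2 x (N+1) coefficient matrix of the parametric lane representation."""
    N = phi.shape[1] - 1
    psi_t = omega * T_step                                  # heading change
    tau_t, eta_t = V * T_step, d * omega * T_step           # frame translation
    s_t = (V + omega * eps_D) / np.cos(theta_D) * T_step    # arc length shift
    R = np.array([[np.cos(psi_t), -np.sin(psi_t)],
                  [np.sin(psi_t),  np.cos(psi_t)]])
    D = np.zeros_like(phi)
    D[:, 0] = tau_t, eta_t
    return R.T @ (phi - D) @ shift_matrix(s_t, N).T
```

A simple sanity check: a straight lane aligned with the vehicle, with coefficient rows $(0,1,0)$ for $\tau$ and $(0,0,0)$ for $\eta$, is a fixed point of the prediction when the vehicle drives straight along it.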
To demonstrate the efficacy of the proposed estimation algorithm, we also assume: i) $T_{\\rm p}$ is larger than $T$; ii) when the representation \\eqref{eqn:poly_func_repr_D} is not updated by perception, the controller may use the model \\eqref{eqn:coeff_dyn_gen_equiv} to update the parametric representation with the estimated state changes (\\ref{eqn:changes_x_y_psi}, \\ref{eqn:changes_tau_eta_psi}, \\ref{eqn:changes_s}) at each step; iii) the measurement \\eqref{eqn:poly_func_repr_D} is free of noise such that prediction using model \\eqref{eqn:coeff_dyn_gen_equiv} can be easily compared against true values without noise effects; and iv) the true values about coefficients of representation are obtained by calculating $\\boldsymbol{\\varphi}(\\tau_{\\Omega})$ using Lemma~\\ref{lemma:gen_curve_func_repr} at every control period with vehicle states ($s_{\\rm \\Omega}$, $\\varepsilon_{\\Omega}$, $\\theta_{\\Omega}$). The initial conditions are set to $s_{\\rm \\Omega}=0$ [m], $\\varepsilon_{\\Omega}=0.1$ [m], $\\theta_{\\Omega}=0$ [deg], and the other parameters used in the simulations are provided in TABLE~\\ref{tab:params}.\n\n\\begin{table}[!t]\n\\begin{center}\n\\renewcommand{\\arraystretch}{1.3}\n\\rowcolors{1}{LightCyan}{LightMagenta}\n\\begin{tabular}{l|c|l}\n\\hline\\hline\n \\rowcolor{Gray} Parameter & Value & Description\\\\\n \\hline\n $l$ [m]& $2.57$ & wheelbase length\\\\\n $d$ [m]& $2$ & distance from Q to rear axle center\\\\\n $\\delta$ [deg]& $60$ & half angle of camera FOV\\\\\n $k_{1}$ [m\/s]& $-l\/d$ & control gain\\\\\n $k_{2}$ [m$^{-1}$]& $0.02$ & control gain\\\\\n $\\gamma_{\\max}$ [deg]& $30$ & physical steering angle limit\\\\\n $s_{\\rm T}$ [m] & $250$ & arc length period of the path\\\\\n $N$ [1] & $4$ & number of corners for closed path\\\\\n $\\kappa_{\\max}$ [m$^{-1}$] & $0.004\\pi$ & maximum curvature of the path\\\\\n $V$ [m\/s] & 20 & longitudinal speed \\\\\n\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Parameters 
used in the simulations. \\label{tab:params}}\n\\end{table}\n\n\nFig.~\\ref{fig:sim_50ms} shows the simulation results without using \\eqref{eqn:coeff_dyn_gen_equiv} to predict coefficients when $T=0.05$ [s] and $T_{\\rm p}=0.15$ [s]. That is, the controller outputs a new command at one step based on the updated lane representation from perception, and then holds this command for the following two steps. This is because when a new representation is not available, the outdated one is still used by the controller, which outputs the same command. This is a typical and simple solution when characterizing lanes using the function representation \\eqref{eqn:poly_func_repr_D} directly, since the evolution of coefficients in such a representation is not obtainable as explained in Section~\\ref{sec:veh_ctrl_setup}.\n\nIn panel (a), the dotted black curve denotes the closed path (\\ref{eqn:EOM_ref_path_ds}, \\ref{eqn:curv_s_func}), while the solid red curve represents the position of the vehicle at point $Q$. Panel (b) plots the lateral deviation $\\varepsilon_{\\Omega}$ using the blue curve, and the steering command $\\gamma_{\\rm des}$ using the red curve with the axis marked on the right. Panels (c, d, e) show the time profiles of the coefficients $\\hat{\\phi}_{0}$, $\\hat{\\phi}_{1}$ and $\\hat{\\phi}_{2}$, respectively, of the parametric representation \\eqref{eqn:param_repr_body_frame} when the vehicle moves along the path. The blue curves denote the measurement outputs of the perception algorithm, while the red curves indicate the true values of those coefficients. 
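For reference, the closed path in panel (a) can be reproduced from its curvature profile by integrating the heading angle along arc length; a short Python sketch is given below (the integration step size is illustrative, and the defaults follow the parameter table above).

```python
import numpy as np

def closed_path(kappa_max=0.004 * np.pi, s_T=250.0, N=4, ds=0.01):
    """Integrate x' = cos(alpha), y' = sin(alpha), alpha' = kappa(s)
    for the cosine-type curvature profile over N arc length periods."""
    s = np.arange(0.0, N * s_T, ds)
    kappa = 0.5 * kappa_max * (1.0 - np.cos(2.0 * np.pi * s / s_T))
    alpha = np.cumsum(kappa) * ds        # heading angle along the path
    x = np.cumsum(np.cos(alpha)) * ds
    y = np.cumsum(np.sin(alpha)) * ds
    return x, y, alpha
```

With the default values the total heading change is $2\pi$ and the endpoint returns (numerically) to the start, confirming the closure condition with $N=4$ corners.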
We remark that: i) only the coefficients up to the second order in the lateral direction are shown here since they contain more important information for lateral control than the other coefficients; ii) although this represents the scenario using the function representation, the coefficients are still transformed to facilitate comparison against the parametric representation; and iii) the time profiles of the other coefficients in the representation \\eqref{eqn:param_repr_body_frame} are similar and reveal similar results.\nFig.~\\ref{fig:sim_50ms}(a,b) indicate that the controller (\\ref{eqn:lateral_controller}-\\ref{eqn:theta0_des}) allows the vehicle to follow the given path with reasonable performance in this scenario. However, during the simulation we also observe that when $T_{\\rm p}\\ge 0.2$ [s], the vehicle using this controller is not able to follow the given path using the outdated lane information without prediction. This result implies a high requirement on lane perception latency, especially when measurements may also be corrupted by noise in practice.\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[scale=1.05]{Fig05.pdf}\\\\\n \\caption{Simulation results for camera-based lateral control without prediction on lane representation when $T=0.05$ [s] and $T_{\\rm p}=0.15$ [s]. (a) Vehicle position and the closed path (\\ref{eqn:EOM_ref_path_ds}, \\ref{eqn:curv_s_func}) in the ($x$, $y$)-plane. (b) Lateral deviation $\\varepsilon_{\\Omega}$ and steering command $\\gamma_{\\rm des}$. (c, d, e) Time profiles of coefficients in the parametric representation \\eqref{eqn:param_repr_body_frame} while the vehicle moves along the path. \\label{fig:sim_50ms}}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[scale=1.05]{Fig06.pdf}\\\\\n \\caption{Simulation results for camera-based lateral control using \\eqref{eqn:coeff_dyn_gen_equiv} for estimation on lane representation when $T=0.05$ [s] and $T_{\\rm p}=2$ [s]. 
(a) Vehicle position and the closed path (\\ref{eqn:EOM_ref_path_ds}, \\ref{eqn:curv_s_func}) in the ($x$, $y$)-plane. (b) Lateral deviation $\\varepsilon_{\\Omega}$ and steering command $\\gamma_{\\rm des}$. (c, d, e) Time profiles of coefficients in the parametric representation \\eqref{eqn:param_repr_body_frame} while the vehicle moves along the path. \\label{fig:sim_2s}}\n\\end{figure}\n\nFig.~\\ref{fig:sim_2s} shows the simulation results when the function representation \\eqref{eqn:poly_func_repr_D} is transformed to the parametric representation and \\eqref{eqn:coeff_dyn_gen_equiv} is used to predict the coefficients at every control step in the absence of measurements. Here, the control period $T$ is still $0.05$ [s], but the perception period $T_{\\rm p}$ is set to $2$ [s] to demonstrate the effectiveness. Besides the same layout and color scheme as those used in Fig.~\\ref{fig:sim_50ms}, panels (c,d,e) use blue dots to represent new measurements of coefficients from the perception algorithm, and also zoomed-in plots to highlight the errors on coefficients between the predicted values and the true values. Fig.~\\ref{fig:sim_2s}(a,b) indicate that with such a low perception updating rate, the controller still achieves reasonable performance in following the given path by predicting the new representation using \\eqref{eqn:coeff_dyn_gen_equiv}. Fig.~\\ref{fig:sim_2s}(c,d,e) show that the prediction errors on the coefficients of higher orders are much smaller than those of lower orders. This is natural because errors on higher orders lead to less accurate prediction for the future than errors on lower orders. The zoomed-in plots illustrate that the estimation errors in the coefficients $\\hat{\\phi}_{0}$, $\\hat{\\phi}_{1}$ and $\\hat{\\phi}_{2}$ become noticeable after predicting with \\eqref{eqn:coeff_dyn_gen_equiv} for about 0.3, 1.2, and 1.6 [s], respectively. 
The error in $\\hat{\\phi}_{0}$ leads to a noticeable offset in the lateral deviation $\\varepsilon_{\\Omega}$ of around $0.05$ [m], as depicted in Fig.~\\ref{fig:sim_2s}(b).\nOne may notice that the vehicle travels 40 [m] during the $2$ [s] prediction period since $V=20$ [m\/s]. The function representation \\eqref{eqn:poly_func_repr_D} is the Taylor approximation about point $\\Omega$ at the perception updating instant. The approximation error gradually becomes noticeable while the vehicle moves forward. Also, prediction with \\eqref{eqn:coeff_dyn_gen_equiv} requires the estimated state changes (\\ref{eqn:changes_tau_eta_psi}, \\ref{eqn:changes_s}) using Euler integration, which accumulates errors. Although the errors in the Taylor approximation cannot be mitigated, the accumulated errors in the Euler integration can be reduced by increasing the estimation frequency.\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[scale=1.05]{Fig07.pdf}\\\\\n \\caption{Comparison results for camera-based lateral control using \\eqref{eqn:coeff_dyn_gen_equiv} for estimation on lane representation when $T_{\\rm p}=2$ [s]. (a, c) $T=0.02$ [s]. (b, d) $T=0.01$ [s]. 
\\label{fig:sim_T_diff}}\n\\end{figure}\n\nFig.~\\ref{fig:sim_T_diff} shows the comparison results for the same scenario as that for Fig.~\\ref{fig:sim_2s} when the perception updating period $T_{\\rm p}$ is still $2$ [s], but the control updating period $T$ is reduced to $0.02$ [s] and $0.01$ [s] for panels (a,c) and (b,d), respectively.\nFig.~\\ref{fig:sim_2s}(c) and Fig.~\\ref{fig:sim_T_diff}(c,d) show that the error of $\\hat{\\phi}_{0}$ decreases as the control updating period $T$ decreases, which leads to a decreased offset in the lateral deviation $\\varepsilon_{\\Omega}$ depicted in Fig.~\\ref{fig:sim_2s}(b) and Fig.~\\ref{fig:sim_T_diff}(a,b).\n\nWe remark that in practice the perception updating period is shorter than $2$ [s], but the simulations shown here demonstrate the large potential of estimating lanes using the parametric representation with \\eqref{eqn:coeff_dyn_gen_equiv}. This prediction can be used in the time-update step of a Kalman filter to get more accurate estimates, or as a pure prediction when measurements are temporarily not available. When noise and model mismatches appear in practice, performance degradation is inevitable. However, it is expected that prediction within $0.5\\sim 1$ [s] can still provide reasonable performance since the integration horizon is short and the vehicle's travelling distance is limited.\n\n\\section{Conclusion \\label{sec:conclusion}}\n\nThis paper revisited the fundamental mathematics on approximating curves as polynomial functions or parametric curves. It is shown that arc-length-based parametric representations possess the desirable properties of preserving their form under coordinate transformation and parameter shifting. These properties have the potential to facilitate lane estimation for vehicle control since lanes are characterized as curves expressed in the vehicle body-fixed frame by perception algorithms. As the vehicle moves, the body-fixed frame is translating and rotating. 
Thus, we proposed a new architecture using the parametric representation in lane estimation and control. To ensure compatibility with most current platforms, perception algorithms are still assumed to output the coefficients of the polynomial function representation. We derived the change of coefficients to transform the polynomial function representation to the arc-length-based parametric representation, and the evolution of coefficients using the parametric representation. This evolution reveals an intrinsic linear relationship as the vehicle moves, which can be easily used for prediction or integrated with Kalman filters. We also set up a framework to simulate the whole process, including perception, estimation and control, for camera-based vehicle control problems. Simulation results indicate that controllers relying on predicted lanes using parametric representations can still achieve reasonably good performance at extremely low perception updating rates. These results are practically important for improving control performance with reduced perception updating rates, and for obtaining better estimates when the coefficients of representations are corrupted by noise. Future research directions may include lane estimation in the presence of noise and model mismatches, and field implementation of the proposed architecture and estimation model.\n\n\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe study of the interaction of Alfv\\'en waves (AWs) \nwith plasma inhomogeneities is \nimportant for\nboth astrophysical and laboratory plasmas.\nThis is because both AWs and inhomogeneities \noften coexist in these physical systems.\nAWs are believed to be\ngood candidates for plasma heating, energy and momentum transport.\nOn the one hand, in many physical situations AWs are easily excitable \n(e.g. 
through convective motion of the solar interior)\nand thus they are present in a number of astrophysical systems.\nOn the other hand, these waves dissipate due to \nthe shear viscosity as opposed to\ncompressive fast and slow magnetosonic waves which dissipate due to the\nbulk viscosity.\nIn astrophysical plasmas shear viscosity is extremely small\nas compared to bulk viscosity. Hence,\nAWs are notoriously difficult to dissipate.\nOne of the possibilities to improve AW dissipation is to introduce progressively\ndecreasing spatial scales, $\\delta l \\to 0$, into the system (recall that the \nclassical dissipation is $\\propto \\delta l^{-2}$). \nHeyvaerts and Priest have proposed (in the astrophysical context) one such \nmechanism, called \nAW phase mixing \\citep{hp83}. It occurs when a linearly polarised\nAW propagates in the plasma with a \none dimensional density inhomogeneity transverse \nto the uniform magnetic field.\nIn such a situation the initially plane AW front is progressively \ndistorted because of different Alfv\\'en speeds across the field. \nThis creates progressively stronger gradients\nacross the field (in the inhomogeneous regions \nthe transverse scale collapses to zero),\nand thus in the case of finite resistivity, dissipation is greatly enhanced.\nHence, it is believed that phase mixing can provide significant plasma\nheating. 
Phase mixing could also be important for laboratory plasmas.\n\\citet{hc74} proposed the heating of collisionless plasma by \nutilising spatial phase mixing by shear Alfv\\'en wave resonance and discussed potential \napplications to toroidal plasma.\nA significant amount of work has been done in the context of heating open\nmagnetic structures in the solar \ncorona \\citep{hp83,nph86,p91,nrm97,dmha00,bank00,tan01,hbw02,tn02,tna02,tnr03}.\nAll phase mixing studies so far have been performed in the MHD approximation.\nHowever, since the transverse scales in the AW collapse progressively to zero,\nthe MHD approximation is inevitably violated. \nThis happens when the transverse scale approaches\nthe ion gyro-radius $r_i$ and then the electron gyro-radius $r_e$.\nThus, we propose to study the phase mixing effect in the kinetic regime, i.e.\nwe go beyond the MHD approximation. Preliminary results were \nreported in \\citet{tss05}, where\nwe discovered a new mechanism for the acceleration of electrons\ndue to wave-particle interactions. This has important implications\nfor various space and laboratory plasmas, e.g. the \ncoronal heating problem and the acceleration of the solar wind.\nIn this paper we present a full analysis of the discovered effect,\nincluding an analysis of the broadening of the ion\ndistribution function due to the presence of Alfv\\'en waves and the generation of \ncompressive perturbations due to both\nweak non-linearity and plasma density inhomogeneity.\n\n\n\\section{The model}\nWe used a 2D3V fully relativistic, electromagnetic, particle-in-cell (PIC)\ncode with MPI parallelisation, modified from the 3D3V TRISTAN code \\citep{b93}.\nThe system size is $L_x=5000 \\Delta$ and $L_y=200 \\Delta$, where\n$\\Delta(=1.0)$ is the grid size. Periodic boundary conditions in the\n$x$- and $y$-directions are imposed on particles and fields. There are about\n478 million electrons and ions in the simulation. 
The average number of\nparticles per cell is 100 in low density regions (see below). \nThe thermal velocity of electrons is $v_{th,e}=0.1c$\nand for ions is $v_{th,i}=0.025c$.\nThe ion to electron\nmass ratio is $m_i\/m_e=16$. The time step is $\\omega_{pe} \\Delta t=0.05$. Here\n$\\omega_{pe}$ is the electron plasma frequency.\nThe Debye length is $v_{th,e}\/\\omega_{pe}=1.0$. The electron skin depth \nis $c\/\\omega_{pe}=10 \\Delta$, while the ion skin depth is $c\/\\omega_{pi}=40 \\Delta$.\nHere $\\omega_{pi}$ is the ion plasma frequency.\nThe electron Larmor radius is $v_{th,e}\/\\omega_{ce}=1.0 \\Delta$, while\nthe same for ions is $v_{th,i}\/\\omega_{ci}=4.0 \\Delta$.\nThe external uniform magnetic field, $B_0(=1.25)$,\nis in the $x$-direction and the initial\nelectric field is zero. \nThe ratio of electron cyclotron frequency to the electron plasma\nfrequency is $\\omega_{ce}\/\\omega_{pe}=1.0$, while the same for ions is\n$\\omega_{ci}\/\\omega_{pi}=0.25$. The latter ratio is essentially $V_A\/c$ -- the Alfv\\'en\nspeed normalised to the speed of light. Plasma $\\beta=2(\\omega_{pe}\/\\omega_{ce})^2(v_{th,e}\/c)^2=0.02$.\nHere all plasma parameters are quoted far away from the density \ninhomogeneity region. 
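The derived quantities quoted above all follow from the basic inputs; as a quick cross-check, they can be recomputed in a few lines (a sketch using only the numbers stated in the text, with lengths in units of the grid size $\\Delta$; the variable names are ours):

```python
import math

# Cross-check of the derived plasma parameters quoted above
# (inputs are the numbers stated in the text; lengths in grid units Delta).
c_over_wpe = 10.0             # electron skin depth c/omega_pe
v_the, v_thi = 0.1, 0.025     # thermal velocities in units of c
mi_over_me = 16.0             # ion-to-electron mass ratio
wce_over_wpe = 1.0            # omega_ce / omega_pe

wpi_over_wpe = 1.0 / math.sqrt(mi_over_me)      # omega_pi in units of omega_pe
wci_over_wpe = wce_over_wpe / mi_over_me        # omega_ci in units of omega_pe

debye = v_the * c_over_wpe                      # v_the/omega_pe    -> 1.0
skin_i = c_over_wpe / wpi_over_wpe              # c/omega_pi        -> 40.0
larmor_e = v_the * c_over_wpe / wce_over_wpe    # v_the/omega_ce    -> 1.0
larmor_i = v_thi * c_over_wpe / wci_over_wpe    # v_thi/omega_ci    -> 4.0
va_over_c = wci_over_wpe / wpi_over_wpe         # omega_ci/omega_pi -> 0.25
beta = 2.0 * (1.0 / wce_over_wpe) ** 2 * v_the ** 2   # plasma beta -> 0.02
```

Every derived value reproduces the text: Debye length $1.0 \\Delta$, ion skin depth $40 \\Delta$, Larmor radii $1.0 \\Delta$ and $4.0 \\Delta$, $V_A\/c=0.25$, and $\\beta=0.02$.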
The dimensionless (normalised to some reference constant value of $n_0=100$ particles per cell) \nion and electron density inhomogeneity is described by\n\\begin{equation}\n {n_i(y)}=\n{n_e(y)}=1+3 \\exp\\left[-\\left(\\frac{y-100\\Delta}{50 \\Delta}\\right)^6\\right]\n\\equiv F(y).\n\\end{equation}\nThis means that in the central region (across the \n$y$-direction), the density is\nsmoothly enhanced by a factor of 4, and there are the \nstrongest density gradients having \na width of about ${50 \\Delta}$ around the \npoints $y=51.5 \\Delta$ and $y=148.5 \\Delta$.\nThe background temperature of ions and electrons, \nand their thermal velocities\nare varied accordingly\n\\begin{equation}\n{T_i(y)}\/{T_0}=\n{T_e(y)}\/{T_0}=F(y)^{-1},\n\\end{equation}\n\\begin{equation}\n {v_{th,i}}\/{v_{i0}}=\n{v_{th,e}}\/{v_{e0}}=F(y)^{-1\/2},\n\\end{equation}\nsuch that the thermal pressure remains constant. Since the background magnetic field\nalong the $x$-coordinate is also constant, the total pressure remains constant too.\nThen we impose a current of the following form\n\\begin{equation}\n{\\partial_t E_y}=-J_0\\sin(\\omega_d t)\\left(1-\\exp\\left[-(t\/t_0)^2\\right]\\right),\n\\end{equation}\n\\begin{equation}\n{\\partial_t E_z}=-J_0\\cos(\\omega_d t)\\left(1-\\exp\\left[-(t\/t_0)^2\\right]\\right).\n\\end{equation}\nHere $\\omega_d$ is the driving frequency which was fixed at $\\omega_d=0.3\\omega_{ci}$.\nThis ensures that no significant ion-cyclotron damping is present. Also,\n$\\partial_t$ denotes the time derivative.\n$t_0$ is the onset time of the driver, which was fixed at $50 \/\\omega_{pe}$\ni.e. $3.125 \/ \\omega_{ci}$. This means that the driver onset time is about 3 ion-cyclotron\nperiods. 
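The factor-of-4 enhancement, the constant-pressure scaling, and the quoted location of the steepest gradients all follow from Eq. (1) and can be verified numerically (a sketch; the function `F` implements Eq. (1), and the grid search for the gradient maximum is our own illustration):

```python
import math

# Density enhancement profile of Eq. (1) (y in grid units Delta).
def F(y):
    return 1.0 + 3.0 * math.exp(-(((y - 100.0) / 50.0) ** 6))

center = F(100.0)        # density enhanced by a factor of 4 in the middle
far = F(0.0)             # ~1 far from the inhomogeneity
# T(y)/T0 = 1/F(y) by Eq. (2), so the thermal pressure n*T is uniform in y:
pressure = F(130.0) * (1.0 / F(130.0))

# |dF/dy| is steepest near y ~ 148.5 (and, by symmetry, near y ~ 51.5):
ys = [100.0 + 0.1 * k for k in range(1000)]
y_steep = max(ys, key=lambda y: abs(F(y + 0.05) - F(y - 0.05)))
```

The numerical maximum of the gradient falls at $y \\approx 148.5 \\Delta$, matching the value stated above.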
Imposing such a current on the system results in the generation of\na left circularly polarised AW, which is driven at the left \nboundary of the simulation box and has a spatial width of $1 \\Delta$.\nThe initial amplitude of the current is such that \nthe relative AW amplitude is about 5 \\% of the background\n(in the low density homogeneous regions),\nthus the simulation is weakly non-linear.\n\n\\section{Main results}\n\nBecause no initial (perpendicular to the external magnetic field) velocity excitation\nwas imposed in addition to the above specified currents \n(cf. \\citet{tn02,dvl01,tt03,tt04}), \nthe circularly polarised AW excited (driven) at the left boundary\nis split into two circularly polarised AWs which travel in opposite directions. The dynamics of these\nwaves as well as other physical quantities is shown in Fig.~1\n(cf. Fig.~1 from \\citet{tss05}, where \n$B_z$ and $E_y$, the circularly polarised Alfv\\'en wave\ncomponents, were shown for three different times).\nA typical simulation, until $t=875 \/ \\omega_{ce}=54.69 \/ \\omega_{ci}$, takes about 8 days\nin parallel on 32 dual 2.4 GHz Xeon processors.\nIt can be seen from the figure \nthat because of the periodic boundary conditions, a circularly polarised\nAW that was travelling to the left reappeared \non the right side of the simulation box.\nThe dynamics of the AW ($B_z,E_y$) progresses in a similar manner as in \nMHD, i.e. it phase mixes \\citep{hp83}.\nIn other words, the middle region (in the $y$-coordinate), i.e. $50 \\Delta \\leq y \\leq 150 \\Delta$, travels \nslower because of the density enhancement (note that \n$V_A(y) \\propto 1\/\\sqrt{n_i(y)}$).\nThis obviously causes a \ndistortion of the initially plane wave front and the creation of strong gradients\nin the regions around $y \\approx 50$ and $150$.\nIn the MHD approximation, when the resistivity, $\\eta$, is finite, \nthe AW is strongly dissipated in these regions. 
This effectively means that the outer and inner parts of the\ntravelling AW are detached from each other and propagate independently.\nThis is why the effect is called phase mixing -- after a long time (in the case\nof developed phase mixing), \nphases in the wave front become effectively uncorrelated.\nBefore \\citet{tss05}, it was not clear what to expect from our PIC simulation. \nThe code is collisionless and there\nare no sources of dissipation in it (apart from the \npossibility of wave-particle interactions).\nIt is evident from Fig.~1 that in the developed stage of phase mixing \n($t=54.69 \/ \\omega_{ci}$), the AW front is substantially damped in the strongest density\ngradient regions.\nContrary to the AW ($B_z$ and $E_y$) dynamics we do not see any phase mixing for \n$B_y$ and $E_z$. The latter two behave similarly.\nIt should be noted that $E_z$ contains both driven (see Eq.(4) above) and \nnon-linearly generated fast magnetosonic wave components, with the former being dominant over the\nlatter. 
\nSince $B_x$ is not driven, and initially its perturbations are absent,\nwe see only non-linearly generated slow magnetosonic perturbations confined to the\nregions of strongest density gradients (around $y\\approx50$ and $150$).\nNote that these also have rapidly decaying amplitude.\nAlso, we gather from Fig.~1 that the density perturbation ($\\approx 10$\\%),\nwhich is also generated through the weak non-linearity, is present too.\nThese are propagating density oscillations with variation both in overall magnitude (perpendicular to the\nfigure plane) and across the $y$-coordinate, and they are mainly confined to the strongest density gradient regions\n(around $y\\approx 50$ and $150$).\nNote that the dynamics of $B_x,B_y$ and $B_z$ with \nappropriate geometrical switching (because in our\ngeometry the uniform magnetic field lies along the $x$-coordinate)\nis in qualitative agreement with \\citet{bank00} (cf. their Fig.~9).\nThe dynamics of the remaining $E_x$ component is treated separately in the next figure.\nIt is the inhomogeneity of the medium $n_i(y)$, $V_A(y)$, i.e.\n$\\partial \/ \\partial y \\not = 0$, that is the cause of the weakly non-linear coupling of the AWs to the \ncompressive modes (see \\citet{nrm97,tan01} for further details).\n\n\\begin{figure*}\n\\centering\n \\epsfig{file=fig1a.eps,width=6cm}\n \\epsfig{file=fig1b.eps,width=6cm}\n \\epsfig{file=fig1c.eps,width=6cm}\n \\epsfig{file=fig1d.eps,width=6cm}\n \\epsfig{file=fig1e.eps,width=6cm}\n \\epsfig{file=fig1f.eps,width=6cm}\n\\caption{Contour (intensity) plots of electromagnetic field components and electron density \nat time $t=54.69 \/ \\omega_{ci}$ (developed stage of phase mixing). \nThe phase mixed Alfv\\'en wave components are $B_z$ and $E_y$. \nThe excitation source is at the left\nboundary. Because of periodic boundary conditions, the \nleft-propagating AW re-appears from the \nright side of the simulation box. 
Note how the (initially plane) AW is stretched because of \ndifferences in local Alfv\\'en speed across the \n$y$-coordinate. Significant ($\\approx 10$\\%) \ndensity fluctuations can be seen.}\n\\end{figure*}\n\n\nIn Fig.~2 we address the question of \nwhere the AW energy went (as we saw strong\ndecay of AWs in the regions of strong density gradients).\nThus in Fig.~2 we plot $E_x$, the longitudinal\nelectrostatic field, and electron phase space ($V_x\/c$ vs. $x$ and $V_x\/c$ vs. $y$) for different \ntimes.\nIn the \nregions around $y \\approx 50$ and $150$, for later times, a significant electrostatic field\nis generated. This is the consequence of the stretching of the \nAW front in those regions\nbecause of the difference in local Alfv\\'en speed.\nIn the middle column of this figure we see that exactly in those regions\nwhere $E_x$ is generated, \nmany electrons are accelerated along the $x$-axis.\nWe also gather from the right column that for \nlater times ($t=54.69 \/ \\omega_{ci}$), the\nnumber of high velocity electrons is increased around the \nstrongest density gradient regions\n(around $y \\approx 50$ and $150$). Thus, the generated $E_x$ field is somewhat oblique\n(not exactly parallel to the external magnetic field).\nHence, we conclude that the \nenergy of the phase-mixed AW goes into the acceleration of electrons.\nLine plots of $E_x$ show that this electrostatic field is strongly damped,\ni.e. the energy is channelled to electrons via Landau damping.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig2.eps}\n\\caption{Left column: contour plots of the generated electrostatic field $E_x$\nnearly parallel to the\nexternal magnetic field at instances: \n$t=(0, 31.25, 54.69) \/ \\omega_{ci}$. Central column: $V_x\/c$ versus $x$ of electron phase\nspace at the same times. To reduce figure size, only electrons with $V_x > 0.15c$\nwere plotted. Right column: $V_x\/c$ versus $y$ of electron phase\nspace at the same times. 
Only electrons with $V_x > 0.12c$\nwere plotted (note the dip in the middle due to the \ndensity inhomogeneity across the $y$-coordinate).}\n\\end{figure*}\n\n\n\nIn Fig.~3 we investigate ion phase space\n($V_z\/c$ vs. $x$ and $V_z\/c$ vs. $y$) for different \ntimes. The reason for the choice of $V_z$ will become clear below.\nWe gather from this plot that in $V_z\/c$ vs. $x$ phase space, clear\npropagating oscillations are present (left column). These oscillations \nare of the incompressible, Alfv\\'enic \"kink\" type, \ni.e. for those values of $x$ where there is an increase of\ngreater positive velocity ions, there is also a \ncorresponding decrease of lower negative velocity ions.\nIn the $V_z\/c$ vs. $y$ plot we also see no clear acceleration of\nions.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig3.eps}\n\\caption{Left column: $V_z\/c$ versus $x$ of ion phase\nspace at instances: \n$t=(0, 31.25, 54.69) \/ \\omega_{ci}$. \nRight column: $V_z\/c$ versus $y$ of ion phase\nspace at the same times. 
Only ions with $V_z > 0.03c$\nare plotted (note the dip in the middle due to the \ndensity inhomogeneity across the $y$-coordinate).}\n\\end{figure*}\n\n\n\nWe next look at the distribution functions of electrons and ions\nbefore and after the phase mixing took place.\nIn Fig.~4 we plot the distribution functions of electrons and ions at $t=0$ and $t=54.69 \/ \\omega_{ci}$.\nNote that at $t=0$ the distribution functions do not look \npurely Maxwellian because \nthe temperature varies across the $y$-coordinate (to keep the total pressure\nconstant) and the graphs are produced for the entire simulation domain.\nAlso, note that for electrons $f(V_x)$ at $t=54.69 \/ \\omega_{ci}$\ndiffers substantially from its original form because of the aforementioned \nelectron acceleration.\nWe see that the number of electrons having velocities $V_x=\\pm (0.1-0.3)c$ is increased.\nNote that the acceleration of electrons takes place mostly along\nthe external magnetic field (along the $x$-coordinate). Thus, very little electron acceleration \noccurs for $V_y$ or $V_z$ (solid and dotted curves practically overlap each other).\nFor the ions the situation is different: we see broadening of the ion velocity distribution functions\nin $V_z$ and $V_y$ (that is why we have chosen to present the \n$V_z$ component of ion phase space in Fig.~3).\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig4.eps}\n\\caption{All three components of the distribution functions of electrons \n(top row) and ions (bottom row) \nat $t=0$ (dotted curves) and $t=54.69 \/ \\omega_{ci}$\n(solid curves), i.e. for the developed stage of phase mixing.}\n\\end{figure*}\n\n\n\nThe reason for this broadening of the ion distribution function becomes \nclear in Fig.~5 where we plot\nthe kinetic energy $x,y,z$ components ($\\propto V_{x,y,z}^2$) and the total kinetic energies for\nelectrons (top row) and ions (bottom row). 
For ions we gather that the $y$ and $z$ components of\nthe kinetic energy (bottom left figure) oscillate in anti-phase and their oscillatory part perfectly cancels \nout in the total energy (bottom right figure). Thus, the broadening of the $y$ and $z$ components of the\nion velocity distribution functions is due to the presence of AWs (the usual wave broadening, which is actually observed\ne.g. in the solar corona and solar wind, \\cite{bd71,sbnm95,bpb00}), and hence there is no ion acceleration present.\nNote that the $y$ and $z$ components, and hence the total kinetic energy of the ions, are monotonically increasing due to the continuous\nAW driving. Note that no significant motion of ions along the field is present.\nFor electrons, on the other hand, we see a \nsignificant increase of the \n$x$ component (along the magnetic field) of the kinetic energy,\nwhich is due to the new electron acceleration mechanism discovered by us (cf. Fig.~10).\nNote that for ions the $y$ component reaches lower values (than the \n$z$ component) because of the lower AW velocity in the middle \npart of the simulation domain.\n\n\\begin{figure}[]\n\\resizebox{\\hsize}{!}{\\includegraphics{fig5.eps}} \n\\caption{Top row: kinetic energies (calculated from the $x$ (solid), $y$ (dotted), $z$ (dashed) velocity (squared) components) of \nelectrons (left), and total kinetic energy for electrons (right)\nas a function of time. Bottom row: As above but for ions. The units on the $y$-axis are arbitrary.}\n\\end{figure}\n\n\n\nThe next step is to check whether the increase in electron velocities really comes from \nresonant wave-particle interactions. 
For this purpose in Fig.~6, left \npanel, we plot two snapshots of the\nAlfv\\'en wave $B_z(x,y=148)$ component at instances $t=54.69 \/ \\omega_{ci}$ (solid line)\nand $t=46.87 \/ \\omega_{ci}$ (dotted line).\nThe distance between the two upper leftmost peaks (which is the distance \ntravelled by the\nwave in the time span between the snapshots) \nis about $\\delta L=150\\Delta=15(c\/\\omega_{pe})$.\nThe time difference between the snapshots is $\\delta t=7.82 \/ \\omega_{ci}$.\nThus, the \nmeasured AW speed at the point of the strongest density gradient ($y=148$)\nis $V_A^M=\\delta L \/\\delta t=0.12c$. We can also work out \nthe Alfv\\'en speed from theory.\nIn the homogeneous low density region the Alfv\\'en speed was set to be\n$V_A(\\infty)=0.25 c$. From Eq.(1) it follows that for $y=148$ the \ndensity is increased by a factor of\n$2.37$, which means that the Alfv\\'en wave speed at this position is\n$V_A(148)=0.25\/\\sqrt{2.37}c=0.16c$.\nThe measured ($0.12c$) and calculated ($0.16c$) Alfv\\'en speeds in the inhomogeneous regions \ndo not coincide. This is probably because the \nAW front is decelerated (due to momentum conservation) \nas it passes on energy and momentum to the\nelectrons in the inhomogeneous regions (where electron acceleration takes place). \nHowever, this may not be the case if wave-particle interactions\nplay the same role as dissipation in MHD \\citep{sg69}:\nthen wave-particle interactions would result only in the decrease of the AW\namplitude (dissipation) and not in its deceleration.\nIf we compare these values to Fig.~4 (top left panel for $f(V_x)$), we deduce\nthat these are the\nvelocities $>0.12c$ above which the number of electrons with higher velocities\nis greatly increased. This deviation peaks at about $0.25c$ which,\nin fact, corresponds to the Alfv\\'en speed in the lower density regions.\nThis can be explained by the fact that the electron acceleration takes\nplace in wide regions (cf. 
Fig.~2) along and around $y \\approx 148$ (and $y \\approx 51$) -- hence\nthe spread in the accelerated velocities.\nIn Fig.~6 we also plot a visual fit curve (dashed line) to \nquantify the amplitude decay law for the AW (at $t=54.69 \/ \\omega_{ci}$)\nin the strongest density inhomogeneity region.\nThe fitted (dashed) curve is represented by $0.056 \\exp \\left[ -\n\\left({x}\/{1250}\\right)^3\\right]$.\nThere is a surprising similarity of this fit with the \nMHD approximation results.\n\\citet{hp83} found that for large times (developed phase mixing),\nin the case of a harmonic driver, the amplitude decay law\nis given by $\\propto \\exp \\left[ -\n\\left(\\frac{\\eta \\omega^2 V_A^{\\prime 2}}{6 V_A^{5}}\\right)x^3\\right]$, which \nis much faster \nthan the usual resistive dissipation\n$\\propto \\exp(-\\eta x)$. Here $V_A^{\\prime}$ is the derivative\nof the Alfv\\'en speed with respect to the $y$-coordinate.\nThe most interesting fact is that even in the kinetic approximation\nthe same $\\propto \\exp (-A x^3)$ law holds as in MHD.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig6.eps}\n\\caption{Left: two snapshots of the\nAlfv\\'en wave $B_z(x,y=148)$ component at instances $t=54.69 \/ \\omega_{ci}$ (solid line)\nand $t=46.87 \/ \\omega_{ci}$ (dotted line). The dashed line represents the fit\n$0.056 \\exp \\left[ -\\left({x}\/{1250}\\right)^3\\right]$. Center: $B_z(x,y=10)$ \n(low density homogeneous region), $B_z(x,y=100)$ (high density homogeneous region).\nNote the\ndifferences in amplitudes and propagation speeds, which are consistent with the equilibrium\ndensity and thus the Alfv\\'en speed dependence on the $y$-coordinate.}\n\\end{figure*}\n\n\n\nIn MHD, finite resistivity and\nAlfv\\'en speed non-uniformity are responsible for the\nenhanced dissipation via phase mixing.\nIn our PIC simulations (kinetic phase mixing), however, there are no collisions \nand hence no explicit dissipation. 
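The measured and theoretical speeds quoted above can be reproduced from the stated numbers alone (a sketch; the unit conversion assumes $\\omega_{ce}=\\omega_{pe}$ and $m_i\/m_e=16$ from the model section):

```python
import math

# Measured AW front speed from the two Fig. 6 snapshots (numbers from the text).
dL = 150.0            # distance travelled between snapshots, in grid units Delta
dt_wci = 7.82         # time difference, in units of 1/omega_ci
c_over_wpe = 10.0     # electron skin depth in grid units
wpe_over_wci = 16.0   # omega_ci = omega_ce/16 with omega_ce = omega_pe

dt_wpe = dt_wci * wpe_over_wci          # dt in units of 1/omega_pe
v_measured = dL / dt_wpe / c_over_wpe   # in units of c -> ~0.12

# Theoretical local Alfven speed from the density profile of Eq. (1):
F148 = 1.0 + 3.0 * math.exp(-(((148.0 - 100.0) / 50.0) ** 6))   # ~2.37
v_theory = 0.25 / math.sqrt(F148)                               # ~0.16
```

This recovers $V_A^M \\approx 0.12c$ and $V_A(148) \\approx 0.16c$, confirming the quoted discrepancy between the measured and calculated speeds.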
Thus, in our case,\nwave-particle interactions play the same role as \nresistivity $\\eta$ in the MHD phase mixing \\citep{sg69}.\nNo significant AW dissipation\nwas found away from the density inhomogeneity regions (Fig.~6 middle and right panels; note also the\ndifferences in amplitudes and propagation speeds, which are consistent with the imposed density and hence Alfv\\'en speed variation\nacross the $y$-coordinate).\nThis has the same explanation as in the case of MHD --\nit is in the regions of density inhomogeneity ($V_A^{\\prime}\\not=0$) \nthat the dissipation is greatly enhanced, while in the regions\nwhere $V_A^{\\prime}=0$ there is no substantial dissipation (apart from the \nclassical $\\propto \\exp(-\\eta x)$\none).\nIn the MHD approximation, the aforementioned amplitude decay law is derived\nfrom the diffusion equation, to which the MHD equations reduce for large times (developed\nphase mixing \\citep{tnr03}). It seems that the kinetic description \nleads to the same type of diffusion equation.\nIt is unclear at this stage, however, what physical quantity \nwould play the role of resistivity $\\eta$ (from the MHD approximation) in the\nkinetic regime. \n\n\n\n\\subsection{Homogeneous plasma case}\n\nIn order to clarify the broadening of the ion velocity distribution function,\nand also as a consistency check,\nwe performed an additional simulation in the case of homogeneous plasma.\nNow the density was fixed at 100 ions\/electrons per cell in the entire simulation\ndomain and hence the plasma temperature and thermal velocities were fixed too.\nIn such a setup no phase mixing should take place as the AW speed is uniform.\n\n\\begin{figure*}\n\\centering\n \\epsfig{file=fig7a.eps,width=6cm}\n \\epsfig{file=fig7b.eps,width=6cm}\n \\epsfig{file=fig7c.eps,width=6cm}\n \\epsfig{file=fig7d.eps,width=6cm}\n \\caption{As in Fig.~1 but for the case of homogeneous plasma density (no phase mixing).\n Note only non-zero (above noise level) components are plotted. 
There are no $B_x,E_x$ or density\n fluctuations present in this case.}\n\\end{figure*}\n\n\n\nIn Fig.~7 we plot the only non-zero (above noise level) components at $t=54.69 \/ \\omega_{ci}$, which\nare left circularly polarised AW fields: $B_z,B_y,E_z,E_y$. \nNote there are no $B_x,E_x$ or density fluctuations present in this case (cf. Fig.~1) as\nit is the plasma inhomogeneity that facilitates the coupling between AW and the compressive\nmodes.\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig8.eps}\n\\caption{As in Fig.~3 but for the case of homogeneous plasma density (no phase mixing).}\n\\end{figure*}\n\n\n\nIn Fig.~8 we plot ion phase space ($V_z\/c$ vs. $x$ and $V_z\/c$ vs. $y$) in the\nhomogeneous plasma case for different times (cf. Fig.~3). We gather from the graph that\npropagating, incompressible, Alfv\\'enic \"kink\" type oscillations are still present (left column),\nwhile no significant ion\nacceleration takes place (right column). This is better understood from Fig.~9 where we plot electron and ion \ndistribution functions for $t=0$ and $t=54.69 \/ \\omega_{ci}$ (as in Fig.~4) for the \nhomogeneous plasma case. \nThere are three noteworthy points: (i) no electron acceleration takes place because of\nthe absence of phase mixing; (ii) there is (as in the inhomogeneous case) broadening of the ion\nvelocity distribution functions (in $V_y$ and $V_z$) due to the present AW (wave broadening);\n(iii) The distribution now looks Maxwellian (cf. Fig.~4) because \nthe distribution function\nholds for the entire homogeneous region.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=12cm]{fig9.eps}\n\\caption{As in Fig.~4 but for the case of homogeneous plasma density (no phase mixing).}\n\\end{figure*}\n\n\nIn Fig.~10 we plot the\nkinetic energy $x,y,z$ components ($\\propto V_{x,y,z}^2$) and \ntotal kinetic energy for\nelectrons (top row) and ions (bottom row). 
For ions we see (as in the inhomogeneous case) \nthat the $y$ and $z$ components of\nthe kinetic energy (bottom left figure) oscillate in anti-phase and their oscillatory part perfectly cancels \nout in the total energy (bottom right figure). Thus, the broadening of the $y$ and $z$ components of the\nion velocity distribution functions is due to the presence of AWs, and, in turn, there is no ion acceleration.\nAgain the $y$ and $z$ components, and hence the \ntotal kinetic energy of the ions, are monotonically increasing due to the continuous\nAW driving. No significant motion of ions along the field is present.\nFor electrons we do not observe any acceleration due to the absence of phase mixing (cf. Fig.~5).\nNote that the $y$ component now attains the same values as the \n$z$ component (bottom left figure)\nbecause of the same AW velocity in the entire simulation domain.\n\n\\begin{figure}[]\n\\resizebox{\\hsize}{!}{\\includegraphics{fig10.eps}} \n\\caption{As in Fig.~5 but for the case of homogeneous plasma density (no phase mixing).} \n\\end{figure} \n\n\n\\section{Discussion}\n\nIn our preliminary work \\citep{tss05} we outlined the main results of our\nnewly-discovered mechanism of electron acceleration. Here we \npresent a more detailed analysis\nof the phenomenon. We have established the following:\n\n\\begin{itemize}\n\\item Progressive distortion of the Alfv\\'en wave front, due to the differences in \nlocal Alfv\\'en speed, generates oblique (nearly parallel to the magnetic\nfield) electrostatic fields, which accelerate electrons.\n\n\\item The amplitude decay law in the inhomogeneous regions, \nin the kinetic regime, is shown to be the same as in the MHD approximation \ndescribed by \\citet{hp83}.\n\n\\item The density perturbations ($\\approx 10$\\% of background)\nare generated due to both the weak non-linearity and plasma inhomogeneity. \nThese are propagating density oscillations with variations both \nin overall magnitude \nand across the $y$-coordinate. 
They are mainly confined to the strongest density gradient regions\n(around $y \\approx 50$ and $150$), i.e. the edges of the density structure (e.g. the boundary of a solar coronal loop). \nPerturbations longitudinal to the external magnetic field, $B_x$, are also generated in the\nsame manner, but with smaller ($\\approx 3$\\%) amplitudes.\n\n\\item Both in the homogeneous and inhomogeneous cases the presence \nof AWs causes broadening of the perpendicular \n(to the external magnetic field) ion velocity \ndistribution functions, while no ion acceleration is observed.\n\n\\end{itemize}\n\nIn the MHD approximation \n\\citet{hbw02} and \\citet{tnr03} showed that \nin the case of localised Alfv\\'en pulses,\nHeyvaerts and Priest's amplitude decay\nformula $\\propto \\exp (-A x^3)$ (which is true for\nharmonic AWs) is replaced by the power law $B_z \\propto x^{-3\/2}$. \nA natural next step forward\nwould be to check whether \nin the case of localised Alfv\\'en pulses the same power law holds\nin the kinetic regime.\n\nAfter this study was complete\nwe became aware of a study by \\citet{vh04}, who used a hybrid\ncode (electrons treated as a neutralising fluid, with ion\nkinetics retained) as opposed to our (fully kinetic) PIC code,\nto simulate resonant absorption. They found that \na planar (body) Alfv\\'en\nwave propagating at less than $90^{\\circ}$ to a background gradient \nhas field lines which lose wave energy to another set of field lines by\ncross-field transport. Further, \\citet{v04} found that \nwhen perpendicular scales of the\norder of 10 proton inertial lengths ($10 c\/\\omega_{pi}$) \ndevelop from wave refraction\nin the vicinity of the resonant field lines, a non-propagating \ndensity fluctuation begins\nto grow to large amplitudes. This saturates by exciting highly \noblique, compressive and\nlow-frequency waves which dissipate and heat protons. \nThese processes lead to a faster development of small\nscales across the magnetic field, i.e. 
the phase mixing mechanism, \nstudied here.\n\n\n\\begin{acknowledgements}\nThe authors gratefully acknowledge support from\nCAMPUS (Campaign to Promote University of Salford) which funded\nJ.-I.S.'s one month fellowship to the Salford University \nthat made this project possible.\nDT acknowledges use of E. Copson Math cluster \nfunded by PPARC and University of St. Andrews.\nDT kindly acknowledges support from Nuffield Foundation \nthrough an award to newly appointed lecturers in Science,\nEngineering and Mathematics (NUF-NAL 04).\nThe authors would like to \nthank the referee, Dr. Bernie J. Vasquez, for pointing out some minor\ninconsistencies, which have been now corrected.\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzibts b/data_all_eng_slimpj/shuffled/split2/finalzzibts new file mode 100644 index 0000000000000000000000000000000000000000..1a3ebe2f4906a88e9d35a52ffee8f53b2f2fb19d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzibts @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and summary of results}\n\nThe existence of dark matter and dark energy is now firmly established phenomenologically \\cite{Bradac,Spergel} but the theoretical understanding is far from complete. Einstein equations require ``exotic\" components in the right hand side corresponding to about $\\%96$ of the total energy density today. Understanding the microscopic nature of these extra components is one of the most challenging and important problems faced by theoretical physics at present.\n\nSupersymmetric and exotic particles in the standard model are the best candidates for dark matter (for a review see \\cite{Jungman}). Several experiments are now being devised for a direct detection of these particles. Alternative descriptions based on modifications to gravity have also been explored with interesting results. 
See \\cite{Sanders,Bekenstein,Moffat,Ferreira,Ferreira2,Carroll}, and references quoted therein, for some of these efforts and their consequences. For a recent review of the Einstein aether theory see \\cite{Jacobson}.\n\nThe problem of dark energy is somewhat more recent, although the issue of the cosmological constant has been around for a long time. The discovery of an accelerating Universe \\cite{Expansion} resulted in deep changes in cosmology. The simplest explanation for this phenomenon is a small positive cosmological constant, but many other possibilities have been explored (see \\cite{Carroll2,Lima} for recent reviews).\n\n~\n\nIn this paper we consider an action for general relativity coupled to a Born-Infeld theory. The Born-Infeld theory has as its fundamental variable a symmetric connection $C^\\rho_{\\ \\mu\\nu}(x)$. $C^{\\mu}_{\\ \\nu\\rho}$ has the same symmetries and transformation properties as the Christoffel symbol but is independent of it. The action is\n\\begin{equation}\\label{I}\n I[g_{\\mu\\nu},C^{\\mu}_{\\ \\nu\\rho},\\Psi] = {1 \\over 16 \\pi G} \\int d^4x \\left[ \\sqrt{|g_{\\mu\\nu}|} R + {2 \\over \\alpha l^2}\\sqrt{\\left|g_{\\mu\\nu}-l^2 K_{(\\mu\\nu)}\\right| } \\right] + \\int d^4x\\, {\\cal L}_m(\\Psi,g_{\\mu\\nu}),\n\\end{equation}\nwhere $|A_{\\mu\\nu}|$, for any $A_{\\mu\\nu}$, denotes the absolute value of the determinant of $A_{\\mu\\nu}$. $K_{\\mu\\nu}$ is the ``Ricci\" curvature associated to $C^{\\mu}_{\\ \\nu\\rho }(x)$,\n\\begin{equation}\\label{Kmn}\nK_{\\mu\\nu} \\equiv K^{\\alpha}_{\\ \\mu\\alpha\\nu } \\ \\ \\ \\ \\ \\ (K^{\\mu}_{\\ \\nu\\, \\alpha\\beta} = C ^{\\mu}_{\\ \\nu\\beta,\\alpha} + C^{\\mu}_{\\ \\sigma\\alpha} C^{\\sigma}_{ \\ \\nu\\beta} - [\\alpha \\leftrightarrow \\beta]).\n\\end{equation}\nBesides Newton's constant, the action (\\ref{I}) has two extra parameters: $l$ is a length and $\\alpha$ is dimensionless. 
$\\Psi$ denotes all baryonic fields and ${\\cal L}_{m}$ the baryonic Lagrangian.\n\nThe action (\\ref{I}) is similar in spirit, although different in interpretation, to the Born-Infeld gravity action proposed by Deser and Gibbons \\cite{Deser-Gibbons},\n\\begin{equation}\\label{IDG}\nI[g_{\\mu\\nu}]=\\int \\sqrt{|g_{\\mu\\nu} - l^2 R_{\\mu\\nu} + X_{\\mu\\nu}(R)|}\n\\end{equation}\nand elaborated in \\cite{DG2}. As discussed in \\cite{Deser-Gibbons}, the term $X_{\\mu\\nu}(R)$ must be chosen such that the action is free of ghosts, and free of Schwarzschild-like singularities. The action (\\ref{IDG}) is an action for pure gravity, and it can be seen as a natural extension to spin two of the scalar $\\sqrt{|g_{\\mu\\nu} + \\partial_\\mu \\phi \\partial_\\nu\\phi| }$ and vector $\\sqrt{|g_{\\mu\\nu} + F_{\\mu\\nu}|}$ Born-Infeld (BI) theories. For the scalar and vector BI theories the equations of motion are of second order. For the spin two theory this is not automatic and requires the addition of $X_{\\mu\\nu}$.\n\nThe action (\\ref{I}), on the other hand, gives rise to second order equations because $K_{\\mu\\nu}(C)$ depends on first derivatives of the field $C^{\\mu}_{\\ \\nu\\rho }$. This action, however, is not an action for pure gravity but for gravity coupled to $C^{\\mu}_{\\ \\nu\\rho }$. The equations of motion are discussed in the appendix and in Sec. \\ref{Eq\/Sec} below.\n\nIt is known (e.g. \\cite{Fradkin-T}) that general relativity with a cosmological constant is dual to Eddington's action \\cite{Eddington} $I[C]\\sim\\int \\sqrt{|K_{\\mu\\nu}|}$. The action (\\ref{I}) can then be interpreted as general relativity interacting with its own dual field theory.\n\nThe action (\\ref{I}) can also be motivated by looking at general relativity without a metric \\cite{B}. This interpretation will be discussed in Sec. \\ref{Sec\/g=0}.\n\nOur main goal in this paper is to argue that the field $C^{\\mu}_{\\ \\nu\\rho}$ has good properties to represent dark matter and dark energy. We shall study the equations of motion following from (\\ref{I}) and prove the following properties.\n\\begin{enumerate}\n\\item\nFor a cosmological model, there exist solutions where the expansion factor $a(t)$ behaves as $a(t) \\sim e^{Ht}$ for large $t$, and as $a(t)\\sim t^{2\/3}$ for small $t$.\nThe equation of state for the fluid interpolates between $p=0$ and $p=-\\rho$. The parameters in the solution can be adjusted such that this field contributes to $\\sim 23\\%$ of the total matter energy density and $\\sim 73\\%$ of the vacuum energy density, as required by observations \\footnote{Couplings between dark matter and dark energy have appeared in \\cite{Comelli}, and in \\cite{Bertolami} involving a Chaplygin gas.}.\n\n\\item\nFor spherically symmetric configurations, the action (\\ref{I}) predicts asymptotically flat rotation curves, as required by galactic dynamics. The parameters involved in this solution can also be adjusted to deal with realistic situations.\n\\end{enumerate}\n\n\nWe would like to stress the simplicity of this proposal. The ``Born-Infeld\" term is all we need to account for both dark energy and dark matter, at least for the problems described above. More complicated tests, like lensing, fluctuations, and others will be discussed elsewhere \\cite{BFS,BRR}. See also \\cite{Davi}.\n\n\n\n\n\n\n\n\\section{The Equations of Motion}\n\n\n\\label{Eq\/Sec}\n\n\\subsection{A bi-metric theory}\n\nThe fields varied in the action (\\ref{I}) are the metric $g_{\\mu\\nu}$ and the connection $C^{\\mu}_{\\ \\nu\\rho }$. Both fields are independent. At the level of the equations of motion, the connection $C^{\\mu}_{\\ \\nu\\rho }$ can be written in terms of a second metric $q_{\\mu\\nu}$. (The full action can also be written as a bi-metric theory \\cite{andy}.) 
This action then represents a bi-metric theory of gravity. This result follows closely the structure of Eddington's theory \\cite{Eddington}. We shall postpone a detailed derivation for the appendix and include here only the result.\n\nLet $q_{\\mu\\nu}(x)$ be a rank two invertible symmetric tensor satisfying the metricity condition\n\\begin{equation}\nD_\\rho q_{\\mu\\nu}=0\n\\end{equation}\n{\\it with respect to} $C^{\\mu}_{\\ \\nu\\rho }$. Since $C^{\\mu}_{\\ \\nu\\rho }$ is symmetric this implies $C^{\\mu}_{\\ \\nu\\rho} = {1 \\over 2} q^{\\mu\\alpha} ( q_{\\alpha\\nu,\\rho} + q_{\\alpha\\rho,\\nu} - q_{\\nu\\rho,\\alpha} )$, and for every $q_{\\mu\\nu}$ there is a unique $C^{\\mu}_{\\ \\nu\\rho }$.\n\nThe equations of motion derived from the action (\\ref{I}) can be written completely in terms of $g_{\\mu\\nu}$ and $q_{\\mu\\nu}$, and take the very simple form\n\\begin{eqnarray}\nG_{\\mu\\nu} &=& - {1 \\over l^2} \\sqrt{{q}\\over g}\\, g_{\\mu\\alpha}\\,q^{\\alpha\\beta}\\, g_{\\beta\\nu} + 8\\pi G\\, T^{{\\scriptscriptstyle (m)}}_{\\ \\ \\mu\\nu} \\label{ee} \\\\\nK_{\\mu\\nu} &=& {1 \\over l^2}( g_{\\mu\\nu} + \\alpha\\, q_{\\mu\\nu}) \\label{Ke}\n\\end{eqnarray}\n$T^{{\\scriptscriptstyle (m)}}_{\\ \\mu\\nu}$ is the energy momentum tensor associated to the baryonic Lagrangian ${\\cal L}_{{\\scriptscriptstyle (m)}}$. $q^{\\mu\\nu}$ is the inverse of $q_{\\mu\\nu}$. The derivation of these equations is left for the appendix.\n\n~\n\nEquation (\\ref{ee}) is the Einstein equation. The first term in the right hand side is the contribution from the Born-Infeld action. Our main goal will be to prove that this fluid can account for dark matter and dark energy.\n\n\n\\subsection{The de-Sitter solution} \\label{SecdeSitter}\n\nThe de-Sitter spacetime is an exact solution to this theory. This can be seen as follows. 
(The de-Sitter spacetime is expected to be relevant after matter becomes negligible so we set here $T^{(m)}_{\\ \\mu\\nu}=0$.)\n\nSuppose there exist solutions of the equations of motion with $R_{\\mu\\nu}=\\Lambda g_{\\mu\\nu}$. It is direct to see that this implies that both metrics must be proportional,\n\\begin{equation}\nq_{\\mu\\nu}(x) = \\gamma\\, g_{\\mu\\nu}(x)\n\\end{equation}\nwith $\\gamma$ a constant. The constant $\\gamma$ can be computed as follows.\nReplacing in (\\ref{Ke}) we derive,\n\\begin{equation}\nR_{\\mu\\nu} = {1 \\over l^2}\\left( \\gamma \\alpha + 1 \\right) g_{\\mu\\nu}.\n\\end{equation}\nReplacing in (\\ref{ee}) (with $T^{(m)}_{\\mu\\nu}=0$) we derive\n\\begin{equation}\nR_{\\mu\\nu} = {\\gamma \\over l^2} g_{\\mu\\nu}.\n\\end{equation}\nConsistency determines $\\gamma$,\n\\begin{equation}\n\\gamma = {1 \\over 1-\\alpha}.\n\\end{equation}\nThus, the Born-Infeld field can behave as a cosmological constant with the value\n\\begin{equation}\n\\Lambda = {1 \\over 1-\\alpha}\\, {1 \\over l^2}.\n\\end{equation}\nThe value $\\alpha=1$ is a critical point where cosmological solutions cease to exist.\nCuriously, we shall see that a good fit for the Friedman equation requires $\\alpha$ to be close, but not equal, to one.\n\n\n\n\\section{Friedman cosmological models}\n\nThe evolution equation for the scale factor in flat cosmological models is given by the Friedman equation (neglecting radiation)\n\\begin{equation}\\label{Fr}\n{\\dot a^2 \\over a^2} = {\\Omega_{bm} + \\Omega_{dm} \\over a^3} + \\Omega_{\\Lambda}.\n\\end{equation}\nCurrent values for the (relative) densities of baryonic matter $\\Omega_{bm}$, dark matter $\\Omega_{dm}$ and vacuum energy $\\Omega_\\Lambda$ are,\n\\begin{equation}\\label{Omega}\n\\Omega_{bm} \\simeq 0.04, \\ \\ \\ \\ \\ \\ \\ \\Omega_{dm} \\simeq 0.23, \\ \\ \\ \\ \\ \\ \\Omega_\\Lambda \\simeq 0.73.\n\\end{equation}\nAmong the components appearing in the right hand side of (\\ref{Fr}), only the $\\sim 0.04$ fraction of baryonic matter is theoretically well-understood. The other $0.23+0.73=0.96$ fraction remains a great mystery.\n\n\\subsection{Goal of this section}\n\n\n\nThe goal of this section is to demonstrate that the field $C^{\\mu}_{\\ \\nu\\rho }$ behaves like dark matter for small times, and as dark energy for larger times. In other words, its equation of state evolves from $p=0$ into $p=-\\rho$. Adjusting the parameters $\\alpha$ and $l$, plus initial conditions, the Born-Infeld field can account for both the $\\Omega_{dm}$ and $\\Omega_{\\Lambda}$ contributions in (\\ref{Fr}). Thus, the action (\\ref{I}) is capable of reproducing the correct evolution of the scale factor without adding either dark matter or dark energy.\n\nOur approach does not shed any light on the particular values of $\\Omega_\\Lambda,\\Omega_{dm},\\Omega_{bm}$ and other cosmological parameters. We shall only prove that $l$ and $\\alpha$ can be chosen such that the predictions from (\\ref{I}) are consistent with the Friedman equation (\\ref{Fr}). In particular we have chosen here to set $k=0$ and consider only flat models. There is no particular reason for the choice other than simplicity. A full analysis with a varying $k$ and including other developments will be reported in \\cite{BFS}.\n\n\n\\subsection{The ansatz and equations}\n\\label{Equations}\n\n\nTo solve (\\ref{ee}) and (\\ref{Ke}) we assume that both $g_{\\mu\\nu}$ and $q_{\\mu\\nu}$ are homogeneous, isotropic and with flat spatial sections. 
Using the gauge freedom in the time coordinate to fix $g_{tt}=-1$, the ansatz for $g_{\\mu\\nu}$ and $q_{\\mu\\nu}$ is then,\n\\begin{eqnarray}\\label{FRW}\ng_{\\mu\\nu}dx^\\mu dx^\\nu &=& - dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2), \\\\\nq_{\\mu\\nu}dx^\\mu dx^\\nu &=& - X(t)^2 dt^2 + Y(t)^2 (dx^2 + dy^2 + dz^2)\\label{qFRW}\n\\end{eqnarray}\nwhere $a(t),X(t),Y(t)$ are arbitrary functions of time to be fixed by the equations of motion and initial conditions.\n\nAs usual for flat models, and to match the choice made in (\\ref{Fr}), we set\n\\begin{equation}\na(t) |_{ \\scriptscriptstyle today }=1, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ H_0 = \\dot a(t) |_{ \\scriptscriptstyle today }\n\\end{equation}\nand use $H_0$ to define a natural dimensionless time coordinate $H_0 t$. The time coordinate in all expressions from now on refers to this choice.\n\nEquations (\\ref{ee},\\ref{Ke}) for the ansatz (\\ref{FRW},\\ref{qFRW}) become,\n\\begin{eqnarray}\n{\\dot a^2 \\over a^2} &=& {1 \\over 3l^2 H_0^2 } {Y^3\\over X}{1 \\over a^3} + {\\rho \\over \\rho_c} \\label{F} \\label{c1} \\\\\n\\left( {Y^3 \\over X} \\right)^. &=& 3 X Y a \\dot a \\label{c2} \\\\\n{1 \\over X^2} {\\dot Y^2 \\over Y^2} &=& {1 \\over 3l^2 H_0^2}\\left( - {1 \\over 2 X^2} + \\alpha + {3 \\over 2} {a^2\\over Y^2}\\right), \\label{c3}\n\\end{eqnarray}\nplus second order equations related to (\\ref{c1}-\\ref{c3}) by Bianchi identities.\nWe have introduced the usual notation $\\rho_c = {3H_0^2 \\over 8\\pi G}$. $\\rho$ is the baryonic matter density and we shall assume\n\\begin{equation}\n{\\rho \\over \\rho_c} = {\\Omega_{bm} \\over a^3 }.\n\\end{equation}\n\nThe interpretation of equations (\\ref{c1}-\\ref{c3}) is straightforward. Equation (\\ref{F}) is the Friedman equation determining the time evolution of the scale factor $a(t)$. The first term in the right hand side of (\\ref{c1}) is the contribution from the Born-Infeld field $C^{\\mu}_{\\ \\nu\\rho}$. 
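For concreteness, the system (\ref{c1}-\ref{c3}) is straightforward to integrate numerically. The following sketch is ours, not the authors' code: it works in $H_0=1$ units, picks the illustrative values $\alpha=0.9$, $\Omega_{bm}=0.04$, $\Omega_\Lambda=0.73$ (with $lH_0$ then fixed through $1/(3(1-\alpha)l^2H_0^2)=\Omega_\Lambda$, anticipating the constraint derived below), reads $\dot a$ off (\ref{c1}), $\dot Y$ off (\ref{c3}), and $\dot X$ from (\ref{c2}); the initial $X,Y$ are an assumed choice near the de-Sitter configuration, constrained by evaluating (\ref{c1}) today.

```python
import math

# Assumed illustrative parameters (not fitted values):
alpha, Obm, OL = 0.9, 0.04, 0.73
L2 = 1.0 / (3.0 * (1.0 - alpha) * OL)      # (l H0)^2 from the de-Sitter constraint

def rhs(a, X, Y):
    # \dot a from (c1), \dot Y from (c3), \dot X from the combination with (c2)
    adot = a * math.sqrt((Y**3 / X) / (3.0 * L2 * a**3) + Obm / a**3)
    B = -1.0 / (2.0 * X**2) + alpha + 1.5 * a**2 / Y**2
    Ydot = X * Y * math.sqrt(max(B, 0.0) / (3.0 * L2))   # guard tiny negative truncation
    Xdot = 3.0 * X * (Ydot / Y - X**2 * a * adot / Y**2)
    return adot, Xdot, Ydot

# Initial data at t=1 (today): a=1, X near its de-Sitter value,
# Y fixed by evaluating (c1) today with \dot a(1)=1.
a, X = 1.0, 1.0 / math.sqrt(1.0 - alpha)
Y = (3.0 * L2 * (1.0 - Obm) * X) ** (1.0 / 3.0)

dt, t = 1e-4, 1.0
while t < 3.0:                              # plain RK4 step
    k1 = rhs(a, X, Y)
    k2 = rhs(a + 0.5*dt*k1[0], X + 0.5*dt*k1[1], Y + 0.5*dt*k1[2])
    k3 = rhs(a + 0.5*dt*k2[0], X + 0.5*dt*k2[1], Y + 0.5*dt*k2[2])
    k4 = rhs(a + dt*k3[0], X + dt*k3[1], Y + dt*k3[2])
    a += dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
    X += dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    Y += dt*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])/6
    t += dt

w = -(a * X / Y) ** 2     # equation of state of the Born-Infeld fluid
print(a, w)               # a keeps growing; w drifts toward -1
```

With these assumed initial data the expansion accelerates and the Born-Infeld equation of state approaches $-1$, consistent with the de-Sitter behavior discussed next.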
Defining the density and pressure for the Born-Infeld field,\n\\begin{equation}\\label{rhop}\n\\rho_{{\\scriptscriptstyle BI}} = {1 \\over 8\\pi G l^2}{Y^3 \\over X} {1 \\over a^3}, \\ \\ \\ \\ \\ \\ \\ p_{{\\scriptscriptstyle BI}} = - {1 \\over 8\\pi G l^2} {XY \\over a}\n\\end{equation}\nthe right hand side of (\\ref{c1}) is simply ${1 \\over \\rho_{c}}(\\rho_{{\\scriptscriptstyle BI}} + \\rho)$. Furthermore, in terms of $\\rho_{{\\scriptscriptstyle BI}}$ and $p_{{\\scriptscriptstyle BI}}$, equation (\\ref{c2}) takes the usual conservation form\n\\begin{equation}\n(\\rho_{{\\scriptscriptstyle BI}} a^3)^. = - p_{{\\scriptscriptstyle BI}} (a^3)^. .\n\\end{equation}\n\nEq. (\\ref{c3}) (``the Friedman equation for the metric $q_{\\mu\\nu}$\") provides the equation of state for $\\rho_{{\\scriptscriptstyle BI}}$ and $p_{{\\scriptscriptstyle BI}}$, allowing a full solution to the problem. Note that using (\\ref{rhop}) the functions $X(t),Y(t)$ can be written in terms of $\\rho_{{\\scriptscriptstyle BI}}(t),p_{{\\scriptscriptstyle BI}}(t)$ and (\\ref{c3}) becomes a (differential) relation between these two functions. This equation of state thus has one free parameter represented as an initial condition.\n\nWe shall now show that $\\rho_{{\\scriptscriptstyle BI}}$ behaves like dark matter for small times, and like dark energy for large times.\n\n\n\\subsection{Asymptotic $a \\rightarrow 0$ and $a \\rightarrow \\infty$ behavior}\n\n\nDue to the complicated and non-linear character of equations (\\ref{c1}-\\ref{c3}) we shall study them by series expansions and numerically.\n\n~\n\nWe first study the behavior for large values of $a$. In this regime, the baryonic matter density $\\rho \\sim a^{-3}$ does not contribute. (A radiation component would not contribute either.) 
Neglecting the term $\\rho\/\\rho_c$, it is direct to see that the functions,\n\\begin{equation}\na(t) = a_0\\, e^{t\/C }, \\ \\ \\ \\ \\ X(t) = {1 \\over \\sqrt{1-\\alpha}} \\ \\ \\ \\ \\ Y(t) = {a_0 \\over \\sqrt{1-\\alpha}} \\, e^{t\/C},\n\\end{equation}\nwith $C = \\sqrt{3(1-\\alpha)}\\,l H_0$ provide an exact solution to (\\ref{c1}-\\ref{c3}). Thus, de-Sitter\\footnote{The existence of this exact solution is not at all surprising because we already know that the general equations (\\ref{ee}) and (\\ref{Ke}) accept solutions of the form $R_{\\mu\\nu} = \\Lambda g_{\\mu\\nu}$ when $q_{\\mu\\nu}$ is proportional to $g_{\\mu\\nu}$.} space is a solution to (\\ref{c1}-\\ref{c3}) for large times. The constant $C$ measures the value of the associated vacuum density. In order for this solution to approach de-Sitter space with the correct exponent, we must impose\n\\begin{equation}\\label{LO}\n{1 \\over 3(1-\\alpha)l^2 H_0^2} = \\Omega_{\\Lambda}.\n\\end{equation}\n$H_0$ and $\\Omega_\\Lambda$ are determined by observations. This provides a first constraint on the parameters $l$ and $\\alpha$ entering in the action. We shall use (\\ref{LO}) to solve for $l$ in terms of $\\alpha$.\n\n~\n\nNow, we study the $a(t) \\simeq 0$ region. In this regime, an exact solution is not available, but one can display a series expansion with the desired properties. The following series\n\\begin{equation}\na(t)= a_0\\, t^{2\/3} (1 + {\\cal O}(t^{4\/3})), \\ \\ \\ \\ \\ X(t) = x_0^3( 1 + {\\cal O}(t)), \\ \\ \\ \\ \\ Y(t) = x_0 (1 + {\\cal O}(t) )\n\\end{equation}\nprovide a solution to (\\ref{c1}-\\ref{c3}). The crucial point here is the exponent $t^{2\/3}$ in $a(t)$, meaning that $C^{\\mu}_{\\ \\nu\\rho}$ does indeed behave like matter for small times. 
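The exact exponential solution above can be verified by direct substitution into (\ref{c1}-\ref{c3}). A minimal numerical residual check, with arbitrary illustrative values $\alpha=0.5$, $lH_0=1$, $a_0=1$ (the helper names are ours):

```python
import math

alpha, L, a0 = 0.5, 1.0, 1.0                # illustrative values; L = l*H0
C = math.sqrt(3.0 * (1.0 - alpha)) * L

def fields(t):
    # The claimed exact solution: a = a0 e^{t/C}, X const, Y = a/sqrt(1-alpha)
    a = a0 * math.exp(t / C)
    X = 1.0 / math.sqrt(1.0 - alpha)
    Y = a / math.sqrt(1.0 - alpha)
    return a, X, Y

def residuals(t, h=1e-6):
    a, X, Y = fields(t)
    adot = (fields(t + h)[0] - fields(t - h)[0]) / (2 * h)
    Ydot = (fields(t + h)[2] - fields(t - h)[2]) / (2 * h)
    # (c1):
    r1 = adot**2 / a**2 - (Y**3 / X) / (3 * L**2 * a**3)
    # (c2): d/dt (Y^3/X) = 3 X Y a adot   (X is constant here)
    r2 = (fields(t + h)[2]**3 - fields(t - h)[2]**3) / (X * 2 * h) - 3 * X * Y * a * adot
    # (c3):
    r3 = (Ydot**2 / (X**2 * Y**2)
          - (-1.0/(2*X**2) + alpha + 1.5 * a**2 / Y**2) / (3 * L**2))
    return r1, r2, r3

res = residuals(0.7)
print(res)    # all ~ 0 up to finite-difference error
```

The same substitution done by hand reproduces the constraint $C=\sqrt{3(1-\alpha)}\,lH_0$ and the proportionality factor $\gamma=1/(1-\alpha)$ between the two metrics.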
The amount of dark matter is controlled by $a_0$.\n\n\\subsection{Numerical interpolation}\n\nOur final goal is to display a solution for $a(t)$ interpolating between $a(t)\\simeq t^{2\/3}$ for small $a(t)$ and $a(t)\\simeq e^{Ht}$ for large $a(t)$. Furthermore, we would like this solution to exhibit the right amount of dark matter and dark energy.\nThis will be done by a numerical analysis.\n\n\n\nEquations (\\ref{c1}-\\ref{c3}) are of first order and thus we need to give three conditions $a_1=a(1)$, $X_1=X(1)$ and $Y_1=Y(1)$, plus the values of $\\alpha$ and $l$ to integrate them. These are five parameters; however, only two of them are independent. This can be seen as follows.\n\n\nFirst of all, for a flat model, we can choose $a(1)=1$. Second, in (\\ref{LO}), we already encounter one condition on the parameters to achieve the right evolution. Eq. (\\ref{LO}) allows us to solve $l$ in terms of $\\alpha$. One extra condition follows by evaluating Eq. (\\ref{c1}) today,\n\\begin{equation}\n1 = {1 \\over 3 l^2 H_0^2} { Y_1^3 \\over X_1} + \\Omega_{bm},\n\\end{equation}\nfrom where we can solve $X_1$ in terms of $Y_1$ and $l$. The remaining parameters are thus $\\alpha$ and $Y_1$.\n\n~\n\nWe have integrated (\\ref{c1}-\\ref{c3}) numerically varying $\\alpha$ and $Y_1$. The resulting curve is compared with the evolution predicted by (\\ref{Fr},\\ref{Omega}). Our conclusions are the following.\n\n\\begin{enumerate}\n\n\\item\nFirst of all, there exist values of $\\alpha,Y_1$ such that the evolution predicted by (\\ref{Fr}) is almost indistinguishable from that following from (\\ref{c1}-\\ref{c3}), at least for the part of the Universe we can observe, $0 < t \\leq 1$. A good fit requires $\\alpha$ close to one; the value $\\alpha>1$ does not work either.\n\nThe fact that $\\alpha \\sim 1$ is needed for a good fit is quite peculiar because the actual value $\\alpha=1$ is singular and the de-Sitter solution does not exist (See Sec. \\ref{SecdeSitter}). 
In any case, recall that $\\alpha$ enters in the action as a coupling constant and is not subject to variations. More tests of the theory should narrow the actual value of this parameter.\n\n\n\\item\nOf course no measurements exist for $t>1$, but it is interesting to explore the predictions of the Born-Infeld theory at larger times. If one chooses the parameters such that the Big-Bang occurs at the same value of $t$ in both theories, then for large $t$ the expansion factor $a(t)$ grows slightly more slowly in the Born-Infeld theory. Further details on this issue will be reported elsewhere.\n\\end{enumerate}\n\n\n\n\\subsection{The evolution of the equation of state}\n\nAs we mentioned in Sec. \\ref{Equations}, the field $C^{\\mu}_{\\ \\nu\\rho}$ can be characterized by an energy density $\\rho_{{\\scriptscriptstyle BI}}$ and pressure $p_{{\\scriptscriptstyle BI}}$ whose expressions are given in (\\ref{rhop}). The corresponding equation of state is,\n\\begin{equation}\n{p_{{\\scriptscriptstyle BI}} \\over \\rho_{{\\scriptscriptstyle BI}}} = - \\left( {a X \\over Y} \\right)^2\n\\end{equation}\nand we observe that the pressure is always negative. Fig. \\ref{eqst} shows the evolution of ${p_{{\\scriptscriptstyle BI}} \/ \\rho_{{\\scriptscriptstyle BI}}}$ in the observable range $0 < t \\leq 1$.\n\n\\section{Spherically symmetric solutions}\n\nWe now look for static, spherically symmetric solutions of (\\ref{ee}) and (\\ref{Ke}), treated perturbatively in ${1 \\over l^2}$; the zero order equations are denoted (\\ref{oee}) and (\\ref{oKe}). To zero order, and with no baryonic matter, $g_{\\mu\\nu}$ is flat while $q_{\\mu\\nu}$ is a Schwarzschild-like metric (\\ref{q0}), characterized by a length scale $w_0$ and a dimensionless constant $\\beta$, and written in terms of a radial coordinate $\\tilde k$. Two remarks are in order:\n\\begin{enumerate}\n\\item\nThe region of interest is the exterior of the horizon of $q_{\\mu\\nu}$,\n\\begin{equation}\n\\tilde k > w_0.\n\\end{equation}\n\\item\nFor $w_0\\neq 0$ it will be convenient to use a dimensionless radial coordinate,\n\\begin{equation}\nk \\equiv {\\tilde k \\over w_0}.\n\\end{equation}\nIn particular the horizon is now located at,\n\\begin{equation}\nk=1, \\ \\ \\ \\ \\ (\\mbox{horizon}).\n\\end{equation}\nFrom now on, all formulas refer to this coordinate.\n\n\n\\end{enumerate}\n\n\n~\n\nHaving chosen the zero order solutions to (\\ref{oee}) and (\\ref{oKe}), we now discuss the corrections induced by the right hand side of these equations. We only discuss here the first order correction to $g_{\\mu\\nu}$, proportional to ${1 \\over l^2}$. Since the right hand side of (\\ref{oee}) is already of order ${1 \\over l^2}$, it is enough to know $q_{\\mu\\nu}$ to order zero. 
[Note that $q_{\\mu\\nu}^{{\\scriptscriptstyle (0)}}$ contributes to $g_{\\mu\\nu}^{{\\scriptscriptstyle (1)}}$, $q_{\\mu\\nu}^{{\\scriptscriptstyle (1)}}$ contributes to $g_{\\mu\\nu}^{{\\scriptscriptstyle (2)}}$, and so on.]\n\n\nOur problem then reduces to replacing $q_{\\mu\\nu}$ given by (\\ref{q0}) in (\\ref{oee}) and solving for the metric $g_{\\mu\\nu}$ to first order in ${1 \\over l^2}$. The metric $g_{\\mu\\nu}$ must be spherically symmetric. We then write,\n\\begin{eqnarray}\\label{metric}\nds^2 &=& -c^2\\left( 1 + {1 \\over c^2}\\Phi(r)\\right) dt^2 +\\left(1 - {2m(r) \\over c^2 r} \\right)^{-1} dr^2 + r^2d\\Omega^2,\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n \\Phi &=& \\Phi^{{\\scriptscriptstyle (0)}} + {1 \\over l^2}\\, \\Phi^{{\\scriptscriptstyle (1)}} + {1 \\over l^4}\\, \\Phi^{{\\scriptscriptstyle (2)}} + \\cdots \\\\\n m &=& m^{{\\scriptscriptstyle (0)}} + {1 \\over l^2}\\, m^{{\\scriptscriptstyle (1)}} + {1 \\over l^4}\\, m^{{\\scriptscriptstyle (2)}} + \\cdots.\n\\end{eqnarray}\nAs we have already discussed, in the approximation with no baryonic matter, the zero order solution is simply flat space and thus\n\\begin{equation}\n \\Phi^{{\\scriptscriptstyle (0)}}=0, \\ \\ \\ \\ m^{{\\scriptscriptstyle (0)}}=0.\n\\end{equation}\n\nTo first order we obtain the equations,\n\\begin{eqnarray}\n {dm^{{\\scriptscriptstyle (1)}} \\over dr} \\left(1 - {1 \\over k}\\right) - {w_0^3 c^2 \\over 2 \\beta}\\, k^2{d k \\over dr} &=& 0, \\label{g11}\\\\\n \\beta c^2 w_0 k^2\\left(1 - {1 \\over k}\\right) + 2 {dk\\over dr} u^{{\\scriptscriptstyle (1)}} &=& 0, \\label{g22} \\\\\n {du^{{\\scriptscriptstyle (1)}} \\over dr} + \\beta c^2 w_0\\,r\\, {dk \\over dr} &=& 0, \\label{g33}\n\\end{eqnarray}\nwhere we have re-defined $\\Phi^{{\\scriptscriptstyle (1)}}(r)$ in terms of a new function $u^{{\\scriptscriptstyle (1)}}(r)$ by\n\\begin{equation}\\label{phi1}\nr{d\\Phi^{{\\scriptscriptstyle (1)}} \\over dr}= u^{{\\scriptscriptstyle (1)}}(r) + {m^{{\\scriptscriptstyle (1)}}(r) \\over r}.\n\\end{equation}\n[To first order, the equations only depend on $\\Phi'$ and this is why this redefinition does not spoil locality.]\n\n\n\n\n\\subsection{Full parametric solution. Two branches}\n\nEquations (\\ref{g11}-\\ref{g33}) are three non-linear equations for the three unknowns $m^{{\\scriptscriptstyle (1)}}(r),u^{{\\scriptscriptstyle (1)}}(r)$ and $k(r)$. A much simpler set of equations can be obtained by changing the independent variable from $r$ to $k$.\n\nWe define the functions $u^{{\\scriptscriptstyle (1)}}(k),m^{{\\scriptscriptstyle (1)}}(k)$ and $r(k)$. Also, for any $f(r)$,\n\\begin{equation}\n{df\\!(r) \\over dr} = \\left. {df(k) \\over dk}\\right\/ {dr \\over dk}.\n\\end{equation}\nPerforming these substitutions, equations (\\ref{g11}-\\ref{g33}) become linear for the unknowns $m^{{\\scriptscriptstyle (1)}}(k),u^{{\\scriptscriptstyle (1)}}(k)$ and $r(k)$,\n\\begin{eqnarray}\n {dm^{{\\scriptscriptstyle (1)}} \\over dk} \\left(1 - {1 \\over k}\\right) - {w_0^3 c^2 \\over 2 \\beta}\\, k^2 &=& 0 \\label{g1}\\\\\n \\beta c^2 w_0 k^2\\left(1 - {1 \\over k}\\right){dr \\over dk} + 2\\, u^{{\\scriptscriptstyle (1)}} &=& 0 \\label{g2} \\\\\n {du^{{\\scriptscriptstyle (1)}} \\over dk} + \\beta c^2 w_0\\,r &=& 0 . \\label{g3}\n\\end{eqnarray}\nNote in particular that $m^{{\\scriptscriptstyle (1)}}$ has decoupled from $u^{{\\scriptscriptstyle (1)}}$ and $r(k)$. 
The general solution can be found in closed form,\n\\begin{eqnarray}\n r(k) &=& A_0 \\left( -\\left(k - {1 \\over 2}\\right)\\ln\\left(1-{1 \\over k}\\right) -1\\right) + B_0 \\left(k - {1 \\over 2} \\right) \\label{rk} \\\\\n u^{{\\scriptscriptstyle (1)}}(k) &=& {1 \\over 2} \\beta c^2 w_0 \\left[ A_0 \\left( k^2 \\left( 1- {1 \\over k} \\right) \\ln\\left( 1 - {1 \\over k}\\right) + k - {1 \\over 2} \\right) - B_0 ( k^2 - k) \\right] \\label{uk} \\\\\n m^{\\scriptscriptstyle (1)}(k) &=& {w_0^3c^2\\over 2\\beta}\\left( {1 \\over 3}k^3 + {1 \\over 2}k^2 + k + \\ln(k-1) - h_0 \\right) \\label{mk}\n\\end{eqnarray}\nwhere $A_0,B_0$ and $h_0$ are integration constants. This solution is real for $k>1$, that is outside the horizon in the reciprocal space $q_{\\mu\\nu}$.\n\n~\n\nTo explore the properties of the different solutions we first note that the function $r(k)$ displayed in (\\ref{rk}) diverges at two different values of $k$,\n\\begin{equation}\nk = \\infty, \\ \\ \\ \\ \\ \\ \\ \\ \\mbox{and} \\ \\ \\ \\ \\ \\ \\ \\ k=1.\n\\end{equation}\n\nSince the function $r(k)$ is a coordinate change and must be globally defined at least in the range $0 < r < \\infty$, the signs of the constants $A_0$ and $B_0$ define two different branches.\n\n\\begin{itemize}\n\\item\nFirst branch: $B_0>0$. In this case, $r(k)$ diverges for large $k$, and becomes zero at some finite value $k=k_0$. For $k < k_0$ the coordinate $r(k)$ would be negative, so the physical range in this case is $k \\geq k_0$.\n\n\\item\nLogarithmic branch: $A_0>0,B_0<0$. In this case, $r(k)$ diverges at $k=1$ and becomes zero at some finite value $k=k_0$. The physical range of the coordinate $k$ in this case is\n\\begin{equation}\nk_0 \\geq k > 1.\n\\end{equation}\nThe most salient and peculiar property of this branch is that infinity is mapped to the horizon in the metric $q_{\\mu\\nu}$. There is a strong\/weak relationship between both fields. The details of this branch are studied in the following paragraphs.\n\n\n\n\\end{itemize}\n\nFig. \\ref{bran} shows the behavior of the function $r(k)$ for each branch.\n\n\\begin{figure}[h]\n\\centerline{\\psfig{file=branches.eps,width=10cm,angle=0}}\n\\caption{Two branches}\n\\label{bran}\n\\end{figure}\n\n\n\n\n\n\\subsection{The logarithmic branch and asymptotically flat rotation curves}\n\nThe most important property of this branch is that the rotation curves are asymptotically flat. Let us recall the relation between the Newtonian potential appearing in (\\ref{metric}) and the rotation speed of a (non-relativistic) object at distance $r$,\n\\begin{equation}\\label{prof}\nv(r) = \\sqrt{ r {d\\Phi(r) \\over dr}}.\n\\end{equation}\n(This follows from the geodesic equation.) On the other hand, the derivative of the potential $\\Phi$, to first order in ${1 \\over l^2}$, is given in terms of $u^{{\\scriptscriptstyle (1)}}$ and $m^{{\\scriptscriptstyle (1)}}$ in (\\ref{phi1}). The rotation curve can be expressed as a parametric function,\n\\begin{equation}\nv(k) = {1 \\over l} \\sqrt{ u^{{\\scriptscriptstyle (1)}}(k) + {m^{{\\scriptscriptstyle (1)}}(k) \\over r(k)}}, \\ \\ \\ \\ \\ \\ r=r(k)\n\\end{equation}\nwhere $u^{{\\scriptscriptstyle (1)}}(k),m^{{\\scriptscriptstyle (1)}}(k)$ and $r(k)$ are given in (\\ref{rk}-\\ref{mk}).\n\nFrom these expressions it is direct to compute the limit,\n\\begin{eqnarray}\\label{vinf0}\nv_{\\infty}^2 & \\equiv & \\lim_{k\\rightarrow 1} v^2(k) \\nonumber\\\\\n&=& {w_0 (\\beta^2 A_0^2 - 4 w_0^2) \\over 4A_0 \\beta l^2}\\, c^2\n\\end{eqnarray}\nwhich is indeed finite.\n\nHowever, this is not the whole story. We need to impose boundary conditions at $r=0$ ($k=k_{0}$) to ensure that the solution, and in particular the rotation curve (\\ref{prof}), is well-behaved there too. This will imply the following constraints and redefinitions of the parameters $A_0,B_0$ and $h_0$.\n\n\\begin{enumerate}\n\\item\nWe first express $B_0$ in terms of $k_0$, the point where $r(k_0)=0$. 
This gives the following expression for $B_0$,\n\\begin{equation}\nB_0 = {A_0 \\over 2k_0-1} \\left((2k_0-1) \\ln\\left(1 - {1 \\over k_0} \\right) + 2\\right).\n\\end{equation}\n\\item\nSecond, ${m^{{\\scriptscriptstyle (1)}} \\over r}$ must be finite at $r=0$. This implies that $m^{{\\scriptscriptstyle (1)}}(k)$ must vanish at $k=k_0$ and this fixes $h_0$ to be \\begin{equation}\n h_0 = {1 \\over 3} k_0^3 + {1 \\over 2}k_0^2 + k_0 + \\ln(k_0-1)\n\\end{equation}\n\\item\nFinally, the orbital velocity of an object at $r=0$ must be zero. This implies that $u^{{\\scriptscriptstyle (1)}} + m^{{\\scriptscriptstyle (1)}}\/r$ evaluated at $k=k_0$ must vanish. This is achieved by choosing the constant $A_0$ to be\n\\begin{equation}\n A_0 = {2 w_0k_0^2 (2k_0-1) \\over \\beta}\n\\end{equation}\n\\end{enumerate}\nIn summary, boundary conditions at $r=0$ fix $B_0,A_0$ and $h_0$ in terms of a new parameter $k_0$. The full solution is then characterized by three remaining constants: the length scale $w_0$, and two dimensionless numbers $\\beta$ and $k_0$.\n\n\n\\subsection{A better parametrization and examples}\n\nThe solution we have found is still parameterized by several numbers. The functions $r(k),v(k)$ depend on $l,c,\\beta,w_0,k_0$. The first two, $l$ and $c$, enter the action and cannot be varied. In fact $l$ has been already constrained by the cosmological analysis. The other three remaining parameters can be chosen to match a desired physical situation. Before plotting examples it is convenient to choose a different basis for these three arbitrary parameters.\n\nFirst, the asymptotic velocity $v_{\\infty}$ computed in (\\ref{vinf0}), in terms of $k_0$, is\n\\begin{equation}\\label{vinf}\nv^2_\\infty = {4k_0^6-4k_0^5+ k_0^4 - 1 \\over 2(2k_0-1)k_0^2}\\, {w_0^2 \\over l^2}\\, c^2\n\\end{equation}\nThis parameter is of course a natural observable which can be identified easily for most galaxies. 
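As a cross-check of the boundary conditions just imposed, the closed form (\ref{rk}-\ref{mk}) can be evaluated numerically. In the following sketch the units and the value $k_0=2$ are our arbitrary choices ($w_0=\beta=c=l=1$); $r$ and $m^{(1)}$ vanish at $k_0$, and $v^2(k)$ approaches the value (\ref{vinf}) as $k\to 1$:

```python
import math

w0, beta, c, l, k0 = 1.0, 1.0, 1.0, 1.0, 2.0   # illustrative units and k0

# Constants fixed by the boundary conditions at r=0 (k=k0):
A0 = 2.0 * w0 * k0**2 * (2*k0 - 1) / beta
B0 = A0 / (2*k0 - 1) * ((2*k0 - 1) * math.log(1 - 1/k0) + 2)
h0 = k0**3/3 + k0**2/2 + k0 + math.log(k0 - 1)

def r(k):
    return A0 * (-(k - 0.5) * math.log(1 - 1/k) - 1) + B0 * (k - 0.5)

def u1(k):
    return 0.5*beta*c**2*w0 * (A0*(k**2*(1 - 1/k)*math.log(1 - 1/k) + k - 0.5)
                               - B0*(k**2 - k))

def m1(k):
    return w0**3*c**2/(2*beta) * (k**3/3 + k**2/2 + k + math.log(k - 1) - h0)

def v2(k):
    # squared rotation velocity, v^2 = (u1 + m1/r) / l^2
    return (u1(k) + m1(k)/r(k)) / l**2

# Asymptotic value from the formula above:
v2_inf = (4*k0**6 - 4*k0**5 + k0**4 - 1) / (2*(2*k0 - 1)*k0**2) * w0**2*c**2/l**2

print(r(k0), m1(k0))           # both vanish by construction
print(v2(1 + 1e-12), v2_inf)   # v^2(k) -> v_inf^2 as k -> 1 (slow, log convergence)
```

The convergence toward $v_\infty$ is logarithmic in $k-1$, which is why the sample point is taken very close to the horizon.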
We use this equation and express $w_0$ in terms of $v_\\infty$,\n\\begin{equation}\\label{w0}\nw_0 = \\,\\sqrt{{2(2k_0-1)k_0^2 \\over 4k_0^6-4k_0^5+ k_0^4 - 1}}\\, {l\\, v_\\infty \\over c}\n\\end{equation}\nSecond, the dimensionless parameter $\\beta$, which enters in (\\ref{q0}), can be redefined as\n\\begin{equation}\n\\beta = {l \\over r_0} { v_\\infty \\over c},\n\\end{equation}\nwhere $r_0$ is an arbitrary parameter with dimensions of length.\n\n\nWith these definitions, the functions $r(k),v(k)$ take the convenient form\n\\begin{equation}\nr(k) = r_0 f_1(k,k_0), \\ \\ \\ \\ \\ \\ v(k) = v_\\infty f_2(k,k_0).\n\\end{equation}\nThe arbitrary constant $r_0$ sets the length scale while $v_\\infty$ sets the velocity scale. Since both are arbitrary, they can be fixed to any desired values to fit realistic curves. The constant $k_{0}$ controls the shape of the curve and how fast it grows. Since there are three independent parameters, there will be a degeneracy when fitting these curves with observational data (this will be discussed in \\cite{BRR}).\nThe explicit expressions for $f_1,f_2$ are not very illuminating, and can be derived directly from the solution (\\ref{rk}-\\ref{mk}). Of course $f_2$ satisfies $f_{2}(1,k_0)=1$.\n\n~\n\nFig. (\\ref{figg1}) shows examples of the curve with $v_\\infty=100km\/sec$, $r_0$ fixed, and varying $k_{0}$. The top curve corresponds to $k_0=1.5$. As $k_0$ increases we observe a slower growth of the rotation curve. All curves asymptotically reach the value $v_\\infty=100km\/sec$. The horizontal axis is expressed in terms of $r\/r_0$, and choosing $r_0$ one can fit any desired length scale.\n\n\n\\begin{figure}[h]\n\\centerline{\\psfig{file=galactic.eps,width=6cm,angle=270}}\n\\caption{Rotation curves for $k_0=50,15,5,1.5$. }\n\\label{figg1}\n\\end{figure}\n\nIt is interesting to note that for values of $k_0$ smaller than $k_0 \\simeq 1.5$, the curves change shape. Fig. (\\ref{figg2}) shows the rotation curve for $k_0=1.5,1.03,1.003,1.0005$. The top curve corresponds to $k_{0}=1.5$. As $k_{0}$ becomes smaller, the rotation curves grow more slowly.\n\\begin{figure}[h]\n\\centerline{\\psfig{file=galactic2.eps,width=6cm,angle=270}}\n\\caption{Rotation curves for $k_0=1.5,1.03,1.003,1.0005$. }\n\\label{figg2}\n\\end{figure}\n\nNote that one does not expect the curves to be asymptotically flat to all orders. The solutions discussed here are only the first order approximation in the coupling ${1 \\over l^2}$. The next orders are necessary to extrapolate the result to large values of $r$, comparable with $l$. Also, the near horizon region for the metric $q_{\\mu\\nu}$ is singular in Schwarzschild coordinates and thus a proper analysis in regular coordinates may also change the behavior near infinity.\n\n\\subsection{Final remarks}\n\nWe end this section with two extra comments regarding the solutions with spherical symmetry.\n\n\n\n\\sub{Orders of magnitude and Solar System:} The solutions we have considered contain a length scale, $w_0$. This parameter was replaced in (\\ref{w0}) by the asymptotic speed $v_\\infty$, which is a better observable. It is however interesting to estimate the values of $w_0$ for a realistic situation. We set $l \\sim 10^6 kpc$ (cosmological length), and ${v_{\\infty} \\over c} \\sim {1 \\over 3} 10^{-3}$, for a typical situation with $v_\\infty \\sim 100 km\/sec$. Fig. \\ref{figw0} shows $w_0$ as a function of $k_0$.\n\n\\begin{figure}[h]\n\\centerline{\\psfig{file=w0.eps,width=6cm,angle=270}}\n\\caption{$w_0$(kpc) as a function of $k_0$. }\n\\label{figw0}\n\\end{figure}\n\nFor $k_{0} > 3$, $w_0$ is equal to a few kpc. This is a natural galactic scale. With an optimistic viewpoint one can thus assign to $w_0$ some physical meaning determined by the length of the object observed. 
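The content of Fig. \ref{figw0} follows directly from the formula for $w_0$. A short illustrative sketch with the quoted values ($l\sim 10^6$ kpc, $v_\infty/c\sim\frac{1}{3}\times 10^{-3}$; the sampled $k_0$ values are our choice):

```python
import math

l_kpc = 1.0e6                 # cosmological length quoted in the text, in kpc
v_over_c = 1.0e-3 / 3.0       # v_infty/c for v_infty ~ 100 km/sec

def w0_kpc(k0):
    # w0 = sqrt( 2(2k0-1)k0^2 / (4k0^6 - 4k0^5 + k0^4 - 1) ) * l * v_infty/c
    num = 2.0 * (2.0*k0 - 1.0) * k0**2
    den = 4.0*k0**6 - 4.0*k0**5 + k0**4 - 1.0
    return math.sqrt(num / den) * l_kpc * v_over_c

for k0 in (1.5, 3.0, 10.0, 30.0, 50.0):
    print(k0, w0_kpc(k0))     # w0 decreases with k0, down to galactic (kpc) scales
```

For large $k_0$ the prefactor behaves as $k_0^{-3/2}$, so $w_0$ drops steadily toward kpc scales.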
In other words, the tensor $q_{\\mu\\nu}$ is a field whose natural length scale of variation is determined by the object.\n\nNow, the natural dimensionless parameter which controls the corrections from flat space is ${w_0 \\over l}$. If we believe that the value of $w_0$ is comparable to the size of the object of study, then for Solar System experiments ${w_0 \\over l}$ is too small, and the effects of $C^{\\mu}_{\\ \\nu\\rho}$ should not contribute.\n\n~\n\n\\sub{Central density:} The central density associated to $C^{\\mu}_{\\ \\nu\\rho }$ diverges as $1\/r$, like the NFW profile (\\ref{NFWp}). This can be seen by solving (\\ref{g11}-\\ref{g33}), for small values of $r$, as a series expansion. The series,\n\\begin{eqnarray}\n k(r) &=& k_0 - {\\beta (k_0-1) \\over w_0k_0 }\\ r + {\\cal O}(r^2) \\\\\n m^{{\\scriptscriptstyle (1)}}(r) &=& -{w_0^2k_0^2 c^2 \\over 2} \\ r + {\\cal O}(r^2) \\\\\n u^{\\scriptscriptstyle (1)}(r) &=& {w_0^2k_0^2c^2 \\over 2} + {\\cal O}(r^2)\n\\end{eqnarray}\nsolve (\\ref{g11}-\\ref{g33}) with the boundary condition $v(r)\\rightarrow 0$ as $r\\rightarrow 0$. With this solution at hand we can compute the behavior of the associated mass density,\n\\begin{eqnarray}\n 4\\pi G\\, \\rho(r) &=& {1 \\over r^2}( r^2 \\Phi' )' \\\\\n &\\simeq & { 2(k_0-1)w_0 c^2 \\beta \\over l^2\\, r} + {\\cal O}(1)\n\\end{eqnarray}\nwith a $1\/r$ divergence, as anticipated.\n\n\\section{Eddington action, the equivalence principle and $g_{\\mu\\nu}=0$}\n\n\\label{Sec\/g=0}\n\n\nOur proposal for dark matter and dark energy is summarized in the action (\\ref{I}). Once the action is written one can ``roll down\" exploring its predictions and consequences by usual methods. This is what we have done so far. However, it is also interesting to ``climb up\" and attempt a derivation, or at least a good motivation, to include the Born-Infeld term in the gravitational action.\n\nWe start this section recalling a well-known effect. Consider a system of $N$ spins. 
If no external field is applied (and the temperature is not too small) the macroscopic average is $\\langle \\vec{S} \\rangle = 0$. On the contrary, in the presence of an external field, $H_{ext}$, the symmetry is broken, the spins align and produce a non-zero macroscopic average $\\langle \\vec{S} \\rangle_{\\vec{H}_{ext}}\\neq 0$. It then follows that the total magnetic field felt by a charge $q$ is\n\\begin{equation}\n\\vec{H}_T = \\vec{H}_{ext} + \\langle \\vec{S} \\rangle_{\\vec{H}_{ext}}.\n\\end{equation}\nThe orbit of the charge will obey the Lorentz equation with $\\vec{H}_T$ not $\\vec{H}_{ext}$. If we did not know about spins the contribution $\\langle \\vec{S} \\rangle_{\\vec{H}_{ext}}$ would be interpreted as a sort of `dark' magnetic field. If the temperature is below the Curie temperature, the external field could be removed and the spins remain in their `ordered' state with $\\langle \\vec{S} \\rangle_0 \\neq 0$.\n\nLet us now describe an analog of this effect in the theory of gravity. Topological manifolds are invariant under the full diffeomorphism group. Riemannian manifolds are invariant only under the subgroup of isometries of the metric. The state $g_{\\mu\\nu}=0$ represents the unbroken state of general relativity \\cite{Witten88}, and the introduction of a metric breaks the symmetry. The natural geometrical analog of the external field $\\vec{H}_{ext}$ is the metric tensor $g_{\\mu\\nu}$. (See \\cite{Horowitz,Giddings,Guendelman} for other discussions on the state $g_{\\mu\\nu}=0$, and \\cite{Witten07} for a recent critical viewpoint.)\n\nWe shall treat the metric as an external field which can be switched on and off\\footnote{In this picture, the big-bang could be understood as a smooth transition from a manifold without metric into a Riemannian manifold.}. Our first goal is to explore fields that can be defined in the absence of a metric. The simplest example is given by a connection $C^{\\mu}_{\\ \\nu\\rho}(x)$. 
In fact, Eddington introduced a purely affine theory a long time ago \\cite{Eddington},\n\\begin{equation}\\label{edd0}\nI_0[C] = \\kappa \\int d^4 x\\, \\sqrt{K_{\\mu\\nu}(C)}\n\\end{equation}\nwhere $K_{\\mu\\nu}$ is the curvature associated to the connection $C^{\\mu}_{\\ \\nu\\rho}(x)$ (see Eqn. (\\ref{Kmn})). This action is invariant under spacetime diffeomorphisms and yields second-order differential equations for the field $C^{\\mu}_{\\ \\nu\\rho}$. Eddington's action was extensively studied as a purely affine theory of gravity, and also as a possible unification of gravity and electromagnetism \\cite{Eddington,Poplawski}. We take here a different interpretation and let the field $C^{\\mu}_{\\ \\nu\\rho}$ be an independent degree of freedom.\n\nWe now turn on the external field $g_{\\mu\\nu}$ and study the effects of both $g_{\\mu\\nu}$ and $C^{\\mu}_{\\ \\nu\\rho}$ on particles. The first problem is to determine the action for the coupled system. We do not want to introduce ghosts or higher derivatives. The action (\\ref{edd0}) is already free of anomalies. So we start by adding the standard Einstein-Hilbert action for $g_{\\mu\\nu}$ and consider\n\\begin{equation}\n\\int d^4 x \\left( \\sqrt{g}R + \\kappa \\sqrt{K_{\\mu\\nu}}\\ \\right).\n\\end{equation}\nWith this action, the fundamental fields $g_{\\mu\\nu}$ and $C^{\\mu}_{\\ \\nu\\rho}$ are decoupled. To make the theory more interesting we add interactions. The most attractive theory (although not unique) having second-order field equations is the Einstein-Born-Infeld action introduced in Eq. (\\ref{I}).\n\nAn important point now is to define the geodesic equation for the coupled system. 
In the presence of a metric $g_{\\mu\\nu}$ there is a natural affine connection $\\Gamma^{\\mu}_{\\ \\nu\\rho}$ represented by the Christoffel symbol,\n\\begin{equation}\\label{chr}\n\\Gamma^\\mu_{\\ \\nu\\rho} = {1 \\over 2}g^{\\mu\\sigma} ( g_{\\sigma\\nu,\\rho} + g_{\\sigma\\rho,\\nu} - g_{\\nu\\rho,\\sigma}).\n\\end{equation}\nThe question is, should geodesics be defined with respect to $C^{\\mu}_{\\ \\nu\\rho}$, $\\Gamma^{\\mu}_{\\ \\nu\\rho}$, or both? In order to comply with the equivalence principle we shall postulate that particles only couple to the metric and not to the connection $C^{\\mu}_{\\ \\nu\\rho }$. The geodesic equation then takes the usual form\n\\begin{equation}\\label{geo}\n\\ddot x^{\\mu} + \\Gamma^{\\mu}_{\\ \\alpha\\beta} \\dot x^\\alpha \\dot x^\\beta=0,\n\\end{equation}\nwhere $\\Gamma^{\\mu}_{\\ \\alpha\\beta}$ is the Christoffel symbol (\\ref{chr}). Observe that the metric satisfies the equations (\\ref{ee}) and is coupled to the field $C^{\\mu}_{\\ \\nu\\rho }$. In this sense, $C^{\\mu}_{\\ \\nu\\rho }$ does contribute to $g_{\\mu\\nu}$ and indirectly affects the motion of particles. This is how the field $C$ can explain flat rotation curves.\n\nNow, the analogy with spin systems can be pushed a little bit further. We have seen in the cosmological analysis that for large times the system approaches the de-Sitter solution (see Sec. \\ref{SecdeSitter}), and in particular the metric $q_{\\mu\\nu}$ becomes proportional to $g_{\\mu\\nu}$, $q_{\\mu\\nu}\\rightarrow \\lambda g_{\\mu\\nu}$. One can interpret this fact as analogous to the alignment of spins along the direction of the applied field, $\\langle \\vec{S} \\rangle \\rightarrow \\lambda \\vec{H}_{ext}$. Of course, to support this interpretation one would need to consider generic initial conditions. 
This will be analyzed elsewhere.\n\nFinally, recall that when the external magnetic field is removed, spins can have a spontaneous non-zero average $\\langle \\vec{S} \\rangle$, and this vector generates forces on charged particles. Is there a gravitational analogue of this effect? The gravitational force is measured by the connection (\\ref{chr}), entering in the geodesic equation. The external field is the metric. Now, as the metric is removed, the Christoffel connection becomes ${0 \\over 0}$, with the same scaling weight in the numerator and denominator. For a large class of paths the limit is a finite function. Since the only connection available at $g_{\\mu\\nu}=0$ is $C^{\\mu}_{\\ \\nu\\rho}$, it is tempting to conjecture that $\\Gamma^{\\mu}_{\\ \\nu\\rho} \\rightarrow C^{\\mu}_{\\ \\nu\\rho}$, as the metric is removed. In this way, the geodesic equation has a non-trivial limit when the metric vanishes, and particles will feel `forces'. These are not forces in the usual sense because there is no metric. (Although note that a geodesic equation, defined by parallel transport, can be introduced without a metric.) The limit $g_{\\mu\\nu}\\rightarrow 0$ was the key ingredient employed in \\cite{B} for a different approach to understand dark matter as an effect associated to a topological manifold. To make these ideas precise a theory describing the process $g_{\\mu\\nu}\\rightarrow 0$ is necessary. We hope to come back to this interpretation in the future.\n\n\n\n\\section{Conclusions}\n\nDark matter and dark energy have unique properties and their understanding is one of the most crucial problems faced by theoretical physics today. Dark matter does not interact with normal matter and this property has motivated us to look for fields which have this property somehow ``built in\". 
We have explored gravity coupled to a connection field $C^{\\mu}_{\\ \\nu\\alpha}$ with a Born-Infeld action.\n\nThis theory complies with the main background properties normally attributed to dark matter and dark energy. First, the evolution of the scale factor in cosmological models has the right time dependence, interpolating between pressureless matter and a cosmological constant.\n\nAt galactic scales dark energy is less relevant but dark matter still plays an important role. By an approximation valid for distances much smaller than the Hubble radius we have solved the equations of motion for spherical objects and found the expected rotation curves. These curves satisfy the basic asymptotic flatness observed in galaxies, providing new support for this proposal.\n\nWe have left several topics for the future. The stability of this theory and the study of primordial fluctuations are important to determine the CMB anisotropies. This will be reported in \\cite{BFS}. On galactic scales a systematic fit with observational curves is necessary. This issue is presently under study and will be reported in \\cite{BRR}.\n\n\n\n\n\n\n\\section{Appendix. Derivation of the equations of motion}\n\n\nThe fields which are varied in the action (\\ref{I}) are the metric $g_{\\mu\\nu}$ and the connection $\\Gamma^{\\mu}_{\\ \\nu\\rho}$. The equations of motion for the metric follow by a straightforward variation of the action. The result is\n\\begin{equation}\\label{1}\nG_{\\mu\\nu} = \\sqrt{{|g_{\\mu\\nu}- l^2 K_{(\\mu\\nu)}| \\over |g_{\\mu\\nu}| }} \\ g_{\\mu\\alpha}\\left({1 \\over g-l^2 K }\\right)^{\\alpha\\beta} g_{\\beta\\nu} + 8\\pi G \\, T^{{\\scriptscriptstyle (m)}}_{\\mu\\nu}\n\\end{equation}\nThis equation can be drastically simplified by using the equation of motion for the connection $\\Gamma^{\\mu}_{\\ \\nu\\rho}$. This equation is derived in two steps. 
First, since the action only depends on the curvature $K_{\\mu\\nu}$, one can compute the variation using the chain rule,\n\\begin{equation}\n{\\delta I \\over \\delta \\Gamma^{\\mu}_{ \\nu\\rho}} = \\int {\\delta I \\over \\delta K_{(\\alpha\\beta)}} \\, {\\delta K_{(\\alpha\\beta)} \\over \\delta \\Gamma^{\\mu}_{\\nu\\rho}}\n\\end{equation}\nJust as in Eddington's theory \\cite{Eddington}, one finds by direct variation that the combination\n\\begin{equation}\\label{K00}\n\\sqrt{q}{q}^{\\mu\\nu} \\equiv -{1 \\over \\alpha}\\sqrt{|g_{\\mu\\nu} - l^2 K_{\\mu\\nu}|} \\left({1 \\over g-l^2 K }\\right)^{\\mu\\nu}\n\\end{equation}\nsatisfies\n\\begin{equation}\nD_\\rho(\\sqrt{q}{q}^{\\mu\\nu})=0\n\\end{equation}\nwhere $D_\\rho$ is the covariant derivative built with the connection $\\Gamma_{{\\scriptscriptstyle 0}}$. Since $\\Gamma^{\\mu}_{\\ \\nu\\rho}$ is symmetric, this equation implies\n\\begin{equation}\n\\Gamma^{\\mu}_{ \\nu\\rho} = {1 \\over 2} q^{\\mu\\alpha} ( q_{\\alpha\\nu,\\rho} + q_{\\alpha\\rho,\\nu} - q_{\\nu\\rho,\\alpha} )\n\\end{equation}\nWe thus write $\\Gamma^{\\mu}_{\\ \\nu\\rho}$ in terms of $q_{\\mu\\nu}$. The equation (\\ref{K00}) now depends only on $q_{\\mu\\nu}$. Taking the determinant on both sides, and inverting, one readily derives (\\ref{Ke}).\n\nThe final simplification follows by noticing that the right hand side of (\\ref{1}) contains $\\sqrt{q}q^{\\mu\\nu}$. Thus, using (\\ref{K00}), Eq. (\\ref{1}) is transformed into (\\ref{ee}).\n\nThe analysis of these equations is greatly simplified by using the bi-metric formalism \\cite{andy}.\n\n~\n\n~\n\n\n\\section{Acknowledgements}\n\nThe author would like to thank S. Carlip, P. Ferreira, A.Gomberoff, M. Henneaux, A. Reisenegger, D. Rodrigues, N. Rojas, C. Skordis and S. Theisen for useful comments and discussions. 
The author was partially supported by Fondecyt Grants (Chile) \\#1060648 and \\#1051084.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nBerry, in his pioneering work, introduced a fundamentally important concept known as the\ngeometric phase (GP) in addition to the usual dynamic phase accumulated on\nthe wave function of a quantum system, provided that the Hamiltonian varies\ncyclically and adiabatically with multiple parameters \\cite{Berry}. At\npresent, the GP, with extensive generalizations along many directions, has\nwide applications in various branches of physics \\cite{Berry2,Shapere,Bohm}.\n\nRecently, the close relation between GP and quantum phase transition (QPT)\nhas been gradually revealed \\cite{Carollo,Zhu,Hamma} and increasing interest\nhas been drawn to the role of GP in detecting QPT for various many-body\nsystems \\cite{Zhu08,Chen,interests}, which, as a matter of fact, is also a\nnew research field in condensed matter physics \\cite{Sachdev,Sondhi}. QPT\nusually describes an abrupt change in the ground state of a many-body system\ninduced by quantum fluctuations. The phase transition between ordered and\ndisordered phases is accompanied by symmetry breaking, which can also be\ncharacterized by Landau-type order parameters.\n\nOn the other hand, a new type of QPT called topological quantum phase\ntransitions (TQPT) has attracted much attention. The first non-trivial\nexample is the fractional quantum Hall effect \\cite{Tsui,Laughlin}. In the\nlast decade, several exactly soluble spin models with the TQPT, such as the\ntoric-code model \\cite{Kitaev03}, the Wen-plaquette model \\cite\n{Wen-plaquette,Wen08} and the Kitaev model on a honeycomb lattice \\cite\n{Kitaev}, were found. In contrast to the conventional QPT governed by local\norder parameters \\cite{Sachdev}, the TQPT can be characterized only by the\ntopological order \\cite{WenBook}. 
As good examples to illustrate the\nunderlying physics, different methods have been developed to describe the TQPT in\nthe Kitaev honeycomb model \\cite{T. Xiang07,T. Xiang08,H. D. Chen07,H. D.\nChen08,S. Yang,Gu}. In Ref. \\cite{T. Xiang07}, Feng \\textit{et al.} obtained\nthe local order parameters of Landau type to characterize the phase\ntransition by introducing Jordan-Wigner and spin-duality transformations\ninto the Majorana representation of the honeycomb model. Gu \\textit{et al.}\npresented an exciting result for the ground-state fidelity susceptibility \\cite\n{S. Yang,Gu}, which can be used to identify the TQPT from the gapped $A$\nphase with Abelian anyon excitations to the gapless $B$ phase with non-Abelian\nanyon excitations.\n\nQuite recently, Zhu \\cite{Zhu} showed that the ground-state GP in the $XY$\nmodel is non-analytic with a divergent derivative with respect to the field\nstrength at the critical value of the magnetic field. Thereupon, the relation\nbetween the GP and the QPT is established. Nevertheless, much attention has\nbeen paid to the QPT, while little effort has been devoted to the relation between the GP\nand the TQPT. The present paper is devoted to exploiting the\nGP of the Kitaev honeycomb model as an essential tool to establish a\nrelation between the GP and the TQPT and reveal the novel quantum\ncriticality. Unlike the GP in the usual lattice-spin model for the QPT,\nwhich is generated by the single-spin rotation of each lattice-site, the\nsimultaneous rotation of two linked spins in one unit-cell seems crucial to\ndescribe the TQPT in the honeycomb model. The non-analyticity of the GP at the\ncritical points, with a divergent second-order derivative with respect to the\ncoupling parameters, shows that the TQPT is a second-order transition and\ncan be well described by the GP.\n\nIn Sec. II, the ground state wave function and energy spectrum of the Kitaev\nhoneycomb model are presented. 
After introducing a correlated rotation of\ntwo $z$-link spins in each unit-cell, the ground-state GP and its\nderivatives are obtained explicitly in Sec. III. Sec. IV is devoted to\ninvestigating the scaling behavior of the GP. A brief summary and discussion\nare given in Sec. V.\n\n\\section{The Kitaev honeycomb model and spectrum}\n\nThe Kitaev honeycomb model shown in Fig. \\ref{fig1}(a) was first introduced\nto illustrate topologically fault-tolerant quantum-information\nprocessing \\cite{Kitaev03,Kitaev,Nayak}. In this model, each spin located at\na vertex of the lattice interacts with three nearest-neighbor spins through\nthree types of bonds, depending on their directions. By using the Pauli\noperators $\\sigma ^{a}$ $(a=x,y,z)$, the corresponding Hamiltonian is\nwritten as\n\\begin{equation}\nH=-J_{x}\\!\\!\\!\\sum_{x\\mbox{-links}}\\!\\!\\!\\sigma _{j}^{x}\\sigma\n_{k}^{x}-J_{y}\\!\\!\\!\\sum_{y\\mbox{-links}}\\!\\!\\!\\sigma _{j}^{y}\\sigma\n_{k}^{y}-J_{z}\\!\\!\\!\\sum_{z\\mbox{-links}}\\!\\!\\!\\sigma _{j}^{z}\\sigma\n_{k}^{z}, \\label{Kitaev model}\n\\end{equation}\nwhere $j$, $k$ denote the two ends of the corresponding bond, and $J_{a}$\nare coupling parameters. After introducing a special notation $K_{jk}=\\sigma\n_{j}^{a}\\sigma _{k}^{a}$, where the index $a$ depends on the type of link\nbetween sites $j$ and $k$ (so we also write it as $a_{jk}$ in the\nfollowing text for clarity), Hamiltonian (\\ref{Kitaev model}) can be\nrewritten into a compact form\n\\begin{equation}\nH=-\\frac{1}{2}\\sum_{\\langle j,k\\rangle }J_{a_{jk}}K_{jk}.\n\\label{Kitaev modelK}\n\\end{equation}\n\n\\begin{figure}[h]\n\\centering \\vspace{0cm} \\hspace{0cm} \\scalebox{0.8}{\n\\includegraphics{fig1.eps}}\n\\caption{(Color online) (a) Kitaev honeycomb model, in which one spin\ninteracts with three nearest-neighbor spins through three types of bonds,\ndepending on their direction. 
A unit-cell with $x$, $y$ and $z$ links and a\ngraphic representation of Hamiltonian (\\protect\\ref{HM}) with Majorana\noperators are marked by the red dotted line. (b) Phase diagram of the model,\nwhere $A$ phase is gapped and $B$ phase is gapless.}\n\\label{fig1}\n\\end{figure}\n\nIt is known that the Kitaev honeycomb model can be solved exactly by\nintroducing Majorana fermion operators, which are defined as \\cite{Kitaev,S.\nYang}\n\\begin{equation}\n\\sigma ^{x}=ib^{x}c,\\quad \\sigma ^{y}=ib^{y}c,\\quad \\sigma ^{z}=ib^{z}c.\n\\label{MajoranaFermion}\n\\end{equation}\nGenerally, a set of Majorana operators $M=\\left\\{\nb^{x},b^{y},b^{z},c\\right\\} $ can be employed to describe a spin by two\nfermionic modes. They are Hermitian and obey the relations $m^{2}=1$ and\n$mm^{\\prime }=-m^{\\prime }m$ for $m,m^{\\prime }\\in M$ and $m\\neq m^{\\prime }$\n. Moreover, in the Hilbert space with a spin described by two fermionic\nmodes, the relation $b^{x}b^{y}b^{z}c\\left\\vert \\Psi \\right\\rangle\n=\\left\\vert \\Psi \\right\\rangle $\\ must be satisfied to ensure that the same\nalgebraic relations as for $\\sigma^{x}$, $\\sigma ^{y}$, and $\\sigma\n^{z} $ are obeyed \\cite{Kitaev}.\n\nApplying the operators (\\ref{MajoranaFermion}) to the Kitaev honeycomb\nmodel, Hamiltonian (\\ref{Kitaev modelK}) is given by\n\\begin{equation}\nH=\\frac{i}{2}\\sum_{\\langle j,k\\rangle}\\hat{u}_{jk}J_{a_{jk}}c_{j}c_{k},\n\\label{HM}\n\\end{equation}\nwhere $\\hat{u}_{jk}=ib_{j}^{a_{jk}}b_{k}^{a_{jk}}$. Fig. \\ref{fig1}(a) also\nshows the structure of Hamiltonian (\\ref{HM}), from which it can be seen\nthat $\\hat{u}_{jk}=-\\hat{u}_{kj}$. Since these operators $\\hat{u}_{jk}$\ncommute with the Hamiltonian (\\ref{HM}) and with each other, the Hilbert\nspace splits into common eigenspaces of $\\hat{u}_{jk}$ with eigenvalues\n$u_{jk}=\\pm1$. 
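The Majorana representation above can be checked numerically. The following is our own hedged sketch (not code from the paper): it realizes one spin by two fermionic modes, builds the four Hermitian Majorana operators $M=\{b^{x},b^{y},b^{z},c\}$, and verifies that $\sigma^{a}=ib^{a}c$ obeys the Pauli algebra on the physical subspace singled out by $b^{x}b^{y}b^{z}c\left\vert \Psi \right\rangle=\left\vert \Psi \right\rangle$. The particular pairing of the two fermionic modes with $(b^{x},b^{y})$ and $(b^{z},c)$ is an illustrative convention of ours.

```python
import numpy as np

# One spin from two fermionic modes: Jordan-Wigner matrices on the
# 4-dimensional Fock space.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
ann = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode fermionic annihilation

a1 = np.kron(ann, I2)
a2 = np.kron(Z, ann)                       # JW string ensures {a1, a2} = 0

# Hermitian Majorana operators (our pairing convention).
bx = a1 + a1.conj().T
by = -1j * (a1 - a1.conj().T)
bz = a2 + a2.conj().T
c = -1j * (a2 - a2.conj().T)

# Physical subspace: eigenvalue +1 of D = b^x b^y b^z c (it is Hermitian
# with D^2 = 1, so its spectrum is +-1).
D = bx @ by @ bz @ c
w, v = np.linalg.eigh(D)
P = v[:, np.isclose(w, 1.0)]               # 4x2 basis of the D = +1 sector

def restrict(op):
    """Restrict a 4x4 operator to the two-dimensional physical subspace."""
    return P.conj().T @ op @ P

# sigma^a = i b^a c, restricted to the physical subspace.
Sx, Sy, Sz = (restrict(1j * b @ c) for b in (bx, by, bz))
```

On the physical subspace the restricted operators square to the identity, anticommute pairwise, and satisfy $\sigma^{x}\sigma^{y}=i\sigma^{z}$, exactly as required of Pauli matrices.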
Thus, Hamiltonian (\\ref{HM}) is reduced to a quadratic\nMajorana fermionic Hamiltonian\n\\begin{equation}\nH=\\frac{i}{2}\\sum_{\\langle j,k\\rangle}u_{jk}J_{a_{jk}}c_{j}c_{k}. \\label{HU}\n\\end{equation}\n\nWith a Fourier transformation\n\\begin{equation}\nc_{s,\\lambda}=\\frac{1}{\\sqrt{2L^{2}}}\\sum_{\\mathbf{q}}e^{i\\mathbf{q}\\cdot\n\\mathbf{r}_{s}}a_{\\mathbf{q},\\lambda}, \\label{Fourier}\n\\end{equation}\nwhere $s$ denotes a unit cell shown in Fig. \\ref{fig1}(a), $\\lambda $ refers\nto a position inside the cell, $\\mathbf{r}_{s}$ represents the coordinate of the unit\ncell, and $\\mathbf{q}$ are momenta of the system with finite system-size\n$2L^{2}$, and a Bogoliubov transformation\n\\begin{equation}\n\\left\\{\n\\begin{array}{c}\nC_{\\mathbf{q},1}^{\\dag}=\\frac{1}{\\sqrt{2}}a_{-\\mathbf{q},1}-\\frac{1}{\\sqrt{2}\n}A_{\\mathbf{q}}^{\\ast}a_{-\\mathbf{q},2}, \\\\\nC_{\\mathbf{q},2}^{\\dag}=\\frac{1}{\\sqrt{2}}A_{\\mathbf{q}}a_{-\\mathbf{q},1}+\n\\frac{1}{\\sqrt{2}}a_{-\\mathbf{q},2}\n\\end{array}\n\\right.\n\\end{equation}\nwhere $A_{\\mathbf{q}}=\\sqrt{\\epsilon_{\\mathbf{q}}^{2}+\\Delta_{\\mathbf{q}}^{2}}\n\/(\\Delta_{\\mathbf{q}}+i\\epsilon_{\\mathbf{q}})$, Hamiltonian (\\ref{HU}) is\ntransformed into\n\\begin{equation}\nH=\\sum_{\\mathbf{q}}\\sqrt{\\epsilon_{\\mathbf{q}}^{2}+\\Delta_{\\mathbf{q}}^{2}}\n\\left( C_{\\mathbf{q},1}^{\\dagger}C_{\\mathbf{q},1}-C_{\\mathbf{q},2}^{\\dagger\n}C_{\\mathbf{q},2}\\right) \\label{HC}\n\\end{equation}\nwith $\\epsilon_{\\mathbf{q}}=J_{x}\\cos q_{x}+J_{y}\\cos q_{y}+J_{z}$, and\n$\\Delta_{\\mathbf{q}}=J_{x}\\sin q_{x}+J_{y}\\sin q_{y}$. In Hamiltonian (\\ref\n{HC}), the momenta take the values \\cite{S. 
Yang}\n\\begin{equation}\nq_{x\\left( y\\right) }=\\frac{2n\\pi}{L},n=-\\frac{L-1}{2},\\cdots,\\frac{L-1}{2},\n\\label{Values of q}\n\\end{equation}\nwhen the system size is chosen as $N=2L^{2}$ with $L$ being an odd integer.\nThus, the ground and the first-excited states are obtained by\n\n\\begin{align}\n\\left\\vert \\Psi_{0}\\right\\rangle & \\!\\!=\\prod_{\\mathbf{q}}C_{q,2}^{\\dagger\n}\\left\\vert 0\\right\\rangle =\\prod_{\\mathbf{q}}\\frac{1}{\\sqrt{2}}\\left( A_{\n\\mathbf{q}}a_{-\\mathbf{q},1}+a_{-\\mathbf{q},2}\\right) \\left\\vert\n0\\right\\rangle , \\label{wavefunction} \\\\\n\\left\\vert \\Psi_{1}\\right\\rangle & \\!\\!=\\prod_{\\mathbf{q}}C_{q,1}^{\\dagger\n}\\left\\vert 0\\right\\rangle =\\prod_{\\mathbf{q}}\\frac{1}{\\sqrt{2}}\\left( a_{-\n\\mathbf{q},1}-A_{\\mathbf{q}}^{\\ast}a_{-\\mathbf{q},2}\\right) \\! \\left\\vert\n0\\right\\rangle ,\n\\end{align}\nwith the energy eigenvalue\n\\begin{equation}\nE_{0,1}=\\pm\\sum_{\\mathbf{q}}\\sqrt{\\epsilon_{\\mathbf{q}}^{2}+\\Delta _{\\mathbf{\nq}}^{2}}.\n\\label{EP}\n\\end{equation}\n\nIt has been shown that the Kitaev honeycomb model (\\ref{Kitaev model}) has a\nrich phase diagram including a gapped phase with Abelian anyonic excitations\n(called $A$ phase) and a gapless phase with non-Abelian anyonic excitations (\n$B$ phase) \\cite{Kitaev}. In Fig. \\ref{fig1}(b), the two phases $A $ and $B$\nare separated by three transition lines, \\textit{i.e.}, $J_{x}=1\/2$, $\nJ_{y}=1\/2$, and $J_{z}=1\/2$, which form a small triangle surrounding the $B$\nphase. Here, we only plot the energy spectrum (\\ref{EP}) as a function of $\nJ_{z}$ for $J_{x}=J_{y}$ (the vertical dot-and-dash line in Fig. \\ref{fig1}\n(b)) in Fig. \\ref{fig2}. It can be seen from Fig. \\ref{fig2} that the\nenergy-level degeneracy arises or is lifted at certain points, which can be\nregarded as the possible critical points of QPT \\cite{Zhu,Sachdev}. 
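The gapped/gapless structure of the phase diagram can be checked directly from the spectrum (\ref{EP}). The sketch below is our own hedged illustration (not from the paper): it evaluates $\sqrt{\epsilon_{\mathbf{q}}^{2}+\Delta_{\mathbf{q}}^{2}}$ on the finite momentum grid $q_{x(y)}=2n\pi/L$ along the vertical path $J_{x}=J_{y}$, assuming the normalization $J_{x}+J_{y}+J_{z}=1$ of the phase-diagram triangle, and returns the minimal excitation energy.

```python
import numpy as np

def kitaev_gap(jz, L=301):
    """Minimal excitation energy of the Kitaev spectrum on the finite
    momentum grid q = 2*n*pi/L, n = -(L-1)/2, ..., (L-1)/2 (L odd),
    along the path Jx = Jy with Jx + Jy + Jz = 1 assumed."""
    jx = jy = (1.0 - jz) / 2.0
    n = np.arange(-(L - 1) // 2, (L - 1) // 2 + 1)
    q = 2.0 * np.pi * n / L
    qx, qy = np.meshgrid(q, q)
    eps = jx * np.cos(qx) + jy * np.cos(qy) + jz        # epsilon_q
    delta = jx * np.sin(qx) + jy * np.sin(qy)           # Delta_q
    return float(np.sqrt(eps**2 + delta**2).min())
```

With this sketch the gap (essentially) closes for $J_{z}<1/2$, the gapless $B$ phase, and stays open for $J_{z}>1/2$, the gapped $A$ phase, consistent with the transition line $J_{z}=1/2$.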
In Fig.\n\\ref{fig2}(b, c, d), the degenerate points occur in the $B$ phase, but\ndisappear in the $A$ phase as shown in Fig. \\ref{fig2}(f). Moreover, the\nenergy spectrum may have asymptotic degeneracy at the phase diagram edge,\nas seen from Fig. \\ref{fig2}(a), when the system size tends to infinity. The\nnon-analyticity points of the ground state in the $B$ phase are actual\nlevel-crossing points.\n\n\\begin{figure}[t]\n\\centering \\vspace{0cm} \\hspace{0cm} \\scalebox{0.88}{\n\\includegraphics{fig2.eps}}.\n\\caption{Energy spectrum for the parameters $J_{x}=J_{y}$ (a) $J_{z}=0$, the\nspectrum is indeed degenerate for larger system sizes; (b), (c), and (d) in the\n$B$ phase, for $J_{z}<1\/3$, $J_{z}=1\/3$, and $1\/31\/2$) while becomes saltant in $B$ phase ($J_{z}<1\/2$) for the\nsize parameters $L=11$ (dark yellow line)$,$ $33$ (red), and $99$ (blue),\nrespectively. Moreover, all the data fall onto a single curve in $A$ phase,\nwhile the number of saltations increases with the system-size $L$ in $B$\nphase (see insets (1) and (2) of Fig. \\ref{fig4}(a) ). To be specific, the\nnumber of saltations for $L=33$ is three times that for $L=11$ (see Fig.\n\\ref{fig4}(a)), and the same situation occurs in turn for $L=99$ and $33$.\n\n\\begin{figure*}[t]\n\\centering \\vspace{0cm} \\hspace{0cm} \\scalebox{1.0}{\n\\includegraphics{fig4.eps}}\n\\caption{(Color online) (a) $\\protect\\gamma$ curve along the selected\nvariation path $J_{x}=J_{y}$ for $L=11,$ $33,$ and $99$. Both insets reveal\nthe increasing number of saltations in the $B$ phase proportional to the system\nsize parameter $L$. (b) $g_{x}$ and (c) $g_{xx}$ as a function of $J_{z}$\nalong the variation path $J_{x}=J_{y}$ for system size parameters $L=101,303$\nand $909$. 
The two insets are local enlarged pictures, which show the\noscillation in the $B$ phase and the behavior at the critical point,\nrespectively.}\n\\label{fig4}\n\\end{figure*}\n\nIt is meaningful to consider the first-order partial derivative $\ng_{_{\\beta}}=\\partial\\gamma\/\\partial J_{\\beta}$ ($\\beta=x,y$) of the GP\n$\\gamma$. Since the GP $\\gamma$ in Eq. (\\ref{gammafinal}) is symmetric with\nrespect to $J_{x}$ and $J_{y}$, we only need to investigate $g_{x}$ (or\nequivalently $g_{y}$). The variation of $g_{x}$ with respect to $J_{z}$\nalong the selected path of $J_{x}=J_{y}$ from the $B$ phase to the $A$ phase is\nshown in Fig. \\ref{fig4}(b) for different system-size parameters $L=101$, $\n303$ and $909$. It can be seen from Fig. \\ref{fig4}(b) that $g_{x}$\noscillates in the $B$ phase with a frequency (or number of peaks) that is\nproportional to $L$ (see inset (1) of Fig. \\ref{fig4}(b)). This rapid\nvariation of the GP $\\gamma$ (in the $B$ phase) has not been reported before, to\nour knowledge. However, a very similar behavior of the fidelity susceptibility\nin the Kitaev model has been reported \\cite{S. Yang}. On the other hand, the\nvalue of $g_{x}$ at the critical point $J_{z}=1\/2$ increases with the\nsystem size and decays sharply in the $A$ phase (see inset (2) of Fig. \\ref{fig4}\n(b) for details). It is interesting to remark that the saltation of the GP\n$\\gamma $ in the $B$ phase, due to the complex structure of the ground state with\ndegeneracy (Fig. \\ref{fig2} (b),(c),(d)), is not random but regular; in particular, it tends to a regular oscillation above the point\n$J_{z}=1\/3$ (see inset (1) of Fig. \\ref{fig4}(b)). The oscillation\nfrequency depends linearly on the system size.\n\nTo show the non-analyticity of the GP at the critical points explicitly, the\nsecond-order derivative $g_{xx}$ of the GP $\\gamma$ with respect to the\ncoupling parameters is calculated. Fig. 
\\ref{fig4}(c) shows the variation of\n$g_{xx}$ with respect to $J_{z}$ along the variation path of $J_{x}=J_{y}$\nfor different system-size parameters $L=101$, $303$ and $909$. Inset (1)\nreveals that the number of peaks of $g_{xx}$ in the $B$ phase increases with the\nsystem size $L$, similar in behavior to $\\gamma$ and $g_{x}$. The\nsecond-order derivative $g_{xx}$ is divergent at the critical point\n$J_{z}=1\/2$, as shown in inset (2), indicating that the TQPT of the Kitaev\nhoneycomb model is a second-order transition, while the QPT of the $XY$ spin\nchain \\cite{Zhu,Zhu08} and the Dicke model \\cite{Zhu08,Chen} has been shown\nto be a first-order transition, with a divergent first-order derivative\nof the GP. We conclude that the non-analytic GP $\\gamma$ can very well\ndescribe the TQPT in terms of the Landau phase-transition theory.\n\nSimilarly, we can also choose the variation path as $J_{z}=1\/4$ (dashed line\nin Fig. \\ref{fig1}(b)) with two critical points $J_{x}=1\/4$ and $J_{x}=1\/2$.\nQualitatively similar results are shown in Fig. \\ref{fig5} for different\nsize parameters $L=101$ (red line), $303$ (blue) and $707$ (dark),\nrespectively, where the $\\gamma$-plot is a smooth curve in the $A$ phase for\n$J_{x}<1\/4$ or $J_{x}>1\/2$ and becomes saltant in $B$ phase when \n1\/4J_{z}^{C})$ is respectively plotted in Fig. \\ref{fig6}(a) and (b) for\ndifferent system-size parameters $L=101$, $303$ and $909$. The corresponding\nexponents $\\alpha _{\\gamma }^{-}=-0.99934\\pm 0.00033$ (left-hand side) and\n$\\alpha _{\\gamma }^{+}=-0.83538\\pm 0.00008$ (right-hand side) are obtained\nfrom Fig. \\ref{fig6}(a) and (b). Similarly, $g_{x}$ and $g_{xx}$ as a\nfunction of $|J_{z}-J_{z}^{C}|$ are plotted in Fig. 
\\ref{fig6}(c), (d) and\n(e), (f) for the gapless and gapped phases respectively, from which the\ncritical exponents $\\alpha _{g_{x}}^{-}=-0.17529\\pm 0.02056$, $\\alpha\n_{g_{xx}}^{-}=1.32523\\pm 0.01719$ (left) for the $B$ phase and $\\alpha\n_{g_{x}}^{+}=-0.48375\\pm 0.00033$, $\\alpha _{g_{xx}}^{+}=0.60687\\pm 0.00526$\n(right) for the $A$ phase are found. The negative exponents $\\alpha\n_{\\gamma }^{\\pm }$, $\\alpha _{g_{x}}^{\\pm }$ and the positive $\\alpha\n_{g_{xx}}^{\\pm }$ indicate that $\\gamma $ and $g_{x}$ are finite while\n$g_{xx}$ is divergent at the critical point in the thermodynamic limit. Thus\nthe TQPT is a second-order phase transition characterized by the GP $\\gamma $\n.\n\n\\begin{figure}[h]\n\\centering \\vspace{0cm} \\hspace{0cm} \\scalebox{1.0}{\n\\includegraphics{fig6.eps}}\n\\caption{(Color online) Finite-size scaling analysis of the power-law\ndivergence for (a) GP $\\protect\\gamma$, (c) $g_{x}$, (e) $g_{xx}$ as a\nfunction of $|J_{z}-J_{z}^{C}|$ in the vicinity of the critical point with\nsystem sizes $L=301,901$ and $1501$ on the left-hand side ($J_{z}<J_{z}^{C}$) and on the right-hand side ($J_{z}>J_{z}^{C}$) respectively.}\n\\label{fig6}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering \\vspace{0cm} \\hspace{0cm} \\scalebox{0.35}{\n\\includegraphics{fig7.eps}}\n\\caption{(Color online) $L$-dependence of $J_{z}^{C}-J_{z}^{m}$ in\nlogarithmic coordinates for $L=901,951,...,1901$.}\n\\label{fig7}\n\\end{figure}\n\nThe position of the maximum value of $\\gamma $, denoted by $J_{z}^{m}$, may not be\nlocated exactly at the critical point $J_{z}^{C}=1\/2$, but tends to it in\nthe thermodynamic limit $L\\rightarrow \\infty $, which is regarded as the\npseudocritical point \\cite{Barber}. The $J_{z}^{C}-J_{z}^{m}$ versus\nsystem-size $L=901,951,...,1901$ in logarithmic coordinates is\nplotted in Fig. \\ref{fig7}, which is a straight line of slope $-0.37851\\pm\n0.00226$. 
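The slope of such a log-log plot is obtained from a straight-line least-squares fit. The sketch below is our own minimal illustration of the procedure: the data are synthetic, generated with an assumed exponent $-0.379$ and an arbitrary prefactor purely to show how the exponent is recovered; they are not the measured values of the paper.

```python
import numpy as np

# Synthetic pseudocritical shifts J_z^C - J_z^m for a set of system sizes,
# generated from an assumed power law 0.8 * L**(-0.379) (illustrative only).
L = np.array([901, 951, 1001, 1101, 1301, 1501, 1701, 1901], dtype=float)
shift = 0.8 * L ** -0.379

# Straight-line fit in logarithmic coordinates: the slope is the exponent.
slope, intercept = np.polyfit(np.log(L), np.log(shift), 1)
```

For noiseless power-law data the fit recovers the input exponent exactly; for measured data the residual scatter gives the quoted uncertainty on the slope.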
This means that $J_{z}^{m}$ tends toward the critical point $\nJ_{z}^{C}$ following the power-law decay\n\\begin{equation}\nJ_{z}^{C}-J_{z}^{m}\\propto L^{-0.37851}.\n\\end{equation}\n\nOn the other hand, the maximum value of $\\gamma $ at $J_{z}=J_{z}^{m}$ for a\nfinite-size system behaves as\n\\begin{equation}\n\\gamma ({J_{z}^{m}})\\propto {L}^{\\mu _{\\gamma }},\n\\end{equation}\nwhich is shown in the inset of Fig. \\ref{fig8}(a) with a straight line in\nlogarithmic coordinates. The corresponding size-exponent is given by $\\mu\n_{\\gamma }=0.01148\\pm 0.00054$.\n\n\\begin{figure*}[t]\n\\centering \\vspace{0cm} \\hspace{0cm} \\scalebox{1.0}{\n\\includegraphics{fig8.eps}}\n\\caption{(Color online) (a) $F_{\\protect\\gamma}$, (b) $F_{g_x}$, and (c) $\nF_{g_{xx}}$ as a function of $L^{\\protect\\nu}\\left( J_{z}-J_{z}^{m}\\right)$\nfor $L=401,601,...,1601$. All the data fall onto a single curve,\nrespectively. The three insets in turn show the variation of $\\protect\n\\gamma(J_{z}^{m})$, $g_{x}(J_{z}^{m})$, and $g_{xx}(J_{z}^{m})$ with respect\nto the system size parameters $L=101,151,...,1901$.}\n\\label{fig8}\n\\end{figure*}\n\nSince the GP $\\gamma$ around its maximum position $J_{z}^{m}$ can be written\nas a simple function of $J_{z}^{m}-J_{z}$, it is possible to collapse all the\ndata by a universal scaling function $F_{\\gamma}=\\left( {\\gamma\n}({J_{z}^{m}})-\\gamma\\right) \/\\gamma$ versus $L^{\\nu_{\\gamma}}\\left(\nJ_{z}-J_{z}^{m}\\right) $, namely,\n\\begin{equation}\nF_{\\gamma}=f\\left[ L^{\\nu_{\\gamma}}\\left( J_{z}^{m}-J_{z}\\right) \\right] ,\n\\end{equation}\nwhere $\\nu_{\\gamma}$ is a critical exponent that governs the divergence of\nthe correlation length. The values of $F_{\\gamma}$ for different system-size\nparameters $L$ fall onto a single curve as shown in Fig. 
\\ref{fig8}(a), from\nwhich we can extract the critical exponent $\\nu_{\\gamma}=-0.015$ numerically.\n\nIn fact, according to the scaling ansatz of a finite system \\cite{Barber,Lin}\n, the critical exponent $\\nu $ can be determined by the relation $\\nu =\\mu\n\/\\alpha $. In terms of this relation, the critical exponents in the $B$ and $A$\nphases are found as ${\\nu }_{\\gamma }^{-}=-0.01149$ and ${\\nu }_{\\gamma\n}^{+}=-0.01374$, which are consistent with the numerical result $\\nu\n_{\\gamma }$ extracted from Fig. \\ref{fig8}(a). The inset of Fig. \\ref{fig8}(b)\nis a plot of the maximum value of $g_{x}({J_{z}^{m}})$ as a function of $L$,\n\\begin{equation}\ng_{x}({J_{z}^{m}})\\propto {L}^{\\mu _{g_{x}}}\n\\end{equation}\nin logarithmic coordinates, from which the size exponent $\\mu\n_{g_{x}}=0.01612\\pm 0.00094$ is found. The universal scaling function\n\\begin{equation}\nF_{g_{x}}=f\\left[ L^{\\nu _{g_{x}}}(J_{z}^{m}-J_{z})\\right]\n\\end{equation}\nis shown in Fig. \\ref{fig8}(b). We have the numerical value $\\nu\n_{g_{x}}=-0.040$ and the results determined by the relation $\\nu =\\mu\n\/\\alpha $ that $\\nu _{g_{x}}^{-}=-0.09196$, $\\nu _{g_{x}}^{+}=-0.03332$. The\ndeviation between $\\nu _{g_{x}}$ and $\\nu _{g_{x}}^{-}$ may be due to the rapid\noscillation in the gapless $B$ phase. Similarly, the critical exponents of $\ng_{xx}$ can be obtained from Fig. \\ref{fig8}(c) as $\\mu _{g_{xx}}=0.54570\\pm\n0.00237$, $\\nu _{g_{xx}}^{-}=0.410$ and $\\nu _{g_{xx}}^{+}=0.908$, and the\nresults determined by $\\nu =\\mu \/\\alpha $ are $\\nu _{g_{xx}}^{-}=0.41178$\nand $\\nu _{g_{xx}}^{+}=0.89920$ respectively.\n\n\\section{Summary and discussion}\n\nWe demonstrate that the ground-state GP generated by the correlated rotation\nof two linked spins in a unit-cell can indeed be used to characterize the\nTQPT for the Kitaev honeycomb model. 
The non-analytic GP with a divergent\nsecond-order derivative at the critical points shows that the TQPT is a\nsecond-order phase transition, different from the $XY$ spin-chain \cite\n{Zhu,Zhu08}, in which the first-order derivative of the GP is divergent, and the\nLMG model \cite{Zhu,Zhu08}, in which the GP itself is shown to be divergent.\nMoreover, it is found that the GP is zigzagging with oscillating derivatives\nin the gapless $B$ phase, but is a smooth function in the gapped $A$ phase.\nThe scaling behavior of the non-analytic GP in the vicinity of the critical\npoint is shown to exhibit universality, with negative exponents for both\n$\gamma$ and $g_{x}$, while the positive exponent of $g_{xx}$ indicates the\ncharacteristic of a second-order phase transition.\n\n\section*{Acknowledgments}\n\nThis work is supported by the NNSF of China under Grant Nos. 11075099 and\n11074154, ZJNSF under Grant No. Y6090001, and the 973 Program under Grant\nNo. 2006CB921603.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\t\label{section1}\n\t\n\begin{figure}\t\n\t\begin{center}\n\t\t\includegraphics[width=90mm]{Figure1.png}\n\t\t\caption{Panel (a): Spatial positions of the stars in our sample, with the tidal radius ($r_t=10$\arcmin) of M~54 over-plotted with a solid line. The open red symbols designate N-rich stars (the diamond symbol refers to a field star, while the open circle highlights the extra-tidal member of M~54). The lime circles designate the M~54 population analyzed in this work, while the black plus symbols designate the stars analyzed by \citet{Nataf2019}. The empty grey `star' symbols designate the potential Sgr population from \citet{Hayes2020}. The two concentric circles indicate 5 $r_{t}$ and 7 $r_{t}$ for reference. 
Panel (b): \\textit{Gaia} EDR3 proper motions of stars that have been associated with the Sgr stream: blue symbols for the \\citet{Antoja2020} stars and open black `star' symbols for \\citet{Hayes2020} stars. The orbital path of Sgr is shown by the dotted (backward) and solid (forward) purple line in panels (a) and (b), with the thick and thin lines showing the central orbit, and one hundred ensemble of orbits that shows the more probable regions of the space, which are crossed more frequently by the simulated orbit, respectively. Panel (c): Color magnitude diagram from \\textit{Gaia} EDR3 photometry of our sample. The symbols are the same as in panels (a) and (b), except the white circles, which denotes the M~54 members from \\textit{Gaia} EDR3, selected on proper motions and within 3$\\arcmin$ from the cluster center. Panel (d): Radial velocities versus [Fe\/H] ratios determined from APOGEE-2\/\\texttt{ASPCAP} (black symbols) and our [Fe\/H] ratio determinations from \\texttt{BACCHUS} (green and red symbols) in the field around M~54. 
The [Fe\/H] APOGEE-2\/\texttt{ASPCAP} determinations have been systematically offset by $\sim$0.11 dex in order to compare with our [Fe\/H] \texttt{BACCHUS} determinations, as suggested in \citet{Fernandez-Trincado2020c}.}\n\t\t\label{Figure1}\n\t\end{center}\n\end{figure}\t\n\t\nThe Sagittarius (Sgr) dwarf spheroidal (dSph) galaxy is one of the closest massive satellites of the Milky Way (MW) \citep{Ibata1994}, and has yielded a wealth of observational evidence of ongoing accretion by the MW in the form of persistent stellar debris and tidal streams discovered by \citet{Mateo1996}, and extensively studied with photometric and spectroscopic observations over a huge range of distances ($\sim$10--100 kpc) \citep[see, e.g.,][]{Ibata2001, deBoer2015} using different stellar tracers--including carbon stars \citep{Totten1998}, the first all-sky map of the tails using 2MASS M-giants \citep{Majewski2003}, red clump stars \citep{Correnti2010}, RR Lyrae stars \citep{Newberg2003, Ramos2020}, and CN-strong stars \citep{Hanke2020}, among other tracers, usually in small patches along the stream (see, e.g., \citealt{Li2019}). These studies have been followed up by numerical studies \citep[see, e.g.,][]{Law2005, Vasiliev2020}, as well as by using precise astrometry from the \textit{Gaia} second data release \citep[Gaia DR2;][]{Brown2018}, based on proper motions alone \citep{Antoja2020}. Its proximity provides a unique laboratory to study accretion in detail, through the tidally stripped streams that outflow from the Sgr system \citep[][]{Hasselquist2017, Hasselquist2019, Hayes2020}.\n\t\t\nAs a natural result of such an accretion event, there is a claim in the literature that not only field stars but also GCs have been accreted \citep[see, e.g.,][]{Massari2019}. Some have been speculated to be lost in the disruption process, and may lie immersed in the Sgr stream. 
Candidates include: M~54, Terzan 7, Arp 2, Terzan 8, Pal 12, Whiting 1, NGC 2419, NGC 6534, and NGC 4147 \citep[e.g.,][]{Law2010a, Bellazzini2020}, but a firm connection is still under debate \citep[e.g.,][]{Villanova2016, Tang2018, Huang2020, Yuan2020}. In this context, ``chemical tagging'' \citep[e.g.,][]{Freeman2002}, which is based on the principle that the photospheric chemical compositions of stars reflect the site of their formation, is a promising route for investigation of this question. \n\nWhile the abundances of light and heavy elements for individual stars in GCs have been widely explored \citep[e.g.,][]{Pancino2017, Meszaros2020}, little is known about these abundances in disrupted GCs likely associated with the closest dwarf galaxies, such as Sgr \citep{Karlsson2012}. Although some evidence for chemical anomalies has been detected towards the inner bulge and halo of the MW \citep[see, e.g.,][]{Fernandez-Trincado2016b, Recio-Blanco2017, Schiavon2017, Fernandez-Trincado2017} and Local Group dwarf galaxies \citep[see, e.g.,][]{Fernandez-Trincado2020b}, suggesting the presence of GCs in the form of disrupted remnants, alternative ways to produce these stars have been recently discussed \citep{Bekki2019}.\n\nThis paper is outlined as follows. The high-resolution spectroscopic observations are discussed in Section \ref{section2}. Section \ref{section3} describes the sample associated with M~54, including a comparison with data from the literature. Section \ref{section4} presents our estimated stellar parameters and derived chemical-abundance determinations. 
Section \\ref{section5} discusses the results, and our concluding remarks are presented in Section \\ref{section6}.\n \n\\section{Data}\n\\label{section2}\n\n We make use of the internal dataset (which includes all data taken through March 2020) of the second-generation Apache Point Observatory Galactic Evolution Experiment \\citep[APOGEE-2;][]{Majewski2017}, which includes the first observations from the Ir\\'en\\'ee du Pont 2.5-m Telescope at Las Campanas Observatory \\citep[APO-2S;][]{Bowen1973} in the Southern Hemisphere (Chile), and more observations from the Sloan 2.5-m Telescope at Apache Point Observatory \\citep[APO-2N;]{Gunn2006} in the Northern Hemisphere (New Mexico). The survey operates with two nearly identical spectrographs \\citep{Eisenstein2011, Wilson2012, Wilson2019}, collecting high-resolution ($R\\sim22,000$) spectra in the near-infrared textit{H}-band (1.5145--1.6960 $\\mu$m, vacuum wavelengths). This data set provides stellar parameters, chemical abundances, and radial velocity (RV) information for more than 600,000 sources, which include $\\sim$437,000 targets from the sixteenth data release \\citep[DR16;][]{Ahumada2020} of the fourth generation of the Sloan Digital Sky Survey \\citep[SDSS-IV;][]{Blanton2017}. APOGEE-2 target selection is described in full detail in \\citet{Zasowski2017} (APOGEE-2), Santana et al. (in prep.) (APO-2S), and Beaton et al. (in prep.) (APO-2N). \n \n APOGEE-2 spectra were reduced \\citep{Nidever2015} and analyzed using the APOGEE Stellar Parameters and Chemical Abundance Pipeline \\citep[ASPCAP;][]{Garcia2016, Holtzman2015, Holtzman2018, Henrik2018, Henrik2020}. The model grids for APOGEE-2 internal dataset are based on a complete set of \\texttt{MARCS} stellar atmospheres \\citep{Gustafsson2008}, which now extend to effective temperatures as low as 3200 K, and spectral synthesis using the \\texttt{Turbospectrum} code \\citep{Plez2012}. 
The APOGEE-2 spectra provide access to more than 26 chemical species, which are described in \citet{Smith2013}, \citet{Shetrone2015}, \citet{Hasselquist2016}, \citet{Cunha2017}, and \citet{Holtzman2018}. \n\n\subsection{M~54 field}\n\label{section3}\n\nThe APOGEE-2 field toward M~54 was previously examined in \citet{Meszaros2020} based on public DR16 spectra. In that work, 22 stars were identified as potential members of M~54 based on the APOGEE-2 radial velocities \citep{Nidever2015}; i.e., stars with RV within $3\sigma _{RV,cluster}$, metallicity within $\pm$0.5 dex around the cluster average, proper motion from the \textit{Gaia} Early Data Release 3 \citep[\textit{Gaia} EDR3;][]{Brown2020} within 2.5$\sigma$ around the cluster average proper motion, and located inside the cluster tidal radius, $r_{t}\lesssim10$ arcmin \citep[][2010 edition]{Harris1996}, were classified as potential members. However, only 7 out of 22 stars were spectroscopically examined with the \texttt{BACCHUS} code \citep{Masseron2016}, since only these stars achieved a signal-to-noise ratio (S\/N$>$70) sufficient to provide reliable abundance determinations. \n\nThe post-APOGEE DR16 dataset provides incremental visits toward M~54, which has allowed us to increase the signal-to-noise ratio for 20 out of the 22 potential cluster members. As a result, nitrogen, titanium, and nickel abundances can now be obtained from the stronger absorption features (as shown for the $^{12}$C$^{14}$N lines in Figure \ref{Figure4}), and other chemical species can also be studied. \n\n\citet{Nataf2019}, using APOGEE-2 DR14 data \citep{Abolfathi2018} and abundance determinations from the \texttt{Payne} pipeline \citep{Ting2019}, have catalogued eight possible members of M~54. Two of those objects (2M18544275$-$3029012 and 2M18550740$-$3026052) were included in our study. The remaining six stars were rejected from our analysis for the following reasons. 
Six objects in \\citet{Nataf2019} were found to have low S\/N ($<$70) spectra, resulting in very uncertain CNO abundance ratios for many chemical species, since the molecular lines ($^{16}$OH, $^{12}$C$^{16}$O, and $^{12}$C$^{14}$N) are very weak. Secondly, 6 out of the 8 objects in \\citet{Nataf2019} exhibit [Fe\/H] $> -1.1$, and were recently classified as Sgr stars \\citep[see, e.g.,][]{Hayes2020}, which make them unlikely members of M~54.\n\nIn this study, we make use of the more recent spectra to examine the chemical composition of added stars to the abundance average of M~54. As in \\citet{Meszaros2020}, we also limit our discussion only to stars with S\/N$>70$.\n\n\\subsection{Extra-tidal stars}\n\nWe also report on the serendipitous discovery of two nitrogen-enhanced (N-rich) metal-poor stars beyond the tidal radius of M~54, as shown in pane (a) of Figure \\ref{Figure1}. APOGEE-2 stars in the stream$+$core Sagittarius (Sgr) system \\citep[see, e.g.,][]{Hasselquist2017, Hasselquist2019, Hayes2020} are highlighted as black open `star' symbols in pane (a) of Figure \\ref{Figure1}, while potential star members (blue symbols) of the stream$+$core Sgr system from \\citet[][]{Antoja2020} are also displayed in panel (a) of Figure \\ref{Figure1}. It is important to note that the [Fe\/H] abundance of APOGEE-2 Sgr stars are provided by the \\texttt{ASPCAP} pipeline \\citep[see][]{Hasselquist2017, Hasselquist2019, Hayes2020}. 
In order to compare with our [Fe\/H] determinations, an offset of $\sim$0.11 dex was applied to \texttt{ASPCAP} metallicities in panel (d) of Figure \ref{Figure1}, as suggested in \citet{Fernandez-Trincado2020c}.\n\nPanels (a) to (d) of Figure \ref{Figure1} reveal that one (2M18565969$-$3106454) of the newly discovered N-rich stars meets the minimum criterion to be considered a potential extra-tidal star which has likely escaped the cluster potential, while the second N-rich star (2M18533777$-$3129187) has physical properties that are clearly offset from the M~54 population. In particular, this star is brighter than the typical population of M~54 (see panel (c) in Figure \ref{Figure1}), and both its proper motions and RV differ from the nominal proper motion and RV of the cluster, as shown in panels (b) and (d) of Figure \ref{Figure1}. It is likely that 2M18533777$-$3129187 is a foreground field star (hereafter N-rich field star). \n\n\n\section{Stellar parameters and chemical-abundance determinations}\n \label{section4}\n\nThe chemical analysis is very similar to that carried out by \citet[][]{Fernandez-Trincado2019a, Fernandez-Trincado2019b, Fernandez-Trincado2019c, Fernandez-Trincado2019d, Fernandez-Trincado2020a, Fernandez-Trincado2020b, Fernandez-Trincado2020c, Fernandez-Trincado2020d, Fernandez-Trincado2021a}. The stellar parameters ($T_{\rm eff}$, $\log$ \textit{g}, and a first guess of the metallicity) for the 20 cluster members with S\/N$>$70 were extracted from \citet{Meszaros2020}, while we adopt the atmospheric parameters from the uncalibrated post-APOGEE DR16 values for the two stars beyond the cluster tidal radius. The elemental abundances and final errors in [Fe\/H] and [X\/Fe], as well as the astrometric and kinematic properties of our sample, are listed in Tables \ref{Table1}, \ref{Table11}, and \ref{Table2}, respectively. 
\n\nA consistent chemical-abundance analysis was then carried out with the \texttt{BACCHUS} code \citep{Masseron2016}, from which we obtained the metallicities from Fe I lines, and abundances for twelve other chemical species belonging to the light- (C, N), $\alpha$- (O, Mg, Si, Ca, and Ti), Fe-peak (Ni), odd-Z (Al, K) and \textit{s}-process (Ce, Nd) elements.\n\n \begin{figure}\t\n \t\begin{center}\n \t\t\includegraphics[width=88mm]{Figure2.png}\n \t\t\includegraphics[width=92mm]{Figure3.png}\n \t\t\caption{{\bf \texttt{BACCHUS} elemental abundances}. Panel (a): The observed [X\/H] and [Fe\/H] abundance-density estimation (violin representation) of M~54 stars, and the observed abundance ratios of newly identified N-rich stars. The extra-tidal star from M~54 and a field star are highlighted with a black open circle and a diamond, respectively. Each violin indicates with horizontal lines the median and limits of the distribution. The lime and dark violet violin representations refer to the abundance ratios of 20 stars (this work) and 7 stars from \citet{Meszaros2020}, respectively. Panels (b)--(e): Distributions of light- (C, N), $\alpha$- (Mg, Si) and odd-Z (Al) elements in different abundance planes. In each panel, the planes [Al\/Fe]--[Mg\/Fe], [N\/Fe]--[C\/Fe], [Al\/Fe]--[Si\/Fe], [Si\/Fe]--[Mg\/Fe] are shown, respectively, for GCs from \citet{Meszaros2020}. The black dotted line at [Al\/Fe] $=+0.3$ indicates the separation of FG and SG stars as proposed in \citet{Meszaros2020}. The distribution of M~54 stars (lime squares) analyzed in this work is overlaid. The black open circle and diamond refer to the extra-tidal and field N-rich star, respectively. 
The plotted error bars show the typical abundance uncertainties.}\n \t\t\label{Figure2}\n \t\end{center}\n \end{figure}\t\n \n \section{Results and discussion}\n \label{section5}\n \n Panel (a) of Figure \ref{Figure2} summarizes the chemical enrichment seen in the M~54 stars analyzed in this work, and compares it to the \citet{Meszaros2020} determinations. The chemical composition of the two newly identified N-rich stars beyond the cluster tidal radius is also shown in the same figure. Overall, the chemical abundances of M~54 based on the added cluster stars are within the typical errors, and do not affect the science results presented in \citet{Meszaros2020}, while the two external N-rich stars share chemical patterns similar to the M~54 population.\n \n For M~54, we find a mean metallicity $\langle$[Fe\/H]$\rangle = -1.30\pm0.12$, which agrees well with \citet{Meszaros2020}\footnote{Note that here, and for the abundances described below, the number following the average abundance represents the one-sigma dispersion, not the error in the mean.}. The spread in [Fe\/H] increased from 0.04 to 0.12 dex, but it is still smaller than that reported in \citet{Carretta2010}. Even though the measured scatter is larger than that reported by \citet{Meszaros2020}, it does not seem to indicate the presence of a significant spread in [Fe\/H], and is similar to that observed in Galactic globular clusters (GCs) at similar metallicity, such as M~10 \citep[see, e.g.,][]{Meszaros2020}. Nickel (an element that belongs to the Fe-group) exhibits a flat distribution as a function of [Fe\/H], similar to that observed in \citet{Carretta2010}, and at odds with that observed in Sgr stars. \n \nRegarding the other chemical species, we find excellent agreement with the values provided by \citet{Meszaros2020}, as can be seen in panel (a) of Figure \ref{Figure2}, with the main difference that the added stars introduce a larger star-to-star scatter than previously measured. 
M~54 exhibits a modest enhancement in $\alpha$-elements, with mean values for [O\/Fe], [Mg\/Fe], [Si\/Fe], [Ca\/Fe], and [Ti\/Fe] which are similar to what is seen in halo GCs: $\langle$[O\/Fe]$\rangle = +0.64\pm0.36$ (14 stars); $\langle$[Mg\/Fe]$\rangle = +0.18\pm0.11$ (18 stars); $\langle$[Si\/Fe]$\rangle = +0.26\pm0.10$ (20 stars); $\langle$[Ca\/Fe]$\rangle = +0.25\pm0.07$ (16 stars); and the newly measured $\langle$[Ti\/Fe]$\rangle = +0.21\pm0.21$ (16 stars), indicating a fast enrichment provided by supernovae (SNe) II events. The mean values are in good agreement with \citet{Meszaros2020}, with the exception of oxygen, which displays the larger star-to-star spread expected in likely second-generation stars. \n\nWe also find that the [O\/Fe], [Mg\/Fe], and [Si\/Fe] ratios are almost flat as a function of the metallicity, while the [Ca\/Fe] and [Ti\/Fe] ratios slightly increase as [Fe\/H] increases, similar to the behaviour found by \citet{Carretta2010}. On the contrary, the $\alpha$-element trends observed in Sgr stars \citep[see, e.g.,][]{Carretta2010, McWilliam2013, Hasselquist2017, Hasselquist2019} differ from those seen in the population of M~54. Overall, the $\alpha$-elements in the cluster are higher than seen in Sgr stars. In conclusion, the measured $\alpha$-enrichment in this work supports the previous hypothesis that the $\alpha$-elements in M~54 stars formed before the typical $e$-folding time for SNe Ia contributing their ejecta to the gas pool \citep[e.g.,][]{Carretta2010}. \n\nWe also found that some stars in M~54 appear to be quite Mg poor, with strong enrichment in aluminum and nitrogen, providing further evidence for the presence of second-generation stars in M~54, and the signature of very high temperatures achieved during H-burning \citep[e.g.,][]{Carretta2010, Meszaros2020}. 
The odd-Z elements (Al and K) in M~54 exhibit an average $\langle$[Al\/Fe]$\rangle = +0.14\pm0.37$ (19 stars) and $\langle$[K\/Fe]$\rangle = +0.15\pm0.18$ (17 stars), with a clear anti-correlation in Al-Mg, as can be seen in panel (b) of Figure \ref{Figure2}, with moderate Mg depletions related to the enrichment in Al abundances, as the result of the conversion of Mg into Al during the Mg-Al cycle \citep[e.g.,][]{Carretta2010, Denissenkov2015, Renzini2015, Pancino2017}. This pattern is evidently not present in the Sgr stars, where, on the contrary, \texttt{ASPCAP} Mg and Al abundances are positively correlated with each other \citep[see, e.g.,][]{Hasselquist2017, Hasselquist2019, Hayes2020}. \n\nWe derived average abundances for C and N in M~54 of $\langle$[C\/Fe]$\rangle = -0.36\pm0.25$ (13 stars) and $\langle$[N\/Fe]$\rangle = +1.12\pm0.48$ (17 stars). Most of the stars in M~54 are C deficient ([C\/Fe]$\lesssim$+0.3) and N enhanced ([N\/Fe]$>+0.5$), but they do not exhibit the typical N-C anti-correlation (see panel (c) of Figure \ref{Figure2}) seen in other GCs at similar metallicity \citep[e.g.][]{Meszaros2020}, most probably due to the lack of stars with low nitrogen abundances. On the contrary, an apparently continuous distribution of N abundances is present in M~54. This result indicates the prevalence of the multiple-population phenomenon in M~54, as previously suggested in the literature \citep{Carretta2010, Milone2017, Sills2019, Meszaros2020}.\n\nAdditionally, we do not find any evidence for the presence of the K-Mg anti-correlation in M~54, as has been suggested to be present in a few Galactic GCs at similar metallicity \citep{Meszaros2020}. 
Furthermore, a Si-Al correlation is slightly evident in M~54, as shown in panel (d) of Figure \ref{Figure2}, and the cluster has a stubby Mg-Si distribution (see panel (e) of Figure \ref{Figure2}), which is an indication of $^{28}$Si production resulting from a secondary leakage in the main Mg-Al cycle; this feature is instead absent in the Sgr stars.\n \nFor the elements produced by neutron(\textit{n})-capture processes (Ce II and Nd II), we find, on average, $\langle$[Ce\/Fe]$\rangle = +0.18\pm0.13$ (10 stars) and [Nd\/Fe]$=+0.44$ (1 star). Overall, M~54 exhibits a modest enrichment in \textit{s}-process elements, with a few stars as enhanced as $+0.4$, similar to that observed in Galactic GC stars at similar metallicity \citep[see, e.g.,][]{Meszaros2020}, suggesting that it is possible that the \textit{s}-process enrichment has been produced by a different source than the progenitor of the Mg-Al anti-correlations, possibly by low-mass asymptotic giant branch stars. Lastly, we find that the [Ce\/Fe] ratios in M~54 are almost flat as a function of metallicity. Unfortunately, Nd II is measured in only one star, which has been found to exhibit a modest enhancement, consistent with a moderate enrichment of \textit{s}-process elements.\n\nFurthermore, we report the serendipitous discovery of two N-enhanced stars identified within $\sim$7$r_t$ from M~54, as shown in panel (a) of Figure \ref{Figure1}. Panel (a) of Figure \ref{Figure2} shows the collection of [X\/Fe] and [Fe\/H] abundance ratios for the two newly identified N-rich stars beyond the tidal radius of M~54. Both stars exhibit very similar chemical-abundance patterns to those seen in the population of M~54. A plausible explanation is that both stars were previously members of M~54, from which they were ejected. However, this possibility seems unlikely for one of these extra-tidal stars (2M18533777$-$3129187), which was ruled out as a possible member of M~54. 
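The membership assessment used throughout this discussion rests on the cuts listed in Sect. 3: RV within $3\sigma$ of the cluster mean, [Fe\/H] within $\pm$0.5 dex of the cluster average, proper motion within 2.5$\sigma$ of the cluster mean, and a position inside the tidal radius ($r_t\lesssim10$ arcmin). A minimal sketch of such a selection is given below; the function name, argument layout, and the numerical cluster parameters in the example are illustrative placeholders, not values taken from this work:

```python
import numpy as np

def m54_membership_mask(rv, feh, pmra, pmdec, r_arcmin,
                        rv_mean, rv_sigma, feh_mean,
                        pm_mean, pm_sigma, r_t=10.0):
    """Boolean membership mask combining the four cuts of Sect. 3."""
    ok_rv  = np.abs(rv - rv_mean) < 3.0 * rv_sigma       # RV within 3 sigma
    ok_feh = np.abs(feh - feh_mean) < 0.5                # [Fe/H] within +/-0.5 dex
    dpm    = np.hypot(pmra - pm_mean[0], pmdec - pm_mean[1])
    ok_pm  = dpm < 2.5 * pm_sigma                        # proper motion within 2.5 sigma
    ok_r   = r_arcmin < r_t                              # inside the tidal radius
    return ok_rv & ok_feh & ok_pm & ok_r
```

A star failing any one of the four cuts (as 2M18533777$-$3129187 does in RV, proper motion, and brightness) is excluded from the candidate member list.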
\n\nAs can be appreciated from inspection of panels (a) to (d) of Figure \ref{Figure1}, the current position of 2M18533777$-$3129187 does not match the kinematic and astrometric properties \citep[e.g.,][]{Antoja2020, Hayes2020} of Sgr+M~54 stars, nor the orbital path of Sgr\footnote{The Sgr orbit was computed with the \texttt{GravPot16} model, \url{https:\/\/gravpot.utinam.cnrs.fr}, by adopting the same model configurations as described in \citet{Fernandez-Trincado2020c}. For the Sgr centre, we adopt the heliocentric distance $d_{\odot} =$ 26.5 kpc and heliocentric radial velocity $RV = 142$ km s$^{-1}$ from \citet{Vasiliev2020b}, and proper motions from \citet{Helmi2018}: $\mu_{\alpha}\cos{\delta} = -2.692$ mas yr$^{-1}$ and $\mu_{\delta} = -1.359$ mas yr$^{-1}$, with uncertainties assumed to be of the order of 10\% in $d_{\odot}$, $RV$, and proper motions.}. It is also the most luminous star in our sample, making it a likely foreground star. The possibility that this star was disrupted from M~54 and deposited in the inner Galaxy seems unlikely, as the perigalacticon of M~54 is located well beyond the solar radius \citep[see, e.g.,][]{Baumgardt2019}. We conclude that 2M18533777$-$3129187 is an N-enhanced field star born in a different progenitor from M~54, but with a similar chemical-enrichment history to this cluster.\n\n Aside from 2M18533777$-$3129187, there is another N-enhanced star (2M18565969$-$3106454), located $\sim{}5\times{}r_{t}$ from the cluster center, which exhibits a stellar atmosphere strongly enriched in nitrogen ([N\/Fe]$>+1.4$), as extreme as M~54 stars, accompanied by a very low carbon abundance ([C\/Fe]$<-0.7$), and with discernible contributions from the \textit{s}-process elements (Ce II). 
Since the [Al\/Fe] ratio is $>+0.5$, which is a ``typical'' value for stars in GCs, and unlikely in dwarf galaxy populations, we conclude that 2M18565969$-$3106454 shares the same nucleosynthetic pathways as second-generation stars in M~54. \n \n2M18565969$-$3106454 is a potential extra-tidal star with kinematic and astrometric properties similar to those of M~54 stars, and exhibits unique chemical patterns comparable to those of genuine second-generation GC stars, which makes it very different from Sgr stars. On the other hand, N-rich stars are commonly observed to be more centrally concentrated in GCs \citep[e.g.][]{Dalessandro2019} and, as a consequence, they have a smaller probability of being tidally stripped. Thus, it is likely that the extra-tidal star could well be just a stripped M~54 star, like many others in its surroundings. Our findings demonstrate that N-rich stars are a promising route for identifying the unambiguous chemical signatures of stars formed in GC-like environments which may lie immersed in the M~54+Sgr core and\/or Sgr stream, as well as for confirming or discarding the possible association of GCs with the Sgr stream \citep{Bellazzini2020}. \n \nFollowing the same methodology as described in \citet{Fernandez-Trincado2021b}, we compute the predicted number ($N_{N-rich}$) of N-rich field stars observed in APOGEE-2 toward M~54\/Sgr using the smooth halo density relations presented in \citet{Horta2021}, and by adopting the same Monte Carlo implementation of the Von Neumann Rejection Technique \citep[see e.g.,][]{Press2002} as in Eq. 7 of \citet{Fernandez-Trincado2015a}. We find the expected number of observed N-rich halo stars beyond $d_{\odot}\gtrsim15$ kpc, over a sky area of 1.5 degree radius centred on M~54, and with both astrometric and kinematic properties similar to M~54, to be $N_{N-rich}< 0.1$ (from 1000 Monte Carlo realisations). 
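The Von Neumann rejection step invoked above can be illustrated with a generic acceptance-rejection sampler. This is a minimal sketch using a toy power-law density in heliocentric distance; it is not the actual \citet{Horta2021} halo profile nor Eq. 7 of \citet{Fernandez-Trincado2015a}:

```python
import random

def rejection_sample(pdf, lo, hi, pdf_max, n, seed=42):
    """Von Neumann (acceptance-rejection) sampling of an un-normalized pdf
    on [lo, hi]; pdf_max must bound pdf from above on that interval."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)              # propose uniformly in [lo, hi]
        if rng.uniform(0.0, pdf_max) < pdf(x):
            out.append(x)                    # accept with probability pdf(x)/pdf_max
    return out

# toy halo-like density, falling steeply beyond 15 kpc (illustrative only)
samples = rejection_sample(lambda d: d ** -3.5, 15.0, 100.0, 15.0 ** -3.5, 1000)
```

Proposals are drawn uniformly in $x$ and kept with probability pdf(x)\/pdf\_max, so the accepted draws follow the target density; counting accepted realisations that also pass the astrometric and kinematic cuts yields an expected number such as $N_{N-rich}$.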
This yields a very low probability that the newly identified extra-tidal N-rich star associated with M~54 is due to random fluctuations in the field. Furthermore, we also use the Besan\c{c}on Galactic model \citep{Robin2003} and the \texttt{GravPot16} model \citep{Fernandez-Trincado2020e} to explore the expectations for a ``default'' Milky Way in $RV$ space toward the field surrounding Sgr+M~54 beyond $d_{\odot}\gtrsim$ 15 kpc. The ``all'' sample is dominated by halo kinematics, with a negligible contribution from the thin and thick disk beyond $RV \gtrsim 120$ km s$^{-1}$. Thus, our simulated Milky Way sample acts as a guide in $RV$ space, confirming that the kinematics of the newly identified extra-tidal N-rich star differ from the disk population, with a practically negligible contribution from the expected halo.\n\n \section{Concluding remarks}\n \label{section6}\n\nWe present a spectroscopic analysis for 20 out of 22 red giant stars that are members of M~54 from the internal APOGEE DR16 dataset. This study doubles the sample of stars with spectroscopic measurements for this cluster, and the new post-APOGEE DR16 spectra achieve high signal-to-noise ratios (S\/N$>70$), allowing the addition of new chemical species not examined in previous studies \citep[e.g.,][]{Meszaros2020} in the \textit{H}-band--APOGEE-2 footprint. \n \n Overall, the chemical species re-examined in M~54 were found to be consistent with previous studies \citep{Meszaros2020}, although most of them exhibit a large star-to-star scatter. We find that 15 out of the 20 stars investigated show a high [N\/Fe] abundance ratio ([N\/Fe]$\gtrsim+0.5$), confirming the prevalence of the multiple-populations (MPs) phenomenon in M~54. Both [Ni\/Fe] and [Ti\/Fe], not previously examined in \citet{Meszaros2020}, were found to be in good agreement with measurements in the literature. In particular, we confirm that the [Ti\/Fe]~ratio slightly increases as [Fe\/H] increases, as has been reported in \citet{Carretta2010}. 
We also find a large spread in [Al\/Fe], and the presence of a genuine second-generation star in M~54, which exhibits Mg deficiency ([Mg\/Fe]$<$0) accompanied by large enhancements in nitrogen and aluminum. In general, all chemical species examined in the M~54 members present distinguishable chemical behaviour compared with Sgr stars, suggesting a different chemical-evolution history that resembles other Galactic halo GCs at similar metallicity.\n\nFurthermore, we report on the serendipitous discovery of a potential extra-tidal star toward the surrounding regions of the M~54+Sgr core, which exhibits a strong enrichment in nitrogen comparable to that seen in M~54 stars. As far as we know, this is the first study reporting on the unambiguous chemical signatures of stars formed in a GC-like environment within a nearby satellite dwarf galaxy of the Milky Way. Finding out how many such chemically unusual stars, likely originated in GCs, are present in dwarf galaxy systems will help to understand the link between GCs and their stellar streams \citep[see e.g.,][]{Bellazzini2020}. \n\n\t\begin{acknowledgements} \n\tThe author is grateful for the enlightening feedback from the anonymous referee.\n\t J.G.F-T is supported by FONDECYT No. 3180210. \n\t T.C.B. acknowledges partial support for this work from grant PHY 14-30152: Physics Frontier Center \/ JINA Center for the Evolution of the Elements (JINA-CEE), awarded by the US National Science Foundation. \n\t D.M. is supported by the BASAL Center for Astrophysics and Associated Technologies (CATA) through grant AFB 170002, and by project FONDECYT Regular No. 1170121. \n\t S.V. gratefully acknowledges the support provided by Fondecyt regular No. 1170518. \n D.G. gratefully acknowledges support from the Chilean Centro de Excelencia en Astrof\'isica y Tecnolog\'ias Afines (CATA) BASAL grant AFB-170002. D.G. 
also acknowledges financial support from the Direcci\'on de Investigaci\'on y Desarrollo de la Universidad de La Serena through the Programa de Incentivo a la Investigaci\'on de Acad\'emicos (PIA-DIDULS). \n\t A.R.-L. acknowledges financial support provided in Chile by Agencia Nacional de Investigaci\'on y Desarrollo (ANID) through the FONDECYT project 1170476.\n\t B.B. acknowledges partial financial support from the Brazilian agencies CAPES-Financial code 001, CNPq, and FAPESP. \n\t \\\n\t\n\tThis work has made use of data from the European Space Agency (ESA) mission Gaia (\url{http:\/\/www.cosmos.esa.int\/gaia}), processed by the Gaia Data Processing and Analysis Consortium (DPAC, \url{http:\/\/www.cosmos.esa.int\/web\/gaia\/dpac\/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.\\\n\t\n\tFunding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. 
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\`{i}sica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) \/ University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"{u}r Astrophysik Potsdam (AIP), Max-Planck-Institut f\"{u}r Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"{u}r Astrophysik (MPA Garching), Max-Planck-Institut f\"{u}r Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observat\'{o}rio Nacional \/ MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'{o}noma de M\'{e}xico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.\\\n\end{acknowledgements}\n\t\n\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nThe magneto-rotational instability (MRI) is a powerful process to drive\nturbulence and angular momentum transport in protoplanetary disks, \nultimately enabling the accretion of matter onto the central object \citep{bal91,haw91,bal98}.\nThere is a vast literature studying this mechanism in local shearing box\nsimulations with an ideal MHD description\n\citep{bra95,haw95,haw96,mat95,sto96,san04}. 
\\\\\nThe effect of non-ideal MHD on the MRI, regarding the issue of resistive \nprotoplanetary disks, was mostly studied in\nlocal box simulations \\citep{bla94,jin96,san00,fle00,san02I,san02II,fle03,inu05,tur07,tur08,tur10}. \nThe various studies showed that at a \ncertain level of resistivity the MRI will be suppressed.\n\nUp to now there is no prescription of the resistivity profile \nin protoplanetary disks which applies on longer timescales.\nIt is known that the dust grains control the ionization level\nin protoplanetary disks. The particle cross section and the dust-to-gas\nratio are the most important parameters for defining the ionization level of\nthe gas \\citep{tur06,war07}.\nMost studies of non-ideal MRI turbulence use a static dust\ndistribution and neglect dust\ngrowth and evolution \\citep{sim09,tur06,tur07,dzy10}. But precisely in non-turbulent regions the dust\nparticles can grow, quickly reducing the cross section and thus raising the\nionization level, enabling the MRI again \\citep{zso10}. \nIn our study we focus on ideal MHD, which applies to sufficiently\nionized disk regions depleted of small dust grains. \nThis also applies to the innermost hot parts of \nprotoplanetary accretion disks and even to extended radial regions, \nas expected for transitional disks \\citep{chi07}.\nHere, an MRI turbulent ionization front starting at the inner rim of the \ndisk propagates radially outward and the disk gets evacuated from\nthe inside out.\\\\\n\nRecent results on the MRI obtained in local unstratified box simulations by \\citet{les07},\n\\citet{froII07}, \\citet{sim09} and \\citet{fro10} show that the occurrence and saturation level of the MRI in zero-net-flux simulations are controlled by the magnetic Prandtl number\\footnote{The ratio of viscosity to\nresistivity.}. \nStratified ideal MHD simulations found at least convergence for zero-net-flux local\nsimulations \\citep{dav10,fla10}. 
Here, the vertical resolution plays the key role for the convergence\n\\citep{shi10}. Despite the simplicity of box simulations and the interesting results obtained with them over the\nlast years, the study of radially extended structures is very restricted.\nFor instance, the aspect ratio of the boxes is known to influence the saturation level of the turbulence \\citep{bod08}.\nBesides local box simulations, global simulations of the MRI have also been performed\n\\citep{arm98,haw01I,ste01,haw01,arl01,fro06,lyr08,fro09}.\nThey confirmed the picture of a viscously spreading disk as a proxy for the action of MHD turbulence. \nRecently, the first global non-ideal MHD simulation \\citep{dzy10}, \nwhich included a radial dead-zone \/ active-zone interface, demonstrated the importance of the inner edge of the\ndead zone as a trap for planetesimals and even small planets.\\\\\n\nSo far only finite difference schemes as implemented in the ZEUS and Pencil\ncodes have been used to perform global simulations. \nHowever, a Godunov code has several advantages over a finite difference scheme. \nWithout using any artificial viscosity, the code solves \nthe MHD Riemann problem and can better handle the supersonic MHD turbulence in the corona regions of the disk. Several papers recognized the importance of Godunov-type shock-capturing upwind schemes for future astrophysical \nsimulations \\citep{sto05,fro06b,mig07,flo10}.\nIn this project we use the Godunov code PLUTO \\citep{mig09} to perform\nisothermal global MHD simulations of protoplanetary disks. 
\nIn future work, we\nwill switch on the total energy conservation property of this scheme and\ninclude radiation MHD to follow the temperature evolution in global disks.\\\\\nAn important issue in stratified global simulations is the\nrelatively low resolution per scale height compared to what is possible in local box simulations.\nIn addition, the extent of the azimuthal domain is often restricted to save computational time.\nThe first $2\\pi$ global disk simulations were performed by\n\\citet{arm98}, \\citet{haw00} and\n\\citet{arl01} for a short period of time.\nBut most of the global simulations were performed in restricted azimuthal domains like\n$\\pi\/4$ or $\\pi\/2$ \\citep{fro06,fro09,dzy10}. \nBy restricting the azimuthal domain one also restricts the largest possible mode in the\ndomain. One of the goals of this paper is to investigate whether these\nmodes affect the nonlinear state of the turbulence.\\\\\n\nThe standard viscous $\\alpha-$disk theory \\citep{sha73} introduces an effective turbulent viscosity \n $\\eta = \\rho \\nu = \\alpha P \/ \\Omega$ with the local thermal pressure $P$\nand the orbital frequency $\\Omega$,\narising from undefined magnetic or hydrodynamic processes,\ntransporting angular momentum outward and allowing mass accretion onto the star.\n\\citet{lyn74} calculated the radial mass accretion rate and the\nradial accretion velocity for a 1D viscous disk model as a local function of the surface density $\\Sigma$ and the value of $\\alpha$.\nInterestingly, 2D viscous disk models \\citep{kle92} showed the appearance of meridional\noutflows. Here, the mass flows radially outward near the\nmidplane, compensated by an increased radial inflow at the upper layers of the disk to allow for net accretion,\nfor $\\alpha < 0.05$. 
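With $P = \\rho c_s^2$ and the hydrostatic scale height $H = c_s\/\\Omega$, this prescription can be rewritten in the familiar kinematic form (a standard identity, added here for reference):\n$$\\nu = \\frac{\\eta}{\\rho} = \\frac{\\alpha P}{\\rho \\Omega} = \\frac{\\alpha c_s^2}{\\Omega} = \\alpha c_s H,$$\nwhich is the form of the turbulent viscosity used again in Section 2.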
Much emphasis was given to this radial outflow and its role\nin the radial transport of grains and chemical species over large distances and relatively short time scales\n\\citep{kel04,cie09}.\\\\\nIn addition, we will investigate the onset of a vertical outflow\nas it was described in local box simulations \\citep{suz09,suz10} using a net-flux\nvertical field. Such outflows can be launched in the\nmagnetized corona region of the MRI turbulent disk \\citep{mil00,mac00}.\nThey could have an important effect on the dissipation timescales of\naccretion disks and may be related to jet production from accretion\ndisks \\citep{fer06}.\nAn interesting property of the MRI in stratified zero-net-flux simulations is the emergence of\na \"butterfly\" pattern, an oscillating mean azimuthal magnetic field with a\nperiod of 10 local orbits. It was found in many local MRI simulations,\nrecently again by \\citet{dav10}, \\citet{gre10}, \\citet{fla10}, and in global\nsimulations by \\citet{sor10} and \\citet{dzy10}.\nWe indeed identify such a \"butterfly\" pattern in our global runs, which was suggested to be linked to magnetic dynamo action in accretion disks \\citep{sor10,gre10}.\n\nOur paper is organized in the following way.\nIn Section 2 we will present our model setup and the numerical configuration.\nSection 3 contains the results.\nSections 4 and 5 will provide a discussion, summary and outlook for our work.\n\\section{Model setup}\nThe setup follows closely the disk model which is presented in\n\\citet{fro06,fro09}.\nWe define the cylindrical radius as $R = r \\sin{(\\theta)}$, with the spherical\nradius $r$ and polar angle $\\theta$.\nThe initial density, pressure and azimuthal velocity are set to be in hydrostatic equilibrium:\n$$\\rho = \\rho_{0} R^{-3\/2}\\exp\\Bigg(\\frac{\\sin{(\\theta)}-1}{(H\/R)^2}\\Bigg) $$\nwith $\\rho_{0} = 1.0$, $\\rm H\/R = c_0 = 0.07$.\\\\ \nWe choose an isothermal equation\nof state. 
The pressure is set to $P = c_{s}^2\\rho$ with $\\rm c_{s} =\nc_0\/\\sqrt{R}$.\nThe azimuthal velocity follows $$V_{\\phi} = \\sqrt{\\frac{1}{r}}\\Bigg(1- \\frac{2.5}{\\sin(\\theta)}c^2_0 \\Bigg).$$\nFor the initial velocities $V_{R}$ and $V_{\\theta}$ we use a white noise\nperturbation amplitude of $V_{R,\\theta}^{Init} = 10^{-4} c_{s}$.\nWe start the simulation with a purely toroidal magnetic seed field with constant plasma beta\n$\\beta = 2P \/ B^{2} = 25$.\nThe radial domain extends from 1 to 10 radial code units (CU)\\footnote{We refer to CU instead of a physical length unit because ideal MHD simulations without radiation transport\nare scale free. Thus our simulations could represent a disk from 1 to 10 AU as much as a disk from\n$0.1$ to 1 AU. Only explicit dust physics and radiative transfer will introduce a realistic physical scale.} with radial buffer zones from 1 to 2 CU\nand 9 to 10 CU.\nIn the buffer zones we use a linearly increasing resistivity. This damps \nthe magnetic field fluctuations and suppresses boundary interactions,\nespecially for the closed boundary runs. Our buffer\nzones mainly follow the ones used in the global simulations of \\citet{fro06,fro09,dzy10}. \nThe $\\theta$ domain is set to $\\theta = \\pi\/2 \\pm 0.3 $, corresponding to $\\pm\n4.3$ scale heights.\nWe calculated in total five disk models. Three models cover the complete $2\\pi$ azimuthal domain (FC, FO and BO in Table 1) and \ntwo models are constrained to $\\pi\/4$ (PC, PO in Table 1). The simulation FO\nis also used for a test model FOR which is described later.\nThe simulation BO has the best resolution.\nOne subset of models has a closed boundary (FC and PC). Here we\nuse a reflective radial boundary with a sign flip\nfor the tangential magnetic fields and a periodic boundary condition in the $\\theta$ direction.\nA second subset of models has an outflow boundary condition (FO, PO and BO). 
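As an illustration, the initial condition above can be sketched in a few lines of Python (a hypothetical reconstruction in code units, not part of the actual PLUTO setup; the grid sizes are arbitrary):

```python
import numpy as np

# Disk parameters from the setup above (code units): rho0 = 1, H/R = c0 = 0.07
rho0, c0 = 1.0, 0.07

def initial_state(r, theta):
    """Return (rho, P, v_phi) at spherical radius r and polar angle theta."""
    R = r * np.sin(theta)                       # cylindrical radius
    rho = rho0 * R**-1.5 * np.exp((np.sin(theta) - 1.0) / c0**2)
    cs = c0 / np.sqrt(R)                        # isothermal sound speed
    P = cs**2 * rho                             # isothermal equation of state
    v_phi = np.sqrt(1.0 / r) * (1.0 - 2.5 * c0**2 / np.sin(theta))
    return rho, P, v_phi

# Evaluate on an (r, theta) grid matching the domain r in [1, 10] CU,
# theta = pi/2 +- 0.3
r = np.linspace(1.0, 10.0, 256)
theta = np.linspace(np.pi / 2 - 0.3, np.pi / 2 + 0.3, 128)
rho, P, v_phi = initial_state(*np.meshgrid(r, theta, indexing="ij"))
```

At the midplane the pressure-support term reduces $V_{\\phi}$ slightly below the Keplerian value $\\sqrt{1\/r}$, as expected for a sub-Keplerian disk.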
Here, we use a relaxation function in the radial buffer zones which gently re-establishes the initial value of the density over a time period of one local orbit. \nIn the buffer zones we set: $\\rho^{new} = \\rho - (\\rho-\\rho^{Init})\\cdot \\Delta\nt \/ T_{Orbits}$. \nOur outflow boundary condition projects the radial gradients\nin density, pressure and azimuthal velocity into the radial boundary and the\nvertical gradients in density and pressure at the $\\theta$ boundary. \nWe ensure that there are no inflow velocities: for an inward-pointing velocity\nwe mirror the values in the ghost cells to ensure no inward mass flux. \nThe $\\theta$ boundary conditions for the magnetic field are also set \nto zero gradient, which approximates \"force-free\" outflow conditions. \nWe also ensure the force-free character of the tangential components at the radial boundary\nby adjusting the $1\/r$ profile of the magnetic field components in the ghost\ncells.\nThe normal component of the magnetic field in the ghost cells is always \nset to satisfy $\\nabla \\cdot \\vec{B}$ = 0.\nWe set the CFL value to 0.33. Higher CFL values were also successfully tested and\nwill be used for future calculations.\nWe use a uniform grid with an aspect ratio of the individual cells at 5 CU of $1:0.67:1.74$\n$(\\Delta r: r\\Delta\\theta:r\\Delta\\phi)$.\nUsing a uniform grid instead of a logarithmic grid, where\n$\\Delta r\/r$ is constant, has the disadvantage of reduced\naccuracy in the sense that the inner part of the disk is poorly resolved\ncompared to the outer part of the disk: $H(1AU)\/\\Delta r < H(10AU)\/\\Delta r $.\\\\ \nHowever, for the uniform grid the relatively broad radial inner \nbuffer zone lies in the poorly resolved disk part and is excluded from\nthe analysis.\nThe outer parts of the disk are, compared\nto a logarithmic grid with the same resolution, better resolved. \nA logarithmic grid requires a much smaller buffer\nzone; e.g., 
a logarithmic grid would place one third of the total number of grid\ncells in the first ninth of the domain, between 1 and 2 AU.\nOf course, using a uniform grid will always restrict the range of the\nradial domain, and for more radially extended simulations a logarithmic grid is mandatory.\n\nFor all runs we employ the second-order scheme in\nPLUTO with the HLLD Riemann solver \\citep{miy05}, piece-wise linear\nreconstruction and $2^{nd}$-order Runge-Kutta time integration. \nWe treat the induction equation with the \"Constrained Transport\" (CT) method in combination with the upwind CT method described in\n\\citet{gar05}.\nThe detailed numerical configuration is presented in \\citet{flo10}.\nOur high-resolution run BO was performed on a Blue Gene\/P cluster with 4096\ncores and was calculated\nfor over 1.5 million time steps, which corresponds to 1.8 million CPU hours.\n\\begin{table}[th]\n\\begin{center}\n\\begin{tabular}{ccccc}\nModel name & Resolution ($R$ $\\theta$ $\\phi$) & $\\phi$-range & Boundary & Orbits at 1 AU (Years)\\\\\n\\hline\n\\hline\nPC & 256 128 64 & $\\pi\/4$ & closed & 1435 \\\\\nPO & 256 128 64 & $\\pi\/4$ & open & 1519 \\\\\nFC & 256 128 512 & $2\\pi$ & closed & 1472 \\\\\nFO & 256 128 512 & $2\\pi$ & open & 1526 \\\\\n\\hline\nFOR& 256 128 512 & $2\\pi$ & open & $1000-1448$ \\\\ \n\\hline\nBO& 384 192 768 & $2\\pi$ & open & $1247$ \n\\\\\n\\end{tabular}\n\\caption{MHD runs performed. (P - $\\pi\/4$; F - $2\\pi$; O - open boundary; C -\nclosed boundary; B - best resolved run.)}\n\\end{center}\n\\end{table}\n\\subsection{Code Units vs. Physical Units}\nIsothermal ideal MHD simulations are scale-invariant. \nOne has to define unit variables to transform from code to cgs units. \nWe can set three independent values to \ndefine our problem. 
\nThese are the gas density, for which we choose for instance $\\rm \\rho_u = 10^{-10} g\/cm^3$; the unit of length, $\\rm 1\\, CU = 1\\, AU$;\nand the Keplerian velocity $\\rm v_u = \\sqrt{ G\\cdot M_{\\sun} \/ l_u}$, with the gravitational\nconstant $G$, the solar mass $M_{\\sun}$ and the unit length $l_u$.\n\nWith those three quantities, we translate the values of our measured surface density and mass accretion\nrate into cgs units. Using these values we derive at the midplane at 1 AU a gas density of\n$\\rm \\rho = 10^{-10} g\/cm^3$ with a Keplerian velocity of $\\rm v_K= 2.98\\cdot 10^6 cm\/s$. \nWith this, the surface density becomes $\\rm 524\\, g\/cm^2$ at 1 AU.\nGas velocities and the Alfv\\'enic speed are always presented in units of the sound\nspeed for convenience.\n\n\n\\subsection{Turbulent stresses}\nThe $\\alpha$ parameter relates the turbulent stresses to \nthe local thermal pressure. For the calculation of the $\\alpha$ values\nwe measure the Reynolds and Maxwell\nstresses, which are the $R-\\phi$ components of the respective stress tensors. 
\nThe Reynolds stress is calculated as $\\rm T_{R}= \\overline{\\rho v'_{\\phi}v'_{R}} $\nand the Maxwell stress as $\\rm T_{M}= \\overline{B'_{\\phi}B'_{R}}\/ 4 \\pi $\nwith the turbulent velocity or magnetic fields, e.g., $\\rm v'_{\\phi} = v_{\\phi} - \\overline{v_{\\phi}}$.\nThe mean components of the velocity and magnetic field are always calculated\nalong the azimuthal direction only, because of the radial and vertical gradients in the disk.\nIn our simulations, the amplitude of the Maxwell stress is about three times that of the Reynolds stress.\nFor the total $\\alpha$ value we integrate the mass-weighted stresses over the total domain:\n$$ \\alpha = \\frac{ \\int \\rho \\Bigg( \\frac{v'_{\\phi}v'_{R}}{c^2_s} - \\frac{B'_{\\phi}B'_{R}}{4 \\pi \\rho c^2_s}\\Bigg)dV} {\\int \\rho dV}.$$\nThe respective turbulence-enhanced viscosity can now be represented as $\\nu = \\alpha H c_s$\nwith the height of the disk $H$ and the sound speed $c_s$.\n\\begin{figure}\n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-1A.ps,scale=0.46}\n\\psfig{figure=FIG-1C.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-1B.ps,scale=0.46}\n\\psfig{figure=FIG-1D.ps,scale=0.46}\n\\end{minipage}\n\\label{totalal}\n\\caption{Top left: Total stresses expressed as $\\alpha$-parameters for the $\\pi\/4$ models PC\nand PO and the $2\\pi$ models FC, FO and BO. The parameter is mass weighted\nand integrated over the domain (3-8 AU).\nTop right: Radial $\\alpha$ profile, time averaged between 300 and 1200 inner\norbits.\nFor the best-resolution model BO, the profile follows roughly $\\sqrt{r}$ in region B. \nRegions A and C are affected by boundary conditions and buffer zones.\nBottom left: Vertical $\\alpha$ profile, averaged over time and space region ($II\/B$). \nBottom right: Evolution of the vertical distribution of the azimuthally averaged Maxwell and Reynolds stress\n$T_S$ at a radius of 4 AU with\ntime. 
Colors are logarithmic values of the corresponding dynamical viscosity including the density profile.}\n\\end{figure}\n\\section{Results}\n\\subsection{Disk evolution}\nWe first describe the typical evolution for an azimuthal MRI (AMRI) \nin a global disk simulation with open boundaries. \nThe AMRI simulation starts with a purely toroidal net magnetic field which \nbecomes MRI unstable on timescales of around 10 local orbits.\nAfter approximately 250 inner orbits, the disk reaches its maximum\n$\\alpha$ value of $0.01$.\nAt this time (equivalent to 10 local orbits at\nthe outer boundary of the undamped region) the disk has become fully turbulent.\nDuring this evolution, the initial magnetic flux decreases as it leaves the computational\ndomain in the vertical direction. \nStarting at approximately 250 inner orbits, oscillating mean fields evolve.\nFig. 1, top left, presents the mass-weighted and domain-integrated $\\alpha$ value \nover time for all models.\nDuring the time period between 800 and 1200 inner orbits, we get a\nrelatively constant $\\alpha$ value of $5\\cdot10^{-3}$ (model BO). \nWe mark three different time stages of the turbulent\ndisk evolution:\nIn period I (0 to 800 inner orbits), the turbulence is not yet saturated.\nAfter a strong initial rise due to the net azimuthal field, the turbulence decays\nto a level where self-sustained turbulence is possible, i.e., the loss of magnetic flux\nin the vertical direction is balanced by the generation of magnetic flux in the turbulent flow.\nThe nature of this generation of magnetic fields can be an indication of dynamo action,\ne.g., an $\\alpha \\Omega$ dynamo\n\\citep{bra05,gre10}, but detailed studies of this effect will be subject to future work.\nIn period II, we have a quasi-steady state for at least 400 inner orbits. \nDuring this time period, we do all our analysis. 
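To make the stress diagnostics concrete, the mass-weighted $\\alpha$ defined in the previous section can be sketched as follows (a minimal illustration, assuming the fields are stored as NumPy arrays with azimuth on the last axis; the function and variable names are hypothetical):

```python
import numpy as np

def alpha_total(rho, v_r, v_phi, B_r, B_phi, cs2, dV):
    """Mass-weighted alpha from Reynolds and Maxwell stresses.

    Field arrays have shape (n_r, n_theta, n_phi); cs2 is the squared
    sound speed and dV the cell volumes (broadcastable to that shape).
    """
    # Fluctuations: subtract the azimuthal mean only (last axis), because
    # of the radial and vertical gradients in the disk.
    az = lambda f: f.mean(axis=-1, keepdims=True)
    vp_r, vp_phi = v_r - az(v_r), v_phi - az(v_phi)
    Bp_r, Bp_phi = B_r - az(B_r), B_phi - az(B_phi)
    # Integrand: rho*(v'_phi v'_R / cs^2 - B'_phi B'_R / (4 pi rho cs^2))
    stress = rho * vp_phi * vp_r / cs2 - Bp_phi * Bp_r / (4.0 * np.pi * cs2)
    return (stress * dV).sum() / (rho * dV).sum()
```

For MRI turbulence the correlation $\\overline{B'_{\\phi}B'_{R}}$ is negative, so the Maxwell term contributes positively to $\\alpha$.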
\nIn period III, a comparison of the models becomes less useful.\nThe models with lower resolution and open boundary (PO and FO) \nshow a decreasing $\\alpha$-stress in time. Thus they are not useful for long-time integrations past\n1200 inner orbits. In the closed models (PC and FC), on the other hand, the magnetic flux cannot escape vertically \nand therefore turbulence does not decay. \nOn the contrary, turbulence even increases in these runs as the flux in the box \ncannot efficiently escape.\\\\\nA closer view shows that the stress can oscillate locally on shorter time scales.\nIn Fig. 1, bottom right, we plot the mass-weighted stresses $\\alpha\\rho$ at 4 AU over height\nand local orbits. The strength of the stresses locally oscillates with a period of around 5 local\norbits. The maxima in the stresses always appear first in the midplane and then propagate\nvertically. These oscillations in the stresses are connected to the $B_\\phi$ \"butterfly\" structures,\nwhere the azimuthal mean field oscillates with a period of 10 local orbits (Fig. 13). \nEvery change of sign in the mean $B_\\phi$ is then correlated with a minimum in the stresses, \nwhich both occur every 5 local orbits.\nAt the same time this plot shows the importance of the stresses in a region up\nto 3 disk scale heights.\n\nAlthough one can always define a total $\\alpha$-parameter in the disk, the spatial\nvariations are enormous, especially in the vertical direction. \nThe vertical $\\alpha$ profile is plotted in Fig. 1, bottom left. For model\nBO, the turbulent stress at the midplane increases from $2.0\\cdot 10^{-3}$\nup to $8\\cdot10^{-2}$ at 4 scale heights.\nThe simulations with moderate resolution show significantly lower values around the midplane due\nto the lack of resolution there. 
For the closed models, the stresses in the\ncorona are artificially increased due to the periodic boundary.\\\\\n\n\\subsection{Radial profile of turbulent stress}\nBesides the vertical profile of the turbulent stress, which has already been studied in local\nbox simulations, the radial profile of the turbulent stress can only be addressed in global\nsimulations.\nIn Fig. 1, top right, we present the radial $\\alpha$ profile, averaged\nbetween 300 and 1200 inner orbits.\nIn the inner buffer zones (1 - 2 AU) the $\\alpha$ values are practically zero because of the\nresistive damping. Starting from 2 AU, $\\alpha$ rises until it levels off at\naround 3 AU. From 3 to 8 AU we obtain a radial $\\alpha$ profile which can be\napproximated by a $\\sqrt{r}$ dependence. Beyond 9 AU, $\\alpha$ is again close to zero because of the damping applied there.\nWe mark three regions in radius (Fig. 1, top right, green lines).\nRegion A, extending from 1 to 3 AU, is affected by the buffer zone.\nRegion B, ranging from 3 to 8 AU, shows the $\\sqrt{r}$ slope.\nRegion C, covering 8 to 10 AU, is again affected by the buffer zone.\nIn the following analysis, we will therefore concentrate on region B.\\\\\nIn order to have a radially force-free accretion disk, \nthe fields have to drop radially as $B \\propto r^{-1}$ (Fig. 12, top left).\nThis was also observed for magnetic fields in galactic disks \\citep{bec01} (Fig. 1).\nIf the dominant toroidal field follows $B_{\\phi} \\propto r^{-1}$, \nthe radial Lorentz force vanishes:\n$$\\rm F_{radial} = - \\frac{1}{r^2\\rho} \\frac{\\partial r^2 B_{\\phi}^2}{\\partial\nr}.$$\nIn the case of $\\rm \\partial \\log{\\rho}\/\\partial \\log{r} = -1.5$\nand $\\partial \\log{c_s}\/\\partial \\log{r} = -0.5$, the $\\alpha$ value,\ndominated by the Maxwell stress, will then \nscale as $\\sqrt{r}$, which actually matches the value that \nwe measure in our best resolved model (see Fig. 
1, top right).\n\n\n\\subsection{Mass loss}\nThe models with open boundaries show a considerable mass loss over the course of our\nsimulations.\nA vertical outflow removes a substantial amount of mass. The\ntotal mass loss over time is presented in Fig. 5. The mass loss is\ndetermined in space region B. The closed models FC and PC lose their mass\ndue to radial mass movement. The open models lose their mass mainly due to\nthe vertical outflow. We will discuss this outflow in Section 3.5.\nTo check the possible impact of this mass loss on the properties of the\nturbulence, we restarted run FO after 1000 inner orbits with the current\nvelocity and magnetic field configuration but the initial density distribution.\nWe call this model FOR (FO Restarted, Table 1).\nAfter restarting the simulation, the turbulence needs a couple of\ninner orbits to re-establish the fully turbulent state.\nWe compared the mean total $\\alpha$ stress of the runs FO and FOR and found a comparable evolution \nof the $\\alpha$ values. We measure $\\alpha = 1.4\\cdot10^{-3}$ for FOR and $\\alpha = 1.3\\cdot10^{-3}$ for FO in the time period\nfrom 1000 up to 1400 inner orbits. We conclude that the mass loss does not\nyet influence the development and strength of the turbulence. \n\\begin{figure}\n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-2A.ps,scale=0.46}\n\\psfig{figure=FIG-2C.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-2B.ps,scale=0.46}\n\\psfig{figure=FIG-2D.ps,scale=0.46}\n\\end{minipage}\n\\label{surf}\n\\caption{Top left: The surface density profile after 1000 inner orbits for the $\\pi\/4$ model\nPO, the $2\\pi$ model FO and the high-resolution model BO. Dashed lines represent the surface\ndensity profile for the respective viscous disk model and the dotted line the initial profile. 
\nTop right: 2D contour plot of the time- and azimuthally averaged radial mass flow for model BO.\nThe red color indicates inward accretion onto the star, the blue color shows outward motion.\nWe do not observe a meridional flow.\nBottom left: Radial profile of the time-averaged radial mass flow for\nthe high-resolution model BO (solid line) and the viscous models. \nBottom right: 2D contour plot of $(\\Omega - \\Omega_0) \/ \\Omega_0$ over\nradius and time, averaged over azimuth at the midplane. The orbital frequency\nremains sub-Keplerian, $(\\Omega_K - \\Omega_0) \/ \\Omega_0 = 0.012$.}\n\n\\end{figure}\n\\subsection{Viscous disk models}\nThe classical $\\alpha$ viscous disk model should reproduce the radial mass flow \nas it occurs in global MHD simulations of MRI turbulent disks.\n\\citet{bal99} have\nargued that the mean flow dynamics in MRI turbulence follows the\n$\\alpha$ prescription. \nTo further test this supposition, we performed a series of 2D HD viscous comparison simulations with the PLUTO code \nfor several of our 3D MHD runs. \nWe use the same resolution and the same initial setup. \nYet, the magnetic field evolution is now replaced by an explicit shear \nviscosity from the time-averaged radial $\\alpha$ profile $\\nu (R) = \\alpha(R) H c_s$\nobtained from the MHD simulations (see Fig. 1, top right).\nFig. 2, top left, shows the surface density profile for the models PO,\nFO and BO and the corresponding viscous\nmodels.\nThe surface density profiles of the viscous runs follow nicely the respective MHD model\nprofiles (Fig. 2, top left, dashed lines) for the region that we use for analysis (3-8 AU).\nAll viscous models show a higher surface\ndensity profile than the open models BO, PO and FO, but of course,\nin contrast to the MHD models, the viscous models do not show any \nsubstantial vertical mass outflow.\nThe total radial mass flow (i.e. 
azimuthally and vertically integrated) is plotted as\na time average (0 - 1000 inner orbits) for \nthe high-resolution model BO (Fig. 2, bottom left, solid line) and the respective viscous\nruns (Fig. 2, bottom left, dashed and dotted lines). \nThe radial mass flow of the viscous run matches very well the flow obtained in the MHD model.\nA constant $\\alpha$ value will not reproduce the proper\nevolution of the MRI run. If we adopt for instance a constant $\\alpha$ value of $5\\cdot10^{-3}$, \nwhich would be the global mean value of the MHD run, we get a globally constant accretion rate of\n$\\rm 5.1\\cdot10^{-9} M_\\sun\/yr$.\nAs a sanity check for our viscosity module we compare this value to the \nanalytical estimate by \\citet{lyn74}: \n$$\\dot M(r) = 3 \\pi \\Sigma_g \\nu + 6 \\pi r \\frac{\\partial(\\Sigma_g\\nu)}{\\partial r}$$\nand find a value very close to the time-dependent viscous run of $\\dot M = 6\\cdot10^{-9}\n\\frac{M_{\\sun}}{yr}$,\nbased on a surface density profile of $\\rm \\Sigma_g = 524\\cdot R^{-0.5} [g\/cm^2]$ and our disk parameter\n$H=0.07 R$.\\\\\nIn Fig. 2, top right, we show the time and azimuthal average of \nthe accretion rate over radius and height. \nThere is a dominant inward accretion at the midplane (Fig. 2, top right, red\ncolor).\nThis result is in contrast to the viscous runs, where we see the minimum of\naccretion and even a small outflow at the midplane \\citep{kle92,tak02}.\nFollowing \\citet{tak02} (Eq. 8), there are several possibilities which could change\nthe vertical profile of the radial velocity, and therefore the mean accretion\nflow. Radial and vertical gradients in the orbital frequency as well as a\nspatially varying $\\alpha$ will affect the vertical profile of the meridional outflow.\nFor the MHD simulations one has to include the vertical gradient as well as the time derivative of the orbital\nfrequency. Fig. 
2, bottom right, demonstrates the change of the orbital frequency\nwith a period of around 50 local orbits at 5 AU.\\\\\nRadial mass flow and surface density evolution have shown that we can fit\nour global MHD models with a viscous disk model as long as we use \nan $\\alpha$ profile compatible with our MHD run.\nOf course, the disk spreading that we observe in our MHD run\nis partly due to the existence of our radial buffer zones,\nin which not only the fields decay, but also the $\\alpha$ stresses\nvanish. In a larger radial domain we can expect that also a larger\nregion of the disk will get into a steady state of accretion.\nHowever, one could also argue that in a realistic protoplanetary accretion disk\none will ultimately reach dead zones which behave similarly to our\nbuffer zones. In that sense the active part of our global disk is \nembedded between two dead zones.\n\\begin{figure}\n\\hspace{-1.2cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-3A.ps,scale=0.56}\n\\end{minipage}\n\\hspace{4.2cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-3B.ps,scale=0.46}\n\\end{minipage}\n\\label{vel_spec3}\n\\caption{Left: Angle between the cylindrical radial and vertical velocity \nwith respect to the midplane axis ($V_R = -1$ and $V_Z = 0$) for the upper hemisphere.\nRight: Vertical mass outflow $\\rho v_z dA_z$ in units of $\\rm M_\\sun\/yr$ at 5 AU. \nThere is a mass outflow present above 3 scale\nheights. The evaporation time, $\\rm \\tau_{ev} = \\Sigma\/(\\rho v_z)$, was determined to be 2070 local orbits.}\n\\end{figure}\n\n\\begin{figure}\n\\hspace{-1.2cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-4A.ps,scale=0.49}\n\\end{minipage}\n\\hspace{4.2cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-4B.ps,scale=0.46}\n\\end{minipage}\n\\label{vel_spec4}\n\\caption{Left: Logarithmic contour plot of the density, overplotted\nwith the velocity vectors in the $R-\\theta$ plane for model BO. Both are averaged over azimuth and time\nand plotted for the upper disk hemisphere. 
The velocity vectors show an\noutflow pattern above two disk scale heights. The red line marks the optical depth of $\\tau = 1$ for our setup.\nRight: Vertical density profile at 5 AU after 1000 inner orbits for\nmodel BO (solid line), the respective viscous model (dashed line) and the \ninitial profile (dotted line). The optical depth of $\\tau = 1$ is reached at around 2.8 scale\nheights for our model.}\n\\end{figure}\n\n\\begin{figure} \n\\hspace{-0.6cm}\n\\psfig{figure=FIG-5.ps,scale=0.46}\n\\label{totalmass}\n\\caption{Total mass plotted over time for all models. The mass is integrated in the\nspace region B (3-8 AU). The change of the mass for the closed models is due\nto radial movement. The mass loss for the open models is dominated by the vertical outflow.}\n\\end{figure}\n\n\n\\subsection{Vertical outflow}\nIn the previous section, we have seen that the MHD simulations point to the\npresence of an additional process besides turbulent \"viscous\" spreading that removes gas from the disk.\nFig. 4, right, shows the initial vertical density profile at 5 AU (dotted\nline), the respective profile for the MHD model BO (solid line) and the viscous HD run (dashed line) after 1000 inner orbits (90 local orbits).\nFor our model, the vertically integrated optical depth of $\\tau = 1$ is reached at around 2.8 scale heights (Fig. 4, red line).\nFor the calculation of the optical depth, we used Rosseland mean opacities \\citep{dra84} with the temperature at 5 AU.\nThe additional magnetic pressure as well as the vertical mass flow, present in the MHD run, generates a higher gas density \nabove 3 scale heights compared to the hydrostatic equilibrium.\nIn Fig. 4, left, we plot a snapshot of the azimuthally averaged velocity \nfield, taken after 1000 inner orbits. The plot indicates a vertical outflow in the disk starting above \n2 scale heights. We measured the angle \nbetween the cylindrical radial velocity $V_R$ and the vertical velocity\n$V_Z$ for the mean and turbulent components (Fig. 
3, left). \nThe angle is measured with respect to the midplane axis\n(pointing to the star, see Fig. 3, left, $V_R = -1$ and $V_Z = 0$).\nThe upper (red solid line) and the lower hemisphere (blue solid line) \npresent similar profiles. \nFrom the midplane up to $1.8$ scale heights, the turbulent velocity field is directed\nupwards but still points to the star. \nThe low angle of $10^{\\circ}$ for the mean velocities shows the gas motion\npointing to the star and towards the midplane. \nAt $1.8$ scale heights the turbulent velocity points away from the\nmidplane and the star.\nAlso the mean velocity angle changes quickly in the region between 1.6 and 2 scale heights \nto an outflow angle, i.e., steeper than $90^{\\circ}$. This region coincides with the region where\nthe vertical outflow is launched \\citep{suz10}. \nAbove 2 scale heights the angles of the turbulent and the mean velocity\ncomponents stay above $90^{\\circ}$, leading to a vertical outflow with a small radial outward component.\nThe so-called dynamical evaporation time is the time to evacuate\nthe gas completely from the disk assuming no supply of matter. In our\nmodel the value is slightly larger than 2000 local orbits (Fig. 3, right),\nwhich confirms the vertical outflow obtained in local box\nsimulations by \\citet{suz10} with a vertical net-flux field.\n\n\nIn Fig. 3, right, we plot the vertical mass flow over height at 5 AU.\nThe outflow starts at 2 scale heights and reaches mass fluxes of $\\rm 10^{-10} M_\\sun\/yr$ at 5 AU (model BO, solid line). The influence of the\npure outflow boundary is observed to cause the small outflow in the HD viscous run (dashed line).\nIn the midplane region, the disk re-establishes the hydrostatic \nequilibrium due to the radial mass loss at the midplane (Fig. 6, bottom\nright, red solid line). This drives small mean vertical motions, visible\nin the vertical velocity (Fig. 
6, bottom right, green dotted line).\n\nThe gas leaves the grid with Mach numbers of only\n0.5, which is significantly lower than the local escape velocity, which would be about Mach 20.\nEven though the results indicate a stable vertical outflow, \nwithout including the sonic point and the Alfv\\'enic point in the simulation\nit is not possible to make predictions about the flow leaving or returning to\nthe disk at larger radii. \nThus whether the vertical outflow becomes a disk wind will have to be determined in more detail \nin future simulations with a much larger vertical extent. For this study one would\nmost probably need an additional vertical field in the corona, which could support \nadditional propulsion effects like magneto-centrifugal acceleration.\n\\subsection{Velocity analysis}\nPlanet formation processes in circumstellar accretion disks are strongly dependent on the\nstrength of the turbulence. Turbulence mixes gas and particles, diffuses or concentrates them \nand makes them collide \\citep{ilg04,joh05,joh07,bra08,cuz08,car10,bir10}.\nThe properties of MHD turbulence that are important for planet formation are the turbulent velocity\nand density fluctuations of the gas.\nThe density fluctuations are around $10\\%$ and follow the results by\n\\citet{fro06}.\nThe spatial distribution of the fluctuating and mean part of the velocities \nis presented in Fig. 6.\nAll results are obtained for time averages from 800 to 1200 inner orbits\nand are given in units of the sound speed.\nSpatial averaging is performed in azimuth and between 3 and 7 AU for the vertical\nprofiles. The radial profiles are mass\nweighted. \nFig. 6, top left, shows the turbulent RMS velocity over radius. \nThe profile is roughly constant with a total RMS velocity\nof $0.1 c_s$, dominated by the radial turbulent velocity.\nThe vertical dependence of the turbulent velocity (Fig. 
6, top right) shows a flat profile \naround $\\pm 1$ scale height above and below the\nmidplane for the radial and azimuthal velocity.\nBoth components increase above one scale height by an order of\nmagnitude.\nThe radial component dominates with $0.07 c_{s} $ around the midplane up to $0.3 c_{s} $ at 4 scale\nheights. \nThe azimuthal component follows with $0.05 c_{s} $ up to $0.2 c_{s}$ at 4 scale heights.\nOnly the $\\theta$-component does not show a flat profile around the\nmidplane and increases steadily from $0.02 c_{s} $ to $0.2 c_{s}$ at 4 scale heights,\nwhich is an effect of the density stratification.\nThe small decrease of the $\\theta$ component near the vertical boundary is due to the \noutflow boundary because it does not allow inflow velocities.\\\\\n\nA global picture of the total RMS velocity is presented in Fig. 7.\nThe 3D picture is taken after 750 inner orbits and shows again the \ndifferent turbulent structures of the midplane and coronal region. \nThere are also localized supersonic turbulent motions in the disk\ncorona (Fig. 7, white color). Compared to the turbulent velocity, the mean\nvelocities of the gas are two\norders of magnitude smaller. They show small but steady gas motions in the\ndisk. \nThe vertical dependence for the mean velocity (Fig. 6, bottom right) \nshows the small inward motion (red solid line) as well as the change of $r$ and $\\theta$-velocity\ncomponents to an outflow configuration around 1.6 scale heights.\n\\begin{figure}\n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-6A.ps,scale=0.46}\n\\psfig{figure=FIG-6C.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-6B.ps,scale=0.46}\n\\psfig{figure=FIG-6D.ps,scale=0.46}\n\\end{minipage}\n\\label{vel_turb}\n\\caption{Top left: RMS fluctuations of the velocity versus radius for model BO,\naveraged over time and azimuth. 
\nAll components show a roughly flat profile, dominated by the radial turbulent velocity.\nThe radial profiles are mass weighted. The time average is performed during time period II (Fig. 1, top\nleft, green line).\nTop right: Turbulent velocity profile versus scale height for model\nBO, averaged over time and azimuth.\nThere is a flat profile visible in the range $\\pm 1.5$\nscale heights above and below the midplane.\nStarting at $1.5$ scale heights the turbulent velocity increases.\nBottom left: Energy power spectra $E_m\\cdot m^2$ for $\\pi\/4$ model PO (red dotted line),\n$2\\pi$ model FO (blue dashed line) and the high-resolution model BO (black solid line). \nBottom right: Time average of the mean velocity over scale height.\n}\n\\end{figure}\n\n\n\\subsubsection{Kinetic spectra}\nNot all particles couple alike to the turbulent gas flow. \nIn fact particles have a size-dependent friction or stopping time \\citep{wei77}.\nThis stopping time is also the time a particle needs to couple to the\nturbulent gas flow. \nBest coupled to the turbulence are those particles which have a coupling\ntime shorter than the turbulent correlation time. \nParticle collision velocities are maximized for particles whose stopping time\ncoincides with the turbulent correlation time, e.g., the eddy turnover time.\nThis means that particles of different sizes couple to different length scales of the\nturbulent spectrum. Therefore, a study of planet formation processes needs \nnot only the mean turbulent velocity but also its spectral distribution.\\\\\nIn the global domain, only the $k_{\\phi}$ space of the spectra is\naccessible without modifications, as only the $\\phi$ direction is periodic in space.\nThe investigation of the complete $k$-space for this model goes beyond the\nscope of this work.\nThe classical Kolmogorov theory predicts the scaling of the energy\nspectra per wavenumber: $E(k) \\propto v_k^2 k^{-1} \\propto \\epsilon^{2\/3} k^{-5\/3}$. 
\nWe calculate along azimuth $|v(k_\\phi)|^2 = |v_r(k_\\phi)|^2 +\n|v_\\theta(k_\\phi)|^2 + |v_\\phi(k_\\phi)|^2 $ with $v_r(k_\\phi) = \\left\n\\langle \\int_\\phi\nv_r(r,\\theta,\\phi)e^{-ik_{\\phi}\\phi} d\\phi \\right \\rangle$.\nThe average is done in radius (region B, Fig. 1) and height ($\\pm 0.5$ disk scale heights).\nFor our spectra we use the azimuthal wavenumber $\\rm m$ instead of $k$\nto be independent of radius: $k = 2\\pi\/\\lambda = m\/R$.\nIn Fig. 6, bottom left, we plotted the energy power spectra $E_m =\nv(m)^{2}\\cdot m$ with time and\nspace averaged over $\\pm 0.5$ scale heights around the midplane.\nIn our models we do not observe a Kolmogorov-like inertial range, $E_m\n\\cdot m^2 \\sim m^{1\/3}$.\nThe $2\\pi$ runs FO and BO have most of the energy placed at $m=5$.\nThe high-resolution model BO presents a $k^{-1.2}$ dependence, starting from $m=5$ until\n$m=30$.\nThe $\\pi\/4$ run PO piles up the energy at its domain size ($m=8$), reaching higher\nenergy levels compared to the $2\\pi$ models FO and BO.\nThe velocity spectra for each component along the azimuth, \nplotted in Fig. 8, left, indicate that all velocity components have \nsimilar amplitude for the small scales and do not deviate \nby more than a factor of two at the largest scales.\nThe radial velocities peak at $m=5$, but \noverall the entire spectrum above $m=20$ is essentially flat.\nThe peak at $m=5$ could be connected to the production of shear waves in the simulations.\nThese shear or density waves are described in \\citet{hei09}.\nOn top of the shorter time scale of MRI turbulence, these long \"time scale\" shear waves\nare visible in the contour plot of the radial velocity in the $r-\\phi$ midplane (Fig. 8, right).\nThe shear wave structures drive the radial velocity up to $0.3 c_s$.\nIn the velocity spectra we see the start of the dissipation regime\nat $m=30-40$ for the high-resolution run BO. 
 For the model BO, \nthis corresponds to 26 or rather 19 grid cells per wavelength, \nwhich is still well resolved by the code \\citep{flo10}.\nShearing waves are also visible in an $r-\\theta$ snapshot \nof the velocity (Fig. 9, left). Here we plot the azimuthal \nvelocity $V_\\phi - V_K$ as contour color, over-plotted with the velocity vectors.\nRed contour lines show Keplerian azimuthal velocities.\nSuper-Keplerian regions are important for dust particle migration.\nThey reverse the radial migration of particles, leading to their efficient\nconcentration and triggering parasitic instabilities in the dust layer, like\nthe streaming instability, potentially leading to gravoturbulent planetesimal formation\n\\citep{kla08,joh07}.\nIn our simulation these super-Keplerian regions are not completely\naxisymmetric, but have\na large extension in the azimuthal direction of several scale heights.\nThe variation of the orbital frequency over time and space, presented in\nFig. 9, left, and Fig. 2, bottom right, is connected to zonal flows. They\nare observed and discussed in several local and global studies\n\\citep{joh09,dzy10}.\n\\subsection{Magnetic field analysis}\nThe azimuthal MRI generates a turbulent zero-net field configuration in the\ndisk.\nDespite the loss of mass and magnetic flux through the vertical\nboundary, there is no sign of decay for the highest resolution case BO \n(Fig. 1, top left, bottom right).\nWe find well-established turbulence. Fig. 9, right, presents a snapshot of the magnetic fields \nafter 750 inner orbits. \nThe $r-\\theta$ components are shown as vectors \nwith the azimuthal magnetic field as background color. \n\\subsubsection{Magnetic energy spectrum}\nTo understand the magnetic turbulence at the midplane, we\ninvestigated the spectral distribution of the magnetic energy.\nThe magnetic energy power spectrum (Fig. 
12, bottom left) is plotted along\nthe azimuthal direction with the same time and space average as for the \nkinetic energy power spectra.\nWe plot the magnetic energy power spectra times the wavenumber \n$m\\cdot B_m^2\/2P_{Init-5AU}$ to show where most of the \nmagnetic energy is located.\nFor all runs, most of the magnetic energy is\ndeposited in small scale magnetic turbulence. This was found in several\nrecent MRI simulations, latest in local box simulations by \\citet{dav10} and \\citet{fro10}.\nThe peak of the magnetic energy lies just above the dissipation regime.\nFor the $2\\pi$ model FO, the peak is located between \n$m=10$ and $m=20$, whereas for the high-resolution run BO this regime\nis shifted to $m=20$ and $m=30$. \nThe spectra follow the $m^{1.0}$ slope closely until the dissipation regime is reached.\nThe $\\pi\/4$ run does not resolve the scales where we observe this $m$ dependence. \nIn the restricted model PO most of the magnetic energy is again located \nat the scale of the domain size.\\\\\n\\subsubsection{Convergence}\nThe convergence of MRI is an important aspect in ongoing MRI research in\nlocal and global simulations. In local boxes, convergence was found for\nthe large-scale turbulence between 32 and 64 grid cells per scale height\n\\citep{dav10}. Due to the large domain in global simulations, it was up to now not feasible to reach\nsuch resolutions per scale height. Here the first resolution level is needed \nto reach a self-sustaining turbulence, at least for\nsimulations with a zero-net flux toroidal field \\citep{fro06}. Comparing the\nresults from stratified local box simulations we can already give\npredictions for global simulations with such high resolutions per scale\nheight. \nIn comparison with the local box simulations by \\citet{dav10} we get a very\nsimilar profile of the magnetic energy with increasing resolution.\nWith higher resolution (FO to BO, Fig. 
12, bottom left) the large scale magnetic energy decreases \nwhile the small scale energy increases. A model with doubled resolution compared to BO should also\nshow convergence for the large scales. In that case, we expect\nonly a weak decrease for the large scale modes, as presented in\n\\citet{dav10}, Fig. 3.\n\n\n\\subsubsection{Plasma beta}\nThe overall strength of the magnetic fields is best analyzed by the plasma beta\nvalue $\\beta = 2P\/B^2$.\nFig. 10 presents a 3D picture of the logarithmic plasma beta for the \n$r-\\theta$ components, taken at 750 inner orbits. \nThe two-phase structure of the disk is again visible.\nThe well-established turbulence at the midplane has a broad distribution \nof high plasma beta values (Fig. 11, top left).\nIn contrast, there are regions in the corona of the disk with plasma beta below unity (Fig. 10, black regions).\nThe azimuthal and time averaged plasma beta at the midplane lies around 400 (Fig. 11, bottom right).\nIn Fig. 11, top left, we plot the correlation of plasma beta over height in a scatter plot of all grid cells.\nWe find the distribution of beta values to be very narrow in the disk corona (1-10) but much broader (10 - $10^4$) around the midplane, strongly peaked around $\\beta = 500$. \nThe value of plasma beta in the disk corona depends on several issues. A\nzero-net flux MRI turbulence with toroidal field produces lower magnetic\nfields in the corona. This was already shown in a very similar simulation by \\citet{fro06} (Fig. 8, solid line, model S2). \nIn contrast, a vertical initial field produces a stronger turbulence level\nwith plasma beta values below unity in the corona.\nThe boundary condition also affects the values in the corona.\nA closed boundary condition, e.g., periodic in the vertical direction, \nwill accumulate a large amount of magnetic flux in the corona and lead to a \nplasma beta value smaller than one (observed in model FC and PC). 
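\nIn terms of energies, the plasma beta translates directly into a magnetic-to-thermal pressure ratio,\n\\begin{equation}\n\\frac{B^2\/2}{P} = \\frac{1}{\\beta},\n\\end{equation}\nso the midplane mean of $\\beta \\approx 400$ corresponds to a magnetic energy density of only $0.25\\%$ of the gas pressure, while the coronal values of $\\beta = 1$-$10$ indicate fields at or near equipartition.\n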
\nThe small increase of plasma beta above 3 disk scale\nheights is connected to the vertical outflow and the increase of gas\npressure and density in this area (Fig. 4, right). This effect has to be investigated in future work\nwith a much broader vertical extent.\nVery high plasma beta values in the midplane (Fig. 11, top left) indicate reconnection.\nTwo magnetic fields with different sign and comparable strength coming\ntoo close to each other, e.g., in the same grid cell, reconnect. \nSuch reconnections are visible in single grid cells with nearly no magnetic field.\nFor our BO model, the reconnection zones reach plasma beta values up to $10^{11}$. \nThe heating due to reconnection in those regions is not covered in our\nisothermal model, but shall be a subject for future studies.\n\n\\begin{figure}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-7.ps,scale=0.22}\n\\end{minipage}\n\\label{vrm}\n\\caption{3D picture of turbulent RMS velocity at 750 inner orbits for model BO.\nThe white regions in the corona present supersonic turbulence.}\n\\end{figure}\n\\begin{figure}\n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-8A.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-8B.ps,scale=0.46}\n\\end{minipage}\n\\label{vel_spec2}\n\\caption{Left: Velocity spectra in units of the sound speed for all three components\nat the midplane. Space and time averages are again taken between 3 and 8 AU\nand between 800 and 1200 inner orbits. The radial velocity peaks at $m=3-5$\nfor both $2\\pi$ models.\nRight: Contour plot of the radial velocity at the midplane ($R-\\phi$ plane).\nLarge shear wave structures become visible. 
This snapshot is taken after 750\ninner orbits.}\n\\end{figure}\n\\begin{figure}\n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-9A.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-9B.ps,scale=0.46}\n\\end{minipage}\n\\label{pow_spec}\n\\caption{Left: Contour plot of $V_\\phi - V_K$ for an azimuthal slice. The red\ncontour line encloses regions with super-Keplerian velocity.\nOverplotted are the $r-\\theta$ velocity fields.\nRight: Contour plot of $B_\\phi$ for an azimuthal slice.\nOverplotted are the $r-\\theta$ magnetic fields.\nBoth snapshots are taken after 750 inner orbits.\n}\n\\end{figure}\n\\begin{figure}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-10.ps,scale=0.22}\n\\end{minipage}\n\\label{pbeta}\n\\caption{3D picture of plasma beta after 750 inner orbits for model BO.\nThe black regions in the corona present plasma beta values below unity.}\n\\end{figure}\n\\begin{figure}\n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-11A.ps,scale=0.46}\n\\psfig{figure=FIG-11C.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-11B.ps,scale=0.46}\n\\psfig{figure=FIG-11D.ps,scale=0.46}\n\\end{minipage}\n\\label{mag_turb2}\n\\caption{Top left: Distribution of plasma beta, $N(\\beta)\/N_{Total}$, over height at 750 inner orbits for model BO.\nThe color represents the relative number of grid cells containing specific\nplasma beta values. 
At the midplane,\nthere is a wide distribution of plasma beta values between 10 and 10000.\nIn the coronal region the distribution becomes narrower with values between 1 and 10.\nTop right: Contour plot of azimuthal and time averaged plasma beta of BO\nwith radial (bottom left) and vertical profile (bottom right).}\n\\end{figure}\n\\subsubsection{Spatial distribution}\nAs we already mentioned in section 3.2, the radial profile of the turbulent magnetic field\nhas a direct effect on the radial profile of the Maxwell stress in the $\\alpha$ parameter.\nThe dominant turbulent azimuthal magnetic field goes as $1\/r$, as shown in an\nazimuthal and time average in Fig. 12, top left.\nThe saturated turbulent field is 4 times lower than the initial azimuthal field.\nAll values are normalized to the initial gas pressure at 5 AU at the midplane\nand the radial profiles are again mass weighted.\nThe vertical profile shows a constant distribution around $\\pm 2$ scale heights \nfrom the midplane until it decreases with height (Fig. 12, top right).\nIn contrast, the radial and $\\theta$ components show a local\nminimum at the midplane with a peak of turbulent magnetic field slightly\nabove 2 scale heights. \\\\\nThe turbulent magnetic fields are around 2 orders of magnitude larger than the\nmean fields. \nThe vertical profiles of mean magnetic fields over height are presented in Fig. 12, bottom right.\nThe radial magnetic field is anti-symmetric to the midplane and correlated\nwith the dominating azimuthal component.\nThe distribution of mean magnetic fields is connected to the \"butterfly\"\noscillations.\n\n\\subsubsection{Butterfly structure}\nThe butterfly pattern is a general property of MRI turbulence and \nwas found in many local and global simulations, latest by\n\\citet{gre10}, \\citet{fla10}, \\citet{sor10} and \\citet{dzy10}.\nThe \"butterfly\" pattern becomes visible for the mean $B_\\phi$ evolution, \nplotted over disk height and time.\nIn Fig. 
13, bottom, we plotted the $B_\\phi$ component of the magnetic field\naveraged over a small radius (4 - 5 AU) and over azimuth for model FO, left,\nand PO, right. \nWe see a clear \"butterfly\" pattern in both models. \nThis pattern is also visible in the total accretion stress with\ndoubled period (Fig. 1, bottom right). In comparison, the $\\pi\/4$ run shows \na less systematic and more violent butterfly picture. The amplitudes are stronger and \nit has mixed symmetry (Fig. 13, bottom right). Also the total magnetic flux\nevolution shows these properties for model PO (Fig. 13, top right). \nThe FO run presents a similar amplitude and period to the BO run. \nThe effect of the narrow azimuthal domain on the mean fields will be investigated in a follow-up work.\nThe reason for this butterfly structure and its role for the MRI is still under discussion. \nRecent studies show the connection to the MHD dynamo \\citep{gre10}\nand magnetic buoyancy \\citep{shi10}.\n\n\\begin{figure}\n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-12A.ps,scale=0.46}\n\\psfig{figure=FIG-12C.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-12B.ps,scale=0.46}\n\\psfig{figure=FIG-12D.ps,scale=0.46}\n\\end{minipage}\n\\label{mag_turb}\n\\caption{Top left: Time-averaged turbulent magnetic field over radius for\nmodel BO. The turbulent field adjusts to the force-free $r^{-1}$ profile.\nTop right: Time-averaged turbulent magnetic field over scale height for\nmodel BO. The dominating turbulent azimuthal field represents the same flat profile\n$\\pm 1.5$ scale heights around the midplane as the velocity (Fig. 6, top\nright). 
The turbulent radial and $\\theta$ components represent a different profile with a\nmaximum at 2.3 scale heights.\nBottom left: Magnetic energy power spectra $B_m^2\\cdot m$ for $\\pi\/4$ model PO (red dotted line), $2\\pi$ model FO\n(blue dashed line) and the high-resolution model BO (black solid line).\nThe profile follows the $m^{1.0}$ slope until the dissipation range.\nBottom right: Time-averaged mean magnetic field over height for\nmodel BO. The radial and azimuthal fields again show anti-correlation.\nThe anti-symmetry for the upper and lower hemisphere could be correlated\nwith an $\\alpha$-$\\Omega$ MHD dynamo.\nAll radial profiles are mass weighted. The time average is performed in time\nperiod II (Fig. 1, top left, green line).}\n\\end{figure}\n\\begin{figure} \n\\hspace{-0.6cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-13A.ps,scale=0.55}\n\\psfig{figure=FIG-13C.ps,scale=0.46}\n\\end{minipage}\n\\hspace{4.0cm}\n\\begin{minipage}{5cm}\n\\psfig{figure=FIG-13B.ps,scale=0.46}\n\\psfig{figure=FIG-13D.ps,scale=0.46}\n\\end{minipage}\n\\label{mag_spec2}\n\\caption{\nTop left: Total magnetic flux evolution integrated over the entire computational \ndomain (without the buffer zones) normalized to the initial flux $B_\\phi$\nin the $2\\pi$ run FO and $\\pi\/4$ run PO (top right).\nBottom: Contour plot of $B_{\\phi}$ over disk height and time. The value is averaged over\nazimuth and radius (4 AU to 5 AU). Local \norbits are calculated at 4.5 AU. Bottom left: model FO. Bottom right: model\nPO. The butterfly pattern becomes visible. 
The $\\pi\/4$ model shows irregular and stronger amplitudes.}\n\\end{figure}\n\n\n\n\n\n\\section{Discussion}\nFor a number of aspects, our Godunov method confirms results previously\nobtained with a finite difference method as presented by \\citet{fro06}.\n\\begin{itemize}\n\\item A minimum number of about 25 grid\ncells per scale height is needed to sustain turbulence, which was to be expected as both methods \npresent similar numerical behavior \\citep{flo10}. \nOtherwise the turbulent magnetic energy slowly decays in the nonlinear MRI\nevolution \\citep{fro06}. \nOur highest resolution model BO was able to sustain a constant level of\nturbulent stress for more than 400 inner orbits.\n\n\\item The toroidal magnetic net flux is quickly lost via an \n open vertical boundary. Then, there is an oscillating zero-net\nflux field present in the disk.\n\n\\item The disks show a two-layer structure of turbulence.\n\\end{itemize}\n\n\\subsection*{$\\alpha$-stress evolution}\nWe obtained a steady state $\\alpha$ value of about $5\\cdot10^{-3}$, which\nis comparable to the results obtained in \\citet{fro06}.\nThe time-averaged radial profile of $\\alpha$ follows $\\sqrt{r}$.\nThis profile can be explained by the choice of our\nradial pressure (density and temperature) profile, in combination\nwith the resulting magnetic field profile which is\nforce free, $|B'_\\phi| \\propto r^{-1}$.\nFor this magnetic field profile, the net radial magnetic force\nvanishes. 
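\nThis can be verified directly: for a purely toroidal field (in units with the magnetic permeability set to unity, as in our definition of $\\beta$), the radial component of the Lorentz force combines the magnetic pressure gradient and the tension term,\n\\begin{equation}\nF_r = -\\frac{d}{dr}\\left(\\frac{B_\\phi^2}{2}\\right) - \\frac{B_\\phi^2}{r},\n\\end{equation}\nwhich vanishes identically for $|B_\\phi| \\propto r^{-1}$.\n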
\nAny quasi-steady state of disk turbulence must display this profile,\notherwise large-scale radial readjustments in the density profile would occur.\nBoth $B'_\\phi$ and $B'_r$ determine the Maxwell stress $B'_\\phi B'_r $ to be $\\sim 1\/r^2$.\nFor the chosen $ \\partial \\ln{P} \/ \\partial \\ln {R} = -2.5$ this results in $\\alpha \\sim B'_\\phi\nB'_r\/P \\sim \\sqrt{r}$.\\\\\nOf course, this profile is only valid for well-ionized accretion disk\nregions.\nThe radial $\\alpha$ profile in protoplanetary disks remains an open\nquestion. \nThe ionization rate, and possibly the MRI\nactivity, will be a function of radius and height \\citep{sem04,dzy10}. Furthermore, the pressure scale height \nwill vary with radius, and this also changes the MRI evolution.\nBoth effects will lead to different saturation levels for MRI turbulence\nat different radii and thus also to different $\\alpha$ values. \n\\subsection*{Magnetic energy convergence}\n\n\\citet{dav10}, \\citet{shi10} and \\citet{fla10} show in local box simulations \nthat the large scale magnetic energy converges for a resolution\nbetween 32 and 64 grid cells per pressure scale height (\\citet{dav10}, Fig. 3). \nOtherwise, the large scale magnetic energy decreases with increasing\nresolution.\nAt the current state, we could only handle 25 cells per scale height in our global\nsimulations. We also expect large-scale \nconvergence of the magnetic energy with 1.5 (2.5) times higher resolution.\nFuture calculations shall complete this point, but will be five times more computationally\nexpensive.\\\\\nWe can already conclude that our global magnetic energy spectra as well as the effect of increasing \nresolution (Fig. 12, bottom left, FO to BO) look very similar\nto the results presented in local box simulations. 
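\nThe quoted cost factor follows from the scaling of an explicit, CFL-limited 3D scheme: increasing the linear resolution by a factor $f$ multiplies the number of cells by $f^3$ and reduces the time step by $f$, so\n\\begin{equation}\n{\\rm cost} \\propto f^{4}, \\qquad 1.5^{4} \\approx 5.\n\\end{equation}\n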
\nThe magnetic energy power spectra reveal that most of\nthe magnetic energy is deposited in the small turbulent scales.\nFor the model with a restricted azimuthal domain of $\\pi\/4$,\nthe largest energy scale is always the domain size.\n\\subsection*{Turbulent velocity}\nRecently observed turbulent velocities in TW Hya and HD 163296\n\\citep{hug10} agree nicely with our computed Mach numbers of 0.1 and 0.4 in the midplane and corona of the\ndisks.\\\\\nThe kinetic energy spectra as well as the velocity spectra along\nazimuth ($k_\\phi$ space) show a peak between $\\rm m=5$ and $6$ due to radial shear\nwaves.\nLatest results in local box MRI simulations presented\na $k^{-3\/2}$ slope for the kinetic energy power spectra \\citep{fro10}.\nAs in \\citet{fro10}, our power-law fit \napplies only to a small range in $k$ space.\nWe do not find a Kolmogorov-type slope of $k^{-5\/3}$ (Fig. 6).\nFurther studies are needed, including the $k_r$ and $k_{\\theta}$ directions.\\\\\nA Kolmogorov scaling was predicted for magnetic ISM turbulence by\n\\citet{gol95} and recently confirmed in numerical simulations by \\citet{ber10}.\nHowever, it only applies to the inertial range of incompressible\nisotropic turbulence. The driving of the turbulence via MRI,\nthe anisotropy of the turbulent eddies, the geometry and rotation of the disk \nand the compressibility of the gas\nmake it difficult to argue for a Kolmogorov scaling. \nWe therefore expect the spectrum to deviate from this simple law.\n\n\\subsection*{Two-phase disk structure}\nWe observe that the accretion disks establish a two-phase structure:\\\\\n- The midplane region between $\\pm$ two scale heights shows a roughly\nconstant turbulent RMS velocity of about $10\\%$ of the local sound speed,\nindependent of radius or height.\nPart of the RMS velocity occurs due to global shear waves which have radial\npeak velocities of up to $30\\%$ of the local sound speed. 
\nThe amplitude of the azimuthal fluctuations in the magnetic\nfield is also independent of height ($\\pm$ two scale heights around the midplane) but develops a \n$1\/r$ profile in radius.\nThe midplane region shows a broad distribution of plasma beta values, \n$\\beta = \\frac{2P}{B^2}$, with a mean value of about 500 and a full width at\nhalf maximum of two orders of magnitude.\\\\\n- In the coronal region, more than two scale heights above the midplane,\n the mean turbulent velocity reaches a Mach number of 0.5 with supersonic peaks up to 1.5. \nThe mean magnetic fields decrease in this region with height.\nThe disk corona shows a narrower distribution of the plasma beta values with most values between 1\nand 10. Here the magnetic fields are buoyant; gas and fields are\nexpelled from the disk.\nRelatively high plasma beta values ($\\beta > 1$) in the corona have been\nreported by \\citet{fro06} for global models of AMRI with open boundary. The\nmagnetic flux escapes through the vertical boundary with a remaining zero-net flux\nin the computational domain. This leads to the weakly magnetized corona (below equipartition).\\\\\n\n\n\\subsection*{Vertical outflow}\nOur models show an MRI-driven vertical outflow.\nAbove 2 scale heights, the gas flow is directed vertically and radially \noutward (Fig. 3). The outflow velocity of the gas (measured at the vertical\nboundary) is still subsonic.\nThe disk evaporation time at 5 AU was determined to be 2000 local orbits.\nThe launching region is located between 1.6 and 2 scale heights. 
\nThis result matches values obtained in local box simulations \\citep{suz09,suz10} with a\nvertical net-flux field.\\\\\nHowever, we are aware that a detailed study of the vertical outflows\nrequires \nsimulations with a much larger vertical extent to confirm that the gas is evacuated from the disk and does not\nreturn.\nThese simulations should then include the sonic point or even the Alfv\\'enic point to give\nfurther insight into disk-wind and disk-jet interacting regions.\n\n\\subsection*{Meridional flows}\nOur present work shows that the meridional\noutflow at the midplane is only present in HD simulations, e.g., in\nviscous simulations with an $\\alpha$ value assumed to be constant in time and space.\nFor our MHD models, we find time variations of the orbital frequency of\naround 50 local orbits, which are not present in the viscous disk models\nand which prevent a steady radial outflow.\nA similar result, the absence of a meridional flow in global MHD simulations, \nwas recently found by \\citet{fro11}.\\\\\nHowever, we confirm the more general picture of a viscous disk and \nshow that viscous disk models with a radial viscosity\nprofile can successfully reproduce the radial mass flow rate in global MRI turbulent stratified disks.\nClearly, the vertical mass flow cannot be described with such an HD\nmodel.\\\\\n\n\\subsection*{Mean field evolution}\nThe azimuthal MRI is self-sustaining in our zero-net flux simulations with\nopen boundaries.\nThe fact that the total flux oscillates around zero could be due\nto the generation of a mean poloidal magnetic field by a turbulent toroidal\nfield. 
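\nSchematically, such a loop can be written as an $\\alpha$-$\\Omega$ mean-field closure (a sketch only, not evolved in our simulations; $\\alpha_{\\rm dyn}$ denotes the dynamo coefficient, not the stress parameter):\n\\begin{equation}\n\\partial_t \\bar{B}_r \\sim -\\partial_z \\left( \\alpha_{\\rm dyn} \\bar{B}_\\phi \\right), \\qquad \\partial_t \\bar{B}_\\phi \\sim r \\frac{d\\Omega}{dr} \\bar{B}_r,\n\\end{equation}\nin which the shear regenerates the toroidal from the radial field and the turbulence closes the cycle.\n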
\nWe also observe an antisymmetric distribution of the mean magnetic fields in the upper and\nlower hemisphere, which could be an indication for the action of an MHD dynamo in our global\nsimulations.\\\\\nThe existence of an $\\alpha$-$\\Omega$ MHD dynamo and its role for accretion\ndisks was investigated by \\citet{bra95}, \\citet{zie00}, \\citet{arl01},\n\\citet{bra07}, \\citet{les08} and \\citet{bla10}.\nThe temporal oscillations of the mean azimuthal field, plotted \nover height and time (Fig. 13), generate a butterfly pattern. \nLatest results connect the butterfly pattern with \na dynamo mechanism \\citep{gre10}. \nWe also find a butterfly pattern with a period of 10 local orbits,\nindependent of the azimuthal extent.\nAdditionally, the butterfly structure is reflected in the temporal and\nspatial fluctuations of the mean turbulent stresses with doubled period. \nA change of sign of the mean azimuthal field occurs every five local orbits; at the same time the \n$\\alpha$-stresses show a minimum (Fig. 1, bottom right).\\\\\nThe magnetic energy as well as the mean field evolution have shown\nthat the $\\pi\/4$ model does not capture the correct properties \nof the larger scale simulations. \n\\citet{haw00} also studied full $2\\pi$ and restricted $\\pi\/2$ models of\naccretion tori. However, a detailed study of the impact of different \nazimuthal domain extents is still needed and will be covered in future work.\n\n\\section{Summary}\nWe have performed full $2\\pi$ 3D stratified global MHD simulations of \naccretion disks with the Godunov code PLUTO.\nOur chosen disk parameters represent well-ionized protoplanetary disk\nregions. We obtain a quasi-steady state zero-net flux MRI turbulence \nafter around 250 inner orbits.\n\n\\begin{itemize}\n\n\n\\item The second order Godunov scheme PLUTO including the HLLD Riemann\nsolver presents a nonlinear MRI evolution similar to that of finite difference\nschemes. 
About 25 grid cells per pressure\nscale height are also needed to reach self-sustaining MRI turbulence in global zero-net-flux\nazimuthal MRI simulations.\n\n\\item We observe a total $\\alpha$ parameter of about $5\\cdot10^{-3}$,\nwhich remains constant for at least 400 inner orbits and scales with $\\sqrt{r}$ for our adopted pressure and\ndensity profiles.\n\n\\item The turbulent magnetic fields show a $1\/r$ profile in radius, mainly\nvisible in the dominant toroidal magnetic field. This configuration is force free\nin the sense that there is no large-scale net force on the gas. This\nprofile determines the slope of the $\\alpha$ parameter.\n\n\\item The magnetic energy spectrum is similar to that in local box simulations.\nMost of the magnetic energy resides at the smallest resolved turbulent\nscales.\n\n\\item The kinetic energy spectrum as well as the velocity spectra peak at an\nazimuthal wavenumber between $\\rm m=3$ and $5$ due to shear waves, driving the\nradial velocity up to a Mach number of 0.3. We do not find a Kolmogorov-type\nscaling in $k_\\phi$ space.\n\n\\item The model with an azimuthal extent of only $\\pi\/4$ has most of the energy at the domain\nsize and does not show the same mean field evolution.\n\n\\item We observe a butterfly pattern with a period of ten local orbits, independent of\nthe azimuthal extent. The butterfly period also becomes visible in the\nMaxwell stress, at twice the frequency. The mean magnetic fields are\nantisymmetric between the two hemispheres.\n\n\\item At the midplane ($\\pm 2$ disk scale heights),\nour turbulent RMS velocity presents a constant Mach number of 0.1,\nindependent of radius.
In the corona ($> 2$ disk scale heights), the\nturbulent velocity increases up to a Mach number of 0.5 at 4 scale heights.\n\n\\item The turbulent magnetic fields at the midplane present a broad plasma\nbeta distribution with a mean of about 500, spread over $\\pm$ one order of magnitude.\nIn the corona the plasma beta is between unity and ten.\n\n\\item The turbulent and the mean velocities point vertically and\nradially outward in the disk corona ($> 2$ disk scale heights). We observe a\nsteady vertical outflow for the open boundary models, dominating the radial\naccretion flow.\n\n\\item We do not see a meridional flow pointing radially outward at the\nmidplane in our MHD models. However, we reproduce our total radial mass flow\nin 2D viscous disk simulations with a radially dependent $\\alpha$-viscosity.\n\n\\end{itemize}\n\n\\section{Outlook}\n\nThe simulations presented in this paper produced a large data set of about 10 TB,\nwhich we will continue to analyze with different goals.\nOne study will deal with a closer investigation of the dynamo properties of\nour global disk models. Another will analyze the turbulent spectra in\nmore detail to derive correlation times for the turbulence.\nWe will also fill the parameter space with $\\pi$ and $\\pi\/2$ models to\nidentify whether a restricted azimuthal domain is sufficient. Higher resolution is\nenvisioned to reach a resolution per scale height comparable to recent stratified local\nbox simulations. Finally, our global MHD model will be the workhorse for our\nfuture investigations of planet formation processes in circumstellar disks,\nsuch as collisions of boulders, planetesimal formation, and planet migration.\\\\\nIn future runs, we also plan to use non-ideal MHD to include more realistic\nmagnetic Prandtl and Reynolds numbers, in order to understand the occurrence and\nsaturation level of the turbulence.
Improving the thermodynamics is also\nnecessary in future work to treat the proper ionization of the disk, for example by\ncapturing $p\\Delta V$ terms or magnetic dissipation as heat input.\n\n\\acknowledgments\nWe thank Andrea Mignone for very useful discussions about the numerical setup.\nWe thank Sebastien Fromang for the helpful comments on the global models and\non the manuscript.\nWe thank Alexei Kritsuk for the discussion about turbulent spectra.\nWe also thank Willy Kley for the comments on the viscous model.\nWe thank Frederic A. Rasio and an anonymous referee for the fast and very professional processing of\nthis work.\nH. Klahr, N. Dzyurkevich and M. Flock have been supported in part by the\nDeutsche Forschungsgemeinschaft DFG through grant DFG Forschergruppe 759\n\"The Formation of Planets: The Critical First Growth Phase\". Neal Turner was supported by a\nNASA Solar Systems Origins grant through the Jet Propulsion\nLaboratory, California Institute of Technology, and by an Alexander\nvon Humboldt Foundation Fellowship for Experienced Researchers. Parallel\ncomputations have been performed on the PIA cluster of the Max Planck\nInstitute for Astronomy Heidelberg as well as the GENIUS Blue Gene\/P cluster,\nboth located at the computing center of the Max Planck Society in Garching.\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{s_intro}\n\n\n\nGlobular clusters (GCs) host multiple populations of stars characterized by different spectroscopic and photometric signatures.
Anti-correlations between the abundances of several elements such as C-N, Na-O, and sometimes Mg-Al have been reported at the surface of various types of stars, from the main sequence (MS) to the red giant branch (RGB) and the asymptotic giant branch (AGB) \\citep[e.g.,][]{Cohen1978,Peterson1980,sneden92,Kraft1994,Thevenin2001,gratton07,Gratton2019,Meszaros2015,carretta15,Carretta2019,Johnson2016,Pancino2017,Wang2017,Masseron2019}. These differences in chemical composition are thought to explain the various sequences observed in color-magnitude diagrams (CMDs) built with specific filters sensitive to atomic and molecular lines of the elements listed above \\citep[e.g.,][]{bedin04,piotto07,Bowman2017,nardiello18,marino19}. The discovery of these peculiarities (variations in chemical composition and discrete sequences in CMDs) turned GCs from textbook examples of clusters born in a single star forming event into complex structures the origin of which remains elusive.\n\n\nThe standard explanation\\footnote{Alternative possibilities exist but all rely at least partly on nucleosynthesis products of AGB or massive stars, see \\citet{marcolini09,elmegreen17}.} to these puzzling properties is the pollution of the original proto-cluster gas by a population of rapidly evolving stars more massive than the present-day GC members. Indeed, the abundance patterns observed are typical outcomes of nucleosynthesis at about 75 MK. More precisely, hydrogen burning through the CNO cycle, together with the Ne-Na and the Mg-Al chains, fully explains the anti-correlations \\citep{1988ATsir1525...11K,Denisenkov1990,prantzos07,Prantzos2017}. Potential polluters experiencing such nucleosynthesis phases are $\\sim$5-7.5~M$_{\\sun}$\\ AGB stars \\citep[e.g.,][]{ventura01,Ventura2009}, massive stars either rotating fast \\citep{decressin07,DecressinCM2007}, in binary systems \\citep{demink09,Izzard2013}, or in a red supergiant phase \\citep{szecsi19}, and super-massive stars \\citep{denis14}. 
Whatever the nature of the polluter, these must have been objects present at early times of the GC history that spread their nucleosynthesis products in the surrounding gas and out of which the stellar populations we see today were formed. Depending on the degree of mixing of the nucleosynthesis products with pristine gas, a range of chemical composition is predicted for the newly formed stars. The polluters have long disappeared because of their relatively high mass, and only low-mass stars either from the initial population formed out of original proto-GC gas or from the second population formed out of polluted material are still present. We usually refer to these two populations of stars as 1P (or 1G) and 2P (or 2G), standing for first and second population (generation), respectively.\n\nOne of the key differences in the predictions of the scenarios that invoke the different polluters listed above is the amount of helium that is inevitably ejected together with other nucleosynthesis products. Indeed, as hot hydrogen-burning is needed to explain the observed abundance patterns, helium is naturally produced and released too. On one hand, the fast-rotating massive star (FRMS) models of \\citet{decressin07}, developed specifically to reproduce the observations of the GC NGC~6752, predict a wide range of Y for the ejecta, up to Y=0.8, and the test case\\footnote{The study focuses on one binary system made of a 20~M$_{\\sun}$\\ primary and a 15~M$_{\\sun}$\\ secondary star orbiting each other on a 12-day orbit, and with individual rotation periods synchronized on the orbital period.} of binary evolution presented by \\citet{demink09} also predicts a wide range of Y, up to 0.63.\nOn the other hand, models of \\citet{doherty14} indicate that nucleosynthesis in massive AGB stars should produce material with helium mass fraction of about 0.35--0.40, as a result of second dredge-up.
Finally, supermassive stars\ncan potentially release material rich in hot hydrogen-burning products but only mildly enriched in helium \\citep{denis14}, with a helium mass fraction close to that of the proto-cluster gas in the case where they form through the so-called runaway collision scenario \\citep{Gieles2018}.\n\nMeasuring the helium surface abundance from spectroscopy is not possible in most GC stars. Only hot horizontal branch (HB) stars display weak \\ion{He}{i} lines, and these are difficult to model and interpret. However, the helium surface abundance in these stars, with effective temperatures higher than $\\sim$11 000~K, is affected by atomic diffusion \\citep[e.g.,][]{Michaud08,Quievy09}. Thus, the measured helium no longer represents the original chemical composition of the stars. The vast majority of GC stars are too cool to display any helium line in their spectra. Nevertheless, helium enrichment is detected in a few hot HB stars with $\\Delta$Y ---the difference between the highest Y of 2P stars and Y of 1P stars--- generally not larger than $\\sim$0.1 \\citep{villanova12,marino14}, although \\citet{pasquini11} and \\citet{dupree13} report a Y difference of up to 0.17 between two HB stars, of NGC~2808 and $\\omega$~Cen, respectively.\n\nA change in helium mass fraction affects the internal structure as well as the color and the brightness of stars because the opacity is modified. As a consequence, stars with the same mass, age, and metallicity but different Y have different effective temperatures ($T_{\\rm{eff}}$; the higher Y, the higher $T_{\\rm{eff}}$; e.g., \\citealt{IbenFaulkner1968,chantereau15,CassisiSalaris2020}). Their spectral energy distributions (SEDs) therefore peak at different wavelengths: stars with high Y have more flux at shorter wavelengths, and thus bluer colors. Consequently, they are located to the left of stars with smaller Y in CMDs (e.g., \\citealt{sbordone11} and Sect.~\\ref{s_sed}).
Variations in the helium content naturally lead to a widening of classical branches (MS, RGB, HB, AGB) in CMDs \\citep[e.g.,][]{Rood1973,Norris1981,DAntonaCaloi2004,DAntona2005ApJ,chantereau16}. In addition, the associated variations in (among others) C, N, and O abundances impact specific colors built with photometric filters encompassing lines sensitive to these elements. This further increases the separation between stars in CMDs, leading to the observed multiple and discrete sequences \\citep[e.g.,][]{Lardo2012,marino19}. \n\nComparison of observed and theoretical colors in filters mostly sensitive to $T_{\\rm{eff}}$, and thus Y, has been used to indirectly constrain the amount of helium present at the surface of GC stars \\citep[e.g.,][]{piotto07,milone15}. \\citet{king12} report a maximum helium mass fraction of about 0.39$\\pm$0.02 in $\\omega$~Cen based on the analysis of MS stars. Using the Hubble Space Telescope (HST) ultra-violet (UV) survey of GCs \\citep[HUGS][]{piotto15,nardiello18}, \\citet{milone18} determined Y in 57 GCs. $\\Delta$Y ranges from nearly 0 to 0.124, corresponding to a maximum Y of about 0.38. \\citet{milone18} also show that $\\Delta$Y correlates with the cluster mass, with higher Y being determined in more massive clusters.\n\nAnother indirect constraint on the helium content of GCs comes from the morphology of their HB. Its different shape in clusters with similar general properties, such as M3 and M13 \\citep[e.g.,][]{ferraro97}, is difficult to explain. A spread in Y of HB stars is a viable possibility \\citep{rood73} and quantitative determinations give $\\Delta$Y between 0.02 and 0.15 on the HB, depending on the cluster \\citep{Caloi2005,lee05,dicri10,valcarce16,tailo16,denis17,vandenberg18,chantereau19}. \n\nThe presence of hot HB stars in GCs of early-type galaxies is also thought to be responsible for the existence of a so-called UV upturn \\citep{gr90}. 
This feature refers to the increase of flux below $\\sim$2500 \\AA\\ in galaxies that no longer form stars and that are made of low-mass MS and post-MS stars. These objects have SEDs that rapidly drop shortward of about 3500~\\AA. Hot HB stars, such as those seen in Galactic GCs, may explain the UV fluxes. \\citet{ali18} argued that if such hot HB stars are present because of a high helium content (as suggested by \\citealt{Meynet2008} for elliptical galaxies), there should be a redshift above which they would no longer be observed, having not yet sufficiently evolved. The position of this transition redshift can be used to constrain Y in hot HB stars. \\citet{ali18} showed that a value of $\\sim$0.45 would be compatible with the observed disappearance of the UV upturn with redshift.\n\nThus, modern estimates of Y in GCs indicate that the very helium-rich stars predicted in particular by the FRMS scenario are not detected. However, a key specificity of the FRMS model highlighted by \\citet{chantereau15} is that such He-rich objects evolve faster and differently compared to more classical stars. In particular, \\citet{chantereau16} showed that in a GC that would have been formed under the FRMS scenario, the distribution of stars quickly falls as Y increases (at ages typical of GCs, i.e., 9 to 13.5 Gyr). In practice, there should be few to no He-rich stars on the RGB and AGB of the cluster NGC~6752, for which the models of Chantereau et al. were tailored. This also naturally explains the lack of Na-rich AGB stars in some GCs \\citep{Campbell2013,Wang2016,Wang2017}, although observations reveal that the presence of 2P AGB stars can be affected by more than one factor \\citep{Wang2017}.\n\nThe question therefore naturally arises as to whether the absence of very He-rich stars (i.e., Y$>$0.4) is a robust observational fact, or whether these stars have simply escaped detection due to their exceptionally small number.
In view of the discriminating nature of the helium content for various scenarios, it is important to investigate this potential issue, which we aim to do in the present study. In practice, we build on the work of \\citet{chantereau16} to compute synthetic clusters with the predicted distribution of multiple populations for NGC~6752. We compute synthetic spectra consistently with isochrones, produce synthetic photometry and CMDs, and perform determinations of the maximum Y values of these synthetic clusters in order to see if we recover or miss the highly enriched populations predicted by the model of Chantereau et al.\n\nOur paper is organized as follows: Sect.~\\ref{s_specphotom} presents the computation of synthetic spectra and photometry. We then discuss the behavior of our models with special emphasis on the effects of surface abundances on colors. We subsequently compare our predicted photometry to observations in various CMDs before moving to the determination of the maximum helium content in NGC~6752 and synthetic clusters in Sect.~\\ref{s_maxHe}. We discuss our results in Sect.~\\ref{s_disc} and give our conclusions in Sect.~\\ref{s_conc}.\n\n\\section{Spectral synthesis and synthetic photometry}\n\\label{s_specphotom}\n\nIn this section we first present our computations of synthetic spectra and the associated photometry (Sect.~\\ref{s_meth}). We then describe the effects of stellar parameters and surface abundances on both the SEDs and synthetic colors (Sect.~\\ref{s_sed}). We build synthetic CMDs and compare them with observations (Sect.~\\ref{s_compobs}).\n\n\n\\subsection{Method}\n\\label{s_meth}\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=9cm]{hrd_theo.eps}\n\\caption{Hertzsprung-Russell diagram with the isochrones of \\citet{chantereau15}. Different symbols and colors stand for different initial composition, especially different helium mass fraction. 
Symbols correspond to the points for which atmosphere models have been calculated.}\n\\label{hr_theo}\n\\end{figure}\n\nWe proceeded as in \\citet{martins20} to compute synthetic photometry. We relied on the evolutionary tracks and isochrones computed by \\citet{chantereau15} with the code STAREVOL.\nThese models are based on the FRMS scenario \\citep{decressin07} for the formation of multiple stellar populations in GCs. Each isochrone is characterized by a set of abundances that directly comes from the FRMS predictions. We refer to \\citet{chantereau15} for details. Each set of abundances is tailored to reproducing 1P and various 2P stars in NGC~6752, in which the abundances vary according to different degrees of pollution by FRMS. Figure~\\ref{hr_theo} shows the Hertzsprung-Russell diagram (HRD) with the isochrones we consider in the present work. Each isochrone is labeled according to its helium mass fraction (Y), but we stress that abundances of other elements are also varied (in particular, nitrogen is increased and carbon and oxygen are depleted when Y increases; see Appendix~\\ref{ap_chem}). We stress that the isochrones have been recomputed for a metallicity of $[Fe\/H]=-1.53$, which is slightly higher than the value of -1.75 used in \\citet{chantereau15}. This change was made to better match the metallicity of NGC~6752. \n\nAlong each isochrone we computed atmosphere models and synthetic spectra at ten points (see Fig.~\\ref{hr_theo}) using the codes ATLAS12 \\citep{kur14} and SYNTHE \\citep{kur05}. Our computations include the so-called predicted lines, that is, lines for which at least one of the energy levels comes from quantum mechanics calculation and not from laboratory measurements. These lines thus have approximate wavelengths, but their opacities have been shown to be of prime importance to reproducing observed SEDs \\citep{coelho14}. We adopted a microturbulent velocity of 1~km~s$^{-1}$\\ in all our calculations. 
\n\nTo convert the HRD into CMDs we computed synthetic photometry in the Vegamag system: \n\n\\begin{equation}\n m_X = -2.5 \\log (F_X\/F^{Vega}_X) = -2.5 \\log F_X + ZP^{Vega}_X\n\\label{eq_mX}\n,\\end{equation}\n\n\\noindent where $m_X$ is the magnitude in the X filter and $ZP^{Vega}_X$ the zero point. The average flux $F_X$ over the passband X was calculated according to \\citet{bohlin14}:\n\n\\begin{equation}\nF_X = \\frac{\\int \\lambda F_{\\lambda} R_X d\\lambda}{\\int \\lambda R_X d\\lambda}\n,\\end{equation}\n\n\\noindent where $R_X$ is the transmission curve of filter X\\footnote{Transmission curves were retrieved from the Spanish VO \\url{http:\/\/svo2.cab.inta-csic.es\/svo\/theory\/fps3\/}.}.\nThe zero point $ZP^{Vega}_X$ in Eq.~\\ref{eq_mX} was calculated using the Vega STScI reference spectrum\\footnote{File alf\\_lyr\\_stis\\_010.fits from \\url{ftp:\/\/ftp.stsci.edu\/cdbs\/current_calspec\/}.} and the appropriate transmission curve.\n\nFinally, for comparison with observations, we assumed a distance modulus of 13.18 for NGC~6752 based on the \\emph{Gaia} DR2 determination \\citep{helmi18}. This value is consistent with those found by \\citet{harris96} (13.19), \\citet{renzini96} (13.05), and \\citet{gratton03} (13.24).\nPrior to synthetic photometry calculations, we reddened our SEDs using a color excess $E(B-V) = 0.060$ and R$_V$=3.2, adopting the extinction law of \\citet{ccm89}.\nThis choice best reproduces the turn-off (TO) and sub-giant region in the m814W versus (m606W-m814W) CMD (see for instance Fig.~\\ref{cmd}). The adopted color excess is slightly larger than the value of 0.046$\\pm$0.005 reported by \\citet{gratton05} but close to the value of \\citet{shleg98}: 0.056.\n\n\n\n\\subsection{Spectral energy distributions}\n\\label{s_sed}\n\nIn this section we describe the effects of chemical composition on the SED. The goal is to identify spectral regions that are affected by certain species.
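The band-averaged flux and Vega zero point defined in the previous subsection can be sketched numerically as follows. The band-pass and SEDs below are toy stand-ins (not the real transmission curves or the ATLAS12\/SYNTHE spectra used in this work), and the last line checks the adopted distance modulus against the implied distance:

```python
import numpy as np

# Toy wavelength grid [Angstrom], filter transmission, and SEDs [erg/s/cm^2/A].
# All numerical values are hypothetical stand-ins.
wl = np.linspace(5000.0, 7000.0, 2001)
R_X = np.exp(-0.5 * ((wl - 6000.0) / 400.0) ** 2)
F_star = 3.0e-9 * (6000.0 / wl) ** 2
F_vega = 3.44e-9 * (5556.0 / wl) ** 4

def band_avg_flux(F, R, wl):
    """Photon-weighted band average F_X = int(lam*F*R dlam) / int(lam*R dlam)."""
    num = np.sum(0.5 * (wl[1:] * F[1:] * R[1:] + wl[:-1] * F[:-1] * R[:-1]) * np.diff(wl))
    den = np.sum(0.5 * (wl[1:] * R[1:] + wl[:-1] * R[:-1]) * np.diff(wl))
    return num / den

# m_X = -2.5 log10(F_X / F_X^Vega) = -2.5 log10(F_X) + ZP_X^Vega,
# so the Vega zero point is ZP_X^Vega = +2.5 log10(F_X^Vega).
F_X = band_avg_flux(F_star, R_X, wl)
ZP_X = 2.5 * np.log10(band_avg_flux(F_vega, R_X, wl))
m_X = -2.5 * np.log10(F_X) + ZP_X

# Distance modulus check: mu = 13.18 gives d = 10^((mu + 5)/5) pc, about 4.3 kpc.
d_pc = 10.0 ** ((13.18 + 5.0) / 5.0)
```

By construction, Vega itself has m_X = 0 in every band with this choice of zero point.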
\nWe refer to \\citet{sbordone11} or \\citet{milone20}, among other recent studies, for similar descriptions.\n\nFigure~\\ref{comp_sed_L0p85} shows a selection of models at the bottom of the RGB, with luminosities equal to $10^{0.85}$~L$_{\\odot}$ (see Fig.~\\ref{hr_theo}). These spectra have different $T_{\\rm{eff}}$, surface gravities, and chemical compositions (see Table~\\ref{tab_chem}). Because of the higher $T_{\\rm{eff}}$\\ in higher Y models, the SED peak is shifted towards shorter wavelengths. In the optical region, the slope of the SED becomes steeper (faster decline with wavelength), which translates into bluer colors. Variations in individual abundances of light elements (CNO) also change the strength of absorption lines, mostly below 4500~\\AA.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=9cm]{comp_sed_L0p85.eps}\n\\caption{Spectral energy distributions of some of the models with \\ifmmode \\log \\frac{L}{L_{\\sun}} \\else $\\log \\frac{L}{L_{\\sun}}$\\fi~=~0.85 but different chemical compositions (and thus different $T_{\\rm{eff}}$\\ and $\\log g$). Different colors correspond to different models, identified by their helium mass fraction.}\n\\label{comp_sed_L0p85}\n\\end{figure}\n\nTo better separate the effects of light elements and helium on the SED, we show in Fig.~\\ref{comp_CNO} examples of spectra in which the abundances of C, N, and O have been varied by a factor of three, all other parameters being kept constant. A reduction of the carbon abundance translates into a weaker CH absorption between 4200 and 4400 \\AA. Conversely, an increase in the nitrogen content leads to a stronger NH absorption at 3300-3500 \\AA. The CN band around 3900 \\AA\\ is less affected. The OH absorption between 2800 and 3300 \\AA\\ reacts to a change in the oxygen content. Consequently, photometry in the HST filters F275W and F336W is affected by C, N, and O abundances \\citep{sbordone11,milone18,CassisiSalaris2020}.
These filters have been used to build the so-called super color C$_{410}$=(m275W-m336W)-(m336W-m410M) \\citep{milone13}. This photometric diagnostic has been shown to be a powerful tool for separating multiple populations (see also following section). Filters F395N, F467M, F606W, F814W and to some extent F410M are relatively insensitive to variations in C, N, and O abundances. An important result of Fig.~\\ref{comp_CNO} is that C, N, and O abundances do not affect the global shape of the SED, but only specific wavelength regions that include molecular bands \\citep{milone13,Dotter2015}. \n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=9cm]{comp_sed_effectCNO.eps}\n\\caption{Spectral energy distribution of two models with similar parameters except the C, N, and O abundances: the red line shows a model with C\/H reduced by a factor three (top panel), N\/H increased by a factor three (middle), and O\/H reduced by a factor three (bottom) compared to the initial model in black. The other parameters are: $T_{\\rm{eff}}$\\ = 5375 K, $\\log g$\\ = 3.37, Y=0.248. 
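Because C$_{410}$=(m275W-m336W)-(m336W-m410M)=m275W$-$2m336W$+$m410M, any magnitude change confined to F336W (such as a deeper NH band) enters the super color twice, which is why C$_{410}$ separates N-normal from N-enhanced stars so efficiently. A minimal numerical illustration, with purely hypothetical magnitudes:

```python
def c410(m275, m336, m410):
    """Super color C410 = (m275W - m336W) - (m336W - m410M) of Milone et al. (2013)."""
    return (m275 - m336) - (m336 - m410)

# Hypothetical magnitudes for a reference star and an analog whose only
# difference is a 0.10 mag deeper NH absorption in F336W (fainter m336W).
ref = c410(16.80, 15.90, 15.40)
n_rich = c410(16.80, 15.90 + 0.10, 15.40)
shift = n_rich - ref  # the F336W change counts twice: -0.20 mag
```

The doubled sensitivity comes purely from the algebra of the color combination, independent of any model assumption.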
The gray solid lines indicate the transmission curves of the HST WFC3\/UVIS F275W, F336W, F395N, F410M, F467M, and ACS\/WFC F606W and F814W.}\n\\label{comp_CNO}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=9cm]{comp_sed_L0p85_effectcompo.eps}\n\\caption{Spectral energy distributions of three models with \\ifmmode \\log \\frac{L}{L_{\\sun}} \\else $\\log \\frac{L}{L_{\\sun}}$\\fi=0.85: one with Y=0.248, $T_{\\rm{eff}}$\\ = 5375 K and $\\log g$\\ = 3.37 (black line); one with Y=0.400 and the same $T_{\\rm{eff}}$\\ and $\\log g$\\ (green line); and the model with Y=0.400 and the associated $T_{\\rm{eff}}$\\ and $\\log g$\\ (5549 K and 3.31, respectively; red line).}\n\\label{comp_sed_L0p85_compo}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=9cm]{comp_Tstru_L0p85_effectcompo.eps}\n\\caption{Temperature as a function of Rosseland optical depth in the three models leading to the SEDs shown in Fig.~\\ref{comp_sed_L0p85_compo}.}\n\\label{comp_Tstru_L0p85}\n\\end{figure}\n\n\n\nTo further quantify the effect of chemical composition (and in particular of Y) on the SED, in Fig.\\ \\ref{comp_sed_L0p85_compo} we compare the SEDs of three models with the same luminosity but different chemical compositions. We focus on the models with L=$10^{0.85}$~L$_{\\odot}$. We have chosen the models of the Y=0.248 and Y=0.400 sequences, to which we add a third model with the effective temperature and gravity of the Y=0.248 sequence, but the chemical composition of the Y=0.400 sequence. Hence, we are able to disentangle the effect of chemical composition on the effective temperature and $\\log g$\\ on the one hand, and on the SED on the other. Comparing the two models with $T_{\\rm{eff}}$\\ = 5375 K, we find the same trends as the ones drawn from Fig.\\ \\ref{comp_CNO}. It is mainly the CH, CN, NH, and OH bands that are affected. The general shape of the SED is otherwise relatively similar in the two models.
Hence, models with different helium content but similar $T_{\\rm{eff}}$\\ and surface gravity have very similar SEDs, except in regions containing lines involving C, N, and O. If we now compare the two models with the same composition but different effective temperature and surface gravity, we see a major modification of the SED. The hottest model has more (less) flux blueward (redward) of 6000 \\AA. This is mainly governed by the change in $T_{\\rm{eff}}$\\ which itself is due to the different helium content. Figure\\ \\ref{comp_Tstru_L0p85} shows the temperature structure in the three models. Changing only the chemical composition leads to a higher temperature in the inner atmosphere, but a similar structure in the outer parts. This effect is entirely dominated by the difference in the helium content: computing a model with $T_{\\rm{eff}}$\\ = 5375 K, $\\log g$\\ = 3.37, and the helium content of the Y=0.400 sequence but the abundances of all elements heavier than He from the Y=0.248 sequence leads to a temperature structure which is almost indistinguishable from that of the green model in Fig.\\ \\ref{comp_Tstru_L0p85}. \nIf we now change both the chemical composition and the effective temperature\/surface gravity (red model), we obtain a global increase in the temperature at all depths in the atmosphere, which translates into a change of the SED. \n\nFrom these comparisons we conclude that the global shape of the SED is mainly sensitive to the effective temperature which itself depends critically on the helium content and its effect on opacities. The abundances of heavier elements affect certain portions of the SED but not its global shape. Variations in Y thus affect colors sensitive to the global shape of the SED. 
The filters F395N, F410M, F606W, and F814W should be relatively free of contamination by absorption lines of light elements, and can therefore be used to study the helium content \\citep{milone13,milone18,Dotter2015,CassisiSalaris2020}.\n\n\\subsection{Comparison with observed CMDs}\n\\label{s_compobs}\n\n\n\nIn this section we compare our synthetic photometry to the observed CMDs of NGC~6752. We use the data of \\citet{milone13} and of the HUGS survey \\citep{piotto15,nardiello18}. The goal is to see if synthetic colors reproduce observations and to identify potential failures.\n\nThe bottom-right panel of Fig.~\\ref{cmd} shows the m814W versus (m606W-m814W) CMD. Taking the Y=0.248 isochrone as a reference, we see that on average our synthetic photometry is able to reproduce the shape of the observed sub-giant branch and the bottom of the RGB. More specifically, the synthetic isochrone is located near the red part of the envelopes that define these branches, which are broadened because of intrinsic dispersion and the presence of multiple populations. The synthetic isochrone reproduces the TO relatively well.\nOn the upper RGB, the synthetic isochrone appears slightly too red compared to the observations, which could be due to the fact that the stellar evolution models were computed with an atmosphere treated in the gray, plane-parallel, and Eddington approximations. \n\n\n\n\\begin{figure*}[]\n \\centering\n\\includegraphics[width=0.4\\textwidth]{cmd_275W_814W.eps}\n\\includegraphics[width=0.4\\textwidth]{cmd_336W_814W.eps}\n\n\\includegraphics[width=0.4\\textwidth]{cmd_410M_814W.eps}\n\\includegraphics[width=0.4\\textwidth]{cmd_C410_814W.eps}\n\n\\includegraphics[width=0.4\\textwidth]{cmd_467M_814W.eps}\n\\includegraphics[width=0.4\\textwidth]{cmd_606W_814W.eps} \n\\caption{Color-magnitude diagrams: In all panels, the ordinate axis is the magnitude in the ACS F814W filter. 
The abscissa axis is the difference between magnitude in the X filter and the magnitude in the ACS F814W filter, where X is the WFC3 F275W filter (top left panel), WFC3 F336W filter (top right panel), WFC3 F410M filter (middle left panel), WFC3 F467M filter (bottom left panel), and ACS 606W (bottom right panel). In the middle right panel, the abscissa axis is the color difference (m275W-m336W)-(m336W-m410M). In all panels, different symbols and colors correspond to models with different chemical composition, tagged by their helium content (Y). Gray points correspond to the photometric data of \\citet{milone13}, except in the bottom right panel where they are from the HUGS survey \\citep{piotto15,nardiello18}.}\n\\label{cmd}\n\\end{figure*}\n\nFigure~\\ref{cmd} shows several CMDs involving filters F275W, F336W, F410M, F467M and F814W. These filters are sensitive to the chemical composition as described in Sect.~\\ref{s_sed}. \nFocusing on the Y=0.248 isochrone we find, in general, relatively good qualitative agreement with observations. We note that in the m814W versus (m410M-m814W) diagram, the synthetic isochrone is located slightly too much to the red on the RGB (at least when compared to the m814W-(m275W-m814W) and m814W-(m336W-m814W) CMDs). In the (m336W-m814W) color the Y=0.248 isochrone is located roughly in the middle of the observed branch, for reasons that are explained in the following paragraph. The observed RGB is redder than the synthetic isochrone in the m814W versus C$_{410}$ diagram. This is likely the result of the small offsets seen in other CMDs that are amplified by the super color. All these (relatively) small offsets between observations and synthetic isochrones are due to different photometric calibrations between observations and synthetic colors and to limitations in the stellar evolution and spectral modeling. 
This likely has a small impact on the determination of the helium content of multiple populations, because only relative color differences are used and not absolute ones. However, it is important to keep in mind that full consistency is not achieved between observations and synthetic photometry.\n\n\nIncreasing Y globally leads to a shift of all isochrones to the left because of the increased $T_{\\rm{eff}}$. An exception is the m814W-(m336W-m814W) CMD where the isochrones move first to the right (Y=0.260 to Y=0.300) and then move back to the blue from Y=0.300 to Y=0.600. This is caused by the strong sensitivity of the F336W filter to the nitrogen abundance (see Sect.~\\ref{s_sed}). The N abundance increases rapidly when Y increases, which causes a deepening of the NH absorption band. Consequently, the (m336W-m814W) color is redder. At the same time, $T_{\\rm{eff}}$\\ increases because of the higher helium content. But at first, the effect of nitrogen is stronger, leading to a redder color. When Y reaches $\\sim$0.300, the nitrogen increase is not sufficient to counter-balance the effect of the higher $T_{\\rm{eff}}$\\ and the colors become bluer again. This effect, specific to the F336W filter, is not seen in the TO region of the CMD because at the corresponding $T_{\\rm{eff}}$\\ there is no molecular NH band in the spectra of the stars investigated here. \n\n\n\n\\vspace{0.5cm}\n\nHaving presented and described our synthetic photometry of NGC~6752, we now turn to the main question tackled by this study: the maximum helium content.\n\n\n\\section{Maximum helium content}\n\\label{s_maxHe}\n\nIn this section, we investigate whether the current estimates of the maximum helium content of stars of the second population are true values or lower limits. 
More precisely, we study the possibility that highly enriched stars could be missed when studying multiple populations with HST photometry.\n\n\n\\subsection{Helium in NGC~6752}\n\\label{smax_ngc6752}\n\n\nIn a first step, we re-determine the maximum helium mass fraction difference in NGC~6752. We use a similar method to that of \\citet{milone18} which we describe below. Briefly, we select the extreme populations, that is, the least and most chemically enriched ones, from the chromosome map \\citep{milone17}. We then estimate the helium mass fraction difference between these two populations in various CMDs using the theoretical isochrones and synthetic photometry presented in Sects.~\\ref{s_meth} and \\ref{s_sed}.\n\nTo build the chromosome map we first create the m814W versus (m275W-m814W) and m814W versus C$_{410}$ diagrams shown in the top panels of Fig.~\\ref{fidchmap6752}. We subsequently define the red and blue so-called fiducial lines that bracket the distribution of stars along the RGB. These lines are defined manually by selecting points along the red and blue envelopes and by applying a spline function over the selected points. The width of the RGB is set to the difference between the two fiducial lines at a magnitude of 14.9, corresponding to 2.0 magnitudes above the TO, in agreement with the definition of \\citet{milone17}. For each star on the RGB in the m814W versus (m275W-m814W) and m814W versus C$_{410}$ diagrams, the quantities $\\Delta$(m275W-m814W) and $\\Delta (C_{410})$ are calculated according to equations 1 and 2 of \\citet{milone17}. We select stars with m814W magnitudes between 15.8 and 12.8 (dashed lines in Fig.~\\ref{fidchmap6752}) to cover the bottom of the RGB but avoid its part above the bump in the luminosity function where internal mixing can affect surface chemical composition \\citep[e.g.,][]{Briley1990,Shetrone2003,CZ2007,Lind2009,Henkel2017}. 
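The $\\Delta$ quantities just described can be illustrated with a short sketch. The snippet below follows the general form of equations 1 and 2 of \\citet{milone17} (a star's color normalized by its position between the red and blue fiducial lines and rescaled by the RGB width); the function name, argument layout, and sign convention are illustrative choices of ours, not taken from that paper.

```python
import numpy as np

def delta_quantity(m814, color, fid_mags, blue_col, red_col, m_ref=14.9):
    """Chromosome-map coordinate for one star (illustrative sketch).

    fid_mags, blue_col, red_col sample the blue and red fiducial lines;
    m_ref is the magnitude at which the RGB width is measured (14.9 for
    NGC 6752, i.e., 2.0 mag above the TO).
    """
    blue = np.interp(m814, fid_mags, blue_col)  # blue envelope at the star
    red = np.interp(m814, fid_mags, red_col)    # red envelope at the star
    width = (np.interp(m_ref, fid_mags, red_col)
             - np.interp(m_ref, fid_mags, blue_col))  # RGB width at m_ref
    # Position between the envelopes, rescaled by the RGB width.
    return width * (red - color) / (red - blue)
```

With this convention, a star lying on the red fiducial line gets a value of 0 and a star on the blue fiducial line gets the full RGB width.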
The resulting chromosome map is shown in the bottom panel of Fig.~\\ref{fidchmap6752}.\n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fid_chmap_ngc6752.eps}\n\\caption{\\textit{Top left panel}: m814W versus (m275W-m814W) CMD of NGC~6752. \\textit{Top right panel}: m814W versus C$_{410}$ diagram. In both panels, the red and blue solid lines are the fiducial lines along the RGB. The horizontal dashed lines indicate the magnitude bin considered to build the chromosome map shown in the bottom panel. The horizontal dotted line marks the m814W value at which the width of the RGB is measured (see text). \\textit{Bottom panel}: Chromosome map ($\\Delta (C_{410})$ versus $\\Delta$ (m275W-m814W)) on the RGB. The dotted lines define boxes in which stars are considered either 1Pe (bottom right) or 2Pe (top left).}\n\\label{fidchmap6752}\n\\end{figure}\n\n\nFollowing the concept of \\citet{milone17}, we manually select the extreme 1Pe and 2Pe stars from the chromosome map by defining boxes respectively near the right-most and left-most parts of the stars' distribution, as illustrated in Fig.~\\ref{fidchmap6752}. We then study the color (and thus Y) differences between these two extreme populations in three CMDs: m814W versus (m395N-m814W), m814W versus (m410M-m814W), and m814W versus (m467M-m814W). According to Sect.~\\ref{s_compobs} and \\citet{milone18} the colors (m395N-m814W), (m410M-m814W), and (m467M-m814W) depend almost exclusively on the helium content at the metallicity of NGC~6752. In addition, the color difference is the largest for the colors that involve filters located at sufficiently blue wavelengths to show sensitivity to effective temperature ---and thus Y--- variations (see Fig.~\\ref{comp_sed_L0p85} and middle panel of Fig.~5 of \\citealt{milone18}).\n\nFigure~\\ref{395m814_1G2G_6752} shows the m814W versus (m395N-m814W) diagram for the 1Pe and 2Pe stars (see Appendix \\ref{ap_dY} for the other CMDs). 
Building on \\citet{milone18} we define new fiducial lines along the two populations. We divide each distribution into m814W bins of 0.2 mag in width. In each bin, we calculate the median m814W and (m395N-m814W). We subsequently perform a boxcar averaging, replacing each median point by the average of its three closest neighbours. This gives the filled circles in Fig.~\\ref{395m814_1G2G_6752}. Finally, we run a spline function over these new points to obtain the fiducial lines for each of the two extreme populations. The color difference between these lines is estimated at six m814W magnitudes: 15.2, 14.9, 14.6, 14.3, 14.0, and 13.7. We do not consider brighter stars for reasons that will be explained further below. Section~\\ref{ap_dY} of the Appendix shows the CMDs using the F410M and F467M filters. \n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{delta395m814_ngc6752.eps}\n\\caption{m814W versus (m395N-m814W) CMD for the 1Pe (black) and 2Pe (red) populations selected in the chromosome map of Fig.~\\ref{fidchmap6752}. The filled circles are the points used to define the fiducial lines, which themselves are shown by the solid lines. See text for details. The horizontal dotted lines indicate the m814W magnitudes at which the color difference between the 2Pe and 1Pe fiducial lines is determined.}\n\\label{395m814_1G2G_6752}\n\\end{figure}\n\n\nOnce obtained, this set of six color differences is compared to theoretical values in Fig.~\\ref{Ydcol395_6752}. The latter are calculated from our synthetic photometry, including distance and extinction corrections appropriate for NGC~6752 as described in Sect.~\\ref{s_meth}. For each m814W magnitude, we determine the Y difference between the isochrones that match the (m395N-m814W) color difference determined from observations. We do this for the six selected m814W magnitudes and finally average the six determinations to yield the final Y difference. 
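The fiducial-line procedure above (0.2 mag bins, per-bin medians, three-point boxcar, interpolating function, color difference at the reference magnitudes) can be sketched as follows; a linear interpolation stands in for the spline used in the paper, and all variable names are illustrative.

```python
import numpy as np

def fiducial_line(m814, color, m_bright, m_faint, bin_width=0.2):
    """Fiducial line of one population: binned medians plus a 3-point boxcar."""
    m814, color = np.asarray(m814), np.asarray(color)
    edges = np.arange(m_bright, m_faint + bin_width, bin_width)
    med_m, med_c = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (m814 >= lo) & (m814 < hi)
        if sel.any():
            med_m.append(np.median(m814[sel]))
            med_c.append(np.median(color[sel]))
    med_m, med_c = np.array(med_m), np.array(med_c)
    # Boxcar averaging: each point is replaced by the mean over itself and
    # its two neighbours; the end points are kept as raw medians.
    smooth = np.convolve(med_c, np.ones(3) / 3.0, mode="same")
    smooth[0], smooth[-1] = med_c[0], med_c[-1]
    return med_m, smooth

def color_difference(fid_1pe, fid_2pe, ref_mags):
    """2Pe minus 1Pe color offset at the chosen reference magnitudes."""
    return (np.interp(ref_mags, *fid_2pe)
            - np.interp(ref_mags, *fid_1pe))
```

Averaging the Y values read off at the six reference magnitudes then yields the final Y difference, as described above.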
The standard deviation is taken as the uncertainty in this measurement. We perform this process for the three selected colors. The results are gathered in Table~\\ref{tab_he_ngc6752}. They are broadly consistent with the determination of \\citet{milone18}, who quote a maximum Y difference of 0.042$\\pm$0.004. We note that our uncertainties are much larger. We tested the effect of varying the size of the boxes used to select the 1Pe and 2Pe stars in the chromosome map. Increasing the size by 0.2 magnitudes does not affect the results. However, reducing the size (i.e., selecting fewer points, but at even more extreme positions) translates into an increase in $\\Delta$Y by between 0.015 and 0.020. At the same time, however, the uncertainties also increase by the same amount because of the reduced number of stars.\n\n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{Y_dcol395_ngc6752.eps}\n\\caption{Determination of the helium mass fraction difference between 1Pe and 2Pe stars for the six magnitudes (see labels in panels) defined in Fig.~\\ref{395m814_1G2G_6752}. For each panel the filled circles are the theoretical values calculated from our isochrones and synthetic photometry. The black solid line is a linear regression to these values. The vertical gray line highlights the measured color difference between 1Pe and 2Pe fiducial lines. 
The horizontal dashed line indicates the corresponding Y difference read from the black solid line.}\n\\label{Ydcol395_6752}\n\\end{figure}\n\n\n\n\\begin{table}[ht]\n\\begin{center}\n \\caption{Difference in the helium mass fraction Y between the 1Pe and 2Pe populations, for different colors and for the RGB and MS stars.}\n\\label{tab_he_ngc6752}\n\\begin{tabular}{lc}\n\\hline \ncolor & $\\Delta$Y \\\\ \n\\hline\nRGB & \\\\\n(m395N-m814W) & 0.039$\\pm$0.013 \\\\ \n(m410M-m814W) & 0.052$\\pm$0.011 \\\\ \n(m467M-m814W) & 0.068$\\pm$0.025 \\\\\n\\hline\nMS & \\\\\n(m395N-m814W) & 0.042$\\pm$0.004 \\\\ \n(m410M-m814W) & 0.047$\\pm$0.004 \\\\ \n(m467M-m814W) & 0.049$\\pm$0.049 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn anticipation of the following section, in Fig.~\\ref{fidchmap6752MS} and Table~\\ref{tab_he_ngc6752} we present the analysis of extreme populations on the MS of NGC~6752. The details of the analysis are presented in Sect.~\\ref{clu_W16}. The results indicate that the maximum helium mass fraction reported by \\citet{milone18} based on the RGB is also recovered for MS stars. These results are further discussed in Sect.\\ref{s_disc}.\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fidchmap6752MS.eps}\n\\caption{Same as Fig.~\\ref{fidchmap6752} but for the MS of NGC~6752.}\n\\label{fidchmap6752MS}\n\\end{figure}\n\n\n\n\\subsection{Synthetic clusters}\n\\label{smax_synthclu}\n\nIn a second step, we build synthetic clusters to investigate whether or not we underestimate the maximum helium difference between extreme populations. \n\nWe first start by building synthetic CMDs. In practice, we draw artificial points along the isochrones in the m814W versus (m275W-m814W), m814W versus (m336W-m814W), m814W versus (m410M-m814W), m814W versus (m395N-m814W), and m814W versus (m467M-m814W) diagrams (see Fig.~\\ref{cmd}). 
We assume a magnitude distribution in the F814W filter similar to that of NGC~6752 (see Fig.~\\ref{distrib814}). We build clusters made of 9000 stars, ensuring a consistent number of stars on the RGB compared to NGC~6752.\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{distrib814.eps}\n\\caption{Distribution of stars according to their m814W magnitude in NGC~6752 (black) and in a typical synthetic cluster (red). The latter distribution results from random sampling using a Gaussian distribution.}\n\\label{distrib814}\n\\end{figure}\n\n\nFor each m814W magnitude drawn along the isochrone (in each diagram) we add a correction to the color (x-axis in each diagram) to introduce a dispersion on the theoretical isochrone. We estimate the correction in the following way. We retrieve photometry of NGC~6752 from the HUGS survey\\footnote{\\url{https:\/\/archive.stsci.edu\/prepds\/hugs\/}} and build the m814W versus (m275W-m814W), m814W versus (m336W-m814W), and m814W versus (m435W-m814W) diagrams. We select stars at m814W magnitudes of 14.9 and 16.5 (in a bin of size 0.2 magnitude). For these subgroups of stars and for each color, (m275W-m814W), (m336W-m814W), and (m435W-m814W), we take the standard deviation with respect to the median point as representative of the dispersion on the RGB and MS. We subsequently use these values to introduce a dispersion in the synthetic colors. For that we randomly draw a color correction by means of a Gaussian distribution centered on zero and characterized by the standard deviation determined above.\nIn the absence of photometric error for the m395N, m410M, and m467M magnitudes, we assume the dispersion in the colors (m395N-m814W), (m410M-m814W), and (m467M-m814W) is the same as in (m435W-m814W). We perform this process for all isochrones, that is, for all chemical compositions. 
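The construction of a synthetic CMD described above can be sketched as follows; the isochrone, magnitude distribution, and dispersion values passed in are placeholders, whereas in the paper the m814W distribution mimics that of NGC~6752 and the color dispersions are measured from the HUGS photometry.

```python
import numpy as np

def draw_synthetic_cmd(iso_m814, iso_color, n_stars, mag_mu, mag_sigma,
                       color_sigma, seed=None):
    """Draw n_stars along one isochrone in one CMD (illustrative sketch).

    iso_m814 must be a numpy array sorted in increasing order (np.interp).
    """
    rng = np.random.default_rng(seed)
    # Gaussian magnitude distribution, clipped to the isochrone range.
    m814 = np.clip(rng.normal(mag_mu, mag_sigma, n_stars),
                   iso_m814.min(), iso_m814.max())
    # Isochrone color at each drawn magnitude, plus Gaussian scatter
    # mimicking the observed dispersion on the RGB and MS.
    color = (np.interp(m814, iso_m814, iso_color)
             + rng.normal(0.0, color_sigma, n_stars))
    return m814, color
```

Mixing populations then amounts to concatenating draws from several isochrones, for instance 3000 stars from the Y=0.248 isochrone and 6000 from an enriched one for a 9000-star cluster.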
We finally end up with a series of synthetic CMDs that are used to build synthetic clusters.\n\n\n\\subsubsection{Two chemical compositions}\n\\label{clu_2pop}\n\nWe build synthetic clusters by mixing populations of different chemical composition. We start with a cluster made up of one-third stars with Y=0.248 and two-thirds stars with Y=0.330. For the m814W versus (m275W-m814W) CMD, we therefore select 3000 stars from the Y=0.248 isochrone in the synthetic m814W versus (m275W-m814W) diagram just created, and 6000 stars from the Y=0.330 isochrone. We repeat the process for all five diagrams.\nWe then apply the same method as described in Sect.~\\ref{smax_ngc6752} to determine the maximum helium mass fraction difference in the synthetic cluster. We know by construction that it is equal to 0.082. We find that $\\Delta$Y=0.081$\\pm$0.002, 0.079$\\pm$0.002, and 0.070$\\pm$0.015 using the (m395N-m814W), (m410M-m814W), and (m467M-m814W) colors, respectively. We thus recover the input value ($\\Delta$Y=0.082) with a good level of confidence.\n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fidchmap_unif0.eps}\n\\caption{Same as Fig.~\\ref{fidchmap6752} but for a synthetic cluster with one-third stars with Y=0.248 and two-thirds stars with Y=0.260 to 0.600 with a flat distribution.}\n\\label{fidchmapunif0}\n\\end{figure}\n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{delta395m814_unif0.eps}\n\\caption{Same as Fig.~\\ref{395m814_1G2G_6752} but for a synthetic cluster with one-third stars with Y=0.248 and two-thirds stars with Y=0.260 to 0.600 with a flat distribution.}\n\\label{395m814_1G2G_unif0}\n\\end{figure}\n\n\n\\subsubsection{Uniform distribution among second population}\n\\label{clu_unif}\n\nWe then run another test by building a synthetic cluster still made up of one-third 1P stars (3000 data points) drawn from the Y=0.248 isochrone, but with the remaining two-thirds of 2P stars (i.e., 6000 data points) spread 
uniformly between isochrones with Y varying from 0.260 to 0.600. For that, we create the isochrones characterized by Y=0.430, 0.470, 0.500, 0.530, and 0.570 by linearly interpolating between the Y=0.400 and Y=0.600 isochrones. We create the five CMDs with these additional isochrones using the same method as described immediately above. We select 500 stars for each isochrone between 0.260 and 0.600, which corresponds to 5.5\\% per isochrone, to obtain the final synthetic cluster.\nWe then proceed as before to determine $\\Delta$Y. Figure~\\ref{fidchmapunif0} shows the definition of the fiducial lines and the chromosome map, together with the selected 1Pe and 2Pe stars. Figure~\\ref{395m814_1G2G_unif0} shows the 1Pe and 2Pe stars in the m814W versus (m395N-m814W) diagram. Here, we face a problem: the number of 2Pe stars is too small to apply the automatic process for defining the fiducial line of the 2Pe population. We therefore select the fiducial line by hand\\footnote{We estimate the uncertainty on $\\Delta$Y resulting from the use of this manual process by performing ten repetitions on the same set of 1Pe and 2Pe populations. We find the uncertainty is of the order of 0.004.}, as done in the first step of the process that leads to the chromosome map. We then determine the Y difference and obtain $\\Delta$Y=0.362$\\pm$0.005. Using the other colors, (m410M-m814W) and (m467M-m814W), we obtain $\\Delta$Y=0.364$\\pm$0.008 and $\\Delta$Y=0.367$\\pm$0.007, respectively. Given that the true Y difference is 0.352, we are able to measure it with good accuracy. \n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{distriW16.eps}\n\\caption{Black solid histogram: Relative fraction of stars as a function of their helium content on the lower RGB in the models of \\citet{chantereau16} and for an age of 13~Gyr. 
Red dashed histogram: Distribution we adopt to build synthetic clusters with populations as close as possible to those of \\citet{chantereau16}.}\n\\label{distriW16}\n\\end{figure}\n\n\n\\subsubsection{Distribution of Chantereau et al.}\n\\label{clu_W16}\n\n\nWe finally move to clusters made of populations similar to those presented by \\citet{chantereau16} within the framework of the FRMS scenario that fits the O-Na anti-correlation in NGC~6752, and which is the most extreme case in terms of He enrichment. We use the distribution of stars according to their helium content shown in Fig.~6 of \\citet{chantereau16} for an age of 13 Gyr (at that age, their model predicts that 10$\\%$ of the stars should have Y$>$0.4, from an initial fraction of 21$\\%$). We adopt the distribution of the lower RGB as representative of the entire population of the cluster. Examination of Fig.~6 of \\citet{chantereau16} reveals that this is a fair approximation for the MS up to the upper RGB. As we do not have synthetic photometry for the full distribution of Y presented by Chantereau et al., we group the stars into bins of Y. This is illustrated in Fig.~\\ref{distriW16}. For instance, in the bin with Y=0.30 we gather the stars of Chantereau et al. with Y=0.29, 0.30, and 0.31. In addition, we linearly extrapolate our isochrones beyond Y=0.6 (up to Y=0.72) using our Y=0.4 and Y=0.6 synthetic isochrones. For the populations with Y$>$0.6 in Fig.~\\ref{distriW16} we group stars in bins of 0.02 in width (e.g., the Y=0.64 bin gathers stars with Y=0.64 and 0.65 from the distribution of Chantereau et al.). The most enriched population has Y=0.72 in our clusters, corresponding to $\\Delta$Y=0.472. We built ten clusters.\n\nWe then perform the determination of the maximum helium mass fraction as in the previous example where a uniform distribution of stars with Y$>$0.248 was used. 
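The linear interpolation (Sect.~\\ref{clu_unif}) and extrapolation (this section) of isochrones in Y reduce to a linear weighting of two bracketing isochrones sampled at matching magnitudes; a minimal sketch with placeholder numbers is:

```python
import numpy as np

def interpolate_isochrone_in_y(y_new, y_lo, color_lo, y_hi, color_hi):
    """Linear interpolation (y_lo < y_new < y_hi) or extrapolation
    (y_new > y_hi) of isochrone colors in the helium mass fraction Y.

    color_lo and color_hi sample the two bracketing isochrones (e.g.,
    Y=0.400 and Y=0.600) at the same set of magnitudes.
    """
    w = (y_new - y_lo) / (y_hi - y_lo)
    return (1.0 - w) * np.asarray(color_lo) + w * np.asarray(color_hi)
```

A weight w between 0 and 1 gives the interpolated isochrones (Y=0.430 to 0.570), while w above 1 gives the extrapolated ones (up to Y=0.72).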
Figures~\\ref{fidchmapW16} and \\ref{395m814_W16} show the synthetic CMDs, chromosome map, and extreme populations in the m814W versus (m395N-m814W) CMD in a representative example. We gather the results in Table~\\ref{tab_he_synth} and Fig.~\\ref{deltaY}. For some realizations of our synthetic clusters, and for some colors, we derive enrichments that are marginally compatible with the input value $\\Delta$Y=0.472: we obtain $\\Delta$Y=0.437$\\pm$0.038 for cluster 8 using the color (m467M-m814W). However, the main result is that, on average, we determine a maximum $\\Delta$Y of about 0.43, which is smaller than the input value of 0.472. This is qualitatively understood by looking at Figs.~\\ref{distriW16} and \\ref{fidchmapW16}. Stars with Y$>$0.6 have an almost flat distribution, which translates into the extended population in the upper left part of the chromosome map. When picking the stars identified as the 2Pe population, and to ensure a sufficient number of stars in that population, we include the stars with the highest Y, but also some stars with slightly smaller Y. We therefore tend to create a population with an average Y that is smaller than 0.72. \n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fidchmap_W16_000.eps}\n\\caption{Same as Fig.~\\ref{fidchmap6752} but for a synthetic cluster with the populations of \\citet{chantereau16}.}\n\\label{fidchmapW16}\n\\end{figure}\n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{delta395m814_W16_000.eps}\n\\caption{Same as Fig.~\\ref{395m814_1G2G_6752} but for a synthetic cluster with the populations of \\citet{chantereau16}.}\n\\label{395m814_W16}\n\\end{figure}\n\n\n\\begin{table}[t]\n\\begin{center}\n \\caption{Helium difference recovered in ten synthetic clusters with a distribution of the population as in Fig.~\\ref{distriW16}. 
$\\Delta$Y$_{X}$ stands for the helium difference estimated from color ($X$-m814W) where $X$ is the magnitude in one of the filters F395N, F410M, or F467M.}\n\\label{tab_he_synth}\n\\begin{tabular}{cccc}\n\\hline \ncluster id & $\\Delta$Y$_{395N}$ & $\\Delta$Y$_{410M}$ & $\\Delta$Y$_{467M}$ \\\\ \n\\hline\n0 & 0.444$\\pm$0.022 & 0.448$\\pm$0.014 & 0.468$\\pm$0.019 \\\\\n1 & 0.409$\\pm$0.007 & 0.411$\\pm$0.006 & 0.416$\\pm$0.007 \\\\\n2 & 0.411$\\pm$0.031 & 0.416$\\pm$0.021 & 0.434$\\pm$0.038 \\\\\n3 & 0.425$\\pm$0.027 & 0.434$\\pm$0.027 & 0.427$\\pm$0.051 \\\\\n4 & 0.420$\\pm$0.013 & 0.429$\\pm$0.008 & 0.436$\\pm$0.010 \\\\\n5 & 0.437$\\pm$0.025 & 0.438$\\pm$0.023 & 0.438$\\pm$0.011 \\\\\n6 & 0.417$\\pm$0.013 & 0.421$\\pm$0.009 & 0.430$\\pm$0.014 \\\\\n7 & 0.472$\\pm$0.010 & 0.472$\\pm$0.017 & 0.488$\\pm$0.011 \\\\\n8 & 0.457$\\pm$0.029 & 0.460$\\pm$0.020 & 0.492$\\pm$0.023 \\\\\n9 & 0.449$\\pm$0.021 & 0.460$\\pm$0.023 & 0.457$\\pm$0.019 \\\\\n\\hline\naverage & 0.430$\\pm$0.004 & 0.426$\\pm$0.004 & 0.441$\\pm$0.004 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{deltaY.eps}\n\\caption{Maximum helium difference $\\Delta$Y for ten synthetic clusters. Different symbols refer to different colors used for the determination. Blue (red) symbols stand for stars on the MS (RGB). The horizontal black solid line shows the true helium difference used to build the clusters.}\n\\label{deltaY}\n\\end{figure}\n\n\n\nTo further investigate this behavior, we show in Fig.~\\ref{chmapzoom2Ge} and Table~\\ref{tab_he_2Ge} the effect of the selection of 2Pe stars on the determination of the maximum helium difference. Choosing a larger population (population 2Pe(c)) translates into a decrease in $\\Delta$Y by $\\sim$0.03, as expected because of the inclusion of stars with less extreme Y in the 2Pe population. 
We also note that the values of $\\Delta$Y obtained with that selection of 2Pe stars are consistent with those obtained for other synthetic clusters reported in Table~\\ref{tab_he_synth} and for which the 2Pe selection was stricter. Conversely, reducing the 2Pe population (2Pe(a) in Fig.~\\ref{chmapzoom2Ge}), but still keeping a number of stars sufficient to define the 2Pe fiducial line, leads to a larger $\\Delta$Y, which is consistent with the input value. \n\n\nFrom these experiments, we conclude that for distributions of multiple populations with small numbers of stars with large chemical enrichments, such as that of \\citet{chantereau16}, the derived maximum helium enrichment critically depends on the selection of the 2Pe population. We also find that we slightly underestimate the maximum Y of the cluster.\n\n\n\n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{plot_chmap_zoom_2Ge_000.eps}\n\\caption{Zoom into the upper left part of the chromosome map shown in the bottom panel of Fig.~\\ref{fidchmapW16}. The gray dotted lines mark the selection of 2Pe stars as in Fig.~\\ref{fidchmapW16}. The blue and red broken lines indicate alternative selections of 2Pe stars.}\n\\label{chmapzoom2Ge}\n\\end{figure}\n\n\\begin{table}[]\n\\begin{center}\n \\caption{Effect of the selection of 2Pe stars (see Fig.~\\ref{chmapzoom2Ge}) on the determination of the maximum helium difference in synthetic cluster number 0. The first column gives the label of the 2Pe selection. 
The following columns give the helium difference estimated using the same three colors as in Table~\\ref{tab_he_synth}.}\n\\label{tab_he_2Ge}\n\\begin{tabular}{cccc}\n\\hline \n2Pe selection & $\\Delta$Y$_{395N}$ & $\\Delta$Y$_{410M}$ & $\\Delta$Y$_{467M}$ \\\\ \n\\hline\n2Pe(a) & 0.444$\\pm$0.022 & 0.448$\\pm$0.014 & 0.468$\\pm$0.019 \\\\\n2Pe(b) & 0.458$\\pm$0.018 & 0.471$\\pm$0.011 & 0.489$\\pm$0.006 \\\\\n2Pe(c) & 0.410$\\pm$0.040 & 0.410$\\pm$0.034 & 0.418$\\pm$0.032 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\vspace{0.2cm}\n\n\\citet{chantereau16} showed that the RGB contains fewer very enriched (Y$>$0.6) stars than the MS (see their Fig.~5).\nWe therefore expect that the determination of the maximum helium content of a GC using MS stars suffers less from the difficulties described above. To test this, we determined $\\Delta$Y as reported for RGB stars in Figs.~\\ref{fidchmapW16} and \\ref{395m814_W16} and Table~\\ref{tab_he_synth} but focusing on MS stars. Figure~\\ref{fidchmapW16_MS} illustrates our selection of stars: those with m814W magnitudes between 16.9 and 18.1. We estimate the width of the MS at m814W=17.5. The 1Pe and 2Pe stars are extracted from the chromosome map as shown in the lower panel of Fig.~\\ref{fidchmapW16_MS}. The results are gathered in Table~\\ref{tab_he_synth_MS}. Compared to Table~\\ref{tab_he_synth}, we see that the true helium mass fraction difference (0.472) is indeed better recovered when using MS stars. A graphical representation of this result is given in Fig.~\\ref{deltaY}, where we see that depending on the cluster simulation, $\\Delta$Y based on RGB stars may be underestimated, while for MS stars, the input value is always recovered. We thus conclude that for GCs that would have stellar populations similar to those of \\citet{chantereau16}, the study of the maximum helium content should be preferentially performed on MS stars, provided sufficient data quality. 
\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fidchmap_W16_000_MS.eps}\n\\caption{Same as Fig.~\\ref{fidchmapW16} but for MS stars.}\n\\label{fidchmapW16_MS}\n\\end{figure}\n\n\n\n\\begin{table}[]\n\\begin{center}\n \\caption{Same as Table~\\ref{tab_he_synth} but for MS stars.}\n\\label{tab_he_synth_MS}\n\\begin{tabular}{cccc}\n\\hline \ncluster id & $\\Delta$Y$_{395N}$ & $\\Delta$Y$_{410M}$ & $\\Delta$Y$_{467M}$ \\\\ \n\\hline\n0 & 0.464$\\pm$0.004 & 0.468$\\pm$0.005 & 0.478$\\pm$0.010 \\\\\n1 & 0.472$\\pm$0.007 & 0.478$\\pm$0.004 & 0.472$\\pm$0.009 \\\\\n2 & 0.468$\\pm$0.010 & 0.478$\\pm$0.007 & 0.482$\\pm$0.009 \\\\\n3 & 0.466$\\pm$0.009 & 0.474$\\pm$0.007 & 0.487$\\pm$0.006 \\\\\n4 & 0.473$\\pm$0.007 & 0.472$\\pm$0.010 & 0.473$\\pm$0.009 \\\\\n5 & 0.459$\\pm$0.005 & 0.474$\\pm$0.005 & 0.482$\\pm$0.003 \\\\\n6 & 0.468$\\pm$0.010 & 0.470$\\pm$0.011 & 0.475$\\pm$0.008 \\\\\n7 & 0.464$\\pm$0.008 & 0.464$\\pm$0.007 & 0.476$\\pm$0.013 \\\\\n8 & 0.466$\\pm$0.016 & 0.470$\\pm$0.015 & 0.481$\\pm$0.005 \\\\\n9 & 0.467$\\pm$0.006 & 0.478$\\pm$0.011 & 0.491$\\pm$0.006 \\\\\n\\hline\naverage & 0.465$\\pm$0.002 & 0.473$\\pm$0.002 & 0.482$\\pm$0.002 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n\\subsubsection{Steep distribution}\n\\label{clu_steep}\n\n\nIn this section we try to answer the question of whether or not we can miss very helium-rich stars if multiple populations follow a steeper distribution than that presented by \\citet{chantereau16}. To this end, we define an artificial distribution as shown in Fig.~\\ref{distribsteep}. We start from the distribution of \\citet{chantereau16} between Y=0.248 and Y=0.29. We linearly interpolate their distribution in this range of Y, and then extrapolate the resulting distribution up to Y=0.400 (blue line in Fig.~\\ref{distribsteep}). We thus obtain a much steeper distribution of populations as a function of Y than that of Chantereau et al. for Y$>$0.29. 
According to \\citet{milone18} and Sect.~\\ref{smax_ngc6752}, the maximum helium mass fraction observed in NGC~6752 is about 0.29. Our artificial distribution can therefore be seen as a test case where only a few stars with Y$>$0.29 are present in synthetic clusters. We then try to see if they are recovered or missed when determining $\\Delta$Y.\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{distribsteep.eps}\n\\caption{Black solid histogram: Relative fraction of stars as a function of their helium content on the lower RGB in the models of \\citet{chantereau16} and for an age of 13~Gyr. Blue solid line: Linear interpolation of Chantereau's distribution between Y=0.248 and 0.290, and extrapolation up to Y=0.400. Red dashed histogram: Adopted distribution based on the previous interpolation and re-scaling so that the sum of relative fractions is equal to 1.}\n\\label{distribsteep}\n\\end{figure}\n\n\nWe built ten synthetic clusters using the above distribution. We use only Y values for which isochrones are available, that is, Y=0.248, 0.260, 0.270, 0.300, 0.330, 0.370, and 0.400 (red bins in Fig.~\\ref{distribsteep}). We perform the $\\Delta$Y determination as in the previous sections. The results are gathered in Fig.~\\ref{deltaYsteep}. Using RGB stars, we are not able to recover the input value ($\\Delta$Y=0.152), but we can still detect stars with Y$>$0.29 (i.e., $\\Delta$Y=0.042). If present, and under the assumption that their distribution follows that described above, stars more helium-rich than Y=0.29 should therefore be detectable. On the MS, we confirm the trend seen in Sect.~\\ref{clu_W16} that higher values of $\\Delta$Y are recovered. However, with the present distribution, the maximum helium mass fraction of the cluster is also missed on the MS due to the very small number of highly enriched stars. This is different from Sect.~\\ref{clu_W16} where on the MS the initial value of $\\Delta$Y could be retrieved. 
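The construction of this steep distribution (a straight line through two anchor fractions, extrapolated up to Y=0.400, floored at zero, and re-scaled so that the fractions sum to 1) can be sketched as follows; the anchor fractions used below are placeholders, not the actual values of \\citet{chantereau16}.

```python
import numpy as np

def steep_distribution(y_grid, y_lo, f_lo, y_hi, f_hi):
    """Steep helium distribution built from two anchor points.

    A straight line through (y_lo, f_lo) and (y_hi, f_hi) -- standing in
    for the Chantereau et al. fractions at Y=0.248 and Y=0.29 -- is
    evaluated over y_grid, floored at zero, and renormalized so that the
    relative fractions sum to 1.
    """
    slope = (f_hi - f_lo) / (y_hi - y_lo)
    frac = np.maximum(f_lo + slope * (np.asarray(y_grid) - y_lo), 0.0)
    return frac / frac.sum()
```

The resulting fractions are then assigned to the Y values for which isochrones exist (0.248 to 0.400), producing a population with very few stars above Y=0.29.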
\n\n\n\n\\begin{figure}[]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{deltaYsteep.eps}\n\\caption{Same as Fig.~\\ref{deltaY} but for the distribution of the population described in Sect.~\\ref{clu_steep}. The gray area marks the observed value of $\\Delta$Y in NGC~6752.}\n\\label{deltaYsteep}\n\\end{figure}\n\n\n\\section{Discussion}\n\\label{s_disc}\n\nOur method for determining the maximum helium mass fraction in GCs is slightly less automated than that presented by \\citet{milone18}. In particular, the selection of the extreme 1Pe and 2Pe populations is made visually in our case, while it relies on a more sophisticated treatment in \\citet{milone18}. We tested the effect of the 2Pe selection in Sect.~\\ref{clu_W16} and Fig.~\\ref{chmapzoom2Ge}, and argue that the choice of 2Pe stars can affect the value of the maximum Y. However, qualitatively, stars much more He-rich than the observed limit of 0.042 are easily detected regardless of the 2Pe selection. In addition, the automated method of \\citet{milone18} works well when 2Pe stars are clearly identified by an overdensity in the chromosome map, but there are clusters for which no such overdensity is visible. For instance, the chromosome map of NGC~6752 itself, presented in the bottom panel of Fig.~\\ref{fidchmap6752}, shows a rather uniform distribution along the 2G sequence (i.e., from the bottom right corner up to the upper left corner). Hence, we argue that the choice of 2Pe stars in our study is not drastically different from \\citet{milone18}.\n\n\nHe-rich populations may be contaminated by blue stragglers and their descendants which would populate similar regions of CMDs. \\citet{marino19} showed that candidate evolved blue stragglers should be located preferentially along the 1P sequence, and thus would not contaminate 2Pe stars. However, one may wonder whether blue stragglers resulting from the merger of two 2P stars could produce stars along the 2P sequence too. 
This needs to be investigated further. \nBinaries can also contaminate the 2P sequence. As shown by \\citet{martins20}, the presence of a companion among RGB stars will tend to displace a star up and to the left in the chromosome map. The magnitude of the displacement depends on the relative brightness and effective temperature of the two stars. Hence, stars along the 2G sequence, with medium Y but with a companion, may be moved to the position of single stars with higher Y. Consequently, the extremity of the 2G sequence may contain stars with less extreme Y. This could lead to overestimation of the maximum Y. The binary fraction among GCs is low, usually below 10\\% \\citep{sollima07,milone12,jb15}, and the effect of binaries on the maximum Y should therefore be limited, unless the number of very He-rich stars is also small, as in the distribution of \\citet{chantereau16}. In any case, if binaries are present, the maximum Y determined from photometry is probably overestimated.\n\n\nWith these limitations in mind, and given the results of the present study, it is unlikely that NGC~6752 contains stars with Y$\\gtrsim$0.3. If multiple populations were formed as predicted by the FRMS scenario developed by \\citet{decressin07} and \\citet{chantereau15,chantereau16}, a wide distribution of Y, from 0.248 to 0.72, should be present among stars either on the MS or the RGB, with $\\sim$39$\\%$ of the stars with Y$>$0.3 and $\\sim$10$\\%$ with Y$>$0.4 at 13~Gyr. Our study reveals that while we may not retrieve the most helium-rich objects on the RGB, we should still be able to detect stars with Y as high as 0.65 (see Fig.~\\ref{deltaY}). This is clearly in contrast with results based on HST photometry indicating Y no higher than 0.3. 
We have shown that even in the case of a Y distribution much steeper than that predicted by \\citet{chantereau16}, we would be able to find stars with helium enrichment beyond the observed value.\n\nA prediction of our study dedicated to NGC~6752 is that the maximum helium content of GC stars, if it follows a distribution where helium-rich stars are less numerous than helium-poor ones, is best determined on the MS rather than on the RGB. This is due to the faster evolution of He-rich stars and consequently the larger number of stars on the MS than on the RGB \\citep{dm73,dantona10,chantereau16}. \nHowever, the maximum Y we obtain on the MS using HST data of NGC~6752 does not indicate a significant difference compared to the value found on the RGB.\n\nAll in all, the present results strongly suggest that stars in NGC~6752 do not follow the distribution predicted by the FRMS model presented in \\citet{chantereau16}, and consequently, multiple populations were not likely formed out of material polluted by this type of object.\nThis conclusion applies only to NGC~6752, and additional studies of clusters with different ages, metallicities, and masses are required to see whether generalizations can be made. A wider study of this kind is necessary to investigate whether the current observational limit of Y$\\sim$0.4 is robust or whether it suffers from observational limitations. As recalled in Sect.~\\ref{s_intro}, this limit is a key prediction of some models, but also a building block of some others. In particular, the formation of multiple populations by pollution from material ejected by a super-massive star (SMS) originally assumes that the SMS stops its evolution when its core helium content reaches 0.4, and is further disrupted by instabilities and\/or strong stellar winds \\citep{denis14}. This assumption was dictated by the observational fact that Y appears to be no higher than 0.4 in the most extreme GC populations. 
On the other hand, hot-hydrogen burning products with low helium abundances in agreement with the photometric constraints can be ejected by SMSs in the case where they are continuously rejuvenated by runaway stellar collisions \\citep{Gieles2018}.\n\nPollution by material produced in massive AGB stars predicts that the helium distribution among multiple populations reaches a maximum of $\\sim$0.38 \\citep[e.g.,][]{dercole10}. This is consistent with the current observational limit, and would make these objects good polluter candidates if other problems were not linked to their use, as summarized in \\citet{Renzini2015} and \\citet{bastianlardo18} for instance. Other important factors are the time constraints on the dilution by pristine material, as also highlighted by \\citet[][see also \\citealt{dercole16}]{dercole11}, and the mass budget issue \\citep[e.g.,][]{PC2006,Krause2016}. The transformation of the Na-O correlation in AGB ejecta into a Na-O anti-correlation, as seen in all GCs, also requires specific conditions for mixing of pristine material with nucleosynthesis products \\citep[e.g.,][]{Ventura2008,Karakas2010}. In addition, the C+N+O sum of AGB ejecta is not conserved \\citep{Decressin2009}, contrary to what is deduced from spectroscopic analysis of stars in NGC~6752 \\citep{yong15}. \n\nThe role of massive binaries in the multiple population phenomenon remains to be investigated. \\citet{demink09} showed that these objects could potentially produce the required abundance patterns. But their study is limited to a 20+15 M$_{\\sun}$\\ system on a 12-day orbit and in which both components have reached synchronization. The average helium mass fraction of the ejecta of this binary system is 0.3. Material ejected at later phases of the system's evolution has Y as large as $\\sim$0.63 (see Fig.~1 of de Mink et al.). The average helium content is therefore consistent with the current maximum Y observed in GCs. 
However, binary evolution depends not only on the properties and evolution of the components, but also on the parameters of the system (separation, eccentricity, mass ratio, mass transfer efficiency); see for example \\citet{menon20}. Additionally, the maximum central temperature of the considered mass domain does not reach the high values required to build the Mg-Al anti-correlation \\citep[e.g.,][]{prantzos07,Prantzos2017}. A wider study involving population synthesis is therefore required to assess the impact of massive binaries on the origin of multiple populations in GCs. \n\n\n\n\n\n\\section{Conclusion}\n\\label{s_conc}\n\nIn this study, we investigated the determination of the maximum helium mass fraction in stars of the GC NGC~6752. Our goal was to decipher whether we really detect the most He-rich stars with present-day\nphotometric methods, or miss them.\nWe relied on the work of \\citet{chantereau16}, who produced isochrones with various chemical compositions corresponding to different degrees of pollution by FRMS, which is an extreme case in terms of He enrichment. We computed synthetic spectra along these isochrones using the atmosphere code ATLAS12 and the spectral synthesis code SYNTHE. The resulting spectra were used to compute synthetic photometry in the following HST filters: WFC3 F275W, WFC3 F336W, WFC3 F410M, WFC3 F467M, ACS F606W, and ACS F814W. We compared the synthetic colors with data of NGC~6752 obtained by \\citet{milone13}. The different CMDs are usually reasonably well reproduced, although offsets exist between synthetic and observed sequences (MS, TO, RGB).\n\nWe re-determined the maximum helium mass fraction of stars in NGC~6752 using a method very similar to that of \\citet{milone18}. Our results are consistent with those of Milone et al.\nWe built synthetic clusters with various populations characterized by their He content (they also differ in their light-element composition).
We validated our method on simple population distributions, ensuring that we are able to recover the input maximum Y. We subsequently created synthetic clusters following the distribution presented by \\citet{chantereau16}, that is, with stars with Y between 0.248 and 0.72. We show that on the RGB, we slightly underestimate the maximum Y, but by a relatively small amount ($\\sim$0.05). On the MS, we retrieve the input value. In any case, even on the RGB the maximum Y we determine in synthetic clusters is higher than the observed maximum helium enrichment of 0.042$\\pm$0.004 \\citep{milone18}. We verified that even if populations followed a steeper Y distribution than that of \\citet{chantereau16}, stars with helium enrichment higher than 0.042 would be recovered (in that case the maximum Y is underestimated both on the RGB and the MS). Finally, we determined the maximum Y on the MS of NGC~6752 using HST data and find it to be consistent with the value obtained on the RGB.\n\nThese results indicate that multiple populations in NGC~6752 have a Y distribution that is very likely not that assumed by \\citet{chantereau16} in the framework of the FRMS scenario. Our results also show that in the specific case studied here, the maximum helium mass fraction determined from observations is probably the true value (i.e., there are no stars more He-rich in that cluster). \nWe stress that although we have focused on the specific case of pollution by FRMS, our results apply to any scenario that would predict a strong He enrichment among 2P stars \\citep{Salarisetal2006,Pietrinferni2009,Cassisi2013,Dotter2015}.\nOur results need to be extended to other clusters spanning a range of parameters (age, metallicity, mass) in order to confirm that FRMSs are not responsible for the multiple populations in GCs and to better constrain the scenarios that produce them. \n\n\n\\section*{Acknowledgments}\n\nWe thank an anonymous referee for a positive report. We thank A.
Milone for sharing HST photometry of NGC~6752 and for interesting discussions. This work was supported by the Swiss National Science Foundation (Project 200020-192039 PI C.C.). This work was supported by the Agence Nationale de la Recherche grant POPSYCLE number ANR-19-CE31-0022.\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzihft b/data_all_eng_slimpj/shuffled/split2/finalzzihft new file mode 100644 index 0000000000000000000000000000000000000000..6f6b81c1b9b9b78f733ab4442a8d99bb158659a5 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzihft @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nOne of the widely accepted mechanisms\nof planet formation is the core-nucleated instability theory (Perri \\& Cameron 1974; Haris 1978; Mizuno\net al. 1978; Stevenson 1982). According to this scenario, the massive gaseous atmosphere would\nbe accumulated in a runaway manner when the core mass reaches a critical value.\nIn static models, when heating balances cooling,\nrunaway accretion occurs when the planet is beyond the critical mass,\nbecause the envelope fails to hold hydrostatic equilibrium. Rafikov (2006) found\na broad range of critical mass ($0.1 M_{\\oplus} \\le M_{\\rm critical} \\le 100 M_{\\oplus}$)\ndue to various disk properties and planetesimal accretion rate.\nHowever, in dynamic or quasi-static models, the thermal disequilibrium\nrather than the hydrostatic disequilibrium plays the dominant role.\nThe runaway accretion occurs because the envelope becomes thermally unstable\nas the cooling timescale becomes catastrophically shorter.\nIn this case, the runaway accretion is driven by\nrunaway cooling (Bodenheimer \\& Pollack 1986; Lee et al. 2014; Piso \\& Youdin 2014).\nThree stages are involved in the formation process. In the first stage,\nrocky cores grow by rapid planetesimal accretion. 
In the second stage, the core's feeding zone is depleted of solids and the atmosphere grows gradually, regulated by Kelvin-Helmholtz (KH) contraction. Finally, when the atmosphere reaches the crossover mass,\ngas runaway takes place and the planet is inflated into a gas giant. The timescale of the second stage is the longest of\nthe three and dominates the whole formation process (Pollack et al. 1996).\n\n\nAbout 20\\% of Sun-like stars host super-Earths with radii of 1$-$4 $R_{\\oplus}$ at distances of 0.05$-$0.3 AU (Howard et al. 2010; Batalha et al. 2013; Petigura et al. 2013).\nRadial velocity measurements (Weiss \\& Marcy 2014) and transit timing variations (Wu \\& Lithwick 2013) show that\n the masses of these super-Earths are in the range of 2$-$20 $M_{\\oplus}$.\nThe abundance of super-Earths presents a puzzle for the core instability theory.\nThis theory indicates that when a protoplanet reaches the super-Earth size, two physical processes\nmake the survival of super-Earths difficult, leading to a planetary ``desert'' in this size range (Ida \\& Lin 2004).\nSuper-Earths would excite density waves in protoplanetary disks (PPDs) and give rise to rapid type I migration.\nThis type of migration would cause the planet to be engulfed by its host star if the disk\ninner edge touches the stellar surface.\nRecent studies have sought various remedies\nfor type I migration (Yu et al. 2010; Fung \\& Chiang 2017).\nPPDs are expected to have an inner edge at the stellar magnetosphere (e.g., Long et al. 2005). Planets undergoing disk-driven migration are expected to pile up near this edge.
They would stay either at the edge because the gas runs out, or inside the edge down to the 2:1 resonance because that is where the tidal torque tapers off, or outside the edge as the standing waves generated by wave reflection off the inner edge stall planet migration (Tsang 2011).\nIn this paper we focus on another threat to super-Earths.\nSuper-Earths have low mean densities, which suggests that they must\nbe surrounded by gas envelopes (Rogers \\& Seager 2010).\nSince these observed super-Earths are in the range of critical mass,\nthey would trigger efficient gas runaway and accumulate massive gas envelopes,\nbecoming gas giants. As a result, super-Earths are supposed to be rare.\nHowever, the Kepler discoveries wreck these predictions. Lee et al. (2014) proposed\nmetallicity gradients inside PPDs or late assembly of cores to\nresolve the puzzle of super-Earth formation. Lee \\& Chiang (2016) stressed\nthat late core assembly in transitional PPDs is more consistent with\nobservations. In gas-poor environments, gas dynamical friction is\nweakened enough to allow proto-cores to stir one another and merge.\nIn addition, this formation scenario ensures that super-Earth cores accrete gas\nup to an envelope mass fraction (EMF) of only a few percent.\n\n\n\n\n\nGuillot \\& Showman (2002) argued that the dissipation of the kinetic energy of atmospheric winds,\ndriven by intense irradiation, could bury heat inside the planet.\nMany studies extend this idea to explain the radius anomaly of hot Jupiters\n(Youdin \\& Mitchell 2010; Ginzburg \\& Sari 2015; Komacek \\& Youdin 2017).\nThese investigations focus on the late evolution after disk dispersal.\nUnfortunately, this is invalid for the early evolution of super-Earths because they are still\nembedded within disks.
The irradiation may not penetrate the\ndisk and is not able to bury heat in the exoplanets.\n\nHowever, we note that\ntidal interactions between the host star and the planet can periodically perturb the planet\nand generate mechanical forcing of the fluid motions (Zahn 1977; Goldreich \\& Nicholson 1989).\nHeating by tidal dissipation in primordial super-Earth envelopes can inhibit the gas cooling (Ginzburg \\& Sari 2017).\nThis mechanism requires the orbital eccentricity of super-Earths to be continuously pumped.\nBut super-Earths may not be massive enough to clear a clean gap to excite orbital\neccentricity (Goldreich \\& Sari 2003).\nAnother important aspect of tidal interaction is that \ntidally-forced turbulent mixing would induce heat transport inside the planets.\nRecent laboratory experiments show that turbulence could penetrate deep inside the\nplanet interior (Cabanes et al. 2017).\nBy combining laboratory measurements and high resolution simulations, Grannan et al. (2017)\nconfirmed the generation of bulk-filling turbulence inside planets driven by tidal forcing.\nTurbulent mixing plays an essential role in heat transport in strongly stratified environments (Garaud \\& Kulenthirarajah 2016).\nThis motivates us to study the effects of turbulent diffusion on the planet's thermal evolution.\nPrior studies have noticed that the turbulent mixing induced by mechanical forcing\nleads to heat transport inside hot Jupiters (Youdin \\& Mitchell 2010).\nThese tides would produce appreciable thermal feedback and may lead to interior radiative zones, enhancing\ng-mode dissipations with a wide spectrum of resonances (Jermyn et al.
2017).\nWe find that the thermal feedback associated with the externally-forced turbulent stirring\nmay greatly alter the accretion history of super-Earths.\n\n\n\nIt is well known that the timescale of gas accretion is dictated by the KH timescale.\nIn other words, accretion is determined by the planet's ability to cool (Lee \\& Chiang 2015).\nIn this paper, we note that\nthe tidally-forced turbulent diffusion influences the heat transport inside the planet's envelope.\nThermal feedback would be induced by turbulent diffusion.\nThe heat transport associated with tidally-forced turbulent diffusion\nwould reduce the cooling luminosity\nand enhance the KH timescale.\nWe find that turbulent diffusion may have significant effects\non the planet accretion history\\footnote{In our\ncalculation, turbulent diffusion coefficient $\\mu_{\\rm turb}\\sim 10^{7} - 10^{9}$ cm$^2$ s$^{-1}$,\ncomparable to typical\n$\\mu_{\\rm turb}\\sim 10^{6}-10^{10}$ cm$^2$ s$^{-1}$ in solar system planets (de Pater \\& Lissauer 2001).}.\nBased on our calculations, we propose that tidally-forced turbulent diffusion would effectively\nhelp super-Earths evade growing into gas giants.\n\n\n\nThis paper is structured as follows. In section 2, we provide a brief description of the accreting\nplanet envelope with tidally-forced turbulent diffusion.\nIn section 3, we compare the planet interior thermal\nprofile with and without turbulent diffusion, discussing the thermal feedback\ninduced by turbulent diffusion, especially the shift of RCBs.\nIn section 4, we depict the cooling luminosity variations and onset of gas runaway.\nThe quasi-static Kelvin-Helmholtz evolution and critical turbulent diffusivity\nare discussed in Section 5. 
In Section 6, we discuss the mass loss\nmechanisms for super-Earths and the limitation of super-Earth formation by turbulent diffusion.\nSummary and conclusions are given\nin Section 7.\n\n\\section{Accreting Envelope with Tidally-Forced Turbulence}\nSuper-Earths are susceptible to runaway accretion (Pollack et al. 1996).\nThe ability to accrete is determined by the planet's power to cool (Lee \\& Chiang 2015).\nHow super-Earths avoid rapid gas runaway\ndepends critically on the cooling history of the planet, which is closely related to\nthe thermal structure of the envelope.\nIn the convectively stable region, turbulent diffusion would induce heat transport within the planet.\nIn this section we will concentrate on the thermal feedback caused by tidally-induced turbulent diffusion.\n\n\\subsection{Thermal structure of Gaseous Envelope}\nSince the planet's ability to cool depends on the thermal structure of its envelope,\nwe first study the gaseous envelope structure of planets, i.e., the distribution of pressure,\ntemperature, and mass around a protoplanetary core with mass $M_{\\rm c}$ embedded\nwithin the protoplanetary nebula.\nThe planet envelope (or, interchangeably, ``atmosphere'') structure is governed by the following equations of mass\nconservation, hydrostatic equilibrium, thermal gradient, and energy conservation\n(Kippenhahn et al.
2012):\n\\begin{equation}\n\\frac{d M_r}{d r} = 4 \\pi \\rho r^2 \\ ,\n\\end{equation}\n\\begin{equation}\n\\frac{d P}{d r} = - \\frac{G M_r}{r^2} \\rho \\ ,\n\\end{equation}\n\\begin{equation}\n\\frac{d T}{d r} = \\nabla \\frac{T}{P} \\frac{d P}{d r} \\ ,\n\\end{equation}\n\\begin{equation}\n\\frac{d L}{d r} = \\frac{d M_r}{d r} \\left( \\epsilon - T\\frac{\\partial s}{\\partial t} \\right) \\ ,\n\\end{equation}\nwhere $G$ is the gravitational constant, $P$ is the pressure, $\\rho$ is the density,\n$T$ is the temperature, $L$ is the luminosity,\nand $M_r$ is the mass, including the core mass\nand the atmosphere mass, enclosed inside the radius $r$,\n$M_r = M_{\\rm atm} + M_{\\rm c}$.\nThe symbol ``$\\nabla$'' denotes the temperature gradient\ninside the envelope.\nThe energy generation rate $\\epsilon$ is set to zero since there is no nuclear reaction\ninside the planet.\nThe above equations implicitly assume\nthat the envelope adjusts quickly, i.e., that the dynamical timescale is shorter\nthan the accretion timescale (Rafikov 2006).\nNote that the right-hand-side term, $-T\\frac{\\partial s}{\\partial t}$, in the energy equation dictates the cooling process.\nReplacing the local energy equation by a global energy equation greatly reduces the\nnumerical burden, as we need only deal with ODEs rather than PDEs\n(Piso \\& Youdin 2014; Lee et al. 2014). Details will be discussed in Section 3.\n\nThe energy transport in the convective region is very efficient and\nthe temperature gradient is\\footnote{This assumption is mainly made for simplicity\nof the models; it is not necessarily correct (Stevenson 1985; Leconte \\& Chabrier 2012).\nWe are working on including the mixing length theory (e.g., Kippenhahn et al.
2012) to\nbetter quantify the issue of super-adiabaticity.}\n\\begin{equation}\n\\nabla = \\nabla_{\\rm ad} = \\left( \\frac{d \\ln T}{d \\ln P}\\right)_{\\rm ad} \\ .\n\\end{equation}\nThe convective and radiative layers of the envelope are specified by\nthe Schwarzschild criterion: the atmosphere is stable against\nconvection when $\\nabla_{\\rm rad} < \\nabla_{\\rm ad} $ and convectively\nunstable when $\\nabla_{\\rm rad} \\ge \\nabla_{\\rm ad} $. Since the convective energy\ntransport is efficient, $\\nabla = \\nabla_{\\rm ad}$ in the convective region.\nThe actual temperature gradient can be expressed as\n\\begin{equation}\n\\nabla = \\min(\\nabla_{\\rm ad}, \\nabla_{\\rm rad}) \\ .\n\\end{equation}\nIn this paper, we adopt a polytropic index $\\gamma=7\/5$ for an ideal\ndiatomic gas and the adiabatic gradient $\\nabla_{\\rm ad} = (\\gamma-1)\/\\gamma$.\nNote that a realistic equation of state (EOS) would change the value of $\\nabla_{\\rm ad}$;\nthe effects of a realistic EOS will be left for future studies.\n\n\n\n\nThe radiative temperature gradient is\n\\begin{equation}\n\\label{radTgrad}\n\\nabla_{\\rm rad} = \\frac{3 \\kappa L P}{64\\pi\\sigma G M_r T^4} \\ ,\n\\end{equation}\nwhere $\\kappa$ is the opacity. \nIn the upper part of the atmosphere, the exact\nvalue of $\\kappa$ is highly uncertain because the amount of dust and the dust size\ndistribution are not well constrained in PPDs.\nLee et al. (2014) studied both dusty and dust-free atmospheres and\nfound that the radiative-convective boundaries (RCBs) are determined by\nH$_2$ dissociation at an almost fixed temperature of $\\sim$2500 K for dusty atmospheres.\nThey also found that for dust-free atmospheres, the radiative region keeps an almost\nisothermal temperature fixed by the envelope's outer surface.\nTechnically, the opacity laws can be written as a power law as a function\nof pressure and temperature whether or not the total opacity is dominated\nby dust grains.
For these reasons, we adopt a power law opacity\n(Rafikov 2006; Piso \\& Youdin 2014; Ginzburg et al. 2016), by assuming that\n\\begin{equation}\n\\kappa = \\kappa_0 (P\/P_0)^{\\alpha} (T\/T_0)^{\\beta} \\ .\n\\end{equation}\nHere we choose $\\kappa_0 = 0.001$ cm$^{2}$ g$^{-1}$, which allows our\nfiducial model without turbulent diffusion to possess the properties of more\nsophisticated super-Earth models (Lee et al. 2014).\nWhat is important is the opacity near the RCB. In that sense,\nit is important to keep in mind that the power-law indices\n$\\alpha$ and $\\beta$ can change significantly within the envelope (and with distance from the star).\nWe have tried different choices of $\\alpha$ and $\\beta$. We find that, as long as the\nparameters $\\alpha$ and $\\beta$ satisfy $\\nabla_0 \\equiv \\frac{1+\\alpha}{4-\\beta} > \\nabla_{\\rm ad}$,\nour results are robust and insensitive to the choices we made\\footnote{\nIn the later part of this paper, we present the results with $\\alpha=1$, $\\beta = 1$,\nwhich ensures the existence of an inner convective region and an outer radiative region\ninside the planet gas envelope. For details, please refer to the discussions in Rafikov (2006)\nand Youdin \\& Mitchell (2010).}.\n\n\n\n\n\n\n\nConventionally, it is believed that solid cores accrete planetesimals\nand gas simultaneously (Pollack et al. 1996; Bodenheimer et al.\n2000).\nHowever, estimates show that the accretion of solids\nterminates well before the accretion of gas.\nThe dust coagulation timescale can be as short as\n$t_{\\rm coagulate} \\sim 10^4$ yr, especially\nwhen the planet is close to the central host (Lee et al.
2014).\nThis timescale is much shorter than the typical disk dispersal timescale ($\\sim$ 0.5$-$10 Myr).\nIn addition, calculations by\nLee \\& Chiang (2015) showed that planetesimal accretion does not generically prevent runaway.\nAs a result, it is physically valid to set the planetesimal\naccretion rate to zero ($L_{\\rm acc}=0$)\nwhen we study accreting super-Earths within the disk.\nIn this case, the core is free to cool and contract, and it is extremely susceptible to gas runaway.\n\n\n\nNote that the above differential equations are essentially identical\nto the usual planet interior structure equations. The distinction is the thermal\nfeedback generated by tidally-forced turbulent mixing inside the stably stratified region.\nMore specifically, $\\nabla_{\\rm rad}$ is affected by the turbulent diffusion, which will\nbe further discussed in the next section.\n\n\n\n\\subsection{Thermal Feedback by Tidally-Forced Turbulent Mixing}\nHow do super-Earths evade becoming gas giants?\nIn this paper, we propose a robust mechanism to avoid runaway accretion.\nDue to tidal forcing, the planet's gas envelope would be stirred and\nturbulent motions may be initiated.\nDetailed analyses of these processes are rather complex\nand beyond the scope of this paper (e.g., Garaud \\& Kulenthirarajah 2016; Grannan et al. 2017).\nIn this paper, we try to constrain the turbulent diffusion\nthat is necessary to significantly affect the planet accretion timescale.\nWe find\nthat even weak turbulence would affect the planet accretion history significantly.\n\n\nSince the sound-crossing time is much shorter than\nthe time for heat to diffuse across the fluid blob, the blob conserves entropy (i.e.
adiabatically) and keeps\npressure equilibrium with the ambient environment when it displaces over a radial\ndistance $\\ell$.\nThe temperature difference between the blob and its surroundings is\n\\begin{equation}\n\\delta T = \\left(\\frac{d T}{dr}\\bigg|_{\\rm ad} - \\frac{d T}{dr} \\right) \\ell = - \\frac{\\ell T}{c_p} \\frac{d s}{dr} \\ .\n\\end{equation}\nThe heat excess associated with these fluid blobs can be written\nas $\\delta q = \\rho c_p \\delta T$ and the corresponding turbulent heat flux is\n$F_{\\rm turb} = v \\delta q$, where $v$ is the characteristic speed of turbulent eddies.\nThe entropy gradient can be written as\n\\begin{equation}\n\\frac{d s}{d r} = \\frac{ g}{T \\nabla_{\\rm ad} } (\\nabla_{\\rm ad} - \\nabla) \\ ,\n\\end{equation}\nwhere $g$ is the gravitational acceleration. This equation\nindicates that in the stably stratified region ($\\nabla < \\nabla_{\\rm ad}$),\nthe entropy gradient is positive ($ds\/dr>0$). The heat flux by turbulent mixing is\nthen\n\\begin{equation}\nF_{\\rm turb} = v\\delta q = \\rho c_p v \\delta T = - \\rho g v \\ell\n \\left( 1- \\frac{\\nabla}{\\nabla_{\\rm ad}} \\right) \\ .\n\\end{equation}\nThe flux is negative for stable stratification.\nFor a thermal\nengine without external forcing, heat always flows from hot to cold regions.\nHowever, with external mechanical forcing by tides, heat flow from cold\nto hot regions becomes feasible (Youdin \\& Mitchell 2010).\nNote that the turbulent diffusion coefficient $\\mu_{\\rm turb} \\equiv v \\ell$\\footnote{Note that $\\mu_{\\rm turb} = K_{zz}$, a symbol widely used in the community of planetary atmospheres.} and the corresponding luminosity is\n\\begin{equation}\nL_{\\rm turb} = 4 \\pi r^2 \\left[ - \\rho g \\mu_{\\rm turb} \\left( 1- \\frac{\\nabla}{\\nabla_{\\rm ad}} \\right) \\right] \\ .\n\\end{equation}\nThe total luminosity is carried by two components, the radiative and the turbulent\n\\begin{equation}\nL = L_{\\rm rad} + L_{\\rm turb} \\ 
.\n\\end{equation}\nWe note that the temperature gradient in the radiative region\ncan be arranged in a compact form as (see Appendix A for details)\n\\begin{equation}\n\\nabla_{\\rm rad} = \\frac{1 + \\eta }{1\/\\nabla^{(0)}_{\\rm rad} + \\eta\/\\nabla_{\\rm ad}} \\ .\n\\end{equation}\nIn the above equation,\n\\begin{equation}\n\\nabla^{(0)}_{\\rm rad} \\equiv \\frac{3 \\kappa P L}{64\\pi\\sigma G M_r T^4} \\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\eta \\equiv \\frac{4 \\pi \\mu_{\\rm turb} G M_r \\rho}{L} = 4 \\pi \\left( \\frac{M_{\\rm c}}{M_{\\oplus}} \\right) \\nu_{\\rm turb} \\left(\\frac{M_r}{M_{\\rm c}} \\right) \\left( \\frac{\\rho}{\\rho_{\\rm disk}} \\right) \\ ,\n\\end{equation}\nwhere the superscript ``(0)'' indicates the radiative temperature gradient without turbulence\\footnote{This expression is\nthe same as equation (\\ref{radTgrad}).} and $M_{\\rm c}$ is the mass of the solid core.\nIt can be readily shown that the following inequality holds in the\nradiative region: $\\nabla^{(0)}_{\\rm rad} < \\nabla < \\nabla_{\\rm ad}$ (see Figure 3 for the\npseudo-adiabatic region).\nHere we stress that it is the turbulent diffusion driven by external tidal forcing that makes $\\nabla$\nsteeper than $\\nabla_{\\rm rad}^{(0)}$. This inequality has significant implications for\nthe thermal feedback induced by tidally-forced turbulent diffusion.\nAs a consequence, radiative zones are enlarged and the cooling luminosity\nis greatly reduced.\n\nHere we define two dimensionless parameters\n\\begin{equation}\n\\label{turb_def}\n\\nu_{\\rm turb} \\equiv \\frac{\\mu_{\\rm turb}}{L\/(GM_{\\oplus}\\rho_{\\rm disk})} \\ , \\ \\zeta \\equiv \\frac{\\mu_{\\rm turb}} { H_p c_s} \\ .\n\\end{equation}\nThe two parameters represent the strength of turbulence.
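As a quick numerical sanity check of the compact form above, the following Python sketch (our own illustration, not code from the paper; the value $\nabla^{(0)}_{\rm rad}=0.05$ is an arbitrary illustrative choice) verifies that the gradient recovers $\nabla^{(0)}_{\rm rad}$ when $\eta=0$ and stays strictly between $\nabla^{(0)}_{\rm rad}$ and $\nabla_{\rm ad}$ for any finite $\eta$:

```python
# Illustrative check (not from the paper's code): the turbulence-modified
# radiative gradient, nabla = (1 + eta) / (1/nabla0 + eta/nabla_ad).

NABLA_AD = 2.0 / 7.0  # adiabatic gradient for gamma = 7/5

def nabla_with_turbulence(nabla0, eta):
    """Radiative-zone gradient for turbulence parameter eta (eta = 0: no turbulence)."""
    return (1.0 + eta) / (1.0 / nabla0 + eta / NABLA_AD)

nabla0 = 0.05  # arbitrary illustrative turbulence-free gradient, below NABLA_AD

# eta = 0 recovers the usual radiative gradient ...
assert abs(nabla_with_turbulence(nabla0, 0.0) - nabla0) < 1e-12
# ... and any finite eta keeps the gradient bracketed,
# nabla0 < nabla < NABLA_AD, as stated in the text.
for eta in (0.1, 1.0, 10.0, 1000.0):
    assert nabla0 < nabla_with_turbulence(nabla0, eta) < NABLA_AD
print("bracketing inequality verified")
```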
In the definition of $\\zeta$,\n$H_p\\equiv -d r\/d\\ln P$ and $c_s$ are the pressure scale height and the sound speed, respectively.\nIt is obvious that, if the turbulence in the radiative region is negligible, i.e., $\\eta = 0$,\nthe temperature gradient recovers its usual definition,\n$\\nabla_{\\rm rad} \\rightarrow \\nabla^{(0)}_{\\rm rad}$.\nIn Section 5.1, we will give a physical estimate of the parameter $\\zeta$ based on our calculations. We will\nsee that a small value of $\\zeta \\sim 10^{-6} - 10^{-5}$ already has appreciable effects on the\nformation of super-Earths. This mechanism is robust in the sense that even weak turbulence is\nadequate for it to operate.\nWe should keep in mind that one limitation is that the turbulence strength is parameterized,\nnot physically specified. This is an important issue that still remains to be addressed, i.e.,\nturbulence forced by tides\nshould be investigated in further detail (Barker 2016; Grannan et al. 2017).\n\n\n\n\n\n\n\n\n\n\n\\subsection{Boundary Conditions}\nThe density and temperature at the outer boundary of the atmosphere are given\nby the nebular density and temperature. We adopt the minimum mass extrasolar nebula (MMEN) model\nof Chiang \\& Laughlin (2013).
According to MMEN, the disk structure reads\n\\begin{equation}\n\\rho_{\\rm disk}= 6\\times 10^{-6} \\left(\\frac{a}{0.1 {\\rm AU}} \\right)^{-2.9} {\\rm g \\ cm^{-3}} \\ ,\n\\end{equation}\n\\begin{equation}\nT_{\\rm disk} = 1000 \\left( \\frac{a}{0.1 {\\rm AU}} \\right)^{-3\/7} {\\rm K} \\ .\n\\end{equation}\n\nThe inner boundary lies at the surface of the inner core.\nThe core density is assumed to be $\\rho_{\\rm core} = 7$ g cm$^{-3}$, the core mass\nis 5 $M_{\\oplus}$ and the core radius is $R_{\\rm core}$ = 1.6 $R_{\\oplus}$.\nThe outer boundary condition is chosen at the smaller of the Bondi radius and Hill radius,\nwhich are\n\\begin{equation}\nR_H \\approx 40 R_{\\oplus} \\left[ \\frac{(1+{\\rm EMF}) M_{\\rm core}}{5 M_{\\oplus}}\\right]^{1\/3} \\left( \\frac{a}{0.1 {\\rm AU}}\\right) \\ ,\n\\end{equation}\n\\begin{equation}\nR_B \\approx 90 R_{\\oplus} \\left[ \\frac{(1+{\\rm EMF}) M_{\\rm core}}{5 M_{\\oplus}}\\right] \\left( \\frac{1000 {\\rm K}}{T}\\right) \\ ,\n\\end{equation}\nrespectively.\n\n\n\n\n\n\\section{Thermal Properties of Gas Envelopes}\nSince the thermal cooling timescale is intimately related to the planet interior structure,\nwe first describe the interior structure of the gaseous envelope.\nTo avoid the complication induced by a sandwiched convection-radiation structure\ninside the planet interior (Ginzburg \\& Sari 2015; Jermyn et al. 2017),\nwe simply consider a two-layer model, i.e.,\na convective interior and a radiative exterior (Piso \\& Youdin 2014).\n\nWe adopt the assumption that the luminosity, $L$, is spatially constant, which\nis valid in the radiative region if the local thermal relaxation timescale is shorter than the thermal times in the rest\nof the atmosphere. Such an assumption is corroborated by\nPiso \\& Youdin (2014) and Lee et al.
(2014).\nTo get thermal profiles within the envelope, a luminosity $L$\nis required to obtain $\\nabla_{\\rm rad}$ before we numerically integrate the structure\nequations.\nThe spatially constant $L$ is treated as an eigenvalue of the ODEs.\nTo get the eigenvalue numerically, we first give a guess value of $L$ and\nre-iterate the integration until the mass at the core, $m(R_{\\rm c})$,\nmatches the actual mass $M_{\\rm c}$. Note that, once the luminosity is found,\nthe location of radiative-convective boundary (RCB) can be specified accordingly.\n\n\n\\subsection{Envelopes without Heat Transport by Turbulent Mixing}\nFor the convenience of comparison, we first consider a fiducial model, i.e.,\nan envelope without turbulence ($\\nu_{\\rm turb} = 0$).\nIn Figure \\ref{AtmProfile}, we show the radial profiles of pressure, temperature, and density of\nthe envelope for a 5$M_{\\oplus}$ core with increasing envelope mass during atmospheric growth.\nThe green, cyan and yellow curves denote the envelope mass fraction (EMF) = 0.1, 0.4, 0.8, respectively.\nThe thicker and thinner parts stand for the convective and radiative region, respectively.\nThe boundaries of the thicker and thinner part are the radiative-convective boundaries (RCBs).\nThe convective region is adiabatic.\nThe radiative region connects the lower entropy interior to the higher entropy exterior.\nIn Figure \\ref{AtmProfile}, we note that the pressure in the convection zone increases with\nenvelope mass, but the temperature only varies slightly.\nSince the entropy is $\\propto \\ln(T^{1\/\\nabla_{\\rm ad}}\/P)$,\nit is clear that, with increasing envelope mass,\nthe steady-state envelopes evolve in order of decreasing entropy (Marleau \\& Cumming 2014).\nThis is consistent with the cooling process that the envelope experiences,\nwhich allows the atmosphere to accrete more gas.\n\n\nLee et al. 
(2014) found that, for dusty atmospheres, the location of the RCBs lies\nat a roughly fixed temperature where H$_2$ dissociates ($\\sim$ 2500 K).\nIn Figure \\ref{AtmProfile}, the RCB lies at the bottom\nof the outermost radiative region and the temperatures at the RCBs are no longer 2500 K.\nThis is because we adopt a grain-free atmosphere due to efficient grain coagulation (Ormel 2014).\nAccording to the middle panel of Figure \\ref{AtmProfile},\nwe find that grain-free atmospheres behave differently from grain-rich atmospheres.\nThe outer radiative region is nearly isothermal, which implies that $T_{\\rm RCB} \\sim T_{\\rm out}$.\nSuch features have also been identified in Lee \\& Chiang (2015, 2016) and Inamdar \\& Schlichting (2015),\nand can be readily understood\nin terms of the following relation (Rafikov 2006; Piso \\& Youdin 2014):\n\\begin{equation}\n\\frac{T_{\\rm RCB}}{ \\ T_{\\rm out} } \\sim \\left(1 - \\frac{\\nabla_{\\rm ad}}{\\nabla_0} \\right)^{-1\/(4-\\beta)} \\sim 1 \\ .\n\\end{equation}\nThe term on the right-hand side of this equation is of order unity.\nThis explains why the temperature at the RCB, $T_{\\rm RCB} \\sim T_{\\rm out}$.\nWe stress that the above relation is only valid for atmospheres without turbulence.\nWhen heat transport by turbulent mixing is taken into account, the RCB is pushed inwards,\nand the temperature at the RCB ($T_{\\rm RCB}$) becomes higher.\n\n\n\nAt the early stage of accretion, the envelope mass is small and\nthe envelope can be well treated as non-self-gravitating.\nIn this case, simple analytic results can be derived (Rafikov 2006; Piso \\& Youdin 2014).\nThough the envelope we consider in this paper is self-gravitating, these analytical results\nare still very instructive for understanding atmospheric evolution and interpreting our\nnumerical results.
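For the fiducial opacity indices quoted earlier ($\alpha=\beta=1$, hence $\nabla_0=2/3$), the relation for $T_{\rm RCB}/T_{\rm out}$ can be evaluated directly. The short Python sketch below is our own check, not code from the paper:

```python
# Illustrative evaluation (not from the paper's code) of
# T_RCB / T_out ~ (1 - nabla_ad/nabla0)^(-1/(4-beta))
# for the power-law opacity indices alpha = beta = 1 quoted in the text.

alpha, beta = 1.0, 1.0
nabla_ad = 2.0 / 7.0                    # (gamma - 1)/gamma for gamma = 7/5
nabla0 = (1.0 + alpha) / (4.0 - beta)   # = 2/3 > nabla_ad, as required

ratio = (1.0 - nabla_ad / nabla0) ** (-1.0 / (4.0 - beta))
print(round(ratio, 3))  # 1.205: of order unity, hence T_RCB ~ T_out
```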
How the position of the RCBs\nvaries with envelope mass can be understood with the following relations (Piso \\& Youdin 2014),\n\\begin{equation}\n\\label{LMrelation}\n\\frac{M_{\\rm atm}}{M_{\\rm c}} = \\frac{P_{\\rm RCB}}{\\xi P_{\\rm M}} \\ , \\\n\\frac{P_{\\rm RCB}}{ \\ P_{\\rm disk} } \\sim \\ e^{R_{\\rm B}\/R_{\\rm RCB} } \\ ,\n\\end{equation}\nwhere $\\xi$ is a variable on the order of unity and $P_{\\rm M}$ is the characteristic pressure that is\nrelated to the core mass (Piso \\& Youdin 2014).\nIn the early stage of planet accretion, with the increase of envelope mass, the pressure at the\nRCB would increase as well.\nAccordingly, the cooling luminosity would be reduced.\nWhen the self-gravity becomes important, the above relations no longer hold.\nA stronger luminosity is necessary to support the more massive envelope.\nWith the increase of luminosity, the RCB would be shifted outward\nas shown in Figure \\ref{AtmProfile} (Ginzburg \\& Sari 2015).\n\n\n\n\\begin{figure}\n\\includegraphics[scale=0.75]{AtmProfileRevise.eps}\n\\caption{\\label{AtmProfile}\nThermal profiles around a planet core\nwith mass $M_{\\rm c} = 5 M_{\\oplus}$ at 0.1 AU.\nTurbulence is not included, $\\nu_{\\rm turb} = 0$.\nThe pressure, temperature, and density are shown in the upper, middle, and lower panels,\nrespectively. In each panel,\nthe green, cyan, yellow lines stand for $M_{\\rm atm}\/M_{\\rm c} = $ 0.1, 0.4, 0.8,\nrespectively.\nWith the increase of envelope mass, the pressure at RCBs always increases.\nHowever, the position of RCBs inside the planet first decreases and then increases.\nThis non-monotonic behavior is\ndue to the effects of self-gravity (Piso \\& Youdin 2014).\nNote in particular that no pseudo-adiabatic region appears in the envelope (cf. 
Figure \\ref{AtmProfile_with_turb}).\n}\n\\end{figure}\n\n\n\n\\subsection{Envelopes with Heat Transport by Turbulent Mixing}\nIn this section, we explore how turbulence ($\\nu_{\\rm turb} \\neq 0$) changes the structure\nof the planet envelope.\nThe most interesting feature is that the turbulence would push the RCBs\ninwards and diminish the cooling luminosity.\nIn Figure \\ref{AtmProfile_with_turb}, we show the planet thermal profiles for envelope\nmass fractions, $M_{\\rm atm}\/M_{\\rm c} =$ 0.2, 0.4 and 0.8.\nThe core mass is $M_{\\rm c} = 5 M_{\\oplus}$.\nIn Figure \\ref{AtmProfile_with_turb}, we find that the key difference\nis the appearance of a pseudo-adiabatic region.\nExplicitly, we point out the location of the pseudo-adiabatic region\nin the middle panel of Figure \\ref{AtmProfile_with_turb}.\nIn such regions, the temperature gradient is\nvery close to the adiabatic gradient, but still smaller than it (see Figure \\ref{TempGradVsP}).\n\nFrom the middle panel of Figure \\ref{AtmProfile}, we see that,\nwhen the heat transport by turbulent diffusion is not included,\nthe RCB lies around the isothermal radiative region, $T_{\\rm RCB}\\sim T_{\\rm out}$.\nWhen turbulent diffusion is included, the temperature gradient would deviate\nfrom the isothermal approximation, which is most obvious when comparing the middle panels of Figures\n\\ref{AtmProfile} and \\ref{AtmProfile_with_turb}.\nWe can see from Figure \\ref{TempGradVsP} that the temperature gradient near the\nRCBs approaches $\\nabla_{\\rm ad}$ and clearly deviates from the isothermal temperature\ngradient. 
Due to this temperature gradient deviation, a pseudo-adiabatic region appears.\nAs a result, the temperature at the RCB becomes higher and the RCBs penetrate deeper inside\nthe envelope.\n\n\\begin{figure}\n\\includegraphics[scale=0.75]{AtmProfile_with_turbRevise.eps}\n\\caption{\\label{AtmProfile_with_turb}\nThe same as Figure \\ref{AtmProfile}, but for a turbulent envelope with $\\nu_{\\rm turb} = 0.016$.\nThe pseudo-adiabatic region is most clearly visible when comparing the middle panels of Figures \\ref{AtmProfile} and \\ref{AtmProfile_with_turb}.\nDue to the presence of the pseudo-adiabatic region, the RCBs are pushed inwards. The temperature at the RCBs\nbecomes higher when heat transport by tidally-forced turbulent mixing is taken into account.\n}\n\\end{figure}\n\nTo better understand the effects of heat transport by turbulent mixing, we compare the profiles of the planet\nenvelope with and without turbulence. The results are shown in Figure \\ref{TempGradVsP}\nas red solid and blue dashed lines, respectively.\nThe upper panel shows the global variation of temperature with pressure\nwithin the envelope.\nIn this panel, the difference between the two cases with and without turbulence\nis hardly discernible.\nThe middle panel shows again the variation of temperature with pressure but\nfocuses on the localized region around the radiative-convective transition.\nIt shows that the turbulent mixing smooths\nthe transition toward the adiabat.\nA pseudo-adiabatic region appears above the\nactual adiabatic region. 
This pseudo-adiabatic region pushes the RCB inward to higher pressure.\nTurbulent mixing leads to a more gradual approach to the adiabat.\n\n\nThe turbulent diffusion in the stably stratified region provides heating instead\nof cooling, so it is natural to expect that, with turbulent diffusion taken into\naccount, the total cooling rate of the envelope will decrease and the KH\ncontraction timescale would be prolonged (see Figure \\ref{LVsMatm} for details).\n\n\n\n\n\\begin{figure}\n\\includegraphics[scale=0.65]{TempGradVsP.eps}\n\\caption{\\label{TempGradVsP}\nThermal profiles of the planet envelope. The EMF $M_{\\rm atm}\/M_{\\rm c} = 0.1 $.\n{\\it Upper panel:} The blue dashed curve represents the\nenvelope without turbulence.\nThe red solid curve denotes the envelope with turbulence, $\\nu_{\\rm turb} = 0.016$.\nThe RCBs are denoted as blue and red dots.\nThe two temperature profiles are very similar and difficult to\ndistinguish.\n{\\it Middle panel:} To identify their differences,\nwe show the two profiles near the radiative-convective transition region.\nThe red curve shows a more gradual transition from the radiative\nto the adiabatic region. The region between the blue dot and red dot is the\npseudo-adiabatic region.\n{\\it Bottom panel:} The ratio of the temperature gradient to the adiabatic gradient.\nThe region with $\\nabla\/\\nabla_{\\rm ad} = 1$ is the convection zone.\nThe region with $\\nabla\/\\nabla_{\\rm ad} < 1$ is the radiative zone.\nIn the pseudo-adiabatic region, $\\nabla$ is\nvery close to $\\nabla_{\\rm ad}$, but still smaller than $\\nabla_{\\rm ad}$.\nThe RCBs shift inwards when heat transport by turbulent mixing\nis taken into account. 
The RCBs penetrate deeper with stronger turbulent mixing.\n }\n\\end{figure}\n\n\n\\section{Onset of Gas Runaway and Cooling Luminosity Variations}\nSince we are interested in the planet accretion history,\nit is necessary to investigate the luminosity with increasing envelope mass.\nIn the deep atmosphere, heat is advected by convective eddies.\nNear the surface, heat is instead transported by radiative diffusion. The surface temperature\ngradients would become shallower and a radiative region shows up.\nThe variation of luminosity with envelope mass is shown in Figure \\ref{LVsMatm}.\nWith the accumulation of envelope mass, the luminosity reaches a minimum.\nBeyond this minimum, the luminosity $L$ increases. As a result, the planet begins to cool on a very\nshort timescale and the envelope mass would grow super-linearly after this epoch.\nPhysically, it is natural to adopt the epoch when the minimum $L$ is reached as the\nonset of gas runaway, $t_{\\rm run}$.\n\nOn the right-hand side of the luminosity minimum, the luminosity-mass relation is relatively\neasy to understand. 
At this late stage of mass growth, the self-gravity\nof the envelope becomes prominent, and a greater\nluminosity is necessary to support the stronger gravity.\nHowever, on the left-hand side of the luminosity minimum,\nthe envelope mass is small and the planet is at its early stage\nof mass growth.\nAt this early stage (when the envelope's self-gravity can be ignored),\nthe luminosity diminishes as the radiative outer layer thickens and the envelope becomes more massive.\nThis reduction in cooling luminosity is intimately related to the shift of the RCBs.\nWhen the envelope self-gravity can be ignored,\nthe luminosity at the RCB can be written as (Piso \\& Youdin 2014)\n\\begin{equation}\nL_{\\rm RCB} = \\frac{64\\pi \\sigma G M_{\\rm RCB} T^4_{\\rm RCB}}{3 \\kappa P_{\\rm RCB}} \\nabla_{\\rm ad}\n\\approx \\frac{L_{\\rm disk} P_{\\rm disk}}{P_{\\rm RCB}} \\ ,\n\\end{equation}\nwhere $M_{\\rm RCB}$ and $L_{\\rm disk}$ read\n\\begin{equation}\nM_{\\rm RCB} = \\frac{5\\pi^2}{4}\\rho_{\\rm RCB} R_{\\rm B}^{\\prime}\\sqrt{R_{\\rm RCB}} \\ , \\\nL_{\\rm disk} \\approx\n\\frac{64\\pi\\sigma G M_{\\rm RCB} T_{\\rm disk}^4}{3 \\kappa_{\\rm d} P_{\\rm disk}} \\nabla_{\\rm ad} \\ .\n\\end{equation}\nThe above equations can be written in terms of known properties if the envelope\nmass is centrally concentrated (see, e.g., Lee \\& Chiang 2015).\nThis central concentration is physically expected since in deeper layers where temperatures\nrise above $\\sim2500$K, hydrogen molecules dissociate. 
As energy is spent on dissociating H$_2$\nmolecules rather than heating up the gas, the adiabatic index drops below 4\/3, to approach 1.\nThe upshot is that both the density at the RCB and the radiative luminosity\ncan be written in terms of core properties and the temperature at the RCB.\n\n\nAs the RCB deepens, it becomes even more\noptically thick, so it is harder to radiate energy away; as a result, the\nenvelope cools more slowly.\n\n\n\n\nIn Figure \\ref{LVsMatm}, we stress that two important aspects of thermal\nevolution during planet accretion would be\naffected by turbulent mixing. The first is that it influences the luminosity.\nIn Figure \\ref{LVsMatm}, we see that when the turbulent diffusivity ($\\nu_{\\rm turb}$) is enhanced, the\ncooling luminosity is reduced globally.\nThat is, for any particular value of envelope mass, the cooling luminosity for\nan envelope with turbulence is always below that without turbulence.\nWhen the turbulence is stronger, the luminosity becomes even smaller.\nThe second is that it changes the EMF at which the gas runaway occurs.\nIn Figure \\ref{LVsMatm}, our calculations show that,\nwhen the turbulence becomes stronger, the onset of gas runaway takes place\nat a higher envelope mass fraction (EMF).\n\n\n\n\n\n\\begin{figure}\n\\includegraphics[scale=0.75]{LVsMatm.eps}\n\\caption{\\label{LVsMatm}\nThe luminosity $L$ varies non-monotonically with envelope mass.\nThe results for $\\nu_{\\rm turb} = 0, 0.005, 0.016$ are shown in\nblue solid, green dot-dashed, and red dashed lines, respectively.\nThe luminosity minimum is reached at $M_{\\rm atm}\/M_{\\rm c}$\n= 0.86, 1.16, 1.20, respectively.\nWhen the envelope mass is small, the increase of envelope mass\ncauses the luminosity to decrease. When the envelope mass is sufficiently large,\nthe self-gravity of the gas envelope becomes important, and a larger\nluminosity $L$ is necessary to support the stronger gravity. 
We choose\nthe luminosity minimum as the epoch when the runaway accretion sets in.\nWe note that two important aspects of thermal evolution during planet accretion are\naffected. With enhanced turbulence, the cooling luminosity is reduced globally.\nWhen the turbulence becomes stronger, the onset of gas runaway occurs\nat a higher envelope mass fraction.\n}\n\\end{figure}\n\n\n\n\n\n\n\n\\section{Quasi-Static KH Evolution and Critical Turbulent Diffusivity}\n\nSince we ignore the accretion luminosity from the planetesimals,\nthe gravitational KH contraction is the only energy source for the cooling.\nThe gas accretion is regulated by the KH timescale.\nOur time evolution model\ncan follow the envelope mass growth up to the very early epoch of\nrunaway growth around the crossover mass.\nFortunately, Pollack et al. (1996) found that the timescale spent in the runaway accretion stage\nis orders of magnitude smaller than the KH timescale. The mass growth\ntimescale is actually dominated by the KH stage. For this reason,\nour model can give a rather accurate estimate of the mass growth timescale of an accreting planet.\nIn this section, we will explore how the turbulent mixing affects the KH contraction timescale.\nFor strong turbulent diffusion, the heat transport may even inflate the planet (Youdin \\& Mitchell 2010).\nWe are not interested in planet inflation induced by strong turbulence in this paper. 
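The identification of the runaway onset described above, taking $t_{\\rm run}$ at the minimum of the luminosity versus envelope-mass curve, can be sketched numerically. The curve below is a toy function with made-up coefficients chosen only to reproduce the qualitative falling-then-rising shape; it is not our model output.

```python
import numpy as np

# Toy L(M_atm) curve: L falls while the radiative layer thickens,
# then rises once self-gravity dominates.  Coefficients are
# illustrative only and carry arbitrary units.
m_atm = np.linspace(0.05, 1.4, 200)   # envelope mass fraction M_atm / M_c
L = 1.0 / m_atm + 2.0 * m_atm**2

# The onset of gas runaway is identified with the luminosity minimum.
i_min = np.argmin(L)
m_run = m_atm[i_min]
print(f"toy runaway EMF: M_atm/M_c = {m_run:.2f}")
```

In practice the same minimum search is applied to the sequence of hydrostatic snapshots, one $(M_{\\rm atm}, L)$ pair per converged envelope model.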
We find that even\nweak turbulence can already play an essential role in delaying the KH contraction.\n\n\n\n\n\\subsection{ Time evolution: Temporally Connecting Snapshots }\nIn the previous section, we obtained snapshots of the envelope\nstructure for different envelope masses.\nTo estimate the accretion timescale,\nwe need to connect them temporally in order of increasing mass.\nThe gas accretion history can be followed\nthrough the cooling process (Piso \\& Youdin 2014).\nDetailed estimates show that the luminosity generated in the radiative\nregion can be safely ignored (Lee et al. 2014).\nIt is physically valid to assume that the luminosity of the envelope is generated\nin the convective zone and that the luminosity can be treated as constant in the\nouter radiative zone (Piso \\& Youdin 2014).\nThis greatly simplifies our evolutionary calculations.\nUnder such circumstances, we only need to solve a set of ordinary differential equations and connect\nthe solutions in time.\nLee \\& Chiang (2015) showed that it is physically valid to omit\nplanetesimal heating during the gas accretion of super-Earths.\nWhen there is no planetesimal accretion to power the gas envelope,\nthe time interval between two adjacent hydrostatic snapshots is the\ntime the envelope takes to cool between them.\nIn addition to internal energy variations, gas accretion and envelope contraction\nalso bring about changes to the global energy budget.\nSpecifically, the time interval between two steady state solutions can be written as (Piso \\& Youdin 2014)\n\\begin{equation}\\label{budget}\n\\Delta t = \\frac{-\\Delta E + \\langle e\\rangle\\Delta M - \\langle P\\rangle\\Delta V_{\\langle M\\rangle}}{\\langle L\\rangle} \\ .\n\\end{equation}\nNote that the symbol $\\Delta$ designates the difference between\nthe two adjacent states and the brackets denote the average of the two.\nThe total energy $E$ consists of the internal energy and\nthe gravitational potential energy, which reads\n\\begin{equation}\nE = 
\\int_{M_c}^{M_{}} u \\ d M_r - \\int_{M_c}^{M_{}} \\frac{G M_r}{r} d M_r\\ ,\n\\end{equation}\nwhere $u$ is the specific internal energy, $u = c_{\\rm v} T$. The second term in\nequation (\\ref{budget}) stands for the contribution from gas accretion. The specific energy of the accreting gas\nis $e = - G M_r \/r + u$. The third term in equation (\\ref{budget}) accounts for the $P dV$ work done by the envelope\ncontraction.\nAll terms are calculated at the RCB. Note in particular that the volume difference\nbetween two adjacent snapshots is evaluated at fixed mass.\nWe choose the fixed mass as the average of the masses at the RCB (Piso \\& Youdin 2014).\n\n\n\\begin{figure}\n\\includegraphics[scale=0.65]{timescale_new.eps}\n\\caption{\\label{timescale}\n{\\it Upper panel} :\nThe accretion history for $\\nu_{\\rm turb} =$ 0, 0.0016, 0.005, and 0.016 is shown\nas cyan dot-dashed, blue solid, green dotted, red dashed lines, respectively.\nThe initial time for the accretion is estimated as $t_0 = |E|\/L$.\nThe slightly different starting times are due\nto the luminosity decrease caused by the inclusion of turbulence (see Figure \\ref{LVsMatm}).\nThe initial EMF is around 6\\%, where the planet is nearly fully convective.\nDifferent color dots in the upper panel denote the epoch, $t_{\\rm run}$, when the gas runaway takes place.\nThe runaway time is $t_{\\rm run} =$ 4.04, 10, 18.4, and 48.3 Myrs,\nrespectively. The solid blue curve shows the critical solution, where $t_{\\rm run} = t_{\\rm disk}$.\nThe critical diffusivity for $M_{\\rm core} = 5 M_{\\oplus}$ is\n$\\nu_{\\rm critical}\\sim 1.6\\times10^{-3}$ if $t_{\\rm disk}=10$ Myrs.\n{\\it Lower panel} : The critical $\\nu_{\\rm critical}$ for various core masses. For higher core\nmass, the critical $\\nu_{\\rm critical}$ is higher.\nWe note that weak turbulence with a small diffusivity,\n$\\mu_{\\rm turb} \\sim 10^{7} -10^{8}$ cm$^2$ s$^{-1}$, can already enhance\nthe runaway timescale and delay the gas runaway. 
\n}\n\\end{figure}\nIn the upper panel of Figure \\ref{timescale}, we show the planet mass growth\nhistory for different turbulent diffusivities.\nIn our fiducial model without turbulence, $t_{\\rm run}\\sim$ 4.04 Myrs.\nBeyond this epoch, the gas runaway occurs.\nThe gas runaway is due to the fast increase of $L$ beyond $t_{\\rm run}$,\nwhich leads to a rapid cooling process on a shorter timescale.\nThe most intriguing feature is that\nthe runaway time is delayed and the accretion timescale is prolonged\nwhen heat transport by tidally-forced turbulent mixing is taken into account.\nFor instance, when $\\nu_{\\rm turb} = 0.0016, 0.005, \\ 0.016$, the runaway time is\n$t_{\\rm run} = 10, \\ 18.4, \\ 48.3$ Myr, respectively.\nThe stronger the turbulence, the longer the gas runaway timescale.\n\nIn our calculations, we find that a small value of\n$\\nu_{\\rm turb}$, on the order of $10^{-3}$, can already appreciably affect\nthe cooling timescale of super-Earths. Since $\\nu_{\\rm turb}$ is dimensionless,\nit is better to recover its physical value according to Equation (\\ref{turb_def}).\nTypically, luminosities for super-Earths are $L \\sim 10^{26}$erg\/s,\n$M_{\\oplus} = 5.97 \\times 10^{27}$g, and $\\rho_0 = 6\\times 10^{-6}$ g cm$^{-3}$.\nThen the term, $L\/(GM_{\\oplus}\\rho_0)$, defined in Equation (\\ref{turb_def})\nis approximately $ \\sim 4.2 \\times10^{10}$ cm$^2$ s$^{-1}$.\nFor the dimensionless diffusivity $\\nu_{\\rm turb} = 0.0016$, the physical diffusivity\nis approximately $\\mu_{\\rm turb} \\sim 6.7 \\times 10^{7}$ cm$^2$ s$^{-1}$.\nFor even larger $\\nu_{\\rm turb}$,\nthe KH contraction timescale can be enhanced by orders of magnitude.\nAccording to Figure \\ref{timescale}, it is evident that a turbulent diffusivity on the order\nof $\\sim 10^7 - 10^{8}$ cm$^2$ s$^{-1}$ can already\nenhance the runaway timescale by an order of magnitude.\nThe pressure scale height inside the planet is $H_p \\sim 10^9$ cm and the sound speed\nis $c_s \\sim 10^5$ cm s$^{-1}$. 
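The unit conversions quoted above can be checked directly with the cgs values given in the text; the diffusivity scale $L\/(GM_{\\oplus}\\rho_0)$ follows Equation (\\ref{turb_def}), and $\\zeta$ is formed with the quoted $c_s$ and $H_p$ (a quick arithmetic check, not a model calculation):

```python
# Recover the physical diffusivity from the dimensionless nu_turb
# using the fiducial cgs numbers quoted in the text.
G    = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
L    = 1.0e26      # typical super-Earth luminosity [erg s^-1]
M_e  = 5.97e27     # Earth mass [g]
rho0 = 6.0e-6      # density scale [g cm^-3]

scale = L / (G * M_e * rho0)    # diffusivity scale, ~4.2e10 cm^2 s^-1
mu_turb = 0.0016 * scale        # physical diffusivity for nu_turb = 0.0016

# Dimensionless ratio zeta = mu_turb / (c_s * H_p),
# with c_s ~ 1e5 cm/s and H_p ~ 1e9 cm as quoted above.
zeta = mu_turb / (1.0e5 * 1.0e9)

print(f"scale   = {scale:.2e} cm^2/s")
print(f"mu_turb = {mu_turb:.2e} cm^2/s")
print(f"zeta    = {zeta:.1e}")
```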
\nWe can get a physical sense of how large the turbulent diffusivity is by estimating\nthe dimensionless parameter $\\zeta$ in Equation (\\ref{turb_def}).\nIn our calculation, the parameter $\\zeta$ is quite small, on the order\nof $10^{-7}\\sim 10^{-6}$. This means that the turbulent diffusion\nnecessary to prolong the cooling timescale need not be very strong.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Critical Turbulence Diffusivity $\\nu_{\\rm critical}$ and Super-Earth Formation}\n\nA gas giant would be formed if the protoplanetary disk is still full of gas when\nthe planet enters the runaway accretion stage. However,\nif the runaway time $t_{\\rm run}$ is longer than the disk lifetime $t_{\\rm disk}$,\nthe disk gas is depleted and the planet is unable to accrete sufficient gas to become a gas giant;\nin that case, a super-Earth may be formed.\nTwo timescales, $t_{\\rm run}$\nand $t_{\\rm disk}$, determine the ultimate destiny of the planet,\ni.e., whether the planet becomes a super-Earth\nor a gas giant. If $t_{\\rm run} < t_{\\rm disk}$, gas runaway occurs within the lifetime of the\ndisk. The planet would get inflated by the runaway gas accretion and become a gas giant.\nOn the contrary, if $t_{\\rm run} > t_{\\rm disk}$, the disk disperses before the gas\nrunaway takes place. Because there is not enough gas material for the planet to accrete, the planet is\nunable to become a gas giant. 
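The dichotomy just described amounts to a single threshold comparison. A schematic sketch (the helper name is our own; the default threshold is the value quoted below for a $5 M_{\\oplus}$ core with $t_{\\rm disk} = 10$ Myrs):

```python
def planet_fate(nu_turb, nu_critical=1.6e-3):
    """Schematic outcome of the timescale comparison:
    nu_turb > nu_critical  <=>  t_run > t_disk  =>  super-Earth;
    otherwise runaway occurs within the disk lifetime => gas giant."""
    return "super-Earth" if nu_turb > nu_critical else "gas giant"

print(planet_fate(0.016))   # strong mixing delays runaway past t_disk
print(planet_fate(0.0))     # fiducial case without turbulence
```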
Usually the disk lifetime is about $5-10$ Myr.\nTo be specific, we take the disk lifetime as $t_{\\rm disk} = $ 10 Myrs throughout this paper.\n\nIn the upper panel of Figure \\ref{timescale}, the core mass is fixed at $M_{\\rm core} = 5 M_{\\oplus}$.\nWe find that there exists a critical diffusivity $\\nu_{\\rm critical} = 1.6\\times 10^{-3}$.\nWhen $\\nu_{\\rm turb}> \\nu_{\\rm critical}$, the KH contraction timescale\nbecomes longer than the disk lifetime and the core would not be able to undergo the gas runaway.\nIn this case, the formation of gas giants can be avoided and the formation of super-Earths becomes viable.\nIn the lower panel of Fig. \\ref{timescale}, we show the variations of $\\nu_{\\rm critical}$\nwith $M_{\\rm core}$. The critical diffusivity becomes larger when the core mass increases.\nSpecifically, for a 10 Earth mass core, the critical dimensionless diffusivity is approximately\n$\\nu_{\\rm critical } = 3.2\\times10^{-2}$. The actual diffusivity is about $\\sim 10^{9}$ cm$^{2}$ s$^{-1}$.\n\n\n\n\n\n\n\n\n\\subsection{Variations of $\\nu_{\\rm critical}$ with Planet Location in PPDs }\nObservationally, the Kepler statistics show that $\\sim$20\\% of Sun-like stars harbor super-Earths\nat distances of 0.05-0.3 AU. By contrast, the occurrence rate\nfor hot Jupiters inside $\\sim 0.1$ AU is only 1\\%. To explain these observational features,\nwe consider how the turbulence affects the thermal evolution for planets\nat different locations in PPDs.\nThe turbulent mixing considered in this paper is driven by the tides raised by the host star.\nWe believe that the tidally-induced turbulent mixing inside the planet\nwould become weaker when the planet is farther away from the host star.\n\nLee et al. (2014) found that, for a dusty disk, the runaway timescale is independent of\nthe orbital location. 
However, since dust cannot persist in the envelope due to\ncoagulation and sedimentation (Ormel 2014; Mordasini 2014), \nthe runaway timescale is no longer independent of the orbital location.\nIn the upper panel of Figure \\ref{semimajor}, we show the accretion\nhistory for planets at three different locations. The core mass is $M_{\\rm c} = 6 M_{\\oplus}$.\nThe blue solid, green dot-dashed, and red dashed curves\ndesignate the temporal variations of envelope mass for $a = $ 0.1AU, 1AU, and 5AU, respectively.\nThe turbulent diffusivity is $\\nu_{\\rm turb} = 0.013$. The gas runaway occurs at\n$t_{\\rm run} $= 33.1, 3.2, and 1.7 Myrs. It is clear that gas accretion\nonto cores is hastened for planets that are farther away from the central star.\nThis behaviour can be understood from the decrease in opacity\nat the RCB, which makes the envelope more transparent, enhancing the rate of cooling\n(Lee \\& Chiang 2015; Inamdar \\& Schlichting 2015).\nThe planets at $a = $ 1 AU and 5 AU would become gas giants due to\nrunaway accretion ($t_{\\rm run} < t_{\\rm disk}$).\nHowever, the planet in the inner region $a= $ 0.1 AU would become a super-Earth ($t_{\\rm run} > t_{\\rm disk}$).\nThe fact that atmospheres cool more rapidly at large distances as\ndust-free worlds has been used to explain the presence of extremely puffy, low-mass planets\n(Inamdar \\& Schlichting 2015; Lee \\& Chiang 2016).\n\n\nWe explore the critical diffusivity, $\\nu_{\\rm critical}$, for planets at\ndifferent locations inside the minimum mass extrasolar nebula (MMEN).\nThe results are shown in the lower panel of Figure \\ref{semimajor}.\nIt shows that the critical diffusivity increases with the semi-major axis.\nWhen the planet is farther from the central star, $\\nu_{\\rm critical}$ becomes larger.\nThis means that more distant planets require stronger turbulence\nto lengthen the KH timescale and avoid gas runaway.\nFor tidally-induced forcing, we believe that the turbulent diffusion $\\nu_{\\rm turb}$ is determined by\nthe tides inside the planet raised by the host star. The tides become weaker if the planet is farther away\nfrom the host star.\nOur proposed mechanism can naturally explain the formation of close-in super-Earths,\nwhile still ensuring gas giant formation at larger orbital distances.\nWhen the planet is near the host star, tidally-forced turbulent mixing is stronger and $\\nu_{\\rm turb}$ would be larger.\nAccording to Figure \\ref{semimajor}, the required threshold $\\nu_{\\rm critical}$ is smaller.\nAs a result, the inequality $\\nu_{\\rm turb} > \\nu_{\\rm critical}$ can be more readily\nsatisfied and the formation of super-Earths becomes possible.\nOn the contrary, when the planet is far from the host star, $\\nu_{\\rm turb}$ becomes smaller as the stirring by\ntides becomes weaker. The required threshold $\\nu_{\\rm critical}$ becomes larger.\nThe threshold to avoid gas runaway is more difficult to satisfy.\nThis indicates that, in the in-situ planet formation scenario,\nit is easier to form close-in super-Earths,\nwhile gas giants are more prone to appear in the outer region of PPDs.\nThe above implication is consistent with the occurrence rates inferred from observations.\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n\\includegraphics[scale=0.7]{semimajor_new.eps}\n\\caption{\\label{semimajor}\n{\\it Upper panel} : Variations of envelope mass with time.\nThe core mass is $M_{\\rm c} = 6 M_{\\oplus}$. The turbulent\ndiffusivity is $\\nu_{\\rm turb} = 0.01$. 
The blue solid, green dot-dashed, red dashed lines denote\nthe mass growth history for planets at 0.1 AU, 1 AU, and 5 AU, respectively.\nThe critical mass ratio at the epoch of runaway decreases for more distant planets.\nThe runaway times for the three different cases are 33.1, 3.2, and 1.7 Myr, respectively.\nIt is expected that for more distant planets, a larger turbulent diffusivity is required to prevent\nrunaway gas accretion within $t_{\\rm disk} \\sim$ 10 Myrs.\n{\\it Lower panel} : The critical diffusivity, $\\nu_{\\rm critical}$, for different orbital locations, required\nto prevent gas runaway for a disk lifetime $t_{\\rm disk} \\sim$ 10 Myrs.\nBeyond $\\nu_{\\rm critical}$, the KH timescale is longer than the disk lifetime.\nThe formation of super-Earths becomes possible.\n }\n\\end{figure}\n\n\n\n\n\\section{Mass Loss Mechanisms}\nObservations show that super-Earths possess hydrogen and helium\nenvelopes containing only several percent of the planet's mass.\nHowever, we can see in Figure \\ref{timescale} that the planets accrete\nvery massive gas envelopes.\nThe planet core with $\\nu_{\\rm turb}=0$ reaches an envelope mass fraction (EMF)\nof $\\sim 0.8$ at the epoch of gas runaway.\nThe envelope mass is considerably higher than the mass inferred from observations.\nThese primordial super-Earths may experience\nsignificant mass loss during the post-formation evolution.\n\nHow super-Earths lose their mass still remains an open question.\nHere we briefly mention some possible ways to lose the envelope mass.\nThe first possibility is that close-in planets are exposed to intense XUV (extreme UV\nand X-ray) irradiation from their host stars. 
Photoevaporation\ncan significantly modify the structure of their atmospheres.\nOver a timescale of $\\sim 100$ Myrs, X-rays from host stars can photoevaporate\nthe super-Earth envelopes from an initial EMF $\\sim 1$ to an EMF of $\\sim 0.01-0.1$,\nwhich may naturally explain the differences between\nthe theoretical predictions and observational facts (e.g., Murray-Clay et al. 2009;\nOwen \\& Wu 2013; Owen \\& Wu 2017; Gaudi et al. 2017).\n\nGiant impacts are the second possible mechanism to explain the mass loss;\nthey are expected to be common because they are needed to provide the\nlong-term orbital stability of planetary systems (Cossou et al. 2014).\nHydrodynamical simulations show that a single collision between similarly sized exoplanets\ncan easily reduce the envelope-to-core-mass ratio by a factor of two.\nSuper-Earths' asymptotic mass can be achieved by one or two giant impacts.\nUnder certain circumstances, almost 90\\% of the gas envelope can be\nlost during the impact process (Liu et al. 2015; Inamdar \\& Schlichting 2016).\n\n\nMass transfer between the close-in planet and its host star via Roche-lobe overflow represents a third way to\nreduce the planet mass (Valsecchi et al. 2015; Jia \\& Spruit 2017; Jackson et al. 2017).\nTidal dissipation can drive the orbits of these primordial super-Earths to decay toward the Roche limit.\nThe mass transfer is quite rapid, potentially leading to complete removal of the gaseous envelope in a few Gyr,\nleaving behind a super-Earth.\nMany gaseous exoplanets in short-period orbits are on the verge of, or are in the process of, Roche-lobe overflow (RLO).\nThe coupled processes of orbital evolution and RLO likely shape the observed distribution of close-in exoplanets and may even be responsible for producing some of the short-period rocky planets. But recent calculations by Dosopoulou et al. 
(2017) challenged this idea by claiming that, for highly eccentric or retrograde planets, self-accretion by\nthe planet would slow down the mass loss rate via Roche-lobe overflow.\n\n\nSuper-Earth envelope mass fractions range from 1\\% to 10\\%, and are more typically just $\\sim$1\\%\n(see Rogers \\& Seager 2010, Lopez \\& Fortney 2014, Wolfgang \\& Lopez 2015).\nThe mechanism discussed in this paper overpredicts the envelope mass fraction of super-Earths, often beyond 80\\%.\nPhotoevaporation, even around Sun-like stars, is only effective out to orbital periods of $\\sim$10 days, and many super-Earths lie beyond this (see, e.g., Figure 8 of Owen \\& Wu 2013). Removal of $>$90\\% of the envelope by giant impact requires an impact velocity that exceeds the escape velocity (see, e.g., Figure 3 of Inamdar \\& Schlichting 2016). Finally, Roche-lobe overflow only works within $\\sim$2 stellar radii, where the Roche limit lies.\nLee \\& Chiang (2016) proposed that the late-time formation of cores ensures that super-Earth\ncores accrete a few percent envelope mass fraction, in agreement with the observations.\nThere is a clear difference in the expected final envelope mass fraction\nbetween their work and ours.\n\nVery recent works have revealed that planetary envelopes embedded within PPDs\nmay not be in hydrostatic balance, which slows down envelope growth. 
It\nis possible for a steady-state gas flow to enter\nthrough the poles and exit in the disc mid-plane (Lambrechts \\& Lega 2017).\nIn the presence of a magnetic field and weakly ionizing winds,\nohmic energy is dissipated more readily for lower-mass planets.\nOhmic dissipation would make super-Earths more vulnerable to atmospheric evaporation (Pu \\& Valencia 2017).\nThese findings may offer new explanations for the typical low-mass envelopes around the cores of super-Earths.\nIn addition, we also note that the turbulent\ndiffusion mechanism may still operate in the late core assembly scenario.\nIn the late core assembly scenario without turbulent diffusion, the asymptotic EMF is about 3-5\\% (Lee \\& Chiang 2016).\nWhen turbulent diffusion is taken into account, the EMF can be further reduced to 1\\%.\n\n\n\n\n\\section{Summary and Conclusion}\n\nIn this paper, we propose a new mechanism to avoid gas runaway for planet cores\nwithin the lifetime of disks.\nThe mechanism proposed in this paper is not subject to the $\\kappa$ or $\\mu$\ncatastrophe (Lee \\& Chiang 2015). 
Tidal heating (Ginzburg \\& Sari 2017) requires\nthe orbital eccentricity to be continuously pumped up during super-Earth formation.\nOur mechanism does not depend on the orbital eccentricity of the super-Earth.\nIncorporating this model into a population synthesis model may better constrain our\nunderstanding of exoplanet formation (Ida \\& Lin 2004; Jin \\& Mordasini 2017).\n\nWe have explored the effects of heat transport induced by tidal stirring on the thermal\nstructure of stably stratified, radiative layers of super-Earths,\nfocusing on their influences on the KH timescale.\nWhen we take turbulent stirring into account,\npseudo-adiabatic regions would show up within the radiative zone.\nThis may push the RCBs inwards.\nThe temperature and pressure at the RCBs become higher and the cooling luminosity would be reduced.\nAs a result, the KH timescale would be enhanced.\nWe find that\nthere exists a critical turbulent diffusivity $\\nu_{\\rm critical}$. When\n$\\nu_{\\rm turb} > \\nu_{\\rm critical}$, the runaway time is greater than\nthe disk lifetime ($t_{\\rm run} > t_{\\rm disk}$). Under such circumstances,\nthe onset of the planet gas runaway lags behind the disk gas depletion.\nSince the planet does not have enough gas to accrete, it can no longer grow\ninto a gas giant and becomes a super-Earth instead. In addition, we also investigate\nthe variations of $\\nu_{\\rm critical}$ with the planet's semi-major axis in the MMEN.\nOur calculations show that the condition for turbulence-induced formation of super-Earths\nis more readily satisfied in the inner disk region, but is harder to satisfy in the outer\ndisk region. 
The occurrence rate of super-Earths and gas giants is consistent with our calculations.\n\n\nThe extent of the radiative region has important implications\nfor the tidal dissipation inside the planet.\nThe turbulence pushes the RCBs inwards and produces enlarged radiative zones.\nSince internal gravity waves can\npropagate inside the radiative zone, the variations of this resonant cavity\nwould significantly influence the propagation and dissipation of internal\ngravity waves inside the radiative zone (Jermyn et al. 2017).\nAnother effect is that the transition between the convective and radiative zones is smoothed.\nThe radiative zone is thickened, and\nthis bears important implications for internal gravity wave\nexcitation and propagation (Lecoanet \\& Quataert 2013).\nThis would have appreciable effects on the thermal tides inside the planet.\nThese issues will be addressed in a further study.\n\n\nA limitation of this work is that the turbulence strength is not specified from first principles.\nAs a compromise, we parameterize the turbulent diffusion with a free parameter. We try\nto constrain the turbulence strength in terms of the planet's thermal evolution. Interestingly,\nwe find that the turbulence in the radiative region has substantial effects on the planet's\naccretion history.\nHow turbulence is initiated during planet formation and how strong the turbulent diffusion\nis involve very complicated physical processes, which are worth further investigation.\n\n\nRealistic opacities and EOS have influential effects on\nthe planetary thermal structure and the core accretion process (e.g. Stevenson 1982;\nIkoma et al. 2000; Rafikov 2006),\nespecially for the KH timescale (Lee et al. 2014; Piso \\& Youdin 2014).\nOur simple prescription of the opacity needs to be improved.\nGuillot et al. 
(1994)\nshowed that a convective layer lies between two adjacent radiative\nregions due to the opacity window near $\\sim$ 2000K.\nA relevant caveat is the existence of radiative zones sandwiched inside the convective interior.\nSuch radiative windows are ignored in our two-layer models.\nIt would be interesting\nto consider how a downward turbulent heat flux would interact with such\na sandwiched region. In summary, how the super-Earth envelope cooling\nhistory responds to more realistic opacities and EOS needs to be further investigated.\nCalculations with realistic EOS and opacity are underway and will be reported elsewhere.\n\n\nWe have found that the epoch of runaway accretion can be effectively\ndelayed by the turbulent diffusion within the stably stratified region. But we should be cautious that\nthe envelope mass fraction predicted by this mechanism is not fully consistent with observations.\nThe envelope mass fraction for a planet embedded within the gas-rich MMEN is greater than 80\\%, much higher\nthan that of the typical super-Earth envelope.\nIt is difficult for the turbulent diffusion alone to make the envelope mass fraction consistent with observations.\nAdditional physical processes, such as giant impacts, photo-evaporation, and Roche-lobe overflow, may be operating to\nreduce the envelope mass fraction during\nthe formation of super-Earths. But these mass loss processes either operate at distances smaller\nthan those of most super-Earths or are applicable only under certain circumstances. A promising mechanism for super-Earth formation\nis the late core assembly within transitional PPDs. In this scenario, with the reduction of the\nPPD mass density, the envelope mass fraction can be as low as 3-5\\% (Lee \\& Chiang 2016). We note that the turbulent diffusion\nmay still be working in the late core assembly scenario. 
How turbulent diffusion affects the envelope\nmass fraction within transitional PPDs is an interesting issue worth further investigation.\n\n\n\\acknowledgments\nWe thank the anonymous referee for the thoughtful comments that greatly improved this paper.\nDiscussions about heat transport inside planet interiors\nwith Yanqin Wu and Re'em Sari are highly appreciated.\nThis work has been supported by National Natural\nScience Foundation of China (Grants 11373064, 11521303, 11733010),\nYunnan Natural Science Foundation (Grant 2014HB048)\nand Yunnan Province (2017HC018).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLacunary generating functions appeared previously in a number of circumstances, including for example the treatment of Cauchy problems in partial differential equations~\\cite{babusci2017lacunary,penson2018quasi}. Here, we develop a rather general technique for the treatment of such generating functions, applicable to sequences $P=(p_n(x,y))_{n=0}^{\\infty}$ of polynomials $p_n(x,y)$, where $x$ is the generic variable and $y$ plays the role of a parameter. Such two-variable extensions of one-variable polynomials have been strongly advocated in~\\cite{babusci2010lectures}. They can be logically and consistently defined for all standard families of orthogonal polynomials such as Hermite, Laguerre, Chebyshev of first and second kind, Jacobi and Legendre polynomials~\\cite{babusci2017lacunary,babusci2010lectures,beals2016special}. Once such two-variable equivalents are properly defined, their one-variable variants are obtained by fixing the values of both variables to functions of one of the variables. 
The concrete example considered in this paper is given by the two-variable Hermite (or so-called Hermite-Kamp\\'{e} de F\\'{e}riet) polynomials $H_n(x,y)$~\\cite{kampe,dattoli1997evolution}, from which the standard one-variable Hermite polynomials $H_n(x)$ may be recovered via (see~\\eqref{eq:HPtwo} below for the definition of $H_n(x,y)$)\n\\begin{equation}\nH_n(x)=H_n(2x,-1)\\,.\n\\end{equation}\nWe will focus our attention on the derivation of a general formula for the $K$-tuple $L$-shifted lacunary generating functions $\\cH_{K,L}(\\lambda;x,y)$ of the two-variable Hermite polynomials $H_n(x,y)$, which are defined (for $K=1,2,3,\\dotsc$ and $L=0,1,2,\\dotsc$) by\n\\begin{equation}\\label{eq:HLGFa}\n\\cH_{K,L}(\\lambda;x,y):=\\sum_{n=0}^{\\infty}\\frac{\\lambda^n}{n!}\\; H_{n\\cdot K+L}(x,y)\\,.\n\\end{equation}\nThe exponential generating functions of type~\\eqref{eq:HLGFa} for Hermite and other types of polynomials are very sparsely known, and progress in obtaining new closed-form formulas has been painstakingly slow. A glance at standard reference tables~\\cite{prudnikov1992integrals} reveals only a few known examples. A number of results in this vein were obtained by combinatorial approaches initiated by D.~Foata and V.~Strehl in~\\cite{foataStrehl1984}, supplemented by umbral methods; see~\\cite{dattoli2017operational} and references therein. This methodology culminated recently in a tandem study of various lacunary generating functions of Laguerre polynomials derived by purely umbral-type~\\cite{babusci2017lacunary} and purely combinatorial methods~\\cite{strehl2017lacunary}. Only a few results are currently available for lacunary generating functions of Hermite polynomials: the double lacunary case has been combinatorially re-derived by D.~Foata in~\\cite{foata1981some}, whereas the more challenging triple-lacunary generating function has been derived by both umbral and combinatorial methods in~\\cite{gessel2005triple}. 
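The reduction $H_n(x)=H_n(2x,-1)$ and the $K=1$, $L=0$ instance of~\eqref{eq:HLGFa} are easy to verify numerically. The sketch below assumes the standard Kamp\'{e} de F\'{e}riet series $H_n(x,y)=n!\sum_{m=0}^{\lfloor n/2\rfloor} x^{n-2m}y^m/(m!\,(n-2m)!)$, which is the usual definition referred to in~\eqref{eq:HPtwo}:

```python
from math import exp, factorial

def hermite2(n, x, y):
    """Two-variable Hermite (Kampe de Feriet) polynomial H_n(x, y), via the
    standard series H_n(x,y) = n! * sum_m x^(n-2m) y^m / (m! (n-2m)!)."""
    return factorial(n) * sum(
        x ** (n - 2 * m) * y ** m / (factorial(m) * factorial(n - 2 * m))
        for m in range(n // 2 + 1)
    )

# One-variable (physicists') Hermite polynomials are recovered via H_n(x) = H_n(2x, -1):
x = 0.7
assert abs(hermite2(3, 2 * x, -1) - (8 * x**3 - 12 * x)) < 1e-12
assert abs(hermite2(4, 2 * x, -1) - (16 * x**4 - 48 * x**2 + 12)) < 1e-12

# K=1, L=0 case of the lacunary sum: sum_n H_n(x,y) lam^n / n! = exp(x*lam + y*lam^2),
# the classical (non-lacunary) exponential generating function of H_n(x,y)
lam, xv, yv = 0.3, 1.1, -0.4
partial = sum(lam**n / factorial(n) * hermite2(n, xv, yv) for n in range(40))
assert abs(partial - exp(xv * lam + yv * lam**2)) < 1e-10
```

The last assertion instantiates $\cH_{1,0}(\lambda;x,y)=e^{x\lambda+y\lambda^2}$; the truly lacunary cases $K\geq 2$ are the subject of the paper.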
Finally, several new lacunary generating functions for Legendre and Chebyshev polynomials were obtained recently by a combination of analytic and umbral methods in~\\cite{gorskalacunary}. To conclude our short survey of results known previously in the literature, let us comment that there exists a related result due to Nieto and Truax~\\cite{nieto1995arbitrary}, which (in the form adapted to the two-variable Hermite polynomials $H_n(x,y)$ as presented in~\\cite{dattoli1997evolution,dattoli1998operational}) reads for $K\\in \\bZ_{\\geq 1}$, $L\\in \\bZ_{\\geq 0}$ and $L0$), the Hamiltonian $H_{\\texttt{GRWA}}$ can be written in the matrix form as\n\\begin{widetext}\n\\begin{equation}\nH_{\\texttt{GRWA}}=\\left(\n\\begin{array}{cccc}\n\\omega (n+2)+\\mu _{1}(n+2) & \\Delta R_{n+1,n+2}^{\\prime } & 0 & 0 \\\\\n\\Delta R_{n+1,n+2}^{\\prime } & \\omega (n+1)+\\mu _{2}(n+1) & \\Delta R_{n,n+1}^{\\prime }\n& 0 \\\\\n0 & \\Delta R_{n,n+1}^{\\prime } & \\omega n+\\mu _{3}(n) & \\Delta R_{n-1,n}^{\\prime } \\\\\n0 & 0 & \\Delta R_{n-1,n}^{\\prime } & \\omega (n-1)+\\mu _{4}(n-1)\n\\end{array}\n\\right) ,\n\\end{equation}\n\\end{widetext}with $R_{n+1,n+2}^{\\prime }=\\frac{-\\sqrt{3}K_{2}+K_{1}(\\sqrt{3}+2K_{2})}{C_{1}C_{2}}R_{n+1,n+2}\\sqrt{n+2}$, $R_{n,n+1}^{\\prime }=\\frac{\n\\sqrt{3}K_{3}+K_{2}(\\sqrt{3}-2K_{3})}{C_{2}C_{3}}R_{n,n+1}\\sqrt{n+1}$ and\n$R_{n-1,n}^{\\prime }=\\frac{-\\sqrt{3}K_{4}+K_{3}(\\sqrt{3}+2K_{4})}{C_{3}C_{4}}R_{n-1,n}\\sqrt{n}$.\n\nTo this end, the GRWA can also be performed analytically without more effort than\nfor the original Hamiltonian $H_{\\texttt{RWA}}$ in Eq.(\\ref{RWA}).\nThe displaced oscillator states $|n\\rangle _{m}$, $|n\\pm\n1\\rangle _{m}$ and $|n+2\\rangle _{m}$ depend upon the Dicke state\n$|j,m\\rangle $, and are definitely different from both the RWA ones and the zeroth-order approximations, where\nonly the state $|n\\rangle_{m}$ is considered. 
Hence, as $\\Delta\/\\omega$ increases,\nthe first-order correction provides an efficient, yet accurate analytical solution.\n\nThe ground-state energy for the ground state $|-\\frac{3}{2}\\rangle |0\\rangle\n$ is\n\\begin{equation}\nE_{0}=-\\frac{5g^2}{4\\omega}-\\frac{\\Delta }{2}e^{-\\frac{g^{2}}{2\\omega ^{2}}}-2\\chi _{1,0}.\n\\end{equation}\nThe first and second excited energies $\\{E_{0}^{k}\\}$ ($k=1,2$) can be given\nby expanding the GRWA Hamiltonian in the basis $|-\\frac{3}{2}\\rangle\n|1\\rangle$ and $|-\\frac{1}{2}\\rangle |0\\rangle$\n\\begin{equation}\nH_{\\mathtt{GRWA}}=\\left(\n\\begin{array}{cc}\n\\omega +\\mu _{1}(1) & \\Delta R_{0,1}^{\\prime } \\\\\n\\Delta R_{0,1}^{\\prime } & \\mu _{2}(0)\n\\end{array}\n\\right) .\n\\end{equation}\nSimilarly, $H_{\\mathtt{GRWA}}$ is given in terms of $|-\\frac{3}{2}\\rangle\n|2\\rangle $, $|-\\frac{1}{2}\\rangle |1\\rangle $, $|\\frac{1}{2}\\rangle\n|0\\rangle $ as\n\\begin{equation}\nH_{\\mathtt{GRWA}}=\\left(\n\\begin{array}{ccc}\n2\\omega +\\mu _{1}(2) & \\Delta R_{1,2}^{\\prime } & 0 \\\\\n\\Delta R_{1,2}^{\\prime } & \\omega +\\mu _{2}(1) & \\Delta R_{0,1}^{\\prime } \\\\\n0 & \\Delta R_{0,1}^{\\prime } & \\mu _{3}(0)\n\\end{array}\n\\right) ,\n\\end{equation}\nwhich provides three analytical excited energies $\\{E_{0}^{k}\\}$ ($k=3,4,5$).\n\nEnergies obtained by the GRWA are presented as dashed lines in Fig.~\\ref{energy level}. In particular, for the resonance case $\\Delta =\\omega $, the GRWA results are much better than the zeroth-order results (blue dotted lines) in Fig.\\ref{energy level}(b). This is ascribed to the effect of the coupling between states\nin different manifolds. Our approach is basically a perturbative expansion\nin terms of $\\Delta\/\\omega$. 
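Each finite GRWA block above is a small real symmetric matrix, so the approximate excited energies reduce to an elementary $2\times2$ or $3\times3$ diagonalization. A minimal numerical sketch of the $2\times2$ block (the values of $\omega$, $\mu_i$ and $R'_{0,1}$ below are illustrative placeholders, not taken from the model):

```python
import numpy as np

# Illustrative placeholder parameters (mu_i and R' come from the GRWA transformation)
omega, Delta = 1.0, 1.0
mu1, mu2, R01 = -0.12, -0.08, 0.45

# 2x2 GRWA block in the basis {|-3/2>|1>, |-1/2>|0>}
H2 = np.array([[omega + mu1, Delta * R01],
               [Delta * R01, mu2]])

# Numerical eigenvalues agree with the closed form for a real symmetric 2x2 matrix
a, d, b = H2[0, 0], H2[1, 1], H2[0, 1]
mean, disc = (a + d) / 2, np.hypot((a - d) / 2, b)
analytic = np.array([mean - disc, mean + disc])
assert np.allclose(np.linalg.eigvalsh(H2), analytic)
```

The $3\times3$ (and the generic $4\times4$) blocks are handled identically with `np.linalg.eigvalsh`, which returns the energies in ascending order.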
As $\\Delta \/\\omega$ increases,\nthe high-order terms in Eq.(5) still cannot be neglected in the intermediate and strong coupling regimes.\nSo the GRWA works reasonably well in the ultra-strong coupling regime $g\/\\omega<0.3$ at resonance.\nInterestingly, the level crossing is present in both the GRWA results\nand the exact ones. The RWA requires weak coupling due to the complete neglect of the CRW terms, and its results become\nqualitatively incorrect as the coupling strength increases. In contrast, the GRWA includes the dominant contribution of the CRW\nterms, exhibiting a substantial improvement\nof the energy levels over the RWA ones. The RWA fails in particular to describe\nthe eigenstates, which are more sensitive probes for the quantum entanglement\npresented in the next section.\n\n\\section{Quantum entanglement}\n\nIn the present three-qubit system, we study the GME for the multipartite entanglement and the concurrence for the bipartite\nentanglement. A fully separable three-particle state must contain no entanglement.\nIf the state is not fully separable, then it\ncontains some entanglement, but it might still be separable with respect to\ntwo-party configurations. For genuine multiparticle entangled states, all\nparticles are entangled, and therefore the GME is very important among the various definitions of entanglement.\n\n\nWe review the basic definitions of GME for the three qubits $A$, $B$, and $C$. A separable state is a mixture of product states with respect to a bipartition $A|BC$, that is $\\rho_{A|BC}^{sep}=\\sum_{j}p_j|\\varphi_A^j\\rangle\\langle\\varphi_A^j|\\otimes|\\varphi_{BC}^j\\rangle\\langle\\varphi_{BC}^j|$,\n where $p_j$ is a coefficient. Similarly, we denote the separable states for the two other bipartitions as $\\rho_{B|AC}^{sep}$ and $\\rho_{C|AB}^{sep}$. A biseparable state is a mixture of separable states, and combines the separable states $\\rho_{A|BC}^{sep}$, $\\rho_{B|AC}^{sep}$, and $\\rho_{C|AB}^{sep}$ with respect to all possible bipartitions. 
Any state that is not a biseparable state is called genuinely multipartite entangled.\n\nRecently, a powerful technique has been advanced to characterize multipartite entanglement using positive partial transpose (PPT) mixtures~\\cite{peres}. It is well known that a separable state is PPT, implying that its partial transpose is positive semidefinite.\nWe denote a PPT mixture of a tripartite state as a convex combination of PPT states $\\rho_{A|BC}^{PPT}$, $\\rho_{B|AC}^{PPT}$ and $\\rho_{C|AB}^{PPT}$\nwith respect to different bipartitions.\nThe set of PPT mixtures contains the set of\nbiseparable states. The advantage of using PPT mixtures instead of biseparable states is that the set of PPT mixtures\ncan be fully characterized by linear semidefinite programming (SDP)~\\cite{boyd},\nwhich is a standard problem of constrained convex optimization theory.\n\nIn order to characterize PPT mixtures, a multipartite state which is not a PPT mixture can be detected by a decomposable entanglement witness $W$~\\cite{novo}. The witness operator is defined as $W=P_M+Q_M^{T_M}$ for all bipartitions $M|\\bar{M}$, where $P_M$ and $Q_M$ are positive semidefinite operators, and $T_M$ is the partial transpose with respect to $M$. This observable $W$ is positive on all PPT mixtures, but has a negative expectation value on at least one entangled state.\nTo find a fully decomposable witness for a given state $\\rho$, the convex optimization technique SDP\nbecomes important, since it allows us to optimize over all fully decomposable witnesses.\nHence, a state $\\rho$ is a PPT mixture only if the optimization problem~\\cite{novo},\n\\begin{equation} \\label{minm}\n\\textrm{minimize:} {}{} \\mathtt{Tr}(W\\rho),\n\\end{equation}\nhas a positive solution. If the minimum in Eq.~(\\ref{minm}) is negative, $\\rho$ is\nnot a PPT mixture and hence is genuinely multipartite entangled.\nWe denote the absolute value of the above minimization as $E(\\rho)$. 
For solving the SDP we use the programs YALMIP and SDPT3~\\cite{yalmip,program}, which are freely available.\n\n\nNow we discuss the dynamics of the GME for the three-qubit entanglement.\nThe initial entangled three-qubit state is chosen as the W state with only one excitation\n\\begin{equation}\n|W\\rangle =\\frac{1}{\\sqrt{3}}(|100\\rangle +|010\\rangle +|001\\rangle ),\n\\label{initial state}\n\\end{equation}\nwhich corresponds to the Dicke state $|D_{3}\\rangle =|-\\frac{1}{2}\\rangle $. For the Hamiltonian~(\\ref{Ham}) with respect to the rotation around the\n$y$ axis by the angle $\\pi\/2$, the initial Dicke state can be written as\n\\begin{equation}\n|D_{3}\\rangle =\\frac{1}{\\sqrt{8}}(-\\sqrt{3}|-\\frac{3}{2}\\rangle -|-\\frac{1}{2}\\rangle +|\\frac{1}{2}\\rangle +\\sqrt{3}|\\frac{3}{2}\\rangle),\n\\label{initial state1}\n\\end{equation}\nand the initial cavity state is the vacuum state $|0\\rangle $. Based on the\neigenstates $\\left\\{ |\\varphi _{k,n}\\rangle\\right\\} $ and eigenvalues\n$\\left\\{ E_{n}^{k}\\right\\} $ in the GRWA and the zeroth-order approximation,\nthe wavefunction evolves from the initial state as $|\\phi (t)\\rangle\n=\\sum_{n,k}e^{-iE_{n}^{k}t}|\\varphi _{k,n}\\rangle \\langle \\varphi\n_{k,n}|D_{3}\\rangle $. 
The three-qubit reduced state $\\rho (t)$ is obtained by\ntracing out the cavity degrees of freedom\n\\begin{equation}\n\\rho (t)=\\mathtt{Tr}_{\\mathtt{cavity}}(|\\phi (t)\\rangle \\langle \\phi (t)|).\n\\end{equation}\nWe then calculate the absolute value of the minimum, $E(\\rho )$, to detect the GME by solving the minimization in Eq.~(\\ref{minm}).\n\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.45]{GMEDym.eps}\n\\caption{(Color online) Dynamics of the GME for three-qubit entanglement\nwith the initial W state for the ultrastrong-coupling strength $g\/\\protect\\omega=0.1$ with the different detunings $\\Delta\/\\protect\\omega=0.1$ (a) and\n$\\Delta\/\\protect\\omega=1$ (b) by the GRWA method (dash-dotted lines),\nnumerical method (solid lines), RWA (short-dotted\nlines), and the zeroth-order approximation (dashed lines).}\n\\label{dynamics GEM}\n\\end{figure}\n\nFig.~\\ref{dynamics GEM} shows $E(\\rho )$ plotted against the parameter $\\Delta t\/(2\\pi)$ for different detunings $\\Delta\/\\omega$\nfor the ultra-strong-coupling strength $g\/\\omega =0.1$. For comparison, results from numerical exact\ndiagonalization and the RWA are also shown. We observe a quasi-periodic behavior\nof the GME dynamics. $E(\\rho )$ decays from the initial entangled W state\nand falls off to a nonzero minimum value, implying no death of the three-qubit entanglement. The GME dynamics obtained by the\nGRWA are consistent with the numerical results, while the RWA results are\nqualitatively incorrect for the off-resonance case $\\Delta \/\\omega =0.1$ in\nFig.~\\ref{dynamics GEM} (a). The zeroth-order approximation, where only states within\nthe same manifold are included, works well for the off-resonance case\n$\\Delta\/\\omega =0.1$ in Fig.~\\ref{dynamics GEM} (a) but not for the on-resonance\ncase in Fig.~\\ref{dynamics GEM} (b). 
The validity of the GRWA is ascribed to the\ninclusion of the CRW interaction $iJ_{y}F_{1}\\left( a^{\\dagger }a\\right)\n(a^{\\dagger }-a)$.\n\n\nThe onset of the decay of the multipartite\nentanglement is due to the loss of information from the qubit dynamics to the cavity.\nOn the other hand, it is the interaction with the cavity that leads to the\nentanglement resurrection. The lost information will be transferred back to the qubit\nsubsystem after a finite time, which is associated with the ratio between the\ncoupling strength $g\/\\omega$ and the level splitting of the qubits $\\Delta\/\\omega$.\nAs the ratio $g\/\\Delta$ increases, the contributions of the qubit-cavity interaction become dominant\nand the lost entanglement will be transferred quickly from the cavity to the qubits with\nshorter revival times, as shown in Fig.~\\ref{dynamics GEM} (a).\n\n\nMoreover, it is instructive to study the different behaviors of the multipartite entanglement and the bipartite entanglement.\nThe concurrence characterizes the\nentanglement between two qubits. Due to the symmetric Dicke states in the\nthree-qubit collective model, the concurrence is evaluated in terms of the\nexpectation values of the collective spin operators as $C=\\max\n\\{0,C_{y},C_{z}\\}$, where the quantity $C_{n}$ is defined for a given\ndirection $n(=y,z)$ as $C_{n}=\\frac{1}{2N(N-1)}\\{N^{2}-4\\langle\nS_{n}^{2}\\rangle -\\sqrt{[N(N-2)+4\\langle S_{n}^{2}\\rangle\n]^{2}-[4(N-1)\\langle S_{n}\\rangle ]^{2}}\\}$~\\cite{vidal}. 
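The collective-spin expression for $C_n$ can be checked directly on the initial W state: for the Dicke state $|3/2,-1/2\rangle$ one has $\langle S_z\rangle=-1/2$, $\langle S_z^2\rangle=1/4$, $\langle S_y\rangle=0$ and, by the $x$-$y$ symmetry of a Dicke state, $\langle S_y^2\rangle=(\langle S^2\rangle-\langle S_z^2\rangle)/2=7/4$. A sketch (the moment values are the hand-computed ones above):

```python
import numpy as np

def C_dir(N, S_mean, S2_mean):
    """Pairwise-concurrence quantity C_n along one collective direction, as
    quoted in the text: C_n = {N^2 - 4<S_n^2>
    - sqrt([N(N-2)+4<S_n^2>]^2 - [4(N-1)<S_n>]^2)} / (2N(N-1))."""
    inner = (N * (N - 2) + 4 * S2_mean) ** 2 - (4 * (N - 1) * S_mean) ** 2
    return (N**2 - 4 * S2_mean - np.sqrt(inner)) / (2 * N * (N - 1))

# W state = Dicke state |j=3/2, m=-1/2>:
Cz = C_dir(3, -0.5, 0.25)   # <S_z> = -1/2, <S_z^2> = 1/4
Cy = C_dir(3, 0.0, 1.75)    # <S_y> = 0,    <S_y^2> = 7/4
C = max(0.0, Cy, Cz)
assert abs(C - 2 / 3) < 1e-12   # maximal pairwise entanglement of a Dicke state
```

The value $C=2/3$ is exactly the initial concurrence of the W state quoted in the discussion of the concurrence dynamics.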
From the dynamical\nwavefunction $|\\phi (t)\\rangle $, we can easily evaluate the coefficients\nfor the qubit to remain in the $|j,m\\rangle $ state\n\\begin{equation}\\label{zeroprob}\nP_{m}^{0th}=\\sum_{n=0}^{\\infty}\\sum_{k=1}^{4}f_{n}(t)e^{-iE_{n}^{k}t},\n\\end{equation}\nin the zeroth-order approximation and\n\\begin{eqnarray}\\label{probability}\nP_{m}^{\\mathtt{GRWA}} &\\approx&\\sum_{n=0}^{\\infty}\\sum_{k=1}^{4}f_{n}^{k}(t)(e^{-iE_{n-2}^{k}t}+e^{-iE_{n-1}^{k}t} \\notag \\\\\n&&+e^{-iE_{n}^{k}t}+e^{-iE_{n+1}^{k}t}),\n\\end{eqnarray}\nin the GRWA. $f_{n}^{k}(t)$ is a dynamical parameter associated with the\ninitial state and the $k$-th eigenstates for each $n$. From $P_{m}^{\\mathtt{GRWA}}$ in Eq.~(\\ref{probability}), we\nobserve energy-level transitions among $E_{n-2}^{k}$, $E_{n\\pm 1}^{k}$ and\n$E_{n}^{k}$ in the GRWA, which produce an essential improvement of the dynamics\nover the zeroth-order ones in Eq.~(\\ref{zeroprob}). Since the average values of the collective\nspin operators can be expressed in terms of $P_m$, such as $4\\langle S_{y}^{2}\\rangle =4\\sqrt{3}({}_{-\\frac{3}{2}}\\langle n-2|n\\rangle _{\\frac{1}{2}}P_{-\\frac{3}{2}}P_{\\frac{1}{2}}+{}_{-\\frac{1}{2}}\\langle n-1|n+1\\rangle _{\\frac{3}{2}}P_{-\\frac{1}{2}}P_{\\frac{3}{2}})-4(P_{-\\frac{1}{2}}^{2}+P_{\\frac{1}{2}}^{2})+3$, we calculate the concurrence $C$ by the zeroth-order approximation and the GRWA, respectively.\n\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.7]{condyn.eps}\n\\caption{(Color online) Dynamics of the concurrence for the qubit-qubit\nentanglement with the initial W state for the ultrastrong coupling strength\n$g\/\\protect\\omega=0.1$. The parameters are the same as in Fig.~\\protect\\ref{dynamics GEM}.}\n\\label{dynamics concurrence}\n\\end{figure}\n\nWe plot the dynamics of the concurrence for different detunings $\\Delta \/\\omega =0.1$ and $1$ in\nFig.~\\ref{dynamics concurrence}. 
The initial W state gives the maximum\npairwise entanglement $C=2\/3$ of any Dicke state. Fig.~\\ref{dynamics\nconcurrence} (a) shows that the dynamics of the concurrence in the zeroth-order\napproximation is similar to the numerical one in the off-resonance case\n$\\Delta \/\\omega =0.1$, in which the RWA results are invalid. The sudden death\nof the bipartite entanglement is observed in the resonance case in Fig.~\\ref{dynamics concurrence} (b). The dynamics of the concurrence obtained by the\nGRWA is similar to the numerical results, exhibiting the disappearance of the\nentanglement for a period of time.\nHowever, there is no sudden death of the\nentanglement in the RWA case, indicating that the CRW terms are not negligible.\n\nVery interestingly, as shown in Fig.~\\ref{dynamics GEM}, the GME for the three-qubit entanglement never vanishes, in sharp contrast with the bipartite entanglement.\nDuring the disappearance of the\nconcurrence, the GME is generally small but still finite.\nIt follows that the two-qubit state is separable in the system, but the three-qubit state still contains residual entanglement. This may be one\nadvantage of using the GME as a quantum information resource.\n\nFinally, it is important to clarify why the GME of the tripartite entanglement behaves differently\nfrom the concurrence of the bipartite entanglement. The well-known death of the concurrence\nis related to the disappearance of the entanglement in an arbitrary two-qubit subsystem, say A and B, while a deeper understanding\nis associated with the question of whether there exists entanglement in the three-qubit system. Intuitively, we may think that entanglement is still stored in the bipartition $AB|C$. Negativity is used to detect the entanglement for this bipartition~\\cite{vidal2}, which\nfalls off to a nonzero minimum in Fig.~\\ref{entanglement}. It reveals that the state for the bipartition $AB|C$ is not a separable state. 
Similarly, those states with respect to the other bipartitions $AC|B$ and $BC|A$ are not separable. Therefore, the three-qubit state stays in an entangled state and\nthe GME for the three-qubit entanglement never disappears during the death of the two-qubit entanglement. The theory of multipartite entanglement is not fully developed and requires more insightful investigations into systems of more than two parties. We highlight here the different features of the multipartite entanglement and the bipartite entanglement in systems of more than two qubits, and have found that the GME is always robust, at least in the qubits and single-mode cavity system.\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.45]{compare.eps}\n\\caption{(Color online) GME for the three qubits A, B, C (dash-dotted line), negativity for the entanglement with respect to the bipartition $AB|C$ (solid line),\nand concurrence between the A and B qubits (dashed line) obtained by the numerical method for\n$g\/\\protect\\omega=0.1$ and $\\Delta\/\\protect\\omega =1$.}\n\\label{entanglement}\n\\end{figure}\n\n\\section{Conclusion}\n\nIn this work, we have extended the original GRWA by Irish for the one-qubit Rabi model to the three-qubit Dicke model by a unitary transformation. The zeroth-order approximation, equivalent to the adiabatic approximation,\nis suited for arbitrary coupling strengths in the large detuning case. The first-order approximation, also called the GRWA,\nworks well in a wide range of coupling strengths even on resonance, much better than the RWA. In the GRWA, the effective Hamiltonian with\nthe CRW interactions takes the same form as the ordinary RWA one, which facilitates the derivation of the explicit analytic solutions. 
All eigenvalues and eigenstates can be approximately given.\n\nWith the proposed GRWA scheme, we have also calculated the dynamics of the concurrence for the bipartite entanglement and the GME for the multipartite entanglement, which are in quantitative agreement with the numerical ones. The well-known sudden death of the two-qubit entanglement is observed by our analytic solution.\nAn interesting phenomenon of entanglement is that the GME for the three-qubit entanglement decays to a nonzero minimum during the time window in which the two-qubit entanglement disappears, implying that the three qubits remain entangled when the two-qubit state is separable.\nOur results indicate that the GME is a powerful measure for detecting quantum correlations in multipartite systems that cannot be described via bipartite entanglement in smaller subsystems.\nThere still exist many open problems in the theory of entanglement for multipartite systems due to the much richer structure of entanglement in systems of more than two parties. In particular, the dynamical behaviors of the two kinds of\nentanglement may be explored in multi-qubit setups realized in recent\ncircuit QED systems in the ultra-strong coupling regime.\n\n\n\nAt the end of the preparation of the present work, we noted a recent paper\nby Mao et al.~\\cite{mao} on the same model. 
We should say that the approach\nused there is the adiabatic approximation of the present work, i.e., the\nzeroth-order approximation.\n\n\\section{Acknowledgements}\n\nThis work was supported by National Natural Science Foundation of China\n(Grants No. 11547305 and No. 11474256), Chongqing Research Program of Basic Research and\nFrontier Technology (Grant No. cstc2015jcyjA00043), and Research Fund for the Central Universities\n(Grant No. 106112016CDJXY300005).\n\n$^{*}$ Email:yuyuzh@cqu.edu.cn\n\n$^{\\dagger}$ Email:qhchen@zju.edu.cn\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper, we are interested in solving the following unconstrained optimization problem:\n\\begin{eqnarray}\\label{general}\n\\min_{x\\in\\Bbb{R}^n}f(x),\n\\end{eqnarray}\nin which $f:\\Bbb{R}^n\\rightarrow \\Bbb{R}$ is a continuously differentiable function. There are various iterative\napproaches for solving (\\ref{general}) \\citep{Nocedal}. The Conjugate Gradient (CG) method is one such approach. CG-based methods\ndo not need any second-order information about the objective function. For a given point $x_0\\in \\Bbb{R}^n$, the iterative formula\ndescribing the CG method is:\n\\begin{equation}\\label{iter}\nx_{k+1}=x_k+\\alpha_k d_k,\n\\end{equation}\nin which $x_k$ is the current iterate, $\\alpha_k$ is the step size, and $d_k$ is the search direction determined by:\n\\begin{eqnarray}\\label{dk}\nd_k=\\left\\{\n\\begin{array}{lr}\n-g_k\\qquad\\qquad\\quad\\qquad k=0,&\\\\\n-g_k+\\beta_{k-1}d_{k-1}\\qquad k\\geq 1,&\\\\\n\\end{array} \\right.\n\\end{eqnarray}\nwhere $g_k=\\nabla f(x_k)$ is the gradient of the objective function at the current iterate. 
The conjugate gradient parameter is $\\beta_k$, different choices of which lead to various CG methods. \nThe most well-known of the CG methods are the Hestenes-Stiefel (HS) method \\citep{hestenes}, the Fletcher-Reeves (FR) method \\citep{Fletcher64},\nConjugate Descent (CD) \\citep{Fletcher13}, and the Polak-Ribiere-Polyak (PRP) method \\citep{prp}.\n\nThere are various approaches to determining a suitable step size in each iteration, such as the Armijo line search, the Goldstein line search, and the Wolfe line search \\citep{Nocedal}. The Armijo line search finds the largest step size in each iteration such that the following inequality holds:\n\\begin{eqnarray}\\label{line}\nf(x_k+\\alpha_kd_k)\\leq f(x_k)+\\gamma\\alpha_kg_k^Td_k,\n\\end{eqnarray}\nin which $\\gamma\\in(0,1)$ is a constant parameter.\nGrippo et al. \\citep{Grippo86} introduced a non-monotone Armijo-type line search technique as another way to compute the step size.\nThe incorporation of the non-monotone strategy into the gradient and projected gradient approaches, the conjugate gradient method, and the trust-region methods has led to significant improvements in these methods. Zhang and Hager \\citep{Zhang04} gave some conditions to improve the convergence rate of this strategy. Ahookhosh et al. \\citep{Ahookhosh122} built on these results and investigated a new non-monotone condition:\n\\begin{equation}\\label{amin}\nf(x_k+\\alpha_kd_k)\\leq R_k+\\gamma\\alpha_kg_k^T d_k,\n\\end{equation}\nwhere $R_k$ is defined by\n\\begin{eqnarray}\n& R_k=\\eta_k f_{l_k}+(1-\\eta_k)f_k, & \\label{rk}\\label{flk}\\\\\n& \\eta_k\\in[\\eta_{\\min},\\eta_{\\max}],~\\eta_{\\min}\\in[0, 1), \\ \\eta_{\\max}\\in[\\eta_{\\min},1], & \\notag\\\\\n& f_{l_k}=\\max_{0\\leq j\\leq m_k}\\{f_{k-j}\\}, \\nonumber \\\\\n& m_0=0, \\ \\ 0\\leq m_k\\leq \\min\\{m_{k-1}+1,N\\} \\mbox{ for some } N\\geq 0. 
\n\\end{eqnarray}\nNote that $\\eta_k$ is known as the non-monotone parameter and plays an essential role in the algorithm's convergence.\n\nAlthough this new non-monotone strategy in \\citep{Ahookhosh122} has some appealing properties, especially in functional performance, current algorithms based on this non-monotone strategy\nface the following challenges.\n\\begin{itemize}\n \\item The existing schemes for determining the parameter $\\eta_k$ \nmay not reduce the value of the objective function significantly in the initial iterations.\nTo overcome this drawback, we propose a new scheme for choosing $\\eta_k$ \nbased on the gradient behaviour of the objective function.\nThis can reduce the total number of iterations.\n\\item Many evaluations of the objective function are needed to find \nthe step length $\\alpha_k$ in step $k$. \nTo make this step more efficient, we use an adaptive and composite step length procedure from \\citep{Li19} to determine the initial value of the step length in the inner iterations.\n\\item The third issue is the global convergence of the non-monotone CG method. Most existing CG methods use the Wolfe condition, which plays a vital role in establishing the global convergence of various CG methods \\citep{Nazareth01}. The Wolfe line search is more expensive than the Armijo line search strategy. Here, we define a suitable conjugate gradient parameter so that the scheme proposed here has the global convergence property.\n\n\\end{itemize}\n\n\nBy combining the outlined strategies, we propose a modification to the non-monotone line search method. Then, we incorporate this approach into the CG method and introduce a new non-monotone CG algorithm. We prove that our proposed algorithm has global convergence. Finally, we compare our algorithm and eight other algorithms on standard test problems and non-negative matrix factorization instances. 
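The non-monotone condition~\eqref{amin} with the reference value $R_k$ from~\eqref{rk} amounts to an ordinary backtracking loop in which $f(x_k)$ is replaced by $R_k \geq f(x_k)$. A minimal sketch on a toy quadratic (the helper names and the degenerate one-point history are ours, for illustration only):

```python
import numpy as np

def nonmonotone_armijo(f, x, g, d, R, gamma=1e-4, rho=0.5, alpha0=1.0, max_backtracks=60):
    """Backtrack until f(x + alpha*d) <= R + gamma*alpha*g^T d
    (the non-monotone Armijo-type condition)."""
    gtd = g @ d                    # must be negative for a descent direction
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= R + gamma * alpha * gtd:
            break
        alpha *= rho
    return alpha

f = lambda x: 0.5 * x @ x          # toy quadratic, minimizer at the origin
x = np.array([2.0, -1.0])
g = x                              # gradient of f at x
d = -g                             # steepest-descent direction
eta, f_lk = 0.5, f(x)              # degenerate one-point history: f_{l_k} = f_k
R = eta * f_lk + (1 - eta) * f(x)  # R_k = eta*f_{l_k} + (1 - eta)*f_k
alpha = nonmonotone_armijo(f, x, g, d, R)
assert f(x + alpha * d) <= R + 1e-4 * alpha * (g @ d)
```

With a genuine history, $f_{l_k}$ is the maximum of the last $m_k+1$ function values, so $R_k\geq f(x_k)$ and the acceptance test is more permissive than the monotone Armijo rule.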
We utilize criteria such as the number of objective function evaluations, the number of gradient evaluations, the number of iterations, and the CPU time to compare the performance of the algorithms.\n\n\n\\section{An improved non-monotone line search algorithm} \\label{s:algorithm}\n\nThis section discusses issues with the state-of-the-art non-monotone line search strategy, the choice of the step sizes, and finally the conjugate gradient parameter.\n\\subsection{A new scheme for choosing $\\eta_k$}\nRecall that the non-monotone line search strategy is determined by equation \\eqref{amin} in step $k$.\nThe parameter $\\eta_k$ is involved in the non-monotone term (\\ref{flk})\nand its choice can have a significant impact on the performance of the algorithm. There are two common approaches for calculating \n$\\eta_k$.\nThe scheme proposed by Ahookhosh et al. \\citep{Ahookhosh122} has been used in most of the existing non-monotone algorithms \\citep{Esmaeili,Ahookhosh_Nu,Amini_App14,Ahookhosh15}.\nThis strategy can be formulated as $\\eta_k=\\frac{1}{3}\\eta_0 (-\\frac{1}{2})^k+\\frac{2}{3}\\eta_0$ \nwhere $\\eta_0=0.15$ and the limit value of $\\eta_k$ is 0.1.\nThe other scheme, proposed by Amini et al. \\citep{Amini14}, which depends on the behaviour of the gradient, is given by:\n\\begin{equation}\\label{Amini's_method}\n\\eta_0=0.95, \\ \\\n\\eta_{k}= \\left\\{\n\\begin{array}{ll}\n\\frac{2}{3}\\eta_{k-1} +0.01, & \\mbox{if } ~\\|g_{k} \\|_{\\infty}\\leq 10^{-3}; \\\\\n\\max\\{ 0.99\\eta_{k-1},0.5\\}, & \\mbox{otherwise}.\n\\end{array} \\right.\n\\end{equation}\nTo illustrate the behaviour of $\\eta_k$ proposed in \\citep{Ahookhosh122} and \\citep{Amini14}, we solve the problem $f(x)= (x_0-5)^2+\\sum_{i=1}^{40} (x_i-1)^2$ for $ x\\in \\Bbb{R}^{41}$. \nThe values of the parameter $\\eta_k$ corresponding to the two schemes are displayed in Fig. 
\\ref{muk} (Left).\n \\begin{figure}[h!]\n\\centering\n \\includegraphics[width=.45\\textwidth]{a1.jpg}\n \\includegraphics[width=.45\\textwidth]{a4.jpg}\n\\caption{(Left): Values of $\\eta_k$ proposed in \\citep{Ahookhosh122} and \\citep{Amini14}, (Right): Values of $\\eta_k$ for the new scheme.}\n\\label{muk}\n\\end{figure}\nAs shown in Fig. \\ref{muk}, for the scheme proposed by Ahookhosh et al. \\citep{Ahookhosh122}, $\\eta_k$\nis close to $0.1$ after only a few iterations. Notice that $\\eta_k$ in each iteration does not have any connection with the behaviour of the objective function; thus this scheme is not effective. In addition, there are two issues with the scheme introduced by Amini et al. in \\citep{Amini14}.\nOne problem indicated by Fig. \\ref{muk} is that $\\eta_k$ decreases relatively\nquickly for the first 65 iterations.\nSince the algorithm requires many iterations to solve this problem, ideally $\\eta_k$ should be close to 1 for these initial iterations.\nThe second problem is that the value of $\\eta_k$ remains the same for a large number of iterations and is not affected by the behaviour of the objective function.\n\nTo avoid these challenges, we propose an adaptive strategy for calculating the value of $\\eta_k$:\n\\begin{equation} \\label{eq:etakn}\n\\eta_{k}=0.95\\sin\\left(\\frac{\\pi \\|g_{k}\\|}{1+2\\|g_{k}\\|}\\right)+0.01.\n\\end{equation}\nWhen $x_k$ is far away from the minimizer, we can reasonably assume that $\\|g_k\\|$ is large. Thus the value of $\\eta_k$ defined by \\eqref{eq:etakn} is close to 1.\nThis makes the scheme closer to the original non-monotone strategy in the initial iterations, providing a chance to reduce the value of the objective function more significantly in these iterations. On the other hand, when $x_k$ is close to the minimizer, $\\|g_k\\|$ is small, and the value of $\\eta_k$ is close to zero. 
Thus, the step length is small so that the new point stays in the neighbourhood of the optimal point. Hence the new scheme is closer to the monotone strategy. We plot the behaviour of $\\eta_k$ denoted by \\eqref{eq:etakn} in Fig. \\ref{muk} (Right), using the same values of the gradient for the optimization problem mentioned above.\n\n\\subsection{ New schemes for choosing $\\alpha_k$ }\nWe utilize a convex combination of the Barzilai-Borwein (BB) step sizes to calculate an appropriate $\\alpha_k$ in each outer iteration as in \\citep{Li19}. Our strategy calculates the value of $\\alpha_k$ using the following equation:\n\\begin{equation}\\label{newalpha}\n\\alpha_k^{{\\scriptscriptstyle \\textrm{CBB}}} =\\mu_k\\alpha^{(1)}_k+(1-\\mu_k)\\alpha^{(2)}_k,\n\\end{equation}\nwhere\n\\begin{eqnarray*}\n\t&\\alpha_k^{(1)}=\\frac{s_k^Ts_k}{s_k^Ty_k},\\quad \\alpha^{(2)}_k=\\frac{s_k^Ty_k}{y_k^Ty_k},\\quad s_k:=x_k-x_{k-1},\\quad y_k:=g_k-g_{k-1};&\\\\\n\t&\\mu_k=\\frac{K_2}{K_1+K_2},\\quad\n\tK_1=\\|\\alpha^{(1)}_k y_k-s_k\\|^2,\\quad K_2=\\|(\\alpha^{(2)}_k)^{-1}s_k-y_k\\|^2.&\n\\end{eqnarray*}\n\\subsection{Conjugate gradient parameter}\nHere, we propose the new conjugate gradient parameter given by:\n\\begin{eqnarray}\\label{cgpar}\n\\beta_k=\\omega \\frac{\\|g_k\\|}{\\|d_{k-1}\\|},\\quad \\omega \\in (0,1).\n\\end{eqnarray}\nThe complete algorithm is in Appendix \\ref{AppA} (see Algorithm \\ref{alg1}). The next lemma proves a key property of $\\beta_k$, which is very important in proving the algorithm's convergence. 
The proofs are in Appendix \\ref{AppA}.\n\\begin{lemma}\\label{decent}\nFor the search direction $d_k$ and the constant $c>0$ we have:\n\t\\begin{eqnarray}\n\td_k^Tg_k\\leq -c\\|g_k\\|.\n\t\\end{eqnarray}\n\\end{lemma}\n The following assumptions are used to analyze the convergence properties of Algorithm \\ref{alg1}.\n\\begin{description}\n\t\\item[H1] The level set $\n\t\\mathcal{L}(x_0)=\\{x|f(x)\\leq f(x_0),~~~~x\\in \\Bbb{R}^n\\}$ is a bounded set.\n\t\\item[H2] The gradient of the objective function is Lipschitz continuous over an open convex set $C$ containing $\t\\mathcal{L}(x_0)$. That is:\n\t\\begin{equation*}\n\t\\|g(x)-g(y)\\|\\leq L\\|x-y\\|,\\qquad \\forall ~x,y\\in C.\n\t\\end{equation*}\n\\end{description}\nWe prove the following theorem about the global convergence of Algorithm \\ref{alg1}, the proof of which follows from the lemmas presented in \nthis section. Please see the appendix for the proofs.\n\n\n\\begin{theorem}\\label{glob}\n\t{Let $(H1)$, $(H2)$, and Lemmas \\ref{decent} and \\ref{aboveserch} hold. Then, for the\n\tsequence $\\{x_k\\}$ generated by Algorithm \\ref{alg1}, we have $\\lim_{k\\rightarrow \\infty} \\|g_k\\|=0.$\n}\\end{theorem}\n\n\n\\begin{lemma}\\label{aboveserch}\n\tSuppose that the search direction $d_k$ with the CG parameter $\\beta_k$ given by (\\ref{cgpar}) is generated by Algorithm \\ref{alg1}. Then, an upper bound for $d_k$ is given by $\\|d_k\\|\\leq (1+\\omega)\\|g_k\\|.$\n\\end{lemma}\n\\begin{lemma}\\label{low-bou}\n\tSuppose that $x_k$ is not a stationary point of (\\ref{general}). 
Then there exists a constant\n\t\\begin{equation*}\n\t{\\lambda}=\\min \\left\\{\\beta_1\\rho,\\frac{2(1-\\omega)\\rho(1-\\gamma)}{L(1+\\omega)^2}\\right\\},\n\t\\end{equation*}\n\tsuch that $\\alpha_k\\geq {\\lambda}$.\n\\end{lemma}\n\n\n\n\\section{Numerical Results}\nIn this section, we test the new algorithm on a set of standard optimization problems and on the non-negative matrix factorization problem, which is a non-convex optimization problem. The implementation-level details are in Appendix \\ref{AppB}.\nTo demonstrate the efficiency of the proposed algorithm, we compare our algorithm and eight other existing algorithms introduced in \\citep{Ahookhosh122,Amini14,Jiang,Zhang} on a set of $110$ standard test problems. To describe the behaviour of each strategy, we use\nperformance profiles proposed by Dolan and Mor\u00e9 \\citep{Dolan}.\nNote that the performance profile for an algorithm $p_s(\\tau): \\Bbb{R}\\mapsto [0, 1]$ is a non-decreasing, piece-wise constant function, continuous from the right at each breakpoint. Moreover, the value $p_s(1)$ denotes the probability that the algorithm will win against the rest of the algorithms. More information on the performance profile is in Appendix \\ref{AppB}. We plot the performance profile of each algorithm in terms of the total number of outer iterations and the CPU time on the set of standard test problems in Fig. \\ref{results}. \n \\begin{figure}[h!]\n\\centering\n \\includegraphics[width=.45\\textwidth]{1.jpg}\n \\includegraphics[width=.45\\textwidth]{3.jpg}\n\\caption{(Left): Performance profiles of the total number of outer iterations, (Right): Performance profiles of CPU Time.}\n\\label{results}\n\\end{figure}\n\n\nWe also apply our algorithm to solve the Non-Negative Matrix Factorization (NMF) problem,\nwhich has several applications in image processing such as face detection problems. 
\nGiven a non-negative matrix $V\in\Bbb{R}^{m\times n}$, an NMF\nfinds two non-negative matrices\n$W\in\Bbb{R}^{m\times k}$ and $H\in\Bbb{R}^{k\times n}$ with\n$k\ll\min(m,n)$ such that $V\approx WH$. This problem can be formulated as\n\begin{equation}\label{opti-n}\n\min_{W,H\geq0} F(W,H)=\frac{1}{2}\|V-WH\|_{F}^2.\n\end{equation}\nEquation \eqref{opti-n} is a non-convex optimization problem. We compare our method and Zhang's algorithm \citep{Zhang} on some random datasets and report these results in Appendix \ref{AppB}. \n\n\n\section{Conclusion} In this paper, we introduced a new non-monotone conjugate gradient algorithm based on an efficient Barzilai-Borwein step size. We introduced a new non-monotone parameter based on gradient behaviour and determined by a trigonometric function. We used a convex combination of the Barzilai-Borwein step sizes to compute the step length in each iteration. We proved that the proposed algorithm has global convergence. We implemented and tested our algorithm on a set of standard test problems and on non-negative matrix factorization problems. The proposed algorithm can solve $98\%$ of the test problems for a set of standard test instances. For non-negative matrix factorization, the results indicate that our algorithm is more efficient than Zhang's method \citep{Zhang}. \n\n\n\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\n\n\n\n\section{Introduction}\n\nThe difference-of-log-Normals distribution, henceforth DLN, is the distribution arising when one subtracts a log-Normal random variable (RV) from another. 
To define the DLN, consider an RV $W$ such that \n\begin{equation} \label{eq:DLN}\nW = Y_{p} - Y_{n} = \text{exp}(X_{p}) - \text{exp}(X_{n}) \ \ \text{with} \ \ \pmb{X} = (X_{p},X_{n})^{T} \sim \mathcal{N}(\pmb{\mu},\pmb{\Sigma})\n\end{equation}\nin which $\pmb{X}$ is a bi-variate Normal with\n\begin{equation} \label{eq:BVN}\n\pmb{\mu} = \begin{bmatrix} \mu_p \\ \mu_n \end{bmatrix} \ \ \ \n\pmb{\Sigma} = \begin{bmatrix} \sigma_p^2 & \sigma_p\cdot\sigma_n\cdot\rho_{pn} \\ \sigma_p\cdot\sigma_n\cdot\rho_{pn} & \sigma_n^2 \end{bmatrix}\n\end{equation}\nWe say $W$ follows the five-parameter DLN distribution, $W \sim \text{DLN}(\mu_p,\sigma_p,\mu_n,\sigma_n,\rho_{pn})$.\n\nThe companion paper \cite{Parham2022} makes the case that the DLN is a \emph{fundamental distribution in nature}, in the sense that it arises naturally in a plethora of disparate natural phenomena, similar to the Normal and log-Normal distributions. It shows that firm income, return, and growth are all well-described by the DLN. It further shows that city population growth, per-county GDP growth, and per-industry per-Metro GDP growth all show a remarkable fit to the DLN. \cite{Parham2022} describes how the emergence of the DLN is a direct result of applying the Central Limit Theorems and ``Gibrat's Law'' to various economic phenomena. As the DLN is almost completely unexplored,\footnote{At the time of writing, I was able to find only two statistical works considering it, \cite{Lo2012} and \cite{GulisashviliTankov2016}. Both papers concentrate on the sum of log-Normals but show their results hold for the difference of log-Normals as well, under some conditions.} this paper aims to fill the gap.\n\nThe next section fully characterizes the DLN distribution, deriving its PDF, CDF, central moments, and estimators for the distribution parameters given data. 
It also introduces an extension of the DLN to the multi-variate N-dimensional case using elliptical distribution theory. A full suite of computer code is provided for future use.\n\nNext, Section~\\ref{sec:Methods} discusses the difficulty of working with the raw DLN distribution, stemming from its characteristic ``double-exponential'' heavy tails. To alleviate this difficulty, I discuss the close link between the DLN and the Hyperbolic Sine (\\emph{sinh}) function and its inverse (\\emph{asinh}) and present the ADLN distribution - the DLN under an asinh transform. The section then considers the problem of measuring growth in DLN-distributed RVs. To that end, it generalizes the concept of growth, currently defined only for strictly positive RVs, to DLN RVs that are sometimes negative. I show that the appropriate growth concept for an RV (e.g. percentage, difference in logs, or DLN-growth) intimately depends on the RV's statistical distribution.\n\nSection~\\ref{sec:MC} explores the properties of the estimators presented via extensive Monte-Carlo experiments. It: (i) reports the empirical bias and variance of the moment estimators and the MLE parameter estimators; (ii) establishes critical values for the Kolmogorov-Smirnov and Anderson-Darling distributional tests for DLN RVs; and (iii) presents the relation between the measures of growth developed in Section~\\ref{sec:Methods}. \\comments{Finally, it discusses using the DLN as an approximating distribution, and presents evidence that the DLN is an excellent approximating distribution for several distributions, including the distributions arising from the sum of two DLN RVs, the multiplication of DLN by Normal RVs, and the multiplication of Normal by log-Normal RVs.}\n\n\n\n\\section{Definitions and properties}\n\\label{sec:Def}\n\nPrior to proceeding, and to fix ideas, Figure~\\ref{fig:DLNexam} presents several instances of the DLN distribution. 
Panel (a) presents and contrasts the standard Normal, standard DLN, and standard log-Normal. The uncorrelated standard DLN is defined as DLN(0,1,0,1,0), i.e. the difference between two exponentiated uncorrelated standard Normal RVs. Panel (b) shows the role of the correlation coefficient $\\rho_{pn}$ in the standard DLN, controlling tail-weight vs. peakedness. Panel (c) repeats the analysis of Panel (b) for a different parametrization common in practical applications, exhibiting the problem of dealing with heavy tails. Panel (d) presents the data of panel (c) in asinh space, showing how asinh resolves the problem of graphing heavy tails and why the ADLN distribution is useful in practice.\n\n\\RPprep{DLN Examples}{0}{0}{DLNexam}{%\n This figure presents examples of the DLN distribution. Panel (a) graphs the PDFs of the standard Normal, log-Normal, and DLN. Panel (b) graphs the PDFs of standard DLN with different correlation coefficients $\\rho_{pn}$. Panel (c) presents the PDFs of a DLN with parameters $(3,2,2,2)$, common in practice, and varying correlation coefficients $\\rho_{pn}$. Panel (c) presents the PDF for the range $\\pm 10$, which is a significant truncation due to the long tails of this DLN. Panel (d) presents the same PDFs as Panel (c), but the x-axis is asinh-transformed, such that it spans the range sinh(-10) $\\approx$ -11,000 to sinh(10) $\\approx$ 11,000.\n}\n\\RPfig{%\n\t\\begin{tabular}{cc} \n\t\t\\subfigure[standard DLN, N, LN]{\\includegraphics[width=3in]{Img\/DLN_N_LN.pdf}} & \n\t\t\\subfigure[Std. 
DLN w\/ corrs]{\\includegraphics[width=3in]{Img\/SDLN_corrs.pdf}} \\\\ \\\\\n\t\t\\subfigure[DLN w\/ corrs]{\\includegraphics[width=3in]{Img\/DLN_corrs.pdf}} &\n\t\t\\subfigure[ADLN w\/ corrs]{\\includegraphics[width=3in]{Img\/ADLN_corrs.pdf}} \\\\ \\\\\n\t\\end{tabular}\n}\n\n\n\n\\subsection{PDF and CDF}\n\nThe PDF for the bi-variate Normal (BVN) RV $\\pmb{X}$ is well-known to be\n\\begin{equation} \\label{eq:PDFBVN}\nf_{BVN}(\\pmb{x}) = \\frac{\\lvert\\pmb{\\Sigma}\\rvert^{-\\frac{1}{2}}}{2\\pi}\\cdot \\text{exp}\\left(-\\frac{1}{2} (\\pmb{x}-\\pmb{\\mu})^{T} \\pmb{\\Sigma}^{-1} (\\pmb{x}-\\pmb{\\mu})\\right) = \\frac{\\lvert\\pmb{\\Sigma}\\rvert^{-\\frac{1}{2}}}{2\\pi}\\cdot \\text{exp}\\left(-\\frac{1}{2} \\lvert\\lvert\\pmb{x}-\\pmb{\\mu}\\rvert\\rvert_{\\pmb{\\Sigma}}\\right)\n\\end{equation} \nwith $\\lvert\\pmb{\\Sigma}\\rvert$ the determinant of $\\pmb{\\Sigma}$ and $\\lvert\\lvert\\pmb{x}\\rvert\\rvert_{\\pmb{\\Sigma}}$ the Euclidean norm of $\\pmb{x}$ under the Mahalanobis distance induced by $\\pmb{\\Sigma}$.\n\nThe PDF for the bi-variate log-Normal (BVLN) RV $\\pmb{Y} = (Y_{p},Y_{n})^{T}$ can be obtained by using the multivariate change of variables theorem. If $\\pmb{Y}=g(\\pmb{X})$ then\n\\begin{equation}\nf_{Y}(\\pmb{y}) = f_{X}(g^{-1}(\\pmb{y})) \\cdot \\lvert\\lvert J_{g^{-1}}(\\pmb{y})\\rvert\\rvert\n\\end{equation}\nwith $J_{g^{-1}}$ the Jacobian matrix of $g^{-1}(\\cdot)$ and $\\lvert\\lvert J_{g^{-1}}\\rvert\\rvert$ the absolute value of its determinant. Applying the theorem for $\\pmb{Y} = g(\\pmb{X}) = (\\text{exp}(X_p),\\text{exp}(X_n))^{T}$ we have $g^{-1}(\\pmb{y}) = (log(y_p),log(y_n))^{T}$ and $\\lvert\\lvert J_{g^{-1}}(\\pmb{y})\\rvert\\rvert = (y_p\\cdot y_n)^{-1}$. 
The PDF of a BVLN RV is then\n\\begin{equation} \\label{eq:PDFBVLN}\nf_{BVLN}(\\pmb{y}) = \\frac{\\lvert\\pmb{\\Sigma}\\rvert^{-\\frac{1}{2}}}{2\\pi y_p y_n} \\text{exp}\\left(-\\frac{1}{2}\\lvert\\lvert\\log(\\pmb{y})-\\pmb{\\mu}\\rvert\\rvert_{\\pmb{\\Sigma}}\\right)\n\\end{equation}\n\nWe can now define the cumulative distribution function (CDF) of the DLN distribution using the definition of the CDF of the difference of two RV\n\\begin{equation} \\label{eq:CDFDLN1}\n\\begin{split}\nF_{DLN}(w) & = P[W\\leq w] = P[y_{p} - y_{n} \\leq w] = P[y_{p} \\leq y_{n} + w] \\\\\n & = \\int_{-\\infty}^{\\infty}\\int_{-\\infty}^{y_{n}+w}f_{BVLN}(y_{p},y_{n})dy_{p}dy_{n}\n\\end{split}\n\\end{equation}\nwhich can be differentiated w.r.t $w$ to yield the PDF\n\\begin{equation} \\label{eq:PDFDLN1}\nf_{DLN}(w) = \\int_{-\\infty}^{\\infty}f_{BVLN}(y+w,y)dy = \\int_{-\\infty}^{\\infty}f_{BVLN}(y,y-w)dy\n\\end{equation}\nbut because $f_{BVLN}(\\pmb{y})$ is non-zero only for $\\pmb{y}>0$, we limit the integration range\n\\begin{equation} \\label{eq:PDFDLN}\nf_{DLN}(w) = \\int_{\\text{max}(0,w)}^{\\infty}f_{BVLN}(y,y-w)dy\n\\end{equation}\nwhich yields the PDF of the DLN distribution.\n\nIt is well-known, however, that the integral in equation~\\ref{eq:PDFDLN} does not have a closed-form solution. The accompanying code suite evaluates it numerically, and also numerically evaluates the CDF using its definition\n\\begin{equation} \\label{eq:CDFDLN}\nF_{DLN}(w) = \\int_{-\\infty}^{w}f_{DLN}(y)dy\n\\end{equation}\n\n\\begin{sloppypar}\nFor the simpler case with difference of uncorrelated log-Normals, i.e. $\\rho_{pn}=0$, we can derive the PDF of the DLN via a characteristic function (CF) approach as well. In this case, we can write the CF of the DLN as ${\\varphi_{DLN}(t)=\\varphi_{LN}(t)\\cdot\\varphi_{LN}(-t)}$ with $\\varphi_{LN}(t)$ the CF of the log-Normal. 
Next, we can apply a Fourier transform to obtain the PDF,\n\\begin{equation} \\label{eq:PDFDLNCF}\nf_{DLN}(w) = \\frac{1}{2\\pi}\\int_{-\\infty}^{\\infty}e^{-i\\cdot t\\cdot w} \\cdot \\varphi_{DLN}(t)dt\n\\end{equation}\nUnfortunately, the log-Normal does not admit an analytical CF, and using Equation~\\ref{eq:PDFDLNCF} requires a numerical approximation for $\\varphi_{LN}(t)$ as well. \\cite{Gubner2006} provides a fast and accurate approximation method for the CF of the log-Normal which I use in the calculation of $f_{DLN}(w)$ when using this method.\n\\end{sloppypar}\n\n\n\\subsection{Moments}\n\\label{sec:Moms}\n\n\\subsubsection{MGF}\n\nThe moment generating function (MGF) of the DLN can be written as\n\\begin{equation} \\label{eq:MGFDLN}\nM_{W}(t) = \\mathbb{E}\\left[e^{tW}\\right] = \\int_{-\\infty}^{\\infty}\\int_{-\\infty}^{\\infty}e^{tw}f_{BVLN}(y+w,y)dydw\n\\end{equation}\nbut this formulation has limited usability due to the lack of closed-form solution for the integrals. Instead, it is useful to characterize the moments directly, as we can obtain them in closed-form.\n\n\n\n\\subsubsection{Mean and variance}\n\nUsing the definitions of $\\pmb{\\mu}$ and $\\pmb{\\Sigma}$ in \\ref{eq:BVN}, define the mean and covariance of the BVLN RV, $\\pmb{\\hat{\\mu}}$ and $\\pmb{\\hat{\\Sigma}}$ (element-wise) as\n\\begin{equation} \\label{eq:BVLN}\n\\begin{split}\n\\pmb{\\hat{\\mu}}_{(i)} & = \\text{exp}\\left(\\pmb{\\mu}_{(i)} + \\frac{1}{2}\\pmb{\\Sigma}_{(i,i)}\\right) \\\\\n\\pmb{\\hat{\\Sigma}}_{(i,j)} & = \\text{exp}\\left(\\pmb{\\mu}_{(i)} + \\pmb{\\mu}_{(j)} + \\frac{1}{2}\\left(\\pmb{\\Sigma}_{(i,i)} + \\pmb{\\Sigma}_{(j,j)}\\right)\\right)\\cdot \\left( \\text{exp}\\left(\\pmb{\\Sigma}_{(i,j)}\\right)-1\\right) \\\\\n\\end{split}\n\\end{equation}\nNote that if $\\pmb{\\Sigma}$ is diagonal (i.e., $X_{p}$ and $X_{n}$ are uncorrelated) then $\\pmb{\\hat{\\Sigma}}$ will be diagonal as well. 
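As a quick numerical sanity check, the element-wise formulas in Equation~\ref{eq:BVLN} can be compared against Monte-Carlo draws. The following is a minimal illustrative sketch assuming NumPy; the parameter values are arbitrary and the snippet is not part of the accompanying code suite:

```python
import numpy as np

# Illustrative sketch (assumes NumPy): compare the closed-form BVLN mean and
# covariance with Monte-Carlo estimates from Y = exp(X), X bivariate Normal.
mu = np.array([0.3, -0.2])
Sigma = np.array([[0.5, 0.2],
                  [0.2, 0.4]])

d = np.diag(Sigma)
mu_hat = np.exp(mu + 0.5 * d)                        # element-wise mean
Sigma_hat = (np.exp(mu[:, None] + mu[None, :]        # element-wise covariance
                    + 0.5 * (d[:, None] + d[None, :]))
             * (np.exp(Sigma) - 1.0))

rng = np.random.default_rng(7)
y = np.exp(rng.multivariate_normal(mu, Sigma, size=1_000_000))
assert np.allclose(mu_hat, y.mean(axis=0), rtol=0.02)
assert np.allclose(Sigma_hat, np.cov(y, rowvar=False), rtol=0.05)
```

With one million draws the sample moments agree with the closed forms to within sampling noise, which is why the loose relative tolerances suffice.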
We are however interested in the general form of the DLN distribution. The identities regarding the expectation and variance of a sum of RV yield\n\begin{equation} \label{eq:MUDLN}\n\mathbb{E}\left[W\right] = \mathbb{E}\left[Y_p\right] - \mathbb{E}\left[Y_n\right] = \pmb{\hat{\mu}}_{(1)} - \pmb{\hat{\mu}}_{(2)} = \text{exp}(\mu_p + \frac{\sigma_p^2}{2}) - \text{exp}(\mu_n + \frac{\sigma_n^2}{2})\n\end{equation}\nand\n\begin{equation} \label{eq:SIGDLN}\n\begin{split}\n\text{Var}\left[W\right] & =\mathbb{C}\left[Y_p,Y_p\right] + \mathbb{C}\left[Y_n,Y_n\right] -2\cdot\mathbb{C}\left[Y_p,Y_n\right] = \pmb{\hat{\Sigma}}_{(1,1)} + \pmb{\hat{\Sigma}}_{(2,2)} - 2\cdot\pmb{\hat{\Sigma}}_{(1,2)} \\\n & = \text{exp}\left(2\mu_{p}+\sigma_p^2\right)\cdot\left(\text{exp}\left(\sigma_p^2\right) - 1\right) \n + \text{exp}\left(2\mu_{n}+\sigma_n^2\right)\cdot\left(\text{exp}\left(\sigma_n^2\right) - 1\right) \\\n & - 2\text{exp}\left(\mu_{p}+\mu_{n}+\frac{1}{2}(\sigma_p^2+\sigma_n^2)\right)\n \cdot\left(\text{exp}\left(\sigma_p\sigma_n\rho_{pn}\right) - 1\right)\n\end{split}\n\end{equation}\nwith $\mathbb{C}$ the covariance operator of two general RV $U_{1},U_{2}$\n\begin{equation} \label{eq:COVAR}\n\mathbb{C}\left[U_{1},U_{2}\right] = \mathbb{E}\left[(U_1 - \mu_1)(U_2-\mu_2)\right]\n\end{equation}\n\n\n\n\subsubsection{Skewness and kurtosis}\n\nSkewness and kurtosis of the DLN can similarly be established using coskewness and cokurtosis (for overview, see e.g. \cite{Miller2013}). 
Coskewness of three general RV $U_{1},U_{2},U_{3}$ is defined as\n\\begin{equation} \\label{eq:COSKEW}\n\\mathbb{S}\\left[U_{1},U_{2},U_{3}\\right] = \\frac{\\mathbb{E}\\left[(U_1 - \\mu_1)(U_2-\\mu_2)(U_3-\\mu_3)\\right]}{\\sigma_1\\sigma_2\\sigma_3}\n\\end{equation}\nand cokurtosis of four general RV $U_{1},U_{2},U_{3},U_{4}$ is defined as \n\\begin{equation} \\label{eq:COKURT}\n\\mathbb{K}\\left[U_{1},U_{2},U_{3},U_{4}\\right] = \\frac{\\mathbb{E}\\left[(U_1 - \\mu_1)(U_2-\\mu_2)(U_3-\\mu_3)(U_4-\\mu_4)\\right]}{\\sigma_1\\sigma_2\\sigma_3\\sigma_4}\n\\end{equation}\nwith the property that $\\mathbb{S}\\left[U,U,U\\right] = \\text{Skew}\\left[U\\right]$ and $\\mathbb{K}\\left[U,U,U,U\\right] = \\text{Kurt}\\left[U\\right]$. More importantly, it is simple to show that\n\\begin{equation} \\label{eq:SKEWDIFF}\n\\text{Skew}\\left[U-V\\right] = \\frac{\\sigma_U^3\\mathbb{S}\\left[U,U,U\\right] -3\\sigma_U^2\\sigma_V\\mathbb{S}\\left[U,U,V\\right]+3\\sigma_U\\sigma_V^2\\mathbb{S}\\left[U,V,V\\right] -\\sigma_V^3\\mathbb{S}\\left[V,V,V\\right]}{\\sigma_{U-V}^{3}}\n\\end{equation}\nand similarly\n\\begin{equation} \\label{eq:KURTDIFF}\n\\begin{split}\n\\text{Kurt}\\left[U-V\\right] & = \\frac{1}{\\sigma_{U-V}^{4}} [ \\sigma_U^4\\mathbb{K}\\left[U,U,U,U\\right] -4\\sigma_U^3\\sigma_V\\mathbb{K}\\left[U,U,U,V\\right] \\\\ & + 6\\sigma_U^2\\sigma_V^2\\mathbb{K}\\left[U,U,V,V\\right] -4\\sigma_U\\sigma_V^3\\mathbb{K}\\left[U,V,V,V\\right]+\\sigma_V^4\\mathbb{K}\\left[V,V,V,V\\right] ]\n\\end{split}\n\\end{equation}\nwith $\\sigma_{U-V} = \\text{Var}\\left[U-V\\right]^{\\frac{1}{2}}$ calculated using Equation~\\ref{eq:SIGDLN}. 
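Since Equation~\ref{eq:SKEWDIFF} is an exact algebraic identity in the underlying moments, it can be checked directly on simulated data using sample coskewness terms. A minimal illustrative sketch assuming NumPy; the helper `coskew` and the parameter values are hypothetical, not from the accompanying code suite:

```python
import numpy as np

# Illustrative sketch (assumes NumPy): verify the skewness-of-a-difference
# identity on two correlated log-Normal samples, using sample coskewness.
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.2, -0.1], [[0.8, 0.2], [0.2, 0.4]], size=100_000)
u, v = np.exp(x[:, 0]), np.exp(x[:, 1])

def coskew(a, b, c):
    # S[a,b,c] = E[(a - mu_a)(b - mu_b)(c - mu_c)] / (sig_a sig_b sig_c)
    da, db, dc = a - a.mean(), b - b.mean(), c - c.mean()
    return np.mean(da * db * dc) / (a.std() * b.std() * c.std())

su, sv, sw = u.std(), v.std(), (u - v).std()
lhs = coskew(u - v, u - v, u - v)                    # Skew[U - V] directly
rhs = (su**3 * coskew(u, u, u) - 3 * su**2 * sv * coskew(u, u, v)
       + 3 * su * sv**2 * coskew(u, v, v) - sv**3 * coskew(v, v, v)) / sw**3
assert np.isclose(lhs, rhs, rtol=1e-6)
```

Because both sides are computed from the same sample moments, they agree to floating-point precision rather than merely to sampling error.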
Evaluating the operators $\\mathbb{S}$ and $\\mathbb{K}$ for the case of DLN requires evaluating expressions of the general form $\\mathbb{E}\\left[Y_{p}^{i}Y_{n}^{j}\\right]$, which can be done via the MGF of the BVN distribution\n\\begin{equation} \\label{eq:EUVSimp}\n\\mathbb{E}\\left[Y_{p}^{i}Y_{n}^{j}\\right] = \\mathbb{E}\\left[e^{i X_p}e^{j X_n}\\right] = \\text{MGF}_{BVN}\\left(\\big[\\begin{smallmatrix} i \\\\ j \\end{smallmatrix}\\big] \\right) = \\mathbb{E}\\left[Y_{p}^{i}\\right]\\mathbb{E}\\left[Y_{n}^{j}\\right]e^{ij\\pmb{\\Sigma}_{(1,2)}}\n\\end{equation}\nwith $\\mathbb{E}\\left[Y_{p}^{i}\\right]=\\text{exp}\\left(i\\mu_{p} + \\frac{1}{2}i^2\\sigma_{p}^2\\right)$. This concludes the technical details of the derivation. \n\nThe method presented can be extended to higher central moments as well. The accompanying code suite includes functions that implement the equations above and use them to calculate the first five moments of the DLN given the parameters $(\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn})$. Section~\\ref{sec:MC} later describes the results of Monte-Carlo experiments testing the empirical variance and bias of the moments as a function of sample size.\n\n\n\n\\subsection{Estimation}\n\\label{sec:Estim}\n\nGiven data $\\pmb{D} \\sim \\text{DLN}(\\pmb{\\Theta})$ with $\\pmb{\\Theta} = (\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn})$, we would like to find an estimate $\\pmb{\\hat{\\Theta}}$ to the parameter vector $\\pmb{\\Theta}$. Experiments show that given an appropriate initial guess, the MLE estimates of $\\pmb{\\Theta}$ perform well in practice. The main parameter of difficulty is $\\rho_{pn}$. This parameter is akin to the shape parameter in the Stable distribution, which plays a similar role and is similarly difficult to estimate, see e.g. \\cite{FamaRoll1971}. 
It hence requires special care in the estimation.\n\nThe estimation code provided minimizes the negative log-likelihood of the data w.r.t the DLN PDF using a multi-start algorithm. The starting values for the first four parameters are fixed for all start points as:\n\\begin{equation} \\label{eq:ESTIM_GUESS}\n\\begin{bmatrix}\n\\mu_p \\\\ \\sigma_p \\\\ \\mu_n \\\\ \\sigma_n\n\\end{bmatrix} = \n\\begin{bmatrix}\n\\text{Median}\\left[\\text{log}\\left(\\pmb{D}\\right)\\right] \\ \\ \\text{for} \\ \\ \\pmb{D}>0 \\\\\n\\text{IQR}\\left[\\text{log}\\left(\\pmb{D}\\right)\\right]\/1.35 \\ \\ \\text{for} \\ \\ \\pmb{D}>0 \\\\\n\\text{Median}\\left[\\text{log}\\left(-\\pmb{D}\\right)\\right] \\ \\ \\text{for} \\ \\ \\pmb{D}<0 \\\\\n\\text{IQR}\\left[\\text{log}\\left(-\\pmb{D}\\right)\\right]\/1.35 \\ \\ \\text{for} \\ \\ \\pmb{D}<0 \\\\\n\\end{bmatrix}\n\\end{equation}\nwhile the initial guesses for $\\rho_{pn}$ are $(-0.8,-0.3,0,0.3,0.8)$. The estimator $\\pmb{\\hat{\\Theta}}$ is then the value which minimizes the negative log-likelihood in the multi-start algorithm. The estimator inherits asymptotic normality, consistency, and efficiency properties from the general M-estimator theory, as the dimension of $\\pmb{\\hat{\\Theta}}$ is fixed, the likelihood is smooth, and is supported on $\\mathbb{R}\\ \\forall \\pmb{\\hat{\\Theta}}$. A better estimation procedure for the parameters of the DLN might be merited, but is left for future work.\n\n\n\n\\subsection{The elliptical multi-variate DLN}\n\\label{sec:mvsdln}\n\nPractical applications of the DLN require the ability to work with multi-variate DLN RVs. I hence present an extension of the DLN to the multi-variate case using elliptical distribution theory, with the standard reference being \\cite{FangEtAl1990}.\n\n\\begin{sloppypar}\nThe method of elliptical distributions requires a symmetric baseline distribution. 
We will therefore focus our attention on the symmetric DLN case in which ${\\mu_p=\\mu_n\\equiv\\mu}$ and ${\\sigma_p = \\sigma_n\\equiv\\sigma}$, yielding the three parameter uni-variate symmetric distribution $\\text{SymDLN}(\\mu,\\sigma,\\rho)=\\text{DLN}(\\mu,\\sigma,\\mu,\\sigma,\\rho)$. I begin by defining a standardized N-dimensional elliptical DLN RV using SymDLN and the spherical decomposition of \\cite{CambanisEtAl1981}, and later extend it to a location-scale family of distributions.\n\\end{sloppypar}\n\nLet $\\mathbf{U}$ be an N-dimensional RV distributed uniformly on the unit hyper-sphere in $\\mathbb{R}^{N}$ and arranged as a column vector. Let $R\\geq0$ be a uni-variate RV independent of $\\mathbf{U}$ with PDF $f_{R}\\left(r\\right)$ to be derived momentarily, and let $\\mathbf{Z}=R\\cdot\\mathbf{U}$ be a standardized N-dimensional elliptical DLN RV. A common choice for $\\mathbf{U}$ is $\\widehat{\\mathbf{U}}\/\\lvert\\lvert\\widehat{\\mathbf{U}}\\rvert\\rvert_{2}$ with $\\widehat{\\mathbf{U}} \\sim MVN(\\mathbf{0}_N,\\mathbf{1}_N)$. $\\mathbf{U}$ captures a direction in $\\mathbb{R}^{N}$, and we have $\\sqrt{\\mathbf{U}^{T}\\cdot\\mathbf{U}} = \\lvert\\lvert\\mathbf{U}\\rvert\\rvert_{2} \\equiv 1$, which implies $\\sqrt{\\mathbf{Z}^{T}\\cdot\\mathbf{Z}} = \\lvert\\lvert\\mathbf{Z}\\rvert\\rvert_{2} = R$. 
We further know that the surface area of an N-sphere with radius $R$ is given by\n\begin{equation} \label{eq:Surface}\nS_{N}\left(R\right) = \frac{2\cdot\pi^{\frac{N}{2}}}{\Gamma\left(\frac{N}{2}\right)}\cdot R^{N-1}\n\end{equation}\nand we can hence write the PDF of $\mathbf{Z}$ as\n\begin{equation} \label{eq:fZPDF1}\nf_{\mathbf{Z}}\left(\mathbf{z}\right) = \frac{f_{R}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)}{S_{N}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)} = \frac{\Gamma\left(\frac{N}{2}\right)\cdot f_{R}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)}{2\cdot\pi^{\frac{N}{2}}\cdot\lvert\lvert\mathbf{z}\rvert\rvert_{2}^{N-1}}\n\end{equation}\n\nWe require $f_{R}\left(r\right)$ and $f_{\mathbf{Z}}\left(\mathbf{z}\right)$ to be valid PDFs, which yields the conditions\n\begin{equation} \label{eq:RZcond}\n\begin{split}\n& f_{R}\left(r\right) \geq 0\ \forall\ r\in\mathbb{R} \\\n& f_{\mathbf{Z}}\left(\mathbf{z}\right) \geq 0\ \forall\ \mathbf{z}\in\mathbb{R}^{N} \\\n& \int_{-\infty}^{\infty}f_{R}\left(r\right)\ dr = 1 \\\n& \int_{-\infty}^{\infty}\cdot\cdot\cdot \int_{-\infty}^{\infty} f_{\mathbf{Z}}\left(\mathbf{z}\right)\ d\mathbf{z}_{(N)}\cdot\cdot\cdot d\mathbf{z}_{(1)} = 1 \\\n\end{split}\n\end{equation}\nTo those, we can add the condition that the properly normalized distribution of $f_{R}\left(r\right)$ will be SymDLN,\n\begin{equation} \label{eq:fRcond}\nf_{R}\left(r\right) = \widetilde{M}_{N}\left(r\right)\cdot f_{DLN}(r)\n\end{equation}\nwith $\widetilde{M}_{N}\left(r\right)$ chosen such that the conditions in Equation~\ref{eq:RZcond} hold. 
Solving for this set of conditions yields\n\begin{equation} \label{eq:fR}\nf_{R}\left(r\right) = \frac{r^{N-1}} {\int_{0}^{\infty}\widetilde{r}^{N-1}\cdot f_{DLN}\left(\widetilde{r}\right)\ d\widetilde{r}}\cdot f_{DLN}\left(r\right)\n\end{equation}\nand\n\begin{equation} \label{eq:fZ}\nf_{\mathbf{Z}}\left(\mathbf{z}\right) = \frac{\Gamma\left(\frac{N}{2}\right)}{2\cdot\pi^{\frac{N}{2}}\cdot \int_{0}^{\infty}\widetilde{r}^{N-1}\cdot f_{DLN}\left(\widetilde{r}\right)\ d\widetilde{r}}\cdot f_{DLN}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right) = M_{N}\cdot f_{DLN}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)\n\end{equation}\nwith $M_{N}$ a normalization constant depending only on the dimension $N$ and the parameters of the baseline SymDLN$\left(\mu, \sigma, \rho\right)$ being used. We can further use the definition of the CDF of $\mathbf{Z}$ to write\n\begin{equation} \label{eq:FZ}\n\begin{split}\nF_{\mathbf{Z}}\left(\mathbf{z}\right) & = \int_{-\infty}^{\mathbf{z}_{(1)}}\cdot\cdot\cdot \int_{-\infty}^{\mathbf{z}_{(N)}} f_{\mathbf{Z}}\left(\mathbf{\widehat{z}}\right)\ d\mathbf{\widehat{z}}_{(N)}\cdot\cdot\cdot d\mathbf{\widehat{z}}_{(1)} \\ \n& = \int_{-\infty}^{\mathbf{z}_{(1)}}\cdot\cdot\cdot \int_{-\infty}^{\mathbf{z}_{(N)}} M_{N}\cdot f_{DLN}\left(\lvert\lvert\mathbf{\widehat{z}}\rvert\rvert_{2}\right)\ d\mathbf{\widehat{z}}_{(N)}\cdot\cdot\cdot d\mathbf{\widehat{z}}_{(1)} \\\n\end{split}\n\end{equation}\nwhich concludes the characterization of the standardized N-dimensional\nelliptical DLN RV.\n\nExtending the standardized N-dimensional DLN to a location-scale family of distributions is now straightforward. Let $\widetilde{\pmb{\mu}}=\left(\mu_1 , \mu_2 , ... , \mu_N\right)^{T}$ be a column vector of locations and let $\widetilde{\pmb{\Sigma}}$ be a positive-semidefinite scaling matrix of rank $N$. 
Define \n\begin{equation} \label{eq:MVDLN}\n\mathbf{W} = \widetilde{\pmb{\mu}} + \widetilde{\pmb{\Sigma}}^{\frac{1}{2}}\cdot\mathbf{Z}\n\end{equation}\nwith $\widetilde{\pmb{\Sigma}}^{\frac{1}{2}}$ denoting the square root of $\widetilde{\pmb{\Sigma}}$ obtained via its eigendecomposition. The PDF of $\mathbf{W}$ is then given by\n\begin{equation} \label{eq:PDFMVDLN}\n\begin{split}\nf_\mathbf{W}\left(\mathbf{w}\right) & = \lvert\widetilde{\pmb{\Sigma}}\rvert^{-\frac{1}{2}}\cdot f_{\mathbf{Z}}\left(\widetilde{\pmb{\Sigma}}^{-\frac{1}{2}}\cdot\left(\mathbf{w}-\widetilde{\pmb{\mu}}\right)\right) \\\n& = \lvert\widetilde{\pmb{\Sigma}}\rvert^{-\frac{1}{2}}\cdot M_{N}\cdot f_{DLN}\left(\sqrt{\left(\mathbf{w}-\widetilde{\pmb{\mu}}\right)^{T}\cdot\widetilde{\pmb{\Sigma}}^{-1}\cdot\left(\mathbf{w}-\widetilde{\pmb{\mu}}\right)}\right) \\\n& = \lvert\widetilde{\pmb{\Sigma}}\rvert^{-\frac{1}{2}}\cdot M_{N}\cdot f_{DLN}\left(\lvert\lvert\mathbf{w}-\widetilde{\pmb{\mu}}\rvert\rvert_{\widetilde{\pmb{\Sigma}}}\right) \\\n\end{split}\n\end{equation}\nThe CDF of $\mathbf{W}$ can similarly be written as\n\begin{equation} \label{eq:CDFMVDLN}\n\begin{split}\nF_{\mathbf{W}}\left(\mathbf{w}\right) & = \lvert\widetilde{\pmb{\Sigma}}\rvert^{-\frac{1}{2}}\cdot M_{N}\cdot \int_{-\infty}^{\mathbf{w}_{(1)}}\cdot\cdot\cdot \int_{-\infty}^{\mathbf{w}_{(N)}} f_{DLN}\left(\lvert\lvert\mathbf{\widehat{w}}-\widetilde{\pmb{\mu}}\rvert\rvert_{\widetilde{\pmb{\Sigma}}}\right)\ d\mathbf{\widehat{w}}_{(N)}\cdot\cdot\cdot d\mathbf{\widehat{w}}_{(1)} \\\n\end{split}\n\end{equation}\nwhich characterizes a general elliptical multi-variate DLN RV.\n\nFinally, note that the scaling matrix $\widetilde{\pmb{\Sigma}}$ is not the covariance matrix of $\mathbf{W}$ due to the heavy tails of $\mathbf{W}$, similar to other heavy-tailed elliptical distributions such as the multi-variate 
Stable, t, or Laplace distributions. Further note that the normalization integral in Equation~\\ref{eq:fR} is numerically unstable for high values of $N$ (e.g., $N\\geq 5$), and care should be taken when deriving the PDF of high-dimensional DLN RVs.\n\n\n\n\\section{Methods for heavy-tailed analysis}\n\\label{sec:Methods}\n\nAs discussed above, a main difficulty of working with the DLN distribution stems from its ``double exponential'' nature, i.e., the fact that it exhibits exponential tails in both the positive and negative directions. The usual mitigation for a single exponential tail, applying a log transform, fails because the log is undefined for negative values. This section describes how to extend methods applied to one-sided exponential tails to double-exponential distributions.\n\n\n\n\\subsection{Inverse-Hyperbolic-Sine space and the ADLN}\n\nA common alternative to using log-transforms is transforming the data using the Inverse Hyperbolic Sine (asinh). For a review of the use of asinh in economic applications see \\cite{BellemareWichman2020}. 
The hyperbolic sine and its inverse are given by\n\\begin{equation} \\label{eq:ASINH}\n\\begin{split}\n\\text{sinh}(x) & = \\frac{e^{x}-e^{-x}}{2} \\\\\n\\text{asinh}(x) & = \\log\\left(x+\\sqrt{1+x^2}\\right)\n\\end{split}\n\\end{equation}\nThe asinh transform has the following useful properties:\n\\begin{enumerate}\n \\item Differentiable and strictly increasing in x.\n \\item $\\text{asinh}(x)\\approx \\text{sign}(x)(\\log\\lvert x\\rvert + \\log(2))$, with the approximation error rapidly vanishing as $\\lvert x\\rvert$ increases.\\footnote{About 1\\% approximation error at $\\lvert x\\rvert$=4, and about 0.1\\% at $\\lvert x\\rvert$=10.}\n \\item Odd function, such that $\\text{asinh}(-x) = -\\text{asinh}(x)$.\n \\item Zero based, such that $\\text{asinh}(0)=0$.\n\\end{enumerate}\nI.e., asinh is a bijection similar in flavor to the neglog transform:\n\\begin{equation} \\label{eq:NEGLOG}\n\\text{neglog}(x) =\\text{sign}(x)\\log(1+\\lvert x\\rvert)\n\\end{equation}\nbut with less distortion than the neglog around 0, at the cost of the fixed bias $\\log(2)\\approx 0.7$.\n\nIt is useful to note that any difference of exponentials function can be factored into an exponential multiplied by a Hyperbolic Sine, i.e., \n\\begin{equation} \\label{eq:SINHFACT}\ny =\\exp\\left(x_1\\right) - \\exp\\left(x_2\\right) = 2\\cdot\\exp\\left(\\frac{x_1 + x_2}{2}\\right)\\cdot\\text{sinh}\\left(\\frac{x_1 - x_2}{2}\\right)\n\\end{equation}\nwhich highlights the close relation between the sinh function and the DLN and Laplace distributions. All three are expressed in terms of differences of exponentials, leading to their characteristic ``double exponential'' nature. Sinh's inverse, the asinh, is hence a natural transform to apply to DLN and Laplace distributed RVs.\n\nAs asinh is differentiable and strictly increasing, the method of transformation applies. 
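Both the log approximation in property 2 and the difference-of-exponentials factorization above are easy to verify numerically; a minimal sketch (NumPy, with illustrative test values):

```python
import numpy as np

# Property 2: asinh(x) ~ sign(x) * (log|x| + log 2), with the error
# vanishing as |x| grows (about 1% at |x|=4, about 0.1% at |x|=10).
def asinh_approx(x):
    return np.sign(x) * (np.log(np.abs(x)) + np.log(2.0))

x = np.array([-100.0, -10.0, -4.0, 4.0, 10.0, 100.0])
rel_err = np.abs(np.arcsinh(x) - asinh_approx(x)) / np.abs(np.arcsinh(x))

# Properties 3 and 4: odd function, zero based.
assert np.allclose(np.arcsinh(-x), -np.arcsinh(x))
assert np.arcsinh(0.0) == 0.0

# The factorization: exp(x1) - exp(x2) == 2*exp((x1+x2)/2)*sinh((x1-x2)/2)
x1, x2 = 1.3, -0.7
lhs = np.exp(x1) - np.exp(x2)
rhs = 2.0 * np.exp((x1 + x2) / 2.0) * np.sinh((x1 - x2) / 2.0)
assert np.isclose(lhs, rhs)
```
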
If $Z=\\text{asinh}(W)$ where $W\\sim DLN$ then $Z\\sim ADLN$, $W=\\text{sinh}(Z)$, and $\\frac{dZ}{dW} = \\left(1+\\text{sinh}(Z)^{2}\\right)^{-1\/2}$. We can now write the PDF for the ADLN distribution\n\\begin{equation} \\label{eq:PDFADLN}\nf_{ADLN}(z) = \\frac{f_{DLN}(\\text{sinh}(z))}{\\text{asinh}'(\\text{sinh}(z))} = f_{DLN}(\\text{sinh}(z))\\sqrt{1+\\text{sinh}(z)^2} \n\\end{equation}\nwhich allows analysis of $Z\\sim ADLN$, the transformed DLN RVs, whose histogram is more ``compact'' and easier to present.\n\nPanels (c) and (d) of Figure~\\ref{fig:DLNexam} present typical DLN distributions encountered in practice with linear (Panel c) and asinh (Panel d) horizontal axis. Panel (c) presents a truncated segment of the distribution. Due to the asinh transform, Panel (d) is able to present the entire distribution. The approximate log-Normality of the positive and negative sides of the DLN is not visible in Panel (c), but is made clear by the asinh transform in Panel (d).\n\n\n\n\\subsection{Growth in DLN-distributed variates}\n\\label{sec:Growth}\n\nHow does one measure growth in DLN-distributed RVs? A firm that had $\\$100M$ of income in year $1$ and $\\$120M$ of income in year $2$ has certainly grown its income. One can argue whether it is preferable to say the firm grew by $\\frac{120M}{100M}-1=0.2=20\\%$ or by $\\log(120M)-\\log(100M)=0.182$ log-points, yet the question itself is well-formed. But what if the firm had $-\\$100M$ of income (i.e., loss) in year $1$, and then $\\$120M$ of income in year $2$? What was its growth? This section aims to provide a rigorous answer to that question.\n\nTo begin, we require a definition of growth. 
\\cite{BarroSala-I-Martin2003} and \\cite{StudenyMeznik2013} define instantaneous growth of a time-continuous and \\emph{strictly positive} RV $Z(t)>0$ as \n\\begin{equation} \\label{eq:pergrowth}\n\\frac{dZ(t)\/dt}{Z(t)} = \\frac{Z'(t)}{Z(t)} \\approx \\frac{Z_{t+1}-Z_t}{Z_t}\n\\end{equation}\nwith the second part of the equation using the first-difference of discrete variables as an approximation to the derivative $Z'(t)$, which yields the well-known formulation of percentage growth in discrete variables. Generalizing this definition to $Z(t)\\in \\mathbb{R}$ yields:\n\\begin{equation} \\label{eq:pergrowth2}\nd\\% \\equiv \\frac{dZ(t)\/dt}{\\lvert Z(t)\\rvert} = \\frac{Z'(t)}{\\lvert Z(t)\\rvert} \\approx \\frac{Z_{t+1}-Z_t}{\\lvert Z_t\\rvert} \\ \\ \\text{for} \\ \\ Z(t) \\neq 0\n\\end{equation}\nwhich guarantees that $Z_{t+1}>Z_t$ will imply positive growth, regardless of the sign of $Z_t$. The approximate term $\\left(Z_{t+1}-Z_t\\right)\/\\lvert Z_t\\rvert$ is \\emph{generalized percentage growth} (hereafter denoted d\\%), and is explosive if $\\lvert Z_t\\rvert\\to 0$, similar to ``traditional'' percentage growth.\n\nNext, it is instructive to consider the growth of a log-Normally distributed RV, as most measures of size encountered in firm dynamics (and elsewhere) are approximately log-Normally distributed. To that end, consider the following setting:\n\\begin{equation} \\label{eq:AR_LN}\n\\begin{split}\n& X_{t+1} = \\left(1-\\rho_X\\right)\\cdot\\mu_X + \\rho_X\\cdot X_{t} + \\epsilon^{X}_{t} \\\\\n& \\epsilon^{X}_{t} \\sim \\mathcal{N}(0,\\sigma_{X}^2) \\\\\n& Y_{t} = \\text{exp}\\left(X_{t}\\right)\n\\end{split}\n\\end{equation}\nin which $X_{t}$ is a simple $AR(1)$ stochastic process, and hence is Normally distributed, while $Y_{t}>0$ is log-Normally distributed. 
What is the growth in $Y_{t}$?\n\nApplying the definition, we have:\n\\begin{equation} \\label{eq:loggrowth}\n\\frac{Y'(t)}{\\lvert Y(t)\\rvert} = \\frac{Y(t)\\cdot X'(t)}{Y(t)} = X'(t) \\approx X_{t+1} - X_t = \\log(Y_{t+1}) - \\log(Y_{t}) \\equiv \\text{dlog}(Y_{t+1})\n\\end{equation}\nwhich yields the well-known formulation of growth as a difference in logs between consecutive values, denoted dlog(). The difference between Equations~\\ref{eq:pergrowth2} and~\\ref{eq:loggrowth} is in whether we differentiate before applying the first-difference approximation. Note that using percentage growth as in Equation~\\ref{eq:pergrowth2} in this case would yield:\n\\begin{equation} \\label{eq:pergrowthYt}\n\\frac{Y_{t+1}}{Y_t} - 1 = \\exp(X_{t+1} - X_t) - 1\n\\end{equation}\nor the general observation that percentage growth is a convex transform of log growth. It is further worth noting that $\\lim_{\\rho_X \\to 1} \\left(X_{t+1} - X_t\\right) = \\epsilon^{X}_{t}$. Log growth yields the innovation in the underlying AR(1) process, while percent growth yields the transformed value $\\exp(\\epsilon^{X}_{t})-1$. I.e., percent growth introduces a convexity bias relative to log growth in the case of a log-Normally distributed RV.\n\nConversely, using log growth to measure growth in a Normally distributed RV, even if said RV is strictly positive in practice, would introduce a similar but opposite ``concavity bias.'' To see that, consider the growth in $X(t)>0$, when measured in dlog terms:\n\\begin{equation} \\label{eq:loggrowthNorm}\n\\text{dlog}(X_{t+1}) = \\log(X_{t+1}) - \\log(X_t) = \\log\\left(\\frac{X_{t+1}}{X_t} -1 +1\\right) = \\log\\left(\\frac{X'(t)}{\\lvert X(t)\\rvert} + 1\\right)\n\\end{equation}\nPut differently, using dlog() to measure growth in $X$ yields the log of one plus percent growth, where percent growth itself is the appropriate measure by the definition in Equations~\\ref{eq:pergrowth} and~\\ref{eq:pergrowth2}. 
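The convexity bias is easy to see in simulation in the random-walk limit $\\rho_X \\to 1$ of Equation~\\ref{eq:AR_LN}, where dlog growth equals the innovation exactly; a minimal sketch (NumPy; the value of $\\sigma_X$ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk in logs (rho_X -> 1): dlog growth is the innovation eps
# itself, while percent growth is exp(eps) - 1.
sigma_x = 0.5
eps = rng.normal(0.0, sigma_x, size=200_000)

dlog_growth = eps                 # log(Y_{t+1}) - log(Y_t) = eps
pct_growth = np.exp(eps) - 1.0    # Y_{t+1}/Y_t - 1

# Convexity: exp(e) - 1 >= e pointwise, so percent growth dominates
# log growth observation by observation.
assert np.all(pct_growth - dlog_growth > -1e-12)

# Jensen's inequality: E[exp(eps) - 1] = exp(sigma^2/2) - 1 > 0 = E[eps],
# i.e., percent growth is biased upward relative to log growth.
print(pct_growth.mean(), np.exp(sigma_x**2 / 2.0) - 1.0)
```
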
Hence, the concept of growth used is closely related to the distribution being considered.\n\nNext, consider a similar setting, but for a DLN RV:\n\\begin{equation} \\label{eq:AR_DLN}\n\\begin{split}\n& X^{p}_{t+1} = \\left(1-\\rho_{p}\\right)\\cdot\\mu_{p} + \\rho_{p}\\cdot X_{t}^{p} + \\epsilon^{p}_{t} \\\\\n& X^{n}_{t+1} = \\left(1-\\rho_{n}\\right)\\cdot\\mu_{n} + \\rho_{n}\\cdot X_{t}^{n} + \\epsilon^{n}_{t} \\\\\n& (\\epsilon^{p}_{t},\\epsilon^{n}_{t})^{T} \\sim \\mathcal{N}\\left(\\pmb{0},\\pmb{\\Sigma}\\right) \\\\\n& Y^{p}_{t} = \\text{exp}\\left(X^{p}_{t}\\right) \\ \\ ; \\ \\ Y^{n}_{t} = \\text{exp}\\left(X^{n}_{t}\\right) \\\\\n& W_{t} = Y^{p}_{t} - Y^{n}_{t}\n\\end{split}\n\\end{equation}\nwith $\\pmb{\\Sigma}$ as in Equation~\\ref{eq:BVN}. By applying the generalized growth definition~\\ref{eq:pergrowth2}, we have:\n\\begin{equation} \\label{eq:DLNGROWTH}\n\\begin{split}\n\\frac{W'(t)}{\\lvert W(t)\\rvert} & = \\frac{Y^{p}(t)\\cdot dX^{p}(t)\/dt - Y^{n}(t)\\cdot dX^{n}(t)\/dt}{\\lvert W(t)\\rvert} \\approx \\frac{Y^{p}_{t}\\cdot\\left(X^{p}_{t+1} - X^{p}_{t}\\right) - Y^{n}_{t}\\cdot\\left(X^{n}_{t+1} - X^{n}_{t}\\right)}{\\lvert W(t)\\rvert} \\\\\n& = \\frac{Y^{p}_{t}\\cdot\\text{dlog}\\left(Y^{p}_{t+1}\\right) - Y^{n}_{t}\\cdot\\text{dlog}\\left(Y^{n}_{t+1}\\right)}{\\lvert Y^{p}_{t} - Y^{n}_{t}\\rvert}\n\\end{split}\n\\end{equation}\nwhich implies the growth of a DLN RV can be defined as a function of the levels and growth rates of its two component log-Normal RVs. 
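As a worked illustration of Equation~\\ref{eq:DLNGROWTH} (the income figures are hypothetical): a firm whose gross positive income grows from \\$200M to \\$270M while its gross negative income grows from \\$100M to \\$120M has net income growing from \\$100M to \\$150M, and its dDLN growth can be computed as:

```python
import numpy as np

def ddln_growth(yp_t, yp_t1, yn_t, yn_t1):
    """dDLN growth of W_t = Y^p_t - Y^n_t: the dlog growth rates of the
    two log-Normal components, weighted by their levels and scaled by |W_t|."""
    dlog_p = np.log(yp_t1) - np.log(yp_t)
    dlog_n = np.log(yn_t1) - np.log(yn_t)
    return (yp_t * dlog_p - yn_t * dlog_n) / abs(yp_t - yn_t)

# Positive component 200 -> 270, negative component 100 -> 120:
g = ddln_growth(200.0, 270.0, 100.0, 120.0)   # ~0.4179

# The same component growth on a much smaller positive base (50 -> 270)
# yields much larger measured growth:
g2 = ddln_growth(50.0, 270.0, 100.0, 120.0)   # ~1.3218
```
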
Section~\\ref{sec:MC} conducts Monte-Carlo experiments to explore the relation between the measures of growth presented above for Normal, log-Normal, and DLN distributed RVs.\n\n\\comments{\n\\begin{equation} \\label{eq:DLNGROWTH}\n\\begin{split}\ng & = \\frac{200\\cdot\\left(\\log\\left(270\\right)-\\log\\left(200\\right)\\right) - 100\\cdot\\left(\\log\\left(120\\right)-\\log\\left(100\\right)\\right)}{\\lvert200 - 100\\rvert} = 0.4179 \\ dlnp\\ (\\approx 0.4055 \\ lp) \\\\\ng & = \\frac{50\\cdot\\left(\\log\\left(270\\right)-\\log\\left(50\\right)\\right) - 100\\cdot\\left(\\log\\left(120\\right)-\\log\\left(100\\right)\\right)}{\\lvert50 - 100\\rvert} = 1.3218 \\ dlnp\n\\end{split}\n\\end{equation}\n}\n\n\n\n\\section{Monte-Carlo experiments}\n\\label{sec:MC}\n\nThis section reports the results of Monte-Carlo experiments designed to ascertain the properties of the moments, estimators, and measures discussed above. \\comments{, as well as present further results on the properties of the DLN as an approximating distribution.}\n\n\n\n\\subsection{Properties of estimators}\n\nI begin by exploring the moments and parameter estimators of Sections~\\ref{sec:Moms} and~\\ref{sec:Estim}. 
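Each repetition requires drawing many DLN observations; a minimal sampler sketch, generating $W=\\exp(X^p)-\\exp(X^n)$ from a bivariate Normal as in Equation~\\ref{eq:AR_DLN} (the parameter values below are illustrative draws from within $\\pmb{Q}$):

```python
import numpy as np

def sample_dln(mu_p, sigma_p, mu_n, sigma_n, rho_pn, size, rng):
    """Draw W ~ DLN as W = exp(Xp) - exp(Xn), (Xp, Xn) bivariate Normal."""
    cov = [[sigma_p**2, rho_pn * sigma_p * sigma_n],
           [rho_pn * sigma_p * sigma_n, sigma_n**2]]
    xp, xn = rng.multivariate_normal([mu_p, mu_n], cov, size=size).T
    return np.exp(xp) - np.exp(xn)

rng = np.random.default_rng(1)
w = sample_dln(mu_p=1.0, sigma_p=1.0, mu_n=0.5, sigma_n=1.5,
               rho_pn=0.3, size=100_000, rng=rng)

# Draws take both signs; the heavy tails are tamed by the asinh transform,
# which is the space used for the moment comparisons below.
z = np.arcsinh(w)
print(np.mean(w > 0), z.mean(), z.std())
```
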
I concentrate the experiments on a region of the parameter space that arises in practical applications related to the theory of the firm:\n\\begin{equation} \\label{eq:MC_Region_1}\n\\pmb{Q}: \\ \\ \\left(\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn}\\right) \\in \\left(\\left[-3,3\\right],\\left[0.5,2.5\\right],\\left[-3,3\\right],\\left[0.5,2.5\\right],\\left[-1,1\\right]\\right)\n\\end{equation}\n\n\\noindent The data collection\/creation for the Monte-Carlo analysis proceeds as follows.\\\\\n\\noindent For each $i \\in \\{1...N\\}$:\n\\begin{enumerate}\n \\item Draw a parameter vector $\\pmb{\\Theta}_i\\in\\pmb{Q}$ with Uniform probability.\n \\item Calculate the theoretical central moments based on $\\pmb{\\Theta}_i$ using the method of Section~\\ref{sec:Moms}.\n \\item Draw $K$ observations $W_{i,k}\\sim\\text{DLN}(\\pmb{\\Theta}_i)$.\n \\item Calculate the first five empirical central moments of $W_{i,k}$.\n \\item Recalculate the first five empirical moments using iteratively smaller subsets of the $K$ observations.\\footnote{Specifically, I recalculate the moments based on the first $K\/2^s$ observations for $s\\in\\{1...11\\}$.}\n \\item Estimate the parameters of $W_{i,k}$, denoted $\\pmb{\\widehat{\\Theta}}_i$, using the method of Section~\\ref{sec:Estim}.\n \\item Calculate the Kolmogorov-Smirnov (K-S), Chi-square (C-2), and Anderson-Darling (A-D) test statistics based on $\\pmb{\\widehat{\\Theta}}_i$ and $W_{i,k}$.\n\\end{enumerate}\nI repeat the data creation process $N=70,000$ times. Within each loop, I draw $K=100,000$ observations $W_{i,k}\\sim\\text{DLN}(\\pmb{\\Theta}_i)$.\n\nPanel (a) of Table~\\ref{tab:MC1} presents the Monte-Carlo results for the moment estimators of Section~\\ref{sec:Moms}. It compares the theoretical moments derived in Step 2 of the Monte-Carlo experiment to the empirical moments derived in Step 4, concentrating on the first five moments of the distribution. 
The analysis is done in asinh space because the moments of the DLN explode quickly due to its heavy tails (similar to moments of the log-Normal, which are similarly considered in log space). The empirical and theoretical moments show high correlation, and the odd moments (mean or $1^{st}$ moment, skewness or $3^{rd}$ moment, and $5^{th}$ moment) exhibit no significant bias. The even moments (variance or $2^{nd}$, and kurtosis or $4^{th}$) show evidence of bias, which is fairly severe for kurtosis. Small-sample bias correction to the kurtosis estimator appears warranted, but is outside the scope of this work. The IQR of the difference between the theoretical and empirical moments is increasing with the moment degree, as expected.\n\n\\RPprep{Estimator Monte Carlo Experiments}{0}{0}{MC1}{%\n This table presents results of estimator Monte-Carlo experiments with $N=70,000$ repetitions and $K=100,000$ observations drawn in each repetition. Panel (a) tests the moments estimators $\\widehat{M}_i\\ \\ i\\in\\{1...5\\}$ of Section~\\ref{sec:Moms} vs. the actual moments $M_i$, conducting all analysis in asinh space. It reports the general accuracy corr($\\text{asinh}(\\widehat{M}_i),\\text{asinh}(M_i)$); the bias median($\\text{asinh}(\\widehat{M}_i)-\\text{asinh}(M_i)$) ; and the accuracy IQR($\\text{asinh}(\\widehat{M}_i)-\\text{asinh}(M_i)$). Panel (b) reports similar statistics comparing the DLN parameter estimators of Section~\\ref{sec:Estim} $\\pmb{\\widehat{\\Theta}}$ and the actual parameters $\\pmb{\\Theta}$. 
Panel (c) reports the values of parameters a,b,c,d in the approximations $ICDF(p) = a\\cdot\\exp(b\\cdot p) + c\\cdot\\exp(d\\cdot p)$ for the ICDFs of the Kolmogorov-Smirnov, Chi-square, and Anderson-Darling test statistics for DLN RVs, as well as the approximation $R^2$.\n}\n\\RPtab{%\n \\begin{tabularx}{\\linewidth}{Frrrrr}\n \\toprule\n\t\\textit{Panel (a): Moment estimators} & $\\widehat{M}_1$ & $\\widehat{M}_2$ & $\\widehat{M}_3$ & $\\widehat{M}_4$ & $\\widehat{M}_5$ \\\\\n \\midrule\n Correlation & 0.9997 & 0.9929 & 0.9282 & 0.8238 & 0.8478 \\\\\n Bias & -0.0001 & 0.1092 & -0.0002 & 6.3410 & 0.0220 \\\\\n Accuracy & 0.0217 & 0.4785 & 3.4480 & 8.5609 & 32.0236 \\\\ \\\\\n \n\t\\textit{Panel (b): Parameter estimators} & $\\widehat{\\mu}_p$ & $\\widehat{\\sigma}_p$ & $\\widehat{\\mu}_n$ & $\\widehat{\\sigma}_n$ & $\\widehat{\\rho}_{pn}$ \\\\\n \\midrule\n Correlation & 0.9408 & 0.9619 & 0.9412 & 0.9623 & 0.9190 \\\\\n Bias & -0.0034 & 0.0019 & -0.0043 & 0.0019 & -0.0048 \\\\\n Accuracy & 0.0588 & 0.0251 & 0.0614 & 0.0259 & 0.0762 \\\\ \\\\\n\n \\textit{Panel (c): ICDF approximations} & a & b & c & d & $R^2$ \\\\\n \\midrule\n Kolmogorov-Smirnov & 6.75e-7 & 0.1553 & -6.7520 & -0.0011 & 0.9976 \\\\\n Chi-square & 1.88e-8 & 0.1955 & 1.2080 & 0.0044 & 0.9920 \\\\\n Anderson-Darling & 1.18e-5 & 0.1350 & -5.7070 & -0.0060 & 0.9900 \\\\\n\t\\bottomrule\n \\end{tabularx}\n}\n\nPanel (b) of Table~\\ref{tab:MC1} goes on to present the Monte-Carlo results for the parameter estimators of Section~\\ref{sec:Estim}. It compares the actual parameters drawn in Step 1 to the estimated parameters calculated in Step 6. The results indicate the estimation procedure is performing quite well. There is high correlation between the actual and estimated parameters, including the hard to estimate correlation parameter. The parameter estimates also exhibit no systematic bias and reasonably low estimation error IQR. 
These results imply the estimation procedure, while cumbersome, is able to capture the DLN parameters correctly.\n\nTo further explore the precision and small-sample bias of the moment estimators, Figure~\\ref{fig:MC1} presents the dependence of estimator quality on sample size. Panel (a) of the figure presents the dependence of the correlation between the theoretical and empirical moments on sample size. Kurtosis is even less precise than the $5^{th}$ moment, and is strongly influenced by sample size. Panel (b) of Figure~\\ref{fig:MC1} then presents the dependence of the bias on sample size. The $1^{st}$ and $3^{rd}$ moment estimators exhibit no small-sample bias. The $2^{nd}$ and $5^{th}$ exhibit small and rapidly decreasing bias. Kurtosis, again, shows high bias, only slowly decreasing with sample size.\n\n\\RPprep{Estimator Monte-Carlo experiments}{0}{1}{MC1}{%\n This figure presents results of estimator Monte-Carlo experiments. Panel (a) graphs the dependence of the correlation between the theoretical and empirical moments on sample size. Panel (b) graphs the dependence of moment bias on sample size. Panel (c) presents the distribution of (log of) the K-S statistic in the simulations. 
Panels (d)-(f) then present the ICDF of the (log) K-S, C-2, and A-D statistics, along with the fitted curves.\n}\n\\RPfig{%\n\t\\begin{tabular}{ccc} \n\t\t\\subfigure[Corr($\\text{asinh}(\\widehat{M}_i),\\text{asinh}(M_i)$)] {\\includegraphics[width=2.5in]{Img\/Moment_Corr.pdf}} & \n\t\t\\subfigure[Median($\\text{asinh}(\\widehat{M}_i)-\\text{asinh}(M_i)$)] {\\includegraphics[width=2.5in]{Img\/Moment_Bias.pdf}} & \n\t\t\\subfigure[PDF of log K-S statistic] {\\includegraphics[width=2.5in]{Img\/KS_PDF.pdf}} \\\\ \\\\\n\t\t\\subfigure[ICDF of log K-S statistic]\n\t\t{\\includegraphics[width=2.5in]{Img\/KSfit.pdf}} &\n\t\t\\subfigure[ICDF of log C-2 statistic]\n\t\t{\\includegraphics[width=2.5in]{Img\/C2fit.pdf}} &\n\t\t\\subfigure[ICDF of log A-D statistic]\n\t\t{\\includegraphics[width=2.5in]{Img\/ADfit.pdf}} \\\\ \\\\\n\t\\end{tabular}\n}\n\n\n\n\\subsection{Test-statistic critical values}\n\\label{sec:TestStats}\n\nA second goal of the Monte-Carlo experiments is to establish critical values for test statistics of the hypothesis that some given data are drawn from a DLN distribution. This is especially important for the Anderson-Darling test statistic, whose critical values are well-known to strongly depend on the distribution being examined. See e.g. \\cite{Stephens1979}, \\cite{DAgostinoStephens1986} Chapter 4, and \\cite{JantschiBolboaca2018}.\n\nTo that end, I calculate the K-S, C-2, and A-D test statistics for each of the $N$ draws in the sample, as described in Step 7 above. To fix ideas, Panel (c) of Figure~\\ref{fig:MC1} presents the distribution of (log of) the K-S statistic in the Monte-Carlo experiment. I then calculate the inverse-CDF (ICDF) of the resulting distribution of (log of) each test statistic. Panels (d), (e), and (f) of Figure~\\ref{fig:MC1} present the ICDFs of the (log) K-S, C-2, and A-D test statistics, respectively. 
E.g., Panel (f) indicates that one should reject the hypothesis that given data are drawn from the DLN distribution (at a 5\\% confidence level) if the A-D statistic is higher than $\\text{exp}(ICDF(95)) = \\text{exp}(1.135) = 3.110$.\n\nTo move from calculating critical values to deriving a continuous mapping between p-values and test-statistic values, it is common in the literature discussed above to propose an ad-hoc functional form which is able to approximate the ICDF well. Once one estimates the approximating functional form using non-linear least-squares, one can use it to find the p-values associated with each test-statistic value, and vice-versa. Following experimentation, the functional form most closely able to replicate the resulting ICDFs is of the form:\n\\begin{equation} \\label{eq:Pvals}\nICDF(p) = a\\cdot\\text{exp}\\left(b\\cdot p\\right) + c\\cdot\\text{exp}\\left(d\\cdot p\\right)\n\\end{equation}\nwhich is a four-parameter sum (or difference, if $c<0$) of exponentials.\n\nPanels (d), (e), and (f) of Figure~\\ref{fig:MC1} include the fitted values of the functional form, and show that there is an excellent fit between the functional form and the empirical ICDFs. Panel (c) of Table~\\ref{tab:MC1} presents the values of the four approximating parameters for each of the (log) test statistics' ICDFs, and further reports the $R^2$ of the fit, which is above $0.99$ for all three statistics. Hence, one can safely use these functionals to derive p-values for tests of distributional hypotheses.\n\n\n\n\n\\subsection{Growth measures}\n\nA second set of Monte-Carlo experiments tests the relation between the growth measures described in Section~\\ref{sec:Growth}, for RVs distributed Normal, log-Normal, and DLN. To that end, I define three stochastic processes yielding stationary distributions distributed N, LN, and DLN. 
For each RV type, in each Monte-Carlo iteration, I draw random parameters for the distribution, simulate it forward, measure growth per period using the different measures discussed above, and consider the relation between the random innovations $\\epsilon_t$ and the various growth measures.\n\nThe stochastic processes for $X$, $Y$, and $W$, distributed N, LN, and DLN, respectively, are as described in Equations~\\ref{eq:AR_LN} and~\\ref{eq:AR_DLN} above. The parameter regions are:\n\\begin{equation} \\label{eq:MC_Region_2}\n\\begin{split}\n\\pmb{Q}_{N}: & \\ \\ \\left(\\rho_{N},\\mu_{N},sd_{N}\\right) \\in \\left(\\left[0.60,0.99\\right],\\left[-100,100\\right],\\left[10,100\\right]\\right) \\\\\n\\pmb{Q}_{LN}: & \\ \\ \\left(\\rho_{LN},\\mu_{LN},sd_{LN}\\right) \\in \\left(\\left[0.60,0.99\\right],\\left[-3,3\\right],\\left[0.5,2.5\\right]\\right) \\\\\n\\pmb{Q}_{DLN}: & \\ \\ \\left(\\rho^{p,n}_{DLN},\\mu^{p,n}_{DLN},sd^{p,n}_{DLN},\\rho^{pn}_{DLN}\\right) \\in \\left(\\left[0.60,0.99\\right],\\left[-3,3\\right],\\left[0.5,2.5\\right],\\left[-1,1\\right]\\right) \\\\\n\\end{split}\n\\end{equation}\nwith $\\sigma_{\\Box} = \\sqrt{sd_{\\Box}^2\\cdot\\left(1-\\rho_{\\Box}^2\\right)}$ for $\\Box \\in \\{N,LN,DLN\\}$.\n\n\\noindent The data collection\/creation for the second Monte-Carlo analysis proceeds as follows:\\\\\n\\noindent For each RV type $\\Box \\in \\{N,LN,DLN\\}$: \\\\\n\\noindent For each $i \\in \\{1...N\\}$:\n\\begin{enumerate}\n \\item Draw a parameter vector $\\pmb{\\Theta}_i\\in\\pmb{Q}_\\Box$ with Uniform probability.\n \\item Initialize the RV $Z_{\\Box,0}$ to $\\mu_\\Box$ for N, exp($\\mu_\\Box$) for LN, and $Z^{p,n}_{\\Box,0}$ at exp($\\mu^{p,n}_{\\Box}$) for DLN.\n \\item Draw a shock vector of length $K+100$ (two correlated shock vectors for DLN).\n \\item Simulate the process forward $K+100$ periods based on its laws of motion.\n \\item Drop the first 100 observations as burn-in.\n \\item Calculate the set of growth measures from 
Section~\\ref{sec:Growth}.\n\\end{enumerate}\nI repeat the data creation process $N=10,000$ times, each for $K=1,000$ periods, yielding a total of $10M$ growth observations to be analyzed per distribution type.\n\nPanels (a), (b), and (c) of Table~\\ref{tab:MC2} present the correlations between different growth measures for N, LN, and DLN RVs, respectively. The panels also report correlations concentrating on strictly positive values (i.e., when $Z_{t}>0$ and $Z_{t+1}>0$) and when further avoiding tiny beginning values (i.e., $Z_t>1$). The appropriate concept of growth for a Normally distributed RV is $\\epsilon_t\/\\lvert Z_{t}\\rvert$, and Panel (a) shows it is highly correlated with the generalized percentage growth measure. The panel further shows that using dlog as a measure of growth for Normal RVs is inaccurate. This fact is further highlighted by Panels (a) and (b) of Figure~\\ref{fig:MC2}, which present the relation between the appropriate growth measure and the generalized percent (d\\%) and dlog measures, respectively. Panel (a) shows d\\% captures growth of Normal RVs well, and Panel (b) highlights the ``concavity bias'' arising from using the dlog measure rather than the d\\% measure. The dispersion around the 45-degree line in Panel (a) is driven by the mean-reversion term of the AR(1), which the growth concept ignores.\n\n\\RPprep{Growth Monte Carlo Experiments}{0}{0}{MC2}{%\n This table presents results of growth Monte-Carlo experiments with $N=10,000$ repetitions and $K=1,000$ observations simulated forward in each repetition. Panels (a), (b), and (c) present results for N, LN, DLN, respectively. 
Within each panel, I report correlations between the following measures of growth: $\\epsilon_t$ the stochastic innovation underlying the growth at time $t$; $\\epsilon_t\/\\lvert Z_{t}\\rvert$ the relative stochastic innovation; d\\%($Z_{t+1}$)=$\\left(Z_{t+1} - Z_{t}\\right)\/\\lvert Z_{t}\\rvert$ the generalized percentage growth; dlog($Z_{t+1}$)=log($Z_{t+1}$)-log($Z_{t}$) the log point growth; dDLN($Z_{t+1}$) the DLN growth formulation based on Equation~\\ref{eq:DLNGROWTH}.\n}\n\\RPtab{%\n \\begin{tabularx}{\\linewidth}{Flllll}\n \\toprule\n\t\\textit{Panel (a): N} & $\\epsilon_t$ & $\\epsilon_t\/\\lvert Z_{t}\\rvert$ & d\\%($Z_{t+1}$) & dlog($Z_{t+1}$) & \\\\\n \\midrule\n $\\epsilon_t$ \n & 1.000 & 0.010 & 0.009 & 0.659$^{a}$ & \\\\\n $\\epsilon_t\/\\lvert Z_{t}\\rvert$ \n & 0.380$^{b}$ & 1.000 & 0.973 & 0.031$^{a}$ & \\\\\n d\\%($Z_{t+1}$) \n & 0.357$^{b}$ & 0.960$^{b}$ & 1.000 & 0.033$^{a}$ & \\\\\n dlog($Z_{t+1}$) \n & 0.712$^{b}$ & 0.590$^{b}$ & 0.617$^{b}$ & 1.000 & \\\\ \\\\\n\n\t\\textit{Panel (b): LN} & $\\epsilon_t$ & $\\epsilon_t\/\\lvert Z_{t}\\rvert$ & d\\%($Z_{t+1}$) & dlog($Z_{t+1}$) & \\\\\n \\midrule\n $\\epsilon_t$ \n & 1.000 & 0.023$^{a}$ & 0.269$^{a}$ & 0.931$^{a}$ & \\\\\n $\\epsilon_t\/\\lvert Z_{t}\\rvert$\n & 0.644$^{b}$ & 1.000 & 0.097$^{a}$ & 0.023$^{a}$ & \\\\\n d\\%($Z_{t+1}$) \n & 0.381$^{b}$ & 0.363$^{b}$ & 1.000 & 0.295$^{a}$ & \\\\\n dlog($Z_{t+1}$) \n & 0.929$^{b}$ & 0.620$^{b}$ & 0.381$^{b}$ & 1.000 & \\\\ \\\\\n\n\t\\textit{Panel (c): DLN} & $\\widehat{\\epsilon}_t^c$ & $\\widehat{\\epsilon}_t\/\\lvert Z_{t}\\rvert^c$ & d\\%($Z_{t+1}$) & dlog($Z_{t+1}$) & dDLN($Z_{t+1}$) \\\\\n \\midrule\n $\\widehat{\\epsilon}_t^c$ \n & 1.000 & 0.000$^{a}$ & 0.000$^{a}$ & 0.038$^{a}$ & 0.000$^{a}$ \\\\\n $\\widehat{\\epsilon}_t\/\\lvert Z_{t}\\rvert^c$\n & 0.043$^{b}$ & 1.000 & 0.652 & 0.022$^{a}$ & 0.944 \\\\\n d\\%($Z_{t+1}$) \n & 0.009$^{b}$ & 0.464$^{b}$ & 1.000 & 0.016$^{a}$ & 0.645 \\\\\n dlog($Z_{t+1}$) \n & 0.057$^{b}$ & 0.739$^{b}$ 
& 0.397$^{b}$ & 1.000 & 0.023$^{a}$ \\\\\n dDLN($Z_{t+1}$)\n & 0.040$^{b}$ & 0.931$^{b}$ & 0.455$^{b}$ & 0.797$^{b}$ & 1.000 \\\\ \\\\\n \\bottomrule\n \\end{tabularx}\n \\begin{flushleft}\n $^a$ For strictly positive values ($Z_{t}>0$ and $Z_{t+1}>0$) \\\\\n\t$^b$ For strictly positive and non-tiny initial values ($Z_{t}>1$ and $Z_{t+1}>0$) \\\\\n\t$^c$ For DLN, I define $\\widehat{\\epsilon}_t = \\left(Z_t^p\\cdot\\epsilon_t^p - Z_t^n\\cdot\\epsilon_t^n\\right)$ and $Z_t = Z_t^p - Z_t^n$\n \\end{flushleft}\n}\n\n\\RPprep{Growth Monte-Carlo experiments}{1}{0}{MC2}{%\n This figure presents results of growth Monte-Carlo experiments. Panels (a) and (b) graph the relation between the growth of a Normal RV and (a) generalized percentage growth d\\%($Z_{t+1}$)=$\\left(Z_{t+1} - Z_{t}\\right)\/\\lvert Z_{t}\\rvert$; (b) log point growth dlog($Z_{t+1}$)=log($Z_{t+1}$)-log($Z_{t}$). Panels (c) and (d) graph the relation between the growth of a log-Normal RV and (c) dlog($Z_{t+1}$) ; (d) d\\%($Z_{t+1}$). Panels (e) and (f) graph the relation between the growth of a DLN RV and (e) the DLN growth measure from Equation~\\ref{eq:DLNGROWTH}, dDLN($Z_{t+1}$); (f) d\\%($Z_{t+1}$).\n}\n\\RPfig{%\n\t\\begin{tabular}{cc} \n\t\t\\subfigure[N growth vs. d\\%$^a$] {\\includegraphics[width=2.5in]{Img\/N_grow_per.pdf}} & \n\t\t\\subfigure[N growth vs. dlog$^b$] {\\includegraphics[width=2.5in]{Img\/N_grow_dlog.pdf}} \\\\ \n\t\t\\subfigure[LN growth vs. dlog] {\\includegraphics[width=2.5in]{Img\/LN_grow_dlog.pdf}} & \n\t\t\\subfigure[LN growth vs. d\\%] {\\includegraphics[width=2.5in]{Img\/LN_grow_per.pdf}} \\\\ \t\t\n\t\t\\subfigure[DLN growth vs. dDLN$^a$] {\\includegraphics[width=2.5in]{Img\/DLN_grow_dDLN.pdf}} & \n\t\t\\subfigure[DLN growth vs. 
d\\%$^a$] {\\includegraphics[width=2.5in]{Img\/DLN_grow_per.pdf}} \\\\ \n\t\\end{tabular}\n \\begin{flushleft}\n $^a$ For non-tiny initial values ($\\lvert Z_{t}\\rvert>1$) \\\\\n\t$^b$ For strictly positive and non-tiny initial values ($Z_{t}>1$ and $Z_{t+1}>0$) \\\\\n \\end{flushleft}\t\n}\n\nPanel (b) of Table~\\ref{tab:MC2} moves on to considering LN RVs. Here, the appropriate concept of growth is just $\\epsilon_t$, and the panel shows that dlog measures growth well, while d\\% suffers from a convexity bias and is a poor measure of growth. Panels (c) and (d) of Figure~\\ref{fig:MC2} make the convexity bias clear by plotting the relation between growth and dlog and between growth and d\\%, respectively.\n\nFinally, Panel (c) of Table~\\ref{tab:MC2} presents correlations between growth of DLN RVs and the growth measures. For DLN, the appropriate concept of growth is $\\left(Z_t^p\\cdot\\epsilon_t^p - Z_t^n\\cdot\\epsilon_t^n\\right)\/\\lvert Z_t^p-Z_t^n\\rvert$, and the panel shows that the growth formula for DLN derived in Equation~\\ref{eq:DLNGROWTH} captures it well. The panel also shows that dlog, which is only defined for positive values and hence has limited usability for DLN RVs, does poorly even on the positive subsample, reaching a correlation of only approximately $0.75$ with DLN growth even when restricted to positive, non-tiny values. Panels (e) and (f) of Figure~\\ref{fig:MC2} show that dDLN is indeed an appropriate measure, while d\\% is an unbiased but noisy measure of DLN growth.\n\n\n\n\\comments{\n\\subsection{DLN as an approximating distribution}\n\nA third Monte-Carlo experiment tests how well the DLN distribution approximates several ``compound'' distributions arising in practice. The test is also useful in providing evidence that our tests have power to reject ``non-DLN'' distributions. 
The distributions I concentrate on are: (i) sum of two DLN RVs; (ii) multiplication of DLN by log-Normal RV; (iii) Division of DLN by log-Normal RV; (iv) multiplication of DLN by Normal RV; and (v) multiplication of Normal by log-Normal RV. The two distributions being compounded are independent of each other.\n\nFor all DLN RVs, I use the parameter region $\\pmb{Q}$ from Equation~\\ref{eq:MC_Region_1}. For Normal and log-Normal RVs, I use the following parameter regions:\n\\begin{equation} \\label{eq:MC_Region_3}\n\\begin{split}\n\\pmb{Q}_{\\widehat{N}}: \\ \\ & \\left(\\mu_N,\\sigma_N\\right) \\in \\left(\\left[-100,100\\right],\\left[10,100\\right]\\right) \\\\\n\\pmb{Q}_{\\widehat{LN}}: \\ \\ & \\left(\\mu_{LN},\\sigma_{LN}\\right) \\in \\left(\\left[-3,3\\right],\\left[0.5,2.5\\right]\\right) \\\\\n\\end{split}\n\\end{equation}\n\nThe data collection\/creation for the Monte-Carlo analysis in this section proceeds as follows: \\\\\nFor each $i \\in \\{1...N\\}$:\n\\begin{enumerate}\n \\item Draw a first parameter vector $\\pmb{\\Theta}^1_i$, from the appropriate parameter range, with Uniform probability.\n \\item Draw a second parameter vector $\\pmb{\\Theta}^2_i$, from the appropriate parameter range, with Uniform probability.\n \\item Draw $K$ observations $X_{i,k}$ from the first distribution with parameter vector $\\pmb{\\Theta}^1_i$.\n \\item Draw $K$ observations $Y_{i,k}$ from the second distribution with parameter vector $\\pmb{\\Theta}^2_i$.\n \\item Calculate the compound RV values $W_{i,k}$ using $X_{i,k}$ and $Y_{i,k}$.\n \\item Estimate the DLN parameters of $W_{i,k}$, denoted $\\pmb{\\hat{\\Theta}}_i$, using the method of Section~\\ref{sec:Estim}.\n \\item Calculate the K-S, C-2, and A-D test statistics based on $\\pmb{\\hat{\\Theta}}_i$ and $W_{i,k}$.\n \\item Calculate the p-values of the test statistics using the ICDF approximations from Section~\\ref{sec:TestStats}.\n\\end{enumerate}\n\nTable~\\ref{tab:MC3} presents the results of the 
analysis.\\rp{Need to refresh the table after the fixes to the dlnfit code.} For each compound distribution, Panel (a) presents the percent of Monte-Carlo repetitions rejected as DLN by each of the three distributional tests, at the 1\\%, 5\\%, and 10\\% confidence levels. Panels (b) and (c) present the (5,10,50)$^{th}$ percentiles of the test statistics and their accompanying p-values, respectively, across the Monte-Carlo experiments.\n\n\\RPprep{Approximation Monte Carlo Experiments}{0}{0}{MC3}{%\n This table presents results of approximation Monte-Carlo experiments with $N=25,000$ repetitions and $K=100,000$ observations drawn in each repetition. For each compound distribution tested, Panel (a) reports the share of observations rejected as being DLN at the 1\\%, 5\\%, and 10\\% confidence levels using each of the three distributional tests K-S, C-2, and A-D. Panels (b) and (c) present the (5,10,50)$^{th}$ percentiles of the test statistics and their accompanying p-values, respectively, across all Monte-Carlo runs for each compound distribution.\n}\n\\RPtab{%\n \\begin{tabularx}{\\linewidth}{Frrrrrrrrr}\n \\toprule\n & \\multicolumn{3}{c}{K-S} & \\multicolumn{3}{c}{C-2} & \\multicolumn{3}{c}{A-D} \\\\\n\t\\textit{Panel (a): rejected} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} \\\\\n \\midrule\n DLN + DLN & 0.007 & 0.076 & 0.337 & 0.007 & 0.104 & 0.334 & 0.009 & 0.082 & 0.355 \\\\\n DLN * LN & 0.101 & 0.377 & 0.539 & 0.111 & 0.408 & 0.538 & 0.084 & 0.378 & 0.549 \\\\\n DLN \/ LN & 0.129 & 0.375 & 0.521 & 0.148 & 0.385 & 0.522 & 0.117 & 0.370 & 0.515 \\\\\n DLN * N & 0.001 & 0.148 & 0.690 & 0.001 & 0.385 & 0.753 & 0.001 & 0.178 & 0.668 \\\\\n LN * N & 0.000 & 0.070 & 0.324 & 0.000 & 0.080 & 0.270 & 0.000 & 0.086 & 0.353 \\\\ \\\\\n \n\t\\textit{Panel (b): 
stats} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} \\\\\n \\midrule\n DLN + DLN & 0.002 & 0.002 & 0.004 & 3.910 & 4.279 & 9.975 & 0.011 & 0.025 & 0.209 \\\\\n DLN * LN & 0.001 & 0.002 & 0.008 & 3.349 & 3.855 & 32.59 & 0.006 & 0.009 & 1.068 \\\\\n DLN \/ LN & 0.001 & 0.001 & 0.007 & 3.441 & 3.864 & 22.28 & 0.006 & 0.009 & 0.707 \\\\\n DLN * N & 0.002 & 0.002 & 0.009 & 4.170 & 4.657 & 56.31 & 0.011 & 0.018 & 1.358 \\\\\n LN * N & 0.001 & 0.001 & 0.002 & 3.233 & 3.660 & 5.321 & 0.008 & 0.010 & 0.042 \\\\ \\\\\n \n\t\\textit{Panel (c): p-vals} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} \\\\\n \\midrule\n DLN + DLN & 0.042 & 0.055 & 0.136 & 0.035 & 0.047 & 0.138 & 0.040 & 0.053 & 0.132 \\\\\n DLN * LN & 0.000 & 0.010 & 0.082 & 0.000 & 0.007 & 0.073 & 0.001 & 0.013 & 0.080 \\\\\n DLN \/ LN & 0.000 & 0.002 & 0.092 & 0.000 & 0.002 & 0.087 & 0.000 & 0.007 & 0.091 \\\\\n DLN * N & 0.041 & 0.046 & 0.072 & 0.030 & 0.034 & 0.057 & 0.039 & 0.046 & 0.074 \\\\\n LN * N & 0.044 & 0.058 & 0.238 & 0.047 & 0.056 & 0.315 & 0.040 & 0.055 & 0.247 \\\\\n\t\\bottomrule\n \\end{tabularx}\n}\n\nThe results in Table~\\ref{tab:MC3} first establish that the distributional tests used have sufficient power to reject distributions that are non-DLN. In Panel (a), for each of the three test methods, about 30\\%-70\\% of Monte-Carlo repetitions are rejected as being DLN at the 10\\% confidence level. 
Even at the 5\\% level, around 40\\% of DLN*LN and DLN\/LN repetitions are rejected.\n\nSecond, the results in Table~\\ref{tab:MC3} indicate that DLN performs well as an approximating distribution for the sum of two DLN RVs (DLN+DLN), the multiplication of log-Normal by Normal (LN*N), and, to a lesser extent, the multiplication of DLN by Normal (DLN*N). E.g., for these three compound distributions, the 5$^{th}$ percentile of p-values using all three tests is around $0.03$-$0.05$. DLN, however, performs poorly as an approximating distribution for the other two compound distributions, DLN*LN and DLN\/LN. For comparison, both of these compound distributions have a 5$^{th}$ percentile of p-values around $0.000$-$0.001$.\n}\n\n\n\n\\section{Summary}\n\nThis paper presents the Difference-of-Log-Normals (DLN) distribution, stemming from the multiplicative CLT, and lays a methodological and quantitative foundation for the analysis of DLN-distributed phenomena. It begins by characterizing the distribution, defining its PDF and CDF, presenting estimators for its moments and parameters, and generalizing it to elliptical multi-variate RVs.\n\nIt goes on to discuss mathematical methods useful in the analysis of DLN distributions. First, it shows the intimate intuitive relation between the DLN distribution and the Hyperbolic Sine, and why the Inverse Hyperbolic Sine (asinh) is a useful transform when dealing with ``double exponential'' RVs such as the DLN.\n\nNext, it considers the concept of growth for DLN RVs. It extends the classical definition of growth, which applies only to positive RVs, to RVs $\\in \\mathbb{R}$. It then shows that the appropriate measure of growth is dependent on the distribution of the data being measured. 
It makes the case that growth in Normal, log-Normal, and DLN RVs should be measured using different measures of growth and develops the appropriate measure of growth for DLN RVs.\n\nThe paper reports the results of extensive Monte-Carlo experiments, aimed at establishing the properties of the estimators and measures presented. It shows that the moment estimators have good accuracy, but highlights their small-sample bias, especially for the case of kurtosis. A small-sample bias-correction method for the kurtosis estimator is merited. It also shows that the parameter estimators proposed are reasonably accurate and unbiased. To enable accurate tests of whether some data are DLN, it establishes critical values and p-value estimators for three distributional tests: Kolmogorov-Smirnov, Chi-square, and Anderson-Darling.\n\nA second Monte-Carlo experiment verifies that the generalized growth measures discussed indeed back out the appropriate growth concept for Normal, log-Normal, and DLN distributions. It especially highlights the ``convexity\/concavity bias'' arising when applying the wrong measure of growth to an RV. Of importance here is the evidence that measuring the growth of log-Normal RVs using percentage growth leads to a significant convexity bias. 
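The convexity bias for log-Normal growth can be illustrated with a short simulation. This is a minimal sketch and not the paper's Monte-Carlo design; the function name and the parameter values ($\mu=0$, $\sigma=0.5$) are illustrative assumptions. With log-Normal shocks $\epsilon_t$, the mean of dlog recovers $\mu$, while the mean of d\% is inflated by Jensen's inequality to $e^{\mu+\sigma^2/2}-1$:

```python
import math
import random

def simulate_growth_bias(mu=0.0, sigma=0.5, n=200_000, seed=7):
    """Draw log-Normal growth shocks eps with log(eps) ~ N(mu, sigma^2),
    then compare the two growth measures:
      dlog = log(X_{t+1}) - log(X_t) = log(eps)
      d%   = (X_{t+1} - X_t) / X_t   = eps - 1
    """
    rng = random.Random(seed)
    dlog = [rng.gauss(mu, sigma) for _ in range(n)]
    dpct = [math.exp(g) - 1.0 for g in dlog]
    mean_dlog = sum(dlog) / n                   # centered on mu
    mean_dpct = sum(dpct) / n                   # pushed up by convexity
    jensen = math.exp(mu + sigma ** 2 / 2) - 1  # E[eps] - 1
    return mean_dlog, mean_dpct, jensen
```

For $\mu=0$ the median shock is $1$ (zero growth), yet the average percent growth sits near $e^{\sigma^2/2}-1\approx 13\%$, which is the convexity bias in question.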
\\comments{A third Monte-Carlo experiment presents evidence that DLN is also a useful approximating distribution, able to approximate several compound distributions.}\n\n\n\n\n\n\n\n\n\n\n\\comments{\n\n\\subsection{Alternative parametrization}\n\\label{sec:Alt}\n\nConsider the following bijection:\n\\begin{equation} \\label{eq:REPARAM}\n\\begin{bmatrix}\n\\alpha \\\\ \\beta \\\\ \\gamma \\\\ \\delta \\\\ \\epsilon\n\\end{bmatrix}\n= \\text{asinh}\\left(\n\\begin{bmatrix}\n\\text{exp}\\left(\\mu_p+\\frac{\\sigma_p^2}{2}\\right) - \\text{exp}\\left(\\mu_n+\\frac{\\sigma_n^2}{2}\\right) \\\\\n\\text{exp}\\left(\\mu_p+\\frac{\\sigma_p^2}{2}\\right) + \\text{exp}\\left(\\mu_n+\\frac{\\sigma_n^2}{2}\\right) \\\\\n\\left(\\text{exp}\\left(\\sigma_p^2\\right)-1\\right) - \\left(\\text{exp}\\left(\\sigma_n^2\\right)-1\\right) \\\\\n\\left(\\text{exp}\\left(\\sigma_p^2\\right)-1\\right) + \\left(\\text{exp}\\left(\\sigma_n^2\\right)-1\\right) \\\\\n\\text{exp}\\left(\\sigma_p\\cdot\\sigma_n\\cdot\\rho_{pn}\\right)-1\n\\end{bmatrix}\\right)\n\\end{equation}\nwhich maps the parameter vector $\\pmb{\\Theta}_1 = (\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn})$ to the parameter vector $\\pmb{\\Theta}_2 = (\\alpha,\\beta,\\gamma,\\delta,\\epsilon)$. This parametrization stems from Equations~\\ref{eq:MUDLN} and~\\ref{eq:SIGDLN}, in which the various terms appear. It further concentrates on the sums and differences of the terms, and applies an asinh transform to the parameter space. The transformed parameters in $\\pmb{\\Theta}_2$ correlate with the (asinh of the) first four moments of the DLN distribution described by the vector $\\pmb{\\Theta}_1$, as shown in the Monte-Carlo experiments in Section~\\ref{sec:MC}. This alternative parametrization is useful for implementing method-of-moments estimators for the parameters of the DLN.\n\nPanel (c) of the same table presents an analysis of the alternative parametrization described in Section~\\ref{sec:Alt}. 
The correlation between the alternative parameters and the predicted and actual moments is high for the first four parameters and respective moments, but is practically zero for the fifth parameter and moment. This indicates that the fifth parameter in the alternative parametrization does not capture the associated moment. There is again significant bias in the even parameters relative to their corresponding moments.\n\nPanel (c) --- Alternative parameters & $\\alpha$ & $\\beta$ & $\\gamma$ & $\\delta$ & $\\epsilon$ \\\\\n\\midrule\n$\\widehat{M}_i$ Correlation & 1.0000 & 0.9280 & 0.8568 & 0.9682 & 0.0183 \\\\\n$\\widehat{M}_i$ Bias & 0.0000 & -5.8456 & -0.0574 & -11.486 & 4.5365 \\\\\n$\\widehat{M}_i$ Accuracy & 0.0000 & 4.2309 & 4.7317 & 7.6611 & 60.5636 \\\\\n$M_i$ Correlation & 0.9994 & 0.9359 & 0.7638 & 0.7692 & 0.0081 \\\\\n$M_i$ Bias & 0.0001 & -5.4896 & 0.0160 & -3.7869 & 4.3550 \\\\\n$M_i$ Accuracy & 0.0275 & 4.0366 & 2.1522 & 1.6222 & 26.3605 \\\\ \\\\\n\nPanel (c) compares the alternative parametrization of Section~\\ref{sec:Alt} $\\pmb{\\widetilde{\\Theta}}$ with the first five moment estimators and actual moments $\\widehat{M}_i$ and $M_i$.\n\nOur last distribution of interest is the firm income growth distribution (FIGD). Dealing with growth aspects of income presents a methodological issue, however, as our measures of growth are ill-equipped to describe growth in sometimes-negative values. I hence begin by extending the growth measures to deal with values in $(-\\infty,\\infty)$ rather than $(0,\\infty)$. To fix ideas, consider the following seven scenarios: a firm earns (i) \\$100 in period $t$ and \\$200 in period $t+1$, (ii) \\$100 in $t$ and \\$1 in $t+1$, (iii) \\$100 in $t$ and -\\$100 in $t+1$, (iv) -\\$100 in $t$ and \\$100 in $t+1$, (v) -\u00a210,000 in $t$ and \u00a210,000 in $t+1$, (vi) \\$0 in $t$ and \\$100 in $t+1$, (vii) \\$0 in $t$ and -\\$100 in $t+1$.\n\nWhat was the growth in firm income in each scenario? 
The two standard ways in which to measure growth are percent change $d\\%(X_{t+1}) = (X_{t+1} - X_{t})\/X_{t}$, and log-point change $\\text{dlog}(X_{t+1}) = \\log(X_{t+1}) - \\log(X_{t})$. Using either method, the first two scenarios are well-defined. In scenario (i), income growth was (200-100)\/100 = 1 = 100\\%, or it was log(200)-log(100) = 0.693 = 69.3 log-points (lp). In scenario (ii), it was (1-100)\/100 = -99\\% or log(1)-log(100) = -461 lp. Note that log-point growth quickly tends to $-\\infty$ as we decrease firm income during the second period in scenario (ii) from $1$ to $0$.\n\nIn scenario (iii), one could say that percent growth was ((-100)-100)\/100 = -2 = -200\\%, extending the definition of percent changes. But this extension leads percent growth in scenario (iv) to be -200\\% as well, even though firm income grew. A more intuitive extension of the percent concept is to use\n\\begin{equation} \\label{eq:gprecent}\n\\widetilde{d\\%}(X_{t+1}) = (X_{t+1} - X_{t})\/\\lvert X_{t}\\rvert\n\\end{equation}\nGrowth rates then become (i) 100\\%, (ii) -99\\%, (iii) -200\\%, (iv) 200\\%, (v) 200\\%, (vi) $+\\infty$, and (vii) $-\\infty$. This extension fixes the direction (i.e., sign) of percent growth. As can be seen in scenario (v), it is also scale-invariant (i.e., not impacted by the unit of measurement). Yet growth rates from values close to zero remain explosive.\n\nCan we likewise extend the log-point growth concept? Using the inverse hyperbolic sine (asinh) again, we can define the growth in $X$ to be \n\\begin{equation} \\label{eq:dasinh}\n\\text{dasinh}(X_{t+1}) = \\text{asinh}(X_{t+1}) - \\text{asinh}(X_{t})\n\\end{equation}\nGrowth rates in the seven scenarios are then (i) 69.3 asinh points (ap), (ii) -442 ap, (iii) -1060 ap, (iv) +1060 ap, (v) +1980 ap, (vi) +530 ap, (vii) -530 ap. 
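As a check on the arithmetic in these scenarios, the two extended measures in Equations~\ref{eq:gprecent} and~\ref{eq:dasinh} can be evaluated directly. This is an illustrative sketch; the function names are mine, not the paper's:

```python
import math

def d_pct_ext(x_t, x_t1):
    """Extended percent growth of Eq. (eq:gprecent): (X_{t+1} - X_t) / |X_t|.
    Returns signed infinity for growth from a zero base."""
    if x_t == 0:
        return math.copysign(math.inf, x_t1)
    return (x_t1 - x_t) / abs(x_t)

def dasinh(x_t, x_t1):
    """asinh-point growth of Eq. (eq:dasinh): asinh(X_{t+1}) - asinh(X_t)."""
    return math.asinh(x_t1) - math.asinh(x_t)

# The seven scenarios from the text; scenario (v) is stated in cents.
scenarios = [(100, 200), (100, 1), (100, -100), (-100, 100),
             (-10_000, 10_000), (0, 100), (0, -100)]
for x_t, x_t1 in scenarios:
    print(f"{x_t:>7} -> {x_t1:>7}: "
          f"ext-% = {d_pct_ext(x_t, x_t1):+.2f}, "
          f"dasinh = {100 * dasinh(x_t, x_t1):+.1f} ap")
```

Reporting dasinh multiplied by 100 matches the "asinh points" convention used above, in analogy with log-points.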
This measure has several desirable properties: (a) growth to and from zero is well-defined and non-explosive, (b) growth from -X to X is double the growth from 0 to X, (c) growth from -X to X is the opposite of growth from X to -X, (d) growth between two positive values quickly approaches the proper log-point growth (due to the quickly decreasing approximation error of asinh discussed above). One downside of the measure is that it is scale-dependent when the two values being compared have opposite signs, as can be seen in scenario (v).\n\n}\n\n\\clearpage\n\\bibliographystyle{JFE}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzziueo b/data_all_eng_slimpj/shuffled/split2/finalzziueo new file mode 100644 index 0000000000000000000000000000000000000000..7b2aa625952a0f61ebe2048b96661c8cea317a1f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzziueo @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe determination of the exact string theory low energy effective action is a very difficult problem in general. In the case of type II string theory on $\\mathds{R}^{1,10-d} \\times T^{d-1}$, the lowest order non-perturbative corrections could nonetheless have been computed \\cite{Green:1997tv,Green:1997as,Kiritsis:1997em}. Although there is no non-perturbative formulation of the theory, the constraints following from supersymmetry and $U$-duality have permitted to determine the non-perturbative low energy effective action from perturbative computations in string theory \\cite{D'Hoker:2005jc,Green:2008uj,Gomez:2013sla,Green:2014yxa,D'Hoker:2014gfa} and in eleven-dimensional supergravity \\cite{Green:1997as,Green:1999pu,Green:2005ba,Green:2008bf,Basu:2014uba}. 
The four-graviton amplitude in particular allows one to determine the $\\nabla^{2k}R^4$ type correction in the effective action,\n\\begin{equation} {\\cal L}\\sim \\frac{1}{\\kappa^2} R +\\sum_{p,q} \\kappa^{2 \\frac{d-3+4p+6q}{9-d}} E_{(p,q)} \\nabla^{4p+6q} R^4 +\\dots \\end{equation} \nwhere the dots stand for other terms including the supersymmetric completion, $(p,q)$ labels the different invariant combinations of derivatives compatible with supersymmetry according to the notations used in \\cite{Green:2010wi}, and $E_{(p,q)}$ are automorphic functions of the scalar fields defined on $E_{d(d)}(\\mathds{Z}) \\backslash E_{d(d)}\/ K_d$. For $(p,q) = (0,0)$, $(1,0)$ and $(0,1)$, the complete effective action at this order is determined by these functions ${\\cal E}_\\gra{p}{q}$, which have been extensively studied \\cite{Review,Obers:1999um,Basu:2007ck,Green:2010kv,Green:2011vz,Fleig:2013psa,Minimal,D4R4,Gustafsson:2014iva,Pioline:2015yea}. \n\n$E_{(0,0)}$ is an Eisenstein series associated to the minimal unitary representation \\cite{Obers:1999um,Green:2010kv}, $E_{(1,0)}$ is an (or a sum of two) Eisenstein series associated to the next-to-minimal unitary representation(s) \\cite{Green:2010kv}, and both are therefore relatively well understood. They are nonetheless very complicated functions, and the explicit expansion of $E_{(1,0)}$ in Fourier modes is not yet determined \\cite{Green:2011vz,Fleig:2013psa,Gustafsson:2014iva}. $E_{(0,1)}$ is not even an Eisenstein series, and was shown in \\cite{Green:2005ba} to satisfy an inhomogeneous Poisson equation in type IIB. A proposal for this function in eight dimensions \\cite{Basu:2007ck} suggested a split of the function into the sum of an Eisenstein series and an inhomogeneous solution, which was subsequently generalised in seven and six dimensions \\cite{Green:2010wi,Green:2010kv}, and recently clarified in \\cite{Pioline:2015yea}. 
\n\nIn this paper we extend the analysis carried out in \\cite{Minimal,D4R4} to the study of $E_{(0,1)}$. We show that this function indeed splits into the sum of two functions that are associated to two distinct supersymmetry invariants, and therefore satisfy inequivalent tensorial differential equations. In particular, the second satisfies a homogeneous equation, which is solved by the Eisenstein function appearing in \\cite{Green:2010wi,Basu:2007ck,Pioline:2015yea}. One can distinguish the two functions by looking at specific higher-point couplings that we identify. The new class of invariants generalises to an infinite class admitting a coupling in $F^{2k} \\nabla^4 R^4$, and we identify a unique Eisenstein function solving the corresponding tensorial differential equations in all dimensions greater than four. This function turns out to be compatible with perturbative string theory, and only admits three perturbative contributions in four dimensions, at 1-loop, $(k+2)$-loop, and $2k$-loop. However, the only amplitude that seems to unambiguously distinguish it from others is the $(k+2)$-loop four-graviton amplitude in a non-trivial Ramond--Ramond background, which makes an explicit check extremely challenging. \n\n\nWe start with the analysis of the supersymmetry invariants in four dimensions. The two $\\nabla^6 R^4$ type invariants in the linear approximation are associated to two distinct classes of chiral primary operators of $SU(2,2|8)$ discussed in \\cite{Drummond:2003ex}. We identify the corresponding representations of $E_{7(7)}$ associated to nilpotent coadjoint orbits \\cite{E7Djo} that are summarised in figure \\ref{ClosureDiag}. 
\n\\begin{figure}[htbp]\n\\begin{center}\n \\begin{tikzpicture}\n \\draw (1,0) node{\\textbullet};\n \\draw (1,0 - 1) node{\\textbullet};\n \\draw (1,0 - 2) node{\\textbullet};\n \\draw (1,0 + 2) node{\\textbullet};\n \\draw (1,0 + 4) node{\\textbullet};\n \\draw (1 + 1,0 + 2.5) node{\\textbullet};\n \\draw (1 - 1,0 + 3.5) node{\\textbullet};\n \\draw (1 - 1 ,0 + 1) node{\\textbullet};\n \n \\draw (1 + 0.3,0 - 2) node{$R$};\n \\draw (1 + 0.4,0 - 1) node{$R^4$};\n \\draw (1 + 0.7,0) node{$ \\nabla^4 R^4$};\n \\draw (1 - 1.9,0 + 1) node{$ {F}^{2k} \\nabla^4 R^4$};\n \\draw (1 + 0.7 + 1,0 + 2.5) node{$ \\nabla^6 R^4$};\n \\draw[-,draw=black,very thick](1,0) -- (1,0 + 2 );\n \\draw[-,draw=black,very thick](1 - 1,0 + 1) -- (1 - 1,0 + 3.5);\n \\draw[-,draw=black,very thick](1 - 1,0 + 3.5) -- (1,0 + 2);\n \\draw[-,draw=black,very thick](1,0 + 2) -- (1 + 1,0 + 2.5);\n \\draw[-,draw=black,very thick](1 + 1,0 + 2.5) -- (1,0 + 4);\n \\draw[-,draw=black,very thick](1 - 1,0 + 3.5) -- (1,0 + 4);\n \\draw[dashed,draw=black,very thick](1,0 + 4) -- (1,0 + 4.5);\n \\draw[-,draw=black,very thick](1,0) -- (1 - 1,0 + 1);\n\\draw[-,draw=black,very thick] (1,0 - 1) -- (1,0);\n\\draw[-,draw=black,very thick] (1,0 - 2) -- (1,0 - 1);\n\\draw[<-,draw=black,thick] (1 - 3-1,0 + 5) -- (1 - 3 - 1,0 - 2);\n\\draw (1 - 3 + 0.2 - 1,0 - 2) node{$0$};\n\\draw (1 - 3- 1,0 - 2) node{-};\n\\draw (1 - 3- 1,0 - 1) node{-};\n\\draw (1 - 3- 1,0) node{-};\n\\draw (1 - 3- 1,0 + 2) node{-};\n\\draw (1 - 3- 1,0 + 1) node{-};\n\\draw (1 - 3- 1,0 + 2.5) node{-};\n\\draw (1 - 3- 1 ,0 + 3.5) node{-};\n\\draw (1 - 3 - 1,0 + 4) node{-};\n\\draw (1 - 3 + 0.3 - 1,0 - 1) node{$34$};\n\\draw (1 - 3 + 0.3 - 1,0) node{$52$};\n\\draw (1 - 3 + 0.3 - 1,0 + 1) node{$54$};\n\\draw (1 - 3 + 0.3 - 1,0 + 2) node{$64$};\n\\draw (1 - 3 + 0.3 - 1,0 + 2.5) node{$66$};\n\\draw (1 - 3 + 0.3 - 1,0 + 3.5) node{$70$};\n\\draw (1 - 3 + 0.3 - 1,0 + 4) node{$76$};\n\\draw (1 - 3 + 0.5 - 1,0 + 5) 
node{dim};\n\\end{tikzpicture}\n\\end{center}\n\\caption{\\small Closure diagram of nilpotent orbits of $E_{7(7)}$ of dimension smaller than 76.}\n\\label{ClosureDiag}\n\\end{figure}\nIn the linearised approximation, the $F^2 \\nabla^4 R^4$ type invariant does not carry a $\\nabla^6 R^4$ coupling, but we explain that the structure of the linearised invariant allows for this mixing at the non-linear level, and that the latter must occur because the two classes of invariants merge in one single $E_{8(8)}$ representation in three dimensions. We conclude that the exact threshold function in four dimensions takes the form\n\\begin{equation} E_{\\gra{0}{1}} = \\hat{{\\cal E}}_{\\grad{8}{1}{1}} + \\frac{32}{189\\pi}\\hat{{E}}_{\\mbox{\\DEVII000000{5}}}\\ , \\end{equation}\nwhere $\\hat{{\\cal E}}_{\\grad{8}{1}{1}}$ is the solution to the inhomogeneous differential equation \\eqref{E811Equation} that is consistent with perturbative string theory. The explicit relation between the tensorial differential equations and the associated nilpotent orbits permits us to determine the wavefront set of the associated functions, extending the results of \\cite{Green:2011vz,Fleig:2013psa} to the $\\nabla^6 R^4$ threshold function. It appears, as can be seen in figure \\ref{ClosureDiag}, that the two functions admit distinct wavefront sets. In particular, we show that although $\\hat{{\\cal E}}_{\\grad{8}{1}{1}}$ is not an Eisenstein series, it admits the same wavefront set as $\\hat{{E}}_{\\mbox{\\DEVII{\\mathnormal{6}}00000{\\mathfrak{0}}}}$.\n\n\nWe then consider the uplift of our results in higher dimensions, and show that this general structure extends to all dimensions lower than eight, and is in perfect agreement with the exact threshold functions proposed in \\cite{Green:2010wi,Basu:2007ck,Pioline:2015yea}. 
In each dimension, the supersymmetry invariants transform in irreducible representations of $E_{d(d)}$, defined by the representation of $E_{d(d)}$ on the associated function on $E_{d(d)}\/ K_d$ satisfying to the relevant differential equations implied by supersymmetry. The inequivalent invariants are summarised in figure \\ref{DimensionMultiplets}.\n\\begin{figure}[htbp]\n\\center\n \\begin{tikzpicture}\n\\draw (1 - 1,0 + 5) node{$IIA$}; \\draw (1,0 + 5) node{$IIB$}; \\draw (1 + 2,0 + 5) node{$IIA$}; \\draw (1 + 3,0 + 5) node{$IIB$};\n \\draw (1 + 5,0 + 5) node{$IIA$}; \\draw (1 + 6,0 + 5) node{$IIB$};\n \n\\draw[-,draw=black, thick](1 - 1,0 + 4) -- (1 - 1,0 + 4.7); \\draw[-,draw=black, thick](1,0 + 4) -- (1,0 + 4.7);\n\\draw[-,draw=black, thick](1 + 2,0 + 4) -- (1 + 2,0 + 4.7); \\draw[-,draw=black, thick](1 + 3,0 + 4) -- (1 + 3,0 + 4.7);\n\\draw[-,draw=black, thick](1 + 5,0 + 4) -- (1 + 5,0 + 4.7); \\draw[-,draw=black, thick](1 + 6,0 + 4) -- (1 + 6,0 + 4.7);\n\\draw[dashed,draw=black, thick](1 + 7,0 + 4) -- (1 + 7,0 + 4.5);\n\\draw[-,draw=black, thick](1,0 + 4) -- (1 - 0.9,0 + 4.7);\n\\draw[-,draw=black, thick](1 + 3,0 + 4) -- (1 + 2.1,0 + 4.7);\n\n\\draw[dashed,draw=black, thick](1 + 2.5,0 - 1) -- (1 + 2.5,0 - 1.3);\n\\draw[dashed,draw=black, thick](1 - 0.5,0 - 1) -- (1 - 0.5,0 - 1.3);\n\\draw[dashed,draw=black, thick](1 + 3 + - 1 + 7.5\/2,0 - 1) -- (1 + - 1 + 3 + 7.5\/2 ,0 - 1.3);\n\n\\draw[<-,draw=black,thick] (1 - 3,0 + 6) -- (1 - 3,0 - 2); \n\\draw (1 - 3,0 + 5) node{-};\n\\draw (1 - 3,0 + 4) node{-}; \\draw (1 + - 1,0 + 4) node{\\color{rouge} \\textbullet};\\draw (1 + 1 + - 1,0 + 4) node{ \\textbullet}; \\draw (1 + - 1 + 3,0 + 4) node{\\textbullet};\\draw (1 + - 1 + 4,0 + 4) node{$\\circ$};\n\\draw (1 - 3,0 + 3) node{-}; \\draw (1 + - 1 + 0.5,0 + 3) node{\\textbullet}; \\draw (1 + - 1 + 3,0 + 3) node{\\textbullet};\\draw (1 + 4 + - 1 ,0 + 3) node{$\\circ$};\n\\draw (1 - 3,0 + 2) node{-}; \\draw (1 + - 1 + 0.5,0 + 2) node{\\textbullet}; \\draw (1 + - 1 + 3,0 
+ 2) node{\\textbullet};\\draw (1 + 4 + - 1,0 + 2) node{\\color{rouge} \\textbullet};\n\\draw (1 - 3,0 + 1) node{-}; \\draw (1 + - 1 + 0.5,0 + 1) node{\\textbullet}; \\draw (1 + - 1 + 3.5,0 + 1) node{\\textbullet};\n\\draw (1 - 3,0) node{-};\t\t\t \\draw (1 + - 1 + 0.5,0) node{\\textbullet}; \\draw (1 + - 1 + 3.5,0) node{\\textbullet};\n\\draw (1 - 3,0 - 1) node{-}; \\draw (1 + - 1 + 0.5 ,0 - 1) node{\\textbullet}; \\draw (1 + - 1 + 3.5,0 - 1) node{\\textbullet};\n\\draw (1 - 3 + 0.3,0 + 5) node{$10$};\n\\draw (1 - 3 + 0.3,0 + 4) node{$8$};\n\\draw (1 - 3 + 0.3,0 + 3) node{$7$};\n\\draw (1 - 3 + 0.3,0 + 2) node{$6$};\n\\draw (1 - 3 + 0.3,0 + 1) node{$5$};\n\\draw (1 - 3 + 0.3,0) node{$4$};\n\\draw (1 - 3 + 0.3,0 - 1) node{$3$};\n\\draw (1 - 3 + 0.5,0 + 6) node{dim};\n\n\\draw (1 + - 1 + 6\/2 + 7.5\/2 ,0 - 1) node{\\textbullet}; \n\\draw (1 + - 1 + 6 ,0 ) node{\\color{rouge} \\textbullet}; \n\\draw (1 + - 1 + 7.5 ,0 ) node{ \\textbullet}; \n\\draw (1 + - 1 + 6 ,0 +1) node{$\\circ$}; \n\\draw (1 + - 1 + 7.5 ,0 +1) node{\\textbullet}; \n\\draw (1 + - 1 + 6 ,0 +2) node{$\\circ$}; \n\\draw (1 + - 1 + 7.5 ,0 +2) node{\\color{rouge} \\textbullet}; \n\\draw (1 + - 1 + 6 ,0 +3) node{$\\circ$}; \n\\draw (1 + - 1 + 7.5 ,0 +3) node{$\\circ$}; \n\\draw (1 + - 1 + 6 ,0 +4) node{$\\circ$}; \n\\draw (1 + - 1 + 7 ,0 +4) node{$\\circ$};\n\\draw (1 + - 1 + 8 ,0 +4) node{$\\circ$}; \n\n\\draw (1 + 0.5 + - 1,0 - 1.6) node{$R^4$};\n\\draw (1 + 3.5 + - 1,0 - 1.6) node{$\\nabla^4 R^4$};\n\\draw (1 + 6\/2 + 7.5\/2 + - 1,0 - 1.6) node{$\\nabla^6 R^4$};\n\n\\draw[-,draw=black, thick](1 + 8 + - 1,0 + 4) -- (1 + 0.5 + 7 + - 1,0 + 3);\n\\draw[-,draw=black, thick](1 + 0.5 + 6.5 + - 1,0 + 4) -- (1 + 0.5 + 7 - 1,0 + 3);\n\\draw[-,draw=black, thick](1 + 0.5 + - 1,0 -1) -- (1 + 0.5 + - 1,0 + 3);\n\\draw[-,draw=black, thick](1 + 3.5 + - 1 ,0 -1) -- (1 + 3.5 + - 1,0 + 1);\n\\draw[-,draw=black, thick](1 + 3 + 7.5\/2 + - 1 ,0 -1) -- (1 + 6 + - 1,0);\n\\draw[-,draw=black, thick](1 + 3 + 7.5\/2 + - 1 ,0 
-1) -- (1 + 7.5 + - 1,0);\n\\draw[-,draw=black, thick](1 + 6 + - 1,0) -- (1 + 6 + - 1,0 + 4);\n\\draw[-,draw=black, thick](1 + 7.5 + - 1,0) -- (1 + 7.5 + - 1,0 + 3);\n\n\\draw[-,draw=black, thick](1 + 3.5 + - 1,0 + 1) -- (1 + - 1 + 4,0 + 2); \\draw[-,draw=black, thick](1 + - 1 + 3.5,0 + 1) -- (1 + - 1 + 3,0 + 2);\n\\draw[-,draw=black, thick](1 + - 1 + 0.5,0 + 3) -- (1 + - 1 ,0 + 4); \\draw[-,draw=black, thick](1 + - 1 + 0.5,0 + 3) -- (1 + - 1 + 1,0 + 4);\n\n\\draw[-,draw=black, thick](1 + - 1 + 4,0 + 4) -- (1 + - 1 + 4,0 + 2); \\draw[-,draw=black, thick](1 + - 1 + 3,0 + 4) -- (1 + - 1 + 3,0 + 2);\n\n\n\\end{tikzpicture}\n\n\n\\caption{\\label{DimensionMultiplets}\\small Each node corresponds to an inequivalent supersymmetry invariant, white if it cannot be written in harmonic superspace in the linearised approximation, and red if the corresponding harmonic superspace is chiral. For $\\nabla^6 R^4$, the links to 10 dimensions are valid for the homogeneous solution, while all the eight-dimensional invariants uplift to type IIA for the inhomogeneous solution.}\n\\end{figure}\nThe tensorial differential equations satisfied by Eisenstein functions relevant to our analysis are reviewed in the appendices. \n\n\\section{$\\mathcal{N}=8$ supergravity in four dimensions}\nMaximal supergravity includes 70 scalar fields parametrising the symmetric space $E_{7(7)} \/ SU_{\\scriptscriptstyle \\rm c}(8)$ \\cite{Cremmer:1979up}, and can be defined in superspace by promoting these fields to superfields $\\phi^\\upmu$ \\cite{Brink:1979nt,Howe:1981gz}. One defines the Maurer--Cartan form \n\\begin{equation}}\\def\\ee{\\end{equation} d {\\cal V} \\, {\\cal V}^{-1} = \\left( \\begin{array}{cc}\\ 2 \\delta_{[i}^{[k} \\omega^{l]}{}_{j]} \\ &\\ P_{ijkl} \\ \\\\ \\ P^{ijkl}\\ & -2 \\delta^{[i}_{[k} \\omega^{j]}{}_{l]} \\ \\end{array}\\right) \\ , \\ee\nwith \n\\begin{equation}}\\def\\ee{\\end{equation} P^{ijkl} = \\frac{1}{24} \\varepsilon^{ijklpqrs} P_{pqrs} \\ . 
\\label{ComplexSelfual}\\ee\nThe metric on $E_{7(7)} \/ SU_{\\scriptscriptstyle \\rm c}(8)$ is defined as\n\\begin{equation} G_{\\upmu\\upnu}(\\phi) d\\phi^\\upmu d\\phi^\\upnu = \\frac{1}{3} P_{ijkl} P^{ijkl} \\ , \\end{equation}\nand the derivative in tangent frame is defined such that for any function ${\\cal E}$\n\\begin{equation} d {\\cal E} = 3 P^{ijkl} {\\cal D}_{ijkl} {\\cal E} \\ . \\end{equation}\nThe superfields satisfy\n\\begin{equation} D_\\alpha^i {\\cal E} = \\frac{1}{4} \\varepsilon^{ijklpqrs} \\chi_{\\alpha jkl} \\, {\\cal D}_{pqrs} {\\cal E} \\ , \\qquad \\bar D_{\\adt i} {\\cal E} = 6 \\bar \\chi_{\\adt}^{jkl} \\, {\\cal D}_{ijkl} {\\cal E} \\ , \\end{equation}\nwhere $\\chi_{\\alpha ijk}$ is the Dirac superfield in Weyl components, and $\\bar \\chi_\\adt^{ijk}$ its complex conjugate. The expansion of the scalar fields includes the 28 Maxwell field strengths $F_{\\alpha\\beta ij}$, the 8 Rarita--Schwinger field strengths $\\rho_{\\alpha\\beta\\gamma i}$ and the Weyl tensor $C_{\\alpha\\beta\\gamma\\delta}$, satisfying the classical (two-derivative) field equations of $\\mathcal{N}=8$ supergravity. The supervielbeins are the solutions to the Bianchi identities defined such that the Riemann tensor is valued in $\\mathfrak{sl}(2,\\mathds{C}) \\oplus \\mathfrak{su}(8)$ and the $\\mathfrak{su}(8)$ component is identified with the scalar field curvature \\cite{Brink:1979nt,Howe:1981gz},\n\\begin{equation}}\\def\\ee{\\end{equation} R^i{}_j = \\frac{1}{3} P_{jklp} \\wedge P^{iklp} \\ . 
\\ee\nThe covariant derivative on $E_{7(7)} \/ SU_{\\scriptscriptstyle \\rm c}(8)$ in tangent frame satisfies \n\\begin{equation} [ {\\cal D}^{ijkl} , {\\cal D}_{pqrs} ] {\\cal D}_{tuvw}= - 24 \\delta^{ijkl}_{qrs][t} {\\cal D}_{uvw][p} + 3 \\delta^{ijkl}_{pqrs} {\\cal D}_{tuvw} \\ , \\label{Comut} \\end{equation}\nand the Laplace operator is\n\\begin{equation} \\Delta = \\frac{1}{3} {\\cal D}^{ijkl} {\\cal D}_{ijkl} \\ . \\end{equation}\nIn the linearised approximation, the scalar superfield $W_{ijkl}$ satisfies the reality constraint \\eqref{ComplexSelfual} and\n\\begin{equation} D_\\alpha^p W_{ijkl} = 2 \\delta^p_{[i} \\chi_{\\alpha jkl]} \\ , \\qquad \\bar D_{\\adt p } W_{ijkl} = \\frac{1}{12} \\varepsilon_{ijklpqrs} \\bar \\chi_\\adt^{qrs} \\ . \\end{equation} \nIn this approximation the superfield $W^{ijkl}$ transforms in the minimal unitary representation of the superconformal group $SU(2,2|8)$ \\cite{Gunaydin:1984vz}. This property permits a complete classification of supersymmetry invariants in the linearised approximation in terms of irreducible representations of $SU(2,2|8)$ with a Lorentz-invariant top component \\cite{Drummond:2003ex,Drummond:2010fp}. In our analysis, we rely on the assumption of the absence of a supersymmetry anomaly, such that there is no algebraic obstruction to the extension of a linearised invariant to a full non-linear invariant. 
This implies a bijective correspondence between the set of linearised invariants and the non-linear invariants, such that one can deduce the explicit gradient expansion of the functions (or tensor functions) of the scalar fields on $E_{7(7)} \/ SU_{\\scriptscriptstyle \\rm c}(8)$ that determine the invariants.\n\\subsection{The standard $\\nabla^6 R^4$ type invariant}\\label{811D6R4}\nOne can define a $\\nabla^6 R^4$ type invariant in harmonic superspace, using the harmonic variables $u^1{}_i,\\, u^r{}_i,\\, u^8{}_i$ parametrising $SU(8)\/ S(U(1) \\times U(6)\\times U(1))$, where $r=2,\\dots,7$ is an index of $SU(6)$ \\cite{Drummond:2003ex,Drummond:2010fp,Hartwell:1994rp}. In this case the harmonic superspace integral can be defined at the non-linear level \\cite{Bossard:2011tq}, but we will only consider its linearised approximation. The superfield in the ${\\bf 20}$ of $SU(6)$\n\\begin{equation} W_{rst} = u^i{}_8 u^j{}_r u^k{}_s u^l{}_t W_{ijkl} \\ , \\end{equation}\nsatisfies the G-analyticity constraints \n\\begin{equation} u^1{}_i D_\\alpha^i W_{rst} = 0 \\ , \\qquad u^i{}_8 \\bar D_{\\adt i} W_{rst} = 0 \\ . \\end{equation}\nOne can therefore integrate any function of $W_{rst}$ on the associated analytic superspace. To understand the most general integrand, we must decompose monomials of $W_{rst}$ into irreducible representations of $SU(6)$. At quadratic order we have the representation $[0,0,2,0,0]$ and the combination \n\\begin{equation} W^{rtu} W_{stu} = \\frac{1}{6} \\varepsilon^{rtuvwx} W_{stu} W_{vwx} \\end{equation}\nin the $[1,0,0,0,1]$. Because one obtains the $[0,0,2,0,0]$ by simply adding the Dynkin labels of $W_{rst}$, we will say that this representation is freely generated, whereas we shall consider the $[1,0,0,0,1]$ as a new generator at order two. 
At cubic order, we have the two elements freely generated by the ones already discussed, {\\it i.e.}\\ $[0,0,3,0,0]$ and $[1,0,1,0,1]$, and the additional combination \n\\begin{equation}}\\def\\ee{\\end{equation} W_{u[rs} W_{t]vw} W^{uvw} \\ , \\ee\nin the $[0,0,1,0,0]$. At quartic order we have the four elements freely generated by the ones already discussed, and the two additional elements \n\\begin{equation}}\\def\\ee{\\end{equation} W_{vw[r} W^{vw[t} W_{s]xy} W^{u]xy} \\ , \\quad W_{urs} W_{tvw} W^{uvw} W^{rst} \\ , \\ee\nthat decompose into the $[0,1,0,1,0]$ and the singlet representation. One checks that these elements freely generate the general polynomials in $W_{rst}$, such that the latter are labeled by five integers. \n\nTo integrate such a function in analytic superspace, one needs to consider these generating monomials with additional harmonic variables in order to compensate for the $S(U(1)\\times U(6)\\times U(1))$ representation, {\\it i.e.}\\ \n\\begin{eqnarray} \\label{Uintegral811} \\int du\\, u^8{}_i u^r{}_j u^s{}_k u^t{}_l W_{rst} &=& W_{ijkl} \\, , \\\\\n\\int du\\, u^8{}_i u^s{}_j u_1{}^k u_r{}^l W^{rtu} W_{stu} &=& W_{ijpq} W^{klpq} - \\frac{1}{28} \\delta_{ij}^{kl} W_{pqrs} W^{pqrs} \\, , \\nonumber \\\\*\n\\int du\\, u_1{}^q u^8{}_p u^8{}_i u^r{}_j u^s{}_k u^t{}_l W_{u[rs} W_{t]vw} W^{uvw} &=& W_{po[ij} W_{kl]mn} W^{qomn} - \\frac{|W|^2\\hspace{-1.7mm}}{108} \\scal{ \\delta_p^q W_{ijkl} - \\delta^p_{[i} W_{jkl]p}} \\, , \\nonumber \\\\*\n\\int du u_1{}^k u_1{}^l u^8{}_i u^8{}_j W_{urs} W_{tvw} W^{uvw} W^{rst} &=& W_{npq(i} W_{j)mp^\\prime q^\\prime} W^{np^\\prime q^\\prime(k} W^{l)pqm} - \\delta_{(i}^{(k} \\delta_{j)}^{l)} ( \\dots ) \\ ,\\nonumber \n\\end{eqnarray}\nwhich are respectively in the $[0,0,0,1,0,0,0]$, the $[0,1,0,0,0,1,0]$, the $[1,0,0,1,0,0,1]$ and the $[2,0,0,0,0,0,2]$ irreducible representations of $SU(8)$, whereas \n\\begin{equation}}\\def\\ee{\\end{equation} \\int du u_1{}^m u_1{}^n u^8{}_k u^8{}_l u^r{}_i u^s{}_j 
u_t{}^p u_u{}^q W_{vwr} W^{vwt} W_{sxy} W^{uxy} = W_{i^\\prime j^\\prime [ij} W_{k]lk^\\prime l^\\prime} W^{i^\\prime j^\\prime [pq} W^{m]nk^\\prime l^\\prime} + \\dots \\label{Uintegral811B}\n\\ee\ngives rise to the fourth order monomial in the $[1,0,1,0,1,0,1]$ irreducible representation. \n\nOne obtains in this way that the harmonic superspace integral of a general monomial of order $n_1 + 2n_2 +3n_3+ 4n_4+4n_4^\\prime + 4$ in the $[n_2,n_4,n_1+n_3,n_4,n_2]$ of $SU(6)$ gives rise to a term in $\\nabla^6 R^4$ with a monomial of order $n_1 + 2n_2 +3n_3+ 4n_4+4n_4^\\prime $ in the $[n_3+n_4 +2n_4^\\prime,n_2,n_4,n_1+n_3,n_4,n_2,n_3+n_4 +2n_4^\\prime]$ of $SU(8)$, {\\it i.e.}\\ \n\\begin{eqnarray} &&\\int du D^{14} \\bar D^{14} \\, F(u)_{\\scriptscriptstyle [n_3+n_4 +2n_4^\\prime,n_2,n_4,n_1+n_3,n_4,n_2,n_3+n_4 +2n_4^\\prime]}^{[ n_2,n_4,n_1+n_3,n_4,n_2]} W^{n_1 + 2n_2 +3n_3+ 4n_4+4n_4^\\prime + 4}|_{[n_2,n_4,n_1+n_3,n_4,n_2]} \\nonumber \\\\*\n&&\\sim \\nabla^6 R^4 \\, W^{n_1 + 2n_2 +3n_3+ 4n_4+4n_4^\\prime }|_{[n_3+n_4 +2n_4^\\prime,n_2,n_4,n_1+n_3,n_4,n_2,n_3+n_4 +2n_4^\\prime]} +\\dots\\label{811Linear} \\end{eqnarray}\nwhere the function $F(u)$ is the function of the harmonic variable defined as a product of the generating functions defined in (\\ref{Uintegral811},\\ref{Uintegral811B}). One needs at least one quartic singlet in the G-analytic superfield to get a non-vanishing integral \\cite{Drummond:2003ex}. 
\n\nReferring to the one to one correspondence between linearised and non-linear invariants \\cite{Drummond:2003ex}, one deduces that the non-linear invariant must admit the same gradient expansion, {\\it i.e.}\\ \n\\begin{eqnarray}&& {\\cal L}_\\grad811[{\\cal E}_\\grad811] \\nonumber \\\\*\n&=& \\hspace{-5mm}\\sum_{n_1,n_2,n_3,n_4,n_4^\\prime} \\hspace{-5mm} {\\cal D}^{n_1 + 2 n_2 + 3 n_3 + 4 n_4 + 4 n^\\prime_4 }_{\\scriptscriptstyle [n_3+n_4 +2n_4^\\prime,n_2,n_4,n_1+n_3,n_4,n_2,n_3+n_4 +2n_4^\\prime]} \\hspace{2mm} \n{\\cal E}_{\\grad{8}{1}{1}} \\, {\\cal L}^{\\scriptscriptstyle [n_3+n_4 +2n_4^\\prime,n_2,n_4,n_1+n_3,n_4,n_2,n_3+n_4 +2n_4^\\prime]}_\\grad811 \\hspace{5mm} \\ \\label{811GradExpand} \\end{eqnarray}\nwhere each $ {\\cal L}^{\\scriptscriptstyle [n_3+n_4 +2n_4^\\prime,n_2,n_4,n_1+n_3,n_4,n_2,n_3+n_4 +2n_4^\\prime]}_\\grad811$ is an $E_{7(7)}$ invariant superform in the corresponding representation of $SU(8)$. Note that although the irreducible representation remains unchanged under the substitution \n\\begin{equation}}\\def\\ee{\\end{equation} (n_1,n_3,n_4^\\prime) \\rightarrow(n_1+2, n_3-2,n_4^\\prime +1)\\ee\nthe corresponding superforms and the tensor structure of the derivative are different, and are really labelled by the five integers $n_1,n_2,n_3,n_4,n_4^\\prime$ without any further identification. Of course the mass dimension implies that these integers are bounded from above, and the maximal weight terms in $\\chi^{14} \\bar \\chi^{14}$ can only be in representations like $[2,6,0,8,0,6,2]$, $[2,6,1,6,1,6,2]$, \\dots $[2,10,0,0,0,10,2]$, \\dots $[11,1,0,0,0,1,11]$. 
\n\n\nThis gradient expansion implies in particular that the third order derivative of ${\\cal E}_\\grad811$ in the $[0,2,0,0,0,0,0]$ and its complex conjugate must vanish, {\\it i.e.}\\ \n\\begin{eqnarray} \\Scal{ 4 {\\cal D}_{ijpq} {\\cal D}^{pqmn} {\\cal D}_{mnkl} - {\\cal D}_{ijkl} \\scal{ \\Delta + 24} } {\\cal E}_\\grad811 &=& 0 \\ , \\label{CubicC}\\nonumber \\\\*\n \\Scal{ 4 {\\cal D}^{ijpq} {\\cal D}_{pqmn} {\\cal D}^{mnkl} - {\\cal D}^{ijkl} \\scal{ \\Delta + 24} } {\\cal E}_\\grad811 &=& 0 \\ . \\end{eqnarray}\nThese equations imply all the higher order constraints on the function such that its gradient expansion is in agreement with \\eqref{811GradExpand}. \nDefining the covariant derivative in tangent frame as a Lie algebra generator in the fundamental representation of $E_{7(7)}$, this equation reads equivalently \n\\begin{equation}}\\def\\ee{\\end{equation} {\\bf D}_{56}^{\\; 3} {\\cal E}_\\grad811 = {\\bf D}_{56} \\Scal{ 6 + \\tfrac{1}{4} \\Delta } {\\cal E}_\\grad811 \\ . \\label{Cubic56} \\ee\nThis implies in particular that all the Casimir operators are determined by the quadratic one such that \n\\begin{equation}}\\def\\ee{\\end{equation} {\\rm tr} \\scal{ {\\bf D}_{56}^{\\; 2+2n}} \\, {\\cal E}_\\grad811= 6 \\Delta \\scal{ 6 + \\tfrac{1}{4} \\Delta}^{n} {\\cal E}_\\grad811 \\ , \\ee\nbut the quadratic Casimir is not a priori determined by equation \\eqref{CubicC} alone. 
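The step from \eqref{Cubic56} to the trace formula is pure matrix algebra: any matrix satisfying $M^3 = c\, M$ obeys ${\rm tr}(M^{2+2n}) = c^n\, {\rm tr}(M^2)$, so only the quadratic trace remains undetermined. A toy numerical illustration (the diagonal matrix below is a stand-in chosen by us, not the actual ${\bf D}_{56}$ representation matrix):

```python
# If M^3 = c*M then M^(2k+2) = c^k * M^2 (induction: M^4 = M^3 * M = c * M^2),
# so all even traces are fixed by tr(M^2).  Toy example: M = diag(a, -a, 0)
# satisfies M^3 = a^2 * M, i.e. c = a^2.
a = 3
c = a * a
eigs = [a, -a, 0]

def tr_pow(p):
    # trace of M^p for the diagonal toy matrix
    return sum(e ** p for e in eigs)

for n in range(5):
    assert tr_pow(2 + 2 * n) == c ** n * tr_pow(2)
print("tr(M^(2+2n)) = c^n tr(M^2) holds for n = 0..4")
```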
We will need to consider the other invariants to finally conclude that supersymmetry moreover implies \\cite{Green:2010kv}\n\\begin{equation}}\\def\\ee{\\end{equation} \\Delta {\\cal E}_\\grad811 = - 60 {\\cal E}_\\grad811 - ({\\cal E}_\\grad844)^2 \\ .\\label{Laplace811} \\ee\nEquation \\eqref{Cubic56} defines a quantization of the algebraic condition ${\\bf Q}_{56}^{\\; 3}=0$ associated to the complex nilpotent orbit of $E_{7}$ of Dynkin label \\DEVII{2}00000{\\mathfrak{0}}, while the condition that the fourth order derivative does not vanish generically in the ${\\scriptstyle [2,0,0,0,0,0,2]}$ distinguishes its real form of $SU(8)$ Dynkin label [$\\scriptstyle \\mathfrak{2}\\mathfrak{0}\\mathfrak{0}\\mathfrak{0}\\mathfrak{0}\\mathfrak{0}\\mathfrak{2}$] \\cite{E7Djo}, which defines the graded decomposition of $SU(8)$ associated to the $(8,1,1)$ harmonic superspace we consider in this section. The property that the linearised structure does not permit one to determine the eigenvalue of the Laplace operator in this case implies that the quantization of the associated nilpotent orbit is not unique, and depends on one free parameter. This property follows from the fact that a nilpotent element of this kind can be obtained as the appropriate limit of a semi-simple element satisfying the characteristic equation ${\\bf Q}_{56}^{\\; 3}=\\frac{1}{24} {\\rm tr}({\\bf Q}_{56}^{\\; 2}) {\\bf Q}_{56} $. 
\n\n\n\\subsection{$F^2 \\nabla^4 R^4$ type invariant and its relation to $\\nabla^6 R^4$}\n\\label{F2D4R4} \nAlthough the $\\nabla^6 R^4$ type invariant provides the unique supersymmetric invariant preserving $SU(8)$ that one can write at this order, there is another class of invariants that can be defined from the chiral harmonic superspace defined in terms of the harmonic variables $u^{\\hat{r}}{}_i,\\, u^r{}_i$ parametrising $SU(8)\/S(U(2)\\times U(6))$ \\cite{Drummond:2003ex,Hartwell:1994rp}, with $\\hat{r},\\hat{s}$ equal to $1, 2$ of $SU(2)$, and $r ,s$ running from $3$ to $8$ of $SU(6)$. One defines the superfield \n\\begin{equation}}\\def\\ee{\\end{equation}\nW^{rs} = u^1{}_i u^2{}_j u^r{}_k u^s{}_l W^{i j k l}\n\\ee\nthat satisfies the G-analyticity constraint \n\\begin{equation}}\\def\\ee{\\end{equation} \nu^{\\hat{r}}{}_{i} \\bar D^i_\\alpha W^{rs} = 0 \\ . \n\\ee\nAs in the preceding section, the most general function of $W^{rs}$ is freely generated by the three monomials \n\\begin{equation}}\\def\\ee{\\end{equation} W^{rs} \\ , \\qquad \\frac{1}{2} \\varepsilon_{rstuvw} W^{tu} W^{vw} \\ , \\qquad \\frac{1}{2} \\varepsilon_{rstuvw} W^{rs} W^{tu} W^{vw} \\ . \\ee\nOne must supplement them with harmonic variables to preserve $S(U(2)\\times U(6))$ invariance, using \n\\begin{eqnarray} \\int du\\, u^i{}_1 u^j{}_2 u^k{}_r u^l{}_s \\, W^{rs} &=& W^{ijkl} \\ , \\\\\n \\int du\\, u^i{}_1 u^j{}_2 u^r{}_k u^s{}_l \\, \\frac{1}{2} \\varepsilon_{rstuvw} W^{tu} W^{vw} &=& W^{ijpq} W_{klpq} - \\frac{1}{28} \\delta^{ij}_{kl}W^{pqrs} W_{pqrs} \\ , \\nonumber \\\\*\n \\int du\\, u^i{}_1 u^j{}_2 u^k{}_1 u^l{}_2 \\, \\frac{1}{2} \\varepsilon_{rstuvw} W^{rs} W^{tu} W^{vw} &=& W^{ijpq} W_{pqrs} W^{rskl} - \\frac{1}{12} W^{ijkl} W_{pqrs} W^{pqrs} \\ . 
\\nonumber\n\\end{eqnarray}\nOne only gets a non-trivial integral if the cubic $SU(6)$ singlet in $W^{rs}$ appears at least quadratically, which can be understood from the property that the associated chiral primary operator of $SU(2,2|8)$ is otherwise in a short representation \\cite{Drummond:2003ex}. Because the $U(1)$ weight of the measure is compensated by a single factor of this cubic $SU(6)$ singlet, it appears that no $SU(8)$ invariant exists in this class. \n\n For a general monomial, one gets an invariant of the form \n\\begin{eqnarray}&& \\int du \\bar D^{16} D^{12} \\, F(u)^{[0,n_1,0,n_2,0]}_{[0,n_2+2n_3+2,0,n_1,0,n_2,0]} \\, W^{n_1 + 2n_2 + 3n_3 + 6} |_{[0,n_1,0,n_2,0]} \\\\\n&\\sim&W^{n_1 + 2n_2 + 3n_3}_{\\scriptscriptstyle [0,n_2+2n_3,0,n_1,0,n_2,0]} \\bar F^{2}_{\\scriptscriptstyle[0,2,0,0,0,0,0]} \\nabla^4 R^4+\\dots + W^{n_1+2n_2+3n_3-22}_{\\scriptscriptstyle [0,n_2+2n_3-8,0,n_1-8,0,n_2-4,0]}\\bar \\chi^{16}_{\\scriptscriptstyle[0,8,0,4,0,0,0]} \\chi^{12}_{\\scriptscriptstyle[0,2,0,4,0,4,0]} \\ , \\nonumber\\end{eqnarray}\nwhere all terms are projected to the $\\scriptstyle [0,n_2+2n_3+2,0,n_1,0,n_2,0]$ irreducible representation, and the term in $\\bar F^2$ is\n\\begin{equation}}\\def\\ee{\\end{equation} \\bar F_{\\adt\\bdt}^{ij} \\bar F^{\\adt\\bdt kl} - \\bar F_{\\adt\\bdt}^{[ij} \\bar F^{kl]\\adt\\bdt} \\ . \\ee\nFor a generic function ${\\cal F}[W]$ of $W^{rs}$, one obtains\n\\begin{equation}}\\def\\ee{\\end{equation} D^{16} \\bar D^{12} {\\cal F}[W] = \\sum_{n_1,n_2,n_3} \\frac{\\partial^{n_1+2n_2+3n_3+6} {\\cal F}[W]}{\\partial W^{n_1+2n_2+3n_3+6}}{}\\Big|_{[0,n_2,0,n_1,0]} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{[0,n_1,0,n_2,0]\\ord{n_1+2n_2+3n_3+3}} \\ , \\ee \nwhere the densities $ {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{[0,n_1,0,n_2,0]\\ord{n_1+2n_2+3n_3+3}}$ are of order $n_1+2n_2+3n_3+6$ in the fields and only depend on the scalar fields through their space-time derivative. 
The number $n_1+2n_2+3n_3+3$ is the $U(1)$ weight of the density. These densities determine by construction covariant superforms in the linearised approximation \\cite{Voronov,Gates:1997kr,Gates:1997ag}, such that \n\\begin{eqnarray} d^\\ord{0} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl} &=& 0 \\ , \\nonumber \\\\*\n\\Scal{ d^\\ord{0} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl,pqrs} + 3 P^{pqrs} \\wedge {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl} }{}_{\\scriptscriptstyle [0,2,0,1,0,0,0]} &=& 0 \\ , \\nonumber \\\\*\n\\Scal{ d^\\ord{0} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl,pqrs,mntu} + 3 P^{pqrs} \\wedge {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl,mntu} }{}_{\\scriptscriptstyle [0,2,0,2,0,0,0]} &=& 0 \\ , \\nonumber \\\\*\n\\Scal{ d^\\ord{0} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl,pq}{}_{rs}+18 P_{rsmn} \\wedge {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl,pqmn} }{}_{\\scriptscriptstyle [0,3,0,0,0,1,0]} &=& 0 \\ ,\n\\end{eqnarray}\nwhere $d^\\ord{0}$ is the superspace exterior derivative in the linear approximation. At the next order, because \n\\begin{equation}}\\def\\ee{\\end{equation} d = \\sum_{n=0}^{\\infty} d^\\ord{n} \\ee\nsatisfies $d^2=0$, one has \n\\begin{equation}}\\def\\ee{\\end{equation} \\{ d^\\ord{0},d^\\ord{1}\\} = 0\\ , \\ee\nand therefore \n\\begin{equation}}\\def\\ee{\\end{equation} d^\\ord{0} \\Scal{ d^\\ord{1} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl} } = 0 \\ . \\ee\nWe assume in this paper that the structure of superconformal multiplets implies the absence of a supersymmetry anomaly, or equivalently that the fifth cohomology of $d^\\ord{0}$ is empty. Nevertheless, even if $d^\\ord{1} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl} $ only depends on the covariant superfields, nothing prevents its $d^\\ord{0}$ antecedent from depending explicitly on the scalar fields. 
This implies in this case that\n\\begin{equation}}\\def\\ee{\\end{equation} d^\\ord{1} {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl} =-d^\\ord{0} {\\cal L}_{\\grad820\\, \\ord{1}}^{ij,kl} + P_{pqrs} \\wedge {\\cal M}^{ij,kl,pqrs} + P^{pqij} \\wedge {\\cal M}_{pq}{}^{kl} + P^{pqkl} \\wedge {\\cal M}_{pq}{}^{ij} -2 P^{i]pq[k} \\wedge {\\cal M}_{pq}{}^{l][j} \\ , \\label{d1F2R4} \\ee\nwhere $ {\\cal L}_{\\grad820\\, \\ord{1}}^{ij,kl}$ is the covariant correction to the superform, whereas ${\\cal M}^{ij,kl,pqrs}$ and ${\\cal M}^{ij}{}_{kl}$ are superforms of order six in the fields in the $[0,2,0,1,0,0,0]$ and the $[0,1,0,0,0,1,0]$, respectively, that must satisfy\n\\begin{eqnarray}\nd^\\ord{0} {\\cal M}^{ij,kl,pqrs} &=& \\scal{ P^{pqrs} \\wedge \\mathcal{N}^{ij,kl} }{}_{\\scriptscriptstyle [0,2,0,1,0,0,0]}\\ , \\nonumber \\\\*\nd^\\ord{0} {\\cal M}^{ij}{}_{kl} &=& P^{ijpq} \\wedge \\mathcal{N}_{klpq} - \\frac{1}{28} \\delta^{ij}_{kl} P^{pqrs} \\wedge \\mathcal{N}_{pqrs} \\ . \\end{eqnarray}\nFor such corrections not to be reabsorbable into a covariant correction such as $ {\\cal L}_{\\grad820\\, \\ord{1}}^{ij,kl}$, there must be a corresponding short multiplet associated to a linearised invariant of the same dimension. The only candidate for a superform $ {\\cal M}^{ij,kl,pqrs}$ is $ {\\cal L}_{\\grad820\\, {\\rm \\scriptscriptstyle lin}}^{ij,kl,pqrs} $, but it is of order seven in the fields, and therefore $ {\\cal M}^{ij,kl,pqrs}=0$ at this order. However, there is a candidate for $ {\\cal M}^{ij}{}_{kl} $ which is $ {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}ij}{}_{kl} $, the superform that appears in the $\\nabla^6 R^4$ type invariant discussed in the last section. 
Following \\eqref{811Linear}, we have \n\\begin{eqnarray} d^\\ord{0} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}} &=& 0 \\ , \\nonumber \\\\*\n d^\\ord{0} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{ijkl} &=& - 3 P^{ijkl} \\wedge {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}} \\ , \\nonumber \\\\*\n d^\\ord{0} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{ijkl,pqrs} &=& - 3 \\scal{ P^{ijkl} \\wedge {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{pqrs} }{}_{\\scriptscriptstyle [0,0,0,1,0,0,0]} \\ , \\nonumber \\\\*\n d^\\ord{0} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}ij}{}_{kl} &=& - 18 \\scal{ P_{klpq} \\wedge {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{ijpq} }{}_{\\scriptscriptstyle [0,1,0,0,0,1,0]} \\ ,\\label{Linear811}\n\\end{eqnarray}\nand therefore \n\\begin{eqnarray} && d^\\ord{0} \\Bigl(\\Scal{ W^{ijpq} W_{pqrs} W^{rskl} - \\tfrac{1}{12} W^{ijkl} W_{pqrs} W^{pqrs} } {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}} \\Bigr . \\nonumber \\\\*\n&& \\qquad + W^{ijpq} W_{pqrs} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{rskl} + W^{ijpq} W^{klrs} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}{}_{pqrs}+ W^{klpq} W_{pqrs} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{rsij} \\nonumber \\\\*\n&& \\hspace{10mm} \\Bigl . 
+ 6 W^{pqij} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}kl}{}_{pq} + 6 W^{pqkl} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}ij}{}_{pq} -12 W^{i]pq[k} {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}l][j}{}_{pq} \\Bigr)\\nonumber \\\\*\n&=& 18 \\Scal{ P^{pqij}\\wedge {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}kl}{}_{pq} + P^{pqkl} \\wedge {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}ij}{}_{pq} -2 P^{i]pq[k}\\wedge {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}l][j}{}_{pq}} \\ , \\label{d0Cohom}\n \\end{eqnarray}\nsuch that $ {\\cal L}_{\\grad811\\, {\\rm \\scriptscriptstyle lin}}^{\\hspace{10mm}ij}{}_{kl} $ is indeed a consistent candidate. Moreover, the structure of the linearised $(8,1,1)$ invariant does not admit the tensor function $W^{ijpq} W_{pqrs} W^{rskl}$, such that \\eqref{d0Cohom} is not the exterior derivative of a superform that does not depend on the naked scalar fields (not covered by a space-time derivative). It follows that such a correction, if it appeared in \\eqref{d1F2R4}, could not be reabsorbed in a redefinition of $ {\\cal L}_{\\grad820\\, \\ord{1}}^{ij,kl}$. \n\n\nIf this mixing between the $(8,2,0)$ and the $(8,1,1)$ superforms did not appear at the non-linear level, then the action of the exterior derivative on the function of the scalar fields would not introduce lower derivative terms, such that the function would then satisfy \n\\begin{equation}}\\def\\ee{\\end{equation} {\\cal D}^{ijpq} \\Scal{ 4 {\\cal D}_{pqrs} {\\cal D}^{rsmn} {\\cal D}_{mnkl} - {\\cal D}_{pqkl} \\scal{ \\Delta + 24} } {\\cal E}_{\\grad{8}{2}{0}} = 0 \\ . 
\\label{D4Consistency} \\ee\nIf the mixing did appear, then the uniqueness of the linearised invariants \\eqref{Linear811} would imply that the corresponding non-linear superform should be the same as in \\eqref{811GradExpand}, such that once again the exterior derivative acting on $D^3_{\\scriptscriptstyle [0,2,0,0,0,0,0]} {\\cal E}_\\grad820$ should not generate lower derivative terms and one would conclude again that \\eqref{D4Consistency} must be satisfied. Therefore this equation must be satisfied in either case.\n\n\nUsing moreover the property that the gradient expansion of the linearised invariant is inconsistent with the presence of the third order derivative in the $[1,0,0,1,0,0,1]$ of $SU(8)$, one requires\n\\begin{equation}}\\def\\ee{\\end{equation} \\Scal{ 36 {\\cal D}_{jr[kl} {\\cal D}^{irmn} {\\cal D}_{pq]mn} - \\delta^i_j {\\cal D}_{klpq} ( \\Delta + 42) + \\delta^i_{[k} {\\cal D}_{lpq]j} ( \\Delta-120)} {\\cal E}_{\\grad{8}{2}{0}} = 0 \\ . \\label{CubicR} \n\\ee\nUsing this equation one computes independently of \\eqref{D4Consistency} that \n\\begin{equation}}\\def\\ee{\\end{equation} {\\cal D}^{ijpq} \\Scal{ 4 {\\cal D}_{pqrs} {\\cal D}^{rsmn} {\\cal D}_{mnkl} - {\\cal D}_{pqkl} \\scal{ \\Delta + 24} } {\\cal E}_{\\grad{8}{2}{0}} = \\frac{1}{12} \\Scal{ 28 {\\cal D}^{ijpq} {\\cal D}_{klpq} - 3 \\delta^{ij}_{kl} \\Delta} \\scal{ \\Delta + 60 } {\\cal E}_\\grad820 \\ee\nand we conclude that \\eqref{D4Consistency} and \\eqref{CubicR} together imply\n\\begin{equation}}\\def\\ee{\\end{equation} \\Delta {\\cal E}_\\grad820 = -60 {\\cal E}_\\grad820 \\ .\\label{Laplace820} \\ee\nThis eigenvalue is such that the structure of the invariant is consistent with the mixing between the $(8,2,0)$ and the $(8,1,1)$ superforms. Only in this case can they reduce to the same invariant for a function ${\\cal E}_{\\grad{8}{2}{2}}$ satisfying both \\eqref{CubicC} and \\eqref{CubicR}, as for the $\\nabla^4 R^4$ type invariant. 
\n\nWe now argue that this chiral invariant must indeed include a $\\nabla^6 R^4$ coupling, because the two classes of invariants reduce to one single class in three dimensions. But before doing so, let us mention that \\eqref{CubicR} can be rewritten as\n\\begin{equation}}\\def\\ee{\\end{equation} {\\bf D}_{133}^{\\; 3} {\\cal E}_\\grad820 =\\frac{1}{3} {\\bf D}_{133} \\Delta {\\cal E}_\\grad820 \\ , \\ee\nwhich defines a quantization of the algebraic equation ${\\bf Q}_{133}^{\\; 3}=0$ associated to the complex nilpotent orbit of $E_{7}$ of Dynkin label \\DEVII{0}00000{\\mathfrak{2}} with the real form defined with the $SU(8)$ Dynkin label [$\\scriptstyle \\mathfrak{0}\\mathfrak{2}\\mathfrak{0}\\mathfrak{0}\\mathfrak{0}\\mathfrak{0}\\mathfrak{0}$] \\cite{E7Djo}, which defines the graded decomposition of $SU(8)$ associated to the $(8,2,0)$ harmonic superspace we consider in this section. In this case the choice of real form moreover implies that the complex charge in the ${\\bf 70}$ defining the nilpotent orbit through the Kostant--Sekiguchi correspondence satisfies \n\\begin{equation}}\\def\\ee{\\end{equation} Q^{ijpq} Q_{pqmn} Q^{mnkl} = 0 \\ , \\ee\nsuch that it admits a unique quantization, with the eigenvalue of the Laplace operator $-60$. However, we will see in the following that the constraint \\eqref{D4Consistency} can be relaxed while keeping the property that the associated representation of $E_{7(7)}$ is a highest weight representation. \n\n\n\\subsection{Dimensional reduction to three dimensions}\nIn three dimensions, the duality group is $E_{8(8)}$, of maximal compact subgroup $Spin(16)\/\\mathds{Z}_2$. We denote by $i, j$ the $SO(16)$ vector indices and by $A, B$ the positive chirality Weyl spinor indices. The covariant derivative in tangent frame is a chiral Weyl spinor, {\\it i.e.}\\ in the \\WSOXVI00000001 representation. 
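As a trivial bookkeeping check of the coset structure just introduced (using the standard dimensions $\dim E_8 = 248$ and $\dim SO(16) = 120$, which we take as given), the number of scalar fields indeed matches the chiral Weyl spinor of $Spin(16)$:

```python
# The scalar fields of the D=3 theory parametrise E8(8)/(Spin(16)/Z2):
# their number is dim E8 - dim SO(16), which matches the chiral Weyl spinor
# representation of Spin(16), of dimension 2^(16/2 - 1) = 128.
dim_e8 = 248
dim_so16 = 16 * 15 // 2          # 120
dim_coset = dim_e8 - dim_so16    # 128 scalar fields
dim_weyl = 2 ** (16 // 2 - 1)    # chiral Weyl spinor of Spin(16)
print(dim_so16, dim_coset, dim_weyl)  # 120 128 128
assert dim_coset == dim_weyl == 128
```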
In the linearised approximation, the covariant fields all descend from the Weyl spinor scalar field, satisfying \\cite{Greitz:2011vh}\n\\begin{equation}}\\def\\ee{\\end{equation}\nD_{\\alpha}^{i} W^{A} = \\Gamma^{i A \\dot{A}} \\chi_{\\alpha \\dot A} \\ . \n\\ee\nBoth four-dimensional $(8,1,1)$ and $(8,2,0)$ harmonic superspaces descend to the same $(16,2)$ harmonic superspace in three dimensions, defined through the introduction of harmonic variables parametrising $SO(16)\/(U(2) \\times SO(12))$ \\cite{Howe:1994ms}. The Weyl spinor representation decomposes with respect to $U(2) \\times Spin(12)$ as\n\\begin{equation}}\\def\\ee{\\end{equation} {\\bf 128} \\cong {\\bf 32}_+^{\\ord{-1}} \\oplus \\scal{ {\\bf 2}\\otimes {\\bf 32}_-}^\\ord{0} \\oplus {\\bf 32}_+^{\\ord{1}}\\ , \\label{128inD6} \\ee\nsuch that the grade $1$ Weyl spinor $W$ of $Spin(12)$ satisfies a G-analyticity constraint with respect to the positive grade covariant derivative in the ${\\bf 2}$ of $U(2)$. The general polynomial in the $Spin(12)$ Weyl spinor is parametrised by five integers, just as for the rank three antisymmetric tensor of $SU(8)$ in section \\ref{811D6R4}.\\footnote{This property follows from the fact that the classification of duality orbits of the black hole charges is the same in the $\\mathcal{N}=2$ supergravity theories of duality group $SO^*(12)$ and $SU(3,3)$ \\cite{Ferrara:1997uz}.} One computes in a similar way the general integral \n\\begin{eqnarray} && \\int du F(u)_{{\\mbox{\\WSOXVI0{n_3\\hspace{-0.5mm}\\mbox{+}n_4\\hspace{-0.5mm}\\mbox{+}2n_4^\\prime}0{n_2}0{n_4}0{n_1\\hspace{-0.5mm}\\mbox{+}n_3}} } }^{{\\mbox{\\WSOXII0{n_2}0{n_4}0{n_1\\hspace{-0.5mm}\\mbox{+}n_3}} }} D^{28} W^{n_1 + 2n_2 +3n_3+ 4n_4+4n_4^\\prime + 4}|_{{\\mbox{\\WSOXII0{n_2}0{n_4}0{n_1\\hspace{-0.5mm}\\mbox{+}n_3}} }} \\nonumber \\\\*\n&\\sim &\\nabla^{10} P^4 \\, W^{n_1 + 2n_2 +3n_3+ 4n_4+4n_4^\\prime 
}|_{{\\mbox{\\WSOXVI0{n_3\\hspace{-0.5mm}\\mbox{+}n_4\\hspace{-0.5mm}\\mbox{+}2n_4^\\prime}0{n_2}0{n_4}0{n_1\\hspace{-0.5mm}\\mbox{+}n_3}} } } +\\dots\\end{eqnarray}\nwhere $\\nabla^{10} P^4 $ is a $Spin(16)$ invariant quartic term in the scalar field momentum, that replaces the $\\nabla^6 R^4$ type term that vanishes modulo the equations of motion in three dimensions. In three dimensions it is not established if there is a one to one correspondence between non-linear and linear invariants defined as harmonic superspace integrals. Nevertheless, the class of invariants we discuss descends from four dimensions, and we can therefore assume they admit the same structure, {\\it i.e.}\\ \n\\begin{equation}}\\def\\ee{\\end{equation} {\\cal L}_\\gra{16}{2}[{\\cal E}_{\\gra{16}{2}}] = \\sum_{n_1,n_2,n_3,n_4,n_4^\\prime} {\\cal D}_{{\\mbox{\\WSOXVI0{n_3\\hspace{-0.5mm}\\mbox{+}n_4\\hspace{-0.5mm}\\mbox{+}2n_4^\\prime}0{n_2}0{n_4}0{n_1\\hspace{-0.5mm}\\mbox{+}n_3}} }} {\\cal E}_{\\gra{16}{2}}\\, {\\cal L}^{{\\mbox{\\WSOXVI0{n_3\\hspace{-0.5mm}\\mbox{+}n_4\\hspace{-0.5mm}\\mbox{+}2n_4^\\prime}0{n_2}0{n_4}0{n_1\\hspace{-0.5mm}\\mbox{+}n_3}} }}\\ . \\ee\nThis expansion implies that the fourth order derivative of the function ${\\cal E}_{\\gra{16}{2}}$ restricted to the \\WSOXVI10001000 must vanish, {\\it i.e.}\\ \\begin{equation}}\\def\\ee{\\end{equation} \\scal{ {\\cal D} \\Gamma_{i[jk}{}^r {\\cal D}} \\scal{ {\\cal D} \\Gamma_{lpq]r} {\\cal D} }{\\cal E}_{\\gra{16}{2}} = - \\delta_{i[j} \\scal{ {\\cal D} \\Gamma_{klpq]} {\\cal D}} ( \\Delta+48 ) {\\cal E}_{\\gra{16}{2}} \\ ,\\label{quarticE8} \\ee\nwhere the Laplace operator $\\Delta$ is defined as\n\\begin{equation}}\\def\\ee{\\end{equation}\n\\Delta = {\\cal D}_{A} {\\cal D}^{A}\\ . \n\\ee\nBy dimensional reduction of the four-dimensional equation \\eqref{Laplace820}, one computes that \\begin{equation}}\\def\\ee{\\end{equation}\n\\Delta {\\cal E}_{\\gra{16}{2}} = - 198 {\\cal E}_{\\gra{16}{2}} \\ . 
\n\\ee\nOne can understand that the two kinds of 1\/8 BPS invariants discussed in the preceding sections dimensionally reduce to this single class. If one considers the decomposition of \\eqref{128inD6} with respect to $U(6)\\subset Spin(12)$, one obtains for one embedding \n \\begin{equation}}\\def\\ee{\\end{equation} {\\bf 32}_+\\cong {\\bf 6}^\\ord{-2} \\oplus {\\bf 20}^\\ord{0} \\oplus \\overline{\\bf 6}^\\ord{2}\\ , \\label{32inA5} \\ee\nsuch that the G-analytic superfield in the ${\\bf 32}_+$ includes the four-dimensional $(8,1,1)$ G-analytic scalar $W^{rst}$ as well as some components of the vector fields. A generic spinor of non-zero quartic invariant can be represented by $W^{rst}$. For the other embedding $U(6)\\subset Spin(12)$, one gets\n \\begin{equation}}\\def\\ee{\\end{equation} {\\bf 32}_+\\cong \\overline{\\bf 1}^\\ord{-3} \\oplus {\\bf 15}^\\ord{-1} \\oplus \\overline{\\bf 15}^\\ord{1} \\oplus {\\bf 1}^\\ord{3}\\ , \\label{32inA5p} \\ee\nsuch that the G-analytic superfield in the ${\\bf 32}_+$ includes the four-dimensional $(8,2,0)$ G-analytic scalar $W^{rs}$ as well as some components of the vector fields, and an Ehlers complex scalar parametrising the four-dimensional metric. The scalar field alone only parametrises a null spinor of $Spin(12)$ of vanishing quartic invariant, and only together with the Ehlers scalar field can it provide a representative of a generic spinor. 
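The dimension bookkeeping of the three branchings quoted above is elementary and can be verified directly (a sketch; the tags in the comments refer to the equation labels used in the text):

```python
# Dimension check of the branchings used in this subsection:
# the 128 of Spin(16) under U(2) x Spin(12), eq. (128inD6),
# and the two U(6) decompositions of the 32+, eqs. (32inA5) and (32inA5p).
dim_128 = 32 + 2 * 32 + 32        # 32+ + (2 x 32-) + 32+
dim_32_a = 6 + 20 + 6             # 6 + 20 + 6bar : contains W^{rst} in the 20
dim_32_b = 1 + 15 + 15 + 1        # 1bar + 15 + 15bar + 1 : contains W^{rs} in the 15
print(dim_128, dim_32_a, dim_32_b)  # 128 32 32
assert dim_128 == 128 and dim_32_a == 32 and dim_32_b == 32
```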
One could have naively concluded that the function ${\\cal E}_\\grad820$ should give rise to a function on $E_{8(8)}\/Spin_{\\scriptscriptstyle \\rm c}(16)$ satisfying moreover \n\\begin{equation}}\\def\\ee{\\end{equation} 5\\scal{ {\\cal D} \\Gamma_{ijpq} {\\cal D}} \\scal{ {\\cal D} \\Gamma^{klpq} {\\cal D}} {\\cal E} = - 20 \\scal{{\\cal D} \\Gamma_{ij}{}^{kl} {\\cal D}} \\scal{ \\Delta + 48 } {\\cal E} + 28 \\delta_{ij}^{kl} \\Delta \\scal{\\Delta + 120} {\\cal E} \\ , \\ee\nbut this equation only admits solutions for functions satisfying the Laplace equation\n\\begin{equation}}\\def\\ee{\\end{equation} \\Delta {\\cal E} = -210\\, {\\cal E} \\ , \\ee\nexcept for the functions satisfying the quadratic and cubic constraints that define the $R^4$ and $\\nabla^4 R^4$ type invariants. We see therefore that this equation is incompatible with supersymmetry.\n\n It follows that both $(8,1,1)$ and $(8,2,0)$ type invariants dimensionally reduce to three-dimensional invariants depending on functions on $E_{8(8)}\/Spin_{\\scriptscriptstyle \\rm c}(16)$ that belong to the same representation of $E_{8(8)}$. Being in the same representation, they both carry a quartic component in the linearised approximation and they must both include a $\\nabla^6 R^4$ type term in their uplift to four dimensions. This proves that the mixing between the two different linearised structures must occur such that the non-linear $\\bar F^2 \\nabla^4 R^4$ type invariant cannot exist without including a $\\nabla^6 R^4$ type term as well. \n\nBefore ending this section on the three-dimensional theory, let us discuss the modification of the supersymmetry constraint due to the completion of the $R^4$ type invariant at the next order. 
As argued in \\cite{Green:2005ba}, the appearance of an $R^4$ correction with threshold function ${\\cal E}_\\gra{16}{8}$ will modify the Laplace equation with a non-zero right-hand side, {\\it i.e.}\\ \n \\begin{equation}}\\def\\ee{\\end{equation}\n\\Delta {\\cal E}_{\\gra{16}{2}} = - 198 {\\cal E}_{\\gra{16}{2}} - {\\cal E}_{\\gra{16}{8}}^{\\; 2} \\ . \n\\ee\nBecause the function $ {\\cal E}_{\\gra{16}{8}}$ satisfies \\cite{Minimal}\n\\begin{equation}}\\def\\ee{\\end{equation} \\scal{ {\\cal D} \\Gamma_{ijkl} {\\cal D}} {\\cal E}_{\\gra{16}{8}} = 0 \\ , \\ee\nthe second derivative of its square must necessarily vanish in the \\WSOXVI10001000, and we get accordingly a modification of \\eqref{quarticE8} to\n\\begin{equation}}\\def\\ee{\\end{equation} \\scal{ {\\cal D} \\Gamma_{i[jk}{}^r {\\cal D}} \\scal{ {\\cal D} \\Gamma_{lpq]r} {\\cal D} } {\\cal E}_{\\gra{16}{2}}= 150 \\delta_{i[j} \\scal{ {\\cal D} \\Gamma_{klpq]} {\\cal D}} {\\cal E}_{\\gra{16}{2}} + \\delta_{i[j} \\scal{ {\\cal D} \\Gamma_{klpq]} {\\cal D}} {\\cal E}_{\\gra{16}{8}}^{\\; 2}\\ . 
\\ee\n\n\\subsection{$E_{7(7)}$ Eisenstein series}\nIn this section we shall discuss some properties of Eisenstein series that solve the differential equations we have derived for the $\\nabla^6 R^4$ type invariants.\n\n\\addtocontents{toc}{\\protect\\setcounter{tocdepth}{1}}\n\\subsubsection{Fundamental representation}\n\\addtocontents{toc}{\\protect\\setcounter{tocdepth}{2}}\nAs discussed in \\cite{Obers:1999um,D4R4}, one can define the Eisenstein series \n\\begin{equation}}\\def\\ee{\\end{equation} E_{\\mbox{\\DEVII000000s}} = \\sum_{\\vspace{-2mm}\\begin{array}{c}\\scriptstyle \\vspace{-4mm} \\Gamma\\in \\mathds{Z}^{56} \\vspace{2mm}\\\\ \\scriptscriptstyle I_4^{\\prime\\prime}(\\Gamma)|_{\\bf 133}=0\\end{array}} |Z(\\Gamma)_{ij} Z(\\Gamma)^{ij}|^{-s} \\ , \\label{E56s} \\ee\nas a sum over the rank one integral charge vectors $\\Gamma$ in the ${\\bf 56}$ of $E_{7(7)}$ satisfying the constraint that the quadratic tensor $\\Gamma\\otimes \\Gamma$ vanishes in the adjoint representation. This formula is rather useful to identify the differential equations satisfied by the Eisenstein function, because one can simply consider the case of one charge $\\Gamma$, with $Z(\\Gamma)_{ij} = {\\cal V}_{ij}{}^I\\Gamma_I$, such that the quadratic constraint becomes \n\\begin{equation}}\\def\\ee{\\end{equation} Z_{[ij} Z_{kl]} = \\frac{1}{24} \\varepsilon_{ijklpqrs} Z^{pq} Z^{rs} \\ , \\qquad Z_{ik} Z^{jk} = \\frac{1}{8} \\delta_i^j Z_{kl} Z^{kl} \\ ,\\ee\nand the differential operator acts on $Z_{ij} $ as an element of $\\mathfrak{e}_{7(7)}$\n\\begin{equation}}\\def\\ee{\\end{equation} {\\cal D}_{ijkl} Z^{pq} = 3 \\delta^{pq}_{[ij} Z_{kl]} \\ , \\qquad {\\cal D}_{ijkl} Z_{pq} = \\frac{1}{8} \\varepsilon_{ijklpqrs} Z^{rs} \\ . 
\\ee\nUsing the definition $|Z|^2 = Z_{ij} Z^{ij}$, one computes that the function $|Z|^{-2s}$ satisfies \n\\begin{eqnarray} {\\cal D}_{ijpq} {\\cal D}^{klpq} |Z|^{-2s} &=& 2s(s-2) Z_{ij} Z^{kl} |Z|^{-2s-2} + \\frac{s(s-11)}{4} \\delta_{ij}^{kl} |Z|^{-2s} \\ , \\nonumber \\\\*\n{\\cal D}_{ijpq} {\\cal D}^{pqrs} {\\cal D}_{rskl} |Z|^{-2s} &=& - 3 s(s-2)(s-4) Z_{ij} Z_{kl} |Z|^{-2s-2} + \\frac{s^2-15s + 8}{4} {\\cal D}_{ijkl} |Z|^{-2s} \\ , \\nonumber \\\\*\n {\\cal D}_{jr[kl} {\\cal D}^{irmn} {\\cal D}_{pq]mn} |Z|^{-2s} &=&\\frac{(s-2)(s-7)}{12} \\delta^i_j{\\cal D}_{klpq} |Z|^{-2s} -\\frac{ s^2-9s -40}{12} \\delta^i_{[k} {\\cal D}_{pql]j} |Z|^{-2s} \\ , \\hspace{10mm} \\label{ConstraintsZs} \n\\end{eqnarray}\nand the Laplace equation \n\\begin{equation}}\\def\\ee{\\end{equation} \\Delta |Z|^{-2s} = 3s(s-9) |Z|^{-2s} \\ . \\ee\nFor $s\\ne2,\\, 4$, the function admits a generic gradient expansion in the irreducible representations $[0,n_2+2n_3,0,n_1,0,n_2,0]$ and their complex conjugates. To exhibit this property, it is convenient to consider a restricted set of indices as follows \n\\begin{eqnarray} && \\scal{ {\\cal D}_{12ij} {\\cal D}^{ijkl} {\\cal D}_{kl12}}^{n_3} \\scal{ {\\cal D}_{12pq} {\\cal D}^{78pq}}^{n_2} \\scal{{\\cal D}_{1234}}^{n_1} |Z|^{-2s} \\\\ &=& \\tfrac{(s+n_1+n_2+n_3-1)!(s+n_2+n_3-3)!(s+n_3-5)!}{(s-1)!(s-3)!(s-5)!} \\scal{\\mbox{-}3 Z_{12}^{\\; 2}}^{n_3} \\scal{2 Z_{12} Z^{78}}^{n_2} \\scal{\\mbox{-}6 Z_{[12} Z_{34]}}^{n_1} |Z|^{-2(s+n_1+n_2+n_3)} \\, . 
\\nonumber \\label{GradExpandZs} \\end{eqnarray}\nOne computes moreover that for $m\\le n$ \n\\begin{eqnarray} && \\scal{{\\cal D}^{78ij} {\\cal D}_{ijkl} {\\cal D}^{kl78}}^{m} \\scal{{\\cal D}_{12pq} {\\cal D}^{pqrs} {\\cal D}_{rs12}}^n |Z|^{-2s} \\\\\n&=& \\tfrac{(s+n-1)!(s+n-3)!(s+n-5)!(s+n+m-1)!(s+n+m-3)!(s-n+m-5)!}{(s-1)!(s-3)!(s-5)!(s+n-1)!(s+n-3)!(s-n-5)!} \\scal{\\mbox{-}3 Z^{78\\; 2}}^{m} \\scal{\\mbox{-}3 Z_{12}^{\\; 2}}^{n}|Z|^{-2(s+n+m)} \\nonumber \\\\*\n&=&\\scal{\\mbox{-}\\tfrac{3}{2}}^{n+m} \\tfrac{(s+n-5)!(s+n+m-1)!(s+n+m-3)!(s-n+m-5)!}{(s+n-m-5)! (s+2n-1)!(s+2n-3)!(s-n-5 )! } \\scal{{\\cal D}_{12ij} {\\cal D}^{ijkl} {\\cal D}_{kl12}}^{n-m}\\scal{{\\cal D}_{12pq} {\\cal D}^{78pq}}^{n+m} |Z|^{-2s} \\nonumber \\end{eqnarray}\nsuch that acting with a derivative operator in the conjugate representation $[0,0,0,0,0,2m,0]$ does not produce an independent tensor. One has in particular for $s$ an integer greater than $5$ \n\\begin{equation}}\\def\\ee{\\end{equation} \\scal{{\\cal D}^{78ij} {\\cal D}_{ijkl} {\\cal D}^{kl78}} \\scal{{\\cal D}_{12pq} {\\cal D}^{pqrs} {\\cal D}_{rs12}}^{s-4} |Z|^{-2s} = 0 \\ . 
\\ee\nThis equation is the equivalent on $E_{7(7)}\/SU_{\\scriptscriptstyle \\rm c}(8)$ of the equation on $SL(2)\/SO(2)$ \n\\begin{equation}}\\def\\ee{\\end{equation} \\bar {\\cal D} {\\cal D}^{s-1} E_{[s]} = 0 \\ , \\ee\nfor integral $s$, and we would like to see that the function $E_{\\mbox{\\DEVII000000s}}$ also decomposes somehow into a ``holomorphic'' part ${\\cal F}_s$ and an ``anti-holomorphic'' part $\\bar {\\cal F}_s$, satisfying respectively \n\\begin{equation}}\\def\\ee{\\end{equation} \\scal{{\\cal D}_{12pq} {\\cal D}^{pqrs} {\\cal D}_{rs12}}^{s-4} \\bar {\\cal F}_s = 0 \\ , \\qquad \\scal{{\\cal D}^{78ij} {\\cal D}_{ijkl} {\\cal D}^{kl78}}^{s-4} {\\cal F}_s = 0 \\ , \\ee\nsuch that \n\\begin{equation}}\\def\\ee{\\end{equation} \\scal{{\\cal D}_{12pq} {\\cal D}^{pqrs} {\\cal D}_{rs12}}^{s-4} E_{\\mbox{\\DEVII000000s}} = \\scal{{\\cal D}_{12pq} {\\cal D}^{pqrs} {\\cal D}_{rs12}}^{s-4} {\\cal F}_s \\ , \\label{D3sF} \\ee\nand respectively for the complex conjugate. By consistency, this requires for instance that acting with further derivatives on this tensor does not allow one to recover lower-order tensors with $n_3 3.0$) and the median length of time new leaders stay over the threshold after they have become the highest-status individual. These are plotted in \\figreftext{3} with varying numbers of links originating from each individual ($\\lambda$). The plot demonstrates there are ranges of the inequality parameter ($q$) where leader turnover is relatively high, but the number of leaders is relatively constant. This shows that there is a power-vacuum effect in our model.\n\n\\begin{figure}\n \\includegraphics[width=0.5\\textwidth]{figure_3.png}\n \\caption{During transient periods, as one leader loses status, another quickly replaces it. Increasing the inequality parameter increases the time that leaders stay above the status level. 
The lines in panel {\\bf A} show the mean numbers of leaders over a threshold status level throughout a complete run of the model. For each value of $q$, vertical boxes indicate the relative proportion of timesteps with each number of leaders; for instance, the width of $\\approx 0.98$ at $q=0.532$ and $\\lambda=3$ indicates that there was generally only one leader above the threshold throughout the simulation. Panel {\\bf B} shows the mean time that an individual stays above the threshold after they have reached highest status. Leader turnover was quite high at $q=0.532$, with the average leader lasting 7000 time steps in simulations run over 2 million time steps. Parameters as in \\figreftext{2} unless shown.\n }\n\\end{figure}\n\n\nTo understand the dynamics created by the model in more detail, we compare them to a simpler model of a branching process. Branching processes specify the rates at which an individual with $x_i$ links acquires or loses links. In such processes, we observe a power-law distribution of the number of links when the link gain rate is close to the loss rate. We observe such a power-law distribution when this is the case in our model (see \\figreftext{4}, panel {\\bf A}). The formation of a hump in the distribution to the right is also consistent with branching processes with a reflecting boundary. However, the results suggest our model is not simply a superposition of branching processes for each individual. A superposition would not explain why we find a power-vacuum effect, where there is always a leader, whose identity changes over time (see, e.g., \\figreftext{3}, $q=0.532$, $\\lambda=3$).
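The branching-process comparison above can be sketched numerically. The toy below is our illustration, not the paper's actual model (which couples all individuals through rewiring): it tracks one individual's link count as a birth-death process with a reflecting boundary at zero.

```python
import random

def simulate_links(n_steps, gain=0.5, loss=0.5, seed=1):
    """Toy birth-death (branching) process for one individual's link
    count, with a reflecting boundary at zero: a loss attempt when the
    individual has no links does nothing."""
    rng = random.Random(seed)
    x = 1
    history = []
    for _ in range(n_steps):
        if rng.random() < gain / (gain + loss):
            x += 1        # acquire a link
        elif x > 0:
            x -= 1        # lose a link
        history.append(x)
    return history

# Near criticality (gain close to loss) excursions are long and
# heavy-tailed, the regime in which a power-law link distribution appears.
hist = simulate_links(100_000)
```

At criticality the walk repeatedly returns to zero but occasionally makes very large excursions, qualitatively matching the long-lived leaders seen in the full model.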
The power vacuum arises because the presence of a leader suppresses the others (see \\figreftext{4}, panel {\\bf B}), owing to the relatively high number of links to the leader.\n\n\n\\begin{figure}\n \\includegraphics[width=0.5\\textwidth]{figure_4.png}\n \\caption{Frequency distributions of the numbers of links to individuals demonstrate how individuals show critical behaviour of a branching process over a range of values of $q$ (panel {\\bf A}). There is a truncated power-law distribution with exponent $\\approx -8.0$ in panel {\\bf A} at a relatively low level of $q=0.525$. At higher levels of $q$ we can see how dominant individuals will suppress others to subcritical behaviour (panel {\\bf B}). We can see how the rewiring of links to an extra leader between $q=0.54$ and $q=0.55$ (see \\figreftext{3} panel {\\bf A}) suppresses the frequencies of individuals with a mid-range number of links as $q$ is increased. Parameters are the same as in \\figreftext{2}, $q$ as shown. Simulations were run over 2 million time steps.}\n\\end{figure}\n\n\nIn order to check whether our model is consistent with evidence from the Neolithic Era, we checked that the increase in leadership duration holds over a wide range of parameters and that our results persist in large groups. We ran simulations over a wide range of parameters to investigate the role of each parameter on the dynamics. Over each simulation run, we recorded the number of times there was a change of leader. We counted new leaders when they had not been one of the previous $l-1$ leaders. Our results confirm that as we increased the inequality parameter, we recorded fewer leaders, as individuals spent longer periods as leader. The full range of parameters is presented in the Supplementary Material (see Figures S1-9).\n\n\\section{Discussion}\n\nThe model presented here demonstrates a rich set of leadership structure dynamics amongst individuals in a non-coercive environment. 
The model reveals an interesting phase where competition to be leader is suppressed by the temporary presence of one leader, meaning that when a leader loses status it will be quickly replaced. We show how the dynamics can depend on the level of inequality of alliances between individuals, and on the number of alliances an individual can form. This suggests that technology and social norms can modulate such a system, and implies that self-organisation in a society can play a role in keeping the system near an equilibrium point where leadership changes relatively frequently.\n\nThis work pushes forward our understanding of hierarchies in human networks. That understanding is currently largely based on static networks formed by preferential attachment, in which nodes are more likely to connect to other nodes that are already of high status \\cite{albert_statistical_2002}. Our model presents an alternative where the hierarchy is dynamic: nodes have high numbers of connections (alliances) at some points, and then other nodes take over. \n\nThis work also contributes new insights into the Neolithic transitions in human societies from relatively flat power structures, through a period where leaders changed over time, to dominant institutionalised leaders \\cite{bar-yosef_sedentary_2001}. Our model presents a potential explanation for this, given that status would be closely linked to control of food or other monopolisable resources. Contemporary with the political transitions were innovations in agriculture, which enabled a high-status individual to control a large food surplus. These high-status individuals were able to feed a large number of supporters at relatively little cost to themselves, for example funding a military, enabling them to maintain and eventually institutionalise their power. 
Monopolisable resources could also be less tangible, such as religious authority \\cite{cauvin_birth_2001}; however, the evidence suggests these changes followed technological advances \\cite{whitehouse_complex_2019}. In either case, the growth of population size and the transition to a sedentary, agricultural lifestyle would have made it more difficult for followers to leave their group and hence easier for a dominant individual to monopolise \\cite{carneiro_theory_1970,powers_evolutionary_2014}. These factors, especially the ability to monopolise resources, relate to a high level of the inequality parameter ($q$) in our model, as the leader is able to form alliances in which they exchange a small proportion of these resources to gain loyalty from their supporters.\n\nThe three phases of human leadership dynamics correspond to three phases identified in the organisational psychology literature. Lewin identified three modes of leadership: Laissez Faire, Democratic and Autocratic \\cite{lewin_patterns_1939}. In this analysis, the Laissez Faire mode was found to be the case where there is no central resource to coordinate, and corresponds to the no-leader phase. When there is a central resource, our model predicts that those individuals with higher status are able to control this central resource, and thus not lose out when more individuals join their group. This is because a controlled surplus of this central resource enables a leader to pay off many individuals and maintain their leadership \\cite{bueno_de_mesquita_logic_2005}. The ability to control a central resource means that such groups will switch from the Laissez Faire mode to more Democratic and Autocratic modes. \n\nIn this paper we focussed primarily on applying this model to develop insights regarding the Neolithic transitions from flat power structures to hierarchical societies. 
Future work can build upon these foundations to examine whether this model can be applied to other changes in societal structure, such as the movements from monarchy toward parliamentary democracies in 18th-century Europe, or the transitions of Roman civilization between monarchy, through annually electing two concurrent consuls in the Roman Republic, to a single Imperator Caesar in the Roman Empire. Other work might investigate the impact of relaxing some of our assumptions, for instance by exploring different rewiring rules in which individuals have different numbers of links, or rewire to others based on similar or higher levels of status or numbers of links. The model can also be extended in various ways to better represent the real-world contexts in which leadership dynamics operate; these could include representations of technological innovations, changes in social norms, or power struggles between potential leaders. These extensions would enable us to develop the model further into a powerful exploratory tool for human leadership dynamics. \n\n\\bibliographystyle{unsrt} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:intro}\n\nAn accurate description of excited nucleons and their interaction with probes such as photons at GeV energies has remained elusive for decades. \nThe standard model~\\citep{Gross:1973id,Politzer:1973fx} underpins the structure of the \nnucleons and their excitations, but in the low-energy non-perturbative regime, competing semi-phenomenological models of specific\nreaction dynamics are all that are available. Present-day \nlattice QCD calculations~\\citep{Edwards:2011jj, Edwards:2012fx} and quark models~\\citep{Capstick:2000qj, Capstick:1998uh, Capstick:1986bm,Loring:2001kx,Glozman:1997ag, Giannini:2001kb} predict a richer baryon spectrum than experimentally observed~\\citep{Patrignani:2016xqp, Klempt:2009pi,Koniuk:1979vw}: the so-called {\\it missing resonance problem}. 
There are theoretical approaches to the nucleon resonance spectrum that predict that some quark-model states do not exist, including models with quasi-stable diquarks~\\citep{Anselmino:1992vg},\nAdS\/QCD string-based models~\\citep{Brodsky:2006uq}, and ``molecular'' models in which some baryon resonances are dynamically generated from\nthe unitarized interaction among ground-state baryons and mesons~\\citep{Kolomeitsev:2003kt}.\nBut finding such missing states may in part be an experimental problem: high-mass nucleon resonances may couple weakly to $\\pi N$ and may thus have escaped detection in the analysis of $\\pi N$ elastic scattering experiments.\nFurther, they are wide and overlapping, and partial-wave analysis (PWA) of reaction data for specific final states remains difficult due to channel-coupling effects and insufficient experimental constraints.\nThe experimental results discussed here represent one step in the direction of adding constraints to the hyperon photoproduction database, which ultimately impacts models for nucleon excitations. \n\nCross-section measurements alone are not enough to constrain PWA models of meson production amplitudes. Polarization observables related to the spins of the beam photons, target, and recoiling baryons are also needed.\nPhotoproduction of pseudoscalar mesons is governed by four complex amplitudes that lead to an interaction cross section and 15 spin observables~\\citep{Chew:1957tf, Barker:1975bp,Fasano:1992es, Chiang:1996em, Keaton:1996pe, Sandorfi:2010uv, Nys:2016uel}. \nA \\textit{mathematically complete} experiment would require data, with negligible uncertainties, on a minimum of eight well-chosen observables at each center-of-mass (c.m.) energy, $W$, and meson polar angle, $\\cos \\theta_{c.m.}$. 
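As a numerical illustration of how four complex amplitudes generate spin observables, the sketch below uses a transversity-amplitude convention of the kind found in the PWA literature (e.g., Chiang and Tabakin); overall signs differ between papers, and this function is our illustration, not taken from this article:

```python
import random

def single_spin_observables(b):
    """Unpolarized cross section (arbitrary units) and the single-spin
    asymmetries Sigma, T, P from four transversity amplitudes b[0..3].
    Sign conventions vary between papers; this is one common choice."""
    m = [abs(bi) ** 2 for bi in b]
    sigma0 = sum(m)
    Sigma = (m[0] + m[1] - m[2] - m[3]) / sigma0   # beam asymmetry
    T = (m[0] - m[1] + m[2] - m[3]) / sigma0       # target asymmetry
    P = (m[0] - m[1] - m[2] + m[3]) / sigma0       # recoil polarization
    return sigma0, Sigma, T, P

rng = random.Random(0)
b = [complex(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(4)]
sigma0, Sigma, T, P = single_spin_observables(b)
```

Note that these single-spin asymmetries depend only on the moduli $|b_i|^2$; double-polarization observables such as $E$ involve interference terms between different amplitudes, which is one reason a minimum of eight well-chosen observables is needed.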
In practice, with realistically achievable uncertainties, measurements of many more are needed to select between competing partial wave solutions, and even knowledge of the sign of an asymmetry can provide valuable discrimination~\\cite{Sandorfi:2010uv}. Furthermore, avoiding ambiguities in PWA solutions requires measurements of observables from each spin configuration of the three combinations of beam-target, target-recoil, and beam-recoil polarization~\\cite{Sandorfi:2010uv, Nys:2016uel}.\n\nIn addition, while isospin $I=3\/2$ transitions ($\\Delta^*$ excitations) can be studied with proton-target data alone, both proton- and neutron-target\nobservables are necessary to study $I=1\/2$ transitions and isolate the separate $\\gamma p N^*$ proton and $\\gamma n N^*$ neutron photo-couplings~\\citep{Sandorfi:2013cya}.\nInformation from neutron targets is comparatively scarce~\\citep{ Anisovich:2017afs}, particularly in the hyperon channels~\\citep{Compton_PhysRevC.96.065201, AnefalosPereira:2009zw},\nwhich is why the present measurement is of value. Moreover, the hyperon photoproduction channels $\\gamma N\\rightarrow K \\Lambda (\\Sigma^{0})$\nare attractive for analysis for two reasons. First, the threshold for two-body hyperon final states is at $W \\simeq 1.6$~GeV, above which lie numerous poorly known resonances. Two-body strange decay modes, rather than cascading non-strange many-body decays, may be easier to interpret.\nSecond, the hyperon channels give easy access to recoil polarization observables on account of their self-analyzing weak decays. While the present work does not involve measurement of hyperon polarizations, previous work has shown the benefit of using such information to extract properties of higher-mass nucleon resonances~\\citep{Paterson:2016vmc,Bradford_CxCz, Bradford_xsec, McNabb, Anisovich:2007bq, McCracken, Dey, Anisovich:2017ygb}. 
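The self-analyzing property mentioned above works as follows: in $\\Lambda\\to p\\pi^-$ the decay proton follows $I(\\cos\\theta)\\propto 1+\\alpha_\\Lambda P\\cos\\theta$ in the $\\Lambda$ rest frame, so $\\langle\\cos\\theta\\rangle=\\alpha_\\Lambda P\/3$ gives a simple moment estimator of the recoil polarization $P$. A hedged sketch of that technique (ours, not this analysis's; $\\alpha_\\Lambda\\approx 0.64$ is the PDG value contemporary with this work):

```python
import random

ALPHA_LAMBDA = 0.642   # weak-decay asymmetry parameter (PDG value of that era)

def sample_costheta(pol, n, seed=7):
    """Draw decay-proton cos(theta) from I(c) ~ 1 + alpha*pol*c by rejection."""
    rng = random.Random(seed)
    a = ALPHA_LAMBDA * pol
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.random() * (1.0 + abs(a)) < 1.0 + a * c:
            out.append(c)
    return out

def estimate_polarization(costhetas):
    """Moment estimator: <cos theta> = alpha * P / 3."""
    mean = sum(costhetas) / len(costhetas)
    return 3.0 * mean / ALPHA_LAMBDA

events = sample_costheta(pol=0.5, n=200_000)
p_hat = estimate_polarization(events)   # recovers ~0.5
```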
Thus, \npursuing ``complete\" amplitude information in the hyperon photoproduction channels can be complimentary to the analogous quest in, say, pion photoproduction.\n\nIn this article, we present first-time measurements of the beam-target observable $E$ on a longitudinally polarized neutron bound in deuterium in the quasi-free reaction $\\gamma n(p) \\to K^0Y(p)$.\nThe helicity asymmetry $E$ is formally defined as the normalized difference in photoproduction yield between antiparallel ($\\sigma^{A}$) and parallel ($\\sigma^{P}$) configurations, {\\it i.e.}, settings where the incident photon beam polarization is antialigned or aligned, respectively, with the longitudinal polarization of the target. Following Ref.~\\cite{Barker:1975bp} and Ref.~\\cite{Sandorfi:2010uv} write \n\\begin{equation}\nE=\\frac{\\sigma^{A}-\\sigma^{P}}{\\sigma^{A}+\\sigma^{P}}.\n\\end{equation}\nThis helicity asymmetry, $E$, is related to the cross section by\n\\begin{equation}\n\\left(\\frac{d\\sigma}{d\\Omega}\\right)=\\left(\\frac{d\\sigma}{d\\Omega}\\right)_{0}\\left(1-P_{T}P_{\\odot}E\\right),\n\\label{equation2}\n\\end{equation}\n %\nwhere $\\left(d\\sigma \/ d\\Omega\\right)_{0}$ is the differential cross section averaged over initial spin states and summed over the final states, and $P_{T}$\nand $P_{\\odot}$ are the target longitudinal and beam circular polarizations, respectively. \n\nThe asymmetry results obtained are compared with several model predictions. The first is a single-channel effective Lagrangian approach, \nKaonMAID~\\citep{Mart:1999ed,Lee:1999kd}, with parameter constraints largely imposed from SU(6). Without experimental constraints on the $N^* \\Lambda K^0$ and $\\gamma n N^*$ vertices, the reaction of interest is difficult to model accurately. \nThe second model giving predictions for the present results is the data description given by SAID~\\citep{SAID, Adelseck:1986fb}. 
In general, SAID is more up to date than KaonMAID; for the present reaction channels the SAID predictions are a polynomial fit to all available data before 2008, assuming final state interactions for these polarization observables can be neglected~\\cite{strakovsky}.\nThe third comparison is made to the multichannel K-matrix formalism of the Bonn-Gatchina~\\citep{Anisovich:2012ct} group, which is the most up to date, being constrained by recent first-time measurements~\\citep{Compton_PhysRevC.96.065201} of the differential cross section for the reaction $\\gamma n(p) \\to K^0\\Lambda (p)$ [with $(p)$ as the spectator proton].\n\n\n\\section{Experimental Procedures\n\\label{sec:Section-II}} \n\nThe experiment was performed at the Thomas Jefferson National Accelerator Facility (JLab) using the CEBAF Large Acceptance\nSpectrometer (CLAS)~\\citep{CLAS-NIM}. \nThis setup has been used for several studies of $K^+$ photoproduction of \nhyperonic final states on a proton target~\\citep{Bradford_CxCz, Bradford_xsec, McNabb, Moriya:2014kpv, Moriya:2013hwg, Moriya:2013eb, McCracken, Dey} \nand on an effective neutron (deuteron) target~\\citep{Compton_PhysRevC.96.065201,AnefalosPereira:2009zw}.\nThe present results stem from the so-called ``\\textit{g14}\" run period between December 2011 and May 2012, from which non-strange \nresults have been previously reported~\\citep{Ho:2017kca}. \nThe CEBAF accelerator provided longitudinally polarized electron beams with energies of \n$E_{e}=2.281$, \n$2.257$, \nand $2.541$ GeV, \nand an \\textit{average} electron beam polarization for the present study of $P_{e}=0.82\\pm0.04$, which was measured routinely\nby the Hall-B M\\\"oller polarimeter~\\cite{Moller2}. The electron beam helicity was pseudorandomly flipped between +1 and $-1$ with a 960 Hz flip\nrate. The electron beam was incident on the thin gold radiator of the Hall-B Tagger system~\\citep{Sober} and produced circularly polarized\ntagged photons. 
The polarization of the photons was determined using the Maximon and Olsen formula~\\citep{Olsen:1959zz}\n\\begin{equation}\nP_{\\odot}=P_{e}\\frac{4k-k^{2}}{4-4k+3k^{2}},\n\\end{equation}\nwhere $P_{\\odot}$ and $P_{e}$ are the photon and electron polarizations, respectively, and $k=E_{\\gamma} \/ E_e$ is the ratio between\nthe photon energy and the electron beam energy. \n\nA 5-cm-long solid target of hydrogen deuteride (HD) was used in the experiment~\\citep{Lowry:2016uwa,Bass:2013noa}.\nIt achieved vector polarizations of 25\\%-30\\% for deuterons, i.e., for \n{\\it bound} neutrons in the deuteron with relaxation times of about a year. \nThe polarized target was held at the center of CLAS using an in-beam cryostat that produced a 0.9~T holding field and operated at 50 mK. The target polarization was monitored using nuclear magnetic resonance measurements~\\citep{Lowry:2016uwa}. The orientation of the target longitudinal polarization direction was inverted between periods of data taking, either parallel or antiparallel to the direction of the incoming photon beam. Background events from the unpolarizable target wall material and aluminum cooling wires~\\citep{Bass:2013noa} were removed using empty-target data, as discussed in Secs.~\\ref{sec:Section-IIIa} and~\\ref{sec:Section-IIIb}.\n\nThe specific reaction channel for this discussion came from events of the type $\\gamma d \\to \\pi^+ \\pi^- \\pi^- p (X)$ using a readout trigger requiring a minimum of two charged particles in different CLAS sectors. After particle identification we required the ``spectator,\" $X$, to be an undetected low-momentum proton and possibly a photon, via the missing mass technique, as explained in the next section. In order to determine the $E$ asymmetry experimentally, the event yields in a given kinematic bin of $W$ and kaon center-of-mass angle were obtained by counting events with total c.m. 
helicity $h=3\/2$ (laboratory-frame antiparallel configuration), called $N_{A}$, and events with $h=1\/2$ (laboratory-frame parallel configuration), called $N_{P}$, respectively. The $E$ observable \nwas then computed as \n\\begin{equation}\nE=\\frac{1}{\\overline{P_{T}}\\cdot\\overline{P_{\\odot}}}\\left(\\frac{N_{A}-N_{P}}{N_{A}+N_{P}}\\right),\n\\label{Eq_Eval}\n\\end{equation}\nwhere $\\overline{P_{T}}$ and $\\overline{P_{\\odot}}$ are the run-averaged\ntarget and beam polarizations, respectively. \n\n\n\n\\section{Data Analysis \n\\label{sec:Section-III}}\n\nThe performance of the system was extensively studied for a reaction with much higher count rates than the present one. The nonstrange reaction $\\gamma d \\to \\pi^- p (X) $ was investigated using many of the same analysis steps and methods discussed in this article to extract the $E$ observable for $\\gamma n \\to \\pi^- p$~\\citep{Ho:2017kca}. The analysis steps outlined below were all tested on that reaction. In particular, the boosted decision tree (BDT) selection procedure~\\cite{DruckerCortes, ROE2005577} used below was validated against alternative ``cut-based\" and kinematic fit methods, with the result that the BDT procedure gave $\\sim30\\%$ larger yields of signal events and therefore better statistical precision on the final $E$ asymmetry.\n\n\\subsection{Particle identification}\n\\label{sec:Section-IIIa}\n\nFor this particular analysis, we required that every selected event consist of at least two positive tracks and two negative tracks with associated photon tagger hits~\\cite{Sober}.\nThe CLAS detector system determined the path length, the charge type,\nthe momentum and the flight time for each track~\\citep{Sharabian,Mestayer,Smith}.\nFor each track of momentum $\\overrightarrow{p}$, we compared the measured time of flight, $TOF_{m}$, to a hadron's expected time of\nflight, $TOF_{h}$, for a pion and proton of identical momentum and path length. 
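The expected flight time for each mass hypothesis follows directly from the measured momentum and path length, $TOF_h = L\/(\\beta c)$ with $\\beta = p\/\\sqrt{p^2+m^2}$. A minimal sketch (illustrative numbers, not CLAS code):

```python
import math

C_CM_PER_NS = 29.9792458                        # speed of light in cm/ns
MASS = {"pion": 0.13957, "proton": 0.938272}    # GeV/c^2

def expected_tof(path_cm, p_gev, mass_gev):
    """Expected flight time (ns) for a track of given momentum and path
    length under a particular mass hypothesis."""
    beta = p_gev / math.sqrt(p_gev ** 2 + mass_gev ** 2)
    return path_cm / (beta * C_CM_PER_NS)

# A 1 GeV/c track over an illustrative 500 cm flight path:
t_pi = expected_tof(500.0, 1.0, MASS["pion"])
t_p = expected_tof(500.0, 1.0, MASS["proton"])
dt = t_p - t_pi   # the slower proton arrives a few ns later
```

The several-nanosecond separation at these momenta is what makes a cut on $TOF_m - TOF_h$ an effective pion/proton discriminator.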
\nCLAS-standard cuts were placed on the difference between the measured and the expected time of flight, $\\Delta TOF=TOF_{m}-TOF_{h}$. We selected events for which the two positively charged particles were the proton and $\\pi^{+}$, and the two negatively charged were both $\\pi^{-}$. Well-established CLAS fiducial cuts were applied to select events with good spatial reconstruction. \n\nEvents originating from unpolarized target material\\textemdash aluminum cooling wires and polychlorotrifluoroethylene (pCTFE)\\textemdash dilute $E$ and must be taken into account. A period of data taking was dedicated to an \\textit{empty} target cell in\nwhich the HD material was not present. This set of data was used to study and remove the bulk of the target material background on the basis of a loose missing mass cut. \nFigure \\ref{Z-vertex} shows the resulting reconstructed reaction vertex for four-track data along the beam line both for a full target and for an empty target scaled to match the counts in several downstream target foils. \nThe full-to-empty ratio of about 3.3:1 in the target region was important in selecting the optimal BDT cut discussed below.\n\n\\begin{figure}\n\\vspace{-0.5cm}\n\\includegraphics[width=0.5\\textwidth]{Z_vertex_RS2}\\protect\n\\vspace{-0.5cm}\n\\caption{The open histogram shows the vertex distribution of events along the beam\nline for a full target. Dashed red lines show the nominal target boundaries. The peaks at $z>0$ are from target-independent foils in the cryostat; the positions of two are highlighted with dotted blue lines~\\cite{Lowry:2016uwa}. The filled histogram shows the scaled target-empty background distribution. \n\\label{Z-vertex} }\n\\end{figure}\n\nFigure \\ref{MMass} shows the resulting target-full missing mass distribution for spectator $X$ in $\\gamma d\\rightarrow\\pi^{-}\\pi^{+}\\pi^{-}p(X)$,\nafter these cuts. 
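The missing mass $m_X$ used here comes from four-momentum conservation, $P_X = P_\\gamma + P_d - \\sum P_{\\rm detected}$ with $m_X^2 = P_X \\cdot P_X$. A minimal sketch in GeV units (the event below is fabricated so that the missing particle is a slow spectator proton):

```python
import math

PROTON_MASS = 0.938272    # GeV/c^2
DEUTERON_MASS = 1.875613  # GeV/c^2

def four_vec(m, px, py, pz):
    """(E, px, py, pz) for a particle of mass m and 3-momentum p."""
    return (math.sqrt(m * m + px * px + py * py + pz * pz), px, py, pz)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def inv_mass(p):
    m2 = p[0] * p[0] - p[1] * p[1] - p[2] * p[2] - p[3] * p[3]
    return math.sqrt(max(m2, 0.0))

# Beam photon along z, deuteron target at rest:
beam = (1.8, 0.0, 0.0, 1.8)
target = (DEUTERON_MASS, 0.0, 0.0, 0.0)

# Fabricate a detected pi- pi+ pi- p system that carries everything
# except a spectator proton with 50 MeV/c of Fermi momentum:
spectator = four_vec(PROTON_MASS, 0.05, 0.0, 0.0)
detected = sub(add(beam, target), spectator)

missing = sub(add(beam, target), detected)
mm = inv_mass(missing)   # lands on the proton mass by construction
```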
A clear peak corresponding to the spectator proton is seen at point 1 for events that produced a $\\Lambda$ particle. A loose cut was applied to reject events with missing mass larger than 1.4 GeV\/c$^{2}$ at point 4 because of the presence of $\\Sigma^{0}\\rightarrow\\pi^{-}p(\\gamma)$ events. These have a 73-MeV photon in the final state in addition to the proton, and the distribution peaks at point 2 and has a kinematic tail to about point 3.\n\n\\begin{figure}[htbp]\n\\vspace{-0.5cm}\n\\includegraphics[width=0.5\\textwidth,height=0.35\\textwidth]{MMass_RS}\\protect\n\\vspace{-0.5cm}\n\\caption{The missing mass distribution, $\\gamma d\\rightarrow\\pi^{-}\\pi^{+}\\pi^{-}pX$\nafter PID cuts showing the dominant spectator proton peak at ``1.'' The magenta line at ``4'' indicates a loose event rejection for $m_X > 1.4$~GeV\/c$^2$. This rejects unambiguous\nbackground but keeps $\\Sigma^{0}\\rightarrow\\pi^{-}p(\\gamma)$ events in which both a proton and a photon are missing between ``2'' and ``3.'' (See text.)\n\\label{MMass} }\n\\end{figure}\n\n\n\\subsection{$K^{0}Y$ event selection using BDT analysis}\n\\label{sec:Section-IIIb}\n\nBecause of the small reaction cross section in this experiment, a method was needed to optimally isolate the events of interest with minimal statistics loss.\nThe multivariate analysis tool called the boosted decision tree (BDT) approach was used to select the exclusive events of interest in this study. Three steps were needed to achieve this result. The first BDT was created to select events from both the \n$\\gamma d\\rightarrow\\pi^{-}\\pi^{+}\\pi^{-}p(p_{S})$ and the $\\gamma d\\rightarrow\\pi^{-}\\pi^{+}\\pi^{-}p(p_{S}\\gamma)$ final states, \nconsistent with quasi-free production from a deuteron. This was to reject target-material background and events with a high missing momentum of the undetected spectator nucleon, $p_S$. 
The second BDT was created to remove the nonstrange pionic background with the same final states, that is, to pick out events with $\\Lambda$ and $\\Sigma^0$ intermediate-state particles. The third BDT was created to separate the $K^{0}\\Lambda$ and $K^{0}\\varSigma^{0}$ events. \n\nThis BDT algorithm is more efficient than a simple ``cut'' method in both rejecting background and keeping signal events~\\citep{ROE2005577,Ho-thesis}. The method builds a ``forest'' of \\textit{distinct} \\textit{decision trees} that are linked together by a \\textit{boosting} mechanism. Each decision tree constitutes a \\textit{disjunction} of logical conjunctions (i.e., a graphical representation of a set of \\textit{if-then-else} rules). Thus, the entire reaction phase-space is considered by every decision tree. Before employing the BDT for signal and background classification, the BDT algorithm needs to be constructed (or trained) with \\textit{training} data\\textemdash wherein the category of every event is definitively known. We used the ROOT implementation of the BDT algorithm~\\citep{Hocker:2007zz}. \nEvery event processed by the constructed BDT algorithm is assigned a value between $-1$ and $+1$ that\nquantifies how likely the processed event is a background event (closer to $-1$) or a signal event (closer to $+1$). An optimal cut on the\nBDT output is chosen to maximize the $S\/\\sqrt{S+B}$ ratio, where $S$ and $B$ are the estimations, based on training data, of the initial number of signal\nand background events, respectively.\n\nThe initial assignment of the $\\pi^-$ particles to either $K^0$ or $\\Lambda$ decay was studied with Monte Carlo simulation, and a loose selection based on invariant masses was made. Specific details of these cuts are given in Ref.~\\citep{Ho-thesis}. \n\nThe first BDT was trained using real empty-target data for the background training. 
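The $S\/\\sqrt{S+B}$ optimisation described above can be sketched as a simple threshold scan over the classifier output. This is a toy illustration with fabricated score distributions, not the ROOT TMVA machinery actually used:

```python
def best_cut(signal_scores, background_scores, n_steps=200):
    """Return (cut, figure_of_merit) maximising S/sqrt(S+B) over
    thresholds on a BDT output in [-1, 1]."""
    best = (-1.0, 0.0)
    for i in range(n_steps + 1):
        cut = -1.0 + 2.0 * i / n_steps
        s = sum(1 for x in signal_scores if x > cut)
        b = sum(1 for x in background_scores if x > cut)
        if s + b == 0:
            continue
        fom = s / (s + b) ** 0.5
        if fom > best[1]:
            best = (cut, fom)
    return best

# Toy, well-separated score distributions:
sig = [0.2 + 0.003 * i for i in range(200)]    # clustered towards +1
bkg = [-0.8 + 0.003 * i for i in range(200)]   # clustered towards -1
cut, fom = best_cut(sig, bkg)
```

For overlapping distributions the optimum trades residual background against signal loss, which is why the full-to-empty target ratio quoted earlier matters when choosing the working point.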
A signal Monte Carlo simulating quasifree hyperon production on the neutron was used for signal training data. The momentum distribution of the spectator proton, $p_{s}$, followed the Hulth\\`en potential~\\citep{Cladis_PhysRev.87.425,Lamia:2012zz} for the deuteron. Based on this training, an optimal BDT cut that maximized the estimated initial $S\/\\sqrt{S+B}$ ratio was selected. Figure \\ref{Z_vertex_BDT} shows the total (blue histogram) and rejected (black histogram) event distributions for the first BDT cut. In comparing Figs.~\\ref{Z-vertex} and \\ref{Z_vertex_BDT}, two items should be noted. First, the BDT was trained to remove target-material background events with missing momentum not consistent with a Hulth\\`en distribution. Second, the BDT background-rejection efficiency was not perfect, leaving some target-material background events that were removed in a subsequent step (Sec.~\\ref{sec:Section-IIIc}). We then rejected events with $z>-2$~cm on the reaction vertex to remove remaining unambiguous background events due to various cryostat foils. \n\n\\begin{figure}[htbp]\n\\vspace{-0.5cm}\n\\includegraphics[width=0.5\\textwidth]{Z_vertex_BDT1_RS}\n\\vspace{-1.cm}\n\\protect\\caption{The reconstructed distribution of the reaction vertex along the beam\nline showing target-full events in the top histogram (blue) after loose $K^0Y^0$ selection and the missing mass cut shown in Fig.~\\ref{MMass}.\nEvents selected by the first BDT are shown in the middle histogram (red), and rejected events in the bottom histogram (black).\nThe magenta line indicates a loose cut to reject unambiguous target-material background. \n\\label{Z_vertex_BDT} }\n\\end{figure}\n\nThe second-step BDT was trained using a four-body phase-space $\\gamma d\\rightarrow\\pi^{-}\\pi^{+}\\pi^{-}p(p_{S})$\nsimulation as background training data and the $\\gamma d\\rightarrow K^{0}\\Lambda(p_{S})$ simulation as signal training data. 
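The Hulth\\`en weighting of the spectator momentum mentioned above can be sketched with rejection sampling of the momentum-space density $n(p)\\propto p^2\\left(1\/(\\alpha^2+p^2)-1\/(\\beta^2+p^2)\\right)^2$. The parameter values below are commonly quoted deuteron values, and the sampler is our illustration, not the experiment's event generator:

```python
import random

ALPHA = 0.0456   # GeV, ~0.231 fm^-1 (commonly quoted deuteron value)
BETA = 0.2502    # GeV, ~1.268 fm^-1 (commonly quoted deuteron value)

def hulthen_density(p):
    """Unnormalised momentum-space density p^2 |phi(p)|^2."""
    phi = 1.0 / (ALPHA ** 2 + p * p) - 1.0 / (BETA ** 2 + p * p)
    return p * p * phi * phi

def sample_spectator_momenta(n, p_max=0.5, seed=11):
    """Rejection-sample n spectator momenta (GeV/c) below p_max."""
    rng = random.Random(seed)
    # crude envelope: scan a grid for the density maximum
    peak = max(hulthen_density(0.001 * i) for i in range(1, 500))
    out = []
    while len(out) < n:
        p = rng.uniform(0.0, p_max)
        if rng.random() * peak < hulthen_density(p):
            out.append(p)
    return out

momenta = sample_spectator_momenta(20_000)
```

Most sampled momenta fall below about 100 MeV/c, which is why a high missing momentum is a useful handle for rejecting non-quasifree background.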
There were two negative pions in each event: one from the decay of the $K^0$ and one from the decay of the hyperon. The goal of the BDT analysis was to use the available correlations among all particles to sort the pions correctly and to select events with decaying strange particles. The main training variables at this stage of the analysis included the 3-momenta of all the particles and the detached decay vertices of the $K^0$s and the hyperons. \nFigure~\\ref{lambda K0 IM} shows the total (red histogram) and\nrejected (black histogram) events after the optimized cut for this second BDT analysis step. The efficiency of the second BDT was less than 100\\%; thus, some background events remained in the selected data sample. The dips near the signal maxima in the background spectra show that the background is slightly undersubtracted. This issue is discussed and corrected below. A fit with a Breit-Wigner line shape and a polynomial was used to estimate that the \nstrange-to-nonstrange ratio of events in the data set at this stage was about 2.3:1 in the peak regions.\n\n\\begin{figure}[htpb]\n\\includegraphics[width=0.48\\textwidth]{IM_ppim_g2}\n\\includegraphics[width=0.48\\textwidth]{IM_pippim_g2}\n\n\\protect\\caption{Invariant $\\pi_{\\Lambda}^{-}p$ mass (top) and invariant $\\pi_{K^{0}}^{-}\\pi^{+}$\nmass (bottom) after target-material background rejection by the first\nBDT cut. Black histograms show events rejected by the second BDT cut. Fits of the sum (red curve) of a Breit-Wigner line shape (blue curve) and a third-order polynomial (black curve) are shown. The fits aid the discussion in the text but were not used in the subsequent analysis. 
\\label{lambda K0 IM} }\n\\end{figure}\n\nFor the final task, separating the $K^{0}\\Lambda$ and $K^{0}\\varSigma^{0}$ channels, the third BDT was trained using $\\gamma d\\rightarrow K^{0}\\Sigma^{0}(p_{S})$ simulation as ``background'' training data and $\\gamma d\\rightarrow K^{0}\\Lambda(p_{S})$ simulation as ``signal'' training data. Note that the term ``background'' used here is just for semantic convenience, since both channels were retained after applying the third optimized BDT cut.\nFigure~\\ref{MMass_off_K0_simulation} shows in the left [right] histogram the classification success of the third BDT on $\\gamma d\\rightarrow K^{0}\\Lambda(p_{S})$ [$\\gamma d\\rightarrow K^{0}\\Sigma^{0}(p_{S})$] simulation data. The histograms reveal that a small number of $K^{0}\\Lambda$ events would be misclassified as $K^{0}\\varSigma^{0}$ events, and vice versa. In the next section, the correction for the contamination of both final\ndata sets is discussed. Figure \\ref{MMass_off_K0} shows the separation result from the third BDT on real data. \n\n\\begin{figure*}[t]\n\\includegraphics[width=0.5\\textwidth]{MM_pippim_K0Lambda_MC}\\includegraphics[width=0.5\\textwidth]{MM_pippim_K0Sigma0_MC}\\protect\n\\caption{Distributions of missing mass from the reconstructed $K^{0}$, $\\gamma n\\rightarrow\\pi_{K^{0}}^{-}\\pi^{+}X$\nfor simulation data, assuming that the target is an at-rest neutron.\nLeft: the magenta histogram represents events with correct $K^{0}\\Lambda$\nclassification, while the cyan histogram represents events with the wrong\n$K^{0}\\Sigma^{0}$ classification. 
Right: the cyan histogram represents\nevents with the correct $K^{0}\\Sigma^{0}$~classification, while the magenta\nhistogram represents events with the wrong $K^{0}\\Lambda$ classification.\n\\label{MMass_off_K0_simulation} }\n\\end{figure*}\n\n\n\\begin{figure}[t]\n\\includegraphics[width=0.5\\textwidth]{MM_pippim}\\protect\\caption{Distribution of missing mass \nfrom the reconstructed $K^{0}$, $\\gamma n\\rightarrow\\pi_{K^{0}}^{-}\\pi^{+}X$\nfor real data, assuming that the target is an at-rest neutron,\nafter rejecting non-hyperon background by the second BDT cut.\nThe magenta (cyan) histogram was classified as $K^{0}\\Lambda$ ($K^{0}\\Sigma^{0}$)\nusing the third BDT selection step. \\label{MMass_off_K0} }\n\\end{figure}\n\n\\subsection{Corrections for remaining backgrounds and asymmetry calculation}\n\\label{sec:Section-IIIc}\n\nThe $E$ asymmetry values for both target-material and non-strange background events were statistically\nconsistent with 0~\\citep{Ho-thesis}; therefore, we implemented an approximation procedure\nto correct for the dilution effect from the remaining background.\nWe estimated two ratios: one for the remaining fraction of target background (TGT), $R^{TGT}$, \nand one for the fraction of remaining nonstrange (NS) final-state events mixed with the hyperon events, $R^{NS}$.\nWe write \n$R^{TGT}= {N^{remain}} \/ {N^{HD}}$,\nand \n$R^{NS}={Y^{remain}}\/{Y^{K^{0}Y}}$. \n$N^{remain}$ and $N^{HD}$ are the estimated number of remaining target-material background events\nand true deuteron events after the first BDT and $z=-2$~cm vertex cuts, respectively.\n$Y^{remain}$ and $Y^{K^{0}Y}$ are the estimated number of remaining nonstrange and true $K^{0}Y$ events after the second\nBDT cut, respectively. 
Next, let $Y_{BDT}$ be the number of events that passed the $z$-vertex cut and the first two BDT selections; then $Y_{BDT}$\ncan be partitioned into \n\\begin{align}\nY_{BDT}&=\\left(1+R^{NS}\\right)Y^{K^{0}Y} \\nonumber \\\\\n&=\\left(1+R^{NS}\\right)\\left[Y_{HD}^{K^{0}Y}+Y_{TGT}^{K^{0}Y}\\right],\\;\\label{eq:wideeq-2-3}\n\\end{align}\nsince $Y^{K^{0}Y}$ also comprises events from the remaining target-material\nbackground and the bound signal events. If we further allow \n$Y_{TGT}^{K^0 Y} \/ Y_{HD}^{K^0 Y} = N^{remain} \/ N^{HD} = R^{TGT}$,\nthen $Y_{BDT}$ can finally be expressed as\n\\begin{equation}\nY_{BDT}=\\left(1+R^{NS}\\right)\\left(1+R^{TGT}\\right)Y_{HD}^{K^{0}Y},\\;\\label{eq:wideeq-2}\n\\end{equation}\nor\n\\begin{equation}\nY_{HD}^{K^{0}Y}=\\left(1+R^{NS}\\right)^{-1}\\left(1+R^{TGT}\\right)^{-1}Y_{BDT}.\n\\label{eq:wideeq-2-1}\n\\end{equation}\nThese relations should remain valid for both $Y_{BDT}^{K^{0}\\Lambda}$ and $Y_{BDT}^{K^{0}\\Sigma^{0}}$, \nwhich are the $K^{0}\\Lambda$ and $K^{0}\\Sigma^{0}$ signal events from bound neutrons, respectively. \nThe backgrounds that leak through the BDT filters will be helicity independent and will be subtracted in the numerator of Eq.~(\\ref{Eq_Eval}). \nUsing Eq.~(\\ref{eq:wideeq-2-1}) to correct the summed yields in the denominator gives the corrected asymmetry as\n\\begin{equation}\nE_{corrected}^{K^{0}Y}=\\left(1+R^{NS}\\right)\n\\times\\left(1+R^{TGT}\\right)E_{BDT}^{K^{0}Y},\\;\\label{eq:wideeq-2-2}\n\\end{equation}\nwhere $E_{BDT}^{K^{0}Y}$ is obtained from $Y_{BDT}^{K^{0}Y}$ (or,\nmore exactly, $Y_{BDT}^{P}$ and $Y_{BDT}^{A}$\nof the $K^{0}Y$ parallel and antiparallel subsets). 
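Numerically, the correction in Eq.~(\ref{eq:wideeq-2-2}) is a simple multiplicative rescaling of the measured asymmetry. A minimal Python sketch, using the average background fractions quoted in the text (the example asymmetry value is arbitrary):

```python
def corrected_asymmetry(e_bdt, r_ns, r_tgt):
    """Rescale the BDT-selected asymmetry by the helicity-independent
    background fractions: E_corrected = (1 + R_NS)(1 + R_TGT) * E_BDT."""
    return (1.0 + r_ns) * (1.0 + r_tgt) * e_bdt

R_NS, R_TGT = 0.17, 0.09                   # average remaining-background fractions
scale = (1.0 + R_NS) * (1.0 + R_TGT)
print(f"dilution scale factor: {scale:.4f}")   # 1.2753
print(corrected_asymmetry(0.50, R_NS, R_TGT))  # 0.50 -> ~0.638
```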
\nFrom the simulations we found average values of $R^{TGT}$ and $R^{NS}$ of 0.09 and 0.17, respectively, with some dependence on\nthe specific run period.\n\nNext we discuss a correction for the third BDT classification result.\nRecall that the third BDT selection separates the true signal $K^{0}Y$ events into two subsets: one is mostly $K^{0}\\Lambda$ events,\nand the other is mostly $K^{0}\\varSigma^{0}$. If we denote $N_{\\Lambda}^{BDT}$ and $N_{\\Sigma^{0}}^{BDT}$ as the\nnumber of events the third BDT identified as $K^{0}\\Lambda$ and $K^{0}\\varSigma^{0}$ events, respectively, then we have the expressions\n\n\\begin{equation}\nN_{\\Lambda}^{BDT}=\\omega_{\\Lambda}N_{\\Lambda}^{true}+(1-\\omega_{\\Sigma^{0}})N_{\\Sigma^{0}}^{true},\\;\\label{eq:wideeq-3}\n\\end{equation}\n\\begin{equation}\nN_{\\Sigma^{0}}^{BDT}=(1-\\omega_{\\Lambda})N_{\\Lambda}^{true}+\\omega_{\\Sigma^{0}}N_{\\Sigma^{0}}^{true},\\;\\label{eq:wideeq-3-1}\n\\end{equation}\nwhere $\\omega_{\\Lambda}$ and $\\omega_{\\Sigma^{0}}$ are the fractions of events correctly identified: these values were\nestimated based on simulation data. After rearrangement, we arrive at the expressions \n\\begin{align}\nN_{\\Lambda}^{true}&=\\left[\\omega_{\\Lambda}-\\frac{(1-\\omega_{\\Sigma^{0}})}{\\omega_{\\Sigma^{0}}}(1-\\omega_{\\Lambda})\\right]^{-1} \\nonumber \\\\\n&\\times \\left[N_{\\Lambda}^{BDT}-\\frac{(1-\\omega_{\\Sigma^{0}})}{\\omega_{\\Sigma^{0}}}N_{\\Sigma^{0}}^{BDT}\\right],\\;\n\\label{eq:wideeq}\n\\end{align}\n\\begin{align}\nN_{\\Sigma^{0}}^{true}&=\\left[\\omega_{\\Sigma^{0}}-\\frac{(1-\\omega_{\\Lambda})}{\\omega_{\\Lambda}}(1-\\omega_{\\Sigma^{0}})\\right]^{-1} \\nonumber \\\\ \n&\\times \\left[N_{\\Sigma^{0}}^{BDT}-\\frac{(1-\\omega_{\\Lambda})}{\\omega_{\\Lambda}}N_{\\Lambda}^{BDT}\\right]\\;\n\\label{eq:wideeq-1}.\n\\end{align}\n\nThe \\textit{corrected} $E$ asymmetry was obtained using the derived $N_{\\Lambda}^{true}$ and $N_{\\Sigma^{0}}^{true}$ by using\nEq.~(\\ref{Eq_Eval}). 
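The unfolding in Eqs.~(\ref{eq:wideeq}) and (\ref{eq:wideeq-1}) is just the inversion of a $2\times2$ migration matrix. A small self-check in Python (the true yields are invented; the $\omega$ values are the simulation averages quoted in the text) forward-folds them and then recovers them:

```python
import numpy as np

w_lam, w_sig = 0.87, 0.91   # correct-identification fractions (from simulation)

# BDT-labeled yields are a linear mixture of the true yields.
M = np.array([[w_lam,       1.0 - w_sig],
              [1.0 - w_lam, w_sig      ]])
n_true = np.array([1000.0, 600.0])   # invented true (Lambda, Sigma0) yields
n_bdt = M @ n_true                   # what the third BDT would label

# Closed-form solution for N_Lambda^true, as written in the text:
k = (1.0 - w_sig) / w_sig
n_lam = (n_bdt[0] - k * n_bdt[1]) / (w_lam - k * (1.0 - w_lam))

print(n_lam)                         # recovers the input 1000.0
print(np.linalg.solve(M, n_bdt))     # same answer by direct inversion
```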
From the simulations we found average values of $\\omega_Y$ of 0.87 and 0.91 for $\\Lambda$ and $\\Sigma^0$ events, respectively.\n\nThe neutron polarization in the deuteron is smaller than the deuteron polarization because the deuteron wavefunction has, in addition to an $S$-wave component, a $D$-wave component in which the spin of the neutron need not be aligned with the deuteron spin. This was studied using data for the $\\gamma n \\to \\pi^- p $ reaction and reported in our previous publication~\\citep{Ho:2017kca}. It was found that for spectator recoil momenta of less than 100~MeV\/$c$ the correction was negligible. Had we cut on the recoil momentum at 200~MeV\/$c$ rather than 100~MeV\/$c$, a measured dilution factor of $(8.6\\pm0.1)$\\% would have been necessary for the nonstrange channel. But different reaction channels may exhibit different sensitivities to recoil momentum. For the reaction under discussion here we could not afford the statistical loss by cutting on recoil momentum, and we elected to make a conservative correction based on the general considerations in \\citep{Ramachandran:1979ck}. The neutron polarization can\nbe estimated as $P_{n}=P_{d}(1-\\frac{3}{2}P_{D})$, where $P_{n}$ and $P_{d}$ are neutron and deuteron polarizations, respectively, and $P_{D}$ denotes the deuteron $D$-state probability. The latter is not strictly an observable and needs only to be treated consistently within a given $NN$ potential.\nFollowing Ref.~\\citep{Ramachandran:1979ck}, we take the $D$-state contribution averaged over a range of $NN$ potentials as about 5\\%, which implies that the neutron polarization is 92.5\\% of the deuteron polarization, or a 7.5\\% dilution factor. \n\n\\subsection{Systematic Uncertainties}\n\nWe implemented four systematic studies to quantify the robustness of the trained BDT algorithms and the sensitivity of our results on\nthe correction procedures introduced in the previous section. 
\nTwo tests studied the effect of loosening the first and the second BDT cuts, respectively. One test focused on the sensitivity of the $E$ results to the third correction\\textemdash the correction procedure that was implemented to ``purify'' the final selected $K^{0}\\Sigma^{0}$($K^{0}\\Lambda$) sample. In the fourth test, we reduced the beam and target polarizations by one standard deviation of their respective total uncertainties (statistical and systematic) to study the changes in the $E$ results.\n\nFinally, we note a complication that could occur when summing $\\Lambda$ yields to create the $E$ asymmetries. The relative angular distribution between the $\\pi^-$ and the $p$ that are used to reconstruct a $\\Lambda$ carries information on the recoil polarization of the $\\Lambda$. When summed over azimuthal angles, this information is lost. However, limitations in detector acceptance could result in incomplete integration, which in principle could introduce into Eq.~(\\ref{equation2}) a dependence on six additional observables~\\cite{Sandorfi:2010uv}. The gaps in CLAS acceptance are modest, and due to the lower-than-expected production cross sections, the data below are presented in broad kinematic bins, which tends to dilute such effects. On the scale of our statistical uncertainties, such corrections are expected to be negligible, and we have not attempted to correct for them.\n\n\n\n\\section{Results \\label{sec:Section-IV}}\n\nWe present here the results for the $E$ asymmetry in two $W$ energy bins. The lower bin is from 1.70 to 2.02 GeV and denoted $W_{1}$, while the higher bin is from 2.02 to 2.34 GeV and referred to as $W_{2}$. Due to small cross sections for $K^0Y$ photoproduction, and to detector inefficiencies that are amplified by the required identification of four charged particles, our statistics are sufficient for only three bins in the $K^0$ center-of-mass production angle. 
The measurements for the $\\gamma n \\to K^0 \\Lambda$ reaction are plotted together with predictions from the KaonMAID, SAID, and Bonn-Gatchina (BnGa) models in Fig.~\\ref{Ecostheta_K0L}. The data show that the $K^0\\Lambda$ asymmetry is largely positive below 2~GeV and mostly negative above 2~GeV, with no finer trends discernible. Values of $E$ must approach $+1$ at $\\cos \\theta^{c.m.}_{K^0}\\to \\pm 1$ to conserve angular momentum. Thus, the values for $E$ in bin $W_2$ must change rather rapidly near the extreme angles.\n\n\nFor comparison, the PWAs combine results from many experiments at different energies, and this results in varying degrees of sensitivity to energy and angle. This is illustrated in Fig.~\\ref{Ecostheta_K0L} by the SAID and BnGa PWA predictions at the limits of the energy bins. None of the models were tuned to these results; that is, the models are all predictions based on fits to previously published data on other observables.\nFirst, one observes that the data are not statistically precise enough to discriminate strongly among the models. In the lower $W$ bin all three models\ncan be said to agree with the data. In the higher $W$ bin the SAID model may be slightly favored by the data among the three.\n\n\\begin{figure}[htbp]\n\\vspace{-0.5cm}\n\\includegraphics[angle=-90 , width=0.55\\textwidth, trim=0 3.5cm 0 0, clip]{e-k0-lambda-1.pdf}\n\\vspace{-10mm} \n\\caption{Helicity asymmetry $E$ for the ${K^{0}\\Lambda}$ final state (with combined statistical and systematic uncertainties) vs. $\\cos\\theta_{K^{0}}$. The asymmetries are shown with the neutron-target theoretical models KaonMAID~\\citep{Mart:1999ed} (dashed red curve), SAID~\\citep{SAID} (dot-dashed blue curve), and Bonn-Gatchina~\\citep{Anisovich:2007bq,Anisovich:2012ct} (solid black curve). Because of the 0.32-GeV-wide $W$ bins, each model is represented by two curves, computed at the bin endpoint $W$ values, as labeled. 
\n\\label{Ecostheta_K0L} }\n\\end{figure}\n\nThe results for the $\\gamma n \\to K^0 \\Sigma^0$ channel are plotted in Fig.~\\ref{Ecostheta_K0S}, together with model predictions from SAID and Kaon-MAID. In contrast to the $K^0 \\Lambda$ channel at lower $W$, here the data hint at less positive values for $E$. \nIn the bin for $W$ above 2 GeV, the data are also consistent with 0 for $K^0\\Sigma^0$, whereas the $K^0\\Lambda$ data tended to be negative. In fact, the $K^0\\Sigma^0$ asymmetry is consistent with 0 in all available bins. \nThe model comparisons show that the KaonMAID prediction for the $K^0\\Sigma^0$ channel in the higher $W$ bin are probably not consistent with the data, while the SAID result is consistent with the data. For the $K^0\\Sigma^0$ case we do not have predictions from the Bonn-Gatchina model because the unpolarized differential cross section has not been measured yet, and without it the model does not have a prediction available. \n\n\\begin{figure}[htbp]\n\\vspace{-0.5cm}\n\\includegraphics[angle=-90 , width=0.55\\textwidth, trim=0 3.5cm 0 0, clip]{e-k0-sigma0-1.pdf}\n\\vspace{-10mm} \n\\caption{Helicity asymmetry $E$ for the ${K^{0}\\Sigma^0}$ final state (with combined statistical and systematic uncertainties) vs. $\\cos\\theta_{K^{0}}$ for two 0.32-GeV-wide energy bands in $W$, as labeled. Model curves are as in Fig.~\\ref{Ecostheta_K0L}.\n\\label{Ecostheta_K0S} }\n\\end{figure}\nIn order to show one other comparison between data and theory, we plot some of the present results for a neutron target together with the model predictions for the $K^+ \\Lambda$ reaction on a {\\textit {proton}} target in Fig.~\\ref{Ecostheta_K0L2}. This is intended to show the difference in the model predictions on protons versus neutrons. One sees how different the three model predictions are for protons versus neutrons. 
One notes that the proton-target calculations all tend to be closer to the new data we are presenting for a neutron target. This suggests that calculations of the $E$ observable for a neutron target can be improved. \nThus, we may expect these present results to have some impact on the further development of these models.\n\n\\begin{figure}[htbp]\n\\vspace{-0.5cm}\n\\includegraphics[angle=-90 , width=0.55\\textwidth, trim=0 3.5cm 0 0, clip]{e-k0-lambda-2.pdf}\n\\vspace{-10mm} \n\\caption{\nHelicity asymmetry $E$ for the ${K \\Lambda}$ final state vs. $\\cos\\theta_{K^{0}}$ for energy band $W_2$. Left: Data from Fig.~\\ref{Ecostheta_K0L} together with model predictions for a neutron target. Right: Model calculations for the $K^+ \\Lambda$ reaction on a proton target, as computed using KaonMAID~\\citep{Mart:1999ed} (dashed red curve), SAID~\\citep{SAID} (dot-dashed blue curve) and Bonn-Gatchina~\\citep{Anisovich:2007bq,Anisovich:2012ct} (solid and dashed black curves). Curves on the right are closer to the (reaction-mismatched) data shown on the left.\n\\label{Ecostheta_K0L2} }\n\\end{figure}\n\nAs-yet-unpublished CLAS results for the corresponding reaction $\\gamma p \\to K^+ \\Lambda$ have higher statistics and finer energy bins than the present results (since the identification of this final state requires the detection of fewer particles). The present $K^0 \\Lambda$ results are, within our uncertainties, similar to the $K^+\\Lambda$ asymmetries in Ref.~\\cite{LiamCasey}. 
The numerical values of the measured $K^0 \\Lambda$ and $K^0 \\Sigma^0$ $E$ asymmetries, together with their statistical and systematic uncertainties, are reported in Table \\ref{Tab:E_sys_stat}.\n\n\\begin{table*}[htbp]\n\\hfill{}%\n\\begin{tabular}{ccccc}\n\\hline\\hline \n\\multicolumn{1}{c}{ } & & \\multicolumn{3}{c}{$\\cos\\theta_{K^{0}}$}\\tabularnewline\n\\cline{3-5} \n\\multicolumn{1}{c}{} & & $-$0.6 & 0.0 & $+$0.6\\tabularnewline\n\\hline \n\\multirow{2}{*}{$K^{0}\\Lambda$ } & $W_{1}$ & 0.834$\\pm$0.499$\\pm$0.287 & $-$0.144$\\pm$0.436$\\pm$0.098 & 1.066$\\pm$0.419$\\pm$0.231\\tabularnewline\n\\cline{2-5} \n & $W_{2}$ & $-$0.533$\\pm$0.752$\\pm$0.345 & $-$0.263$\\pm$0.618$\\pm$0.101 & $-$0.648$\\pm$0.464$\\pm$0.136\\tabularnewline\n\\hline \n\\multirow{2}{*}{$K^{0}\\Sigma^{0}$} & $W_{1}$ & $-$0.110$\\pm$0.723$\\pm$0.406 & 0.581$\\pm$0.539$\\pm$0.144 & $-$0.319$\\pm$0.541$\\pm$0.460\\tabularnewline\n\\cline{2-5} \n & $W_{2}$ & $-$0.471$\\pm$0.446$\\pm$0.391 & 0.0002$\\pm$0.317$\\pm$0.150 & 0.054$\\pm$0.281$\\pm$0.065\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\hfill{}\n\n\\protect\\caption{Numerical values of the $E$ asymmetry measurements for the $K^{0}\\Lambda$\/$K^{0}\\Sigma^{0}$\nchannels. The uncertainties are statistical and systematic, respectively. Center-of-mass energy ranges are $1.70 < W_1 < 2.02$~GeV and $2.02 < W_2 < 2.34$~GeV.\n\\label{Tab:E_sys_stat}}\n\\end{table*}\n\n\\section{Conclusions \\label{sec:Section-V}}\n\nWe have reported the first set of $E$ asymmetry measurements for the reaction $\\gamma d\\rightarrow K^{0}Y(p_{s})$ for 1.70~GeV$\\leq W \\leq$ 2.34~GeV. In particular, we have described the three-step BDT-based analysis method developed to select a clean sample of $p\\pi^{+}\\pi^{-}\\pi^{-}$ with intermediate hyperons. We have plotted the $E$ asymmetry as a function of $\\cos \\theta_{K^{0}}^{CM}$.\nSeveral systematic uncertainty tests led to the conclusion that statistical uncertainties dominated the final results. 
The numerical values of the measured $E$ asymmetries and their statistical and systematic uncertainties are reported in Table \\ref{Tab:E_sys_stat}.\n\nEvidently, this analysis is limited by the small cross sections of the channels of interest, leading to large uncertainties in the measurements of the $E$ asymmetry. At present, comparison with several models permits no decisive selection among the model approaches. \nOverall, the BnGa predictions are of a quality similar to that of the SAID predictions. The KaonMAID predictions for both channels seem less successful. In all three model comparisons, the data differentiate between proton- and neutron-target predictions: the proton-target predictions agree better with the experimental results than the neutron-target predictions do. In principle, this information is valuable since it hints at the necessary isospin decomposition of the hyperon photoproduction mechanism. \n\nAt present, multipole analyses for $K^0Y$ channels are severely limited by the available data. Higher-statistics data on these channels for a number of other polarization observables, from a much longer (unpolarized) target, have been collected during the $g13$ running period with CLAS and are under analysis. A greater number of different polarization observables is generally more effective than precision at determining the photoproduction amplitude~\\cite{Sandorfi:2010uv}. When these $g13$ results become available, the present data on the beam-target $E$ asymmetry are likely to have a larger impact. \n\n\\begin{acknowledgments}\nWe acknowledge the outstanding efforts of the staff of the Accelerator\nand Physics Divisions at Jefferson Lab who made this experiment\npossible. The work of the Medium Energy Physics group at Carnegie\nMellon University was supported by DOE Grant No. DE-FG02-87ER40315. 
The\nSoutheastern Universities Research Association (SURA) operated the\nThomas Jefferson National Accelerator Facility for the United States\nDepartment of Energy under Contract No. DE-AC05-84ER40150. Further\nsupport was provided by \nthe National Science Foundation, \nthe Chilean Comisi\\'on Nacional de Investigaci\\'on Cient\\'ifica y Tecnol\\'ogica (CONICYT),\nthe French Centre National de la Recherche Scientifique,\nthe French Commissariat \\`{a} l'Energie Atomique,\nthe Italian Istituto Nazionale di Fisica Nucleare,\nthe National Research Foundation of Korea,\nthe Scottish Universities Physics Alliance (SUPA),\nand the United Kingdom's Science and Technology Facilities Council.\n\\end{acknowledgments}\n\n\n\n\\section{Introduction}\nIn a previous communication$^1$ it was suggested that a typical elementary\nparticle, the electron, can be considered to be what was termed a Quantum\nMechanical Black Hole (or QMBH), made up of a relativistic fluid of subconstituents,\ndescribed by the Kerr-Newman metric giving both its gravitational and\nelectromagnetic fields$^2$. It was pointed out that, alternatively, the QMBH could\nbe described as a relativistic vortex in the hydrodynamical formulation, or\nthought of as a relativistic rotating shell.\\\\\nIn Section 2 we examine this model, which explains several observed facts,\nwhile in Section 3 we try to explore the mechanism which triggers the\nformation of these QMBH particles. In Section 4 we examine the cosmological\nimplications of the model and again discover that a surprisingly large\nnumber of observed facts are neatly explained. 
Finally, in Section 5 we\nmake some comments and observations.\n\\section{Quantum Mechanical Black Holes}\nIf we treat an electron, ad hoc, as a charged and spinning black hole, described\nby the Kerr-Newman metric, the pleasing fact which emerges is that this\nmetric describes the gravitational and electromagnetic fields of an electron,\nincluding the anomalous gyromagnetic ratio$^2$, $g=2.$\\\\\nHowever, the horizon of the Kerr-Newman Black Hole becomes in this case\ncomplex$^3$,\n\\begin{equation}\nr_+ = \\frac{GM}{c^2} + \\imath b, \\quad b \\equiv \\left(\\frac{G^2Q^2}{c^8} + a^2 -\n\\frac{G^2M^2}{c^4}\\right)^{1\/2}\\label{e1}\n\\end{equation}\nwhere $G$ is the gravitational constant, $M$ the mass, and $a \\equiv L\/Mc$, $L$\nbeing the angular momentum. That is, we have a naked singularity apparently\ncontradicting the cosmic censorship conjecture. However, in the Quantum\nMechanical domain, (\\ref{e1}) can be seen to be meaningful.\\\\\nIn fact, the position coordinate for a Dirac particle is\ngiven by$^4$\n\\begin{equation}\nx_\\imath = (c^2p_\\imath H^{-1} t + a_\\imath)+\\frac{\\imath}{2}\nc\\hbar (\\alpha_\\imath - cp_\\imath H^{-1})H^{-1},\n\\label{e2}\n\\end{equation}\nwhere $a_\\imath$ is an arbitrary constant and $c\\alpha_\\imath$ is the velocity operator\nwith eigenvalues $\\pm c$. The real part in (\\ref{e2}) is the usual position,\nwhile the imaginary part arises from Zitterbewegung. Interestingly, in both\n(\\ref{e1}) and (\\ref{e2}), the imaginary part is of the order of $\\frac{\\hbar}\n{mc}$, the Compton wavelength, and leads to an immediate identification of\nthese two equations. We must remember that our physical measurements are\ngross - they are really measurements averaged over a width\nof the order $\\frac{\\hbar}{mc}$. Similarly, time measurements are imprecise\nto the tune of $\\sim \\frac{\\hbar}{mc^2}$. 
Very precise measurements, if possible,\nwould imply that all Dirac particles would have the velocity of light, or,\nin Quantum Field Theory, at least for Fermions, would lead to divergences.\n(This is closely related to the non-Hermiticity of position operators in\nrelativistic theory, as can be seen from equation (\\ref{e2}) itself$^5$.)\nPhysics begins after an averaging over the above unphysical\nspace-time intervals. In the process, as is known (cf.ref.5), the imaginary\nor non-Hermitian part of the position operator in (\\ref{e2}) disappears.\nThat is, in the case of the QMBH (Quantum Mechanical Black Hole), obtained by\nidentifying (\\ref{e1}) and (\\ref{e2}), the naked singularity is shielded\nby a Quantum Mechanical censor.\\\\\nTo examine this situation more closely we reverse the arguments after\nequation (\\ref{e2}), and consider instead the complex displacement,\n\\begin{equation}\nx^\\mu \\to x^\\mu + \\imath a^\\mu\\label{e3}\n\\end{equation}\nwhere $a^o \\approx \\frac{\\hbar}{2mc^2}$ and $a^\\mu \\approx \\frac{\\hbar}{mc}$\nas before. That is, we probe into the QMBH or the Zitterbewegung region\ninside the Compton wavelength, as suggested by (\\ref{e1}) and (\\ref{e2}).\nRemembering that $|a^\\mu| \\ll 1$, we have, for the wave function,\n\\begin{equation}\n\\psi (x^\\mu) \\to \\psi (x^\\mu + \\imath a^\\mu) = \\frac{a^\\mu}{\\hbar}\n\\left[\\imath \\hbar \\frac{\\partial}{\\partial x^\\mu} + \\frac{\\hbar}{a^\\mu}\\right]\n\\psi (x^\\mu)\\label{e4}\n\\end{equation}\nWe can identify from (\\ref{e4}), by comparison with the well-known electromagnetism-momentum coupling, the usual electrostatic charge as,\n\\begin{equation}\n\\Phi e = \\frac{\\hbar}{a^o} = mc^2\\label{e5}\n\\end{equation}\nIn the case of the electron, we can verify that the equality (\\ref{e5})\nis satisfied. 
In fact, it was shown that from here we can get a rationale\nfor the value of the fine structure constant (cf.ref.1).\\\\\nWe next consider the spatial part of (\\ref{e3}), viz.,\n$$\\vec x \\to \\vec x + \\imath \\vec a, \\mbox{ where } |\\vec a| = \\frac{\\hbar}{2mc},$$\ngiven the fact that the particle is now seen to have the charge $e$ (and mass\n$m$). As is well known, this leads in General Relativity from the static Kerr\nmetric to the Kerr-Newman metric, where the gravitational and electromagnetic\nfields of the particle are given correctly, including the anomalous factor\n$g = 2$. In General Relativity, the complex transformation (\\ref{e3}) and\nthe subsequent emergence of the Kerr-Newman metric have no clear explanation. Nor\ndoes the fact that, as noted by Newman$^6$, spin is the orbital angular momentum\nwith an imaginary shift of origin. But in the above context we can see the\nrationale: the origin of (\\ref{e3}) lies in the QMBH and Zitterbewegung\nprocesses inside the Compton wavelength.\\\\\nHowever, the following question has to be clarified: How can an electron\ndescribed by the Quantum Dirac spinor ${\\theta \\choose \\chi}$, where $\\theta$\ndenotes the positive energy two spinor and $\\chi$ the negative energy\ntwo spinor, be identified with the geometrodynamic Kerr-Newman Black\nHole characterised by the curved space time (without any double-valuedness,\ncf.ref.2)?\\\\\nWe observe that, as is well known$^7$, at and within the Compton wavelength\nit is the negative energy $\\chi$ that dominates. 
Further, under reflection,\nwhile $\\theta \\to \\theta, \\chi$ behaves like a pseudo-spinor,\\\\\n$$\\chi \\to -\\chi$$\nHence the operator $\\frac{\\partial}{\\partial x^\\mu}$ acting on $\\chi$, a\ndensity of weight $N = 1,$ has the following behaviour$^8$,\n\\begin{equation}\n\\frac{\\partial \\chi}{\\partial x^\\mu} \\to \\frac{1}{\\hbar} \\left[\\hbar \\frac{\\partial}\n{\\partial x^\\mu} - NA^\\mu\\right]\\chi\\label{e6}\n\\end{equation}\nwhere,\n\\begin{equation}\nA^\\mu = \\hbar \\Gamma_\\sigma^{\\mu \\sigma} = \\hbar \\frac{\\partial}{\\partial x^\\mu}\n\\log (\\sqrt{|g|})\\label{e7}\n\\end{equation}\nAs before, we can identify $NA^\\mu$ in (\\ref{e6}) with the electromagnetic\nfour-potential. That $N = 1$ explains the fact that charge is discrete.\\\\\nIn this formulation, electromagnetism arises from the covariant derivative,\nwhich is the result of the Quantum Mechanical behaviour of the negative\nenergy components of the Dirac spinor at the Compton wavelength scale. We can see\nat once how an electron can be associated with curvature and how the double\nconnectivity of spin-half surfaces arises in the geometrodynamical formulation.\nEquation (\\ref{e7}) strongly resembles Weyl's formulation for the unification\nof electromagnetism and gravitation$^9$. However, it must be noted that the\noriginal Christoffel symbol of Weyl contained two independent entities, viz.\nthe metric tensor \\underline{and} the electromagnetic potential, so that there was\nreally no unification. In our formulation we have used only the Quantum\nMechanical pseudo-spinorial property.\\\\\nSo we could treat the Quantum Mechanical Black Hole as a relativistic fluid\nof subconstituents (or Ganeshas). 
In a linearized theory (cf.ref.2) we\nhave\n\\begin{equation}\ng_{\\mu v} = \\eta_{\\mu v} + h_{\\mu v}, h_{\\mu v} = \\int \\frac{4T_{\\mu v}(t -\n|\\vec x - \\vec x'|, \\vec x')}{|\\vec x - \\vec x'|}d^3 x'\\label{e8}\n\\end{equation}\nIt was then shown (cf.ref.1) that not only do we recover the Quantum\nMechanical spin but also, using equations (\\ref{e7}) and (\\ref{e8}), that for\n$r = |\\vec x| \\gg |\\vec x'|$ we get\n\\begin{equation}\n\\frac{e'e}{r} = A_o \\sim \\frac{\\hbar c^3}{r} \\int \\rho \\omega d^3 x'\n\\sim (Gmc^3)\\frac{mc^2}{r}\\label{e9}\n\\end{equation}\nwhere $e' = 1 \\mbox{ esu}$ corresponds to the charge $N = 1$ and $e$ is the\ntest charge. Equation (\\ref{e9}) is correct and in fact leads to the well-known empirical result,\n\\begin{equation}\n\\frac{e^2}{Gm^2} \\sim 10^{40},\\label{e10}\n\\end{equation}\nThe above model gives a rationale for the left-handedness of the neutrino,\nwhich can be treated as an electron with vanishing mass, so that the\nCompton wavelength becomes arbitrarily large. For such a particle, we\nencounter in effect the region within the Compton wavelength, with the pseudo-spinorial property discussed above, that is, left-handedness.\\\\\nFinally, it may be remarked that the electron, the positron, and its special case, the neutrino,\nare the fundamental elementary particles which could be used to generate\nthe mass spectrum of elementary particles$^{10}$.\\\\\nWe now briefly examine why the Compton wavelength emerges as a fundamental\nlength. Our starting point could be the Dirac or Klein-Gordon equations. For\nsimplicity we consider the Klein-Gordon equation. 
It is well known that the\nposition operator is given by$^{5}$\n\\begin{equation}\n\\vec X_{op} = \\vec x_{op} - \\frac{\\imath \\hbar c^2}{2} \\frac{\\vec p}{E^2}\\label{e11}\n\\end{equation}\n(The Dirac equation presents a similar case.)\\\\\nWe saw in (ref.1) that the imaginary part in equation (\\ref{e11}), which\nmakes $\\vec X_{op}$ non-Hermitian and for the Dirac particle gives Zitterbewegung,\ndisappears on averaging over intervals $\\Delta t \\sim \\frac{\\hbar}{mc^2} (\\mbox{and}\n\\Delta r \\sim \\frac{\\hbar}{mc})$, so that $\\vec X_{op}$ becomes Hermitian (this is\nalso the content of the Foldy-Wouthuysen transformation). Our physics, as\npointed out, begins after such an averaging or Hermitization. Our measurements,\nin other words, are necessarily gross to this extent - we will see this more\nclearly. From equation (\\ref{e11})\nwe now get\n\\begin{equation}\n\\hat X^2_{op} \\equiv \\frac{2m^3c^4}{\\hbar^2} X^2_{op} = \\frac{2m^3c^4}{\\hbar^2}\nx^2 + \\frac{p^2}{2m}\\label{e12}\n\\end{equation}\nMathematically, equation (\\ref{e12}) shows that $\\hat X^2_{op}$ gives a problem\nidentical to the harmonic oscillator with quantized levels: in fact, the quantized\n``space-levels'' for $\\hat X^2_{op}$ turn out to be multiples of $(\\hbar\/mc)^2$!\nFrom here, we get $\\Delta t = \\frac{\\Delta x}{c} = \\frac{\\hbar}{mc^2}$.\n\\section{The formation of QMBH particles}\nWe now investigate how such QMBHs can be formed. For this we digress\ntemporarily to vacuum fluctuations. It is well known that there is a zero-point field (ZPF). According to QFT, this arises due to the virtual quantum\neffects of the electromagnetic field already present, whereas according to\nwhat has now come to be called Stochastic Electrodynamics (SED), it is\nthe ZPF that is primary and gives rise to the Quantum Mechanical effects\n$^{11}$. 
Many Quantum Mechanical effects can indeed be explained this way.\nWithout entering into the debate about the ZPF fluctuations for the moment, we observe that\nthe energy of the fluctuations of the magnetic field in a region of length\n$\lambda$ is given by$^{2}$ $(\vec E$ and $\vec B$ are electromagnetic field\nstrengths)\n\begin{equation}\nB^2 \sim \frac{\hbar c}{\lambda^4}\label{e13}\n\end{equation}\nIf $\lambda$ as in the QMBH is taken to be the Compton wavelength,\n$\frac{\hbar}{mc}$, (\ref{e13}) gives us for the energy in this volume\nof the order $\lambda^3$,\\\n$$\mbox{Total\quad energy\quad of\quad QMBH\quad}\sim \frac{\hbar c}{\lambda} =\nmc^2,$$\nexactly as required. In other words the entire energy of the QMBH\nof mass $m$ can be thought to have been generated by the fluctuations\nalone. Further the fluctuation in curvature over the length $l$ is given\nby$^{2}$,\n\begin{equation}\n\Delta R \sim \frac{L^*}{l^3},\label{e14}\n\end{equation}\nwhere $L^*$ is the Planck length of the order $10^{-33}cms$.\\\nFor the electron which we consider, $l$ is of the order of the Compton\nwavelength, that is $10^{-11}cms$. Substitution in (\ref{e14}) therefore\ngives\n$$\Delta R \sim 1$$\nIn other words the entire curvature of the QMBH is also generated by these\nfluctuations. That is, the QMBH can be thought to have been created by\nthese fluctuations alone.\\\nWithin the framework of QED, we can come to this conclusion in another\nway$^{12}$. It is known that the vacuum energy of the electron field with\na cut off $k_{max}$ is given by,\n\begin{equation}\n\frac{\mbox{Energy}}{\mbox{Volume}} \sim \hbar c k^4_{max}\label{e15}\n\end{equation}\nThis is the same as equation (\ref{e13}) encountered earlier. 
Also the\ninfinite energy of the vacuum is avoided by the assumption of the cut\noff normally taken to be of the order of a typical Compton wavelength on\nthe ground that we do not know that the laws of electromagnetism are\nvalid beyond these high frequencies, that is within these length scales.\\\nBut the preceding discussion shows that it is natural to take $k_{max} =\n\frac{mc}{\hbar},$ the inverse Compton wavelength of the electron. The energy\nof the electron from equation (\ref{e15}) then comes out to be\n$$E \sim mc^2,$$\nas before. So we are led to the important conclusion that the\ninfinity of QED is avoided by the fact that QMBH are formed, rather than by\nthe arbitrary prescription of a cut off. Infact there is a further bonus\nand justification for the above interpretation. Let us use in (\ref{e15})\nthe pion Compton wavelength as the cut off. The reason we choose the pion\nis that it is considered to be a typical elementary particle in the sense\nthat it plays a role in the strong interactions, and further it could\nbe used as a building block for developing a mass spectrum, and finally,\nas seen in (ref.1), can be considered to be made up of an electron and\na positron. Then from (\ref{e15}) we can recover the pion mass,\n$m_\pi$ and moreover,\n\begin{equation}\nNm_\pi = M,\label{e16}\n\end{equation}\nwhere $N$ is the number of elementary\nparticles, typically pions, $N \sim 10^{80}$ and $M$ is the mass\nof the universe, viz. $10^{56}gms$.\\\nIn other words, in our interpretation we have not only avoided the QED\ninfinity but have actually recovered the mass of the universe. We will return\nto this point shortly.\\\nWe now consider the same scenario from a third point of view, viz. from the\nstandpoint of Quantum Statistical Mechanics. Here also the spirit is that\nof randomness$^{13}$. 
A state can be written as\n\begin{equation}\n\psi = \sum_{n} c_n \phi_n,\label{e17}\n\end{equation}\nin terms of basic states $\phi_n$ which\ncould be eigen states of energy for example, with eigen values $E_n$. It\nis known that (\ref{e17}) can be written as\n\begin{equation}\n\psi = \sum_{n} b_n \phi_n\label{e18}\n\end{equation}\nwhere $|b_n|^2 = 1$ if $E < E_n < E + \Delta$ and $= 0$ otherwise, the\nphases of the $c_n$ being random,\n\begin{equation}\n\overline{c^*_n c_m} = 0, \quad n \ne m,\label{e19}\n\end{equation}\nso that the expectation value of any operator $O$ is\n\begin{equation}\n<O> = \sum_{n} |b_n|^2 (\phi_n, O \phi_n)\/ \sum_{n} |b_n|^2\label{e20}\n\end{equation}\n(\ref{e18}) and (\ref{e20}) show that effectively we have incoherent states\n$\phi_1, \phi_2, \ldots$ once averages over time intervals for the phases\n$c_n$ in (\ref{e19}) vanish owing to their relative randomness.\\\nIn the light of the preceding discussion of random fluctuations in the context of QMBH\nin SED and QED, we can interpret the above meaningfully: We can identify\n$\phi_n$ with the ZPF. The time averages are the Zitterbewegung averages\nover intervals $\sim \frac{\hbar}{mc^2}$. We then get disconnected or\nincoherent particles or QMBH from a single background of vacuum\nfluctuations exactly as before. The incoherence arises because of the well\nknown random phase relation (\ref{e19}), that is, after averaging over the\nsuitable interval.\\\nBut in all of the above considerations, and in present day theory, the question\nthat comes up is: How can we reconcile the fact that the various particles\nin the universe are not infact incoherent but rather occupy a single\ncoherent space-time? The answer which can now be seen to emerge in the\nlight of the above discussion is that all these particles are linked by\ninteraction. These interactions as pointed out in (ref.1) arise within\nthe Compton wavelength or Zitterbewegung region, that is in phenomena\nwithin the time scale $\frac{\hbar}{mc^2}$. 
It will be observed in the\nabove discussion that at these time scales the equation (\ref{e19}) is\nno longer valid and we have to contend with equation (\ref{e17}) rather\nthan equation (\ref{e18}). So interactions arising within the\nCompton wavelength link or make coherent\nthe various particles.\\\nInfact all this is perfectly in tune with the QFT picture wherein the\ninteractions are caused by virtual particles with lifetime less than\n$\frac{\hbar}{mc^2}$. It may also be observed that in Wheeler's Geometrodynamical\nmodel$^{14}$, the various particles are linked by exactly such\nwormholes linking distant regions.\\\nIn the above formulation we could take $\phi_n$ to be the particlets or Ganeshas instead of\nenergy eigen states, that is to be position eigen states, and consider sets\n$$\overline{(c_{n_\imath}, c_{m_j})} = 0,$$\nexactly as before (cf.(\ref{e19})). Each set $\phi_{n_\imath}$ defines a particle\n$P_n$ consisting of $n_\imath$ Ganeshas or particlets. It is the link\nat $\Delta t < \hbar\/mc^2$ between $P_n$ and $P_m$ which puts otherwise incoherent particles into a\nsingle space-time, that is, allows interactions.\\\nIn other words, a set of particles can be said to be in the same space-time\nif every particle interacts with at least one other member of the set.\\\nFor completeness we mention that the above bunching could be carried out\nin principle for two\nor more universes. Thus a set of particles constitutes universe $U_1$ while\nanother set of particles constitutes an incoherent universe $U_2$. Again the\nincoherence can be broken at a suitable time scale (cf.ref.15 for a pictorial\nmodel in terms of wormholes).\\\nThere is another way of looking at all this.\nWe first note that the space-time symmetry of relativity has acquired\na larger-than-life image. Infact our perception of the universe is\nessentially one of all space (or as much of it as possible) at one instant\nof time (cf.also ref.2). 
Further, time is essentially an ordering or sequencing\nof events. To understand time we must know on what basis this ordering is\ndone so that causality and other laws of physics hold, or in other words we\nhave the universe of the physical hyperboloid.\\\nWe now approach this problem by trying to liberate the sequence of\nevents in time from any ordering at all. At first sight it would appear that\nthis approach would lead to a chaotic universe without physics, that is,\nwithout causality, interaction and so on. We will attempt to explain\nthe emergence of physics from what may be called a pre-space-time scenario. It must be\nnoted that both Special and General Relativity work in a deterministic\nspace-time. Even relativistic Quantum Mechanics and Quantum Field Theory\nassume the space-time of Special Relativity. Quantum Gravity on the other\nhand, which has not yet proved to be a completely successful theory, questions\nthis concept of space-time$^{16}$.\\\nWhile a random time sequence is ruled out at what may be called the macro\nlevel, in our case above the Compton wavelength, within the framework of\nQMBH and as seen above this is certainly possible below the Compton\nwavelength scale. Infact this is the content of the non-locality and non-Hermiticity\nof the Zitterbewegung in the region of QMBH.\\\nSo we start with truly instantaneous point particles or particlets (or Ganeshas)\nwhich are therefore indistinguishable (cf. ref.1) (and could be denoted by\n$\phi_n$ of (\ref{e17})). We then take a random sequence of such\nparticlets$^{13}$. Such a sequence for the interval $\Delta t \sim \frac\n{\hbar}{mc^2}$ in time collectively constitutes a particle that has come\ninto existence and is spread over a space interval of the order of the\nCompton wavelength. In other words we have made a transition from\npre-space-time to a particle in space-time. This is exactly the averaging over random phases in\nequation (\ref{e19}). 
Hermiticity of position operators has now been\nrestored and we are back with the states $\phi_n$ in equation\n(\ref{e18}). All this is in the spirit that our usual time is such that,\nwith respect to it, vacuum fluctuations are perfectly random, as pointed\nout by McCrea$^{17}$. So the subconstituents of the relativistic fluid given\nin (\ref{e8}) (or the Quantized Vortex in the hydrodynamical formulation\n(cf. ref.1)), are precisely these particlets.\\\nTo visualize the above consideration in greater detail we first consider strictly\npoint particles obtained by taking the random sequences over time\nintervals $\sim \frac{\hbar}{mc^2}$. We consider an assembly of such truly\npoint particles which as yet we cannot treat either with Fermi-Dirac or\nBose-Einstein statistics but rather as a Maxwell-Boltzmann distribution. If there\nare $N$ such particles in a volume $V$, it is known that$^{13}$ the volume\nper particle is of the order of,\n$$(\frac{V}{N})^{1\/3} \sim \lambda_{thermal} \approx \frac{\hbar}{\sqrt{m^2c^2}} =\n\frac{\hbar}{mc},$$\nwhere we take the average velocity of each particle to be equal to $c$. Infact,\nthis is exactly what happens, as Dirac pointed out (cf. ref.4), for a truly\nhypothetical point electron, in the form of Zitterbewegung within the\nCompton wavelength.\\\nSo the Compton wavelength arises out of the (classical) statistical inability to\ncharacterise a point particle precisely: It is not that the particle has an\nextension per se. In this sense the Compton wavelength has a very\nCopenhagen character, except that it has been deduced on the basis of an\nassembly of particles rather than an isolated particle.\n\section{The Universe of Fluctuations}\nThe question that arises is, what are the cosmological implications of the\nabove scenario, that is, if we treat the entire universe as arising from\nfluctuations, is this picture consistent with the observed universe? 
It\nturns out that not only is there no inconsistency, but on the contrary\na surprising number of correspondences emerge.\\\nThe first of these is what we have encountered a little earlier, viz.\nthe fact that we recover the mass of the universe as in equation (\ref{e16}).\\\nWe can next deduce another correspondence. The ZPF gives the correct\nspectral density, viz.\n$$\rho (\omega) \propto \omega^3$$\nand infact the Planck spectrum$^{18}$. We then get the total intensity\nof radiation from the fluctuating field due to a single star as$^{11}$,\n$$I(r) \propto \frac{1}{r^2}$$\nIt then follows that given the observed isotropy and homogeneity of the\nuniverse at large, as is well known,\n\begin{equation}\nM \propto R,\label{e21}\n\end{equation}\nwhere $R$ is the radius of the universe.\\\nEquation (\ref{e21}) is quite correct and infact poses a puzzle, as is\nwell known, and it is to resolve this dependence that dark matter has\nbeen postulated$^{19}$, whereas in our formulation the correct mass-radius\ndependence has emerged quite naturally without any other ad hoc postulates.\\\nAs we have seen above, the Compton wavelength of a typical particle, the pion, viz. $l_\pi$, can be\ngiven in terms of the volume of uncertainty. However in actual observation\nthere is an apparent paradox. If the universe is $n$-dimensional then we\nshould have,\n$$Nl^n_\pi \sim R^n$$\nfor the universe itself. This relation is satisfied with $n = 2$ in which\ncase we get a relation that has been known empirically, viz.,\n$$l_\pi \sim \frac{R}{\sqrt{N}}$$\n(Even Eddington had used this relation).\\\nSo in conjunction with (\ref{e16}) we have an apparent paradox where the actual universe appears to be\ntwo-dimensional. 
This will be resolved shortly and it will be seen that\nthere is no contradiction.\\\nAnother interesting consequence is as follows: According to our formulation\nthe gravitational potential energy of a pion in a three-dimensional isotropic\nsphere of pions is given by\n$$\frac{Gm_\pi M}{R}$$\nThis should be equated with the energy of the pion, viz. $m_\pi c^2$. We then\nget,\n\begin{equation}\n\frac{GM}{c^2} = R,\label{e22}\n\end{equation}\na well known and observationally correct relation. In our formulation we\nget $m_\pi$ from the ZPF and given $N$ we know $M$, so that from equation\n(\ref{e22}) we can deduce the correct radius $R$ of the universe.\\\nProceeding further we observe that the fluctuation in the particle number $N$ itself is\nof the order $\sqrt{N}^{13,20}$. Also $\Delta t$ above is the typical\nfluctuation time. So we get,\n$$\frac{dN}{dt} = \frac{\sqrt{N}}{\Delta t} = \frac{m_\pi c^2}{\hbar} \sqrt{N}$$\nwhence, since $N = 0$ at $t = 0$,\n\begin{equation}\n\sqrt{N} = \frac{2m_\pi c^2}{\hbar} T\label{e23}\n\end{equation}\nwhere $T$ is the age of the universe $\approx 10^{17}secs$. It is remarkable\nthat equation (\ref{e23}) is indeed correct. One way of looking at this is\nthat not only the radius but also the age of the universe is correctly\ndetermined. 
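For clarity, the step leading to (\ref{e23}) is an elementary integration; the overall numerical factor is of order unity and is not significant at this level of approximation:
$$\int^{N}_{0} \frac{dN'}{\sqrt{N'}} = \frac{m_\pi c^2}{\hbar} \int^{T}_{0} dt, \quad \mbox{whence} \quad \sqrt{N} \sim \frac{m_\pi c^2}{\hbar} T.$$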
As we saw before,\n$$R = \frac{GM}{c^2} = \frac{GNm_\pi}{c^2}$$\nso that\n\begin{equation}\n\frac{dR}{dt} = \frac{Gm_\pi}{c^2} \frac{dN}{dt} = \frac{Gm^2_\pi}{\hbar} \sqrt{N} = HR\label{e24}\n\end{equation}\nwhere\n\begin{equation}\nH = \frac{Gm_\pi^3 c}{\hbar^2}\label{e25}\n\end{equation}\nOne can easily verify that (\ref{e25}) is satisfied for the Hubble constant, so\nthat (\ref{e24}) infact gives Hubble's velocity-distance relation.\\\nFurthermore from (\ref{e25}) we deduce that,\n\begin{equation}\nm_\pi = (\frac{\hbar^2 H}{Gc})^{1\/3}\label{e26}\n\end{equation}\nIt is remarkable that equation (\ref{e26}) is known to be true from a\npurely empirical standpoint$^{21}$. However we have actually deduced it in\nour formalism. Another way of interpreting equation (\ref{e26}) is that\ngiven $m_\pi$ (and $\hbar, G$ and $c$) we can actually deduce the value of\n$H$ in our formalism.\\\nFrom equation (\ref{e24}), we deduce that,\n\begin{equation}\n\frac{d^2R}{dt^2} = H^2R\label{e27}\n\end{equation}\nThat is, effectively there is a cosmic repulsion. Infact, from (\ref{e27}) we\ncan identify the cosmological constant as\n$$\Lambda \sim H^2$$\nwhich is not only consistent but agrees exactly with the limit on this constant\n(cf.ref.2).\\\nThe final correspondence has to do with an explanation for the microwave\ncosmic background radiation within the above framework of fluctuations. It\nis well known that the fluctuations of the Boltzmann $H$ function for interstellar\nspace are of the order of $10^{-11}secs^{13}$. These fluctuations can be\nimmediately related to the ZPF exactly as in the case of the Lamb shift\n(cf.ref.2). So $\frac{\hbar}{mc^2} = 10^{-11}$ or the associated wavelength,\nviz.,\n$$\frac{\hbar}{mc} \sim 0.3 cms,$$\nwhich corresponds to the cosmic background\nradiation$^{3}$. The same conclusion can be drawn from a statistical\ntreatment of interstellar Hydrogen$^{22}$.\n\section{Comments}\n1. 
We could arrive at equation (\ref{e13}) by a slightly different route\n(cf.ref.2). We could start with a single oscillator in the ground state\ndescribed by the wave function\n\begin{equation}\n\psi(x) = \mbox{const} \quad \exp[-(m\omega\/2 \hbar)x^2]\label{e28}\n\end{equation}\nwhich would fluctuate with a space uncertainty of\n$$\Delta x \sim (\hbar\/m\omega)^{\frac{1}{2}} = \frac{\hbar}{mc}$$\nThe electromagnetic ZPF could be treated as an infinite collection of\nindependent oscillators and we could recover equation (\ref{e13}).\\\n2. Earlier we skirted the issue of whether the ZPF is primary or secondary.\nWe now start either with the ZPF or with the pre-space-time background\nfield of the instantaneous particles (or Ganeshas). We could assign a\nprobability $p$ for them to appear in space-time and the probability\n$1-p = q$ for this not to happen. From here we get the probability for\n$N$ of them to appear as\n\begin{equation}\n\mbox{Probability}\quad \propto \quad \exp [-\mu^2 N^2]\label{e29}\n\end{equation}\nThis immediately ties up with the considerations following from equation\n(\ref{e12}) (cf.ref.2), if we identify $N$ with $x$. The justification\nfor this can be seen by a comparison with $|\psi (x)|^2$ from (\ref{e28}):\nFrom (\ref{e29}), the probability is non-negligible if\n$$\Delta N \sim \frac{1}{\mu},$$\nwhich turns out to be, from (\ref{e28}),\n$$\Delta N \sim \frac{1}{\mu} \approx \frac{\hbar}{mc},$$\nthe Compton wavelength. Thus once again we conclude from (\ref{e29})\nthat a probabilistic fluctuational collection of instantaneous particlets\nfrom a pre-space-time background shows up as a particle in space-time.\\\nWheeler considers the algebra of propositions as providing the link\nbetween what he terms pre-geometry and geometrodynamics. In our formulation\nprobabilistic fluctuations lead to space-time and physics from\npre-space-time.\\\n3. 
It was pointed out that the equation\n\\begin{equation}\nl_\\pi \\sim \\frac{R}{\\sqrt{N}}\\label{e30}\n\\end{equation}\nsuggests that the universe is apparently two dimensional. This paradoxical\nresult is consistent with astrophysical data (cf.ref.19). We could resolve\nthe paradox as follows:\\\\\nWe start with the fact that the universe on the average is neutral. Further\nthe fluctuation in the number of electrons is $\\sim \\sqrt{N}$. So an extra\nelectrostatic potential energy is created which is balanced by (or in\nour formulation manifests itself as) the energy of the electron itself\n(cf.ref.20):\n$$\\frac{e^2\\sqrt{N}}{R} = mc^2$$\nwhich leads to the above relation.\\\\\nSo in the conventional theory, that is in the language of a fixed particle\nnumber universe, we would say that the universe is apparently two dimensional.\nBut once we recognise the fluctuations, the universe is really three\ndimensional. Infact the fundamental equation (\\ref{e10}) which was\nderived purely from the point of view of an isolated particle can also be\nderived on the basis of a \"two dimensional\" universe$^{23}$.\\\\\n4. The considerations of the previous section show that there exists, what\nmay be called a micro-macro nexus: Fundamental constants of Quantum Theory\nare tied up with constants from macro physics and cosmology. So the universe\nis holistic. It has a slightly different connotation from the Machian formulation,\nbecause the latter deals with a deterministic universe with rigid physical\nlaws.\\\\\nInfact from (\\ref{e30}), (\\ref{e22}) and (\\ref{e23}), we can deduce that,\n\\begin{equation}\n\\frac{2Gm_\\pi^3c}{\\hbar^2} = \\frac{1}{T}\\label{e31}\n\\end{equation}\nwhich is a variant of equation (\\ref{e10}), if we replace its right side\nby $\\sqrt{N}$. This may be interpreted as giving $e,G,c$ or $\\hbar$ in\nterms of $m_\\pi\\quad \\mbox{and}\\quad N$. 
More interestingly, (\\ref{e31})\ngives the variation of $G$, or more generally, the left side, with the\nage of the universe (cf.ref.2 for Dirac's conjecture in this connection).\\\\\n5. The quantization formula for space following from equation (\\ref{e12}),\nreflects an empirical formula deduced by Chacko$^{24}$ which can be used to\ngenerate a mass spectrum. It also vindicates a close connection between\nenergy and space-time: As pointed out (cf.ref.1) inertial mass arises from\nthe non local Zitterbewegung processes within the Compton wavelength$^{25}$.\\\\\n6. Finally we observe that inspite of similarities, the above scenario of\nfluctuations differs from steady-state cosmology and the $C$ field\nformulation$^{26}$.\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA citation network is an important source of analysis in science.\nCitations serve multiple purposes such as crediting an idea, signaling\nknowledge of the literature, or critiquing others' work \\citep{martyn1975citation}.\nWhen citations are thought of as impact, they inform tenure, promotion,\nand hiring decisions \\citep{meho2007impact}. Furthermore, scientists\nthemselves make decisions based on citations, such as which papers\nto read and which articles to cite. Citation practices and infrastructures\nare well-developed for journal articles and conference proceedings.\nHowever, there is much less development for dataset citation. This\ngap affects the increasingly important role that datasets play in\nscientific reproducibility \\citep{task2013out,belter2014measuring,robinson2016analyzing,park2018informal},\nwhere studies use them to confirm or extend the results of other research\n\\citep{sieber1995not,darby2012enabling}. One historical cause of\nthis gap is the difficulty in archiving datasets. While less problematic\ntoday, the citation practices for datasets take time to develop. 
Better\nalgorithmic approaches to track dataset usage could improve this state.\nIn this work, we hypothesize that a network flow algorithm could track\nusage more effectively if it propagates publication and dataset citations\ndifferently. With the implementation of this algorithm, it will\nbe possible to correct differences in citation behavior between these\ntwo types of artifacts, increasing the importance of datasets as first-class\ncitizens of science.\n\nDifferent researchers use citation networks to evaluate the importance\nof authors \citep{ding2009pagerank,ding2011applying,west2013author},\npapers \citep{chen2007finding,ma2008bringing}, journals \citep{bollen2006journal,bergstrom2007eigenfactor},\ninstitutions \citep{fiala2013suborganizations} and even countries\n\citep{fiala2012bibliometric}. The \textsc{PageRank} algorithm \citep{page1999pagerank}\nhas served as a base for many of these citation network-based evaluations.\nFor example, \citet{bollen2006journal} proposed a weighted \textsc{PageRank}\nto assess the prestige of journals, while \citet{ding2009pagerank}\nand \citet{ding2011applying} proposed a weighted \textsc{PageRank}\nto measure the prestige of authors. \citet{fiala2012time} defined\na time-aware \textsc{PageRank} method for accurately ranking the most\nprominent computer scientists. \citet{franceschet2017timerank} introduced\nan approach called \textsc{TimeRank} for rating scholars at different\ntime points. \textsc{TimeRank} updates the rating of scholars based\non the relative rating of the citing and cited scholars at the time\nof the citation. Citation networks are thus an important source of\ninformation for ranking homogeneous types of nodes.\n\nHistorically, ranking datasets using citation networks is significantly\nmore challenging. These challenges involve technical and social issues\nalike. 
First, datasets cost time and labor to prepare and to share,\nresulting in some articles failing to provide datasets \citep{alsheikh2011public}.\nSecond, archiving and searching massive datasets is prohibitively\nexpensive and difficult. Third, scholars are not used to citing datasets.\nSurvey research shows that scholars value citing datasets \citep{kratz2015researcher},\nyet they tend to cite the \emph{article} rather than the dataset or\nthey merely mention the dataset without explicit reference \citep{force2014encouraging}.\nTherefore, these reasons have prevented the proper assignment of credit\nto dataset usage.\n\nSeveral initiatives attempt to improve citation practices for datasets.\nIn 2014, the Joint Declaration of Data Citation Principles was officially\nreleased. These principles, however, mainly focus on normalizing dataset\nreferences rather than normalizing storage and some other technical\nissues \citep{altman2015introduction,callaghan2014joint,mooney2012anatomy}.\nFor instance, some researchers have suggested assigning specific DOIs\nto datasets to mitigate differences between datasets and articles\n\citep{callaghan2012making}. Others have proposed to automatically\nidentify uncited or unreferenced datasets used in articles \citep{boland2012identifying,kafkas2013database,ghavimi2016identifying}.\nAll these solutions try to make dataset citation behavior more standard\nor attempt to fix the citation network by estimating which data nodes\nare missing. Therefore, these solutions necessarily modify the source\nthat algorithms use to estimate impact.\n\nIn this article, we develop a method for assigning credit to datasets\nfrom citation networks of publications, assuming that dataset citations\nhave biases. Importantly, our method does not modify the source data\nfor the algorithms. The method does not rely on scientists explicitly\nciting datasets but infers their usage. 
We adapt the network flow\nalgorithm of \citet{walker2007ranking} by including two types of\nnodes: datasets and publications. Our method simulates a random walker\nthat takes into account the differences between obsolescence rates\nof publications and datasets, and estimates the score of each dataset---the\n\textsc{DataRank}. We use the metadata from the National Center for\nBiotechnology Information (NCBI) GenBank nucleic acid sequence and Figshare datasets\nto validate our method. We estimate the relative rank of the datasets\nwith the \textsc{DataRank} algorithm and cross-validate it by predicting\ntheir actual usage---the number of visits to the NCBI dataset web\npages and downloads of Figshare datasets. We show that our method\nis better at predicting both types of usage than citations\nand has other qualitative advantages compared to alternatives. We\ndiscuss interpretations of our results and implications for data citation\ninfrastructures and future work.\n\n\section{Why measure dataset impact?}\n\nScientists may be incentivized to adopt better and broader data sharing\nbehaviors if they, their peers, and institutions are able to measure\nthe impact of datasets (e.g., see \citet{silvello2018theory} and\n\citet{kidwell2016badges}). In this context, we review impact assessment\nconceptual frameworks and studies of usage statistics and crediting\nof scientific works more specifically. These areas of study aim to\ndevelop methods for scientific indicators of the usage and impact\nof scholarly outputs. Impact assessment research also derives empirical\ninsights from research products by assessing the dynamics and structures\nof connections between the outputs. These connections can inform better\npolicy-making for research data management, cyberinfrastructure implementation,\nand funding allocation.\n\nMethods for measuring usage and impact include a variety of different\ndimensions of impact, from social media to code use and institutional\nmetrics. 
Several of these approaches recognize the artificial distinction\nbetween the scientific process and product \citep{priem2013scholarship}.\nFor example, altmetrics is one way to measure engagement with diverse\nresearch products and to estimate the impact of non-traditional outputs\n\citep{priem2014altmetrics}. Researchers predict that it will soon\nbecome a part of the citation infrastructure to routinely track and\nvalue \textquotedblleft citations to an online lab notebook, contributions\nto a software library, bookmarks to datasets from content-sharing\nsites such as Pinterest and Delicious\textquotedblright{} \citep[from ][]{priem2014altmetrics}.\nIn short, if science has made a difference, it will show up in a multiplicity\nof places. As such, a correspondingly wider range of metrics is needed\nto attribute credit to the many locations where research works reflect\ntheir value. For example, datasets contribute to thousands of papers\nin NCBI\textquoteright s Gene Expression Omnibus and these attributions\nwill continue to accumulate, just like papers accumulate citations,\nfor a number of years after the datasets are publicly released \citep{piwowar2011data,piwowar2013altmetrics}.\nEfforts to track these other sources of impact include ImpactStory,\nstatistics from FigShare, and Altmetric.com \citep{robinson2017datacite}.\n\nCredit attribution efforts include those by federal agencies to expand\nthe definition of scientific works that are not publications. For\nexample, in 2013 the National Science Foundation (NSF) recognized\nthe importance of measuring scientific artifacts other than publications\nby asking researchers for \textquotedblleft products\textquotedblright{}\nrather than just publications. This represents a significant change\nin how scientists are evaluated \citep{piwowar2013altmetrics}. 
Datasets,\nsoftware, and other non-traditional scientific works are now considered\nby the NSF as legitimate contributions to the publication record.\nFurthermore, real-time science is presented in several online media;\nalgorithms filter, rank, and disseminate scholarship as it happens.\nIn sum, the traditional journal article is increasingly being complemented\nby other scientific products \citep{priem2013scholarship}.\n\nYet despite the crucial role of data in scientific discovery and innovation,\ndatasets do not get enough credit \citep{silvello2018theory}. If\ncredit were properly accrued, researchers and funding agencies would\nuse this credit to track and justify work and funding to support datasets---consider\nthe recent Rich Context Competition, which aimed at filling this\ngap by detecting dataset mentions in full-text papers \citep{zengnyu}.\nBecause these dataset mentions are not tracked by current citation\nnetworks, this leads to biases in dataset citations \citep{robinson2016analyzing}.\nThe FAIR (findable, accessible, interoperable, reusable) principles\nof open research data are one major initiative that is spearheading\nbetter practices with tracking digital assets such as datasets \citep{wilkinson2016fair}.\nHowever, the initiative is theoretical, and lacks technical implementation\nfor data usage and impact assessment. There remains a need to establish\nmethods to better estimate dataset usage.\n\n\section{Materials and methods}\n\n\subsection{Datasets}\n\n\subsubsection{OpenCitations Index (COCI)}\n\nThe OpenCitations Index (COCI) is an index of Crossref's open DOI-to-DOI\ncitation data. We obtained a snapshot of COCI (November 2018 version),\nwhich contains approximately 450 million DOI-to-DOI citations. Specifically,\nCOCI contains information including the citing DOI, the cited DOI, and the publication\ndate of the citing DOI. The algorithm we propose in this paper requires\nthe publication year. 
However, not all the DOIs in COCI have a publication\ndate. We will introduce Microsoft Academic Graph to fill this gap.\n\n\\subsubsection{Microsoft Academic Graph (MAG)}\n\nThe Microsoft Academic Graph is a large heterogeneous graph consisting\nof six types of entities: paper, author, institution, venue, event,\nand field of study \\citep{sinha2015overview}. Concretely, the description\nof a paper consists of DOI, title, abstract, published year, among\nother fields. We downloaded a copy of MAG in November 2019, which\ncontains 208,915,369 papers. As a supplement to COCI, we extract the\nDOI and published year from MAG to extend those DOIs in COCI without\na publication date.\n\n\\subsubsection{PMC Open Access Subset (PMC-OAS)}\n\nThe PubMed Central Open Access Subset is a collection of full-text\nopen access journal articles in biomedical and life sciences. We obtained\na snapshot of PMC-OAS in August 2019. It consists of about 2.5 million\nfull-text articles organized in well-structured XML files. The articles\nare identified by a unique id called PMID. We also obtained a mapping\nbetween PMIDs and DOIs from NCBI, which enabled us to integrate PMC-OAS\ninto the citation network.\n\n\\subsubsection{GenBank}\n\nGenBank is a genetic sequence database that contains an annotated\ncollection of all publicly available nucleotide sequences for almost\n420,000 formally described species \\citep{Sayers2019}. The information\nabout a nucleotide sequence in GenBank is organized as a record consisting\nof many data elements (fields) and stored in a flat-file (see Fig.\n\\ref{fig:sample-data-of}). The ACCESSION field contains the unique\nidentifier of the dataset. The last citation in the REFERENCE field\ncontains the information about the submitter, including the author\nlist and the date in which the dataset was introduced (e.g., ``16-JAN-1992''\nin Fig. \\ref{fig:sample-data-of}). 
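As an illustration of the record structure just described, the following minimal sketch extracts the ACCESSION identifier and the submission date from a GenBank flat-file record. The record text here is a hypothetical fragment, and a production pipeline would use a dedicated parser (e.g., Biopython) rather than regular expressions.

```python
# Sketch: pull the ACCESSION field and the submission date (as it
# appears in the last REFERENCE citation, e.g. "16-JAN-1992") out of
# a GenBank flat-file record. The sample record below is hypothetical.

import re

def parse_record(record):
    """Return (accession, submission_date) for one flat-file record."""
    accession = re.search(r"^ACCESSION\s+(\S+)", record, re.M).group(1)
    # Submission dates appear as e.g. "Submitted (16-JAN-1992)".
    dates = re.findall(r"Submitted \((\d{2}-[A-Z]{3}-\d{4})\)", record)
    return accession, (dates[-1] if dates else None)

record = """LOCUS       X12345
ACCESSION   X12345
REFERENCE   2  (bases 1 to 100)
  JOURNAL   Submitted (16-JAN-1992) to the EMBL/GenBank/DDBJ databases.
"""
accession, submitted = parse_record(record)
```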
We obtained a snapshot of the\nGenBank database (version 230) with 212,260,377 gene sequences. We\nremoved those sequences without a submission date, which left us with\n77,149,105 sequences.\n\nThe National Institutes of Health (NIH) provided us with the number of\nvisits to a sequence's landing page for the top 1000 Nucleotide sequences\nduring the month of September 2012. We use these visits as a measure\nof real \emph{usage}.\n\n\begin{figure}\n\begin{centering}\n\includegraphics{genbank_record}\n\par\end{centering}\n\caption{\label{fig:sample-data-of} Sample record of a sequence submission\nfrom GenBank. The ACCESSION field is the unique identifier of a dataset.}\n\end{figure}\n\n\n\subsubsection{Figshare}\n\nFigshare is a multidisciplinary, open access research repository where\nresearchers can deposit and share their research output. Figshare\nallows users to upload various formats of research output, including\nfigures, datasets, and other media \citep{thelwall2016figshare}.\nTo encourage data sharing, all research data made publicly\navailable has unlimited storage space and is allocated a DOI. This\nDOI is used by scientists to cite Figshare resources using traditional\ncitation methods. Figshare makes the research data publicly and permanently\navailable, which mitigates the resource decay problem and improves\nresearch reproducibility \citep{zeng2019deadscience}. A Figshare\nDOI contains the string 'figshare' (e.g., '10.6084\/m9.figshare.6741260').\nWe can leverage this feature to determine whether a publication cites\nFigshare resources by detecting Figshare-generated DOIs.\n\nFigshare also keeps track of dataset accesses, such as page views\nand dataset downloads.
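The Figshare-DOI detection just described reduces to a simple pattern test. A minimal sketch (Python; illustrative only, not the actual pipeline code):

```python
import re

# A Figshare-minted DOI contains the literal string "figshare"
# (e.g., "10.6084/m9.figshare.6741260"); regular article DOIs do not.
FIGSHARE = re.compile(r"figshare", re.IGNORECASE)

def figshare_citations(cited_dois):
    """Return the subset of cited DOIs that point to Figshare resources."""
    return [doi for doi in cited_dois if FIGSHARE.search(doi)]

hits = figshare_citations(["10.1038/nature12373",
                           "10.6084/m9.figshare.6741260"])
```

Note that this only finds Figshare DOIs; deciding whether a DOI is a dataset (rather than, say, a figure) still requires the resource-type metadata from the Figshare API, as described later.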
We obtained the download statistics of Figshare\nDOIs from the Figshare Stats API\footnote{https:\/\/docs.figshare.com\/\#stats}\nand use them as a measure of real \textit{usage}.\n\n\subsection{Construction of citation network}\n\nThe citation networks in this paper consist of two types of nodes\nand two types of edges: nodes are papers and datasets, and edges are\ncitations between nodes. Concretely, papers cite each other, forming\npaper-paper edges, whereas datasets can only be cited by papers, forming\npaper-dataset edges. As shown in the construction workflow (Fig. \ref{fig:Network-construction-workflow}),\nwe build the paper-paper citation network using COCI and MAG and build\ntwo separate paper-dataset edge sets using GenBank and Figshare. Then\nwe integrate the paper-dataset edge sets into the paper-paper citation\nnetwork to form two complete citation networks.\n\n\begin{figure}\n\begin{centering}\n\includegraphics[width=1\columnwidth]{network_construction}\n\par\end{centering}\n\caption{\label{fig:Network-construction-workflow}Citation network construction\nworkflow}\n\end{figure}\n\n\n\subsubsection{Paper-paper citation network}\n\nAs our proposed method takes the publication year into account, we\nneed to remove those edges without a publication year. The DOI-to-DOI citations\nin the COCI dataset provide the skeleton of the citation network. Even\nthough COCI comes with a field named timespan, which refers\nto the publication time span between the citing DOI and the cited DOI,\nthis timespan leads to different results depending on the time granularity\n(i.e., year-only vs. year+month). As explained above, we have to complement\nthe COCI dataset with MAG to complete the year of publication of cited\nand citing articles.
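This complementing step is essentially a join on DOI. A toy pandas sketch (the column names and values are assumed for illustration, not the actual dump schemas):

```python
import pandas as pd

# Toy stand-ins: COCI gives citing/cited DOI pairs, MAG maps DOI -> year.
coci = pd.DataFrame({"citing": ["10.1/a", "10.1/b", "10.1/c"],
                     "cited":  ["10.1/b", "10.1/c", "10.1/a"]})
mag = pd.DataFrame({"doi": ["10.1/a", "10.1/b", "10.1/c"],
                    "year": [2005, 2010, 2012]})

# Attach publication years to both endpoints; the default inner joins drop
# any edge whose citing or cited DOI has no known publication year.
edges = (coci
         .merge(mag.rename(columns={"doi": "citing", "year": "citing_year"}),
                on="citing")
         .merge(mag.rename(columns={"doi": "cited", "year": "cited_year"}),
                on="cited"))
```

Edges whose endpoints cannot be dated in either source are discarded, which is what shrinks the filtered networks in the table below.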
By joining COCI and MAG, we build a large\npaper-paper citation network consisting of 45,180,243 nodes and 443,055,788\nedges. As the usage data of GenBank only covers 2012 (see GenBank\nabove), we pruned the network to remove papers and datasets published\nafter 2012. For Figshare, we use all the nodes and edges. The statistics\nof the networks described above are listed in Table \ref{tab:Statistics-of-network}.\n\n\begin{table}\n\caption{\label{tab:Statistics-of-network}Statistics of article citation networks}\n\n\centering{}%\n\begin{tabular}{cccc}\n\hline \nSources & Filter & Number of Nodes & Number of Edges\tabularnewline\n\hline \nCOCI (original) & N\/A & 46,534,424 & 449,840,585\tabularnewline\nCOCI (filtered) & contains publication year & 21,689,394 & 203,884,791\tabularnewline\nCOCI \& MAG (for Figshare) & contains publication year & 45,180,243 & 443,055,788\tabularnewline\nCOCI \& MAG (for GenBank) & publication year \ensuremath{\le} 2012 & 30,304,869 & 212,410,743\tabularnewline\n\hline \n\end{tabular}\n\end{table}\n\n\n\subsubsection{Paper-dataset citation network}\n\begin{enumerate}\n\item \textbf{Constructing paper-dataset citations for GenBank. }As previously\ndescribed by \citet{Sayers2019}, authors should use the GenBank accession\nnumber with the version suffix as the identifier to cite GenBank data.\nThe accession number is usually mentioned in the body of the manuscript.\nThis practice enables us to extract GenBank dataset mentions from the\nPMC-OAS dataset to build the paper-dataset citation network. \n\begin{enumerate}\n\item \emph{Parsing XML files to extract full-text}\textbf{. }We first parse\nthe XML files using XPath expressions to extract the PMID and the\nfull-text.
We got 2,174,782 articles with a PMID and full-text from the\nPMC-OAS dataset.\n\item \emph{Matching the accession number to build paper-dataset citations}.\nAccording to the GenBank Accession Prefix Format, an accession number\nis composed of a fixed number of letters plus a fixed number of numerals\n(e.g., 1 letter + 5 numerals, 2 letters + 6 numerals). Based on this\nformat, we composed a regular expression to match individual mentions\nof accession numbers in the full-text. There are two kinds of accession\nnumber mentions: a single accession number (e.g., U00096) and a range of\naccession numbers (e.g., KK037225-KK037232). For the second kind, we\nexpanded the range to recover all the omitted accession numbers.\n\end{enumerate}\n\end{enumerate}\n\begin{enumerate}[resume]\n\item \textbf{Extracting paper-dataset citations for Figshare. }As a Figshare\nDOI contains the string 'figshare', we can use a regular expression\nto search for such DOIs in COCI. \n\begin{enumerate}\n\item \textit{Identifying Figshare DOIs in the set of cited DOIs. }In this\nstep, we extracted 918 Figshare DOIs as dataset candidates.\n\item \textit{Filtering by resource type. }Not all the Figshare DOIs are\ndatasets because Figshare supports many kinds of resources. We use\nthe Figshare Article API\footnote{https:\/\/docs.figshare.com\/\#public\\_article}\nto get the meta-data of a DOI. In the meta-data, there is a field\nindicating the resource type (type code 3 is dataset). After filtering\nthe candidates by resource type, we got 355 datasets.\n\end{enumerate}\n\end{enumerate}\n\n\subsubsection{Visualization of citation network}\n\nFor the purpose of exploring this network, we sample about one thousand\nnodes and 1.5 thousand edges from the network. We observe four patterns\nof papers citing datasets (a dataset cannot cite anything): one-to-one,\none-to-many, many-to-one, and many-to-many (Fig.
\ref{fig:network-of-sample}).\n\n\begin{figure}\n\begin{centering}\n\includegraphics[width=1\textwidth]{F1A_sampleData_colored}\n\par\end{centering}\n\caption{\label{fig:network-of-sample} Visualization of the publication--dataset\ncitation network. Papers appear as red nodes and datasets as green nodes.\nThe lighter the color, the older the resource.}\n\end{figure}\n\n\n\subsection{Network models for scientific artifacts}\n\n\subsubsection{\textsc{NetworkFlow}}\n\nWe adapt the network model proposed by \citet{walker2007ranking}.\nThis method, which we call \textsc{NetworkFlow} here, is inspired\nby \textsc{PageRank} and addresses the fact that citation networks\nare always directed back in time. In this model, each vertex in the\ngraph is a publication. The goal of this method is to propagate a\nvertex's impact through its citations.\n\nThis method simulates a set of researchers traversing publications\nthrough the citation network. It ranks publications by estimating\nthe average path length taken by researchers who traverse the network,\nconsidering the age of resources and a stopping criterion. Mathematically,\nit defines a traffic function $T_{i}(\tau_{\text{pub}},\alpha)$ for\neach paper. A starting paper is selected randomly with a probability\nthat exponentially decays in time with a \emph{decay} parameter $\tau_{\text{pub}}$.\nEach time the researcher moves, they can stop with a probability\n$\alpha$ or continue the search through the citation network with\na probability $1-\alpha$. The predicted traffic $T_{i}$ is proportional\nto the rate at which the paper is accessed. The concrete functional\nform is as follows.
The probability of starting at the $i$th node\nis \n\begin{equation}\n\rho_{i}\propto\exp\left(-\frac{\text{age}_{i}}{\tau_{\text{pub}}}\right)\label{eq:starting-probability}\n\end{equation}\n\nThen, the method defines a transition matrix from the citation\nnetwork as follows \n\begin{equation}\nW_{ij}=\begin{cases}\n\frac{1}{k_{j}^{\text{out}}} & \text{if \ensuremath{j} cites \ensuremath{i}}\\\n0 & \text{o.w.}\n\end{cases}\label{eq:transition-probability}\n\end{equation}\nwhere $k_{j}^{\text{out}}$ is the out-degree of the $j$th paper.\n\nThe average path length to all papers in the network starting from\na paper sampled from the distribution $\rho$ is defined as \n\begin{equation}\nT=I\cdot\rho+(1-\alpha)W\cdot\rho+(1-\alpha)^{2}W^{2}\rho+\cdots\label{eq:average-path-length}\n\end{equation}\n\nThe parameters $\tau_{\text{pub}}$ and $\alpha$ are found by cross-validation\nby predicting real traffic. In practice, this series can\nbe solved iteratively by computing the difference between $T_{t+1}$\nand $T_{t}$. For this and all algorithms used in this work, we stop\nthe iterations when the total absolute difference in rank between\ntwo consecutive iterations is less than $10^{-2}$, which typically\nrequires around 30 iterations.\n\n\subsubsection{\textsc{DataRank}}\n\nIn this article, we extend \textsc{NetworkFlow} to accommodate different\nkinds of nodes. The extension considers that the probability of starting\nat any single node should depend on whether the node is a publication\nor a dataset. That is, publications and datasets may vary in their\nrelevance period. We call this new algorithm \textsc{DataRank}. Mathematically,\nwe redefine the starting probability in Eq.
\ref{eq:starting-probability}\nof the $i$th node as \n\begin{equation}\n\rho_{i}^{\text{DataRank}}=\begin{cases}\n\exp\left(-\frac{\text{age}_{i}}{\tau_{\text{pub}}}\right) & \text{if \ensuremath{i} is a publication}\\\n\exp\left(-\frac{\text{age}_{i}}{\tau_{\text{dataset}}}\right) & \text{if \ensuremath{i} is a dataset}\n\end{cases}\label{eq:data-rank-starting}\n\end{equation}\n\nThis initialization process is depicted in Figure \ref{fig:Initialize-the-value}.\nHere, datasets have a smaller decay than papers. The size of the bubble\nindicates the initial flow as defined by Eq. \ref{eq:data-rank-starting}.\nAfter initialization, nodes of the same type\nand the same age have the same value, and the younger a node is,\nthe larger its value.\n\n\begin{figure}\n\begin{centering}\n\includegraphics[width=0.95\textwidth]{F2_data_paper_arrow}\n\par\end{centering}\n\caption{\label{fig:Initialize-the-value}Nodes after initialization. The size\nof a node represents its value; color indicates\nnode type: red nodes are papers and green nodes are datasets.}\n\end{figure}\n\nNow, we estimate the traffic in a similar fashion, but the traffic\nof a node $T_{i}^{\text{DataRank}}(\tau_{\text{pub}},\tau_{\text{dataset}},\alpha)$\ndepends on three parameters that should be found by cross-validation,\nwith the rest of the components of the method remaining the same (e.g.,\nEqs. \ref{eq:transition-probability} and \ref{eq:average-path-length}).\n\n\subsubsection{\textsc{DataRank-FB}}\n\nIn \textsc{DataRank}, each time the walker moves, there are two options:\nto stop with a probability $\alpha$ or to continue the search through\nthe reference list with a probability $1-\alpha$. However, there\nmay exist a third choice: to continue the search through papers that\ncite the current paper. In other words, the researcher can move in\ntwo directions: forwards and backwards.
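Before formalizing this backward option, the forward-only \textsc{DataRank} computation can be made concrete with a small sketch of Eqs. \ref{eq:data-rank-starting} and \ref{eq:average-path-length} (toy dense NumPy example; networks of the sizes reported above would require sparse matrices):

```python
import numpy as np

# Toy network: nodes 0 and 1 are papers, node 2 is a dataset.
is_dataset = np.array([False, False, True])
age = np.array([10.0, 2.0, 5.0])              # years since publication
tau_pub, tau_dataset, alpha = 20.0, 30.0, 0.05

# Edge (j, i) means "j cites i"; W[i, j] = 1 / out-degree(j).
cites = [(0, 1), (0, 2), (1, 2)]
n = len(age)
outdeg = np.zeros(n)
for j, _ in cites:
    outdeg[j] += 1
W = np.zeros((n, n))
for j, i in cites:
    W[i, j] = 1.0 / outdeg[j]

# Type-dependent starting distribution (Eq. data-rank-starting), normalized.
tau = np.where(is_dataset, tau_dataset, tau_pub)
rho = np.exp(-age / tau)
rho /= rho.sum()

# Solve the series T = rho + (1-alpha) W rho + ... by fixed-point
# iteration, stopping when ranks change by less than 1e-2 in total.
T = rho.copy()
while True:
    T_next = rho + (1 - alpha) * (W @ T)
    if np.abs(T_next - T).sum() < 1e-2:
        T = T_next
        break
    T = T_next
```

On this toy graph the dataset node accumulates the most traffic, since both papers' flows terminate in it.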
We call this modified method\n\textsc{DataRank-FB}. In this method, one may stop with a probability\n$\alpha-\beta$, continue the search forward with a probability $1-\alpha$,\nor continue backward with a probability $\beta$. To keep these probabilities within the\nunity simplex, the parameters must satisfy $\alpha>0$, $\beta>0$,\nand $\alpha>\beta$.\n\nThen, we define another transition matrix from the\ncitation network as follows \n\begin{equation}\nM_{ij}=\begin{cases}\n\frac{1}{k_{j}^{\text{in}}} & \text{if \ensuremath{j} is cited by \ensuremath{i}}\\\n0 & \text{o.w.}\n\end{cases}\label{eq:transition-probability-1}\n\end{equation}\nwhere $k_{j}^{\text{in}}$ is the number of papers that cite $j$.\nWe update the average path length to all papers in the network starting\nfrom $\rho$ as\n\n\begin{equation}\nT=I\cdot\rho+(1-\alpha)W\cdot\rho+\beta M\cdot\rho+(1-\alpha)^{2}W^{2}\rho+\beta^{2}M^{2}\rho+\cdots\label{eq:average-path-length-1}\n\end{equation}\n\n\n\subsection{Other network models}\n\n\subsubsection{\textsc{PageRank}}\n\n\textsc{PageRank} is a well-known and widely used webpage ranking\nalgorithm proposed by Google \citep{page1999pagerank}.
It uses the\ntopological structure of the web to determine the importance of a\nwebpage, independently of its content \citep{bianchini2005inside}.\nFirst, it builds a network of the web through the links between webpages.\nSecond, each webpage is assigned a random value, which is then updated\nbased on the link relationships---an iterative process that will\neventually converge to a stationary value.\n\nThe mathematical formulation of \textsc{PageRank} is\n\n\begin{equation}\nPR(p_{i})=\frac{1-d}{N}+d\underset{p_{j}\in M(p_{i})}{\sum\frac{PR(p_{j})}{L(p_{j})}},\label{eq:pagerank-algorithm}\n\end{equation}\nwhere $p_{1},p_{2},\cdots,p_{N}$ are the webpages whose importance\nneeds to be calculated, $M(p_{i})$ is the set of webpages that have\na link to page $p_{i}$, $L(p_{j})$ is the number of outbound links\non webpage $p_{j}$, and $N$ is the total number of webpages. The\nparameter $d$ is a damping factor that ranges from 0 to 1 and\nis usually set to 0.85 \citep{brin1998anatomy,page1999pagerank}.\n\n\subsubsection{Modified \textsc{PageRank}}\n\nConsidering that we have two types of resources in the network, we\nmodify the standard \textsc{PageRank} to allow different damping\nfactors for publications and for datasets. This amounts to modifying\nthe update equation to\n\n\begin{equation}\nPR(p_{i})=\frac{1-d_{data}}{N_{data}}+\frac{1-d_{pub}}{N_{pub}}+d_{data}\underset{p_{j}\in M^{data}(p_{i})}{\sum\frac{PR(p_{j})}{L(p_{j})}}+d_{pub}\underset{p_{k}\in M^{pub}(p_{i})}{\sum\frac{PR(p_{k})}{L(p_{k})}},\label{eq:modified-pagerank}\n\end{equation}\nwhere $p_{1},p_{2},\cdots,p_{N}$ are still the nodes whose importance\nneeds to be calculated, $M^{data}(p_{i})$ is the set of datasets that\nhave a link to node $p_{i}$, $M^{pub}(p_{i})$ is the set of papers\nthat have a link to node $p_{i}$, $L(p_{j})$ is the number of outbound\nlinks of node $p_{j}$, and $N_{data},N_{pub}$ are the total sizes\nof the dataset and paper collections, respectively.
The parameter $d_{data}$\nis a damping factor for datasets and $d_{pub}$ is a damping factor\nfor papers.\n\n\section{Results}\n\nWe aim at finding whether the estimated rank of a dataset\nbased on citation data is related to a real-world measure of relevance\nsuch as page views or downloads. We propose a method for estimating\nrankings that we call \textbf{\textsc{DataRank}}, which considers\ndifferences in citation dynamics between publications and datasets. We\nalso propose some variants of this method, and compare all of them\nto standard ranking algorithms. We use the data of visits to GenBank\ndatasets and downloads of Figshare datasets as measures of real usage.\nThus, we will investigate which of the methods works best for ranking\nthem.\n\n\subsection{Properties of the citation networks}\n\n\begin{figure}\n\begin{centering}\n\includegraphics[width=0.8\textwidth]{Citation_network_publication_age_prob_dist}\n\par\end{centering}\n\begin{centering}\n\includegraphics[width=0.8\textwidth]{Citation_network_received_citation_prob_dist}\n\par\end{centering}\n\caption{\label{fig:paper_network_age_citation_dist}Probability density of\nage and citations. Power laws describing the growth of both aspects\nof the network have parameters $k_{\text{age}}\approx-4.1$ and $k_{\text{citations}}\approx-2.16$.}\n\end{figure}\n\nWe first examined some statistics of the paper--paper citation network\nand the paper--dataset citation network. Specifically, we examined\ndataset age because it determines the initial rank of a node, and we\nexamined the citations because they control how ranks diffuse through\nthe network. We modeled publication age with respect to 2019 as a\npower law distribution $p(a)\propto a^{-k}$ and found the best\nparameter to be $k\approx-4.1$ (SE=$0.07$, $p<0.001$, Fig. \ref{fig:paper_network_age_citation_dist}\ntop panel).
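The paper does not state which estimator was used for these fits; as an illustration only, a power-law exponent can be recovered from samples by least squares on a log-log histogram (synthetic data; maximum-likelihood fitting is the more robust standard choice):

```python
import numpy as np

rng = np.random.default_rng(0)
k_true = 2.5

# Draw 200,000 samples from p(x) ~ x^(-k_true), x >= 1 (inverse-CDF method).
x = (1.0 - rng.random(200_000)) ** (-1.0 / (k_true - 1.0))

# Log-spaced histogram, then a straight-line fit in log-log space.
counts, edges = np.histogram(x, bins=np.logspace(0, 2, 25), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centers
keep = counts > 0
slope, _ = np.polyfit(np.log(centers[keep]), np.log(counts[keep]), 1)
k_hat = -slope                                   # should be close to k_true
```

With log-spaced bins the binning bias affects only the intercept, not the slope, which is why the recovered exponent is close to the true one.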
Similarly, the citation count can be modeled by a power\nlaw distribution $p(d)\propto d^{-k}$ with parameter $k\approx-2.16$\n(SE=$0.012$, $p<0.001$, Fig. \ref{fig:paper_network_age_citation_dist}\nbottom panel). While both distributions can be well described by power\nlaws, the age distribution had some out-of-trend dynamics for small\nage values because the number of publications is not growing as fast\nas the power law would predict. The network is thus expanding fast\nin nodes (e.g., age) and is highly skewed in citations.\n\n\begin{figure}\n\begin{centering}\n\includegraphics[width=0.8\textwidth]{genebank_usage}\n\par\end{centering}\n\begin{centering}\n\includegraphics[width=0.8\textwidth]{figureshare_usage}\n\par\end{centering}\n\caption{\label{fig:pdf-of-figshare}Probability density function of usage\n(website visits for GenBank and downloads for Figshare)}\n\end{figure}\n\nWe then wanted to examine differences in usage between GenBank and\nFigshare. We plotted the estimated probability density function of\nthis usage for both datasets as a log-log plot (Fig. \ref{fig:pdf-of-figshare}).\nWhile the scale of usage is significantly different (i.e., overall\nGenBank is more used than Figshare), there seems to be a long-tail\npower law relationship in usage. The GenBank dataset had a larger\nbut not significantly different scale parameter than Figshare ($k\approx-1.416$,\nSE=0.2699, in GenBank and $k\approx-1.125$, SE=0.0907, in Figshare,\nfor $p(u)\propto u^{-k}$, two-sample t-test $t(220)=-1.05$, $p=0.29$).\nThere is a significant bias for downloads below a certain threshold,\nwhere smaller numbers of downloads are less frequent than expected\nby the power laws. However, both datasets show similar patterns, suggesting\na common mechanism driving dataset usage behavior.\n\n\subsection{Prediction of real usage}\n\nOne of the real tests of whether the methods work is to predict how\nthey are related to real usage data.
For each algorithm and set of\nparameters, we estimated the rank of the networks' nodes. We then correlated\nthese ranks with real usage (i.e., web visits for GenBank and downloads\nfor Figshare). We now describe the best performance after this parameter\nsearch.\n\nWe performed a grid search with the publication decay (years) $\tau_{\text{pub}}\in\left\{ 1,5,10,20,30,50,70,100\right\} $,\nthe dataset decay (years) $\tau_{\text{dataset}}\in\left\{ 1,5,10,20,30,50,70,100\right\} $,\nand $\alpha\in\left\{ 0.05,0.15\right\} $. The best model\nof \textsc{DataRank} achieved a correlation of 0.3336 on GenBank, which\nis slightly better than \textsc{DataRank-FB} (0.3335), \textsc{NetworkFlow}\n(0.3327), \textsc{PageRank} (0.3324), and Modified \textsc{PageRank} (0.3327)\n(Fig. \ref{fig:result-correlation}). The best model of \textsc{DataRank}\nachieved a correlation of 0.1078 on Figshare, which is substantially\nbetter than \textsc{DataRank-FB} with 0.0723. All the other methods\nonly achieved negative correlations (\textsc{NetworkFlow} = $-0.072$,\n\textsc{PageRank} = $-0.073$, and Modified \textsc{PageRank} = $-0.073$)---we\nselected models based on absolute correlation because it would produce\nthe best predictive performance. Taken together, \textsc{DataRank}\nhas both the best absolute correlation and the highest correlation,\nsuggesting a superior ability to predict real usage.
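The sweep itself is a plain grid search; the scaffold below sketches it with the actual \textsc{DataRank} computation replaced by a stub (`rank_fn` and the toy `usage` values are placeholders, not the paper's data):

```python
import itertools
import numpy as np

# Parameter grids as described in the text.
tau_pub_grid = [1, 5, 10, 20, 30, 50, 70, 100]
tau_data_grid = [1, 5, 10, 20, 30, 50, 70, 100]
alpha_grid = [0.05, 0.15]

usage = np.array([120.0, 30.0, 5.0, 900.0])   # toy "real usage" per dataset

def rank_fn(tau_pub, tau_data, alpha):
    # Stub standing in for running DataRank on the citation network:
    # a parameter-dependent monotone transform of usage, for illustration.
    return usage ** (tau_data / (tau_pub + tau_data)) + alpha

def score(params):
    # Models are compared on absolute correlation with real usage.
    return abs(np.corrcoef(rank_fn(*params), usage)[0, 1])

best = max(itertools.product(tau_pub_grid, tau_data_grid, alpha_grid),
           key=score)
best_corr = score(best)
```

The same scaffold applies to every algorithm compared here; only `rank_fn` changes.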
This exploration allowed us to understand\nhow the parameters tell us something about the underlying characteristics\nof the citation networks.\n\nThe results across all parameters are shown in Figure \ref{fig:Parameter-search},\nand we now describe general trends. For experiments on the GenBank dataset,\nwe found that the \textsc{DataRank} model performs best with $\tau_{\text{pub}}=100$\nand $\tau_{\text{dataset}}=30$ (Fig. \ref{fig:Parameter-search}\ntop panel). For dataset decay, the performance reaches a\nplateau after around 20 years and peaks at 30 years.\nAfter 30 years, performance goes down slowly. In terms of publication\ndecay, the performance increases significantly before 20 years. After\n20 years, the performance enters a steady but marginal increase.\nIn all the cases, $\alpha=0.05$ is better than $\alpha=0.15$. For the Figshare\ndataset, dataset decay has divergent patterns: for small publication\ndecays, dataset decay increases the performance, while for large publication\ndecays, dataset decay decreases the performance. For publication decay,\nmore specifically, we observed the opposite trend compared to GenBank:\nthe performance goes down rapidly as the publication decay increases\n(Fig. \ref{fig:Parameter-search} bottom panel). However, similar\nto GenBank, it loses momentum after 20 years but still decreases\nsteadily.\n\nOne explanation for the differences between the best parameters for\nGenBank and Figshare is that dataset age and citation distributions\nhave different patterns. We performed an analysis to confirm this hypothesis.\nIndeed, we found that Figshare datasets are significantly younger\nthan GenBank datasets (bootstrapped difference between average ages\n= $-4.80$ years, SE = 0.24, $p<0.001$). We also found that Figshare\ndataset citations are more uniform than GenBank dataset citations,\nalthough not significantly different (bootstrapped difference in dataset\ncitation kurtosis = -44.34, SE = 48.74, $p=0.25$).
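A bootstrapped group difference of this kind can be computed as follows (toy data; the paper's exact resampling scheme is not specified):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy age samples: Figshare datasets younger, GenBank datasets older.
figshare_age = rng.gamma(shape=2.0, scale=1.5, size=400)
genbank_age = rng.gamma(shape=2.0, scale=4.0, size=400)

# Resample each group with replacement and record the mean difference.
n_boot = 2000
diffs = np.empty(n_boot)
for b in range(n_boot):
    f = rng.choice(figshare_age, size=figshare_age.size, replace=True)
    g = rng.choice(genbank_age, size=genbank_age.size, replace=True)
    diffs[b] = f.mean() - g.mean()

diff_mean = diffs.mean()        # bootstrapped difference between average ages
diff_se = diffs.std(ddof=1)     # bootstrap standard error
# Two-sided p-value: fraction of bootstrap differences crossing zero.
p_value = 2 * min((diffs >= 0).mean(), (diffs <= 0).mean())
```

The same recipe applies to any statistic (e.g., kurtosis) by swapping the quantity recorded in `diffs`.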
The \\textsc{DataRank}\nalgorithm uses large values of dataset decay to propagate publication\ncitations over a longer time and small values of data decay to steer\nthose citations towards a concentrated set of top cited datasets.\nThis is the case of GenBank dataset dynamics. Figshare, comparatively,\nobeys opposite age and citation dynamics which \\textsc{DataRank} attempts\nto accommodate by using small values of publication decay and large\nvalues of dataset decays. Concretely, the best publication decay for\nGenbank is 100 and for Figshare is 1, and the best dataset decay for\nGenbank is 30 and for Figshare is 100. Taken together, \\textsc{DataRank}'s\nparameters offer inferential interpretation of the citation network\ndynamics, which can help us understand how they are distributed in\ntime and credit spaces.\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[width=0.8\\textwidth]{genebank_parameter_search}\n\\par\\end{centering}\n\\begin{centering}\n\\includegraphics[width=0.8\\textwidth]{figshare_parameter_search}\n\\par\\end{centering}\n\\caption{\\label{fig:Parameter-search}Model performance during grid search\nwith publication decay time (years) $\\tau_{\\text{pub}}\\in\\left\\{ 1,5,10,20,30,50,70,100\\right\\} $,\nthe dataset decay time (years) $\\tau_{\\text{dataset}}\\in\\left\\{ 1,5,10,20,30,50,70,100\\right\\} $,\nand alpha in $\\alpha\\in\\left\\{ 0.05,0.15\\right\\} $}\n\n\\end{figure}\n\n\n\\section{Discussion}\n\nThe goal of this article is to better evaluate the importance of datasets\nthrough article citation network analysis. Compared with the mature\ncitation mechanisms of articles, referencing datasets is still in\nits infancy. Acknowledging the long time the practice of citing datasets\nwill take to be adopted, our research aims at recovering the true\nimportance of datasets even if their citations are biased compared\nto publications.\n\nScholars disagree on how to give credit to research outcomes. 
Regardless\nof how disputed citations are as a measure of credit \citep[see][]{moed1985use,diamond1986citation,martin1996use,seglen1997impact,wallin2005bibliometric},\nthey complement other measures that are harder to quantify, such as\npeer review assessment \citep{meho2007impact,piwowar2007sharing}\nor real usage such as downloads \citep{belter2014measuring}. Citations,\nhowever, are rarely used for datasets \citep{altman2013evolution},\ngiving these important research outcomes less credit than they might\ndeserve. Our proposal aims at solving some of these issues by constructing\na network flow that is able to successfully predict real usage better\nthan other methods. While citations are not a perfect measure of credit,\nhaving a method that can at least attempt to predict usage is advantageous.\n\nPrevious research has examined ways of normalizing citations by time,\nfield, and quality. This relates to our \textsc{DataRank} algorithm\nin that we are trying to normalize citations by year and artifact\ntype. Similar work has been done on patent citations: \citet{hall2001nber}\nattempt to eliminate the effects caused by year, field, and year-field\ninteraction by dividing the number of citations received by a\ncertain patent by the corresponding year-field average number of citations\nof patents of each cohort in each field. \citet{yang2015using} used\npatent citation networks to propagate citations and evaluate patent\nvalue. Other researchers have gone beyond time and field by also attempting\nto control for the quality of the resource. Clarivate Analytics editors\nconsider other normalizing factors, such as publishing standards, editorial\ncontent, and citation information, when creating the Data Citation Index\n(DCI) database \citep{reuters2012repository}.
However, a control\nlike this requires significantly more manual curation.\n\nWe found that the practice of following who cites an artifact (i.e.,\nbackward propagation) seemed to be less important than following\na cited artifact (i.e., forward propagation). That is, the more specialized\n\textsc{DataRank-FB} model (see Methods) did not produce higher predictability.\nThis lack of improvement could suggest that when scientists traverse\nthe network of citations, they seem to only follow one direction of\nthe graph (i.e., the reference list). This could perhaps be a limitation\nof the tools available to scientists to explore the citation network,\nor it could be because backward propagation changes constantly year after year.\nInitiatives that force the creation of identifiers (DOIs) for datasets,\nsuch as the NERC Science Information Strategy Data Citation and Publication\nproject \citep{callaghan2012making}, might change this pattern.\n\nWe find it useful to interpret the effect of different decay times\non the performance of \textsc{DataRank}. The best-fitting parameters\nshow that \textsc{DataRank} attempts to model the network's temporal\nand topological dynamics differently for GenBank and Figshare. Because\nGenBank tends to have older datasets with more concentrated citations\ncompared to Figshare, the large value of publication decay time and\nsmall value of dataset decay time produce rank distributions that\nmatch the underlying dynamics of GenBank (Fig. \ref{fig:Parameter-search}).\nSimilarly, the alpha parameter, which controls the probability that\nthe random walk stops, is larger for Figshare, intuitively suggesting\nthat due to the smaller network size and shorter temporal paths, a\nhigher probability of stopping should be in place (Fig. \ref{fig:Parameter-search}).\nThus, \textsc{DataRank} can help to interpret dataset citation behavior.
For GenBank and Figshare,\nwe do not have a great deal of information on actual usage. There\nare 1000 records, but we only located 693 in the publication network\nof GenBank and 355 datasets from Figshare. Also, there are significant\ndifferences in the information, with Figshare datasets being cited\nless often than GenBank ones. In the future, we will explore other data\nrepositories such as PANGEA, Animal QTLdb, and UK Data Archive, and\nwe will request updates about web visits to other GenBank sequences.\n\nOpen research datasets have few means for systematic attribution,\neven in well-resourced disciplines such as the biomedical community,\nand our proposal attempts to systematize this attribution more broadly\nwithout requiring changes in behavior. The lack of systematic benchmarking\ntools prevents good policy-making. Furthermore, it contributes to\nthe invisibility of digital objects and labor, an issue of serious\nconcern \citep{scroggins2019labor}. When digital objects are not\nwell described, they tend to disappear from view. They fall to the\nbottom of classic search engine results when there are no mechanisms\nto assign credit to datasets. This lack of indexing creates an asymmetrical\nrepresentation of some kinds of objects of science and can be an obstacle\nto quality evaluation and data reuse. Thus, \textsc{DataRank} can be\nused to correct these asymmetrical discrepancies between articles\nand datasets.\n\nOur approach does not directly estimate true impact but predicts one\npossible measure of usage.
However, this prediction could be a foundational\nstep toward developing technical and theoretical models of impact.\nAlso, in the literature, \textquotedblleft impact\textquotedblright{}\nis defined differently depending on the goal \citep{piwowar2013altmetrics}.\nMetrics estimating impact can be defined through \textquotedblleft use\textquotedblright ,\n\textquotedblleft reuse\textquotedblright , or \textquotedblleft engagement\textquotedblright ,\nand have a range of proxies. For example, a download might measure\nuse in one context while in another it may only indicate viewing. Comparatively,\nthe edifice of citation standards in the realm of journal articles\nis relatively well-established, with refinements to the interpretation\nof citation behavior such as the relative value of citations \citep{stuart2017data},\nthe disciplinary context \citep{borgman1989bibliometrics}, and the\nin-text location \citep{teplitskiy2018almost}. Nonetheless, usage\nserves as a proximal indicator of influence and a first-order approximation\nof the impact of scientific work. Recent work continues to develop\nand refine impact assessment tools \citep{silvello2018theory}.\nHowever, existing evaluation mechanisms often fail where there are\nno direct or indirect measures of usage, which can be an important\nindicator of scientific impact. Clearly, our approach serves not only\nas a concrete tool to measure impact---albeit imperfectly---but\nalso as a vehicle to discuss why and how to measure the impact of datasets.
Our method uses the publication--publication\ncitation network to propagate the impact to the publication--dataset\ncitation network. Using two databases of real dataset networks, we\ndemonstrate that our method is able to predict actual usage more accurately\nthan other methods. Our results suggest that datasets have different\ncitation dynamics from those of publications. In sum, our study provides\na prescriptive model to understand how citations interact in publication\nand dataset citation networks and gives a concrete method for producing\nranks that are predictive of actual usage.\n\nOur study advances the investigation of datasets and publications with\nnovel ideas from network analysis. Our work puts together concepts\nfrom other popular network flow estimation algorithms such as \\textsc{PageRank}\n\\citep{brin1998anatomy} and \\textsc{NetworkFlow} \\citep{walker2007ranking}.\nWhile scientists may take a long time to\nchange their citation behavior, we could use techniques such as the\none put forth in this article to accelerate the credit assignment\nfor datasets. Ultimately, the need for tracking datasets will only\nbecome more pressing, and therefore we must adapt or miss the opportunity\nto make datasets first-class citizens of science.\n\n\\section*{Acknowledgements}\n\nThe authors would like to thank Dr. Kim Pruitt from the National Center\nfor Biotechnology Information, NLM, NIH, DHHS. Tong Zeng was funded\nby the China Scholarship Council \\#201706190067. Sarah Bratt was partially\nfunded by National Science Foundation award \\#1561348. Daniel E. 
Acuna\nwas partially funded by the National Science Foundation award \\#1800956.\n\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjcar b/data_all_eng_slimpj/shuffled/split2/finalzzjcar new file mode 100644 index 0000000000000000000000000000000000000000..10b278b4330fc3c137ac6f6314b75518bdd2c127 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjcar @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{introduction}\nThe physical state and structure of the interstellar medium (ISM) are important parameters for understanding the star formation in a galaxy. In a typical star-forming region, young massive stars are born and start to illuminate their parental cloud. UV photons ionize the surrounding medium, creating H\\,{\\sc ii}\\xspace regions, while the transition to the neutral atomic or molecular phase occurs at higher visual extinction, where the material is more effectively shielded. Far-UV (FUV) photons control the chemical activity in these regions, namely in the photodissociation region (PDR; \\citealt{tielens-1985}). By studying the latter, we can investigate the conditions of the molecular clouds, which in turn will be potential sites for the next episode of star formation. \n\nHow do the propagation of radiation and the ISM composition affect\nISM observables in low-metallicity galaxies? Addressing this question is important to understand the evolution of low-metallicity galaxies, which undergo more bursty star formation than normal galaxies. Nearby star-forming dwarf galaxies present distinct\nobservational signatures compared to well-studied disk galaxies. Dwarfs are usually metal poor, H\\,{\\sc i}\\xspace\\ rich, and molecule poor as\na result of large-scale photodissociation \\citep[e.g.,][]{kunth-2000,hunter-2012,schruba-2012}. 
Mid-IR (MIR) and far-IR (FIR) observations have revealed bright atomic lines from H\\,{\\sc ii}\\xspace regions ([S\\,{\\sc iii}]\\xspace, [Ne\\,{\\sc iii}]\\xspace, [Ne\\,{\\sc ii}]\\xspace, [O\\,{\\sc iii}]\\xspace, etc.) and PDRs ([C\\,{\\sc ii}]\\xspace, [O\\,{\\sc i}]\\xspace) \\citep[e.g.,][]{hunter-2001,madden-2006,wu-2008,hunt-2010,cormier-2015}. Their spectral energy distributions (SEDs) are also different from spiral and elliptical galaxies and indicative of altered dust properties, with a relatively low abundance of polycyclic aromatic hydrocarbons (PAHs) and perhaps a different dust composition \\citep[e.g.,][]{madden-2006,galliano-2008,remy-2013}. \nIt is still unknown, however, whether these differences between dwarf and disk galaxies are the direct result of recent star formation activity shaping the ISM or instead a consequence of the low-metallicity ISM that is independent of star formation activity. To answer this, one needs to observe tracers of the interplay between the ISM and various stages of star formation activity. While there are now a number of important studies available on PDR properties modeling FIR lines on large scales in various extragalactic environments \\citep[e.g.,][]{kaufman-2006,vasta-2010,gracia-carpio-2011,cormier-2012,parkin-2013} or in our Galaxy under solar-metallicity conditions \\citep[e.g.,][]{cubick-2008,bernard-salas-2012,bernard-salas-2015}, only a few studies are published on individual extragalactic regions \\citep{mookerjea-2011,lebouteiller-2012}. Of particular interest are dwarf galaxies, where the effect due to radiative feedback is expected to be most significant. \nThe goal of this paper is to investigate how the low-metallicity ISM reacts under the effects of star formation in regions that have undergone different histories. 
The nearby low-metallicity galaxy NGC\\,4214 provides an excellent environment to perform this experiment because it has well-separated star-forming centers, one hosting a super star cluster, which allows us to study the effects of extreme star-forming conditions on the surrounding ISM. \n\nNGC\\,4214 is a nearby irregular galaxy located 3\\,Mpc away \\citep{dalcanton-2009} with a metallicity of $\\sim$0.3\\,Z$_{\\odot}$ \\citep{kobulnicky-1996} and a wealth of ancillary data. It shows various morphological characteristics such as H\\,{\\sc i}\\xspace holes and shells and a spiral pattern \\citep{mcintyre-1998}. \nNGC\\,4214 is known to host two main, well-defined star-forming regions with recent activity (Fig.\\,\\ref{3color}). The largest of the two regions is found in the center of the galaxy (also referred to as NW or region~I) and contains several clusters, including a super star cluster, while the second region is found to the southeast (also referred to as SE or region~II) and is younger and more compact. \nUsing near-IR, optical, and UV data, several studies have constrained the ages of the clusters in the two main regions, which show evidence for recent star formation \\citep{ubeda-2007,sollima-2013,sollima-2014}. \\cite{schruba-2012} have measured the ongoing SFR of NGC\\,4214 to be 0.12\\,M$_{\\odot}$\\,yr$^{-1}$. \nThe galaxy seems to have maintained its star formation in the past 10\\,Gyr at an average rate of $\\sim$0.02\\,M$_{\\odot}$\\,yr$^{-1}$, with a prolonged star formation episode that occurred about 3\\,Gyr ago and several shorter bursty events within the past Gyr at a rate of 0.05-0.12\\,M$_{\\odot}$\\,yr$^{-1}$ \\citep{mcquinn-2010,williams-2011}. 
\\par\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.52]{fig1.eps}\n\\centering\n\\caption{Three-color image of NGC\\,4214 using the HST WFC3 filters F438W (B, blue), F502N ([O\\,{\\sc iii}]\\xspace, green), and F657N (H$\\alpha$+[N\\,{\\sc ii}]\\xspace, red), downloaded from the Hubble Legacy Archive (\\protect\\url{http:\/\/hla.stsci.edu\/}).}\n\\label{3color}\n\\end{figure}\n\nIn this paper, we present observations of MIR and FIR fine-structure cooling lines in NGC\\,4214, which provide key diagnostics of the physical conditions of the ISM. We focus our analysis on the two main star-forming complexes. The line emission is analyzed with radiative transfer models to characterize the ISM conditions. We take into account directly observed star formation histories and explore how they affect the IR line emission. Photometry is used for the energy budget of the models. The structure of this paper is the following: Sect.~\\ref{data} describes the data, Sect.~\\ref{method} describes the model, and the results are presented in Sect.~\\ref{results}. We summarize and discuss our results in Sect.~\\ref{discussion}. \n\n\n\\section{Data}\n\\label{data}\n\\subsection{{\\it Herschel} data}\nWe used observations of NGC\\,4214 obtained by the PACS instrument \\citep{poglitsch-2010} onboard the \\textit{Herschel}\\xspace Space Observatory \\citep{pilbratt-2010} as part of the Dwarf Galaxy Survey \\citep{madden-2013}. The list of observations can be found in Table~\\ref{AOR}. The photometry data at 70$\\mu$m\\xspace, 100$\\mu$m\\xspace, and 160$\\mu$m\\xspace, with respective beam sizes (FWHM) of 5.6$^{\\prime\\prime}$\\xspace, 6.7$^{\\prime\\prime}$\\xspace, and 11.3$^{\\prime\\prime}$\\xspace, were published by \\cite{remy-2013}. These bands cover the peak of the SED originating from the reprocessed stellar light by the dust. 
\nThe spectroscopy comprises observations of the \\oiii88$\\mu$m\\xspace and \\nii122$\\mu$m\\xspace lines, which trace the ionized gas, as well as the \\cii157$\\mu$m\\xspace, \\oi63$\\mu$m\\xspace, and \\oi145$\\mu$m\\xspace lines, which trace the PDR. The data consist of small mappings of $5\\times5$ rasters separated by $\\sim$16$^{\\prime\\prime}$\\xspace for \\oiii88$\\mu$m\\xspace and \\oi63$\\mu$m\\xspace and $3\\times3$ rasters separated by $\\sim$24$^{\\prime\\prime}$\\xspace for the other lines, ensuring a uniform coverage of $1.6^{\\prime}\\times1.6^{\\prime}$. Originally presented in \\cite{cormier-2010}, the PACS spectral data were re-processed with the reduction and analysis software HIPE user release v.11 \\citep{ott-2010} and PACSman v.3.5 \\citep{lebouteiller-2012}. With the improved calibration and definition of the regions, flux maps are globally consistent with those published in \\cite{cormier-2010} and line ratios agree within 30\\%. Flux maps of the \\cii157$\\mu$m\\xspace and \\oi63$\\mu$m\\xspace lines are shown in Fig.\\,\\ref{mask}. The associated error maps include data and line-fitting uncertainties, but not calibration uncertainties, which are on the order of 15\\%. \nThe FWHM is 9.5$^{\\prime\\prime}$\\xspace below 100$\\mu$m\\xspace and 10$^{\\prime\\prime}$\\xspace, 11$^{\\prime\\prime}$\\xspace, 12$^{\\prime\\prime}$\\xspace at 122$\\mu$m\\xspace, 145$\\mu$m\\xspace, 160$\\mu$m\\xspace, respectively. All maps were convolved to the \\cii157$\\mu$m\\xspace resolution of $\\sim$12$^{\\prime\\prime}$\\xspace , which at the distance of NGC\\,4214 corresponds to a physical scale of 175\\,pc. 
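As a minimal sketch of this matching step (assuming purely Gaussian beams, whereas the analysis itself relies on dedicated convolution kernels; the pixel scale and map below are illustrative), a map is degraded to a broader beam by convolving with a Gaussian whose FWHM is the quadrature difference of the two beams:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def convolve_to_beam(image, fwhm_in, fwhm_out, pixscale):
    """Degrade `image` from a Gaussian beam of FWHM `fwhm_in` to
    `fwhm_out` (same units as `pixscale`, e.g. arcsec per pixel)."""
    if fwhm_out <= fwhm_in:
        raise ValueError("target beam must be broader than the input beam")
    # FWHM of the kernel connecting the two beams, added in quadrature
    fwhm_kernel = np.sqrt(fwhm_out**2 - fwhm_in**2)
    sigma_pix = fwhm_kernel / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pixscale
    return gaussian_filter(image, sigma_pix, mode="nearest")

# Toy point source in a PACS 70um map (5.6" beam) brought to the ~12" beam
map70 = np.zeros((65, 65))
map70[32, 32] = 1.0
map12 = convolve_to_beam(map70, fwhm_in=5.6, fwhm_out=12.0, pixscale=2.0)
```

The total flux (the map sum) is conserved while the peak surface brightness is diluted into the broader beam.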
\nIn both the photometry and spectroscopy data sets, the convolutions were performed using kernels provided by \\cite{aniano-2011}\\footnote{\\url{http:\/\/www.astro.princeton.edu\/~ganiano\/Kernels\/}}.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=5.585cm]{fig2a.eps}\\hspace{-1mm}\n\\includegraphics[width=4.25cm]{fig2b.eps}\\hspace{-1mm}\n\\includegraphics[width=4.25cm]{fig2c.eps}\\hspace{-1mm}\n\\includegraphics[width=4.25cm]{fig2d.eps}\n\\centering\n\\caption{\nMaps of the \\cii157$\\mu$m\\xspace, \\oi63$\\mu$m\\xspace, \nand TIR emission in NGC\\,4214. Units are W\\,m$^{-2}$\\,sr$^{-1}$. \nThe two star-forming regions, as defined in Sect.~\\ref{sect:definesf}, \nare outlined with red contours. \nThe right panel shows the \\textit{Spitzer}\\xspace IRS mapping strategy. \nOrange: Long-High module coverage; \ncyan: Short-High module coverage; \ngray background: \\cii157$\\mu$m\\xspace map. \n}\n\\label{mask}\n\\end{figure*}\n\n\n\n\\subsection{{\\it Spitzer} data}\nNGC\\,4214 was observed with the three instruments onboard the \\textit{Spitzer}\\xspace space telescope \\citep{werner-2004}. We used the MIPS 24$\\mu$m\\xspace observations obtained within the Local Volume Legacy Survey \\citep{dale-2009} that were processed by \\cite{bendo-2012}. The MIPS 24$\\mu$m\\xspace map, which has an original FWHM of 5.9$^{\\prime\\prime}$\\xspace, was convolved to a resolution of $\\sim$12$^{\\prime\\prime}$\\xspace to match that of the PACS data. \\par\n\nThe IRS observations (program ID 3177, PI. Skillman) consist of small mappings of the two main star-forming regions in high-resolution mode \\citep{houck-2004}. We extracted the data from the \\textit{Spitzer}\\xspace Heritage Archive (see Table~\\ref{AOR}) and processed them with the software CUBISM v1.8 \\citep{smith-2007}. 
We used the default mapping procedure and bad pixel removal to produce spectral cubes with pixel sizes 2.26$^{\\prime\\prime}$\\xspace for the Short-High module and 4.46$^{\\prime\\prime}$\\xspace for the Long-High module. \nWe then created surface brightness maps for all spectral lines of interest -- \\siv10.5$\\mu$m\\xspace, \\neii12.8$\\mu$m\\xspace, \\neiii15.6$\\mu$m\\xspace, \\siii18.7$\\mu$m\\xspace, \\siii33.5$\\mu$m\\xspace, which all trace H\\,{\\sc ii}\\xspace regions -- in the following way: for each pixel of the cube, we extracted the signal with a range of $\\pm$0.7$\\mu$m\\xspace around the line and fit a polynomial of order two for the baseline and a Gaussian for the line with the IDL routine \\texttt{mpfit}. For a more stable fit, the peak of the Gaussian is required to be positive, the position of the peak is expected within one instrumental FWHM of the rest wavelength, and the width is limited to the instrumental resolution ($R=600$). Finally, we added random noise and iterated the fit $300$ times to estimate the best-fit parameters as the median of the resulting parameters and the error on those parameters as the standard deviation. \nError maps again include data and line-fitting uncertainties, but not calibration uncertainties, which are on the order of 5\\%. \nThe coverage of the star-forming regions of the galaxy in the IRS maps is only partial, as shown in Fig.\\,\\ref{mask}. No integrated values for the flux of the whole regions could be retrieved. To obtain a representative value for the line flux in each region, we regridded the IRS maps to that of the \\oiii88$\\mu$m\\xspace map. Then we selected the pixels that appear in both maps and scaled the emission of these pixels to the \\oiii88$\\mu$m\\xspace line to infer the corresponding line fluxes for the star-forming regions as a whole. 
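The per-pixel fitting scheme described above (second-order baseline plus a bounded Gaussian, with errors from refitting noise-perturbed spectra) can be sketched as follows; this is an illustrative Python translation using `scipy.optimize.curve_fit` rather than the IDL `mpfit` routine used in the actual reduction:

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM2SIG = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def line_model(wl, a0, a1, a2, peak, center, sigma):
    # second-order polynomial baseline + Gaussian line profile
    return a0 + a1 * wl + a2 * wl**2 + peak * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def fit_line_mc(wl, flux, noise, rest_wl, resolution=600.0, n_iter=300, seed=0):
    """Fit the line at `rest_wl`, refitting `n_iter` noise-perturbed copies
    of the spectrum; returns (median, std) of the integrated Gaussian flux.
    `noise` is the per-channel rms."""
    fwhm_inst = rest_wl / resolution          # unresolved-line width limit
    sig_inst = fwhm_inst * FWHM2SIG
    p0 = [np.median(flux), 0.0, 0.0,
          max(flux.max() - np.median(flux), noise), rest_wl, 0.8 * sig_inst]
    # peak >= 0, center within one instrumental FWHM, width <= instrumental
    bounds = ([-np.inf, -np.inf, -np.inf, 0.0, rest_wl - fwhm_inst, 0.01 * sig_inst],
              [np.inf, np.inf, np.inf, np.inf, rest_wl + fwhm_inst, sig_inst])
    rng = np.random.default_rng(seed)
    integrals = []
    for _ in range(n_iter):
        pert = flux + rng.normal(0.0, noise, flux.size)
        try:
            p, _ = curve_fit(line_model, wl, pert, p0=p0, bounds=bounds)
        except RuntimeError:                  # skip the rare non-converging fit
            continue
        integrals.append(np.sqrt(2.0 * np.pi) * p[3] * p[5])  # Gaussian area
    return np.median(integrals), np.std(integrals)
```

The median of the perturbed fits gives the best-fit line flux and their scatter the associated uncertainty, mirroring the 300-iteration procedure described in the text.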
\n\n\\begin{table}\n\\caption{List of \\textit{Herschel}\\xspace and \\textit{Spitzer}\\xspace observations.}\n\\label{AOR}\n \\centering\n\\vspace{-5pt}\n \\begin{tabular}{l l}\n\\hline \\hline\n \\vspace{-10pt}\\\\\n\\multicolumn{2}{c}{\\textit{Herschel}\\xspace data}\\\\\nInstrument & Observation Identification number (OBSID) \\\\\n\\hline\n \\vspace{-10pt}\\\\\nPACS phot. & $1342211803$, $1342211804$, $1342211805$, \\\\\n & $1342211806$ \\\\\nPACS spec. & $1342187843$, $1342187844$, $1342187845$, \\\\\n & $1342188034$, $1342188035$, $1342188036$ \\\\\n\\hline \\hline\n \\vspace{-10pt}\\\\\n\\multicolumn{2}{c}{\\textit{Spitzer}\\xspace data}\\\\\nInstrument & Astronomical Observation Request (AOR) \\\\\n\\hline\n \\vspace{-10pt}\\\\\nMIPS 24$\\mu$m\\xspace & $22652672$, $22652928$, $22710528$, $22710784$ \\\\\n\\multicolumn{2}{l}{IRS Short-High} \\\\\n~On-source: & $10426368$, $10426624$, $10426880$, $10427136$ \\\\\n~Background: & $13728256$, $13729792$, $13730304$, $13730816$, \\\\\n& $13733120$, $13762304$, $13767424$, $13767936$, \\\\\n& $13768448$, $13768704$, $13769728$, $13770496$, \\\\\n& $13773568$\\\\\n\\multicolumn{2}{l}{IRS Long-High} \\\\\n~On-source: &$10424832$, $10425088$, $10425344$, $10425600$, \\\\\n& $10425856$, $10426112$, $10427392$\\\\\n~Background: & $13728768$, $13729792$, $13730304$, $13730816$, \\\\\n& $13733120$, $13763328$, $13764352$, $13765376$, \\\\\n&$13767424$, $13767936$, $13768448$, $13768704$,\\\\\n& $13769728$, $13770496$, $13773568$\\\\\n\\hline \\hline\n\\end{tabular}\n\\end{table}\n\nWe focus on these selected IRS lines because they are among the brightest MIR fine-structure cooling lines and can be used as reliable diagnostics of the physical conditions in H\\,{\\sc ii}\\xspace regions. In general, the intensity or luminosity ratio of two lines of the same element but different ionization level is indicative of the radiation field hardness. 
Such diagnostics are the \\neiii15.6$\\mu$m\\xspace\/\\neii12.8$\\mu$m\\xspace or the \\siv10.5$\\mu$m\\xspace\/\\siii18.7$\\mu$m\\xspace ratios \\citep[e.g.,][]{verma-2003}, which are insensitive to the density because of their high critical densities (see Table~\\ref{fluxes}). Similarly, lines of the same species and ionization level but from different transitions are indicative of the electron density, as a result of the different critical densities of each transition \\citep{osterbrock}. Examples are the \\siii18.7$\\mu$m\\xspace\/\\siii33.5$\\mu$m\\xspace, \\neiii15.6$\\mu$m\\xspace\/\\neiii36.0$\\mu$m\\xspace, or \\nii122$\\mu$m\\xspace\/\\nii205$\\mu$m\\xspace ratios \\citep[e.g.,][]{rubin-1994}. These diagnostics are insensitive to the temperature inside the H\\,{\\sc ii}\\xspace region. Unfortunately, the \\neiii36.0$\\mu$m\\xspace and \\nii205$\\mu$m\\xspace lines fall at the edge of the IRS and PACS wavelength ranges, respectively, where the spectra are too noisy to detect them or to derive a reliable line ratio for the two star-forming regions. Therefore we relied on the [S\\,{\\sc iii}]\\xspace line ratio to probe the electron density.\n\n\n\n\\subsection{Total infrared luminosity map}\nTo construct a total infrared (TIR) luminosity map of the galaxy, we combined the MIPS 24$\\mu$m\\xspace and PACS 70, 100, and 160$\\mu$m\\xspace data, following \\cite{galametz-2013}: \n\\begin{equation}\nL_{\\rm TIR}=\\int_{3\\mu m}^{1\\,100\\mu m}L_{\\nu}d\\nu=\\sum c_i L_i\n.\\end{equation}\nWe used the values of the coefficients, $c_i$, from Table~3 of their paper: $[c_{24},c_{70},c_{100},c_{160}] = [2.064,0.539,0.277,0.938]$. This method, although slightly less accurate than a direct integration of a well-sampled SED, does not require degrading the resolution of our data beyond the PACS 160$\\mu$m\\xspace beam and is sufficient for our modeling purposes to estimate the energy budget in the star-forming regions. 
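In practice, this linear combination reduces to a weighted sum of the convolved, regridded band maps; a minimal sketch, with the band maps as hypothetical inputs in consistent $\nu S_\nu$ surface-brightness units on a common grid:

```python
import numpy as np

# Coefficients [c24, c70, c100, c160] from Table 3 of Galametz et al. (2013)
C_TIR = {24: 2.064, 70: 0.539, 100: 0.277, 160: 0.938}

def tir_map(band_maps):
    """band_maps: dict mapping band (um) -> 2-D array of nu*S_nu on a
    common grid and in common units; returns the TIR surface brightness."""
    return sum(c * band_maps[band] for band, c in C_TIR.items())

# Toy 2x2 maps; in practice these are the convolved, regridded images
maps = {b: np.ones((2, 2)) for b in C_TIR}
tir = tir_map(maps)   # every pixel equals 2.064 + 0.539 + 0.277 + 0.938
```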
The $L_{\\rm TIR}$ map is shown in Fig.\\,\\ref{mask}.\n\n\n\n\\subsection{Defining the star-forming regions}\n\\label{sect:definesf}\nTo define the apertures for the main star-forming regions, we set a threshold for the signal-to-noise ratio (S\/N) equal to 5 in each individual PACS 70$\\mu$m\\xspace, 100$\\mu$m\\xspace, and 160$\\mu$m\\xspace photometry and PACS spectral map. We masked all pixels below this S\/N and drew the contours, which include all the remaining unmasked pixels separately for the photometry and the spectroscopy maps. Because the emission in the photometry maps is more extended, we kept the contours from the photometry and used these apertures throughout the analysis to define the two star-forming regions, as shown in Fig.\\,\\ref{mask}. This means that pixels in the spectroscopy maps that are below the S\/N threshold but within the region contours are still counted. \nThe fluxes and uncertainties for the line and TIR emission were measured taking into account all pixels in each region. They\nare reported in Table~\\ref{fluxes}. \nThe ISM emission (gas and dust) peaks in these two regions, and most of the line fluxes are twice as high in the central region as$\\text{ in}$ the southern region, except for \\neiii15.6$\\mu$m\\xspace (factor 1.3) and \\siv10.5$\\mu$m\\xspace, which have lower fluxes toward the central region. This hints at different physical conditions in the two regions, which we investigate with radiative transfer models. \n\n\\begin{table*}\n\\caption{Observed MIR and FIR fluxes for the line and broadband emission.\nUncertainties on the fluxes include data and line-fitting uncertainties, but not \ncalibration uncertainties, which are on the order of 5\\% for the \\textit{Spitzer}\\xspace lines and 15\\% for the \\textit{Herschel}\\xspace lines.\nCritical density and ionization potential values are taken from \\citet{cormier-2012}. 
\nCritical densities are noted [e] for collisions with electrons and [H] for collisions with hydrogen atoms.}\n\\label{fluxes}\n \\centering\n\\vspace{-5pt}\n \\begin{tabular}{l c c c c c}\n\\hline\\hline\n \\vspace{-10pt}\\\\\n & & \\multicolumn{2}{c}{Flux $\\pm$ uncertainty} & & Ionization \\\\ \n Line & Wavelength & \\multicolumn{2}{c}{($\\times10^{-16}$ W~m$^{-2}$)} & Critical density & potential \\\\ \n & ($\\mu$m\\xspace) & Region I & Region II & (cm$^{-3}$) & (eV) \\\\\n\\hline\n \\vspace{-10pt}\\\\\n\\lbrack \\textsc{S\\,iv}] & 10.51 & $5.68\\pm0.21$ & $8.40\\pm0.10$ & $5\\times10^4$ [e] & 34.79\\\\ \n\\lbrack Ne\\textsc{\\,ii}] & 12.81 & $8.98\\pm0.22$ & $4.13\\pm0.11$ & $7\\times10^5$ [e] & 21.56\\\\\n\\lbrack Ne\\textsc{\\,iii}] & 15.56 & $18.70\\pm0.14$ & $14.25\\pm0.08$ & $3\\times10^5$ [e] &40.96\\\\\n\\lbrack \\textsc{S\\,iii}] & 18.71 & $11.78\\pm0.20$ & $6.85\\pm0.08$ & $2\\times10^4$ [e] & 23.34 \\\\\n\\lbrack \\textsc{S\\,iii}] & 33.48 & $18.71\\pm0.27$ & $8.20\\pm0.12$ & $7\\times10^3$ [e] & 23.34\\\\\n\\lbrack \\textsc{O\\,i}] & 63.18 & $10.11\\pm0.35$ & $4.06\\pm0.21$ & $5\\times10^5$ [H] & -\\\\\n\\lbrack \\textsc{O\\,iii}] & 88.36 & $31.86\\pm0.62$ & $13.50\\pm0.40$ & $5\\times10^2$ [e] & 35.12\\\\\n\\lbrack \\textsc{N\\,ii}] & 121.90 & $0.44\\pm0.20$ & $0.14\\pm0.08$ & $3\\times10^2$ [e] & 14.53\\\\ \n\\lbrack \\textsc{O\\,i}] & 145.52 & $0.65\\pm0.09$ & $0.32\\pm0.07$ & $1\\times10^5$ [H] & -\\\\\n\\lbrack \\textsc{C\\,ii}] & 157.74 & $26.34\\pm0.33$ & $10.05\\pm0.21$ & 50 [e], $3\\times10^3$ [H] & 11.26\\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{l c c c}\n\\hline\n \\vspace{-10pt}\\\\\n & & \\multicolumn{2}{c}{Flux density $\\pm$ uncertainty} \\\\\n Broadband & Wavelength & \\multicolumn{2}{c}{(Jy)}\\\\\n & ($\\mu$m\\xspace) & Region I & Region II \\\\\n\\hline\n \\vspace{-10pt}\\\\\nMIPS & 24 & $0.67\\pm0.01$ & $0.48\\pm0.01$ \\\\\nPACS & 70 & $7.36\\pm0.21$ & $3.72\\pm0.12$ \\\\\nPACS & 100 & $7.91\\pm0.19$ & $4.10\\pm0.11$ 
\\\\\nPACS & 160 & $6.07\\pm0.10$ & $3.23\\pm0.06$ \\\\\n\\hline\n \\vspace{-10pt}\\\\\n{$L_{\\rm TIR}$ (erg\\,s$^{-1}$)} & 3 -- 1100 & $5.25\\times10^{41}$ & $3.02\\times10^{41}$ \\\\\n\\hline \\hline\n\\end{tabular}\n\\end{table*}\n\n\\section{Description of the model}\n\\label{method}\n\\subsection{Model geometry and strategy}\nOur objective is to characterize the physical conditions of the ISM phases from which the IR emission arises in NGC\\,4214. To that end, we used the spectral synthesis code \\textsc{Cloudy} v.13, last described by \\cite{ferland-2013}. We performed a detailed multiphase modeling of the ISM in which we combined line and continuum emission, following the method described in \\cite{cormier-2012}. We considered the two main star-forming regions of NGC\\,4214: the most evolved central region (NW-I) and the southern region (SE-II). Here we present the main aspects of the model and how it is applied to each region. The components\/ISM phases of the model are \n\\begin{enumerate}\n \\item a central source of radiation,\n \\item an ionized medium component (H\\,{\\sc ii}\\xspace region) that surrounds the central source, \n \\item a neutral medium (PDR) surrounding the H\\,{\\sc ii}\\xspace region.\n\\end{enumerate}\n\\noindent \nThis method assumes a single radiation source responsible for the observed SED of the studied region. In other words, we took all of the different sources (star clusters) and the surrounding clouds from which they have formed and represented them with one central source and one surrounding cloud. We thus targeted the integrated properties of each region. In practice, we mixed components of the ISM that have different compositions and properties\nand blended them into a single system. \nThe applied geometry is spherical. The source is in the center and the illuminated face of the cloud lies at a certain distance that we call the inner radius. 
In our case, the effective geometry is one-dimensional plane-parallel because the cloud forms a thin shell and its distance from the radiation source is large. \\par\n\nThe radiation source, representative of the stars that populate the clusters of the star-forming region, illuminates a cloud of dust and gas. It controls the ionization parameter, $U$, which characterizes the field and is defined as the ratio of the incident ionizing photon density to the hydrogen density. Hard UV photons from the source ionize hydrogen and form the H\\,{\\sc ii}\\xspace region. As this radiation is transmitted through the cloud, it is attenuated\nand thus becomes softer, which decreases its influence on ionization. However, it still controls the processes further in the cloud (in the PDR).\\par\n\nThe adopted strategy is to treat the H\\,{\\sc ii}\\xspace region first and then use the H\\,{\\sc ii}\\xspace region parameters as input for the PDR modeling. This allows for a self-consistent approach \\citep{abel-2005}, which is usually not directly available in standard PDR codes (see \\citealt{roellig-2007} for a comparison of PDR codes), and is important to accurately derive the radiation field that impinges\non the PDR. We first ran the simulation until the end of the H\\,{\\sc ii}\\xspace region, choosing to stop the simulation when the (electron) temperature reached 500\\,K to ensure that the model had transitioned to the atomic phase. We iterated to optimize our parameters so\nthat they matched the observed emission of known H\\,{\\sc ii}\\xspace region-diagnostic lines ([\\textsc{S\\,iv}]\\,10.5$\\mu$m\\xspace, [Ne\\textsc{\\,ii}]\\,12.8$\\mu$m\\xspace, [Ne\\textsc{\\,iii}]\\,15.6$\\mu$m\\xspace, [\\textsc{S\\,iii}]\\,18.7$\\mu$m\\xspace, [\\textsc{S\\,iii}]\\,33.5$\\mu$m\\xspace, \\oiii88$\\mu$m\\xspace, and \\nii122$\\mu$m\\xspace). 
Then we fed the result of this model to the PDR and compared the predictions to the remaining three PDR lines observed: \\oi63$\\mu$m\\xspace, \\oi145$\\mu$m\\xspace, and \\cii157$\\mu$m\\xspace, choosing a visual extinction of 10\\,mag as the stopping criterion. At this point, the gas temperature had fallen to roughly 10\\,K. \n\n\n\n\\subsection{Model parameters}\nWe constrained the properties of the star-forming regions by varying some of the parameters that control the physics of the models while keeping others fixed. The main parameters that we consider are\n\\begin{enumerate}\n \\item a radiation field source: shape, age, luminosity (varied),\n \\item the hydrogen density of the ISM, $n_{\\rm H}$ (varied),\n \\item the ISM gas elemental abundances (fixed),\n \\item the inner radius, $r_{\\rm in}$ (varied),\n \\item the magnetic field, B (fixed),\n \\item the turbulent velocity, $v_{\\rm turb}$ (fixed).\n\\end{enumerate}\n\\noindent Parameters that are fixed were set to values from the literature. The other parameters were varied inside a range whose width reflects the dispersion of published measurements or of the data. The main parameter of interest for this study is the radiation field, which was varied within a range guided by studies of the star formation history.\n\n\n\\subsubsection{Hydrogen density ($n_{\\rm H}$)}\n\\label{density}\nWe performed our simulations assuming pressure equilibrium. As the model proceeds through consecutive zones of the cloud, it keeps the pressure constant. Thus, the density of the medium varies to satisfy this equilibrium. The initial density that we specified in the models is the density at the illuminated face of the cloud, where the H\\,{\\sc ii}\\xspace region starts. 
The initial values and the range we probed are motivated by the observed \\siii18.7$\\mu$m\\xspace\/\\siii33.5$\\mu$m\\xspace ratio in the H\\,{\\sc ii}\\xspace region, which is\nknown to be sensitive to the electron density in the range 10$^2$-10$^4$\\,cm$^{-3}$ \\citep{rubin-1994}. We therefore let the initial density vary in the range 100-300\\,cm$^{-3}$ for the central region and 300-600\\,cm$^{-3}$ for the southern region with a common step of 25\\,cm$^{-3}$. By iterating this procedure, we constrained the best values for the density at the beginning of the H\\,{\\sc ii}\\xspace region. \n\n\n\\subsubsection{Inner radius ($r_{\\rm in}$)}\nIn our spherical geometry, the source is at the center and is surrounded by a cloud. The illuminated face of the cloud lies at a certain distance $r_{\\rm in}$. This is not a strictly physically constrained parameter because the setup we used does not realistically model each cluster, but instead tries to mimic a whole region and reproduce its emission. Varying this radius changes the incident photon flux and is thus expected to affect our results. We let $r_{\\rm in}$ vary from 1 to 100\\,pc for both regions.\n\n\n\\subsubsection{Elemental abundances}\nElemental abundances in the models were set to the observed values for oxygen, sulfur, nitrogen, and neon, taken from \\cite{kobulnicky-1996}. Some measurements partially cover our defined regions, and we adopted them as representative. For carbon only, we scaled the abundance according to the study on the dependence of $\\log(C\/O)$ on metallicity by \\cite{izotov-1999}. For other elements, we used the default ISM composition of \\textsc{Cloudy} and scaled the abundances to our metallicity (1\/3\\,Z$_{\\odot}$). The values used are indicated in Table~\\ref{elements}. \n\n\\begin{table}\n \\caption{Elemental abundances in NGC\\,4214. 
\n Values (in logarithmic scale) for NGC\\,4214 are taken from \\citet{kobulnicky-1996} and solar values from \\citet{asplund-2009}.}\n\\label{elements}\n \\centering\n\\vspace{-5pt}\n \\begin{tabular}{c c c c}\n \\hline \n \\vspace{-10pt}\\\\\n Abundance & Region I & Region II & Solar value \\\\\n \\hline\n \\vspace{-10pt}\\\\\n \\lbrack O\/H] & $-3.795\\pm0.05$ & $-3.64\\pm0.04$ & $-3.31\\pm0.05$ \\\\\n \\lbrack S\/H] & $-5.380\\pm0.06$ & $-5.21\\pm0.06$ & $-4.88\\pm0.03$ \\\\\n \\lbrack N\/H] & $-5.094\\pm1.00$ & $-5.02\\pm0.10$ & $-4.17\\pm0.05$ \\\\\n \\lbrack Ne\/H] & $-4.535\\pm0.11$ & $-4.51\\pm0.08$ & $-4.07\\pm0.10$ \\\\\n \\lbrack C\/H] & $-4.295\\pm0.30$ & $-4.14\\pm0.30$ & $-3.57\\pm0.05$ \\\\\n \\hline \n \\end{tabular}\n \\end{table}\n\n\\subsubsection{Radiation field - shape and energy}\nWe used the code \\textsc{Starburst99} \\citep{leitherer-2010} to produce a stellar population spectrum that serves as input for our models. We chose a Kroupa initial mass function between 0.08 and 120\\,M$_{\\odot}$ \\citep{kroupa-2001}, as done in \\cite{andrews-2013}, and Padova asymptotic giant branch tracks with Z=0.004. \nAs discussed above, we did not model each cluster individually, but we used integrated emission from the entire star-forming regions instead. We tried to be as close to the shape and intensity of the radiation field of the H\\,{\\sc ii}\\xspace regions as possible. Motivated by the star formation history of the galaxy presented in \\cite{mcquinn-2010} and \\cite{williams-2011}, we tested the following cases:\n\\begin{itemize}\n\\item For the central region, we considered two limiting scenarios: \n(i)~a single-burst star formation event and (ii)~a continuous star formation model, with SFR=0.07\\,${\\rm M_{\\odot}\\,yr^{-1}}$. 
The ages of the clusters were varied within a range of (i)~1-20\\,Myr (in steps of 0.5\\,Myr) and (ii)~200-1000\\,Myr (in steps of 200\\,Myr).\n\\item For the southern region, we considered a single-burst event with an age that varied from 1 to 20\\,Myr (in steps\nof 0.5\\,Myr).\n\\end{itemize}\n\\noindent\nA fixed mass of 10$^5$\\,M$_{\\odot}$ (typical cluster mass in \\citealt{sollima-2013}) was considered for the single bursts, where the stars are created at once (delta burst). Our value for the SFR (0.07\\,${\\rm M_{\\odot}\\,yr^{-1}}$) is representative of the `average' rate at which this galaxy formed stars within the past 1\\,Gyr of its history. The ages of the clusters in both regions were guided by values from \\cite{ubeda-2007}, \\cite{sollima-2013}, and \\cite{sollima-2014}. \\cite{ubeda-2007} found 2-7\\,Myr for region~I, along with extended clusters in the same region with ages of 150 to 190\\,Myr. In region~II, they found ages spreading around 2\\,Myr. \\cite{sollima-2013,sollima-2014} reported a larger spread in ages. In region~I, they obtained ages around a median of 14\\,Myr, and for the more extended clusters the ages lie between 10 and 300\\,Myr. For region~II, they found a median age of $\\sim$20\\,Myr. \\par\n\nFor the luminosity emitted from each region, we chose to use the TIR luminosity as a first approximation of the luminosity of the starburst. This choice implies the assumption that all radiation from the clusters in the region is reprocessed by the dust and thus emitted at longer wavelengths. In doing so, we kept in mind that there can be processes that we did not model (UV escape fraction or a diffuse ionized medium, for example) and that can contribute to this radiated energy (see Sect.~\\ref{discussion}). \n\n\n\\subsubsection{Magnetic field strength (B) and turbulent velocity ($v_{\\rm turb}$)}\nMagnetic fields and turbulence play an important role in structuring the ISM \\citep[e.g.,][]{mckee-2007}. 
When \\textsc{Cloudy} solves the pressure equilibrium for each zone of the modeled cloud, a magnetic pressure term equal to $\\displaystyle P_{\\rm B}=\\frac{B^2}{8\\pi}$ is included in the equation of state along with a turbulent pressure term equal to $\\displaystyle P_{\\rm turb}=2.8 \\cdot 10^6 \\cdot 3 \\cdot (\\frac{n_{\\rm H}}{10^5\\,{\\rm cm^{-3}}})(\\frac{v_{\\rm turb}}{\\rm {1\\,km\\,s^{-1}}})^2$~[cm$^{-3}$\\,K], for isotropic turbulent motions, where $n_{\\rm H}$ is the total hydrogen density and $v_{\\rm turb}$ is the turbulent velocity (see the {\\sc Hazy} documentation of \\textsc{Cloudy} for more information). \\par\n\nThe magnetic field of NGC\\,4214 was measured by \\cite{kepley-2011} using multiwavelength radio emission. The reported field strength in the center of the galaxy is 30\\,$\\mu$G, and the pressure term due to this field has the same order of magnitude as the hot gas and the gravitational contributions. Since it is not well known how the observed magnetic field might affect our observed line intensities, we excluded it from our default models and tested one case with a magnetic field strength of 30\\,$\\mu$G. \\par \n\nAnother potential energy source to consider can arise from the dissipation of turbulence. Turbulent energy is converted into thermal energy as it cascades from large scales to small scales through dissipation. However, we did not resolve size scales for which we can measure this. \n\\textsc{Cloudy} does not attempt to model the dissipation mechanism, but assumes a simple thermal energy source based on line width. 
The turbulent velocity was set to a value of 1.5\\,km\\,s$^{-1}$ by default in our models, and we tested two other cases: one case with an intermediate turbulent velocity ($v_{\\rm turb}$=3\\,km\\,s$^{-1}$ or FWHM=5\\,km\\,s$^{-1}$) that corresponds to the approximate line width observed in the CO(1-0) data by \\cite{walter-2001}, and one case with a high turbulent velocity ($v_{\\rm turb}$=50\\,km\\,s$^{-1}$) as found in the diffuse ionized gas by \\cite{WilcotsThurow-2001} and used also in \\cite{kepley-2011}.\n\nNevertheless, we explore the effects of excluding or including magnetic fields and turbulence in Sect.~\\ref{magn}. \n\n\n\\subsection{Determination of best-fitting models}\n\\label{minchi}\nWe aim to converge on a unique parameter set that best describes the conditions of the regions. We first ran models for which\nwe varied the parameters in a coarse grid to narrow down the parameter space, using ranges of values found in the literature to start with. We then used the \\textit{optimization} option of \\textsc{Cloudy}, which automatically varies the specified parameters in a finer grid to find the optimal solution.\n\nWe computed the average $\\chi ^2$, denoted $\\bar\\chi ^2$, for each model by comparing the observed fluxes of \\siv10.5$\\mu$m\\xspace, \\neiii15.6$\\mu$m\\xspace, \\siii18.7$\\mu$m\\xspace, \\siii33.5$\\mu$m\\xspace, and \\oiii88$\\mu$m\\xspace to the fluxes predicted from the radiative transfer calculation. These lines are the most luminous and most strongly correlated with the H\\,{\\sc ii}\\xspace region (as opposed to [N\\,{\\sc ii}]\\xspace and [Ne\\,{\\sc ii}]\\xspace which, from experience, can arise from other phases). We refer to these as the optimized lines in Table~\\ref{chi2ionic}. 
The goodness of the line emission fit is given by low $\bar\chi ^2$ values.\nThe \textit{optimization} method of \textsc{Cloudy} searches for the minimum of $\bar\chi ^2$, which is defined as\n\begin{align}\n \bar\chi ^2=\frac{1}{n}\sum{\chi_i ^2}=\frac{1}{n}\sum{\frac{(M_i - O_i)^2}{(\min\{O_i;M_i\} \times \sigma_i)^2}},\n\end{align}\n\noindent where $n$ is the number of lines optimized and $\chi_i^2$ are the $\chi^2$ values of the individual optimized lines.\nM$_i$ and O$_i$ are the modeled and observed fluxes, and $\sigma_i$ is the fractional error on the observed flux (uncertainty\/flux) with calibration uncertainties added in quadrature to the measured uncertainties described in Sect.~\ref{data}. We have five observables (the ionic lines listed above) and varied four parameters (cluster age, source luminosity, hydrogen density, and inner radius). \n\n\n\n\section{Results}\n\label{results}\nIn this section we present the model results for the two regions according to the star formation histories considered. Parameters of the best-fitting models and their corresponding $\chi^2$ values are reported in Tables~\ref{model} and \ref{chi2ionic}, respectively. \n\n\n\begin{table*}[!t]\n\caption{Input parameters for the best-fitting models. \nValues are given at the illuminated face of the cloud. 
\nValues in parentheses for the magnetic field strength \nand turbulent velocities are tested in Sect.~\ref{magn}.}\n\label{model}\n \centering\n\vspace{-5pt}\n \begin{tabular}{l c c c}\n \hline \n \vspace{-10pt}\\\n Parameter & \multicolumn{2}{c}{Central region} & Southern region\\\n & Single burst & Continuous & Single burst \\\n \hline\n \vspace{-10pt}\\\n Density $n_{\rm H}$ [cm$^{-3}$] & 155 & 180 & 440 \\\n Inner radius $r_{\rm in}$ [pc] & 85.4 & 62.1 & 22.3 \\\n Stellar age t [Myr] & 4.1 & 440 & 3.9 \\\n Total luminosity L [erg\,s$^{-1}$] & $1.15\times10^{42}$ & $1.91\times10^{42}$ & $5.25\times10^{41}$ \\\n Magnetic field B [$\mu$G] & - & - (30) & - \\\n Turbulent velocity $v_{\rm turb}$ [km\,s$^{-1}$] & 1.5 & 1.5 (3, 50) & 1.5 \\\n \hline \n \end{tabular}\n\end{table*}\n\begin{figure*}\n\centering\n\includegraphics[clip,trim=0 20mm 0 0,width=17cm,height=5cm]{fig3a}\n\includegraphics[clip,trim=0 20mm 0 0,width=17cm,height=5cm]{fig3b}\n\caption{Results for the central region (top panel) \nand for the southern region (bottom panel): line emission \nfor the H\,{\sc ii}\xspace region (left side) and the PDR (right side), assuming \npressure equilibrium.\nGreen bars represent the observations, blue bars our single-burst model predictions, \nand gray bars with a dashed outline the continuous star formation model predictions.}\n\label{h2bars}\n\end{figure*}\n\n\subsection{Line emission}\n\label{modelresults}\n\subsubsection{Central region (I): The single-burst model}\n\label{SB}\nFor the single-burst star formation event, the best-fitting model of the central region has the following parameters: burst age of 4.1\,Myr with luminosity $1.1\times10^{42}$\,erg\,s$^{-1}$, density of 155\,cm$^{-3}$, and $r_{\rm in}\simeq85$\,pc. The corresponding ionization parameter at the illuminated face of the cloud is $\log(U)=-2.7$. 
For the H\\,{\\sc ii}\\xspace region, all optimized lines are matched within $\\pm$30\\%, and \\neii12.8$\\mu$m\\xspace and \\nii122$\\mu$m\\xspace are underpredicted by a factor of $\\sim$8 and 3, respectively (see Fig.\\,\\ref{h2bars}, blue bars in the top panel). \nIn the PDR, the \\oi145$\\mu$m\\xspace and 63$\\mu$m\\xspace lines are overpredicted by a factor of $\\sim$2.5, and the \\cii157$\\mu$m\\xspace line is matched within 20\\%. We note that [C\\,{\\sc ii}]\\xspace emission can arise from both the neutral and the ionized phases of the ISM, with a potentially non-negligible contribution from the warm ionized medium in the Milky Way \\citep{heiles-1994}. The H\\,{\\sc ii}\\xspace region of our best-fitting model contributes negligibly to the predicted \\cii157$\\mu$m\\xspace and [O\\,{\\sc i}]\\xspace emission ($<$3\\%). The ultraviolet radiation field strength $G_0$ inside the PDR is $455$ in units of the equivalent \\cite{habing-1968} flux ($1\\,G_0 = 1.6\\times10^{-3}$\\,erg\\,cm$^{-2}$\\,s$^{-1}$). The input luminosity required by the model, which all comes out as $L_{\\rm TIR}$ in the PDR, is $\\text{about twice as}$ high\nas the observed $L_{\\rm TIR}$ in that region. \n\nThese results represent the simplifying case that the recent starburst dominates the star formation in this central region, so the line emission can be explained with a single burst. The age of the burst in the model nicely agrees with the range of ages from \\cite{ubeda-2007} and is at the younger end of ages from \\cite{sollima-2013,sollima-2014}. \n\n\n\n\\subsubsection{Central region (I): The continuous star formation model}\nIn the continuous star formation scenario, the best-fitting model of the central region has the following parameters: stellar age of 440\\,Myr with luminosity $1.9\\times10^{42}$\\,erg\\,s$^{-1}$, density of 180\\,cm$^{-3}$, and $r_{\\rm in}\\simeq62$\\,pc. The ionization parameter is $\\log(U)=-2.5$. 
Line predictions for the H\\,{\\sc ii}\\xspace region and the PDR from this model solution are shown in Fig.\\,\\ref{h2bars} (gray bars). For all optimized lines (\\oiii88$\\mu$m\\xspace, \\siii18.7 and 33.5$\\mu$m\\xspace, \\siv10.5$\\mu$m\\xspace, and \\neiii15.6$\\mu$m\\xspace), the model matches the observations within $\\pm$20\\%. The two other ionic lines, \\neii12.8$\\mu$m\\xspace and \\nii122$\\mu$m\\xspace, are underpredicted by a factor of $\\sim$7 and 4. \nIn the PDR, [C\\,{\\sc ii}]\\xspace and the [O\\,{\\sc i}]\\xspace lines are matched within a factor of $\\sim$2. The contribution of the H\\,{\\sc ii}\\xspace region to the predicted PDR emission is only 1\\%. \nWe find $G_0\\simeq1.2\\times10^3$, which is higher than in the single-burst case. The luminosity of the model exceeds the observed $L_{\\rm TIR}$ by a factor of 3.5. \\par\n\nThe results represent the simplifying case of continuous star formation dominating this region, with the starbursts being embedded in it. The age of the model agrees well with the star formation event in the window 400-500\\,Myr ago reported by \\cite{mcquinn-2010}. \n\n\n \n\\subsubsection{Southern region (II)}\n\\label{southres}\nThe best-fitting model for the southern region is characterized by a burst age of 3.9\\,Myr with luminosity $5.3\\times10^{41}$\\,erg\\,s$^{-1}$, density of 440\\,cm$^{-3}$, and $r_{\\rm in}\\simeq22$\\,pc. The ionization parameter is $\\log(U)=-2.3$. Line predictions for the H\\,{\\sc ii}\\xspace region and the PDR are shown in the bottom panel of Fig.\\,\\ref{h2bars}. This burst found for the southern region is slightly younger than the burst in the central region, in agreement with \\cite{ubeda-2007}. 
In the H\\,{\\sc ii}\\xspace region, the \\oiii88$\\mu$m\\xspace, \\siii18.7 and 33.5$\\mu$m\\xspace, and \\siv10.5$\\mu$m\\xspace lines are reproduced within 30\\%, while \\neiii15.6$\\mu$m\\xspace, \\neii12.8$\\mu$m\\xspace, and \\nii122$\\mu$m\\xspace are underpredicted by a factor of 1.7, 10, and 6, respectively. \nFeeding this model to the PDR, the \\cii157$\\mu$m\\xspace line is underpredicted by a factor of 3.4, while the [O\\,{\\sc i}]\\xspace lines are both overpredicted by a factor of $\\sim$2. The contribution of the H\\,{\\sc ii}\\xspace region to the PDR line emission is only 1\\%. $G_0$ is found to be about $3.2\\times10^3$, which is higher than in the central region. The luminosity of the model is 1.7 times higher than the observed $L_{\\rm TIR}$ in this region. \n\n\n\n\\subsubsection{Comparison to empirical line ratios}\nPhysical conditions in the H\\,{\\sc ii}\\xspace region are mainly determined by tracers of the radiation field strength ([Ne\\,{\\sc iii}]\\xspace\/[Ne\\,{\\sc ii}]\\xspace, [S\\,{\\sc iv}]\\xspace\/\\siii18.7$\\mu$m\\xspace) and of density (\\siii18.7$\\mu$m\\xspace\/\\siii33.5$\\mu$m\\xspace). We compare these well-known diagnostic ratios in the two star-forming regions in Table~\\ref{ratios}. In the southern region, ratios of [Ne\\,{\\sc iii}]\\xspace\/[Ne\\,{\\sc ii}]\\xspace and [S\\,{\\sc iv}]\\xspace\/\\siii18.7$\\mu$m\\xspace are observed to be about twice as high and \\siii18.7$\\mu$m\\xspace\/\\siii33.5$\\mu$m\\xspace marginally higher than in the central region, indicating that the radiation field is harder and the medium denser. This is indeed what we recover with our best-fitting models (Table~\\ref{model}), as they match the sulfur ratios well. 
\n\n\n\\begin{table}\n\\caption{$\\chi^2$ values for the two star-forming regions.}\n\\label{chi2ionic}\n \\centering\n\\vspace{-5pt}\n \\begin{tabular}{l c c c}\n \\hline \\hline\n \\vspace{-10pt}\\\\\n H\\,{\\sc ii}\\xspace region~~~ &\\multicolumn{2}{c}{Central region} & \\hspace{-2mm}Southern region \\\\ \n & Burst & \\hspace{-1mm}Continuous & Burst \\\\\n \\hline\n \\vspace{-10pt}\\\\\n \\multicolumn{2}{l}{Individual $\\chi^2$ values:} & & \\\\\n \\lbrack \\textsc{O\\,iii}] \\,88$\\mu$m\\xspace & 0.01 & 0.03 & 0.01 \\\\\n \\lbrack \\textsc{N\\,ii}]\\,122$\\mu$m\\xspace & 12.58 & 44.63 & 82.56 \\\\\n \\lbrack \\textsc{S\\,iii}]\\,18.7$\\mu$m\\xspace & 2.24 & 2.38 & 37.69 \\\\\n \\lbrack \\textsc{S\\,iii}]\\,33.5$\\mu$m\\xspace & 8.10 & 6.99 & 31.89 \\\\\n \\lbrack \\textsc{S\\,iv}]\\,10.5$\\mu$m\\xspace & 0.76 & 0.70 & 4.94 \\\\\n \\lbrack Ne\\textsc{\\,ii}]\\,12.8$\\mu$m\\xspace & 14442.2 & 13842.6 & 58101.7 \\\\\n \\lbrack Ne\\textsc{\\,iii}]\\,15.6$\\mu$m\\xspace & 8.14 & 24.65 & 224.51 \\\\\n \\hline\n \\vspace{-10pt}\\\\\n $\\bar\\chi^2$ (all ionic lines) & 2067.72 & 1988.85 & 8354.76 \\\\\n $\\bar\\chi^2$ (optimized lines) \\hspace{-4mm} & 3.85 & 6.95 & 59.81 \\\\\n \\hline\n \\hline\n \\vspace{-10pt}\\\\\n PDR~~~ & \\multicolumn{2}{c}{Central region} & \\hspace{-2mm}Southern region \\\\ \n & Burst & \\hspace{-1mm}Continuous & Burst \\\\\n \\hline\n \\vspace{-10pt}\\\\\n \\multicolumn{2}{l}{Individual $\\chi^2$ values:} & & \\\\\n \\lbrack \\textsc{C\\,ii}]\\,157$\\mu$m\\xspace & 1.57 & 31.83 & 258.63 \\\\\n \\lbrack \\textsc{O\\,i}]\\,63$\\mu$m\\xspace & 149.78 & 57.05 & 85.37 \\\\\n \\lbrack \\textsc{O\\,i}]\\,145$\\mu$m\\xspace & 34.14 & 11.83 & 21.27 \\\\\n \\hline\n \\vspace{-10pt}\\\\\n $\\bar\\chi^2$ (all PDR lines) & 61.83 & 33.57 & 121.76 \\\\\n \\hline \\hline\n \\end{tabular}\n \\end{table}\n\\begin{table*}\n\\caption{Observed and predicted MIR line ratios for the two star-forming regions.\nThe [Ne\\,{\\sc iii}]\\xspace\/[Ne\\,{\\sc 
ii}]\\xspace and [S\\,{\\sc iv}]\\xspace\/[S\\,{\\sc iii}]\\xspace line ratios are indicative of the radiation field strength, \nand the ratio of the two [S\\,{\\sc iii}]\\xspace lines is a density diagnostic.}\n\\label{ratios}\n\\centering\n\\vspace{-5pt}\n \\begin{tabular}{l c c c c c c c}\n \\hline\n \\vspace{-10pt}\\\\\nRatio & & \\multicolumn{3}{c}{Central region} & & \\multicolumn{2}{c}{Southern region} \\\\\n & & Observed & Burst & Continuous & & Observed & Burst \\\\\n \\hline\n \\vspace{-10pt}\\\\\n\\lbrack Ne\\textsc{\\,iii}]\/[Ne\\,{\\sc ii}]\\xspace& & 2.082 & 13.72 & 12.34 & & 3.448 & 28.955 \\\\ \n\\lbrack \\textsc{S\\,iv}]\/[\\textsc{S\\,iii}]18.7 & & 0.481 & 0.479 & 0.477 & & 1.227 & 1.044\\\\\n\\lbrack \\textsc{S\\,iii}]18.7\/[\\textsc{S\\,iii}]33 & & 0.630 & 0.593 & 0.601 & & 0.835 & 0.846\\\\\n\\hline\n \\end{tabular}\n \\end{table*}\n\n\n\\subsection{Effects of magnetic fields and turbulence on the PDR temperature and density}\n\\label{magn}\nAfter determining the best parameters for the H\\,{\\sc ii}\\xspace regions, we explored the effect of cloud density on the PDR emission in more\ndetail. \nDensity is of critical importance in the emission output of the simulation because of the different critical densities of the observed lines. Density values quoted so far are representative of the H\\,{\\sc ii}\\xspace region and evolve inside the modeled cloud. Figure~\\ref{dens} shows hydrogen density profiles in the clouds for each case presented in Sect.~\\ref{modelresults}. The density starts at the initial value we set for each model in the H\\,{\\sc ii}\\xspace region, remaining practically at the same level throughout the H\\,{\\sc ii}\\xspace region. At the interface between the H\\,{\\sc ii}\\xspace region and the PDR, there is a jump in density required to keep the model in pressure equilibrium. 
When pressure is only determined by the gas pressure ($P_{\\rm gas}=n_{\\rm H}kT$; i.e., magnetic fields and turbulence are omitted), the temperature difference at the phase transition is balanced by a rise in density of 2-3 orders of magnitude. Within this frame, the effects of magnetic fields or turbulence, implemented as pressure terms in \\textsc{Cloudy}, can be understood as follows: when total pressure equilibrium is assumed, they give more support at the phase transition, thus preventing a large difference in density between the H\\,{\\sc ii}\\xspace region and the PDR and moderating the density increase at large optical depths. \nFor the model presented in Sect.~\\ref{modelresults}, where a low turbulence value is included but no magnetic fields, representative PDR densities are $2\\times10^4$\\,cm$^{-3}$ in the central region for both the single-burst and continuous star formation models, and $7\\times10^4$\\,cm$^{-3}$ in the southern region (see Fig.\\,\\ref{dens}). \n\n\\begin{figure}\n\\centering\n\\includegraphics[clip,width=8.2cm]{fig4.eps}\n\\vspace{-5pt}\n\\caption{Density profiles in the modeled clouds for the central and southern region, which include a turbulence pressure term ($v_{\\rm turb}$=1.5\\,km\\,s$^{-1}$). Note that the x-axis is logarithmic, so the H\\,{\\sc ii}\\xspace region (with a constant low density) occupies a thin layer of the cloud, stopping at low visual extinction ($\\sim$0.1\\,mag).}\n\\label{dens}\n\\end{figure}\n\n\nThe effects of magnetic field and turbulence on the cloud density, temperature, and line emission are shown in Fig.\\,\\ref{turb}. \nWe present a set of runs for our best-fitting model in the central region single-burst case (note that we recover similar behaviors for the central continuous and southern single-burst cases, as shown in Appendix~\\ref{appendixa}) with the magnetic field (B=30\\,$\\mu$G) and\/or the turbulence pressure ($v_{\\rm turb}$=1.5, 3, 50\\,km\\,s$^{-1}$) terms switched on. 
These terms have no impact on the ionic line emission because thermal pressure dominates the pressure balance in the H\\,{\\sc ii}\\xspace region, but they noticeably change the emission of the PDR lines. When only thermal pressure is considered, the gas density jumps to values $>3\\times10^4$\\,cm$^{-3}$ in the PDR and the [O\\,{\\sc i}]\\xspace lines are overpredicted by an order of magnitude (black bars and dotted line). \nWith only the magnetic field on, all three PDR lines are underpredicted by a factor of $\\sim$3 (orange bars), but their ratios are kept in the range observed thanks to the lower densities achieved ($2\\times10^3$\\,cm$^{-3}$). Comparing models with different turbulent velocities, we see that the [C\\,{\\sc ii}]\\xspace line is best matched for low\nor intermediate velocities because of their moderate densities ($\\sim10^4$\\,cm$^{-3}$) and slightly lower PDR temperatures (at $A_{\\rm V} \\simeq 1-3$\\,mag). Increasing the turbulent velocity reduces the predicted [O\\,{\\sc i}]\\xspace emission and PDR density. The high-turbulence model performs poorly because it has the most dramatic effect on the density and line emission. \n\nTo summarize, we find that the best case that simultaneously matches all three PDR lines in the central region is the model with intermediate turbulent velocity ($v_{\\rm turb}$=3\\,km\\,s$^{-1}$), which has a density of $8\\times10^3$\\,cm$^{-3}$, but we stress that the main effect of turbulence is to reduce the PDR density.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[clip,width=\\textwidth,height=6cm]{fig5a.eps}\n\\includegraphics[clip,width=8cm]{fig5b.eps} \\hspace{1cm}\n\\includegraphics[clip,width=8cm]{fig5c.eps}\n\\vspace{-5pt}\n\\caption{Effect of turbulence and magnetic fields on the predicted line intensities (top panel), density, and temperature (bottom panels) in the modeled cloud for the central region single-burst case. Green bars: observations. 
Gray bars and solid lines: only low turbulence switched on ($v_{\rm turb}$=1.5\,km\,s$^{-1}$, default model). Black bars and dotted lines: no magnetic fields and no turbulence. Orange bars and dashed lines: only magnetic field switched on (B=30\,$\mu$G). Red bars and dash-dotted line: only moderate turbulence switched on ($v_{\rm turb}$=3\,km\,s$^{-1}$). Cyan bars and triple-dotted-dashed line: only high turbulence switched on ($v_{\rm turb}$=50\,km\,s$^{-1}$).}\n\label{turb}\n\end{figure*}\n\n\n\n\subsection{Input spectra and SED}\n\label{insed}\nFigure~\ref{spectra} shows the input and output SEDs of the models for the two regions. The input SED is the stellar spectrum of the illuminating source modeled with \textsc{Starburst99} and also includes the CMB at millimeter wavelengths. In the central region, the stellar spectrum has a wider distribution for the continuous star formation model (red curve) than for the single-burst star formation model (black curve), and it is more luminous in the near-IR regime due to the presence of old stars. \nCompared to observations, all input SEDs fall above the GALEX FUV data because they are unattenuated, and the single-burst input SEDs fall below the 2MASS data because they lack old stars. In the central region, the input SED of the continuous model, on the other hand, agrees well with the 2MASS data. For better agreement with the FUV data, we estimated the level of extinction required to attenuate the input SEDs. We considered average extinction values $E(B-V)$ of 0.1, 0.05, and 0.1\,mag for the central continuous, central single-burst, and southern single-burst models, respectively (dotted lines in Fig.\,\ref{spectra}), which are in the range of values found by \cite{ubeda-2007b}. \n\nFocusing on the output SEDs of the models for the central region, the level of the FIR continuum is expected to differ between the two star formation scenarios. The higher the input luminosity of the source, the higher the peak of the output SED. 
Moreover, as the dust temperature rises, the peak of the output SED is expected to shift to shorter wavelengths. For the continuous model, the higher FUV luminosity therefore\nprovides more dust heating, explaining the slight shift of its peak to shorter wavelengths. \nWe can compare the output SEDs to the observed PACS 70$\\mu$m\\xspace, 100$\\mu$m\\xspace, and 160$\\mu$m\\xspace fluxes. The models agree relatively well with observations for the southern region, but overpredict the FIR continuum emission in the central region.\nThis is not surprising because we used as input luminosity a higher value than the observed TIR flux and the modeled PDR has a high $A_{\\rm V}$. Better agreement with continuum observations requires a model that predicts a TIR flux lower by a factor of 2-3, for example, by reducing the covering factor of the PDR. We return to this in Sect.~\\ref{modelum}.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[clip,trim=0 0 0.1cm 0,width=6.7cm]{fig6a.eps}\n\\includegraphics[clip,trim=1.95cm 0 0.1cm 0,width=5.75cm]{fig6b.eps}\n\\includegraphics[clip,trim=1.95cm 0 0.1cm 0,width=5.75cm]{fig6c.eps}\n\\vspace{-10pt}\n\\caption{Spectral energy distributions of the two star-forming regions: central region single-burst case (left panel), central region continuous case (middle panel), and southern region single-burst model (right panel). The black and blue curves correspond to the input and output SEDs, respectively. The dotted lines are the attenuated input SEDs. The data points are the photometry measurements from GALEX FUV, 2MASS J, H, K bands, MIPS 24\\,$\\mu$m\\xspace, and PACS at 70$\\mu$m\\xspace, 100$\\mu$m\\xspace, and 160\\,$\\mu$m\\xspace. In panels for the central region, the dashed curves are scaled versions of the output SEDs, considering a covering factor of $0.5$ for the PDR. 
}\n\\label{spectra}\n\\end{figure*}\n\n\n\n\\section{Discussion}\n\\label{discussion}\nWe have presented models for the two star-forming regions of NGC\\,4214 that work for most of the observed MIR and FIR lines. Some discrepancies remain between our models and observations ([Ne\\,{\\sc ii}]\\xspace, [N\\,{\\sc ii}]\\xspace, [C\\,{\\sc ii}]\\xspace\/[O\\,{\\sc i}]\\xspace, and $L_{\\rm TIR}$); these are not due to the choice of parameter space but rather to missing physics or components in our models. In this section, we discuss various aspects of our analysis: 1)~the discrepancies with observations, and we give clues to improve our models, 2)~which star formation scenario describes the data better, and 3)~how our ISM results relate to the known properties and evolution of the two regions.\n\n\n\\subsection{Discrepancies between models and observations}\n\\label{disclines}\n\\subsubsection{Ionic lines}\nIn our best-fitting models of the H\\,{\\sc ii}\\xspace region, the \\neii12.8$\\mu$m\\xspace and \\nii122$\\mu$m\\xspace lines are systematically underpredicted. These lines have lower excitation potentials (21.56 and 14.53\\,eV, respectively) than the other, better matched ionic lines, and can therefore be partially excited outside of the main H\\,{\\sc ii}\\xspace region. This effect can be significant in NGC\\,4214 because of the poor spatial resolution. \n\nDiscrepancies between the observed and modeled [Ne\\,{\\sc iii}]\\xspace\/[Ne\\,{\\sc ii}]\\xspace ratio, with the same amplitude as we found for NGC\\,4214, were reported by \\cite{martin-hernandez-2002} for H\\,{\\sc ii}\\xspace regions observed by ISO\/SWS (see their figure~2). These could again be related to a mixture of physical conditions within the ISO beam, which is also relatively large. For the starburst galaxy Haro\\,11, we have examined the effect of an additional low-ionization component (star of effective temperature 35\\,000\\,K and $n_{\\rm H}\\simeq 10^{1-3}$\\,cm$^{-3}$). 
This reproduced the observed \neii12.8$\mu$m\xspace and \nii122$\mu$m\xspace emission without significantly affecting the other ionic lines \citep{cormier-2012}.\nAlternatively, given that the neon lines have the largest energy difference, they are more sensitive to the underlying stellar atmosphere models than the sulfur lines. Constraining those models is beyond the scope of this paper, and we relied on the sulfur lines as being more robust diagnostics of the H\,{\sc ii}\xspace region conditions in NGC\,4214 (i.e., models presented in Sect.~\ref{modelresults}). \n\nWe also assessed the effect of including the \neii12.8$\mu$m\xspace line in the search for the best-fitting solution for the southern region. This resulted in a solution where the \neii12.8$\mu$m\xspace absolute flux was better reproduced (within a factor of 3). This model uses an older burst (5\,Myr), higher density (500\,cm$^{-3}$), and the same inner radius (20\,pc). The corresponding ionization parameter is $\log(U)=-2.4$, which is 0.1\,dex lower than previously found. However, the model underpredicts the neon intensities by a factor of 2 and the [\textsc{S\,iv}]\/[\textsc{S\,iii}]\,18.7$\mu$m\xspace ratio by a factor of $\sim$4, while that ratio was matched within 20\% without the [Ne\,{\sc ii}]\xspace constraint. 
In the southern and central region continuous case, similar turbulent velocities also lead to better agreement with the [C\\,{\\sc ii}]\\xspace\/[O\\,{\\sc i}]\\xspace line ratios, but still underpredict the observed emission in absolute values (Appendix~\\ref{appendixa}).\n\nIn addition to the stars, X-rays can be a source of heating in the PDR and affect the FIR line emission. Point sources and diffuse X-ray emission have been reported in \\cite{hartwell-2004} and \\cite{ghosh-2006}. The identified point sources are not coincident with the peak of the FIR emission, and we therefore ignored them. The diffuse emission is mostly detected in the central region, with a luminosity of $3\\times 10^{38}$\\,erg\\,s$^{-1}$ \\citep{hartwell-2004}, which is lower than that of the starburst. We have tested the effect of this diffuse X-ray component on the PDR lines in the central region and found that it increases the predicted intensity of the [O\\,{\\sc i}]\\xspace lines by $\\sim$30\\% and the [C\\,{\\sc ii}]\\xspace intensity by less than 10\\%. As X-rays are not the main source of heating in the PDR, they do not help to produce significantly more [C\\,{\\sc ii}]\\xspace emission. \n\nWe further explored the possibility of [C\\,{\\sc ii}]\\xspace originating from a diffuse ionized component. We compared the [C\\,{\\sc ii}]\\xspace and \\nii122$\\mu$m\\xspace intensities and the PACS upper limit on the \\nii205$\\mu$m\\xspace line, which gives \\nii122$\\mu$m\\xspace\/\\nii205$\\mu$m\\xspace$>$1, to theoretical predictions assuming pure collisional regime. Following \\cite{bernard-salas-2012} and applying $C$ and $N$ elemental abundances observed in NGC\\,4214, we found that less than 16\\% of the total observed [C\\,{\\sc ii}]\\xspace emission arises in diffuse ionized gas. \\textit{Herschel}\\xspace SPIRE FTS observations of \\nii205$\\mu$m\\xspace toward the central region also indicate that \\nii122$\\mu$m\\xspace\/\\nii205$\\mu$m\\xspace$=$2.5 (priv. comm. R. 
Wu), which gives an ionized gas density of $\sim$60\,cm$^{-3}$ and a contribution of only 8\% to the total [C\,{\sc ii}]\xspace emission. Therefore, if a low-density ionized component is added to our current models to account for the missing [N\,{\sc ii}]\xspace and [Ne\,{\sc ii}]\xspace emission, this component will not contribute significantly to the [C\,{\sc ii}]\xspace emission. Our best-fitting models for the central region also predict that the H\,{\sc ii}\xspace region contributes less than a few percent to the [C\,{\sc ii}]\xspace emission. \n\nThe [C\,{\sc ii}]\xspace emission most likely arises from a neutral phase, but its conditions are not well described by our default PDR models. With the PDR tests performed in Sect.~\ref{magn}, we have explored the effect of density on the PDR emission lines, but these lines are also sensitive to the radiation field strength. To reduce the radiation field intensity $G_0$ (not the hardness), we placed the PDR farther away by stopping the model at the H$^+$\/H\,{\sc i}\xspace phase transition and resuming the calculation at a larger distance in the H\,{\sc i}\xspace phase (note that this breaks the pressure equilibrium). The direct effect is to dilute the UV field before it reaches the PDR, as proposed by \cite{israel-2011} and \cite{cormier-2015}, which is equivalent to increasing the porosity of the medium. For the central single-burst, central continuous, and southern single-burst cases, we increased the PDR distance by factors of 2, 3, and 5 ($r_{\rm in}$$\simeq$170, 186, and 115\,pc), respectively. This way, $G_0$ decreases to $\sim$120 in all three cases, boosting the [C\,{\sc ii}]\xspace emission, and the predicted [C\,{\sc ii}]\xspace\/[O\,{\sc i}]\xspace ratios match the observed ratios within 40\% (Fig.~\ref{pdrdist}). In absolute values, the [C\,{\sc ii}]\xspace and [O\,{\sc i}]\xspace intensities are 2 to 3 times too high,\nhowever. 
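The quoted $G_0$ values follow from simple inverse-square dilution of the radiation field; as a quick check, using the $G_0$ values and distance factors given in the text:

```python
# Inverse-square dilution: G_0 scales as 1/r^2 when the PDR is moved
# outward. G_0 at the default PDR distance and the applied distance
# factors are the values quoted in the text for the three cases.
cases = {
    "central single-burst":  (455.0, 2),
    "central continuous":    (1.2e3, 3),
    "southern single-burst": (3.2e3, 5),
}
diluted = {name: g0 / factor**2 for name, (g0, factor) in cases.items()}
for name, g0 in diluted.items():
    print(f"{name}: diluted G_0 ~ {g0:.0f}")
```

All three cases land near $G_0\sim120$, as stated above. The remaining factor of 2-3 excess in absolute line intensity is a separate issue from the dilution itself.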
This can be compensated by a PDR covering factor lower than unity (see Sect.~\\ref{modelum} below). \n\nWe conclude that a moderate density, moderate $G_0$ neutral medium (compared to our dense, high $G_0$ default PDR model) with a\nlow turbulent velocity and a covering factor lower than unity is the most plausible origin for the observed [C\\,{\\sc ii}]\\xspace emission. \n\n\\begin{figure*}\n\\centering\n\\includegraphics[clip,trim=0 10mm 0 0,width=\\textwidth,height=5.8cm]{fig7.eps}\n\\vspace{-10pt}\n\\caption{Effect of changing the distance to the modeled cloud for the PDR calculation. The PDR distance is increased by a factor of 2, 3, and 5 for the central single-burst (left panel), central continuous (middle panel), and southern single-burst cases (right panel), respectively.}\n\\label{pdrdist}\n\\end{figure*}\n\n\n\\subsubsection{Model luminosity}\n\\label{modelum}\n\\cite{hermelo-2013} have fit the dust SED of the whole galaxy. In particular, their models require less UV luminosity than that observed to match the IR emission. They discussed possible explanations for this disagreement, proposing an escape of unattenuated UV radiation along with a particular geometry and dust properties in the galaxy. Such an argument of a UV escape fraction could also apply in our case. As seen in Fig.\\,\\ref{spectra}, the observed photometry data points in the FIR, which correspond to emission originating from the PDR, are lower than the modeled SED for the central region. This disagreement can indicate a different covering factor for the PDR. In our model, the PDR fully covers the sphere around the source (i.e., covering factor of unity). For the best-fitting models to better match the photometry, it could be that the PDR component is more porous, allowing radiation to escape the cloud. 
To illustrate this, we plot the resulting SEDs for the two models of the central region considering a PDR covering factor of 0.5 (dashed curves in Fig.\\,\\ref{spectra}).\n\nPart of the discrepancies in our results discussed in Sect.~\\ref{disclines} originates from modeling each complex as a single cloud, which is imposed by the lack of spatial resolution in the observations. Clearly, future improvements are expected from observations with better spatial resolution. \n\n\n\\subsection{Central region: bursty or continuous star formation?}\n\\label{borc}\nThe star formation history of NGC\\,4214 over the last Gyr is complex. It shows bursts lasting for shorter or longer periods and a continuous `background', which takes place throughout this whole time window \\citep{mcquinn-2010,williams-2011}. In that sense, we could say that it is a rather hybrid star formation pattern. \nIn the central region, we have investigated cases of both a single-burst and a continuous star formation mode. However, when we use a single model to reproduce the observed line emission, we simplify the problem, since we do not take into account both modes. This raises the question of which of the two approaches is more adequate to model the MIR-FIR line emission. \nWe have shown that both modes can satisfactorily reproduce the observed mid- and far-infrared line emission. By comparing the $\\bar\\chi^2$ values of the best-fitting models (Table~\\ref{chi2ionic}), we found that the single-burst case seems to globally\nperform better when considering the lines used in the optimization method. When including [N\\,{\\sc ii}]\\xspace and [Ne\\,{\\sc ii}]\\xspace, the two modes perform similarly, although the high $\\bar\\chi^2$ values are driven by the poor fit to the \\neii12.8$\\mu$m\\xspace line. 
\nFor the PDR, the continuous scenario gives a lower $\\bar\\chi^2$, but the default PDR solutions are not optimal and can be fine-tuned for both modes (by lowering the density and $G_0$, see Sects.~\\ref{magn} and \\ref{disclines}). We conclude that both are limiting, simplifying cases of modeling the ISM in NGC\\,4214, with the continuous star formation model being marginally more accurate inside the PDR and the single-burst model in the H\\,{\\sc ii}\\xspace region (without further refinement).\n\n\n\\subsection{Comparison between the two star-forming regions}\nHow do the ISM conditions that we have characterized in the two star-forming regions relate to their star formation properties? \nWe have found that the modeled cluster (radiation source) in the southern region contains younger stellar populations with a harder radiation field than that in the central region, in agreement with the results of \\cite{ubeda-2007}, for instance. The hydrogen density is also higher in the southern star-forming region, but the metallicities of the regions are very similar. The southern region is observed to be at a younger, more compact stage than the central region. The central region is more evolved\nand had time to expand, as observed by the presence of shells that may have swept away the dense material \\citep{walter-2001}, and is thus consistent with a more diffuse ISM. \n\nWe calculated the star formation rate surface densities for the two regions by combining the GALEX FUV map and the \\textit{Spitzer}\\xspace 24$\\mu$m\\xspace map, as done in \\cite{leroy-2008}, with%\n\\begin{equation}\n\\label{sfcomp}\n\\begin{split}\n\\Sigma_{\\rm SFR} {\\rm [M_{\\odot}\\,yr^{-1}\\,kpc^{-2}]} = 3.2 \\cdot 10^{-3} \\cdot {\\rm I_{24}~[MJy\\,sr^{-1}]} \\\\ ~+~ 8.1 \\cdot 10^{-2} \\cdot {\\rm I_{\\rm FUV}~[MJy\\,sr^{-1}]}\n\\end{split}\n,\\end{equation}\n\\noindent where $\\Sigma_{\\rm SFR}$ is the SFR surface density and I$_{24}$ and I$_{\\rm FUV}$ the 24$\\mu$m\\xspace and FUV intensities. 
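As a sanity check, the calibration above is straightforward to evaluate directly; the sketch below (in Python, with purely illustrative intensity values, not measurements from the maps) assumes only the two coefficients quoted in the equation.

```python
def sfr_surface_density(i_24, i_fuv):
    """Sigma_SFR in Msun/yr/kpc^2 from the 24um and FUV intensities
    (both in MJy/sr), using the two calibration coefficients above."""
    return 3.2e-3 * i_24 + 8.1e-2 * i_fuv

# Illustrative intensities only:
print(sfr_surface_density(10.0, 1.0))
```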
We also measured the atomic and molecular hydrogen content of the two regions using the 21cm map from THINGS\\footnote{\\url{http:\/\/www.mpia-hd.mpg.de\/THINGS\/Data.html}} \\citep{walter-2008} and the CO(1-0), CO(2-1) transition maps from \\cite{walter-2001} and HERACLES\\footnote{\\url{http:\/\/www.cv.nrao.edu\/~aleroy\/heracles_data\/}} \\citep{leroy-2009}, respectively. We used a conversion factor of $\\alpha_{\\rm CO}$=4.38~[M$_\\odot$\\,pc$^{-2}$\\,(K\\,km\\,s$^{-1}$)$^{-1}$] from CO(1-0) luminosity to H$_2$ mass. If we were to use a different conversion factor due to the low metallicity of these regions \\citep[e.g.,][]{schruba-2012}, this would not affect the relative comparison of the regions (see Table~\\ref{sfprops}). \nThe central region (I) has a total hydrogen content of M$_{\\rm gas,I}$=M$_{\\rm HI}$+M$_{\\rm H_2}$=2.05$\\times$10$^6\\,{\\rm M_\\odot}$ and a molecular (mass) fraction of $f_{\\rm mol,I}$=M$_{\\rm H_2}$\/M$_{\\rm HI}$=0.35. Integrating Eq.~\\ref{sfcomp} in the region, we find SFR$_{\\rm I}$=2.2$\\times$10$^{-2}\\,{\\rm M_\\odot\\,yr^{-1}}$. The southern region (II) has a higher total hydrogen content of M$_{\\rm gas,II}$=2.68$\\times$10$^6\\,{\\rm M_\\odot}$, an H$_2$ fraction $f_{\\rm mol,II}$=0.32, and SFR$_{\\rm II}$=1.9$\\times$10$^{-2}\\,{\\rm M_\\odot\\,yr^{-1}}$. \nTherefore, the southern region has relatively more gas compared to its SFR than the central region. In terms of efficiency, SFR\/M$_{\\rm gas}$, it is about 50\\% higher in the central region. \nThis could reflect a slightly more efficient, cluster-like star formation episode in the central region or simply encode a different evolutionary state, as the SFR and gas masses are a strong function of time on scales of individual star-forming regions \\citep[e.g.,][]{schruba-2010}. The southern region being younger, it may still be in the process of forming stars. 
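The star formation efficiencies listed in the comparison table of star formation properties follow directly from these numbers, as a worked check:

```latex
\mathrm{SFE}_{\rm I} = \frac{\mathrm{SFR}_{\rm I}}{M_{\rm gas,I}}
  = \frac{2.2\times10^{-2}\,{\rm M_\odot\,yr^{-1}}}{2.05\times10^{6}\,{\rm M_\odot}}
  \simeq 1.07\times10^{-8}\,{\rm yr^{-1}} = 10.7\,{\rm Gyr^{-1}},
\qquad
\mathrm{SFE}_{\rm II} = \frac{1.9\times10^{-2}}{2.68\times10^{6}}\,{\rm yr^{-1}}
  \simeq 7.1\,{\rm Gyr^{-1}} .
```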
\n\n\n\\begin{table}\n \\caption{Comparison of star formation properties.}\n\\label{sfprops}\n \\centering\n\\vspace{-5pt}\n \\begin{tabular}{l c c c}\n \\hline \n \\vspace{-10pt}\\\\\n Quantity & & Region I & Region II \\\\\n \\hline\n \\vspace{-10pt}\\\\\n {$L_{\\rm CII}\/L_{\\rm TIR}$} & & $5.4\\times10^{-3}$ & $3.6\\times10^{-3}$ \\\\\n {$L_{\\rm CII}\/L_{\\rm CO(1-0)}$} & & $6.7\\times10^4$ & $2.5\\times10^4$ \\\\\n {M$_{\\rm gas=H_2+HI}$~$[$M$_\\odot]$} & & $2.05\\times10^6$ & $2.68\\times10^6$ \\\\\n {M$_{\\rm H_2}$\/M$_{\\rm HI}$} & & 0.35 & 0.32 \\\\\n {SFR~$[$M$_\\odot$\\,yr$^{-1}]$} & & $2.2\\times10^{-2}$ & $1.9\\times10^{-2}$ \\\\\n {SFE~$[$Gyr$^{-1}]$} & & 10.7 & 7.1 \\\\\n \\hline \n \\end{tabular}\n \\end{table}\n\n\nThe main differences in ISM conditions that we extracted from our modeling relate to the H\\,{\\sc ii}\\xspace region properties. The emission lines are a factor of two lower in luminosity in the southern region, except [Ne\\,{\\sc iii}]\\xspace and [S\\,{\\sc iv}]\\xspace, which are proportionally higher in the southern region. By contrast, the PDR conditions in the two regions are similar ($n_{\\rm H}\\simeq10^4$\\,cm$^{-3}$ and $G_0\\simeq150$). Our modeling reflects conditions resulting from the recent star-forming event and has little predictive power regarding a different, future star-forming event. In particular, at the linear scale that we probe ($\\sim$175\\,pc), the PDR conditions are averaged over multiple star-forming clouds and not representative of the underlying, possibly different, substructure in individual molecular clouds. \nHowever, there is more PDR emission relative to CO in the central region, that is, high [C\\,{\\sc ii}]\\xspace\/CO and [O\\,{\\sc i}]\\xspace\/CO ratios \\citep[see also][]{cormier-2010}. \nAs found by \\cite{walter-2001}, CO emission is centrally concentrated in the south and more diffuse in the center. 
The concentration of molecular gas may be nourishing the current star formation episode in the south or is being observed at a pre-disruption stage with the same fate as the central region. \nThe increased porosity, which evidently is an intrinsic property of the low-metallicity ISM, is seen in both the central and southern star-forming regions. \nThe main evolution within the dense medium is seen in the covering factor of the PDR, which is found to be lower in the central, more evolved region than in the southern region, and in CO, which probably suffers more from photodissociation with time and its emission is seen farther away from the cluster center, but this cannot be modeled with our static approach. Observing an intermediate PDR tracer at the C$^+$\/CO transition, such as C\\,{\\sc i}, would help to test this evolution scenario. \n\n\n\n\\section{Conclusion}\nWe have investigated the physical conditions characterizing \nthe ISM of the dwarf irregular galaxy NGC\\,4214 by modeling \\textit{Spitzer}\\xspace \nand \\textit{Herschel}\\xspace observations of MIR and FIR fine-structure cooling lines. \nWe used the spectral synthesis code \\textsc{Cloudy} to \nself-consistently model the H\\,{\\sc ii}\\xspace region and PDR properties of the two main \nstar-forming regions in NGC\\,4214. \nWe summarize our results as follows: \n\\begin{itemize}\n\\item The ionized gas in the southern region is found to be $2.5$ \ntimes denser than in the central region (440\\,cm$^{-3}$ \nversus 170\\,cm$^{-3}$) and typified by a harder radiation field. \nOur best-fitting models of the H\\,{\\sc ii}\\xspace region+PDR reproduce most \nionic and neutral atomic lines, namely the \\oiii88$\\mu$m\\xspace, \\siii18.7 \nand 33.5$\\mu$m\\xspace, \\siv10.5$\\mu$m\\xspace, \\neiii15.6$\\mu$m\\xspace, and \\oi63$\\mu$m\\xspace lines, \nwithin a factor of $\\sim$2. 
\n\\item The observed \\nii122$\\mu$m\\xspace and \\neii12.8$\\mu$m\\xspace lines are \nthe most discrepant with our model solutions for the H\\,{\\sc ii}\\xspace region. \nA single model component seems too simplistic to account for all \nobserved lines simultaneously. Given the complexity of these star-forming regions, \na multi-component modeling would be more appropriate. \nIn particular, a lower excitation ionized gas component may be \nrequired to match the [N\\,{\\sc ii}]\\xspace and [Ne\\,{\\sc ii}]\\xspace emission in both regions. \n\\item Our H\\,{\\sc ii}\\xspace region models and the established observational \n[C\\,{\\sc ii}]\\xspace\/[N\\,{\\sc ii}]\\xspace line ratio used as a proxy for the fraction of [C\\,{\\sc ii}]\\xspace arising in the ionized gas \nboth indicate that the [C\\,{\\sc ii}]\\xspace emission is mostly associated with the PDR gas, \nwith only a $\\sim$10\\% contribution from the ionized gas. \n\\item Constant pressure models where thermal pressure dominates \nthe pressure equilibrium perform rather poorly for the PDR lines \nmostly because of the high densities and high $G_0$ values \nreached in the PDR. Including additional pressure terms, such as \nweak turbulent or magnetic pressure, or placing the \nPDR cloud farther away and reducing its covering factor, \nleads to a much improved reproduction of the observed line intensities. \n\\item Star formation histories have an effect on the predicted \nMIR-FIR line emission. We have explored the two simplifying cases \nof a bursty and a continuous star formation scenario. \nIn the central region, we found that the bursty scenario works marginally \nbetter for the H\\,{\\sc ii}\\xspace region and the continuous scenario for the PDR, \nalthough both modes can reproduce \nthe observations after refining the PDR conditions. 
\n\\item The H\\,{\\sc ii}\\xspace region modeling from IR emission is consistent with \nthe evolutionary stages of the regions found in previous studies: \nthe southern region is younger and more compact, while the central region \nis more evolved and diffuse. \nOn the linear scale of our study ($\\sim$175\\,pc), the PDR conditions \nof individual star-forming clouds are averaged out and do not echo \nthe observed differences between the two regions (stellar ages, H\\,{\\sc ii}\\xspace conditions, etc.). \nThe increased porosity of the star-forming regions appears \nas an intrinsic characteristic of the low-metallicity ISM, \nwhile the covering factor of the PDR, which is reduced \nin the central region, stands out as the main evolution tracer.\n\\end{itemize}\n\\vspace{3mm}\n\n\\begin{acknowledgements}\nWe would like to thank S. Hony for his help with the IRS data and \nfor fruitful discussion, F. Walter for providing us the CO(1-0) data, \nand S. Glover for his advice on turbulence issues.\nWe thank the referee for his or her comments on the manuscript.\nDC and FB acknowledge support from DFG grant BI 1546\/1-1.\nThis work is based in part on archival data obtained with the \nSpitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, \nCalifornia Institute of Technology under a contract with NASA.\n\\textit{Herschel}\\xspace is an ESA space observatory with science instruments \nprovided by European-led Principal Investigator consortia and \nwith important participation from NASA.\n\\end{acknowledgements}\n\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA precise determination of Parton Distribution Functions (PDFs) with \nreliable estimate of their uncertainties is crucial for the success of \nthe physics program at the LHC experiments. 
\nOn the one hand, PDF uncertainties are often the dominant theoretical \nuncertainties for many relevant signal and background \nprocesses~\\cite{Campbell:2006wx}. \nOn the other hand, overestimated PDF errors might hinder \nthe discovery of new physics effects, as shown for example \nin~\\cite{Ferrag:2004ca}.\nTop physics at the LHC is no exception, and both top pair and single-top\nproduction present complementary and interesting properties as far as PDF\ndetermination\/effects are concerned. This contribution aims to review\nsome of the implications of PDFs for top quark physics at the LHC.\n\nIn the first part of the contribution we summarize the present \nstatus of the predictions for $t\\bar{t}$, $t-$ and $s-$channel single-top \ncross-sections, computed using different PDF sets. We show how differences \nin the predictions can be directly traced both to differences in the parton \nluminosities and to the values of physical parameters used in the PDF \nanalyses, such as the strong coupling constant $\\alpha_s$ or \nthe $b$-quark mass, $m_b$.\nIn particular, we highlight the importance of accounting for the uncertainty \non the $b$-quark mass for accurate predictions of single-top production at \nthe LHC.\n\nIn the second part we study PDF-induced correlations between PDFs and \n$t\\bar{t}$\/single-top cross-sections and between top and $W^\\pm$\/$Z^0$ \ncross-sections. 
\nWe briefly discuss how these correlations could be used in order to \nimprove the accuracy of top cross-section measurements with early data \nat the LHC.\nThese correlation studies are performed within the framework of the \nNNPDF parton analysis~\\cite{DelDebbio:2007ee,Ball:2008by,Ball:2009mk,\nBall:2010de} which, by relying on Monte Carlo techniques \nfor the estimation of uncertainties, provides an ideal tool for such \nstatistical studies.\n\nThe baseline PDF set for the studies presented in this contribution\nis the recently released NNPDF2.0~\\cite{Ball:2010de}, the first NLO global \nfit using the NNPDF methodology. \n\n\\section{Top-quark production at the LHC}\n\n\\subsection{$t\\bar{t}$ production}\n\nTop pair production is the main channel for top quark production at Tevatron \nand LHC.\nIn Table~\\ref{tab:ttbar} we collect the predictions for the top pair\ncross-section at LHC 7 TeV at NLO computed with the MCFM code~\\cite{ref:MCFM} \nusing different PDF sets.\n\n\\begin{table}[ht!]\n \\caption{Top pair cross-section at NLO with different PDF sets at \n LHC 7 TeV.}\n \\label{tab:ttbar}\n \\begin{narrowtabular}{2cm}{c|c}\n \\hline\n CTEQ6.6~\\cite{Nadolsky:2008zw} & 147.7 $\\pm$ 6.4 pb \\\\\n MSTW2008~\\cite{Martin:2009iq} & 159.0 $\\pm$ 4.7 pb \\\\\n NNPDF2.0~\\cite{Ball:2010de} & 160.0 $\\pm$ 5.9 pb \\\\\n \\hline\n ABKM09~\\cite{Alekhin:2009ni} & 131.9 $\\pm$ 4.8 pb \\\\\n HERAPDF1.0~\\cite{:2009wt} & 136.4 $\\pm$ 4.7 pb \\\\\n \\hline\n \\end{narrowtabular}\n\\end{table}\n\nWe notice that the predictions from the three global fits, NNPDF2.0, CTEQ6.6 \nand MSTW08 agree at the 1-sigma level.\nThe differences with PDF sets based on reduced datasets, ABKM09 and HERAPDF1.0, \nare larger. 
\nWe note that top pair production depends strongly on the large-$x$ gluon, \nand thus using sets which do not include Tevatron jet data might lead to rather \ndifferent predictions for this observable.\nOne should notice that differences between the predictions from different PDF \nsets also arise from the use of different values for the strong coupling \nconstant $\\alpha_s$. \nIt has been shown that using a common value of $\\alpha_s$ brings predictions\nfrom different groups for various LHC observables, including top pair \nproduction, into better agreement~\\cite{Demartin:2010er,Ubiali:2010xc}.\n\nOnce we subtract the difference introduced by different choices for $\\alpha_s$, \nthe remaining differences can be directly traced to differences in the PDF \nluminosities at the typical scale of the process.\nThis is illustrated in Fig.~\\ref{fig:gglumi}, where the gluon-gluon luminosity\nfor LHC at 7 TeV is plotted for the CTEQ6.6, MSTW2008 and NNPDF2.0 NLO sets.\nFor example, the lower value for the cross-section obtained using the CTEQ6.6\nset reflects the smaller $gg$ luminosity as compared to the other sets at\n$Q^2=m_t^2$.\n\\begin{figure}[ht!]\n  \\centering\n  \\includegraphics[width=7cm]{gglumi-7-rel-global}\n  \\caption{Gluon-gluon parton luminosities for CTEQ6.6, MSTW2008 and NNPDF2.0\n    including the associated PDF uncertainties. Results are shown as ratios\n    to NNPDF2.0.}\n  \\label{fig:gglumi}\n\\end{figure}\n\n\\subsection{Single-top production}\n\nNext we present predictions for single-top production at the LHC at 7 TeV.\nThese predictions for various PDF sets are collected in \nTable~\\ref{tab:singletop}, where we present results for both the $t$- \nand $s$-channel single-top cross-sections computed at NLO in QCD with \nthe MCFM code. 
We have used the $N_f=5$ (massless) calculation \nfor the results presented here; recently, the $N_f=4$ calculation, which\nproperly takes into account the effects due to the finite $b$-quark mass, also \nbecame available~\\cite{Campbell:2009ss}.\n\nWhile at the Tevatron the contributions of $t$- and $s$-channel $W$ exchange \nto single-top production are comparable in size, at the LHC $t$-channel \nproduction is by far the dominant production mechanism.\n\nFrom the point of view of testing the predictions from different PDF sets, \n$t$-channel single-top is also very interesting due to the fact that, in the \nso-called 5-flavour scheme ({\\it i.e.} a scheme where the $b$ is assumed to\nbe a parton in the proton), the cross-section at LO directly probes the \n$b$-quark PDF, which in turn is closely related to the gluon distribution \nfrom which it is generated radiatively.\n\nFrom Table~\\ref{tab:singletop} one notices that the central predictions from \nthe various PDF sets can differ by several times the quoted 1-sigma PDF \nuncertainty\\footnote{Note that for the ABKM09 prediction the uncertainty \nincludes the associated $\\alpha_s$, $m_c$ and $m_b$ uncertainties, which \ncannot be disentangled from the PDF uncertainties, and is thus much larger \nthan the one obtained using other PDF sets.}.\nThere are different contributions to this discrepancy. The first stems from \nthe different values of the strong coupling constant $\\alpha_S$ which are \nused by different parton sets. 
Since single-top production is mediated\nby electroweak gauge bosons, $\\alpha_s$ enters only in radiative corrections;\nunlike the case of $t\\bar{t}$ production discussed above, this\neffect is rather small.\n\nIn order to separate the differences in the single-top production \ncross-section which arise from the differences in the PDFs themselves\nfrom those due to other physical parameters (like $m_b$ or $\\alpha_s$) which \nalso enter in the PDF analyses and in the computation of the partonic matrix \nelement, we plot in Fig.~\\ref{fig:bglumi} the $b$-gluon parton luminosity, \nwhich determines the LO cross-section, for the CTEQ6.6, MSTW2008 and \nNNPDF2.0 NLO sets. \nIt is clear that parton luminosities in the kinematic region relevant for \nsingle-top production differ by an amount much smaller than the cross-sections \nthemselves, suggesting that the differences indeed come from variations of \nother physical parameters which enter the PDF analysis.\n\nIndeed, it can be seen that the bulk of this difference is related to the \ndifferent values of the $b$-quark mass used in the fits by the different\ncollaborations. The NNPDF collaboration sets $m_b=4.3$ GeV, CTEQ uses \n$m_b=4.5$ GeV, while the MSTW08 fit is performed setting $m_b=4.75$ GeV. \nIn order to substantiate our claim that the different values used for the $b$-quark \nmass explain the bulk of the difference for the $t$-channel single-top \ncross-section, we produced two NNPDF2.0 sets with $m_b = 3.7$ GeV \nand $m_b = 5.0$ GeV, respectively. 
\nThe results for the $b$-gluon parton luminosities for these modified \nsets are shown in the right plot in Fig.~\\ref{fig:bglumi} and the \ncorresponding $t$-channel single-top cross-sections \nare collected in Table~\\ref{tab:singletop-mb}.\nIt is clear that the value of $m_b$ is anti-correlated with the $bg$ luminosity \nand the $t$-channel single-top cross-section.\n\nFrom Table~\\ref{tab:singletop-mb} one sees that variations of the $b$-quark \nmass of the order of $\\delta m_b\\sim 0.7$ GeV induce an uncertainty in the \ncross-section of $\\delta\\sigma \\sim 3$ pb.\nIf we take the PDG average as the best available determination of the $b$-quark\nmass and convert it from the \\hbox{$\\overline{\\rm MS}$} scheme to the pole mass scheme, we obtain an\nuncertainty of approximately $\\delta m_b({\\rm PDG})\\sim 0.2$ GeV. Rescaling \nthe error accordingly, we are still left with an uncertainty in the $t$-channel single-top \ncross-section of $\\delta\\sigma \\sim 0.8$ pb, still larger than the typical \nnominal PDF uncertainties quoted in Table~\\ref{tab:singletop}. \nIt is also clear from Table~\\ref{tab:singletop-mb} and Fig.~\\ref{fig:bglumi} \nthat using a similar value of $m_b$ would bring the predictions from different \nPDF sets into much better agreement. 
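The rescaling above amounts to a linear interpolation between the two modified-$m_b$ cross-sections (46.77 pb at $m_b=3.7$ GeV and 41.04 pb at $m_b=5.0$ GeV):

```latex
\frac{\delta\sigma}{\delta m_b} \simeq
  \frac{46.77\,{\rm pb} - 41.04\,{\rm pb}}{5.0\,{\rm GeV} - 3.7\,{\rm GeV}}
  \simeq 4.4\,{\rm pb\,GeV^{-1}}
\quad\Longrightarrow\quad
\delta\sigma \simeq 4.4\,{\rm pb\,GeV^{-1}} \times 0.2\,{\rm GeV}
  \simeq 0.9\,{\rm pb},
```

in line with the $\delta\sigma \sim 0.8$ pb quoted in the text.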
The uncertainty due to $m_b$ should \nthus always be accounted for in the theoretical predictions for LHC single-top \nproduction.\n\n\\begin{table}[t!]\n \\caption{Single-top cross-section at NLO with different PDF sets at \n LHC 7 TeV.}\n \\label{tab:singletop}\n \\begin{narrowtabular}{2cm}{c|c|c}\n \\hline\n & $t$-channel & $s$-channel \\\\ \n \\hline\n CTEQ6.6~\\cite{Nadolsky:2008zw} & 40.85 $\\pm$ 0.50 pb & 2.33 $\\pm$ 0.05 pb\\\\\n MSTW2008~\\cite{Martin:2009iq} & 41.96 $\\pm$ 0.26 pb & 2.38 $\\pm$ 0.04 pb\\\\\n NNPDF2.0~\\cite{Ball:2010de} & 44.33 $\\pm$ 0.32 pb & 2.38 $\\pm$ 0.06 pb\\\\\n \\hline\n ABKM09~\\cite{Alekhin:2009ni} & 43.17 $\\pm$ 1.98 pb & 2.40 $\\pm$ 0.03 pb\\\\\n HERAPDF1.0~\\cite{:2009wt} & 40.04 $\\pm$ 0.33 pb & 2.38 $\\pm$ 0.05 pb\\\\\n \\hline\n \\end{narrowtabular}\n\\end{table} \n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{bglumi-7-rel-global.eps}\\qquad\n \\includegraphics[width=0.45\\textwidth]{bglumi-7-rel-nnpdf.eps}\n \\caption{(left) $b$-gluon parton luminosities for CTEQ6.6, MSTW2008 and \n NNPDF2.0, normalized to the NNPDF2.0 value. (right) $b$-gluon parton \n luminosities for NNPDF2.0 fits with different values of the $b$-quark \n mass normalized to the standard NNPDF2.0.}\n \\label{fig:bglumi}\n\\end{figure}\n\n\\begin{table}[t!]\n \\caption{$t$-channel single-top cross-section at NLO computed using NNPDF2.0 \n sets with different values of the $b$-quark mass.}\n \\label{tab:singletop-mb}\n \\begin{narrowtabular}{2cm}{l|c}\n \\hline\n NNPDF2.0 ($m_b=3.7$ GeV) & 46.77 $\\pm$ 0.36 pb \\\\\n NNPDF2.0 ($m_b=4.3$ GeV) & 44.33 $\\pm$ 0.32 pb \\\\\n NNPDF2.0 ($m_b=5.0$ GeV) & 41.04 $\\pm$ 0.32 pb \\\\\n \\hline\n \\end{narrowtabular}\n\\end{table}\n\n\\section{PDF-induced correlations}\n\nIt is well known (see for example the discussion in \nRef.~\\cite{Nadolsky:2008zw,Ball:2008by}) that parton densities\ninduce correlations among different observables measured at hadron \ncolliders. 
\nThese can be the PDFs themselves, one PDF and a physical observable, or \ntwo physical observables. The latter case is especially important from \nthe experimental point of view, since it allows one to define measurement \nstrategies in which the PDF uncertainties between two observables cancel, \nfor example when the correlation between the two observables \nis maximal.\n\nIn the case of a PDF set based on the Monte Carlo method, like NNPDF, \nthe correlation coefficient $\\rho[A,B]$ for two observables $A$ and $B$ which \ndepend on PDFs is given by the standard expression for the correlation of two\nstochastic variables~\\cite{Ball:2008by,Demartin:2010er}\n\\begin{equation}\n  \\label{eq:correlation}\n  \\rho[A,B]=\\frac{\\langle A B\\rangle_{\\mathrm{rep}}\n  - \\langle A\\rangle_{\\mathrm{rep}}\\langle B\\rangle_{\\mathrm{rep}} }\n  {\\sigma_A\\sigma_B}\n\\end{equation}\nwhere the averages are taken over the ensemble of the $N_{\\mathrm{rep}}$ values \nof the observables computed with the different replicas of the PDF set, \nand $\\sigma_{A,B}$ are the standard deviations for the observables as computed \nfrom the MC ensemble.\nThe value of $\\rho$ characterizes whether two observables are correlated \n($\\rho \\approx 1$), anti-correlated ($\\rho \\approx -1$) or uncorrelated \n($\\rho\\approx 0$). In the following we present results for the NNPDF2.0 set; \nthe LHC cross-sections have been obtained as before using the MCFM code.\n\nAs a first example, we compute the correlation between the $t\\bar{t}$ and\n$t$-channel single-top cross-section at the LHC (7 TeV) and different PDFs\nat the factorization scale $\\mu_f=m_{\\mathrm{top}}$ as a function of $x$.\nThe results are plotted in Fig.~\\ref{fig:pdf_obs_corr}. 
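For a Monte Carlo set such as NNPDF, the correlation coefficient above is a few lines of code. The sketch below (Python; the replica values are toy numbers, not actual NNPDF predictions) uses population standard deviations, consistent with the numerator of the expression.

```python
import numpy as np

def replica_correlation(a, b):
    """rho[A, B]: a and b hold the N_rep values of the two observables,
    one entry per PDF replica. Population std (ddof=0) matches the
    <AB> - <A><B> numerator."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return ((a * b).mean() - a.mean() * b.mean()) / (a.std() * b.std())

# Toy replica ensembles for illustration: a linear relation gives |rho| = 1.
a = np.array([1.0, 2.0, 3.0, 4.0])
print(replica_correlation(a, 2.0 * a + 1.0))   # fully correlated
print(replica_correlation(a, -a))              # fully anti-correlated
```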
\nThe most remarkable features are that the $t\\bar{t}$ cross-section at the \nLHC at 7 TeV is mostly correlated to the gluon distribution at $x\\sim 0.1$ \nand anti-correlated with it at small-$x$, with the same behaviour present \nfor sea quark PDFs, generated radiatively from the gluon. \nWe note also that the $u$- and $d$-quark distributions are anti-correlated \nwith the cross-section at medium-\/large-$x$. \n\nAs for the case of the $t$-channel single-top cross-section, we point out \nthat the strong correlation with the gluon (and therefore with the $c$- \nand $b$-quark distributions) present for $t\\bar{t}$ is now milder \nand peaked at medium-$x$, $x\\sim 0.01$, and now we find a moderate \ncorrelation at medium-\/small-$x$ with the $u$ and $d$ PDFs, of the \nopposite sign as in the $t\\bar{t}$ cross-section. \nWe would like to stress as well the correlation of single-top\ncross-section and the $s$-,$\\bar{s}$-quark PDFs at medium-\/small-$x$, which \nis notably absent in the $t\\bar{t}$ case.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.33\\textwidth,angle=270]{pdf_ttbar_corr}\\qquad\n \\includegraphics[width=0.33\\textwidth,angle=270]{pdf_singletop_corr}\n \\caption{Correlation between parton densities and $t\\bar{t}$ (left) \n and $t$-channel single-top (right) cross-sections at the LHC 7 TeV. \n The PDF set used is NNPDF2.0 and cross-sections have been\n computed with MCFM.}\n \\label{fig:pdf_obs_corr}\n\\end{figure}\n\nAs previously pointed out, the correlation coefficient \nEq.~(\\ref{eq:correlation}) can also be computed between two cross-sections, \nwhich is potentially relevant since in the case of a sizable correlation \nthe measurement of one of these observables would provide useful information \non the value of the other one. 
In this respect we have computed the \ncorrelation between the $t\\bar{t}$ and $t$-channel single-top cross-sections \non one side and $W^{\\pm}$ or $Z^0$ cross-sections at the LHC on the other.\nThe values for the correlation coefficients for the different pairs of \nobservables are collected in Table~\\ref{tab:obs_obs_corr} and the correlation \nellipses are plotted in Fig.~\\ref{fig:ttbar_VB_corr} for the $t\\bar{t}$ \ncross-section and in Fig.~\\ref{fig:singletop_VB_corr} for $t$-channel \nsingle-top.\n\n\\begin{table}[t!]\n  \\centering\n  \\begin{narrowtabular}{3cm}{c|c|c|c}\n    \\hline\n    $\\mathbf{\\rho}$ & $\\sigma_{W^+}$ & $\\sigma_{W^-}$ & $\\sigma_{Z^0}$\\\\\n    \\hline\n    $\\sigma_{t\\bar{t}}$ & -0.716 & -0.694 & -0.773 \\\\\n    \\hline\n    $\\sigma_{t}$ & 0.330 & 0.140 & 0.240 \\\\\n    \\hline\n  \\end{narrowtabular}\n  \\caption{Correlation coefficients between $t\\bar{t}$ or $t$-channel \n    single-top and $W^{\\pm}$ or $Z^0$ cross-sections at the LHC 7 TeV in the\n    NNPDF2.0 analysis. }\n  \\label{tab:obs_obs_corr}\n\\end{table}\n\nBoth the values of $\\rho$ and the shape of the correlation ellipses show \na significant anti-correlation between the $t\\bar{t}$ cross-section and \nthe $W^\\pm$ and $Z^0$ ones.\nGiven the fact that the vector boson cross-sections at the LHC are \n${\\cal{O}}(10)$ times larger than the $t\\bar{t}$ cross-section and \nmore accurately known from the theoretical point of view, it is conceivable \nto use them to better calibrate the top pair cross-section \nmeasurement in early data.\n\nOn the other hand, the single-top cross-section shows a very mild \ncorrelation to the vector boson ones, thus rendering a similar approach based \non the precision measurement of EW bosons difficult. 
However, if one is able\nto identify other observables which should be measured with a similar\nprecision and that are correlated to the single-top cross-section, the\ndiscussion of the $t\\bar{t}$ case would also apply here.\n \n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.31\\textwidth]{Wminus-ttbar}\\quad\n \\includegraphics[width=0.31\\textwidth]{Wplus-ttbar}\\quad\n \\includegraphics[width=0.31\\textwidth]{Z-ttbar}\\quad\n \\caption{Correlation between $t\\bar{t}$ and Electroweak Vector Boson cross \n sections at the LHC 7 TeV. The cross-sections have been computed\nwith MCFM and the NNPDF2.0 parton set.}\n \\label{fig:ttbar_VB_corr}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.31\\textwidth,angle=90]{Wminus-singletop}\\quad\n \\includegraphics[width=0.31\\textwidth]{Wplus-singletop}\\quad\n \\includegraphics[width=0.31\\textwidth]{Z-singletop}\\quad\n \\caption{Correlation between $t$-channel single-top and Electroweak Vector \n Boson cross-sections at the LHC 7 TeV. The cross-sections have\n been computed\nwith MCFM and the NNPDF2.0 parton set.}\n \\label{fig:singletop_VB_corr}\n\\end{figure}\n\n\\section{Conclusions}\n\nThe quality of top physics results from the LHC experiments will be affected, \namong other factors, by our knowledge of Parton Distribution Functions and \ntheir uncertainties. In this contribution we have reviewed the present status \nof predictions for $t\\bar{t}$ and single-top cross-sections evaluated at \nNLO in QCD with different PDF sets and pointed out differences among them, \ntrying to elucidate the reasons for these differences. For the case of \nsingle-top production, we have shown that one important source of difference\namong predictions obtained using different PDF sets is the value of the \n$b$-quark mass $m_b$ used by the different collaborations. 
\n\nIn the second part we briefly discussed PDF-induced correlations between \nparton densities and top cross-sections and between the latter and \nelectroweak vector boson production cross-sections at the LHC at 7 TeV. \nThese correlations could be useful for defining experimental strategies to \nmeasure the top quark cross-section in a way in which PDF uncertainties are \nreduced.\n\n\\acknowledgments\nAG would like to thank the organizers, and in particular Fabio Maltoni,\nfor the kind invitation to participate in a very nice and stimulating \nWorkshop, and the HEPTOOLS European Network for providing the financial \nsupport for his participation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{1. ISM computational complexity derivation}\n\n\n\nFor ISM, DG and SM, the bottleneck resides in the computation of the gradient.\n\\[ f ( W) = \\sum_{i, j} \\gamma_{i, j} e^{- \\frac{\\tmop{Tr} ( W^T A_{i, j}\n   W)}{2 \\sigma^2}} \\]\n\\[ \\frac{\\partial f}{\\partial W} = \\left[ \\sum_{i, j} \\frac{\\gamma_{i,\n   j}}{\\sigma^2} e^{- \\frac{\\tmop{Tr} ( W^T A_{i, j} W)}{2 \\sigma^2}} A_{i, j}\n   \\right] W \\]\n\\[ \\frac{\\partial f}{\\partial W} = \\left[ \\sum_{i, j} \\frac{\\gamma_{i,\n   j}}{\\sigma^2} e^{- \\frac{\\tmop{Tr} ( W^T \\Delta x_{i, j} \\Delta x_{i, j}^T\n   W)}{2 \\sigma^2}} A_{i, j} \\right] W \\]\nwhere $A_{i, j} = \\Delta x_{i, j} \\Delta x_{i, j}^T$. The variables have the\nfollowing dimensions.\n\\[ \\begin{array}{l}\n     x_{i, j} \\in \\mathbbm{R}^{d \\times 1}\\\\\n     W \\in \\mathbbm{R}^{d \\times q}\n   \\end{array} \\]\nTo compute a new $W$ with DG, we first multiply $\\Delta x_{i, j}^T W$, which\nis $O ( d)$. Note that $W$ in DG is always a single column. Next, it\nmultiplies with its own transpose to yield $O ( d + q^2)$. Then we\ncompute $A_{i, j}$ to get $O ( d + q^2 + d^2)$. Since this operation needs to\nbe added $n^2$ times, we get $O ( n^2 ( d + q^2 + d^2))$. Since $d \\gg q$,\nthis expression reduces to $O ( n^2 d^2)$. 
Let $T_1$ be the number of\niterations until convergence; then it becomes $O ( T_1 n^2 d^2)$. Lastly, in\nDG, this operation needs to be repeated $q$ times, hence $O ( T_1 n^2 d^2\nq)$.\n\n\n\nTo compute a new $W$ with SM, we first multiply $\\Delta x_{i, j}^T W$, which\nis $O ( d q)$. Next, it multiplies with its own transpose to yield $O ( d q\n + q^2)$. Then we compute $A_{i, j}$ to get $O ( d q + q^2 + d^2)$.\nSince this operation needs to be added $n^2$ times, we get $O ( n^2 ( d q +\nq^2 + d^2))$. Since $d \\gg q$, this reduces to $O ( n^2 d^2)$.\nThe SM method requires the computation of the inverse of a $d \\times d$ matrix.\nSince matrix inversion is cubic, it becomes $O ( n^2 d^2 + d^3)$. Lastly, let\n$T_2$ be the number of iterations until convergence; then it becomes $O ( T_2\n( n^2 d^2 + d^3))$.\n\n\n\nTo compute a new $W$ with ISM, we first multiply $\\Delta x_{i, j}^T W$, which\nis $O ( d q)$. Next, it multiplies with its own transpose to yield $O ( d q\n + q^2)$. Then we compute $A_{i, j}$ to get $O ( d q + q^2 + d^2)$.\nSince this operation needs to be added $n^2$ times, we get $O ( n^2 ( d q +\nq^2 + d^2))$. Since $d \\gg q$, this reduces to $O ( n^2 d^2)$.\nThe ISM method requires the computation of the eigendecomposition of a $d\n\\times d$ matrix. Since eigendecomposition is also cubic, it becomes $O ( n^2 d^2 +\nd^3)$. Lastly, let $T_3$ be the number of iterations until convergence; then\nit becomes $O ( T_3 ( n^2 d^2 + d^3))$.\n\n\\end{document}\n\n\n\n\\section{Introduction}\n\nClustering, i.e., the process of grouping similar objects in a dataset together, is a classic problem. It is extensively used for exploratory data analysis. Traditional clustering algorithms typically identify a single partitioning of a given dataset. 
However, data is often multi-faceted and can be both interpreted and clustered through multiple viewpoints (or, {\\em views}). For example, the same face data can be clustered based on either identity or based on pose. In real applications, partitions generated by a clustering algorithm may not correspond to the view a user is interested in. \n\n\n\n\\iffalse\n\\fi\nIn this paper, we address the problem of finding an {\\em alternative clustering}, given a dataset and an existing, pre-computed clustering. Ideally, one would like the alternative clustering to be {\\em novel} (i.e., non-redundant) w.r.t. the existing clustering to reveal a new viewpoint to the user. Simultaneously, one would like the result to reveal partitions of high clustering {\\em quality}.\n Several recent papers propose algorithms for alternative clustering~\\cite{gondek2007non,cui2010learning,dang2010generation,davidson2008finding,cui2007non,niu2014iterative}. Among them, Kernel Dimension Alternative Clustering (KDAC) is a flexible approach, shown to have superior performance compared to several competitors~\\cite{niu2014iterative}. KDAC is as powerful as spectral clustering in discovering arbitrarily-shaped clusters (including ones that are not linearly separable) that are non-redundant w.r.t.~an existing clustering. As an additional advantage, KDAC can simultaneously learn the subspace in which the alternative clustering resides. \n\nThe flexibility of KDAC comes at a price: the KDAC formulation involves optimizing a non-convex cost function constrained over the space of orthogonal matrices (i.e, the Stiefel manifold). Niu et al.~\\cite{niu2014iterative} proposed a Dimension Growth (DG) heuristic for solving this optimization problem, which is nevertheless highly computationally intensive. 
We elaborate on its complexity in Section~\\ref{gen_inst}; experimentally, DG is quite slow, with a convergence time of roughly $46$ hours on an Intel Xeon CPU for a face dataset of $624$ samples (cf.~Section~\\ref{sec:exp}). This limits the applicability of KDAC in interactive exploratory data analysis settings, which often require results to be presented to a user within a few seconds. It also limits the scalability of KDAC to large data. Alternatively, one can solve the KDAC optimization problem by gradient descent on a Stiefel manifold (SM)~\\cite{wen2013feasible}. However, given the lack of convexity, both DG and SM are prone to getting trapped in local minima. Multiple runs with random initializations are required to ameliorate the effect of local minima. This increases computation time, and becomes less effective as the dimensionality of the data increases: higher dimension rapidly expands both the search space and the number of local minima. As such, with both DG and SM, the clustering quality is negatively affected by an increase in dimension.\n\n\n\\iffalse\n\\fi\n\n{\\bf Our Contributions.} \nMotivated by the above issues, we make the following contributions:\n\\begin{packeditemize}\n\\item We propose an Iterative Spectral Method (ISM), a \\emph{novel algorithm} for solving the non-convex, Stiefel-manifold-constrained optimization problem inherent in KDAC. Our algorithm has several highly desirable properties. First, it \\emph{significantly outperforms} traditional methods such as DG and SM in terms of both computation time and quality of the produced alternative clustering. Second, the algorithm relies on an \\emph{intuitive use of iterative spectral decompositions}, making it both easy to understand and easy to implement, using off-the-shelf libraries. \n\\item ISM has a natural initialization, constructed through a Taylor approximation of the problem's Lagrangian. 
Therefore, high quality results can be obtained without random restarts in search of a better initialization. We show that this initialization is a contribution in its own right, as its use improves performance of competitor algorithms.\n\\item We provide \\emph{theoretical guarantees} on its fixed point. In particular, we establish conditions under which the fixed point of ISM satisfies both the 1st and 2nd order necessary conditions for local optimality. \n\\item We extensively evaluate the performance of ISM in solving KDAC with synthetic and real data under various clustering quality and cost measures. Our results show an improvement in execution time by up to a factor of roughly $70$ and $10^5$, compared to SM and DG, respectively. At the same time, ISM outperforms SM and DG in clustering quality measures along with significantly lower computational cost. \n\n\n\n\\end{packeditemize}\n\\section{Kernel Dimension Alternative Clustering (KDAC) }\n\\label{gen_inst}\n\nIn alternative clustering, a dataset is provided along with existing clustering labels. Given this\nas input, we seek a $\\emph{new}$ clustering that is (a) distinct from the existing clustering, and (b) has high quality with respect to a clustering quality measure.\nAn example illustrating this is shown in Figure \\ref{fig:moon_illustrate}. \nThis dataset comprises 400 points in $\\mathcal{R}^4$. Projected to the first two dimensions, the dataset contains two clusters of intertwining parabolas shown as Clustering A. Projected to the last two dimensions, the dataset contains two Gaussian clusters shown as Clustering B. Points clustered together in one view can be in different clusters in the alternative view. 
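For concreteness, data with exactly this structure can be generated synthetically; the following is a minimal sketch (the function name, noise levels, and cluster offsets are illustrative choices of ours, not the construction used for the figure):

```python
import numpy as np

def make_4d_moons(n=400, seed=0):
    # Synthetic analogue of the 4-D moon data: dims 1-2 hold two
    # intertwining parabolas (Clustering A), dims 3-4 hold two
    # Gaussian blobs (Clustering B); the two labelings are independent.
    rng = np.random.default_rng(seed)
    a = rng.integers(0, 2, n)                      # view-A labels
    b = rng.integers(0, 2, n)                      # view-B labels
    t = rng.uniform(-1.0, 1.0, n)
    x1 = t
    x2 = np.where(a == 0, t ** 2, 1.0 - t ** 2) + rng.normal(0.0, 0.05, n)
    x3 = np.where(b == 0, -2.0, 2.0) + rng.normal(0.0, 0.5, n)
    x4 = rng.normal(0.0, 0.5, n)
    return np.column_stack([x1, x2, x3, x4]), a, b
```

Projecting the returned array onto its first two columns recovers the parabolas, and onto its last two the Gaussians, mirroring the two views described above.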
In alternative clustering, given (a) the dataset, and (b) one of the two possible clusterings (e.g., Clustering B), we wish to discover the alternative clustering illustrated by the different view.\n\n\\begin{figure}[ht] \n \\centering\n \\includegraphics[width=7cm,height=3cm]{{extras\/moon}}\n \\caption{Four-dimensional moon dataset. Projection into the first two dimensions reveals different clusters than projection to the latter two dimensions.}\n \\label{fig:moon_illustrate}\n\\end{figure}\n\n\n\nFormally, let $X \\in \\mathcal{R}^{n \\times d}$ be a dataset with $n$ samples and\n$d$ features, along with an existing clustering \n$Y \\in \\mathcal{R}^{n \\times k}$, where $k$ is\nthe number of clusters. If $x_i$ belongs\nto cluster $j$, then $Y_{i, j}= 1$; otherwise, $Y_{i,j}=0$.\nWe wish to discover an alternative clustering $U \\in \\mathcal{R}^{n \\times k}$ \nin some lower-dimensional subspace of dimension $q \\ll d$.\nLet $W \\in \\mathcal{R}^{d \\times q}$ be a projection\nmatrix such that $ X W \\in \\mathcal{R}^{n \\times q}$. \n\n\n\n\nWe seek the optimal projection $W$ and clustering $U$ that \nmaximize the statistical dependence between $X W$ and $U$, yielding a high clustering quality, while minimizing the\ndependence between $X W$ and $Y$, ensuring the novelty of the new clustering. \nDenoting DM as a Dependence Measure function, and using $\\lambda$ as a weighting constant, this optimization can be written as: \n\\begin{subequations}\\label{eq:orig_objective}\n \\begin{align}\n\\text{Maximize:}& \\quad\\tmop{DM} ( X W , U) - \\lambda \\tmop{DM} ( X W, Y),\\\\\n \\text{s.t.:}& \\quad W^T W = I, U^T U = I.\n\\end{align}\n\\end{subequations} \nAs in spectral clustering, the labels of the alternative clustering are retrieved by performing $K$-means on matrix $U$, treating its rows as samples. \nThere are many potential choices for DM. The most well-known measures are correlation and mutual information (MI). 
While correlation performs well in many applications, it lacks the ability to measure non-linear relationships. Although there is a clear relationship in Clustering A in Figure \\ref{fig:moon_illustrate}, correlation would mistakenly yield a value of nearly 0. \nAs a dependence measure, MI is superior in that it also measures non-linear relationships. However, due to the probabilistic nature of its formulation, a joint distribution is required. Depending on the distribution, the computation of MI can be prohibitive. \n\nFor these reasons, the Hilbert Schmidt Independence Criterion (HSIC) \\cite{gretton2005measuring} has been proposed for KDAC \\cite{niu2014iterative}. Like MI, it captures non-linear relationships. Unlike MI, HSIC does not require estimating a joint distribution, and it relaxes the need to discretize continuous variables. In addition, as shown by Niu et al.~\\cite{niu2014iterative}, \nHSIC is mathematically equivalent to spectral clustering, further implying that a high HSIC between the data and $U$ yields high clustering quality.\nA visual comparison of HSIC and correlation can be found in\nFigure \\ref{HSIC_capture_nonlinear} of Appendix \\ref{App:HSIC} in the supplement.\n\n\nUsing HSIC as a dependence measure, the objective of KDAC becomes \n\\begin{subequations}\\label{eq:main_objective}\n \\begin{align}\n\\text{Maximize:}& \\quad\\tmop{HSIC} ( X W , U) - \\lambda \\tmop{HSIC} ( X W, Y),\\\\\n \\text{subject to:}& \\quad W^T W = I, U^T U = I,\n\\end{align}\n\\end{subequations} \nwhere\n$\\tmop{HSIC} ( X, Y) \\equiv \\frac{1}{(n-1)^2} \\tmop{Tr} ( K_{X} H K_Y H).$\nHere, the variables $K_{X}$ and $K_Y$ are Gram matrices, \nand the $H$ matrix is a\ncentering matrix where $H = I - \\frac{1}{n} \\bm{1}_n \\bm{1}^T_n$ with \n$\\bm{1}$ the $n$-sized vector of all ones. \nThe elements of $K_{X}$ and $K_Y$\nare calculated by kernel functions $k_{X} ( x_i, x_j)$ and $k_Y ( y_i, y_j)$. 
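The empirical HSIC above, together with a Gaussian Gram matrix, can be sketched in NumPy as follows (a minimal illustration of the formula, not the authors' implementation; function names and the default bandwidth are ours):

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    # Gram matrix with entries k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.clip(d2, 0.0, None) / (2.0 * sigma ** 2))

def hsic(K_x, K_y):
    # Empirical HSIC: Tr(K_x H K_y H) / (n - 1)^2, with H the centering matrix
    n = K_x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K_x @ H @ K_y @ H) / (n - 1) ** 2
```

With the kernels used in KDAC, the dependence term between the projected data and a label indicator matrix would then read, e.g., `hsic(gaussian_kernel(X @ W), Y @ Y.T)`.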
\n The kernel functions for $Y$ and $U$ used in KDAC are $K_Y = Y Y^T$ and $K_U = U U^T$, and the kernel function for $XW$ is the Gaussian $k_{XW} ( x_i, x_j) = \\exp(- {\\tmop{Tr} [ ( x_i - x_j)^T\nW W^T ( x_i - x_j)]}\/{(2 \\sigma^2)})$. \nDue to the equivalence of HSIC and spectral clustering, the practice of normalizing the kernel $K_{XW}$ is adopted from spectral clustering by Niu et al.~\\cite{niu2014iterative}. \nThat is, for $K_{XW}$ the unnormalized\nGram matrix, the normalized matrix is defined as $D^{- 1 \/ 2} K_{XW} D^{- 1 \/ 2}$ where $D=\\mathrm{diag}(\\bm{1}_n^TK_{XW})$ is a diagonal matrix whose elements are the column-sums of $K_{XW}$.\n\n \n\\textbf{KDAC Algorithm.}\nThe optimization problem \\eqref{eq:main_objective} is non-convex. The KDAC algorithm solves (\\ref{eq:main_objective}) using alternate maximization between the variables $U$, $W$ and $D$, updating each while holding the other two fixed. After convergence, motivated by spectral clustering, $U$ is discretized via $K$-means to provide the alternative clustering. The algorithm proceeds in an iterative fashion, summarized in Algorithm \\ref{KDAC_algorithm}. In each iteration, variables $D$, $U$, and $W$ are updated as follows:\n\n\\textbf{Updating D:} \nWhile holding $U$ and $W$ constant, $D$ is computed as \n$D=\\mathrm{diag}(\\bm{1}_n^TK_{XW})$.\nMatrix $D$ is subsequently treated as a scaling constant throughout the rest of the iteration. \n\n\\textbf{Updating U:} \nHolding $W$ and $D$ constant and solving for $U$, (\\ref{eq:main_objective}) reduces to :\n\\begin{align} \\label{eq:spectral_clustering}\n \\textstyle\\max_{U:U^TU=I} \\tmop{Tr} ( U^T \\mathcal{Q} U), \n\\end{align}\nwhere $\\mathcal{Q}= H D^{- 1 \/ 2} K_{XW} D^{- 1 \/ 2} H$. 
This is precisely spectral clustering~\\cite{von2007tutorial}: \n \\eqref{eq:spectral_clustering} can be solved by setting $U$'s columns to the $k$ most dominant eigenvectors of $\\mathcal{Q}$, which can be done in $O(n^3)$ time.\n\n\n \n\\begin{algorithm}[t]\n \\scriptsize\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n \\Input{dataset $X$, original clustering $Y$}\n \\Output{alternative clustering $U$ }\n Initialize $W_0$ using $W_{\\mathrm{init}}$ from (\\ref{eq:winit})\\\\\n Initialize $U_0$ from original clustering\\\\\n Initialize $D_0$ from $W$ and original clustering\\\\\n \\While{($U$ not converged) or ($W$ not converged)}{\n Update $D$ \\\\\n Update $W$ by solving Equation (\\ref{eq:main_cost_function})\\\\\n Update $U$ by solving Equation (\\ref{eq:spectral_clustering})\\\\\n\t}\n\tClustering Result $\\gets$ Apply K-means to $U$\n\t\\caption{KDAC Algorithm} \n\\label{KDAC_algorithm}\n\\end{algorithm}\n\\begin{algorithm}[t]\n \\scriptsize\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n \\Input{$U$,$D$,$X$, $Y$}\n \\Output{ $W^*$ }\n Initialize $W_0$ to the previous value of $W$ in the master loop of KDAC.\\\\\n\t\\While{$W$ not converged}{\n\t\t$W \\gets \\mathop{\\mathrm{eig}}_{\\min} ( \\Phi ( W));$\\\\\n\t\n\t} \t\n\t\\caption{ISM Algorithm} \\label{masteralg} \n\\end{algorithm}\n \n \n\n\n\\textbf{Updating W:} \nWhile holding $U$ and $D$ constant to solve for $W$, (\\ref{eq:main_objective}) reduces to:\n\\begin{subequations}\n\\label{eq:main_cost_function}\n\\begin{align} \n \\text{Minimize:}\\quad&F(W) =- \\textstyle\\sum_{i, j} \\gamma_{i, j} e^{- \\frac{\\tmop{Tr} [ W^T A_{i, j}\n W]}{2 \\sigma^2}}\\label{eq:mainobj}\\\\\n \\text{subject to:}\\quad &W^TW=I\\label{eq:mainconstr}\n\\end{align}\n\\end{subequations}\nwhere $\\gamma_{i,j}$ are the elements of matrix $\\gamma = D^{- 1 \/ 2} H ( U U^T - \\lambda Y Y^T) H D^{- 1 \/ 2}$, and \n$A_{i, j} = ( x_i - x_j) ( x_i - x_j)^T$ (see Appendix \\ref{Cost_Derivation} in the 
supplement for the derivation). \nThis objective, along with a Stiefel Manifold constraint, $W^TW=I$, poses a challenging optimization problem as neither is convex. Niu et al.~\\cite{niu2014iterative} propose solving \\eqref{eq:main_cost_function} through an algorithm termed Dimensional Growth (DG). This algorithm solves for $W$ by computing individual columns of $W$ separately through gradient descent (GD). Given a set of computed columns, the next column is computed by GD projected to a subspace orthogonal to the span of the computed set. Since DG is based on GD, the computational complexity is dominated by computing the gradient of \\eqref{eq:mainobj}. The latter is given by:\n\\begin{equation} \\label{eq:gradient_of_main_cost_function}\n\\nabla F(W) =\\textstyle\\sum^n_i \\sum_j^n \\frac{\\gamma_{i, j}}{\\sigma^2} e^{- \\frac{\\tmop{Tr} [\n W^T A_{i, j} W]}{2 \\sigma^2}} A_{i, j} W. \n\\end{equation}\nThe complexity of DG is $O(t_{DG} n^2d^2q)$, where $n$, $d$ are the dataset size and dimension, respectively, $q$ is the dimension of the subspace of the alternative clustering, and $t_{DG}$ is the number of iterations of gradient descent. The calculation of the gradient contributes the term $O(n^2d^2q)$. Although this computation is highly parallelizable, the algorithm still suffers from a slow convergence rate. Therefore, $t_{DG}$ often dominates the computation cost. \n\n\n\n\nAn alternative approach to optimize \\eqref{eq:main_cost_function} is through classic methods for performing optimization on the Stiefel Manifold (SM) \\cite{wen2013feasible}. The computational complexity of this algorithm is dominated by the computation of the gradient and a matrix inversion with $t_{SM}$ iterations. This yields a complexity of $O(t_{SM} n^2 d^2 + t_{SM}d^3)$ for SM. Finally, as gradient methods applied to a non-convex objective, both SM and DG require multiple executions from random initialization points to find improved local minima. 
This approach becomes less effective as the dimension $d$ increases.\n\n\n\n\n\\section{An Iterative Spectral Method}\n\nThe computation of KDAC is dominated by the $W$ updates in Algorithm~\\ref{KDAC_algorithm}. Instead of using DG or SM to solve the optimization problem for $W$ in KDAC, we propose an Iterative Spectral Method (ISM). Our algorithm is motivated by the following observations. The Lagrangian of \\eqref{eq:main_cost_function} is:\n \\begin{align*}\n \\mathcal{L} ( W, \\Lambda) =& - \\textstyle\\sum_{i, j} \\gamma_{i, j} \\exp\\left(- \\frac{\\tmop{Tr}\n (W^T A_{i, j} W)}{2 \\sigma^2}\\right)\\\\\n &- \\frac{1}{2} \\tmop{Tr} ( \\Lambda ( W^T W -\n I)) \\numberthis \\label{eq:lag}\n\\end{align*}\n Setting $\\nabla_W \\mathcal{L}(W,\\Lambda)=0$ gives us the equation:\n\\begin{align}\\Phi(W)W = W \\Lambda,\\label{eq:balance} \\end{align}\nwhere \n\\begin{equation} \\label{eq:phi_equation}\n \\Phi ( W) = \\textstyle\\sum_{i, j} \\frac{\\gamma_{i, j}}{\\sigma^2} \\exp(- \\frac{\\tmop{Tr} [ W^T A_{i, j}\n W]}{2 \\sigma^2}) A_{i, j},\n\\end{equation}\nand $\\Lambda$ is a diagonal matrix.\n\nRecall that a feasible $W$, satisfying \\eqref{eq:mainconstr}, is orthonormal. \nEquation \\eqref{eq:balance} is an eigenequation; thus, a stationary point $W$ of the Lagrangian \\eqref{eq:lag} has $q$ eigenvectors of $\\Phi(W)$ as its columns.\nMotivated by this observation, ISM attempts to find such a $W$ in the following iterative fashion. Let $W_0$ be an initial matrix. Given $W_k$ at iteration $k$, the matrix $W_{k+1}$ is computed as:\n$$W_{k+1} = \\textstyle\\mathop{\\mathrm{eig}}_{\\min} ( \\Phi ( W_k)), \\quad k=0,1,2,\\ldots,$$\nwhere the operator $\\mathop{\\mathrm{eig}}_{\\min} (A)$ returns a matrix whose columns are\nthe $q$ eigenvectors corresponding to the smallest eigenvalues of $A$.\n\n\nISM is summarized in Alg.~\\ref{masteralg}. Several important observations are in order. 
First, the algorithm ensures that $W_k$, for $k\\geq 1$, is feasible, by construction: selected eigenvectors are orthonormal and satisfy \\eqref{eq:mainconstr}. Second, it is also easy to see that a fixed point of the algorithm will also be a stationary point of the Lagrangian \\eqref{eq:lag} (see also~Lemma~\\ref{lemma:eig}). Though it is harder to prove, selecting eigenvectors corresponding to the \\emph{smallest} eigenvalues is key: we show that this is precisely the property that relates a fixed point of the algorithm to the local minimum conditions (see Thm.~\\ref{thm:stationary}).\nFinally, ISM has several computational advantages. For $t_{ISM}$ iterations, the calculation of $\\Phi(W)$ and the ensuing eigendecomposition yield a complexity of $O(t_{ISM}( n^2 d^2 + d^3))$. Since $q\\ll d$, various approximation methods \\cite{vladymyrov2016variational,richtarikgeneralized,lei2016coordinate} can be employed to find the few required eigenvectors. For example, the Coordinate-wise Power Method~\\cite{lei2016coordinate} approximates the most dominant eigenvalue in $O(d)$ time, reducing ISM's complexity to $O(t_{ISM} n^2 d^2)$. This improvement is further confirmed experimentally (see Figure \\ref{fig:ND_vs_time}). Lastly, $t_{ISM}$ is orders of magnitude smaller than both $t_{DG}$ and $t_{SM}$. In general, $t_{ISM} < 10$, while $t_{SM} > 50$ and $t_{DG} > 200$.\n\n \n\\subsection{Convergence Guarantees}\n\nAs mentioned above, the selection of the eigenvectors corresponding to the \\emph{smallest} eigenvalues of $\\Phi(W_k)$ is crucial for the establishment of a stationary point. 
Namely, we establish the following theorem:\n\\begin{theorem}\\label{thm:stationary}\n For large enough $\\sigma$ (satisfying Inequality \\eqref{ineq:large_sigma_orig}), a fixed point $W^*$ of Algorithm~\\ref{masteralg} satisfies the necessary conditions of a local minimum of \\eqref{eq:main_cost_function} if $\\Phi(W^*)$ is full rank.\n\\end{theorem}\n\\begin{proof}\nThe main body of the proof is organized into a series of lemmas proved in the supplement.\n\nOur first auxiliary lemma (from~\\cite{wright1999numerical}), establishes conditions necessary for a stationary point of the Lagrangian to constitute local minimum. \n\\begin{lemma} \\label{lemma:2nd_order}\n [Nocedal,Wright, Theorem 12.5~{\\cite{wright1999numerical}}] (2nd Order Necessary Conditions)\n Consider the optimization problem:\n $ \\min_{W : h (W) = 0} f (W), $\nwhere $f : \\mathbb{R}^{d \\times q} \\to \\mathbb{R}$ and $h :\n \\mathcal{R}^{d \\times q} \\to \\mathbb{R}^{q \\times q}$ are twice continuously\n differentiable. Let $\\mathcal{L}$ be the Lagrangian of this optimization problem. Then, a local minimum must satisfy the following conditions:\n \\begin{subequations}\n \n \\begin{align} \n &\\nabla_W \\mathcal{L} (W^{\\ast}, \\Lambda^{\\ast}) = 0, \\label{eq:1st_W}\\\\\n &\\nabla_{\\Lambda} \\mathcal{L} (W^{\\ast}, \\Lambda^{\\ast}) = 0, \\label{eq:1st_lambda}\\\\\n \n \n \n \n \n \\begin{split}\n \\tmop{Tr} ( Z^T &\\nabla_{W W}^2 \\mathcal{L}(W^{\\ast}, \\Lambda^{\\ast}) Z) \\geq 0 \\\\&\\tmop{for} \\tmop{all} Z \\neq 0 , \\tmop{with} \n \\nabla h (W^{\\ast})^T Z = 0. \\label{eq:2nd_W}\n \\end{split}\n \\end{align} \n \n \\end{subequations}\n\\end{lemma}\nArmed with this result, we next characterize the properties of a fixed point of Algorithm \\ref{masteralg}:\n\\begin{lemma} \\label{basic_lemma}\n Let $W^{\\ast}$ be a fixed point of Algorithm \\ref{masteralg}. 
Then it satisfies:\n \n $ \\Phi ( W^{\\ast}) W^{\\ast} = W^{\\ast} \\Lambda^{\\ast},$\n \n where $\\Lambda^{\\ast} \\in \\mathcal{R}^{q \\times q}$ is a diagonal matrix\n containing the $q$ smallest eigenvalues of $\\Phi ( W^{\\ast})$ and\n\n $W^{\\ast^T} W^{\\ast} = I. $\n \n\\end{lemma}\nThe proof can be found in Appendix \\ref{proof_of_lemma_2}.\nOur next result, whose proof is in Appendix \\ref{proof_of_lemma_3}, states that \n a fixed point satisfies the 1st order conditions of Lemma \\ref{lemma:2nd_order}. \n\\begin{lemma} \\label{lemma:eig}\n If $W^{\\ast}$ is a fixed point and $\\Lambda^{\\ast}$ is as defined in Lemma~\\ref{basic_lemma},\n then $W^{\\ast}$, $\\Lambda^*$ satisfy the 1st order conditions (\\ref{eq:1st_W})(\\ref{eq:1st_lambda}) of Lemma \\ref{lemma:2nd_order}.\n\\end{lemma}\nOur last lemma, whose proof is in Appendix \\ref{proof_of_lemma_4}, establishes that a fixed point satisfies the 2nd order conditions of Lemma \\ref{lemma:2nd_order}, for large enough $\\sigma$. \n\\begin{lemma} \\label{lemma:2nd_order_lemma}\n If $W^{\\ast}$ is a fixed point, $\\Lambda^{\\ast}$ is as defined in Lemma~\\ref{basic_lemma}, and $\\Phi(W^*)$ is full rank,\n then given a large enough $\\sigma$ (satisfying Inequality \\eqref{ineq:large_sigma_orig}), $W^{\\ast}$ and $\\Lambda^*$ satisfy the 2nd order condition (\\ref{eq:2nd_W}) of Lemma \\ref{lemma:2nd_order}.\n\\end{lemma}\n Thm.~\\ref{thm:stationary} therefore follows.\n\\end{proof}\n\n\nThm.~\\ref{thm:stationary} is stated in terms of a large enough $\\sigma$; we can characterize this constraint precisely. In the proof of Lemma~\\ref{lemma:2nd_order_lemma} we establish the following condition on $\\sigma$:\n\\begin{multline}\n \\sigma^2 [\\min_i ( \\bar{\\Lambda^*}_i) -\\max_j(\\Lambda_j^*)] \\geq \\\\\n \\sum_{i, j} \\frac{|\\gamma_{i, j}|}{\\sigma^2} e^{-\n \\frac{\\tmop{Tr} ( W^{*^T} A_{i, j} W^*)}{2 \\sigma^2}}\\tmop{Tr} (A^T_{i,j}A_{i,j}). 
\\label{ineq:large_sigma_orig}\n\\end{multline}\nHere, $\\Lambda^*$ is the set of $q$ smallest eigenvalues of $\\Phi(W)$, and $\\bar{\\Lambda^*}$ is the set of the remaining eigenvalues. The left-hand side (LHS) of this inequality further motivates ISM's choice of eigenvectors corresponding to the $q$ smallest eigenvalues. This selection guarantees that the LHS of the inequality is positive. Therefore, given a large enough $\\sigma$, Inequality \\eqref{ineq:large_sigma_orig} and the 2nd order condition are satisfied. \n\nFurthermore, this inequality provides a reasonable suggestion for the value of $q$. Since we wish to maximize the term $(\\min_i ( \\bar{\\Lambda^*}_i) -\\max_j(\\Lambda_j^*))$ to satisfy the inequality, the value $q$ should be set where this gap is maximized. More formally, we define\n\\begin{align}\n \\delta_{gap}=\\min_i ( \\bar{\\Lambda^*}_i) -\\max_j(\\Lambda_j^*)\n \\label{eq:eiggap}.\n\\end{align}\nas the eigengap.\n\n\n\n\n\n\n\\subsection{Spectral Initialization via Taylor Approximation}\n\\label{sec:initialization}\nISM admits a natural initialization point, constructed via a Taylor approximation of the objective. As we show experimentally in Section~\\ref{sec:exp}, this initialization is a contribution in its own right: it improves both clustering quality and convergence time for ISM \\emph{as well as} competitor algorithms. To obtain a good initialization, observe that by using the 2nd order Taylor approximation of the objective function \\eqref{eq:mainobj} at $W=0$, the Lagrangian can be approximated by \n\\begin{align*}\n\\tilde{\\mathcal{L}}(W,\\Lambda) \\approx & -\\textstyle \\sum_{i, j} \\gamma_{i, j} \\left( 1 - \\frac{\\tmop{Tr}\n (W^T A_{i, j} W)}{2 \\sigma^2} \\right)\\\\\n & + \\frac{1}{2} \\tmop{Tr} (\\Lambda (I -\n W^T W)). 
\n\\end{align*}\n Setting $\\nabla_W \\tilde{\\mathcal{L}}(W,\\Lambda)=0$ reduces the problem to a simple eigendecomposition, namely, the one defined by the system \n$\\left[ \\textstyle \\sum_{i, j} \\frac{\\gamma_{i, j}}{\\sigma^2} A_{i,\n j} \\right] W = W \\Lambda .$\nHence, the 2nd order Taylor approximation of the original cost objective has a closed-form global minimum that can be used as an initialization point, namely:\n\\begin{align}\n \\textstyle W_{\\mathrm{init}}=\\mathrm{eig}_{\\min}( \\sum_{i, j} {\\gamma_{i, j}} A_{i,\n j}\/{\\sigma^2})\\label{eq:winit}.\n\\end{align}\n\nWe use this spectral initialization (SI) in the first master iteration of KDAC. In subsequent master iterations, $W_0$ (the starting point of ISM) is set to be the last value to which ISM converged previously.\n\n\n\n\n\\section{Scalability}\\label{sec:scalability}\n\\chieh{The scalability advantage of ISM is twofold. First, the most computationally complex portion of the algorithm is the eigendecomposition of $\\Phi(W) \\in \\mathbb{R}^{d \\times d}$. 
Since $d<0$\nare moving under the action of the inverse-square law of universal gravitation.\nIf the components of $x=(r_1,\\dots,r_N)\\in E^N$\nare the positions of the bodies,\nthen we shall denote $r_{ij}=\\norm{r_i-r_j}_E$\nthe distance between bodies $i$ and $j$\nfor any pair $1\\leq i0$.\n\nWe say that a motion $x(t)$ has \\emph{limit shape}\nwhen there is a time-dependent similitude $S(t)$ of the space $E$\nsuch that $S(t)x(t)$ converges to some configuration $a\\neq 0$\n(here the action of $S(t)$ on $E^N$ is the diagonal one).\nThus the limit shape of a hyperbolic motion is the shape\nof its asymptotic velocity $a=\\lim_{t\\to +\\infty}t^{-1}x(t)$.\nNote that, in fact,\nthis represents a stronger way of having a limit shape,\nsince in this case the similarities are given by homotheties.\n\n\n\\subsection{Existence of hyperbolic motions}\n\\label{s-existence}\n\nThe only explicitly known hyperbolic motions\nare of the homographic type,\nmeaning that the configuration is\nall the time in the same similarity class.\nFor this kind of motion,\n$x(t)$ is all the time a central configuration,\nthat is, a critical point of $I^{1\/2}U$.\nThis is a strong limitation;\nfor instance, the only central configurations for $N=3$\nare either equilateral or collinear.\nMoreover,\nthe Painlev\u00e9-Wintner conjecture states that up to similarity\nthere is always a finite number of central configurations.\nThe conjecture was confirmed by Hampton and Moeckel\n\\cite{HamMoe} in the case of four bodies,\nand by Albouy and Kaloshin \\cite{AlbKal}\nfor generic values of the masses in the planar five-body problem.\n\nOn the other hand,\nChazy proved in \\cite{Cha2} that the set of initial conditions\ngiving rise to hyperbolic motions is an open subset of $T\\Omega$,\nand moreover,\nthat the limit shape depends continuously on the initial condition\n(see Lemma \\ref{lema-cont.limitshape}).\nIn particular,\na motion close enough to some hyperbolic homographic motion\nis still 
hyperbolic.\nHowever, this does not allow us to draw conclusions\nabout the set of configurations that are realised as limit shapes.\nIn this paper we prove that \\emph{any} configuration\nwithout collisions is the limit shape of some hyperbolic motion.\nTo our knowledge, there are no results in this direction\nin the literature on the subject.\n\n\n\nAn important novelty in this work is the use of global viscosity solutions, in the sense introduced by Crandall, Evans and Lions \\cite{CraLio,CraEvaLio},\nfor the supercritical Hamilton-Jacobi equation\n\\begin{equation}\\tag{HJ}\\label{HJh}\nH(x,d_xu)=h \\qquad x\\in E^N,\n\\end{equation}\nwhere $H$\nis the Hamiltonian of the Newtonian $N$-body problem,\nand $h>0$.\n\nWe will find global viscosity solutions through a limit process\ninspired by Gromov's construction of the ideal boundary\nof a complete locally compact metric space.\nTo do this,\nwe will have to generalize to the case $h>0$ the H\u00f6lder estimate\nfor the action potential discovered by the first author in\n\\cite{Mad1} in the case $h=0$.\nWith this new estimate we will remedy the loss of\nthe Lipschitz character of the viscosity subsolutions,\nwhich is due to the singularities of the Newtonian potential.\n\nIn a second step, we will show that the functions thus obtained\nare in fact fixed points of the Lax-Oleinik semigroup. \nMoreover,\nwe will prove that given any configuration without collisions\n$a\\in\\Omega$,\nthere are solutions of Equation (\\ref{HJh}) such that\nall their calibrating curves are hyperbolic motions\nhaving the shape of $a$ as limit shape.\nFollowing this method (developed in Sect. 
\\ref{s-HJ})\nwe get to our main result.\n\n\\begin{theorem}\n\\label{thm-princ}\nFor the Newtonian $N$-body problem in a space $E$\nof dimension at least two,\nthere are hyperbolic motions $x:[0,+\\infty)\\to E^N$ such that\n\\[x(t)=\\sqrt{2h}\\,t\\;a+o(t)\\quad\\text{ as }\\quad t\\to +\\infty,\\]\nfor any choice of $x_0=x(0)\\in E^N$,\nfor any configuration without collisions $a\\in\\Omega$ normalized by $\\norm{a}=1$,\nand for any choice of the energy constant $h>0$. \n\\end{theorem}\n\nWe emphasize the fact that the initial configuration can be chosen \\emph{with} collisions.\nThis means that in such a case, the motion $x$ given by the theorem is continuous at $t=0$,\nand defines a maximal solution $x(t)\\in\\Omega$ for $t>0$. \nFor instance,\nchoosing $x_0=0\\in E^N$,\nthe theorem gives the existence of ejections from the total collision configuration,\nwith prescribed positive energy and arbitrarily chosen limit shape.\n\nMoreover, the well known Sundman's inequality (see Wintner \\cite{Win}) implies that motions with total collisions have zero angular momentum. \nTherefore, we deduce the following non trivial corollary.\n\n\\begin{corollary}\nFor any configuration without collisions $a\\in\\Omega$\nthere is a hyperbolic motion with zero angular momentum\nand asymptotic velocity $a$. \n\\end{corollary}\n\n\nIt should be said that the hypothesis that excludes\nthe collinear case $\\dim E=1$ is only required to ensure\nthat action minimizing curves do not suffer collisions.\nThe avoidance of collisions is thus assured by the celebrated\nMarchal's Theorem that we state below in Sect. 
\\ref{s-var.setting}.\nThe collinear case could in principle be analyzed\nin the light of the results obtained by Yu and Zhang \\cite{YuZha}.\n\nTheorem \\ref{thm-princ} should be compared with that obtained\nby the authors in \\cite{MadVen}, which concerns completely parabolic motions.\nWe recall that completely parabolic motions (as well as total collisions)\nhave a very special asymptotic behaviour.\nIn his work of 1918 \\cite{Cha1},\nChazy proves that the normalized configuration\nmust approximate the set of normal central configurations.\nUnder a hypothesis of non-degeneracy,\nhe also deduces the convergence to a particular central configuration. \nThis hypothesis is always satisfied in the three-body problem.\nHowever,\na first counterexample with four bodies in the plane\nwas found by Palmore \\cite{Pal},\nthus allowing for the possibility of motions\nwith infinite spin (see Chenciner \\cite{Che1} p. 281).\n\nIn all cases, Chazy's Theorem prevents arbitrary limit shapes\nfor completely parabolic motions as well as for total collisions.\nIn this sense,\nlet us mention for instance the general result by Shub \\cite{Shu}\non the localisation of central configurations,\nshowing that they are isolated from the diagonals.\n\nWe should also mention that the confinement of the\nasymptotic configuration to the set of central configurations,\nboth for completely parabolic motions and for total collisions,\nextends to homogeneous potentials of degree $\\alpha\\in (-2,0)$.\nFor these potentials the mutual distances must grow like $t^{2\/(2-\\alpha)}$ in\nthe parabolic case, and must decay like $\\abs{t-t_0}^{2\/(2-\\alpha)}$\nin the case of a total collision at time $t=t_0$.\nOn the other hand,\nit is known that potentials giving rise to strong forces near collisions\ncan present motions of total collision with non-central asymptotic configurations.\nWe refer the reader to the comments on the subject by Chenciner in \\cite{Che3}\nabout the Jacobi-Banachiewitz potential, 
and to Arredondo et al. \\cite{ArPeChSt}\nfor related results on the dynamics of total collisions in the case of\nSchwarzschild and Manev potentials. \n\n\nLet us mention that there is another natural way to prove the existence of hyperbolic motions, \nusing the fact that the Newtonian force vanishes when all mutual distances diverge.\nWe could call these motions \\emph{almost linear}.\nThe construction is as follows.\nSuppose first that $(x_0,a)\\in \\Omega\\times\\Omega$\nis such that the half-line given by $\\bar{x}(t)=x_0+ta$, $t>0$,\nhas no collisions ($\\bar{x}(t)\\in\\Omega$ for all $t>0$).\nConsider now the motion $x(t)$ with initial condition\n$x(0)=x_0$ and $\\dot x(0)=\\alpha a$ for some positive constant $\\alpha$.\nIt is not difficult to prove that, for $\\alpha>0$ chosen large enough,\nthe trajectory $x(t)$ is defined for all $t>0$, and moreover,\nit is a hyperbolic motion with limit velocity $b\\in\\Omega$ close to $\\alpha a$.\nIn particular,\nthe limit shape of such a motion can be obtained\nas close as we want to the shape of $a$.\n\nThe previous construction is unsatisfactory for several reasons.\nFirst, we do not get exactly the desired limit shape but a close one.\nThis approximation can be made as good as we want,\nbut we lose control of the energy constant $h$ of the motion,\nwhose order of magnitude is that of $\\alpha^2$.\nSecondly,\nit is not possible to apply this method when the half-line\n$\\bar{x}$ has collisions.\nFor instance this is the case if we take $a=z_0-x_0$\nfor any choice of $z_0\\in E^N\\setminus\\Omega$.\nFinally,\neven if the homogeneity of the potential can be exploited\nto find a new hyperbolic motion\nwith a prescribed positive energy constant,\nand the same limit shape,\nwe lose control of the initial configuration.\nIndeed,\nif $x$ is a hyperbolic motion defined for all $t\\geq 0$\nwith energy constant $h$,\nthen the motion $x_\\lambda$ defined 
by\n$x_\\lambda(t)=\\lambda\\,x(\\lambda^{-3\/2}t)$\nis still hyperbolic with energy constant $\\lambda^{-1}h$.\nMoreover, the limit shapes of $x$ and $x_\\lambda$ are the same,\nbut $x_\\lambda(0)=\\lambda x(0)$, meaning that\nthe initial configuration is dilated by the factor $\\lambda$.\n\n\n\n\n\n\n\\subsection{Other expansive motions}\n\\label{s-other}\n\nHyperbolic motions are part of the family of\n\\emph{expansive motions}, which we define now.\nIn order to classify them,\nas well as for later use,\nwe summarize below a set of well-known facts\nabout the possible evolutions of the motions\nin the Newtonian $N$-body problem.\n\n\\begin{definition}\n[Expansive motion]\nA motion $x:[0,+\\infty)\\to \\Omega$ is said to be expansive\nwhen all the mutual distances diverge.\nThat is, when $r_{ij}(t)\\to+\\infty$ for all $i<j$.\n\\end{definition}\n\nIn what follows,\n$r(t)$ and $R(t)$ denote respectively the minimum\nand the maximum of the mutual distances $r_{ij}(t)$.\n\n\\begin{remark}\nIf $h>0$ and the center of mass is at rest,\nthen $R(t)>At$ for some constant $A>0$.\n\\end{remark}\n\n\n\\begin{theorem*}[1922, Chazy \\cite{Cha2} pp. 39 -- 49]\nLet $x(t)$ be a motion with energy constant $h>0$ and defined for all $t>t_0$.\n\n\\begin{enumerate}\n\\item[(i)] The limit\n\\[\\lim_{t\\to +\\infty}R(t)\\,r(t)^{-1}=L\\in [1,+\\infty]\\]\nalways exists.\n\n\\item[(ii)] If $L<+\\infty$ then there is a configuration $a\\in\\Omega$,\nand some function $P$, which is analytic in a neighbourhood of $(0,0)$,\nsuch that for every $t$ large enough we have\n\\[x(t)=ta-\\log(t)\\,\\nabla U(a)+P(u,v)\\]\nwhere $u=1\/t$ and $v=\\log(t)\/t$.\n\\end{enumerate}\n\\end{theorem*}\n\nAs Chazy pointed out, Poincar\u00e9 surprisingly made the mistake of\nomitting the $\\log(t)$ order term in his\n\\emph{``M\u00e9thodes Nouvelles de la M\u00e9canique C\u00e9leste''}.\n\nSubsequent advances in this subject were recorded much later,\nwhen Chazy's results on final evolutions were included\nin a more general description of motions.\nFrom this development we must recall the following theorems.\nNotice that none of them make assumptions\non the sign of the 
energy constant $h$.\n\n\\begin{theorem*}[1967, Pollard \\cite{Pol}]\nLet $x$ be a motion defined for all $t>t_0$.\nIf $r$ is bounded away from zero then we have that\n$R=O(t)$ as $t\\to +\\infty$.\nIn addition $R(t)\/t\\to +\\infty$ if and only if $r(t)\\to 0$.\n\\end{theorem*}\n\nThis leads to the following definition.\n\n\\begin{definition} A motion is said to be superhyperbolic when\n\\[\\limsup_{t\\to +\\infty}\\;R(t)\/t=+\\infty.\\]\n\\end{definition}\n\nA short time later it was proven that\neither the quotient $R(t)\/t\\to +\\infty$,\nor $R=O(t)$ and the system expansion can be described more accurately.\n\n\\begin{theorem*}[1976, Marchal-Saari \\cite{MarSaa}]\nLet $x$ be a motion defined for all $t>t_0$.\nThen either $R(t)\/t \\to +\\infty$ and $r(t)\\to 0$,\nor there is a configuration $a\\in E^N$ such that $x(t)=ta+O(t^{2\/3})$.\nIn particular, for superhyperbolic motions the quotient $R(t)\/t$ diverges.\n\\end{theorem*}\n\nOf course this theorem does not provide much information\nin some cases; for instance, if the motion is bounded then\nwe must have $a=0$.\nOn the other hand,\nit admits an interesting refinement concerning the\nbehaviour of the subsystems.\nMore precisely,\nwhen $R(t)=O(t)$ and the configuration $a$ given by the theorem\nhas collisions, the system decomposes naturally into subsystems,\nwithin which the distances between the bodies\ngrow at most like $t^{2\/3}$.\nConsidering the internal energy of each subsystem,\nMarchal and Saari (ibid., Theorem 2 and Corollary 4, pp. 165-166)\ngave a description of the possible dynamics\nthat can occur within the subsystems.\nFrom these results we can easily deduce the following.\n\n\\begin{theorem*}[1976, Marchal-Saari \\cite{MarSaa}]\nSuppose that $x(t)=ta+O(t^{2\/3})$ for some $a\\in E^N$,\nand that the motion is expansive.\nThen, for each pair $i<j$,\nwe have either $r_{ij}(t)\/t\\to a_{ij}>0$,\nor $r_{ij}(t)=O(t^{2\/3})$.\n\\end{theorem*}\n\nAccordingly, an expansive motion is said to be\n\\emph{hyperbolic} if the first alternative holds for every pair of bodies,\n\\emph{partially hyperbolic} if both alternatives occur,\nand \\emph{completely parabolic} if the second alternative holds for every pair.\nThe first two kinds of expansion require an energy constant $h_0>0$,\nwhile the third requires $h_0=0$.\n\nFinally,\nwe observe that Chazy's Theorem applies in the first two cases.\nIn these cases,\nthe limit shape of 
$x(t)$ is the shape of the configuration $a$\nand moreover,\nwe have $L<+\\infty$ if and only if $x$ is hyperbolic.\nOf course if $h_0>0$ and $L=+\\infty$ then\neither the motion is partially hyperbolic or it is not expansive.\n\n\n\n\n\n\n\n\n\\subsection{The geometric viewpoint}\n\\label{s-geom.view}\n\nWe explain now the geometric formulation\nand the geometrical meaning of this work\nwith respect to the Jacobi-Maupertuis metrics associated\nwith the positive energy levels.\nSeveral technical details concerning these metrics are given\nin Sect. \\ref{s-jm.dist}.\nThe boundary notions are also discussed in Sect. \\ref{s-busemann}.\nThe reader may prefer to postpone the reading of this section\nuntil the end.\n\nWe recall that for each $h\\geq 0$, the Jacobi-Maupertuis metric\nof level $h$ is a Riemannian metric defined\non the open set of configurations without collisions $\\Omega$.\nMore precisely,\nit is the metric defined by $j_h=2(h+U)\\,g_m$,\nwhere $g_m$ is the Euclidean metric in $E^N$\ngiven by the mass inner product.\nOur main theorem has a stronger version\nin geometric terms.\nActually Theorem \\ref{thm-princ} can be reformulated\nin the following way.\n\n\\begin{theorem}\\label{thm-princ.rays}\nFor any $h>0$, $p\\in E^N$ and $a\\in\\Omega$,\nthere is a geodesic ray of the Jacobi-Maupertuis metric of level $h$\nwith asymptotic direction $a$ and starting at $p$.\n\\end{theorem}\nThis formulation requires some explanation.\nThe Riemannian distance $d_h$ in $\\Omega$ is defined as usual\nas the infimum of the length functional among all the\npiecewise $C^1$ curves in $\\Omega$ joining two given points.\nWe will prove that $d_h$ can be extended\nto a distance $\\phi_h$ in $E^N$,\nwhich makes $(E^N,\\phi_h)$ a metric completion of $(\\Omega,d_h)$,\nand which we also call the Jacobi-Maupertuis distance.\nMoreover, we will prove that $\\phi_h$ is precisely the action\npotential defined in Sect. 
\\ref{s-var.setting}.\n\nThe minimizing geodesic curves can then be defined\nas the isometric immersions of compact intervals\n$[a,b]\\subset\\mathbb{R}$ within $E^N$.\nMoreover, we will say that a curve $\\gamma:[0,+\\infty)\\to E^N$\nis a geodesic ray\nfrom $p\\in E^N$, if $\\gamma(0)=p$ and each restriction\nto a compact interval is a minimizing geodesic.\nTo deduce this geometric version of our main theorem\nit will be enough to observe that the obtained\nhyperbolic motions can be reparametrized\ntaking the action as parameter to obtain geodesic rays.\n\nWe observe now the following interesting implication of\nChazy's Theorem.\n\\begin{remark}\n\\label{rmk-Chazy.implic}\nIf two given hyperbolic motions have the same asymptotic\ndirection, then they have a bounded difference.\nIndeed,\nif $x$ and $y$ are hyperbolic motions with the same asymptotic\ndirection, then the two unbounded terms\nof the Chazy's asymptotic development of $x$ and $y$ also agree.\n\\end{remark}\n\n\nWe recall that the Gromov boundary of a geodesic space\nis defined as the quotient set of the set of geodesic rays\nby the equivalence that identifies rays that are kept\nat bounded distance.\nFrom the previous remark,\nwe can deduce that two geodesic rays with asymptotic direction\ngiven by the same configuration $a\\in\\Omega$\ndefine the same point at the Gromov boundary.\n\n\\begin{notation}\nLet $\\phi_h:E^N\\times E^N\\to\\mathbb{R}^+$ be the Jacobi-Maupertuis\ndistance for the energy level $h\\geq 0$\nin the full space of configurations.\nWe will write $\\mathcal{G}_h$ for the corresponding Gromov boundary.\n\\end{notation}\n\nThe proof of the following corollary is given in Sect. 
\\ref{s-jm.dist}.\n\n\\begin{corollary}\\label{cor-Gr.boundary}\nIf $h>0$, then each class in $\\Omega_1=\\Omega\/\\mathbb{R}^+$\ndetermines a point in $\\mathcal{G}_h$ which is composed\nof all geodesic rays with asymptotic direction in this class.\n\\end{corollary}\n\n\nOn the other hand,\nif instead of the arc length we parametrize the geodesics\nby the dynamical parameter,\nthen it is natural to question the existence\nof non-hyperbolic geodesic rays.\nWe do not know if there are partially hyperbolic geodesic rays. \nNor do we know if a geodesic ray must be an expansive motion. \n\nIn what follows we denote by $\\norm{v}_h$\nthe norm of $v\\in T\\Omega$ with respect to the metric $j_h$,\nand by $\\norm{p}_h$ the dual norm of an element $p\\in T^*\\Omega$.\nIf $\\gamma:(a,b)\\to\\Omega$ is a geodesic\nparametrized by the arc length, then\n\\[\\norm{\\dot\\gamma(s)}_h^2=\n2(h+U(\\gamma(s)))\\,\\norm{\\dot\\gamma(s)}^2=1\\]\nfor all $s\\in (a,b)$.\nTaking into account that $U\\approx r^{-1}$ we see that\nthe parametrization of motions as geodesics\nleads to slowed evolutions during passages near collisions.\nWe also note that for expansive geodesics we have\n$\\norm{\\dot\\gamma(s)}\\to 1\/\\sqrt{2h}$.\n\nFinally we make the following observations about the\nHamilton-Jacobi equation that we will solve in the weak sense.\nFirst, equation (\\ref{HJh}), which explicitly reads\n\\[\\frac{1}{2}\\norm{d_xu}^2-U(x)=h\\,,\\]\ncan be written in geometric terms, precisely as the eikonal equation\n\\[\n\\norm{d_xu}_h=\\frac{1}{\\sqrt{2(h+U(x))}}\\,\\norm{d_xu}=1\n\\]\nfor the Jacobi-Maupertuis metric.\nOn the other hand,\nthe solutions will be obtained as limits of weak subsolutions,\nwhich can be viewed as $1$-Lipschitz functions for the\nJacobi-Maupertuis distance.\nWe will see that the set of viscosity subsolutions\nis the set of functions\n$u:E^N\\to\\mathbb{R}$ such that $u(x)-u(y)\\leq \\phi_h(x,y)$\nfor all $x,y\\in E^N$.\n\n\n\n\n\n\n\n\\section{Viscosity solutions of 
the Hamilton-Jacobi equation}\n\\label{s-HJ}\n\n\n\nThe \\emph{Hamiltonian} $H$ is defined over\n$T^*E^N\\simeq E^N\\times (E^*)^N$ as usual by\n\\[\nH(x,p)=\\frac{1}{2}\\norm{p}^2-U(x)\n\\]\ntaking the value $H(x,p)=-\\infty$\nwhenever the configuration $x$ has collisions.\nHere the norm is the dual norm with respect to the mass product,\nthat is,\nfor $p=(p_1,\\dots,p_N)\\in (E^*)^N$\n\\[\n\\norm{p}^2=\nm_1^{-1}\\,\\norm{p_1}^2+\\dots+m_N^{-1}\\,\\norm{p_N}^2\\,,\n\\]\nthus in terms of the positions of the bodies\nequation (\\ref{HJh}) becomes\n\\[H(x,d_xu)=\n\\sum_{i=1}^N\\frac{1}{2\\,m_i}\\norm{\\frac{\\partial u}{\\partial r_i}}^2-\n\\;\\sum_{i<j}\\frac{m_i\\,m_j}{r_{ij}}\\;=\\;h\\,.\\]\n\n\\subsection{The variational setting}\n\\label{s-var.setting}\n\nGiven $x,y\\in E^N$ and $\\tau>0$, let\n\\[\n\\mathcal{C}(x,y,\\tau)=\n\\set{\\gamma:[a,b]\\to E^N \\mid\\gamma(a)=\nx,\\,\\gamma(b)=y,\\,b-a=\\tau}\n\\]\nbe the set of absolutely continuous curves\ngoing from $x$ to $y$ in time $\\tau$,\nand\n\\[\n\\mathcal{C}(x,y)=\\bigcup_{\\tau>0}\\mathcal{C}(x,y,\\tau)\\,.\n\\]\nThe Lagrangian action of a curve $\\gamma\\in\\mathcal{C}(x,y,\\tau)$\nwill be denoted\n\\[\n\\mathcal{A}_L(\\gamma)=\\int_a^b L(\\gamma,\\dot\\gamma)\\,dt=\n\\int_a^b \\tfrac{1}{2}\\norm{\\dot\\gamma}^2+U(\\gamma)\\,dt\\,.\n\\]\n\nIt is well known that Tonelli's Theorem\non the lower semicontinuity of the action for convex Lagrangians\nextends to this setting.\nA proof can for instance be found in \\cite{daLMad} (Theorem 2.3).\nIn particular we have, for any pair of configurations $x,y\\in E^N$,\nthe existence of curves achieving the minimum value\n\\[\n\\phi(x,y,\\tau)=\n\\min\\set{\\mathcal{A}_L(\\gamma)\\mid \\gamma\\in\\mathcal{C}(x,y,\\tau)}\n\\]\nfor any $\\tau>0$.\nWhen $x\\neq y$ there are also curves reaching the minimum\n\\[\n\\phi(x,y)=\\min\\set{\\mathcal{A}_L(\\gamma)\\mid \\gamma\\in\\mathcal{C}(x,y)}\n=\\min\\set{\\phi(x,y,\\tau)\\mid \\tau>0}\\,.\n\\]\nIn the case $x=y$ we have\n$\\phi(x,x)=\\inf\\set{\\mathcal{A}_L(\\gamma)\\mid \\gamma\\in\\mathcal{C}(x,x)}=0$.\nWe call these functions on $E^N\\times E^N$ respectively\nthe 
\\emph{fixed time action potential} and the\n\\emph{free time} (or \\emph{critical}) \\emph{action potential}.\n\nAccording to Hamilton's principle of least action,\nif a curve $\\gamma:[a,b]\\to E^N$ is a minimizer of the Lagrangian\naction in $\\mathcal{C}(x,y,\\tau)$ then $\\gamma$ satisfies Newton's\nequations at every time $t\\in [a,b]$ in which $\\gamma(t)$\nhas no collisions, i.e. whenever $\\gamma(t)\\in\\Omega$.\n\nOn the other hand,\nit is easy to see that there are curves\nwith isolated collisions and finite action. \nThis phenomenon,\nalready noticed by Poincar\u00e9 in \\cite{Poi}, prevented the use\nof the direct method of the calculus of variations\nin the $N$-body problem for a long time. \n\nA major breakthrough came from Marchal,\nwho gave the main idea needed to prove the following theorem.\nComplete proofs of this and more general versions were\nestablished by Chenciner \\cite{Che1}\nand by Ferrario and Terracini \\cite{FerTer}.\n\n\\begin{theorem*}[2002, Marchal \\cite{Mar}]\nIf $\\gamma\\in\\mathcal{C}(x,y)$ is defined on some interval $[a,b]$,\nand satisfies $\\mathcal{A}_L(\\gamma)=\\phi(x,y,b-a)$,\nthen $\\gamma(t)\\in\\Omega$ for all $t\\in (a,b)$.\n\\end{theorem*}\nThanks to this advance,\nthe existence of countless periodic orbits has been established\nusing variational methods.\nAmong them, the celebrated three-body figure eight\ndue to Chenciner and Montgomery \\cite{CheMon}\nis undoubtedly the most representative example,\nalthough it was discovered somewhat before.\nMarchal's Theorem was also used to prove the nonexistence\nof entire free time minimizers \\cite{daLMad},\nor in geometric terms,\nthat the zero energy level has no straight lines.\nThe proof we provide below for our main result\ndepends crucially on Marchal's Theorem.\nOur results can thus be considered as a new application\nof Marchal's Theorem, this time at positive energy levels.\n\nWe must also define for $h>0$ the\n\\emph{supercritical action potential} as the 
function \n\\[\n\\phi_h(x,y)=\n\\inf\\set{\\mathcal{A}_{L+h}(\\gamma)\\mid \\gamma\\in\\mathcal{C}(x,y)}=\n\\inf\\set{\\phi(x,y,\\tau)+h\\tau\\mid \\tau>0}.\n\\]\n\nFor the reader familiar with the Aubry-Mather theory,\nthis definition is reminiscent of the supercritical\naction potentials used by Ma\u00f1\u00e9 to define the critical value\nof a Tonelli Lagrangian on a compact manifold. \n\nAs before we prove\n(see Lemma \\ref{lema-JM.geod.complet} below),\nnow for $h>0$,\nthat given any pair of distinct configurations $x,y\\in E^N$,\nthe infimum in the definition of $\\phi_h(x,y)$\nis achieved by some curve $\\gamma\\in\\mathcal{C}(x,y)$,\nthat is, we have $\\phi_h(x,y)=\\mathcal{A}_{L+h}(\\gamma)$.\nIt follows that if $\\gamma$ is defined in $[0,\\tau]$,\nthen $\\gamma$ also minimizes $\\mathcal{A}_L$ in $\\mathcal{C}(x,y,\\tau)$\nand by Marchal's Theorem we conclude that $\\gamma$\navoids collisions,\ni.e. $\\gamma(t)\\in\\Omega$ for every $t\\in (0,\\tau)$.\n\n\n\\subsubsection{Dominated functions and viscosity subsolutions}\n\nLet us fix $h>0$\nand take a $C^1$ subsolution $u$ of $H(x,d_xu)=h$,\nthat is, such that $H(x,d_xu)\\leq h$ for all $x\\in E^N$.\nNotice now that,\nsince for any absolutely continuous curve\n$\\gamma:[a,b]\\to E^N$ we have\n\\[\nu(\\gamma(b))-u(\\gamma(a))=\n\\int_a^b d_{\\gamma}u(\\dot\\gamma)\\,dt\\,,\n\\]\nby Fenchel's inequality we also have\n\\[\nu(\\gamma(b))-u(\\gamma(a))\\leq\n\\int_a^b L(\\gamma,\\dot\\gamma)+\nH(\\gamma,d_{\\gamma}u)\\;dt\\;\\leq\n\\;\\mathcal{A}_{L+h}(\\gamma)\\,.\n\\]\nTherefore we can say that if $u$ is a $C^1$ subsolution,\nthen\n\\[\nu(y)-u(x)\\leq\n\\mathcal{A}_{L+h}(\\gamma)\n\\]\nfor any curve\n$\\gamma\\in\\mathcal{C}(x,y)$.\nThis motivates the following definition.\n\n\\begin{definition}[Dominated functions]\nWe say that $u\\in C^0(E^N)$ is dominated by $L+h$,\nand we will denote it by $u\\prec L+h$,\nif we have\n\\[\nu(y)-u(x)\\leq\n\\phi_h(x,y)\\quad\\text{ for all }\\quad x,y\\in 
E^N.\n\\]\n\\end{definition}\n\nThus we know that $C^1$ subsolutions are dominated functions.\nWe prove now the well-known fact that dominated functions\nare indeed viscosity subsolutions.\n\n\\begin{proposition}\n\\label{prop-dom.are.visc.ssol}\nIf $u\\prec L+h$ then $u$ is a viscosity subsolution of (\\ref{HJh}).\n\\end{proposition}\n\n\\begin{proof} \nLet $u\\prec L+h$ and consider a test function $\\psi\\in C^1(E^N)$.\nAssume that $u-\\psi$ has a local maximum\nat some configuration $x_0\\in E^N$.\nTherefore,\nfor all $x$ in a neighbourhood of $x_0$ we have $\\psi(x_0)-\\psi(x) \\leq u(x_0)-u(x)$.\n\nOn the other hand,\nthe convexity and superlinearity of the Lagrangian\nimply that there is a unique $v\\in E^N$ such that\n$H(x_0,d_{x_0}\\psi)=d_{x_0}\\psi (v)-L(x_0,v)$.\nTaking any smooth curve $x:(-\\delta,0]\\to E^N$\nsuch that $x(0)=x_0$ and\n$\\dot x(0)=v$ we can write, for $t\\in (-\\delta,0)$,\n\\[\\frac{\\psi(x_0)-\\psi(x(t))}{-t}\\leq \\frac{u(x_0)-u(x(t))}{-t}\\leq\n\\;-\\frac{1}{t}\\;\\mathcal{A}_{L+h}\\left(x\\mid_{[t,0\\,]}\\right)\\]\nthus for $t\\to 0^-$ we get\n$d_{x_0}\\psi (v)\\leq L(x_0,v)+h$,\nthat is to say, $H(x_0,d_{x_0}\\psi)\\leq h$ as we had to prove.\n\\end{proof}\nActually, the converse can be proved.\nFor all that follows,\nwe will only need to consider dominated functions,\nand for this reason,\nit will no longer be necessary to manipulate test functions\nto verify the subsolution condition in the viscosity sense.\nHowever,\nfor the sake of completeness we give a proof of this converse.\n\nA first step is to prove that viscosity subsolutions\nare locally Lipschitz on the open, dense,\nand full measure set of configurations without collisions\n(for this we follow the book of\nBardi and Capuzzo-Dolcetta \\cite{BarCap}, Proposition 4.1, p. 
62).\n\n\\begin{lemma}\n\\label{lema-visc.subsol.locLip}\nThe viscosity subsolutions of (\\ref{HJh})\nare locally Lipschitz on $\\Omega$.\n\\end{lemma}\n\n\\begin{proof}\nLet $u\\in C^0(E^N)$ be a viscosity subsolution and\nlet $z\\in\\Omega$.\nWe take a compact neighbourhood $W$ of $z$\nin which the Newtonian potential is bounded,\ni.e. such that $W\\subset\\Omega$.\nThus our Hamiltonian is coercive on $T^*W$,\nmeaning that given $h>0$ we can choose $\\rho>0$ for which,\nif $\\norm{p}>\\rho$ and $w\\in W$ then $H(w,p)>h$.\n\nWe choose now $r>0$ such that\nthe open ball $B(z,3r)$ is contained in $W$.\nLet $M=\\max\\set{u(x)-u(y)\\mid\\;x,y\\in W}$\nand take $k>0$ such that $2kr>M$.\n\nWe take now any configuration $y\\in B(z,r)$ and we define,\nin the closed ball $\\overline{B}_y=\\overline{B}(y,2r)$,\nthe function $\\psi_y(x)=u(y)+k\\norm{x-y}$.\nWe will use the function $\\psi_y$ as a test function in the open set\n$B^*_y=B(y,2r)\\setminus\\set{y}$.\nWe observe first that $u(y)-\\psi_y(y)=0$ and that\n$u-\\psi_y$ is negative on the boundary of $\\overline{B}_y$.\nTherefore the maximum of $u-\\psi_y$\nis achieved at some interior point\n$x_0\\in B(y,2r)$.\n\nSuppose that $x_0\\neq y$.\nSince $\\psi_y$ is smooth on $B^*_y$,\nand $u$ is a viscosity subsolution,\nwe must have $H(x_0,d_{x_0}\\psi_y)\\leq h$.\nTherefore we must also have $k=\\norm{d_{x_0}\\psi_y}\\leq\\rho$.\n\nWe conclude that, if we choose $k>\\rho$ such that $2rk>M$,\nthen for any $y\\in B(z,r)$\nthe maximum of $u-\\psi_y$ in $\\overline{B}_y$\nis achieved at $y$, meaning that $u(x)-u(y)\\leq k\\norm{x-y}$\nfor all $x\\in\\overline{B}_y$.\nThis proves that $u$ is $k$-Lipschitz on $B(z,r)$.\n\\end{proof}\n\n\\begin{remark}\n\\label{rmk-vssol.are.subsol.ae}\nBy Rademacher's Theorem, we know that any viscosity subsolution\nis differentiable almost everywhere in the open set $\\Omega$.\nIn addition, at every point of differentiability $x\\in\\Omega$\nwe have $H(x,d_xu)\\leq h$.\nTherefore, since $\\Omega$ 
has full measure in $E^N$,\nwe can say that viscosity subsolutions satisfy $H(x,d_xu)\\leq h$\nalmost everywhere in $E^N$.\n\\end{remark}\n\n\\begin{remark}\nWe observe that the local Lipschitz constant $k$ we have obtained\nin the proof depends, a priori,\non the chosen viscosity subsolution $u$.\nWe will see that this is not really the case.\nThis fact will result immediately from the following proposition and\nTheorem \\ref{thm-phih.estim}.\n\\end{remark}\n\nWe can prove now that\nthe set of viscosity subsolutions of $H(x,d_xu)=h$\nand the set of dominated functions $u\\prec L+h$ coincide.\n\n\\begin{proposition}\n\\label{prop-visc.ssol.are.dom}\nIf $u$ is a viscosity subsolution of (\\ref{HJh}) then $u\\prec L+h$.\n\\end{proposition}\n\n\\begin{proof}\nLet $u:E^N\\to\\mathbb{R}$ be a viscosity subsolution.\nWe have to prove that\n\\[u(y)-u(x)\\leq \\mathcal{A}_{L+h}(\\gamma)\\quad\n\\text{ for all }x,y\\in E^N,\\;\\gamma\\in\\mathcal{C}(x,y)\\,.\\]\nWe start by showing the inequality for any segment\n$s(t)=x+t(y-x)$, $t\\in [0,1]$.\nNote first that in the case $y=x$ there is nothing to prove,\nsince the action is always positive.\nThus we can assume that $r=\\norm{y-x}>0$.\n\nWe know $H(x,d_xu)\\leq h$ is satisfied\non a full measure set $\\mathcal{D}\\subset E^N$\nin which $u$ is differentiable,\nsee Lemma \\ref{lema-visc.subsol.locLip}\nand Remark \\ref{rmk-vssol.are.subsol.ae}.\nAssuming that $s(t)\\in\\mathcal{D}$ for almost every $t\\in[0,1]$\nwe can write\n\\[\n\\frac{d}{dt}\\,u(s(t))=d_{s(t)}u\\,(y-x)\\quad\\text{ a.e. in }[0,1]\n\\]\nfrom which we deduce,\napplying Fenchel's inequality and integrating,\n\\[\nu(y)-u(x)\\leq\n\\int_0^1L(s(t),y-x)+H(s(t),d_{s(t)}u)\\;dt\\leq \\mathcal{A}_{L+h}(s)\\,.\n\\]\nThis assumption, however, may not be satisfied.\nMoreover,\nit could even happen that the whole segment\nlies outside the set $\\mathcal{D}$ in which the derivatives of $u$ exist. 
\nThis happens for instance if $x$ and $y$ are configurations\nwith collisions and with the same colliding bodies.\nHowever, Fubini's Theorem tells us that our assumption is verified\nfor almost every $y\\in S_r=\\set{y\\in E^N\\mid \\norm{y-x}=r}$.\nThen \n\\[\nu(y)-u(x)\\leq \\mathcal{A}_{L+h}(s)\\quad\\text{ for almost every }y\\in S_r\\,.\n\\]\nTaking into account that both $u$ and $\\mathcal{A}_{L+h}(s)$\nare continuous as functions of $y$,\nwe conclude that the previous inequality holds, in fact,\nfor all $y\\in S_r$.\n\nWe remark that the same argument applies\nto any segment with constant speed,\nthat is to say, to any curve $s(t)=x+tv$, $t\\in [a,b]$.\nConcatenating these segments we deduce that the inequality\nalso holds for any piecewise affine curve\n$p\\in\\mathcal{C}(x,y)$. The proof is then completed as follows.\n\nLet $\\gamma\\in\\mathcal{C}(x,y)$ be a curve such that\n$\\mathcal{A}_{L+h}(\\gamma)=\\phi_h(x,y)$.\nThe existence of such a curve is guaranteed by Lemma\n\\ref{lema-JM.geod.complet}.\nSince this curve is a minimizer of the Lagrangian action,\nMarchal's Theorem ensures that,\nif $\\gamma$ is defined on $[a,b]$,\nthen $\\gamma(t)\\in\\Omega$ for all $t\\in(a,b)$.\nIn consequence, the restriction of $\\gamma$\nto $(a,b)$ must be a true motion.\n\nSuppose that there are no collisions at the configurations\n$x$ and $y$.\nSince in this case $\\gamma$ is $C^1$ on $[a,b]$,\nwe can approximate it by a sequence of piecewise affine curves\n$p_n\\in\\mathcal{C}(x,y)$,\nin such a way that $\\dot p_n(t)\\to\\dot\\gamma(t)$\nuniformly on some full measure subset $D\\subset [a,b]$.\nIn order to be explicit, let us define for each $n>0$\nthe polygonal $p_n$ with vertices at the configurations\n$\\gamma(a+k(b-a)n^{-1})$ for $k=0,\\dots,n$.\nThen $D$ can be taken as the complement in $[a,b]$\nof the countable set $a+\\mathbb{Q}(b-a)$. 
\nTherefore, we have\n$u(y)-u(x)\\leq \\mathcal{A}_{L+h}(p_n)$ for all $n\\geq 0$,\nas well as\n\\[\n\\lim_{n\\to \\infty}\\mathcal{A}_{L+h}(p_n)=\n\\mathcal{A}_{L+h}(\\gamma)=\\phi_h(x,y)\\,.\n\\]\nThis implies that $u(y)-u(x)\\leq \\phi_h(x,y)$.\nIf there are collisions at $x$ or $y$,\nthen we apply what we have proved\nto the configurations without collisions\n$x_\\epsilon=\\gamma(a+\\epsilon)$ and\n$y_\\epsilon=\\gamma(b-\\epsilon)$,\nand we get the same conclusion\ntaking the limit as $\\epsilon\\to 0$. \nThis proves that $u\\prec L+h$.\n\\end{proof}\n\n\\begin{remark}\nThe use of Marchal's Theorem in the last proof\nseems to be required by the argument.\nIn fact, the argument works well for nonsingular Hamiltonians\nfor which it is known \\emph{a priori} that the corresponding\nminimizers are of class $C^1$. \n\\end{remark}\n\n\\begin{notation}\nWe will denote by $\\mathcal{S}_h$\nthe set of viscosity subsolutions of (\\ref{HJh}).\n\\end{notation}\n\nObserve that\nnot only have we proved that $\\mathcal{S}_h$ is precisely the set\nof dominated functions $u\\prec L+h$,\nbut also that $\\mathcal{S}_h$ agrees with the set of functions\nsatisfying $H(x,d_xu)\\leq h$ almost everywhere in $E^N$. 
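\n\nFor instance (we note this here only as an illustration, with $u_{x_0}$ an ad hoc notation), a simple family of dominated functions is obtained from the potential itself: for a fixed configuration $x_0\in E^N$, the function $u_{x_0}(x)=\phi_h(x_0,x)$ satisfies, by the triangle inequality for $\phi_h$, which follows at once by concatenating curves,\n\[\nu_{x_0}(y)-u_{x_0}(x)=\phi_h(x_0,y)-\phi_h(x_0,x)\leq \phi_h(x,y)\n\]\nfor all $x,y\in E^N$, so that $u_{x_0}\prec L+h$ and hence $u_{x_0}\in\mathcal{S}_h$.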
\n\n\\subsubsection{Estimates for the action potentials}\n\nWe give now an estimate for $\\phi_h$ which implies that\nthe viscosity subsolutions form an equicontinuous family\nof functions.\nTherefore, if we normalize subsolutions by imposing $u(0)=0$,\nthen according to the Ascoli's Theorem\nwe get to the compactness of the set of normalized subsolutions.\n\nThe estimate will be deduced from the basic estimates for\n$\\phi(x,y,\\tau)$ and $\\phi(x,y)$\nfound by the first author for homogeneous potentials\nand that we recall now.\nThey correspond in the reference\nto Theorems 1 \\& 2 and Proposition 9,\nconsidering that in the original formulation\nthe value $\\kappa=1\/2$ is for the Newtonian potential.\n\nWe will say that a given configuration $x=(r_1,\\dots,r_N)$\nis contained in a ball of radius $R>0$ if we have\n$\\norm{r_i-r_0}_E0$, then for any $\\tau>0$\n\\[\\phi(x,y,\\tau)\\leq \\;\n\\alpha_0\\; \\frac{\\;R^2}{\\tau}\\;+\\;\\beta_0\\;\\frac{\\;\\tau}{R}\\;.\\]\n\\end{theorem*}\n\nIf a configurations $y$ is close enough\nto a given configuration $x$,\nthe minimal radius of a ball containing both configurations\nis greater than $\\norm{x-y}$. 
\nHowever,\nthis result was successfully combined with an argument\nproviding suitable cluster partitions,\nin order to obtain the following theorem.\n\n\\begin{theorem*}[\\cite{Mad1}]\nThere are positive constants $\\alpha_1$ and $\\beta_1$ such that,\nif $x$ and $y$ are any two configurations, and $r>\\norm{x-y}$,\nthen for all $\\tau>0$\n\\begin{equation}\n\\tag{*}\n\\label{ineq-basic}\\phi(x,y,\\tau)\\leq \\;\n\\alpha_1\\; \\frac{\\;r^2}{\\tau}\\;+\\;\\beta_1\\;\\frac{\\tau}{r}\\;.\n\\end{equation}\n\\end{theorem*}\n\nNote that the right side of the inequality\nis continuous for $\\tau,r>0$.\nTherefore,\nwe can replace $r$ by $\\norm{x-y}$ whenever $x\\neq y$.\n\n\\begin{remark}\n\\label{rmk-bound.phixxt}\nIf $x=y$ then the upper bound (\\ref{ineq-basic})\nholds for every $r>0$.\nChoosing $r=\\tau^{2\/3}$, we obtain the upper bound\n$\\phi(x,x,\\tau)\\leq\\mu\\,\\tau^{1\/3}$, which holds for any $\\tau>0$,\nany $x\\in E^N$,\nand for the positive constant $\\mu=\\alpha_1+\\beta_1$.\n\\end{remark}\n \nTherefore we can now bound the critical potential.\nThe previous remark leads to $\\phi(x,x)=0$ for all $x\\in E^N$.\nOn the other hand,\nfor the case $x\\neq y$ we can bound $\\phi(x,y)$\nwith the bound for $\\phi(x,y,\\tau)$,\ntaking $r=\\norm{x-y}$ and $\\tau=\\norm{x-y}^{3\/2}$.\n\n\\begin{theorem*}\n[H\u00f6lder estimate for the critical action potential, \\cite{Mad1}]\nThere exists a positive constant $\\eta>0$\nsuch that for any pair of configurations\n$x,y \\in E^N$\n\\[\\phi(x,y)\\leq \\;\\eta\\;\\norm{x-y}^\\frac{1}{2}\\;.\\]\n\\end{theorem*}\n\nThese estimates for the action potentials were used\nfirst to prove the existence of parabolic motions\n\\cite{Mad1,MadVen} and were the starting point\nfor the study of free time minimizers \\cite{daLMad,Mad2},\nas well as of their associated Busemann functions,\nstudied by Percino and S\u00e1nchez \\cite{Per, PerSan},\nand later by Moeckel, Montgomery and S\u00e1nchez \\cite{MoMoSa}\nin the planar three-body 
problem.\n \nFor our current purposes,\nwe need to generalize the H\u00f6lder estimate\nof the critical action potential\nin order to also include supercritical potentials.\nAs expected, the upper bound we found\nis of the form $\\xi(\\norm{x-y})$,\nwhere $\\xi:[0,+\\infty)\\to\\mathbb{R}^+$\nis such that $\\xi(r)\\approx r^\\frac{1}{2}$\nfor $r\\to 0$ and $\\xi(r)\\approx r$ for $r\\to +\\infty$.\n\n\\begin{theorem}\n\\label{thm-phih.estim}\nThere are positive constants $\\alpha$ and $\\beta$ such that,\nif $x$ and $y$ are any two configurations and $h\\geq 0$, then\n\\[\n\\phi_h(x,y)\\leq \\;\n\\left(\\alpha\\norm{x-y}+h\\;\\beta\\norm{x-y}^2\\right)^{1\/2}\\;.\n\\]\n\\end{theorem}\n\n\\begin{proof}\nWe have to bound\n$\\phi_h(x,y)=\n\\inf\\set{\\phi(x,y,\\tau)+h\\tau\\mid\\tau>0}$.\nFix any two configurations $x$ and $y$ and let $r>\\norm{x-y}$.\nWe already know by (\\ref{ineq-basic})\nthat for any $\\tau>0$ we have\n\\begin{equation}\n\\tag{**}\n\\label{ineq-basic.h}\n\\phi(x,y,\\tau) + h\\tau\\;\\leq \\;\nA\\; \\frac{1}{\\tau}\\;+B\\;\\tau\n\\end{equation}\nwhere\n\\[A=\\alpha_1\\,r^2\\quad\\text{ and }\\quad B=\\beta_1\\,r^{-1} + h\\,,\\]\n$\\alpha_1$ and $\\beta_1$ being the positive constants of (\\ref{ineq-basic}).\nSince the minimal value of the right side of inequality\n(\\ref{ineq-basic.h})\nas a function of $\\tau$ is $2(AB)^{1\/2}$ we conclude that\n\\begin{eqnarray*}\n\\phi_h(x,y)&=&\\inf\\set{\\phi(x,y,\\tau)+h\\tau\\mid\\tau>0}\\\\\n&\\leq&\\left(\\alpha\\,r+h\\;\\beta\\,r^2\\right)^{1\/2}\n\\end{eqnarray*}\nfor\n$\\alpha=4\\,\\alpha_1\\beta_1$ and $\\beta=4\\,\\alpha_1$.\nBy continuity,\nthe last inequality also holds for $r=\\norm{x-y}$,\nas we wanted to prove.\n\\end{proof}\n\n\\begin{corollary} \\label{comp-visc-subsol}\n\\label{coro-visc.ssol.comp}\nThe set of viscosity subsolutions\n$\\mathcal{S}^0_h=\\set{u\\in\\mathcal{S}_h\\mid u(0)=0}$\nis compact for the topology of uniform convergence\non compact sets.\n\\end{corollary}\n\n\\begin{proof}\nBy Propositions 
\\ref{prop-dom.are.visc.ssol} and\n\\ref{prop-visc.ssol.are.dom}\nwe know that $u\\in\\mathcal{S}_h$ if and only if $u\\prec L+h$.\nThus by Theorem \\ref{thm-phih.estim} we have that,\nfor any $u\\in\\mathcal{S}_h$, and for all $x,y\\in E^N$, \n\\[u(x)-u(y)\\leq \\phi_h(x,y)\\leq \\xi(\\norm{x-y})\\]\nwhere $\\xi:[0,+\\infty)\\to\\mathbb{R}^+$ is given by\n$\\xi(\\rho)=\\left(\\alpha\\,\\rho+h\\,\\beta\\,\\rho^2\\right)^{1\/2}$.\n\nSince $\\xi$ is uniformly continuous,\nwe conclude that the family of functions $\\mathcal{S}_h$\nis indeed equicontinuous.\nMoreover, since the functions in $\\mathcal{S}^0_h$ vanish at $0$,\nequicontinuity also gives uniform bounds on compact sets.\nTherefore,\nthe compactness of $\\mathcal{S}^0_h$ is\na consequence of Ascoli's Theorem.\n\\end{proof}\n\n\n\n\\subsection{The Lax-Oleinik semigroup}\n\nWe recall that a solution of $H(x,d_xu)=h$\ncorresponds to a stationary solution\n$U(t,x)=u(x)-ht$ of the evolution equation\n\\[\\partial_tU+H(x,\\partial_xU)=0\\,,\\]\nfor which the Hopf-Lax formula reads\n\\[U(t,x)=\n\\inf\\set{u_0(y)+\\mathcal{A}_L(\\gamma)\\mid\ny\\in E^N,\\,\\gamma\\in\\mathcal{C}(y,x,t)}\\,.\\]\nIn a wide range of situations,\nthis formula provides the \\emph{unique}\nviscosity solution satisfying the initial condition $U(0,x)=u_0(x)$.\nUsing the action potential we can also write the formula as\n\\[U(t,x)=\\inf\\set{u_0(y)+\\phi(y,x,t)\\mid y\\in E^N}.\\]\nIf the initial data $u_0$ is bounded,\nthen $U(t,x)$ is clearly well defined and bounded.\nIn our case, we know that solutions will not be bounded,\nthus we need a condition ensuring that the function\n$y\\mapsto u_0(y)+\\phi(y,x,t)$ is bounded from below.\nAssuming $u_0\\prec L+h$ we have the lower bound\n\\[u_0(x)-ht \\leq u_0(y)+\\phi(y,x,t)\\]\nfor all $t>0$ and all $x,y\\in E^N$,\nbut this is in fact an equivalent formulation\nof the domination condition\n$u_0\\prec L+h$, that is to say $u_0\\in\\mathcal{S}_h$.\n\n\\begin{definition}\n[Lax-Oleinik semigroup] The backward\\footnote{\nThe \\emph{forward} semigroup is defined in a similar way,\nsee \\cite{Eva2}.\nThis other 
semigroup gives the opposite solutions\nof the reversed Hamiltonian $\\tilde{H}(x,p)=H(x,-p)$.\nIn our case the Hamiltonian is reversible,\nmeaning that $\\tilde{H}=H$.}\nLax-Oleinik semigroup is the map\n$T:[0,+\\infty)\\times \\mathcal{S}_h\\to\\mathcal{S}_h$,\ngiven by $T(t,u)=T_tu$, where\n\\[\nT_tu(x)=\\inf\\set{u(y)+\\phi(y,x,t)\\mid y\\in E^N}\n\\]\nfor $t>0$, and $T_0u=u$.\n\\end{definition}\n\nObserve that $u\\prec L+h$ if and only if\n$u\\leq T_tu+ht$ for all $t>0$.\nAlso,\nwe note that $T_tu-u\\to 0$ as $t\\to 0$, uniformly in $E^N$.\nThis is clear since for all $x\\in E^N$ and $t>0$ we have\n$-ht\\leq T_tu(x)-u(x)\\leq \\phi(x,x,t)\\leq \\mu\\,t^{1\/3}$,\nwhere the last inequality is justified by Remark\n\\ref{rmk-bound.phixxt}.\n\nIt is not difficult to see that $T$ defines an action on $\\mathcal{S}_h$,\nthat is to say, that the semigroup property $T_t\\circ T_s=T_{t+s}$\nis always satisfied.\nThus the continuity at $t=0$ propagates to the whole domain.\n\nOther important properties of this semigroup are the\n\\emph{monotonicity},\nthat is to say, that $u\\leq v$ implies $T_tu\\leq T_tv$,\nand the \\emph{commutation with constants},\nsaying that for every constant $k\\in\\mathbb{R}$,\nwe have $T_t(u+k)=T_tu+k$.\n\nThus,\nfor $u\\in\\mathcal{S}_h$ and $s,t>0$ we can write\n$T_su\\leq T_s(T_tu+ht)=T_t(T_su)+ht$,\nwhich implies that we have $T_su\\in\\mathcal{S}_h$ for all $s>0$.\n\n\\begin{definition}\n[Lax-Oleinik quotient semigroup]\nThe semigroup $(T_t)_{t\\geq 0}$\ndefines a semigroup $(\\hat{T}_t)_{t\\geq 0}$\non the quotient space $\\hat{\\mathcal{S}}_h=\\mathcal{S}_h\/\\mathbb{R}$,\ngiven by $\\hat{T}_t[u]=[T_tu]$.\n\\end{definition}\n\n\\begin{proposition}\n\\label{prop-LO.fixed.points}\nGiven $h\\geq 0$ and $u\\in\\mathcal{S}_h$ we have that\n$[u]\\in\\hat{\\mathcal{S}}_h$ is a fixed point of $(\\hat{T}_t)_{t\\geq 0}$\nif and only if\nthere is $h'\\in [0,h]$ such that $T_tu=u-h't$ for all $t\\geq 
0$.\n\\end{proposition}\n\n\\begin{proof}\nThe sufficiency of the condition is trivial.\nIt is enough then to prove that it is necessary.\nThat $[u]$ is a fixed point of $\\hat{T}$ means that we have\n$\\hat{T}_t[u]=[u]$ for all $t>0$.\nThat is to say, there is a function $c:\\mathbb{R}^+\\to\\mathbb{R}$ such that\n$T_tu=u+c(t)$ for each $t\\in\\mathbb{R}^+$.\nFrom the semigroup property, we can easily deduce that\nthe function $c(t)$ must be additive,\nmeaning that $c(t+s)=c(t)+c(s)$\nfor all $t,s\\geq 0$.\nMoreover,\nthe continuity of the semigroup implies the continuity of $c(t)$.\nAs is well known,\na continuous and additive function from $\\mathbb{R}^+$ to $\\mathbb{R}$ is linear,\ntherefore we must have $c(t)=c(1)t$.\nNow, since $u\\leq T_tu+ht$ for all $t\\in \\mathbb{R}^+$,\nwe get $0\\leq c(1)+h$.\nOn the other hand, since $u\\prec L-c(1)$\nand $\\mathcal{S}_h=\\emptyset$ for $h<0$, we must have $-c(1)\\geq 0$.\nWe conclude that $c(t)=-h't$ for some $h'\\in [0,h]$.\n\\end{proof}\n\n\n\\subsubsection{Calibrating curves and supersolutions}\n\nWe finish this section by relating the fixed points\nof the quotient Lax-Oleinik semigroup and\nthe viscosity supersolutions of (\\ref{HJh}).\nThis relationship is closely linked\nto the existence of certain minimizers,\nwhich will ultimately allow us to obtain\nthe hyperbolic motions we seek.\n\n\\begin{definition}\n[calibrating curves]\nLet $u\\in\\mathcal{S}_h$ be a given subsolution.\nWe say that a curve $\\gamma:[a,b]\\to E^N$\nis an \\emph{$h$-calibrating}\ncurve of $u$\nif $u(\\gamma(b))-u(\\gamma(a))=\\mathcal{A}_{L+h}(\\gamma)$.\n\\end{definition}\n\n\\begin{definition}\n[h-minimizers]\nA curve $\\gamma:[a,b]\\to E^N$\nis said to be an \\emph{$h$-minimizer} if\nit verifies $\\mathcal{A}_{L+h}(\\gamma)=\\phi_h(\\gamma(a),\\gamma(b))$.\n\\end{definition}\n\n\\begin{remark}\n\\label{rmk-hcalib.hmin}\nAs we have seen,\nthe fact that $u\\in\\mathcal{S}_h$ is characterized by $u\\prec L+h$.\nTherefore for all $x,y\\in E^N$ we 
have\n\\[\nu(x)-u(y)\\leq \\phi_h(x,y)\\leq \\mathcal{A}_{L+h}(\\gamma)\n\\]\nfor any $\\gamma\\in\\mathcal{C}(x,y)$.\nIt follows that every $h$-calibrating curve of $u$\nis an $h$-minimizer.\n\\end{remark}\n\nIt is easy to prove that restrictions of $h$-calibrating curves\nof a given subsolution $u\\in\\mathcal{S}_h$\nare themselves $h$-calibrating curves of $u$.\nThis is also true, and even more easy to see, for $h$-minimizers.\nBut nevertheless,\nthere is a property valid for the calibrating curves\nof a given subsolution but which is not satisfied in general\nby the minimizing curves.\nThe concatenation of two calibrating curves is again calibrating.\n\n\\begin{lemma}\n\\label{lema-concat.calib}\nLet $u\\in\\mathcal{S}_h$.\nIf $\\gamma_1\\in\\mathcal{C}(x,y)$ and $\\gamma_2\\in\\mathcal{C}(y,z)$\nare both $h$-calibrating curves of $u$,\nand $\\gamma\\in\\mathcal{C}(x,z)$ is a concatenation of\n$\\gamma_1$ and $\\gamma_2$,\nthen $\\gamma$ is also an $h$-calibrating curve of $u$.\n\\end{lemma}\n\n\\begin{proof}\nWe have $u(y)-u(x)=\\mathcal{A}_{L+h}(\\gamma_1)$,\nand $u(z)-u(y)=\\mathcal{A}_{L+h}(\\gamma_2)$.\nAdding both equations we get $u(z)-u(x)=\\mathcal{A}_{L+h}(\\gamma)$.\n\\end{proof}\n\nWe give now a criterion for a subsolution to be a viscosity solution.\nFrom here on, a curve defined on a noncompact interval will be said\n$h$-calibrating if all its restrictions to compact intervals are too.\nIn the same way we define $h$-minimizers\nover noncompact intervals.\n\nWe start by proving a lemma on calibrating curves of subsolutions.\n\\begin{lemma}\n\\label{lema-no.test.col}\nLet $u\\in\\mathcal{S}_h$, and let $\\gamma:[a,b]\\to E^N$ be an\n$h$-calibrating curve of $u$.\nIf $x_0=\\gamma(b)$ is a configuration with collisions,\nthen there is no Lipschitz function\n$\\psi$ defined on a neighbourhood of $x_0$\nsuch that $\\psi\\le u$ and $\\psi(x_0)=u(x_0)$.\n\\end{lemma}\n\n\\begin{proof}\nSince our system is autonomous,\nwe can assume without loss of 
generality that $b=0$.\nThus the $h$-calibrating property of $\\gamma$ says that\nfor every $t\\in [a,0]$\n\\[\n\\int_t^0 \\tfrac{1}{2}\\norm{\\dot\\gamma}^2\\,dt\\,+\n\\int_t^0 U(\\gamma)\\,dt\\,+ \\,h\\abs{t}=\n\\mathcal{A}_{L+h}(\\gamma\\mid_{[t,0]})=u(x_0)-u(\\gamma(t))\\,.\n\\]\nOn the other hand, if $\\psi\\leq u$ is a $k$-Lipschitz function\non a neighbourhood of $x_0$ such that $\\psi(x_0)=u(x_0)$\nthen we also have, for $t$ close enough to $0$,\n\\[\nu(x_0)-u(\\gamma(t))\\leq\n\\psi(x_0)-\\psi(\\gamma(t))\\leq\nk\\norm{\\gamma(t)-x_0}.\n\\]\nTherefore we also have\n\\[\n\\int_{t}^0\\norm{\\dot\\gamma}^2\\,dt\\leq\\,2k\\norm{\\gamma(t)-x_0}.\n\\]\nNow, applying Cauchy-Schwarz we can write\n\\[\n\\int_{t}^0\\norm{\\dot\\gamma}\\,dt\\leq\n\\abs{t}^{1\/2}\\left(\\int_{t}^0\\norm{\\dot\\gamma}^2\\,dt\\right)^{1\/2}\\]\nand thus we deduce that\n\\[\n\\norm{\\gamma(t)-x_0}^2\\leq\n\\left(\\int_{t}^0\\norm{\\dot\\gamma}\\,dt\\right)^2\\leq \n\\,2k\\norm{\\gamma(t)-x_0}\\,\\abs{t}\n\\]\nhence that\n\\[\n\\norm{\\gamma(t)-x_0}\\leq\n\\,2k\\abs{t}.\n\\]\nFinally, since\n\\[\n\\int_{t}^0U(\\gamma)\\,dt\\leq u(x_0)-u(\\gamma(t))\\leq\n\\,k\\norm{\\gamma(t)-x_0}\n\\]\nwe conclude that\n\\[\n\\int_{t}^0U(\\gamma)\\,dt\\leq\n\\,2k^2\\abs{t}.\n\\]\nTherefore, dividing by $\\abs{t}$ and taking the limit for $t\\to 0$\nwe get $U(x_0)\\leq 2k^2$.\nThis proves that $x_0$ has no collisions.\n\\end{proof}\n\n\\begin{proposition}\n\\label{prop-criterion.visc.sol}\nIf $u\\in\\mathcal{S}_h$ is a viscosity subsolution of (\\ref{HJh}), and\nfor each $x\\in E^N$ there is at least one $h$-calibrating curve\n$\\gamma:(-\\delta,0]\\to E^N$ with $\\gamma(0)=x$,\nthen $u$ is in fact a viscosity solution.\n\\end{proposition}\n\n\\begin{proof}\nWe only have to prove that $u$ is a viscosity supersolution.\nThus we assume that $\\psi\\in C^1(E^N)$ and $x_0\\in E^N$\nare such that $u-\\psi$ has a local minimum in $x_0$.\nWe must prove that $H(x_0,d_{x_0}\\psi)\\geq h$.\nFirst of all, we exclude 
the possibility that $x_0$\nis a configuration with collisions.\nTo do this, it suffices to apply Lemma \\ref{lema-no.test.col},\ntaking the locally Lipschitz function $\\psi+u(x_0)-\\psi(x_0)$.\n\nLet now $\\gamma:(-\\delta,0]\\to E^N$ with $\\gamma(0)=x_0$\nand $h$-calibrating.\nThus for $t\\in(-\\delta,0]$\n\\[\n\\int_t^0L(\\gamma,\\dot\\gamma)\\,dt\\,-\\,ht\n=u(x_0)-u(\\gamma(t))\n\\]\nand also, given that $x_0$ is a local minimum of $u-\\psi$,\nfor $t$ close enough to $0$\n\\[\nu(x_0)-u(\\gamma(t))\\leq\n\\psi(x_0)-\\psi(\\gamma(t))\\,.\n\\]\nSince $x_0\\in\\Omega$ and $\\gamma$ is a minimizer,\nwe know that $\\gamma$ can be extended beyond $t=0$\nas solution of Newton's equation.\nIn particular $v=\\dot\\gamma(0)$ is well defined,\nand moreover, using the previous inequality we find\n\\[\nd_{x_0}\\psi(v)=\n\\;\\lim_{t\\to 0^-}\\,\\frac{\\psi(x_0)-\\psi(\\gamma(t))}{-t}\\geq\nL(x_0,v)+h\n\\]\nwhich implies, by Fenchel's inequality,\nthat $H(x_0,d_{x_0}\\psi)\\geq h$.\n\\end{proof}\n\nThe following proposition complements the previous one.\nIt states that under a stronger condition,\nthe viscosity solution is in addition a fixed point\nof the quotient Lax-Oleinik semigroup.\n\n\\begin{proposition}\n\\label{prop-fixedLO.if.calib}\nLet $u\\in\\mathcal{S}_h$ be a viscosity subsolution of (\\ref{HJh}).\nIf for each $x\\in E^N$ there is\nan $h$-calibrating curve of $u$, say \n$\\gamma_x:(-\\infty,0]\\to E^N$,\nsuch that $\\gamma_x(0)=x$,\nthen $T_tu=u-ht$ for all $t\\geq 0$.\n\\end{proposition}\n\n\\begin{proof}\nFor each $x\\in E^N$, for $t\\geq 0$ we have\n\\[\nT_tu(x)-u(x)=\\inf\\set{u(y)-u(x)+\\phi(y,x,t)\\mid y\\in E^N} \n\\]\nthus it is clear that $T_tu(x)-u(x)\\geq -ht$\nsince we know that $u\\prec L+h$.\nOn the other hand,\ngiven that $\\gamma_x$ is an $h$-calibrating curve of $u$,\n\\[\nu(x)-u(\\gamma_x(-t))=\\phi(\\gamma_x(-t),x,t)+ht.\n\\]\nWriting $y_t=\\gamma_x(-t)$ we have that\n$u(y_t)-u(x)+\\phi(y_t,x,t)=-ht$\nand we conclude that $T_tu(x)-u(x)\\leq 
-ht$.\nWe have proved that $T_tu=u-ht$ for all $t\\geq 0$.\n\\end{proof}\n\n\\begin{remark}\n\\label{rmk-inverse.sens.lamin}\nThe formulation of the previous condition may be a little confusing,\nsince the calibrating curves are parametrized on negative intervals.\nHere the Lagrangian is symmetric,\nthus reversing the time of a curve always preserves the action.\nMore precisely, \ngiven an absolutely continuous curve $\\gamma:[a,b]\\to E^N$,\nif we define $\\tilde{\\gamma}$ on $[-b,-a]$\nby $\\tilde{\\gamma}(t)=\\gamma(-t)$,\nthen $\\mathcal{A}_L(\\tilde{\\gamma})=\\mathcal{A}_L(\\gamma)$.\n\nWe can reformulate the calibrating condition of the previous\nproposition in this equivalent way:\n\\emph{For each $x\\in E^N$,\nthere is a curve $\\gamma_x:[0,+\\infty)\\to E^N$ such that\n$\\gamma_x(0)=x$, and such that\n$u(x)-u(\\gamma_x(t))=\\mathcal{A}_{L+h}(\\gamma_x\\mid_{[0,t]})$\nfor all $t>0$}.\n\\end{remark}\n\\begin{remark}\nThe hypothesis of\nProposition \\ref{prop-fixedLO.if.calib}\nis stronger than the hypothesis of\nProposition \\ref{prop-criterion.visc.sol};\nverifying it requires extending calibrating curves\nto unbounded intervals.\nThis is exactly what we do in the proof of\nTheorem \\ref{thm-horofuns.have.lamin} below.\n\\end{remark}\n\n\n\n\\section{Ideal boundary of a positive energy level}\n\n\nThis section is devoted to the construction of global viscosity\nsolutions for the Hamilton-Jacobi equations (\\ref{HJh}).\nThe method is quite similar to that developed by Gromov\nin \\cite{Gro} to compactify locally compact metric spaces\n(see also \\cite{BaGrSch}, chap. 
3).\n\n\\subsection{Horofunctions as viscosity solutions}\n\nThe underlying idea giving rise to the construction of horofunctions\nis that each point in a metric space $(X,d)$ can be identified \nwith the distance function to that point.\nMore precisely,\nthe map $X\\to C(X)$ which associates to each point $x\\in X$\nthe function $d_x(y)=d(y,x)$\nis an embedding such that for all $x_0,x_1\\in X$\nwe have $\\max_{y\\in X}\\abs{d_{x_0}(y)-d_{x_1}(y)}=d(x_0,x_1)$.\n\nIt is clear that\nany sequence of functions $d_{x_n}$ diverges if $x_n\\to\\infty$,\nthat is to say,\nif the sequence $x_n$ escapes from any compact subset of $X$. \nHowever, for a noncompact space $X$,\nthe induced embedding of $X$ into the quotient space\n$C(X)\/\\mathbb{R}$ has in general an image with a nontrivial boundary.\nThis boundary can thus be considered\nas an ideal boundary of $X$.\n\nHere the metric space will be $(E^N,\\phi_h)$ with $h>0$,\nand the set of continuous functions $C^0(E^N)$\nwill be endowed with\nthe topology of the uniform convergence on compact sets.\nInstead of looking at equivalence classes of functions,\nwe will take as the representative of each class\nthe unique one vanishing at $0\\in E^N$. 
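\n\nA model example to keep in mind is the Euclidean one.\nIf $X=\\mathbb{R}^n$ with the usual distance,\nand $p_k=\\lambda_k\\,a$ with $\\norm{a}=1$ and $\\lambda_k\\to+\\infty$,\nthen for each $x\\in\\mathbb{R}^n$ we have\n\\[\nd_{p_k}(x)-d_{p_k}(0)=\n\\norm{x-\\lambda_k a}-\\lambda_k=\n-\\langle x,a\\rangle+O(\\lambda_k^{-1})\\,,\n\\]\nso the limit function is the linear form $x\\mapsto -\\langle x,a\\rangle$,\nwhose level sets are the hyperplanes orthogonal\nto the direction of escape.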
\n\n\\begin{definition}\n[Ideal boundary]\nWe say that a function $u\\in C^0(E^N)$ is in the ideal boundary\nof level $h$ if there is a sequence of configurations $p_n$,\nwith $\\norm{p_n}\\to +\\infty$ and such that for all $x\\in E^N$\n\\[\nu(x)=\\lim_{n\\to\\infty}\\phi_h(x,p_n)-\\phi_h(0,p_n).\n\\]\nWe will denote $\\mathcal{B}_h$ the set of all these functions,\nthat we will also call horofunctions.\n\\end{definition}\n\nThe first observation is that $\\mathcal{B}_h\\neq\\emptyset$\nfor any value of $h\\geq 0$.\nThis can be seen as a consequence of the estimate for the\npotential $\\phi_h$ we proved,\nsee Theorem \\ref{thm-phih.estim}.\n\nActually for any $p\\in E^N$,\nthe function $x\\to \\phi_h(x,p)-\\phi_h(0,p)$ is in $\\mathcal{S}^0_h$,\nthe set of viscosity subsolutions vanishing at $x=0$.\nSince by Corollary \\ref{coro-visc.ssol.comp}\nwe know that $\\mathcal{S}^0_h$ is compact,\nfor any sequence of configurations $p_n$\nsuch that $\\norm{p_n}\\to +\\infty$\nthere is a subsequence\nwhich defines a function in $\\mathcal{B}_h$ as above.\n\nIt is also clear that $\\mathcal{B}_h\\subset\\mathcal{S}_h$.\nFunctions in $\\mathcal{B}_h$ are limits of functions in $\\mathcal{S}_h$,\nand this set is closed in $E^N$\neven for the topology of pointwise convergence.\nBut, since we already know that\nthe family $\\mathcal{S}_h$ is equicontinuous,\nthe convergence is indeed uniform on compact sets.\n\n\\begin{notation}\nWhen the value of $h$ is understood,\nwe will denote $u_p$ the function\ndefined by $u_p(x)=\\phi_h(x,p)$\nwhere $p$ is a given configuration.\n\\end{notation}\n\nOne fact that should be clarifying is that for any $p\\in E^N$,\nthe subsolution given by $u_p$ fails to be a viscosity solution\nprecisely at $x=p$.\nIf $x\\neq p$, then there is a minimizing curve of $\\mathcal{A}_{L+h}$\nin $\\mathcal{C}(p,x)$\n(see Lemma \\ref{lema-JM.geod.complet} below),\nand clearly this curve is $h$-calibrating of $u_p$.\nOn the other hand, there are no 
$h$-calibrating curves of $u_p$\ndefined over an interval $(-\\delta,0]$\nand ending at $x=p$.\nThis is because $u_p\\geq 0$, $u_p(p)=0$,\nand $h$-calibrating curves, like $h$-minimizers,\nhave strictly increasing action.\nActually, this property of the $u_p$ functions occurs\nfor all energy levels greater than or equal to the critical one,\nin a wide class of Lagrangian systems.\nThe simplest case to visualize is surely\nthe case of absence of potential energy in a Euclidean space,\nin which we have $u_p(x)=\\sqrt{2h}\\,\\norm{x-p}$ and\nits $h$-calibrating curves are segments of the half-lines\nemanating from $p$, traveled at constant speed (gradient curves).\n\nThis suggests that the horofunctions must be viscosity solutions,\nwhich is what we will prove now.\n\n\\begin{theorem}\n\\label{thm-horofuns.are.viscsol}\nGiven $u\\in\\mathcal{B}_h$ and $r>0$ there is,\nfor each $x\\in E^N$, some $y\\in E^N$ with $\\norm{y-x}=r$,\nand a curve $\\gamma_x\\in\\mathcal{C}(y,x)$ such that\n$u(x)-u(y)=\\mathcal{A}_{L+h}(\\gamma_x)$.\nIn particular, every function $u\\in\\mathcal{B}_h$\nis a global viscosity solution of (\\ref{HJh}).\n\\end{theorem}\n\n\\begin{proof}\nLet $u\\in\\mathcal{B}_h$, that is to say\n$u=\\lim_n(u_{p_n}-u_{p_n}(0))$\nfor some sequence of configurations $p_n$ such that\n$\\norm{p_n}\\to +\\infty$, and $u_{p_n}(x)=\\phi_h(x,p_n)$.\n\nLet $x\\in E^N$ be any configuration, and fix $r>0$.\nUsing Lemma \\ref{lema-JM.geod.complet} we get,\nfor each $n>0$, a curve $\\gamma_n\\in\\mathcal{C}(p_n,x)$\nsuch that $\\mathcal{A}_{L+h}(\\gamma_n)=\\phi_h(p_n,x)$.\nEach curve $\\gamma_n$ is thus\nan $h$-calibrating curve of $u_{p_n}$.\n\nIf $\\norm{p_n-x}>r$,\nthen the curve $\\gamma_n$ must pass through a configuration\n$y_n$ with $\\norm{y_n-x}=r$.\nExtracting a subsequence if necessary,\nwe may assume that this is the case for all $n>0$,\nand that $y_n\\to y$, with $\\norm{y-x}=r$.\nSince the arc of $\\gamma_n$ joining\n$y_n$ to $x$ also $h$-calibrates $u_{p_n}$ we can 
write\n\\[\nu_{p_n}(x)-u_{p_n}(y_n)=\\phi_h(y_n,x)\n\\]\nfor all $n$ big enough. We conclude that\n\\[\nu(x)-u(y)=\\lim_{n\\to\\infty}u_{p_n}(x)-u_{p_n}(y)=\\phi_h(y,x)\n\\]\nwhich proves the first statement.\nThe second one follows now from the criterion for\nviscosity solutions given in Proposition\n\\ref{prop-criterion.visc.sol}.\t\n\\end{proof}\nOur next goal is to prove that horofunctions are actually\nfixed points of the quotient Lax-Oleinik semigroup.\nWe will achieve this goal by showing the existence\nof calibrating curves allowing the use of\nProposition \\ref{prop-fixedLO.if.calib}.\nThese calibrating curves will be the key to the proof of\nthe existence of hyperbolic motions.\n\nThanks to the previous theorem\nwe can build maximal calibrating curves.\nThen, Marchal's Theorem will allow us to assert that\nthese curves are in fact true motions of the $N$-body problem.\nNext we have to prove\nthat these motions\nare defined over unbounded above time intervals,\nthat is to say,\nwe must exclude the possibility of\ncollisions or pseudocollisions.\nIt is for this reason that we will also invoke\nthe famous von Zeipel's theorem\\footnote{\nThis theorem had no major impact on the theory\nuntil it was rediscovered after at least half a century later,\nand proved to be essential for the understanding\nof pseudocollision singularities, see for instance Chenciner's\nBourbaki seminar \\cite{Che2}.\nAmong other proofs, there is a modern version due to McGehee \n \\cite{McG} of the proof originally outlined by von Zeipel.}\n that we recall now.\n\n\\begin{theorem*}\n[1908, von Zeipel \\cite{Zei}]\nLet $x:(a,t^*)\\to E^N$ be a maximal solution of the\nNewton's equations of the $N$-body problem with $t^*<+\\infty$.\nIf $\\norm{x(t)}$ is bounded in some neighbourhood of $t^*$, then\nthe limit $\\,x_c=\\lim_{t\\to t^*}x(t)$\nexists and the singularity is therefore due to collisions. 
\n\\end{theorem*}\n\n\n\\begin{theorem}\n\\label{thm-horofuns.have.lamin}\nIf $u\\in\\mathcal{B}_h$ then for each $x\\in E^N$ there is a curve\n$\\gamma_x:[0,+\\infty)\\to E^N$ with $\\gamma_x(0)=x$,\nand such that for all $t>0$\n\\[\nu(x)-u(\\gamma_x(t))=\\mathcal{A}_{L+h}(\\gamma_x\\mid_{[0,t]}).\n\\]\nIn particular,\nevery function $u\\in\\mathcal{B}_h$ satisfies $T_tu=u-ht$ for all $t>0$.\n\\end{theorem}\n\n\\begin{proof}\nLet us fix a configuration $x\\in E^N$.\nBy Theorem \\ref{thm-horofuns.are.viscsol} we know that\n$u$ has at least one $h$-calibrating curve\n$\\gamma:(-\\delta,0]\\to E^N$ such that $\\gamma(0)=x$.\nBy application of Zorn's Lemma\nwe get a maximal $h$-calibrating curve\nof the form $\\gamma:(t^*,0]\\to E^N$ with $\\gamma(0)=x$.\nWe will prove that $t^*=-\\infty$,\nand thus the required curve can be defined on $[0,+\\infty)$ by\n$\\gamma_x(t)=\\gamma(-t)$.\n\nSuppose by contradiction that $t^*>-\\infty$.\nSince $\\gamma$ is an $h$-minimizing curve, we know that its\nrestriction to $(t^*,0)$ is a true motion with energy constant $h$.\nEither the curve can be extended as a motion for values\nless than $t^*$, or it presents a singularity at $t=t^*$.\nIn the case of singularity, we have at $t=t^*$ either a collision,\nor a pseudocollision.\nAccording to von Zeipel's Theorem,\nin the pseudocollision case we must have\n$\\sup\\set{\\norm{\\gamma(t)}\\mid t\\in (t^*,0]}=+\\infty$.\n\nSuppose that the limit $y=\\lim_{t\\to t^*}\\gamma(t)$ exists.\nThen by Theorem \\ref{thm-horofuns.are.viscsol} we can choose\na calibrating curve $\\tilde{\\gamma}$ defined on $(-\\delta,0]$\nand such that $\\tilde{\\gamma}(0)=y$.\nThus the concatenation of $\\tilde{\\gamma}$ with $\\gamma$\ndefines a calibrating curve $\\gamma^+$ defined on\n$(t^*-\\delta,0]$ and such that $\\gamma^+(0)=x$.\nBut this contradicts the maximality of $\\gamma$.\n\nOn the other hand, if we suppose that $\\norm{\\gamma(t)}$\nis unbounded, we can choose a sequence $y_n=\\gamma(t_n)$\nsuch 
that $\\norm{y_n-x}\\to +\\infty$.\nLet us define $A_n=\\mathcal{A}_L(\\gamma\\mid_{[t_n,0]})$.\n\nA standard way to obtain a lower bound for $A_n$\nis by neglecting the potential term which is positive.\nThen by using the Cauchy-Schwarz inequality we obtain that\nfor all $n>0$ we have $2\\abs{t_n}A_n\\geq \\norm{y_n-n}^2$.\nSince $\\gamma$ is $h$-minimizing we deduce that\n\\[\n\\phi_h(y_n,x)\\geq\n\\frac{\\norm{y_n-x}^2}{2\\abs{t_n}}+h\\abs{t_n}\n\\]\nfor all $n>0$.\nSince $\\norm{y_n-x}\\to +\\infty$ and $t_n\\to t^*>-\\infty$\nwe get a contradiction with the upper estimate given by Theorem\n\\ref{thm-phih.estim}.\nIndeed that theorem implies that\n$\\phi_h (y_n,x)$ is bounded above by a function which is\nof order $O(\\norm{y_n - x})$ as $\\norm{y_n-x} \\to +\\infty$, \nwhich contradicts the displayed inequality.\n\nThe last assertion is a consequence of\nProposition \\ref{prop-fixedLO.if.calib} and\nRemark \\ref{rmk-inverse.sens.lamin}.\n\\end{proof}\n\n\n\n\\subsection{Busemann functions}\n\\label{s-busemann}\n\nWe recall that a length space $(X,d)$ is say to be a\n\\emph{geodesic space}\nif the distance between any two points is realized as the length\nof a curve joining them.\nA \\emph{ray} in $X$ is an isometric embedding\n$\\gamma:[0,+\\infty)\\to X$.\nAs we already say in Sect. 
\\ref{s-geom.view},\nthe Gromov boundary of a geodesic space is defined as\nthe quotient space of the set of rays of $X$ under the\nequivalence relation: $\\gamma\\sim\\gamma'$ if and only if the\nfunction given by $d(\\gamma(t),\\gamma'(t))$ on $[0,+\\infty)$\nis bounded.\n\nThere is a natural way to associate a horofunction to each ray.\nLet us write $d_p$ for the function measuring the distance to\nthe point $p\\in X$, that is, $d_p(x)=d(x,p)$.\nOnce $\\gamma$ is fixed, we define\n\\[\nu_t(x)=d_{\\gamma(t)}(x)-d_{\\gamma(t)}(\\gamma(0))\\,,\n\\quad\\text{ and }\\quad\nu_\\gamma=\\lim_{t\\to\\infty}u_t\\,.\n\\]\n\nThese horofunctions $u_\\gamma$ are called\n\\emph{Busemann functions}\nand are well defined because of\nthe characteristic geodesic property of rays.\nMore precisely, for any ray $\\gamma$ and for all $0\\leq s\\leq t$,\nwe have $d(\\gamma(t),\\gamma(s))=t-s$,\nhence $u_t\\leq u_s$.\nMoreover, \nit is also clear that $u_t\\geq -d_{\\gamma(0)}$,\nwhich implies that $u_\\gamma\\ge -d_{\\gamma(0)}$.\nWe also note that $u_\\gamma=\\lim_n u_{t_n}$ whenever\n$(t_n)_{n>0}$ is a sequence such that $t_n\\to\\infty$.\n\nIt is well known that under some hypotheses on $X$ we have that,\nfor any two equivalent rays $\\gamma\\sim\\gamma'$,\nthe corresponding Busemann functions are the same\nup to a constant, that is $[u_\\gamma]=[u_{\\gamma'}]$.\nTherefore in these cases a map is defined\nfrom the Gromov boundary into the ideal boundary,\nand it is thus natural to ask about the injectivity and the surjectivity\nof this map.\nHowever, the following simple and enlightening example\nshows a geodesic space in which there are equivalent rays\n$\\gamma\\sim\\gamma'$ for which\n$[u_\\gamma]\\neq [u_{\\gamma'}]$.\n\n\\begin{example}\n[The infinite ladder]\nWe define $X\\subset\\mathbb{R}^2$ as the union of the two straight lines\n$\\mathbb{R}\\times\\set{-1,1}$ with the segments $\\mathbb{Z}\\times [-1,1]$,\nsee figure \\ref{ladder}. 
\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=1]{ladder.pdf}\n\\caption{The infinite ladder.} \n\\label{ladder}\n\\end{figure}\n\nWe endow $X$ with the length distance induced by the standard\nmetric in $\\mathbb{R}^2$.\nIt is not difficult to see that every ray in $X$\nis eventually of the form\n$x(t)=(\\pm t+c, \\pm 1)$.\nEach possibility for the two signs determines one of the four\ndifferent Busemann functions which indeed compose the\nideal boundary.\nTherefore,\n there are four points in the ideal boundary of $X$,\nwhile there is only two classes of rays\ncomposing the Gromov boundary of $X$.\n\\end{example}\n\nLet us return to the context of the $N$-body problem,\nthat is to say,\nlet us take as metric space the set of configurations $E^N$,\nwith the action potential $\\phi_h$ as the distance function.\nActually $(E^N,\\phi_h)$ becomes a length space, and $\\phi_h$\ncoincides with the length distance of the Jacobi-Maupertuis\nmetric when restricted to $\\Omega$.\nProofs of all these facts are given in Sect. 
\\ref{s-jm.dist}.\nWe are interested in the study of the ideal and Gromov\nboundaries of this space,\nin particular we need to understand the rays in\nthis space having prescribed asymptotic direction.\nAs we will see, they will be found as calibrating curves\nof horofunctions in a special class.\n\n\\begin{definition}\n[Directed horofunctions]\nGiven a configuration $a\\neq 0$ we define the set\nof horofunctions directed by $a$ as the set\n\\[\n\\mathcal{B}_h(a)=\n\\set{u\\in\\mathcal{B}_h\\mid\nu=\\lim_n(u_{p_n}-u_{p_n}(0))\\,,\\;\np_n=\\lambda_na+o(\\lambda_n),\\;\n\\lambda_n\\to +\\infty}.\n\\]\n\\end{definition}\n\\begin{remark}\nTheorem \\ref{thm-phih.estim} implies,\nin a manner identical to the proof of\nCorollary \\ref{comp-visc-subsol},\nthat $\\mathcal{B}_h(a)\\neq\\emptyset$.\n\\end{remark}\n \n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=1.1]{HyperKepler2.pdf}\n\\caption{Calibrating curves of a hyperbolic Busemann function\n$u(x)=\\lim_n (\\phi_h(x,na)-\\phi_h(0,na))$ in the Kepler problem.} \n\\label{HyperKepler2}\n\\end{figure}\n\nThe following theorem is the key\nfor the proof of Theorem \\ref{thm-princ}\nand its proof is given in Sect. 
\\ref{s-proofs.main}.\n\n\\begin{theorem}\n\\label{thm-calib.direct.horo}\nLet $a\\in\\Omega$ and $u\\in\\mathcal{B}_h(a)$.\nIf $\\gamma:[0,+\\infty)\\to E^N$ satisfies\n\\[\nu(\\gamma(0))-u(\\gamma(t))=\\mathcal{A}_{L+h}(\\gamma\\mid_{[0,t]})\n\\]\nfor all $t>0$,\nthen $\\gamma$ is a hyperbolic motion of energy $h$\nwith asymptotic direction $a$.\n\\end{theorem}\n\nWe can thus deduce the following corollary,\nwhose proof is a very easy application of Chazy's Theorem on\nhyperbolic motions, see Remark \\ref{rmk-Chazy.implic}.\n\\begin{corollary}\n\\label{coro-calib.bus.equiv}\nIf $a\\in\\Omega$ and $u\\in \\mathcal{B}_h(a)$ then\nthe distance between any two $h$-calibrating curves for $u$\nis bounded on their common domain.\n\\end{corollary}\n\nWe can also apply Theorem \\ref{thm-calib.direct.horo} to deduce\nthat calibrating curves of a hyperbolic Busemann function are\nmutually asymptotic hyperbolic motions.\n\\begin{corollary}\n\\label{coro-calib.of.busemann}\nIf $\\gamma$ is a hyperbolic $h$-minimizer,\nand $u_\\gamma$ its associated Busemann function,\nthen all the calibrating curves of $u_\\gamma$ are hyperbolic\nmotions with the same limit shape and direction as $\\gamma$.\n\\end{corollary}\n\n\\begin{proof}\nSince $\\gamma$ is hyperbolic, we know that\nthere is a configuration without collisions $a\\in\\Omega$\nsuch that $\\gamma(t)=ta+o(t)$ as $t\\to +\\infty$.\nTaking the sequence \n$p_n=\\gamma(n)$ we have that $p_n=na+o(n)$,\nand also that\n\\[\nu_\\gamma-u_\\gamma(0)=\n\\lim_{n\\to +\\infty}[u_{p_n}-u_{p_n}(0)].\n\\]\nThis implies that $u_\\gamma-u_\\gamma(0)\\in\\mathcal{B}_h(a)$,\nhence that $u_\\gamma$ is a viscosity solution and\nmoreover, Theorem \\ref{thm-calib.direct.horo} says that\nthe calibrating curves of $u_\\gamma$ are all of the form $ta+o(t)$.\nOn the other hand, clearly $\\gamma$ calibrates $u_\\gamma$\nsince for any $0\\leq s \\leq t$ we have 
that\n\\[\nu_{\\gamma(t)}(\\gamma(s))-u_{\\gamma(t)}(\\gamma(0))=\n-\\phi_h(\\gamma(0),\\gamma(s)),\n\\]\nwhich in turn implies, taking the limit for $t\\to +\\infty$, that\n\\[\nu_\\gamma(\\gamma(0))-u_\\gamma(\\gamma(s))=\n-u_\\gamma(\\gamma(s))=\n\\phi_h(\\gamma(0),\\gamma(s)).\\]\n\\end{proof}\n\n\n\n\\section{Proof of the main results on hyperbolic motions}\n\n\n\nThis part of the paper contains the proofs that so far it has been\npostponed for different reasons.\nIn the first part we deal with several lemmas and technical results,\nafter which we complete the proof of the main results in Sect.\n\\ref{s-proofs.main}.\n\n\n\\subsection{Chazy's Lemma}\n\\label{s-Chazy.lema}\n\nThe first lemma that we will prove states that the set\n$\\mathcal{H}^+\\subset T\\Omega$ of initial conditions in the\nphase space given rise to hyperbolic motions is an open set.\nMoreover,\nit also says that the map defined in this set\nwhich gives the asymptotic velocity in the future\nis continuous. \nThis is precisely what in Chazy's work appears as\n\\emph{continuit\u00e9 de l'instabilit\u00e9}.\nWe give a slightly more general version for homogeneous\n potentials of degree $-1$,\nbut the proof works the same for potentials\nof negative degree in any Banach space.\n\nIntuitively what happens is that,\nif an orbit is sufficiently close to some given hyperbolic motion,\nthen after some time the bodies will be so far away each other,\nthat the action of the gravitational forces\nwill not be able to perturb their velocities too much.\n\n\\begin{lemma}\n\\label{lema-cont.limitshape}\nLet $U:E^N\\to\\mathbb{R}\\cup\\set{+\\infty}$ be a homogeneous potential\nof degree $-1$ of class $C^2$ on the open set\n$\\Omega=\\set{x\\in E^N\\mid U(x)<+\\infty}$.\nLet $x:[0,+\\infty)\\to\\Omega$ be a given solution of\n$\\ddot x=\\nabla U(x)$ satisfying $x(t)=ta+o(t)$\nas $t\\to +\\infty$ with $a\\in\\Omega$.\n\nThen we have:\n\\begin{enumerate}\n\\item[(1)] The solution $x$ has asymptotic velocity 
$a$,\nmeaning that\n\\[\n\\lim_{t\\to+\\infty}\\dot x(t)=a\\,.\n\\]\n\\item[(2)] (Chazy's continuity of the limit shape)\nGiven $\\epsilon>0$,\nthere are constants $t_1>0$ and $\\delta>0$ such that,\nfor any maximal solution $y:[0,T)\\to\\Omega$ satisfying\n$\\norm{y(0)-x(0)}<\\delta$ and\n$\\norm{\\dot y(0)-\\dot x(0)}<\\delta$,\nwe have:\n\\begin{enumerate}\n\\item[(i)] $T=+\\infty$ and $\\norm{y(t)-ta}<\\epsilon\\,t$ for all $t>t_1$,\nand moreover\n\\item[(ii)] there is $b\\in\\Omega$ with $\\norm{b-a}<\\epsilon$ \nfor which $y(t)=tb+o(t)$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nLet $0<\\rho<\\epsilon$ be such that the closed ball\n$B=\\overline B(a,\\rho)$ is contained in $\\Omega$.\nLet $k=\\max \\set{\\norm{\\nabla U(z)}\\mid z\\in B}$, and choose\n$t_0>0$ in such a way that for any $t\\geq t_0$ we have\n$\\norm{x(t)-ta}<\\tfrac{\\rho}{3}\\,t$.\nSince $\\nabla U$ is homogeneous of degree $-2$ and $x(t)\/t\\in B$\nfor $t\\geq t_0$, we get\n$\\norm{\\ddot x(t)}=\\norm{\\nabla U(x(t))}\\leq k\\,t^{-2}$.\nThus $\\ddot x$ is integrable at infinity,\nhence $\\dot x(t)$ has a limit,\nwhich must be $a$ since $x(t)=ta+o(t)$;\nthis proves (1), and moreover\n$\\norm{\\dot x(t)-a}\\leq k\/t$ for all $t\\geq t_0$.\nThen choose $t_1>t_0$ large enough such that\n\\[\n\\norm{x_1-t_1a}<\\tfrac{\\rho}{3}\\,t_1\\,,\n\\]\nwhere $x_1=x(t_1)$ and $\\dot x_1=\\dot x(t_1)$.\nSince we can also take $t_1>3k\/\\rho$ we also have\n\\[\n\\norm{\\dot x_1-a}\\leq \\frac{k}{t_1}<\\frac{\\rho}{3}\\,.\n\\]\n\nOn the other hand, since the vector field $X(x,v)=(v,\\nabla U(x))$\nis of class $C^1$, it defines a local flow on $T\\Omega$.\nLet us denote by $(x_0,\\dot x_0)$ the initial condition\n$(x(0),\\dot x(0))$ of $x(t)$.\nWe can choose $\\delta>0$ such that, for any choice of\n$(y_0,\\dot y_0)\\in T\\Omega$ verifying\n$\\norm{y_0-x_0}<\\delta$ and $\\norm{\\dot y_0-\\dot x_0}<\\delta$,\nthe maximal solution $y:[0,T)\\to\\Omega$\nwith $y(0)=y_0$ and $\\dot y(0)=\\dot y_0$\nsatisfies the following two conditions: $T>t_1$, and\n\\[\n\\norm{y_1-t_1a}<\\tfrac{\\rho}{3}\\,t_1\\,,\\qquad\n\\norm{\\dot y_1-a}<\\frac{\\rho}{3}\\,,\n\\]\nwhere $y_1=y(t_1)$ and $\\dot y_1=\\dot y(t_1)$.\nWe claim that $\\norm{y(t)-ta}<\\rho\\,t$ for all $t\\in (t_1,T)$.\nOtherwise, let $t_2\\in (t_1,T)$ be the first time for which\n$\\norm{y(t_2)-t_2a}=\\rho\\,t_2$.\nFor $t\\in [t_1,t_2]$ we have $y(t)\/t\\in B$, hence by homogeneity\n$\\norm{\\ddot y(t)}\\leq k\\,t^{-2}$, and therefore\n$\\norm{\\dot y(t)-a}\\leq\\norm{\\dot y_1-a}+k\/t_1<2\\rho\/3$.\nIntegrating on $[t_1,t_2]$ we deduce that\n\\[\n\\norm{y(t_2)-t_2a}\\leq\n\\norm{y_1-t_1a}+\\tfrac{2\\rho}{3}\\,(t_2-t_1)<\\rho\\,t_2\\,,\n\\]\nwhich is a contradiction.\nThus $y(t)\/t$ remains in $B\\subset\\Omega$ for all $t\\in (t_1,T)$,\nand since $\\dot y$ is bounded there,\nthe solution has no singularity in finite time;\nhence $T=+\\infty$ and\n$\\norm{y(t)-ta}<\\rho\\,t<\\epsilon\\,t$ for all $t>t_1$,\nwhich proves (i).\nFinally, since $\\norm{\\ddot y(t)}\\leq k\\,t^{-2}$ for all $t>t_1$,\nthe limit $b=\\lim_{t\\to+\\infty}\\dot y(t)$ exists and satisfies\n$\\norm{b-a}\\leq\\norm{\\dot y_1-a}+k\/t_1<2\\rho\/3<\\epsilon$,\nhence $b\\in B\\subset\\Omega$ and $y(t)=tb+o(t)$,\nwhich proves (ii).\n\\end{proof}\n\n\n\\subsection{The Jacobi-Maupertuis distance}\n\\label{s-jm.dist}\n\nWe shall now prove that, for any $h>0$,\nthe length space $(E^N,\\phi_h)$ is indeed geodesically convex.\n\nActually the lemma below gives us minimizing curves\nfor any pair of configurations, even with collisions,\nand it follows from Marchal's Theorem that\nsuch curves avoid collisions at intermediate times.\nThe proof is a well-known argument based on\nTonelli's Theorem for convex Lagrangians,\ncombined with Fatou's Lemma\nfor dealing with the singularities of the 
potential.\n\n\\begin{lemma}[Existence of minimizers for $\\phi_h$]\n\\label{lema-JM.geod.complet}\nGiven $h>0$ and $x\\neq y\\in E^N$ there is a curve\n$\\gamma\\in\\mathcal{C}(x,y)$ such that\n$\\mathcal{A}_{L+h}(\\gamma)=\\phi_h(x,y)$.\n\\end{lemma}\n\nBefore the proof we need to introduce some notation\nand make a simple remark that we will use several times.\nIt is worth noting that the remark applies whenever we consider\na system defined by a potential $U>0$.\n\n\\begin{notation}\nGiven $h\\geq 0$, for $x,y\\in E^N$ and $\\tau>0$ we will write\n\\[\n\\Phi_{x,y}(\\tau)=\\tfrac{1}{2}\\norm{x-y}^2\\tau^{-1}+h\\,\\tau\\,.\n\\]\n\\end{notation}\n\n\\begin{remark}\n\\label{rmk-usual.bound}\nGiven $h\\geq 0$ we have, for any pair of configurations\n$x,y\\in E^N$ and any $\\tau>0$,\n\\[\n\\phi(x,y,\\tau)+ h \\tau \\;\\geq\\; \\Phi_{x,y}(\\tau).\n\\]\nIndeed, given any pair of configurations\n$x,y\\in E^N$ and for any $\\sigma\\in\\mathcal{C}(x,y,\\tau)$,\nthe Cauchy-Schwarz inequality implies\n\\[\n\\norm{x-y}^2\\leq\n\\Big(\\int_a^b\\norm{\\dot \\sigma}\\,dt\\Big)^2\\leq\n\\tau\\,\\int_a^b\\norm{\\dot \\sigma}^2\\,dt\\,,\n\\]\nthus, since $U>0$,\n\\[\n\\mathcal{A}_L(\\sigma)>\n\\tfrac{1}{2}\\int_a^b\\norm{\\dot \\sigma}^2\\,dt \\geq\n\\tfrac{1}{2}\\,\\norm{x-y}^2\\tau^{-1}.\n\\]\nThis justifies the assertion,\nsince this lower bound does not depend on the curve $\\sigma$.\n\\end{remark}\n\n\n\\begin{proof}\n[Proof of Lemma \\ref{lema-JM.geod.complet}]\nLet $x,y\\in E^N$ be two given configurations, with $x\\neq y$.\nWe start by taking a minimizing sequence of\n$\\mathcal{A}_{L+h}$ in $\\mathcal{C}(x,y)$, that is to say,\na sequence of curves $(\\sigma_n)_{n> 0}$ such that\n\\[\n\\lim_{n\\to\\infty}\\mathcal{A}_{L+h}(\\sigma_n)=\\phi_h(x,y)\\,.\n\\]\nThen from this minimizing sequence we build a new one,\nbut this time composed of curves with the same domain.\nTo do this, we first observe that,\nif each $\\sigma_n$ is defined on an interval $[0,\\tau_n]$,\nthen by the previous 
remark we know that\n\\[\n\\mathcal{A}_{L+h}(\\sigma_n)\\geq\n\\phi(x,y,\\tau_n)+h\\tau_n \\geq\n\\Phi_{x,y}(\\tau_n)\n\\]\nwhere $\\Phi_{x,y}$ is the function defined above.\nSince clearly $\\Phi_{x,y}$ is a proper function on $\\mathbb{R}^+$,\nwe deduce that\n$0<\\liminf \\tau_n\\leq \\limsup \\tau_n<+\\infty$,\nand therefore we can suppose without loss of generality\nthat $\\tau_n\\to\\tau_0$ as $n\\to\\infty$.\nIt is not difficult to see that reparametrizing linearly\neach curve $\\sigma_n$ over the\ninterval $[0,\\tau_0]$ we get a new minimizing sequence.\nMore precisely, for each $n>0$ the reparametrization is\nthe curve\n$\\gamma_n:[0,\\tau_0]\\to E^N$ defined by\n$\\gamma_n(t)=\\sigma_n(\\tau_n\\tau_0^{-1}\\, t)$.\nComputing the action of the curves $\\gamma_n$ we get\n\\[\n\\int_0^{\\tau_0}\\tfrac{1}{2}\\norm{\\dot\\gamma_n}^2\\,dt=\n\\tau_n\\tau_0^{-1}\n\\int_0^{\\tau_n}\\tfrac{1}{2}\\norm{\\dot\\sigma_n}^2\\,dt\n\\]\nand\n\\[\n\\int_0^{\\tau_0} U(\\gamma_n)\\,dt=\n\\tau_0\\tau_n^{-1}\n\\int_0^{\\tau_n} U(\\sigma_n)\\,dt,\n\\]\nthus we have that\n\\[\n\\lim_{n\\to\\infty}\\mathcal{A}_{L+h}(\\gamma_n)=\n\\lim_{n\\to\\infty}\\mathcal{A}_{L+h}(\\sigma_n)=\n\\phi_h(x,y).\n\\]\n\nOn the other hand, it is easy to see that a uniform bound\nfor the action of the family of curves $\\gamma_n$\nimplies the equicontinuity of the family.\nMore precisely,\nif the bound $\\mathcal{A}_L(\\gamma_n)\\leq \\tfrac{1}{2}\\,M^2$\nholds for all $n>0$,\nthen using the Cauchy-Schwarz inequality as in Remark\n\\ref{rmk-usual.bound} we have\n\\[\n\\norm{\\gamma_n(t)-\\gamma_n(s)}\\leq\nM\\abs{t-s}^\\frac{1}{2}\n\\]\nfor all $t,s\\in [0,\\tau_0]$ and for all $n>0$.\nThus by Ascoli's Theorem we can assume that\nthe sequence $(\\gamma_n)$ converges uniformly to\na curve $\\gamma\\in\\mathcal{C}(x,y,\\tau_0)$.\nFinally, we apply Tonelli's Theorem for convex Lagrangians to 
get\n\\[\n\\int_0^{\\tau_0}\\tfrac{1}{2}\\norm{\\dot\\gamma}^2\\,dt\\;\\leq\\;\n\\liminf_{n\\to\\infty}\n\\int_0^{\\tau_0}\\tfrac{1}{2}\\norm{\\dot\\gamma_n}^2\\,dt\n\\]\nand Fatou's Lemma to obtain that\n\\[\n\\int_0^{\\tau_0} U(\\gamma)\\,dt\\;\\leq\\;\n\\liminf_{n\\to\\infty}\n\\int_0^{\\tau_0} U(\\gamma_n)\\,dt.\n\\]\nTherefore $\\mathcal{A}_L(\\gamma)\\leq \\phi(x,y,\\tau_0)$,\nwhich is only possible if the equality holds,\nand thus we deduce that $\\mathcal{A}_{L+h}(\\gamma)=\\phi_h(x,y)$.\n\\end{proof}\n\nThe next lemma is quite elementary and provides a\nrough lower bound for $\\phi_h$.\nHowever it has an interesting consequence, namely that\nreparametrizations of the $h$-minimizers by arc length\nof the metric $\\phi_h$ are Lipschitz\nwith the same Lipschitz constant.\nWe point out that this lower bound only depends\non the positivity of the Newtonian potential.\n\n\\begin{lemma}\\label{lema-geod.are.lipschitz}\nLet $h>0$.\nFor any pair of configurations $x,y\\in E^N$ we have\n\\[\n\\phi_h(x,y)\\geq \\sqrt{2h}\\norm{x-y}.\n\\]\n\\end{lemma}\n\n\\begin{proof}\nWe note that\n\\[\n\\phi_h(x,y)=\n\\min\\set{\\phi(x,y,\\tau)+\\tau h\\mid \\tau>0}\\geq\n\\min\\set{\\Phi_{x,y}(\\tau)\\mid\\tau>0},\n\\]\nand that\n\\[\n\\min\\set{\\Phi_{x,y}(\\tau)\\mid \\tau>0}=\n\\sqrt{2h}\\norm{x-y}.\n\\]\n\\end{proof}\n\n\\begin{remark}\n\\label{rmk-reparam.are.Lip}\nIf $\\gamma(s)$ is a reparametrization of an $h$-minimizer\nand the parameter is the arc length for the metric $\\phi_h$, \nthen we have\n\\[\n\\sqrt{2h}\\,\\norm{\\gamma(s_2)-\\gamma(s_1)}\\leq\n\\phi_h(\\gamma(s_1),\\gamma(s_2))=\\abs{s_2-s_1}.\n\\]\nTherefore all these reparametrizations are Lipschitz\nwith Lipschitz constant $1\/\\sqrt{2h}$. 
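\nFor completeness, we note that the constant $\\sqrt{2h}$ comes\nfrom an elementary minimization: for $x\\neq y$ the function\n$\\Phi_{x,y}$ is convex on $\\mathbb{R}^+$ and\n\\[\n\\Phi_{x,y}'(\\tau)=\n-\\tfrac{1}{2}\\norm{x-y}^2\\tau^{-2}+h=0\n\\quad\\Longleftrightarrow\\quad\n\\tau=\\frac{\\norm{x-y}}{\\sqrt{2h}}\\,,\n\\]\nwhich gives\n$\\min\\set{\\Phi_{x,y}(\\tau)\\mid\\tau>0}=\\sqrt{2h}\\,\\norm{x-y}$.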
\n\\end{remark}\n\nFinally, the last lemma of this section will be used to estimate\nthe time needed by an $h$-minimizer to join two given\nconfigurations.\n\n\\begin{lemma}\n\\label{lema-time.estim}\nLet $h>0$, let $x,y\\in E^N$ be two given configurations,\nand let $\\sigma\\in\\mathcal{C}(x,y,\\tau)$ be an $h$-minimizer.\nThen we have\n\\[\n\\tau_-(x,y)\\leq \\tau \\leq \\tau_+(x,y)\n\\]\nwhere $\\tau_-(x,y)$ and $\\tau_+(x,y)$\nare the roots of the polynomial\n\\[\nP(\\tau)=2h\\,\\tau^2-2\\phi_h(x,y)\\,\\tau+\\norm{x-y}^2.\n\\] \n\\end{lemma}\n\n\\begin{proof}\nSince $\\sigma$ minimizes $\\mathcal{A}_{L+h}$,\nin view of Remark \\ref{rmk-usual.bound} we have\n\\[\n\\phi_h(x,y)=\n\\phi(x,y,\\tau)+\\tau h\\geq\n\\Phi_{x,y}(\\tau),\n\\]\nthat is,\n\\[\\phi_h(x,y)\\geq \\frac{\\norm{x-y}^2}{2\\tau}+\\tau h\\,,\\]\nwhich is equivalent to saying that $P(\\tau)\\leq 0$,\nand hence that $\\tau$ lies between the roots of $P$.\n\\end{proof}\n\n\n\\subsection{Proof of Theorems \\ref{thm-princ} and \\ref{thm-calib.direct.horo} }\n\\label{s-proofs.main}\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm-princ}]\nGiven $h>0$, $a\\in\\Omega$ and $x_0\\in E^N$ we proceed as\nfollows.\nFirst, we define the sequence of functions\n\\[\nu_n(x)=\n\\phi_h(x,na)-\\phi_h(0,na)\\,,\\qquad x\\in E^N.\n\\]\nEach one of these functions is a viscosity subsolution\nof the Hamilton-Jacobi equation $H(x,d_xu)=h$,\nthat is to say, we have $u_n\\prec L+h$ for all $n>0$.\nSince the estimate for the action potential $\\phi_h$\ngiven by Theorem \\ref{thm-phih.estim} implies that\nthe set of such subsolutions is an equicontinuous family,\nand since we have $u_n(0)=0$ for all $n>0$,\nwe can extract a subsequence converging to a function\n\\[\n\\mathfrak{u} (x)=\\lim_{k\\to +\\infty}u_{n_k}(x),\n\\]\nand the convergence is uniform on compact subsets of $E^N$.\nActually the limit is a directed horofunction\n$\\mathfrak{u}\\in\\mathcal{B}_h(a)$.\n\nBy Theorem \\ref{thm-horofuns.have.lamin}\nwe know that there is at least one curve $x:[0,+\\infty)\\to E^N$,\nsuch 
that\n\\[\n\\phi_h(x_0,x(t))=\n\\mathcal{A}_L(x\\mid_{[0,t]})+ht=\n\\mathfrak{u} (x_0)-\\mathfrak{u} (x(t))\n\\]\nfor any $t>0$, and such that $x(0)=x_0$.\nProposition \\ref{prop-criterion.visc.sol} now implies\nthat $\\mathfrak{u}$ is a viscosity solution of\nthe Hamilton-Jacobi equation $H(x,d_xu)=h$, and moreover,\nthat $\\mathfrak{u}$ is a fixed point of the quotient Lax-Oleinik semigroup.\n\nFinally, by Theorem \\ref{thm-calib.direct.horo} we have that\nthe curve $x(t)$ is a hyperbolic motion with energy constant $h$,\nwhose asymptotic direction is given by the configuration $a$.\nMore precisely, we have that\n\\[\nx(t)=t\\;\\frac{\\sqrt{2h}}{\\norm{a}}\\;a \\,+\\,o(t)\n\\]\nas $t\\to +\\infty$, as we wanted to prove.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm-calib.direct.horo}]\nFor $h>0$ and $a\\in\\Omega$,\nlet $u\\in\\mathcal{B}_h(a)$ be a given horofunction directed by $a$.\nThis means that there is a sequence of configurations\n$(p_n)_{n>0}$, such that $p_n=\\lambda_na+o(\\lambda_n)$\nwith $\\lambda_n\\to+\\infty$ as $n\\to\\infty$, and such that\n\\[\nu(x)=\\lim_{n\\to\\infty}(u_{p_n}(x)-u_{p_n}(0))\n\\]\nwhere $u_p$ denotes the function $u_p(x)=\\phi_h(x,p)$.\nLet also $\\gamma:[0,+\\infty)\\to E^N$ be the curve\ngiven by the hypothesis and satisfying\n\\[\nu(\\gamma(0))-u(\\gamma(t))=\\mathcal{A}_{L+h}(\\gamma\\mid_{[0,t]})\n\\]\nfor all $t>0$.\nIn particular $\\gamma$ is an $h$-minimizer.\nWe recall that this means that the restrictions of\n$\\gamma$ to compact intervals are global\nminimizers of $\\mathcal{A}_{L+h}$.\nThus the restriction of $\\gamma$ to $(0,+\\infty)$ is\na genuine motion of the $N$-body problem,\nwith energy constant $h$,\nand it is a maximal solution if and only if\n$\\gamma(0)$ has collisions;\notherwise the motion defined by $\\gamma$\ncan be extended as a motion \nto some interval $(-\\epsilon, +\\infty)$.\n\nThe proof is divided into three steps.\nThe first one will be to prove that the curve 
$\\gamma$\nis not a superhyperbolic motion. This will be deduced from\nthe minimization property of $\\gamma$.\nThen we will apply the Marchal-Saari theorem to conclude\nthat there is a configuration $b\\neq 0$ such that\n$\\gamma(t)=tb+O(t^{2\/3})$.\nThe second and most sophisticated step will be to\nexclude the possibility of having collisions in $b$,\nthat is to say, in the limit shape of the motion $\\gamma$.\nFinally, once it is known that $\\gamma$ is a hyperbolic motion,\nan easy application of Chazy's Lemma\n\\ref{lema-cont.limitshape} will allow us to conclude\nthat we must have $b=\\lambda a$ for some $\\lambda>0$.\nThen the proof will be achieved by observing that, since\n$\\norm{b}=\\sqrt{2h}$,\nwe must also have $\\lambda=\\sqrt{2h}\\norm{a}^{-1}$.\n\nWe start now by proving that $\\gamma$ is not superhyperbolic.\nWe will give a proof by contradiction.\nSupposing that $\\gamma$ is superhyperbolic\nwe can choose $t_n\\to +\\infty$ such\nthat $R(t_n)\/t_n\\to +\\infty$.\nWe recall that $R(t)=\\max\\set{ r_{ij}(t)\\mid i<j}$.\nSince $\\gamma$ is an $h$-minimizer, for any $n>0$ we have\n\\[\n\\mathcal{A}_L(\\gamma\\mid_{[0,t_n]})+ht_n=\n\\phi_h(\\gamma(0),\\gamma(t_n)).\n\\]\nLet us write for short $r_n=\\norm{\\gamma(0)-\\gamma(t_n)}$.\nIn view of the observation we made in\nRemark \\ref{rmk-usual.bound},\nand using Theorem \\ref{thm-phih.estim},\nwe have the lower and upper bounds\n\\[\n\\tfrac{1}{2}\\,r_n^2\\;t_n^{-1} +ht_n\\;\\leq\\;\n\\phi_h(\\gamma(0),\\gamma(t_n))\\;\\leq\\;\n\\left(\\alpha\\;r_n+h\\beta\\; r_n^2\\right)^{1\/2}\n\\]\nfor some constants $\\alpha,\\beta >0$ and for any $n>0$.\nIt is not difficult to see that this is impossible for $n$ large\nenough using the fact that $r_n\\,t_n^{-1}\\to +\\infty$.\nThus by the Marchal-Saari theorem there is a configuration\n$b\\in E^N$ such that $\\gamma(t)=tb+O(t^{2\/3})$.\nSince by the Lagrange-Jacobi identity $b=0$ forces $h=0$,\nwe know that $b\\neq 0$.\n\nWe prove now that $b$ has no collisions, that is to say,\nthat $b\\in\\Omega$. 
This is our second step in the proof.\nLet us write $p=\\gamma(0)$, $q_0=\\gamma(1)$\nand let us also define $\\sigma_0\\in\\mathcal{C}(q_0,p,1)$\nby reversing the parametrization of\n$\\gamma_0=\\gamma\\mid_{[0,1]}$.\nThus $\\sigma_0$ calibrates the function $u$,\nthat is to say, we have\n$u(p)-u(q_0)=\\mathcal{A}_{L+h}(\\sigma_0)$.\n\nNow, using Lemma \\ref{lema-JM.geod.complet} we can define\na sequence of curves $\\sigma'_n\\in\\mathcal{C}(p_n,q_0)$,\nsuch that $\\mathcal{A}_{L+h}(\\sigma'_n)=\\phi_h(p_n,q_0)$ for all $n>0$.\nThus each curve $\\sigma'_n$\nis an $h$-calibrating curve of the function\n$u_{p_n}(x)=\\phi_h(x,p_n)$.\nIt will be convenient to also consider the curves $\\gamma'_n$\nobtained by reversing the parametrizations of\nthe curves $\\sigma'_n$.\nIf for each $n>0$ the curve $\\sigma'_n$ is defined over\nan interval $[-s_n,0]$, then we get a sequence of curves\n$\\gamma'_n\\in\\mathcal{C}(q_0,p_n,s_n)$,\nrespectively defined over the intervals $[0,s_n]$. \n\nSince $q_0$ is an interior point of the curve $\\gamma$,\nMarchal's Theorem implies that $q_0\\in\\Omega$.\nThus for each curve $\\gamma'_n$ the velocity\n$w_n=\\dot\\gamma'_n(0)$ is well defined.\nSince $h$-minimizers have energy constant $h$,\nwe also have $\\norm{w_n}^2=2(h+U(q_0))$ for all $n>0$.\nThis allows us to choose a subsequence $n_k$ such that\n$w_{n_k}\\to v_0$ as $k\\to\\infty$.\nAt this point we need to prove\nthat $\\lim s_n=+\\infty$.\nThis can be done by applying Lemma \\ref{lema-time.estim}\nto the $h$-minimizers $\\gamma'_n$ as follows.\nGiven two configurations $x,y\\in E^N$,\nthe polynomial given by the lemma satisfies\n$P(\\tau)\\geq \\norm{x-y}^2-2\\phi_h(x,y)\\tau$ for all $\\tau>0$.\nTherefore,\nwhen $x\\neq y$ its roots can be bounded below by\n$\\norm{x-y}^2\/2\\phi_h(x,y)$.\nUsing this fact, we have that for all $n>0$,\n\\[\ns_n>\\frac{\\norm{q_0-p_n}^2}{2\\,\\phi_h(q_0,p_n)}.\n\\]\nThen the upper bound for $\\phi_h$ given by\nTheorem \\ref{thm-phih.estim} implies that 
$\\lim s_n=+\\infty$. \n\nLet us summarize what we have built so far.\nFrom now on, let us write for short $q_k=p_{n_k}$, $t_k=s_{n_k}$,\n$v_k=w_{n_k}$, and also $\\gamma_k=\\gamma'_{n_k}$ and\n$\\sigma_k=\\sigma'_{n_k}$.\nFirst, there is a sequence of configurations $(q_k)_{k>0}$,\nsuch that, for some increasing sequence $n_k$\nof positive integers, we have\n$q_k=\\lambda_{n_k}a+o(\\lambda_{n_k})$ as $k\\to\\infty$.\nAssociated to each $q_k$ there is an $h$-minimizer\n$\\gamma_k:[0,t_k]\\to E^N$, with $t_k\\to +\\infty$,\nsuch that $\\gamma_k\\in\\mathcal{C}(q_0,q_k)$.\nMoreover, $v_k=\\dot\\gamma_k(0)$ and we have\n$v_k\\to v_0$ as $k\\to\\infty$.\nIn addition, each reversed curve $\\sigma_k\\in\\mathcal{C}(q_k,q_0)$\nis an $h$-calibrating curve of the function\n$u_{q_k}(x)=\\phi_h(x,q_k)$.\n\nWe will prove that $v_0=\\dot\\gamma(1)$. \nTo do this, we start by considering the maximal solution\nof Newton's equations with initial conditions $(q_0,v_0)$\nand by calling $\\zeta$ its restriction to positive times,\nlet us say for $t\\in [0,t^*)$.\nNext, we choose $\\tau\\in (0,t^*)$ and we observe that\nwe have $t_k>\\tau$ for any $k$ big enough.\nThus, for these values of $k$, we have that\n$\\gamma_k(t)$ and $\\dot\\gamma_k(t)$ converge respectively\nto $\\zeta(t)$ and $\\dot\\zeta(t)$, and the convergence is uniform\nfor $t\\in[0,\\tau]$.\nTherefore,\n\\[\n\\lim_{k\\to\\infty}\\mathcal{A}_{L+h}(\\gamma_k\\mid_{[0,\\tau]})=\n\\mathcal{A}_{L+h}(\\zeta\\mid_{[0,\\tau]}).\n\\]\nOn the other hand, since on each compact set our function\n$u(x)$ is the uniform limit of the functions\n$u_k(x)=u_{q_k}(x)-u_{q_k}(0)$, we can also write\n\\[\nu(q_0)-u(\\zeta(\\tau))=\n\\lim_{k\\to\\infty}\\,(\\,u_k(q_0)-u_k(\\gamma_k(\\tau))\\,).\n\\]\nWe use now the fact that for each one of these values of $k$\nwe have, by the calibration property, that\n\\[\nu_k(q_0)-u_k(\\gamma_k(\\tau))=\n\\mathcal{A}_{L+h}(\\gamma_k\\mid_{[0,\\tau]}),\n\\]\nto conclude then 
that\n\\[\nu(q_0)-u(\\zeta(\\tau))=\\mathcal{A}_{L+h}(\\zeta\\mid_{[0,\\tau]}).\n\\]\nNotice that what we have proved is that the reversed curve\n$\\zeta(-t)$ defined on $[-\\tau,0]$\nis indeed an $h$-calibrating curve of $u$.\nThe concatenation of this calibrating curve with the\ncalibrating curve $\\sigma_0$\nresults, according to Lemma \\ref{lema-concat.calib},\nin a new calibrating curve,\ndefined on $[-\\tau,1]$ and passing through $q_0$ at $t=0$.\nTherefore this concatenation of curves is an $h$-minimizer,\nwhich implies that it is smooth at $t=0$.\nWe have proved that $\\dot\\zeta(0)=v_0=\\dot\\gamma(1)$.\nThis also implies that $t^*=+\\infty$ and that\n$\\zeta(t)=\\gamma(t+1)$ for all $t\\geq 0$.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=1.1]{proof33a.pdf}\n\\caption{The $C^1$ approximation of the curve $\\gamma$ by\n$h$-minimizers from $q_0$ to $q_k=p_{n_k}$.\nHere $\\lambda=\\lambda_{n_k}$ and\n$\\norm{q_k-\\lambda a}=o(\\lambda)$.}\n\\label{Proof33a}\n\\end{figure}\n\nFor each $k>1$ we define the functions\n\\[\n\\rho_k:[0,t_k]\\to\\mathbb{R}^+,\n\\quad\n\\rho_k(t)=\\norm{\\gamma_k(t)}\n\\]\n\\[\n\\theta_k:[0,t_k]\\to\\mathbb{S},\n\\quad\n\\theta_k(t)=\\norm{\\gamma_k(t)}^{-1}\\gamma_k(t)\n\\]\nwhere $\\mathbb{S}=\\set{x\\in E^N\\mid \\inner{x}{x}=1}$ is the unit sphere\nfor the mass inner product. \nThus, for each $k>0$ we can write $\\gamma_k=\\rho_k\\theta_k$,\nand the Lagrangian action in polar coordinates reads\n\\[\n\\mathcal{A}_{L+h}(\\gamma_k)=\n\\int_0^{t_k}\\tfrac{1}{2}\\;\\dot\\rho_k^{\\,2}\\,dt\\,+\n\\int_0^{t_k}\\tfrac{1}{2}\\;\\rho_k^{\\,2}\\,\\dot\\theta_k^{\\,2}\\,dt\\,+\n\\int_0^{t_k}\\rho_k^{-1}\\,\\mu(\\gamma_k)\\,dt\\;+\nht_k,\n\\]\nwhere $\\mu(x)=\\norm{x}\\,U(x)$ is the homogeneous function\nof degree zero associated with $U$.\nAssuming that $\\mu(b)=+\\infty$, we can find $\\epsilon>0$\nsuch that, if $\\norm{x-b}<\\epsilon$,\nthen $\\mu(x)>3\\mu(a)$.\nOn the other hand, since we have that $\\gamma(t)=tb+o(t)$,\nthere is $T_0>0$ such that\n$\\norm{\\gamma(t)t^{-1}-b}<\\epsilon\/2$ for all $t\\geq T_0$.\n\nWe use now the approximation of $\\gamma$ by the curves\n$\\gamma_k$. 
For each $T\\geq T_0$ there is a positive integer\n$k_T$ such that, if $k>k_T$,\nthen $t_k>T$ and\n$\\norm{\\gamma_k(t)-\\gamma(t)}<\\tfrac{\\epsilon}{2}\\,T_0$\nfor all $t\\in[0,T]$.\nTherefore, for $k>k_T$ and for any $t\\in[T_0,T]$\nwe have\n\\[\n\\norm{\\frac{\\gamma_k(t)}{t}-\\frac{\\gamma(t)}{t}}<\n\\frac{\\epsilon}{2},\n\\]\nand then $\\norm{\\gamma_k(t)t^{-1}-b}<\\epsilon$.\nIn turn, since $\\mu$ is homogeneous, this implies that\n\\[\n\\mu(\\gamma_k(t))=\n\\mu(\\gamma_k(t)t^{-1})>3\\mu(a).\n\\]\n\nNow we are almost able to define the sequence of curves\n$\\eta_k\\in\\mathcal{C}(q_0,q_k)$.\nLet us write $k_0$ for $k_{T_0}$.\nFor $k\\geq k_0$ we know that\n$\\mu(\\gamma_k(T_0))>3\\mu(a)$.\nMoreover, since the endpoint $q_k$ of the\ncurve $\\gamma_k$ lies in a ball $B_r(\\lambda a)$ with\n$r=o(\\lambda)$, we can assume that $k_0$ is big enough\nin order to have $\\mu(q_k)<2\\mu(a)$ for all $k\\geq k_0$.\nThen we define\n\\[\nT_k=\n\\max\\set{T\\geq T_0\\mid\n\\mu(\\gamma_k(t))\\geq 2\\mu(a)\n\\text{ for all } t\\in [T_0,T]},\n\\]\nand $c_k=\\theta_k(T_k)$.\nGiven $T>T_0$,\nby the previous considerations we have that\n$k>k_T$ implies $T_k>T$.\nThus,\nwe can take $T_k$ as large as we want\nby choosing $k$ large enough.\nThe last ingredient for building the curve $\\eta_k$\nis a minimizer $\\delta_k$ of $\\mathcal{A}_{L+h}$ in\n$\\mathcal{C}(\\gamma_k(T_0),\\rho_k(T_0)c_k)$\nwhose existence is guaranteed by Lemma\n\\ref{lema-JM.geod.complet}.\nThen we define $\\eta_k$ as follows.\nFor $k\\geq k_0$, the curve $\\eta_k$ is the concatenation of\nthe restriction $\\gamma_k\\mid_{[0,T_0]}$,\nthe minimizer $\\delta_k$,\nthe radial curve $t\\mapsto\\rho_k(t)\\,c_k$ for $t\\in[T_0,T_k]$,\nand the restriction $\\gamma_k\\mid_{[T_k,t_k]}$.\nWriting\n$\\Delta_k=\\mathcal{A}_{L+h}(\\gamma_k)-\\mathcal{A}_{L+h}(\\eta_k)$,\nwe will prove that $\\Delta_k>0$\nfor $k$ large enough.\n\nWe start by observing that the first and the last components\nof $\\eta_k$ are also segments of $\\gamma_k$ so that\ntheir contributions to $\\Delta_k$ cancel each other out.\n\nAlso we have\n\\[\n\\mathcal{A}_{L+h}(\\gamma_k\\mid_{[T_0,T_k]}) = \n\\int_{T_0}^{T_k}\\,\\tfrac{1}{2}\\,\\dot\\rho_k^{\\,2}\\,dt +\n\\int_{T_0}^{T_k}\\,\\tfrac{1}{2}\\,\\rho_k^{\\,2}\\,\\dot\\theta_k^{\\,2}\\,dt +\n\\int_{T_0}^{T_k}\\,\\rho_k^{-1}\\mu(\\gamma_k)\\,dt +\nh(T_k-T_0),\n\\]\nand\n\\[\n\\mathcal{A}_{L+h}(\\,\\rho_kc_k\\mid_{[T_0,T_k]}) 
=\n\\int_{T_0}^{T_k}\\,\\tfrac{1}{2}\\,\\dot\\rho_k^{\\,2}\\,dt +\n\\int_{T_0}^{T_k}\\,\\rho_k^{-1}\\,2\\mu(a)\\,dt +\nh(T_k-T_0).\n\\]\nWe recall that\n$\\mu(\\gamma_k(t))\\geq 2\\mu(a)$ for all\n$t\\in [T_0,T_k]$.\nTherefore, so far we can say that\n\\[\n\\Delta_k>\n\\int_{T_0}^{T_k}\\rho_k^{-1}\\,\n\\left(\\mu(\\gamma_k(t))-2\\mu(a)\\right)\\,dt\\;-\\;\n\\mathcal{A}_{L+h}(\\delta_k).\n\\]\nThis part of the proof is essentially done.\nTo conclude we only need to establish estimates\nfor the two terms on the right-hand side of the previous inequality.\nMore precisely,\nwe will prove that the integral diverges as $k\\to\\infty$,\nand that the second term is bounded as a function of $k$.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=1.1]{proof33b.pdf}\n\\caption{For $k$ large enough, the $\\mathcal{A}_{L+h}$ action\nof the green curve $\\eta_k$\nis less than that of the curve $\\gamma_k$.\nThe intermediate points are \n$b_k=\\gamma_k(T_0)$,\n$d_k=\\rho_k(T_0)c_k$,\nand $e_k=\\rho_k(T_k)c_k=\\gamma_k(T_k)$.} \n\\label{Proof33b}\n\\end{figure}\n\n\\begin{claim}\nThe sequence $\\mathcal{A}_{L+h}(\\delta_k)$ is bounded.\n\\end{claim}\n\\begin{proof}\nIndeed, the curve $\\delta_k$ is a minimizer of $\\mathcal{A}_{L+h}$\njoining two configurations of norm\n$\\rho_k(T_0)$, and\n\\[\n\\rho_k(T_0)\\to\\rho(T_0)=\\norm{\\gamma(T_0)}\n\\]\nas $k\\to\\infty$. 
Therefore there is $R>0$ such that\nthe endpoints of the curves $\\delta_k$ are all contained\nin the compact ball $B_R(0)\\subset E^N$.\nOn the other hand, \nsince by Theorem \\ref{thm-phih.estim} we know that\nthe action potential $\\phi_h$ is continuous, we can conclude that\n$\\sup \\mathcal{A}_{L+h}(\\delta_k)<+\\infty$.\n\\end{proof}\n\n\\begin{claim}\nThe sequence\n$\\int_{T_0}^{T_k}\\rho_k^{-1}\\,\n\\left(\\mu(\\gamma_k(t))-2\\mu(a)\\right)\\,dt$\ndiverges as $k\\to\\infty$.\n\\end{claim}\n\\begin{proof}\nIn order to get a lower bound for the integral of $\\rho_k^{-1}$,\nwe make the following considerations.\nWe note first that $\\rho(t)=\\norm{\\gamma(t)}< \\alpha t+\\beta$\nfor some constants $\\alpha, \\beta>0$.\nThis is because we know that\n$\\gamma(t)=tb+o(t)$ as $t\\to +\\infty$.\nThus we have that for any $T>T_0$\n\\[\n\\int_{T_0}^T\\rho^{-1}dt \\;\\geq\\;\n\\alpha^{-1}\\left(\\log(\\alpha T+\\beta)-\\log(\\alpha T_0 +\\beta)\\right).\n\\]\nTherefore, for any choice of $K>0$ there is $T>T_0$ such that\nthe integral on the left-hand side is bigger than\n$\\mu(a)^{-1}K$.\n\nOn the other hand,\nsince for $k>k_T$ we have that $T_k>T$, and since\n$\\gamma_k(t)$ uniformly converges to $\\gamma(t)$\non $[T_0,T]$, we can assume that we have\n$\\mu(\\gamma_k(t))>3\\mu(a)$ for all $t\\in [T_0,T]$\nand that $\\int_{T_0}^{T}\\rho_k^{-1}dt>\\mu(a)^{-1}K$.\nNeglecting the part of the integral between $T$ and $T_k$,\nwhich is positive, we conclude that\n\\[\n\\int_{T_0}^{T_k}\\rho_k^{-1}\\,\n\\left(\\mu(\\gamma_k(t))-2\\mu(a)\\right)\\,dt \\;>\n\\;\\mu(a)\\int_{T_0}^{T}\\rho_k^{-1}dt \\;>\\;K\n\\]\nfor every $k$ sufficiently large.\n\\end{proof}\nIt follows that for large values of $k$ the difference $\\Delta_k$ is\npositive, meaning that the corresponding curves $\\gamma_k$\nare not $h$-minimizers because the curves $\\eta_k$ have\nsmaller action.\nTherefore we have proved by contradiction that $b\\in\\Omega$.\n\nThe last step to finish the proof is to show that $b=\\lambda a$\nfor some $\\lambda >0$.\nIf not, we can choose two disjoint 
cones $C_a$ and $C_b$\nin $E^N$, centered at the origin and with axes directed by\nthe configurations $a$ and $b$ respectively.\nSince we know that $b\\in\\Omega$,\nwe can apply Chazy's Lemma to get that for $k$ large enough\nthe curves $\\gamma_k$ are defined for all $t>0$, and that there\nis $T^*>0$ for which we must have $\\gamma_k(t)\\in C_b$\nfor all $t>T^*$ and any $k$ large enough.\nBut this produces a contradiction, because we know that\n$q_k=\\gamma_k(t_k)=\\lambda_{n_k}a+o(\\lambda_{n_k})$\nas $k\\to\\infty$, which\nforces $q_k\\in C_a$ for $k$ large enough.\n\\end{proof}\n\n\n\n\\section{The Jacobi-Maupertuis distance for nonnegative energy}\n\\label{s-jm.dist}\n\nIn this section we develop the geometric viewpoint\nand we show, for $h\\geq 0$, that when restricted to $\\Omega$\nthe action potential $\\phi_h$ is exactly the Riemannian distance\nassociated to the Jacobi-Maupertuis metric $j_h=2(h+U)g_m$,\nwhere $g_m$ is the mass scalar product.\nMoreover,\nwe will see that the metric space $(E^N,\\phi_h)$\nis the completion of the Riemannian space $(\\Omega,j_h)$.\nThe fact that $\\phi_h$ is a distance over $E^N$ is a\nstraightforward consequence of the definition and of\nLemma \\ref{lema-geod.are.lipschitz} or\nLemma \\ref{lema-compar.pot.action.kepler}\ndepending on whether $h>0$ or $h=0$. \nIt is also immediate to see that $(E^N,\\phi_h)$ is a length space,\nthat is to say $\\phi_h$ coincides with the induced length distance.\nFrom now on,\nwe denote by $\\mathcal{L}_h(\\gamma)$ the Riemannian length\nof a $C^1$ curve $\\gamma$,\nand we denote by $d_h$ the Riemannian distance on $\\Omega$.\n \n\\begin{proposition}\nFor all $h\\geq 0$,\nthe space $(E^N,\\phi_h)$ is the completion of $(\\Omega,d_h)$. 
\n\\end{proposition}\n\n\\begin{proof}\nIn the case $h>0$,\nthe fact that $(E^N,\\phi_h)$ is a complete length space\ncomes directly from the definition of $\\phi_h$\nand from Lemma \\ref{lema-geod.are.lipschitz}\nand Theorem \\ref{thm-phih.estim}.\nMoreover, we have that $\\phi_h$ generates the topology of $E^N$\nand that $\\Omega$ is thus a dense subset.\n\nFor the case $h=0$ the argument is exactly the same,\nbut instead of Lemma \\ref{lema-geod.are.lipschitz},\nwhich becomes meaningless,\nwe have to use Lemma \\ref{lema-compar.pot.action.kepler} below.\n\nThe proof will be achieved now by showing that\nthe inclusion of $\\Omega$ into $E^N$ is an isometry,\nthat is to say,\nthat $\\phi_h$ coincides with $d_h$ when restricted to $\\Omega$.\nGiven $(x,v)\\in \\Omega\\times E^N$,\nwe have\n\\[\n\\norm{v}_h=j_h(x)(v,v)^{1\/2}\\leq L(x,v)+h\n\\]\nwith equality if and only if $\\mathcal{E}(x,v)=h$,\nwhere $\\mathcal{E}(x,v)=\\frac{1}{2}\\norm{v}^2-U(x)$\nis the energy function in $T\\Omega$.\nIt follows that if $\\gamma$ is an absolutely continuous curve in\n$\\Omega$ then $\\mathcal{L}_h(\\gamma)\\leq \\mathcal{A}_{L+h}(\\gamma)$,\nwith equality if and only if\n$\\mathcal{E}(\\gamma(t),\\dot{\\gamma}(t))=h$ for almost all $t$.\nGiven now $x,y\\in\\Omega$,\nby Marchal's Theorem any $h$-minimizer joining $x$ to $y$\nis a genuine motion,\nin particular it is a $C^1$ curve.\nSince $d_h$ is defined as the infimum of $\\mathcal{L}_h(\\gamma)$\nover all $\\mathcal{C}^1$ curves in $\\Omega$ joining $x$ to $y$,\nwe have that $d_h(x,y)\\leq\\phi_h(x,y)$.\n\nIn order to prove the converse inequality,\nlet $\\epsilon>0$ and $\\gamma:[0,1]\\to \\Omega$\nbe a $\\mathcal{C}^1$ curve joining $x$ to $y$ such that\n$\\mathcal{L}_h(\\gamma)\\leq d_h(x,y)+\\epsilon$.\nWe can now find a finite sequence $0=t_0<\\dots<t_m=1$\nsuch that each restriction $\\gamma\\mid_{[t_{i-1},t_i]}$\nadmits a reparametrization $\\tilde\\gamma_i$ with\n$\\mathcal{A}_{L+h}(\\tilde\\gamma_i)\\leq\n\\mathcal{L}_h(\\gamma\\mid_{[t_{i-1},t_i]})+\\epsilon\/m$.\nUsing the triangle inequality for $\\phi_h$ we get\n\\[\n\\phi_h(x,y)\\leq\n\\sum_{i=1}^m\\mathcal{A}_{L+h}(\\tilde\\gamma_i)\\leq\n\\mathcal{L}_h(\\gamma)+\\epsilon\\leq\nd_h(x,y)+2\\epsilon,\n\\]\nand since $\\epsilon>0$ is arbitrary, we conclude that\n$\\phi_h(x,y)\\leq d_h(x,y)$.\n\\end{proof}\n\nThe following lemma provides the lower bound for $\\phi_0$\nthat was used in the case $h=0$.\n\n\\begin{lemma}\n\\label{lema-compar.pot.action.kepler}\nThere is $\\mu_0>0$ such that\nfor all $x,y\\in E^N$ satisfying $x\\neq y$, we have\n\\[\n\\phi_0(x,y)\\geq 
\\frac{\\mu_0}{\\rho}\\norm{x-y}\n\\]\nwhere\n$\\rho=\\max\\set{\\norm{x},\\norm{y}}^\\frac{1}{2}$.\n\\end{lemma}\n\n\\begin{proof}\nThe main idea of the proof is to estimate $\\phi_0$\nby comparing it with the action of some Kepler problem in $E^N$.\nSince $U$ is a continuous function with values in $(0,+\\infty]$,\nthe minimum of $U$ on the unit sphere of $E^N$,\nhere denoted $U_0$, is strictly positive.\nThus, by homogeneity of the potential,\nif $x$ is any nonzero configuration we have\n\\[\nU(x)=\\frac{1}{\\norm{x}}\\,U\\left(\\frac{x}{\\norm{x}}\\right)\n\\geq \\frac{U_0}{\\norm{x}}.\n\\]\nLet us consider now the Lagrangian function\nassociated to the Kepler problem in $E^N$\nwith potential $U_0\/\\norm{x}$, that is to say \n\\[\nL_{\\kappa}(x,v)=\n\\frac{1}{2}\\norm{v}^2+\\frac{U_0}{\\norm{x}}\\;.\n\\]\nBy the previous inequality we know that $L_\\kappa(x,v) \\le L(x,v)$.\nThe critical action potential associated to $L_\\kappa$\nis defined on $E^N\\times E^N$ by\n\\[\n\\Phi_0(x,y)=\n\\min\\set{\\mathcal{A}_{L_\\kappa}(\\gamma)\\mid \\gamma\\in\\mathcal{C}(x,y)},\n\\]\nand it follows immediately from the definition that\n$\\Phi_0(x,y)\\le \\phi_0(x,y)$.\nAssume now $x\\neq y$, and let\n$\\gamma : [0,\\tau]\\rightarrow E^N$ be a free-time minimizer for\n$\\mathcal{A}_{L_\\kappa}$ in $\\mathcal{C}(x,y)$.\nThus $\\gamma$ is an absolutely continuous curve satisfying\n$\\mathcal{A}_{L_\\kappa}(\\gamma)=\\Phi_0(x,y)$.\nAs a zero energy motion of the Kepler problem,\nwe know that $\\gamma$ is an arc of Keplerian parabola,\nand in particular we know that\n\\[\n\\max_{t\\in[0,\\tau]}\\norm{\\gamma(t)}=\n\\max\\set{\\norm{x},\\norm{y}}\n\\]\nwhich in turn implies that\n\\[\n\\frac{U_0}{\\norm{\\gamma(t)}}\\,\\geq\\,\n\\frac{U_0}{\\rho^2}\n\\]\nfor all $t\\in[0,\\tau]$. 
\nThus, using this lower bound and the Cauchy-Schwarz inequality\nfor the kinetic part of the action of $\\gamma$ we deduce that \n$\\Phi_0(x,y)\\geq g(\\tau)$,\nwhere $g:\\mathbb{R}^+\\to\\mathbb{R}$ is the function defined by \n\\[\ng(s)=\n\\frac{\\norm{x-y}^2}{2s}+\\frac{U_0}{\\rho^2}\\,s.\n\\]\nObserving now that $g$ is convex and proper,\nand replacing $g(\\tau)$ in the previous inequality\nby the minimum of $g(s)$ for $s>0$, we obtain\n\\[\n\\phi_0(x,y)\\geq\n\\Phi_0(x,y)\\geq\n\\frac{\\mu_0}{\\rho}\\,\\norm{x-y}\n\\]\nfor $\\mu_0=\\sqrt{2U_0}$.\n\\end{proof} \n\nNow we have all the necessary elements to give\nthe proof of the corollary stated in Sect. \\ref{s-geom.view}.\nWe have to prove that if two geodesic rays have the same\nasymptotic limit, then they are equivalent in the sense of\nhaving bounded difference.\n\n\\begin{proof}\n[Proof of Corollary \\ref{cor-Gr.boundary}]\nLet $\\gamma:[0,+\\infty)\\to E^N$ be a geodesic ray \nof the distance $\\phi_h$, with $h>0$.\nWe assume that $\\gamma(s)=sa+o(s)$\nas $s\\to +\\infty$ for some $a\\in\\Omega$.\nThus, we know that $\\gamma(s)$ is without collisions\nfor all $s$ sufficiently large. 
\nBy performing a time translation we can assume that\n$\\gamma(s)\\in \\Omega$ for all $s\\geq 0$, hence that\n$\\gamma$ is a geodesic ray of the\nJacobi-Maupertuis metric $j_h$ in $\\Omega$.\nNow we know that $\\gamma$ admits a factorization\n$\\gamma(s)=x(t_\\gamma(s))$ where\n$x(t)$ is a motion of energy $h$.\nMore precisely,\nthe inverse of the new parameter $t_\\gamma$\nis a function $s_x$ satisfying $x(t)=\\gamma(s_x(t))$.\nSince $\\gamma$ is arclength parametrized,\nwe have $\\norm{\\dot\\gamma(s)}_h=1$ for all $s\\geq 0$,\nand we deduce that $s_x$ is the solution\nof the differential equation \n\\begin{equation}\\tag{$\\star$}\\label{eq-edo.reparam}\n\\dot s_x(t)=2h+2U(\\gamma(s_x(t)))\n\\end{equation}\nwith initial condition $s_x(0)=0$.\nThis implies that $s_x(t)\\to +\\infty$ and $\\dot s_x(t)\\to 2h$\nas $t\\to +\\infty$,\nhence we also have $s_x(t)=2ht+o(t)$\nand $x(t)=2ht\\,a+o(t)$ as $t\\to +\\infty$.\nIn particular $x(t)$ is a hyperbolic motion.\nWe claim now that\n\\[\ns_x(t)=2ht+\\frac{U(a)}{h}\\log t+O(1),\n\\]\nand the proof is as follows.\nFrom (\\ref{eq-edo.reparam}) we have, for $t>1$,\n\\begin{equation}\\tag{$\\star\\star$}\\label{eq-sx.asymptotic}\ns_x(t)=2ht+\\int_0^1 2U(x(\\nu))\\,d\\nu+\\int_1^t 2U(x(\\nu))\\,d\\nu.\n\\end{equation}\nOn the other hand, by Chazy's Theorem we have that\n\\[\nx(t)=\n2ht\\,a-\\frac{\\log t}{4h^2}\\,\\nabla U(a)+O(1).\n\\]\nWe observe then that\n\\begin{eqnarray*}\nU(x(\\nu))&=&\n\\frac{1}{2h\\,\\nu}\\,U\\left(a +O\\left(\\frac{\\log \\nu}{\\nu}\\right)\\right)\\\\\n& &\\\\\n&=&\\frac{U(a)}{2h}\\,\\frac{1}{\\nu}+\nO\\left(\\frac{\\log \\nu}{\\nu^2}\\right).\n\\end{eqnarray*}\nNow the claim can be verified by replacing this last expression of\n$U(x(\\nu))$ in the last term of (\\ref{eq-sx.asymptotic}).\n\nGiven now another geodesic ray\n$\\sigma : [0,+\\infty)\\to E^N$,\ndenoting $\\sigma(s)=y(t_\\sigma(s))$\nthe reparametrization such that $y(t)$ is a motion\nof energy constant $h$,\nand denoting $s_y$ the 
inverse of $t_\\sigma$,\nit is clear from the previous asymptotic estimates that the\ndifference $s_x(t)-s_y(t)$ is bounded.\nSince the derivatives of $s_x$ and $s_y$\nare both bounded below by the same positive constant,\nwe easily conclude that\n$t_\\gamma(s)-t_\\sigma(s)$ is also bounded.\nReplacing this in the asymptotic expansions of $x(t)$ and $y(t)$\nwe find that $\\gamma(s)-\\sigma(s)$ is bounded. \n\\end{proof}\n\n\n\n\n\n\\section{Open questions on bi-hyperbolic motions}\n\\label{s-bi.hyp}\n\nWe finish with some general open questions.\nThey are closely related to the recent advances made by\nDuignan {\\it et al.} \\cite{DuMoMoYu}\nin which the authors show in particular that\nthe limit shape map $(x,v)\\mapsto (a^-,a^+)$\ndefined below is actually real analytic.\n\nWe define bi-hyperbolic motions\nas those which are defined for all $t\\in\\mathbb{R}$,\nand are hyperbolic both in the past and in the future.\nThe orbits of these entire solutions\ndefine a non-empty open set in the phase space,\nnamely the intersection of the two open sets\n\\[\\mathcal{H}=\\mathcal{H}^+\\cap\\mathcal{H}^-\\]\nwhere\n$\\mathcal{H}^+\\subset T\\Omega=\\Omega\\times E^N$\nis the set of the initial conditions giving rise\nto hyperbolic motions in the future, and\n$\\mathcal{H}^-=\\set{(x,v)\\in T\\Omega\\mid (x,-v)\\in\\mathcal{H}^+}$\nis the set of the initial conditions\ngiving rise to hyperbolic motions in the past.\nNewton's equations define a complete vector field\nin the open set $\\mathcal{H}\\subset\\Omega\\times E^N$.\nWe will denote by $\\varphi^t$ the corresponding flow\nand $\\pi:\\Omega\\times E^N\\to\\Omega$\nthe projection onto the first factor.\n \nWe also note that this open and completely invariant set\nhas a natural global section,\ngiven by the section of \\emph{perihelia}:\n\\[\\mathcal{P}=\\mathcal{H}\\cap\\set{(x,v)\\in T\\Omega\\mid \\inner{x}{v}=0}\\,.\\]\n\n\\begin{proposition}\nThe flow $\\varphi^t$ in $\\mathcal{H}$\nis conjugate to the shift in 
$\\mathcal{P}\\times\\mathbb{R}$.\n\\end{proposition}\n\n\\begin{proof}\nGiven $(x_0,v_0)\\in\\mathcal{H}$,\nlet $x(t)=\\pi(\\varphi^t(x_0,v_0))$ be\nthe generated bi-hyperbolic motion.\nSince $I=\\inner{x}{x}$,\nit follows from the Lagrange-Jacobi identity $\\ddot I=4h+2U$,\nthat $I$ is a proper and strictly convex function.\nThus, there is a unique $t_p\\in\\mathbb{R}$ such that \n$\\varphi^{t_p}(x_0,v_0)\\in\\mathcal{P}$.\nMoreover,\nthe sign of $\\dot I=\\inner{x}{\\dot x}$ is the sign of $t-t_p$\nand $\\norm{x(t)}$ reaches its minimal value at $t=t_p$.\nThe conjugacy is thus given by the map\n$(x_0,v_0)\\mapsto (p(x_0,v_0),-t_p)$,\nwhere $p:\\mathcal{H}\\to\\mathcal{P}$ gives the phase point at perihelion\n$p(x_0,v_0)=(x(t_p),\\dot x(t_p))$.\n\\end{proof}\n\nNaturally associated with each bi-hyperbolic motion,\nthere is the pair of limit shapes that it produces\nboth in the past and in the future.\nMore precisely, we can define the \\emph{limit shape map}\n$S:\\mathcal{H}\\to\\Omega\\times\\Omega$ by\n\\[S(x,v)=(a^-(x,v),a^+(x,v))\\]\n\\[a^\\pm(x,v)=\\lim_{t\\to\\pm\\infty} \\;\\abs{t}^{-1}\\pi(\\varphi^t(x,v))\\,.\\]\nAs a consequence of Chazy's \\emph{continuity of the instability}\n(Lemma \\ref{lema-cont.limitshape}) we have that\nthe limit shape map is actually a continuous map.\nIt is also clear that\n\\[\\norm{a^-(x,v)}=\\norm{a^+(x,v)}\\] for all $(x,v)\\in\\mathcal{H}$.\nIn fact, we have\n\\[\\norm{a^\\pm(x,v)}^2=2h=\\norm{v}^2-2U(x)\\]\nwhere $h>0$ is the energy constant\nof the generated bi-hyperbolic motion.\nHence the image of $S$ is contained in the manifold\n\\[\n\\mathcal{S}=\n\\set{(a,b)\\in\\Omega\\times\\Omega\\mid \\norm{a}=\n\\norm{b}}\\,.\n\\]\nClearly, we have $S\\circ\\varphi^t=S$ for all $t\\in\\mathbb{R}$.\nTherefore the study of the limit shape map can be restricted\nto the section of perihelia $\\mathcal{P}$.\nCounting dimensions we get\n\\[\\dim\\mathcal{P}=2dN-1=\\dim\\mathcal{S}\\]\nwhere $d=\\dim E$.\n\nWe will see now that the 
center of mass\ncan be reduced to the origin.\nLet us call $G:E^N\to E$ the linear map that associates\nto each configuration its center of mass.\nMore precisely,\nif $M=m_1+\dots+m_N$ is the total mass of the system,\nthen the center of mass $G(x)$ of $x=(r_1,\dots,r_N)\in E^N$\nis well defined by the condition\n$MG(x)=m_1r_1+\dots+m_Nr_N$.\nJust as we did for the quantities $U$ and $I$,\nwe will write $G(t)$ instead of $G(x(t))$ when\nthe motion $x(t)$ is understood.\nWe observe now that if $x(t)=ta^++o(t)$ as $t\to+\infty$,\nthen $G(t)=tG(a^+)+o(t)$.\nMoreover, since $\ddot G(t)=0$ for all $t\in\mathbb{R}$ we know that\nthe velocity of the center of mass $\dot G(t)=v_G$ is constant,\nhence $G(t)=tv_G+G(0)$.\nTherefore we must have $G(a^+)=v_G$.\nIf in addition $x(t)=-ta^-+o(t)$ as $t\to-\infty$,\nthen we also have $G(a^-)=-v_G$.\nWe conclude that\n\[G(a^-(x,v))=-\;G(a^+(x,v))\]\nfor all $(x,v)\in\mathcal{H}$.\nThis allows us to reduce the codomain\nof the limit shape map by $d$ dimensions.\nOn the other hand,\na constant translation of a bi-hyperbolic motion\ngives a new bi-hyperbolic motion with the same limit shapes.\nThus the domain can also be reduced by $d$ dimensions\nby imposing the condition $G(x(0))=0$.\n\nFinally,\nwe note that bi-hyperbolic motions are preserved\nby addition of uniform translations.\nLet $\Delta\subset E^N$ be the diagonal subspace,\nthat is the set of configurations of total collision. 
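The center-of-mass identities above are elementary to check numerically. A minimal Python sketch, with assumed masses and randomly chosen planar data (none of it taken from the text), verifying the linearity of $G$, the orthogonality $\Delta\perp\ker G$ for the mass inner product, and the resulting equality of norms:

```python
import math
import random

random.seed(0)

# Hypothetical data: three planar bodies with illustrative masses m_i.
m = [1.0, 2.0, 3.0]
M = sum(m)

def G(x):
    """Center of mass of a configuration x = [r_1, r_2, r_3], r_i in R^2."""
    return [sum(mi * ri[k] for mi, ri in zip(m, x)) / M for k in (0, 1)]

def mass_inner(x, y):
    """Mass inner product <x, y> = sum_i m_i r_i . s_i."""
    return sum(mi * (ri[0] * si[0] + ri[1] * si[1]) for mi, ri, si in zip(m, x, y))

# A random configuration and a uniform translation v = (u, u, u) in Delta.
x = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
u = [0.7, -0.3]
v = [u, u, u]

# Linearity: G(x + t v) = G(x) + t u, since G(v) = u.
t = 2.5
xt = [[ri[k] + t * u[k] for k in (0, 1)] for ri in x]
assert all(math.isclose(G(xt)[k], G(x)[k] + t * u[k], abs_tol=1e-12) for k in (0, 1))

# Orthogonality Delta ⟂ ker G: remove the center of mass to land in ker G.
g = G(x)
y = [[ri[k] - g[k] for k in (0, 1)] for ri in x]
assert abs(mass_inner(v, y)) < 1e-12

# Pythagoras: ||a + v||^2 = ||a||^2 + M |u|^2 whenever G(a) = 0, so a^+ + v
# and a^- - v keep equal norms when ||a^+|| = ||a^-||.
norm2 = mass_inner(y, y) + M * (u[0] ** 2 + u[1] ** 2)
yv = [[yi[k] + u[k] for k in (0, 1)] for yi in y]
assert math.isclose(mass_inner(yv, yv), norm2)
```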
\nFor any bi-hyperbolic motion $x(t)$\nwith limit shapes $a^-$ and $a^+$, \nand any $v\in\Delta$,\nwe get a new bi-hyperbolic motion $x_v(t)=x(t)+tv$,\nwhose limit shapes are precisely $a^--v$ and $a^++v$.\nIn particular these configurations without collisions\nhave opposite center of mass and the same norm.\nThe equality of the norms can also be deduced\nfrom the orthogonal decomposition\n$E^N=\Delta\oplus \ker G$ and using the fact that\n$G(a^+-a^-)=0$.\n\nIn sum,\nwe can perform the total reduction of the center of mass\nby setting $G(x(0))=G(\dot x(0))=0$,\nwhich leads to $G(a^-)=G(a^+)=0$.\nWe define\n\[\mathcal{P}_0=\set{(x,v)\in\mathcal{H}\mid\;\nG(x)=G(v)=0\text{ and }\inner{x}{v}=0}\]\n\[\mathcal{S}_0=\set{(a,b)\in\Omega\times\Omega\mid\;\nG(a)=G(b)=0\text{ and }\norm{a}=\norm{b}}\]\nand we maintain the balance of dimensions.\n\n\begin{question}\nIs the limit shape map $S:\mathcal{P}_0\to\mathcal{S}_0$\na local diffeomorphism?\n\end{question}\n\nThe answer is yes in the Kepler case\n(see Figure \ref{HyperKepler1}).\nBut in the general case,\nthis property must depend on the potential $U$.\nFor instance,\nin the extremal case of $U=0$,\nin which motions are thus straight lines,\nwe get the restriction $a^-=-a^+$ for every hyperbolic motion.\nIn this case the limit shape map loses half of the dimensions.\n\nIt is therefore natural to ask, for the general $N$-body problem,\nwhether or not\nthere is some relationship between the two functions $a^-$ and $a^+$.\n\n\begin{question}\nHow big is the image of the limit shape map?\n\end{question}\n\nIn the Kepler case, only the pairs $(a,b)$ such that\n$\norm{a}=\norm{b}$ and $a\neq\pm\, b$\nare realized as asymptotic velocities of some hyperbolic trajectory.\nThis can be generalized for $N\geq 3$.\nIf $a\in\Omega$ is a planar central configuration\nand $R\in SO(E)$ keeps invariant the plane containing $a$,\nthe pair $(a,Ra)$ is realized as the limit shapes\nof a unique homographic hyperbolic 
motion,\nexcept in the cases $R=\pm\,Id$.\n\n\begin{figure}[h]\n\centering\n\includegraphics[scale=1.1]{HyperKepler1.pdf}\n\caption{Hyperbolic motions of the Kepler problem\nwith fixed value of the energy constant $h>0$\nand asymptotic velocity $a$ in the future.\nAll but one of these motions are bi-hyperbolic.\nThe blue curve $\mathcal{P}$ is composed of the\ncorresponding perihelia.} \n\label{HyperKepler1}\n\end{figure}\n\nWe now devote attention to the effect of homogeneity.\nRecall that\nif $x(t)$ is a bi-hyperbolic motion of energy constant $h$,\nthen for every $\lambda>0$ the solution given by\n$x_\lambda(t)=\lambda\,x(\lambda^{-3\/2}t)$\nis still bi-hyperbolic with energy constant $\lambda^{-1}h$.\nMoreover,\nif we write $x_0=x(0)$ and $v_0=\dot x(0)$, then we have\n\[\n(x_\lambda(t),\dot x_\lambda(t))=\n\varphi^t(\lambda\, x_0,\lambda^{-1\/2}\,v_0)\n\]\nfor all $t\in\mathbb{R}$.\nThese considerations prove the following remark.\n\n\begin{remark}\nFor any $(x,v)\in\mathcal{H}$ and for any $\lambda>0$, we have\n\[S(\lambda\, x,\lambda^{-1\/2}\,v)=\lambda^{-1\/2}\,S(x,v)\,.\]\n\end{remark}\n\nLet us introduce the following question with an example.\nConsider the planar three-body problem with equal masses.\nThat is, $E=\mathbb{R}^2\simeq\mathbb{C}$, $N=3$ and $m_i=1$ for $i=1,2,3$.\nFor $h>0$, define the equilateral and collinear configurations\n\[a_h=\sqrt\frac{2h}{3}\,(1,z,z^2)\qquad b_h=\sqrt h\,(-1,0,1)\]\nwhere $z$ is a primitive cube root of unity.\nThus we have $\norm{a_h}=\norm{b_h}=\sqrt{2h}$ and also\n$G(a_h)=G(b_h)=0$ for all $h>0$.\n\n\begin{question}\nIs the pair $(a_h,b_h)$ in the image of the limit shape map?\n\end{question}\n\nIn other words,\nis there a bi-hyperbolic motion whose dynamics originates\nin the past with a contraction from a big equilateral triangle,\nand then, after a period of strong interaction between the particles,\nthe evolution ends with an almost collinear 
expansion?\n\nIn our view, the method of viscosity solutions\ncould be useful to answer this question. \nIn particular, we consider it necessary to push forward\nthe understanding of the regions of differentiability\nof these weak solutions.\nIt seems reasonable that an orbit like this\ncan be found by looking for critical points\nof a sum of two Busemann functions (see Sect. \ref{s-busemann}).\n\n\begin{question}\nIf the answer to Question 3 is yes,\nwhat is the infimum of the norm of the perihelia\nof the bi-hyperbolic motions having these limit shapes? \n\end{question}\n\nObserve that once we have a bi-hyperbolic motion which is\nequilateral in the past and collinear in the future,\nwe can play with the homogeneity in order to obtain a new one\nhaving a perihelion contained in an arbitrarily small ball.\nThat is to say, it would be possible to make, at some point,\nall bodies pass as close as we want to a total collision.\nOf course, to do this\nwe must increase the value of the energy constant indefinitely.\nThus we preserve the limit shapes in the weak sense,\nbut not the size of the asymptotic velocities.\n In the family of motions $(x_\lambda)$ described above,\nthe product of the energy constant $h$\nand the norm of the perihelion is constant.\nIn the Kepler case,\nonce we fix the value of $h>0$\nthere is only one bi-hyperbolic motion\nconnecting a given pair $(a,b)$ (see \cite{Alb}).\nTherefore we can see the norm of the perihelion\nas a function of the limit shapes.\nWe can see that the norm of the perihelion\ntends to $0$ for $a\to b$,\nand tends to $+\infty$ for $a\to -b$.\n\n\n\n\n\n\n\n\n\n\n\n\n\subsection*{Acknowledgements}\nThe authors would like to express\nvery special thanks to Albert Fathi,\nwho suggested the use of the method\nof viscosity solutions, as well as the way\nto construct the maximal calibrating curves\nof the horofunctions. 
\nThe first author is also grateful to\nAlain Albouy, Alain Chenciner and Andreas Knauf\nfor several helpful conversations at the IMCCE.\nFinally, we would like to thank the referee\nfor his accurate and helpful comments,\nand Juan Manuel Burgos\nfor pointing out a subtle mistake in one of the proofs.\n\n\n\n\n\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nUnderstanding open system dynamics and decoherence is important in several areas of quantum physics~\\cite{BreuerBook_2002,RivasBook_2012}. During the last ten years, there have been significant developments in both understanding the role of non-Markovian memory-effects~\\cite{Rivas_2014, Breuer_2016, Hall_2018, Li_2019,Li_2019_EPL2} and in developing improved tools and techniques to treat open system \ndynamics~\\cite{Vega_2017}.\nHere, one of the common themes is the role that various types of correlations play in open system dynamics. In particular, understanding\n initial correlations between composite environments~\\cite{Laine_2012,Liu_2013,Laine_2014,Xiang_2014,Liu_2016,HamedaniRaja_2017} and the role of initial system-environment correlations~\\cite{pechukas94,Alicki_1995,pechukas95,jordan04,shaji05a,Linta-Shaji,Wiseman_2019,Lyyra_2018,Alipour_2019} have led to fundamental insights as well as practical knowledge regarding open systems.\n\nPhotons provide a common and highly controllable system where the influence of correlations can be studied both conceptually and practically~\\cite{Laine_2012,Liu_2013,Laine_2014,Xiang_2014,Liu_2016,HamedaniRaja_2017,Lyyra_2018}. Here, the polarization state of the photon is the open system and its frequency is the environment. Polarization and frequency are coupled via birefringence leading to dephasing of a polarization state of the photon(s). 
The control of the initial frequency distribution allows for the engineering of the decoherence and it is also possible to exploit various correlations for single photon or composite two-photon systems~\cite{ElsiNature2011,ElsiPRL2012,Lyyra_2018}.\n\nOn the one hand, dephasing dynamics of photons has often been described in the past using the concept of decoherence functions and the subsequent family of completely positive (CP) dynamical maps. On the other hand, master equations are one of the most common tools to treat open system dynamics~\cite{BreuerBook_2002}. However, master equations have not been used extensively when considering multipartite photonic systems and dephasing.\nWe consider first a bipartite two-photon system where the initial system-environment state is factorized whilst there exist initial correlations between the environmental states. \nIt has been shown earlier that this induces non-local memory effects in open system dynamics~\cite{ElsiPRL2012,Liu_2013}. However, the role of these types of initial correlations and non-local memory effects has not been considered on the level of master equations before, to the best of our knowledge. We derive generic master equations which display explicitly the role of initial correlations both in the dephasing rates and in the operator form of the master equation. This also reveals how even quite straightforward changes in the initial environmental state drastically change the description of photonic dephasing and increase the number of jump operators in the master equation. \n\nContinuing within the framework of correlations and open systems, we also study another long-standing problem in this context.\nThis is the role that initial system-environment correlations play in open system dynamics. Here, our interest is to see what kind of insight the recently developed\n\textit{bath positive decomposition} method~\cite{Wiseman_2019} allows when studying the open dynamics of the polarization states. 
This very general method is based on decomposing an arbitrary initial system-environment \nstate into a number of terms where each term can be treated with its individual CP-map. We show that for single-photon dephasing, this decomposition allows one to describe, in an insightful way, how initial correlations influence the dynamics beyond the contribution arising from the factorized part.\n\nThe structure of the paper is the following. In the next section we describe briefly the basics of photonic dephasing.\nIn Section \ref{sec3} we focus on the correlations within the composite environment and derive various master equations in this context and discuss the insight they provide.\nSection \ref{sec4}, in turn, describes the initially correlated system-environment case for a single photon and Sec.~\ref{sec5} concludes the paper.\n\n\section{Preliminaries with single photon dephasing dynamics}\n\nWe start with a brief recall of the single-photon dephasing model~\cite{ElsiNature2011}.\nThe polarization and frequency degrees of freedom of a photon correspond to the open system and its environment, respectively. To begin with, we consider an initially factorized joint polarization-frequency state \n\begin{equation}\n\hat{\rho}_{SE}(0)=\hat{\rho}_S(0)\otimes \ket{\Omega}\bra{\Omega}.\n\end{equation}\nHere, $\hat{\rho}_S(0)$ is the density operator of the initial polarization state and \n\begin{equation}\n\ket{\Omega}= \int d \omega \; g(\omega)\ket{\omega},\n\end{equation}\nis the initial frequency state where $g(\omega)$ is the probability amplitude that the photon has frequency $\omega$. \nThe polarization Hilbert space is discrete and spanned by the horizontal-vertical polarization basis $\{\ket{h},\ket{v}\}$, while the Hilbert space of the frequency degree of freedom is spanned by the continuous frequency basis $\{\ket{\omega}\}$. 
\n\nThe system-environment -- or polarization-frequency -- interaction is provided by the Hamiltonian ($\hbar=1$)\n\begin{equation}\label{eq:Hamiltonian}\n\hat{H}=(n_h\ket{h}\bra{h}+n_v\ket{v}\bra{v})\otimes\int d \omega \; \omega \; \ket{\omega}\bra{\omega} ,\n\end{equation}\nwhere $n_h$ ($n_v$) is the refraction index for polarization component $h$ ($v$).\nFor interaction time $t$, and tracing over the frequency, the reduced polarization state is\n\begin{equation}\label{eq:MapSinglePhoton}\n\hat{\rho}_S(t)=\begin{pmatrix}\n\bra{ h} \rho_S(0)\ket{h} & \kappa(t)\bra{ h} \rho_S(0)\ket{v} \\\n\kappa(t)^{*}\bra{ v} \rho_S(0)\ket{h} & \bra{ v} \rho_S(0)\ket{v}\n\end{pmatrix}.\n\end{equation}\nHere, the dephasing dynamics is given by the decoherence function\n\begin{equation}\n\label{eq:kappa}\n \kappa(t)=\int d\omega \; \vert g(\omega)\vert^2 \mathrm{e}^{-i \Delta n \omega t},\n \end{equation}\nwhere $\Delta n \equiv n_v-n_h$. Note that $0\leq \vert \kappa(t) \vert \leq 1 $ for all times $t\geq 0$ and $\vert \kappa(0) \vert=1$. 
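As a numerical illustration of Eq.~\eqref{eq:kappa}, the decoherence function can be evaluated by discretizing the Fourier integral of the frequency distribution. A minimal Python sketch with assumed, illustrative parameter values (not from the text); for a Gaussian $P(\omega)$ the result should match the closed form $\kappa(t)=e^{-\sigma^2\Delta n^2 t^2\/2}e^{-i\Delta n\bar{\omega} t}$:

```python
import cmath
import math

# Illustrative (assumed) parameters: Gaussian P(omega) with mean wbar and
# standard deviation sigma, birefringence Delta_n, interaction time t.
wbar, sigma, dn, t = 3.0, 0.5, 0.2, 4.0

def P(w):
    return math.exp(-(w - wbar) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def kappa(t, n=20000):
    """kappa(t) = integral of P(w) exp(-i Delta_n w t) dw (midpoint rule)."""
    lo, hi = wbar - 8 * sigma, wbar + 8 * sigma
    dw = (hi - lo) / n
    return sum(P(lo + (j + 0.5) * dw) * cmath.exp(-1j * dn * (lo + (j + 0.5) * dw) * t)
               for j in range(n)) * dw

# Closed form of the Fourier transform of a Gaussian.
exact = cmath.exp(-sigma ** 2 * dn ** 2 * t ** 2 / 2 - 1j * dn * wbar * t)
assert abs(kappa(t) - exact) < 1e-6
assert abs(abs(kappa(0.0)) - 1) < 1e-6   # |kappa(0)| = 1
```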
\n\nEquation \\eqref{eq:MapSinglePhoton} describes a $t$-parametrized completely positive (CP) map $\\hat{\\Phi}_t$, such that $\\hat{\\rho}_S(t)=\\hat{\\Phi}_t[\\hat{\\rho}_S(0)]$, and its corresponding master equation takes the form\n \\begin{equation}\\label{eq:MasterEq}\n\\frac{d}{dt}\\hat{\\rho}_S(t)=-i\\frac{\\nu(t)}{2}[\\hat{\\sigma}_z,\\hat{\\rho}_S(t)]+\\frac{\\gamma(t)}{2}[\\hat{\\sigma}_z \\hat{\\rho}_S(t) \\hat{\\sigma}_z-\\hat{\\rho}_S(t)].\n\\end{equation}\nHere, $\\hat{\\sigma}_z$ is the Pauli $z$ operator and the rates $\\nu(t)$ and $\\gamma(t)$ can be expressed in terms of the decoherence function $\\kappa(t)$ as\n\\begin{equation}\\label{eq:Rates}\n\\gamma(t)=-\\Re \\bigg[\\frac{1}{\\kappa(t)} \\frac{d\\kappa(t)}{dt}\\bigg], \\quad \\nu(t)=-\\Im \\bigg[\\frac{1}{\\kappa(t)} \\frac{d\\kappa(t)}{dt}\\bigg],\n\\end{equation}\nwhere, $\\Re[\\cdot]$ and $\\Im[\\cdot]$ indicate the real and imaginary parts, respectively. \n\nEquation~\\eqref{eq:Rates} shows that once the decoherence function $\\kappa(t)$ is obtained from Eq.~\\eqref{eq:kappa}, then we can derive the corresponding rates in master equation ~\\eqref{eq:MasterEq}. Indeed, the decoherence function $\\kappa(t)$ in Eq.~\\eqref{eq:kappa} is the Fourier transformation of the initial frequency \nprobability distribution $P(\\omega) =|g(\\omega)|^2$, and therefore the control of this distribution allows to study various types of dephasing maps and to engineer the form and time dependence of the dephasing rate $\\gamma(t)$ in master equation ~\\eqref{eq:MasterEq}.\n\nFor example, a Gaussian frequency distribution with variance $\\sigma^2$ and mean value $\\bar{\\omega}$, i.e., \n\\[P(\\omega)=\\frac{\\mathrm{exp}[-(\\omega-\\bar{\\omega})^2\/2\\sigma^2]}{\\sqrt{2\\pi}\\sigma},\\]\nleads to a positive and time dependent dephasing rate $\\gamma(t)= \\Delta n ^2 \\sigma^2 t$ which presents a time-dependent Markovian dynamics. 
On the other hand, a Lorentzian distribution \n\[P(\omega)=\frac{\lambda}{\pi[(\omega-\omega_0)^2+\lambda^2]},\]\nresults in a constant decay rate $\gamma=\lambda \Delta n $, corresponding to a dynamical semigroup and Lindblad-Gorini-Kossakowski-Sudarshan (LGKS) dynamics~\cite{GoriniJmathPhys1976,LindbladCommun1976}. We note that the latter case has also been reported in \cite{Smirne2014, Smirne2018}. \nThe transition from the Markovian to the non-Markovian regime, in turn, is observed with further modifications of the frequency distribution~\cite{ElsiNature2011}.\n\nIn the following, we generalize the master equation~\eqref{eq:MasterEq} to the two-photon case. In particular, we are interested in how the initial correlations between the frequencies of the two photons influence the various dephasing rates and the operator form of the corresponding master equation for a bipartite open system. \n\n\section{Master equation for two-photon dephasing dynamics: role of initially correlated joint frequency distribution} \label{sec3}\n\nConsider a pair of photons, labeled $a$ and $b$, whose total polarization-frequency initial state is again in a factorized form\n\begin{equation}\n\hat{\rho}_{SE}(0)=\hat{\rho}_S(0)\otimes \ket{\Omega}\bra{\Omega},\n\end{equation}\nwhere now\n\begin{equation}\n\ket{\Omega}=\int d \omega_a \int d \omega_b \; g(\omega_a,\omega_b)\ket{\omega_a,\omega_b},\n\end{equation}\nis the initial state of the two-photon frequency degree of freedom and the corresponding joint probability distribution is $P(\omega_a,\omega_b)=\vert g(\omega_a,\omega_b)\vert ^2$. The initial polarization state is $\hat{\rho}_S(0)$, whose Hilbert space is spanned by the bipartite basis $\{\ket{hh},\ket{hv},\ket{vh},\ket{vv}\}$. 
\n\nThe polarization of each photon interacts locally with its own frequency and therefore the system-environment interaction Hamiltonian for the two photons is a sum of the two local contributions~\cite{ElsiPRL2012}\n\begin{equation}\label{eq:HamiltonianTwoPhoton}\n\hat{H}=\hat{H}_a \otimes \hat{I}_b + \hat{I}_a \otimes \hat{H}_b.\n\end{equation}\nHere, each local term is given by Eq.~\eqref{eq:Hamiltonian} and $\hat{I}_a$ ($\hat{I}_b$) is the identity operator for photon $a$ ($b$).\n\nWe write the initial bipartite polarization state $\hat{\rho}_S(0)$ as\n\[\hat{\rho}_S(0)=\sum_{\alpha,\beta}\sum_{\alpha',\beta'}p_{\alpha\beta,\alpha'\beta'}\ket{\alpha \beta}\bra{\alpha' \beta'},\]\nwith sums over $h$ and $v$. After interaction time $t$, the polarization state is \citep{ElsiPRL2012}\n\begin{flalign}\label{eq:MapTwoPhotoon}\n&\hat{\rho}_S(t)=\n\\ \nonumber\n&\begin{pmatrix}\np_{hh,hh} & \kappa_b(t)p_{hh,hv} & \kappa_a(t)p_{hh,vh} & \kappa_{ab}(t)p_{hh,vv} \\\n\kappa_b^{*}(t) p_{hv,hh} & p_{hv,hv} & \Lambda_{ab}(t)p_{hv,vh} & \kappa_a(t)p_{hv,vv} \\\n\kappa_a^{*}(t) p_{vh,hh} & \Lambda_{ab}^{*}(t)p_{vh,hv} & p_{vh,vh} & \kappa_b(t)p_{vh,vv} \\\n\kappa_{ab}^{*}(t) p_{vv,hh} & \kappa_a^{*}(t)p_{vv,hv} & \kappa_b(t)^{*} p_{vv,vh} & p_{vv,vv}\n\end{pmatrix}.\n\end{flalign}\nHere, the local decoherence functions for photon $j=a,b$ are given by\n\begin{equation}\n\label{k-loc}\n\kappa_j(t)=\int d\omega_a \int \;d\omega_b \; \vert g(\omega_a, \omega_b)\vert^2 \mathrm{e}^{-i \Delta n \omega_j t},\n\end{equation}\nand the non-local ones by\n\begin{equation}\n\label{k1-nloc}\n\kappa_{ab}(t)=\int d\omega_a \int \; d\omega_b \; \vert g(\omega_a, \omega_b)\vert^2 \mathrm{e}^{-i \Delta n(\omega_a+\omega_b)t},\n\end{equation}\nand \n\begin{equation}\n\label{k2-nloc}\n\Lambda_{ab}(t)=\int d\omega_a \int \; d\omega_b \;\vert g(\omega_a, \omega_b)\vert^2 \mathrm{e}^{-i \Delta n(\omega_a-\omega_b)t}.\n\end{equation}\nThe density matrix evolution given by Eqs.~(\ref{eq:MapTwoPhotoon}-\ref{k2-nloc})\ncan also be described by a $t$-parametrized completely positive dynamical map $\hat{\Phi}_t$, such that \n\begin{equation}\label{eq:DynMap}\n\hat{\rho}_S(t)=\hat{\Phi}_t[\hat{\rho}_S(0)].\n\end{equation} It is important to note that when the initial joint frequency distribution factorizes, $P(\omega_a,\omega_b)=P_a(\omega_a)\times P_b(\omega_b)$, then the global decoherence functions are products of the local ones, i.e.,\n$\kappa_{ab} (t) = \kappa_a (t) \kappa_b (t)$ and $\Lambda_{ab} (t)= \kappa_a (t) \kappa^{\ast}_b (t)$. Subsequently, the map for the bipartite photon system is a tensor product of the local CP maps $\hat{\Phi}_t=\hat{\Phi}_t^{(a)} \otimes \hat{\Phi}_t^{(b)}$. However, when the initial frequency distribution does not factorize, $P(\omega_a,\omega_b)\neq P_a(\omega_a)\times P_b(\omega_b)$, and contains correlations, then the map for the bipartite system is no longer a product of the local maps, $\hat{\Phi}_t\neq\n\hat{\Phi}_t^{(a)} \otimes \hat{\Phi}_t^{(b)}$~\cite{ElsiPRL2012}. Now, we are interested in how to derive the generator of the corresponding non-local bipartite dynamical map, and in what modifications appear in the corresponding dephasing master equation when the amount of initial frequency correlations changes.\n\nWe begin our derivation by writing the dynamical map formally as\n\begin{equation}\label{eq:dynMap}\n\hat{\Phi}_t = \exp \Big[\int_0^t d\tau \hat{\mathcal{L}}_{\tau}\Big],\n\end{equation}\nwhere $\hat{\mathcal{L}}_t$ is the generator of the dynamics. 
Finding an expression for the generator then provides us the master equation we want to construct as\n\\begin{equation}\\label{eq:formalME}\n\\frac{d}{dt}\\hat{\\rho}_S(t)=\\hat{\\mathcal{L}}_t [\\hat{\\rho_S}(t)].\n\\end{equation}\nProvided that the map in Eq.~\\eqref{eq:dynMap} is invertible and its derivative is well-defined, one can obtain the generator as\n\\begin{equation}\\label{eq:genMapInvMap}\n\\mathcal{\\hat{L}}_t=\\frac{d}{dt}\\hat{\\Phi}_t \\circ \\hat{\\Phi}^{-1}_t.\n\\end{equation}\nTo find the generator in Eq.~\\eqref{eq:genMapInvMap} we need a suitable representation for the dynamical map $\\hat{\\Phi}_t$. With this in mind, we expand the two-photon density matrix $\\hat{\\rho}_S(t)$ in terms of a complete and orthonormal operator basis $\\{\\hat{F_{\\alpha}}\\}$. Specifically, we choose here fifteen generators of $\\mathrm{SU}(4)$, whose exact expressions can be found in~\\cite{Alicki1987}, plus $\\hat{F}_1=\\hat{I}\/\\sqrt{4}$, such that $\\mathrm{Tr}[\\hat{F}_i^{\\dagger} \\hat{F}_j]=\\delta_{i j}$. It is worth mentioning that one can alternatively use the basis constructed by the tensor product of Pauli matrices plus the identity. 
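For instance, the alternative Pauli tensor-product basis mentioned above can be constructed and its orthonormality $\mathrm{Tr}[\hat{F}_i^{\dagger} \hat{F}_j]=\delta_{ij}$ checked in a few lines. A sketch (an illustration, not part of the original derivation; it assumes NumPy):

```python
import numpy as np

# Normalized tensor products of the identity and the Pauli matrices,
# F = (s_j kron s_k)/2, forming an orthonormal operator basis of the
# two-photon polarization space, with F_1 = I/2.
I = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

basis = [np.kron(a, b) / 2 for a in (I, sx, sy, sz) for b in (I, sx, sy, sz)]

# Gram matrix of Hilbert-Schmidt inner products Tr[F_a^dagger F_b].
gram = np.array([[np.trace(Fa.conj().T @ Fb) for Fb in basis] for Fa in basis])
assert np.allclose(gram, np.eye(16))              # orthonormal
assert len(basis) == 16 and np.allclose(basis[0], np.eye(4) / 2)
```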
Having fixed the basis for the representation, the two-photon polarization state at time $t$ is\n\begin{equation}\label{eq:BlochVec1}\n\hat{\rho}_S(t)=\sum_{\alpha=1}^{16} r_{\alpha} (t)\hat{F}_{\alpha}, \qquad r_{\alpha}(t)= \mathrm{Tr}[\hat{F}_{\alpha}\hat{\rho}_S(t)],\n\end{equation}\nwhere the coefficients $\{r_{\alpha}\}$ form the generalized Bloch vector corresponding to the state $\hat{\rho}_S(t)$ as\n\begin{equation}\label{eq:BlochVec2}\n\vec{r}(t)=(1\/2,r_2(t),...,r_{16}(t))^{\mathrm{T}}.\n\end{equation} \nBy using Eq.~\eqref{eq:BlochVec1} for both $\hat{\rho}_S(t)$ and $\hat{\rho}_S(0)$, we can write Eq.~\eqref{eq:DynMap} as\n\begin{equation}\label{eq:MapBlochRep}\nr_{\alpha}(t)=\sum_{\beta}[\hat{\Phi}_t]_{\alpha \beta} r_{\beta}(0),\n\end{equation}\nwhere $[\hat{\Phi}_t]$ is the transformation matrix corresponding to the map $\hat{\Phi}_t$ represented in the basis $\{\hat{F_{\alpha}}\}$. The elements of this matrix depend on the decoherence functions given in \nEqs.~(\ref{k-loc}-\ref{k2-nloc}) and each column can be systematically calculated by using a proper pair of initial and evolved states (cf.~Eq.~\eqref{eq:MapTwoPhotoon}). 
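A sketch of this construction in Python (assuming NumPy, and using illustrative, assumed values of the decoherence functions rather than ones computed from a specific $P(\omega_a,\omega_b)$): the dephasing map acts entrywise on the density matrix as in Eq.~\eqref{eq:MapTwoPhotoon}, and the matrix $[\hat{\Phi}_t]$ is obtained element by element from $\mathrm{Tr}[\hat{F}_\alpha^\dagger \hat{\Phi}_t[\hat{F}_\beta]]$.

```python
import numpy as np

# Pauli tensor-product operator basis of the two-photon polarization space.
I = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [np.kron(a, b) / 2 for a in (I, sx, sy, sz) for b in (I, sx, sy, sz)]

def phi_matrix(ka, kb, kab, Lab):
    """[Phi_t] in the basis F, for given decoherence-function values."""
    # Entrywise (Hadamard) multiplier in the basis {hh, hv, vh, vv}.
    M = np.array([[1,            kb,           ka,           kab],
                  [np.conj(kb),  1,            Lab,          ka ],
                  [np.conj(ka),  np.conj(Lab), 1,            kb ],
                  [np.conj(kab), np.conj(ka),  np.conj(kb),  1  ]])
    Phi = lambda rho: M * rho   # entrywise action of the dephasing map
    return np.array([[np.trace(Fa.conj().T @ Phi(Fb)) for Fb in F] for Fa in F]).real

# t = 0: all decoherence functions equal 1, so [Phi_0] is the identity.
assert np.allclose(phi_matrix(1, 1, 1, 1), np.eye(16))

# Generic (assumed) values with moduli below 1: the matrix stays invertible.
Pt = phi_matrix(0.8 * np.exp(-0.3j), 0.7 * np.exp(0.2j), 0.5, 0.9)
assert abs(np.linalg.det(Pt)) > 1e-12
```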
One can proceed to find the matrix representation of the generator by calculating the derivative and inverse of $[\hat{\Phi}_t]$ and using them in Eq.~\eqref{eq:genMapInvMap}, such that\n\begin{equation}\label{eq:genMat}\n[\hat{\mathcal{L}}_t]=\frac{d}{dt}[\hat{\Phi}_t].[\hat{\Phi}_t]^{-1},\n\end{equation}\nwhere we have replaced operator multiplication by matrix multiplication.\n\nLet us now consider the generator in a Lindblad operator form\n\begin{eqnarray}\label{eq:LindbladGen}\n\hat{\mathcal{L}}_t[\hat{\rho}_S(t)]&&=-i[\hat{H}(t),\hat{\rho}_S(t)]+\n\\\n&&\sum_{\alpha=2}^{16}\sum_{\beta=2}^{16} R_{\alpha \beta}(t)\Big(\hat{F}_{\alpha}\hat{\rho}_S(t) \hat{F}_{\beta}^{\dagger}-\frac{1}{2}\{\hat{F}_{\beta}^{\dagger}\hat{F}_{\alpha},\hat{\rho}_S(t)\}\Big), \nonumber\n\end{eqnarray}\nwhere \n\begin{equation}\n\hat{H}(t)=\frac{-1}{2i}\sum_{\alpha=2}^{16}\Big[R_{\alpha1}(t)\hat{F}_{\alpha}-R_{1\alpha}(t)^{*}\hat{F}^{\dagger}_{\alpha}\Big],\n\end{equation}\ncaptures the environment-induced coherent dynamics and $R_{\alpha \beta}(t)$ with $\alpha, \beta =2,3,...,16$ are the elements of a $15\times 15$ matrix providing the decay rates. Each element in the matrix representation of the generator then reads\n\begin{equation}\label{eq:equality}\n[\hat{\mathcal{L}}_t]_{\alpha\beta}=\mathrm{Tr}[\hat{F}_{\alpha}^{\dagger} \hat{\mathcal{L}}_t[\hat{F}_{\beta}]].\n\end{equation}\nHere we use Eq.~\eqref{eq:LindbladGen} on the right-hand side. Finally, by elementwise comparison of Eq.~\eqref{eq:equality} with Eq.~\eqref{eq:genMat} we find the decay rates of the Lindblad master equation in Eq.~\eqref{eq:LindbladGen} in terms of the decoherence functions in Eqs.~(\ref{k-loc}-\ref{k2-nloc}).\nBefore proceeding further, let us note that the generator of a CP-divisible map always has a Lindblad form \cite{Alicki1987,GoriniJmathPhys1976,LindbladCommun1976,RivasBook_2012}. 
A map $\\hat{\\Phi}_t$ is CP-divisible if it can be decomposed as $\\hat{\\Phi}_t=\\hat{\\Phi}_{t,s}\\hat{\\Phi}_s$ where the intermediate map $\\hat{\\Phi}_{t,s}$ is also a legitimate CP-map for all $t \\geqslant s \\geqslant0 $ \\cite{RivasPRL2010}. In this paper, however, we do not restrict ourselves to the CP-divisible maps and as we show later we also take non-Markovian dynamics into account.\n\nAfter finding the general expression for the decay rate matrix, it turns out that it is quite sparse and can be reduced to a $3\\times3$ matrix, which we denote by $R(t)$. The corresponding subspace is spanned by only three generators of $\\mathrm{SU}(4)$, which are linearly dependent on the operators $\\hat{I}_2 \\otimes \\hat{\\sigma}_z$, $\\hat{\\sigma}_z \\otimes \\hat{I}_2$, and $\\hat{\\sigma}_z \\otimes \\hat{\\sigma}_z$. This is indeed intuitive because population elements of the density matrix are invariant upon a dephasing channel, so those terms that couple the levels must be absent. \nThe explicit expression for the matrix $R(t)$, corresponding to a general frequency distribution, is provided in the Appendix. Considering this general result, we diagonalize it to rewrite the second term on r.h.s of Eq.~\\eqref{eq:LindbladGen} in the form\n\\begin{equation}\\label{eq:NormalizedME}\n\\hat{\\mathcal{D}}[\\hat{\\rho}_S(t)]=\\sum_{\\alpha=1}^{3} \\gamma_{\\alpha}(t)\\Big[\\hat{J}_{\\alpha}\\hat{\\rho}_S(t)\\hat{J}_{\\alpha}^{\\dagger}-\\frac{1}{2}\\{\\hat{J}_{\\alpha}^{\\dagger}\\hat{J}_{\\alpha},\\hat{\\rho}_S(t)\\}\\Big],\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{pmatrix}\n\\gamma_1(t) & 0 & 0 \\\\\n0 & \\gamma_2(t) & 0 \\\\\n0 & 0 & \\gamma_3(t)\n\\end{pmatrix}=U R(t) U^{\\dagger}, \\quad \\quad \\hat{J}_{\\alpha}=\\sum_j U_{\\alpha j}\\hat{F}_j,\n\\end{equation}\nand $U$ is the orthogonal transformation which diagonalizes the matrix $R(t)$. 
It is worth stressing that if the dynamical map at hand is CP-divisible, then all decay rates will be non-negative, i.e. $\gamma_i(t)\geq 0$ for all interaction times $t\geq 0$. \n\nThe above general results hold for arbitrary initial frequency distributions.\nIn the following, we discuss explicitly initially correlated joint frequency distributions for bivariate single- and double-peak Gaussian cases.\nThese choices are motivated by their use in recent theoretical and experimental works, see e.g.~\cite{ElsiNature2011,ElsiPRL2012,HamedaniRaja_2017}, and by their ability to account for the explicit influence of frequency correlations in the dephasing dynamics.\n\n\subsection{Single-peak bivariate Gaussian distribution}\n\nConsider the joint bivariate Gaussian frequency distribution $P_{ab}(\omega_a, \omega_b)$\nand its covariance matrix $C$, such that $C_{ij}=\langle \omega_i \omega_j\rangle-\langle\omega_i\rangle\langle \omega_j \rangle$ for $i,j=a,b$ \citep{ElsiPRL2012}.\nThe correlation coefficient is now given by $ K=C_{ab}\/\sqrt{C_{aa}C_{bb}}$, such that $-1\leq K \leq 1$. A fully anti-correlated initial frequency distribution has $K=-1$, which dictates that for any pair of $\omega_a$ and $\omega_b$ we have $\omega_a+\omega_b\equiv \omega_0$, with some constant frequency $\omega_0$.\nThe means of the local single-photon frequency distributions are given by $(\bar{\omega}_a,\bar{\omega}_b)^{\mathrm{T}}$ and we denote the difference between the local means as $\bar{\omega}_a-\bar{\omega}_b =\Delta\omega$ and their sum as $\bar{\omega}_a+\bar{\omega}_b =\omega_0$. 
\nUsing Eqs.~(\ref{k-loc}-\ref{k2-nloc}) and denoting the variance of the distribution by $\sigma^2$, the decoherence functions become\n\begin{flalign}\n&\kappa_a(t)=\mathrm{exp}\Big[\frac{-\sigma^2 \Delta n^2 t^2-i \Delta n t (\omega_0+\Delta\omega)}{2}\Big],\n\\\n&\kappa_b(t)=\mathrm{exp}\Big[\frac{-\sigma^2 \Delta n^2 t^2-i \Delta n t (\omega_0-\Delta\omega)}{2}\Big],\n\\\n&\kappa_{ab}(t)=\mathrm{exp}\big[-\sigma^2 \Delta n^2 t^2(1+K)-i \Delta n t \omega_0\big],\n\\\n&\Lambda_{ab}(t)=\mathrm{exp}\big[-\sigma^2 \Delta n^2 t^2(1-K)-i \Delta n t \Delta\omega\big].\n\end{flalign}\nIt is straightforward to check that the corresponding transformation matrix $[\hat{\Phi}_t]$ for the generalized Bloch vector is always invertible when time $t$ is finite. \nAfter inserting the above expressions into the decay rate matrix $R(t)$ (see the Appendix) and diagonalizing, we obtain the rates appearing in the master equation~\eqref{eq:NormalizedME} as follows\n\begin{flalign}\n&\gamma_1(t)=2(1-K)\sigma^2 \Delta n^2 t,\label{eq:gamma1}\n\\\n&\gamma_2(t)=2(1+K)\sigma^2 \Delta n^2 t,\label{eq:gamma2}\n\\\n&\gamma_3(t)=0,\label{eq:gamma3}\n\end{flalign}\nand the corresponding jump operators \n\begin{flalign}\n&\hat{J}_1=\frac{1}{2\sqrt{2}}(\hat{I}_2\otimes \hat{\sigma}_z - \hat{\sigma}_z \otimes \hat{I}_2),\label{eq:J1}\n\\\n&\hat{J}_2=\frac{1}{2\sqrt{2}}(\hat{I}_2\otimes \hat{\sigma}_z + \hat{\sigma}_z \otimes \hat{I}_2),\label{eq:J2}\n\\\n&\hat{J}_3=\frac{1}{2}\hat{\sigma}_z \otimes \hat{\sigma}_z.\label{eq:J3}\n\end{flalign}\n\nThe dephasing rates $\gamma_1$ and $\gamma_2$ are linear functions of time and their slopes depend on the correlation coefficient $K$. Figure~\ref{fig1} displays the rates for $K=-1,0,1$. 
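The rate expressions~(\ref{eq:gamma1})-(\ref{eq:gamma2}) can be checked directly against the decoherence functions, since the non-local coherences of Eq.~\eqref{eq:MapTwoPhotoon} decay with instantaneous rates $-\frac{d}{dt}\ln|\Lambda_{ab}(t)|$ and $-\frac{d}{dt}\ln|\kappa_{ab}(t)|$. A minimal Python sketch with assumed parameter values:

```python
import math

# Illustrative (assumed) values of sigma, Delta_n, and the correlation K.
sigma, dn, K = 0.6, 0.25, -0.4

def abs_kab(t):   # |kappa_ab(t)| for the single-peak bivariate Gaussian
    return math.exp(-sigma**2 * dn**2 * t**2 * (1 + K))

def abs_Lab(t):   # |Lambda_ab(t)|
    return math.exp(-sigma**2 * dn**2 * t**2 * (1 - K))

def log_deriv(f, t, h=1e-6):
    # Central-difference logarithmic derivative d/dt ln f(t).
    return (math.log(f(t + h)) - math.log(f(t - h))) / (2 * h)

t = 2.0
gamma1 = 2 * (1 - K) * sigma**2 * dn**2 * t
gamma2 = 2 * (1 + K) * sigma**2 * dn**2 * t
assert abs(-log_deriv(abs_Lab, t) - gamma1) < 1e-5
assert abs(-log_deriv(abs_kab, t) - gamma2) < 1e-5
# The sum gamma_1 + gamma_2 = 4 sigma^2 Delta_n^2 t is independent of K.
assert math.isclose(gamma1 + gamma2, 4 * sigma**2 * dn**2 * t)
```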
Since all the rates are non-negative and the first two are time-dependent, this leads to CP-divisible dynamics which, however, does not fulfil the LGKS semigroup property. It is also interesting to note here the absence of the jump operator $\\hat{\\sigma}_z \\otimes \\hat{\\sigma}_z$, since the corresponding rate $\\gamma_3$ is always equal to zero. \nMoreover, the role of the environmental correlation coefficient $K$ of the initial joint frequency distribution is now explicit in expressions~(\\ref{eq:gamma1}-\\ref{eq:gamma3}). When $K=1$ ($K=-1$) the rate $\\gamma_1=0$ ($\\gamma_2=0$) and we are left with only one dephasing channel given by $\\hat{J}_2$ ($\\hat{J}_1$). When there are no initial correlations between the two environments, $K=0$, then \n$\\gamma_1(t)=\\gamma_2(t)$. Subsequently, the corresponding generator and master equation contain equally weighted contributions of \n the two local jump operators $\\hat{J}_1$ and $\\hat{J}_2$. Changing the value of the initial correlations $K$ then allows one to tune the dynamics between the above-mentioned extreme cases.\n\nIt is also worth discussing similarities and differences between our photonic model and the two-qubit model interacting with a common environment\n \\cite{PalmaPROC1996, ReinaPRA2002, CironeNEWJPhys2009, AddisPRA2013}. In the latter model two qubits are spatially separated by a distance $D$, while they both interact with the same common bosonic environment. Interestingly, the master equation describing this model has exactly the same operator form and jump operators \\cite{AddisPRA2013} as those obtained in Eqs.~\\eqref{eq:J1} to \\eqref{eq:J3}. In addition, the decay rates derived in \\cite{AddisPRA2013} depend on the distance $D$ in a manner similar to how our decay rates depend on the correlation coefficient $K$. 
Moreover, when $D \\rightarrow \\infty$, the dynamical map factorizes as $\\hat{\\Phi}_t=\\hat{\\Phi}_t^{(a)} \\otimes \\hat{\\Phi}_t^{(b)}$, with the superscripts corresponding to each qubit. The same behavior is also captured here when $K \\rightarrow 0$.\n However, it is worth keeping in mind that in our case the two environments are distinct physical entities and the tuning of the generator -- or form of the master equation -- is obtained by changing the initial bipartite environmental state. Furthermore, we can tune the generator continuously between the fully correlated and anti-correlated cases.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.95 \\columnwidth, height = 1.5 cm]{gamma1SP.pdf}\n\\includegraphics[width=0.95 \\columnwidth, height = 1.5 cm]{gamma2.pdf}\n\\includegraphics[width=0.95 \\columnwidth ,height = 2.4 cm]{gamma3SP.pdf}\n\\caption{(Color online) Decay rates as a function of normalized interaction time in the case of a single-peak Gaussian frequency distribution. Large-dashed green when $K=-1$, solid black when $K=0$, and small-dashed red when $K=1$. Here we set $\\Delta \\omega \/\\sigma=2$.}\n\\label{fig1}\n\\end{figure} \n\n\n\\subsection{Double-peak bivariate Gaussian distribution}\n\nWe consider a double-peak frequency distribution as a sum of two single-peak bivariate Gaussian distributions, as already used in \\cite{HamedaniRaja_2017}, such that \n\\begin{equation}\nP(\\omega_a,\\omega_b)=[P_1(\\omega_a, \\omega_b)+ P_2(\\omega_a, \\omega_b)]\/2.\n\\end{equation}\nWe assume that both single-peak terms have the same correlation coefficient $K$ and standard deviation $\\sigma$, but their means are located at $(\\omega_0\/2-\\Delta \\omega\/2,\\omega_0\/2+\\Delta \\omega\/2)^{\\mathrm{T}}$ and $(\\omega_0\/2+\\Delta \\omega\/2,\\omega_0\/2-\\Delta \\omega\/2)^{\\mathrm{T}}$, respectively. 
Note that the correlation coefficient $K$ of each single-peak distribution $P_1$ ($P_2$) is not equal to the actual correlation coefficient of the bivariate distribution $P$, obtained from its covariance matrix. In more detail, whenever $K$ is nonzero for each single peak, $P$ is correlated. Moreover, even if $K=0$, $P$ remains correlated as long as the peak separation is non-zero, $\\Delta \\omega \\neq 0$.\n\nThe decoherence functions calculated from Eqs.~(\\ref{k-loc}-\\ref{k2-nloc}) become\n\\begin{flalign}\n&\\kappa_a(t)=\\mathrm{exp}\\bigg[\\frac{-\\sigma^2 \\Delta n^2 t^2-i t \\Delta n \\omega_0}{2}\\bigg]\\cos\\bigg(\\frac{t \\Delta n \\Delta \\omega}{2}\\bigg),\n\\\\\n&\\kappa_b(t)=\\kappa_a(t),\n\\\\\n&\\kappa_{ab}(t)=\\mathrm{exp}\\big[-\\sigma^2 \\Delta n^2 t^2 (1+K)-i t \\Delta n \\omega_0\\big],\n\\\\\n&\\Lambda_{ab}(t)=\\mathrm{exp}\\big[-\\sigma^2 \\Delta n^2 t^2 (1-K)\\big]\\cos(t \\Delta n \\Delta \\omega).\n\\end{flalign}\nUsing the general results obtained earlier, in the same manner as in the single-peak case, we obtain the dephasing\nrates\n\\begin{flalign}\n&\\gamma_1(t)=2(1-K)\\sigma^2 \\Delta n^2 t+ \\tan(t \\Delta n \\Delta \\omega )\\Delta n \\Delta \\omega ,\\label{eq:gamma1Two}\n\\\\\n&\\gamma_2(t)=2(1+K)\\sigma^2 \\Delta n^2 t,\\label{eq:gamma2Two}\n\\\\\n&\\gamma_3(t)= \\frac{1}{2} \\tan \\bigg(\\frac{t \\Delta n \\Delta \\omega}{2} \\bigg)\\big[1-\\sec (t \\Delta n \\Delta \\omega)\\big]\\Delta n \\Delta \\omega.\\label{eq:gamma3Two}\n\\end{flalign}\nThe corresponding jump operators $\\{\\hat{J}_1, \\hat{J}_2, \\hat{J}_3\\}$ are the same as in the single-peak case, see Eqs.~(\\ref{eq:J1}-\\ref{eq:J3}).\nIn the limit $\\Delta \\omega \\rightarrow 0$, corresponding to the single-peak case, the rates~(\\ref{eq:gamma1Two}-\\ref{eq:gamma3Two}) reduce to those given by\n Eqs.~(\\ref{eq:gamma1}-\\ref{eq:gamma3}).\n\nFigure~\\ref{fig2} displays the rates for $K=-1,0,1$.\nThe dephasing rate $\\gamma_2$ 
remains the same as in the single-peak case. However, the rate $\\gamma_1$ -- corresponding to $\\hat{J}_1$, which contains the sum of the local jump operators -- changes. The rate now includes an extra term, coming from the peak separation $\\Delta \\omega$, \nand an oscillatory part that takes negative values as a function of time. This also leads to non-Markovian dephasing dynamics which is not CP-divisible. Even more strikingly, introducing the double-peak frequency structure opens an additional dephasing channel, since the rate $\\gamma_3$ is non-zero. Here, the corresponding jump operator $\\hat{J}_3=\\frac{1}{2}\\hat{\\sigma}_z \\otimes \\hat{\\sigma}_z$ displays a joint bipartite structure, in contrast to the local features of $\\hat{J}_1$ and $\\hat{J}_2$. This is an interesting observation since the system-environment interaction Hamiltonian is the same as before, having only local interactions, see Eq.~(\\ref{eq:HamiltonianTwoPhoton}), whilst the only change introduced was going from a single- to a double-peak structure of the initial bipartite environmental state. It is also worth noting that even though $\\gamma_3$ is independent \nof $K$, its functional form is non-trivial since it contains the peak separation $\\Delta \\omega$ and trigonometric functions. \n\nThere is a somewhat subtle mathematical point related to the behavior of the rates $\\gamma_1$ [Eq.~\\eqref{eq:gamma1Two}] and $\\gamma_3$ [Eq.~\\eqref{eq:gamma3Two}] which requires attention. Indeed, $\\gamma_1(t)$ and $\\gamma_3(t)$ diverge at isolated points in time. Subsequently, the corresponding dynamical maps are non-invertible at these points. 
According to Eq.~\\eqref{eq:MapBlochRep}, the generalized Bloch vector of the two-photon polarization state at time $t$ reads\n\n\\begin{equation}\n\\vec{r}(t)=\\left(\n\\begin{array}{c}\n \\frac{1}{2} \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_2-\\sin(t\\Omega_0\/2)r_3] \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_3+\\sin(t\\Omega_0\/2)r_2] \\\\\n r_4 \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_5-\\sin(t\\Omega_0\/2)r_6] \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_6+\\sin(t\\Omega_0\/2)r_5] \\\\\n \\Gamma_-\\cos(t\\Delta \\Omega)r_7 \\\\\n \\Gamma_-\\cos(t\\Delta \\Omega)r_8 \\\\\n r_9 \\\\\n \\Gamma_+[\\cos(t\\Omega_0)r_{10}-\\sin(t\\Omega_0)r_{11}] \\\\\n \\Gamma_+[\\cos(t\\Omega_0)r_{11}+\\sin(t\\Omega_0)r_{10}] \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_{12}-\\sin(t\\Omega_0\/2)r_{13}] \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_{13}+\\sin(t\\Omega_0\/2)r_{12}] \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_{14}-\\sin(t\\Omega_0\/2)r_{15}] \\\\\n \\Gamma_0\\cos(t\\Delta\\Omega\/2)[\\cos(t\\Omega_0\/2)r_{15}+\\sin(t\\Omega_0\/2)r_{14}] \\\\\n r_{16} \\\\\n\\end{array}\n\\right),\n\\end{equation}\nwhere we have defined $\\Gamma_0=\\exp[-\\sigma^2 \\Delta n^2 t^2\/2]$, $\\Gamma_{\\pm}=\\exp[-\\sigma^2 \\Delta n^2 t^2(1\\pm K)]$, $ \\Delta\\Omega= \\Delta n \\Delta \\omega$, $\\Omega_0=\\Delta n \\omega_0$, and $\\vec{r}(0)=(1\/2,r_2,r_3,...,r_{16})^T$ is the initial Bloch vector. One can check that all of the different initial vectors (states) that share the same values of $r_4,r_7,r_8,r_9,r_{10},r_{11}$, and $r_{16}$ are mapped to the same vector (state) at $t=\\pi \/\\Delta \\Omega$. This many-to-one nature of the map -- at these isolated times -- makes it non-invertible. 
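The many-to-one behavior at $t=\pi/\Delta\Omega$ can be illustrated with a short numerical sketch that implements the Bloch-vector map above directly (all parameter values are illustrative assumptions, not the paper's figure settings):

```python
import numpy as np

# Illustrative parameters
sigma, dn, K, dw, w0 = 1.0, 1.0, 0.3, 2.0, 5.0
dOmega, Omega0 = dn * dw, dn * w0  # DeltaOmega = dn*dw, Omega0 = dn*w0

def evolve(r, t):
    """Apply the dephasing map of the Bloch-vector expression to r = (1/2, r2..r16)."""
    G0 = np.exp(-sigma**2 * dn**2 * t**2 / 2)
    Gm = np.exp(-sigma**2 * dn**2 * t**2 * (1 - K))
    Gp = np.exp(-sigma**2 * dn**2 * t**2 * (1 + K))
    c, s = np.cos(t * Omega0 / 2), np.sin(t * Omega0 / 2)
    C, S = np.cos(t * Omega0), np.sin(t * Omega0)
    h = G0 * np.cos(t * dOmega / 2)
    out = r.copy()
    # rotating pairs (r2,r3), (r5,r6), (r12,r13), (r14,r15); 0-based indices
    for i, j in [(1, 2), (4, 5), (11, 12), (13, 14)]:
        out[i] = h * (c * r[i] - s * r[j])
        out[j] = h * (c * r[j] + s * r[i])
    out[6] = Gm * np.cos(t * dOmega) * r[6]   # r7
    out[7] = Gm * np.cos(t * dOmega) * r[7]   # r8
    out[9] = Gp * (C * r[9] - S * r[10])      # r10
    out[10] = Gp * (C * r[10] + S * r[9])     # r11
    return out

t_star = np.pi / dOmega
r_a = np.full(16, 0.1); r_a[0] = 0.5
r_b = r_a.copy(); r_b[1] += 0.2  # differ only in the r_2 component
print(np.max(np.abs(evolve(r_a, t_star) - evolve(r_b, t_star))))  # effectively zero
```

Two initial vectors differing only in $r_2$ give different images at a generic time but identical images at $t=\pi/\Delta\Omega$, where $\cos(t\Delta\Omega/2)=0$.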
Although all the trajectories corresponding to the aforementioned initial vectors end up together at the isolated points, it is evident that they continue along their different paths immediately afterwards. This can be seen in the following way. Consider the generator of the master equation in matrix form and its action on the generalized Bloch vector.\nWe see that while some rates diverge at certain points in time, it is precisely at these points that the generalized Bloch vector components -- with which the rates get multiplied -- all go to zero. In more detail, we have\n\\begin{equation}\n\\frac{d}{dt} r_{\\alpha}(t)=\\sum_{\\beta}[\\hat{\\mathcal{L}}_t]_{\\alpha \\beta} \\, r_{\\beta}(t),\n\\end{equation}\ntherefore, the product of a divergent rate with a zero-valued component leads to a finite rate of change of the Bloch vector, which allows us to continue propagation of each state forward in time. Accordingly, following the trajectories immediately before they unite at a single point lets us identify each one of them immediately afterwards, when they separate again. We see therefore that in spite of the divergences in the rates, the master equation we have obtained describes the dephasing evolution of the two-photon polarization state in a meaningful way.\nIt is also worth noting that divergent decoherence rates in master equations have appeared many times in the earlier literature, e.g., in the prominent resonant Jaynes-Cummings model~\\cite{BreuerBook_2002}.\n \n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.95 \\columnwidth, height = 1.5 cm]{gamma1DP.pdf}\n\\includegraphics[width=0.95 \\columnwidth, height = 1.5 cm]{gamma2.pdf}\n\\includegraphics[width=0.95 \\columnwidth ,height = 2.4 cm]{gamma3DP.pdf}\n\\caption{(Color online) Decay rates as a function of normalized interaction time in the case of a double-peak Gaussian frequency distribution. Dashed green when $K=-1$, solid black when $K=0$, and dot-dashed when $K=1$. 
Here we set $\\Delta \\omega \/\\sigma=2$.}\n\\label{fig2}\n\\end{figure} \n\\section{Single-photon dephasing with initial polarization-frequency correlations} \\label{sec4}\n\nWe described above how initial correlations between the composite environmental states influence the generator of the dynamical map and the corresponding master equation for photonic dephasing. In this section we continue with initial correlations but \ntake a different perspective by considering a non-factorized initial system-environment state for a single qubit.\nThis is motivated by the recent observation that initial system-environment correlations can be exploited for arbitrary control of single-qubit dephasing~\\cite{Lyyra_2018}. We revisit this problem and obtain new insight by exploiting the very recently developed general method of \\textit{bath positive decomposition} ($\\mathrm{B+}$ decomposition)~\\cite{Wiseman_2019}. \nIn general, the presence of initial system-environment correlations implies that the open system evolution is not described by a CP dynamical map \\cite{pechukas94,Alicki_1995,pechukas95,jordan04,shaji05a,Linta-Shaji}. However, the $\\mathrm{B+}$ decomposition method allows one to treat this case with a set of CP maps, where each term of the decomposition evolves in time under its own CP map~\\cite{Wiseman_2019}. 
\n\n\\subsection{Preliminaries on $\\mathrm{B+}$ decomposition for an initially correlated system-environment state}\nFollowing~\\cite{Wiseman_2019}, we begin by considering an arbitrary system-environment state -- in the corresponding Hilbert space $\\mathcal{H}=\\mathcal{H}_S\\otimes \\mathcal{H}_E$ -- and write it as \n\\begin{equation}\\label{eq:BplusDecom}\n\\hat{\\rho}_{SE}(0)=\\sum_{\\alpha}w_{\\alpha}\\hat{Q}_{\\alpha}\\otimes \\hat{\\rho}_{\\alpha}.\n\\end{equation} \nHere, $\\{\\hat{Q}_{\\alpha}\\}$ forms a basis (possibly overcomplete) for operators on $\\mathcal{H}_S$ and $\\{\\hat{\\rho}_{\\alpha}\\}$ are valid environmental density operators on $\\mathcal{H}_E$. Note that the $\\hat{Q}_{\\alpha}$ need not be positive or trace orthogonal, so they may not constitute proper density matrices on the system Hilbert space. However, when the initial state is factorized, this summation reduces to a single term $\\hat{\\rho}_{SE}(0)=\\hat{\\rho}_S(0)\\otimes \\hat{\\rho}_E(0)$ corresponding to the reduced states of the open system and environment, respectively. In general, the number of terms in this summation is restricted by $1\\leq N\\leq d^2$ where $d$ is the dimension of the system Hilbert space \\cite{Wiseman_2019}. All the information about the initial state of the open system is incorporated in the weights $w_{\\alpha}$, such that $\\hat{\\rho}_S(0)=\\mathrm{Tr}_E[\\hat{\\rho}_{SE}(0)]=\\sum w_{\\alpha}\\hat{Q}_{\\alpha}$. Although the $\\hat{Q}_{\\alpha}$ may not be legitimate density operators for the open system, those expressed by $\\hat{\\rho}_{\\alpha}$ are valid density operators for the environment. This means that the factorized form of the terms in \\eqref{eq:BplusDecom} allows one to write the dynamics of the open system state as a weighted sum of legitimate CP-maps acting on $\\hat{Q}_{\\alpha}$. 
In more detail, if the total system-environment state evolves under a unitary operator $\\hat{U}(t)$, one has\n\\begin{eqnarray}\n\\hat{\\rho}_S(t)&&=\\sum_{\\alpha}w_{\\alpha}\\mathrm{Tr}_E[\\hat{U}(t)(\\hat{Q}_{\\alpha} \\otimes \\hat{\\rho}_{\\alpha})\\hat{U}(t)^{\\dagger}] \\nonumber\n\\\\\n&&=\\sum_{\\alpha}w_{\\alpha}\\hat{\\Phi}^{(\\alpha)}_t[\\hat{Q}_{\\alpha}],\n\\end{eqnarray}\nwhere\n\\begin{equation}\\label{eq:BPlusDecMap}\n\\hat{\\Phi}_t^{(\\alpha)}[\\cdot]:=\\mathrm{Tr}_E[\\hat{U}(t)(\\cdot\\otimes\\hat{\\rho}_{\\alpha})\\hat{U}(t)^{\\dagger}].\n\\end{equation}\nSince all maps of the form given in Eq.~\\eqref{eq:BPlusDecMap} are CP, all previous tools for studying CP-maps are applicable here. In particular, one can investigate properties of each CP-map $\\Phi^{(\\alpha)}_t$ and see how they are connected to the presence of initial correlations.\n\nFor example, consider single-qubit dynamics in the presence of initial system-environment correlations~\\cite{Wiseman_2019}. Using the completeness of the Pauli basis $\\{\\hat{I}_2,\\hat{\\sigma}_x,\\hat{\\sigma}_y,\\hat{\\sigma}_z\\}$, we have \n\\begin{equation}\\label{eq:BplusDecQubit}\n\\hat{\\rho}_{SE}(0)=\\sum_{\\alpha=0,x,y,z} w_{\\alpha} \\hat{Q}_{\\alpha}\\otimes \\hat{\\rho}_{\\alpha},\n\\end{equation}\nin which\n\\begin{eqnarray}\n&&\\hat{Q}_0=\\frac{1}{2}(\\hat{I}_2 - \\hat{\\sigma}_x - \\hat{\\sigma}_y - \\hat{\\sigma}_z),\\label{eq:Q0}\\\\\n&&\\hat{Q}_{\\alpha}=\\frac{1}{2}\\hat{\\sigma}_{\\alpha} \\quad \\mathrm{for} \\; \\alpha = x,y,z,\\label{eq:Qalpha}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n&&\\hat{\\rho}_0=\\mathrm{Tr}_S[\\hat{\\rho}_{SE}(0)]=\\hat{\\rho}_E(0),\\label{rho1}\n\\\\\n&&\\hat{\\rho}_{\\alpha}=\\frac{\\mathrm{Tr}_S[((\\hat{I}_2+\\hat{\\sigma}_{\\alpha})\\otimes \\hat{I}_E)\\hat{\\rho}_{SE}(0)]}{w_{\\alpha}},\\label{rho2}\n\\end{eqnarray}\nwith $w_0=1$, and $w_{\\alpha}=\\mathrm{Tr}[((\\hat{I}_2+\\hat{\\sigma}_{\\alpha})\\otimes \\hat{I}_E)\\hat{\\rho}_{SE}(0)]$ for ${\\alpha}=x,y,z$.\nWe exploit 
these generic expressions below.\n\n\\subsection{Initial polarization-frequency correlation and $\\mathrm{B+}$ decomposition for a single photon}\n\nWe consider initial polarization-frequency correlations by following recent results and experimental work on generating, in principle, arbitrary single-photon dephasing dynamics~\\cite{Lyyra_2018}. A generic initial polarization-frequency state can be written as\n\\begin{eqnarray}\\label{eq:InStateCorrelated}\n\\ket{\\psi(0)}_{SE}=&& C_v\\ket{v}\\otimes\\int d \\omega g(\\omega)\\ket{\\omega}\\nonumber\n\\\\\n&&\\quad+C_h\\ket{h}\\otimes\\int d \\omega g(\\omega)\\mathrm{e}^{i \\theta(\\omega)}\\ket{\\omega},\n\\end{eqnarray}\nwhere $\\vert C_h \\vert ^2+\\vert C_v \\vert ^2=1$ and $\\int d \\omega \\vert g(\\omega ) \\vert ^2=1$.\nAbove, the crucial ingredient is the frequency-dependent initial phase $\\theta(\\omega)$ of the component with polarization $h$.\nIf $\\theta(\\omega)$ is a constant function, then there are no initial system-environment correlations. 
However, controlling the non-constant functional form of $\\theta(\\omega)$\nallows one to control the initial correlations and their amount.\n \nWhen the initial state evolves according to the interaction Hamiltonian in Eq.~\\eqref{eq:Hamiltonian}, the reduced polarization state at time $t$ is\n\\begin{equation}\\label{eq:MapInCorrel}\n\\hat{\\rho}(t)=\\begin{pmatrix}\n\\vert C_h \\vert ^2 & \\kappa(t)C_h C_v^{*} \\\\\n\\kappa(t)^{*}C_h^{*} C_v & \\vert C_v \\vert ^2\n\\end{pmatrix},\n\\end{equation}\nwhere the decoherence function is \n\\begin{equation}\\label{eq:decFuncCorrel}\n\\kappa(t)=\\int d \\omega \\vert g(\\omega) \\vert ^2 \\mathrm{e}^{i \\theta(\\omega)}\\mathrm{e}^{-i \\Delta n \\omega t}.\n\\end{equation}\nNote that, in addition to the frequency probability distribution $|g(\\omega)|^2$, one can now also use $\\theta(\\omega)$, and the resulting initial correlations, to control the dephasing dynamics.\n\nThe dynamics given by Eqs.~(\\ref{eq:MapInCorrel}-\\ref{eq:decFuncCorrel}) can be equivalently formulated by using the $\\mathrm{B}+$ decomposition. 
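As an aside, the generic decomposition of Eqs.~(\ref{eq:BplusDecQubit}-\ref{rho2}) can be verified numerically: for a random correlated state of a qubit and a toy two-dimensional environment, the weighted sum $\sum_\alpha w_\alpha \hat{Q}_\alpha \otimes \hat{\rho}_\alpha$ reconstructs $\hat{\rho}_{SE}(0)$ exactly (a sketch, not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random correlated system-environment state on C^2 x C^2 (toy environment)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho_SE = A @ A.conj().T
rho_SE /= np.trace(rho_SE)

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def ptrace_S(M):
    """Partial trace over the (first) system factor of a 4x4 matrix."""
    return M.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# B+ decomposition terms, Eqs. (Q0)-(rho2)
Q = {0: 0.5 * (I2 - sx - sy - sz), 'x': 0.5 * sx, 'y': 0.5 * sy, 'z': 0.5 * sz}
w, rho = {0: 1.0}, {0: ptrace_S(rho_SE)}
for a, s in (('x', sx), ('y', sy), ('z', sz)):
    wr = ptrace_S(np.kron(I2 + s, I2) @ rho_SE)  # w_a * rho_a
    w[a] = np.trace(wr).real
    rho[a] = wr / w[a]

recon = sum(w[a] * np.kron(Q[a], rho[a]) for a in Q)
print(np.max(np.abs(recon - rho_SE)))  # ~0 (machine precision)
```

The reconstruction is exact because the terms simply re-express the Pauli expansion of $\hat{\rho}_{SE}(0)$ over the system factor.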
\nConsidering the initial total state in Eq.~\\eqref{eq:InStateCorrelated}, and applying the $\\mathrm{B}+$ decomposition according to Eqs.~\\eqref{eq:BplusDecQubit} and (\\ref{rho1}-\\ref{rho2}), we obtain the environmental terms\n\\begin{widetext}\n\\begin{eqnarray}\n&&\\hat{\\rho}_0=\\int d\\omega \\int d\\omega'\\; g(\\omega)g(\\omega')^{*}\\;\\big(\\vert C_h\\vert ^2 \\mathrm{e}^{i[\\theta(\\omega)-\\theta(\\omega')]}+\\vert C_v\\vert ^2\\big)\\ket{\\omega}\\bra{\\omega'},\n\\\\\n&&\\hat{\\rho}_x=\\frac{1}{w_x}\\int d\\omega \\int d\\omega'\\; g(\\omega)g(\\omega')^{*}\\;\\big(\\vert C_h\\vert ^2 \\mathrm{e}^{i[\\theta(\\omega)-\\theta(\\omega')]}+\\vert C_v\\vert ^2+C_h C_v^{*}\\mathrm{e}^{i\\theta(\\omega')}+C_v C_h^{*}\\mathrm{e}^{-i\\theta(\\omega)}\\big)\\ket{\\omega}\\bra{\\omega'},\n\\\\\n&&\\hat{\\rho}_y=\\frac{1}{w_y}\\int d\\omega \\int d\\omega' \\;g(\\omega)g(\\omega')^{*}\\;\\big(\\vert C_h\\vert ^2 \\mathrm{e}^{i[\\theta(\\omega)-\\theta(\\omega')]}+\\vert C_v\\vert ^2+i C_h C_v^{*}\\mathrm{e}^{i\\theta(\\omega')}- i C_v C_h^{*}\\mathrm{e}^{-i\\theta(\\omega)}\\big)\\ket{\\omega}\\bra{\\omega'},\n\\\\\n&&\\hat{\\rho}_z=\\int d\\omega \\int d\\omega'\\; g(\\omega)g(\\omega')^{*}\\;\\ket{\\omega}\\bra{\\omega'},\n\\end{eqnarray}\n\\end{widetext}\nwith weights\n\\begin{eqnarray}\n&&w_x=1+2\\int d\\omega \\vert g(\\omega)\\vert ^2 \\Re{[C_v C_h^{*}\\mathrm{e}^{-i\\theta(\\omega)}]},\n\\\\\n&&w_y=1+2\\int d\\omega \\vert g(\\omega)\\vert ^2 \\Im{[C_v C_h^{*}\\mathrm{e}^{-i\\theta(\\omega)}]},\n\\\\\n&&w_z=2\\vert C_h\\vert ^2.\n\\end{eqnarray}\nEach specific term of the $\\mathrm{B}+$ decomposition is related to a frequency state ($\\hat{\\rho}_{\\alpha}$) above, and acts on its own input system operator $\\hat{Q}_{\\alpha}$, see Eqs.~(\\ref{eq:Q0}-\\ref{eq:Qalpha}). In the current case, we can combine the contributions of $\\hat{\\rho}_0$ and $\\hat{\\rho}_z$ to simplify the decomposition into only three terms. 
Subsequently, the polarization density matrix at time $t$ is given by \n\\begin{eqnarray}\\label{eq:MapInCorrThreeTerms}\n\\hat{\\rho}(t)&=&\\frac{1}{2}\\begin{pmatrix} \\nonumber\nw_z & \\kappa_0(t)(i-1) \\\\\n\\kappa_0(t)^{*}(-i-1) & 2-w_z\n\\end{pmatrix}\n\\\\\n&&+\\frac{1}{2} w_x\\begin{pmatrix} \\nonumber\n0 & \\kappa_x(t) \\\\\n\\kappa_x(t)^{*} & 0\n\\end{pmatrix}\n\\\\\n&&+\\frac{1}{2} w_y\\begin{pmatrix}\n0 & -i \\kappa_y(t) \\\\\ni \\kappa_y(t)^{*} & 0\n\\end{pmatrix},\n\\end{eqnarray}\nwhere the three different decoherence functions are given by\n\\begin{flalign}\n&\\kappa_0(t)=\\int d \\omega \\vert g(\\omega)\\vert ^2 \\mathrm{e}^{-i \\Delta n \\omega t}, \\label{d1}\n\\\\\n&\\kappa_x(t)=\\frac{\\int d \\omega \\vert g(\\omega)\\vert ^2 (1+2 \\Re{[C_v C_h^{*}\\mathrm{e}^{-i\\theta(\\omega)}]})\\mathrm{e}^{-i \\Delta n \\omega t}}{w_x}, \\label{d2}\n\\\\\n&\\kappa_y(t)=\\frac{\\int d \\omega \\vert g(\\omega)\\vert ^2 (1+2 \\Im{[C_v C_h^{*}\\mathrm{e}^{-i\\theta(\\omega)}]})\\mathrm{e}^{-i \\Delta n \\omega t}}{w_y}.\\label{d3}\n\\end{flalign}\nIt is interesting to note here that the decoherence function $\\kappa_0$ is independent of $\\theta(\\omega)$ and actually corresponds directly to the case when there are no initial polarization-frequency correlations.\nThe other two functions, $\\kappa_x$ and $\\kappa_y$, also depend on $\\theta(\\omega)$ and describe in detail how the initial correlations change the dephasing dynamics.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kSimDAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{k0DAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kxDAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth ,height = 2 cm]{kyDAbs.pdf}\n\\caption{(Color online) Non-positive map decoherence functions. 
Magnitudes of the original decoherence function ($\\kappa$) and $\\mathrm{B}+$ decomposition decoherence functions ($\\kappa_0, \\kappa_x, \\kappa_y$) as a function of time. We set $C_h=C_v=1\/\\sqrt{2}$.}\\label{fig3}\n\\end{figure}\n\n\nIt is also interesting to compare Eq.~\\eqref{eq:MapInCorrThreeTerms} with the $\\mathrm{B+}$ decomposition for generic dephasing dynamics of a qubit coupled to a Bosonic bath, when the qubit and the bath are initially correlated \\citep{Wiseman_2019}. The total Hamiltonian of the qubit and the Bosonic bath reads\n\\begin{equation}\n\\hat{H}=\\omega_q \\; \\hat{\\sigma}_z+\\sum_i \\; \\omega_i \\hat{b}^{\\dagger}_i \\hat{b}_i+\\hat{\\sigma}_z\\otimes \\sum_i g_i (\\hat{b}^{\\dagger}_i+\\hat{b}_i),\n\\end{equation}\nwhere $\\omega_q$ is the qubit's energy level separation (in the $\\ket{0}$, $\\ket{1}$ basis), $\\hat{b}^{\\dagger}_i$ and $\\hat{b}_i$ are bath mode creation and annihilation operators, respectively, and $g_i$ is the coupling strength. Employing the $\\mathrm{B+}$ decomposition, the dynamics of the off-diagonal element of the qubit's density matrix in the interaction picture reads \\citep{Wiseman_2019}\n\\begin{equation}\\label{eq:dephasScale}\n\\bra{0}\\rho_S(t)\\ket{1}=\\sum_{\\alpha}\\;w_{\\alpha} \\bra{0}\\hat{Q}_{\\alpha}\\ket{1}\\chi_{\\hat{\\rho}_{\\alpha}}(\\vec{\\xi}_t),\n\\end{equation}\nwhere $\\chi_{\\hat{\\rho}_{\\alpha}}(\\vec{\\xi}_t)=\\mathrm{Tr}_B[\\hat{\\rho}_{\\alpha}\\hat{D}(\\vec{\\xi}_t)]$ is the Wigner characteristic function of the bath state $\\hat{\\rho}_{\\alpha}$. Above, $\\vec{\\xi}_t=(\\xi_1(t),\\xi_2(t),...)$ with\n\\[\\xi_j(t)=2g_j \\bigg(\\frac{1-\\mathrm{e}^{i\\omega_j t}}{\\omega_j}\\bigg) ,\\]\nand $\\hat{D}(\\vec{\\xi}_t)=\\exp(\\sum_i \\xi_i \\hat{b}^{\\dagger}_i+\\xi_i^{*}\\hat{b}_i)$ is the Glauber displacement operator. 
The comparison between \nEqs.~\\eqref{eq:MapInCorrThreeTerms} and \\eqref{eq:dephasScale} reveals that the decoherence functions in our photonic model -- corresponding to integral transformations of the frequency probability distribution and frequency-dependent phase $\\theta(\\omega)$ -- play exactly the same role as the characteristic functions in the dephasing dynamics of a qubit coupled to a Bosonic bath.\n\nLet us go back to the photonic model and see in detail, for some examples, what is the relation between the original decoherence function \\eqref{eq:decFuncCorrel} and those appearing in the $\\mathrm{B}+$ decomposition in Eqs.~(\\ref{d1}-\\ref{d3}). In particular, we consider cases similar to those used in~\\citep{Lyyra_2018} to demonstrate arbitrary control of dephasing dynamics. These include a non-positive map, Markovian, non-Markovian, and coherence trapping dynamics.\nIn all of the cases below, the frequency distributions and the values of $\\theta(\\omega)$ are similar to those considered in Ref.~\\citep{Lyyra_2018}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.83 \\columnwidth, height = 1.5 cm]{k0DReIm.pdf}\n\\includegraphics[width=0.83 \\columnwidth, height = 1.5 cm]{kxDReIm.pdf}\n\\includegraphics[width=0.83 \\columnwidth, height = 2 cm]{kyDReIm.pdf}\n\\caption{(Color online) Non-positive map decoherence functions. Real and imaginary parts of the original decoherence function ($\\kappa$) and $\\mathrm{B}+$ decomposition decoherence functions ($\\kappa_0, \\kappa_x, \\kappa_y$) as a function of time. We set $C_h=C_v=1\/\\sqrt{2}$.}\n\\label{fig4}\n\\end{figure} \n\n\nFigure \\ref{fig3} shows the magnitude of various decoherence functions for the case of a non-positive (NP) map, i.e., $\\vert \\kappa (t) \\vert > \\vert \\kappa(0) \\vert$. 
\n It is easy to check that the off-diagonal term of the density matrix is obtained from $\\rho_{hv}(t)=(\\kappa_0(t)(i-1)+w_x \\kappa_x(t)-iw_y \\kappa_y(t))\/2$, and equivalently from $\\rho_{hv}(t)=\\kappa(t)C_h C_v^{*}$. Thus, it is evident that if $\\kappa_0(t)=\\kappa_x(t)=\\kappa_y(t)=0$, for some $t>0$, then $\\kappa(t)=0$. However, the reverse statement does not always hold. Instead, one can show that whenever $w_x=w_y=1$, having identical decoherence functions, $\\kappa_0(t)=\\kappa_x(t)=\\kappa_y(t)$, is sufficient to have zero coherence, i.e., $\\kappa(t)=0$. This is an interesting result, making a link between properties of the CP-maps obtained in the $\\mathrm{B+}$ decomposition and the original non-positive map. In fact, the case discussed in Fig. \\ref{fig3} demonstrates this situation.\nThis is even more evident when considering the real and imaginary parts of the decoherence functions explicitly, see Fig.~\\ref{fig4}. \nOne can see that the three decoherence functions $\\kappa_0$, $\\kappa_x$, and $\\kappa_y$ are identical when the interaction time is short. Therefore, since we also have $w_x = w_y = 1$, the decoherence function $\\kappa(t)$ vanishes in this regime.\n\nThe non-Markovian, Markovian, and coherence trapping cases are plotted in Figs.~\\ref{fig5}, \\ref{fig6}, and \\ref{fig7}, respectively. Looking at Fig.~\\ref{fig5}, one finds that $\\vert \\kappa(t) \\vert$ first decays to zero and then revives. This situation displays non-Markovian features, where coherence can revive after a period of disappearance. The Markovian case, however, shows a monotonically decaying $\\vert \\kappa (t) \\vert$, see Fig.~\\ref{fig6}. Finally, in the coherence trapping case, we observe that $\\vert \\kappa(t) \\vert$ decays at first but mostly maintains its value later. \nMagnitudes of the other three decoherence \nfunctions, used in the $\\mathrm{B+}$ decomposition, are also plotted in the corresponding figures. 
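The identity $\rho_{hv}(t)=[\kappa_0(t)(i-1)+w_x \kappa_x(t)-i w_y \kappa_y(t)]/2=\kappa(t)C_h C_v^{*}$ used above can also be confirmed numerically; the Gaussian $|g(\omega)|^2$ and the linear $\theta(\omega)$ below are illustrative choices, not the exact settings of the figures:

```python
import numpy as np

# Illustrative choices: unit-variance Gaussian |g(w)|^2 and a linear phase theta(w)
dn, t = 1.0, 0.7
Ch = Cv = 1 / np.sqrt(2)
w = np.linspace(-12, 12, 6001)
dw_step = w[1] - w[0]
g2 = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)   # |g(w)|^2, normalized
theta = 1.3 * w                                # non-constant -> initial correlations

quad = lambda f: np.sum(f) * dw_step           # simple Riemann quadrature
e = np.exp(-1j * dn * w * t)

kappa = quad(g2 * np.exp(1j * theta) * e)      # Eq. (decFuncCorrel)
kappa0 = quad(g2 * e)                          # Eq. (d1)
z = Cv * np.conj(Ch) * np.exp(-1j * theta)
wx = 1 + 2 * quad(g2 * z.real)
wy = 1 + 2 * quad(g2 * z.imag)
kx = quad(g2 * (1 + 2 * z.real) * e) / wx      # Eq. (d2)
ky = quad(g2 * (1 + 2 * z.imag) * e) / wy      # Eq. (d3)

lhs = (kappa0 * (1j - 1) + wx * kx - 1j * wy * ky) / 2
rhs = kappa * Ch * np.conj(Cv)
print(abs(lhs - rhs))  # ~0: both sides reduce to the same integral
```

The agreement is exact up to floating point because both sides reduce, sample by sample, to $\int d\omega\,|g(\omega)|^2 C_h C_v^{*}\mathrm{e}^{i\theta(\omega)}\mathrm{e}^{-i\Delta n\omega t}$.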
We observe that these decoherence functions behave similarly, in contrast to the case of the non-positive map. Again, whenever $\\kappa_0$, $\\kappa_x$, and $\\kappa_y$ are all zero, one has $\\kappa(t)=0$. However, since $w_x$ and $w_y$ are not equal, we get a non-zero $\\kappa$ even though $\\kappa_0$, $\\kappa_x$, and $\\kappa_y$ seem to be identical in some regions.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kSimAAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{k0AAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kxAAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth ,height = 2 cm]{kyAAbs.pdf}\n\\caption{(Color online) Non-Markovian dynamics. Magnitudes of the original decoherence function ($\\kappa$) and $\\mathrm{B}+$ decomposition decoherence functions ($\\kappa_0, \\kappa_x, \\kappa_y$) as a function of time. We set $C_h=C_v=1\/\\sqrt{2}$.}\\label{fig5}\n\\end{figure} \n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kSimBAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{k0BAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kxBAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth ,height = 2 cm]{kyBAbs.pdf}\n\\caption{(Color online) Markovian dynamics. Magnitudes of the original decoherence function ($\\kappa$) and $\\mathrm{B}+$ decomposition decoherence functions ($\\kappa_0, \\kappa_x, \\kappa_y$) as a function of time. We set $C_h=C_v=1\/\\sqrt{2}$.}\\label{fig6}\n\\end{figure} \n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kSimCAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{k0CAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth, height = 1.5 cm]{kxCAbs.pdf}\n\\includegraphics[width=0.8 \\columnwidth ,height = 2 cm]{kyCAbs.pdf}\n\\caption{(Color online) Coherence trapping dynamics. 
Magnitudes of the original decoherence function ($\\kappa$) and $\\mathrm{B}+$ decomposition decoherence functions ($\\kappa_0, \\kappa_x, \\kappa_y$) as a function of time. We set $C_h=C_v=1\/\\sqrt{2}$.}\n\\label{fig7}\n\\end{figure}\n\n\n\\section{Discussion} \\label{sec5}\nWe have studied the influence of initial correlations on open system dynamics from two perspectives, corresponding to master-equation descriptions and the recently introduced $\\mathrm{B+}$ decomposition method. By using a common two-photon dephasing scenario with local polarization-frequency interaction, our results show explicitly how initial correlations -- between the composite environments (frequencies) -- influence the decoherence rates and the operator form of the master equation for the polarization state. When the environment has a single-peak Gaussian structure, the master equation contains two sets of jump operators, corresponding to the sum and difference of the local interactions, whose weights can be controlled by changing the amount of initial environmental correlations. Here, the dephasing rates are non-negative and depend linearly on time for the considered case. For a double-peak bivariate structure, the situation changes drastically. This opens an additional dephasing path with a non-local form for the corresponding operator, and the associated rate also has divergences. Moreover, the rates for the other two dephasing operators have distinctive functional forms.\n\n The $\\mathrm{B+}$ decomposition method, in turn, allows one to study cases where the system and environment are initially correlated, which prevents the use of conventional CP-maps. 
We have used this decomposition to study dephasing when the polarization and frequency of a single photon are initially correlated.\nThe results show in detail how the initial correlations modify the dephasing compared to that arising solely from an initially factorized state.\nIndeed, instead of having one decoherence function associated with dephasing, we now have three different decoherence functions corresponding to the elements of the \n $\\mathrm{B+}$ decomposition. Here, one of the functions arises from the initially factorized part, and the two additional decoherence functions also include contributions from the initial polarization-frequency correlations. In general, our results shed light on how different types of correlations influence the dephasing dynamics within the commonly used photonic framework. \n \n\n\n\\section*{Acknowledgments}\nSina Hamedani Raja acknowledges financial support from the Finnish Cultural Foundation. Anil Shaji acknowledges the support of the EU through the Erasmus+ program and the support of the Science and Engineering Research Board, Government of India through EMR grant No. 
EMR\/2016\/007221.\n\\section*{Appendix}\nGeneral expressions for the elements of the nonzero subspace of the decay rate matrix, denoted by $R(t)$, are\n\\begin{widetext}\n\\begin{eqnarray}\n&&R_{11}(t)=-\\Re{[\\bold{k}_b(t)]},\n\\\\\n&&R_{12}(t)=\\frac{1}{\\sqrt{3}}\\Big(i\\Im{[\\bold{k}_b(t)]}-i\\Im{[\\bold{k}_a(t)]}+i\\Im{[\\bold{\\Gamma}_{ab}(t)]}+\\Re{[\\bold{\\Gamma}_{ab}(t)]}-\\Re{[\\bold{k}_{a}(t)]}\\Big),\n\\\\\n&&R_{13}(t)=\\frac{1}{2\\sqrt{6}}\\Big(4i\\Im{[\\bold{k}_a(t)]}+2i\\Im{[\\bold{k}_b(t)]}-3i\\Im{[\\bold{k}_{ab}(t)]}-i\\Im{[\\bold{\\Gamma}_{ab}(t)]}+4\\Re{[\\bold{k}_{a}(t)]}-3\\Re{[\\bold{k}_{ab}(t)]}-\\Re{[\\bold{\\Gamma}_{ab}(t)]}\\Big),\n\\\\\n&&R_{21}(t)=R_{12}(t)^{*},\n\\\\\n&&R_{22}(t)=\\frac{1}{3}\\Big(-2\\Re{[\\bold{k}_a(t)]}+\\Re{[\\bold{k}_b(t)]}-2\\Re{[\\bold{\\Gamma}_{ab}(t)]}\\Big),\n\\\\\n&&R_{23}(t)=\\frac{1}{6\\sqrt{2}}\\Big(-3i\\Im{[\\bold{k}_{ab}(t)]}+6i\\Im{[\\bold{k}_b(t)]}+3i\\Im{[\\bold{\\Gamma}_{ab}(t)]}-4\\Re{[\\bold{k}_{a}(t)]}-3\\Re{[\\bold{k}_{ab}(t)]}+8\\Re{[\\bold{k}_{b}(t)]}-\\Re{[\\bold{\\Gamma}_{ab}(t)]}\\Big),\n\\\\\n&&R_{31}(t)=R_{13}(t)^{*},\n\\\\\n&&R_{32}(t)=R_{23}(t)^{*},\n\\\\\n&&R_{33}(t)=\\frac{1}{6}\\Big(-2\\Re{[\\bold{k}_a(t)]}-3\\Re{[\\bold{k}_{ab}(t)]}-2\\Re{[\\bold{k}_b(t)]}+\\Re{[\\bold{\\Gamma}_{ab}(t)]}\\Big),\n\\end{eqnarray}\nwhere we have defined $\\bold{k}_i(t)=\\frac{1}{\\kappa_i(t)}\\frac{d}{dt}\\kappa_i(t)$, $\\bold{\\Gamma}_{ab}(t)=\\frac{1}{\\Lambda_{ab}(t)}\\frac{d}{dt}\\Lambda_{ab}(t)$ and $\\bold{k}_{ab}(t)=\\frac{1}{\\kappa_{ab}(t)}\\frac{d}{dt}\\kappa_{ab}(t)$.\n\\end{widetext}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjuni b/data_all_eng_slimpj/shuffled/split2/finalzzjuni new file mode 100644 index 0000000000000000000000000000000000000000..f47e41aec0fbd5a8fc82fdb51ced4755424fcd63 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjuni @@ -0,0 +1,5 @@ +{"text":"\\section{The SU(N+1) Schwarzschild-like 
Solution}\n\nIn a previous paper \\cite{sing} we exploited the connection between\ngeneral relativity and Yang-Mills theory to find an exact\nSchwarzschild-like solution for an SU(2) gauge theory coupled to\na massless scalar field. In the present paper we wish to show that\na similar solution can be found for the general group SU(N+1).\nInstead of using the Euler-Lagrange formalism, which leads to coupled\nsecond-order, nonlinear equations, we will use the Bogomolny\napproach \\cite{bogo} to derive our field equations.\nBogomolny obtained his first-order version of the Yang-Mills\nfield equations by requiring that the gauge and scalar fields\nproduce an extremum of the canonical Hamiltonian. The field equations\nobtained in this way are first-order, but their solutions are\nalso solutions to the second-order Euler-Lagrange equations.\n\nThe model which we consider here is an SU(N+1) gauge field coupled\nto a massless scalar field in the adjoint representation. The\nLagrangian for this theory is\n\\begin{equation}\n\\label{lagran}\n{\\cal L} = -{1 \\over 4} F^{\\mu \\nu a} F_{\\mu \\nu} ^a + {1 \\over 2}\nD^{\\mu} ( \\Phi ^a ) D_{\\mu} ( \\Phi ^a)\n\\end{equation}\nwhere\n\\begin{equation}\nF_{\\mu \\nu} ^a = \\partial _{\\mu} W_{\\nu} ^a - \\partial _{\\nu}\nW_{\\mu} ^a + g f ^{abc} W_{\\mu} ^b W_{\\nu} ^c\n\\end{equation}\nand\n\\begin{equation}\nD_{\\mu} \\Phi ^a = \\partial _{\\mu} \\Phi ^a + g f ^{abc}\nW_{\\mu} ^b \\Phi ^c\n\\end{equation}\nwhere $f^{abc}$ are the structure constants of the gauge group\nand the adjoint indices run over $a, b, c = 1, 2, \\dots , (N+1)^2 - 1$.\nThe canonical Hamiltonian obtained from Eq. 
(\\ref{lagran}) is\n\\begin{equation}\n\\label{hamo}\n{\\cal H} = \\int d^3 x \\left[{1 \\over 4} F_{ij} ^a F^{aij} - {1 \\over 2}\nF_{0i} ^a F^{a0i} + {1 \\over 2} D_i \\Phi ^a D^i \\Phi ^a\n- {1 \\over 2} D_0 \\Phi ^a D^0 \\Phi ^a \\right]\n\\end{equation}\nWe wish to find gauge and scalar fields which produce\nan extremum of ${\\cal H}$.\nFirst we rescale the scalar field ({\\it i.e.} $\\Phi ^a \\rightarrow\nA \\Phi ^a$). This is done so that later on it will be simple to examine\nthe pure gauge case by setting $A=0$.\nNext we specify that all the fields are time independent, and that\nthe time component of the gauge fields is proportional to the\nscalar fields ({\\it i.e.} $W_0 ^a = C \\Phi ^a$, where $\\Phi ^a$ is\nthe rescaled scalar field). The time component of the gauge fields\nacts like an additional Higgs field except that its kinetic term appears\nwith the opposite sign in the Lagrangian \\cite{zee}. Using these two\nrequirements and the antisymmetry of $f^{abc}$ we find that\n$D_0 \\Phi ^a = 0$ and $F_{0i} ^a = C (D_i \\Phi ^a)$, so that the\nHamiltonian becomes\n\\begin{eqnarray}\n\\label{ham}\n{\\cal H} = \\int d^3 x \\Big[&&{1 \\over 4} \\left(F_{ij} ^a - \\epsilon_{ijk}\n\\sqrt{A^2 - C^2} D^k \\Phi ^a \\right) \\left(F^{aij} - \\epsilon_{ijl}\n\\sqrt{A^2 - C^2} D^l \\Phi ^a \\right) \\nonumber \\\\\n+ && {1 \\over 2} \\epsilon_{ijk} \\sqrt{A^2 - C^2}\nF^{aij} D^k \\Phi ^a \\Big]\n\\end{eqnarray}\nUsing the fact that\n\\begin{equation}\n{1 \\over 2} \\epsilon_{ijk} F^{aij} D^k \\Phi ^a = \\partial ^i\n\\left({1 \\over 2} \\epsilon_{ijk} F^{ajk} \\Phi ^a \\right)\n\\end{equation}\nand the requirement that the solutions we are looking for are\nonly functions of $r$ we find\n\\begin{eqnarray}\n\\label{ham1}\n{\\cal H} &=& \\sqrt{A^2 - C^2} \\int _S (\\Phi^a B^a _i)\ndS ^i \\nonumber \\\\\n&+& \\int d^3 x \\left[{1 \\over 4} \\left(F_{ij} ^a -\n\\epsilon_{ijk} \\sqrt{A^2 - C^2} D^k \\Phi ^a \\right)\n\\left(F^{aij} - \\epsilon_{ijl} \\sqrt{A^2 - C^2}\nD^l 
\\Phi ^a \\right) \\right]\n\\end{eqnarray}\nFor the total divergence term we\nhave used the definition of the non-Abelian\nmagnetic field in terms of the field strength tensor ({\\it i.e}\n$B^a _i = {1 \\over 2} \\epsilon_{ijk} F^{ajk}$), and used\nGauss's Law to turn the volume integral into a surface\nintegral. The lower limit of this Hamiltonian can be found by\nrequiring\n\\begin{eqnarray}\n\\label{fbogo}\nF_{ij} ^a &=& \\epsilon _{ijk} \\sqrt{ A^2 - C^2}\nD^k \\Phi ^a \\nonumber \\\\\nor \\nonumber \\\\\nB_i ^a &=& \\sqrt{A^2 - C^2} D_i \\Phi ^a\n\\end{eqnarray}\nTo get the second expression we have again used the definition of\nthe non-Abelian magnetic field. These are the Bogomolny equations\n\\cite{bogo}. Wilkinson and Goldhaber \\cite{gold} have given a\ngeneralized ansatz for the gauge and scalar fields\n\\begin{eqnarray}\n\\label{ansatz}\n{\\bf W}_i &=& {\\epsilon_{ijb} r^j ({\\bf T}^b - {\\bf M}^b (r)) \\over g r^2}\n\\nonumber \\\\\n\\Phi ^a &=& {{\\bf \\Phi} (r) \\over g}\n\\end{eqnarray}\n${\\bf W} _i$ are three $(N+1) \\times (N+1)$ matrices of the gauge\nfields. ${\\bf M}_b (r)$ and ${\\bf \\Phi} (r)$ are four $(N+1) \\times\n(N+1)$ matrices whose elements are functions of $r$, and in terms of\nwhich the Bogomolny equations will be written. ${\\bf T}_b$ are three\n$(N+1) \\times (N+1)$ matrices which generate the maximal embedding\nof SU(2) in SU(N+1). Because of the spherical symmetry requirement\none can look at Eq. (\\ref{fbogo}) along any axis \\cite{bais}\n\\cite{wilk}. Taking the positive $\\hat {z}$ axis the Bogomolny\nequations become $\\sqrt{ A^2 - C^2} (D_3 \\Phi ^a) = B _3 ^a$ and\n$\\sqrt{A^2 - C^2} (D_{\\pm} \\Phi ^a) = B_{\\pm} ^a$, or in terms of the\nansatz of Eq. 
(\\ref{ansatz})\n\\begin{eqnarray}\n\\label{fbogo1}\nr^2 \\sqrt{A^2 - C^2} {d {\\bf \\Phi} \\over dr} &=&\n[{\\bf M}_+ , {\\bf M}_- ] - {\\bf T}_3 \\nonumber \\\\\n{d {\\bf M}_{\\pm} \\over dr} &=& \\mp \\sqrt{A^2 - C^2}\n[{\\bf M}_{\\pm} , {\\bf \\Phi} ]\n\\end{eqnarray}\nTaking the third ``component'' of the maximal SU(2) embedding into\nSU(N+1) as\n\\begin{equation}\n{\\bf T}_3 = diag \\left[{1 \\over 2}N, {1\\over 2}N -1, \\dots ,\n-{1 \\over 2} N +1, -{1 \\over 2} N \\right]\n\\end{equation}\nit has been shown \\cite{gold} that the matrix functions, ${\\bf M}_+ (r)$\nand ${\\bf \\Phi} (r)$, can be taken as\n\\begin{eqnarray}\n\\label{mat1}\n{\\bf \\Phi} = {1 \\over 2}\n\\left(\n\\begin{array}{ccccc}\n\\phi_1 &\\space &\\space &\\space &\\space \\\\\n\\space &\\phi_2 - \\phi_1 &\\space &\\space &\\space \\\\\n\\space &\\space &\\ddots &\\space &\\space \\\\\n\\space &\\space &\\space &\\phi_N -\\phi_{N-1} &\\space \\\\\n\\space &\\space &\\space &\\space &- \\phi _N \\\\\n\\end{array}\n\\right)\n\\end{eqnarray}\n\\begin{eqnarray}\n\\label{mat2}\n{\\bf M}_+ = {1 \\over \\sqrt{ 2}}\n\\left(\n\\begin{array}{ccccc}\n0 &a_1 &\\space &\\space &\\space \\\\\n\\space &0 &a_2 &\\space &\\space \\\\\n\\space &\\space &\\ddots &\\space &\\space \\\\\n\\space &\\space &\\space &0 &a_N \\\\\n\\space &\\space &\\space &\\space &0 \\\\\n\\end{array}\n\\right)\n\\end{eqnarray}\nwhere $\\phi _m$ and $a_m$ are real functions of $r$ and ${\\bf M}_- =\n({\\bf M} _+) ^T$. Substituting Eqs. (\\ref{mat1}), (\\ref{mat2}) into\nthe first order field equations of Eq. 
(\\ref{fbogo1}) the field\nequations become \\cite{bais} \\cite{wilk}\n\\begin{eqnarray}\n\\label{deqn}\nr^2 {d \\phi _m \\over dr} &=& {1 \\over \\sqrt{A^2 - C^2}}\n\\left[ (a_m)^2 - m{\\bar m} \\right] \\nonumber \\\\\n{d a_m \\over dr} &=& \\sqrt{A^2 - C^2} \\left(-{1 \\over 2} \\phi_{m-1}\n+\\phi_m - {1 \\over 2} \\phi _{m+1} \\right) a_m\n\\end{eqnarray}\nwhere $1 \\le m \\le N$, ${\\bar m} = N +1 - m$ and $\\phi_0 = \\phi_{N+1} =0$.\nExact solutions have been found to Eq. (\\ref{deqn}) \\cite{bais}\n\\cite{wilk} which are generalizations of the well-known\nPrasad-Sommerfield solution \\cite{prasad} for SU(2). The\nPrasad-Sommerfield solution and its generalizations satisfy\nthe boundary condition that the gauge and scalar fields are\nfinite at the origin. If one does not require that the fields\nbe finite at the origin then the coupled equations for $\\phi _m (r)$\nand $a_m (r)$ can be solved by\n\\begin{eqnarray}\n\\label{soln}\n\\phi _m (r) &=& {1 \\over \\sqrt{A^2 - C^2}}\n{K m {\\bar m} \\over r (K - r)} \\nonumber \\\\\na_m (r) &=& {r \\sqrt{m {\\bar m}} \\over K - r}\n\\end{eqnarray}\n$K$ is an arbitrary constant with the dimensions\nof distance. This is the generalization of a similar\nsolution which we found for SU(2) using the second-order\nEuler-Lagrange formalism \\cite{sing}. The reason for wanting to\ngeneralize our solution to SU(N+1) is to give a possible explanation for\nthe confinement mechanism in QCD whose gauge group is SU(3). Inserting\nthe functions $\\phi _m (r)$ and $a_m (r)$ of Eq. (\\ref{soln}) into\nthe fields of Eq. (\\ref{ansatz}) we find that these fields become\ninfinite at a finite radius of\n\\begin{equation}\nr_0 = K\n\\end{equation}\nThe non-Abelian ``electric'' ($E_i ^a = F_{0i} ^a$) and ``magnetic''\nfields ($B_i ^a = {1 \\over 2} \\epsilon_{ijk} F^{jka}$) calculated for\nthis solution also become infinite at $r_0$. 
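As a consistency check (ours, not part of the original text), the solution of Eq. (\ref{soln}) can be substituted directly into Eq. (\ref{deqn}). Writing $S \equiv \sqrt{A^2 - C^2}$ and $c_m \equiv m\bar{m}$:

```latex
% First field equation: differentiate \phi_m and use r^2 - (K-r)^2 = K(2r-K).
r^2 \frac{d\phi_m}{dr}
= \frac{c_m K (2r - K)}{S (K - r)^2}
= \frac{c_m \left[ r^2 - (K - r)^2 \right]}{S (K - r)^2}
= \frac{1}{S} \left[ (a_m)^2 - m\bar{m} \right].
% Second field equation: since c_m is quadratic in m with leading term -m^2,
% its second difference gives -\tfrac{1}{2} c_{m-1} + c_m - \tfrac{1}{2} c_{m+1} = 1, so
S \left( -\tfrac{1}{2}\phi_{m-1} + \phi_m - \tfrac{1}{2}\phi_{m+1} \right) a_m
= \frac{K}{r (K - r)} \cdot \frac{r \sqrt{c_m}}{K - r}
= \frac{\sqrt{c_m}\, K}{(K - r)^2}
= \frac{d a_m}{dr}.
```

Both equations hold for all $1 \le m \le N$, with the boundary values $c_0 = c_{N+1} = 0$ consistent with $\phi_0 = \phi_{N+1} = 0$.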
Thus any particle which\ncarries an SU(N+1) charge would either never be able to penetrate beyond\n$r_0$ (when the SU(N+1) charges are repulsive) or, once it passed into\nthe region $r < r_0$, it would never be able to escape back to the\nregion $r > r_0$. One has, in a sense, a color charge black hole.\n\nBoth the present solution and our SU(2) solution were found by\nusing the connection between Yang-Mills theory and general relativity\n\\cite{utiyama}, and trying to find the Yang-Mills equivalent of the\nSchwarzschild solution. The objects in general relativity which correspond\nto the gauge fields are the Christoffel coefficients, $\\Gamma^{\\alpha}\n_{\\beta \\gamma}$. Examining a few of the Christoffel symbols of the\nSchwarzschild solution we find\n\\begin{eqnarray}\n\\Gamma ^t _{r t} &=& {2GM \\over 2r ( r - 2GM )} \\nonumber \\\\\n\\Gamma ^r _{r r} &=& -{2GM \\over 2r ( r - 2GM )}\n\\end{eqnarray}\nwhere $2GM$ is the equivalent of the constant $K$ from the Yang-Mills\nsolution. The similarity between these Christoffel coefficients and the\ngauge and scalar fields that result from the solutions, $\\phi _m (r)$\nand $a_m (r)$ of Eq. (\\ref{soln}), is striking. The most important\nsimilarity from the point of view of explaining confinement is the existence\nin both solutions of an event horizon, from which particles which carry\nthe appropriate charge cannot escape once they pass into the region\n$r < r_0$. For general relativity the appropriate ``charge'' is\nmass-energy so that nothing can climb back out of the Schwarzschild\nhorizon, while in the Yang-Mills case only particles carrying an SU(N+1)\ncharge will become confined.\n\nOne slightly disturbing feature of these Schwarzschild-like SU(N+1)\nsolutions is that they have an infinite energy due to the\nsingularity at $r = 0$.\nWhen quantities such as the energy of this field configuration\nare calculated the integral must be cut off at some\narbitrary radius, $r_c$. 
The singularity at\n$r_0$ does not give an infinite energy unless one sets\n$r_c = K$. This singular behaviour at the origin is shared\nby several other classical field theory solutions. Both the\nSchwarzschild solution of general relativity and the Coulomb solution\nin electromagnetism have similar singularities. The Wu-Yang\nsolution \\cite{wu} for static SU(2) gauge fields with no time component\nalso blows up at the origin, leading to an infinite energy if one\nintegrates the energy density down to $r=0$. Just as these classical\nsolutions are not expected to hold down to $r=0$, so the present solution\nwill certainly be modified by quantum corrections as $r$ approaches\nzero. Phenomenologically we know that the present solutions cannot\nbe correct for very small $r$, since they do not exhibit the asymptotic\nfreedom behaviour that is a desirable consequence of the quantum\ncorrections to QCD. It would be interesting to see if the behaviour of\nthe fields at the origin could be modified by introducing a mass\nterm (${m^2 \\over 2} \\Phi ^a \\Phi ^a$) and a self interaction term\n(${\\lambda \\over 4} (\\Phi ^a \\Phi ^a) ^2$) to the scalar field part of\nthe Lagrangian, while still retaining the color event horizon feature of\nthe present solution. This smoothing of the fields at the origin does\nhappen when one compares the Prasad-Sommerfield exact solution (where\nthere are no mass or self interaction terms for the scalar field) with\nthe numerical results of 't Hooft \\cite{thooft} or Julia and Zee\n\\cite{zee} (where mass and self interaction terms are included).\nThe numerical results lead to monopoles and dyons with a\nfinite core, while the exact Prasad-Sommerfield solution leads to a\npoint monopole (despite this their exact solution still has finite\nenergy when the energy density is integrated down to zero, unlike our\npresent solution). 
As in the case of the Prasad-Sommerfield solution,\nintroducing mass and self interaction terms for the scalar fields\nwould require solving the equations numerically, since we have\nnot been able to find an analytical solution under these conditions.\n\nTo calculate the energy of the field configuration of our solution it is\nnecessary to integrate the $T_{00}$ component of the energy-momentum\ntensor over all space, excluding the origin. The $T_{00}$\ncomponent of the energy-momentum tensor\nis similar to the Hamiltonian density of Eq. (\\ref{ham}) except that\nall the terms have positive signs\n\\begin{equation}\nT_{00} = {1 \\over 4} F_{ij} ^a F^{aij} + {1 \\over 2} F_{0i} ^a\nF^{a0i} + {A^2 \\over 2} D_i \\Phi ^a D^i \\Phi ^a + {A^2 \\over 2}\nD_0 \\Phi ^a D^0 \\Phi ^a\n\\end{equation}\nThe energy in the fields of our solution is the integral over all\nspace of $T_{00}$. Using the field equations of Eq. (\\ref{fbogo}),\nand the fact that $F_{0i} ^a = C(D_i \\Phi ^a)$ the energy of the fields\nis\n\\begin{eqnarray}\nE &=& \\int T_{00} d^3 x \\nonumber \\\\\n&=& A^2 \\int d^3 x D_i \\Phi ^a D^i \\Phi ^a\n\\end{eqnarray}\nSince the fields only have a radial dependence the angular part\nof the integration can be easily done. Further, using the radial\nsymmetry to evaluate the integrand along the positive $\\hat{ z}$\naxis, and using the matrix expression for $\\Phi ^a$ as well as the\nfield equations, Eq. (\\ref{fbogo1}), we find that the energy\nbecomes \\cite{gold}\n\\begin{equation}\n\\label{energy1}\nE = {8 \\pi A^2 \\over g^2} \\int _{r_c} ^{\\infty} r^2 dr \\left(\nTr \\left( \\left[{d {\\bf \\Phi } \\over dr} \\right] ^2 \\right)\n+ {2 \\over A^2 - C^2} Tr \\left( r^{-2} {d {\\bf M_+} \\over dr}\n{d {\\bf M_-} \\over dr} \\right) \\right)\n\\end{equation}\nUsing the solutions for the elements of the matrices,\n$\\bf{ \\Phi}$ and $\\bf{ M}_{\\pm}$ of Eq. 
(\\ref{soln}) we find\n\\begin{eqnarray}\n\\label{phim}\nTr \\left( \\left[ {d {\\bf \\Phi } \\over dr} \\right]^2 \\right)\n&=& {K^2 (2r - K)^2 \\over 4 r^4 (A^2 - C^2)\n(K - r)^4} \\sum _{n=0} ^N (N - 2n)^2 \\nonumber \\\\\nTr \\left( r^{-2} {d {\\bf M_+} \\over dr} {d {\\bf M_-} \\over dr} \\right)\n&=& {K^2 \\over r^2 (K-r)^4} \\sum _{n=0} ^N n(N+1-n)\n\\end{eqnarray}\nUsing this in Eq. (\\ref{energy1}) and carrying through the integration\nthe energy in the field configuration of this Schwarzschild-like\nsolution is\n\\begin{equation}\n\\label{energy2}\nE = {2 \\pi A^2 K^2 N(N+1)(N+2) \\over 3 g^2 (A^2 - C^2)}\n\\left[ {K - 2 r_c \\over r_c (K -r_c)^3} \\right]\n\\end{equation}\nwhere the sums from Eq. (\\ref{phim}) have been done explicitly.\nThis result can be checked against the SU(2) result \\cite{sing}\nby taking $N=1$ in Eq. (\\ref{energy2}), and the expressions for\nthe energy do indeed agree (In Ref. \\cite{sing} we required that\n$A^2 - C^2 =1$ whereas here this factor is divided out). As it\nstands there are two arbitrary constants that enter the solution\n({\\it i.e.} $K$ and $r_c$) which would have to be specified\nbefore any connection between this Schwarzschild-like solution\nand the real world could be carried out. As has already been\nmentioned $K$ is the Yang-Mills equivalent of $2GM$ in general\nrelativity. Thus it could be conjectured that $K$ is related\nto the strength of the gauge interaction\n($G$ in general relativity), and the\nmagnitude of the central charge which produces the gauge field\nconfiguration ($M$ in general relativity).\n\nOne interesting feature which this general SU(N+1) Schwarzschild-like\nsolution shares with our previous SU(2) solution is that scalar\nfields are apparently required in order to get a physically\nnon-trivial solution. 
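As an independent cross-check of Eq. (\ref{energy2}) (ours, not from the paper), the integral in Eq. (\ref{energy1}) can be evaluated numerically directly from the matrix solutions of Eqs. (\ref{mat1}), (\ref{mat2}) and (\ref{soln}). The parameter choices below ($A = g = K = 1$, $C = 0$, $N = 3$, $r_c = 3$) are arbitrary test values, not from the text:

```python
import math

# Cross-check (ours): build the diagonal of Phi (Eq. mat1) and the superdiagonal
# of M+ (Eq. mat2) from the solution (Eq. soln), evaluate the integrand of
# Eq. (energy1) with numerical derivatives, integrate, and compare against the
# closed form of Eq. (energy2). Units are arbitrary.
N, K = 3, 1.0
A, C, g = 1.0, 0.0, 1.0
S = math.sqrt(A**2 - C**2)
P = N * (N + 1) * (N + 2)

def phi(m, r):                    # Eq. (soln)
    return K * m * (N + 1 - m) / (S * r * (K - r))

def a(m, r):
    return r * math.sqrt(m * (N + 1 - m)) / (K - r)

def integrand(r, h=1e-6):
    # Diagonal of Phi: (1/2)[phi_1, phi_2 - phi_1, ..., -phi_N]
    def phi_diag(rr):
        p = [phi(m, rr) for m in range(1, N + 1)]
        diffs = [p[0]] + [p[i] - p[i - 1] for i in range(1, N)] + [-p[-1]]
        return [0.5 * d for d in diffs]
    dphi = [(u - v) / (2 * h) for u, v in zip(phi_diag(r + h), phi_diag(r - h))]
    tr_phi = sum(x * x for x in dphi)
    # Superdiagonal of M+ is a_m / sqrt(2), so Tr(dM+ dM-) = sum (a_m')^2 / 2
    da = [(a(m, r + h) - a(m, r - h)) / (2 * h) for m in range(1, N + 1)]
    tr_m = sum(x * x for x in da) / 2.0
    return r * r * tr_phi + (2.0 / S**2) * tr_m

def energy(rc, rmax=3000.0, n=4000):
    # Composite Simpson rule in u = 1/r, which tames the slowly decaying tail.
    ua, ub = 1.0 / rmax, 1.0 / rc
    step = (ub - ua) / n
    total = 0.0
    for i in range(n + 1):
        u = ua + i * step
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * integrand(1.0 / u) / u**2
    return (8 * math.pi * A**2 / g**2) * total * step / 3.0

def energy_closed(rc):            # Eq. (energy2)
    return (2 * math.pi * A**2 * K**2 * P / (3 * g**2 * S**2)) \
        * (K - 2 * rc) / (rc * (K - rc)**3)

rc = 3.0                          # a sample cutoff outside the horizon r0 = K
assert abs(energy(rc) / energy_closed(rc) - 1) < 1e-4
```

With these values the closed form gives $E = 25\pi/3 \approx 26.18$, and the quadrature agrees to better than $10^{-4}$ relative error, supporting the final expression.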
If there were no scalar fields in the original\nLagrangian ({\\it i.e.} $A = 0$), then the field energy of Eq.\n(\\ref{energy2}) would be zero, and the $W_0 ^a$ component of the\ngauge fields would be purely imaginary. Although the pure gauge\ncase with no scalar fields is a solution mathematically, its\nphysical significance is dubious. Requiring that the\nsolutions be purely real, or that the energy in the fields\nbe non-zero, would exclude the pure gauge case solution. Under either\nof these requirements on the solution, it can be seen that scalar\nfields must be present.\n\n\\section{Conclusions}\n\nIn this paper we have generalized our previous exact,\nSchwarzschild-like solution for SU(2) Yang-Mills theory\nto SU(N+1) by using an embedding of SU(2) into SU(N+1) \\cite{gold}.\nThis exact SU(N+1) solution was found by using the connection\nbetween general relativity and Yang-Mills theories. It was found that\nthe Schwarzschild solution of general relativity carries over\nwith only a little modification into an equivalent solution for an\nSU(N+1) gauge theory coupled to massless scalar fields. Just as the\nSchwarzschild solution possesses an event horizon which permanently\nconfines any particle which carries the ``charge'' of the gravitational\ninteraction ({\\it i.e.} mass-energy), so the present solution also\nhas a ``color'' event horizon which permanently confines any particle\nwhich carries the SU(N+1) gauge charge. This may be the confinement\nmechanism which has long been sought for the SU(3) gauge theory of\nthe strong interaction, QCD. Before this claim can be made there\nare several important questions which must be resolved. First, under\nsome reasonable physical assumptions about the nature of our solution\nit is found that scalar fields are required for a solution to\nexist. Normally scalar fields are not thought to play a significant role\nin confinement, so the physical importance of these scalar fields\nwould need to be addressed. 
Second, there are several arbitrary constants\nwhich crop up in the solution ($K$ and $r_c$). In order to make a\nconnection with the real world these constants would have to be given.\nTheoretically $K$ should be related to the strength of the gauge\ninteraction as well as the magnitude of the gauge charge which produces\nthe Schwarzschild-like gauge fields. Experimentally $K$ should be\nrelated to the radius of the various QCD bound states ({\\it e.g.}\nprotons, pions, etc.). The other constant, $r_c$, was\nintroduced chiefly to avoid the\nsingularity at $r=0$, but also because our solution does not\npossess the property of asymptotic freedom as $r \\rightarrow 0$.\nThis should not be too surprising since our solution is\nfor classical Yang-Mills fields, but as $r \\rightarrow 0$ quantum\neffects should become increasingly important.\nThus $r_c$ can be thought of, roughly, as marking the boundary between\nthe classical, confining solution of this paper, and the quantum-dominated\nasymptotic freedom regime. All this strongly suggests\na bag-like structure for QCD bound states: as a particle approaches\n$r = K$ from $r < K$, it feels a progressively stronger color force\nwhich confines it to remain inside the bound state. 
As the particle\napproaches $r \\rightarrow 0$ it enters the asymptotic freedom\nregime, where it moves as if it were free.\n\nAn interesting extension of this work would be to see if other\nexact solutions from general relativity have Yang-Mills counterparts.\nIn particular if a Yang-Mills equivalent of the Kerr solution\ncould be found it might give some insight into the nature of\nthe spin of fermions.\n\n\\section{Acknowledgements} I would like to acknowledge the help\nand suggestions of David Singleton and Hannelore Roscher.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nStarting with the original Great Dark Spot (GDS-89) observed by {\\it Voyager 2}, roughly a half-dozen large geophysical vortices have been observed on the Ice Giants. In 2015-2017, a feature was observed on Neptune in the southern hemisphere (SDS-2015) \\citep{Wong2016,Wong2018}. A recent observation as part of the Hubble Outer Planet Atmosphere Legacy (OPAL) program revealed a new dark spot with bright companion clouds in Neptune's northern hemisphere, NDS-2018 \\citep{Simon2019}. The structure is the most recent of large scale geophysical features\nto be observed on the ice giants. Although these Dark Spots have similar features to large vortices on Jupiter, such as the Great Red Spot, they exhibit dynamical motions such as shape oscillations and latitudinal drift. Neptune anticyclones evolve on a time scale of months \\citep{hsu2019} as opposed to large vortices on Jupiter, which evolve over many decades \\citep{Ingersoll2004}. During the {\\it Voyager 2} encounter with Neptune beginning in January of 1989, GDS-89 was drifting towards the equator at an approximate rate of 1.3\\degree \/month \\citep{Sromovsky2002}. Post-{\\it Voyager 2} observations using the Hubble Space Telescope (HST) revealed that the vortex had dissipated as it approached the equator \\citep{HammelLockwood1997}. 
Many of the observed characteristics have been matched previously by numerical models\n(e.g., shape oscillation by \\citet{Polvani1990}, drift rate by \\citet{LeBeauDowling_Neptune} and companion cloud formation by \\citet{Stratman2001}). However, reproducing all of these features simultaneously has remained elusive.\n\nThese vortices exhibit surprising variability in terms of evolution, shape, drift, cloud distribution, and shape oscillations, so an explicit calculation of the environmental parameters is required for each vortex observed. However, this is beneficial because more diagnostic information can be obtained from each case than if the spot occurrences were repetitive in terms of their characteristics. For example, in the case of GDS-89, equatorial drift was observed \\citep{Smith1989,LeBeauDowling_Neptune} as opposed to the poleward drift of dark spot SDS-2015 \\citep{Wong2016,Wong2018}. In addition, GDS-89 had an accompanying cloud feature external to the dark spot, presumably condensed methane, whereas some vortices have centered bright companions such as D2 \\citep{Smith1989} and SDS-2015 \\citep{Wong2018}. \n\n\\citet{Stratman2001} demonstrated that orographic upwelling could be the cause of companion clouds. An analysis of these overlying orographic cloud features shows promising insight into the dynamics of clouds on Neptune, as the features are directly impacted by the underlying atmospheric structure and the deep methane abundance. To that end, it is necessary to apply a cloud microphysical model to make a more complete representation of dark spots. We use an updated microphysics calculation implemented in the Explicit Planetary Isentropic Coordinate General Circulation Model (EPIC GCM) \\citep{Dowling_EPICmodel1998,Dowling2006,Palotai2008} to account for methane cloud microphysics (see Section~\\ref{clouds}). We investigate the dynamics of vapor and subsequent persistent cloud formation on Neptune by modelling large scale vortices. 
\n \nClouds have been modelled explicitly in recent works to study the effect of convection \\citep[e.g.,][]{Sugiyama2011,Li2019} and the GRS \\citep{Palotai2014} on Jupiter, but not for Neptune. \n\n\nWe seek to answer the following questions: \n\\begin{enumerate}\n \\item How does the addition of a methane cloud microphysical model affect the evolution of the vortex? \n \\item How do various methane deep abundance (mole fraction) values affect the vortex? \n\\end{enumerate}\n\nTo investigate these questions, we use an increased horizontal grid resolution and increased number of vertical layers in the column compared to previous studies. \nAdditionally, we use methane vapor to track the vortex drift rate and shape oscillations whereas in previous studies, potential vorticity (PV) was used as the primary parameter to determine the features of the vortex.\n\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/neptune_voyager.jpg}\n \\caption{An image of the GDS-89 (dark feature near the center) taken by {\\it Voyager 2} in 1989. The high altitude white companion clouds around the GDS are visible. To the South, the Scooter, which is another dark spot observed by {\\it Voyager 2} is visible with its distinct centered cloud. Courtesy NASA\/JPL-Caltech.}\n \\label{fig:GDS-89}\n\\end{figure}\n\n\\section{Methods}\n\\subsection{EPIC model}\nTo investigate persistent cloud formation on Neptune, we use the EPIC-GCM with an active hydrological cycle for methane.\nEPIC uses a hybrid vertical coordinate, $\\zeta$, described by \\citet{Dowling2006}. The top of the model uses potential temperature, $\\theta$, as a vertical coordinate while the bottom uses a scaled pressure variable, $\\sigma$. The transition between the two functions occurs at 10 hPa (see Figure \\ref{fig:TP}). Indeed, this transition allows for the use of EPIC on planets without a solid, terrestrial surface (i.e. 
gas giants) and increased vertical resolution at deeper layers where $\\theta$ is nearly constant. \n\n\\subsection{Cloud microphysics} \\label{clouds}\nWe use the active cloud microphysical model from \\cite{Palotai2008} which incorporates the condensation of methane in the EPIC model, with the revisions to precipitation given in \\cite{Palotai2016DPS}. \n\nThis scheme deals with bulk mass transfer between five explicit phases: vapor, cloud ice, liquid cloud droplets, snow and rain. The last two correspond to precipitation in the model and are subject to sedimentation at the terminal velocity.\n\nCloud particle sizes are diagnosed using the Gunn-Marshall distribution \\citep{GunnMarshall} for snow and the Marshall-Palmer distribution \\citep{MarshallPalmer} for rain. Both are log-normal with two free parameters describing the mean and standard deviation of the particle-size distribution. We apply these distributions due to the low number of parameters (2) and the computational efficiency they provide compared to other, more recently retrieved parameterizations. Furthermore, most Earth-based literature uses empirically derived distributions which are accurate for localised conditions, making them a poor choice for porting to gas-giant atmospheres \\citep{Palotai2008}.\n\n\\subsection{Fall Speed of Particles}\n\n\nTo speed up computation, we parameterize the terminal velocity of snow particles of different diameters ($D$) using a power law:\n\\begin{equation}\\label{eq:fall_speed}\n V_t = x D^y \\left( \\dfrac{p_0}{p}\\right)^{\\gamma}\n\\end{equation}\nwhere $x$, $y$ and $\\gamma$ are determined by fitting the terminal velocity calculated using theoretical hydrodynamical principles \\citep{PruppacherKlettBookFallSpeed}. $p_0=1$ bar is the reference pressure. Similarly to \\citet{Palotai2008}, we assume that snow is graupel-like with a hexagonal shape and follow their formulation for calculating sedimentation velocity as a function of particle size. 
\n\nA plot of the terminal velocity of CH$_4$ and H$_2$S snow at a pressure of 1 bar is shown in Figure~\\ref{fig:termvel}, and the fit parameters for CH$_4$ and H$_2$S on Neptune are given in Table \\ref{tab:fall_speed}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/neptune_snow.png}\n \\caption{Snow terminal velocity fits for CH$_4$ (solid) and H$_2$S (dashed) at 1 bar on Neptune.}\n \\label{fig:termvel}\n\\end{figure}\n\nUsing the particle size distributions, the net sedimentation rate for a grid cell is calculated by weighting the terminal velocities by the mass of particles in each radius bin. Currently, only snow particles undergo sedimentation since the terminal velocity for cloud ice on Neptune is on the order of a few $\\mu$m\/s and thus fall only a few meters over the duration of our simulations.\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\centering\n & CH$_4$ snow & H$_2$S snow \\\\ \\hline \\hline\n$x$ & 14.5 & 22.2 \\\\ \\hline\n$y$ & 0.458 & 0.504 \\\\ \\hline\n$\\gamma$ & 0.320 & 0.320 \\\\ \\hline\n\\end{tabular}\n\\caption{These constants are determined from Eq. \\ref{eq:fall_speed} when $D$ is in meters and $V_t$ is in m\/s.}\n\\label{tab:fall_speed}\n\\end{table}\n\n\n\n\\subsection{Model setup} \\label{Environmental Initialization}\n\nWe run 3-dimensional simulations that span $-90\\degree$ to $0\\degree$ latitude and $-120\\degree$ to $120\\degree$ longitude with 256 points each resulting in horizontal boxes that are $0.5\\degree\\times1\\degree$ (lat$\\times$lon). These limits are sufficient to minimize the effects of the lateral boundaries on the simulated vortex and its associated clouds and prevent the vortex from interacting with itself \\citep{LeBeauDowling_Neptune}. 
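The power law of Eq. (\ref{eq:fall_speed}) with the fitted constants from Table \ref{tab:fall_speed} can be evaluated directly; a minimal sketch (the sample diameter and pressures below are illustrative, not values from the paper):

```python
import math

# Terminal velocity V_t = x * D**y * (p0/p)**gamma  (Eq. fall_speed),
# with the fitted constants for CH4 and H2S snow; D in meters, p in bar.
FIT = {"CH4": (14.5, 0.458, 0.320), "H2S": (22.2, 0.504, 0.320)}

def terminal_velocity(species, D, p, p0=1.0):
    x, y, gamma = FIT[species]
    return x * D**y * (p0 / p) ** gamma

# A 1 mm CH4 snow particle at the 1 bar reference level:
v1 = terminal_velocity("CH4", 1e-3, 1.0)
# Lower pressure (higher altitude) increases the fall speed by (p0/p)**gamma:
v_half = terminal_velocity("CH4", 1e-3, 0.5)
```

At the reference pressure the pressure factor drops out, so the fit reduces to $V_t = x D^y$, giving roughly 0.6 m/s for a 1 mm CH$_4$ particle.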
The model covers 35 unequally spaced vertical layers between $1$ hPa and $14$ bar (see Figure \\ref{fig:TP}), which provides significantly higher resolution than previous studies of the GDS.\n\n\nThe pressure-temperature profile used in this study is shown in Figure \\ref{fig:TP}, which is an idealized curve taken from \\citet{LeBeauDowling_Neptune}. The black curve is from previous refinements of {\\it Voyager 2} radio-occultation T(P) data from \\citet{Conrath1991}; following \\citet{Stratman2001}, helium is taken at a 19\\% mole fraction. This profile is used as the initial input into the model at the equator. The temperature profile throughout the rest of the model is calculated using the thermal wind equation. The layers, shown in Figure \\ref{fig:TP} as horizontal lines on the right, were chosen in order to increase the resolution around the initial location of the spot (approximately 1 bar) in the vertical column and the methane cloud deck. \nIn addition, since the model reaches up to 1 hPa, we can set the $\\sigma-\\theta$\ntransition at 10 hPa, or between $k = 2$ and $k = 3$, indicated by the intersection of the horizontal red line and the potential temperature profile in blue. We initialize the equilibrium simulation using the idealized zonal wind profile $Q_y = \\frac{1}{3}$ from \\citet{LeBeauDowling_Neptune},\nwhich closely matches the drift rate of the GDS-89 as opposed to other idealized curves (see Fig. \\ref{fig:zonal_wind} and Section \\ref{Drift_rate}). We run 130-day simulations to investigate stability, drift rate, and companion clouds throughout the model, with the ultimate goal of achieving similarities to the observations of GDS-89 and other vortices. Given that the observed periodicity in the oscillation of the GDS-89 was about 8 days \\citep{Sromovsky1993}, the model should be able to capture several periods of wobble in the vortex.\n\nInitially, the geopotential heights are determined in the model by integrating the hydrostatic equation. 
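The thermal-wind step used for the off-equator temperatures can be written schematically as follows (a standard pressure-coordinate form for the zonal wind; the discretization actually used in EPIC may differ):

```latex
% Thermal wind balance (schematic, ours): vertical shear of the zonal wind u
% is tied to the meridional temperature gradient on pressure surfaces,
f \, \frac{\partial u}{\partial \ln p}
\;=\; \frac{R}{a} \left( \frac{\partial T}{\partial \varphi} \right)_{\!p},
```

where $f$ is the Coriolis parameter, $R$ the specific gas constant, $a$ the planetary radius, and $\varphi$ latitude; given the zonal wind profile and the reference $T(P)$ curve at one latitude, this relation determines the temperature field elsewhere.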
Due to numerical errors, the model is initially slightly unstable, so we run the model for 5 days to let the winds stabilize. During this time, cloud microphysics is turned off so as to not have spurious cloud growth. The spot is added to the model at the end of this equilibrium phase. \n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/tp_profile5.png}\n \\caption{(a) shows the temperature and pressure profile of Neptune used in the EPIC model. (b) shows the potential temperature profile ($\\theta$) of Neptune (shown in blue). The horizontal black lines indicate the position of vertical layers used in the model. The GDS-89 extends from approximately 200 hPa to 3.5 bars (shown in the shaded region). The sigma-theta transition is also shown in red at 10 hPa.}\n \\label{fig:TP}\n\\end{figure}\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/zonal_wind.png}\n \\caption{Mean zonal wind profile as a function of latitude using the pseudo PV gradient, $Q_y=\\frac{1}{3}$. The vortex is induced at roughly $32\\degree$ S latitude. }\n \\label{fig:zonal_wind}\n\\end{figure}\n\n\n\\subsection{Methane abundance}\nThe primary goal of this work is to analyse the effect of methane on the dynamics and stability of the GDS-89. To that end, we vary the deep abundance (mole fraction) of carbon and initial ambient humidity in our test cases. \n\nIn the nominal case, we take the standard value of $40\\times$ the solar [C\/H] fraction for Neptune \\citep{Baines1995}, or an approximate CH$_4$ mole fraction of $\\textit{f}_{\\text{CH}_4} = 0.022^{+0.005}_{-0.006}$. The vertical methane abundance profile is initialized by a ``cold-trap\" model, where the mole fraction at the deepest layer is defined by a constant value (the deep abundance). Successive layers are limited by the minimum of the saturation mole fraction and the mixing ratio of the layer below. 
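The ``cold-trap\" initialization just described reduces to a single bottom-to-top pass over the layers. A minimal sketch (the numbers below are illustrative placeholders, not values from our model):

```python
def cold_trap_profile(deep_mole_fraction, saturation_mole_fractions):
    """Build a vertical mole-fraction profile, bottom to top.

    The deepest layer is pinned to the deep abundance; each layer above
    takes the minimum of its saturation mole fraction and the mixing
    ratio of the layer below, so vapor is 'cold-trapped' at the coldest
    level it has passed through.
    """
    profile = [deep_mole_fraction]
    for q_sat in saturation_mole_fractions:  # ordered bottom to top
        profile.append(min(q_sat, profile[-1]))
    return profile

# Placeholder saturation values with a minimum mid-column; the 0.002
# bottleneck caps every layer above it, so the mixing ratio never
# increases with height.
profile = cold_trap_profile(0.022, [0.05, 0.01, 0.002, 0.004, 0.1])
```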
In our model, we take the initial relative humidity for all ``wet\" cases to be 95\\% to prevent spurious cloud growth in the first timestep. In a 3-dimensional model, advection of methane vapor and cooling produce localized supersaturation, which leads to cloud formation.\n\nWe run 5 cases which are shown in Table~\\ref{Cases}. Three include active cloud microphysics at different methane deep abundances. In the passive case, methane vapor is added to the model but does not condense. We also include a dry (H\/He atmosphere) case. These different parameters test both the effect of the additional mass from methane vapor and the latent heat release from cloud formation on the dynamics of the GDS. \n\n\\begin{table}\n\\centering\n\\begin{tabular}{ccc}\n\\hline\n\nCase & Cloud microphysics & [C\/H] relative to solar \\\\ \\hline\\hline\n1 & On & $20$ \\\\ \\hline\n2 & On & $40$ \\\\ \\hline\n3 & On & $80$ \\\\ \\hline\n4 & Off & $40$ \\\\ \\hline\n5 & Off & - \\\\ \\hline\n\\end{tabular}\n\\caption{A summary of cases tested in this study. For case 4, methane was added but cloud microphysical processes were turned off (i.e. methane vapor was subject only to advection). Case 5 is a dry model which has no added methane.}\n\\label{Cases}\n\\end{table}\n\n\\subsection{Addition of the Vortex} \\label{sec:addvort}\n\nWe utilize the vortex initialization as detailed in \\citet{LeBeauDowling_Neptune}, who use a Kida vortex model.\nThe vortex begins at a latitude of $32\\degree$ S at a pressure level of 1000 hPa ($k = 24$). The vortex extends roughly a scale height vertically in both directions (see Fig. \\ref{fig:TP}). The initial vortex adjustment period is about $6-8$ model days during which the shape of the spot changes drastically. As such, we begin our analysis after this adjustment period; $10$ days into the model. 
\n\nThe induction of the vortex creates a region of anomalous potential vorticity (PV) compared to the ambient zonal wind shear, defined as the difference in PV between the test case and the corresponding equilibrium simulation. PV has been used in previous modelling works \\citep[e.g.][]{LeBeauDowling_Neptune, Stratman2001} to study the GDS and is a useful tracer in dry models to track the vortex. We can define the extent of the GDS by a contour of constant anomalous PV at a pressure level of $1$ bar or $500$ hPa (the centre of the initialized vortex). The shape and location of the vortex are then quantified by fitting an ellipse to these contours using the method of \\citet{HalirFlusser1998}. \n\n\nHowever, the PV of the simulated vortex is difficult to compare with {\\it Voyager 2} observations due to the lack of precise wind speed measurements. The change in the atmospheric structure caused by the addition of the vortex leads to a distinct variation in methane abundance inside the GDS, which can be used to track the dynamics of the spot in a similar way to PV and can also be compared with the observed spot opacity. By integrating the methane vapor density with depth, we obtain the column density (CD), which is analogous to optical depth in the upper troposphere. We then take the ratio of the CD to the equilibrium value (see Section \\ref{Environmental Initialization}) and proceed similarly, by fitting an ellipse to a contour of constant CD. Hereafter, references to CD imply this ratio, unless otherwise specified.\n\n\n\n\n\\section{Results}\n\n\\subsection{1D Simulations}\nWe first run a 1-dimensional case with methane cloud formation enabled to test the microphysical processes and input parameters. We initialize the atmosphere with 150 layers between 1 hPa and 14 bar. We use a nominal value of $40\\times$ the solar [C\/H] fraction, and the model is initially saturated at 100\\% relative humidity everywhere. 
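The ellipse fitting described in Section \\ref{sec:addvort} uses the numerically stable direct method of \\citet{HalirFlusser1998}. As a simpler illustrative stand-in (not the implementation used in this work), a general conic can be fit to contour points by ordinary least squares and the centre recovered from the fitted coefficients:

```python
import numpy as np

def fit_conic_centre(xs, ys):
    """Fit A x^2 + B xy + C y^2 + D x + E y = 1 to contour points by
    ordinary least squares, then recover the conic centre by solving
    grad = 0:  [2A B; B 2C] [x0, y0]^T = [-D, -E]^T."""
    x = np.asarray(xs, dtype=float)
    y = np.asarray(ys, dtype=float)
    design = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(design, np.ones_like(x), rcond=None)[0]
    x0, y0 = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return x0, y0

# Points on an ellipse centred at (3, -2) with semi-axes 4 and 1.5.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x0, y0 = fit_conic_centre(3 + 4 * np.cos(t), -2 + 1.5 * np.sin(t))
```

Unlike the \\citet{HalirFlusser1998} method, this simple fit does not enforce the ellipse constraint $B^2 - 4AC < 0$, so it is suitable only for clean, well-sampled contours.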
\n\nFrom equilibrium cloud condensation models (ECCM) \\citep[e.g.,][]{AtreyaWong2005}, methane ice clouds are predicted to form at around 1.5 bar. Figure~\\ref{fig:1dsim} shows the cloud as a function of time. The base of the clouds in the model is at about 1050 hPa. Snow is shown with the black contours and is precipitated out in a few hours. Cloud ice is most dense near the base and has a typical density of $\\sim 10^{-5}$ kg\/m$^3$. \n\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/cloud_dens.png}\n \\caption{Results from 1D simulations. Methane ice cloud is plotted in light blue and snow is plotted in black dashed contours. Colorbar corresponds to the base-10 log of the cloud density in kg\/m$^3$. }\n \\label{fig:1dsim}\n\\end{figure}\n\n\\subsection{Potential vorticity} \\label{vorticity_dynamics}\nTo fit the vortex, we use the anomalous potential vorticity as a tracer. We also tracked the evolution of the anomalous PV within the vortex, as shown in Figure~\\ref{fig:pv_evolution}. As the vortex drifted equatorward, the background wind shear and the background potential vorticity increased. Consequently, although the total PV of the vortex itself was conserved, the anomalous potential vorticity of the vortex decreased with time, making it difficult to consistently track the vortex. Thus, we are limited to only the first 100 days of tracking the vortex with this method, after which the anomalous PV of the vortex was comparable to ambient noise. \n\n\nWe fit the anomalous PV at two levels: 1000 hPa and 500 hPa (Figure~\\ref{fig:cd_pv}), corresponding roughly to the vertical extent of the vortex. Below roughly 2000 hPa, the vortex is very weak and the anomalous PV is almost negligible. We find that the vortex is fairly uniform between the two layers, as the vortex is mostly equal in size and at the same position. 
The 1000 hPa anomalous PV decreases faster than the 500 hPa layer, and thus for all further analysis we report values from the vortex tracked using the 500 hPa anomalous PV.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\textwidth]{Figures\/panels5.png}\n \\caption{Snapshots of the $40\\times$ solar methane abundance simulation showing CD fit (black), PV fit at 1 bar (red), and PV fit at 500 hPa (green) to the vortex, along with their respective centres ($\\times$). Methane column density is plotted in the background in blue, with the darker regions corresponding to lower CD (less vapour) with a lower limit of $-0.015$ kg\/m$^2$ and an upper limit of $0.4$ kg\/m$^2$. The PV fit is shifted by about $1.5\\degree$ to the northeast of the CD centre and decreases over time.} \n \\label{fig:cd_pv}\n\\end{figure*}\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/pv_center_final.png}\n \\caption{Temporal variation of PV and anomalous PV\n (both in units of m$^2$ K s$^{-1}$ kg$^{-1}$)\n at 500 hPa inside the simulated GDS in our test cases using CD to track the vortex. In the dry case, only anomalous PV at 500 hPa was used to track the simulated GDS due to the lack of methane. } \n \\label{fig:pv_evolution}\n\\end{figure}\n\n\\subsection{Drift Rate}\\label{Drift_rate}\n\\citet{LeBeauDowling_Neptune} investigated the equatorial drift of GDS-89 by using constant-vorticity-gradient zonal wind profiles (Fig. \\ref{fig:zonal_wind}) and found that the value of the mid-latitude pseudo PV gradient, $Q_y$, is the primary influence on meridional drift. Using $Q_y=\\frac{1}{3}$, our experiments match the observed drift rate of the GDS-89 of $1.3\\degree$ per month \\citep{Sromovsky1993}.\n\nFigure \\ref{fig:drift_rate} shows the different cases investigated in this study and their latitudinal drift relative to the approximate observed drift rate of the GDS. 
Two lines per case are depicted, one for the PV at 500 hPa (dashed) and one for the methane column density (solid). Table \\ref{drift_rate_table} summarizes the average drift rate for each case investigated.\n\nCase 2 achieves the closest drift to the observed value for the GDS-89, followed by Case 1. The passive and dry cases (Cases 4 and 5, respectively) have the slowest drift. The $80\\times$ case (Case 3) struggles to maintain dynamic stability throughout the simulation. We could not fit an ellipse around the vortex since it did not have a uniform shape, and it dissipated after approximately 60 days. \n\n\\begin{table}\n\\centering\n\\begin{tabular}{ccc}\n\\hline\n\nCase & CD Drift Rate [$\\degree$\/month] & PV Drift Rate [$\\degree$\/month]\\\\ \\hline\\hline\n1 & $0.82$ & $0.90$ \\\\ \\hline\n2 & $1.36$ & $1.26$ \\\\ \\hline\n3 & - & - \\\\ \\hline\n4 & $0.27$ & $0.31$ \\\\ \\hline\n5 & - & $0.46$ \\\\ \\hline\n\\end{tabular}\n\\caption{A summary of the approximate drift rate using either CD or PV at 500 hPa for each case. For case 3 ($80\\times$), we were unable to generate a fit. Case 5 is a dry model, so PV alone was used to track the vortex.}\n\\label{drift_rate_table}\n\\end{table}\n\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/drift_rate_final.png}\n \\caption{Drift rate of the simulated GDS in our test cases. We used both methane column density (solid) and potential vorticity (dashed) at 500 hPa to fit the ellipse. In the dry case, only PV at 500 hPa was used to track the simulated GDS due to the lack of methane. The dashed grey lines are the average meridional drift rate of the GDS-89 observed by {\\it Voyager 2} \\citep{Sromovsky1993}. }\n \\label{fig:drift_rate}\n\\end{figure}\n\n\\subsection{Shape Oscillations}\\label{shape_oscillations}\nAlong with meridional drift, time-dependent oscillations in the shape of the vortex are of interest. 
Figure \\ref{fig:shape_oscillation} shows the inverted aspect ratio ($b\/a$) of the fitted ellipse for CD (Cases 1-4) and the PV at 1 bar (Case 5). In all cases, we correct for geometric effects by converting the extents to physical lengths using an equatorial radius of $24,760$ km and polar radius of $24,343$ km. Clearly, these simulations exhibit complex dynamical variability in shape. The passive (Case 4) and dry (Case 5) cases have a lower inverted aspect ratio than the active cases (1-3), at around 0.3 as opposed to 0.4. Additionally, for the active cases, there does not appear to be a predictable frequency of oscillation, and a Fourier power spectrum did not reveal any strong periodicity. The semi-major axis of the active cases is increasing without a corresponding increase in the minor axis, resulting in a more elliptic vortex with time. Conversely, the passive and dry cases show the opposite trend, becoming more circular. \n\nThe GDS-89 was observed to increase in its semi-minor axis ($b$) at a rate of $3.8 \\times 10^{-5}$ h$^{-1}$ \\citep{Sromovsky1993} as it drifted northward, while the semi-major axis ($a$) was roughly constant, resulting in a more circular vortex with time. This is in contradiction to our cloudy simulations, where we see the vortex getting more elliptic by stretching along the zonal axis as it drifts northward while maintaining a nearly constant meridional height (Figure~\\ref{fig:cd_pv}). In the passive and dry cases, there is very little change in the aspect ratio throughout the run, and they are much closer to the aspect ratio trend observed by {\\it Voyager 2}. \n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/shape_oscillation_final.png}\n \\caption{The inverted aspect ratio of the ellipse (semi-minor divided by semi-major) for the different test cases using methane column density to fit the simulated GDS. 
The solid black line is the average increase in the inverted aspect ratio of the GDS-89 observed by {\\it Voyager 2} \\citep{Sromovsky1993}. The periodic nature of the vortex (i.e. the `rolling motion') is evident in all cases, but at different time scales. }\n \\label{fig:shape_oscillation}\n\\end{figure}\n\n\\subsection{Companion Clouds}\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/comp_cloud.png}\n \\caption{Model output at day 44. (a) shows the top-down view of methane column density (blue) and cloud (white). (b) shows the vertical wind with upwelling in red and downwelling in blue, with wind vectors in the comoving frame. (c) and (d) are longitudinal slices showing methane relative humidity, with brighter corresponding to saturated regions and darker being drier. Wind vectors show the local circulation within the vortex and the black contours are ice cloud densities. The locations of the slices are shown in (a) in red. }\n \\label{fig:comp_cloud}\n\\end{figure}\n\nFigure~\\ref{fig:comp_cloud} (a) shows a snapshot of the model output with the methane column density in blue and companion clouds in white. There are two distinct types of clouds that interact with the vortex: those that are interior (within the horizontal extents of the vortex) and those that are exterior to it (Figure~\\ref{fig:comp_cloud} a). The exterior clouds form a thin layer at roughly 1 bar (Fig~\\ref{fig:comp_cloud} c, d), while the interior clouds form much higher and are extended vertically. \n\nThe interior clouds form at two discrete levels: one close to the tropopause at $\\sim100$ hPa and the other further down around $400-500$ hPa. The upper-level cloud particles are small due to the thinner environment and do not reach the critical diameter of $500\\mu$m needed to precipitate. The humid environment where the clouds form (bright in Fig~\\ref{fig:comp_cloud} c, d) is retained throughout the vortex lifetime due to the thermal structure of the vortex. 
\n\nThe deeper cloud is denser and precipitates immediately. However, both clouds are long-lived and last about 30 days after the initial adjustment period before dissipating. The particle sizes in both clouds, which are diagnosed from the cloud water content \\citep{Palotai2008}, are on the order of $300-500\\mu$m, with the larger particles forming in the lower cloud. These sizes are comparable to cold cirrus clouds on Earth \\citep{Heymsfield2017}, which have a similar habit to the interior clouds in the GDS. \n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{Figures\/drift_rate_nds_final.png}\n \\caption{Drift rate (blue) and aspect ratio (red) of NDS-2018 from model output. }\n \\label{fig:NDS_drift}\n\\end{figure}\n\\subsection{NDS-2018}\n\nIn 2018, Hubble Space Telescope visible wavelength imaging revealed the presence of a dark spot in the northern hemisphere (labelled NDS-2018) \\citep{Simon2019}. OPAL observations showed that there was an increase in active cloud formation in the region for 1-2 years leading up to the development of the spot. The NDS-2018 is similar in size and shape to the GDS-89 \\citep{Simon2019}. Consequently, in order to constrain the dynamics of this feature, we run simulations with the same vortex initialization, domain, and resolution as the GDS cases. We run 100 day simulations of this feature using an active cloud microphysical model. The vortex was initialized at $32\\degree$ N latitude with a \n$40\\times$ solar [C\/H] fraction and an initial relative humidity of 95\\%. Fig. \\ref{fig:nds} shows select timesteps of this NDS simulation. \n\nDue to the symmetrical nature of the zonal wind profile used in this study (Fig \\ref{fig:zonal_wind}), equatorial drift was observed in the simulated NDS case, similar to the simulated GDS cases (Fig. \\ref{fig:NDS_drift}). The average approximate drift rate found in this study for the NDS-2018 was roughly $3.2\\degree$ per month. 
In contrast to the surprisingly consistent latitudinal wobble observed in the southern hemisphere for the GDS cases, the NDS case has inconsistent oscillations in latitude. This simulation also saw a decrease in inverted aspect ratio throughout the 100 day simulation, with an average value of around 0.4, consistent with the value of $b\/a=0.45$ observed by \\citet{Simon2019} for the NDS-2018.\n\n\nThe cloud formation in the NDS simulations was also similar to the GDS cases, with a set of interior and exterior clouds, although the interior clouds were not perfectly centered as in the GDS simulations. This is likely a result of the much more irregular dynamics of the NDS vortex in our simulations. \n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\textwidth]{Figures\/40x_shape_oscillation_nds.png}\n \\caption{Select timesteps from a 100 day simulation of Neptune's NDS-2018 using the EPIC model with active microphysics for methane and $40\\times$ solar methane abundance. The blue is the integrated column density of methane vapor, with darker regions indicating areas of low density. The solid line is the ellipse fitted to the dashed line, which is a contour of the vapor field.}\n \\label{fig:nds}\n\\end{figure*}\n\n\\section{Discussion}\nLarge-scale vortices that have been observed on Neptune exhibit surprising variability in terms of shape, drift, and lifetime. Consequently, they are some of the most dynamic features in the Solar System. Due to these variations, a variety of vortex characteristics can be examined at short timescales. Although we are constrained by limited observational data, recent developments in the accuracy of models using active microphysics can help us investigate the dynamics of these features. \n\n\\subsection{Methane deep abundance} \n\nThe various cases discussed here exhibit a wide range in both the meridional drift and the dynamic time-dependent variations in shape of the GDS-89. 
\n\nWe tracked the vortex using both anomalous PV and methane column density (Figure~\\ref{fig:cd_pv}), and found that using the CD allows us to track the vortex for much longer, due to the anomalous PV decreasing as the vortex moves through different shear environments. These cases differ significantly in the rate of latitudinal drift, the change in aspect ratio, and the periodicity of the oscillations. Cases 1 and 2 ($20\\times$ and $40\\times$ solar methane abundance with active microphysics) are the best matches for the drift rate (with Case 2 reproducing the drift almost perfectly), but do not show the observed decrease in ellipticity of the GDS-89. The passive and dry cases (Cases 4 and 5) are poor matches for the drift rate but are better at reproducing the observed slow change in the aspect ratio. Surprisingly, the $80\\times$ simulation failed to achieve a stable feature, and dissipated after 60 days. It should be noted that the high-fidelity observations of the GDS started above $\\sim-26\\degree$ latitude \\citep{Sromovsky1993}, which is further north compared to where most of our simulations reach, and it is likely that we were probing an earlier part of the GDS-89's trajectory. A study starting the vortex further north will need to be done to investigate this. Ultimately, the stability of Cases 1 and 2 and their consistency with {\\it Voyager 2} observations reinforces the measurements of a roughly $20-40\\times$ solar [C\/H] ratio on Neptune. \n\n\\subsection{Cloud microphysics}\n\nCloud microphysics has a much more drastic effect on the dynamics of the vortex than simply the addition of methane, as the passive case behaves similarly to the dry case. It is, however, much more difficult to trace the process(es) responsible for these effects. Due to the low density of the methane clouds, coupled with the low latent heat release of methane, it is unlikely to be just latent heating that drives this difference. 
\n\n\n\nClouds were present throughout the simulation and interacted strongly with the vortex. Similar to \\citet{Stratman2001}, we observed multi-layered clouds both within and outside the vortex in our simulations (Figure~\\ref{fig:comp_cloud}). Their eastward cloud forms much higher than ours; however, this may be a function of vertical resolution and interpolation between the layers. In general, both results agree on the humid environment within the vortex and a region of dry air directly below. We are unable to reproduce the persistent poleward cloud. \n\\citet{Stratman2001} showed that the vertical location of the vortex strongly influences the formation of clouds and indeed observed the poleward cloud only in certain configurations. The vertical extent and location of the spot in our simulations is held constant to reduce the number of free parameters, and we will investigate the effect of changing them in future simulations.\n\nThe deep cloud within the vortex formed snow and had cloud particles exceeding $300\\mu$m in diameter, based on the assumption of hexagonal plate-like particles. Using {\\it Voyager 2} IRIS spectra, \\citet{Conrath1991} obtained particle sizes between $0.3-30\\mu$m, assuming Mie scattering by spherical particles. In our model, we assume ice and snow to be hexagonal plates (as determined for Earth clouds), resulting in much larger diameters than the equivalent spheres for the same particle masses. The equivalent radius for the spherical cloud particles in our simulations is about $50-70\\mu$m for the same particle mass (i.e. the ``wetted radius\"). \\citet{Kinne1989} noted that using spherical ice crystals underestimates the albedo by up to $15\\%$ due to greater forward scattering, depending on the shape of the ice particle. Therefore, the discrepancy in cloud particle sizes on Neptune requires further study, both in experimentally determining methane nucleation in a H\/He environment and in scattering by non-spherical geometries. 
Both of these would further refine the microphysical parameters in our model to better represent gas-giant atmospheres. \n\n\n\\subsection{Optical depth}\n\nStrong methane absorption in Neptune's atmosphere largely correlates with a higher optical depth \\citep{Hueso2017}. In our simulations, we find that the vortex exists as an area of low methane vapor density compared to the surrounding atmosphere. This is due to the modified thermal structure of the atmosphere as a consequence of introducing the vortex (Figure~\\ref{fig:cd_pv}). All cases (except Case 5, which contains no methane vapor) exhibit this phenomenon, and the decreased methane column density persists throughout the simulation. The reduced column density results in a decreased optical depth over the vortex compared to the ambient background atmosphere. We interpret this as being able to see deeper into the atmosphere, producing the ``darkness\" of the GDS-89. This is, admittedly, an `apples-to-oranges' comparison, and applying a forward radiative transfer (RT) model to simulation outputs would lend additional insight into this discussion. We are working on an RT model that will address this in future studies. \n\n\n\\subsection{Applications to the NDS-2018}\n\nThe NDS-2018 simulation has surprising differences from our GDS test cases. There is much more irregularity in the equatorward drift of the vortex and a higher drift rate. The shape oscillation was similar to our wet GDS cases, with the vortex becoming more elliptic with time. \n\n\nThese differences could be due to the {\\it Voyager}-era zonal wind profile we use in our models. Recent OPAL observations have provided new zonal wind data and even shown evidence of weak vertical wind shear \\citep{Tollefson2018}. \\citet{Simon2019} suggest that a zonal wind gradient of $\\sim4\\times$ that of the \\citet{Sromovsky1993} fit is required to match the observed aspect ratio of the NDS-2018, or $du\/dy\\sim5.4\\times10^{-5}$ s$^{-1}$. 
The Q$_y=1\/3$ zonal wind profile we use has a value of $du\/dy\\sim 3\\times10^{-5}$ s$^{-1}$ around $30\\degree$ latitude. Testing different zonal wind gradients is not part of this study, however, and will be explored in future simulations. Furthermore, without additional observations, it is difficult to verify the accuracy of the drift rate and shape oscillations we measured in the model. It is unknown whether the spot is still present, because Hubble observations are limited. \n\n\n\\section{Conclusion}\n\nIn this study, we have analysed the effect of methane abundance and cloud formation on the dynamics of the vortex. While the addition of active microphysics improves the meridional drift rate of the vortex, there are other observations which are difficult to reproduce, such as the consistent periodicity of the shape oscillation or the increase in inverse aspect ratio. A further study of the parameter space that describes the spot is required, such as changing the initial vortex location and size (both vertically and horizontally). Additionally, \\citet{Tollefson2018} assumes a $2\\times$ or a $4\\times$ depletion of methane at the mid-latitudes and towards the poles and an increase in mixing ratio near the equator based on idealized equations. These recent observations would be beneficial in the study of newer spots such as the NDS-2018, and indeed are likely required to explain their dynamics. \n\nThe active microphysics package implemented in this study has a clear effect on the dynamical motions of the vortex, such as meridional drift and shape oscillations. Using a $40\\times$ or $20\\times$ methane deep abundance, we achieve a stable simulation for over 120 days. We also see persistent bright companion cloud formation throughout the simulation. Additionally, the vortex exists as an area of low density, which may be correlated with a decrease in optical depth. 
Finally, we found a possible scenario for the dynamical evolution of the NDS-2018 including its equatorial drift and the vortex becoming more elliptic with time. However, additional observations are needed to constrain these simulations. \n\n\\section*{Acknowledgements}\nThis research was supported in part by the NASA Solar System Workings Program grant NNX16A203G and the DPS Hartmann Student Travel Grant Program. N.H. thanks the Astronaut Scholarship Foundation for their support. We also thank Noah Nodolski for his help and the anonymous reviewer for their input.\n\n\\vspace{-1em}\n\\section*{Data Availability}\nThe data underlying this article will be shared on reasonable request to the corresponding author.\n\n\n\n\n\n\n\\vspace{-2em}\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\nGame theory was originally developed in the field of economics to study strategic interactions amongst humans \\citep{neumann:1944ef,flood:1952aa}. 
\nThe ``agents\" who play against each other have a set of ``strategies\" to choose from.\nThe payoff which an agent gets depends on its own strategy and the strategy of the opponent.\nA player can decide which strategy to play against an opponent of a given strategy.\n\nIn evolutionary game theory players are born with fixed strategies instead, \\citep{maynard-smith:1982to}\nwhich are considered to be inherited traits.\nAs usual, we assume a population game in which every player effectively plays against the average opponent.\nThe success of a strategy depends on the number of players of that strategy and also the number of players with other strategies.\nA classical example is the Lotka-Volterra equation \\citep{lotka:1910aa,volterra:1928aa,hofbauer:1998mm}.\nIf the number of wolves increases then the number of hares will decrease in turn leading to a decrease in the number of wolves.\nEvolutionary game dynamics studies the change in the frequencies of the strategies \\citep{nowak:2006bo}, which depends on mutation, selection and drift.\n\nA recurrent and obvious question asked in the study of games is which is the best strategy?\nAssuming an infinitely large population we can approach this question by the traditional replicator dynamics \\citep{hofbauer:1998mm}.\nThe frequency of a strategy will increase if its average payoff is greater than the average payoff of the whole population.\nThat is, if the individuals of a particular strategy are doing better on average than the individuals of other strategies then that strategy spreads.\nThe average payoff of a strategy is also dependent on the frequency of the strategy.\nFor finite populations one must resort to stochastic descriptions \\citep{ficici:2000aa,schreiber:2001aa,nowak:2004pw}.\nOne important quantity is the fixation probability.\nConsider two strategies $A$ and $B$ in a population of size $N$.\nLet the population be almost homogenous for $B$ with only a single $A$.\nIf there is no fitness difference 
amongst the strategies, i.e. selection is neutral, then the probability that the $A$ individual will take over the entire population is $1\/N$.\nIf this probability is greater than $1\/N$ we say that strategy $A$ is favoured by selection.\nWhen there are multiple strategies in the population, then a pair-wise comparison between the fixation probabilities of all the strategies will reveal which is the most abundant strategy \n\\citep{fudenberg:2006ee,hauert:2007aa,hauert:2008bb,van-segbroeck:2009mi,sigmund:2010aa}.\nThis analysis requires the assumption of low mutation rates.\n\nWhen mutations become more frequent then the concept of fixation itself is problematic and hence also that of fixation probability.\nIn such a case we resort to the average frequency of a strategy which is maintained at the mutation-selection balance.\nThis has been termed as the abundance of a strategy \\citep{antal:2009hc}.\n\nConsider $n$ strategies which are effectively neutral against each other.\nIn such a case the abundance of all the strategies in the stationary state will be just $1\/n$.\nUsually there are fitness differences between the strategies.\nIf the abundance of a strategy is greater than that of all the other strategies then we can say it is favoured under the effects of mutation, selection and drift.\nHence for $n$ strategies, the $k^{th}$ strategy will be favoured if the abundance of $k$ is greater than $1\/n$.\nCalculating the abundance of a strategy is a non-trivial exercise even when assuming weak selection.\n\\cite{antal:2009hc} have developed such an approach based on coalescence theory for the case of two-player games and $n$ strategies.\nUnder certain conditions and weak selection, one can calculate the most abundant strategy for arbitrary mutation rates even in structured populations \\citep{antal:2009aa,tarnita:2009df,tarnita:2009jx} and bimatrix games \\citep{ohtsuki:2010aa}.\n\nUsually two-player interactions are studied in evolutionary game theory.\nThe 
analysis of Antal et al. is also for two-player games.\nThe interactions which we usually use as examples in evolutionary game theory are in general multi-player interactions, making the systems nonlinear \\citep{nowak:2010na}.\nA classical example where a certain minimum number of individuals is required to complete a task is group hunting.\n\\cite{stander:1992aa} studied cooperative hunting in lions.\nA typical hunting strategy is to approach the prey from at least three sides to cut off possible escape paths.\nThis hunting approach is impossible with only two hunters, i.e.\\ a two-player game theoretic approach would be insufficient to capture the dynamics.\nAlthough the evolutionary dynamics of multi-player games has received growing interest in recent years, the main focus has been the Public Goods Game and its variants \\citep{hauert:2002te,milinski:2006aa,rockenbach:2006aa,hauert:2007aa,santos:2008xr,pacheco:2009aa,souza:2009aa,veelen:2009ma,archetti:2011aa}.\nOnly a few authors consider general evolutionary many-person games \n\\citep{hauert:2006fd,kurokawa:2009aa,gokhale:2010pn}.\nWe extend the approach developed by \\cite{antal:2009hc} for two-player games and multiple strategies to multi-player games.\nWe show that in the limit of weak selection it is possible to calculate analytical results for $n$ strategies and $d$ players for arbitrary mutation rates.\nFor a three-player game the mathematical analysis is described in detail.\nIt is followed by an example with simulations supporting the analytical result.\nLastly, we discuss how the methodology can be extended to $d$-player games and argue that a general approach is possible, but tedious.\n\n\\section{Abundances in the stationary state for three-player games}\n\\label{model}\n\n\\cite{antal:2009hc} have developed an approach to find the abundances of $n$ strategies in a two-player game ($d=2$).\nFor a two-player game even with $n$ strategies, the payoff values can be represented in the usual payoff 
matrix form.\nThey can be represented as quantities with two indices, $a_{k,h}$.\nWe increase the complexity first by adding one more player ($d=3$).\nThis adds another index for the third player's strategy, $a_{k,h,i}$.\nTo calculate the average change in the frequency of a strategy we thus need to take into account this payoff `tensor'.\n\nWe calculate the abundance of a strategy at the mutation-selection equilibrium.\nWe begin by checking whether there is, on average, a change in the frequency of a strategy, say $k$, due to selection.\nThe average change under weak selection is given by\n\\begin{eqnarray}\n\\label{replike}\n\\langle \\Delta x_k^{sel} \\rangle_\\delta = \\frac{\\delta}{N} \\left(\\sum_{h,i} a_{k,h,i}\\langle x_k x_h x_i\\rangle - \\sum_{h,i,j} a_{h,i,j}\\langle x_k x_h x_i x_j\\rangle \\right),\\nonumber \\\\\n\\end{eqnarray}\nwhere the angular brackets denote the average in the neutral stationary state.\nThe lower index $\\delta$ (the selection intensity) on the left-hand side, however, denotes that the average is obtained under (weak) selection.\nIf we pick three individuals in the neutral stationary state, then the probability that the first one has strategy $k$, the next one $h$ and the last one $i$ is given by the angular brackets in the first sum, $\\langle x_k x_h x_i\\rangle$.\nFurthermore, $a_{k,h,i}$ denotes the payoff obtained by a strategy $k$ player when pitted against two other players of strategies $h$ and $i$.\nFor $n$ strategies the sums run from $1$ to $n$.\nThis equation is the special case of a general $d$-player game with $d=3$.\nThe derivation for arbitrary $d$ is given in \\ref{eq1app}.\nThe above equation is similar to the replicator equation, which is also based on the difference between the average payoff of a strategy and the average payoff of the population, but as we will see below, here the averages on the right-hand side also include mutations.\n\nTo incorporate mutations in the process, we write the total expected change due to 
mutation and selection as\n\\begin{eqnarray}\n\\label{xtot}\n\\Delta x ^{tot}_k = \\Delta x^{sel}_k (1-u)+ \\frac{u}{N}\\left(\\frac{1}{n} - x_k\\right).\n\\end{eqnarray}\nThe first term is the change in the frequency in the absence of mutation.\nIn the presence of mutations, the second term shows that the frequency can increase by $1\/(nN)$ by random mutation and decrease by $x_k\/N$ due to random death.\nA mutation means that with a certain probability $u$, the strategy $k$ can mutate to any of the $n$ strategies.\n\nWe are interested in the abundance of a strategy in the stationary state.\nIn the stationary state, the average change in frequency is zero, $\\langle \\Delta x ^{tot}_k \\rangle_\\delta = 0$, as the mutations are balanced by selection.\nAveraging Eq.\\ \\eqref{xtot} under weak selection thus gives us\n\\begin{eqnarray}\n\\label{abundeq}\n\\langle x_k \\rangle_\\delta = \\frac{1}{n} + N \\frac{1-u}{u} \\langle \\Delta x^{sel}_k \\rangle_\\delta.\n\\end{eqnarray}\nThis is our quantity of interest, the abundance of a strategy when the system has reached the stationary state.\nFor $d=2$ player games, this quantity was derived in \\cite{antal:2009hc}.\nFor the abundance of a strategy to be greater than neutral, $\\langle x_k \\rangle_\\delta > \\frac{1}{n}$, the change in frequency in the stationary state due to selection must be greater than zero, $\\langle \\Delta x^{sel}_k \\rangle_\\delta >0$.\n\nThus, we need to resolve the right-hand side of Eq.\\ \\eqref{replike}.\nConsider the first term in the brackets.\nIn the neutral stationary state the number of combinations in the sums reduces due to symmetry, e.g. 
$\\langle x_i x_j x_j \\rangle = \\langle x_j x_i x_j \\rangle = \\langle x_j x_j x_i \\rangle$.\nHence, we need to calculate only three different terms, $ \\langle x_1 x_1 x_1 \\rangle $, $ \\langle x_1 x_2 x_2 \\rangle $ and $ \\langle x_1 x_2 x_3 \\rangle $.\nSimilarly, for $d$-player games the number of distinct terms in the sums is reduced.\nFor the second term in the brackets we need to calculate five different types of averages, $\\langle x_1 x_1 x_1 x_1\\rangle$, $\\langle x_1 x_2 x_2 x_2\\rangle$, $\\langle x_1 x_1 x_2 x_2\\rangle$, $\\langle x_1 x_1 x_2 x_3\\rangle$ and $\\langle x_1 x_2 x_3 x_4\\rangle$.\nThese averages are derived in \\ref{averagesapp}.\nUsing an approach from coalescence theory, we derive $s_i$, the probability that $i$ individuals chosen from the stationary state all have the same strategy.\nHence $s_4$ is the probability that four individuals chosen in the stationary state all have the same strategy.\nIf there are in all $n$ strategies, then the probability that all have exactly strategy $1$ is $s_4\/n$.\nHence, $\\langle x_1 x_1 x_1 x_1 \\rangle = \\langle x_2 x_2 x_2 x_2 \\rangle = \\ldots = \\langle x_n x_n x_n x_n \\rangle = s_4\/n$.\nConversely, $\\bar{s}_i$ is the probability that if we choose $i$ individuals in the stationary state, each has a unique strategy.\nKnowing these averages helps us resolve Eq.\\ \\eqref{replike},\n\\begin{eqnarray}\n\\langle \\Delta x_k^{sel} \\rangle_\\delta\n= \\frac{\\delta \\mu (L_k + M_k \\mu + H_k \\mu^2)}{N n (1+\\mu) (2+\\mu) (3+\\mu)}, \n\\end{eqnarray}\nwhere $\\mu = N u$ and $L_k$, $M_k$ and $H_k$ are functions consisting only of the number of strategies $n$ and the payoff values $a_{k,h,i}$ (see \\ref{averagesapp}).\nUsing this and evaluating Eq.\\ \\eqref{abundeq} gives us the abundance of the $k^{th}$ strategy:\n\\begin{eqnarray}\n\\label{finalres}\n\\langle x_k \\rangle_\\delta = \\frac{1}{n}\\left[ 1 + \\frac{ \\delta (N- \\mu) (L_k + M_k \\mu + H_k \\mu^2) }{ (1+ \\mu) (2+ \\mu) (3+ 
\\mu)}\\right].\n\\end{eqnarray}\nThis expression is valid for large population sizes, $N\\delta \\ll 1$ and any $\\mu =N u$.\nSince we have $0\\leq u \\leq 1$, $\\mu$ is bounded by $0\\leq \\mu \\leq N$.\n\n\nOur derivation implicitly assumes that there are at least four strategies.\nFor $n \\leq d$, not every player can have a unique strategy, and hence we need to set the corresponding terms to zero (see \\ref{averagesapp}).\nIf there are fewer than $n=4$ strategies, then $\\bar{s}_4$ vanishes.\nThis does not affect our general result, as the affected terms in $L_k$, $M_k$ and $H_k$ simply vanish.\n\n\n\\section{An example for three-player games with three strategies}\n\\label{example}\n\nTo illustrate the analytical approach we explore an evolutionary three-player game with three strategies $A$, $B$ and $C$.\nLet our focal individual play strategy $A$.\nThe other two players can play any of the three strategies.\nThis can lead to a potential complication.\nConsider the combinations $AAB$ or $ABA$.\nIf the order of players does not matter, then both these configurations give the same payoffs, but if it does, we need to consider them separately.\nHere we assume random matching, and hence the order drops out (e.g. $AAB$ and $ABA$ are equally likely).\nWe consider an arbitrary game as given in Table \\ref{paytab}.\n\nWe need to calculate the average change in the frequency of strategy $A$ due to selection, i.e. 
Eq.\\ \\eqref{replike}.\nWe denote the coefficients of the averages in the first sum by $\\alpha_1$, $\\alpha_2$ and $\\alpha_3$.\nHence, for example, $\\alpha_3 = a_{A,B,C} + a_{A,C,B}$.\nSimilarly, for the second sum we have $\\beta_1$ to $\\beta_4$ (note that $\\beta_1 = \\alpha_1 = a_{A,A,A}$).\nThus we have\n\\begin{eqnarray}\n\\sum_{h,i} a_{A,h,i} \\langle x_A x_h x_i\\rangle &=& \\alpha_1 \\langle x_A x_A x_A \\rangle + \\alpha_2 \\langle x_A x_B x_B \\rangle \\nonumber \\\\\n&&+ \\alpha_3 \\langle x_A x_B x_C \\rangle\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\sum_{h,i,j} a_{h,i,j}\\langle x_A x_h x_i x_j\\rangle &=& \\beta_1 \\langle x_A x_A x_A x_A\\rangle + \\beta_2 \\langle x_A x_B x_B x_B\\rangle \\nonumber \\\\\n&&+ \\beta_3 \\langle x_A x_A x_B x_B\\rangle + \\beta_4 \\langle x_A x_A x_B x_C\\rangle.\\nonumber \\\\\n\\end{eqnarray}\nNote that the term $\\langle x_A x_B x_C x_D\\rangle$, which would appear with a factor $\\beta_5$, does not appear, as we have only three strategies and thus $\\bar{s}_4 = 0$.\nThis also alters the definition of $\\langle x_A x_A x_B x_C\\rangle$ and $\\langle x_A x_A x_B x_B\\rangle$ (see Figure \\ref{fig:1}; all terms dependent on $\\bar{s}_4$ are affected).\n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{figure_1.pdf}\n\\caption{The average change in the frequency of strategy $k$ due to selection, $\\langle \\Delta x_k^{sel} \\rangle_\\delta$ for a three-player game.\nNotice first the similarity to the replicator equation: there, too, we compare how a strategy fares against the population.\nThe first term in the bracket is analogous to the average fitness of strategy $k$.\nIf we pick three individuals in the stationary state, then the probability that the first one has strategy $k$, the second $h$ and the third $i$ is given by $\\langle x_k x_h x_i\\rangle$ (dashed box).\nEven for $n$ strategies there are only three possible combinations: either all can have the same strategy, a pair has the same 
strategy or all three have different strategies.\nThese probabilities were calculated by \\cite{antal:2009hc}.\nThe $s_i$'s appearing in the averages are the probabilities that if we choose $i$ individuals from the stationary distribution then they all have the same strategy.\nThe second term in the bracket is analogous to the average fitness of the population in the stationary state.\nFor this we need to pick four individuals and look for all the different combinations (solid box).\nFor $n$ strategies, five combinations can explain all the different configurations.\nThese range from all the individuals having the same strategy $\\langle x_1 x_1 x_1 x_1 \\rangle$ to all having a different strategy $\\langle x_1 x_2 x_3 x_4 \\rangle$ (\\ref{averagesapp}).\nFor the latter, we calculate $\\bar{s}_i$, the probability that we choose $i$ individuals from the stationary distribution and each of them has a unique strategy.\nFor a general $d$-player game we need to pick $d$ individuals for the first term and $d+1$ for the second.}\n\\label{fig:1}\n\\end{figure}\n\n\n\nWe know the form of $L_k$, $M_k$ and $H_k$ from \\ref{averagesapp} as,\n\\begin{eqnarray}\nL_k &=& \\tfrac{1}{n} \\left[2 \\alpha_1 (n-1) + 3 \\alpha_2 - 2 \\beta_2 - \\beta_3\\right] \\\\\nM_k &=& \\tfrac{1}{n^2} \\left[\\left(3 n-3\\right) \\alpha_1 +\\left(n+3\\right) \\alpha_2 +3 \\alpha_3 - 3 \\beta_2 - 2 \\beta_3 - \\beta_4\\right] \\nonumber \\\\\n\\\\\nH_k &=& \\tfrac{1}{n^3} \\left[n (\\alpha_1 + \\alpha_2 + \\alpha_3) - (\\beta_1 + \\beta_2 + \\beta_3 + \\beta_4 + \\beta_5)\\right]\n\\end{eqnarray}\nWith $L_k$, $M_k$ and $H_k$ as above, Eq.\\ \\eqref{finalres} for $n=3$ reduces to,\n\\begin{eqnarray}\n\\langle x_A \\rangle_\\delta = \\frac{1}{3}\\left[ 1 + \\frac{\\delta (N- \\mu) (L_k + M_k \\mu + H_k \\mu^2) }{ (1+ \\mu) (2+ \\mu) (3+ \\mu)}\\right].\n\\end{eqnarray}\nThis gives us the abundance of strategy $A$ at the mutation selection equilibrium.\nRepeating the same procedure for strategies $B$ 
and $C$ gives the analytical lines in \nFigure \\ref{fig:2}.\nAlthough the analytical solutions are valid for large population sizes only, we still see a good agreement between the simulation results and the theory, even for a population size as small as $30$.\n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{figure_2.pdf}\n\\caption{\nFor a three-player game with three strategies ($d=3;n=3$) we plot the average abundances of the three strategies as a function of the mutation probability $u$.\nThe payoff table from Table \\ref{paytab} is used.\nThe lines are the solutions of Eq.\\ \\eqref{abundeq} and the symbols are the simulation results for the three strategies.\nAlthough the calculations are valid for large populations, we see a good agreement even for a population size of $N=30$\n(selection intensity $\\delta=0.003$; simulation points are obtained \nby averaging over $20000$ independent runs, each over $2 \\times 10^6$ time steps after a transient phase of $N$ time steps).\n}\n\\label{fig:2}\n\\end{figure}\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ccccccc}\n\\hline\\hline\n\\hspace{0.2cm}\n\\begin{minipage}[c]{2cm}\n\\begin{center}\n\\vspace{0.1cm}\nWeights\n \\\\ \n( Total 9 )\n\\vspace{.2cm}\n\\end{center}\n\\end{minipage} & 1 & 2 & 2 & 1 & 2 & 1 \\\\\n\\hline\n & AA & AB & AC & BB & BC & CC \\\\\n \\hline\n A & 2 & 2 & 3 & 1 & 3 & 4 \\\\\n B & 2 & 1 & 2 & 3 & 0 & 2\\\\\n C & 2 & 12 & 2 & 0 & 1 & 3 \\\\\n\\hline\\hline\n\\end{tabular}\n\\caption{An example payoff table for $d = 3$ and $n=3$.\nConsider a three-player game with three strategies $A$, $B$ and $C$.\nThe strategy of the focal individual is in the column on the left.\nFor example, the payoff received by a $C$ individual when playing in a configuration of $CAB$ is $12$.\nFrom the focal individual's point of view this configuration can arise in two ways, $CAB$ and $CBA$, so it is twice as likely as, e.g., 
$CAA$.\nHence we weight that payoff value by $2$ when calculating the average payoff of strategy $C$.}\n\\label{paytab}\n\\end{center}\n\\end{table}\n\n\\section{Abundances in $\\mathbf{d>3}$ player games}\n\nWe can repeat the whole procedure for $d=4$ player games with $n$ strategies.\nThe formula for the abundance remains the same, Eq.\\ \\eqref{abundeq}, but the average change due to selection, Eq.\\ \\eqref{replike}, becomes more complicated.\nWe need to add an index to the sums,\n\\begin{eqnarray}\n\\langle \\Delta x_k^{sel} \\rangle_\\delta &=& \\frac{\\delta}{N} \\Big(\\sum_{l,m,n} a_{k,l,m,n}\\langle x_k x_l x_m x_n\\rangle \\nonumber \\\\\n&&- \\sum_{l,m,n,o} a_{l,m,n,o} \\langle x_k x_l x_m x_n x_o\\rangle \\Big).\n\\end{eqnarray}\nThe first term is comparatively simple, as we already know all the different ways of picking four individuals.\nFor the second term we need to know the different possible combinations of strategies when picking five individuals from the neutral stationary state.\n\nFor $d$ players and $n$ strategies we can construct an expression analogous to Eq.\\ \\eqref{replike}.\nConsider, for example, the strategies played by $d$ individuals, denoted by $r_1, r_2, r_3, \\ldots, r_d$.\nNote that each of these can be a strategy from the strategy set $1, 2, 3, \\ldots, n$.\nLet $k$ be our strategy of interest.\nThen the expression for the change in the frequency of strategy $k$ due to selection is given by\n\\begin{eqnarray}\n\\label{dplayers}\n\\langle \\Delta x_{k}^{sel} \\rangle_\\delta &=& \\frac{\\delta}{N} \\Bigg( \\sum_{r_2,\\ldots r_d} a_{k,r_2,\\ldots r_d } \\langle x_k x_{r_2} x_{r_3} \\ldots x_{r_d} \\rangle \\nonumber \\\\\n\\!\\!\\!\\! 
&&- \\sum_{r_2,\\ldots r_{d+1}} a_{r_2,\\ldots r_{d+1} } \\langle x_k x_{r_2} x_{r_3} \\ldots x_{r_{d+1}} \\rangle \\Bigg), \\nonumber \\\\\n\\end{eqnarray}\nwhere the sums range as usual from $1$ to $n$ (\\ref{eq1app}).\nSolving this and plugging it into Eq.\\ \\eqref{abundeq} gives the generalized expression for the abundance of strategy $k$ for an $n$ strategy, $d$-player game.\nWe see that in the first sum the averages are for choosing $d$ players, but for the second it is $d+1$.\nHence we need to calculate the probabilities of the form $s_{d+1}$, but $s_{d+1}$ depends on $s_d$.\nThus we have to solve the expression recursively: for $d=6$, e.g., we will need to pick at most $7$ players, and we must solve the expression for $d = 2,3,4,5,6$ in turn ($d=2$ has been solved by \\cite{antal:2009hc} and $d=3$ in this paper).\nAs $d$ increases, calculating $s_{d+1}$ is not enough, and we also need to calculate terms such as $\\bar{s}_{d+1}$, as is already the case for $d=3$.\n\n\\section{Special case: Two strategies, $\\mathbf{n=2}$}\n\nGames with two strategies have been very well studied.\nIn two-player games with two strategies, one strategy can replace another with a higher probability if the sum of its payoff values is greater than the sum of the payoff values of the other strategy.\nThis is valid under small mutation rates for deterministic evolutionary dynamics \\citep{kandori:1993aa}.\nThe result also holds for different dynamical regimes under specific limits of selection intensity and mutation rates \\citep{fudenberg:1992bv,nowak:2004pw,antal:2009th}.\nRecently it has been shown that this result can be generalized to $d$-player games with two strategies \\citep{kurokawa:2009aa,gokhale:2010pn}.\n\nHence the condition which we find for $d$-player games should be identical to the condition $L_k > 0$ derived in this paper.\nWe check this for $d=3$,\n\\begin{eqnarray}\nL_k &=& \\tfrac{1}{2} \\left[2 \\alpha_1 (2-1) + 3 \\alpha_2 - 2 \\beta_2 - \\beta_3 
\\right]\n\\\\\n\\nonumber \n&=&\\tfrac{1}{2} [ 2 a_{1,1,1} + 3 ( a_{1,1,2} + a_{1,2,1} + a_{1,2,2} )\\nonumber \\\\ \n&& - 2 ( a_{1,1,2} + a_{1,2,1} + a_{2,1,1} + a_{2,2,2} ) - a_{1,2,2} - a_{2,1,2} - a_{2,2,1} ] \\nonumber \\\\\n\\end{eqnarray}\nThus $L_k > 0$ is equivalent to\n\\begin{eqnarray}\n2 a_{1,1,1} +&& a_{1,1,2} + a_{1,2,1} + 2 a_{1,2,2} \\nonumber \\\\\n&&> 2 a_{2,1,1} + a_{2,1,2} + a_{2,2,1} + 2 a_{2,2,2}. \\nonumber \\\\\n\\end{eqnarray}\nIf we assume that the order of players does not matter, then we have $a_{1,1,2} = a_{1,2,1}$ and $a_{2,1,2} = a_{2,2,1}$.\nThis yields\n\\begin{eqnarray}\na_{1,1,1} + a_{1,1,2} + a_{1,2,2} > a_{2,1,1} + a_{2,1,2} + a_{2,2,2},\n\\end{eqnarray}\nwhich is exactly the condition obtained previously, using different methods and notation, by \\cite{kurokawa:2009aa} and \\cite{gokhale:2010pn}.\n\\section{Application to a task allocation problem}\nTo demonstrate the power of the approach, we consider a task allocation problem motivated by \\cite{wahl:2002aa}, who studied the evolution of the division of labour via a two-player game between task $1$ specialists ($T_1$), task $2$ specialists ($T_2$) and generalists. \nInstead, we have $T_1$, $T_2$ and freeloaders $F$.\nWe can think of this problem in the context of bacteria that need two types of enzymes to obtain resources from the environment.\nOne strain produces one type of enzyme at a cost $c_1$ and \nanother strain produces the second type of enzyme at a cost of $c_2$.\nWe also have the freeloading strain, which does not produce any enzyme but can get resources with the help of the other two strains.\nThe benefit of getting \nthe resources is given by $b$.\nWe have the condition that the total cost is less than the benefit accrued, i.e. 
$b> c_1 + c_2$.\nFurther assume that our contenders are conservative in the enzyme production.\nSensing who they are pitted against, the strains share the costs of producing the enzyme.\nThus a two-player payoff matrix for such a setting can be written down as in Table \\ref{2plpaytabex}.\nIt is hard to imagine, though, that the bacteria interact only in a pair-wise fashion.\nAlthough it is difficult to judge how many players are interacting, we can at least increase the complexity by one more player and study what effect this has on the abundances of the strains.\nTherefore, we study the system\nin a three-player setting.\nIn this case the payoff table is given by Table \\ref{3plpaytabex}.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ccccccc}\n\\hline\\hline\n & $T_1$ & $T_2$ & $F$ \\\\\n \\hline\n$T_1$ & $\\frac{-c_1}{2}$ & $b-c_1$ & $-c_1$ \\\\\n$T_2$& $b-c_2$ & $\\frac{-c_2}{2}$ & $-c_2$ \\\\\n$F$ & 0 & 0 & 0 \\\\\n\\hline\\hline\n\\end{tabular}\n\\caption{\nPayoff table for a two-player game with three strategies: $T_1$, i.e.\\ specialising in task $1$, and $T_2$, i.e. 
specialising in task $2$.\n$T_1$ produces\nenzyme $1$ and \n$T_2$ produces \nenzyme $2$.\nWhen both enzymes are present, a benefit $b$ is obtained.\nThe cost of producing enzyme $1$ is $c_1$ and the cost of producing enzyme $2$ is $c_2$.\nIf the partner in the game is of the same strategy, the cost is shared.\nThe freeloading strategy $F$ does not pay any cost, and so the benefits of the resource are unavailable to it.}\n\\label{2plpaytabex}\n\\end{center}\n\\end{table}\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ccccccc}\n\\hline\\hline\n\\hspace{0.2cm}\n\\begin{minipage}[c]{2cm}\n\\begin{center}\n\\vspace{0.1cm}\nWeights\n \\\\ \n( Total 9 )\n\\vspace{.2cm}\n\\end{center}\n\\end{minipage} & 1 & 2 & 2 & 1 & 2 & 1 \\\\\n\\hline\n & $T_1 T_1$ & $T_1 T_2$ & $T_1 F$ & $T_2 T_2$ & $T_2 F$ & $F F$ \\\\\n \\hline\n $T_1$ & $\\frac{-c_1}{3}$ & $b-\\frac{c_1}{2}$ & $\\frac{-c_1}{2}$ & $b-c_1$ & $b-c_1$ & $-c_1$ \\\\\n $T_2$ & $b-c_2$ & $b-\\frac{c_2}{2}$ & $b-c_2$ & $\\frac{-c_2}{3}$ & $\\frac{-c_2}{2}$ & $-c_2$\\\\\n $F$ & $0$ & $b$ & $0$ & $0$ & $0$ & $0$ \\\\\n\\hline\\hline\n\\end{tabular}\n\\caption{\nPayoff table for the same game as discussed in Table \\ref{2plpaytabex}.\n$T_1$ and $T_2$ refer to specialising in task $1$ and task $2$, namely producing enzyme $1$ and enzyme $2$, respectively.\nNote that the costs can also be shared if at least one of the game partners is of the same strategy.\nIn this case the freeloaders can get the benefit, namely when the other two players produce both enzymes.\n}\n\\label{3plpaytabex}\n\\end{center}\n\\end{table}\n\nWe calculate the abundances of the strains in these different settings, cf.\\ Figure \\ref{fig:3}. 
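To make the three-player computation concrete, the following Python sketch (our own illustration, not part of the paper; all function and variable names are ours) builds the payoff tensor of Table \ref{3plpaytabex} with $b=1.0$, $c_1=0.6$, $c_2=0.2$, classifies the entries into the coefficients $\alpha_1,\ldots,\alpha_3$ and $\beta_1,\ldots,\beta_5$, and evaluates the abundances through $L_k$, $M_k$ and $H_k$ as in Eq.\ \eqref{finalres}, with $N=30$ and $\delta=0.003$ as in Figure \ref{fig:3}:

```python
# Sketch (ours): stationary abundances for the three-player enzyme game.
from itertools import product

def coefficients(tensor, strategies, k):
    """Classify payoff entries into alpha_1..alpha_3 (three sampled players)
    and beta_1..beta_5 (four sampled players) by the multiset pattern of the
    sampled strategies (cf. the dashed and solid boxes of Figure 1)."""
    alpha = [0.0, 0.0, 0.0]            # all same / one pair / all distinct
    for h, i in product(strategies, repeat=2):
        alpha[len({k, h, i}) - 1] += tensor[(k, h, i)]
    beta = [0.0] * 5                   # patterns 4, 3+1, 2+2, 2+1+1, 1+1+1+1
    index = {(4,): 0, (3, 1): 1, (2, 2): 2, (2, 1, 1): 3, (1, 1, 1, 1): 4}
    for h, i, j in product(strategies, repeat=3):
        picked = (k, h, i, j)
        counts = tuple(sorted((picked.count(s) for s in set(picked)), reverse=True))
        beta[index[counts]] += tensor[(h, i, j)]
    return alpha, beta

def abundance(tensor, strategies, k, mu, N=30, delta=0.003):
    """Abundance <x_k> from L_k, M_k, H_k; weak selection, mu = N*u."""
    n = len(strategies)
    (a1, a2, a3), (b1, b2, b3, b4, b5) = coefficients(tensor, strategies, k)
    L = (2 * a1 * (n - 1) + 3 * a2 - 2 * b2 - b3) / n
    M = ((3 * n - 3) * a1 + (n + 3) * a2 + 3 * a3 - 3 * b2 - 2 * b3 - b4) / n**2
    H = (n * (a1 + a2 + a3) - (b1 + b2 + b3 + b4 + b5)) / n**3
    return (1 + delta * (N - mu) * (L + M * mu + H * mu**2)
            / ((1 + mu) * (2 + mu) * (3 + mu))) / n

# Payoff tensor of the three-player enzyme game (opponent order irrelevant).
b, c1, c2 = 1.0, 0.6, 0.2
rows = {
    'T1': {('T1', 'T1'): -c1 / 3, ('T1', 'T2'): b - c1 / 2, ('F', 'T1'): -c1 / 2,
           ('T2', 'T2'): b - c1, ('F', 'T2'): b - c1, ('F', 'F'): -c1},
    'T2': {('T1', 'T1'): b - c2, ('T1', 'T2'): b - c2 / 2, ('F', 'T1'): b - c2,
           ('T2', 'T2'): -c2 / 3, ('F', 'T2'): -c2 / 2, ('F', 'F'): -c2},
    'F':  {('T1', 'T1'): 0.0, ('T1', 'T2'): b, ('F', 'T1'): 0.0,
           ('T2', 'T2'): 0.0, ('F', 'T2'): 0.0, ('F', 'F'): 0.0},
}
strategies = ('T1', 'T2', 'F')
tensor = {(k, h, i): rows[k][tuple(sorted((h, i)))]
          for k, h, i in product(strategies, repeat=3)}

for k in strategies:
    print(k, abundance(tensor, strategies, k, mu=1.0))
```

Two built-in consistency checks of the bookkeeping: the three abundances always sum to $1$ (since $\sum_k L_k = \sum_k M_k = \sum_k H_k = 0$), and a payoff tensor with all entries equal returns exactly $1/3$ for each strain.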
\nEven when there are almost no mutations, there is a quantitative difference between the average frequencies of the strains.\nFor higher mutation probabilities the difference is also qualitative.\nSince the freeloaders never pay a cost, they have the highest abundance in the two-player setting for any mutation probability.\nIn the three-player setting, however, the abundance of the freeloaders falls below that of $T_2$ for a certain range of mutation probabilities.\n\n\n\\begin{figure*}\n\\includegraphics[width=2.0\\columnwidth]{figure_3.pdf}\n\\caption{\nTwo strains of a bacterium, $T_1$ and $T_2$, produce the enzymes $1$ and $2$, which, when present together, provide a benefit $b$.\nWhen more than one individual of the same strain is present, the production costs for the enzymes, $c_1$ and $c_2$, are shared.\nA third strain $F$ does not produce any enzyme and thus avoids the costs, but it cannot obtain the benefit on its own.\nWe can analyse the interactions by assuming them to be pair-wise (two-player game, Table \\ref{2plpaytabex}) or in triplets (three-player game, Table \\ref{3plpaytabex}).\nWe notice that the abundances of the three strains are qualitatively and quantitatively different in the two settings, even though the underlying rules defining the interactions are the same.\nUnder neutrality the abundance of all the strains would be given by the dashed line.\nThe full lines are the analytical results obtained by solving Eq.\\ \\eqref{abundeq}, assuming $b=1.0$, $c_1 = 0.6$, $c_2 = 0.2$ and a population size of $N=30$ with selection intensity $\\delta =0.003$.\n}\n\\label{fig:3}\n\\end{figure*}\n\n\n\\section{Discussion}\n\nPublic goods games are often used as examples of multi-player games.\nIn the beginning there were the cooperators and defectors.\nThen came the punishers and then the loners \\citep{hauert:2002te,szabo:2002te}.\nNow we talk about second-order punishers, pool and peer punishers \\citep{sigmund:2010aa} and more.\nStudying these 
systems for small mutation rates and arbitrary selection intensity has almost become standard \\citep{fudenberg:2006ee,hauert:2007aa,hauert:2008bb,van-segbroeck:2009mi,sigmund:2010aa}.\nIn the limit of weak selection, our method allows us to determine which strategy is the most abundant for arbitrary mutation rates.\n\nYet another important aspect of most social dilemmas and many other biological examples is that they involve multiple players \\citep{stander:1992aa,broom:2003aa,milinski:2008lr,levin:2009aa}.\n\\cite{antal:2009aa,antal:2009hc} have made use of the coalescence approach to characterize the mutation process under neutrality and then applied it under weak selection to two-player games with $n$ strategies ($n \\times n$).\nHere we extend the approach to $d$-player games with $n$ strategies.\n\n\nWe give an example of an $n \\times n \\times n$ game and derive the analogous expressions for the abundances of the strategies for arbitrary mutation rates.\nWhen we increase the number of players to $d$, the payoff matrix becomes a $d$-dimensional object.\nWe run into the question of whether the order of players matters or not.\nEither way, this does not influence our results, but the notation is simpler if the order of players does not matter.\nAdding a new player adds a new index to the payoff values.\nTo calculate the abundance we need to evaluate Eq.\\ \\eqref{dplayers}.\nTo solve the two sums in Eq.\\ \\eqref{dplayers} we need to know the different combinations that arise when choosing $d$ players and $d+1$ players from the neutral coalescent stationary state.\n\nTo illustrate the complexity of the situation, take for example $s_4$.\nThis is the probability that four chosen individuals have the same strategy.\nIn \\ref{coalescentapp} we have shown that deriving $s_4$ depends on $s_3$, which in turn depends on $s_2$.\nHence, in general, to derive $s_{d+1}$ we need to know $s_{d}, s_{d-1}, s_{d-2} \\ldots , s_2$.\nIn addition, for general $d$-player games we need quantities such as 
$\\bar{s}_{d+1}$, the probability that $d+1$ individuals chosen in the stationary state all have different strategies.\nIn the absence of mutations such probabilities as $\\bar{s}_{d+1}$ vanish and we are left with only $s_{d+1}$, as mentioned in \\cite{ohtsuki:2010bb}.\n\nIn \\cite{DRHB}, D. R. Heath-Brown established the following large sieve inequality for quadratic characters: for any $\\varepsilon>0$,\n\\begin{equation}\n\\label{realfinal}\n \\sideset{}{^*}\\sum_{m \\leq M} \\left| \\sideset{}{^*}\\sum_{n \\leq N} a_n \\Big (\\frac {n}{m} \\Big ) \\right|^2 \\ll_{\\varepsilon} (MN)^{\\varepsilon}(M+N)\n \\sideset{}{^*}\\sum_{n \\leq N} |a_n|^2,\n\\end{equation}\n where the asterisks indicate that $m,n$ run over positive odd square-free\n integers and $(\\frac {\\cdot}{m})$ is the Jacobi symbol. \\newline\n\n Similar to \\eqref{realfinal}, Heath-Brown also established the following large sieve inequality involving cubic residue symbols \\cite[Theorem\n 2]{DRHB1}:\n\\begin{equation}\n\\label{eq:HBcubic}\n \\sideset{}{^*}\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[\\omega] \\\\\\mathcal{N}(m) \\leq M}} \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[\\omega] \\\\\\mathcal{N}(n) \\leq N}} a_n \\leg{n}{m}_3 \\right|^2\n \\ll \\left( M + N + (MN)^{2\/3} \\right)(MN)^{\\varepsilon} \\sum_{\\mathcal{N}(n) \\leq N} |a_n|^2,\n\\end{equation}\nwhere the asterisks indicate that $m,n$ run over square-free elements\nof $\\ensuremath{\\mathbb Z}[\\omega]$, $\\omega=\\exp(2 \\pi i\/3)$, that are congruent to $1$ modulo $3$ and $(\\frac {\\cdot}{m})_3$ is the cubic residue symbol. Here and in what follows, we use $\\mathcal{N}(m)$ to denote the norm of $m$. \\newline\n\n Using \\eqref{eq:HBcubic}, S. Baier and M. P. 
Young \\cite[Theorem 1.4]{B&Y} proved a large sieve inequality for cubic Dirichlet characters. In this paper, we first establish the following quartic analogue of \\eqref{eq:HBcubic}.\n\\begin{theorem}\n\\label{mainthm} Let $M$, $N \\geq 1$ and let $(a_n)$ be an arbitrary sequence of complex numbers. Then for any $\\varepsilon > 0$,\n\\begin{equation*}\n \\sideset{}{^*}\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[i] \\\\\\mathcal{N}(m) \\leq M}} \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\\\mathcal{N}(n) \\leq N}} a_n \\leg{n}{m}_4 \\right|^2\n \\ll \\left( M + N + (MN)^{2\/3} \\right)(MN)^{\\varepsilon} \\sum_{\\mathcal{N}(n) \\leq N} |a_n|^2,\n\\end{equation*}\nwhere the asterisks indicate that $m$ and $n$ run over square-free elements of $\\ensuremath{\\mathbb Z}[i]$ that are congruent to $1$ modulo $(1+i)^3$ and $(\\frac{\\cdot}{m})_4$ is the quartic residue symbol.\n\\end{theorem}\n\n Next we shall also establish a corresponding large sieve inequality for quartic Dirichlet characters.\n\n To prove Theorem \\ref{mainthm}, we consider\n\\begin{equation*}\n {\\sum}_1=\\sideset{}{^*}\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[i] \\\\M < \\mathcal{N}(m) \\leq 2M}} \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} a_n \\leg{n}{m}_4\n \\right|^2\n\\end{equation*}\n and set $\\mathcal{B}_1(M,N)=\\sup \\left\\{ {\\sum}_1: \\sum_n|a_n|^2=1 \\right\\}$. Expanding the square and interchanging the order of summation gives\n\\begin{equation}\n\\label{2.1}\n {\\sum}_1 = \\sum_{n_1, n_2} a_{n_1}\\overline{a_{n_2}} \\ \\sideset{}{^*}\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[i] \\\\M < \\mathcal{N}(m) \\leq 2M}} \\leg{n_1}{m}_4\\overline{\\leg{n_2}{m}_4}.\n\\end{equation}\n The inner sums in \\eqref{2.1} can be estimated by means of the following lemma.\n\\begin{lemma}\n\\label{lem1} Let $\\chi$ be a character modulo $f \\in \\ensuremath{\\mathbb Z}[i]$. For any $\\varepsilon>0$ and any $w>0$,\n\\begin{equation*}\n \\theta(w, \\chi)=\\sum_{\\substack{ a \\equiv 1 \\bmod{ (1+i)^3} \\\\ (a,f)=1}} \\chi(a)e^{-2\\pi \\mathcal{N}(a)w} \\ll E(\\chi)w^{-1}+\\mathcal{N}(f)^{1\/2+\\varepsilon},\n\\end{equation*}\n where $E(\\chi)=1$ if $\\chi$ is principal, $0$ otherwise. The\n implied constant depends only on $\\varepsilon$.\n\\end{lemma}\n\nLemma \\ref{lem1} implies that each of the inner sums in \\eqref{2.1}\nis $O\\left( \\mathcal{N}(n_1n_2)^{1\/2+\\varepsilon} \\right)$, provided that the\ncharacter involved is non-principal. Since $n_1$ and $n_2$ are\nsquare-free, $\\leg{n_1}{m}_4\\overline{\\leg{n_2}{m}_4}$ is principal only if $n_1 = n_2$.\nIt follows that\n\\[ {\\sum}_1 \\ll_{\\varepsilon}\n N^{\\varepsilon} \\left( M\\sum_n|a_n|^2+N\\sum_{n_1,n_2}|a_{n_1}a_{n_2}| \\right) \\ll_{\\varepsilon}\n N^{\\varepsilon} \\left( M+N^2 \\right)\\sum_n|a_n|^2. \\]\n We therefore have\n\\begin{equation}\n\\label{initialest}\n \\mathcal{B}_1(M,N) \\ll_{\\varepsilon} N^{\\varepsilon}\\left( M+N^2 \\right).\n\\end{equation}\n This will be the starting point for an iterative bound for\n $\\mathcal{B}_1(M,N)$. \\newline\n\n Similar to the proof of \\cite[Lemma 1]{DRHB}, using the duality\n principle (see for example, \\cite[Chap. 
9]{HM}) and the quartic\n reciprocity law by considering the case for $n=a+bi$ with\n $a \\equiv 1 \\bmod{4}, b \\equiv 0 \\bmod{4}$ or $a\n\\equiv 3 \\bmod{4}, b \\equiv 2 \\bmod{4}$ (and similarly for $m$),\nwe can establish the following lemma.\n\\begin{lemma}\n\\label{lem2} We have $\\mathcal{B}_1(M,N) \\leq 2\n\\mathcal{B}_1(N,M)$. Moreover, there exist coefficients $a'_n,\na''_n$ with $|a'_n|=|a''_n|=|a_n|$ such that\n\\begin{equation*}\n\\begin{split}\n\\sideset{}{^*}\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[i] \\\\M <\\mathcal{N}(m) \\leq 2M}} \\left| \\\n\\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} a_n\n\\leg{n}{m}_4\n \\right|^2 & \\leq 2 \\sideset{}{^*}\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[i] \\\\M< \\mathcal{N}(m) \\leq 2M}} \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} a'_n \\leg{m}{n}_4\n \\right|^2 \\\\\n & \\leq 4 \\sideset{}{^*}\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[i] \\\\M< \\mathcal{N}(m) \\leq 2M}} \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} a''_n \\leg{n}{m}_4\n \\right|^2.\n\\end{split}\n\\end{equation*}\n\\end{lemma}\n\n\n Our next lemma is a trivial modification of Lemma 9 of \\cite{DRHB}, which shows that the norm $\\mathcal{B}_1(M,N)$ is essentially\n increasing.\n\\begin{lemma}\n\\label{lem3} There is an absolute constant $C > 0$ as follows.\nLet $M_1, N \\geq 1$ and $M_2 \\geq CM_1\\log(2M_1N)$. Then\n\\begin{equation*}\n \\mathcal{B}_1(M_1,N)\\leq C \\mathcal{B}_1(M_2,N).\n\\end{equation*}\n Similarly, if $M, N_1 \\geq 1$ and $N_2 \\geq CN_1\\log(2N_1M)$. 
Then\n\\begin{equation*}\n \\mathcal{B}_1(M,N_1)\\leq C \\mathcal{B}_1(M,N_2).\n\\end{equation*}\n\\end{lemma}\n\nNext, we define\n\\begin{equation*}\n \\mathcal{B}_2(M,N)=\\sup \\left\\{ {\\sum}_2: \\sum_n|a_n|^2=1 \\right\\},\n\\end{equation*}\n where\n\\begin{equation}\n\\label{sum2}\n {\\sum}_2=\\sum_{\\substack{m \\in \\ensuremath{\\mathbb Z}[i] \\\\M < \\mathcal{N}(m) \\leq 2M}} \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} a_n \\leg{m}{n}_4\n \\right|^2,\n\\end{equation}\n the summation over $m$ running over all integers of $\\ensuremath{\\mathbb Z}[i]$ in the relevant range. \\newline\n\n It follows directly from Lemma \\ref{lem2} that\n\\begin{equation}\n\\label{12comparison}\n \\mathcal{B}_1(M,N) \\leq 2\\mathcal{B}_2(M,N).\n\\end{equation}\nFor the other direction, we have the following.\n\\begin{lemma}\n\\label{lem4} There exist $X, Y \\gg 1$ such that $XY^3 \\ll M$ and\n\\begin{equation*}\n \\mathcal{B}_2(M,N) \\ll (\\log M)^3M^{1\/2}X^{-1\/2}Y^{-3\/2}\\min(Y\\mathcal{B}_1(X,N), X\\mathcal{B}_1(Y,N)).\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\n To handle $\\sum_2$ we write each of the integers $m$ occurring in the outer summation\nof \\eqref{sum2} in the form $m = ab^2c^3d$, where $a, b, c\\equiv 1\n\\bmod {(1+i)^3}$ are square-free, and $d$ is a product of a unit,\na power of $1+i$, and a fourth power (so that $d$ can be written\nas $d=u(1+i)^je^4$ where $u$ is a unit, $0 \\leq j \\leq 3$ and $e\n\\in \\ensuremath{\\mathbb Z}[i]$). We split the available ranges for $a, b, c$ and $d$\ninto sets $X < \\mathcal{N}(a) \\leq 2X, Y < \\mathcal{N}(b) \\leq 2Y, Z < \\mathcal{N}(c) \\leq 2Z$\nand $W < \\mathcal{N}(d) \\leq 2W$, where $X, Y, Z$ and $W$ are powers of $2$.\nThere will therefore be $O(\\log^3M)$ possible quadruples $X, Y,Z,\nW$. 
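The decomposition $m = ab^2c^3d$ above has a simple analogue over the positive rational integers that may help fix ideas. The following Python sketch (our illustration, not from the paper) writes any positive integer as $a b^2 c^3 d^4$ with $a$, $b$, $c$ square-free by reducing each prime exponent modulo $4$; in $\mathbb{Z}[i]$ one additionally keeps track of units and powers of $1+i$.

```python
# Sketch (ours): rational-integer analogue of the decomposition m = a*b^2*c^3*d.

def factorize(m):
    """Trial-division factorization: return {prime: exponent} for m >= 1."""
    factors, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def decompose(m):
    """Return (a, b, c, d) with m = a * b^2 * c^3 * d^4 and a, b, c square-free.
    A prime with exponent e contributes p^(e // 4) to d and goes into a, b or c
    according to e mod 4, so a, b, c collect exponent residues 1, 2, 3."""
    a = b = c = d = 1
    for p, e in factorize(m).items():
        d *= p ** (e // 4)
        r = e % 4
        if r == 1:
            a *= p
        elif r == 2:
            b *= p
        elif r == 3:
            c *= p
    return a, b, c, d

print(decompose(48))   # 48 = 3 * 1^2 * 1^3 * 2^4
```

Splitting each factor into dyadic ranges, as in the proof above, then costs only a factor $O(\log^3 M)$ in the number of range combinations that need to be considered.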
We may now write\n\\begin{equation*}\n {\\sum}_2 \\ll \\sum_{X,Y,Z, W}{\\sum}_2(X,Y,Z,W)\n\\end{equation*}\n accordingly, so that\n\\begin{equation*}\n {\\sum}_2 \\ll (\\log^3M){\\sum}_2(X,Y,Z,W)\n\\end{equation*}\n for some quadruple $X,Y,Z, W$. However,\n\\begin{equation*}\n {\\sum}_2(X,Y,Z,W) \\leq \\sum_{b,c,d}\\sideset{}{^*}\\sum_{\\substack{a \\in \\ensuremath{\\mathbb Z}[i] \\\\X'<\\mathcal{N}(a) \\leq 2X'}}\n \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} a_n \\leg{b^2c^3d}{n}_4 \\leg{a}{n}_4\n \\right|^2,\n\\end{equation*}\n where $X'=X'(b,c, d)=M\/\\mathcal{N}(b^2c^3d)$. It is easy to see that $X\/2 \\leq X' \\leq\n 2X$, and hence by Lemma \\ref{lem2}\n\\[ {\\sum}_2(X,Y,Z, W) \\ll \\sum_{b,c,d}\\mathcal{B}_1(X',N)\\sum_{n}\n |a_n|^2 \\ll YZW^{1\/4}\\max \\left\\{ \\mathcal{B}_1(X',N): X\/2 \\leq X' \\leq 2X \\right\\} \\sum_{n}\n |a_n|^2, \\]\n since there are $O(W^{1\/4})$ possible integers $d$. \\newline\n\n In the same way we have\n\\begin{align*}\n {\\sum}_2(X,Y,Z, W) &\\leq \\sum_{a, b, d}\\sideset{}{^*}\\sum_{\\substack{c \\in \\ensuremath{\\mathbb Z}[i] \\\\Z'<\\mathcal{N}(c) \\leq 2Z'}}\n \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} a_n \\leg {ab^2d}{n}_4 \\leg {c^3}{n}_4\n \\right|^2 \\\\\n &= \\sum_{a,b, d}\\sideset{}{^*}\\sum_{\\substack{c \\in \\ensuremath{\\mathbb Z}[i] \\\\Z'<\\mathcal{N}(c) \\leq 2Z'}}\n \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} \\overline{a}_n \\overline{\\leg {ab^2d}{n}}_4 \\overline{\\leg {c^3}{n}}_4\n \\right|^2 \\\\\n &= \\sum_{a,b,d}\\sideset{}{^*}\\sum_{\\substack{c \\in \\ensuremath{\\mathbb Z}[i] \\\\Z'<\\mathcal{N}(c) \\leq 2Z'}}\n \\left| \\ \\sideset{}{^*}\\sum_{\\substack{n \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n) \\leq 2N}} \\overline{a}_n \\overline{\\leg {ab^2d}{n}}_4 \\leg {c}{n}_4\n 
\\right|^2 \\\\\n & \\ll \\ \\sum_{a,b,d}\\mathcal{B}_1(Z',N)\\sum_{n}\n |a_n|^2 \\\\\n & \\ll XYW^{1\/4}\\max \\left\\{ \\mathcal{B}_1(Z',N): Z\/2 \\leq Z' \\leq 2Z \\right\\} \\sum_{n}\n |a_n|^2,\n\\end{align*}\n where $Z'=Z'(a,b,d)=M\/\\mathcal{N}(ab^2d)$. As $Y \\ll M^{1\/2}X^{-1\/2}Z^{-3\/2}W^{-1\/2}$, we see that\n\\begin{equation*}\n \\mathcal{B}_2(M,N) \\ll (\\log M)^3M^{1\/2}X^{-1\/2}Z^{-3\/2}W^{-1\/4}\\min(Z\\mathcal{B}_1(X,N), X\\mathcal{B}_1(Z,N)).\n\\end{equation*}\n The assertion of the lemma now follows on replacing $Z$ by $Y$ above.\n\\end{proof}\n\n As in \\cite{DRHB}, we introduce an infinitely differentiable weight function $W: \\ensuremath{\\mathbb R} \\rightarrow \\ensuremath{\\mathbb R}$, defined by\n\\begin{equation}\n\\label{W}\nW(x)= \\begin{cases} \\exp\\left(\\frac{-1}{(2x-1)(5-2x)}\\right),\n\\qquad & \\text{if } \\frac{1}{2} < x < \\frac{5}{2}, \\\\\n0, \\qquad & \\text{otherwise}.\n\\end{cases}\n\\end{equation}\nWe then set\n\\begin{equation}\n\\label{B3def}\n \\mathcal{B}_3(M,N)=\\sup \\left\\{ \\left| {\\sum}_3 \\right| : \\sum_n|a_n|^2=1 \\right\\},\n\\end{equation}\n where\n\\begin{equation*}\n {\\sum}_3=\\sideset{}{^*}\\sum_{\\substack{n_1 \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n_1) \\leq 2N}} \\ \\sideset{}{^*}\\sum_{\\substack{n_2 \\in \\ensuremath{\\mathbb Z}[i] \\\\N < \\mathcal{N}(n_2) \\leq 2N \\\\ (n_1,n_2)=1}} a_{n_1}\\overline{a}_{n_2}\\sum_{m \\in \\ensuremath{\\mathbb Z}[i]}W\\left(\\frac {\\mathcal{N}(m)}{M}\\right)\\leg{m}{n_1}_4\\overline{\\leg{m}{n_2}}_4.\n\\end{equation*}\n\\begin{lemma}\n\\label{lem5} Let $\\varepsilon > 0$ be given. Then there exist\npositive integers $\\Delta_2 \\geq \\Delta_1$ such that\n\\begin{equation*}\n \\mathcal{B}_2(M,N) \\ll_{\\varepsilon} N^{\\varepsilon} \\mathcal{B}_3 \\left(\\frac {M}{\\Delta_1}, \\frac {N}{\\Delta_2}\\right).\n\\end{equation*}\n\\end{lemma}\n\n\n We complete the chain of relations amongst the various norms by giving the following estimate for $\\mathcal{B}_3(M,N)$ in terms of\n $\\mathcal{B}_2(M,N)$.\n\\begin{lemma}\n\\label{lem6} Let $N \\geq 1$. Then for any $\\varepsilon>0$ we have\n\\begin{equation*}\n \\mathcal{B}_3(M,N) \\ll_{\\varepsilon} MN^{4\\varepsilon-1}\\max \\left\\{ \\mathcal{B}_2(K,\n N): K \\leq N^2\/M\n \\right\\}+M^{-1}N^{3+4\\varepsilon}\\sum_{K>N^2\/M}K^{-2-\\varepsilon}\\mathcal{B}_2(K,\n N),\n\\end{equation*}\n where $K$ runs over powers of $2$.\n\\end{lemma}\nThis bound uses the Poisson summation formula and is the key\nin the proof of Theorem \\ref{mainthm}. 
Note that it does not cover the case in which $N =1\/2$, say, for which we have the trivial bound\n\\begin{equation}\n\\label{trivialbound}\n \\mathcal{B}_3(M,N) \\ll_{\\varepsilon} M, \\hspace{0.1in} (N \\leq 1).\n\\end{equation}\nSection \\ref{sec4} will be devoted to the proof of Lemma~\\ref{lem6}.\n\n\\section{Proof of Lemma \\ref{lem6}} \\label{sec4}\n\nOur proof of Lemma \\ref{lem6} requires the application of the Poisson\nsummation formula. We shall write\n\\begin{equation*}\n \\chi(m)=\\leg{m}{n_1}_4\\overline{\\leg{m}{n_2}}_4,\n\\end{equation*}\nwhich is a primitive character (on the group\n$(\\ensuremath{\\mathbb Z}[i]\/(n_1n_2))^{\\times}$) to modulus $q = n_1n_2$, provided\nthat $n_1$, $n_2$ and 2 are pair-wise coprime and that $n_1$ and $n_2$ are\nsquare-free.\n\\begin{lemma}\n\\label{lem7} With the above notations we have\n\\begin{equation*}\n \\sum_{m \\in \\ensuremath{\\mathbb Z}[i]}W\\left(\\frac {\\mathcal{N}(m)}{M}\\right)\\chi(m)=\\frac {\\chi(-2i)g(n_1)\\overline{g(n_2)}M}{\\mathcal{N}(q)}\\leg{n_2}{n_1}_4\\overline{\\leg{n_1}{n_2}}_4\\leg{-1}{n_2}_4\n \\sum_{m \\in \\ensuremath{\\mathbb Z}[i]}\\widetilde{W}\\left(\\sqrt{\\frac {\\mathcal{N}(k)M}{\\mathcal{N}(q)}}\\right)\\overline{\\chi}(k),\n\\end{equation*}\n where\n\\begin{equation*}\n \\widetilde{W}(t)=\\int\\limits^{\\infty}_{-\\infty}\\int\\limits^{\\infty}_{-\\infty}W(\\mathcal{N}(x+yi))\\widetilde{e}\\left( \\frac{t(x+yi)}{2i}\\right)\\mathrm{d} x \\mathrm{d} y,\n\\end{equation*}\n for non-negative $t$. Here $\\widetilde{e}(z)$ is defined in \\eqref{etildedef} and $g(n)$ is the Gauss sums defined in Section \\ref{sec2.4}.\n\\end{lemma}\n\\begin{proof}\nThis lemma is analogous to Lemma 10 in \\cite{DRHB1} and the proof is very similar. 
The main difference is that we need to start with the Poisson summation formula for\n $\\ensuremath{\\mathbb Z}[i]$, which takes the form\n\\begin{equation*}\n \\sum_{j \\in \\ensuremath{\\mathbb Z}[i]}f(j)=\\sum_{k \\in\n \\ensuremath{\\mathbb Z}[i]}\\int\\limits^{\\infty}_{-\\infty}\\int\\limits^{\\infty}_{-\\infty}f(x+yi)\\widetilde{e}\\left( \\frac {k(x+yi)}{2i} \\right)\\mathrm{d} x \\mathrm{d} y.\n\\end{equation*}\nWe omit the details of the rest of the proof as it simply goes along the same lines as the proof of Lemma 10 in \\cite{DRHB1}.\n\\end{proof}\n\n Our next result will be used to separate the variables in a function of a\nproduct, which is Lemma 12 of \\cite{DRHB}.\n\\begin{lemma}\n\\label{lem8} Let $\\rho: \\ensuremath{\\mathbb R} \\rightarrow \\ensuremath{\\mathbb R}$ be an infinitely\ndifferentiable function whose derivatives satisfy the bound\n\\[ \\rho^{(k)}(x) \\ll_{k,A}|x|^{-A} \\]\nfor $|x| \\geq 1$, for any positive constant\n$A$. Let\n\\begin{equation*}\n \\rho_{+}(s)=\\int\\limits^{\\infty}_{0}\\rho(x)x^{s-1} \\mathrm{d} x,\n \\hspace{0.1in} \\rho_{-}(s)=\\int\\limits^{\\infty}_{0}\\rho(-x)x^{s-1} \\mathrm{d}\n x.\n\\end{equation*}\n Then $\\rho_{+}(s)$ and $\\rho_{-}(s)$ are holomorphic in $\\Re(s) = \\sigma > 0$, and\n satisfy\n\\begin{equation*}\n \\rho_{+}(s), \\ \\rho_{-}(s) \\ll_{A, \\sigma} |s|^{-A},\n\\end{equation*}\nin that same domain, for any positive constant $A$. 
Moreover if $\\sigma> 0$ we have\n\\begin{equation*}\n \\rho(x)=\\frac {1}{2\\pi\n i}\\int\\limits^{\\sigma+i\\infty}_{\\sigma-i\\infty}\\rho_{+}(s)x^{-s} \\mathrm{d} s \\; \\; \\; \\mbox{and} \\; \\; \\;\n \\rho(-x)=\\frac {1}{2\\pi\n i}\\int\\limits^{\\sigma+i\\infty}_{\\sigma-i\\infty}\\rho_{-}(s)x^{-s} \\mathrm{d} s\n\\end{equation*}\n for any positive $x$.\n\\end{lemma}\n\nWe are now ready to prove Lemma~\\ref{lem6}.\n\n\\begin{proof} [Proof of Lemma~\\ref{lem6}]\n In the notation of Lemma \\ref{lem7} we have\n\\begin{equation*}\n {\\sum}_3(M,N)=\\sum_{(n_1,n_2)=1}a_{n_1}\\overline{a}_{n_2}\\sum_{m \\in \\ensuremath{\\mathbb Z}[i]}W\\left(\\frac {\\mathcal{N}(m)}{M}\\right)\\chi(m).\n\\end{equation*}\n We proceed to evaluate the inner sum using Lemma \\ref{lem7}, whence\n\\begin{equation}\n\\label{5.1}\n {\\sum}_3(M,N)=M\\sum_{k \\in \\ensuremath{\\mathbb Z}[i]}\\sum_{(n_1,n_2)=1}c_{n_1}\\overline{c}_{n_2}\\leg{n_2}{n_1}_4\\overline{\\leg{n_1}{n_2}}_4\\leg{-1}{n_2}_4\\widetilde{W}\\left(\\sqrt{\\frac {\\mathcal{N}(k)M}{\\mathcal{N}(n_1n_2)}}\\right)\\overline{\\chi}(k),\n\\end{equation}\n where\n\\begin{equation*}\n c_n=a_n\\leg{-2i}{n}_4\\frac {g(n)}{\\mathcal{N}(n)}.\n\\end{equation*}\n Note by the law of quartic reciprocity, we have\n\\begin{equation*}\n \\leg{n_2}{n_1}_4\\overline{\\leg{n_1}{n_2}}_4=(-1)^{((\\mathcal{N}(n_1)-1)\/4)((\\mathcal{N}(n_2)-1)\/4)}.\n\\end{equation*}\n Now we let\n\\[ S_1 =\\{n \\in \\ensuremath{\\mathbb Z}[i]: N<\\mathcal{N}(n)\\leq 2N, n \\hspace{0.05in} \\text{square-free}, n=a+bi, a, b \\in \\ensuremath{\\mathbb Z}, a \\equiv 1 \\bmod{4}, b \\equiv 0 \\bmod{4} \\}, \\]\nand\n\\[ S_2 =\\{n \\in \\ensuremath{\\mathbb Z}[i]: N<\\mathcal{N}(n)\\leq 2N, n \\hspace{0.05in} \\text{square-free}, n=a+bi, a, b \\in \\ensuremath{\\mathbb Z}, a \\equiv 3 \\bmod{4}, b \\equiv 2 \\bmod{4} \\}. 
\\]\n We can then recast the inner sum in \\eqref{5.1} as\n\\begin{align*}\n &\\sum_{(n_1,n_2)=1} \\cdots \\\\\n =& \\sum_{\\substack{(n_1,n_2)=1 \\\\ n_1 \\in S_1, n_2 \\in S_1}} \\cdots\n \\; \\; +\\sum_{\\substack{(n_1,n_2)=1 \\\\ n_1 \\in S_1, n_2 \\in S_2}} \\cdots \\; \\; +\\sum_{\\substack{(n_1,n_2)=1 \\\\ n_1 \\in S_2, n_2 \\in S_1}} \\cdots \\; \\;\n +\\sum_{\\substack{(n_1,n_2)=1 \\\\ n_1 \\in S_2, n_2 \\in S_2}} \\cdots \\; \\; -2\\sum_{\\substack{(n_1,n_2)=1 \\\\ n_1 \\in S_2, n_2 \\in S_2}} \\cdots \\\\\n =& \\sum_{(n_1,n_2)=1}c_{n_1}\\overline{c}_{n_2}\\leg{-1}{n_2}_4\\widetilde{W}\\left(\\sqrt{\\frac {\\mathcal{N}(k)M}{\\mathcal{N}(n_1n_2)}}\\right)\\overline{\\chi}(k)-2\\sum_{(n_1,n_2)=1}c'_{n_1}\\overline{c'}_{n_2}\\leg{-1}{n_2}_4\\widetilde{W}\\left(\\sqrt{\\frac {\\mathcal{N}(k)M}{\\mathcal{N}(n_1n_2)}}\\right)\\overline{\\chi}(k),\n\\end{align*}\n where we let $c'_n=c_n$ if $n \\in S_2$ and $0$ otherwise. Due\n to similarities, it suffices to estimate\n\\begin{equation*}\n M\\sum_{k \\in\n\\ensuremath{\\mathbb Z}[i]}\\sum_{(n_1,n_2)=1}c_{n_1}\\overline{c}_{n_2}\\leg{-1}{n_2}_4\\widetilde{W}\\left(\\sqrt{\\frac\n{\\mathcal{N}(k)M}{\\mathcal{N}(n_1n_2)}}\\right)\\overline{\\chi}(k),\n\\end{equation*}\n Note that $k = 0$ may be omitted if $N \\geq 1$, since then $\\mathcal{N}(n_1n_2) > 1$ and $\\chi(0) = 0$,\nthe character being non-trivial. We may now apply Lemma \\ref{lem8}\nto the function $\\rho(x) = \\widetilde{W}(x)$, which satisfies the necessary conditions of the lemma, as one sees by repeated\nintegration by parts. 
We decompose the available $k$ into sets for\nwhich $K < \\mathcal{N}(k) \\leq 2K$, where $K$ runs over powers of $2$, and\nuse\n\\begin{equation} \\label{sigmadef}\n\\sigma = \\left\\{ \\begin{array}{ll} \\varepsilon , & \\mbox{for} \\; K \\leq N^2\/M, \\\\ 4 +\n\\varepsilon , & \\mbox{otherwise}.\n\\end{array} \\right.\n\\end{equation}\nThis gives\n\\begin{align*}\n {\\sum}_3 & \\ll_{\\varepsilon} M\n \\sum_{K}(KM)^{-\\sigma\/2}\\int\\limits^{\\infty}_{-\\infty}|\\rho_{+}(\\sigma+it)||S(\\sigma+it)| \\mathrm{d} t,\n\\end{align*}\n where\n\\[\n S(s)=\\sum_{K < \\mathcal{N}(k) \\leq 2K}\\left| \\sum_{(n_1,n_2)=1}b_{n_1}b'_{n_2}\\leg{-1}{n_2}_4\\overline{\\chi}(k)\\right|, \\; \\; \\; \\mbox{with} \\; \\; \\;\n b_n=c_n\\mathcal{N}(n)^{s\/2} \\; \\; \\mbox{and} \\; \\; b'_n=\\overline{c}_n\\mathcal{N}(n)^{s\/2} .\n\\]\nWe use the M\\\"obius function to detect the coprimality condition in the inner sum of $S(s)$, giving\n\\begin{align*}\n S(s) &\\ll \\sum_{d}\\sum_{K < \\mathcal{N}(k) \\leq 2K}\\left| \\sum_{d|(n_1,n_2)}b_{n_1}b'_{n_2}\\leg{-1}{n_2}_4\\overline{\\chi}(k)\\right| \\\\\n&= \\sum_{d}\\sum_{K < \\mathcal{N}(k) \\leq 2K}\\left|\n\\sum_{d|n}b_{n}\\overline{\\leg{k}{n}}_4\\right| \\left|\n\\sum_{d|n}b'_{n}\\leg{-k}{n}_4\\right| \\leq S^{1\/2}_1S^{1\/2}_2,\n\\end{align*}\n by Cauchy's inequality, where\n\\[ S_1 = \\sum_{d}\\sum_{K < \\mathcal{N}(k) \\leq 2K}\\left| \\sum_{d|n}b_{n}\\overline{\\leg{k}{n}}_4\\right|^2 \\]\nand satisfies the bound\n\\[ S_1 \\leq \\sum_{d}\\mathcal{B}_2(K,N)\\sum_{d|n}|b_n|^2 \\leq \\mathcal{B}_2(K,N)\\sum_{n}d(n)|a_n|^2\\mathcal{N}(n)^{\\sigma-1} \\ll_{\\varepsilon} N^{\\varepsilon+\\sigma-1}\\mathcal{B}_2(K,N). \\]\n$S_2$ can be treated similarly. 
It follows then that\n\\begin{equation*}\n S(s) \\ll_{\\varepsilon} N^{\\varepsilon+\\sigma-1}\\mathcal{B}_2(K,N),\n\\end{equation*}\n and since\n\\begin{align*}\n \\int\\limits^{\\infty}_{-\\infty}|\\rho_{+}(\\sigma+it)| \\mathrm{d} t \\ll_{\\varepsilon} 1,\n\\end{align*}\nwe infer, mindful of our choices of $\\sigma$ in \\eqref{sigmadef}, that\n\\begin{align*}\n {\\sum}_3 & \\ll_{\\varepsilon} MN^{4\\varepsilon-1}\\max \\left\\{ \\mathcal{B}_2(K,\n N): K \\leq N^2\/M\n \\right\\}+M^{-1}N^{3+4\\varepsilon}\\sum_{K>N^2\/M}K^{-2-\\varepsilon}\\mathcal{B}_2(K,\n N).\n\\end{align*}\nRecalling the definition of $\\mathcal{B}_3(M,N)$ in \\eqref{B3def}, we have completed the proof of Lemma \\ref{lem6}.\n \\end{proof}\n\n\\section{The Recursive Estimate and the Proof of Theorem~\\ref{mainthm}}\n\n Lemmas \\ref{lem4}, \\ref{lem5} and \\ref{lem6} allow us to estimate $\\mathcal{B}_1(M,N)$ recursively, as follows.\n\\begin{lemma}\n\\label{lem9} Suppose that $3\/2 < \\xi \\leq 2$, and that\n\\begin{equation}\n\\label{6.1}\n \\mathcal{B}_1(M,N)\\ll_{\\varepsilon} (MN)^{\\varepsilon} \\left( M+N^{\\xi} + (MN)^{3\/4} \\right)\n\\end{equation}\n for any $\\varepsilon > 0$. Then\n\\begin{equation*}\n \\mathcal{B}_1(M,N)\\ll_{\\varepsilon} (MN)^{\\varepsilon} \\left( M+N^{(9\\xi-6)\/(4\\xi-1)} + (MN)^{3\/4} \\right)\n\\end{equation*}\n for any $\\varepsilon > 0$.\n\\end{lemma}\n\\begin{proof}\n By the symmetry expressed in Lemma \\ref{lem2} the hypothesis\n\\eqref{6.1} yields\n\\begin{align*}\n \\mathcal{B}_1(M,N)\\ll_{\\varepsilon} \\left( M^{\\xi} + N + (MN)^{3\/4}\\right) (MN)^{\\varepsilon}.\n\\end{align*}\n It follows from \\eqref{initialest} that the above estimation is valid with\n $\\xi=2$. 
We now feed this into Lemma \\ref{lem4}, whence\n\\begin{equation} \\label{eqwithmin}\n \\mathcal{B}_2(M,N) \\ll_{\\varepsilon} (MN)^{2\\varepsilon}M^{1\/2}X^{-1\/2}Y^{-3\/2}\\min(Yf(X,N),\n Xf(Y,N)),\n\\end{equation}\n where\n\\begin{equation*}\n f(Z,N)=Z^{\\xi} + N + (ZN)^{3\/4}.\n\\end{equation*}\n If $X \\geq Y$ we bound the minimum in \\eqref{eqwithmin} by $Yf(X,N)$, whence\n\\begin{equation*}\n \\mathcal{B}_2(M,N) \\ll_{\\varepsilon} (MN)^{2\\varepsilon}M^{1\/2}X^{-1\/2}Y^{-3\/2}\\left( YX^{\\xi} + YN +\n Y(XN)^{3\/4} \\right).\n\\end{equation*}\n Here we have\n\\begin{equation*}\n M^{1\/2}X^{-1\/2}Y^{-3\/2}YX^{\\xi} \\ll M^{\\xi}Y^{1-3\\xi}\n\\end{equation*}\n since $X \\ll MY^{-3}$. On recalling that $\\xi > 3\/2 >1\/3$ and $Y \\gg 1$ we see that\nthis is $O(M^{\\xi})$. Moreover\n\\begin{equation*}\n M^{1\/2}X^{-1\/2}Y^{-3\/2}YN=M^{1\/2}X^{-1\/2}Y^{-1\/2}N \\ll\n M^{1\/2}N.\n\\end{equation*}\n Finally\n\\[ M^{1\/2}X^{-1\/2}Y^{-3\/2}Y(XN)^{3\/4}=M^{1\/2}X^{1\/4}Y^{-1\/2}N^{3\/4}\n \\ll M^{3\/4}N^{3\/4}\n \\ll M^{1\/2}N+M^{3\/2} \\leq\n M^{1\/2}N+M^{\\xi}, \\]\n since $\\xi > 3\/2$. It follows that\n\\begin{equation}\n\\label{B2bound}\n \\mathcal{B}_2(M,N) \\ll_{\\varepsilon}\n (MN)^{2\\varepsilon} \\left( M^{1\/2}N+M^{\\xi} \\right)\n\\end{equation}\n when $X \\geq Y$. \\newline\n\n In the alternative case we bound the minimum in \\eqref{eqwithmin} by $Xf(Y,N)$, whence\n\\begin{equation*}\n \\mathcal{B}_2(M,N) \\ll_{\\varepsilon} (MN)^{2\\varepsilon}M^{1\/2}X^{-1\/2}Y^{-3\/2} \\left( XY^{\\xi} + XN + X(YN)^{3\/4} \\right).\n\\end{equation*}\n Here\n\\begin{equation*}\n M^{1\/2}X^{-1\/2}Y^{-3\/2}XY^{\\xi} \\ll M^{1\/2}X^{1\/2}Y^{1\/2} \\ll M\n \\ll M^{\\xi}\n\\end{equation*}\n since $\\xi \\leq 2$ and $XY \\ll M$. 
Moreover\n\\begin{equation*}\n M^{1\/2}X^{-1\/2}Y^{-3\/2}XN = M^{1\/2}X^{1\/2}Y^{-3\/2}N \\ll\n M^{1\/2}N\n\\end{equation*}\n since we are now supposing that $Y \\geq X$.\n Finally\n\\[ M^{1\/2}X^{-1\/2}Y^{-3\/2}X(YN)^{3\/4}=M^{1\/2}X^{1\/2}Y^{-3\/4}N^{3\/4} \\ll\n M^{1\/2}Y^{-1\/4}N^{3\/4} \\ll M^{1\/2}N^{3\/4} \\ll M^{1\/2}N+M^{\\xi}, \\]\n as before. It follows that \\eqref{B2bound} holds when $Y \\geq X$ too. It will be convenient to\nobserve that \\eqref{B2bound} still holds when $M < 1\/2$ , since\nthen $\\mathcal{B}_2(M,N)= 0$. \\newline\n\n We are now ready to use \\eqref{B2bound} (with a new value for $\\varepsilon$) in Lemma \\ref{lem6}, to obtain\na bound for $\\mathcal{B}_3(M,N)$. We readily see that\n\\begin{equation*}\n \\max\\left\\{ \\mathcal{B}_2(K,\n N): K \\leq N^2\/M\n \\right\\} \\ll_{\\varepsilon} N^{\\varepsilon}\\left( M^{-1\/2}N^2+M^{-\\xi}N^{2\\xi} \\right)\n\\end{equation*}\n and\n\\begin{equation*}\n \\sum_{K>N^2\/M}K^{-2-\\varepsilon}\\mathcal{B}_2(K,\n N) \\ll_{\\varepsilon}\n N^{\\varepsilon}\\left( M^{3\/2}N^{-2}+M^{2-\\xi}N^{2\\xi-4} \\right).\n\\end{equation*}\n Thus, if $N \\geq 1$, we will have\n\\begin{equation*}\n \\mathcal{B}_3(M,\n N) \\ll_{\\varepsilon}\n N^{5\\varepsilon}\\left( M^{1\/2}N+M^{1-\\xi}N^{2\\xi-1} \\right).\n\\end{equation*}\n When this is used in Lemma \\ref{lem5} we find that when $N\/\\Delta_2 \\geq\n 1$,\n\\begin{align*}\n \\mathcal{B}_3\\left( \\frac {M}{\\Delta_1}, \\frac {N}{\\Delta_2} \\right) &\\ll_{\\varepsilon}\n N^{5\\varepsilon}\\left( M^{1\/2}N+M^{1-\\xi}N^{2\\xi-1}\\Delta^{\\xi-1}_1\\Delta^{1-2\\xi}_2 \\right) \\leq N^{5\\varepsilon} \\left( M^{1\/2}N+M^{1-\\xi}N^{2\\xi-1}\\Delta^{-\\xi}_2 \\right) \\\\\n & \\leq N^{5\\varepsilon}\\left( M^{1\/2}N+M^{1-\\xi}N^{2\\xi-1} \\right).\n\\end{align*}\n Note that when $M \\geq N$, we have $M^{1\/2}N \\leq (MN)^{3\/4}$\n and when $M \\leq N$, we have $(MN)^{3\/4} \\leq\n M^{1-\\xi}N^{2\\xi-1}$ since $\\xi > 3\/2$. 
Thus we conclude that\n\\begin{equation*}\n \\mathcal{B}_2(M,\n N) \\ll_{\\varepsilon}\n N^{6\\varepsilon}\\left( (MN)^{3\/4}+M^{1-\\xi}N^{2\\xi-1} \\right),\n\\end{equation*}\n provided that $N\/\\Delta_2 \\geq 1$. In the alternative case \\eqref{trivialbound} applies, whence\n\\begin{equation*}\n \\mathcal{B}_2(M,\n N) \\ll_{\\varepsilon}\n (MN)^{6\\varepsilon}\\left( M+(MN)^{3\/4}+M^{1-\\xi}N^{2\\xi-1} \\right),\n\\end{equation*}\n In view of Lemma \\ref{lem3} and \\eqref{12comparison} we may now deduce that\n\\begin{align*}\n \\mathcal{B}_1(M,\n N) \\leq \\mathcal{B}_1(M',\n N) \\ll \\mathcal{B}_2(M',\n N) \\ll_{\\varepsilon}\n (M'N)^{6\\varepsilon}\\left( M'+(M'N)^{3\/4}+{M'}^{1-\\xi}N^{2\\xi-1} \\right) ,\n\\end{align*}\n for any $M' \\geq CM \\log(2MN)$. Note that when ${M}^{4\\xi-1}\n \\leq N^{8\\xi-7}$, we have\n\\begin{align*}\n (MN)^{3\/4} \\leq {M}^{1-\\xi}N^{2\\xi-1}.\n\\end{align*}\n We shall now choose\n\\begin{align*}\n M'=C\\max \\left\\{ M, N^{(8\\xi-7)\/(4\\xi-1)} \\right\\} \\log(2MN) ,\n\\end{align*}\n so that when $M \\geq N^{(8\\xi-7)\/(4\\xi-1)}$, we have\n\\begin{align*}\n M'+(M'N)^{3\/4}+{M'}^{1-\\xi}N^{2\\xi-1} \\ll (MN)^{\\varepsilon} \\left( M+(MN)^{3\/4} \\right) ,\n\\end{align*}\n while when $M \\leq N^{(8\\xi-7)\/(4\\xi-1)}$, we have\n\\begin{align*}\n M'+(M'N)^{3\/4}+{M'}^{1-\\xi}N^{2\\xi-1} &\n \\ll (MN)^{\\varepsilon} \\left( N^{(8\\xi-7)\/(4\\xi-1)}+N^{(8\\xi-7)(1-\\xi)\/(4\\xi-1)}N^{2\\xi-1} \\right) \\\\\n & \\ll\n (MN)^{\\varepsilon}N^{(9\\xi-6)\/(4\\xi-1)}.\n\\end{align*}\n We then deduce that\n\\begin{align*}\n \\mathcal{B}_1(M,\n N) \\ll_{\\varepsilon}\n (MN)^{20\\varepsilon} \\left( M+(MN)^{3\/4}+N^{(9\\xi-6)\/(4\\xi-1)} \\right).\n\\end{align*}\n Lemma \\ref{lem9} now follows.\n\\end{proof}\n\nWe now proceed to prove Theorem \\ref{mainthm}.\n\n\\begin{proof}[Proof of Theorem~\\ref{mainthm}] Note that it follows from \\eqref{initialest} that the estimation given in Lemma \\ref{lem9} is valid with\n $\\xi=2$. 
We further observe that\n\\begin{align*}\n \\frac{3}{2} < \\frac {9\\xi-6}{4\\xi-1} < \\xi,\n\\end{align*}\n for $\\xi > 3\/2$ and in the iterative applications of Lemma~\\ref{lem9} the exponent of $N$ in the bound for $\\mathcal{B}_1(M,N)$ decreases and tends to $3\/2$. We therefore\narrive at the following bound\n\\begin{align*}\n \\mathcal{B}_1(M,\n N) \\ll_{\\varepsilon}\n (MN)^{\\varepsilon}\\left( M+N^{3\/2}+(MN)^{3\/4} \\right),\n\\end{align*}\n for any $\\varepsilon>0$. Using Lemma \\ref{lem2} we then have\n\\begin{equation*}\n\\begin{split}\n \\mathcal{B}_1(M,\n N) & \\ll_{\\varepsilon}\n (MN)^{\\varepsilon}\\min \\left\\{ M+N^{3\/2}+(MN)^{3\/4},\n N+M^{3\/2}+(MN)^{3\/4} \\right\\} \\\\\n & \\ll_{\\varepsilon} (MN)^{\\varepsilon} \\left(M+N+(MN)^{3\/4} \\right),\n \\end{split}\n \\end{equation*}\n where the last estimation follows since when $N \\leq M, N^{3\/2}=N^{3\/4}N^{3\/4} \\leq (MN)^{3\/4}$ and similarly when $N \\geq M, M^{3\/2} \\leq (MN)^{3\/4}$.\n This establishes Theorem \\ref{mainthm}.\n \\end{proof}\n\n\n\\section{The Quartic large sieve for Dirichlet Characters}\n\\label{sec8}\n\nWe now proceed to prove Theorem~\\ref{quarticlargesieve}. It is easy to reduce the expression on the left-hand side of\n\\eqref{final} to a sum of similar expressions with the additional\nsummation conditions $(q,2)=1$ and $(m,2)=1$ included. Thus it\nsuffices to estimate\n\\begin{equation} \\label{trans}\n\\begin{split}\n\\sum\\limits_{\\substack{Q M^4\/Q}\nK^{-2-\\varepsilon} B_3(K,M),\n\\end{equation}\nwhere the sum over $K$ in \\eqref{B43} runs over powers of $2$.\n\\end{lemma}\n\nSince the proofs of \\eqref{C22}-\\eqref{B43} are essentially the\nsame as those of (31)-(36) of Lemma 4.1 in \\cite{B&Y}, we omit\nthe proofs here. 
\\newline\n\nWe note that it follows from \\eqref{B1C1}-\\eqref{C22}, that we have\n\\begin{equation} \\label{C2egen}\nB_1(Q,M) \\ll (QM)^{\\varepsilon}\\left(Q^{1-1\/v} M +\nQ^{1+3\/(4v)}\\right)\n\\end{equation}\nfor any $v\\in \\ensuremath{\\mathbb N}$.\n\n\n\\subsection{Estimating $C_2$} \\label{section:C2}\n\n In this section we prove \\eqref{C2e1}. Recall $C_2(M,Q)$\nis the norm of the sum\n\\begin{equation}\n\\label{eq:C2def} \\sum\\limits_{M0$ here, since the contribution of the negative $h$'s can be treated similarly and satisfies the same bound. We use $\\sigma = \\varepsilon$\nto see that\n\\begin{align*}\n U'(\\Delta, \\delta, \\ell, e,\nh) & \\ll_{\\varepsilon}\n\\left(\\frac{\\ell \\mathcal{N}(e\\overline{e}\\Delta)}{hM}\\right)^{\\varepsilon}\\int\\limits^{\\infty}_{-\\infty}|\\rho_{+}(\\varepsilon+it)||V(\\varepsilon+it)|\n\\mathrm{d} t,\n\\end{align*}\n where\n\\begin{align*}\n V(s)= \\sideset{}{'}\\sum\\limits_{\\substack{n_1,n_2\\in \\ensuremath{\\mathbb Z}[i]\\\\\n\\frac{Q}{\\mathcal{N}(e\\delta \\Delta)}<\\mathcal{N}(n_1),\\mathcal{N}(n_2)\\le \\frac{2Q}{\\mathcal{N}(e\\delta\n\\Delta)} \\\\ n_1,n_2\\equiv \\pm 1 \\bmod {(1+i)^3} \\\\ (n_1 n_2,\ne \\overline{e}\\delta \\overline{\\delta} \\Delta \\overline{\\Delta}) =\n1 } } d_{n_1}d'_{n_2}\\left(\\frac{n_1}{n_2}\\right)^2_4,\n\\end{align*}\n with\n\\begin{equation*}\n d_n=c_{\\Delta,\\delta,\\ell,h,e,n}\\mathcal{N}(n)^{s} \\; \\; \\; \\mbox{and} \\; \\; \\; d'_n=c'_{\\Delta,\\delta,\\ell,h,e,n}\\mathcal{N}(n)^{s} .\n\\end{equation*}\n Note that $d_{n_1}$ and $d_{n_2}'$ depend on\n$\\Delta,\\delta, \\ell, h, e, n$ and $s$, and\n\\begin{equation*}\n|d_n| \\ll \\left(\\frac{\\mathcal{N}(e\\delta\\Delta)}{Q}\\right)^{1\/2-\\varepsilon}\n\\left| b_{ne\\Delta\\delta} \\right|, \\quad |d_n'| \\ll\n\\left(\\frac{\\mathcal{N}(e\\delta\\Delta)}{Q}\\right)^{1\/2-\\varepsilon}\n\\left| b_{\\overline{n}e\\Delta\\overline{\\delta}} \\right|.\n\\end{equation*}\n Now, using the Cauchy-Schwarz inequality and 
the estimate\n\\eqref{eq:quartic} upon noting that this estimate remains valid if\nthe summation conditions $m,n\\equiv 1 \\bmod {(1+i)^3}$ therein are\nreplaced by $m,n \\equiv \\pm 1 \\bmod {(1+i)^3}$ and\n$\\left(\\frac{n}{m}\\right)_4$ replaced by\n$\\left(\\frac{n_1}{n_2}\\right)^2_4$, we bound $V(\\varepsilon+it)$ by\n\\begin{equation} \\label{square}\n\\begin{split}\n& \\left| V(\\varepsilon+it) \\right|^2 \\\\\n&\\ll \\sideset{}{'}\\sum\\limits_{\\substack{n_1\\in \\ensuremath{\\mathbb Z}[i]\\\\\n\\frac{Q}{\\mathcal{N}(e\\delta \\Delta)}<\\mathcal{N}(n_1)\\le \\frac{2Q}{\\mathcal{N}(e\\delta \\Delta)}\n\\\\ n_1 \\equiv \\pm 1 \\bmod {(1+i)^3} \\\\ (n_1, e\n\\overline{e}\\delta \\overline{\\delta} \\Delta \\overline{\\Delta}) = 1\n} } |d_{n_1}|^2 \\;\\;\\;\\; \\times\n\\sideset{}{'}\\sum\\limits_{\\substack{n_1\\in \\ensuremath{\\mathbb Z}[i]\\\\\n\\frac{Q}{\\mathcal{N}(e\\delta \\Delta)}<\\mathcal{N}(n_1)\\le \\frac{2Q}{\\mathcal{N}(e\\delta \\Delta)}\n\\\\ n_1 \\equiv \\pm 1 \\bmod {(1+i)^3} \\\\ (n_1, e\n\\overline{e}\\delta \\overline{\\delta} \\Delta \\overline{\\Delta}) = 1\n} } \\left|\\sideset{}{'}\\sum\\limits_{\\substack{n_2\\in \\ensuremath{\\mathbb Z}[i]\\\\\n\\frac{Q}{\\mathcal{N}(e\\delta \\Delta)}<\\mathcal{N}(n_2)\\le \\frac{2Q}{\\mathcal{N}(e\\delta \\Delta)}\n\\\\ n_2 \\equiv \\pm 1 \\bmod {(1+i)^3} \\\\ (n_2, e\n\\overline{e}\\delta \\overline{\\delta} \\Delta \\overline{\\Delta}) = 1\n} } d_{n_2}'\n\\left(\\frac{n_1}{n_2}\\right)^2_4 \\right|^2 \\\\\n&\\ll (QM)^{8\\varepsilon}\\left(\\frac{\\mathcal{N}(e\\delta\n\\Delta)}{Q}\\right)^{1\/4 -4\\varepsilon} \\left( \\sideset{}{'}\\sum_{\\substack{Q\/\\mathcal{N}(e)< \\mathcal{N}(n)\\leq 2Q\/\\mathcal{N}(e)\\\\\nn \\equiv \\pm 1 \\bmod {(1+i)^3}\\\\ (\\mathcal{N}(n), \\mathcal{N}(e))=1 }} |b_{en}|^2 \\right)^2.\n\\end{split}\n\\end{equation}\n Since\n\\begin{align*}\n \\int\\limits^{\\infty}_{-\\infty}|\\rho_{+}(\\sigma+it)| \\mathrm{d} t \\ll_{\\varepsilon} 1,\n\\end{align*}\n we deduce 
that\n\\begin{equation} \\label{cont}\n\\begin{split}\nS'_W(M,Q) & \\ll M(QM)^{7\\varepsilon}\n\\sideset{}{'}\\sum_{\\substack{\\mathcal{N}(\\Delta) \\leq 2Q \\\\ \\Delta \\equiv 1\n\\bmod{{(1+i)^3}}}}\\ \\sideset{}{'}\\sum_{\\substack{\\mathcal{N}(\\delta) \\leq\n\\frac{2Q}{\\mathcal{N}(\\Delta)} \\\\ \\delta \\equiv 1 \\bmod{{(1+i)^3}} \\\\\n(\\mathcal{N}(\\delta), \\mathcal{N}(\\Delta)) = 1}}\\frac{1}{(\\mathcal{N}(\\delta))^{1\/2}} \\sum_{\\ell |\n\\mathcal{N}(\\Delta)} \\frac{1}{\\ell} \\sideset{}{'}\\sum_{\\substack{e \\in \\ensuremath{\\mathbb Z}[i] \\\\ e\n\\equiv 1 \\bmod {(1+i)^3} \\\\ \\mathcal{N}(e) \\leq \\frac{2Q}{\\mathcal{N}(\\delta\n\\Delta)}\\\\ (\\mathcal{N}(e), \\mathcal{N}(\\delta\\Delta))=1 }}\\frac {1}{\\mathcal{N}(e)}\n\\\\\n& \\hspace*{2cm} \\times \\sum_{0< |h| \\leq H} \\left(\\frac{\\mathcal{N}(e\\delta\n\\Delta)}{Q}\\right)^{\\frac14 -2\\varepsilon}\\sideset{}{'}\\sum_{\\substack{Q\/\\mathcal{N}(e)< \\mathcal{N}(n)\\leq 2Q\/\\mathcal{N}(e)\\\\\nn \\equiv \\pm 1 \\bmod {(1+i)^3}\\\\ (\\mathcal{N}(n), \\mathcal{N}(e))=1 }} |b_{en}|^2 \\\\\n& \\ll Q^{7\/4+2\\varepsilon}(QM)^{8\\varepsilon}\n\\sideset{}{'}\\sum_{\\substack{\\mathcal{N}(\\Delta) \\leq 2Q \\\\ \\Delta \\equiv 1\n\\bmod{{(1+i)^3}}}}\\frac {1}{\\mathcal{N}(\\Delta)^{7\/4+2\\varepsilon}}\n\\sideset{}{'}\\sum_{\\substack{\\mathcal{N}(\\delta) \\leq\n\\frac{2Q}{\\mathcal{N}(\\Delta)} \\\\ \\delta \\equiv 1 \\bmod{{(1+i)^3}} \\\\\n(\\mathcal{N}(\\delta), \\mathcal{N}(\\Delta)) = 1}}\\frac{1}{(\\mathcal{N}(\\delta))^{5\/4+2\\varepsilon}}\n\\sum_{\\ell | \\mathcal{N}(\\Delta)}1 \\\\\n& \\hspace*{2cm} \\times \\sideset{}{'}\\sum_{\\substack{e \\in \\ensuremath{\\mathbb Z}[i]\n\\\\ e \\equiv 1 \\bmod {(1+i)^3} \\\\ \\mathcal{N}(e) \\leq \\frac{2Q}{\\mathcal{N}(\\delta\n\\Delta)}\\\\ (\\mathcal{N}(e), \\mathcal{N}(\\delta\\Delta))=1 }}\\frac\n{1}{\\mathcal{N}(e)^{3\/4+2\\varepsilon}}\n\\sideset{}{'}\\sum_{\\substack{Q\/\\mathcal{N}(e)< \\mathcal{N}(n)\\leq 2Q\/\\mathcal{N}(e)\\\\\nn 
\\equiv \\pm 1 \\bmod {(1+i)^3}\\\\ (\\mathcal{N}(n), \\mathcal{N}(e))=1 }} |b_{en}|^2 \\\\\n& \\ll Q^{7\/4+2\\varepsilon}(QM)^{8\\varepsilon}\\sideset{}{'}\\sum_{\\substack{Q< \\mathcal{N}(n)\\leq 2Q\\\\\nn \\equiv \\pm 1 \\bmod {(1+i)^3}}} |b_{n}|^2\n\\sideset{}{'}\\sum_{\\substack{e \\in \\ensuremath{\\mathbb Z}[i]\n\\\\ e \\equiv 1 \\bmod {(1+i)^3} \\\\ e|n }}\\frac\n{1}{\\mathcal{N}(e)^{3\/4+2\\varepsilon}} \\\\\n& \\ll Q^{7\/4+2\\varepsilon}(QM)^{9\\varepsilon}\\sideset{}{'}\\sum_{\\substack{Q< \\mathcal{N}(n)\\leq 2Q\\\\\nn \\equiv \\pm 1 \\bmod {(1+i)^3}}} |b_{n}|^2 .\n\\end{split}\n\\end{equation}\n\n Combining \\eqref{h0cont} and \\eqref{cont}, we deduce that\n\\eqref{eq:prePoisson} and hence \\eqref{eq:C2def} is bounded by\n\\begin{equation*}\n\\ll (QM)^{\\varepsilon} \\left(M+Q^{7\/4}\\right) \\| b_{n} \\|^2\n\\end{equation*}\nwhich implies the desired bound \\eqref{C2e1}.\n\n\\section{Completion of the proof of Theorem \\ref{quarticlargesieve}}\nWe start with \\eqref{C2egen} with any $v\\ge 2$ (as one checks\neasily that $v=1$ does not lead to any improvement) as an initial\nestimate. From \\eqref{B21} and \\eqref{C2egen}, it follows that\n\\begin{equation*}\nB_2(Q,M) \\ll (QM)^{\\varepsilon} Q^{1\/2}X^{-1\/2}\n(X^{1+3\/(4v)}+X^{1-1\/v}M)\n\\end{equation*}\n for a suitable $X$ with $1\\ll X\\ll Q$. The worst case is $X = Q$ which shows $B_2(Q,M)$ also satisfies\n \\eqref{C2egen}. 
Repeating the argument, we have\n\\begin{equation*}\nB_3(Q,M)\\ll\n(QM)^{\\varepsilon}\\left(Q^{1+3\/(4v)}+Q^{1-1\/v}M\\right).\n\\end{equation*}\nCombining this with \\eqref{B43}, we obtain\n\\begin{eqnarray*}\nB_4(Q,M)&\\ll& Q+(QM)^{9\\varepsilon}QM^{-2} \\max\\left\\{K^{1+3\/(4v)}+K^{1-1\/v}M \\ :\\ K\\le M^4Q^{-1}\\right\\}\n\\\\ & & \\hspace*{1cm}+ (QM)^{9\\varepsilon}M^{6}Q^{-1} \\sum\\limits_{K\\ge M^4\/Q} K^{-2-\\varepsilon}(K^{1+3\/(4v)}+K^{1-1\/v}M)\\nonumber\\\\\n&\\ll&\nQ+(QM)^{10\\varepsilon}(Q^{-3\/(4v)}M^{2+3\/v}+Q^{1\/v}M^{3-4\/v}),\n\\end{eqnarray*}\nwhere the sum over $K$ runs over powers of $2$. From this and\n\\eqref{B34}, we deduce that\n\\begin{equation*}\nB_3(Q,M)\\ll \\frac{Q}{\\Delta_1}+(QM)^{\\varepsilon}\\left(\n\\left(\\frac{Q}{\\Delta_1}\\right)^{-3\/(4v)}\n\\left(\\frac{M}{\\Delta_2}\\right)^{2+3\/v}+\n\\left(\\frac{Q}{\\Delta_1}\\right)^{1\/v}\n\\left(\\frac{M}{\\Delta_2}\\right)^{3-4\/v}\\right)\n\\end{equation*}\nfor some positive integers $\\Delta_1$, $\\Delta_2$ with\n$\\Delta_2^2\\ge \\Delta_1$. From this and the trivial bound\n\\begin{equation*}\nB_1(Q,M)\\ll B_3(Q,M),\n\\end{equation*}\n we deduce that\n\\begin{equation}\n\\label{tem1} B_1(Q,M)\\ll\nQ+(QM)^{\\varepsilon}\\left(Q^{-3\/(4v)}M^{2+3\/v}+Q^{1\/v}M^{3-4\/v}\\right).\n\\end{equation}\nCombining \\eqref{tem1} with \\eqref{B11'}, we deduce that\n\\begin{equation} \\label{Re1}\nB_1(Q,M)\\ll\n(\\tilde{Q}M)^{\\varepsilon}\\left(\\tilde{Q}+\\tilde{Q}^{-3\/(4v)}M^{2+3\/v}+\\tilde{Q}^{1\/v}M^{3-4\/v}\\right)\n\\end{equation}\nif $\\tilde{Q}\\ge CQ\\log(2QM)$. We choose $\\tilde{Q}:=\\max\n(Q^{1+\\varepsilon}, M^{4-4v\/7})$. Then \\eqref{Re1} implies that\n\\begin{equation} \\label{Re2}\nB_1(Q,M)\\ll (QM)^{\\varepsilon}\n\\left(Q+Q^{1\/v}M^{3-4\/v}+M^{17\/7}\\right).\n\\end{equation}\n\n It is easy to see that the choice $v=2$ is optimal and a further cycle in the above process does not lead to an improvement of\nour result. 
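For completeness, we note how the exponent $17\/7$ in \\eqref{Re2} arises: with $\\tilde{Q}=M^{4-4v\/7}$, the two $M$-terms in \\eqref{Re1} balance, as\n\\begin{equation*}\n\\tilde{Q}^{-3\/(4v)}M^{2+3\/v}=M^{-3\/v+3\/7}M^{2+3\/v}=M^{17\/7} \\; \\; \\; \\mbox{and} \\; \\; \\; \\tilde{Q}^{1\/v}M^{3-4\/v}=M^{4\/v-4\/7}M^{3-4\/v}=M^{17\/7},\n\\end{equation*}\nso that this exponent is in fact independent of $v$. 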
Combining \\eqref{C2egen} with $v=1,2,3$ and\n\\eqref{Re2} with $v=2$, we obtain our final estimate\n\\begin{equation}\n\\label{final0} B_1(Q,M)\\ll\n(QM)^{\\varepsilon}\\min\\left\\{Q^{7\/4}+M, \\; Q^{11\/8}+Q^{1\/2}M, \\; Q^{5\/4}+Q^{2\/3}M,\\;\nQ+Q^{1\/2}M+M^{17\/7}\\right\\}.\n\\end{equation}\nwhich together with \\eqref{trans} (noting that the last expression in \\eqref{trans} is $\\ll B_1(Q,M)$ by the law of quartic reciprocity) implies Theorem \\ref{quarticlargesieve}. \\hfill $\\Box$ \\\\\n\nCalculating the right-hand side of \\eqref{final} for various\nranges of $Q$ and $M$, we obtain that it is bounded by\n\\begin{equation*}\n\\ll (QM)^{\\varepsilon} \\| a_m \\|^2\n\\cdot \\left\\{ \\begin{array}{llll} M &\\mbox{ if } Q\\le M^{4\/7},\\\\ \\\\ Q^{7\/4} &\\mbox{ if } M^{4\/7}> 1$), \nthe observed anisotropy is expected to fall to much lower values \n\\cite{savthes}. Photon diffusion dampens anisotropies at angular \nscales smaller than about one minute. However, for such large \nvalues of $k$, $D_g$ has rapidly growing solutions. 
The \nperturbation equation becomes \n\\begin{equation}\n\\ddot D_g + \\dot D_g - Ce^{-\\bar{\\eta}}D_g = 0 \n\\end{equation}\nThis has exact solutions in terms of the modified Bessel functions of the \nfirst and second kind, $I_1,~K_1$:\n\\begin{equation}\nD_g = C_1(Ce^{-\\bar{\\eta}})^{1\\over 2}I_1((4Ce^{-\\bar{\\eta}})^{1\\over 2})\n + C_2(Ce^{-\\bar{\\eta}})^{1\\over 2}K_1((4Ce^{-\\bar{\\eta}})^{1\\over 2})\n\\end{equation}\nFor large arguments, these functions have their asymptotic forms:\n\\begin{equation}\nI_1 \\longrightarrow {(Ce^{-\\bar{\\eta}})^{-{1\\over 4}}\\over {2\\sqrt{\\pi}}}\nexp[2(Ce^{-\\bar{\\eta}})^{1\\over 2}];~~\nK_1 \\longrightarrow {(Ce^{-\\bar{\\eta}})^{-{1\\over 4}}\\over {2\\sqrt{\\pi}}}\nexp[- 2(Ce^{-\\bar{\\eta}})^{1\\over 2}]\n\\end{equation}\nEven if diffusion damping were to reduce the baryon density contrast to \nvalues as low as some $10^{-15}$, a straightforward numerical integration\nof Eqn.(4) demonstrates that for $k \\ge 3000$ the density contrast becomes \nnonlinear around a redshift of order 50. \n\nIn contrast to the above, in the radiation dominated epoch,\nthe adiabatic approximation perturbation equations imply \n\\cite{pranav}:\n\\begin{equation}\n[(k^2 + 3){3\\over {4k^2}} + {3\\tilde C\\over {2k^2e^{2\\bar{\\eta}}}}]\\ddot D_g +\n{3\\tilde C\\over {k^2e^{2\\bar{\\eta}}}}\\dot D_g\n+[{{k^2 + 3}\\over 8} - {\\tilde C\\over {2e^{2\\bar{\\eta}}}}]D_g = 0 \n\\end{equation}\nFor $\\bar{\\eta}$ large and negative, the small-$k$ perturbation equation \nreduces to \n\\begin{equation}\n3\\ddot D_g + 6\\dot D_g -k^2D_g = 0\n\\end{equation}\nEqns.(7) and (8) imply that \nperturbations bounded for large negative $\\eta$ damp out for \nsmall $k$, while large-$k$ modes are oscillatory.\n\nWe conclude that fluctuations do not grow in the radiation \ndominated era, and that small-$k$ (large scale) fluctuations do not grow \nin the matter dominated era either. 
However, \neven tiny residual baryonic fluctuations, $O(10^{-15})$ \nat the last scattering surface, grow to the non-linear regime \nin the matter dominated era for large values of $k \\ge 3000$. \nSuch a growth would be a necessary condition for structure \nformation and is not satisfied in the standard model. In the \nstandard model, cold dark matter is absolutely essential \nfor structure formation. \n\n\n\\vspace{.2cm}\n\\item{\\it \\bf \\noindent The recombination epoch}\n\\vspace{.2cm}\n\n Salient features of the plasma era in a linear coasting \ncosmology have been described in \\cite{astroph,savitaI,savthes}. \nHere we reproduce some of\nthe peculiarities of the recombination epoch. These\nare deduced by making a simplifying assumption of thermodynamic \nequilibrium just before recombination. \n\nThe probability that a photon was last scattered in the interval \n$(z,z + dz)$ can be expressed in terms of the optical depth, and turns out to be:\n\\begin{equation}\nP(z) = e^{-\\tau_\\gamma}{{d\\tau_\\gamma}\\over {dz}} \\approx 7.85\\times 10^{-3}\n({z\\over {1000}})^{13.25}\\exp[-0.55({z\\over {1000}})^{14.25}]\n\\end{equation}\nThis $P(z)$ is sharply peaked and well fitted by a Gaussian of mean redshift\n$z \\approx 1037$ and standard deviation in redshift $\\Delta z \\approx 67.88$.\nThus in a linearly coasting cosmology, the last scattering surface is located at\nredshift $z^* = 1037$ with thickness \n$\\Delta z \\approx 68$. Corresponding values in \nstandard cosmology are $ z = 1065$ and $\\Delta z \\approx 80$.\n\n An important scale that determines the nature of CMBR anisotropy is the \nHubble scale, which is the same as the curvature scale for linear coasting. \nThe angle subtended today, by a sphere of Hubble radius at $z^* = 1037$, \nturns out to be $\\theta_H \\approx 15.5$ minutes. The Hubble length determines the scale \nover which physical processes can occur coherently.
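The quoted location and thickness of the last-scattering surface can be read off directly from $P(z)$; a minimal numerical sketch (the FWHM-based width used here is cruder than the Gaussian fit quoted in the text, so it comes out somewhat larger than $\Delta z \approx 68$):

```python
import numpy as np

def P(z):
    # Last-scattering probability density quoted above
    x = z / 1000.0
    return 7.85e-3 * x**13.25 * np.exp(-0.55 * x**14.25)

z = np.linspace(800.0, 1300.0, 500001)
p = P(z)
z_peak = z[np.argmax(p)]        # location of the last-scattering surface
above = z[p > 0.5 * p.max()]
fwhm = above[-1] - above[0]     # full width at half maximum in redshift
print(z_peak, fwhm)             # peak near z ~ 1037, FWHM ~ 180
```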
Thus one expects all \nacoustic signals to be contained within an angle $\n\\theta_H \\approx 15.5$ minutes.\n\nWe expect the nature of CMB anisotropy to follow from the above \nresults. The details are still under study and shall be reported \nseparately.\n\n\\vspace{.5cm}\n\\item{\\it \\bf \\noindent Summary}\n\\vspace{.5cm}\n\nIn spite of a significantly different evolution, a linear \ncoasting cosmology cannot be ruled out by any of the tests we have \nsubjected it to so far. Linear coasting being extremely\nfalsifiable, it is encouraging to observe its\nconcordance! In standard cosmology, falsifiability has taken \na back seat: one just constrains the values of cosmological \nparameters by subjecting the data to Bayesian statistics. Ideally, \none would have been very content with a cosmology based on physics\ntested in the laboratory. Clearly, standard cosmology does not \npass such a test. One needs a mixture of hot and cold dark \nmatter, together with (now) some form of \\emph{dark energy} to \nact as a cosmological constant, to \nfind any concordance with observations. In other words, one uses \nobservations to parametrize theory in Standard Cosmology. \nIn contrast, a universe that is born and evolves as a curvature \ndominated model has a tremendous concordance: it does not need any form of dark matter, and there are \nsufficient grounds to explore models that support such a coasting.\n\n\\end{itemize}\n\\section{The Nucleosynthesis Constraint:}\n\nWhat makes linear coasting particularly appealing is a \nstraightforward adaptation of standard nucleosynthesis codes to\ndemonstrate that primordial nucleosynthesis is not an impediment \nfor a linear coasting cosmology \\cite{kapl,annu}. A linear \nevolution of the scale factor radically affects nucleosynthesis\nin the early universe. With the present age of the universe some \n$15\\times 10^9$ years and the \\emph{effective} CMB temperature 2.73 K, \nthe universe turns out to be some 45 years old at $10^9$ K.
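The age quoted at $10^9$ K follows from the linear expansion law alone: with $a \propto t$ and $T \propto 1/a$, the age at temperature $T$ is $t = t_0\,T_0/T$. A one-line check (the exact number depends on the adopted present age):

```python
# In a linearly coasting universe a(t) ~ t, so T ~ 1/a gives t(T) = t0 * T0/T.
T0 = 2.73   # present CMB temperature in K
t0 = 15e9   # present age assumed in the text, in years
T = 1e9     # temperature of interest, in K
t = t0 * T0 / T
print(t)    # about 41 years, i.e. "some 45 years" at the precision of the text
```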
\nWith the universe expanding at such low rates, \nweak interactions remain in equilibrium for temperatures as low \nas $\\approx 10^8$ K.\nThe neutron to proton ratio is determined by the n-p\nmass difference and is approximately $n\/p\\sim \\exp[-15\/T_9]$.\nThis falls to abysmally low values at temperatures below $10^9$ K.\nSignificant nucleosynthesis leading to helium formation commences \nonly near temperatures below $\\sim 5\\times 10^9$K. The low n\/p\nratio is not an impediment to adequate helium production. This\nis because once nucleosynthesis commences, inverse beta decay \nreplenishes neutrons by converting protons into neutrons and \npumping them into the nucleosynthesis network. For baryon entropy \nratio $\\eta\\approx 7.8\\times 10^{-9}$, the standard \nnucleosynthesis network can be modified for linear coasting and gives $\\approx 23.9\\%$ Helium.\nThe temperatures are high enough for helium to burn; this is true \neven in SBBN, but there the universe expands very rapidly. In comparison,\nthe linear evolution gives enough time for successive burning \nof helium, carbon and oxygen. The metallicity yield is some $10^8$ times the metallicity \nproduced in the early universe in the SBBN. The metallicity \nis expected to get distributed amongst nuclei with maximum \nbinding energies per nucleon. These are nuclei with atomic \nmasses between 50 and 60. This metallicity is close to\nthat seen in the lowest metallicity objects. Figures (1) \\& (2) describe \nnucleosynthesis as a function of the baryon entropy ratio. The \nmetallicity concomitantly produced with $\\approx 23.9\\%$ \nHelium is roughly $10^{-5}$ solar. \n\nThe only problem that one has to contend with is the significantly\nlow residual deuterium in such an evolution.
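The steep fall of the equilibrium n/p ratio below $10^9$ K can be read off directly from the quoted formula; a short illustrative calculation:

```python
import numpy as np

def n_over_p(T9):
    # Equilibrium neutron-to-proton ratio, n/p ~ exp(-15/T9), where
    # 15/T9 approximates the n-p mass difference (~1.29 MeV) over kT
    return np.exp(-15.0 / T9)

for T9 in (10.0, 5.0, 1.0, 0.5):
    print(T9, n_over_p(T9))
# the ratio collapses from ~0.2 at 10^10 K to ~3e-7 at 10^9 K
```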
The desired amount\nwould have to be produced by spallation processes much later \nin the history of the universe, as described below.\n\nInterestingly, the baryon entropy ratio required for the right \namount of helium corresponds to $\\Omega_b \\equiv \\rho_b\/\\rho_c = \n8\\pi G \\rho_b\/3H_0^2 \\approx 0.69$. This closure is consistent with dynamical mass \nestimates of large galaxies and clusters [see e.g. \n\\cite{tully}]. In standard cosmology this closure is \nsought by taking recourse to non-baryonic cold dark matter. \nThere is hardly any budget for non-baryonic CDM in linear \ncoasting cosmology.\n\n\n\n{\\bf Deuterium Production:}\n\nTo get the observed abundances of light elements besides $^4He$,\nwe recall spallation mechanisms that were explored in the \npre-1976 days \\cite{eps}. Deuterium can indeed be produced by \nthe following spallation reactions:\n$$\np + ^4He \\longrightarrow D + ^3He; ~~ 2p \\longrightarrow D + \\pi^+;\n$$\n$$\n2p \\longrightarrow 2p + \\pi^o,~ \\pi^o \\longrightarrow 2\\gamma,~\n\\gamma +^4He \\longrightarrow 2D.\n$$\nThere is no problem in producing Deuterium all the way \nto observed levels. The trouble is that under most conditions \nthere is a concomitant over-production of $Li$ nuclei and \n$\\gamma$ rays at unacceptable levels. Any later destruction of \nlithium in turn completely destroys $D$. As described in \n\\cite{eps}, Figure (3) exhibits the relative production of $^7Li$ \nand $D$ by spallation. It is apparent that the production of \nthese nuclei to observed levels and without a collateral \ngamma ray flux is possible only if the incident (cosmic ray or any\nother) beam is energized to an almost monoenergetic value of \naround 400 MeV.
A model that requires nearly monoenergetic \nparticles would be rightly considered \n\\emph{ad hoc} and would be hard to justify physically.\n\nHowever, lithium production occurs by spallation of protons over \nheavy nuclei as well as spallation of helium over helium:\n$$\np,\\alpha ~+ ~C,N,O \\longrightarrow Li~+~X;~~ \np,\\alpha ~+ ~Mg,Si,Fe \\longrightarrow Li~+~X;~~\n$$\n$$\n2\\alpha \\longrightarrow ^7Li ~+~p; ~~ \\alpha ~+~D \\longrightarrow p ~+~^6Li;\n$$\n$$\n^7Be + \\gamma \\longrightarrow p + ^6Li; ~~ ^9Be + p \\longrightarrow\n\\alpha +^6Li.\n$$\nThe absence or deficiency of heavy nuclei in a target cloud and \na deficiency of alpha particles in the incident beam would clearly \nsuppress lithium production. Such conditions could well have \nexisted in the environments of incipient Pop II stars. \n\nEssential aspects of the evolution of a collapsing cloud to form a low\nmass Pop II star are believed to be fairly well understood \n\\cite{feig,hart}. The formation\nand early evolution of such stars can be discussed in terms of\ngravitational and hydrodynamical processes. A protostar would \nemerge from the collapse of a molecular cloud core and would be \nsurrounded by high angular momentum material forming a \ncircumstellar accretion disk with bipolar outflows.\nSuch a star contracts slowly while the magnetic fields play a \nvery important role in regulating collapse of the accretion disk \nand transferring the disk orbital angular momentum to collimated \noutflows. A substantial fraction of the accreting matter is \nejected out to contribute to the interstellar medium.\n\nEmpirical studies of star forming regions over the last twenty \nyears have now provided direct and ample evidence for MeV \nparticles produced within protostellar and T Tauri systems \n\\cite{Terekhov,Torsti}. The source of such accelerated \nparticle beaming is understood to be violent magnetohydrodynamic \n(MHD) reconnection events.
These are analogous to solar magnetic\nflaring, but elevated by factors of $10^1$ to $10^6$ above levels \nseen on the contemporary sun, besides being up to some 100 times \nmore frequent. Accounting for characteristics of the meteoritic \nrecord of the solar nebula in terms of the integrated effects of particle \nirradiation from the incipient sun's flaring has assumed the status \nof an industry. Protons are the primary component of particles \nbeaming out from the sun in gradual flares, while $^4He$ are \nsuppressed by factors of ten in rapid flares to factors of a \nhundred in gradual flares \\cite{Terekhov,Torsti}. Models of the young \nsun visualize it as a much larger protostar with a cooler \nsurface temperature and with a very highly elevated level of \nmagnetic activity in comparison to the contemporary sun. It is \nreasonable to suppose that magnetic reconnection events would \nlead to abundant release of MeV nuclei and strong shocks that \npropagate into the circumstellar matter. Considerable evidence \nfor such processes in the early solar nebula has been found in \nthe meteoritic record. It would be fair to say that the \nhydrodynamical paradigms for understanding the earliest stages of \nstellar evolution are still not complete. However, it seems \nreasonable to conjecture that several features of the collapse of a \ncentral core and its subsequent growth from accreting material \nwould hold for low metallicity Pop II stars. Strong magnetic \nfields may well provide a link between a central star, its \ncircumstellar envelope and the accretion disk. Acceleration of \njets of charged particles from the surface of such stars could \nwell have suppressed levels of $^4He$. Such a suppression \ncould naturally be expected if the particles are picked up from \nan environment cool enough to suppress ionized $^4He$ in \ncomparison to ionized hydrogen.
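The degree of this suppression can be sketched by keeping only the exponential factor of the Saha equation; statistical-weight and abundance prefactors are ignored here, so this is a rough estimate rather than the full calculation:

```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant in eV/K
chi_H = 13.6     # ionization energy of hydrogen, eV
chi_He = 24.59   # first ionization energy of helium, eV

def he_to_h_suppression(T):
    # Ratio of ionized-He to ionized-H fractions, keeping only the
    # exponential (Boltzmann) factor of the Saha equation
    return np.exp(-(chi_He - chi_H) / (k_B * T))

print(he_to_h_suppression(3000.0))  # ~e^{-42}, cf. the ~e^{-40} in the text
print(he_to_h_suppression(6000.0))  # rises steeply with temperature
```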
The ionized helium to hydrogen number \nratio at a cool sunspot temperature of $\\approx 3000~K$ can be \ncalculated from the Saha ionization formula and the \nionization energies of helium and hydrogen. This turns out to be \n$\\approx~ \\exp(-40)$ and increases rapidly with temperature. Any \nelectrodynamic process that accelerates charged particles from \nsuch a cool environment would yield a beam deficient in alpha \nparticles. With the $^4He$ content suppressed in the incident \nbeam, and with the incipient cloud \nforming a Pop II star having low metallicity in the \ntarget, the ``no-go'' concern of Epstein et al.\\ \\cite{eps} \nis effectively circumvented. The ``no-go'' used \n$Y_\\alpha \/Y_p \\approx 0.07$ in both the energetic particle\nflux and the ambient medium, besides the canonical solar \nheavy element mass fraction. Incipient Pop II environments may \ntypically have their heavy element fraction suppressed by more than \nfive orders of magnitude while, as described above, magnetic \nfield acceleration could yield beams of particles deficient \nin $^4He$.\n\nOne can thus have a broad energy band, all the way from a few \nMeV up to some 500 MeV per nucleon as described in Figure (3), \nin which acceptable levels of deuterium could be ``naturally'' \nproduced. The higher energy end of the band may also\nnot be an impediment. There are several astrophysical processes \nassociated with gamma ray bursts that could produce $D$ at high \nbeam energies, with the surplus gamma ray flux a natural by-product.\n\n\n\nCircumventing the ``no-go'' concern of Epstein et al.\\ would be\nof interest for any cosmology having an early universe \nexpansion rate significantly lower than corresponding rates\nfor the same temperatures in early universe SBB.\n\n{\\bf Conclusions:}\n\nOur understanding of star formation has considerably evolved \nsince 1976.
SBBN constraints need to be reconsidered in view of \nempirical evidence from young star forming regions. These models \nclearly imply that the spallation mechanism can lead to viable and \nnatural production of Deuterium and Lithium in the incipient \nenvironment of Pop II stars. One can conceive of cosmological \nmodels in which early universe nucleosynthesis produces the \ndesired primordial levels of $^4He$ but virtually no $D$. Such a \nsituation can arise in SBBN itself with a high baryon entropy \nratio $\\eta$. In such a universe, in principle,\nDeuterium and Lithium can be synthesized up to acceptable levels \nin the environment of incipient Pop II stars.\n\n\nIn SBB, hardly any metallicity is produced in the very early \nuniverse. Metal enrichment is supposed to be facilitated by a \ngeneration of Pop III stars. Pop III star formation from \npristine material is not well understood to date, in spite of the \nconsiderable effort that has been expended to that effect recently \n\\cite{sneider}. It is believed that with metallicity below a \ncritical transition metallicity \n($Z_{cr} \\approx 10^{-4} Z_\\odot$), the masses of Pop III stars would \nbe biased towards very high values. Metal content higher than \n$Z_{cr}$ facilitates cooling and the formation of lower mass Pop II \nstars. In SBB, the route to Deuterium by spallation discussed in \nthis article would have to follow low metal contamination by a \ngeneration of Pop III stars.\n\n\nDeuterium production by spallation as discussed in this article \nwould be good news for a host of slowly evolving cosmological \nmodels \\cite{kapl,annu}. An FRW model with a linearly evolving \nscale factor enjoys concordance with constraints on the age of the \nuniverse and with the Hubble data on SNe Ia. Such a linear coasting\nis consistent with the right amount of helium\nobserved in the universe and metallicity yields close to the \nlowest observed metallicities.
The only problem that one has to \ncontend with is the significantly low yield of deuterium in such \na cosmology. In such a model, the first generation of stars would \nbe the low mass Pop II stars, and the above analysis would \nfacilitate the desired deuterium yields.\n\n\nIn SBB, large-scale production and recycling of metals through \nexploding early generation Pop III stars leads to verifiable \nobservational constraints. Such stars would be visible as \n27--29 magnitude stars appearing at any time in every square \narc-minute of the sky. Serious doubts have been expressed on the \nexistence and detection of such signals \\cite{escude}. The linear \ncoasting cosmology would do away with the requirement of such \nPop III stars altogether.\n\n\\begin{center}\n\\begin{figure}\n\\resizebox{.8\\columnwidth}{!}\n{\\includegraphics[angle=270]{fig1a.ps}}\\\\\n\\caption{Fig1(a). The figure shows the abundance of He4 as a function of temperature for $\\eta\\approx 7.8\\times 10^{-9}$. The final abundance of He4 is approximately 23\\%. It reaches this value around $T \\approx T_9$ and stays the same thereafter.}\\\\\n\\resizebox{.8\\columnwidth}{!}\n{\\includegraphics[angle=270]{fig1b.ps}}\\\\\n\\caption{Fig1(b). The figure shows metallicity as a function of temperature for $\\eta\\approx 7.8\\times 10^{-9}$. The metallicity for a linear coasting model is nearly equal to $ 10^{-5}$ times solar metallicity. }\\\\ \n\\end{figure}\n\\end{center}\n\\begin{center}\n\\begin{figure}\n\\resizebox{.8\\columnwidth}{!}\n{\\includegraphics[angle=270]{fig2a.ps}}\\\\\n\\caption{Fig2(a). The figure shows the He4 abundance as a function of $\\eta$.}\\\\\n\\resizebox{.8\\columnwidth}{!}\n{\\includegraphics[angle=270]{fig2b.ps}}\\\\\n\\caption{Fig2(b). The figure shows metallicity as a function of $\\eta$.}\\\\\n\\end{figure}\n\\end{center}\n\\eject\n\\begin{center}\n\\begin{figure}\n\\resizebox{.8\\columnwidth}{!}\n{\\includegraphics{fig3.ps}}\\\\\n\\caption{Fig3.
The rates at which abundances approach their present values as a function of the energy per nucleon of the incident particle.} \\cite{eps}\\\\\n\\end{figure}\n\\end{center}\n\\vskip 1cm\n\n\n\n\\vfil\\eject\n{\\bf Acknowledgment.} \\\\ Daksh Lohiya and Sanjay Pandey acknowledge IUCAA for support under \nthe IUCAA Associate Program. Geetanjali and Pranav acknowledge CSIR for financial support. \\\\\n\\vspace{0.5cm}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzkcua b/data_all_eng_slimpj/shuffled/split2/finalzzkcua new file mode 100644 index 0000000000000000000000000000000000000000..c83eed7642df73a712b5621e9ce45c9fbeda81d1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzkcua @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \n\nRecent observations with the Submillimetre Common-User Bolometer Array \n(SCUBA; Holland et al.\\ 1999) on the James Clerk Maxwell Telescope have \nhighlighted the presence of a number of submillimetre-luminous galaxies \n(Smail, Ivison \\& Blain 1997; Holland et al.\\ 1998; Barger et al.\\ 1998a; \nHughes et al.\\ 1998; Eales et al.\\ 1998). To date about forty \nsources have been found. These measurements confirm earlier suggestions \n(Blain \\& Longair 1993; Eales \\& Edmunds 1996, 1997) that submillimetre-wave \nobservations will provide an important probe of cosmology.\n\nThe optical counterparts of these SCUBA sources are faint, most with $I > 20$ \n(Smail et al. 1998). They are presumably not low-redshift sources: the best \nstudied is SMM\\,J02399$-$0136 (Ivison et al.\\ 1998), a dust-enshrouded AGN at \n$z=2.8$, perhaps similar to IRAS F10214+4724 (Rowan-Robinson et al.\\ 1991; \nLacy et al.\\ 1998). The recent detection of molecular gas in SMM\\,J02399$-$0136 \n(Frayer et al.\\ 1998) also suggests that these galaxies are similar.\n \nWhat is the nature of these SCUBA-selected sources?
In particular, are they \nsimilar to the ultraluminous infrared galaxies (ULIRGs) observed locally (e.g. \nSanders \\& Mirabel 1996)? They have the same submillimetre-wave flux densities \nas local ULIRGs would have if seen at high redshift (Barger et al.\\ 1998a). They also have \nsimilar optical colours (Smail et al.\\ 1998) to two of the three local ULIRGs\nstudied at ultraviolet wavelengths by Trentham, Kormendy \\& Sanders\n(1998), were those galaxies to be seen at high redshift. If this interpretation is \ncorrect, then we might hope to be able to use the wealth of observational \ninformation available for local ULIRGs to help to understand the properties of the \nSCUBA sources and their relevance to cosmology. \n\nWhether the SCUBA sources are dust-enshrouded AGNs, like Markarian 231, or \ndust-enshrouded starbursts, like Arp 220, is an important question. \nThis is difficult to determine even for local ULIRGs, because there are up to \nseveral hundred magnitudes of extinction along our lines of sight to the galaxy \ncores at optical wavelengths. For local ULIRGs, simple conclusions can be \ndrawn based on the general form of the spectral energy distributions (SEDs) \n(e.g.~Sanders et al.\\ 1988a); however, this information is not available for the \nSCUBA sources. Recently, new, more detailed methods based on mid-infrared \nspectroscopic diagnostics (Lutz et al.\\ 1996, Genzel et al.\\ 1998) have been used \nto resolve this question for the most local ULIRGs. \n \nWe construct redshift-dependent luminosity functions that are consistent with\nall the counts and backgrounds from a number of recent surveys at far-infrared \nand millimetre\/submillimetre wavelengths. We use non-evolving \nGaussian luminosity functions over specified redshift ranges. These are the \nsimplest luminosity functions that we could adopt, without the results \nbeing dependent on the properties of low-luminosity galaxies that are not \nprobed by the SCUBA observations.
Contributions from these low-luminosity \ngalaxies are important when computing backgrounds, although they contribute \ninsignificantly to the counts. The parameterization used to compare models \nwith observations reflects this fact. Imposing all the \nconstraints simultaneously does limit the possible form of the luminosity \nfunction, and we identify three plausible models.\n\nGiven the redshift-dependent luminosity functions of these models, we \ninvestigate the properties of the individual sources using observations of their \nlocal ultraluminous counterparts, and derive the cosmological implications of \nthe SCUBA galaxies under both the starburst and AGN interpretations.\n\nWe first investigate the possibility that the SCUBA sources are high-redshift \nstar-forming galaxies. The starburst models of Leitherer \\& \nHeckman (1995) are used to convert far-infrared luminosities to star-formation \nrates, a transformation that is uncertain by more than an order of magnitude.\nWe then place the SCUBA galaxies on the Madau plot, which relates the \ncosmic star formation rate to the cosmic epoch (Madau et al.\\ 1996; Madau, \nDella Valle \\& Panagia 1998), \nand predict the total density of local stars produced in such \nobjects. Most of the star formation in the SCUBA sources is observed through\nhuge amounts of internal extinction, and so will not be included in global\nstar-formation rates that are computed using the optically selected samples that \nare normally used to construct the Madau plot.
Indeed Hughes et al.\\ (1998) \nshow that deriving a star-formation rate from a rest-frame ultraviolet flux \nresults in a value that is more than an order of magnitude too low.\nThe SCUBA sources are thus not accounted for in the existing samples that are \nused to construct the Madau plot.\n\nWe then investigate the possibility that the SCUBA sources are high-redshift\ndust-enshrouded AGN, as discussed by Haehnelt, Natarajan \\& Rees (1998) \nand Almaini, Lawrence \\& Boyle (1999). The three distant \nsubmillimetre-luminous galaxies that have been studied in detail -- \nAPM\\,08279+5255 (Irwin et al. 1998; Lewis et al. 1998; Downes et al. submitted), \nSMM\\,J02399$-$0136 and IRAS F10214+4724 -- all contain powerful AGN. If all \nthe sources derive their bolometric luminosity from AGN that heat their dust \nshrouds radiatively by accretion onto a massive black hole, then we can use our \nredshift-dependent luminosity functions to\ncompute the comoving integrated mass density of these black holes.\n\nWe find plausible (although not unique) scenarios that are consistent with all \nthe observations and present their cosmological implications. Finally, we \nhighlight future work which may help to distinguish between the various \nscenarios. Throughout this paper we assume an Einstein--de Sitter world model \nwith $H_0 = 50$\\,km\\,s$^{-1}$\\,Mpc$^{-1}$.\n\n\\section{Far-infrared\/submillimetre-wave counts and backgrounds\nand the ULIRG luminosity function}\n\nThere have been many recent measurements of the far-infrared and\nsubmillimetre source counts and backgrounds at a number of wavelengths:\nsee Table 1. This has been a recent field of substantial activity \nbecause of new instrumentation, both SCUBA and {\\it ISO}. 
If we assume that\nthe SED at far-infrared and submillimeter\nwavelengths is known for all the sources,\nwe can then use all these measurements in conjunction to constrain the\nbivariate far-infrared luminosity--redshift function of these sources.\n\n\\begin{table*}\n\\caption{Far-infrared and submillimetre-wave surveys.\nLagache et al.~(1998) and Kawara et al.~(1997, 1998) obtain a different\ncalibration for the ISO 175-$\\mu$m counts. A discussion of calibration is\ngiven in Lagache et al.~(1998). The detection limit in the 850-$\\mu$m \nsurvey by Hughes et al. (1998) is about 2\\,mJy: the count at 1\\,mJy is \nderived from a source confusion analysis. See Blain et al.~(1999b) and \nSmail et al. (1999) for a \ndirect sub-mJy 850-$\\mu$m count. For complementary \nfar-infrared\/submillimetre-wave background measurements the reader \nis referred to Puget et al.~(1996), Guiderdoni et al.~(1997), Dwek et al.~(1998) \nand Fixsen et al.~(1998).}\n{\\vskip 0.75mm}\n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip} \nWavelength & Counts \/ & Flux Limit \/& Background \/ & \nTelescope\/Instrument & Reference &\\cr\n & deg$^{-2}$ & mJy & nW m$^{-2}$ sr$^{-1}$ &\n & &\\cr \n\\noalign{\\smallskip \\hrule \\smallskip} \n\\cr \n2.8\\,mm & $<162$ & 3.55 & $-$ & BIMA & Wilner \\& Wright (1997) &\\cr \n\\noalign{\\smallskip}\n850\\,$\\mu$m & $2500\\pm1400$ & 4 & $-$ & JCMT\/SCUBA & \nSmail et al.~(1997)&\\cr\n\t & $1100\\pm600$ & 8 & $-$ & & Holland et al.~(1998)&\\cr \n\t & $800^{+1100}_{-500}$ & 3 & $-$ & & Barger et al.~(1998a) & \\cr\n & $7000\\pm3000$ & 1 & $-$ & & Hughes et al.~(1998) & \\cr \n\t & $1800\\pm600$ & 2.8 & $-$ & & Eales et al.~(1998) &\\cr \n\\noalign{\\smallskip}\n850\\,$\\mu$m & $-$ & $-$ & $0.55 \\pm 0.15$ & {\\it COBE}\/FIRAS & \nFixsen et al.~(1998)\n&\\cr\n\\noalign{\\smallskip}\n450\\,$\\mu$m & $< 1000$ & 80 & $-$ & JCMT\/SCUBA & Smail et al.~(1997) &\\cr\n\t & $<360$ & 75 & $-$ & & Barger et al.~(1998a) 
&\\cr \n\\noalign{\\smallskip}\n240\\,$\\mu$m & $-$ & $-$ & $17 \\pm 4$ & {\\it COBE}\/DIRBE + {\\it IRAS}\/ISSA & \nSchlegel et al.~(1998) &\\cr \n & $-$ & $-$ & $14 \\pm 4$ & & Hauser et al.~(1998) &\\cr \n\\noalign{\\smallskip}\n175 $\\mu$m & $41 \\pm 6$ & 150 & $-$ & {\\it ISO}\/ISOPHOT & Kawara et\nal.~(1998) \n&\\cr\n\t & $98 \\pm 15$ & 100 & $-$ & \t & \nLagache et al.~(1998) &\\cr\n\\noalign{\\smallskip}\n140 $\\mu$m & $-$ & $-$ & $32 \\pm 13$ & {\\it COBE}\/DIRBE + {\\it IRAS}\/ISSA & \nSchlegel et al.~(1998) &\\cr\n\t & $-$ & $-$ & $24 \\pm 12$ & & Hauser et al. (1998) &\\cr\n\\noalign{\\smallskip \\hrule} \n\\noalign{\\smallskip}\\cr}}$$}\n\\end{table*} \n\nWe assume that the galaxies have the thermal SEDs of \nwarm dust that is heated by an enshrouded starburst or AGN, and adopt a \nsimple analytic form for the luminosity function between a minimum and \nmaximum redshift. All the observational constraints on the model are imposed \nthrough integral functions, and so do not require unique solutions. \nNevertheless, it is interesting to see which general classes of \nluminosity--redshift distributions are ruled out and why.\n\nTo ensure consistent definitions are used when matching models to\nobservations, first we present all the details of our computations. Much of this \nwill be familiar to many readers, but different authors define parameters in slightly \ndifferent ways. Secondly, we point out some generic features of the comparison \nbetween our models and observations, in particular the results of requiring the \nmodels to be consistent with source counts at one wavelength and with the \ninfrared background at another simultaneously. 
Finally, we isolate some \nplausible luminosity functions that are consistent with all the observations for \nfurther investigation.\n\nWe define a bivariate 60-$\\mu$m luminosity-redshift function \n$\\phi_z (L_{\\rm 60})$, with units Mpc$^{-3}$\\,${\\rm L}_{\\odot}^{-1}$, such that \n$\\phi_z (L_{\\rm 60}) \\, {\\rm d} L_{\\rm 60}\\, {\\rm d}z$ is the total number density of \ngalaxies with 60-$\\mu$m luminosity between $L_{\\rm 60}$ and \n$L_{\\rm 60} + {\\rm d} L_{\\rm 60}$ with redshifts between $z$ and $z + {\\rm d}z$. \nSome familiar analytic examples of this function are a \nGaussian in $\\log_{10} L_{\\rm 60}$ with no luminosity or density evolution, \n\\begin{equation} \n\\phi_z (L_{\\rm 60}) = C (1+z)^3 \\exp \\left[ - {1\\over{2 \\sigma^2}} \n\\log_{10}^2 \\left( {{L_{\\rm 60}} \\over {L_{\\rm 60}^*}}\\right) \\right] \n{1\\over{L_{\\rm 60} \\, \\rm{ln} 10}},\n\\end{equation} \nor a Saunders et al.\\ (1990) function, which is a power-law in $L_{\\rm 60}$\nwith index $\\alpha$ at the faint end and a Gaussian in $\\log_{10} L_{\\rm 60}$ at \nthe bright end, and allows for density and luminosity\nevolution,\n\\begin{eqnarray} \n\\lefteqn{\\nonumber \n\\phi_z (L_{\\rm 60}) = C(z)\\, (1+z)^{3} \n\\left( {{L_{\\rm 60}} \\over {L_{\\rm 60}^* (z)}} \\right)^{1 - \\alpha} \\times } \\\\ \n& & \\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\n \\exp \\left[ - {1\\over{2 \\sigma^2}}\n\\log_{10}^2 \\left( 1 + {{L_{\\rm 60}} \\over {L_{\\rm 60}^* (z)}}\\right) \\right]\n{1\\over{L_{\\rm 60} \\, \\rm{ln} 10}}.\n\\end{eqnarray} \nImplicit in both equations (1) and (2) above \nare a normalization constant $C$ (units Mpc$^{-3}$, which depends on\n$z$ if there is density evolution), a characteristic\nluminosity $L_{\\rm 60}^{*}$ (which depends on $z$ if there is\nluminosity evolution), and a Gaussian width $\\sigma$.\n\nOnce we have specified $\\phi_z (L_{\\rm 60})$ for some\npopulation of galaxies, we can compute their contribution to the \ncosmic infrared background at some 
frequency $\\nu$:\n\\begin{equation} \n\\nu {\\rm I}_{\\nu} = {{1}\\over{4 \\pi}} \\int_{0}^{\\infty} \\int_{0}^{\\infty}\n{ {l_{\\nu} (z,L_{\\rm 60})}\\over{4 \\pi d_{\\rm L} (z)^2}} \\,\\, \\phi_z (L_{\\rm 60})\n\\, \\, {\\rm d} L_{\\rm 60} \\, \\, { {{\\rm d} V}\\over{{\\rm d} z}} \\, \\, {\\rm d} z \n,\n\\end{equation}\nin units of W\\,m$^{-2}$\\,sr$^{-1}$, where, \n\\begin{equation} \nl_{\\nu} (z,L_{\\rm 60}) = \\left[ { {\\nu (1+z)} \\over {\\nu_{60}}}\\right]^4\n\\!\\! L_{\\rm 60} \n{ {k_{\\nu (1+z)}}\\over{k_{\\nu_{60}}}} \\,\n{ {\\exp \\left({{h \\nu_{60}}\\over{kT}} \\right) - 1 }\\over\n{\\exp \\left[{{h \\nu(1+z)}\\over{k T}} \\right] - 1 } }, \n\\end{equation}\nand $\\nu_{60}$ \nis the frequency corresponding to a wavelength of 60\\,$\\mu$m. \n$k_{{\\nu}}$ is the\nemissivity function of dust. We follow Hughes (1996) and \nBlain et al.\\ (1999a) in \nassuming a power law $k_{\\nu} \\sim {\\nu^{1.5}}$, and so the dust emission \nspectrum at long wavelengths is a Raleigh--Jeans power-law with spectral \nindex 3.5. 
We can also compute the number density of sources with flux density \nabove some threshold $S_{\\rm lim}$, which is measured in units of\nW\\,m$^{-2}$\\,Hz$^{-1}$ (or Jy), \n\\begin{equation} \nn(S_{\\rm lim}) \n= {{1}\\over{4 \\pi}} \\int_{0}^{\\infty} \\int_{L_{\\rm lim} \n[S_{\\rm lim}]}^{\\infty} \\> \n\\phi_z (L_{\\rm 60})\n\\, \\, {\\rm d} L_{\\rm 60} \\, \\, { {{\\rm d} V}\\over{{\\rm d} z}} \\, \\, {\\rm d} z\n,\n\\end{equation}\nin units of sr$^{-1}$ (or deg$^{-2}$), where, \n\\begin{eqnarray} \n\\lefteqn{\\nonumber \nL_{\\rm lim} [S_{\\rm lim}] = \n4 \\pi d_{\\rm L} (z)^2 \\, \\, S_{\\rm lim} \\, \\nu_{60} \n\\left[ { {\\nu_{60}} \\over {\\nu (1+z)} } \\right]^{3} \\times } \\\\\n& & \\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\\>\n\\>\\>\\>\\>\\>\\>\n\\, { {k_{\\nu_{60}}} \\over {k_{\\nu(1+z)}} } \\,\n{ {\\exp \\left[{{h \\nu (1+z)}\\over{k T}} \\right] - 1} \\over\n {\\exp \\left({{h \\nu_{60}}\\over{kT}} \\right) - 1 }}\n.\n\\end{eqnarray} \nThe quantities,\n\\begin{equation} \nd_{\\rm L} (z) = \n{ {2 c}\\over{H_0}} \\left( 1 - {{1}\\over{\\sqrt{1+z}}} \\right) (1+z), \n\\end{equation} \nand,\n\\begin{equation} \n{ {{\\rm d} V}\\over{{\\rm d} z}} =\n16 \\pi \\, \\left({ { c}\\over{H_0}} \\right)^3\n\\, (1+z)^{-{{3}\\over {2}}} \\, \n\\left( 1 - {{1}\\over{\\sqrt{1+z}}} \\right)^2,\n\\end{equation} \nare the luminosity distance and the\nredshift-derivative of the volume element respectively (assuming\n$\\Omega_0 = 1$).\n\nA few features of the comparison between observation and theory\nare immediately apparent. Source counts constrain the integral of \n$\\phi_z(L_{60})$ over redshift and luminosity above some redshift-dependent \nlimit. However, measurements of the background light\nconstrain the integral of the product of luminosity $L_{60}$ and\n$\\phi_z(L_{60})$ over redshift and all luminosities. 
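Equations (7) and (8) are simple enough to evaluate directly; a short sketch for the assumed Einstein--de Sitter model with $H_0 = 50$\,km\,s$^{-1}$\,Mpc$^{-1}$:

```python
import numpy as np

c = 299792.458        # speed of light, km/s
H0 = 50.0             # Hubble constant assumed in the text, km/s/Mpc
d_H = c / H0          # Hubble distance, Mpc

def d_L(z):
    # Luminosity distance for Omega_0 = 1 (equation 7), in Mpc
    return 2.0 * d_H * (1.0 - 1.0 / np.sqrt(1.0 + z)) * (1.0 + z)

def dV_dz(z):
    # Redshift derivative of the volume element (equation 8), in Mpc^3
    return 16.0 * np.pi * d_H**3 * (1.0 + z)**-1.5 \
        * (1.0 - 1.0 / np.sqrt(1.0 + z))**2

print(d_L(1.0), d_L(3.0))  # ~7.0e3 Mpc and ~2.4e4 Mpc
```

At low redshift $d_L$ reduces to the familiar Hubble law $cz/H_0$, which provides a quick sanity check on the expression.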
For a Saunders luminosity \nfunction with pure density evolution, we plot in Fig.\\,1, as a function of the \ncharacteristic luminosity $L_{*} \\equiv L_{60}^{*}$, the ratio of the \nnormalizations required to reproduce the observed 240-$\\mu$m background \nand 850-$\\mu$m source count listed in Table\\,1. \nThe minimum is produced by the different constraints on the integral of \n$\\phi_z(L_{60})$ imposed by the source counts and the background \nmeasurement. \n\nVery low luminosity galaxies make a negligible contribution to the source counts, \nbut a substantial one to the infrared background if $L_*$ is low in the \nSaunders function. Hence, a low normalization is required to explain the \nbackground data, but a high one is required to explain the source count data,\nand so the ratio plotted in Fig.\\,1 is very high at low $L_*$. Increasing $L_*$\nincreases the fraction of high-luminosity galaxies, and so lowers the ratio of the\nnormalizations required in the figure. At very high $L_*$, the source counts are \nproduced by a very small number of extremely luminous sources, whose \nluminosities alone would make the background extremely high. Hence, \nthe ratio of the normalizations begins to increase again, producing the minimum\nin Fig.\\,1. If the flux threshold (lower limit) in the luminosity integral in\nequation (5) is reduced, then the turn-up of the ratio at high luminosities is less \nmarked, and disappears as the lower limit tends to zero.\n\nIt is intriguing that this minimum in the ratio occurs at a very high characteristic \nluminosity $L_{*} = 10^{12}$\\,L$_\\odot$. Locally, $L_{*} \\sim 10^{9}$\\,L$_\\odot$, \nand so if the luminosity function of the SCUBA sources has a Saunders form, \nhuge amounts of luminosity evolution are required to match the data.
This was \nessentially one of the main conclusions of Blain et al.\\ (1999a).\n\n\\begin{figure} \n\\begin{center}\n\\vskip-2mm\n\\epsfig{file=fig1.ps, width=8.65cm}\n\\end{center}\n\\vskip-4mm\n\\caption{ \nThe normalization obtained by fitting a Saunders function \n(equation 2) with density evolution parameterized by $\\gamma$ to the \nCOBE 240-$\\mu$m background (Schlegel et al.\\ 1998), relative to the \nnormalization obtained by fitting a similar function to the SCUBA \n850-$\\mu$m counts of Smail et al.\\ (1997). A value of $\\sigma = 0.724$, \nderived by Saunders et al.\\ (1990), is assumed. A dust emissivity index \nof 1.5 is assumed when converting luminosities measured at different \nwavelengths to restframe 60-$\\mu$m luminosities. The four line styles \nrepresent the following parameter values: solid ($T=70$\\,K, $\\gamma = 0$),\nshort-dashed ($T=40$\\,K, $\\gamma = 0$), long-dashed ($T=70$\\,K, \n$\\gamma = 6$), and dot-dashed ($T=40$\\,K, $\\gamma = 6$). Locally, \n$L^{*}_{60} = 1.1 \\times 10^9$\\,L$_{\\odot}$.\n}\n\\end{figure} \n\n\\begin{figure*}\n\\begin{minipage}{170mm}\n{\\vskip-3.5cm} \n\\begin{center}\n\\epsfig{file=fig2.ps, width=18.65cm}\n\\end{center}\n{\\vskip-7.2cm} \n\\caption{\nCombinations of the normalization constant $C$ and the width $\\sigma$ for \nGaussian luminosity functions with no density evolution that are consistent \nwith the 850-$\\mu$m counts. The units of $C$ are Mpc$^{-3}$. All galaxies \nare assumed to have a temperature $T = 70$\\,K. The comoving galaxy \ndensity $C$ is assumed to be constant between redshifts $z_1 = 3$ and \n$z_2 = 5$ and zero outside this range (that is, the upper and lower limits to \nthe redshift integral in equation (5) are $z_1$ and $z_2$). The solid lines show \nfits to the mean value quoted by Smail et al.\\ (1997). \nThe dot-dashed lines are fitted to the +1$\\sigma$ values. The dashed \nlines are fitted to the $-1\\sigma$ values: see Table 1.
The four panels are \nplotted for different values of $L_{*}$.\n}\n\\end{minipage}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{minipage}{170mm}\n{\\vskip-3.5cm}\n\\begin{center}\n\\epsfig{file=fig3.ps, width=18.65cm}\n\\end{center}\n{\\vskip-7.2cm}\n\\caption{As Figure 2, but for $z_1=2$ and $z_2=4$.}\n\\end{minipage}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{minipage}{170mm}\n{\\vskip-3.5cm}\n\\begin{center}\n\\epsfig{file=fig4.ps, width=18.65cm}\n\\end{center}\n{\\vskip-7.2cm}\n\\caption{As Figure 2, but for $z_1=1$ and $z_2=3$.}\n\\end{minipage}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{minipage}{170mm}\n{\\vskip-3.5cm}\n\\begin{center}\n\\epsfig{file=fig5.ps, width=18.65cm}\n\\end{center}\n{\\vskip-7.2cm}\n\\caption{As Figure 2, but for $T=40$\\,K.}\n\\end{minipage}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{minipage}{170mm}\n{\\vskip-3.5cm}\n\\begin{center}\n\\epsfig{file=fig6.ps, width=18.65cm}\n\\end{center}\n{\\vskip-7.2cm}\n\\caption{As Figure 3, but for $T=40$\\,K.}\n\\end{minipage}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{minipage}{170mm}\n{\\vskip-3.5cm}\n\\begin{center}\n\\epsfig{file=fig7.ps, width=18.65cm}\n\\end{center}\n{\\vskip-7.2cm}\n\\caption{As Figure 4, but for $T=40$\\,K.} \n\\end{minipage}\n\\end{figure*} \n\nHigher temperature sources systematically produce greater background fluxes\nand counts at shorter wavelengths, because their dust spectra peak at shorter \nwavelengths. Hence, lower normalizations are required in Fig.\\,1 in order to \nexplain the data for 40-K sources as compared with 70-K sources. The 40-K \ncurve is consistent with a ratio of one over a range of luminosities, suggesting\nthat if a Saunders luminosity function describes the SCUBA sources, then they \nmust have temperatures of about 40\\,K.
This was another conclusion of \nBlain et al.\\ (1999a): see their Figure 4 -- in their favoured scenarios most of the \nSCUBA sources are at $z>2$ and have luminosities \n$\\sim 10^{12} {\\rm L}_{\\odot}$ and dust temperatures $\\sim 40$\\,K. \n\nFinally, note that the results in Fig.\\,1 are only weakly dependent on the\nform of density evolution parameterized by $\\gamma$.\n\n\\subsection{Comparison with the 850-$\\mu$m source counts}\n\nWe now compute the normalizations $C$ required to fit the 850-$\\mu$m source \ncount data for a number of Gaussian luminosity functions (equation 1) with no \ndensity or luminosity evolution. $C$ is constant between redshifts \n$z_1$ and $z_2$ and is zero elsewhere. This is the simplest possible form of the \nluminosity function; at present the observations do not justify a more rigorous \ntreatment. This parametrization is motivated in part by the\nfact that the local 60-$\\mu$m luminosity function (Saunders et al.\\ 1990) is \napproximately Gaussian at the bright end. The local elliptical galaxy luminosity \nfunction (Binggeli, Sandage \\& Tammann 1988, Ferguson \\& Sandage 1991) is \nalso Gaussian, which is relevant if the SCUBA sources evolve into elliptical \ngalaxies: see Section 3. One important feature of the Gaussian function is that it \nis only valid for high-luminosity galaxies. Large numbers of galaxies with low \nfar-infrared luminosities will also exist between $z_1$ and $z_2$. Although these \nwill not contribute to the source counts, they may contribute significantly to the \ninfrared backgrounds if they are very numerous. Hence, when we match our \nluminosity functions to counts (equation 5), we require the model prediction to\nequal the observed values, but when we match our luminosity functions\nto background fluxes (equation 3), we require the model prediction to be less \nthan or equal to the observed values. Our approach is fundamentally different \nfrom that of Blain et al.\\ (1999a). 
We are not evolving the local 60-$\\mu$m \nluminosity function to high redshift. Indeed, in Section 3, we argue that the\nsystems which have the highest far-infrared luminosities for redshifts \n$z_1 < z < z_2$ probably evolve into elliptical galaxies, \nwhich are not the most \nluminous systems at far-infrared wavelengths in the local Universe. \n\nIn Figs\\,2 to 7 we present the required normalization constants for\na Gaussian luminosity function to fit the 850-$\\mu$m data of Smail\net al.\\ (1997) for various values of $\\sigma$ and $L_{*}$ given\na constant dust temperature $T$. The uncertainties are large (see Table\\,1), \nand so the range of acceptable normalizations is large, with a peak-to-peak \nspread of about a factor of 4. Given these large uncertainties, all the \n850-$\\mu$m counts by different authors listed in\nTable 1 are fully consistent.\nSome general \nfeatures of Figs\\,2 to 7 are worth highlighting:\n\\begin{enumerate} \n\\item $\\sigma = 0$ (a $\\delta$-function) never appears to fit the data well; \n\\item lower normalizations are required for higher characteristic luminosities;\n\\item the absolute values of the normalizations depend primarily on where the \ndust spectrum peak is shifted in wavelength space. It is shifted closer to\n850\\,$\\mu$m for a source at higher redshift at either 40 or 70\\,K, and so a lower \nnormalization is required at higher $z$;\n\\item lower temperatures require lower normalizations for a given 60-$\\mu$m \nluminosity, since 850\\,$\\mu$m is closer to the blackbody peak. \n\\end{enumerate} \n\n\\subsection{Comparison with counts at other wavelengths}\n\nHaving isolated various models that are consistent with the 850-$\\mu$m \ncounts, we now consider the constraints from measurements at other\nwavelengths. We select a number of models with various values of \n$\\sigma$, $L_{*}$, $C$, and $T$, each represented by a point \non the curves in Figs\\,2 to 7, and thus consistent with the 850-$\\mu$m count. 
\nFor each model, we consider three separate scenarios: scenario ``0'', in which \nthe 850-$\\mu$m source counts are the mean Smail et al.\\ (1997) values;\nscenario ``+'', in which they are 1$\\sigma$ larger; and scenario ``$-$'', in which \nthey are 1$\\sigma$ smaller. The counts in the ``$-$'' scenario are very close to \nthe lowest published counts (Barger et al.\\ 1998a). In all, a total of 45 models, \neach of which generates three scenarios, are considered.\n\nIn Table 2, we show how well the models defined by these parameters account \nfor the measurements at other wavelengths. The numbers in that table \nrepresent the model prediction relative to the observed counts or background \nfluxes listed in Table 1, assuming scenario ``0''. The fractional uncertainties in \nthe numbers in Table 2 are given in Table 3. \n\nOther measurements exist in the literature that we do not use while\nmaking this comparison. For example, measurements of 15-$\\mu$m source \ncounts with {\\it ISO} have recently been made (Rowan-Robinson \net al.\\ 1997, Aussel et al.~1998). \nHowever, these measurements probe far down the Wien tail of \nthe dust spectrum, and are of very limited use in constraining the high-redshift \nluminosity function. Furthermore we would need to account for \ncontamination in the 15-$\\mu$m samples by nearby (unobscured) galaxies and \nAGN. Similarly the IRAS 60-$\\mu$m source counts (Saunders et al.\\ 1990)\nare dominated by nearby ($z < 0.1$) sources. Measurements at 1.25\\,mm offer a \npromising probe in the future, but to date measurements have only been made \nfor IRAS-selected galaxies (Franceschini, Andreani \\& Danese 1998). 
Unbiased \nsamples are not yet available at this wavelength.\n\nSome of the more notable general trends shown in Table 2 are:\n\\begin{enumerate}\n\\item models in which the sources are at higher redshift tend to\nproduce higher source counts or backgrounds at longer \nwavelengths, because the peak of the dust spectrum is shifted\nto longer wavelengths;\n\\item models in which the sources are at higher redshift tend to\nproduce higher backgrounds for a given source count\n{\\it at the same wavelength}. The counts probe only part of the total range of \ngalaxy luminosities, and there are a larger number\nof low-luminosity sources that contribute to the backgrounds but\nnot to the counts;\n\\item on decreasing the temperature from 70 to 40\\,K, the predicted 2.8-mm \ncounts increase, but the predicted 175-$\\mu$m counts and 140- and \n240-$\\mu$m backgrounds decrease, because the peak in the dust emission \nspectrum shifts to longer wavelengths;\n\\item changing the temperature has a substantial effect on the relative value \nof the source counts and backgrounds at the same wavelength. Decreasing the \ntemperature has a greater relative effect on the counts, because the lower limit\nin the integral in equation 5 is decreased: see Fig.\\,8. \n\\end{enumerate} \n\nSeven of the models that we investigate are consistent with all the\nobservations: see the last column of Table 2 for details. These all have a high \ncharacteristic luminosity greater than $10^{11} {\\rm L}_{\\odot}$.
\nUsing a different parametrization, Blain et al.\\ (1999a) also found that most of the\nSCUBA sources are distant galaxies with similarly high luminosities.\nThese seven models are:\n12\/40\/3\/5\/0.9``0'' (hereafter Model A);\n13\/40\/3\/5\/0.9``$-$'' (hereafter Model B);\n11\/40\/2\/4\/0.9``$-$'' (hereafter Model C);\n12\/40\/2\/4\/0.9``$-$'' (hereafter Model D);\n13\/40\/2\/4\/0.5``0'' (hereafter Model E);\n13\/40\/2\/4\/0.5``$-$'' (hereafter Model F);\n13\/40\/1\/3\/0.1``$-$'' (hereafter Model G).\nIn general, the tables suggest that models with $T = 70$\\,K overproduce the \n175-$\\mu$m counts and the 140- and 240-$\\mu$m backgrounds, given \nthe normalization from the 850-$\\mu$m source counts. Models in which the \ngalaxies are nearby, that is where $z_1 = 1$, tend to have similar problems. \nConversely, models with very low temperatures $T < 40$\\,K would overproduce\nthe 2.8-mm counts, but such low-temperature sources are ruled out by the\nconsistency arguments of Blain et al.~(1999a --- see Section 4 of that paper).\n\n\\subsection{Comparison with the FIRAS background}\n\nFixsen et al.\\ (1998) derived the cosmic submillimetre background from \n{\\it COBE} FIRAS data. Although the background at these wavelengths is \ndominated by emission from the Galaxy and the cosmic microwave background, \nthey used three independent techniques to subtract out these signals. The \nresulting residual 850-$\\mu$m extragalactic background radiation intensity is \n$\\nu I_\\nu = 0.55 \\pm 0.15$\\,nW\\,m$^{-2}$\\,sr$^{-1}$. \n\nWe can predict 850-$\\mu$m backgrounds in our models using\nequation (3). The results are presented in Table 4. As in the previous\nsection, we require that our models do not overproduce the background. Model \nG achieves this convincingly, and models B and F are consistent within \n2$\\sigma$. \nWe select these three models for further\nstudy.
In general, models with $z_1 = 3$ overpredict the 850-$\\mu$m \nbackground if the 240-$\\mu$m background is predicted correctly, because the \nobserved spectra at a temperature of 40\\,K are shifted to too long a \nwavelength. The surviving models have $z_1 = 2$ (B and F) or $z_1 = 1$ (G).\nAll these models also have $L_* = 10^{13}$\\,L$_{\\odot}$, meaning\nthat most of the far-infrared luminosity from this population comes\nfrom galaxies with 60-$\\mu$m luminosities in excess of 10$^{12}$\\,L$_{\\odot}$. \nAs discussed in Section\\,2, models with lower characteristic luminosities tend \nto overpredict the 850-$\\mu$m background if they match the 850-$\\mu$m \ncounts. \n\nNote that in the three surviving models (B, F and G) the normalization of the \n850-$\\mu$m counts is consistent with all the observations. In the best-fitting \nmodel (G), the mean redshift of the sources is $z_{\\rm s} \\sim 2$, as argued by Lilly \net al.\\ (1998). One half (F, G) or more (B) of the 240-$\\mu$m background, but \nconsiderably less of the 140-$\\mu$m background, comes from the \nhigh-luminosity galaxies described by our Gaussian luminosity function. In \nmodels B and F, about half of the 175-$\\mu$m sources are the same sources \nthat contribute to the 850-$\\mu$m source counts; in model G, the two \npopulations are practically the same. In Model B, the predicted 2.8-mm \ncount is close to the observed limit (Wilner \\& Wright 1997); for the other models\nthe predicted count is much smaller. The 450-$\\mu$m source count limit\n(Barger et al.\\ 1998a) is larger than the predicted values by a factor of about 3 \nfor Model B, and by a much larger factor for the other models.\n\nThree models appear to fit all the data. Even within our simple parametrization \nwe do not find a unique best-fitting model. Furthermore, there are presumably \nmany other (non-Gaussian) models that fit all the data, for example the \nmodels of Blain et al.\\ (1999a).
Nevertheless, most of our models seem to be ruled \nout when we consider all the observations in conjunction. That a relatively \nnarrow region of parameter space is consistent with observation \n($L_{*} \\sim 10^{13}$\\,L$_{\\odot}$, $T \\simeq 40$\\,K, $z_{\\rm s} \\sim 2$) is \nencouraging and suggests that it will be productive to investigate these models \nin further detail. \n\n\\section{Properties of the SCUBA sources} \n\nWe have isolated three models of the redshift-dependent luminosity\nfunction of ULIRGs that are consistent with observation, and now\ninvestigate the cosmological implications. \nIn Section 3.1 we investigate the possibility that these are dusty star-forming\ngalaxies. We use the star-formation models of Leitherer \\& Heckman (1995) to\nplace the galaxies on the Madau plot, and then examine the consequences\nof this interpretation in the context of the wider galaxy formation\npicture. In Section 3.2 we investigate the alternative possibility that the \nsources are dust-enshrouded AGNs. \n\n\\subsection{The SCUBA sources as star-forming galaxies}\n\nThe majority of local ULIRGs appear to be star-forming galaxies and not \ndust-enshrouded AGN (Sanders et al.\\ 1988a, Genzel et al.\\ 1988). Furthermore, \nthe temperatures of the SCUBA sources inferred in Section 2 ($T \\sim 40$\\,K) \nare close to, although systematically slightly cooler than, those of local \nstar-forming ULIRGs. Locally, dust-enshrouded AGN like Mrk 231 tend to have \nhigher dust temperatures than dust-enshrouded starbursts like Arp 220 \n(Sanders et al.\\ 1988a); however, note that some dusty high-redshift quasars \nappear to be fairly cold: see Benford et al.\\ (1998).
Hence, it seems \nreasonable to investigate the possibility that the SCUBA sources are\nhigh-redshift star-forming galaxies.\n\n\\subsubsection{Leitherer-Heckman models}\n\nIn order to compute the cosmological star-formation rate associated with the \nSCUBA sources at high redshift, we need a recipe to convert far-infrared \nluminosities to star-formation rates for individual sources.\n\nThe transformation between these two quantities is not straightforward to \ndetermine observationally even for local ULIRGs because of the high internal \nextinction along our lines of sight to the galaxy centres. For example, \neven the near-infrared 2.2-$\\mu$m Br$\\gamma$ line strength--far-infrared\nluminosity correlation (Goldader et al.\\ 1997a,b), which is usually very \nreliable, breaks down in ULIRGs at very high luminosities, presumably due to \ninternal extinction. We therefore need to compute this transformation using \nmodels (Leitherer \\& Heckman 1995).\n\n\\begin{table*} \n\\caption{Comparison between observations and models. All the numbers \nrepresent the predicted source counts or background flux for\nmodel ``0'' that are appropriate to the luminosity function defined by\nthe parameters $L_{*}$, $T$, $\\sigma$, $z_1$, and $z_2$, relative to the \nobserved counts and background intensities listed in Table\\,1.\nThe model parameters are listed in condensed form in the model name, which\ntakes the general form log$_{10}$($L^*_{60}$)\/$T$\/$z_1$\/$z_2$\/$\\sigma$. The \nfigure panel in which the constraints for each model are imposed is \nlisted in the second column. \nA value less than $10^{-10}$ is listed as $\\sim 0$ in the Table.\nThe errors in these quantities are listed in Table\\,3. \nAll the models have been selected {\\it a priori} to give the correct \n850-$\\mu$m counts. The comments refer to the selection or rejection of \nmodels based on the listed values. The range of acceptable values and a \ndescription of the numerical code values are given in Table\\,3.
The \n140-$\\mu$m and 240-$\\mu$m background values of \nSchlegel et al.\\ (1998), the 175-$\\mu$m counts of Lagache\net al.~(1998), and the 450-$\\mu$m counts of Smail et al.~(1997)\nare fitted. \n} \n{\\vskip -2.75mm}\n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip}\nModel & Figure \n & 2.8-mm & 450-$\\mu$m & 175-$\\mu$m\n & 240-$\\mu$m & 140-$\\mu$m & comments &\\cr\n & & counts & counts & counts & background & background & (See\nTable\\,3)&\\cr\n\\noalign{\\smallskip \\hrule \\smallskip}\n\\cr\n\n10\/70\/3\/5\/0.5 & 2a & $\\sim 0$\n & $7.2 \\times 10^{-5}$ \n & $0.18$ \n & $1.5 \\times 10^{10}$ & $7.5 \\times 10^9$ & (4$^{-}$56) &\\cr \n10\/70\/3\/5\/0.9 & 2a & $0.0019$\n & $0.082$\n & $2.5$ \n & $110$ & $55$ & (3$^{0+}$56) &\\cr\n11\/70\/3\/5\/0.5 & 2b & $3.2 \\times 10^{-9}$\n & $8.1 \\times 10^{-4}$\n & $0.40$ \n & $2.3 \\times 10^{5}$ & $1.1 \\times 10^{5}$ & (56) &\\cr\n11\/70\/3\/5\/0.9 & 2b & $0.011$\n & $0.17$\n & $3.9$ \n & $15$ & $7.4$ & (356) &\\cr\n12\/70\/3\/5\/0.5 & 2c & $1.3 \\times 10^{-6}$\n & $0.0090$ \n & $1.0$ \n & $178$ & $87$ & (3$^{+}$56) &\\cr\n12\/70\/3\/5\/0.9 & 2c & $0.067$\n & $0.36$\n & $6.0$ \n & $6.3$ & $3.1$ & (356$^{0+}$) &\\cr\n13\/70\/3\/5\/0.1 & 2d & $\\sim 0$\n & $\\sim 0$\n & $3.2 \\times 10^{-6}$ \n & $2.9 \\times 10^{13}$ & $1.9 \\times 10^{13}$ & (456) &\\cr\n13\/70\/3\/5\/0.5 & 2d & $4.4 \\times 10^{-4}$\n & $0.094$\n & $3.0$ \n & $5.8$ & $2.8$ & (356$^{0+}$) &\\cr\n13\/70\/3\/5\/0.9 & 2d & $0.35$\n & $0.71$\n & $9.6$ \n & $7.2$ & $3.5$ & (2$^{+}$356) &\\cr\n10\/70\/2\/4\/0.5 & 3a & $\\sim 0$\n & $2.3 \\times 10^{-4}$\n & $32$ \n & $1.9 \\times 10^{10}$ & $1.6 \\times 10^{10}$ & (356) &\\cr\n10\/70\/2\/4\/0.9 & 3a & $0.0014$\n & $0.12$\n & $14$ \n & $150$ & $120$ & (356) &\\cr\n11\/70\/2\/4\/0.5 & 3b & $1.5 \\times 10^{-9}$\n & $0.0021$\n & $20$ \n & $3.0 \\times 10^{5}$ & $2.5 \\times 10^{5}$ & (356) &\\cr\n11\/70\/2\/4\/0.9 & 3b & $0.0090$\n & $0.23$\n & $14$ \n & $20$ & 
$17$ & (356) &\\cr\n12\/70\/2\/4\/0.5 & 3c & $7.2 \\times 10^{-7}$\n & $0.017$\n & $14$ \n & $230$ & $190$ & (356) &\\cr\n12\/70\/2\/4\/0.9 & 3c & $0.055$ \n & $0.45$\n & $16$ \n & $8.3$ & $7.0$ & (356) &\\cr\n13\/70\/2\/4\/0.1 & 3d & $\\sim 0$\n & $\\sim 0$\n & $5.6 \\times 10^{5}$ \n & $2.4 \\times 10^{13}$ & $2.1 \\times 10^{13}$ & (356) &\\cr\n13\/70\/2\/4\/0.5 & 3d & $3.0 \\times 10^{-4}$\n & $0.15$\n & $14$ \n & $7.5$ & $6.4$ & (356) &\\cr\n13\/70\/2\/4\/0.9 & 3d & $0.3$\n & $0.82$\n & $18$ \n & $9.5$ & $8.0$ & (356) &\\cr\n10\/70\/1\/3\/0.5 & 4a & $ \\sim 0$\n & $8.8 \\times 10^{-4}$\n & $3600$ \n & $1.7 \\times 10^{10}$ & $2.4 \\times 10^{10}$ & (356) &\\cr\n10\/70\/1\/3\/0.9 & 4a & $0.0011$\n & $0.17$\n & $60$ \n & $170$ & $230$ & (356) &\\cr\n11\/70\/1\/3\/0.5 & 4b & $7.6 \\times 10^{-10}$\n & $0.0054$\n & $560$ \n & $3.0 \\times 10^5$ & $4.0 \\times 10^5$ & (356) &\\cr\n11\/70\/1\/3\/0.9 & 4b & $0.0073$\n & $0.31$\n & $44$ \n & $23$ & $32$ & (356) &\\cr\n12\/70\/1\/3\/0.5 & 4c & $4.5 \\times 10^{-7}$\n & $0.034$\n & $120$ \n & $250$ & $350$ & (356) &\\cr\n12\/70\/1\/3\/0.9 & 4c & $0.047$\n & $0.55$\n & $34$ \n & $10$ & $14$ & (356) &\\cr\n13\/70\/1\/3\/0.1 & 4d & $ \\sim 0 $\n & $ \\sim 0$ \n & $3.1 \\times 10^{10}$ \n & $3.0 \\times 10^{11}$ & $4.3 \\times 10^{11}$ & (356) &\\cr\n13\/70\/1\/3\/0.5 & 4d & $2.2 \\times 10^{-4}$\n & $0.21$\n & $41$ \n & $8.8$ & $12$ & (356) &\\cr\n13\/70\/1\/3\/0.9 & 4d & $0.27$\n & $0.94$\n & $29$ \n & $12$ & $16$ & (2$^{+}$356) &\\cr\n10\/40\/3\/5\/0.5 & 5a & $1.0 \\times 10^{-8}$ \n & $9.2 \\times 10^{-6}$\n & $1.1 \\times 10^{-8}$ \n & $1.1 \\times 10^{6}$ & $7.1 \\times 10^{5}$ & (456) &\\cr \n10\/40\/3\/5\/0.9 & 5a & $0.018$\n & $0.033$\n & $0.0067$ \n & $3.6$ & $0.23$ & (45) &\\cr\n11\/40\/3\/5\/0.5 & 5b & $2.1 \\times 10^{-6}$\n & $2.1 \\times 10^{-4}$\n & $1.4 \\times 10^{-6}$ \n & $200$ & $14$ & (456) &\\cr\n11\/40\/3\/5\/0.9 & 5b & $0.087$\n & $0.092$\n & $0.034$ \n & $0.99$ & $0.067$ & (45$^{+}$) 
&\\cr\n12\/40\/3\/5\/0.5 & 5c & $3.9 \\times 10^{-4}$\n & $0.0050$\n & $1.8 \\times 10^{-4}$ \n & $1.7$ & $0.12$ & (45$^{0+}$) &\\cr\n12\/40\/3\/5\/0.9 & 5c & $0.40$\n & $0.25$\n & $0.17$ \n & $0.84$ & $0.056$ & (4$^{-}$5$^{+}$) &\\cr\n13\/40\/3\/5\/0.1 & 5d & $\\sim 0$\n & $3.3 \\times 10^{-18}$\n & $\\sim 0$ \n & $1.14$ & $0.076$ & (45$^{+}$) &\\cr\n13\/40\/3\/5\/0.5 & 5d & $0.053$\n & $0.11$\n & $0.019$ \n & $0.50$ & $0.034$ & (4) &\\cr\n13\/40\/3\/5\/0.9 & 5d & $1.6$\n & $0.63$\n & $0.82$ \n & $1.8$ & $0.12$ & (1$^{0+}$3$^{+}$5$^{0+}$) &\\cr\n10\/40\/2\/4\/0.5 & 6a & $2.96 \\times 10^{-9}$\n & $8.6 \\times 10^{-5}$\n & $5.7 \\times 10^{-4}$ \n & $1.1 \\times 10^{6}$ & $2.1 \\times 10^{5}$ & (456) &\\cr\n10\/40\/2\/4\/0.9 & 6a & $0.012$\n & $0.068$\n & $0.22$ \n & $6.6$ & $1.2$ & (56$^{+}$) &\\cr\n11\/40\/2\/4\/0.5 & 6b & $8.7 \\times 10^{-7}$\n & $0.0012$\n & $0.0058$ \n & $280$ & $51$ & (4$^{0-}$56) &\\cr\n11\/40\/2\/4\/0.9 & 6b & $0.065$\n & $0.16$\n & $0.54$ \n & $2.0$ & $0.36$ & (5$^{0+}$) &\\cr\n12\/40\/2\/4\/0.5 & 6c & $2.2 \\times 10^{-4}$\n & $0.017$\n & $0.062$ \n & $3.0$ & $0.54$ & (4$^{0-}$5$^{0+}$) &\\cr\n12\/40\/2\/4\/0.9 & 6c & $0.33$\n & $0.38$\n & $1.4$ \n & $1.77$ & $0.32$ & (3$^{0+}$5$^{0+}$) &\\cr\n13\/40\/2\/4\/0.1 & 6d & $\\sim 0$\n & $\\sim 0$\n & $2.0 \\times 10^{-9}$ \n & $1.1$ & $0.020$ & (45$^{+}$) &\\cr\n13\/40\/2\/4\/0.5 & 6d & $0.040$\n & $0.22$\n & $0.70$ \n & $1.1$ & $0.19$ & (5$^{+}$) &\\cr\n13\/40\/2\/4\/0.9 & 6d & $1.4$\n & $0.84$\n & $3.7$ \n & $4.0$ & $0.72$ & (1$^{0+}$2$^{+}$35) &\\cr\n10\/40\/1\/3\/0.5 & 7a & $9.4 \\times 10^{-10}$\n & $8.0 \\times 10^{-4}$\n & $3.7$ \n & $9.6 \\times 10^{5}$ & $4.3 \\times 10^{5}$ & (356) &\\cr\n10\/40\/1\/3\/0.9 & 7a & $0.0086$\n & $0.13$\n & $3.6$ \n & $11$ & $4.8$ & (356) &\\cr\n11\/40\/1\/3\/0.5 & 7b & $4.0 \\times 10^{-7}$\n & $0.0062$\n & $3.0$ \n & $323$ & $147$ & (356) &\\cr\n11\/40\/1\/3\/0.9 & 7b & $0.051$\n & $0.28$\n & $4.5$ \n & $3.5$ & $1.6$ & (356$^{0+}$) 
&\\cr\n12\/40\/1\/3\/0.1 & 7c & $ \\sim 0$\n & $\\sim 0$\n & $140$ \n & $2.8 \\times 10^{17}$ & $1.3 \\times 10^{17}$ & (356) &\\cr\n12\/40\/1\/3\/0.5 & 7c & $1.4 \\times 10^{-4}$\n & $0.52$\n & $3.0$ \n & $4.6$ & $2.1$ & (356$^{0+}$) &\\cr\n12\/40\/1\/3\/0.9 & 7c & $0.28$\n & $0.56$\n & $6.5$ \n & $3.3$ & $1.5$ & (356$^{0+}$) &\\cr\n13\/40\/1\/3\/0.1 & 7d & $\\sim 0$\n & $6.1 \\times 10^{-6}$\n & $2.2$ \n & $1.2$ & $0.54$ & (3$^{0+}$5$^{+}$) &\\cr\n13\/40\/1\/3\/0.5 & 7d & $0.033$\n & $0.42$\n & $5.4$ \n & $2.0$ & $0.92$ & (35$^{0+}$6$^{+}$) &\\cr\n13\/40\/1\/3\/0.9 & 7d & $1.3$\n & $1.1$\n & $10$ \n & $8.0$ & $3.6$ & (356) &\\cr\n\\noalign{\\smallskip \\hrule}\n\\noalign{\\smallskip}\\cr}}$$}\n\\end{table*}\n\n\\begin{table*}\n\\caption{Fractional uncertainties in the background and source counts \nvalues used to accept or reject the models listed in Table\\,2. The acceptable \nranges of values are used to make the selection, and the relevant cases are \ndescribed by numerical codes in the last column of Table\\,2. A code is \nshown if the model is rejected on the grounds of: (1) overproduction of \nthe 2.8-mm count; (2) overprediction of the 450-$\\mu$m count; (3) \noverproduction of the 175-$\\mu$m count; (4) underprediction of the \n175-$\\mu$m count; (5) overprediction of the {\\it COBE} 240-$\\mu$m background; \nand (6) overprediction of the {\\it COBE} 140-$\\mu$m background. Although \na model that underpredicts the 175-$\\mu$m count is not formally excluded, \nsuch a model requires that the 175-$\\mu$m counts and 850-$\\mu$m counts come\nfrom entirely different populations of galaxies. This is not impossible, but we \nregard it as unlikely; we choose a factor of ten discrepancy as the cutoff\nbetween a model being acceptable and unacceptable. \nIn the last column of Table\\,2, superscripts 0, + or $-$ indicate that \nthe exclusion applies only in that scenario, corresponding to the \nvalues listed below. 
The absence of a superscript implies that the \nmodel is rejected in all three scenarios.} \n{\\vskip -0.75mm}\n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip}\nModel & 2.8-mm\n & 450-$\\mu$m\n & 175-$\\mu$m & 240-$\\mu$m & 140-$\\mu$m &\\cr\n & counts & counts & counts & background & background & &\\cr\n\\noalign{\\smallskip \\hrule \\smallskip}\n\\cr\nFractional Error in ratio & $-$ & $-$ & 0.15 & 0.24 & 0.41 &\\cr\n & & & & & & & &\\cr\nRange of Acceptability (``0'' models) &\n$<1.0$ & $<1.0$ & [0.85,1.2] & $<1.2$ & $<1.4$ &\\cr\nRange of Acceptability (``$-$'' models) &\n$<2.4$ & $<2.4$ & [2.0,2.8] & $<3.0$ & $<3.4$ &\\cr\nRange of Acceptability (``+'' models) &\n$<0.63$ & $<0.63$ & [0.54,0.73] & $<0.78$ & $<0.89$ &\\cr\n & & & & & & & &\\cr\n\\noalign{\\smallskip \\hrule}\n\\noalign{\\smallskip}\\cr}}$$}\n\\end{table*}\n\nIn Table 5 we list a number of star-formation models that we investigate in \nfurther detail. Note that we use capital letters to represent galaxy formation \nmodels (Section 2) and roman numerals to represent the different star-formation \nmodels. There is insufficient information to determine accurately the appropriate \nset of input parameters to the star-formation models for the SCUBA sources, \nand even for local ULIRGs in many cases. The more important parameters are:\n\\begin{enumerate}\n\\item the mode of star formation. For example, the star formation may occur\ncontinuously, as an instantaneous burst, or with a more complex time\ndependence, for example an exponential decay. 
We investigate both the \ninstantaneous (Models I and II) and continuous (Models III through X) cases.\nObservations of the core of Arp 220 (Mouri \\& Taniguchi 1992;\nPrestwich, Joseph, \\& Wright 1994; Armus et al.\\ 1995; Larkin et al.\\ 1995;\nScoville et al.\\ 1998), combined with the inferred presence of ionizing sources \nthere (Larkin et al.\\ 1995; Kim et al.\\ 1995; Goldader et al.\\ 1995), indicate that \nthe continuous models offer a more plausible description of the star formation \nin this well-studied ULIRG;\n\\item the stellar initial mass function (IMF). The precise form of the IMF in \nstarburst galaxies is very much an open question: see Leitherer (1998) for a \nreview. It appears that the IMF of the most luminous star clusters in the Milky \nWay and the Magellanic Clouds follows closely the Salpeter form (Hunter et al.\\ \n1995; Hunter et al.\\ 1997; Massey et al.\\ 1995a). However, the field-star IMF may \nbe significantly steeper than the cluster IMF (Massey et al.\\ 1995b),\nand there is some evidence (Meurer et al.\\ 1995) that the most massive stars\nin starbursts form in environments that are more similar to the field than to \nthe cores of the luminous star clusters studied above. There are \nhence few direct observations to guide us to the correct IMF in starburst \ngalaxies, and we therefore consider four different scenarios in Models III to X.\nLeitherer \\& Heckman (1995) consider power-law IMFs, and \nwe investigate various possible combinations of\nthe upper and lower mass cutoffs and the slope of this power law.\nNote that if the SCUBA sources end up as elliptical galaxies (Section 3.1.3; \nBlain et al.\\ 1999a; Lilly et al.\\ 1998), a lower limit to the IMF at \n$M_{\\rm l}$ = 3\\,M$_\\odot$ is indicated, in agreement with the value suggested \nby Zepf \\& Silk (1996);\n\\item the initial gas metallicity. 
We assume a metallicity of twice solar.\nThis is close to the metallicities of the most luminous elliptical\ngalaxies (e.g.~Faber 1973, Vader 1986); \n\\item the age at which we observe the star clusters. We investigate the\ncases where we observe the galaxies between 10$^8$ and 10$^{8.5}$\nyears after the onset of star formation. Fortunately, most of the\nproperties of the starburst, like the total bolometric luminosity, reach\na plateau soon after the onset of star formation, and so our results do not \ndepend strongly on this parameter. The median total amount\nof gas consumed by an $L_{*}$ galaxy 10$^{8.5}$ years after the onset\nof star formation is $7 \\times 10^{10}$\\,M$_\\odot$ in our models. This is \nconsistent with the progenitor galaxies to the SCUBA sources being gas-rich \nnormal galaxies; \n\\item the effects of extinction. Since we are only attempting to model\nthe far-infrared and submillimetre properties of the starburst, we\ncan assume that the galaxies are optically thin and do not need to\nmake any corrections for extinction. We assume that the dust absorbs\nand reradiates all the energy produced by the starburst.\n\\end{enumerate} \n\nFrom the seventh column of Table 5 it is immediately apparent that there is \nmore than an order of magnitude of uncertainty in the transformation from \nfar-infrared luminosity to star-formation rate, due to the uncertainty in the\nstar-formation parameters. The constant conversion factor of 2.2 $\\times$\n10$^9$\\,L$_{\\odot}$\\,M$_{\\odot}^{-1}$\\,yr (Rowan-Robinson et al.\\ 1997), \nused by Blain et al.\\ (1999a), corresponding to 0.22 in the seventh column of\nTable 5, is bracketed by our results.\n\n\\subsubsection{The Madau plot}\n\nWe combine our redshift-dependent luminosity functions (Models B, F and G) \nwith the star-formation parameters for the models listed in Table 5 \n(Models I to X) to compute the comoving star formation rate in the SCUBA \nsources. The results are listed in Table 6.
We can then put the SCUBA \nsources on the Madau plot: see Figs\,9, 10 and 11. The other points on the \nMadau plot all come from optically-selected samples. As there is no luminosity \nevolution in the models between $z_1$ and $z_2$, the comoving star-formation \nrate is constant between these redshifts, and so the lines on the Madau plot are \nhorizontal. \n\nThe figures show that the uncertainty in determining where the SCUBA \nsources lie on the Madau plot is huge, more than an order of magnitude, \neven if we have specified the correct luminosity--redshift distribution (Model B, \nF or G). This uncertainty is solely due to our lack of knowledge regarding the \nstar-formation parameters, as discussed in Section 3.1.1. For Model B, and \npossibly Model F, if the true cosmic star-formation rate in the SCUBA sources\nis towards the higher end of our permitted range, then they dominate the \nstar-formation rate of the Universe: this is essentially the scenario proposed\nby Blain et al.\\ (1999a) and Hughes et al.\\ (1998). If the true cosmic star-formation \nrate in the SCUBA sources is towards the lower end of our range, then\nthey do not contribute significantly to the star-formation rate of the Universe \nat any redshift. In general, the instantaneous models \npredict higher star-formation rates. This is because more stars\nneed to be formed to produce a given bolometric luminosity observed\nmore than 10$^8$ years after the starburst.\n\nIt is interesting to note that the IMF proposed by Zepf \\& Silk (1996) for \nelliptical galaxies, if valid for the SCUBA sources, results in them being at \nthe extreme low end of our proposed range. This is because these models \nunderproduce low-mass stars, and so result in a low star-formation rate for \na given amount of bolometric luminosity: the low-mass stars, when young, \ndo not contribute significantly to the bolometric luminosity. 
The implication \nhere is that if SCUBA sources evolve into elliptical galaxies, then they do \nnot contribute significantly to the star-formation rate of the Universe at \nany redshift, given our models of the luminosity function. \n \n\\begin{figure}\n\\begin{center}\n\\epsfig{file=fig8.ps, width=8.65cm}\n\\end{center}\n\\caption{The lower limit on the integral in equation (5) as a function of the \ngalaxy dust temperature $T$, normalized to its value for $T=38$\\,K, predicted \nfrom {\\it IRAS} and {\\it ISO} counts by Blain et al.\\ (1999a).} \n\\end{figure} \n\nThe broadband optical colours of the SCUBA sources are not significantly \ndifferent from those of normal field galaxies (Smail et al.\\ 1998). Furthermore, \ntwo of the three local ULIRGs studied at ultraviolet wavelengths with {\\it HST} \nby Trentham, Kormendy \\& Sanders (1998) would also have broadband colours \nsimilar to normal field galaxies if they were placed at high redshift. Therefore,\nSCUBA sources cannot be identified as submillimetre-luminous galaxies \nbased on broadband optical colours alone. The star-formation rates given \nin Table 5 are far higher than the rates in normal galaxies. Therefore we need \nto treat the SCUBA sources as a separate population on the Madau plot. This \nis particularly important if the SCUBA sources contribute significantly to the\nstar-formation rate of the Universe at any redshift. \n\n\\begin{table}\n\\caption{Predicted 850-$\\mu$m background intensities. The observed\n850-$\\mu$m background intensity is $0.55 \\pm 0.15$\\,nW\\,m$^{-2}$\\,sr$^{-1}$ \n(Fixsen et al. 
1998).} \n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip}\nModel & Background \/ & Model & Background \/\\cr\n & nW\\,m$^{-2}$\\,sr$^{-1}$ & & nW\\,m$^{-2}$\\,sr$^{-1}$\\cr\n\\noalign{\\smallskip \\hrule \\smallskip}\n\\cr\nA & 3.40 & E & 1.71 \\cr\nB & 0.85 & F & 0.71 \\cr\nC & 1.32 & G & 0.34 \\cr\nD & 1.19 & & \\cr\n\\noalign{\\smallskip \\hrule}\n\\noalign{\\smallskip}\\cr}}$$}\n\\end{table}\n\n\\begin{table*}\n\\caption{Properties of star-formation models. \n$\\alpha$ is the slope of the stellar IMF and $M_{\\rm u}$ and $M_{\\rm l}$\nare the upper and lower mass cutoffs.\nThe final two columns are derived from the results of Leitherer \\& Heckman\n(1995). The star-formation\nrates (SFR) as a function of 60-$\\mu$m luminosity come from Figs\\,7 and 8,\nassuming a temperature of 70\\,K. The metal masses $M_Z$ come from \nFigs\\,53 and 54 of Leitherer \\& Heckman (1995), assuming that the mass of \nmetals produced is equal to the mass returned by winds and supernovae.\nAll models assume an initial metallicity of twice solar.\n} \n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip}\nModel & SF profile & log$_{10}$(age\/yr) &\n $\\alpha$ & $M_{\\rm u}$ \/ M$_\\odot$ & $M_{\\rm l}$ \/ M$_\\odot$ & \nSFR \/ ($L_{60} \/ 10^{9} {\\rm L}_{\\odot}$) M$_\\odot$\\,yr$^{-1}$ \n& $M_Z$ \/ ($L_{60} \/ 10^{9} {\\rm L}_{\\odot}$) M$_\\odot$ &\\cr\n\\noalign{\\smallskip \\hrule \\smallskip}\n\\cr\nSF I & Instantaneous & 8.0 & 2.35 & 100 & 1 & 1.0 & $4.0 \\times 10^7$ &\\cr\nSF II & Instantaneous & 8.5 & 2.35 & 100 & 1 & 0.81 & $1.0 \\times 10^8$ &\\cr\nSF III & Continuous & 8.0 & 2.35 & 100 & 1 & 0.11 & $3.5 \\times 10^6$ &\\cr\nSF IV & Continuous & 8.5 & 2.35 & 100 & 1 & 0.072 & $7.2 \\times 10^6$ &\\cr\nSF V & Continuous & 8.0 & 2.35 & 30 & 1 & 0.15 & $4.1 \\times 10^6$ &\\cr\nSF VI & Continuous & 8.5 & 2.35 & 30 & 1 & 0.12 & $1.0 \\times 10^7$ &\\cr\nSF VII & Continuous & 8.0 & 3.3 & 
100 & 1 & 0.34 & $8.0 \\times 10^5$ &\\cr\nSF VIII & Continuous & 8.5 & 3.3 & 100 & 1 & 0.25 & $1.6 \\times 10^6$ &\\cr\nSF IX & Continuous & 8.0 & 2.35 & 100 & 3 & 0.056 & $3.5 \\times 10^6$ &\\cr\nSF X & Continuous & 8.5 & 2.35 & 100 & 3 & 0.043 & $7.2 \\times 10^6$ &\\cr\n\\noalign{\\medskip \\hrule}\n\\noalign{\\smallskip}\\cr}}$$}\n\\end{table*}\n\n\\subsubsection{Why do the results differ from those of other authors?} \n\nBoth Blain et al.~(1999a) and Hughes et al.~(1998) derive global star-formation rates\nat least an order of magnitude higher than we derive in the current paper.\nThere are two sources of discrepancy.\n\\vskip 1pt\n\\noindent (i) differences in star-formation rates. \nFor example, Blain et al.~(1999a) use a value of $M_{\\rm l}$ below 1 M$_\\odot$ and\nconsequently assume star-formation rates that are towards the upper end of our\nconsidered range.\n\\vskip 1pt\n\\noindent (ii) differences in the assumed luminosity--redshift distribution, which can \nbe broadly split further into (a) differences in the characteristic luminosity of the\nsources that dominate the contribution to the SCUBA counts, and (b) differences in the\nnormalization of the luminosity function at a given redshift. Effect (iia) is not a\nmajor source of discrepancy between our results and those of Blain et al.~(1999a) --- in\nboth cases, most of the star formation occurs in very luminous sources\nwith $L_{60} > 10^{11}$\\,L$_{\\odot}$. Effect (iib) is a much more substantial source\nof discrepancy. Blain et al.~(1999a) assume a redshift-dependent Saunders luminosity function at\nhigh redshift which is related to the local 60-$\\mu$m luminosity function by simple luminosity\nevolution. We assume a Gaussian luminosity function with no evolution over some specified\nredshift range. Our objects, when evolved to $z=0$, are early-type stellar populations, \nquite unrelated to the objects that dominate the local 60-$\\mu$m luminosity function. 
\nWe consequently derive a far lower\nnormalization to our luminosity functions than do Blain et al.~(1999a).\nThe present observations do not permit us to distinguish \nbetween these models, but this should be a straightforward exercise when a \nsignificantly complete redshift distribution for the SCUBA sources is known. \n\nIn addition, Eales et al.~(1998) argue that at least 10 per cent\nof the stars in the Universe\nformed in SCUBA sources since they produce about\n10 per cent \nof the extragalactic background light. This number is much closer to the\nnumbers presented in this paper. Nevertheless, the assumption that the SCUBA sources\ncontribute the same fraction of the submillimetre background at all wavelengths, on which\nthe Eales et al.~calculation is based, appears to be inconsistent with the formulation that\nwe use, when the BIMA 2.8-mm counts, \nISO 175-$\\mu$m counts, and COBE 850-$\\mu$m background\nare all considered in conjunction. Furthermore, it is unclear how we should relate\nstar-formation rates derived from optical or ultraviolet luminosities of normal field\ngalaxies to star-formation rates derived from submillimetre fluxes of the SCUBA sources. \n\n\\begin{figure}\n\\begin{center}\n\\vskip -2mm\n\\epsfig{file=fig9.ps, width=8.65cm}\n\\end{center}\n\\vskip -5mm\n\\caption{The comoving star-formation density of\nthe Universe contained in the SCUBA sources when their\nluminosity--redshift distribution is as in Model B.\nThe ten dashed lines represent the SFR histories for models\n(in ascending order) X, IX, IV, III, VI, V, VIII, VII, II and I.\nThe other points come from:\nfilled triangle -- Gallego et al.~(1996);\nopen triangle -- Treyer et al.~(1998);\nopen circle -- Tresse \\& Maddox (1998);\nstars -- Lilly et al.~(1996);\nopen hexagons -- Hammer \\& Flores (1998);\nfilled squares -- Connolly et al.~(1997);\nfilled circles -- Madau et al.~(1996);\nopen squares -- Pettini et al.~(1998; these include a global correction\nfor dust extinction). 
The recent work of Glazebrook et al.~(1999) \nsuggests an SFR almost coincident with the $z=0.85$ point of Hammer\n\\& Flores (1998).} \n\\end{figure} \n\n\\begin{figure}\n\\begin{center}\n\\vskip -2mm\n\\epsfig{file=fig10.ps, width=8.65cm}\n\\end{center}\n\\vskip-5mm\n\\caption{\nAs Figure 9, but for Model F. The symbols\nhave the same meanings and the lines are in the same order.\n}\n\\end{figure} \n\n\\begin{figure}\n\\begin{center}\n\\vskip-2mm\n\\epsfig{file=fig11.ps, width=8.65cm}\n\\end{center}\n\\vskip-5mm\n\\caption{\nAs Figure 9, but for Model G. The symbols\nhave the same meanings and the lines are in the same order.\n}\n\\end{figure} \n\n\\subsubsection{The fate of the star-forming galaxies}\n\nWe have hinted that the SCUBA sources could evolve into elliptical galaxies.\nLocal ULIRGs have gas densities that are similar to the core stellar densities \nin elliptical galaxies (Kormendy \\& Sanders 1992; Doyon et al.\\ 1994).\nIf the SCUBA sources have similar morphologies to local ULIRGs, we \nmight then expect them to evolve into elliptical galaxies. \n\nIn Table 7 we present the density parameter in stars produced by the SCUBA \nsources, in our luminosity-function and star-formation models. For most models, \nparticularly those with IMFs appropriate to elliptical galaxies (Models IX and\nX), these numbers are very small compared with the stellar density \ncontained in the local spheroid stellar population $\\Omega_{\\rm sph}$,\nthat is, elliptical galaxies and bulges: $\\Omega_{\\rm sph} = 0.0036$ \n(Fukugita, Hogan \\& Peebles 1998). \nIn Models IX and X, between 1 and 4\\,per cent of $\\Omega_{\\rm sph}$\nformed in the luminous high-redshift SCUBA sources, and slightly more if a \nsolar-neighbourhood IMF (Models III and IV) is assumed. 
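The quoted range of 1 to 4 per cent can be reproduced directly; a minimal sketch (the $\Omega_*$ entries for Models IX and X are transcribed from Table 7, and $\Omega_{\rm sph}=0.0036$ is from Fukugita, Hogan \& Peebles 1998):

```python
# Omega_* values for the elliptical-galaxy IMF models (SF IX and SF X),
# transcribed from Table 7, for the three well fitting LF models B, F, G.
omega_sph = 0.0036  # local spheroid stellar density (Fukugita, Hogan & Peebles 1998)

omega_star = {  # (LF model, SF model) -> Omega_*
    ("B", "IX"): 6.7e-5, ("B", "X"): 4.3e-5,
    ("F", "IX"): 5.1e-5, ("F", "X"): 3.4e-5,
    ("G", "IX"): 1.3e-4, ("G", "X"): 8.0e-5,
}

percent = {key: 100.0 * om / omega_sph for key, om in omega_star.items()}
for (lf, sf), p in sorted(percent.items()):
    print(f"LF {lf}, SF {sf}: {p:.1f} per cent of Omega_sph")

# Every combination falls roughly between 1 and 4 per cent:
assert all(0.9 < p < 4.0 for p in percent.values())
```

The lowest value (LF F with SF X, about 0.9 per cent) and the highest (LF G with SF IX, about 3.6 per cent) bracket the range quoted in the text.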
These numbers are \nsufficiently low that a scenario in which all the SCUBA sources evolve into \nelliptical galaxies is consistent with galaxy formation models that predict\nlow-redshift elliptical formation (Kauffmann 1996; Kauffmann, Charlot \\& White \n1996; Kauffmann \\& Charlot 1998). They are also low enough to be\nconsistent with the observed paucity of red galaxies in the Hubble Deep Field \nand the consequent interpretation that less than 30\\,per cent of field ellipticals\nformed at high redshift (Zepf 1997; Barger et al.\\ 1998b). \n\nVarious lines of argument suggest that the elliptical galaxies in clusters are \nvery old. Not only did the stars form a long time ago (Ellis et al.\\ 1997; \nStanford et al.\\ 1998; Kodama et al.\\ 1998), but it appears that in the richest \nclusters the most luminous elliptical galaxies themselves were assembled by \n$z \\sim 1$ (Trentham \\& Mobasher 1998). It is further possible that most 3C radio \ngalaxies are cluster ellipticals in the process of formation (Best, Longair \\& \nRottgering 1998). This leads to a natural question: are the SCUBA galaxies \nforming cluster ellipticals? Low-redshift cluster ellipticals are\nhighly clustered, with a bias of about 4, and an even greater bias at high redshifts\n(Fry 1996; Mo \\& White 1996). In comparison, the reasonably uniform detection\nrate of SCUBA galaxies in the fields studied by Smail et al.\\ (1998) suggests \nthat this is unlikely. \nThese fields contained low-redshift clusters (which magnify the background\nSCUBA sources through gravitational lensing), but these low-redshift\nclusters are unrelated to the hypothesized ellipticals in formation that\nwe are discussing here. \nInstead, we propose that the SCUBA sources are a \nforming trace population of field ellipticals. This is consistent with the \nconstraints outlined in the previous paragraph. 
This is not to say that if we \nhappened to point SCUBA at a cluster in formation we would not detect a large \nnumber of galaxies.\n\n\\begin{table*} \n\\caption{Comoving star-formation rates in units of \nM$_\\odot$\\,yr$^{-1}$\\,Mpc$^{-3}$ in the three well fitting models of \ndistant ULIRGs.\n}\n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip}\nLF Model & SF I & SF II & SF III & SF IV & SF V & SF VI & SF VII & SF VIII & SF\nIX & SF X &\\cr\n\\noalign{\\medskip \\hrule \\smallskip}\n\\cr\nB & 0.097 & 0.078 & 0.011 & 0.0069 & 0.014 & 0.012 & 0.039 & 0.025 & 0.0064\n& 0.0041 &\\cr\nF & 0.042 & 0.034 & 0.0046 & 0.0030 & 0.0060 & 0.0050 & 0.017 & 0.010 & 0.0027\n& 0.0018 &\\cr\nG & 0.045 & 0.037 & 0.0050 & 0.0032 & 0.017 & 0.0054 & 0.018 & 0.011 & 0.0030 &\n 0.0019 &\\cr\n\\noalign{\\medskip \\hrule}\n\\noalign{\\smallskip}\\cr}}$$}\n\\end{table*}\n \n\\begin{table*} \n\\caption{The present-day values of the density parameter in stars $\\Omega_{*}$ \nproduced by each well fitting \nluminosity function (LF) model, in each star-formation model.} \n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip}\nLF Model & SF I & SF II & SF III & SF IV & SF V & SF VI & SF VII & SF VIII & SF\nIX & SF X &\\cr\n\\noalign{\\medskip \\hrule \\smallskip}\n\\cr\nB & 0.0010 & 0.00082 & 0.00012 & 7.3 $\\times$ 10$^{-5}$ & 0.00015 & 0.00013 &\n0.00041 & 0.00026 & 6.7 $\\times$ 10$^{-5}$ & 4.3 $\\times$ 10$^{-5}$ &\\cr\nF & 0.00080 & 0.00064 & 8.7 $\\times$ 10$^{-5}$ & 5.7 $\\times$ 10$^{-5}$ & \n0.00011\n& 9.5 $\\times$ 10$^{-5}$ & 0.00032 & 0.00019 & 5.1 $\\times$ 10$^{-5}$ & 3.4\n$\\times$ 10$^{-5}$ &\\cr\nG & 0.0019 & 0.0016 & 0.00021 & 0.00014 & 0.00072 & 0.00023 & 0.00077 & 0.00047 \n& 0.00013 & 8.0 $\\times$ 10$^{-5}$ &\\cr\n\\noalign{\\medskip \\hrule}\n\\noalign{\\smallskip}\\cr}}$$}\n\\end{table*}\n\n\\begin{table*} \n\\caption{Cosmic enrichment from each well fitting model. 
The results are \nin units of the solar metallicity, assumed to be 0.0189 (Anders\n\\& Grevesse 1989). The redshift range is specified in the luminosity function \nmodel: see Table\\,2. All the enrichment due to distant ULIRGs is \ncompleted by the lower limit to this redshift range.} \n{$$\\vbox{\n\\halign {\\hfil #\\hfil && \\quad \\hfil #\\hfil \\cr\n\\noalign{\\hrule \\medskip}\nLF Model & $z$ range & SF I & SF II & SF III & SF IV & SF V & SF VI &\nSF VII & SF VIII & SF IX & SF X &\\cr\n\\noalign{\\medskip \\hrule \\smallskip}\n\\cr\nB & $30$, there exist two positive constants $\\kappa_{T,1,H}\\leq \\kappa_{T,2,H}$ depending only on $T$ and $H$, such that for any $0\\leq s0$ and the multi-index $\\mathbf{k}=(k_1,\\cdots, k_d)$ with all $k_i$ being nonnegative integers, let\n\\[\np^{(\\mathbf{k})}_{\\varepsilon}(x)=\\frac{\\partial^\\mathbf{k}}{\\partial x^{k_1}_1\\cdots \\partial x^{k_d}_d}p_{\\varepsilon}(x) =\\frac{\\iota^{|\\mathbf{k}|}}{(2\\pi)^d}\\int_{{\\mathbb R}^d} \\Big(\\prod^d_{i=1}y^{k_i}_i\\Big)\\, e^{\\iota y\\cdot x}e^{-\\frac{\\varepsilon|y|^2}{2}}\\, dy,\n\\]\nwhere $p_{\\varepsilon}(x)=\\frac{1}{(2\\pi \\varepsilon)^{\\frac{d}{2}}} e^{-\\frac{|x|^2}{2\\varepsilon}}$ and $|\\mathbf{k}|=\\sum\\limits^d_{i=1}k_i$.\n\nFor any $T>0$ and $x\\in{\\mathbb R}^d$, if\n\\begin{align}\\label{epsilon}\nL^{(\\mathbf{k})}_{\\varepsilon}(T,x):=\\int^T_0\\int^T_0 p^{(\\mathbf{k})}_{\\varepsilon}(X^{H_1}_t-\\widetilde{X}^{H_2}_s+x)\\, ds\\, dt\n\\end{align}\nconverges to some random variable in $L^2$ when $\\varepsilon\\downarrow 0$, we denote the limit by $L^{(\\mathbf{k})}(T,x)$ and call it the $\\mathbf{k}$-th derivative of local time for the $(2,d)$-Gaussian field $Z$. 
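The Fourier representation of $p^{(\mathbf{k})}_{\varepsilon}$ above can be verified numerically in the simplest nontrivial case $d=1$, $\mathbf{k}=(1)$; a minimal sketch (the parameter values and the quadrature grid are arbitrary illustrative choices):

```python
import numpy as np

# Check the Fourier formula for the first derivative of the heat kernel
# p_eps(x) = (2*pi*eps)^(-1/2) * exp(-x^2 / (2*eps)) in d = 1:
#   p_eps'(x) = (i / (2*pi)) * \int y * exp(i*y*x) * exp(-eps*y^2 / 2) dy.
eps, x = 0.5, 1.0  # arbitrary illustrative values

# Closed form of the left-hand side: p_eps'(x) = -(x / eps) * p_eps(x).
p_eps = np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
exact = -(x / eps) * p_eps

# Riemann sum for the Fourier integral; the Gaussian factor makes the
# integrand negligible at the truncation boundary, so the equispaced
# sum is accurate to near machine precision.
y = np.linspace(-60.0, 60.0, 400001)
integrand = 1j * y * np.exp(1j * y * x - eps * y**2 / 2) / (2 * np.pi)
numeric = (integrand.sum() * (y[1] - y[0])).real

assert abs(numeric - exact) < 1e-8
```

The same comparison for higher $|\mathbf{k}|$ reproduces the corresponding Hermite-polynomial derivatives of the Gaussian kernel.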
If it exists, $L^{(\\mathbf{k})}(T,x)$ admits the following $L^2$-representation\n\\begin{align} \\label{dlt}\nL^{(\\mathbf{k})}(T,x)=\\int^T_0\\int^T_0 \\delta^{({\\bf k})}(X^{H_1}_t-\\widetilde{X}^{H_2}_s+x)\\, ds\\, dt.\n\\end{align}\n\nThe following are the main results of this paper.\n\\begin{theorem}\\label{thm1}\nAssume that $X^{H_1}=\\{X^{H_1}_t:\\, t\\geq 0\\}$ and $\\widetilde{X}^{H_2}=\\{\\widetilde{X}^{H_2}_t:\\, t\\geq 0\\}$ are two independent Gaussian processes in $G^d_{1,2}$ with parameters $H_1, H_2\\in(0,1)$, respectively. For any $x\\neq 0$, if $\\frac{H_1H_2}{H_1+H_2}(2|\\mathbf{k}|+d)\\geq 1$, then there exist positive constants $c_1$ and $c_2$ such that\n\\begin{align*}\n\\liminf_{\\varepsilon\\downarrow 0}\\frac{{{\\mathbb E}\\,}[|L^{(\\mathbf{k})}_{\\varepsilon}(T,x)|^2]}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\\geq c_1e^{-c_2|x|^2},\n\\end{align*}\nwhere\n\\begin{align} \\label{rate}\nh^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)=\n\\left\\{\\begin{array}{ll}\n\\varepsilon^{\\frac{H_1+H_2}{2H_1H_2}-\\frac{d}{2}-|\\mathbf{k}|} & \\text{if}\\; \\frac{H_1H_2}{H_1+H_2}(2|\\mathbf{k}|+d)>1\\\\ \\\\\n\\ln(1+\\varepsilon^{-\\frac{1}{2}}) & \\text{if}\\; \\frac{H_1H_2}{H_1+H_2}(2|\\mathbf{k}|+d)=1.\n\\end{array} \\right.\n\\end{align}\n\\end{theorem}\n\n\\begin{theorem}\\label{thm2}\nAssume that $X^{H_1}=\\{X^{H_1}_t:\\, t\\geq 0\\}$ and $\\widetilde{X}^{H_2}=\\{\\widetilde{X}^{H_2}_t:\\, t\\geq 0\\}$ are two independent Gaussian processes in $G^d_{1}$ with parameters $H_1, H_2\\in(0,1)$, respectively. 
We further assume that (i) $|\\mathbf{k}|$ is even or (ii) ${{\\mathbb E}\\,}[X^{H_1,1}_{t}X^{H_1,1}_{s}]\\geq 0$ and ${{\\mathbb E}\\,}[\\widetilde{X}^{H_2,1}_{t}\\widetilde{X}^{H_2,1}_{s}]\\geq 0$ for any $0\\sum\\limits^N_{j=1} \\frac{1}{H_j}\\\\\n\\ln(1+\\varepsilon^{-\\frac{1}{2}}) & \\text{if}\\;\\; 2|\\mathbf{k}|+d=\\sum\\limits^N_{j=1} \\frac{1}{H_j}.\n\\end{array} \\right.$\n\n\\item[(ii)] Assume that $X^{j,H_j}_{t_j} (j=1,\\dots,N)$ are independent $d$-dimensional Gaussian processes in $G^d_{1}$, $|\\mathbf{k}|$ is even or all $X^{j,H_j}$ have nonnegative covariance functions, and $2|\\mathbf{k}|+d\\geq \\sum\\limits^N_{j=1} \\frac{1}{H_j}$. Then there exists a positive constant $c_6$ such that $\n\\liminf\\limits_{\\varepsilon\\downarrow 0}\\frac{{{\\mathbb E}\\,}[|L^{(\\mathbf{k})}_{N,\\varepsilon}(T,0)|^2]}{h^{d,|{\\bf k}|}_{H_1,H_2,\\dots, H_N}(\\varepsilon)}\\geq c_6.$\n\\end{enumerate}\nThe case $N=1$ is easy. It follows from simplified proofs of Theorems \\ref{thm1} and \\ref{thm2}. Moreover, our methodologies also work for derivatives of local times of L\\'{e}vy processes or fields.\n\\end{remark}\n\nAfter some preliminaries in Section 2, Sections 3 and 4 are devoted to the proofs of Theorems \\ref{thm1} and \\ref{thm2}, respectively. Throughout this paper, if not mentioned otherwise, the letter $c$, with or without a subscript, denotes a generic positive finite constant whose exact value may change from line to line. For any $x,y\\in{\\mathbb R}^d$, we use $x\\cdot y$ to denote the usual inner product and $|x|=(\\sum\\limits^d_{i=1}|x_i|^2)^{1\/2}$. Moreover, we use $\\iota$ to denote $\\sqrt{-1}$.\n\n\n\\section{Preliminaries}\n\nIn this section, we give three lemmas. The first two will be used in the proof of Theorem \\ref{thm1} and the last one in the proof of Theorem \\ref{thm2}.\n\\begin{lemma} \\label{lma1} Assume that $k\\in{\\mathbb N}\\cup\\{0\\}$ and $\\varepsilon>0$. 
Then, for any $a,b,c, x\\in{\\mathbb R}$ with $a>0$, $c>0$ and $\\Delta=c+\\varepsilon-\\frac{(b-\\varepsilon)^2}{a+2\\varepsilon}>0$, we have\n\\begin{align} \\label{integral}\n&\\frac{(-1)^k}{2\\pi}\\int_{{\\mathbb R}^2} \\exp\\Big\\{-\\frac{1}{2}(y_2^2a+2y_2 y_1b+y_1^2c)-\\frac{\\varepsilon}{2}((y_1-y_2)^2+y_2^2)+\\iota y_1x\\Big\\} y^{k}_2 (y_1-y_2)^{k} \\, dy \\nonumber \\\\\n&=\\sum^k_{\\ell=0}\\sum^{k+\\ell}_{m=0:\\text{even}}\\sum^{2k-m}_{n=0:\\text{even}} c_{k,\\ell,m,n} (a+2\\varepsilon)^{-\\frac{m+1}{2}} (\\frac{\\varepsilon-b}{a+2\\varepsilon})^{k+\\ell-m}\\Delta^{-(2k-m)-\\frac{1-n}{2}} x^{2k-m-n}e^{-\\frac{x^2}{2\\Delta}},\n\\end{align}\nwhere\n\\[\nc_{k,\\ell,m,n}=(-1)^{\\ell-\\frac{m+n}{2}} \\binom{k}{\\ell} \\binom{k+\\ell} {m}\\binom{2k-m} {n}(m-1)!! (n-1)!!\n\\]\nand we use the convention $0^0=1$ for the case $x=0\\in{\\mathbb R}$.\n\\end{lemma}\n\\begin{proof} Let $L$ be the left hand side of the equality \\eref{integral}. It is easy to show that\n\\begin{align*}\nL&=\\frac{(-1)^k}{2\\pi}\\int_{{\\mathbb R}^2} \\exp\\Big\\{-\\frac{1}{2}y_2^2(a+2\\varepsilon)-y_2 y_1(b-\\varepsilon)-\\frac{1}{2}y_1^2(c+\\varepsilon)+\\iota y_1 x\\Big\\} y^{k}_2 (y_1-y_2)^{k} \\, dy\\\\\n&=\\sum^k_{\\ell=0}\\frac{(-1)^{k+\\ell}}{2\\pi} \\binom{k}{\\ell}\\int_{{\\mathbb R}^2} \\exp\\Big\\{-\\frac{1}{2}y_2^2(a+2\\varepsilon)-y_2 y_1(b-\\varepsilon)-\\frac{1}{2}y_1^2(c+\\varepsilon)+\\iota y_1 x\\Big\\} y^{k+\\ell}_2 y_1^{k-\\ell} \\, dy\\\\\n&=\\sum^k_{\\ell=0}\\sum^{k+\\ell}_{m=0:\\, \\text{even}} \\frac{(-1)^{k+\\ell}}{\\sqrt{2\\pi}}\\binom{k}{\\ell} \\binom{k+\\ell} {m}(m-1)!! 
(a+2\\varepsilon)^{-k-\\ell-\\frac{1-m}{2}} (\\varepsilon-b)^{k+\\ell-m} \\int_{{\\mathbb R}} y_1^{2k-m}e^{-\\frac{1}{2} y_1^2\\Delta+\\iota y_1 x} \\, dy_1\\\\\n&=\\sum^k_{\\ell=0}\\sum^{k+\\ell}_{m=0:\\, \\text{even}}\\sum^{2k-m}_{n=0:\\, \\text{even}} c_{k,\\ell,m,n} (a+2\\varepsilon)^{-\\frac{m+1}{2}} (\\frac{\\varepsilon-b}{a+2\\varepsilon})^{k+\\ell-m}\\Delta^{-(2k-m)-\\frac{1-n}{2}} x^{2k-m-n}e^{-\\frac{x^2}{2\\Delta}}.\n\\end{align*}\n\\end{proof}\n\n\n\\begin{lemma} \\label{lma2} Assume that $k\\in{\\mathbb N}\\cup\\{0\\}$ and $\\varepsilon>0$. Then, for any $a_1,b_1,c_1, a_2,b_2,c_2, x\\in{\\mathbb R}$ with $a_1, a_2, c_1, c_2>0$ and $\\Delta'=c_1+c_2+a_2+2b_2+\\varepsilon-\\frac{(b_1-b_2-a_2-\\varepsilon)^2}{a_1+a_2+2\\varepsilon}>0$, we have\n\\begin{align*}\n&\\frac{(-1)^k}{2\\pi}\\int_{{\\mathbb R}^2} \\exp\\left\\{-\\frac{1}{2}\\big([y_2^2a_1+2y_2 y_1b_1+y_1^2c_1]+[(y_1-y_2)^2a_2+2(y_1-y_2)y_1b_2+y^2_1c_2]\\big)\\right\\}\\\\\n&\\qquad\\qquad\\times \\exp\\Big\\{-\\frac{\\varepsilon}{2}((y_1-y_2)^2+y_2^2)+\\iota y_1x\\Big\\} y^{k}_2 (y_1-y_2)^{k} \\, dy\\\\\n&=\\sum^k_{\\ell=0}\\sum^{k+\\ell}_{m=0:\\text{even}}\\sum^{2k-m}_{n=0:\\text{even}} c_{k,\\ell,m,n} (a_1+a_2+2\\varepsilon)^{-\\frac{m+1}{2}} (\\frac{\\varepsilon+b_2+a_2-b_1}{a_1+a_2+2\\varepsilon})^{k+\\ell-m}(\\Delta')^{-(2k-m)-\\frac{1-n}{2}} x^{2k-m-n}e^{-\\frac{x^2}{2\\Delta'}},\n\\end{align*}\nwhere\n\\[\nc_{k,\\ell,m,n}=(-1)^{\\ell-\\frac{m+n}{2}} \\binom{k}{\\ell} \\binom{k+\\ell} {m}\\binom{2k-m} {n}(m-1)!! 
(n-1)!!\n\\]\nand we use the convention $0^0=1$ for the case $x=0\\in{\\mathbb R}$.\n\\end{lemma}\n\\begin{proof} Note that the integral in the above statement can be written as\n\\begin{align*}\n\\int_{{\\mathbb R}^2} \\exp\\Big\\{-\\frac{1}{2}y_2^2(a_1+a_2+2\\varepsilon)-y_2 y_1(b_1-a_2-b_2-\\varepsilon)-\\frac{1}{2}y_1^2(c_1+c_2+a_2+2b_2+\\varepsilon)+\\iota y_1 x\\Big\\} y^{k}_2 (y_1-y_2)^{k} \\, dy.\n\\end{align*}\nThen the desired result follows from Lemma \\ref{lma1}.\n\\end{proof}\n\n\\begin{lemma} \\label{lma3} Assume that $k\\in{\\mathbb N}\\cup\\{0\\}$ and $\\varepsilon>0$. Then, for any $a,b,c\\in{\\mathbb R}$ with $a,c>0$ and $(a+\\varepsilon)(c+\\varepsilon)-b^2>0$, we have\n\\begin{align*}\n&\\frac{(-1)^k}{2\\pi}\\int_{{\\mathbb R}^2} \\exp\\Big\\{-\\frac{1}{2}(y_2^2a+2y_2 y_1b+y_1^2c)-\\frac{\\varepsilon}{2}(y_2^2+y_1^2)\\Big\\} y^{k}_2 y_1^{k} \\, dy\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad=\\sum^k_{\\ell=0, \\text{even}} \\frac{c_{k,\\ell} \\,b^{k-\\ell}}{((a+\\varepsilon)(c+\\varepsilon)-b^2)^{\\frac{2k-\\ell+1}{2}}},\n\\end{align*}\nwhere $c_{k,\\ell}=(\\ell-1)!! \\binom{k}{\\ell} (2k-\\ell-1)!!$.\n\\end{lemma}\n\\begin{proof} This follows from similar arguments as in the proof of Lemma \\ref{lma1}.\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm1}}\n\nIn this section, we give the proof of Theorem \\ref{thm1}.\n\n\\begin{proof} We divide the proof into several steps.\n\n\n\\noindent\n{\\bf Step 1.}\nRecall the definition of $L^{(\\mathbf{k})}_{\\varepsilon}(T,x)$ in (\\ref{epsilon}). 
Using Fourier transform,\n\\begin{align*}\nL^{(\\mathbf{k})}_{\\varepsilon}(T,x)\n&=\\frac{\\iota^{|\\mathbf{k}|}}{(2\\pi)^d} \\int^T_0\\int^T_0\\int_{{\\mathbb R}^d} e^{\\iota z\\cdot (X^{H_1}_u-\\widetilde{X}^{H_2}_v+x)}e^{-\\frac{\\varepsilon|z|^2}{2}} \\prod^d_{i=1}z^{k_i}_i\\, dz\\, du\\, dv.\n\\end{align*}\nHence\n\\begin{align*}\n{{\\mathbb E}\\,}[|L^{(\\mathbf{k})}_{\\varepsilon}(T,x)|^2]\n&=\\frac{(-1)^{|\\mathbf{k}|}}{(2\\pi)^{2d}}\\int_{[0,T]^4}\\int_{{\\mathbb R}^{2d}} e^{-\\frac{1}{2}\\big[{{\\mathbb E}\\,}(z_2\\cdot X^{H_1}_{t_2}+z_1\\cdot X^{H_1}_{t_1})^2+{{\\mathbb E}\\,}(z_2\\cdot \\widetilde{X}^{H_2}_{s_2}+z_1\\cdot \\widetilde{X}^{H_2}_{s_1})^2\\big]}\\\\\n&\\qquad\\qquad \\times e^{-\\frac{\\varepsilon}{2}(|z_2|^2+|z_1|^2)+\\iota(z_1+z_2)\\cdot x} \\prod\\limits^d_{i=1}z^{k_i}_{2,i} \\prod\\limits^d_{i=1}z^{k_i}_{1,i}\\, dz_2\\, dz_1\\, dt\\, ds,\n\\end{align*}\nwhere $z_1=(z_{1,1},\\cdots,z_{1,d})$ and $z_2=(z_{2,1},\\cdots,z_{2,d})$.\n\nFor $i=1,\\cdots, d$, we first introduce the following notations\n\\begin{align*}\nI_i(H,t_2,t_1,z_2,z_1)&=e^{-\\frac{1}{2} {{\\mathbb E}\\,}[z_{2,i}\\cdot (X^{H,i}_{t_2}-X^{H,i}_{t_1})+z_{1,i}\\cdot X^{H,i}_{t_1}]^2}\\\\\n\\widetilde{I}_i(H,t_2,t_1,z_2,z_1)&=e^{-\\frac{1}{2} {{\\mathbb E}\\,}[z_{2,i}\\cdot (\\widetilde{X}^{H,i}_{t_2}-\\widetilde{X}^{H,i}_{t_1})+z_{1,i}\\cdot \\widetilde{X}^{H,i}_{t_1}]^2}\\\\\nK_i(\\varepsilon,z_2, z_1)&=e^{-\\frac{\\varepsilon}{2}(z^2_{2,i}+z^2_{1,i})+\\iota(z_{1,i}+z_{2,i})x_i}z^{k_i}_{2,i}z^{k_i}_{1,i}.\n\\end{align*}\nThen we define\n\\begin{align*}\nF_1(t_2,t_1, s_2,s_1, x_i)&=\\frac{(-1)^{k_i}}{2\\pi}\\int_{{\\mathbb R}^2}I_i(H_1,t_2,t_1,z_2,z_1+z_2)\\widetilde{I}_i(H_2,s_2,s_1,z_2,z_1+z_2)K_i(\\varepsilon,z_2, z_1) \\, dz_{2,i}\\, dz_{1,i}\\\\\nF_2(t_2,t_1, s_2,s_1, x_i)&=\\frac{(-1)^{k_i}}{2\\pi}\\int_{{\\mathbb R}^2}I_i(H_1,t_2,t_1,z_2,z_1+z_2)\\widetilde{I}_i(H_2,s_2,s_1,z_1,z_1+z_2)K_i(\\varepsilon,z_2, z_1)\\, dz_{2,i}\\, dz_{1,i}.\n\\end{align*}\nNow we can obtain 
that\n\\begin{align} \\label{e1}\n{{\\mathbb E}\\,}\\Big[ |L^{(\\mathbf{k})}_{\\varepsilon}(T,x)|^2\\Big]&=\\frac{2}{(2\\pi)^d}\\left[\\int_{D}\\prod^d_{i=1}F_1(t_2,t_1, s_2,s_1, x_i)\\, dt\\, ds+\\int_{D}\\prod^d_{i=1}F_2(t_2,t_1, s_2,s_1, x_i)\\, dt\\, ds\\right] \\nonumber \\\\\n&=:\\frac{2}{(2\\pi)^d}(I_1(\\varepsilon)+I_2(\\varepsilon)),\n\\end{align}\nwhere $D=\\{00$ and\n\\begin{align*}\n\\Delta=\\frac{ac+a\\varepsilon+2c\\varepsilon+\\varepsilon^2-b^2+2b\\varepsilon}{a+2\\varepsilon}\\geq \\frac{c\\varepsilon+\\varepsilon^2}{a+2\\varepsilon}>0,\n\\end{align*}\nwhere we use $|b|\\leq \\sqrt{ac}\\leq \\frac{a+c}{2}$ in the first inequality.\n\nBy Lemma \\ref{lma1}, $F_1(t_2,t_1, s_2,s_1, x_i)$ equals\n\\begin{align*}\n\\sum^{k_i}_{\\ell=0}\\sum^{k_i+\\ell}_{m=0:\\text{even}}\\sum^{2k_i-m}_{n=0:\\text{even}} c_{k_i,\\ell,m,n} (a+2\\varepsilon)^{-\\frac{m+1}{2}} (\\frac{\\varepsilon-b}{a+2\\varepsilon})^{k_i+\\ell-m}\\Delta^{-(2k_i-m)-\\frac{1-n}{2}} x^{2k_i-m-n}_ie^{-\\frac{x^2_i}{2\\Delta}},\n\\end{align*}\nwhere $c_{k_i,\\ell,m,n}=(-1)^{\\ell-\\frac{m+n}{2}} \\binom{k_i}{\\ell} \\binom{k_i+\\ell} {m}\\binom{2k_i-m} {n}(m-1)!! (n-1)!!$.\n\n\nFor any $\\gamma>1$ and $(t_2,t_1,s_2,s_1)\\in D$, using the Cauchy-Schwartz inequality and properties {\\bf (P1)} and {\\bf (P2)}, we can show that\n\\begin{align*}\n\\big|{{\\mathbb E}\\,}[(X^{H_1,1}_{t_2}-X^{H_1,1}_{t_1})X^{H_1,1}_{t_1}]\\big|\\leq c_1\\big[(\\gamma^{H_1}+\\gamma^{-H_1})a+\\beta(\\gamma)a^{\\frac{1}{2}}\\big],\n\\end{align*}\nwhere $\\gamma^{H_1}a$ comes from the case $\\frac{1}{\\gamma}<\\frac{t_2-t_1}{t_1}<\\gamma$, $\\gamma^{-H_1}a$ from the case $\\frac{t_2-t_1}{t_1}\\geq \\gamma$, and $\\beta(\\gamma)a^{\\frac{1}{2}}$ from the case $\\frac{t_2-t_1}{t_1}\\leq \\frac{1}{\\gamma}$. 
Similarly,\n\\begin{align*}\n\\big|{{\\mathbb E}\\,}[(\\widetilde{X}^{H_2,1}_{s_2}-\\widetilde{X}^{H_2,1}_{s_1})\\widetilde{X}^{H_2,1}_{s_1}]\\big|\\leq c_2\\big[(\\gamma^{H_1}+\\gamma^{-H_1})a+\\beta(\\gamma)a^{\\frac{1}{2}}\\big].\n\\end{align*}\nHence\n\\begin{align*}\n\\Big|\\frac{\\varepsilon-b}{a+2\\varepsilon}\\Big|\n&\\leq \\frac{1}{2}+c_3\\frac{(\\gamma^{H_1}+\\gamma^{-H_1}+\\gamma^{H_2}+\\gamma^{-H_2})a+\\beta(\\gamma)a^{\\frac{1}{2}}}{a+2\\varepsilon}\\leq c_4\\Big(\\gamma^{H_1}+\\gamma^{H_2}+\\frac{\\beta(\\gamma)}{(a+2\\varepsilon)^{\\frac{1}{2}}}\\Big).\n\\end{align*}\n\nLet \\begin{align} \\label{dg}\nD_{\\gamma}=D\\cap\\Big\\{0<\\frac{t_2-t_1}{t_1}<\\frac{T\\wedge 1}{2\\gamma}, \\frac{T}{4}0$, the function $h(w)=\\frac{1}{w^{\\alpha}}e^{-\\frac{1}{w}}\\in(0,\\alpha^{\\alpha}e^{-\\alpha}]$ when $w\\in(0,+\\infty)$. Choosing $\\gamma$ large enough gives\n\\begin{align*}\n\\liminf_{\\varepsilon\\downarrow 0}\\frac{I_1(\\varepsilon)}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\n&\\geq \\liminf_{\\varepsilon\\downarrow 0} \\frac{(1-c_4\\beta(\\gamma))^d}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\\int_{D} (a+2\\varepsilon)^{-|{\\bf k}|-\\frac{d}{2}}\\, \\Delta^{-\\frac{d}{2}}e^{-\\frac{|x|^2}{2\\Delta}}dt\\, ds,\n\\end{align*}\nwhere $00$ and\n\\begin{align*}\n\\Delta'&\\geq \\frac{a_1e_2+a_1a_2+a_2e_1+2a_1b_2+2a_2b_1+2b_1b_2+\\varepsilon(e_1+e_2)+\\varepsilon^2}{a_1+a_2+2\\varepsilon}\\geq \\frac{\\varepsilon(e_1+e_2)+\\varepsilon^2}{a_1+a_2+2\\varepsilon}>0,\n\\end{align*}\nwhere we use $|b_1|\\leq \\sqrt{a_1e_1}\\leq \\frac{a_1+e_1}{2}$ and $|b_2|\\leq \\sqrt{a_2e_2}\\leq \\frac{a_2+e_2}{2}$ in the first two inequalities.\n\n\n\n\nBy Lemma \\ref{lma1}, $F_2(t_2,t_1, s_2,s_1, x_i)$ equals\n\\begin{align*}\n\\sum^{k_i}_{\\ell=0}\\sum^{k_i+\\ell}_{m=0:\\text{even}}\\sum^{2k_i-m}_{n=0:\\text{even}} c_{k_i,\\ell,m,n} (a_1+a_2+2\\varepsilon)^{-\\frac{m+1}{2}} 
(\\frac{\\varepsilon+b_2+a_2-b_1}{a_1+a_2+2\\varepsilon})^{k_i+\\ell-m}(\\Delta')^{-(2k_i-m)-\\frac{1-n}{2}} x^{2k_i-m-n}e^{-\\frac{x^2}{2\\Delta'}},\n\\end{align*}\nwhere $c_{k_i,\\ell,m,n}=(-1)^{\\ell-\\frac{m+n}{2}} \\binom{k_i}{\\ell} \\binom{k_i+\\ell} {m}\\binom{2k_i-m} {n}(m-1)!! (n-1)!!$.\n\nFor any $(t_2,t_1,s_2,s_1)\\in D$, using the Cauchy-Schwartz inequality and properties {\\bf (P1)} and {\\bf (P2)}, we can show that\n\\begin{align*}\n\\Big|\\frac{\\varepsilon+b_2+a_2-b_1}{a_1+a_2+2\\varepsilon}\\Big|\n&\\leq 1+\\frac{|b_2-b_1|}{a_1+a_2+2\\varepsilon}\\leq c_{10}\\Big(\\gamma^{H_1}+\\gamma^{H_2}+\\frac{\\beta(\\gamma)}{(a_1+a_2+2\\varepsilon)^{\\frac{1}{2}}}\\Big).\n\\end{align*}\nTherefore,\n\\begin{align*}\n\\liminf_{\\varepsilon\\downarrow 0}\\frac{I_2(\\varepsilon)}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\n&\\geq \\liminf_{\\varepsilon\\downarrow 0} \\frac{1}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\\int_{D} (a_1+a_2+2\\varepsilon)^{-|{\\bf k}|-\\frac{d}{2}}\\, \\Delta^{-\\frac{d}{2}}e^{-\\frac{|x|^2}{2\\Delta}}dt\\, ds\\\\\n&\\qquad\\qquad-c_{11}\\beta(\\gamma)\\limsup_{\\varepsilon\\downarrow 0} \\frac{1}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\\int_{D} (a_1+a_2+2\\varepsilon)^{-|{\\bf k}|-\\frac{d}{2}}\\, dt\\, ds\\\\\n&\\geq -c_{12}\\beta(\\gamma)\\limsup_{\\varepsilon\\downarrow 0} \\frac{1}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\\int_{D} ((t_2-t_1)^{2H_1}+(s_2-s_1)^{2H_2}+2\\varepsilon)^{-|{\\bf k}|-\\frac{d}{2}}\\,dt\\, ds\\\\\n&\\geq -c_{13}\\beta(\\gamma).\n\\end{align*}\nLetting $\\gamma\\uparrow+\\infty$ gives\n\\begin{align} \\label{e3}\n\\liminf_{\\varepsilon\\downarrow 0}\\frac{I_2(\\varepsilon)}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\\geq 0.\n\\end{align}\n\n\\noindent\n{\\bf Step 3.} Combining \\eref{e1}, \\eref{e2} and \\eref{e3} gives\n\\begin{align*}\n\\liminf_{\\varepsilon\\downarrow 0}\\frac{{{\\mathbb E}\\,}[|L^{(\\mathbf{k})}_{\\varepsilon}(T,x)|^2]}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\n&\\geq 
\\liminf_{\\varepsilon\\downarrow 0}\\frac{I_1(\\varepsilon)}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}+\\liminf_{\\varepsilon\\downarrow 0}\\frac{I_2(\\varepsilon)}{h^{d,|{\\bf k}|}_{H_1,H_2}(\\varepsilon)}\\geq c_{14}e^{-\\frac{|x|^2}{2c_{5}(T^{2H_1}+T^{2H_2})}}.\n\\end{align*}\nThis completes the proof.\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm2}}\n\nIn this section, we give the proof of Theorem \\ref{thm2}.\n\n\\begin{proof}\nBy Lemma \\ref{lma3},\n\\begin{align*}\n{{\\mathbb E}\\,}[|L^{(\\mathbf{k})}_{\\varepsilon}(T,0)|^2]\n&=\\frac{(-1)^{|\\mathbf{k}|}}{(2\\pi)^{2d}}\\int_{[0,T]^4}\\int_{{\\mathbb R}^{2d}} e^{-\\frac{1}{2}\\big[{{\\mathbb E}\\,}(z_2\\cdot X^{H_1}_{t_2}+z_1\\cdot X^{H_1}_{t_1})^2+{{\\mathbb E}\\,}(z_2\\cdot \\widetilde{X}^{H_2}_{s_2}+z_1\\cdot \\widetilde{X}^{H_2}_{s_1})^2\\big]}\\\\\n&\\qquad\\qquad \\times e^{-\\frac{\\varepsilon}{2}(|z_2|^2+|z_1|^2)} \\prod\\limits^d_{i=1}z^{k_i}_{2,i} \\prod\\limits^d_{i=1}z^{k_i}_{1,i}\\, dz_2\\, dz_1\\, dt\\, ds\\\\\n&=\\frac{1}{(2\\pi)^d}\\int_{[0,T]^4} \\prod^d_{i=1}\\Big(\\sum^{k_i}_{\\ell=0, \\text{even}} \\frac{c_{k_i,\\ell} \\,b^{k_i-\\ell}}{((a+\\varepsilon)(c+\\varepsilon)-b^2)^{\\frac{2k_i-\\ell+1}{2}}}\n\\Big)dt\\, ds,\n\\end{align*}\nwhere $a={{\\mathbb E}\\,}[(X^{H_1,1}_{t_2})^2]+{{\\mathbb E}\\,}[(\\widetilde{X}^{H_2,1}_{s_2})^2]$, $b={{\\mathbb E}\\,}[X^{H_1,1}_{t_2}X^{H_1,1}_{t_1}]+{{\\mathbb E}\\,}[\\widetilde{X}^{H_2,1}_{s_2}\\widetilde{X}^{H_2,1}_{s_1}]$ and $c={{\\mathbb E}\\,}[(\\widetilde{X}^{H_2,1}_{s_1})^2]+{{\\mathbb E}\\,}[(X^{H_1,1}_{t_1})^2]$. According to the assumption (i) $|\\mathbf{k}|$ is even or (ii) ${{\\mathbb E}\\,}[X^{H_1,1}_{t}X^{H_1,1}_{s}]\\geq 0$ and ${{\\mathbb E}\\,}[\\widetilde{X}^{H_2,1}_{t}\\widetilde{X}^{H_2,1}_{s}]\\geq 0$ for any $05\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$. The novelty of our study over previous ones is two-fold. 
First, we include information about the structural distortion of each galaxy to trace merger remnants (besides considering morphology and star formation properties). Second, as most objects in their evolution towards the Red Sequence must have passed transitorily through nearby Green Valley locations (F07), we have analysed the red galaxies both lying on the Red Sequence and at close positions on the Green Valley. Therefore, red galaxies in the context of this paper include both the galaxies on the Red Sequence and those in its neighbourhood. The galaxy classes resulting from the combination of morphological, structural, and star-formation activity properties allow us to trace the evolution of intermediate stages of major mergers and of their final remnants since $z\sim 1.5$. Finally, the observed number density evolution experienced by each galaxy type is used to carry out a set of novel observational tests defined on the basis of the expectations of hierarchical models, which provide, observationally and for the first time, the main evolutionary paths among the different red galaxy types over the last $\sim 9$\,Gyr. \n\nThe paper is organized as follows. In \S\ref{Sec:sample}, we provide a brief description of the survey. Section~\ref{Sec:RGSelection} is devoted to the definition of the mass-limited red galaxy sample. In \S\ref{Sec:classification}, we define the galaxy classes according to the global morphology, structural distortion level, and star formation enhancement of the red galaxies. In \S\ref{Sec:Error}, we comment on the sources of errors and uncertainties. Section~\ref{Sec:Tests} presents three novel tests to check the existence of any evolutionary links between the different red galaxy types, based on the expectations of hierarchical models of galaxy formation. The results of the study are presented in \S\ref{Sec:Results}. 
In particular, the results of the three tests proposed for the hierarchical scenario of E-S0 formation can be found in \S\ref{Sec:TestsResults}. The discussion and the main conclusions of the study are finally presented in \S\S\ref{Sec:Discussion} and \ref{Sec:Conclusions}, respectively. Magnitudes are provided in the Vega system throughout the paper. We assume the concordance cosmology \citep[$\Omega_\mathrm{m} = 0.3$, $\Omega_\Lambda = 0.7$, and $H_0 = 70$\,km s$^{-1}$ Mpc$^{-1}$, see][]{2007ApJS..170..377S}. \n\n\n\begin{figure}\n\includegraphics[width=0.5\textwidth,angle=0]{star_galaxy_separation.eps}\n\caption{Color-color diagram for distinguishing stars from galaxies ($I-K_{s}$ vs.~$V-I$). \emph{Dots}: Data from the original $K$-band selected catalogue. \emph{Solid line}: Color cut defined to isolate stars (i.e., the sequence of data located below the line) from galaxies (data lying above it). \emph{Circles}: Objects classified as \"stars\" or \"compact\" visually (see \S\ref{Sec:Morphology}). All objects identified as \"stars\" visually are located in the stellar region of this diagram. }\label{Fig:star-galaxy}\n\end{figure}\n\n\n\begin{figure}\n\includegraphics[width=0.5\textwidth,angle=0]{zspecRainbow_vs_zpRainbow_GRs_mag.eps}\n\caption{Spectroscopic redshifts vs. photometric redshifts for the red galaxies in the sample having spectroscopic confirmation in the DEEP2 catalog \citep{2003SPIE.4834..161D,2007ApJ...660L...1D}. Bright and faint galaxies in each one of the three wide redshift bins under consideration in the study are plotted with different symbols. 
The typical photometric redshift uncertainty for the red galaxy sample is below $\\Delta(z) \/ (1+z) <0.03$ at all redshifts for both bright and faint sources (see the estimates in the figure).}\\label{Fig:zspec-zphot}\n\\end{figure}\n\n\\section{The sample}\n\\label{Sec:sample}\n\nWe have combined multiwavelength data from the Rainbow Extragalactic Database\\footnote{Rainbow Extragalactic Database:\\\\https:\/\/rainbowx.fis.ucm.es\/\\-Rainbow\\_Database\/\\-Home.html} \\citep{2011ApJS..193...13B,2011ApJS..193...30B} and the GOYA photometric survey\\footnote{GOYA project (Galaxy Origins and Young Assembly):\\\\ http:\/\/www.astro.ufl.edu\/GOYA\/home.html} \\citep[see][]{2002INGN....6...11B} over an area of $\\sim 155$\\,arcmin$^{2}$ in the Groth Strip \\citep[$\\alpha=14^h 16^m 38.8^s$ and $\\delta=52^o 16' 52''$, see][]{1994AAS...185.5309G,1999AJ....118...86R,2002ApJS..142....1S}. The Rainbow Extragalactic Database compiles multi-wavelength photometric data from the UV to the FIR (and, in particular, in Spitzer\/MIPS 24$\\mu m$ band) over this sky area, providing analysis of spectral energy distributions of nearly 80,000 IRAC 3.6$+$4.5 $\\mu m$ selected galaxies. This study considers the photometric redshifts available in the Rainbow Database, which have a typical photometric redshift accuracy of $<\\Delta z\/(1+z)> = 0.03$ \\citep{2011ApJS..193...30B}, as derived for the sources with spectroscopic redshifts available in the DEEP2 Galaxy Redshift Survey \\citep{2003SPIE.4834..161D,2007ApJ...660L...1D}. 
The GOYA Survey is a survey covering the Groth Strip compiling photometry in four optical bands ($U$, $B$, $F606W$, and $F814W$) and in two near infrared ones ($J$ and $K_{s}$) with visual classifications available, reaching similar depths to the Rainbow data in similar bands \\citep[$U\\sim 25$, $B\\sim 25.5$, $K\\sim 21$ mag, see][]{2003ApJ...595...71C,2006ApJ...639..644E,2007RMxAC..29..165A,2008A&A...488.1167D}.\n\nWe have performed the sample selection starting from a $K$-band selected catalog in the field, reaching a limiting magnitude for 50\\% detection efficiency at $K\\sim 21$\\,mag. Several cuts have been performed to the original catalogue to obtain a mass-limited red galaxy sample. Firstly, red galaxies are selected as detailed in \\S\\ref{Sec:RGSelection}. This selection determines the redshift interval of the study, as it is restricted to the redshifts where the obtained number of red galaxies is statistically significant (i.e., to $0.3\\lesssim z\\lesssim 1.5$, see \\S\\ref{Sec:ClassificationSED}). \n\nAccording to the $M_{K}$-$z$ distribution of the red galaxies sample, the faintest absolute magnitude for which the catalogue is complete in luminosity up to $z\\sim 1.5$ corresponds to $M_{K,\\mathrm{lim}}\\sim -24$\\,mag. According to the redshift evolution of the mass-to-light relation (assuming a Salpeter IMF) derived by \\citet{2007A&A...476..137A} for a sample of quiescent bright galaxies, a red passive galaxy with this $K$-band absolute magnitude at $z\\sim 1.5$ has a stellar mass of $M_{*,\\mathrm{lim}}\\sim 5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$. Therefore, we have selected red galaxies with masses higher than $M_{*}\\sim 5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at each $z$ just considering all galaxies with $M_{K,\\mathrm{cut}} (z) = -23.3 - 0.45 z$ to account for their luminosity evolution. This luminosity cut is very similar to the one obtained by \\citet{2007MNRAS.380..585C}. 
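As an illustrative sketch only (the function name and catalogue layout below are hypothetical, not the actual survey pipeline), the evolving $K$-band luminosity cut $M_{K,\mathrm{cut}}(z) = -23.3 - 0.45\,z$ used to build the mass-limited sample can be applied as:

```python
# Hypothetical sketch of the redshift-dependent K-band luminosity cut used to
# build a mass-limited red galaxy sample (M* > 5e10 Msun). Magnitudes are in
# the Vega system; "brighter" means more negative.

def m_k_cut(z):
    """Faintest K-band absolute magnitude kept at redshift z."""
    return -23.3 - 0.45 * z

def select_mass_limited(galaxies):
    """Keep galaxies brighter than the evolving luminosity cut.

    `galaxies` is an iterable of (M_K, z) pairs (illustrative layout).
    """
    return [(mk, z) for mk, z in galaxies if mk < m_k_cut(z)]

sample = [(-24.5, 1.4), (-23.0, 0.5), (-23.6, 1.0)]
print(select_mass_limited(sample))   # only the first galaxy passes the cut
```

The same sketch with the intercept changed to $-24$ would reproduce the stricter cut for the $\mathrm{M}_*>10^{11}$\,M$_\odot$ comparison samples.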
Analogously, the equivalent luminosity cut to obtain a complete red galaxy sample for $M_{*,\\mathrm{lim}}=10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ would be: $M_K = -24-0.45 z$ (we will use this selection for comparing our results with those reported in other studies, see \\S\\ref{Sec:RGSelection}). The detection efficiency in the $K$-band drops below 0.9 for $m_K >20.5$\\,mag \\citep{2003ApJ...595...71C}, so we have checked that all galaxies in our mass-limited red sample exhibit apparent magnitudes brighter than this limit. \n\nWe have used the color-color diagram shown in Fig.\\,\\ref{Fig:star-galaxy} to remove stars from the sample. It represents the $I-K_{s}$ vs.~$V-I$ distribution for all the sources in the mass-limited red galaxy sample. Stars typically exhibit NIR colors bluer than galaxies, so they populate the lower region in the diagram. Attending to this bimodality of star-galaxy colors, we have defined a color-color cut to isolate galaxies from stars (see the solid line in the figure). The marked points include the stars and compact objects in the sample. We have checked that the stars identified according to it include all the objects at lower redshifts that have been classified as \"stars\" in the morphological classification performed in \\S\\ref{Sec:Morphology} (they are marked in Fig.\\,\\ref{Fig:star-galaxy}). \nAs the number of compact objects found in the sample is not statistically significant (see \\S\\ref{Sec:Morphology}), they have been excluded from this study. From an initial sample of 1809 sources from the original $K$-band selected catalogue, we finally have a mass-limited red galaxy sample of 257 systems at $0.32.0$, unless there is a large population of dust-reddened star-forming objects at all redshifts at $0.410^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, but for different mass-to-light transformation relations. 
Panel c presents the results obtained with the same selection criteria as panel b, but for red galaxies with $\\mathrm{M}_*>5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$. All data plotted in the same panel use equivalent color and mass selection criteria. \\emph{Filled black circles}: Results for our $K$-band selected data for each selection. \\emph{Rest of symbols}: Results obtained by \\citet{2007A&A...476..137A}, \\citet{2007MNRAS.380..585C}, F07, \\citet{2009ApJ...694.1171T}, \\citet{2010ApJ...709..644I}, \\citet{2010MNRAS.405..100M}, and \\citet{2011ApJ...727...51N}. Plotted data assume $h\\equiv H_0\/100 = 0.7$ and a Salpeter initial mass function. Cosmic variance and sample incompleteness contribute to the large dispersion found among different studies at $z<0.4$.\n\\label{Fig:comparacion}}\n\\end{center}\n\\end{figure*}\n\n\n\\subsection{Comparison with other studies}\n\\label{Sec:Comparison}\n\nThe red galaxy selection made in this study cannot be directly compared to the red galaxy samples obtained by most studies in the literature because, first, we have included red galaxies adjacent to the Red Sequence to study objects at transitory stages of their evolution towards it (which is not usual, see \\S\\ref{Sec:introduction}), and secondly, we have estimated masses using the $\\mathrm{M}_*\/L_K$-z relation derived by \\citet{2007A&A...476..137A} for different redshifts, whereas most authors use the $\\mathrm{M}_*\/L_V$-color relation derived by \\citet{2001ApJ...550..212B} or an equivalent relation. Moreover, most studies report the number evolution of red galaxies for masses $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, instead of for masses $\\mathrm{M}_*>5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ (as our case). 
To cross-check our results, we have constructed alternative red galaxy selections for $\mathrm{M}_*>10^{11}\ifmmode{\mathrm M_\odot}\else{M$_\odot$}\fi$, adopting the color cuts and\/or the mass-to-light relation used by other authors.\n\nThe three panels of Fig.\,\ref{Fig:comparacion} compare the redshift evolution of the number density of red galaxies derived from our data with the results of different authors, for analogous mass and color selections in each case. In panel a, we have assumed the $U-B$ color evolution derived by \citet{2001ApJ...553...90V} to select red galaxies (following F07), and the masses are estimated using the $\mathrm{M}_*\/L_V$-color relation by \citeauthor{2001ApJ...550..212B}. Only red galaxies with $\mathrm{M}_*>10^{11}\ifmmode{\mathrm M_\odot}\else{M$_\odot$}\fi$ at each redshift are considered in this panel. Panel b of the figure also assumes the color cut by \citeauthor{2001ApJ...553...90V} for selecting red galaxies, but the mass estimates assume the $\mathrm{M}_*\/L_K$ relation by \citeauthor{2007A&A...476..137A}, which includes evolutionary corrections \citep[it is equivalent to the one derived by][]{2007MNRAS.380..585C}. The number densities of red galaxies shown in panel b are also for galaxies with $\mathrm{M}_*>10^{11}\ifmmode{\mathrm M_\odot}\else{M$_\odot$}\fi$ at each redshift. Finally, panel c of the figure uses the same selection criteria as panel b, but the number densities of red galaxies have been computed for $\mathrm{M}_*>5\times 10^{10}\ifmmode{\mathrm M_\odot}\else{M$_\odot$}\fi$. 
Note that the results of our color selection (including galaxies on the Red Sequence and at nearby locations) are not plotted in this figure (see \\S\\ref{Sec:Results} and Table\\,\\ref{Tab:densities}).\n\nThe data from \\citet{2007A&A...476..137A}, \\citet{2007MNRAS.380..585C}, and F07 have been obtained by integrating their red galaxy luminosity functions at each redshift for $\\log(\\mathcal{M}_*\/\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi)>11$. In this case, the absolute magnitudes have been transformed into stellar masses using the expression derived for local E-S0's by \\citet{2006A&A...453L..29C}, considering the L-evolution of red galaxies derived by F07 and the redshift evolution of the $(B-K)$ color expected for E-S0's \\citep[]{1998ApJ...501..578S,2003A&A...404..831D,2007A&A...476..137A}. AB magnitudes have been transformed to the Vega system in the $B$ and $K$ bands according to \\citet{2007AJ....133..734B} transformations and considering galaxy colors derived for E-S0's by \\citet{1995PASP..107..945F}.\n\nThe good agreement between our results and those from independent studies for similar selection criteria supports the reliability of our methodology and completeness of our nominal red sample (compare the black filled circles with the rest of studies in all panels of Fig.\\,\\ref{Fig:comparacion}). However, although our first data point in panels b and c in the figure is inside the cloud of points within errors, it does not follow the trend of the other authors. So, our data at this redshift are probably affected by volume and cosmic variance effects.\n\n\\section{Classification of red galaxy types}\n\\label{Sec:classification}\n\nThe hierarchical picture of galaxy formation predicts that massive E-S0's are the result of the most violent and massive merging histories in the Universe (see references in \\S1 in EM10). 
To test this scenario, we need to distinguish between galaxies undergoing a major merger and normal E-S0's (see \S\ref{Sec:Tests}). Normal relaxed galaxies and major mergers differ basically in their structural distortion level. Major mergers also exhibit different global morphology and star formation enhancement depending on the gas content of the progenitors and the evolutionary stage of the encounter.\n\nA gas-rich major merger is expected to turn into a dust-reddened star-forming disk with noticeable structural distortions at intermediate and advanced stages of the encounter, basically since the coalescence of the two galaxies into a single galaxy body (this stage is known as the merging-nuclei phase). In earlier phases of the merger, the two galaxies can develop noticeable tidal tails and asymmetric structures, but the two bodies can still be distinguished and are not expected to suffer from enough dust reddening to lie nearby or on top of the Red Sequence. During the latest phase of the encounter (post-merger), the star formation is quenched and the remnant gets a more relaxed spheroidal structure until it transforms into a typical E-S0 \citep[see][]{2008MNRAS.384..386C,2008MNRAS.391.1137L,2010MNRAS.404..590L,2010MNRAS.404..575L}. Intermediate-to-late stages of gas-poor major mergers present a distorted spheroidal morphology and negligible levels of star formation, thus being quite red too \citep{2005AJ....130.2647V}. On the contrary, typical E-S0's present a spheroidal-dominated relaxed morphology, although they are also expected to be quite red due to their negligible star formation. Therefore, a gas-rich major merger is expected to be quite red from its merging-nuclei phase until its transformation into a typical E-S0, so we have traced major mergers once the two merging galaxies have merged into a single remnant, because they are expected to be quite red in any case. 
Accounting for this, we have classified the galaxies in our red sample attending to their global morphology, structural distortion level, and star formation enhancement to distinguish among normal galaxies and intermediate-to-advanced phases of major mergers.\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth,angle=0]{stamps2.eps}\n\\caption{False-color postage stamps of some red galaxies in our sample, obtained using the $V$ and $I$ bands. One example representative of each type is shown for each wide redshift bin used in the present study ($0.3100$, see][]{2000ApJ...529..886C,2003ApJS..147....1C,2009MNRAS.394.1956C}. But the galaxies in our red sample have $S\/N \\sim 40-50$ at most (as $I$-band magnitude errors are $\\sim 0.02-0.03$\\,mag typically), making our asymmetry estimates quite uncertain. Moreover, asymmetry indices are sensitive to environmental influences in the galaxy outskirts. This means that some galaxies identified as regulars according to our criteria (because they do not exhibit noticeable distortions in its whole body) may have a high asymmetry index because of tidal features in the outer parts. This obviously smudges the correlation between visual irregularity and computed asymmetries. \n\nWe have adapted the method by \\citet{1993ApJ...418...72Z} to quantify the irregularity level of galaxy morphology. These authors developed a procedure to classify an elliptical galaxy as regular or irregular, attending to the distortion level of their isophotes with respect to perfect ellipses. They fitted ellipses to the isophotes of each elliptical to obtain the radial profiles of the coefficients $a_3$, $b_3$, $a_4$, and $b_4$ of their Fourier expansion series. The peak value of each Fourier coefficient was identified along the radial profile. 
These authors considered the following criteria to distinguish among regular and irregular galaxies:\n\n\n\\begin{enumerate}\n \\item If all the peaks of the coefficients were small, this meant that the isophotes exhibited small deviations \n from perfect ellipses. Therefore, these galaxies could be considered as regulars.\n \\item If one of them was not small, then they differentiated between two possible cases:\n \\begin{itemize}\n \\item If the maximum value of the peaks of all coefficients did not correspond to $b_4$, then the isophotes deviated noticeably from ellipses, meaning that the galaxy was irregular.\n \\item If the peak of $b_4$ was the maximum among all the peaks, its classification depended on its trend with the radial position in the galaxy. If the profile of this coefficient changed from one type to another within 1.5 effective radius of the galaxy, then the galaxy was irregular. If not, it just implied that the galaxy was boxy or disky (depending on the sign of $b_4$), but the galaxy morphology could be considered as regular.\n \\end{itemize}\n\\end{enumerate}\n\nWe can adopt this method for our galaxies, as we are considering irregularities that must affect to the galaxy as a whole. We have limited the analysis in each red galaxy to the isophotes with a mean signal higher than 1.5 times the standard deviation of the sky signal per pixel. We have used the IRAF task \\texttt{ellipse} for fitting ellipses to the isophotes and for getting the third and fourth order coefficients of their Fourier expansion series. We have identified the peaks of each coefficient in the galaxy radial profile. \n\nThe task \\texttt{ellipse} uses a normalization to the surface brightness of the isophote that directly measure the deviations of the isophote from perfect ellipticity. 
According to de \\citet{1990AJ....100.1091P}, these deviations can be considered negligible if they are $< 5$\\%, a value that can be translated directly to a value of 0.05 in these coefficients. Therefore, we have adopted this limit as the critical value to distinguish between small and high values. We have defined the irregularity index $C_\\mathrm{irr,isoph}$ as the peak value of maximum absolute value among the peaks of the four Fourier coefficients. Therefore, according to \\citet{1993ApJ...418...72Z}, any galaxy that has $|C_\\mathrm{irr,isoph}|<0.05$ is regular. If not, it is irregular, except if $C_\\mathrm{irr,isoph}$ corresponds to the $b_4$ coefficient and this coefficient does not change between $|b_4|>0.5$ and $|b_4|<0.5$ values or viceversa along its radial profile. \n\nWe compare the results of the visual and quantitative classifications concerning the irregularity level of our red galaxies in Fig.\\,\\ref{Fig:IrregularClassification} for each wide redshift bin. The percentage of agreement between the visual and quantitative classifications into the regular type is $\\sim 77$\\% (decreasing from 83\\% to 70\\% from low to high redshifts). This percentage is slightly lower in the irregular type: $\\sim $66\\% (rising from 58\\% to 74\\% from low to high redshifts). The miss-classifications between both methods are $\\sim $23\\% for visually-identified regular galaxies (rising the confussion percentage from 17\\% to 30\\% from low to high z), and $\\sim $34\\% for visual irregulars (dropping from 42\\% at low z to 26\\% at high z). In general, both procedures coincide in $\\sim 78$\\% of the galaxy classifications at $0.3 1.5$ at all redshifts. This implies that the quantitative method is not biased towards more irregular types at high redshifts, as it is limited to the isophotes with enough $S\/N$ at all redshifts. 
Obviously, $C_\mathrm{irr,isoph}$ is derived from an intrinsic physical region in the galaxies smaller at high redshift than at low redshift, just because of cosmological effects. But, as commented above, we consider as irregular galaxies only the stages of advanced major mergers, which imply a noticeable distortion level in their whole bodies. The effects of cosmological dimming and resolution loss on the classification are analysed in \S\ref{Sec:TestVisual2}.\n\nIn conclusion, this test proves the robustness of the visual classifications into regular and irregular types at all redshifts.\n\n\subsubsection{Robustness of the observed morphology against cosmological effects}\n\label{Sec:TestVisual2}\n\nIn order to find out how the loss of spatial resolution, the cosmological dimming, and the change of rest-frame band with redshift are affecting our classification, we have simulated images of galaxies at different redshifts in the observed $I$-band. We have used \texttt{COSMOPACK} \citep{2003RMxAC..16..259B}, an IRAF package that transforms images of real galaxies to depict their appearance at a given redshift as observed with a given telescope, camera, and filter. The transformation includes K-corrections, change of observing band, repixelation to the scale of the observing system, convolution by the seeing, and noise from sky, detector, and dark current. \n\nStarting from the $I$-band image of a galaxy representative of one type at the lowest wide redshift bin ($0.311.1$ derived by \citet{2011ApJ...730...61K}. \emph{Yellow circles}: Redshift evolution of the SFR of galaxies with $10.8 < \log (\mathcal{M}_*\/\ifmmode{\mathrm M_\odot}\else{M$_\odot$}\fi) < 11.1$ derived by the same authors. \emph{Grey stars}: Galaxy data from \citet{2005ApJ...630...82P} with $F(24\mu m)> 85$\,$\mu$Jy. \emph{Black stars}: Galaxy data from the same authors with $F(24\mu m)< 85$\,$\mu$Jy. 
This limiting flux in 24\\,$\\mu$m naturally isolates galaxies with high (enhanced) SFR compared to the average value of the whole galaxy population at each redshift from those with low SFR, for the mass range considered here.} \n\\end{center}\n\\label{Fig:ksfr}\n\\end{figure}\n\n\\subsection{Classification according to star formation activity}\n\\label{Sec:ClassificationSED}\n\nA typical problem of studies based on red galaxy samples is to disentangle dust-reddened star-forming galaxies from quiescent ones \\citep[][]{2000MNRAS.317L..17P,2004ApJ...617..746D,2007ApJ...665..265F,2011MNRAS.412..591P}, because both galaxy populations are indistinguishable using only broad-band photometry at wavelengths $\\lesssim 10$\\,$\\mu$m \\citep{2006AJ....132.1405S}. In particular, the rest-frame $U-B$ color cannot differentiate between both red galaxy types adequately at $z>0.8$ (F07). Therefore, different selection techniques based on color indices, mid-IR data, or SED fitting have been developed to isolate both populations in red galaxy samples \\citep{1999ApJ...518..533L,2003A&A...401...73W,2006A&A...455..879Z,2009ApJ...691.1879W}.\n\nThe mid-IR emission and the SFR of a galaxy are known to be tightly correlated \\citep[][]{1998ARA&A..36..189K,2005ApJ...630...82P}. The higher sensitivities and spatial resolutions achieved by IR instruments in the last years have allowed the development of SFR indicators based on the emission of a galaxy in a single mid-IR band \\citep[]{2005ApJ...633..871C,2007ApJ...666..870C,2006ApJ...648..987P}. In particular, the 24\\,$\\mu$m band of the Multiband Imaging Photometer in the \\emph{Spitzer} Space Telescope (MIPS) is found to be a good tracer of the infrared emission coming from the dust heated by star-forming stellar populations \\citep{2006ApJ...650..835A}. 
\n\nIn this study, we have identified galaxies with noticeable star formation compared to the average SFR exhibited by the galaxies with similar masses at the same redshift, because this is evidence that mechanisms other than passive evolution are triggering it (such as tidal interactions, mergers, gas infall, or stripping). The SFR of a galaxy changes noticeably with redshift due to the natural evolution of its stellar populations. The specific SFR has decayed with cosmic time as $\sim (1+z)^{n}$ since $z=3$, with $n=4.3$ for all galaxies and $n=3.5$ for star-forming sources \citep{2011ApJ...730...61K}. This means that we must take into account the intrinsic rise of SFR with redshift to define when a galaxy is forming stars more efficiently than the average of the whole galaxy population at each redshift. Therefore, we have used {\it Spitzer}\/MIPS 24~$\mu$m data to discriminate red galaxies with enhanced SFRs from those with lower SFRs compared to the average SFR of the whole galaxy population at each redshift \citep[see also][]{2009ApJ...691.1879W}. A limiting flux in 24\,$\mu$m of $\sim 60$\,$\mu$Jy corresponds to the 5-$\sigma$ detection level on our MIPS 24~$\mu$m data \citep[][]{2011ApJS..193...13B}.\n\nIn Fig.\,\ref{Fig:ksfr}, we plot the $z$-evolution of the average SFR of galaxy populations with different mass ranges overlapping with ours (which is $\log (\mathcal{M}_*\/\ifmmode{\mathrm M_\odot}\else{M$_\odot$}\fi)>10.7$), as derived by \citet{2011ApJ...730...61K} from a deep 3.6\,$\mu$m-selected sample in the COSMOS field. We have overplotted the location of the galaxies from \citet{2005ApJ...630...82P} in the diagram, differentiating those with emission fluxes in MIPS 24~$\mu$m above 85\,$\mu$Jy from those below it. 
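This 85\,$\mu$Jy flux split amounts to a one-line classifier. The sketch below is a hypothetical illustration (flux values in $\mu$Jy, with the $\sim 60$\,$\mu$Jy 5-$\sigma$ detection level kept as a floor), not the actual reduction pipeline:

```python
# Sketch of the star-formation split on Spitzer/MIPS 24-micron flux described
# in the text: > 85 uJy -> enhanced star formation relative to the average
# population at that redshift ("HSF"); otherwise low star-forming ("LSF").
# Thresholds are taken from the text; the data layout is illustrative.

F24_HSF_LIMIT = 85.0   # uJy, enhanced-SFR threshold
F24_DETECTION = 60.0   # uJy, approximate 5-sigma MIPS detection level

def sfr_class(f24):
    """Classify a red galaxy from its 24-micron flux in microJy
    (None = undetected at 24 microns)."""
    if f24 is None or f24 < F24_DETECTION:
        return "LSF"   # undetected or below the detection floor
    return "HSF" if f24 > F24_HSF_LIMIT else "LSF"

print([sfr_class(f) for f in (120.0, 70.0, None)])   # ['HSF', 'LSF', 'LSF']
```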
Note that this limiting flux follows pretty well the redshift evolution of the average SFR of the whole galaxy population for our mass range, being a straightforward way for distinguishing galaxies with enhanced SFRs from those with low SFRs, compared to the average value of the galaxy population at each redshift. Therefore, we have considered red galaxies with a 24~$\\mu$m MIPS flux above $85$\\,$\\mu$Jy as systems with enhanced star formation compared to the global galaxy population at the same redshift (High Star-Forming galaxies or HSF, hereafter), whereas the galaxies with a 24~$\\mu$m flux below this limit are not substantially star-forming compared to it (so, they are Low Star-Forming ones or LSF, henceforth). \n\n\\begin{table}\n\\begin{minipage}{0.5\\textwidth}\n\\caption{Percentages of AGN contamination in the subsample of HSF red galaxies by morphological types.\\label{Tab:agns}}\n\\begin{center}\n\\begin{tabular}{lcccccccc}\n\\hline\n \\multirow{2}{*}{HSF} & \\multicolumn{2}{c}{$0.3 \\cdot (1+z_\\mathrm{phot}$), where $<\\Delta(z)\/(1+z)>$ is the average value obtained for this normalized dispersion at the redshift bin of the galaxy (see Fig.\\,\\ref{Fig:zspec-zphot}). Then, we have obtained the number densities corresponding to each simulated catalogue for each galaxy type and at each redshift bin, accounting for the different redshifts of each catalogue. The dispersion of the 100 values obtained for the number density at each redshift bin and galaxy type represents an estimate of the error associated to the photometric redshift uncertainties. \n\nStatistics of massive red galaxies can be dramatically affected by cosmic variance due to their high clustering \\citep{2004ApJ...600L.171S}. We have estimated cosmic variance using the model by \\citet{2011ApJ...731..113M}, which provides estimates of cosmic variance for a given galaxy population using predictions from cold dark matter theory and the galaxy bias. 
They have developed a simple recipe to compute cosmic variance for a survey as a function of the angular dimensions of the field and its geometry, the mean redshift and the width of the considered redshift interval, and the stellar mass of the galaxy population. We have considered the geometry and angular dimensions of our field, as well as the different redshift bins analysed in each case to estimate the cosmic variance. \\citeauthor{2011ApJ...731..113M} software provides these estimates in two mass ranges overlapping with ours: $10.5<\\log (\\mathcal{M}_*\/\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi)<11$ and $11<\\log (\\mathcal{M}_*\/\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi)<11.5$. Therefore, we have considered the mean cosmic variance of both mass ranges as a representative value of the cosmic variance of our mass-limited sample at each redshift bin. Cosmic variance depends on the redshift. At $0.310^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at $z=0$, just accounting for the effects of the major mergers strictly reported by observations since $z\\sim 1.2$ \\citep{2009ApJ...694..643L}. This model reproduces the observed evolution of the massive end of the galaxy luminosity function by color and morphological types. The evolutionary track described by F07 appears naturally in the model, as it considers the relative contribution of gas-poor and gas-rich mergers at each redshift reported by \\citet{2008ApJ...681..232L} and their different effects on galaxy evolution. \n\nThe advantage of this model is that its predictions are in excellent agreement with cosmological hierarchical models (despite being based on observational major merger fractions), reproducing observational data at the same time \\citep[see EM10;][]{2010arXiv1003.0686E}. 
Based on these predictions, we have defined some tests that observational data must fulfill if most massive E-S0's have really derived from major mergers occurred at relatively late epochs in the cosmic history. These predictions are the following ones:\n\n\\begin{enumerate}\n \\item Most present-day E-S0's with $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ are the result of at least one gas-rich major merger that place them on the Red Sequence since $z\\sim 1.2$.\n \\item In addition, $\\sim 75$\\% of the remnants resulting from these gas-rich events have been involved in a subsequent gas-poor major merger, occurred quite immediately. The remaining $\\sim 25$\\% have thus continued their evolution towards an E-S0 passing through a quiet post-merger phase. \n \\item The bulk of these major mergers are at intermediate-to-late stages during the $\\sim 2$\\,Gyr period elapsed at $0.710^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at $z=0$ must have been definitely built up during the $\\sim 2.2$\\,Gyr time period elapsed at $0.75\\times10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at each redshift. This means that we can ensure that we are in a position to trace back in time the potential progenitors of the present-day E-S0's with $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at $z\\sim 0$ that could have merged to create them during the last $\\sim 9$\\,Gyr (see \\S\\ref{Sec:sample}).\n\nTherefore, we have estimated the cumulative distribution of ID's and of IS's, and we have compared them with the redshift evolution of the number density of RS's since $z\\sim 1.5$. In the case that major mergers have not driven the assembly of the massive E-S0's as proposed by the EM10 model, the previous three tests must fail. 
On the contrary, if these three distributions agree pretty well, these tests will support strongly the existence of an evolutionary link between major mergers and the appearance of massive E-S0's, as expected by hierarchical scenarios of galaxy formation. The results of these tests are presented in \\S\\ref{Sec:TestsResults}. \n\nWe must remark that the EM10 model exclusively quantifies the effects of major mergers on galaxy evolution at $z<1.2$. Hence, it does not discard the contribution of different evolutionary processes to the definitive assembly of massive E-S0's, although it predicts that it must have been low, as most of their number density evolution can be explained just accounting for the effects of the major mergers. This seems to be confirmed by observations, as other evolutionary mechanisms (such as minor mergers, ram pressure stripping, or bars) seem to have been significant for the formation of the Red Sequence only for low and intermediate masses, but not for high masses \\citep[][]{2004ARA&A..42..603K,2007ApJ...660.1151D,2008A&A...489.1003D,2009ApJ...697.1369B,2009A&A...508.1141S,2010MNRAS.409..346C,2011MNRAS.411.2148K}. Moreover, the model does not exclude disk rebuilding after the major merger either. On the contrary, it is probably required for giving rise to a S0 instead of an elliptical, as indicated by observations \\citep{2005A&A...430..115H,2009A&A...507.1313H,2009A&A...496..381H,2009A&A...501..437Y}.\n\nMoreover, the EM10 model assumes that intermediate-to-late stages of major mergers are red and will produce an E-S0, on the basis of many observational and computational studies \\citep[see references in EM10 and][]{2002A&A...381L..68C,2007MNRAS.382.1415S,2010ApJ...714L.108S,2010A&A...518A..61C}. These assumptions are crucial for the model, as they are necessary to reproduce the redshift evolution of the luminosity functions selected by color and morphological type. 
Therefore, by testing whether our data are coherent with the existence of an evolutionary link between the advanced stages of major mergers in our red sample and the definitive buildup of massive E-S0's, we are also indirectly testing these assumptions of the EM10 model.\n\n\n\\subsection{Observational considerations for the tests}\n\\label{Sec:RSEvolution}\n\nAccording to \\citet{2006ApJ...652..270B}, the average number density of major mergers at a given redshift bin centered at $z$, $n_\\mathrm{m}(z)$, is related to the number density of the major mergers detected at a certain intermediate phase of the encounter in that redshift bin, $n_\\mathrm{det}([z_1,z_2])$, as follows:\n\n\\begin{equation}\\label{eq:density}\nn_\\mathrm{m}(z) = n_\\mathrm{det}([z_1,z_2])\\,\\frac{t([z_1,z_2])}{\\tau _\\mathrm{det}},\n\\end{equation}\n\n\\noindent with $t([z_1,z_2])$ being the time elapsed in the redshift bin, and $\\tau _\\mathrm{det}$ representing the detectability time of the intermediate merger stage under consideration. We have used this equation in our tests.\n\nWe find that the number densities of ID's and IS's remain quite constant with redshift (see \\S\\ref{Sec:Results}). As ID's and IS's correspond to intermediate merger stages, this means that the major merger rate must evolve smoothly with redshift, in good agreement with observational estimates of merger rates \\citep{2008ApJ...681..232L,2011ApJ...739...24B,2011ApJ...742..103L}. 
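The relation above simply rescales the number density of systems caught in a short-lived intermediate merger stage, $n_\mathrm{det}$, by the ratio of the time spanned by the redshift bin to the detectability time of that stage. A minimal numerical sketch (the density and timescales below are illustrative values, not measurements from this work):

```python
def merger_density(n_det, t_bin_gyr, tau_det_gyr):
    """Average number density of major mergers in a redshift bin:
    n_m = n_det * t([z1, z2]) / tau_det."""
    if tau_det_gyr <= 0:
        raise ValueError("detectability time must be positive")
    return n_det * t_bin_gyr / tau_det_gyr

# If an intermediate stage is detectable for ~0.5 Gyr and the bin spans
# ~2 Gyr, each detected system stands for ~4 mergers per unit volume.
n_m = merger_density(n_det=1e-4, t_bin_gyr=2.0, tau_det_gyr=0.5)
```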
This also indicates that the net flux of irregular galaxies appearing on the Red Sequence or at nearby locations on the Green Valley (i.e., the number of red irregulars created and destroyed per unit time in the red sample) must have been nearly constant at $0.35\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, according to their morphology and structural distortion level, for the three wide redshift bins considered in the study ($0.35\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ per redshift bins at $0.35\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at $0.3 10^ {-4}$\\,Mpc$^{-3}$. Although the fractions of red irregular spheroids and disks decrease with cosmic time, their densities remain quite constant at $0.35\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at $0.35\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$. \n\nAs commented in \\S\\ref{Sec:introduction}, the rise of the number density of red regular types with cosmic time and the constancy of that of irregular ones has been interpreted as a sign pointing to the conversion of irregulars into regulars with time. We provide observational evidence supporting the existence of this evolutionary link in \\S\\ref{Sec:TestsResults}. \n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics*[width=0.24\\textwidth,angle=0]{mip_RS_v2.eps}\n\\includegraphics*[width=0.24\\textwidth,angle=0]{mip_RD_v2.eps}\n\\includegraphics*[width=0.24\\textwidth,angle=0]{mip_IS_v2.eps}\n\\includegraphics*[width=0.24\\textwidth,angle=0]{mip_ID_v2.eps}\n\\caption{Redshift evolution of the comoving number density of red galaxies with $\\mathrm{M}_*>5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, according to their morphological, structural, and star formation activity properties. \\emph{Solid lines}: Number densities for LSF galaxies of each type. 
\\emph{Dashed lines}: Number densities of HSF galaxies of each morphological type. \\emph{Panel a)}: Regular spheroids (RS's). \\emph{Panel b)}: Regular disks (RD's). \\emph{Panel c)}: Irregular spheroids (IS's). \\emph{Panel d)}: Irregular disks (ID's). }\\label{Fig:mip_p_classification}\n\\end{center}\n\\end{figure*}\n\n\\subsection{Star formation activity according to red galaxy types since $z\\sim 1.5$}\n\\label{Sec:SFandMorphology}\n\nIn Fig.\\,\\ref{Fig:mip_p_classification} we show the redshift evolution of the number density of massive red galaxies for each morphological type defined in this study, attending to its star formation activity. We remark that HSF's are galaxies that show enhanced SFRs compared to the average SFR of the galaxy population at each redshift (\\S\\ref{Sec:ClassificationSED}). All types, except RS's, host a significant number of HSF galaxies, with percentages varying depending on type and redshift. Red regular spheroids are the galaxy types hosting the lowest fractions of HSF systems at all redshifts since $z\\sim 1.5$, as expected. Curiously, red RD's exhibit a noticeable increase of the HSF systems fraction at $z<0.7$ (panel b). Irregular types harbour enhanced SFRs typically (both spheroidal and disk systems), coherently with their merger-related nature (panels c and d in the figure). The percentage of HSF objects in these types has not changed at $0.35\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$), and for an alternative one which is identical to our nominal one in all aspects, except in the mass range ($\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$).\n\nThe EM10 model traces back in time the evolution of the E-S0's that have $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at $z=0$ (\\S\\ref{Sec:F07}). 
According to the model, these galaxies have been mostly assembled through major mergers at $0.7<z<1.2$, so the model traces the progenitors of these mergers at $z>0.7$, which have masses lower by a factor of $\\sim 2$ compared to the E-S0's resulting from the merger. Consequently, the model predictions on the number density of E-S0's at $z>0.9$ can be compared with our results for $\\mathrm{M}_*\\gtrsim 5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ (as both studies trace similar mass ranges globally). But at lower redshifts, the EM10 model traces E-S0's that already have $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, so it is comparable to the results with a mass selection $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$. \n\nAs Fig.\\,\\ref{Fig:acumIDSF}c shows, at $z>0.7$, the model reproduces much better the settlement of the RS's with $\\mathrm{M}_*\\gtrsim 5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, whereas at $z<0.7$ it clearly follows the trend of RS's with $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, as expected from the arguments given above. However, our data seem to have completeness problems at $z<0.5$, so we cannot ensure that the number density of E-S0's with $\\mathrm{M}_*> 10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ has remained constant since $z\\sim 0.7$. \n\nNevertheless, the number density of E-S0's with $\\mathrm{M}_*>10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ at $z\\sim 0.6$ estimated with our data is\nquite similar to the one estimated for these galaxies at $z=0$ \\citep{2003ApJ...599..997M}, as shown in panel c of Fig.\\,\\ref{Fig:acumIDSF}. 
This means that the number density of these objects has remained nearly constant since $z\\sim 0.6$, in good agreement with the predictions of the EM10 model.\n\nAlthough better data at $z<0.6$ are required to directly confirm this result, our data are coherent with the fact that E-S0's with $\\mathrm{M}_*> 10^{11}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$ have been definitively assembled since $z\\sim 0.6$, supporting that the bulk of the assembly of red massive E-S0's has occurred during the $\\sim 2.2$\\,Gyr period elapsed at $0.7 11$ seem to have been in place since $z\\sim 0.6$ (see \\S\\ref{Sec:TestsResults}). This is supported by the fact that many of these processes are observed to be relevant only for the evolution of galaxies with lower masses \\citep[in particular, the fading of stellar populations, see]{2002ApJ...577..651B,1998ApJ...496L..93D,2006A&A...458..101A}. \n\nWe must also remark that our red galaxy sample does not trace the evolution of S0's observed in the clusters, which seems to have been relevant since $z\\sim 0.4$-0.5 due to the environmental star-formation quenching of the spirals that fall into the clusters \\citep{2007ApJ...660.1151D,2009ApJ...697L.137P,2009A&A...508.1141S,2011MNRAS.412..246V}. Cluster S0's have typical masses lower than those selected here \\citep[$\\mathrm{M}_*\\lesssim 5\\times 10^{10}\\ifmmode{\\mathrm M_\\odot}\\else{M$_\\odot$}\\fi$, see][]{1999ApJS..122...51D,2006MNRAS.373.1125B}, and hence, our results do not apply to them in general. Moreover, some studies indicate that the fraction of S0's in groups is similar to that of clusters at $z < 0.5$ \\citep{2009ApJ...692..298W}. Considering that most galaxies reside in groups \\citep[$\\sim 70$\\%, see][]{2006ApJS..167....1B,2007ApJ...655..790C}, this means that the majority of S0's in the Universe are located in groups (not in clusters). 
Nevertheless, note that this evolution in clusters does not contradict the EM10 model at all, because this model exclusively analyses the effects of the major mergers on galaxy evolution since $z\\sim 1.2$, independently on the relevance of other evolutionary processes. \n\nTo summarize, our study supports that major mergers have been the main drivers of the evolution of the massive end of the Red Sequence since $z\\sim 1.5$, although other processes can also have contributed to it significantly at intermediate-to-low masses (especially, since $z\\sim 0.6$). Our tests support observationally a late definitive buildup of the massive E-S0's through major mergers (mostly at $0.6$ 1 MeV.}}\n\\label{fig:Nhits}\n\\end{figure}\nFigure~\\ref{fig:Nhits} shows the distribution of the number of hits in the drift chamber ( $N_{\\rm Hits}$ ) obtained for events with detected energy in the LYSO crystals $E_{\\rm LYSO} > $ 1 MeV. Events with $N_{\\rm Hits} >$ 9 are selected for the analysis of the charged secondary particles.\n\n\\noindent\nIn order to evaluate the setup acceptance and efficiency, and to optimize the particle identification analysis a detailed simulation has been developed using the FLUKA software release 2011.2~\\cite{Fasso'2003,Ferrari2005}. The detailed geometry description with the setup materials (air included) together with the trigger logic, the time resolution of the scintillator as well as the experimental space resolution of the drift chamber have been considered. The quenching effect in the scintillator has also been introduced in the Monte Carlo according to \\cite{Koba2011}.\nThe interaction of a sample of $10^9$ carbon ions with 80 MeV\/u, equivalent to $10^3$ s of data taking at the typical 1 MHz rate of beam, has been simulated. To identify the charged particles reconstructed in the drift chamber, we exploit the distribution of the detected energy in the LYSO detector $E_{\\rm LYSO}$ as a function of Time of Flight (ToF), Figure~\\ref{fig:Etof}. 
\nIn the data sample (left panel) a fast low-energy component due to electrons\nis clearly visible for ToF values around zero, in the area delimited by the first dashed line. \nThese electrons are produced by Compton scattering of the de-excitation photons induced by beam interactions in the PMMA material. \nThe central, most populated band, delimited by the two dashed lines, is made of protons with detected energy within a very wide range, which also originates the clearly visible saturation of the LYSO crystals QDC for $E_{\\rm LYSO} >$ 24 MeV. The FLUKA simulation (right panel) shows similar populations in the (ToF, $E_{\\rm LYSO}$) plane, with an additional component of deuterons, above the second dashed line, which is not present in the data.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics [width = 1\\textwidth] {Fig4ab.pdf}\n\\caption{\\small{ Distribution of the detected energy in the LYSO crystals as a function of the Time of Flight: Data (Left) and FLUKA Simulation (Right).\n}}\n\\label{fig:Etof}\n\\end{center}\n\\end{figure}\n\\noindent\nWe have then identified as a proton any charged secondary particle with ToF and $E_{\\rm LYSO}$ values inside the area delimited by the two dashed lines in Figure~\\ref{fig:Etof}. \nThe systematic uncertainty on the proton\/deuteron identification has been estimated using the data events in the deuteron area of the (ToF, $E_{\\rm LYSO}$) plane.\nFigure~\\ref{fig:beta} shows the distributions of $\\beta = \\frac{v}{c}$ and the corresponding detected kinetic energy $E_{\\rm kin}$ for the identified protons, obtained using the ToF measurement together with the distance between the LYSO crystals and the PMMA.\nThis detected kinetic energy can be related to the proton kinetic energy at emission time, $E^{\\rm Prod}_{\\rm kin}$, considering the energy loss in the PMMA and the quenching effect of the scintillating light for low energy protons. 
The minimum required energy to detect a proton in the LYSO crystals is $E^{\\rm Prod}_{\\rm kin} = 7.0\\pm 0.5$ MeV, evaluated using the FLUKA simulation, and a proton with an average detected kinetic energy $E_{\\rm kin}$ = 60 MeV has been emitted with $E_{\\rm kin}^{\\rm Prod} = 83\\pm 5$ MeV. The uncertainty is mainly due to the finite size of the beam spot, $\\mathcal{O}$(1 cm), and of its profile.\n\\begin{figure}[!htb]\n\\begin{center}\n\\includegraphics [width = \\textwidth] {Fig5ab.pdf}\n\\caption{\\small{Distribution of $\\beta = \\frac{v}{c}$ (left) and kinetic energy (right) of charged secondary particles identified as protons.}}\n\\label{fig:beta}\n\\end{center}\n\\end{figure}\n\n\\noindent\nIn order to use the secondary protons for monitoring purposes, the crossing of some centimeters of the patient's tissue has to be considered, and therefore the range $E_{\\rm kin} > $ 60 MeV of the detected kinetic energy distribution is the most interesting for the above-mentioned application.\nIn the following the proton kinetic energy detected in the LYSO crystals will be referred to as the kinetic energy.\n\\section{Production region of charged secondary particles}\n\\label{region}\nTracks reconstructed in the drift chamber are backward extrapolated to the PMMA position, to find the production region of charged secondary particles along the path of the carbon ion beam. The PMMA is mounted on a single axis movement stage allowing position scans along the x-axis to be performed with a 0.2 mm accuracy (Figure~\\ref{fig:Schema}). In the configuration with the centers of PMMA, drift chamber and LYSO crystals aligned along the z-axis, the PMMA position in the stage reference frame is taken as 0 and will be referred to as the reference configuration. 
\n\n\\noindent From each track reconstructed in the drift chamber and backward extrapolated to the beam axis we can measure the x and y coordinates of the estimated emission point of the charged secondary particle, named $x_{\\rm PMMA}$ and $y_{\\rm PMMA}$. The expected position of the Bragg peak obtained with the FLUKA simulation \\cite{Fasso'2003} is located at $(11.0 \\pm 0.5)$ mm from the beam entrance face of the PMMA. With the setup in the reference configuration, the expected position of the Bragg peak in our coordinate system is $x_{\\rm Bragg}|^{\\rm Ref} = (9.0 \\pm 0.5)$ mm. \nFigure~\\ref{fig:peak} shows the distribution of the reconstructed $x_{\\rm PMMA}$, compared to the expected distribution of the dose deposition in the PMMA, both obtained with the setup in the reference configuration. \n\\begin{figure}[!htb]\n\\begin{center}\n\\includegraphics [width = 0.8\\textwidth] {Fig6.pdf}\n\\caption{\\small{Expected dose deposition in the PMMA evaluated with FLUKA (hatched) compared to the distribution of $x_{\\rm PMMA}$ (solid), the emission point of charged secondary particles along the x-axis. The beam entrance and exit faces of the PMMA are at $x_{\\rm PMMA}$ = 2 cm and $x_{\\rm PMMA}$ = -2 cm, respectively.}}\n\\label{fig:peak}\n\\end{center}\n\\end{figure}\nThe mean of the gaussian fit to the distribution is $ \\bar{x}_{\\rm PMMA} = 17.1\\pm0.2$ mm, and consequently the separation between the BP and the peak from secondary proton emission is $\\Delta_{\\rm ProtonBragg} = 8.1 \\pm 0.5$ mm.\n Figure~\\ref{fig:reso_ekin} shows the distribution of the reconstructed $x_{\\rm PMMA}$ and $y_{\\rm PMMA}$ for all identified protons (solid line), for protons with $E_{\\rm kin} > $ 60 MeV (hatched) and for protons with $E_{\\rm kin} >$ 100 MeV (grey). The beam entrance and exit faces of the PMMA are at $x_{\\rm PMMA}$ = 2 cm and $x_{\\rm PMMA}$ = -2 cm, and $y_{\\rm PMMA}$ = 1.6 cm and $y_{\\rm PMMA}$ = -2.4 cm. 
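The separation between the proton emission peak and the Bragg peak quoted above follows from simple arithmetic on the fitted and expected positions, with the two uncertainties combined in quadrature (a sketch using the values from the text):

```python
def peak_separation(x_peak_mm, x_bragg_mm, sigma_peak_mm, sigma_bragg_mm):
    """Distance between the fitted secondary-proton emission peak and the
    expected Bragg peak, with uncertainties added in quadrature."""
    delta = x_peak_mm - x_bragg_mm
    sigma = (sigma_peak_mm**2 + sigma_bragg_mm**2) ** 0.5
    return delta, sigma

# x_bar_PMMA = 17.1 +/- 0.2 mm, x_Bragg = 9.0 +/- 0.5 mm (reference setup)
delta, sigma = peak_separation(17.1, 9.0, 0.2, 0.5)  # ~8.1 +/- 0.5 mm
```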
The $x_{\\rm PMMA}$ distribution is related to the range of the beam, while the $y_{\\rm PMMA}$ one to its transversal profile. \nQuite remarkably, the shape of the distribution of the emission point is approximately the same for protons emitted with different kinetic energies, i.e., the resolution on $x_{\\rm PMMA}$ does not depend critically on the $E_{\\rm kin}$ variable.%\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics [width = 1 \\textwidth] {Fig7ab.pdf}\n\\caption{\\small{Distribution of $x_{\\rm PMMA}$ (Left) and $y_{\\rm PMMA}$ (Right) obtained for all charged particles identified as protons (black solid line), for protons with $E_{\\rm kin} > $ 60 MeV (dashed line) and with $E_{\\rm kin} > $ 100 MeV (grey). The beam entrance and exit faces of the PMMA are at $x_{\\rm PMMA}$ = 2 cm and $x_{\\rm PMMA}$ = -2 cm, and $y_{\\rm PMMA}$ = 1.6 cm and $y_{\\rm PMMA}$ = -2.4 cm.}}\n\\label{fig:reso_ekin}\n\\end{center}\n\\end{figure}\n\n\\noindent\nThe existence of a relationship between the expected BP position and the peak of the $x_{\\rm PMMA}$ distribution, as a function of the PMMA position, in principle could allow us to follow the BP position using the $x_{\\rm PMMA}$ measurements. To estimate the accuracy of this method,\na position scan has been performed acquiring several data runs moving the PMMA by means of the translation stage.\n\n\\noindent\nFor each run with a different PMMA position, the production region of the protons has been monitored using the mean values of the gaussian fits to the $x_{\\rm PMMA}$ and $y_{\\rm PMMA}$ distributions, $\\bar{x}_{\\rm PMMA}$ and $\\bar{y}_{\\rm PMMA}$. 
\nSince $\\bar{y}_{\\rm PMMA}$ is the coordinate of the proton emission point along the vertical axis, and is related to the fixed beam profile in the transverse plane, its behaviour as a function of the PMMA position provides an estimate of the method's systematic uncertainty.\n\n\\noindent\nEach PMMA position in the stage reference frame can be translated into the expected Bragg peak position $x_{\\rm Bragg}$ for that given PMMA position.\nFigure~\\ref{fig:ADD} shows the results obtained for $\\bar{x}_{\\rm PMMA}$ and $\\bar{y}_{\\rm PMMA}$ as a function of $x_{\\rm Bragg}$, with $E_{\\rm kin} >$ 60 MeV protons. A clear linear relationship is observed between $\\bar{x}_{\\rm PMMA}$ and $x_{\\rm Bragg}$, indicating that the charged secondary particles emission reconstructed with the drift chamber follows accurately the BP movement.\nNo dependence of the $\\bar{y}_{\\rm PMMA}$ values on the Bragg peak position is observed, as expected from a translation of the PMMA along the x-axis only. \nSimilar results can be obtained using protons with different $E_{\\rm kin}$ selection, as it can be inferred from Figure~\\ref{fig:reso_ekin}.\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics [width = 1 \\textwidth] {Fig8.pdf}\n\\caption{\\small{Reconstructed peak position of the secondary proton emission distribution $\\bar{x}_{\\rm PMMA}$,$\\bar{y}_{\\rm PMMA}$ as a function of the expected Bragg Peak position $x_{\\rm Bragg}$, with $E_{\\rm kin} >$ 60 MeV.}}\n\\label{fig:ADD}\n\\end{center}\n\\end{figure} \n\n\\noindent\nTo estimate the achievable accuracy on the BP determination several contributions need to be considered.\nWe evaluated the difference $\\Delta_{\\rm ProtonBragg} = \\bar{x}_{\\rm PMMA} - x_{\\rm Bragg}$ for all identified protons and for the proton sample with $E_{\\rm kin} >$ 60 MeV. 
The $\\Delta_{\\rm ProtonBragg}$ root mean square is $\\sigma_{\\rm \\Delta_{\\rm ProtonBragg}}\\simeq$ 0.9 mm for both samples.\nThis can be explained as follows: in the sample with all identified protons the contribution to the total uncertainty due to the scattering is partially compensated by the larger statistics with respect to the sample with $E_{\\rm kin} >$ 60 MeV. Table \\ref{TAB:NPStat} reports the number of identified protons with $E_{\\rm kin} >$ 60 MeV obtained with the position scan data.\n\\begin{table}[!hbt]\n\\caption{Statistics of identified protons with $E_{\\rm kin} >$ 60 MeV, obtained with the position scan data.}\n\\begin{center}\n\\begin{tabular}{cccccccccccc}\n\\hline\n\\bs\n$x_{\\rm Bragg}$ (mm) & -19 & -15 & -11 & -9 & -5 & -3 & 1 & 5& 9& 11 & 13 \\\\\n\\bs\n\\hline\n\\bs\n$N^{\\rm 60~MeV}_{\\rm Protons}$ & 67& 77& 88& 61& 92& 75& 113& 154& 1223& 130& 83 \\\\\n\\bs\n\\hline\n\\end{tabular}\n\\label{TAB:NPStat}\n\\end{center}\n\\end{table}\n\n\\noindent\nThe uncertainty $\\sigma_{\\rm Extrapol}$ due to the backward extrapolation of the track from the drift chamber to the beam line can be estimated from the root mean square of the $\\bar{y}_{\\rm PMMA}$ values, $\\sigma_{\\bar{y}_{\\rm PMMA}} = \\sigma_{\\rm Extrapol}$ = 0.5 mm. The latter contributes to the $\\Delta_{\\rm ProtonBragg}$ distribution, together with $\\sigma_{\\rm Stage}$ = 0.2 mm from the uncertainty on the PMMA positioning. 
\nWe can then estimate the contribution to the total uncertainty coming from the shape of the distribution of the emission point of charged secondary particles as: \n\\begin{equation}\n\\sigma_{\\rm Emission} = \\sqrt{\\sigma_{\\rm \\Delta_{\\rm ProtonBragg}}^2 - \\sigma_{\\rm Extrapol}^2 - \\sigma_{\\rm Stage}^2} \\sim 0.7 \\text{mm}\n\\end{equation}\n\n\\noindent It must be stressed that this value represents only an indication of the precision achievable in the BP determination using secondary protons, due to the target thickness and homogeneity in the present setup, with respect to a possible clinical application.\n\\section{Flux of charged secondary particles}\n\\label{flux}\nThe flux of the secondary protons emitted from the beam interaction with the PMMA has been measured at 90$\\degree$ with respect to the beam direction and in the geometrical acceptance of the triggering LYSO crystals, configuration maximizing the sensitivity to the Bragg peak position. The surface of the LYSO is 3x3 cm$^2$, corresponding to a solid angle $\\Omega_{\\rm LYSO} = 1.3\\times 10^{-4}$ sr at a distance of 74 cm. The proton's kinetic energy spectrum measured with data has been inserted in the FLUKA simulation to evaluate the detection efficiency in the LYSO crystals for protons with $E_{\\rm LYSO} >$ 1 MeV: $\\epsilon_{\\rm LYSO}= (98.5\\pm 1.5)\\%$, with the uncertainty mainly due to the Monte Carlo statistics. To properly evaluate the rate of charged secondary particles reaching the LYSO crystals, the number of carbon ions reaching the PMMA target ($N_{\\rm C}$) has been computed according to \\cite{Agodi2012a}: counting the number of signals in the Start Counter ($N_{\\rm SC}$) within randomly-triggered time-windows of $T_w=2\\ \\micro\\second$, corrected for the Start Counter efficiency $\\epsilon_{\\rm SC} = (96 \\pm 1)\\%$, and the acquisition dead time. 
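The error budget above can be reproduced directly: the emission term is what remains after removing the extrapolation and stage contributions in quadrature from the total spread of $\Delta_{\rm ProtonBragg}$.

```python
def quadrature_subtract(total, *components):
    """Subtract known uncertainty contributions in quadrature from a total."""
    residual_sq = total**2 - sum(c**2 for c in components)
    if residual_sq < 0:
        raise ValueError("components exceed the total uncertainty")
    return residual_sq ** 0.5

# sigma_DeltaProtonBragg ~ 0.9 mm, sigma_Extrapol = 0.5 mm, sigma_Stage = 0.2 mm
sigma_emission = quadrature_subtract(0.9, 0.5, 0.2)  # ~0.72 mm
```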
\nThe number of emitted secondary protons $N_{\\rm P}$ has been measured with the $x_{\\rm PMMA}$ distribution counts, corrected for $\\epsilon_{\\rm SC}$, $\\epsilon_{\\rm LYSO}$, the tracking efficiency $\\epsilon_{\\rm Track} = (98 \\pm 1)\\%$~\\cite{Abou-Haidar2012} and the acquisition dead time.\n\n\\noindent\nThe double differential production rate of secondary protons emitted at 90$\\degree$ with respect to the beam line is estimated as:\n\\begin{equation}\n\\frac{d^2N_{\\rm P}}{dN_{\\rm C}d\\Omega}(\\theta=90\\degree)=\\frac{N_{\\rm P}}{N_{\\rm C}~~\\Omega_{\\rm LYSO}}.\n\\end{equation}\nFigure~\\ref{fig:flusso} shows the double differential production rate of secondary protons, emitted at 90$\\degree$ with respect to the beam line, as a function of the rate of the carbon ions $R_{\\rm C}$ reaching the PMMA: all identified protons and protons with $E_{kin} >$ 60 MeV.\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics [width = 1\\textwidth] {Fig9.pdf}\n\\caption{\\small{Double differential production rate for secondary particles emitted at 90$\\degree$ with respect to the beam line, as a function of the rate of the carbon ions $R_{\\rm C}$ reaching the PMMA target: all identified protons (triangles) and protons with $E_{\\rm kin} >$ 60 MeV (circles).\n}}\n\\label{fig:flusso}\n\\end{center}\n\\end{figure}\n\n\\noindent Expressing these results in terms of the secondary proton's kinetic energy at emission $E^{\\rm Prod}_{\\rm kin}$, we obtain:\n\\begin{eqnarray}\n\\fl\n\\frac{dN_{\\rm P}}{dN_{\\rm C}d\\Omega}(E^{\\rm Prod}_{\\rm kin} > 7 {\\rm ~MeV}, \\theta=90\\degree) = (9.56\\pm 0.18_{\\rm stat} \\pm 0.40_{\\rm sys})\\times 10^{-4} sr^{-1} \\\\\n\\fl\n \\frac{dN_{\\rm P}}{dN_{\\rm C}d\\Omega}(E^{\\rm Prod}_{\\rm kin} > 83 {\\rm ~MeV}, \\theta=90\\degree) = (2.69\\pm 0.08_{\\rm stat} \\pm 0.12_{\\rm sys})\\times 10^{-4} sr^{-1}\n\\end{eqnarray}\nwith the systematic contribution mainly due to proton identification and to the uncertainty on the 
production kinetic energy related to the beam's transversal profile uncertainty. \n\n\\noindent\nThe same experimental setup described in Section \\ref{setup} has been used to measure the differential production rate for prompt photons, with energy $E_{\\rm LYSO} >$ 2 MeV and emitted at 90$\\degree$ with respect to the beam line: $dN_{\\rm \\gamma}\/(dN_{\\rm C}d\\Omega)(E_{\\rm LYSO} > 2 {\\rm ~MeV} , \\theta=90\\degree) = (2.92 \\pm 0.19)\\times 10^{-4} sr^{-1}$~\\cite{Agodi2012a}.\n\n\\section{Discussion and conclusions}\nWe reported the study of secondary charged particles produced by the interaction of the 80 MeV\/u fully stripped carbon ion beam of the INFN-LNS laboratory in Catania with a PMMA target. \nProtons have been identified exploiting the energy and time of flight measured with a plastic scintillator together with LYSO crystals, and their direction has been reconstructed with a drift chamber. A detailed simulation of the setup based on the FLUKA package has been developed to evaluate its acceptance and efficiency, and to optimize the identification of secondary particles.\n\\\\\n\\noindent\nIt has been shown that the backtracking of secondary protons allows their emission region in the target to be reconstructed. Moreover, the existence of a correlation between the reconstructed production region of secondary protons and the Bragg peak position has been observed by performing a position scan of the target.\nThe achievable accuracy on the Bragg peak determination exploiting this procedure has been estimated to be in the submillimeter range, using the described setup and selecting secondary protons with kinetic energy at emission $E^{\\rm Prod}_{\\rm kin} >$ 83 MeV. \n\\\\\n\\noindent\nThe obtained accuracy on the position of the released dose should be regarded as an indication of the achievable accuracy for possible applications of this technique to monitor the BP position in hadrontherapy treatment. 
In fact, in a clinical application the secondary particles would cross a larger amount of material (patient tissue), resulting in an increased multiple scattering contribution worsening the BP resolution by, at most, a factor of 2-3. On the other hand, an optimized device allowing a closer positioning to the patient could greatly improve the collected statistics of protons produced with $E^{\\rm Prod}_{\\rm kin} >$ 80 MeV, reducing multiple scattering effects. Furthermore, the good intrinsic tracking resolution and the high detection efficiency easily achievable with charged particle detectors make this monitoring option worthy of further investigation.\n\n\\noindent\nThe measured differential production rate for protons with $E^{\\rm Prod}_{\\rm kin} >$ 83 MeV and emitted at 90$\\degree$ with respect to the beam line is: $dN_{\\rm P}\/(dN_{\\rm C}d\\Omega)(E^{\\rm Prod}_{\\rm kin} > 83 MeV , \\theta=90\\degree) = (2.69\\pm 0.08_{\\rm stat} \\pm 0.12_{\\rm sys})\\times 10^{-4} sr^{-1}$.\n\n\\ack\nWe would like to thank the staff of the INFN-LNS (Catania, Italy) accelerator group for their precious cooperation. The authors would like to thank Dr. M.~Pillon and Dr. M.~Angelone (ENEA-Fra\\-scati, Italy) for allowing us to validate the response of our detector to neutrons on the Frascati Neutron Generator; C.~Piscitelli (INFN-Roma, Italy) for the realization of the mechanical support; and M.~Anelli (INFN-LNF, Frascati) for the drift chamber construction. \n\n\\noindent\nThis work has been supported by the ``Museo storico della fisica e Centro di studi e ricerche Enrico Fermi''. 
\n\\section*{References}\n\\bibliographystyle{jphysicsBforPMB}\n\n\\section{Introduction}\nEvery particle accelerator needs a vacuum system to transport its particle beams with low losses. With a~certain probability beam particles interact with residual gas molecules. The scattering leads to immediate particle losses from the beam, or to a deterioration of the beam quality. Both effects are unwanted, and the beam losses can even lead to a problematic activation of accelerator components. For different situations, i.e. beam particle species and beam energies, the requirements for the quality of acceptable vacuum conditions can differ significantly. For example in a hadron accelerator with a section that is passed by the beam only once, a pressure of $10^{-5}\\,$mbar might be sufficient. On the other hand an~electron storage ring might need a base pressure of $10^{-10}\\,$mbar in the absence of beam. Consequently the~technology and physics of vacuum systems cover a broad range of effects and concepts, depending on the application. Vacuum systems for electron and proton storage rings are discussed for example in Ref. \\cite{benvenuti}, and a dedicated CERN school on accelerator vacuum is documented under Ref. \\cite{CERN}. Fundamentals of vacuum physics are discussed in Ref. \\cite{redhead}. \nFigure\\,\\ref{fig:p_overview} shows the range of pressure levels from ambient conditions down to the lowest pressures in accelerators. The pressure ranges for typical applications, the volume density of molecules and the mean free path of nitrogen molecules are indicated. 
The pressure range for beam vacuum is in technical language referred to as Ultra-High-Vacuum (UHV). One often speaks of dynamic vacuum when the presence of an intense beam affects the residual gas pressure.\n\n\\begin{figure}\n\\centering\\includegraphics[width=0.85\\textwidth]{Fig_01_Vac.png}\n\\caption{Gas pressures ranging from ambient conditions down to cold accelerator vacuum cover 13 orders of magnitude.}\n\\label{fig:p_overview}\n\\end{figure}\n\n\n\\section{Pressure and Gas Equation}\nPressure is basically a force that gas molecules apply to a surface by mechanical momentum transfer, averaged over a huge number of collisions during a macroscopic time interval. Pressure is measured as a~force per area. A common unit is Pascal, 1\\,Pa~=~1\\,N\/m$^2$. Another common unit is 1\\,mbar~=~$100$\\,Pa. The~average velocity of molecules depends on the square root of the temperature as shown in Eq. (\\ref{eq:velo}). Using the molecule density $n_v$ the average velocity can be related to the rate of molecules impinging on the wall of a vessel per area and per time.\n\n\\begin{equation}\n\\overline{v} = \\sqrt{ \\frac{8 k_b}{\\pi m_0} ~ T },~~ \\d{N}{A\\,dt} = \\frac{1}{4} n_v \\overline{v}\n\\label{eq:velo}\n\\end{equation}\n\nHere $k_b = 1.38\\times 10^{-23}\\,$J\/K is the Boltzmann constant. The gas equation is a fundamental law relating pressure, volume, temperature and amount of a gas:\n\n\\begin{equation}\nPV = N k_b T = nRT.\n\\label{eq:pv}\n\\end{equation}\n\n$R = 8.314\\,$N\\,m\\,\/\\,mole\\,K is the gas constant. The majority of vacuum systems operate at room temperature and we can consider the temperature as being constant. Consequently the product of pressure and volume is a measure of the amount of gas, related to the number of molecules $N$ or the number of moles $n$. 
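As a numeric illustration of the average-velocity and wall-flux relations above, the following sketch evaluates the molecule density and the wall-collision rate; the chosen pressure, temperature and nitrogen-like gas are illustrative values, not taken from the text.

```python
# Illustrative sketch: N2-like residual gas at 1e-6 mbar and room temperature.
import math

K_B = 1.380649e-23      # Boltzmann constant [J/K]
AMU = 1.66053907e-27    # atomic mass unit [kg]

def molecule_density(p_pa, t_k):
    """Volume density n_v = P / (k_B T), from the gas equation PV = N k_B T."""
    return p_pa / (K_B * t_k)

def mean_speed(mass_kg, t_k):
    """Average molecular speed, vbar = sqrt(8 k_B T / (pi m0))."""
    return math.sqrt(8.0 * K_B * t_k / (math.pi * mass_kg))

def wall_flux(p_pa, mass_kg, t_k):
    """Wall-collision rate per area, dN/(A dt) = n_v * vbar / 4."""
    return 0.25 * molecule_density(p_pa, t_k) * mean_speed(mass_kg, t_k)

p = 1e-4          # 1e-6 mbar expressed in Pa
t = 293.0         # room temperature [K]
m_n2 = 28 * AMU   # nitrogen molecule

print(f"n_v  = {molecule_density(p, t):.2e} m^-3")
print(f"vbar = {mean_speed(m_n2, t):.0f} m/s")
print(f"flux = {wall_flux(p, m_n2, t):.2e} m^-2 s^-1")
```

Even at this low pressure the sketch gives a density of order $10^{16}$ molecules per m$^3$ and roughly $3\times10^{18}$ wall collisions per m$^2$ and second.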
In practice a leak rate is often given in terms of mbar\\,l\/s, which is the amount of gas that is entering the considered recipient per unit of time.\nVacuum systems are equipped with pumps, devices that absorb gas molecules. A vacuum pump is characterised by its pumping speed $S = Q\/P$, quantified in l\/s at its interface. Here $Q$ is the gas load at the interface of the pump. The pumping speed varies for different gas species. Some chemically inactive gases, such as the inert gases He and Ar, but also methane (CH$_4$), are pumped less efficiently by several types of pumps such as titanium sublimation pumps and NEG pumps. Turbo pumps and cryo pumps are not based on chemical binding reactions and are suited for all gas types. \nIn vacuum systems we can discriminate two types of flow regimes: viscous flow and molecular flow. Viscous flow occurs for higher gas densities. The higher the density of a gas, the shorter the mean free path between two collisions of a gas molecule with others. The Knudsen number is the ratio between the mean free path and a typical dimension of the vacuum recipient. If the~mean free path is much larger than a typical dimension, we have molecular flow. With the cross section $\\sigma$ for collisions the mean free path $\\lambda$ is calculated as follows:\n\n\\begin{equation}\n\\lambda = \\frac{k_b T}{\\sqrt{2} \\sigma P}.\n\\end{equation}\n\nFor example, nitrogen residual gas at room temperature and a pressure of 10$^{-6}$\\,mbar has a mean free path of 60\\,m, which is much larger than the diameter of a typical vacuum pipe. Thus for particle accelerators we normally deal with molecular flow from a source of residual gas to a pump. We define the conductance of a vacuum component, e.g. an orifice or a piece of vacuum tube, as the ratio of the~molecular flow and the pressure drop across the element: \n\n\\begin{equation}\nC = \\frac{Q}{\\Delta P} . 
\n\\label{eq:cond}\n\\end{equation}\n\nThe conductance of an orifice of cross section $A$ can be estimated by:\n\\begin{equation}\nC = \\sqrt{\\frac{k_b T}{2\\pi M}} A\\, , ~~~\nC_\\mathrm{air} = 11.6 \\mathrm{[l\/s]} ~\nA \\mathrm{[cm^2]}.\n\\end{equation}\n\nOrifices may be used to realise a defined pumping speed for outgassing measurements. The conductance of a circular tube with diameter $d$ and length $l$ is given by:\n\\begin{equation}\nC = \\sqrt{\\frac{2 \\pi k_b T}{M}} \\frac{d^3}{l} \\, , ~\nC_\\mathrm{air} = 12.1 \\mathrm{[l\/s]} \\,\n\\frac{d^3 \\mathrm{[cm]}}{l \\mathrm{[cm]}}.\n\\end{equation}\n\nConductance varies with the type of gas, and in both formulas $M$ denotes the molecular mass of the~considered gas species. If two components are connected, the resulting conductance can be calculated from the individual conductances $C_1, C_2$. For a concatenation in series the resulting conductance is obtained by inverse addition: \n\n\\begin{equation}\nC_\\mathrm{total} = \\left( \\frac{1}{C_1} + \\frac{1}{C_2} \\right) ^{-1}.\n\\end{equation}\n\nAnd for a combination in parallel, simple addition has to be applied: \n\n\\begin{equation}\nC_\\mathrm{total} = C_1 + C_2. \n\\end{equation}\n\nIn a practical example an ion sputter pump of $400\\,$l\/s is connected to a recipient by a 30\\,cm long, $d=8\\,$cm tube. This results in an effective reduction of the pumping speed to $136\\,$l\/s. \n\nIn an accelerator vacuum system the operating pressure is established as a balance between release rate of free gas molecules and the removal rate that results from conductance and pumping. Gas molecules are released by different effects, such as thermal desorption from surfaces, beam induced desorption, diffusion of gas molecules from the bulk of material, permeation from ambient conditions through material, but also from leak rates through sealed connections between components. Among those effects thermal desorption is typically a significant contribution. 
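The arithmetic of the practical example above (a 400 l/s pump behind a 30 cm long, $d=8\,$cm tube) can be sketched as follows; the sketch uses the practical air formula $C = 12.1\,d^3/l$ quoted in the text.

```python
# Sketch of the series-conductance example from the text (air, room temperature).

def tube_conductance_air(d_cm, l_cm):
    """Molecular-flow conductance of a circular tube for air, C = 12.1 d^3 / l [l/s]."""
    return 12.1 * d_cm**3 / l_cm

def series(*elements):
    """Series combination of conductances (or a pump speed and a conductance):
    inverse addition."""
    return 1.0 / sum(1.0 / c for c in elements)

c_tube = tube_conductance_air(8.0, 30.0)   # conductance of the connecting tube
s_eff = series(400.0, c_tube)              # effective pumping speed at the recipient
print(f"C_tube = {c_tube:.0f} l/s, S_eff = {s_eff:.0f} l/s")
```

The tube alone has a conductance of about 207 l/s, and the effective pumping speed drops to roughly 136 l/s, reproducing the number given in the text.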
Because of thermal energy, physical and chemical bindings of gas molecules to the material of the wall can be broken up and molecules are released. The sojourn time of a molecule at the wall is an exponential function of the temperature: $\\tau \\propto \\exp{\\left( E_d\\,\/\\,k_b T \\right) }$. For example a binding energy of 1\\,eV at room temperature leads to a sojourn time of 5 hours. The qualitative behaviour of the pressure in a recipient during pump-down, that results from the mentioned outgassing mechanisms, is shown in Fig.\\,\\ref{fig:pdown}. In the first phase of the pumpdown the~amount of gas removed from the recipient per unit time is proportional to the pressure, and thus one observes an exponential pressure decay. At some point a balance is reached between pumping speed and release of gas molecules from the surface of the walls into the volume of the recipient. In this phase the pressure decays inversely with time until the surface is emptied. Afterwards the outgassing is dominated by molecules that are transported in the bulk material to the surface by thermal diffusion. According to the~nature of diffusion this process scales as $1\/\\sqrt{t}$. It is possible to reduce this contribution to outgassing by a heat treatment of the material under vacuum conditions, thereby enhancing diffusion speed \\cite{calder}. Finally an equilibrium is reached when the remaining pressure is dominated by gas that permeates from the outside through the wall of the vacuum vessel. Naturally the permeation rate is very small in typical situations, and diffusion coefficients in metals are relevant only for light molecules like hydrogen. \n\n\\begin{figure}\n\\centering\\includegraphics[width=0.90\\textwidth]{Fig_02_Vac.png}\n\\caption{Qualitative pump down behaviour of a recipient. 
Note the large logarithmic time scale on which $10^8$\\,s corresponds to three years.}\n\\label{fig:pdown}\n\\end{figure}\n\nSo far we have considered thermal outgassing of materials as the main source of gas load in a~vacuum recipient. In the presence of an intense particle beam the release of gas molecules from the~chamber walls can be drastically enhanced, by orders of magnitude, and those effects must be taken into account and quantified. A prominent effect is photo-desorption due to synchrotron radiation (SR), emitted by an electron or positron beam \\cite{fischer}. When a photon is absorbed on a metallic surface, the~photo effect may lead to the emission of a free electron. Absorption of the electron may then result in the~release of a gas molecule that was bound on the surface of the vacuum chamber. In a dipole magnet with bending radius $\\rho$ the number of photons emitted per unit length and time is given by:\n\n\\begin{equation}\n\\frac {dN_\\gamma}{dtds} = 1.28 \\cdot 10^{17} \\frac{I\\,[\\RM{mA}]\\,E\\,[\\RM{GeV}]}{\\rho\\,[\\RM{m}]}\\,.\n\\end{equation}\nHere $E$ is the beam energy and $I$ the beam current. The photons exhibit a statistical distribution of their energies and the given formula delivers the total number. More details on the properties of synchrotron radiation are given for example in Ref. \\cite{vacuumelectronic}. The SR induced desorption results in a specific outgassing rate per unit length of:\n\n\\begin{equation}\nq = \\eta \\, k_b\\, T\\, \\frac {dN_\\gamma}{dt\\,ds}\\,.\n\\end{equation}\n\nThe desorption yield $\\eta$ equals the number of gas molecules released per absorbed photon. It depends on the material of the chamber wall, the preparation of the material and foremost the conditioning of the surface under the bombardment with photons. For a surface cleaned with standard procedures one observes a decrease of the desorption yield inversely proportional to the total number of absorbed photons, e.g. Ref.\\,\\cite{billy}. 
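The photon-flux and desorption formulas above can be combined into a rough estimate of the SR-induced specific outgassing rate. In the sketch below the beam current, energy, bending radius and desorption yield are illustrative assumptions, not values from the text.

```python
# Hedged sketch of the SR-desorption estimate. The machine parameters
# (100 mA, 6 GeV, rho = 100 m) and the yield eta = 1e-6 (a well conditioned
# surface) are assumed example values.

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def photon_flux(i_ma, e_gev, rho_m):
    """Photons emitted per second and per metre of dipole, dN_gamma/(dt ds)."""
    return 1.28e17 * i_ma * e_gev / rho_m

def sr_outgassing_mbar_l(eta, t_k, i_ma, e_gev, rho_m):
    """Specific outgassing rate q = eta * k_B * T * dN_gamma/(dt ds).
    k_B T gives Pa m^3 per molecule; the factor 10 converts to mbar l."""
    return 10.0 * eta * K_B * t_k * photon_flux(i_ma, e_gev, rho_m)

flux = photon_flux(100.0, 6.0, 100.0)
q = sr_outgassing_mbar_l(1e-6, 293.0, 100.0, 6.0, 100.0)
print(f"photon flux = {flux:.2e} /(s m), q = {q:.1e} mbar l/(s m)")
```

For these assumed parameters the flux is of order $10^{18}$ photons per second and metre, and the resulting specific outgassing rate is of order $10^{-8}$ mbar l/(s m).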
The pressure obtained in a storage ring as a balance between beam induced desorption and installed pumping speed is called dynamic pressure as it depends on the beam intensity. In Fig.\\,\\ref{fig:petra} the~conditioning process of a NEG pumped vacuum chamber for the synchrotron light source PETRA-III is shown. The pressure rise per beam current is reduced by several orders of magnitude over the course of the conditioning process. In a well conditioned vacuum system the desorption yield $\\eta$ may reach values of $10^{-6}$ or lower. Rings with high intensity beams exhibit a high photon flux, and one might be tempted to expect a correspondingly high dynamic pressure. However, in this situation the conditioning process also advances faster, and so the pressure achieved after a certain operating time is often similar for different storage rings. The typical gas composition is dominated by H$_2$ and a $\\sfrac{1}{4}$ fraction of CO.\n\n\\begin{figure}\n\\centering\\includegraphics[width=0.85\\textwidth]{Fig_03_Vac.png}\n\\caption{The pressure rise per beam current is called dynamic pressure. It is shown here for a test chamber of the~PETRA-III storage ring as a function of integrated beam current.}\n\\label{fig:petra}\n\\end{figure}\n\nBesides the discussed mechanism of photo desorption there are other mechanisms causing beam induced desorption. If ion beams are accelerated, small losses of energetic ions may lead to high desorption rates. For example there are situations where a single ion, impinging on the wall of the vacuum chamber, may release of the order of $10^5$ gas molecules \\cite{sen}. Ion induced desorption cannot easily be reduced by conditioning. \n\nAnother important effect is the electron cloud instability \\cite{ecloud}. A free electron is accelerated in the~electromagnetic field of the beam and releases more than one secondary electron when absorbed on the chamber wall. 
Due to the multiplicative behavior of the process, the intensity of these electrons grows exponentially, resulting in a proportional growth of gas release in the vacuum chamber. A key factor for this process is the secondary emission yield, that is the number of electrons released after the~absorption of one electron. It depends on the type of material and can be reduced by coating of the~surface, for example with TiN or C. Another countermeasure is the application of a small longitudinal magnetic field, by winding a coil around the beampipe, to force the low energy free electrons onto spiral paths.\n\n\\section{Pressure Computation for One-Dimensional Systems}\n\nAccelerator vacuum systems are built for beam transport and are typically lengthy, i.e. the longitudinal dimension is much larger than the transverse dimensions. For a calculation of the pressure profile a one-dimensional treatment is then sufficient. We start the derivation of the corresponding diffusion equation from the previously discussed relation between gas flow $Q$ and conductance $C$ in Eq. (\\ref{eq:cond}). Note that we have to introduce at this point a minus sign that ensures positive gas flow towards smaller pressure. In the~limit of small differences we obtain a differential equation for the gas flow. Here a specific conductance $\\mathcal{C} = C \\Delta s$ is introduced, which is a property of the vacuum vessel cross section. \n\n\\begin{eqnarray}\nQ & \\propto & - \\d{P(s)}{s} \\nonumber \\\\\nQ(s) & = & -\\mathcal{C}\\cdot \\d{P(s)}{s} \n\\label{eq:flow}\n\\end{eqnarray}\n\nA second equation is the continuity equation for the gas flow, which expresses that the change in the amount of transported gas is given by the sum of outgassing from the wall minus the pumped gas of a considered section with infinitesimal length. 
Here we use the specific pumping speed $\\mathcal{S} = S\/\\Delta s$ and the specific outgassing rate $q$ per unit length in mbar\\,l\/m\\,s.\n\n\\begin{equation}\n\\d{Q(s)}{s} = q - \\mathcal{S}\\,P(s)\n\\label{eq:continuity}\n\\end{equation}\n\nThese two Eqs. (\\ref{eq:flow}) and (\\ref{eq:continuity}) can now be combined into a~second order diffusion equation for a~one-dimensional vacuum system with the independent coordinate $s$.\n\n\\begin{equation}\n\\d{}{s}\\,\\mathcal{C}\\,\\d{}{s}\\,P(s)-\\mathcal{S}P(s) + q = 0\n\\label{eq:diffeq}\n\\end{equation}\n\nDepending on the nature of the pumps two different types of solutions are obtained for Eq. (\\ref{eq:diffeq}). In the most common situation of systems with lumped pumps the pumping speed $\\mathcal{S}$ is non-zero only for short sections of the system. In-between these pumps the pressure distribution follows a quadratic function with a maximum half-way between a pair of pumps that are installed at a distance $l$:\n\n\\begin{equation}\nP(s) = \\frac{ql}{S}+ \\frac{q}{8\\mathcal{C}} \\left(l^2-4s^2\\right).\n\\label{eq:qprofile}\n\\end{equation}\n\nPeak pressure and average pressure for this situation are given by:\n\n\\begin{equation}\nP_\\RM{avg} = ql\\,\\left(\\frac{1}{S}+\\frac{l}{12\\,\\mathcal{C}}\\right), \\quad\nP_\\RM{max} = ql\\,\\left(\\frac{1}{S}+\\frac{l}{8\\,\\mathcal{C}}\\right).\n\\label{eq:aprofile}\n\\end{equation}\n\nThese expressions contain two terms. Even with infinite pumping speed, which removes the first term, the pressure is still limited by the conductance of the vacuum vessel through the second term. The economy of a technical solution is thus to be optimized w.r.t. density and size of the vacuum pumps, and the transverse dimensions of the vacuum vessel that determine the specific conductance.\nBesides lumped pumps, distributed pumps are also used in accelerator vacuum systems. Distributed pumps can be realised by NEG strips or NEG coating, and by distributed ion sputter pumps that utilize the magnetic field of accelerator magnets. 
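The quadratic profile between lumped pumps can be evaluated numerically; in the sketch below the outgassing rate, pump spacing, pump speed and the specific conductance of a $d=10\,$cm air-filled pipe are assumed example values.

```python
# Minimal sketch of the lumped-pump pressure profile. All parameter values
# below are illustrative assumptions (units: mbar, m, l/s, l m/s).

def pressure_profile(s, q, l, S, c_spec):
    """P(s) between two pumps located at s = -l/2 and s = +l/2."""
    return q * l / S + q / (8.0 * c_spec) * (l**2 - 4.0 * s**2)

def p_max(q, l, S, c_spec):
    """Peak pressure half-way between the pumps."""
    return q * l * (1.0 / S + l / (8.0 * c_spec))

def p_avg(q, l, S, c_spec):
    """Average pressure over the cell."""
    return q * l * (1.0 / S + l / (12.0 * c_spec))

q = 1e-9        # specific outgassing [mbar l/(s m)], assumed
l = 10.0        # pump spacing [m], assumed
S = 100.0       # lumped pump speed [l/s], assumed
c_spec = 121.0  # specific conductance of a d = 10 cm pipe for air [l m/s], approx.

print(f"P_max = {p_max(q, l, S, c_spec):.2e} mbar")
print(f"P_avg = {p_avg(q, l, S, c_spec):.2e} mbar")
```

For these assumed numbers the conductance term and the pump term are of comparable size, which is the regime in which the pump spacing is economically chosen.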
For the case of non-zero pumping speed we obtain an exponential solution of Eq. (\\ref{eq:diffeq}). At some distance from neighbouring sections the pressure in a long section with distributed pumps approaches a value of $P=q\/\\mathcal{S}$. Figure\\,\\ref{fig:v2} shows an exemplary calculation of a pressure distribution with lumped pumps in the right part and a long distributed pump in the left part of the graph.\n\n\\begin{figure}\n\\centering\\includegraphics[width=0.925\\textwidth]{Fig_04_Vac.png}\n\\caption{Simulated pressure profile in the upper part of the image. The lower part shows the pumping speed along the beamline. A $d=10\\,$cm beam pipe is assumed for this demonstration. While the left section contains a 20\\,m long distributed pump, the right section has 5 lumped pumps installed.}\n\\label{fig:v2}\n\\end{figure}\n\nPressure profiles can also be calculated in a numerical approach using matrix multiplications \\cite{ziemann}: \n\n\\begin{eqnarray}\n\\left(\\begin{array}{c} P(l) \\\\ Q(l) \\end{array} \\right) & = &\n\\left( \\begin{array}{cc}\n\\cosh(\\alpha l) & -\\frac{1}{\\mathcal{C}\\alpha} \\sinh(\\alpha l) \\\\\n-\\alpha \\mathcal{C} \\sinh( \\alpha l) & \\cosh(\\alpha l) \\end{array} \\right)\n\\left( \\begin{array}{c} P(0) \\\\ Q(0) \\end{array} \\right)\n+ \\frac{q}{\\alpha} \\left(\n\\begin{array}{c} \\frac{1-\\cosh(\\alpha l)}{\\alpha \\mathcal{C}} \\\\\n\\sinh(\\alpha l) \\end{array} \\right). \\hphantom{8888} \n\\label{eq:matrix}\n\\end{eqnarray}\n\nHere the variable $\\alpha = \\sqrt{\\mathcal{S}\/\\mathcal{C}}$ was introduced. For sections without pumping, $\\mathcal{S}=0$, the elements containing $\\alpha$ in Eq. (\\ref{eq:matrix}) have to be taken in the limit $\\alpha\\rightarrow 0$, leading to the quadratic profile Eq. (\\ref{eq:qprofile}). So far we have considered the static situation of equilibrium pressure profiles. 
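The matrix propagation above translates directly into code. The sketch below assumes consistent units (pressure in mbar, specific conductance in l m/s, specific pumping speed in l/(s m), outgassing in mbar l/(s m)) and handles the pump-free limit explicitly.

```python
# Sketch of the transfer-matrix propagation of (P, Q) through one section.
import math

def propagate(p0, q0, l, c, s, q):
    """Propagate pressure p0 and gas flow q0 through a section of length l
    with specific conductance c, specific pumping speed s and specific
    outgassing q. For s = 0 the quadratic (alpha -> 0) limit is used."""
    if s == 0.0:
        p1 = p0 - q0 * l / c - q * l**2 / (2.0 * c)
        q1 = q0 + q * l
        return p1, q1
    a = math.sqrt(s / c)
    ch, sh = math.cosh(a * l), math.sinh(a * l)
    p1 = ch * p0 - sh / (c * a) * q0 + q * (1.0 - ch) / (a * a * c)
    q1 = -a * c * sh * p0 + ch * q0 + q / a * sh
    return p1, q1
```

Two quick consistency checks: in a symmetric pump-free cell with inflow $Q(0)=-ql/2$ the pressure at both ends is equal, and in a distributed-pump section starting at the equilibrium $P=q/\mathcal{S}$ the state stays at that equilibrium.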
This is a special case of the general time dependent diffusion equation: \n\n\\begin{equation}\n\\mathcal{V} \\d{}{t} P(s,t) = \\d{}{s}\\,\\mathcal{C}\\,\\d{}{s}\\,P(s,t)\n-\\mathcal{S}P(s,t) + q.\n\\label{eq:timediff}\n\\end{equation}\n\nThe specific volume $\\mathcal{V}$ denotes the volume of the vessel per unit length. This equation describes also dynamic evolutions of the pressure profile, for example when He gas is injected for reasons of leak search. By comparison with a classical diffusion equation $\\d{}{t} f(x,t) = \\d{}{x}\\,\\mathcal{D}\\,\\d{}{x} f(x,t)$ that is known from the literature, we identify the~diffusion coefficient of our problem as:\n\n\\begin{equation}\n\\mathcal{D} = \\frac{<\\Delta x^2>}{<\\Delta t>} = \\frac{\\mathcal{C}}{\\mathcal{V}}.\n\\label{eq:diffcoeff}\n\\end{equation}\n\nUsing this parameter one can estimate that it takes 3 seconds for He gas to travel a 5\\,m distance in a 2\\,cm pipe. Figure\\,\\ref{fig:diff_He} shows the time evolution of the pressure distribution for a Delta-function like inlet of He gas in such a pipe.\n\n\\begin{figure}\n\\centering\\includegraphics[width=0.85\\textwidth]{Fig_05_Vac.png}\n\\caption{A pressure bump of He in this example diffuses in a 2\\,cm diameter tube at limited speed.}\n\\label{fig:diff_He}\n\\end{figure}\n\nThe shown analytic calculations allow one to estimate the pressure in vacuum systems and to calculate the required number of pumps and their distance. For complex geometries Monte Carlo methods for pressure calculations might be more accurate. The code MolFlow \\cite{molflow} follows individual gas molecules on their scattered path through a vacuum system, including sticking probabilities and sojourn times. Pressure and other variables are then calculated as statistical averages. 
For lepton storage rings the~code may be augmented by another code, SynRad \\cite{synrad}, which allows the photo desorption process to be simulated.\n\n\\section{Requirements for Accelerator Vacuum Quality}\n\nIn order to assess the required vacuum quality in terms of pressure or density it is necessary to study the~different mechanisms of beam gas interaction. Scattering of beam particles may lead to immediate loss of beam particles, or to a degradation of the beam properties. The number of lost particles $\\Delta N_b$ from a beam passing through a section of length $\\Delta l$ can be estimated in the following way:\n\n\n\\begin{eqnarray}\n\\Delta N_b & = & -N_b \\times \\frac{\\mathrm{area\\,of\\,molecules}}{\\mathrm{total\\,area}} \\nonumber \\\\\n& = & -N_b \\times \\frac{n_v V \\sigma}{V\/\\Delta l} \\nonumber \\\\\n& = & -N_b n_v \\sigma \\Delta l \\nonumber \\\\\n& = & -N_b n_v \\sigma \\beta c \\Delta t \\, . \\nonumber\n\\label{eq:cross}\n\\end{eqnarray}\n\nHere $\\sigma$ is the cross section of a generic interaction process, $N_b$ is the number of beam particles and $n_v$ is the volume density of the residual gas. If the considered scattering mechanism leads to the loss of the particle, converting this relation into a differential equation yields an exponential decay of the beam intensity:\n\n\\begin{equation}\nN_b(t) = N_0 \\exp \\left( - \\sigma \\beta c n_v \\, t \\right), \n~\\tau = \\frac{1}{\\beta c \\, \\sigma \\, n_v}.\n\\label{eq:lifetime}\n\\end{equation}\n\nThe beam lifetime $\\tau$ is deduced from the exponential solution, and for most situations we can safely assume $\\beta=1$. In the following we consider the effect of different scattering mechanisms for electrons and protons.\\hfill\\\\\n\n\\begin{figure}\n\\centering\\includegraphics[width=0.85\\textwidth]{Fig_06_Vac.png}\n\\caption{A beam with $N_b$ particles passes through a volume with residual gas density $n_v$. 
The gas molecules exhibit an effective cross section $\\sigma$ that describes the interaction probability with the beam for specific processes.}\n\\label{fig:beam_gas}\n\\end{figure}\n\n\\subsection{Coulomb Scattering for Electrons}\n\n\\noindent\nThis process is described by the well known formula for Rutherford Scattering, which gives the differential cross section for the occurrence of a scattering angle $\\theta$:\n\n\\begin{equation}\n\\frac{d\\sigma_i}{d\\Omega} = \\frac{Z_i^2\\,r_e^2}{4\\gamma^2}~\n\\frac{1}{\\sin^4(\\theta\/2)}.\n\\label{eq:rutherford}\n\\end{equation}\n\nThe charge of the residual gas atom is $Z_i$ and $r_e=2.8\\,$fm the classical electron radius. If we integrate this differential cross section from the angle $\\theta_0$ above which particles are lost to $\\pi$ and use $\\theta_0 \\ll 1$, the total elastic scattering cross section for particle loss is:\n\n\\begin{equation}\n\\sigma_{i, \\RM{el}} = \\frac{2\\pi\\,Z^2_i\\,r_e^2}{\\gamma^2}~\n\\frac{1}{\\theta_0^2}.\n\\label{eq:elastic}\n\\end{equation}\n\nThe limiting angle $\\theta_0$ can be estimated from a typical value for the $\\beta$-function $\\overline{\\beta_y}$ and the minimum aperture $A_y$ of the accelerator: $\\theta_0 = A_y\/\\overline{\\beta_y}$. Using the parameters for the vertical plane is sufficient since the vertical aperture is usually smaller than the horizontal one in electron storage rings. Average values of the $\\beta$ functions at the locations of particle scattering and particle loss have to be used. Combining the~above relations and carrying out the sum over different atom species we obtain the following formula for the beam lifetime due to elastic scattering:\n\n\\begin{equation}\n\\tau^{-1}_\\RM{el} = \\frac{2\\pi r_e^2 c}{\\gamma^2}~\n\\frac{\\overline{\\beta_y}^2}{A_y^2} \\sum_i n_i \\sum_j k_{ij} Z_j^2.\n\\label{eq:telastic}\n\\end{equation}\n\nHere $k_{ij}$ is the number of atoms of type $j$ within the molecule of type $i$. 
By inserting numbers for the fundamental constants and expressing the gas density in terms of pressure at room temperature we obtain the following formula for the beam lifetime in electron rings due to elastic scattering:\n\n\\begin{equation}\n\\tau_\\RM{el}\\,[\\RM{h}] = 2839~\\frac{E^2\\,[\\RM{GeV}^2]\\,\\,A_y^2\\,[\\RM{mm}^2]}\n{\\overline{\\beta_y}^2\\,[\\RM{m}^2]}~\n\\left( \\sum_i P_i\\,[\\RM{pbar}] \\sum_j k_{ij} Z^2_j \\right)^{-1}.\n\\label{eq:handy}\n\\end{equation}\n\nNote that the quadratic function of $Z$ causes a sensitive dependence of the beam lifetime on the~presence of heavy gas species in the gas composition.\\hfill\\\\\n\n\\subsection{Bremsstrahlung} \\noindent\nDue to deceleration of a beam particle in the Coulomb field of a residual gas atom and the emission of a high energy photon, the particle may leave the energy acceptance of the accelerator. The important parameter in this context is the largest allowed relative energy deviation for the particles to stay confined within the beam: $\\delta_E = \\Delta E\/E_0$. The cross section for the inelastic process is\n\n\\begin{equation}\n\\sigma_\\RM{inel} \\approx -\\frac{4}{3}\\,\\, \\frac{V_n}{N_A}\\,\\,\\frac{1}{X_0}\\,\\ln\\,\\delta_E.\n\\label{eq:inelastic}\n\\end{equation}\n\nFrom this the following lifetime $\\tau_\\RM{brems}$ is computed. For a gas mixture one has to sum up contributions of gas species with their partial pressures $P_i$ and corresponding radiation lengths $X_{0,i}$. \n\n\\begin{eqnarray}\n\\frac{1}{\\tau_\\RM{brems}} & = & -\\frac{4}{3}~\\frac{c}{P_n}\\ln(\\delta_E)~ \\sum_i \\frac{P_i}{X_{0,i}} \\nonumber \\\\\n\\tau_\\RM{brems}\\,[\\RM{h}] & = & \\frac{-0.695}{\\ln(\\delta_E)}\\,\\left(\\sum_i \\frac{P_i\\,[\\RM{pbar}]}{X_{0, i}\\,[\\RM{m}]}\\right)^{-1}.\n\\label{eq:brems}\n\\end{eqnarray}\n\n$N_A$ is the Avogadro constant and $V_n=22.4\\,$l\/mol, $P_n$ the molar volume and the pressure under standard conditions. 
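The two lifetime formulas can be compared numerically. In the sketch below the ring parameters (energy, aperture, beta function, energy acceptance) and the residual gas pressure are assumed example values; the $k Z^2$ sum and $X_0$ for N$_2$ follow from the text (for N$_2$: $k=2$, $Z=7$, $X_0 = 326\,$m from the radiation length table).

```python
# Illustrative comparison of elastic-scattering and Bremsstrahlung lifetimes
# for a single residual gas species (N2). Machine parameters are assumptions.
import math

def tau_elastic_h(e_gev, a_y_mm, beta_m, p_pbar, sum_kz2):
    """Elastic-scattering lifetime in hours (practical formula from the text)."""
    return 2839.0 * e_gev**2 * a_y_mm**2 / beta_m**2 / (p_pbar * sum_kz2)

def tau_brems_h(delta_e, p_pbar, x0_m):
    """Bremsstrahlung lifetime in hours (practical formula from the text)."""
    return -0.695 / math.log(delta_e) / (p_pbar / x0_m)

# Assumed example ring: 3 GeV, A_y = 10 mm, beta_y = 10 m, delta_E = 1 %,
# and 1e-9 mbar (= 1 pbar) of N2 residual gas.
print(f"tau_el    = {tau_elastic_h(3.0, 10.0, 10.0, 1.0, 2.0 * 7.0**2):.0f} h")
print(f"tau_brems = {tau_brems_h(0.01, 1.0, 326.0):.0f} h")
```

For these assumed parameters the elastic lifetime comes out around 260 h and the Bremsstrahlung lifetime around 49 h, consistent with the statement that Bremsstrahlung is typically the more severe loss mechanism.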
The \\emph{radiation length} $X_0$ is the length over which a particle's energy has dropped by a factor $1\/e$. $X_0$ scales roughly inversely with the square of the nuclear charge of the~residual gas, and also inversely with its density. Radiation length values for common gases under normal conditions are tabulated for example in Ref. \\cite{pd}. In Table \\ref{tab:radlength} we list $X_0$ for the important gases of accelerator vacuum systems. With common energy and transverse acceptances of storage rings, Bremsstrahlung is the more severe mechanism for loss of particles from the~beam. \n\n\n\\subsection{Emittance Growth for Hadrons} \\noindent\nFor a proton or ion beam, even the degradation of the beam emittance from elastic gas scattering at small angles is harmful. Due to the absence of radiation damping any decrease of the beam density over the storage time cannot be recovered. For the beam emittance a growth time can be defined:\n\n\\begin{equation}\n\\frac{1}{\\tau_\\varepsilon} = \\frac{1}{\\varepsilon_x}~\\frac{d\\varepsilon_x} {dt}.\n\\label{eq:emmlife}\n\\end{equation}\n\nThe scattering causes a diffusive growth of the mean squared angular deviation of the particle's momentum vector, which is linear in time. The emittance growth is related to this angle as follows ($\\theta_0$ is the rms scattering angle projected on a transverse plane):\n\n\\begin{equation}\n\\frac{d\\varepsilon}{dt} = \\frac{1}{2}~\\overline{\\beta_y}~\\frac{d(\\theta_0^2)}{dt} = \n\\frac{1}{2}~\\overline{\\beta_y}~\\frac{(13.6)^2}{(cp)^2\\,[\\RM{MeV}^2]}~\\frac{c}{P_0}~\n\\sum_i \\frac{P_i}{X_{0,i}}.\n\\label{eq:emmgrowth}\n\\end{equation}\n\nUsing Eq. 
(\\ref{eq:emmlife}) the resulting emittance growth time for protons is:\n\n\\begin{equation}\n\\tau_\\varepsilon\\,[\\RM{h}] \\approx 34.2~\\frac{\\varepsilon_y\\,[\\RM{m\\,rad}]\\,\nE^2\\,[\\RM{GeV}^2]\\,T\\,[\\RM{K}]}{\\overline{\\beta_y}\\,[\\RM{m}]}~\\left(\n\\sum_i \\frac{P_i\\,[\\RM{pbar}]}{X_{0, i}\\,[\\RM{m}]}\\right)^{-1}.\n\\end{equation}\n\nThe temperature $T$ has been included since proton accelerators often use superconducting magnets and cold beam pipes. The described mechanism ignores other elastic scattering mechanisms besides Coulomb scattering. \\hfill\\\\\n\n\\subsection{Inelastic Scattering for Hadrons}\n\nAnother process is the complete removal of particles from the beam by an inelastic reaction. The beam lifetime for this effect can be computed using the inelastic interaction length $\\lambda_\\RM{inel}$ which is also tabulated in Table\\,\\ref{tab:radlength}:\n\n\\begin{equation}\n\\frac{1}{\\tau_\\RM{inel}} = \\frac{\\beta c}{P_0}~\n\\sum_i \\frac{P_i}{\\lambda_{\\RM{inel},i}}.\n\\label{eq:tauinel}\n\\end{equation}\n\nThe inelastic interaction length is related to the corresponding nuclear cross section via $\\lambda_\\RM{inel} = A\/(\\rho\\,N_A\\,\\sigma_\\RM{inel})$, where $A$ is the molar mass and $\\rho$ the density. 
If we again include the gas temperature and take out all constants we obtain the following formula for the inelastic beam-gas lifetime:\n\n\\begin{equation}\n\\tau_\\RM{inel}\\,[\\RM{h}] = 3.2\\cdot10^{-3}~T\\,[\\RM{K}]\\,\\left(\n\\sum_i \\frac{P_i\\,[\\RM{pbar}]}{\\lambda_{\\RM{inel}, i}\\,[\\RM{m}]}\\right)^{-1}.\n\\end{equation}\n\n\\begin{table}\n\\begin{center}\n\\caption{Radiation length $X_0$ and inelastic interaction length $\\lambda_\\RM{inel}$ \nfor different gases under atmospheric pressure and 20\\,$^\\circ$C \\cite{pd}.}\n\\label{tab:radlength}\n\\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}\n\\hline\\hline\n& \\textbf{H}$_\\textbf{2}$ & \\textbf{He} & \\textbf{CH}$_\\textbf{4}$ & \\textbf{H}$_\\textbf{2}$\\textbf{O} & \\textbf{CO} & \\textbf{N}$_\\textbf{2}$ & \\textbf{Ar} & \\textbf{CO}$_\\textbf{2}$ & \\textbf{air} \\\\\n\\hline\n$A$ & 2 & 4 & 16 & 18 & 28 & 28 & 40 & 44 & \\\\\n$X_0$\\,[m] & 7530 & 5670 & 696 & 477 & 321 & 326 & 117 & 196 & 304 \\\\\n$\\lambda_\\RM{inel}$\\,[m] & 6107 & 3912 & 1103 & 1116 & 763 & 753 & 704 & 490 & 747 \\\\\n\\hline\\hline\n\\end{tabular} \\end{center}\n\\end{table}\n\n\\section{Vacuum Technology for Accelerators}\n\\subsection{Pumping}\n\\label{pumping}\n\nAs discussed in the previous sections the average pressures required for accelerators range from $10^{-6}\\,$mbar down to $10^{-11}\\,$mbar. For facilities of large size, containing a huge number of components, reliability of the overall system becomes a critical aspect. Pumps without moving mechanical parts are thus advantageous. Figure\\,\\ref{fig:pumps} shows an overview of the most commonly used types of pumps in large accelerator systems. Initial pumpdown of accelerator vacuum systems is often done by turbo-molecular pumps in combination with rotary pumps that are installed on mobile pump carts. 
During normal operation these are disconnected while the required vacuum conditions are maintained by sputter ion pumps, titanium sublimation pumps (TSP) and non-evaporable getter (NEG) pumps. These types of pumps are almost exclusively used for routine operation in storage rings and large linear accelerators. \n\n\\begin{figure}\n\\centerline{\\includegraphics[width=0.9\\textwidth]{Fig_07_Vac.png}}\n\\caption{Overview of the most commonly used types of pumps and gauges in large accelerator vacuum systems including their operating range.} \n\\label{fig:pumps}\n\\end{figure} \n\n\\emph{Turbo-molecular pumps} are fully mechanical pumps, based on a mechanism to transfer physical momentum to residual gas molecules in a preferred direction. The momentum transfer is achieved through fast moving blades with rotation frequencies of $30\\dots60\\cdot 10^3\\,$ RPM (revolutions per minute). At these rotation frequencies the blades reach a speed of $300\\dots600\\,$m\/s, which is to be compared with the speed of the residual gas molecules that should ideally not escape the blades. At room temperature heavier molecules like CO have a speed of $470\\,$m\/s, while hydrogen moves at $1800\\,$m\/s. Consequently the compression ratio (gas density ratio) achieved with a turbo-molecular pump may vary by several orders of magnitude for these gas species. Turbo-molecular pumps are always combined with mechanical roughing pumps that provide an intermediate vacuum level between UHV conditions and the normal atmospheric pressure.\n\n\\emph{Sputter ion pumps} are capable of pumping all gases and they can be operated at relatively high pressure. These pumps use Penning cells with an applied high voltage of 3\\dots7\\,kV and a superimposed magnetic field. Permanent magnet blocks are used to generate the magnetic field. Residual gas molecules are ionized and accelerated in the electric field. The current drawn from the high voltage power supply is thus proportional to the residual gas pressure. 
This synergy is often used to measure the pressure in accelerators in a cost-effective way. Hydrogen is pumped by diffusion into the bulk of the cathodes. All other reactive gases are chemisorbed by the cathode material. A commonly used material is titanium, which is sputtered onto the walls and anodes by the ions incident on the cathodes. Noble gases are physically buried by the sputtered cathode atoms. The pumping speed for noble gases is small, but it can be increased to values of $25-30\,$\% of that for N$_2$ by replacing the cathode plate with a heavy material such as tantalum. Such \emph{noble diodes} are often used in systems with enhanced risk of helium leaks, as in accelerator sections \nusing superconducting magnets or resonators cooled with liquid helium.\n\n\begin{figure}\n\centering\includegraphics[width=0.75\textwidth]{Fig_08_Vac.png}\n\caption{Penning cells are elements in ion sputter pumps to ionize residual gas molecules. In a combined electric and magnetic field electrons are accelerated on spiral paths, thereby enhancing their efficiency to ionize neutral gas molecules. Ions are then also accelerated and implanted in the cathode material or buried under sputtered metal on the anodes.}\n\label{fig:penning}\n\end{figure}\n\nA cost-effective solution is provided by so-called integrated sputter ion pumps inserted linearly into a channel parallel to the beam channel \cite{cummings}, \cite{koupt}. These pumps utilize the magnetic field of the bending magnets of the accelerator. This solution has been adopted at the electron rings of HERA, PETRA, PEP and TRISTAN, reaching typical pump speeds of $25\,$l\/s\/m. On the downside this couples the functions of the accelerator magnets and the vacuum system. Magnets must be powered to maintain pumping. In-between ramp cycles of storage rings these pumps are not active. Furthermore the magnetic field intensity is decreased when the particles are injected at lower energies, resulting in a reduced pumping speed. 
This becomes crucial for accelerators where the injection energy and thus the corresponding magnetic field are much lower than the nominal beam energy, such that the discharge in the sputter ion pumps extinguishes. Thus additional pumps are required to ensure good vacuum conditions. \n\n\emph{Titanium sublimation pumps} are sorption pumps. In a metallic vessel titanium\nis evaporated by temporary electrical heating and deposited on the walls, forming a getter surface. These pumps exhibit a~high pumping speed for active gases but have limited pumping capacity since the thin titanium film saturates quickly. The pumping surface is renewed\nby deposition of fresh titanium from a heated filament by sublimation. A pump with $1000\,$l\/s pumping speed is saturated after one hour at a pressure of $10^{-7}\,$mbar. Titanium sublimation pumps cannot pump noble gases and are therefore used in combination with a low pumping speed ion sputter pump. Often a chicane is incorporated in the pumping port, to prevent titanium atoms from entering the beam chamber or contaminating surfaces, e.g. mirrors, ceramic insulators or instrumentation. \n\n\emph{Non-evaporable getter (NEG) pumps} are sorption pumps as well. The NEG material is made of special alloys which form stable chemical compounds with the majority of active gas molecules. The~sorption of hydrogen is reversible by heating of the material. Also NEG pumps have a limited pumping capacity. The NEG material is activated by heating for times below one hour. Activation temperatures depend on the NEG material and range from $180^\circ$C to $400^\circ$C. During heating the gas molecules are not evaporated from the NEG material, but the molecules diffuse into the bulk material. Hydrogen is an exception: it is released again into the gas phase, thus requiring other pumps during the process of activation and reactivation. The heating produces fresh surface sites for further adsorption of active gases. 
The NEG material is a compound of different metals, for example zirconium, vanadium and iron, and is typically sintered in the form of a powder onto flexible strips. Such strips can be integrated in a side channel of the vacuum chamber design. The initial specific pumping speed provided by such schemes may exceed 1000\,l\,s$^{-1}$\,m$^{-1}$. This technique has been developed for LEP \cite{ben-NEG} and is now applied in many accelerators. At LEP about 24 km of the beam pipe have been equipped with 30\,mm wide and 0.2\,mm thick constantan ribbons coated with a 0.1\,mm thick layer of NEG material on both sides. A newer application for the synchrotron light source PETRA-III is shown in Fig.\,\ref{fig:chamber}. The strips are installed onto a~rigid stainless steel carriage via insulating ceramics inside a separate pump channel. For electric heating the pumps are connected to current feedthroughs. NEG pumps are also available commercially as lumped pumps, cartridge units that can be connected to recipients by standard flange connections.\n\nAnother approach to the NEG pumping concept is the deposition of thin film coatings of TiZrV, sputtered onto a vacuum chamber. The activation temperature of these coatings is relatively low, a fact that is important to limit the thermal stress on the vacuum system during activation \cite{Benvenuti-coating-1}, \cite{Benvenuti-coating-2}. If the~design of the vacuum chamber allows it, the coating may cover the complete inner surface of the~beam vacuum system and thus the outgassing of the vacuum vessel itself is drastically reduced. It should be emphasised that even with low activation temperature the need for baking the entire vacuum system to ca. $200\,^\circ$C implies severe restrictions for the design of vacuum chambers, support structures and magnets. For accelerators in which the vacuum conditions are dominated by beam induced desorption, it is possible to reduce outgassing with NEG coated surfaces in the vicinity of the particle beam. 
In particular for light sources NEG coated surfaces are advantageous to reach good vacuum conditions without long conditioning times \cite{chiggiato}. Modern light sources use complex multi-bend achromat lattice cells to achieve extremely small horizontal emittances. As a result the apertures of vacuum chambers are small, and the achievable pressure is conductance limited. Lumped pumping concepts are less efficient in this situation since a large number of pumps would have to be installed per unit length. Today it is common for such light sources to use NEG coating for efficient pumping of the narrow beam chambers. Also hadron accelerators benefit from NEG coated chambers through the reduction of the secondary emission yield of electrons \cite{wfischer}.\n\n\begin{figure}\n\centering\includegraphics[width=0.85\textwidth]{Fig_09_Vac.png}\n\caption{Concept view of a vacuum chamber with integrated NEG pump for an electron beam storage ring. The~aluminum profile is extruded with integrated cooling channels. The synchrotron radiation is absorbed on the left side of the chamber, while the right side contains the NEG strip (blue) in a side channel, which is shielded from direct view to the beam to suppress capture of dust particles (courtesy DESY).}\n\label{fig:chamber}\n\end{figure}\n\n\emph{Cryo pumps} use the effect of cryosorption to bind gas molecules on cold surfaces inside the pump vessel. With sufficiently low temperature these pumps can remove all gas species and they provide high pumping speed and capacity. On the downside cryo pumps must be regenerated by regular warm-up cycles, for which the accelerator operation must be interrupted. During the regeneration cycle all pumped gases are released again, and are typically removed from the vacuum system by turbo molecular pumps. 
A concept sketch of a cryo pump is shown in Fig.\,\ref{fig:cryo}.\n\nFor performance reasons many accelerators make use of superconducting magnets or superconducting accelerating structures. With the beam pipe being integrated into the cryostat this leads to a \emph{cold bore} vacuum system, which takes on the characteristics of a huge cryopump. Design and operation of such cold bore vacuum systems have many specific implications \cite{grobner}.\n\n\begin{figure}\n\centering\includegraphics[width=0.575\textwidth]{Fig_10_Vac.png}\n\caption{Concept view of a cryo pump with a large aperture flange connection on the top side (courtesy Lothar Schulz, PSI).}\n\label{fig:cryo}\n\end{figure}\n\n\subsection{Instrumentation} \label{instrumentation}\n\nAccelerator-based research infrastructures are expensive installations that consume large amounts of grid energy, and their operation should be efficient and reliable. Also for vacuum systems it is therefore important that the integrity of the system is continuously monitored and problems like leaks or regions of unusually high pressure resulting from beam impact can be identified quickly to allow fast and targeted intervention. Using gate valves the beam vacuum system of accelerators is divided into several sections. In this way it is possible to exchange components without venting the entire facility. Fast shutters, capable of stopping shock waves in milliseconds, are commonly used to avoid contamination of sensitive sections caused by a sudden break of the vacuum system.\n\nEach of the segments of a larger vacuum system should have at least one gauge to monitor the~integrated residual gas pressure. Total pressure gauges are included in Fig.\,\ref{fig:pumps} with their operating range. For practical operation it is usually not necessary to obtain precise absolute measurements, but to be able to diagnose relative changes over time and to compare sections with each other. 
Cold cathode gauges or Bayard-Alpert gauges are frequently used at low pressures. Current monitoring of sputter ion pumps is a~cost-effective method, particularly for large facilities, to monitor the residual gas pressure down to levels of $10^{-9}\,$mbar. \n\nUsing quadrupole mass spectrometers the residual gas composition in a vacuum system can be analysed. The relative occurrence of molecule masses is determined, which makes it possible to diagnose problems in an accelerator vacuum system. For example it can be determined whether a leak or a contamination is the reason for unusually high pressure. Such residual gas analysers (RGA) are typically not part of a standard installation, but these are temporarily connected to the recipient. Often the sensitive electronics is not compatible with the radiation environment or the stray magnetic fields of an accelerator in operation. \n\n\subsection{Materials and Technology Choices for Accelerator Vacuum Systems}\n\nFor large facilities production cost can be optimized by a careful design and by utilization of industrial manufacturing processes. For example extrusion processes serve as a cost-effective method to produce large lengths of beam pipe profiles. Over time a wide variety of best practices have been developed for accelerator vacuum systems and more details may be found for example in Ref. \cite{CERN}.\n\nMaterials for the manufacturing of beam chambers and other beam vacuum components have to be selected carefully according to the specific requirements of each accelerator. A significant number of aspects must be addressed in parallel. The air pressure under normal conditions on a surface area of $10\times10\,\RM{cm}^2$ results in a force equivalent to the weight of 100\,kg. Consequently mechanical robustness is already one important criterion for a vacuum chamber, while the costly aperture of accelerator magnets might favour a thin wall solution. 
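The force quoted above is quickly verified; a minimal sketch using standard atmospheric pressure:

```python
p_atm = 101325.0     # standard atmospheric pressure [Pa]
area = 0.10 * 0.10   # 10 cm x 10 cm surface [m^2]
g = 9.81             # gravitational acceleration [m/s^2]

force = p_atm * area    # [N]
equiv_mass = force / g  # mass whose weight equals this force [kg]
print(f"F = {force:.0f} N, equivalent to the weight of {equiv_mass:.0f} kg")
```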
Vacuum chambers should be bake-able and thus stability must be guaranteed also at elevated temperatures. The magnetic guide field for the beam should not be disturbed by magnetic properties of the beam chamber. If synchrotron radiation is deposited on the chamber wall a high thermal conductivity is desirable. And the transport of beam image currents in the chamber walls requires good surface conductivity and smooth electrical connections across components. For machines susceptible to electron cloud instabilities the secondary electron emission yield of the material should be low. Last but not least the compatibility with UHV vacuum conditions and low outgassing rates are important. The requirement of UHV class pressure in combination with resistance to radiation and corrosive atmospheres demands all-metal solutions for beam vacuum systems. \n\nStainless steel, copper or aluminum are the most common vacuum chamber materials. The electrical and thermal conductivities of copper and aluminum are better than those of steel by large factors. On the other hand the mechanical strength of steel is outstanding. Copper and aluminum alloys exhibit better mechanical strength than the pure metals, at the expense of somewhat lower conductivity. A~few properties for common materials are listed in Table\,\ref{tab:materials}. Joining techniques for materials of vacuum systems include inert gas shielded arc welding, electron-beam welding, laser beam welding as well as brazing in a~furnace. Laser and electron beam welding can deposit large amounts of heating power to a~small volume in a short time. 
These technologies can be used advantageously for welding of sensitive components that allow only local heating while another part of the component must be kept at moderate temperature.\n\n\begin{table}\n\begin{center}\n\caption{Properties of materials that are utilized for accelerator vacuum components.} \n\label{tab:materials}\n\begin{tabular}{|l|cc|cc|cc|cc|} \n\hline\hline\n && \textbf{density} && \textbf{thermal} && \textbf{electrical} && \textbf{yield} \\ \n && && \textbf{conductivity} && \textbf{conductivity} && \textbf{strength} \\ \n\t && \textbf{[g\/cm}$^3$] && \textbf{[W\/K\/m]} && \textbf{[10}$^6\/ \Omega$\textbf{\/m]} && \textbf{[N\/mm}$^2$] \\ \hline \nstainless steel 316LN && 8.00 \t && 16 \t && 1.35 \t\t && 205 \\ \naluminum pure && 2.70\t && 235 \t && 37 \t\t && 35 \\ \nAlMgSi0.5 && 2.70 && 200 && 30 \t\t && 70-150 \\ \ncopper pure && 8.95 && 394 && 58 \t\t && 40-80 \\ \nCu\,Sn$_2$ && 8.90 && 140 && 25 \t\t && 150 \\ \n\hline\hline\n\end{tabular} \end{center}\n\end{table}\n\nAluminum or copper are favoured for electron facilities due to their high thermal conductivity. The synchrotron radiation can be absorbed directly by the vacuum chamber if the power line-density does not exceed values of $\approx 100\,$W\/m. With appropriate water cooling of the chamber a power load of several 10\,kW\/m can be accepted. For intense synchrotron radiation fans with a shallow height the temperature profile should be simulated. The often quite elaborate beam pipe cross sections in combination with pumping and cooling ducts can be economically produced by continuous extrusion as shown in the example of Fig.\,\ref{fig:chamber}. More complex chambers are produced from solid blocks, which is associated with higher manufacturing cost. Neighbouring chambers may be connected by aluminum flanges in combination with Helicoflex$^\RM{TM}$ gaskets. 
Alternatively aluminum\/stainless steel transitions, for example made by explosion bonding, allow the use of standard stainless steel Conflat$^\RM{TM}$ flanges. Overall the sealing of aluminum systems is not as reliable as the standard stainless steel system with copper gaskets. \n\nIn some cases copper or copper alloy chambers are used to benefit from the even higher conductivity compared to Al. Examples include the accelerator facilities HERA-e, KEK-B and PEP. Brazing techniques are typically applied to join copper components and to connect stainless steel flanges. Massive water cooled copper blocks are often used as collimators or absorbers for synchrotron radiation at locations of high power density. \n\nFor proton accelerators austenitic stainless steel has become the most widely used material. Beam pipes are often fabricated from seamless tubes with discrete pumps attached every few meters. For sealing usually welded stainless steel flanges and copper Conflat$^\RM{TM}$ gaskets are used. Only metal sealed flange connections make it possible to achieve leak rates compatible with UHV conditions and radiation resistance at the same time. A concept sketch of the Conflat system is shown in Fig.\,\ref{fig:conflat}.\n\n\begin{figure}\n\centerline{\includegraphics[width=1.0\textwidth]{Fig_11_Vac.png}}\n\caption{In a Conflat$^\RM{TM}$ metal sealed flange connection the copper seal is clamped between two stainless steel flanges that contain a knife edge.} \n\label{fig:conflat}\n\end{figure} \n\nFor accelerators with superconducting magnets the vacuum chamber is often also operated at low temperature, since there is no room for thermal insulation in the tight space between the magnet coils. Examples include the proton ring of HERA\/DESY, RHIC at Brookhaven and LHC at CERN. The~cool down over a large temperature range from room temperature to an operating temperature of, for example, $4.5\,$K presents another challenge for the beam vacuum system. 
A system of bellows and space for expansion must be foreseen to handle the mechanical contraction. Usually the beam pipes are made from a type of stainless steel with the inner surface coated by copper to enhance the thermal conductivity. \n\nFor electrical feedthroughs into vacuum, ceramics are used as insulators. In some cases ceramics are used for entire vacuum chambers. This is necessary if the chamber is to be placed in rapidly changing magnetic fields, where strong eddy currents would be induced in a metallic chamber. Applications include for example kicker magnets and rapid cycling synchrotrons \cite{csns}. Beam chambers inside particle physics detectors should influence particles generated at the interaction point as little as possible. For this purpose very thin wall tubes are used, made from low Z material, such as aluminum, beryllium or carbon fiber materials. For exit windows of beams similar solutions are common.\n\nThe beam chamber carries an image current that acts back on the beam and may deteriorate beam quality properties like emittance or energy spread. To minimize such effects the chamber should carry the~image current with low resistance, avoiding geometric discontinuities. Variations of the vacuum chamber cross section should be applied gradually using tapered sections. Bellows and openings for pumping ports must be electrically shielded. In bellows often a set of flexible, sliding spring contacts of copper-beryllium alloys is installed around their circumference. Perforated electric screens with circular holes of some millimeters diameter or longitudinal slits are used for pump ports. Of course such electrical shielding measures result in a reduction of the effective pumping speed. Also for gate valves in the open state, gaps have to be covered by spring contacts that move with the gate mechanism. 
Inappropriate shielding of transverse openings will not only affect the beam quality, but may also result in the excitation of trapped RF modes, leading to a strong heating effect for vacuum components.\n\nAccelerator components installed in the vicinity of the beam pipe should be radiation resistant and must withstand corrosive atmospheres produced by the primary radiation. This includes cables and electronics, which must either be properly chosen or shielded. For high energy electron\/positron storage rings and synchrotron radiation facilities the generated X-rays may cause problems, and often the beam chamber is wrapped in a lead shield to absorb as much radiation as possible. Figure\,\ref{fig:att} shows the~attenuation of X-rays in various materials as a function of the photon energy. Using beam energy and bending radius the critical photon energy may be estimated with the expression: \n\n\begin{equation}\nE_c [\RM{keV}] \approx 2.218 ~ \frac{E^3 [\RM{GeV}^3]}{\rho [\RM{m}]}.\n\label{crit}\n\end{equation}\n\n\begin{figure}\n\centering\includegraphics[width=0.85\textwidth]{Fig_12_Vac.png}\n\caption{Attenuation length for X-rays of commonly used materials for vacuum vessels and shielding as a function of photon energy.}\n\label{fig:att}\n\end{figure}\n\n\section{Summary}\nThe vacuum system presents a challenging technical aspect of each particle accelerator. To achieve required beam lifetimes and beam quality, and to minimize unwanted losses of high energy particles associated with radio-activation and damage to components, vacuum systems must be carefully designed to provide adequately low gas densities.\n\nSteps for designing an accelerator vacuum system can be roughly categorized into three groups. The first step involves an evaluation of the {\bf gas sources} in an accelerator. 
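Equation (\ref{crit}) is straightforward to evaluate; a short sketch with hypothetical beam parameters (the 6\,GeV energy and 23\,m bending radius are illustrative values, not those of a specific machine):

```python
def critical_photon_energy_keV(E_GeV, rho_m):
    """Critical photon energy of synchrotron radiation, E_c ~ 2.218 E^3 / rho."""
    return 2.218 * E_GeV**3 / rho_m

# Hypothetical 6 GeV ring with a 23 m bending radius:
print(f"E_c = {critical_photon_energy_keV(6.0, 23.0):.1f} keV")  # about 21 keV
```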
This includes outgassing rates for surfaces and beam induced outgassing dynamics, such as synchrotron radiation, electron cloud, and heavy ion bombardment, to name a few. Certain accelerator designs require quick handling of activated components, thus involving relatively leaky inflatable seals. In such cases leak rates must be considered. In practice a clean and baked stainless steel surface might exhibit an outgassing rate of $q_0 = 10^{-11}\,$mbar\,l\,s$^{-1}$\,cm$^{-2}$. In the presence of synchrotron radiation and after a reasonable time of conditioning the outgassing is still dominated by releasing at least $\eta = 10^{-6}$ gas molecules per photon incident on the chamber wall.\n\nThe second step covers the definition of the {\bf target residual gas pressure and composition} (more precisely the gas density). Physics effects of beam gas interaction and the resulting performance degradation must be considered for this purpose. To give a couple of examples, the acceptable dynamic gas pressure in an electron storage ring is of the order of $10^{-8}\,$mbar with a typical composition of $\sfrac{3}{4}$ H$_2$ and $\sfrac{1}{4}$ CO. A proton cyclotron as a single pass accelerator is more forgiving and a pressure of $10^{-6}$\,mbar is sufficient.\n\nThe third step is then to lay out the vacuum system in terms of geometry, installed {\bf pumping speed} and perhaps more complex technical measures like surface coating to achieve the required vacuum quality under operating conditions. Types of pumps may include turbo pumps, ion sputter pumps, NEG coating or cryo pumps for large recipients. A typical electron storage ring may be equipped with $S = 100$\,l\/s ion sputter pumps at a distance of 5\,m. Today software can be used to compute the pressure profile using Monte Carlo methods or numerical solutions of the diffusion equation (\ref{eq:diffeq}). \nIn addition to such conceptual considerations the design of vacuum systems requires a lot of engineering. 
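The numbers quoted in the third step permit a rough estimate of the achievable static pressure; a minimal sketch, assuming a hypothetical beam pipe radius of 4\,cm and neglecting the conductance of the pipe (the real pressure profile peaks between pumps, so this is an optimistic average):

```python
import math

q0 = 1.0e-11             # thermal outgassing rate [mbar l / (s cm^2)]
radius_cm = 4.0          # assumed beam pipe radius (hypothetical)
pump_spacing_cm = 500.0  # one pump every 5 m
S = 100.0                # pumping speed per pump [l/s]

# Gas load collected by one pump: outgassing of the wall area between two pumps.
wall_area_cm2 = 2.0 * math.pi * radius_cm * pump_spacing_cm
Q = q0 * wall_area_cm2   # [mbar l / s]
p_avg = Q / S            # [mbar], ignoring conductance limitations
print(f"p ~ {p_avg:.1e} mbar")  # of order 1e-9 mbar
```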
Keywords include: UHV compatible materials and materials preparation, mechanical stability, thermo-mechanical problems under heat load, pumps, gauges, flange systems and valves.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}%\n\n\\par A vast range of space and astrophysical scenarios are driven by the rapid expansion of plasmas through space. Such examples include interplanetary coronal fast ejecta~\\cite{Burlaga2001}, the expansion of the stellar material from supernova remnants~\\cite{1990ApJ...356..549S}, and artificial magnetospheric releases of tracer ions~\\cite{Krimigis1982}. When these expanding plasmas encounter obstacles of magnetic nature, the resultant interaction leads to highly nonlinear and complex dynamics. In the solar system, the interaction between the plasma flow (\\textit{i.e.} the solar wind) and planetary-sized magnetic obstacles leads to the formation of magnetospheres~\\cite{Russell1991}.%\n\n\\par The effective size of the magnetic obstacles is determined by the equilibrium position between the kinetic pressure of the solar wind and the magnetic pressure exerted by the planetary magnetic fields~\\cite{Schield1969}. The region of equilibrium, called the magnetopause, can be described using the pressure balance derived from magnetohydrodynamics (MHD)%\n\\begin{equation}\n n_dm_{i,d}v_0^2 = \\frac{B^2}{8\\pi} \\ ,\n \\label{eq:pressure-equilibrium}\n\\end{equation}\nwhere $n_d$ is the density of the solar wind, $v_0$ is its flow velocity, $m_{i,d}$ is the mass of its ions, and $B$ is the total magnetic field at the magnetopause. The total magnetic field can be written as $B = B_0 + B_\\textrm{dip}$, where $B_0$ is the collective magnetic field and $B_\\textrm{dip} = M \/ L_0^3$ is the magnetic field of the obstacle, often well described by a dipolar profile of magnetic moment $M$. 
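Neglecting the collective field $B_0$, the balance in Eq. (\ref{eq:pressure-equilibrium}) with $B \approx B_\textrm{dip} = M/L^3$ can be solved for the distance at which the dipole pressure stops the flow; a minimal numerical sketch in Gaussian-CGS units (the input values are arbitrary illustrative numbers, not those of a specific system):

```python
import math

def standoff_distance(M, n, m_i, v0):
    """Distance L at which (M/L^3)^2 / (8 pi) equals the ram pressure n m_i v0^2."""
    ram_pressure = n * m_i * v0**2
    return (M**2 / (8.0 * math.pi * ram_pressure)) ** (1.0 / 6.0)

# Arbitrary illustrative values (CGS): magnetic moment, density, ion mass, flow speed.
M, n, m_i, v0 = 1.0e3, 1.0e13, 1.67e-24, 1.0e7
L0 = standoff_distance(M, n, m_i, v0)
B_dip = M / L0**3
print(B_dip**2 / (8.0 * math.pi), n * m_i * v0**2)  # the two pressures agree at L0
```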
The distance $L_0$ between the center of the dipole and the magnetopause, often referred to as the plasma standoff distance, measures the effective size of the magnetic obstacle.%\n\n\\par For planetary-sized magnetospheres, the obstacle size is typically tens of thousands of kilometers. However, magnetospheres with a few hundreds of kilometers are also observed in space environments such as the lunar surface. When the magnetic obstacle size is smaller or of the order of the ion kinetic scales of the plasma, \\textit{i.e.} the ion skin depth or the ion gyroradius, the interaction with the solar wind results in ion-scale magnetospheres, or mini-magnetospheres.%\n\n\\par The study of mini-magnetospheres in past years was mainly motivated by the observation of crustal magnetic anomalies on the lunar surface~\\cite{Lin1998,Halekas2008,Kato2010,Wieser2010,Kramer2021}. Although the Moon does not have a global magnetic field like Earth, it does have small localized regions of crustal magnetic field, of 10-100 nT over distances of 100-1000 km~\\cite{Lin1998}, which are of the same order as the gyroradius of solar wind ions near the Moon's surface. As a result, when these regions of the lunar surface are exposed to the solar wind, mini-magnetospheres are formed. The deflection of charged particles off of lunar mini-magnetospheres commonly leads to the formation of ``lunar swirl'' structures~\\cite{Bamford2012}. Similar interactions between the solar wind and small-sized patches of magnetic field also occur in other planets and natural satellites without a planetary magnetosphere, such as Mars~\\cite{Lillis2013}, Mercury~\\cite{doi:10.1126\/science.1211001}, Ganymede~\\cite{https:\/\/doi.org\/10.1029\/97GL02201}, and comets and asteroids~\\cite{https:\/\/doi.org\/10.1029\/GL011i010p01022}.%\n\n\\par Multiple experiments have been performed in laboratory environments that replicate the interaction between plasma flows and magnetic obstacles. 
With a proper re-scaling of parameters~\cite{Ryutov_2002}, these experiments represent highly controlled configurations where a large variety of diagnostics can be used to obtain more accurate measurements than those obtained from the direct probing of astrophysical events. In experimental studies, fast-moving plasma flows are usually driven by high-intensity lasers focused onto solid targets of plastic or metal composition~\cite{ablated,ablated2}. These laser-ablated plasmas can be mildly collisional or collisionless, replicating astrophysical conditions~\cite{Niemann2014a,Bondarenko2017}. By adding dipole field sources against the plasma flow, previous experiments on mini-magnetospheres studied possible applications for spacecraft~\cite{Winglee2007,Bamford2008,Bamford2014}, the formation of lunar swirls~\cite{Bamford2012}, and the conditions for the formation of magnetosphere features~\cite{Brady2009,Shaikhislamov2013, Shaikhislamov2014,Rigby2018}. Although these experiments achieved important breakthroughs in the study of ion-scale magnetospheric physics, they were limited to i) 1D measurements of the magnetic field and plasma density profiles and ii) fixed properties of the obstacle and plasma flow.%\n\n\par Numerical simulations play a key role in interpreting and designing experiments. Early MHD simulations attempted to explain the formation and characteristics of lunar mini-magnetospheres and validate experimental and analytical models~\cite{Harnett2000,Harnett2002,Shaikhislamov2013}. Hybrid simulations were used to study the role of ion kinetic effects, and obtain conditions for the formation of magnetospheres~\cite{Blanco-Cano2004} and replicate previous experimental results~\cite{Gargate2008}. However, these simulations do not resolve the electron scales and do not capture important kinetic effects on the magnetosphere's boundary, \textit{e.g.} charge separation effects and nonthermal particle distributions. 
Particle-in-cell (PIC) simulations were used to fully resolve the micro-physics of these systems and study its role in the formation of lunar mini-magnetospheres~\\cite{Kallio2012,Deca2014,Deca2015,Zimmerman2015,Bamford2016}, the scaling of their properties with solar wind speed and magnetic field orientation~\\cite{Deca2021} and the conditions for the formation of collisionless shocks~\\cite{Cruz2017}.%\n\n\\par In this work, we use PIC simulations of ion-scale magnetospheres to interpret the results of recent experiments~\\cite{Schaeffer2021} performed at the LArge Plasma Device (LAPD), University of California, Los Angeles. In these experiments, fast collisionless plasma flows generated by high-repetition-rate lasers were collided with the magnetized ambient plasma provided by the LAPD and with a dipolar magnetic field obstacle, leading to the formation of ion-scale magnetospheres. Using motorized probes, high spatial and temporal resolution measurements of the magnetic field allowed characterization of 2D magnetic field and current density structures. Apart from validating the experimental results, the simulations presented in this work explore a set of upstream and magnetic parameter scans and configurations not accessible in the laboratory to determine the importance of each system parameter on the magnetospheric properties. The simulations show that the background ions, and then the driver ions, are responsible for the formation of the magnetopause observed in the experiments. They also show that a reflection of the downstream magnetic compression is observed for certain parameters of the driver plasma, and that the distance between the main current features is dependent on the dipolar and driver plasma parameters. \n\n\\par This paper is organized as follows. In Sec.~\\ref{sec:experiments}, we briefly review the LAPD experiments and their main results. In Sec.~\\ref{sec:simulations}, we present PIC simulations of ion-scale magnetospheres. 
In Sec.~\\ref{sec:numerical-methods}, we outline the standard configuration and parameters used for the simulations. In Sec.~\\ref{sec:additional}, we provide an overview of the temporal evolution of these systems and show that the simulations agree with the results of the LAPD experiments. We discuss the origin of the structures observed in current density and magnetic field synthetic diagnostics and use particle phase spaces to interpret them. In Sec.~\\ref{sec:length}, we present the results for different lengths of the plasma flow and define the conditions required to reproduce the features observed experimentally. The coupling between the laser-ablated driver and background plasmas is characterized in Sec.~\\ref{sec:density} with simulations with different driver densities. In Sec.~\\ref{sec:momentum}, different magnetic moments are considered, and we show that the main current density features are highlighted and more easily visible for weaker magnetic obstacles.\nIn Secs.~\\ref{sec:realistic} and~\\ref{sec:finite}, we discuss and illustrate the validity of the key simplifications and approximations used for the parameter scans presented in Secs.~\\ref{sec:additional}-\\ref{sec:momentum}. Finally, we outline the conclusions of this work in Sec.~\\ref{sec:conclusions}.%\n\n\\par This paper is the second part of a two part series. Detailed experimental results are presented in Part I~\\cite{Schaeffer2021}. \n\n\\section{LAPD Experiment}\n\\label{sec:experiments}\n\n\\par A new experimental platform has been developed on the LAPD to study mini-magnetospheres. The platform combines the large-scale magnetized ambient plasma generated by the LAPD, a fast laser-driven plasma, and a pulsed dipole magnet, all operating at high-repetition-rate ($\\sim1$ Hz). In the experiments, a supersonic plasma is ablated from a plastic target and then expands into the dipole magnetic field embedded in the ambient magnetized plasma. 
By measuring 2D planes of the magnetic field over thousands of shots, detailed maps of the magnetic field evolution are constructed. Additional details on the platform and results can be found in Part I.\n\n\par Example results are shown in Fig.~\ref{fig:experiment} for the measured change in magnetic field $\Delta B_z = B_{z}-B_{z,\textrm{initial}}$ and the current density $J_x = \partial \Delta B_z\/\partial y$. Here, $B_{z}$ is the total magnetic field, $B_{z,\textrm{initial}}=B_0 + B_\textrm{dip}$ is the total initial magnetic field, $B_0$ is the background LAPD field, and $B_\textrm{dip}$ is the dipole magnetic field. These results are taken along $y$ at $x=0$ from the $z=0$ plane probed experimentally. In the experiments, the dipole is centered at $(x,y,z)=(0,0,0)$ and has a magnetic moment $M=475$ Am$^2$.\n\n\par As seen in Fig.~\ref{fig:experiment}(a), the expanding laser-driven plasma creates a leading magnetic field compression followed by a magnetic cavity. The cavity reaches a peak position of $y\approx-13$ cm, while the compression propagates closer to the dipole before being reflected back towards the target. The current density in Fig.~\ref{fig:experiment}(b) shows two prominent structures. Following the expansion of the magnetic cavity is a diamagnetic current, which reaches a peak position of $y\approx-15$ cm before stagnating for approximately 1 $\mu$s and then dissipating. Ahead of the diamagnetic current is the magnetopause current near $y\approx-13.5$ cm, which lasts for about 0.5 $\mu$s.\n\n\par Here, we aim to qualitatively model these experiments in order to address key questions that aid in the interpretation of the experimental results. In particular, simulations can explain the role of each system component in the observed features, identify which plasma component (ambient or laser-driven) is responsible for them, and determine which pressure balances are most relevant. 
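\par The current density shown in Fig.~\ref{fig:experiment}(b) is obtained directly from the measured field via $J_x = \partial \Delta B_z\/\partial y$ (constant factors dropped). A minimal sketch of this finite-difference diagnostic, assuming $\Delta B_z$ is sampled on a uniform $y$ grid (the function name and array layout below are illustrative, not from the experimental analysis code):

```python
import numpy as np

def current_density_jx(delta_bz, dy):
    """Synthetic J_x ~ d(Delta B_z)/dy on a uniform grid (constants dropped).

    delta_bz : 2D array indexed as [y, time]; dy : grid spacing along y.
    """
    # Centered differences in the interior, one-sided at the boundaries.
    return np.gradient(delta_bz, dy, axis=0)

# Example: a linear ramp of Delta B_z in y corresponds to a uniform current sheet.
y = np.linspace(-20.0, 0.0, 101)        # probe positions [cm]
delta_bz = np.outer(y, np.ones(5))      # Delta B_z = y at every time step
jx = current_density_jx(delta_bz, y[1] - y[0])
```

Applied column by column to the measured $\Delta B_z(y,t)$ maps, this yields the $J_x(y,t)$ structure plotted in Fig.~\ref{fig:experiment}(b).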
\n\n\\par For convenience, the notations used in this paper are different for the ones used in Part I. Here, we use the CGS system, and the axis system is rotated from the used in Part I.\n\n\\begin{figure}[ht]\n \\includegraphics[width=0.85\\linewidth]{plots\/experiment.pdf}\n \\caption{\\label{fig:experiment} LAPD experimental results for the temporal evolution of a) the variation of the magnetic field $\\Delta B_z$ and b) the current density $J_x$ at $x=z=0$. The experimental results are discussed with more detail in Part I.}\n\\end{figure}\n\n\\section{PIC simulations}\n\\label{sec:simulations}\n\n\\subsection{Configuration of the simulations} \\label{sec:numerical-methods}\n\n\\par Motivated by the results of experiments described in Sec.~\\ref{sec:experiments}, we performed 2D simulations with OSIRIS, a massively parallel and fully relativistic PIC code~\\cite{Fonseca2002,Fonseca2013}. With PIC simulations, we can accurately resolve the plasma kinetic scales characteristic of mini-magnetospheres dynamics.%\n\n\\par The numerical simulations presented in this work stem from a simplified description of the LAPD experimental setup, represented in Fig.~\\ref{fig:config}. In these simulations, a driver plasma moves against a background plasma permeated by a uniform magnetic field $\\mathbf{B_0}$ and a dipolar magnetic field $\\mathbf{B_{dip}}$. $\\mathbf{B_0}$ and $\\mathbf{B_{dip}}$ are oriented along the $z$ direction and are transverse to the driver plasma flow. Since the most relevant dynamics of the simulations occurs at the ion kinetic scales, all the spatial scales are normalized to the ion skin depth of the background plasma $d_i=c\/\\omega_{pi}=\\sqrt{m_{i,0}c^2\/4\\pi n_0e^2}$, where $c$ is the speed of light in vacuum, $\\omega_{pi}$ is the ion plasma frequency, $m_{i,0}$ is the mass of the background plasma ions, $n_0$ is the background density, and $e$ is the electron charge. 
In turn, the temporal scales are normalized to $1\/\\omega_{ci}$, where $\\omega_{ci} = eB_0\/m_{i,0}c$ is the ion cyclotron frequency of the background. The simulation box is a 12 $d_i$ $\\times$ 12 $d_i$ area with open and periodic boundary conditions in the $x$ and $y$ directions, respectively. The flow is in the $x$ direction and the size of the simulation domain in the $y$ direction is large enough to avoid re-circulation of the particles through the whole interaction. The simulations considered 25 particles per cell per species. To resolve the dynamics of the electron kinetic scales, we used 10 grid cells per electron skin depth $d_e=d_i\\sqrt{m_e\/m_{i,0}}$ in both $x$ and $y$ directions, where $m_e$ is the electron mass.\n\n\\begin{figure}[ht]\n \\includegraphics[width=0.85\\columnwidth]{plots\/config.pdf}\n \\caption{\\label{fig:config} Schematic illustration of the initial setup of the 2D PIC simulations performed. The system considers a vacuum region at the left, a driver plasma (I) of density $n_d$ and length $L_x$, travelling to the right with flow velocity $v_0$, and a background plasma (II) with constant density $n_0$ and with an internal magnetic field $B_0$. A dipole is included at the center of the background region. Both the uniform and the dipolar magnetic fields are oriented in the $z$ direction. An illustration of the effective magnetic obstacle created by the dipole and of the magnetic field profile at $y=0$ are also shown in a dashed circumference and in a solid black line, respectively.}%\n\\end{figure}\n\n\\par The driver plasma, shown in region I in Fig.~\\ref{fig:config}, represents ideally the plasma ablated from the plastic target in the experiments. We assume that this driver has a length $L_x$ that is typically 2 $d_i$, and a width $L_y$ that is typically infinite. It has a constant density $n_{d}$, and it is initialized moving to the right side with initial flow velocity $v_0$. 
The driver is composed of an electron species and a single ion species, with ion mass $m_{i,d}$. Because the driver plasma is reflected during the interaction with the background, an empty region to the left of the driver was added to accommodate the reflected particles.%\n\n\par The background plasma is represented in region II. It has a length of 8 $d_i$, infinite width, and uniform density $n_0$. The initial interface between the driver and background plasma is located at $x_B=-4\ d_i$. Like the driver plasma, it has an electron species and a single ion species, of mass $m_{i,0}$. The background plasma is magnetized with an internal uniform magnetic field $\mathbf{B_0} = B_0 \mathbf{\hat{z}}$, whose magnitude is defined such that the Alfv\u00e9nic Mach number of the flow, $M_A \equiv v_0\/v_A = v_0\sqrt{4\pi n_0m_{i,0}}\/B_0$, matches the peak experimental value $M_A=1.5$, where $v_A$ is the Alfv\u00e9n velocity.%\n\n\par A dipolar magnetic field is externally imposed in our simulations (\textit{i.e.}, it is added to the plasma self-consistent electromagnetic fields to advance particle momenta but is not included in Maxwell's equations to advance the fields). The dipole is centered at $(x,y)=(0,0)$ and its associated magnetic field is $\mathbf{B_{dip}} = B_\mathrm{dip} \mathbf{\hat{z}}$, with $B_\mathrm{dip} = M \/ r^3$, where $M$ is the dipolar magnetic moment, $r = \sqrt{x^2 + y^2+\delta^2}$ is the distance to the origin of the dipole, and $\delta=0.25\ d_i$ is a regularization parameter. For most simulations, the magnetic moment $M$ was chosen such that the expected standoff, obtained from Eq.~\eqref{eq:pressure-equilibrium}, is similar to the experimental value $L_0=1.8\ d_i$. For this particular magnetic moment, the total initial magnetic field $B_0+B_\mathrm{dip}$ is $\approx 3.0~B_0$ at the standoff distance. 
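\par The regularized dipole model above is straightforward to sketch. In the snippet below, lengths are in units of $d_i$ and fields in units of $B_0$, and the magnetic moment is set, for illustration only, so that $B_\mathrm{dip}\approx2\ B_0$ at the standoff, consistent with the $\approx 3.0~B_0$ total field quoted above:

```python
import math

B0 = 1.0       # uniform background field [B_0]
delta = 0.25   # regularization parameter [d_i]
L0 = 1.8       # expected standoff distance [d_i]

def b_dip(x, y, M):
    """Regularized dipolar field B_dip = M / r^3, r = sqrt(x^2 + y^2 + delta^2)."""
    r = math.sqrt(x * x + y * y + delta * delta)
    return M / r**3

# Illustrative moment: by construction, B_dip = 2 B_0 at the standoff.
M = 2.0 * (L0**2 + delta**2) ** 1.5

print(B0 + b_dip(-L0, 0.0, M))   # ~3.0 B_0 at the standoff
print(B0 + b_dip(-4.0, 0.0, M))  # ~1.2 B_0 at the initial interface x_B = -4 d_i
```

Note that the same evaluation at the initial driver-background interface gives a total field of only $\approx 1.2~B_0$, showing how quickly the dipolar contribution decays away from the obstacle.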
Near the interface between the driver and background plasmas, the magnetic field of the dipole is relatively small and the initial magnetic field is $\approx 1.2~B_0$.%\n\n\par In this work, we present simulations with different drivers and magnetic dipole moments. All the simulations presented here, and their respective parameter sets, are listed in Table~\ref{tab:runs}. Simulations B-G are discussed throughout Sec.~\ref{sec:simulations} in equally labeled subsections. Simulation B is used to discuss the overall dynamics of the system, while simulations C, D, and E illustrate the role of the driver length, the density ratio, and the magnetic moment, respectively. Simulations F1-F3 show the results for more realistic choices of parameters, and simulation G for a more realistic driver shape. The physical parameters of the simulations (\textit{e.g.} $M_A$, $L_0\/d_i$) were adjusted to be similar to the LAPD experiments, whereas other parameters (\textit{e.g.} $m_i\/m_e$, $v_0$, $v_{the}$) were chosen to make the simulations computationally feasible. The experimental and numerical parameters are presented in Table~\ref{tab:parameters} and compared with lunar mini-magnetospheres.%\n\n\begin{table*}\n \caption{\label{tab:runs} List of simulations performed and their parameters. $v_{the,x}$ and $v_{thi,x}$ represent the $x$ component of the electron and ion thermal velocities, respectively. 
All the runs considered $v_{th,x}=v_{th,y}=v_{th,z}$ for the electrons and ions.}%\n \begin{ruledtabular}\n \begin{tabular}{ccccccccc}\n \textrm{Name} & $v_{the,x}\/v_0$ & $v_{thi,x}\/v_0$ & $n_d\/n_0$ & $m_{i,d}\/m_e$ & $m_{i,0}\/m_e$ & $L_x\/d_i$ & $L_y\/d_i$ & $L_0\/d_i$\\\\ \colrule\n \textrm{B\/D2\/E2} & 0.1 & 0.01 & 2 & 100 & 100 & 2 & $+\infty$ & 1.8 \\\\\n \textrm{C1} & 0.1 & 0.01 & 2 & 100 & 100 & 1 & $+\infty$ & 1.8 \\\\\n \textrm{C2} & 0.1 & 0.01 & 2 & 100 & 100 & 4 & $+\infty$ & 1.8 \\\\\n \textrm{C3} & 0.1 & 0.01 & 2 & 100 & 100 & $+\infty$ & $+\infty$ & 1.8 \\\\\n \textrm{D1} & 0.1 & 0.01 & 1 & 100 & 100 & 2 & $+\infty$ & 1.8 \\\\\n \textrm{D3} & 0.1 & 0.01 & 4 & 100 & 100 & 2 & $+\infty$ & 1.8 \\\\\n \textrm{E1} & 0.1 & 0.01 & 2 & 100 & 100 & 2 & $+\infty$ & 2.3 \\\\ \n \textrm{E3} & 0.1 & 0.01 & 2 & 100 & 100 & 2 & $+\infty$ & 1.4 \\\\ \colrule\n \textrm{F1} & 0.1 & 0.002 & 2 & 1836 & 1836 & 2 & $+\infty$ & 1.8 \\\\\n \textrm{F2} & 2.5 & 0.033 & 2 & 1836 & 1836 & 2 & $+\infty$ & 1.8 \\\\\n \textrm{F3} & 2.5 & 0.033 & 2 & 100 & 100 & 2 & $+\infty$ & 1.8 \\\\\n \textrm{G} & 0.1 & 0.01 & 2 & 100 & 100 & 2 & 6 & 1.8 \\\\\n \end{tabular}\n \end{ruledtabular}\n\end{table*}\n\n\par In most simulations, we considered a reduced mass ratio $m_i \/ m_e = 100$, a flow velocity $v_0 \/ c = 0.1$, and cold plasmas to reduce the required computational resources, allow extended scans over the different parameters of the system, and simplify our analysis. Thermal effects are negligible for the main results, and the chosen ion-to-electron mass ratio is high enough to ensure sufficient separation between electron and ion spatial and temporal scales. 
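\par The scale separation granted by the reduced mass ratio can be quantified directly through $d_e = d_i\sqrt{m_e\/m_{i,0}}$. The sketch below (function name illustrative) compares the reduced ratio used in most runs with the realistic hydrogen value, reproducing the normalized electron skin depths of Table~\ref{tab:parameters}:

```python
import math

def electron_skin_depth(mass_ratio):
    """d_e in units of d_i for a given ion-to-electron mass ratio m_i/m_e."""
    return 1.0 / math.sqrt(mass_ratio)

print(electron_skin_depth(100))    # 0.1 d_i    (reduced ratio used in most runs)
print(electron_skin_depth(1836))   # ~0.023 d_i (realistic hydrogen ratio)
```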
We confirm the validity of our assumptions in Sec.~\ref{sec:realistic}.%\n\n\par In most of the simulations presented in this work, we have assumed that ions and electrons are initially in thermal equilibrium, and thus used the electron thermal velocities $v_{the}$ shown in Table~\ref{tab:runs} to compute the ion thermal velocities $v_{thi}$. Because we aim to study the role of the hydrogen ions of the experimental driver in the interaction with the background plasma, these simulations considered equal ion masses for the driver and background plasmas, \textit{i.e.} $m_{i,d}=m_{i,0}$.%\n\n\begin{table*}\n \caption{\label{tab:parameters} Typical parameters associated with lunar mini-magnetospheres~\cite{Russell1991,Bamford2012,Lin1998}, the range of parameters of the LAPD~\cite{Schaeffer2021}, and the canonical simulation B. The parameters are written in both physical and normalized units to facilitate the comparison between the space and laboratory environments and the PIC simulations. The experimental parameters are presented as ranges of values computed from the possible LAPD values of the flow velocity $v_0$, the density $n_0$, and the temperature $T$. The plasma parameters shown for lunar mini-magnetospheres are relative to the solar wind, while for the experiments and the simulations, they are relative to the background plasma. The ion data shown corresponds to hydrogen ions. 
The magnetic field $B_{\\textrm{std}}$ is calculated at the standoff position, \\textit{i.e.}, at a distance $L_0$ from the center of the obstacle.}%\n \\begin{ruledtabular}\n \\begin{tabular}{cccccc}\n \\multirow{2}{*}{\\textrm{Parameters}} & \\multicolumn{2}{c}{Lunar mini-magnetospheres} & \\multicolumn{2}{c}{LAPD experiments} & PIC simulations \\\\ \n & Physical units & Normalized units & Physical units & Normalized units & Normalized units\\\\ \\colrule \\\\ [-9 pt]\n Flow velocity, $v_0$ & $400$ km\/s & $10^{-3}$ $c$ & 200-300 km\/s & 0.7-1.0$\\times10^{-3}$ $c$ & 0.1 $c$ \\\\\n Density, $n_0$ & 5 cm$^{-3}$ & --- & $10^{12}$-$10^{13}$ cm$^{-3}$ & --- & ---\\\\\n Mass ratio, $m_i\/m_e$ & --- & 1836 & --- & 1836 & 100 \\\\\n Ion skin depth, $d_i$ & 100 km & --- & 7-23 cm & --- & --- \\\\\n Electron skin depth, $d_e$ & 2 km & 2$\\times10^{-2}$ $d_i$ & 0.2-0.5 cm & 0.7-7.0$\\times10^{-2}$ $d_i$ & 0.1 $d_i$ \\\\\n Magnetic obstacle size, $L_0$ & 300 km & 3 $d_i$ & 14-18 cm & 0.6-2.5 $d_i$ & 1.8 $d_i$\\\\\n Internal magnetic field, $B_{0}$ & $10^{-4}$ G & $10^{-2}$ $m_ec^2\/ed_e$ & 300 G & 3-9$\\times10^{-2}$ $m_ec^2\/ed_e$ & 0.67 $m_ec^2\/ed_e$ \\\\\n Ion gyroradius, $\\rho_i$ & 500 km & 5 $d_i$ & 7-10 cm & 0.3-1.5 $d_i$ & 1.5 $d_i$ \\\\\n Electron gyroradius, $\\rho_e$ & 800 m & $8\\times10^{-3}$ $d_i$ & 4-6$\\times10^{-3}$ cm & 2-8$\\times10^{-4}$ $d_i$ & 0.15 $d_i$ \\\\\n Ion gyroperiod, $\\omega_{ci}^{-1}$ & 1 s & --- & 230-520 ns & --- & --- \\\\\n Alfv\u00e9n velocity, $v_A$ & 80 km\/s & $3\\times10^{-4}$ $c$ & 140-980 km\/s & 0.5-3.3$\\times10^{-3}$ $c$ & 0.067 $c$ \\\\\n Alfv\u00e9nic Mach number, $M_A$ & --- & 5 & --- & 0.3-1.5 & 1.5 \\\\\n Temperature, $T$ & 5 eV & --- & 1-10 eV & --- & --- \\\\\n Electron thermal velocity, $v_{the}$ & 1500 km\/s & 4 $v_0$ & 730-2300 km\/s & 2.4-11.5 $v_0$ & 0.1 $v_0$ \\\\\n Ram pressure, $n_0m_iv_0^2$ & 1 nPa & --- & 70-1500 Pa & --- & --- \\\\\n Standoff magnetic field, $B_{\\textrm{std}}$ & $5\\times10^{-4}$ G & 0.07 
$m_ec^2\/ed_e$ & 100-600 G & 0.02-0.2 $m_ec^2\/ed_e$ & 2.0 $m_ec^2\/ed_e$ \\\\\n \end{tabular}\n \end{ruledtabular}\n\end{table*}\n\n\subsection{\label{sec:additional} Evolution and main features of the system}\n\n\par To identify the main magnetospheric and kinetic-scale structures that arise from the initial configuration, simulation \textrm{B} was performed. It considered a driver with length $L_x=2\ d_i$ and density $n_d=2\ n_0$ (twice the background density). Figs.~\ref{fig:movie} a1-3) represent the total ion density $n_i=n_{i,d}+n_{i,0}$, for three different times, and Figs.~\ref{fig:movie} b1-3) show the variation of the $z$ component of the magnetic field from its initial value, $\Delta B_z=B_z-B_{z,\textrm{initial}}$.%\n\n\begin{figure*}[t]\n \includegraphics[width=0.98\linewidth]{plots\/movie.pdf}\n \caption{\label{fig:movie} Spatiotemporal evolution of a) the total ion density and b) the variation of the $z$ component of the magnetic field in simulation \textrm{B} (see Table~\ref{tab:runs} for a list of parameters). Columns 1-3 correspond to three different times in the simulation. The vertical and circular dashed lines mark the initial border between the driver and background plasma and the dipolar magnetic obstacle with radius $L_0$, respectively.}%\n\end{figure*}\n\n\par In Fig.~\ref{fig:movie} a1), we see the total ion density for an early time ($t\omega_{ci}=1.5$). Given the small distance propagated by the driver plasma at this time, the dipolar magnetic field does not significantly affect the interaction between the plasmas. For this reason, we can describe the early system as a driver flowing against a uniform magnetized background plasma. 
\nIn Fig.~\\ref{fig:movie} b1), we observe that this interaction creates a region of compressed magnetic field in the downstream region, where the background plasma is located, and expels the magnetic field in the region of the driver, leading to a magnetic cavity in the upstream region with approximately null magnetic field~\\cite{Bondarenko2017}.%\n\n\\par In Figs.~\\ref{fig:movie} a2) and b2), we start to observe the effects of the dipolar magnetic field for a later time ($t\\omega_{ci}=3.0$). As the magnetic pressure exerted against the plasmas increases, a region of compressed background plasma forms in front of the dipole, as Fig.~\\ref{fig:movie} a2) shows. After the interaction between the background and the dipole, the magnetic field pressure becomes large enough to counterbalance the kinetic pressure of the driver, reflecting it upstream. This can be seen seen in Fig.~\\ref{fig:movie} a3) for a subsequent time ($t\\omega_{ci}=4.5$). After the reflection, there is no longer a plasma flow pushing the magnetic compression forward or holding the decompression by the left side of the background region, and as a result, the region near the dipole quickly decompresses --- see Fig.~\\ref{fig:movie} b3).%\n\n\\par To compare the numerical results with the experimental data shown in Fig.~\\ref{fig:experiment}, synthetic diagnostics were obtained from the simulations. In Fig.~\\ref{fig:standard}, the variation of the magnetic field $\\Delta B_z$ and the density current $J_y$ measured at the axis of symmetry $y=0$ and as a function of time are plotted for simulation \\textrm{B}. 
These diagnostics are important for understanding the system dynamics, given the central role of the $z$ component of the magnetic field in the motion of the particles.%\n\n\begin{figure}[ht]\n \includegraphics[width=0.85\columnwidth]{plots\/standard.pdf}\n \caption{\label{fig:standard} Temporal evolution of a) the variation of the magnetic field $\Delta B_z$ and b) the current density $J_y$ at $y=0$ for simulation \textrm{B}. The driver has a 2 $d_i$ length and a density $n_d = 2\ n_0$. The dashed lines have slopes that match the flow velocity $v_0$, the coupling velocity $v_c$, and the reflection velocity $v_r$.}%\n\end{figure}\n\n\par The main features of Fig.~\ref{fig:standard} are consistent with the experimental results. In the magnetic field plot of Fig.~\ref{fig:standard} a), both the upstream magnetic cavity and the downstream magnetic compression are present. Between $t\omega_{ci}=0$ and $t\omega_{ci}\approx1.5$, the system behaves approximately as a driver piston moving against a uniform magnetized plasma. As the driver pushes the background plasma and magnetic field, the discontinuity that separates these two media travels at a constant coupling velocity $v_c < v_0$, measured as $v_c\approx0.49\ v_0$ for this simulation. The leading edge of the compression of the magnetic field travels with a velocity close to $v_0$ for the runs considered.%\n\n\par The driver experiences increasingly higher magnetic fields until the magnetic pressure is enough to reflect the driver near the expected standoff $x_0 = -L_0$, at $t\omega_{ci}\approx3$. The magnetic cavity and magnetic compression are also reflected, and the boundary between these two regions travels with a velocity $v_r$ after reflection. 
The background magnetic decompression is seen after $t\\omega_{ci}=5$.%\n\n\\par In the current density plot of Fig.~\\ref{fig:standard} b), we can observe the diamagnetic current that supports the magnetic field gradient between the driver and background plasmas and that identifies the leading edge of the magnetic cavity. During the driver reflection, this current branches into multiple components due to the multi-stream velocity distributions developed in the driver and background plasmas. We can also verify that this structure is reflected near the expected standoff $x_0 =-L_0$. Between $t\\omega_{ci}\\approx2$ and $t\\omega_{ci}\\approx3$, a second current structure is present in the background region. It is associated with the magnetopause of the system and the small decompressed field region that we see in Fig.~\\ref{fig:standard} a), and it arises from the interaction of the accelerated background ions with the dipole, as we show in Sec.~\\ref{sec:momentum}. The presence of these two current structures is consistent with the experimental results.%\n\n\\par In Fig.~\\ref{fig:standard} b) we can also see the formation of waves in the background plasma, near the dipole region. These waves are excited in regions of highly non-uniform density and magnetic field, and have periods and wavelengths between the ion and electron kinetic scales. We have verified that their properties change significantly for different ion thermal velocities. In particular, we have found these waves to be more clearly excited for lower ion temperatures, which may explain why these waves have not been observed in the experiments performed at the LAPD. A detailed characterization of these waves and the conditions for their formation is out of the scope of this paper, and shall be addressed in a future work.%\n\n\\par To better understand the particle motion during the events described, we show in Fig.~\\ref{fig:phase-space} the phase spaces of ions and electrons located near $y=0$. 
For the ions, the $x$ component of the velocity of the particles is presented, to illustrate their reflection and accumulation, while for the electrons, the $y$ component is shown instead, to illustrate the formation of the currents. The magnetic field $B_z$ and the current density $J_y$ profiles at $y=0$ are also represented. Once again, we used the parameter set \textrm{B} of Table~\ref{tab:runs}.%\n\n\begin{figure*}[t]\n \includegraphics[width=0.95\linewidth]{plots\/phase-scan-standard.pdf}\n \caption{\label{fig:phase-space} Ion (a) and electron (b) phase spaces, and magnetic field $B_z$ and current density $J_y$ profiles at $y=0$, for simulation \textrm{B} and for three different times (1-3). The particles shown were randomly selected in the region $-0.2\ d_i < y < 0.2\ d_i$. The frames labeled a1) to a3) show the $v_x$ velocity of the ions, while the frames labeled b1) to b3) show the $v_y$ velocity of the electrons. Blue\/orange markers correspond to background\/driver plasma particles. The green and purple lines correspond to the magnetic field $B_z$ and current density $J_y$, respectively. The left dashed line marks the initial border between the driver and the background plasmas, and the right dashed line marks the expected standoff $x_0=-L_0$.}%\n\end{figure*}\n\n\par Fig.~\ref{fig:phase-space} a1) shows the $v_x$ velocity of the ions when the dipole field is still negligible. The driver ions initially stream with velocity $v_0$ until they interact with the background field. After reaching the background, they are mostly decelerated and reflected by the electric field at the interface between the plasmas~\cite{Bondarenko2017}, and end up with a flow velocity that is close to zero for the simulation considered. The reflection occurs near the boundary of the magnetic cavity, which moves with velocity $v_c$ through the background, as mentioned above. 
During this stage, the background ions accelerate from rest to an average velocity close to $v_c$.%\n\n\par The driver and the accelerated background ions continue to approach the dipole until they are reflected. This can be seen in Fig.~\ref{fig:phase-space} a2). During this interaction, two main current structures are visible in the $J_y$ profile. The first one (from the left) corresponds to the typical diamagnetic current, while the second one corresponds to the magnetopause. To the right of these two main current structures, we can see the background waves observed in Fig.~\ref{fig:standard} b). In Fig.~\ref{fig:phase-space} a3), the driver ions are fully reflected. The ions reflected by the dipole obtain a velocity close to $-v_0$, while the magnetic cavity moves back with velocity $v_r$.%\n\n\par Because the simulation considers a cold plasma approximation, the ion thermal velocities remain small most of the time, except at the boundary between the two plasmas, where the velocity of the ions changes abruptly. The same does not occur for the electrons. We can see in the $v_y$ velocity of the electrons, represented in Figs.~\ref{fig:phase-space} b1) to b3), that, although the electron thermal velocities are initially small, they increase considerably. At the boundary, the electrons can reach thermal velocities of $6\ v_0$, much higher than the ion velocities. Because the electron and ion density profiles are very similar during the entire evolution of the system, the current density $J_y = e(n_iv_{iy}-n_ev_{ey})$ is then mainly transported by the electrons, where $n_j$ is the density and $v_{jy}$ the $y$ component of the velocity of the ions and electrons ($j=i,e$, respectively). 
This is also consistent with the observed spatial distribution of electrons during the reflection, which shows an excess of fast electrons around the standoff position.\n\n\subsection{Driver length} \label{sec:length}\n\n\par To choose a driver length that best reproduces the experimental results shown in Fig.~\ref{fig:experiment} and to understand its role in the magnetic field and current density structures, we performed simulations \textrm{C1} to \textrm{C3} (see Table~\ref{tab:runs}) with varying driver length $L_x$. In Fig.~\ref{fig:scan-length}, we show $\Delta B_z$ and $J_y$ at $y=0$ for $L_x=1\ d_i$ (C1), $L_x=4\ d_i$ (C2), and for an infinite driver (C3). For these simulations, the properties of the background plasma and the width of the driver $L_y$ were kept unchanged. The density of the driver was $n_d=2\ n_0$.%\n\n\par In Figs.~\ref{fig:scan-length} a1) and b1), we see the magnetic field and current density plots for the short driver length $L_x=1\ d_i$. We observe most of the features of Fig.~\ref{fig:standard}, namely the reflection of the compressed magnetic field in a1) and the diamagnetic and magnetopause currents in b1). For this length, however, the driver never fully interacts with the dipole. The closest that the diamagnetic current structure gets to the dipole is $x_r\approx-3.0\ d_i$, \textit{i.e.}, much farther than the expected standoff $x_0 = -L_0=-1.8\ d_i$. To replicate the experimental results and ensure that the driver can reach the dipole, we should thus use a sufficiently long driver such that $x_r>x_0$. Additionally, short drivers risk entering a decoupling regime between the two plasmas~\cite{Hewett2011}, which can compromise the observation of a magnetopause. 
The coupling effects on the results are discussed in detail in Sec.~\\ref{sec:density}.%\n\n\n\\par The position where the driver is fully reflected by the background can be estimated as $x_r\\approx x_B+L_xv_c\/(v_0-v_c)$, where $x_B$ is the initial boundary position between the two plasmas. This estimate is obtained by computing the volume of the background plasma required for the driver plasma to deposit its kinetic energy, \\textit{i.e.} $x_r - x_B$ corresponds to the magnetic stopping radius of the system~\\cite{rb}.\n\n\n\\par In the simulation with $L_x=4\\ d_i$, represented in Figs.~\\ref{fig:scan-length} a2) and b2), we observe once more the main features identified in Fig.~\\ref{fig:standard}, but unlike the $L_x=1\\ d_i$ case, the driver is long enough and ends up reflected by the dipole. We observe that the diamagnetic current reaches the expected standoff and has enough plasma to maintain it near the dipole for a time period ($t\\omega_{ci}\\approx3$ to $t\\omega_{ci}\\approx5$) longer than the $2\\ d_i$ case shown in Fig.~\\ref{fig:standard}. As a result, the magnetic decompression in the background region is delayed for longer drivers. However, because the full driver reflection also occurs later, longer drivers will result in short-lived reflections of the compression of the magnetic field.%\n\n\\par In Figs.~\\ref{fig:scan-length} a3) and b3), we show the results for a driver with infinite length ($L_x=+\\infty$). In this simulation, the driver plasma is only partially initialized inside the simulation domain, and a flow is continuously injected from the lower $x$ boundary. An infinite driver configuration allows us to understand the dynamics of the system in an asymptotic regime in which the driver plasma stays close to the dipole. As expected, until $t\\omega_{ci}=3$, the features observed are very similar to $L_x=2\\ d_i$ and $L_x=4\\ d_i$. 
After this time, the magnetic and the driver kinetic pressures balance each other near $x_0$, so the diamagnetic current remains stationary. Because the driver can hold for longer near the dipole, the decompression in the background region is much slower and is not visible in the time range of the plot. We can also observe that the background waves are only visible during a transient.%\n\n\par In all three simulations, the measured coupling velocity was always $v_c \approx 0.49\ v_0$. Given the results shown in Fig.~\ref{fig:scan-length}, we chose a driver length of $2\ d_i$ to reproduce the experimental results. This driver length is large enough to ensure that the driver arrives at the dipole and small enough to observe a significant reflection of the compression of the magnetic field, as we see in the experiments.%\n\n\begin{figure*}[t]\n \includegraphics[width=\linewidth]{plots\/different-lengths-plot.pdf}\n \caption{\label{fig:scan-length} Temporal evolution of the variations of the magnetic field $\Delta B_z$ and current density $J_y$ at $y=0$, for driver lengths of a) 1 $d_i$, b) 4 $d_i$, and for c) an infinite driver length (see Table~\ref{tab:runs} for a full list of the parameters). The dashed lines represent the slopes of the flow velocity $v_0$, the coupling velocity $v_c$, and the reflection velocity $v_r$.}%\n\end{figure*}\n\n\subsection{Plasma coupling with density ratio} \label{sec:density}\n\n\par As expected from previous works, increasing the ratio between the driver and background plasma densities should improve the coupling between the two plasmas~\cite{Bondarenko2017, Hewett2011}, meaning that, for denser drivers, the transfer of momentum and energy from the driver to the background plasma is more efficient. 
To better understand the role of the coupling mechanism, we performed simulations with different values of the driver density, namely $n_d=n_0$ (\textrm{D1}), $n_d=2\ n_0$ (\textrm{D2}), and $n_d=4\ n_0$ (\textrm{D3}), while keeping a constant background density $n_0$ and a driver length $L_x = 2\ d_i$. For each run, the magnetic moment was chosen such that the expected standoff obtained from Eq.~\eqref{eq:pressure-equilibrium} was always $L_0=1.8\ d_i$. The synthetic magnetic field and current density diagnostics were obtained for these simulations and are shown in Fig.~\ref{fig:scan-density}.%\n\n\begin{figure*}[t]\n \includegraphics[width=\linewidth]{plots\/different-densities-plot.pdf}\n \caption{\label{fig:scan-density} Temporal evolution of the variations of the magnetic field $\Delta B_z$ and current density $J_y$ at $y=0$, for different ratios between the driver and background densities $n_d \/ n_0$. The magnetic moment was chosen so that the expected standoff distance $L_0$, calculated from Eq.~\eqref{eq:pressure-equilibrium}, was kept as 1.8 $d_i$ for all the simulations. Panels a-c) show results for $n_d = n_0$, $n_d = 2\ n_0$, and $n_d=4\ n_0$, respectively.}%\n\end{figure*}\n\n\par In Figs.~\ref{fig:scan-density} a1) and b1), we can see $\Delta B_z$ and $J_y$ for the lowest driver density considered, $n_d = n_0$ (\textit{i.e.}, background and driver with the same initial density). In this regime, the coupling is less efficient and, as a result, the coupling velocity $v_c \approx 0.38\ v_0$ is lower than that obtained in the higher-density cases represented in Figs.~\ref{fig:scan-density} b) and c). Due to the low coupling velocity, the driver plasma is reflected more quickly by the background than for denser drivers, and the expected position $x_r$ for total reflection by the background is farther from the dipole than the expected standoff $x_0$, meaning $x_r<x_0$. 
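\par These trends can be checked against the stopping-radius estimate $x_r\approx x_B+L_xv_c\/(v_0-v_c)$ of Sec.~\ref{sec:length}. A minimal numeric sketch, using $x_B=-4\ d_i$ and the coupling velocities measured in the simulations:

```python
def reflection_position(x_b, l_x, vc_over_v0):
    """Estimate x_r ~ x_B + L_x * v_c / (v_0 - v_c), in units of d_i.

    Valid in the absence of the dipole; the driver reaches the magnetic
    obstacle only if x_r > x_0 = -L_0.
    """
    return x_b + l_x * vc_over_v0 / (1.0 - vc_over_v0)

x_B = -4.0  # initial driver-background interface [d_i]

print(reflection_position(x_B, 1.0, 0.49))  # ~ -3.0 d_i (run C1)
print(reflection_position(x_B, 2.0, 0.49))  # ~ -2.1 d_i (run B)
print(reflection_position(x_B, 2.0, 0.38))  # ~ -2.8 d_i (run D1, weaker coupling)
```

For run C1 this reproduces the closest approach $x_r\approx-3.0\ d_i$ observed in Sec.~\ref{sec:length}, and for run D1 it confirms that $x_r$ falls short of the expected standoff $x_0=-1.8\ d_i$.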
In fact, the position where the driver is reflected, $x_r$, for cases without a dipole, increases with the driver length $L_x$ and the velocity ratio $v_c\/v_0$, and thus, both quantities must be large enough to guarantee that $x_r>x_0$. In turn, the ratio $v_c\/v_0$ increases with increasing driver density ratio $n_d\/n_0$, and so, the driver should be sufficiently long and dense to effectively couple to the background plasma. Our results (in particular Sec.~\\ref{sec:additional}) show that a driver with $L_x=2\\ d_i$ and $n_d=2\\ n_0$ qualitatively reproduces the experimental results.%\n\n\\par A separate study was also performed to analytically determine the properties of the driver-background plasma coupling. The results of this study will be presented in a future paper.\n\n\\subsection{Dependency of the magnetopause position on the magnetic moment} \\label{sec:momentum}\n\n\\par To confirm that the features previously associated with the magnetopause location change according to its expected position, we performed simulations with a 2 $d_i$ long driver with density $n_d = 2\\ n_0$ for three different magnetic moments. Considering as $M_0$ the magnetic moment that results in the expected standoff $L_0=1.8\\ d_i$ (simulation \\textrm{B\/E2} in Table~\\ref{tab:runs}), simulations with the magnetic moments $2\\ M_0$ (\\textrm{E1}) and $M_0\/2$ (\\textrm{E3}) were also performed, corresponding respectively to the expected standoffs $L_0\\approx2.3\\ d_i$ and $L_0\\approx1.4\\ d_i$. Fig.~\\ref{fig:scan-momentum} shows the $\\Delta B_z$ and $J_y$ synthetic diagnostics at $y=0$ for the three simulations.%\n\n\\begin{figure*}[t]\n \\includegraphics[width=\\linewidth]{plots\/different-momentums-plot.pdf}\n \\caption{\\label{fig:scan-momentum} Temporal evolution of the variation of the magnetic field $\\Delta B_z$ and current density $J_y$ at $y=0$, for three different magnetic moments. 
The magnetic moments $M$ considered were a) $M=2\\ M_0$, b) $M=M_0$ and c) $M=M_0\/2$, where $M_0$ represents the magnetic moment that corresponds to a standoff $L_0=1.8\\ d_i$ for a driver density $n_d=2\\ n_0$. The corresponding standoffs for these simulations are a) $L_0\\approx2.3\\ d_i$, b) $L_0 = 1.8\\ d_i$ and c) $L_0 \\approx 1.4 \\ d_i$.}%\n\\end{figure*}\n\n\\par Figs.~\\ref{fig:scan-momentum} a1) and b1) show the results for the highest magnetic moment $M=2\\ M_0$. We see that the current structures associated with the magnetopause and the background waves are less evident than for the lower magnetic moments, as they are formed farther from the dipole. Figs.~\\ref{fig:scan-momentum} a2) and b2) correspond to the magnetic moment $M_0$ that leads to $L_0=1.8\\ d_i$ and are the same results shown in Fig.~\\ref{fig:standard}. As previously mentioned, there are two main observable standoffs in the current structures. The first one is associated with the diamagnetic current, which is reflected around $t\\omega_{ci}\\approx3$ near the expected value $x_0=-L_0=-1.8\\ d_i$. This standoff is related to the interaction between the driver ions and the dipole. The second standoff occurs between $t\\omega_{ci}\\approx2$ and $t\\omega_{ci}\\approx3$ and is located in the background plasma region. This standoff also occurs near $x=-1.8\\ d_i$.%\n\n\\par In Figs.~\\ref{fig:scan-momentum} a3) and b3), we show the results obtained for the half magnetic moment $M=M_0\/2$. In this case, the magnetic pressure exerted by the dipole is lower, leading to a smaller $L_0$, and consequently, the diamagnetic current feature visible in b3) is closer to the dipole than in Figs.~\\ref{fig:scan-momentum} b1) and b2). The main changes, however, occur in the magnetopause current. Unlike what we observe for the other magnetic moments, the magnetopause current, pinpointed in the current density plot, lasts for a longer time (until $t\\omega_{ci}\\approx4$). 
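As a consistency check on the standoff values quoted above, note that they are reproduced by a cube-root scaling $L_0\propto M^{1\/3}$. The sketch below assumes that the balance in Eq.~\\eqref{eq:pressure-equilibrium} involves a point-dipole field $B(r)\propto M\/r^3$; this assumed form is ours, not taken from the text.

```python
# Assumption (not the paper's derivation): Eq. (pressure-equilibrium) balances
# a fixed kinetic pressure against a point-dipole magnetic pressure,
# B(L0)^2/8pi ~ (M/L0^3)^2, which implies L0 scaling as M**(1/3).
def standoff(m_ratio, l0_ref=1.8):
    """Expected standoff (in d_i) for a magnetic moment M = m_ratio * M0,
    where M0 is the moment that gives L0 = l0_ref."""
    return l0_ref * m_ratio ** (1.0 / 3.0)

print(round(standoff(2.0), 1))  # 2.3 d_i (run E1)
print(round(standoff(0.5), 1))  # 1.4 d_i (run E3)
```

Both values match the standoffs quoted for runs \\textrm{E1} and \\textrm{E3}.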
This current is also more separated from the diamagnetic current standoff and is easier to identify. This is consistent with the experimental observations.%\n\n\\par To identify the pressure balances associated with the two observed standoffs, and because the magnetic and kinetic pressures vary over time, we studied the temporal evolution of the different plasma and magnetic pressure components of the system. In particular, we calculated the spatial profiles of the magnetic pressure $B^2\/8\\pi$, the ram pressure $n_jm_jv_{flj}^2$ and the thermal pressure $n_jm_jv_{thj}^2$ as a function of time for $y = 0$. In these expressions, $n_j$, $m_j$, $v_{flj}$ and $v_{thj}$ refer to the density, mass, and flow and thermal velocities, respectively, of the ions ($j=i$) and electrons ($j=e$). The magnetic pressure was calculated from the magnetic field measured in each PIC grid cell located at $y=0$. The flow and thermal pressures were calculated from averaged particle data. To ensure that the calculation of each kinetic pressure considered a sufficiently large number of particles, all the particles between $-0.1\\ d_i<y<0.1\\ d_i$ were included in the averages.%\n\n\\par For $t\\omega_{ci}>3$, the magnetic field loses most of its energy to the background and driver plasmas, leading to a drop of the magnetic energy. After $t\\omega_{ci}\\approx4$, the background ions start to leave the simulation box, and the total energy is no longer conserved. The background kinetic energy remains approximately constant because the background plasma loses energy to the sink at the right boundary of the simulation but gains energy from the magnetic field. For both driver and background plasma, the ions carry most of the energy.%\n\n\\par From Fig.~\\ref{fig:pressures}, we can identify the positions where multiple pressure balances occur, and therefore, develop an insight into the pressure equilibria that are behind the structures of the current density synthetic diagnostics. 
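The procedure described above (building 1D pressure profiles at $y=0$ and locating where two of them balance) reduces to a few array operations. Below is a minimal sketch with hypothetical array inputs; the actual diagnostics operate on the PIC grid and particle output, whose format is not shown here.

```python
import numpy as np

def pressure_profiles(B, n, m, v_fl, v_th):
    """Magnetic, ram, and thermal pressures on a 1D grid (Gaussian units)."""
    p_mag = B ** 2 / (8.0 * np.pi)  # B^2/8pi
    p_ram = n * m * v_fl ** 2       # n_j m_j v_flj^2
    p_th = n * m * v_th ** 2        # n_j m_j v_thj^2
    return p_mag, p_ram, p_th

def balance_position(x, p1, p2):
    """Position closest to the dipole (x = 0) where p1 = p2."""
    d = p1 - p2
    crossings = np.where(np.diff(np.sign(d)) != 0)[0]
    if crossings.size == 0:
        return None
    # linearly interpolate each sign change, then keep the one nearest x = 0
    roots = [x[i] - d[i] * (x[i + 1] - x[i]) / (d[i + 1] - d[i])
             for i in crossings]
    return min(roots, key=abs)

# toy example: dipole-like magnetic pressure vs. a constant kinetic pressure
x = np.linspace(-5.0, -0.5, 2000)
p_mag = np.abs(x) ** -6.0             # ~ (M/r^3)^2 profile (arbitrary units)
p_kin = np.full_like(x, 1.8 ** -6.0)  # balances p_mag at x = -1.8
print(round(balance_position(x, p_mag, p_kin), 2))  # -1.8
```

Because the pressures come from a kinetic code, each candidate crossing still has to be inspected to confirm it is a genuine equilibrium rather than an interface between regions, as discussed below.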
Using the previously calculated pressures, we obtained the equilibrium positions where certain pressure balances manifested and plotted them in Fig.~\\ref{fig:equilibrium} alongside $J_y$.%\n\n\\begin{figure}[ht]\n\\includegraphics[width=\\columnwidth]{plots\/pressure-equilibrium3.pdf}\n\\caption{\\label{fig:equilibrium} Temporal evolution of the current density $J_y$ at $y=0$, together with the locations of different pressure balances closest to the dipole at multiple times. The represented locations of pressure balances are the equilibria of the driver kinetic pressure $P_{\\mathrm{d}}$ with the total magnetic field pressure $P_{\\mathrm{mag}}=B_z^2\/8\\pi$, represented by the solid line; of the background kinetic pressure $P_0$ with the pressure exerted by the relative magnetic field $P_{\\mathrm{rel}}=P_{\\mathrm{mag}}-B_0^2\/8\\pi$, by the dotted line; and $P_{\\mathrm{d}}=P_{\\mathrm{rel}}$, by the dashed line. The results correspond to simulation \\textrm{E3} (see Table~\\ref{tab:runs}).}%\n\\end{figure}\n\n\\par This analysis shows that the system has, in general, two magnetopause structures: one driven by the background, and one by the driver plasma. The former structure is defined by the balance $P_{\\mathrm{0}}=P_{\\mathrm{rel}}$. For the latter structure to form, the driver needs to have almost enough energy to push the diamagnetic current up to the magnetopause, defined by Eq.~\\eqref{eq:pressure-equilibrium}. This is illustrated in Fig.~\\ref{fig:equilibrium}, where we show the location of the pressure equilibrium between the driver kinetic pressure and the total magnetic pressure, $P_\\mathrm{d} = P_\\mathrm{mag}$. \n\n\\par As shown in Fig.~\\ref{fig:pressures}, the current associated with the background magnetopause seems to overlap with the region of background and magnetic pressure balance. Unlike the driver, the background plasma is magnetized. 
If we neglect the compression of the magnetic field in the downstream region, the pressure balance that describes this magnetopause can then be estimated by the equilibrium of the kinetic pressure of the background plasma with the relative magnetic pressure, $P_0 = P_\\mathrm{rel}$. In Fig.~\\ref{fig:equilibrium}, we show that this pressure balance, represented by the dotted line, describes well the position of the current feature identified as the magnetopause between times $t\\omega_{ci}\\approx2$ and $t\\omega_{ci}\\approx3$. \n\n\n\\par After $t\\omega_{ci} \\approx 3$, the magnetopause current is well described by the pressure balance $P_\\mathrm{d} = P_\\mathrm{rel}$, as illustrated by the dashed line in Fig.~\\ref{fig:equilibrium}. In fact, after inspecting the phase spaces in Figs.~\\ref{fig:phase-space} a3) and b3), we can observe that a combination of driver plasma particles (separated from the bulk distribution) and background ions pushes the dipolar field and sets the position of the magnetopause.\n\n\n\n\\par We stress that, because we are determining equilibria via MHD pressure balances but are checking the intersection between pressure curves with kinetic resolution, some caution must be taken to ensure that we are observing the equilibrium between pressures and not merely the interface between the different regions of interest. To ensure that the pressure equilibria were correctly obtained, the corresponding pressure profiles were always carefully inspected with additional diagnostics.%\n\n\\subsection{Realistic parameters}\\label{sec:realistic}\n\n\\par Due to the need for more extensive scans (and thus using physically equivalent but computationally feasible parameters), the simulations shown so far considered reduced ion mass ratios, cold plasmas, and higher velocities than the ones used in the LAPD experiments - see Table~\\ref{tab:parameters}. 
To ensure that the main results presented in the previous sections are also valid with realistic parameters, we have performed a set of simulations with parameters similar to those expected experimentally. \n\n\\par Three simulations were performed, labeled as runs F1 to F3. Run F1 employs realistic mass ratios $m_{i,d}\/{m_e} = m_{i,0}\/{m_e} = 1836$. Run F2 additionally considers a ratio between the electron thermal and flow velocities close to the ones expected for the LAPD experiments, namely $v_{the,x}\/v_0=2.5$ and $v_{thi,x}\/v_0=0.033$, leading to higher temperatures than in the previous simulations, and thus allowing for possible thermal effects on the system. Finally, run F3 considers the same electron thermal velocity ratios as F2 but the standard reduced mass ratios. \n\n\\par The $\\Delta B_z$ and $J_y$ plots for these simulations are shown in Fig.~\\ref{fig:realistic}. Note that, due to changes in $m_i\/m_e$, the spatial and temporal scales were recalculated for the new parameters. Once again, the magnetic dipole moment for the three simulations was adjusted to ensure that $L_0=1.8\\ d_i$.%\n\n\\begin{figure*}[t]\n \\includegraphics[width=\\linewidth]{plots\/realistic2.pdf}\n \\caption{\\label{fig:realistic} Temporal evolution of a) the variation of the magnetic field $\\Delta B_z$ and b) the current density $J_y$ at $y=0$, for the simulations with similar parameters to the experiments. Run F1 considers realistic mass ratios for the driver and background plasmas and low ratios between the thermal and flow velocities; run F2 uses realistic mass ratios and thermal velocity ratios close to the ones expected in the experiments; run F3 uses the realistic thermal velocity ratios but reduced mass ratios.}%\n\\end{figure*}\n\n\\par As expected, these simulations show the same main structures discussed in the previous sections. 
We observe the typical reflection of the compression of the magnetic field and the current structures of the magnetopause and diamagnetic cavity. However, some differences are also visible. In Figs.~\\ref{fig:realistic} a1) and b1), \\textit{i.e.} for the simulation with realistic mass ratios but cold plasmas, we observe a stronger filamentation of the plasma flow reflected off the dipole and a thinner diamagnetic current. This is because $d_e$ is the characteristic length scale of the current layer and we have lower $d_e\/d_i$ values for larger $m_i\/m_e$. Figs.~\\ref{fig:realistic} a2) and b2), for the simulation with higher temperatures, show no major differences from Figs.~\\ref{fig:realistic} a1) and b1), even though there is a significant increase in the thermal velocities. \n\n\\par In Figs.~\\ref{fig:realistic} a3) and b3), however, we observe significant differences for reduced mass ratios with realistic thermal velocity ratios. In particular, we observe in the current density plot smoother magnetic and current structures and less defined background waves between the magnetopause and the dipole. We also observed that, for increased ion thermal velocities (for example, $v_{thi}\/v_0 \\approx 0.25$), the background waves are no longer visible. \n\n\\par Additionally, other simulations were performed to look for possible changes with realistic parameters. A simulation with a lower flow velocity $v_0=0.01\\ c$ and realistic thermal velocity ratios led to no significant new features, and the obtained synthetic diagnostics were very similar to the ones in Figs.~\\ref{fig:realistic} a3) and b3), meaning that the system scales well with $v_0$. Another simulation was performed to observe if the shape of the initial density profiles of the plasmas would affect the main results. 
Namely, the constant density profiles used on both the driver and background plasmas were replaced by Gaussian density profiles with a typical gradient scale $\\sigma=1\\ d_i$ on the edges of the plasmas. This simulation did not show meaningful differences, in agreement with previous plasma coupling works, which observed that the leading edge of the plasmas evolves similarly for different initial density profiles~\\cite{doi:10.1063\/1.1694472}.%\n\n\\subsection{Finite transverse size}\\label{sec:finite}\n\n\\par For simplicity, and because we were more interested in studying the system along the axis of symmetry $y=0$, the previous simulations only considered a driver with infinite width $L_y$ and a length of $L_x=2\\ d_i$. In the experiments, however, the drivers had a width comparable to their lengths and did not have the sharp boundaries used in the simulations. To investigate if and how our results are modified with a more complex-shaped driver, we performed a simulation with a finite width, semi-circular-shaped driver plasma. This driver is initially defined with the conditions $(x+7.25\\ d_i)^2+y^2<(3.25\\ d_i)^2$ and $x>-6\\ d_i$ and has length $L_x=2\\ d_i$ and width $L_y=6\\ d_i$. Fig.~\\ref{fig:finite} shows the results of this simulation and includes the initial shape of the driver in Fig.~\\ref{fig:finite} a).%\n\n\\begin{figure*}[t]\n \\includegraphics[width=\\linewidth]{plots\/finite-driver.pdf}\n \\caption{\\label{fig:finite} a) Total ion density at time $t\\omega_{ci}=3.0$, and temporal evolution of b) the variation of the magnetic field $\\Delta B_z$ and c) the current density $J_y$ at $y=0$, for simulation G with a finite width driver with a circular segment shape. 
The dashed lines at a) represent the initial position of the driver and the left border of the background plasma.}%\n\\end{figure*}\n\n\\par Due to the finite width of the new driver and its particular shape, we should expect to see significant differences in the regions of the simulation plane far from $y=0$. In the total ion density plot of Fig.~\\ref{fig:finite} a) for a time $t\\omega_{ci}=3$, when there is a strong interaction of the driver with the dipolar magnetic field, we observe the propagation of waves at the lower and upper sides of the dipole caused by the finite width of the driver, which were not present for infinite-width drivers.%\n\n\\par In Figs.~\\ref{fig:finite} b) and c), we see the usual magnetic and current density plots at $y=0$ for this simulation. When the driver plasma width is shortened, the background particles escape from the bottom and top regions of the simulation box, and the driver has more difficulty holding the magnetic decompression in the background region. The decompression, therefore, occurs more quickly for finite drivers, as seen in Fig.~\\ref{fig:finite} b), leading to shorter reflections of the magnetic compression.\n\n\\par Although this complex-shaped driver gets us closer to the experimental configuration, the simulations did not include all the properties of the experimental driver, such as the non-uniform density and velocity profiles of the plasmas and the flow divergence. Additionally, 3D effects should also be considered. Future simulations are planned to study the effect of these properties on the results. However, we expect that these features will not change the main results of the simulations.%\n\n\\section{Conclusions} \n\\label{sec:conclusions}\n\n\\par In this work, we have performed PIC simulations of mini-magnetospheres in the interaction between a plasma flow and a magnetized background plasma. 
In particular, we have successfully reproduced results from recent experiments performed at the LAPD, validating the experimental platform to study mini-magnetospheres in the laboratory. We have also explored an extensive parameter space defining the interaction, allowing us to i) determine how the main properties of the system change with the parameters and ii) identify the required conditions for the creation of a mini-magnetosphere.%\n\n\\par Our simulations have shown that some system features are present across multiple regimes. The initial flow of the driver expels the magnetic field in the upstream region, leading to a magnetic cavity, and compresses the downstream magnetic field. The driver travels through the background until the magnetic field pressure is large enough to counterbalance the driver plasma pressure. A fast decompression of the background magnetic field then follows. If the background decompression occurs after the total reflection of the driver plasma, then we can observe the reflection of the compression of the magnetic field. To see this feature, the driver needs to be short enough that its reflection precedes the decompression, but sufficiently long to ensure that it can get close to the dipole.%\n\n\\par For the super-Alfv\u00e9nic flows considered, the driver particles are reflected upstream during the interaction with the background plasma and the magnetic field. The coupling velocity (\\textit{i.e.}, the velocity at which the leading end of the driver travels through the background) is lower than the flow velocity and increases with the ratio between the driver and background densities. The coupling velocity and the length of the driver determine how far the driver can travel through the background region without a dipole, for a uniform driver plasma. \n\n\\par The interaction of the plasmas with the dipole results in two magnetopauses. 
The first describes the balance between the kinetic pressure of the propelled background plasma, plus the pressure of the plasma internal magnetic field, and the total magnetic pressure. The second approximately describes the balance between the kinetic pressure of the driver plasma separated from the bulk distribution and the relative magnetic pressure. Using simulations with different dipole moments, we have shown that, for lower magnetic moments, the driver and background standoffs are closer to the center of the dipole, and the magnetopause current is more clearly identified than for higher magnetic moments. Furthermore, it is also easier to separate the magnetopause and diamagnetic currents for lower magnetic moments, consistent with experimental observations.%\n\n\\par In the simulations performed, we also observed the formation of waves in the background plasma region, between the magnetopause and the center of the dipole, where the magnetic field gradient was significant. These waves result from the excitation that always followed the formation of the magnetopause and were only observed for background plasmas with relatively low ion thermal velocities. This condition may explain the absence of these waves in the experimental plots.%\n\n\\par Most of the simulations presented in this work were performed in idealized configurations. In particular, we used reduced ion-to-electron mass ratios, unrealistically high flow velocities, a simple flat-top driver density profile, and neglected thermal effects. In Sec.~\\ref{sec:realistic} and~\\ref{sec:finite}, we presented simulations that drop some of these simplifications. Replacing reduced ion mass ratios with realistic ones and considering high thermal velocity ratios close to those obtained in the experiments did not lead to significant changes in the results. The same occurred when considering smoothed density profiles. 
It was also possible to conclude that the main features of the system scaled as expected with the absolute value of the driver flow velocity. We also presented a simulation to study possible effects associated with the complexity of the experimental laser-ablated driver. A simple circular segment-shaped driver was considered and led to results on the axis of symmetry similar to those of the infinite-width driver simulations. However, wave-like structures were observed on both the bottom and upper sides of the dipole. For future studies on the regions outside the axis of symmetry, the driver shape and complexity must be considered.%\n\n\\par Additionally, we also performed other parameter scans related to the complexity of the driver. For instance, we performed simulations where the driver ions were heavier than the background ions to simulate the small role of the carbon ions in the experimental driver. These studies showed no significant differences from the lighter-ion simulations.\n\n\\par In conclusion, the simulations were consistent with the LAPD experimental results, and the multiple parameter scans performed established the formation conditions of the main features of mini-magnetospheres. For future works, we intend to explore the features present on the sides of the dipole, study anti-parallel magnetic field configurations, perform 3D simulations, and consider even more realistic properties of the driver.\n\n\\begin{acknowledgments}\n\n\\par We acknowledge the support of the European Research Council (InPairs ERC-2015-AdG 695088), FCT (PD\/BD\/114307\/2016 and APPLAuSE PD\/00505\/2012), the NSF\/DOE Partnership in Basic Plasma Science and Engineering (Award Number PHY-2010248), and PRACE for awarding access to MareNostrum (Barcelona Supercomputing Center, Spain). 
The simulations presented in this work were performed at the IST cluster (Lisbon, Portugal) and at MareNostrum.\n\n\\end{acknowledgments}\n\n\\nocite{*}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRadio continuum emission traces star formation on timescales of up to\n100 Myr \\citep{C92}. Two physical processes resulting from massive\nstar formation produce most of the radio continuum emission between 1\nand 100 GHz in star-forming galaxies: (1) nonthermal synchrotron\nemission from relativistic electrons accelerated by magnetic fields as\na result of recent supernovae and (2) thermal free-free emission from\ngas ionized by young massive stars \\citep{C92}. The nonthermal\nemission is closely tied to the number of supernova-generating massive\nstars produced in recent episodes of star formation, while the thermal\nemission gives a nearly direct measure of the current equivalent\nnumber of O stars via the ionizing flux in the sampled area. Since\neach component traces a physical process with a well-known timescale,\nwe can use measurements of the radio continuum to determine star\nformation rates and constrain the ages of recent episodes of star\nformation.\n\nRecent studies of nearby star-forming galaxies with interferometers\nhave emphasized resolving individual star-forming regions\n\\citep*[e.g.][]{B00,J03,JK03,J04,T06,R08a,J09,A11}. Since radio\ncontinuum emission is not affected by extinction, it can be used to\nobserve deeply embedded regions of current star formation that have\nnot yet shed their surrounding material and are thus invisible at\nshorter wavelengths. These studies have taken advantage of\ninterferometers' exceptional spatial resolution to probe very young\nstarbursts whose optical emission is obscured by dust. 
While these\nstudies have been invaluable for determining star formation properties\nin galaxies outside of our own, the high angular resolution and\nmissing short-spacing data of interferometers, especially at higher\nfrequencies, ``resolve out'' the diffuse radio continuum emission\noutside of compact star-forming regions. This effect disproportionately\nimpacts synchrotron emission, which tends to be much more diffuse than\nthe primarily thermal emission surrounding areas of ongoing massive\nstar formation \\citep{J09}. Unlike interferometers, single dish\ntelescopes are not plagued by missing short spacings. Therefore, these\ntelescopes provide a way to simultaneously measure the compact thermal\nand diffuse non-thermal components of a galaxy's radio continuum\nemission in order to characterize its \\emph{global} star formation\nproperties.\n\nDetermining the relative contributions of the thermal and nonthermal\ncomponents of the measured flux of entire galaxies can be challenging.\nFortunately, each component has a distinct behavior with respect to\nfrequency, and therefore we can model radio continuum emission with a\nsimple two-component fit. Radio continuum flux follows a power law\nrelation such that $S_{\\nu} \\propto \\nu^{\\alpha}$, where $\\alpha$ is\nthe spectral index that is characteristic of a source's\nemission. Optically thin thermal emission exhibits $\\alpha =$ -0.1,\nand nonthermal emission exhibits $\\alpha \\approx$ -0.8 \\citep{C92}. To\ndetermine a reliable fit to these parameters, observations sampling\nthe same physical area at multiple, widely-spaced frequencies are\nrequired. If only one frequency is observed, it is impossible to\ndetermine the relative contributions of each emission process without\nprevious knowledge of the source. 
Since single dish telescopes are\nsensitive to both compact thermal and diffuse nonthermal emission over\nlarge spatial extents, they are useful for constraining the\nlarge-scale properties of multiple components of a galaxy's radio\ncontinuum emission, and are thus powerful probes of star formation.\n\n\nThe goal of this paper is to characterize the \\emph{global} star\nformation properties of local galaxies. Our observations were taken in\nfour independent channels continuous in frequency across the full\n26-40 GHz span of Ka band. This range in frequencies is where a\ntypical star-forming galaxy's global radio continuum emission would be\nexpected to contain relatively equal amounts of flux from\nsteep-spectrum synchrotron and flat-spectrum thermal sources\n\\citep{C92}. Our observations are thus ideal for approximating the\nrelative contributions of each type of emission at this ``lever arm''\nfrequency range. Using new radio continuum observations centered at\n27.75 GHz, 31.25 GHz, 34.75 GHz, and 38.25 GHz, as well as archival\nNVSS 1.4 GHz and IRAS 60 $\\rm \\mu m$ and 100 $\\rm \\mu m$ data, we have\ndetermined these galaxies' thermal fractions and star formation\nrates. We have also explored the radio-far-infrared correlation in\nthese galaxies and its implications for their star formation\ntimescales. We will describe the galaxy sample and our observations\nand data reduction in Section 2, present our results and address the\nprocess of fitting spectral energy distributions to our data in\nSection 3, and finally conclude in Section 4.\n\n\\section{Data}\n\n\\subsection{Sample Selection}\n\nWe selected a heterogeneous group of 27 local (D $<$ 70 Mpc),\nwell-studied star-forming galaxies with known thermal radio continuum\nemission. 
Our sample contains galaxies spanning a variety of shapes,\nsizes, and environments, from blue compact dwarfs to grand-design\nspirals, including major and minor mergers, with members of compact\ngroups as well as more isolated galaxies (see Table\n\\ref{fig:obssummary} for galaxy types). The intention was to observe\nas many types of star-forming galaxies as possible to probe star\nformation in a diverse range of environments. See Table\n\\ref{fig:obssummary} and Figure \\ref{fig:sampleproperties} for sample\nproperties. For more information on each galaxy's previous radio\ncontinuum observations and discussions of their properties, see the\npapers named in Table \\ref{fig:obssummary}.\n\n\nThe galaxies in our sample span a range of distances (1-70 Mpc) and\nproperties. They all have previously detected radio emission and\nongoing star formation that covers three orders of magnitude in star\nformation rate. Thus, they are strong targets for a study of global\nradio continuum properties at a frequency range that probes both\nthermal free-free and nonthermal synchrotron star formation\nindicators. As seen in Figure \\ref{fig:sampleproperties}, these\ngalaxies are largely less massive and have higher star formation rates\nthan the Milky Way, and have subsolar metallicities. However, their\nproperties are not so similar that they can be considered as a single\nclass. It would not be surprising if their radio continuum properties\nalso encompassed a range of values. Our analysis is best understood as\nreflecting properties of nearby star-forming galaxies, though it is\nbeyond the scope of this paper to perform detailed analysis on each\ngalaxy individually.\n\n\\subsection{Observations and Data Reduction}\nWe observed the galaxies in our sample with the Caltech Continuum\nBackend (CCB) on the Robert C. Byrd Green Bank Telescope\n(GBT)\\footnotemark{} using single pointings. 
The CCB is designed for\nthe GBT's dual-beam Ka band receiver spanning the entire range of\nfrequencies from 26-40 GHz. The primary observing mode of the CCB is a\n70 second ``nod'', where each beam takes a turn as the on-source beam\nwhile the other beam is off-source. We observed 24 of the galaxies\nusing a single nod each, while we observed eight galaxies using\nmultiple nods, which we then averaged. Five galaxies in our sample\nwere observed on two different nights with both of these methods; we\ntreated these on a case-by-case basis and chose the observation(s)\nwith the best weather and elevation conditions. We used the standard\nNRAO primary flux calibrators 3C 147 and 3C 48 for flux calibration,\nas well as nearby pointing calibrators to ensure accurate\npointing. See Table \\ref{fig:obssummary} for a summary of the\nobservations.\n\n\\footnotetext{The National Radio Astronomy Observatory is a facility\n of the National Science Foundation operated under cooperative\n agreement by Associated Universities, Inc.}\n\n\nWe reduced our data using IDL reduction routines developed by B.\nMason \\citep[for details on the data reduction process,\n see][]{M09}. Data with wind speeds over 5 $\\rm m \\ s^{-1}$ were\nexcluded due to the possibility of large pointing errors. We detected\n22 galaxies in all four channels. When a galaxy's flux was lower than\nthe 2$\\sigma$ level in one or more of the four channels, we combined\nthe channels' fluxes to produce one average flux across the entire\nband, centered at 33 GHz. One galaxy, Pox 4, was detected at the\n$5\\sigma$ level after averaging the four channels. Three additional\ngalaxies were marginally detected (between 2$\\sigma$ and 3$\\sigma$)\nusing this method. We report an upper limit 33 GHz flux for one\ngalaxy, Tol 35. 
The galaxies' observed fluxes are reported in Table\n\\ref{fig:uncorrectedflux}.\n\nSince the angular size of a telescope's beam is inversely proportional\nto the frequency observed, the beam size of the GBT varies appreciably\nacross the 26-40 GHz range of our observations ($\\sim27\\arcsec$ in the\nlowest-frequency sub-band versus $\\sim19\\arcsec$ in the\nhighest-frequency sub-band, see Figure ~\\ref{fig:beamsizes} for an\nillustration). We followed the procedure of \\citet{M10} to correct for\ndiffering beam sizes in each of the four sub-bands. First, we imaged\nan archival VLA radio continuum map of each galaxy (typically at\nfrequencies of 4-10 GHz) using the AIPS task IMAGR. For these maps, we\nselected the archival UV data from the NRAO science data\narchive\\footnotemark{} with the closest beamsize to our Ka-band\ndata. We explicitly imposed each of the four CCB beam sizes on these\nimages using BMIN and BMAJ (assuming a circular beam). We determined\ncorrection factors for each beam by normalizing the flux contained in\neach beam area in the archival map to the flux in the 34.75 GHz port's\n$\\sim21\\arcsec$ beam. This procedure partially adjusts for more flux\nbeing observed at lower frequencies due to these frequencies'\nintrinsically larger beam sizes. We then could approximate\n``beam-matched'' flux measurements to determine spectral indices\nbetween 26-40 GHz (see Figure \\ref{fig:histogram} for an illustration\nof the galaxies' spectral indices before and after applying the\ncorrections). NGC 1222 did not have available archival data, so we\napplied to it the average correction factors of all of the other\ngalaxies. We emphasize that these correction factors are only\napproximate. In many cases, they are based on resolved archival images\nthat may not contain all of the galaxies' radio flux. 
In particular,\nthese resolved images may contain most of the thermal emission, which\ntends to be compact, but underestimate the galaxies' nonthermal\nemission, which tends to be diffuse. This could bias the correction\nfactors to be closer to 1.0 than should be the case, especially in the\nhighest-frequency (and thus highest-resolution) channel. See Table\n\\ref{fig:beamcorrections} for the beam correction factors for each\ngalaxy. \n\nThe dominant sources of uncertainty in our beam corrections are\nsystematic errors due to the geometries of our sources. The smallest\ncorrections possible are for a galaxy whose most diffuse, extended\nflux is still contained within the smallest beam and is unresolved by\nthe lower-frequency interferometric observations. This type of source\nwould look identical to all four of the GBT beam sizes. In this case,\nthe correction factors would be 1.0 for each sub-band. For a source\nmuch more extended than the beam sizes, the maximum deviations from no\nbeam corrections in each sub-band are -36$\\%$, -19$\\%$, 0$\\%$, and\n+21$\\%$. Pointing offsets from the peak of radio emission can also be\nsources of systematic error, though the errors depend on whether the\nsource is compact or extended, and the magnitude of the pointing\noffset from the central peak of radio emission. These errors are\ntypically smaller than the maximum deviations discussed above. Since\nwe do not know how much diffuse emission is missing in the archival\ndata, we do not have enough information to quantify uncertainties in\nthe beam correction factors for each galaxy.\n\n\\footnotetext{https:\/\/archive.nrao.edu}\n\nMany of the galaxies that we observed were more extended than the\n$\\sim 23\\arcsec$ beam size of the GBT at 33 GHz. In these cases, radio\ncontinuum fluxes and star formation rates should only be interpreted\nas covering the inner $\\sim 23\\arcsec$ of the galaxies. 
The galaxies\nwith resolved lower-frequency archival data that was more extended\nthan the beam size are flagged with a ``1'' in Table\n\\ref{fig:correctedflux}.\n\nThe CCB has a beam separation of $78\\arcsec$ between the ``on'' and\n``off'' beams. M 51 and M 101 are more extended than that separation\nin both optical images (see Figure \\ref{fig:beamseparation}) and in\nmaps of their lower-frequency radio continuum emission\n\\citep{K84,G90}. In these cases, our flux measurements may be lower\nthan the true amount of flux contained within the beam. There is\nlikely to be radio continuum emission at the ``off'' positions, which\nwould cause an oversubtraction of flux in the reduction process.\n\n\\section{Results and Discussion}\n\n\\subsection{Fluxes}\nFor 22 of the 27 galaxies that we observed, the fluxes we report are\nthe first detections (either in all four sub-bands or averaged) at\n$\\sim$33 GHz. Four of the galaxies in our sample were previously\nobserved with the CCB by \\citet{M12}, three of which were detected (M\n101 was reported as an upper limit by \\citet{M12} and is a $2.6\\sigma$\nmarginal detection when the four sub-bands were averaged in our\nobservations). Only one galaxy in our sample, Tol 35, was not detected\nwhen averaging four sub-bands' fluxes. Its $3\\sigma$ upper-limit flux\nis 0.87 mJy. This galaxy was observed at a very low elevation\n($7.9^{\\circ}$), so it was observed with large atmospheric\nextinction. See Table \\ref{fig:uncorrectedflux} for the uncorrected\nfluxes, and Table \\ref{fig:correctedflux} for the fluxes corrected for\nthe differing beam sizes of each frequency.\n\n\\subsection{Spectral energy distribution fitting}\nWe fit a spectral energy distribution (SED) for each galaxy that was\ndetected in all four sub-bands using the four CCB fluxes and archival\nNRAO VLA Sky Survey (NVSS) 1.4 GHz fluxes (measured with a 45$\\arcsec$\naperture). 
We assumed a two-component fit of nonthermal emission with\na spectral index $\\alpha_{N}$ = -0.8 and thermal emission with a\nspectral index $\\alpha_{T}$ = -0.1. These fits are plotted in Figure\n\\ref{fig:sedpanel}. Though the spectral index of nonthermal emission\ncan vary (this phenomenon is described further in Section 3.2.1), we\nused this simple model because we only fit to five data points for\neach galaxy; our model did not include enough data to justify\nadditional free parameters. We do not see evidence of anomalous dust\nemission in the observed regions of these galaxies \\citep[for\n explanations of anomalous dust, see][]{D98,M10}. Our observations\nare also at frequencies low enough to have negligible contributions\nfrom the low-frequency tail of the dust blackbody. Therefore, we did\nnot include any thermal dust emission in our fits. Our spectra also do\nnot show the inverted structure characteristic of self-absorption or\noptically thick thermal emission, so we did not include either of\nthese components. From these fits, we determined each galaxy's thermal\nand nonthermal fluxes at 33 GHz.\n\nNone of the galaxies have globally flat spectra indicative of purely\nthermal emission, nor the inverted spectra seen in some resolved\nobservations of very young, obscured thermal sources. Thermal emission\nwas the primary component at 33 GHz in some galaxies, while others had\nless prominent or even negligible thermal components in the observed\nregions. In contrast to radio continuum studies done at high spatial\nresolution, our single dish observations detect the diffuse\nsynchrotron emission produced by past supernovae in addition to the\nstrong compact thermal emission from H II regions, so the spectral\nindices that we derive are typically much steeper than those derived\nonly from detections of compact radio sources. 
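With both spectral indices held fixed, the two-component model is linear in the thermal and nonthermal amplitudes at 33 GHz, so the five-point fit reduces to ordinary least squares. A minimal sketch on synthetic fluxes (the flux values are invented for illustration; only the frequencies and the fixed spectral indices come from the text):

```python
import numpy as np

# Two-component SED: S(nu) = S_T*(nu/33)^-0.1 + S_N*(nu/33)^-0.8, with
# thermal (alpha_T = -0.1) and nonthermal (alpha_N = -0.8) spectral
# indices fixed. The model is linear in S_T and S_N -> least squares.

nu = np.array([1.4, 27.75, 31.25, 34.75, 38.25])  # GHz: NVSS + 4 CCB bands
S_T_true, S_N_true = 2.0, 3.0                     # 33 GHz amplitudes (made up)
flux = S_T_true * (nu / 33.0) ** -0.1 + S_N_true * (nu / 33.0) ** -0.8

A = np.column_stack([(nu / 33.0) ** -0.1, (nu / 33.0) ** -0.8])
(S_T, S_N), *_ = np.linalg.lstsq(A, flux, rcond=None)

thermal_fraction_33 = S_T / (S_T + S_N)
```

Because the design matrix depends only on the frequencies and the assumed indices, the same two basis columns can be reused for every galaxy.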
Since our observations\ndo not spatially separate regions of thermal and nonthermal emission,\nwe cannot further distinguish the two components in that way.\n\n\\subsubsection{Galaxies with steep radio spectra}\nThe fitted spectra for eight of the 27 galaxies (Arp 217, NGC 4449,\nNGC 2903, Maffei II, NGC 4038, M 51, NGC 4490, and NGC 1741) are\nsignificantly steeper than can be fit by a combination of thermal\n($\\alpha_{T}$ = -0.1) and nonthermal ($\\alpha_{N}$ = -0.8) components\n(see Figure \\ref{fig:sedpanel}). When we could not fit a galaxy's SED\nwith both the thermal and nonthermal components at the $2\\sigma$\nlevel, we used only a single-component fit that assumed no thermal\nflux and a fixed nonthermal spectral index of $\\alpha_{N}$ = -0.8 for\nconsistency. The thermal fluxes and associated properties of this\ngroup of galaxies are reported as upper limits. We used the total flux\nin the 34.75 GHz channel plus 3$\\sigma$ as a conservative upper limit\nto the thermal flux in these cases.\n\nThere are two possible explanations for the steep spectra that we see\nin some galaxies. There could be technical considerations due to\nimperfect beam-matching in our data, or there could be physical\nprocesses taking place within these galaxies causing their spectra to\nsteepen at high frequencies. In order to have more accurate SED\nfits---and more precise star formation rates---we would need to have\nbeam-matched observations of the same regions at many different\nfrequencies. \n\nThe correction factors for differing beam sizes that are given in\nTable \\ref{fig:beamcorrections} are limited by being calculated using\nhigher-resolution data that could be missing extended emission. If\nextended emission is missing in the archival data, the correction\nfactors in Table \\ref{fig:beamcorrections} could be closer to 1.0 than\nis actually the case. 
While all of the correction factors calculated\nact to flatten the SED between 26 GHz and 40 GHz with respect to the\nuncorrected data, it is possible that they do not flatten the SED\nenough if they do not reflect contributions from extended emission (as\ndiscussed in Section 2.2). In addition, we did not correct for\nmismatched beams between the $\\sim 45\\arcsec$ NVSS data and the $\\sim\n23\\arcsec$ CCB data. This beam difference only affects resolved\ngalaxies (those marked with a ``1'' in Table \\ref{fig:correctedflux}),\nwhich comprise $33\\%$ of our sample. It is possible that synchrotron\nemission is more adversely affected by the differences in beam sizes\nthan thermal emission. More diffuse synchrotron emission could be\nundetected at higher frequencies (and thus smaller beam sizes) than\nwould be expected for a smooth flux distribution observed with two\napertures of different sizes. If this is the case in our observed\nregions, it could explain why some of our galaxies' spectra steepen at\nthe frequencies we observed. It is also possible that the choice of\nwhere the GBT beams were pointed within a galaxy could affect its\nfluxes in different beam sizes. If the beam is not centered on the\ngalaxy (in the case of unresolved galaxies) or is not centered on a\nbright knot of emission (in the case of resolved galaxies), the\nsmaller beams could contain even less flux than would be expected\nafter corrections for the beams' areas. NGC 1741 and NGC 4490 are\nlikely affected by pointing offsets, as seen by comparing the GBT\npointing in Table \\ref{fig:obssummary} to previous radio continuum\nmaps in Figure 2 of \\citet{B00} and Figure 4 of \\citet{A11}. 
As\ndescribed in Section 2.2, pointing offsets from the peak of radio\ncontinuum emission result in the need for larger beam correction\nfactors than derived from the archival radio continuum data, the lack\nof which results in steep spectra at the observed frequencies.\n\n\nIn addition to the technical issue of mismatched beam sizes, there are\npossible physical explanations for steep spectra in star-forming\ngalaxies. It is difficult to distinguish a spectrum with a\nnonthermal component having $\\alpha_{N} \\approx -0.8$ coupled with a\nlow thermal fraction from a spectrum with a steeper nonthermal\ncomponent coupled with a relatively high thermal fraction\n\\citep{C92}. Though the spectral indices that we used are typical\nvalues \\citep{C92}, they can vary depending on the physical parameters\nof the observed regions. Thermal emission can have a positive spectral\nindex if the emission regions are optically thick, though we do not\nsee any evidence that this is occurring on the angular scale of our\nobservations. Nonthermal spectral indices can be positive at low\nfrequencies due to synchrotron self-absorption (which we do not\nobserve), or become more negative with increasing frequency and\nincreasing scale height from the disk due to aging cosmic ray\nelectrons losing energy as they propagate outward from their parent\nsupernovae \\citep{S85,CK91,H09}. \\citet{K11b} calculated the timescale\nfor synchrotron losses for cosmic ray electrons in NGC 4214 to be 44\nMyr at 1.4 GHz and 18 Myr at 8.5 GHz. There is also some evidence of\nsteepening spectra at higher frequencies ($\\gtrsim$ 10 GHz) for\nluminous and ultra-luminous infrared galaxies, as well as in the\npost-starburst galaxy NGC 1569 \\citep{I88,L04,C08,C10,L11}. These\nauthors hypothesize that winds or outflows may disperse synchrotron\nemission from its parent source more quickly than would be expected\nfor simple diffusion. 
This rapid dispersal could cause a dearth of\nsynchrotron emission at higher frequencies on shorter timescales than\nwould be predicted from the timescale of energy loss. \\citet{L11} also\nhypothesize that there could be a modified injection spectrum in\ngalaxies where this is the case. Our sample of galaxies does not\ncontain any LIRGs or ULIRGs, and we do not see steepening in our\nmeasurements of NGC 1569. We are only observing the inner region of\nNGC 1569, while the dispersed synchrotron emission resides in its\nouter halo, so it is not surprising that we do not observe a\nsteepening spectrum in this galaxy.\n\nWe suspect that the steep spectra seen in our sample are primarily a\nresult of imperfect beam matching as discussed above. This is\nespecially likely to be the case for the galaxies resolved by the GBT\nat 33 GHz, since these galaxies will have emission that is outside of\nthe view of the GBT beam but is included in the NVSS flux. As\ndiscussed earlier, the galaxies that appear unresolved in archival\nmaps could still have diffuse synchrotron emission that was not\ndetected in the archival data but that is more extended than the\n$23\\arcsec$ beam at 33 GHz. Five of the eight resolved galaxies that\nwere detected in all sub-bands had steep spectra (four out of those\nfive are classified as spiral galaxies), while only three of the\nfourteen unresolved galaxies had this feature. Two of these three are\nclassified as SABbc galaxies, while the third is classified as\npeculiar. It is possible that these steep-spectrum galaxies contain\nemission in their spiral arms that is extended with respect to the\nGBT's smaller beam but is observed in the NVSS data. In Figure\n\\ref{fig:sedpanel}, most of the galaxies with steep spectra (and thus\nsingle component fits) also showed the NVSS 1.4 GHz data point being\nlocated above the best-fit line expected for purely nonthermal\nemission. 
This could be a consequence of the larger beam at 1.4 GHz\nsampling a larger physical area of emission. Even so, the alternative\nphysical explanations merit consideration, especially in the case of\nthe unresolved galaxies. In Figure \\ref{fig:rad-IR}, which will be\ndiscussed further in Section 4.6, the three unresolved galaxies with\nsteep spectra (NGC 1741, NGC 2903, and Arp 217) have elevated 1.4 GHz\nfluxes with respect to what would be expected from the radio-far\ninfrared correlation. Since the 33 GHz fluxes of these galaxies are\nnot similarly elevated with respect to their far-infrared fluxes in\nFigure \\ref{fig:rad-IR}, their steep radio spectra may indicate an\ninternal physical process that strongly increases the amount of\nsynchrotron emission.\n\n\\subsection{Thermal fractions}\nThe average thermal fraction fit by two-component models at 33 GHz was\n54$\\%$, with a $1\\sigma$ scatter of 24$\\%$ and a range of\n10$\\%$-90$\\%$. The average is consistent, albeit with large scatter,\nwith the average global thermal fraction at 33 GHz in star-forming\ngalaxies without active galactic nuclei following the relation\n\\begin{equation}\n\\frac{S}{S_{T}} \\sim 1 + 10\\left(\\frac{\\nu}{GHz}\\right)^{0.1+\\alpha_{N}}\n\\end{equation}\n where $\\alpha_{N} = -0.8$ is the nonthermal spectral index, $S_{T}$\n is the thermal flux at a given frequency, and S is the total flux at\n that frequency \\citep{CY90}. When a two-component fit was not\n possible, we report the thermal flux as the corrected flux at 34.75\n GHz plus $3\\sigma$, which gives a very conservative upper limit. 
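Evaluating the Condon & Yin relation above at 33 GHz with alpha_N = -0.8 gives a handy reference value:

```python
# Expected global thermal fraction at 33 GHz from the Condon & Yin (1990)
# relation S/S_T ~ 1 + 10*(nu/GHz)^(0.1 + alpha_N).

nu_ghz, alpha_N = 33.0, -0.8
total_over_thermal = 1.0 + 10.0 * nu_ghz ** (0.1 + alpha_N)
thermal_fraction = 1.0 / total_over_thermal  # ~0.54, i.e. ~54% thermal
```

which matches the ~54% average thermal fraction found for the sample.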
We\n expect from the galaxies' SEDs that their true thermal fractions are\n very low at 33 GHz, which we assume in the rest of our analysis.\n\n\\subsubsection{Implications for star formation timescales}\nThe large scatter in the thermal fraction is likely a consequence of\nour heterogeneous galaxy sample; these galaxies are at different\nstages of evolution and have different star formation rates, stellar\npopulations, and physical properties \\citep{B00}. Some of them may\nhave a very recent ($<$ 10 Myr) burst of star formation that produces\na large amount of free-free emission that dominates their spectra from\n1-100 GHz. Others may be in between episodes of very active star\nformation and instead be experiencing a more quiescent phase, which\nwould result in a relatively low thermal fraction and steepening\nnonthermal component at 33 GHz due to synchrotron energy losses at\nhigh frequencies.\n\nThermal emission traces very recent star formation, since it comes\nfrom ionized regions around short-lived, massive stars. For a single\nstarburst, a spectrum showing solely thermal emission requires that\ntoo few supernovae have yet occurred to detect their emission. This\nwould constrain the starburst to be less than $\\sim$ 6 Myr old (or\neven younger, depending on the mass and lifetime of the most massive O\nstars in the starburst; \\citet{M89} find the lifetime of a 120\n$M_{\\sun}$ star to be 3.4 Myr). A complete absence of thermal flux\nimplies the absence of enough massive O stars to have detectable\nfree-free emission for a long enough period of time that the emission\nhas dissipated from its parent region. If this was the case, the\nstarburst is likely at least 30 Myr old (the lifetime of the least\nmassive supernova progenitors). On the other hand, nonthermal emission\nprobes star formation on longer timescales (30 Myr $< \\tau <$ 100\nMyr). 
It is produced by recent supernovae of stars that can be less\nmassive and have longer lifetimes than the O stars that produce\nthermal emission \\citep[see Figure 9 of][]{C92}. The presence of\nnonthermal emission implies that the starburst is at least 6 Myr old\nbut younger than the timescale dictated by synchrotron energy loss for\nthe galaxy's magnetic field ($\\sim$ 100 Myr) \\citep{C92}.\n\nWe note that there are limits to the amount of each component that we\ncan detect, so the timescales quoted in the previous paragraph are\nonly approximate. To constrain how much nonthermal emission could be\npresent in a spectrum that appears purely thermal, we generated\nspectra with varying thermal fractions with fluxes at the same five\nfrequencies as those in our data set (1.4 GHz, 27.75 GHz, 31.25 GHz,\n34.75 GHz, and 38.25 GHz) and 10$\\%$ errors on the fluxes. When these\nspectra are fit with a two-component model assuming $\\alpha_{T} =\n-0.1$ and $\\alpha_{N} = -0.8$, nonthermal emission can only be\ndetected in the spectra for thermal fractions less than 97$\\%$. This\nmeans that the galaxy could have some nonthermal emission (up to $3\\%$\nfor $10\\%$ errors on the fluxes), but the emission would be\nundetectable and thus the starburst would appear younger than it\nis. Similarly, a spectrum could look like it contains no thermal\nemission while actually containing quite a bit. For the same spectra\nwith $10\\%$ errors on the fluxes, thermal fractions of up to $20\\%$\nresulted in undetectable thermal components. This means that a galaxy\ncould look like its massive star formation has ceased while still\nhaving a small thermal component.\n\nFor the galaxies in our sample, this picture could be more\ncomplicated. The quoted timescales in this section are for an isolated\nsingle starburst. 
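The detectability test described above can be approximated in a few lines. This sketch is a simplification of our own devising (it uses noiseless model fluxes with an assigned 10% uncertainty and the weighted-least-squares covariance, rather than fitting many noisy realizations), but it shows how the significance of the fitted nonthermal amplitude falls as the spectrum becomes thermally dominated:

```python
import numpy as np

# Significance of the fitted nonthermal component for a spectrum with a
# given 33 GHz thermal fraction, assuming 10% flux errors at the five
# observed frequencies (simplified: noiseless fluxes + WLS covariance).

def nonthermal_significance(f_thermal_33, total_33=10.0):
    nu = np.array([1.4, 27.75, 31.25, 34.75, 38.25])   # GHz
    A = np.column_stack([(nu / 33.0) ** -0.1, (nu / 33.0) ** -0.8])
    flux = A @ np.array([f_thermal_33, 1.0 - f_thermal_33]) * total_33
    sigma = 0.10 * flux                                # assumed 10% errors
    Aw, fw = A / sigma[:, None], flux / sigma          # whitened system
    coef, *_ = np.linalg.lstsq(Aw, fw, rcond=None)
    cov = np.linalg.inv(Aw.T @ Aw)                     # parameter covariance
    return coef[1] / np.sqrt(cov[1, 1])                # S_N / sigma(S_N)
```

The thresholds quoted in the text (nonthermal undetectable above ~97% thermal, thermal undetectable below ~20%) correspond to where such a significance drops below the detection cut.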
Since our observations measure star formation\nproperties on large angular scales, the galaxies may have multiple\noverlapping generations of star formation that are not easily\nseparated in time. We are also sampling different structures and\nphysical scales in each galaxy. For some galaxies, we are only\nobserving the most central region. For these galaxies, we may be\nmissing the majority of the ongoing star formation happening in outer\nregions and spiral arms. For the more compact galaxies, however, we\nare likely measuring the entirety of the galaxy's star formation\nwithin the GBT beam, so our measurements characterize their global\nstar formation properties.\n\n\\subsection{O stars producing ionizing photons}\nFor those galaxies whose SEDs were fit with thermal components, we\nused their fluxes at 33 GHz to calculate their thermal\nluminosities. We then used those luminosities to calculate the number\nof ionizing photons responsible for the thermal fluxes seen within the\nGBT beam following Equation 2 in \\citet{C92}:\n\\begin{equation}\n\\left(\\frac{Q_{Lyc}}{s^{-1}}\\right) \\geq 6.3 \\times 10^{52}\n \\left(\\frac{T_{e}}{10^{4}K}\\right)^{-0.45} \\left(\\frac{\\nu}{GHz}\\right)^{0.1}\n \\left(\\frac{L_{T}}{10^{20}W Hz^{-1}}\\right),\n\\end{equation}\nwhere $Q_{Lyc}$ is the number of Lyman continuum photons emitted by\nthe region producing the thermal emission, $T_{e}$ is the electron\ntemperature, and $L_{T}$ is the thermal luminosity. The resulting values are\ndetailed in Table \\ref{fig:sfrfractableunres} (unresolved galaxies)\nand Table \\ref{fig:sfrfractableres} (resolved galaxies). We used an\nelectron temperature of $10^{4}$K, as is typical for star-forming\nregions \\citep{C92}, and used $Q_{0} = 10^{49} s^{-1}$ as the number\nof Lyman continuum photons emitted by an O7.5V star from Table 5 in\n\\citet{Vacca96}. 
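Equation 2 and the O7.5V calibration combine into a short calculation. A sketch, where the input luminosity is an arbitrary illustrative value chosen to land inside the range reported in the tables:

```python
# Lyman continuum photon rate from a thermal luminosity (Eq. 2), and the
# equivalent number of O7.5V stars at Q0 = 1e49 photons/s (Vacca et al.).

def q_lyc(L_T_whz, nu_ghz=33.0, T_e=1.0e4):
    """Q_Lyc [photons/s] from thermal luminosity L_T [W/Hz] at nu_ghz [GHz]."""
    return 6.3e52 * (T_e / 1.0e4) ** -0.45 * nu_ghz ** 0.1 * (L_T_whz / 1.0e20)

L_T = 1.0e20                      # W/Hz at 33 GHz (illustrative value)
n_o75v = q_lyc(L_T) / 1.0e49      # equivalent number of O7.5V stars
```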
We report the total number of O7.5V stars in the\ngalaxies that are unresolved by the GBT at 33 GHz, and the number of\nO7.5V stars per square kiloparsec for the resolved galaxies in Tables\n\\ref{fig:sfrfractableunres} and \\ref{fig:sfrfractableres}. As seen in\nTable \\ref{fig:sfrfractableunres}, the number of O7.5V stars in each\nunresolved galaxy varies widely (log $\\#$ O7.5V stars is between 2.42\nand 4.66). This is likely due to the wide range in the unresolved\ngalaxies' overall star formation rates and physical areas observed.\n\n\\subsection{Supernova rates}\nSince we were able to fit nonthermal components for all of our\ngalaxies, we calculated supernova rates ($\\nu_{SN}$) for each of them\nfollowing Equation 18 in \\citet{C92}:\n\\begin{equation}\n\\left(\\frac{L_{N}}{10^{22}W Hz^{-1}}\\right) \\sim 13\\left(\\frac{\\nu}{GHz}\\right)^{-0.8} \\left(\\frac{\\nu_{SN}}{yr^{-1}}\\right),\n\\end{equation}\nwhere $L_{N}$ is the nonthermal luminosity. We report the total\nsupernova rate of the unresolved galaxies in Table\n\\ref{fig:sfrfractableunres}, while for the resolved galaxies we\nreport the supernova rate per square kiloparsec in Table\n\\ref{fig:sfrfractableres}. 
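The relation above inverts directly for the supernova rate; the input luminosity below is again an illustrative value:

```python
# Supernova rate implied by a nonthermal luminosity (Eq. 3):
# L_N / 1e22 W/Hz ~ 13 * (nu/GHz)^-0.8 * (nu_SN / yr^-1).

def supernova_rate(L_N_whz, nu_ghz=33.0):
    """nu_SN [per year] from nonthermal luminosity L_N [W/Hz]."""
    return (L_N_whz / 1.0e22) / (13.0 * nu_ghz ** -0.8)

rate = supernova_rate(1.0e21)   # illustrative L_N; ~0.13 SNe/yr (log ~ -0.9)
```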
The supernova rates of the unresolved\ngalaxies vary by three orders of magnitude (log SNe rate between -3.72\nand -0.71), which is not surprising given the differences in star\nformation rates and physical areas sampled.\n\n\\subsection{Star formation rates}\nWe calculated massive star formation rates (SFRs) from thermal fluxes\nfor each galaxy whose SEDs have a thermal component and from\nnonthermal fluxes for all of our galaxies following Equations 21 and\n23 of \\citet{C92}:\n\\begin{equation}\n\\left(\\frac{L_{N}}{W Hz^{-1}}\\right) \\sim 5.3 \\times 10^{21}\n \\left(\\frac{\\nu}{GHz}\\right)^{-0.8} \\left(\\frac{SFR_{N}\\left(M \\geq 5M_{\\sun}\\right)}{M_{\\sun} yr^{-1}}\\right)\n\\end{equation}\n\\begin{equation}\n\\left(\\frac{L_{T}}{W Hz^{-1}}\\right) \\sim 5.5 \\times 10^{20}\n \\left(\\frac{\\nu}{GHz}\\right)^{-0.1} \\left(\\frac{SFR_{T}\\left(M \\geq 5M_{\\sun}\\right)}{M_{\\sun}yr^{-1}}\\right)\n\\end{equation}\nwhere $L_{T}$ and $L_{N}$ are thermal and nonthermal luminosities,\nrespectively, calculated from each galaxy's thermal and nonthermal\nfluxes, and $\\nu = 33$ GHz. These equations are derived from Equations\n2 and 18 of \\citet{C92} (reproduced as Equations 2 and 3 in this\npaper). Those equations were derived assuming (1) an extended\nMiller-Scalo IMF \\citep{MS79} with an exponent of $-2.5$, (2) that all\nstars with masses greater than $8 \\rm M_{\\sun}$ become supernovae, and\n(3) that dust absorption is negligible \\citep{C92}. We then scaled the\nmassive SFRs generated by each equation by a factor of 5.6 to\ntransform them to total SFRs (M $\\geq$ 0.1$M_{\\sun}$) calculated with\na Kroupa IMF \\citep{K01}. The galaxies' SFRs calculated from their\nthermal and nonthermal fluxes are shown in Table\n\\ref{fig:sfrfractableunres} (unresolved galaxies) and Table\n\\ref{fig:sfrfractableres} (resolved galaxies). 
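The SFR conversions reduce to two scalings plus the factor of 5.6 for the Kroupa-IMF total; a sketch with illustrative luminosities:

```python
# Massive (M >= 5 Msun) SFRs from 33 GHz thermal/nonthermal luminosities
# (Eqs. 4-5), scaled by 5.6 to total (M >= 0.1 Msun) Kroupa-IMF SFRs.

KROUPA_SCALE = 5.6

def sfr_nonthermal_massive(L_N_whz, nu_ghz=33.0):
    return L_N_whz / (5.3e21 * nu_ghz ** -0.8)   # Msun/yr, M >= 5 Msun

def sfr_thermal_massive(L_T_whz, nu_ghz=33.0):
    return L_T_whz / (5.5e20 * nu_ghz ** -0.1)   # Msun/yr, M >= 5 Msun

total_sfr_N = KROUPA_SCALE * sfr_nonthermal_massive(1.0e21)  # illustrative L_N
total_sfr_T = KROUPA_SCALE * sfr_thermal_massive(1.0e20)     # illustrative L_T
```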
We report the total\nmassive SFRs of the unresolved galaxies, while we report the massive\nSFR per square kiloparsec of the resolved galaxies. All of the\ngalaxies for which we calculated both thermal and nonthermal SFRs\nshowed agreement between the two to within an order of magnitude, but\nnot necessarily to within their margins of uncertainty. The\ndisagreement correlates with the thermal fractions of each galaxy:\ngalaxies with high thermal fractions were likely to have higher\nthermal SFRs than nonthermal SFRs, while galaxies with low thermal\nfractions showed the opposite relation. Like the differences in\nthermal fractions between galaxies in our sample, disagreement could\nbe due to the different star formation timescales traced by the\nthermal and nonthermal fluxes. Since these two emission components are\ncaused by physical processes that operate over differing lengths of\ntime (as discussed in Section 3.3.1), it is possible that the\ndiscrepancies between the star formation rates could be used to infer\nthe recent star formation histories of the observed regions.\n\nWe compared the radio continuum SFRs to monochromatic SFRs from 24$\\mu\nm$ fluxes as described in \\citet{Cal10}. The galaxies' SFRs (for the\nunresolved galaxies) and SFR densities (for the resolved galaxies)\nderived from 24$\\mu m$ fluxes are listed in Table\n\\ref{fig:sfrfractableunres} and Table \\ref{fig:sfrfractableres}. In\nFigure~\\ref{fig:comparesfrs}, we compare the SFRs derived from thermal\nand nonthermal radio continuum fluxes of the unresolved galaxies for\nwhich we fit two-component SEDs to SFRs derived from 24$\\mu m$\nfluxes. We find that most of the galaxies in our sample have higher\nradio continuum SFRs (both from thermal and nonthermal fluxes) than\nSFRs from 24$\\mu m$ data. One possible explanation for this is that\nextinction is lower at radio wavelengths than it is at 24$\\mu\nm$. 
Another possible explanation is that since radio continuum\nemission traces very young star formation while 24$\\mu m$ emission\ntraces less recent star formation, higher SFRs calculated from radio\ncontinuum observations than from 24$\\mu m$ data could be another\nindication that our sample of galaxies is undergoing recent star\nformation.\n\n\\subsection{Radio-far-infrared correlation}\nThere is a well-established tight correlation between far-infrared\n(FIR) and radio flux in star-forming galaxies\n\\citep[e.g.][]{H85,M06,M12}. When plotted on a log-log scale, the\nrelationship between radio continuum and FIR flux for star-forming\ngalaxies appears linear. This correlation has been well-studied at low\nfrequencies ($\\sim$ 1.4 GHz) where synchrotron emission is the\ndominant component of radio emission in a star-forming galaxy. We\ninvestigated whether this correlation could also be found on a global\nscale at 33 GHz, where synchrotron emission is weaker than it is at\n1.4 GHz and the relative contribution from thermal emission is more\nsignificant.\n\nWe limited our study of the radio-FIR correlation to the galaxies in\nour sample that are unresolved with the GBT beam at 33 GHz (as\ndiscussed in Section 2.2). We chose this limit to ensure that we were\nobserving both the total area of radio emission and total area of\nfar-infrared emission in each galaxy. This minimizes issues related to\nthe different beam sizes of the GBT and IRAS (objects are considered\npoint sources to IRAS if they are more compact than 1$\\arcmin$ at 60\n$\\rm \\mu m$ and 2$\\arcmin$ at 100 $\\rm \\mu m$). \n\nWe fit a power law to our 33 GHz flux as a function of total FIR\nflux. The total FIR flux was determined by a combination of archival\nIRAS 100 $\\rm \\mu m$ and 60 $\\rm \\mu m$ fluxes as described in\n\\citet{H88} ($S_{FIR} = 2.58 S_{60\\mu m} + S_{100 \\mu m}$). We chose\nto compute each galaxy's 33 GHz flux by taking the average of its\nfluxes in the four sub-bands. 
We used this measure (rather than the\nflux at 33 GHz inferred from the galaxies' SEDs) in order to eliminate\npossible uncertainties in the flux due to using assumed spectral\nindices in our fits. We found that the fluxes were related by $\\rm log\n\\ S_{33} = (0.88 \\pm 0.01) log \\ S_{FIR} + log \\ (5.3\\times 10^{-4}\n\\pm 6\\times 10^{-5})$. This correlation is relatively well-fit (the\nfractional errors of both fit parameters are small) even though our\nsample contains a wide range of thermal fractions. \\citet{M12} found a\nsimilar correlation between 33 GHz and $24 \\rm \\mu m$ fluxes for\nresolved nuclei and individual star-forming regions of galaxies. We\nfind that the radio-FIR correlation at 33 GHz can be extended to\nglobal measurements of galaxies' fluxes.\n\nAs a control of the tightness of the radio-FIR correlation in our\nsample, we also fit a relationship between the galaxies' NVSS 1.4 GHz\nfluxes and their total FIR fluxes. This relationship for the\nunresolved galaxies in our sample is $\\rm log \\ S_{1.4} = (0.85 \\pm\n0.01) log \\ S_{FIR} + log \\ (0.0047 \\pm 0.0006)$. The fractional\nuncertainties on the fit parameters are similar to those of the fit at\n33 GHz. We plot both correlations in Figure \\ref{fig:rad-IR}.\n\nAs discussed in Section 3.2, we have determined thermal fractions from\nSED fits assuming fixed thermal and nonthermal spectral indices. Due\nto the limited number of radio data points we have for each galaxy, we\ncannot more accurately constrain the thermal fractions at 33 GHz of\nthe galaxies in our sample at this time. Therefore, we do not have\nenough information to definitively isolate thermal and nonthermal\ncomponents to explore whether the radio-FIR correlation is equally\ntight for each. As an estimate, we have coded approximate thermal\nfractions in the plot. Even given these limitations, we are confident\nthat a correlation exists between the unresolved galaxies' total radio\nflux at 33 GHz and total FIR flux. 
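The FIR combination and the fitted 33 GHz relation above can be applied directly. A sketch, where the IRAS fluxes are invented inputs and the fit coefficients are the central values quoted in the text:

```python
# Total FIR flux from IRAS 60/100 um bands (Helou et al. 1988) and the
# fitted relation log S_33 = 0.88 log S_FIR + log(5.3e-4), fluxes in Jy.

def s_fir_jy(s60_jy, s100_jy):
    return 2.58 * s60_jy + s100_jy

def s33_from_fir_jy(s_fir):
    return 5.3e-4 * s_fir ** 0.88

s_fir = s_fir_jy(10.0, 20.0)    # illustrative IRAS fluxes -> 45.8 Jy
s33 = s33_from_fir_jy(s_fir)    # predicted ~0.015 Jy (15 mJy) at 33 GHz
```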
\\citet{M12} found a similar\ncorrelation at 33 GHz for resolved nuclei and star-forming regions of\ngalaxies.\n\nTo further constrain the radio-FIR correlation at 33 GHz in our\nsample, we calculated $q_{\\nu}$ for each galaxy. $q_{\\nu}$ is a\nlogarithmic measure of the ratio of total far-infrared flux ($S_{FIR}$\nin Janskys) to radio continuum flux ($S_{\\nu}$) in units of $\\rm W\nm^{-2} Hz^{-1}$ at a given frequency. It is defined in \\citet{H85} as\n\\begin{equation}\nq_{\\nu} = log\\left(\\frac{S_{FIR} \\cdot 1.26 \\times 10^{-14} W\n m^{-2}}{3.75 \\times 10^{12} W m^{-2}}\\right) -\nlog\\left(\\frac{S_{\\nu}}{W m^{-2} Hz^{-1}}\\right).\n\\end{equation}\nThe average $q_{33}$ for our sample is $q_{33}=3.3$, with a 1$\\sigma$\nscatter of 0.3. \\citet{C92} reported that at 1.4 GHz, the average\nvalue of $q_{1.4}$ from a large sample of galaxies is $q_{1.4}=2.3 \\pm\n0.2$. The average value of $q_{\\nu}$ at 1.4 GHz for this set of\ngalaxies is $q_{1.4} = 2.4 \\pm 0.2$, consistent with the \\citet{C92}\nvalue. Since $q_{\\nu}$ is a function of the ratio of FIR flux to radio\nflux at a given frequency, it makes sense that $q_{\\nu}$ is larger\nusing 33 GHz fluxes than it is using 1.4 GHz fluxes (star-forming\ngalaxies are generally much brighter at 1.4 GHz than at 33 GHz). The\nscatter on $q_{\\nu}$ at 33 GHz is larger than that at 1.4 GHz, which\nindicates that the radio-FIR correlation is not as tight at 33 GHz as\nat 1.4 GHz. This may be due to contamination from increased thermal\nflux at 33 GHz. If the correlation is solely between synchrotron and\nFIR emission, thermal flux at 33 GHz will increase the scatter in the\ncorrelation. However, due to our small sample size, we cannot rule out\nthe possibility that the correlation is just as strong at 33 GHz,\nwhere thermal fractions are higher, as it is at 1.4 GHz, where\nnonthermal emission is typically much stronger. 
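The q_nu calculation is easy to get wrong because of the Jy conversions; a sketch of the Helou et al. (1985) definition above, with fluxes in Jy (the example values are arbitrary):

```python
import math

# q_nu: log ratio of total FIR flux to radio flux density. s_fir is the
# dimensionless IRAS combination 2.58*S60 + S100 (in Jy); 1.26e-14
# converts it to W m^-2, 3.75e12 is the normalization in the definition
# above, and 1e-26 converts the radio flux from Jy to W m^-2 Hz^-1.

def q_nu(s_fir, s_radio_jy):
    fir_term = s_fir * 1.26e-14 / 3.75e12
    return math.log10(fir_term / (s_radio_jy * 1.0e-26))

q33 = q_nu(45.8, 0.0153)   # arbitrary fluxes; ~3.0, near the sample mean of 3.3
```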
We note that the\ngalaxies with the highest thermal fractions lie above the fitted\ncorrelation at 33 GHz, while the same is not true at 1.4 GHz, which\nsupports thermal emission being the cause of increased scatter.\n\nIn addition to plotting the radio-FIR correlation, we also plot the\nratio of 33 GHz flux to FIR flux, $q_{33}^{-1}$, against\n$\\alpha_{1.4-33}$ for our unresolved galaxies in Figure\n\\ref{fig:inverseqvsalpha}, similar to \\citet{M12}. The plot shows an\nincreasing $q_{33}^{-1}$ for flatter values of\n$\\alpha_{1.4-33}$. Flatter $\\alpha_{1.4-33}$ values are presumably\nindicative of a higher proportion of thermal flux to nonthermal flux,\nwhich is reflected in the highest thermal fractions in our sample also\nhaving the flattest $\\alpha_{1.4-33}$. A correlation between an\nelevated $q_{33}^{-1}$ and flat values of $\\alpha_{1.4-33}$ is not\nsurprising if the radio-FIR correlation is solely dependent on\nsynchrotron emission. If the radio-FIR correlation was independent of\nthe type of radio emission, $q_{33}^{-1}$ should be relatively\nconstant between galaxies and should not be affected by different\nspectral indices or thermal fractions. Our data support that the\nradio-FIR correlation is independent of a galaxy's thermal emission\nsince the addition of thermal emission results in elevated ratios of\n33 GHz flux to FIR flux.\n\n\\subsubsection{Implications for star formation timescales}\n\nWhen the timescales of the emission mechanisms for thermal,\nnonthermal, and FIR fluxes are taken into account, the observed\nrelationship between the ratio of 33 GHz and FIR fluxes and\n$\\alpha_{1.4-33}$ may be a way to age-date an episode of star\nformation. Since thermal flux is only produced by the shortest-lived\n($\\tau<10$ Myr) massive stars, its presence in large quantities\nrelative to synchrotron emission is indicative of very young star\nformation. 
Since in addition to massive stars, infrared emission also\ntraces less massive stars ($\\rm M > 5M_{\\sun}$) that live longer than\nthe $\\rm M > 8M_{\\sun}$ stars that produce thermal and nonthermal\nradio emission \\citep{D90}, FIR emission is a tracer of star formation\non longer timescales. Stars with these masses can live up to $\\sim$100\nMyr, while nonthermal radio emission traces stars with lifetimes of up\nto $\\sim$30 Myr and whose emission is detectable for up to 100 Myr\n\\citep[for an illustrative plot of stellar lifetimes, see Figure 3\n of][]{R05}. In addition, infrared emission also contains a component\nfrom diffuse dust that is heated by lower-mass stars with lifetimes\nlonger than 100 Myr. These timescales could mean that the galaxies\nthat show both flat spectral indices and enhanced $q_{33}^{-1}$ also\nhost the youngest areas of ongoing star formation. This correlation\ncould then be a method of determining approximate ages for galaxies'\nglobal star formation. As a simple test, we used a Starburst 99 model\nof a single instantaneous burst using default inputs (solar\nmetallicity, a 2-component Kroupa IMF, and no effects of cosmic ray\naging, escape, or absorption taken into account) run for 100 Myr\n\\citep{L99,V05,L10}. This model, depicted in Figure \\ref{fig:sb99},\nshows the flattest spectral indices and highest thermal fractions at\nthe earliest times of the starburst. Similarly, the steepest spectral\nindices and lowest thermal fractions were seen as the lowest-mass\nstars that produce supernovae were dying (at $\\sim$40 Myr). The\nstarburst's ratio of 33 GHz luminosity to FIR luminosity was also high\nat early times (between 3 Myr and 40 Myr) while the lowest ratios of\n33 GHz luminosity to FIR luminosity were seen even later (after 40\nMyr). 
While modeling a more robust quantitative relationship between\nthis observed correlation and the age of each galaxy's star-forming\nepisode is beyond the scope of this work (the simple model we used\ndoes not take into account multiple co-existing generations of star\nformation), the apparent relationship between enhanced 33 GHz flux,\nflat spectral indices, and high thermal fractions is a promising\nmetric for future global radio and far-infrared photometric studies of\nstar-forming galaxies. Our simple model is not robust enough to\nconstrain the timescales' uncertainties, but is only meant to be\nillustrative of a correlation visible in our data.\n\n\n\\section{Conclusions}\n\nWe have observed 27 local, well-studied, star-forming galaxies between\n26-40 GHz with the GBT and obtained the first detections at this\nfrequency range for 22 of the galaxies. We determined the\ncontributions of thermal free-free and nonthermal synchrotron emission\nto the galaxies' total radio emission. We have used these measures to\nderive the number of massive, short-lived O stars and the number of\nrecent supernovae in the observed regions of each galaxy. In addition,\nwe have calculated SFRs for each galaxy using thermal and nonthermal\nfluxes and explored the radio-FIR correlation for the unresolved\ngalaxies. 
We found that\n\\begin{itemize}\n\\item None of the galaxies have spectral indices indicative of purely\n  thermal emission; eight galaxies show spectra that are too\n  steep to fit thermal components,\n\\item Thermal fractions range from 10$\\%$ to 90$\\%$, with a\n  median of $55\\%$,\n\\item The radio-far infrared correlation holds for the unresolved\n  galaxies at 1.4 GHz and 33 GHz, though the scatter at 33 GHz is\n  larger due to the increased influence of thermal emission at higher\n  frequencies, and\n\\item Galaxies with flat $\\alpha_{1.4-33}$ and high thermal fractions\n  have enhanced radio flux at 33 GHz with respect to far-infrared\n  flux, which identifies them as galaxies with recent star\n  formation. This is consistent with a simple model of a single\n  starburst.\n\\end{itemize}\n\nWe found that the observed regions of our galaxies had a diverse mix\nof radio continuum characteristics, with some galaxies' SEDs being\ndominated at 33 GHz by the thermal emission indicative of ongoing\nmassive star formation, while others have little or no detectable\nthermal emission. Even with this spread in the relative contributions\nof thermal and nonthermal emission, we saw that there is still a\ncorrelation between the global 33 GHz and far-infrared flux in the\nunresolved galaxies. The scatter in the correlation is larger than\nthat at 1.4 GHz, likely due to the increased influence of thermal\nemission at 33 GHz. We cannot, however, rule out that the radio-FIR\ncorrelation is not solely dependent on synchrotron emission. We also\nfound that higher ratios of 33 GHz emission to FIR emission correlated\nwith flatter spectral indices (and higher thermal fractions) for\nunresolved galaxies, which is consistent with younger ages in simple\nstarburst models. This correlation may be useful as a rough indicator\nof the age of the most recent episode of star formation. 
Future global\nstudies of more homogeneous galaxy populations or resolved studies of\nindividual star-forming regions will enable better modeling of star\nformation timescales using this metric.\n\nIn giving a broad measure of nearby galaxies' radio continuum\nemission, our observations complement previous studies done with\ninterferometers in which individual star-forming regions in local\ngalaxies were highly resolved. With the GBT, we can simultaneously\nobserve compact and diffuse thermal and nonthermal emission and\ndetermine their relative intensities, and in doing so estimate the\ntimescale for the current episode of star formation. Unfortunately, we\ncannot make stricter timescale estimates than those discussed in\nSection 3.3 at this time, as we do not have enough radio data points\nto robustly fit thermal and nonthermal flux components with varying\nspectral indices. Obtaining more unresolved radio fluxes at lower and\nhigher frequencies would help this effort.\n\nThis research has made use of the NASA\/IPAC Extragalactic Database\n(NED) which is operated by the Jet Propulsion Laboratory, California\nInstitute of Technology, under contract with the National Aeronautics\nand Space Administration. We acknowledge the use of NASA's SkyView\nfacility (http:\/\/skyview.gsfc.nasa.gov) located at NASA Goddard Space\nFlight Center. We thank the telescope operators and support staff at\nthe GBT for assistance with this project. K.R. acknowledges support\nfrom an NRAO student observing support award (GSSP10-0002). K.R. also\nthanks Brian Mason for his help with understanding the CCB observation\nand data reduction process.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix A. }\n\n\\widetext\n\\section*{\\large Supplemental material for stacking-induced Chern insulator}\n\n\\section{I. 
Haldane model in bilayer honeycomb lattice}\n\nWe consider a HM bilayer with AB and AA stackings where the layers are assumed to have complex NNN phases $\\Phi_1=-\\Phi_2=\\frac{\\pi}2$, to drop the global energy shift ($a^0_{\\mathbf{k}}=0$ (Eq.~\\ref{al})) which does not affect the topology of the system.\nIn the basis of the four orbitals forming the unit cell ($A_1,B_1,A_2,B_2$) the corresponding Hamiltonians can be written as\n\\begin{eqnarray}\n H_\\text{AA-HM}(\\mathbf{k})=\n\\begin{pmatrix}\na_{\\mathbf{k}} +M_1 & f_{\\mathbf{k}} &2t_{\\perp}&0 \\\\\nf^{\\ast}_{\\mathbf{k}} & -a_{\\mathbf{k}} -M_1&0&2t_{\\perp}\\\\\n2t_{\\perp}&0&-a_{\\mathbf{k}} +M_2 & f_{\\mathbf{k}}\\\\\n0&2t_{\\perp}&f^{\\ast}_{\\mathbf{k}} & a_{\\mathbf{k}} -M_2\n\\end{pmatrix}.\n\\end{eqnarray}\n\n\\begin{eqnarray}\n H_\\text{AB-HM}(\\mathbf{k})=\n\\begin{pmatrix}\na_{\\mathbf{k}} +M_1 & f_{\\mathbf{k}} &0 &2t_{\\perp} \\\\\nf^{\\ast}_{\\mathbf{k}} & -a_{\\mathbf{k}} -M_1&0&0\\\\\n0&0&-a_{\\mathbf{k}} +M_2 & f_{\\mathbf{k}}\\\\\n2t_{\\perp}&0&f^{\\ast}_{\\mathbf{k}} & a_{\\mathbf{k}} -M_2\n\\end{pmatrix}.\n\\end{eqnarray}\n\nThese Hamiltonians can be expressed, using the layer and the sublattice pseudospin matrices $\\boldsymbol{\\sigma}$ and $\\boldsymbol{\\tau}$, as\n\\begin{eqnarray}\nH_\\text{AA-HM}(\\mathbf{k})&=&\\left( b_{\\mathbf{k}}\\sigma_x+c_{\\mathbf{k}}\\sigma_y\\right)\\tau_0\n+2t_{\\perp}\\sigma_0\\tau_x+ a_{\\mathbf{k}}\\sigma_z\\tau_z\n+\\frac 12 \\left(M_1+M_2\\right)\\sigma_z\\tau_0+\\frac 12 \\left(M_1-M_2\\right)\\sigma_z\\tau_z,\\\\\nH_\\text{AB-HM}(\\mathbf{k})&=&\\left( b_{\\mathbf{k}}\\sigma_x+c_{\\mathbf{k}}\\sigma_y\\right)\\tau_0\n+t_{\\perp}\\left(\\sigma_x\\tau_x-\\sigma_y\\tau_y\\right)+ a_{\\mathbf{k}}\\sigma_z\\tau_z\n+\\frac 12 \\left(M_1+M_2\\right)\\sigma_z\\tau_0+\\frac 12 \\left(M_1-M_2\\right)\\sigma_z\\tau_z,\n\\label{AB-HM-mass}\n\\end{eqnarray}\nwhere $a_{\\mathbf{k}}$ is given by Eq.~\\ref{al} in the main text.\\\n\n$H_\\text{AA-HM}$ 
($H_\\text{AB-HM}$) breaks TRS $\\mathcal{T}=K$, the charge conjugation, represented by $\\mathcal{C}=\\sigma_z\\tau_zK$ ($\\mathcal{C}=\\sigma_z\\tau_0K$) with $\\mathcal{C}^2=\\mathds{1}$, and the chirality $\\mathcal{S}=\\tau_z\\sigma_z$ ($\\mathcal{S}=\\tau_0\\sigma_z$).\\\n\nIn the following, we will show based on numerical band structure calculations on bilayer ribbons, that coupling two HM with opposite chiralities ($C_1=-C_2$), resulting from oppositely broken TRS ($\\Phi_1=-\\Phi_2$), gives rise, as expected, to a trivial Chern insulator with $C=C_1+C_2=0$.\\\nWe will discuss the stacking order, the nature of the ribbon edges (zigzag or armchair) and the effect of the intralayer Semenoff masses $M_l$, where $l=1,2$ is the layer index. The case of AA stacking was discussed in Ref.~\\onlinecite{Dutta} for a fixed value of the mass term $M_l$.\\\n\nFigure~\\ref{band-HM-ZZ} shows the band structure of the AB bilayer HM on zigzag ribbons for $\\Phi_1=\\Phi_2= \\frac{\\pi}2$, $M_1=M_2=0$ and at different values of the interlayer hopping $t_{\\perp}$ . Starting from uncoupled ($t_{\\perp}=0$) chiral layers, with equal Chern number $C_1=C_2=\\pm 1$, the system turns, under the interlayer coupling, into a Chern insulator with a Chern number $C=\\pm 2$ characterized by a pair of chiral edge states propagating at the boundaries of each layer as shown in Fig.~\\ref{table-res} of the main text.\n \n\\begin{figure}[hpbt] \n\\begin{center}\n$\n\\begin{array}{ccc}\n\\includegraphics[width=0.3\\columnwidth]{HM-AB-ZZ-a.eps}\n\\includegraphics[width=0.3\\columnwidth]{HM-AB-ZZ-b.eps}\n\\includegraphics[width=0.3\\columnwidth]{HM-AB-ZZ-c.eps}\n\\end{array}\n$\n\\end{center}\n\\caption{Tight binding calculations of the electronic band structure of an AB bilayer HM on zigzag nanoribbons of a width $W = 60$ atoms. 
The interlayer hopping is (a) $t_{\\perp}=0$, \n(b) $t_{\\perp}=0.5t$ and (c) $t_{\\perp}=0.8t$.\nCalculations are done for $\\Phi_1=\\Phi_2= \\frac{\\pi}2$, $M_1=M_2=0$ and $t_2 = 0.1t$, where $t$ is the NN hopping integral.}\n\\label{band-HM-ZZ}\n\\end{figure}\nThe $C=\\pm 2$ Chern insulating phase occurs as long as the Semenoff masses satisfy $|M_l|<|M_{lc}|$, where\n\\begin{eqnarray}\nM_{lc}=3\\sqrt{3}t_2\\sin \\Phi_l.\n\\label{Mlc}\n\\end{eqnarray}\n$M_{lc}$ is the critical mass at which the transition from a topological phase ($C_l=\\pm 1$) to a trivial gapped phase ($C_l=0$) takes place in the monolayer HM~\\cite{Haldane}.\\\n\nThis feature is depicted in Fig.~\\ref{band-HM-mass}, showing that the pair of chiral edge states appears at the boundaries of the bilayer ribbons only if both layers have the same chirality~\\cite{Haldane}.\\\n\n\\begin{figure}[hpbt] \n$\n\\begin{array}{ccc}\n\\includegraphics[width=0.3\\columnwidth]{HM_AB_mass_a.eps}\n\\includegraphics[width=0.3\\columnwidth]{HM_AB_mass_b.eps}&\\includegraphics[width=0.3\\columnwidth]{HM_AB_mass_c.eps}\\\\\n\\includegraphics[width=0.3\\columnwidth]{HM_AB_mass_d.eps}\n\\includegraphics[width=0.3\\columnwidth]{HM_AB_mass_e.eps}&\n\\includegraphics[width=0.3\\columnwidth]{HM_AB_mass_f.eps}&\n\\end{array}\n$\n\\begin{center}\n\n\\end{center}\n\\caption{Electronic band structure of an AB bilayer HM on zigzag\nnanoribbons of a width $W = 60$ atoms. 
Calculations are done for $t_2 = 0.1t$, $\\Phi_1=\\Phi_2= \\frac{\\pi}2$, $t_{\\perp}=0.5t$ and for (a) $M_1=M_2=0$, (b) $M_1=M_2=\\sqrt{3}t_2$, (c) $M_1=-M_2=\\sqrt{3}t_2$,\n(d) $M_1=0, M_2=3\\sqrt{3}t_2$, (e) $M_1=0, M_2=5\\sqrt{3}t_2$, (f) $M_1=5\\sqrt{3}t_2, M_2=5\\sqrt{3}t_2$.}\n\\label{band-HM-mass}\n\\end{figure}\n\nThis behavior is independent of the nature (zigzag or armchair) of the ribbon boundaries as shown in Fig.~\\ref{band-HM-AC}.\n\n\\begin{figure}[hpbt] \n\\begin{center}\n$\n\\begin{array}{cc}\n\\includegraphics[width=0.3\\columnwidth]{HM-AB-ZZ-b.eps}\n\\includegraphics[width=0.3\\columnwidth]{HM_AC_AB_b.eps}\n\\end{array}\n$\n\\end{center}\n\\caption{Electronic band structure of an AB bilayer HM on (a) zigzag and (b) armchair nanoribbons of a width $W = 60$ atoms. \nCalculations are done for $t_2 = 0.1t$, $t_{\\perp} = 0.5t$, $\\Phi_1=\\Phi_2=\\frac{\\pi}2$ and\n$M_1=M_2=0$.}\n\\label{band-HM-AC}\n\\end{figure}\n\nRegardless of the stacking type (AB or AA), the bilayer HM is~\\cite{Dutta}:\n$(i)$ a trivial insulator, if the layers have opposite Chern numbers $C_1=-C_2$,\n$(ii)$ a topological chiral insulator with $C=\\pm 2$, if the layers have the same chirality ($C_1=C_2$),\n$(iii)$ and a Chern insulator with $C=\\pm1$ if one layer has a non-vanishing Chern number $C_1=\\pm 1$ and the other layer is a trivial insulator $C_2=0$, as depicted in Fig.~\\ref{band-HM-AA} showing the band structure of an AA bilayer HM on zigzag ribbons.\n\\begin{figure}[hpbt] \n\\begin{center}\n$\n\\begin{array}{cc}\n\\includegraphics[width=0.3\\columnwidth]{HM-AA-a.eps}\n\\includegraphics[width=0.3\\columnwidth]{HM-AA-b.eps}\\\\\n\\includegraphics[width=0.3\\columnwidth]{HM-AA-c.eps}\n\\includegraphics[width=0.3\\columnwidth]{HM-AA-d.eps}\n\\end{array}\n$\n\\end{center}\n\\caption{Electronic band structure of an AA Bilayer HM on zigzag\nnanoribbons of a width $W = 60$ atoms for $t_{\\perp} = 0.5t$,\n(a) $\\Phi_1=\\Phi_2=\\frac{\\pi}2$, $M_1=M_2=0$, (b) 
$\\Phi_1=\\Phi_2=-\\frac{\\pi}2$, $M_1=M_2=0$, (c) $\\Phi_1=\\Phi_2=\\frac{\\pi}2$, $M_1=0$, $M_2=5\\sqrt{3}t_2$ and\n(d) $\\Phi_1=\\frac{\\pi}2, \\Phi_2=0$, $M_1=\\sqrt{3}t_2$, $M_2=0$. Calculations are done for\n$t_2=0.1t$ in (a), (b) and (d) and $t_2=0.2t$ in (c).}\n\\label{band-HM-AA}\n\\end{figure}\n\n\n\\section{ II. Modified Haldane model in AB stacked layers}\n\nTo derive the low energy Hamiltonian given by Eq.~\\ref{Heff} in the main text, we use the \nL\\\"owdin partitioning method~\\cite{Lowdin,McCann} in the case of bilayer graphene. \nFor simplicity, we consider the case $\\Phi_1=-\\Phi_2=\\frac{\\pi}2$ to remove the energy-shift terms $a^0_{l,\\mathbf{k}}$ (Eq.~\\ref{al}). We rewrite the full Hamiltonian (Eq.~\\ref{HBL}), in the basis ($A_2,B_1,A_1,B_2$) as\n\\begin{eqnarray}\n H_{B}(\\mathbf{k})=\n\\begin{pmatrix}\nH_{\\alpha\\alpha} & H_{\\alpha\\beta} \\\\\nH_{\\beta\\alpha} & H_{\\beta\\beta}\n\\end{pmatrix},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nH_{\\alpha\\alpha}=\n\\begin{pmatrix}\n -a_{\\mathbf{k}}+M_2& 0 \\\\\n0 & a_{\\mathbf{k}}-M_1\n\\end{pmatrix},\\,\nH_{\\alpha\\beta}=H_{\\beta\\alpha}=\n\\begin{pmatrix}\n0&f_{\\mathbf{k}} \\\\\nf^{\\ast}_{\\mathbf{k}} & 0\n\\end{pmatrix},\\,\nH_{\\beta\\beta}=\n\\begin{pmatrix}\n a_{\\mathbf{k}}+M_1& t_{\\perp} \\\\\n t_{\\perp}& -a_{\\mathbf{k}}-M_2\n\\end{pmatrix}.\n\\end{eqnarray}\nThe corresponding effective Hamiltonian is~\\cite{McCann}\n\n\\begin{eqnarray}\n H_\\text{eff}({\\mathbf{k}},E)=H_{\\alpha\\alpha}+H_{\\alpha\\beta}\\left( E-H_{\\beta\\beta}\\right)^{-1} H_{\\beta\\alpha},\n\\end{eqnarray}\nwhich reduces in the limit $M_l\\sim t_2\\ll t_{\\perp}$ ($l=1,2$), and for $E\\sim 0$ to\n\\begin{eqnarray}\nH_\\text{eff}({\\mathbf{k}},E=0)\\equiv H_\\text{eff}({\\mathbf{k}})\\sim H_{\\alpha\\alpha}-\\frac 1{X^2}H_{\\alpha\\beta}H_{\\beta\\beta} H_{\\beta\\alpha},\n\\end{eqnarray}\nwhere $X^2=\\left(a_{\\mathbf{k}}+M_1\\right)\\left(a_{\\mathbf{k}}+M_2\\right)+4t^2_{\\perp}$.\\\n\nAssuming 
$M_l\\frac{|f_{\\mathbf{k}}|^2}{X^2}\\ll M_{l^{\\prime}}\\sim t_2 \\, (l,l^{\\prime}=1,2)$, the corresponding effective Hamiltonian gives rise to Eq.~\\ref{Heff} of the main text.\n\n\\section{III. Modified Haldane model in AA stacked layers}\n\n\\begin{figure}[hpbt] \n$\n\\begin{array}{cc}\n\\includegraphics[width=0.3\\columnwidth]{mHM-AA-ZZ-a.eps}\n\\includegraphics[width=0.3\\columnwidth]{mHM-AA-AC-b.eps}\\\\\n\\includegraphics[width=0.3\\columnwidth]{mHM-AA-ZZ-c.eps}\n\\includegraphics[width=0.3\\columnwidth]{mHM-AA-ZZ-d.eps}\n\\end{array}\n$\n\\begin{center}\n\\end{center}\n\\caption{Band structure of the mHM on AA nanoribbons of width $W=60$ atoms with (a,c,d) zigzag and (b) armchair boundaries. Calculations are done for $t_2 = 0.1t$, $t_{\\perp}=0.5t$, (a) and (b) $\\Phi_1=-\\Phi_2= \\frac{\\pi}2$, $M_1=M_2=0$, while in (c) $\\Phi_1=-\\Phi_2= \\frac{\\pi}2$, $M_1=-M_2=\\sqrt{3}t_2$ and (d) $\\Phi_1=-\\Phi_2= \\frac{\\pi}2$, $M_1=0$, $M_2=\\sqrt{3}t_2$. }\n\\label{AA}\n\\end{figure}\nFigure~\\ref{AA} shows the band structure of the mHM in AA stacked ribbons with zigzag and armchair boundaries in the case of opposite complex phases $\\Phi_1=-\\Phi_2$.\nIn the absence of the Semenoff masses ($M_1=M_2=0$), the system remains gapless under the interlayer coupling. However, it turns into a trivial insulator if the layers have Semenoff mass terms. 
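This gaplessness can be checked numerically at a single $k$-point by treating $f_{\mathbf{k}}$ and $a_{\mathbf{k}}$ as free parameters in the $4\times 4$ AA-stacked mHM Bloch Hamiltonian (basis $A_1,B_1,A_2,B_2$). The sketch below is our own and not code from the paper:

```python
import numpy as np

def h_aa_mhm(f, a, t_perp):
    """4x4 Bloch Hamiltonian of the AA-stacked modified Haldane bilayer,
    with f the NN form factor and a the NNN (Haldane) term at this k-point."""
    fc = np.conjugate(f)
    return np.array([[a,        f,  2 * t_perp, 0         ],
                     [fc,       a,  0,          2 * t_perp],
                     [2 * t_perp, 0, -a,        f         ],
                     [0, 2 * t_perp, fc,        -a        ]], dtype=complex)

a, t_perp = 0.3, 0.5

# On the Fermi line |f|^2 = a^2 + 4 t_perp^2 the middle bands touch:
f_on = np.sqrt(a**2 + 4 * t_perp**2)
ev_on = np.linalg.eigvalsh(h_aa_mhm(f_on, a, t_perp))
gap_on = ev_on[2] - ev_on[1]

# Away from that line the spectrum at this k-point is gapped:
ev_off = np.linalg.eigvalsh(h_aa_mhm(2.0, a, t_perp))
gap_off = ev_off[2] - ev_off[1]
```

The eigenvalues come out as $\pm|f|\pm\sqrt{a^2+4t_\perp^2}$, so the gap closes exactly on the line $|f_{\mathbf{k}}|^2=a_{\mathbf{k}}^2+4t_\perp^2$ discussed in the text.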
\\\n\nTherefore, in the absence of the Semenoff masses, the Fermi surface (Fig.~\\ref{Fig-intro}) of the mHM in AA stacked bilayer is,\ncontrary to the AB stacking, stable against the interlayer hopping which cannot induce a gap opening.\\\n\nTo understand the Fermi surface stability, we start by writing the corresponding Hamiltonian in the basis of the four orbitals forming the unit cell ($A_1,B_1,A_2,B_2$) and we consider, for simplicity, the case of opposite complex NNN phases $\\Phi_1=-\\Phi_2=\\frac{\\pi}2$ to have a vanishing global energy shift ($a^0_{\\mathbf{k}}=0$ (Eq.~\\ref{al}))\n\\begin{eqnarray}\n H_{AA-mHM}(\\mathbf{k})=\n\\begin{pmatrix}\na_{\\mathbf{k}} & f_{\\mathbf{k}} &2t_{\\perp}&0 \\\\\nf^{\\ast}_{\\mathbf{k}} & a_{\\mathbf{k}} &0&2t_{\\perp}\\\\\n2t_{\\perp}&0&-a_{\\mathbf{k}} & f_{\\mathbf{k}}\\\\\n0&2t_{\\perp}&f^{\\ast}_{\\mathbf{k}} & -a_{\\mathbf{k}}\n\\end{pmatrix}.\n\\label{AA-mHM}\n\\end{eqnarray}\nThis Hamiltonian can be written, using the layer and the sublattice pseudospin matrices $\\boldsymbol{\\sigma}$ and $\\boldsymbol{\\tau}$, as\n\\begin{eqnarray}\nH_\\text{AA-mHM}(\\mathbf{k})&=&\\left( b_{\\mathbf{k}}\\sigma_x+c_{\\mathbf{k}}\\sigma_y\\right)\\tau_0\n+2t_{\\perp}\\sigma_0\\tau_x+ a_{\\mathbf{k}}\\sigma_0\\tau_z,\n\\label{HBLAA}\n\\end{eqnarray}\nwhere $a_{\\mathbf{k}}$ is given by Eq.~\\ref{al} in the main text.\\\n\nThe Hamiltonian of Eq.~\\ref{HBLAA} breaks TRS, $\\mathcal{T}=K\\tau_x$, the charge conjugation, represented by $\\mathcal{C}=\\sigma_z\\tau_zK$ with $\\mathcal{C}^2=\\mathds{1}$, and the chirality $\\mathcal{S}=\\tau_z\\sigma_z$.\\\n\nThe gap separating the two bands, $E_{-,-}(\\mathbf{k})$ and $E_{+,-}(\\mathbf{k})$, around the zero energy is $\\Delta=\\mathrm{min}_{\\mathbf{k}}\\left(\\Delta_{\\mathbf{k}}\\right)$, where \n\\begin{eqnarray}\n \\Delta_{\\mathbf{k}}=2\\sqrt{A_{\\mathbf{k}}-B_{\\mathbf{k}}},\\;\n A_{\\mathbf{k}}=a^2_{\\mathbf{k}}+|f_{\\mathbf{k}}|^2+4t^2_{\\perp},\\; 
B_{\\mathbf{k}}=2|f_{\\mathbf{k}}|\\sqrt{a^2_{\\mathbf{k}}+4t^2_{\\perp}}.\n\\end{eqnarray}\n$\\Delta_{\\mathbf{k}}=0$ leads to\n\\begin{eqnarray}\n|f_{\\mathbf{k}}|^2=a^2_{\\mathbf{k}}+4t^2_{\\perp},\n\\label{FL}\n\\end{eqnarray}\nwhich defines a closed Fermi line. \\\n\nFor $a_{\\mathbf{k}}=0$, Eq.~\\ref{FL} corresponds to the Fermi line of the AA graphene bilayer in the absence of NNN hopping terms. \\\n\nFor $t_{\\perp}=0$, Eq.~\\ref{FL} describes the mHM in AA bilayer with a particle-hole Fermi line obeying $|f_{\\mathbf{k}}|=|a_{\\mathbf{k}}|$.\\\n\nBy turning on $t_{\\perp}$, this Fermi line is simply shifted but cannot be gapped (Eq.~\\ref{FL}).\nThe mHM on an AA bilayer then remains metallic for vanishing Semenoff masses.\\\n\n\n\\section{IV. Modified Haldane model in AB stacked layers: effect of Semenoff masses}\n\nThe effect of the Semenoff mass on the mHM in a monolayer graphene nanoribbon is represented in Fig.~\\ref{supp-mass-ML}, showing that the mass term lifts the degeneracy of the antichiral edge modes, which\nsurvive up to a critical value of $M_1$.\n\n\\begin{eqnarray}\n\\epsilon_n e^{in\\Phi^*_n} \\equiv -\\frac{\\langle r^n e^{in\\phi}\\rangle}{\\langle r^n\\rangle},\n\\end{eqnarray}\nwith $\\epsilon_n$ and $\\Phi^*_n$ referred to as the eccentricity and participant plane (PP), respectively. Model calculations suggest that hydrodynamic response to the shape component is linear for the first few flow harmonics, i.e. $\\Phi_n\\approx \\Phi_n^*$ and $v_n\\propto \\epsilon_n$ for $n=$1--3~\\cite{Teaney:2010vd,Qiu:2011iv}. 
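The eccentricity $\epsilon_n$ and participant-plane angle $\Phi^*_n$ can be evaluated directly on a sampled initial state. The sketch below is our own, using a made-up anisotropic Gaussian distribution of point-like sources:

```python
import cmath
import random

def eccentricity(xs, ys, n):
    """Participant eccentricity eps_n and plane Phi*_n from point sources,
    following eps_n e^{i n Phi*_n} = -<r^n e^{i n phi}> / <r^n>."""
    cx = sum(xs) / len(xs)          # recentre on the centre of mass
    cy = sum(ys) / len(ys)
    num, den = 0.0 + 0.0j, 0.0
    for x, y in zip(xs, ys):
        z = complex(x - cx, y - cy)
        r = abs(z)
        num += r**n * cmath.exp(1j * n * cmath.phase(z))
        den += r**n
    vec = -num / den
    return abs(vec), cmath.phase(vec) / n

# Hypothetical elliptic "initial state": Gaussian wider in x than in y.
random.seed(1)
xs = [random.gauss(0.0, 3.0) for _ in range(4000)]
ys = [random.gauss(0.0, 1.0) for _ in range(4000)]
eps2, phi2 = eccentricity(xs, ys, 2)
```

For this distribution $\epsilon_2$ approaches $(\sigma_x^2-\sigma_y^2)/(\sigma_x^2+\sigma_y^2)=0.8$, and $\Phi^*_2$ points along the short axis, where the density gradient is largest.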
But these simple relations are violated for higher-order harmonics, due to strong mode-mixing effects intrinsic in the collective expansion~\\cite{Qiu:2011iv,Gardim:2011xv,Teaney:2012ke}.\n\nThe presence of large event-by-event (EbyE) fluctuations of the initial geometry suggests a general set of observables that involve correlations between $v_n$ and $\\Phi_n$:\n\\begin{equation}\n\\label{eq:flow}\np(v_n,v_m,...., \\Phi_n, \\Phi_m, ....)=\\frac{1}{N_{\\mathrm{evts}}}\\frac{dN_{\\mathrm{evts}}}{dv_ndv_m...d\\Phi_{n}d\\Phi_{m}...},\n\\end{equation}\nwith each variable being a function of $\\pT$, $\\eta$ etc~\\cite{Gardim:2012im}. Among these, the joint probability distribution of the EP angles:\n\\small{\n\\begin{eqnarray}\n\\nonumber\n\\frac{dN_{\\mathrm{evts}}}{d\\Phi_{1}d\\Phi_{2}...d\\Phi_{l}} &\\propto& \\sum_{c_n=-\\infty}^{\\infty} a_{c_1,c_2,...,c_l} \\cos(c_1\\Phi_1+c_2\\Phi_2...+c_l\\Phi_l),\\\\\\label{eq:ep}\na_{c_1,c_2,...,c_l}&=&\\left\\langle\\cos(c_1\\Phi_1+c_2\\Phi_2+...+c_l\\Phi_l)\\right\\rangle\n\\end{eqnarray}}\\normalsize\ncan be reduced to the following event-plane correlators required by symmetry~\\cite{Bhalerao:2011yg,Qin:2011uw,Jia:2012ma}:\n\\begin{eqnarray}\n\\label{eq:ep2}\n\\left\\langle\\cos(c_1\\Phi_1+2c_2\\Phi_2...+lc_l\\Phi_l)\\right\\rangle, c_1+2c_2...+lc_l=0.\n\\end{eqnarray}\nThese observables are sensitive to the fluctuations in the initial density profile and the final state hydrodynamics response~\\cite{Teaney:2012ke}.\n\n\nEarlier flow measurements were aimed at studying the individual $v_n$ coefficients for $n=$1--6 averaged over many events~\\cite{Adare:2011tg,star:2013wf,Aamodt:2011by,Aad:2012bu,CMS:2012wg}. Recently, the LHC experiments exploited the EbyE observables defined in Eq.~\\ref{eq:flow} by performing the first measurement of $p(v_n)$~\\cite{Aad:2013xma} for $n=2-4$ and fourteen correlators involving two or three event planes~\\cite{Jia:2012sa,ALICE:2011ab}. 
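Correlators of the form of Eq.~\ref{eq:ep2} can be estimated directly from per-event plane angles. A toy sketch of our own, with synthetic events in which $\Phi_4$ is aligned with $\Phi_2$ while $\Phi_3$ is random:

```python
import math
import random

def ep_correlator(events, coeffs):
    """<cos(c1*1*Phi_1 + c2*2*Phi_2 + ...)> over events; the constraint
    sum_n n*c_n = 0 makes the combination invariant under global rotations."""
    assert sum(n * c for n, c in coeffs.items()) == 0
    total = 0.0
    for phis in events:
        arg = sum(n * c * phis[n] for n, c in coeffs.items())
        total += math.cos(arg)
    return total / len(events)

# Hypothetical events: Phi_4 aligned with Phi_2 up to noise, Phi_3 uncorrelated.
random.seed(2)
events = []
for _ in range(20000):
    phi2 = random.uniform(0, math.pi)           # Phi_2 defined modulo pi
    phi4 = phi2 + random.gauss(0.0, 0.2)        # correlated with Phi_2
    phi3 = random.uniform(0, 2 * math.pi / 3)   # uncorrelated
    events.append({2: phi2, 3: phi3, 4: phi4})

c24 = ep_correlator(events, {2: 2, 4: -1})      # <cos 4(Phi_2 - Phi_4)>
c23 = ep_correlator(events, {2: 3, 3: -2})      # <cos 6(Phi_2 - Phi_3)>
```

As expected, the aligned pair gives a large positive correlator while the uncorrelated pair averages to zero within statistics.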
The measured event-plane correlators are reproduced by EbyE hydrodynamics~\\cite{Qiu:2012uy,Teaney:2012gu} and AMPT transport model~\\cite{Bhalerao:2013ina} calculations. The EP correlation measurement provides detailed insights on the non-linear hydrodynamic response; for example, the correlators $\\left\\langle\\cos 4(\\Phi_{2}-\\Phi_{4})\\right\\rangle$ and $\\left\\langle\\cos 6(\\Phi_{3}-\\Phi_{6})\\right\\rangle$ mainly arise from the non-linear effects, which couple $v_4$ to $(v_2)^2$ and $v_6$ to $(v_3)^2$. Similarly, the correlator $\\left\\langle\\cos (2\\Phi_{2}+3\\Phi_{3}-5\\Phi_5)\\right\\rangle$ is driven by the coupling between $v_5$ and $v_2v_3$~\\cite{Gardim:2011xv,Teaney:2012ke}. \n\n\nThis paper focuses on two subsets of the observables defined by Eq.~\\ref{eq:flow}: $p(v_n,v_m)$ and $p(v_n, \\Phi_m, \\Phi_l, ...)$, which can provide further insights on the linear and non-linear effects in the hydrodynamic response. The correlation $p(v_n,v_m)$ directly quantifies the coupling between $v_m$ and $v_n$, while $p(v_n, \\Phi_m, \\Phi_l, ...)$ allows us to study how the event-plane correlations couple to a specific flow harmonic $v_n$. The probability distributions of these correlations are difficult to measure directly; instead, we explore them systematically using the recently proposed event shape selection method~\\cite{Schukraft:2012ah} (also investigated in Refs.~\\cite{Petersen:2013vca,Lacey:2013eia}): Events in a given centrality interval are first classified according to the observed $v_n$ signal in a certain $\\eta$ range, and the $p(v_m)$ and $p(\\Phi_m, \\Phi_l, ...)$ are then measured in another $\\eta$ range for each class. The event shape observables should be those that correlate well with the $\\epsilon_n$ of the initial geometry, such as the observed $v_1$ (dipolar flow), $v_2$ and $v_3$. 
The roles of these selection variables are similar to the event centrality, except that they further divide events within the same centrality class.\n\nThe event shape selection method also provides a unique opportunity to investigate the longitudinal dynamics of the collective flow. For example, events selected with large $v_2$ in one pseudorapidity window, in addition to having bigger $\\epsilon_2$, may also have stronger density fluctuations, larger initial flow or smaller viscous correction~\\cite{Pang:2012he}. Studying how the $v_n$ values or EP correlations vary with the $\\eta$ separation from the selection window may provide better insights on the longitudinal dynamics in the initial and the final states. Earlier efforts on this front can be found in Refs.~\\cite{Petersen:2011fp,Pang:2012he,Xiao:2012uw}.\n\nIn this paper, we apply the event shape selection technique to events generated by the AMPT model, to investigate the $p(v_n,v_m)$, $p(v_n, \\Phi_m, \\Phi_l, ...)$, and the longitudinal flow fluctuations. These correlations are studied for events binned according to the observed $v_2\\/v_3$ signal; the results are then compared with those for events binned directly in $\\epsilon_2\\/\\epsilon_3$. This comparison helps to elucidate whether the changes in the correlation are driven mostly by the selection of the initial geometry or due to additional dynamics in the final state. This study also helps to develop and validate the analysis method to be used in the actual data analysis.\n\nThe structure of the paper is as follows: Section~\\ref{sec:1} introduces the observables and method of the event shape selection in the AMPT model. Section~\\ref{sec:2} studies how the correlations among the eccentricities and PP angles vary with event shape selection. Section~\\ref{sec:3} presents a study of the rapidity fluctuations of flow. Section~\\ref{sec:4} studies how the correlations among the $v_n$'s and $\\Phi_n$'s vary with event shape selection. 
Section~\\ref{sec:5} gives a discussion and summary of the results.\n\n\n\n\\section{The method}\n\\label{sec:1}\nA Multi-Phase Transport model (AMPT)~\\cite{Lin:2004en} has been used frequently to study the higher-order $v_n$ associated with $\\epsilon_n$ in the initial geometry~\\cite{Xu:2011jm,Xu:2011fe,Ma:2010dv}. It combines the initial fluctuating geometry based on the Glauber model from HIJING with the final state interaction via a parton and hadron transport model. The collective flow in this model is driven mainly by the parton transport. The AMPT simulation in this paper is performed in the string-melting mode with a total partonic cross-section of 1.5 mb and a strong coupling constant of $\\alpha_s=0.33$~\\cite{Xu:2011fe}, which has been shown to reasonably reproduce the $\\pT$ spectra and $v_n$ data at RHIC and the LHC~\\cite{Xu:2011fe,Xu:2011fi}. The initial condition of the AMPT model with string melting has been shown to contain significant longitudinal fluctuations that can influence the collective dynamics~\\cite{Pang:2012he,Pang:2012uw}.\n\nThe AMPT sample used in this study is generated for $b=8$~fm Pb+Pb collisions at the LHC energy of $\\sqrt{s_{NN}}=2.76$ TeV, corresponding to $\\sim 30\\%$ centrality. The particles in each event are divided into various subevents along $\\eta$; one example division scheme is shown in Fig.~\\ref{fig:m1}. Four subevents labelled as S, A, B, C, with at least 1 unit $\\eta$ gap between any pair except between S and A, are used in the analysis. Note that particles in $-6<\\eta<-2$ are divided randomly into two equal halves, labelled as S and A, respectively. The particles in subevent S are used only for event shape selection, and they are excluded from the $v_n$ and event-plane correlation analysis. 
This choice of subevents and analysis scheme ensures that the event shape selection does not introduce non-physical correlations between S and A, B or C.\n\n\nThe flow vector in each subevent is calculated as:\n\\begin{eqnarray}\n\\nonumber\n&&\\overrightharp{q}_n =(q_{x,n},q_{y,n}) = \\frac{1}{\\Sigma_i w_i}\\left(\\textstyle\\Sigma_i (w_i\\cos n\\phi_i), \\Sigma_i (w_i\\sin n\\phi_i)\\right)\\;, \\\\\\label{eq:me1}\n&&\\tan n\\Psi_n = \\frac{q_{y,n}}{q_{x,n}}\\;,\n\\end{eqnarray}\nwhere the weight $w_i$ is chosen as the $\\pT$ of the $i$-th particle and $\\Psi_n$ is the measured event plane. Due to finite number effects, $\\Psi_n$ smears around the true event-plane angle $\\Phi_n$. Hence $q_n$ represents the weighted raw flow coefficients $v_n^{\\mathrm{obs}}$, $q_n=\\Sigma_i \\left(w_i (v_n^{\\mathrm{obs}})_i\\right)\\/\\Sigma_i w_i$. In this study, each subevent in Fig.~\\ref{fig:m1} has 1400-3000 particles, so $q_n$ is expected to closely follow the true $v_n$.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{dets}\n\\caption{\\label{fig:m1} (Color online) The $\\eta$ range of the subevent for the event shape selection (S) and three other subevents for correlation analysis (A, B and C). Note that the particles in $-6<\\eta<-2$ are divided randomly and equally into subevents A and S.}\n\\end{figure}\n\nFor each generated event, the following quantities are calculated for $n=$1--6: $(\\epsilon_n, \\Phi^*_n)$ from the initial state, $(q_n^{\\mathrm{A}}, \\Psi^{\\mathrm{A}}_n)$ for subevent A, $(q_n^{\\mathrm{B}}, \\Psi^{\\mathrm{B}}_n)$ for subevent B, $(q_n^{\\mathrm{C}}, \\Psi^{\\mathrm{C}}_n)$ for subevent C, and $(q_n^{\\mathrm{S}}, \\Psi^{\\mathrm{S}}_n)$ for subevent S, a total of 60 quantities. The event shape selection is performed by dividing the generated events into 10 bins in $q_2^{\\mathrm{S}}$ or $q_3^{\\mathrm{S}}$ with equal statistics. 
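The flow-vector calculation and the equal-statistics event-shape binning described above can be sketched as follows (our own toy implementation; the input is a synthetic subevent with $v_2=0.1$ and unit weights, not AMPT output):

```python
import math
import random

def flow_vector(phis, weights, n):
    """Weighted flow vector magnitude q_n and event plane Psi_n."""
    wsum = sum(weights)
    qx = sum(w * math.cos(n * p) for p, w in zip(phis, weights)) / wsum
    qy = sum(w * math.sin(n * p) for p, w in zip(phis, weights)) / wsum
    return math.hypot(qx, qy), math.atan2(qy, qx) / n

# Hypothetical subevent with v_2 = 0.1 and a known plane at Psi = 0.3,
# sampled from dN/dphi ~ 1 + 2 v_2 cos(2(phi - Psi)) by accept-reject.
random.seed(3)
psi_true, v2 = 0.3, 0.10
phis = []
while len(phis) < 2000:
    phi = random.uniform(0, 2 * math.pi)
    if random.uniform(0, 1 + 2 * v2) < 1 + 2 * v2 * math.cos(2 * (phi - psi_true)):
        phis.append(phi)
weights = [1.0] * len(phis)   # unit weights here instead of pT, for simplicity
q2, psi2 = flow_vector(phis, weights, 2)

# 10 event-shape bins with equal statistics, by sorting per-event q2 values:
q2_values = sorted(random.random() for _ in range(1000))
bins = [q2_values[i * 100:(i + 1) * 100] for i in range(10)]
```

The recovered $q_2$ fluctuates around the input $v_2$, and sorting the per-event values before slicing guarantees equal statistics in each of the 10 bins.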
A similar event shape selection procedure is also performed by slicing the values of $\\epsilon_2$ or $\\epsilon_3$ directly, with the aim of studying how well the physics for events selected in the final state correlates with that for events selected purely on the initial geometry.\n\nFigure~\\ref{fig:m2} shows the performance of the event shape selection on $q_2^{\\mathrm{S}}$ and $q_3^{\\mathrm{S}}$. Strong positive correlations between $\\epsilon_n$ and $q_n^{\\mathrm{S}}$ seen in the top panels reflect the fact that the collective response is linear for $n=2$ and 3~\\cite{Qiu:2012uy}. The bottom panels show that events selected with the top 10\\% of the $q_2^{\\mathrm{S}}$ have a $\\left\\langle\\epsilon_2\\right\\rangle$ value that is nearly 3 times that for events with the lower 10\\% of $q_2^{\\mathrm{S}}$. For $n=3$ the difference in $\\epsilon_3$ in the two event classes is about a factor of 2. These results suggest that the ellipticity and triangularity of the initial geometry can be selected precisely by slicing the flow vector in the final state. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{cqe3paper_cent8_m1}\n\\caption{\\label{fig:m2} (Color online) Correlations between $\\epsilon_n$ and magnitude of flow vector $q_n^{\\mathrm{S}}$ calculated using half of the particles in $-6<\\eta<-2$ (top panels), the 10 bins in $q_n^{\\mathrm{S}}$ with equal statistics (middle panels) and the corresponding distributions of $\\epsilon_n$ for events in the top 10\\%, bottom 10\\% and total of $q_n^{\\mathrm{S}}$ (bottom panels). The results are calculated for $b=8$~fm for $n=2$ and $n=3$, and are shown in the left and right column, respectively.}\n\\end{figure}\n\nIn the event shape selection method, $p(v_m,v_n)$ is not directly calculated. 
Instead, the calculated correlation is:\n\\begin{eqnarray}\n\\label{eq:m2}\np(q_m^{\\mathrm{S}},v_n^{\\mathrm{obs}}) = p(q_m^{\\mathrm{S}})\\times p(v_n^{\\mathrm{obs}})_{q_m^{\\mathrm{S}}},\\;\\; m=2,3\n\\end{eqnarray}\nwhere the conditional probability $p(v_n^{\\mathrm{obs}})_{q_m^{\\mathrm{S}}}$ represents the distribution of $v_n^{\\mathrm{obs}}$ for events selected with a given $q_m^{\\mathrm{S}}$ value. To minimize non-flow effects, $v_n^{\\mathrm{obs}}$ is calculated for particles separated in $\\eta$ from the subevent that provides the event plane, i.e. an $\\eta$ gap from the corresponding event plane is required in each case. The probability $p(v_n)_{q_m^{\\mathrm{S}}}$ can be obtained from $p(v_n^{\\mathrm{obs}})_{q_m^{\\mathrm{S}}}$ via the unfolding technique~\\cite{Aad:2013xma,Jia:2013tja}, or, if one is interested in the event-averaged $v_n$ values, the standard method~\\cite{Poskanzer:1998yz} can be applied for each $q_m^{\\mathrm{S}}$ bin:\n\\begin{eqnarray}\n\\label{eq:m3}\nv_n(\\pT,\\eta)_{q_m^{\\mathrm{S}}} = \\left[\\frac{v_n^{\\mathrm{obs}}(\\pT,\\eta)}{\\mathrm{Res}\\{ n\\Psi_n \\} }\\right]_{q_m^{\\mathrm{S}}},\n\\end{eqnarray}\nwhere the event-plane resolution factor $\\mathrm{Res}\\{ n\\Psi_n \\}$ is calculated separately for A, B, and C via the three-subevent method, providing three independent $v_n$ estimates~\\cite{Poskanzer:1998yz}. Since the magnitude and direction of the flow vector are uncorrelated, the event shape selection is not expected to introduce biases in the resolution correction. 
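The per-bin standard method of Eq.~\ref{eq:m3}, with the resolution factor obtained from the standard three-subevent expression, can be sketched as follows (an illustrative Python sketch; names are assumptions):

```python
import numpy as np

def three_subevent_resolution(psi_a, psi_b, psi_c, n):
    """Res{n Psi_n^A} = sqrt(<cos n(Psi^A-Psi^B)> <cos n(Psi^A-Psi^C)> / <cos n(Psi^B-Psi^C)>)."""
    c_ab = np.mean(np.cos(n * (psi_a - psi_b)))
    c_ac = np.mean(np.cos(n * (psi_a - psi_c)))
    c_bc = np.mean(np.cos(n * (psi_b - psi_c)))
    return np.sqrt(c_ab * c_ac / c_bc)

def vn_standard_method(phi, psi_ep, n, resolution):
    """Eq. (3): event-averaged v_n = <cos n(phi - Psi_n)> / Res{n Psi_n},
    evaluated for the particles inside one q_m^S bin."""
    vn_obs = np.mean(np.cos(n * (phi - psi_ep)))
    return vn_obs / resolution
```

With perfectly aligned subevent planes the resolution factor is 1 and the corrected $v_n$ equals the raw $v_n^{\mathrm{obs}}$.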
One special case of Eq.~\\ref{eq:m3} is $n=m$, which probes the rapidity fluctuation of $v_n$ itself (see Section~\\ref{sec:3}).\n\nTo calculate the event-plane correlation for each $q_m^{\\mathrm{S}}$ bin, the standard method based on event-plane correlations introduced by the ATLAS collaboration~\\cite{Jia:2012sa,Jia:2012ma} and the method based on scalar products in Refs.~\\cite{Luzum:2012da,Bhalerao:2013ina} are adopted:\n\\begin{eqnarray} \n\\nonumber\n\\langle\\cos (\\Sigma \\Phi)\\rangle &=& \\frac{\\langle\\cos (\\Sigma \\Psi)\\rangle} {\\mathrm{Res}\\{c_1\\Psi_1\\}\\mathrm{Res}\\{c_22\\Psi_2\\}...\\mathrm{Res}\\{c_ll\\Psi_l\\}}\\\\\\label{eq:m4a}\n\\\\\\nonumber\n\\langle\\cos (\\Sigma \\Phi)\\rangle_w &=& \\frac{\\langle q_1^{c_1} q_2^{c_2}... q_l^{c_l}\\cos (\\Sigma \\Psi)\\rangle} {\\mathrm{Res}\\{c_1\\Psi_1\\}_w\\mathrm{Res}\\{c_22\\Psi_2\\}_w...\\mathrm{Res}\\{c_ll\\Psi_l\\}_w}\\\\\\label{eq:m4}\n\\end{eqnarray}\nwhere the shorthand notations $\\Sigma \\Phi = c_1\\Phi_{1}+2c_2\\Phi_{2}+...+lc_l\\Phi_{l}$ and $\\Sigma \\Psi = c_1\\Psi_{1}+2c_2\\Psi_{2}+...+lc_l\\Psi_{l}$ are used. They are referred to as the EP method (Eq.~\\ref{eq:m4a}) and the SP method (Eq.~\\ref{eq:m4}) in the rest of this paper. 
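As a concrete special case of the EP method in Eq.~\ref{eq:m4a}, a two-plane correlator $\langle\cos k(\Phi_n-\Phi_m)\rangle$ can be estimated as below (an illustrative Python sketch; the resolution factors are assumed to be precomputed, e.g. with the three-subevent method):

```python
import numpy as np

def two_plane_correlator_ep(psi_n, psi_m, k, res_n, res_m):
    """EP-method estimate of <cos k(Phi_n - Phi_m)>: the raw event-plane
    correlation divided by the two resolution factors (two-plane case of Eq. 4a)."""
    raw = np.mean(np.cos(k * (psi_n - psi_m)))
    return raw / (res_n * res_m)
```

With perfectly resolved planes (resolution factors of 1) and fully aligned angles the correlator evaluates to 1, its maximal value.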
The resolution factors $\\mathrm{Res}\\{c_nn\\Psi_n\\}$ and $\\mathrm{Res}\\{c_nn\\Psi_n\\}_w$ are calculated via the three-subevent method involving subevents A, B and C:\n\\small{\n \\begin{eqnarray}\n \\label{eq:m5a}\n &&{\\mathrm{Res}}\\{jn\\Psi^{\\mathrm A}_{n}\\}=\\sqrt{\\frac{\\left\\langle {\\cos\\Delta\\Psi^{AB}_n} \\right\\rangle\\left\\langle {\\cos \\Delta\\Psi^{AC}_n}\\right\\rangle}{\\left\\langle {\\cos \\Delta\\Psi^{BC}_n} \\right\\rangle}}.\\\\\\nonumber\n&&{\\mathrm{Res}}\\{jn\\Psi^{\\mathrm A}_{n}\\}_w= \\sqrt{\\frac{\\left\\langle { (q_n^{\\mathrm A}q_n^{\\mathrm B})^j \\cos \\Delta\\Psi^{AB}_n} \\right\\rangle \\left\\langle { (q_n^{\\mathrm A}q_n^{\\mathrm C})^j\\cos \\Delta\\Psi^{AC}_n }\\right\\rangle}{\\left\\langle { (q_n^{\\mathrm B}q_n^{\\mathrm C})^j\\cos \\Delta\\Psi^{BC}_n} \\right\\rangle}}.\\\\\\label{eq:m5}\n \\end{eqnarray}}\\normalsize\nwhere $\\Delta\\Psi^{AB}_n = jn \\left(\\Psi_n^{\\mathrm A} - \\Psi_n^{\\mathrm B}\\right)$ etc. Each $\\Psi_n$ angle in Eq.~\\ref{eq:m4} is calculated in a separate subevent to avoid auto-correlations. The two subevents involved in a two-plane correlation are chosen as A and C in Fig.~\\ref{fig:m1}, while the three subevents in a three-plane correlation are chosen as A, B, and C in Fig.~\\ref{fig:m1}. Note that selecting on $q_n^{\\mathrm{S}}$ explicitly breaks the symmetry between subevents A and C even though they still have symmetric $\\eta$ acceptance. Thus their resolution factors are different and need to be calculated separately.\n\n\\section{Correlations in the initial state}\n\\label{sec:2}\nBefore discussing correlations in the final state, it is instructive to look first at how the initial geometry variables $\\epsilon_n, \\Phi^*_n$ and their correlations vary with the event shape selection. Figure~\\ref{fig:i1} shows the correlations between pairs of $\\epsilon_n$ for $n\\leq4$ for the generated AMPT events. 
Significant correlations are observed between $\\epsilon_2$ and $\\epsilon_3$~\\cite{Lacey:2013eia,ATLAS2014-022}, and between $\\epsilon_1$ and $\\epsilon_3$. The correlations between $\\epsilon_1$ and $\\epsilon_2$ are weak for this impact parameter but become more significant for $b=10$~fm (see Appendix~\\ref{sec:7}). Since the hydrodynamic response is nearly linear for $n=1$--$3$~\\cite{Qiu:2011iv}, these correlations are expected to survive into correlations between the $v_n$ of the respective orders. The correlation between $\\epsilon_2$ and $\\epsilon_4$ is also significant, especially for large $\\epsilon_2$ values; this correlation may survive to the final state, but it competes with the non-linear effects expected for $v_4$~\\cite{Gardim:2011xv}. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{ceepaperb_cent8}\n\\caption{\\label{fig:i1} (Color online) Selected correlations between $\\epsilon_n$ of different orders for Pb+Pb events at $b=8$~fm. More examples are given in Appendix~\\ref{sec:7}. The $x$- and $y$-profiles of the 2D correlations are represented by the solid symbols.}\n\\end{figure}\n\nFigure~\\ref{fig:i2} shows selected correlations between $\\Phi_n^*$ of different orders for events binned in $\\epsilon_2$ (boxes) or $q_2^{\\mathrm{S}}$ (circles)~\\cite{Jia:2012ma,Jia:2012ju}. It is clear that the correlation signal varies dramatically with $\\epsilon_2$, implying that the correlations between the $\\Phi_n^*$'s can vary a lot for events with the same impact parameter. Figure~\\ref{fig:i2} also shows that events with different correlations in the initial geometry can be selected with nearly the same precision using $q_2^{\\mathrm{S}}$ as using $\\epsilon_2$.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{cephilonga_cent8}\n\\caption{\\label{fig:i2} (Color online) Dependence of the participant-plane correlations on $\\epsilon_2$ (boxes) or $q_2^{\\mathrm{S}}$ (circles), calculated for Pb+Pb events with $b=8$~fm. 
Results for three two-plane, three three-plane and two four-plane correlators proposed in Refs.~\\cite{Jia:2012ma,Jia:2012ju} are shown.}\n\\end{figure}\n\n\n\n\\section{The correlation between $v_n$ and $v_m$ and longitudinal fluctuations}\n\\label{sec:3}\nFigure~\\ref{fig:eta1} shows the $v_2(\\eta)$ values for events selected with the lower 10\\% (top panels) and the upper 10\\% (bottom panels) of the values of either $q_2^{\\mathrm{S}}$ (left panels) or $\\epsilon_2$ (right panels). They are calculated via Eqs.~\\ref{eq:m3} and \\ref{eq:m5a} using all final state particles with $0.1<\\pT<5$ GeV, excluding those particles used in the event shape selection (i.e. subevent S). The event-plane angles are calculated separately for the three subevents A, B, and C, and a minimum $\\eta$ gap of 1--2 units is required between $v_n(\\eta)$ and the subevent used to calculate the event plane. Specifically, the $v_2$ values in $-6<\\eta<0$ are obtained using the EP angle in subevent C covering $2<\\eta<6$ (open boxes), the $v_2$ values in $0<\\eta<6$ are obtained using the EP angle in subevent A covering $-6<\\eta<-2$ (open circles), and the $v_2$ values in $|\\eta|>2$ are also obtained using the EP angle in subevent B covering $|\\eta|<1$ (solid circles).\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{cvnbpaper_cent8_pt0_har1_m1}\n\\caption{\\label{fig:eta1} (Color online) $v_2(\\eta)$ for events selected with the lower 10\\% (top panels) and upper 10\\% (bottom panels) of the values of either $q_2^{\\mathrm{S}}$ (left panels) or $\\epsilon_2$ (right panels) for AMPT Pb+Pb events with $b=8$~fm. 
In each case, the integral $v_2$ calculated for particles in $0.1<\\pT<5$ GeV relative to the event planes of subevents A, B and C (their $\\eta$ coverages are indicated in the legend), with a minimum $\\eta$ gap of 1 unit, is shown.}\n\\end{figure}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{cvncombpaper_cent8_pt0_m1}\n\\caption{\\label{fig:eta2} (Color online) $v_n(\\eta)$ for events selected with the lower 10\\% (open symbols) and upper 10\\% (solid symbols) of the values of $q_2^{\\mathrm{S}}$ for AMPT Pb+Pb events with $b=8$~fm. Results are shown for $v_2(\\eta)$, $v_3(\\eta)$,..., and $v_6(\\eta)$ from the left panel to the right panel. The ratios of $v_n(\\eta)$ for events with $q_2^{\\mathrm{S}}$ selection to the inclusive events are shown in the bottom panels.}\n\\end{figure*}\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1\\linewidth]{cvnQdeppaper_cent8_pt0_eta1_m1}\n\\caption{\\label{fig:eta3} (Color online) Dependence of the $v_n$ on $q_2^{\\mathrm{S}}$ in the forward ($-5<\\eta<-4$) and backward ($4<\\eta<5$) pseudorapidity ranges.}\n\\end{figure*}\n\nThere are several interesting features in the observed $\\eta$ dependence of $v_2$. The $v_2(\\eta)$ values for events selected with the lower 10\\% values of $q_2^{\\mathrm{S}}$ or $\\epsilon_2$ are significantly lower (by a factor of 3) than for events selected with the upper 10\\%, indicating that the $v_2$ signal correlates well with both $q_2^{\\mathrm{S}}$ and $\\epsilon_2$. Furthermore, a significant forward\/backward asymmetry of $v_2(\\eta)$ is observed for events selected on $q_2^{\\mathrm{S}}$ but not on $\\epsilon_2$. This asymmetry is already observed outside the $\\eta$ range covered by subevent S, but grows towards larger $|\\eta|$. It may reflect the dynamical fluctuations exposed by the $q_2^{\\mathrm{S}}$ selection. 
Additional cross-checks performed by choosing subevent S in a more restricted $\\eta$ range show a similar asymmetry (see Fig.~\\ref{fig:a0} in the Appendix).\n\n\nBased on the good agreement between the three $v_2$ estimations in Fig.~\\ref{fig:eta1}, they are combined into a single $v_2(\\eta)$ result. Good agreement is also observed for the higher harmonics, hence they are combined in the same way. The resulting $v_2(\\eta)$--$v_6(\\eta)$ are shown in Fig.~\\ref{fig:eta2} for events with the lower 10\\% and upper 10\\% of the values of $q_2^{\\mathrm{S}}$. The asymmetry of $v_n$ in $\\eta$ is much weaker for the higher-order harmonics. The values of $v_n(\\eta)$ for $n>3$ are also seen to be positively correlated with $v_2$, $i.e.$ events with large $q_2^{\\mathrm{S}}$ also have larger $v_n(\\eta)$. On the other hand, the $v_3$ values are observed to decrease with increasing $q_2^{\\mathrm{S}}$. This decrease reflects the anti-correlation between $\\epsilon_2$ and $\\epsilon_3$ in Fig.~\\ref{fig:i1} (also confirmed by ATLAS data~\\cite{ATLAS2014-022}). Figure~\\ref{fig:eta3} quantifies the forward\/backward asymmetry of $v_2$--$v_6$ in two $\\eta$ ranges: $-5<\\eta<-4$ and $4<\\eta<5$. A clear asymmetry can be seen for $v_2$, $v_4$ and $v_5$, but not for $v_3$. This behavior reinforces our earlier conclusion that the correlation between $v_2$ and $v_3$ in the AMPT model is mostly geometrical, $i.e.$ reflecting the correlation between $\\epsilon_2$ and $\\epsilon_3$.\n\nAn identical analysis is also performed for events selected on $q_3^{\\mathrm{S}}$ or $\\epsilon_3$. The $v_n(\\eta)$ for events with the upper 10\\% and lower 10\\% values of $q_3^{\\mathrm{S}}$ or $\\epsilon_3$ are shown in Fig.~\\ref{fig:eta1b}. A strong $\\eta$ asymmetry is observed as a result of the $q_3^{\\mathrm{S}}$ selection, but not for the $\\epsilon_3$ selection. Nevertheless, the overall magnitude of $v_3$ is similar between the two selections. 
In the $-6<\\eta<-2$ range where $q_3^{\\mathrm{S}}$ is calculated, the $v_3(\\eta)$ values for events with the lower 10\\% of $q_3^{\\mathrm{S}}$ drop below zero. This implies that the $\\Phi_3$ angle in the large negative $\\eta$ region becomes out of phase with the $\\Phi_3$ angle in the large positive $\\eta$ region. This $\\Phi_3$ angle decorrelation is also observed for events selected with the lower 10\\% of $\\epsilon_3$ values, as shown in the top-right panel of Fig.~\\ref{fig:eta1b}. This behavior suggests that in the AMPT model, the rapidity decorrelation of $v_3$ is stronger for events with small $\\epsilon_3$ and grows towards large $|\\eta|$ (negative $v_3$ implies its phase is opposite to that in the $\\eta$ region used to obtain the event plane). An earlier study~\\cite{Xiao:2012uw} has shown evidence of $\\eta$ decorrelation of $v_3$ in AMPT; our later studies, published in separate papers, trace this decorrelation to the independent fluctuations of the $\\epsilon_n$ for the projectile nucleus and the $\\epsilon_n$ for the target nucleus~\\cite{Jia:2014vja,Jia:2014ysa}.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{Q3cvnbpaper_cent8_pt0_har2_m1}\n\\caption{\\label{fig:eta1b} (Color online) The $v_3(\\eta)$ for events selected with the lower 10\\% (top panels) and upper 10\\% (bottom panels) of the values of either $q_3^{\\mathrm{S}}$ (left panels) or $\\epsilon_3$ (right panels) for AMPT Pb+Pb events with $b=8$~fm. In each case, the integral $v_3$ calculated for particles in $0.1<\\pT<5$ GeV relative to the event planes of subevents A, B and C (their $\\eta$ coverages are indicated in the legend), with a minimum $\\eta$ gap of 1 unit, is shown. }\n\\end{figure}\n\n Figure~\\ref{fig:eta3b} quantifies the rapidity asymmetry of $v_n$ between $-5<\\eta<-4$ and $4<\\eta<5$ as a function of $q_3^{\\mathrm{S}}$ and $\\epsilon_3$. The even harmonics $v_2$ and $v_4$ show little asymmetry and are nearly independent of $q_3^{\\mathrm{S}}$. 
In contrast, the $v_5$ values show a strong $\\eta$-asymmetry similar to that for $v_3$. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{Q3cvnQdeppaperb_cent8_pt0_eta1_m1}\n\\caption{\\label{fig:eta3b} (Color online) Dependence of the $v_n$ on $q_3^{\\mathrm{S}}$ in the forward ($-5<\\eta<-4$) and backward ($4<\\eta<5$) pseudorapidity ranges.}\n\\end{figure}\n\nFigure~\\ref{fig:eta5} shows the particle multiplicity distributions $dN\/d\\eta$ for events selected on $q_2^{\\mathrm{S}}$ (left) or $q_3^{\\mathrm{S}}$ (right). The distributions remain largely symmetric in $\\eta$ and the overall magnitude is nearly independent of the event selection. We also verified explicitly that the numbers of participating nucleons for the projectile and target are nearly equal for all $q_2^{\\mathrm{S}}$ or $q_3^{\\mathrm{S}}$ bins. This suggests that the underlying mechanism is not due to the EbyE fluctuations of the $dN\/d\\eta$ distribution.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{cvnNpaper_cent8_pt0_m0}\\includegraphics[width=0.5\\linewidth]{Q3cvnNpaper_cent8_pt0_m0}\n\\caption{\\label{fig:eta5} (Color online) The $dN\/d\\eta$ distributions of all particles for events selected on $q_2^{\\mathrm{S}}$ (left) and $q_3^{\\mathrm{S}}$ (right).}\n\\end{figure}\n\n\n\\section{Event-plane correlations}\n\\label{sec:4}\nThe AMPT model has been shown to reproduce~\\cite{Bhalerao:2013ina} the centrality dependence of various two-plane and three-plane correlations measured by the ATLAS Collaboration~\\cite{Jia:2012sa}. Here we use the AMPT model to study how these correlators change with $q_n^{\\mathrm{S}}$ or $\\epsilon_n$. In this analysis, the two-plane correlators $\\langle\\cos k(\\Phi_{n}-\\Phi_{m})\\rangle$ are calculated by correlating the EP angles from subevent A and subevent C. 
Each subevent provides its own estimation of the EPs, leading to two statistically independent estimates of the correlator: Type1 $\\langle\\cos k(\\Phi_{n}^{\\mathrm{A}}-\\Phi_{m}^{\\mathrm{C}})\\rangle$ and Type2 $\\langle\\cos k(\\Phi_{n}^{\\mathrm{C}}-\\Phi_{m}^{\\mathrm{A}})\\rangle$. The two estimates are identical for events selected on $\\epsilon_2$, and hence they are averaged to obtain the final result. But for events selected based on $q_2^{\\mathrm{S}}$, the two estimates can differ quite significantly. \n\nFigure~\\ref{fig:ep1} shows the values of four two-plane correlators in bins of $q_2^{\\mathrm{S}}$ or $\\epsilon_2$. The values of the correlators are observed to increase strongly with increasing $q_2^{\\mathrm{S}}$ or $\\epsilon_2$. The two estimates based on $q_2^{\\mathrm{S}}$ selection differ significantly, reflecting the influence of longitudinal flow fluctuations exposed by the $q_2^{\\mathrm{S}}$ selection. Interestingly, the correlators whose $\\Phi_2$ angle is calculated in subevent C agree very well with those based on $\\epsilon_2$ event shape selection, such as $\\langle\\cos 4(\\Phi_{2}^{\\mathrm{C}}-\\Phi_{4}^{\\mathrm{A}})\\rangle$. This is because $\\Phi_{2}^{\\mathrm{C}}$ is expected to be less dependent on the $q_2^{\\mathrm{S}}$ selection than $\\Phi_{2}^{\\mathrm{A}}$ (see Fig.~\\ref{fig:eta3}(a)). These observations suggest that the dependence of $\\langle\\cos 4(\\Phi_{2}^{\\mathrm{C}}-\\Phi_{4}^{\\mathrm{A}})\\rangle$ and $\\langle\\cos 6(\\Phi_{2}^{\\mathrm{C}}-\\Phi_{6}^{\\mathrm{A}})\\rangle$ on $q_2^{\\mathrm{S}}$ reflects mainly the change in the initial geometry and the ensuing non-linear effects in the final state. 
Note that the last bin in each panel represents the value obtained without event shape selection, which agrees between the three calculations by construction.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{c2pcomp2paperw_cent8_type_m1}\n\\caption{\\label{fig:ep1} (Color online) The four two-plane correlations as a function of bins of $q_2^{\\mathrm{S}}$ or $\\epsilon_2$ for AMPT Pb+Pb events with $b=8$~fm. The two event planes $\\Phi_n$ and $\\Phi_m$ are measured by subevent A and subevent C. The $q_2^{\\mathrm{S}}$-binned results are presented separately for the two combinations: $\\langle\\cos k(\\Phi_{n}^{\\mathrm{A}}-\\Phi_{m}^{\\mathrm{C}})\\rangle$ (open circles) and $\\langle\\cos k(\\Phi_{n}^{\\mathrm{C}}-\\Phi_{m}^{\\mathrm{A}})\\rangle$ (solid circles).}\n\\end{figure}\n\nFigure~\\ref{fig:ep2} compares various two-plane correlators calculated via the EP method and the SP method given by Eqs.~\\ref{eq:m4a}-\\ref{eq:m5}. The SP method is observed to give systematically higher values for Type1 correlators where the first angle is measured by subevent A, while it gives consistent or slightly lower values for Type2 correlators. The last bin in each panel shows the result obtained without event shape selection, where the values from the SP method are always higher, as expected~\\cite{Bhalerao:2013ina}. \n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{c2pcomp3paper_cent8_type_m1}\n\\caption{\\label{fig:ep2} (Color online) The four two-plane correlators in bins of $q_2^{\\mathrm{S}}$ for AMPT Pb+Pb events with $b=8$~fm, shown for two different combinations of the event-plane angles (solid symbols), and compared with the correlations calculated via the scalar product method (open symbols).}\n\\end{figure}\n\nFigure~\\ref{fig:ep1b} shows $\\langle\\cos 6(\\Phi_{2}-\\Phi_{3})\\rangle$ and $\\langle\\cos 6(\\Phi_{3}-\\Phi_{6})\\rangle$ in bins of $q_3^{\\mathrm{S}}$ or $\\epsilon_3$. 
The first correlator shows little dependence on $q_3^{\\mathrm{S}}$ or $\\epsilon_3$, while the second correlator does. This is in sharp contrast to the results seen in Fig.~\\ref{fig:ep1}, where both correlators show strong but opposite dependence on $q_2^{\\mathrm{S}}$ or $\\epsilon_2$. This behavior is consistent with a strong coupling between $v_6$ and $v_2$, and between $v_6$ and $v_3$, but a weak coupling between $v_2$ and $v_3$.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=1\\linewidth]{Q3c2pcomp2paperw_cent8_type_m1}\n\\caption{\\label{fig:ep1b} (Color online) Two two-plane correlators as a function of either $q_3^{\\mathrm{S}}$ or $\\epsilon_3$ for AMPT Pb+Pb events with $b=8$~fm. The event planes $\\Phi_n$ and $\\Phi_m$ are measured by subevent A and subevent C. The $q_3^{\\mathrm{S}}$-binned results are presented separately for the two estimates: $\\langle\\cos k(\\Phi_{n}^{\\mathrm{A}}-\\Phi_{m}^{\\mathrm{C}})\\rangle$ (open circles) and $\\langle\\cos k(\\Phi_{n}^{\\mathrm{C}}-\\Phi_{m}^{\\mathrm{A}})\\rangle$ (solid circles).}\n\\end{figure}\n\nTo calculate three-plane correlations, subevents A, B and C are used. Each subevent provides its own estimation of the three EP angles, and hence there are $3!=6$ independent ways of estimating a given three-plane correlator. For $c_nn\\Phi_{n}+c_mm\\Phi_{m}+c_ll\\Phi_{l}$ with $n=\\ldots$\n\\begin{equation}\n \\left<\\frac{\\sqrt{\\mathbf{x}}}{\\|\\sqrt{\\mathbf{x}}\\|_2},\\frac{\\sqrt{\\mathbf{y}}}{\\|\\sqrt{\\mathbf{y}}\\|_2}\\right> = \\left<\\sqrt{\\frac{\\mathbf{x}}{\\|\\mathbf{x}\\|_1}},\\sqrt{\\frac{\\mathbf{y}}{\\|\\mathbf{y}\\|_1}}\\right>\n \\end{equation}which means power $\\ell_2$-normalization explicitly introduces a non-linear kernel in the final classifier. \\textbf{In a word, sum pooling with power $\\ell_2$-normalization is an effective and efficient way to give a linear SVM the power of a non-linear classifier and boost the final recognition rate}.\n\\end{itemize}\n\nIn conclusion, pooling and normalization are crucial steps in the pipeline of the BoVW framework, whose importance may not have been highlighted in previous research work. 
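The kernel identity behind power $\ell_2$-normalization (with $\alpha=0.5$ and nonnegative histogram features) can be verified numerically; this is an illustrative sketch, not code from the paper:

```python
import numpy as np

def power_l2_normalize(x, alpha=0.5):
    """Power normalization sign(x)|x|^alpha followed by l2-normalization."""
    z = np.sign(x) * np.abs(x) ** alpha
    return z / np.linalg.norm(z)

# For nonnegative features, ||sqrt(x)||_2^2 = ||x||_1, so the linear kernel of
# power-l2-normalized vectors equals the non-linear (Hellinger-type) kernel
# <sqrt(x/||x||_1), sqrt(y/||y||_1)> of the raw sum-pooled histograms.
x = np.array([4.0, 1.0, 0.0, 3.0])
y = np.array([1.0, 2.0, 2.0, 0.0])
linear_kernel = power_l2_normalize(x) @ power_l2_normalize(y)
hellinger_kernel = np.sqrt(x / x.sum()) @ np.sqrt(y / y.sum())
```

The two quantities agree up to floating point, which is why a linear SVM on sum-pooled, power $\ell_2$-normalized histograms behaves like a non-linear classifier.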
Proper choice of pooling and normalization strategy may largely reduce the performance gap of different encoding methods. For sum pooling and power $\\ell_2$-normalization, which is the best combination in all these possible choices, the performances of LLC, SA-$k$, and VQ are comparable to each other for iDT features. Thus, in the remaining evaluation for fusion methods, we fix the pooling and normalization strategy as sum pooling and power $\\ell_2$-normalization.\n\n\\subsection{Exploration of Fusion Methods}\n\\begin{table*}\n \\caption{Comparison of different fusion methods for the encoding methods on the \\textbf{HMDB51} dataset.}\n \\label{tbl:hmdb}\n \\centering\n \\begin{tabular}{lcccccccccc}\n \\hline\n \\hline\n Methods & ~~~FV~~~ & ~~SVC~~ & ~SVC-$k$~ & ~SVC-$all$~ & ~VLAD~ & VLAD-$k$ & VLAD-$all$ & ~~LLC~~ & SA-$k$ & ~~~VQ~~~ \\\\\n \\hline\n \\hline\n \\multicolumn{11}{l}{\\textbf{Space Time Interest Points (STIPs)}} \\\\\n ~HOG & 22.81 & 17.76 & 21.09 & 21.87 & 18.13 & 19.87 & 20.04 & 20.46 & 18.39 & 16.10 \\\\\n ~HOF & 31.96 & 30.44 & 32.68 & 33.36 & 30.46 & 31.53 & 31.55 & 27.19 & 26.27 & 24.49\\\\\n ~d-Fusion & \\textbf{38.82} & \\textbf{35.12} & 36.64 & \\textbf{37.19} & \\textbf{34.81} & \\textbf{36.18} & \\textbf{36.23} & 29.87 & 28.13 & 25.66 \\\\\n ~r-Fusion & 37.32 & 34.36 & \\textbf{36.73} & \\textbf{37.19} & 34.23 & 35.84 & 35.88 & \\textbf{33.44} & \\textbf{32.59} & \\textbf{30.35} \\\\\n ~s-Fusion & 36.71 & 32.14 & 34.51 & 34.99 & 32.11 & 33.90 & 34.01 & 32.52 & 30.96 & 27.54 \\\\\n \\hline\n \\multicolumn{11}{l}{\\textbf{Improved Dense Trajectories (iDTs)}} \\\\\n ~HOG & 45.12 & 36.93 & 39.32 & 38.10 & 36.93 & 39.30 & 37.08 & 37.08 & 35.45 & 34.81 \\\\\n ~HOF & 50.70 & 47.70 & 49.00 & 48.00 & 47.70 & 49.00 & 45.80 & 42.20 & 42.70 & 42.10 \\\\\n ~MBHx & 44.14 & 39.35 & 43.01 & 41.68 & 39.43 & 43.03 & 41.55 & 35.51 & 35.51 & 34.6 \\\\\n ~MBHy & 50.04 & 44.25 & 47.02 & 46.51 & 44.27 & 47.02 & 44.68 & 40.39 & 40.35 & 39.78 \\\\\n ~d-Fusion 
& 58.37 & 54.12 & 56.82 & 56.86 & 54.2 & 56.88 & 54.73 & 48.25 & 48.58 & 47.93 \\\\\n ~r-Fusion & \\textbf{60.22} & \\textbf{58.19} & \\textbf{60.09} & \\textbf{60.07} & \\textbf{58.26} & \\textbf{60.09} & \\textbf{58.58} & \\textbf{55.45} & \\textbf{55.8} & \\textbf{55.27} \\\\\n ~s-Fusion & 59.62 & 57.27 & 59.11 & 58.78 & 57.14 & 59.17 & 57.54 & 53.68 & 53.94 & 53.27 \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n \\caption{Comparison of different fusion methods for the encoding methods on the \\textbf{UCF50} dataset.}\n \\label{tbl:ucf50}\n \\centering\n \\begin{tabular}{lcccccccccc}\n \\hline\n \\hline\n Methods & ~~~FV~~~ & ~~SVC~~ & ~SVC-$k$~ & ~SVC-$all$~ & ~VLAD~ & VLAD-$k$ & VLAD-$all$ & ~~LLC~~ & SA-$k$ & ~~~VQ~~~ \\\\\n \\hline\n \\hline\n \\multicolumn{11}{l}{\\textbf{Space Time Interest Points (STIPs)}} \\\\\n ~HOG & 66.20 & 60.76 & 63.98 & 63.94 & 60.22 & 62.32 & 62.22 & 60.42 & 59.11 & 56.21 \\\\\n ~HOF & 73.10 & 71.93 & 74.14 & 74.56 & 71.30 & 72.36 & 72.51 & 64.72 & 63.80 & 61.55\\\\\n ~d-Fusion & \\textbf{78.32} & \\textbf{76.33} & 77.60 & 77.59 & \\textbf{75.57} & \\textbf{76.06} & \\textbf{76.13} & 70.13 & 68.66 & 67.16 \\\\\n ~r-Fusion & 77.21 & 76.07 & \\textbf{78.42} & \\textbf{78.91} & 75.36 & 75.95 & 75.99 & \\textbf{74.05} & \\textbf{73.67} & \\textbf{71.95} \\\\\n ~s-Fusion & 76.33 & 76.19 & 77.76 & 77.25 & 73.79 & 74.91 & 74.98 & 72.95 & 71.66 & 69.16 \\\\\n \\hline\n \\multicolumn{11}{l}{\\textbf{Improved Dense Trajectories (iDTs)}} \\\\\n ~HOG & 84.39 & 78.22 & 80.29 & 79.97 & 78.19 & 80.20 & 78.33 & 72.73 & 73.76 & 74.27 \\\\\n ~HOF & 86.33 & 85.18 & 85.92 & 84.94 & 85.15 & 85.87 & 83.48 & 80.23 & 80.58 & 80.29 \\\\\n ~MBHx & 84.03 & 81.33 & 83.19 & 82.46 & 81.28 & 83.12 & 81.16 & 77.77 & 77.91 & 77.04 \\\\\n ~MBHy & 87.02 & 84.64 & 86.38 & 85.29 & 84.60 & 86.32 & 84.04 & 80.36 & 80.6 & 80.3 \\\\\n ~d-Fusion & 90.84 & 89.39 & 90.72 & 90.62 & 89.43 & 90.64 & 90.18 & 84.18 & 84.76 & 84.67 \\\\\n ~r-Fusion & 
\\textbf{92.07} & \\textbf{90.87} & \\textbf{91.89} & \\textbf{91.50} & \\textbf{90.82} & \\textbf{91.80} & \\textbf{90.56} & \\textbf{87.56} & \\textbf{87.92} & \\textbf{88.12} \\\\\n ~s-Fusion & 91.03 & 90.08 & 90.71 & 90.36 & 90.11 & 90.63 & 89.67 & 87.37 & 87.86 & 87.41 \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n \\caption{Comparison of different fusion methods for the encoding methods on the \\textbf{UCF101} dataset.}\n \\label{tbl:ucf101}\n \\centering\n \\begin{tabular}{lcccccccccc}\n \\hline\n \\hline\n Methods & ~~~FV~~~ & ~~SVC~~ & ~SVC-$k$~ & ~SVC-$all$~ & ~VLAD~ & VLAD-$k$ & VLAD-$all$ & ~~LLC~~ & SA-$k$ & ~~~VQ~~~ \\\\\n \\hline\n \\hline\n \\multicolumn{11}{l}{\\textbf{Space Time Interest Points (STIPs)}} \\\\\n ~HOG & 53.74 & 47.56 & 50.07 & 50.31 & 47.15 & 49.21 & 49.35 & 46.70 & 45.79 & 42.85 \\\\\n ~HOF & 62.89 & 60.57 & 63.81 & 64.02 & 60.04 & 61.73 & 61.60 & 54.16 & 52.78 & 50.04\\\\\n ~d-Fusion & \\textbf{69.90} & \\textbf{66.43} & 68.22 & 68.40 & \\textbf{65.42} & \\textbf{66.42} & \\textbf{66.46} & 59.52 & 57.83 & 56.09 \\\\\n ~r-Fusion & 68.21 & 65.39 & \\textbf{69.00} & \\textbf{69.18} & 65.39& 66.13 & 66.19 & \\textbf{63.04} & \\textbf{62.13} & \\textbf{59.31} \\\\\n ~s-Fusion & 66.77 & 62.50 & 65.81 & 65.98 & 62.17 & 63.97 & 64.15 & 60.94 & 59.48 & 56.69 \\\\\n \\hline\n \\multicolumn{11}{l}{\\textbf{Improved Dense Trajectories (iDTs)}} \\\\\n ~HOG & 74.79 & 69.74 & 72.14 & 72.36 & 69.66 & 71.65 & 71.39 & 65.46 & 65.81 & 65.40 \\\\\n ~HOF & 78.63 & 76.26 & 77.70 & 77.12 & 76.28 & 77.76 & 76.35 & 71.03 & 71.14 & 70.57 \\\\\n ~MBHx & 76.82 & 71.63 & 74.24 & 73.92 & 71.62 & 74.11 & 71.84 & 67.00 & 67.55 & 66.43 \\\\\n ~MBHy & 79.15 & 74.53 & 77.46 & 76.82 & 74.54 & 76.78 & 74.21 & 69.6 & 69.67 & 68.50 \\\\\n ~d-Fusion & 85.32 & 83.36 & 85.19 & 85.17 & 83.39 & 85.14 & 85.45 & 77.65 & 77.96 & 76.76 \\\\\n ~r-Fusion & \\textbf{87.11} & \\textbf{84.87} & \\textbf{86.54} & \\textbf{86.19 }& \\textbf{84.90} & 
\\textbf{86.16} & \\textbf{85.59} & \\textbf{81.43} & \\textbf{81.65} & \\textbf{81.37} \\\\\n ~s-Fusion & 85.49 & 83.34 & 84.84 & 84.57 & 83.29 & 85.04 & 83.83 & 80.11 & 80.39 & 79.81 \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table*}\n\nThe local features usually have multiple descriptors, such as HOG, HOF, MBHx, and MBHy, each of which corresponds to a specific view of video data. For the empirical study in the previous section, we chose a simple method to combine these multiple descriptors, where we just concatenate them into a single one, namely descriptor level fusion. In this section, we mainly analyze the influence of different fusion methods on the final recognition performance.\n\nFor encoding methods, we choose the same ten approaches as in the previous section. The codebook size of super vector based methods is set as $512$ and that of the other encoding methods as $8,000$. For pooling and normalization methods, we use sum pooling and power $\\ell_2$-normalization, according to the observations in Section \\ref{sec:poolingnorm}. We also use intra normalization for super vector based encoding methods of iDT features. For fusion methods, we evaluate three kinds of methods, namely descriptor level fusion, representation level fusion, and score level fusion, as described in Section \\ref{sec:fusion}. 
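The three fusion strategies compared here can be sketched as follows (an illustrative Python sketch; the geometric-mean score combination assumes positive, probability-like SVM scores, which is an assumption of this sketch):

```python
import numpy as np

def descriptor_level_fusion(descs):
    """Concatenate per-cuboid descriptors (e.g. HOG, HOF) before codebook training."""
    return np.concatenate(descs)

def representation_level_fusion(reps):
    """Encode each descriptor separately, then concatenate the l2-normalized
    video-level representations into one vector."""
    return np.concatenate([r / np.linalg.norm(r) for r in reps])

def score_level_fusion(scores):
    """Combine the scores of per-descriptor SVMs with a geometric mean (scores > 0)."""
    scores = np.asarray(scores, dtype=float)
    return np.exp(np.mean(np.log(scores), axis=0))
```

Representation level fusion keeps a separate codebook per descriptor, which is why its final dimension grows with the number of descriptors for reconstruction and voting based encodings.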
For score level fusion, we use the geometric mean to combine the scores from multiple SVMs.\n\n\\begin{table*}\n \\caption{Comparison of our hybrid representation with the state-of-the-art methods.}\n \\label{tbl:cmp}\n \\centering\n \\begin{tabular}{lcc|lcc|lcc}\n \\hline\n \\hline\n HMDB51 & ~Year~ & \\% & ~UCF50 & ~Year~ & \\% & ~UCF101 & ~Year~ &\\% \\\\\n \\hline\n \\hline\n Kuehne \\emph{et al.} \\cite{KuehneJGPS11} & 2011 & 23.0 & Sadanand \\emph{et al.} \\cite{SadanandC12} & 2012 & 57.9 & Soomro \\emph{et al.} \\cite{SOOMRO12} & ~2012~ & 43.9 \\\\\n Sadanand \\emph{et al.} \\cite{SadanandC12} & 2012 & 26.9 & Kliper-Gross \\emph{et al.} \\cite{Kliper-GrossGHW12} & 2012 & 72.7 & Karpathy \\emph{et al.} \\cite{Karpathy14} & 2014 & 63.3 \\\\\n Kliper-Gross \\emph{et al.} \\cite{Kliper-GrossGHW12} & 2012 & 29.2 & Solmaz \\emph{et al.} \\cite{SolmazAS13} & 2012 & 73.7 & Cai \\emph{et al.} \\cite{CaiWPQ14} & 2014 & 83.5 \\\\\n Jiang \\emph{et al.} \\cite{JiangDXLN12} & 2012 & 40.7 & Reddy \\emph{et al.} \\cite{ReddyS13} & 2012 & 76.9 & Wu \\emph{et al.} \\cite{WuZL14} & 2014 & 84.2 \\\\\n Wang \\emph{et al.} \\cite{WangQT13} & 2013 & 42.1 & Wang \\emph{et al.} \\cite{WangQT13} & 2013 & 78.4 & Peng \\emph{et al.} \\cite{PengWCQP13} & 2013 & 84.2 \\\\\n Wang \\emph{et al.} \\cite{WangKSL13} & 2013 & 46.6 & Wang \\emph{et al.} \\cite{WangKSL13} & 2013 & 84.5 & Murthy \\emph{et al.} \\cite{Murthy13} & 2013 & 85.4 \\\\\n Peng \\emph{et al.} \\cite{PengQPQ13} & 2013 & 49.2 & Wang \\emph{et al.} \\cite{WangQT13b} & 2013 & 85.7 & Karaman \\emph{et al.} \\cite{Karaman13} & 2013 & 85.7 \\\\\n Wang \\emph{et al.} \\cite{WangS13a} & 2013 & 57.2 & Wang \\emph{et al.} \\cite{WangS13a} & 2013 & 91.1 & Wang \\emph{et al.} \\cite{Wang13} & 2013 & 85.9 \\\\\n \\hline\n Hybrid representation & - & \\textbf{61.1} & Hybrid representation & - & \\textbf{92.3} & Hybrid representation & - & \\textbf{87.9} \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table*}\n\nThe experimental results 
on the three datasets are shown in Table \\ref{tbl:hmdb}, Table \\ref{tbl:ucf50}, and Table \\ref{tbl:ucf101}. From these results, we observe several trends:\n\\begin{itemize}\n \\item \\textbf{For iDT features, representation level fusion is the best choice for all of the selected encoding methods on the three datasets}. This result indicates that these multiple descriptors are most correlated at the video level. Descriptor level fusion emphasizes the dependence within a cuboid and results in high-dimensional features for codebook training and encoding, which may make these unsupervised learning algorithms unstable.\n \\item \\textbf{For STIP features, representation level fusion is more effective for reconstruction based and voting based encoding methods}. For super vector based encoding methods, the performance of representation level fusion is comparable to that of descriptor level fusion. This trend is consistent with the findings for iDT features.\n \\item \\textbf{For both features, the SA-$k$, LLC, and VQ encoding methods are much more sensitive to fusion methods than the super vector based encoding methods}. A large improvement can be obtained for SA-$k$, LLC, and VQ when using representation level fusion, but only slight improvements are observed for the super vector methods. We attribute this to two facts. Firstly, for reconstruction and voting based encoding methods, the final dimension of representation level fusion is $M$ (the number of descriptors) times the dimension of descriptor level fusion, whereas for super vector based encoding methods the dimension of descriptor level fusion is the same as that of representation level fusion. The higher dimension of the final representation may make classification easier for the SVM. 
Secondly, the codebook size $K$ of super vector methods is much smaller than that of other types of encoding methods, so the clustering algorithm may remain stable even with the higher dimensionality of the descriptor level fusion method.\n\end{itemize}\n\nBased on the observation and analysis above, we conclude that the fusion method is a very important component for handling the combination of multiple descriptors in the action recognition system. The representation level fusion method is a suitable choice for different kinds of encoding methods due to its good performance. From our analysis, we know that the performance boost from fusing multiple features mainly owes to the complementarity of these features. This complementarity may not be limited to the exploration of different descriptors, but can also be extended to different BoVWs. From the perspective of statistics, FV aggregates information using $1^{st}$ and $2^{nd}$ order statistics, while SVC uses $0^{th}$ and $1^{st}$ order statistics. Intuitively, these two kinds of super vector encoding methods are complementary to each other. Thus, we present a new feature representation, called hybrid representation, combining the FV and soft-version SVC outputs of multiple descriptors, including HOG, HOF, MBHx, and MBHy. This representation is simple but proves to be effective in the next section.\n\n\subsection{Comparison to the State-of-the-Art Results}\n\label{sec:stoa}\nIn this section, we demonstrate the effectiveness of our proposed hybrid representation based on our previous analysis. Specifically, we choose two super vector based encoding methods, namely SVC-$k$ and FV, for iDTs features. We use the power operation and then intra $\ell_2$-normalization. For feature fusion, we adopt the representation level fusion method.\n\nTable \ref{tbl:cmp} shows our final recognition rates and compares our results to those of state-of-the-art approaches. 
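As a concrete illustration of the representation level fusion used to build the hybrid representation, the following sketch normalizes each per-descriptor encoding and concatenates them. This is illustrative only: the function names, the 64-dimensional vectors, and the random inputs are stand-ins for the actual FV and soft-SVC encodings of HOG, HOF, MBHx, and MBHy, not the paper's pipeline.

```python
import numpy as np

def power_l2_normalize(v, alpha=0.5):
    # Power normalization followed by l2-normalization, as applied
    # to each super vector encoding before fusion in the text.
    v = np.sign(v) * np.abs(v) ** alpha
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def hybrid_representation(encodings):
    # Representation level fusion: normalize every per-descriptor
    # encoding and concatenate them into one video-level vector.
    return np.concatenate([power_l2_normalize(v) for v in encodings])

# Illustrative dimensions only: 4 descriptors (HOG, HOF, MBHx, MBHy)
# times 2 encodings (FV, soft SVC), each faked as a 64-dim vector.
rng = np.random.default_rng(0)
parts = [rng.standard_normal(64) for _ in range(8)]
rep = hybrid_representation(parts)
print(rep.shape)  # (512,)
```

A linear SVM would then be trained on the concatenated vector; the key point of fusing at this level is that each descriptor keeps its own codebook and encoding, and only the final video-level representations are combined.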
For the HMDB51 dataset, we obtain a recognition rate of $61.1\%$, which surpasses the best reported result \cite{WangS13a} by $3.9\%$. Our system reaches a classification accuracy of $92.3\%$ on the UCF50 dataset and $87.9\%$ on the UCF101 dataset, outperforming the best results by $1.2\%$ and $2.0\%$, respectively. It is worth noting that UCF101 is the newest and largest dataset, so few published papers have reported results on it. We mainly compare with the top performers in the Thumos'13 Action Recognition Challenge \cite{THUMOS13}. We also compare with three recent papers from CVPR 2014. Karpathy \emph{et al.} \cite{Karpathy14} resort to a large deep Convolutional Neural Network trained with an extra 1-M training dataset. Cai \emph{et al.} \cite{CaiWPQ14} propose a complex and less efficient encoding method by considering the correlation of different descriptors. Wu \emph{et al.} \cite{WuZL14} propose a simple, lightweight, but powerful bimodal encoding method. Our results outperform these top performers and recent papers on UCF101. From these comparisons, our hybrid representation is an efficient and effective method and obtains state-of-the-art performance on the three challenging datasets.\n\n\section{Conclusion}\n\label{sec:conclusion}\nIn this paper, we have comprehensively studied each step in the BoVW pipeline and tried to uncover good practices to build a more accurate and efficient action recognition system. Specifically, we mainly explore five aspects, namely local features, pre-processing techniques, encoding methods, pooling and normalization strategies, and fusion methods. We conclude that every step is crucial for contributing to the final recognition rate and that an improper choice in one of the steps may counteract the performance improvement of other steps. Meanwhile, based on the insights from our comprehensive study, we propose a simple yet effective representation, called \emph{hybrid representation}. 
Using this representation, our action recognition system obtains the state-of-the-art performance on the three challenging datasets.\n\n\n\n\n\bibliographystyle{spmpsci} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\label{S0}\nFrobenius theory for completely positive maps on von Neumann\nalgebras was developed in \cite{A-H} (further contributions to\nthis subject can be found in \cite{Gr} and \cite{Wa}). This theory\nstates, in particular, that for an ergodic semigroup of completely\npositive (or, in fact, even Schwarz) maps on a von Neumann algebra\nits point spectrum forms a group, and the corresponding\neigenspaces are one-dimensional and spanned by a unitary operator.\nThe aim of this paper is to show that neither of these is true for a\nsemigroup of merely positive maps. Namely, we first prove that the\neigenvectors are either multiples of a partial isometry or linear\ncombinations of two partial isometries or multiples of a unitary\noperator, and then show, by means of examples, that there is a whole\nclass of semigroups of positive maps on a von Neumann algebra such\nthat their point spectra are not groups and the corresponding\neigenspaces have dimensions greater than one.\n\section{Preliminaries and notation}\label{S1}\nLet $M$ be a von Neumann algebra, and let $(\varPhi_g\colon\ng\in\mathbb{G})$ be a semigroup of linear normal positive unital maps on\n$M$. 
We shall be concerned with two cases: $\mathbb{G}=\mathbb{N}_0$\n--- all nonnegative integers, and $\mathbb{G}=\mathbb{R}_+$ --- all\nnonnegative reals (notice that in the first case the semigroup has\nthe form $(\varPhi^n\colon n=0,1,\dots)$, where $\varPhi$ is a\nlinear normal positive unital map on $M$).\n\nA complex number $\lambda$ of modulus one is called an\n\emph{eigenvalue} of the semigroup, if there is a nonzero $x\in M$\nsuch that for each $g\in\mathbb{G}$\n\begin{equation}\label{e0}\n  \varPhi_g(x) = \lambda^g x.\n\end{equation}\nThe collection of all $x$'s such that \eqref{e0} holds is called\nthe \emph{eigenspace} corresponding to the eigenvalue $\lambda$,\nand denoted by $M_{\lambda}$. In particular, $M_1$ is the\nfixed-point space of the semigroup, and the semigroup is called\n\emph{ergodic} if $M_1$ consists of multiples of the identity. The\nset of all eigenvalues of the semigroup is called its \emph{point\nspectrum}, and denoted by $\sigma((\varPhi_g))$. Let $\omega$ be a\nnormal faithful state on $M$ such that for each \linebreak\n$g\in\mathbb{G},\ \omega\circ\varPhi_g = \omega$. The part of Frobenius\ntheory developed in \cite{A-H} which is of interest to us states\nthat in this case, if we assume that \linebreak $(\varPhi_g\colon g\in\mathbb{G})$\nis ergodic and the maps $\varPhi_g$ are two-positive, the\npoint spectrum is a group, and the corresponding eigenspaces are\none-dimensional and spanned by a unitary operator. A natural question\nis whether the same is true under the assumption of mere positivity of the\nmaps $\varPhi_g$. We shall show that this is the case for neither\nthe group structure nor the dimension of the eigenspaces.\n\nLet $N$ be the $\sigma$-weak closure of the linear span of\n$\bigcup_{\lambda}M_{\lambda}$, where the union is taken over all\neigenvalues of $(\varPhi_g)$. 
Then according to \\linebreak \\cite[Theorem\n1]{Lu} $N$ is a $JW^*$-algebra, by which is meant that $N$ is\n\\linebreak a $\\sigma$-weakly closed linear space, closed with respect\nto the Jordan product\n\\[\n x\\circ y = \\frac{1}{2}(xy + yx);\n\\]\nmoreover, $\\varPhi_g|N$ are Jordan $^*$-automorphisms.\nConsequently, if $x\\in M_{\\lambda}$, then\n\\[\n \\varPhi_g(x^*) = \\varPhi_g(x)^* = \\bar{\\lambda}^g x^*,\n\\]\nmeaning that $x^*\\in M_{\\bar{\\lambda}}$, and for $x\\in\nM_{\\lambda_1},\\ y\\in M_{\\lambda_2}$ we have\n\\[\n \\varPhi_g(x\\circ y) = \\varPhi_g(x)\\circ\\varPhi_g(y) = \\lambda_1^g\n x\\circ\\lambda_2^g y = (\\lambda_1\\lambda_2)^g x\\circ y,\n\\]\nmeaning that $x\\circ y\\in M_{\\lambda_1\\lambda_2}$. In particular,\nfor an eigenvector $x$ we have $x\\circ x^*\\in M_1$.\n\\section{Eigenvectors}\\label{S2}\nIn the following theorem we shall describe the eigenvectors of\n$(\\varPhi_g)$.\n\\begin{theorem}\nAssume that $(\\varPhi_g)$ is ergodic, and let $x\\in M_{\\lambda}$\nbe an eigenvector of $(\\varPhi_g)$. 
Then one\nof the following possibilities holds:\n\\begin{enumerate}\n\\item[(i)] $x=\\alpha v$, where $\\alpha\\in\\mathbb{C}$, and $v$ is a\npartial isometry in $M_{\\lambda}$ such that\n\\[\n v^*v=e, \\qquad vv^*=e^{\\bot}\n\\]\nfor some nonzero projection $e$, $e\\neq\\boldsymbol{1}$;\n\\item[(ii)] $x=\\alpha_1v_1+\\alpha_2v_2$, where\n$\\alpha_1,\\alpha_2\\in\\mathbb{C},\\,\\alpha_1\\neq\\alpha_2$, and\n$v_1,v_2$ are partial isometries in $M_{\\lambda}$ such that for\nsome nonzero projection $e,\\ e\\neq\\boldsymbol{1}$,\n\\[\n v^*_1v_1=e, \\quad v_1v^*_1=e^{\\bot}, \\quad v^*_2v_2=e^{\\bot}, \\quad\n v_2v^*_2=e;\n\\]\n\\item[(iii)] $x=\\alpha u$, where $\\alpha\\in\\mathbb{C}$, and $u$ is\na unitary operator in $M_{\\lambda}$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nWe have $x\\circ x^*\\in M_1$, so renorming $x$, if\nnecessary, we may assume that\n\\begin{equation}\\label{E1}\n x^*x+xx^*=\\boldsymbol{1}.\n\\end{equation}\nMultiplying both sides of the above equality by $x$ on the left\nand on the right, respectively, we obtain\n\\begin{equation}\\label{E2}\n xx^*x+x^2x^*=x=x^*x^2+xx^*x,\n\\end{equation}\nso in particular $x^*$ commutes with $x^2$. Multiplying again both\nsides of the second part of the above equality by $x^*$, we obtain\n\\begin{equation}\\label{E3}\n x^*xx^*x+x^{*2}x^2=x^*x.\n\\end{equation}\n\n(i) Assume that $x^2=0$. Then \\eqref{E3} gives\n\\[\n (x^*x)^2=x^*x,\n\\]\nso $x^*x$ is a projection $e$, and by \\eqref{E1} $xx^*=e^{\\bot}$,\nthus $x$ is a partial isometry with initial projection $e$ and\nfinal projection $e^{\\bot}$, hence part (i) of the conclusion of\nthe theorem follows.\n\n(ii) and (iii) Assume that $x^2\\neq 0$. Then $x^2\\in\nM_{\\lambda^{2}},\\,x^{*2}\\in M_{\\bar{\\lambda}^2}$, so $x^2\\circ\nx^{*2}\\in M_1$, and since $x^2$ and $x^{*2}$ commute, we get\n\\[\n x^2x^{*2}=x^2\\circ x^{*2}=\\theta\\boldsymbol{1}\n\\]\nfor some $\\theta>0$, by the assumed ergodicity of $(\\varPhi_g)$.\nDenote $z=x^*x$. 
Then the above equality and \eqref{E3} give\n\[\n  z^2-z+\theta\boldsymbol{1}=0,\n\]\nthat is\n\[\n  \left(z-\frac{1-\sqrt{1-4\theta}}{2}\,\boldsymbol{1}\right)\n  \left(z-\frac{1+\sqrt{1-4\theta}}{2}\,\boldsymbol{1}\right)=0.\n\]\nIt follows that $\theta\leq 1\/4$, and we have two\npossibilities for the spectrum of $z$: either $\theta=1\/4$\nin which case $\text{sp}\,z=\left\{1\/2\right\}$, or $\theta\n< 1\/4$ in which case\n$\text{sp}\,z=\left\{\frac{1-\sqrt{1-4\theta}}{2},\n\,\frac{1+\sqrt{1-4\theta}}{2}\right\}$. The first possibility\ngives at once\n\[\n  x^*x=\frac{1}{2}\,\boldsymbol{1}, \qquad\n  xx^*=\frac{1}{2}\,\boldsymbol{1},\n\]\nand part (iii) of the conclusion of the theorem follows.\n\nFor the second possibility we have\n\[\n  x^*x=\frac{1-\sqrt{1-4\theta}}{2}\,e +\n  \frac{1+\sqrt{1-4\theta}}{2}\,e^{\bot}\n\]\nwhere $e$ and $e^{\bot}$ are the spectral projections of $x^*x$.\nFrom \eqref{E1} we get\n\[\n  xx^*=\frac{1+\sqrt{1-4\theta}}{2}\,e +\n  \frac{1-\sqrt{1-4\theta}}{2}\,\n  e^{\bot},\n\]\nand denoting\n\[\n  \alpha_1=\left(\frac{1-\sqrt{1-4\theta}}{2}\right)^{1\/2}, \qquad\n  \alpha_2=\left(\frac{1+\sqrt{1-4\theta}}{2}\right)^{1\/2},\n\]\nwe obtain\n\begin{equation}\label{E4}\n  \begin{aligned}\n  |x|&=\alpha_1e + \alpha_2e^{\bot}\\\n  |x^*|&=\alpha_2e + \alpha_1e^{\bot}.\n  \end{aligned}\n\end{equation}\nLet\n\[\n  x=u|x|\n\]\nbe the polar decomposition of $x$. Then\n\[\n  x^*=u^*|x^*|\n\]\nis the polar decomposition of $x^*$, and since $|x|$ and $|x^*|$ are\ninvertible the operator $u$ is unitary. 
Moreover,\n\\[\n |x^*|=u|x|u^*,\n\\]\nwhich gives the equality\n\\[\n \\alpha_2e + \\alpha_1e^{\\bot}=\\alpha_1ueu^* + \\alpha_2ue^{\\bot}u^*,\n\\]\nyielding\n\\[\n \\alpha_1\\boldsymbol{1} +\n (\\alpha_2-\\alpha_1)e=\\alpha_2\\boldsymbol{1} +\n (\\alpha_1-\\alpha_2)ueu^*.\n\\]\nThus\n\\[\n (\\alpha_1-\\alpha_2)\\boldsymbol{1}=(\\alpha_1-\\alpha_2)(e + ueu^*),\n\\]\nshowing that\n\\[\n ueu^*=e^{\\bot},\n\\]\nand consequently,\n\\[\n e=\\boldsymbol{1}-ueu^*=ue^{\\bot}u^*.\n\\]\nFrom the polar decomposition and formula \\eqref{E4} we obtain\n\\begin{equation}\\label{E4'}\n x=\\alpha_1ue + \\alpha_2ue^{\\bot}=\\alpha_1v_1 + \\alpha_2v_2,\n\\end{equation}\nwhere\n\\[\n v_1=ue, \\qquad v_2=ue^{\\bot}.\n\\]\nWe have\n\\[\n v^*_1v_1=e,\\quad v_1v^*_1=ueu^*=e^{\\bot},\\quad\n v^*_2v_2=e^{\\bot},\\quad v_2v^*_2=ue^{\\bot}u^*=e,\n\\]\nso \\eqref{E4'} is the representation of $x$ as in part (ii) of the\nconclusion of the theorem. It remains to show that $v_1,v_2\\in\nM_{\\lambda}$. Equality \\eqref{E2} gives\n\\[\n xx^*x=x-x^2x^*=x-x^2\\circ x^*\n\\]\nbecause $x^2$ and $x^*$ commute. Since $x^2\\in M_{{\\lambda}^2}$ and\n$x^*\\in M_{\\bar{\\lambda}}$, we get $x^2\\circ x^*\\in M_{\\lambda}$, so\n$xx^*x\\in M_{\\lambda}$. 
We have\n\[\n  xx^*x=(\alpha_1v_1 + \alpha_2v_2)(\alpha^2_1e + \alpha^2_2e^{\bot})\n  =\alpha^3_1v_1 + \alpha^3_2v_2,\n\]\nand the equality\n\[\n  \varPhi_g(xx^*x)=\lambda^g xx^*x\n\]\nyields\n\begin{equation}\label{E5}\n  \alpha^3_1\varPhi_g(v_1) +\alpha^3_2\varPhi_g(v_2)\n  =\lambda^g\alpha^3_1v_1 + \lambda^g\alpha^3_2v_2.\n\end{equation}\nOn the other hand $\varPhi_g(x)=\lambda^g x$, which gives\n\begin{equation}\label{E6}\n  \alpha_1\varPhi_g(v_1) + \alpha_2\varPhi_g(v_2) = \lambda^g\alpha_1v_1\n  + \lambda^g\alpha_2v_2.\n\end{equation}\nMultiplying both sides of equality \eqref{E6} by $\alpha^2_2$ and\nsubtracting the result from \eqref{E5} we obtain\n\[\n  \alpha_1(\alpha^2_1-\alpha^2_2)\varPhi_g(v_1) =\n  \lambda^g\alpha_1(\alpha^2_1-\alpha^2_2)v_1,\n\]\nwhich gives $\varPhi_g(v_1)=\lambda^g v_1$,\nand analogously $\varPhi_g(v_2)=\lambda^g v_2$.\n\end{proof}\n\section{Positivity of elements from $\textbf{Mat}_2(M)$}\label{S3}\nIn what follows we shall need a number of properties concerning\npositivity of matrices with elements in a von Neumann algebra.\nThey will be exploited in examples in a particular case of an\nabelian von Neumann algebra, but as these properties seem to be\ninteresting in their own right, we prove them here in slightly\ngreater generality. For more information on positivity of such\nmatrices the reader is referred to \cite[Chapter IV.3]{Ta} and\n\cite{Wo}. It should be added that some of the facts obtained\nbelow can be given alternative proofs based on methods used in\n\cite{Ta,Wo}.\n\nLet $M$ be a von Neumann algebra, and let\n$\widetilde{M}=\textbf{Mat}_2(M)$ be the algebra of\n$2\times2$-matrices with elements from $M$. 
Assuming that $M$ acts\non a Hilbert space $\\mathcal{H}$, we can consider $\\widetilde{M}$\nas acting on the Hilbert space $\\widetilde{\\mathcal{H}} =\n\\mathcal{H}\\oplus\\mathcal{H}$.\n\n\\begin{proposition}\\label{P1}\nLet $A=\\left[\\begin{smallmatrix}a & b\\\\ c &\nd\\end{smallmatrix}\\right]\\in\\widetilde{M}$.\n\\begin{enumerate}\n\\item[(i)]$A\\geq 0$ if and only if $a,d\\geq 0,\\,c=b^*$, and for\neach $\\varepsilon > 0 \\linebreak d\\geq\nb^*(a+\\varepsilon\\boldsymbol{1})^{-1}b$.\n\\item[(ii)] Assume that $a,b\\text{ and }c$ commute. Then\n$A\\geq 0$ if and only if $a,d\\geq 0,\\,c=b^*$, and $ad\\geq b^*b$.\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\nCalculate first the quadratic form of $A$. For\n$\\tilde{\\xi}=\\left(\\begin{smallmatrix}\\xi_1 \\\\\n\\xi_2\\end{smallmatrix}\\right)$,\\linebreak\n$\\xi_1,\\xi_2\\in\\mathcal{H}$, we have\n\\begin{equation}\\label{e1}\n \\begin{aligned}\n \\langle\n A\\tilde{\\xi},\\tilde{\\xi}\\rangle_{\\widetilde{\\mathcal{H}}}&=\\langle\\begin{pmatrix}\n a\\xi_1+b\\xi_2 \\\\ c\\xi_1+d\\xi_2\\end{pmatrix},\\begin{pmatrix}\\xi_1\n \\\\ \\xi_2\\end{pmatrix}\\rangle_{\\widetilde{\\mathcal{H}}} \\\\\n &=\\langle a\\xi_1,\\xi_1\\rangle_{\\mathcal{H}}+\\langle\n b\\xi_2,\\xi_1\\rangle_{\\mathcal{H}}+\\langle\n c\\xi_1,\\xi_2\\rangle_{\\mathcal{H}}+\\langle d\\xi_2,\\xi_2\\rangle_{\\mathcal{H}}.\n \\end{aligned}\n\\end{equation}\n\nIt is clear that in order that $A$ be positive we must have\n$a,d\\geq 0$ and $c=b^*$, so we restrict attention to matrices $A$\nof the form $A=\\left[\\begin{smallmatrix}a & b \\\\ b^* & d\n\\end{smallmatrix}\\right]$ with $a,d \\geq 0$. 
Then \\eqref{e1}\nbecomes\n\\begin{equation}\\label{e2}\n \\langle A\\tilde{\\xi},\\tilde{\\xi}\\rangle_{\\widetilde{\\mathcal{H}}}=\\langle\n a\\xi_1,\\xi_1\\rangle_{\\mathcal{H}}+\\langle b\\xi_2,\\xi_1\\rangle_{\\mathcal{H}}\n +\\langle\\xi_1,b\\xi_2\\rangle_{\\mathcal{H}} + \\langle\n d\\xi_2,\\xi_2\\rangle_{\\mathcal{H}}.\n\\end{equation}\n\n(i) \\emph{Step 1}. First we shall show that for $A$ of the form\n$A=\\left[\\begin{smallmatrix}\\boldsymbol{1} & b \\\\ b^* & z\n\\end{smallmatrix}\\right]$, $A\\geq 0$ if and only if $z\\geq b^*b$.\nThis is virtually proved in\\\\ \\cite[Lemma 3.1]{Wo}. For the sake of\ncompleteness we give a simple proof below.\n\nFor the quadratic form of $A$ we have\n\\begin{equation}\\label{e3}\n \\langle\n A\\tilde{\\xi},\\tilde{\\xi}\\rangle_{\\widetilde{\\mathcal{H}}}=\n \\langle\\xi_1,\\xi_1\\rangle_{\\mathcal{H}}+\\langle\n b\\xi_2,\\xi_1\\rangle_{\\mathcal{H}}+\\langle\\xi_1,b\\xi_2\\rangle_{\\mathcal{H}}+\n \\langle z\\xi_2,\\xi_2\\rangle_{\\mathcal{H}}.\n\\end{equation}\n\nLet $b^*b\\leq z$. 
Then\n\\begin{align*}\n \\langle A\\tilde{\\xi},\\tilde{\\xi}\\rangle_{\\widetilde{\\mathcal{H}}}&\\geq\n \\langle\\xi_1,\\xi_1\\rangle_{\\mathcal{H}}+\\langle b\\xi_2,\\xi_1\\rangle_{\\mathcal{H}}\n +\\langle\\xi_1,b\\xi_2\\rangle_{\\mathcal{H}}+\\langle\n b\\xi_2,b\\xi_2\\rangle_{\\mathcal{H}}\\\\&=\\langle\\xi_1+b\\xi_2,\\xi_1+b\\xi_2\\rangle_{\\mathcal{H}}\n \\geq 0,\n\\end{align*}\nshowing that $A\\geq 0$.\n\nConversely, if $A\\geq 0$ then substituting $-\\xi_1$ for $\\xi_1$ in\n\\eqref{e3}, we obtain\n\\[\n 0\\leq \\langle\\xi_1,\\xi_1\\rangle_{\\mathcal{H}}-\\langle\n b\\xi_2,\\xi_1\\rangle_{\\mathcal{H}}\n -\\langle\\xi_1,b\\xi_2\\rangle_{\\mathcal{H}}+\\langle\n z\\xi_2,\\xi_2\\rangle_{\\mathcal{H}},\n\\]\nthat is\n\\[\n\\langle b\\xi_2,\\xi_1\\rangle_{\\mathcal{H}}+\\langle\\xi_1,b\\xi_2\\rangle_{\\mathcal{H}}\n-\\langle \\xi_1,\\xi_1\\rangle_{\\mathcal{H}}\\leq\\langle\nz\\xi_2,\\xi_2\\rangle_{\\mathcal{H}}.\n\\]\n\nNow putting $\\xi_1=b\\xi_2$ in the above inequality, we get\n\\[\n \\langle b\\xi_2,b\\xi_2\\rangle_{\\mathcal{H}}\\leq\\langle\n z\\xi_2,\\xi_2\\rangle_{\\mathcal{H}},\n\\]\nwhich shows that $b^*b\\leq z$.\n\n\\emph{Step 2}. Let now $A=\\left[\\begin{smallmatrix}a & b \\\\ b^* &\nd\\end{smallmatrix}\\right]$ be arbitrary. 
$A\\geq 0$ if and only if\nfor each $\\varepsilon > 0, \\quad\nA_{\\varepsilon}=\\left[\\begin{smallmatrix}a+\\varepsilon\\boldsymbol{1}&\nb \\\\ b^*& d \\end{smallmatrix}\\right]\\geq 0$, and denoting\n$a_\\varepsilon = a + \\varepsilon\\boldsymbol{1}$, we obtain\n\\[\n \\langle\n A_{\\varepsilon}\\tilde{\\xi},\\tilde{\\xi}\\rangle_{\\widetilde{\\mathcal{H}}}=\n \\langle a_{\\varepsilon}\\xi_1,\\xi_1\\rangle_{\\mathcal{H}}+\\langle b\\xi_2,\\xi_1\n \\rangle_{\\mathcal{H}}+\\langle\\xi_1,b\\xi_2\\rangle_{\\mathcal{H}}+\\langle\n d\\xi_2,\\xi_2\\rangle_{\\mathcal{H}}.\n\\]\n\nPutting $\\eta_1=a_{\\varepsilon}^{1\/2}\\xi_1$, we get\n\\[\n \\begin{aligned}\n \\langle\n A_{\\varepsilon}\\tilde{\\xi},\\tilde{\\xi}\\rangle_{\\widetilde{\\mathcal{H}}}&=\n \\langle \\eta_1,\\eta_1\\rangle_{\\mathcal{H}}+\\langle\n a_{\\varepsilon}^{-1\/2}b\\xi_2,\\eta_1\\rangle_{\\mathcal{H}}\n +\\langle\\eta_1,a_{\\varepsilon}^{-1\/2}b\\xi_2\\rangle_{\\mathcal{H}}+\\langle\n d\\xi_2,\\xi_2\\rangle_{\\mathcal{H}}\\\\\n &=\\langle\\begin{bmatrix}\\boldsymbol{1}& a_{\\varepsilon}^{-1\/2}b \\\\\n b^* a_{\\varepsilon}^{-1\/2} & d \\end{bmatrix}\\begin{pmatrix}\\eta_1\\\\\n \\xi_2\\end{pmatrix},\\begin{pmatrix}\\eta_1 \\\\\n \\xi_2\\end{pmatrix}\\rangle_{\\widetilde{\\mathcal{H}}}.\n \\end{aligned}\n\\]\n\nSince $a_{\\varepsilon}^{1\/2}$ maps $\\mathcal{H}$ onto\n$\\mathcal{H}$ in a 1--1 way, we see that $A_{\\varepsilon}\\geq 0$\nif and only if $\\left[\\begin{smallmatrix}\\boldsymbol{1} &\na_{\\varepsilon}^{-1\/2}b \\\\ b^* a_{\\varepsilon}^{-1\/2} & d\n\\end{smallmatrix}\\right]\\geq 0$, which by \\emph{Step 1} is\nequivalent to the condition\n\\[\n d \\geq\n b^*a_{\\varepsilon}^{-1}b=b^*(a+\\varepsilon\\boldsymbol{1})^{-1}b,\n\\]\nand the proof of (i) is complete.\n\n(ii) The reasoning is similar to that in part (i) using the simple\n\\linebreak $\\varepsilon$-trick. 
Namely, the inequality $ad\\geq\nb^*b$ is equivalent to $(a+\\varepsilon\\boldsymbol{1})d \\geq b^*b$\nfor each $\\varepsilon >0$, which in turn, by the assumed\ncommutation property, is equivalent to $d\\geq\nb^*(a+\\varepsilon\\boldsymbol{1})^{-1}b$. Applying part (i)\nfinishes the proof.\n\\end{proof}\n\\begin{remark}\nIn virtually the same way we obtain the following variant of (i):\n\\begin{enumerate}\n\\item[(i${}'$)] $A\\geq 0$ if and only if $a,d\\geq 0,\\quad c=b^*$,\nand for each $\\varepsilon > 0 \\linebreak a\\geq\nb(d+\\varepsilon\\boldsymbol{1})^{-1}b^*$.\n\\end{enumerate}\n\\end{remark}\n\\begin{lemma}\\label{L2}\nLet $\\left[\\begin{smallmatrix}a & b \\\\ b^* & d\n\\end{smallmatrix}\\right]\\geq 0$. Then\n\\begin{enumerate}\n\\item[(i)]$\\left[\\begin{smallmatrix}d & b^* \\\\ b & a\n\\end{smallmatrix}\\right]\\geq 0$.\n\\item[(ii)] For each $x,y\\in M \\quad\n\\left[\\begin{smallmatrix}xax^* & xby^* \\\\ yb^*x^* & ydy^*\n\\end{smallmatrix}\\right]\\geq 0$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n(i) follows from the equality\n\\[\n \\begin{bmatrix}d & b^* \\\\ b & a \\end{bmatrix} = \\begin{bmatrix}0\n & \\boldsymbol{1} \\\\ \\boldsymbol{1} & 0\n \\end{bmatrix}\\begin{bmatrix}a & b \\\\ b^* & d \\end{bmatrix}\n \\begin{bmatrix}0 & \\boldsymbol{1} \\\\ \\boldsymbol{1} & 0\n \\end{bmatrix},\n\\]\nand (ii) from the equality\n\\[\n \\begin{bmatrix}xax^* & xby^* \\\\ yb^*x^* & ydy^* \\end{bmatrix}=\n \\begin{bmatrix}x & 0 \\\\ 0 & y \\end{bmatrix}\\begin{bmatrix}a & b\n \\\\ b^* & d \\end{bmatrix}\\begin{bmatrix}x^* & 0 \\\\ 0 & y^*\n \\end{bmatrix}.\n\\]\n\\end{proof}\n\\begin{lemma}\\label{L3}\nLet $a$ commute with $b$, and assume further that either $b$ is\nnormal or that $b$ commutes with $d$. 
If\n$\\left[\\begin{smallmatrix}a & b \\\\ b^* & d\n\\end{smallmatrix}\\right]\\geq 0$, then $\\left[\\begin{smallmatrix}a\n& b^* \\\\ b & d \\end{smallmatrix}\\right]\\geq 0$.\n\\end{lemma}\n\\begin{proof}\nLet $\\left[\\begin{smallmatrix}a & b \\\\ b^* & d\n\\end{smallmatrix}\\right]\\geq 0$, and assume first that $b$ is\nnormal. Then by Proposition \\ref{P1} (i) we have for each\n$\\varepsilon > 0$\n\\[\n d \\geq b^*(a+\\varepsilon\\boldsymbol{1})^{-1}b =\n b^*b(a+\\varepsilon\\boldsymbol{1})^{-1} =\n b(a+\\varepsilon\\boldsymbol{1})^{-1}b^*,\n\\]\nwhich again by Proposition \\ref{P1} (i) means that\n$\\left[\\begin{smallmatrix}a & b^* \\\\ b & d\n\\end{smallmatrix}\\right]\\geq 0$.\n\nNow let $b$ commute with $d$. Then for each $\\varepsilon > 0$,\n\\[\n d \\geq b^*(a+\\varepsilon\\boldsymbol{1})^{-1}b =\n |b|^2(a+\\varepsilon\\boldsymbol{1})^{-1} =\n |b|(a+\\varepsilon\\boldsymbol{1})^{-1}|b|,\n\\]\nso $\\left[\\begin{smallmatrix}a & |b| \\\\ |b| & d\n\\end{smallmatrix}\\right]\\geq 0$. By Lemma \\ref{L2} (i) it follows\nthat $\\left[\\begin{smallmatrix}d & |b| \\\\ |b| & a\n\\end{smallmatrix}\\right]\\geq 0$, and thus, on account of\nProposition \\ref{P1} (i), for each $\\varepsilon > 0$,\n\\[\n a \\geq |b|(d+\\varepsilon\\boldsymbol{1})^{-1}|b| =\n b^*b(d+\\varepsilon\\boldsymbol{1})^{-1} =\n b^*(d+\\varepsilon\\boldsymbol{1})^{-1}b,\n\\]\nwhich means that $\\left[\\begin{smallmatrix}d & b \\\\ b^* & a\n\\end{smallmatrix}\\right]\\geq 0$. Applying again Lemma \\ref{L2} (i)\nwe obtain $\\left[\\begin{smallmatrix}a & b^* \\\\ b & d\n\\end{smallmatrix}\\right]\\geq 0$.\n\\end{proof}\n\\section{Examples}\\label{S4}\nLet us begin with a simple example.\n\\begin{example}\\label{Ex1}\nKeeping the notation of Section \\ref{S3}, put\n$\\widetilde{M}=\\mathbb{B}(\\mathbb{C}^2)$ (i.e. $M=\\mathbb{C}$),\n$\\omega=\\frac{1}{2}tr$, and let $\\lambda_0\\in\\mathbb{C}$ be such\nthat $|\\lambda_0|=1,\\ \\lambda_0\\ne1$. 
Define\n$\\varPhi:\\widetilde{M}\\to\\widetilde{M}$ as\n\\[\n \\varPhi\\left(\\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\\right) =\n \\begin{bmatrix}\\frac{a+d}{2}&\\lambda_0b\\\\\n \\bar{\\lambda}_0c&\\frac{a+d}{2}\\end{bmatrix}.\n\\]\nIt is clear that $\\varPhi$ is linear normal unital, and that\n$\\omega\\circ\\varPhi=\\omega$. Moreover, for $\\left[\\begin{smallmatrix}a&b\\\\c&d\\end{smallmatrix}\\right]\\geq 0$, we have\n$a,d \\geq 0,\\ c=\\bar{b}$, and $\\left[\\begin{smallmatrix}d & b \\\\\n\\bar{b} & a \\end{smallmatrix}\\right]\\geq 0$, thus \\linebreak\n$\\left[\\begin{smallmatrix}a+d & 2b \\\\ 2\\bar{b} & a+d\n\\end{smallmatrix}\\right]\\geq 0$, so\n$\\left[\\begin{smallmatrix}\\frac{a+d}{2} & b \\\\ \\bar{b} &\n\\frac{a+d}{2} \\end{smallmatrix}\\right]\\geq 0$, and consequently\n$\\left[\\begin{smallmatrix}\\frac{a+d}{2} & \\lambda_0b \\\\\n\\bar{\\lambda}_0\\bar{b} & \\frac{a+d}{2}\n\\end{smallmatrix}\\right]\\geq 0$, showing that $\\varPhi$ is\npositive.\n\nThe equality\n\\[\n \\varPhi\\left(\\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\\right) = \\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\n\\]\nyields\n\\[\n \\frac{a+d}{2} = a = d,\\qquad \\lambda_0b = b,\\qquad\n \\bar{\\lambda}_0c = c,\n\\]\nhence $b=c=0$, and the fixed-points have the form\n$\\left[\\begin{smallmatrix}a & 0 \\\\ 0 & a \\end{smallmatrix}\\right]\n= a\\left[\\begin{smallmatrix}1 & 0 \\\\ 0 & 1\n\\end{smallmatrix}\\right]$, which means that $(\\varPhi^n)$ is\nergodic.\n\n(i) Let $\\lambda_0$ be such that $\\lambda_0\\ne-1,\\\n\\lambda_0^3\\ne1$, and let $\\lambda\\ne1$ be an eigenvalue of\n$(\\varPhi^n)$. 
Then\n\\[\n \\frac{a+d}{2} = \\lambda a = \\lambda d,\\qquad \\lambda_0b = \\lambda\n b,\\qquad \\bar{\\lambda}_0c = \\lambda c,\n\\]\nwhich yields\n\\[\n a = d = 0,\\qquad \\bar{\\lambda}\\lambda_0b = b,\\qquad\n \\bar{\\lambda}\\bar{\\lambda}_0c = c.\n\\]\nThus either $\\lambda = \\lambda_0,\\ c = 0$ or $\\lambda =\n\\bar{\\lambda}_0,\\ b = 0$, so $\\lambda_0 \\text{ and }\n\\bar{\\lambda}_0$ are the only eigenvalues of $(\\varPhi^n)$\ndifferent from $1$, with the eigenspaces\n\\[\n \\widetilde{M}_{\\lambda_0} = \\left\\{\\begin{bmatrix}0 & b \\\\ 0 & 0\n \\end{bmatrix}\\colon b\\in\\mathbb{C}\\right\\}, \\qquad\n \\widetilde{M}_{\\bar{\\lambda}_0} = \\left\\{\\begin{bmatrix}0 & 0 \\\\\n c & 0 \\end{bmatrix}\\colon c\\in\\mathbb{C}\\right\\}.\n\\]\nConsequently, $\\sigma((\\varPhi^n)) =\n\\{1,\\lambda_0,\\bar{\\lambda}_0\\}$, which is not a group if\n\\linebreak $\\lambda_0\\ne-1,\\ \\lambda_0^3\\ne1$.\n\n(ii) Now let $\\lambda_0 = -1$. The above calculations give\n$\\sigma((\\varPhi^n)) = \\{1,-1\\}$, and\n\\[\n \\widetilde{M}_{-1} = \\left\\{\\begin{bmatrix} 0 & b \\\\ c & 0\n \\end{bmatrix}\\colon b,c\\in\\mathbb{C}\\right\\},\n\\]\nso the eigenspace is not one-dimensional.\\hfill\\qedsymbol\n\\end{example}\n\nLet us observe that the reasoning above may be repeated with\nvirtually no change for the semigroup $(\\varPhi_t\\colon t\\geq 0)$\ndefined as\n\\[\n \\varPhi_t\\left(\\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\\right) = \\begin{bmatrix} \\frac{a+d}{2} &\n \\lambda_0^t b \\\\ \\bar{\\lambda}_0^t c & \\frac{a+d}{2}\n \\end{bmatrix},\n\\]\nthus giving a corresponding example in the continuous case.\n\\begin{remark}\nThe triple $(\\mathbb{B}(\\mathbb{C}^2),(\\varPhi_t),\\omega)$ from\nthe above example constitutes what in \\cite{Gr} is called an\nirreducible $W^*$-dynamical system. 
In \\linebreak\\cite[Theorem\n3.8]{Gr} it is proved that under the assumption that the\n$\\varPhi_t$'s are Schwarz maps every such a system on a full\nalgebra has trivial point spectrum (i.e. consisting only of $1$).\nAs we see this is not the case if we assume only positivity of the\nmaps $\\varPhi_t$'s.\n\\end{remark}\nNow we construct a more involved\nexample (in fact, a class of examples) in which we shall see that\nall the possibilities for the eigenvectors given in Theorem may\noccur.\n\\begin{example}\\label{Ex2}\nLet $M$ be abelian, let $\\omega$ be a normal faithful state on\n$M$, and let $\\varPsi$ be a positive normal unital map on $M$ such\nthat $\\omega\\circ\\varPsi = \\omega,\\ (\\varPsi^n)$ is ergodic, and\n$\\sigma((\\varPsi^n)) = \\{-1,1\\}$. The abelianess of $M$ implies\nthat $\\varPsi$ is completely positive (cf. \\cite[Chapter IV.3]{Ta}),\nthus according to \\cite{A-H} the eigenspace corresponding to $-1$\nis one-dimensional and spanned by a unitary operator $u$, i.e.\n\\[\n \\varPsi(x) = -x\n\\]\nif and only if $x$ is a multiple of $u$.\n\nPut $\\widetilde{M} = \\textbf{Mat}_2(M)$,\n\\[\n \\tilde\\omega\\left(\\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\\right) = \\frac{1}{2}[\\omega(a) +\n \\omega(d)],\n\\]\nand let $\\lambda_0\\in\\mathbb{C}$ be such that $|\\lambda_0|=1,\\\n\\lambda_0\\notin\\{-1,1\\}$. Define\n$\\varPhi:\\widetilde{M}\\to\\widetilde{M}$ as\n\\[\n \\varPhi\\left(\\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\\right) =\n \\begin{bmatrix}\\varPsi\\left(\\frac{a+d}{2}\\right) & \\lambda_0\\varPsi(b) \\\\\n \\bar{\\lambda}_0\\varPsi(c) & \\varPsi\\left(\\frac{a+d}{2}\\right)\\end{bmatrix}.\n\\]\n$\\varPhi$ is a linear normal unital map on $\\widetilde{M}$ such\nthat $\\tilde\\omega\\circ\\varPhi = \\tilde\\omega$. 
Arguing as in\nExample \\ref{Ex1}, and using Lemma \\ref{L2} with $x =\n\\lambda_0\\boldsymbol{1},\\ y = \\boldsymbol{1}$, and Lemma \\ref{L3}, we see\nthat if $\\left[\\begin{smallmatrix}a&b\\\\c&d\\end{smallmatrix}\\right]\\geq 0$, then\n$\\left[\\begin{smallmatrix}\\frac{a+d}{2} & \\lambda_0 b \\\\ \\bar{\\lambda}_0 c &\n\\frac{a+d}{2}\\end{smallmatrix}\\right]\\geq 0$, so $\\varPhi$ is\npositive by virtue of the complete positivity of $\\varPsi$.\n\nThe equality\n\\[\n \\varPhi\\left(\\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\\right) = \\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\n\\]\nyields\n\\[\n \\varPsi\\left(\\frac{a+d}{2}\\right) = a = d,\\qquad \\lambda_0\\varPsi(b) =\n b,\\qquad \\bar{\\lambda}_0\\varPsi(c) = c,\n\\]\nhence $b = c = 0$ and $\\varPsi(a) = a$. From the ergodicity of\n$\\varPsi$ it follows that $a$ is a multiple of $\\boldsymbol{1}$, so\nthe fixed-points of $(\\varPhi^n)$ have the form\n$\\left[\\begin{smallmatrix}\\theta\\boldsymbol{1} & 0 \\\\ 0 &\n\\theta\\boldsymbol{1}\\end{smallmatrix}\\right]$ with\n$\\theta\\in\\mathbb{C}$, which means that $(\\varPhi^n)$ is ergodic.\nLet $\\lambda\\ne1$ be an eigenvalue of $(\\varPhi^n)$. Then\n\\[\n \\varPsi\\left(\\frac{a+d}{2}\\right) = \\lambda a = \\lambda d,\\qquad\n \\lambda_0\\varPsi(b) = \\lambda b,\\qquad \\bar{\\lambda}_0\\varPsi(c)\n = \\lambda c,\n\\]\nwhich yields\n\\begin{equation}\\label{e4}\n \\varPsi(a) = \\lambda a,\\qquad \\varPsi(b) = \\bar{\\lambda}_0\\lambda b,\n \\qquad \\varPsi(c) = \\bar{\\lambda}_0\\lambda c.\n\\end{equation}\n\n(i) Take $\\lambda_0$ such that $\\lambda_0\\neq i,\\\n\\lambda_0\\neq-i,\\ \\lambda^3_0\\neq1,\\ \\lambda^3_0\\neq-1$.\nEqualities \\eqref{e4} yield the following possibilities:\n\n(i.1) $\\lambda=-1$. 
Then $a=d$ is a multiple of $u,\\nb=c=0$, and\n\[\n  \widetilde{M}_{-1}=\left\{\alpha\begin{bmatrix}u & 0\\ 0 & u\n  \end{bmatrix}\colon \alpha\in\mathbb{C}\right\},\n\]\nso the eigenvector corresponding to the eigenvalue $-1$ is as in\npart (iii) of Theorem.\n\n(i.2) $\lambda\neq-1$. Then $a=d=0$, and one of the four\nsituations must occur:\n\n(i.2.1) $\lambda=\lambda_0$. Then $b$ is a multiple of\n$\boldsymbol{1},\ c=0$, and\n\[\n  \widetilde{M}_{\lambda_0}=\left\{\alpha\begin{bmatrix}0 &\n  \boldsymbol{1}\\ 0 & 0\n  \end{bmatrix}\colon\alpha\in\mathbb{C}\right\},\n\]\nso the eigenvector corresponding to the eigenvalue $\lambda_0$ is\nas in part (i) of Theorem.\n\n(i.2.2) $\lambda=-\lambda_0$. Then $b$ is a multiple of $u,\ c=0$, and\n\[\n  \widetilde{M}_{-\lambda_0}=\left\{\alpha\begin{bmatrix}0 & u\\\n  0 & 0\end{bmatrix}\colon\alpha\in\mathbb{C}\right\}.\n\]\n\n(i.2.3) $\lambda=\bar{\lambda}_0$. Then $b=0,\ c$ is a multiple of\n$\boldsymbol{1}$, and\n\[\n  \widetilde{M}_{\bar{\lambda}_0}=\left\{\alpha\begin{bmatrix}0 &\n  0\\ \boldsymbol{1} & 0\n  \end{bmatrix}\colon\alpha\in\mathbb{C}\right\}.\n\]\n\n(i.2.4) $\lambda=-\bar{\lambda}_0$. Then $b=0,\ c$ is a multiple\nof $u$, and\n\[\n  \widetilde{M}_{-\bar{\lambda}_0}=\left\{\alpha\begin{bmatrix}0 &\n  0\\u & 0 \end{bmatrix}\colon\alpha\in\mathbb{C}\right\}.\n\]\n\nMoreover, we have\n\[\n  \sigma((\varPhi^n))=\{1,-1,\lambda_0,\bar{\lambda}_0,-\lambda_0,\n  -\bar{\lambda}_0\},\n\]\nwhich is not a group under our assumptions on $\lambda_0$.\n\n(ii) Now take $\lambda_0=i$. 
Equations \\eqref{e4} then become\n\\[\n \\varPsi(a)=\\lambda a,\\qquad \\varPsi(b)=-i\\lambda b,\\qquad\n \\varPsi(c)=i\\lambda c.\n\\]\nAs in part (i) we have the possibilities:\n\n(ii.1) $\\lambda=-1$, in which case $a=d$ is a multiple of\n$u$, and $b=c=0$.\n\n(ii.2) $\\lambda\\neq-1$, in which case $a=d=0$, and we may only\nhave either $\\lambda=i$ or $\\lambda=-i$. In the first case $b$ is\na multiple of $\\boldsymbol{1},\\ c$ is a multiple of $u$, and\n\\[\n \\widetilde{M}_i=\\left\\{\\alpha_1\\begin{bmatrix}0 &\n \\boldsymbol{1}\\\\ 0 & 0\\end{bmatrix}+ \\alpha_2\\begin{bmatrix}0 &\n 0\\\\ u &\n 0\\end{bmatrix}\\colon\\alpha_1,\\alpha_2\\in\\mathbb{C}\\right\\},\n\\]\nso the situation is as in part (ii) of Theorem with\n\\[\n v_1=\\begin{bmatrix}0 & \\boldsymbol{1}\\\\ 0 & 0\\end{bmatrix},\\\n v_2=\\begin{bmatrix}0 & 0\\\\ u & 0\\end{bmatrix},\\\n e=\\begin{bmatrix}0 & 0\\\\ 0 & \\boldsymbol{1}\\end{bmatrix},\\\n e^{\\bot}=\\begin{bmatrix}\\boldsymbol{1} & 0\\\\ 0 & 0\\end{bmatrix}.\n\\]\nIn the second case $b$ is a multiple of $u,\\ c$ is a multiple of\n$\\boldsymbol{1}$, and\n\\[\n \\widetilde{M}_{-i}=\\left\\{\\alpha_1\\begin{bmatrix}0 & u\\\\ 0 &\n 0\\end{bmatrix}+ \\alpha_2\\begin{bmatrix}0 & 0\\\\ \\boldsymbol{1} &\n 0\\end{bmatrix}\\colon\\alpha_1,\\alpha_2\\in\\mathbb{C}\\right\\},\n\\]\nso again part (ii) of Theorem occurs with\n\\[\n v_1=\\begin{bmatrix}0 & u\\\\ 0 & 0\\end{bmatrix},\\\n v_2=\\begin{bmatrix}0 & 0\\\\ \\boldsymbol{1} & 0\\end{bmatrix},\\\n e=\\begin{bmatrix}\\boldsymbol{1} & 0\\\\ 0 & 0\\end{bmatrix},\\\n e^{\\bot}=\\begin{bmatrix}0 & 0\\\\ 0 & \\boldsymbol{1}\\end{bmatrix}.\n\\]\n\\hfill\\qedsymbol\n\\end{example}\nAs in Example \\ref{Ex1}, we observe that taking $(\\varPsi_t\\colon\nt\\geq 0)$ --- an ergodic semigroup of positive maps on $M$ with\n$\\sigma((\\varPsi_t)) = \\{1,-1\\}$, and defining\n\\[\n \\varPhi_t\\left(\\begin{bmatrix}a&b\\\\c&d\\end{bmatrix}\\right) =\n 
\\begin{bmatrix}\\varPsi_t\\left(\\frac{a+d}{2}\\right) &\n \\lambda_0^t\\varPsi_t(b) \\\\ \\bar{\\lambda}_0^t\\varPsi_t(c) &\n \\varPsi_t\\left(\\frac{a+d}{2}\\right)\\end{bmatrix},\\qquad t\\geq 0,\n\\]\nwe obtain a continuous counterpart of Example \\ref{Ex2}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nThere are a number of reasons why metrics are believed to be useful\nfor studying spike trains. It can be argued that metrics provide the most\ngeneral useful mathematical framework for spike trains\n\\cite{VictorPurpura1997a}; it may be possible to find a manifold of\nspike trains using local linear methods; and, of most relevance to the\npresent discussion, studying spike train metrics may be\na useful approach to understanding how content is coded in spike\ntrains.\n\nThe Victor-Purpura metric is an edit distance metric on the space of\nspike trains. The distance between two spike trains is, in effect,\ncalculated as the cost of changing one spike train into the other by\nadding, deleting, or moving spikes, with an individual cost for each\ntype of elementary move. In particular, a cost of one is associated\nwith adding or deleting a spike and a cost of $q\\delta t$ with moving\na spike a time $\\delta t$: the distance is\n\\begin{equation}\nd({\\bf u},{\\bf v};q)=\\mbox{min}_\\gamma c_{\\gamma;q}({\\bf u},{\\bf v})\n\\end{equation}\nwhere $c_{\\gamma;q}({\\bf u},{\\bf v})$ is the cost of a sequence of\nelementary moves $\\gamma$ and is calculated by adding the cost of all\nthe elementary moves in the sequence. The minimum is taken over all\nsequences $\\gamma$ changing ${\\bf u}$ to ${\\bf v}$. The parameter\n$2\/q$ gives an important timescale for the metric: it is never\nworthwhile to move a spike more than $2\/q$ since it would be cheaper\nto delete the spike from one temporal location and to add it to the\nother. 
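The $2/q$ threshold can be checked directly: with additions and deletions costing one each, reconciling a single unpaired spike displaced by a time dt costs whichever is cheaper of moving it, at cost q*dt, or deleting it and re-adding it, at cost 2. A minimal sketch (the function name is illustrative, not from the papers cited here):

```python
# Reconciling one spike displaced by dt: move it (cost q*dt) or
# delete it and add it back (cost 1 + 1 = 2), whichever is cheaper.
def single_spike_cost(dt, q):
    return min(q * abs(dt), 2.0)

q = 2.0                                   # threshold at 2/q = 1.0
assert single_spike_cost(0.5, q) == 1.0   # below 2/q: moving wins
assert single_spike_cost(1.0, q) == 2.0   # at 2/q: the two costs coincide
assert single_spike_cost(5.0, q) == 2.0   # beyond 2/q: delete-and-add wins
```
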
This gives a timescale that, roughly speaking, separates jitter\nfrom unreliability in comparing spikes in the spike trains. However,\nthe Victor-Purpura metric does more than this; it explicitly pairs up\nthose spikes that can be thought of as being related by jitter.\n\nThe Victor-Purpura metric has an $l^1$ character: the cost $c_{\\gamma;q}$\nis a simple linear sum of the individual costs of the individual\nmoves. The generalization proposed in \\cite{DubbsSeilerMagnasco2009a}\nchanges this to an $l^p$-like sum. For notational convenience, from\nnow on the set of sequences will be restricted: the minimum sequence\nwill never involve moving a spike after it has been added, since it would\nalways be cheaper to add the spike in the correct location; similarly,\nspikes are never moved before they are deleted. It is also specified\nthat spikes can only be moved once. With these restrictions, the\nsequence $\\gamma$ can be considered to be an unordered set made up of\ndeletions, additions and moves. Let\n\\begin{equation}\n\\gamma=\\alpha\\cup\\mu\n\\end{equation}\nwhere $\\alpha$ is the set of additions and deletions and $\\mu$ is the\nset of moves. The elements of $\\mu$ are pairs of spikes $(u,v)$, with\n$u\\in {\\bf u}$ and $v\\in {\\bf v}$, that are related by jitter.\n\nNow,\n\\begin{equation}\nc_{\\gamma;q}({\\bf u},{\\bf v})=|\\alpha|+\\sum_{(u,v)\\in\\mu} q|u-v|.\n\\end{equation}\nThis is generalized in \\cite{DubbsSeilerMagnasco2009a} to\n\\begin{equation}\nc_{\\gamma;q,p}({\\bf u},{\\bf v})=\\left(|\\alpha|+\\sum_{(u,v)\\in\\mu} q^p|u-v|^p\\right)^{1\/p}\n\\end{equation}\nwith $p\\ge 1$, to give the $\\mathcal{L}_p$ Victor-Purpura metric\n\\begin{equation}\nd({\\bf u},{\\bf v};q,p)=\\mbox{min}_\\gamma c_{\\gamma;q,p}({\\bf u},{\\bf v})\n\\end{equation}\nwhere the minimum is taken over sequences $\\gamma$ changing ${\\bf u}$\nto ${\\bf v}$ and satisfying the restrictions specified above. 
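Since the map x -> x^(1/p) is monotone, minimizing the bracketed sum and then taking the 1/p power of the minimum gives the distance, and the bracketed sum itself can be minimized with the same kind of dynamic program used for the original metric. The following sketch illustrates the idea; it is an assumption-laden illustration, not the algorithm of the cited papers:

```python
import numpy as np

def lp_vp_distance(u, v, q=1.0, p=1.0):
    """Dynamic-programming sketch of the L_p Victor-Purpura distance.

    Minimizes |alpha| + sum q^p |u_i - v_j|^p over restricted sequences
    (additions/deletions cost 1, pairing two spikes costs q^p |dt|^p),
    then takes the 1/p power of the minimum.
    """
    u, v = sorted(u), sorted(v)
    m, n = len(u), len(v)
    G = np.zeros((m + 1, n + 1))
    G[:, 0] = np.arange(m + 1)   # delete every spike of u
    G[0, :] = np.arange(n + 1)   # add every spike of v
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            G[i, j] = min(G[i - 1, j] + 1,                        # delete u[i-1]
                          G[i, j - 1] + 1,                        # add v[j-1]
                          G[i - 1, j - 1]
                          + (q * abs(u[i - 1] - v[j - 1])) ** p)  # pair/move
    return G[m, n] ** (1.0 / p)

# A small jitter is paid for by moving; a large one by delete-and-add.
print(lp_vp_distance([1.0], [1.25], q=1.0, p=1.0))  # 0.25
print(lp_vp_distance([0.0], [10.0], q=1.0, p=1.0))  # 2.0
```

For p = 1 this reduces to the usual Victor-Purpura recursion; for p > 1 only the per-pair cost and the final 1/p power change.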
An\nalgorithm for calculating this quantity is given in\n\\cite{DubbsSeilerMagnasco2009a}; the existing algorithms used to\ncalculate the Victor-Purpura metric,\n\\cite{VictorGoldbergGardner2007a}, for example, could also be adapted\nto $p>1$.\n\n\\section{Evaluating the metric using clustering}\n\nIn the data that will be considered here, spike trains were recorded\nfrom field L of the auditory fore-brain of anesthetized zebra finch\nduring playback of 20 con-specific songs with each song repeated ten\ntimes, to give a total of 200 spike trains. These spike trains, and\nthe experimental conditions used to produce them, are described in\n\\cite{NarayanEtAl2006b,WangEtAl2007a} and they have previously been\nused to compare metrics in\n\\cite{Houghton2007a,HoughtonVictor2009a}. The key point, in this\ncontext, is that a good metric, one which depends on the content\nencoded in the spike train, should measure a smaller distance between\nspiking responses to the same song than between responses to different\nsongs. One motivation for looking at spike train metrics is that\nscoring metrics in this way, and examining how good metrics measure\ndistance, might reveal details about how content is encoded.\n\nThe usual method for evaluating how well a metric succeeds in this way\nis to calculate the transmitted information\n\\cite{VictorPurpura1996a}. There are two ways to cluster the spike\ntrains: a true clustering, based on the identity of the song that\nelicited the response and an estimated clustering, based on the metric\ndistances between the responses. Roughly speaking, the transmitted\ninformation quantifies the amount of information about the true\nclustering given by the estimated clustering.\n\nCalculating the transmitted information relies on the calculation of a\nconfusion matrix $N$. 
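Once the confusion matrix is in hand, the normalized transmitted information (defined precisely in the following paragraph; terms with a zero entry contribute nothing to the sum) can be sketched as below. The two test matrices, perfect and complete confusion, are illustrative assumptions, not data from the paper:

```python
import numpy as np

def transmitted_info(N):
    """Normalized transmitted information of a K x K confusion matrix N:
    (1 / (n ln K)) * sum_ij N_ij (ln N_ij - ln N_.j - ln N_i. + ln n),
    where n is the grand total; zero entries contribute nothing.
    """
    N = np.asarray(N, dtype=float)
    K = N.shape[0]
    n = N.sum()
    rows = N.sum(axis=1, keepdims=True)   # N_i.
    cols = N.sum(axis=0, keepdims=True)   # N_.j
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = N * (np.log(N) - np.log(cols) - np.log(rows) + np.log(n))
    return np.nansum(terms) / (n * np.log(K))   # 0*log(0) terms become NaN, dropped

print(transmitted_info(10 * np.eye(20)))    # ~1.0: perfect clustering
print(transmitted_info(np.ones((20, 20))))  # ~0.0: complete confusion
```
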
This is described in detail in\n\\cite{VictorPurpura1996a}; for the data\nconsidered here, $N$ is a $20\\times 20$ matrix of integers where the\nentry $N_{ij}$ counts how many of the spike trains from the $i$th true\ncluster are closest, on average, to the spike trains in the $j$th true\ncluster. As in \\cite{VictorPurpura1996a} this averaging is carried out\nas a root-mean-square averaging to under-weight outliers. If the metric\nclustering is close to the true clustering, $N$ will be nearly\ndiagonal. The transmitted information gives a measure of this; it is\n\\begin{equation}\n\\tilde{h}=\\frac{1}{n\\ln{20}}\\sum_{ij}N_{ij}\\left(\\ln{N_{ij}}-\\ln{\\sum_k{N_{kj}}}-\\ln{\\sum_k{N_{ik}}}+\\ln {n}\\right),\n\\end{equation}\nwhere $n=\\sum_i\\sum_jN_{ij}$. All sums are from one to 20: the number\nof clusters. The factor of $\\ln{20}$ normalizes $\\tilde{h}$ so\nthat it takes values between zero and one; a low value corresponds to\npoor clustering and a value near one corresponds to a near-diagonal\nconfusion 
matrix.\n\n\\begin{figure}\n\\begin{center}\n% [Figure: six horizontal axes from 0 to 1, labelled A--F, with one tick per site marking its $\\tilde{h}$ value; picture environment omitted]\n\\end{center}\n\\caption{Performance as $p$ changes. {\\bf A}, {\\bf B} and {\\bf C}\n correspond to $p=1$, $p=2$ and $p=10$. Each horizontal line runs\n from zero to one. For clarity, there are tiny gaps at 0.25, 0.5 and\n 0.75. Each small vertical dash marks the $\\tilde{h}$ value for one\n site, with each site using its optimal value of $q$. The large\n vertical dash marks the average, 0.732, 0.751 and 0.761\n respectively. To indicate the range over which $\\tilde{h}$ values\n vary, two other metrics are plotted for comparison. 
${\\bf D}$\n corresponds to the van Rossum metric \\cite{vanRossum2001a}, with\n each site using its optimal value of $\\tau$. ${\\bf E}$ corresponds\n to the synapse metric, with each site using its optimal value of\n $\\tau$ and $\\mu$ \\cite{Houghton2007a}. Finally, ${\\bf F}$\n corresponds to the $L^1$ van Rossum metric, with a boxcar filter;\n this metric measures a very similar distance to the $p=1$ metric\n \\cite{HoughtonSen2006a}.\\label{Results}}\n\\end{figure}\n\n\nThe main purpose of this comment is to evaluate the $\\mathcal{L}_p$\nVictor-Purpura metric by calculating $\\tilde{h}$ for the zebra finch\ndata previously used to evaluate other spike train metrics\n\\cite{Houghton2007a,HoughtonVictor2009a}. The data set contains 24\nsets of spike trains, corresponding to 24 recording sites from\nmultiple birds. Although the songs themselves are of different\nlengths, all of the songs are at least a second long and, in each\ncase, the spike train is truncated to one second, starting at song\nonset.\n\n\\begin{figure}\n\\begin{center}\n% [Figure: curve of the average $\\tilde{h}$ (vertical axis, 0.74 to 0.76) against $p$ (1 to 10), rising and then levelling off; picture environment omitted]\n\\end{center}\n\\caption{The average performance plotted against $p$.\n $\\tilde{h}$ has been calculated for a range of $p$ values using the\n optimal value of $q$ for each site and the average value of\n $\\tilde{h}$ is plotted against $p$.\\label{p}}\n\\end{figure}\n\nThe performance of the $\\mathcal{L}_p$ Victor-Purpura metric is\nillustrated in Fig.~\\ref{Results}. A remarkable feature is that\n$\\tilde{h}$ rarely decreases as $p$ is increased. Other\nmulti-parameter metrics show a mixed effect; although changing a\nparameter might increase the average $\\tilde{h}$ across the sites,\nthere will typically be many individual sites where $\\tilde{h}$\ndecreases. In the case of $p$, the average improvement as $p$\nincreases is modest and a small number of sites show no change, but there\nis no site where the clustering is negatively affected when $p$ is\nchanged from one to two, although three sites do show a tiny decrease\nbetween $p=1$ and $p=1.5$. $\\tilde{h}$ increases for 19 sites between $p=1$ and\n$p=2$ and for 15 between $p=2$ and $p=3$; the improvement gets smaller\nand smaller and appears to have plateaued by $p=10$. The average\nperformance is plotted against $p$ in Fig.~\\ref{p}. The optimal value of\n$q$ does not vary significantly as $p$ changes; however, the range of\n$q$ that produces the optimal performance does get wider.\n\n\\section{Discussion}\n\n\nIt is interesting to compare this metric to another generalization of\nthe Victor-Purpura metric\n\\cite{VictorPurpura1996a,VictorPurpura1997a}. 
The linear distance\nfunction $q\\delta t$ can be replaced by any concave function,\n$f(|\\delta t|)$, of distance:\n\\begin{equation}\nc_{\\gamma;f(\\cdot)}({\\bf u},{\\bf v})=|\\alpha|+\\sum_{(u,v)\\in\\mu} f(|u-v|).\n\\end{equation}\nThe distance function $f(|\\delta t|)$ must be concave to ensure that\nthe triangular inequality is satisfied. Now, the function $q^p\\delta\nt^p$ in $c_{\\gamma;q,p}$ above is convex for $p>1$. However, because\nof the $1\/p$ exponent in $d({\\bf u},{\\bf v};q,p)$, this does not cause\nthe same difficulty with the triangular inequality: an interesting\nproperty of the ${\\mathcal L}_p$ metrics is that they satisfy the\ntriangular inequality despite having a convex distance function.\n\nAs $p$ is increased, $q^p|\\delta t|^p$ becomes increasingly convex;\nspikes located at similar times contribute less and less to the\ntotal cost, and this cost is predominantly made up of the cost of\nadding and deleting spikes that are not related by jitter. This means\nthat the metric performs more and more as a windowed coincidence\ndetector as $p$ increases. 
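Numerically, the windowed-coincidence behaviour shows up in the per-pair cost inside the bracketed sum. A sketch assuming q = 1: as p grows, jitters smaller than 1/q cost next to nothing, while any spike further away than the pairing threshold is charged the full delete-and-add cost of 2.

```python
# Per-pair contribution inside the bracketed sum: pairing two spikes a
# time dt apart costs (q*dt)**p, but never more than the cost 2 of
# deleting one spike and adding the other.
def pair_cost(dt, q=1.0, p=1.0):
    return min((q * dt) ** p, 2.0)

for p in (1, 2, 10):
    print(p, [round(pair_cost(dt, p=p), 3) for dt in (0.1, 0.5, 0.9, 3.0)])
# p=1  -> [0.1, 0.5, 0.9, 2.0]      graded costs
# p=10 -> [0.0, 0.001, 0.349, 2.0]  near-binary: windowed coincidence detection
```
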
The fact that it is always beneficial to\nincrease $p$ appears to imply that, for these data, coincidence and\ncoincidence detection have a role in the encoding and decoding of\ncontent in spike trains.\n\n\\begin{figure}\n\\begin{center}\n% [Figure: per-site $\\tilde{h}$ for the $p=1$ and $p=10$ metrics plotted against the $L^1$ boxcar van Rossum metric, both axes running from 0.25 to 1, with the $x=y$ line shown; picture environment omitted]\n\\end{center}\n\\caption{Comparing the $L^1$ boxcar van Rossum metric with the $p=1$\n and $p=10$ ${\\mathcal L}_p$ metrics. Here $\\tilde{h}$ values for the\n $p=1$ and $p=10$ metrics are graphed against the values for the\n $L^1$ van Rossum metric with a boxcar filter, where the optimal value of\n the $q$ or $\\tau$ parameter has been used for each site. The $p=1$\n and $p=10$ values are marked by a horizontal line; the two values\n for a given site are joined by a vertical line. Since increasing $p$\n never decreases $\\tilde{h}$ between $p=1$ and $p=10$, the topmost of\n the two horizontal lines corresponds to $p=10$. The \\lq{}x=y\\rq{}\n line is also plotted for clarity.\\label{Compare}}\n\\end{figure}\n\nThe pairing of near-coincident spikes is the key distinction between\nthe Victor-Purpura metrics and the van Rossum metrics, a family of\nspike train metrics which are calculated by comparing reconstructed\nrate functions. In fact, computationally, an $L^1$ van Rossum metric\nwith a boxcar filter measures a very similar distance to the\nVictor-Purpura metric \\cite{HoughtonSen2006a} and, as seen in\nFig.~\\ref{Compare}, has a very similar performance. The way in which\nthe ${\\mathcal L}_p$ Victor-Purpura metric's performance increases with $p$\nseems to indicate that the significant temporal structure of spikes is\nnot fully accounted for by the temporal structure of a spike rate.\n\n\\section*{Acknowledgments}\nC.H. is supported by Science Foundation Ireland grant\n08\/RFP\/MTH1280. He thanks Kamal Sen for the use of the zebra finch\ndata analyzed here and Jonathan Victor for comments on an early draft\nof this comment.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\\section{Introduction}\n\n\n\\par Unsupervised learning has long been an intriguing field in artificial intelligence. 
Human and animal learning is largely unsupervised: we discover the structure of the world mostly by observing it, not by being told the name of every object, which would correspond to supervised learning \\cite{lecun15nature}. \nA system capable of predicting what is going to happen by just watching large collections of unlabeled video data needs to build an internal representation of the world and its dynamics \\cite{mathieu2015deep}. When considering the vast amount of unlabeled data generated every day, unsupervised learning becomes one of the key challenges to solve on the road towards general artificial intelligence.\n\nBased on how a human would provide a high-level summary of a video, we hypothesize that there are three key components to understand such content: namely \\textit{foreground}, \\textit{motion} and \\textit{background}. These three elements would tell us, respectively, what the main objects in the video are, what they are doing, and where they are located. We propose a framework that explicitly disentangles these three components in order to build strong features for action recognition, where the supervision signals can be generated without requiring expensive and time-consuming human annotations.\nThe proposal is inspired by how infants who have no prior visual knowledge tend to group things that move as connected wholes and also move separately from one another \\cite{elizabeth90cognitivemotion}. Based on this intuition, we can build a similar unsupervised pipeline to segment foreground and background with global motion, i.e. the rough moving directions of objects. Such segmented foregrounds across the video can be used to model both the global motion (e.g.~translation or stretch) and local motion (i.e.~transformation of detailed appearance) from a pair of foregrounds at different time steps. 
\nSince background motion is mostly given by camera movements, we restrict the use of motion to the foreground and rely on appearance to model the background.\n\nThe contributions of this work are two-fold: (1) disentangling motion, foreground and background features in videos via a human-like, motion-aware mechanism, and (2) learning strong video features that improve the performance of the action recognition task. \n\n\n\n\\section{Related Work}\n\n\n\n\n\n\n\nLeveraging large collections of unlabeled videos has proven beneficial for unsupervised training of image models thanks to the implicit properties they exhibit in the temporal domain, e.g. visual similarity between patches in consecutive frames \\cite{wang2015unsupervised} and temporal coherence and order \\cite{misra2016shuffle}. Since learning to predict future frames forces the model to construct an internal representation of the world dynamics, several works have addressed such a task by predicting global features of future frames with Recurrent Neural Networks (RNN) \\cite{srivastava2015unsupervised} or pixel level predictions by means of multi-scale Convolutional Neural Networks (CNN) trained with an adversarial loss \\cite{mathieu2015deep}. The key role played by motion has been exploited for future frame prediction tasks by explicitly decomposing content and motion \\cite{villegas2017decomposing} and for unsupervised training of video-level models \\cite{luo2017motionprediction}. Similar in spirit, separate foreground and background streams have been found to increase the quality of generative video models \\cite{vondrick2016generating}.\n\nTechniques exploiting explicit foreground and background segmentations in video generally require expensive annotation methods, limiting their application to labeled data. However, the findings by Pathak et al. \\cite{pathak2017learning} show how models trained on noisy annotations learn to generalize and perform well when fine-tuned for other tasks. 
Such noisy annotations can be generated by unsupervised methods, thus alleviating the cost of annotating data for the target task. In this work we study our proposed method by using manual annotations, whereas evaluating the performance drop when replacing such annotations with segmentations generated in an unsupervised manner remains future work.\n\n\n\n\n\n\n\\section{Methodology}\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=\\linewidth]{.\/figs\/architecture.pdf}\\\\\n\\caption{System architecture. Please note that in this work, the masks used to generate ground truth are from manual annotations while uNLC will be utilized in our future work.}\n\\label{fig:architecture}\n\\end{figure*}\n\n\nWe adopt an autoencoder-style architecture to learn features in an unsupervised manner. The encoder maps input clips \nto feature tensors \nby applying a series of 3D convolutions and max-pooling operations \\cite{du2015c3d}. Unlike traditional autoencoder architectures, the bottleneck features are partitioned into three splits which are then used as input for three different reconstruction tasks, as depicted in Figure \\ref{fig:architecture}.\n\n\\textbf{Disentangling of foreground and background:} depending on the nature of the training data, reconstruction of frames may become dominated either by the foreground or background. We explicitly split the reconstruction task to guarantee that neither of the parts dominates the other. The partitioned foreground and background features are passed into two different decoders for reconstruction. While segmentation masks are often obtained by manual labeling, it is worth noting they can be obtained without supervision as well, e.g. by using methods based on motion perceptual grouping such as uNLC \\cite{pathak2017learning}. 
The latter approach has proven beneficial for unsupervised pre-training of CNNs \\cite{pathak2017learning}.\n\n\\textbf{Disentangling of foreground motion:} leveraging motion information can provide a boost in action recognition performance when paired with appearance models \\cite{simonyan2014two}. We encourage the model to learn motion-related representations by solving a predictive learning task where the foreground in the last frame needs to be reconstructed from the foreground in the first frame. Given a pair of foregrounds at timesteps $t_1$ and $t_2$, namely $\\left(f_{t_{1}}, f_{t_{2}} \\right)$, we aim to estimate a function $M$ that, given motion features $m_{t_1\\rightarrow t_2}$ spanning $t_1$ to $t_2$, maps $f_{t_{1}}$ to $f_{t_{2}}$ in the deep feature space $G$: \n\n\\begin{equation}\nG \\left( f_{t_{2}} \\right) = M \\left( G(f_{t_{1}}), m_{t_1 \\rightarrow t_2} \\right)\n\\end{equation}\n\nThroughout this work, the space of encoded features is used for $G$, and $M$ is parametrized by a deterministic version of cross convolution \\cite{visualdynamics16}. The foreground decoder weights are shared among all foreground reconstruction tasks. Gradients coming from the reconstruction of $f_{t_{2}}$ are blocked from backpropagating through $G( f_{t_{1}})$ during training to prevent $G( f_{t_{1}})$ from storing information about $f_{t_{2}}$.\n\n\n\\textbf{Frame selection:} assuming that the background semantics remain similar throughout the short clips, only the background in the first frame is reconstructed. First and last frames are chosen to perform foreground reconstruction, since they represent the most challenging pair in the clip.\n\n\\textbf{Loss function:} the model is optimized to minimize the L1 loss between the original frames and their reconstruction. 
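As an illustration, one such masked, area-weighted L1 term can be sketched in NumPy (a simplified single-frame sketch; the array names are ours, and the foreground up-weighting follows the W^t mask defined in this section):

```python
import numpy as np

def foreground_l1(frame_rec, frame_gt, mask_fg):
    """Single-frame masked L1 term: (1/A) * sum W * |x_fg_rec - x_fg|.

    frame_rec, frame_gt : (H, W) reconstructed / ground-truth frames.
    mask_fg             : (H, W) binary mask, 1 on foreground pixels.
    """
    area = float(mask_fg.size)            # A^t: frame area in pixels
    a_fg = float(mask_fg.sum())           # foreground area
    a_bg = area - a_fg                    # background area
    # W^t: 1 on background, max(1, A_bg / A_fg) on foreground
    w = np.where(mask_fg > 0, max(1.0, a_bg / a_fg), 1.0)
    x_fg_rec = frame_rec * mask_fg        # x_fg = x . b_fg
    x_fg_gt = frame_gt * mask_fg
    return float((w * np.abs(x_fg_rec - x_fg_gt)).sum() / area)
```

The up-weighting keeps a small foreground from being drowned out: with one foreground pixel out of 16, that pixel's error is multiplied by A_bg/A_fg = 15.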
In particular, the loss function is defined from a decomposition of the input video volume $x$ of $T$ frames into the foreground $x_{fg}$ and background $x_{bg}$ volumes:\n\n\\begin{equation}\n\\begin{split}\nx_{fg} = x \\cdot b_{fg} \\\\\nx_{bg} = x \\cdot (1- b_{fg})\n\\end{split}\n\\end{equation}\n\nwhere $b_{fg}$ corresponds to a volume of binary values, so that $1$ corresponds to foreground pixels and $0$ to the background ones.\n\nThis decomposition allows defining the reconstruction loss $L_{rec}(x)$ over the video volume $x$ as the sum of three terms:\n\n\\begin{equation}\n\\label{eq:loss}\nL_{rec}(x) = L^{1}_{fg}(x) + L^{1}_{bg}(x) + L^{T}_{fg}(x)\n\\end{equation}\n\nwhere the components $L^{1}_{fg}$, $L^{1}_{bg}$ and $L^{T}_{fg}$ represent the reconstruction loss for the first foreground, first background, and last foreground, respectively. These three terms are particularizations at the first ($t=1$) and last ($t=T$) frames of the generic foreground $L^{t}_{fg}(x)$ and background $L^{t}_{bg}(x)$ reconstruction losses:\n\n\\begin{equation}\nL^{t}_{fg}(x) = \\frac{1}{A^t}\\sum_{i,j}{W^t[i,j] \\cdot \\left|\\hat{x}^{t}_{fg}[i,j] - x_{fg}[i,j,t]\\right|}\n\\end{equation}\n\\begin{equation}\nL^{t}_{bg}(x) = \\frac{1}{A^t}\\sum_{i,j}{\\left|\\hat{x}^t_{bg}[i,j] - x_{bg}[i,j,t]\\right|}\n\\end{equation}\n\nwhere $\\hat{x}^t$ denotes a reconstructed foreground\/background at time $t$, $A^t$ is the area of the reconstructed frame at time $t$, and $W^t$ is an element-wise weighting mask at time $t$ designed to balance the focus between foreground and background pixels:\n\n\\begin{equation}\nW^{t}[i,j]= \n\\begin{cases}\n1 & \\text{if } (i,j) \\in \\text{background}\\\\\n\\max\\left[ 1, \\frac{A^t_{bg}}{A^t_{fg}} \\right] & \\text{if } (i,j) \\in \\text{foreground}\n\\end{cases}\n\\end{equation}\n\n\n\n\n\n\n\n\n\n\n\n\nDuring preliminary experiments, we observed that the reconstruction of the first foreground always outperformed the reconstruction of 
the last one by a large margin, given the increased difficulty of the latter task. In order to get a finer reconstruction of the last foreground, we introduce an L2 loss $L_{feat}$ on $G(f_{t_{2}})$. The pseudo ground truth for this task is obtained by extracting the first-foreground features from the encoder fed with the temporally reversed clip. The final loss to optimize is the following:\n\n\\begin{equation}\nL_{total}(x) = L_{rec}(x) + L_{feat}(x) \n\\end{equation}\n\n\n\\section{Experimental setup}\nPlease note again that we show results trained with ground-truth masks to check the feasibility of our proposal; the purely unsupervised framework generating masks with uNLC \\cite{pathak2017learning} remains future work.\n\n\\textbf{Dataset:} there are 24 classes out of 101 in UCF-101 with localization annotations \\cite{UCF101,THUMOS15}. Following \\cite{pathak2017learning}, we first evaluate the proposed framework with supervised annotations and use the bounding boxes in the subset of UCF-101 for that purpose. Evaluating the proposal on weak annotations collected by means of unsupervised methods remains future work. We follow the original splits of the training and test sets and also split 10\\% of the videos out of the training set as a validation set in order to perform early stopping and prevent the network from overfitting the training data.\n\n\\textbf{Training details:} videos are split into clips of 16 frames each. These clips are then resized to $128\\times128$ and their pixel values are scaled and shifted to $[-1, 1]$. The clips are randomly temporally or horizontally flipped for data augmentation. Weight decay with a rate of $10^{-3}$ is added as regularization. The network is trained for 125 epochs with the Adam optimizer and a learning rate of $10^{-4}$ on batches of 40 clips.\n\n\\section{Results}\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=\\linewidth]{.\/figs\/results.pdf}\\\\\n\\caption{Reconstruction results on the test set. 
For each example, the top row shows the reconstruction while the bottom one contains the ground truth. Each column shows the segmentation of the foreground in the first frame, the background in the first frame, and the foreground in the last frame, respectively.}\n\\label{fig:recon_results}\n\\end{figure*}\n\nWe tested our model on the test set for the reconstruction task. To better demonstrate the efficiency of our proposed pretraining pipeline, we also trained the network to perform action recognition with the pretrained features.\n\n\\textbf{Reconstruction task:} reconstruction results on the test set are shown in Figure \\ref{fig:recon_results}. From these results, we can clearly see that the network can already predict foreground segmentations similar to the ground truth. However, the image reconstructions are still blurry. We argue that this is due to the properties of the L1 loss we are adopting \\cite{mathieu2015deep}. One interesting fact is that the network has learned to generalize the foreground to some other moving objects in the scene even though they are not included in the annotations. For example, in the result shown in the top-right corner, instead of only segmenting the person, the dog walking beside the person is also included. This fact suggests that the network has successfully learned to identify foreground from motion cues.\n\nBesides foreground and background features, these results also demonstrate a good extraction of motion features. The learned motion features contain both global motions, e.g.~translation of the foreground, and local motions, e.g.~change of human pose. 
In the bottom-center result, the kernels generated from the motion features successfully shift the object from the right to the middle and change its pose.\n\n\\textbf{Action recognition:} a good pretraining pipeline should show better performance on some typical discriminative tasks than random initialization, especially when training data is scarce \\cite{misra2016shuffle,luo2017motionprediction,pathak2017learning,vondrick2016generating,wang2015unsupervised}. We also conducted comparative experiments on the task of action recognition. By discarding the decoders in our framework and training a linear softmax layer on top of the disentangled features, we can obtain a simple network for action recognition. For the first experiment, we first pretrain our encoder on the subset of UCF-101 with the settings discussed above and then fine-tune the whole action recognition network with the added softmax layer on the same subset. As baselines, we trained two additional action recognition networks, one with all weights initialized randomly and another one pretrained with an unsupervised autoencoder architecture. This autoencoder shared the same 3D convolutional encoder architecture as ours, while its decoder was a mirrored version of the encoder with the pooling operations replaced by convolutions.\n\nDuring training, we observed that our pretrained model reached 90\\% accuracy on the training set after just one epoch while the randomly initialized network took 130 epochs to achieve it. All three models reached around 96\\% accuracy at the end of training and encountered severe overfitting problems. The accuracy of the different methods on the validation set during training is shown in Figure \\ref{fig:val_results}. The best accuracy obtained on the test set with our pretrained model is 62.5\\%, while it drops to 52.2\\% and 56.8\\%, respectively, when using random initialization and the autoencoder as the pretraining scheme, as shown in Table \\ref{tab:test_acc}. 
We observe a margin of more than 10\\% in accuracy between our proposed method and random initialization on both the validation and test sets. This further demonstrates that with our proposal, the network can learn features that generalize better. These results are especially promising given the small amount of data used during pretraining, which is just a fraction of UCF-101. While this demonstrates the efficiency of the approach, using a larger dataset for pretraining should provide additional gains and better generalization capabilities. \n\n\n\n\\begin{table}\n \\centering\n \\caption{Action recognition accuracy of different methods on the test subset of UCF-101.}\n \\label{tab:test_acc}\n \\resizebox{0.6\\linewidth}{!}{\n \\begin{tabular}{cc}\n \\toprule\n \\textbf{Method} & \\textbf{Accuracy} \\\\ \n \\midrule\n Random initialization & 52.2\\% \\\\ \n \\midrule\n Pretrained (autoencoder) & 56.8\\% \\\\ \n \\midrule\n Pretrained (ours) & \\textbf{62.5\\%} \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{.\/figs\/val_figure.png}\\\\\n\\caption{Action recognition results on the validation set. This figure shows the accuracy of each method on the validation set during training.}\n\\label{fig:val_results}\n\\end{figure}\n\n\n\n\\section{Conclusions}\n\nThis work has proposed a novel framework towards unsupervised learning of video features capable of disentangling motion, foreground, and background.\nOur method mostly exploits motion in videos and is inspired by human perceptual grouping with motion cues. \nOur experiments using ground-truth boxes render convincing results on both frame reconstruction and action recognition, showing the potential of the proposed architecture.\n\nHowever, multiple aspects still need to be explored in our work. 
As future work, we plan to (1) introduce unsupervised learning for foreground segmentation as well, as proposed in uNLC \\cite{pathak2017learning}; (2) train with a larger amount of unlabeled data; (3) introduce an adversarial loss to improve the sharpness of the reconstructed frames \\cite{mathieu2015deep}; and (4) fill the gap of absent motion features between the first frame and the last frame by reconstructing any random frame in the clip.\n\nOur model and source code are publicly available at \\url{https:\/\/imatge-upc.github.io\/unsupervised-2017-cvprw\/}.\n\\section*{Acknowledgments}\nThe Image Processing Group at UPC is supported by the projects TEC2013-43935-R and TEC2016-75976-R, funded by the Spanish Ministerio de Economia y Competitividad and the European Regional Development Fund (ERDF). The Image Processing Group at UPC is a SGR14 Consolidated Research Group recognized and sponsored by the Catalan Government (Generalitat de Catalunya) through its AGAUR office. \nThe contribution from the Barcelona Supercomputing Center has been supported by project TIN2015-65316 by the Spanish Ministry of Science and Innovation and contract 2014-SGR-1051 by the Generalitat de Catalunya.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nBroad emission lines are a hallmark feature of type 1 active galactic nuclei \n(AGNs) and quasars \\citep{osterbrock1986}. As pervasive as they are, \nmany basic properties of the broad-line region (BLR), such as its \ngeometry, dynamics, and physical connection to the accretion disk around the \nsupermassive black hole (BH), remain ill-defined. AGN spectra exhibit both \ntremendous diversity and discernible patterns of systematic \nregularity. Principal component analysis has isolated several dominant \nrelationships among emission-line properties \\citep{boroson1992, sulentic2000}. 
\nThe main trend of variation among those properties, the so-called Eigenvector \n1 (EV1), has been demonstrated to be driven by Eddington ratios, \n$L_{\\rm bol}\/L_{\\rm Edd}$, where $L_{\\rm bol}$ is the bolometric luminosity and \nthe Eddington luminosity $L_{\\rm Edd} = 1.5 \\times 10^{38}\\, (M_{\\bullet}\/M_{\\odot})\\,{\\rm erg~s^{-1}}$ \n\\citep{boroson1992, sulentic2000, shen2014}. As one of the most prominent \nvariables in EV1, the relative strength of broad optical Fe {\\sc ii}\\ emission, expressed as \n\n\\begin{equation}\n{\\cal R}_{\\rm Fe}=\\frac{F_{\\rm FeII}}{F_{\\rm H\\beta}}, \n\\end{equation}\nmay correlate with $L_{\\rm bol}\/L_{\\rm Edd}$. \nSources with high Eddington ratios (accretion \nrates), for instance the so-called narrow-line Seyfert 1 galaxies (NLS1s) \\citep{osterbrock1985}, \nemit exceptionally strong Fe {\\sc ii}\\ lines compared with normal ones\n\\citep{boroson1992, hu2008, dong2011}. \nHowever, the underlying physical mechanism that controls ${\\cal R}_{\\rm Fe}$ remains unclear, \nas the formation of Fe {\\sc ii}\\ is very complex (e.g., \\citealt{baldwin2004}). \nIt may be influenced by differences in the hydrogen density of the BLR gas \\citep{verner2004}, \nor by diverse contributions from microturbulence \\citep{baldwin2004}.\nIn addition, Fe {\\sc ii}\\, lags are generally longer than H$\\beta$ lags by a factor of a few \nin broad-line Seyfert 1 galaxies \\citep{barth2013, chelouche2014} and roughly\nequal to H$\\beta$ lags in narrow-line Seyfert 1s \\citep{hu2015}, \nimplying a potential connection of ${\\cal R}_{\\rm Fe}$ with the distribution or structure of \nthe line-emitting gas. The ${\\cal R}_{\\rm Fe}-L_{\\rm bol}\/L_{\\rm Edd}$ correlation indicates that \nEddington ratios probably regulate all of the above-mentioned properties of the BLR. 
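For reference, the Eddington luminosity quoted above translates directly into code (a trivial helper; the function names are ours):

```python
def eddington_luminosity(mbh_msun):
    """L_Edd = 1.5e38 (M_BH / M_sun) erg/s, as quoted in the text."""
    return 1.5e38 * mbh_msun

def eddington_ratio(l_bol_erg_s, mbh_msun):
    """Eddington ratio L_bol / L_Edd."""
    return l_bol_erg_s / eddington_luminosity(mbh_msun)
```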
It should \nbe noted that ${\\cal R}_{\\rm Fe}$ also correlates with some other properties like \nX-ray spectral slopes (e.g., \\citealt{wang1996, laor1997, sulentic2000}), \nbut this likely originates from the relation between those properties and Eddington ratios \n(e.g., \\citealt{wang2004, risaliti2009, shemmer2006, brightman2013}).\n\n\n\n\nThe overall breadth of the broad emission lines, notably H$\\beta$, reflects \nboth the virial velocity and inclination of the BLR \\citep{kollatschny2011, shen2014}. \nThe shape of the line profile may encode more \ninformation on the detailed dynamics of the BLR (e.g., \\citealt{collin2006, kollatschny2011}), \nwhich itself may depend on fundamental properties \nsuch as the accretion or outflow rate. The broad H$\\beta$ lines of NLS1s \ntend to have more sharply peaked ($\\sim$Lorentzian) profiles compared to \ntype 1 AGNs with more normal Eddington ratios \\citep{veron2001, zamfir2010}. \nAs a non-parametric description of the line profile, one can define\n\\begin{equation}\n{\\cal D}_{_{\\rm H\\beta}}=\\frac{\\rm FWHM}{\\sigma_{_{\\rm H\\beta}}},\n\\end{equation}\nwhere $\\sigma_{_{\\rm H\\beta}}$ is the dispersion (second moment) of the H$\\beta$ line. \nThe value of ${\\cal D}_{_{\\rm H\\beta}}$ is 2.35, 3.46, 2.45, 2.83 and 0 for Gaussian, \nrectangular, triangular, edge-on rotating ring, and Lorentzian \nprofiles (for a pure Lorentzian profile $\\sigma_{_{\\rm H\\beta}}\\rightarrow \\infty$ and thus \n${\\cal D}_{_{\\rm H\\beta}}=0$), respectively (e.g., \\citealt{collin2006}). 
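The Gaussian value of 2.35 is just FWHM = 2 sqrt(2 ln 2) sigma, and can be checked numerically from a sampled profile (a sketch; the velocity grid and line width are arbitrary choices of ours):

```python
import numpy as np

# Sample a Gaussian line profile on a uniform velocity grid (km/s).
v = np.linspace(-10000.0, 10000.0, 20001)
profile = np.exp(-0.5 * (v / 1500.0) ** 2)

# Line dispersion sigma: flux-weighted second moment of the profile.
mean = np.sum(v * profile) / np.sum(profile)
sigma = np.sqrt(np.sum((v - mean) ** 2 * profile) / np.sum(profile))

# FWHM: full width of the region above half of the peak flux.
above = v[profile >= 0.5 * profile.max()]
fwhm = above.max() - above.min()

d_hbeta = fwhm / sigma  # ~2.35 for a Gaussian profile
```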
The quantity ${\\cal D}_{_{\\rm H\\beta}}$\ncorrelates loosely with Eddington ratio \\citep{collin2006} and, as the ratio \nof the rotational and turbulent components of the line-emitting clouds \n\\citep{kollatschny2011}, gives a simple, convenient parameter that may be \nrelated to the dynamics of the BLR.\n\nWhile ${\\cal R}_{\\rm Fe}$ and ${\\cal D}_{_{\\rm H\\beta}}$ each correlates separately with Eddington ratio, we \ndemonstrate that both ${\\cal R}_{\\rm Fe}$ and ${\\cal D}_{_{\\rm H\\beta}}$ {\\it combined}\\ correlate even \nmore tightly with Eddington ratio (and dimensionless accretion rate). This \nbivariate relation, which we call the ``fundamental plane''\\footnote{Borrowing \nthe terminology from galaxy formation (e.g., \\citealt{djorgovski1987}) and \naccreting BHs (e.g., \\citealt{merloni2003})} of the BLR links two direct \nobservables, plausibly related to the structure and dynamics of the BLR, with \nthe dimensionless accretion rate. Applying the BLR fundamental plane to a \nlarge sample of Sloan Digital Sky Survey (SDSS) quasars, we find that a \nlarge fraction of quasars at $z < 0.8$ have super-Eddington accretion rates.\n\n\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{deluxetable*}{lcccccccl}[!t]\n\\tablecolumns{9}\n\\setlength{\\tabcolsep}{3pt}\n\\tablewidth{0pc}\n\\tablecaption{The Sample of Reverberation-mapped AGNs}\n\\tabletypesize{\\footnotesize\n\\tablehead{\n \\colhead{Objects} &\n \\colhead{$\\log L_{5100}$} &\n \\colhead{$\\log\\left(M_{\\bullet}\/M_{\\odot}\\right)$} &\n \\colhead{$\\log\\dot{\\mathscr{M}}$} &\n \\colhead{FWHM} &\n \\colhead{$\\sigma_{\\rm line}$} &\n \\colhead{${\\cal D}_{_{\\rm H\\beta}}$} &\n \\colhead{$\\cal{R}_{\\rm Fe}$} &\n \\colhead{Ref.} \\\\ \\cline{2-9}\n \\colhead{} &\n \\colhead{(${\\rm erg~s^{-1}})$} &\n \\colhead{} &\n \\colhead{} &\n \\colhead{($\\rm km~s^{-1}$)} &\n \\colhead{($\\rm km~s^{-1}$)} &\n \\colhead{} &\n \\colhead{} &\n \\colhead{}\n}\n\\startdata\nMrk 335 & $ 43.69\\pm0.06 $ & $ 
6.87_{-0.14}^{+0.10} $ & $ 1.17_{-0.30}^{+0.31} $ & $2096\\pm170$ & $1470\\pm 50$ & $ 1.43\\pm0.13 $ & $ 0.39 $ & 1, 2, 3, 4 \\\\\n & $ 43.76\\pm0.06 $ & $ 7.02_{-0.12}^{+0.11} $ & $ 1.28_{-0.29}^{+0.30} $ & $1792\\pm 3$ & $1380\\pm 6$ & $ 1.30\\pm0.01 $ & $ 0.77 $ & 4, 5, 6$^a$ \\\\\n & $ 43.84\\pm0.06 $ & $ 6.84_{-0.25}^{+0.18} $ & $ 1.39_{-0.29}^{+0.30} $ & $1679\\pm 2$ & $1371\\pm 8$ & $ 1.23\\pm0.01 $ & $ 0.77 $ & 4, 5, 6$^a$ \\\\\n & $ 43.74\\pm0.06 $ & $ 6.92_{-0.14}^{+0.11} $ & $ 1.25_{-0.29}^{+0.30} $ & $1724\\pm236$ & $1542\\pm 66$ & $ 1.12\\pm0.16 $ & $ 0.69 $ & 4, 7$^a$ \\\\\n & $\\bm{43.76\\pm0.07}$ & $\\bm{6.93_{-0.11}^{+0.10}}$ & $\\bm{ 1.27_{-0.17}^{+0.18}}$ & ... & ... & $\\bm{1.27\\pm0.05}$ & $\\bm{0.62}$ & 4 \\\\\nPG 0026+129 & $ 44.97\\pm0.02 $ & $ 8.15_{-0.13}^{+0.09} $ & $ 0.65_{-0.20}^{+0.28} $ & $2544\\pm 56$ & $1738\\pm100$ & $ 1.46\\pm0.09 $ & $ 0.33 $ & 4, 5, 8$^a$ \\\\\nPG 0052+251 & $ 44.81\\pm0.03 $ & $ 8.64_{-0.14}^{+0.11} $ & $ -0.59_{-0.25}^{+0.31} $ & $5008\\pm 73$ & $2167\\pm 30$ & $ 2.31\\pm0.05 $ & $ 0.12 $ & 4, 5, 8$^a$ \\\\\n\\enddata\n\\vspace{-0.2cm}\n\\tablecomments{All the values of \n $\\log L_{5100}$, $\\log (M_{\\bullet}\/M_{\\odot})$ and $\\log \\mathscr{\\dot{M}}$ are compiled from Du et al. (2015).\nValues in boldface are the weighted averages of all the measurements for this object. \\\\ \\vglue 0.15cm\n\\vspace{-0.30cm}\n\\hspace{0.1in}\nRef.:\n (1) \\citealt{du2014}; \n (2) \\citealt{wang2014}; \n (3) \\citealt{hu2015}; \n (4) \\citealt{du2015}; \n (5) \\citealt{collin2006}; \n (6) \\citealt{peterson1998}; \n (7) \\citealt{grier2012}; \n (8) \\citealt{kaspi2000}. 
\n \\\\ \\vglue 0.02cm\n\\vspace{-0.2cm}\n %\n \\hspace{0.1in} \nThe superscript $a$ for references indicates that ${\\cal R}_{\\rm Fe}$ is measured in this \npaper; $b$ indicates that FWHM and $\\sigma_{_{\\rm H\\beta}}$ are measured from SDSS spectra (the \nH$\\beta$ width of SEAMBHs is significantly broadened by the 5$^{\\prime\\prime}$ \nlongslit of our campaign; see details in Ref. 4); $c$ means the MCMC BH mass \nis used (see Section 2.2); $d$ means that ${\\cal D}_{_{\\rm H\\beta}}$ is taken from the latest \nmeasurements in \\cite{kollatschny2011}. NGC 5548 marked with $e$ is measured from its mean \nannual spectra in the AGN Watch database; the average value is provided here. \nWe first calculate ${\\cal D}_{_{\\rm H\\beta}}$ for each measurement, and then average. In the \nmain text, we use these averaged numbers for the objects with multiple RM \nmeasurements (treated as one point in all figures). For NGC 7469, which was\nmapped twice \\cite{collier1998} and \\cite{peterson2014}, the H$\\beta$ lags are not very \ndifferent but the H$\\beta$ FWHM is very different; we take the values of FWHM \nmeasured by \\cite{kollatschny2011}. NGC 4051 and PG 1700+518 have very small values of \n${\\cal D}_{_{\\rm H\\beta}}$ in Ref. 5, but \\cite{kollatschny2011} provides new measurements, which are used \nhere.\\\\ \\vglue 0.02cm\n\\vspace{-0.2cm}\n\\hspace{0.1in} This table is available in its entirety in a machine-readable \nform in the online journal. A portion is shown here for guidance regarding its \nform and content.\n}\n\\end{deluxetable*}\n\n\n\n\n\n\n\\section{Measurements}\n\\subsection{The Reverberation-mapped AGN sample}\nWe select all AGNs with reverberation mapping (RM) data (here, only the broad H$\\beta$ \nline), which yield the robust BH \nmass estimates needed for our analysis. All RM AGNs before 2013 are summarized\nby \\cite{bentz2013}. We took all 41 AGNs from \\cite{bentz2013}. 
\nThree additional sources (Mrk 1511, NGC 5273, \nKA1858+4850) were subsequently published. Our project to search for \nsuper-Eddington accreting massive black holes (SEAMBHs) has monitored about 25 \ncandidates and successfully measured H$\\beta$ lags ($\\tau_{_{\\rm H\\beta}}$) in 14 AGNs to \ndate \\citep{du2015}, with another five objects monitored between 2014 and 2015\n(to be submitted). We measure Fe {\\sc ii}\\, using the same approach as \\cite{hu2008}\nand \\cite{hu2015}.\nFor reverberation-mapped AGNs without published measurements of Fe {\\sc ii}\\ and \nH$\\beta$ flux, we fit the mean spectra from the monitoring campaigns, using \nthe fitting scheme described in \n\\cite{hu2015}. In short, the spectrum is fitted with several components simultaneously: \n(1) a power law for the continuum, (2) the Fe {\\sc ii} template from \\cite{boroson1992}, \n(3) a host-galaxy template if necessary, (4) broad H$\\beta$, (5) the broad He {\\sc ii} \n$\\lambda 4686$ emission line, and (6) several Gaussians for narrow lines such as [O {\\sc iii}] \n$\\lambda\\lambda$4959, 5007. The \nflux of broad optical Fe {\\sc ii} is measured by integration from 4434 \\AA\\ to 4684 \\AA.\nTable 1 lists the 63 RM AGNs we consider, along with \nthe BH mass, 5100 \\AA\\, luminosity, dimensionless accretion rate, FWHM, \n$\\sigma_{_{\\rm H\\beta}}$, ${\\cal R}_{\\rm Fe}$ and data sources. \n\nThe sample covers a wide range of accretion rates, $\\dot{\\mathscr{M}} \\approx 10^{-3} \n- 10^3$, from the regime of a \\cite{shakura1973} standard disk to a \nslim disk \\citep{abramowicz1988}. We take ${\\cal R}_{\\rm Fe}$ from the published \nliterature if available; otherwise, we measure it from the averaged spectra \nfollowing the spectral fitting scheme of \\cite{hu2008, hu2015}. 
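The Fe II flux measurement described above amounts to a simple numerical integral over the 4434-4684 Angstrom window (a sketch with a made-up, flat flux density; real measurements use the fitted Fe II component):

```python
import numpy as np

# Hypothetical fitted optical Fe II component on a rest-frame
# wavelength grid (flux density in erg/s/cm^2/A; values made up).
wave = np.arange(4400.0, 4700.0, 1.0)
flux_density = np.full_like(wave, 2.0e-16)

# Integrate from 4434 A to 4684 A (trapezoidal rule).
window = (wave >= 4434.0) & (wave <= 4684.0)
w, f = wave[window], flux_density[window]
feii_flux = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))
```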
As the \nvariability of H$\\beta$ is usually much larger than that of Fe {\\sc ii}\\ in \nsub-Eddington AGNs, the uncertainties of ${\\cal R}_{\\rm Fe}$ are mainly governed by \nH$\\beta$ variability, which on average is $\\sim 20$\\%. \n \nWe estimate the BH mass as $M_{\\bullet}=f_{_{\\rm BLR}} V_{\\rm FWHM}^2 c\\tau_{_{\\rm H\\beta}}\/G$, where \n$f_{_{\\rm BLR}}$ is the virial factor, $V_{\\rm FWHM}$ is the H$\\beta$ FWHM, and $G$ is the \ngravitational constant. In practice, the factor $f_{_{\\rm BLR}}$ is calibrated against\nthe $M_{\\bullet}-\\sigma$ relation of inactive galaxies \\citep{onken2004, ho2014}. \nFor consistency with our earlier series of papers, we adopt $f_{_{\\rm BLR}}=1$.\n\n\n\\subsection{Accretion rates and Eddington ratios}\nWe derive accretion rates from the disk model of \\cite{shakura1973},\nwhich has been extensively applied to fit the spectra of quasars and Seyfert 1 \ngalaxies \\citep{czerny1987, sun1989, laor1989, collin2002, brocksopp2006, kishimoto2008, \ndavis2011, capellupo2015}. The effective temperature distribution is \ngiven by $T_{\\rm eff}=6.2\\times 10^{4}\\,\\dot{m}_{\\bullet,0.1}^{1\/4}m_7^{1\/4}\nR_{14}^{-3\/4}$\\,K, where $\\dot{m}_{\\bullet,0.1}=\\dot{M}_{\\bullet}\/0.1M_{\\odot}\\,\n{\\rm yr^{-1}}$, $\\dot{M}_{\\bullet}$ is the mass accretion rate, $m_7=M_{\\bullet}\/10^7\nM_{\\odot}$, and $R_{14}=R\/10^{14}$cm \\citep{frank2002}. Here the effect of the \ninner boundary is neglected because the region emitting optical radiation is \nfar from the boundary. Introducing $x=h\\nu\/kT_{\\rm eff}$, we have the spectral \nluminosity by integrating over the entire disk,\n\\begin{equation}\nL_{\\nu}=1.58\\times 10^{28}\\,\\dot{m}_{\\bullet,0.1}^{2\/3}m_7^{2\/3}\\nu_{14}^{1\/3}\\cos i \n \\int_{x_{\\rm in}}^{\\infty}\\frac{x^{5\/3}}{e^x-1}dx\\,\\,\\,\\,{\\rm erg\\,s^{-1}\\,Hz^{-1}},\n\\end{equation}\nwhere $i$ is the disk inclination relative to the observer and \n$\\nu_{14}=\\nu\/10^{14}$Hz. 
Since long-wavelength photons are radiated from \nlarge disk radii, the integral term in Equation (3) can be well approximated \nby 1.93 for $x_{\\rm in}=0$ \\citep{davis2011}. We thus have\n$\\dot{M}_{\\bullet}=0.53\\left(\\ell_{44}\/\\cos i\\right)^{3\/2}m_7^{-1}~M_{\\odot}~{\\rm yr^{-1}}$,\nand the dimensionless accretion rate\\footnote{The applicability of Eq. (4) \nto SEAMBHs can be justified by the self-similar solution of slim disks \\citep{wang1999a, wang1999b}. \nThe solution shows that the 5100 \\AA\\,\nphotons are emitted from $R_{5100}\/R_{\\rm Sch}\\approx 4.3\\times 10^3\nm_7^{-1\/2}$, and the photon trapping radius $R_{\\rm trap}\/R_{\\rm Sch}\\approx \n144\\dot{\\mathscr{M}}_{100}$, where $R_{\\rm Sch}$ is the Schwartzschild radius. Eq. (4) \nholds provided that $R_{5100}\\gtrsim R_{\\rm trap}$, or $\\dot{\\mathscr{M}}\\lesssim \n3\\times 10^3m_7^{-1\/2}$. No SEAMBH so far has exceeded this limit.}\n\\begin{equation}\n\\dot{\\mathscr{M}}=20.1\\left(\\frac{\\ell_{44}}{\\cos i}\\right)^{3\/2}m_7^{-2},\n\\end{equation}\nwhere $\\ell_{44}$ is the 5100 \\AA\\, luminosity in units of $10^{44}\\,{\\rm erg~s^{-1}}$. \nThis convenient expression can easily convert luminosity and BH mass into \ndimensionless accretion rates. In this paper, we take an average value of \n$\\cos i = 0.75$, which corresponds to the opening angle of the dusty torus \n(e.g., \\citealt{davis2011, du2015}). The uncertainties of $\\dot{\\mathscr{M}}$ due to \n$i$ ($\\in[0,45^{\\circ}]$) are\n$\\Delta\\log\\dot{\\mathscr{M}}=1.5\\Delta\\log\\cos i\\lesssim 0.15$ from Equation (4),\nwhere we took $\\Delta \\log \\cos i\\lesssim0.1$. 
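For concreteness, the virial BH mass and the dimensionless accretion rate of Equation (4) can be evaluated with a short helper (a sketch of ours; cgs constants, with f_BLR = 1 and cos i = 0.75 as adopted in the text):

```python
G_CGS = 6.674e-8      # gravitational constant (cm^3 g^-1 s^-2)
C_CGS = 2.998e10      # speed of light (cm/s)
MSUN_G = 1.989e33     # solar mass (g)
DAY_S = 86400.0       # seconds per day

def mbh_virial(fwhm_kms, tau_days, f_blr=1.0):
    """Virial mass M = f_BLR V_FWHM^2 c tau_Hbeta / G, in solar masses."""
    v = fwhm_kms * 1.0e5  # km/s -> cm/s
    return f_blr * v ** 2 * C_CGS * tau_days * DAY_S / G_CGS / MSUN_G

def mdot_dimensionless(l5100_erg_s, mbh_msun, cos_i=0.75):
    """Equation (4): Mdot = 20.1 (l_44 / cos i)^{3/2} m_7^{-2}."""
    l44 = l5100_erg_s / 1.0e44
    m7 = mbh_msun / 1.0e7
    return 20.1 * (l44 / cos_i) ** 1.5 * m7 ** -2
```

For example, a source with FWHM = 3000 km/s and a 10-day H-beta lag gives a virial mass of about 1.8e7 solar masses, and a 1e7 solar-mass BH at L5100 = 1e44 erg/s gives Mdot of about 31, i.e. well into the super-Eddington regime.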
This uncertainty is significantly\nsmaller than the average error bars of $\\log \\dot{\\mathscr{M}}$ ($\\sim 0.35$), and is \nthus neglected.\n\nThe dimensionless accretion rate is related to the more widely used Eddington \nratio via $L_{\\rm bol}\/L_{\\rm Edd}=\\eta\\dot{\\mathscr{M}}$, where $\\eta$ is the radiative \nefficiency, and $L_{\\rm bol} \\approx 10 L_{5100}$ \\citep{kaspi2000}. \nThe uncertainties of Eddington ratios result from the fact that the bolometric\ncorrection depends on both accretion rates and BH mass \\citep{jin2012}. In \nour following discussion, we will use both $\\dot{\\mathscr{M}}$ and $L_{\\rm bol}\/L_{\\rm Edd}$.\n\n\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[angle=0,width=0.82\\textwidth]{fig1.eps}\n\\end{center}\n\\vglue -0.5cm\n\\caption{\\footnotesize Correlations between ({\\it a}) ${\\cal R}_{\\rm Fe}-\\dot{\\mathscr{M}}$ and \n({\\it b}) ${\\cal D}_{_{\\rm H\\beta}}-\\dot{\\mathscr{M}}$. The Pearson's coefficient, null-probability, \nand scatter of the $X-\\dot{\\mathscr{M}}$ correlation are given by ($r,p,\\Delta_X$). \nIn panel ({\\it a}), the SDSS quasars overlap with the RM AGNs quite well, \nexcept for AGNs with ${\\cal R}_{\\rm Fe}\\gtrsim 1.4$. This could be because these objects \nare super-Eddington accretors, in which the normal $R-L$ relation \\citep{bentz2013} \noverestimates $R_{_{\\rm H\\beta}}$ as well as BH mass \\citep{du2015}, and \nhence $\\dot{\\mathscr{M}}$ is greatly underestimated (see details in the text). In \npanel ({\\it b}), the SDSS sample also overlaps well with the RM AGNs, but the \nlow$-\\dot{\\mathscr{M}}$ AGNs lie beyond the locus of the SDSS sample. There are some \nSDSS quasars with extremely high accretion rates, $\\dot{\\mathscr{M}}\\gtrsim 10^2$, \nsuggesting that we should monitor them in the future SEAMBH project. The \nhistograms indicate the distributions of ${\\cal R}_{\\rm Fe}$, ${\\cal D}_{_{\\rm H\\beta}}$ and $\\dot{\\mathscr{M}}$ on \na normalized scale. 
We note that there is no significant correlation between\n${\\cal R}_{\\rm Fe}$ and ${\\cal D}_{_{\\rm H\\beta}}$, either in the RM AGN or SDSS sample, indicating that \n${\\cal D}_{_{\\rm H\\beta}}$ and ${\\cal R}_{\\rm Fe}$ are independent of each other, although both correlate \nwith $\\dot{\\mathscr{M}}$. Panels ({\\it c}) and ({\\it d}) are the same as ({\\it a}) and \n({\\it b}), but for Eddington ratios. \n}\n\\end{figure*}\n\n\n\n\n\n\\section{Fundamental plane of the BLR}\n\\subsection{Correlations}\nFigure 1{\\it a} and 1{\\it c} show the ${\\cal R}_{\\rm Fe}-(\\dot{\\mathscr{M}},L_{\\rm bol}\/L_{\\rm Edd})$ \nplots and yield the following correlations:\n\\begin{equation}\n{\\cal R}_{\\rm Fe}=\\left\\{\\begin{array}{l}\n (0.66\\pm 0.04)+(0.30\\pm 0.03)\\log \\dot{\\mathscr{M}},\\\\[0.8em] \n (1.20\\pm 0.07)+(0.55\\pm 0.06)\\log \\left(L_{\\rm bol}\/L_{\\rm Edd}\\right).\n \\end{array}\\right.\n\\end{equation}\nWe define the scatter of a correlation as \n$\\Delta_{X}=\\sqrt{\\sum_{i=1}^N(X-X_i)^2\/N}$, where $N$ is the number of \nobjects, and $X$ represents ${\\cal R}_{\\rm Fe}$, ${\\cal D}_{_{\\rm H\\beta}}$, $\\dot{\\mathscr{M}}$, or $L_{\\rm bol}\/L_{\\rm Edd}$. \nThe Pearson's correlation coefficient ($r$), null-probability ($p$), and \nscatters are indicated in the plots. By comparing $(r,p,\\Delta_{\\rm {\\cal R}_{\\rm Fe}})$ in \npanels ({\\it a}) and ({\\it c}), we find that the ${\\cal R}_{\\rm Fe}-\\dot{\\mathscr{M}}$ correlation \nis slightly stronger than that of ${\\cal R}_{\\rm Fe}-L_{\\rm bol}\/L_{\\rm Edd}$. In \nhigh-$\\dot{\\mathscr{M}}$ AGNs, both H$\\beta$ and continuum variability are \nsignificantly smaller than those in sub-Eddington AGNs. On the other hand, \nFe {\\sc ii}\\, reverberates in a very similar fashion to H$\\beta$ with respect to \nthe continuum \\citep{hu2015}. Indeed, it can be seen that the scatter of the \ncorrelation gets larger with decreasing $\\dot{\\mathscr{M}}$ or \n$L_{\\rm bol}\/L_{\\rm Edd}$. 
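The scatter definition above is straightforward to apply. As an illustration (with made-up residuals, not the paper's data), the RMS scatter about the first branch of Equation (5) can be computed as:

```python
import math

# Equation (5), first branch: R_Fe predicted from log(dimensionless accretion rate)
def r_fe_pred(log_mdot):
    return 0.66 + 0.30 * log_mdot

# Scatter as defined in the text: Delta_X = sqrt( sum_i (X - X_i)^2 / N )
def scatter(pred, obs):
    n = len(obs)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)

# Toy example: four synthetic objects, for illustration only
log_mdot = [-1.0, 0.0, 1.0, 2.0]
r_fe_obs = [0.30, 0.70, 1.00, 1.20]
pred = [r_fe_pred(x) for x in log_mdot]
print(round(scatter(pred, r_fe_obs), 3))
```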
The ${\\cal R}_{\\rm Fe}-(\\dot{\\mathscr{M}}, L_{\\rm bol}\/L_{\\rm Edd})$ correlations \nsupport the idea that Fe {\\sc ii}\\, strength is not governed by metallicity \nbut by the ionizing flux and hydrogen density \\citep{verner2004}. \n\nWe plot the ${\\cal D}_{_{\\rm H\\beta}}-(\\dot{\\mathscr{M}}, L_{\\rm bol}\/L_{\\rm Edd})$ relations in Figure 1{\\it b} and \n1{\\it d} and find\n\\begin{equation}\n{\\cal D}_{_{\\rm H\\beta}} =\\left\\{\\begin{array}{l}\n (2.01\\pm 0.05)-(0.39\\pm 0.04)\\log \\dot{\\mathscr{M}},\\\\[0.8em]\n (1.28\\pm0.09)-(0.72\\pm0.08)\\log\\left(L_{\\rm bol}\/L_{\\rm Edd}\\right).\n \\end{array}\\right.\n\\end{equation} \nThe above two correlations are similar, but the former is slightly stronger \nthan the latter. \\cite{collin2006} also found a correlation between \n${\\cal D}_{_{\\rm H\\beta}}$ and $L_{\\rm bol}\/L_{\\rm Edd}$ (see their Figure 6), but their results are much \nweaker than ours. This is mainly due to the lack of high$-\\dot{\\mathscr{M}}$ AGNs in\ntheir sample. We would like to emphasize that the \n${\\cal D}_{_{\\rm H\\beta}}-(\\dot{\\mathscr{M}},L_{\\rm bol}\/L_{\\rm Edd})$ correlations cannot be an artifact of the \ninclusion of FWHM in $\\dot{\\mathscr{M}}$. For a constant $\\sigma_{_{\\rm H\\beta}}$ of RM AGNs, the \naccretion rates span over about 5 dex whereas luminosities span over 4.5 dex;\nhowever, the ${\\cal D}_{_{\\rm H\\beta}}-\\dot{\\mathscr{M}}$ relation has a scatter of only\n$\\Delta_{_{\\cal D}}=0.3-0.4$. The correlations are intrinsic. \n\nFigure 1 also shows, as background, the SDSS DR5 sample of \\cite{hu2008}. \nThe sample comprises 4037 $z\\lesssim 0.8$ \nquasars with criteria of S\/N$\\ge 10$ and EW(Fe {\\sc ii})$\\ge 25$ \\AA\\, (this excludes Fe {\\sc ii}-weak \nquasars). BH masses assume $f_{\\rm BLR} = 1$ and a standard $R-L$ relation\\footnote{ \nThis is an empirical relation between the BLR size and the continuum luminosity. From the recent work \nof Bentz et al. 
(2013), it has the form $R_{\\rm BLR}=33.65\\,\\ell_{44}^{0.53}$ltd. However, \n\\cite{du2015} found that it only applies to sub-Eddington AGNs; it depends on $\\dot{\\mathscr{M}}$ \nfor super-Eddington AGNs. We do not consider \nthe dependence of the $R-L$ relation on $\\dot{\\mathscr{M}}$ for the SDSS sample in this paper.}. \nThe RM AGNs overlap very well with the SDSS sample, on both the \n${\\cal R}_{\\rm Fe}-(\\dot{\\mathscr{M}},L_{\\rm bol}\/L_{\\rm Edd})$ and the ${\\cal D}_{_{\\rm H\\beta}}-(\\dot{\\mathscr{M}},L_{\\rm bol}\/L_{\\rm Edd})$ plots. \nWe note that among the mapped AGNs there is a small population ($\\lesssim 9\\%$; Figure \n1{\\it a} and {\\it c}) of AGNs with ${\\cal R}_{\\rm Fe}>1.4$, which appear to be super-Eddington sources. \nTheir values of $\\dot{\\mathscr{M}}$ are likely underestimated because their black hole masses were \nestimated using the standard $R-L$ relation.\n\n\\begin{figure*}[!t]\n\\begin{center}\n\\includegraphics[angle=0,width=0.45\\textwidth]{fig2a.eps}\n\\includegraphics[angle=0,width=0.45\\textwidth]{fig2b.eps}\n\\end{center}\n\\vglue -0.5cm\n\\caption{\\footnotesize The fundamental plane of AGN BLRs, showing a physical \nconnection between accretion disks and BLRs. \nThe dependent variable is ({\\it left}) $\\dot{\\mathscr{M}}$ and ({\\it right}) \nEddington ratio. The two observables ${\\cal R}_{\\rm Fe}$ and ${\\cal D}_{_{\\rm H\\beta}}$ can be readily \nmeasured from single-epoch spectra, allowing us to constrain the accretion \nstatus of the central engine. }\n\\end{figure*}\n\n\n\n\\subsection{Fundamental Plane}\nThe $({\\cal R}_{\\rm Fe},{\\cal D}_{_{\\rm H\\beta}})-(\\dot{\\mathscr{M}},L_{\\rm bol}\/L_{\\rm Edd})$ relations reflect \nconnections of the BLR structure and dynamics with BH accretion. 
We \ninvestigate whether these two univariate correlations can be unified into a \nsingle bivariate correlation of the form\n\\begin{equation}\n\\log (\\dot{\\mathscr{M}}, L_{\\rm bol}\/L_{\\rm Edd})=\\alpha_k + \\beta_k{\\cal D}_{_{\\rm H\\beta}}+\\gamma_k{\\cal R}_{\\rm Fe},\n\\end{equation}\nwhere $(\\alpha_k,\\beta_k,\\gamma_k)$ are coefficients to be determined by \ndata ($k=1,2$). We define\n\\begin{equation}\n\\chi_k^2=\\frac{1}{N}\\sum_{i=1}^N\n \\frac{\\left(\\log {\\cal A}_k^i-\\alpha_k-\\beta_k{\\cal D}_{_{\\rm H\\beta}}^i-\\gamma_k{\\cal R}_{\\rm Fe}^i\\right)^2}\n {\\sigma_{\\scriptscriptstyle{{\\cal A}_i}}^2+\n \\beta_k^2\\sigma_{\\scriptscriptstyle{{\\cal D}_{_{\\rm H\\beta}}^i}}^2+\\gamma_k^2\\sigma_{\\scriptscriptstyle{{\\cal R}_{\\rm Fe}^i}}^2},\n\\end{equation}\nwhere ${\\cal A}_k=(\\dot{\\mathscr{M}},L_{\\rm bol}\/L_{\\rm Edd})$, \n$\\sigma_{\\scriptscriptstyle{{\\cal A}_i}},\\sigma_{\\scriptscriptstyle{{\\cal D}_{_{\\rm H\\beta}}^i}}$ \nand $\\sigma_{\\scriptscriptstyle{{\\cal R}_{\\rm Fe}^i}}$ are the error bars of $\\log{\\cal A}$, \n${\\cal D}_{_{\\rm H\\beta}}$, and ${\\cal R}_{\\rm Fe}$ of the $i-$th object, respectively. Minimizing $\\chi_k^2$,\nwe obtain \n$$\n\\alpha_1=2.47\\pm 0.34;~~~\\beta_1=-1.59\\pm 0.14;~~~{\\rm and}~~\\gamma_1=1.34\\pm 0.20,\n$$\n$$\n\\alpha_2=0.31\\pm 0.30;~~~\\beta_2=-0.82\\pm 0.11;~~~{\\rm and}~~\\gamma_2=0.80\\pm 0.20.\n$$\nThe error bars of $(\\alpha_k,\\beta_k,\\gamma_k)$ are derived from bootstrap \nsimulations. The bivariate correlations, plotted in Figure 2, are much \nstronger than the individual correlations of Figure 1 (see the correlation \ncoefficients and null-probability). We call these new correlations the \nfundamental plane of the BLR.\n\nThe implications of Equation (7) are exciting. From two simple measurements \nof a single-epoch spectrum of a quasar---strength of Fe {\\sc ii}\\ and shape of broad \nH$\\beta$---we can deduce the status of its accretion flow. 
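For concreteness, plugging the best-fit $k=1$ coefficients into Equation (7) gives a one-line estimator. The input values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Fundamental plane, Equation (7), with the k=1 (accretion-rate) coefficients
ALPHA1, BETA1, GAMMA1 = 2.47, -1.59, 1.34

def log_mdot_from_plane(d_hbeta, r_fe):
    """log(dimensionless accretion rate) from two single-epoch observables:
    d_hbeta = FWHM/sigma of broad H-beta, r_fe = Fe II / H-beta flux ratio."""
    return ALPHA1 + BETA1 * d_hbeta + GAMMA1 * r_fe

# Illustrative quasar with D_Hbeta = 2.0 and R_Fe = 0.66:
log_mdot = log_mdot_from_plane(2.0, 0.66)
print(round(log_mdot, 2), round(10 ** log_mdot, 1))  # ~0.17, i.e. mdot ~ 1.5
```

A rounder broad H$\beta$ profile (smaller ${\cal D}_{_{\rm H\beta}}$) or stronger Fe {\sc ii} pushes the inferred $\dot{\mathscr{M}}$ up, in line with the signs of $\beta_1$ and $\gamma_1$.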
This can be very \nuseful when applied to large samples of quasars to investigate the cosmological \ngrowth of BHs. Our method can be usefully applied to quasars with suitable \nspectroscopy in the rest-frame H$\\beta$ region, for which the strength of Fe {\\sc ii}\\ \ncan be measured or constrained.\n\n\\subsection{Application to SDSS sample}\nWe apply the $\\dot{\\mathscr{M}}-$plane (Equation 7) to a sample of \n4037 objects \\citep{hu2008}, which were selected from the\nSDSS DR5 sample composed of $N_{\\rm tot}\\approx 15,000$ quasars with $z\\lesssim 0.8$. \nWe calculate fractions of quasars with $\\dot{\\mathscr{M}}\\ge \\dot{\\mathscr{M}}_c$,\n$\\delta=N_{{\\dot{\\mathscr{M}}_c}}\/N_{\\rm tot}$, where $N_{{\\dot{\\mathscr{M}}_c}}$ \nis the number of quasars with $\\dot{\\mathscr{M}}\\ge \\dot{\\mathscr{M}}_c$ and\n$\\dot{\\mathscr{M}}_c$ is the critical accretion rate in question. For objects with $\\dot{\\mathscr{M}}\\ge 3$, we \nfind $\\delta_3=N_3\/N_{\\rm tot}\\approx 0.18$. Similarly,\nwe have $\\delta_{10}=N_{10}\/N_{\\rm tot}\\approx 0.12$ and \n$\\delta_{100}=N_{100}\/N_{\\rm tot}\\approx 0.02$.\nThese numbers show that\nsuper-Eddington accreting AGNs are quite common in the Universe at $z<0.8$. We should note \nthat these fractions are lower limits, as a result of the selection criteria imposed by \nHu et al. A detailed application of our technique to the latest sample of SDSS \nquasars will be presented in a separate paper.\n\n\\section{Conclusions}\nThis paper studies correlations among three dimensionless AGN parameters: \naccretion rate (or Eddington ratio), shape of the broad H$\\beta$ line, and \nflux ratio of optical Fe {\\sc ii}\\, to H$\\beta$. A strong correlation among them is \nfound, which we denote as the fundamental plane of AGN BLRs (Equation 7). 
\nThe BLR fundamental plane enables us to conveniently explore the accretion \nstatus of the AGN central engine using single-epoch spectra, opening up many \ninteresting avenues for exploring AGNs, including their cosmological evolution.\nA simple application of the BLR fundamental plane shows that super-Eddington \naccreting AGNs are quite common among low-redshift quasars.\n\n\\acknowledgements{ This research \nis supported by the Strategic Priority Research Program - The Emergence of Cosmological Structures \nof the Chinese Academy of Sciences, Grant No. XDB09000000, by NSFC grants NSFC-11173023, -11133006, \n-11373024, -11233003 and -11473002, and a NSFC-CAS joint key grant of U1431228.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzpipv b/data_all_eng_slimpj/shuffled/split2/finalzzpipv new file mode 100644 index 0000000000000000000000000000000000000000..6a3b73fffe814f9b59f4122d4ee3af948d5bc15c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzpipv @@ -0,0 +1,5 @@ +{"text":"\\section{Motion of a point charge in a Coulomb potential well}\n\nThe behavior of a point charge in a Coulomb potential well has been studied well\n enough (see, e.g., \\cite{1}). By contrast, the behavior of {\\itshape\\bfseries a distributed} charge in a Coulomb potential well reveals some\n specific features. And it is the specific features arising in the behavior of {\\itshape\\bfseries a distributed} charge that we are going to address in the present paper. \n\nWe start by assuming a negative distributed charge with a density \n$\\rho(\\mathbf{r})$ to be placed into the field of a point positive charge $e$ of infinite mass at the origin of a coordinate system. For the sake of convenience, we shall call this charge a nucleus. 
Our goal will be to reveal the specific features in the behavior of the distributed charge \n$\\rho(\\mathbf{r})$ in a Coulomb potential well.\nWe shall look for the solution of this problem in the framework of the approach that was employed, for instance, in monograph \\cite{1}, i.e., in the nonrelativistic approximation. This approximation is well justified.\nFor instance, in an atom any element of charge located at a distance of the Bohr radius from the nucleus has a velocity on the order of $\\alpha c$, where $\\alpha$ is the fine structure constant, and $c$ is the velocity of light. Because in the nonrelativistic approximation the velocities of motion of individual charge elements are small, we will disregard in what follows the generation of magnetic field induced by charge motion. Said otherwise, we are going to neglect the vector potential $\\mathbf{A}(\\mathbf{r})$ induced by charge motion and assume the elements of a distributed charge to feel only the field of the scalar potential $\\Phi(\\mathbf{r})$.\n\nWe shall address in this paper the case where the density of the distributed charge is small. This licenses us to neglect the potential of the distributed charge $\\rho(\\mathbf{r})$ itself compared with that of the nucleus, and assume each element of the distributed charge to move only in the field of a point charge $e$ with the spherical potential $\\Phi(r)=e\/r$.\n\nNext we choose in the distributed charge a constant element of charge $dq$ with a mass $dm$ and, assuming this element to be a point charge, recall some features in the behavior of a point charge $dq$ in a Coulomb potential well, which we are going to use later on. (We shall denote this element sometimes by $dq$, and sometimes by $dm$, bearing in mind that in all cases we will deal with an element of charge $dq$ and mass $dm$). A comment is here, however, in order.\n\nAn element of charge $dq$ moving with acceleration must radiate energy. 
Our analysis of the motion of this element in the field of the nucleus will, however, be conducted under the assumption that the element does not radiate. Indeed, the element under consideration, rather than being a single isolated element of charge, has been selected by us out of the total distributed charge. As will be demonstrated later on, there exist such states of a distributed charge which do not radiate. It is only such states that will be studied in this paper. Therefore in consideration of the motion of a charge element isolated from the total charge we also shall assume that this element does not radiate. Note that a single isolated element of charge would, by contrast, radiate.\n\n\\medskip\n\nThe most convenient approach to identifying the significant details in the behavior of a point charge $dq$ in a Coulomb potential well is to resort to monograph \\cite{1}, \\S~15. It describes the behavior of a point particle in a field whose force is inversely proportional to $r^2$. Among such fields are Newton's gravitational and Coulomb's electrostatic fields.\nThe gravitational forces being weak, we are going to neglect them. The only force we will take account of is the Coulomb interaction of a negative element of charge $dq$ with the positively charged nucleus. Because the element $dq$ separated out from the total charge does not radiate (a point to be substantiated later on), one can invoke here all the conclusions drawn in \\cite{1}, \\S~15.\n\nConsider now the specific features in the behavior of an element of charge we shall treat in the discussion to follow.\n\nIn a Coulomb potential well, a constant element of charge \n$dq$ with a mass $dm$ can move in a circle or an ellipse, \nand the ellipse can degenerate into a straight line. The \nenergy of an element of charge depends on the semimajor axis \nof the ellipse (or on the radius of the circle). The actual \nshape of the ellipse (i.e., its semiminor axis) depends on \nthe angular momentum of the particle. 
Thus, all ellipses with \nthe same semimajor axis but different semiminor axes have the \nsame energies but different angular momenta, down to the zero \nmomentum (in which case the ellipse degenerates into a straight \nline). Said otherwise, elements of the same energy \ncan move in a Coulomb potential well along trajectories which \ndiffer in the value of the angular momentum.\n\n\nLet us analyze the various trajectories along which an \nelement of charge $dq$ with a mass $dm$ can move in the \ncase where the total energy of the element in each trajectory \nis the same. \n\n\\begin{figure}\n\\label{F1}\n\\begin{center}\n\\includegraphics[scale=1]{Figure1.eps}\n\\caption{}\n\\end{center}\n\\end{figure}\n\n\nFigure~1 illustrates, for an element $dm$ located at some point\n$\\mathbf r$, several of all the possible orbits: a circular orbit $a$, eight elliptical orbits \n$(b-k)$ with different eccentricities (and, hence, different \nangular momenta), and a linear orbit $l$ into which the \nellipse degenerates at an eccentricity of unity. This orbit \npasses through the nucleus. All the orbits are \ncharacterized by identical semimajor axes (if the energies \nof the elements are equal, the semimajor axes of the ellipses \nshould likewise be equal). All orbits lie in the same plane. \nThe elements of charge in all orbits rotate in the same \nsense.\n\nAll orbits share a common focus. In this focus (in our \nfigure, this is the center $O$ of the circle) the nucleus \nis located. Using the focal properties of \nellipses, one can readily show that each elliptic trajectory \nintersects a circular orbit at the point where this ellipse \nintersects its semiminor axis. The dashed lines confine the \nregion of allowed trajectories along which an element $dm$\ncan move.\n\nSignificantly, the extended trajectories with an eccentricity close to unity pass close to the nucleus. 
Therefore, within a small region in the vicinity of the nucleus the condition that the element velocities be small does not hold. This region, however, is not large.\n\nBecause the energy of an element in each trajectory is the same, element $dm$ can move over {\\itshape\\bfseries any} of the trajectories specified. As follows from Fig.~1 and an analysis of the trajectories along which element $dm$ can propagate, all possible directions of motion of the element at a given point are strictly confined to the angle $\\pi$ (a few words on a semicircle located above orbit $l$---if the elements rotated in their orbits in the opposite direction, this semicircle would lie below orbit $l$).\n\nAn element residing in a Coulomb potential well moves in one plane. Note, however, that the number of planes which can be passed through the line connecting the element under consideration with the nucleus is infinite. Any of these planes can confine the trajectory of the given element, because the Coulomb field possesses spherical symmetry. In this case, all possible directions of the velocities of element motion at a given point lie inside the solid angle $2\\pi$ (a hemisphere).\nIn Fig.~1, this hemisphere extends above the plane passed through orbit $l$ perpendicular to the drawing. If elements in their orbits rotated in the opposite sense, the hemisphere would be located below this plane. Additional conditions related to the position of this plane will be specified below (in Sections 4 and 5).\n\nConsider now the magnitude of the velocities with which \nelements of charge having equal energies move in different \ndirections. The reader is referred to monograph \n\\cite{1}, \\S~14. The energy \n$d\\mathcal E$ of an element of charge $dq$ with a mass $dm$ \nmoving in a central field is conserved. 
Recalling the standard \nexpression for energy\n\n\\begin{equation}\n\\label{1}\nd\\mathcal E=d\\mathcal E_{Kin}+d\\mathcal E_{Pot}=\n\\frac{dm\\upsilon^2}{2}+d\\mathcal E_{Pot},\n\\end{equation}\nwhere $d\\mathcal E_{Kin}$ is the kinetic, and $d\\mathcal E_{Pot}$, \nthe potential energy of the element, and $\\upsilon$ is its \nvelocity, we obtain\n\n\\begin{equation}\n\\label{2}\n\\upsilon=\\sqrt{\\frac{2}{dm}(d\\mathcal E-d\\mathcal E_{Pot})}.\n\\end{equation}\n\nThis means that the magnitude of the velocity of an element \nwith a given energy $d\\mathcal E$ depends only on the potential \nenergy of this element $d\\mathcal E_{Pot}$. Equation (\\ref{2}) \ndoes not contain any other parameters; in particular, it contains no \nparameters which would suggest a dependence of the magnitude \nof the velocity on its direction. As for the potential energy \nof this element, it depends only on the position of this element, \n$d\\mathcal E_{Pot}=edq\/r$. \nOne comes inevitably to the conclusion that the {\\itshape\\bfseries\n{magnitude of the velocity}} of an element at a given point \n{\\itshape\\bfseries{does not depend on \nthe direction}} of motion of this element.\n\nThis is a very significant statement, and, hence, it has to be \ncorroborated in more detail and in a more revealing way. \nRewrite Eq. (\\ref{1}) in the form\n\n\\begin{equation}\n\\label{3}\nd\\mathcal E=\\frac{dm}{2}(\\dot r^2+r^2\\dot\\zeta^2)+d\\mathcal E_{Pot}\n=\\frac{dm\\dot r^2}{2}+\\frac{dM^2}{2dmr^2}+d\\mathcal E_{Pot}.\n\\end{equation}\nHere $\\zeta$ is the angular coordinate in the plane in which \nthe element $dm$ rotates, and $dM$ is the angular momentum of \nthis element. Whence we come to \n\n\\begin{equation}\n\\label{4}\n\\dot r^2=\\frac{2}{dm}(d\\mathcal E-d\\mathcal E_{Pot})-\n\\frac{dM^2}{dm^2r^2}.\n\\end{equation}\nThe term $r^2\\dot\\zeta^2$ can be derived from the expression \nfor the angular momentum $dM=dmr^2\\dot\\zeta$. 
Substituting it \ninto the expression for the velocity, we come to\n\n\\begin{equation}\n\\label{5}\n\\upsilon=\\sqrt{\\dot r^2+r^2\\dot\\zeta^2}=\\sqrt{\\frac{2}{dm}\n(d\\mathcal E-d\\mathcal E_{Pot})-\\frac{dM^2}{dm^2r^2}+\n\\frac{dM^2}{dm^2r^2}}.\n\\end{equation}\n\nAs seen from Eq.~(\\ref{5}), terms containing the angular \nmomentum cancel. Said otherwise, the magnitude of the \nvelocity does not depend on the angular momentum. Figure~1 \nshows, however, that it is different angular momenta of an \nelement of a given energy residing at a point that account \nfor the different directions of the element's velocity. \n\nThis brings us to the conclusion that the magnitude of the velocity \n{\\itshape\\bfseries at a given point} (i.e., at point $\\mathbf r$ where element $dm$ resides) does not depend on the trajectory on which element $dm$ moves. At a given point, the element $dm$ has the same velocity, no matter what trajectory is involved.\n\nOne more comment is here in order. A separate element of charge $dq$ rotating in a {\\itshape\\bfseries circular} trajectory radiates an ac field with the frequency of rotation. If, however, elements of the charge completely fill the circular trajectory, this charge distribution will not radiate ac fields, because this state is a steady state which does not depend on time. The trajectory of such a state is closed and is actually an analog of a circular current. And a circular current, as is well known, does not radiate ac fields while generating constant electric and magnetic fields.\n\n\n\\section{Motion of a distributed charge in a Coulomb potential well}\n\n\nNow recall that we are considering motion not of a point but rather \nof a distributed charge.\n\nBecause in all trajectories which are shown in Fig.~1\nthe element of charge $dq$ has \nthe same energy, it can move \n{\\itshape\\bfseries along any} trajectory. Moreover, \nthis element of charge can move {\\itshape\\bfseries in all \ntrajectories at the same time}. 
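The cancellation in Eq.~(5) can also be verified numerically: for Kepler-type orbits with the same semimajor axis $a$ (hence the same energy) but different eccentricities, the speed at a common radius $r$ is identical. A minimal sketch, in arbitrary units of our own choosing ($dm=1$, force constant $k=1$, $a=1$):

```python
import math

# Units with dm = 1 and force constant k = 1; semimajor axis a = 1.
# Total energy of any such orbit: E = -k/(2a); potential energy: U(r) = -k/r.
M, K, A = 1.0, 1.0, 1.0
E = -K / (2.0 * A)

def speed_components(e, r):
    """Speed at radius r as sqrt(v_r^2 + v_t^2), using Eqs. (3)-(4) with
    angular momentum dM = sqrt(m k a (1 - e^2)) for eccentricity e."""
    ang_mom = math.sqrt(M * K * A * (1.0 - e * e))
    u = -K / r
    vr2 = (2.0 / M) * (E - u) - ang_mom ** 2 / (M ** 2 * r ** 2)
    vt = ang_mom / (M * r)
    return math.sqrt(max(vr2, 0.0) + vt * vt)  # clamp tiny negative round-off

def speed_energy(r):
    """Speed from energy conservation alone, Eq. (2)."""
    return math.sqrt((2.0 / M) * (E + K / r))

# r = a lies on every orbit with semimajor axis a (perihelion <= a <= aphelion)
speeds = [speed_components(e, 1.0) for e in (0.0, 0.3, 0.6, 0.9)]
print(speeds, speed_energy(1.0))  # all four speeds coincide with Eq. (2)
```

The radial and tangential contributions redistribute as the eccentricity changes, but their sum depends only on $r$ and the energy, exactly as Eq.~(5) states.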
This \ncan be visualized in the following way. Divide the element \nof charge $dq$ into $k$ parts. Then one element of charge \n$dq'=dq\/k$ with mass $dm'=dm\/k$ can move along one elliptical \ntrajectory, another \ncharge $dq'$, along another trajectory, and so on. As $k$ \ntends to infinity, all the trajectories will criss-cross \nall of the allowed region containing trajectories of the \nelements of charge $dq'$ of the same energy but with \ndifferent angular momenta. Generally speaking, this \nprocess may be considered not as motion of elements of \ncharge along trajectories but rather as motion of a \ncontinuous medium, of a {\\itshape\\bfseries charge wave}. \n\nConsider now the velocity with which elements of a distributed charge move. As already shown, elements of a distributed charge can propagate from a given point along all trajectories simultaneously. In Section~1 it was demonstrated that elements of charge have at a specific point the same velocity in any trajectory. At this specific point, these trajectories are characterized by different directions (within a solid angle $2\\pi$, see Fig.~1). Hence, when a charge propagates over all trajectories simultaneously, the directions of velocities of different elements at a given point are confined within the solid angle $2\\pi$, while their magnitude is the same.\n\nThis motion {\\itshape\\bfseries of a charge wave} may be treated from two angles, to wit, either as motion of different elements of charge on all trajectories at the same time, or as motion of a wave. In the first case, the behavior of each element is described by equations of mechanics (allowing for the potentials in which these elements move; but for the description of the overall picture to be complete, one will have to take into account that the number of these elements is infinite). In the second case, one will have to resort to equations in partial derivatives. 
For description of the behavior of any continuous medium, and, in particular, of a distributed charge, the second model appears to be simpler and, thus, preferable. For the present, however, we are going to adhere to the first approach -- it appears more graphic (while certainly more cumbersome). \n\nTurning now to Fig.~1, we see immediately that different trajectories (i.e., trajectories characterized by the same energy but different momenta) cross one another. Said otherwise, it turns out that the charges moving along all trajectories simultaneously {\\itshape\\bfseries can \"interpenetrate\"} one another without changing their trajectories.\n\nThis would seem at first glance to be in contradiction with the well-known statement that like charges repel. Therefore the assumption that like-charged elements can penetrate into one another may sound surprising, to say the least. But point charges usually considered in science have an infinite density at the point of charge. Therefore like point charges cannot penetrate into one another and, moreover, cannot even approach one another to a short enough distance. The distributed charges considered by us have a specific finite charge density. We are going to show now that charges can penetrate into one another, depending on what external forces act on these charges and what relevant forces are generated by the charges themselves.\n\nThe force $d\\mathbf F=\\mathbf Edq$ acting on any element of charge \n$dq=\\rho dV$ is defined by the magnitude of the electric field \n$\\mathbf E$ at this point. 
The field $\\mathbf E(\\mathbf r)$ at a given point $\\mathbf r$ is a sum of the field $\\mathbf E_N(\\mathbf r)$ generated by the nucleus and the field $\\mathbf E_{\\rho}(\\mathbf r)$ created by all the charges surrounding the given charge:\n\n\\begin{equation}\n\\label{6}\n\\mathbf E(\\mathbf{r})=\\mathbf E_N(\\mathbf{r})+\\mathbf E_{\\rho}(\\mathbf{r})=\n\\frac{e\\mathbf r}{r^3}+\\int{\\frac{\\rho(\\mathbf r')(\\mathbf r-\\mathbf r')dV'}\n{|\\mathbf r-\\mathbf r'|^3}}.\n\\end{equation}\n\nThe presence of expression $|\\mathbf r-\\mathbf r'|^3$ in the denominator of the second term in equality (\\ref{6}) might suggest the erroneous idea that the field $\\mathbf E_{\\rho}(\\mathbf{r})$ at point $\\mathbf r$ is large. In this case the element of charge located at point $\\mathbf r''$, near point \n$\\mathbf r$, will not be able to approach in its motion point \n$\\mathbf r$, because both these elements of charge have like signs.\n One can readily show that this is not so: indeed, the field \n$\\mathbf E_{\\rho}(\\mathbf{r})$ is finite at point $\\mathbf r$, and the difference between the fields at the two neighboring points $\\mathbf r$ and $\\mathbf r''$ tends to zero as the magnitude of \n$|\\mathbf r-\\mathbf r''|$ approaches zero. \n\nTo prove this, we use the approach employed in monograph \\cite{2}, \\S~44. Circumscribe a sphere of radius $R_0$ around point $\\mathbf r$. The field generated by the charges outside the sphere $R_0$ is finite, because these charges are at a finite distance larger than $R_0$ from point $\\mathbf r$. We have now to verify that the field $\\hat{\\mathbf E}(\\mathbf r)$ generated by charges confined inside the sphere $R_0$, i.e., in the immediate vicinity of point $\\mathbf r$, is also finite. 
Denoting $|\\mathbf r-\\mathbf r'|=R$, and \n$(\\mathbf r-\\mathbf r')=\\mathbf R$, we can write for this part of the field:\n\n\\begin{equation}\n\\label{7}\n|\\hat{\\mathbf E}(\\mathbf r)|\\le\\int{\\frac{|\\rho(\\mathbf r')\\mathbf R|d\\hat V}\n{R^3}},\n\\end{equation}\nwhere integration of $d\\hat V$ is performed over the volume of the sphere \n$R_0$. But\n$$\n|\\rho(\\mathbf r')\\mathbf R|\\le |\\rho_{max}|R,\n$$ \nwhere $|\\rho_{max}|$ is the absolute value of the maximum density of charge inside sphere $R_0$.\n\nWe finally come to \n$$\n|\\hat{\\mathbf E}(\\mathbf r)|\\le|\\rho_{max}|\\int{\\frac{d\\hat V}\n{R^2}}.\n$$ \n\nIntroducing spherical coordinates $R, \\vartheta, \\varphi$ with the center at point $\\mathbf r$, with $d\\hat V=R^2\\sin\\vartheta\\, d\\vartheta\\, d\\varphi\\, dR$,\nand integrating with respect to $R$ from $0$ to $R_0$, we obtain\n\n\\begin{equation}\n\\label{8}\n|\\hat{\\mathbf E}(\\mathbf r)|\\le 4\\pi|\\rho_{max}|R_0.\n\\end{equation}\n\nThus $\\hat{\\mathbf E}(\\mathbf r)$ is a finite quantity tending to zero with decreasing radius of the sphere $R_0$. Moreover, this immediately suggests a conclusion that the difference between the values of vector $\\mathbf E$ \nat two adjacent points, for instance, $\\mathbf r$ and $\\mathbf r''$, tends to zero as the distance between these points approaches zero. Suppose that these two points are located inside the sphere $R_0$. The field created by charges outside the sphere $R_0$ is continuous, because these charges are at finite distances from points $\\mathbf r$ and $\\mathbf r''$.\n As for the field $\\hat{\\mathbf E}$ generated by charges confined inside the sphere $R_0$, the strength of this field in absolute magnitude, as proved above, cannot be larger than the value of $4\\pi|\\rho_{max}|R_0$. 
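The bound of Equation (8) can be probed with a concrete density. For the illustrative choice $\rho=g\,x'$ inside the sphere $R_0$ (our own example, not from the text), the field at the sphere's center is $2\pi g R_0^2/3$ analytically, comfortably inside the bound $4\pi|\rho_{max}|R_0$ with $|\rho_{max}|=gR_0$. A Monte Carlo sketch:

```python
import math, random

def field_at_center_mc(g=1.0, r0=1.0, n=200000, seed=0):
    """Monte Carlo estimate of E(0) = -int rho(r') r'/R^3 dV over the sphere
    R0, for the density rho = g*x'.  Points are sampled uniformly in the
    ball; the 1/R^2 singularity is integrable for this density."""
    rng = random.Random(seed)
    vol = 4.0 * math.pi * r0 ** 3 / 3.0
    sx = sy = sz = 0.0
    count = 0
    while count < n:
        x = rng.uniform(-r0, r0)
        y = rng.uniform(-r0, r0)
        z = rng.uniform(-r0, r0)
        r2 = x * x + y * y + z * z
        if r2 > r0 * r0 or r2 == 0.0:
            continue                      # reject points outside the ball
        count += 1
        rho = g * x
        r3 = r2 * math.sqrt(r2)
        sx -= rho * x / r3
        sy -= rho * y / r3
        sz -= rho * z / r3
    return (vol * sx / n, vol * sy / n, vol * sz / n)

ex, ey, ez = field_at_center_mc()
e_mag = math.hypot(ex, math.hypot(ey, ez))
bound = 4.0 * math.pi * 1.0 * 1.0   # 4*pi*|rho_max|*R0 with rho_max = g*R0 = 1
print(e_mag, 2.0 * math.pi / 3.0, bound)  # |E| ~ 2*pi/3, well below the bound
```

The estimate reproduces the analytic $|E|=2\pi/3\approx 2.09$, roughly a factor of six below the worst-case bound, which illustrates that Eq. (8) is a generous but finite ceiling.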
This part of the field therefore also changes continuously and tends to zero as $R_0$ approaches zero.\n\nThus the electric field $\\mathbf E$ surrounding any element of the distributed charge $dq=\\rho dV$ is finite and varies continuously. And this is why the force $d\\mathbf F=\\mathbf Edq$ acting on this element of charge is finite and varies continuously. The element of distributed charge will be driven by this force to move in the direction of the total force acting at this point. \n\nIn actual fact, the statement that charges can pass through \none another does not carry anything supernatural in it. For \ninstance, electromagnetic fields can penetrate one into or \nthrough the other without at the same time affecting one \nanother---this is nothing but the standard principle of \nsuperposition. Two radar beams can cross without interaction; \nthis is just penetration of ac fields through one another. \nSuperposition of one dc field on another (the principle of \nsuperposition) may be regarded as penetration of one field \ninto another. Significantly, in this process the fields do \nnot act in any way on one another.\n\nAs for the charges, no statements concerning passage of one \ncharge through another without direct action on one another \n(interaction of charges is taken into account through the \nfields created by these charges) have thus far been made, \nalthough the principle of superposition is valid for charges \nas well. This statement should, however, be made. \n\nThis paper was intended to address the case of small density of the distributed charge. This means that we shall neglect the second term in expression (\\ref{6}) compared to the first one. 
In this case, any element of charge $dq$ with mass $dm$ moves only in the field of charge of the nucleus \n$e$, and for this element all the conditions specified in Section~1 are fully met, and the motion of this element will be subject to the laws described in the above monograph \\cite{1}, \\S~15.\n\nWe have considered earlier a set of trajectories, both elliptical and circular, which are characterized by the same energy. In a Coulomb potential well, however, elements moving along different, including circular, trajectories may have different energies. The potential energy of an element in a circular trajectory is constant and depends only on the distance of this element from the nucleus, $d\\mathcal E_{Pot}=edq\/r$. Each element in any circular trajectory can be associated with a set of elliptical trajectories with the same energy (see Fig.~1). This makes the total set of all trajectories extremely complex.\n\n\n\\section{Distributed charge has no spherical symmetry}\n\nConsider the shape that a charge can have in a Coulomb potential well.\n\nIn a spherical coordinate system $r, \\vartheta, \\varphi$ the angular part of any distribution of charges can be presented in the form of an expansion in spherical functions $Y_{lm}(\\vartheta, \\varphi)$. Let us see what spherical functions can be employed in the description of a distributed charge in a Coulomb potential well.\n\nThe simplest spherical function is the spherically symmetric function \n$Y_{00}(\\vartheta, \\varphi)$. This function should always be present in the expansion (except for the cases where the total charge of the distribution is zero). This has the following natural explanation. The integral over all space of a charge expanded in spherical functions $Y_{lm}(\\vartheta, \\varphi)$ yields the total charge $Q$. But the only angular function whose integration yields a nonzero result is $Y_{00}(\\vartheta, \\varphi)$. Integration of all other functions will yield zero.
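This is simply the standard orthonormality of the spherical harmonics: since $Y_{00}=1\/\\sqrt{4\\pi}$ is a constant,\n$$\n\\int Y_{lm}(\\vartheta, \\varphi)\\,d\\Omega=\\sqrt{4\\pi}\\int Y_{lm}Y^*_{00}\\,d\\Omega=\\sqrt{4\\pi}\\,\\delta_{l0}\\delta_{m0},\n$$\nso the angular integral vanishes for every $(l,m)\\neq(0,0)$.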
\nTherefore only the function $Y_{00}(\\vartheta, \\varphi)$ can describe the presence itself of a charge in a volume. All the other angular functions participating in the expansion can only change the shape of the charge distribution, while not being capable of removing or adding a charge.\n\nOn the other hand, a distributed charge cannot be characterized with the use of one angular function $Y_{00}(\\vartheta, \\varphi)$ only. This becomes evident from the following consideration. A Coulomb potential well is spherically symmetric. Therefore, an elliptical trajectory of propagation of an element of charge may lie in any plane passing through the nucleus (i.e., through the origin of coordinates).\n\nAn analysis of all elements of a distributed charge reveals that their trajectories lie {\\itshape\\bfseries in all planes simultaneously}. Consider now different planes in which the elliptical trajectories of elements may lie. We choose for this purpose one of such planes, rotate it successively and follow several trajectories of motion of an element. The continuity of the distributed charge allows this operation.\n\n\\begin{figure}\n\\label{F2}\n\\begin{center}\n\\includegraphics[scale=1]{Figure2.eps}\n\\caption{}\n\\end{center}\n\\end{figure}\n\nThis situation is visualized in Fig.~2. It shows three planes which may confine the trajectories of motion $A$, $B$ and $C$. The nucleus is located at point $O$. A $K-K$ line passing through the nucleus is drawn in plane $A$. One can see three trajectories in plane $A$: a circular trajectory $a1$, an elliptical trajectory $a2$, and another elliptical trajectory $a3$, which is symmetric to $a2$ about the $K-K$ line.\nIt is shown that all elements lying in plane $A$ rotate in the same sense. We shall turn plane $A$ about the $K-K$ line in angle $\\eta$ starting from an initial direction. 
Because the Coulomb potential is spherically symmetric, and the charge continuous, it can be expected that on turning plane $A$ about the $K-K$ line through angle $\\pi$, we shall come to the state in which all trajectories will coincide with the ones that had been there before the turn of the plane.\n\nLet us perform this operation of the turn. Figure~2 shows two positions of the plane after a turn by $\\pi\/2$ and by an angle $\\pi$. Turning plane $A$ about the $K-K$ axis through an angle $\\eta=\\pi\/2$ brings us to plane \n$B$ with trajectories $b1$, $b2$, $b3$, and through $\\eta=\\pi$, to plane $C$ with trajectories $c1$, $c2$, $c3$. \nSignificantly, trajectory $c1$ in plane $C$ will exactly coincide with trajectory $a1$ in plane $A$. Similarly, trajectory $a2$ will merge with trajectory $c3$, and trajectory $a3$, with $c2$. Thus for each elliptical trajectory in plane $A$ there will always be the corresponding trajectory in plane $C$. After the turn of the plane, these trajectories will coincide.\n\nIt will turn out, however, that motion along all these trajectories after completion of the turn of the plane (i.e., in position $C$) will occur in the sense opposite to that in which the elements moved in the plane before its turn (that is, in position $A$). The + sign on planes $A$, $B$, $C$ is put for the sake of convenience in following the effect of plane rotation.\n\nSince we are considering a {\\itshape\\bfseries distributed charge}, its elements should move simultaneously in all planes, including planes $A$ and $C$. As our analysis showed, however, motion in the latter planes should occur in opposite directions. All elements just cannot rotate simultaneously in opposite directions (this would involve loss of energy).\n\nIt thus turns out that motion over these trajectories is impossible altogether. Hence, there should exist a direction in which trajectories for motion of charges do not exist at all.
In the spherically symmetric potential well of the nucleus one cannot, however, isolate a specified direction (for instance, the direction from which we reckoned the angle $\\eta$ in Fig.~2). Hence, a distributed charge must have a specified direction, a direction in which the distributed charge does not exist. In this case, with no charge in this direction, one cannot expect existence of trajectories along which propagation could occur in opposite directions.\n\nIn other words, although the Coulomb potential is spherically symmetric, distributed charge loses spherical symmetry.\n\nThis appears only natural, because the angular momentums in plane $A$ (i.e., before the turn of the plane) do not coincide with those in plane $C$ (i.e., after the turn of the plane) and, moreover, have opposite orientation.\n\nTo sum up, in a spherically symmetric Coulomb potential well a spherically symmetric charge distribution just cannot exist; there must therefore exist a specified direction. An additional comment concerning a specified direction in which there is no distributed charge will be proposed in Section 5.\n\n\n\\section{Description of a charge with spherical functions}\n\nAs follows from previous considerations, description of a distributed charge has to be made with a spherically symmetric angular function \n$Y_{00}(\\vartheta, \\varphi)$ (this function specifies the presence itself of a charge in the given potential well). This function alone is not sufficient, however, because the $Y_{00}(\\vartheta, \\varphi)$ function does not have any specified direction (a specified direction appears as a result of the circular motion of the elements of charge around the nucleus). Therefore, description of a charge requires, even in the simplest case, invoking some other $Y_{lm}(\\vartheta, \\varphi)$ functions as well. 
Let us see what functions could be employed in description of a distributed charge.\n\nWe start by assuming that the specified direction discussed above coincides with the $\\vartheta=0$ direction of the spherical coordinate system. We shall call this direction the $Z$ axis. Then total rotation of the elements of charge will occur about the $Z$ axis, i.e., along the $\\varphi$ coordinate. We understand under total rotation here not the rotation of any one element but rather that of the totality of the elements, of the distributed charge as a whole. We discussed in Section 1 a plane with respect to which an element $dq$ (or $dm$) propagates as a wave in a solid angle $2\\pi$. In the present case this plane passes through the axis $Z$ and the position of the element at the given moment, i.e., perpendicular to the $\\varphi$ coordinate.\n\nIntroduce an additional condition; to wit, we are going to consider only stationary, i.e., time-independent, states of the charge. States of charge which do not depend on time, do not produce radiation of variable fields, while generating constant electric and magnetic fields. Because stationary states of a distributed charge do not vary with time, no need appears in proving that such states do not generate variable fields, i.e., fields depending on time. If total rotation of the charge occurs about the $Z$ axis, i.e., along the $\\varphi$ coordinate, absence of radiation can be described by using for description only functions which are axially symmetric about the $Z$ axis. This restricts the allowable set of functions to those of the $Y_{l0}(\\vartheta, \\varphi)$ group. 
Indeed, only \n$Y_{l0}(\\vartheta, \\varphi)$ functions do not depend on the coordinate \n$\\varphi$, i.e., have axial symmetry with respect to the $Z$ axis.\n\nIf we impose one more constraint, namely, that the distributed charge is symmetric with respect to the coordinate $\\vartheta=\\pi\/2$, i.e., about the equator (which is a more frequent situation), index $l$ of the spherical function $Y_{l0}$ can be only even.\n\nThere are no other proper functions for description of a charge which does not generate radiation. Indeed, $Y_{lm}(\\vartheta, \\varphi)$ functions contain a factor $exp(\\pm im\\varphi)$. Motion of such a charge along the coordinate $\\varphi$ will initiate dependence on time (a factor of the kind of $exp(\\pm im\\varphi-i\\omega t)$ will appear, where $\\omega$ is the frequency of the moving wave). The appearance of the dependence on time will inevitably give rise to radiation of variable fields. We disregard here such states involving radiation and focus our interest on stationary states only, which do not generate radiation.\n\nThe simplest function satisfying the above requirements is $Y_{20}$. We write therefore the angular part of the relation for a distributed charge in the form\n\n\\begin{equation}\n\\label{9}\nL=D(Y_{00}+D_{20}Y_{20}),\n\\end{equation}\n$$\n\\mbox{where }Y_{00}=\\frac{1}{\\sqrt{4\\pi}},\\quad Y_{20}\n=\\sqrt{\\frac{5}{4\\pi}}\\left(\\frac{3}{2}\\cos^2\\vartheta-\\frac{1}\n{2}\\right),\\quad \\mbox{(see, e.g., \\cite{3}).}\n$$\n\nThe coefficient $D_{20}$ can be determined from the condition that at \n$\\vartheta=0$ and $\\vartheta=\\pi$ (i.e., on the $Z$ axis) the function $L$ be zero. Coefficient $D$ can be found from the condition that at \n$\\vartheta=\\pi\/2$ (i.e., at the equator) function $L$ is unity. 
These conditions bring us to\n\n\n\n$$\nD=\\frac{2\\sqrt{4\\pi}}{3},\\qquad D_{20}=-\\frac{1}{\\sqrt{5}}.\n$$\nNow function $L$ acquires the final form\n$$\nL=\\frac{2\\sqrt{4\\pi}}{3}\\left[\\frac{1}{\\sqrt{4\\pi}}-\\frac{1}\n{\\sqrt5}\\cdot\\sqrt{\\frac{5}{4\\pi}}\\left(\\frac{3}{2}\\cos^2\n\\vartheta-\\frac{1}{2}\\right)\\right].\n$$\nOne can readily verify that this function simply coincides with the function \n$\\sin^2\\vartheta$: the expression in brackets reduces to $\\frac{1}{\\sqrt{4\\pi}}\\left(\\frac{3}{2}-\\frac{3}{2}\\cos^2\\vartheta\\right)=\\frac{3}{2\\sqrt{4\\pi}}\\sin^2\\vartheta$, and multiplication by $D$ yields $\\sin^2\\vartheta$. This means that in this case the angular part of the density of distributed charge can be written in one of two ways:\n\n\\begin{equation}\n\\label{10}\nL=\\sin^2\\vartheta, \\quad \\mbox{or} \\quad L=D(Y_{00}+D_{20}Y_{20}),\n\\end{equation}\nand the density of distributed charge corresponding to this angular distribution will read\n\\begin{equation}\n\\label{11}\n\\rho(r,\\vartheta)=AR(r)\\sin^2\\vartheta, \\quad \\mbox{or} \\quad\n\\rho(r,\\vartheta)=AR(r)D(Y_{00}+D_{20}Y_{20}),\n\\end{equation}\nwhere $R(r)$ is the radial part of the distribution. The coefficient $A$ is derived from normalization of the distributed charge against the total charge $Q$.\n\nOne may choose whichever form seems appropriate in a given situation.\n\nThus, in describing a charge with spherical functions one obtains for the charge distribution in a Coulomb potential well in the simplest case a figure resembling a torus. All elliptical and circular trajectories of the elements of the charge (of any energy) should be confined to this torus. We note that at the $Z$ axis the charge is zero.\n\nA comment will be appropriate here. As shown earlier, at a given point the element $dm$ propagates in all directions (within a solid angle $2\\pi$) with the same velocity. It would seem that this is in direct contradiction with the statement that a distributed charge has a specified direction in which there is no charge.
In particular, in the above example with a torus-shaped charge, it would seem that the trajectories lying on the equator must differ from those confined to a perpendicular plane, because these trajectories cross the $Z$ axis, where the charge is zero. \n\nThe following point may be in order here. The trajectories of the element $dm$ in Fig.~1 were considered by us under the tacit assumption that this element will continue to move along its original trajectory. (This assumption derives from the concept of the motion of a solid body.) This is, however, not necessarily so. An analysis of Fig.~1 shows that {\\itshape\\bfseries each point} is a center from which elements of charge propagate in all directions (within a solid angle $2\\pi$). Said otherwise, element $dm$ does not move along this trajectory all the time. At each point it breaks up into many elements $dm'$ which continue to move subsequently, but now along other trajectories.\n At the next point the situation repeats, with breakup into many elements, at the next point -- again into countless elements, and so on. Thus, the element $dm$, in starting its motion at a point on a trajectory, should not necessarily terminate it at the same trajectory. Actually, this is motion not of individual elements but rather that of a wave. This is why a mass (and a charge) may have different densities at different points in space.\n\nThis is an allowed process, because in each trajectory the corresponding element has the same energy. But {\\itshape\\bfseries at each given point} the velocity of the elements remains, as before, the same in all directions, irrespective of the density of charge or mass at the given point.\n\n \n\\section{Differences in the velocity of motion between a charge wave and individual elements}\n\nAs already pointed out, it appears more appropriate to consider the motion of elements of equal energy along different elliptical trajectories as that of a charge wave. 
It appears pertinent to compare now different parameters of motion of this wave with those of an individual element.\n\nThe total energy of all elements making up a charge wave is equal to the energy of one combined element lying at the point of crossing of all elliptical trajectories and moving along one of the circular trajectories. Considered in the context of equality of energies, the motion of a charge wave may be correlated with that of one point element, with the sum of the energies of all elements in this wave equal to the energy of one combined element moving in a circle. This energy can be readily determined.\n\nAn element moving along a circular trajectory retains its total, potential, and kinetic energies. By the virial theorem (see, e.g., \\cite{1}, \n\\S~10), in this case the potential energy of an element is twice its total energy, and its total energy is equal to the kinetic energy taken with the opposite sign, \n$2d\\mathcal E=d\\mathcal E_{Pot}$, $d\\mathcal E=-d\\mathcal E_{Kin}$. No averaging is needed here, because the energies are constant. As for the potential energy of an element in a circular orbit, it can be derived simply from the radius $R$ of the circular orbit: $d\\mathcal E_{Pot}=edq\/R$. \n\nThe situation is different with the velocities of motion of a wave and of individual elements.\n \nAn analysis of Fig.~1 suggests that {\\itshape\\bfseries each point} of a charge is a center from which elements of charge propagate in all directions (within a solid angle $2\\pi$). One might say that the motion of a distributed charge at a given point represents a kind of \"velocity fan\"\\ for all directions (within a solid angle $2\\pi$); note that, as shown in Section~1, in any direction the velocity has the same magnitude. \n\nThe velocity of an element being a vector, the velocity must \nretain its vector properties even in the case of the element \npropagating (in the form of a wave) within a solid angle of \n$2\\pi$.
This, however, will be not the velocity of a single \nelement but rather that of motion of a wave, of propagation \nof a wave process. It turns out that the velocity of propagation \nof a wave process does not coincide with that of motion of \nelements of mass or charge. The momentum of an element $dm$ \npropagating as a wave likewise does not coincide with that \nof an element moving as a whole in one direction.\n\nDenote the velocity of motion of a charge wave by $\\mathbf \nv_w(\\mathbf r)$ (the subscript $w$ standing for wave), and \nthe magnitude of this velocity, by $\\upsilon_w(\\mathbf r)$, \nto discriminate this velocity from the velocity \n$\\mathbf v(\\mathbf r)$ and $\\upsilon(\\mathbf r)$ of motion \nof an element of charge. The momentum of an element $dm$ \npropagating as a wave will be denoted, accordingly, by \n$\\mathbf p_w(\\mathbf r)$.\n\nConsider this situation in more detail.\n\nIsolate an element of mass $dm$ at a point in space. If this \nelement moves as a whole with a velocity $\\mathbf{v}_l$ along \na trajectory $l$, the momentum $d\\mathbf{p}_l$ of this element \nwill be\n\\begin{equation}\n\\label{12}\nd\\mathbf{p}_l=dm\\mathbf{v}_l,\\qquad\\mbox{and the magnitude of \nthe momentum }\\qquad dp_l=dm\\upsilon.\n\\end{equation}\n\nTo describe the motion of this element of mass as that of a wave \nin the solid angle of $2\\pi$, divide the mass $dm$ into many parts \n$dm'$. Each part $dm'$ will propagate within a solid angle $d\\Omega$. \nIn this case, we can write $dm'=dm\\cdot\\frac{d\\Omega}{2\\pi}$. \nAs was shown in Section~1\nthe velocity of motion of each element $dm'$ is the same, equal \nin magnitude to the velocity $\\upsilon$ of motion of the whole \nelement $dm$. \n The direction of this velocity \n$\\mathbf{v'}$ is determined by the solid angle $d\\Omega$. 
\nFor the momentum of element $dm'$ in this case we can write\n\\begin{equation}\n\\label{13}\nd\\mathbf p'=dm'\\mathbf v'=dm\\frac{d\\Omega}{2\\pi}\n\\mathbf v',\\quad\\mbox{and for its magnitude}\\quad\ndp'=dm\\frac{d\\Omega}{2\\pi}\\upsilon.\n\\end{equation}\n\nIt might come as a surprise that sometimes we discuss motion \nalong an elliptical trajectory (for instance, trajectory $l$) \nto stress that the element $dm$ moving in this trajectory obeys \nall laws of theoretical mechanics, while in other cases we prefer \nto identify the motion of the same element in a solid angle as \nthat of a wave.\n\nIn actual fact, we are speaking in these cases about different \nthings. When discussing the motion along a trajectory, we follow \nthe motion of {\\itshape\\bfseries{one}} specific element, be it \nelement $dm$ or $dm'$. It appears only natural that the motion \nof this element obeys all laws of theoretical mechanics.\n\nWhen, however, we discuss the motion of an element as a wave in \na solid angle, we have in mind rather the motion of \n{\\itshape\\bfseries{many}} elements crossing at a point \n$\\mathbf r$. This is shown in a revealing way in Fig.~1. \nThis set of elements could be formed from one element $dm$ as well. \nIt is for this purpose that we broke up element $dm$ into a set \nof elements $dm'$, which thereafter moved along {\\bfseries\\itshape\n{different}} trajectories.\n\n\\medskip\n\nWe chose to orient a spherical coordinate system such that the \ndirection $\\vartheta=0$ coincides with that of the angular momentum \nof the charge, and denoted this axis by $Z$. All elements of a \ndistributed charge rotate in the same sense (overall rotation \nis around the $Z$ axis). That all elements rotate in one sense only \nimplies that there are no velocity components along the negative \ndirection of the $\\varphi$ axis.
In other words, on passing a \nplane through the $Z$ axis and the position of element $dm$, we end \nup with the following situation: element $dm$ propagates (as a wave) \ninto a solid angle of $2\\pi$, i.e., into the hemisphere located \non one side of this plane.\n\n\\begin{figure}\n\\label{F3}\n\\begin{center}\n\\includegraphics[scale=1]{Figure3.eps}\n\\caption{}\n\\end{center}\n\\end{figure}\n\nThis situation is illustrated by Fig.~3. In Fig.~3, the $Z$ axis is \ndirected at us (i.e., it is actually a top view). The nucleus \nis at point O. Dashed lines are lines of equal mass density. \nElement $dm$ propagates into a solid angle of $2\\pi$, i.e., into the \nhemisphere. We see a fanlike distribution of velocities $\\mathbf v'$ \nof the elements of mass $dm'$. The velocity of motion of element \n$dm'$ in any direction is the same. In Fig.~3, this is shown by all \nvelocity vectors $\\mathbf v'$ being of equal length.\n\nTo find the resultant momentum of an element $dm$ in the case where \nit propagates (as a wave) into a hemisphere, we have to sum all the \nmomentum vectors $d\\mathbf p'=dm'\\mathbf v'$. This can be done by \nintroducing a local spherical system of coordinates centered on the \nlocation of element $dm$, with angles $\\eta$ and $\\xi$. We orient \nthe coordinate system such that the $\\eta=0$ direction is \nperpendicular to the above-mentioned plane and denote this direction \nby $Z'$. In this particular case, $Z'$ coincides with the direction \nof the $\\varphi$ axis of the common coordinate system. Next we \nresolve the momentum into a component along the $Z'$ axis, and another,\nperpendicular to it. By virtue of the symmetry relative to \nthe $Z'$ axis, the perpendicular component will vanish after the \nsummation, leaving only the component along the $Z'$ axis, i.e., \nalong the $\\varphi$ axis. 
Thus, the resultant momentum of an element \nmoving in all directions simultaneously (into a solid angle of $2\\pi$, \ni.e., into the hemisphere) is aligned with the $\\varphi$ axis. \nCalculate now $dp_{w,\\varphi}$, i.e., the projection of the momentum \non the $\\varphi$ axis. To do this, we sum all $dp'_\\varphi$ components. \nUsing Eq. (\\ref{13}), we come to $dp'_\\varphi=dp'\\cos\\eta=\ndm'\\upsilon\\cos\\eta=dm\\frac{d\\Omega}{2\\pi}\\upsilon\\cos\\eta$, whence:\n\\begin{equation}\n\\label{14}\ndp_{w,\\varphi}=\\int{dm\\frac{d\\Omega}{2\\pi}\\upsilon\\cos\\eta}=\n\\frac{dm}{2\\pi}\\upsilon\\int\\limits_0^{\\pi\/2}\\cos\\eta\n\\sin\\eta d\\eta\\int\\limits_0^{2\\pi}d\\xi=\n\\frac{1}{2}dm\\upsilon=\\frac{1}{2}dp_l,\n\\end{equation}\nor\n\\begin{equation}\n\\label{15}\nd\\mathbf p_w=\\frac{1}{2}dm\\upsilon \\mathbf n_\\varphi ,\n\\end{equation}\nwhere $\\mathbf n_\\varphi$ is the unit vector along the $\\varphi$ axis.\n\nComparing now Eqs. (\\ref{14}) and (\\ref{15}) with relations (\\ref{12}), \nwe see that the momentum of an element propagating into a hemisphere \nis one half that in magnitude of an identical element moving in one \ndirection. Also, while the momentum of an element moving as a whole \nin one direction coincides in direction with its velocity, the \nmomentum of an element propagating into a hemisphere is directed \n{\\itshape\\bfseries {only along the}} $\\varphi$ {\\itshape\\bfseries \n{axis}}. {\\itshape\\bfseries {This momentum has no other components}}.\n\nThe above reasoning and the conclusions were conducted for a momentum. \nTo obtain a diverging wave, the element $dm$ was broken up \ninto smaller elements $dm'$, with all $dm'$ elements having the same \nvelocity $\\upsilon$, and each element of mass $dm'$ propagating in \nits solid angle $d\\Omega$. This is a rigorous treatment. It can be \nmade more convenient, however, by considering not the momentum but \nrather directly the velocity. 
To do this, we denote conventionally \n $d\\upsilon=\\upsilon d\\Omega$, understanding by $d\\upsilon$ a set \n of velocities with directions confined to the solid angle $d\\Omega$. \n Then in place of Eq. (\\ref{15}) and taking into account (\\ref{12})\n we come to\n\\begin{equation}\n\\label{16}\n\\mathbf v_w=\\frac{1}{2}\\upsilon\\mathbf n_\\varphi.\n\\end{equation}\n\nHere $\\mathbf v_w$ is no longer the velocity of a single element \n$dm$ moving in its trajectory; it is now the remaining vector part \nof the velocity of the element $dm$ propagating into the hemisphere, \nand $\\upsilon$ is, as before, the magnitude of the velocity of the \nelement moving in its trajectory. \n(Equation (\\ref{16}) can be also derived \ndirectly from Eq. (\\ref{15}) by canceling $dm$).\n\nThus, the velocity $\\mathbf v_w(\\mathbf r)$ is the velocity of \npropagation of a wave process, and the modulus of \n$\\mathbf v_w(\\mathbf r)$ is the magnitude of the velocity of the \nwave process $\\upsilon_w(\\mathbf r)$. The magnitude of the velocity \nof a wave process does not coincide with that of propagation of \nelements of charge. Indeed, taking the modulus of $\\mathbf v_w$ \n(using Eq. (\\ref{16}) and recalling that vector $\\mathbf v_w$ has \nonly one component, and it is directed along the $\\varphi$ axis), \nwe obtain only one half of the real velocity of an element $\\upsilon$. \nIndeed, some of the vector components vanish in propagation of \nthe element as a wave into the hemisphere. 
\nActually, these components do not disappear without trace, so that, \nfor instance, they have to be taken into account in calculation of \nthe energy, because in actual fact elements move along elliptical \ntrajectories with a velocity $\\upsilon$.\nSpecifically, for calculation of the kinetic energy one must use\nthe $\\upsilon$ quantity that is the real velocity of an element.\nFor calculation of the angular momentum (a vector quantity) \nof the distributed charge one must use the $\\mathbf v_w$ quantity.\n\n\n\\addcontentsline{toc}{chapter}{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\\label{sec:intro}\nGiven an integer $k > 0$ and a set $P$ of $n$ weighted points in the plane,\nour objective is to fit a $k$-step function to them so that the maximum weighted\nvertical distance of the points to the step function is minimized.\nWe call this problem the {\\em $k$-step function problem}.\nIt has applications in areas such as geographic information systems, \ndigital image analysis, data mining, facility location, and\ndata representation (histograms).\n\nIn the unweighted case, if the points are presorted,\nFournier and Vigneron \\cite{fournier2011} showed that\nthe problem can be solved in linear time using the results of \n\\cite{frederickson1991a,frederickson1984,gabow1984}.\nLater they showed that the weighted version of the problem can also be solved\nin $O(n\\log n)$ time~\\cite{fournier2013},\nusing Megiddo's parametric search technique \\cite{megiddo1983a}.\nPrior to these results, the problem had been discussed by several researchers\n\\cite{chen2009,diazbanez2001,liu2010,lopez2008,wang2002}.
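For concreteness, the problem just defined can be cross-checked on small inputs with a naive dynamic program over the points sorted by $x$ (this baseline is ours, not an algorithm from the paper; all names in it are illustrative). It relies on the classical fact that the optimal 1-step cost of one bucket of weighted points equals the largest pairwise value $w_iw_j|y_i-y_j|\/(w_i+w_j)$:

```python
# Illustrative O(n^2 k) baseline for the k-step (maximum-error histogram)
# problem; useful only for verifying faster algorithms on small inputs.
from itertools import combinations

def bucket_cost(points):
    """Optimal 1-step cost for one bucket of (y, w) pairs:
    min over y' of max_i w_i * |y_i - y'|.  On a line this equals the
    largest pairwise 'meeting' cost w_i*w_j*|y_i - y_j| / (w_i + w_j)."""
    if len(points) < 2:
        return 0.0
    return max(w1 * w2 * abs(y1 - y2) / (w1 + w2)
               for (y1, w1), (y2, w2) in combinations(points, 2))

def k_step_cost(points, k):
    """points: iterable of (x, y, w).  Returns the minimum over all
    k-step functions of the maximum weighted vertical distance."""
    pts = sorted(points)                      # buckets are contiguous in x
    ys = [(y, w) for _, y, w in pts]
    n = len(ys)
    INF = float("inf")
    # dp[j][i] = best cost covering the first i points with j steps
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for t in range(j - 1, i):         # last bucket is pts[t:i]
                if dp[j - 1][t] < INF:
                    dp[j][i] = min(dp[j][i],
                                   max(dp[j - 1][t], bucket_cost(ys[t:i])))
    return dp[k][n]
```

For instance, for the four unit-weight points $(0,0)$, $(1,10)$, $(2,0)$, $(3,10)$ the optimal 2-step cost is $5$, with both steps at height $5$.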
\n\nGuha and Shim \\cite{guha2007} considered this problem in the context of {\\em histogram construction}.\nIn database research, it is known as the {\\em maximum error histogram} problem.\nIn the weighted case,\n this problem is to partition the given points into $k$ buckets based on their $x$-coordinates,\nsuch that the maximum $y$-spread in each bucket is minimized.\nThis problem is of interest to the data mining community as well (see \\cite{guha2007} for references).\nGuha and Shim \\cite{guha2007} computed the optimum histogram of size $k$,\nminimizing the maximum error.\nThey present algorithms which run in linear time when the points are unweighted,\nand in $O(n\\log n + k^2\\log^6n)$ time and $O(n\\log n)$ space when the points are weighted.\n\nOur objective is to improve the above result to $O(n)$ time when $k$ is a constant.\nWe show that we can optimally fit a $k$-step function to unsorted weighted points in linear time.\nWe earlier suggested a possible approach to this problem at an OR workshop~\\cite{bhattacharya2013b}.\nHere we flesh it out, presenting a complete and rigorous algorithm and proofs.\nOur algorithm exploits the well-known properties of prune-and-search along the lines in \\cite{bhattacharya2007}.\n\n\nThis paper is organized as follows.\nSection \\ref{sec:prelim} introduces the notations used in the rest of this paper.\nIt also briefly discusses how the prune-and-search technique can be used\nto optimally fit a $1$-step function (one horizontal line) to a given set of weighted points.\nWe then consider in Section~\\ref{sec:cond2step} a variant of the 2-step function problem,\ncalled the anchored 2-step function problem.\nWe discuss a ``big partition'' in the context of the $k$-partition of a point set\ncorresponding to a $k$-step function in Section \\ref{sec:kstep}.\nSection \\ref{sec:algorithm} presents our algorithm for the optimal $k$-step function problem.\nSection \\ref{sec:conclusion} concludes the paper,\nmentioning some 
applications of our results.\n\n\n\n\\section{Preliminaries}\\label{sec:prelim}\n\\subsection{Model}\\label{sec:model}\nLet $P=\\{p_1,p_2,\\ldots, p_n\\}$ be a set of $n$ weighted points in the plane.\nFor $1\\leq i\\leq n$ let $p_i.x$ (resp. $p_i.y$) denote the $x$-coordinate (resp. $y$-coordinate)\nof point $p_i$, and let $w(p_i)$ denote its weight.\nThe points in $P$ are not sorted,\nexcept that $p_1.x\\leq p_i.x\\leq p_n.x$ holds for any $i=1, \\ldots, n$.\\footnote{For the sake\nof simplicity we assume that no two points have the same $x$ or $y$ coordinate.\nBut the results are valid if this assumption is removed.\n}\nLet $F_k(x)$ denote a generic $k$-step function,\nwhose $j^{th}$ segment (=step) is denoted by $s_j$.\nFor $1\\leq j \\leq k-1$, segment $s_j$ represents a half-open horizontal interval $[s_j^{(l)}, s_j^{(r)})$\nbetween two points $s_j^{(l)}$ and $s_j^{(r)}$.\nThe last segment $s_k$ represents a closed horizontal interval $[s_k^{(l)}, s_k^{(r)}]$.\nNote that $s_j^{(l)}.y= s_j^{(r)}.y$,\nwhich we denote by $s_j.y$.\nWe assume that for any $k$-step function $F_k(x)$,\n segments $s_1$ and $s_k$ satisfy $s_1^{(l)}.x = p_1.x$ and $s_k^{(r)}.x = p_n.x$,\nrespectively.\nSegment $s_j$ is said to {\\em span} a set of points $Q\\subseteq P$,\nif $s_j^{(l)}.x \\leq p.x U$; (b) $q$ can be ignored at $y>U$.\n}\n\\label{fig:2points1}\n\\end{figure}\nLet $y=U$ be the line at or above which at least 2\/3 of the upper bisectors lie,\nand at or below which at least 1\/3 of the upper bisectors lie.\nWe use $\\wp^U_{2\/3}$ and $\\wp^U_{1\/3}$ to name the subsets of\n $\\wp$ that have these two sets of bisectors, respectively.\n Note that $|\\wp^U_{2\/3}|\\geq n\/2\\times 2\/3 = n\/3$ and $|\\wp^U_{1\/3}|\\geq n\/2\\times 1\/3 = n\/6$.\nSimilarly, let $y=L$ be the line at or below which at least 2\/3 of the \nlower bisectors lie,\nand at or above which at least 1\/3 of the bisectors lie.\\footnote{{We define $U$ and $L$ this way,\nbecause many points could lie on 
them.}}\nWe use $\\wp^L_{2\/3}$ and $\\wp^L_{1\/3}$ to name the subsets of\n$\\wp$ that have these two sets of bisectors, respectively.\n Note that $|\\wp^L_{2\/3}|\\geq n\/2\\times 2\/3 = n\/3$ and $|\\wp^L_{1\/3}|\\geq n\/2\\times 1\/3 = n\/6$.\n\\begin{lemma} \\label{lem:one6th}\nWe can identify $n\/6$ points that can be removed without affecting the weighted 1-center\n for the values\nof their $y$-coordinates.\n\\end{lemma}\n\\begin{proof}\nConsider the following three possibilities.\n\\begin{enumerate}\n\\item[(i)]\nThe weighted 1-center lies above $U$.\n\\item[(ii)]\nThe weighted 1-center lies below $L$.\n\\item[(iii)]\nThe weighted 1-center lies between $U$ and $L$,\nincluding $U$ and $L$.\n\\end{enumerate}\n\nIn case (i), there are two subcases,\nwhich are shown in Fig.~\\ref{fig:2points1}(a) and (b), respectively.\nSince the center lies above $U$, \nwe are interested in the upper envelope of the costs in the \n$y$-region given by $y > U$.\nIn the case shown in Fig.~\\ref{fig:2points1}(a),\nthe costs of points $p$ and $q$ satisfy $d(y,p.y)w(p) < d(y,q.y)w(q)$ for $y > U$.\nThus we can ignore $p$.\nIn the case shown in Fig.~\\ref{fig:2points1}(b),\n the costs of points $p$ and $q$ satisfy\n$d(y,p.y)w(p) > d(y,q.y)w(q)$ for $y > U$.\nThus we can ignore $q$.\nSince $|\\wp^U_{1\/3}|\\geq n\/6$,\nin either case, one point from each such pair can be ignored,\ni.e., 1\/6 of the points in $P$ can be eliminated, because it cannot affect the weighted 1-center.\nIn case (ii) a symmetric argument proves that 1\/6 of the points in $P$ can be discarded.\n\n\\begin{figure}[ht]\n\\centering\n\\subfigure[]{\\includegraphics[height=1.5cm]{figs\/2points3.pdf}}\n\\hspace{2mm}\n\\subfigure[]{\\includegraphics[height=1.5cm]{figs\/2points4.pdf}}\n\\caption{2\/3 of upper bisectors are at $y> U$.}\n\\label{fig:2points3}\n\\end{figure}\n\nIn case (iii) see Fig.~\\ref{fig:2points3}.\nThe costs of each pair in $\\wp^U_{2\/3}$ (of the $2n\/3$ pairs) as functions of $y$ intersect at 
most once at $y<U$,\nsince their upper bisectors lie at or above $U$;\nsimilarly, the costs of each pair in $\\wp^L_{2\/3}$ intersect at most once at $y>L$.\nTherefore, at least $n\/3 + n\/3 - n\/2 = n\/6$ pairs must be common to both,\ni.e., $|\\wp^U_{2\/3}\\cap \\wp^L_{2\/3}| \\geq n\/6$,\nand both intersections of each such pair occur outside of the $y$-interval $[L,U]$.\nThis implies that their cost functions do not intersect within $[L,U]$,\ni.e., the cost of one point of each pair lies above that of the other in $[L,U]$,\nand the point with the dominated cost can be discarded.\n\\end{proof}\n\n\n\\subsection{Optimal 1-step function}\\label{sec:1step}\nThis problem is equivalent to finding the weighted center for $n$ points on a line.\nWe pretend that all the points have the same $x$-coordinate.\nThen the problem becomes that of finding a weighted 1-center on a line,\ni.e., on the $y$-axis.\nThis can be solved in linear time using Megiddo's {\\em prune-and-search} method \\cite{bhattacharya2007,chen2015a,megiddo1983a}.\nIn \\cite{megiddo1983b} Megiddo presents a linear time algorithm in the case where the\npoints are unweighted. \nFor the weighted case we now present a more technical algorithm that we can apply\nlater to solve other related problems.\nThe following algorithm uses a parameter $c$, which is a small integer constant.\n\n\\begin{algorithm}{\\rm :} {\\tt 1-Step}$(P)$\n\\begin{enumerate}\n\\item\nPair up the points of $P$ arbitrarily.\n\\item\nFor each such pair $(p,q)$ determine their horizontal bisector lines. \n\\item\nDetermine a horizontal line $y=U$ such that $|\\wp^U_{2\/3}|\\geq n\/3$\nand $|\\wp^U_{1\/3}|\\geq n\/6$ hold.\n\\item\nDetermine a horizontal line $y=L$ such that $|\\wp^L_{2\/3}|\\geq n\/3$ and\n$|\\wp^L_{1\/3}|\\geq n\/6$ hold.\n\\item\nDetermine the critical points for $U$ and $L$.\n\\item\nIf there exist critical points for $U$ on both sides of (above and below) $U$, \nthen $y=U$ defines an optimal 1-step function, $F^*_1(x)$; Stop. 
\nOtherwise, let $s_U$ (higher or lower than $U$) be the side of $U$ on which the critical point\nlies.\n\\item\nIf there exist critical points for $L$ on both sides of $L$,\n$y=L$ defines $F^*_1(x)$; Stop.\nOtherwise, let $s_L$ (higher or lower than $L$) be the side of $L$ on which the critical point\nlies.\n\\item\nBased on $s_U$ and $s_L$, discard 1\/6 of the points from $P$,\nbased on Lemma~\\ref{lem:one6th}.\n\\item\nIf the size of the reduced set $P$ is greater than constant $c$, \nrepeat this algorithm from the beginning with the reduced set $P$.\nOtherwise, determine $F^*_1(x)$ using any known method\n(which runs in constant time).\n\\end{enumerate}\n\\end{algorithm}\n\n\\begin{lemma}\\label{lem:1step}\nAn optimal 1-step function $F^*_1(x)$ can be found in linear time.\n\\end{lemma}\n\\begin{proof}\nThe recurrence relation for the running time $T(n)$ of {\\tt 1-Step}$(P)$ for general $n$ is \n$T(n) \\leq T(n-n\/6) + O(n)$,\nwhich yields $T(n) = O(n)$.\n\\end{proof}\n\n\\section{Anchored $2$-step function problem}\\label{sec:cond2step}\nIn general, we denote an optimal $k$-step function by $F^*_k(x)$\nand its $i^{th}$ segment by $s^*_i$. \nLater, we need to constrain the first and\/or the last step of a step function to be\nat a specified height.\nA $k$-step function is said to be {\\em left-anchored} (resp. {\\em right-anchored}),\nif $s_1.y$ (resp. $s_k.y$) is assigned a specified value,\nand is denoted by $^{\\downarrow}\\!F_k(x)$ (resp. $F_k^{\\downarrow}(x)$).\nThe {\\em anchored $k$-step function} problem is defined as follows.\nGiven a set $P$ of points and two $y$-values $a$ and $b$,\ndetermine the optimal $k$-step function $^{\\downarrow}\\!F^*_k(x)$ (resp. $F_k^{\\downarrow *}(x)$)\nthat is left-anchored (resp. right-anchored) at \n$a$ (resp. $b$) such that cost $D(P, ^{\\downarrow}\\!\\!F^*_k(x))$ (resp. 
$D(P, F_k^{\\downarrow *}(x))$)\n is the smallest possible.\nIf a $k$-step function is both left- and right-anchored, \nit is said to be {\\em doubly anchored} and is\ndenoted by $^{\\downarrow}\\!F_k^{\\downarrow}(x)$.\n\n\n\\subsection{Doubly anchored 2-step function}\nSuppose that segment $s_1$ (resp. $s_2$) is anchored at $a$ (resp. $b$).\nSee Fig.~\\ref{fig:anchored2}(a).\n\\begin{figure}[ht]\n\\centering\n\\subfigure[]{\\includegraphics[height=3cm]{figs\/conditional2.pdf}}\n\\hspace{4mm}\n\\subfigure[]{\\includegraphics[height=3cm]{figs\/goftNhoft.pdf}}\n\\caption{(a) $s_1.y=a$ and $s_2.y=b$;\n(b) Monotone functions $g(x)$ (in blue) and $h(x)$ (in red).\n}\n\\label{fig:anchored2}\n\\end{figure}\nLet us define two functions $g(x)$ and $h(x)$ by\n\\begin{eqnarray}\ng(x) &=& \\max_{p.x\\leq x} \\{w(p)\\cdot|p.y - a|~\\mid p\\in P\\},\\label{eqn:g}\\\\\nh(x) &=&\\max_{p.x>x} \\{w(p)\\cdot |p.y - b|~\\mid p\\in P\\},\\label{eqn:h}\n\\end{eqnarray}\nwhere $g(x) =0$ for $x < p_1.x$ and $h(x) = 0$ for $x > p_n.x$.\nIntuitively, if we divide the points of $P$ at $x$ into two partitions $P_1$ and $P_2$,\nthen $g(x)$ (resp. $h(x)$) gives the cost of partition $P_1$ (resp. $P_2$).\nSee Fig.~\\ref{fig:anchored2}(b).\nClearly the global cost for the entire $P$ is minimized, over all $x$,\nat the lowest point in the upper envelope of $g(x)$ and $h(x)$,\nwhich is named $\\overline{x}$.\nSince the points in $P$ are not sorted,\n$g(x)$ and $h(x)$ are not available explicitly,\nbut we can compute $\\overline{x}$ in linear time using the {\\em prune-and-search} method,\ntaking advantage of the fact that $\\max\\{g(x),h(x)\\}$ is unimodal.\n\n\\begin{algorithm}{\\rm :} {\\tt Doubly-Anch-2-Step}$(P,a,b)$\\label{alg:double}\n\\begin{enumerate}\n\\item\nInitialize $P'=P$.\n\\item\nFind the point in $P'$ that has the median $x$-coordinate, $x_m$.\n\\item\nEvaluate $g(x_m)$ (resp. $h(x_m)$) using (\\ref{eqn:g}) (resp. (\\ref{eqn:h})).\n\\item\nIf $g(x_m) = h(x_m)$ then $\\overline{x}=x_m$. 
Stop.\n\\item\nIf $g(x_m) < h(x_m)$ (resp. $g(x_m) > h(x_m)$), \ni.e., $\\overline{x}> x_m$ (resp. $\\overline{x}< x_m$),\nprune all the points $p$ with $p.x < x_m$ (resp. $p.x > x_m$)\n from $P'$,\nremembering just the maximum cost.\n\\item\nIf $|P'|=2$, find the lowest point $\\overline{x}$ and stop.\nOtherwise, go to Step~2.\n\\end{enumerate}\n\\end{algorithm}\n\nWe have the following lemma.\n\\begin{lemma}\\label{lem:doublyAnchored}\nAn optimal doubly anchored 2-step function\ncan be found in linear time.\n\\end{lemma}\n\\begin{proof}\nSteps~2 and 3 of Algorithm {\\tt Doubly-Anch-2-Step}$(P,a,b)$ can be carried out in linear time.\nSince Step~5 cuts the size of $P'$ in half every time, Step~2 is entered $O(\\log n)$ times.\nTherefore the total time is $O(n)$.\n\\end{proof}\n\n\\subsection{Left- or right-anchored 2-step function}\nWithout loss of generality, we discuss only a left-anchored 2-step function. \nGiven an anchor value $a$,\nwe want to determine the optimal 2-step function with the constraint\nthat $s^*_1.y=a$, denoted by $^{\\downarrow}\\!F^*_2(x)$.\nSee Fig.~\\ref{fig:anchored2}(a).\nIn this case, $b$ in (\\ref{eqn:h}) is not given; \nwe need to find the optimal value for it.\nBut assume for now that $b$ is also given,\nand execute {\\tt Doubly-Anch-2-Step}$(P,a,b)$.\nFrom the solution that it yields,\ncan we find the direction in which to move $b$ to find the optimal\nleft-anchored 2-step function?\n\\begin{lemma}\nLet $P_1$ (resp. $P_2$) be the left (resp. right) partition of $P$ generated by {\\tt Doubly-Anch-2-Step}$(P,a,b)$\nsuch that $s_1.y=a$ (resp. $s_2.y=b$), where $a$ and $b$ are the given anchor values.\nThen the partition whose segment has the smaller cost is spanned by the corresponding\nsegment of $^{\\downarrow}\\!F^*_2(x)$;\nwe call it the {\\em big} partition.\n\\end{lemma}\n\nThe following procedure identifies the big partition.\n\\begin{procedure}{\\rm :} {\\tt Big-2}$(P,a,b)$\n\\begin{enumerate}\n\\item\nExecute {\\tt Doubly-Anch-2-Step}$(P,a,b)$ to obtain $P_1$, $s_1$, $P_2$ and $s_2$.\n\\item\nIf $D(P_1, s_1) < D(P_2, s_2)$ (resp. $D(P_1, s_1) > D(P_2, s_2)$)\nthen $P_1$ (resp. 
$P_2$) is the big partition.\n\\end{enumerate}\n\\end{procedure}\n\nIf $P_1$ is the big partition, we can eliminate all the points belonging to it,\nwithout affecting $^{\\downarrow}\\!F^*_2(x)$ that we will find.\nSee Step~4 of the Algorithm~\\ref{alg:cond2} given below.\nWe then repeat the process with the reduced set $P$.\nIf $P_2$ is the big partition, on the other hand, we need to do more work,\nsimilar to what we did to find an optimal 1-step function.\nNamely,\nwe determine values $U$ and $L$ for $P_2$ by executing Algorithm {\\tt 1-Step}$(P_2)$.\nWe then find a doubly anchored 2-step solution for $P$ with left anchor $a$\nand right anchor $U$.\n\n\n\\begin{algorithm}{\\rm :} {\\tt $l$-Anch-2-Step}$(P,a)$\\label{alg:cond2}\n\\begin{enumerate}\n\\item\nDivide $P$ into left partition $P_1$ and right partition $P_2$,\nwhose sizes differ by at most one.\\footnote{As before,\nwe assume that the points have different $y$-coordinates.\n}\n\\item\nLet $s_1$ be the segment with $s_1.y=a$ spanning $P_1$,\nand let $s_2$ be the 1-step (optimal) solution for $P_2$.\\footnote{\nSegment $s_2$ can be found in $O(|P_2|)$ time by Lemma~\\ref{lem:1step}.}\n\\item\nIf $D(P_1, s_1) = D(P_2, s_2)$ then\noutput $\\{s_1,s_2\\}$, which defines $^{\\downarrow}\\!F^*_2(x)$.\nStop.\n\\item\nIf $D(P_1, s_1) < D(P_2, s_2)$, remove from $P$ the points of $P_1$,\nexcept the critical point for $s_1$.\nGo to Step~6.\n\\item\nIf $D(P_1, s_1) > D(P_2, s_2)$ then carry out the following steps.\n\\begin{enumerate}\n\\item\nDetermine the values $U$ and $L$ for $P_2$ as described in Algorithm {\\tt 1-Step}$(P)$.\n\\item\nExecute {\\tt Doubly-Anch-2-Step}$(P,a,U)$,\nand find the solution whose left partition is maximal.\nRepeat it with right anchor $L$.\n\\item\nEliminate 1\/6 of the points of $P_2$ from $P$, based on the two solutions\n(as in Steps 6--8 of Algorithm {\\tt 1-Step}$(P)$).\n\\end{enumerate}\n\\item\nIf $|P| > c$ (a small constant),\nrepeat Steps~1 to 5.\nOtherwise, optimally solve the 
problem in constant time, using a known method.\n\\end{enumerate}\n\\end{algorithm}\nIn the example in Fig.~\\ref{fig:leftAnc}, \nassume that $b$ is not given,\nand $s_2$ is determined by Step~2.\nThen we have $D(P_1, s_1) > D(P_2, s_2)$,\nand Step~5 applies.\nAccording to Step~5(a), we determine $U$.\nWe then find the doubly anchored solution with the right anchor set to $b=U$.\n\n\n\\begin{lemma}\nAlgorithm {\\tt $l$-Anch-2-Step}$(P,a)$ computes $^{\\downarrow}\\!F^*_2(x)$ correctly,\nand runs in linear time.\n\\end{lemma}\n\\begin{proof}\nStep~3 is obviously correct.\nIf $D(P_1, s_1) < D(P_2, s_2)$ holds in Step~4,\nthen the first partition of $^{\\downarrow}\\!F^*_2(x)$ contains $P_1$.\nWe need to keep the critical point for $a$,\nbut all other points of $P_1$ can be ignored from now on\nbecause $P_1$ will expand.\nIf $D(P_1, s_1) > D(P_2, s_2)$ holds in Step~5,\nthen the first partition of $^{\\downarrow}\\!F^*_2(x)$ is contained in $P_1$.\n\nEach iteration of Steps 4 and 5 will eliminate at least $1\/2\\times1\/6=1\/12$ of the points of $P$.\nSuch an iteration takes linear time in the input size. 
\nThe total time needed for all the iterations is therefore linear.\n\\end{proof}\n\n\\section{$k$-step function}\\label{sec:kstep}\n\\subsection{Approach}\nTo design a recursive algorithm, assume that for any set of points $Q\\subset P$,\nwe can find the optimal $(j-1)$-step function and the optimal left- and right-anchored $j$-step function \nfor any $2\\leq j < k$ in $O(|Q|)$ time,\nwhere $k$ is a constant.\nWe have shown that this is true for $k=2$ in the previous two sections.\nSo the basis of induction holds.\n\nGiven an optimal $k$-step function $F^*_k(x)$, for each $i~(1\\leq i \\leq k)$,\nlet $P^*_i$ be the set of points vertically closest to segment $s^*_i$.\nBy definition, the partition\n$\\{P^*_i \\mid i = 1, 2, \\ldots, k\\}$ satisfies the contiguity condition.\nIt is easy to see that\nfor each segment $s^*_i$, there are (local) critical points with respect to $s^*_i$,\nlying on the opposite sides of $s^*_i$.\n\nIn finding an optimal $k$-step function,\nwe first identify a big partition that will be spanned by a segment in\nan optimal solution.\nBy Lemma~\\ref{lem:big},\nsuch a big partition always exists.\nOur objective is to eliminate a constant fraction of the points in a big partition.\nThis will guarantee that a constant fraction of the input set is eliminated when $k$ is a fixed constant.\nThe points in the big partition other than two critical points are ``useless''\nand can be eliminated from further consideration.\\footnote{Note that there may\nbe more than two critical points, in which case all but two are ``useless.''}\nThis elimination process is repeated until the problem size gets small enough\nto be solved by an exhaustive method in constant time.\n\n\\subsection{Feasibility test}\\label{sec:feasibility}\nGiven a weighted distance (=cost) $D$,\na point set $P$ is said to be $D$-{\\em feasible} if there exists a $k$-step function\n$F_k(x)$ such that $D(P,F_k(x)) \\leq D$. 
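When the points happen to be available in sorted $x$-order, such a feasibility test reduces to a greedy left-to-right sweep: each point constrains the height of the step spanning it to an interval of half-width $D/w(p)$ around $p.y$, and a step can be extended exactly while the running intersection of these intervals stays nonempty. The following Python sketch is our own illustration of that idea (it is not the median-based $O(kn)$ routine of the paper, and the function and variable names are ours):

```python
def d_feasible(points, k, D):
    """Greedy check: can some k-step function stay within weighted
    vertical cost D of every point?  points = [(x, y, w)] sorted by x."""
    steps = 1
    lo, hi = float("-inf"), float("inf")  # admissible heights of current step
    for _, y, w in points:
        r = D / w                         # point allows heights in [y-r, y+r]
        nlo, nhi = max(lo, y - r), min(hi, y + r)
        if nlo > nhi:                     # intersection empty: open a new step
            steps += 1
            if steps > k:
                return False
            nlo, nhi = y - r, y + r
        lo, hi = nlo, nhi
    return True
```

Extending each step maximally is optimal here by the usual exchange argument, which mirrors the "longest $s_1$" construction used in the text.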
\nTo test $D$-feasibility\nwe first try to identify the first segment $s_1$ of a possible $k$-step function $F_k(x)$.\nTo this end we compute the median $m$ of $\\{p_i.x\\mid i = 1, 2,\\ldots, n\\}$ in $O(n)$ time,\nand divide $P$ into two parts $P_1 = \\{p_i \\mid p_i.x \\leq m\\}$ and $P_2 = \\{p_i \\mid p_i.x > m\\}$,\nwhich also takes $O(n)$ time.\nNote that $|P_1| \\leq \\lceil |P|\/2\\rceil$ and $|P_2| \\leq \\lceil |P|\/2\\rceil$ hold.\nWe then find the intersection $I$ of the $y$-intervals in $\\{w(p_i)\\,|p_i.y-y| \\leq D \\mid p_i \\in P_1\\}$.\nAssuming that $P$ is $D$-feasible,\nwe have two cases.\n\nCase (a): [$I=\\emptyset$] $s_1$ ends at some point $p_j \\in P_1$.\nThrow away all the points in $P_2$ and look for the longest $s_1$ limited by cost $D$,\nconsidering only the points in $P_1$ from the left.\n\n\n\nCase (b): [$I\\not=\\emptyset$] $s_1$ may end at some point $p_j \\in P_2$.\nThrow away all the points in $P_1$\nand look for the longest $s_1$, using $I$ and the points in $P_2$ from the left.\n\nClearly,\nwe can find the longest $s_1$ in $O(n)$ time.\nRemove the points spanned by $s_1$ from $P$,\nand find $s_2$ in $O(n)$ time, and so on.\nSince we are done after finding $k$ steps $\\{s_1, \\ldots, s_k\\}$,\nit takes $O(kn)$ time.\n\n\\begin{lemma}\\label{lem:feasibility}\nWe can test $D$-feasibility in $O(kn)$ time.\n\\hfill$\\qed$\n\\end{lemma}\n\n\\subsection{Identifying a big partition}\\label{sec:findBig}\n\\begin{lemma}\\label{lem:big}\nLet ${\\cal P}=\\{P_i\\mid i=1,\\ldots,k\\}$ be any $k$-partition of $P$,\nsatisfying the contiguity condition,\nsuch that the sizes of the partitions differ by no more than 1,\nand let $\\{P^*_i\\mid i=1,\\ldots,k\\}$ be an optimal $k$-partition.\nThen there exists an index $j$ such that $P_j$ is a big partition spanned by $s^*_j$.\n\\end{lemma}\n\\begin{proof} \nLet $j$ be the smallest index such that $s^{(r)}_j.x \\leq s^{*(r)}_j.x$.\nSuch an index must exist, because if $s^{(r)}_j.x > 
s^{*(r)}_j.x$\nfor all $1\\leq j \\leq k-1$ then $s^{(r)}_k.x = s^{*(r)}_k.x$.\nWe clearly have $s_j\\subset s^*_j$,\nwhich implies that $s^*_j$ spans $P_j$.\n\\end{proof}\n\nGiven a point set $P$ in the $x$-$y$ plane,\nlet ${\\cal P}=\\{P_i\\mid i=1,\\ldots,k\\}$ be any $k$-partition of $P$,\nsatisfying the contiguity condition,\nsuch that the sizes of the partitions differ by no more than 1.\nThe following procedure returns a big partition $P_j$ spanned by $s^*_j$,\nwhose existence was proved by Lemma~\\ref{lem:big}.\nSince $P=\\cup\\{P_i \\mid P_i \\in {\\cal P}\\}$,\n$P$ is implicit in the input to the next procedure.\n\n\\begin{procedure}{\\rm :} {\\tt Big$({\\cal P}, k)$}\\label{proc:bigk}\n\n\\noindent\n\\begin{enumerate}\n\\item\nUsing Algorithm~{\\tt 1-Step}$(P)$, compute the optimal 1-step function for $P_1$\nand let $D_1$ be its cost for $P_1$.\nIf $P$ is not $D_1$-feasible (i.e., $D(P,F^*_{k}(x))>D_1$),\nthen return $P_1$ and stop.\\footnote{There exists an optimal solution for $P$\n in which $s^*_1$ spans $P_1$.}\n\\item\nUsing Algorithm~{\\tt 1-Step}$(P)$, compute the optimal 1-step function for $P_k$\nand let $D'_k$ be its cost for $P_k$.\nIf $P$ is not $D'_k$-feasible (i.e., $D(P,F^*_{k}(x))>D'_k$),\nthen return $P_k$ and stop.\n\\item\nFind an index $j~ (1 < j < k)$ such that for $D_{j-1}=D(\\cup_{i=1}^{j-1} P_i, F^*_{j-1}(x))$\n$P$ is $D_{j-1}$-feasible, \nand for $D_j=D(\\cup_{i=1}^{j} P_i, F^*_j(x))$ $P$ is not $D_{j}$-feasible.\\footnote{This means\nthat $D_{j-1}\\geq D^*$ and $D_{j}< D^*$,\nwhere $D^*$ is the cost of the optimal solution for $P$.\nUnless $P^*_i =P_i$ for all $i$, such an index $j$ always exists.\n[We should indicate why.]}\nReturn $P_j$ and stop.\n\\end{enumerate}\n\\end{procedure}\n\n\\begin{lemma}\\label{lem:Bigiscorrect}\nProcedure {\\tt Big$({\\cal P},k)$} is correct.\n\\end{lemma}\n\\begin{proof}\nIt is clear that Steps~1 and 2 are correct.\nTo show that Step~3 is also correct, \nwe {\\em stretch} a step $s$ of an optimal 
step function\nby making it as long as possible as follows.\nMove $s^{(l)}.x$ (resp. $s^{(r)}.x$) to the left (resp. right) as far as possible without changing the cost\nof the step function.\nThe step that has been stretched is called a {\\em stretched step.} \nLet us assume without loss of generality that $s^*_j$ corresponding to $P_j$ returned by Step~3 is stretched.\nSince $D_{j-1}\\geq D^*$,\nwe must have $s^{*(l)}_j.x \\leq s^{(l)}_j.x$.\n\nThe optimal solution $F^*_j(x)$ for $\\cup_{i=1}^j P_i$ has cost $D_j$,\nwhich is too small for $P$ to be $D_j$-feasible.\nRegarding the remaining points $\\cup_{i=j+1}^k P_i$,\nlet $G^*_j(x)$ denote the optimal $(k-j)$-step function for this point set.\nIf $D(\\cup_{i=j+1}^k P_i, G^*_j(x)) \\leq D_j$,\nthen $P$ would be $D_j$-feasible.\nSince it is not,\n $s^{(r)}_j.x$ would be stretched to the right under the optimal solution $F^*_k(x)$,\ni.e., $s^{*(r)}_j.x \\geq s^{(r)}_j.x$.\nTogether with $s^{*(l)}_j.x \\leq s^{(l)}_j.x$,\nit follows that $P_j$ is spanned by $s^*_j$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:Bigislinear}\nProcedure {\\tt Big$({\\cal P},k)$} runs in linear time in $n$.\n\\end{lemma}\n\\begin{proof}\nIn Step~1, the optimal 1-step function for $P_1$ can be found in $O(|P_1|)$ time by Lemma~\\ref{lem:1step},\nand it takes $O(kn)$ time to test if $P$ is not $D_1$-feasible by Lemma~\\ref{lem:feasibility}.\nSimilarly, Step~2 can be carried out in $O(n)$ time.\nTo carry out Step~3,\nwe compute, using binary search, $\\lceil \\log k\\rceil$ values out of $\\{D_i\\mid 1\\leq i \\leq k-1\\}$,\nwhich takes $O(f(k)n)$ time for some function $f(k)$,\nunder the assumption that any $i$-step function problem, $i < k$,\nis solvable in time linear in the size of the input point set.\n\\end{proof}\n\n\n\\section{Algorithm}\\label{sec:algorithm}\n\\subsection{Optimal $k$-step function}\nIn this section we are assuming that we can solve any $(j-1)$-step and anchored\n$j$-step function problems for any $2\\le j < k$ in linear time.\nSuppose that Procedure {\\tt Big$({\\cal P},k)$} has returned a big partition $P_j$.\nIf $k > j > 1$,\nwe 
need to find the left- and right-anchored solution for $U$ and $L$,\nand prune $1\/6$ of the points in $P_j$ using {\\tt Prune-Big$(k,P_j)$}, given below,\nwhich is very similar to Algorithm {\\tt 1-Step}$(P)$.\nLet $P_j$ be a big partition spanned by $s^*_j$,\nwhich is an input to the following procedure.\n\\begin{procedure}{\\rm :} {\\tt Prune-Big$(k,P_j)$}\n\n\\noindent\n{\\bf Output:} 1\/6 of points in $P_j$ removed.\n\n\n\\begin{enumerate}\n\\item\nDetermine $U$ and $L$ for $P_j$ as in Algorithm {\\tt 1-Step}$(P)$. \n\\item \nIf $j>1$, find two right-anchored $j$-step functions $F_j^{\\downarrow *}(x)$ for $\\cup_{i=1}^{j} P_i$,\none anchored by $L$ and the other anchored by $U$.\n\\item\nIf $j< k$, find two left-anchored $(k-j+1)$-step functions\n $^{\\downarrow}\\!F^*_{k-j+1}(x)$ for $\\cup_{i=j}^k P_i$,\none anchored by $L$ and the other anchored by $U$.\n\\item\nIdentify 1\/6 of the points in $P_j$ with respect to $L$ and $U$,\nwhich are ``useless''\\footnote{See Step~8 of Algorithm~{\\tt 1-Step}$(P)$.}\nbased on $F_j^{\\downarrow *}\\!(x)$\nand $^{\\downarrow}\\!F^*_{k-j+1}(x)$ found above,\nand remove them from $P$. \n\\end{enumerate}\n\\end{procedure}\n\n\n\n\\begin{lemma}\\label{lem:anchored}\n{\\tt Prune-Big$(k,P_j)$} runs in linear time when $k$ is a constant.\n\\hfill$\\qed$\n\\end{lemma}\n\nWe can now describe our algorithm formally as follows.\n\n\\begin{algorithm}{\\rm :} {\\tt $k$-Step}$(P)$. \n\n\\noindent\n{\\bf Output:} Optimal $k$-step function $F^*_k(x)$\n\\begin{enumerate}\n\\item \nDivide $P$ into partitions $\\{P_i \\mid i = 1, 2, \\ldots, k\\}$,\nsatisfying the contiguity condition,\nsuch that their sizes differ by no more than one.\n\\item \nExecute Procedure {\\tt Big$({\\cal P},k)$} to find a big partition $P_j$\nspanned by $s^*_j$. 
\n\\item\nExecute Procedure {\\tt Prune-Big$(k,P_j)$}.\n\\item\nIf $|P| > c$ for some fixed $c$, \nrepeat Steps~1 to 3 with the reduced $P$.\n\\end{enumerate}\n\\end{algorithm}\n\n\n\\subsection{Analysis of algorithm}\nTo carry out Step 1 of Algorithm {\\tt $k$-Step}$(P)$, \nwe first find the $(hn\/k)^{th}$ smallest among $\\{p_i.x \\mid 1\\leq i \\leq n\\}$,\nfor $h=1, 2, \\ldots, k-1$.\nWe then place each point in $P$ into $k$ partitions delineated by these\n$k-1$ values.\nIt is clear that this can be done in $O(kn)$ time.\\footnote{This could be done in $O(n\\log k)$ time.}\nAs for Step~2,\nwe showed in Sec.~\\ref{sec:findBig} that finding a big partition spanned by an optimal step\n$s_j^*$ takes $O(n)$ time, since $k$ is a constant.\nStep~3 also runs in $O(n)$ time by Lemma~\\ref{lem:anchored}.\nSince Steps 1 to 3 are repeated $O(\\log n)$ times,\neach time with a point set whose size is at most a constant fraction of the size of the previous set,\nthe total time is also $O(n)$, when $k$ is a constant.\nBy solving a recurrence relation for the running time of Algorithm~{\\tt $k$-Step}$(P)$,\nwe can show that it runs in $O(2^{2k\\log k}n)=O(k^{2k}n)$ time.\n\n\\begin{theorem}\nGiven a set of $n$ points in the plane $P=\\{p_1,p_2,\\ldots, p_n\\}$,\nwe can find the optimal $k$-step function that minimizes the maximum weighted distance\nto the $n$ points in $O(k^{2k} n)$ time.\n\\hfill$\\qed$\n\\end{theorem}\nThus the algorithm is optimal for a fixed $k$.\n\n\n\\section{Conclusion and Discussion}\\label{sec:conclusion}\nWe have presented a linear time algorithm to solve the optimal $k$-step function problem,\nwhen $k$ is a constant.\nMost of the effort is spent on identifying a ``big partition.''\nIt is desirable to reduce the constant of proportionality. 
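Implementations of the linear-time algorithm can be sanity-checked on small inputs against a brute-force dynamic program over contiguous partitions. The sketch below is our own reference implementation, not the paper's algorithm; it uses the standard fact that the weighted 1-center cost of a point set on the $y$-axis equals the maximum over pairs of $w(p)\,w(q)\,|p.y-q.y|/(w(p)+w(q))$:

```python
import itertools

def center_cost(seg):
    """Weighted 1-center cost on the y-axis for seg = [(y, w), ...]."""
    best = 0.0
    for (ya, wa), (yb, wb) in itertools.combinations(seg, 2):
        best = max(best, wa * wb * abs(yb - ya) / (wa + wb))
    return best

def kstep_bruteforce(points, k):
    """points = [(x, y, w)]; minimum over contiguous partitions into at most
    k runs of the maximum segment cost.  O(k n^3): for cross-checking only."""
    pts = sorted(points)                      # sort by x
    n = len(pts)
    seg = [[center_cost([(y, w) for _, y, w in pts[i:j + 1]])
            for j in range(n)] for i in range(n)]
    INF = float("inf")
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for j in range(1, n + 1):
        for m in range(1, k + 1):
            for i in range(j):
                dp[j][m] = min(dp[j][m], max(dp[i][m - 1], seg[i][j - 1]))
    return min(dp[n][m] for m in range(1, k + 1))
```

Allowing at most $k$ runs is harmless, since splitting a segment never increases its cost.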
\n\nThe {\\em size-$k$ histogram construction problem}~\\cite{guha2007},\nwhere the points are not weighted, \nis similar to the problem we addressed in this paper.\nIts generalized version,\nwhere the points are weighted,\nis equivalent to our problem, and thus can be solved in optimal linear time when $k$ is a constant.\nThe {\\em line-constrained $k$ center problem} is defined by:\nGiven a set $P$ of weighted points in the plane and a horizontal line $L$,\ndetermine $k$ centers on $L$\nsuch that the maximum weighted distance of the points to their closest centers is minimized.\nThis problem was solved in optimal $O(n\\log n)$ time for arbitrary $k$ even if the points\nare sorted~\\cite{karmakar2013,wang2014a}.\nOur algorithm presented here can be applied to solve this problem \nin $O(n)$ time if $k$ is a constant.\n\nA possible extension of our work reported here is to use a cost other than the weighted vertical distance.\nThere is a nice discussion in \\cite{guha2007} on the various measures one can use. \nOur complexity results are valid\nif the cost is more general than (\\ref{eqn:pointCost}),\nin particular, $D(p, F(x))\\triangleq d(p, F(x))^2 w(p)$,\nwhich is often used as an error measure.\n\\section*{Acknowledgement}\\label{sec:ack}\nThis work was supported in part by Discovery Grant \\#13883 from\nthe Natural Science and Engineering Research Council (NSERC) of Canada and in part by MITACS,\nboth awarded to Bhattacharya.\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{\\bf Introduction}\nOne of the most interesting new developments in hadron physics is\nthe application of the AdS\/CFT correspondence~\\cite{Maldacena} to\nnonperturbative QCD\nproblems~\\cite{Polchinski,Janik,Erlich,Karch,Brodsky1,Teramond1,Teramond2}.\nIt is well known that AdS\/CFT gives an important insight into the\nviscosity and other global properties of the hadronic system\nformed in heavy ion collisions~\\cite{Kovtun}. 
The essential ansatz\nfor the application of AdS\/CFT to hadron physics is the indication\nthat the QCD coupling $\\alpha_s(Q^2)$ becomes large and nearly constant at small virtuality,\nso that conformal symmetry can be applied.\nFor example, solutions of the\nQCD Dyson-Schwinger equations and phenomenological studies of QCD\ncouplings based on physical observables, such as $\\tau$ decay and\nthe Bjorken sum rule, show that the QCD $\\beta$ function vanishes\nand $\\alpha_s(Q^2)$ becomes constant at small virtuality; i.e.,\neffective charges develop an infrared fixed point. Fully\nexploiting the gauge\/gravity correspondence to produce a model for\nreal strong interaction physics, a method called \"holographic\nQCD\" or \"AdS\/QCD\", may be attempted either through a top-down\napproach starting with a particular string theory and choosing a\nbackground that naturally produces QCD-like properties, or a\nbottom-up approach starting with real QCD properties and using\nthem to obtain constraints on viable dual gravity theories.\n\nThe first attempts at constructing phenomenological\nholographic models of QCD have been made~\\cite{Erlich}. Surprisingly simple models\nconsisting of gauge theory in an anti-de-Sitter space interval have\nturned out to provide a remarkably good description of the meson\nsector of QCD. It is therefore interesting to calculate the\nhigher-twist effects within holographic QCD in\nproton-proton collisions in the running coupling approach.\n\nThe large-order behavior of a perturbative expansion in gauge\ntheories is inevitably dominated by the factorial growth of\nrenormalon diagrams~\\cite{Hooft,Mueller,Zakharov,Beneke}. In the\ncase of quantum chromodynamics (QCD), the coefficients of\nperturbative expansions in the QCD coupling $\\alpha_{s}$ can\nincrease dramatically even at low orders. 
This fact, together with\nthe apparent freedom in the choice of renormalization scheme and\nrenormalization scales, limits the predictive power of perturbative\ncalculations, even in applications involving large momentum\ntransfer, where $\\alpha_{s}$ is effectively small.\n\nIn this work we apply the running coupling approach~\\cite{Agaev}\nin order to compute the effects of the infrared renormalons on the\npion production in proton-proton collisions within holographic\nQCD. This approach was also employed\npreviously~\\cite{Ahmadov3,Ahmadov4,Ahmadov5,Ahmadov6} to calculate\nthe inclusive meson production in proton-proton and photon-photon\ncollisions.\n\nTo calculate the dependence of the higher-twist cross sections on the\npion wave function, we use the holographic QCD\nprediction $\\Phi_{hol}(x)$~\\cite{Brodsky2,Brodsky3,Vega} and the pion\nasymptotic wave function $\\Phi_{asy}(x)$~\\cite{Lepage1} from the\nperturbative QCD evolution. The predictions obtained\nwithin holographic QCD are compared with the results of\nperturbative QCD obtained in the running coupling and frozen\ncoupling constant approaches.\n\nThe frozen coupling constant approach in\nRefs.~\\cite{Bagger,Baier,Ahmadov1,Ahmadov2} was used for the calculation\nof integrals, such as\n\\begin{equation}\nI\\sim \\int\\frac{\\alpha_{s}({Q}^2)\\Phi(x,{Q}^2)}{1-x}dx.\n\\end{equation}\nIt should be noted that, in pQCD calculations, the argument of the\nQCD coupling constant (or the renormalization and factorization\nscale) ${Q}^2$ should be taken equal to the square of the momentum\ntransfer of a hard gluon in a corresponding Feynman diagram. But\nthe definition of $\\alpha_{s}(\\hat{Q}^2)$ suffers from infrared\nsingularities. Therefore, in the soft regions $x_{1}\\rightarrow\n0$ and $x_{2}\\rightarrow 0$, the integrals (1.1) diverge and for their\ncalculation some regularization method is needed for\n$\\alpha_{s}(Q^2)$ in these regions. 
Investigation of the infrared\nrenormalon effects in various inclusive and exclusive processes is\none of the most important and interesting problems in\nperturbative QCD. It is known that infrared renormalons are\nresponsible for the factorial growth of coefficients in perturbative\nseries for physical quantities. But these divergent series can\nbe resummed by means of the Borel transformation~\\cite{Hooft} and\nthe principal value prescription~\\cite{Contopanagos}. Studies of\nhigher-twist and renormalon effects also opened new prospects for\nthe evaluation of power-suppressed corrections to process\ncharacteristics.\n\nWe organize the paper as follows. In Section \\ref{ht}, we\nprovide some formulas for the calculation of the contributions of\nthe higher-twist and leading-twist diagrams. In Section \\ref{ir},\nwe present the formulas and analysis of the higher-twist effects\nand of their dependence on the pion wave function in the running\ncoupling constant approach, and in Section \\ref{results}, we present\nthe numerical results for the cross section and discuss its dependence\non the pion wave functions.\nFinally, some concluding remarks are stated in Section \\ref{conc}.\n\n\\section{HIGHER TWIST AND LEADING TWIST CONTRIBUTIONS TO INCLUSIVE REACTIONS}\n\\label{ht} The higher-twist Feynman diagrams, which describe the\nsubprocess $q_1+\\bar{q}_{2} \\to \\pi^{+}(\\pi^{-})+\\gamma$ for the\npion production in the proton-proton collision, are shown in Fig.1.\nThe amplitude for this subprocess can be found by means of the\nBrodsky-Lepage formula~\\cite{Lepage2}\n\\begin{equation}\nM(\\hat s,\\hat\nt)=\\int_{0}^{1}{dx_1}\\int_{0}^{1}dx_2\\delta(1-x_1-x_2)\\Phi_{\\pi}(x_1,x_2,Q^2)T_{H}(\\hat\ns,\\hat t;x_1,x_2).\n\\end{equation}\nIn Eq.(2.1), $T_H$ is the sum of the graphs contributing to the\nhard-scattering part of the subprocess.\n\nThe Mandelstam invariant variables for subprocesses $q_1+\\bar{q}_{2}\n\\to \\pi^{+}(\\pi^{-})+\\gamma$ 
are defined as\n\\begin{equation}\n\\hat s=(p_1+p_2)^2,\\quad \\hat t=(p_1-p_{\\pi})^2,\\quad \\hat\nu=(p_1-p_{\\gamma})^2.\n\\end{equation}\nThe pion wave functions predicted by\nAdS\/QCD~\\cite{Brodsky2,Brodsky3,Vega} and the PQCD evolution\n~\\cite{Lepage1} has the form:\n$$\n\\Phi_{asy}^{hol}(x)=\\frac{4}{\\sqrt{3}\\pi}f_{\\pi}\\sqrt{x(1-x)},\n$$\n\\begin{equation}\n\\Phi_{VSBGL}^{hol}(x)=\\frac{A_1k_1}{2\\pi}\\sqrt{x(1-x)}exp\\left(-\\frac{m^2}{2k_{1}^2x(1-x)}\\right),\\quad\n\\Phi_{asy}^{p}(x)=\\sqrt{3}f_{\\pi}x(1-x)\n\\end{equation}\nwhere $f_{\\pi}$ is the pion decay constant.\n\nThe cross section for the higher-twist subprocess $q_1\\bar{q}_{2}\n\\to \\pi^{+}(\\pi^{-})\\gamma$ is given by the expression\n\\begin{equation}\n\\frac{d\\sigma}{d\\hat t}(\\hat s,\\hat t,\\hat u)=\\frac\n{8\\pi^2\\alpha_{E} C_F}{27}\\frac{\\left[D(\\hat t,\\hat\nu)\\right]^2}{{\\hat s}^3}\\left[\\frac{1}{{\\hat u}^2}+\\frac{1}{{\\hat\nt}^2}\\right]\n\\end{equation}\nwhere\n\\begin{equation}\nD(\\hat t,\\hat u)=e_1\\hat\nt\\int_{0}^{1}dx\\left[\\frac{\\alpha_{s}(Q_1^2)\\Phi_{\\pi}(x,Q_1^2)}{1-x}\\right]+e_2\\hat\nu\\int_{0}^{1}dx\\left[\\frac{\\alpha_{s}(Q_2^2)\\Phi_{\\pi}(x,Q_2^2)}{1-x}\\right].\n\\end{equation}\nIn the Eq.(2.5) $Q_{1}^2=(x-1)\\hat u \\,\\,\\,\\,$and $Q_{2}^2=-x\\hat\nt$ \\,\\, represent the momentum squared carried by the hard gluon\nin Fig.1, $e_1(e_2)$ is the charge of $q_1(\\overline{q}_2)$ and\n$C_F=\\frac{4}{3}$. 
The higher-twist contribution to the\nlarge-$p_{T}$ pion production cross section in the process\n$pp\\to\\pi^{+}(\\pi^{-})+\\gamma+X$ is ~\\cite{Owens,Greiner}\n\\begin{equation}\n\\Sigma_{M}^{HT}\\equiv E\\frac{d\\sigma}{d^3p}=\\int_{0}^{1}\\int_{0}^{1}\ndx_1 dx_2 G_{{q_{1}}\/{h_{1}}}(x_{1})\nG_{{q_{2}}\/{h_{2}}}(x_{2})\\frac{\\hat s}{\\pi} \\frac{d\\sigma}{d\\hat\nt}(q\\overline{q}\\to \\pi\\gamma)\\delta(\\hat s+\\hat t+\\hat u).\n\\end{equation}\nWe denote the higher-twist cross section obtained using the frozen\ncoupling constant approach by $(\\Sigma_{\\pi}^{HT})^0$.\n\nRegarding the higher-twist corrections to the pion production\ncross section, a comparison of our results with leading-twist\ncontributions is crucial. We take two leading-twist subprocesses\nfor the pion production:(1) quark-antiquark annihilation $q\\bar{q}\n\\to g\\gamma$, in which the $g \\to \\pi^{+}(\\pi^{-})$ and (2)\nquark-gluon fusion, $qg \\to q\\gamma $, with subsequent\nfragmentation of the final quark into a meson, $q \\to\n\\pi^{+}(\\pi^{-})$ ~\\cite{Ahmadov3,Ahmadov5}.\n\n\\section{THE HIGHER TWIST MECHANISM IN HOLOGRAPHIC QCD AND INFRARED RENORMALONS}\\label{ir}\n\nThe main problem in our investigation is the calculation of integral\nin (2.5) by the running coupling constant approach within\nholographic QCD and also discussion of the problem of normalization\nof the higher twist process cross section in the context of the same\napproach. Therefore, it is worth noting that, the renormalization\nscale (argument of $\\alpha_s$) according to Fig.1 should be chosen\nequal to $Q_{1}^2=(x-1)\\hat u$, $Q_{2}^2=-x\\hat t$. 
The integral in Eq.(2.5) in the framework of the running coupling approach takes the form\n\\begin{equation}\nI(\\mu_{R_{0}}^2)=\\int_{0}^{1}\\frac{\\alpha_{s}(\\lambda \\mu_{R_0}^2)\\Phi_{M}(x,\\mu_{F}^2)dx}{1-x}.\n\\end{equation}\nThe coupling $\\alpha_{s}(\\lambda \\mu_{R_0}^2)$ has an infrared singularity at $x\\rightarrow1$ for $\\lambda=1-x$, or at $x\\rightarrow0$ for $\\lambda=x$, and so the integral (3.1) diverges. To regularize the integral, we express the running coupling at the scaling variable, $\\alpha_{s}(\\lambda \\mu_{R_0}^2)$, with the aid of the renormalization group equation in terms of the fixed coupling $\\alpha_{s}(Q^2)$. The solution of the renormalization group equation for the running coupling $\\alpha\\equiv\\alpha_{s}\/\\pi$ has the form~\\cite{Contopanagos}\n\\begin{equation}\n\\frac{\\alpha(\\lambda)}{\\alpha}=\\left[1+\\alpha \\frac{\\beta_{0}}{4}\\ln{\\lambda}\\right]^{-1}.\n\\end{equation}\nThen, for $\\alpha_{s}(\\lambda Q^2)$, we get\n\\begin{equation}\n\\alpha(\\lambda Q^2)=\\frac{\\alpha_{s}}{1+\\ln{\\lambda}\/t},\n\\end{equation}\nwhere $t=4\\pi\/\\alpha_{s}(Q^2)\\beta_{0}=4\/\\alpha\\beta_{0}$.\n\nHaving inserted Eq.(3.3) into Eq.(2.5), we obtain\n$$\nD(\\hat t,\\hat u)=e_{1}\\hat t\\int_{0}^{1}dx\\frac{\\alpha_{s}(\\lambda \\mu_{R_0}^2)\\Phi_{M}(x,Q_{1}^2)}{1-x}+ e_{2}\\hat u\\int_{0}^{1}dx\\frac{\\alpha_{s}(\\lambda \\mu_{R_0}^2)\\Phi_{M}(x,Q_{2}^2)}{1-x}\n$$\n\\begin{equation}\n=e_{1}\\hat t\\alpha_{s}(-\\hat u)t_{1}\\int_{0}^{1}dx \\frac{\\Phi_{M}(x,Q_{1}^2)}{(1-x)(t_{1}+\\ln\\lambda)} + e_{2}\\hat u\\alpha_{s}(-\\hat t)t_{2}\\int_{0}^{1}dx \\frac{\\Phi_{M}(x,Q_{2}^2)}{(1-x)(t_{2}+\\ln\\lambda)},\n\\end{equation}\nwhere $t_1=4\\pi\/\\alpha_{s}(-\\hat u)\\beta_{0}$ and $t_2=4\\pi\/\\alpha_{s}(-\\hat t)\\beta_{0}$.\n\nAlthough the integral (3.4) is still divergent, it is now recast into a form suitable for calculation.
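Note that the products of the fixed couplings and the $t_{i}$ appearing in Eq.(3.4) simplify: by the definitions of $t_{1}$ and $t_{2}$,\n$$\n\\alpha_{s}(-\\hat u)\\,t_{1}=\\alpha_{s}(-\\hat t)\\,t_{2}=\\frac{4\\pi}{\\beta_{0}},\n$$\nwhich is the origin of the overall factors of $1\/\\beta_{0}$ in the wave-function-specific expressions that follow.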
Making the change of variable $z=\\ln\\lambda$, we obtain\n\\begin{equation}\nD(\\hat t,\\hat u)=e_{1}\\hat t \\alpha_{s}(-\\hat u) t_1\\int_{0}^{1}dx \\frac{\\Phi_{M}(x,Q_{1}^2)}{(1-x)(t_1+z)}+ e_{2}\\hat u \\alpha_{s}(-\\hat t) t_2 \\int_{0}^{1} dx \\frac{\\Phi_{M}(x,Q_{2}^2)}{(1-x)(t_2+z)}.\n\\end{equation}\nIn order to calculate (3.5), we apply the integral representation of $1\/(t+z)$~\\cite{Zinn-Justin,Erdelyi},\n\\begin{equation}\n\\frac{1}{t+z}=\\int_{0}^{\\infty}e^{-(t+z)u}du,\n\\end{equation}\nwhich gives\n\\begin{equation}\nD(\\hat t,\\hat u)=e_{1} \\hat{t} \\alpha_{s}(-\\hat u) t_1 \\int_{0}^{1} \\int_{0}^{\\infty} \\frac{\\Phi_{\\pi}(x,Q_{1}^2)e^{-(t_1+z)u}du\\, dx}{1-x}+ e_{2} \\hat{u} \\alpha_{s}(-\\hat t) t_2 \\int_{0}^{1} \\int_{0}^{\\infty} \\frac{\\Phi_{\\pi}(x,Q_{2}^2)e^{-(t_2+z)u}du\\, dx}{1-x}.\n\\end{equation}\nIn the case of $\\Phi_{asy}^{hol}(x)$, $D(\\hat t,\\hat u)$ is written as\n\\begin{equation}\nD(\\hat t,\\hat u)=\\frac{16 f_{\\pi} e_{1} \\hat t}{\\sqrt{3}\\beta_{0}} \\int_{0}^{\\infty} du\\, e^{-t_{1}u}B\\left(\\frac{3}{2},\\frac{1}{2}-u\\right)+ \\frac{16 f_{\\pi} e_{2} \\hat u}{\\sqrt{3}\\beta_{0}} \\int_{0}^{\\infty} du\\, e^{-t_{2}u}B\\left(\\frac{3}{2},\\frac{1}{2}-u\\right),\n\\end{equation}\nand for the $\\Phi_{asy}^{p}(x)$ wave function\n\\begin{equation}\nD(\\hat t,\\hat u)=\\frac{4\\sqrt{3}\\pi f_{\\pi}e_{1}\\hat t}{\\beta_{0}} \\int_{0}^{\\infty}du\\, e^{-t_{1}u} \\left[\\frac{1}{1-u}-\\frac{1}{2-u}\\right] +\\frac{4\\sqrt{3}\\pi f_{\\pi}e_{2}\\hat u}{\\beta_{0}} \\int_{0}^{\\infty}du\\, e^{-t_{2}u} \\left[\\frac{1}{1-u}-\\frac{1}{2-u}\\right],\n\\end{equation}\nwhere $B(\\alpha,\\beta)$ is the Beta function. The structure of the infrared renormalon poles in Eq.(3.8) and Eq.(3.9) strongly depends on the pion wave function. To remove these poles from Eq.(3.8) and Eq.(3.9), we adopt the principal value prescription.
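For clarity, the Beta function in the first term of Eq.(3.8) arises from the $x$-integration: with $\\lambda=1-x$ one has $e^{-zu}=(1-x)^{-u}$, so that\n$$\n\\int_{0}^{1}dx\\,\\frac{\\sqrt{x(1-x)}}{1-x}\\,(1-x)^{-u}=\\int_{0}^{1}dx\\,x^{1\/2}(1-x)^{-1\/2-u}=B\\left(\\frac{3}{2},\\frac{1}{2}-u\\right),\n$$\nwhich converges for $u<1\/2$ and, upon continuation in $u$, develops poles at the half-integer values $u=1\/2,\\,3\/2,\\ldots$; these are the infrared renormalon poles removed by the principal value prescription.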
We denote the higher-twist cross section obtained using the running coupling constant approach by $(\\Sigma_{\\pi}^{HT})^{res}$.\n\n\\section{NUMERICAL RESULTS AND DISCUSSION}\\label{results}\n\nIn this section, we discuss the dependence of the higher-twist contributions, calculated in the context of the running and frozen coupling constant approaches, on the chosen pion wave functions in the process $pp \\to \\pi^{+}(\\pi^{-})\\gamma+X$. In the numerical calculations, the MSTW distribution functions~\\cite{Martin} for the quark distributions inside the proton, and the gluon and quark fragmentation functions into a pion~\\cite{Albino}, have been used. The results of our numerical calculations are displayed in Figs.2-14. Firstly, it is very interesting to compare the higher-twist cross sections obtained within holographic QCD with the ones obtained within perturbative QCD. In Fig.2 and Fig.3 we show the dependence of the higher-twist cross sections $(\\Sigma_{\\pi^{+}}^{HT})^{0}$ and $(\\Sigma_{\\pi^{+}}^{HT})^{res}$, calculated in the context of the frozen and running coupling constant approaches, as a function of the pion transverse momentum $p_{T}$ for different pion wave functions at $y=0$. It is seen from Fig.2 and Fig.3 that the higher-twist cross section decreases monotonically with an increase in the transverse momentum of the pion. In Fig.4-Fig.7, we show the dependence of the ratios $(\\Sigma_{HT}^{hol})$\/$(\\Sigma_{HT}^p)$, $(\\Sigma_{\\pi}^{HT})^{res}$\/$(\\Sigma_{\\pi^{+}}^{HT})^{0}$, $(\\Sigma_{\\pi^{+}}^{HT})^{0}$\/$(\\Sigma_{\\pi^{+}}^{LT})$ and $(\\Sigma_{\\pi^{+}}^{HT})^{res}$\/$(\\Sigma_{\\pi^{+}}^{LT})$ as a function of the pion transverse momentum $p_{T}$ for the $\\Phi_{\\pi}^{hol}(x)$, $\\Phi_{\\pi}^{p}(x)$ and $\\Phi_{VSBGL}^{hol}(x)$ pion wave functions. Here $\\Sigma_{\\pi^{+}}^{LT}$ is the leading-twist cross section. As shown in Fig.4, in the region $2\\,\\,GeV\/c