diff --git a/.gitattributes b/.gitattributes index e08d1a5e054e43713e24fe2513d909cb7deea08f..8015a17cff825c2d49a72618271e88549dd0298c 100644 --- a/.gitattributes +++ b/.gitattributes @@ -217,3 +217,4 @@ data_all_eng_slimpj/shuffled/split/split_finalac/part-14.finalac filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalac/part-02.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-05.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-10.finalac filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalac/part-01.finalac filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalac/part-01.finalac b/data_all_eng_slimpj/shuffled/split/split_finalac/part-01.finalac new file mode 100644 index 0000000000000000000000000000000000000000..b7be08059309d6c3b7b4f9fb7671b3b675f3f979 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalac/part-01.finalac @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:013aa9f4c8e871555949e7cce5fe5c2a51a16c18ca16916bb710029156989c91 +size 12576665669 diff --git a/data_all_eng_slimpj/shuffled/split2/finalzotz b/data_all_eng_slimpj/shuffled/split2/finalzotz new file mode 100644 index 0000000000000000000000000000000000000000..07c3d87fe1fe86ce5b024a00f4e98cb2bf708a00 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzotz @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nOne important problem in machine learning is to find the minimum of the expected loss, \n\\begin{align}\\label{origin}\n\\min_{\\theta} \\mathbb{E}_{\\Xb,Y\\sim \\cD}\\left[l(Y,\\langle \\Xb,\\theta \\rangle) \\right]. \n\\end{align}\nHere $l(\\cdot,\\cdot)$ is a loss function and $(\\Xb,Y) \\in \\cX\\times \\cY \\subseteq \\mathbb{R}^d \\times \\cY$ has a distribution $\\cD$. 
In practice, the minimizer $\\theta^*$ needs to be estimated by observing $N$ samples $\\{\\xb_i,y_i\\}$ drawn from distribution $\\cD$. In many applications $N$ or $d$ are very large, so distributed algorithms are necessary in such cases. Without loss of generality, assume that $N = nm$ and that the observations of the $j$-th machine are $\\{\\xb_{ji},y_{ji}\\}_{i =1}^n$. We consider the high-dimensional learning problem where the dimension $d$ can be very large, and the effective variables are supported on $S:= \\text{support}\\{\\theta^*\\} = \\{i\\in [d]:\\theta^*_i \\ne 0\\}$ and $s:=|S|\\ll d$. Extensive efforts have been made to develop batch algorithms \\cite{friedman2007pathwise,Beck:09,xiao2013proximal}, which provide good convergence guarantees in optimization. However, when $N$ is large, batch algorithms are inefficient, taking at least $\\cO(N)$ time per iteration. Therefore, there has been an emerging interest in addressing this problem with distributed optimization frameworks \\cite{jordan2016communication,lee2015distributed,wang2016efficient}, which are more efficient than stochastic algorithms. One important issue with existing distributed optimization methods for sparse learning is that they do not take advantage of the sparse structure, and thus have the same communication cost as general dense problems. In this paper, we propose a novel communication-efficient distributed algorithm that explicitly leverages the sparse structure for solving large-scale sparse learning problems. This allows us to reduce the communication cost from $\\cO(d)$ in existing works to $\\cO(s)$, while still maintaining nearly the same performance under mild assumptions.\n\n\\noindent \\textbf{Notations}\nFor a sequence of numbers $a_n$, we use $\\cO(a_n)$ to denote a sequence of numbers $b_n$ such that $b_n\\leq C \\cdot a_n$ for some positive constant $C$. 
Given two sequences of numbers $a_n$ and $b_n$, we say $a_n\\lesssim b_n$ if $a_n =\\cO(b_n) $ and $a_n\\gtrsim b_n$ if $b_n =\\cO(a_n) $. The notation $a_n \\asymp b_n$ denotes that $a_n =\\cO(b_n) $ and $b_n =\\cO(a_n) $. For a vector $\\vb\\in \\mathbb{R}^d$, the $l_p$-norm of $\\vb$ is defined as $\\|\\vb\\|_p = ( \\sum_{i =1}^d|\\vb_i|^p)^{1\/p}$, where $p>0$; the $l_0$-norm of $\\vb$ is defined as the number of its nonzero entries; the support of $\\vb$ is defined as $\\text{supp}(\\vb) = \\{i:\\vb_i\\ne 0 \\}$. For simplicity, we use $[d]$ to denote the set $\\{1,\\cdots,d\\}$. For a matrix $A = (a_{ij})\\in \\mathbb{R}^{n_1\\times n_2}$, we define the $l_\\infty$-norm of $A$ as $\\|A\\|_\\infty = \\max_{i\\in[n_1],j\\in[n_2]}|a_{ij}|$. Given a number $k\\leq d$, the hard thresholding $\\cH_k(\\vb)$ of a vector $\\vb\\in\\RR^d $ is defined by keeping the largest $k$ entries of $\\vb$ (in magnitude) and setting the rest to be zero. Given a subset $S$ of the index set $\\{1,\\cdots,d\\}$, the projection $\\cP_S(\\vb)$ of a vector $\\vb$ on $S$ is defined by \n\\begin{align*} \n\\cP_S(\\vb)_j =0,\\hspace{0.1in} \\text{if} \\hspace{0.05in} j\\notin S \\hspace{0.1in} \\text{and}\\hspace{0.1in} \\cP_S(\\vb)_j =\\vb_j, \\hspace{0.1in}\\text{if}\\hspace{0.05in} j\\in S.\n\\end{align*} \n$\\cP_S(\\vb)$ is also denoted as $(\\vb)_S$ for short.\n\n\\vspace{-0.1in}\n\\subsection{{Related work}}\nThere is much previous work on distributed optimization, such as (Zinkevich et al. \\cite{zinkevich2010parallelized}; Dekel et al. \\cite{dekel2012optimal}; Zhang et al. \\cite{zhang2012communication}; Shamir and Srebro \\cite{shamir2014distributed}; Arjevani and Shamir \\cite{arjevani2015communication}; Lee et al. \\cite{lee2015distributed}; Zhang and Xiao \\cite{zhang2015disco}). Initially, most distributed algorithms used averaging estimators formed by local machines (Zinkevich et al. \\cite{zinkevich2010parallelized}; Zhang et al. \\cite{zhang2012communication}). 
Then Zhang and Xiao \\cite{zhang2015disco}, Shamir et al. \\cite{shamir2014communication} and Lee et al. \\cite{lee2015communication} proposed more communication-efficient distributed optimization algorithms. More recently, using ideas of the approximate Newton-type method, Jordan et al. \\cite{jordan2016communication} and Wang et al. \\cite{wang2016efficient} further improved the computational efficiency of this type of method.\n\nMany gradient hard thresholding approaches have been proposed in recent years, such as (Yuan et al. \\cite{yuan2014gradient}; Li et al. \\cite{li2016stochastic}; Jain et al. \\cite{jain2014iterative}). They showed that under suitable conditions, hard thresholding type first-order algorithms attain linear convergence to a solution which has optimal estimation accuracy with high probability. However, to the best of our knowledge, hard thresholding techniques applied to approximate Newton-type distributed algorithms have not been considered yet. Hence, in this paper, we present some initial theoretical and experimental results on this topic. \n\\iffalse\nInitially, averaging estimators formed by local machines is an intuitive approach to distributed estimation (Zinkevich et al. \\cite{zinkevich2010parallelized}; Zhang et al. \\cite{zhang2012communication}).\n\\fi\n\\vspace{-0.15in}\n\n\\section{Algorithm} \n\\vspace{-0.05in}\nIn this section, we explain our approach to estimating the minimizer $\\theta^*$ of the expected loss. The detailed steps are summarized in Algorithm \\ref{algor}. \n\nFirst, the empirical loss at each machine is defined as\n\\begin{align*}\\textstyle\n\\cL_j(\\theta) = \\frac{1}{n} \\sum_{i =1}^n l(y_{ji},\\langle \\xb_{ji},\\theta \\rangle), ~~\\text{where}~~ j\\in [m]. \n\\end{align*}\nAt the beginning of the algorithm, we solve a local Lasso subproblem to get an initial point. 
Specifically, at iteration $h =0$, the master machine solves the minimization problem\n\\begin{align} \\label{initial}\n\\textstyle \\gamma^0 = \\argmin_{\\theta} \\cL_1(\\theta) + \\mu_0\\|\\theta\\|_1. \n\\end{align}\nThe initial point $\\theta^0$ is formed by keeping the largest $k$ elements of the resulting minimizer $\\gamma^0$ and setting the other elements to be zero, i.e., $\\theta^0 = \\cH_k(\\gamma^0)$. Then, $\\theta^0$ is broadcast to the local machines, where it is used to compute the gradient of the local empirical loss at $\\theta^0$, that is, $\\nabla\\cL_j(\\theta^0)$. The local machines project $\\nabla\\cL_j(\\theta^0)$ onto the support $S^0$ of $\\theta^0$ and transmit the projection $\\cP_{S^0}\\left[\\nabla\\cL_j(\\theta^0)\\right]$ back to the master machine. Later, at the $(h+1)$-th iteration ($h\\geq 0$), the master solves a shifted $l_1$ regularized minimization subproblem:\n\\begin{align}\\label{subproblem}\n\\textstyle \\nonumber \\gamma^{h+1}& = \\argmin_{ \\theta} \\hspace{0.1in}\\cL_1(\\theta) + \\mu_{h+1} \\|\\theta\\|_1\\\\\n&+\\Big\\langle \\cP_{S^h}\\left[{\\textstyle\\frac{1}{m}\\sum_{j=1}^m \\nabla \\cL_j(\\theta^h)}\\right] -\\nabla \\cL_1(\\theta^h),\\theta \\Big \\rangle. \n\\end{align} \nAgain, the minimizer $\\gamma^{h+1}$ is truncated to form $\\theta^{h+1}$, and this quantity is communicated to the local machines, where it is used to compute the local gradient as before. \n\nSubproblem $\\eqref{subproblem}$ is inspired by the approach of Wang et al. \\cite{wang2016efficient} and Jordan et al. \\cite{jordan2016communication}. Note that the formulation takes advantage of both global first-order information and local higher-order information. 
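The two truncation operators used above, the hard thresholding $\cH_k$ and the support projection $\cP_{S}$, can be sketched in a few lines. The following Python sketch is purely illustrative (it is not the authors' implementation, and the function names are our own; the Lasso subproblem solver is omitted):

```python
import numpy as np

def hard_threshold(v, k):
    # H_k(v): keep the k largest-magnitude entries of v, set the rest to zero.
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def project_support(v, support):
    # P_S(v): keep the entries of v indexed by `support`, set the rest to zero.
    out = np.zeros_like(v)
    idx = list(support)
    out[idx] = v[idx]
    return out

def truncated_mean_gradient(local_grads, support):
    # Each local machine only needs to transmit P_S[grad_j] (at most |S| nonzeros),
    # so the master effectively averages gradients already restricted to S.
    return project_support(np.mean(local_grads, axis=0), support)
```

With $k \ll d$, each round communicates $\cO(k)=\cO(s)$ numbers per machine instead of $\cO(d)$, which is the source of the communication savings discussed above.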
Specifically, assuming that $\\mu_{h+1} = 0$ and $\\cL_1$ has an invertible Hessian, the solution of $\\eqref{subproblem}$ has the following closed form:\n\\begin{align*}\n\\gamma^{h+1} = \\theta^{h} - \\nabla^2\\cL_1(\\theta^h) ^{-1} \\left(\\cP_{S^h}\\left[{\\textstyle\\frac{1}{m}\\sum_{j=1}^m \\nabla \\cL_j(\\theta^h)}\\right]\\right),\n\\end{align*}\nwhich is similar to a Newton update step. Note that here we add a projection procedure $\\cP_{S^h}\\left[\\frac{1}{m}\\sum_{j=1}^m\\nabla\\cL_j(\\theta^h)\\right]$ to reduce the number of nonzeros that need to be communicated to the master machine. This procedure is intuitively reasonable. First, when $\\theta^h$ is close to $\\theta^*$, the elements of $\\frac{1}{m}\\sum_{j=1}^m\\nabla \\cL_j(\\theta^h)$ outside the support $S^{h}$ should be very small, so little error is incurred in the truncation step. Second, when $\\theta^{h+1}$ is also close to $\\theta^*$, the truncated part has an even smaller effect on the inner product in subproblem $\\eqref{subproblem}$. Third, we leave $-\\nabla \\cL_1(\\theta^h)$ in $\\eqref{subproblem}$ out of the truncation to keep the formulation unbiased.\n\n\\begin{algorithm}[!htb]\n\t\\caption{Two-way Truncation Distributed Sparse Learning}\n\t\\label{algor}\n\n\t\\begin{algorithmic}\n\t\t\\REQUIRE Loss function $l(\\cdot,\\cdot)$, data $\\{\\xb_{ji}, y_{ji}\\}_{i\\in[n],j\\in[m]}$. \n\t\t\\STATE \\hspace{-1.25em} \\textbf{\\underline{Local machines:}}\n\t\t\\STATE \\hspace{-1.25em} \\textbf{Initialization:} The master solves the local $l_1$ regularized loss minimization problem \\eqref{initial} to get a solution $\\gamma^0$. Set $\\theta^{0} = \\cH_k(\\gamma^{0})$. 
\n\t\t\\FOR{$h=0,1, \\dots$}\n\t\t\\FOR{$j=2,3, \\dots, m$}\n\t\t\\STATE \\textbf{if} Receive $\\theta^h$ from the master \\textbf{then}\n\t\t\\STATE Calculate the gradient $\\nabla \\cL_j(\\theta^h)$, get its projection $\\cP_{S^{h}}\\left[ \\nabla\\cL_j(\\theta^h)\\right]$ onto the support $S^h$, and transmit it to the master.\n\t\t\\STATE \\textbf{end}\n\t\t\n\t\t\\ENDFOR\n\t\t\\STATE \\textbf{\\underline{Master:}}\n\t\t\\STATE \\textbf{if} Receive $\\{\\cP_{S^h}\\left[\\nabla\\cL_j(\\theta^h)\\right]\\}_{j=2}^m$ from local machines \\textbf{then}\n\t\t\\STATE \\hspace{1.25em}Solve the shifted $l_1$ regularized problem\n\t\t\\STATE \\hspace{1.25em}\\eqref{subproblem} to obtain $\\gamma^{h+1}$.\n\t\t\\STATE \\hspace{1.25em}Do hard thresholding $\\theta^{h+1} = \\cH_k(\\gamma^{h+1})$. \n\t\t\\STATE \\hspace{1.25em}Let $S^{h+1} = \\text{supp}(\\theta^{h+1})$. \n\t\t\\STATE \\hspace{1.25em}Broadcast $\\theta^{h+1}$ to every local machine. \n\t\t\\STATE \\textbf{end}\t\n\t\t\\ENDFOR\t\n\t\\end{algorithmic}\n\\end{algorithm}\n\\vspace{-0.1in}\n\\section{Theoretical Analysis}\n\\vspace{-0.05in}\n\\subsection{Main Theorem}\n\nWe present some theoretical analysis of the proposed algorithm in this section.\n\\vspace{-0.05in}\n\\begin{assumption}\\label{assum_smooth}\n\tThe loss $l(\\cdot,\\cdot)$ is an $L$-smooth function of the second argument, i.e.,\n\t\\begin{align*} {\\textstyle\n\t|l^\\prime(x,y) - l^\\prime(x,z)| \\leq L|y - z|, \\hspace{0.2in} \\forall x,y,z\\in \\RR}\n\t\\end{align*}\n\tMoreover, the third derivative with respect to its second argument, $\\partial^3 l(x,y)\/\\partial y^3$, is bounded by a constant $M$, i.e.,\n\t{\\begin{align*} \\textstyle\n\t|\\partial^3 l(x,y)\/\\partial y^3| \\leq M, \\hspace{0.2in} \\forall x,y\\in \\RR\n\t\\end{align*}}\n\\end{assumption}\n\n\\begin{assumption}\\label{assum_restrict}\n\tThe empirical loss function computed on the first machine satisfies: $\\forall \\Delta \\in \\cC(S,3)$, we have\n\t\\begin{align*}\n\t\\cL_1(\\theta^* + 
\\Delta) - \\cL_1(\\theta^*) -\\langle \\nabla \\cL_1(\\theta^*),\\Delta \\rangle \\geq \\kappa \\|\\Delta\\|_2^2,\n\t\\end{align*}\n\twhere $\\cC(S,3)$ is defined as\n\t\\begin{align*}\n\t\\cC(S,3) = \\{\\Delta \\in \\RR^d |~ \\|\\Delta_{S^c}\\|_1 \\leq 3\\|\\Delta_S\\|_1\\}. \n\t\\end{align*}\t\n\\end{assumption}\n\n\\begin{assumption}\\label{assum_supp}\n\tThe $\\gamma^{h+1}$, $S^{h+1}$ and $S^h$ defined in Algorithm \\ref{algor} satisfy the following condition:\n\tthere exist positive constants $H$, $\\tau_1$ and $\\tau_2$ such that for $h\\geq H$, \n\t\\begin{align*}\n\t\\left\\|\\left(\\gamma^{h} - \\theta^*\\right)_{(S^h)^c}\\right\\|_1 &\\leq \\tau_1 \\left\\|\\gamma^{h} - \\theta^*\\right\\|_1\\\\\n\t\\left\\|\\left(\\gamma^{h+1} - \\theta^*\\right)_{S^{h+1}\\backslash S^{h}}\\right\\|_1 &\\leq \\tau_2\\left\\|\\gamma^{h+1}-\\theta^*\\right\\|_1. \n\t\\end{align*}\n\\end{assumption}\n\\begin{remark}\n\tIn practice, both $\\tau_1$ and $\\tau_2$ are very small even after only one round of communication, and they decrease rapidly to $0$ in later steps. \n\\end{remark}\n\\vspace{-0.1in}\nFor simplicity, we define the following notation:\n\\begin{align*}\n\\overbar{\\cL_1}(\\theta^*,\\theta^h) &:= \\cL_1(\\theta^*) +\\left\\langle {\\textstyle \\frac{1}{m}\\sum_{j = 1}^m \\nabla\\cL_j(\\theta^h) - \\nabla\\cL_1(\\theta^h)},\\theta \\right \\rangle,\\\\\n\\tilde{\\cL_1}(\\theta^*,\\theta^h) &:= \\cL_1(\\theta^*)\\\\\n&\\hspace{0.2in}+\\left\\langle \\cP_{S^h}\\left[{\\textstyle \\frac{1}{m}\\sum_{j = 1}^m \\nabla\\cL_j(\\theta^h) - \\nabla\\cL_1(\\theta^h)}\\right],\\theta \\right \\rangle.\n\\end{align*}\n\nNow we state our main theorem. \n\\begin{thmi} \\label{main_theorem}\n\tSuppose that Assumptions~\\ref{assum_smooth}, \\ref{assum_restrict}, and \\ref{assum_supp} hold. 
Let $k = C_1\\cdot s$ with $C_1>1$ and \n\t\\begin{align}\n\t\\nonumber& \\textstyle\\mu_{h+1} = \\hspace{0.1in} 4\\left\\|{ \\frac{1}{m}\\sum_{j = 1}^m \\nabla \\cL_j(\\theta^*)}\\right\\|_\\infty \\\\\n\t\\nonumber&\\textstyle + 2L\\left(\\max_{j,i}\\|\\xb_{ji}\\|_\\infty^2\\right)\\cdot \\Big[{ 2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} + \\rho }\\Big]\\|\\theta^h - \\theta^*\\|_1\\\\\n\t&\\textstyle + 2M\\left(\\max_{j,i}\\|\\xb_{ji}\\|_\\infty^3 \\right)\\|\\theta^h - \\theta^*\\|_1^2, \\label{mu_def}\n\t\\end{align} \n\twhere $\\rho := \\tau_1+\\tau_2$. \n\t\n\tThen with probability at least $1-\\delta$, we have that\n\t\n\t\\begin{align*} \n &\\|\\theta^{h+1} - \\theta^*\\|_1 {\\textstyle \\leq \\frac{C_2 s}{\\kappa}\\left \\|\\frac{1}{m}\\sum_{j=1}^m \\nabla \\cL_j(\\theta^*)\\right \\|_\\infty}\\\\\n &\\hspace{0.2in}{\\textstyle +\\frac{C_2 s}{2\\kappa} L\\cdot \\max_{j,i}\\|\\xb_{ji}\\|_\\infty^2\\cdot \\Big[2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} + \\rho \\Big]\\|\\theta^h - \\theta^*\\|_1}\\\\\n &\\hspace{0.2in}{\\textstyle +\\frac{C_2 s}{2\\kappa} M\\cdot \\max_{j,i}\\|\\xb_{ji}\\|_\\infty^3\\cdot \\|\\theta^h - \\theta^*\\|_1^2}, ~\\text{and}\\\\\n &\\|\\theta^{h+1} - \\theta^*\\|_2 {\\textstyle \\leq \\frac{C_3\\sqrt{s}}{\\kappa}\\left \\|\\frac{1}{m}\\sum_{j=1}^m \\nabla \\cL_j(\\theta^*)\\right \\|_\\infty }\\\\\n &\\hspace{0.2in} {\\textstyle + \\frac{C_3\\sqrt{s}}{2\\kappa} L \\cdot \\max_{j,i}\\|\\xb_{ji}\\|_\\infty^2\\cdot\\Big[2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} + \\rho \\Big]\\cdot \\|\\theta^h - \\theta^*\\|_1} \\\\\n &\\hspace{0.2in} \\textstyle+\\frac{C_3\\sqrt{s}}{2\\kappa}M\\cdot \\max_{j,i}\\|\\xb_{ji}\\|_\\infty^3\\cdot \\|\\theta^h - \\theta^*\\|_1^2, \n \\end{align*}\n\twhere $\\textstyle C_2 = 24\\sqrt{1+2(C_1-1)^{-\\frac{1}{2}}}\\cdot \\sqrt{C_1+1}$ and $\\textstyle C_3= 24\\sqrt{1+2(C_1-1)^{-\\frac{1}{2}}}$ are positive constants independent of $m,n,s,d$.\n\\end{thmi}\nThe theorem immediately implies the following convergence 
result.\n\\begin{cori} \\label{first_cor}\n\tSuppose that for all $h$\n\t\\begin{align}\\label{cor_ass}\n\t\\textstyle \\nonumber&M\\cdot \\Big(\\max_{j,i}\\|\\xb_{ji}\\|_\\infty^3\\Big) \\|\\theta^h - \\theta^*\\|_1\\leq\\\\ & L\\cdot \\max_{j,i}\\|\\xb_{ji}\\|_\\infty^2\\left[2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} \n\t+ \\rho\\right],\n\t\\end{align}\n\twhere $\\rho := \\tau_1+\\tau_2$.\n\t\n\tThen under the assumptions of Theorem~\\ref{main_theorem} we have\n\t\\begin{align*} \n\t\\|\\theta^{h+1} - \\theta^*\\|_1 &\\leq \\textstyle \\frac{1-a_n^{h+1}}{1 - a_n}\\cdot \\frac{C_2 s}{\\kappa}\\cdot \\textstyle \\left\\|\\frac{1}{m}\\sum_{j=1}^m \\nabla \\cL_j(\\theta^*)\\right\\|_\\infty \\\\\n\t& \\hspace{0.1in}+ a_n^{h +1} \\|\\theta^0 - \\theta^*\\|_1,\\\\\n\t\\|\\theta^{h+1} - \\theta^*\\|_2 &\\leq \\textstyle \\frac{1-a_n^{h+1}}{1 - a_n}\\cdot \\frac{C_3\\sqrt{s}}{\\kappa}\\cdot \\textstyle \\left\\|\\frac{1}{m}\\sum_{j=1}^m \\nabla \\cL_j(\\theta^*)\\right\\|_\\infty \\\\\n\t&\\hspace{0.1in}+ a_n^h b_n \\|\\theta^0 - \\theta^*\\|_1,\n\t\\end{align*}\n\twhere \n \\begin{align*}\n\t a_n =&\\textstyle \\frac{C_2 s}{\\kappa} L\\cdot \\max_{j,i}\\|\\xb_{ji}\\|_\\infty^2 \\cdot \\Bigg[2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}}+ \\rho\\Bigg]\n\t\\end{align*}\n\tand \n\t\\begin{align*}\n\t b_n =&\\textstyle \\frac{C_3\\sqrt{s}}{\\kappa} L\\cdot \\max_{j,i}\\|\\xb_{ji}\\|_\\infty^2\\cdot\\Bigg[ 2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}}+ \\rho \\Bigg],\n\t\\end{align*}\n\twhere $C_2 $ and $C_3$ are defined in Theorem~\\ref{main_theorem} and independent of $m,n,s,d$. \n\\end{cori}\n\\begin{remark}\n\tFrom the conclusion, we know that the hard thresholding parameter $k$ can be chosen as $C_1\\cdot s$, where $C_1$ can be a moderate constant larger than $1$. 
By contrast, previous work such as \\cite{li2016stochastic}, which solves a nonconvex minimization problem subject to the $l_0$ constraint $\\|\\theta\\|_0 \\leq k$, requires that $k\\geq \\cO(\\kappa_s^2 s)$, where $\\kappa_s$ is the condition number of the objective function.\n\tMoreover, instead of only hard thresholding the solutions of the Lasso subproblems, we also project the gradients in \\eqref{subproblem}. Together, these reduce the communication cost from $\\cO(d)$ to $\\cO(s)$. \n\\end{remark}\n\n\\subsection{Sparse Linear Regression}\nIn sparse linear regression, the data $\\{\\xb_{ji}, y_{ji}\\}_{i\\in [n], j\\in[m]}$ are generated\naccording to the model\n\\begin{align} \\label{lin_model} \n\\textstyle y_{ji} = \\langle\\xb_{ji},\\theta^*\\rangle + \\epsilon_{ji},\n\\end{align}\nwhere the noise $\\epsilon_{ji}$ are i.i.d. subgaussian random variables with zero mean. Usually the loss function for this problem is the squared loss function $l(y_{ji},\\langle \\theta,\\xb_{ji}\\rangle) = \\frac{1}{2}(y_{ji}-\\langle \\theta,\\xb_{ji}\\rangle)^2$, which is $1$-smooth.\n\nCombining Corollary \\ref{first_cor} with some intermediate results obtained from \\cite{rudelson2011reconstruction, vershynin2010introduction} and \\cite{wainwright2009sharp}, we have the following bound for the estimation error. \n\\begin{cori}\n\tSuppose that the design matrix and noise are subgaussian, Assumption \\ref{assum_supp} holds, and $\\mu_{h+1}$ is defined as \\eqref{mu_def}. 
Then under the sparse linear model, we have the following estimation error bounds with probability at least $1-2\\delta$:\n\t\\begin{align*} \\textstyle\n\t\\|\\theta^{h+1} - \\theta^*\\|_1 \\lesssim \\frac{1-a_n^{h+1}}{1-a_n} \\cdot\\frac{C_2s\\sigma\\sigma_X}{\\kappa}\\sqrt{\\frac{\\log(d\/\\delta)}{mn}}\\\\\n\t\\textstyle + a_n^{h+1} \\frac{s \\sigma \\sigma_X}{\\kappa}\\sqrt{\\frac{\\log(nd\/\\delta)}{n}}\n\t\\end{align*}\n\tand\n\t\\begin{align*} \\textstyle\n\t\\|\\theta^{h+1} - \\theta^*\\|_2 \\lesssim\\frac{1-a_n^{h+1}}{1-a_n} \\cdot\\frac{C_3\\sqrt{s}\\sigma \\sigma_X}{\\kappa}\\sqrt{\\frac{\\log(d\/\\delta)}{mn}}\\\\\n\t\\textstyle + a_n^{h}b_n \\frac{s \\sigma \\sigma_X}{\\kappa}\\sqrt{\\frac{\\log(nd\/\\delta)}{n}},\n\t\\end{align*}\n\twhere $C_2 $ and $ C_3$ are defined in Theorem~\\ref{main_theorem}, and where\n\t\\begin{align*} \n\t\\textstyle a_n =\\frac{C_2s}{\\kappa}\\sigma_X^2\\log\\left(\\frac{mnd}{\\delta}\\right)\\left[2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} + \\rho \\right] \n\t\\end{align*}\n\tand\n\t\\begin{align*} \n\t\\textstyle b_n = \\frac{C_3\\sqrt{s}}{\\kappa}\\sigma_X^2\\log\\left(\\frac{mnd}{\\delta}\\right)\\left[2\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} + \\rho \\right]. \n\t\\end{align*}\t\n\\end{cori}\n\n\\begin{remark}\nUnder certain conditions we can further simplify the bounds and gain insight into the relation among $n,m,s,d$. 
When $n\\geq s^2\\log d$, it is easy to see that by choosing\n\\begin{align*}\n\\mu_{h+1} \\asymp \\sqrt{\\frac{\\log d}{mn}} + \\sqrt{\\frac{\\log d}{n}} \\left[s\\left( \\sqrt{\\frac{\\log d}{n}} + \\rho \\right)\\right]^{h+1} \n\\end{align*}\nand $k = \\cO(s)$, the following error bounds hold with high probability:\n\\begin{align*}\n&\\|\\theta^{h+1} - \\theta^*\\|_1 \\\\\n&\\hspace{0.2in} \\lesssim s\\sqrt{\\frac{\\log d}{mn}} + s\\sqrt{\\frac{\\log d}{n}} \\left[s\\left( \\sqrt{\\frac{\\log d}{n}} + \\rho \\right)\\right]^{h+1},\\\\\n&\\|\\theta^{h+1} - \\theta^*\\|_2 \\\\\n&\\hspace{0.2in}\\lesssim \\sqrt{\\frac{s\\log d}{mn}} +\\sqrt{\\frac{s\\log d}{n}} \\left[s\\left( \\sqrt{\\frac{\\log d}{n}} + \\rho \\right)\\right]^{h+1}.\n\\end{align*}\n\\end{remark}\n\n\\subsection{Sparse Logistic Regression}\nCombining Corollary \\ref{first_cor} with some intermediate results obtained from \\cite{wang2016efficient} and \\cite{regularizers2012m}, we can now give a similar result on the estimation error bound for sparse logistic regression. The explicit form is omitted due to space limitations.\n\\iffalse\n\\begin{cori}\n\tSuppose Assumption \\ref{assum_supp} holds and $\\mu_{h+1}$ is defined as \\eqref{mu_def}.\n\tIf the following condition holds for some $T\\geq 0$:\n\t\\begin{align*}\n\t\\|\\theta^T - \\theta^*\\|_1 \\leq 4\\sqrt{\\frac{\\log (2d\/\\delta)}{n}}. 
\n\t\\end{align*}\n\tThen under the sparse logistic model with random design, we have the following estimation error bound for all $h\\geq T$ with probability at least $1-2\\delta$:\t\n\t\\begin{align*} \\textstyle\n\t\\|\\theta^{h+1}- \\theta^*\\|_1 \\lesssim\\sqrt{1+2(C_1-1)^{-\\frac{1}{2}}}\\frac{1-a_n^{h-T+1}}{1-a_n}\\hspace{0.27in}\\\\\n\t\\cdot\\frac{24C_2s \\sigma_X}{\\kappa} \\sqrt{\\frac{\\log(d\/\\delta)}{mn}}+4a_n^{h-T+1}\\sqrt{\\frac{\\log(2d\/\\delta)}{n}}\n\t\\end{align*}\n\tand\n\t\\begin{align*} \\textstyle\n\t\\|\\theta^{h+1}- \\theta^*\\|_2 \\lesssim \\sqrt{1+2(C_1-1)^{-\\frac{1}{2}}}\\frac{1-a_n^{h-T+1}}{1-a_n}\\hspace{0.27in}\\\\\n\t\\cdot \\frac{24\\sqrt{s} \\sigma_X}{\\kappa}\\sqrt{\\frac{\\log(d\/\\delta)}{mn}} +4a_n^{h-T}\\sqrt{\\frac{\\log(2d\/\\delta)}{n}},\n\t\\end{align*}\n\twhere\n\t\\begin{align*} \\textstyle\n\ta_n = 6\\sqrt{1+2(C_1-1)^{-\\frac{1}{2}}}\\hspace{1.5in}\\\\\n\t\\cdot\\frac{ C_2 s}{\\kappa}\\sigma_X^2\\log(\\frac{mnd}{\\delta})\\left[\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} + \\rho\\right]\n\t\\end{align*}\n\tand\n\t\\begin{align*} \\textstyle\n\tb_n = 6\\sqrt{1+2(C_1-1)^{-\\frac{1}{2}}}\\hspace{1.5in}\\\\\n\t\\cdot\\frac{\\sqrt{s}}{\\kappa}\\sigma_X^2\\log(\\frac{mnd}{\\delta})\\left[\\sqrt{\\frac{\\log(2d\/\\delta)}{n}} + \\rho\\right]. \n\t\\end{align*}\n\\end{cori}\n\\fi\n\n\\section{Experiments}\n\\vspace{-0.02in}\nNow we test our algorithm on both simulated data and real data. In both settings, we compare our algorithm with various advanced algorithms. These algorithms are:\n\n\\begin{itemize}\n\t\\item[1.] EDSL: the state-of-the-art approach proposed by Jialei Wang et al. \\cite{wang2016efficient}.\n\t\\vspace{-0.05in}\n\t\\item[2.] Centralize: using all data, one machine solves the centralized loss minimization problem with $l_1$ regularization. This procedure is communication expensive or requires much larger storage.\n\t\\vspace{-0.05in} \n\t\\item[3.] 
Local: the first machine solves the local $l_1$ regularized loss minimization problem with only the data stored on this machine, ignoring all the other data.\n\t\\vspace{-0.05in}\n\t\\item[4.] Two-way Truncation: the proposed sparse learning approach, which further improves the communication efficiency. \t\n\t\\vspace{-0.1in}\n\\end{itemize}\n\\subsection{Simulated data}\n\\begin{figure}[!htb]\n\t\\centering \n\t\\vspace{-0.1in}\n\t\\begin{center}\n\t\n\t\n\t\n\t\n\t\n\t\n\t\t\\hspace{-0.096in}\\subfloat[$\\Sigma_{ij} = 0.5^{|i -j|}$]{\n\t\t\t\\includegraphics[width=0.46\\linewidth]{{figure1a}.pdf}\n\t\t}\n\t\t\\hspace{-0.08in}\\subfloat[$\\Sigma_{ij} = 0.5^{|i -j|\/5}$]{\n\t\t\t\\includegraphics[width=0.46\\linewidth]{{figure1b}.pdf}\n\t\t}\\\\\n\t\t\\vspace{0.1in} $m= 20, n =600, d = 20000, s=10, \\Xb \\sim \\cN(0,\\Sigma)$ \\vspace{0.1in}\\\\\n\t\n\t\n\t\n\t\n\t\n\t\n\t\\end{center}\n\t\\vspace{-0.1in}\t\n\t \\caption{Comparison among four algorithms in sparse linear regression setting}\\label{figure1}\n\n\\end{figure}\n\n\\begin{figure}[!htb] \n\n\t\\centering\n\t\\begin{center}\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\t\\hspace{-0.096in}\\subfloat[$\\Sigma_{ij} = 0.5^{|i -j|}$]{\n\t\t\t\\includegraphics[width=0.46\\linewidth]{{figure2a}.pdf}\n\t\t}\n\t\t\\hspace{-0.08in}\\subfloat[$\\Sigma_{ij} = 0.5^{|i -j|\/5}$]{\n\t\t\t\\includegraphics[width=0.46\\linewidth]{{figure2b}.pdf}\n\t\t}\\\\\n\t\t\\vspace{0.1in} $m= 10, n =1000, d = 2000, s=20, \\Xb \\sim \\cN(0,\\Sigma)$ \\vspace{0.1in}\\\\\n\t\n\t\n\t\n\t\n\t\n\t\n\t\\end{center}\n\t\\vspace{-0.1in}\n\t\\caption{\\hspace{-0.1in} Comparison among four algorithms in sparse logistic regression setting}\\label{figure2}\n\n\\end{figure}\nThe simulated data $\\{\\xb_{ji}\\}_{i\\in [n], j\\in[m]}$ are sampled from a multivariate Gaussian distribution with zero mean and covariance matrix $\\Sigma$. 
We choose two different covariance matrices: $\\Sigma_{ij} = 0.5^{|i-j|}$ for a well-conditioned situation and $\\Sigma_{ij} = 0.5^{|i-j|\/5}$ for an ill-conditioned situation. The noise $\\epsilon_{ji}$ in the sparse linear model ($y_{ji} = \\langle\\xb_{ji},\\theta^*\\rangle + \\epsilon_{ji}$) is set to be a standard Gaussian random variable. We set the true parameter $\\theta^*$ to be $s$-sparse, with all entries zero except the first $s$ entries, which are i.i.d. random variables from a uniform distribution on $[0,1]$. Under both models, we set the hard thresholding parameter $k$ greater than $s$ but less than $3s$. \n\\vspace{-0.02in}\n\nHere we compare the algorithms in different settings of $(n,d,m,s)$ and plot the estimation error $\\|\\theta^h - \\theta^*\\|_2$ over rounds of communications. The results of sparse linear regression and sparse logistic regression are shown in Figure~\\ref{figure1} and Figure~\\ref{figure2}. We can observe from these plots that: \n\n\\begin{itemize}\n\t\\item First, there is indeed a large gap between the local estimation error and the centralized estimation error. The estimation errors of EDSL and the Two-way Truncation decrease to the centralized one in the first several rounds of communications.\n\n\t\\item Second, the Two-way Truncation algorithm is competitive with EDSL in both statistical accuracy and convergence rate, as the theory indicates. Since it converges at least as fast as EDSL and requires less communication and computation per iteration, it is more efficient overall in both communication and computation. 
\n\n\\end{itemize}\nThe above results support the theory that the Two-way Truncation approach is indeed more efficient and competitive with the centralized approach and EDSL.\n\\vspace{-0.05in}\n\n\\vspace{-0.05in}\n\n\\subsection{Real data}\n\\begin{figure}[!htb]\n\t\\vspace{-0.1in}\n\t\\begin{center}\n\t\t\\hspace{-0.08in}\\subfloat[dna (linear regression)]{\n\t\t\t\\includegraphics[width=0.46\\linewidth]{{figure3a}.pdf}\n\t\t}\n\t\t\\hspace{-0.08in}\\subfloat[a9a (classification)]{\n\t\t\t\\includegraphics[width=0.46\\linewidth]{{figure3b}.pdf}\n\t\t}\n\t\n\t\n\t\n\t\\end{center}\n\t\\vspace{-0.1in}\n\t\\caption{Comparison among four algorithms on real datasets}\\label{fig:real_data1} \n\t\\vspace{-0.5in}\n\\end{figure}\n\\vspace{0.4in}\nIn this section, we examine the above sparse learning algorithms on real-world datasets. The data come from the UCI Machine Learning Repository \\footnote{http:\/\/archive.ics.uci.edu\/ml\/} and the LIBSVM website \\footnote{https:\/\/www.csie.ntu.edu.tw\/$\\sim$cjlin\/libsvmtools\/datasets\/}. The high-dimensional datasets 'dna' and 'a9a' are used for the regression model and the classification model, respectively. We randomly partition each dataset into $[60\\%, 20\\%,20\\%]$ splits for training, validation and testing, respectively. The data is divided randomly across $m = 10$ machines and processed by the algorithms mentioned above. The results are summarized in Figure \\ref{fig:real_data1}. These results on real-world data again validate the theoretical analysis that the proposed Two-way Truncation approach is an effective sparse learning method with very small communication and computation costs. \n\n\\section{Conclusions}\n\nIn this paper we propose a novel distributed sparse learning algorithm with Two-way Truncation. Theoretically, we prove that the algorithm gives an estimate that converges to the minimizer of the expected loss exponentially and attains nearly the same statistical accuracy as EDSL and the centralized method. 
Due to the truncation procedure, this algorithm is more efficient in both communication and computation. Extensive experiments on both simulated data and real data verify this statement.\n\n\n\\section*{Acknowledgment}\n\nThe authors graciously acknowledge support from NSF Award CCF-1217751 and DARPA Young Faculty Award N66001-14-1-4047 and thank Jialei Wang for very useful suggestions. \n\n\\bibliographystyle{CADSLTT}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nIn the local Universe, there is a noticeable dearth of baryons within massive ($M_{DM}>10^{12.5}$ M$_{\\odot}$) galactic halos \\citep{Benson03,Croton06}. Feedback from supermassive black holes (SMBH) is commonly invoked to explain the missing baryons within the massive halos. Similarly, the tight correlation between the mass of a galaxy's SMBH and the total mass of the bulge and galaxy \\citep{Magorrian98,Gebhardt00} suggests that galaxies and SMBHs have evolved together. How SMBHs and galaxies co-evolve and regulate their mutual growth is an outstanding problem in modern astrophysics. Theoretical work suggests that the strong correlation between the mass of the SMBH and the velocity dispersion of the bulge ($M_{\\bullet}-\\sigma~$) may be achieved through quenching of star formation by powerful outflows driven once the galaxies reside on the $M_{\\bullet}-\\sigma~$ relationship \\citep{Hopkins06,Zubovas12,Zubovas14}.\n\nThe feedback that regulates the overall growth of a galaxy is expected to be most important when both the galaxies and the SMBHs were experiencing the majority of their growth \\citep{Zubovas12,Choi12,Barai17,Costa18}. Given that both quasar activity and star formation peak in normal galaxies at cosmic noon ($1.51$) Universe have focused on optical emission lines that trace the warm ionized phase of outflows with typical densities $n_e\\sim 100-1000$ cm$^{-3}$ and temperatures $T=10^4$ K. 
At the epoch of peak quasar activity ($z\\sim 2-3$), bright emission lines such as [O{\\sc III}]\\xspace\\ $\\lambda$5007\\AA, H$\\alpha$ and H$\\beta$ are redshifted into near-infrared bands where they can be resolved spatially and spectrally. Using near-infrared spectroscopy, several authors have studied quasar-driven ionized winds on galaxy-wide scales both in radio-loud quasars (with powerful jets) and in radio-quiet quasars \\citep{Nesvadba08,Cano-Diaz12,Carniani15,Vayner17,Vayner21,Kakkad20}.\n\nRecent theoretical and observational works have suggested that the energy and momentum in galaxy outflows are shared between multiple phases of the gas. These gas phases span a wide range of densities and temperatures, from the dense cold molecular and neutral gas ($T\\sim10-300$ K) to the diffuse hot post-shock medium ($T>10^7$ K; \\citealt{Crichton16,Hall19}). There have been several studies of multi-phase outflows in distant quasars, focusing on individual systems \\citep{Vayner17,Brusa18,Herrera-Camus19} and on the cold molecular and ionized gas phases of galactic outflows. It is unknown which phase of the outflow is responsible for the bulk of the momentum and mass of these multi-phase outflows, and it may well be a function of the conditions in the host galaxies. Theoretical work suggests that the molecular gas clouds may be disrupted and entrained by the outflow \\citep{Scannapieco15} or, instead, molecules may form within the outflowing gas \\citep{Richings18}. Observations of nearby galaxies suggest that the cold molecular gas phase (50-100 K) may be the dynamically dominant phase of quasar winds \\citep{Sun14,Cicone14,Alatalo11,Aalto12,Feruglio13,Morganti13,Veilleux17,Fiore17}. The warm (T$\\sim$500 K) molecular gas phase may also be important and carry a significant fraction of the momentum in a galactic outflow \\citep{Richings18}. 
In nearby galaxies, Spitzer observations of the warm molecular H$_{2}$\\ gas (T$\\sim$500 K) have found a significant amount of gas mass in galactic outflows \\citep{Beirao15,Dasyra14,Rogemar20}. \n\nBecause cold molecular gas is the fuel for star formation, the fate of the molecular gas phase is the key link in understanding the impact of quasar feedback on star formation. Since outflows are invariably multi-phase, understanding the relationship between molecular gas and atomic gas in outflows is crucial for accurately estimating the energetics of feedback and the fate of the interstellar medium (ISM). Statistical studies are lacking, as the study of multi-phase gas outflows in the distant Universe is still in its infancy. A recent attempt at comparing the molecular gas properties of galaxies with powerful (L$_{bol}=10^{45-47}$ erg s$^{-1}$ ) active galactic nuclei (AGN) to star-forming galaxies with similar stellar mass has found the AGN to reside in galaxies with lower molecular gas masses, hinting at potential effects of feedback from outflows or radiation \\citep{Circosta21,Bischetti21}.\n\nAddressing both the molecular gas reservoir and the measurement of cold molecular gas in galactic outflows requires kpc-scale spatial resolution observations with the sensitivity to detect the molecular gas or place stringent limits. The lowest-energy transitions of the H$_{2}$\\ molecule are rotational quadrupole transitions that require gas temperatures $>100$ K to excite; hence H$_{2}$\\ is invisible in the cold molecular gas phase. The next most abundant molecule in molecular clouds is carbon monoxide (CO), which has a weak permanent dipole moment, with the near-ground rotational transitions having small excitation energies, enabling them to trace colder molecular gas (5.5-55 K). 
In this paper, we present Atacama Large Millimeter Array (ALMA) observations of the cold molecular gas traced through rotational transitions of CO in six radio-loud quasars at $z=1.439-2.323$ with known powerful ionized gas outflows. We present observations, data reduction, and emission line analysis in Section \\ref{sec:obs}. We describe how we search for molecular outflows and calculate their energetics in Section \\ref{sec:dynamics}. We discuss individual objects in Section \\ref{sec:indiv_obj}. We compare the entire sample, discuss potential sources that drive the multi-phase gas outflows and the dominant source of molecular gas depletion in Section \\ref{sec:discussion}. We summarize our conclusions in Section \\ref{sec:conc}. We use an $\\rm H_{0}=67.8$ \\kms\\ Mpc$^{-1}$, $\\Omega_{\\rm m}=0.308$, $\\Omega_{\\Lambda}=0.692$ cosmology throughout this paper. \n\n\\section{Observations, Data Reduction \\& Line Fitting} \\label{sec:obs}\n\n\\begin{deluxetable*}{ccccccccc}\n\\centering\n\\tablecaption{Summary of ALMA observations \\label{tab:VLA-archive}}\n\\tablehead{\n\\colhead{Object} & \n\\colhead{Date} & \n\\colhead{Central frequency} &\n\\colhead{Continuum beam \\tablenotemark{a}}&\n\\colhead{Continuum Sensitivity}&\n\\colhead{Line beam}&\n\\colhead{Line sensitivity}&\n\\colhead{Channel width} \n\\\\\n\\colhead{} & \n\\colhead{} & \n\\colhead{(GHz)}& \n\\colhead{}&\n\\colhead{mJy}&\n\\colhead{}&\n\\colhead{mJy}&\n\\colhead{\\kms}}\n\\startdata\n7C 1354 & 2018 Jan 8-24 & 153.364 & 0.48\\arcsec$\\times$0.36\\arcsec &0.0069&0.59\\arcsec$\\times$0.47\\arcsec&0.08 & 34.0\\\\\n4C 22.44 & 2017 Dec 17-30 & 135.55 & 0.39\\arcsec$\\times$0.35\\arcsec&0.0068&0.408\\arcsec$\\times$0.366\\arcsec&0.083 & 66.0\\\\\n4C 05.84 & 2018 Jan 5-16 & 138.87 & 0.41\\arcsec$\\times$0.30\\arcsec&0.0075&0.438\\arcsec$\\times$0.54\\arcsec&0.134 & 33.0\\\\\n4C 09.17 & 2017 Dec 25 & \t148.24 & 0.32\\arcsec$\\times$0.23\\arcsec&0.0139&0.39\\arcsec$\\times$0.34\\arcsec&0.130 & 15.8\\\\\n & 
2018 Jan 8 & \t & &&&\\\n3C 318 & 2014 Jul 6 & 134.34 & 0.25\\arcsec$\\times$0.18\\arcsec&0.0095&0.454\\arcsec$\\times$0.320\\arcsec&0.107 & 66.66\\\n3C 298 & 2016 Sep 9 & 141.85 & 0.25\\arcsec$\\times$0.18\\arcsec&0.046&0.39\\arcsec$\\times$0.30\\arcsec&0.22 & 34.0\n\\enddata\n\\tablenotetext{a}{From continuum images integrated over all frequencies and cleaned with the robust = 0.5 parameter.}\n\\end{deluxetable*}\n\nA leading goal of the ALMA observational program was to detect molecular gas outflows in quasars with powerful ionized outflows that were previously detected via optical emission line (e.g., [O{\\sc III}]\\xspace, H$\\alpha$\\xspace) kinematics using integral field spectroscopy observations with adaptive optics \\citep{Vayner19b,Vayner19a}. All sources selected within this study display ionized gas outflows on kpc scales, with outflow rates in the range of 50-1000 \\myr, velocities $>$ 500 \\kms, momentum fluxes $>$ 10$^{35} $ dyne, and coupling efficiencies between the kinetic luminosity of the outflow and the bolometric luminosity of the quasar $>$0.05$\\%$. Given the similar field of view and angular resolution of the Keck\/OSIRIS and ALMA observations, we are able to study molecular gas outflows on similar spatial and dynamical time scales.\n\nALMA band 4 observations were conducted in Cycle 5 in the C43-5 configuration with a typical angular resolution of 0.4\\arcsec\\ and a maximum recoverable scale of 4-5.5\\arcsec, corresponding to rough physical scales of 3 kpc and 34-43 kpc, respectively. One 1.875 GHz spectral window was tuned to the redshifted frequency of the CO (3-2) or CO (4-3) emission line with an effective velocity bandwidth of 4,000 \\kms, while the three additional spectral windows were tuned to the nearby continuum.\\\\ \n\nData reduction was performed using CASA (Common Astronomy Software Applications; \\citealt{McMullin07}) version 5.1.2-4. 
The ALMA automated pipeline was used to create the measurement sets (MS) for each observing block, which were then combined into a single measurement set for each source. We performed phase self-calibration for 4C 05.84 and 7C 1354+2552, while for 3C 318 and 4C 09.17, we performed both phase and amplitude self-calibration. For each source, the band 4 quasar continuum was used as the self-calibration model, constructed with the CASA task \\textit{clean}. Given the similarity of our continuum maps to the archival VLA images of the jets in these systems, we believe that the majority of the continuum in our sources comes from the synchrotron emission of the quasar core\/jets. While there is extended continuum emission for each source, the majority of the flux is associated with the unresolved core emission from the quasar, making the modeling of the continuum relatively simple for self-calibration. We image the continuum with Briggs weighting using a robust value of 0.5 and a pixel size set to 1\/4th of the beam's full-width half max (FWHM). In this work, we also include our pilot observations of 3C 298 conducted in cycles 2 and 3 \\citep{Vayner17} with ALMA band 4. We achieve an SNR of 600-25,000 for the peak continuum flux. The continuum SNR improved by a factor of 1.2-5 from phase-only self-calibration; in the case of 4C 09.17, amplitude self-calibration further improved the rms by a factor of 1.5, while for 3C 318 we did not see a significant improvement in the rms after amplitude calibration.\\\\\n\nWe performed continuum subtraction using the CASA task \\textit{uvcontsub} by fitting a first-order polynomial to the line-free channels of the spectral window containing the CO emission. We then subtracted the best-fit continuum model from the full spectral window.\\\\\n\nWe imaged the cube using \\textit{clean} with a robust value of 1.5 to help improve the detection of fainter and more diffuse emission, resulting in a larger beam than the continuum imaging. 
We used a spectral pixel size of either 16 \\kms, 34 \\kms, or 66 \\kms, depending on the signal-to-noise ratio (SNR) of the CO emission. We used a value of 0.05\\arcsec\\ for the spatial pixel size. For all sources except 4C 09.17, we used a wide circular aperture centered on the quasar for the cleaning mask with a radius of 1\/4 the primary beam size. For 4C 09.17, the CO emission was detected in individual 16 \\kms\\ channels in the first cleaning cycle, and a tight mask was designed for each channel encompassing the CO (4-3) emission.\n\n\nTo search for extended emission, we construct an SNR map for each channel in the data cube. An SNR map is made by first computing the standard deviation in a large aperture away from the phase center, followed by dividing the flux per spaxel by that standard deviation. SNR maps are constructed from data cubes that were not corrected for the primary beam's response. For any spaxel containing emission with a peak SNR$\\geq$4 per beam, we fit the emission line in neighboring spaxels that lie within the beam with a Gaussian plus constant continuum model using the least-squares fitting routine \\textit{curve\\_fit} within \\textit{SciPy}. We create a 0$^{th}$ moment (flux) map by integrating the fitted Gaussian model in each spaxel with a successful emission line fit, and a 1$^{st}$ moment map by computing the line centroid's Doppler shift relative to the redshift of the quasar host galaxy. The integrated intensity maps are all optimally extracted, with a varying window for integrating the CO emission line based on the velocity offset and dispersion. We construct a 2$^{nd}$ moment map using the velocity dispersion from the Gaussian fit. All moment maps are constructed from data cubes that were corrected for the primary beam's response. 
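As an illustration of the per-spaxel line fitting behind the moment maps, the sketch below fits a Gaussian-plus-constant model with SciPy's curve_fit to a synthetic spectrum and returns the three fitted quantities (integrated flux, centroid, dispersion). This is a toy reimplementation on synthetic data under the SNR cut described above, not the actual reduction pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_const(v, amp, v0, sigma, c0):
    """Gaussian emission line on top of a constant continuum."""
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2) + c0

def fit_spaxel(velocity, spectrum, rms, snr_cut=4.0):
    """Return (flux, centroid, dispersion) for one spaxel, or None below the SNR cut."""
    if spectrum.max() / rms < snr_cut:
        return None
    # Initial guesses: peak amplitude, velocity of the peak channel, 100 km/s width
    p0 = [spectrum.max(), velocity[np.argmax(spectrum)], 100.0, 0.0]
    (amp, v0, sigma, _), _ = curve_fit(gauss_const, velocity, spectrum, p0=p0)
    # 0th moment: area under the fitted Gaussian; 1st: centroid; 2nd: dispersion
    return amp * abs(sigma) * np.sqrt(2.0 * np.pi), v0, abs(sigma)

# Synthetic spaxel: a line at +150 km/s with 80 km/s dispersion, 34 km/s channels
v = np.arange(-2000.0, 2000.0, 34.0)
rng = np.random.default_rng(0)
spec = gauss_const(v, 5.0, 150.0, 80.0, 0.1) + rng.normal(0.0, 0.05, v.size)
flux, cen, disp = fit_spaxel(v, spec, rms=0.05)
```

Integrating the fitted model rather than summing channels is what makes the extraction "optimal" in the sense used above: the integration window adapts to the fitted velocity offset and dispersion.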
Figure \\ref{fig:all_sources_CO} showcases the integrated CO (3-2) or CO (4-3) maps for sources with detected extended emission.\\\\\n\nHerein we use the quasar redshifts that are reported from Keck\/OSIRIS observations \\citep{Vayner19b} where they were derived from the centroid of the spatially unresolved ($<1.5$ kpc) [O{\\sc III}]\\xspace\\ or H$\\alpha$\\xspace emission lines, effectively originating from the narrow-line-region (NLR) of the quasar. The [O{\\sc III}]\\xspace\\ and H$\\alpha$\\xspace\\ redshifts agree within the centroiding uncertainty. For 3C 298 the redshift derived from the molecular gas disk traced through CO (3-2) is within 50 \\kms\\ of the redshift derived from the quasar emission \\citep{Vayner17}.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=7.0in]{all_sources_ALMA.eps}\n \\caption{ALMA band 4 observations of the sources within our sample. We present ALMA band 4 integrated intensity maps of CO emission. In contours we present the ALMA band 4 continuum emission that is dominated by synchrotron emission from the quasar jets. The contours stretch from a peak flux of 0.013, 0.0037, 0.2, 0.003, 0.045, and 0.002 Jy\/beam down to 2$\\sigma$ sensitivity in linear steps, from top left to bottom right, respectively. The ellipse in the lower-left corner of each map represents the beam of the emission line data. The star in the center represents the quasar's location, and the bar represents 1 arcsecond. 
The maps are at a position angle of 0$^{\\circ}$, with north oriented up; the only exception is 3C 298, which is at a position angle of 103$^{\\circ}$ East of North.}\n \\label{fig:all_sources_CO}\n\\end{figure*}\n\n\n\\begin{deluxetable*}{lcclll@{\\extracolsep{-10pt}}c}[!th]\n\\tiny\n\\tablecaption{Sample properties \\label{tab:sample}}\n\\tablehead{\n\\colhead{Name} & \n\\colhead{RA} & \n\\colhead{DEC} &\n\\colhead{z\\tablenotemark{a}} &\n\\colhead{L$_{\\rm bol}$} &\n\\colhead{L$_{\\rm 178 MHz}$} &\n\\colhead{M$_{\\rm BH}$}\\\\\n\\colhead{} & \n\\colhead{J2000} &\n\\colhead{J2000} & \n\\colhead{} & \n\\colhead{($10^{46}$ erg s$^{-1}$ )} & \n\\colhead{($10^{44}$ erg s$^{-1}$ )} & \n\\colhead{M$_{\\odot}$}}\n\\startdata\n4C 09.17 & 04:48:21.74 & +09:50:51.46 & 2.1170 & 2.88$\\pm$0.14 &2.6 & 9.11 \\\\\n7C 1354+2552 & \t13:57:06.54 & +25:37:24.49 & 2.0068 & 2.75$\\pm$0.11 & 1.4& 9.86 \\\\\n3C 298 & 14:19:08.18 & +06:28:34.76 & 1.439\\tablenotemark{b} & 7.80$\\pm$0.30 & 12 &9.51 \\\\\n3C 318 & 15:20:05.48 & +20:16:05.49 & 1.5723 & 0.79$\\pm$0.04 & 4.0 &9.30 \\\\\n4C 22.44 & 17:09:55.01 & +22:36:55.66 & 1.5492 & 0.491$\\pm$0.019 & 0.6 &9.64 \\\\\n4C 05.84 & 22:25:14.70 & +05:27:09.06 & 2.320 & 20.3$\\pm$1.00& 4.5 &9.75 \\\\\n\\enddata\n\\tablenotetext{a}{Redshift relative to narrow-line region emission of the quasar, derived from 5007 \\AA\\ [O{\\sc III}]\\xspace\\ emission.}\n\\tablenotetext{b}{Redshift derived from host galaxy CO (3-2) emission \\citep{Vayner17}}\n\\end{deluxetable*}\n\n\n\\section{Dynamics of the molecular gas}\\label{sec:dynamics}\n\n\\subsection{Systemic molecular gas}\n\nFor each source, we search for emission at the systemic redshift of the quasar host galaxy. Narrow CO line emission is found in the host galaxies of 4C 09.17A, 4C 09.17B, 3C 298, and 7C 1354+2552. The narrow CO emission in 3C 298 resembles a rotating disk that we modeled in \\citet{Vayner17}. 
The narrow emitting gas in the other systems does not show a smooth velocity gradient that would be indicative of a rotating galactic disk. \n\n\\noindent We compute the emission line luminosity using the following equation:\n\n\\begin{equation}\n L^{'}_{CO}=3.25 \\times 10^{7}S_{CO}\\Delta v \\frac{D^{2}_{L}}{(1+z)^{3}\\nu_{obs}^{2}} \\rm K~km~s^{-1}~pc^{2}, \n\\end{equation} \n\n\\noindent \nwhere $\\nu_{obs}$ is the observed CO transition frequency, $D_{L}$ is the luminosity distance, and $S_{CO}\\Delta v$ is the line integrated flux in units of Jy \\kms. We convert the observed CO transition luminosity into CO (1-0) luminosity (L$^{'}_{CO(1-0)}$) by assuming that the low-J CO transitions are thermalized and are optically thick, so ${L}_{\\mathrm{CO}\\ 4\\mbox{--}3}^{{\\prime} }={L}_{\\mathrm{CO}\\ 3\\mbox{--}2}^{{\\prime} } ={L}_{\\mathrm{CO}\\ 1\\mbox{--}0}^{{\\prime} }$. Using the ratios (${r}_{{\\rm{J}}1}={L}_{\\mathrm{CO}J\\to J-1}^{{\\prime} }\/{L}_{\\mathrm{CO}\\ 1\\mbox{--}0}^{{\\prime} }$) from \\cite{CarillinWalter13} with $r_{31}$=0.97 and $r_{41}$=0.87 did not significantly change our results. Furthermore in 3C 298 we found that the molecular gas is consistent with being thermalized and optically thick \\citep{Vayner17}. These physical conditions are consistent with what is found for Mrk 231 \\citep{Feruglio15}. Finally, we convert the ${L}_{\\mathrm{CO}\\ 1\\mbox{--}0}^{{\\prime}}$ line luminosity into molecular gas mass using the CO-to-H$_{2}$ conversion factor: $\\alpha_{\\rm CO}$ with units of (K \\kms$\\rm pc^{2}$)$^{-1}$. For sources where we do not detect any narrow CO emission at the systemic redshift, we place a limit on the molecular gas mass over an aperture equal to the beam size for an emission line with a velocity FWHM of 250 \\kms. The molecular gas mass limits can be linearly scaled with a different $\\alpha_{CO}$\\ value. 
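For concreteness, the line-luminosity equation and the $\alpha_{CO}$ conversion above can be evaluated as in the sketch below (not the authors' code). The luminosity distance of roughly 10,700 Mpc at $z=1.439$ under the adopted cosmology is an approximate value supplied by hand:

```python
def co_line_luminosity(s_co_dv_jy_kms, nu_obs_ghz, d_l_mpc, z):
    """L'_CO in K km/s pc^2 from the velocity-integrated flux (Jy km/s),
    observed line frequency (GHz), and luminosity distance (Mpc)."""
    return 3.25e7 * s_co_dv_jy_kms * d_l_mpc**2 / ((1.0 + z)**3 * nu_obs_ghz**2)

def molecular_gas_mass(l_co_prime, alpha_co=0.8, r_j1=1.0):
    """M_H2 in solar masses; r_j1 = L'_CO(J->J-1)/L'_CO(1-0), equal to 1
    for thermalized, optically thick low-J transitions as assumed above."""
    return alpha_co * l_co_prime / r_j1

# Narrow CO (3-2) component of 3C 298: 0.63 Jy km/s observed near 141.8 GHz
m_h2 = molecular_gas_mass(co_line_luminosity(0.63, 141.8, 10700.0, 1.439))
# ≈ 6.4e9 Msun, close to the tabulated narrow-component mass for 3C 298
```

Because the mass scales linearly with $\alpha_{CO}$ and inversely with $r_{J1}$, substituting the \cite{CarillinWalter13} ratios ($r_{31}=0.97$, $r_{41}=0.87$) changes the result by only $\sim$3-15\%.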
For sources with detected molecular gas at the quasar's systemic redshift, we compute the radius of the molecular gas region, which allows us to measure the gas surface density. All radii are computed using a curve of growth method. Effective radii refer to a region which encloses 50\\% of the flux, while a ``maximum\" extent refers to a size scale which encloses 90\\% of the flux. In all cases, narrow emission at the systemic redshift of the quasar is spatially resolved by our observations. We deconvolve the size of the beam from all radius measurements. For sources with no detected CO emission, we use the molecular gas mass limit and the beam of the observations as a proxy for the radius. Values associated with the molecular gas at the systemic redshift are summarised in Table \\ref{tab:narrow-prop}.\n\n\\begin{deluxetable*}{lcccccr}[!th]\n\\tablecaption{Properties of molecular gas at the systemic redshift of each quasar. $S_{\\nu}\\Delta V$ is the spatially and line-integrated CO flux. $M_{H_{2}}$ is the mass of gas at the systemic redshift of each galaxy, assuming an $\\alpha_{CO}$ of 0.8. $R$ is the radius of the region. V$\\rm_{\\sigma}$ is the velocity dispersion. SFR is the expected star formation rate based on the currently available molecular gas reservoir based on the KS law. 
\\label{tab:narrow-prop}}\n\n\n\n\\tablehead{\n\\colhead{Source}&\n\\colhead{$S_{\\nu}\\Delta V$}&\n\\colhead{$M_{H_{2}}$}&\n\\colhead{R}&\n\\colhead{V$\\rm_{\\sigma}$}&\n\\colhead{$\\Sigma_{molecular}$}&\n\\colhead{SFR}\\\\\n\\colhead{}&\n\\colhead{Jy \\kms}&\n\\colhead{$\\times10^{9}$M$_{\\odot}$}&\n\\colhead{kpc}&\n\\colhead{\\kms}&\n\\colhead{M$_{\\odot}$ pc$^{-2}$}&\n\\colhead{\\myr}}\n\\startdata\n4C 09.17 A RL & 0.29$\\pm$0.03 & 3$\\pm$0.3 & 2.8 & 158.0$\\pm$14 & 137$\\pm$13 & 6 \\\\\n4C 09.17 B RQ & 0.88$\\pm$0.09 & 10$\\pm$1 & 1.1 & 143.5$\\pm$12 & 2357$\\pm$235 & 56 \\\\\n7C 1354 & 0.028 $\\pm$0.003 & 0.3$\\pm$0.1 & 2.0 & 44.10$\\pm$11 & 23$\\pm$2 & 1\\\\\n3C 298 & 0.63$\\pm$0.07 & 6.6$\\pm$1 & 1.6 & 42.35$\\pm$12.8 & 820$\\pm$10 & 24\\\\\n3C 318 & $<0.05$ & $<0.6$ & -- & -- & -- & --\\\\\n4C 22.44 & $<$0.05 & $<$ 1 & -- & -- & -- & --\\\\\n4C 05.84 & $<0.07$ & $<0.8$ & -- & -- & -- & --\\\\\n\\enddata\n\\end{deluxetable*}\n\n\\subsection{Observed Molecular Outflows}\\label{sec:search_outflow}\n\nIonized gas outflows typically show a peak emission line offset $>$ 300 \\kms\\ from the quasar redshift measured from the narrow emission line region, with a velocity dispersion $>$ 250-300 \\kms\\ over the outflow region for the sources within our sample \\citep{Vayner19b}. Based on these observed velocities of the ionized gas outflows, we define the criteria for a molecular outflow to be any molecular gas that has a peak emission line offset relative to the redshift of the quasar host galaxy $|v| >300$ \\kms\\ or a spaxel with a velocity dispersion greater than 250 \\kms. The selected outflow criteria are typical of molecular outflows found in nearby galaxies \\citep{Fluetsch19}. Using these criteria, we detect molecular gas outflows in four quasars in the ALMA sample. For 3C 318 and 4C 05.84, we select outflows based on both broad ($\\sigma>250$ \\kms) and offset ($|v| > 500$ \\kms) emission and for 4C 09.17, based on broad ($\\sigma>300$ \\kms) emission. 
We include the molecular gas outflow in 3C 298 that was previously detected in \\citet{Vayner17} based on broad CO (3-2) and CO (5-4) emission. For each source, we extract a spectrum by integrating over all spaxels satisfying these outflow criteria. We present the extent of the molecular outflows along with the spectra in Figures \\ref{fig:3C318_spec}, \\ref{fig:4C0917_spec}, \\ref{fig:4C0584_spec}, and \\ref{fig:3C298_spec}. \n\nWe may be missing dense outflowing gas moving at slower speeds, since our high-velocity criterion is based on atomic gas observations; this may lead us to underestimate the total molecular gas in the outflow for the 4C 09.17 and 3C 298 systems, where we detect CO emission moving at speeds $<$ 300 \\kms. Observations of denser molecular gas tracers, such as CS or HCN, could provide a more complete picture of the outflow. \\\\\n\nWe calculate the molecular gas mass in the outflows based on the flux associated with the broad or highly offset emission line component. We use an $\\alpha_{CO}$\\ value of 0.8 \\msun$\\rm(K~kms^{-1}pc^{2})^{-1}$ for consistency with other works at low and high redshift \\citep{Herrera-Camus19,Fluetsch19}; this value is commonly adopted for the molecular gas in the ISM of nearby Ultra Luminous Infrared Galaxies \\citep{Bolatto13}. However, in some well-studied molecular outflows in nearby galaxies, the conversion factor can be much higher, $\\alpha_{\\rm CO}\\sim 2$ \\citep{Cicone18}, so our mass estimates may be conservative.\n\nWe compute the molecular gas outflow rate using\n\n\\begin{equation}\\label{equation:outflow-thin-shell}\n \\dot{M}_{H_{2}}=\\frac{M_{H_{2}}v_{out}}{R_{out}}.\n\\end{equation}\n\nWe select this equation since the molecular gas outflows in 3C 318, 4C 05.84, and 4C 09.17-A RL are seen as a single high-velocity offset component that spans either blue- or red-shifted velocities. 
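Combined with the outflow velocity definition $v_{out}=|v_{r}|+FWHM/2$ and the momentum flux and kinetic luminosity defined in this section, the thin-shell outflow-rate equation above reduces to the unit conversion sketched here (approximate physical constants; for outflows closer to a filled cone the rate would additionally be multiplied by 3, as noted in the text):

```python
MSUN_G = 1.989e33    # g per solar mass (approximate)
KPC_KM = 3.086e16    # km per kpc
YR_S = 3.156e7       # s per yr

def outflow_energetics(m_h2_msun, v_r_kms, fwhm_kms, r_out_kpc):
    """Thin-shell outflow rate (Msun/yr), momentum flux (dyne),
    and kinetic luminosity (erg/s) of a molecular outflow."""
    v_out = abs(v_r_kms) + fwhm_kms / 2.0                    # km/s
    mdot = m_h2_msun * v_out * YR_S / (r_out_kpc * KPC_KM)   # Msun/yr
    mdot_cgs = mdot * MSUN_G / YR_S                          # g/s
    pdot = mdot_cgs * v_out * 1.0e5                          # dyne
    edot = 0.5 * mdot_cgs * (v_out * 1.0e5) ** 2             # erg/s
    return mdot, pdot, edot

# 3C 318-like inputs: M_H2 ~ 3e9 Msun, v_r ~ -868 km/s, FWHM ~ 529 km/s, R ~ 20.2 kpc
mdot, pdot, edot = outflow_energetics(3.0e9, -868.0, 529.0, 20.2)
# mdot ≈ 1.7e2 Msun/yr and pdot ≈ 1.2e36 dyne, close to the tabulated values
```

The sketch makes the linear dependencies explicit: halving $R_{out}$ or doubling $M_{H_2}$ doubles the outflow rate, while the kinetic luminosity scales as $v_{out}^{3}$ for fixed mass and radius.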
The molecular gas outflows in 4C 09.17B and 3C 298 may be closer in geometry to a filled wide-angle cone since they span a broader velocity range. In these two sources, the estimates obtained from equation \\ref{equation:outflow-thin-shell} should be multiplied by a factor of 3 if the outflows are closer to a filled cone. Here $R_{out}$ is the extent of the outflow where 90\\% of the flux associated with the outflow emission accumulates. The velocity is computed as $v_{out} = \\left | v_{r} \\right | + FWHM\/2$, where $\\left | v_{r} \\right |$ and $FWHM$ are the radial velocity and full-width-half-maximum in units of \\kms\\ of the emission relative to the systemic redshift of the emission associated with the outflow component.\n\nIn addition to the outflow rates we also compute the momentum flux of the outflow using: \n\n\\begin{equation}\n \\dot{P}_{H_{2}} = \\dot{M_{H_{2}}} \\times v_{out}\n\\end{equation}\n\n\\noindent and the kinetic luminosity:\n\n\\begin{equation}\n \\dot{E}_{H_{2}} = \\frac{1}{2}\\times\\dot{M_{H_{2}}}\\times v_{out}^{2}.\n\\end{equation}\n\nThe CO line luminosity used to compute the molecular gas mass along with the spatial extent, velocity, outflow rate, and energetics are summarized for each source in Table \\ref{tab:outflow-prop}.\n\n\n\\begin{deluxetable*}{lllccccccccl}\n\\tablecaption{Multi-phase outflow properties. $S_{\\nu}\\Delta V$ is the spatially and line-integrated CO flux associated with the molecular outflow. R$\\rm_{out}$ is the radial extent of the outflow. V$\\rm_{out}$ is the velocity of the outflow. dM\/dt$\\rm_{H_{2}}$ and dM\/dt$\\rm_{Ionized}$ are the molecular and ionized outflow rates. $\\dot{P}_{H_{2}}$ is the momentum flux of the molecular outflow (assuming $\\alpha_{CO}$$=0.8$), while $\\dot{P}_{Ionized}$ is the momentum flux of the ionized outflow, using the dM\/dt$\\rm_{H\\alpha}$ outflow rate from \\citep{Vayner19a}. 
$\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$ is the ratio of the momentum flux of outflow to the momentum flux of the quasar accretion disk ($L_{bol}\/c$) using a sum of the ionized and molecular outflow momenta flux.\\label{tab:outflow-prop}}\n\n\\tablehead{\\colhead{Source}&\n\\colhead{$S_{\\nu}\\Delta V$}&\n\\colhead{M$\\rm_{H_{2}}$}&\n\\colhead{R$\\rm_{out}$}&\n\\colhead{V$\\rm_{out}$}&\n\\colhead{V$\\rm_{FWHM}$}&\n\\colhead{dM\/dt$\\rm_{H_{2}}$}&\n\\colhead{M$\\rm_{Ionized}$}&\n\\colhead{dM\/dt$\\rm_{Ionized}$}&\n\\colhead{$\\dot{P}_{H_{2}}$}&\n\\colhead{$\\dot{P}_{Ionized}$}&\n\\colhead{$\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$}\\\\\n\\colhead{}&\n\\colhead{Jy \\kms}&\n\\colhead{$\\times10^{9}$M$_{\\odot}$}&\n\\colhead{kpc}&\n\\colhead{\\kms}&\n\\colhead{\\kms}&\n\\colhead{\\myr} &\n\\colhead{$\\times10^{9}$M$_{\\odot}$} &\n\\colhead{\\myr}&\n\\colhead{$10^{35}$dyne}&\n\\colhead{$10^{35}$dyne}&\n\\colhead{}\n}\n\\startdata\n4C 05.84 & 0.1$\\pm$0.03 & 1.4$\\pm$0.2 & 8.6 & 653$\\pm$30 & 382.2$\\pm$47.4 & 110$\\pm$12 & 0.4$\\pm$0.3 & 870$\\pm$600 & 4.4$\\pm$0.6 & 40$\\pm$30& 0.7$\\pm$0.4 \\\\\n3C 318 & 0.25$\\pm$0.03 & 3$\\pm$0.3 & 20.2 & 1132$\\pm$44 & 528.7$\\pm$67.4 & 168$\\pm$18 & 0.32$\\pm$0.2 & 220$\\pm$150 & 12$\\pm$2 & 10$\\pm$7 & 8$\\pm$3 \\\\\n3C 298 & 0.3$\\pm$0.03 & 3$\\pm$0.3 & 1.6 & 394$\\pm$64 & 624.0$\\pm$49.0 & 780$\\pm$150 & 0.6$\\pm$0.3 & 750$\\pm$400 & 20$\\pm$7 & 77$\\pm$40 & 4$\\pm$2 \\\\\n4C 09.17 A & 0.11$\\pm$0.01 & 1.3$\\pm$0.1 & 2.8 &852$\\pm$77& 439.1$\\pm$122.6 & 400$\\pm$50 & 0.05$\\pm$0.02 & 50$\\pm$20 & 21$\\pm$4 & 2.1$\\pm$1 & 2.4$\\pm$0.5 \\\\\n4C 09.17 B & 2.3$\\pm$0.2 & 27$\\pm$3 & 4.9 & 456$\\pm$26 & 870.6$\\pm$47.2 & 2500$\\pm$300 & -- & -- & 73$\\pm$11 & -- & -- \\\\\n\\enddata\n\\end{deluxetable*}\n\n\\section{Individual Objects}\\label{sec:indiv_obj}\n\nIn this section, we outline the known properties of each quasar within our sample, focusing on their radio jet morphology, far-infrared properties, morphologies, 
the extent of the ionized gas outflows, and a description of what we detect with our ALMA band 4 observations.\n\n\\subsection{3C 318 (z=1.5734)}\\label{sec:3c318}\n\n3C 318 is a luminous radio-loud quasar at $z=1.5734$. The radio jet is double-sided, stretching in the southwest and northeast, with a bright core emission associated with the quasar's optical emission location. Within our ALMA continuum observations, we only detect the jet's southwest component. The northeast component of the jet blends with the bright unresolved core. 3C 318 has been detected with the \\textit{Herschel Space Telescope} and is known to be a bright far-infrared emitting source \\citep{Podigachoski15}. Resolved band 7 ALMA observations of the dust emission reveal a ring-like structure on kpc scale centered on the quasar \\citep{Barthel19}.\\\\\n\nWith the Keck\/OSIRIS, we detected ionized gas emission in nebular emission lines [O{\\sc III}]\\xspace 5007 \\AA, H$\\alpha$\\xspace, [N{\\sc II}]\\xspace 6585 \\AA, and [S{\\sc II}]\\xspace 6717, 6731 \\AA\\ with an extent of 4 kpc \\citep{Vayner19b}. We detected an ionized outflow extending in the SW and NE direction with a maximum extent of 3.2 kpc. \\\\\n\n3C 318 was known to have molecular emission detected at the CO (2-1) transition with PdBI \\citep{willott07}. The CO (2-1) emission was known to be blueshifted by 400 \\kms\\ and spatially offset from the quasar continuum by 2.4\\arcsec\\ to the west and 0.5\\arcsec\\ to the north with considerable uncertainty due to a coarse beam of 8.05\\arcsec$\\times$4.32\\arcsec. \\\\\n\nOur ALMA band 4 observations reveal an extended CO (3-2) emission with one component offset 1.7 kpc to the west and a second component offset 17 kpc towards the south. The emission is divided between two regions that have widths of 4 and 8 kpc, that show similar highly blueshifted emission relative to the quasar. 
The spatially integrated emission is blueshifted (-936.0 \\kms) and relatively broad with an FWHM of 534 \\kms. The spectrum and integrated intensity map of the detected CO emission are shown in Figure \\ref{fig:3C318_spec}. Likely the ALMA and PdBI observations trace the same molecular gas components. However, the differences in the beams, maximum recoverable scales, and SNR play a role in the observed line shift and integrated line flux. 3C 318 has also been observed with VLA targeting the CO (1-0) transition \\citep{Heywood13}. Emission associated with the CO (1-0) emission line at the velocity offset of the PdBI observations was found 0.33\\arcsec\\ north from the quasar continuum. We do not detect any CO (3-2) emission at that location. Using typical ratios between the CO (1-0) and CO (3-2) line luminosity, we would have expected to detect this component at an SNR of 100, integrated over an emission line equivalent with a velocity dispersion of 250 \\kms. \\\\\n\nThe separation between the two high velocity clumps roughly matches the maximum recoverable scale of the interferometric observations. We have attempted to recover the more diffuse emission between the two clumps by smoothing the data in the UV plane with Gaussian kernel using the \\textit{uvtaper} option within \\textit{tclean} in CASA. Using a uv-taper parameter of 0.5\\arcsec\\ and 1\\arcsec\\ on the sky, the fainter emission between the two clumps seen in Figure \\ref{fig:3C318_spec} is below the noise in the uv-tapered data. The total integrated flux from the CO (3-2) emission is within 10\\% of the original data, within the statistical noise of the observations, hence no additional ``diffuse\" emission was recovered. A loss of baselines resulted in an increase in noise with larger uv-taper parameters. Observations in a more compact ALMA configuration are necessary to detect the more diffuse emission. 
\\\\\n\nALMA's higher angular resolution and sensitivity lead us to speculate that the molecular gas emission in 3C 318 is associated with a molecular outflow rather than a merging system. This interpretation is supported by a more accurate redshift measurement from the narrow-line region, which enables better kinematic and dynamical measurements of the molecular gas emission. The Dark Energy Survey \\citep{DES2021} shows no apparent optical detection of a companion galaxy down to an $r$-band magnitude of 24 at a significance of 10$\\sigma$. Infrared observations with Spitzer and archival ALMA band 7 observations do not show any evidence for a galaxy at the spatial locations of the highly blueshifted CO (3-2) emission \\citep{Barthel19}. It is possible that the high-velocity emission is associated with an obscured galaxy near the 3C 318 quasar host galaxy. In recent years there have been several galaxies detected with ALMA that have faint or no counterparts in very deep optical imaging and are referred to as ``optically dark ALMA galaxies\" \\citep{Williams19,Zhou20}. These galaxies show high far-infrared luminosities and are characterized as dusty star-forming galaxies at high redshift ($z>2$). Such galaxies are typically relatively massive, with stellar masses of 10$^{10.2-11.5}$ M$_{\\odot}$\\ and contain a substantial amount of molecular gas. If the emission in 3C 318 is associated with such a galaxy, then the associated dust continuum emission would have been easily detectable in the ALMA band 7 observations.\\\\ \n\nWe divide the molecular gas mass by the area of the emitting region using the effective radius as the scale size. Using the Milky Way's hydrogen column density to $V$-band extinction relationship \\citep{Guver09}, we convert the column density into a $V$-band extinction value for the molecular gas traced by CO. 
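The column-density-to-extinction step can be sketched as follows. This is a sketch under stated assumptions: it uses the Milky Way relation $N_{\rm H} = 2.21\times10^{21}\,A_V$ cm$^{-2}$ of \cite{Guver09}, and the helium mass correction of 1.36 is introduced here for illustration:

```python
MSUN_G = 1.989e33   # g per solar mass (approximate)
PC_CM = 3.086e18    # cm per pc
M_H_G = 1.673e-24   # hydrogen atom mass in g

def av_from_surface_density(sigma_msun_pc2, helium_corr=1.36):
    """V-band extinction implied by a molecular gas surface density (Msun/pc^2),
    converting mass surface density to a hydrogen column N_H (cm^-2) and
    applying the Milky Way relation N_H = 2.21e21 * A_V."""
    sigma_cgs = sigma_msun_pc2 * MSUN_G / PC_CM**2    # g cm^-2
    n_h = sigma_cgs / (helium_corr * M_H_G)           # hydrogen nuclei per cm^2
    return n_h / 2.21e21

a_v = av_from_surface_density(100.0)   # ≈ 4 mag for 100 Msun/pc^2
```

At the clump surface densities implied by the measured masses and sizes, this conversion yields extinctions of order a few magnitudes, consistent with the $A_V$ values quoted below.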
We measure an $\\rm A_{v}$ value in the CO gas of 1-4 magnitudes, which is a factor of 100 lower than those found in the dusty star-forming galaxies. Furthermore, we use the molecular gas mass of individual clumps and convert them into expected dust continuum emission in band 7 based on the relationship between ISM mass and dust emission of \\citet{Scoville17}. The expected continuum flux density is 0.017 mJy\/beam, which would be undetected in the band 7 observations with a sensitivity of 0.0297 mJy\/beam. We assume the clumps have the same surface brightness profile in both bands and in the dust and molecular gas emissions. Based on these calculations, the clumps are unlikely to be associated with typical dust-obscured galaxies at this redshift. Band 4 continuum is not optimal for detecting the dust continuum at this redshift since the dust emission is expected to be fainter at the longer wavelength of the Rayleigh-Jeans tail. Another possibility is that the emission may be associated with a tidal tail feature. However the FWHM of 534 \\kms\\ and an offset of -936.0 \\kms\\ relative to the redshift of the quasar are both larger than what would be expected for a tidal feature. Based on morphological identification of tidal tail from HST imaging and Keck\/OSIRIS observations, in two other systems, we found that the velocity dispersion in both the ionized and molecular gas mass is $\\sim$ 150 \\kms\\ with velocity offsets of -250 \\kms\\ \\citep{Vayner19b}. \\\\\n\nCombining rest-frame optical and sub-mm observations, we find that the ionized and molecular gas outflows in 3C 318 show different morphologies, spatial extent, and kinematics. The molecular gas outflow is far more extended, with a maximum distance of 21 kpc from the quasar, while the ionized outflow shows a maximum extent of only 3.2 kpc. The molecular outflow is also faster moving with blueshifted velocities up to -1200 \\kms. 
In contrast, the ionized gas outflow has a velocity of 703 \\kms\\ with a bi-conical morphology that is both blue- and red-shifted relative to the quasar in the SW and NE directions. We find no evidence for CO (3-2) emission at the quasar's systemic redshift associated with narrow emission. We place a limit on the molecular gas reservoir at the quasar's location over an aperture matching the size of the beam of $<0.7\\times10^{9}$ M$_{\\odot}$\\ at 2$\\sigma$ confidence.\\\\\n\nThe extent of the molecular outflow is similar to the cold gas outflow detected in the $z=6.4$ quasar SDSS J1148+5251, through the 158 \\micron\\ [C II] emission line \\citep{Cicone15}. The morphology and kinematics of the molecular outflow in 3C 318 are also similar to the outflow recently detected in zC400528 \\citep{Herrera-Camus19} through CO (3-2) observations, where they see extended emission entirely redshifted from the galaxy with a relatively collimated morphology similar to the case of 3C 318. The clumpy morphology and high velocity of the outflowing molecular gas are also similar to recent molecular outflows detected in PDS 456 \\citep{Bischetti19} and in the lensed quasar HS 0810+2554 \\citep{Chartas20}.\n\n\n\\begin{figure*}[!th]\n \\centering\n \\includegraphics[width=7.0in]{3C318_spec.eps}\n \\caption{ALMA band 4 observations of 3C 318. On the left we show an optimally extracted intensity map of the molecular outflow in the 3C 318 system, detected in the CO (3-2) line. The white contours outline the molecular outflow region. On the right we show a spectrum integrated over the entire molecular outflow region shown in the white contours, along with a fit to the CO (3-2) emission line. The systemic redshift of the quasar host galaxy is at 0 \\kms. The ellipse in the lower left corner on the right panel shows the beam of our ALMA band 4 observations. 
We detect no molecular CO (3-2) emission at the systemic redshift.}\n \\label{fig:3C318_spec}\n\\end{figure*}\n\n\n\\subsection{4C 09.17 (z=2.117)}\\label{sec:4c0917}\n\n4C 09.17 is a radio-loud quasar at $z=2.117$. A one-sided jet extends towards the southwest, with a bright core emission associated with the quasar's optical emission location. The system is also bright at far-infrared wavelengths \\citep{Podigachoski15}. \\\\\n\nWith Keck\/OSIRIS we detected ionized gas emission in the nebular emission lines [O{\\sc III}]\\xspace\\ 5007 \\AA, H$\\alpha$\\xspace, and [N{\\sc II}]\\xspace\\ 6585 \\AA\\ \\citep{Vayner19b}. We found an ionized gas outflow extending towards the east with a maximum extent of 6 kpc.\\\\ \n\nWe detect broad, blueshifted emission resembling a molecular gas outflow in the host galaxy of the radio-loud (RL) quasar 4C 09.17. From here on, we refer to this object as 4C 09.17 A - RL. We also detect a very broad component in the merging radio-quiet (RQ) galaxy towards the northeast; from here on, we refer to this object as 4C 09.17 B - RQ. This galaxy is also detected in the \\textit{K}-band imaging of \\citet{Armus97}, with a red optical to near-IR continuum color. In 4C 09.17 B - RQ, we detect very faint narrow [O{\\sc III}]\\xspace\\ emission with Keck\/OSIRIS, while the ionized outflow is undetected. 4C 09.17 B - RQ also contains a narrower emission line component in CO (4-3) at a similar redshift to the narrow [O{\\sc III}]\\xspace\\ emission, which we use to calculate its redshift. The velocity offset between 4C 09.17 A - RL and 4C 09.17 B - RQ is -593 \\kms. The majority of the narrow CO emission-line flux is found concentrated in 4C 09.17 B - RQ, within a 1 kpc radius region. The majority of the dust continuum detected at far-infrared wavelengths with the \\textit{Herschel Space Telescope} is likely associated with this galaxy. 
4C 09.17 B - RQ is highly obscured; the narrow CO emission component yields a line integrated gas column density of 3.4$\\rm \\times10^{24}~cm^{-2}$ computed by dividing the molecular gas mass by the area of the emitting region. Using the Milky Way's hydrogen column density - $V$-band extinction relationship \\citep{Guver09}, we find a $V$-band extinction of 150 mag. The narrow CO emission is likely at the center of the merging galaxy because it roughly corresponds to the $K$-band continuum's peak location.\\\\\n\nFor the galaxies detected to the southwest and northwest of the quasar in \\citet{Armus97} and \\citet{Lehnert99}, we detect narrow CO (4-3) emission near their optical locations. We do not detect any high velocity or broad molecular gas associated with these two systems. The detection of three galaxies within 20 kpc of the quasar host galaxy from both ALMA and Keck\/OSIRIS observations makes the 4C 09.17 system likely a proto-group environment at z $\\sim 2.11$. \\\\\n\nWe find that the molecular gas outflow in 4C 09.17 A - RL is more compact than the ionized gas outflow. The ionized and molecular gas outflows show similar blueshifted velocities and velocity dispersions. The maximum extent of the molecular gas outflow is 2.8 kpc, while the ionized outflow extends to 6 kpc. We find that both the ionized and molecular outflow in this system are not along the path of the radio jet, but extend in the same eastern direction. Similar results have been found for a subset of nearby galaxies recently, where outflows appear to expand perpendicular to the path of the jet \\citep{Venturi21}. \\\\\n\nIn 4C 09.17 B - RQ, the molecular outflow extends 4.9 kpc from the narrow CO emission line component. The extent of the molecular outflow in 4C 09.17 B - RQ roughly matches the maximum extent of the $K$-band stellar continuum; hence the molecular outflow is occurring on galactic scales in this galaxy. 
Extinction within the outflowing gas can potentially prevent ionization by quasar photons and prevent the observer from detecting recombination photons. Spectra of the distinct regions detected in this system are shown in Figure \\ref{fig:4C0917_spec}. \\\\\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=6.5 in]{4C0917_spec.eps}\n \\caption{ALMA band 4 observations of 4C 09.17. On the left we show an optimally extracted intensity map of CO emission in the 4C 09.17 system. 4C 09.17 is a merger system with molecular outflows detected in both galaxies. The teal contours outline the molecular outflow in the radio-quiet galaxy 4C 09.17 B, while the purple contours outline the molecular outflow detected in the host galaxy of the radio-loud quasar 4C 09.17 A. On the right we show the spectra extracted over the respective outflow regions along with the Gaussian fit model. Dashed lines represent the individual Gaussian components of the emission line fit, while the solid black line represents the sum of all components and a 0th order polynomial fit to any residual continuum. Component 1 (C1) in 4C 09.17 B is the fit to the narrow emission at the systemic redshift of the merging galaxy, which has a velocity offset of about 593 \\kms\\ relative to 4C 09.17 A, while C2 is gas in the outflow. For 4C 09.17 A, C1 corresponds to the outflow gas while C2 is the narrow gas at the systemic redshift of the quasar host galaxy. The systemic redshift of the quasar host galaxy is at 0 \\kms. The ellipse in the lower left corner on the right panel shows the beam of our ALMA band 4 observations.}\n \\label{fig:4C0917_spec}\n\\end{figure*}\n\\subsection{4C 22.44 (z=1.5492)}\\label{sec:4c2244}\n\n\n4C 22.44 is a radio-loud quasar at z=1.5492. The radio jet in this system extends along the east-west direction with a length of 7\\arcsec\\ and a bright core emission associated with the quasar's optical emission location. 
With Keck\/OSIRIS we detected extended ionized gas in the [O{\\sc III}]\\xspace, H$\\alpha$\\xspace, and the [N{\\sc II}]\\xspace emission lines with an extent of 2 kpc. An ionized gas outflow is detected on a spatial scale of $<1$ kpc. We detect no emission in the CO (3-2) line with ALMA. We place a limit on the CO (3-2) line luminosity of 0.08 Jy \\kms\\ for an aperture matching the beam with a line velocity dispersion of 250 \\kms, which converts to a molecular gas mass limit of $<$ 1$\\times10^{9}$ M$_{\\odot}$.\n\n\\subsection{4C 05.84 (z=2.323)}\\label{sec:4c0584}\n\n4C 05.84 is a radio-loud quasar at z=2.323. The one-sided radio jet in this system extends towards the southwest with a maximum extent of 12 kpc. There is a bright radio core component associated with the quasar's optical emission location.\n\nWith Keck\/OSIRIS observations, we detected extended ionized gas on an 8 kpc scale in the [O{\\sc III}]\\xspace, H$\\alpha$\\xspace, and [N{\\sc II}]\\xspace emission lines \\citep{Vayner19a}. We detected a bi-conical ionized outflow extending along the northeast and southwest directions.\n\nIn our ALMA band 4 observations, we detect extended CO (4-3) emission that is offset in the western direction, consisting of multiple clumps. The individual clump components and the spatially integrated emission are highly blueshifted relative to the quasar's systemic redshift. This emission is not associated with any known galactic component. The spectrum and integrated intensity map of the detected CO emission are shown in Figure \\ref{fig:4C0584_spec}. \n\nThe ionized and molecular gas outflows in this system are on similar scales. The ionized outflow extending towards the southwest direction shows a blueshifted velocity offset similar to that of the molecular outflow. The ionized outflow detected on a spatial scale $<1$ kpc is similarly blueshifted to the more extended ionized and molecular outflow. 
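Gas mass limits like the one quoted for 4C 22.44 follow from the standard CO line luminosity conversion of Solomon \& Vanden Bout (2005). A sketch is given below; the luminosity distance and the $\alpha_{\rm CO}$ and $r_{31}$ values are assumptions for illustration, not values stated in this work.

```python
def l_prime_co(s_dv_jykms, nu_obs_ghz, d_l_mpc, z):
    """CO line luminosity in K km/s pc^2 (Solomon & Vanden Bout 2005):
    L'_CO = 3.25e7 * S dv * nu_obs^-2 * D_L^2 * (1+z)^-3."""
    return 3.25e7 * s_dv_jykms * nu_obs_ghz**-2 * d_l_mpc**2 * (1 + z)**-3

# Numbers from the 4C 22.44 limit in the text; D_L and alpha_CO are assumptions
z = 1.5492
nu_obs = 345.796 / (1 + z)   # CO (3-2) rest frequency 345.796 GHz, redshifted
d_l = 11.4e3                 # Mpc, approximate for a flat LCDM cosmology
lp = l_prime_co(0.08, nu_obs, d_l, z)
m_h2 = 0.8 * lp              # assumed alpha_CO = 0.8 (ULIRG-like), r_31 = 1
print(f"L'_CO = {lp:.2e} K km/s pc^2 -> M_H2 ~ {m_h2:.1e} M_sun")
```

With these assumed conversion factors the implied mass limit comes out just below $10^9$ M$_{\odot}$, consistent with the value quoted above.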
The ionized outflow appears to be more turbulent with a larger velocity dispersion than the molecular outflow. We find no evidence for CO (4-3) emission at the quasar's systemic redshift associated with narrow emission. We place a limit on the molecular gas reservoir mass of $<0.8\\times10^{9}$ M$_{\\odot}$\\ at the quasar's location with an aperture the size of the beam and using a velocity dispersion of 250 \\kms.\n\n\n\\begin{figure*}[!th]\n \\centering\n \\includegraphics[width=7.5 in]{4C0584_spec.eps}\n \\caption{ALMA band 4 observations of 4C 05.84. On the left we show an optimally extracted intensity map of the molecular outflow in the 4C 05.84 system, detected in the CO (4-3) line. The white contours outline the molecular outflow region. On the right we show a spectrum integrated over the entire molecular outflow region, along with a fit to the CO (4-3) emission line. The systemic redshift of the quasar host galaxy is at 0 \\kms. The ellipse in the lower left corner on the right panel shows the beam of our ALMA band 4 observations. We detect no molecular CO (4-3) emission at the systemic redshift.}\n \\label{fig:4C0584_spec}\n\\end{figure*}\n\n\\subsection{3C 298 (z=1.439)}\\label{sec:3C298}\n\n3C 298 is a radio-loud quasar at z=1.439. The jets in the system extend in the east-west direction, with a length of about 16 kpc and a bright core emission associated with the quasar's optical emission location. The system is bright at far-infrared wavelengths. ALMA band 7 observations have revealed that the dust emission mostly comes from a kpc scale region centered on the quasar \\citep{Barthel18}.\n\nWith Keck\/OSIRIS, we detected extended ionized gas emission \\citep{Vayner19b} on scales up to 20 kpc from the quasar. A bi-conical ionized outflow is detected with a maximum extent of 3 kpc from the quasar along the jet's path. The redshifted cone is associated with the western jet component, while the blueshifted cone is associated with the eastern jet component. 
We also detected an ionized gas outflow in the merging system 8 kpc from the quasar.\n\nIn the ALMA band 4 observations, we detect molecular gas emission in two distinct components, one centered on the quasar with a radius of 2.1 kpc and a second component 21 kpc from the quasar, offset by -250 \\kms. The component near the quasar shows both broad and narrow emission. The narrow emission is associated with a galactic disk \\citep{Vayner17}, while the broad component is associated with the outflow. The outflow emanates from the molecular disk centered on the quasar with a maximum extent of 1.6 kpc. The majority of the molecular gas in the outflow is on the blueshifted side of the disk and extends in the direction of the jet's western component. The ionized outflow is more extended than the molecular outflow, has a faster velocity, and appears to be more turbulent with a larger velocity dispersion. \n\n\n\n\\begin{figure*}[!th]\n \\centering\n \\includegraphics[width=7.0 in]{3C298_spec.eps}\n \\caption{ALMA band 4 observations of 3C 298. On the left we show an optimally extracted intensity map of CO emission in the 3C 298 system, detected in our 2017 study of this object \\citep{Vayner17}. The purple contours outline the molecular outflow region, the white contours outline the total emission from the molecular disk, and the teal contours outline a star-forming\/tidal tail feature. On the right we show a spectrum integrated over each distinct region along with a fit to the CO (3-2) emission line. Dashed lines represent the individual Gaussian components of the emission line fit, while the solid black line represents the sum of all components and a 0th order polynomial fit to any residual continuum. The systemic redshift of the quasar host galaxy is at 0 \\kms. 
The ellipse in the lower left corner on the right panel represents the beam of the ALMA band 4 observations.}\n \\label{fig:3C298_spec}\n\\end{figure*}\n\n\n\\subsection{7C 1354+2552 (z=2.0064)}\\label{sec:7C1354}\n\n7C 1354+2552 is a radio-loud quasar at z=2.0064. The system contains two jets that are perpendicular to each other. The east-west jet has a length of about 24 kpc, while the north-south jet has a length of 86 kpc. Only the east-west jet is detected in the continuum in our ALMA observations due to limited sensitivity to low-surface brightness emission on scales $>$ 6\\arcsec.\n\nUsing the Keck\/OSIRIS observations, we detected extended emission in the nebular emission lines [O{\\sc III}]\\xspace and H$\\alpha$\\xspace on scales of 4-6 kpc. An ionized outflow is detected on a spatial scale of $<1$ kpc. \n\nIn the ALMA band 4 observations, we detect molecular gas emission towards the northeast. The narrow emission line resides near the quasar's systemic redshift. We do not detect any broad emission from a turbulent molecular gas outflow. We detect no highly offset emission consistent with our outflow criteria; hence there is no evidence for a cold molecular gas outflow in this system at our observations' sensitivity. The spectrum of the distinct region detected in this system is shown in Figure \\ref{fig:7C1354_spec}. Given the large (8 kpc) separation from the quasar and the fact that the motion of the gas does not appear to follow the kinematics of the galactic disk on the eastern side \\citep{Vayner19b}, the detected emission may be associated with a merging galaxy.\n\n\n\\begin{figure*}[!th]\n \\centering\n \\includegraphics[width=7.0 in]{7C1354_spec.eps}\n \\caption{ALMA band 4 observations of 7C 1354+2552. On the left we show an optimally extracted intensity map of the molecular gas in the 7C 1354+2552 system, detected in the CO (4-3) line. 
The emission is narrow and consistent with gravitational motion. Yellow contours outline the CO-emitting region towards the north-west, which is slightly redshifted; the associated spectrum is shown on the right. The systemic redshift of the quasar host galaxy is at 0 \\kms. The ellipse in the lower left corner on the right panel represents the beam of the ALMA band 4 observations.}\n \\label{fig:7C1354_spec}\n\\end{figure*}\n\n\n\\section{Discussion}\\label{sec:discussion}\n\n\\subsection{What is driving the outflows?}\\label{sec:driving}\nSeveral powerful mechanisms can drive galactic-scale outflows. Quasars can reside in galaxies with powerful star formation activity \\citep{Duras17,Aird19,Circosta21,Bischetti21}, especially at redshifts near the peak of the star formation activity. Star formation can result in powerful galactic winds \\citep{Rupke18}. Radio-loud quasars that are optically luminous can drive galactic outflows by both jets \\citep{Wagner12,Mukherjee16} and radiation pressure \\citep{Murray95,Faucher12a,Zubovas12,Costa18} from the accretion disk. In this section, we explore the primary mechanisms capable of driving the multi-phase galactic outflows. 
We combine the momentum flux and kinetic luminosities from the cold molecular and ionized gas phases to look at both the total impact of galactic winds on the quasar host galaxies and what mechanism may be responsible for driving the entirety of the outflowing gas in each system.\n\nTo understand the main driving mechanism of galactic outflows, we compare the momentum flux of the outflow ($\\dot{P}_{outflow}$) to the momentum input from the quasar accretion disk ($\\dot{P}_{Quasar}$) and to the momentum deposition from stellar feedback ($\\dot{P}_{SNe}$).\n\n\\subsection{Stellar feedback as potential driver of galactic outflows}\nTo explore whether star formation activity can drive the galactic scale outflows, we compare the momentum deposition from supernova explosions, based on the star formation rate, to the energetics of the multi-phase gas outflow. To compute the energy deposition from supernovae, we use the results of recent simulations by \\citet{2015MNRAS.450..504M} and \\citet{2015ApJ...802...99K}, which predict the momentum deposition per solar mass of stars formed. We assume that one supernova explodes for every 100 M$_{\\odot}$\\ of stars. We assume an electron density of 100 cm$^{-3}$ in the interstellar medium with solar metallicity. The resulting momentum deposition rate is\n\n\n\\begin{equation}\n \\dot{P}_{SNe} = 7.5\\times 10^{33} \\frac{\\dot{M}_{SFR}}{1\\,M_{\\odot}\\,{\\rm yr}^{-1}} \\left(\\frac{n}{100\\,{\\rm cm}^{-3}}\\right)^{-0.18}\n \\left(\\frac{Z}{Z_\\odot}\\right)\n \\rm dyne \\label{eq:SNe_mom}.\n\\end{equation} \n\nFor sources with resolved rest-frame far-infrared observations (3C 318 and 3C 298), we use the far-infrared luminosity and convert that into a star formation rate based on work by \\citet{Barthel18,Barthel19}. The total-infrared (8-1000 \\micron) derived star formation rates of 3C 298 and 3C 318 are 930 \\myr\\ and 580 \\myr, computed using the \\citet{Kennicutt98} calibration \\citep{Podigachoski15}. 
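The supernova momentum-flux relation above is straightforward to evaluate numerically; a small sketch follows, using the upper-limit total-infrared star formation rates quoted in the text and the fiducial density and metallicity assumed above.

```python
def p_dot_sne(sfr_msun_yr, n_cm3=100.0, z_frac=1.0):
    """Momentum deposition rate from supernovae, in dyne:
    7.5e33 * (SFR / 1 M_sun/yr) * (n / 100 cm^-3)^-0.18 * (Z / Z_sun)."""
    return 7.5e33 * sfr_msun_yr * (n_cm3 / 100.0) ** -0.18 * z_frac

# Total-infrared SFR upper limits quoted in the text, with the fiducial
# n = 100 cm^-3 and solar metallicity assumed above
for name, sfr in [("3C 298", 930.0), ("3C 318", 580.0)]:
    print(f"{name}: P_SNe ~ {p_dot_sne(sfr):.1e} dyne")
```

For 3C 298 this yields roughly $7\times10^{36}$ dyne, at the upper end of the stellar momentum fluxes discussed below.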
Infrared-derived star formation rates have high uncertainties since it is not clear what fraction of the far-infrared emission comes from dust heating by massive stars versus quasar heating on kpc scales \\citep{Symeonidis17,Symeonidis21}. In radio-loud objects, synchrotron emission from the jets and the core of the quasar can also contribute to the far-infrared and mm-emission. In \\citet{Podigachoski15}, a correction for synchrotron emission was only applied to the 850 \\micron\\ flux of a handful of objects, including 3C 298, which is part of our study. We therefore use the infrared-derived star-formation rates as an upper limit on the actual star formation rates. The total infrared star formation rates are presented in Table \\ref{tab:SFR-rates}. \\\\\n\nAnother tracer of recent star formation comes from recombination lines of hydrogen. In Table \\ref{tab:SFR-rates} we present the star formation rates derived from the H$\\alpha$\\xspace line luminosity using the \\citet{Kennicutt98} calibration. The H$\\alpha$\\xspace derived star formation rates are computed from the integrated H$\\alpha$\\xspace emission at the systemic redshift of the quasar host galaxy. The H$\\alpha$\\xspace and total infrared derived star formation rates show a significant discrepancy, with the far-infrared star formation rates being almost an order of magnitude higher. The discrepancy between these inferred star formation rates may be due to several factors: dust extinction, different tracers of the star formation history, contamination of the far-infrared emission by quasar processes, or the large difference in resolution between the \\textit{Herschel Space Telescope} and Keck\/OSIRIS observations. Indeed, the far-infrared derived star formation rate in 4C 09.17 can be attributed to several galaxies falling within the \\textit{Herschel Space Telescope} PSF, leading to contamination of the far-infrared emission. 
The merging galaxy 4C 09.17 B - RQ contains a higher molecular gas surface density and hence likely accounts for the larger fraction of the star formation among these galaxies. Contributions from nearby galaxies can also affect the far-infrared derived flux for 3C 298 and 3C 318, but we do not see such a strong over-density compared to 4C 09.17. Recent observations with ALMA of a luminous, high-redshift ($z=4.4$) quasar have revealed multiple sources falling within 17 kpc from the quasar \\citep{Bischetti18}, which all contribute to the far-infrared emission detected with the \\textit{Herschel Space Telescope} for this system. \n\nWe find that stellar processes can deposit a momentum flux of 7.5 - 10,000 $\\times10^{33}$ dyne, comparable to momentum fluxes of the multi-phase outflows. However, since these estimates rely on the maximum momentum flux from SNe and the maximum star formation rates, it is unlikely that star formation alone drives the outflows in these systems.\n\n\\begin{deluxetable*}{cccc}\n\\tiny\n\\tablecaption{Star formation rates based on the H$\\alpha$\\xspace emission line \\citep{Vayner19b}, infrared observations \\citep{Podigachoski15}, and the expected star formation rate using the molecular gas surface density based on the KS law. 
\\label{tab:SFR-rates}}\n\\tablehead{\n\\colhead{Name} & \n\\colhead{SFR [H$\\alpha$\\xspace]}&\n\\colhead{SFR [Total IR]}&\n\\colhead{SFR [KS]}\\\\\n\\colhead{} & \n\\colhead{\\myr}&\n\\colhead{\\myr}&\n\\colhead{\\myr}}\n\\startdata\n4C 09.17 A - RL & 9$\\pm$1 & 1330 \\tablenotemark{a} & 6 \\\\\n7C 1354+2552 & 29$\\pm$3 & -- & 1 \\\\\n3C 298 & 22$\\pm$2 & 930 & 24 \\\\\n3C 318 & 88$\\pm$9 & 580 & -- \\\\\n4C 22.44 & 32$\\pm$3 & -- & -- \\\\\n4C 05.84 & 11$\\pm$1 & $<$540 & -- \\\\\n\\enddata\n\\tablenotetext{a}{Star formation rate likely contaminated by a merging galaxy}\n\\end{deluxetable*}\n\n\\subsection{Quasar as potential driver of galactic outflows}\nThe photon momentum fluxes of $10^{35-37}$ dyne and bolometric luminosities of $10^{46-47}$ erg s$^{-1}$\\ for these quasars indicate that they have sufficient energy and momentum to drive the observed multi-phase gas outflows. To test which quasar mechanism is the primary driver of the observed galactic-scale outflows, we compare $\\dot{P}_{outflow}$ to $\\dot{P}_{Quasar}$ and the location and extent of the quasar jets. High ($>2$) values of $\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$\\ on scales $>$ 1 kpc are generally attributed to energy-conserving outflows, where a radiatively-driven nuclear wind \\citep{Faucher12a} or a jet \\citep{Mukherjee16,Wagner12} drives a hot shock (T$>10^{7}$ K) in the interstellar medium that does not cool efficiently, maintaining nearly all of the kinetic energy provided to it. The swept-up material is shocked and is able to cool to produce a multi-phase outflow medium \\citep{Faucher12,Faucher12a,Zubovas12}, and may explain the presence of molecular gas moving at fast outflow velocities. 
To decipher whether the galactic-scale wind is ultimately powered by a jet or by radiation, we need to compare the location and extent of the quasar jet to the outflow and to search for evidence of a fast, radiatively driven wind in the X-ray and UV spectra of the quasar.\n\nHigh ($>2$) values of $\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$ on scales $<$ 1 kpc can be attributed to outflows driven by radiation pressure in a high column density environment, where the ``momentum boost\" is provided by photons scattering multiple times off dust grains as they drive the outflow \\citep{Thompson15}. Detecting and resolving the hot shocked gas produced by the jet or the radiatively-driven nuclear wind would be helpful in understanding what is driving the outflow. The shocked hot gas can be detected through the Sunyaev--Zeldovich effect \\citep{Chatterjee07} and has recently been found in the host galaxy of a luminous quasar at z=1.71 \\citep{Lacy19}. Additionally, it was detected in a stacking analysis of luminous quasars using Atacama Cosmology Telescope, \\textit{Herschel Space Telescope}, and VLA observations \\citep{Hall19}.\n\nLow ($\\ll1$) $\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$\\ on scales $>$ 1 kpc can be attributed to outflows driven by a radiative shock, where the shocked gas cools efficiently and kinetic energy is radiated away. Outflows driven through radiation pressure in a low column density environment can also produce an observed $\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$ $\\ll$1. \n\nTo compare the energetics of winds of our sample with other massive galaxies with AGN, we collated data on molecular and ionized outflows in galaxies at comparable redshifts. For 3C 318, 4C 09.17 A, and 3C 298, the momentum flux ratios measured on kpc scales indicate that an energy-conserving shock is responsible for driving the outflow. 
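The momentum-ratio diagnostic laid out in the preceding paragraphs can be summarized as a small decision rule. This is an illustrative heuristic following the criteria above, not analysis code from this work; the quasar momentum flux is taken as $\dot{P}_{Quasar} = L_{bol}/c$, and the numerical inputs are hypothetical.

```python
def likely_driver(p_ratio, scale_kpc):
    """Heuristic classification of the outflow driving mechanism from
    the ratio P_outflow/P_quasar and the outflow scale, following the
    criteria outlined in the text."""
    if p_ratio > 2 and scale_kpc > 1:
        return "energy-conserving shock (jet or nuclear wind)"
    if p_ratio > 2:
        return "radiation pressure in a high column density environment"
    if p_ratio < 0.1:
        return "radiative shock or radiation pressure, low column density"
    return "ambiguous"

# Single-scattering quasar momentum flux: P_quasar = L_bol / c
L_BOL = 1e47       # erg/s, within the 10^46-47 range quoted above
C_CGS = 2.998e10   # speed of light, cm/s
p_quasar = L_BOL / C_CGS   # ~3e36 dyne, within the quoted 10^35-37 range

# Hypothetical outflow: momentum flux 1e37 dyne on a 5 kpc scale
print(likely_driver(1e37 / p_quasar, 5.0))
```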
For 4C 05.84, it is still possible for an energy-conserving shock to drive the outflow; however, radiation pressure or a radiative shock model can also explain the observed $\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$. We overlay the location of the ionized and molecular outflows in Figure \\ref{fig:CO_ionized_outflows}, to highlight the extent and morphologies of the multi-phase gas outflows.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=7.0 in]{energetics_comparison.eps}\n \\caption{A comparison of the momentum flux (left) and kinetic luminosity (right) of the outflow to the momentum flux of the accretion disk and the quasar's bolometric luminosity. On the left, we plot lines of constant $\\frac{\\dot{P}_{outflow}}{\\dot{P}_{Quasar}}$; points above the 2:1 line represent outflows that are likely driven by an energy-conserving shock on kpc scales or radiation pressure on small ($<1$ kpc) scales. On the right, we plot lines of constant coupling efficiency between the outflow's kinetic luminosity and the quasar's bolometric luminosity. Blue circles and squares represent molecular outflows detected through CO emission and OH absorption, respectively. Red circles represent ionized outflows at cosmic noon, recomputed in the same manner as in \\citet{Vayner19a}. Black stars are the ionized outflows of the parent sample for this study. Blue stars are molecular outflows derived in this study through CO emission, and orange stars are the energetics from the total (molecular + ionized) momentum flux and kinetic luminosity. 
Combining the energetics of the multi-phase outflows indicates that they are likely driven by an energy-conserving shock and have coupling efficiencies between 0.1-1\\%.}\n \\label{fig:energetics}\n\\end{figure*}\n\n\\subsection{The nature of the multi-phase outflows}\n\nFor 3C 318, 4C 05.84, and 4C 09.17 A, the molecular gas shows a single component with a smaller velocity range than the ionized outflowing gas, spanning only blueshifted emission from -200 to -1200 \\kms. The morphology of the outflowing molecular gas is much clumpier and more confined compared to the ionized outflows. The ionized outflows span a broader range of velocities and fill a larger volume, likely residing in a wide-angle cone. The high velocity and clumpy molecular outflows are unexpected, appearing to be out of pressure equilibrium with their surroundings at the larger separations observed in 3C 318 and 4C 05.84. Recently, high-velocity and clumpy outflows have also been detected in one low redshift radio-quiet quasar and a higher redshift lensed quasar, appearing to have similar velocity offset, dispersion, and morphology to 3C 318, 4C 05.84, and 4C 09.17 A \\citep{Bischetti19,Chartas20}. There is no consensus on the observed spatial morphology and velocity structure of high redshift molecular outflows in luminous quasars due to the small number of detections. The velocity dispersions and offsets for 3C 318, 4C 05.84, and 4C 09.17 A are consistent with molecular outflows detected through OH absorption \\citep{Veilleux13}, which have been hypothesized to have a thin shell geometry. For 3C 298 and 4C 09.17 B, the molecular outflows show a broader velocity range, which likely indicates that these molecular outflows are more volume filling, similar to the ionized outflows. If these outflows are indeed filled cones, their outflow rates and energetics would scale by a factor of 3. We are unable to fully rule out a shell geometry for these outflows. 
Higher angular resolution observations would be required to distinguish whether they reside in a shell or in a filled cone\/sphere. The velocities of the molecular and ionized outflows are consistent with recent theoretical predictions by \\citet{Richings20} for multi-phase gas outflows driven by a quasar. \\\\\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=7.0in]{all_sources_ALMA_OSIRIS_overlay.eps}\n \\caption{We present the comparison between the ionized outflow and the molecular gas distribution, extent, and morphology in sources where we detect a multi-phase gas outflow. In the background, we show the line integrated CO intensity tracing the cold molecular gas. Violet contours outline the location of the molecular outflows in the quasar host galaxies; the teal contours in the 4C 09.17 system represent the molecular outflow in the merging galaxy. The white contours show the ionized outflow traced through the [O{\\sc III}]\\xspace\\ 5007 \\AA\\ emission line for all systems except 4C 05.84, where it is traced with H$\\alpha$\\xspace. The white star represents the quasar's location, while the bar to the right of each source represents 1 arcsecond.}\n \\label{fig:CO_ionized_outflows}\n\\end{figure*}\n\nThe energy-conserving shocks responsible for the galactic scale outflows are powered either by a radiatively driven nuclear wind or by shock heating of the gas by the radio jet. In 3C 298 and 3C 318, the jet's path is consistent with both the ionized and molecular outflows, while in 4C 05.84, the jet is only consistent with the path of the ionized outflow; however, the size of the jet is still consistent with both the extent of the ionized and molecular gas outflows. In 4C 09.17, the jet is not consistent with either the molecular or ionized outflow path. The lack of correlation between the jet and the galactic outflows does not indicate that the jet could not have driven the outflow. 
At later stages in their evolution, the hot cocoons of jet-shocked gas could be relatively spherical and volume filling relative to the thin jet that we observe at radio wavelengths \\citep{Wagner12,Mukherjee16}. The lack of path correlation between the multiple phases of the outflow, the general asymmetry, and the spatial differences in the ionized and molecular outflows can be caused by the non-uniform cooling properties of the galactic outflow, subject to gas and dust extinction and the surface brightness sensitivity of our observations. Some nearby galaxies show fast-moving outflows that are not correlated with the axis of the jet \\citep{Venturi21} but are rather found to expand perpendicular to the jet's path. None of the quasars appear to show evidence for a broad absorption line wind, based on their rest-frame UV spectra from SDSS \\citep{Paris18}. The X-ray observations are either missing or are too shallow to search for radiatively-driven nuclear winds in Fe absorption lines. Future space-based X-ray telescopes with larger effective apertures and higher spectral resolving power will be crucial to search for ultra-fast outflows and compare their energetics with the galactic scale winds to decipher whether the jet or quasar disk winds are the primary driver.\n\nAt the current evolutionary stage of the quasar host galaxies in our study, the dominant outflow component, in terms of mass, is the molecular gas. In fact, for sources with both a molecular outflow and a molecular gas reservoir at the systemic redshift, between 20-60\\% of the cold molecular gas reservoir is in the galactic outflow. For sources with no detected molecular reservoir at the systemic redshift, we are catching these systems when the majority of the cold molecular gas is in an outflow. 
This suggests that we are observing these systems at a phase where the quasar is removing a significant fraction of the gas in these galaxies, and therefore may be directly responsible for removing the fuel for subsequent star formation, in turn impacting the stellar growth of these galaxies.\n\nFor the galactic outflows in 3C 298 and 4C 05.84, we find that most of the momentum flux is in the ionized gas phase. For 3C 318, the ionized outflow may have the higher energetics within the uncertainties, which are dominated by the measurement of the electron density. For 4C 09.17 A, the molecular outflow has the higher energetics. In 4C 09.17 B, we do not detect an ionized outflow, likely due to obscuration, since we measure a high concentration of molecular gas with a high $\\rm A_{V}$ value near the center of the galaxy, and 4C 09.17 B also has a red $V-K$ color of $>$5.35. Hence it is unclear how the energetics are distributed between the ionized and molecular gas phases of the outflow. For most of our objects, the distribution in energetics is consistent with recent theoretical predictions by \\citet{Richings20}. The coupling efficiency between the multi-phase outflow and the bolometric luminosity of the quasar is consistent with theoretical predictions by \\citet{Hopkins10,Choi12,Costa18}, where a minimum of 0.1-0.5$\\%$ of the quasar's bolometric luminosity is expected to be transferred into the kinetic luminosity of the outflow for it to impact star formation processes.\n\n\\subsection{Missing mass in galactic outflows}\n\nWe do not have any measurement of the neutral atomic gas mass in the outflow. Yet the neutral atomic phase likely exists within each outflow, since we observe ionized emission through the optical [O{\\sc III}]\\xspace and H$\\alpha$\\xspace lines.
Theoretical work by \\citet{Dempsey18} suggests that in the absence of a substantial amount of neutral gas, the [O{\\sc III}]\\xspace transition would not be present, and most of the gas would be over-ionized (e.g., into [O IV]). At the same time, theoretical work by \\citet{Richings18} has shown that a substantial amount of the molecular gas in an outflow is expected to be in the warmer molecular gas phase, with temperatures on the order of 400 K. The CO transitions that trace the molecular gas reservoirs in our study are only capable of tracing the cold molecular gas phase, at temperatures of 40-70 K. The actual total momentum flux ratios are therefore likely lower limits, so energy-conserving outflows driven by the quasar remain the most likely mechanism behind these galactic-scale outflows. Future observations with the MIRI instrument aboard \\textit{JWST} will trace the warm molecular gas phase through the rest-frame mid-infrared rotational transitions of hydrogen. Observations with future 30-meter class telescopes will be able to probe the neutral gas phase through the Na D and [O I] 6300 \\AA\\ lines. Furthermore, the Square Kilometer Array (SKA) will enable us to probe the neutral gas phase directly through the 21 cm hydrogen line. \n\n\\subsection{Molecular gas depletion time scales}\n\nIn this section we explore the molecular gas depletion time scale in the galaxies within our sample. The molecular gas depletion time scale is defined as $t_{depletion, SFR} = M_{molecular}\/SFR$. We also define the outflow depletion time scale in a similar manner: $t_{depletion, outflow} = M_{molecular}\/\\dot{M}_{outflow}$. We compare the depletion time scale of the molecular gas due to star formation with that due to both the ionized and molecular gas outflows. In all cases, we find a molecular gas depletion time scale of $<$ 31 Myr, with $t_{depletion, outflow}$ being the shorter timescale.
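The two depletion time scales defined above amount to simple ratios; a minimal numerical sketch (the input values below are placeholders for illustration, not measurements from our sample):

```python
# Depletion time scales as defined in the text:
#   t_depletion,SFR     = M_molecular / SFR
#   t_depletion,outflow = M_molecular / Mdot_outflow
# Placeholder inputs (illustrative only, not measured values):
M_molecular = 5e9      # cold molecular gas mass [M_sun]
SFR = 500.0            # star formation rate [M_sun / yr]
Mdot_outflow = 1200.0  # total multi-phase outflow rate [M_sun / yr]

t_dep_sfr = M_molecular / SFR / 1e6               # [Myr]
t_dep_outflow = M_molecular / Mdot_outflow / 1e6  # [Myr]
print(f"t_dep,SFR = {t_dep_sfr:.1f} Myr; t_dep,outflow = {t_dep_outflow:.1f} Myr")
```

For any source whose outflow rate exceeds its star formation rate, $t_{depletion, outflow}$ is necessarily the shorter of the two timescales.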
The infrared-derived star formation rates are two orders of magnitude too high to be supported by the current molecular gas surface density, suggesting that the IR emission is likely contaminated by the quasar. While the maximum star formation rate derived from the total infrared emission can be comparable to the multi-phase gas outflow rates, at the present time the star formation rates expected from the Kennicutt-Schmidt (KS) law are two orders of magnitude smaller. The gas depletion timescale due to the KS-derived star formation rates is about two orders of magnitude longer than the depletion time scale due to outflows. Given the low star formation rate expected from the low molecular gas surface density, in the next few Myr the depletion time scale will be dominated by the galactic scale outflows. The depletion time scales for each system in our survey are presented in Table \\ref{tab:depletion}. Not only are we catching these systems when a substantial fraction of the gas is in an outflow state, but the rate of molecular gas depletion is also dominated by the quasar-driven outflows.\n\n\\begin{deluxetable*}{ccccc}\n\\tablecaption{Measured depletion time scales based on the star formation activity and multi-phase outflows in our sample. t$_{depletion,SFR}$ is the depletion time scale of the current molecular reservoir using the highest star formation rate observed. t$_{depletion,outflow}$ is the depletion time scale due to the outflows, and t$_{depletion,KS}$ is the depletion time scale based on the KS law for the observed molecular gas surface density. t$_{depletion,MS}$ is the expected depletion time scale for a galaxy at the measured stellar mass on the galaxy main sequence.
\\label{tab:depletion}}\n\n\\tablehead{\n\\colhead{Source}&\n\\colhead{t$_{depletion,SFR}$}&\n\\colhead{t$_{depletion,outflow}$}&\n\\colhead{t$_{depletion,KS}$} &\n\\colhead{t$_{depletion,MS}$}\\\\\n\\colhead{}&\n\\colhead{Myr}&\n\\colhead{Myr}&\n\\colhead{Myr}&\n\\colhead{Myr}}\n\\startdata\n4C 09.17 A RL & & 6$\\pm$1 & 500$\\pm$50 & 700 \\\\\n4C 09.17 B RQ & & 4$\\pm$1 & 178$\\pm$18 & 700 \\\\\n7C 1354 & 10$\\pm$4 & 6$\\pm$3 & 300$\\pm$100 & 700 \\\\\n3C 298 & 7$\\pm$1 & 4$\\pm$1 & 275$\\pm$40 & 700 \\\\\n3C 318 & $<$ 1 & $<$ 1.7$\\pm$0.6 & & 800 \\\\\n4C 22.44 & $<$ 31$\\pm$3 & $<$3$\\pm$2 & & 800 \\\\\n4C 05.84 & $<$1.5 & $<$0.8$\\pm$0.5 & & 700 \\\\\n\\enddata\n\\end{deluxetable*}\n\nIn the last several years, there have been many molecular gas observations in massive galaxies near cosmic noon. We can compare the depletion time scales that we observe within our systems to those of galaxies that do not host powerful quasars. We compare the expected depletion time scale due to star formation for an average galaxy in our stellar mass range to the observed depletion time scale due to the outflows. \\citet{Tacconi18} measured the depletion time scale for galaxies at an extensive range of redshifts and stellar masses. The dynamical masses \\citep{Vayner19b} of the galaxies within our sample lie in the range $10^{10.5-11.5}$ M$_{\\odot}$, with star formation rates of 30-1330 \\myr. This places them on the massive end of the galaxy luminosity function and the galaxy main sequence (MS) at $z=2$, which relates the star formation rate of a galaxy to its stellar mass. For galaxies with a mass in the range of $10^{10.5-11.5}$ M$_{\\odot}$, the expected depletion time scale is around 700-800 Myr for galaxies on the star-formation main sequence at $z\\sim2$. The range in the molecular gas depletion time scale for galaxies in the mass range of $10^{10.5-11.5}$ M$_{\\odot}$\\ is $0.3-3.5$ Gyr.
Our systems appear to show a depletion time scale of the molecular reservoir that is at least 100 times shorter, indicating that powerful quasars can substantially shorten the depletion time relative to star formation in a typical massive galaxy. While the H$\\alpha$\\xspace derived star formation rates place the sample quasar hosts on or below the star-forming main sequence at z$\\sim$2, the much larger FIR-derived star formation rates place them well above the MS. We therefore estimate a range of depletion times in Table \\ref{tab:depletion} covering the range of estimated star formation rates for each source.\n\nThe rapid depletion time scales of the cold molecular gas reservoir and the present amount of molecular gas have profound implications for the evolution of these galaxies from cosmic noon to the present day. From the Keck\/OSIRIS observations, we learned that these galaxies are offset from the local $M_{\\bullet}-\\sigma~$ and $M_{\\bullet}-M_{*}~$ relationships \\citep{Vayner19b}, indicating that the galaxies are under-massive for the mass of their SMBH. We estimated that they require a continuous stellar mass growth of approximately 100 \\myr\\ between z=2 and z=0 to land onto the local scaling relations. For each system, we have computed the molecular gas mass at the systemic redshift of the quasar within the radius where the dynamical masses are calculated in \\cite{Vayner19b}. We find molecular gas masses in the range of $0.3-10\\times10^{9}$ M$_{\\odot}$, while in systems without detection of CO emission at the systemic redshift, we have placed limits of $<1\\times10^{9}$ M$_{\\odot}$.
None of the systems have the molecular gas mass necessary to increase their stellar mass to bring them close to the local scaling relations between the SMBH mass and the stellar mass of the galaxy.\n\nFurthermore, the majority of the molecular gas is in the galactic outflows rather than in the host galaxy; hence the overall stellar mass is still not going to increase by a substantial amount through star formation. This study further indicates that a large fresh reservoir of gas must be accreted from the circumgalactic medium to replenish the fuel necessary for future star formation, and\/or a large number of mergers is expected between z=2 and z=0 \\citep{Burke13,Cooke19}. Our results further indicate that strong quasar feedback occurs before galaxies assemble onto the local scaling relations.\n\n\n\\section{Conclusions}\\label{sec:conc}\n\nWe have conducted a study of the molecular gas properties in 6 radio-loud quasar host galaxies at $z\\gtrsim1.4$. Class 0 protostars have $n_{4.5-24}>0.3$ and \n$T_{bol}<70$~K, Class I protostars have $n_{4.5-24}>0.3$ and $T_{bol}>70$~K, \nflat-spectrum sources have $-0.3 < n_{4.5-24} < 0.3$, and Class II pre-main-sequence \nstars have $n_{4.5-24}<-0.3$.\nBased on this, we identify 92 targets as Class 0 protostars, 125 as Class I protostars, \n102 as flat-spectrum sources, and 11 as Class II pre-main-sequence stars (see Table\nA\\ref{bestfit} and Figure \\ref{HOPS_n_Tbol}). \nThere are nine protostars with $T_{bol}$ values between 66.5 and 73.5 K (which \ncorresponds to a $\\pm$ 5\\% range around the Class 0--I boundary of 70 K); \nsix of them have $T_{bol}$ $>$ 70 K (HOPS 1, 18, 186, 256, 322, 370), and \nthe other three have $T_{bol}$ values just below 70 K (HOPS 75, 250, 361). \nThese protostars' classification is less firm than for the other HOPS targets.
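The nominal classification boundaries above can be summarized in a short routine; this is a sketch of the decision rule only (the function name is ours), and it does not capture the borderline and case-by-case adjustments discussed in the text:

```python
def classify_yso(n_4p5_24, T_bol):
    """Classify a young stellar object from its 4.5-24 micron spectral
    index and its bolometric temperature in K, per the nominal
    boundaries in the text (n = +/-0.3, T_bol = 70 K)."""
    if n_4p5_24 > 0.3:
        return "Class 0" if T_bol < 70.0 else "Class I"
    elif n_4p5_24 >= -0.3:
        return "flat-spectrum"
    else:
        return "Class II"

print(classify_yso(1.2, 50))     # Class 0
print(classify_yso(1.2, 300))    # Class I
print(classify_yso(0.0, 650))    # flat-spectrum
print(classify_yso(-0.8, 2000))  # Class II
```

Sources falling exactly on the $\pm$0.3 boundaries are not specified by the text; the sketch assigns them to the flat-spectrum bin.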
\nThere are also a few flat-spectrum sources whose classification is more \nuncertain: HOPS 45, 183, 192, 194, 210, 264, and 281 should be Class I \nprotostars based on their 4.5-24 $\\mu$m spectral index, but when considering \nthe IRS spectrum (specifically, the 5-25 $\\mu$m spectral index), they fall \ninto the flat-spectrum regime ($n_{5-25} < 0.3$). Also, for HOPS 45 and 194 \nthe $T_{bol}$ values are relatively high ($>$ 500 K).\nSimilarly, HOPS 33, 134, 242, 255, and 284 should be Class II pre-main-sequence\nstars based on their 4.5-24 $\\mu$m spectral index, but the spectral slope over \nthe IRS wavelength range suggests that they are flat-spectrum sources.\nIn these cases where the $n_{4.5-24}$ and $n_{5-25}$ spectral indices were \nsomewhat discrepant, we adopted the latter, and thus these objects were \nclassified as flat-spectrum sources. \n\nThere are five objects with $T_{bol}$ $<$ 70 K and $n_{4.5-24}$ $<$ 0\n(HOPS 164, 340, 341, 373, 405); despite their negative 4.5-24 $\\mu$m SED \nslopes, their SEDs either show or imply a deep silicate absorption feature at \n10 $\\mu$m, rise steeply in the mid- to far-IR, and their long-wavelength emission \nis strong. Thus, their $T_{bol}$ values are low, and we identify them as Class 0 \nprotostars, even though they have 4.5-24 $\\mu$m spectral indices not typical \nof embedded protostars. In particular, HOPS 341, 373, and 405 are likely young \nprotostars with dense envelopes (\\citealt{stutz13}; see also section \\ref{Class0}). \nIn the case of HOPS 373, the 4.5 $\\mu$m flux may be contaminated by bright\nH$_2$ emission from an outflow shock, rendering the $n_{4.5-24}$ value more \nunreliable. This might also explain the negative 4.5-24 $\\mu$m spectral index \nfor the other four protostars.\n\nFinally, the few Class II objects in our sample were thought to be potential\nprotostars prior to their observations with {\\it Herschel}. 
Their 4.5-24 \\micron\\ \nSED slopes are usually just slightly more negative than the cutoff for a \nflat-spectrum source ($-0.3$); three Class II pre-main-sequence stars \n(HOPS 22, 184, 201) have SEDs that are typical of disks with inner holes, \ndisplaying a 10 $\\mu$m silicate emission feature and a rising SED from \n12 to about 20 \\micron\\ \\citep[e.g.,][]{kim13}. The SEDs of the other\nClass II objects are similar to those of flat-spectrum sources; thus, they could\nhave (remnant) envelopes that contribute to their long-wavelength emission.\n\nOur HOPS sample is mostly complete in the number of Class 0, Class I, and \nflat-spectrum sources in the areas of Orion surveyed by {\\it Spitzer} excluding\nthe Orion Nebula \\citep[see][]{megeath12,stutz13}. Of the 357 unique YSOs \noriginally identified in {\\it Spitzer} data that were included in the HOPS sample and \nobserved with PACS, 322 were detected at least at 70 $\\mu$m, which amounts \nto a fraction of 90\\%. We removed likely contaminants and added 16 new\nprotostars discovered in PACS data to get to our sample of 330 YSOs,\nmost of which are protostars.\nOur lowest $L_{bol}$ source is HOPS 208, with $L_{bol}$= 0.017 $L_{\\odot}$. This \nprotostar also has the lowest PACS 70 $\\mu$m flux in our sample (8.2 mJy). \nOverall, our sample has 27 protostars with $L_{bol}<$ 0.1 $L_{\\odot}$, which places \nthem in the luminosity range of very low luminosity objects \n\\citep[VeLLOs;][]{diFrancesco07,dunham08}. The number of VeLLOs in our \nsample is likely larger, given that VeLLOs are defined as having internal \nluminosities less than 0.1 $L_{\\odot}$, and the bolometric luminosity has contributions \nfrom both the internal luminosity and that due to external heating \n\\citep[see][]{dunham08}. In addition, our sample could miss fainter flat-spectrum \nsources and Class 0 and Class I protostars. 
In fact, there are several faint YSOs \nwithout PACS data that were excluded from our sample, but do have {\\it Spitzer} \ndetections (see Appendix section \\ref{YSOs_not_modeled}).\n\n\n\\vspace{3ex}\n\n\\section{Model Grid}\n\\label{grid}\n\nTo characterize the SEDs of our HOPS sample in a uniform manner,\nwe fit the data to simple but physically plausible models. In this way we\ncan assess how well such simple models can fit the data, and how the\nquality of the fits changes with evolutionary class. We can also determine\nthe full range of physical parameters implied by the fits and the range of\nparameters for each protostellar class. There are degeneracies and biases \nin the fits, and the uncertainties in model parameters will vary from object \nto object, but our results represent a first step in estimating physical \nparameters that describe the protostars in our sample.\n\nWe use a large model grid calculated using the 2008 version of the\n\\citet{whitney03a,whitney03b} Monte Carlo radiative transfer code \n\\citep[see][]{stutz13}; an early version of the grid was presented in \n\\citet{ali10}. \nEach model consists of a central protostar, a circumstellar disk, and an envelope;\nthe radiation released by the star and the accretion is reprocessed by the\ndisk and envelope. The density in the disk is described by power laws in the \nradial and vertical directions, while the density distribution in the envelope \ncorresponds to that of a rotating, collapsing cloud core with constant infall\nrate (the so-called TSC model, after \\citealt{terebey84}; see also \n\\citealt{ulrich76,cassen81}). The envelope also contains an outflow cavity, \nwhose walls are assumed to follow a polynomial shape. At favorable inclination \nangles, this evacuated cavity allows radiation from the inner envelope and \ndisk regions to reach the observer directly. 
Also, radiation is scattered off the \ncavity walls and can increase the near-IR emission from a protostellar system.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.49]{Opacities_plot.eps}\n\\caption{Extinction opacities of the \\citet{ormel11} dust model ``icsgra3''\n({\\it black}) compared to other dust opacities from the literature: grains with\nthin ice mantles after 10$^5$ years of coagulation with a gas density of\n10$^6$ cm$^{-3}$ from \\citet{ossenkopf94} ({\\it orange});\ncase A model of carbon and silicate dust for R$_V$=5.5 from \\citet{draine03}\n({\\it green}); two extinction curves derived for star-forming regions by\n\\citet{mcclure09}, one for $0.76 < A_J < 2.53$ ({\\it blue}), and one\nfor $2.53 < A_J < 17.71$ ({\\it purple}).\n\\label{opacities}}\n\\end{figure}\n\nWe used dust opacities from \\citet{ormel11} to account for larger, icy grains\n(as opposed to the small grains made of amorphous silicates typically found in\nthe interstellar medium). We adopted their dust model that includes graphite \ngrains without ice coating and ice-coated silicates, with a size distribution that \nassumes growth of aggregates for $3 \\times 10^5$ years, when grains have \ngrown up to 3 $\\mu$m in size (``icsgra3''). Particle sizes range from 0.1 to \n3 $\\mu$m, with a number density that is roughly proportional to $a^{-2.3}$ \n(where $a$ is the particle radius).\nFigure \\ref{opacities} shows our adopted opacities compared to different \nones found in the literature. The opacities from \\citet{draine03} assume a \nmixture of small carbonaceous and amorphous silicate grains. Including larger \nand icy grains broadens the 10 $\\mu$m silicate feature (which is mostly due to \nthe libration mode of water ice) and causes additional absorption at 3 $\\mu$m \nand in the 40-60 $\\mu$m range (all mostly due to the presence of water ice). 
\nThe mid-IR opacities of the ``icsgra3'' dust model are similar to the ones \ndetermined by \\citet{mcclure09} for star-forming regions and also to those \nused by \\citet{tobin08} to model an edge-on Class 0 protostar; in the mid- to \nfar-IR, they resemble the opacities of \\citet{ossenkopf94}, which are often \nused to model embedded sources. In Figure \\ref{opacities}, we show model \n`OH5' from \\citet{ossenkopf94}, which is listed as the fifth model in their \nTable 1 and corresponds to grains with thin ice mantles after 10$^5$ years \nof coagulation and a gas density of 10$^6$ cm$^{-3}$. We could not use the \n`OH5' opacities for our model grid, since that opacity law does not include \nscattering properties (which are required by the Whitney Monte Carlo \nradiative transfer code). Other authors have modified the `OH5' dust \nto include the scattering cross section and extend the opacities to shorter\nand longer wavelengths \\citep{young05,dunham10}.\n\n\\subsection{Model Parameters}\n\\label{model_parameters}\n\nThere are 3040 models in the grid; they cover 8 values for the total (i.e.,\nintrinsic) luminosity, 4 disk radii, 19 envelope infall rates (which correspond \nto envelope densities), and 5 cavity opening angles. Each model is calculated \nfor 10 different inclination angles, from 18.2\\degr\\ to 87.2\\degr, in equal steps \nin $\\cos(i)$ (starting at 0.95 and ending at 0.05), resulting in 30,400 different \nmodel SEDs. The values for the various model parameters are listed in Table \n\\ref{modelpars}. Since there are a large number of parameters that can be set \nin the Whitney radiative transfer models, we focused on varying those \nparameters that affect the SED the most, leaving the other parameters \nat some typical values. For example, we assumed a stellar mass of 0.5 $M_{\\odot}$, \na disk mass of 0.05 $M_{\\odot}$, and an envelope outer radius of 10,000 AU.\nThe stellar mass enters the code in two ways. 
First, it is needed to relate the \ndensity of the envelope to the infall rate (see Equation 1 below). Since we fit the \ndensity of the envelope, the infall rate plays no role in the best-fit envelope \nparameters; any stellar mass can be chosen to determine the infall rate for a \ngiven best-fit envelope density. Second, the stellar mass is combined with the \nstellar radius and disk accretion rate to set the disk accretion luminosity. Given \nthat the accretion luminosity is the actual parameter that influences the SED, \nit does not matter which of the three factors is varied. For simplicity and reasons \ndescribed below, we varied the disk accretion rate and the stellar radius, but\nleft the stellar mass constant, to achieve different values for this component \nof the luminosity.\n \nThe total luminosity for each system consists of the stellar luminosity \n(derived from a 4000 K stellar atmosphere model), the accretion luminosity \nresulting from material accreting through the disk down to the disk truncation \nradius, and the accretion luminosity from the hot spots on the stellar surface, \nwhere the accretion columns, which start at the magnetospheric truncation \nradius, land (these columns are not included in the modeled density distribution, \nsince they do not contain dust and do not have a source of opacity in the radiative \ntransfer models). Typically, the accretion luminosity from the hot spots is much \nlarger than the disk accretion luminosity; in our models, the former is about a factor \nof 9 larger than the latter.\nWe chose three different stellar radii, 0.67, 2.09, and 6.61 $R_{\\odot}$\\ (with the same \nstellar temperature), resulting in three different stellar luminosities. 
Since both \ncomponents of the accretion luminosity depend on the disk accretion rate, \nchoosing a total of eight different disk accretion rates (three for the 0.67 $R_{\\odot}$\\ \nstar, two for the 2.09 $R_{\\odot}$\\ star, and three for the 6.61 $R_{\\odot}$\\ star) results in \neight values for the total luminosity used in the grid (see Table \\ref{modelpars}). \nThe input spectrum produced by the central protostar depends on the relative \ncontributions from the intrinsic stellar luminosity (which peaks at 0.7~$\\mu$m) \nand the accretion luminosity (which is radiated primarily in the UV). In the \nmodels, it can be altered to some degree by choosing different combinations of \nthe disk accretion rate and stellar radius (the former affects only the accretion \nluminosity, while the latter affects both the stellar and accretion luminosity). \nHowever, the effect of the input spectrum on the output SED is negligible. \nConsequently, we cannot reliably measure the relative contributions of stellar \nand accretion luminosity through our SED fits. Instead, we adjusted the particular \nvalues for the stellar radius and disk accretion rate to set the values of the\ntotal luminosity.\n\nFor our model grid, we chose four values for the disk outer radius, which we\nset equal to the centrifugal radius ($R_c$). In a TSC model, the centrifugal \nradius is the position in the disk where material falling in from the envelope \naccumulates; due to envelope rotation, material from the envelope's\nequatorial plane lands at $R_c$, while material from higher latitudes falls\ncloser to the star. The disk could extend beyond $R_c$, but in our models \nit ends at $R_c$. In this work, we use the terms ``disk (outer) radius'' and \n``centrifugal radius'' interchangeably. The primary effect of $R_c$ is to set \nthe rotation rate of the infalling gas and thereby determine the density \nstructure of the envelope \\citep{kenyon93}. 
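The grid dimensions and inclination sampling described in this section follow from simple arithmetic; a quick numerical check (assuming NumPy), with angles matching the values quoted in the text to within rounding:

```python
import numpy as np

# Parameter counts from the text: 8 total luminosities x 4 disk radii
# x 19 envelope densities x 5 cavity opening angles
n_models = 8 * 4 * 19 * 5
n_seds = n_models * 10  # 10 viewing angles per model
print(n_models, n_seds)  # 3040 30400

# Inclinations in equal steps of cos(i), from 0.95 down to 0.05
cos_i = np.linspace(0.95, 0.05, 10)
inclinations = np.degrees(np.arccos(cos_i))
print(np.round(inclinations, 1))  # ~18.2 deg to ~87.1 deg
```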
\n \n\\begin{deluxetable*}{clcc}\n\\tablecaption{Model Parameters\n\\label{modelpars}}\n\\tablehead{\n\\colhead{Parameter} & \\colhead{Description} & \\colhead{Values} & \n\\colhead{Units}}\n\\startdata\n\\multicolumn{4}{c}{\\it{\\bf Stellar Properties}} \\\\ \n$M_{\\ast}$ & Stellar mass & 0.5 & $M_{\\odot}$ \\\\\n$T_{\\ast}$ & Stellar effective temperature & 4000 & K \\\\\n$R_{\\ast}$ & Stellar radius & 0.67, 2.09, 6.61 & $R_{\\odot}$ \\\\ \n\\hline\n\\multicolumn{4}{c}{\\it{\\bf Disk Properties}} \\\\ \n$M_{disk}$ & Disk mass & 0.05 & $M_{\\odot}$ \\\\\n$R_{disk}$ & Disk outer radius & 5, 50, 100, 500 & AU \\\\\nA & Radial exponent in disk density law & 2.25 & \\nodata \\\\\nB & Vertical exponent in disk density law & 1.25 & \\nodata \\\\\n$\\dot{M}_{disk,1}$ & Disk-to-star accretion rate for $R_{star}$=0.67 $R_{\\odot}$\\ & \n0, $1.14 \\times 10^{-8}$, $5.17 \\times 10^{-8}$ & $M_{\\odot}$\\ yr$^{-1}$ \\\\\n$\\dot{M}_{disk,2}$ & Disk-to-star accretion rate for $R_{star}$=2.09 $R_{\\odot}$\\ & \n$3.67 \\times 10^{-7}$, $1.63 \\times 10^{-6}$ & $M_{\\odot}$\\ yr$^{-1}$ \\\\\n$\\dot{M}_{disk,3}$ & Disk-to-star accretion rate for $R_{star}$=6.61 $R_{\\odot}$\\ & \n$1.14 \\times 10^{-5}$, $5.15 \\times 10^{-5}$, $1.66 \\times 10^{-4}$ & \n$M_{\\odot}$\\ yr$^{-1}$ \\\\\n$R_{trunc}$ & Magnetospheric truncation radius$^a$ & 3 & $R_{\\ast}$ \\\\\n$f_{spot}$ & Fractional area of the hot spots on the star$^b$ & 0.01 & \\nodata \\\\\n\\hline\n\\multicolumn{4}{c}{\\it{\\bf Envelope Properties}} \\\\ \n$R_{env}$ & Envelope outer radius$^c$ & 10,000 & AU \\\\\n$\\rho_{1000}$ & Envelope density at 1000 AU$^d$ & 0.0, 1.19 $\\times 10^{-20}$, \n1.78 $\\times 10^{-20}$, 2.38 $\\times 10^{-20}$, & g cm$^{-3}$ \\\\\n& & 5.95 $\\times 10^{-20}$, 1.19 $\\times 10^{-19}$, 1.78 $\\times 10^{-19}$, \n& g cm$^{-3}$ \\\\\n& & 2.38 $\\times 10^{-19}$, 5.95 $\\times 10^{-19}$, 1.19 $\\times 10^{-18}$,\n& g cm$^{-3}$ \\\\\n& & 1.78 $\\times 10^{-18}$, 2.38 $\\times 10^{-18}$, 5.95 $\\times 
10^{-18}$,\n& g cm$^{-3}$ \\\\\n& & 1.19 $\\times 10^{-17}$, 1.78 $\\times 10^{-17}$, 2.38 $\\times 10^{-17}$,\n& g cm$^{-3}$ \\\\\n& & 5.95 $\\times 10^{-17}$, 1.19 $\\times 10^{-16}$, 1.78 $\\times 10^{-16}$\n& g cm$^{-3}$ \\\\\n$R_c$ & Centrifugal radius of TSC envelope & $= R_{disk}$ & AU \\\\ \n$\\theta$ & Cavity opening angle & 5, 15, 25, 35, 45 & degrees \\\\\n$b_{cav}$ & Exponent for cavity shape$^e$ (polynomial) & 1.5 & \\nodata \\\\\n$z_{cav}$ & Vertical offset of cavity wall & 0 & AU \\\\\n \\hline\n\\multicolumn{4}{c}{\\it{\\bf Derived Parameters}} \\\\ \n$L_{\\ast}$ & Stellar luminosity$^f$ & 0.1, 1, 10 & $L_{\\odot}$ \\\\\n$L_{tot}$ & Total luminosity (star + accretion)$^g$ & 0.1, 0.3, 1.0, 3.1, \n 10.1, 30.2, 101, 303 & $L_{\\odot}$ \\\\\n\\hline\n\\multicolumn{4}{c}{\\it{\\bf Parameters for Model SEDs}} \\\\ \n$i$ & Inclination angle & 18.2, 31.8, 41.4, 49.5, 56.7, & degrees \\\\\n & & 63.3, 69.5, 75.6, 81.4, 87.2 & degrees \\\\\n & Aperture radii for model fluxes$^h$ & 420, 840, 1260, 1680, ..., 10080 & AU \\\\\n\\enddata\n\\tablecomments{\nThe dust opacities used for these models are those called ``icsgra3'' from\n\\citet{ormel11}.\\\\\n$^a$ This radius applies to the gas. The inner disk radius for the dust is equal to the \ndust destruction radius. The scale height of the disk at the dust sublimation radius is set \nto the hydrostatic equilibrium solution. \\\\\n$^b$ The hot spots are caused by the accretion columns that reach from the \nmagnetospheric truncation radius to the star. \\\\\n$^c$ The inner envelope radius is set to the dust destruction radius. \\\\\n$^d$ The actual input parameter for the Whitney code is the envelope infall\nrate, which can be derived from $\\rho_{1000}$ using Equation (2). 
The first six \n$\\rho_{1000}$ values correspond to envelope infall rates of 0, $5.0 \\times 10^{-8}$, \n$7.5 \\times 10^{-8}$, $1.0 \\times 10^{-7}$, $2.5 \\times 10^{-7}$, and $5.0 \\times 10^{-7}$ \n$M_{\\odot}$\\ yr$^{-1}$; the other values can be similarly deduced. \\\\\n$^e$ The cavity walls are assumed to have a polynomial shape; no material is assumed\nto lie inside the cavity. Also, the ambient density (outside the envelope) is 0. \\\\\n$^f$ The three values of $L_{\\ast}$ correspond to the three different stellar radii.\\\\\n$^g$ The total luminosities combine the stellar luminosities and the accretion luminosities\n(which depend on $\\dot{M}_{disk}$). \\\\\n$^h$ For each model, the emitted fluxes are calculated for 24 apertures ranging from \n420 to 10080 AU, in steps of 420 AU. }\n\\end{deluxetable*}\n\nThe largest number of parameter values in our grid is for the envelope \ninfall rate. The envelope infall rate used as an input in the Whitney code \nsets the density of the envelope for a given mass of the protostar. \nSince the SED depends on the density of the envelope (and not directly \non the infall rate, which is only inferred from the density and the acceleration \ndue to gravity from the central protostar), in this work we report a \nreference envelope density instead of the envelope infall rate as one of our \nmodel parameters. For the TSC model, the envelope infall rate $\\dot{M}_{env}$ \nand the reference density at 1 AU in the limit of no rotation ($R_c$=0) are \nrelated as follows \\citep[see][]{kenyon93}:\n\\begin{equation}\n\\rho_1 = 5.318 \\times 10^{-14} \\left( \\frac{\\dot{M}_{env}}{10^{-5} \nM_{\\odot} \\, \\mathrm{yr}^{-1}} \\right) \\left(\\frac{M_{\\ast}}\n{1\\, M_{\\odot}}\\right)^{-1\/2} \\mathrm{g}\\, \\mathrm{cm}^{-3},\n\\end{equation} \nwhere $M_{\\ast}$ is the mass of the central protostar, which is assumed \nto be 0.5 $M_{\\odot}$\\ in our model grid. 
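The conversion from envelope infall rate to the reference densities $\rho_1$ and $\rho_{1000}$ can be checked numerically; a minimal sketch (the function names are ours):

```python
# rho_1: reference density at 1 AU for a non-rotating TSC envelope
# (Equation 1); rho_1000 scales it to 1000 AU via (1/1000)**1.5.
def rho_1(mdot_env, m_star=0.5):
    """mdot_env in M_sun/yr, m_star in M_sun; returns g/cm^3."""
    return 5.318e-14 * (mdot_env / 1e-5) * (m_star / 1.0) ** -0.5

def rho_1000(mdot_env, m_star=0.5):
    return rho_1(mdot_env, m_star) * (1.0 / 1000.0) ** 1.5

# Reproduce the coefficient quoted in Equation (2) for M_star = 0.5 M_sun:
print(rho_1000(1e-5))    # ~2.378e-18 g/cm^3
# Smallest and largest grid densities from Table \ref{modelpars}:
print(rho_1000(5.0e-8))  # ~1.19e-20 g/cm^3
print(rho_1000(7.5e-4))  # ~1.78e-16 g/cm^3
```

The last two values reproduce the smallest and largest $\rho_{1000}$ entries of the grid, confirming the quoted infall-rate range of $5.0 \times 10^{-8}$ to $7.5 \times 10^{-4}$ $M_{\odot}$\ yr$^{-1}$ for $M_{\ast}=0.5$ $M_{\odot}$.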
The density distribution in the\nenvelope follows a power law, $\\rho \\propto r^{-3\/2}$, at radii\nlarger than the centrifugal radius, $R_c$, but then flattens as a result\nof the rotation of the envelope. The density reported by $\\rho_1$ \nassumes a spherically symmetric envelope with a $-3\/2$ power-law \nexponent valid down to the smallest radii, and it is higher than the \nangle-averaged density of a rotating envelope at 1 AU. To quote \ndensities that are closer to actual values found in the modeled rotating \nenvelopes (which have $R_c$ values ranging from 5 to 500 AU), we \nreport $\\rho_{1000}$, the density at 1000 AU for a $\\rho \\propto r^{-3\/2}$ \nenvelope with a 0.5 $M_{\\odot}$\\ protostar:\n\\begin{eqnarray}\n\\rho_{1000} & = & \\rho_1 \\left(\\frac{1}{1000}\\right)^{3\/2} \\nonumber \\\\\n& = & 2.378 \\times 10^{-18} \\left( \n\\frac{\\dot{M}_{env}}{10^{-5} M_{\\odot} \\, yr^{-1}} \\right) \n\\mathrm{g}\\, \\mathrm{cm}^{-3}.\n\\end{eqnarray} \nThus, the range of reference densities probed in our model grid,\nfrom $1.2 \\times 10^{-20}$ to $1.8 \\times 10^{-16}$ g cm$^{-3}$ \n(see Table \\ref{modelpars}), would correspond to envelope\ninfall rates from $5.0 \\times 10^{-8}$ to $7.5 \\times 10^{-4}$ $M_{\\odot}$\\ yr$^{-1}$,\nassuming $M_{\\ast}$=0.5 $M_{\\odot}$\\ (this does not account for a reduction \nof the infalling mass due to clearing by outflow cavities).\nIn Figure \\ref{Rho_env_profiles}, we show the radial density profiles\nfor two TSC models with 5 AU and 500 AU centrifugal radii. The density \nprofiles are azimuthally symmetric and show the flattening of the density \ndistribution inside $R_c$ due to envelope rotation. 
These plots demonstrate\nthat the density $\\rho_1$ is much higher than the angle-averaged \ndensity at 1 AU; $\\rho_{1000}$ seems to yield more physical values for \nthe density in the envelope at 1000 AU, even for $R_c$ values of 500 AU.\n \n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.37,angle=90]{Envelope_rho_vs_radius_Rc5.eps}\n\\includegraphics[scale=0.37,angle=90]{Envelope_rho_vs_radius_Rc500.eps}\n\\caption{Envelope density versus radius for a model protostar with\n$\\dot{M}_{env}=1.0 \\times 10^{-6}$ $M_{\\odot}$\\ yr$^{-1}$,\n$M_{\\ast}$=0.5 $M_{\\odot}$, and $R_c$=5 AU ({\\it left}) and 500\nAU ({\\it right}) to show the difference between the reference densities\n$\\rho_1$ and $\\rho_{1000}$. The lines with different colors represent \nradial density profiles for different polar angles $\\theta$; the black line represents\nthe angle-averaged density profile (for equations see \\citealt{whitney03a,\nadams86}). The dashed line represents an $r^{-3\/2}$ power law. The vertical\ndotted line marks the location of the centrifugal radius.\n\\label{Rho_env_profiles}}\n\\end{figure*}\n \nAs can be seen from the values of the envelope density in Table \\ref{modelpars}, \nthere is one set of models with an envelope density of 0. These are models \nthat do not contain an envelope component; the entire excess emission \nis caused by the circumstellar disk. If an object is best fit by such a model, \nit would indicate that it is more evolved, having already dispersed its envelope.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{Whitney_cavity_shapes.eps}\n\\caption{Schematic showing the shape of the cavity assumed in our models\nfor three cavity opening angles $\\theta$: 5\\degr, 25\\degr, and 45\\degr\\ \n(from left to right). The cavity walls are defined as a polynomial with exponent \n1.5 ($z \\propto \\tilde r^{1.5}$), with $\\tilde r_{max} = z_{max}\\, \\tan\\theta$, and \nare shown as solid lines. 
The outer envelope radius ($R_{env}$) at 10,000 AU \nis shown with a short-dashed line. The dotted lines show a different definition \nof the cavity size, where $\\tilde r_{max} = R_{env} \\sin\\theta$ \nand $z_{max} = R_{env} \\cos\\theta$.\n\\label{cavity_shape}}\n\\end{figure}\n\nThe cavities in our models range from 5\\degr\\ to 45\\degr\\ and are defined\nsuch that $z \\propto \\tilde r^{1.5}$, where $\\tilde r$ and $z$ are the cylindrical\ncoordinates for the radial and vertical direction, respectively, and $\\tilde r_{max} \n= z_{max}\\, \\tan\\theta$, with $\\theta$ defined as the cavity opening angle that \nis specified in the parameter file of the Whitney radiative transfer code. In this \ncode, $z_{max}$ is set to the envelope outer radius. Thus, a polynomial-shaped \ncavity, which is wider at smaller $\\tilde r$ values and then converges toward the \nspecified opening angle, is somewhat larger than this opening angle at the outer \nenvelope radius (see Figure \\ref{cavity_shape}). This effect is most noticeable at \nlarger cavity opening angles, but negligible for small cavities. A different definition \nof the cavity size, where $\\tilde r_{max} = R_{env} \\sin\\theta$ and \n$z_{max} = R_{env} \\cos\\theta$ (with $R_{env}$ as the envelope outer radius), \nresults in $z$ values that are a factor of $1\/\\cos\\theta$ larger, and thus the cavity \nreaches the specified opening angle at the outer envelope radius. For this \nwork, the adopted definition of the cavity opening angle is inconsequential, \nbut it becomes relevant when comparing the results of SED modeling to \nscattered light images that reveal the actual cavity shape and size. 
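The widening of the polynomial cavity at the outer envelope radius can be quantified with a short numerical sketch (our own check, independent of the radiative transfer code): find where the wall $z = z_{max}(\tilde r/\tilde r_{max})^{1.5}$ crosses the sphere of radius $R_{env}$ and convert that point to an angle from the pole.

```python
import math

def effective_opening_angle(theta_deg, n_iter=60):
    """Angle from the pole (deg) at which the polynomial cavity wall
    z = z_max * (r/r_max)**1.5, with z_max = R_env and
    r_max = z_max * tan(theta), intersects the sphere of radius R_env
    (working in units of R_env = 1)."""
    t = math.tan(math.radians(theta_deg))
    # Crossing condition: f(x) = x**2 + (x/t)**3 - 1 = 0, x = r/R_env.
    # f is increasing and f(min(1, t)) >= 0, so bisection converges.
    lo, hi = 0.0, min(1.0, t)
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if mid**2 + (mid / t) ** 3 - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    z = (x / t) ** 1.5
    return math.degrees(math.atan2(x, z))
```

With this geometry a nominal 45\degr\ cavity reaches $\approx$ 49\degr\ at the outer envelope radius, while a 5\degr\ cavity stays essentially at 5\degr, consistent with the behavior described above.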
\nWe also note that in our models the cavities are evacuated of material,\nso there is no dust and gas inside the cavity; in reality, there might be\nsome low-density material left that would add to the scattered light\n\\citep[see][]{fischer14}.\n\nFigures \\ref{Models_inc} to \\ref{Models_Ltot} display a few examples\nof model SEDs from our grid to show the effect of changing those model \nparameters that influence the resulting SED the most. \nThe inclination angle has a strong effect on the near- and mid-infrared SED\n(Figure \\ref{Models_inc}). While a low inclination angle results in an overall \nflat SED in this wavelength region, increasing the inclination angle causes \na deeper silicate absorption feature at 10 $\\mu$m and a steep slope \nbeyond it. The far-infrared to millimeter SED is not affected by the \ninclination angle, since emission at these wavelengths does not suffer \nfrom extinction through the envelope.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.52]{Models_inc_example.eps}\n\\caption{A model from the grid seen at 10 different inclination\nangles to illustrate the effect of viewing angle on the SED. \nThe model has $L_{tot}$=10.1 $L_{\\odot}$, $R_c$=50 AU, \n$\\rho_{1000}$=$1.2 \\times 10^{-18}$ g cm$^{-3}$, $\\theta$=15\\degr, \nand is seen at inclination angles 18\\degr, 32\\degr, 41\\degr, 49\\degr, \n57\\degr, 63\\degr, 69\\degr, 76\\degr, 81\\degr, and 87\\degr\\ (from top to bottom). \n\\label{Models_inc}}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.52]{Models_cavity_example.eps}\n\\caption{Models from the grid to illustrate the effect of cavity opening\nangle on the SED. 
The models have $L_{tot}$=10.1 $L_{\\odot}$, \n$R_c$=50 AU, $\\rho_{1000}$=$1.2 \\times 10^{-18}$ g cm$^{-3}$, i=63\\degr, \nbut each has a different cavity opening angle: 5\\degr\\ ({\\it red}),\n15\\degr\\ ({\\it yellow}), 25\\degr\\ ({\\it green}), 35\\degr\\ \n({\\it blue}), 45\\degr\\ ({\\it purple}).\n\\label{Models_cavity}}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.52]{Models_Rdisk_example.eps}\n\\caption{Models from the grid to illustrate the effect of the centrifugal radius\n($=R_{disk}$) on the SED. The models have $L_{tot}$=10.1 $L_{\\odot}$, \n$\\rho_{1000}$=$1.2 \\times 10^{-18}$ g cm$^{-3}$, $\\theta$=5\\degr, i=63\\degr, \nbut different disk radii: 5 AU ({\\it red}), 50 AU ({\\it yellow}), 100 AU ({\\it green}), \n500 AU ({\\it purple}).\n\\label{Models_Rdisk}}\n\\end{figure}\n\nThe cavity opening angle affects the SED shape at all wavelengths (Figure\n\\ref{Models_cavity}). A small cavity only minimally alters the SED compared\nto a case without a cavity; there is still a deep silicate absorption at 10 $\\mu$m \nand steep SED slope, but the cavity allows some scattered light to escape in\nthe near-IR. A larger cavity results in higher emission at near- and mid-infrared \nwavelengths and reduced emission in the far-infrared. \nThe effect of the cavity on the SED would change if a different shape\nfor the cavity walls were adopted. For example, cavities where the outer\nwall follows the streamlines of the infalling gas and dust evacuate less inner \nenvelope material than our polynomial-shaped cavities, resulting in deeper \nsilicate absorption features and steeper mid-infrared SED slopes for the \nsame cavity opening angle \\citep[see][]{furlan14}. Thus, our cavity \nopening angles are tied to our assumed cavity shape.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.52]{Models_density_example.eps}\n\\caption{Models from the grid to illustrate the effect of envelope\ndensity on the SED. 
The models have $L_{tot}$=10.1 $L_{\\odot}$, \n$R_c$=50 AU, $\\theta$=15\\degr, i=63\\degr, but different reference \ndensities $\\rho_{1000}$:\n0, $2.4 \\times 10^{-20}$, $1.2 \\times 10^{-19}$, $2.4 \\times 10^{-19}$, \n$1.2 \\times 10^{-18}$, $2.4 \\times 10^{-18}$, $1.2 \\times 10^{-17}$, \n$2.4 \\times 10^{-17}$, and $1.2 \\times 10^{-16}$ g cm$^{-3}$ (the \npeak of the SED moves to longer wavelengths as $\\rho_{1000}$ \nincreases).\n\\label{Models_density}}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.52]{Models_Ltot_example.eps}\n\\caption{Models from the grid to illustrate the effect of the total\nluminosity on the SED. The models have $R_c$=50 AU, $\\rho_{1000}$=\n$1.2 \\times 10^{-18}$ g cm$^{-3}$, $\\theta$=15\\degr, i=63\\degr, but \ndifferent values for the total luminosity: 0.1, 0.3, 1.0, 3.1, 10.1, 30.2, 101, \nand 303 $L_{\\odot}$\\ (from bottom to top).\n\\label{Models_Ltot}}\n\\end{figure}\n\nThe effect of the centrifugal radius is somewhat similar to those of the cavity\nopening angle and inclination angle, but less pronounced (Figure \n\\ref{Models_Rdisk}). Small disk radii imply more slowly rotating, less flattened \nenvelopes and depress the near- and mid-infrared fluxes more than larger \ndisk radii, but even with large disk radii (and more flattened envelopes) there \nis still sufficient envelope material along the line of sight to cause a pronounced \n10 $\\mu$m absorption feature. Overall, our models do not directly constrain the \nsize of the disk; the opacity is dominated by the envelope. Furthermore, the \nflattening of the envelope that is determined by $R_c$ has a similar effect on \nthe SED as changing the outflow cavity opening angle. 
\n\nChanging the envelope density causes shifts in the SED in terms of both\nwavelength and flux level: the higher the envelope density, the less \nflux is emitted at shorter wavelengths, and the more the peak of the \nSED shifts to longer wavelengths (Figure \\ref{Models_density}). Deeply \nembedded protostars have SEDs that peak at $\\lambda >$ 100 $\\mu$m, \nsteep mid-IR SED slopes, and deep silicate absorption features.\nThe effect of the envelope density on the SED is different from that of the\ninclination angle, especially in the far-IR: while the SED is not very\nsensitive to the inclination angle in this wavelength region, the ratio of, \ne.g., 70 and 160 $\\mu$m fluxes changes considerably depending on \nthe envelope density. \n\nThe total luminosity of the source has an effect on the overall emission\nlevel of the protostar, but does not strongly affect the SED shape. The\nmain effect is that the peak of the SED shifts to longer wavelengths as \nthe luminosity decreases ($\\lambda_{peak} \\propto L^{-1\/12}$; \n\\citealt{kenyon93}). Especially when comparing models with $L_{tot}$ \nvalues that differ by a factor of a few, the SED shapes are similar \n(Figure \\ref{Models_Ltot}). Thus, one could scale a particular model by \na factor between $\\sim$ 0.5 and 2 and get a good representation of a \nprotostar that is somewhat fainter or brighter, without having to rerun \nthe model calculation with the different input luminosity.\n\n\n\\subsection{Model Apertures}\n\\label{model_ap}\n\nThe model fluxes are computed for 24 different apertures, ranging from \n420 to 10,080 AU in steps of 420 AU (which corresponds to 1\\arcsec\\ at\nthe assumed distance of 420 pc to the Orion star-forming complex). \nFor these SED fluxes, no convolution with a PSF is done, and therefore\nthe spatial distribution of the flux is solely due to the extended nature \nof protostars. 
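The aperture grid just described, and the matching to observed apertures discussed below, can be sketched as follows (a minimal illustration; the curve of growth is hypothetical, and linear interpolation between the two bracketing apertures is our assumption):

```python
import numpy as np

# Model apertures: 24 radii from 420 to 10,080 AU in steps of 420 AU,
# i.e. 1" to 24" at the assumed distance of 420 pc to Orion.
aperture_arcsec = np.arange(1.0, 25.0)

def flux_at_aperture(model_fluxes, radius_arcsec):
    """Interpolate per-aperture model fluxes to the aperture radius used
    for the observed photometry (e.g. 2.4" for IRAC, 5.3" for IRS).
    Assumes linear interpolation between the two bracketing apertures."""
    return float(np.interp(radius_arcsec, aperture_arcsec, model_fluxes))

# Hypothetical curve of growth: flux rising smoothly toward the total.
fluxes = 1.0 - np.exp(-aperture_arcsec / 5.0)
f_irac = flux_at_aperture(fluxes, 2.4)  # lies between the 2" and 3" values
```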
Since the envelope outer radius is chosen to be 10,000 AU, \nthe largest aperture encompasses the entire flux emitted by each protostellar\nsystem. However, most of the near- and mid-infrared emission comes \nfrom smaller spatial scales, so an aperture of about 5000 AU will already\ncapture most of the flux emitted at these wavelengths. \n\nFor a more accurate comparison of observed and model fluxes, in each \ninfrared photometric band where we have data available, we interpolate \nmodel fluxes from the two apertures that bracket the aperture used in \nmeasuring the observed fluxes (4\\arcsec\\ for 2MASS, 2{\\farcs}4\nfor IRAC, PSF photometry for MIPS 24 \\micron, with a typical FWHM of\n6\\arcsec, 9{\\farcs}6 for PACS 70 and 100 \\micron, 12{\\farcs}8 \nfor PACS 160 \\micron). For the IRS data points, we use fluxes interpolated \nfor a 5{\\farcs}3 aperture, since the spectra are composed of two segments,\nSL (5.2-14 \\micron; slit width of 3{\\farcs}6) and LL (14-38 \\micron, slit \nwidth of 10{\\farcs}5), and, if any flux mismatches were present, the SL \nsegment was typically scaled to match the LL flux level at 14 \\micron\\ \n\\citep[see, e.g.,][]{furlan08}. So, fluxes measured in an aperture with \na radius of 5{\\farcs}3 roughly correspond to fluxes from a 10{\\farcs}6-wide \nslit.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.39,angle=90]{Model_fluxes_PACS160.eps}\n\\caption{PACS 160 $\\mu$m fluxes versus aperture radius derived for \na model ($L_{tot}=1.0$ $L_{\\odot}$, $R_c=100$ AU, $\\rho_{1000}=2.378 \n\\times 10^{-18}$ g cm$^{-3}$, $\\theta$=15\\degr, $i= 63$\\degr) using \ndifferent methods. \nThe black symbols represent fluxes from the model SED, the blue symbols \nfluxes derived using aperture photometry on the model image convolved with \nthe PACS 160 $\\mu$m PSF, and the red symbols fluxes derived from the \nconvolved model image and then corrected for PSF losses (see text for details). 
\nThe maximum flux from the model SED was used to normalize all other fluxes.\nThe dotted line indicates an aperture radius of 12{\\farcs}8.\n\\label{Model_fluxes_PACS160}}\n\\end{figure}\n\nGiven that our targets are typically extended and that the near- to mid-infrared \ndata have relatively high spatial resolution, measuring fluxes in small apertures \n(a few arcseconds in radius) will truncate some of the object's flux, so it is\nimportant to choose similar apertures for the model fluxes. \nFrom about 30 to 100 $\\mu$m, the model fluxes calculated for smaller apertures \nare not very different from the total flux (i.e., the flux from the largest aperture),\nwhich is a result of the emission profile in the envelope and the lower spatial \nresolution at longer wavelengths. \nTo check whether extended source emission in the far-infrared might affect \nthe flux we measure in our models, we calculated a small set of model\nimages at 160 $\\mu$m, convolved them with the PACS 160 $\\mu$m\nPSF, and compared the fluxes from the model images to those written\nout for the model SEDs (which we refer to as ``SED fluxes''; these are\nthe fluxes from the models in the grid).\nModel images would be the most observationally consistent way to measure\nthe flux densities, but they are too computationally expensive and would not\nrepresent a significant gain.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.39,angle=90]{Model_fluxes_SABOCA.eps}\n\\caption{SABOCA (350 $\\mu$m) fluxes versus aperture radius derived \nfor the same model as in Figure \\ref{Model_fluxes_PACS160} using different\nmethods. The black symbols represent fluxes from the model SED, the \nblue symbols fluxes derived using aperture photometry on the model \nimage convolved with a Gaussian PSF, and the red dot-dashed line the \nbeam flux (assuming a beam with a FWHM of 7{\\farcs}3). 
The maximum \nflux from the model SED was used to normalize all other fluxes.\nThe dotted line indicates an aperture radius of 3{\\farcs}65.\n\\label{Model_fluxes_SABOCA}}\n\\end{figure}\n\nIn Figure \\ref{Model_fluxes_PACS160} we show the fluxes derived for\na particular model at 160 $\\mu$m using different methods. The fluxes \nmeasured in the convolved model image are lower than the SED fluxes; this \nis caused by the wide PACS 160 $\\mu$m PSF, which spreads flux to very\nlarge radii. Since the shape of the PSF is known, we can correct for these \nPSF losses (assuming a point source and using standard aperture\ncorrections). The fluxes corrected for these PSF losses are very similar to the \nSED fluxes, typically within $\\sim$ 5-10\\% at apertures larger than 5\\arcsec.\nSince our observed fluxes correspond to these PSF-corrected fluxes (we apply \naperture corrections to our fluxes measured in a 12{\\farcs}8 aperture to \naccount for PSF losses), adopting the SED fluxes from the largest aperture \nwould yield model fluxes that are somewhat too high. Thus, we chose to adopt \nthe SED flux measured in a 12{\\farcs}8 aperture as a good approximation for \nthe model flux we would get if we had model images available for all models in \nthe grid and measured aperture-corrected fluxes in these images. We note that \nin our PACS data, the 160 $\\mu$m sky annulus, which extends from 12{\\farcs}8 \nto 25{\\farcs}6 (see B. Ali et al. 2016, in preparation), can include extended emission\nfrom surrounding material and also some envelope emission. 
In these cases, we \noften used PSF photometry to minimize contamination from nearby sources and \nnebulosity; however, PSF fitting was not used for more isolated sources since \nthe envelopes can be marginally resolved at 160 $\\mu$m and thus deviate \nslightly from the adopted PSF shape.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.39,angle=90]{Model_fluxes_LABOCA.eps}\n\\caption{Similar to Figure \\ref{Model_fluxes_SABOCA}, but for the\nLABOCA (870 $\\mu$m) fluxes. The dotted line indicates an aperture \nradius of 9{\\farcs}5.\n\\label{Model_fluxes_LABOCA}}\n\\end{figure}\n\nFor the SABOCA and LABOCA data, beam fluxes were adopted; the\nFWHM of the SABOCA beam is 7{\\farcs}3, while for the LABOCA\nbeam it is 19\\arcsec. In order to determine which aperture radius\ncorresponds best to beam fluxes, we created a similar set of model \nimages as above at 350 and 870 $\\mu$m, convolved them with \nGaussian PSFs, and measured fluxes in the model images using \ndifferent apertures (see Figures \\ref{Model_fluxes_SABOCA} and\n\\ref{Model_fluxes_LABOCA}, where we show the results for one\nmodel). Fluxes measured in the convolved model image are smaller \nthan the SED fluxes, especially at aperture radii smaller than the \nFWHM of the beam. We find that the beam fluxes for SABOCA and \nLABOCA are best matched by SED fluxes from apertures with radii \nhalf the size of the FWHM of the beam, i.e., 3{\\farcs}65 for \nSABOCA and 9{\\farcs}5 for LABOCA (thus, the aperture sizes are\nthe same as the beam FWHM). 
\nThis is again an idealized situation, since the measured SABOCA\nand LABOCA beam fluxes also include extended emission (if the\nsource lies on top of background emission), and thus they could be \nhigher than those from the model.\n\n\\subsection{Effect of External Heating}\n\\label{model_ext_heat}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65,angle=90]{Ext_heat_models.eps}\n\\caption{{\\it Left:} Comparison of models with $L_{tot}$=0.1 $L_{\\odot}$, $R_c$=100 AU, \n$\\theta$=15\\degr, $\\rho_{1000}$=2.4 $\\times 10^{-18}$ g cm$^{-3}$ ({\\it top}) or\n2.4 $\\times 10^{-20}$ g cm$^{-3}$ ({\\it bottom}), $i$=63\\degr, without external heating \n({\\it black}), with external heating by an ISRF equal to that in the solar neighborhood\n({\\it green, dashed line}), and with heating by an ISRF 10 times stronger ({\\it orange, \ndashed line}). {\\it Right:} Similar to the models in the left panels, but these models have \n$L_{tot}$=1.0 $L_{\\odot}$.\n\\label{models_ext_heat}}\n\\end{figure*}\n\nIn our models, the luminosity is determined by the central protostar and\nthe accretion; no external heating is included. The interstellar radiation\nfield (ISRF) could increase the temperature in the outer envelope regions,\nthus causing an increase in the longer-wavelength fluxes \\citep[e.g.,][]\n{evans01,shirley02,young03}.\nIt is expected that external heating has a noticeable effect only on \nlow-luminosity sources ($\\lesssim$ 1 $L_{\\odot}$), while objects with strong \ninternal heating are not affected by the ISRF. Moreover, the strength of \nthe ISRF varies spatially \\citep{mathis83}, and thus its effect on each \nindividual protostar is uncertain. 
Nonetheless, in the following we estimate\nthe effect of external heating on model fluxes by using a different set of\nmodels.\n \n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.7,angle=90]{Ext_heat_models_flux_ratios.eps}\n\\caption{Ratio of the excess emission due to external heating and the emission of \nthe protostar with external heating in different bands, for heating by an ISRF\nequal to that in the solar neighborhood ({\\it green diamonds}) and by an ISRF \n10 times stronger ({\\it orange squares}). The vertical lines show the range of\nflux excess ratios resulting from different viewing angles (inclination angles range \nfrom 18\\degr\\ to 87\\degr), while the symbols represent mean values. The top (bottom) \npanels are for models with $L_{tot}$=0.1 (1.0) $L_{\\odot}$. The four columns correspond \nto the four reference densities probed.\n\\label{models_ext_heat_ratios}}\n\\end{figure*}\n\nFor this model calculation, we used the 2012 version of the Whitney radiative \ntransfer code \\citep{whitney13}, which allows for the inclusion of external illumination \nby using the ISRF value in the solar neighborhood from \\citet{mathis83}; to vary \nthe ISRF strength, the adopted value can be scaled by a multiplicative factor \nand extinguished by a certain amount of foreground extinction. \nWe calculated a small number of models with and without external heating\nand then compared their far-infrared and submillimeter fluxes. One set of models \nhas $L_{tot}$=0.1 $L_{\\odot}$, $R_c$=100 AU, $\\theta$=15\\degr, and four different \nreference densities $\\rho_{1000}$, ranging from 2.4 $\\times 10^{-17}$ g cm$^{-3}$ \nto 2.4 $\\times 10^{-20}$ g cm$^{-3}$. The other set has the same parameters\nexcept for $L_{tot}$, which is 1.0 $L_{\\odot}$. We calculated models without external \nheating, with heating from an ISRF equal to that in the solar neighborhood, and \nwith ISRF heating 10 times the solar neighborhood value. 
For these models, we \ndid not include any foreground extinction for the ISRF; thus, the ISRF heating in \nthese models can be considered an upper limit -- especially the 10-fold increase \nover the ISRF in the solar neighborhood represents an extreme value.\nFigure \\ref{models_ext_heat} shows a few examples of model SEDs with and\nwithout external heating. External heating results in flux increases in the far-IR\nand sub-mm; as expected, it affects low-luminosity sources more, and its effects \nare also more noticeable for higher-density envelopes.\n\nFor a more quantitative comparison of model fluxes in the far-IR and sub-mm,\nwe computed the fluxes for each model in six different bands, those of MIPS \n24 $\\mu$m, PACS 70, 100, and 160 $\\mu$m, and SABOCA (350 $\\mu$m) \nand LABOCA (870 $\\mu$m), using apertures as described in section \n\\ref{model_ap}. The model fluxes are affected by poorer signal-to-noise \nratios at the longest wavelengths, so the 870 $\\mu$m fluxes are less reliable.\nWe subtracted the fluxes of the models without external heating ($F_{\\rm no.ext.heating}$)\nfrom those with external heating ($F_{\\rm ext.heating}$) to determine the flux \nexcess due to external heating. The ratios of these excess fluxes and the model fluxes \nwith external heating ($(F_{\\rm ext.heating}-F_{\\rm no.ext.heating})\/F_{\\rm ext.heating}$)\nare shown in Figure \\ref{models_ext_heat_ratios}. 
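The excess fraction defined above is a one-line computation; a minimal sketch (the function name is ours):

```python
def external_heating_excess(f_ext, f_no_ext):
    """Fraction of the heated model's flux attributable to external
    heating: (F_ext - F_no_ext) / F_ext, with both fluxes measured in
    the same band and aperture."""
    return (f_ext - f_no_ext) / f_ext

# e.g., if external heating doubles a band flux, half of the heated
# flux is due to the ISRF: external_heating_excess(2.0, 1.0) -> 0.5
```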
Given that these ratios depend \non the inclination angle to the line of sight, we show them as average values for all\n10 inclination angles as well as the range subtended by all inclination angles.\nWe note overall smaller flux ratios at 350 $\\mu$m due to the smaller aperture \nsize chosen in this wave band (see section \\ref{model_ap}).\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65,angle=90]{Models_ext_heat_SED1.eps} \n\\caption{{\\it Black and orange lines:} SEDs for models with $L_{tot}$=0.1 $L_{\\odot}$, \n$R_c$=100 AU, $\\theta$=15\\degr, $i$=75\\degr, reference densities \n$\\rho_{1000}$=2.4 $\\times 10^{-18}$ g cm$^{-3}$ ({\\it left}) and 2.4 $\\times 10^{-19}$ \ng cm$^{-3}$ ({\\it right}), without external heating ({\\it black}) and with heating by an ISRF \nscaled by a factor of 10 ({\\it orange}). The purple dashed lines show SEDs from our \nmodel grid (which does not include external heating) with model parameters\nchanged as indicated in the figure label; these models were chosen to closely \nmatch the model SEDs with external heating.\n\\label{ext_heat_SED1}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65,angle=90]{Models_ext_heat_SED2.eps} \n\\caption{Similar to Figure \\ref{ext_heat_SED1}, but for model SEDs with\n$L_{tot}$=1.0 $L_{\\odot}$\\ ({\\it black and orange lines}). The light blue and purple \ndashed lines show SEDs from our model grid (no external heating) with the same \nmodel parameters as shown except for a reference density 2.5 times higher \n({\\it light blue}) and $\\theta$=25\\degr, $i=$81\\degr, and a higher luminosity \n({\\it purple}).\n\\label{ext_heat_SED2}}\n\\end{figure*}\n\nOur analysis shows that heating by the ISRF results in flux increases in the far-IR \nand sub-mm that are about a factor of 2-3 higher for envelopes of low-luminosity \nsources ($L_{tot}$=0.1 $L_{\\odot}$) than for those with higher luminosity. 
Also, the\neffect of external heating is more noticeable at longer wavelengths (where \napertures\/beams are also larger) than at shorter ones; given our chosen apertures, \nthe largest effect occurs at 160 and 870 $\\mu$m. We also note that the flux increases \ndue to heating by the ISRF are smallest for the lowest $\\rho_{1000}$ value probed, \n2.4 $\\times 10^{-20}$ g cm$^{-3}$; at 160 $\\mu$m, the flux increase is largest for \nintermediate envelope densities. Finally, the flux increases in the far-IR and sub-mm \nare far larger for a solar-neighborhood ISRF scaled by a factor of 10 than for an \nunscaled ISRF; for the $L_{tot}$=0.1 $L_{\\odot}$\\ models, an unscaled ISRF increases \nthe fluxes from a few percent (at $\\lesssim$ 100 $\\mu$m) to 50\\% (at 870 $\\mu$m), \nwhile an ISRF scaled by a factor of 10 increases these fluxes by 30\\%-75\\%. Thus, \nfor low-luminosity protostars, up to $\\sim$ 75\\% of a protostar's 870 $\\mu$m flux \ncould be due to external heating, if the environment is dominated by an extremely \nstrong ISRF.\n\nTo estimate how the contribution of external heating would modify derived\nmodel parameters, in Figures \\ref{ext_heat_SED1} and \\ref{ext_heat_SED2} \nwe compare model SEDs that include external heating by an ISRF 10 times \nstronger than in the solar neighborhood and model SEDs without this additional \nheating. For the latter, we used models from our model grid and tried to reproduce\nthe SEDs with external heating. For the models with $L_{tot}$=0.1 $L_{\\odot}$, the\neffect of external heating can be reproduced by increasing the luminosity by\nfactors of a few, increasing $\\rho_{1000}$ by up to an order of magnitude,\nand increasing the cavity opening angle and inclination angle by a small\namount. 
For the $L_{tot}$=1.0 $L_{\\odot}$\\ models, just increasing the reference \ndensity by a factor of 2.5 results in a good match to the long-wavelength \nemission of our externally heated models; however, the shorter-wavelength \nflux is either under- or overestimated. A better match is achieved with models \nhaving the same reference density as the externally heated models, but with \nslightly larger cavity opening angles and inclination angles, and luminosities \nabout a factor of 2 larger.\nThus, if the far-IR and sub-mm fluxes were contaminated by emission resulting \nfrom extremely strong external heating, a model fit using models from our grid \n(which does not include external heating) could overestimate the envelope density \nby up to an order of magnitude and the luminosity by a factor of 2-5. The cavity \nopening and inclination angles would also be more uncertain, but not by much. \nFor a more realistic scenario with more modest external heating (which would \nalso include the effect of local extinction), the effect on model parameters would\nbe smaller.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.4,angle=90]{Ext_heat_models_with_Av.eps}\n\\caption{Models with $L_{tot}$=0.1 $L_{\\odot}$, $R_c$=100 AU, $\\theta$=15\\degr, \n$\\rho_{1000}$=2.4 $\\times 10^{-18}$ g cm$^{-3}$, $i$=63\\degr, without external \nheating ({\\it black}), with external heating by an ISRF 10 times stronger than in the \nsolar neighborhood ({\\it orange} to {\\it brown}, {\\it dashed lines}) and different \namounts of extinction applied to the ISRF (from $A_V = 2.5$ to $A_V = 50$, \n{\\it top} to {\\it bottom}).\n\\label{models_ext_heat_Av}}\n\\end{figure}\n\nFor the latter point, we explored the effect of extinction on the ISRF by calculating \na few more models with $L_{tot}$=0.1 $L_{\\odot}$, $R_c$=100 AU, $\\theta$=15\\degr, \n$\\rho_{1000} = 2.4 \\times 10^{-18}$ g cm$^{-3}$, an ISRF 10 times stronger than \nthat in the solar neighborhood, and $A_V$ values for 
the ISRF of 2.5, 10, 20, and 50. \nThe model SEDs are shown in Figure \\ref{models_ext_heat_Av}. Compared to ISRF\nheating without any foreground extinction, an extinction of just $A_V=2.5$ already \ncauses a decrease by a factor of 1.5-2 in the overall emission at far-IR wavelengths. \nWith $A_V$ of 10 and 20, the far-IR emission decreases by factors of up to $\\sim$ 3.5 and 4, \nrespectively, compared to a strong ISRF that is not extinguished. The fraction of\nexcess emission due to external heating at 160 $\\mu$m decreases from an\naverage of 0.8 for $A_V$=0 (see Figure \\ref{models_ext_heat_ratios}) to 0.6,\n0.3, and 0.2 for $A_V$=2.5, 10, and 20, respectively. Therefore, considering that \ntypical $A_V$ values in Orion are $\\sim$ 10-20 mag \\citep{stutz15}, it is likely\nthat the effect of external heating on model parameters of low-luminosity sources\ndoes not exceed a factor of $\\sim$ 2 in luminosity and $\\sim$ 5 in envelope\ndensity.\n\n\n\\section{Fitting Method}\n\\label{method}\n\nA customized fitting routine determines the best-fit model from the grid \nfor each object in our sample of 330 YSOs (see Sections \\ref{sample} \nand \\ref{SEDs}) using both photometry and, where available, IRS spectroscopy.\nIdeally, an object has 2MASS, IRAC, IRS, MIPS, PACS, and SABOCA and LABOCA \ndata; in many cases, no submillimeter data are available, and in a few cases the \nobject is too faint to be detected by 2MASS. Of the 330 modeled objects, 40 do \nnot have IRS spectra. As a minimum, objects have some {\\it Spitzer} photometry \nand a measured flux value in the PACS 70 $\\mu$m band. 
No additional data \nfrom the literature were included in the fits to keep them homogeneous.\n\nIn order to reduce the number of data points contained in the IRS spectral\nwavelength range (such that the spectrum does not dominate over the photometry) \nand to exclude ice absorption features in the 5-8 \\micron\\ region and at 15.2 \\micron\\\nthat are usually observed, but not included in the model opacities, we rebin each \nIRS spectrum to fluxes at 16 wavelengths. These data points trace the continuum \nemission and the 10 and 20 \\micron\\ silicate features. Also, when rebinning the \nspectrum, we smooth over its noisy regions, and we scale the whole spectrum \nto match the MIPS 24 \\micron\\ flux if a similar deviation is also seen at the \nIRAC 5.8 and 8 \\micron\\ bands and is larger than 10\\%. Figure \\ref{IRS_rebin} \nshows three examples of our IRS spectra with the rebinned fluxes overplotted.\nOur selection of 16 IRS data points in addition to at most 13 photometric points \nspread from 1.1 to 870 \\micron\\ puts more emphasis on the mid-IR spectral \nregion in the fits. This wavelength region is better sampled by observations, \nmost of the emission is thermal radiation from the protostellar envelope and\ndisk (as opposed to some possible inclusion of scattered light or thermal \nemission from surrounding material at shorter and longer wavelengths, \nrespectively), and it contains the 10 \\micron\\ silicate feature, which crucially \nconstrains the SED fits. 
As a result, most models are expected to reproduce \nthe mid-IR fluxes well and might fit more poorly in the near-IR and sub-mm.\n\n\\begin{figure}[!]\n\\centering\n\\includegraphics[scale=0.63]{HOPS_IRS_spectra_rebin.eps}\n\\caption{Three IRS spectra, one for HOPS 32 (Class 0 protostar; {\\it top}),\none for HOPS 84 (Class I protostar; {\\it middle}), and one for HOPS 105\n(flat-spectrum source; {\\it bottom}), overlaid with the rebinned data points\n({\\it filled circles}) used by the fitting routine. Note the different flux ranges\non the y axis in the three panels and thus the big differences in slopes\namong the three spectra.\n\\label{IRS_rebin}}\n\\end{figure}\n\nTo directly compare observed and model fluxes, we create model SEDs \nwith data points that correspond to those obtained from observations,\nfrom both photometry and IRS spectroscopy. For the former, the model \nfluxes are not only derived from the same apertures as the data (see \nsection \\ref{model_ap}), but also integrated over the various filter \nbandpasses, thus yielding model photometry. For the latter, the model \nfluxes are interpolated at the same 16 wavelength values as the IRS spectra.\n\nSince the model grid contains a limited number of values for the total luminosity\n(eight), but the objects we intend to fit have luminosities that likely do not\ncorrespond precisely to these values, we include scaling factors for the luminosity \nwhen determining the best-fit model. 
As long as these scaling factors are not far\nfrom unity, they are expected to yield SEDs that are very similar to those obtained \nfrom models using the scaled luminosity value as one of the input parameters.\nThe scaling factor can also be related to the distance of the source; for all model \nfluxes, a distance of 420 pc is assumed, but in reality the protostars in our sample\nspan a certain (presumably small) range of distances along the line of sight.\nFor example, a 10\\% change in distance would result in a $\\sim$ 20\\% change \nin flux values (scaling factors of 0.83 or 1.23). Here we report luminosities \nassuming a distance of 420 pc.\n\nIn addition to scaling factors, each model SED can be extinguished to account\nfor interstellar extinction along the line of sight. We use two foreground extinction \nlaws from \\citet{mcclure09} that were derived for star-forming regions: one applies \nto $0.76 \\leq A_J < 2.53$ (or $0.3 \\leq A_K < 1$), and the other one to $A_J \\geq 2.53$\n(or $A_K \\geq 1$). For $A_J < 0.76$, we use a spline fit to the Mathis $R_V=5$ \ncurve \\citep{mathis90}. Since the three laws apply to different extinction environments, \nwe use a linear combination of them to achieve a smooth change in the extinction law \nfrom the diffuse interstellar medium to the dense regions within molecular clouds. \nThus, to find a best-fit model for a certain observed SED, the model fluxes \n$F_{mod}(\\lambda)$ are scaled and extinguished as follows:\n\\begin{equation}\nF_{obs}(\\lambda) = s F_{mod}(\\lambda) 10^{-0.4 A_{\\lambda}},\n\\label{F_scaled_ext}\n\\end{equation}\nwhere $F_{obs}(\\lambda)$ and $F_{mod}(\\lambda)$ are the observed and model \nfluxes, respectively, $s$ is the luminosity scaling factor, and $A_{\\lambda}$\nis the extinction at wavelength $\\lambda$. 
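The scaling factors quoted above for a 10\% distance error follow from the inverse-square dependence of flux on distance; a quick numerical check (the function name is ours, assuming the 420 pc reference distance):

```python
def scaling_for_distance(d_pc, d_ref_pc=420.0):
    """Luminosity scaling factor implied by a source distance d_pc when
    all model fluxes assume d_ref_pc: flux scales as (d_ref / d)**2."""
    return (d_ref_pc / d_pc) ** 2

# A +/-10% distance error gives scaling factors of ~0.83 and ~1.23:
s_far = scaling_for_distance(1.1 * 420.0)   # ~0.83
s_near = scaling_for_distance(0.9 * 420.0)  # ~1.23
```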
We use three reddening laws,\n$k_{\\lambda}=A_{\\lambda}\/A_J$; by denoting them with the subscripts 1, 2,\nand 3, $A_{\\lambda}$ in the above equation becomes\n\\begin{eqnarray}\nA_{\\lambda} = A_J k_{1,\\lambda} \\quad {\\rm for}\\; A_J < 0.76 \\nonumber \\\\\nA_{\\lambda} = 0.76 k_{1,\\lambda} + (A_J - 0.76) k_{2,\\lambda} \n\\nonumber \\\\ {\\rm for}\\; 0.76 < A_J < 2.53 \\nonumber \\\\\nA_{\\lambda} = 0.76 k_{1,\\lambda} + (2.53 - 0.76) k_{2,\\lambda} + \n(A_J - 2.53) k_{3,\\lambda} \\nonumber \\\\ {\\rm for}\\; A_J > 2.53\n\\end{eqnarray}\nThus, equation \\ref{F_scaled_ext} can be written as\n\\begin{eqnarray}\n2.5 \\log(F_{mod}(\\lambda)\/F_{obs}(\\lambda)) \n= A_J k_{1,\\lambda} -2.5 \\log(s) \\nonumber \\\\ \n{\\rm for}\\; A_J < 0.76 \\nonumber \\\\\n2.5 \\log(F_{mod}(\\lambda)\/F_{obs}(\\lambda)) \n- 0.76 (k_{1,\\lambda}-k_{2,\\lambda}) = \\nonumber \\\\ \nA_J k_{2,\\lambda} -2.5 \\log(s) \\quad {\\rm for}\\; 0.76 < A_J < 2.53 \n\\nonumber \\\\\n2.5 \\log(F_{mod}(\\lambda)\/F_{obs}(\\lambda)) - 0.76 (k_{1,\\lambda}-k_{2,\\lambda}) \n- 2.53 (k_{2,\\lambda}-k_{3,\\lambda}) \\nonumber \\\\\n= A_J k_{3,\\lambda} -2.5 \\log(s) \\quad {\\rm for}\\; A_J > 2.53 \\qquad\n\\end{eqnarray}\nThese are linear equations in $A_J$, with the left-hand side of each \nequation as the dependent variable and $k_{\\lambda}$ as the \nindependent variable. For each regime of $A_J$ values, a best-fit\nline can be determined that yields $A_J$ and $-2.5 \\log(s)$ from the \nslope and intercept, respectively, for each model that is compared to \nthe observations.\n\nFor each set of model fluxes and observed fluxes, we calculate three linear fits \n(using linear combinations of the three different extinction laws, as explained above),\nthus yielding three values for the scaling factor and three for the extinction value. 
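The regression above can be sketched as follows for the middle extinction regime ($0.76 \leq A_J < 2.53$). This is a minimal sketch: the reddening-law arrays, fluxes, and function name are synthetic placeholders of our own, not values from the actual model grid or the McClure (2009) laws.

```python
import numpy as np

# Minimal sketch of the per-model linear fit for the middle extinction
# regime. The slope of the regression is A_J and the intercept is
# -2.5 log10(s). All arrays below are synthetic placeholders.

def fit_AJ_and_scale(f_obs, f_mod, k1, k2):
    """Regress the linearized left-hand side against k2 over all
    wavelengths; return (A_J, s)."""
    y = 2.5 * np.log10(f_mod / f_obs) - 0.76 * (k1 - k2)
    slope, intercept = np.polyfit(k2, y, 1)
    return slope, 10 ** (-intercept / 2.5)

# Build synthetic "observed" fluxes from a known A_J and s, then recover them.
k1 = np.linspace(1.0, 0.1, 20)       # hypothetical law 1, A_lambda / A_J
k2 = 0.8 * k1 + 0.05                 # hypothetical law 2
f_mod = np.linspace(1.0, 5.0, 20)    # model fluxes (arbitrary units)
A_J_true, s_true = 1.5, 1.2
A_lam = 0.76 * k1 + (A_J_true - 0.76) * k2
f_obs = s_true * f_mod * 10 ** (-0.4 * A_lam)

A_J_fit, s_fit = fit_AJ_and_scale(f_obs, f_mod, k1, k2)  # ~1.5, ~1.2
```

With noiseless synthetic data the regression recovers the input values exactly; for real data the scatter of the points about the best-fit line reflects how well a single extinction law and scaling factor can reconcile model and observations.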
\nIf each extinction value is within the bounds of the extinction law that was \nused and smaller than a certain maximum $A_J$ value (which will be discussed\nbelow), and the scaling factor is in the range from 0.5 to 2.0, then the result \nwith the best linear fit will be used. \nHowever, if some of the values are not within their boundaries, then combinations\nof their limiting values are explored, and the set of scaling factor and extinction\nwith the best fit is adopted. For example, if a model has fluxes that are\nmuch higher than all observed fluxes, the linear fit described above will likely\nyield very large extinction values and small scaling factors. In this case the fitter\nwould only accept the smallest possible scaling factor (0.5) and the maximum \nallowed $A_J$ value as a solution (which will still result in a poor fit). \n\nFor each object, we allowed the model fluxes to be extinguished up to a maximum\n$A_J$ value derived from column density maps of Orion (\\citealt{stutz15}; see also\n\\citealt{stutz10,stutz13,launhardt13} for the methodology of deriving N$_H$ from \n160-500 \\micron\\ maps). We converted the total hydrogen column \ndensity from these maps to $A_V$ values ($A_V$=3.55 $A_J$)\nby using a conversion factor of \n$1.0 \\times 10^{21}$ cm$^{-2}$ mag$^{-1}$ \\citep{winston10, pillitteri13}. \nFor objects for which no column density could be derived, we set the maximum \n$A_J$ value to 8.45 (which corresponds to $A_V=30$). 
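The maximum-extinction cap described above amounts to a simple unit conversion; a minimal sketch, using the conversion factors quoted in the text (the function name is ours):

```python
# Sketch of the maximum-extinction conversion: hydrogen column density
# N_H [cm^-2] -> A_V using 1.0e21 cm^-2 mag^-1, then A_V -> A_J via
# A_V = 3.55 A_J.

def max_AJ_from_NH(N_H, NH_per_AV=1.0e21, AV_per_AJ=3.55):
    """Maximum A_J allowed in the fit, from a column-density map value."""
    return (N_H / NH_per_AV) / AV_per_AJ

# Fallback when no column density is available: A_V = 30 -> A_J ~ 8.45.
AJ_default = 30.0 / 3.55
```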
\n\nOnce a best-fit scaling factor and extinction value have been determined \nfor each model, each data point is assigned a weight, and the goodness of \nthe fit is estimated with \n\\begin{equation}\n R = \\frac{\\sum_{i=1}^{N} w_i |\\ln \\left(\\frac{F_{obs}(\\lambda_i)}\n{F_{mod}(\\lambda_i)}\\right)|}{N},\n\\end{equation}\nwhere $w_i$ are the weights, $F_{obs}(\\lambda_i)$ and $F_{mod}(\\lambda_i)$\nare the observed and the scaled and extinguished model fluxes, respectively, \nand $N$ is the number of data points \\citep[see][]{fischer12}. Thus, $R$ is a measure \nof the average, weighted, logarithmic deviation between the observed and model SED. \nIt was introduced by \\citet{fischer12} since the uncertainty of the fit is dominated by \nthe availability of models in the grid (i.e., the spacing of the models in SED space) \nand not by the measurement uncertainty of the data, making the standard $\\chi^2$ \nanalysis less useful. Also, a statistic that measures deviations between models and \ndata in log space more closely resembles the assessment done by eye when \ncomparing models and observed SEDs in log($\\lambda F_{\\lambda}$) vs.\\ $\\lambda$ \nplots.\nWe set the weights $w_i$ to the inverse of the estimated fractional uncertainty \nof each data point: for photometry at wavelengths below 3 \\micron\\ they \nare equal to 1\/0.1, between 3 and 60 \\micron\\ they are 1\/0.05, at 70 and \n100 \\micron\\ they are 1\/0.04, at 160 \\micron\\ the weight is 1\/0.07, and \nfor photometry at 350 and 870 \\micron\\ they are 1\/0.4 and 1\/0.2, respectively. \nFor fluxes from IRS spectra the weights are 1\/0.075 for the wavelength ranges \n8-12 \\micron\\ and 18-38 \\micron, while they are 1\/0.1 for the 5-8 \\micron\\ \nand 12-18 \\micron\\ regions. These IRS weights are also multiplied by 1.5 for\nhigh signal-to-noise spectra and by 0.5 for noisy spectra. 
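A minimal sketch of this statistic, with made-up fluxes and uniform weights (not values from any actual fit):

```python
import numpy as np

# Minimal sketch of the fit statistic R: the weighted mean absolute
# logarithmic deviation between observed and model fluxes. The fluxes
# and weights below are made up for illustration.

def fit_statistic(f_obs, f_mod, w):
    """R = sum_i w_i |ln(F_obs,i / F_mod,i)| / N."""
    f_obs, f_mod, w = map(np.asarray, (f_obs, f_mod, w))
    return np.sum(w * np.abs(np.log(f_obs / f_mod))) / f_obs.size

# A model that is off by 10% at every point, with uniform weights 1/0.1
# (i.e., a 10% assumed fractional uncertainty), gives R close to 1:
f_mod = np.array([1.0, 2.0, 5.0, 10.0])
f_obs = 1.1 * f_mod
R = fit_statistic(f_obs, f_mod, np.full(4, 1 / 0.1))  # ~0.95
```

This illustrates the interpretation used below: deviations comparable to the assumed fractional uncertainties yield $R$ near unity.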
In this way those \nparts of the IRS spectrum that most constrain the SED, the 10 \\micron\\ silicate \nabsorption feature and slope beyond 18 $\\mu$m, are given more weight; for \nhigh-quality spectra, the weights in these wavelength regions are the same as \nfor the 3-60 \\micron\\ photometry.\n\nFor small values, $R$ measures the average distance between model and data\nin units of the fractional uncertainty.\nIn general, the smaller the $R$ value, the better the model fit, but protostars with fewer \ndata points can have small $R$ values, while protostars with some noisy data can have \nlarger $R$ values (but still an overall good fit). We find a best-fit model for each object, \nbut we also record all those models that lie within a certain range of $R$ values from the \nbest-fit $R$. These models give us an estimate on how well the various model parameters\nare constrained (see Section \\ref{deltaR}).\n\nOur model grid is used to characterize the parameters that best describe the \nobserved SED of each object; the $R$ values rank the models for each \nobject and thus can be used to derive best-fit parameters, as well as estimates of \nparameter ranges. In several instances, better fits could be achieved if the model \nparameters were further adjusted, for example by testing more values of cavity \nopening angle or shape, or even changing the opacities (see, e.g., HOPS 68 \n\\citep{poteet11}, HOPS 223 \\citep{fischer12}, HOPS 59, 60, 66, 108, 368, 369, 370\n\\citep{adams12}, HOPS 136 \\citep{fischer14}, and HOPS 108 \\citep{furlan14}).\nHowever, for protostars that are well fit with one of the models from the grid or for \nwhich the grid yields a narrow range of parameter values, it is unlikely that a more \nextended model grid would yield much different best-fit parameters. 
Overall, our \nmodel fits yield good estimates of envelope parameters for a majority of the sample, \nand thus we can analyze the protostellar properties of our HOPS targets in a statistical \nmanner. \n\n\n\\section{Results of the Model Fits}\n\nThe best-fit parameters resulting from our models can be found in Table \nA\\ref{bestfit}, and Figure A\\ref{bestSEDs} shows the SEDs and best fits for our sample.\nIn this section we give an overview of the quality of the fits, the distributions \nof the best-fit model parameters, both for the sample as a whole and separated \nby SED class, the parameter uncertainties, and the various degeneracies between \nmodel parameters.\n\n\\subsection{Quality of the Fits}\n\\label{fit_quality}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{R_histogram.eps}\n\\caption{Histogram of the $R$ values of the best fits of the 330 \nYSOs in the HOPS sample that have {\\it Spitzer} and {\\it Herschel}\ndetections. \\label{R_histo}}\n\\end{figure}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.62,angle=90]{R_histogram_by_class.eps}\n\\caption{Histograms of the $R$ values of the best fits shown separately\nfor the three classes of objects (Class 0, I, and flat-spectrum). The three \nfits with $R>8$ (two Class 0 protostars, one Class I protostar) are not shown.\n\\label{R_histo_by_class}}\n\\end{figure*}\n\nFigure \\ref{R_histo} displays the histogram of $R$ values of the best model\nfits for the 330 objects in our HOPS sample that have {\\it Spitzer} and \n{\\it Herschel} data (more than two data points at different wavelengths)\nand are not contaminants (see Section \\ref{sample}). The median $R$ \nvalue is 3.10, while the mean value is 3.29. Fitting a Gaussian to the \nhistogram at $R$ $\\leq$ 7 yields 3.00 and 2.24 as the center and FWHM \nof the Gaussian, respectively. 
\nThe distribution of $R$ values implies that, on average, the model deviates\nby about three times the average fractional uncertainty from the data.\nThis is not unexpected, given that we fit models from a grid to observed \nSEDs that span almost three orders of magnitude in wavelength range, \nwith up to 29 data points. The fewer the data points, the easier it is to \nachieve a good fit; in fact, the eight protostars with $R < 1$, HOPS 371, 391, \n398, 401, 402, 404, 406, and 409, have SEDs with measured flux values at \nonly 4-5 points.\nStarting at $R$ values of about 1, $R$ can be used as an indicator of the \ngoodness of fit. However, in some cases a noisy IRS spectrum can \nincrease the $R$ value of a fit that, judged by the photometry alone, does \nnot deviate much from the observed data points. In other cases, mismatches \nbetween different data sets, like offsets between the IRAC fluxes and the IRS\nspectrum, can result in larger $R$ values. These might be interesting protostars \naffected by variability and are thus ideal candidates for follow-up observations. \n\nWhen looking at the SED fits in Figure A\\ref{bestSEDs} (and the corresponding \n$R$ values in Table A\\ref{bestfit}), we estimate that an $R$ value of up to \n$\\sim$ 4 can identify a reliable fit (with some possible discrepancies between \ndata and model in certain wavelength regions). When $R$ gets larger than \nabout 5, the discrepancy between the fit and the observed data points usually\nbecomes noticeable; the fit might still reproduce the overall SED shape\nbut deviate substantially from most measured flux values. \n\nIn Figure \\ref{R_histo_by_class}, we show the histogram of $R$ values \nseparately for the three main protostellar classes in our sample. The \nmedian $R$ value decreases from 3.27 for the Class 0 protostars to 3.18 for \nthe Class I protostars to 2.58 for the flat-spectrum sources. 
There are \n4 Class 0 protostars and 4 Class I protostars with $R$ values between 1.0 \nand 2.0, but 17 flat-spectrum sources in this $R$ range. These numbers \ntranslate to 17\\% of the flat-spectrum sources in our sample, 4\\% of the \nClass 0 protostars, and 3\\% of the Class I protostars. When examining objects' \n$R$ values between 2.0 and 4.0, there are 51 Class 0 protostars (55\\% of \nClass 0 protostars in the sample), 91 Class I protostars (73\\% of the Class I \nsample), and 74 flat-spectrum sources (73\\% of the flat-spectrum sample).\n\nThus, close to 90\\% of flat-spectrum sources are fit reasonably well ($R$ \nvalues $<$ 4), representing the largest fraction among the different classes \nof objects in our sample. This could be a result of their source properties \nbeing well represented in our model grid, but also lack of substantial \nwavelength-dependent variability (see, e.g., \\citealt{guenther14}), which, \nif present, would make their SEDs more difficult to fit. \nAbout three-quarters of Class I protostars also have best-fit models with $R < 4$; \nthis fraction drops to about two-thirds for the Class 0 protostars. The latter \ngroup of objects often suffers from more uncertain SEDs due to weak\nemission at shorter wavelengths (which, e.g., results in a noisy IRS\nspectrum); they might also be more embedded in extended emission, \nsuch as filaments, which can contaminate the far-IR to submillimeter fluxes. \nAnother factor that could contribute to poor fits is their presumably\nhigh envelope density, which places them closer to the limit in parameter\nspace probed by the model grid.\nOverall, 75\\% of the best-fit models of the protostars in our sample have\n$R < 4$.\n\nWhen examining the SED fits of objects with $R$ values larger than 5.0,\nseveral have very noisy IRS spectra (HOPS 19, 38, 40, 95, 164, 278, 316,\n322, 335, 359). 
In a few cases the measured PACS 100 and 160 $\\mu$m \nfluxes seem too high compared to the best-fit model (e.g., HOPS 189), \nwhich could be an indication of contamination by extended emission \nsurrounding the protostar. \n\nOf particular interest are objects where variability likely plays a role in \na poor fit. As mentioned in Section \\ref{SEDs}, variability among protostars\nis common; we found in Appendix \\ref{variability} that about 5\\% of our \ntargets display noticeable ($\\gtrsim$ 50\\%) mismatches between \nthe IRS, IRAC, and MIPS fluxes that could be due to intrinsic variability.\nThe SED fits are particularly affected for objects in which the IRS-IRAC \nflux mismatch differs from the IRS-MIPS mismatch, since in those cases \nwe did not scale the IRS spectrum to match the MIPS 24 $\\mu$m flux.\nHOPS 228 exemplifies such a case: there is a clear discrepancy\nbetween the IRAC and IRS fluxes (a factor of 2.1-2.7) and also\nbetween MIPS 24 $\\mu$m and IRS (a factor of 0.8); even though \nthe fit gives more weight to the IRS data, they are not fit well, especially \nthe silicate absorption feature. The $R$ value of 5.74 for the fit of\nHOPS 228 reflects the discrepant data sets and poor fit. \nHOPS 223 is another case where the IRS fluxes do not match the \nshorter-wavelength data (they are more than an order of magnitude \nlarger); however, it is a known FU Ori source \\citep[see][]{fischer12},\nand the SED presented here contains both pre- and post-outburst data. \nThe model fit is very poor, which can also be gauged by the $R$ value \nof 8.41. \n\nThere are also objects with overall good fits whose SEDs show discrepancies\nthat may be signs of variability or contamination. \nFor example, for the Class I protostar HOPS 71 the IRAC fluxes are a factor \nof 1.8-2.4 lower than the IRS fluxes in the 5-8 $\\mu$m region, and also the \nMIPS flux is about 20\\% lower. 
The best-fit model ($R=3.63$) fits the SED \nextremely well beyond about 6 $\\mu$m, with some discrepancy at shorter \nwavelengths. There is a source just 11\\arcsec\\ from HOPS 71 that is detected\nin 2MASS and {\\it Spitzer} data, but not by PACS; this object, HOPS 72, is\nlikely an extragalactic object (see Appendix \\ref{exgal_not_modeled}) that\ncould contaminate the IRS fluxes. Thus, in this case, wavelength-dependent\ncontamination by a companion could explain the discrepancies observed in\nthe SED.\n\nAnother example is HOPS 124, which is a deeply embedded Class 0 protostar. \nFor this object, the mismatch between IRS and IRAC and MIPS fluxes \ndecreases with increasing wavelength (from a factor of 2.5 to a factor of 1.4); \nfor the SED fit, the IRS spectrum was scaled by 0.7 to match the MIPS 24 \n$\\mu$m flux. As with HOPS 71, there is a nearby source that could contaminate \nsome of the fluxes, especially at shorter wavelengths: HOPS 125, a flat-spectrum \nsource, lies 9.8\\arcsec\\ from HOPS 124 and is brighter than HOPS 124 out to \n$\\sim$ 20 $\\mu$m, but then much fainter at longer wavelengths. The best-fit \nmodel of HOPS 124 ($R=2.43$) matches the mid- to far-IR photometry and \nalso most of the IRS spectrum well.\n\nAs an example of a probably variable flat-spectrum source, HOPS 132 \nhas IRAC fluxes that lie a factor of 1.3-1.7 above those of IRS and a \nMIPS 24 $\\mu$m flux that is a factor of 0.6 lower. It does not have a close\ncompanion; the nearest HOPS source, HOPS 133, is 27\\arcsec\\ away.\nThe IRS spectrum was not scaled, and since the SED fitter gave more \nweight to the spectrum, it is fit well, but the IRAC photometry is underestimated\nand the MIPS photometry overestimated. Nonetheless, the $R$ value\nof the best fit is 2.87.\n\nOverall, the SED fits of objects that are likely variable or suffer from some\ncontamination are less reliable, but it is not always clear from the $R$ \nvalue of the best fit. 
The SED fitting procedures assume that the protostars \nare not variable, so when large mismatches between different data sets are \npresent, the fit will appear discrepant with at least some of the observed\ndata points, but the $R$ value would not end up particularly high \nif, e.g., the IRS spectrum was fit exceptionally well. However, given the\ndata sets we have for these protostars, our SED fits will still yield the best \npossible estimate for the protostellar parameters describing these systems.\n\n \n\\subsection{Overview of Derived Parameters}\n\\label{results_overview}\n\nThe histogram of best-fit $\\rho_{1000}$ values (which is the density of the \nenvelope at 1000 AU; see Section \\ref{model_parameters}) is shown in \nFigure \\ref{rho1000_histo}.\nThe median value of the distribution amounts to $5.9 \\times 10^{-19}$ \ng cm$^{-3}$; this corresponds to a $\\rho_1$ value of $1.9 \\times \n10^{-14}$ g cm$^{-3}$. There is a spread in values: 69 objects have \ndensities $\\rho_{1000}$ smaller than $5.0 \\times 10^{-20}$ g cm$^{-3}$\n(6 of them have actually no envelope), 89 fall in the $5.0 \\times\n10^{-20}$ to $5.0 \\times 10^{-19}$ g cm$^{-3}$ range, 96\nare between $5.0 \\times 10^{-19}$ and $5.0 \\times 10^{-18}$ g \ncm$^{-3}$, 60 between $5.0 \\times 10^{-18}$ and $5.0 \\times \n10^{-17}$ g cm$^{-3}$, and 16 have $\\rho_{1000}$ values\nlarger than $5.0 \\times 10^{-17}$ g cm$^{-3}$.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{Rho1000_histogram.eps}\n\\caption{Histogram of the envelope reference density $\\rho_{1000}$ \nof the best fits for the 330 targets in our sample. 
\n\\label{rho1000_histo}}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{Menv_histogram.eps}\n\\caption{Histogram of the envelope mass within 2500 AU derived for\nthe best fits for the 330 targets in our sample.\n\\label{Menv_histo}}\n\\end{figure}\n\nWe also calculated the envelope mass ($M_{env}$) within 2500 AU for \nthe best-fit models (see Figure \\ref{Menv_histo} for their distribution). \nThe 2500 AU radius is close to half the FWHM of the PACS 160 $\\mu$m \nbeam at the distance of Orion (i.e., $\\sim$ 6\\arcsec), and thus roughly \nrepresents the spatial extent over which we measure the SEDs.\nThis envelope mass is determined from the integrated envelope density \nof our best-fit models, with allowances made for outflow cavities, and is thus \nonly valid in the context of our models. The median envelope mass within \n2500 AU amounts to 0.029 $M_{\\odot}$. The majority of protostars have \nmodel-derived masses in the inner 2500 AU of their envelopes below \n0.1~$M_{\\odot}$; just 22 objects have $M_{env}$ ($<$ 2500 AU) larger than \n1.0~$M_{\\odot}$. Of the 330 modeled objects, 291 have $M_{env}$ ($<$ 2500 AU)\nsmaller than 0.5 $M_{\\odot}$\\ (6 of these 291 objects have no envelope). \n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{Ltot_histogram.eps}\n\\caption{Histogram of the total luminosities of the best fits for the 330 targets \nin our sample. \n\\label{Ltot_histo}}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{Rdisk_histogram.eps}\n\\caption{Histogram of the disk radii of the best fits for the 330 targets \nin our sample. \n\\label{Rdisk_histo}}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{Cavity_histogram.eps}\n\\caption{Histogram of the cavity opening angles of the best fits for the \n330 targets in our sample. 
\n\\label{Cav_histo}}\n\\end{figure}\n\n\\begin{figure*}[!]\n\\centering\n\\includegraphics[scale=0.7]{Rho1000_and_cavity_histogram.eps}\n\\includegraphics[scale=0.7]{Ltot_and_cavity_histogram.eps}\n\\includegraphics[scale=0.7]{Rdisk_and_cavity_histogram.eps}\n\\caption{Histograms of the envelope reference density $\\rho_{1000}$ \n({\\it left}), the total luminosity ({\\it middle}), and the disk radius ({\\it right})\nof the best fits grouped by cavity opening angles. \n\\label{Pars_cav_histo}}\n\\end{figure*}\n\nFigure \\ref{Ltot_histo} contains the histogram of the total luminosities\nderived from the best-fit models. These luminosities consist of the stellar,\ndisk accretion, and accretion shock components. The median total luminosity \namounts to 3.02 $L_{\\odot}$, while the values cover four orders of magnitude, \nfrom 0.06 $L_{\\odot}$\\ (for HOPS 336) to 607 $L_{\\odot}$\\ (for HOPS 288 and 361). \nSince the minimum and maximum values for the total luminosity \nin our grid amount to 0.1 and 303.5 $L_{\\odot}$, respectively, and our scaling \nfactors range from 0.5 to 2.0, our fitting procedure can return best-fit\nluminosities that range from 0.05 to 607 $L_{\\odot}$. Thus, two protostars \nreach the upper limit allowed for total luminosities in our grid; it is possible \nthat even better fits could be achieved by increasing the luminosity further. \n\nFrom the distribution of best-fit outer disk radii in Figure \\ref{Rdisk_histo}, it is \napparent that the most common best-fit value is a small disk radius of only 5 AU. \nSince the disk outer radius is the centrifugal radius in our models, \ninfalling material from the envelope tends to accumulate close to the star \nfor most sources. Thus, the disk radius is tied to the envelope structure; \na small centrifugal radius implies higher envelope densities at smaller radii\nand a less flattened envelope structure. 
The median disk radius is 50 AU, \nbut the number of objects with disk radii $\\geq$ 50 AU is roughly evenly \nsplit among the values of 50, 100, and 500 AU.\n\nThe distribution of best-fit cavity opening angles is displayed in Figure \n\\ref{Cav_histo}. Most protostars seem to have either very small (5\\degr) \nor very large (45\\degr) cavities; the median value is 25\\degr.\nWhen dividing the envelope densities by cavity opening angle (see Figure\n\\ref{Pars_cav_histo}, left column), differences emerge: the distributions of \n$\\rho_{1000}$ values are significantly different when comparing objects \nwith $\\theta$=5\\degr\\ and $\\theta \\geq$35\\degr, objects with \n$\\theta$=15\\degr\\ and $\\theta$ $\\geq$ 25\\degr, and objects with \n$\\theta$=25\\degr\\ and $\\theta$=45\\degr. Kolmogorov-Smirnov (K-S)\ntests yield probabilities of $\\lesssim$ 0.015 that these subsamples are \ndrawn from the same parent population. Thus, there seems to be a \ndifference in the distribution of envelope densities between the best-fit \nmodels with smaller cavity opening angles and those with larger cavities. \nProtostars with larger cavities ($\\geq$ 35\\degr) tend to have higher envelope \ndensities (their median $\\rho_{1000}$ values are about an order of magnitude \nlarger compared to objects with cavities $\\leq$ 15\\degr). \n\nFigure \\ref{Pars_cav_histo} (middle column) also shows the distribution \nof total luminosities for the different cavity opening angles. The only \nsignificant difference can be found for the $\\theta$=5\\degr\\ histogram \nas compared to the histograms for larger $\\theta$ values (K-S test\nsignificance level $\\lesssim$ 0.03); the luminosities of models with \n$\\theta$=5\\degr\\ have a different distribution, and also their median \nvalue is 1.45 $L_{\\odot}$, as compared to $\\sim$ 3-5 $L_{\\odot}$\\ for the models\nwith larger cavities. 
So, protostars with small cavities seem to have lower\ntotal luminosities.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.55]{Inc_histogram.eps}\n\\caption{Histogram of the inclination angles of the best fits for the 330 \ntargets in our sample. The green dashed histogram represents the \ndistribution of uniformly (randomly) distributed inclination angles.\n\\label{Inc_histo}}\n\\end{figure}\n\nThe distribution of centrifugal radii for different cavity opening angles \n(right column in Figure \\ref{Pars_cav_histo}) shows that, independent of \ncavity size, most objects have $R_{disk}$ = 5 AU. However, the\ndistribution among the four different disk radii becomes flatter for \nthe largest cavity opening angles; the histograms for $\\theta$=35\\degr\\ \nand $\\theta$=45\\degr\\ are very similar (K-S test significance level of\n0.98). There is also no significant difference (K-S test values $>$ 0.075)\nbetween the $\\theta$=15\\degr\\ and $\\theta$=25\\degr\\ histograms and \nbetween the $\\theta$=5\\degr\\ and $\\theta \\geq $ 35\\degr\\ histograms. \nThe distributions of disk radii for the other cavity opening angles are all \ndifferent from one another (K-S test significance levels $<$ 0.015). \nOverall, Figure \\ref{Pars_cav_histo} shows that protostars best fit by \nmodels with large cavity opening angles are also fit by models with \nhigher envelope densities and larger centrifugal radii. \n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.54]{Cumulative_inc.eps}\n\\caption{Cumulative distribution of the inclination angles of the best fits,\nnormalized by the total number of fits ({\\it solid line}), compared to the \ncumulative probability of finding an inclination angle below a given value\nfor randomly distributed inclinations ({\\it green dashed line}). \n\\label{Inc_CDF}}\n\\end{figure}\n\nIn Figure \\ref{Inc_histo}, we show the distribution of the inclination angles\nfor the best-fit models. 
There is a clear concentration of models in the \n60\\degr$-$70\\degr\\ range; the median inclination angle is 63\\degr.\nThis median value is close to 60\\degr, which is where the probability for \nisotropically distributed inclination angles reaches 50\\% (i.e., the probability \nof observing an inclination angle less than 60\\degr\\ is the same as the \nprobability of observing i$>$60\\degr). However, the details of the\ndistributions differ.\nThe cumulative probability of finding an inclination angle less than a certain value, \n$i_c$, is $1-\\cos(i_c)$, assuming a random distribution of inclination angles. For\ninclination angles $i_1$ and $i_2$, the probability for $i_1 < i < i_2$ is\n$\\cos(i_1)-\\cos(i_2)$. Thus, since the inclination angles in our model grid were \nchosen to be equally spaced in $\\cos(i)$ (there are five values $<$60\\degr\\ and \nfive values $>$60\\degr), one would expect a flat distribution in Figure \\ref{Inc_histo} \nif the best-fit inclination angles were randomly distributed (see the green dashed\nhistogram). However, we find a distribution peaked at 63\\degr\\ and 70\\degr. \nThis can also be seen in Figure \\ref{Inc_CDF}, where we compare our observed \ncumulative distribution of inclination angles to that of randomly distributed ones. \nOur distribution shows a deficit at inclination angles below 60\\degr\\ and \nis just slightly higher at large inclination angles. A K-S test of the two \ndistributions yields a 5.6\\% chance that they are drawn from the same parent\ndistribution.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.59]{Rho1000_and_inc_histogram.eps} \n\\includegraphics[scale=0.59]{Ltot_and_inc_histogram.eps}\n\\includegraphics[scale=0.59]{Cavity_and_inc_histogram.eps}\n\\caption{Histograms of the envelope reference density $\\rho_{1000}$\n({\\it left}), total luminosity ({\\it middle}), and cavity opening angles \n({\\it right}) of the best fits divided by bins of inclination angles. 
\n\\label{Pars_inc_histo}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.37,angle=90]{Ltot_vs_Lbol_and_inc.eps}\n\\includegraphics[scale=0.37,angle=90]{Ltot_vs_Lbol_and_Av.eps}\n\\caption{Ratio of the total luminosity from the best fits and the bolometric \nluminosity derived from the observed SEDs versus the inclination angle ({\\it \nleft}) and foreground extinction ({\\it right}) of the best fits. In the left\npanel, the open stars represent the median ratios at each inclination angle.\nIn the right panel, the open circles represent the median ratios for eight\nbins in $A_V$ values, represented by the horizontal lines bisecting\neach circle.\n\\label{Ltot_Lbol}}\n\\end{figure*}\n\nTo examine whether the distribution of envelope parameters changes with\ninclination angle (which could imply a degeneracy), Figure \\ref{Pars_inc_histo}\nshows the reference envelope density $\\rho_{1000}$, the total luminosity, \nand the cavity opening angle binned by three ranges of inclination angles.\nNone of the three model parameters show a significantly different distribution \nfor any of the inclination bins (K-S test significance levels are $\\gtrsim$ 0.1,\nexcept for the cavity opening angles for the lowest and middle inclination\nrange, for which the K-S test significance value is 0.02). \nThe median $\\rho_{1000}$ values for the $i=$18\\degr--41\\degr, 49\\degr--63\\degr, \nand 69\\degr--87\\degr\\ inclination bins are all $5.9 \\times 10^{-19}$ g cm$^{-3}$. \nEven though not shown in Figure \\ref{Pars_inc_histo}, the objects whose best-fit\nmodel does not include an envelope are only found at $i \\geq$ 49\\degr. 
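The isotropic-orientation expectation used in these inclination comparisons, namely that $P(i<i_c)=1-\cos(i_c)$ and that a grid equally spaced in $\cos i$ has a flat expected histogram for randomly oriented sources, can be verified with a short sketch (the ten-value grid here is illustrative):

```python
import numpy as np

# Sketch of the isotropic-inclination expectation: for random
# orientations, P(i < i_c) = 1 - cos(i_c), so the cumulative probability
# reaches 50% at i_c = 60 deg, and a grid equally spaced in cos(i)
# assigns equal probability to each grid value.

def prob_below(i_c_deg):
    """Cumulative probability of inclination < i_c for random orientations."""
    return 1.0 - np.cos(np.radians(i_c_deg))

p60 = prob_below(60.0)                 # 0.5
cos_edges = np.linspace(1.0, 0.0, 11)  # 10 bins, equally spaced in cos(i)
bin_prob = -np.diff(cos_edges)         # each bin carries 10% probability
```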
It is\nnoteworthy that protostars with the highest envelope densities do not have \ninclination angles in the 69\\degr--87\\degr\\ range; it is not clear whether this \nis an observational bias, whether our observed sample does not contain high-density, \nedge-on protostars, or whether this is due to biases in the fitting procedure and\/or \nmodel grid.\nThe median values for the total luminosity do not differ by much for the \ndifferent bins of inclination angle, increasing from 2.9 to 4.1 $L_{\\odot}$\\ from\nthe lowest to the middle inclination range and then decreasing to 2.0 $L_{\\odot}$\\\nfor the highest inclination angles. The few protostars with very high $L_{tot}$\nvalues have large inclination angles ($i \\geq$ 49\\degr).\nFinally, the distribution of cavity opening angles is quite similar for different\nranges in inclination, except for a somewhat larger number of $\\theta =$ 45\\degr\\\nvalues at intermediate inclination angles. Half the objects in the $i=$18\\degr--41\\degr\\ \nand 69\\degr--87\\degr\\ inclination bins have $\\theta \\leq $ 15\\degr\\ (with the most\ncommon value 5\\degr), while almost half the objects at intermediate inclination \nangles have $\\theta \\geq $ 35\\degr\\ (the most common value is 45\\degr). \n\nIn Figure \\ref{Ltot_Lbol}, we show ratios of the total and bolometric luminosities\nas a function of inclination angle and foreground extinction ($i$ and $A_V$ are\nadopted from the best model fits). The total luminosity is the intrinsic luminosity \nfrom the best-fit model of each object, while the bolometric luminosity is derived \nby integrating the fluxes of the observed SED. It is expected that $L_{tot}$ is \nhigher than $L_{bol}$ for objects seen at higher inclination angles, since for \nthese objects a large fraction of the emitted flux is not directed toward the \nobserver (and thus deriving bolometric luminosities from observed fluxes will \nunderestimate the intrinsic source luminosity). 
Conversely, objects seen more \nface-on should have lower $L_{tot}$ values compared to $L_{bol}$. Our data \nand model fits yield $L_{tot}$ values that are usually higher than the $L_{bol}$\nvalues measured from the SED; the discrepancy is larger for the more \nhighly inclined sources. The median $L_{tot}\/L_{bol}$ ratio is 1.5 for \nprotostars with inclination angles in the 18\\degr--41\\degr\\ range, 2.5\nfor the i=49\\degr--63\\degr\\ range, and 3.5 for inclination angles \n$\\geq$ 69\\degr. \nThe fact that $L_{tot}>L_{bol}$ even for $i=$18\\degr--41\\degr\\ could\nbe related to the typically smaller cavity opening angles for this range of\ninclination angles (see Figure \\ref{Pars_inc_histo}); less flux, especially at\nshorter wavelengths, is detected since the opacity along the line of sight \nis still high due to the small cavities. \n\nForeground extinction also plays a role in increasing the $L_{tot}$\/$L_{bol}$\nratio. The median ratio of these luminosities increases from 1.8 for the $A_V$=\n0-5 mag range to 5.0 for $A_V$=25-30; it decreases somewhat for the next\n$A_V$ bin, but reaches 5.9 at $A_V$=40-50 (the 23 objects with $A_V >$ 50, \nnot shown in Figure \\ref{Ltot_Lbol}, have a median $L_{tot}$\/$L_{bol}$ ratio of 8.2). \nAmong the 22 objects with best-fit $A_V$ values of 0-5 mag and inclination angles \n$\\leq$ 50\\degr, only four have $L_{tot}\/L_{bol}$ ratios that are larger than \n1.5 (they are HOPS 57, 147, 199, and 201; in most cases the model overestimates \nthe near-IR emission). \n\n\n\\subsection{Envelope Parameters for Different SED Classes}\n\\label{par_classes}\n\nFigures \\ref{Rho_class_histo}--\\ref{AV_class_histo} divide the histograms of the \nbest-fit reference density $\\rho_{1000}$, inclination angle, cavity opening angle, \ntotal luminosity, disk radius, and foreground extinction, respectively, by protostar \nclass. 
As explained in Section \\ref{SEDs}, we divided our targets into Class 0, \nClass I, flat-spectrum, and Class II objects based on their mid-infrared (4.5-24 \\micron) \nspectral index and bolometric temperature (see also Table A\\ref{bestfit}). Thus, \nClass 0 and I protostars have a spectral index $>$ 0.3, and Class 0 protostars have \n$T_{bol}$ values $<$ 70 K, but, as mentioned in Section \\ref{SEDs}, there are \na few protostars whose spectral index or $T_{bol}$ value places them very close \nto the transition region between Class 0 and I or between Class I and flat spectrum. \nGiven that our sample contains just eleven Class II pre-main-sequence stars, we did \nnot include them in the following histograms; they will be discussed in section \n\\ref{disk_sources}.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65,angle=90]{Rho1000_histogram_by_class.eps}\n\\caption{Histograms of the envelope reference density $\\rho_{1000}$ \nof the best fits for the different SED classes. \n\\label{Rho_class_histo}}\n\\end{figure*}\n\nThe distributions of reference densities (Figure \\ref{Rho_class_histo}) are \ndifferent for all SED classes; none are consistent with being drawn from the \nsame parent population (K-S test significance level $<$ 0.01). Overall, Class 0 \nprotostars have higher envelope densities than Class I and flat-spectrum sources; \nthe median $\\rho_{1000}$ values decrease from 5.9 $\\times 10^{-18}$ g cm$^{-3}$ \nto 2.4 $\\times 10^{-19}$ g cm$^{-3}$ to 1.2 $\\times 10^{-19}$ g cm$^{-3}$ for \nthese three groups. 
The lower and upper quartiles for $\\rho_{1000}$ are \n1.8 $\\times 10^{-18}$ and 1.8 $\\times 10^{-17}$ g cm$^{-3}$ for the Class 0\nprotostars, and 2.4 $\\times 10^{-20}$ and 1.2 $\\times 10^{-18}$ g cm$^{-3}$ for the \nClass I and flat-spectrum objects.\nWe will discuss some implications of these differences in derived envelope\ndensities in section \\ref{SED_class_properties}.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65,angle=90]{Inc_histogram_by_class.eps}\n\\caption{Histograms of the inclination angles of the best fits for the different \nSED classes. \n\\label{Inc_class_histo}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65, angle=90]{Cavity_histogram_by_class.eps}\n\\caption{Histograms of the cavity opening angles of the best fits for the different \nSED classes. \n\\label{Cav_class_histo}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65, angle=90]{Ltot_histogram_by_class.eps}\n\\caption{Histograms of the total luminosity of the best fits for the different \nSED classes. \n\\label{Ltot_class_histo}}\n\\end{figure*}\n\nFor the inclination angles (Figure \\ref{Inc_class_histo}), the distributions \nare significantly different for all protostellar classes, too (K-S test significance \nlevel $\\ll$ 0.01). As was shown in Figure \\ref{Inc_histo}, a random distribution of \ninclination angles would result in equal numbers of protostars at each value; \nthere is a deficit of Class 0 and Class I protostars at $i \\lesssim$ 60\\degr, and \nthere are also few Class I protostars and hardly any flat-spectrum sources \nat the highest inclination angles. \nThe median inclination angle is highest for Class 0 protostars (70\\degr), \ndecreases somewhat for Class I protostars (63\\degr), and is lowest for \nflat-spectrum sources (57\\degr); thus, as with the envelope density, the \nmedian inclination angle decreases from Class 0 to flat-spectrum sources. 
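The class-to-class comparisons quoted throughout this section rest on two-sample K-S tests. As a minimal, self-contained sketch of the underlying statistic (the maximum distance between two empirical CDFs; the significance levels quoted in the text additionally require the K-S probability distribution, e.g. via `scipy.stats.ks_2samp`), using hypothetical samples rather than the actual HOPS best-fit values:

```python
def ks_statistic(sample1, sample2):
    """Two-sample K-S statistic: the maximum absolute difference
    between the empirical CDFs of the two samples."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    d_max = 0.0
    for x in sorted(set(s1) | set(s2)):
        cdf1 = sum(1 for v in s1 if v <= x) / n1
        cdf2 = sum(1 for v in s2 if v <= x) / n2
        d_max = max(d_max, abs(cdf1 - cdf2))
    return d_max

# Hypothetical inclination-angle samples (degrees) for two SED classes;
# the grid values are discrete, so repeated values are common.
class0_like = [63, 70, 70, 76, 76, 81, 81, 87]
flat_like = [18, 32, 41, 41, 49, 57, 57, 63]
print(ks_statistic(class0_like, flat_like))  # 0.875: very different CDFs
```

A large statistic (and correspondingly small significance level) argues against the two samples being drawn from the same parent population, which is the sense in which the distributions are called "different" here.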
\n\nIn the distributions of cavity opening angles (Figure \\ref{Cav_class_histo}), \nsignificant differences can be found between Class 0 and Class I protostars \nand between Class I protostars and flat-spectrum sources (K-S test significance \nlevel $\\ll$ 0.01). The median cavity opening angle is 15\\degr\\ for the Class I \nprotostars, but 25\\degr\\ for the other two classes. About 40\\% of Class I protostars \nhave $\\theta$=5\\degr, while the distribution among the different cavity opening \nangles is flatter for the other two object classes. The large fraction of Class I\nprotostars with small cavities could be the result of degeneracy in model parameters \n(see section \\ref{SED_class_properties}) or our assumptions on envelope geometry \n(see section \\ref{Model_problems}). There are notably few flat-spectrum sources \nwith a 5\\degr\\ cavity opening angle; most of them have cavity opening angles of \n15\\degr\\ or 45\\degr.\n\nWhen comparing the total luminosities for the different SED classes\n(Figure \\ref{Ltot_class_histo}), the distribution of $L_{tot}$ values\nis different for the Class 0 protostars when compared to the other two\nclasses (K-S test significance level $<$ 0.015), but similar for Class I \nprotostars and flat-spectrum sources. The median total luminosity for \nClass 0 protostars is 5.5 $L_{\\odot}$, compared to 2.0 $L_{\\odot}$\\ for Class I \nprotostars and 3.0 $L_{\\odot}$\\ for flat-spectrum sources. Both Class 0 and I \nprotostars cover close to the whole range of $L_{tot}$ values in the model grid \n($\\sim$ 0.06-600 $L_{\\odot}$), while flat-spectrum sources span a more limited \nrange, from 0.1 to 316 $L_{\\odot}$.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65, angle=90]{Rdisk_histogram_by_class.eps}\n\\caption{Histograms of the disk radii of the best fits for the different \nSED classes. 
\n\\label{Rdisk_class_histo}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65, angle=90]{AV_histogram_by_class.eps}\n\\caption{Histograms of the foreground extinction of the best fits for the different \nSED classes. \n\\label{AV_class_histo}}\n\\end{figure*}\n\nThe distribution of centrifugal radii for the whole sample showed a preference\nfor 5 AU (see Figure \\ref{Rdisk_histo}). When separating the best-fit disk\nradii by protostellar class (Figure \\ref{Rdisk_class_histo}), it is clear that the\ntrend for small centrifugal radii is driven by the flat-spectrum sources \nand also Class I protostars. The fraction of Class 0 protostars with $R_{disk}$=\n5 AU is 17\\%; it increases to 46\\% and 73\\% for Class I protostars and \nflat-spectrum sources, respectively. The median disk radius decreases \nfrom 100 AU for Class 0 protostars to 50 AU for Class I protostars to 5 AU for \nflat-spectrum sources. All three histograms are significantly different from \none another (K-S test significance level $\\ll$ 0.001). The unexpectedly small\ncentrifugal radii for Class I protostars and flat-spectrum sources could point\nto parameter degeneracies (see section \\ref{SED_class_properties}) or the\nneed to revise certain model assumptions (see section \\ref{Model_problems}).\n\nFinally, the distribution of best-fit foreground extinction values (Figure \n\\ref{AV_class_histo}) is similar for all three object classes (K-S test significance \nlevel $>$ 0.03). Even the median values are close: $A_V$=9.2 for Class 0 \nprotostars, $A_V$=8.9 for Class I protostars, and $A_V$=10.1 for flat-spectrum\nsources. Most objects are fit with relatively low foreground extinction values.\nAs can be seen from Figure \\ref{AV_models_maps}, the majority of protostars\nhave best-fit $A_V$ values well below the maximum $A_V$ values determined\nfrom column density maps, which were used as the largest allowed $A_V$\nvalues for the SED fitter. 
The ratio of model-derived $A_V$ to observationally\nconstrained maximum $A_V$ is lower than 0.5 for about 60\\% of the sample.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.53]{AV_model_AV_maps.eps}\n\\caption{Foreground extinction values $A_V$ from the best-fit models versus\nthe maximum $A_V$ value determined from column density maps of Orion.\nThe dashed line indicates where the two $A_V$ values are equal.\n\\label{AV_models_maps}}\n\\end{figure}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65, angle=90]{Rho1000_vs_AV_by_class.eps}\n\\caption{Best-fit $\\rho_{1000}$ values versus the foreground extinction for the different \nSED classes. Note that there are a few objects at $A_V > 75$, but they are not shown\nfor overall clarity of the figure.\n\\label{Rho_AV_class}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.65, angle=90]{Rho1000_vs_inc_by_class.eps}\n\\caption{Best-fit $\\rho_{1000}$ values versus inclination angle for the different \nSED classes. The size of the plotting symbol increases with the number of objects\nhaving the same ($i$, $\\rho_{1000}$) combination; the legend in the leftmost panel\nshows which symbol size corresponds to which number of objects. \n\\label{Rho_inc_class}}\n\\end{figure*}\n\nIn Figure \\ref{Rho_AV_class}, we plot the reference densities $\\rho_{1000}$\nversus the foreground extinction for Class 0, Class I, and flat-spectrum sources.\nAs was already seen in Figure \\ref{AV_class_histo}, the extinction along the line\nof sight is similar for all three classes, with most objects in the $A_V \\sim$ 0-30\nregime. Class 0 protostars, which have higher envelope densities, tend to \nhave lower $A_V$ values from foreground extinction; the highest-density\nenvelopes are spread among a wide range of $A_V$ values. The result is\nsimilar for Class I protostars. 
Flat-spectrum sources display a range in \nenvelope densities at various foreground extinction values; the lowest-density\nenvelopes typically have $A_V < 20$.\nThus, foreground extinction does not seem to affect the classification of\nprotostars. This result is also supported by the statistical analysis of \\citet{stutz15},\nwho found that, for $A_V$ values up to 35, the misclassification of a Class I\nprotostar as a Class 0 protostar due to foreground extinction (which results in a\nlower $T_{bol}$) is low.\n\n\\begin{deluxetable*}{l|ccc}\n\\tablewidth{0.9\\linewidth}\n\\tablecaption{Median Best-Fit Parameter Values for the Three Protostellar Classes\n\\label{Median_par}}\n\\tablehead{\n\\colhead{Parameter} & \\colhead{Class 0} & \\colhead{Class I} & \n\\colhead{Flat-spectrum}}\n\\startdata\n$L_{tot}$ & 5.5 $L_{\\odot}$ & 2.0 $L_{\\odot}$ & 3.0 $L_{\\odot}$ \\\\\n$\\rho_{1000}$ & 5.9 $\\times 10^{-18}$ g cm$^{-3}$ & \n2.4 $\\times 10^{-19}$ g cm$^{-3}$ & 1.2 $\\times 10^{-19}$ g cm$^{-3}$ \\\\\n$\\theta$ & 25\\degr & 15\\degr & 25\\degr \\\\\n$R_{disk}$ & 100 AU & 50 AU & 5 AU \\\\\n$i$ & 70\\degr & 63\\degr & 57\\degr \\\\\n$A_V$ & 9.2 & 8.9 & 10.1 \\\\\n\\enddata\n\\end{deluxetable*}\n\n\\begin{figure*}[!]\n\\centering\n\\includegraphics[scale=0.5, angle=90]{Median_model_SEDs.eps}\n\\caption{Model SEDs for Class 0 protostars ({\\it red}), Class I protostars\n({\\it green}), and flat-spectrum sources ({\\it blue}) with parameter values \nequal to the median values for each SED class (see Table \\ref{Median_par}).\n\\label{median_SEDs}}\n\\end{figure*}\n\nWe found differences in the best-fit envelope densities and inclination angles\nfor the various protostellar classes.\nThe result that Class 0 protostars tend to have larger inclination angles and \nenvelope densities compared to Class I and flat-spectrum objects can also\nbe seen in Figure \\ref{Rho_inc_class}. 
There are very few Class 0 protostars\nwith low inclination angles; most have relatively high density and $i>$60\\degr. \nClass I protostars are best fit by somewhat lower inclination angles than Class 0 \nprotostars and also lower $\\rho_{1000}$ values. The best-fit reference density\nfor Class I protostars decreases as the inclination angle increases; thus, higher-density\nprotostars are typically classified as Class I protostars only if they are not seen at\nclose to edge-on orientations.\nFlat-spectrum sources are spread out in density--inclination space, but intermediate \ninclination angles and low envelope densities are common. There is a relatively\nlarge number of objects at $i=$18\\degr\\ and a deficit of objects at high inclination\nangles. The highest-density flat-spectrum sources are seen at inclination angles \n$<$ 50\\degr, while the lower-density objects cover almost the full range of \ninclination angles.\n\nThe median parameter values we determined from the best fits for the \nClass 0, Class I, and flat-spectrum sources (see Table \\ref{Median_par}) \ncan be used to show representative median SEDs for each protostellar class. \nIn Figure \\ref{median_SEDs}, we show model SEDs whose parameter values \nare equal to the median values found for each of the three protostellar classes. \nIt is apparent that the large envelope density and higher inclination angle for \nClass 0 protostars cause a deep absorption feature at 10 $\\mu$m and a steeply \nrising SED in the mid- and far-IR, with a peak close to 100 $\\mu$m. In Class I \nprotostars, the SED is less steep and peaks at a shorter wavelength than \nthe median SED of Class 0 protostars. 
Flat-spectrum sources show the \nstrongest near-IR emission of the three protostellar classes; their median\nSED is very flat out to 70 $\\mu$m, but at longer wavelengths it is very \nsimilar in shape and flux level to that of Class I protostars.\n\n\n\\subsection{Estimating Parameter Uncertainties}\n\\label{deltaR}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.69, angle=90]{Inc_modes.eps}\n\\caption{Mode of the inclination angle of all models that lie within \n0.5, 1.0, 1.5, and 2 of the best-fit $R$ value (from left to right) versus \nthe best-fit inclination angle for all 330 objects in our sample. \nNote that for each data point, small random offsets in the x and y direction \nhave been applied to avoid overlap. Also, when two or more parameter \nvalues had the same frequency within a $\\Delta R$ bin (i.e., not a unique \nmode value), we computed the average of these values and used it for \nthe mode. The dashed line indicates where the mode and best-fit value \nare equal.\n\\label{inc_modes}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.69, angle=90]{Ltot_modes.eps}\n\\caption{Mode of the total luminosity of all models that lie within \n0.5, 1.0, 1.5, and 2 of the best-fit $R$ value (from left to right) versus \nthe best-fit total luminosity for all 330 objects in our sample. \n\\label{lum_modes}}\n\\end{figure*}\n\nGiven that the $R$ values are a measure of the goodness of fit in units of\nthe fractional uncertainty, we can use models that lie within a certain range\nof the best-fit $R$ value to estimate uncertainties for the various model \nparameters. For each modeled HOPS target, we tabulated the model\nparameters for all those models that lie within a difference of 0.5, 1.0, \n1.5, and 2.0 of the best-fit $R$. 
We then computed the mode (i.e., the \nvalue with the highest frequency) for the inclination angle, total luminosity, \n$\\rho_{1000}$, cavity opening angle, outer disk radius, and $A_V$ in \neach of the $\\Delta R$ bins for each object. \nFor any given protostar, the models in each $\\Delta R$ bin span certain \nranges in parameter values; while the modes do not capture the full extent of \nthese ranges, they convey the most common value within each parameter \nrange. The farther away a mode is from the best-fit value, the more poorly \nconstrained the model parameter. Conversely, if a mode of a certain\nparameter is close to or matches the best-fit value, especially for $\\Delta R=$ \n1.5 or 2, that particular model parameter is well constrained.\nIn Figures \\ref{inc_modes} to \\ref{AV_modes} we plot the mode versus the best-fit \nvalue for six model parameters and four $\\Delta R$ bins for all 330 targets \nin our sample. The larger $\\Delta R$, the larger the spread in modes is expected \nto be for each parameter value. \n\nFor example, Figure \\ref{inc_modes} shows that even when considering \nall models with an $R$ value of up to 2 larger than the best-fit $R$ \n($\\Delta R=$ 2), the inclination angle for objects with a best-fit $i$ of \n18\\degr\\ is well constrained; most modes lie at $i=$ 18\\degr, too, and \nonly a few modes can be found at larger inclination angles. However, \nobjects with best-fit $i$ values of 32\\degr\\ or 41\\degr\\ typically can also \nbe fit by models with lower inclination angles (the majority of modes lies \nbelow the line where mode and best-fit value are equal). 
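The tie-averaged mode computation described above can be sketched as follows; the model list, parameter values, and $R$ values are hypothetical stand-ins for one source's actual grid fits:

```python
from collections import Counter

def delta_r_mode(values):
    """Mode of a parameter over the selected models; if two or more values
    tie for the highest frequency, their average is used instead."""
    counts = Counter(values)
    top = max(counts.values())
    tied = [v for v, c in counts.items() if c == top]
    return sum(tied) / len(tied)

def modes_by_delta_r(models, param, bins=(0.5, 1.0, 1.5, 2.0)):
    """For each Delta-R bin, take all models whose R lies within Delta-R
    of the best-fit R and return the (tie-averaged) mode of `param`."""
    best_r = min(m["R"] for m in models)
    return {dr: delta_r_mode([m[param] for m in models if m["R"] - best_r <= dr])
            for dr in bins}

# Hypothetical fits for one source: R statistic and inclination (degrees).
models = [{"R": 2.0, "inc": 63}, {"R": 2.3, "inc": 63}, {"R": 2.4, "inc": 70},
          {"R": 3.2, "inc": 70}, {"R": 3.8, "inc": 18}]
print(modes_by_delta_r(models, "inc"))
# {0.5: 63.0, 1.0: 63.0, 1.5: 66.5, 2.0: 66.5}
```

In this toy case the mode stays near the best-fit inclination in every $\Delta R$ bin, which by the reasoning above would mark the parameter as well constrained.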
Inclination \nangles $\\gtrsim$ 63\\degr\\ are better constrained, since their modes \nmostly lie at high inclination angle values, but there are protostars \nwith modes of $i=$18\\degr, too.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.69, angle=90]{Rho1000_modes.eps}\n\\caption{Similar to Figure \\ref{inc_modes}, but for the reference density\n$\\rho_{1000}$.\n\\label{rho1000_modes}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.69, angle=90]{Cavity_modes.eps}\n\\caption{Similar to Figure \\ref{inc_modes}, but for the cavity opening\nangle.\n\\label{cavity_modes}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.69, angle=90]{Rdisk_modes.eps}\n\\caption{Similar to Figure \\ref{inc_modes}, but for the outer disk radius ($=R_c$).\n\\label{Rdisk_modes}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.69, angle=90]{AV_modes.eps}\n\\caption{Similar to Figure \\ref{lum_modes}, but for the foreground\nextinction.\n\\label{AV_modes}}\n\\end{figure*}\n\nThe modes for the total luminosity (Figure \\ref{lum_modes}) show a small \nspread for models within $\\Delta R$=0.5, but the spread increases as $\\Delta R$ \nincreases, with some objects displaying variations of up to an order of magnitude \nin $L_{tot}$.\nAs illustrated in Figure \\ref{rho1000_modes}, the reference density \n$\\rho_{1000}$ is usually well constrained; however, as $\\Delta R$ increases, \nthe modes of the $\\rho_{1000}$ values are often lower than the best-fit \nvalues.\nFor the cavity opening angle (Figure \\ref{cavity_modes}), many models up \nto $\\Delta R$=2 have modes of $\\theta$=45\\degr, independent of the \nbest-fit value. Similarly for the centrifugal radius (Figure \\ref{Rdisk_modes}), \n$R_{disk}$=500 AU is a common mode. For all four disk radii, the modes \ntend to be larger than the best-fit values; in particular, objects with a best-fit \n$R_{disk}$ of 5 AU have a very uncertain disk radius. 
In general, it appears that \nour models do not constrain the disk radius and cavity opening angle well.\nThe foreground extinction (Figure \\ref{AV_modes}) displays a certain range \nof modes for each best-fit value, but objects with $A_V \\lesssim$ 20 typically\nhave more reliable $A_V$ values from their model fits. \n\nFigures \\ref{inc_modes} to \\ref{AV_modes} allow us to gauge general trends \nbetween best-fit values and modes for different model parameters. For results \non individual objects, we refer to Appendix \\ref{models_unc}, where we show \nplots of the difference between the modes and the best-fit values of the major \nmodel parameters for all modeled HOPS targets. In this way it is possible to \nestimate which models are better constrained and thus which objects have \nmore reliable SED fits. \nIn addition, Appendix \\ref{models_unc} includes contour plots of \n$R$ values for different pairs of model parameters for a few targets to \nillustrate typical parameter degeneracies, which also contribute to parameter\nuncertainties.\n\n\n\\section{Discussion} \n \n\\subsection{Deriving Envelope Parameters from a Model Grid}\n\nWe compared a grid of TSC models to each target by ranking the models \nusing a statistic, $R$, which measures the deviation between observed and \nmodel fluxes in logarithmic space. We did not model each source by further \nadjusting the model parameters, but instead identified the best-fit SED from \nour model grid. Thus, we are bound by the range and sampling of parameters \nchosen for the grid, and while we constructed the grid with the aim of covering \nthe typical parameter space for protostars, it is limited to discrete values. It is \nlikely that many protostars have best-fit parameters that would fall between those \nsampled by the grid, and a few objects could have parameter values that lie \nbeyond the limits set by the grid. 
In addition, TSC models are axisymmetric \nand have mostly smooth density and temperature profiles, and they do not\ninclude external heating. They assume a rotating, infalling envelope with \nconstant infall rate and with the gravitational force dominated by the central\nprotostar, but the true envelope structure is likely more complex. \nThe models would not apply to the collapse of a cloud in an initial filamentary \nor sheet-like geometry or to multiple systems with, e.g., more than one \noutflow cavity \\citep[e.g.,][]{hartmann96,tobin12}.\n\nDespite the relatively simple models that we use, many of the observed SEDs are \nfit remarkably well: 75\\% of the fits have $R<4$. In those cases, the continuum \ntraced by the IRS spectrum, the silicate absorption feature at 10 $\\mu$m, and \nthe PACS fluxes are all accurately reproduced by the model. Even many \nflat-spectrum sources, which often do not display any spectral features in \nthe mid-infrared and have an overall flat SED out to 30 or 70 $\\mu$m, frequently \nhave models that fit them very well. About 75\\% of Class I protostars and $\\sim$ \n70\\% of Class 0 protostars have $R < 4$, while close to 90\\% of flat-spectrum \nsources have $R$ values in this range. This validates the choice of parameter \nvalues for our model grid. \nAdditional constraints, like limits on foreground extinction or information on the \ninclination and cavity opening angles from scattered light images or mapping of \noutflows, would allow us to further test and refine the models. We have used limits \non the extinction in our analysis. Although {\\it Hubble Space Telescope (HST)} \nscattered light images have been used to constrain models for one HOPS \nprotostar \\citep{fischer14}, scattered light images are not available for many \nof our targets. We therefore chose to focus on fitting the SEDs of all of our \ntargets in a uniform way to a well-defined set of models. 
Future studies will \nincorporate scattered light images and compare the results to those from \nthe SED fits (J. Booker et al. 2016, in preparation).\n\nThe best-fit models from our grid for the HOPS targets both reproduce the SEDs \nand yield estimates for their protostellar parameters, mostly envelope properties. \nHowever, these are not necessarily unique fits to the data for three reasons. \nFirst, there are degeneracies in the model parameters; increasing the envelope \ndensity or inclination angle, or decreasing the cavity opening angle or disk radius, \nresults in a steeper mid-IR SED slope and deeper silicate features. Each of these \nparameters affects the SED differently (only the general trends are the same), and \nthe best fit for each object optimizes their combination. The next best fit, however, could \nbe a different combination of these parameters, especially if the SED is not \nwell constrained by observations (see Section \\ref{deltaR}). \nSecond, although the TSC models reproduce the observed SEDs, other models \nwith different envelope geometries may also be able to fit the same SEDs. The \nmodeling presented here is only valid in the context of the TSC models of single \nstars, and the resulting derived properties are only valid within that framework. \nLast, neglecting external heating could result in overestimated envelope densities\nand luminosities, with the most noticeable effects ($\\rho_{1000}$ and $L_{tot}$ too \nlarge by factors of a few) on low-luminosity sources exposed to strong radiation \nfields (see Section \\ref{model_ext_heat}). From the distribution of best-fit $L_{tot}$ \nvalues, we estimate that $\\sim$ 20\\% of HOPS targets in our sample could be \naffected by external heating. 
Even though we do not know the strength of \nexternal heating for each protostar, it is likely that external heating would only \nresult in relatively small changes in the derived envelope parameters for these \nprotostars.\n\n\n\\subsection{Envelope Properties and SED Classes}\n\\label{SED_class_properties}\n\nWhen comparing envelope parameters sorted by SED classes, we found that \nenvelope densities and inclination angles decrease from the sample of\nClass 0 protostars through that of Class I protostars to that of flat-spectrum objects. \nThe former is likely an evolutionary effect, while the latter confirms the results of \nprevious work \\citep[e.g.,][]{evans09} that the inclination angle has an important \neffect on the SED and that the evolutionary state of an object cannot be derived \nfrom SED slopes alone. Thus, there is a difference between the ``stage'' and \n``class'' of an object \\citep{robitaille06}; Stage 0 and I objects are characterized \nby substantial envelopes, Stage II objects are surrounded by optically thick disks, \nwith possibly some remnant infalling envelopes, and Stage III objects have \noptically thin disks. \n\nIn general, the trends we see among model parameters are a consequence \nof the definition of a protostar based on its SED: in order to be classified \nas a Class 0 or I object, a protostar is required to have a near- to mid-infrared \nSED slope larger than 0.3. A protostellar model with a small cavity opening \nangle, small centrifugal radius, and\/or high inclination angle will generate \nsuch an SED, since each of these increases the optical depth along the line of sight. \nModels with a large cavity will only yield a rising SED in the 2$-$40 $\\mu$m \nspectral range if their envelope density is large or the inclination angle is \nrelatively high. \n\nWe find that Class 0 protostars can be best fit not only by very high envelope \ndensities but also by moderately high envelope densities and large inclination \nangles. 
The bolometric temperature, which is used to separate \nClass~0 from Class~I protostars, is inclination dependent; some Class~I \nprotostars are shifted to the Class~0 regime if they are viewed more edge-on. \nThe higher-density Class~I protostars tend to have lower inclination angles (but \nstill $>$ 50\\degr); thus, their evolutionary stage could be similar to more \nembedded protostars that are seen edge-on and classified as Class 0 protostars. \nConversely, some Class~0 objects with large inclination angles, but lower \nenvelope densities, could be in a later evolutionary stage than typical Class~0 \nprotostars. Similarly, Class I protostars with large $i$ and low $\\rho_{1000}$ \nvalues could be edge-on Stage II objects (whose infrared emission is dominated \nby a disk). Finally, low-inclination Stage 0 and I protostars can appear \nas flat-spectrum sources \\citep{calvet94}.\n\n\\begin{figure*}[!]\n\\centering\n\\includegraphics[scale=0.58, angle=90]{Rho1000_vs_inc_cavity.eps}\n\\caption{Best-fit $\\rho_{1000}$ values versus inclination angle with the cavity\nsize indicated by the different symbol sizes and gray shades: symbols become\nlarger and lighter colored with increasing cavity size (5\\degr, 15\\degr, 25\\degr,\n35\\degr, 45\\degr). 
A small random offset in the x direction has been applied to \neach data point to prevent too much overlap.\n\\label{Rho1000_inc_cavity}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.58, angle=90]{Rho1000_vs_inc_Rdisk.eps}\n\\caption{Similar to Figure \\ref{Rho1000_inc_cavity}, but with the outer disk \nradius indicated by the different symbol sizes and gray shades: symbols \nbecome larger and lighter colored with increasing disk radius (5, 50, 100,\n500 AU).\n\\label{Rho1000_inc_Rdisk}}\n\\end{figure*}\n\nNevertheless, the observed trend in envelope densities suggests that the variations \nin the observed SEDs track, in great part, an evolution toward lower envelope \ndensities and lower infall rates. Assuming a certain mass for the central \nstar, the reference density in our models can be used to infer an envelope infall rate\n($\\dot{M}_{env} \\propto \\rho_{1000} \\sqrt{M_{\\ast}}$). \nAs mentioned in section \\ref{model_parameters}, this infall rate is model dependent \nand therefore tied to the assumptions of the models. With this in mind, the median \n$\\rho_{1000}$ values for the Class 0, Class I, and \nflat-spectrum protostars in our sample correspond to envelope infall rates of \n$2.5 \\times 10^{-5}$, $1.0 \\times 10^{-6}$, and $5.0 \\times 10^{-7}$ $M_{\\odot}$\\ yr$^{-1}$, \nrespectively, for a 0.5 $M_{\\odot}$\\ star. \nUsing a more realistic assumption of larger stellar mass for more evolved protostars, \nthe infall rates for Class I and flat-spectrum protostars would be larger than these \nvalues by a factor of a few. However, larger stellar masses alone cannot explain the\nlarge decrease of a factor of 50 in the median envelope density from Class 0 to \nflat-spectrum protostars; to achieve such a decrease with a constant infall rate\nof $2.5 \\times 10^{-5}$ $M_{\\odot}$\\ yr$^{-1}$, the stellar mass would have to increase\nby a factor of 2500. 
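This scaling argument can be checked numerically. The sketch below uses only the proportionality $\dot{M}_{env} \propto \rho_{1000}\sqrt{M_{\ast}}$ together with the calibration point quoted in the text ($2.5 \times 10^{-5}$ $M_{\odot}$ yr$^{-1}$ at $\rho_{1000} = 5.9 \times 10^{-18}$ g cm$^{-3}$ for a 0.5 $M_{\odot}$ star); it is not the full model-dependent relation:

```python
import math

# Calibration point from the text: median Class 0 reference density and the
# corresponding infall rate for a 0.5 M_sun star.
RHO_CLASS0 = 5.9e-18   # g cm^-3
MDOT_CLASS0 = 2.5e-5   # M_sun / yr
M_REF = 0.5            # M_sun

def infall_rate(rho_1000, m_star=M_REF):
    """Envelope infall rate (M_sun/yr) from Mdot ~ rho_1000 * sqrt(M_star),
    scaled to the Class 0 calibration point above."""
    return MDOT_CLASS0 * (rho_1000 / RHO_CLASS0) * math.sqrt(m_star / M_REF)

rho_flat = 1.2e-19  # median flat-spectrum density from the text
print(infall_rate(rho_flat))  # ~5.1e-7, matching the quoted 5.0e-7

# Keeping Mdot at the Class 0 value while rho drops by a factor of ~50
# requires sqrt(M_star) to grow by the same factor, i.e. M_star by ~50^2.
factor_rho = RHO_CLASS0 / rho_flat       # ~49
print(factor_rho ** 2)                   # ~2400, i.e. the quoted ~2500
```

The required mass increase of order $10^3$ is clearly unphysical for low-mass protostars, which is what forces the conclusion that the infall rates themselves decline.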
Thus, within the context of our model fits, we can conclude that,\nas envelopes become more tenuous, the infall rates also decrease.\n\nOther trends are also apparent. Class 0 protostars and flat-spectrum sources show \na relatively flat distribution of cavity opening angles. On the other hand, the best \nfit for a large fraction of Class I protostars (40\\%) results in $\\theta$=5\\degr. \nThis could point to a degeneracy in the models, since protostars with small cavity \nopening angles tend to have lower envelope densities (and also lower total \nluminosities); thus, the smaller cavity partly compensates for the lower opacity \nresulting from the lower envelope density (see also Figure \\ref{Rho1000_inc_cavity}). \n\nEven though our models do not yield reliable disk properties, we can make a\nstatement about the difference in the best-fit centrifugal radii (or $R_{disk}$),\nwhich are tied to the structure of the rotating envelope given by the model fits. \nIt should be noted that the centrifugal radii set a lower limit to the disk radii, \nsince disks may spread outward due to viscous accretion. Most Class I protostars \nand flat-spectrum sources are fit with a centrifugal radius of just 5 AU. Since \nthe smallest centrifugal radius in our model grid is 5 AU and the next value is 50 AU, \nwe can state that, except for Class 0 protostars, most protostars in our sample \nhave $R_{disk} < 50$~AU, and some might even have $R_{disk} <$ 5~AU.\n\nSmall disks of those sizes have been observed; radio interferometry of the \nmultiple protostellar system L1551 IRS 5 shows disks whose semi-major axes \nare $\\lesssim$~20~AU \\citep{rodriguez98,lim06}. However, there is a degeneracy \nbetween the centrifugal radius and the envelope density; for protostars with low \nenvelope densities, a small centrifugal radius can compensate for the decrease in \nopacity by concentrating more material closer to the star. 
As can be seen in Figure \n\\ref{Rho1000_inc_Rdisk}, most protostars with $R_{disk}=$ 5 AU also have lower \nenvelope densities. Inclination angle also plays a role; protostars seen at \n$i>$ 80\\degr\\ typically have larger centrifugal radii. In addition, our envelope\nmodels include outflow cavities, which allow some of the mid-IR radiation to\nescape. In order to generate model SEDs with silicate absorption features \nand steep mid-IR slopes at low to intermediate inclination angles, a lower \n$R_{disk}$ value is needed. \nWe will discuss the potential implications of the small cavity sizes and $R_c$ \nvalues for Class I protostars and flat-spectrum sources in Section \n\\ref{Model_problems}.\n\n\n\\subsubsection{The Most Embedded Protostars}\n\\label{Class0}\n\nAmong the Class 0 protostars, there are protostars in the earliest evolutionary \nstages, when the envelope is massive and the protostar still has to accrete \nmost of its mass. \n\\citet{stutz13} identified 18 protostars with very red mid- to far-infrared\ncolors ($\\log(\\lambda F_{\\lambda}(70)\/\\lambda F_{\\lambda}(24))$\n$>$ 1.65), of which 11 were newly identified (see Table D\\ref{New_proto}). \n\\citet{tobin15} added an additional object.\nThese protostars were named PACS Bright Red sources (PBRs) by \\citet{stutz13}; \nthey are HOPS 169, 341, 354, 358, 359, 372, 373, 394, 397-405, 407, and 409. \nBased on their steep 24-70 $\\mu$m SEDs and large submillimeter luminosities, \nthey were interpreted as the youngest protostars in Orion with very dense \nenvelopes.\n\nFrom our best-fit models to the SEDs of the PBRs, we derive a median \n$\\rho_{1000}$ value of 1.2 $\\times$ 10$^{-17}$ g cm$^{-3}$, \nwhich is twice as high as the median value of all the Class 0 protostars in \nour sample. These fits also result in a median envelope mass within 2500 AU \nof 0.66 $M_{\\odot}$\\ for the PBRs, but the individual objects cover a large \nrange, from 0.07 to 1.83 $M_{\\odot}$. 
The median total luminosity amounts to \n5.6 $L_{\\odot}$\\ (with a range from 0.6 to 71.0 $L_{\\odot}$), which is very similar\nto the median $L_{tot}$ value for the Class 0 protostars in our sample. \nMost PBRs (14 out of 19 protostars) are fit by models with large inclination \nangles ($i \\geq$ 70\\degr), but, as shown in \\citet{stutz13}, high inclination\nalone cannot explain the redness of the PBRs. \nThus, our models confirm the results of \\citet{stutz13} that the PBRs\nare deeply embedded and thus likely among the youngest protostars\nin Orion.\n\n\n\\subsubsection{Flat-spectrum Sources}\n\\label{flat-spectrum}\n\nA particularly interesting group of protostars that are not easy to categorize\nare the flat-spectrum sources. They are thought to include objects in transition \nbetween Stages I and II, when the envelope is being dispersed \\citep{greene94}. \nIn particular, those with low envelope densities could be more evolved protostars, \nor they could be protostars that started out with more tenuous envelopes. \nOn the other hand, flat-spectrum sources could also be highly inclined \ndisk sources \\citep[see][]{crapsi08}, or protostars surrounded by dense \nenvelopes, but seen close to face-on \\citep{calvet94}. This type of \nmisclassification could have a large effect on the lifetimes of the earlier \nprotostellar stages and thus on the timeline of envelope dispersal. \nAmong the 330 HOPS targets in our sample, we identified 102 flat-spectrum \nsources based on their flat ($-0.3$ to $+0.3$) spectral index from 4.5 to 24 \n$\\mu$m (or 5-25 $\\mu$m in a few cases). Thus, they compose a fairly large \nfraction of our protostellar sample. Of these 102 objects, 47 have a negative \nspectral index and 55 have one between 0 and $+0.3$; 41 have a spectral \nindex between $-0.1$ and 0.1, which results in a very flat mid-infrared SED. 
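The spectral-index selection described above, combined with the $T_{bol}$ boundary adopted in this work, amounts to a simple rule-based classification. As a minimal illustrative sketch (the function names are ours, fluxes are assumed to be in any consistent flux-density unit, wavelengths in $\mu$m, and the handling of values exactly at $\pm 0.3$ is an arbitrary choice):

```python
import math

def spectral_index(lam1, f1, lam2, f2):
    # n = d log(lam * F_lam) / d log(lam), evaluated between two bands;
    # lam in micron, F_lam in any consistent flux-density unit.
    return ((math.log10(lam2 * f2) - math.log10(lam1 * f1))
            / (math.log10(lam2) - math.log10(lam1)))

def classify(n_45_24, t_bol):
    # Thresholds as quoted in this work: n > 0.3 splits Class 0 from
    # Class I at T_bol = 70 K; -0.3 < n < 0.3 is flat-spectrum;
    # n < -0.3 is Class II.
    if n_45_24 > 0.3:
        return "Class 0" if t_bol < 70.0 else "Class I"
    if n_45_24 >= -0.3:
        return "flat-spectrum"
    return "Class II"

# A source with constant lam * F_lam between 4.5 and 24 micron has n = 0.
n_flat = spectral_index(4.5, 1.0 / 4.5, 24.0, 1.0 / 24.0)
```

For example, `classify(n_flat, 300.0)` yields the flat-spectrum class, while a red source with $n_{4.5-24} = 2$ and $T_{bol} = 60$ K falls in Class 0.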
\n\nDespite a flat SED slope between 4.5 and 24 $\mu$m, many flat-spectrum\nsources display a weak silicate emission or absorption feature at 10 $\mu$m,\nwhich may indicate the presence of a very tenuous infalling envelope or\nmay be the result of the viewing geometry. Some SEDs are very flat out to \n100 $\mu$m, others have negative spectral slopes beyond 40 $\mu$m, \nand still others show a rising SED from the mid- to the far-IR.\nThere are also objects with more pronounced absorption features due to \nnot only silicates but also ices, as are typically found in Class 0 and I protostars,\nbut also in edge-on disks (see HOPS 82, 85, 89, 90, 92, 129, 150, 200, 210, 211, \n281, 304, 331, and 363). Only two flat-spectrum sources have prominent silicate \nemission features, and their SEDs are reminiscent of protoplanetary disks \n(see HOPS 187 and 199). Thus, flat-spectrum sources likely include objects of a \nvariety of evolutionary stages.\n\n\begin{figure}[!t]\n\centering\n\includegraphics[scale=0.39, angle=90]{Median_Rho1000_vs_inc.eps}\n\caption{Median best-fit $\rho_{1000}$ values at each inclination angle \nfor the Class 0 and I protostars ({\it squares}) and the flat-spectrum sources\n({\it circles}) in our sample. \n\label{Rho_inc_flat-spectrum}}\n\end{figure}\n\nThe latter conclusion can also be drawn when analyzing the distribution\nof envelope reference densities and inclination angles for flat-spectrum\nsources. In Figure \ref{Rho_inc_class}, we showed that flat-spectrum\nsources typically have intermediate inclination angles and lower envelope\ndensities. 
To compare their properties more directly to Class 0 and I protostars,\nin Figure \\ref{Rho_inc_flat-spectrum} we show the median best-fit \n$\\rho_{1000}$ value at each best-fit inclination angle; it is larger for Class 0\nand I protostars than for flat-spectrum sources at all inclination angles.\nFor Class 0 and I protostars, the median $\\rho_{1000}$ value is highest\nat intermediate inclination angles, decreases at larger inclination angles,\nand then increases again for $i>$ 80\\degr. For flat-spectrum sources, the \nmedian $\\rho_{1000}$ value is relatively flat over the 18\\degr--63\\degr\\\nregion but has its peak value at $i=$41\\degr; it decreases for larger \ninclination angles. The only flat-spectrum source with a best-fit inclination \nangle of 81\\degr, HOPS 357, has a very low envelope density (the lowest \nvalue for this parameter in the model grid), and its spectrum displays a \ndeep silicate absorption feature.\n\nOverall, this shows that, while a range of envelope densities and inclination \nangles can explain flat-spectrum sources, their envelope densities are typically\nlower than for Class 0 and I protostars. The higher-density objects are seen at low\nto intermediate inclination angles, while only the lowest-density objects\nare seen closer to edge-on. Some of the high-density flat-spectrum sources\ncould actually be more embedded protostars (Stage 0 objects) seen\nface-on (which would be classified as Class 0 objects if seen at larger\ninclination angles). Thus, in terms of envelope evolution, they include a\ndiverse group of objects. \n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.71, angle=90]{Rho1000_vs_F350.eps}\n\\caption{Best-fit $\\rho_{1000}$ values versus the 350 $\\mu$m fluxes\nfor the Class 0 and I protostars ({\\it left}) and the flat-spectrum sources\n({\\it right}) in our sample. Detections at 350 $\\mu$m are shown with \ndiamonds, while upper limits are shown with arrows. 
The histograms \nshow the distribution of best-fit $\\rho_{1000}$ values for sources with \na 350 $\\mu$m flux measurement ({\\it thick solid line}) and with 350 \n$\\mu$m upper limits ({\\it shaded area}). \n\\label{Rho_F350}}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[scale=0.71, angle=90]{Rho1000_vs_F870.eps}\n\\caption{Similar to Figure \\ref{Rho_F350}, but for the 870 $\\mu$m fluxes.\n\\label{Rho_F870}}\n\\end{figure*}\n\nWe note that even though we find that flat-spectrum sources have in general\nlower envelope densities than Class 0 and Class I objects, their best fit does\ninclude an envelope in almost all cases; just 3 of the 102 flat-spectrum \nsources are best fit without an envelope. This seems to contradict recent\nfindings by \\citet{heiderman15}, who found that only about 50\\% of flat-spectrum\nsources were actually protostars surrounded by envelopes. This could be partly\nexplained by different criteria used to select flat-spectrum sources; in the \n\\citet{heiderman15} sample, flat-spectrum sources are selected by their\nextinction-corrected 2-24 $\\mu$m spectral index (see also \\citealt{evans09,\ndunham13}), while our sample uses a flat 4.5-24 $\\mu$m spectral index.\nMoreover, in their study \\citet{heiderman15} detected the presence of an \nenvelope via HCO$^+$ emission, and they found that almost all sources \ndetected in the sub-mm are also detected in HCO$^+$ (but the opposite \ndoes not always hold). For our sample of Orion protostars, we find that \n75\\% of Class 0+I protostars observed with SABOCA (350 $\\mu$m) are \ndetected, while only 47\\% of flat-spectrum sources have detections. For \nLABOCA observations (870 $\\mu$m), these two fractions amount to \n41\\% and 21\\%, respectively. Thus, we find that flat-spectrum sources \nhave a $\\sim$ 50\\% lower sub-mm detection rate than Class 0+I protostars. 
\nFlat-spectrum sources without sub-mm detections would likely also not \ndisplay HCO$^+$ emission and thus would be considered as protostars \nwithout envelopes by \\citet{heiderman15}.\n\nTo compare how our submillimeter detections correlate with the presence\nof an envelope, in Figures \\ref{Rho_F350} and \\ref{Rho_F870} we show the\nderived best-fit reference envelope densities as a function of 350 or 870 $\\mu$m\nfluxes for the combined Class 0+I sample and the flat-spectrum sources. We\nalso differentiate the distribution of envelope densities between measured\nflux values and upper limits; at 870 $\\mu$m, the upper limits are often cases\nwhere the sources are not detected due to confusion with bright, spatially\nvarying emission. We find that even protostars with upper limits at 350 and \n870 $\\mu$m are best fit with an envelope; however, the envelope density \nis lower for objects with upper limits in the sub-mm. This is especially evident \nfor Class 0+I protostars; for flat-spectrum sources, the distributions of envelope \ndensities for sub-mm detections and upper limits show significant overlap. Four \ntimes as many flat-spectrum sources have upper limits instead of detections \nat 870 $\\mu$m, but their derived $\\rho_{1000}$ values span almost the full \nrange of values. Furthermore, the median $\\rho_{1000}$ value of 1.19 $\\times \n10^{-19}$ g cm$^{-3}$ for sources without detections is relatively close to the \nmedian value of 5.95 $\\times 10^{-19}$ g cm$^{-3}$ for the sources with \n870 $\\mu$m detections. 
Thus, our model fits do not rely on sub-mm detections \nto yield a best fit with an envelope; in most cases the near- to far-IR SED is \nsufficient to constrain the properties of the envelope.\n\n\n\subsubsection{Sources without an Envelope and Class II Objects}\n\label{disk_sources}\n\nAmong the six objects whose best-fit SED required no envelope ($\rho_{1000}$\nvalue of 0), three are flat-spectrum sources (HOPS 47, 187, 265), two are Class \nII pre-main-sequence stars (HOPS 113, 293), and one is a Class I protostar (HOPS 232). \nThe low 70 $\mu$m fluxes of HOPS 47 and 265 constrained the best model to \none without an envelope. The SED of HOPS 187 looks like that of a transitional \ndisk; transitional disks have gaps or holes in their inner regions (see \citealt{espaillat14} \nand references therein). If HOPS 187 were a transitional disk, it would not have \nan envelope. HOPS 232 has a rising SED over the mid-IR spectral range; its best \nfit requires no envelope, but an edge-on disk with a high accretion luminosity.\n\nIt would be expected that the SEDs of Class II objects can be best fit by a \nmodel that does not include an envelope. This is the case for HOPS 113 and \n293. Of the nine remaining Class II objects in our sample, four have very \nlow envelope densities ($\rho_{1000} \sim (1-2.5) \times 10^{-20}$ g cm$^{-3}$; \nHOPS 22, 26, 98, 283), while five have $\rho_{1000}$ between $6.0 \times \n10^{-20}$ and $1.8 \times 10^{-19}$ g cm$^{-3}$ (HOPS 184, 201, 222, 272, 277).\nThe SEDs of HOPS 22, 184, and 201 are similar to those of transitional disks,\nwith some silicate emission at 10 $\mu$m and a rising SED between about\n13 and 20 $\mu$m. The best-fit models require some envelope emission\nto fit the long-wavelength data. 
\nHOPS 222, 272, and 277 lie close to the border between a Class II \npre-main-sequence star and a flat-spectrum source based on their \n4.5-24 $\\mu$m spectral index, and therefore they could have some \nenvelope material left, despite being classified as Class II objects.\n\nOverall, of the 330 YSOs in our sample, 319 were classified as\neither Class 0, Class I, or flat-spectrum protostars based on their SEDs. \nHowever, four of them are best fit without an envelope. Conversely, of the\n11 Class II objects in our sample, nine are best fit with an envelope; however,\nthree of these might be transitional disks. Thus, based on our model fits and\nSEDs, 321 of our 330 YSOs are protostars with envelopes, and nine are likely\npre-main-sequence stars with disks.\n\n\\clearpage\n\n\\subsection{The Total Luminosities of Protostars}\n\nThe luminosity distribution of protostars is a significant constraint on \nprotostellar evolution, and it is important to understand the effect of the \nenvelope on the observed luminosity \\citep[e.g.,][]{offner11}. The bolometric \nluminosity distribution of the HOPS protostars is very similar to that determined \nfor the {\\it Spitzer}-identified protostars by \\citet{kryukova12} with a \npeak near 1~$L_{\\odot}$\\ (Fig.\\ \\ref{HOPS_n_Tbol_Lbol_histo}). 
In contrast, the \ndistribution of the total luminosities from the models shows a peak near \n2.5~$L_{\\odot}$\\ (Fig.\\ \\ref{Ltot_class_histo}), indicating that the luminosities of \nprotostars may be systematically underestimated by the bolometric \nluminosities, which do not take into account the inclination angle (and thus\nbeaming of the radiation along the outflow cavities) as well as foreground\nextinction (see Fig.\\ \\ref{Ltot_Lbol} in section \\ref{results_overview}).\n\nHigher intrinsic luminosities for protostars could help address the \n``luminosity problem'' first pointed out by \\citet{kenyon90}, who found \nthat the luminosities of protostars are lower by about an order of magnitude \nthan a simple estimate of the expected accretion luminosity. However, \nan increase in the luminosity by a factor of 2.5-3 would not solve the \nproblem; solutions proposed by other authors, such as mass-dependent\naccretion rates \\citep{offner11} or episodic accretion events \\citep{dunham12}, \nare still needed.\n\nOur best-fit models also suggest that Class 0 protostars have a \ndifferent distribution of $L_{tot}$ values compared to Class I protostars or \nflat-spectrum sources. Their median total luminosity is higher, which could \nbe an indication of larger accretion luminosities for younger protostars. \nWe must bear in mind the caveats and degeneracies mentioned above; \nin particular, in some cases the higher luminosity could be related to the \nadoption of an overly large inclination angle, which results in most \nof the emitted radiation not reaching the observer. Nevertheless, these \ndifferences have potentially important implications for protostellar evolution, \nwhich will be discussed in a future publication (W. Fischer et al. 
2016, in \npreparation).\n\n\n\\subsection{Potential Problems with TSC Models}\n\\label{Model_problems}\n\nAlthough the TSC models provide impressive fits to the SEDs, some of the \nobserved trends suggest problems with the models. First, the distribution \nof inclination angles (Fig.\\ \\ref{Inc_histo}) deviates from what we expect from \na randomly oriented sample of protostars. Although this could result from \nunintentional selection biases in our sample of protostars, it may also be the \neffect of applying the wrong envelope model to the data. \n\nFurthermore, our data show flat distributions in cavity opening angles \nfor Class~0 and flat-spectrum sources, but an excess of small cavities for the \nClass~I protostars (Figure \\ref{Cav_class_histo}). We also find that protostars \nwith large cavities often have high envelope densities (Figure \\ref{Rho1000_inc_cavity}).\nFor example, models with high envelope densities viewed more edge-on require \nlarge cavity opening angles and high $L_{tot}$ values to \ngenerate sufficient mid-IR flux; this is the case for a few of our highest-luminosity \nobjects (HOPS 87, 108, and 178). These trends do not support the notion of \nincreasing cavity size with later evolutionary stage, which would be expected if \noutflows play a major role in dispersing envelopes \\citep{arce06}. This may \nsuggest that cavity sizes are not growing with time; however, this may also imply \na deviation from spherical symmetry for the initial configuration of the collapsing \nenvelopes. Such a deviation may result if the envelope collapses from the \nfragmentation of a flattened sheet or elongated filament. \n\nFinally, we find an excess of small values of $R_{disk}$, and therefore small \ncentrifugal radii, for Class I and flat-spectrum protostars (Figure \n\\ref{Rdisk_class_histo}). 
This is contrary to the expectation from the TSC \nmodel, in which the late stages of protostellar evolution are characterized \nby the infall of high angular momentum material from large radii and hence \nlarger values of $R_c$. This may imply that disk sizes are small, but it \nmay also be the result of incorrect assumptions about the distribution of \nangular momentum in the TSC model. \n\nIn total, these ``conundrums'' that arise from our model fits hint that the \ncurrent models do not realistically reproduce the structure of collapsing \nenvelopes. Future high-resolution observations at submillimeter and longer\nwavelengths that resolve the structure and motions of envelopes may \nprovide the means to develop more refined models that can fit the SEDs \nwith more realistic envelope configurations. \n\n\n\vspace{2ex}\n\n\section{Conclusions}\n\nWe have presented SEDs and model fits for 330 young stellar objects\nin the Orion A and B molecular clouds. The SEDs include data from \n1.2 to 870 $\mu$m, with near-infrared photometry from 2MASS, mid-infrared \nphotometry and spectra from the {\it Spitzer Space Telescope}, far-infrared \nphotometry at 70, 100, and 160 $\mu$m from the {\it Herschel Space Observatory}, \nand submillimeter photometry from the APEX telescope. \nWe calculated bolometric luminosities ($L_{bol}$), bolometric temperatures\n($T_{bol}$), and 4.5-24 $\mu$m spectral indices ($n_{4.5-24}$) for all 330 sources \nin our sample. From the distributions of these three parameters, we find that $L_{bol}$\nhas a broad peak near 1 $L_{\odot}$\ and extends from 0.02 to several hundred $L_{\odot}$,\nwhile the distribution of $T_{bol}$ values is broad and flat from about 30 K to 800 K,\nwith a median value of 146 K. 
The 4.5-24 $\\mu$m spectral indices range from \n-0.75 to 2.6, with a peak near 0.\n\nBased on traditional classification schemes involving $n_{4.5-24}$\nand $T_{bol}$, we have identified 92 sources as Class 0 protostars \n($n_{4.5-24} > 0.3$ and $T_{bol} < 70$~K), 125 as Class I protostars \n($n_{4.5-24} > 0.3$ and $T_{bol} > 70$~K), and 102 as flat-spectrum sources \n($-0.3 < n_{4.5-24} < 0.3$). The remaining 11 sources are Class II pre-main-sequence \nstars with $n_{4.5-24} < -0.3$; most of them just missed the flat-spectrum cutoff,\nand three have SEDs typical of disks with inner holes. Considering these\ntransitional disks and YSOs whose best fit does not require an envelope, we\nfind that 321 of the 330 HOPS targets in our sample are protostars with\nenvelopes. Class 0 and I protostars often display a deep silicate absorption \nfeature at 10 $\\mu$m due to the presence of the envelope, while many \nflat-spectrum sources have a weak silicate emission or absorption feature \nat that wavelength. \n\nWe have used a grid of 30,400 protostellar model SEDs, calculated using the\n2008 version of the \\citet{whitney03a,whitney03b} Monte Carlo radiative \ntransfer code, to find the best-fit models for each observed SED. The grid \nis limited to discrete values for protostellar parameters, and their ranges \nwere chosen to represent typical protostars. Within the framework of these \nmodels, we find the following:\n\n\\begin{itemize}\n\\item{About 70\\% of Class 0 protostars, 75\\% of Class I protostars, and close to \n90\\% of flat-spectrum sources have reliable SED fits ($R < 4$, where $R$\nis a measure of the average distance between model and data in units\nof the fractional uncertainty). 
Thus, our model grid can reproduce most of the \nobserved SEDs of Orion protostars.}\n\\item{Our results show a clear trend of decreasing envelope densities as we \nprogress from Class 0 to Class I and then to flat-spectrum sources: we find that \nthe median $\\rho_{1000}$ values decrease from 5.9 $\\times 10^{-18}$ g cm$^{-3}$ \nto 2.4 $\\times 10^{-19}$ g cm$^{-3}$ to 1.2 $\\times 10^{-19}$ g cm$^{-3}$.\nThe decrease in densities implies a decrease in the infall rates of the protostars \nas they evolve.\nWe find that the PACS Bright Red sources (PBRs) have median $\\rho_{1000}$ \nvalues twice as high as the median value of the Class 0 protostars in our sample, \nsupporting the interpretation that they are likely the youngest protostars in Orion.}\n\\item{There are degeneracies in the parameters for models that reproduce the \nobserved SEDs. For example, increasing the mid-IR SED slope and \ndeepening the silicate absorption feature at 10 $\\mu$m of a model protostar \ncan be done by increasing the envelope density or inclination angle, \ndecreasing the cavity opening angle or centrifugal radius, or even increasing \nthe foreground extinction. Hence, the properties of a specific source may be fit \nby a wide range of parameters. The best-fit model parameters are particularly \nuncertain for objects whose SED is not well constrained by observations. \nBecause of these degeneracies, the observed classes contain a mixture \nof evolutionary stages.}\n\\item{We find that flat-spectrum sources are particularly well fit by our models.\nThey have, on average, lower envelope densities and intermediate inclination \nangles, so many flat-spectrum sources are likely more evolved protostars, \nbut this group also includes protostars with higher envelope densities (and\nsometimes larger cavity opening angles) seen at lower inclination angles. \nFlat-spectrum sources seen at $i>$ 65\\degr\\ have very tenuous envelopes. 
\nThus, the sample of flat-spectrum sources includes protostars \nat different stages in their envelope evolution. All but three of the flat-spectrum \nsources in our sample have envelopes in their best-fit models, indicating that, \nwith a small number of exceptions, these objects are protostars with infalling \ngas.}\n\item{The luminosity function for the model luminosities peaks at a higher \nluminosity than that for the observed bolometric luminosities as a result of \nbeaming along the outflow cavities. Furthermore, the total luminosity \ndetermined by the models is higher for Class 0 protostars: the median \ntotal luminosities are 5.5, 2.0, and 3.0 $L_{\odot}$\ for Class 0, Class I, and \nflat-spectrum sources, respectively.}\n\item{Since heating by external radiation fields is not included \nin our model grid, we assessed its influence by adding an interstellar radiation \nfield to a set of models. We find that an ISRF ten times that typical of the solar \nneighborhood can substantially change the SEDs of sources with internal \nluminosities of 0.1 $L_{\odot}$. However, when we incorporate the effect of \nextinction on the external radiation field, the effect on the protostellar\nSEDs is smaller; the best-fit luminosities and envelope densities would be \noverestimated by factors of a few for $\sim$ 0.1 $L_{\odot}$\ protostars and much\nless for higher-luminosity protostars. We estimate that the best-fit parameters\n(in particular, $L_{tot}$, $\rho_{1000}$) of $\sim$ 20\% of the HOPS sources \ncould be affected by external heating.}\n\item{Although the adopted TSC models reproduce the observed SEDs well, \nthere are trends that suggest inadequacies with these models. First, the \ndistribution of best-fit inclination angles does not reproduce that expected \nfor randomly oriented protostars. 
Second, although the distribution of outflow \ncavity sizes for flat-spectrum and Class~0 sources is flat, there is an excess \nof small cavities for Class I sources. This is in contradiction to the typical \npicture that outflow cavities grow as protostars evolve. Finally, the distribution \nof outer disk radii set by the rotation of the envelope is concentrated at small \nvalues ($<$ 50 AU) for the Class I and flat-spectrum sources but is slightly \ntilted toward large values ($>$ 50 AU) for Class 0 protostars. Again, this trend \ncontradicts the expected growth of disks as the infall region in protostellar \nenvelopes expands. These findings suggest that either the envelope structure \nof the adopted models is incorrect, or our understanding of the evolution of \nprotostars needs to be revised substantially.}\n\\end{itemize}\n\nOur work provides a large sample of protostars in one molecular cloud\ncomplex for future, more detailed studies of protostellar evolution. For example,\nusing additional constraints, such as from scattered light imaging, the \nstructure of envelope cavities and thus the role of outflows can be better \nunderstood. In addition, the detailed structure of the envelope and the disk \nembedded within, as well as multiplicity of the central source, can be studied \nwith high spatial resolution imaging such as ALMA can provide. With the \nanalysis of their SEDs presented in this work, the HOPS protostars constitute \nan ideal sample to derive a better understanding of the early evolution of \nyoung stars, when the assembly of the stellar mass and the initial \nstages of planet formation likely take place.\n\n\\vspace{1ex}\n\n\\acknowledgments\nSupport for this work was provided by NASA through awards issued by\nJPL\/Caltech.\nThe work of W.J.F. was supported in part by an appointment to the NASA \nPostdoctoral Program at Goddard Space Flight Center, administered by \nOak Ridge Associated Universities through a contract with NASA.\nJ.J.T. 
acknowledges support provided by NASA through Hubble Fellowship grant \n\#HST-HF-51300.01-A awarded by the Space Telescope Science Institute, which is \noperated by the Association of Universities for Research in Astronomy, Inc., \nfor NASA, under contract NAS 5-26555. J.J.T. acknowledges further support from \ngrant 639.041.439 from the Netherlands Organisation for Scientific Research (NWO).\nThe work of A.M.S. was supported by the Deutsche Forschungsgemeinschaft\npriority program 1573 (``Physics of the Interstellar Medium'').\nM.O. acknowledges support from MINECO (Spain) AYA2011-3O228-CO3-01 \nand AYA2014-57369-C3-3-P grants (co-funded with FEDER funds). \nWe thank Thomas Robitaille for helpful discussions regarding the model grid\nand model parameters.\nThis work is based on observations made with the {\it Spitzer Space Telescope}, \nwhich is operated by the Jet Propulsion Laboratory (JPL), California Institute of \nTechnology (Caltech), under a contract with NASA; it is also based on\nobservations made with the {\it Herschel Space Observatory}, a European Space\nAgency Cornerstone Mission with significant participation by NASA. \nThe {\it Herschel} spacecraft was designed, built, tested, and launched under \na contract to ESA managed by the {\it Herschel\/Planck} Project team by an industrial \nconsortium under the overall responsibility of the prime contractor Thales Alenia Space \n(Cannes), and including Astrium (Friedrichshafen) responsible for the payload module \nand for system testing at spacecraft level, Thales Alenia Space (Turin) responsible \nfor the service module, and Astrium (Toulouse) responsible for the telescope, with \nin excess of a hundred subcontractors.\nWe also include data from the Atacama Pathfinder Experiment, a collaboration \nbetween the Max-Planck Institut f\"ur Radioastronomie, the European Southern \nObservatory, and the Onsala Space Observatory. 
\nThis publication makes use of data products from the Two Micron All Sky Survey, \nwhich is a joint project of the University of Massachusetts and the Infrared Processing \nand Analysis Center\/Caltech, funded by NASA and the NSF. \n\n\vspace{2ex}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{Observations and reductions}\n\subsection{Spectroscopy}\nNew electronic spectra were obtained at four observatories: with\nthe Ond\v{r}ejov 2~m reflector (OND) and a~coud\'e\nspectrograph, with the Dominion Astrophysical Observatory 1.22~m reflector (DAO)\nand a coud\'e spectrograph, with the Cerro Armazones 1.5~m Hexapod Telescope (HPT),\nthe Bochum Echelle Spectroscopic Observer (BESO)\nspectrograph, which is similar to FEROS \citep{beso}, and with the Cerro Tololo Inter-American\nObservatory (CTIO) 1.5~m reflector with the CHIRON echelle spectrograph\n\citep{toko2013}.\nWe also used one archival ESO FEROS echelle spectrum \citep{feros,feros2},\nthe medium-resolution CCD spectra obtained and published by Christian Buil,\n\footnote{For the description of his instrumentation and data reduction, see\n\url{http:\/\/www.astrosurf.com\/buil\/us\/bestar.htm}\,.} and a selection of\namateur CCD spectra with resolutions better than 10000 from the BeSS\nspectroscopic database \citep{neiner2011}.\nTable~\ref{jourv} lists a journal of all spectral observations used.\n\nThe initial reduction of all Ond\v{r}ejov and DAO spectra (bias subtraction,\nflat-fielding, creation of 1D spectra, and wavelength calibration) was carried\nout in {\tt IRAF}. Initial reduction of the HPT and CTIO spectra\nwas carried out at the respective observatories. 
Rectification, removal\nof residual cosmics and flaws, and RV measurements of all spectra were carried\nout with the Pascal program {\tt SPEFO} \citep{sef0,spefo}, namely the latest\nversion 2.63 developed by J.~Krpata \citep{spefo3}.\n{\tt SPEFO} displays direct and flipped traces of the line\nprofiles superimposed on the computer screen that the user can slide\nto achieve a precise overlapping of the parts of the profile for which the RV\nis to be measured.\nAll RV measurements were carried out independently by PH and also by AH, who\nstudied the spectra as a~part of his student's research project. In addition to the wings of the H$\alpha$ and \ion{He}{i}~6678 emission, we also measured the bottom of the\n(sometimes asymmetric) cores of the H$\alpha$, \ion{He}{i}~6678, \ion{Si}{ii}, and \ion{Fe}{ii}\nshell absorptions. Both sets of measurements were\nintercompared, the larger deviations were carefully\nchecked, and the mean of the measurements for each line was then used here.\nOnly after these RV measurements were completed was a new program\nfor spectral reduction {\tt reSPEFO}, a~modern replacement of {\tt SPEFO}, written\nin JAVA and running on different platforms (Linux, Windows) developed\nby A.~Harmanec.\footnote{\url{https:\/\/astro.troja.mff.cuni.cz\/projects\/respefo}}\nIt can, among other things, import the spectra that were originally reduced in {\tt SPEFO} \nand treat spectra stored as FITS files.\nWe used this new program to measure line intensities, equivalent widths, and\nthe $V\/R$ ratio of the double H$\alpha$ emission.\\\n\n\subsection{Photometry}\nWe attempted to collect all available observations with known\ndates of observations. 
Basic information about all data sets\ncan be found in Table~\\ref{jouphot}, and the details of the photometric\nreductions and standardisation are described in Appendix~\\ref{apb}.\n\nFor the convenience of other investigators, we also publish all our individual\nobservations together with their HJDs. All measured RVs are listed in Table~3,\nspectrophotometric quantities are provided in Table~4, and photometric observations\nare collected in Table~5. These three tables are available in electronic form only.\n\n\\section{Long-term spectral, light, and colour changes of V1294~Aql}\nAs mentioned above, we attempted to collect and homogenise all available\nphotometric observations and measurements of RVs and line strengths from\nthe records with known dates of observations. Long-term behaviour of these\nquantities and their mutual correlations are discussed in the following\nsub-sections.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{vis-v.pdf}}\n\\caption{Time plot of the Otero visual brightness estimates\n(empty red circles) along with Johnson $V$ photometry from Hvar (blue dots),\n SPM (green), \\c{C}anakkale (cyan), and TUG (magenta).}\n\\label{vis-v}\n\\end{figure}\n\n\\subsection{Value of visual estimates of brightness}\nThe visual estimates of brightness by skilful amateur observers are commonly\nused to determine the times of minima (or maxima) of periodic\nvariables and to monitor light changes of variables with a large\namplitude (over several magnitudes). 
The scatter band of\nvisual estimates is typically about 0\m1 to 0\m15, but it can be pushed down by\ntalented individuals who, moreover, follow a few principles:\n(a) to perform only one visual estimate per night (without recalling the previous\none), and (b) to reduce the estimates to Johnson $V$ magnitudes of the\ncomparison stars (known to one thousandth of a magnitude), not to\nthe Harvard scale of magnitudes, which is only accurate to 0\m1.\nOne of us (SO) contacted the first author of this study back in 2003\nto inform him that he had observed another light decrease of V1294~Aql by\nabout 0\m3. We then agreed to test his ability to obtain accurate visual\nestimates via parallel photoelectric observations at Hvar,\nSan Pedro M\'artir (SPM), Tubitak National Observatory\n(TUG), and \c{C}anakkale. Figure~\ref{vis-v}\nshows the comparison of visual estimates and Johnson $V$ photoelectric\nphotometry from several stations over the time interval covered by visual\nestimates. The visual estimates agree very well with\nthe general trend of variations that were recorded via photoelectric photometry, but\nthe deep light minimum appears broader than that recorded by photoelectric\nphotometry. One possible reason is that the minimum was observed\nclose to the end of visibility of the star in the sky and the visual\nestimates were not corrected for the differential extinction.\n\n\subsection{Correlation between the long-term light and spectral changes\nin time}\n\n\begin{figure}\n\resizebox{\hsize}{!}{\includegraphics[angle=-90]{all.pdf}}\n\caption{Time plot of available observations over the whole time interval\nof about 25000~d covered by the data. Top panel: Yellow brightness observations,\nwhich could be transformed into Johnson $V$ magnitude. Second and third panel:\nAvailable \hbox{$B\!-\!V$}\ and \hbox{$U\!-\!B$}\ colour index observations. 
Bottom panel: $V\/R$ changes in the peaks of the double H$\\alpha$ emission.\nIn the three panels with photometry, the differential observations\nare shown as blue dots, and all-sky observations are shown as black dots. In the bottom panel,\nblue circles denote the DAO spectra, red circles show the OND spectra, green circles represent BESO spectra,\nmagenta circles show BeSS spectra, black circles show Castanet Tolosan spectra, and the black crosses plot the data\nfrom the literature.}\n\\label{all}\n\\end{figure}\n\n\\begin{figure}[t]\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=-90]{all-n.pdf}}\n\\caption{Time plot showing the correlation between the secular brightness\nand colour variations in the $V$-band magnitudes and \\hbox{$B\\!-\\!V$}\\ and \\hbox{$U\\!-\\!B$}\\ colour\nindices (black shows all-sky and blue shows differential observations),\nand the $V\/R$ changes, EW, and strength of the\nH$\\alpha$ emission for the more recent electronic spectra. The colour symbols for\nspectra from different sources are the same as in Fig.~\\ref{all}.}\n\\label{all-n}\n\\end{figure}\n\n\\begin{figure}[t]\n\\resizebox{\\hsize}{!}{\\includegraphics{si1vr.pdf}}\n\\caption{Time plot of the $V\/R$ changes recorded for the electronic spectra\nin the \\ion{Si}{ii}~6347~\\AA\\ line. The colour symbols for spectra from different\nsources are the same as in Fig.~\\ref{all}.}\n\\label{si1vr}\n\\end{figure}\n\n\\begin{figure}[t]\n\\resizebox{\\hsize}{!}{\\includegraphics{rvhe.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics{rvh3.pdf}}\n\\caption{Time plot of radial velocities. Top panel: RV of the \\ion{He}{i}\nshell lines. Bottom panel: RV measured on the steep wings of the H$\\alpha$ emission.\nData from individual instruments are shown by different symbols:\nthe circles are the same as in Fig.~\\ref{all}, and\nblack crosses plot data from \\citet{balle89}. 
The ranges on the two\naxes in the two plots are different.}\n\\label{rvtime}\n\\end{figure}\n\nFigure~\\ref{all} is a time plot of all available observations secured in\nor transformed into the Johnson \\hbox{$U\\!B{}V$}\\ magnitudes. The top panel\nshows that the usual brightness level in $V$ is occasionally\ndisturbed by rapid light decreases of different durations.\nMoreover, we note that there is also a secular, steady slow light decrease\nof the undisturbed brightness of the star outside the more rapid light\ndecreases until about HJD~2455000, when it suddenly changed to\na~steeper secular light increase. We return to this new phenomenon\nin a separate section below. The second panel of Fig.~\\ref{all} shows\nthat the \\hbox{$B\\!-\\!V$}\\ index followed the brightness changes, but with a small amplitude,\nwhile the \\hbox{$U\\!-\\!B$}\\ index showed a similar pattern of changes, but with a~larger\namplitude and with a~more or less steady secular reddening.\n\nAn enlarged Fig.~\\ref{all-n} covers only the more recent time interval,\nwhen electronic spectra became available. It shows that in the time intervals\nthat are sufficiently densely covered by the data, the sharp light decreases are\naccompanied by similarly sharp strengthening in the H$\\alpha$ emission.\n\nFigure~\\ref{si1vr} shows the $V\/R$ changes of \\ion{Si}{ii}~6347~\\AA\\ line.\nIt reveals a~pattern similar to that observed for H$\\alpha$, but the time coverage\nis less dense because these variations could not be measured in time intervals\nwhen the emission was faint.\n\nFigure~\\ref{rvtime} shows the variation of the shell absorption RVs\n\\citep[characterised by \\ion{He}{i} RVs, for which data are also\npublished by][]{balle89} and emission-line RVs measured on the wings of\nthe H$\\alpha$ line in electronic spectra. 
We note that while the shell RVs\nshow large cyclic changes that have also been observed for a number of other Be stars,\nthe RVs measured on the wings of the H$\\alpha$ emission are secularly stable and\nshow only mild changes on a shorter timescale.\n\n\\subsection{Unusual colour variations}\n\n\\begin{figure*}[t]\n\\includegraphics[angle=0,scale=0.95]{ubbvc.pdf}\n\\includegraphics[angle=0,scale=0.95]{ubbvd.pdf}\n\\includegraphics[angle=0,scale=0.95]{ubbva.pdf}\n\\includegraphics[angle=0,scale=0.95]{ubbvb.pdf}\n\\caption{ \\hbox{$U\\!-\\!B$}\\ vs. \\hbox{$B\\!-\\!V$}\\ diagram for several distinct data subsets.\nTop panels: Older data until JD 2450000 (left). All-sky observations\nare shown as black circles, and data from stations defined\nin Table~\\ref{jouphot} are denoted as follows:\n01 (blue), 04 (green), 12 (red), and 26 (magenta).\nMore recent data from the interval of secular\nlight brightening, all from station 01 (blue) (right).\nThe bottom panels show \\hbox{$U\\!B{}V$}\\ observations from\nthe two sharp increases in the emission-line strength accompanied\nby light decreases. Time interval JD~2452741 -- 2453309, which\ncovers the first sharp light decrease (left; cf. Figs.~\\ref{vis-v} and \\ref{all-n}).\nData from the time interval JD~2455357 -- 2456094 corresponding\nto the second sharp increase in the emission-line strength (right).\nData from stations 1, 30, 66, and 89 of Table~\\ref{jouphot} are shown\nby blue, red, green, and magenta dots, respectively. The main sequence\nand the supergiant sequence based on data from \\citet{golay74} (pp. 
79-80)\nare shown, as is the reddening line.}\n\\label{ubbv}\n\\end{figure*}\n\n\n Many systematically studied Be stars are known to always exhibit one and the same,\nrather clear type of either a positive or an inverse correlation\nbetween the long-term brightness variations, a~characteristic type of\nbehaviour in the colour-colour diagram, and the Balmer emission-line strength,\nas defined by \\citet{hvar83,hec2000}. He identified these two types\nof correlation as an aspect effect.\nFor Be stars with an~inverse type of correlation, light decreases are followed\nby the rise of the Balmer emission-line strength and by a~shift along\nthe main sequence towards later spectral subclasses in the \\hbox{$U\\!-\\!B$}\\ versus \\hbox{$B\\!-\\!V$}\\\ndiagram. For a~positive type of correlation, the brightenings are followed\nby the rise of the emission strength and a~shift from the main sequence\ntowards the supergiant sequence in the \\hbox{$U\\!-\\!B$}\\ versus \\hbox{$B\\!-\\!V$}\\ diagram. The inverse\ncorrelation is observed for stars that are seen more or less equator-on (a growing\ngaseous envelope is attenuating the light of the central object), while\nthe positive correlation is observed for stars that are seen more pole-on (inner\noptically thick parts of the growing envelope mimic an~apparent increase in\nthe stellar radius). Several examples of both types of correlation in\nthe colour-colour diagram can be found, for instance, in Fig.~2 of\n\\citet{bozic2013}.\n\n The situation is dramatically different for V1294~Aql. The \\hbox{$U\\!B{}V$}\\ observations\naccumulated over several decades cover a large part of the whole\ncolour-colour diagram without any single clear pattern.\n\n To understand better what is going on, we investigated\nthe colour-colour diagrams for different segments of the long-term\nchanges. 
Figure~\\ref{ubbv} shows the colour changes separately for\nthe old data secured before JD~2450000, for more recent data from the secular\nbrightness increase (observations after JD~2457000), and for\nobservations covering two episodes of a rapid increase and decrease in the H$\\alpha$ emission\nassociated with sharp light decreases. The\npattern is remarkably similar for both these episodes. Formally, it looks like\na~positive correlation. However, the phases of minimum brightness and\nmaximum strength of the emission correspond to data that are close\nto the main sequence, even below it when the reddening is considered.\nThe older data are all clustered above the supergiant sequence, while\nthe recent data lie along the supergiant sequence for late-B and early-A\nspectral classes.\nAll this indicates that we observe a combination of several different types\nof long-term changes.\n\n\\section{Duplicity of V1294~Aql}\n\n\\setcounter{table}{5}\n\\begin{table}\n\\begin{flushleft}\n\\caption{Orbital solutions based on the H$\\alpha$ emission RVs.}\n\\label{sol}\n\\begin{tabular}{lccrrccc}\n\\hline\\hline\\noalign{\\smallskip}\n Element &All RVs &Hi-res. spectra only\\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$P$ (d) &$192.91\\pm0.18$& 192.91 fixed \\\\\n$T_{\\rm super.conj.} ^*$ &$56318.5\\pm2.2$&$56316.2\\pm2.9$ \\\\\n $e$ &0.0 fixed & 0.0 fixed \\\\\n$\\gamma$ (km~s$^{-1}$) & $-6.27\\pm0.31$& $-5.52\\pm0.44$ \\\\\n$K_1$ (km~s$^{-1}$) & $6.33\\pm0.41$ & $6.26\\pm0.61$ \\\\\nNo. 
of RVs & 172 & 38 \\\\\nrms (km~s$^{-1}$) & 3.92 & 2.63 \\\\\n\\hline\\noalign{\\smallskip}\n\\end{tabular}\n\\end{flushleft}\n\\tablefoot{$^*$) All epochs are in HJD-2400000.}\n\\end{table}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{orbit.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics{orbithe.pdf}}\n\\caption{Radial-velocity curve corresponding to the orbital solution, based on\nRVs measured on the steep wings of the H$\\alpha$ emission, plotted for\nphases from ephemeris (\\ref{efem}) (top).\nOrbital curve based on RV of the \\ion{He}{i}~6678~\\AA\\ shell line\nprewhitened for the long-term changes, plotted for the same ephemeris (bottom).\nData from individual instruments are shown by different symbols. The circles\nare the same as in Fig.~\\ref{all}, and the black triangle shows CTIO.}\n\\label{orbit}\n\\end{figure}\n\nThe idea that duplicity can be an important factor for the very existence\nof the Be phenomenon is not new. \\citet{plahor69,kriz69} and\n\\citet{plavec70} have argued that at least some Be stars could be binaries\nthat are observed in the later phases of mass exchange between the binary components.\n\\citet{krizhec75} and \\citet{hk76} formulated the general hypothesis that\nBe stars are mass-accreting components of binaries and showed that this idea\ncan also explain several types of time variations observed for Be\nstars. Additional arguments were provided by \\citet{plavec76a} and\n\\citet{peters76}. However, as pointed out already by \\citet{plavec76b},\nif all Be stars have Roche-lobe filling secondaries,\nmore eclipsing binaries should be observed among them. 
Later investigations also led to the\nfinding that the presence of Roche-lobe filling secondaries can be excluded\nfor some Be stars that were found to be spectroscopic binaries, such as V744~Her = 88~Her\n\\citep{zarf7a,zarf7b} or V439~Her = 4~Her \\citep{zarf5,zarf6}.\nThis led \\citet{pols91} to suggest that many\nBe stars might be objects created by large-scale mass transfer that were\nobserved in phases after the mass transfer ceased. The expected\nsecondaries of such objects would be hot compact stars, white dwarfs\nin some cases. Such secondaries are most easily detectable in far-UV spectra. Evidence\nfor a hot secondary to the well-known Be binary $\\varphi$~Per was found\nfirst from the antiphase variation in the \\ion{He}{ii}~4686~\\AA\\ emission\nseen in the photographic spectra \\citep{poeckert81} and later from\n\\ion{He}{i}~6678~\\AA\\ emission in the electronic spectra obtained\nby \\citet{gies93}. Its ultimate direct detection as an O~VI subdwarf\ncame from the study of the far-UV spectra from the Hubble Space Telescope by\n\\citet{gies98}. The secondary was then resolved with optical spectro-interferometry\nby \\citet{mourard2015}. Detections for several other systems followed.\n\\citet{wang2018} carried out a systematic search for the presence of hot\nsecondaries and summarised our knowledge of already known cases. \\citet{wang2021}\ndetected nine new Be+sdO binaries from analyses of the Hubble Space Telescope\nspectra, and \\citet{klement2022} reported the first interferometric detection and\nsignatures of the orbital motion for three known Be+sdO systems. On the other\nhand, \\citet{boden2020} carried out a systematic search for Be stars with\nmain-sequence secondaries, with a completely null result. This constitutes\nindirect evidence that the mass exchange is or was behind the formation\nof binaries with Be primaries. 
\\citet{hast2021} carried out evolutionary\ncalculations of mass exchange in binaries in an effort to set some limits\non the fraction of Be stars produced by binary interaction. They found\nthat under certain conditions, this fraction can be quite high.\n\n\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{haprofs.pdf}}\n\\caption{Three Ond\\v{r}ejov H$\\alpha$ profiles. They are mutually shifted in\nordinate by 1.0 of the continuum level for better clarity.\nProfiles from HJD~2457128.5976 and 2457137.5762 have anomalously positive\nRVs of the emission wings, which stem from the episode of a large strengthening\nof the emission. The next profile, from HJD~2457154.5435, has a~normal\norbital RV.}\n\\label{haprofs}\n\\end{figure}\n\n\\begin{table}\n\\begin{flushleft}\n\\caption{Possible properties of the binary system:\nMass of the secondary $M_2$, mass ratio $M_2\/M_1$, semi-amplitude of\nthe RV curve of the secondary $K_2$, and the semi-major axis $a$ for\nseveral possible orbital inclinations $i$. 
The mass of the primary was\nassumed to be $M_1=16.9$~\\hbox{$\\mathcal{M}^{\\mathrm N}_\\odot$}\\ after \\citet{zorec2016}.}\n\\label{secmass}\n\\begin{tabular}{lccrrccc}\n\\hline\\hline\\noalign{\\smallskip}\n $i$ & $ M_2$ &$M_2\/M_1$& $K_2$ & $a$\\\\\n($^\\circ$)& (\\hbox{$\\mathcal{M}^{\\mathrm N}_\\odot$}) & & (km~s$^{-1}$) & (\\hbox{$\\mathcal{R}^{\\mathrm N}_\\odot$}) \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n 90.0 & 1.171 & 0.0693 & 90.43 & 368.69 \\\\\n 85.0 & 1.175 & 0.0695 & 90.07 & 368.72 \\\\\n 80.0 & 1.189 & 0.0704 & 88.99 & 368.82 \\\\\n 70.0 & 1.249 & 0.0739 & 84.73 & 369.23 \\\\\n 60.0 & 1.361 & 0.0805 & 77.77 & 369.98 \\\\\n\\hline\\noalign{\\smallskip}\n\\end{tabular}\n\\end{flushleft}\n\\tablefoot{We express the values of masses and radii\nin the nominal values \\hbox{$\\mathcal{M}^{\\mathrm N}_\\odot$}, and \\hbox{$\\mathcal{R}^{\\mathrm N}_\\odot$}\\ as defined by \\citet{prsa2016}.}\n\\end{table}\n\n\nOne always has to be cautious when analysing binaries with clear signatures\nof the presence of circumstellar matter in the system. 
The experience from\nour previous studies of individual Be stars \\citep{bozic95,zarf18,zarf20,\nzarf21,zarf24,zarf26} shows that the binary nature of particular Be stars\nis most easily detected via periodic RV variations of the steep\nemission wings of the H$\\alpha$ line and often also via the periodic changes\nin the $V\/R$ ratio of the double Balmer emission lines.\n\nPeriod analyses of all H$\\alpha$ emission-line RVs of V1294~Aql, using\nboth the \\citet{deeming75} and \\citet{stelling78} methods, revealed that\nthe RV of the H$\\alpha$ emission wings varies with a period of 193~d\nand a~semi-amplitude of $\\sim 5$ km~s$^{-1}$.\nThe same periodicity is also detected in the RV of the H$\\alpha$ absorption core\nand in the absorption RVs of the \\ion{Si}{ii} doublet at 6347 and 6371~\\AA,\nand \\ion{Fe}{ii}~6456~\\AA\\ after long-term changes are removed.\n\nUsing the program {\\tt FOTEL} \\citep{fotel1,fotel2}, we derived circular-orbit\nelements for all H$\\alpha$ emission-wing RVs and for those from the\nhigh-resolution spectra alone. The mutual agreement of the two solutions is\nvery satisfactory. They are presented in Table~\\ref{sol}, and\nthe corresponding RV curve is plotted in Fig.~\\ref{orbit}.\nIn the rest of this study, we adopt the following linear ephemeris:\n\n\\begin{equation}\nT_{\\rm super.conj.}={\\rm HJD}\\,2456318.5+192\\fd91\\times E \\label{efem}\n\\end{equation}\n\n\\noindent based on the solution for all spectra.\n\n To be fair, we note that some deviations from the mean RV curve in the upper\npanel of Fig.~\\ref{orbit} are rather large. This is, for instance,\nthe case of two Ond\\v{r}ejov spectra taken on HJD~2457128.6 and 2457137.6,\nwhen a very steep rise of the emission strength\nhad occurred. We remeasured these spectra several times, but the result was the\nsame. 
Their RVs are almost in anti-phase to the orbital RV curve near phase 0.3.\nWe show the corresponding line profiles in Fig.~\\ref{haprofs} together with\nanother profile, taken about two weeks later, which already gives an~RV in accord\nwith the orbital motion. The two peculiar RVs were given zero weight in the\norbital solution.\n\nWe tentatively adopted the mass of the Be primary after \\citet{zorec2016} and\nestimated the basic properties of the system for several\npossible orbital inclinations. Because no lines of the secondary were detected\nin the optical spectra and because no companions to Be stars were ever found\namong main-sequence objects \\citep{boden2020}, we conclude that the secondary\nis not a Roche-lobe filling object, but most probably a hot subdwarf star or\nwhite dwarf. It should be looked for in the far-UV spectral region.\nFor the Gaia DR2 parallax of 0\\farcs0007059, the projected angular separation\nof the binary components is 0\\farcs0048, which might be resolved with\npresent-day optical interferometers such as SPICA, an instrument currently\nunder testing \\citep{spica2020}.\n\n\n\\section{Correlations between orbital and long-term changes}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{v57500.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics{v57900.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics{v58300.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics{v58600.pdf}}\n\\caption{Phase plots of the $V$ magnitude for several seasons of Hvar observations\nfor binary ephemeris (\\ref{efem}). From top to bottom: Data from\nJD~$2457568-57657$,\nJD~$2457931-58079$,\nJD~$2458318-58392$, and\nJD~$2458664-58740$.}\n\\label{vorb}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{haemisph.pdf}}\n\\caption{Phase plots of the strength of the H$\\alpha$ emission\nfor binary ephemeris (\\ref{efem}). 
Data from individual sources are\ndenoted as follows: Blue circles show DAO spectra, red circles show OND spectra, green circles represent BESO spectra,\nmagenta circles show BeSS spectra, and black circles show Castanet spectra. The black crosses show data from the literature.}\n\\label{haemisph}\n\\end{figure}\n\nInspection of the light, colour, and emission-line strength variations seems to\nindicate that the rapid episodes of large changes such as those near JD~2452900\nor JD~2457300 occurred during one binary orbital period. To investigate the problem,\nwe plot phase diagrams of the variability of the $V$ magnitude for several\nmore recent shorter time intervals in Fig.~\\ref{vorb}. The two\nlarge light decreases that are accompanied by strong increases in the H$\\alpha$ emission-line\nstrength apparently occurred around the phases of elongation, with the Be primary receding from us.\nAt the same time, the plot shows that in another observing season, brightenings\nwere observed around similar orbital phases. The same is also confirmed by\na phase plot of the H$\\alpha$ emission-line strength; see Fig.~\\ref{haemisph}.\n\n We also investigated the time behaviour of the $V\/R$ ratio of the double\nH$\\alpha$ emission. In this case as well, V1294~Aql appears to be quite unusual.\nAs Figs.~\\ref{all} and \\ref{all-n} show, the $V\/R$ variations are different\nin different time intervals and do not resemble either the long-term cyclic\nchanges known for Be stars with one-armed global oscillations or phase-locked\nchanges. In several panels of Fig.~\\ref{vrsets} we show enlarged plots of\nthe $V\/R$ changes. 
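The orbital phases used in these phase plots follow directly from the linear ephemeris~(\\ref{efem}). Purely as an illustration (this sketch is not part of our reduction software; the constants are simply the epoch and period of Table~\\ref{sol}), the orbital phase of any observation can be computed as follows:

```python
# Orbital phase from the linear ephemeris of V1294 Aql:
#   T_super.conj. = HJD 2456318.5 + 192.91 d x E
T0 = 2456318.5   # epoch of superior conjunction (HJD)
P = 192.91       # orbital period (d)

def orbital_phase(hjd):
    # phase in [0, 1); phase 0 corresponds to superior conjunction
    return ((hjd - T0) / P) % 1.0

print(round(orbital_phase(T0 + P), 6))         # one full cycle later: 0.0
print(round(orbital_phase(T0 + 0.25 * P), 6))  # quarter cycle: 0.25
```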
Instants of expected phase-locked $V\/R$ maxima predicted\nby the orbital ephemeris (\\ref{efem}) are shown by vertical lines.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{vrsets.pdf}}\n\\caption{Enlarged subsets of time variability of the H$\\alpha$ $V\/R$ ratio\nwith the instants of the expected phase-locked maxima predicted by the\norbital ephemeris (\\ref{efem}).}\n\\label{vrsets}\n\\end{figure}\n\n\\section{Rapid changes}\nAlthough we collected a large number of photometric observations of V1294~Aql,\ntheir time distribution is not suitable for a search for rapid periodic changes. Perhaps the only observations suitable for\nsuch a search are the early $V$ observations by \\citet{lynds59}.\nHe concluded that his observations definitely indicated brightness variations,\nwhich, however, appeared somewhat erratic, and no period could be found.\nThe observations were secured within one month\nduring a~time interval that was not affected by secular variations. Our period analysis\nrevealed sinusoidal variations with a semi-amplitude of 0\\m0139(29).\nA~least-squares fit led to ephemeris (\\ref{efelynds}), the rms of one\nobservation being 0\\m0067. The corresponding phase plot is shown in Fig.~\\ref{rapid}.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{lynds.pdf}}\n\\caption{Possibly periodic rapid light changes based on \\citet{lynds59}\n$V$ magnitude photometry and plotted for ephemeris (\\ref{efelynds}).}\n\\label{rapid}\n\\end{figure}\n\n\\begin{equation}\nT_{\\rm light\\, min.}={\\rm HJD}\\,2436450.8053(92)+0\\fd64827(50)\\times E. \\label{efelynds}\n\\end{equation}\n\nThis indicates that a scatter of at least 0\\m03 is to be expected in individual\nobservations on longer timescales.\n\n\\citet{lefe2009} carried out an~automatic period search in the Hipparcos \\hbox{$H_{\\rm p}$}\\\nphotometry to find new periodic variables among OB stars. 
They identified\nV1294~Aql as a~possible slowly pulsating B star (SPB) with a period of 7\\fd752.\nWe cannot confirm their result. They apparently did not take the secular\nlight change in \\hbox{$H_{\\rm p}$}\\ photometry into account; see the upper panel of Fig.~\\ref{all} here.\n\n\\citet{zorec2016} estimated the following physical properties of the Be component:\\\\\n$T_{\\rm eff}$ = ($30120\\pm2540$)~K, $\\log g$ = ($4.08\\pm0.40$) [cgs],\nmass $M$ = ($16.9\\pm2.7$)~M$_{\\odot}$, $v \\sin i$ = ($207\\pm18$)~km~s$^{-1}$,\ncritical rotational velocity $v_{\\rm crit}=(517\\pm64)$~km~s$^{-1}$, and the\ninclination of the rotational axis $i=(37^\\circ\\pm9^\\circ)$.\nFor these values, the period 0\\fd648 appears as a reasonable rotation period of the Be component.\nIn passing we note that at the time of writing, the star has not been observed\nby the TESS satellite.\n\n\\section{Fourth timescale}\n\\citet{hec98} has called attention to the fact that the brightness of the\nBe star $\\omega$~CMa outside of the episodes of brightenings accompanied\nby the growth of emission-line strength (typical of the positive correlation\ndiscussed above) has been decreasing secularly. His observation was later\nconfirmed with more recent photometry \\citep{ghore2018, ghore2021}. These\nauthors and also \\citet{marr2021}, who studied another Be star, V2048~Oph = 66~Oph,\nmodelled the secular variability and the episodes of brightening and\nincreases in the Balmer emission-line strength with some success, estimating\nthe required viscosity values for individual episodes, and also discussing\nsome limitations of their effort. The yellow light curve of V2048~Oph\nis shown in the upper panel of Fig.~1 of \\citet{marr2021}. It shows a~secular\nlight decrease between 1980 and 2000, occasionally interrupted by brightenings\nreminiscent of a positive correlation. 
However, the strength of\nthe H$\\alpha$ emission is near its maximum over the same time interval of about\n20 years, and only then does it gradually decrease. No emission has been\nobserved since about 2010. However, as \\citet{marr2021} pointed out,\nthe outer parts of the disk are still seen at radio wavelengths.\n\nWe collected and homogenised the $V$ photometry of V2048~Oph\nfrom the archive of \\hbox{$U\\!B{}V$}\\ photometry provided by J.R.~Percy, from Hvar,\nSPM, \\citet{john66}, \\citet{haupt74}, and \\citet{kozok85}. In addition, we transformed the Hipparcos\n\\hbox{$H_{\\rm p}$}\\ \\citep{esa97} photometry into Johnson $V$, and the observations of\n\\citet{hill76}, secured in the DAO photometric system, into \\hbox{$U\\!B{}V$}\\ using the transformation\nformul\\ae\\ provided by \\citet{hecboz01}. The $V$ light curve of V2048~Oph\nbased on the above-mentioned data sets is shown in the upper panel\nof Fig.~\\ref{fourth}.\n\nA similar secular light decrease has also been reported for V744~Her = 88~Her,\na Be star with an inverse type of correlation \\citep{hecboz2013}. We show\nits light curve, complemented by more recent observations and adapted from\nBo\\v{z}i\\'c et al. (in prep.), in Fig.~\\ref{fourth} as well. In the same figure,\nwe also show the $V$ and $B$ light curves of the Be star EW~Lac = HD~217050\nfrom another study in preparation. In this case, a secular increase\nin brightness is observed. We note that the large scatter around the mean trend\nis related to the known rapid light variability of EW~Lac on a timescale shorter than\none day. 
The fourth panel of Fig.~\\ref{fourth} shows the plot of Hvar $V$ photometry\nof $\\varphi$~And, another Be star with a positive type of correlation.\nA mild light decrease over several decades of observations is visible.\n\nSearching the literature, we found a few more examples.\nA secular light increase has also been observed for $\\gamma$~Cas, a~Be star\nwith a~positive type of correlation, over nearly 30000~days \\citep[][Fig.~5]{gcas2002}.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{66ophv.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{ewlacv.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{ewlacb.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics{phiandv.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics{v744herv.pdf}}\n\\caption{Secular photometric changes of several well-observed Be stars.}\n\\label{fourth}\n\\end{figure}\n\nFinally, as we have shown here, the secular light decrease of V1294~Aql\nhas recently changed to a secular light increase.\nAll this demonstrates the large variety of possible evolutions of\nthe circumstellar disk, of its replenishment, and of its gradual dissipation.\n\n\n\n\\section{Discussion}\nIn spite of the effort of several generations of stellar astronomers,\nthe engine leading to the occasional formation of circumstellar disks\naround Be stars has not been firmly identified so far.\nOne possible explanation is based on the idea that Be stars are\nrapidly rotating non-radial pulsators (NRP) and that the additional\nforce needed to facilitate the outflow of gas and angular momentum transfer\nfrom the stellar equator arises from a constructive interference of two\nor more NRP modes\n\\citep{rivi98a,rivi98b,baade2017a,baade2017b,baade2020,borre2020,bartz2021}.\nIn particular, systematic photometry from space observatories has been\nused and analysed to support this conjecture. 
Confirmation of this\nscenario would, however, require the creation of new, self-consistent models,\nwhich would show that Be stars are indeed pulsationally unstable over the whole\narea that they occupy in the Hertzsprung-Russell (HR) diagram.\nIt should also be mentioned that \\citet{baade2017b} warned that evidence\nfor constructive interference of pulsational modes for a larger number of Be stars\nis lacking and pointed out additional problems such as the rotational splitting of modes\nand the presence of rapid changes in the circumstellar envelopes during active phases.\nAn alternative view was suggested by \\citet{hec98}, who argued that the dominant\nperiod of rapid changes undergoes small cyclic changes. Modelling such\na situation, he found that a standard period analysis of a~corresponding series\nof observations returns a multiperiodicity with several close periods.\nYet another possibility was suggested by \\citet{bebin2002}, who argued that\nthe presence of a secondary can facilitate outflow from the equatorial parts\nof the gaseous disk of the Be primary even in systems that are not filling\ntheir Roche lobes. This idea is problematic, however, in that the effect is\nrather small in the majority of cases.\n\n For a long time, various other suggestions have been made that\nthe Be phenomenon and observed variability patterns of Be stars might be causally\nrelated to their binary nature\n\\citep[e.g. ][among others]{plahor69,krizhec75,hk76,bebin87,pols91,\n pano2018,boden2018,boden2020,langer2020,boden2021}.\nThe approach of \\citet{klement2019} is worth mentioning. These authors studied\nthe spectral energy distribution of several Be stars and provided arguments that\ntheir disks had to be truncated by the Roche lobes. This constitutes\nan~indirect argument for the presence of companions to these objects.\n\n The phase-locked $V\/R$ changes observed for several Be binaries represent another\ninteresting phenomenon. 
As already noted above, they are usually\nobserved roughly in phase with the orbital RV changes\nof the Be stars in question \\citep{zarf6,zarf7b,zarf16,zarf21,stefl2007}. A phase-locked\nemission-line variation with a single maximum and minimum per orbital period\nwas also found by \\citet{borre2020} for $\\gamma$~Cas. These authors used\nan interesting detection technique. They analysed local pixel-by-pixel line\nfluxes across the H$\\alpha$ profile in a~series of higher-resolution BeSS spectra.\n\\citet{pano2018} modelled the phase-locked $V\/R$ changes as global oscillations\nin the circumstellar disks with two spiral patterns and concluded that the\nphase-locked $V\/R$ changes should exhibit two maxima and minima during one\norbital period. This clearly disagrees with the available observations\nmentioned above. The only case for which a double-wave $V\/R$ curve was detected is\nV696~Mon = HR~2142 \\citep{peters72,peters76}. In our view, there is a natural explanation\nof the roughly sinusoidal phase-locked $V\/R$ changes, as discussed\nin Appendix~C of \\citet{wolf2021}. The circumstellar disk probably\noccupies almost the whole volume of the Roche lobe near\nthe orbital plane. As a result, there is more gaseous material in the part\nof the disk facing the secondary than on the opposite side. Because the disk\nrotates, more emission power is available on the side facing the secondary,\nand this naturally leads to phase-locked $V\/R$ changes that are in phase\nwith the RV changes of the Be component.\n\n To show how confusing the interpretation of $V\/R$ changes can be,\nwe compare the $V\/R$ changes of H$\\alpha$ and \\ion{He}{i}~6678~\\AA\\ for\ntwo shorter time intervals in Fig.~\\ref{fales}. In the first interval,\nthe variations for both lines are in phase, while in the second interval,\nantiphase variation is observed. We note that this is a consequence\nof the fact that the He shell RV becomes very negative (a~temporarily\nelongated envelope?) 
and apparently weakens the $V$ peak of a~relatively\nfaint He emission. This is best illustrated in Fig.~\\ref{vrdva}, where\nthe H$\\alpha$ and \\ion{He}{i}~6678 profiles for two dates are shown.\n\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{heh3vra.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{heh3vrb.pdf}}\n\\caption{Apparent $V\/R$ changes observed for the H$\\alpha$ and \\ion{He}{i}~6678\nemission lines intercompared for two time segments. The variation\nin the shell He RV is also shown. An~apparently anti-phase behaviour\nis observed in the second time interval, when the shell RV becomes\nquite negative and the He shell line blends with the $V$ peak of the\nfaint He emission. The same colours as in the previous time plots\nare used to distinguish spectra from individual observatories.}\n\\label{fales}\n\\end{figure}\n\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{havrline.pdf}}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=0]{hevrline.pdf}}\n\\caption{Apparent $V\/R$ changes observed for the H$\\alpha$ and \\ion{He}{i}~6678\nemission lines intercompared for two dates. An~apparently anti-phase\nbehaviour is observed for the later date, when the \\ion{He}{i}~6678 shell RV becomes\nquite negative and the He shell line blends with the $V$ peak of the\nfaint He emission.}\n\\label{vrdva}\n\\end{figure}\n\nThis study of V1294~Aql demonstrates very clearly how hard it is to identify\nand understand mutually coexisting and overlapping variability patterns\ngoverning the observed spectral, light, and colour changes. Attempts at modelling them\nquantitatively, planned for a~continuation of this study, are expected to shed\nmore light on the mysterious Be phenomenon. 
We also suggest that further\nmonitoring of the object with systematic photometry, high-resolution\nspectroscopy, and especially with the optical interferometry could help to\nreveal the secrets of this intriguing Be binary, or possibly a multiple system,\nas indicated by the analysis of the astrometric data \\citep{brandt2021}.\n\n\\begin{acknowledgements}\nWe gratefully acknowledge the use of the latest publicly available version\nof the program {\\tt FOTEL} written by P.~Hadrava.\nWe thank A.~Aret, A.~Budovi\\v{c}ov\\'a, P.~Chadima, M.~Dov\\v{c}iak,\nJ.~Fuchs, P.~Hadrava, J.~Jury\\v{s}ek, E.~Kiran, L.~Kotkov\\'a,\nR.~K\\v{r}i\\v{c}ek, J.~Libich, J.~Nemravov\\'a, P.~Rutsch, S.~Saad, P.~\\v{S}koda,\nS.~\\v{S}tefl, and V.~Votruba, who obtained some of the Ond\\v{r}ejov spectra\nused in this study. J.R.~Percy kindly put the archive of his systematic\n\\hbox{$U\\!B{}V$}\\ observations of bright Be stars in three observatories at our disposal.\nWe also acknowledge the constructive suggestions of an anonymous referee\nto the first version of this study.\nOver the years, this long-term project was supported by the grants 205\/06\/0304,\n205\/08\/H005, P209\/10\/0715, and GA15-02112S of the Czech Science Foundation,\nby the grants 678212 and 250015 of the Grant Agency of the Charles University\nin Prague, from the research project AV0Z10030501 of the Academy of Sciences\nof the Czech Republic, and from the Research Program MSM0021620860\n{\\sl Physical study of objects and processes in the solar system and\nin astrophysics} of the Ministry of Education of the Czech Republic.\nThe research of PK was supported by the ESA PECS grant 98058.\nHB, DR, and DS acknowledge financial support\nfrom the Croatian Science Foundation\nunder the project 6212 ``Solar and Stellar Variability\".\nThis work has made use of data from\nthe European Space Agency (ESA) mission Gaia\n(\\url{https:\/\/www.cosmos.esa.int\/gaia}), processed by the Gaia\nData Processing and Analysis Consortium 
(DPAC;\n\\url{https:\/\/www.cosmos.esa.int\/web\/gaia\/dpac\/consortium}).\nFunding for the DPAC has been provided by national institutions,\nin particular the institutions participating in\nthe Gaia Multilateral Agreement. We also used some spectra\nof the BeSS database, operated at LESIA,\nObservatoire de Meudon, France: \\url{http:\/\/basebe.obspm.fr}.\nFinally, we acknowledge the use of the electronic database from\nthe CDS, Strasbourg, and the electronic bibliography maintained by\nthe NASA\/ADS system.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nAlthough the fifth-generation (5G) wireless network is still under deployment, researchers have moved forward to define the next-generation or sixth-generation (6G) wireless network, with the aim for achieving more stringent performance, such as unprecedentedly high throughput, super-high reliability, ultra-low latency, extremely low power consumption, etc \\cite{9040264,8766143}.\nHowever, these targets may not be fully achieved by only relying on the existing technologies, such as massive multi-input multi-output (MIMO) and millimeter wave (mmWave) communications, which can attain enhanced performance but generally incur more substantial energy consumption and hardware cost.\nOn the other hand, wireless communication performance is fundamentally constrained by the wireless channel impairments such as path-loss, shadowing, and small-scale fading, which can be partially mitigated by conventional wireless communication techniques such as power control, adaptive modulation, diversity, dynamic beamforming, etc., but still remain random and uncontrolled at large.\nRecently, \\emph{intelligent reflecting surface} (IRS) has emerged as a promising technology to address the above issues by leveraging massive low-cost reflecting elements to flexibly and dynamically control the radio signal propagation environment in favor of 
wireless communications\/sensing, thus achieving substantially improved communication spectral\/energy efficiency and sensing accuracy cost-effectively \\cite{9326394,9724202}.\n\nThe existing works on IRS have mainly considered the wireless systems aided by \\emph{passive IRS}.\nSpecifically, as illustrated in Fig. \\ref{p-IRS}, the passive IRS is composed of a large number of passive reflecting elements with positive resistances (e.g., positive-intrinsic-negative (PIN) diodes, field-effect transistors (FETs), micro-electromechanical system (MEMS) switches) \\cite{9326394}.\nAs such, each passive element can reflect the incident signal with a desired phase shift, while it has no signal processing\/amplification capability due to the lack of transmit\/receive radio frequency (RF) chains. Moreover, compared with the conventional half-duplex active relay, the passive IRS operates in a full-duplex mode and hence is free of amplification\/processing noise as well as self-interference \\cite{9119122}.\nBy properly adjusting the individual phase shifts of all passive reflecting elements with the reflection amplitude no larger than one, the reflected signal by IRS can be added constructively with that from the other propagation paths for enhancing the signal power at the intended receiver \\cite{9362274,9241706} or destructively for suppressing the undesired interference \\cite{9171881}.\nRemarkably, it has been shown in \\cite{8811733} that the passive IRS beamforming can achieve a \\emph{squared power scaling order}, i.e., $\\mathcal{O}(N^2)$ with $N$ denoting the number of reflecting elements, which is even higher than that of the massive MIMO with active arrays.\nExtensive research has been conducted recently to efficiently incorporate passive IRS into wireless systems for various purposes, e.g., enhancing the communication throughput \\cite{9714463}, reducing the outage probability \\cite{9205879}, saving the transmit power \\cite{8741198}, and extending the range of 
active relays \\cite{9464248,9586067}, among others.\nHowever, the performance gain of passive IRS is fundamentally constrained by the severe \\emph{product-distance} path-loss of the reflected channel by IRS \\cite{8888223}.\nTwo practical approaches to dealing with this problem are, respectively, deploying more passive elements at each IRS to enhance its aperture\/beamforming gain and placing the passive IRSs closer to the transmitter and\/or receiver for reducing the reflected channel product-distance \\cite{8982186}. However, these solutions may not be suitable for practical scenarios when the space of IRS site is limited and\/or its location cannot be freely selected.\n\\begin{figure}[t] \\centering \n{\\subfigure[{Passive IRS.}] {\n\\label{p-IRS}\n\\includegraphics[width=2in]{p-IRS.pdf} \n}} \n{\\subfigure[{Active IRS.}] {\\label{a-IRS}\n\\includegraphics[width=2in]{a-IRS.pdf} \n}}\n{\\subfigure[{Hybrid active-passive IRS.}] {\\label{h-IRS}\n\\includegraphics[width=2in]{h-IRS.pdf} \n}}\n{\\caption{Different types of IRS architecture.}}\n\\end{figure}\n\nTo tackle the above issues, a new type of IRS, called \\emph{active IRS}, has been recently proposed (see, e.g., \\cite{9377648,9734027,9530750,9568854}). \nSpecifically, the active IRS comprises a number of active reflecting elements, each equipped with an active load (or called negative resistance) such as the tunnel diode and negative impedance converter.\nAs illustrated in Fig. 
\\ref{a-IRS}, each active load is connected to an additional power supply for signal amplification \\cite{8403249,9219017}.\nTherefore, the active IRS not only enables adjustable phase shifts as the passive IRS, but also allows the amplitude amplification (i.e., larger than one) of incident signals in a full-duplex mode, albeit at modestly higher hardware and energy cost than the passive IRS \\cite{9377648}.\nOn the other hand, compared to the active relay that attaches RF chains to the antennas, the active IRS does not entail costly and power-hungry RF chain components \\cite{9758764}.\nThe performance comparison between the active and passive IRSs has been recently studied in the literature.\nFor example, given the active-IRS location and power budget, it has been shown that the active IRS can achieve higher spectral efficiency \\cite{9377648}, energy efficiency \\cite{9568854}, and reliability \\cite{9530403} than the passive IRS.\nBesides, the authors in \\cite{9530750} further optimized the IRS placement for both the passive- and active-IRS aided systems for rate maximization. It was shown that the passive IRS achieves higher rate performance than the active IRS with their respectively optimized placement when the number of reflecting elements is large and\/or the active-element amplification power is small. \nMoreover, the authors in \\cite{9734027} considered the same power budget constraint for both the passive- and active-IRS aided systems, where both the base station's (BS's) transmit power and active IRS's amplification power are considered in the case of active IRS. 
It was revealed that the active IRS outperforms the passive IRS only when the number of reflecting elements is small and\/or the amplification power of the active IRS is sufficiently large.\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=5in]{sysmod.pdf}}\n\\caption{A hybrid active-passive IRS aided wireless communication system.}\\label{sysmod}\n\\end{figure}\nTo summarize, the existing works on IRS have shown that passive and active IRSs have complementary advantages.\nSpecifically, the passive IRS has a higher asymptotic beamforming gain than the active IRS (i.e., $\\mathcal{O}(N^2)$ versus $\\mathcal{O}(N)$), thus is more appealing when the number of reflecting elements $N$ is large \\cite{9530750,9377648}. In contrast, the active IRS provides additional power amplification gain, which leads to a much higher signal-to-noise ratio (SNR) than the passive IRS when $N$ is relatively small \\cite{9723093,9716895,8403249}.\nBesides, the active IRS generally incurs higher cost and power consumption than the passive IRS. These thus indicate that given a total budget on the IRS deployment cost (or equivalently the number of active\/passive reflecting elements to be deployed), the conventional IRS architectures with either passive or active elements only in general may not achieve the optimum communication performance.\n\nMotivated by the above, we propose in this paper a new \\emph{hybrid active-passive IRS}\\footnote{We use the term hybrid IRS to denote this new architecture hereafter for brevity.} architecture as shown in Fig. \\ref{h-IRS} to achieve the advantages of both passive and active IRSs for further improving the performance over that with active or passive IRS alone.\nSpecifically, the hybrid IRS is composed of two co-located sub-surfaces, each consisting of a certain number of passive and active reflecting elements, respectively. 
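The complementary scaling laws quoted above can be checked with a few lines of arithmetic. The sketch below is a deliberately idealized model (all parameter values are assumed for illustration, the two link gains are taken equal, and the amplification-power split $\alpha^2\propto 1/N$ is a simplification), yet it reproduces the qualitative behaviour: the active IRS wins at small $N$, while the $\mathcal{O}(N^2)$ beamforming gain lets the passive IRS overtake at large $N$.

```python
# Toy comparison of passive vs. active IRS received SNR versus the number of
# elements N. All values are assumed for illustration only; they are not
# taken from the paper.
P_B = 1.0         # BS transmit power [W]
P_I = 0.1         # total amplification power budget of the active IRS [W]
g1 = g2 = 1e-6    # BS->IRS and IRS->user channel power gains
sigma_I2 = 1e-12  # amplification noise power [W]
sigma_02 = 1e-12  # receiver noise power [W]

def snr_passive(N):
    # Coherent passive beamforming: received signal power scales as N^2.
    return P_B * g1 * g2 * N**2 / sigma_02

def snr_active(N):
    # The amplification budget is shared by N elements, so alpha^2 ~ 1/N,
    # which reduces the asymptotic SNR scaling from N^2 to N.
    alpha2 = P_I / (N * (P_B * g1 + sigma_I2))
    signal = P_B * alpha2 * g1 * g2 * N**2
    noise = alpha2 * sigma_I2 * g2 * N + sigma_02
    return signal / noise

print(snr_active(16) > snr_passive(16))            # active wins at small N
print(snr_passive(200_000) > snr_active(200_000))  # passive wins at large N
```

With these (assumed) numbers the crossover occurs at a very large $N$, consistent with the observation that the preferred architecture depends on the element count and the amplification budget.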
\nIn particular, we design the active versus passive reflecting elements allocation at the hybrid IRS under a given deployment budget, to optimally balance the trade-off between the power amplification gain unique to the active elements and the higher beamforming gain of the passive elements.\nTo this end, we consider a hybrid active-passive IRS aided multi-user communication system as shown in Fig. \\ref{sysmod}, where a BS transmits independent data to a cluster of users.\nA hybrid IRS is properly deployed at the edge of this user cluster to serve the users in its half-space reflection region over different time slots.\nWe consider a given deployment budget for the hybrid IRS with different costs of each active and passive reflecting element based on practical models, where an active element generally incurs higher cost than its passive counterpart.\nTo reduce the real-time channel estimation overhead and avoid frequent IRS reflection adjustment, we consider a practical approach that designs the hybrid IRS beamforming based on the \\emph{statistical} channel state information (CSI) only, assuming the practical Rician fading channel model (i.e., only the channel path-loss parameters and Rician fading factors are assumed to be known), instead of requiring the knowledge of the instantaneous CSI of all links involved.\nIn the following, we summarize the main contributions of this paper.\n\n\\begin{itemize}\n\\item First, to guarantee the achievable rate performance of all users, we formulate an optimization problem to maximize the ergodic capacity of the worst-case user located at the boundary of the IRS reflection region.\nSpecifically, we assume that only the statistical CSI is available and jointly optimize the active\/passive reflecting elements allocation, their phase shifts, and the amplification factors of active elements, subject to various practical constraints on the active-element amplification factor and amplification power consumption, as well as the total 
active and passive elements deployment budget.\nThis problem, however, is shown to be non-convex and is thus difficult to solve optimally in general.\nTo address this difficulty, we approximate the ergodic capacity of the worst-case user with high accuracy and thereby reformulate the original problem in a simpler form.\n\n\\item Next, we propose an efficient algorithm to solve the reformulated problem. First, we jointly optimize all elements' phase shifts and the amplification factors of active elements based on the statistical CSI, and obtain a closed-form expression for the achievable ergodic capacity with a given elements allocation.\nThen, we apply the one-dimensional search to find the optimal active\/passive elements allocation to maximize the ergodic capacity. \nTo obtain useful insight into the optimal elements allocation, we further consider two special cases where the involved channels are line-of-sight (LoS) and follow Rayleigh fading, respectively. \nIt is shown that in the former case with LoS paths, only active elements need to be deployed when the total deployment budget is sufficiently small, while both active and passive elements should be deployed, with a decreasing ratio of active to passive elements, when the budget increases and exceeds a certain threshold.\n\n\\item Last, we present extensive numerical results to evaluate the effectiveness of our proposed hybrid IRS architecture and its optimized design for rate maximization.\nWe show that the hybrid IRS with optimized elements allocation outperforms the conventional active\/passive-only IRS architecture as well as other benchmarks. 
\nMoreover, the optimal active\/passive elements allocation under the general Rician fading channel and that under the LoS channel are presented, which are in accordance with our theoretical analysis.\nThe effects of several key parameters such as the Rician fading factor, active-element amplification power, and active\/passive-element deployment cost on the capacity performance and optimal active\/passive elements allocation are also investigated.\n\\end{itemize}\n\nIt is worth noting that the authors in \\cite{48550,9733238} considered a hybrid relay-reflecting intelligent surface architecture with a few relaying elements connected to power amplifiers and RF chains, which, however, significantly differs from our proposed hybrid IRS architecture with both active and passive reflecting elements. \nFor example, in \\cite{48550,9733238}, the relaying elements forward the signals with power-hungry RF chains, while the active reflecting elements in our proposed hybrid IRS architecture adopt negative-resistance components with much lower power consumption for signal amplification.\nThe remainder of this paper is organized as follows. The system model is first introduced in Section \\ref{sec_sysmod}, based on which we formulate an optimization problem and approximate the worst-case user's ergodic capacity in Section \\ref{sec_prob_approx}. \nIn Section \\ref{sec_design}, we elaborate on the algorithm to solve the reformulated optimization problem, and present theoretical results that provide useful insight into the optimal IRS active\/passive elements allocation.\nSimulation results and pertinent discussions are presented in Section \\ref{sec_sim}. Finally, the conclusions are drawn in Section \\ref{sec_conclu}.\n\n\\emph{Notations}: \nSuperscript $(\\cdot)^H$ stands for the Hermitian transpose operation. 
$\\mathbb{C}^{a \\times b}$ denotes the space of $a \\times b$ complex-valued matrices, $\\mathbb{N}$ denotes the set of natural numbers, and $\\mathbb{R}^+$ denotes the set of positive real numbers. \nThe operation $\\mathbb{E}\\left\\{\\cdot\\right\\}$ returns the expected value of a random variable, $\\arg(\\cdot)$ returns the angle of a complex number, and ${\\operatorname{diag}}{(\\boldsymbol{x})}$ returns a diagonal matrix with the elements in $\\boldsymbol{x}$ on its main diagonal. The notation $\\jmath$ represents the imaginary unit, $\\otimes$ denotes the Kronecker product, $\\mathcal{C} \\mathcal{N}(\\mu,\\sigma^2)$ denotes the circularly symmetric complex Gaussian (CSCG) random variable with mean of $\\mu$ and variance of $\\sigma^2$,\n$[\\cdot]_{m,n}$ denotes the $(m,n)$-th entry of a matrix, $[\\cdot]_{m}$ denotes the $m$-th entry of a vector, and $\\lfloor\nx\\rfloor$ denotes the largest integer that does not exceed the real number $x$.\n\n\\section{System Model}\\label{sec_sysmod}\nAs illustrated in Fig.~\\ref{sysmod}, we consider a hybrid active-passive IRS aided wireless communication system, where a single-antenna BS transmits independent data to a cluster of single-antenna users\\footnote{The proposed hybrid IRS and its results obtained in this paper can be extended to the case with multi-antenna BS and multiple IRSs\/user clusters, which will be investigated in our future work.}.\nWe assume that the BS and its served users are separated by long distance and the direct links between them are negligible due to the large channel path-loss and\/or severe blockage. 
\nAs such, a hybrid IRS comprising both active and passive reflecting elements is properly placed to serve the users in its half-space reflection region.\nMoreover, we consider the time division multiple access (TDMA) scheme, where the users are served by the BS over orthogonal time slots\\footnote{Under this setup, the TDMA scheme has been shown to outperform the non-orthogonal multiple access (NOMA) and orthogonal frequency division multiple access (OFDMA) schemes in terms of energy and spectral efficiency \\cite{8970580}.}. \nTo guarantee the system performance among all users, we consider the worst-case user performance at the boundary of the IRS reflection region with the IRS-user distance $d_{\\mathrm{IU}}$ and BS-IRS distance $D_{\\mathrm{BI}}$ (see Fig. \\ref{sysmod}).\n\nFor ease of implementation, we assume that the hybrid IRS comprises two co-located sub-surfaces with $N_{\\mathrm{pas}}$ passive and $N_{\\mathrm{act}}$ active reflecting elements, respectively (see Fig. \\ref{sysmod}).\nSpecifically, we denote by \n$\\mathbf{\\Psi}^{\\mathrm{pas}}\\triangleq {\\operatorname{diag}}{(e^{\\jmath\\varphi^{\\mathrm{pas}}_1},\\cdots,e^{\\jmath\\varphi^{\\mathrm{pas}}_{N_{\\mathrm{pas}}}})}$\nthe reflection matrix of the passive sub-surface, where $\\varphi^{\\mathrm{pas}}_n$ denotes the phase shift of the $n$-th passive element with $n\\in\\mathcal{N}_{\\mathrm{pas}}\\triangleq \\{1,\\cdots,N_{\\mathrm{pas}}\\}$, and the reflection amplitude of each passive element is set as one (i.e., its maximum value). 
The cost of each passive element is denoted by $W_{\\mathrm{pas}}$.\nOn the other hand, as the active sub-surface can simultaneously amplify the signal and tune its phase shift, we denote by $\\mathbf{\\Psi}^{\\mathrm{act}}\\triangleq\\mathbf{A}^{\\mathrm{act}}\\mathbf{\\Phi}^{\\mathrm{act}}$ the reflection matrix of the active sub-surface, where $\\mathbf{A}^{\\mathrm{act}}\\triangleq{\\operatorname{diag}}{(\\alpha_1,\\cdots,\\alpha_{N_{\\mathrm{act}}})}$ and $\\mathbf{\\Phi}^{\\mathrm{act}}\\triangleq\n{\\operatorname{diag}}{(e^{\\jmath\\varphi^{\\mathrm{act}}_1},\\cdots,e^{\\jmath\\varphi^{\\mathrm{act}}_{N_{\\mathrm{act}}}})}$\ndenote respectively its reflection amplification matrix and phase-shift matrix with $\\alpha_n$ and $\\varphi^{\\mathrm{act}}_n$ representing the amplification factor and phase shift of each active element $n\\in\\mathcal{N}_{\\mathrm{act}}\\triangleq\\{1,\\cdots,N_{\\mathrm{act}}\\}$. \nTo ensure that each active reflecting element amplifies the signal, we impose a constraint on the amplification factor for each active element as $\\alpha_n\\geq \\alpha_{\\min},\\forall n\\in\\mathcal{N}_{\\mathrm{act}}$ with $\\alpha_{\\min}\\geq 1$ \\cite{9377648}. Moreover, the limited load of each active element \\cite{8403249} leads to the following constraint on its maximum amplification factor: $\\alpha_n\\leq\\alpha_{\\max},\\forall n\\in\\mathcal{N}_{\\mathrm{act}}$ with $\\alpha_{\\max}>\\alpha_{\\min}$.\nLet $W_{\\mathrm{act}}$ represent the deployment cost of each active element, which in general is larger than that of each passive element (i.e., $W_{\\mathrm{act}}>W_{\\mathrm{pas}}$) due to its more sophisticated hardware (i.e., additional power amplifier and amplification control circuit \\cite{9377648}) and higher static operation power (e.g., 6-20 mW for active element \\cite{7920385} versus 5 mW for passive element \\cite{8888223}). 
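The two reflection models above can be written down directly in code. A minimal numpy sketch follows, where the amplification bounds $\alpha_{\min}$ and $\alpha_{\max}$ are placeholder values chosen for illustration, not taken from any hardware specification:

```python
import numpy as np

# Assumed amplification bounds (placeholders, for illustration only).
ALPHA_MIN, ALPHA_MAX = 1.0, 40.0

def passive_reflection(phases):
    """Psi^pas = diag(e^{j phi_1}, ..., e^{j phi_Npas}): phase-only, unit amplitude."""
    return np.diag(np.exp(1j * np.asarray(phases, dtype=float)))

def active_reflection(alphas, phases):
    """Psi^act = A^act Phi^act, with alpha_min <= alpha_n <= alpha_max."""
    alphas = np.asarray(alphas, dtype=float)
    if np.any(alphas < ALPHA_MIN) or np.any(alphas > ALPHA_MAX):
        raise ValueError("amplification factors must lie in [alpha_min, alpha_max]")
    return np.diag(alphas) @ np.diag(np.exp(1j * np.asarray(phases, dtype=float)))
```

Applying these matrices as $(\mathbf{h}_{\mathrm{IU}})^H\mathbf{\Psi}\,\mathbf{h}_{\mathrm{BI}}$ then yields the cascaded reflected channel of the corresponding sub-surface.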
\nMoreover, we denote $W_0$ as the total deployment budget for the hybrid IRS such that $N_{\\mathrm{act}}W_{\\mathrm{act}}+N_{\\mathrm{pas}}W_{\\mathrm{pas}}\\leq W_0$.\n\nWe assume the practical Rician fading channel model for all involved links. As such, the baseband equivalent channel from the BS to the active IRS sub-surface, denoted by $\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}\\in\\mathbb{C}^{N_{\\mathrm{act}}\\times 1}$, can be modeled as \n\\begin{equation}\n \\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}=\\sqrt{\\frac{K_{1}}{K_{1}+1}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}+\\sqrt{\\frac{1}{K_{1}+1}}\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}},\n\\end{equation}\nwhere $K_{1}$ is the Rician fading factor of the BS$\\to$IRS link, $\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\in\\mathbb{C}^{N_{\\mathrm{act}} \\times 1}$ is the LoS component, and $\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\in\\mathbb{C}^{N_{\\mathrm{act}} \\times 1}$ is the non-LoS (NLoS) component.\nSpecifically, the LoS component can be modeled as $\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}={h}_{\\mathrm{BI}}^{\\mathrm{act}}\\boldsymbol{a}_{\\mathrm{r}}\\left(\\vartheta_{\\mathrm{BI}}^{\\mathrm{r}},\\upsilon_{\\mathrm{BI}}^{\\mathrm{r}}, N_{\\mathrm{act}}\\right)$, where ${h}_{\\mathrm{BI}}^{\\mathrm{act}}\\triangleq\\sqrt{\\beta}\/D_{\\mathrm{BI}}e^{-\\jmath\\frac{2\\pi}{\\lambda}D_{\\mathrm{BI}}}$ denotes the complex channel gain with $\\lambda$ and $\\beta$ representing respectively the carrier wavelength and the reference channel gain at a distance of 1 meter (m); and $\\vartheta_{\\mathrm{BI}}^{\\mathrm{r}}\\left(\\upsilon_{\\mathrm{BI}}^{\\mathrm{r}}\\right) \\in[0, \\pi]$ represents the azimuth (elevation) angle-of-arrival (AoA) at the IRS.\nMoreover, $\\boldsymbol{a}_{\\mathrm{r}}\\left(\\vartheta_{\\mathrm{BI}}^{\\mathrm{r}}, \\upsilon_{\\mathrm{BI}}^{\\mathrm{r}}, N_{\\mathrm{act}}\\right)$ denotes the receive response vector, which is given 
by $\\boldsymbol{a}_{\\mathrm{r}}\\left(\\vartheta_{\\mathrm{BI}}^{\\mathrm{r}}, \\upsilon_{\\mathrm{BI}}^{\\mathrm{r}}, N_{\\mathrm{act}}\\right)=\\boldsymbol{u}\\left(\\frac{2 d_{\\mathrm{I}}}{\\lambda} \\cos \\left(\\vartheta_{\\mathrm{BI}}^{\\mathrm{r}}\\right) \\sin \\left(\\upsilon_{\\mathrm{BI}}^{\\mathrm{r}}\\right), N_{\\mathrm{x}}\\right) \\otimes$ $\\boldsymbol{u}\\left(\\frac{2 d_{\\mathrm{I}}}{\\lambda} \\sin \\left(\\vartheta_{\\mathrm{BI}}^{\\mathrm{r}}\\right) \\sin \\left(\\upsilon_{\\mathrm{BI}}^{\\mathrm{r}}\\right), N_{\\mathrm{y}}\\right)$, with $\\boldsymbol{u}(\\varsigma, M)=[1, e^{-\\jmath \\pi \\varsigma},$ $\\ldots, $ $e^{-(M-1) \\jmath \\pi \\varsigma}]^{T}$ representing the steering vector function, and $d_{\\mathrm{I}}$, $N_{\\mathrm{x}}$, and $N_{\\mathrm{y}}$ denoting the distance between adjacent reflecting elements and the numbers of active reflecting elements along the $x$- and $y$-axes of the IRS surface, respectively.\nBesides, the NLoS component, $\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}$, follows the complex Gaussian distribution with each entry $[\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}]_{n}\\sim\\frac{\\sqrt{\\beta}}{D_{\\mathrm{BI}}}\\mathcal{C} \\mathcal{N}(0,1), \\forall n\\in\\mathcal{N}_{\\mathrm{act}}$.\nSimilarly, the baseband equivalent channel from the active IRS sub-surface to the worst-case user, $\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}}\\in\\mathbb{C}^{N_{\\mathrm{act}}\\times 1}$, can be modeled as\n\\begin{equation}\n \\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}}=\\sqrt{\\frac{K_{2}}{K_{2}+1}}\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}}+\\sqrt{\\frac{1}{K_{2}+1}}\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}},\n\\end{equation}\nwhere $K_{2}$ denotes the Rician fading factor of the IRS$\\to$user link, $\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}}\\in\\mathbb{C}^{N_{\\mathrm{act}} \\times 1}$ denotes the LoS component, and 
$\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}}\\in\\mathbb{C}^{N_{\\mathrm{act}} \\times 1}$ denotes the NLoS component.\nThe baseband equivalent channel from the BS to IRS passive sub-surface, $\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{pas}}$, and that from the IRS passive sub-surface to the worst-case user, $\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{pas}}$, can be defined similarly as $\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}$ and $\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}}$, with the details omitted for brevity. \n\nBased on the above, the received signal at the worst-case user aided by the hybrid IRS is given by\n\\begin{equation}\n y = (\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}s+(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{pas}}s+(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{z}_{\\mathrm{I}}+z_0,\n \n\\end{equation}\nwhere $s\\in\\mathbb{C}$ denotes the information symbol with $\\mathbb{E}\\left\\{\\left|s\\right|^{2}\\right\\}=P_{\\mathrm{B}}$, and $P_{\\mathrm{B}}$ denotes the transmit power of the BS. \nMoreover, $\\mathbf{z}_{\\rm{I}}\\in\\mathbb{C}^{N_{\\mathrm{act}} \\times 1}$ is the thermal noise introduced by the active elements due to signal amplification, which is assumed to follow the independent CSCG distribution, i.e., $\\mathbf{z}_{\\rm{I}} \\sim \\mathcal{C} \\mathcal{N}\\left(\\mathbf{0}_{N_{\\mathrm{act}}}, \\sigma_{\\mathrm{I}}^{2} \\mathbf{I}_{N_{\\mathrm{act}}}\\right)$ with $\\sigma_{\\mathrm{I}}^{2}$ denoting the amplification noise power, and $z_0\\sim\\mathcal{C} \\mathcal{N}\\left(0, \\sigma_0^{2}\\right)$ is the additive white Gaussian noise (AWGN) at the user. 
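The received-signal model above can be simulated directly. The sketch below uses unit-variance toy channels as stand-ins for the Rician-faded links, applies per-element co-phasing based on instantaneous CSI (for illustration only; the design pursued in this paper relies on statistical CSI), and assembles the four terms of $y$: the active path, the passive path, the amplified IRS noise, and the receiver AWGN. All dimensions and powers are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup (illustrative only).
N_act, N_pas = 8, 32
P_B, sigma_I2, sigma_02 = 1.0, 1e-12, 1e-12
alpha = 4.0   # common amplification factor of the active elements

def cn(n, var=1.0):
    """i.i.d. circularly symmetric complex Gaussian samples."""
    return np.sqrt(var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

h_bi_a, h_iu_a = cn(N_act), cn(N_act)   # BS->IRS / IRS->user, active sub-surface
h_bi_p, h_iu_p = cn(N_pas), cn(N_pas)   # BS->IRS / IRS->user, passive sub-surface

# Per-element co-phasing: each element cancels its own cascaded channel phase.
psi_act = alpha * np.diag(np.exp(-1j * np.angle(np.conj(h_iu_a) * h_bi_a)))
psi_pas = np.diag(np.exp(-1j * np.angle(np.conj(h_iu_p) * h_bi_p)))

s = np.sqrt(P_B)                        # one symbol with E{|s|^2} = P_B
z_I = cn(N_act, sigma_I2)               # amplification noise at the active elements
z_0 = cn(1, sigma_02)[0]                # receiver AWGN

# Received signal: active path + passive path + amplified IRS noise + AWGN.
y = (h_iu_a.conj() @ psi_act @ h_bi_a) * s \
    + (h_iu_p.conj() @ psi_pas @ h_bi_p) * s \
    + h_iu_a.conj() @ psi_act @ z_I + z_0
```

With this co-phasing, both cascaded channel gains are real and positive, so the two reflected paths add constructively at the receiver.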
\nNote that at the user's receiver, the desired signal is the superposition of the signals reflected by both the active and passive elements, while the total noise comprises the amplification noise introduced by the active elements and the thermal noise at the receiver.\n\nAs such, the receiver SNR at the worst-case user is given by\n\\begin{equation}\n \\gamma = \\frac{P_{\\mathrm{B}}|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}+(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{pas}}|^2}{\\sigma_{\\mathrm{I}}^2\\|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2+\\sigma_0^2}.\\label{snr_hybrid}\n\\end{equation}\nThus, the ergodic capacity achieved by the worst-case user in the hybrid IRS aided wireless communication system is given by\n\\begin{equation}\n C=\\mathbb{E}\\left\\{\\log _{2}(1+\\gamma)\\right\\},\\label{ergo_capa}\n\\end{equation}\nwhere the expectation is taken over the random NLoS components in all channels involved.\n\\section{Problem Formulation and Ergodic Capacity Analysis}\\label{sec_prob_approx}\nWe aim to maximize the ergodic capacity of the worst-case user subject to a total deployment budget of $W_0$ by optimizing the numbers of active and passive elements,\n$N_{\\mathrm{act}}$ and $N_{\\mathrm{pas}}$, the IRS phase shifts, \\{$\\mathbf{\\Phi}^{\\mathrm{act}}$, $\\mathbf{\\Psi}^{\\mathrm{pas}}$\\}, and the active-element amplification matrix, $\\mathbf{A}^{\\mathrm{act}}$. 
This problem can be formulated as follows.\n\\begin{align}\n &\\hspace{-0.2cm}\\mathrm{(P1)}~~~~~\\max_{\\mathbf{\\Phi}^{\\mathrm{act}},\\mathbf{\\Psi}^{\\mathrm{pas}},\\mathbf{A}^{\\mathrm{act}},N_{\\mathrm{act}},N_{\\mathrm{pas}}}\n \\quad~~\\mathbb{E}\\left\\{\\log _{2}\\left(1\\!+\\!\\frac{P_{\\mathrm{B}}|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}\\!+\\!(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{pas}}|^2}{\\sigma_{\\mathrm{I}}^2\\|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2\\!+\\!\\sigma_0^2}\\right)\\right\\} \\label{obj_func_orig}\\\\\n &\\qquad\\qquad~~~~\\!~\\quad~\\text{s.t.} \\qquad\\qquad\\qquad~~0<\\varphi_n^{\\mathrm{act}}\\leq 2\\pi,\\forall n\\in\\mathcal{N}_{\\mathrm{act}},\\label{cons_phase_act}\\\\\n &\\qquad\\qquad~~~~~~~~\\quad~\\qquad\\qquad\\qquad~~0<\\varphi_n^{\\mathrm{pas}}\\leq 2\\pi,\\forall n\\in\\mathcal{N}_{\\mathrm{pas}},\\\\\n &\\qquad\\qquad~~~~~~~~\\quad~\\qquad\\qquad\\qquad~~\\alpha_{\\min}\\leq \\alpha_n\\leq \\alpha_{\\max},\\forall n\\in\\mathcal{N}_{\\mathrm{act}},\\label{cons_alpha}\\\\\n &\\qquad\\qquad~~~~~~~~\\quad~\\qquad\\qquad\\qquad~~ \\mathbb{E}\\left\\{P_{\\mathrm{B}}\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}\\|^2+\\sigma_{\\mathrm{I}}^2\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2\\right\\}\\leq P_{\\mathrm{I}},\\label{cstr_power_HI}\\\\\n &\\qquad\\qquad~~~~~~~~\\quad~\\qquad\\qquad\\qquad~~N_{\\mathrm{act}}W_{\\mathrm{act}}+N_{\\mathrm{pas}}W_{\\mathrm{pas}}\\leq W_0,\\label{cons_C}\\\\\n &\\qquad\\qquad~~~~~~~~\\quad~\\qquad\\qquad\\qquad~~N_{\\mathrm{act}}\\in\\mathbb{N},N_{\\mathrm{pas}}\\in\\mathbb{N}\\label{cons_C_AnP},\n\\end{align}\nwhere the constraint \\eqref{cstr_power_HI} indicates that the average amplification power of all active elements over the Rician fading channels is constrained by 
the average amplification power budget, $P_{\\mathrm{I}}$.\nNote that different from the existing works on IRS-aided wireless communications that generally require the instantaneous CSI knowledge (e.g., \\cite{9133142}), we consider a practical scenario where only the statistical CSI, i.e., $\\{\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}},\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}},$ $\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}},$ $\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}},$ $K_1,$ $K_2\\}$ is known \\textit{a priori}, which suffices for the design of active\/passive elements allocation for maximizing the ergodic capacity (in the worst case).\nThis approach also reduces the real-time channel estimation overhead and avoids frequent IRS reflection adjustment for each user. \n\nFor problem (P1), note that it includes the conventional IRS architectures with all-passive and all-active elements as\ntwo special cases. Specifically, when $N_{\\mathrm{act}}=0$, the hybrid IRS reduces to the conventional passive IRS and we have $\\mathbf{\\Psi}^{\\mathrm{act}}=\\mathbf{0}_{N_{\\mathrm{act}}\\times N_{\\mathrm{act}}}$. On the other hand, when $N_{\\mathrm{pas}}=0$, it reduces to the conventional active IRS and thus $\\mathbf{\\Psi}^{\\mathrm{pas}}=\\mathbf{0}_{N_{\\mathrm{pas}}\\times N_{\\mathrm{pas}}}$.\nHowever, problem (P1) is challenging to solve even in the above two special cases, since the phase shifts are coupled with the amplification factors in the function of the ergodic capacity (see \\eqref{snr_hybrid} and \\eqref{ergo_capa}). 
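Because the budget constraint \eqref{cons_C} couples the two integers only linearly, the candidate allocations form a one-dimensional grid over $N_{\mathrm{act}}$, which is what makes a direct search tractable. The sketch below enumerates the budget-feasible pairs and scores each by a Monte Carlo estimate of \eqref{ergo_capa} under simplifying assumptions: randomized LoS phases, ideal per-element co-phasing, a common amplification factor, and illustrative parameter values (none of which are taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only: equal link gains, common amplification factor,
# and example element costs with W_act > W_pas.
P_B, K, g = 1.0, 3.0, 1e-6
sigma_I2, sigma_02 = 1e-12, 1e-12
W0, W_act, W_pas = 100, 5, 1
alpha = 10.0

def rician(n, K, gain):
    """Rician channel draw; the LoS phases are randomized for simplicity."""
    los = np.exp(1j * 2 * np.pi * rng.random(n))
    nlos = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return np.sqrt(gain) * (np.sqrt(K / (K + 1)) * los + np.sqrt(1 / (K + 1)) * nlos)

def capacity(n_act, n_pas, trials=500):
    """Monte Carlo estimate of E{log2(1+gamma)} with ideal per-element co-phasing."""
    caps = []
    for _ in range(trials):
        h_bi_a, h_iu_a = rician(n_act, K, g), rician(n_act, K, g)
        h_bi_p, h_iu_p = rician(n_pas, K, g), rician(n_pas, K, g)
        sig = alpha * np.abs(h_iu_a * h_bi_a).sum() + np.abs(h_iu_p * h_bi_p).sum()
        noise = sigma_I2 * alpha**2 * (np.abs(h_iu_a) ** 2).sum() + sigma_02
        caps.append(np.log2(1.0 + P_B * sig**2 / noise))
    return float(np.mean(caps))

# One-dimensional search: for each N_act, spend the leftover budget on the
# cheaper passive elements, then keep the allocation with the largest capacity.
allocations = [(n_act, (W0 - n_act * W_act) // W_pas)
               for n_act in range(W0 // W_act + 1)]
best = max(allocations, key=lambda a: capacity(a[0], a[1]))
```

The endpoints of the grid recover the two special cases noted above: the all-passive IRS ($N_{\mathrm{act}}=0$) and the all-active IRS ($N_{\mathrm{pas}}=0$).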
Moreover, the numbers of active and passive elements are discrete, rendering the design objective a complicated function and the constraints in \\eqref{cstr_power_HI}--\\eqref{cons_C_AnP} non-convex.\n\nTo address the above issues, we first analyze the ergodic capacity to approximate the objective function of problem (P1) in a simpler form.\n\n{\\color{black}\\begin{lemma}\\label{lem_C_approx}\n\\textbf{\\emph{(Ergodic Capacity Approximation)}} \\emph{The ergodic capacity in \\eqref{obj_func_orig} can be approximated by}\n\\begin{align}\n C\\approx \\tilde{C}\\triangleq\\log _{2}\\left(1+\\frac{x_{\\mathrm{L}}+x_{\\mathrm{NL,act}}+x_{\\mathrm{NL,pas}}}{z_{\\mathrm{L,act}}+z_{\\mathrm{NL,act}}+\\sigma_0^2}\\right),\\label{sig_approx}\n\\end{align}\n\\emph{where}\n\\begin{align}\n &x_{\\mathrm{L}} \\triangleq \\frac{K_1K_2P_{\\mathrm{B}}}{(K_1\\!+\\!1)(K_2\\!+\\!1)}\\left|(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\!+\\!(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}\\right|^2,\\label{x_1}\\\\\n &x_{\\mathrm{NL,act}}\\triangleq\\frac{P_{\\mathrm{B}}}{(K_1\\!+\\!1)(K_2\\!+\\!1)}\\Big(\\frac{K_1\\beta}{d_{\\mathrm{IU}}^2}\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\|^2\\!+\\!\\frac{K_2\\beta}{D_{\\mathrm{BI}}^2}\\|(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2\\!+\\!\\frac{\\beta^2}{D_{\\mathrm{BI}}^2d_{\\mathrm{IU}}^2}\\sum_{n=1}^{N_{\\mathrm{act}}}\\alpha^2_{n}\\Big),\\label{x_2}\\\\\n 
&x_{\\mathrm{NL,pas}}\\triangleq\\frac{P_{\\mathrm{B}}}{(K_1\\!+\\!1)(K_2\\!+\\!1)}\\Big(\\frac{K_1\\beta}{d_{\\mathrm{IU}}^2}\\|\\mathbf{\\Psi}^{\\mathrm{pas}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}\\|^2\\!+\\!\\frac{K_2\\beta}{D_{\\mathrm{BI}}^2}\\|(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\|^2\\!+\\!\\frac{\\beta^2N_{\\mathrm{pas}}}{D_{\\mathrm{BI}}^2d_{\\mathrm{IU}}^2}\\Big),\\label{x_3}\\\\\n &{z_{\\mathrm{L,act}}}\\triangleq{\\frac{K_2\\sigma_{\\rm I}^2}{K_2\\!+\\!1}\\|(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2},\\quad\n {z_{\\mathrm{NL,act}}}\\triangleq{\\frac{\\sigma_{\\rm I}^2}{K_2\\!+\\!1}\\|(\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2}.\n \\end{align}\n\\end{lemma}}\n\\begin{proof}\nSee Appendix \\ref{proof_lem1}.\n\\end{proof}\nThe accuracy of the approximation in Lemma \\ref{lem_C_approx} will be numerically verified in Section \\ref{sec_sim}.\nSimilarly, the average amplification power consumption can be expressed as\n\\begin{align}\n \\!\\!\\!\\!\\!\\mathbb{E}\\left\\{P_{\\mathrm{B}}\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}\\|^2\\!+\\!\\sigma_{\\mathrm{I}}^2\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2\\right\\}\\!=\n \\!\\frac{P_{\\mathrm{B}}}{K_2\\!+\\!1}\\left(K_2\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\|^2\\!+\\!\\mathbb{E}\\left\\{\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\|^2\\right\\}\\right)\\!+\\!\\sum_{n=1}^{N_{\\mathrm{act}}}\\sigma_{\\mathrm{I}}^2\\alpha^2_{n}\\label{power_decomp}.\n\\end{align}\nAs such, problem (P1) is reformulated as follows by using Lemma \\ref{lem_C_approx} and substituting \\eqref{power_decomp} into \\eqref{cstr_power_HI},\n\\begin{align}\n 
&\\mathrm{(P2)}\\max_{\\mathbf{\\Phi}^{\\mathrm{act}},\\mathbf{\\Psi}^{\\mathrm{pas}},\\mathbf{A}^{\\mathrm{act}},N_{\\mathrm{act}},N_{\\mathrm{pas}}}\n \\quad~~\\log _{2}\\left(1+\\frac{x_{\\mathrm{L}}+x_{\\mathrm{NL,act}}+x_{\\mathrm{NL,pas}}}{z_{\\mathrm{L,act}}+z_{\\mathrm{NL,act}}+\\sigma_0^2}\\right) \\\\\n &\\qquad\\qquad~~~~\\!~\\text{s.t.} \\qquad\\qquad~~\\eqref{cons_phase_act}-\\eqref{cons_alpha}, \\eqref{cons_C},\\eqref{cons_C_AnP}\\nonumber,\\\\\n &\\qquad~~~~~~~\\qquad\\qquad\\qquad~~ \\frac{P_{\\mathrm{B}}}{K_2\\!+\\!1}\\left(K_2\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\|^2\\!+\\!\\mathbb{E}\\left\\{\\|\\mathbf{\\Psi}^{\\mathrm{act}}\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}\\|^2\\right\\}\\right)\\!+\\!\\sum_{n=1}^{N_{\\mathrm{act}}}\\sigma_{\\mathrm{I}}^2\\alpha^2_{n}\\leq\\! P_{\\mathrm{I}}.\\label{cstr_power_HI_decomp}\n\\end{align}\n\n\\section{Optimal Solution to Problem (P2)}\\label{sec_design}\nIn this section, we aim to optimally solve problem (P2) and gain useful insight into the optimal active\/passive elements allocation at the hybrid IRS.\nSpecifically, we first obtain the optimal hybrid IRS beamforming and active\/passive elements allocation based on the statistical CSI under the general Rician fading channel model. 
Then, we consider two special channel setups, i.e., the LoS and Rayleigh fading channel models, to characterize the optimal active\/passive elements allocation in closed form.\n\n\\subsection{IRS Phase Shift Optimization}\nGiven any feasible active\/passive elements allocation and active-element amplification factors, it can be easily shown that the approximated ergodic capacity in \\eqref{sig_approx} is maximized when the LoS components of the IRS-associated channels, i.e., $(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}$ and $(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}$, are phase-aligned. The optimal IRS phase shifts are thus given by\n\\begin{align}\n &\\varphi_{n}^{\\mathrm{act}} = \\arg([\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}}]_n)-\\arg([\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}]_n),\\forall n\\in\\mathcal{N}_{\\mathrm{act}},\\label{opt_phase_1}\\\\\n &\\varphi_{n}^{\\mathrm{pas}} = \\arg([\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}}]_n)-\\arg([\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}]_n),\\forall n\\in \\mathcal{N}_{\\mathrm{pas}},\\label{opt_phase_2}\n\\end{align}\nwhere the optimal phase shifts of the active and passive sub-surfaces take a similar form.\nThen, by substituting the optimal phase shifts in \\eqref{opt_phase_1} and \\eqref{opt_phase_2} into \\eqref{sig_approx}, we have\n\\begin{align}\n &x_{\\mathrm{L}}=\\gamma_1\\left(\\sum_{n=1}^{N_{\\mathrm{act}}}\\alpha_n+N_{\\mathrm{pas}}\\right)^2P_{\\mathrm{B}}\\beta^2\/D^2_{\\mathrm{BI}}d^2_{\\mathrm{IU}},\\\\\n &x_{\\mathrm{NL,act}}+x_{\\mathrm{NL,pas}}=\\gamma_2\\left(\\sum_{n=1}^{N_{\\mathrm{act}}}\\alpha_n^2+N_{\\mathrm{pas}}\\right)P_{\\mathrm{B}}\\beta^2(K_1+K_2+1)\/D^2_{\\mathrm{BI}}d^2_{\\mathrm{IU}},\\label{x_NLoS}\\\\\n 
&z_{\\mathrm{L,act}}+z_{\\mathrm{NL,act}}=\\sum_{n=1}^{N_{\\mathrm{act}}}\\alpha_n^2\\sigma_{\\mathrm{I}}^2\\beta\/d_{\\mathrm{IU}}^{2},\\label{n_act}\n\\end{align}\nwhere \n\\begin{equation}\n \\gamma_1 \\triangleq \\frac{K_1K_2}{(K_1+1)(K_2+1)}, \\quad \\gamma_2 \\triangleq \\frac{1}{(K_1+1)(K_2+1)}.\\label{gamma_approx}\n\\end{equation}\nNote that the hybrid IRS beamforming based only on statistical CSI exploits the LoS components of the involved channels while ignoring their NLoS components.\n\n\\subsection{Amplification Factor Optimization for Active Elements}\n\nIn this subsection, we optimize the amplification factor given any feasible active\/passive elements allocation and optimal IRS phase shifts (see \\eqref{opt_phase_1} and \\eqref{opt_phase_2}). To this end, we first present an important lemma as follows.\n\\begin{lemma}\\label{lem_min}\n\\textbf{\\emph{(Minimum amplification power for active elements)}}\n\\emph{Given the total deployment budget $W_0$, the hybrid IRS should employ passive elements only (i.e., $N_{\\mathrm{act}}=0$) when}\n\\begin{equation}\n P_{\\mathrm{I}} < \\alpha^2_{\\min}\\left(P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}\\right),\\label{lem_pas}\n\\end{equation}\n\\emph{and employ active elements (i.e., $N_{\\mathrm{act}}>0$) otherwise.}\n\\end{lemma}\n\\begin{proof}\n{It can be shown that when \\eqref{lem_pas} holds, the maximum amplification factor that $P_{\\mathrm{I}}$ can support is smaller than its allowable minimum value, i.e., $\\alpha_{n}<\\alpha_{\\min},\\forall n\\in \\mathcal{N}_{\\mathrm{act}}$, thus making it infeasible to satisfy $\\alpha_{n}\\geq\\alpha_{\\min}$.}\n\\end{proof}\n\nNext, we consider the non-trivial case with $P_{\\mathrm{I}} \\geq \\alpha^2_{\\min}\\left(P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}\\right)$ and hence $N_{\\mathrm{act}}>0$. 
Given the optimal phase shift design in \\eqref{opt_phase_1} and \\eqref{opt_phase_2}, problem (P2) reduces to the following problem for optimizing the amplification factors of the $N_{\\mathrm{act}}$ active elements.\n\\begin{align}\n &\\mathrm{(P3)}~~~~~\\max_{\\mathbf{A}^{\\mathrm{act}}}\n \\quad~~\\log _{2}\\left(1+\\frac{x_{\\mathrm{L}}+x_{\\mathrm{NL,act}}+x_{\\mathrm{NL,pas}}}{z_{\\mathrm{L,act}}+z_{\\mathrm{NL,act}}+\\sigma_0^2}\\right) \\\\\n &\\qquad\\!~~~~\\quad~\\text{s.t.} \\qquad~~\\eqref{cons_alpha}\\nonumber,\\\\\n &\\qquad~\\qquad\\qquad\\qquad\\sum_{n=1}^{N_{\\mathrm{act}}}\\alpha^2_{n}\\left(P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}\\right)\\leq P_{\\mathrm{I}}.\\label{cstr_power_HI_decomp_2}\n\\end{align}\nThe constraints in \\eqref{cons_alpha} and \\eqref{cstr_power_HI_decomp_2} indicate that the amplification factors of all active elements are bounded by both the hardware limitation and the amplification power budget. \nTo simplify the analysis, we have the following lemma.\n\\begin{lemma}\\label{lem_P_I_range}\n\\textbf{\\emph{(Favorable amplification power condition)}}\n\\emph{For problem (P3), the constraint in \\eqref{cons_alpha} is always satisfied if}\n\\begin{equation}\n\\alpha^2_{\\min}\\left(P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}\\right)\\leq P_{\\mathrm{I}}\\leq W_0\\alpha^2_{\\max}\\left(P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}\\right)\/W_{\\mathrm{act}}.\\label{cons_pb}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFirst, it can be shown that when \n\\begin{equation}\n P_{\\mathrm{I}}\\geq W_0\\alpha^2_{\\max}\\left(P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}\\right)\/W_{\\mathrm{act}},\\label{lem_max}\n\\end{equation}\nthe amplification factors of $N_{\\mathrm{act}}$ active elements are equal to their maximum value, i.e., $\\alpha_{n}=\\alpha_{\\max},\\forall n\\in\\mathcal{N}_{\\mathrm{act}}$, where the power amplification is constrained by the 
maximum load.\nThen, combining \\eqref{lem_max} and Lemma \\ref{lem_min}, if \\eqref{cons_pb} holds, there always exists a feasible $\\alpha_{n}\\in[\\alpha_{\\min},\\alpha_{\\max}]$ satisfying the constraint in \\eqref{cons_alpha} of problem (P3) by choosing an appropriate $N_{\\mathrm{act}}\\in[0,W_0\/W_{\\mathrm{act}}]$.\n\\end{proof}\n\nLemma \\ref{lem_P_I_range} gives the favorable amplification power condition under which all active elements can operate in the amplification mode (i.e., $\\alpha_{n}\\geq\\alpha_{\\min}$) and make full use of the amplification power (i.e., $\\alpha_{n}\\leq\\alpha_{\\max}$).\nIn contrast, if the amplification power is too large, i.e., satisfying \\eqref{lem_max}, the amplification factor of each active element is constrained by its hardware limitation, i.e., $\\alpha_n=\\alpha_{\\max},\\forall n\\in\\mathcal{N}_{\\mathrm{act}}$. On the other hand, if the amplification power is insufficient, i.e., when \\eqref{lem_pas} holds, the active elements cannot operate in the amplification mode, i.e., $\\alpha_n< \\alpha_{\\min},\\exists n\\in \\mathcal{N}_{\\mathrm{act}}$, thus making problem (P3) infeasible. 
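Taken together, \\eqref{lem_pas} and \\eqref{lem_max} partition the amplification power budget into three regimes, which can be sketched numerically as follows (a minimal Python illustration; all parameter values used in it are assumptions, not taken from this paper):

```python
def amplification_regime(P_I, P_B, beta, D_BI, sigma_I2,
                         alpha_min, alpha_max, W0, W_act):
    """Classify the amplification power budget P_I against the
    thresholds of Lemmas 2 and 3 (illustrative sketch only)."""
    # Power consumed per unit of squared amplification factor:
    # P_B * beta / D_BI^2 (amplified signal) plus sigma_I^2 (amplification noise).
    per_unit = P_B * beta / D_BI**2 + sigma_I2
    P_lo = alpha_min**2 * per_unit                  # Lemma 2 threshold
    P_hi = W0 * alpha_max**2 * per_unit / W_act     # Lemma 3 upper threshold
    if P_I < P_lo:
        return "passive-only"      # Lemma 2: employ passive elements only
    if P_I > P_hi:
        return "hardware-limited"  # every alpha_n is pinned at alpha_max
    return "favorable"             # favorable condition of Lemma 3 holds
```

Only amplification budgets in the middle regime allow every active element to run strictly between $\alpha_{\min}$ and $\alpha_{\max}$.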
\nTherefore, in the sequel, we only consider the case where $P_{\\mathrm{I}}$ satisfies the favorable amplification power condition given in \\eqref{cons_pb} in order to draw useful insight into the optimal IRS active\/passive elements allocation.\n\\begin{lemma}\\label{lem1}\n\\textbf{\\emph{(Optimal amplification factors)}}\n\\emph{Given \\eqref{cons_pb} and $N_{\\mathrm{act}}>0$, the optimal amplification factor for each active element in problem (P3) is given by}\n\\begin{align}\n \\alpha_{n}=\\alpha^* \\triangleq \\sqrt{\\frac{P_{\\mathrm{I}}\/N_{\\mathrm{act}}}{P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}}}, \\forall n \\in\\mathcal{N}_{\\mathrm{act}}.\\label{opt_a_n}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{proof_lem2}.\n\\end{proof}\nLemma \\ref{lem1} shows that all active reflecting elements should adopt a common amplification factor due to the same path-loss over the LoS channels associated with the active elements.\nMoreover, it is observed from \\eqref{opt_a_n} that given the amplification power $P_{\\mathrm{I}}$, the optimal amplification factor, $\\alpha^*$, decreases with the number of active elements or signal power.\n\\begin{remark}\n\\textbf{\\emph{(Amplification noise power)}}\n\\emph{By substituting $\\alpha^*$ in \\eqref{opt_a_n} into \\eqref{n_act}, the amplification noise power is given by}\n\\begin{equation}\n z_{\\mathrm{L,act}}+z_{\\mathrm{NL,act}}=\\frac{\\mathcal{I}_{\\mathbb{R}^+}(N_{\\mathrm{act}})P_{\\mathrm{I}}\\sigma_{\\mathrm{I}}^2\\beta\/d_{\\mathrm{IU}}^{2}}{P_{\\mathrm{B}}\\beta\/D_{\\mathrm{BI}}^{2}+\\sigma_{\\mathrm{I}}^{2}},\\label{noise_term}\n\\end{equation}\n\\emph{where the indicator function $\\mathcal{I}_{\\mathbb{R}^+}(N_{\\mathrm{act}})=1$ when $N_{\\mathrm{act}}>0$ and $\\mathcal{I}_{\\mathbb{R}^+}(N_{\\mathrm{act}})=0$ otherwise, i.e., the noise term exists only when the hybrid IRS consists of active elements.\nNote that the amplification noise power in \\eqref{noise_term} 
is a constant that depends on the total amplification power, $P_{\\mathrm{I}}$, and the path-loss of the BS-IRS and IRS-user channels.\n\n\\end{remark}\nBased on Lemma \\ref{lem1}, the ergodic capacity of the worst-case user with the optimal IRS phase shifts in \\eqref{opt_phase_1} and \\eqref{opt_phase_2} and amplification factors of active elements in \\eqref{opt_a_n} is given by\n\\begin{equation}\n\\tilde C^*\\!=\\!\\log _{2}\\left(1+\\frac{\\frac{P_{\\mathrm{B}} \\beta^{2}}{D^2_{\\mathrm{BI}}d^2_{\\mathrm{IU}}}\\left(\\gamma_{1} \\left(\\sqrt{A_{\\mathrm{sum}}N_{\\mathrm{act}}}+N_{\\mathrm{pas}}\\right)^{2}+\\gamma_{2}\\left(A_{\\mathrm{sum}}\\!+\\!N_{\\mathrm{pas}}\\right)\\right)}{A_{\\mathrm{sum}} \\sigma_{\\mathrm{I}}^{2}\\beta \/ d_{\\mathrm{IU}}^{2}\\!+\\!\\sigma_{0}^{2}}\\right),\\label{capa_approx2}\n\\end{equation}\nwhere $A_{\\mathrm{sum}}\\triangleq\\frac{\\mathcal{I}_{\\mathbb{R}^+}(N_{\\mathrm{act}})P_{\\mathrm{I}}}{P_{\\mathrm{B}}\\beta\/D_{\\mathrm{BI}}^{2}+\\sigma_{\\mathrm{I}}^{2}}$.\n\n\\subsection{Active\/Passive Elements Allocation Optimization}\\label{ele_alloc}\nGiven the optimal IRS phase shifts and amplification factors of active elements, we next focus on optimizing the active\/passive elements allocation for maximizing the ergodic capacity subject to the total deployment budget constraint, which is formulated as follows.\n\\begin{align}\n &\\mathrm{(P4)}~~~~~\\max_{N_{\\mathrm{act}},N_{\\mathrm{pas}}}\n \\quad~~\\log _{2}\\left(1+\\frac{\\frac{P_{\\mathrm{B}} \\beta^{2}}{D^2_{\\mathrm{BI}}d^2_{\\mathrm{IU}}}\\left(\\gamma_{1} \\left(\\sqrt{A_{\\mathrm{sum}}N_{\\mathrm{act}}}+N_{\\mathrm{pas}}\\right)^{2}+\\gamma_{2}\\left(A_{\\mathrm{sum}}\\!+\\!N_{\\mathrm{pas}}\\right)\\right)}{A_{\\mathrm{sum}} \\sigma_{\\mathrm{I}}^{2}\\beta \/ d_{\\mathrm{IU}}^{2}\\!+\\!\\sigma_{0}^{2}}\\right) \\\\\n &\\qquad~~~\\!~\\quad~\\text{s.t.}\\qquad\\quad \\eqref{cons_C},\\eqref{cons_C_AnP}\\nonumber.\n\\end{align}\n\nProblem (P4) is a 
non-convex optimization problem due to the discrete active\/passive-element number and the non-concave objective function, making it difficult to obtain a closed-form expression for the optimal solution in general.\nAlthough the optimal number of active (passive) elements can be numerically obtained by applying the one-dimensional search over $N_{\\mathrm{act}}=\\{0,1,\\cdots,\\lfloor\\frac{W_0}{W_{\\mathrm{act}}}\\rfloor\\}$ and hence $N_{\\mathrm{pas}} = \\lfloor\\frac{W_0-N_{\\mathrm{act}}W_{\\mathrm{act}}}{W_{\\mathrm{pas}}}\\rfloor$, it yields little useful insight into the optimal IRS active\/passive elements allocation. \nThus, we consider two special channel setups in the following to obtain closed-form expressions for their corresponding optimal active\/passive elements allocation.\n\n\\subsubsection{LoS channel model}\nFor the LoS channels with $K_1\\to\\infty$ and $K_2\\to\\infty$, we obtain from \\eqref{gamma_approx} that $\\gamma_1\\to 1$ and $\\gamma_2\\to 0$. Then, the approximated ergodic capacity, $\\tilde C$ in \\eqref{sig_approx}, is equal to the exact capacity, $C$ in \\eqref{ergo_capa}.\nNote that the term $A_{\\mathrm{sum}}$ is a positive constant, i.e., $A_{\\mathrm{sum}}=\\frac{P_{\\mathrm{I}}}{P_{\\mathrm{B}}\\beta \/ D_{\\mathrm{BI}}^{2}+\\sigma_{\\mathrm{I}}^{2}}$ when $N_{\\mathrm{act}}>0$, and $A_{\\mathrm{sum}}=0$ when $N_{\\mathrm{act}}=0$.\nTherefore, in the following, we solve problem (P4) in two cases, corresponding to the case of passive IRS with $N_{\\mathrm{act}}=0$ and the case of hybrid IRS with $N_{\\mathrm{act}}>0$ and $N_{\\mathrm{pas}}>0$, respectively. 
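The one-dimensional search mentioned above is straightforward to implement. The following Python sketch evaluates the objective of (P4), i.e., $\tilde C^*$ in \\eqref{capa_approx2}, for every candidate $N_{\mathrm{act}}$, with the residual budget assigned to passive elements (illustrative only; the numerical parameter values in the accompanying usage are assumptions, not values from this paper):

```python
import math

def capacity(N_act, N_pas, P_B, P_I, beta, D_BI, d_IU,
             K1, K2, sigma_I2, sigma0_2):
    """Approximate worst-case ergodic capacity, following (capa_approx2)."""
    g1 = K1 * K2 / ((K1 + 1) * (K2 + 1))   # gamma_1
    g2 = 1 / ((K1 + 1) * (K2 + 1))         # gamma_2
    # A_sum is nonzero only when at least one active element is deployed.
    A = P_I / (P_B * beta / D_BI**2 + sigma_I2) if N_act > 0 else 0.0
    num = (P_B * beta**2 / (D_BI**2 * d_IU**2)) * (
        g1 * (math.sqrt(A * N_act) + N_pas)**2 + g2 * (A + N_pas))
    den = A * sigma_I2 * beta / d_IU**2 + sigma0_2
    return math.log2(1 + num / den)

def search_allocation(W0, W_act, W_pas, **kw):
    """One-dimensional search over N_act; N_pas takes the residual budget."""
    best = max(range(int(W0 // W_act) + 1),
               key=lambda n: capacity(n, int((W0 - n * W_act) // W_pas), **kw))
    return best, int((W0 - best * W_act) // W_pas)
```

By construction, the returned allocation respects the deployment budget and is at least as good as the two extreme (all-active and all-passive) allocations.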
\nTo address the discrete active\/passive elements deployment cost in constraints \\eqref{cons_C} and \\eqref{cons_C_AnP}, we first relax the integer values $N_{\\mathrm{act}}$ and $N_{\\mathrm{pas}}$ into continuous values, $\\tilde N_{\\mathrm{act}}$ and $\\tilde N_{\\mathrm{pas}}$, and then apply the integer rounding technique to reconstruct them based on their optimal continuous values.\nMoreover, it can be shown that the equality in \\eqref{cons_C} holds in the optimal solution to problem (P4), i.e., $\\tilde N_{\\mathrm{pas}}=\\frac{W_0-W_{\\mathrm{act}}\\tilde N_{\\mathrm{act}}}{W_{\\mathrm{pas}}}$.\n\nFirst, consider the case of $\\tilde N_{\\mathrm{act}}=0$. By substituting $\\tilde N_{\\mathrm{pas}}=\\frac{W_0}{W_{\\mathrm{pas}}}$, $\\gamma_1\\to 1$, and $\\gamma_2\\to 0$ into \\eqref{capa_approx2}, the hybrid IRS reduces to the passive IRS for which the achievable capacity under the LoS channels is given by\n\\begin{equation}\n C = C^*_{\\text {L,pas}} \\triangleq \\log _{2}\\left(1+\\frac{W_0^{2} P_{\\mathrm{B}} \\beta^{2}}{W_{\\mathrm{pas}}^{2} D_{\\mathrm{BI}}^{2} d_{\\mathrm{IU}}^{2} \\sigma_{0}^{2}}\\right).\\label{C_p}\n\\end{equation} \nNext, when $\\tilde N_{\\mathrm{act}}>0$ under the LoS channel model, the achievable capacity in \\eqref{capa_approx2} reduces to\n\\begin{equation}\n C = C_{\\text {L,hyb}} \\triangleq \\log _{2}\\left(1+\\frac{P_{\\mathrm{B}}\\beta^2(\\sqrt{A_{\\mathrm{sum}}\\tilde N_{\\mathrm{act}}}+\\tilde N_{\\mathrm{pas}})^2\/D_{\\mathrm{BI}}^2d_{\\mathrm{IU}}^2}{A_{\\mathrm{sum}}\\sigma_{\\mathrm{I}}^2\\beta\/d_{\\mathrm{IU}}^2+\\sigma_0^2}\\right).\\label{C_h}\n\\end{equation}\nBy substituting $\\tilde N_{\\mathrm{pas}}=\\frac{W_0-W_{\\mathrm{act}}\\tilde N_{\\mathrm{act}}}{W_{\\mathrm{pas}}}$ into \\eqref{C_h}, the optimal solution to problem (P4) given $\\tilde N_{\\mathrm{act}}>0$ can be obtained by solving the following equivalent maximization problem.\n\\begin{align}\n &\\mathrm{(P5)}~~~~~\\max_{\\tilde 
N_{\\mathrm{act}}}\n \\quad~~\\xi_1\\left(-\\tilde N_{\\mathrm{act}}+\\xi_2\\sqrt{\\tilde N_{\\mathrm{act}}}+\\xi_3\\right)^2\\label{obj_p3}\\\\\n &\\qquad~~~\\!~\\quad~\\text{s.t.} ~~~~~0< \\tilde N_{\\mathrm{act}}\\leq\\frac{W_0}{W_{\\mathrm{act}}},\n\\end{align}\nwhere\n\\begin{align}\n \\xi_1=\\frac{P_{\\mathrm{B}}\\beta^2W_{\\mathrm{act}}^2\/D_{\\mathrm{BI}}^2d_{\\mathrm{IU}}^2W_{\\mathrm{pas}}^2}{A_{\\mathrm{sum}}\\sigma_{\\mathrm{I}}^2\\beta\/d_{\\mathrm{IU}}^2+\\sigma_0^2},\n \\qquad \\xi_2 = \\frac{\\sqrt{A_{\\mathrm{sum}}}W_{\\mathrm{pas}}}{W_{\\mathrm{act}}},\n \\qquad \\text{and } \\xi_3 = \\frac{W_0}{W_{\\mathrm{act}}}.\n\\end{align}\n\\begin{theorem}\\label{the_opt_N}\n\\textbf{\\emph{(Optimal active\/passive elements allocation)}} \\emph{Under the LoS channel model and given $\\tilde N_{\\mathrm{act}}>0$, the optimal solution to problem (P4) is}\n\\begin{equation}\\label{opt_na}\n\\begin{cases}\n&\\tilde N^*_{\\mathrm{act}}=\\frac{W_0}{W_{\\mathrm{act}}},\\tilde N^*_{\\mathrm{pas}}=0, \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad~~~\\emph{if } W_0< \\frac{W_{\\mathrm{pas}}^{2} P_{\\mathrm{I}} \/ W_{\\mathrm{act}}}{4 P_{\\mathrm{B}} \\beta \/ D_{\\mathrm{BI}}^{2}+4 \\sigma_{\\mathrm{I}}^{2}},\\\\\n&\\tilde N^*_{\\mathrm{act}}=\\frac{P_{\\mathrm{I}}W_{\\mathrm{pas}}^2\/W_{\\mathrm{act}}^2}{4P_{\\mathrm{B}}\\beta\/D_{\\mathrm{BI}}^2+4\\sigma_{\\mathrm{I}}^2},\\tilde N^*_{\\mathrm{pas}}=\\frac{W_0}{W_{\\mathrm{pas}}}-\\frac{P_{\\mathrm{I}}W_{\\mathrm{pas}}\/W_{\\mathrm{act}}}{4P_{\\mathrm{B}}\\beta\/D_{\\mathrm{BI}}^2+4\\sigma_{\\mathrm{I}}^2},\\qquad \\emph{otherwise}.\n\\end{cases}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nSee Appendix \\ref{proof_lem3}.\n\\end{proof}\n\\begin{remark}\n\\emph{Theorem \\ref{the_opt_N} shows that when the total deployment budget is small, i.e., $W_0< \\frac{W_{\\mathrm{pas}}^{2} P_{\\mathrm{I}} \/ W_{\\mathrm{act}}}{4 P_{\\mathrm{B}} \\beta \/ D_{\\mathrm{BI}}^{2}+4 \\sigma_{\\mathrm{I}}^{2}}$, only active elements 
should be employed since they can provide a signal amplification gain and thus a higher rate than passive elements. In contrast, if the total deployment budget is sufficiently large, i.e., $W_0\\geq \\frac{W_{\\mathrm{pas}}^{2} P_{\\mathrm{I}} \/ W_{\\mathrm{act}}}{4 P_{\\mathrm{B}} \\beta \/ D_{\\mathrm{BI}}^{2}+4 \\sigma_{\\mathrm{I}}^{2}}$, the optimal number of active elements should balance the amplification gain of active elements and the beamforming gain of passive elements, which is independent of $W_0$ but determined by other parameters, such as $P_{\\mathrm{I}}$, $W_{\\mathrm{act}}$, and $W_{\\mathrm{pas}}$.\nThis is because when $W_0$ is large, the performance bottleneck of active elements becomes the limited amplification power, which can support only a fraction of the active elements operating with the optimal amplification factor.}\n\\end{remark}\n\nBased on Theorem \\ref{the_opt_N} and substituting the optimal number of active elements, $\\tilde N_{\\mathrm{act}}=\\frac{P_{\\mathrm{I}} W_{\\mathrm{pas}}^{2} \/ W_{\\mathrm{act}}^{2}}{4 P_{\\mathrm{B}} \\beta \/ D_{\\mathrm{BI}}^{2}+4 \\sigma_{\\mathrm{I}}^{2}}$, into \\eqref{C_h}, the achievable capacity of the worst-case user is obtained as\n\\begin{equation}\nC^*_{\\text {L,hyb}} = \\log _{2}\\left(1+\\frac{P_{\\mathrm{B}} \\beta^{2}\\left(\\frac{A_{\\text {sum }} W_{\\mathrm{pas}}}{4 W_{\\mathrm{act}}}+\\frac{W_0}{W_{\\mathrm{pas}}}\\right)^{2} \/ D_{\\mathrm{BI}}^{2} d_{\\mathrm{IU}}^{2}}{A_{\\text {sum }} \\sigma_{\\mathrm{I}}^{2} \\beta \/ d_{\\mathrm{IU}}^{2}+\\sigma_{0}^{2}}\\right).\\label{C_h_2}\n\\end{equation}\n\n\\begin{corollary}\n\\emph{For the case of LoS channels, given the optimal IRS phase shifts in \\eqref{opt_phase_1} and \\eqref{opt_phase_2} and active\/passive elements allocation in \\eqref{opt_na}, the optimal amplification factor of each active element is given by}\n\\begin{equation}\n \\alpha_{n}=\\frac{2 W_{\\mathrm{act}}}{W_{\\mathrm{pas}}},\\forall n 
\\in\\mathcal{N}_{\\mathrm{act}}\n\\end{equation}\n\\emph{for achieving the capacity in \\eqref{C_h_2}.}\n\\end{corollary}\n\nMoreover, when $W_0<\\frac{W_{\\mathrm{pas}}^2P_{\\mathrm{I}}\/W_{\\mathrm{act}}}{4P_{\\mathrm{B}}\\beta\/D_{\\mathrm{BI}}^2+4\\sigma_{\\mathrm{I}}^2}$ and hence $N^*_{\\mathrm{act}}=W_0\/W_{\\mathrm{act}}$, the achievable capacity of the worst-case user aided by the active IRS under the LoS channel model is given by\n\\begin{equation}\n C^*_{\\text {L,act}} \\triangleq \\log _{2}\\left(1+\\frac{W_0A_{\\mathrm{sum}} P_{\\mathrm{B}} \\beta^{2} \/W_{\\mathrm{act}} D_{\\mathrm{BI}}^{2} d_{\\mathrm{IU}}^{2}}{A_{\\mathrm{sum}} \\sigma_{\\mathrm{I}}^{2} \\beta \/ d_{\\mathrm{IU}}^{2}+\\sigma_{0}^{2}}\\right).\\label{C_a}\n\\end{equation}\n\nNext, we compare the achievable capacity of the hybrid IRS with that of the fully-active and fully-passive IRSs with respect to (w.r.t.) different values of the deployment budget under the LoS channels. To this end, we define the following active\/passive elements allocation thresholds for the deployment budget.\n\\begin{align}\n W_{\\mathrm{A-H}}&\\triangleq\\frac{W_{\\mathrm{pas}}^2P_{\\mathrm{I}}\/W_{\\mathrm{act}}}{4P_{\\mathrm{B}}\\beta\/D_{\\mathrm{BI}}^2+4\\sigma_{\\mathrm{I}}^2},\\label{B_H-A}\\\\\n W_{\\mathrm{H-P}}&\\triangleq\\frac{W_{\\mathrm{pas}}^2\\sigma_0^2d_{\\mathrm{IU}}^2}{4W_{\\mathrm{act}}\\sigma_{\\mathrm{I}}^2\\beta}+\\frac{W_{\\mathrm{pas}}^2\\sigma_0d_{\\mathrm{IU}}}{4W_{\\mathrm{act}}\\sigma_{\\mathrm{I}}}\\sqrt{\\frac{\\sigma_0^2d_{\\mathrm{IU}}^2}{\\sigma_{\\mathrm{I}}^2\\beta^2}+\\frac{P_{\\mathrm{I}}}{P_{\\mathrm{B}}\\beta^2\/D_{\\mathrm{BI}}^2+\\sigma_{\\mathrm{I}}^2\\beta}}.\\label{condition_H-P}\n\\end{align}\nThen, we have the following result.\n\\begin{theorem}\\label{lem_thres}\n\\textbf{\\emph{(Capacity comparison among different IRS architectures)}}\n\\emph{Under the LoS channel model, the capacity comparison among different IRS architectures is given as 
follows.}\n\n\\emph{1) When $0\\leq W_0< W_{\\mathrm{A-H}}$, $C^*_{\\text {L,act}}=C^*_{\\text {L,hyb}}>C^*_{\\text {L,pas}}$;}\n\n\\emph{2) When $W_{\\mathrm{A-H}}\\leq W_0\\leq W_{\\mathrm{H-P}}$, $C^*_{\\text {L,hyb}}>C^*_{\\text {L,act}}$ and $C^*_{\\text {L,hyb}}>C^*_{\\text {L,pas}}$;}\n\n\\emph{3) When $W_0>W_{\\mathrm{H-P}}$, $C^*_{\\text {L,pas}}=C^*_{\\text {L,hyb}}>C^*_{\\text {L,act}}$.}\n\\end{theorem}\n\\begin{proof}\nSee Appendix \\ref{proof_lem4}.\n\\end{proof}\n\\begin{remark}\n\\textbf{\\emph{(IRS architecture selection)}}\n\\emph{Theorem \\ref{lem_thres} shows that the optimal architecture selection for the IRS (i.e., passive, active, or hybrid) is determined by the total deployment budget. Specifically, when $W_0$ is small, the hybrid IRS reduces to the fully-active IRS, which is the optimal architecture to achieve the maximum capacity. This is because when the affordable number of elements is small, active elements provide a high power amplification gain, while passive elements offer only a limited beamforming gain. In contrast, when $W_0$ is sufficiently large, the hybrid IRS reduces to the fully-passive IRS. This is expected since in this case, the amplification factor for the active elements is limited, which may fail to compensate for the amplification noise. 
However, when $W_{\\mathrm{A-H}}\\leq W_0\\leq W_{\\mathrm{H-P}}$, there in general exists a trade-off between increasing the power amplification gain (i.e., assigning more active elements) and beamforming gain (i.e., assigning more passive elements); hence, it is necessary to design the optimal active\/passive elements allocation for the hybrid IRS to maximize the capacity.}\n\\end{remark}\n\\begin{remark}\n\\textbf{\\emph{(Deployment budget thresholds)}}\n\\emph{It is observed from \\eqref{B_H-A} and \\eqref{condition_H-P} that $W_{\\mathrm{A-H}}$ and $W_{\\mathrm{H-P}}$ both increase with the amplification power, $P_{\\mathrm{I}}$, and the passive-element cost, $W_{\\mathrm{pas}}$, while they decrease with the active-element cost, $W_{\\mathrm{act}}$. \nThis can be explained as follows. \nFirst, a higher amplification power, $P_{\\mathrm{I}}$, allows more active elements to operate with sufficiently large amplification factors. \nSecond, higher passive-element cost, $W_{\\mathrm{pas}}$, and\/or lower active-element cost, $W_{\\mathrm{act}}$, make it more desirable to deploy active elements.}\n\\end{remark}\n\n\\subsubsection{Rayleigh fading channel model}\nFor the case of Rayleigh fading channels with $K_1=K_2=0$, we obtain from \\eqref{gamma_approx} that $\\gamma_1=0$ and $\\gamma_2=1$. 
As such, the ergodic capacity under the Rayleigh fading channels is given by\n\\begin{equation}\n \\tilde C=C_{\\mathrm{NL}}\\triangleq\\log _{2}\\left(1+\\frac{\\left(A_{\\mathrm{sum}}+{N}_{\\mathrm{pas}}\\right) \\frac{P_{\\mathrm{B}} \\beta^{2}}{D_{\\mathrm{BI}}^{2} d_{\\mathrm{IU}}^{2}}}{A_{\\mathrm{sum}} \\sigma_{\\mathrm{I}}^{2} \\beta \/ d_{\\mathrm{IU}}^{2}+\\sigma_{0}^{2}}\\right).\n\\end{equation}\n\n\\begin{lemma}\\label{lem_opt_ea_NLoS}\n\\textbf{\\emph{(Optimal active\/passive elements allocation for Rayleigh fading channels)}}\n\\emph{Under the Rayleigh fading channel model and favorable amplification power condition in \\eqref{cons_pb}, the optimal numbers of active and passive elements are given by}\n\\begin{equation}\\label{opt_na_NLoS}\n\\begin{cases}\n& N^*_{\\mathrm{act}}=1, N^*_{\\mathrm{pas}}=\\lfloor\\frac{W_0-W_{\\mathrm{act}}}{W_{\\mathrm{pas}}}\\rfloor, \\qquad~~~\\emph{if } W_0< \\frac{A_{\\mathrm{sum}}W_{\\mathrm{pas}}\\sigma_0^2-W_{\\mathrm{act}}\\sigma_0^2}{A_{\\mathrm{sum}}\\sigma_{\\mathrm{I}}^2\\beta\/d_{\\mathrm{IU}}^2},\\\\\n& N^*_{\\mathrm{act}}=0, N^*_{\\mathrm{pas}}=\\lfloor\\frac{W_0}{W_{\\mathrm{pas}}}\\rfloor,\\qquad~~~~~~~\\emph{otherwise}.\n\\end{cases}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFirst, when $N_{\\mathrm{act}}>0$, it can be shown that\n\\begin{equation}\n C_{\\mathrm{NL}}\\leq C^*_{\\mathrm{NL,hyb}}\\triangleq\\log _{2}\\left(1+\\frac{\\left(A_{\\mathrm{sum}}+{N}_{\\mathrm{pas,max}}\\right) \\frac{P_{\\mathrm{B}} \\beta^{2}}{D_{\\mathrm{BI}}^{2} d_{\\mathrm{IU}}^{2}}}{A_{\\mathrm{sum}} \\sigma_{\\mathrm{I}}^{2} \\beta \/ d_{\\mathrm{IU}}^{2}+\\sigma_{0}^{2}}\\right),\\label{C_NLoS_hyb}\n\\end{equation}\nwhere ${N}_{\\mathrm{pas,max}}=\\lfloor\\frac{W_0-W_{\\mathrm{act}}}{W_{\\mathrm{pas}}}\\rfloor$ and $N_{\\mathrm{act}}=1$.\nSecond, when $N_{\\mathrm{act}}=0$, we obtain that $A_{\\mathrm{sum}}=0$ and the ergodic capacity achieved by the passive IRS is given by\n\\begin{equation}\n C_{\\mathrm{NL}}=C^*_{\\mathrm{NL,pas}}\\triangleq\\log _{2}\\left(1+\\frac{W_0P_{\\mathrm{B}} \\beta^{2}}{W_{\\mathrm{pas}} \\sigma_{0}^{2} D_{\\mathrm{BI}}^{2} d_{\\mathrm{IU}}^{2}}\\right). \\label{C_NLoS_pas}\n\\end{equation}\nThen, by comparing $C^*_{\\mathrm{NL,hyb}}$ in \\eqref{C_NLoS_hyb} and $C^*_{\\mathrm{NL,pas}}$ in \\eqref{C_NLoS_pas}, it can be shown that when $W_0< \\frac{A_{\\mathrm{sum}}W_{\\mathrm{pas}}\\sigma_0^2-W_{\\mathrm{act}}\\sigma_0^2}{A_{\\mathrm{sum}}\\sigma_{\\mathrm{I}}^2\\beta\/d_{\\mathrm{IU}}^2}$, we have $C^*_{\\mathrm{NL,hyb}}>C^*_{\\mathrm{NL,pas}}$. Based on the above, the optimal active\/passive elements allocation under the Rayleigh fading channel model is given in \\eqref{opt_na_NLoS}, thus completing the proof.\n\\end{proof}\n\\begin{remark}\n\\textbf{\\emph{(Active\/passive elements allocation under the Rayleigh fading channel model)}}\n\\emph{Lemma \\ref{lem_opt_ea_NLoS} shows that under the Rayleigh fading channel model (or rich scattering environment), most of the deployment budget should be assigned to passive elements, while at most one active element needs to be deployed.\nSpecifically, when the amplification power is sufficiently small and\/or the deployment budget is sufficiently large, all the deployment budget should be assigned to passive elements. 
This is because the power amplification gain provided by active elements cannot compensate for their amplification noise and\/or is outweighed by the beamforming gain of passive elements.\nOn the other hand, when the amplification power is large and\/or the deployment budget is small, one active element is enough to reap the power amplification gain, because active elements provide no beamforming gain under the Rayleigh fading channels, thus making it desirable to assign the rest of the deployment budget to passive elements.\n}\n\\end{remark}\n\n\\section{Simulation Results}\\label{sec_sim}\nSimulation results are presented in this section to evaluate the effectiveness of the proposed hybrid IRS architecture and active\/passive elements allocation design. \nUnless otherwise specified, the system setup is as follows.\nWe consider a two-dimensional (2D) Cartesian coordinate system, where the reference points of the BS, the hybrid IRS, and the worst-case user are located at $(0,0)$ m, $(60,0)$ m, and $(60,20)$ m, respectively. \n\nThe deployment costs of each active\/passive element are set as $W_{\\mathrm{act}} = 5$ and $W_{\\mathrm{pas}} = 1$, respectively, accounting for the fact that an active element in general incurs higher static operation power and hardware cost.\nFor each active element, the amplification factor is confined to the range $[\\alpha_{\\min},\\alpha_{\\max}]=[0,14]$ dB \\cite{8403249}.\nThe system operates at a carrier frequency of 6 GHz with a wavelength of $\\lambda=0.05 \\mathrm{~m}$. 
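Since the system parameters are specified in dB\/dBm, it may help to see how they enter the analytical expressions after conversion to linear scale. The short Python sketch below performs this conversion and evaluates the deployment-budget thresholds $W_{\mathrm{A-H}}$ and $W_{\mathrm{H-P}}$ in \\eqref{B_H-A} and \\eqref{condition_H-P} (the specific numerical values are assumptions loosely following this setup, not results reported in the paper):

```python
import math

def db(x):   # dB -> linear power ratio
    return 10 ** (x / 10)

def dbm(x):  # dBm -> watts
    return 10 ** (x / 10) * 1e-3

# Assumed illustrative parameters (loosely following the setup above).
P_B, P_I = dbm(15), dbm(5)
beta = db(-30)
sigma_I2 = sigma0_2 = dbm(-80)
D_BI, d_IU = 60.0, 20.0
W_act, W_pas = 5.0, 1.0

# Threshold between the fully-active and hybrid regimes, cf. (B_H-A).
W_AH = (W_pas**2 * P_I / W_act) / (4 * P_B * beta / D_BI**2 + 4 * sigma_I2)

# Threshold between the hybrid and fully-passive regimes, cf. (condition_H-P).
sigma_I, sigma_0 = math.sqrt(sigma_I2), math.sqrt(sigma0_2)
W_HP = (W_pas**2 * sigma0_2 * d_IU**2) / (4 * W_act * sigma_I2 * beta) \
     + (W_pas**2 * sigma_0 * d_IU) / (4 * W_act * sigma_I) \
     * math.sqrt(sigma0_2 * d_IU**2 / (sigma_I2 * beta**2)
                 + P_I / (P_B * beta**2 / D_BI**2 + sigma_I2 * beta))
```

Under these assumed values, the two thresholds bracket the intermediate budget regime in which the hybrid architecture is strictly preferable.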
\nWe consider the same Rician fading factor for the BS-IRS and IRS-user channels, i.e., $K_1=K_2=K$.\nOther parameters are set as $d_{\\mathrm{I}}=\\frac{\\lambda}{4}$, $\\beta=-30$ dB, $\\sigma_{\\mathrm{I}}=\\sigma_0=-80$ dBm, $P_{\\mathrm{B}}=15$ dBm, and $P_{\\mathrm{I}} = 5$ dBm \\cite{9377648}.\n\\subsection{Accuracy of Ergodic Capacity Approximation}\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=2.7in]{fig1.pdf}}\n\\caption{Accuracy of the ergodic capacity approximation given in \\eqref{sig_approx}.}\\label{eval_approx}\n\\end{figure}\n\nFirst, we evaluate in Fig. \\ref{eval_approx} the accuracy of the ergodic capacity approximation presented in Lemma \\ref{lem_C_approx} (see \\eqref{sig_approx}). \nUnder an equal active\/passive elements allocation, i.e., $N_{\\mathrm{act}}=N_{\\mathrm{pas}}$, the actual ergodic capacity in \\eqref{ergo_capa} is obtained by averaging over 1000 independent channel realizations under different Rician factors of $K\\in\\{0$ dB, $10$ dB, $+\\infty\\}$ ($K\\to +\\infty$ corresponds to the LoS channels).\nIt is observed that the approximated ergodic capacity in \\eqref{sig_approx} is close to the exact capacity in \\eqref{ergo_capa} under different Rician factors. This is because when the number of reflecting elements is large, the fluctuations of the random channels average out due to the law of large numbers.\n\\subsection{Effect of Active\/Passive Elements Allocation on Ergodic Capacity}\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=2.7in]{fig2.pdf}}\n\\caption{Capacity performance versus budget percentage of active elements.}\\label{rate_ratio}\n\\end{figure}\n\nNext, we investigate the effect of the active\/passive elements allocation at the hybrid IRS on the capacity performance.\nWe set the total deployment budget $W_0=3000$, the amplification power $P_{\\mathrm{I}}=15$ dBm, and denote $\\rho$ as the percentage of the budget assigned to active elements ($0\\leq \\rho\\leq 1$).\nIn Fig. 
\\ref{rate_ratio}, we plot the ergodic capacity versus $\\rho$ with different Rician factors $K\\in\\{0$, $15$ dB, $+\\infty\\}$.\nIt is observed that for the cases of $K\\to\\infty$ and $K=15$ dB (with LoS channel components), the ergodic capacity first increases with $\\rho$ and then decreases after $\\rho$ exceeds a threshold (i.e., $\\rho^*=0.35$ for $K\\to\\infty$, and $\\rho^*=0.18$ for $K=15$ dB). This shows that for the general Rician fading channels, the active\/passive elements allocation has a significant effect on the capacity maximization.\nThis is expected because if the budget assigned to active elements is too small, the active-element power amplification gain is not fully exploited due to the small number of active elements. However, if the budget assigned to active elements is too large, the performance bottleneck of active elements becomes the limited amplification power, which cannot support all active elements operating in the amplification mode, thus making it desirable to assign part of the budget to passive elements for achieving a higher beamforming gain.\nMoreover, the optimal budget assigned to active elements increases with the Rician factor because the active elements can achieve both the power amplification gain and beamforming gain over the LoS paths but have no beamforming gain over NLoS paths.\nLast, when $K=0$ (corresponding to the Rayleigh fading channels), the total deployment budget should be assigned to passive elements to maximize the ergodic capacity since the beamforming gain of passive elements is larger than the amplification gain of active elements.\n\\subsection{Effect of Total Deployment Budget}\n\\begin{figure}[t] \\centering \n{\\subfigure[{LoS channels with $K\\to +\\infty$.}] {\n\\label{fig_3_LoS}\n\\includegraphics[width=2.7in]{fig3_LoS.pdf} \n}} \n{\\subfigure[{Rician fading channels with $K=10$ dB.}] {\\label{fig_3_Rician}\n\\includegraphics[width=2.7in]{fig3_Rician.pdf} \n}}\n{\\caption{The ergodic capacity achieved by hybrid 
IRS, fully-active IRS, and fully-passive IRS versus deployment budget.}}\n\\end{figure}\n\nNext, we compare the proposed optimal active\/passive elements allocation (EA) design against three benchmarks: 1) Fully-active IRS with all the deployment budget assigned to active elements; 2) Fully-passive IRS with all the deployment budget assigned to passive elements; 3) Hybrid IRS under equal EA for which the deployment budget is equally assigned to active and passive elements. We apply the optimal IRS beamforming design in \\eqref{opt_phase_1}, \\eqref{opt_phase_2}, and \\eqref{opt_a_n} for the proposed hybrid IRS architecture and the benchmarks.\n\nFigs. \\ref{fig_3_LoS} and \\ref{fig_3_Rician} show the ergodic capacity of the worst-case user versus the total deployment budget under the LoS and Rician fading channel models, respectively.\nFirst, it is observed that the ergodic capacity achieved by the hybrid IRS architecture with optimal active\/passive elements allocation is always larger than or equal to that achieved by the fully-active or fully-passive IRSs. This is expected because the hybrid IRS provides an extra degree of freedom for elements allocation to balance the trade-off between the power amplification and beamforming gains.\nSecond, the hybrid IRS with the optimal active\/passive elements allocation reduces to the fully-active IRS when the deployment budget is sufficiently small (i.e., $W_0<1250$ for the LoS channels, and $W_0<600$ for the Rician fading channels with $K=10$ dB). 
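The budget comparison above can be mimicked with a brute-force search over feasible allocations. The sketch below is a toy model only: `toy_capacity` is an illustrative stand-in (not the paper's SNR expressions), while the element costs W_act = 5 and W_pas = 1 follow the simulation setup.

```python
import math

def toy_capacity(n_act, n_pas, P_I=0.03, P_B=0.03):
    # Illustrative stand-in for the capacity: active elements contribute an
    # amplification factor limited by the shared power budget P_I, passive
    # elements contribute a beamforming gain, and active elements inject
    # amplification noise. This is NOT the paper's exact SNR expression.
    amp = math.sqrt(P_I / n_act) if n_act > 0 else 0.0
    signal = (n_act * (1.0 + amp) + n_pas) ** 2 * P_B * 1e-3
    noise = 1e-4 * n_act * amp ** 2 + 1e-6
    return math.log2(1.0 + signal / noise)

def best_allocation(W0, W_act=5, W_pas=1):
    # Enumerate every feasible split of the deployment budget W0.
    candidates = [(n, (W0 - n * W_act) // W_pas) for n in range(W0 // W_act + 1)]
    return max(candidates, key=lambda c: toy_capacity(*c))
```

Because the search set contains the fully-active (n_pas = 0) and fully-passive (n_act = 0) endpoints, the best hybrid split can never do worse than either, mirroring the first observation above.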
\nThis is because the active elements have a higher power amplification gain as compared to the passive-element beamforming gain when the budget is small.\nThird, one can observe that the hybrid IRS with the optimal active\/passive elements allocation outperforms that with equal elements allocation, which shows the effectiveness of elements allocation optimization for the hybrid IRS.\nLast, it is also observed that given the same deployment budget, the achievable capacity under the LoS channels is higher than that under Rician fading channels since the IRS phase shifts are designed based on the LoS channel components or statistical CSI only.\n\n\\begin{figure}[t] \\centering \n{\\subfigure[{LoS channels with $K\\to +\\infty$.}] {\n\\label{fig_4_LoS}\n\\includegraphics[width=2.7in]{fig4_LoS.pdf} \n}} \n{\\subfigure[{Rician fading channels with $K=10$ dB.}] {\\label{fig_4_Rician}\n\\includegraphics[width=2.7in]{fig4_Rician.pdf} \n}}\n{\\caption{The optimal numbers of active and passive elements versus total deployment budget.}}\n\\end{figure}\n\nFigs. \\ref{fig_4_LoS} and \\ref{fig_4_Rician} present the optimal numbers of active and passive elements for maximizing the ergodic capacity of the worst-case user under different total deployment budgets with $W_{\\mathrm{act}}=5$ and $W_{\\mathrm{pas}}=1$.\nIt is observed that as $W_0$ increases, the deployment budget is first assigned to active elements only and then to more passive elements after the budget exceeds a threshold.\nIn addition, we observe that given the same deployment budget, \nthe optimal number of active elements for the LoS channels, i.e., $N^*_{\\mathrm{act}}=250$, is higher than that for the Rician fading channels with $K=10$ dB, i.e., $N^*_{\\mathrm{act}}=120$.\n\\subsection{Effect of Rician Factor}\nIn Figs. \\ref{fig5_capa_vs_K} and \\ref{fig5_ea_vs_K}, we investigate the effect of the Rician factor of the BS-IRS and IRS-user channels on the achievable capacity and active\/passive elements allocation. 
The IRS beamforming design and the active-element amplification factors are given by \\eqref{opt_phase_1}, \\eqref{opt_phase_2}, and \\eqref{opt_a_n}, respectively.\n\n\\begin{figure}[t] \\centering \n{\\subfigure[{Ergodic capacity versus Rician factor.}] {\n\\label{fig5_capa_vs_K}\n\\includegraphics[width=2.63in]{fig5_capa_vs_K.pdf} \n}} \n{\\subfigure[Optimal number of active and passive elements versus Rician factor.] {\\label{fig5_ea_vs_K}\n\\includegraphics[width=2.93in]{fig5_ea_vs_K.pdf} \n}}\n{\\caption{Effect of the Rician factor on ergodic capacity and active\/passive elements allocation.}}\n\\end{figure}\n\nSpecifically, Fig. \\ref{fig5_capa_vs_K} presents the ergodic capacity of the worst-case user under different Rician factors. It is observed that the proposed hybrid IRS architecture with the optimal active\/passive elements allocation outperforms the other benchmarks under different channel conditions, while the active IRS achieves a higher capacity than the passive IRS only when the channels are LoS-dominant (i.e., the Rician factor $K$ is large). \nFig. \\ref{fig5_ea_vs_K} plots the optimal numbers of active and passive elements of the hybrid IRS under different Rician factors. \nIt is observed that with an increasing Rician factor, the optimal number of active elements first increases and then remains unchanged after the Rician factor exceeds a threshold, beyond which the NLoS channel components are negligible as compared to the LoS channel components. \nThe reason is that for the NLoS paths, the active elements have no beamforming gain but can achieve a power amplification gain due to the high amplification power, while for the LoS paths, active elements can achieve both the power amplification and beamforming gains with multiple elements. 
\nTherefore, as $K$ increases, more deployment budget should be assigned to the active elements so as to achieve a higher beamforming gain of the active elements.\n\n\\subsection{Effect of Amplification Power}\nMoreover, we evaluate the effect of the amplification power of active elements on the ergodic capacity and active\/passive elements allocation of the proposed hybrid IRS architecture under the general Rician fading channel model with $K=15$ dB.\n\n\\begin{figure}[t] \\centering \n{\\subfigure[{Ergodic capacity versus amplification power.}] {\n\\label{fig6_capa_vs_P_I}\n\\includegraphics[width=2.63in]{fig6_capa_vs_P_I.pdf} \n}} \n{\\subfigure[Optimal numbers of active and passive elements versus amplification power.] {\\label{fig6_ea_vs_P_I.pdf}\n\\includegraphics[width=2.93in]{fig6_ea_vs_P_I.pdf} \n}}\n{\\caption{Effect of the amplification power on ergodic capacity and elements allocation.}}\n\\end{figure}\n\nIn Fig. \\ref{fig6_capa_vs_P_I}, we plot the ergodic capacity versus the amplification power $P_{\\mathrm{I}}$ for different schemes.\nFirst, it is observed that the ergodic capacity of the worst-case user aided by the fully-active or hybrid IRSs monotonically increases with the amplification power. Second, we observe that the hybrid IRS with the optimal active\/passive elements allocation achieves a much higher capacity than the fully-active IRS when $P_{\\mathrm{I}}$ is small and significantly outperforms the fully-passive IRS when $P_{\\mathrm{I}}$ is large. This shows the flexibility and advantages of the hybrid IRS under different amplification powers.\n\nIn Fig. \\ref{fig6_ea_vs_P_I.pdf}, we plot the optimal numbers of active and passive elements for the hybrid IRS architecture versus the amplification power. 
\nOne can observe that the optimal number of active elements increases with the amplification power, which is expected because a higher amplification power can support more active elements operating in the amplification mode with optimal amplification factors. \n\\subsection{Effect of Active\/Passive-element Deployment Cost Ratio}\n\\begin{figure}[t] \\centering \n{\\subfigure[{Ergodic capacity versus active\/passive-element cost ratio.}] {\n\\label{fig7_capa_vs_c_ratio}\n\\includegraphics[width=2.63in]{fig7_capa_vs_c_ratio.pdf} \n}} \n{\\subfigure[Optimal numbers of active and passive elements versus deployment cost ratio.] {\\label{fig7_ea_vs_c_ratio}\n\\includegraphics[width=2.93in]{fig7_ea_vs_c_ratio.pdf} \n}}\n{\\caption{Effect of the deployment cost ratio on ergodic capacity and elements allocation.}}\n\\end{figure}\nIn Fig. \\ref{fig7_capa_vs_c_ratio}, we compare the ergodic capacity of the worst-case user under different ratios of active-over-passive deployment cost, i.e., $W_{\\mathrm{act}}\/W_{\\mathrm{pas}}$, where we set $W_{\\mathrm{pas}}=1$, $K=15$ dB, $W_0=1500$, and $P_{\\mathrm{B}}=10$ dBm.\nFirst, the hybrid IRS with the optimal active\/passive elements allocation achieves the highest capacity among all considered schemes. Besides, one can observe that as $W_{\\mathrm{act}}$ increases, the ergodic capacity first decreases and then remains unchanged when $W_{\\mathrm{act}}$ is sufficiently large, which can be explained as follows.\nWhen $W_{\\mathrm{act}}$ is small, a larger $W_{\\mathrm{act}}$ means that a smaller number of active elements can be deployed, thus resulting in reduced ergodic capacity.\nWhen $W_{\\mathrm{act}}$ exceeds a threshold, all the deployment budget is assigned to passive elements for maximizing the ergodic capacity. 
As such, increasing $W_{\\mathrm{act}}$ will not change the ergodic capacity and elements allocation.\n\\section{Conclusions}\\label{sec_conclu}\nIn this paper, we proposed a new hybrid active-passive IRS architecture and studied its optimal active\/passive elements allocation in a hybrid IRS aided wireless system based on the statistical CSI only.\nSpecifically, under the general Rician fading channel model, we formulated an optimization problem to maximize the ergodic capacity of the worst-case user, by jointly optimizing the active\/passive elements allocation, their phase shifts, and the amplification factors of active elements, subject to various practical constraints on the active-element amplification factor and amplification power consumption, as well as the total active and passive elements deployment budget.\nTo solve this problem, we first approximated the ergodic capacity in a simpler form and then proposed an efficient algorithm to obtain the optimal hybrid IRS beamforming and active\/passive elements allocation.\nMoreover, it was shown that when all channels are LoS, only active elements need to be deployed when the total deployment budget is sufficiently small, while both active and passive elements should be deployed, with a decreasing active-to-passive number ratio, when the budget increases and exceeds a certain threshold.\nLast, numerical results demonstrated the performance gains achieved by the proposed hybrid IRS architecture with the optimal active\/passive elements allocation against the benchmarks of the fully-active and fully-passive IRSs, as well as the hybrid IRS with equal active\/passive elements allocation. 
This validated that the hybrid IRS can flexibly balance the trade-off between the distinctive power amplification gain of the active IRS and the superior beamforming gain of the passive IRS.\n\n\\appendices\n\\section{Proof of Lemma \\ref{lem_C_approx}}\\label{proof_lem1}\nFirst, based on Lemma 1 of \\cite{6816003}, we obtain the following approximation for the ergodic capacity:\n\\begin{align}\n C&=\\mathbb{E}\\left\\{\\log _{2}\\left(1+\\frac{P_{\\mathrm{B}}|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}+(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{pas}}|^2}{\\sigma_{\\mathrm{I}}^2\\|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2+\\sigma_0^2}\\right)\\right\\} \\\\\n &\\approx\\log_2\\left(1+\\frac{\\mathbb{E}\\left\\{P_{\\mathrm{B}}|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}+(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{pas}}|^2\\right\\}}{\\mathbb{E}\\left\\{\\sigma_{\\mathrm{I}}^2\\|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2+\\sigma_0^2\\right\\}}\\right).\\label{C_approx}\n\\end{align}\nThen, we focus on the derivation of the desired signal power at the worst-case user, which can be expressed as\n\\begin{align}\n &\\mathbb{E}\\left\\{P_{\\mathrm{B}}|(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{act}}+(\\mathbf{h}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\mathbf{h}_{\\mathrm{BI}}^{\\mathrm{pas}}|^2\\right\\}\\\\\n 
&=\\frac{P_{\\mathrm{B}}}{(K_1+1)(K_2+1)}\\mathbb{E}\\Big\\{\\Big|\\underbrace{\\sqrt{K_1K_2}(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}}_{x_{1}}+\\underbrace{\\sqrt{K_1}(\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}}_{x_{2}}\\nonumber\\\\\n &+\\underbrace{\\sqrt{K_2}(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}}_{x_{3}}+\\underbrace{(\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{act}}}_{x_{4}}+\\underbrace{\\sqrt{K_1K_2}(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}}_{x_{5}}\\\\\n &+\\underbrace{\\sqrt{K_1}(\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\bar{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}}_{x_{6}}+\\underbrace{\\sqrt{K_2}(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}}_{x_{7}}+\\underbrace{(\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{pas}})^H\\mathbf{\\Psi}^{\\mathrm{pas}}\\tilde{\\mathbf{h}}_{\\mathrm{BI}}^{\\mathrm{pas}}}_{x_{8}}\\Big|^2\\Big\\}\\nonumber\\\\\n &=\\frac{P_{\\mathrm{B}}}{(K_1\\!+\\!1)(K_2\\!+\\!1)}\\Big(\\mathbb{E}\\left\\{\\left|x_{1}\\!+\\!x_{5}\\right|^2\\right\\}\\!+\\!\\mathbb{E}\\Big\\{\\left|x_{2}\\right|^2\\!+\\!\\left|x_{3}\\right|^2\\!+\\!\\left|x_{4}\\right|^2\\!+\\!\\left|x_{6}\\right|^2\\!+\\!\\left|x_{7}\\right|^2\\!+\\!\\left|x_{8}\\right|^2\\Big\\} \\Big)\\\\\n &=x_{\\mathrm{L}}+x_{\\mathrm{NL,act}}+x_{\\mathrm{NL,pas}},\\label{lem_C_approx_1}\n\\end{align}\nwhere $x_{\\mathrm{L}}$, $x_{\\mathrm{NL,act}}$, and $x_{\\mathrm{NL,pas}}$ are defined in 
\\eqref{x_1}, \\eqref{x_2}, and \\eqref{x_3}, respectively.\nNext, the noise introduced by the active reflecting elements and that at the receiver can be expressed as\n\\begin{align}\n &\\mathbb{E}\\left\\{\\sigma_{\\rm I}^2\\left\\|({\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\right\\|^2+\\sigma_0^2\\right\\}\\\\\n &=\\mathbb{E}\\left\\{\\sigma_{\\rm I}^2\\left\\|\\sqrt{\\frac{K_2}{K_2+1}}(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}+\\sqrt{\\frac{1}{K_2+1}}(\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\right\\|^2\\right\\}+\\sigma_0^2\\\\\n &=\\underbrace{\\frac{K_2\\sigma_{\\rm I}^2}{K_2+1}\\|(\\bar{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2}_{z_{\\mathrm{L,act}}}+\\underbrace{\\frac{\\sigma_{\\rm I}^2}{K_2+1}\\|(\\tilde{\\mathbf{h}}_{\\mathrm{IU}}^{\\mathrm{act}})^H\\mathbf{\\Psi}^{\\mathrm{act}}\\|^2}_{z_{\\mathrm{NL,act}}}+\\sigma_0^2.\\label{noise_approx}\n\\end{align}\nLast, the proof is completed by substituting \\eqref{lem_C_approx_1} and \\eqref{noise_approx} into \\eqref{C_approx}.\n\\section{Proof of Lemma \\ref{lem1}}\\label{proof_lem2}\nFirst, given the favorable amplification power condition in \\eqref{cons_pb}, it can be verified that at the optimal solution to problem (P3), the constraint \\eqref{cstr_power_HI_decomp_2} is always active, i.e., $\\sum_{n=1}^{N_{\\mathrm{act}}}\\alpha^2_{n}=A_{\\mathrm{sum}}$. Then, by substituting $A_{\\mathrm{sum}}$ into \\eqref{x_NLoS} and \\eqref{n_act}, we can show that the NLoS part of the desired signal, $x_{\\mathrm{NL,act}}+x_{\\mathrm{NL,pas}}$, and the amplification noise, $z_{\\mathrm{L,act}}+z_{\\mathrm{NL,act}}$, are constants. Thus, problem (P3) can be solved by maximizing the ergodic capacity due to the LoS channel component, $x_{\\mathrm{L}}$. 
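The Cauchy-Schwarz step used next, namely that sum(alpha_n) subject to the sum-power constraint sum(alpha_n^2) = A_sum is maximized by equal amplification factors, can be sanity-checked numerically. The values of A_sum and N_act below are arbitrary:

```python
import math
import random

A_sum = 4.0   # arbitrary total amplification-power budget, sum of alpha_n^2
N_act = 8     # arbitrary number of active elements

# By Cauchy-Schwarz, sum(alpha_n) <= sqrt(N_act * A_sum), with equality
# attained when every alpha_n equals sqrt(A_sum / N_act).
equal_sum = N_act * math.sqrt(A_sum / N_act)

random.seed(0)
for _ in range(1000):
    # Sample a random feasible point on the sphere sum(alpha_n^2) = A_sum.
    g = [abs(random.gauss(0.0, 1.0)) for _ in range(N_act)]
    norm = math.sqrt(sum(x * x for x in g))
    alpha = [math.sqrt(A_sum) * x / norm for x in g]
    assert sum(alpha) <= equal_sum + 1e-9
```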
\nBy using the Cauchy-Schwarz inequality, we have\n\\begin{align}\n x_{\\mathrm{L}}&=\\gamma_1\\left(\\sum_{n=1}^{N_{\\mathrm{act}}}\\alpha_n+N_{\\mathrm{pas}}\\right)^2P_{\\mathrm{B}}\\beta^2\/D^2_{\\mathrm{BI}}d^2_{\\mathrm{IU}}\\\\\n &=\\gamma_1\\left(\\sum_{n=1}^{N_{\\mathrm{act}}}\\left(\\alpha_n+N_{\\mathrm{pas}}\/N_{\\mathrm{act}}\\right)\\right)^2P_{\\mathrm{B}}\\beta^2\/D^2_{\\mathrm{BI}}d^2_{\\mathrm{IU}}\\\\\n &\\leq \\gamma_1N_{\\mathrm{act}}^2\\left(\\alpha^*+N_{\\mathrm{pas}}\/N_{\\mathrm{act}}\\right)^2P_{\\mathrm{B}}\\beta^2\/D^2_{\\mathrm{BI}}d^2_{\\mathrm{IU}},\n\\end{align}\nwhere the equality holds if and only if $\\alpha_{n}=\\alpha^*,\\forall n \\in\\mathcal{N}_{\\mathrm{act}}$ with $\\alpha^*=\\sqrt{\\frac{P_{\\mathrm{I}}\/N_{\\mathrm{act}}}{P_{\\mathrm{B}}\\beta\/D^2_{\\mathrm{BI}}+\\sigma^2_{\\mathrm{I}}}}$ in \\eqref{opt_a_n}, which thus completes the proof.\n\\section{Proof of Theorem \\ref{the_opt_N}}\\label{proof_lem3}\n\\begin{table}[t]\\centering\n\\caption{Variations of the Receiver SNR under Different Conditions.}\\label{opt_N_A}\n\\begin{center}\n\\begin{tabular}{|c|c|}\n\\hline \nCondition & Variations of $\\gamma_{\\mathrm{hyb}}(\\tilde N_{\\mathrm{act}})$ \\\\\n\\hline\n$0<\\sqrt{\\frac{W_0}{W_{\\mathrm{act}}}}\\leq n_{\\mathrm{A},2}$ &\\!\\! Increase for $\\tilde N_{\\mathrm{act}}\\in(0,\\frac{W_0}{W_{\\mathrm{act}}}]$ \\\\\n\\hline\n$n_{\\mathrm{A},2}<\\sqrt{\\frac{W_0}{W_{\\mathrm{act}}}}\\leq n_{\\mathrm{A},3}$ &\\!\\! Increase for $\\tilde N_{\\mathrm{act}}\\in(0,n^2_{\\mathrm{A},2}]$, decrease for $\\tilde N_{\\mathrm{act}}\\in(n^2_{\\mathrm{A},2},\\frac{W_0}{W_{\\mathrm{act}}}]$ \\\\\n\\hline\n$\\sqrt{\\frac{W_0}{W_{\\mathrm{act}}}}>n_{\\mathrm{A},3}$ &\\!\\! Increase for $\\tilde N_{\\mathrm{act}}\\in(0,n^2_{\\mathrm{A},2}]\\cup(n^2_{\\mathrm{A},3},\\frac{W_0}{W_{\\mathrm{act}}}]$, decrease for $\\tilde N_{\\mathrm{act}}\\in(n^2_{\\mathrm{A},2},n^2_{\\mathrm{A},3}]$ \\\\\n\\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\nFor the receiver SNR, $\\gamma_{\\mathrm{hyb}}(\\tilde N_{\\mathrm{act}}) = \\xi_1\\left(-\\tilde N_{\\mathrm{act}}+\\xi_2\\sqrt{\\tilde N_{\\mathrm{act}}}+\\xi_3\\right)^2$, its first-order derivative with respect to $\\sqrt{\\tilde N_{\\mathrm{act}}}$ can be expressed as\n\\begin{equation}\n \\frac{\\partial \\gamma_{\\mathrm{hyb}}(\\tilde N_{\\mathrm{act}})}{\\partial \\sqrt{\\tilde N_{\\mathrm{act}}}}=2\\xi_1\\left(-\\tilde N_{\\mathrm{act}}+\\xi_2 \\sqrt{\\tilde N_{\\mathrm{act}}}+\\xi_3\\right)(-2 \\sqrt{\\tilde N_{\\mathrm{act}}}+\\xi_2).\\label{deri_snr}\n\\end{equation}\n\nFirst, it can be shown that \\eqref{deri_snr} equals 0 for $\\sqrt{\\tilde N_{\\mathrm{act}}}=n_{\\mathrm{A},1}\\triangleq\\frac{\\xi_2-\\sqrt{\\xi_2^2+4\\xi_3}}{2}$, $\\sqrt{\\tilde N_{\\mathrm{act}}}=n_{\\mathrm{A},2}\\triangleq\\frac{\\xi_2}{2}$, and $\\sqrt{\\tilde N_{\\mathrm{act}}}=n_{\\mathrm{A},3}\\triangleq\\frac{\\xi_2+\\sqrt{\\xi_2^2+4\\xi_3}}{2}$ with $n_{\\mathrm{A},1}<0<n_{\\mathrm{A},2}<n_{\\mathrm{A},3}$, which leads to the variations of $\\gamma_{\\mathrm{hyb}}(\\tilde N_{\\mathrm{act}})$ listed in Table \\ref{opt_N_A}. In particular, when $\\sqrt{\\frac{W_0}{W_{\\mathrm{act}}}}>n_{\\mathrm{A},3}$, we have\n\\begin{equation}\n \\gamma_{\\mathrm{hyb}}(n_{\\mathrm{A},2}^2)-\\gamma_{\\mathrm{hyb}}(\\frac{W_0}{W_{\\mathrm{act}}})=\\frac{\\xi_2^2}{4}+\\xi_3-\\xi_2\\sqrt{\\xi_3}=\\left(\\frac{\\xi_2}{2}-\\sqrt{\\xi_3}\\right)^2\\geq 0,\n\\end{equation}\nsuch that $\\gamma_{\\mathrm{hyb}}(n_{\\mathrm{A},2}^2)\\geq \\gamma_{\\mathrm{hyb}}(\\frac{W_0}{W_{\\mathrm{act}}})$.\nMoreover, by comparing the maximum receiver SNR of the fully-active IRS with that of the fully-passive IRS, given by $\\frac{W_0^2P_{\\mathrm{B}}\\beta^2}{W_{\\mathrm{pas}}^2D_{\\mathrm{BI}}^2d_{\\mathrm{IU}}^2\\sigma_0^2}$,\nit is obtained that when\n\\begin{equation}\n W_0 > 
W_{\\mathrm{A-P}}\\triangleq\\frac{W_{\\mathrm{pas}}^2\/W_{\\mathrm{act}}}{\\frac{\\beta\\sigma_{\\mathrm{I}}^2}{d_{\\mathrm{IU}}^2\\sigma_0^2}+\\frac{P_{\\mathrm{B}} \\beta \/ D_{\\mathrm{BI}}^{2}+\\sigma_{\\mathrm{I}}^{2}}{P_{\\mathrm{I}}}},\n\\end{equation}\nthe fully-passive IRS outperforms the fully-active IRS in terms of capacity.\nSecond, it is obtained from \\eqref{deri_snr} that when $W_0>W_{\\mathrm{A-H}}$, the hybrid IRS outperforms the fully-active IRS and achieves its maximum capacity.\nThird, we compare the achievable capacity of the hybrid IRS and the fully-passive IRS. By comparing \\eqref{C_p} and \\eqref{C_h_2}, it can be shown that when\n\\begin{equation}\n \\frac{W_0^2P_{\\mathrm{B}}\\beta^2}{W_{\\mathrm{pas}}^2D_{\\mathrm{BI}}^2d_{\\mathrm{IU}}^2\\sigma_0^2}>\\frac{P_{\\mathrm{B}}\\beta^2\\left(\\frac{A_{\\mathrm{sum}}W_{\\mathrm{pas}}}{4W_{\\mathrm{act}}}+\\frac{W_0}{W_{\\mathrm{pas}}}\\right)^2\/D_{\\mathrm{BI}}^2d_{\\mathrm{IU}}^2}{A_{\\mathrm{sum}}\\sigma_{\\mathrm{I}}^2\\beta\/d_{\\mathrm{IU}}^2+\\sigma_0^2},\\label{con_hp}\n\\end{equation}\nthe fully-passive IRS achieves the maximum capacity.\nThe condition in \\eqref{con_hp} can be re-expressed as\n\\begin{equation}\\label{ineq_H-P}\n \\frac{\\sigma_{\\mathrm{I}}^2\\beta}{d_{\\mathrm{IU}}^2}W_0^2-\\frac{W_{\\mathrm{pas}}^2\\sigma_0^2}{2W_{\\mathrm{act}}}W_0-\\frac{A_{\\mathrm{sum}}W_{\\mathrm{pas}}^4\\sigma_0^2}{16W_{\\mathrm{act}}^2}>0,\n\\end{equation}\nwhich leads to either\n\\begin{equation}\n W_0<\\frac{W_{\\mathrm{pas}}^2\\sigma_0^2d_{\\mathrm{IU}}^2}{4W_{\\mathrm{act}}\\sigma_{\\mathrm{I}}^2\\beta}-\\frac{W_{\\mathrm{pas}}^2\\sigma_0d_{\\mathrm{IU}}}{4W_{\\mathrm{act}}\\sigma_{\\mathrm{I}}}\\sqrt{\\frac{\\sigma_0^2d_{\\mathrm{IU}}^2}{\\sigma_{\\mathrm{I}}^2\\beta^2}+\\frac{P_{\\mathrm{I}}}{P_{\\mathrm{B}}\\beta^2\/D_{\\mathrm{BI}}^2+\\sigma_{\\mathrm{I}}^2\\beta}}<0,\n\\end{equation}\nwhich is infeasible, or $W_0>W_{\\mathrm{H-P}}$. 
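The variation pattern of gamma_hyb used in this proof can be verified numerically. The coefficients xi_1, xi_2, xi_3 below are arbitrary positive numbers (not values derived from the system parameters); the check confirms the increase/decrease pattern around the stationary points n_A2 and n_A3:

```python
import math

xi1, xi2, xi3 = 1.0, 6.0, 5.0   # arbitrary positive coefficients

def gamma_hyb(n_tilde):
    # gamma_hyb(N~) = xi1 * (-N~ + xi2*sqrt(N~) + xi3)^2
    return xi1 * (-n_tilde + xi2 * math.sqrt(n_tilde) + xi3) ** 2

# Stationary points in x = sqrt(N~): x = xi2/2 (zero of the second factor of
# the derivative) and x = (xi2 + sqrt(xi2^2 + 4*xi3))/2 (zero of the inner term).
n_A2 = xi2 / 2
n_A3 = (xi2 + math.sqrt(xi2 ** 2 + 4 * xi3)) / 2
assert 0 < n_A2 < n_A3

# gamma increases on (0, n_A2^2], decreases on (n_A2^2, n_A3^2], then increases.
xs = [0.5 * n_A2 ** 2, n_A2 ** 2, (n_A2 ** 2 + n_A3 ** 2) / 2, n_A3 ** 2, 1.5 * n_A3 ** 2]
vals = [gamma_hyb(x) for x in xs]
assert vals[0] < vals[1] > vals[2] > vals[3] < vals[4]
```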
\nLast, we discuss the relations among $W_{\\mathrm{A-P}}$, $W_{\\mathrm{A-H}}$, and $W_{\\mathrm{H-P}}$.\nTo facilitate our analysis, we list all possible permutations of the above thresholds in Table \\ref{per_thres}.\n\\begin{table}[t]\\centering\n\\caption{Possible relations for budget thresholds.}\\label{per_thres}\n\\begin{tabular}{|c|c|}\n\\hline\n\\begin{tabular}[c]{@{}c@{}}Permutations\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Feasibility\\end{tabular} \\\\\n\\hline\n$W_{\\mathrm{A-H}}<W_{\\mathrm{A-P}}<W_{\\mathrm{H-P}}$ & Feasible \\\\\n\\hline\nAll other permutations & Infeasible \\\\\n\\hline\n\\end{tabular}\n\\end{table}\nFor the feasible permutation $W_{\\mathrm{A-H}}<W_{\\mathrm{A-P}}<W_{\\mathrm{H-P}}$, the hybrid IRS reduces to the fully-passive IRS when $W_0>W_{\\mathrm{H-P}}$; and it outperforms the fully-active and fully-passive IRSs with the optimal number of active elements of $\\tilde N_{\\mathrm{act}}=\\tilde N^*_{\\mathrm{act}}$ otherwise. \nMoreover, all other permutations can be verified to be infeasible by contradiction. For example, for such an infeasible permutation, the fully-passive IRS outperforms the hybrid IRS when $W_0>W_{\\mathrm{H-P}}$, and the fully-active IRS outperforms the fully-passive IRS when $W_0<W_{\\mathrm{A-P}}$, which leads to a contradiction.\n\nThe spin-network states $T_{\\gamma,\\vec{j},\\vec{c}}$ are orthonormal with respect to the auxiliary inner product,\n\\begin{equation} \\label{2}\n<T_{\\gamma,\\vec{j},\\vec{c}},T_{\\gamma',\\vec{j}',\\vec{c}'}>\n\\equiv\n<T_{\\gamma,\\vec{j},\\vec{c}},T_{\\gamma',\\vec{j}',\\vec{c}'}>_{aux}\n=\\delta_{\\gamma\\gamma'}\\delta_{\\vec{j},\\vec{j}'}\n\\delta_{\\vec{c},\\vec{c}'}\\;.\n\\end{equation}\nAnother way to describe $\\cal H$ is by displaying it as a space of\nsquare integrable functions $L_2({\\overline {{\\cal A}\/{\\cal G}}},d\\mu_0)$. Here ${\\overline {{\\cal A}\/{\\cal G}}}$ is a space \nof distributional connections modulo gauge transformations, typically \nnon-smooth, and $\\mu_0$ is a \nrigorously defined, $\\sigma$-additive, diffeomorphism invariant probability\nmeasure on ${\\overline {{\\cal A}\/{\\cal G}}}$. The space ${\\overline {{\\cal A}\/{\\cal G}}}$ is the maximal extension of \nthe space ${{\\cal A}\/{\\cal G}}$ of smooth connections such that (the Gel'fand \ntransform of) spin-network functions \nare still continuous. The inner product can be extended, with the same\northonormality relations, to any smooth (rather than analytic) graph with a \nfinite number of edges and to non-gauge-invariant functions. 
It is only \nthe latter description of $\\cal H$ which enables one to verify that \nthe inner product $<.,.>$ is the unique one that incorporates the \ncorrect reality conditions, namely that $A,E$ are in fact real valued. The\ninner product (\\ref{2}) was postulated earlier (see remarks in \\cite{RDP})\nfor the {\\em complex} connection formulation. But\nit was not until the construction of the Ashtekar-Lewandowski measure $\\mu_0$\nthat one could show that this inner product is actually the correct one \nfor the real connection formulation only.\n\nWe will denote by $\\Phi$ the finite linear combinations of spin-network \nfunctions and call it the space of cylindrical functions. A function \n$f_\\gamma$ is said to be cylindrical with respect to a graph \n$\\gamma$ whenever it is a linear combination of spin-network functions on \nthat graph such that $\\pi_{j_e}$ is not the trivial representation for any\n$e\\in E(\\gamma)$. \nThe space $\\Phi$ can be equipped with one of the standard nuclear topologies\ninduced from $G^n$ because on each graph $\\gamma$ every cylindrical \nfunction $f_\\gamma$ becomes a function $f_n$ on $G^n$, where $n$ is the \nnumber of \nedges $e$ of $\\gamma$, through the simple relation $f_\\gamma(A)=\nf_n(h_{e_1}(A),..,h_{e_n}(A))$. This \nturns it into a topological vector space. By $\\Phi'$ we mean the topological\ndual of $\\Phi$, that is, the bounded linear functionals on $\\Phi$. 
General\ntheorems on nuclear spaces show that the inclusion $\\Phi\\subset{\\cal \nH}\\subset\\Phi'$ (Gel'fand triple) holds.\n\nSo far we have dealt with solutions to the Gauss constraint only, that is,\nwe have explicitly solved it by dealing with gauge invariant functions only.\nWe now turn to the solutions to the diffeomorphism constraint (we follow \n\\cite{15}).\\\\ \nRoughly speaking, one constructs a certain subspace $\\Phi_{Diff}$ of \n$\\Phi'$ by ``averaging spin-network states\nover the diffeomorphism group\" according to the following recipe:\\\\\nTake a spin-network state $T_I$ and consider its orbit $\\{T_I\\}$ under the \ndiffeomorphism group. Here we mean the orbit under asymptotically identity \ndiffeomorphisms only! Then construct the distribution\n\\begin{equation} \\label{3}\n[T_I]:=\\sum_{T\\in\\{T_I\\}} T,\n\\end{equation}\nwhich can be explicitly shown to be an \nelement of $\\Phi'$. Any other vector is averaged by \nfirst decomposing it into spin-network states and then averaging those \nspin-network states separately. Certain technical difficulties having to \ndo with superselection rules and graph symmetries \\cite{7} were removed in \n\\cite{15}.\n\nAn inner product on the space of the states so constructed is given by\\\\\n\\begin{equation} \\label{4}\n<[f],[g]>_{Diff}:=[f](g),\n\\end{equation}\nwhere the brackets stand for the averaging \nprocess and the right hand side means evaluation of a distribution on a \ntest function. The completion of $\\Phi_{Diff}$ with respect to \n$<.,.>_{Diff}$ is denoted ${\\cal H}_{Diff}$.\n\nFinally, the Hamiltonian constraint is solved as follows \\cite{10}:\nOne can explicitly write down an algorithm of how to construct the most\ngeneral solution. 
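The averaging construction in equations (3) and (4) can be mimicked in a toy model in which a finite group acts on basis labels. The cyclic-shift group below is only an analogy for the (much larger) diffeomorphism group, and all names are illustrative:

```python
N = 4                 # toy "spatial" points
group = range(N)      # toy symmetry group: cyclic shifts of the points

def act(g, state):
    # Act on a basis state (a tuple of labels) by cyclically shifting it.
    return tuple(state[(i - g) % N] for i in range(N))

def average(state):
    # Toy analogue of [T] := sum over the orbit {T} (cf. eq. (3)); the formal
    # sum is represented by the set of distinct orbit members.
    return {act(g, state) for g in group}

def pairing(avg_state, state):
    # Toy analogue of <[f],[g]>_Diff := [f](g) (cf. eq. (4)): evaluate the
    # averaged "distribution" on a basis state via orbit membership.
    return 1 if state in avg_state else 0

s = (1, 0, 0, 2)
avg = average(s)
# The pairing is invariant under the group action on its second argument:
assert all(pairing(avg, act(g, s)) == 1 for g in group)
assert pairing(avg, (2, 2, 2, 2)) == 0
```

As in the genuine construction, the averaged object no longer distinguishes group-related basis states, which is exactly what makes the pairing well defined on equivalence classes.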
It turns out that \none can construct ``basic\" solutions $s_\\mu\\in\\Phi'$ which are\nmutually orthonormal with respect to $<.,.>_{Diff}$ (in a generalized sense)\nand diffeomorphism invariant.\nThe span of these solutions is equipped \nwith the natural orthonormal basis $s_\\mu$ (in the generalized sense).\nOne now defines a ``projector\" \n\\begin{equation} \\label{5}\n\\hat{\\eta}f:=[[f]]:=\\sum_\\mu s_\\mu <s_\\mu,[f]>_{Diff}\n\\end{equation}\nfor each $f\\in\\Phi$ and so obtains a \nsubspace $\\Phi_{Ham}\\subset\\Phi'$. \nThe physical inner product \\cite{15} is defined by \n\\begin{equation} \\label{6}\n<[[f]],[[g]]>_{phys}:=[[f]]([g])\\;.\n\\end{equation}\nFinally, the physical Hilbert space is just the completion of \n$\\Phi_{Ham}$ with respect to $<.,.>_{phys}$.\n\n\n\n\\section{Regularization of the ADM Hamiltonian}\n\nThere are many ways to write the ADM-Hamiltonian \nwhich are all classically weakly identical. We are going to choose a form \nwhich is a pure surface integral and which depends only on $E^a_i$ \nbecause in this case the associated operator will be almost diagonal in a \nspin-network basis, so that we can claim that spin-network states really\ndo provide a non-linear Fock representation for quantum general relativity,\nas announced in \\cite{8,9}.\n\nThe appropriate form of the classical symmetry generators was derived in \n\\cite{13,13a}.\nAlthough that paper was written for the {\\em complex} Ashtekar variables,\nall results can be taken over by carefully removing factors of $i$ at \nvarious places. 
We find for the surface part of the Hamiltonian (expression \n(4.31) in \\cite{13}, we \nuse that $\\tiN=N\/\\sqrt{\\det(q)},\\;D_a\\tiN=(D_a N)\/\\sqrt{\\det(q)}$ where\n$N$ is the scalar lapse function)\n\\begin{equation} \\label{20}\nE(N)=-\\frac{2}{\\kappa}\\int_{\\partial\\Sigma} dS_a\\frac{N}{\\sqrt{\\det(q)}}\nE^a_i\\partial_b E^b_i\\;.\n\\end{equation}\nIt is easy, instructive and for the sign of the ADM energy crucial to see \nthat (\\ref{20}) really equals \nthe classical expression $+\\frac{1}{\\kappa}\\int_{\\partial\\Sigma}dS_a \n(q_{ab,b}-q_{bb,a})$ due to ADM :\nUsing that $E^a_i=\\frac{1}{2}\\epsilon^{abc}\\epsilon_{ijk} e_a^j e_b^k$\nwe have the chain of identities\n\\begin{eqnarray} \\label{7}\n&&-\\frac{2}{\\sqrt{\\det(q)}}E^a_i\\partial_b E^b_i\n=-\\mbox{sgn}(\\det(e))e^a_i\\epsilon^{bcd}\\epsilon_{ijk}[e_c^j e_d^k]_{,b}\n\\nonumber\\\\\n&=&-2\\mbox{sgn}(\\det(e))e^a_i\\epsilon^{bcd}\\epsilon_{ijk}e_c^j e_{d,b}^k\n\\nonumber\\\\\n&=& -2\\mbox{sgn}(\\det(e))q^{af}\\epsilon^{bcd}\\epsilon_{ijk}e_f^i e_c^j \ne_{d,b}^k\n=-2q^{af}\\epsilon^{bcd}\\sqrt{\\det(q)} \\epsilon_{fce}e^e_k e_{d,b}^k\n\\nonumber\\\\\n&=&-4q^{ac}\\delta^b_{[c}\\delta^d_{e]}\\sqrt{\\det(q)} e^e_i e_{d,b}^i\n\\nonumber\\\\\n&=& 4\\sqrt{\\det(q)} q^{ac} q^{ed} e_d^i e_{[c,e]}^i\n=2\\sqrt{\\det(q)} q^{ac} q^{bd} e_d^i[e_{c,b}^i-e_{b,c}^i]\n\\nonumber\\\\\n&=& \\sqrt{\\det(q)} q^{ac} q^{bd} \n[2e_{(d}^i e_{c),b}^i+2e_{[d}^i e_{c],b}^i\n-2e_{(d}^i e_{b),c}^i-2e_{[d}^i e_{b],c}^i]\n\\nonumber\\\\\n&=& \\sqrt{\\det(q)} q^{ac} q^{bd} \n[(q_{cd,b}-q_{bd,c})\n+2e_{[d}^i e_{c],b}^i]\\;\n\\end{eqnarray}\nNow we expand $e_a^i(x)=\\delta_a^i+\\frac{f_a^i(x\/r)}{r}+o(1\/r^2)$ where\n$r^2=\\delta_{ab}x^a x^b$ defines the asymptotic Cartesian frame. The function\n$f_a^i(x\/r)$ only depends on the angular coordinates of the asymptotic \nsphere and is related to the analogous expansion \n$q_{ab}(x)=\\delta_{ab}+\\frac{f_{ab}(x\/r)}{r}+o(1\/r^2)$ by \n$f_{ab}\\delta^b_i=f_a^i$. 
Consider now the \nremainder in the last line of (\\ref{7}). Its $o(1\/r^2)$ part vanishes \nbecause $f_{[ab]}=0$ and this concludes the proof.\n\nIn the sequel we focus on the energy functional $E_{ADM}=E(N=1)$. We will \nquantize it in two different ways corresponding to two quite different \nfactor orderings. Each of the orderings has certain advantages and \ncertain disadvantages which we will point out. \n\n\n\\subsection{Ordering I : No state space restriction}\n\nIn this subsection we will derive a form of the operator which is densely \ndefined on the whole Hilbert space $\\cal H$ (and extends to the spaces \n${\\cal H}_{Diff},\\;{\\cal H}_{phys}$ defined above) without imposing any further\nrestriction that corresponds to asymptotic flatness.\n\nUsing again that\n$E^a_i=\\frac{1}{2}\\epsilon_{ijk}\\epsilon^{abc}e^j_b e^k_c$ we can write \nit as\n\\begin{equation} \\label{21}\nE_{ADM}=\\lim_{S\\to \\partial\\Sigma} E_{ADM}(S) \n\\mbox{ where } E_{ADM}(S)=-\\frac{2}{\\kappa}\\int_S\n\\frac{1}{\\sqrt{\\det(q)}} \\epsilon^{ijk}e^j\\wedge e^k \\partial_b E^b_i\n\\end{equation}\nand $S$ is a closed 2-surface which is topologically a sphere.\nThe idea is to point-split expression (\\ref{21}) and to use that\n$$\n[\\mbox{sgn}(\\det(e))e_a^i](x)=\\frac{1}{2\\kappa}\\{A_a^i(x),V(x,\\epsilon)\\}\n$$\nwhere $V(x,\\epsilon)=\\int_\\Sigma d^3y \\chi_\\epsilon(x,y)\\sqrt{\\det(q)}(y)$\nand $\\chi_\\epsilon$ is the (smoothed out) characteristic function of a \nbox of coordinate volume $\\epsilon^3$. 
Since \n$$\n\\lim_{\\epsilon\\to 0}\\frac{\\chi_\\epsilon(x,y)}{\\epsilon^3}=\\delta(x,y)\n\\mbox{ so that } \\lim_{\\epsilon\\to 0}\\frac{V(x,\\epsilon)}{\\epsilon^3}\n=\\sqrt{\\det(q)}(x)\n$$\nwe have\n\\begin{eqnarray} \\label{22}\n&&E_{ADM}(S)\\nonumber\\\\\n&=&\\lim_{\\epsilon\\to 0}\n-\\frac{2}{\\kappa}\\int_S \n\\frac{1}{\\epsilon^3\\sqrt{\\det(q)}(x)}\n\\epsilon^{ijk}e^j(x)\\wedge e^k(x)\\int_\\Sigma d^3y\n\\chi_\\epsilon(x,y)(\\partial_b E^b_i)(y)\n\\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}-\\frac{2}{\\kappa}\\int_S \n\\frac{\\epsilon^{ijk}}{V(x,\\epsilon)}\ne^j(x)\\wedge e^k(x)\\int_\\Sigma d^3y \\chi_\\epsilon(x,y)(\\partial_b E^b_i)(y)\n\\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}\n-\\frac{1}{2\\kappa^3}\\int_S \n\\frac{\\epsilon^{ijk}}{V(x,\\epsilon)}\n\\{A^j(x),V(x,\\epsilon)\\}\\wedge \\{A^k(x),V(x,\\epsilon)\\}\n\\int_\\Sigma d^3y \\chi_\\epsilon(x,y) (\\partial_b E^b_i)(y) \\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}\n-\\frac{2}{\\kappa^3}\\int_S \\epsilon^{ijk} \n\\{A^j(x),\\sqrt{V(x,\\epsilon)}\\}\\wedge \\{A^k(x),\\sqrt{V(x,\\epsilon)}\\}\n\\int_\\Sigma d^3y \\chi_\\epsilon(x,y)\n(\\partial_b E^b_i)(y) \\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}\n\\frac{4}{\\kappa^3}\\int_S\n\\mbox{tr}(\\{A(x),\\sqrt{V(x,\\epsilon)}\\}\\wedge \n\\{A(x),\\sqrt{V(x,\\epsilon)}\\} \\int_\\Sigma d^3y \\chi_\\epsilon(x,y)\n(\\partial_b E^b)(y)) \\nonumber\\\\\n&=& -\\lim_{\\epsilon\\to 0}\n\\frac{4}{\\kappa^3}\\int_S \n\\mbox{tr}(\\{A(x),\\sqrt{V(x,\\epsilon)}\\}\\wedge \n\\{A(x),\\sqrt{V(x,\\epsilon)}\\} \\int_\\Sigma d^3y \n[\\partial_{y^b}\\chi_\\epsilon(x,y)] E^b(y)) \\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0} E^\\epsilon_{ADM}(S)\n\\end{eqnarray}\nwhere in the second-to-last step we have taken a trace with \nrespect to generators $\\tau_i$ of $su(2)$ obeying \n$[\\tau_i,\\tau_j]=\\epsilon_{ijk}\\tau_k$ and in \nthe last step we have performed an integration by parts\n(the boundary term at $\\partial\\Sigma$ does not contribute for finite $S$\nand 
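The fourth equality in (\\ref{22}) rests on the elementary Poisson bracket identity, spelled out here for clarity (with $V=V(x,\\epsilon)$):
$$
\\{A_a^j,\\sqrt{V}\\}=\\frac{1}{2\\sqrt{V}}\\{A_a^j,V\\}
\\qquad\\Longrightarrow\\qquad
\\frac{1}{V}\\{A^j,V\\}\\wedge\\{A^k,V\\}=4\\{A^j,\\sqrt{V}\\}\\wedge\\{A^k,\\sqrt{V}\\}\\;,
$$
which also accounts for the change of the prefactor from $-1\/(2\\kappa^3)$ to $-2\/\\kappa^3$.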
$\\epsilon$ sufficiently small).\nThus, we absorbed the $1\/\\sqrt{\\det(q)}$ into a square-root within a \nPoisson-bracket and simultaneously the singular $1\/\\epsilon^3$ into a \nvolume functional. Classically we could have dropped the $1\/\\sqrt{\\det(q)}$\n(although the integrand would then no longer be a density of weight one \nand is strictly speaking not the boundary integral of a variation of the \nHamiltonian constraint) due to the classical boundary conditions which \ntell us that $\\det(q)$ tends to $1$. \\\\\nWe now quantize $E^\\epsilon_{ADM}(S)$. This consists of two parts : In the \nfirst we focus on the volume integral in (\\ref{22}) and replace $E^a_i$ by\n$\\hat{E}^a_i=-i\\hbar\\kappa\\delta\/\\delta A_a^i$. In the second step we \ntriangulate $S$ exactly as the hypersurface of 2+1 gravity in \\cite{10a}, \nreplace the volume \nfunctional by the volume operator and Poisson brackets by commutators times\n$1\/(i\\hbar)$. \n\nSo let $f_\\gamma$ be a function cylindrical with \nrespect to a graph $\\gamma$. 
Since we are only interested in the \nlimit $S\\to\\partial\\Sigma$ we may assume that \\\\\n1) $\\gamma$ lies entirely within the closed ball whose boundary is $S$ but\\\\\n2) $\\gamma$ may intersect $S$ at an endpoint of one of its edges and may \neven have edges that lie entirely inside $S$.\\\\\nFurthermore we can label the edges of $\\gamma$ in \nsuch a way that an edge either intersects $S$ transversally (with an \norientation outgoing from the intersection point with $S$) or lies entirely \nwithin $S$.\\\\ \nComing to the first step we have for the $y$ integral involved in\n$E^\\epsilon_{ADM}(S)$ :\n\\begin{eqnarray} \\label{23}\n&& \\int_\\Sigma d^3y [\\partial_a\\chi_\\epsilon(x,y)]\n\\hat{E}^a_i(y) f_\\gamma \\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)}\\int_\\Sigma d^3y \n[\\partial_a\\chi_\\epsilon(x,y)]\n\\int_0^1 dt \\dot{e}^a(t) \\delta(y,e(t))X^i_e(t) f_\\gamma \n\\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)}\\int_0^1 dt \\dot{e}^a(t)\n[\\partial_{y^a}\\chi_\\epsilon(x,y)_{y=e(t)}]\nX^i_e(t) f_\\gamma \n\\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)}\\int_0^1 dt \n[\\frac{d}{dt}\\chi_\\epsilon(x,e(t))]\nX^i_e(t) f_\\gamma \n\\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)} \\lim_{n\\to\\infty}\n\\sum_{k=1}^n [\\chi_\\epsilon(x,e(t_k))-\\chi_\\epsilon(x,e(t_{k-1}))]\nX^i_e(t_{k-1}) f_\\gamma \n\\end{eqnarray}\nwhere $E(\\gamma)$ is the set of edges of $\\gamma$, $X^i_e(t):=\n[h_e(0,t)\\tau_i h_e(t,1)]_{AB} \\partial\/\\partial[h_e(0,1)]_{AB}$\nand $0=t_0<t_1<\\cdots<t_n=1$ and \nb) there exists $\\epsilon>0$ such that $-t^2 \nN^2(x)+\\Psi[\\hat{L}(c(\\vec{N},x,t))^2\\xi]<0$ for each $00$ such that the four vector \n$\\hat{P}^\\mu(x,t):=(\\hat{H}_{matter}'(N_{x,t})\n+\\hat{V}_{matter}'(\\vec{N}_{x,t}),0)$ is \neither zero or \nfuture directed and timelike for every $x\\in\\Sigma$ and $00$ for \neach $x,00$ if $v_n\\in R$. Thus we have \n$\\hat{V}(B_m)f_n=\\lambda\\ell_p^3\\delta_{m,n}f_n$. 
Consider the \ninfinite product state $\\xi:=\\prod_{n=1}^\\infty f_n$ which is a regular \n(non-cylindrical) spin-network state on the infinite graph \n$\\gamma=\\cup_n \\gamma_n$ and which is in fact\nnormalized, $||\\xi||=1$ thanks to the disjointness of the graphs $\\gamma_n$\nbecause of which $||\\xi||=\\prod_n||f_n||$ due to the properties of the \nAshtekar-Lewandowski measure. We now choose $k:=\\lambda$\nand find that for any macroscopic $R$, that is, any $R$ that contains \nmany of the boxes $B_n$ it holds that \n$\\hat{V}(R)\\xi=V_0(R)[1+o(\\ell_p^3\/V_0(R))]\\xi$. Now, since no state\nwhich is cylindrical with respect to any of the \ngraphs $\\gamma_n$ can be in the image of the Hamiltonian constraint\n\\cite{8,9,10} it follows from its definition \\cite{15} that the $\\hat{\\eta}$\noperator reduces to group averaging with respect to the diffeomorphism group\nbecause of which the group averaged diffeomorphism invariant state \n$\\Psi=[\\xi]=\\hat{\\eta}\\xi$ is normalized as well with respect to the \nphysical inner product \\cite{15} $||\\Psi||_{phys}^2=\\Psi(\\xi)=||\\xi||^2=1$.\nThus indeed $\\Psi([\\hat{V}(R)-V_0(R)]\\xi)=o(\\ell_p^3\/V_0(R))$ is satisfied.\nIt is clear that the construction can be repeated for the surface operator\nas well because most of the intersections of the macroscopic surface \n$S$ with the $\\gamma_n$ will not be in vertices of the $\\gamma_n$ so \nthat $\\hat{V}(R),\\hat{A}(S)$ can be simultaneously diagonalized up to \nerrors of order of $\\ell_p^2\/A_0(S)$. Thus, almost every $f_n$ can be \nchosen as a simultaneous eigenvector of $\\hat{V}(R),\\hat{A}(S)$. Finally,\nany macroscopic, for simplicity non-self-intersecting (any loop is a \nproduct of these), loop $\\alpha$ on our particular $\\gamma$ is of the \nproduct form $\\alpha=\\circ^n \n\\alpha_n^{k_n},\\;\\alpha_n\\subset\\gamma_n,\\;k_n\\in\\{0,1\\}$ where $\\;k_n=0$ \nexcept for finitely many. 
The $SU(2)$ Mandelstam algebra is too \ncomplicated to exhibit an explicit solution for $SU(2)$ so let us \nargue with a $U(1)$ substitute that the condition stated in definition\n(\\ref{def1}) is reasonable. For $U(1)$ we have \n$h_\\alpha=\\prod_{k_n=1}h_{\\alpha_n}$. Now, if we \nchoose for simplicity $\\alpha_n=\\gamma_n$ then \n$f_n=\\sum_{k=-N}^N a_k h_{\\alpha_n}^k$ where $\\chi_k(g)=g^k$ is the \ncharacter of the irreducible representation of $U(1)$ with weight $k$.\nSince $T=\\chi_k\\chi_l=\\chi_{k+l}$ the condition stated in the definition \namounts to asking that (for $U(1)$) $1=\\prod_{k_n=1}\\sum_k \n|a_k|^2=\\prod_{k_n=1}\\sum_{k=-N+1}^N\\bar{a}_k a_{k+1}$ up to some \ncorrections. Indeed, if we could choose all $a_k$ to be equal \n($=1\/\\sqrt{2N+1}$) then the error would be $1-\\prod_{k_n=1} [1-1\/(2N+1)]$ \nwhich is small provided that $\\sum_{k_n=1}1=o(L(\\alpha)\/\\ell_p)<0$ such that $\\xi$ is,\nfor each $00$\nthe standard vector is associated with the spin of the particle in the rest\nframe and the covering group of the stabilizer group is given by $SU(2)$.\nIn the massless case the standard vector is associated with the helicity\nof the particle (spin in momentum direction) and \nthe covering group of the stabilizer group is given by $U(1)$, \nphysically important representations being two-valued. Thus, the \nrotations at spatial infinity determine the unitary irreducible \nrepresentation of the particle state in question.}. \nThis is enough to construct particle states since the \nirreducible unitary representations of the little group induce a unique\nunitary irreducible representation of the full Poincar\\'e group. So far \nwe did not construct an operator corresponding to a boost generator which\nis more difficult to obtain than the ADM energy operator.\n\nFirst of all we must clarify on which space to represent the Poincar\\'e \ngroup, respectively its generators. 
To that end it is helpful to remember\nhow the classical Poincar\\'e generators are realized as a subalgebra of \nthe Poisson algebra \\cite{13,13a}.\\\\ \nLet $H(N),V(\\vec{N})$ be the Hamiltonian and diffeomorphism constraint \nfunctional respectively. Both functionals are integrals over $\\Sigma$ of \nlocal \ndensities and both converge and are functionally differentiable only\nif the lapse and shift functions $N,\\vec{N}$ vanish at $\\partial\\Sigma$.\nIn order to be able to describe the Poincar\\'e group corresponding to\nthe asymptotically constant or even diverging functions ($x^a$ is a \nCartesian frame at spatial infinity)\n$N=a+\\chi_a x^a,N^a=a^a+\\epsilon^{abc}\\phi_b x^c$ where $(a,a^a)$ is \na four translation, $\\phi^a$ are rotation angles and $\\chi^a$ are boost\nparameters, one proceeds as follows : let $S$ be a bounded two-surface\nthat is topologically a sphere and let $B(S)$ be the (intersection of \n$\\Sigma$ with the) closed ball such that $\\partial B(S)=S$. For \neach $S$ one defines \n$E(N,S):=H(N,S)+E_{ADM}(N,S)+B(N,S),\\;P(\\vec{N},S):=V(\\vec{N},S)+P_{ADM}(\\vec{N},S)$\nwhere the parameter $S$ means that volume integrals are restricted to\n$B(S)$ only (a classical regularization of the divergent integrals) and the \n``counter-terms\" $E_{ADM}(N,S),B(N,S),P_{ADM}(\\vec{N},S)$ are the surface \nintegrals defined in \\cite{13} and correspond to ADM energy, boost and \nmomentum. One can show that $\\lim_{S\\to\\partial\\Sigma} E(N,S), \n\\lim_{S\\to\\partial\\Sigma} P(\\vec{N},S)$ exist. 
Moreover, for each finite $S$,\n$E(N,S),P(\\vec{N},S)$ are functionally differentiable so that it is \nmeaningful to compute the Poisson brackets\n\\begin{eqnarray} \\label{33}\n\\{E(M,S),E(N,S)\\}=P(q^{ab}(MN_{,b}-M_{,b}N),S)\\nonumber\\\\\n\\{E(M,S),P(\\vec{N},S)\\}=E({\\cal L}_{\\vec{N}}M,S)\\nonumber\\\\\n\\{P(\\vec{M},S),P(\\vec{N},S)\\}=P({\\cal L}_{\\vec{M}}\\vec{N},S)\\;.\n\\end{eqnarray}\nThe crucial point is that one computes the Poisson brackets a) at finite \n$S$ and b) on the full phase space and then takes the limit \n$S\\to\\partial\\Sigma$ or restricts to the constraint surface of the phase \nspace (where $H(N,S)=V(\\vec{N},S)=0$). Notice that the numerical value \nof, say, $E(N,S)$ equals $H(N,S)$ for a gauge transformation for which $N\\to \n0$ as $S\\to\\partial\\Sigma$. On the other hand, on the constraint surface \nfor a symmetry for which $N\\not\\to 0$ as $S\\to \\partial\\Sigma$ it equals a \ntime translation or a boost respectively. A similar remark holds for \n$P(\\vec{N},S)$. One therefore interprets (\\ref{33}) as follows :\nif $M,N$ are both pure gauge then the constraint algebra closes. If $M$\nis a symmetry and $N$ pure gauge then the energy (or boost generator) is gauge \ninvariant.\nIf $M,N$ are both symmetries then time translations commute with each other,\ntime translations and boosts give a spatial translation and a boost with a \nboost gives a rotation, in other words the symmetry algebra closes.\n\nIn quantum theory we will therefore proceed as follows : \\\\\nRecall \\cite{8,9,10,15} that the Hamiltonian constraint $\\hat{H}(N)$\n(for asymptotically vanishing $N$) is only well-defined on the subspace\nof $\\Phi'$ corresponding to distributions on $\\Phi$\nwhich are invariant under diffeomorphisms that approach identity at \n$\\partial\\Sigma$. Thus we can expect the symmetry algebra to hold only on\nsuch distributions as well. 
In fact, we will just choose $\\Psi$ to be a \nsolution to all constraints.\\\\ \nNext, in view of the fact that even the classical symmetry algebra \nonly holds provided one first computes Poisson brackets at finite $S$ and \nthen takes the limit, we will check the quantum algebra first \nby evaluating $\\Psi$ on $\\hat{E}(N_S) f_S$ \nfor functions $f_S\\in\\Phi$ cylindrical with \nrespect to a graph which lies in the interior of $B(S)$ (it may \nintersect $S$ in such a way that the volume operator does not vanish\nat the intersection point for any of the eigenvectors into which\n$f_S$ may be decomposed) and lapse functions $N_S$ which grow at infinity like\nsymmetries but which are supported in $B(S)\\cup S$ {\\em including $S$}, and \nthen to take the limit $S\\to\\partial\\Sigma$ \n(the support fills all of $\\Sigma$ as $S\\to \\partial\\Sigma$ in this \nprocess). \\\\ \n\nWe come to the definition of $\\hat{E}(N),\\hat{P}(\\vec{N})$. \nFirst we treat the spatial Euclidean group.\\\\\nThe unitary representation of the diffeomorphism group defined by\n$\\hat{U}(\\varphi) f_\\gamma=f_{\\varphi(\\gamma)}$ which was for matters of \nsolving the diffeomorphism constraint so far only defined for \ndiffeomorphisms that approach asymptotically the identity, can easily \nbe extended to three-diffeomorphisms which correspond to asymptotic spatial\ntranslations or rotations. Instead of defining the generator \n$\\hat{P}(\\vec{N})$ though \n(which does not exist on $\\cal H$ \\cite{7}) we content ourselves with the \nexponentiated version $\\hat{U}(\\varphi(\\vec{N}))$ where $\\varphi(\\vec{N})$\nis the diffeomorphism generated by the six parameter shift vector field\n$N^a=a^a+\\epsilon_{abc} \\phi^b x^c$ for some Cartesian frame $x^a$ possibly \ncorrected by an asymptotically vanishing vector field corresponding to a \ngauge transformation. 
It is trivial to check that \n\\begin{equation} \\label{34}\n\\hat{U}(\\varphi(\\vec{N}))\n\\hat{U}(\\varphi(\\vec{N}'))\n\\hat{U}(\\varphi(\\vec{N}))^{-1}\n\\hat{U}(\\varphi(\\vec{N}'))^{-1}\n=\\hat{U}(\\varphi({\\cal L}_{\\vec{N}}\\vec{N}'))\n\\end{equation}\nwhere $\\cal L$ denotes the Lie derivative \nso that there are no anomalies coming from the spatial Euclidean group.\nThis expression was derived by applying it to any function $f_S$ cylindrical \nwith respect to a graph with support in $B(S)$. \n\nWe now turn to the time translations. As already mentioned we will not \nconsider boosts in this paper so that $\\chi_a\\equiv 0$ in \nthe four parameter family of lapse functions $N=a+\\chi_a x^a$\n(modulo a correction which vanishes at $\\partial\\Sigma$).\nDefine the operator on ${\\cal H}$\n\\begin{equation} \\label{35}\n\\hat{E}(N):=\\hat{H}(N)\n+\\hat{E}_{ADM}(N)\n\\end{equation}\nwhere $\\hat{H}(N)$ is the Lorentzian Hamiltonian constraint.\nNotice that $\\hat{E}(N)$ just as the Hamiltonian constraint in \n\\cite{8,9,10,15} carries a certain prescription dependence \nwhich is removed by evaluating its dual on $\\Phi_{Diff}$. We will not\nrepeat these details here and refrain from indicating this prescription\ndependence in (\\ref{35}), however, the prescription dependence has \nconsequences for the commutator algebra that we will discuss below in \ngreat detail.\n\nLet us verify the commutators between the time \ntranslations among themselves and between time translations and spatial \ntranslations and rotations. 
We have \n\\begin{eqnarray} \\label{36}\n&&\\Psi([\\hat{E}(M),\\hat{E}(N)]f_\\gamma)=\n\\Psi([\\hat{H}(M),\\hat{H}(N)]f_\\gamma)\n+\\Psi([\\hat{E}_{ADM}(M),\\hat{E}_{ADM}(N)]f_\\gamma)\n\\nonumber\\\\\n&+&\\Psi(\\{[\\hat{E}_{ADM}(M),\\hat{H}(N)]+[\\hat{H}(M),\\hat{E}_{ADM}(N)]\\}\nf_\\gamma)\\;.\n\\end{eqnarray}\nThe first term vanishes for the same reason as in \\cite{8,9,10,15} \nalthough one needs one additional argument : the \nHamiltonian constraint does not act at vertices that it creates. Therefore,\nit can be written as a double sum over vertices $v,v'$ of $\\gamma$ alone and \neach of these terms is of the form \n$$\n(M(v)N(v')-M(v')N(v))\n\\Psi([\\hat{H}_{v',\\gamma(v)}\\hat{H}_{v,\\gamma}-\n\\hat{H}_{v,\\gamma(v')}\\hat{H}_{v',\\gamma}]f_\\gamma)\n$$\nwhere the notation means that $\\hat{H}_{v,\\gamma}$ is a family of\nconsistently defined operators each of which \nacts on cylindrical functions which depend on the graph $\\gamma$ and \n$\\gamma(v)$ is a graph on which $\\hat{H}_{v,\\gamma}f_\\gamma$ depends. \nThis expression clearly is non-vanishing only if $v\\not=v'$ but then \nit can be shown that the operators $\\hat{H}_{v,..}$ and \n$\\hat{H}_{v',..}$ actually commute. Still, this does not show that\nthe term above vanishes; however, it can be shown that \n$\\hat{H}_{v',\\gamma(v)}\\hat{H}_{v,\\gamma}f_\\gamma$ and \n$\\hat{H}_{v,\\gamma(v')}\\hat{H}_{v',\\gamma}f_\\gamma$ are related by a \ndiffeomorphism \\cite{9}. Now in \\cite{9} that was enough to show that the \ncommutator vanishes because we were dealing there only with vertices \nwhich do not intersect $S$ as otherwise both lapse functions identically \nvanish for a pure gauge transformation. Thus the diffeomorphism that \nrelates the two terms above could be chosen to have support inside \n$B(S)$ and $\\Psi$ is invariant under such diffeomorphisms. In the present \ncontext that \ndoes not need to be true. 
However, the crucial point is now that by the \ntangle property all edges of $\\gamma$ that intersect $S$ must intersect\n$S$ transversally. Therefore the arcs that the Hamiltonian constraint \nattaches to $\\gamma$ and whose position is the only thing by which the \ntwo above vectors differ {\\em lie inside $B(S)$ and do not intersect $S$}. \nTherefore, again the two vectors are related by a diffeomorphism which \nhas support inside $B(S)$, that is, they are related by a gauge \ntransformation and therefore the commutator vanishes.\n\nWe turn to the second term in (\\ref{36}). Now we obtain a double sum\nover vertices of $\\gamma$ which lie in $S$ and each term is \nof the form\n$$\n(M(v)N(v')-M(v')N(v))\n\\Psi([\\hat{E}_{v',ADM},\\hat{E}_{v,ADM}]f_\\gamma)\n$$\nwhich is significantly simpler than before because $\\hat{E}_{v,ADM}$\ndoes not alter the graph. Notice that the commutator makes sense because\n$\\hat{E}_{ADM,v}$ leaves the span of non-zero volume eigenvectors invariant.\nNow for $v\\not=v'$ the commutator trivially vanishes, this time without \nemploying diffeomorphism invariance of $\\Psi$.\n\nFinally the last term in (\\ref{36}) is a double sum over vertices \n$v,v'$ of $\\gamma$, where $v$ must lie in $S$, of the form\n\\begin{equation} \\label{37}\n(M(v)N(v')-M(v')N(v))\n\\Psi([\\hat{H}_{v',\\gamma},\\hat{E}_{v,ADM}]f_\\gamma)\\;.\n\\end{equation}\nThe fact that $\\hat{E}_{ADM}$ does not alter the graph was used to write\n(\\ref{37}) as a commutator without employing diffeomorphism \ninvariance of $\\Psi$. Now it may happen that, although $f_\\gamma$ is in \nthe domain of $\\hat{E}_{v,ADM}$, $\\hat{H}_{v,\\gamma}f_\\gamma$ is no \nlonger in the domain and so (\\ref{37}), for $v=v'$, is in danger of \nbeing \na meaningless product of something that blows up times zero while that\ncannot happen for $v\\not=v'$. 
However, \nsince $\\Psi$ is a solution we conclude first of all that \n(\\ref{37}) equals\n\\begin{equation} \\label{38}\n-(M(v)N(v')-M(v')N(v))\n[\\hat{E}_{v,ADM}\\Psi](\\hat{H}_{v',\\gamma} f_\\gamma)\n\\end{equation}\nand since $\\Psi$ is also in the domain of $\\hat{E}_{ADM}$ both \n$\\hat{E}_{v,ADM}\\Psi$ and $\\hat{H}_{v',\\gamma} f_\\gamma$\nare well-defined elements of $\\Phi'$ and $\\Phi$ respectively we conclude \nthat in case $v=v'$ (\\ref{37}) indeed vanishes. On the other hand,\nthe same argument as before shows that the commutator trivially vanishes for \n$v\\not=v'$.\n\nLet us now check the commutator between time translations and spatial\ntranslations and rotations $\\varphi$. We have\n\\begin{eqnarray} \\label{39}\n&&\\Psi([\\hat{U}(\\varphi)^{-1}\\hat{E}(N)\\hat{U}(\\varphi)-\\hat{E}(N)]f_\\gamma)\n\\nonumber\\\\\n&=& \\sum_{v\\in V(\\gamma)}[N(\\varphi(v))\n\\Psi(\\hat{U}(\\varphi^{-1})\\hat{H}_{\\varphi(v),\\varphi(\\gamma)}\nf_{\\varphi(\\gamma)})\n-N(v)\\Psi(\\hat{H}_{v,\\gamma}f_\\gamma)]\n\\nonumber\\\\\n&+& \n\\sum_{v\\in \nV(\\gamma)\\cap \nS}[N(\\varphi(v))\\Psi(\\hat{U}(\\varphi^{-1})\\hat{E}_{ADM,\\varphi(v)} \nf_{\\varphi(\\gamma)})-N(v)\\Psi(\\hat{E}_{ADM,v}f_\\gamma)]\\;.\n\\end{eqnarray}\nSince $\\hat{E}_{ADM}$ does not change the graph on which a function depends\nwe have identically \n$\\hat{U}(\\varphi^{-1})\\hat{E}_{ADM,\\varphi(v)} f_{\\varphi(\\gamma)}\n=\\hat{E}_{ADM,v} f_\\gamma$.\\\\ \nNow, as explained in more detail in \\cite{9}, the operator $\\hat{H}(N)$\ndepends on a certain prescription of how to attach loops to graphs. Since \nin the interior of $B(S)$ there is no background metric\navailable, this prescription can only be topological in nature and therefore\ngraphs differing\nby a diffeomorphism $\\varphi$ are assigned graphs by $\\hat{H}(N)$ which\nare diffeomorphic by a diffeomorphism $\\varphi'$ which may not coincide with\n$\\varphi$. 
That is, in the interior of $B(S)$, $\\hat{H}(N)$ is \nonly covariant up to a diffeomorphism. On the other hand,\nsince one has the fixed background metric $\\delta_{ab}$ at $S$ one can \nmake $\\hat{H}(N)$ precisely covariant at $S$, that is, the prescription \nsatisfies $\\varphi_{|S}=\\varphi'_{|S}$.\nTherefore, with this sense of covariance of $\\hat{H}(N)$ \nit is true that \n$\\hat{U}(\\varphi^{-1})\\hat{H}_{\\varphi(v),\\varphi(\\gamma)}f_{\\varphi(\\gamma)}$\nand $\\hat{H}_{v,\\gamma}f_\\gamma$ differ at most by a diffeomorphism \nwith support in the interior of $B(S)$.\\\\ \nIn conclusion we obtain\n$$\n\\Psi([\\hat{E}(N),\\hat{U}(\\varphi)]f_\\gamma)=\n\\Psi(\\hat{E}(\\varphi^\\star N-N)f_\\gamma)\n$$ \nwhich is what we were looking for.\n\nWe conclude that the little algebra of the Poincar\\'e algebra is faithfully \nimplemented.\\\\ \\\\\n\\\\\n\\\\\n{\\large Acknowledgements}\\\\\n\\\\\nThis research project was supported in part by DOE-Grant\nDE-FG02-94ER25228 to Harvard University.\n\n\n\n\n\n\\section{Introduction}\n\nMost of the accretion onto the supermassive black hole (SMBH) found in the center of most massive galaxies is heavily \nobscured by the surrounding dust and gas (e.g., \\citealp{fabian99a}). In the local Universe, $\\sim$75\\% of the Seyfert 2 galaxies\nare heavily-obscured ($N_H$$>$10$^{23}$~cm$^{-2}$; \\citealp{risaliti99}). Many of these, if at $z$$\\gtrsim$1, where most of the black hole growth\noccurs, would not be identified in X-rays even in very deep ($>$1 Msec) Chandra or XMM\/Newton exposures \\citep{treister04}. Locating and \nquantifying this heavily obscured SMBH growth, in particular at high redshifts, is currently one of the fundamental problems \nin astrophysics. 
\n\nBecause the energy absorbed at optical to X-ray wavelengths is later re-emitted in the mid-IR, it is expected that all Active Galactic Nuclei (AGN), even\nthe most obscured ones, should be very bright mid-IR sources (e.g., \\citealp{martinez06}). Hence, it is not surprising\nthat a large number of heavily obscured --- even Compton-thick ($N_H$$>$10$^{24}$cm$^{-2}$) --- AGN have been found amongst the Luminous and\nUltra-luminous Infrared Galaxies ((U)LIRGs; L$_{IR}$$>$10$^{11}$ and $>$10$^{12}$L$_\\odot$ respectively), both locally \\citep{iwasawa09} and at \nhigh redshift \\citep{bauer10}. Deep X-ray observations performed using the XMM-Newton (e.g., \\citealp{braito03,braito04}), Chandra \\citep{teng05} and \nSuzaku \\citep{teng09} observatories have shown that most ULIRGs are intrinsically faint X-ray sources, most likely due to the effects of obscuration, while \ntheir X-ray spectra show combined signatures of starburst and AGN activity. The key features observed in the X-ray spectra of ULIRGs are \na soft thermal component, typically associated with star formation, a heavily-obscured (N$_H$$\\sim$10$^{24}$ cm$^{-2}$) power-law associated \nwith the AGN direct emission, and a prominent emission line at $\\sim$6.4~keV, identified with fluorescence emission from iron in the \nK$_\\alpha$ ionization level, originating either in the accretion disk or in the surrounding material \\citep{matt91}. \n\nThe presence of heavily-obscured AGN among the most extreme ULIRGs at $z$$\\simeq$1-2 has recently been established from deep\nSpitzer observations \\citep{daddi07,fiore08,treister09c}. Most of these sources have very high, quasar-like, intrinsic luminosities, and hence most likely\ndo not constitute the bulk of the heavily-obscured AGN population \\citep{treister10}. Establishing the fraction of (U)LIRGs that host a lower luminosity AGN is \na more challenging task. 
Recent works based on X-ray stacking \\citep{fiore09} and using 70-$\\mu$m selected sources \\citep{kartaltepe10} report \na steep decrease in the fraction of AGN with decreasing IR luminosity, going from $\\sim$100\\% at L$_\\textnormal{IR}$=10$^{13}$~L$_\\odot$ \nto $<$10\\% at L$_\\textnormal{IR}$=10$^{10}$~L$_\\odot$. In the local Universe, \\citet{schawinski10} found that the incidence of low-luminosity, Seyfert-like, \nAGN as a function of stellar mass is more complicated, and is influenced by other parameters. For example, the dependence of AGN fraction on stellar mass\ncan be opposite if galaxy morphology is considered (increases with decreasing mass in the early-type galaxy population).\n\nIn this work, we estimate the fraction of heavily-obscured AGN in mid-IR-luminous and massive galaxies at high redshift, few of which are individually \ndetected in X-rays. The main goal is to constrain the amount of obscured SMBH accretion happening in distant galaxies. This can be done thanks\nto the very deep X-ray observations available in the Chandra Deep Fields and the very low and stable Chandra background, which allows for the efficient\nstacking of individually undetected sources. Throughout this letter, we assume a $\\Lambda$CDM cosmology with $h_0$=0.7, $\\Omega_m$=0.27\nand $\\Omega_\\Lambda$=0.73, in agreement with the most recent cosmological observations \\citep{hinshaw09}.\n\n\\section{Analysis and Results}\n\nBy stacking individually-undetected sources selected at longer wavelengths, it is possible to detect very faint X-ray emitters using Chandra observations. For \nexample, this technique was used successfully by \\citet{brandt01} in the Chandra Deep Field North (CDF-N) to measure\nthe average X-ray emission from a sample of Lyman break galaxies at $z$$\\simeq$2--4 and by \\citet{rubin04} to detect\nX-rays from red galaxies at $z$$\\sim$2. 
More recently, samples of heavily-obscured AGN candidates selected based\non their mid-IR properties have been studied in X-rays via Chandra stacking (e.g., \\citealp{daddi07,fiore08,treister09c}).\n\nThe 4 Msec Chandra observations of the Chandra Deep Field South (CDF-S) are currently\nthe deepest view of the X-ray sky. In addition, the CDF-S has been observed extensively at many wavelengths. The multiwavelength data available on the (E)CDF-S were \npresented by \\citet{treister09c}. Very relevant for this work are the deep Spitzer observations available in this field, using both the Infrared Array Camera (IRAC) and the \nMultiband Imaging Photometer for Spitzer (MIPS), from 3.6 to 24~$\\mu$m. Also critical is the availability of good quality photometric \nredshifts ($\\Delta$$z$\/(1+$z$)=0.008 for $R$$<$25.2) obtained thanks to deep observations in 18 medium-band optical filters performed using \nSubaru\/Suprime-Cam \\citep{cardamone10}. \n\nWe generated our sample starting with the 4959 Spitzer\/MIPS 24~$\\mu$m sources in the region covered by the Chandra observations that have\nphotometric redshift $z$$>$0.5, and hence rest-frame E$>$4~keV emission falling in the high-sensitivity Chandra range. In addition, sources \nindividually detected in X-rays and reported in the catalogs of \\citet{luo08}, \\citet{lehmer05} or \\citet{virani06} were removed from our \nsample, as these sources will otherwise dominate the stacked spectrum. We then inspected the remaining sources to eliminate\nindividual detections in the 4 Msec data not present in the 2 Msec catalog of \\citet{luo08}. We further excluded 28 sources that meet the \nselection criteria of \\citet{fiore08} for heavily obscured AGN, $f_{24}$\/$f_R$$>$1000 and $R$-$K$$>$4.5 (Vega), because we expect these sources to \ncontain an intrinsically luminous AGN (quasar), while the aim of this work is to find additional hidden accretion in less luminous objects. 
The median \nredshift of the sources in our final sample is 1.32 (average $z$=1.5) with a standard deviation of 0.77.\n\nIn order to perform X-ray stacking in the rest-frame, we started with the regenerated level 2 merged event files created by the Chandra X-ray \nCenter\\footnote{Data publicly available at http:\/\/cxc.harvard.edu\/cda\/whatsnew.html}. For each source, we extracted all events in a circle of \n30$''$ radius centered in the optical position. The energy of each event was then converted to the rest frame using the photometric redshift of the \nsource. Using standard CIAO \\citep{fruscione06} tools we then generated seven X-ray images for each source covering the energy range from 1-8 keV \nin the rest-frame with a fixed width of 1 keV. Images for individual sources were then co-added to measure the stacked signal. Total counts were measured\nin a fixed 5$''$ aperture, while the background was estimated by randomly placing apertures with the same area, 5$''$ to 30$''$ away from the center.\n\nSeveral groups have found (e.g., \\citealp{kartaltepe10} and references therein) that the fraction of galaxies containing an AGN is a strong function of\ntheir IR luminosity. Hence, it is a natural choice to divide our sample in terms of total IR luminosity. The infrared luminosity was estimated from the \nobserved 24~$\\mu$m luminosity assuming the relation found by \\citet{takeuchi05}: $\\log$(L$_{IR}$)=1.02+0.972 $\\log$(L$_{12~\\mu m}$). We further \nassumed that the $k$ correction between observed-frame 24~$\\mu$m and rest-frame 12~$\\mu$m luminosity for these sources \nis negligible, as shown by \\citet{treister09c}. We then separated our sample in 4 overlapping bins: $L_{IR}$$>$10$^{11}$$L_\\odot$, \n$L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$, 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ and $L_{IR}$$>$10$^{10}$$L_\\odot$ \nand stacked them independently. 
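The rest-frame stacking step just described (shift each event energy by $1+z$, histogram into seven fixed 1 keV-wide rest-frame bins between 1 and 8 keV, co-add over sources) can be sketched as follows; the function name, the toy event lists and the omission of the aperture photometry and background estimate are our own simplifications, not code from the actual CIAO-based analysis:

```python
import numpy as np

def restframe_stack(event_lists, redshifts, e_lo=1.0, e_hi=8.0, width=1.0):
    """Co-add rest-frame spectra: each event's observed energy E_obs is
    shifted to E_rest = E_obs * (1 + z), then histogrammed in fixed
    1 keV-wide bins between 1 and 8 keV and summed over all sources."""
    edges = np.arange(e_lo, e_hi + width, width)   # 1, 2, ..., 8 keV -> 7 bins
    stacked = np.zeros(len(edges) - 1)
    for e_obs, z in zip(event_lists, redshifts):
        e_rest = np.asarray(e_obs) * (1.0 + z)     # k-correct each event
        counts, _ = np.histogram(e_rest, bins=edges)
        stacked += counts
    return edges, stacked

# toy example: two hypothetical sources with three events each (keV)
edges, stacked = restframe_stack([[1.0, 2.5, 3.0], [0.8, 1.2, 4.1]],
                                 [1.0, 0.5])
```

In a real analysis one would of course stack per-source images and apply the aperture/background measurement described in the text; this sketch only illustrates the rest-frame energy bookkeeping.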
The number of sources in each sample is 670, 1545, 2342 and 3887, respectively.\n\nIn Fig.~\\ref{obs_spec} we present the stacked spectra as a function of rest-frame energy, both in total counts and normalized at 1 keV to highlight the \ndifference in spectral shape among the different samples. The spectra begin to diverge at E$\\gtrsim$5 keV, where we expect the AGN emission to\ndominate even for heavily-obscured sources. There is a clear trend of more high-energy X-ray emission with increasing IR luminosity.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.22\\textwidth]{f1a.eps}\n\\includegraphics[width=0.22\\textwidth]{f1b.eps}\n\\end{center}\n\\caption{{\\it Left panel:} Stacked background-subtracted Chandra counts as a function of rest-frame energy from 1 to 8 keV. Samples were selected\nbased on their IR luminosity in the following overlapping bins: $L_{IR}$$>$10$^{11}$$L_\\odot$ ({\\it filled circles}), $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ \n({\\it squares}), 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ ({\\it triangles}) and $L_{IR}$$>$10$^{10}$$L_\\odot$ ({\\it open circles}).\n{\\it Right panel}: same as left panel but normalized at 1 keV in order to highlight the differences in spectral shape among the different samples.\nThe largest differences are at E$\\gtrsim$5 keV, where there is a clear trend in the relative intensity as a function of IR luminosity,\nsuggesting a larger fraction of AGN in the most luminous IR sources.}\n\\label{obs_spec}\n\\end{figure}\n\n\n\\section{Discussion}\n\nThe spectra shown in Fig.~\\ref{obs_spec} cannot be directly interpreted, as the detector-plus-telescope response information is lost after the conversion to \nrest-frame energy and stacking. Hence, we perform simulations assuming different intrinsic X-ray spectra in order to constrain the nature of the sources \ndominating the co-added signal. 
We use the XSPEC code \\citep{arnaud96} to convolve several intrinsic input spectra with the latest response \nfunctions\\footnote{Obtained from http:\/\/cxc.harvard.edu\/caldb\/calibration\/acis.html} for the Chandra ACIS-I camera used in the CDF-S observations. We\nthen compare these simulated spectra with the observations in our sample of IR-selected sources.\n\nThe low energy spectrum of (U)LIRGs is dominated by a combination of a thermal plasma component with temperatures $kT$$\\simeq$0.7~keV, particularly\nimportant at E$<$3 keV, and the emission from high-mass X-ray binaries (HMXBs) at 1$<$E (keV)$<$10 (e.g., \\citealp{persic02}). For each source, we\ngenerated a simulated spectrum using a combination of these two components, taking into account the luminosity and redshift of the source. For\nthe HMXB population we assumed a power-law given by $\\Gamma$=1.2 and cutoff energy $E_c$=20 keV, consistent with recent observations \n(e.g., \\citealp{lutovinov05}). This component was normalized assuming the relation between IR and X-ray luminosity in starburst galaxies found \nby \\citet{ranalli03}. For the thermal component, we assumed a black body with temperature $kT$=0.7 keV. The normalization of this component\nwas then adjusted to match the observations at $E$$<$3 keV.\n\nIn order to compute the possible contribution from heavily-obscured AGN to the stacked spectrum we assumed the observed X-ray spectrum of the nearby\nULIRG IRAS19254-7245, as observed by Suzaku \\citep{braito09}. In addition to the starburst emission described above, the X-ray spectrum is described by\nan absorbed, Compton-thick, power-law with $\\Gamma$=1.9, $N_H$=10$^{24}$~cm$^{-2}$, and a possible scattered component, characterized by a \npower-law with $\\Gamma$=1.9, no absorption, and 1\\% of the intensity of the direct emission. 
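As a rough illustration of how these components combine, the sketch below evaluates a cutoff power law for the HMXB population, a Compton-thick absorbed AGN power law, and a 1\% scattered component. The absorption term is not a real cross-section: it uses a crude $E^{-3}$ scaling of the photoelectric optical depth with an assumed depth of $\sim$240 at 1 keV for $N_H$=10$^{24}$~cm$^{-2}$, and all normalizations are arbitrary. The actual simulations were done with XSPEC and the ACIS-I responses.

```python
import math

GAMMA_HMXB, E_CUT = 1.2, 20.0   # HMXB photon index and cutoff energy (keV)
GAMMA_AGN = 1.9                 # AGN photon index
TAU_1KEV = 240.0                # assumed optical depth at 1 keV for N_H = 1e24 cm^-2
F_SCAT = 0.01                   # scattered fraction of the intrinsic power law

def hmxb(E):
    """Cutoff power law for the high-mass X-ray binary population."""
    return E ** (-GAMMA_HMXB) * math.exp(-E / E_CUT)

def agn_direct(E):
    """Heavily absorbed direct AGN component; tau(E) ~ tau_1keV * E^-3 is a
    crude stand-in for the true photoelectric cross-section."""
    return E ** (-GAMMA_AGN) * math.exp(-TAU_1KEV * E ** -3.0)

def agn_scattered(E):
    """Unabsorbed power law at 1% of the direct (intrinsic) intensity."""
    return F_SCAT * E ** (-GAMMA_AGN)

def total(E):
    return hmxb(E) + agn_direct(E) + agn_scattered(E)
```

Even in this toy form, the direct AGN component is negligible at 1 keV but emerges above the scattered light at high energies, mirroring the behavior of the full simulation.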
The resulting simulated spectral components and the comparison\nwith the observed stacked spectrum for sources in the four samples defined above are shown in Fig.~\\ref{simul_spec}. \n\n\\begin{figure}\n\\begin{center}\n\\plotone{f2.eps}\n\\end{center}\n\\caption{Stacked background-subtracted Chandra counts as a function of rest-frame energy, as in Fig.~\\ref{obs_spec}. {\\it Black data points (filled circles)} show\nthe stacked X-ray signal for sources binned by IR luminosity. The {\\it cyan dashed lines (stars)} show the simulated spectra for the HMXB population\nnormalized using the \\citet{ranalli03} relation between star-formation rate and X-ray luminosity. The {\\it blue dashed lines (open squares)} show simulated thermal spectra\ncorresponding to a black body with $kT$=0.7 keV, required to explain the E$<$3 keV emission. An absorbed AGN spectrum, given by a power-law with $\\Gamma$=1.9 and a \nfixed $N_H$=10$^{24}$~cm$^{-2}$, is shown by the {\\it red dashed lines (open circles)}. In addition, a scattered AGN component, characterized by a 1\\% reflection of the underlying \nunobscured power-law, is shown by the {\\it green dashed lines (open triangles)}. The resulting summed spectrum ({\\it black solid lines}) is in very good \nagreement with the observed counts. The strong detection in the stacked spectrum at E$>$5 keV, in particular at the higher IR luminosities, confirms the presence of a significant \nnumber of heavily-obscured AGN in these samples.}\n\\label{simul_spec}\n\\end{figure}\n\nIt is not possible to explain the observed stacked spectral shape using only a plausible starburst spectrum without invoking an AGN component, which dominates \nat E$>$5~keV. 
The average intrinsic rest-frame 2-10 keV AGN luminosity needed to explain the observed spectrum, assuming that every source in the sample contains an AGN of the \nsame luminosity, is 6$\\times$10$^{42}$~erg~s$^{-1}$ for sources with $L_{IR}$$>$10$^{11}$$L_\\odot$, 3$\\times$10$^{42}$~erg~s$^{-1}$ \nfor sources with $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$, 5$\\times$10$^{41}$~erg~s$^{-1}$ in the sample with 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$\nand 7$\\times$10$^{41}$~erg~s$^{-1}$ for sources with $L_{IR}$$>$10$^{10}$$L_\\odot$. All of these are (intrinsically) very low-luminosity AGN; even if there is a range, it is extremely\nunlikely to include high-luminosity quasars like those discussed in previous stacking papers. An alternative possibility is that the extra emission at E$>$5~keV is due entirely to \nthe Fe~K$\\alpha$ line, provided the errors in the photometric redshifts in these samples are significantly larger than the values reported by \\citet{cardamone10}. \nRegardless of the template assumed for the AGN emission, we obtain similar values for the average AGN luminosity in each sample.\n\nThe median hard X-ray luminosity for the Chandra sources with measured photometric redshifts in the catalog of \\citet{luo08} is 4.1$\\times$10$^{43}$ erg~s$^{-1}$ for \nthe sources in the $L_{IR}$$>$10$^{11}$$L_\\odot$ sample, 3.5$\\times$10$^{43}$~erg~s$^{-1}$ in the $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ group, 5.7$\\times$10$^{42}$~erg~s$^{-1}$ for \nsources with 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ and 1.6$\\times$10$^{43}$~erg~s$^{-1}$ in the $L_{IR}$$>$10$^{10}$$L_\\odot$ sample. Hence, \nif the heavily-obscured AGN in our stacked samples have the same median intrinsic luminosity this would indicate that 15\\% (98 sources) of the 670 galaxies \nwith $L_{IR}$$>$10$^{11}$$L_\\odot$ contain a heavily-obscured AGN. 
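These implied fractions follow from dividing the average stacked AGN luminosity by the median luminosity of the individually detected Chandra sources; a minimal check of that arithmetic, with all numbers taken from the text:

```python
# Fraction of galaxies hosting a heavily-obscured AGN, assuming the stacked
# signal comes from AGN with the same median intrinsic 2-10 keV luminosity
# as the individually detected Chandra sources (values from the text).
samples = {
    #                (avg stacked L_X, median detected L_X, N galaxies)
    "LIR>1e11":      (6e42, 4.1e43, 670),
    "LIR>5e10":      (3e42, 3.5e43, 1545),
    "1e10<LIR<5e10": (5e41, 5.7e42, 2342),
}

def agn_fraction(avg_lx, median_lx):
    return avg_lx / median_lx

n_agn = {name: round(agn_fraction(a, m) * n)
         for name, (a, m, n) in samples.items()}
```

This reproduces the 98, 132 and 205 heavily-obscured AGN quoted for the three samples with a significant stacked hard-band signal.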
This fraction is $\\sim$9\\% (132 and 205 sources respectively) in the $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ and\n5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ samples. For sources with $L_{IR}$$>$10$^{10}$$L_\\odot$ this fraction is $<$5\\%. The integrated intrinsic \nX-ray emission in the rest-frame 2-10 keV band due to the heavily-obscured AGN in this sample, obtained by multiplying the intrinsic X-ray luminosity by the number of sources \nand dividing by the studied area, is $\\sim$4.6$\\times$10$^{46}$~erg~cm$^{-2}$~s$^{-1}$~deg$^{-2}$. For comparison, the total emission from all the X-ray \ndetected AGN in the CDF-S is 1.63$\\times$10$^{47}$~erg~cm$^{-2}$~s$^{-1}$~deg$^{-2}$. Hence, this extra AGN activity can account for $\\sim$22\\% of the total SMBH accretion.\nAdding this to the obscured SMBH growth in X-ray detected AGN \\citep{luo08}, we confirm that most SMBH growth, $\\sim$70\\%, is significantly obscured and missed by even the \ndeepest X-ray surveys \\citep{treister04,treister10}.\n\nPerforming a similar study on the 28 sources with $f_{24}$\/$f_R$$>$1000 and $R$-$K$$>$4.5 that we previously excluded, we find a very hard X-ray\nspectrum, harder than that of the $L_{IR}$$>$10$^{11}$$L_\\odot$ sources. This spectrum is consistent with a population of luminous AGN with intrinsic rest-frame 2-10 keV \nluminosity $\\sim$2$\\times$10$^{43}$~erg~s$^{-1}$ and negligible contribution from the host galaxy, except at E$<$2 keV where the thermal component is $\\sim$30\\% of the\ntotal emission. This result justifies our choice of removing these sources from our study (otherwise they would dominate the stacked signal), while at the same time it confirms the\nAGN nature of the vast majority of these sources, in contrast to the suggestion that the extra IR emission could be due to star-formation processes \\citep{donley08, pope08,georgakakis10}. 
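For reference, the $\sim$22\% share of the total SMBH accretion quoted above reduces to a one-line ratio of the two integrated intensities given in the text:

```python
# Integrated rest-frame 2-10 keV emission (erg cm^-2 s^-1 deg^-2, from the text)
stacked_obscured = 4.6e46   # heavily-obscured AGN recovered by stacking
detected_total   = 1.63e47  # all X-ray detected AGN in the CDF-S

# Share of the total SMBH accretion contributed by the stacked population.
share = stacked_obscured / (stacked_obscured + detected_total)
```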
\nA similar result for these high-luminosity sources was found by \\citet{fiore10}: in a sample of 99 mid-IR excess sources in the COSMOS field they found a strong stacked signal \nat E$\\sim$6~keV, which they interpreted as due to the Fe~K$\\alpha$ line, a clear signature of AGN emission and high obscuration (see discussion below).\n\n\\subsection{Multiwavelength Properties}\n\nBy design, none of the sources in our sample are individually detected in X-rays, nor do they satisfy the selection criteria of \\citet{fiore08}. However, it is interesting to investigate\nwhether they present other AGN signatures. For example, 237 out of the 1545 sources with $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ in our sample (15\\%) are found inside the\nAGN IRAC color-color region defined by \\citet{stern05}. For comparison, in the sample of 2342 sources with 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ --- \nin which from the stacked hard X-ray signal we determined a negligible AGN fraction --- there are 327 galaxies (14\\%) in the \\citet{stern05} region. This suggests that the \nIRAC color-color diagram cannot be used to identify heavily-obscured low-luminosity AGN, because the near-IR emission in these sources is dominated by the host \ngalaxy \\citep{cardamone08}. At longer wavelengths, 83 of the 1545 sources with $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ were detected in the deep VLA observations of the \nCDF-S \\citep{kellermann08}. In contrast, only 33 sources in the 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ sample were detected in these \nobservations. Using the $q_{24}$ ratio between 1.4 GHz and 24~$\\mu$m flux densities (e.g., \\citealp{appleton04}), we find that in the $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ \nsample, only 14 sources have $q_{24}$$<$-0.23 and can be considered ``radio-loud'' \\citep{ibar08}, and in the 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ \nsample only 10 sources have $q_{24}$$<$-0.23. 
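The radio-loudness criterion can be written compactly. We assume here the standard definition $q_{24}$ = log$_{10}$($S_{24}$\/$S_{1.4}$), following \citet{appleton04}; the $-$0.23 threshold is the one used in the text.

```python
import math

def q24(s24, s14):
    """q24 = log10(S_24um / S_1.4GHz), flux densities in the same units
    (assumed definition, following Appleton et al. 2004)."""
    return math.log10(s24 / s14)

def is_radio_loud(s24, s14, threshold=-0.23):
    """Radio-loud (radio-excess) if q24 falls below the threshold."""
    return q24(s24, s14) < threshold
```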
Hence, we conclude that the fraction of bona fide radio-loud sources is negligible and that in most cases the radio emission \nis produced by star-formation processes.\n\n\\subsection{AGN Fraction Versus Stellar Mass}\n\nIn order to investigate the fraction of heavily-obscured AGN as a function of other galaxy parameters, we performed X-ray stacking of samples sorted by stellar mass.\nStellar masses were taken from \\citet{cardamone10}, who performed spectral fitting to the extensive optical and near-IR spectro-photometry using FAST \\citep{kriek09} and the stellar \ntemplates of \\citet{maraston05} assuming the \\citet{kroupa01} initial mass function and solar metallicity. We further restricted our sample to sources with $z$$<$1.2, for \nwhich photometric redshifts and stellar masses are very well determined ($\\Delta$z\/(1+$z$)=0.007). We then divided the sample into three mass bins: M$>$10$^{11}$M$_\\odot$, \n10$^{11}$$>$M (M$_\\odot$)$>$10$^{10}$ and 10$^{10}$$>$M (M$_\\odot$)$>$10$^{9}$. The resulting stacked X-ray spectra are shown in Fig.~\\ref{obs_spec_mass}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.2]{f3a.eps}\n\\includegraphics[scale=0.2]{f3b.eps}\n\\end{center}\n\\caption{Stacked Chandra counts for galaxies binned as a function of their stellar mass. The {\\it left panel} shows the spectra for the following\nbins: M$>$10$^{11}$M$_\\odot$ ({\\it red squares}), 10$^{10}$$<$M$<$10$^{11}$M$_\\odot$ ({\\it blue triangles}) and 10$^{9}$$<$M$<$10$^{10}$M$_\\odot$ ({\\it black circles}).\n{\\it Right panel:} same but normalized at 1 keV. In the M$>$10$^{11}$M$_\\odot$ sample, the strong excess at E=6-7~keV, which we associate with the Fe~K$\\alpha$ line, is an \nindicator of AGN activity. 
Similarly, for the sources with 10$^{10}$$<$M$<$10$^{11}$M$_\\odot$ there is a hard X-ray spectrum, also suggesting a significant AGN fraction.\nThese preliminary results indicate that these heavily-obscured moderate-luminosity AGN are predominantly present in the most massive galaxies.}\n\\label{obs_spec_mass}\n\\end{figure}\n\n\nFor sources with M$>$10$^{11}$M$_\\odot$, there is a significant excess at 6-7~keV, above a spectrum that otherwise declines with increasing energy. This might \nbe due to the presence of the Fe~K$\\alpha$ line, a clear indicator of AGN activity. Contrary to the case of stacking as a function of IR luminosity (Fig.~\\ref{simul_spec}), \nhere we do not find evidence for an absorbed power-law ---the 6-7 keV feature is simply too sharply peaked. Possibly the restriction to $z$$<$1.2 for the mass-binned stacking, where \nphotometric redshifts are most accurate, reveals an emission line that is broadened by less accurate photometric redshifts in the full sample. That is, the feature in the $L_{IR}$-binned stack \nthat we interpreted as a heavily absorbed power law may instead be an Fe~K$\\alpha$ line broadened artificially by bad photometric redshifts. In the 10$^{11}$$>$M (M$_\\odot$)$>$10$^{10}$ \nsample we found a significant hardening of the X-ray spectrum (Fig.~\\ref{obs_spec_mass}), suggesting the presence of a significant fraction of AGN. In contrast, only a soft spectrum, \nconsistent with star-formation emission, can be seen for sources with 10$^{10}$$>$M (M$_\\odot$)$>$10$^{9}$. Taken together, these results indicate that AGN are predominantly present \nin the most massive galaxies, in agreement with the conclusions of \\citet{cardamone10b} and others. This will be elaborated in a paper currently in preparation.\n\n\\subsection{Space Density of Heavily-Obscured AGN}\n\nThe fraction of Compton-thick AGN in the local Universe is still heavily debated. 
\\citet{treister09b} reported a fraction of $\\sim$8\\% in a flux-limited sample of sources\ndetected in the Swift\/BAT all-sky \\citep{tueller08} and International Gamma-Ray Astrophysics Laboratory (INTEGRAL; \\citealp{krivonos07}) surveys. From an INTEGRAL volume-limited \nsurvey at $z$$<$0.015, \\citet{malizia09} found a higher fraction of 24\\%, suggesting that even surveys at E$>$10~keV are potentially \nbiased against the detection of Compton-thick AGN. The fraction of moderate-luminosity Compton-thick sources in our sample of sources \nwith $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$, relative to all AGN in the CDF-S, is $\\sim$25\\% (132\/525), assuming that Compton-thick and Compton-thin AGN have similar \nmedian intrinsic luminosities. This indicates that there is no major evolution in the number of moderate-luminosity heavily-obscured AGN from $z$=0 to 2. In contrast, at higher \nluminosities, \\citet{treister10} reported that the ratio of obscured to unobscured quasars increased from $\\sim$1 at $z$=0 to $\\sim$2-3 at $z$$\\simeq$2. Hence, although all these \nestimates are still uncertain, it appears that the evolution of Compton-thick AGN depends strongly on their luminosity. We further speculate that this is an indication that the \ntriggering of low-luminosity AGN is not related to the major merger of gas-rich galaxies, as found by \\citet{treister10} for high-luminosity quasars, or that the time delay\nbetween galaxy interactions and black hole growth is long \\citep{schawinski10b}.\n\n\\acknowledgements\n\nWe thank the referee, Fabrizio Fiore, for very useful and constructive comments. 
Support for the work of ET and KS was provided by the National\nAeronautics and Space Administration through Chandra\/Einstein\nPost-doctoral Fellowship Award Numbers PF8-90055 and PF9-00069\nrespectively issued by the Chandra X-ray Observatory Center, which is\noperated by the Smithsonian Astrophysical Observatory for and on\nbehalf of the National Aeronautics and Space Administration under contract\nNAS8-03060. CMU and CC acknowledge support from NSF grants \nAST-0407295, AST-0449678, AST-0807570 and Yale University.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWith the rapid development of the Internet of Things, smart in-vehicle applications (e.g., autonomous driving, image-assisted navigation and multimedia entertainment) have been widely applied to smart vehicles \\cite{Feng}\\cite{Zhang}, which can provide a more comfortable and safer environment for drivers and passengers. Since these in-vehicle applications consume huge computation resources and require low execution latency, the cloud server is employed to afford these complex computing tasks, which results in a serious burden on the backhaul network \\cite{Huang0}. Fortunately, vehicular edge computing (VEC) servers with powerful computing capacity are densely deployed along the roadside, attached to road side units (RSUs). Therefore, it is worthwhile to study how to efficiently utilize the computing capacity of VEC servers to support low-latency and energy-efficient in-vehicle services. \n\nTo improve the offloading efficiency of vehicle terminals, researchers have proposed many offloading and resource allocation methods in VEC-based networks. Zhang et al. \\cite{Mao} proposed a hierarchical cloud-based VEC offloading framework to reduce the execution delay, where a backup server was utilized to offer extra computation resource for the VEC server. 
However, task priorities were not considered in the offloading process, so delay-sensitive tasks (e.g., image-assisted navigation) may not be processed in time for high-speed vehicles. To further reduce the task execution delay, Liu et al. \\cite{Liu} studied the task offloading problem by treating the vehicles and the RSUs as the two sides of a matching problem to minimize the task execution delay, but the computation capacity of the VEC server should be fully exploited to decrease the energy consumption of in-vehicle applications. In addition, Li et al. \\cite{Li} considered the influence of the time-varying channel on the task offloading strategies and formulated the problem of joint radio and computation resource allocation. However, the variety of vehicle speeds was not considered, so high-speed vehicles may not obtain sufficient computation and wireless resources. To accelerate the radio and computation allocation process, Dai et al. \\cite{Dai} proposed a low-complexity algorithm to jointly optimize server selection, offloading ratio and computation resource, but the task handover between VEC servers may result in increased delay and packet loss. To make a more accurate offloading decision, Sun et al. \\cite{Sun} proposed a task offloading algorithm that determines where the tasks are performed and the execution order of the tasks on the VEC server, but the proposed heuristic algorithm may not obtain a satisfying result when the vehicle's channel state fluctuates frequently. Considering the fluctuation of the channel state, Zhan et al. \\cite{Zhan} proposed a deep reinforcement learning-based offloading algorithm to make the task offloading decision, which can be applied in the dynamic environment caused by vehicle mobility. 
\n\nIn general, there are still some problems to be solved in computing task offloading and resource allocation for in-vehicle applications: (1) The impact of vehicle speed on task delay constraints has been neglected, which results in a mismatch between a vehicle's speed and its allocated computation and wireless resources, so the delay requirements cannot be guaranteed. (2) The large fluctuations of the vehicle's channel state caused by fast mobility have not been considered, which may result in offloading failure. (3) The computation capacity of the VEC server has not been fully exploited because of inaccurate resource allocation strategies. Inspired by the above work, we propose a vehicle speed aware computing task offloading and resource allocation strategy based on multi-agent reinforcement learning. Our work is novel in the following aspects.\n\n\\textbf{(1) Vehicle speed aware delay constraint model:} Different types of tasks and vehicle speeds demand various delay requirements for in-vehicle applications. Therefore, we fully analyze the internal relationship among vehicle speed, task type and delay requirements to propose a vehicle speed aware delay constraint model, which makes the task offloading and resource allocation process more accurate. \n\n\\textbf{(2) Calculation of energy consumption and delay for different types of tasks:} Based on the bandwidth and delay requirements, in-vehicle computing tasks are classified into three types: critical applications, high-priority applications and low-priority applications. 
For each type of task, we calculate the energy consumption and delay of the task transmission and execution processes for the different offloading positions.\n\n\\textbf{(3) Multi-agent reinforcement learning based solution to our formulated problem:} The joint offloading and resource allocation optimization problem is formulated as a Markov decision process (MDP), with the objective of minimizing the energy consumption subject to delay constraints. In addition, multi-agent reinforcement learning is applied to solve the high-dimensional action decision-making problem.\n\n\n\\section{System Model}\n\\subsection{System Framework}\nThe scenario considered in this paper is the computing task offloading of vehicles on urban roads in a VEC network, as shown in Figure \\ref{system}. RSUs are located along the roadside, and the coverage areas of adjacent RSUs do not overlap. Therefore, according to the coverage areas of the RSUs, the road can be divided into several adjacent segments, where a vehicle can only establish a wireless link with the RSU of the current road segment. Each RSU is equipped with a VEC server whose powerful computation capacity can help the vehicle quickly handle computing tasks. Since the delay constraint of the computing task is extremely short, it can be assumed that the vehicle can still receive the task processing result from the previous VEC server when the vehicle travels to the road segment horizon. 
In addition, the computing task can also be executed locally to alleviate the traffic and computing burden of the VEC server.\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=8.5cm]{figure-2.eps}\n\t\\centering\n\t\\caption{The architecture of computing task offloading in a VEC network}\n\t\\label{system}\n\\end{figure}\n\nBecause the VEC server can perceive the state information of vehicles in real time and owns powerful computing capacity, it can generate the optimal computing task offloading and resource allocation strategy (including computing resource allocation and wireless resource allocation) for each vehicle. First, the state information of the vehicle, including task queue, speed, location, remaining computing capacity and wireless resources, is reported to the RSU in real time. The RSU forwards the received state information to the VEC server, which utilizes the extracted state information to calculate the task execution delay and energy consumption and formulate the optimization problem. The goal is to reduce the energy consumption of all vehicles by optimizing task offloading and resource allocation decisions without exceeding the delay constraints. Then, according to the results of the joint task offloading and resource allocation, the vehicle's computing tasks are executed locally or offloaded to the VEC server.\n\n\\subsection{Vehicle Speed Aware Delay Constraint Model}\n\nVehicles' computing tasks can be divided into three types: critical applications (CA), high-priority applications (HPA) and low-priority applications (LPA), which have different bandwidth and delay requirements \\cite{Dzi}. We denote these three task types by $\\phi_1$, $\\phi_2$ and $\\phi_3$, respectively. CA tasks generally refer to autonomous driving and road safety tasks, which need ultra-low delay to ensure the safety of vehicle driving. Therefore, this type of task needs to be executed locally, and its delay threshold is set to $Th{{r}_{1}}$. 
HPA tasks mainly involve image-assisted navigation, parking navigation and some optional security applications. The delay tolerance of an HPA task is related to the current vehicle speed: for vehicles at low speed, a somewhat longer processing time is tolerable, while more wireless and computation resources can be allocated to vehicles at high speed, whose tasks are processed preferentially. The delay threshold of an HPA task is set to $Th{{r}_{2}}$ when the vehicle speed reaches the maximum road speed limit ${{v}_{\\max }}$. LPA tasks generally include multimedia and passenger entertainment activities, so the requirement on the delay threshold is relatively loose. The delay threshold is set to $Th{{r}_{3}}$.\n\nIn this paper, it is assumed that the generated computing tasks form an independent task sequence. The computing task of vehicle $k$ to be processed at time $t$ is defined as ${{\\mathcal{I}}_{t}}(k)$. For an HPA task, when the vehicle's speed is low, the delay threshold of the computing task can be relatively long. As the speed increases, the amount of information the vehicle receives from the surrounding environment within the same delay increases rapidly because of the longer distance traveled. Therefore, the delay threshold of the computing task should be reduced rapidly. When the speed reaches a higher level, the growth of the information received from the surrounding environment gradually flattens with further increases in speed. 
Accordingly, the delay threshold of the computing task should then be reduced only slowly.\n\nTherefore, we select a one-tailed normal function to describe the relationship between the delay constraint, $\\mathcal{T}({{v}_{\\mathcal{I}_t(k)}})$, and speed for task $\\mathcal{I}_t(k)$ of $\\phi _2$, as follows:\n\\begin{equation}\\small\n\\begin{aligned}\n\\mathcal{T}({{v}_{\\mathcal{I}_t(k)}})&=Th{{r}_{2}}\\frac{1}{\\sqrt{2\\pi }\\alpha }\\exp (-\\frac{v_{{{\\mathcal{I}}_{t}}(k)}^{2}}{2{{\\alpha }^{2}}})\/(\\frac{1}{\\sqrt{2\\pi }\\alpha }\\exp (-\\frac{v_{\\max }^{2}}{2{{\\alpha }^{2}}})) \\\\ \n& =\\exp (-\\frac{v_{{{\\mathcal{I}}_{t}}(k)}^{2}-v_{\\max }^{2}}{2{{\\alpha }^{2}}})Th{{r}_{2}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in \\phi _2 \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{v}_{\\mathcal{I}_t(k)}}$ is the current vehicle speed and ${{\\alpha }^{2}}$ is the variance of the normal function. ${{v}_{\\max }}$ is the maximum road speed. To ensure that the probability that the vehicle speed is within the maximum speed exceeds 95\\%, we set $\\alpha ={{v}_{\\max }}\/1.96$. We then employ $\\Upsilon ({{\\mathcal{I}}_{t}}(k))$ to represent the delay threshold of computing task ${{\\mathcal{I}}_{t}}(k)$ for all task types, as follows:\n\\begin{equation}\n\\begin{aligned}\n& \\Upsilon ({{\\mathcal{I}}_{t}}\\text{(}k\\text{)})=\\left\\{ \\begin{aligned}\n& Th{{r}_{1}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in {{\\phi }_{1}} \\\\ \n& \\mathcal{T}\\text{(}{{v}_{{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}}}\\text{), }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in {{\\phi }_{2}} \\\\ \n& Th{{r}_{3}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in {{\\phi }_{3}} \\\\ \n\\end{aligned} \\right. 
\\\\ \n\\end{aligned}\n\\end{equation}\n\n\\section{Delay and Energy Consumption of Different Offloading Positions}\nFor a generated computing task, task handover between VEC servers is generally not considered \\cite{Zhang} to ensure its successful transmission. For HPA and LPA computing tasks generated by the in-vehicle applications at time $t$, there are usually three ways to handle them, namely holding on, offloading to the VEC server, and local execution, as shown in Figure \\ref{delay}.\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=8.5cm]{figure-1.eps}\n\t\\centering\n\t\\caption{Delay and energy consumption of different offloading positions}\n\t\\label{delay}\n\\end{figure}\nWhen the vehicle's remaining computing and wireless resources and the VEC server's remaining computing resources are insufficient to process a new computing task, the task can wait for a certain time until computing and wireless resources are released. \n\\subsection{Delay of Task Execution}\n\\subsubsection{Offloading to the VEC Server}\nFor the local VEC server, we denote the set of vehicles in the service area by $\\mathbb{Q}$ and the number of these vehicles by $K$. When a computing task ${{\\mathcal{I}}_{t}}(k)$ belonging to task type $\\phi_2$ or $\\phi_3$ is offloaded to the VEC server, the task completion time consists of the upload time, execution time and download time. For task type $\\phi_2$, since the size of the output file is much smaller than that of the input file, the download time can be ignored. 
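Before turning to the completion-time model, the speed-aware delay threshold defined in the previous section can be sketched numerically; the threshold values and maximum speed below are illustrative placeholders, not values from the paper.

```python
import math

V_MAX = 30.0                 # assumed maximum road speed (m/s)
ALPHA = V_MAX / 1.96         # ensures P(v <= v_max) > 95% for the one-tailed normal
THR1, THR2, THR3 = 0.01, 0.1, 1.0   # illustrative thresholds (s) for CA/HPA/LPA

def hpa_threshold(v):
    """Speed-aware HPA threshold T(v) = Thr2 * exp(-(v^2 - v_max^2) / (2 alpha^2))."""
    return THR2 * math.exp(-(v ** 2 - V_MAX ** 2) / (2 * ALPHA ** 2))

def threshold(task_type, v):
    """Piecewise delay threshold Upsilon for the three task types phi_1..phi_3."""
    if task_type == 1:
        return THR1
    if task_type == 2:
        return hpa_threshold(v)
    return THR3
```

Note that the threshold decreases monotonically with speed and equals $Th{{r}_{2}}$ exactly at the speed limit, so slower vehicles are granted looser deadlines.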
Considering that task upload, execution and download cannot be executed simultaneously within one transport time interval (TTI), the consumed time, $TR_{{{\\mathcal{I}}_{t}}(k)}$, needs to be rounded up, which can be described as: \n\\begin{equation}\\small\nTR_{{{\\mathcal{I}}_{t}}(k)}^{{}}=\\left\\{ \\begin{aligned}\n& \\left\\lceil \\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}} \\right\\rceil +\\left\\lceil \\frac{{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}{{f}^{VEC}}} \\right\\rceil ,\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_2 \\\\ \n& \\left\\lceil \\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}} \\right\\rceil +\\left\\lceil \\frac{{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}{{f}^{VEC}}} \\right\\rceil +\\left\\lceil \\frac{{{\\omega }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,down}} \\right\\rceil ,\\\\&\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_3 \\\\ \n\\end{aligned} \\right.\n\\end{equation}\nwhere ${{c}_{{{\\mathcal{I}}_{t}}(k)}}$ is the file size of task ${{\\mathcal{I}}_{t}}(k)$ and ${{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}$ is the calculation density of processing the task ${{\\mathcal{I}}_{t}}(k)$. ${{\\omega }_{{{\\mathcal{I}}_{t}}(k)}}$ is the scaling ratio of the downloaded task size relative to the uploaded task size. $b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}$ indicates the proportion of computing resource allocated by the VEC server to task ${{\\mathcal{I}}_{t}}(k)$. ${{f}^{VEC}}$ denotes the CPU frequency of the VEC server. The transmission capacity between vehicle and server can be obtained from the number of allocated channels, channel bandwidth, transmission power and noise power \\cite{Huang}. 
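The completion-time expression above can be transcribed directly, with ceilings counting whole TTIs; all numeric inputs in the sketch are illustrative.

```python
import math

def completion_ttis(task_type, c, r_up, kappa, b, f_vec, omega=0.0, r_down=1.0):
    """TTIs to complete an offloaded task: upload + execution (+ download for
    phi_3), each rounded up to whole transport time intervals.
    c: input file size, r_up/r_down: link rates, kappa: calculation density,
    b: fraction of server CPU allocated, f_vec: server CPU frequency,
    omega: downloaded/uploaded size ratio."""
    upload = math.ceil(c / r_up)
    execute = math.ceil(kappa * c / (b * f_vec))
    if task_type == 2:          # phi_2: download time negligible
        return upload + execute
    download = math.ceil(omega * c / r_down)   # phi_3
    return upload + execute + download
```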
For uplink channel $n$ of VEC server allocated to vehicle $k$, the available uplink transmission capacity, $r_{k,n}^{VEC,up}$, can be expressed as\n\\begin{equation}\\small\nr_{k,n}^{VEC,up}=\\omega _{VEC}^{up}{{\\log }_{2}}\\left( 1+\\frac{P\\cdot h_{k,n}^{VEC,up}}{{{\\sigma }^{2}}+I_{k,n}^{VEC,up}} \\right),\\text{ }for\\text{ }n\\in \\mathbb{N}_{VEC}^{up}\n\\end{equation}\nwhere $\\omega _{VEC}^{up}=B_{VEC}^{up}\/N_{VEC}^{up}$. $B_{VEC}^{up}$ is the uplink bandwidth of VEC server and $N_{VEC}^{up}$ is the number of total channels of VEC server. $\\sigma^2$ denotes noise power and $P$ is transmission power. $I_{k,n}^{VEC,up}$ denotes the interference on channel $n$. $\\mathbb{N}_{VEC}^{up}$ indicates the uplink channel set of VEC server. Let $z_{k,n}^{VEC,up}$ indicate whether the uplink channel $n$ is allocated to the vehicle $k$. If it is allocated, $z_{k,n}^{VEC,up} = 1$, otherwise, $z_{k,n}^{VEC,up} = 0$. Then the uplink transmission capacity between vehicle $k$ and VEC server, $r_{k}^{VEC,up}$, can be depicted as\n\\begin{equation}\nr_{k}^{VEC,up}=\\sum\\limits_{n\\in \\mathbb{N}_{VEC}^{up}}{z_{k,n}^{VEC,up}r_{k,n}^{VEC,up}}\n\\end{equation}\n\nSimilarly, the transmission capacity of downlink channel $n$ of VEC server allocated to vehicle $k$, $r_{k,n}^{VEC,down}$, can be expressed as\n\\begin{equation}\n\\begin{aligned}\nr_{k,n}^{VEC,down}&=\\omega _{VEC}^{down}{{\\log }_{2}}\\left( 1+\\frac{P\\cdot h_{k,n}^{VEC,down}}{{{\\sigma }^{2}}+I_{k,n}^{VEC,down}} \\right),\\\\& \\text{ }for\\text{ }n\\in \\mathbb{N}_{VEC}^{down}\n\\end{aligned}\n\\end{equation}\nwhere $\\omega _{VEC}^{down}=B_{VEC}^{down}\/N_{VEC}^{down}$. 
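The per-channel rate and its aggregation over allocated channels can be sketched as follows; the arguments bundle $P\cdot h_{k,n}$, ${{\sigma }^{2}}$ and $I_{k,n}$ as plain numbers, and the values in the test are illustrative.

```python
import math

def channel_rate(bw_per_channel, p, h, noise, interference):
    """Per-channel capacity w * log2(1 + P*h / (sigma^2 + I))."""
    return bw_per_channel * math.log2(1 + p * h / (noise + interference))

def link_rate(allocation, rates):
    """Aggregate rate over channels, with allocation indicators z_{k,n} in {0,1}."""
    return sum(z * r for z, r in zip(allocation, rates))
```

The same two functions describe both the uplink and the downlink; only the bandwidth per channel and the channel gains differ.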
Then the downlink transmission capacity between vehicle $k$ and VEC server, $r_{k}^{VEC,down}$, can be depicted as\n\\begin{equation}\nr_{k}^{VEC,down}=\\sum\\limits_{n\\in {{\\mathbb{N}_{VEC}^{down}}}}{z_{k,n}^{VEC,down}r_{k,n}^{VEC,down}}\n\\end{equation}\n\n\\subsubsection{Local Execution}\nFor computing task ${{\\mathcal{I}}_{t}}(k)$ executed locally, the consumed time, $TL_{{{\\mathcal{I}}_{t}}(k)}$, can be expressed as:\n\\begin{equation}\nTL_{{{\\mathcal{I}}_{t}}(k)}^{{}}=\\left\\lceil \\frac{{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{{{f}^{k}}} \\right\\rceil \n\\end{equation}\nwhere ${{f}^{k}}$ is the CPU frequency of vehicle $k$. At time $t$, computing task ${{\\mathcal{I}}_{t}}(k)$ can either hold on, be offloaded to the local VEC server, or be executed locally. Therefore, for computing task ${{\\mathcal{I}}_{t}}(k)$, the total delay from generation to completion, $D({{\\mathcal{I}}_{t}}(k))$, is derived by\n\\begin{equation}\\small\n\\begin{aligned}\nD({{\\mathcal{I}}_{t}}(k))&=t-t_{{{\\mathcal{I}}_{t}}(k)}^{g}+\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold}{{T}_{h}}\\\\&+(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold})[\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}TR_{{{\\mathcal{I}}_{t}}(k)}^{{}}+(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC})T{{L}_{{{\\mathcal{I}}_{t}}(k)}}]\n\\end{aligned}\n\\end{equation}\nwhere $t_{{{\\mathcal{I}}_{t}}(k)}^{g}$ is the generation time of task ${{\\mathcal{I}}_{t}}(k)$. $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold}$ indicates whether task ${{\\mathcal{I}}_{t}}(k)$ holds on. If it holds on, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold} = 1$, otherwise, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold} = 0$. ${{T}_{h}}$ denotes the waiting time. $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}$ indicates whether task ${{\\mathcal{I}}_{t}}(k)$ is offloaded to the VEC server. 
If it is offloaded, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC} = 1$, otherwise, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC} = 0$.\n\n\\subsection{Energy Consumption of Task Execution}\n\\subsubsection{Offloading to VEC server}\nWhen the computing task is offloaded to the VEC server, the energy consumption originates from uploading and downloading the computing task. For the task type $\\phi_2$, since the size of the output file is much smaller than that of the input file, the energy consumed by downloading the computing task can be ignored. Therefore, the energy consumption of task $\\mathcal{I}_{t}(k)$ belonging to $\\phi_2$ or $\\phi_3$ offloaded to the VEC server, $ER_{{{\\mathcal{I}}_{t}}(k)}$, can be depicted as\n\\begin{equation}\nER_{{{\\mathcal{I}}_{t}}(k)}^{{}}=\\left\\{ \\begin{aligned}\n& P\\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_2 \\\\ \n& P(\\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}}\\text{+}\\frac{{{\\omega }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,down}}),\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_3 \\\\ \n\\end{aligned} \\right.\n\\end{equation}\n\n\\subsubsection{Local Execution}\nWhen computing task ${{\\mathcal{I}}_{t}}(k)$ is executed locally, the consumed energy, $EL_{{{\\mathcal{I}}_{t}}(k)}$, can be calculated according to the assigned computation resource, which can be expressed as\n\\begin{equation}\nEL_{{{\\mathcal{I}}_{t}}(k)}^{{}}={{\\xi }_{{{\\mathcal{I}}_{t}}(k)}}{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}{{({{f}^{k}})}^{2}}\n\\end{equation}\nwhere ${\\xi }_{\\mathcal{I}_{t}(k)}$ is the energy density of processing task ${{\\mathcal{I}}_{t}}(k)$ \\cite{Din}.\nAccording to the different offloading positions of computing task ${{\\mathcal{I}}_{t}}(k)$, including the VEC server and the local device, the consumed energy of all vehicles served by the local VEC server, $E(t)$, can be derived 
by\n\\begin{equation}\nE(t)=\\sum\\limits_{k\\in \\mathbb{Q}}{(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold})[\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}E{{R}_{{{\\mathcal{I}}_{t}}(k)}}}+(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC})E{{L}_{{{\\mathcal{I}}_{t}}(k)}}]\n\\end{equation}\n\n\\section{Delay and Energy-Efficiency Driven Computing Task Offloading and Resource Allocation Algorithm Based on Multi-Agent Reinforcement Learning}\n\\subsection{Problem Formulation}\nWe formulate the optimization problem of reducing the energy consumption of each vehicle without exceeding the delay constraint by carrying out the optimal computing task offloading and resource allocation strategy, which can be described as follows:\n\\begin{equation}\\label{problem}\n\\begin{aligned}\n& \\underset{{{X}_{t}}(k,n),\\forall k,n}{\\mathop{\\min }}\\,\\sum\\limits_{t=1}^{T}{E(t)} \\\\ \n& s.t. \\\\ \n& (c1)\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}+\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold}\\le 1,\\forall k\\in \\mathbb{Q} \\\\ \n& (c2)\\sum\\limits_{k}{b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}}\\le 1,\\forall k\\in \\mathbb{Q} \\\\ \n& (c3)\\sum\\limits_{k}{z_{k,n}^{VEC,up}}\\le 1,\\forall k\\in \\mathbb{Q},n\\in \\mathbb{N}_{VEC}^{up} \\\\ \n& (c4)\\sum\\limits_{k}{z_{k,n}^{VEC,down}}\\le 1,\\forall k\\in \\mathbb{Q},n\\in \\mathbb{N}_{VEC}^{down} \\\\ \n& (c5)D({{\\mathcal{I}}_{t(k)}})\\le \\Upsilon({{v}_{\\mathcal{I}_t(k)}}),\\forall k\\in \\mathbb{Q} \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{X}_{t}}(k,n)=(\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC},\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold},z_{k,n}^{VEC,up},z_{k,n}^{VEC,down})$. Constraint (c1) implies that computing task ${{\\mathcal{I}}_{t(k)}}$ cannot simultaneously be offloaded to the local VEC server, executed locally and held on. Constraint (c2) indicates that the total computation capacity allocated to the computing tasks by the VEC server cannot exceed its own computing capacity. 
Constraint (c3) and constraint (c4) indicate that each channel can be assigned to at most one vehicle at each scheduling period. Constraint (c5) indicates that computing task ${{\\mathcal{I}}_{t(k)}}$ should be completed within the delay constraint.\n\n\\subsection{Deep Reinforcement Learning-Based Solution Method}\nEquation (\\ref{problem}) describes a multi-vehicle cooperation and competition problem, which is NP-hard. Therefore, we employ the deep reinforcement learning method to solve the proposed computing task offloading and resource allocation problem. First, we formulate our problem as a Markov decision process (MDP) to accurately describe the offloading and resource allocation decision process and utilize the multi-agent deep deterministic policy gradient (MADDPG) \\cite{Lowe} to find the optimal policy for the MDP. In what follows, we will present the elements of the MDP, including the state space, action space and reward function.\n\\subsubsection{State Space}\nWe define the state space of vehicle $k$ as ${{s}_{k}}(t)$, including the state information of vehicle $k$, other vehicles and the VEC server, which is depicted as\n\\begin{equation}\\small\n\\begin{aligned}\n{{s}_{k}}(t)&=[{{v}_{1}}(t),...,{{v}_{K}}(t),{{d}_{1}}(t),...,{{d}_{K}}(t),{{c}_{1}}(t),...,{{c}_{K}}(t), \\\\&r{{b}_{VEC}}(t),\ns\\tau _{k}^{Hold}(t),s\\tau _{k}^{VEC}(t),sb_{k}^{VEC}(t), sz_{1}^{VEC,up}(t),\\\\&...,sz_{N_{VEC}^{up}}^{VEC,up}(t),sz_{1}^{VEC,down}(t),...,sz_{N_{VEC}^{down}}^{VEC,down}(t)] \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{v}_{k}}(t),{{d}_{k}}(t),{{c}_{k}}(t)$ are the speed, position and file size to be processed of vehicle $k$ at time $t$, respectively. $r{{b}_{VEC}}(t)$ is the remaining computation capacity of the VEC server at time $t$. $s\\tau _{k}^{(\\centerdot )}(t)$ indicates whether vehicle $k$ selects the offloading position $(\\centerdot)$ at time $t$. If it is selected, $s\\tau _{k}^{(\\centerdot )}(t) = 1$, otherwise, $s\\tau _{k}^{(\\centerdot )}(t) = 0$. 
$sb_{k}^{VEC}(t)$ is the ratio of computation resource allocated by the VEC server to vehicle $k$ at time $t$. $sz_{1}^{VEC,up}(t),...,sz_{N_{VEC}^{up}}^{VEC,up}(t)$ indicate whether the uplink channel resources of the VEC server are available at time $t$. If a channel is available, the value is 1, otherwise, the value is 0. $sz_{1}^{VEC,down}(t),...,sz_{N_{VEC}^{down}}^{VEC,down}(t)$ indicate whether the downlink channel resources of the VEC server are available at time $t$, with the same convention. Therefore, the state space of the system can be defined as: ${{S}_{t}}=({{s}_{1}}(t),...{{s}_{k}}(t)...,{{s}_{K}}(t))$.\n\n\\subsubsection{Action Space}\nSince vehicle $k$ cannot offload multiple tasks simultaneously at time $t$, task ${{\\mathcal{I}}_{t(k)}}$ and vehicle $k$ have a one-to-one correspondence. Therefore, we can represent the action space of task ${{\\mathcal{I}}_{t(k)}}$ by that of vehicle $k$. For vehicle $k$, the action space, $a_k(t)$, contains whether to hold on or offload the task to the VEC server, the computation resource allocated by the VEC server, and the uplink and downlink channels allocated by the VEC server, which can be expressed as\n\\begin{equation}\\small\n\\begin{aligned}\na_{k}^{{}}(t)&=[\\tau _{k}^{Hold}(t),\\tau _{k}^{VEC}(t),b_{k}^{VEC}(t),z_{k,1}^{VEC,up}(t),...,z_{k,N_{VEC}^{up}}^{VEC,up}(t),\\\\&z_{k,1}^{VEC,down}(t),...,z_{k,N_{VEC}^{down}}^{VEC,down}(t)]\n\\end{aligned}\n\\end{equation}\nTherefore, the action space of the system can be defined as: ${{A}_{t}}=\\{{{a}_{1}}(t),...{{a}_{k}}(t)...,{{a}_{K}}(t)\\}$.\n\n\\subsubsection{Reward}\nThe goal of this paper is to reduce the energy consumption of each vehicle terminal without exceeding the task delay constraint, which can be realized by allocating the computation resource and wireless resource of the system. Therefore, we set rewards based on the constraint conditions and the objective function to accelerate the training speed. 
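As a minimal sketch of how the flat state and action vectors defined above might be assembled, the helpers below concatenate the per-vehicle fields and the server-side flags; the function names and all field values are illustrative assumptions, not the paper's implementation.

```python
def build_state(speeds, positions, sizes, rb_vec, s_hold, s_vec, s_b,
                up_free, down_free):
    """s_k(t): per-vehicle speed/position/file size, remaining VEC capacity,
    the vehicle's own offloading flags, and channel availability flags."""
    return (speeds + positions + sizes
            + [rb_vec, s_hold, s_vec, s_b] + up_free + down_free)

def build_action(hold, use_vec, b_share, up_alloc, down_alloc):
    """a_k(t): hold-on flag, offload flag, computation share,
    and 0/1 channel allocation indicators."""
    return [hold, use_vec, b_share] + up_alloc + down_alloc
```

For a toy system with two vehicles and two channels per direction, the state vector has $3K + 4 + N_{VEC}^{up} + N_{VEC}^{down}$ entries, which is what the critic network would consume after concatenation over agents.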
After taking action ${{a}_{k}}(t)$, if the state of vehicle $k$ does not satisfy the constraints (c1)-(c4), the reward function can be defined as \n\\begin{equation}\\small\n\\begin{aligned}\n& {{r}_{k}}(t)={{\\ell }_{1}}+{{\\Gamma }_{1}}\\cdot (s\\tau _{k}^{VEC}+s\\tau _{k}^{Hold}-1)\\cdot {{\\Lambda }_{(s\\tau _{k}^{VEC}+s\\tau _{k}^{Hold}\\le 1)}}\\\\&+{{\\Gamma }_{2}}\\cdot (\\sum\\limits_{k}{sb_{k}^{VEC}}-1)\\cdot {{\\Lambda }_{(\\sum\\limits_{k}{sb_{k}^{VEC}}\\le 1)}}\\\\& \n+{{\\Gamma }_{3}}\\cdot (\\sum\\limits_{k}{sz_{k,n}^{VEC,up}}-1)\\cdot {{\\Lambda }_{(\\sum\\limits_{k}{sz_{k,n}^{VEC,up}}\\le 1)}}\\\\&+{{\\Gamma }_{4}}\\cdot (\\sum\\limits_{k}{sz_{k,n}^{VEC,down}}-1)\\cdot {{\\Lambda }_{(\\sum\\limits_{k}{sz_{k,n}^{VEC,down}}\\le 1)}} \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{\\Lambda }_{(\\centerdot )}}$ equals -1 if the condition $(\\centerdot)$ is not satisfied and 0 otherwise. ${{\\ell }_{1}},{{\\Gamma }_{1}},{{\\Gamma }_{2}},{{\\Gamma }_{3}},{{\\Gamma }_{4}}$ are experimental parameters. After taking action ${{a}_{k}}(t)$, if the state of vehicle $k$ satisfies constraints (c1)-(c4) but not constraint (c5), the reward function can be defined as \n\\begin{equation}\n{{r}_{k}}(t)={{\\ell }_{2}}+\\exp (Th{{r}_{k}}(t)-\\Upsilon({{v}_{\\mathcal{I}_t(k)}}))\n\\end{equation}\nwhere ${{\\ell }_{2}}$ is an experimental parameter. After taking action ${{a}_{k}}(t)$, if the state of vehicle $k$ satisfies all constraints (c1)-(c5), the reward function can be defined as \n\\begin{equation}\n{{r}_{k}}(t)={{\\ell }_{3}}+{{\\Gamma }_{5}}\\cdot \\exp ({{E}_{k}}(t))\n\\end{equation}\nwhere ${{\\ell }_{3}},{{\\Gamma }_{5}}$ denote experimental parameters.\n\\subsubsection{Joint Delay and Energy-Efficiency Algorithm Based on MADDPG}\nThe centralized training process is composed of $K$ agents, whose network parameters are $\\theta =\\{{{\\theta }_{1}},...,{{\\theta }_{K}}\\}$. 
We denote $\\mu =\\{{{\\mu }_{{{\\theta }_{1}}}},...,{{\\mu }_{{{\\theta }_{K}}}}\\}$ (with ${{\\mu }_{{{\\theta }_{i}}}}$ abbreviated as $\\mu_i$) as the set of all agent deterministic policies. For the deterministic policy ${{\\mu }_{k}}$ of agent $k$, the gradient can be depicted as\n\\begin{equation}\n\\begin{aligned}\n&{{\\nabla }_{{{\\theta }_{k}}}}J({{\\mu }_{k}})=\\\\&{{\\mathbb{E}}_{S,A\\sim \\mathcal{D}}}[{{\\nabla }_{{{\\theta }_{k}}}}{{\\mu }_{k}}({{a}_{k}}|{{s}_{k}}){{\\nabla }_{{{a}_{k}}}}Q_{k}^{\\mu }(S,{{a}_{1}},...,{{a}_{K}}){{|}_{{{a}_{k}}={{\\mu }_{k}}({{s}_{k}})}}]\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal{D}$ is the experience replay buffer, which contains a series of tuples $(S,A,{{S}^{'}},R)$. $Q_{k}^{\\mu }(S,{{a}_{1}},...,{{a}_{K}})$ is the Q-value function. The critic network can be updated according to the loss function as follows\n\\begin{equation}\n\\begin{aligned}\n& \\mathcal{L}({{\\theta }_{k}})={{\\mathbb{E}}_{S,A,R,{{S}^{'}}}}[{{(Q_{k}^{\\mu }(S,a_{1}^{{}},...,a_{K}^{{}})-y)}^{2}}] \\\\ \n& \\text{where} \\;\\; y=r_{k}^{{}}+\\gamma Q_{k}^{{{\\mu }^{'}}}({{S}^{'}},a_{1}^{'},...,a_{K}^{'}){{|}_{a_{j}^{'}=\\mu _{j}^{'}({{s}_{j}})}} \\\\ \n\\end{aligned}\n\\end{equation}\nwhere $\\gamma $ is the discount factor. The actor network is updated using the sampled policy gradient, which can be expressed as\n\\begin{equation}\n{{\\nabla }_{{{\\theta }_{k}}}}J\\approx \\frac{1}{X}\\sum\\limits_{j}{{{\\nabla }_{{{\\theta }_{k}}}}}{{\\mu }_{k}}(s_{k}^{j}){{\\nabla }_{{{a}_{k}}}}Q_{k}^{\\mu }({{S}^{j}},a_{1}^{j},...,a_{K}^{j}){{|}_{{{a}_{k}}={{\\mu }_{k}}(s_{k}^{j})}}\n\\end{equation}\nwhere $X$ is the mini-batch size and $j$ is the sample index. The specific joint delay and energy-efficiency algorithm based on MADDPG (JDEE-MADDPG) is shown in Algorithm 1.\n\\begin{algorithm}[!h]\n\t\\caption{JDEE-MADDPG}\n\t\\label{alg1}\n\n\t\\textbf{Initialize:} the positions, speed, task queue, computing resources and wireless resources of all vehicles. 
Initialize the computing and wireless resources of the VEC server. Initialize the weights of the actor and critic networks.\n\t\n\t\\For{{episode= 1:M}}\n\t{\n\t\tInitialize a random process $\\mathcal{N}$ for action exploration;\n\t\n\t\tReceive initial state $S$;\n\t\t\n\t\t\\For{each vehicle $k=1,...,K$}\n\t\t{\n\t\t\tExecute action ${{a}_{k}}$ and obtain new state $s_{k}^{'}$;\n\t\t\t\n\t\t\t\\uIf{ the $s_{k}^{'}$ does not satisfy constraints (c1)-(c4) in Eq.(12):}\n\t\t\t\t{\n\t\t\t\t \tObtain the reward of vehicle $k$ based on Eq.(15);\n\t\t\t }\n\t\t\t\\uElseIf{the $s_{k}^{'}$ satisfies constraints (c1)-(c4) but not (c5) in Eq.(12):}\t \n\t\t\t\t{\n\t\t\t\t\tObtain the reward of vehicle $k$ based on Eq.(16);\n\t\t\t\t}\n\t\t\t\\uElseIf{the $s_{k}^{'}$ satisfies all constraints (c1)-(c5) in Eq.(12):}\n\t\t\t\t{\n\t\t\t\t\tObtain the reward of vehicle $k$ based on Eq.(17);\n\t\t\t\t}\n\n\t\t\t \\textbf{end}\n\t\t\t \n Obtain\tthe action $A$, new state ${{S}^{'}}$ and reward $R$;\n \n Store $(S,A,{{S}^{'}},R)$ in replay buffer $\\mathcal{D}$;\n \n\t\t}\n\t\t\\For{each vehicle $k=1,...,K$}\n\t\t{\n\t\t\tSample a random mini-batch of $X$ samples $({{S}^{j}},{{A}^{j}},{{R}^{j}},{{S}^{'}}^{j})$ from $\\mathcal{D}$;\n\t\t\t\n\t\t\tUpdate the critic network by minimizing the loss function, Eq.(19);\n\t\t\t\n\t\t\tUpdate the actor network using the sampled policy gradient, Eq.(20);\n\t\t} \n\t\tUpdate the target network parameters of each vehicle $k$: $\\theta _{k}^{'}\\leftarrow \\delta {{\\theta }_{k}}+(1-\\delta )\\theta _{k}^{'}$\n\t}\n\\end{algorithm}\n\n\n\\section{Simulation Results}\n\\subsection{Parameter Setting}\nThe specific simulation parameters are presented in Table I and Table II. The algorithms compared in this section are as follows: \n\n\\textbf{All Local Execution (AL):} All computation tasks are executed locally.\n\n\\textbf{All VEC Execution (AV):} The CA tasks are executed locally, while HPA and LPA tasks are executed in the VEC server. 
The resource allocation strategy is based on the size of task.\n\n\\textbf{Random Offloading (RD):} The HPA and LPA tasks are executed locally and in VEC server based on the uniform distribution. The resource allocation strategy is based on the size of task.\n\n\\textbf{Energy and Delay Greedy (EDG):} The offloading strategy is based on vehicle's channel state and resource allocation strategy is based on the size of task, in order to decrease the energy cost and execution delay in each step. \n\n\\begin{table}[!h]\n\t\\caption{Simulation Parameter Configuration}\n\t\\begin{tabular}{|c|c|ccc}\n\t\t\\cline{1-2}\n\t\t\\bf{Parameter} & \\bf{Value} & & & \\\\ \\cline{1-2}\n\t\tNumber of vehicles & 5, 7 ,9, 11, 13 & & & \\\\ \\cline{1-2}\n\t\tSize of task queue & 10 & & & \\\\ \\cline{1-2}\n\t\tSize of task input & {[}0.2, 1{]} Mb & & & \\\\ \\cline{1-2}\n\t\tSpeed of vehicle & {[}30, 50{]}, {[}50, 80{]}, {[}30, 80{]} Km\/h & & & \\\\ \\cline{1-2}\n\t\tRSU's coverage range & 500 m & & & \\\\ \\cline{1-2}\n\t\tRSU's bandwidth & 100 MHz & & & \\\\ \\cline{1-2}\n\t\tChannel model & Typical Urban & & & \\\\ \\cline{1-2}\n\t\t\\begin{tabular}[c]{@{}c@{}}Transmission power between\\\\ vehicle and RSU\\end{tabular} & 0.5 W & & & \\\\ \\cline{1-2}\n\t\t\\begin{tabular}[c]{@{}c@{}}Computation capacity \\\\ of VEC server\\end{tabular} & 10 G Cycles\/s & & & \\\\ \\cline{1-2}\n\t\t\\begin{tabular}[c]{@{}c@{}}Computation capacity \\\\ of vehicle\\end{tabular} & 1, 1.2, 1.4, 1.6, 1.8 G Cycles\/s & & & \\\\ \\cline{1-2}\n\t\tComputation density & {[}20, 50{]} Cycles\/bit & & & \\\\ \\cline{1-2}\n\t\tWaiting time of hold on & 20, 50 ms & & & \\\\ \\cline{1-2}\n\t\tDelay Threshold & 10, 40, 100ms & & & \\\\ \\cline{1-2}\t\t\n\t\t\\begin{tabular}[c]{@{}c@{}}Output data size\/ input \\\\ data size ratio\\end{tabular} & 0.1 & & & \\\\ \\cline{1-2}\n\t\tEnergy density {\\cite{Dai}} & $1.25\\times 10^{-26}$ J\/Cycle & & & \\\\ \\cline{1-2}\n\t\tParameters of reward & 
\\begin{tabular}[c]{@{}c@{}}${{\\Gamma }_{1}}=0.8$, ${{\\Gamma }_{2}},{{\\Gamma }_{3}},{{\\Gamma }_{4}},{{\\Gamma }_{5}}=0.5$ \\\\ ${{\\ell }_{1}}=-0.4,{{\\ell }_{2}}=-0.2,{{\\ell }_{3}}=0.5$ \\end{tabular} & & & \\\\ \\cline{1-2}\n\t\\end{tabular}\n\\end{table}\n\n\\begin{table}[!h]\n\t\\caption{The Neural Network and Training Parameters}\n\t\\begin{tabular}{|c|c|c|c|}\n\t\t\\hline\n\t\t\\textbf{Parameter} & \\textbf{Value} & \\textbf{Parameter} & \\textbf{Value} \\\\ \\hline\n\t\tLayers & 3 & Layer Type & Fully Connected \\\\ \\hline\n\t\t\\multicolumn{1}{|c|}{Hidden Units} & 512 & Learning Rate of Critic & 0.001 \\\\ \\hline\n\t\tOptimizer & Adam & Learning Rate of Actor & 0.0001 \\\\ \\hline\n\t\tEpisode & 140000 & Activation Function & ReLU \\\\ \\hline\n\t\tMini-batch & 128 & Buffer Size & 20000 \\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\n\\subsection{Performance Evaluation}\nWe validate the algorithm performance in terms of convergence property, task completion delay and energy consumption under different simulation configurations. 
\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=8.6cm]{figure0.eps}\n\t\\centering\n\t\\caption{Convergence property of different numbers of vehicles.}\n\t\\label{convergence}\n\\end{figure}\n\\begin{figure*}[!h]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure1.eps}\n\t\\centering\n\t\\caption{Average task completion delay of different algorithms: (a) the number of vehicles is 5, (b) the number of vehicles is 7, (c) the number of vehicles is 9, (d) the number of vehicles is 11, (e) the number of vehicles is 13.}\n\t\\label{figure1}\n\\end{figure*}\n\\begin{figure*}[!h]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure2.eps}\n\t\\centering\n\t\\caption{Average task energy consumption of different algorithms: (a) the number of vehicles is 5, (b) the number of vehicles is 7, (c) the number of vehicles is 9, (d) the number of vehicles is 11, (e) the number of vehicles is 13.}\n\t\\label{figure2}\n\\end{figure*}\n\\begin{figure*}[!h]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure3.eps}\n\t\\centering\n\t\\caption{Average task completion delay and energy consumption of different algorithms: (a)(b) the vehicle speed range is [30, 50], (c)(d) the vehicle speed range is [50, 80].}\n\t\\label{figure3}\n\\end{figure*}\n\nIn Figure \\ref{convergence}, we present the convergence performance of our proposed JDEE-MADDPG algorithm with different numbers of vehicles. It can be seen that as the training episodes increase, the average reward of vehicles rises gradually and eventually stabilizes at a positive value. In the initial stage, the average reward of our proposed JDEE-MADDPG algorithm with fewer vehicles is higher than that with more vehicles, because more vehicles mean a higher-dimensional state space and action space, so our proposed JDEE-MADDPG algorithm needs more exploration. 
Therefore, the average reward with a large number of vehicles is initially lower than that with a small number of vehicles. As the training episodes increase, our proposed JDEE-MADDPG algorithm gradually converges, with the average reward of 5 vehicles being the highest and the average reward of 13 vehicles the lowest. The reason is that more vehicles compete more fiercely for the limited computation and wireless resources, which decreases the reward related to energy consumption and delay. \n\nIn Figure \\ref{figure1}, we present the comparison of average task completion delay with different numbers of vehicles when the vehicle speed range is from 30 to 80 Km\/h. It can be seen that compared with the AL, AV and RD algorithms, our proposed JDEE-MADDPG algorithm can always maintain a lower task completion delay for each vehicle. This is because our proposed JDEE-MADDPG algorithm can allocate the computation resource and wireless resource to vehicles more accurately based on the task priority, task size, vehicle speed and vehicle's channel state. In addition, the task completion delay of some vehicles under the EDG algorithm is less than that under our proposed JDEE-MADDPG algorithm, because our proposed algorithm sacrifices a little task completion delay to decrease the energy consumption of vehicle terminals without exceeding the task delay constraint. \n\n\nIn Figure \\ref{figure2}, we show the comparison of average task energy consumption with different numbers of vehicles when the vehicle speed range is from 30 to 80 Km\/h. It can be observed that compared with other algorithms, our proposed JDEE-MADDPG algorithm can always maintain a lower level of energy consumption. 
This is because our proposed JDEE-MADDPG algorithm can always make the optimal offloading and resource allocation strategy based on the task priority, task size, vehicle speed and vehicle's channel state, and reduce the energy consumption of all vehicles as much as possible.\n\n\nIn Figure \\ref{figure3}, we compare the average task completion delay and energy consumption with different vehicle speed ranges when the number of vehicles is 9. Compared with the AL, AV and RD algorithms, our proposed JDEE-MADDPG algorithm performs better in terms of delay and energy consumption. This is because our proposed JDEE-MADDPG algorithm can utilize more information of the vehicle terminals and VEC server, i.e., vehicle position, vehicle speed, task queue, channel state and remaining computation resource, to make the optimal offloading and resource allocation strategy. Besides, the reason that some vehicles' task completion delay under our proposed JDEE-MADDPG algorithm remains higher than that under the EDG algorithm is that our JDEE-MADDPG algorithm usually allocates more wireless and computation resources of the VEC server to the vehicles with high speed without exceeding the task delay constraint, so the task completion delay of some vehicles may be higher than that of the EDG algorithm. \n\n\n\n\\section{Conclusion}\nIn this paper, we propose a vehicle speed aware computing task offloading and resource allocation algorithm to achieve the goal of energy-efficiency for all vehicles within the task delay constraint. First, we establish the vehicle speed-based delay constraint model based on task types and vehicle speed. Then we calculate the task completion delay and energy consumption for different offloading positions based on the allocated computation and wireless resources. Finally, we formulate the mathematical model with the objective of minimizing the energy consumption of all vehicles subject to the delay constraint. 
The MADDPG method is utilized to obtain the offloading and resource allocation strategy. Simulation results show that the proposed JDEE-MADDPG algorithm can decrease energy consumption and task completion delay compared with other algorithms under different numbers of vehicles and vehicle speed ranges.\n\n\\section*{Acknowledgement}\nThis research work was supported in part by the National Natural Science Foundation of China (61701389, U1903213), the Natural Science Basic Research Plan in Shaanxi Province of China (2018JQ6022) and the Shaanxi Key R\\&D Program (2018ZDCXL-GY-04-03-02).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn \\cite{LSY1, LSY2}, the authors introduced the notion of \\textit{holomorphic homogeneous regular}. Then in \\cite{Y:squeezing}, the equivalent notion of \\textit{uniformly squeezing} was introduced. Motivated by these studies, in \\cite{DGZ1}, the authors introduced the \\textit{squeezing function} as follows.\n\nDenote by $\\ball(r)$ the ball of radius $r>0$ centered at the origin 0. Let $\\Omega$ be a bounded domain in $\\cv^n$, and $p\\in \\Omega$. For any holomorphic embedding $f:\\Omega\\rightarrow \\ball(1)$, with $f(p)=0$, set\n$$s_{\\Omega,f}(p):=\\sup\\{r>0:\\ \\ball(r)\\subset f(\\Omega)\\}.$$\nThen, the squeezing function of $\\Omega$ at $p$ is defined as\n$$s_\\Omega(p):=\\sup_f\\{s_{\\Omega,f}(p)\\}.$$\n\nMany properties and applications of the squeezing function have been explored by various authors, see e.g. \\cite{DGZ1,DGZ2,DF:Bergman,DFW:squeezing,FS:squeezing,FW:squeezing,KZ:squeezing}.\n\nIt is clear that squeezing functions are invariant under biholomorphisms, and they are positive and bounded above by 1. It is a natural and interesting problem to study the uniform lower and upper bounds of the squeezing function.\n\nIt was shown recently in \\cite{KZ:squeezing} that the squeezing function is uniformly bounded below for bounded convex domains (cf. \\cite[Theorem 1.1]{Frankel}). 
On the other hand, in \\cite{DGZ1}, the authors showed that the squeezing function is not uniformly bounded below on certain domains with non-smooth boundaries, such as punctured balls. In \\cite{DF}, the authors constructed a smooth pseudoconvex domain in $\\cv^3$ on which the quotient of the Bergman metric and the Kobayashi metric is not bounded above near an infinite type point. By \\cite[Theorem 3.3]{DGZ2}, the squeezing function is not uniformly bounded below on this domain.\n\nThese studies raise the question: Is the squeezing function always uniformly bounded below near a smooth finite type point? In this paper, we answer the question negatively. More precisely, we have the following\n\n\\begin{thm}\\label{T:main}\nLet $\\Omega$ be a bounded domain in $\\cv^3$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $d<\\infty$. Moreover, assume that the regular order of contact at $q$ is greater than $2d$ along two smooth complex curves not tangent to each other. Then the squeezing function $s_\\Omega(p)$ has no uniform lower bound near $q$.\n\\end{thm}\n\n\\begin{rmk}\nThe proof gives the estimate $s_\\Omega(p)\\leq C \\delta^{\\frac{1}{2d(2d+1)}}$ for some points approaching the boundary.\n\\end{rmk}\n\nIn section \\ref{S:prelim}, we recall some preliminary notions and results. In section \\ref{S:proof}, we prove Theorem \\ref{T:main}.\n\n\\section{Preliminaries}\\label{S:prelim}\n\nLet $\\Omega$ be a bounded domain in $\\cv^n$, $n\\ge 2$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$. The \\textit{Bloom-Graham type} of $\\Omega$ at $q$ is the maximal order of contact of complex manifolds of dimension $n-1$ tangent to $\\partial \\Omega$ at $q$ (see e.g. \\cite{BG}). 
Choose local coordinates $(z,t)\\in \\cv^{n-1}\\times \\cv$ such that the complex manifold of dimension $n-1$ with the maximal order of contact is given by $\\{t=0\\}$. Then $\\Omega$ is locally given by $\\rho(z,t)<0$, where $\\rho(z,t)=\\Re t+P(z)+Q(z,t)$ with $Q(z,0)\\equiv 0$ and $\\deg P(z)=d$. (We say that the degree of $P$ is $d$ if the Taylor expansion of $P$ has no nonzero term of degree less than $d$.) Since $\\Omega$ is pseudoconvex, we actually have $d=2k$ (see e.g. \\cite{D'Angelo}).\n\nFor $1\\le k\\le n-1$, let $\\varphi:\\cv^k\\rightarrow \\cv^n$ be analytic with $\\varphi(0)=q$ and $\\textup{rank} d\\varphi(0)=k$. Then the \\textit{regular order of contact} at $q$ along the $k$-dimensional complex manifold defined by $\\varphi$ is defined as $\\deg \\rho\\circ\\varphi$ (see e.g. \\cite{D'Angelo}).\n\nDenote by $\\Delta$ the unit disc in $\\cv$. Let $p\\in \\Omega$ and $\\zeta\\in \\cv^n$. The \\textit{Kobayashi metric} is defined as\n$$K_\\Omega(p,\\zeta):=\\inf\\{\\alpha:\\ \\alpha>0,\\ \\exists\\ \\phi:\\Delta\\rightarrow \\Omega,\\ \\phi(0)=p,\\ \\alpha \\phi'(0)=\\zeta\\}.$$\nThen the \\textit{Kobayashi indicatrix} is defined as (see e.g. \\cite{K:indicatrix})\n$$D_\\Omega(p):=\\{\\zeta\\in \\cv^n:\\ K_\\Omega(p,\\zeta)<1\\}.$$\nFor each unit vector $e\\in \\cv^n$, set $D_\\Omega(p,e):=\\max\\{|\\eta|:\\ \\eta\\in \\cv,\\ \\eta e\\in D_\\Omega(p)\\}$. By the definition of Kobayashi indicatrix, the following three lemmas are clear.\n\n\\begin{lem}\\label{L:ball}\n$D_{\\ball(r)}(0)=\\ball(r)$.\n\\end{lem}\n\n\\begin{lem}\\label{L:indicatrix}\nLet $\\Omega_1$ and $\\Omega_2$ be two domains in $\\cv^n$ with $\\Omega_1\\subset \\Omega_2$. Then for each $p\\in \\Omega_1$, $D_{\\Omega_1}(p)\\subset D_{\\Omega_2}(p)$.\n\\end{lem}\n\n\\begin{lem}\\label{L:indicatrix1}\nLet $\\Omega$ be a domain in $\\cv^n$ and $f:\\Omega\\rightarrow \\cv^n$ a biholomorphic map. 
Then for each $p\\in \\Omega$, $D_{f(\\Omega)}(f(p))=f'(p)D_\\Omega(p)$.\n\\end{lem}\n\nWe also need the following localization lemma (see e.g. \\cite[Lemma 3]{FL:metrics}). We will use $\\gtrsim$ (resp. $\\lesssim$, $\\simeq$) to mean $\\ge$ (resp. $\\le$, $=$) up to a positive constant.\n\n\\begin{lem}\\label{L:localization}\nLet $\\Omega$ be a bounded domain in $\\cv^n$, $q\\in \\partial \\Omega$ and $U$ a neighborhood of $q$. If $V\\subset\\subset U$ and $q\\in V$, then\n$$K_\\Omega(p,\\zeta)\\simeq K_{\\Omega\\cap U}(p,\\zeta),\\ \\ \\ \\forall\\ p\\in V,\\ \\zeta\\in \\cv^n.$$\n\\end{lem}\n\nBy the above lemma, when we consider the size of the Kobayashi indicatrix in the next section, we will work in $\\Omega\\cap U$.\n\n\\section{Estimate of the squeezing function}\\label{S:proof}\n\nWe first choose local coordinates adapted to our purpose.\n\n\\begin{lem}\\label{L:normal}\nLet $\\Omega$ be a bounded domain in $\\cv^{n+1}$, $n\\ge 1$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $2k$, $k\\ge 1$. Then there exist local coordinates $(z,t)=(z_1,\\cdots,z_n,u+iv)$ such that $q=(0,0)$ and $\\Omega$ is locally given by $\\rho(z,t)<0$ with\n\\begin{equation}\\label{E:normal}\n\\rho(z,t)=u+P(z)+Q(z)+vR(z)+u^2+v^2+o(u^2,uv,v^2,u|z|^{2k}),\n\\end{equation}\nwhere $P(z)$ is plurisubharmonic, homogeneous of degree $2k,$ but not pluriharmonic, $\\deg Q(z)\\ge 2k+1$ and $\\deg R(z)\\ge k+1$.\n\\end{lem}\n\\begin{proof}\nBy assumption, we have a local defining function of the form\n$$\\rho(z,t)=u+P(z)+Q(z)+au^2+buv+cv^2+uA(z)+vB(z)+o(|t|^2),$$\nwhere $P(z)$ is plurisubharmonic, homogeneous of degree $2k,$ but not pluriharmonic, $\\deg Q(z)\\ge 2k+1$,\nand $\\deg A(z),B(z)\\ge 1$. By changing $t$ to $t+dt^2$ and multiplying with $1+eu$ or $1+ev$, we can freely change the quadratic terms in $u,v$. 
Thus, we can assume that\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+uA(z)+vB(z)+o(|t|^2).$$\nMultiplying with $1-A(z)$, we get a new $A$, say $A'$ of degree at least $2$. Multiplying with $1-A'(z)$ we get\na new $A$ of degree at least $4$. Continuing, we can further assume that\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+vB(z)+o(|t|^2, u|z|^{2k}).$$\n\nWrite $B(z)=B_s(z)+B'(z)$, where $B_s(z)$ is the lowest order homogeneous term of degree $s\\ge 1$. Assume that $B_s(z)$ is pluriharmonic. Then there exists a holomorphic function $F(z)=A(z)-iB_s(z)$. Change again $\\rho$ to add the term $uA(z)$ with this new $A(z)$. Then $\\rho$ takes the form\n$$\\begin{aligned}\n\\rho(z,t)&=u+P(z)+Q(z)+u^2+v^2+uA(z)+vB_s(z)+vB'(z)+o(|t|^2,u|z|^{2k})\\\\\n&=u+P(z)+Q(z)+u^2+v^2+\\Re(tF(z))+vB'(z)+o(|t|^2,u|z|^{2k}).\n\\end{aligned}$$\nBy absorbing $\\Re(tF(z))$ into $u$, we get\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+vB'(z)+o(|t|^2,u|z|^{2k}).$$\nContinuing this process, we can assume that $\\rho$ takes the form\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+vB_l(z)+vB'(z)+o(|t|^2,u|z|^{2k}),$$\nwhere $B_l(z)$ is not pluriharmonic and $l\\le k$ or $l\\ge k+1$. We assume the first alternative, otherwise the proof is done. We will arrive at a contradiction to pseudoconvexity.\n\nNote that $P(z)$ is plurisubharmonic but not pluriharmonic. This implies that there exists a complex line through the origin on which the restriction of $P$ is subharmonic, but not harmonic (cf. \\cite{Forelli}). Pick a tangent vector $\\xi=(\\xi_1,\\dots \\xi_n)$ so that the Levi form of $P$ calculated at a point $\\eta\\xi$, $\\eta=|\\eta|e^{i\\theta}$, in the direction of $\\xi$ is $|\\eta|^{2k-2}G(\\theta)\\|\\xi\\|^2$. Here $G$ is a smooth nonnegative function which at most vanishes at finitely many angles. 
Choose $\\lambda$ such that $\\sigma=(\\xi,\\lambda)$ is a complex tangent vector to $\\partial \\Omega$, i.e.\n$$\\sum_{j=1}^n \\frac{\\partial \\rho}{\\partial z_j}\\xi_j+ \\frac{\\partial \\rho}{\\partial t}\\lambda=0.$$\nThen we have $|\\lambda|=O(|\\eta|^{2k-1}+|v||\\eta|^{l-1}+|t|^2+|u||\\eta|^{2k-1})\\|\\xi\\|$.\n\nThe Levi form of $\\rho$ at a boundary point $(\\eta\\xi,t)$, along the tangent vector $\\sigma$ is\n$$\\begin{aligned}\n\\mathcal L(\\rho,\\sigma)=&|\\eta|^{2k-2}G(\\theta)\\|\\xi\\|^2+|\\lambda|^2+\\mathcal L(vB_l(z),\\sigma)+\\cdots\\\\\n=&|\\eta|^{2k-2}G(\\theta)\\|\\xi\\|^2+|\\lambda|^2+\\Re(\\sum_{j=1}^n\\frac{\\partial B_l}{\\partial z_j}i\\xi_j\\overline{\\lambda})+v\\sum_{k,m}\\frac{\\partial^2 B_l}{\\partial z_k\\partial\\overline z_m}\\xi_k\\overline{\\xi}_m+\\cdots.\n\\end{aligned}$$\n\nSince $B_l$ is not pluriharmonic, we can assume after changing $\\xi$ slightly that $\\sum_{k,m}\\frac{\\partial^2 B_l}{\\partial z_k\\partial \\overline{z}_m}\\xi_k\\overline{\\xi}_m\\neq 0$. Next choose $v=\\pm C|\\eta|^k$ with $C>\\max_\\theta\\{G(\\theta)\\}$. The second term is $o(|\\eta|^{2k-2}\\|\\xi\\|^2)$ and the third term is $o(|\\eta|^{k+l-2}\\|\\xi\\|^2)$. The last term is $\\simeq |\\eta|^{k+l-2}\\|\\xi\\|^2$ and, since $l\\le k$, at least $O(|\\eta|^{2k-2}\\|\\xi\\|^2)$. Thus we have $\\mathcal L(\\rho,\\sigma)<0$. This is a contradiction.\n\\end{proof}\n\nBy Lemma \\ref{L:normal}, we can choose local coordinates $(z,w,t)=(z,w,u+iv)$ near $q$ such that $q=(0,0,0)$ and $\\Omega$ is locally given by $\\rho(z,w,t)<0$, where\n\\begin{equation}\\label{E:rho}\n\\rho(z,w,t)=u+P(z,w)+Q(z,w)+vR(z,w)+u^2+v^2+o(u^2,uv,v^2,u|(z,w)|^{2k}).\n\\end{equation}\nHere $P(z,w)$ is homogeneous of degree $2k$ with $P(z,0)=P(0,w)=0$, $\\deg Q(z,w)\\ge 2k+1$ with $\\deg Q(z,0)\\ge 4k+1$ and $\\deg Q(0,w)\\ge 4k+1$, and $\\deg R(z,w)\\ge k+1$. Set $p=(0,0,-\\delta)$ with $0<\\delta\\ll 1$.\n\n\\begin{lem}\\label{L:10}\nLet $\\zeta_1=(1,0,0)$ and $\\zeta_2=(0,1,0)$. 
Then $K_\\Omega(p,\\zeta_1),K_\\Omega(p,\\zeta_2)\\lesssim \\delta^{-\\frac{1}{4k+1}}$.\n\\end{lem}\n\\begin{proof}\nConsider the linear map $\\phi:\\Delta\\rightarrow \\cv^3$ with $\\phi(\\tau)=(\\beta\\tau,0,-\\delta)$ for $\\tau\\in \\Delta$, and $|\\beta|=\\epsilon \\delta^{\\frac{1}{4k+1}}$ for $0<\\epsilon\\ll 1$. Then\n$$\\rho\\circ\\phi(\\tau)\\le -\\delta+C|\\beta\\tau|^{4k+1}+o(\\delta)<-\\delta+\\epsilon\\delta+o(\\delta)<0.$$\nTherefore, $K_\\Omega(p,\\zeta_1)\\lesssim \\delta^{-\\frac{1}{4k+1}}$. The argument in the direction $\\zeta_2$ is similar.\n\\end{proof}\n\nLet $(a,b,0)$ be a point so that $P(a \\tau,b \\tau)$ is a subharmonic homogeneous polynomial of degree $2k$ which is not harmonic. Then both $a$ and $b$ must be nonzero. By scaling in each variable, we can assume that $a=b=1\/{\\sqrt {2}}$. \n\n\\begin{lem}\\label{L:11}\nLet $\\zeta=\\frac{1}{\\sqrt{2}}(1,1,0).$ Then $K_\\Omega(p,\\zeta)\\gtrsim \\delta^{-\\frac{1}{4k}}$.\n\\end{lem}\n\\begin{proof}\nFor $z,w$ small, we have\n$$\\begin{aligned}\nv^2\/2+vR(z,w)&\\ge v^2\/2-C|v|\\|z,w\\|^{k+1}+C^2\\|z,w\\|^{2k+2}-C^2\\|z,w\\|^{2k+2}\\\\\n&\\ge -C^2\\|z,w\\|^{2k+2}.\n\\end{aligned}$$\nTherefore,\n$$\\begin{aligned}\n\\rho&\\ge u+P(z,w)+Q(z,w)-C^2\\|z,w\\|^{2k+2}+u^2+v^2\/2+o(u^2,uv,v^2,u|(z,w)|^{2k})\\\\\n&\\ge u+P(z,w)+\\tilde{Q}(z,w)+u^2\/2+v^2\/4+o(u|(z,w)|^{2k})=:\\tilde{\\rho}.\n\\end{aligned}$$\n\nConsider an analytic map $\\phi:\\Delta\\rightarrow \\Omega$ with\n$$\\phi(\\tau)=(\\beta\\tau+f(\\tau),\\beta\\tau+g(\\tau),-\\delta+h(\\tau)),\\ \\ \\ |f(\\tau)|,|g(\\tau)|,|h(\\tau)|\\le |\\tau|^2.$$\nThen $\\tilde{\\rho}\\circ\\phi(\\tau)\\leq\\rho\\circ\\phi(\\tau)<0$. 
And we have\n$$\\begin{aligned}\n\\tilde{\\rho}(\\phi(\\tau))=&-\\delta+\\Re h(\\tau)+P(\\beta\\tau+f(\\tau),\\beta\\tau+g(\\tau))+\\tilde{Q}(\\beta\\tau+f(\\tau),\\beta\\tau+g(\\tau))\\\\\n&+u^2\/2+v^2\/4+o(u|(z,w)|^{2k})<0,\n\\end{aligned}$$\nand\n\\begin{equation}\\label{E:average}\n\\frac{1}{2\\pi}\\int_0^{2\\pi} \\tilde{\\rho}(\\phi(|\\tau|e^{i\\theta}))d\\theta<0.\n\\end{equation}\n\nNote that, by the homogeneous expansion of $P(z,w)$, we have for $|\\tau|<1$ small\n\\begin{equation}\\label{E:xitau}\n|\\beta\\tau|^{2k}-\\sum_{i=0}^{2k-1} |\\beta|^i|\\tau|^{4k-i}\\lesssim \\delta.\n\\end{equation}\nChoose $|\\tau|=c|\\beta|$ for some small constant $c>0.$ Then \\eqref{E:xitau} gives\n$$\\left|\\frac{\\beta}{2}\\right|^{4k}\\lesssim \\delta.$$\nHence, $K_\\Omega(p,\\zeta)\\gtrsim \\delta^{-\\frac{1}{4k}}$. \n\\end{proof}\n\n\\begin{lem}\\label{L:squeezing}\nLet $D$ be a bounded domain in $\\cv^n$, $n\\ge 2$, containing the origin. Assume that there exist two linearly independent nonzero vectors $\\zeta_1,\\zeta_2\\in D$ and $\\epsilon>0$ such that $\\epsilon(\\zeta_1+\\zeta_2)\\not\\in D$. Then there does not exist a linear map $L:D\\rightarrow \\cv^n$, with $L(0)=0$, such that $\\ball(3\\epsilon)\\subset L(D)\\subset \\ball(1)$.\n\\end{lem}\n\\begin{proof}\nLet $L:D\\rightarrow \\cv^n$ be a linear map with $L(0)=0$ and suppose $\\ball(3\\epsilon)\\subset L(D)\\subset \\ball(1)$. Since $L(D)$ contains the open ball $\\ball(3\\epsilon)$, the linear map $L$ is surjective and hence invertible. Since $\\epsilon(\\zeta_1+\\zeta_2)\\not\\in D$ and $L$ is linear and invertible, we have $\\epsilon(L(\\zeta_1)+L(\\zeta_2))\\not\\in L(D)$. This implies that $\\epsilon(L(\\zeta_1)+L(\\zeta_2))\\not\\in \\ball(3\\epsilon)$ and thus $\\|L(\\zeta_1)+L(\\zeta_2)\\|\\ge 3$. However, since $\\zeta_1,\\zeta_2\\in D$ and $L(D)\\subset \\ball(1)$,\n$\\|L(\\zeta_1)+L(\\zeta_2)\\|\\leq \\|L(\\zeta_1)\\|+\\|L(\\zeta_2)\\|\\leq 1+1=2.$\nThis completes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{T:main}]\nChoose local coordinates $(z,w,t)$ such that $q=(0,0,0)$ and let $p=(0,0,-\\delta)$ for $\\delta>0$ small. 
Let $\\zeta_1=(1,0,0)$ and $\\zeta_2=(0,1,0)$, the two directions along which the regular order of contact at $q$ is greater than $2d=4k$. By Lemma \\ref{L:10}, $K_\\Omega(p,\\zeta_1),K_\\Omega(p,\\zeta_2)\\lesssim \\delta^{-\\frac{1}{4k+1}}$. By Lemma \\ref{L:11}, $K_\\Omega(p,\\frac{1}{\\sqrt{2}}(\\zeta_1+\\zeta_2))\\gtrsim \\delta^{-\\frac{1}{4k}}$.\n\nChoose $\\lambda>0$ with $\\lambda\\gtrsim \\delta^{\\frac{1}{4k+1}}$ such that $\\lambda \\zeta_1,\\lambda \\zeta_2\\in D_\\Omega(p)$. Then for $\\epsilon \\simeq \\delta^{\\frac{1}{4k(4k+1)}}$, we have $\\epsilon (\\lambda \\zeta_1+ \\lambda \\zeta_2)\\not\\in D_\\Omega(p)$. Thus, by Lemma \\ref{L:squeezing}, there does not exist a linear map $L:D_\\Omega(p)\\rightarrow \\cv^3$ such that $\\ball(3\\epsilon)\\subset L(D_\\Omega(p))\\subset \\ball(1)$.\n\nLet $f$ be a biholomorphism of $\\Omega$ into $\\ball(1)$ such that $f(p)=0$ and $\\ball(c)\\subset f(\\Omega)$ for some $c>0$. Set $L=f'(p)$. Then, by Lemmas \\ref{L:ball}, \\ref{L:indicatrix} and \\ref{L:indicatrix1}, $\\ball(c)\\subset L(D_\\Omega(p))\\subset \\ball(1)$. Therefore, we have $c\\lesssim \\delta^{\\frac{1}{4k(4k+1)}}$. Since $f$ is arbitrary, we get $s_\\Omega(p) \\lesssim \\delta^{\\frac{1}{4k(4k+1)}}$. Since $\\delta$ can be arbitrarily small, this completes the proof.\n\\end{proof}\n\n\\begin{rmk}\nTheorem \\ref{T:main} does not hold if only assuming that the regular order of contact at $q$ is greater than $2d$ along one smooth complex curve. For instance, consider $\\Omega$ given by\n$$\\{(z,w,t)\\in \\cv^3:\\ |t|^2+|z|^2+|w|^6<1\\}.$$\nThen at $q=(0,0,1)$, the Bloom-Graham type is $2$ and the regular order of contact along $(0,1,0)$ is $6>4$. 
But $\\Omega$ is a bounded convex domain and thus the squeezing function has a uniform lower bound by \\cite{KZ:squeezing}.\n\\end{rmk}\n\n\\begin{rmk}\nUsing similar arguments, one can extend Theorem \\ref{T:main} to higher dimensions as follows.\n\n\\begin{thm}\nLet $\\Omega$ be a bounded domain in $\\cv^n$, $n\\ge 4$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $d$. Moreover, assume that the regular order of contact at $q$ is $d$ along a two-dimensional complex surface $\\Sigma$ and the regular order of contact at $q$ is greater than $2d$ along two smooth complex curves not tangent to each other contained in $\\Sigma$. Then the squeezing function $s_\\Omega(p)$ has no uniform lower bound near $q$.\n\\end{thm}\n\\end{rmk}\n\n\\begin{rmk}\nAfter the completion of this work, it was brought to our attention by Gregor Herbort that a similar comparison result to \\cite{DF} was obtained for the following domain in \\cite{DFH:Bergman}:\n$$\\Omega:=\\{(z,w,t)\\in \\cv^3:\\ \\Re t+|z|^{12}+|w|^{12}+|z|^2|w|^4+|z|^6|w|^2<0\\}.$$\nTherefore, by our remark in the introduction, the squeezing function does not have a uniform lower bound on this domain. More generally, we have the following\n\\begin{thm}\nLet $\\Omega$ be a bounded domain in $\\cv^3$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $d<\\infty$. Let $\\rho$ be a defining function of $\\Omega$ near $q$ in the normal form \\eqref{E:normal} and assume that the leading homogeneous term $P(z)$ only contains positive terms. Moreover, assume that the regular order of contact at $q$ is greater than $d$ along two smooth complex curves not tangent to each other. 
Then the squeezing function $s_\\Omega(p)$ has no uniform lower bound near $q$.\n\\end{thm}\n\\begin{proof}[Sketch of proof]\nIn Lemma \\ref{L:10}, we get $K_\\Omega(p,\\zeta_1),K_\\Omega(p,\\zeta_2)\\lesssim \\delta^{-\\frac{1}{2k+1}}$, by the same argument. In Lemma \\ref{L:11}, we get $K_\\Omega(p,\\zeta)\\gtrsim \\delta^{-\\frac{1}{2k}}$, by noticing that instead of \\eqref{E:xitau} we have $|\\beta\\tau|^{2k}\\lesssim \\delta$ since all terms of $P(z)$ are positive. Then arguing exactly as in the proof of Theorem \\ref{T:main}, we get $s_\\Omega(p) \\lesssim \\delta^{\\frac{1}{2k(2k+1)}}$.\n\\end{proof}\n\\end{rmk}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecent advances in computer vision and machine learning have dramatically increased the accuracy of face recognition technologies \\cite{schroff2015facenet, masi2018deep, deng2019arcface}. Face recognition is already commonly used in commercial products such as Apple's FaceID \\cite{faceid} or at border checkpoints in airports where the portrait on a digitized biometric passport is compared with the holder's face. Most people have little concern about these specific applications as they are limited in scope and have a single well defined goal. As the technologies mature, however, it becomes possible to deploy them on a much larger scale. The most well known example of this is the large scale use of CCTV cameras equipped with intelligent analysis software. In this way, facial recognition checkpoints are deployed in areas such as gas stations, shopping centers, and mosque entrances \\cite{larson2018china, NOS}.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/overview_.png}\n \\setlength{\\abovecaptionskip}{-2pt}\n \\setlength{\\belowcaptionskip}{-2pt}\n \\caption{Overview of our proposed approach. For each subject, we automatically extract one high quality frame (or face crop) and encrypt it for storage. 
Access to the images is only possible after legal authorization in case of an investigation.}\n \\label{fig:overview}\n\\end{figure}\nFrom a domestic security point of view, these technologies are extremely useful. They can be used to find missing children, identify and track criminals or locate potential witnesses. There are many examples where CCTV footage in combination with facial recognition software has supported and accelerated criminal investigations. In 2016 for example, the ``man in the hat'' responsible for the Brussels terror attacks was identified thanks to FBI facial recognition software \\cite{maninhat}. \n\\\\\n\\newline\nThe proliferation of facial recognition technology however also raises many valid privacy concerns. A fundamental human rights principle is that surveillance should be necessary and proportionate. This principle was adopted by the UN Human Rights Council (HRC) in the ``the right to privacy in the digital age'' resolution which states that ``States should ensure that any interference with the right to privacy is consistent with the principles of legality, necessity and proportionality''\\cite{unitednations}.\n\\\\\n\\newline\nGovernments have to balance both aspects of this technology before they implement a certain solution. On the one hand, there are the obvious advantages of a large scale blanket surveillance system but this clearly violates the proportionality principle. On the other hand, a recent study showed that a majority of the public considers it to be acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces as long as it is within a clearly defined regulatory framework \\cite{smith2019more}.\n\\\\\nAn interesting research direction is the development of more privacy-friendly alternatives that can still support law enforcement in criminal investigations. 
In this work we present an intelligent frame selection approach that can be used as a building block in a more privacy-friendly alternative to large scale face recognition. The problem of key frame selection is defined as selecting a single frame out of a video stream that represents the content of the scene \\cite{wolf1996key}. In the context of facial recognition, our goal is to record a single clear crop of each person visible in the stream without revealing his or her identity. In this way, the minimization strategy of privacy preserving technologies \\cite{DomingoFerrer2020} is implemented by collecting the minimal necessary amount of personal data.\nAccording to \\cite{duncan2007engineering}, data minimization is an unavoidable first step to engineer systems in line with the principles of privacy by design \\cite{cavoukian2009privacy}. Next, to ensure the confidentiality of the collected data, all images are encrypted (i.e. ``hide'' strategy) and access can only be provided to law enforcement after legal authorization as shown in figure \\ref{fig:overview}.\n\\\\\n\\newline\nExtracting face crops which are suitable for recognition is a difficult problem as surveillance footage is typically blurry because of the motion of the subjects. In addition, it is hard to quantify the quality of a single frame as it depends on multiple aspects such as the angle, illumination and head position. Frame selection techniques have been used before in the context of face recognition to reduce the computational cost or increase the recognition rate \\cite{wong_cvprw_2011, anantharajah2012quality, qi2018boosting, vignesh2015face, best2017automatic, hernandez2020biometric, terhorst2020ser}. In this work we introduce a novel deep learning based technique to perform intelligent frame selection. The key contribution is a new face image quality assessment (FQA) approach inspired by anomaly detection. 
Unlike previous solutions, our system is trained completely unsupervised without the need for labeled face images and without access to a face recognition system. Our approach can be used for the same tasks as previous frame selection techniques, but in this work we propose to use it as a building block for a more privacy-friendly alternative to large scale face recognition.\n\\\\\n\\newline\nThis paper is organized as follows. An overview of the related work is presented in section \\ref{section:related}. Next, we propose a privacy preserving alternative to large scale facial recognition in section \\ref{section:privacy_preserving_appr}. Our novel FQA method is introduced in section \\ref{section:approach} and experimentally validated in sections \\ref{section:experimental_setup} and \\ref{section:results}. We conclude in section \\ref{section:conclusion} and give some pointers for future research.\n\n\\section{Related work}\n\\label{section:related}\nDeep learning has dominated the field of facial recognition in the past years. Recent techniques can reach accuracies of 99.63\\% \\cite{schroff2015facenet} on the Labeled Faces in the Wild dataset \\cite{huang2008labeled}, the most commonly used benchmark dataset. These data-driven methods outperform techniques based on engineered features by a large margin \\cite{masi2018deep}. Face recognition is typically subdivided into face verification and face identification. Face verification is the task of deciding whether two pictures show the same person, while face identification tries to determine the identity of the person on an image.\n\\\\\n\\newline\nA large amount of research is dedicated to extracting high quality face image representations. These can then be used to calculate the similarity between two face crops or as input to a classification model. Different loss functions have been designed to get meaningful face image representations. 
First attempts used softmax loss \\cite{sun2014deep} while recent approaches focus on euclidean distance-based loss \\cite{schroff2015facenet}, cosine-margin-based loss \\cite{wang2017normface} and variations of the softmax loss \\cite{deng2019arcface}. Given these high quality feature embeddings, we can directly perform face verification by calculating the distance between two images (i.e. one-to-one matching). Face identification requires a database which contains a reference image of every identity. To identify a person on an image, the probe image's embedding should be compared with all images in the reference database (i.e. one-to-many matching).\n\\\\\n\\newline\nFace images are a popular biometric since they can be collected in unconstrained environments and without the user's active participation. These properties in combination with the widespread deployment of surveillance cameras give rise to some severe privacy concerns. As a result, researchers explore techniques to secure the data collected by surveillance cameras and are developing privacy preserving facial recognition systems. The techniques differ in exactly what privacy-sensitive aspects they protect. Some techniques avoid deducing soft biometrics (e.g. age, gender, race) from data collected for verification or recognition purposes \\cite{mirjalili2018gender, mirjalili2020privacynet}. Other techniques focus on face image de-identification \\cite{newton2005preserving}, which prevents a facial recognition system from identifying the subject while still preserving some facial characteristics. A very powerful technique hides the input image and the facial recognition result from the server that performs the recognition using a cryptographically enhanced facial recognition system \\cite{erkin2009privacy}.\n\\\\\n\\newline\nA lot of effort has gone into the development of face image quality assessment (FQA) metrics. 
Face image quality is defined as the suitability of an image for consistent, accurate and reliable recognition \\cite{hernandez2020biometric}. FQA methods aim to predict a value which describes the quality of a probe image. Most previous FQA techniques focus on improving the performance of a face recognition system \\cite{wong_cvprw_2011, anantharajah2012quality, qi2018boosting, vignesh2015face, best2017automatic, hernandez2020biometric, terhorst2020ser}. An FQA system is then used as a first step to make sure that we only feed high quality images to the face recognition model, thereby increasing the accuracy and reducing the computational cost. A first family of FQA techniques (full-reference and reduced-reference FQA) assumes the presence of a high quality sample of the probe image's subject. These methods do not work for unknown subjects, whereas support for unknown subjects is necessary for our purpose, as we will explain in the next section. A second family of FQA techniques develops hand-crafted features to assess the quality of a face image \\cite{anantharajah2012quality, REFICAO} while more recent studies apply data driven methods and report a considerable increase in performance. Different studies \\cite{qi2018boosting, vignesh2015face} propose a supervised approach where a model is trained to predict the distance between the feature embeddings of two images. Since two samples are necessary to perform a comparison, one low quality sample can affect the quality score of a high quality sample. This is commonly solved by assuming that an image compliant with the ICAO guidelines \\cite{REFICAO} represents perfect quality \\cite{hernandez2020biometric}. In the work of Hernandez-Ortega et al. a pretrained ResNet-50 network \\cite{he2016deep} is modified by replacing the classification layer with two regression layers which output the quality score. Alternatively, it is also possible to use human labeled data \\cite{best2017automatic}. 
The most similar to our work is \\cite{terhorst2020ser} which also proposes an unsupervised approach. Here the quality of an image is measured as its robustness in the embedding space, which is calculated by generating embeddings using random subnetworks of a selected face recognition model.\n\\\\\n\\newline\nCompared to previous work, we introduce a novel completely unsupervised FQA method based on a variational autoencoder. We assume no access to the identities of the people and show that it works well for unseen people. Unlike \\cite{terhorst2020ser}, no facial recognition model is used to calculate a quality score. In contrast to previous work our main goal is not necessarily to improve the facial recognition accuracy but instead we use it as a building block to enable a more privacy-friendly alternative to large scale face recognition, as explained in the next section.\n\n\\section{Frame selection as an alternative to face recognition}\n\\label{section:privacy_preserving_appr}\nIn this section we introduce a framework based on \\cite{simoens2013scalable} that uses intelligent frame selection in the context of face recognition to build a more privacy-friendly alternative to large scale face recognition. Instead of proactively trying to recognize individuals, we follow the presumption of innocence principle and do not indiscriminately perform recognition of people as they go about their daily business. Instead, our system uses key frame extraction to capture a high quality snapshot of every person passing by the camera. These snapshots are encrypted and stored locally on the camera or securely transmitted to a cloud back-end for storage. In a normal situation this data is then automatically deleted after a well defined time period as defined in article 17 of the GDPR (``Right to be forgotten''). In this case, the identity of the people will never be known and no human operator will have access to the unencrypted pictures. 
The images can only be decrypted after legal authorization in case of a criminal investigation or another situation where access to the identities present in a certain location at a certain time is warranted. The high quality images can then be used as input for a face recognition system or to aid the investigation process. Since only high quality crops are stored, the storage overhead is much lower than in a system where the full video is stored. This system is summarized in Figure \\ref{fig:overview} using a video from the ChokePoint dataset \\cite{wong_cvprw_2011}.\n\\\\\n\\newline\nIt is important to note that we need to store at least one crop for each person visible in the video. It is not enough to use a fixed threshold to decide whether a frame should be stored or not, instead we have to store the best frame for each individual even if this best frame still has a relatively low quality compared to images of other individuals (for example because the person never perfectly faces the camera). As a generalization we could also decide to store a short video clip of a few seconds before and after the best frame has been captured.\n\\\\\n\\newline\nAn obvious disadvantage of our approach is that it is not possible to proactively recognize people for example to detect wanted individuals. On the other hand it does support the criminal investigation after the fact. Our system is therefore complementary and more suited to low risk areas where a full blown face recognition system would violate the proportionality principle. \n\n\n\\section{Face image quality assessment}\n\\label{section:approach}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/FQA_overview.png}\n \\caption{Overview of our face quality-aware frame selection system.}\n \\label{fig:fqa_overview}\n\\end{figure}\nThe system proposed in the previous section relies on a Face Quality Assessment block to decide which crops to store. 
Any FQA method can be used, but in this section we introduce a novel technique based on a variational autoencoder. Compared to other FQA methods, this has the benefit of being completely unsupervised. We do not assume access to a face recognition system or to the identities of the people in the dataset. Our method also generalizes well to other people outside of the original training dataset.\n\\\\\n\\newline\n An overview of the proposed system is depicted in figure \\ref{fig:fqa_overview}. The first step is to detect all faces in each frame and track them across subsequent frames. We use the MTCNN model \\cite{zhang2016joint} to detect faces in a still video frame. The output is a list of bounding boxes around the detected faces. To track a subject across a video, we simply calculate the euclidean distance between the bounding boxes of subsequent frames. Bounding boxes that are close to each other are considered to correspond to the same subject. To evaluate the quality of a face crop, we calculate the reconstruction probability of a variational autoencoder (VAE) trained on a dataset of high quality images. The VAE reconstruction probability is commonly used as an anomaly detection metric \\cite{an2015variational} to determine how different an input is from the data seen during training. By training the VAE on a publicly available dataset of high quality face images, we reformulate the FQA task as an anomaly detection task (i.e. how different is this face crop from the high quality face crops seen during training?). The next paragraph explains the VAE and reconstruction probability in more detail.\n\\\\\n\\newline\nA variational autoencoder (VAE) \\cite{kingma2013auto} is a probabilistic variant of the standard autoencoder (AE). The encoder and decoder are modeled by probabilistic distributions rather than deterministic functions. 
The encoder $f_{\\phi}(x)$ models the posterior $q_{\\phi}(z | x)$ of the latent variable $z$ and a decoder $f_{\\theta}(z)$ models the likelihood $p_{\\theta}(x | z)$ of the data $x$ given the latent variable $z$. The prior distribution of the latent variable $p_{\\theta}(z)$ is chosen as a Gaussian distribution $\\mathcal{N}(0, I)$. The posterior $q_{\\phi}(z | x)$ and likelihood $p_{\\theta}(x | z)$ are isotropic multivariate normal distributions $\\mathcal{N}(\\mu_{z}, \\sigma_{z})$ and $\\mathcal{N}(\\mu_{x}, \\sigma_{x})$ respectively. Figure \\ref{fig:vae} shows the process of a forward pass of an image $x$ through the VAE; the arrows represent a sampling process. To train a VAE using backpropagation, every operation should be differentiable, which is not the case for the sampling operations: $z \\sim \\mathcal{N}(\\mu_{z}, \\sigma_{z})$ and $\\hat{x} \\sim \\mathcal{N}(\\mu_{x}, \\sigma_{x})$. Applying the re-parameterization trick fixes this problem. A dedicated random variable $\\epsilon \\sim \\mathcal{N}(0, 1)$ is sampled such that the sampling operations can be rewritten as: $z = \\mu_{z} + \\epsilon \\cdot \\sigma_z$ and $\\hat{x} = \\mu_{x} + \\epsilon \\cdot \\sigma_x$. The VAE training objective is written as the expected log likelihood minus the KL divergence between the posterior and the prior as described in equation \\ref{eq:train_obj}. \n\\begin{equation}\n \\mathcal{L}(x) = E_{q_{\\phi}(z|x)}\\left[\\log p_{\\theta}(x|z)\\right] - KL(q_{\\phi}(z|x)\\,\\|\\,p_{\\theta}(z))\n \\label{eq:train_obj}\n\\end{equation}\nThe first term is the reconstruction term and forces a good reconstruction $\\hat{x}$ of the input data $x$. The KL term regularizes the distribution of the latent space by forcing it to be Gaussian. By training a generative model, like a VAE, the model learns to approximate the training data distribution. 
When a large reconstruction error is observed, this is an indication that the data is not sampled from the data distribution the VAE was trained on.\n\\\\\n\\newline\nThe reconstruction probability is a generalization of the reconstruction error by taking the variability of the latent space and the reconstruction into account \\cite{an2015variational}. First, an image $x$ is fed to the encoder which generates the mean vector $\\mu_z$ and standard deviation vector $\\sigma_z$. Next, $L$ samples $\\{z^1, z^2,...,z^L\\}$ are drawn from the latent distribution $\\mathcal{N}(\\mu_z,\\sigma_z)$. All samples $z^l$ are fed into the decoder to get the distribution of the reconstruction of $x$ which is described by the mean $\\mu^{l}_{\\hat{x}}$ and the standard deviation $\\sigma^{l}_{\\hat{x}}$. The reconstruction probability is the probability of $x$ averaged over $L$ samples as described by equation \\ref{eq:r_prop}.\n\\begin{equation}\n \\mathrm{RP}(x) = \\frac{1}{L} \\sum^{L}_{l=1} \\mathcal{N}(x | \\mu^{l}_{\\hat{x}}, \\sigma^{l}_{\\hat{x}})\n \\label{eq:r_prop}\n\\end{equation}\n\\begin{figure}\n \\centering\n \\includegraphics[width=225pt]{figures\/VAE.png}\n \\caption{Variational autoencoder with encoder $f_{\\phi}(x)$ and decoder $f_{\\theta}(z)$; each arrow represents a sampling process.}\n \\label{fig:vae}\n\\end{figure}\n\\newline\nThe reconstruction probability was originally developed as an anomaly score by \\cite{an2015variational}. When a VAE is trained solely on samples of ``normal'' data, the latent distribution learns to represent these samples in a low dimensional space. Accordingly, samples from ``normal'' data result in a high reconstruction probability while anomalies result in low reconstruction probabilities. We define the biometric quality of a face image as the reconstruction probability calculated by a VAE trained on high quality face images. Correspondingly, a high reconstruction probability is expected for high quality face images. 
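As a minimal NumPy sketch of the Monte Carlo estimate behind the reconstruction probability, the following function averages the Gaussian likelihood of an input over $L$ latent samples; the `encode`\/`decode` callables, function name, and interfaces are our own illustration and not the paper's implementation:

```python
import numpy as np

def reconstruction_probability(x, encode, decode, L=10, rng=None):
    """Monte Carlo estimate of the VAE reconstruction probability:
    the likelihood of x averaged over L latent samples.

    encode(x) -> (mu_z, sigma_z); decode(z) -> (mu_x, sigma_x).
    """
    rng = np.random.default_rng() if rng is None else rng
    mu_z, sigma_z = encode(x)
    total = 0.0
    for _ in range(L):
        # re-parameterized draw from the latent posterior N(mu_z, sigma_z)
        z = mu_z + sigma_z * rng.standard_normal(mu_z.shape)
        mu_x, sigma_x = decode(z)
        # joint Gaussian density of x under N(mu_x, sigma_x), in log domain
        log_p = -0.5 * np.sum(
            np.log(2.0 * np.pi * sigma_x**2) + ((x - mu_x) / sigma_x) ** 2
        )
        total += np.exp(log_p)
    return total / L
```

Inputs close to the decoder's reconstruction receive a high score, while inputs far from the training distribution receive a low one, matching the anomaly-score interpretation above.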
Note that there is no explicit definition of face image quality and the quality score is independent of any face recognition model. The definition of a high quality face image is derived directly from the training data. \n\\\\\n\\newline\nThe encoder $f_{\\phi}(x)$ consists of 5 consecutive blocks of a convolution layer, batch normalization and a leaky ReLU activation function, followed by two fully connected layers. The outputs of the encoder are the parameters defining $q_{\\phi}(z|x)$. The decoder $f_{\\theta}(z)$ consists of 5 blocks of a transposed convolution layer, batch normalization and a leaky ReLU activation function, again followed by two fully connected layers. The outputs of the decoder are the parameters defining $p_{\\theta}(x|z)$. To calculate the reconstruction probability, $L$ is set to 10. The CelebA dataset \\cite{liu2015faceattributes} consisting of 202,599 face images serves as training data. The Adam optimization algorithm \\cite{kingma2014adam} was applied with a learning rate of 0.005 and a batch size of 144. The VAE was trained for 10 epochs. \nEach image is cropped by MTCNN \\cite{zhang2016joint}, resized to $64\\times 64$ pixels and converted to grayscale.\n\n\\section{Experimental setup}\n\\label{section:experimental_setup}\nIn this section, we isolate the FQA block for evaluation. According to the National Institute of Standards and Technology (NIST), the default way to quantitatively evaluate an FQA system is analyzing the error vs. reject curve (ERC) \\cite{grother2007performance, grother2020ongoing}. As defined in section \\ref{section:related}, FQA indicates the suitability of an image for recognition. The ERC measures to what extent the rejection of low quality samples increases the verification performance as measured by the false non-match rate (FNMR). The FNMR is the rate at which a biometric matcher miscategorizes two signals from the same individual as being from different individuals \\cite{Schuckers2010}. 
Face verification consists of calculating a comparison score of two images and comparing this score with some threshold. The comparison score is defined as the Euclidean distance between the FaceNet \\cite{schroff2015facenet} embeddings of the two images. To avoid a low quality sample affecting the verification performance of a high quality sample, one high quality reference image for every subject is required. For this, an ICAO compliant image is typically used \\cite{hernandez2020biometric}. We used the BioLab framework \\cite{ferrara2012face} to calculate an average ICAO compliance score for all images. For every subject, the image with the highest ICAO score is selected as a reference image. Note that it is not possible to use the BioLab framework as a face quality assessment method directly because it cannot assess all images accurately and it is unable to operate in real-time. Now, assume a set of genuine image pairs $(x_i^{(1)}, x_i^{(2)})$ (i.e. two images of the same person) of length $N$. Every image pair $(x_i^{(1)}, x_i^{(2)})$ yields a distance $d_i$ (i.e. a comparison score). To determine if two images match, the distance between the two images is compared with a threshold $d_t$. Using a quality predictor function $F$ (i.e. an FQA method), a quality value $q_i$ is calculated for each image pair.
Since $x_i^{(1)}$ is always a high quality image from the reference database, the quality $q_i$ of image pair $(x_i^{(1)}, x_i^{(2)})$ can be written as:\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=225pt]{figures\/example.png}\n \\caption{Sample images from the VggFace2 dataset ranked by different quality metrics.}\n \\label{fig:example_images}\n\\end{figure}\n\\begin{equation}\n q_i = q_i^{(2)} = F(x_i^{(2)}).\n\\end{equation}\nNow consider $R$ as a set of low quality entities composed of the samples that correspond to a predicted quality value below some threshold:\n\\begin{equation}\n R(r) = \\{i : q_i < F^{-1}(r),\\; 1 \\leq i \\leq N \\}.\n\\end{equation}\n$F^{-1}$ is the inverse of the empirical cumulative distribution function of the predicted quality scores. The parameter $r$ is the fraction of images to discard, such that $F^{-1}(r)$ equals the quality threshold that corresponds to rejecting a fraction $r$ of all images. Then the FNMR can be written as:\n\\begin{equation}\n \\mathrm{FNMR} = \\frac{|d_i : d_i \\geq d_t, i \\notin R(r)|}{|d_i : d_i \\geq -\\infty, i \\notin R(r)|}\n\\end{equation}\nThe value of $r$ is manipulated to calculate the FNMR for different fractions of rejected images. The value of $d_t$ is fixed and computed using the inverse of the empirical cumulative distribution function of the distances between reference and probe images $M^{-1}$:\n\\begin{equation}\n d_t = M^{-1}(1 - f).\n\\end{equation}\nPractically, $f$ is chosen to give some reasonable FNMR. As suggested by \\cite{frontex}, a maximum FNMR of 0.05 is maintained.\n\\section{Results}\n\\label{section:results}\n\\subsection{VggFace2}\nThe VggFace2 dataset \\cite{Cao18} is designed for training face recognition models and consists of more than 3.3 million face images of more than 9000 identities.
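The ERC machinery above amounts to a few lines of code. A minimal sketch (assuming numpy; `distances` and `qualities` hold the $d_i$ and $q_i$ of the genuine pairs, and the empirical quantile functions play the roles of $F^{-1}$ and $M^{-1}$):

```python
import numpy as np

def erc(distances, qualities, f=0.05, fractions=(0.0, 0.05, 0.1, 0.2)):
    """FNMR after rejecting the lowest-quality fraction r of genuine pairs."""
    d = np.asarray(distances, dtype=float)
    q = np.asarray(qualities, dtype=float)
    d_t = np.quantile(d, 1.0 - f)            # fixed threshold d_t = M^{-1}(1 - f)
    curve = []
    for r in fractions:
        keep = q >= np.quantile(q, r)        # complement of the reject set R(r)
        fnmr = float(np.mean(d[keep] >= d_t))
        curve.append((r, fnmr))
    return curve
```

For a perfect quality measure the FNMR drops to zero once the fraction of rejected images reaches the initial FNMR, which is exactly the red ``PERFECT'' line in the ERC plots.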
In the following experiments, our approach (VAE) is compared to FaceQNet (FQN) \\cite{hernandez2020biometric}, a method based on stochastic embedding robustness (SER) \\cite{terhorst2020ser}, and a general image quality assessment (IQA) system \\cite{talebi2018nima}. FaceQNet is a fine-tuned resnet-50 network \\cite{he2016deep}, trained on large amounts of face images for a face recognition task. The classification layer is then replaced by two layers designed for quality regression. The ground truth quality score is defined as the Euclidean distance between the feature embeddings of the probe image and an ICAO compliant image of the same subject. The face image quality assessment method based on stochastic embedding robustness (SER) calculates a quality score by measuring the variations of the embeddings generated by random subnetworks of a resnet-101 \\cite{he2016deep} model trained for facial recognition. The general image quality assessment (IQA) system \\cite{talebi2018nima} does not take face features into account and predicts the general image quality. For all conducted experiments, images were cropped by the MTCNN face detector \\cite{zhang2016joint}.\n\\\\\n\\newline\nFigure \\ref{fig:example_images} shows five images from the VggFace2 dataset \\cite{Cao18} ranked by different quality metrics. This allows a first qualitative evaluation of the different metrics. As shown in the figure, all metrics agree on assigning the highest quality to the first image. All FQA metrics assign a low quality value to the third image because the person looks down; the general IQA method does not take this aspect into account and assigns a higher quality value. For the same reason, the IQA method assigns a low quality value to the fifth image in contrast to all FQA algorithms. On the fourth image, our method agrees with the IQA method and selects it as the worst quality image as opposed to the other FQA metrics.
This suggests that our method attaches more importance to blurriness than the other FQA algorithms. It is remarkable that the second image is ranked as second worst by FQN since the person looks straight into the camera.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=225pt]{figures\/FNMR_01.png}\n \\caption{ERC with an initial FNMR of 0.01.}\n \\label{fig:vggface2_fnmr_01}\n\\end{figure}\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=225pt]{figures\/FNMR_001.png}\n \\caption{ERC with an initial FNMR of 0.001.}\n \\label{fig:vggface2_fnmr_001}\n\\end{figure}\n\\\\\n\\newline\nIn a second experiment, we evaluate our proposed FQA algorithm on still face images by analyzing the ERC plots as explained in section \\ref{section:experimental_setup}. For 103,008 images of 300 subjects from the VggFace2 dataset \\cite{Cao18}, a high quality reference image is selected using the ICAO compliance scores. The ERC plots in figures \\ref{fig:vggface2_fnmr_01} and \\ref{fig:vggface2_fnmr_001} display the FNMR for different fractions of rejected images. The red line with the label ``PERFECT'' represents a quality measure which correlates perfectly with the distance scores. When an initial FNMR of 0.01 is set, in an ideal scenario, the FNMR will be zero after rejecting 1\\% of all images. The closer an ERC is to the red line, the better the performance of the used FQA algorithm. For an initial FNMR of 0.01, our approach clearly outperforms FaceQNet, SER and the general image quality assessment program. We hypothesize that SER would perform better when the same type of embeddings were used for verification as for quality estimation. In the conducted experiments SER uses ArcFace \\cite{deng2018arcface} embeddings to estimate face image quality while the FNMR is calculated using FaceNet embeddings. For an initial FNMR of 0.001, the difference with the other approaches is smaller.
It is important to note that our model is considerably smaller than FaceQNet and the resnet-101 model \\cite{he2016deep} used by SER.\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=225pt]{figures\/cp_map_0.jpg}\n \\caption{Consecutive face crops from one tracked identity in the ChokePoint dataset. The crop with the green border corresponds to the highest quality calculated by our FQA algorithm.}\n \\label{fig:cp_map_0}\n\\end{figure}\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=225pt]{figures\/cp_plot_0.png}\n \\caption{Logarithm of the reconstruction probability (i.e. face quality) for consecutive face crops.}\n \\label{fig:cp_plot_0}\n\\end{figure}\nFaceQNet comprises 7 times more parameters than our VAE, and resnet-101 even 14 times more. Additionally, our method is trained in a completely unsupervised manner without the need for ground truth quality values while FaceQNet relies on distance scores as ground truth quality values. The ground truth generation process used by FaceQNet also indicates the dependency on one or more face recognition models. This dependency is even more prominent for SER since a specific face recognition network is used for generating the quality predictions.\n\\subsection{ChokePoint}\nIn a third experiment, we focus on the ChokePoint dataset \\cite{wong_cvprw_2011}. This dataset is designed for conducting experiments on face identification and verification under real-world surveillance conditions. The dataset consists of 48 videos comprising 64,204 face images. Both crowded and single-identity videos are included. We now evaluate our system qualitatively on selecting one high quality face crop of each identity in a video stream. Figure \\ref{fig:cp_map_0} shows consecutive face crops of an example video. The crop outlined in green is the frame that corresponds to the highest quality value calculated by our FQA algorithm.
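The per-track selection itself is deliberately simple; a minimal sketch (names hypothetical) that keeps only the best crop seen so far for one tracked identity, so no other frames ever need to be stored:

```python
class BestCropSelector:
    """Keeps only the highest-quality face crop seen so far for one identity.

    `quality` would be the log reconstruction probability of the crop
    under the trained VAE; only the stored crop leaves the device.
    """

    def __init__(self):
        self.best_quality = float("-inf")
        self.best_crop = None

    def update(self, crop, quality):
        # Replace the stored crop only when a better one arrives.
        if quality > self.best_quality:
            self.best_quality = quality
            self.best_crop = crop
        return self.best_crop
```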
Figure \\ref{fig:cp_plot_0} shows how the quality score changes over time as the subject moves through the scene. We define the quality score as the logarithm of the reconstruction probability. We can see that initially the quality score decreases as the person is moving through a darker area looking down. The shadows and the angle of the face make these crops less useful for face recognition. As the person moves closer to the camera, the brightness increases and the subject becomes clearly visible. This is also reflected in an increasing quality score. The highest score is obtained when the person is close to the camera and is looking in the general direction of the camera. As the person turns away, the score again decreases. This qualitative example shows that our model is indeed able to assign understandable and meaningful scores to each frame. We made videos of this and other examples publicly available\\footnote{\\url{https:\/\/drive.google.com\/drive\/folders\/1GRlFRSxHRfBnfTpI5DG2v2rN3nTCg5Y0?usp=sharing}}. \n\\subsection{Bicycle parking}\nFinally, we also validated our approach on video data from security cameras in a bicycle parking lot. This allows us to investigate how well the model generalizes to data collected in the real world. Figure \\ref{fig:bicycle_park} shows three example frames with their corresponding frame crops and quality scores. Even though these images are very different from the images the VAE was originally trained on, we can see that the model generalizes well and is able to assign useful scores to each crop.\n\\begin{figure}\n \\centering\n \\includegraphics[width=225pt]{figures\/bicycle_park.png}\n \\caption{Three frames from the footage of a security camera in a bicycle parking lot.
The corresponding frame crops and quality scores are depicted below each frame.}\n \\label{fig:bicycle_park}\n\\end{figure}\n\n\\section{Conclusion and future work}\n\\label{section:conclusion}\nIn this study, a novel face image quality assessment method is proposed based on a variational autoencoder's reconstruction probability. To our knowledge, this is the first time a generative model such as a VAE is used to tackle the problem of face image quality assessment. We demonstrate, by quantitative and qualitative results, that our method can be used as a biometric quality predictor. Unlike other data driven approaches, no facial recognition model is used for training and no explicit definition of face quality is given. Our FQA algorithm is used as a building block in a privacy-friendly alternative to large scale facial recognition. Instead of identifying all detected faces in a video stream, our system saves one high quality face crop without revealing the person's identity. This face crop is encrypted and access is only granted after legal authorization. In such a way, the system still supports criminal investigations while not violating the proportionality principle.\n\\\\\n\\newline\nIn future work, we will further optimize the VAE architecture, keeping the constraints on model size and computational complexity in mind, as the final goal is to deploy the model on a stand-alone edge device. It would be interesting to investigate different hardware platforms such as FPGAs that allow the model to process data in real-time with low energy consumption, making it possible to embed our system in low-cost surveillance cameras. Moreover, our method should be evaluated on other datasets and in combination with alternative feature extractors.\n\n\\section{Acknowledgments}\nWe thank Klaas Bombeke (MICT UGent) for providing us with the dataset used in the experiment of Figure~9, and Stijn Rammeloo from Barco for the feedback on the initial manuscript.
This research was funded by the imec.ICON SenseCity project and the Flemish Government (AI Research Program).\n\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaloa b/data_all_eng_slimpj/shuffled/split2/finalzzaloa new file mode 100644 index 0000000000000000000000000000000000000000..7063e2de55998ad9ff389466596ad48955e2a4cb --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaloa @@ -0,0 +1,5 @@ +{"text":"\\subsection{Gauge fields}\n\nWe will consider $SU(N)$\ngauge fields \nin a four dimensional torus of size $L^4$. The twisted boundary\nconditions are implemented by requiring the field to gauge-transform\nunder the displacement of a period\n\\begin{equation}\n A_\\mu(x+L\\hat \\nu) = \\Omega_\\nu(x)A_\\mu(x)\\Omega_\\nu^+(x) + \n \\Omega_\\nu(x)\\partial_\\mu\\Omega_\\nu^+(x) \\,,\n\\end{equation}\nwhere $\\Omega_\\mu(x)$ are known as the twist matrices. The uniqueness\nof $A_\\mu(x+L\\hat\\mu+L\\hat\\nu)$ requires that the twist matrices have\nto obey the relation \n\\begin{equation}\n \\label{eq:consistency}\n \\Omega_\\mu(x+L\\hat \\nu)\\Omega_\\nu(x) = e^{2\\pi\\imath n_{\\mu\\nu}\/N}\n \\Omega_\\nu(x+L\\hat \\mu)\\Omega_\\mu(x)\\,,\n\\end{equation}\nwhere $n_{\\mu\\nu}$ is an anti-symmetric tensor of integers modulo $N$\ncalled the twist tensor. It is easy to check that under a gauge\ntransformation, $\\Lambda(x)$, the twist matrices change according to\n\\begin{equation}\n \\Omega_\\nu(x) \\longrightarrow \\Omega'_\\nu(x) = \\Lambda(x+L\\hat \\nu)\n \\Omega_\\nu(x)\\Lambda^+(x)\\,,\n\\end{equation}\nbut the twist tensor $n_{\\mu\\nu}$ remains unchanged. Therefore all the\nphysics of the twisted boundary conditions is contained in the twist\ntensor, and the particular choice of twist matrices is \nirrelevant. One can restrict the gauge transformations to those that\nleave the twist matrices unchanged. 
It is easy to check that the\nnecessary and sufficient condition is that the gauge transformations\nobey the periodicity condition\n\\begin{equation}\n \\label{eq:gauge}\n \\Lambda(x+L\\hat\\nu) = \\Omega_\\nu(x)\\Lambda(x)\\Omega_\\nu^+(x)\\,.\n\\end{equation}\n\nThe reader interested in knowing more about the twisted boundary\nconditions is invited to consult the review~\\cite{ga:torus}. Here we\nwill use a particular setup: we choose to twist only one plane\n(the $x_1-x_2$ plane) by choosing $n_{12} = -n_{21} = 1$, while the\nrest of the components of the twist tensor will be zero. This means\nthat our gauge connections will still be periodic in the $x_3$ and\n$x_4$ directions. As we will\nsee, this choice guarantees that the action has a unique minimum\n(modulo gauge transformations), and therefore it turns out to be a\nconvenient choice for perturbative studies. This is the reason why the very\nsame choice has been made before to define the Twisted Polyakov Loop\nrunning coupling scheme~\\cite{deDivitiis:1994yp}, or for other\nperturbative studies~\\cite{Luscher:1985wf}. We will closely follow the\nnotation and steps \npresented in~\\cite{Perez:2013dra}, a reference that the reader\ninterested in more details should consult.\n\nA convenient implementation of twisted boundary conditions consists in\nusing space-time independent twist matrices. In particular for the\nperiodic directions we set the twist matrices to one\n\\begin{subequations}\n\\begin{eqnarray}\n \\Omega_{1,2}(x) &=& \\Omega_{1,2} \\\\\n \\Omega_{3,4}(x) &=& 1\\,.\n\\end{eqnarray}\n\\end{subequations}\n\nWe will use Latin indices ($i,j,\\dots=1,2$) to run over the directions in the\ntwisted plane, while Greek indices ($\\mu,\\nu,\\dots=1,\\dots,4$) will\nrun over the four space-time directions.
The consistency relation\nEq.~(\\ref{eq:consistency}) implies the \nfollowing condition for the twist matrices\n\\begin{equation}\n \\Omega_1\\Omega_2 = e^{2\\pi\\imath \/N}\n \\Omega_2\\Omega_1.\n\\end{equation}\nNotice that the boundary conditions for the gauge\nfield with this choice of the twist matrices are\n\\begin{equation}\n \\label{eq:twalg}\n A_\\mu(x+L\\hat k) = \\Omega_kA_\\mu(x)\\Omega_k^+\\,,\n\\end{equation}\nand $A_\\mu=0$ is a valid connection. In fact, we\nwill show that it is the only connection compatible with the boundary\nconditions that does not depend on $x$, and therefore it is the\nunique minimum of the action modulo gauge transformations. \n\nEq.~(\\ref{eq:twalg}) defines a generalization of the Dirac algebra. It\ncan be shown~\\cite{ga:torus} that there is a\nunique solution for the matrices $\\Omega_i$ modulo similarity\ntransformations. Introducing the \\emph{color momentum}, $\\tilde p_i =\n\\frac{2\\pi\\tilde n_i}{NL}$ with $\\tilde n_i=0,\\dots,N-1$,\nit is easy to check that the $N^2$ matrices\n\\begin{equation}\n \\label{eq:defG}\n \\Gamma(\\tilde p) = \\frac{\\imath}{\\sqrt{2N}}e^{\\imath \\alpha(\\tilde\n p)} \\Omega_1^{-\\tilde \n n_2}\\Omega_2^{\\tilde n_1}\\,,\n\\end{equation}\nwhere $\\alpha(\\tilde p)$ are arbitrary phases, are linearly\nindependent and obey the relation \n\\begin{equation}\n \\Omega_i \\Gamma(\\tilde p) \\Omega_i^+ = e^{\\imath L\\tilde p_i}\n \\Gamma(\\tilde p)\\,.\n\\end{equation}\nMoreover, all but\n$\\Gamma(\\tilde p=0)$ are traceless, and therefore they can be used as\na basis of the Lie algebra of the gauge group. This means that any\ngauge connection can be expanded as\n\\begin{equation}\n A_\\mu^a(x)T^a = \\sum_{\\tilde p}'\\hat A_\\mu(x,\\tilde p)e^{\\imath\n \\tilde px}\\Gamma(\\tilde p).\n\\end{equation}\nThe prime over the sum means that the term $\\tilde p_i=0$ is\nabsent in the sum, as required for an $SU(N)$ gauge group.
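For concreteness, the twist algebra and the tracelessness of the $\Gamma(\tilde p)$ basis can be checked numerically with the standard clock and shift matrices (a sketch assuming numpy; the arbitrary phases $\alpha(\tilde p)$ are set to zero here):

```python
import numpy as np

# Clock/shift solution of Omega_1 Omega_2 = exp(2 pi i/N) Omega_2 Omega_1,
# illustrated for SU(3).
N = 3
omega = np.exp(2j * np.pi / N)

Omega1 = np.diag(omega ** np.arange(N))      # clock matrix
Omega2 = np.roll(np.eye(N), -1, axis=1)      # cyclic shift matrix

# Consistency relation for twist tensor n_12 = 1:
assert np.allclose(Omega1 @ Omega2, omega * (Omega2 @ Omega1))

# All Gamma(p~) with (n1, n2) != (0, 0) are traceless, so they span su(N).
for n1 in range(N):
    for n2 in range(N):
        Gamma = (1j / np.sqrt(2 * N)) \
            * np.linalg.matrix_power(Omega1, (-n2) % N) \
            @ np.linalg.matrix_power(Omega2, n1)
        if (n1, n2) != (0, 0):
            assert abs(np.trace(Gamma)) < 1e-12
```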
Notice that\nthe coefficients $\\hat A_\\mu(x,\\tilde p)$ are functions (not \nmatrices) periodic in $x$. Therefore one can perform the usual Fourier\nexpansion and obtain\n\\begin{equation}\n A_\\mu^a(x)T^a = \\frac{1}{L^4} \\sum_{p,\\tilde p}'\\tilde A_\\mu(p,\\tilde\n p)e^{\\imath \n (p+\\tilde p)x}\\Gamma(\\tilde p)\\,,\n\\end{equation}\nwith the usual spatial momentum \n\\begin{equation}\n p_\\mu = \\frac{2\\pi n_\\mu}{L}\\quad (n_\\mu\\in \\mathbb Z)\\,.\n\\end{equation}\nFinally, we define the \\emph{total} momentum as the sum of the color\nand space momentum $P_i = p_i+\\tilde p_i$, $P_{3,4} = p_{3,4}$. Noting\nthat any $P_\\mu$ can be \nuniquely decomposed into the space momentum and color momentum degrees\nof freedom, we can safely write $\\Gamma(P)$ instead of $\\Gamma(\\tilde\np)$. Our main conclusion is that any gauge connection compatible\nwith our choice of boundary conditions can be written as \n\\begin{equation}\n \\label{eq:gaugetw}\n A_\\mu^a(x)T^a = \\frac{1}{L^4} \\sum_{P}'\\tilde A_\\mu(P)\n e^{\\imath Px}\\Gamma(P)\\,.\n\\end{equation}\nIn particular the only connection that does not depend on $x$ is given\nby $\\tilde A_\\mu(P) = 0$. In general the matrices $\\Gamma(P)$ are not\nanti-Hermitian, but one can choose the phases \n$\\alpha(P)$ of equation~(\\ref{eq:defG}) so that this condition is\nenforced\n\\begin{equation}\n \\alpha(P) = \\frac{\\theta}{2}P_1P_2\\qquad \n \\left(\n \\theta = \\frac{NL^2}{2\\pi}\n \\right)\\,.\n\\end{equation}\nIn this case, the Fourier coefficients $\\tilde A_\\mu(P)$ satisfy the\nusual relation\n\\begin{equation}\n \\tilde A_\\mu(P)^* = \\tilde A_\\mu(-P)\\,,\n\\end{equation}\nand the $\\Gamma$ matrices are normalized according to\n\\begin{equation}\n {\\rm Tr}\\left\\{ \\Gamma(P)\\Gamma(-P)\\right\\} = -\\frac{1}{2}\\,.\n\\end{equation}\n\nWe finally note that a similar expansion is possible on the lattice,\nwith the only difference that the space momentum will be restricted\nto the Brillouin zone.
\n\n\\subsection{Matter fields}\n\\label{sc:fermions}\n\nThe inclusion of matter fields interacting with gauge fields with\ntwisted boundary conditions is not completely straightforward. To\nunderstand why, it is best to first consider how to include \nfermion fields in the fundamental representation. Since the twist\nmatrices tell us how fields change under translations, one naively\nexpects \n\\begin{equation}\n \\psi(x+L\\hat i) = \\Omega_i\\psi(x)\\,,\n\\end{equation}\nbut one can easily see that this choice is not consistent, \nsince the value of the field $\\psi(x+L\\hat i+L\\hat j)$ depends on the\norder in which we perform the translations due to the\nnon-commutativity of the twist matrices. This difficulty can be\navoided by introducing more fermions, or what is usually called a\n``smell'' degree of freedom~\\cite{Parisi:1984cy}. If\n$\\alpha,\\beta=1,\\dots,N_s$ are indices that run over the $N_s$ smells of\nfermions, and $a,b=1,\\dots,N$ run over the color degrees of freedom,\nthe boundary conditions of the fermions read\n\\begin{equation}\n \\psi^a_\\alpha(x+L\\hat i) =\n e^{\\imath\\theta_i}(\\Omega_i)_{ab}(\\Omega^*_i)_{\\alpha\\beta} \n \\psi^b_\\beta(x) \\,.\n\\end{equation}\nThis means that a fermion smell becomes a linear combination of the\ngauge transformed fermion smells under a translation. The phases $\\theta_i$\nare in principle arbitrary, but are introduced for \nconvenience to remove the zero-momentum modes of the Dirac\noperator. These phases have to be chosen\nsuch that they are not elements of the gauge group\n(i.e. $e^{\\imath\\theta} \\not\\in SU(N)$).\nThis choice of boundary conditions for the \nfermion fields is consistent, but it requires the number of\nsmells to be equal to the number of colors.
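The consistency of this choice is easy to verify numerically: with the extra smell index, the two translation operators commute because the phases $e^{\pm 2\pi\imath/N}$ picked up by the color and smell factors cancel. A sketch (assuming numpy, reusing the clock and shift matrices as $\Omega_1$ and $\Omega_2$ for $SU(3)$):

```python
import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)
Omega1 = np.diag(omega ** np.arange(N))      # clock
Omega2 = np.roll(np.eye(N), -1, axis=1)      # shift

# A translation acts on the color (x) smell indices as Omega_i (x) Omega_i^*.
T1 = np.kron(Omega1, Omega1.conj())
T2 = np.kron(Omega2, Omega2.conj())

# The twist phases from color and smell cancel, so translations commute
# and psi(x + L*i^ + L*j^) is well defined:
assert np.allclose(T1 @ T2, T2 @ T1)

# Without the smell factor the order of translations would matter:
assert not np.allclose(Omega1 @ Omega2, Omega2 @ Omega1)
```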
One can easily extend the\nconstruction to the case when the ratio $N_s\/N$ is an integer, but in\ngeneral one cannot have an arbitrary number of fermions in the\nfundamental representation.\n\nOn the other hand, fermions in the adjoint representation transform in\nthe same way as the gauge fields, and therefore any number of fermions\nwould be compatible with the twisted boundary conditions. \n\nRegardless of the representation but assuming that the matter fields\nare compatible with the twisted boundary conditions, $\\mathcal O(a)$\nimprovement for massless Wilson quarks is automatically\nsatisfied since fields live on a torus, and the boundary conditions do\nnot break chiral symmetry (see~\\cite{Sint:2010eh,Frezzotti:2003ni}). \n\n\n\n\\subsection{Cutoff effects in the twisted running coupling}\n\nThe comparison of the lattice and the continuum computations of\n$\\mathcal E(t)$ can give us an idea of the size of cutoff effects (to\nleading order in $g_0^2$) of the twisted gradient flow coupling. We\nare going to study in detail the case of lattice simulations using the\nWilson action, the Wilson flow, and the clover definition for the\nobservable. If we\ndefine \n\\begin{equation}\n \\hat{\\mathcal N}_T(c,a\/L) = \n \\frac{c^4}{128}\\sum_P' e^{-\\frac{c^2L^2}{4}\\hat P^2}\n \\frac{\\mathring P^2 C^2 - (\\mathring P_\\mu C_\\mu)^2}{\\hat P^2}\\,,\n\\end{equation}\nthe quantity\n\\begin{equation}\n Q(c, a\/L) = \\left|\\frac{\\hat{\\mathcal N}_T(c,a\/L) - \\mathcal N_T(c)}\n{{\\mathcal N}_T(c)}\\right|\\,,\n\\end{equation}\nquantifies to leading order the size of cutoff effects as a function of the\nlattice size and the scheme parameter $c$. A global picture of cutoff\neffects for the groups $SU(2)$ and $SU(3)$ \ncan be seen in figure~\\ref{fig:cut1}. \n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cutoff}\n \\caption{Cutoff effects to leading order of perturbation theory in\n the twisted gradient flow coupling.
As we \n see, for $c\\in[0.3,0.5]$ cutoff effects are below 7\\% for an\n $L\/a=8$ lattice.}\n \\label{fig:cut1}\n\\end{figure}\n\nThese figures may lead to the conclusion that a large value of $c$ is\noptimal. But from the point of view of lattice simulations, it is\nknown~\\cite{Fritzsch:2013je} that larger values of $c$ lead to larger\nstatistical errors when computing the coupling. For the typical lattice sizes ($L\/a\\sim 10-20$) that one\nuses in step scaling studies, the values $c\\in[0.3,0.5]$ seem reasonable. \n\n\\subsection{Improved coupling definition}\n\nIf one is computing $t^2\\langle E(t)\\rangle$ non-perturbatively via\nlattice simulations, and one is using the Wilson action, the Wilson\nflow and the clover discretization for the evaluation of the energy\ndensity, one can alternatively define the coupling via\n\\begin{equation}\n\\label{eq:latcou}\n g_T^2(L) = \\hat{\\mathcal N}_T^{-1}(c,a\/L)t^2\\langle E(t) \\rangle\n \\Big|_{t=c^2L^2\/8} \\,.\n\\end{equation}\nThis last coupling definition has the same properties, but one expects\nan improved scaling towards the continuum limit, since the leading\norder cutoff effects $\\propto g_0^2$ have been removed thanks to\nthe lattice factor $\\hat{\\mathcal N}_T(c,a\/L)$.\n\nIn a similar way, any choice of discretizations that define a coupling\ncan be normalized with a factor computed on the lattice\n(cf. section~\\ref{sc:disc}), leading to an improved scaling towards\nthe continuum. \n\n\n\n\n\\subsection{Perturbative behavior of the gradient flow in a twisted box: continuum}\n\n\\subsubsection{Generalities and gauge fixing}\n\nWe are interested in the perturbative expression for $\\langle E(t)\n\\rangle$, and in order to avoid some difficulties in the definition of\npropagators, it turns out to be convenient to fix the gauge of the\nflow field $B_\\mu(x,t)$.
This can be achieved by studying the modified\nflow equation\n\\begin{equation}\n \\frac{{\\rm d} B_\\mu^{(\\alpha)}(x,t)}{{\\rm d}t} = D_\\nu^{(\\alpha)}\n G_{\\nu\\mu}^{(\\alpha)}(x,t) + \n \\alpha D_\\mu^{(\\alpha)}\\partial_\\nu B_\\nu^{(\\alpha)}(x,t) \\,.\n\\end{equation}\nThe superscript ${(\\alpha)}$ indicates that covariant derivatives and the field\nstrength are built from the modified flow field $B_\\mu^{(\\alpha)}(x,t)$,\nsolution of the previous equation. A solution of this modified flow\nequation $B_\\mu^{(\\alpha)}(x,t)$ can be transformed into a solution of the\noriginal flow equation~(\\ref{eq:flow}) by a time dependent gauge\ntransformation~\\cite{Luscher:2011bx}\n\\begin{equation}\n B_\\mu = \\Lambda B_\\mu^{(\\alpha)}\\Lambda^{-1} + \n \\Lambda \\partial_\\mu\n \\Lambda^{-1} \\,,\n\\end{equation}\nwhere \n\\begin{equation}\n \\frac{{\\rm d} \\Lambda}{{\\rm d}t} =\n \\alpha \\Lambda \\partial_\\mu B_\\mu \\,;\\quad\n \\Lambda\\big|_{t=0} = 1\\,.\n\\end{equation}\n\nTherefore, gauge invariant quantities are independent of $\\alpha$. Note\nthat the previously defined gauge transformation \n$\\Lambda(x)$ belongs to the restricted set of gauge transformations\nthat leave the twist matrices invariant (see\nequation~(\\ref{eq:gauge})), and the boundary conditions of\n$B_\\mu^{(\\alpha)}$ are also independent of $\\alpha$. \n\n\\subsubsection{Flow field and energy density to leading order}\n\nThe particular choice $\\alpha=1$ simplifies the computations, and we\nwill use it for the rest of this section.
The modified flow equation\nreads in this case\n\\begin{equation}\n \\label{eq:flowmd}\n \\frac{{\\rm d} B_\\mu}{{\\rm d}t} = D_\\nu G_{\\nu\\mu} +\n D_\\mu\\partial_\\nu B_\\nu \\,.\n\\end{equation}\nIn perturbation theory one re-scales the gauge potential with the bare\ncoupling $A_\\mu \\rightarrow g_0A_\\mu$, and the flow field has an\nasymptotic expansion in the bare coupling\n\\begin{equation}\n \\label{eq:flowfg0}\n B_\\mu(x, t) = \\sum_{n=1}^{\\infty} B_{\\mu,n}(x, t) g_0^n \\,.\n\\end{equation}\nTo leading order our flow equation~(\\ref{eq:flowmd}) is just the heat\nequation\n\\begin{eqnarray}\n \\label{eq:flowlo}\n \\frac{{\\rm d} B_{\\mu,1}(x,t)}{{\\rm d}t} &=& \\partial_\\nu^2\n B_{\\mu,1}(x,t) \\\\\n B_{\\mu,1}(x,0) &=& A_{\\mu}(x)\\, .\n\\end{eqnarray}\nExpanding $B_{\\mu,1}(x,t)$ in our preferred basis~(\\ref{eq:gaugetw}), one\ncan easily solve~(\\ref{eq:flowlo}) and obtain\n\\begin{equation}\n B_{\\mu,1}(x,t) = \\frac{1}{L^4}\n \\sum_P' e^{-P^2t} \\tilde A_\\mu(P) e^{\\imath Px}\n \\Gamma(P)\\,.\n\\end{equation}\n\nFinally our observable of interest also has an expansion in powers of\n$g_0$\n\\begin{equation}\n \\langle E(t)\\rangle = -\\frac{1}{2}\\langle\n {\\rm Tr}\\{G_{\\mu\\nu}(x, t)G_{\\mu\\nu}(x,t)\\}\\rangle = \\mathcal E(t) + \\mathcal\n O(g_0^4)\\,.\n\\end{equation}\nOne can easily obtain \n\\begin{eqnarray}\n \\mathcal E(t) &=& \\frac{g_0^2}{2}\\langle \n \\partial_\\mu B_{\\nu,1}\\partial_\\mu B_{\\nu,1} - \\partial_\\mu\n B_{\\nu,1}\\partial_\\nu B_{\\mu,1} \n \\rangle \\\\\n \\nonumber\n &=& \\frac{-g_0^2}{2L^8}\\sum_{P,Q}'e^{-(P^2+Q^2)t}e^{\\imath (P+Q)x} \n \\left( P_\\alpha Q_\\alpha\\delta_{\\mu\\nu} -\n P_\\mu Q_\\nu\\right) \\langle \\tilde A_\\mu(P)\\tilde A_\\nu(Q)\\rangle \n {\\rm Tr}(\\Gamma(P)\\Gamma(Q))\\,.\n\\end{eqnarray}\nFinally, using the expression for the gluon propagator\n\\begin{equation}\n \\langle \\tilde A_\\mu(P)\\tilde A_\\nu(Q) \\rangle =\n L^4 \\delta_{P_\\alpha, -Q_\\alpha} \\frac{1}{P^2}\n \\left[
\\delta_{\\mu\\nu} - (1-\\lambda^{-1})\\frac{P_\\mu P_\\nu}{P^2}\\right]\n \\frac{1}{{\\rm Tr}({\\Gamma(-P)\\Gamma(P)})} + \\mathcal O(g_0^2)\n\\end{equation}\none gets\n\\begin{equation}\n \\label{eq:et}\n \\mathcal E(t) = \n \\frac{3g_0^2}{2L^4}\\sum_P' e^{-2P^2t}\\,.\n\\end{equation}\n\n\\subsection{Perturbative behavior of the gradient flow in a twisted box: lattice}\n\nWhen defining the gradient flow on the lattice one has to make several\nchoices. These basically correspond to the particular discretizations\nof the action whose gradient is used to define the flow, as well as\nthe discretization of the energy density and the choice of action that\none simulates (i.e. Wilson\/improved actions). \n\nFirst we will analyze the popular case in which the Wilson action is\nsimulated, and one uses the same action to define the flow (in this\ncase it is called the Wilson flow). The clover definition of the\nobservable has been a typical choice~\\cite{Luscher:2010iy} for a \ndiscretization of\nthe energy density. Later we will comment on the general case. \n\n\\subsubsection{Generalities and gauge fixing}\n\nOn the lattice the gradient flow is substituted by a discretized\nversion. There are several possibilities: one can use the Wilson\naction \n\\begin{equation}\n S_w(V) = \\frac{1}{g_0^2} \\sum_{\\rm p} {\\rm Re}\\{{\\rm Tr}(1-U_{\\rm p})\\}\\,,\n\\end{equation}\nwhere the sum runs over the oriented plaquettes, and define the flow\nequation by equating the time derivative of the links with\nthe gradient of the Wilson action\n\\begin{equation}\n \\label{eq:flowlat}\n a^2\\partial_t V_\\mu(x,t) = -g_0^2 \\{T^a\\partial_{x,\\mu}^a S_w(V)\\}\n V_\\mu(x,t) \\,, \\qquad V_\\mu(x,0) = U_\\mu(x) \\,.\n\\end{equation}\nIn this case the gradient flow is usually referred to as the Wilson\nflow. Some explanations of our notation are in order.
\nIf\n$f(U_\\mu(x))$ is an arbitrary function of the link variable\n$U_\\mu(x)$, the components of its Lie-algebra valued derivative\n$\\partial_{x,\\mu}^a $ \nare defined as \n\\begin{equation}\n \\partial_{x,\\mu}^a f(U_\\mu(x)) = \\left.\\frac{ {\\rm d} f(e^{\\epsilon\n T^a}U_\\mu(x))}{ {\\rm d}\\epsilon} \\right|_{\\epsilon=0}\\,. \n\\end{equation}\nIn perturbation theory one is interested in a neighborhood of the\nclassical vacuum configuration. In this neighborhood the lattice \nfields $U_\\mu(x)$ and $V_\\mu(x,t)$ are parametrized as follows:\n\\begin{align}\n U_\\mu(x) &= \\exp\\{ag_0 A_\\mu(x)\\} \\;, &\n V_\\mu(x,t) &= \\exp\\{ag_0 B_\\mu(x,t)\\} \\;.\n\\end{align}\n\nAgain it is convenient to study a modified flow equation where the\ngauge degrees of freedom are damped. We will consider\n\\begin{equation}\n \\label{eq:flowlatmd}\n a^2\\partial_t V_\\mu^\\Lambda(x,t) = g_0^2 \\left\\{ \n -\\big[ T^a\\partial_{x,\\mu}^a S_w(V^\\Lambda) \\big] \n + a^2\\hat D_\\mu^{\\Lambda}\\big[\\Lambda^{-1}(x,t)\\dot \\Lambda(x,t)\\big] \n \\right\\} V_\\mu^\\Lambda(x,t) \\,,\n\\end{equation}\nwith $V_\\mu^\\Lambda(x,0) = U_\\mu(x)$ and the forward lattice covariant\nderivative \n$\\hat D_\\mu^{\\Lambda}$ acting on Lie-algebra valued functions according to\n\\begin{equation}\n \\hat D_\\mu f(x) = \\frac{1}{a}\\left[\n V_\\mu(x,t)f(x+\\hat\\mu)V_\\mu^{-1}(x,t) - f(x)\n \\right] \\,.\n\\end{equation}\n\nSolutions of the modified and original flow equations are related by a\ngauge transformation\n\\begin{equation}\n V_\\mu(x,t) = \\Lambda(x,t)V_\\mu^\\Lambda(x,t)\\Lambda^{-1} (x+\\hat\\mu,t)\\,.\n\\end{equation}\nThe most natural choice for $ \\Lambda(x,t)$ is the same functional\nused for gauge fixing \n\\begin{equation}\n \\label{eq:lam}\n \\Lambda^{-1}\\frac{{\\rm d} \\Lambda}{{\\rm d}t} = \\alpha\n \\hat\\partial^*_\\mu B_\\mu(x,t)\\,,\\qquad \n \\Lambda\\big|_{t=0} = 1\\,.\n\\end{equation}\nwhere $\\hat \\partial, \\hat \\partial^*$ denote the forward\/backward 
\nfinite differences. We again note that the boundary conditions of\n$V_\\mu^\\Lambda(x,t)$ are independent of $\\alpha$, since $\\Lambda(x,t)$\nbelongs to the restricted class of gauge transformations that leave\nthe twist matrices unchanged.\n\n\n\\subsubsection{Flow field and energy density to leading order}\n\nAgain the choice $\\alpha=1$ turns out to be convenient and we\nwill stick to it from now on.\n\nThe modified flow equation reads\n\\begin{equation}\n a^2\\partial_t V_\\mu(x,t) = g_0^2 \\left\\{ -[T^a\\partial_{x,\\mu}^a\n S_w(V)] + a^2\\hat D_\\mu(\\hat\\partial_\\nu^* B_\\nu ) \n \\right\\}\n V_\\mu(x,t) \\,, \\qquad V_\\mu(x,0) = U_\\mu(x) \\,.\n\\end{equation}\nThe flow field can be expanded in powers of $g_0$\n(equation~(\\ref{eq:flowfg0})) and to first order in $g_0$ we have\n\\begin{equation}\n \\label{eq:wflowlato1}\n \\partial_t B_{\\mu,1}(x,t) = \\hat \\partial_\\nu\\hat\\partial_\\nu^*\n B_{\\mu,1}(x,t) \\,.\n\\end{equation}\nExpanding the flow field in our favorite Lie-algebra basis\n(equation~(\\ref{eq:gaugetw})) one can write the solution to the\nprevious equation\n\\begin{equation}\n B_{\\mu,1}(x,t) = \\frac{1}{L^4}\\sum_P' e^{-\\hat P^2t} \\tilde A_\\mu(P)\n e^{\\imath Px} \\Gamma(P)\\,,\n\\end{equation}\nwhere \n\\begin{equation}\n \\hat P_\\mu = \\frac{2}{a}\\sin\\left(a\\frac{P_\\mu}{2}\\right)\n\\end{equation}\nis the usual lattice momentum.\n\nWe can choose among different discretizations for the energy\ndensity. The most popular one consists in using the clover definition\nfor $G_{\\mu\\nu}(x,t)$~\\cite{Luscher:2010iy}. 
To leading order we have\n\\begin{eqnarray}\n \\nonumber\n \\hat G_{\\mu\\nu}(x,t) &=& \\frac{g_0}{2}\\,\\mathring\\partial_\\mu\\left[B_{\\nu,1}(x,t) + \n B_{\\nu,1}(x-\\hat \\nu,t)\\right] \\\\\n &-&\n \\frac{g_0}{2}\\,\\mathring\\partial_\\nu\\left[B_{\\mu,1}(x,t) + \n B_{\\mu,1}(x-\\hat \\mu,t)\\right] + \\mathcal O(g_0^2) \\,,\n\\end{eqnarray}\nwhere $\\mathring\\partial_{\\mu} = \\tfrac{1}{2}(\\hat \\partial_\\mu +\n\\hat \\partial^*_\\mu)$ is the symmetric finite difference. The energy\ndensity computed with the clover definition for the field strength\ntensor reads \n\\begin{equation}\n \\langle E^{\\rm cl}(t)\\rangle = -\\frac{1}{2}\\langle {\\rm Tr}\\{ \\hat\n G_{\\mu\\nu} \\hat G_{\\mu\\nu}\\} \\rangle = \\mathcal E^{\\rm cl}(t, a\/L) +\n \\mathcal O(g_0^2) \n\\end{equation}\nUsing the definitions\n\\begin{subequations}\n\\begin{eqnarray}\n \\mathring P_\\mu &=& \\frac{1}{a}\\sin\\left(aP_\\mu\\right)\\,,\\\\\n C_\\mu &=& \\cos\\left(a\\frac{P_\\mu}{2}\\right)\\,,\n\\end{eqnarray}\n\\end{subequations}\nand the lattice gluon propagator, one can easily obtain\n\\begin{equation}\n \\label{eq:etcl}\n \\hat{\\mathcal E}^{\\rm cl}(t, a\/L) = \n \\frac{g_0^2}{2L^4}\\sum_{P}' e^{-2\\hat P^2t}\n \\frac{\\mathring P^2 C^2 - (\\mathring P_\\mu C_\\mu)^2}{\\hat P^2}\\,.\n\\end{equation}\n\n\n\\subsubsection[Some comments on different\n discretizations]{Some comments on different\n discretizations\\protect\\footnote{The author wants to thank S. Sint for his\n help in understanding the points discussed in this section.}}\n\\label{sc:disc}\n\nIn general the lattice computation of the leading order behavior of\nthe energy density involves several choices of discretization: the\naction that one simulates (labelled (a)), the action whose gradient\ndefines the flow evolution (labelled (f)), and finally the\ndiscretization used to compute the observable (labelled (O)). 
To\nleading order, these three choices can be expressed as \nchoice of ``actions'' \n\\begin{subequations}\n \\begin{eqnarray}\n S_a[\\tilde A_{\\mu}] &=& \\frac{1}{4L^4}\\sum_{P}' \\tilde A_\\mu(-P)\n K_{\\mu\\nu}^{(a)}(P) \\tilde \n A_\\nu(P) + \\mathcal O(g_0^2)\\,,\\\\\n S_f[\\tilde A_{\\mu}] &=& \\frac{1}{4L^4}\\sum_{P}' \\tilde A_\\mu(-P)\n K_{\\mu\\nu}^{(f)}(P) \\tilde \n A_\\nu(P) + \\mathcal O(g_0^2)\\,, \\\\\n S_O[\\tilde A_{\\mu}] &=& \\frac{1}{4L^4}\\sum_{P}' \\tilde A_\\mu(-P)\n K_{\\mu\\nu}^{(O)}(P) \\tilde \n A_\\nu(P) + \\mathcal O(g_0^2) \\,.\n \\end{eqnarray}\n\\end{subequations}\n\nThe matrices $K^{(a)}$ and $K^{(f)}$ may (and should) contain a gauge\nfixing part, but not the one corresponding to the observable\n$K^{(O)}$. In this way final results will be independent of the\nchoices of gauge.\nThe inverse of the $K_{\\mu\\nu}^{(a)}$ defines the lattice gluon propagator\n\\begin{eqnarray}\n \\langle A_\\mu(-P)A_\\nu(P)\\rangle &=& D_{\\mu\\nu}(P)\\,, \\\\\n K_{\\mu\\alpha}^{(a)}(P)D_{\\alpha\\nu}(P) &=& \\delta_{\\mu\\nu}\\,.\n\\end{eqnarray}\n\nUsing this notation it is trivial to obtain the form of the flow field\nto leading order\n\\begin{equation}\n \\tilde B_{\\mu,1}(P) = \\left( \\exp\\{-t K^{(f)}(P)\\}\\right)_{\\mu\\nu}\n \\tilde A_\\nu(P) = H_{\\mu\\nu}(t,P) \\tilde A_\\nu(P) \\,, \n\\end{equation}\nand noting that the reality of the action requires that $H^+(t,P) =\nH(t,-P)$, we can write the expression of the energy density to\nleading order as \n\\begin{eqnarray}\n \\mathcal E(t,a\/L) &=& g_0^2 \\langle S_O[\\tilde\n B_{\\mu,1}]\\rangle \\\\ \n &=& \\frac{g_0^2}{2L^4} \\sum_P' {\\rm Tr}\\{ H^+(t,P)K^{(O)}(P)H(t,P) \n D(P)\\}\\,.\n\\end{eqnarray}\n\nThis formula allows an easy evaluation of the energy density, to\nleading order in perturbation theory, for any choice of\ndiscretizations. 
One general point that one can make is that if one\nuses the Wilson flow the matrix $H(t,P)$ can be chosen to be\nproportional to the identity (by an appropriate gauge choice), and\ntherefore commutes with any other matrix. Moreover, if the \naction that one simulates is the same as the one that we use to\ncompute the observable, the product of matrices $DK^{O}$ together with\nthe trace simply results in a factor of 3, and therefore one obtains\n\\begin{equation}\n \\mathcal E(t,a\/L) = \\frac{3g_0^2}{2L^4} \\sum_P' e^{-2t\\hat P^2}\\,.\n\\end{equation}\n\nThis means that without changing the flow, improving\nthe action and the observable leads to exactly the same cutoff effects\nas if one does not improve anything (to leading order). \n\n\\subsection{Tests}\n\nIn order to check the previous computations one can perform several\nconsistency checks. First it is obvious that the continuum result\n(equation~(\\ref{eq:et})) is recovered from the lattice one \n(equation~(\\ref{eq:etcl})) if one takes the limit $a\/L \\rightarrow\n0$. 
In the infinite volume limit boundary conditions are irrelevant,\nand therefore for $L\\rightarrow \\infty$ one should recover the result\nof~\\cite{Luscher:2010iy}, which reads\n\\begin{equation}\n \\mathcal E^{(L=\\infty)}(t) = \\frac{3g_0^2(N^2-1)}{128\\pi^2 t^2}\\,.\n\\end{equation}\n\nThis result is reproduced from our expression\nequation~(\\ref{eq:et}) by simply noting that \n\\begin{equation}\n P_\\mu = \\frac{2\\pi}{L}\\left( n_\\mu + \\frac{\\tilde n_\\mu}{N}\\right)\\,,\n\\end{equation}\nand therefore \n\\begin{equation}\n \\frac{1}{L^4} \\sum_P' \\xrightarrow[L\\rightarrow \\infty]{}\n \\frac{1}{(2\\pi)^4}\\sum_{\\tilde p_i}' \\int_{-\\infty}^\\infty {\\rm d}^4P\\,.\n\\end{equation}\nFinally, recalling that there are $N^2-1$ terms in the sum over $\\tilde\np_i$ (the term $\\tilde p_i=0$ is explicitly excluded) and using the\nGaussian integral $\\int_{-\\infty}^\\infty {\\rm d}^4P\\, e^{-2P^2t} = \\pi^2\/(4t^2)$,\none obtains\n\\begin{equation}\n \\mathcal E(t)\\xrightarrow[L\\rightarrow \\infty]{}\n \\frac{3g_0^2}{32\\pi^4}\\sum_{\\tilde p_i}' \\int_{-\\infty}^\\infty {\\rm d}^4P\n e^{-2P^2 t} = \\frac{3g_0^2(N^2-1)}{128\\pi^2 t^2}\\,.\n\\end{equation}\n\nTo check the lattice computations we have performed some dedicated\npure gauge lattice \nsimulations. We use the plaquette action of an $SU(2)$ gauge theory in\ntwo different volumes, $(L\/a)^4=4^4$ and $(L\/a)^4= \n6^4$. We collect $10,000$ measurements of $\\langle E^{\\rm\n cl}(t)\\rangle$ for different values of $t$ and $\\beta = 2\/g_0^2 =\n40, 80, 120, 200, 400, 560, 800, 960, 1120, 1280$. In these\nlarge-$\\beta$ simulations the \nmeasured $\\langle E^{\\rm cl}(t)\\rangle$ should\nreproduce the perturbative expression\n(equation~(\\ref{eq:etcl})). 
More precisely, we will study\nnumerically the\nquantity\n\\begin{equation}\n R(g_0, t) = \\frac{\\langle E^{\\rm cl}(t)\\rangle - \\mathcal E^{\\rm\n cl}(t)}{\\mathcal E^{\\rm cl}(t)}\\,.\n\\end{equation}\nWe expect that $R(g_0,t) = \\mathcal O(g_0^2)$, and therefore by fitting\nthe data from the simulations to a linear behavior \n\\begin{equation}\n R(g_0, t) = m(t)g_0^2 + n(t)\n\\end{equation}\none should obtain an intercept $n(t)$ compatible with zero within\nerrors. Indeed this is the case, for different values of $t$ and $L$,\nas the reader can check in table~\\ref{tab:fit}. A couple of\nrepresentative fits are shown in figure~\\ref{fig:fit}.\n\n\\input{fit-table.tex}\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{fig\/L4c030}\n \\caption{Fit for $L=4^4$ and $c=0.3$.}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{fig\/L6c050}\n \\caption{Fit for $L=6^4$ and $c=0.5$.}\n \\end{subfigure}\n\n \\caption{Some representative fits to the large-$\\beta$\n simulations. The plots show the function $R(g_0, t)$ at fixed\n $t=c^2L^2\/8$ versus $g_0^2$.} \n \\label{fig:fit}\n\\end{figure}\n\n\n\n\n\n\n\\section{Introduction}\n\n\\input{intro.tex}\n\n\\section{Twisted boundary conditions}\n\\label{sc:tw}\n\\input{bc.tex}\n\n\\section{The gradient flow in a twisted box}\n\\label{sc:flow}\n\\input{flow.tex}\n\n\\section{Running coupling definition}\n\\label{sc:coupling}\n\\input{coupling.tex}\n\n\\section{$SU(2)$ running coupling}\n\\label{sc:run}\n\\input{running.tex}\n\n\n\\section{Conclusions}\n\\input{conclusions.tex}\n\n\n\\section*{Acknowledgments}\n\nThis work owes a large debt to\nM. Garc\\'ia Perez and A. Gonz\\'alez-Arroyo for sharing some of their results\nand notes before publication and for the many illuminating\ndiscussions. The help and advice of R. Sommer and U. Wolff were\ninvaluable in many of the steps of this work. 
\n\nI also want to thank my colleagues at DESY\/HU, specially\nP. Korcyl, P. Fritzsch, S. Sint and R. Sommer for the very many interesting\ndiscussions and their careful reading of the manuscript. D. Lin was very\nkind reading and helping to improve a manuscript of this work. \n\n\n\n\\subsection{Numerical computation of the step scaling function and\n running coupling}\n\n\\subsubsection{Simulation details}\n\nWe will simulate $SU(2)$ YM theory using the Wilson action\n\\begin{equation}\n S = \\frac{\\beta}{4}\\sum_{\\rm p} {\\rm Tr}\\left\\{ 1-U_{\\rm p}\\right\\}\n\\end{equation}\nwhere the sum runs over all oriented plaquettes. We simulate lattices\nof size $L\/a=20, 24, 30, 36$, and in order \nto compute the step scaling function also lattices of half this\nsize ($L\/a=10, 12, 15, 18$). The range of values of $\\beta$ (between 2.75\nand 12.0) translate to renormalized couplings\n$g_{\\rm TGF}^2(L)$ between 7.5 and 0.6 (for $c=0.3$), enough to cover both the\nnon-perturbative and perturbative regions of the \ntheory. Appendix~\\ref{ap:values} collects the values of the $g^2_{\\rm\n TGF}(L)$ of our simulations. \n\nWe will use a combination of heatbath~\\cite{Creutz:1980zw,Fabricius:1984wp,Kennedy:1985nu} and\noverrelaxation~\\cite{Creutz:1987xi} as suggested\nin~\\cite{Wolff:1992nq}. In particular we \nchoose to do one heatbath sweep followed by $L\/a$ overrelaxation\nsweeps. Since measuring the coupling (i.e. integrating the flow\nequations) is numerically more expensive \nthan the Monte Carlo updates, we repeat this process 50 times between\nmeasurements. \n\nIn total we collect 2048 measurements of the coupling for each lattice\nsize, each value of $\\beta$, and several values of\n$c\\in[0.3,0.5]$. These measurements are collected in $N_r$ \nparallel runs (replica) of length $N_{\\rm MC}$ each so that $N_r\\times\nN_{\\rm MC} = 2048$. \nWe check that there are no autocorrelation between\nmeasurements (i.e. 
$\\tau_{\\rm int}=0.5$ within errors), even for our\nlarger lattices and larger values of $c$. We conclude that we can\nsafely consider the measurements independent. \n\nThe Wilson flow equations are integrated using the adaptive step size\nintegrator described in appendix D of~\\cite{Fritzsch:2013je}. With\nthis scheme we \nmake sure that the integration error in each step is not larger than\n$10^{-6}$. \n\n\\subsubsection{Data analysis}\n\nFor each $L\/a$ we have computed the value of the twisted gradient flow\ncoupling at different values of\n$\\beta$ (we call it $g^2_{\\rm TGF}(\\beta;L\/a)$). These data are fitted to a\nPad\\'e-like ansatz\n\\begin{equation}\n \\label{eq:pade}\n g^2_{\\rm TGF}(\\beta;L\/a) = \\frac{4}{\\beta} \\quad\n \\frac{\\sum_{n=0}^{M-1} a_n\\beta^n + \\beta^M}\n {\\sum_{n=0}^{M-1} b_n\\beta^n + \\beta^M}\\,.\n\\end{equation}\nThis fit imposes the one-loop constraint to the data (i.e. $g^2_{\\rm\n TGF}(\\beta;L\/a) \\rightarrow 4\/\\beta$ at large $\\beta$), and has\na total of $2M$ free fit parameters. \n\nAlternatively, and to estimate the dependence of our results on the \nchoice of functional form used to fit the data, we use a different Pad\\'e\ninspired functional form\n\\begin{equation}\n \\label{eq:taylor}\n g^2_{\\rm TGF}(\\beta;L\/a) = \\frac{4}{\\beta} \\quad\n \\frac{1}\n {1 + \\sum_{n=1}^{M} c_n\/\\beta^n}\\,,\n\\end{equation}\nthat also ensures the correct one-loop behavior at large $\\beta$.\n\nWe obtain good fits ($\\chi^2\/{\\rm\nndof}\\sim 0.6-1.9$) with $M=2$ when using the functional form of\nEq.~\\ref{eq:pade} to fit the lattice data (i.e. 4 fitting\nparameters). When using the functional form of Eq.\\ref{eq:taylor} we\nneed $M=4$ to accurately describe the data on the small lattices\n($L\/a=10,12$) and $M=6$ for the larger ones\n($L\/a=15,18,20,24,30,36$). It is important to \nstress that the data are statistically uncorrelated, since they\ncorrespond to different simulations. 
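To make the interpolation step concrete, the following minimal sketch (ours, with made-up coefficients; not the analysis code used in this work) fits noiseless synthetic data to the $M=2$ Pad\'e-like ansatz above. Multiplying through by the denominator turns the fit into a linear least-squares problem:

```python
# Illustrative sketch of the M = 2 Pade-like interpolation
#   g^2(beta) = (4/beta) (a0 + a1 beta + beta^2) / (b0 + b1 beta + beta^2),
# which builds in the one-loop behavior g^2 -> 4/beta at large beta.
# All coefficients below are invented for the demonstration.
import numpy as np

def pade_m2(beta, a0, a1, b0, b1):
    return (4.0 / beta) * (a0 + a1 * beta + beta**2) / (b0 + b1 * beta + beta**2)

true = (1.5, -0.8, 0.9, -2.0)                  # made-up coefficients
beta = np.linspace(2.75, 12.0, 12)
g2 = pade_m2(beta, *true)                      # synthetic "measured" couplings

# 4(a0 + a1 b + b^2) = g2 b (b0 + b1 b + b^2)  =>  linear in (a0, a1, b0, b1)
A = np.column_stack([4 * np.ones_like(beta), 4 * beta, -g2 * beta, -g2 * beta**2])
rhs = g2 * beta**3 - 4 * beta**2
fit, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # recovers the curve exactly
assert np.allclose(pade_m2(beta, *fit), g2, rtol=1e-6)
```

With real (noisy, correlated) data one would instead minimize the appropriate $\chi^2$, but the algebraic structure of the ansatz is the same.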
\n\nIn the figures~\\ref{fig:fit_l24} we show a couple of these fits. Our\nworst fit corresponds to the $L\/a=24$ lattice and \nthe Pade fit gives a $\\chi^2\/{\\rm ndof}=1.69$, while the Taylor fit\nresults in a fit quality of $\\chi^2\/{\\rm ndof}=1.9$. We see how\nin this case the two different functional forms interpolate\ndifferently between the data, giving us confidence that if one\nestimates the error of the interpolation using both functional forms,\none will be on the safe side\\footnote{We point that probably a more\nsophisticated analysis technique (or simply, simulating an additional\nlattice to avoid having large gaps in the data), might result in a\nmore precise result.}. \n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/fit_l24}\n \\caption{}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/fit_l36}\n \\caption{}\n \\end{subfigure}\n \\caption{Some examples of our fits to interpolate the values of the\n renormalized coupling for different values of $\\beta$. \n (a): Our worst fits corresponds to the $L\/a=24$. As we\n can see there is a difference between the different\n interpolating functions between the data points. We\n stress that this systematic effects is taken into account\n in our analysis by using both functional forms to estimate\n the error of the interpolations (see the text for more details).\n (b): Fits to the data of the $L\/a=36$ lattice. As we can\n see, in this case both interpolating functions agree\n within errors, although the polynomial fits tends to have\n larger errors.\n}\n \\label{fig:fit_l24}\n\\end{figure}\n\nWe use resampling methods to propagate errors by using 4000 bootstrap\nsamples. All fitting parameters derived from our original data are\ncomputed for each bootstrap sample. Interpolation points are computed\nfor each bootstrap sample and each functional form. 
The final error of\nthe interpolated point is computed using \\emph{both} functional forms\nand \\emph{all} bootstrap samples,\nand therefore takes into account not only the statistical uncertainty,\nbut also the systematic effect due to the dependence on the\ninterpolating functional form. \n\n\\subsubsection{Step scaling function}\n\nWe will first show the continuum extrapolations of the step scaling\nfunction $\\Sigma(u,a\/L)$ at some representative values of\n$u=7.5, 3.75, 1.5$. Figure~\\ref{fig:ss} shows that these\nextrapolations are mild. We have used the value $c=0.3$, which gives a\nprecision in the data for the renormalized coupling between $0.15\\%$\nand $0.25\\%$.\n\nOne of the advantages of using\ntwisted boundary conditions is the absence of $\\mathcal O(a)$ cutoff\neffects, which are present for example in the Schr\\\"odinger functional\ndue to boundary effects. Here the invariance under\ntranslations guarantees that the continuum limit can be safely taken by\na linear extrapolation in $(a\/L)^2$.\n\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}{\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/extra_75}\n \\end{subfigure}\n \\begin{subfigure}[b]{\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/extra_375}\n \\end{subfigure}\n \\begin{subfigure}[b]{\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/extra_15}\n \\end{subfigure}\n \\caption{Examples of the continuum extrapolation of the step scaling\n function. The three panels correspond (from top to bottom) to the\n values $u=7.5, 3.75, 1.5$. We recall that we use a scale factor\n $s=1\/2$, and the scheme is defined by the parameter $c=0.3$.} \n \\label{fig:ss}\n\\end{figure}\n\n\n\\subsubsection{Running coupling}\n\nAs a final application, we will compute the running coupling.\nWe will fix the scheme by setting \n$c=0.3$. 
We start our recursion in a volume $L_{\\rm max}$ defined by the\ncondition\n\\begin{equation}\n g^2_{\\rm TGF}(L_{\\rm max})\\Big|_{c=0.3} = 7.5\\,.\n\\end{equation}\nThe lattice step scaling function and its continuum limit are computed\nas described in the previous sections. As figure~\\ref{fig:ss}\nshows, the extrapolations towards the \ncontinuum are rather flat. The continuum limit values are used to\nfurther compute the values of the step scaling function at larger\nrenormalization scales (smaller volumes), up to $L_{\\rm min} = L_{\\rm\n max}\/2^{26}$, \nwhere $g^2_{\\rm TGF}(L_{\\rm min})|_{c=0.3}=0.5324(84)$.\n\nSince the same functional form (fitting parameters) is used\nrecursively to compute the values of the coupling at different scales,\none has to propagate errors taking into account the correlations\ncorrectly. This is done in the spirit of the resampling methods in the\nmost naive way: one uses as input for the coupling at a scale $L$ all\nthe bootstrap samples of the coupling from the scale $2L$. We recall\nhere that these bootstrap samples carry the information not only of\nthe statistical uncertainties, but also of the dependence of our\nresults on the functional form chosen to fit the data.\nOur results have carefully taken into account the two sources of\nsystematic uncertainty: the continuum extrapolation and the choice of\nfitting function for our lattice data.\n\nFigure~\\ref{fig:gvsL} shows the running of the coupling from low\nenergies to high energies, over a factor $2^{26}$ change in scale,\nwhile table~\\ref{tab:gvsL} contains the numerical values of the\ncoupling at different renormalization scales. The fact that the\nabsolute error in the renormalized coupling tends to be constant at\nlarge energies (small volumes) is a consequence of the error\npropagation, which dominates the error budget at large energies. 
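The recursive propagation of the bootstrap samples can be illustrated with a toy sketch (ours, not the analysis code of this work): the continuum step scaling function is replaced here by an assumed one-loop stand-in, and each halving of the volume maps every bootstrap sample of the coupling through it, so that all uncertainties carried by the samples propagate automatically.

```python
# Toy illustration of recursive bootstrap error propagation. The true
# step scaling function of the paper is replaced by a one-loop SU(2)
# stand-in (an assumption made purely for this sketch).
import math
import random

beta0 = 22.0 / 3.0 / (16 * math.pi**2)   # one-loop SU(2) coefficient

def step_one_loop(u):
    """Coupling at L/2 given u = g^2(L): 1/g^2(L/2) = 1/u + 2*beta0*ln 2."""
    return 1.0 / (1.0 / u + 2.0 * beta0 * math.log(2.0))

rng = random.Random(1)
samples = [rng.gauss(7.5, 0.01) for _ in range(4000)]   # g^2(L_max) samples
for _ in range(26):                                     # down to L_max / 2^26
    samples = [step_one_loop(u) for u in samples]

mean = sum(samples) / len(samples)
assert 0.4 < mean < 0.7   # asymptotic freedom: the coupling has decreased
```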
\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/gvsL}\n \\caption{$g^2_{\\rm TGF}(L)$ as a function of the renormalization\n scale $\\log(L\/L_{\\rm min})$, and a comparison with the two loop\n perturbative prediction. Errors are plotted, but they are comparable\n to the size of the points.}\n \\label{fig:gvsL}\n\\end{figure}\n\nAs a further consistency test, we have repeated the full running of\nthe coupling using $s=2$ as the scale factor to define the step scaling\nfunction (i.e. we run from high to low energies), obtaining\nconsistent results. \n\n\\begin{table}\n \\centering\n \\begin{tabular}{l|llllll}\n \\toprule\n $L=L_{\\rm max}\/2^k$ & $k=0$ & $k=1$ & $k=2$ & $k=3$ & $k=4$ & $k=5$ \\\\\n $g^2_{\\rm TGF}(L)$ & 7.5 & 4.824(17) & 3.581(15) & 2.858(12) &\n 2.383(10) & 2.0464(95) \\\\\n \\midrule\n $L=L_{\\rm max}\/2^k$ & $k=6$ & $k=7$ & $k=8$ & $k=9$ & $k=10$ & $k=11$ \\\\\n $g^2_{\\rm TGF}(L)$ & 1.7949(94) & 1.5995(94) & 1.4432(93) &\n 1.3153(92) & 1.2085(90) & 1.1181(89) \\\\\n \\midrule\n $L=L_{\\rm max}\/2^k$ & $k=12$ & $k=13$ & $k=14$ & $k=15$ & $k=16$ & $k=17$ \\\\\n $g^2_{\\rm TGF}(L)$ & 1.0405(87) & 0.9732(86) & 0.9143(84) &\n 0.8621(83) & 0.8158(83) & 0.7742(82) \\\\\n \\midrule\n $L=L_{\\rm max}\/2^k$ & $k=18$ & $k=19$ & $k=20$ & $k=21$ & $k=22$ & $k=23$ \\\\\n $g^2_{\\rm TGF}(L)$ & 0.7368(82) & 0.7028(82) & 0.6720(82) &\n 0.6437(82) & 0.6178(82) & 0.5939(83) \\\\\n \\midrule\n $L=L_{\\rm max}\/2^k$ & $k=24$ & $k=25$ & $k=26$ & & & \\\\\n $g^2_{\\rm TGF}(L)$ & 0.5718(83) & 0.5514(84) & 0.5324(84) &&& \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Values of the renormalized twisted gradient flow coupling\n as a function of the renormalization scale $\\mu = 1\/cL$ for\n $c=0.3$. 
The final error at large scales\n (small volumes) is dominated by the error propagation.}\n \\label{tab:gvsL}\n\\end{table}\n\nThe $\\Lambda$ parameter can be extracted, in units of $L_{\\rm max}$,\nvia\n\\begin{equation}\n \\Lambda = \\mu(\\beta_0g^2(\\mu))^{-\\beta_1\/2\\beta_0^2} e^{-1\/2\\beta_0g^2(\\mu)}\n e^{-\\int_0^{g^2(\\mu)}\\left\\{\\frac{1}{\\beta(x)}+\\frac{1}{\\beta_0x^3}-\\frac{\\beta_1}{\\beta_0^2x}\\right\\}{\\rm d}x}\\,,\n\\end{equation}\nusing that $\\mu = 1\/cL$. The previous formula is exact, but the last\nexponential is essentially unknown analytically. Nevertheless, if one\nuses a value of $g^2_{\\rm TGF}(L)$ where the difference between the two loop\nand the non-perturbative results is negligible, the effect of the\nlast exponential is also negligible. Of course this is\nmore certain the smaller the coupling, but since the relative error of\nthe coupling grows as the coupling decreases, this would translate into\na larger error for the $\\Lambda$ parameter. Below we quote a couple of\nvalues as examples. \n\\begin{eqnarray*}\n \\Lambda L_{\\rm max} = 1.509(44)\\quad (@ g_{TGF}^2(L) = 1.7949(94))\\,,\\\\\n \\Lambda L_{\\rm max} = 1.57(13)\\quad (@ g_{TGF}^2(L) = 1.0405(87))\\,.\n\\end{eqnarray*}\n\nWe want to end this section with a small comment on the use of\ndifferent values of $c$. The main point has already been\nraised in~\\cite{Fritzsch:2013je}: the larger the value of $c$, the larger\nthe (relative) statistical error of the coupling, but the scaling\ntowards the continuum seems better. This general behavior is\nconsistent with the leading order in perturbation theory as we have\nseen. We will simply say that the relative error in the raw data\nincreases with $c$: roughly, for $c=0.4$ the\nrelative error is two times larger than for $c=0.3$, while for\n$c=0.5$ the error is three times larger. This statement seems to hold\ntrue independently of the volume (i.e. of the value of $g^2_{\\rm TGF}$). 
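As a numerical cross-check (our own arithmetic), evaluating the truncated formula, i.e. dropping the last exponential, with the universal two-loop coefficients of pure $SU(2)$ gauge theory reproduces the quoted central values well within their errors:

```python
# Truncated two-loop evaluation of Lambda * L_max for pure SU(2),
# with mu = 1/(c L), L = L_max / 2^k and c = 0.3. The last exponential
# (the integral term) in the exact formula is neglected, as discussed above.
import math

N = 2
beta0 = 11.0 * N / 3.0 / (16 * math.pi**2)
beta1 = 34.0 * N**2 / 3.0 / (16 * math.pi**2)**2

def lambda_Lmax(g2, k, c=0.3):
    mu_Lmax = 2**k / c   # mu in units of 1/L_max for L = L_max / 2^k
    return (mu_Lmax
            * (beta0 * g2) ** (-beta1 / (2 * beta0**2))
            * math.exp(-1.0 / (2 * beta0 * g2)))

print(lambda_Lmax(1.7949, 6))    # ~1.51, cf. 1.509(44) above
print(lambda_Lmax(1.0405, 12))   # ~1.57, cf. 1.57(13) above
```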
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{The criterion illustrated by the cuprate example}\n\\label{app:example}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=.7]{fig4a.eps}\n\\includegraphics[scale=.7]{fig4b.eps}\n\\caption{(a)~The 2D nodal band structure and its projections on $(1\\bar{1})$ and $(01)$\nsurfaces for a model $d$-wave superconductor. The first BZ and one extended zone are drawn. The solid black curves denote the Fermi surface and the shaded region is filled in the normal state.\nRed arrows are the unit vectors in \\Eq{n}. The blue dots with $\\pm$ signs represent nodes with\nvorticity $\\pm 2$ respectively. Top left: the slanted thin black line segment is the surface\nBZ of the $(1\\bar{1})$ edge. The black arrow with letter ``P'' indicates the direction of projection.\nThick red line segments mark the surface momenta with zero energy bound states. Top right:\nthe thin black horizontal line segment is the BZ of the $(01)$ edge.\nThe $\\pm$ vorticity nodes overlap after projection.\nThe vorticity of the node enclosed by the parallelogram is equal to the winding number difference along the two side blue segments because the winding along the top and bottom segments cancel due to the periodicity in momentum space. This can be explicitly seen by following the turning of the red arrows (the actual winding number is twice of the winding shown by the arrows, due to the spin degeneracy). (b) The (11) edge bandstructure. The surface flat bands are marked red. In constructing this figure we have used $\\epsilon(\\v k)=-\\cos k_x-\\cos k_y-\\mu$ and $\\Delta(\\v k)=\\Delta_0 (\\cos k_x-\\cos k_y)$ in \\Eq{dwave}. 
Here $\\mu=0.45,\\Delta_0=0.1$.}\n\\label{fig:d-wave}\n\\end{center}\n\\end{figure}\n\nThe idea behind the criterion presented in the main text is \nbest illustrated by using the cuprate superconductor as an example.\nThe Bogoliubov-de Gennes (BdG) Hamiltonian of the cuprate superconductor reads\n\\be\nH_{\\rm cuprates}(\\v k)=\\epsilon(\\v k)\\s_0\\otimes\\tau_3+\\Delta(\\v k)\\s_0\\otimes\\tau_1,\\label{dwave}\n\\ee\nwhere $\\s_0$ is the identity matrix acting in the spin space and $\\tau_{1,3}$ are $2\\times 2$ Pauli matrices in the Nambu space, $\\epsilon(\\v k)$ is the normal state dispersion satisfying $\\e(-\\v k)=\\e(\\v k)$ and $\\Delta(\\v k)$ is the d-wave gap function. (Since the cuprates are quasi-two-dimensional materials, we shall use two dimensional notations in the following discussions.) The Fermi surface and the gap nodes are shown in \\Fig{fig:d-wave}a, therefore $d=2,q=0$. In the same figure, the normalized vector \\be \\hat{n}(\\v k)=(\\epsilon(\\v k),\\Delta(\\v k))\/\\sqrt{\\epsilon(\\v k)^2+\\Delta(\\v k)^2}\\label{n}\\ee is plotted as a function of $\\v k$ over two BZs (see red arrows). Inspecting these arrows one notices that each node is a ``vortex'' in $\\hat{n}(\\v k)$. Around each vortex the arrows exhibit non-zero winding. The total winding number associated with each node is given by \n\\be\nw={2\\over 2\\pi}\\oint d\\v k\\cdot [n_1(\\v k)\\gr_{\\v k} n_2(\\v k)-n_2(\\v k)\\gr_{\\v k} n_1(\\v k)].\n\\label{w}\n\\ee (The extra factor of 2 is due to spin degeneracy.) Clearly each node is characterized by an even integer winding number. The BdG Hamiltonian, restricted to any one-dimensional ($=d-q-1$) loop enclosing a node, is topologically nontrivial.\n\nNow consider the bandstructure projected along the $(1\\bar{1})$ direction. 
For each transverse momentum $k$ along $(11)$ we have a 1D chain running in $(1\\bar{1})$.\nSo long as $k$ does not coincide with the projection of the nodes the spectrum is fully gapped and characterized by the winding number defined in \\Eq{w}. For any two chains whose $k$ straddle the projection of a node, the winding numbers must differ by $\\pm 2$ (see \\Fig{fig:d-wave}a captions), hence at least one of them is topologically non-trivial and possesses $E=0$ end states\nwhen the boundary condition along $(1\\bar{1})$ changes from closed to open.\nThis implies that $E=0$ bound states exist for {\\em intervals} of $k$. Therefore $d_{E=0}$ is indeed $q+1=1$. An example of the (11) boundary bandstructure is shown in \\Fig{fig:d-wave}b. The $k$ intervals showing the flat bands are represented by the thick red line segments in the top left corner of \\Fig{fig:d-wave}a. The only edges which do not possess ZBABS are the $\\{10\\}$ (Miller's notation is used) edges, where the projections of positive and negative nodes overlap (see top right corner of \\Fig{fig:d-wave}a).\nFor the real material $d=3, q=1$ and the only modification is that $d_{E=0}$ changes from 1 to 2.\n\n\\end{appendix}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{\\label{intro}Introduction}\nHawks and Doves, also known as Chicken, or the Snowdrift game, is a two-person, symmetric\ngame with the following payoff bi-matrix:\n\n\\begin{table}[!ht]\n\\begin{center}\n{\\normalsize\n\\begin{tabular}{c|cc}\n & H & D\\\\\n\\hline\n{\\rule[-3mm]{0mm}{8mm}}\nH & ($\\frac{G-C}{2},\\frac{G-C}{2}$) & ($G,0$)\\\\\n{\\rule[-3mm]{0mm}{4mm}}\nD & ($0,G$) & ($\\frac{G}{2},\\frac{G}{2}$)\\\\\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\\vspace{-0.4cm}\n\n\n\\noindent In this matrix, H stands for ``hawk'', and D stands for ``dove''.\nMetaphorically, a hawkish behavior means a strategy of fighting, while a dove, when facing a confrontation, will always yield.\nAs in the \n\\textit{Prisoner's Dilemma} 
\\cite{axe84}, this game, for all its simplicity, appears to\ncapture some important features of social interactions. In this sense, it applies\nin many situations in which ``parading'', ``retreating'', and ``escalating'' are common.\nOne striking example of a situation\nthat has been thought to lead to a Hawk-Dove dilemma is the Cuban missile crisis in 1962\n \\cite{poundstone92}.\nIn the payoff matrix above, $G > 0$ is the gain that a hawk obtains when it meets a dove; the dove retreats and loses nothing.\nIf a dove meets another dove, one of them, or both, will retreat and they will gain half of\nthe prize each ($G\/2$) on average. Finally, when a hawk meets another hawk, they \nboth fight and each obtains an average payoff of $(G-C)\/2$, where $C$ is the cost of any\ninjury that might occur in the fight. It is assumed that $C > G$, i.e. the cost of injury\nalways exceeds the prize of the fight.\nThe game has the same structure as the Prisoner's Dilemma in that if both players\ncooperate (i.e. they play dove), they both gain something, although there is a strong motivation\nto act aggressively (i.e. to play the hawk strategy). However, in this game one makes the\nassumption that one player is willing to cooperate, even if the other does not, and that mutual defection, i.e. result (H,H), is detrimental to both players. \n\nIn contrast to the Prisoner's Dilemma, which has a unique Nash equilibrium that corresponds to\nboth players defecting, the Hawk-Dove game has two Nash equilibria in pure strategies,\n(H,D) and (D,H), and a third equilibrium in mixed strategies where strategy H is played\nwith probability $G\/C$, and strategy D with probability $1 - G\/C$. 
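A one-line numerical check (ours, with illustrative values $G=2$, $C=3$) confirms the defining property of this mixed equilibrium: when the opponent plays H with probability $G\/C$, hawk and dove earn the same expected payoff.

```python
# Indifference check for the Hawk-Dove mixed equilibrium, using the
# payoff bi-matrix above with illustrative values G = 2, C = 3 (C > G).
G, C = 2.0, 3.0
p = G / C                                   # probability of meeting a hawk

payoff_H = p * (G - C) / 2 + (1 - p) * G    # expected payoff of playing H
payoff_D = p * 0.0 + (1 - p) * G / 2        # expected payoff of playing D
assert abs(payoff_H - payoff_D) < 1e-12     # indifference at p = G/C
```

The common equilibrium payoff works out to $G(C-G)\/2C$, which is strictly less than the mutual-cooperation payoff $G\/2$, the usual signature of a social dilemma.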
Note that we only consider\none-shot games in this work; repeated games are not taken into account.\n\nConsidering now not just two players but rather a large, mixing population of identical players,\n\\textit{evolutionary\ngame theory} \\cite{hofb-sigm-book-98} prescribes that the only evolutionarily\nstable strategy (ESS) of the population is the mixed strategy, giving rise, at equilibrium,\nto a frequency of hawks in the population equal to $G\/C$.\nIn the case of the Prisoner's Dilemma, one finds a unique ESS with all the individuals defecting.\nHowever,\nin 1992, Nowak and May \\cite{nowakmay92} showed\nthat cooperation in the population is sustainable in the Prisoner's Dilemma under certain conditions,\nprovided that the network of the interactions between players has a lattice spatial structure. Killingback and Doebeli \\cite{KD-96} extended the\nspatial approach to the Hawk-Dove game and found that a planar lattice structure\nwith only nearest-neighbor interactions may favor cooperation, i.e. the fraction of doves in\nthe population is often higher than what is predicted by evolutionary game theory. In addition,\ncomplex dynamics resembling phase transitions were observed, which is not the case in the\nmixing population.\nIn a more recent work however, Hauert and Doebeli \\cite{hauer-doeb-2004} were led to a different conclusion, namely that\nspatial structure does not seem to favor cooperation in the Hawk-Dove game. \nAdditional\nresults on the Hawk-Dove game on a two-dimensional lattice have been recently obtained by Sysi-Aho et al.\n\\cite{myopic-hd-05} using a simple local decision rule for the players that does not reduce to the customary\n\\textit{replicator} or \\textit{imitation} dynamics \\cite{hofb-sigm-book-98}. They concluded that,\nwith their player's decision rule, cooperation persists, giving results different from those obtained\nwith the replicator dynamics. 
\nThese apparently\ncontradictory results aroused our curiosity, and motivated us to study the situation in a more general\nsetting, in which the mixing population and the lattice are special cases. \n\nFollowing pioneering work by sociologists in the sixties such as that of Milgram\n\\cite{milgram67}, in the last few years it has become apparent that the topological structures of social\ninteraction networks have particular, and partly unexpected, properties that are a consequence\nof their \\textit{small-world} characteristics. Roughly speaking, small-world networks are\ngraphs that have a short \\textit{average path length}, i.e. any node is relatively close to any other\nnode, like random graphs and unlike regular lattices.\nHowever, in contrast with random graphs, they also have a certain amount of local structure,\nas measured, for instance, by a quantity called the \\textit{clustering coefficient} (an excellent\nreview of the subject is \\cite{newman-03}).\nIn the same vein, many real conflicting situations in economics and sociology are well \ndescribed neither by a fixed\ngeographical position of the players in a regular lattice, nor by a mixing population or a\nrandom graph. Starting from the two limiting cases of the random graph and the two-dimensional lattice, our objective\nhere is to study the Hawk-Dove game on small-world networks in order to cover the ``middle ground''\nbetween them. Although the Watts--Strogatz networks \\cite{watts-strogatz-98} used here are not faithful representations\nof the structure of real social networks, they are a useful first step toward a better understanding\nof evolutionary games on networks. While we study here the Hawk-Dove game, this class of networks has been previously used for the Prisoner's \nDilemma in \\cite{social-pd-kup-01,pd-dyn-sw-02,watts99}.
The work of \\cite{social-pd-kup-01} is especially relevant\nfor our present study, while the two others deal either with special features such as\n``influential individuals''\\cite{pd-dyn-sw-02}, or refer to iterated versions of the game \\cite{watts99}.\n\nRecently, Santos and Pacheco \\cite{santos-pach-05} have investigated both the Prisoner's\nDilemma and Hawk-Dove games on\nfixed scale-free networks. The main observation from their simulations is that, at least on preferential attachment\nnetworks, the amount of cooperative behavior is much higher than in either mixing or \nlattice-structured populations. In the abstract, and in some particular social situations\nin which some individuals have an unusually high number of contacts compared to the rest, this is an interesting result.\nHowever, scale-free graphs, which characterize the web and Internet among others, are not a realistic \nmodel of most observed social networks for various reasons (see \\cite{jin-gir-newman-01,ebel-dav-born-03}),\nwhich is why we do not comment further on the issue.\n\n\\section{\\label{sect:model}The Model}\nIn this section we present our network models and their dynamical properties.\n\n\\subsection{\\label{pop-topo}Population Topologies}\nWe consider a population $P$ of $N$ players where\neach individual $i$ in $P$ is represented as a vertex $v_i$ of a graph $G(V,E)$,\nwith $v_i \\in V, \\; \\forall i \\in P$. An interaction between two players $i$ and\n$j$ is represented by the undirected edge $e_{ij} \\in E, \\; e_{ij} \\equiv e_{ji}$.\nThe number of neighbors of player $i$ is the degree $k_i$ of vertex $v_i$.
The average\ndegree of the network will be called $\\bar k$.\n\nWe shall use three main graph population structures: regular lattices, random graphs, and small-world networks.\nIn fact, our goal is to explore significant population network structures that somehow fall\nbetween the regular lattice and random graph limits, including the bounding cases.\n\nOur regular lattices are two-dimensional with $k_i=8, \\; \\forall v_i \\in V$ and periodic boundary conditions. \nThis neighborhood is usually called the Moore neighborhood and comprises nine individuals, including the central node.\nWe would like to stress that we believe regular lattice structures are not realistic representations\nof most actual population structures, especially human, except when mobility and dispersal ability\nof the individuals are limited as, for example, in plant ecology and territorial animals. \nThe main reason why lattices have been so heavily used is that they are more amenable to mathematical analysis and are easier to simulate.\nWe include them here for two reasons: as an interesting bounding case, and to allow comparison with previous work.\n \nThe small-world networks used here are similar to the graphs proposed by Watts and Strogatz\n\\cite{watts-strogatz-98}. However, there are two main differences (see \\cite{boccara-04}). First, we start from a \ntwo-dimensional regular lattice substrate, instead of a one-dimensional lattice. This does not\nmodify the main features of the resulting graphs, as observed in \\cite{watts99}, and as measured by us.\nThe reason for starting from a two-dimensional lattice is to keep to the customary ordered population\ntopology that is used in structured evolutionary games.
Although\nthey have been used as a starting point for the Prisoner's Dilemma by Abramson and Kuperman \\cite{social-pd-kup-01}, one-dimensional lattices do not\nmake much sense in a social or biological setting, though after some rewiring the effect of\nthe substrate becomes almost negligible. \n\nThe second difference is in the rewiring process. The\nalgorithm used here comes from \\cite{boccara-04} and works as follows: starting from a regular two-dimensional lattice with\nperiodic boundary conditions, visit each edge and, with probability $p$, replace it by\nan edge between two randomly selected vertices, with the constraint that two vertices are\nnot allowed to be connected by more than one edge. As in the original Watts--Strogatz model,\nthe average vertex degree $\\bar k$ does not change, and the process may produce disconnected graphs, \nwhich have been avoided in our simulations.\nThe advantage of this construction is that, for $p \\rightarrow 1$,\nthe graph approaches a classical Erd\\\"os--R\\'enyi random graph, while this is not the case for\nthe original Watts--Strogatz construction, since in the latter, the degree of any vertex is \nalways larger than or\nequal to $k\/2$, $k$ being the degree of a vertex in the original lattice. \n\nWe would like to point out that it is known that Watts--Strogatz small worlds \nare not adequate representations of social networks. Although they share some common statistical\nproperties with the latter, i.e. high clustering and short average path length, they lack other features that characterize\nreal social networks such as clusters and dynamical self-organization \\cite{ebel-dav-born-03}. In \nspite of these shortcomings, they are\na convenient first approximation for studying the behavior of agents in situations where the interaction\nnetwork is neither regular nor random. Note also that once fixed, the interaction network\ndoes not change during the system evolution in our study, only the strategies may evolve.
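The rewiring construction described above can be sketched in a few lines of Python. This is only an illustrative sketch (the function names are ours, and the connectivity check mentioned in the text is omitted):

```python
import random

def moore_lattice(L):
    """Edges of an L x L torus where each site is linked to its 8 Moore neighbors."""
    edges = set()
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) != (0, 0):
                        j = ((x + dx) % L) * L + (y + dy) % L
                        edges.add((min(i, j), max(i, j)))
    return edges

def rewire(edges, n, p):
    """Visit each edge; with probability p replace it by an edge between two
    randomly selected vertices, disallowing self-loops and duplicate edges.
    The total number of edges (hence the average degree) is preserved."""
    edges = set(edges)
    for e in list(edges):
        if random.random() < p:
            edges.discard(e)
            while True:
                a, b = random.randrange(n), random.randrange(n)
                if a != b and (min(a, b), max(a, b)) not in edges:
                    edges.add((min(a, b), max(a, b)))
                    break
    return edges
```

With $p=0$ the lattice is untouched, while $p=1$ approximates the random-graph limit with the same number of edges; intermediate values give the small-world regime studied here.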
Evolutionary games\non dynamic networks have been studied, for instance in \\cite{zimm-et-al-04,luthi-giac-tom-05,games-ecal-05}.\n\n\\subsection{\\label{dyn}Population Dynamics}\n\\subsubsection{Local Dynamics}\nThe local dynamics of a player $i$ only depends on its own strategy $s_i \\in \\{H,D\\}$, and on\nthe strategies of the $k_i$ players in its neighborhood $N(i)$. Let us call $M$ the payoff matrix\nof the game (see section \\ref{intro}). The quantity\n$$G_i(t) = \\frac{1}{k_i} \\sum _{j \\in N(i)} s_i(t)\\; M\\; s_{j}^T(t)$$\n\\noindent is the average payoff collected by player $i$ at time step $t$.\nNote that in our study, $i \\notin N(i)$ meaning that self-interaction is not considered when\ncalculating the average payoff of an individual.\nSelf-interaction has traditionally been taken into account in some previous work on the\nPrisoner's Dilemma\ngame on grids \\cite{nowakmay92,nowaketal94} on the grounds that, in biological applications,\nseveral entities may occupy a single patch in the network. Nowak et al. find that\nself-interaction does not qualitatively change the results in the Prisoner's Dilemma game. In the Hawk-Dove game\nself-interaction is usually not considered; moreover, in this work we wish to compare results\nwith those of \\cite{KD-96,hauer-doeb-2004}, where self-interaction is not included.\n\n\nWe use three types of rules to update a player's strategy. The rules are among those employed by\nHauert et al. \\cite{hauer-doeb-2004} to allow for comparison of the results in regular lattices\nand in small-world networks. 
Decision rules based on the player's satisfaction degree, \nsuch as those used in \\cite{zimm-et-al-04,games-ecal-05,luthi-giac-tom-05,myopic-hd-05}, are not examined here.\nThe rules are the following:\n\\begin{enumerate}\n\\item replicator dynamics;\n\\item proportional updating;\n\\item best-takes-over.\n\\end{enumerate}\n\n\nThe \\textit{replicator dynamics} rule aims at maximal consistency with the original\nevolutionary game theory equations. Player $i$ is updated by drawing\nanother player $j$ at random from the neighborhood $N(i)$\nand replacing $s_i$ by $s_j$ with probability $p_j = \\phi(G_j - G_i)$ \\cite{hofb-sigm-book-98}.\n\n\nThe \\textit{proportional updating} rule is also a stochastic rule. All the players in the neighborhood $N(i)$,\nplus player $i$ itself, compete for the strategy that $i$ will take at the next time step, each with a probability $p_j$ given by\n$$ p_j = \\frac{G_j}{\\sum_l G_l}, \\;\\; l,j \\in \\{N(i) \\cup i \\}.$$\n\\noindent Negative payoffs cannot be used with this rule, since the probabilities of replication\nmust be $p_j \\ge 0$. In order to avoid negative, or zero, values, the payoffs have been\nshifted by an amount equal to the cost $C$, which, of course, leaves the game's Nash equilibria invariant.\n\n\n\nIn \\textit{best-takes-over}, the strategy $s_i(t)$ of individual $i$ at time step $t$ will\nbe\n$$s_i(t) = s_j(t-1),$$\nwhere\n$$j \\in \\{N(i) \\cup i\\} \\;s.t.\\; G_j = \\max \\{G_k(t-1)\\}, \\; \\forall k \\in \\{N(i) \\cup i\\}.$$\n\\noindent That is, individual $i$ will adopt the strategy of the player with the highest\npayoff among its neighbors and itself.\nIf there is a tie, the winning individual is chosen uniformly at random among the best, and its strategy\nreplaces the current strategy of player $i$; otherwise the rule is deterministic.
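The three rules, applied to a single player $i$, can be sketched in Python as follows (an illustrative sketch, with best-takes-over last: the data layout and helper names are ours, strategies are encoded as 0 for hawk and 1 for dove, and payoffs are shifted by $C$ in the proportional rule as described above):

```python
import random

def payoff_matrix(G, C):
    """Hawk-Dove payoffs, rows/columns indexed by 0 = Hawk, 1 = Dove."""
    return [[(G - C) / 2, G],
            [0.0, G / 2]]

def average_payoff(i, strategy, neighbors, M):
    """Average payoff of player i against its neighbors (no self-interaction)."""
    return sum(M[strategy[i]][strategy[j]] for j in neighbors[i]) / len(neighbors[i])

def replicator_update(i, strategy, neighbors, M, d_max):
    """Imitate a random neighbor j with probability (G_j - G_i) / d_max if G_j > G_i."""
    j = random.choice(neighbors[i])
    gi = average_payoff(i, strategy, neighbors, M)
    gj = average_payoff(j, strategy, neighbors, M)
    if gj > gi and random.random() < (gj - gi) / d_max:
        return strategy[j]
    return strategy[i]

def proportional_update(i, strategy, neighbors, M, C):
    """i and its neighbors compete for i's next strategy, with probabilities
    proportional to their payoffs shifted by C (so all weights are positive)."""
    pool = neighbors[i] + [i]
    weights = [average_payoff(j, strategy, neighbors, M) + C for j in pool]
    return strategy[random.choices(pool, weights=weights)[0]]

def best_takes_over(i, strategy, neighbors, M):
    """Adopt the strategy of the highest-payoff player among i and its neighbors;
    ties are broken uniformly at random."""
    pool = neighbors[i] + [i]
    gains = [average_payoff(j, strategy, neighbors, M) for j in pool]
    best = max(gains)
    return strategy[random.choice([j for j, g in zip(pool, gains) if g == best])]
```

Note that the shift by $C$ makes the smallest possible weight $(G-C)/2 + C = (G+C)/2 > 0$, so the proportional rule is always well defined.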
It should be\nnoted that this rule does not fit the usual continuous evolutionary game theory that\nleads to replicator dynamics, since the update\ndecision is a step function.\n\n\\subsubsection{Global Dynamics}\nCalling $C(t) = (s_1(t), s_2(t), \\ldots , s_N(t))$ a \\textit{configuration} of the population\nstrategies at time\nstep $t$, the global \\textit{synchronous} system dynamics is implicitly given by:\n$$C(t) = F(C(t-1)), \\;\\; t =1,2, \\ldots $$\n\\noindent where $F$ is the evolution operator.\n\nSynchronous update, with its idealization of a global clock, is customary \nin spatial evolutionary games, and most results have been obtained using this model \n\\cite{nowakmay92,KD-96}.\nHowever, perfect synchronicity is only an abstraction. Indeed, in some biological\nand, particularly, sociological environments, agents normally act at different and possibly uncorrelated\ntimes, which seems to preclude a faithful globally synchronous simulation in most\ncases of interest \\cite{hubglance93}. In spite of this, it has been shown that the\nupdate mode does not fundamentally alter the results, as far\nas evolutionary games are concerned \\cite{nowaketal94,hauer-doeb-2004}. In this paper we\npresent results for both synchronous and asynchronous dynamics.\n\nAsynchronous dynamics must nevertheless be further qualified, since there \nare many ways of serially updating the strategies\nof the agents. Here we use the discrete update dynamics that makes the fewest assumptions\nabout the update sequence: the next player to be updated is chosen\nat random with uniform probability and with replacement.\nThis corresponds to a binomial distribution of the updating probability and is a good approximation of a continuous-time Poisson\nprocess. This asynchronous update is analogous to the one used by Hauert et al.
\n\\cite{hauer-doeb-2004}, which will allow us to make meaningful comparisons.\n\n\\section{\\label{sim}Simulation Results}\nIn order to analyze the influence of the structure of the network\non the proportion of cooperation (i.e. dove behavior),\n2500 players were organized into 5 different networks:\na 50 by 50 toroidal lattice where every cell is connected to its 8 nearest neighbors,\nthree different small-world networks, and the random graph.\nThe three categories of small worlds are obtained by rewiring each edge\nwith a certain probability $p$ using the technique described under \\ref{pop-topo}.\nThe values used are $p \\in \\{0.01,0.05,0.1\\}$.\nThe random graph is generated by first creating the lattice\nand then rewiring each link, in the same manner used to construct\nsmall worlds, but with probability $p=1$. Although our population size is smaller\nthan that used in \\cite{hauer-doeb-2004}, which is $10000$, results turn out to be\nqualitatively similar and comparable.\nFor each of the 5 networks mentioned above and for all update policies,\n50 runs of 5000 time steps each were executed.\nIn the following figures, the curves indicating the proportion of doves in the population\nwere obtained by averaging over the last 10 time steps of each run, well after all transients\nhave decayed.\nAt the beginning of each run, we generate a new network of the type being studied\nand randomly initialize it with 50\\% doves and 50\\% hawks. For completeness, we mention\nthat experiments with 10\\% and 90\\% initial cooperators give results that\nare qualitatively indistinguishable from the 50\\% case in the long run.
Therefore, we do not include the \ncorresponding graphs for reasons of space.\n\nIn the following figures, the dashed diagonal line going from a fraction of cooperators\nof $1$ for $r=0$, to a fraction of $0$ for $r=1$, represents the equation $1-G\/C = 1-r$,\nwhich is the equilibrium fraction of cooperators as a function of $r$ given by the standard replicator-dynamics equations \\cite{hofb-sigm-book-98},\nand it is reported here for the sake of comparison. \nIt should be noted, however, that the simulations are not expected to fit this line. The reason is\nthat the analytic solution is obtained under two main hypotheses: the population size is very\nlarge, and individuals are matched pairwise randomly. These conditions are not satisfied by\nthe finite-size, discrete systems used for the simulations, and thus one should not expect strict\nadherence to the mean-field equations. On the other hand, the type of \\textit{mesoscopic} system\nsimulated here is probably closer to reality, where finiteness and discreteness are the rule.\nAnother reason why we do not expect the results of the simulations to closely fit the\ntheoretical solution is that two of the local update rules\n(best-takes-over and proportional updating) do not reduce to the standard replicator dynamics.\n\nThis section is subdivided into three separate parts, one per decision rule previously\nmentioned under \\ref{dyn}.\n\n\\subsection{Replicator Dynamics}\nTo determine the probability $p_j$ for replacing an individual $i$, having a gain $G_i$, \nby one of its randomly chosen neighbors $j$, whose gain is $G_j$,\nwe use the previously introduced function $\\phi(G_j - G_i)$ as follows:\n\\begin{eqnarray}\np_j = \\phi(G_j - G_i) =\n\\begin{cases} \\textrm{\\large{$\\frac{G_j - G_i}{{d}_{max}}$}} & \\textrm{if $G_j - G_i > 0$}\\\\\\\\\n0 & \\textrm{otherwise}\n\\end{cases}\n\\label{repl_dyn_eq}\n\\end{eqnarray}\nwhere ${d}_{max} = \\frac{G+C}{2}$ is the largest difference in gain\nthere can be between
two players.\n\nWith this definition of $\\phi$, individual $i$ imitates neighbor $j$'s strategy\nwith a certain probability proportional to the difference of their average payoffs\nand only if $j$ has a higher gain than $i$.\nNotice that if $i$ and $j$ have the same average payoffs, $i$'s strategy is left untouched, while\nif $G_j -G_i = {d}_{max}$, $i$ necessarily adopts $j$'s strategy.\n\nNow taking a look at figures \\ref{repl_dyn_async} and \\ref{repl_dyn_sync},\nwe clearly observe that for both synchronous and asynchronous dynamics,\ncooperation is globally inhibited by spatial structure, confirming the results of \\cite{hauer-doeb-2004}.\nEven the case of the random graph generates higher rates of hawks.\nFurther details as to why this may occur can be found in section \\ref{disc}.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{repl_dyn_async_uc_3d}}\\protect & \\hspace*{1cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{repl_dyn_async_uc_dPercentage}}\\\\\n(a) & &(b)\n\\end{tabular}\n\\end{center}\n\\caption{\\label{repl_dyn_async}(Color online) asynchronous replicator dynamics updating;\n(a) frequency of doves as a function of the gain-to-cost ratio $r$ for different topologies:\nlattice ($p=0$), small worlds ($p=0.01$, $p=0.05$, $p=0.1$), random graph ($p=1$);\n(b) small world with $p = 0.05$ compared to the grid ($p=0$) and random graph ($p=1$) cases.\nBars indicate standard deviations and the diagonal dashed line is $1-r$ (see text).}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{repl_dyn_sync_3d}}\\protect & \\hspace*{1cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{repl_dyn_async_uc_dPercentage}}\\\\\n(a) & & (b)\n\\end{tabular}\n\\end{center}\n\\caption{\\label{repl_dyn_sync}(Color online) synchronous replicator dynamics updating;\n(a) frequency of doves as a function of the
gain-to-cost ratio $r$ for different topologies:\nlattice ($p=0$), small worlds ($p=0.01$, $p=0.05$, $p=0.1$), random graph ($p=1$);\n(b) small world with $p = 0.05$ compared to the grid ($p=0$) and random graph ($p=1$) cases.\nBars indicate standard deviations and the diagonal dashed line is $1-r$ (see text).}\n\\end{figure}\n\nWe note in passing that the experimental curve corresponding to the random graph limit appears\nto be close to the curve corresponding to the pair approximation calculation in Hauert and Doebeli's\nwork \\cite{hauer-doeb-2004}. This is not surprising, given that pair approximation works\nbetter in random graphs than in regular lattices, unless higher-order effects are taken into\naccount \\cite{van-baalen-00}. Since the curves for the random graphs in \nfigures \\ref{repl_dyn_async} and \\ref{repl_dyn_sync} are averages over many graph realizations,\neach pair has some probability of contributing to the simulation,\nwhich explains the resemblance between our experimental curves and the calculations of \\cite{hauer-doeb-2004}.\n\n\\subsection{Proportional Updating}\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{prop_async_uc_3d}}\\protect & \\hspace*{1cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{prop_async_uc_dPercentage}}\\\\\n(a) & & (b)\n\\end{tabular}\n\\end{center}\n\\caption{\\label{prop_async}(Color online) asynchronous proportional updating;\n(a) frequency of doves as a function of the gain-to-cost ratio $r$ for different topologies:\nlattice ($p=0$), small worlds ($p=0.01$, $p=0.05$, $p=0.1$), random graph ($p=1$);\n(b) small world with $p = 0.05$ compared to the grid ($p=0$) and random graph ($p=1$) cases.\nBars indicate standard deviations and the diagonal dashed line is $1-r$ (see text).}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{prop_sync_3d}}\\protect &
\\hspace*{1cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{prop_sync_dPercentage}}\\\\\n(a) & & (b)\n\\end{tabular}\n\\end{center}\n\\caption{\\label{prop_sync}(Color online) synchronous proportional updating;\n(a) frequency of doves as a function of the gain-to-cost ratio $r$ for different topologies:\nlattice ($p=0$), small worlds ($p=0.01$, $p=0.05$, $p=0.1$), random graph ($p=1$);\n(b) small world with $p = 0.05$ compared to the grid ($p=0$) and random graph ($p=1$) cases.\nBars indicate standard deviations and the diagonal dashed line is $1-r$ (see text).}\n\\end{figure}\n\nFigures \\ref{prop_async} and \\ref{prop_sync} show that,\nwhen using the proportional updating rule,\nspatial structure neither favors nor inhibits dove-like behavior,\ncontrary to what \\cite{KD-96} and \\cite{hauer-doeb-2004} seem to suggest.\nIndeed, for low values of $r$, the more the network is structured,\nthe higher the proportion of doves.\nHowever, as $r$ increases, the tendency is reversed,\nthus giving a lower percentage of doves in the lattice and small-world networks\nthan in the random graph topology.\nThis phenomenon is even more marked when using the asynchronous update.\n\nThus, when using the proportional updating rule, if spatial structure should favor one strategy over the other for a given value of $r$,\nit would be the one that is already present in greater numbers\nwhen the topology is a random graph.\n \n\nAnother interesting aspect observed is the higher percentage of doves\nwhen updating asynchronously compared to the synchronous equivalent.\nThis will be discussed in more detail in section \\ref{disc}.\n\n\n\\subsection{Best-takes-over}\nAs pointed out by Hauert and Doebeli \\cite{hauer-doeb-2004}, the best-takes-over rule lacks stochasticity,\nwhich, in figures \\ref{bto_async} and \\ref{bto_sync}, translates into discontinuous jumps.\n\nNote that when updating synchronously, best-takes-over is the only rule,\nout of the three studied here, where
spatial structure actually favors cooperation, as remarked\nin \\cite{KD-96}, where this was the local update rule used. In fact, the same qualitative results\nwere found in \\cite{hauer-doeb-2004}; however, they appear in the ``supplementary material'' section,\nnot in the main text.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{bto_async_uc_3d}}\\protect & \\hspace*{1cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{bto_async_uc_dPercentage}}\\\\\n(a) & & (b)\n\\end{tabular}\n\\end{center}\n\\caption{\\label{bto_async}(Color online) asynchronous best-takes-over updating;\n(a) frequency of doves as a function of the gain-to-cost ratio $r$ for different topologies:\nlattice ($p=0$), small worlds ($p=0.01$, $p=0.05$, $p=0.1$), random graph ($p=1$);\n(b) small world with $p = 0.05$ compared to the grid ($p=0$) and random graph ($p=1$) cases.\nBars indicate standard deviations and the diagonal dashed line is $1-r$ (see text).}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{bto_sync_3d}}\\protect & \\hspace*{1cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{bto_sync_dPercentage}}\\\\\n(a) & & (b)\n\\end{tabular}\n\\end{center}\n\\caption{\\label{bto_sync}(Color online) synchronous best-takes-over updating;\n(a) frequency of doves as a function of the gain-to-cost ratio $r$ for different topologies:\nlattice ($p=0$), small worlds ($p=0.01$, $p=0.05$, $p=0.1$), random graph ($p=1$);\n(b) small world with $p = 0.05$ compared to the grid ($p=0$) and random graph ($p=1$) cases.\nBars indicate standard deviations and the diagonal dashed line is $1-r$ (see text).}\n\\end{figure}\n\n\\subsection{Time Evolution}\n\\label{tev}\n\nWhile the figures in the previous subsections summarize the results at system stability,\nhere we describe the dynamical behavior of populations through the first $100$
time\nsteps, where fluctuations might influence the system dynamics.\n\nWe have studied both asynchronous and synchronous dynamics for\nthe three update rules in three topologies each: lattice, random graph, and a small\nworld with $p=0.05$. This was done for $r=0.7$, where defection predominates. The results are \nrelatively uninteresting for the replicator and proportional updates in all topologies. One\nobserves on average a monotone decrease of cooperation starting with $50\\%$ at time $0$ until\nthe curve flattens out at the values reported in figures \\ref{repl_dyn_async} to \\ref{prop_sync}.\nThe only difference is that\nthe variance is more pronounced in the proportional case, as one would expect looking\nat standard deviations in figures \\ref{repl_dyn_async} to \\ref{prop_sync}.\n\nThe situation is different, and more interesting, in the case of best-takes-over\nupdate, whose determinism causes stronger variations. The most striking feature is a \nsudden drop of cooperation at the beginning\nof the simulation, followed by an increase and by fluctuations whose amplitude diminishes\nover time. The effect is much more pronounced with synchronous dynamics, shown in Fig. \\ref{time-ev}, than\nwith the asynchronous one. The behavior appears in all three topologies but the drop is stronger\nin lattices and small worlds than in the random graph at early times. As time\ngoes by, fluctuations remain larger in the random graph case.
Nevertheless, no experiment\nled to total extinction of cooperators at $r=0.7$.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{bto_sync_07_0_dPercentage_time_ev}}\\protect &\n\\hspace*{0.3cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{bto_sync_07_005_dPercentage_time_ev}}\\protect &\n\\hspace*{0.3cm} &\n\\mbox{\\includegraphics[width=5.5cm, height=5.5cm]{bto_sync_07_1_dPercentage_time_ev}}\\\\\n(a) & & (b) & & (c)\n\\end{tabular}\n\\end{center}\n\\caption{\\label{time-ev} time evolution (first $100$ steps) of the proportion of doves\nfor best-takes-over update; synchronous evolution with $r=0.7$. (a) lattice structure;\n(b) small world with $p=0.05$; (c) random graph. Ten randomly chosen evolutions are shown in\neach case.}\n\\end{figure}\n\n\n\n\\section{\\label{disc}Analysis and Discussion}\nComparing Fig. \\ref{prop_async} and Fig. \\ref{prop_sync},\nwe notice that, for proportional dynamics, asynchronous updating allows for better cooperation than its\nsynchronous counterpart. The reason for this difference can be intuitively understood\nin the following manner:\nwhen updating asynchronously, let us suppose that a player $y$ has just imitated the strategy\nof one of its neighbors $x$. Another way of viewing this change is to say that player $x$ has ``infected''\nindividual $y$ with its strategy. If $x$ is a dove player, making $y$ a dove as well,\nnot only does the percentage of doves increase in the population, but\nthe next time either $x$ or $y$ is evaluated for an update, it will be able to take advantage of the other one's\npresence to help increase its payoff.
Hence, the two players mutually reinforce each other.\nMeanwhile, if $y$ is infected by $x$ and turns into a hawk, on the one hand $x$ has successfully propagated\nits strategy, thus increasing the overall amount of hawks in the population,\nbut on the other hand this propagation will cause it to have a lower payoff than it previously had.\nNot only is $x$'s payoff negatively affected, but $x$'s presence also harms $y$'s payoff.\n\nThe same reasoning cannot be held when updating synchronously.\nIndeed, a player $x$ may change strategies\nat the same time it infects its neighbor $y$.\nSo if $x$'s initial strategy was $D$, it might switch\nto $H$ as it infects its neighbor $y$, in which case\n$x$ will no longer have a positive effect on $y$'s payoff,\ncontrary to what happens in asynchronous updating.\n\nWhen applying the replicator dynamics rule, the small drop of the percentage of doves seen\non the very left of figures \\ref{repl_dyn_async} and \\ref{repl_dyn_sync} is due to the fact that for\n$r=0$ the game is somewhat degenerate. Indeed, any cluster of more than one hawk will either reduce to\na single hawk or totally disappear, since a dove, no matter what its neighborhood comprises,\nwill always have a gain of zero while a hawk that interacts with at least one other hawk will have a negative payoff.\nThe remaining lone hawks will however survive but will not be able to propagate (having a gain \nexactly equal to that of their neighboring doves). The system is thus found locked in a configuration\nof a very high proportion of doves with a significant number of isolated hawks.\n\nIf $r > 0$, lone hawks always have a higher payoff than the doves in their surroundings and will thus infect one\nof their neighbors with their strategy. However, for $0 < r \\leq 0.1 $, once the pair of hawks is established, their payoff\nis lower than that of any of the doves connected to either one of them.
Even a dove that interacts with both\nhawks has an average payoff still greater than what a hawk composing the\npair receives.\nConsequently, when $0 < r \\leq 0.1$, clusters of hawks first start by either disappearing\nor reducing to single hawks, as previously explained for the $r=0$ case, but then these lone hawks\nwill become pairs of hawks.\nIf the updates are done synchronously, a pair of hawks will either vanish\nor reduce back to a single hawk. One can clearly see that in the long run, hawks will become extinct.\nNow if the updates are done asynchronously, a pair cannot totally disappear since only one\nplayer is updated at a time. However, this mechanism of a pair reducing to a single hawk and\nturning back to a pair again\nwill cause the small groups of two hawks to move across the network and ``collide'' with each other,\nforming larger groups that reduce back to a single-pair hawk formation. Therefore, after a large\nnumber of time steps, only very few hawks will survive. \n\nIf we take another look at figures \\ref{repl_dyn_async} and \\ref{repl_dyn_sync},\nwe notice that when the population of players is constrained to a lattice-like structure,\nthe proportion of doves is reduced to zero for values of\nthe gain-to-cost ratio greater than or equal to approximately $0.8$,\nwhile this is not the case when the topology is a random graph.\nLet us try to give a qualitative explanation of the two different behaviors:\nthe first thing to be pointed out is that in the case of the replicator dynamics,\nif a dove is surrounded by 8 hawk-neighbors,\nit is condemned to die for values of $r$ greater than $\\frac{7}{9}$ whatever the topology may be.\nHowever, this does not explain why for these same values, doves no longer exist on\nsquare lattices or small worlds but are able to survive on random graphs.\nIf the population were mixing, $r=0.8$ would induce a proportion of doves equal to $20\\%$.\nTherefore, let us suppose that at a certain time step,\nthere is
approximately $20\\%$ of doves in our population.\nFurthermore, as pointed out by Hauert and Doebeli \\cite{hauer-doeb-2004}, in the Hawk-Dove\ngame on lattices, the doves are usually spread out and form many small, isolated patches.\nThus, we will also suppose that $20\\%$ of doves in the population\nimplies that in a set of players comprising an individual and its immediate eight neighbors,\nthere are about two doves.\nHence, a D-player has on average one dove and seven hawks in its neighborhood.\nIn the lattice network, this pair of doves can be linked in two different manners\n(see Fig. \\ref{2D_lattice_configs}), having either two or four common neighbors,\nthus an average of three.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\mbox{\\includegraphics[width=3.5cm, height=3.5cm]{lattice_2D_a}}\\protect\n& \\hspace{1cm} & \\mbox{\\includegraphics[width=3.5cm, height=3.5cm]{lattice_2D_b}}\\\\\n(a) & \\hspace{1cm} & (b)\\\\\n\\end{tabular}\n\\end{center}\n\\caption{\\label{2D_lattice_configs}(Color online) lattice: two possible configurations.}\n\\end{figure}\n\nMore generally, if we denote by $\\Gamma$ the clustering coefficient of the graph and by $\\overline{k}$\nthe average degree, a pair of doves will have on average $\\Gamma(\\overline{k} - 1)$\ncommon neighbors.\nLet us denote by $x$ one of the two doves composing the pair,\nby $H_x$ a hawk linked to $x$ but not to the other dove of the pair, and by $H_{x,y}$ one that is connected to both doves.\nIf $\\frac{2}{3} < r < \\frac{7}{8}$ and assuming that the hawks surrounding the pair of doves are not interacting with any other doves\n(this gives the pair of doves a maximum chance of survival), we have\n$$G_{H_x} < G_x < G_{H_{x,y}},$$\nwhere $G_\\alpha$ is the average payoff of player $\\alpha$.\n\nConsequently, according to Eq.
(\\ref{repl_dyn_eq}),\n$x$ can infect $H_x$, and $H_{x,y}$ can infect $x$.\n\nLet us now calculate for what values of $r$ the probability that $x$ invades the site\nof at least one $H_x$ is less than the probability of an $H_{x,y}$ infecting $x$.\nTo do so, let us distinguish the case of the asynchronous updating policy from the synchronous one.\n\n\\subsubsection*{Asynchronous Dynamics}\nThe probability that an $H_x$ neighbor is chosen to be updated and adopts strategy $D$ is given by\n\\begin{equation}\n\\underbrace{\\frac{(1 - \\Gamma)(\\overline{k} - 1)}{N}}_{(\\ast)} \\underbrace{\\frac{1}{\\;\\overline{k}\\;}}_{(\\ast\\ast)} \\;\\phi(G_x - G_{H_x}),\n\\label{H2D_async}\n\\end{equation}\nwhere $N$ is the size of the population, $(\\ast)$ the probability an $H_x$ hawk is chosen to be updated (among the $N$ players),\n$(\\ast\\ast)$ the probability the chosen $H_x$ hawk compares its payoff with player $x$,\nand finally $\\phi$ is the function defined in Eq. (\\ref{repl_dyn_eq}).\n\nThe probability that $x$ is chosen to be updated and is infected by one of the $H_{x,y}$ hawks is given by\n\\begin{equation}\n\\underbrace{\\frac{1}{N}}_{(\\ast)} \\underbrace{\\frac{\\Gamma(\\overline{k}-1)}{\\overline{k}}}_{(\\ast\\ast)} \\;\\phi(G_{H_{x,y}} - G_x),\n\\label{D2H_async} \n\\end{equation}\nwhere $(\\ast)$ is the probability $x$ is chosen to be updated,\n$(\\ast\\ast)$ the probability it measures itself against an $H_{x,y}$ neighbor,\nand $\\phi$ the function defined by Eq. 
(\\ref{repl_dyn_eq}).\n\nFor a square lattice with a Moore neighborhood ($\\Gamma = \\frac{3}{7}$ and $\\overline{k} = 8$),\nexpressions \\ref{H2D_async} and \\ref{D2H_async} give us $r > \\frac{46}{59} \\approx 0.78$,\nwhereas for a random graph, $\\Gamma = \\frac{\\overline{k}}{N-1} = \\frac{8}{2499} \\simeq 0.003 \\approx 0$ implies that\na pair of doves does not have any common hawk neighbors, enabling them to survive\nif $r < \\frac{7}{8}$.\nAs for the small-world cases, the clustering coefficient is very close to that of the lattice, generating a behavior\npractically identical to the latter.\nThis gives a qualitative explanation for the difference observed in Fig. \\ref{repl_dyn_async}.\n\n\\subsubsection*{Synchronous Dynamics}\nThe probability that at least one $H_x$ adopts strategy $D$ is given by\n\\begin{equation}\n1 - \\overbrace{[1 - \\underbrace{\\frac{1}{\\;\\overline{k}\\;}\\;\\phi(G_x - G_{H_x})}_{(\\ast)}]^{(1-\\Gamma)(\\overline{k}-1)}}^{(\\ast\\ast)},\n\\label{H2D_sync}\n\\end{equation}\nwhere $(\\ast)$ is the probability a specific $H_{x}$ turns into a dove and $(\\ast\\ast)$ the probability none of the $H_x$\nadopt strategy $D$.\n\nThe probability that $x$ adopts the hawk strategy is given by\n\\begin{equation}\n\\underbrace{\\frac{\\Gamma(\\overline{k} - 1)}{\\overline{k}}}_{(\\ast)} \\;\\phi(G_{H_{x,y}} - G_x),\n\\label{D2H_sync}\n\\end{equation}\nwhere $(\\ast)$ is the probability player $x$ compares its payoff with one of its $H_{x,y}$ neighbors.\n\nFor a square lattice with a Moore neighborhood ($\\Gamma = \\frac{3}{7}$ and $\\overline{k} = 8$),\nexpressions \\ref{H2D_sync} and \\ref{D2H_sync} yield\n$$\n1 - \\left[1 - \\frac{1}{8}\\left(\\frac{-8G+7C}{G+C}\\right)\\right]^4 < \\frac{3}{8}\\left(\\frac{9G - 6C}{G+C}\\right),\n$$\nand given that $\\frac{G}{C} = r$, we obtain\n$$\n1 - \\left[1 - \\frac{1}{8}\\left(\\frac{-8r+7}{r+1}\\right)\\right]^4 < \\frac{3}{8}\\left(\\frac{9r - 6}{r+1}\\right),\n$$\nwhich is true for about $r > 0.775$. 
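Both thresholds quoted above are easy to verify numerically. A minimal sketch (Python; it assumes that with $\Gamma=3/7$, $\overline{k}=8$ the asynchronous comparison reduces to $4(-8r+7) < 3(9r-6)$, using the same payoff differences as in the synchronous derivation):

```python
from fractions import Fraction

# Asynchronous lattice case: with Gamma = 3/7 and kbar = 8, comparing the two
# update probabilities reduces to 4*(-8r + 7) < 3*(9r - 6), i.e. r > 46/59.
r_async = Fraction(4 * 7 + 3 * 6, 4 * 8 + 3 * 9)
assert r_async == Fraction(46, 59)  # ~0.78, as quoted in the text

# Synchronous lattice case: scan for the crossover of the displayed inequality.
def lhs(r):
    """P(at least one of the 4 non-shared H_x hawks turns into a dove)."""
    return 1 - (1 - (1 / 8) * ((-8 * r + 7) / (r + 1))) ** 4

def rhs(r):
    """P(the dove x is infected by one of its shared H_{x,y} hawk neighbors)."""
    return (3 / 8) * ((9 * r - 6) / (r + 1))

r = 2 / 3
while lhs(r) >= rhs(r):
    r += 1e-4
print(round(r, 3))  # -> 0.775
```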
This also holds for the small-world cases, since, once again,\nthey have a $\\Gamma$ close to that of the lattice. \n\nFor a random graph of $N=2500$ nodes and $\\overline{k}=8$, we have $\\Gamma \\approx 0$.\nTherefore, a pair of doves has a negligible probability of having a hawk neighbor in common and thus cannot be infected by the H strategy if $r < \\frac{7}{8}$.\nThis enables a small percentage of doves to survive on the random graph topology, contrary to the lattice and small-world\nnetworks (see Fig. \\ref{repl_dyn_sync}). \n\nIn short, whether the update policy is asynchronous or synchronous, as soon as $r > \\frac{7}{9}$,\nisolated doves, as well as pairs of doves surrounded by hawks, will end up disappearing in the\nlattice and small-world cases due to the high clustering coefficient.\nHowever, in the random graph scenario, although isolated doves are also bound to die if $r > \\frac{7}{9}$,\npairs of doves have a more than even chance of surviving (at least as long as $r < \\frac{7}{8}$).\n\n\\section{\\label{concl}Conclusions}\nIn this work we clarify previous partially contradictory results on cooperation in populations playing the\nHawk-Dove game on regular grids. Furthermore, we notably extend the study to Watts--Strogatz small-world graphs, as\nthese population structures lie between the two extreme cases of regular lattices and random graphs,\nand are a first simple step towards real social interaction networks. 
This allows us to\nunravel the role of network clustering in cooperation in the Hawk-Dove game.\nWe find that, in general, spatial structure on the network of interactions in the game\neither favors or inhibits cooperation with respect to the perfectly mixed case.\nThe influence it has depends not only on the rule that determines a player's future strategy,\nbut also on the value of the gain-to-cost ratio $G\/C$ and, to a lesser degree,\non the synchronous or asynchronous timing of events.\n\nIn the case of the best-takes-over rule, dove-like behavior is advantaged if synchronous\nupdate is used, but the rule is noisy due to its discrete nature.\nIn the case of the proportional update\nrule, giving the network a regular structure tends to increase\nthe percentage of the strategy that would already be in the majority on a random graph\nconfiguration of the population. The more pronounced the structure, in terms of clustering\ncoefficient, the higher the percentage of the dominant strategy. In fact, cooperation predominates\nfor low to medium $r$ values, while for higher $r$ values cooperation falls below the\nlarge, well-mixed population case.\nFinally, the replicator dynamics rule tends to favor hawks over doves on\nspatially structured topologies such as small worlds and square lattices, thus\nconfirming previous results for regular lattices and extending them to small-world networks.\nIn the end, although small-world topologies show behaviors that are somewhat in between\nthose of the random graph and the two-dimensional lattice, they usually tend more\ntowards the latter, at least in terms of cooperation level.\n\nIn this work, we have used static\nnetwork structures, which is a useful step but not realistic enough, as the interactions\nthemselves help shape the network. 
In future work we shall\nextend the study using more faithful social network structures, including their dynamical aspects.\n\n\n\\begin{small}\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Noncommutative spacetime and nonabelian gauge fields}\n\\hspace*{12pt}Between 1921-1948, Kaluza, Klein and Thiry \\cite{KK} showed that the Hilbert-Einstein action in the spacetime extended with a circle ${\\cal M}^4 \\times S^1$ consisted of gravity, electromagnetism and a Brans-Dicke scalar. In 1968, R. Kerner \\cite{Kerner} generalized the Kaluza-Klein theory to include nonabelian gauge fields. Today, the multi-dimensional theories are studied widely as candidates for unified theories of interactions. However, these theories have the weakness of containing an infinite tower of massive fields, leading to theoretical and observational obstacles.\n\nIn the 1980s, Connes put forward the new concept of spacetime based on noncommutative geometry (NCG) \\cite{Co}. In 1986, Connes and Lott \\cite{CoLo} applied the idea to the spacetime extended by two discrete points ${\\cal M}^4 \\times Z_2$ and showed that Higgs fields emerged naturally in a gauge theory with a quartic potential. The most attractive feature of NCG with discrete extra dimensions is that it does not contain an infinite tower of massive fields.\n \nIn 1993, Chamseddine, Felder and Fr\\\"ohlich \\cite{CFF} made the first attempt to generalize the Hilbert-Einstein action to NCG, leading to no new physical content. In 1994, G. Landi, N. A. Viet and K. C. Wali \\cite{LVW} overcame this no-go result and derived the zero mode sector of the Kaluza-Klein theory from the generalized Hilbert-Einstein action. Viet and Wali \\cite{VW1} have generalized this model further and obtained a full spectrum consisting of bigravity, bivector and biscalar. 
In each pair, one field is massless and the other one is massive.\n\nThe incorporation of the nonabelian gauge fields in Viet-Wali's model is not a trivial task. Recently, Viet and Du \\cite{VietDu} have successfully derived the nonabelian gauge interaction from the Hilbert-Einstein action. However, it is possible to do so only in the two following cases:\n\ni. The gauge vector fields must be abelian on one sheet of spacetime and nonabelian on the other one. This is exactly the case of the electroweak gauge fields on the two copies of Connes-Lott's spacetime of chiral spinors.\n\nii. The gauge vector field must be the same on both copies of the spacetime of chiral spinors. This is also the case of QCD, the theory of the strong interaction.\n\nSo, NCG can \"explain\" the specific gauge symmetry structure of the Standard Model.\n \nIn this article, we propose a new noncommutative spacetime structure ${\\cal M}^4 \\times Z_2 \\times Z_2$, which is the ordinary spacetime extended by two discrete extra dimensions, each consisting of two discrete points. In other words, this noncommutative spacetime consists of two copies of Connes-Lott's spacetime. The generalized Hilbert-Einstein action in this new spacetime contains all the known interactions of Nature and the observed Higgs field. In a more general case, this theory can also lead to multigravity, which might be necessary to explain observations related to dark matter and inflationary cosmology. 
This structure can also be viewed as a four-sheeted spacetime having a noncommutative differential structure with the following spectral triplet:\n\ni) The Hilbert space ${\\mathcal H}= {\\mathcal H}^v \\oplus {\\cal H}^w $, which is a direct sum of the two Hilbert spaces ${\\mathcal H}^u = {\\cal H}^u_L \\oplus {\\cal H}^u_R, u = v,w$, each of which is in turn a direct sum of the Hilbert spaces of left-handed and right-handed spinors. Thus the wave functions $ \\Psi \\in {\\mathcal H}$ can be represented as follows \n\\begin{equation}\n \\Psi(x) = \\pmatrix{\n \\Psi^v(x) \\cr\n \\Psi^w(x) \\cr\n } ~~,~~ \n \\Psi^u(x) = \\pmatrix{\n \\psi^u_L(x) \\cr\n \\psi^u_R(x) \\cr\n } \\in {\\mathcal H}^u ~;~ u=v,w,\n\\end{equation}\nwhere the functions $\\psi^u_{L,R}(x) \\in {\\cal H}^u_{L,R}$ are defined on the 4-dimensional spin manifold ${\\cal M}^4$.\n\nii) The algebra ${\\cal A}={\\cal A}^v \\oplus {\\cal A}^w ; {\\cal A}^u = {\\cal A}^u_L \\oplus {\\cal A}^u_R$ contains the 0-form ${\\cal F}$ \n\\begin{equation}\\label{0form}\n {\\mathcal F}(x) = \\pmatrix{\n F^v(x) & 0 \\cr\n 0 & F^w(x) \\cr\n } ~,~ F^u(x) = \\pmatrix{\n f^u_L(x) & 0 \\cr\n 0 & f^u_R(x) \\cr\n } \\in {\\cal A}^u,\n\\end{equation}\nwhere $f^u_{L,R}(x)$ are real valued function operators defined on the ordinary spacetime ${\\cal M}^4$ and acting on the Hilbert spaces ${\\cal H}^u_{L,R}$.\n\niii) The Dirac operator ${\\cal D} = \\Gamma^P D_P = \\Gamma^\\mu \\partial_\\mu + \\Gamma^5 D_5+ \\Gamma^6 D_6, P=0,1,2,3,5,6 $ is defined as follows\n\\begin{eqnarray}\\label{Dirac1}\n{\\cal D} &=& \\pmatrix{\nD & m_1 \\theta_1 \\cr\nm_1 \\theta_1 & D \\cr\n}~,~ D = \\pmatrix{\nd & m_2 \\theta_2 \\cr\nm_2 \\theta_2 & d \\cr} ~,~ d = \\gamma^\\mu \\partial_\\mu \\\\ \nD_\\mu {\\cal F} &=& \\pmatrix{\n\\partial_\\mu F^v(x) & 0 \\cr\n0 & \\partial_\\mu F^w(x) \\cr \n} ~,~\n\\partial_\\mu F^u = \\pmatrix{\n\\partial_\\mu f^u_L(x) & 0 \\cr\n0 & \\partial_\\mu f^u_R(x) \\cr \n} \\\\\nD_{6} {\\cal F} &=& m_1(F^v - F^w){\\bf r} ~,~ D_5 F^u = 
m_2(f^u_L-f^u_R) {\\bf r} ~,~ {\\bf r} = \\pmatrix{\n1 & ~0\\cr\n0 & -1\\cr\n} \n\\end{eqnarray}\nwhere $\\theta_1, \\theta_2$ are Clifford elements satisfying $ \\theta^2_1 = \\theta^2_2=1$, and $m_1, m_2$ are parameters with the dimension of mass. \n\nThe construction of noncommutative Riemannian geometry in the Cartan formulation is given in \\cite{VW1} in a perfect parallelism with the ordinary one. Here we will use the following flat and curved indices to extend the 4 dimensions with the 5-th and 6-th dimensions.\n\\begin{eqnarray}\nE,F,G = A, \\dot{6} ~,~ & A,B,C = a, \\dot{5}&~,~ a,b,c =0,1,2,3 \\\\\nP,Q,R = M,6 ~,~ &M,N,L = \\mu, 5 &~,~\\mu,\\nu, \\rho=0,1,2,3.\n\\end{eqnarray}\n\nThe starting point is the locally flat reference frame, which is a linear transformation of the curvilinear one with the vielbein coefficients. For transparency, let us write down the vielbeins in 4, 5 and 6 dimensions as follows \n\\begin{equation}\ne^a = dx^\\mu e^a_\\mu(x) ~,~ E^A = DX^M E^A_M(x) ~,~{\\cal E}^E = DX^P {\\cal E}^E_P(x),\n\\end{equation}\nwhere $e^a_\\mu(x), E^A_M(x), {\\cal E}^E_P(x)$ are the 4-, 5- and 6-dimensional vielbeins.\nThe Levi-Civita connection 1-forms $\\Omega^\\dagger_{EF} = - \\Omega_{FE}$ are introduced as a direct generalization of the ordinary case. With a condition \\cite{VW1}, which is a generalization of the torsion-free condition, one can determine the Levi-Civita connection 1-forms and hence the Ricci curvature tensor from the generalized Cartan structure equations \n\\begin{eqnarray}\n{\\cal T}^E &= & DE^E + E^F \\Omega^E_F \\label{Torsion}\\\\\n{\\cal R}^{EF} &=& D\\Omega^{EF}+ \\Omega^E_G \\wedge \\Omega^{GF} \\label{Curv}\n\\end{eqnarray}\nThen we can calculate the Ricci scalar curvature $R=\\eta^{EG} \\eta^{FH} R_{EFGH}$.\n\nThe construction of our model is carried out in two subsequent steps. 
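Before moving to those two steps, the action of the discrete derivatives $D_5$ and $D_6$ defined above can be made concrete with a small numerical sketch (numpy; the component values of the 0-form and the masses $m_1, m_2$ are purely illustrative):

```python
import numpy as np

# Hypothetical values of the component functions at a fixed point x
fvL, fvR = 1.0, 2.0          # f^v_{L,R}(x)
fwL, fwR = 3.0, 5.0          # f^w_{L,R}(x)
m1, m2 = 0.5, 0.7            # mass-dimension parameters (illustrative)

r = np.diag([1.0, -1.0])     # the matrix r from the text
Fv = np.diag([fvL, fvR])     # F^v(x)
Fw = np.diag([fwL, fwR])     # F^w(x)

# D_5 differentiates between the chiral sheets within each copy:
D5_Fv = m2 * (fvL - fvR) * r     # D_5 F^v = m_2 (f^v_L - f^v_R) r
D5_Fw = m2 * (fwL - fwR) * r

# D_6 differentiates between the two copies v and w:
D6_F = m1 * (Fv - Fw) @ r        # D_6 F = m_1 (F^v - F^w) r

# Like any Z_2 finite difference, D_6 annihilates sheet-symmetric 0-forms:
assert np.allclose(m1 * (Fv - Fv) @ r, 0.0)
```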
First, we construct the 6-dimensional Ricci curvature with an ansatz containing one 5-dimensional gravity field and two 5-dimensional vector fields, where one is abelian and the other nonabelian, in order to use Viet-Du's results. Then\n\\begin{equation}\nR_6 = R_5 - {1 \\over 4} G^{MN} G_{MN} = R_5 + {\\cal L}_g(5)\n\\end{equation}\nwhere $G_{MN}; M,N=0,1,2,3,5$ is the 5-dimensional covariant field strength tensor of the nonabelian $SU(2)\\times U(1)$ gauge fields.\n\nIn the second step, the gravity sector is reduced further to 4-dimensional gravity and the nonabelian $SU(3)$ gauge vector of the strong interaction \n\\begin{equation}\nR_5 = R_4 - {1 \\over 4} Tr H^{\\mu \\nu} H_{\\mu \\nu} ~~,~~ H_{\\mu \\nu} = \\partial_\\mu B_\\nu - \\partial_\\nu B_\\mu + i g_S [B_\\mu, B_\\nu],\n\\end{equation}\nwhere $B_\\mu = B^i_\\mu(x) \\lambda^i$ are the gluon fields and $\\lambda^i, i=1,\\ldots,8$ are the Gell-Mann matrices.\n\nConnes-Lott's procedure can now be applied to reduce the 5-dimensional gauge Lagrangian ${\\cal L}_g(5)$ to the 4-dimensional electroweak gauge-Higgs sector of the Standard Model as follows\n\\begin{equation}\n{\\cal L}_g(5) = -{1 \\over 4}( F^{\\mu\\nu} F_{\\mu\\nu}+ G^{\\mu \\nu} G_{\\mu \\nu}) + {1 \\over 2} \\nabla^\\mu \\bar{H} \\nabla_\\mu H + V(\\bar{H}, H), \n\\end{equation} \nwhere $H$ is a Higgs doublet, $\\nabla_\\mu$ is the gauge covariant derivative and $V(\\bar{H}, H)$ is the usual quartic potential of the Higgs field.\n\\section{Multigravity in noncommutative spacetime}\n\n\\hspace*{12pt}In Section 2, we have presented the minimal ansatz that includes all the known interactions and the Higgs fields. New cosmological observations might shed light on the more detailed structure of the new noncommutative spacetime. In principle, in a more general case, our model can accommodate up to 4 gravitational fields, one of which is massless while the others are massive. 
\n\nFrom a theoretical point of view, this model can provide a geometric construction of massive gravity, which has\nrecently attracted a lot of attention as a candidate theory of modified gravity \\cite{DeRham}. From the viewpoint of modern cosmology, multigravity might give new explanations for the existence of dark matter and for inflationary cosmology.\n \n\\section{Summary and discussions}\nWe have presented a new noncommutative spacetime ${\\cal M}^4 \\times Z_2 \\times Z_2$, which can unify all the known interactions and the Higgs field on a geometric foundation. This is very similar to the foundation of Einstein's general relativity. This model unifies all forces in nature without resorting to an infinite tower of massive fields.\n\nIn the most general case, this theory can contain more (but still a finite number of) degrees of freedom, including four different massless and massive gravity fields, Brans-Dicke scalars and more gauge fields. The model can provide a geometric foundation for the theories of massive and modified gravity. Reality might be just a special case of the most general theory. Cosmological observations might help us to see more details of this theory.\n\nThere are some issues that at the moment we are not able to answer, such as the physical meaning of the sixth dimension and the energy scale of this theory. It is worth quoting the following relation from the work by Viet and Du \\cite{VietDu}\n\\begin{equation}\ng = 8 m \\sqrt{\\pi G_N},\n\\end{equation}\nwhere $g$ is the weak coupling constant and $G_N$ is the Newton constant. This relation must hold when the theory becomes valid. One can speculate that this might happen at an energy scale a million times lower than the Planck scale. That might be the case at some stage of the evolution of our universe after the Big Bang. 
All the above perspectives would merit more research.\n\\section*{Acknowledgments}\nThe discussions with Pham Tien Du, Do Van Thanh (College of Natural Sciences, VNU) and Nguyen Van Dat (ITI-VNU) are greatly appreciated. The author would also like to thank Jean Tran Thanh Van for the hospitality at Quy Nhon and for his support. The work is partially supported by ITI-VNU and the Department of Physics, College of Natural Sciences, VNU.\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nThe past decades have seen a proliferation of research using evolutionary theory to study social traits, in the fields of biology, animal behavior, and even social science \\cite{Nowak1998,Nowak2004,Hilbe2018a,Ohtsuki2006,Weitz2016,Tilman2020,Allen2017a}.\nMost of this theoretical development has been based on mathematical models that assume either infinite populations \\cite{Taylor1978,Schuster1983,Nowak2006a,Weitz2016,Tilman2020} or finite populations of constant size \\cite{Hilbe2018a,Ohtsuki2006,Nowak2004}.\nDespite these simplifying assumptions, mathematical models provide rich insights into how exogenous and intrinsic factors drive the evolutionary dynamics of social behavior. \nFor example, \nthe literature has produced a rich set of explanations for cooperation based on repeated interactions, the establishment of reputations, and various forms of population structure \\cite{Nowak1992,Ohtsuki2006,Tarnita2009,Allen2017a,McAvoy2021,Su2022,Su2022nhb,Cooney2019,Hilbe2018a,Santos2018,Nowak2005,Nowak1993,Nowak2006fiverule,Stewart2013}. Several key theoretical insights have been validated by controlled experiments on human subjects \\cite{Gachter2009,Yamauchi2011,Yoelia2013,Greiner2005}. 
This field of research has been so successful that the question of how cooperation can be favored by natural selection, famously posed by Darwin, is now not only resolved, but resolved in several distinct ways applicable in different contexts.\n\nHere we reveal a qualitatively different and pervasive mechanism that can promote cooperation by natural selection or payoff-biased imitation. \nMost mechanisms known to support cooperation boil down to some form of population structure \\cite{Kay2020} -- either physical limitations on social interactions, reproduction, or imitation, or structure imposed by tags or reputations. By contrast, here we describe a much simpler scenario that can favor cooperation in a population that lacks any form of exogenous or endogenous structure. We show that demographic stochasticity, which is \\textit{a priori} a realistic feature of any natural population, can by itself promote social behaviors that would otherwise be suppressed in idealized populations of constant (or infinite) size.\n\nThere is precedent for the idea that demographic stochasticity alters evolutionary dynamics. 
The fact that mortality, reproduction, and migration are subject to demographic fluctuations in populations -- as well as processes of imitation and innovation -- is known to influence the dynamics of competing types under frequency-independent selection \\cite{Parsons2007,Parsons2007a,Parsons2010,McKane2005,Butler2009,Hallatschek2007,Stollmeier2018,Wienand2017,Taitelbaum2020,Chotibut2017} and also frequency-dependent selection \\cite{Constable2016,Houchmandzadeh2012,Houchmandzadeh2015,Huang2015}.\nFor example, when a population contains two types with the same expected number of offspring, one type can be favored when the population size is small, and the other type favored when the population size is near its carrying capacity \\cite{Parsons2007,Parsons2007a,Parsons2010}.\nAnd a few studies have shown that demographic stochasticity can even reverse the direction of natural selection, promoting a type that would otherwise be disfavored without stochasticity \\cite{Constable2016,Houchmandzadeh2012,Houchmandzadeh2015}.\n\nNonetheless, prior work on selection with demographic stochasticity has either assumed constant fitness, in which one's fitness is independent of the composition of the population, or assumed different carrying capacities for different phenotypes, e.g.,~producers enjoy a larger carrying capacity than non-producers \\cite{Constable2016,Houchmandzadeh2012,Houchmandzadeh2015}.\nMost models of demographic stochasticity \nalso assume that offspring numbers follow a Poisson distribution \\cite{Constable2016,Huang2015,Parsons2010,Parsons2007,Parsons2007a,Houchmandzadeh2012,Houchmandzadeh2015}, so that the mean and variance in offspring number are identical. 
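For intuition on what over-dispersion looks like: a negative binomial offspring distribution can match a Poisson distribution's mean while having a strictly larger variance (a standalone numerical sketch; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mean = 2.0                       # target mean offspring number
n_samples = 200_000

# Poisson: variance equals the mean
poisson_sample = rng.poisson(mean, size=n_samples)

# Negative binomial with the same mean but variance = mean + mean^2 / k,
# where k is the dispersion parameter (numpy's first argument)
k = 1.0
p = k / (k + mean)
nb_sample = rng.negative_binomial(k, p, size=n_samples)

print(poisson_sample.mean(), poisson_sample.var())   # ~2.0, ~2.0
print(nb_sample.mean(), nb_sample.var())             # ~2.0, ~6.0 (over-dispersed)
```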
But empirical field studies have found that over-dispersion in offspring number (variance exceeding mean) is commonplace across diverse taxa \\cite{Zuur2009,Linden2011,VerHoef2007,Richards2008}.\n\n\n\nIn this paper, we develop a general framework to study evolutionary dynamics with demographic stochasticity, which can capture both frequency-dependent fitness, arising from social interactions, and over-dispersion in the number of offspring. We provide a simple analytical condition that governs the long-term outcome of competition between multiple types. Applied to pairwise social interactions involving cooperation or defection, we find that demographic stochasticity can favor cooperators provided the offspring variance is sufficiently large, even without any other mechanisms. For more general pairwise payoff structures, we show that demographic stochasticity can reverse the stability of equilibria, from coexistence to bi-stability and vice versa, or from dominance of one type to dominance of another. Our analysis highlights the profound effects of demographic stochasticity on the evolution of interacting types in a population.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{figure_main\/main_figure1.pdf}\n \\caption{\\textbf{Evolutionary dynamics with demographic stochasticity.} \n (A) Competition between cooperators (blue circle) and defectors (red circle) in a stochastic population of non-constant size. Each individual $i$ derives payoff $\\pi_i$ from pairwise game-play with each other individual in the population. The number of offspring produced by an individual within time $\\Delta t$ has mean $(B+s\\pi_i)\\Delta t$ and variance $(\\delta_1B+\\delta_2 s\\pi_i)\\Delta t$, which are both higher for defectors than for cooperators. 
When selection is weak ($s \\ll \\alpha$), the population quickly reaches carrying capacity (during time period I) while the frequency of cooperators and defectors remains unchanged from its initial value ($p_0=1\/2$ shown here). Thereafter (time period II) the population remains near carrying capacity ($M \\approx 1000$ shown here), while the frequency of cooperators and defectors slowly varies until either cooperators go extinct (example in panel B) or defectors go extinct (panel C).\n Parameters: $b=3$, $c=1$, $s=0.01$, $\\delta_1=\\delta_2=1$, $x_0=y_0=10$, $\\lambda=1\\times10^{-3}$, $B=2$, $D=1$.}\n \\label{fig1}\n\\end{figure}\n\n\\section{Model}\n\nWe first consider an evolving population of two types: cooperators (C) and defectors (D).\nEach individual interacts pairwise with every other individual; the cooperator pays a cost $c$ to bring its opponent a benefit $b$ ($b>c$), and the defector pays no cost and provides no benefit. In other words, pairwise interactions follow a simple ``donation game'', which provides a minimal model for studying the evolution of cooperation \\cite{rapoport1965prisoner}.\nFollowing all pairwise interactions, \neach individual obtains an average payoff that will determine their reproductive output (or, equivalently, the number of individuals who copy their type by social contagion). In a population with $x$ cooperators and $y$ defectors, the cooperator's payoff (denoted by $\\pi_C$) and the defector's payoff (denoted by $\\pi_D$) are \n\n\n\\begin{subequations}\n\\begin{align}\n \\pi_C=&\\frac{x}{x+y}b-c, \\\\ \\pi_D=&\\frac{x}{x+y}b.\n\\end{align}\n\\end{subequations}\n\n\nIn a classic Moran model, each birth event is followed by a death event, and so the population size remains constant. Here we remove this constraint by decoupling the birth and death events. 
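The payoff expressions above translate directly into a few lines of code (a sketch mirroring the equations; the population composition and game parameters are illustrative):

```python
def donation_payoffs(x: int, y: int, b: float, c: float):
    """Average payoffs in a population of x cooperators and y defectors,
    each playing the donation game against every other individual."""
    frac_c = x / (x + y)
    pi_C = frac_c * b - c
    pi_D = frac_c * b
    return pi_C, pi_D

pi_C, pi_D = donation_payoffs(x=50, y=50, b=3.0, c=1.0)
assert pi_D - pi_C == 1.0  # defectors out-earn cooperators by exactly the cost c
```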
Births are assumed to follow a continuous-time Markov process with independent and stationary increments (see Section S1 in Supplementary Information), such that the expected number of offspring individual $i$ produces per unit time is \\begin{equation}\n \\mathbb{E}(\\xi_i)=B+s\\pi_i,\n \\label{eq:mean}\n\\end{equation}\nwhere $B$ is a baseline number of offspring, $\\pi_i$ is individual $i$'s payoff, and the parameter $s>0$ is the intensity of selection. Note that the baseline birth rate is the same for all individuals, regardless of type, and it does not depend upon payoffs from social interactions. The selection intensity $s$ measures to what degree the payoff derived from social interactions affects the offspring number. In this paper we focus on the case of weak selection ($s\\ll 1$), a regime widely adopted in the literature \\cite{Allen2017a,McAvoy2021,Nowak2004,Ohtsuki2006}. Since the defector's payoff $\\pi_D$ is larger than the cooperator's payoff $\\pi_C$ in any population state, defectors always have a greater expected fecundity (Fig.~\\ref{fig1}). \n\nTo fully describe the birth process, we also specify the variance in the number of offspring. We are particularly interested in cases of over-dispersion, which can be modelled in many alternative ways \\cite{Linden2011,VerHoef2007}, such as a quasi-Poisson model (variance proportional to mean), mixed-effects Poisson model, and negative binomial model (variance a quadratic function of mean). Here we study a general class of Markov birth models by stipulating\n\\begin{equation}\n {\\rm Var}(\\xi_i)=\\delta_1 B+\\delta_2 s\\pi_i,\n \\label{eq:variance}\n\\end{equation}\nwhere parameters $\\delta_1$ and $\\delta_2$ measure the magnitude of offspring variance ${\\rm Var}(\\xi_i)$ relative to the mean $\\mathbb{E}(\\xi_i)$. 
The parameter $\\delta_1$ controls how offspring variance is influenced by the baseline birth rate, and $\\delta_2$ controls how offspring variance is influenced by payoffs from social interactions. Specific choices of $\\delta_1$ and $\\delta_2$ reduce to well-known classical models, such as a deterministic system ($\\delta_1=\\delta_2=0$) or a Poisson birth process ($\\delta_1=\\delta_2=1$). In the regime of weak selection, the number of offspring produced per unit time is over-dispersed whenever $\\delta_1>1$.\n\n\n\nDeath events are modelled as a Poisson process, arising from two rates that are summed. First, an individual dies at a constant baseline rate, $D$. Second, in order to model competition for limited resources, additional deaths occur at rate $\\lambda$ times the current total population size.\n\n\\section{Results}\n\n\n\n\n\n\\subsection{Evolution of cooperation with demographic stochasticity}\n\\label{section3.1}\nLet $x$ and $y$ denote the number of cooperators and defectors respectively, which will change over time. 
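Putting the birth and death rates together, the decoupled process can be sketched as a Gillespie-style simulation (a minimal sketch for the Poisson case $\delta_1=\delta_2=1$; the parameter values are illustrative and mirror those of Fig.~1):

```python
import random

def simulate(x, y, b, c, s, B, D, lam, t_max, seed=1):
    """Gillespie simulation of the decoupled birth-death dynamics
    (Poisson offspring case, delta_1 = delta_2 = 1)."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_max and x + y > 0:
        n = x + y
        frac_c = x / n
        birth_C = x * (B + s * (frac_c * b - c))  # total birth rate, cooperators
        birth_D = y * (B + s * frac_c * b)        # total birth rate, defectors
        death_C = x * (D + lam * n)               # baseline + crowding deaths
        death_D = y * (D + lam * n)
        total = birth_C + birth_D + death_C + death_D
        t += rng.expovariate(total)               # time to next event
        u = rng.random() * total                  # pick which event fires
        if u < birth_C:
            x += 1
        elif u < birth_C + birth_D:
            y += 1
        elif u < birth_C + birth_D + death_C:
            x -= 1
        else:
            y -= 1
    return x, y

x, y = simulate(x=10, y=10, b=3, c=1, s=0.01, B=2, D=1, lam=1e-3, t_max=50)
# With alpha = B - D = 1 and lambda = 1e-3, the total population
# grows logistically and then hovers near the carrying capacity M ~ 1000.
```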
\nGiven the class of models described above for the payoff-dependent birth-process and the population-size dependent death process, the evolutionary dynamics of $x$ and $y$ can be approximated by a two-dimensional It\\^o stochastic differential equation (see Section S1 in Supplementary Information):\n\\begin{subequations}\n \\begin{align}\n {\\rm d}x&=x\\left[\\alpha+s\\pi_{\\rm C}-\\lambda (x+y)\\right] {\\rm d}t+ \\sqrt{x\\left[\\delta_1 B+\\delta_2 s\\pi_{\\rm C}+D+\\lambda(x+y)\\right]}{\\rm d}W^{(1)}_t, \\label{eq:ito_diffusion_a} \\\\\n {\\rm d}y&=y\\left[\\alpha+s\\pi_{\\rm D}-\\lambda (x+y)\\right]{\\rm d}t+ \\sqrt{y\\left[\\delta_1 B+\\delta_2 s\\pi_{\\rm D}+D+\\lambda(x+y) \\right]}{\\rm d}W^{(2)}_t, \\label{eq:ito_diffusion_b} \n \\end{align} \\label{eq:ito_diffusion} \n\\end{subequations}\nwhere $\\alpha =B-D>0$ indicates the net growth rate from baseline birth and death events, and $W_t^{(1)}$ and $W_t^{(2)}$ are independent standard Wiener processes. Although the birth process can be over-dispersed in our model (when $\\delta_1>1$), deaths follow a simple Poisson process with variance equal to mean. \n\nTo study how the relative abundance of cooperators and the total population size evolve over time, we make the co-ordinate transformation $(p,n)=(x\/(x+y),x+y)$. Applying It\\^o's lemma in Eq.~\\ref{eq:ito_diffusion}, the system can then be described by the equations\n\\begin{subequations}\n\\begin{align}\n {\\rm d}p=&scp(1-p)\\left(-1 +\\frac{\\delta_2}{n}\\right){\\rm d}t+\\frac{y}{n^2} \\sqrt{x(\\delta_1 B+D+\\lambda n) }{\\rm d}W^{(1)}_t \\notag\\\\\n &-\\frac{x}{n^2}\\sqrt{y(\\delta_1 B + D+\\lambda n)}{\\rm d}W^{(2)}_t, \\label{eq:transformed_a}\\\\\n {\\rm d}n=&[n\\alpha+s(b-c)pn-\\lambda n^2]{\\rm d}t+ \\sqrt{x(\\delta_1 B +D+\\lambda n)}{\\rm d}W^{(1)}_t \\notag \\\\\n & +\\sqrt{y(\\delta_1 B +D+\\lambda n)}{\\rm d}W^{(2)}_t. 
\\label{eq:transformed_b}\n\\end{align}\n\\label{eq:transformed}\n\\end{subequations}\n\nThe simple case in which stochasticity is absent (i.e., $\\delta_1=\\delta_2=0$ for births, and no variance for deaths) provides a deterministic reference point for comparison to any stochastic system. \nIn the deterministic system, ${\\rm d}p$ is always negative and the abundance of cooperators continuously decreases until cooperators reach extinction. Thus, cooperation is never favored by natural selection in the deterministic limit. Moreover, in this deterministic limit, changes in the total population size $n$ depend on both $p$ and $n$. But for sufficiently weak selection intensity ($s\\ll \\alpha$), changes in the total population size $n$ are much more rapid than changes in the cooperator frequency, $p$.\nIn the regime of weak selection, before $p$ changes its value at all, $n$ has grown logistically to its equilibrium value $(\\alpha+s(b-c)p)\/\\lambda$, which we denote by $M$. $M$ is called carrying capacity, and it describes the maximum number of individuals that the environment can sustain. When the net growth rate is much larger than selection intensity, $\\alpha \\gg s$, the carrying capacity is well approximated by $M\\approx \\alpha\/\\lambda$. \n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.75]{figure_main\/heat_simu.pdf}\n \\caption{\\textbf{Demographic stochasticity can favor the evolution of cooperation.} Colors represent the fixation probability of cooperation relative to neutral drift, $\\rho - p_0$, as a function of parameters $\\delta_2$ and $\\delta_1$. We say that selection favors cooperation when cooperators are more likely to fix than under neutrality (blue regions). Panel (A) shows exact solutions sampled from the stochastic differential equation (Eq.~\\ref{eq:ito_diffusion}), whereas panel (B) shows the analytical approximation in the regime of weak selection (Eq.~\\ref{eq:fixation prob}). 
The dashed line indicates the separation between regimes that favor cooperation (blue) or favor defection (red).\n \n Parameters: $B=2$, $D=1$, $s=0.005$, $b=1.1$, $c=1$, $\\lambda=5\\times 10 ^{-3}$, $x_0=y_0=50$.}\n \\label{fig2}\n\\end{figure}\n\nFor a stochastic system ($\\delta_1 \\ne 0$ and $\\delta_2 \\ne 0$) the trajectories of $p$ and $n$ are not determined by the initial conditions alone, but depend upon chance events. We quantify the evolutionary advantage of cooperators by studying the fixation probability -- namely, the chance of absorption into the full-cooperation state ($p=1$). \nStarting from $x_0$ cooperators and $y_0$ defectors initially (thus $p_0=x_0\/(x_0+y_0)$ and $n_0=x_0+y_0$), the fixation probability, denoted by $\\rho(x_0,y_0)$ or $\\rho(p_0,n_0)$, is the probability that at some time $t$ defectors become extinct while cooperators still exist, that is $y(t)=0$ but $x(t)>0$ \\cite{Czuppon2018}. \nIn the regime $s \\ll \\alpha$ the fixation probability can be calculated by separating the time-scale of changes in $p$ versus changes in $n$ \\cite{Parsons2017}. This analysis is tantamount to assuming that the total population size $n$ rapidly reaches its carrying capacity, while $p$ remains unchanged from $p_0$, and that subsequently $p$ evolves in one dimension while the population size remains near the slow manifold $n=M$ (see Fig.~\\ref{fig1} and Supplementary Fig.~S3). 
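The behavior described by this time-scale separation can also be probed by direct numerical integration of Eq.~\ref{eq:ito_diffusion}. Below is a minimal Euler--Maruyama sketch (not the paper's simulation code): it assumes the standard donation-game payoffs $\pi_{\rm C}=bp-c$ and $\pi_{\rm D}=bp$, and all parameter values are illustrative choices near those of Fig.~\ref{fig2}, with $\delta_2>M$.

```python
import math
import numpy as np

# Illustrative parameters near Fig. 2 (delta2 > M); all values are assumptions.
B, D, s = 2.0, 1.0, 0.005
b, c = 1.1, 1.0
lam = 5e-3
delta1, delta2 = 6.0, 300.0
alpha = B - D
M = alpha / lam                        # carrying capacity = 200

rng = np.random.default_rng(0)

def run_once(x0=50.0, y0=50.0, dt=0.02, t_max=400.0):
    """Euler-Maruyama integration of the (x, y) SDE until one type is lost."""
    x, y = x0, y0
    for _ in range(int(t_max / dt)):
        n = x + y
        p = x / n
        pi_c, pi_d = b * p - c, b * p  # donation-game payoffs
        z1, z2 = rng.standard_normal(2)
        x += x * (alpha + s * pi_c - lam * n) * dt \
             + math.sqrt(max(x * (delta1 * B + delta2 * s * pi_c + D + lam * n), 0.0) * dt) * z1
        y += y * (alpha + s * pi_d - lam * n) * dt \
             + math.sqrt(max(y * (delta1 * B + delta2 * s * pi_d + D + lam * n), 0.0) * dt) * z2
        x, y = max(x, 0.0), max(y, 0.0)
        if x == 0.0 or y == 0.0:
            break
    return x > 0.0 and y == 0.0        # True iff cooperators fixed

runs = 300
rho = sum(run_once() for _ in range(runs)) / runs
print(f"estimated fixation probability of cooperators: {rho:.3f}")
```

Because selection is weak here, the predicted excess over the neutral value $p_0=1/2$ is only of order $10^{-2}$, so a modest number of replicates merely brackets $1/2$; resolving the excess requires many more samples, as in the figures.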
Under this analysis, we can approximate the fixation probability by a simple expression (Section S2.1 in Supplementary Information) \n\\begin{equation}\n \\rho(p_0,n_0) \\approx p_0+\\frac{sc}{(\\delta_1+1) B}\\left(\\delta_2-M\\right)p_0(1-p_0).\n \\label{eq:fixation prob}\n\\end{equation}\nWe performed numerical simulations, drawing sample paths from the full SDE system given by Eq.~\\ref{eq:ito_diffusion}, to verify the accuracy of this analytic approximation for the fixation probability (Fig.~\\ref{fig2}).\n\nNote that the fixation probability does not depend on the initial population size, but rather on the initial frequency of cooperators. In the absence of selection ($s=0$), the fixation probability equals the initial frequency of cooperators, $p_0$. And so we say that cooperation is favored by selection if the fixation probability exceeds $p_0$, which will occur whenever\n\\begin{equation}\n \\delta_2>M. \\label{eq:condition}\n\\end{equation}\nThis simple condition tells us when demographic stochasticity causes selection to favor cooperators, even though selection disfavors cooperation in a deterministic setting. In particular, demographic stochasticity can favor the fixation of cooperators when the offspring variance is sufficiently large -- namely, when $\\delta_2$ exceeds the carrying capacity $M$. What matters for the direction of selection, then, is the size of the offspring variance arising from payoffs in social interactions, relative to its mean.\n\n\n\nWe can gain some useful intuition for the forces that govern the fate of cooperators by considering the deterministic part of Eq.~\\ref{eq:transformed_a}. The first term in this equation, $-scp(1-p)$, represents the deterministic contribution to the evolution of cooperator frequency, which always opposes cooperators. The second term, $\\delta_2scp(1-p)\/n$, arises from demographic stochasticity and always favors cooperators. 
Whether or not cooperation is favored overall depends upon the balance between these two forces -- the deterministic force suppressing cooperation and demographic stochasticity that favors cooperation. For $\\delta_2<M$, the deterministic force dominates and defectors are favored; but for $\\delta_2>M$, the stochastic advantage matters more, so that cooperators are favored, which constitutes an evolutionary reversal compared to a classical model without demographic stochasticity.\n\n\nOther model parameters, $s$, $c$, $p_0$, $\\delta_1$ and $B$, do not produce a reversal in the direction of selection for cooperation, but they nonetheless influence the fixation probability. For example, increasing $\\delta_1$ or increasing the baseline birth rate $B$ moves the fixation probability towards the neutral value, $p_0$. Moreover, in the regime where demographic stochasticity favors cooperation, $\\delta_2>M$, the fixation probability is increased yet further when the selection intensity $s$ is large or when the cost of cooperation $c$ is large (Eq.~\\ref{eq:fixation prob}). Both of these results contravene the classical intuition that selection and the cost of cooperation should disfavor cooperators. \nWe have performed simulations to verify the effects of all these parameters, in comparison to the analytical approximation (Supplementary Fig.~S2).\n\n\n\\subsection{An explicit birth-death process}\n\nOur model of demographic stochasticity is quite general, stipulating only a few properties of the Markov birth and death processes for competing types. We have analyzed this class of models by approximation, using a stochastic differential equation. 
In this section we construct an explicit example of birth and death processes that satisfy our model stipulations, and we compare the predictions of our SDE analysis to individual-based simulations of the discrete stochastic process.\n\nMost prior studies of demographic stochasticity are based on a reproduction process with a single offspring per birth event, which naturally leads to a Poisson birth process \\cite{Feller1950,Huang2015,Constable2016,Parsons2010,Parsons2007,Parsons2007a}. The Poisson process occurs as a special case within our family of models, when $\\delta_1=\\delta_2=1$. In this case, our analysis shows that demographic stochasticity alone cannot favor cooperation, because $\\delta_2=1<M$ for any carrying capacity larger than a single individual. To obtain larger offspring variance, we instead consider compound Poisson birth processes, in which individual $i$ initiates birth events and each birth event produces a litter of offspring; the litter size follows either a Poisson distribution or a negative binomial distribution.\n\\begin{figure}[t]\n \\centering\n \\caption{\\textbf{Individual-based simulations of explicit birth-death processes.} The fixation probability of cooperators under compound Poisson birth processes, which are parameterized by $\\delta_1>1$, $\\delta_2$, and $B$. \n Two examples with the parameters that correspond to $(\\delta_1=6,\\delta_2=60)$ and $(\\delta_1=6,\\delta_2=140)$ are shown in each panel. Blue squares indicate the fixation probability of cooperators, starting from an initial population with $x_0=y_0=50$, observed in $5\\times 10^7$ replicate Monte Carlo simulations, with carrying capacity either $M=100$ or $M=200$. Selection favors cooperation if the fixation probability $\\rho$ exceeds the initial fraction of cooperators, 0.5 (horizontal dashed line).\n The solid lines plot our analytical approximation for the fixation probability (Eq.~\\ref{eq:fixation prob}). As predicted by our analysis, cooperation is favored when $\\delta_2>M$.\n Parameters: $B=2$, $D=1$, $\\delta_1=6$, $s=0.001$, $b=1.1$, $c=1$, $m=5$ (negative binomial), $x_0=y_0=50$, $\\lambda=1\/100$ ($M=100$) or $\\lambda=1\/200$ ($M=200$).}\n \\label{fig3}\n\\end{figure}\n\nThe parameters of the compound Poisson process depend upon an individual's payoff and the selection intensity. 
For the Poisson-Poisson case (the litter size follows a Poisson distribution), the reproductive process of individual $i$ is characterized by parameters $\\theta_i$ and $\\mu_i$, and we assume that the payoff $\\pi_i$ affects both $\\theta_i$ and $\\mu_i$ linearly:\n\\begin{subequations}\n\\begin{align}\n \\theta_i&=\\theta_0+k_\\theta s\\pi_i, \\\\\n \\mu_i&=\\mu_0+k_\\mu s\\pi_i.\n\\end{align}\n\\end{subequations}\nFor the Poisson-negative binomial case (the litter size follows a negative binomial distribution), we assume that all individuals share the same $m$ and that payoffs affect $q_i$ and $\\theta_i$ as follows:\n\\begin{subequations}\n\\begin{align}\n \\theta_i&=\\theta_0+k_\\theta s\\pi_i, \\\\\n q_i&=q_0+k_q s\\pi_i.\n\\end{align}\n\\end{subequations}\n\nGiven these equations, we can always choose parameters of the compound Poisson process\nthat satisfy our general stipulations on the mean and variance in the total offspring produced per unit time (Eq.~\\ref{eq:mean} and Eq.~\\ref{eq:variance}), provided $\\delta_1>1$ and $\\delta_2>0$ (see Section S3 in Supplementary Information). Note that for both of these compound Poisson birth processes (Poisson-Poisson and Poisson-Negative-Binomial) the total number of offspring produced per unit time must be over-dispersed ($\\delta_1>1$).\n\nWe can compare Monte-Carlo simulations of these explicit population processes (discrete state, continuous time) to the analytical prediction for the fixation probability that we derived from a stochastic differential equation (Eq.~\\ref{eq:fixation prob}). We find good agreement between the individual-based simulations and analytic approximations, for carrying capacities as small as $M=100$ or $M=200$ (Fig.~\\ref{fig3}). Note that in both cases shown in Fig.~\\ref{fig3}, for sufficiently large $\\delta_2$ we have $k_\\theta<0$ and $k_\\mu>0$ or $k_q>0$. 
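The over-dispersion of these compound processes follows from a standard fact: a compound Poisson process with event rate $\theta$ and litter-size distribution $L$ produces offspring with mean $\theta\,\mathbb{E}[L]$ and variance $\theta\,\mathbb{E}[L^2]$ per unit time, so Poisson($\mu$) litters give a variance-to-mean ratio of $1+\mu>1$. A quick Monte Carlo check of this fact (parameter values are arbitrary; Poisson additivity of the litter sums makes the sampling vectorizable):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, mu, t = 2.0, 3.0, 1.0   # event rate, mean litter size (illustrative)

# Number of birth events per interval, then total offspring; a sum of k
# iid Poisson(mu) litters is itself Poisson(k*mu), which lets us vectorize.
n_events = rng.poisson(theta * t, size=500_000)
totals = rng.poisson(mu * n_events)

mean, var = totals.mean(), totals.var()
print(f"mean = {mean:.2f}   (theory: {theta * t * mu:.2f})")
print(f"variance/mean = {var / mean:.2f}   (theory: {1 + mu:.2f})")
```

The ratio exceeds one for any positive mean litter size, which is why the total offspring number per unit time in these constructions is necessarily over-dispersed.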
In other words, higher payoffs reduce the rate of birth events but increase the mean litter size per birth event; and when these effects are strong enough, selection favors cooperation.\n\n\n\n\n\n\\subsection{Intuition for the effects of demographic stochasticity}\n\nThere is a simple intuition for how demographic stochasticity can favor cooperation in our class of models, even though cooperation is always disfavored in models with constant (or infinite) population size. The key insight has to do with the rapid growth of the total population size to carrying capacity, followed by slow dynamics in the frequency of cooperators near the manifold $n=M$. Importantly, during the slow dynamics there are still small fluctuations that move the population off the manifold $n=M$, followed by a rapid return back to carrying capacity. These small fluctuations have the effect of inducing an advective force pushing the frequency of cooperators $p$ in one direction or another. \n\nTo be more precise, we have already noted that the total population size $n$ equilibrates much more quickly than the frequency of cooperators $p$ (Eq.~\\ref{eq:transformed}) in the regime we study ($\\alpha \\gg s$). And so, given an arbitrary initial state $p_0$ and $n_0$, $n$ will quickly converge to the slow manifold\n\\begin{equation}\n n=\\frac{\\alpha+s(b-c)p_0}{\\lambda}\\approx \\frac{\\alpha}{\\lambda}=M,\n \\label{eq:slow manifold}\n\\end{equation}\nwhile $p$ does not change from $p_0$ (see example in Supplementary Fig.~S3B). After the population size reaches carrying capacity, trajectories then move along the slow manifold until one type or the other fixes ($p=0$ or $p=1$). We focus on the dynamics on the slow manifold, which simplifies the analysis to a one-dimensional system \\cite{Parsons2017}. \n\nIn the co-ordinate system $(x,y)$, the slow manifold is defined by $x+y=M$, and the fast manifolds are lines connecting the origin to points on the slow manifold (see Fig.~\\ref{fig4}A). 
\nGiven any initial conditions, the trajectory will rapidly approach the slow manifold along one of these lines, and then subsequently move within the slow manifold. However, unlike the case of a strictly constant population size, the system with demographic stochasticity does not lie precisely on the slow manifold at all times. Small fluctuations take the system off the slow manifold briefly, and then the system rapidly returns to the slow manifold. Critically, the position where the system returns to the slow manifold, after a fluctuation, is not necessarily the same as where it started. In fact, there can be a systematic deviation in the position on the slow manifold that arises from stochastic fluctuations and rapid returns -- which produces an advective force on the frequency $p$ along the slow manifold (see Fig.~\\ref{fig4}B, C, D). It is this systematic deviation, caused by demographic stochasticity, that introduces a force favoring cooperation.\n\nIn particular, fluctuations from $x+y=M$ follow a two-dimensional Gaussian distribution with variance $x(\\delta_1B+\\delta_2s\\pi_C+D+\\lambda n)$ in the $x$-direction and variance $y(\\delta_1B+\\delta_2 s\\pi_D+D+\\lambda n)$ in the $y$-direction. In Fig.~\\ref{fig4}, we illustrate the fluctuation starting from state $x=y=M\/2$ (see Supplementary Information Section S2.2 for the analysis of any other states).\nWhen $\\pi_C=\\pi_D$, the Gaussian fluctuation is isotropic, and so a fluctuation followed by return along a fast-manifold line produces no expected change in the resulting position on the slow manifold (see Fig.~\\ref{fig4}B). However, whenever $\\pi_C \\ne \\pi_D$, the two-dimensional Gaussian fluctuation has an ellipsoid shape, and fluctuation followed by rapid return produces an expected change in the frequency of cooperators, $p$, along the slow manifold. 
In particular, when $\\pi_C < \\pi_D$, the expected change due to demographic stochasticity favors cooperators, whereas if $\\pi_C > \\pi_D$ the expected change favors defectors (Fig.~\\ref{fig4}C,D). In general, we can analytically calculate the advective force along the slow manifold that arises from these stochastic fluctuations and rapid returns (Section S2.2 in Supplementary Information). \n\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.65]{figure_main\/fig4.pdf}\n \\caption{\\textbf{How demographic stochasticity can favor cooperation or defection.} \n (A) The system features a separation of timescales, where the total number of individuals $n=x+y$ changes much faster than the fraction of cooperators $p=x\/(x+y)$.\n Starting from $x_0$ and $y_0$ cooperators and defectors, trajectories rapidly converge to the slow manifold ($x+y=M$) along the fast manifold $x\/y=x_0\/y_0$.\n (B, C, D) Stochastic fluctuations away from the slow manifold, followed by rapid return, can induce an advective force on the frequency of cooperators.\n For simplicity we consider constant payoffs, where $\\pi_C$ and $\\pi_D$ are independent of the number of cooperators and defectors. The ellipses illustrate the variance-covariance structure of two-dimensional Gaussian fluctuations around the slow manifold from a given point $x=M\/2$ and $y=M\/2$ (red point $O$). \n (B) When $\\pi_C=\\pi_D$, fluctuations from point $O$ are isotropic, shown as a circle.\n We consider four representative fluctuations from point $O$, $X_{-},X_{+},Y_{-},Y_{+}$, and the corresponding points of return $X_{-}',X_{+}',Y_{-}',Y_{+}'$ to the slow manifold.\n For isotropic fluctuations there is no expected change in $p$ after return to the slow manifold.\n (C) For $\\pi_C<\\pi_D$, the Gaussian fluctuations are anisotropic, shown as an ellipse, with larger fluctuations in the number of defectors. 
This asymmetry leads to an expected increase in cooperator frequency $p$ after return to the slow manifold, as indicated by the blue arrow.\n (D) For $\\pi_C>\\pi_D$, the larger fluctuation occurs in the number of cooperators, which leads to an expected decrease in cooperator frequency after return to the slow manifold. These effects of anisotropic noise are similar to those discussed by \\cite{Constable2016}, but they arise here even when both types have the same baseline birth rate and the same carrying capacity, under weak selection.}\n \\label{fig4}\n\\end{figure}\n\n\nFor the donation game we have studied so far, cooperators always have a lower payoff than defectors regardless of the population state. And so the advective force arising from demographic stochasticity always favors cooperation, regardless of $p$. If this force is large enough relative to the deterministic force favoring defectors, then it can produce a net advantage for cooperators. For other types of pairwise games, however, the direction of deterministic selection ($\\pi_C$ vs $\\pi_D$) may depend on the current frequency $p$ in the population, and so the noise-induced advection may change sign along the slow manifold, producing complicated effects on long-term dynamics. We investigate these effects of demographic noise on evolutionary dynamics for general two-player games in the next section. \n\n\n\n\\subsection{General evolutionary game dynamics with demographic stochasticity}\n\nFor an arbitrary two-player game, the two-dimensional system can be simplified to a one-dimensional system by separation of timescales, provided selection is weak enough, $s \\ll \\alpha$. 
Suppose the game has the following payoff structure:\n\\begin{equation}\n \\begin{array}{cc}\n & \\begin{array}{cc}\n {\\rm C} & {\\rm D} \n \\end{array} \\\\ \\begin{array}{c}\n {\\rm C}\\\\{\\rm D} \n \\end{array}\n & \\left(\\begin{array}{cc}\n a&b \\\\\n c&d \n \\end{array}\\right).\n\\end{array}\n\\end{equation}\nPlayers have two strategies, which we still generically call cooperation (C) or defection (D). When two cooperators interact, both of them receive payoff $a$. When a cooperator interacts with a defector, the cooperator receives $b$ and the defector $c$. Mutual defection brings payoff $d$ to both players. The average payoffs for a cooperator and for a defector in the population are, respectively,\n\\begin{equation}\n\\begin{split}\n \\pi_C&=\\frac{xa+yb}{x+y}, \\\\\n \\pi_D&=\\frac{xc+yd}{x+y}.\n\\end{split}\n\\end{equation}\n\nSimilar to Section \\ref{section3.1}, we can describe the system by a stochastic differential equation:\n\\begin{subequations}\n\\begin{align}\n {\\rm d}p=&sp(1-p)\\left(1-\\frac{\\delta_2}{n}\\right)(\\pi_{\\rm C}-\\pi_{\\rm D}){\\rm d}t+\\frac{1-p}{n} \\sqrt{x(\\delta_1 B+D+\\lambda n)}{\\rm d}W^{(1)}_t \\notag \\\\\n &-\\frac{p}{n}\\sqrt{y(\\delta_1 B +D+\\lambda n)}{\\rm d}W^{(2)}_t, \\label{eq:transformed_general_a}\\\\\n {\\rm d}n=&[n\\alpha+s(p\\pi_{\\rm C}+(1-p)\\pi_{\\rm D})n-\\lambda n^2]{\\rm d}t+ \\sqrt{x(\\delta_1 B+D+\\lambda n)}{\\rm d}W^{(1)}_t\\notag \\\\\n &+\\sqrt{y(\\delta_1 B +D+\\lambda n)}{\\rm d}W^{(2)}_t. 
\\label{eq:transformed_general_b}\n\\end{align}\n\\label{eq:transformed_general}\n\\end{subequations}\nSince the population size quickly equilibrates to the carrying capacity $M\\approx\\alpha\/\\lambda$, we substitute $n=M$ into Eq.~\\ref{eq:transformed_general_a}, which yields a one-dimensional equation for the evolution of $p$ along the slow manifold:\n\\begin{subequations}\n\\begin{align}\n {\\rm d}p=&sp(1-p)\\left[\\left(1-\\frac{\\delta_2}{M}\\right)\\left(b-d+(a-b-c+d)p\\right) \\right]{\\rm d}t \\notag \\\\\n &+\\sqrt{\\frac{(\\delta_1+1) B p(1-p)}{M}}\\left(\\sqrt{1-p}{\\rm d}W^{(1)}_t-\\sqrt{p}{\\rm d}W^{(2)}_t\\right).\n\\end{align}\n\\label{eq:general one dimensional}\n\\end{subequations}\n\nIn the case of deterministic births and deaths ($\\delta_1=\\delta_2=0$ and neglecting variance in the death process), this equation simplifies to the classic replicator equation \\cite{Schuster1983,Nowak2006a}. For general games there may be interior equilibrium points, and so the fixation probability is no longer a good measure to describe long-term evolutionary outcomes. Instead, we analyze the dynamics from two perspectives. The first considers the deterministic behavior on the slow manifold: we neglect stochasticity altogether in Eq.~\\ref{eq:general one dimensional} and study the equilibria of the resulting ordinary differential equation. The other, more nuanced perspective accounts for stochasticity. Since $p=0$ and $p=1$ are the only absorbing states, any trajectory will eventually reach one of these states and remain there. However, we can impose a reflecting condition on the boundary, which is equivalent to assuming that, when the number of one phenotype reaches zero, a new mutant of this phenotype arises instantly. The resulting evolutionary process of $p$ becomes an ergodic Markov process which has a unique stationary distribution $v^*(p)$. 
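The stationary density $v^*(p)$ can be sketched numerically using the standard formula for a one-dimensional diffusion ${\rm d}p=a(p)\,{\rm d}t+\sigma(p)\,{\rm d}W_t$ with reflecting boundaries, $v^*(p)\propto \sigma^{-2}(p)\exp\big(\int^p 2a(u)\/\sigma^2(u)\,{\rm d}u\big)$, reading $a$ and $\sigma^2$ off Eq.~\ref{eq:general one dimensional}; the exact derivation is in Section S4.1. The payoff values in the sketch below are hypothetical, chosen as a dominance game with $\delta_2>M$:

```python
import numpy as np

# Hypothetical dominance-game payoffs (a < c, b < d: defection dominates)
# and parameters loosely following Fig. 5; all values are illustrative.
a, b, c, d = 1.0, 0.0, 1.5, 0.5
B, D, s = 2.0, 1.0, 1e-3
delta1, delta2 = 2.5, 2.0e4
lam = 1e-4
M = (B - D) / lam            # carrying capacity, here 10^4 < delta2

def drift(p):
    # deterministic part of the one-dimensional equation for p
    return s * p * (1 - p) * (1 - delta2 / M) * (b - d + (a - b - c + d) * p)

def sigma2(p):
    # squared diffusion coefficient of the same equation
    return (delta1 + 1) * B * p * (1 - p) / M

# Stationary density with reflecting boundaries, on a truncated grid
eps = 1e-3
p = np.linspace(eps, 1 - eps, 2001)
dp = p[1] - p[0]
potential = np.cumsum(2 * drift(p) / sigma2(p)) * dp
v = np.exp(potential - potential.max()) / sigma2(p)
v /= v.sum() * dp            # normalize (simple Riemann sum)

mass_upper = v[p > 0.5].sum() * dp
print(f"stationary mass at p > 1/2: {mass_upper:.2f}")
```

Because $\delta_2>M$ reverses the sign of the drift, most of the stationary mass in this sketch sits at high cooperator frequency, even though defection dominates the deterministic game.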
A frequency $p$ with greater probability density means that trajectories spend more time there. Derivation of the stationary distribution $v^*(p)$ under reflecting boundaries is given in Section S4.1 of Supplementary Information. \n\nWhen we ignore the stochastic terms, Eq.~\\ref{eq:general one dimensional} is an ODE with the same equilibrium points and stabilities as the classic replicator equation, provided $\\delta_2<M$. If $\\delta_2>M$, then the equilibrium points are the same as in the classic replicator equation, but the stabilities are reversed: equilibrium points that are classically unstable become stable, and conversely. And so the value of $\\delta_2$, which determines the payoff-component of offspring variance, can reverse the evolutionary outcome, even from a deterministic perspective.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=\\textwidth]{figure_main\/main_figure4.pdf}\n \\caption{\\textbf{General evolutionary game dynamics with demographic stochasticity.} \n We consider three representative types of games: the prisoner's dilemma (A, D), the snowdrift game (B, E), and the stag-hunt game (C, F).\n In the prisoner's dilemma, when demographic stochasticity is absent or does not satisfy $\\delta_2>M$, defectors dominate the population (see trajectories sampled in A, left part).\n The evolutionary direction is reversed for $\\delta_2>M$, where cooperation becomes the dominant strategy (see trajectories sampled in A, right part).\n Shown in (D) is the stationary distribution of cooperators for $\\delta_2=0$, $\\delta_2=25000$, and $\\delta_2=50000$.\n Analogously, in the snowdrift game, demographic stochasticity with $\\delta_2>M$ changes the outcome from coexistence of the two strategies (B, left part) to bi-stability (B, right part), effectively transforming a snowdrift game into a stag-hunt game.\n We also find that, with demographic stochasticity, evolution in the stag-hunt game proceeds ``as if\" the population 
were playing a snowdrift game. \n Parameters: $B=2$, $D=1$, $s=10^{-3}$, $\\delta_1=2.5$, $\\lambda=10^{-4}$, $x_0=y_0=100$. }\n \\label{fig5}\n\\end{figure}\n\nMore generally, we can classify three different deterministic scenarios based on the payoff matrix of the two-player, two-action game. For dominance games (Fig.~\\ref{fig5}A), one strategy is always dominant. Here, without loss of generality, we assume defection dominates cooperation ($a<c$ and $b<d$, e.g., a prisoner's dilemma). If $\\delta_2<M$, all trajectories converge to the full-defector state ($p=0$ stable and $p=1$ unstable). But when $\\delta_2>M$, cooperation becomes the dominant strategy and all trajectories converge to the full-cooperator state ($p=1$ stable and $p=0$ unstable). For coexistence games ($a<c$ and $b>d$, e.g.,~a snowdrift game), the best response is to choose the opposite strategy of the opponent (Fig.~\\ref{fig5}B). If $\\delta_2<M$, trajectories converge to a stable interior equilibrium $p^*$, at which cooperators and defectors coexist. But when $\\delta_2>M$, $p^*$ becomes unstable and $p=0$ and $p=1$ are each stable. Thus, all trajectories converge to either the full-cooperator or the full-defector state, similar to the outcome of a classic coordination game. For coordination games (Fig.~\\ref{fig5}C), the best response is to choose the same strategy as the opponent ($a>c$ and $d>b$, e.g., a stag-hunt game). In this case, if $\\delta_2<M$, the interior equilibrium $p^*$ is unstable and trajectories converge to one of the monomorphic states $p=0$ or $p=1$. But when $\\delta_2>M$, $p^*$ becomes stable while $p=0$ and $p=1$ are unstable. Most trajectories fluctuate around $p^*$ for a long time, showing behavior similar to a classic coexistence game. In summary, in a population with sufficiently large offspring variance ($\\delta_2>M$), the outcome of each type of game has the dynamical properties classically associated with the opposite type of game in a deterministic setting. In other words, demographic stochasticity effectively transforms the payoff structure of a game in the following way:\n\n\\begin{equation}\n \\begin{pmatrix}\n a&b\\\\c&d\n \\end{pmatrix} \\Rightarrow \\begin{pmatrix}\n -a&-b\\\\-c&-d\n \\end{pmatrix}.\n\\end{equation}\n\nWe can also characterize general two-player games in terms of the stationary frequency distribution of strategies, with reflecting boundaries. 
This description accounts for more details in the stochastic dynamics, and it reveals a similar, transformative effect of large offspring variance. \nIf $\\delta_2$ is sufficiently large, namely $\\delta_2>M$, then modes of the stationary distribution can be moved from one boundary to the other boundary (dominance games, Fig.~\\ref{fig5}D), from the interior to the two boundaries (coexistence games, Fig.~\\ref{fig5}E), or from the two boundaries to the interior (coordination games, Fig.~\\ref{fig5}F). \nThese results reflect our ODE-based analysis above, and they show that sufficient offspring variance can reverse the evolutionary dynamics in an interacting population. These dramatic effects extend to games with more than two actions, such as rock-paper-scissors (Supplementary Fig.~S4).\n\n\n\n\nThese two analytical perspectives underscore that large offspring variance can reshape the payoff structure of a game, producing dynamics classically seen in an entirely different game type. So far, we have focused on the scaling factor $\\delta_2$, which governs how offspring variance grows with payoff, as opposed to $\\delta_1$, which governs the baseline offspring variance. The value of $\\delta_1$ can also profoundly influence evolutionary outcomes, although this cannot be seen from a deterministic perspective alone because $\\delta_1$ has no effect on stabilities of equilibria. Analysis of the stationary distribution shows that a large baseline variance ($\\delta_1B$) can transform any game into a coordination game (see Section S4.1 in Supplementary Information). An example of this result is shown in Fig.~\\ref{fig5}F, where even though $\\delta_2=25,000$ exceeds the carrying capacity, the stationary distribution is not unimodal around intermediate frequency. 
This is because the effect of $\\delta_2$ here is offset by the effect of $\\delta_1$.\nThese results show that demographic noise, especially when offspring variance is high, can qualitatively change the evolutionary outcomes compared to predictions of traditional analysis by replicator equations for fixed or infinite population size \\cite{Hofbauer1998}.\n\n\n\n\n\n\n\n\n\\section{Discussion}\n\nThe question of how cooperation can be maintained is a longstanding and active area of research, spanning multiple disciplines. A large literature has produced compelling explanations for cooperation, but these typically rely on some form of population structure or repeated interactions. Here, we find that even in a well-mixed population with one-shot interactions, natural stochasticity in the total population size alone can favor cooperation that would otherwise be suppressed. For other types of social interactions, as well, demographic stochasticity can reverse the direction of evolutionary trajectories and produce behavioral outcomes that contravene classical expectations.\n\n\nIt is intuitively easier to invade a noisy population than a stable population. And so natural selection near carrying capacity prefers types not only with higher fecundity (greater mean offspring number), but also with lower reproductive noise (smaller offspring variance) \\cite{Parsons2007,Parsons2007a}. The reversal in the direction of selection in a stochastic population reflects this basic trade-off between offspring mean and offspring variance. A larger payoff produces higher fecundity but also greater noise in the reproduction process. 
Whether it is the mean or the variance in offspring number that dominates the course of evolution is determined by their relative importance, which is governed by $\\delta_2$ in our model.\nClassical models of populations with constant (or infinite) size neglect the effects of offspring variance altogether; but more realistic models, we have seen, permit regimes where offspring variance is more important than fecundity.\n\n\n\nAlthough demographic noise has been studied extensively in population models, the underlying mechanism for our results is qualitatively different from those explored in prior studies. Most research on demographic noise has been restricted to constant fitness for competing types \\cite{Parsons2007,Parsons2007a,Parsons2010,McKane2005,Butler2009,Hallatschek2007,Stollmeier2018,Wienand2017,Taitelbaum2020,Chotibut2017}, which does not provide a model of social interactions. However, Constable et al.~analyzed a frequency-dependent fitness model, and they also found that demographic noise can reverse the direction of selection \\cite{Constable2016}. Their model is based on the production and consumption of a public good. One phenotype produces the public good, at a cost that reduces its baseline birth rate, while the other phenotype does not produce the public good. \nThey analyze the case when ``cooperators\" (who produce the public good) have a larger intrinsic carrying capacity than non-producers, and the larger carrying capacity then yields an evolutionary advantage by making producers more robust against invasion. \nThis mechanism is thus a stochastic form of $r$ versus $K$ selection \\cite{pianka1970r}, and it occurs when births and deaths follow Poisson processes. By contrast, in our model, the evolutionary advantage of cooperators arises even though both types have the same baseline birth rate and the same carrying capacity, and it arises only when the birth process related to payoff is sufficiently over-dispersed. 
\nThis mechanism is thus fundamentally different from a trade-off between baseline birth rate and carrying capacity of competing types in a Poisson model \\cite{Constable2016,Houchmandzadeh2012,Houchmandzadeh2015}, and it is more closely related to phenomena in population models with heavy-tailed offspring distributions \\cite{schweinsberg2000necessary,Eldon2006,Sargsyan2008,der2012dynamics}.\n\nAside from promoting cooperation in the prisoner's dilemma, demographic stochasticity also transforms outcomes in other forms of social interaction. Stochasticity can convert a snowdrift game into a stag-hunt game, for example, so that the stable co-existence expected in a deterministic or Poisson setting is transformed into bi-stability. Here, again, the underlying mechanism that reverses the evolutionary outcome is over-dispersion in the offspring contribution related to payoff, even when both types have the same baseline birth rate and carrying capacity.\n\nAll of our analyses have assumed a fast-growing population ($\\alpha \\gg s$), which rapidly reaches carrying capacity before any change in the relative frequencies of competing types. The dynamics of competition may be more complicated in a stochastic, slow-growing population, because the analysis cannot be reduced to a one-dimensional slow manifold. In this regime, fixation will take place before reaching carrying capacity. We can nonetheless derive approximations for the fixation probability in this regime as well (Section S4.2 in Supplementary Information), and, in the case of the donation game, we find that cooperation will be favored by selection provided $\\delta_2$ exceeds the initial population size, $\\delta_2>n_0$. This condition is typically easier to satisfy than Eq.~\\ref{eq:condition}, and it is confirmed by both numerical simulations and Monte Carlo simulations of the compound Poisson process (Supplementary Fig.~S5 and Fig.~S6). 
After cooperators or defectors fix, in this regime of a slow-growing population, the population will tend to grow logistically to its carrying capacity; but in this case the carrying capacity is larger for cooperators (Supplementary Fig.~S7), which provides an additional evolutionary advantage and greater chance of long-term persistence (Supplementary Fig.~S8).\n\n\n\nOur results highlight the strong impact of stochasticity on evolutionary outcomes in populations. The demographic stochasticity we have studied arises from intrinsic properties of birth and death processes, whose fluctuations are of order $O(\\sqrt{n})$. As the population size grows towards infinity this form of stochasticity has little influence on evolutionary dynamics, which is consistent with the recent finding that migration in finite, group-structured populations can favor cooperators provided the population size is not too large \\cite{Braga2022}. Aside from intrinsic stochasticity during reproduction, real populations may also be subject to external noise, arising from exogenous variation in environmental conditions. Unlike demographic noise, exogenous noise can be substantial even in populations of arbitrarily large size. Prior studies on environmental fluctuations, including fluctuations in selection intensity \\cite{Assaf2013}, carrying capacity \\cite{Wienand2017,Taitelbaum2020}, and payoff structure \\cite{Stollmeier2018}, have analyzed their effects by imposing an external noise term onto an otherwise classical, deterministic and continuous system of equations. The effects of exogenous noise on discrete stochastic systems remain less explored, and they are likely to differ qualitatively from stochastic perturbations of continuous systems \\cite{Durrett1994}. 
Coupling intrinsic demographic noise with external environmental noise may produce even more complicated effects, which remains a topic for future research.\n\nThe impact of stochasticity on strategic outcomes likely extends beyond the two-player\/two-action games we focused on, to include many aspects of non-human and human social behavior. Even if behavioral spread is caused by biased imitation, there is nonetheless variance in the number of individuals who imitate a type, as well as physical variation in population sizes of interacting social groups as individuals move between social settings. Empirical data has documented burstiness, a form of over-dispersion, in social interactions \\cite{Stehle2010,Goh2008}. Likewise, in the context of behavior during an epidemic, there is evidence of super-spreading individuals that cause over-dispersion in infectiousness \\cite{Tkachenko2021,Kirkegaard2021}, which may influence frequency-dependent competition among co-circulating variants. Extending our model and analysis to these settings remains an open topic for future research.\n\n\n\n\n\\clearpage\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbfhy b/data_all_eng_slimpj/shuffled/split2/finalzzbfhy new file mode 100644 index 0000000000000000000000000000000000000000..1521ca8930ba318a7680b3d6aa8dc0697392c140 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbfhy @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nQuivers with relations play a fundamental role in the representation theory of finite dimensional algebras.\n\nEvery basic finite dimensional algebra $\\Lambda$ over an algebraically closed field $\\Bbbk$ is isomorphic to a path algebra $\\Bbbk Q\/R$ of a finite quiver $Q$ with admissible relations $R$. Moreover, the modules of $\\Lambda$ are in bijection with the representations of $(Q,R)$ over $\\Bbbk$. 
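For instance, for the linearly oriented quiver $1\\xrightarrow{\\alpha}2\\xrightarrow{\\beta}3$ bound by the admissible relation $\\beta\\alpha$, a representation is a diagram $V_1\\xrightarrow{V_\\alpha}V_2\\xrightarrow{V_\\beta}V_3$ of $\\Bbbk$-vector spaces satisfying $V_\\beta\\circ V_\\alpha=0$. 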
Without relations, we get a correspondence only to hereditary algebras \\cite{Gabriel}; see also \\cite[Ch.~III.1]{ARS}. \n\nNon-hereditary algebras are central in most fields of modern representation theory of algebras. For one, higher homological algebra requires algebras of global dimension at least 2 \\cite{Jasso, Kvamme}. There is a rich tradition of studying classes of non-hereditary algebras, such as gentle \\cite{AH, AS}, clannish \\cite{C-B}, Schur \\cite{Erdmann}, preprojective \\cite{Ringel}, and self-injective \\cite{SY} algebras.\n\nContinuous quivers and their representations were first explicitly studied in \\cite{IRT}. They are a natural generalisation of quivers, replacing finite sets of vertices with uncountably infinite sets. In the process, one gains intuition about which characteristics of representation theory come from innate properties of algebraic structures, and which come from the discrete examples that are usually studied.\n\nOne-parameter persistence modules are often defined over the real line, so that persistence modules coincide with pointwise finite-dimensional representations of a continuous quiver of type $\\mathbb{A}$ (see, for example, \\cite{CdSGO}).\nIn \\cite{BBOS} the authors consider $m\\times n$ rectangular grid quivers which have the commutativity relation on each square.\nThe authors of \\cite{BBH} study homological approximations in order to obtain new invariants of these representations (persistence modules).\n\n\nGiven the important role of quiver relations in the representation theory of finite-dimensional algebras, it is natural to ask if relations can be extended to the continuous setting. This has already been done in a restricted sense by the second author and Zhu in \\cite{RZ}. We give a more general definition that works with any underlying quiver. 
To capture the full generality we actually go beyond quivers and consider categories instead.\n\nIn starting this work, we were motivated by two areas of study that we intend to lift to the continuous setting: gentle algebras and $d$-cluster-tilting subcategories. In gentle algebras, the relations appear in the definition, and are always generated by compositions of two arrows. This type of relation is generalized as \\emph{point relations} in \\Cref{subsec:point relations}. An important class of $d$-cluster-tilting subcategories appears in the module category of type A algebras, with relations consisting of all paths above a certain length \\cite{Vaso}. This type of relation is generalized as \\emph{length relations} in \\Cref{subsec:length}.\n\n\n\\subsection{Contributions}\nIn \\Cref{sec:definition and general}, we give essential background, before stating our main definition.\n\n\\begin{definition}[\\Cref{def:admissible}]\nLet $\\mathcal{C}$ be a category and $\\mathcal{I}$ an ideal in $\\mathcal{C}$.\n We say $\\mathcal{I}$ is \\emph{admissible} if the following are satisfied.\n \\begin{enumerate}\n \\item For each $f$ in $\\mathcal{I}$, there exists a finite collection of morphisms $g_1,\\ldots,g_n$ not in $\\mathcal{I}$ such that $f = g_n\\circ\\cdots\\circ g_1$.\n \\item For each nonzero endomorphism $f$ of an object $X$, if $f$ is not an isomorphism then there exists $n\\in\\mathbb{N}$ with $n\\geq 2$ such that $f^n\\in\\mathcal{I}(X,X)$.\n \\end{enumerate}\n\\end{definition}\n\n\\Cref{sec:general results} gives some general results on the quotient category $\\mathcal{C} \/ \\mathcal{I}$, summarized here.\n\n\n\\begin{theorem}[\\Cref{prop:connected,prop:basic,prop:radical commutes with admissible}]\n Let $\\mathcal{C}$ be a category. 
Let $\\mathcal{I}$ be an admissible ideal of morphisms.\n \n \\begin{enumerate}\n \\item If $\\mathcal{C}$ is connected, then $\\mathcal{C} \/ \\mathcal{I}$ is also connected.\n \\item If $\\mathcal{C}$ is Krull--Remak--Schmidt--Azumaya, then $\\mathcal{C} \/ \\mathcal I$ is also Krull--Remak--Schmidt--Azumaya.\n \\item If all endomorphism rings of $\\mathcal{C}$ are artinian, then \\[\\mathsf{Rad}(\\mathcal{C} \/ \\mathcal{I})=\\mathsf{Rad}(\\mathcal{C}) \/ \\mathcal{I}.\\]\n \\end{enumerate}\n\\end{theorem}\n\nIn \\Cref{sec:relations} we give two important classes of relations that can generate admissible ideals. The first is \\emph{point relations}, which generalizes relations of length two. The idea is that certain paths through a vertex in the quiver are excluded, but others not. For an illustration see \\Cref{fig:point intro}.\n\n\\begin{figure}[h!]\n \\centering\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\centering\n \\begin{tikzpicture}\n \\node (center) at (0,0) {$\\bullet$};\n \\node at (0,.3) {$P$};\n \\node (left) at (-1,0) {$\\circ$};\n \\node (right) at (1,0) {$\\circ$};\n \\draw[->] (left)--(center);\n \\draw[->] (center) -- (right);\n \\end{tikzpicture}\n \\caption{Discrete case}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\centering\n \\begin{tikzpicture}[decoration={markings, mark=at position 0.5 with {\\arrow{>}}}]\n \\node (center) at (0,0) {$\\bullet$};\n \\node (left) at (-1,0) {};\n \\node (right) at (1,0) {};\n \\node at (0,.3) {$P$};\n \\draw[very thick, postaction=decorate] (left.center) to[bend left] (center.center);\n \\draw[very thick, postaction=decorate] (center.center) to[bend right] (right.center);\n \\end{tikzpicture}\n \\caption{Continuous case}\n \\end{subfigure}\n \\caption{An illustration of point relations in the discrete and continuous case. 
In both cases, the relations contain all paths passing through the point $P$.\n See \\Cref{fig:arrow and line} on page~\\pageref{fig:arrow and line} for an explanation of our drawing conventions.}\n \\label{fig:point intro}\n\\end{figure}\n\n\n\\begin{theorem}[\\Cref{thm:point relations are admissible}]\n Let $\\{\\mathcal P_\\alpha\\}$ be an admissible collection of point relations in $\\mathcal{C}$, such that any cycles are either isomorphisms (and hence trivial) or contained in at least one $\\mathcal P_\\alpha$.\n Then $\\mathcal{I}=\\langle \\bigcup_\\alpha \\mathcal P_\\alpha \\rangle$ is an admissible ideal in $\\mathcal{C}$.\n\\end{theorem}\n\nThe other class of relations we define are \\emph{length relations}. This is a generalisation of relations generated by paths containing at least $n$ arrows, where $n$ is a natural number.\n\n\\begin{theorem}[\\Cref{thm:length relations are admissible}]\n A length relation generates an admissible ideal.\n\\end{theorem}\n\\Cref{sec:examples} contains multiple examples of how relations work, including a sketch of their Auslander--Reiten theory.\n\n\\subsection{Future Work}\nThe present paper is a precursor to future work on generalizations of non-hereditary structures.\nOf note, the authors will consider point relations, such as \\Cref{ex:crossing real lines}, that generalize gentle algebras. 
They will also study quotients by length relations, such as \\Cref{ex:length relations}(\\ref{ex:length relations:continuous A}), to generate higher cluster tilting subcategories.\n\n\\subsection{Acknowledgements}\nThe idea for this project was conceived at the Hausdorff Research Institute of Mathematics. During this project, KMJ visited JDR at Ghent University and JDR visited KMJ at Aarhus University.\nThe authors thank each of these institutions for their hospitality.\nKMJ is supported by the Norwegian Research Council via the project Higher Homological Algebra and Tilting Theory (301046).\nJDR is supported at Ghent University by BOF grant 01P12621.\nThe authors would like to thank Jenny August, Raphael Bennett-Tennenhaus, Charles Paquette, Amit Shah, Emine Y{\\i}ld{\\i}r{\\i}m, and Shijie Zhu for helpful discussions.\n\n\\subsection{Conventions}\nWe work over an algebraically closed field $\\Bbbk=\\overline{\\Bbbk}$ of characteristic 0.\nBy $\\mathsf{Vec}(\\Bbbk)$ and $\\mathsf{vec}(\\Bbbk)$ we denote the categories of $\\Bbbk$-vector spaces and finite-dimensional $\\Bbbk$-vector spaces, respectively.\nFor a $\\Bbbk$-algebra $\\Lambda$, denote by $\\mathsf{J}(\\Lambda)$ the Jacobson radical of $\\Lambda$.\nWe assume $\\mathcal{C}$ is a $\\Bbbk$-linear category.\n\nRecall that a category $\\mathcal{C}$ is called \\emph{Krull--Remak--Schmidt--Azumaya} if any object $X$ is isomorphic to an arbitrary sum $\\bigoplus X_i$, where each $\\End_\\mathcal{C} X_i$ is a local ring, and this decomposition is unique up to isomorphism.\nIf every object is instead isomorphic to a finite sum $\\bigoplus_{i=1}^n X_i$ as above, we say $\\mathcal{C}$ is \\emph{Krull--Remak--Schmidt}.\n\nFinally, we consider (discrete) quivers, continuous generalizations of such quivers, and combinations of the two.\nWhen we draw an arrow, we use a thin line with an arrow head at the end to indicate the direction.\nWhen we draw a continuous line segment, we use a bold line with the arrow head in the 
middle to indicate the direction;\nsee \\Cref{fig:arrow and line}.\n\n\\begin{figure}[h]\n \\centering\n \\begin{tikzpicture}[decoration={markings, mark=at position 0.5 with {\\arrow{>}}}]\n \\node (left-start) at (0,0) {$\\bullet$};\n \\node (left-end) at (2,0) {$\\bullet$};\n \\node (right-start) at (3,0) {$\\bullet$};\n \\node (right-end) at (5,0) {$\\bullet$};\n \\draw[->] (left-start)--(left-end);\n \n \\draw[very thick, postaction=decorate] (right-start.center)--(right-end.center);\n \n \\end{tikzpicture}\n \\caption{On the left, how we draw arrows. On the right, how we draw line segments.}\n \\label{fig:arrow and line}\n\\end{figure}\n\n\n\\section{Definition and General Results}\\label{sec:definition and general}\n\n\\subsection{$\\Bbbk$-linear categorization}\n\n\\begin{definition}\\label{def:categorification}\n Let $Q$ be a (finite) quiver and $\\Bbbk Q$ its path algebra.\n Let $\\mathcal{Q}$ be the category whose indecomposable objects are the vertices of $Q$ and morphisms between indecomposables $i$ and $j$ are given by\n \\begin{displaymath}\n \\Hom_{\\mathcal{Q}}(i,j) = e_j \\Bbbk Q e_i.\n \\end{displaymath}\n The objects in $\\mathcal{Q}$ are finite direct sums of the indecomposables (and 0).\n The morphisms in $\\mathcal{Q}$ are given by extending bilinearly.\n We call $\\mathcal{Q}$ the \\emph{$\\Bbbk$-linear categorification} of $Q$.\n\\end{definition}\n\n\\begin{example}\\label{ex:discrete categorification}\nLet $Q$ be the following quiver:\n\n\\begin{center}\n\\begin{tikzpicture}[xscale=1.5, yscale=.5]\n \\node (1) at (0,1) {1};\n \\node (2) at (1,2) {2};\n \\node (3) at (1,0) {3};\n \\node (4) at (2,1) {4};\n \n \\draw[->] (1) -- node[pos=0.6, above]{$\\alpha_1$} (2);\n \\draw[->] (1) -- node[pos=0.6, below]{$\\beta_1$} (3);\n \\draw[->] (2) -- node[pos=0.4, above]{$\\alpha_2$} (4);\n \\draw[->] (3) -- node[pos=0.4, below]{$\\beta_2$} (4);\n\\end{tikzpicture}\n\\end{center}\nThen the $\\Bbbk$-linear categorification $\\mathcal{Q}$ is a category 
with indecomposable objects $1, 2, 3$ and $4$. The morphisms in $\\mathcal{Q}$ are given by paths in $Q$, so for example we have $\\Hom(1,4) \\cong\\Bbbk^2$, while $\\Hom(4,1) = 0$.\n\\end{example}\n\n\\begin{proposition}\\label{lem:correspondence between paths and morphisms}\n There is a bijection between nonzero elements in $\\Bbbk Q$ and nonzero morphisms in $\\mathcal{Q}$.\n\\end{proposition}\n\\begin{proof}\n A non-zero element in $\\Bbbk Q$ is a finite sum of paths in $Q$. We can therefore define a map $F$ from the elements in $\\Bbbk Q$ to morphisms in $\\mathcal{Q}$ by specifying the action of $F$ on paths in $Q$. We let this mapping be determined by $\\Hom_{\\mathcal{Q}}(i,j) = e_j \\Bbbk Q e_i$. This map is a bijection by bilinearity of $\\mathcal{Q}$.\n\\end{proof}\n\n\n\\begin{lemma}\\label{prop:same representations}\n Let $\\mathsf{Mod}(\\mathcal{Q})$ be the category of functors $\\mathcal{Q}\\to \\mathsf{Vec}(\\Bbbk)$.\n Then there exists an isomorphism of categories $\\Phi:\\mathsf{Mod}(\\mathcal{Q}) \\to \\mathsf{Rep}(Q)$.\n\\end{lemma}\n\n\\begin{proof}\n Let $F$ be a functor in $\\mathsf{Mod}(\\mathcal{Q})$.\n We now define the corresponding representation $V=\\Phi(F)$.\n Namely, let $V$ be the representation of $Q$ over $\\Bbbk$ where $V(i)=F(i)$ for each $i\\in Q_0$.\n For a path $\\rho$ in $Q$, let $V(\\rho)$ be the $\\Bbbk$-linear map $F(\\rho)$.\n \n Let $f:F\\to G$ be a morphism in $\\mathsf{Mod}(\\mathcal{Q})$.\n Then $\\Phi(f):\\Phi(F)\\to \\Phi(G)$ is defined by the $f_i:F(i)\\to G(i)$ for each $i\\in Q_0$.\n Straightforward computations show that $\\Phi$ respects composition and so it is a functor.\n \n Define $\\Phi^{-1}:\\mathsf{Rep}(Q)\\to \\mathsf{Mod}(\\mathcal{Q})$ in the following way.\n For a representation $V$ of $Q$, let $F=\\Phi^{-1}(V)$ be determined by $F(i)=V(i)$, for each $i\\in Q_0$, and $F(\\rho)=V(\\rho)$ for each path $\\rho$ in $Q$.\n Morphisms are defined similarly.\n One may check $\\Phi^{-1}\\Phi$ and $\\Phi\\Phi^{-1}$ are the 
identity functors on $\\mathsf{Mod}(\\mathcal{Q})$ and $\\mathsf{Rep}(Q)$, respectively.\n\\end{proof}\n\n\\subsection{The Jacobson Radical}\\label{sec:jacobson radical}\n\n\\begin{definition}\\label{def:radical of C}\n Let $\\mathcal{C}$ be a category.\n The \\emph{radical} $\\mathsf{Rad}(\\mathcal{C})$ of $\\mathcal{C}$ is the ideal consisting of\n \\begin{displaymath}\n \\mathsf{Rad}_{\\mathcal{C}}(X,Y) := \\left\\{ f\\in\\Hom_{\\mathcal{C}}(X,Y) \\mid \\forall g\\in\\Hom_{\\mathcal{C}}(Y,X),\\, f\\circ g\\in\\mathsf{J}(\\End_{\\mathcal{C}}(Y)) \\right\\},\n \\end{displaymath}\n for each pair of objects $X,Y$ in $\\mathcal{C}$.\n\\end{definition}\n\n\\begin{proposition}[\\cite{Krause}]\\label{prop:Krause}\n Let $X$ and $Y$ be objects in $\\mathcal{C}$.\n Then $\\mathsf{Rad}_{\\mathcal{C}}(X,Y)=\\mathsf{J}(\\Hom_{\\mathcal{C}}(X,Y))$.\n\\end{proposition}\n\n\\begin{proposition}\\label{prop:easy radical}\n Let $f:X\\to Y$ be a morphism between indecomposable objects $X,Y$ in $\\mathcal{C}$.\n Then $f\\in\\mathsf{Rad}(\\mathcal{C})$ if and only if $f$ is not an isomorphism.\n\\end{proposition}\n\\begin{proof}\n Suppose $f$ is not an isomorphism.\n If $\\mathcal{C}$ does not have cycles we are done.\n If $\\mathcal{C}$ has cycles, let $g:Y\\to X$ be a nonzero morphism.\n Then $f\\circ g\\in\\mathsf{J}(\\End_{\\mathcal{C}}(Y))$ and so $f\\in \\mathsf{Rad}_{\\mathcal{C}}(X,Y)$.\n Reversing the argument shows that if $f\\in\\mathsf{Rad}_{\\mathcal{C}}(X,Y)$ then $f$ is not an isomorphism.\n\\end{proof}\n\nRecall that $\\mathcal{C}$ is semi-simple if every object in $\\mathcal{C}$ is a finite direct sum of simple objects and all such direct sums exist.\n\n\\begin{proposition}\\label{prop:C is basic}\n If $\\mathcal{C}$ is Krull--Remak--Schmidt, then $\\mathcal{C}\/ \\mathsf{Rad}(\\mathcal{C})$ is semi-simple.\n\\end{proposition}\n\\begin{proof}\n Let $\\mathcal{C}$ be a Krull--Remak--Schmidt category.\n Let $X$ and $Y$ be indecomposables in $\\mathcal{C}$ such that 
$X\\not\\cong Y$.\n Then $\\Hom_{\\mathcal{C}}(X,Y)=\\mathsf{Rad}_{\\mathcal{C}}(X,Y)$ and so $\\Hom_{\\mathcal{C} \/ \\mathsf{Rad}(\\mathcal{C})}(X,Y)=0$.\n Extending bilinearly we see $\\mathcal{C} \/ \\mathsf{Rad}(\\mathcal{C})$ is semi-simple.\n\\end{proof}\n\n\\begin{remark}\\label{rmk:categorification is basic}\n It follows immediately from \\Cref{prop:C is basic} that if $Q$ is a finite acyclic quiver then $\\mathcal{Q} \/\\mathsf{Rad}(\\mathcal{Q})$ is semi-simple.\n\\end{remark}\n\n\\subsection{Admissible Ideals}\\label{sec:admissible ideals}\n\n\\begin{definition}[\\cite{Krause}]\n Let $\\{\\mathcal{C}_i\\}_{i\\in I}$ be a family of full additive subcategories of $\\mathcal{C}$. We have an \\emph{orthogonal decomposition} $\\coprod_{i\\in I} \\mathcal{C}_i$ of $\\mathcal{C}$ if every object $X$ in $\\mathcal{C}$ is isomorphic to a direct sum $\\bigoplus_{i\\in I} X_i$, where $X_i$ is an object of $\\mathcal{C}_i$, and for $X_i\\in \\mathcal{C}_i,X_j\\in C_j$ we have $\\Hom_{\\mathcal{C}}(X_i,X_j)=0$ when $i\\neq j$.\n \n We say $\\mathcal{C}$ is \\emph{connected} if the only orthogonal decomposition of $\\mathcal{C}$ is the trivial one. 
\n\\end{definition}\n\nAn \\emph{ideal} $\\mathcal{I}$ of a category $\\mathcal{C}$ is a collection of morphisms in $\\mathcal{C}$ such that for any $f\\in \\mathcal{I}$ and any morphisms $g$ and $h$ for which the composition $gfh$ is defined, we have $gfh\\in \\mathcal{I}$.\nFor an ideal $\\mathcal{I}$ of $\\mathcal{C}$, we denote by $\\mathcal{I}(X,Y)$ the morphisms in $\\Hom_{\\mathcal{C}}(X,Y)\\cap \\mathcal{I}$.\n\n\\begin{remark}\n For an ideal $\\mathcal{I}$ of $\\mathcal{C}$, the category $\\mathcal{C} \/\\mathcal{I}$ has the same objects as $\\mathcal{C}$.\n The morphisms of $\\mathcal{C} \/ \\mathcal{I}$ are given by $\\Hom_{\\mathcal{C}}(X,Y) \/ \\mathcal{I}(X,Y)$.\n A representation $V: \\mathcal{C} \/ \\mathcal{I} \\to \\mathsf{vec}(\\Bbbk)$ is also a representation of $\\mathcal{C}$ by precomposition with the quotient functor $\\pi$. Thus we obtain a representation $\\widetilde{V}:\\mathcal{C} \\stackrel{\\pi}{\\to} \\mathcal{C} \/ \\mathcal{I} \\stackrel{V}{\\to} \\mathsf{vec}(\\Bbbk)$.\n Hence we may consider the representations of $\\mathcal{C}\/\\mathcal{I}$ as a subcategory of the representations of $\\mathcal{C}$.\n In particular, representations of $\\mathcal{C}\/\\mathcal{I}$ are those representations $V$ of $\\mathcal{C}$ such that if $f\\in\\mathcal{I}$ then $V(f)=0$.\n\\end{remark}\n\n\\begin{definition}\\label{def:admissible}\n Let $\\mathcal{C}$ be a $\\Bbbk$-linear, Krull--Remak--Schmidt--Azumaya category and $\\mathcal{I}$ an ideal in $\\mathcal{C}$.\n We say $\\mathcal{I}$ is \\emph{admissible} if the following are satisfied.\n \\begin{enumerate}\n \\item For each $f$ in $\\mathcal{I}$, there exists a finite collection of morphisms $g_1,\\ldots,g_n$ not in $\\mathcal{I}$ such that $f = g_n\\circ\\cdots\\circ g_1$.\n \\item For each indecomposable $X$ in $\\mathcal{C}$, the endomorphism ring $\\End_{\\mathcal{C}}(X) \/ \\mathcal{I}(X,X)$ is finite-dimensional.\n \\end{enumerate}\n\\end{definition}\n\nWe remark that, in \\Cref{def:admissible}(2), we 
do not want to require that there is some $n$ that works for all nonisomorphism endomorphisms $f$.\nSee \\Cref{ex: cycles length} for an explicit example of why.\n\n\n\\begin{lemma}\\label{lem: admissible in radical}\n\tLet $\\mathcal{C}$ be a Krull--Remak--Schmidt category and $\\mathcal{I}$ an ideal in $\\mathcal{C}$.\n If $\\mathcal{I}$ is admissible, then $\\mathcal{I}$ is contained in the radical of $\\mathcal{C}$.\n\\end{lemma}\n\n\\begin{proof}\n\tLet $f:X\\rightarrow Y$ be a morphism in $\\mathcal{I}$ between indecomposable objects. Then we know by \\Cref{prop:easy radical} that if $f$ is not contained in the radical, it is an isomorphism. However, if $f$ is an isomorphism, we have $1_X\\in\\mathcal{I}$.\n\tThen \\emph{every} morphism to \/ from $X$ is in $\\mathcal{I}$.\n\tThus, if $1_X=g_n\\circ\\cdots\\circ g_1$ for any composition, both $g_n,g_1\\in \\mathcal{I}$, which contradicts \\Cref{def:admissible}(1).\n\tHence $\\mathcal{I}(X,Y)\\subseteq \\mathsf{Rad}(X,Y)$ for indecomposable $X,Y$. \n\t\n\tNow let $X=\\bigoplus_{i=0}^m X_i$ and $Y=\\bigoplus_{j=0}^n Y_j$, where each $X_i$ and $Y_j$ is indecomposable. Consider $f\\in \\mathcal{I}(X,Y)$. We can rewrite $f$ as $f=(f_{ij})$, where $f_{ij}:X_i\\rightarrow Y_j$. By composition with the canonical injections and projections, we see that $f_{ij}\\in \\mathcal{I}(X_i,Y_j)$, so by the above, $f_{ij}\\in \\mathsf{Rad}(X_i,Y_j)$.\n\tThen by linearity, $f\\in \\mathsf{Rad}(X,Y)$.\n\\end{proof}\n\n\nLet $Q$ be a finite quiver and let $\\mathcal{Q}$ be its $\\Bbbk$-linear categorification. Suppose $I$ is an ideal of the path algebra $\\Bbbk Q$. We show how to build an ideal $\\mathcal{I}$ in $\\mathcal{Q}$ from $I$. \n\nFrom the definition of the $\\Bbbk$-linear categorification, we know that each path in $I$ corresponds to a non-zero morphism in $\\mathcal{Q}$, see \\Cref{lem:correspondence between paths and morphisms}. 
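For instance, for the quiver $1\\xrightarrow{\\alpha}2\\xrightarrow{\\beta}3$ with $I=(\\beta\\alpha)$, the path $\\beta\\alpha\\in I$ corresponds to the morphism spanning $\\Hom_{\\mathcal{Q}}(1,3)=e_3\\Bbbk Q e_1$. 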
We (na\\\"ively) let $\\mathcal{I}$ be the set of morphisms obtained by mapping $I$ to $\\mathcal{Q}$. We now show that $\\mathcal{I}$ is an ideal of the category $\\mathcal{Q}$. \n\nBy $\\Bbbk$-linearity of $Q$, it is enough to consider morphisms between indecomposable objects.\nSuppose $f\\in\\mathcal{I}(i,j)$ for some $i,j\\in Q_0=\\operatorname{Ind}\\mathcal{Q}$, and let $g:j\\rightarrow k$ and $h:l\\rightarrow i$ be two nonzero morphisms in $\\mathcal{Q}$.\nBy \\Cref{lem:correspondence between paths and morphisms} we know that $f$ corresponds to an element $\\rho$ in $e_j\\Bbbk Qe_i$.\nFurther, $g$ corresponds to an element $\\psi$ in $\\Bbbk Qe_j$ and $h$ corresponds to an element $\\phi$ in $e_i\\Bbbk Q$.\nEach of $\\rho$, $\\phi$, and $\\psi$ are are sums of paths in $Q$ from the respective source and to the respective target.\nWithout loss of generality, due to $\\Bbbk$-linearity, suppose each of $\\rho$, $\\phi$, and $\\psi$ is a path in $Q$.\nWe see $\\psi\\rho\\phi$ is an element of $I$ since $I$ is a two-sided ideal containing $\\rho$.\nThe image of the composition $\\psi\\rho\\phi$ is the composition $gfh$, which must therefore be in $\\mathcal{I}$.\n\n\\begin{proposition}\\label{prop:admissible is correct}\n Let $Q$ be a finite quiver and $I$ an ideal of $\\Bbbk Q$ as a path algebra.\n Let $\\mathcal{Q}$ be the $\\Bbbk$-linear category induced by $Q$ and let $\\mathcal{I}$ be the ideal induced by $I$ in $\\mathcal{Q}$.\n Then $\\mathcal{I}$ is an admissible ideal of $\\mathcal{Q}$ as in \\Cref{def:admissible} if and only if $I$ is an admissible ideal of $\\Bbbk Q$.\n\\end{proposition}\n\n\\begin{proof}\n Let $I$ be an admissible ideal of $\\Bbbk Q$ and $\\rho\\in I$.\n We first prove $\\mathcal I$ satisfies property (1) of \\Cref{def:admissible}.\n Without loss of generality, assume $\\rho$ is a path in $Q$.\n Then $\\rho = \\alpha_n \\alpha_{n-1}\\cdots \\alpha_2\\alpha_1$, where each $\\alpha_i$ is an arrow in $Q$.\n Now, let $f$ be the morphism in 
$\\mathcal{Q}$ corresponding to $\\rho$ and $g_i$ the morphism in $\\mathcal{Q}$ corresponding to $\\alpha_i$, for each $i$.\n Then we know each $g_i\\notin \\mathcal I$ and have satisfied property (1) of \\Cref{def:admissible}.\n Reversing the argument proves the converse.\n \n If $Q$ has no cycles then the proposition immediately holds for property (2) of \\Cref{def:admissible}.\n So, suppose $Q$ has at least one oriented cycle.\n Since $I \\subset \\mathsf{Rad}^n(\\Bbbk Q)$, for some $n\\geq 2$, we see that $\\mathcal I$ must immediately satisfy property (2) of \\Cref{def:admissible}.\n \n Now suppose $\\mathcal{I}$ satisfies \\Cref{def:admissible}(2).\n Since $Q$ is finite, there are finitely many cycles.\n For each cycle $\\rho$ at each vertex $i$, let \n \\[ m_\\rho= \\min_m \\{ m \\mid \\rho^m \\in \\mathcal{I}(i,i)\\}.\\]\n We know such an $m_\\rho$ exists since $\\End_{\\mathcal{Q}}(i) \/ \\mathcal{I}(i,i)$ is finite-dimensional.\n Let $n_\\rho$ be $m_\\rho$ times the length of $\\rho$.\n Then let \\[ N = \\max\\left(\\max_\\rho \\{n_\\rho\\}\\cup \\{\\text{length of longest path without cycles in }Q\\}\\right).\\]\n Thus, $\\mathsf{Rad}^N(\\Bbbk Q)\\supset I$.\n This concludes the proof.\n\\end{proof}\n\n\\begin{remark}\nThe second half of the proof above can be extended to more general quivers.\nSuppose $\\mathcal{Q}$ is the $\\Bbbk$-linear categorification of a (not necessarily finite) quiver $Q$ with finitely many cycles. Suppose that for each cycle $\\rho$ in the quiver with corresponding morphism $f_\\rho:X\\rightarrow X$, there is some $n\\geq 2$ such that $f_\\rho^n\\in \\mathcal{I}(X,X)$.\nThen $\\mathcal{I}$ satisfies criterion (2) in \\Cref{def:admissible}. \n\nFor the majority of our examples, this will be the criterion we actually use.\n\\end{remark}\n\n\\begin{example}\\label{ex:discrete ideal}\n Consider the quiver from \\Cref{ex:discrete categorification}. 
Let $I$ be the ideal generated by the commutativity relation $\\{\\alpha_2\\alpha_1-\\beta_2\\beta_1\\}$.\n \n In the $\\Bbbk$-linear categorification, the relation $\\alpha_2\\alpha_1-\\beta_2\\beta_1$ can be written as $\\left [ \\begin{smallmatrix}\\alpha_2 & \\beta_2\\end{smallmatrix}\\right]\\left [ \\begin{smallmatrix}\\alpha_1 \\\\ -\\beta_1\\end{smallmatrix}\\right]$. The ideal generated by this morphism fulfills the criteria for being an admissible ideal. \n\\end{example}\n\n\\subsection{General Results}\\label{sec:general results}\n\n\\begin{proposition}\\label{prop:connected}\n Let $\\mathcal{C}$ be connected, and let $\\mathcal{I}$ be an admissible ideal of morphisms. Then $\\mathcal{C} \/ \\mathcal{I}$ is connected.\n\\end{proposition}\n\n\\begin{proof}\n Assume towards a contradiction that $\\mathcal{C} \/ \\mathcal{I}$ is not connected; then there exists a decomposition of $\\mathcal{C} \/ \\mathcal{I}$ into mutually orthogonal subcategories $\\mathcal{C}'_1, \\cdots, \\mathcal{C}'_n$. We can lift these subcategories to subcategories $\\mathcal{C}_1, \\cdots, \\mathcal{C}_n$ of $\\mathcal{C}$. As $\\mathcal{C}$ is connected, these subcategories cannot all be mutually orthogonal, so assume that there exists some morphism $f:X_i\\rightarrow X_j$, with $X_i\\in \\mathcal{C}_i, X_j\\in \\mathcal{C}_j$ and $i\\neq j$. To preserve mutual orthogonality in $\\mathcal{C}\/\\mathcal{I}$, we must have $f\\in \\mathcal{I}$. Then since $\\mathcal{I}$ is an admissible ideal, we can write $f=g_m\\circ \\cdots \\circ g_1$, where none of $g_1, \\cdots, g_m$ is in $\\mathcal{I}$.\n \n Consider $g_1: X_i\\rightarrow Y$ and denote its image in $\\mathcal{C} \/ \\mathcal{I}$ by $\\overline{g_1}$. As $\\mathcal{C} \/ \\mathcal{I}$ is not connected, we can write $Y=\\bigoplus_{i=1}^n Y_i$, with $Y_i\\in \\mathcal{C}'_i$ and $\\overline{g_1}=(g_1^1,\\cdots, g_1^n)$. Now, as $\\overline{g_1}$ is nonzero, $g_1^k$ is nonzero for some $k$. If $k\\neq i$, we have reached a contradiction. 
If $k=i$, we can repeat the argument with $\\overline{g_2}|_{Y_i}$, eventually reaching a contradiction. \n\\end{proof}\n\n\\begin{proposition}\\label{prop:basic}\n Let $\\mathcal{C}$ be Krull--Remak--Schmidt--Azumaya and $\\mathcal{I}$ an admissible ideal.\n Then $\\mathcal{C} \/ \\mathcal I$ is Krull--Remak--Schmidt--Azumaya.\n\\end{proposition}\n\\begin{proof}\nLet $X$ be an object in $\\mathcal{C} \/ \\mathcal I$. Then $X$ is an object in $\\mathcal{C}$, which we assume to be Krull--Remak--Schmidt--Azumaya. It follows that $X=\\bigoplus_{\\alpha} X_\\alpha$, where $\\End_\\mathcal{C}(X_\\alpha)$ is local. If we can show that $\\End_{\\mathcal{C}\/\\mathcal{I}}(X_\\alpha)$ is local, we are done.\n\nWe know that $\\End_{\\mathcal{C}\/\\mathcal{I}}(X_{\\alpha})=\\End_\\mathcal{C}(X_{\\alpha})\/\\mathcal{I}(X_{\\alpha},X_{\\alpha})$. By property 1 of \\Cref{def:admissible}, the identity on $X_{\\alpha}$ cannot be an element of $\\mathcal{I}(X_{\\alpha},X_{\\alpha})$, so $\\End_{\\mathcal{C}\/\\mathcal{I}}(X_{\\alpha})$ is a nonzero quotient ring of a local ring, which is local by the ideal correspondence theorem for quotient rings.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:radical commutes with admissible}\n Let $\\mathcal{C}$ be a category such that all endomorphism rings are artinian and $\\mathcal{I}$ an admissible ideal.\n Then \\[\\mathsf{Rad}(\\mathcal{C} \/ \\mathcal{I})=\\mathsf{Rad}(\\mathcal{C}) \/ \\mathcal{I}.\\] \n\\end{proposition}\n\\begin{proof}\nThe equation $\\mathsf{Rad}(\\mathcal{C} \/ \\mathcal{I}) = \\mathsf{Rad}(\\mathcal{C}) \/ \\mathcal{I}$ holds if and only if $\\mathsf{Rad}_{\\mathcal{C} \/ \\mathcal{I}}(X,Y) = \\mathsf{Rad}_\\mathcal{C}(X,Y) \/ \\mathcal{I}$ holds for all pairs of objects $X,Y\\in \\mathcal{C}$.\n\nFirst note that if $\\End_{\\mathcal{C} }(Y)$ is artinian, then\n\\begin{align*}\n\t \\mathsf{Rad}_\\mathcal{C}(Y,Y) \/ \\mathcal{I} &= \\mathsf{J}(\\End_{\\mathcal{C} }(Y))\/ \\mathcal{I}= 
\\mathsf{J}(\\End_{\\mathcal{C} }(Y)+\\mathcal{I})\/\\mathcal{I} \\\\ \n\t &= \\mathsf{J}(\\End_{\\mathcal{C} \/ \\mathcal{I}}(Y)) =\\mathsf{Rad}_{\\mathcal{C} \/ \\mathcal{I}}(Y,Y).\n\\end{align*}\nNow we can see that \n\\begin{align*}\n\tf \\in \\mathsf{Rad}_\\mathcal{C}(X,Y) \/ \\mathcal{I} &\\Leftrightarrow f = f'+\\mathcal{I}(X,Y)\\text{, with } f'\\in \\mathsf{Rad}_\\mathcal{C}(X,Y) \\\\\n\t& \\Leftrightarrow f = f'+\\mathcal{I}(X,Y)\\text{, s.t.\\ } f'\\circ g'\\in \\mathsf{J}(\\End_{\\mathcal{C} }(Y))\\, \\forall\\, g'\\in \\Hom_\\mathcal{C} (Y,X)\\\\\n\t& \\Leftrightarrow f\\circ (g'+\\mathcal{I}(Y,X))\\in \\mathsf{J}(\\End_{\\mathcal{C} }(Y) )\/\\mathcal{I}\\, \\forall \\, g'\\in \\Hom_\\mathcal{C} (Y,X)\\\\\n\t& \\Leftrightarrow f\\circ g \\in \\mathsf{J}(\\End_{\\mathcal{C} \/\\mathcal{I}}(Y) )\\, \\forall\\, g\\in \\Hom_{\\mathcal{C}\/\\mathcal{I}} (Y,X)\\\\\n\t& \\Leftrightarrow f \\in \\mathsf{Rad}_{\\mathcal{C}\/\\mathcal{I}}(X,Y). \n\\end{align*}\n\\end{proof}\nIn particular, if $\\mathcal{C}$ is $\\Bbbk$-linear and Krull--Remak--Schmidt, then all endomorphism rings are artinian and the above Proposition holds.\n\nThe following lemma is useful for our examples in \\Cref{sec:examples}.\n\\begin{lemma}\\label{lem:stack admissible}\n Let $\\mathcal{C}$ be a small, $\\Bbbk$-linear, Krull--Remak--Schmidt--Azumaya category and $\\mathcal{I}$ an admissible ideal in $\\mathcal{C}$.\n Let $\\mathcal{J}$ be an admissible ideal in $\\mathcal{C}\/\\mathcal{I}$ and $\\widetilde{\\Jay}$ the set of morphisms $f$ in $\\mathcal{C}$ such that $f+\\mathcal{I}\\in\\mathcal{J}$ in $\\mathcal{C}\/\\mathcal{I}$. 
Then the following hold:\n \\begin{enumerate}\n \\item $\\widetilde{\\Jay}$ contains $\\mathcal{I}$.\n \\item $\\widetilde{\\Jay}$ is an ideal.\n \\item $\\widetilde{\\Jay}$ is admissible in $\\mathcal{C}$.\n \\item $(\\mathcal{C}\/\\mathcal{I})\/\\mathcal{J} \\simeq \\mathcal{C}\/\\widetilde{\\Jay}$.\n \\end{enumerate} \n\\end{lemma}\n\\begin{proof}\n \\textbf{1.} Let $f\\in\\mathcal{I}$.\n Then $f\\mapsto 0$ in $\\mathcal{C}\/\\mathcal{I}$.\n Since all zero morphisms in $\\mathcal{C}\/\\mathcal{I}$ are in $\\mathcal{J}$, we see $f\\in \\widetilde{\\Jay}$.\n \n \\textbf{2.} Let $f:X\\to Y$ be nonzero in $\\widetilde{\\Jay}$ and let $g:Y\\to Z$ be nonzero in $\\mathcal{C}$.\n Then $f+\\mathcal{I}$ and $g+\\mathcal{I}$ are in $\\mathsf{Mor}(\\mathcal{C}\/\\mathcal{I})$.\n So, $(g+\\mathcal{I})\\circ(f+\\mathcal{I})$ is in $\\mathcal{J}$ and is equal to $gf + \\mathcal{I}$.\n Then $gf \\in \\widetilde{\\Jay}$.\n \n \\textbf{3.} Let $f\\in\\widetilde{\\Jay}$.\n Then $f+\\mathcal{I}\\in\\mathcal{J}$ and, by assumption, there exists $(g_1+\\mathcal{I}),\\ldots,(g_n+\\mathcal{I})$ in $\\mathsf{Mor}(\\mathcal{C}\/\\mathcal{I})\\setminus\\mathcal{J}$ such that \\[ f + \\mathcal{I} = (g_n+\\mathcal{I})\\circ (g_{n-1}+\\mathcal{I})\\circ\\cdots\\circ (g_2+\\mathcal{I})\\circ(g_1+\\mathcal{I}).\\]\n Then for each $g_i$ there is $h_i\\in\\mathcal{I}$ such that $g_i+h_i\\mapsto g_i+\\mathcal{I}$ and \\[f = (g_n+h_n)\\circ(g_{n-1}+h_{n-1})\\circ\\cdots\\circ(g_2+h_2)\\circ (g_1+h_1). 
\\]\n Since each $g_i+\\mathcal{I}\\notin \\mathcal{J}$, we know each $g_i+h_i\\notin\\widetilde{\\Jay}$ and so $f$ is a finite composition of morphisms not in $\\widetilde{\\Jay}$.\n Additionally, for any nonzero, nonisomorphism endomorphism $f$, we have $f^n\\in\\mathcal{I}$ for some $n\\in\\mathbb{N}$.\n Then $f^n\\in\\widetilde{\\Jay}$ by statement 1.\n Therefore, $\\widetilde{\\Jay}$ is admissible.\n \n \\textbf{4.} Recall $\\mathsf{Ob}((\\mathcal{C}\/\\mathcal{I})\/\\mathcal{J}) = \\mathsf{Ob}(\\mathcal{C}\/\\widetilde{\\Jay})$.\n We now produce a bijection between $\\mathsf{Mor}((\\mathcal{C}\/\\mathcal{I})\/\\mathcal{J})$ and $\\mathsf{Mor}(\\mathcal{C}\/\\widetilde{\\Jay})$ by producing bijections \\[ \\phi_{X,Y}:\\Hom_{\\mathcal{C}\/\\widetilde{\\Jay}}(X,Y) \\to \\Hom_{(\\mathcal{C}\/\\mathcal{I})\/\\mathcal{J}}(X,Y) \\] for each ordered pair $X,Y$ of objects.\n \n Let $f+\\widetilde{\\Jay}\\in\\Hom_{\\mathcal{C}\/\\widetilde{\\Jay}}(X,Y)$.\n Then there exists $g\\in\\widetilde{\\Jay}\\subset \\mathsf{Mor}(\\mathcal{C})$ such that $f+g\\mapsto f+\\widetilde{\\Jay}\\in\\mathsf{Mor}(\\mathcal{C}\/\\widetilde{\\Jay})$.\n If $g\\in\\mathcal{I}$ then $f+g\\mapsto f+\\mathcal{I}\\in\\mathsf{Mor}(\\mathcal{C}\/\\mathcal{I})$; otherwise $f+g\\mapsto f+g+\\mathcal{I}\\in\\mathsf{Mor}(\\mathcal{C}\/\\mathcal{I})$.\n In either case, $f+g\\mapsto f+\\mathcal{J}$ in $\\Hom_{(\\mathcal{C}\/\\mathcal{I})\/\\mathcal{J}}(X,Y)$.\n We define $\\phi_{X,Y}(f+\\widetilde{\\Jay}):= f+\\mathcal{J}$.\n \n It is immediate that $\\phi_{X,Y}$ is injective.\n Suppose $f+\\mathcal{J}\\in\\Hom_{(\\mathcal{C}\/\\mathcal{I})\/\\mathcal{J}}(X,Y)$.\n Then there exists $g+\\mathcal{I}$ in $\\Hom_{\\mathcal{C}\/\\mathcal{I}}(X,Y)$ such that $f+g+\\mathcal{I}\\mapsto f+\\mathcal{J}$.\n Then there exists $h\\in\\Hom_{\\mathcal{C}}(X,Y)$ such that $f+g+h\\mapsto f+g+\\mathcal{I}$.\n But this means $g+h\\in\\widetilde{\\Jay}$ and so $f+(g+h)\\mapsto f+\\widetilde{\\Jay}$ in
$\\Hom_{\\mathcal{C}\/\\widetilde{\\Jay}}(X,Y)$.\n Thus, $\\phi_{X,Y}$ is surjective.\n\\end{proof}\n\n\\section{Relations}\\label{sec:relations}\n\nIn this section we look at two types of admissible ideals: those generated by point relations (\\Cref{subsec:point relations}) and those generated by length relations (\\Cref{subsec:length}).\nThese generalize a relation generated by a single path of length two and relations generated by all paths of a particular length, respectively.\n\n\\subsection{Point Relations}\\label{subsec:point relations}\nHere we generalize relations generated by a single path of length 2 to a point relation (\\Cref{def:point relation}) and prove this generates an admissible ideal (\\Cref{thm:point relations are admissible}).\nWe then give examples that point to a continuous version of a gentle algebra (\\Cref{ex:monomial points,ex:crossing real lines}).\n\n\\begin{definition}\\label{def:decomposition point}\n Let $f:X\\to Y$ be a nonisomorphism between indecomposables in $\\mathcal{C}$.\n A \\emph{decomposition point} of $f$ is an indecomposable object $Z$ in $\\mathcal{C}$ such that there exist nonisomorphisms $g:X\\to Z$ and $h:Z\\to Y$ where $f=h\\circ g$.\n\\end{definition}\n\n\\begin{definition}\\label{def:acyclic morphism}\n Let $f:X\\to Y$ be a nonisomorphism of indecomposables in $\\mathcal{C}$, $Z$ a decomposition point of $f$, and $f=h\\circ g$ such a decomposition.\n We call $f$ an \\emph{acyclic morphism} if, for all pairs $g':X\\to Z$ and $h':Z\\to Y$ such that $f=h'\\circ g'$, the morphisms $g'$ and $h'$ are scalar multiples of $g$ and $h$, respectively.\n\\end{definition}\n\nNote that an acyclic morphism cannot be irreducible.
(I.e., it must be a path of length at least 2 in a quiver.)\n\n\\begin{definition}\\label{def:point relation}\n Let $f$ be an acyclic morphism and $Z$ a decomposition point of $f$.\n Let $P$ be the set of all nonisomorphisms $g$ between indecomposables satisfying the following.\n \\begin{itemize}\n \\item There exist morphisms $h_1$ and $h_2$ of indecomposables such that $f=h_2\\circ g\\circ h_1$.\n \\item We have $Z$ as a decomposition point of $g$.\n \\end{itemize}\n Let $\\mathcal P_{f,Z}$ be the ideal in $\\mathcal{C}$ generated by $P$.\n We call $\\mathcal P_{f,Z}$ the \\emph{point relation through $Z$ by $f$}.\n\\end{definition}\n\n\\begin{definition}\\label{def:admissible point relations}\n Let $\\{\\mathcal P_\\alpha\\}$ be a collection of point relations in $\\mathcal{C}$.\n We say $\\{\\mathcal P_\\alpha\\}$ is \\emph{admissible} if each morphism of indecomposables appears in at most finitely-many $\\mathcal P_\\alpha$.\n\\end{definition}\n\n\\begin{theorem}\\label{thm:point relations are admissible}\n Let $\\{\\mathcal P_\\alpha\\}$ be an admissible collection of point relations in $\\mathcal{C}$ and let $\\mathcal{I}=\\langle \\bigcup_\\alpha P_\\alpha\\rangle$.\n Suppose also that for each indecomposable $X$ in $\\mathcal{C}$, we have $\\End_{\\mathcal{C}}(X) \/ \\mathcal{I}(X,X)$ is finite dimensional.\n Then $\\mathcal{I}$ is an admissible ideal.\n\\end{theorem}\n\\begin{proof}\nWe satisfy \\Cref{def:admissible}(2) by assumption.\n\nNow suppose $f\\in \\mathcal{I}$; we show that $f$ can be written as a finite composition of morphisms not in $\\mathcal{I}$.\nSince $f$ is a finite sum of morphisms of indecomposables, we assume without loss of generality $f$ is a morphism of indecomposables.\nThen, by assumption there are at most finitely-many $\\mathcal P_\\alpha$ such that $f\\in \\mathcal P_\\alpha$.\n\nWe proceed by induction, beginning with the base case that $f$ is in only one point relation, which we denote $\\mathcal P_1$.\nLet $f_1$, $P_1$, and $Z_1$ be as in \\Cref{def:point relation}.\nThen there
exist morphisms $h_1$ and $h_2$ in $\\mathsf{Mor}(\\mathcal{C})$ and $g\\in P_1$ such that $f = h_2\\circ g \\circ h_1$.\nFurther, $g=h'_2\\circ h'_1$ where the target of $h'_1$ is $Z_1$ and the source of $h'_2$ is $Z_1$.\nSo, let $g_1 = h'_1\\circ h_1$ and let $g_2=h_2\\circ h'_2$.\nNote that neither $g_1$ nor $g_2$ is in $\\mathcal P_1$.\nFurther, neither $g_1$ nor $g_2$ is in $\\mathcal{I}$ or else $f$ would be in another $\\mathcal P_\\alpha$ as well.\nThus, we have our desired decomposition.\n\nNow assume that if $f$ is in $n$ of the $\\mathcal P_\\alpha$, then $f$ is a finite composition of morphisms not in $\\mathcal{I}$.\nSuppose $f$ is in $n+1$ of the $\\mathcal P_\\alpha$ and denote one of them by $\\mathcal P_1$.\nLet $f_1$, $P_1$, and $Z_1$ be as before for $\\mathcal P_1$.\nWe find $g_1$ and $g_2$ as before, but they may be in $\\mathcal{I}$.\nHowever, each of $g_1$ and $g_2$ is in $n$ or fewer of the $\\mathcal P_\\alpha$ and so is a finite composition of morphisms not in $\\mathcal{I}$.\nTherefore, $f$ is a finite composition of morphisms not in $\\mathcal{I}$.\n\\end{proof}\n\n\\begin{example}[Discrete quiver]\\label{ex:monomial points}\nLet $Q$ be a discrete quiver. Then any quadratic monomial relation in $Q$ corresponds to a point relation in the $\\Bbbk$-linear categorification of $Q$.\n\nIn particular, any gentle algebra can be obtained by considering a quiver with point relations. \n\\end{example}\n\n\\begin{example}[Continuous ``gentle'', crossing real lines]\\label{ex:crossing real lines}\nConsider two copies of the real line, labeled $\\mathbb{R}$ and $\\mathbb{R}'$. We label the numbers in $\\mathbb{R}$ by $x$ and the numbers in $\\mathbb{R}'$ by $x'$. Identify $0$ and $0'$ and label the category of $\\Bbbk$-representations of the resulting partially ordered set by $\\mathcal{C}$.
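\nExplicitly, the identification of $0$ and $0'$ determines the order between the two lines: for $x\\in\\mathbb{R}$ and $y'\\in\\mathbb{R}'$,\n\\[ x\\leq y' \\iff x\\leq 0 \\text{ in } \\mathbb{R} \\text{ and } 0'\\leq y' \\text{ in } \\mathbb{R}', \\]\nand symmetrically for $x'\\leq y$; within each copy of the real line the order is the usual one.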
\n\\begin{figure}\n \\centering\n \\begin{tikzpicture}[inner sep = 0cm, outer sep = 0cm, xscale = 1, decoration={\n markings,\n mark=at position 0.5 with {\\arrow{>}}}, very thick\n ]\n \\coordinate (topleft) at (0,1);\n \\coordinate (bottomleft) at (0,0);\n \\coordinate (topright) at (10,1);\n \\coordinate (bottomright) at (10,0);\n \\coordinate (mid) at (5,0.5);\n \\draw[blue, postaction=decorate] (topleft) ..controls +(4,0).. (mid); \n \\draw[red, postaction=decorate] (mid) ..controls +(1,-.5).. (bottomright);\n \\draw[red, postaction=decorate] (bottomleft) ..controls +(4,0).. (mid); \n \\draw[blue, postaction=decorate] (mid) ..controls +(1,.5).. (topright);\n \\draw[fill=white] (mid) circle[radius=1mm];\n \\end{tikzpicture}\n \\caption{The category considered in \\Cref{ex:crossing real lines}. The two copies of the real line have been drawn in different colours.}\n \\label{fig:my_label}\n\\end{figure}\n\nLet $\\mathcal P$ be the point relation at $0$ generated by morphisms starting in $\\mathbb{R}_{<0}$ and ending in $\\mathbb{R}'_{>0}$. Dually, let $\\mathcal P'$ be the point relation at $0$ generated by morphisms starting in $\\mathbb{R}'_{<0}$ and ending in $\\mathbb{R}_{>0}$. The collection $\\{\\mathcal P, \\mathcal P'\\}$ generates an admissible ideal.\nIn later work, we will argue that this $\\mathcal{C}$ with this ideal yields a continuous generalization of a gentle algebra.
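\nFor instance, if $x\\in\\mathbb{R}_{<0}$ and $y'\\in\\mathbb{R}'_{>0}$, then every morphism $x\\to y'$ factors through the identified point $0$ and hence lies in $\\mathcal P$, so in the quotient category one computes\n\\[ \\Hom_{\\mathcal{C}\/\\langle \\mathcal P\\cup\\mathcal P'\\rangle}(x,y') = 0, \\]\nand dually $\\Hom_{\\mathcal{C}\/\\langle \\mathcal P\\cup\\mathcal P'\\rangle}(x',y)=0$ for $x'<0<y$, while morphisms that stay within a single copy of the real line are unaffected.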
\n\n\\begin{remark}\nIf we do not assume that $\\End_{\\mathcal{C}}(X) \/ \\mathcal{I}(X,X)$ is finite dimensional in our hypothesis of \\Cref{thm:point relations are admissible}, it is possible that we do not have an admissible ideal.\nSee \\Cref{ex:big wedge}.\n\\end{remark}\n\n\\end{example}\n\\subsection{Length Relations}\\label{subsec:length}\nWe now generalize relations generated by all paths of a certain length to length relations.\nTo do this we define a way of measuring length in our category (\\Cref{def:weakly archimedean monoid,def:category with length in Lambda}) and provide examples (\\Cref{ex:weakly Archimedian monoids,ex:categories with length in Lambda}).\nThen we define the length relations (\\Cref{def:length relation}) and provide examples (\\Cref{ex:length relations}) and prove that length relations generate admissible ideals (\\Cref{thm:length relations are admissible}).\nIn \\Cref{apx:length} we discuss the proof of \\Cref{thm:length relations are admissible} (\\Cref{subsec:need weakly archimedean}), why we require the specific setup that we have (\\Cref{sec:more on length}), and compare our notion of length to the notion of a metric on a category, introduced by Lawvere \\cite{L73} (\\Cref{subsec:length vs metric}).\n\nRecall a commutative monoid $\\Lambda$ is a set with an associative, commutative, binary operation $+_{\\Lambda}:\\Lambda\\times\\Lambda \\to \\Lambda$ and an identity 0.\n\\begin{definition}\\label{def:weakly archimedean monoid}\n Let $\\Lambda$ be a commutative monoid.\n We say $\\Lambda$ is \\emph{weakly Archimedian} if it satisfies the following.\n \\begin{itemize}\n \\item There is a total order $\\leq$ on $\\Lambda$.\n \\item If $\\lambda\\neq 0$ then $\\lambda>0$.\n \\item If $\\lambda_1>\\lambda_2$ then, for any $\\lambda_3$, we have $\\lambda_1+\\lambda_3> \\lambda_2+\\lambda_3$ or $\\lambda_1+\\lambda_3 = \\lambda_2+\\lambda_3=\\max \\Lambda$.\n \\item For all $0<\\lambda_1<\\lambda_2$ in $\\Lambda$, there exists 
$n\\in\\mathbb{N}$ such that\n \\begin{displaymath}\n n\\lambda_1 := \\underbrace{\\lambda_1 +_{\\Lambda} \\lambda_1 +_{\\Lambda} \\cdots +_{\\Lambda} \\lambda_1}_{n} \\geq \\lambda_2.\n \\end{displaymath}\n \\end{itemize}\n\\end{definition}\n\n\\begin{example}\\label{ex:weakly Archimedian monoids}\n We give three examples, two of which the reader might expect.\n \\begin{enumerate}\n \\item The set $\\mathbb{N}$ with the usual total order and $+_{\\mathbb{N}}$ given in the usual way is weakly Archimedian.\n \\item The set $\\mathbb{R}_{\\geq 0}$ with the usual total order and $+_{\\mathbb{R}}$ given in the usual way is weakly Archimedian.\n \\item\\label{ex:weakly Archimedian monoids:with max} Let $\\Lambda = \\{0,1,2,\\ldots,n-1,n,\\infty\\}$. Let $+_{\\Lambda}$ be given by\n \\begin{displaymath}\n \\lambda_1+_{\\Lambda}\\lambda_2 = \\begin{cases}\n \\lambda_1+_{\\mathbb{N}}\\lambda_2 & (\\lambda_1+_{\\mathbb{N}}\\lambda_2)\\leq n \\\\\n \\infty & \\text{otherwise}.\n \\end{cases}\n \\end{displaymath}\n For the total order, we say $0<1<2<\\cdots<n<\\infty$.\n Then $\\Lambda$ is weakly Archimedian.\n \\end{enumerate}\n\\end{example}\n\n\\begin{definition}\\label{def:category with length in Lambda}\n Let $\\Lambda$ be a weakly Archimedian monoid, $\\mathcal{C}$ a $\\Bbbk$-linear category, and $\\widehat{\\mathcal{C}}$ a stem category of $\\mathcal{C}$.\n We say $\\mathcal{C}$ has \\emph{length in $\\Lambda$} if there is a function $\\ell:\\mathsf{Mor}(\\widehat{\\mathcal{C}})\\to\\Lambda$ satisfying the following.\n \\begin{enumerate}\n \\item For each composable pair $g,h$ in $\\mathsf{Mor}(\\widehat{\\mathcal{C}})$, we have $\\ell(h\\circ g)=\\ell(g)+_{\\Lambda}\\ell(h)$.\n \\item For each nonisomorphism $f$ in $\\mathsf{Mor}(\\widehat{\\mathcal{C}})$, we have $\\ell(f)>0$ and for each $\\lambda<\\ell(f)$ there are $g,h\\in\\mathsf{Mor}(\\widehat{\\mathcal{C}})$ such that $f=h\\circ g$ and either $\\ell(g)=\\lambda$ or $\\ell(h)=\\lambda$.\n \\end{enumerate}\n\\end{definition}\n\n\\begin{example}\\label{ex:categories with length in Lambda}\n We give some existing examples of \\Cref{def:category with length in Lambda}.\n \\begin{enumerate}\n \\item\\label{ex:categories with length in Lambda:classic} Let $Q$ be a quiver, $\\mathcal{Q}$ the $\\Bbbk$-linear categorification of $Q$, and $\\widehat{\\mathcal{Q}}$ a stem category of $\\mathcal{Q}$ whose morphisms are generated by arrows.\n Let $\\Lambda=\\mathbb{N}$ and set $\\ell(\\alpha)=1$ for each morphism in $\\widehat{\\mathcal{Q}}$ from an arrow $\\alpha$ in $Q$.\n Then $\\mathcal{Q}$ has length in $\\mathbb{N}$.\n \\item\\label{ex:categories with length in Lambda:AR} Let $\\mathcal{Q}$ be the additive $\\Bbbk$-linearization of a
continuous quiver $\\widehat{\\mathcal{Q}}$ of type $A$ as in \\cite{IRT}.\n Then $\\widehat{\\mathcal{Q}}$ is a stem category of $\\mathcal{Q}$.\n \n Define $\\ell: \\mathsf{Mor}(\\widehat{\\mathcal{Q}})\\to\\mathbb{R}_{\\geq 0}$ by $\\ell(g_{x,y}) = |x-y|$ where $g_{x,y}$ is the unique nonzero morphism in $\\widehat{\\mathcal{Q}}$ from $x$ to $y$.\n Then $\\mathcal{Q}$ has length in $\\mathbb{R}_{\\geq 0}$.\n Note $\\mathcal{Q}$ is also a category with a metric.\n See \\Cref{subsec:length vs metric}.\n \\end{enumerate}\n\\end{example}\n\nWe now define a length relation.\n\\begin{definition}\\label{def:length relation}\n Let $\\Lambda$ be a weakly Archimedian monoid and $\\mathcal{C}$ a category with length in $\\Lambda$ with stem category $\\widehat{\\mathcal{C}}$.\n Consider $\\Lambda_1,\\Lambda_2$ subsets of $\\Lambda$ such that $\\Lambda_1\\amalg\\Lambda_2=\\Lambda$, $|\\Lambda_1|\\geq 2$, and for all $\\lambda_1\\in\\Lambda_1, \\lambda_2\\in\\Lambda_2$ we have $\\lambda_1<\\lambda_2$.\n Then the set $\\ell^{-1}(\\Lambda_2)$ in $\\widehat{\\mathcal{C}}$ generates an ideal $\\mathcal{I}$ in $\\mathcal{C}$.\n We call $\\mathcal{I}$ a \\emph{length relation}.\n\\end{definition}\n\n\\begin{remark}\\label{rmk:length is not a number}\n It is possible that $\\Lambda_1$ has no maximum element and $\\Lambda_2$ has no minimum element.\n (Consider, for example, $\\Lambda=\\mathbb{Q}_{\\geq 0}$.)\n Thus, we may not always be able to say that we are taking ``paths longer than $\\lambda$'' for some $\\lambda\\in\\Lambda$.\n\\end{remark}\n\n\\begin{example}\\label{ex:length relations}\n We give three examples of length relations.\n \\begin{enumerate}\n \\item Let $Q$ be a quiver and $\\mathcal{Q}$ its $\\Bbbk$-linear categorification, which has length in $\\mathbb{N}$ (\\Cref{ex:categories with length in Lambda}(\\ref{ex:categories with length in Lambda:classic})).\n Let $\\widehat{\\mathcal{Q}}$ be the stem category of $\\mathcal{Q}$ seen, effectively, as $Q$ embedded in 
$\\mathcal{Q}$.\n Let $\\Lambda_1 = \\{0,1,2\\}$ and $\\Lambda_2=\\{3,4,5,\\cdots\\}$.\n Then $\\mathcal{I}=\\langle \\ell^{-1}(\\Lambda_2)\\rangle $ is the set of morphisms in $\\mathcal{Q}$ generated by paths with length $\\geq 3$ in $Q$.\n \\item Any Nakayama algebra where the relations have constant length $l$ can be realized as the $\\Bbbk$-linear categorification of its underlying quiver with length relations of length $l$ in $\\mathbb{N}$.\n \\item\\label{ex:length relations:continuous A} Let $\\mathcal{Q}$ and $\\widehat{\\mathcal{Q}}$ be as in \\Cref{ex:categories with length in Lambda}(\\ref{ex:categories with length in Lambda:AR}).\n Recall $\\mathcal{Q}$ has length in $\\mathbb{R}_{\\geq 0}$.\n Let $\\Lambda_1 =[0,4]$ and $\\Lambda_2=(4,+\\infty)$.\n Then $\\langle\\ell^{-1}(\\Lambda_2)\\rangle$ is the set of morphisms in $\\mathcal{Q}$ of length \\emph{strictly greater than $4$}.\n \\end{enumerate}\n\\end{example}\n\n\\begin{theorem}\\label{thm:length relations are admissible}\n Let $\\Lambda$ be a weakly Archimedian monoid, $\\mathcal{C}$ a category with length in $\\Lambda$ with stem category $\\widehat{\\mathcal{C}}$, and $\\mathcal{I}$ a length relation.\n If $\\End_{\\widehat{\\mathcal{C}}}(X)$ is a finitely-generated monoid, for each $X\\in\\mathsf{Ob}(\\widehat{\\mathcal{C}})$, then $\\mathcal{I}$ is an admissible ideal.\n\\end{theorem}\n\\begin{proof}\n If $\\mathcal{I}=\\emptyset$ then condition (1) is vacuously satisfied.\n Assume $\\mathcal{I}\\neq\\emptyset$, let $f\\in \\mathcal{I}$ such that $f\\in\\mathsf{Mor}(\\widehat{\\mathcal{C}})$, and let $\\lambda\\in\\Lambda_1$ such that $\\lambda>0$.\n Then there is $n\\in\\mathbb{N}$ such that $n\\lambda \\geq \\ell(f)$.\n Thus, there is some decomposition $f=g_n\\circ\\cdots\\circ g_1$ where $\\ell(g_i)\\in\\Lambda_1$ for each $g_i$.\n Hence, each $g_i$ is not in $\\mathcal{I}$.\n \n Since $\\End_{\\widehat{\\mathcal{C}}}(X)$ is a finitely-generated monoid, let $m$ be the number of generators
and let $\\{f_i\\}_{i=1}^m$ be the set of generators.\n Let\n \\[ N = \\max_i \\min \\{ n\\in\\mathbb{N} \\mid n\\ell(f_i)\\in\\Lambda_2\\}.\\]\n Then\n \\[ \\dim_{\\Bbbk}(\\End_{\\mathcal{C}}(X) \/ \\mathcal{I}(X,X)) \\leq m\\cdot N + 1,\\]\n where we need the ``$+1$'' to account for the identity in $\\End_{\\widehat{\\mathcal{C}}}(X)$.\n Therefore, $\\mathcal{I}$ is an admissible ideal.\n\\end{proof}\n\n\n\n\\section{Examples}\\label{sec:examples}\n\\begin{figure}[b]\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \n\\begin{tikzpicture}[xscale=2]\n\\node (1) at (0,0) {1};\n\\node (2) at (1,1) {2};\n\\node (3) at (1,0) {3};\n\\node (4) at (1,-1) {4};\n\\node (5) at (2,0) {5};\n\n\\draw[->] (1) -- node[pos=0.6, above]{$\\alpha_1$} (2);\n\\draw[->] (1) -- node[pos=0.7, above]{$\\beta_1$} (3);\n\\draw[->] (1) -- node[pos=0.6, above]{$\\gamma_1$} (4);\n\\draw[->] (2) -- node[pos=0.4, above]{$\\alpha_2$} (5);\n\\draw[->] (3) -- node[pos=0.3, above]{$\\beta_2$} (5);\n\\draw[->] (4) -- node[pos=0.4, above]{$\\gamma_2$} (5);\n\\end{tikzpicture}\n \\caption{}\n \\label{fig:three-path-discrete}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\begin{tikzpicture}[xscale=2, inner sep = 0cm, outer sep = 0cm, very thick,decoration={\n markings,\n mark=at position 0.5 with {\\arrow{>}}}\n ]\n\\node (start) at (0,0) {$\\bullet$};\n\\node (end) at (2,0) {$\\bullet$};\n\\node at (-.1,0){$0$};\n\\node at (2.1,0){$1$};\n\n\\draw[postaction={decorate}] (start) .. controls (.5,1.3) and (1.5,1.3).. node[above=2pt, pos=0.6]{$\\alpha$} (end);\n\\draw[postaction={decorate}] (start) -- node[pos=0.6, above=2pt]{$\\beta$} (end);\n\\draw[postaction={decorate}] (start) .. controls (.5,-1.3) and (1.5,-1.3)..
node[pos=0.6, above=2pt]{$\\gamma$} (end);\n\\end{tikzpicture}\n \\caption{}\n \\label{fig:three-path-continuous}\n \\end{subfigure}\n \\caption{The quivers considered in \\Cref{ex: discrete 3 paths,ex: cts 3 paths}, respectively.}\n \\label{fig:three-path-quivers}\n\\end{figure}\n\n\\begin{example}\\label{ex: discrete 3 paths}\nConsider the (discrete) quiver $Q$ shown in \\Cref{fig:three-path-discrete}.\nThe relation $\\alpha_2\\alpha_1-2\\beta_2\\beta_1+3\\gamma_2\\gamma_1$ generates an admissible ideal in the classical sense. In the $\\Bbbk$-linear categorification of $Q$, we rewrite the relation as the composition of morphisms\n\\[1\\xrightarrow{\\left [ \\begin{smallmatrix}\\alpha_1 \\\\ -2\\beta_1\\\\3\\gamma_1\\end{smallmatrix}\\right]} 2\\oplus 3\\oplus 4\\xrightarrow{\\left [ \\begin{smallmatrix}\\alpha_2 & \\beta_2 &\\gamma_2\\end{smallmatrix}\\right]} 5,\n \\]\n which generates an admissible ideal.\n\\end{example}\n\n\n\\begin{example} \\label{ex: cts 3 paths}\n\nConsider the continuous analogue of \\Cref{ex: discrete 3 paths}, displayed in \\Cref{fig:three-path-continuous}.\nWe consider a similar relation $\\alpha-2\\beta+3\\gamma$.\nLet $X$, $Y$, and $Z$ be points on the interior of the paths $\\alpha$, $\\beta$, and $\\gamma$, respectively.\nThen $\\alpha=\\alpha_2\\alpha_1$ where $\\alpha_1:0\\to X$ and $\\alpha_2:X\\to 1$.\nWe similarly write $\\beta=\\beta_2\\beta_1$ and $\\gamma=\\gamma_2\\gamma_1$.\nThen the relation $\\alpha-2\\beta+3\\gamma$ can be written as the composition\n\\[0\n\\xrightarrow{\\left [ \\begin{smallmatrix}\\alpha_1 \\\\ -2\\beta_1\\\\3\\gamma_1\\end{smallmatrix}\\right]}\nX\\oplus Y\\oplus Z\n\\xrightarrow{\\left [ \\begin{smallmatrix}\\alpha_2 & \\beta_2 &\\gamma_2\\end{smallmatrix}\\right]}\n1,\n \\]\nand it generates an admissible ideal.\n\\end{example}\n\n\\begin{example}[Real line with point relations on integers]\\label{ex:real line integers}\nLet $\\mathcal{Q}$ be the additive $\\Bbbk$-linearization of $\\mathbb{R}$
as a category where paths move upwards.\nFor a point $c\\in \\mathbb{R}$, let the (unique) point relation at $c$ be $\\mathcal P_c$. The collection of point relations on the integers, $\\{\\mathcal P_z\\}_{z\\in \\mathbb{Z}}$, generates an admissible ideal by \\Cref{thm:point relations are admissible}.\n\nThe ``Auslander--Reiten space'' of the representations of this quiver is shaped like a mountain range; it is a set of triangles joined at their bottom vertices; see \\Cref{fig:AR mountain range}.\n\\begin{figure}\n \\centering\n \\begin{subfigure}{\\textwidth}\n \\centering\n \\begin{tikzpicture}\n \\draw[dotted, thick] (-1.5,0) -- (0,0)--(-1.5,1);\n \\draw[dotted,thick] (10.5,1) -- (9,0) -- (10.5,0);\n \\draw[] (0,0) -- (1.5,1)--(3,0)--(4.5,1)--(6,0)--(7.5,1)--(9,0)--(0,0);\n \\end{tikzpicture}\n \\caption{The ``Auslander--Reiten space'' of representations of the quiver from \\Cref{ex:real line integers}}\n \\label{fig:AR mountain range}\n \\end{subfigure}\n \n \\begin{subfigure}{\\textwidth}\n \\begin{tikzpicture}\n \\draw[color=white] (0,0)--(0,1.3);\n \\draw[dotted, thick] (-1.5,0) -- (0,0)--(-1,1)--(-1.5,1);\n \\draw[dotted,thick] (10.5,1) -- (10,1) -- (9,0) -- (10.5,0);\n \\draw[] (0,0) -- (1,1)--(2,1)--(3,0)--(4,1)--(5,1)--(6,0)--(7,1)--(8,1)--(9,0)--(0,0);\n \\end{tikzpicture}\n \n \\caption{The ``Auslander--Reiten space'' for representations of $\\mathbb{R}$ modulo paths longer than $s$ and modulo paths that go through any $x\\in r\\mathbb{Z}$, where $r>s\\in\\mathbb{R}_{>0}$.}\\label{fig:AR chopped mountains}\n \\end{subfigure}\n \\caption{The ``Auslander--Reiten spaces'' of representations of the quivers in \\Cref{ex:real line integers,ex:real line points}}\n \\label{fig:AR spaces}\n\\end{figure}\n\\end{example}\n\n\\begin{example}[Circle with length\/Kupisch relations]\\label{ex:circle length}\n Let $\\mathcal{Q}$ be the additive $\\Bbbk$-linearization of a continuous quiver $\\widehat{\\mathcal{Q}}$ of type $\\widetilde{A}$ as in \\cite{HR}.\n Define
$\\ell: \\mathsf{Mor}(\\widehat{\\mathcal{Q}})\\to\\mathbb{R}_{\\geq 0}$ by $\\ell(f)=\\phi-\\theta + 2n\\pi$ where $f:e^{i\\theta}\\to e^{i\\phi}$, and $0\\leq \\phi-\\theta<2\\pi$, and $n$ is the number of full rotations around the circle at $e^{i\\theta}$ before moving to $e^{i\\phi}$.\n Then $\\mathcal{Q}$ has length in $\\mathbb{R}_{\\geq 0}$.\n If $\\mathcal{Q}$ is acyclic, we may replace $\\mathbb{R}_{\\geq 0}$ with $\\Lambda=[0,2\\pi)\\cup\\{\\infty\\}$ and define $+_\\Lambda$ similarly to \\Cref{ex:weakly Archimedian monoids}(\\ref{ex:weakly Archimedian monoids:with max}).\n \n \\begin{figure}\n \\centering\n \\begin{tikzpicture}[very thick, decoration={markings,mark=at position 0.5 with {\\arrow{>}}}]\n \\draw[radius=3cm] (0,0) circle;\n \\draw[radius=2.8cm, color=red,postaction=decorate] (180:2.8cm) arc[start angle =180, delta angle= 180] node[above left, pos=0.666, red]{\\XSolidBrush};\n \\draw[radius=3.2cm, color=blue, postaction=decorate] (0:3.2cm) arc[start angle = 0, delta angle= 90] node[above right, pos=0.666, blue]{\\CheckmarkBold};\n \\foreach \\x in {0, 30,...,360}\n \t\\draw[thin] (\\x:2.9cm) -- (\\x:3.1cm); \n \\end{tikzpicture}\n \\caption{The circle with length relations as described in \\Cref{ex:circle length}. 
In this figure, the relations have length $\\frac{2\\pi}{3}$.}\n \n \\label{fig:circle length}\n \\end{figure}\n \n Now assume $\\widehat{\\mathcal{Q}}$ has cyclic orientation.\n Let $\\kappa$ be a Kupisch function as in \\cite[Definition~3.9]{RZ}.\n That is, $\\kappa:\\mathbb{R}\\to\\mathbb{R}_{> 0}$ is a function such that $\\kappa(t)+t>t$ and $\\kappa(t+1)=\\kappa(t)$, for all $t\\in\\mathbb{R}$.\n This yields a map $\\mathbb{S}^1\\to \\mathbb{R}_{>0}$ where $\\mathbb{S}^1=[0,1]\/\\{0\\sim 1\\}$.\n If $\\kappa$ is constant with value $a$, then this yields a length relation where $\\Lambda_1=[0,a]$ and $\\Lambda_2=(a,+\\infty)$.\n If $\\kappa$ is not constant, then we do not have a length relation.\n However, if $\\kappa$ does not have any separation points \\cite[Definition~4.2]{RZ}, then $\\kappa$ still induces an admissible ideal.\n\\end{example}\n\n\\begin{example}[Real line with length and point relations]\\label{ex:real line points}\n Let $\\mathcal{Q}$ be the additive $\\Bbbk$-linearization of $\\mathbb{R}$ as a category where paths move upwards.\n Let $r,s$ be positive real numbers and for each $x\\in r\\mathbb{Z}\\subset \\mathbb{R}$, let $\\mathcal P_x$ be the (unique) point relation in $\\mathcal{Q}$ through $x$ and $\\mathcal{I}$ the admissible ideal generated by $\\bigcup_{x\\in r\\mathbb{Z}} \\mathcal{P}_x$.\n Let $\\mathcal{J}$ be the length relation in $\\mathcal{Q}\/\\mathcal{I}$ obtained by modding out by paths of length greater than $s$.\n By \\Cref{thm:length relations are admissible,thm:point relations are admissible} with \\Cref{lem:stack admissible} we obtain an admissible ideal $\\widetilde{\\Jay}$ given by the point relations at each $x\\in r\\mathbb{Z}$ and paths of length greater than $s$.\n \n If $r\\leq s$ then $\\mathcal{Q}\/\\mathcal{I} = \\mathcal{Q}\/\\widetilde{\\Jay}$ since we cannot have a morphism of length greater than $r$ in $\\mathcal{Q}\/\\mathcal{I}$ anyway.\n If $r>s$ then we obtain paths of length less than or
equal to $s$ that do not pass through any $x\\in r\\mathbb{Z}$.\n The ``Auslander--Reiten space'' for the case $r>s$ is in \\Cref{fig:AR chopped mountains}; notice the similarity with \\Cref{fig:AR mountain range}.\n\\end{example}\n\n\\begin{example}[Complications with Cycles]\\label{ex: cycles length}\n For each $n\\in\\mathbb{N}$, let $\\mathcal{C}_n$ be the circle whose radius is $\\frac{1}{2}e^{-n}$.\n Let $\\mathcal{C}$ be the additive $\\Bbbk$-linearization of $\\mathbb{R}_{\\leq 0}\\amalg(\\coprod_{n\\in\\mathbb{N}} \\mathcal{C}_n)\/ \\sim$, where $\\mathbb{R}\\ni -n\\sim 0\\in\\mathcal{C}_n$.\n See \\Cref{fig:cycles with decreasing length} for a visual depiction.\n \n We see that $\\mathcal{C}$ has length in $\\mathbb{R}_{\\geq 0}$.\n Let $\\mathcal{I}$ be a length relation.\n Since our length is in $\\mathbb{R}_{\\geq 0}$ we can say we are modding out by length $>L$ or $\\geq L$ for some real number $L>0$.\n \n Notice that for each $N\\in\\mathbb{N}$, there exists some $\\mathcal{C}_n$ with radius $r$ such that $Nr < L$.\n Therefore, there is no natural number $n$ such that for all nonisomorphism endomorphisms $f$ we have $f^n\\in\\mathcal{I}$.\n \n \\begin{figure}\n \\centering\n \\begin{tikzpicture}[scale=2, decoration={markings, mark=at position 0.6 with {\\arrow{>}}}]\n \\foreach \\x in {0, -1,...,-3}{\n \t\\node[outer sep=4+\\x] (\\x) at (\\x,0){$\\bullet$};\n \t\\node at (\\x.south){\\x};}\n \\node (-4) at (-4,0){$\\cdots$};\n \\foreach \\x [remember=\\x as \\y (initially 0)]in {-1, -2,...,-4}\n \t\\draw[very thick, postaction=decorate] (\\x.center)--(\\y.center);\n \\foreach \\x [evaluate=\\x as \\r using .5*e^(\\x\/2)] in {0, -1,...,-3}\n \\draw[radius=-\\r, very thick, postaction=decorate](\\x)+(0,\\r ) circle;\n \\end{tikzpicture}\n \\caption{Illustration of $\\mathcal{C}$ in \\Cref{ex: cycles length}, where we have glued smaller and smaller circles to each non-positive integer $n$ in $\\mathbb{R}_{\\leq 0}$.}\n \\label{fig:cycles with decreasing
length}\n \\end{figure}\n\\end{example}\n\n\\begin{example}[Big wedge]\\label{ex:big wedge}\n Let $\\mathcal{C}$ be a cyclic continuous quiver of type $\\widetilde{A}$ as in \\cite{HR}.\n Let $\\widehat{\\mathcal{Q}} = (\\coprod_{\\mathbb{N}} \\mathcal{C} )\/\\sim$ where we join all the copies of $\\mathcal{C}$ together at one point.\n Denote the wedge point by $X$.\n Let $\\mathcal{Q}$ be the additive $\\Bbbk$-linearization of $\\widehat{\\mathcal{Q}}$.\n Let us discuss the construction of an admissible ideal out of point relations and a length relation.\n\n Notice $\\mathcal{Q}$ has length in $\\mathbb{R}_{\\geq 0}$.\n However, since the endomorphism monoid of $X$ in $\\widehat{\\mathcal{Q}}$ is infinitely generated, we see that $\\End_{\\mathcal{Q}}(X)\/ \\mathcal{I}(X,X)$ is not finite-dimensional.\n Instead, we must add a point relation on all but finitely-many different copies of $\\mathcal{C}$.\n If we do not have a point relation on all the cycles, we also need a length relation.\n Without such a combination, it is not possible to build an admissible ideal out of point relations and a length relation.\n\\end{example}\n\n\\subsection{The Real Plane}\\label{subsec:real plane}\n We now consider a continuous version of the grid quiver with commutativity relations as examined in \\cite{BBOS}.\n Let $\\widehat{\\mathcal{Q}}$ be the category whose objects are points in $\\mathbb{R}^2$.\n \n We now define the $\\Hom$ set between an arbitrary pair of points $(x,y)$ and $(z,w)$.\n Hom sets are given by considering paths made up of horizontal and vertical line segments.\n For a pair $(x,y)$ and $(z,w)$, consider the set $P_{x,y}^{z,w}$ of all finite sequences $\\{(x_i,y_i)\\}_{i=1}^n$ such that\n \\begin{itemize}\n \\item $(x_1,y_1)=(x,y)$ and $(x_n,y_n)=(z,w)$,\n \\item $x_1\\leq x_2\\leq\\cdots \\leq x_n$ and $y_1\\leq y_2\\leq\\cdots \\leq y_n$,\n \\item $(x_i,y_i)\\neq (x_{i+1},y_{i+1})$ for all $1\\leq i<n$.\n \\end{itemize}\n If $x<z$ and $y<w$, then\n \\[ \\Hom_{\\widehat{\\mathcal{Q}}}((x,y), (z,w)) = P_{x,y}^{z,w}. \\]\n If $x>z$ or $y>w$, then\n \\[
\\Hom_{\\widehat{\\mathcal{Q}}}((x,y), (z,w)) = \\emptyset.\\]\n If (i) $x=z$ and $y< w$ or (ii) $x< z$ and $y=w$, then\n \\[ \\Hom_{\\widehat{\\mathcal{Q}}}((x,y), (z,w)) = \\left\\{ \\{ (x,y), (z,w) \\}\\right\\}. \\]\n If $(x,y)=(z,w)$ then\n \\[ \\Hom_{\\widehat{\\mathcal{Q}}}((x,y), (x,y)) = \\left\\{ \\{ (x,y) \\}\\right\\}. \\]\n Composition is given by concatenating sequences and, if necessary, deleting a repeated term.\n \n Let $\\mathcal{Q}$ be the additive $\\Bbbk$-linearization of $\\widehat{\\mathcal{Q}}$ and let $(x,y),(z,w)\\in \\mathbb{R}^2$ such that $x\\varphi(n-1)$ and $k$. Hence, $\\sum(D_n+\\cdots+D_0)\\leq \\sum(C_{m+k}+\\cdots+C_0)$. Set $\\varphi(n)=m+k$. This proves the second inequality.\n\nFor the last inequality, since $C_{\\upharpoonright J}$ is left indecomposable, by Lemma \\ref{ncopies} we have $\\underline{2}.C_{\\upharpoonright J}\\leq C_{\\upharpoonright J}$ which implies that $\\sum \\underline{2}.C_{\\upharpoonright J}\\leq \\sum C_{\\upharpoonright J}$. \n\\end{proof} \n\n\nLemmas \\ref{Nonisoplc} and \\ref{ReducedequimorphP} imply the following. \n\n\\begin{theorem} \\label{Reduced}\nLet P be the sum of a countable labelled chain $C:=(I,\\ell)$ where $I$ has no least element and $\\ell(i)=(P_i,r(i))\\in \\mathcal{N}_{\\leq\\omega}\\times\\{-1, 0, +1\\}$ such that r takes 0 and $\\pm 1$ densely. Then $Sib(P)=2^{\\aleph_0}$. \n\\end{theorem}\n\n\n\n\n\n\\subsection{Direct Sum}\n \nThroughout this subsection $P$ is a countable direct sum of at least two non-empty connected $NE$-free posets. Thus, it is disconnected. \nIn this section we prove that $P$ has one or infinitely many siblings on condition that this property holds for each component of $P$. \n\n\\begin{lemma} \\label{Connecteddisconnected}\nIf some sibling of $P$ is connected, then $Sib(P)=\\infty$.\n\\end{lemma}\n\n\\begin{proof}\nLet $P'\\approx P$ where $P'$ is connected. So, $P'$ embeds into some component $Q$ of $P$. 
Since $P$ has at least two non-empty components, $P'\\oplus 1\\hookrightarrow Q\\oplus 1\\hookrightarrow P\\hookrightarrow P'$. So, for every $n$, $P'\\oplus \\Bar{K}_n\\hookrightarrow P'$ where $\\Bar{K}_n$ is an antichain of size $n$. Since $P'$ is connected and $P'\\approx P'\\oplus \\Bar{K}_n$, $P'$ and consequently $P$ has infinitely many pairwise non-isomorphic siblings. \n\\end{proof}\n\n\n\\begin{lemma} \\label{Infinitecomponent} \nIf some component of P has infinitely many siblings, then \\\\ $Sib(P)=\\infty$. \n\\end{lemma}\n\n\\begin{proof}\nLet $P$ satisfy the conditions of the lemma. Let $Q$ be a component of $P$ with infinitely many pairwise non-isomorphic siblings $Q_1, Q_2, Q_3, \\ldots$. For each $n$, let $P_n$ be the poset resulting from $P$ by replacing every component of $P$ which is equimorphic to $Q$ by $Q_n$. Now suppose that $P_m\\cong P_n$ for $m\\neq n$. Consider a component $Q'$ of $P_m$ equimorphic to $Q_m$. We know that $Q'$ is isomorphic to some component $Q''$ of $P_n$. We have $Q''\\cong Q'\\approx Q_m\\approx Q$ which implies that $Q''=Q_n$ because $Q''$ is a component of $P_n$. But then $Q_m\\cong Q_n$, a contradiction. \nTherefore, the resulting posets $P_n$ are pairwise non-isomorphic siblings of $P$. \n\\end{proof}\n\n\\begin{lemma} \\label{increasing}\nSuppose that P has infinitely many non-trivial components. Then $Sib(P)=\\infty$. \n\\end{lemma}\n\n\\begin{proof}\nSince $P$ has infinitely many non-trivial components and $\\mathcal{N}_{\\leq\\omega}$ is w.q.o, there is an increasing sequence $(P_n)_{n<\\omega}$ of non-trivial components of $P$. Let $\\mathcal{Q}$ be the direct sum of the non-trivial components of $P$ other than the $P_n$. We have $P=\\bigoplus_n P_n\\oplus \\mathcal{Q}\\oplus A$ where $A$ is the direct sum of the trivial components of $P$. Notice that since $P$ is countable, so is $A$. Therefore, $A$ embeds in $\\bigoplus_n P_{2n+1}$. Also $\\bigoplus_n P_n$ embeds into $\\bigoplus_n P_{2n}$. 
Hence, $P$ embeds into $P':=\\bigoplus_n P_n\\oplus \\mathcal{Q}$. That is, $P'\\approx P$. Similarly, $P'\\oplus \\Bar{K}_n \\approx P'\\approx P$ where $\\Bar{K}_n$ is an antichain of size $n$. Since $P'\\oplus\\Bar{K}_n$ has exactly $n$ trivial components, it follows that $P$ has infinitely many pairwise non-isomorphic siblings. \n\\end{proof}\n\nHence, by Lemmas \\ref{Connecteddisconnected}, \\ref{Infinitecomponent} and \\ref{increasing} we have the following. \n\n\\begin{proposition} \\label{InfinitesiblingnumberofP}\nLet P be a countable direct sum of NE-free posets with at least two non-empty components. If P is a sibling of some connected NE-free poset, or some component of P has infinitely many siblings, or P has infinitely many non-trivial components, then $Sib(P)=\\infty$. \n\\end{proposition}\n\n\n\n\n\\begin{lemma} \\label{Finitenontrivial} \nIf P has only finitely many non-trivial components and each component has only one sibling, then $Sib(P)=1$.\n\\end{lemma}\n\n\\begin{proof}\nSet $P:=\\bigoplus_{i0$ and that $P' \\subseteq P$ induces a sibling\nof $P$ via an embedding $f$. We prove that $f$ induces a bijection on the set of indices of components of $P$. First note that since the components of $P$ are connected, for each $i$, there is $j$ such that $f(P_i)\\subseteq P_j$. For each $i$, define $\\hat{f}(i)=j$ where $j$ is such that $f(P_i)\\subseteq P_j$. Suppose that for $i\\neq j$, $\\hat{f}(i)=\\hat{f}(j)=k$. It follows that $f$ embeds $P_i\\oplus P_j$ in $P_k$. We first show that $k$ cannot be $i$ or $j$. Suppose, without loss of generality, that $k=i$. Then $P_i\\oplus 1\\hookrightarrow P_i\\oplus P_j\\hookrightarrow P_i$ and by Lemma \\ref{Connecteddisconnected}, $P_i$ has infinitely many siblings, a contradiction. So, $k\\neq i, j$. It follows that $\\hat{f}(k)\\neq k$ because otherwise $f$ embeds $P_i\\oplus P_k$ in $P_k$ which implies that $P_k$ has infinitely many siblings, a contradiction. 
Further, $\\hat{f}(k)\\neq i, j$ because otherwise $P_i\\oplus P_j\\approx P_k$. By a similar argument, $\\hat{f}^2(k)\\neq i, j, k, \\hat{f}(k)$. Iterating, the $P_{\\hat{f}^n(k)}$ provide infinitely many non-trivial components of $P$, a contradiction. Thus, $\\hat{f}$ is injective on the set of indices of non-trivial components of $P$ and since there are finitely many such indices, $\\hat{f}$ is bijective. Extending $\\hat{f}$ to the set $I$ of indices of components of $P$, it follows that $\\hat{f}$ is bijective on $I$. Pick $i \\varphi(n-1)$ and some $k$. We know that the sequence $(a_n)_{n<\\omega}$ is coinitial in $J'$ and the $P_{a_n}$ are non-trivial. Pick a chain $a_{n_i}<\\cdots 0$ for every $i$. Now, suppose for two $i\\neq j$, $h(P(M_i)), h(P(M_j))\\subseteq P(M_k)$. By Lemma \\ref{Distinctobjects}, we have $\\lambda_i + \\lambda_j\\leq \\lambda_k$. This means that $\\lambda_k > \\lambda_i, \\lambda_j$. A contradiction is immediate when $k=i$ or $k=j$ because then $\\lambda_k > \\lambda_k$. Assume that $k\\neq i, j$. Since $\\lambda_k > \\lambda_i, \\lambda_j$, $P(M_k)$ cannot be embedded into $P(M_i)$ or $P(M_j)$. If $P(M_k)$ embeds in $P(M_k)$, then by Lemma \\ref{Distinctobjects}, $\\lambda_i+\\lambda_j+\\lambda_k \\leq \\lambda_k$, a contradiction to $\\lambda_i, \\lambda_j\\neq 0$. It follows that $P(M_k)$ embeds into some $P(M_l)$ where $l\\neq i, j, k$. Continuing this, we get infinitely many gs-connected children of $M$, a contradiction. Thus, $\\hat{h}$ is injective and since $X$ is finite, $\\hat{h}$ is a bijection on $X$. It also follows that $h(P_u)\\subseteq P_u$. Define $\\hat{h}(\\{u\\})=\\{u\\}$. \n\n\nSince $X$ is finite, for every $i$, there is some integer $m_i > 0$ such that $\\hat{h}^{m_i}(M_i)=M_i$. This means that for every $i$, $P(M_i)\\approx P(\\hat{h}.i(M_i))$ where $\\hat{h}.i$ is the orbit of $M_i$ under $\\hat{h}$. 
Hence, a sibling of $P(M)$ is of the form $\\bigoplus_iQ(M_i)\\oplus P'_u$ where $Q(M_i)\\approx P(M_i)$ for every $i$ and $P'_u\\cong P_u$ because $P_u$ has only one sibling. \n\\end{proof} \n\nFor case $v(M)=1$ where $M\\in T$ is edge minimal, we provide a result similar to Lemma \\ref{Bijectiononconnected}. \n\n\\begin{lemma} \\label{Bijectionondisconnected}\nLet $M\\in T$ be edge minimal such that $v(M)=1$ and $h$ an embedding of $P(M)$. \n\\begin{enumerate}\n \\item For every gs-disconnected child $M_i$ of $M$ we have $h(P(M_i))\\subseteq P(M_i)$.\n \\item For any chain $P_u^i$ in the representation $P_u^1 + P(M_1) + P_u^2 + P(M_2) + \\cdots + P(M_{k-1}) + P_u^k$ of $P(M)$ where $\\{u\\}$ is the unique possible gs-connected child of $M$, $h(P_u^i)\\subseteq P_u^i$. \n\\end{enumerate}\nHence, a sibling of $P(M)$ is of the form \n$$Q_u^1 + Q(M_1) + Q_u^2 + Q(M_2) + \\cdots + Q(M_{k-1}) + Q_u^k $$\nwhere $Q(M_i)\\approx P(M_i)$ and $Q_u^i\\approx P_u^i$ for every $i$. \n\\end{lemma}\n\n\\begin{proof}\nBy Proposition \\ref{Finiteps}, $P(M)$ is of the form \n$$P_u^1 + P(M_1) + P_u^2 + P(M_2) + \\cdots + P(M_{k-1}) + P_u^k\\ $$\nwhere each $M_i$ is a gs-disconnected child of $M$ and the $P_u^i$ are the intervals of the unique chain $P_u$ where $\\{u\\}$ is the possible gs-connected child of $M$. \n\n(1) Let $i$ and $x\\in P(M_i)$ be given. Since $M_i$ is gs-disconnected, take some $y$ in a component of $P(M_i)$ other than the component to which $x$ belongs. So, $x\\perp y$. We have $h(x)\\perp h(y)$. This implies that $h(x)\\notin P_u^j$ where $1\\leq j \\leq k$. Suppose that $h(x)\\in P(M_j)$ where $i\\neq j$. Without loss of generality assume that $P(M_i) <_P P(M_j)$. Since $h(x)\\perp h(y)$, we have $h(y)\\in P(M_j)$. So, for every $y\\in P(M_i)$ in a component other than the component to which $x$ belongs, we have $h(y)\\in P(M_j)$. 
Exchanging the role of $x$ and $y$, for every $x\\in P(M_i)$ in a component other than the component to which $y$ belongs, we have $h(x)\\in P(M_j)$. It means that $h(P(M_i))\\subseteq P(M_j)$. Let $\\lambda_i, \\lambda_j$ be the number of elements of $M_i, M_j$, respectively. By Lemma \\ref{Distinctobjects} we have $\\lambda_i \\leq \\lambda_j$. Thus, it is not the case that $P(M_j)$ embeds in $P(M_j)$ by $h$ because then we get $\\lambda_i+\\lambda_j \\leq \\lambda_j$, a contradiction to $\\lambda_i > 0$. So, $h(P(M_j))\\subseteq P(M_l)$ where $j < l$. Continuing this, we get infinitely many gs-disconnected children of $M$, a contradiction. \n\n(2) Let $i$ and some $x\\in P_u^i$ be given. Suppose $h(x)\\in P_u^j$ or $h(x)\\in P(M_j)$ where $j\\neq i$. Assume that $i < j$. The case $j 0$ there exists a dense and open subset $O^\\epsilon$ with the property that every $N \\in O^\\epsilon$ decomposes as $N \\cong A \\oplus B$ with $A$ indecomposable and $B$ $\\epsilon$-trivial.\n\\end{theorem}\n\nRecall that, in a topological space, a \\define{generic} property is one that holds for all points of some dense and open set.\nOur main result then implies that, for every $\\epsilon > 0$, the property of decomposing as a direct sum of an indecomposable and an $\\epsilon$-trivial module is generic.\n\nTo put this result into context, we mention a similar instance of a generic property of persistence modules. 
\nThe set of all finitely presentable one-parameter persistence modules that are \\emph{staggered} (meaning that the finite endpoints of the intervals in their barcode are all distinct) is dense but not open.\nNevertheless, for every $\\epsilon > 0$ there exists a dense and open set of modules that decomposes as $A \\oplus B$ with $A$ staggered and $B$ $\\epsilon$-trivial, as a direct consequence of the isometry theorem \\cite{lesnick,chazal-silva-glisse-oudot,bubenik-scott,bauer-lesnick}.\n\n\n\\smallskip\nWe prove \\cref{theorem:main-theorem} by combining the following two results, which are of independent interest.\nThe first one states that, in the space of finitely presentable two-parameter persistence modules, indecomposable modules are dense.\n\n\\begin{restatable}{proposition}{indecomposabledense}\n \\label{theorem:indecomposables-dense}\n Let $N : \\Rbf^2 \\to \\mathbf{vec}$ be finitely presentable.\n For every $\\epsilon > 0$, there exists an indecomposable persistence module $M : \\Rbf^2 \\to \\mathbf{vec}$ such that $d_I(M,N) \\leq \\epsilon$.\n\\end{restatable}\n\nThe second result relates the property of being close to an indecomposable to the property of decomposing as a direct sum of an indecomposable and a nearly trivial persistence module.\n\n\\begin{restatable}{proposition}{stabilityindecomposability}\n \\label{proposition:stability-indecomposability}\n Let $n \\geq 1$ and let $M : \\Rbf^n \\to \\mathbf{vec}$ be finitely presentable and indecomposable.\n For every $\\epsilon > 0$ there exists $\\delta > 0$ such that every persistence module $N : \\Rbf^n \\to \\mathbf{vec}$ with $d_I(M,N) < \\delta$ decomposes as $N \\cong A \\oplus B$ with $A$ indecomposable and $B$ $\\epsilon$-trivial.\n\\end{restatable}\n\n\\subparagraph{Discussion}\n\\cref{proposition:stability-indecomposability} motivates the following definition.\n\n\\begin{definition}\n Let $\\epsilon > 0$.\n A persistence module $M : \\Rbf^n \\to \\mathbf{vec}$ is 
\\define{$\\epsilon$-indecomposable} if we have $M \\cong A \\oplus B$ with $A$ indecomposable and $B$ $\\epsilon$-trivial.\n\\end{definition}\nOur main theorem then implies that, for every $\\epsilon > 0$, two-parameter persistence modules are generically $\\epsilon$-indecomposable.\nSince there are no general strong relationships between meager and measure-zero sets \\cite{oxtoby}, our results do not imply that random two-parameter persistence modules, according to some suitable probability measure, are $\\epsilon$-indecomposable.\nOne could then ask the following general questions:\nWhat are interesting and useful probability measures on spaces of multi-parameter persistence modules?\nAccording to these probability measures, how do random multi-parameter persistence modules decompose?\n\nWe prove \\cref{proposition:stability-indecomposability} in the generality of multi-parameter persistence modules, while we have given \\cref{theorem:indecomposables-dense} and consequently \\cref{theorem:main-theorem} only in the case of two-parameter persistence.\nWe believe the results hold for any number of parameters greater than one.\nSimilarly, we believe the results extend to classes of modules more general than finitely presentable ones.\nWe leave these extensions for future work.\n\n\\subparagraph{Structure of the paper}\nIn \\cref{section:background}, we recall necessary background and introduce notation.\nIn \\cref{section:almost-indecomposable-open}, we prove \\cref{proposition:stability-indecomposability}.\nIn \\cref{section:indecomposables-dense}, we prove \\cref{theorem:indecomposables-dense} and \\cref{theorem:main-theorem}.\nIn \\cref{section:proof-main-lemma}, we prove a key lemma that allows us to ``tack'' indecomposables together.\n\n\n\n\\ifarxivversion\n\\subparagraph{Acknowledgements}\nWe thank H\\r{a}vard Bjerkevik for interesting conversations and useful observations.\nL.~S.~thanks\nNicolas Berkouk,\nMathieu Carri\u00e8re,\nRen\u00e9 Corbet,\nChristian 
Hirsch,\nClaudia Landi,\nVadim Lebovici,\nDavid Loiseaux,\nEzra Miller,\nSteve Oudot,\nFran\u00e7ois Petit,\nand Alexander Rolle\nfor discussions during the \\emph{Metrics in Multiparameter Persistence workshop} (Lorentz Center, 2021).\nThis research has been conducted while U.~B.~was participating in the program \\emph{Representation Theory: Combinatorial Aspects and Applications};\nU.~B.~thanks the Centre for Advanced Study (CAS) at the Norwegian Academy of Science and Letters for their hospitality and support.\nThis research has been supported by the DFG Collaborative Research Center SFB\/TRR 109 \\emph{Discretization in Geometry and Dynamics}.\n\\fi\n\n\\section{Background}\n\\label{section:background}\n\nAlthough we use some notions from category theory, we only assume familiarity with basic concepts; in particular: categories, isomorphisms, functors, functor categories, direct sums, and kernels and cokernels.\nSee, e.g., \\cite{maclane} for an introduction.\n\nThroughout the paper, we fix a field $\\kbb$ and let $\\mathbf{vec}$ denote the category of finite dimensional $\\kbb$-vector spaces.\n\nAn \\define{extended metric space} consists of a set $A$ together with a function $d : A \\times A \\to \\Rbb \\cup \\{\\infty\\}$ such that, for all $a,b \\in A$ we have $d(a,b) = d(b,a) \\geq 0$ and $d(a,b) = 0$ if and only if $a=b$; and for all $a,b,c \\in A$ we have $d(a,c) \\leq d(a,b) + d(b,c)$, with the convention that $r + \\infty = \\infty + r = \\infty$ for all $r \\in \\Rbb \\cup \\{\\infty\\}$.\n\n\\subparagraph{Posets}\nWe let $\\Rbf$ denote the poset of real numbers with its standard ordering and reserve the notation $\\Rbb$ for the metric space of real numbers.\nLet $n \\in \\Nbb$ and consider the product poset $\\Rbf^n$.\nBy an abuse of notation, if $r \\in \\Rbf$, we interpret it as an element $r \\in \\Rbf^n$ all of whose coordinates are equal to $r$.\nThus, if for instance $s \\in \\Rbf^n$ and $r \\in \\Rbf$, then $s + r \\in \\Rbf^n$ denotes the 
element $(s_1 + r, s_2 + r, \\dots, s_n + r)$. \n\nWe interpret any poset $\\Pscr$ as a category with objects the elements of the poset and exactly one morphism $i \\to j \\in \\Pscr$ whenever $i \\leq j$.\n\n\\subparagraph{Persistence modules}\nLet $\\Pscr$ be a poset.\nA pointwise finite dimensional \\define{$\\Pscr$-persistence module} is a functor $M : \\Pscr \\to \\mathbf{vec}$.\nNote that all persistence modules in this paper are assumed to be pointwise finite dimensional and that, for the sake of brevity, we may omit this qualifier.\nWhen the indexing poset $\\Pscr$ is clear from the context, we may refer to a $\\Pscr$-persistence module simply as a persistence module or as a module.\nThe collection of all $\\Pscr$-persistence modules forms a category, where the morphisms are given by natural transformations.\n\nIf $M : \\Pscr \\to \\mathbf{vec}$ is a $\\Pscr$-persistence module and $i \\leq j \\in \\Pscr$, we let $\\phi^M_{i,j} : M(i) \\to M(j)$ denote the structure morphism corresponding to the morphism $i \\to j$ in $\\Pscr$ seen as a category.\n\nLet $\\Pscr$ be a poset and let $i \\in \\Pscr$.\nDefine the persistence module $\\Psf_i : \\Pscr \\to \\mathbf{vec}$ by\n\\[\n \\Psf_i(j) =\n \\begin{cases}\n \\kbb & i \\leq j\\\\\n 0 & \\text{else},\n \\end{cases}\n\\]\nwith all structure morphisms that are not forced to be zero being the identity $\\kbb \\to \\kbb$.\n\nA $\\Pscr$-persistence module is \\define{finitely presentable} if it is isomorphic to the cokernel of a morphism $\\bigoplus_{j \\in J} \\Psf_j \\to \\bigoplus_{i \\in I} \\Psf_i$, where $I$ and $J$ are finite multisets of elements of $\\Pscr$.\n\n\n\\subparagraph{Restrictions and extensions}\nIf $\\Qscr \\subseteq \\Pscr$ is an inclusion of posets and $M : \\Pscr \\to \\mathbf{vec}$ is a $\\Pscr$-persistence module, the \\define{restriction} of $M$ to $\\Qscr$, denoted $M|_\\Qscr : \\Qscr \\to \\mathbf{vec}$, is the $\\Qscr$-persistence module obtained by precomposing $M : \\Pscr \\to 
\\mathbf{vec}$ with the inclusion $\\Qscr \\to \\Pscr$.\n\nWe consider two main types of subposets of $\\Rbf^n$.\nIn one case, we let $\\{r_1 < r_2 < \\dots < r_k\\}$ be a finite set of real numbers and consider the product poset $\\{r_1 < r_2 < \\dots < r_k\\}^n \\subseteq \\Rbf^n$.\nWe refer to a subposet of $\\Rbf^n$ obtained in this way as a \\define{finite grid}.\nIn the other case, we let $\\{r_i\\}_{i \\in \\Zbb}$ be a countable set of real numbers without accumulation points and such that $r_i < r_{i+1}$ for all $i \\in \\Zbb$, and again consider the product poset $\\{r_i\\}^n \\subseteq \\Rbf^n$.\nWe refer to a subposet of $\\Rbf^n$ obtained in this way as a \\define{countable grid}.\nA \\define{regular grid} is any countable grid of the form $(s \\Zbf)^n = \\{s \\cdot m : m \\in \\Zbf\\}^n \\subseteq \\Rbf^n$ for $s > 0 \\in \\Rbb$, where $\\Zbf$ denotes the poset of integers.\nNote that, in the definitions of finite and countable grid, one could take a product of different subposets of $\\Rbf$, instead of an $n$-fold product of the same poset.\nSince we do not need this generality, we choose the more restrictive definition for simplicity.\n\n\nLet $\\Pscr \\subseteq \\Rbf^n$ be a finite or countable grid.\nGiven $M : \\Pscr \\to \\mathbf{vec}$ define $\\widehat{M} : \\Rbf^n \\to \\mathbf{vec}$\nby\n\\[\n \\widehat{M}(r) =\n \\begin{cases}\n M\\left(\\sup\\{ p \\in \\Pscr : p \\leq r\\}\\right) & \\text{if there exists $p \\in \\Pscr$ such that $p \\leq r$}\\\\\n 0 & \\text{else};\n \\end{cases}\n\\]\nfor its structure morphisms use the structure morphisms of $M$ and the fact that $\\sup\\{p \\in \\Pscr : p \\leq r\\} \\leq \\sup\\{p \\in \\Pscr : p \\leq s\\}$ whenever $r \\leq s$ and there exists $p \\in \\Pscr$ such that $p \\leq r$.\nWe refer to $\\widehat{M}$ as the \\define{extension} of $M$ along the inclusion $\\Pscr \\subseteq \\Rbf^n$.\nAs a side remark, we note that this notion of extension is an instance of the notion of left Kan extension 
from category theory, but we do not make use of this fact.\n\nIf $\\Pscr \\subseteq \\Rbf^n$ is a finite or countable grid and $M : \\Rbf^n \\to \\mathbf{vec}$, define the \\define{restriction-extension} of $M$ along $\\Pscr$, denoted $M_\\Pscr : \\Rbf^n \\to \\mathbf{vec}$, as the extension along $\\Pscr \\subseteq \\Rbf^n$ of the restriction $M|_\\Pscr : \\Pscr \\to \\mathbf{vec}$.\nGiven $M,N : \\Rbf^n \\to \\mathbf{vec}$ and a morphism $f : M \\to N$, there is a morphism $f_\\Pscr : M_\\Pscr \\to N_\\Pscr$ given by extending the restriction $f|_\\Pscr : M|_\\Pscr \\to N|_\\Pscr$.\nIt follows immediately that this construction is functorial in the sense that given modules $A,B,C : \\Rbf^n \\to \\mathbf{vec}$ and morphisms $f : A \\to B$ and $g : B \\to C$, we have $g_\\Pscr \\circ f_\\Pscr = (g \\circ f)_\\Pscr : A_\\Pscr \\to C_{\\Pscr}$.\n\n\n\\begin{lemma}\n \\label{lemma:fp-is-extension-finite-poset}\n Let $n \\geq 1$ and let $M : \\Rbf^n \\to \\mathbf{vec}$.\n Then $M$ is finitely presentable if and only if there exists a finite grid $\\Pscr \\subseteq \\Rbf^n$ such that $M \\cong M_\\Pscr$.\n\\end{lemma}\n\\begin{proof}\n Assume that $M$ is finitely presentable, so that $M$ is isomorphic to the cokernel of a morphism $\\bigoplus_{j \\in J} \\Psf_j \\to \\bigoplus_{i \\in I} \\Psf_i$ with $I$ and $J$ finite multisets of elements of $\\Rbf^n$.\n Consider the set $S = \\{x_k \\in \\Rbf : x \\in I \\cup J, \\, 1 \\leq k \\leq n\\}$ of all the coordinates of the points in $I \\cup J$, and the finite grid $\\Pscr = S^n \\subseteq \\Rbf^n$.\n It is straightforward to see that $M \\cong M_\\Pscr$.\n\n Now assume that $M \\cong \\widehat{N}$ with $\\Pscr \\subseteq \\Rbf^n$ a finite grid and $N : \\Pscr \\to \\mathbf{vec}$.\n Since the poset $\\Pscr$ has finitely many elements and $N$ is pointwise finite dimensional, there exists an epimorphism $e : \\bigoplus_{i \\in I} \\Psf_i \\to N$, for some finite multiset $I$ of elements of $\\Pscr$.\n Similarly, if $K$ is the kernel of the 
morphism $e$, there must exist an epimorphism $\\bigoplus_{j \\in J} \\Psf_j \\to K$, with $J$ finite.\n Putting these two morphisms together, we see that $N$ is isomorphic to the cokernel of a morphism $\\bigoplus_{j \\in J} \\Psf_j \\to \\bigoplus_{i \\in I} \\Psf_i$.\n It is easy to see that $M$ is then isomorphic to the cokernel of the induced morphism $\\bigoplus_{j \\in J} \\widehat{\\Psf_j} \\to \\bigoplus_{i \\in I} \\widehat{\\Psf_i}$, and that $\\widehat{\\Psf_r} = \\Psf_r : \\Rbf^n \\to \\mathbf{vec}$ for all $r \\in \\Pscr$.\n\\end{proof}\n\nIt is straightforward to see that, in a grid extension persistence module, the structure maps that do not cross the grid are isomorphisms, as recorded in the following lemma.\n\n\\begin{lemma}\n \\label{lemma:structure-map-iso}\n Let $\\Pscr$ be a finite or countable grid and let $M : \\Rbf^n \\to \\mathbf{vec}$ be isomorphic to the extension of a $\\Pscr$-persistence module.\n Let $r < s \\in \\Rbf^n$ such that every $p \\in \\Pscr$ with $p \\leq s$ also satisfies $p \\leq r$.\n Then the structure morphism $\\phi^M_{r,s} : M(r) \\to M(s)$ is an isomorphism.\n \\qed\n\\end{lemma}\n\n\\subparagraph{Decomposition of persistence modules}\nThe proofs of the results in this section are standard; we include them in \\cref{appendix} for completeness.\nLet $\\Pscr$ be a poset and let $M : \\Pscr \\to \\mathbf{vec}$.\nWe say that $M$ is \\define{decomposable} if there exist $A,B : \\Pscr \\to \\mathbf{vec}$ non-zero such that $M \\cong A \\oplus B$.\nIf $M$ is non-zero and not decomposable, we say that $M$ is \\define{indecomposable}.\n\n\nThe next two results follow from \\cite{botnan-crawleybovey} and the Krull--Remak--Schmidt--Azumaya theorem~\\cite{azumaya}.\n\n\\begin{restatable}{theorem}{theoremdecomposition}\n \\label{theorem:decomposition}\n Let $\\Pscr$ be a poset and let $M : \\Pscr \\to \\mathbf{vec}$ be a pointwise finite dimensional $\\Pscr$-persistence module.\n There exists a set $I$ and an indexed family of 
indecomposable $\\Pscr$-persistence modules $\\{M_i\\}_{i \\in I}$ such that $M \\cong \\bigoplus_{i \\in I} M_i$.\n Moreover, if $M \\cong \\bigoplus_{j \\in J} M_j$ for another indexed family of indecomposable $\\Pscr$-persistence modules $\\{M_j\\}_{j \\in J}$, then there exists a bijection $f : I \\to J$ such that $M_i \\cong M_{f(i)}$ for all $i \\in I$.\n\\end{restatable}\n\n\\begin{restatable}{lemma}{indecomposablelocalring}\n \\label{lemma:indecomposable-local-ring}\n Let $\\Pscr$ be a poset.\n A persistence module $M : \\Pscr \\to \\mathbf{vec}$ is indecomposable if and only if its endomorphism ring $\\End(M)$ is local.\n\\end{restatable}\n\n\\begin{proof}\n This is a direct consequence of \\cite[Theorem~1.1]{botnan-crawleybovey}.\n\\end{proof}\n\nThe following result states that direct sum decompositions behave well with respect to restrictions and extensions; its proof is straightforward.\n\n\\begin{lemma}\n \\label{lemma:decomposition-restriction-extension}\n Let $\\Pscr \\subseteq \\Rbf^n$ be a subposet.\n Let $M : \\Rbf^n \\to \\mathbf{vec}$ decompose as $M \\cong \\bigoplus_{i\\in I} M_i$ for some indexed family $\\{M_i\\}_{i \\in I}$ of persistence modules.\n Then $M|_\\Pscr \\cong \\bigoplus_{i\\in I} (M_i)|_\\Pscr$.\n Similarly, if $\\Pscr$ is a finite or countable grid and $M : \\Pscr \\to \\mathbf{vec}$ decomposes as $M \\cong \\bigoplus_{i\\in I} M_i$ for some indexed family $\\{M_i\\}_{i \\in I}$ of $\\Pscr$-persistence modules, then $\\widehat{M} \\cong \\bigoplus_{i\\in I} \\widehat{M_i}$.\n \\qed\n\\end{lemma}\n\nThe next result asserts that finitely presentable persistence modules decompose as a finite direct sum of finitely presentable modules.\nThis holds in the generality of persistence modules indexed by an arbitrary poset, but we do not need this generality.\n\n\\begin{restatable}{lemma}{decompositionfp}\n \\label{lemma:decomposition-fp}\n Let $n \\geq 1$ and let $M : \\Rbf^n \\to \\mathbf{vec}$.\n If $M$ is finitely presentable, then 
$M \\cong \\bigoplus_{i \\in I} M_i$ with $\\{M_i\\}_{i \\in I}$ a finite family of finitely presentable indecomposable $\\Rbf^n$-persistence modules.\n\\end{restatable}\n\n\\subparagraph{Interleavings, interleaving distance, and space of persistence modules}\nLet $n \\in \\Nbb$.\nIf $M : \\Rbf^n \\to \\mathbf{vec}$ is a $\\Rbf^n$-persistence module and $r \\in \\Rbf$, the \\define{$r$-shift} of $M$ is the persistence module $M[r] : \\Rbf^n \\to \\mathbf{vec}$ with $M[r](s) = M(s+r)$ for all $s \\in \\Rbf^n$ and $\\phi^{M[r]}_{s,t} = \\phi^M_{s+r,t+r}$ for all $s\\leq t \\in \\Rbf^n$.\nIf $r \\geq 0 \\in \\Rbf$, then there is a natural transformation $\\eta^M_r : M \\to M[r]$ with component $M(a) \\to M[r](a) = M(a+r)$ given by the structure morphism $\\phi^M_{a,a+r}$.\n\nLet $M,N : \\Rbf^n \\to \\mathbf{vec}$ and $\\epsilon \\geq 0 \\in \\Rbf$.\nAn \\define{$\\epsilon$-interleaving} between $M$ and $N$ consists of a pair of morphisms $f : M \\to N[\\epsilon]$ and $g : N \\to M[\\epsilon]$ such that $g[\\epsilon] \\circ f = \\eta^M_{2\\epsilon}$ and $f[\\epsilon] \\circ g = \\eta^N_{2\\epsilon}$.\nThe \\define{interleaving distance} between $M$ and $N$ is\n\\[\n d_I(M,N) = \\inf\\left(\\{\\epsilon \\geq 0 : \\text{ there exists an $\\epsilon$-interleaving between $M$ and $N$ }\\} \\cup \\{\\infty\\}\\right) \\in \\Rbb \\cup \\{\\infty\\}.\n\\]\n\nBy composing interleavings, one shows that $d_I$ satisfies the triangle inequality.\nUsing \\cref{lemma:structure-map-iso}, one sees that if $M, N : \\Rbf^n \\to \\mathbf{vec}$ are finitely presentable and $d_I(M,N) = 0$, then $M$ and $N$ are isomorphic \\cite[Corollary~6.2]{lesnick}.\nThis implies that $d_I$ defines an extended metric on the set of isomorphism classes of finitely presentable $\\Rbf^n$-persistence modules.\nWe mention here that the collection of all isomorphism classes of finitely presentable $\\Rbf^n$-persistence modules is indeed a set, as opposed to a proper class, but we shall not delve into the details as 
they are standard and not relevant to the results presented here.\n\nWe consider the set of isomorphism classes of finitely presentable $\\Rbf^n$-persistence modules which we topologize with the topology generated by the open balls with respect to the interleaving distance.\n\nLet $\\epsilon > 0$.\nA persistence module $M : \\Rbf^n \\to \\mathbf{vec}$ is \\define{$\\epsilon$-trivial} if, for every $r \\in \\Rbf^n$, the structure morphism $\\phi^M_{r,r+\\epsilon} : M(r) \\to M(r + \\epsilon)$ is the zero morphism.\nNote that this is equivalent to $M$ being $\\epsilon\/2$-interleaved with the zero module.\nThus, if $d_I(M,0) < \\epsilon\/2$, then $M$ is $\\epsilon$-trivial.\n\n\n\\subparagraph{Shifts and restriction-extensions}\nIf $\\Pscr = (\\{s_i\\}_{i \\in \\Zbb})^n \\subseteq \\Rbf^n$ is a countable grid and $r \\in \\Rbf$, define the \\define{shifted} countable grid $\\Pscr + r = (\\{s_i + r\\}_{i \\in \\Zbb})^n \\subseteq \\Rbf^n$.\nLet $M : \\Rbf^n \\to \\mathbf{vec}$ and $s \\in \\Rbf$.\nIf $a \\in \\Rbf^n$, then $\\sup\\{ p + r : p + r \\leq a + s\\} = \\sup\\{p + r - s : p + r - s \\leq a\\} + s$.\nThus,\n\\[\n M_{\\Pscr + r}[s] = M[s]_{\\Pscr + (r-s)},\n\\]\na fact that we use repeatedly in \\cref{section:almost-indecomposable-open}.\nIn particular, given $N : \\Rbf^n \\to \\mathbf{vec}$ and a morphism $f : M \\to N[s]$, we get a morphism\n\\begin{equation}\n \\label{equation:shift-and-restriction-extension}\n f_\\Pscr : M_{\\Pscr + r} \\to N[s]_{\\Pscr + r} = N_{\\Pscr + r + s}[s].\n\\end{equation}\n\n\\section{Nearly indecomposables are open}\n\\label{section:almost-indecomposable-open}\n\nBefore we can prove \\cref{proposition:stability-indecomposability}, we need a definition and a lemma.\n\n\\begin{definition}\n Let $\\alpha > 0$.\n A countable grid $\\Pscr = \\left(\\{s_i\\}_{i \\in \\Zbb}\\right)^n \\subseteq \\Rbf^n$ has \\define{mesh width $\\alpha$} if for all $i \\in \\Zbb$ we have $s_{i+1} - s_i \\leq \\alpha$.\n\\end{definition}\n\nNote that, 
for any fixed $\\alpha > 0$, every countable grid $S^n = \\left(\\{s_i\\}_{i \\in \\Zbb}\\right)^n$ is included in some countable grid with mesh width $\\alpha$, for example, the countable grid $(S \\cup (\\alpha\\Zbf))^n$.\n\n\\begin{lemma}\n \\label{lemma:factor-bounded-grid}\n Let $\\alpha > 0$, let $\\Pscr$ be a countable grid with mesh width $\\alpha$, and let $L : \\Rbf^n \\to \\mathbf{vec}$.\n For every $r < \\alpha$ there exists a morphism $m : L_{\\Pscr + r}[r] \\to L_\\Pscr[\\alpha]$ rendering the following square commutative:\n \\[\n \\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes,row sep=4em,column sep=7em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { L_{\\Pscr} & L_{\\Pscr}[\\alpha] \\\\\n L[r]_\\Pscr & L_{\\Pscr + r}[r],\\\\};\n \\path[line width=0.75pt, -{>[width=8pt]}]\n (m-1-1) edge node [above] {$\\eta^{L_\\Pscr}_\\alpha$} (m-1-2)\n (m-1-1) edge node [left] {$(\\eta^L_r)_\\Pscr$} (m-2-1)\n (m-2-2) edge node [right] {$m$} (m-1-2)\n (m-2-1) edge [-,double equal sign distance] (m-2-2)\n ;\n \\end{tikzpicture}\n \\]\n\\end{lemma}\n\\begin{proof}\n We start by defining the morphism $m$, which is equivalently a morphism $m : L_{\\Pscr+r} \\to L_\\Pscr[\\alpha-r]$.\n If $a \\in \\Rbf^n$, then\n $L_{\\Pscr + r}(a) = L(\\sup\\{ s + r : s \\in \\Pscr, s + r \\leq a\\})$ and\n \\[\n L_{\\Pscr}[\\alpha-r](a) = L_{\\Pscr}(a + \\alpha - r) = L(\\sup\\{s : s \\in \\Pscr, s \\leq a + \\alpha - r\\}).\n \\]\n Let $s_0$ such that $s_0 + r = \\sup\\{ s + r : s \\in \\Pscr, s + r \\leq a\\}$ so that $L_{\\Pscr + r}(a) = L(s_0 + r)$.\n If $\\Pscr = (\\{s_i\\}_{i \\in \\Zbb})^n$, then $s_0 = (s_{i_1}, \\dots, s_{i_n})$.\n Let $s_1 = (s_{i_1+1}, \\dots, s_{i_n+1})$.\n Note that, since $\\Pscr$ has mesh width $\\alpha$, we have $s_1 - \\alpha \\leq s_0$.\n Thus, $s_1 - \\alpha + r \\leq a$, which implies $s_1 \\leq a + \\alpha - r$.\n This means that $s_1 \\leq \\sup\\{s : s \\in \\Pscr, s \\leq a + \\alpha - r\\}$.\n We can then consider 
the composite:\n \\[\n L_{\\Pscr + r}(a) = L(s_0) \\to L(s_1) \\to L(\\sup\\{s : s \\in \\Pscr, s \\leq a + \\alpha - r\\}) = L_{\\Pscr}[\\alpha-r](a),\n \\]\n given by the structure morphisms of $L$.\n Let us denote the above composite by $m_a$.\n Since the morphism was constructed only using the structure morphisms of $L$, it is straightforward to check that the morphisms $m_a : L_{\\Pscr + r}(a) \\to L_{\\Pscr}[\\alpha-r](a)$ assemble into a natural transformation $m : L_{\\Pscr+r} \\to L_\\Pscr[\\alpha-r]$.\n By the same reason, it follows that $m$ renders commutative the square in the statement.\n\\end{proof}\n\n\n\\stabilityindecomposability*\n\\begin{proof}[Proof outline]\\renewcommand{\\qedsymbol}{}\n Let $M$ be as in the statement.\n Let us start by giving an outline of the proof strategy.\n We start by arguing that the persistence module $M$ is isomorphic to the extension of a persistence module over a sufficiently fine countable grid.\n We then choose a sufficiently small $\\delta > 0$ and consider an arbitrary module $N$ at interleaving distance at most $\\delta$ from $M$.\n Next, we use \\cref{lemma:structure-map-iso}, which says that the extension of a module over a grid can only change when one of its coordinates crosses a point in the grid; this is used to show that, after restricting to appropriate grids, a restriction of $M$ is indecomposable and isomorphic to a direct summand of a restriction of $N$.\n This allows us to find a decomposition of $N$ into two summands, one of which is indecomposable and is related to a restriction of $M$.\n Finally, we show that the other summand is close to the zero module in the interleaving distance.\n\\end{proof}\n\n\n\\begin{proof}\n By \\cref{lemma:fp-is-extension-finite-poset}, the module $M$ is isomorphic to the extension of its restriction to some finite grid $\\Qscr = \\{r_1 < r_2 < \\dots < r_k\\}^n \\subseteq \\Rbf^n$.\n Let\n \\[\n \\tau = \\min_{1\\leq i \\leq k-1} r_{i+1} - r_i\\;\\;, \\;\\; \\alpha = 
\\min\\left(\\epsilon\/4\\; , \\tau\\right)\\;\\;, \\;\\; \\delta < \\alpha\/4,\n \\]\n and let $\\Pscr = (\\{s_i\\}_{i \\in \\Zbb})^n$ be a countable grid with mesh width $\\alpha$ and containing $\\Qscr$.\n Note that $M$ is isomorphic to the extension of its restriction to $\\Pscr$, that is $M \\cong M_\\Pscr$.\n\n Assume given $N : \\Rbf^n \\to \\mathbf{vec}$ and a $\\delta$-interleaving $f : M \\to N[\\delta]$ and $g : N \\to M[\\delta]$ between $M$ and $N$.\n The interleaving equalities $g[\\delta] \\circ f = \\eta^M_{2\\delta}$ and $f[2\\delta] \\circ g[\\delta] = \\eta^{N[\\delta]}_{2\\delta}$ induce, by \\cref{equation:shift-and-restriction-extension}, the following commutative diagram of $\\Rbf^n$-persistence modules:\n \\begin{equation}\n \\label{equation:decomposition-N}\n \\begin{tikzpicture}[baseline=(current bounding box.center)]\n \\matrix (m) [matrix of math nodes,row sep=4em,column sep=3em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { & N_{\\Pscr + \\delta}[\\delta] & & N_{\\Pscr + 3\\delta}[3\\delta] \\\\\n M_{\\Pscr} & & M_{\\Pscr + 2\\delta}[2\\delta] & \\\\};\n \\path[line width=0.75pt, -{>[width=8pt]}]\n (m-2-1) edge node [above] {$\\cong$} node [below] {$(\\eta^M_{2\\delta})_{\\Pscr}$} (m-2-3)\n (m-2-1) edge [>->] node [above left] {$f_{\\Pscr}$} (m-1-2)\n (m-1-2) edge [->>] node [above right] {$g[\\delta]_{\\Pscr}$} (m-2-3)\n (m-1-2) edge node [above] {$(\\eta^{N[\\delta]}_{2\\delta})_{\\Pscr}$} (m-1-4)\n (m-2-3) edge node [below right] {$f[2\\delta]_\\Pscr$} (m-1-4)\n ;\n \\end{tikzpicture}\n \\end{equation}\n The bottom horizontal morphism is an isomorphism by \\cref{lemma:structure-map-iso} and the fact that $2\\delta < \\alpha\/2 \\leq \\tau\/2$.\n This implies that the left diagonal morphism is a split monomorphism and that the middle diagonal morphism is a split epimorphism.\n\n In particular, it follows that $M_{\\Pscr}$ is a direct summand of $N_{\\Pscr+\\delta}[\\delta]$, so that $N_{\\Pscr+\\delta}[\\delta] 
\\cong M_{\\Pscr} \\oplus X$ for some $X$.\n At the same time, by \\cref{theorem:decomposition}, there exists a decomposition $N \\cong \\bigoplus_{i \\in I} N_i$ with $N_i$ indecomposable for all $i \\in I$.\n Using \\cref{lemma:decomposition-restriction-extension}, we see that, by restricting to $\\Pscr$ and extending, we get a decomposition $N_{\\Pscr+\\delta} \\cong \\bigoplus_{i \\in I} (N_i)_{\\Pscr+\\delta}$, where now the summands $\\{(N_i)_{\\Pscr+\\delta}\\}_{i \\in I}$ may not be indecomposable anymore.\n Nevertheless, since $M_\\Pscr \\cong M$ is indecomposable by assumption, there exists $i \\in I$ such that $(N_i)_{\\Pscr+\\delta}\\cong M_\\Pscr[-\\delta] \\oplus X'$ for some $X'$, using the fact that $N_{\\Pscr+\\delta}[\\delta] \\cong M_{\\Pscr} \\oplus X$ and \\cref{theorem:decomposition}.\n We consider the decomposition\n \\[\n N_{\\Pscr + \\delta}\\;\\; \\cong \\;\\;(N_i)_{\\Pscr + \\delta} \\oplus \\bigoplus_{j \\in I \\setminus i} (N_j)_{\\Pscr + \\delta} \\;\\;\\cong \\;\\;(M_\\Pscr[-\\delta] \\oplus X') \\oplus \\bigoplus_{j \\in I \\setminus i} (N_j)_{\\Pscr + \\delta},\n \\]\n and let $A = N_i$ and $B = \\bigoplus_{j \\in I \\setminus i} N_j$.\n Since this will be of use later, we note here that, by construction, $B_\\Pscr$ is isomorphic to a summand of $X$.\n \n We have thus decomposed $N \\cong A \\oplus B$ with $A$ indecomposable.\n To conclude, it remains to show that $B$ is $\\epsilon$-trivial, and for this it is enough to show that $d_I(B,0) < \\epsilon\/2$.\n By definition of $\\Pscr$, we have $d_I(B,B_\\Pscr) < \\alpha \\leq \\epsilon\/4$, which reduces the problem to showing that $d_I(B_\\Pscr,0) < \\epsilon\/4$, by the triangle inequality.\n Since $B_\\Pscr$ is a summand of $X$, it is enough to prove that $d_I(X,0) < \\epsilon\/4$.\n We will show that $\\eta^X_{\\alpha} : X \\to X[\\alpha]$ is the zero morphism, which implies that $d_I(X,0) \\leq \\alpha\/2 \\leq \\epsilon\/8 < \\epsilon\/4$.\n\n By the naturality of shifting, 
that is, by the commutativity of the following square\n \\[\n \\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes,row sep=4em,column sep=7em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { M_\\Pscr \\oplus X & M_\\Pscr[\\alpha] \\oplus X[\\alpha] \\\\\n N_{\\Pscr + \\delta} [\\delta] & N_{\\Pscr + \\delta}[\\delta + \\alpha],\\\\};\n \\path[line width=0.75pt, -{>[width=8pt]}]\n (m-1-1) edge node [above] {$\\left(\\eta^{M_\\Pscr}_\\alpha, \\eta^X_\\alpha\\right)$} (m-1-2)\n (m-1-1) edge node [left] {$\\cong$} (m-2-1)\n (m-1-2) edge node [left] {$\\cong$} (m-2-2)\n (m-2-1) edge node [above] {$\\eta^{N_{\\Pscr+\\delta}[\\delta]}_\\alpha$} (m-2-2)\n ;\n \\end{tikzpicture}\n \\]\n it is sufficient to show that the following composite is the zero morphism:\n \\begin{equation}\n \\label{equation:sufficient-zero-morphism}\n X \\rightarrowtail M_{\\Pscr} \\oplus X \\xrightarrow{\\cong} N_{\\Pscr+\\delta}[\\delta] \\xrightarrow{\\eta^{N_{\\Pscr+\\delta}[\\delta]}_\\alpha} N_{\\Pscr+\\delta}[\\delta + \\alpha].\n \\end{equation}\n Note that, by the construction of the splitting $N_{\\Pscr+\\delta}[\\delta] \\cong M_{\\Pscr} \\oplus X$ and the commutativity of the right triangle in \\cref{equation:decomposition-N}, the following composite is the zero morphism:\n \\begin{equation}\n \\label{equation:zero-morphism}\n X \\rightarrowtail M_{\\Pscr} \\oplus X \\xrightarrow{\\cong} N_{\\Pscr+\\delta}[\\delta]\n \\xrightarrow{(\\eta^{N[\\delta]}_{2\\delta})_{\\Pscr}} N_{\\Pscr + 3\\delta}[3\\delta].\n \\end{equation}\n Thus, it suffices to see that the right-most morphism in \\cref{equation:sufficient-zero-morphism} factors through the right-most morphism in \\cref{equation:zero-morphism}.\n Letting $L = N[\\delta]$ and using \\cref{equation:shift-and-restriction-extension},\n this translates to the claim that the morphism \n $\\eta^{L_{\\Pscr}}_\\alpha : L_{\\Pscr} \\to L_{\\Pscr}[\\alpha]$ factors through $(\\eta^{L}_{2\\delta})_{\\Pscr} : 
L_{\\Pscr} \\to L_{\\Pscr + 2\\delta}[2\\delta]$, which follows from \\cref{lemma:factor-bounded-grid} by letting $r = 2\\delta < \\alpha\/2 < \\alpha$.\n\\end{proof}\n\n\\begin{corollary}\n \\label{corollary:open-with-indecomposables}\n Let $n \\geq 1$.\n For every $\\epsilon > 0$, there exists an open set $O^\\epsilon$ of isomorphism classes of finitely presentable $n$-parameter persistence modules, containing the set of all indecomposable finitely presentable modules as a subset, and such that every $N \\in O^\\epsilon$ decomposes as $N \\cong A \\oplus B$ with $A$ indecomposable and $d_I(B,0) < \\epsilon$.\n\\end{corollary}\n\\begin{proof}\n Fix $\\epsilon > 0$.\n By \\cref{proposition:stability-indecomposability}, for every indecomposable finitely presentable $M : \\Rbf^n \\to \\mathbf{vec}$ there exists $\\delta^M > 0$ such that, for all finitely presentable $N : \\Rbf^n \\to \\mathbf{vec}$ in the open $\\delta^M$-ball around $M$ in the interleaving distance, it holds that $N$ decomposes as $N \\cong A \\oplus B$ with $A$ indecomposable and $B$ $\\epsilon$-trivial.\n It is then clear that the set\n \\[\n O^\\epsilon := \\bigcup_{\\substack{M \\text{ finitely presentable} \\\\ \\text{and indecomposable}}} \\{N : \\text{ $N$ is finitely presentable and } d_I(M,N) < \\delta^M \\}\n \\]\n satisfies the conditions in the statement.\n\\end{proof}\n\n\n\n\\section{Indecomposables are dense}\n\\label{section:indecomposables-dense}\n\n\nThe proof of \\cref{theorem:indecomposables-dense} depends on the following key lemma, which lets us ``tack together'' two indecomposable modules to obtain a third indecomposable module that is at small interleaving distance from the direct sum of the initial modules.\nThe proof is in \\cref{section:proof-main-lemma}.\n\n\\begin{restatable}[Tacking Lemma]{lemma}{mainlemma}\n \\label{lemma:main-lemma-attaching}\n Let $A,B : \\Rbf^2 \\to \\mathbf{vec}$ be finitely presentable, indecomposable, and isomorphic to extensions of 
$\\Pscr$-persistence modules for some regular grid $\\Pscr \\subseteq \\Rbf^2$.\n For every $\\delta > 0$ there exists a regular grid $\\Qscr \\supseteq \\Pscr$ and $M : \\Rbf^2 \\to \\mathbf{vec}$ with $M$ indecomposable, finitely presentable, isomorphic to the extension of a $\\Qscr$-persistence module, and such that\n \\[\n \\mathsf{im}\\left(\\eta^M_\\delta\\right) \\cong \\mathsf{im}\\left(\\eta^A_\\delta\\right) \\oplus \\mathsf{im}\\left(\\eta^B_\\delta\\right).\n \\]\n\\end{restatable}\n\nWe also need the following two simple lemmas.\n\n\\begin{lemma}\n \\label{lemma:image-to-interleaving}\n Let $A : \\Rbf^n \\to \\mathbf{vec}$ and let $\\delta \\geq 0$.\n Consider $\\mathsf{im}(\\eta^A_\\delta ) : \\Rbf^n \\to \\mathbf{vec}$, the image of the morphism of persistence modules $\\eta^A_\\delta : A \\to A[\\delta]$ given by the structure morphisms.\n Then $A$ and $\\mathsf{im}(\\eta^A_\\delta)$ are $\\delta$-interleaved.\n As a consequence, if $B : \\Rbf^n \\to \\mathbf{vec}$ satisfies $\\mathsf{im}(\\eta^A_\\delta) \\cong \\mathsf{im}(\\eta^B_\\delta)$, then $A$ and $B$ are $2\\delta$-interleaved.\n\\end{lemma}\nIn fact, the $2\\delta$ can be improved to $\\delta$ using the notion of asymmetric interleaving \\cite[Section~2.6.1]{lesnick-thesis}, but this is not necessary for our purposes.\n\\begin{proof}\n The second statement follows directly from the first one.\n By construction, there is a canonical epimorphism $A \\to \\mathsf{im}(\\eta^A_\\delta)$, and thus we get a morphism $f : A \\to \\mathsf{im}(\\eta^A_\\delta)[\\delta]$, by composing with $\\eta^{\\mathsf{im}(\\eta^A_\\delta)}_\\delta: \\mathsf{im}(\\eta^A_\\delta) \\to \\mathsf{im}(\\eta^A_\\delta)[\\delta]$.\n Similarly, there is a canonical monomorphism $g : \\mathsf{im}(\\eta^A_\\delta) \\to A[\\delta]$.\n A straightforward check shows that $f$ and $g$ form a $\\delta$-interleaving.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lemma:eta-direct-sum}\n Let $A,B : \\Rbf^n \\to \\mathbf{vec}$ and 
$\\epsilon \\geq 0$.\n Then $\\mathsf{im}\\left(\\eta^A_\\epsilon\\right) \\oplus \\mathsf{im}\\left(\\eta^B_\\epsilon\\right) \\cong \\mathsf{im}\\left(\\eta^{A \\oplus B}_\\epsilon\\right)$.\n \\qed\n\\end{lemma}\n\n\\indecomposabledense*\n\\begin{proof}\n We start by reducing the problem to the case in which the given module is an extension of a module defined over a regular grid.\n Consider the regular grid $\\Pscr = ((\\epsilon\/2)\\Zbf)^2$ and note that $d_I(N_\\Pscr,N) \\leq \\epsilon\/2$.\n Let $L = N_\\Pscr$.\n It is easy to see that $L$ is finitely presentable.\n\n If $L = 0$, then take $M$ to be the persistence module such that $M(x,y) = \\kbb$ if $0 \\leq x < \\epsilon$ and $y \\geq 0$, and $0$ otherwise, with every structure morphism that is not forced to be zero being the identity $\\kbb \\to \\kbb$.\n It is straightforward to see that $M$ is indecomposable and that $d_I(M,0) = \\epsilon\/2$.\n This case now follows from $d_I(N,M) \\leq d_I(N,L) + d_I(L,M) \\leq \\epsilon\/2 + d_I(0,M) \\leq \\epsilon$.\n\n We now assume that $L$ is non-zero.\n By \\cref{lemma:decomposition-fp}, there exist finitely presentable indecomposables $X_1, \\dots, X_k : \\Rbf^2 \\to \\mathbf{vec}$ such that $L \\cong X_1 \\oplus \\dots \\oplus X_k$.\n Note that, for all $1 \\leq i \\leq k$, the persistence module $X_i$ is isomorphic to the extension of a $\\Qscr$-persistence module for any grid $\\Qscr \\supseteq \\Pscr$.\n Let us define $M_i : \\Rbf^2 \\to \\mathbf{vec}$ for $1 \\leq i \\leq k$ inductively.\n Let $M_1 = X_1$ and, for $2 \\leq i \\leq k$, let $M_{i}$ be obtained by setting $A = X_i$, $B = M_{i-1}$, and $\\delta = \\epsilon\/4$ in \\cref{lemma:main-lemma-attaching}.\n Thus, $M_i$ is finitely presentable, indecomposable, and satisfies\n \\[\n \\mathsf{im}\\left(\\eta^{M_i}_{\\epsilon\/4}\\right) \\cong \\mathsf{im}\\left(\\eta^{X_i}_{\\epsilon\/4}\\right) \\oplus \\mathsf{im}\\left(\\eta^{M_{i-1}}_{\\epsilon\/4}\\right).\n \\]\n Let $M = 
M_k$.\n It follows by induction and \\cref{lemma:eta-direct-sum} that \n \\[\n \\mathsf{im}\\left(\\eta^{M}_{\\epsilon\/4}\\right) \\cong \\mathsf{im}\\left(\\eta^{X_1}_{\\epsilon\/4}\\right) \\oplus \\dots \\oplus \\mathsf{im}\\left(\\eta^{X_k}_{\\epsilon\/4}\\right) \\cong \\mathsf{im}\\left(\\eta^{X_1 \\oplus \\dots \\oplus X_k}_{\\epsilon\/4}\\right) \\cong \\mathsf{im}\\left(\\eta^L_{\\epsilon\/4}\\right).\n \\]\n Using \\cref{lemma:image-to-interleaving}, we get that $M$ and $L$ are $\\epsilon\/2$-interleaved.\n By the triangle inequality, we get $d_I(M,N) \\leq d_I(M,L) + d_I(L,N) \\leq \\epsilon$, as required.\n\\end{proof}\n\nWe can now give the proof of our main result.\n\n\\begin{proof}[Proof of \\cref{theorem:main-theorem}]\n Given $\\epsilon > 0$, the set $O^\\epsilon$ of \\cref{corollary:open-with-indecomposables} is open and contains all indecomposable finitely presentable modules.\n Since, when $n=2$, the set of indecomposables is dense by \\cref{theorem:indecomposables-dense}, the result follows.\n\\end{proof}\n\n\n\\section{Tacking indecomposables together}\n\\label{section:proof-main-lemma}\n\nIn this section, we prove \\cref{lemma:main-lemma-attaching} (the Tacking Lemma).\nWe first summarize the strategy.\n\n\n\\begin{proof}[Proof outline]\\renewcommand{\\qedsymbol}{}\nWe start by defining a persistence module $\\Gsf$ on a finite grid which allows us to ``tack together'' indecomposable modules.\nNow, fix modules $A$ and $B$ as in the statement.\nThe tacking works in three steps.\nIn \\cref{lemma:add-1-dim-corner}, we replace $A$ and $B$ by modules which admit a ``thin corner'' (\\cref{definition:1-dim-corner}); the purpose of this thin corner is that it allows us to attach a copy of $\\Gsf$ to each of the modules, which we do in the next step.\nIn \\cref{lemma:add-antenna-attachment}, we replace the modules by modules that admit a ``horizontal antenna attachment'' (\\cref{definition:horizontal-antenna-attachment}); the purpose of this 
horizontal antenna attachment is that it allows us to tack together the two modules with a third copy of $\\Gsf$. \nFinally, in \\cref{lemma:actual-tacking}, we construct $M$ by tacking together the two modules.\n\\cref{figure:diagram-proof-main-lemma} shows a diagrammatic description of these steps.\n\\end{proof}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=1\\linewidth]{pictures\/diagram-steps.eps}\n \\caption{A schematic summary of the main steps in the proof of the tacking lemma.\n The transition from (0.) to (1.) corresponds to \\cref{lemma:add-1-dim-corner}; the transition from (1.) to (2.) corresponds to \\cref{lemma:add-antenna-attachment}; the transition from (2.) to (3.) corresponds to \\cref{lemma:actual-tacking}.}\n \\label{figure:diagram-proof-main-lemma}\n\\end{figure}\n\n\\begin{definition}\n\\label{def:G}\n Let $\\Pscr = \\{0, 1, 2, 3, 4\\}^2$ and define $\\Gsf : \\Pscr \\to \\mathbf{vec}$ as follows:\n \\[ \\footnotesize\n \\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes,row sep=3em,column sep=3em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { \\kbb & \\kbb & \\kbb & \\kbb & \\kbb \\\\\n \\kbb & \\kbb^2 & \\kbb^2 & \\kbb^2 & \\kbb \\\\\n 0 & \\kbb & \\kbb^2 & \\kbb^2 & \\kbb \\\\\n 0 & 0 & \\kbb & \\kbb^2 & \\kbb \\\\\n 0 & 0 & 0 & \\kbb & \\kbb \\\\};\n \\path[line width=0.5pt, -{>[width=6pt]}]\n (m-1-1) edge [-,double equal sign distance] (m-1-2)\n (m-1-2) edge [-,double equal sign distance] (m-1-3)\n (m-1-3) edge [-,double equal sign distance] (m-1-4)\n (m-1-4) edge [-,double equal sign distance] (m-1-5)\n\n (m-2-2) edge [-,double equal sign distance] (m-2-3)\n (m-2-3) edge [-,double equal sign distance] (m-2-4)\n\n (m-3-3) edge [-,double equal sign distance] (m-3-4)\n\n (m-5-5) edge [-,double equal sign distance] (m-4-5)\n (m-4-5) edge [-,double equal sign distance] (m-3-5)\n (m-3-5) edge [-,double equal sign distance] (m-2-5)\n (m-2-5) edge [-,double equal sign distance] (m-1-5)\n\n (m-4-4) edge 
[-,double equal sign distance] (m-3-4)\n (m-3-4) edge [-,double equal sign distance] (m-2-4)\n\n (m-3-3) edge [-,double equal sign distance] (m-2-3)\n\n (m-2-1) edge node [left] {\\footnotesize $0$} (m-1-1)\n (m-2-1) edge node [above] {\\footnotesize $\\begin{pmatrix} 1\\\\ 1\\end{pmatrix}$} (m-2-2)\n (m-2-2) edge node [right] {\\footnotesize $(1,-1)$} (m-1-2)\n (m-2-3) edge node [right] {\\footnotesize $(1,-1)$} (m-1-3)\n (m-2-4) edge node [right] {\\footnotesize $(1,-1)$} (m-1-4)\n\n (m-5-4) edge node [above] {\\footnotesize $0$} (m-5-5)\n (m-5-4) edge node [left] {\\footnotesize $\\begin{pmatrix} 1\\\\ 1\\end{pmatrix}$} (m-4-4)\n (m-4-4) edge node [above] {\\footnotesize $(1,-1)$} (m-4-5)\n (m-3-4) edge node [above] {\\footnotesize $(1,-1)$} (m-3-5)\n (m-2-4) edge node [above] {\\footnotesize $(1,-1)$} (m-2-5)\n\n (m-3-2) edge node [left] {\\footnotesize $\\begin{pmatrix} 1\\\\ 0\\end{pmatrix}$} (m-2-2)\n (m-3-2) edge node [above] {\\footnotesize $\\begin{pmatrix} 1\\\\ 0\\end{pmatrix}$} (m-3-3)\n\n (m-4-3) edge node [left] {\\footnotesize $\\begin{pmatrix} 0\\\\ 1\\end{pmatrix}$} (m-3-3)\n (m-4-3) edge node [above] {\\footnotesize $\\begin{pmatrix} 0\\\\ 1\\end{pmatrix}$} (m-4-4)\n\n (m-3-1) edge (m-2-1)\n (m-4-1) edge (m-3-1)\n (m-5-1) edge (m-4-1)\n\n (m-4-2) edge (m-3-2)\n (m-5-2) edge (m-4-2)\n\n (m-5-3) edge (m-4-3)\n\n (m-5-1) edge (m-5-2)\n (m-5-2) edge (m-5-3)\n (m-5-3) edge (m-5-4)\n\n (m-4-1) edge (m-4-2)\n (m-4-2) edge (m-4-3)\n \n (m-3-1) edge (m-3-2)\n ;\n \\end{tikzpicture}\n \\]\n\\end{definition}\n\n\n\n\\begin{restatable}{lemma}{Gindecomposable}\n \\label{lemma:G-is-indecomposable}\n The persistence module $\\Gsf$ is indecomposable.\n More specifically, the morphism of rings $\\End(\\Gsf) \\to \\End(\\Gsf(4,4)) \\cong \\kbb$ given by evaluation at $(4,4)$ is an isomorphism.\n\\end{restatable}\n\n\\begin{proof}\n\n\n Let $f : \\Gsf \\to \\Gsf$ be an endomorphism.\n If $f(4,4) : \\kbb \\to \\kbb$ is given by multiplication by $\\alpha$, then, for 
all $0 \\leq j \\leq 4$, we have that $f(4,j) : \\kbb \\to \\kbb$ and $f(j,4) : \\kbb \\to \\kbb$ must also be given by multiplication by $\\alpha$.\n\n Assume that $f(1,2) : \\kbb \\to \\kbb$ is given by multiplication by $\\beta$ and that $f(2,1) : \\kbb \\to \\kbb$ is given by multiplication by $\\gamma$.\n Since the structure morphism $\\kbb = \\Gsf(1,2) \\to \\Gsf(4,4) = \\kbb$ is the identity, we must have $\\beta = \\alpha$.\n By an analogous argument, we have $\\gamma = \\alpha$.\n This also implies that for all $(j,k)$ with $\\Gsf(j,k) = \\kbb^2$ the endomorphism $f(j,k)$ is given by multiplication by $\\alpha$.\n \n \n To conclude, note that, by the definition of the structure morphisms $\\Gsf(0,3) \\to \\Gsf(1,3)$ and $\\Gsf(3,0) \\to \\Gsf(3,1)$, we have that $f(0,3)$ and $f(3,0)$ are also given by multiplication by $\\alpha$.\n Thus, $\\End(\\Gsf) \\cong \\End(\\Gsf(4,4)) \\cong \\kbb$, and hence $\\Gsf$ is indecomposable, by \\cref{lemma:indecomposable-local-ring}.\n\\end{proof}\n\nThe crucial property of $\\Gsf$ for our purposes, other than its indecomposability, is that there are two pairs of adjacent copies of the ground field $\\kbb$ connected by a zero map, on the top left at $(0,3)$ and $(0,4)$, and on the bottom right at $(3,0)$ and $(4,0)$.\n\n\\begin{definition}\n \\label{definition:1-dim-corner}\n Let $X : \\Rbf^2 \\to \\mathbf{vec}$.\n We say that $X$ \\define{admits a thin corner} if there exists $\\epsilon > 0$, $X' : (\\epsilon \\Zbf)^2 \\to \\mathbf{vec}$, and $(x,y) \\in (\\epsilon \\Zbf)^2$ such that $X$ is isomorphic to the extension of $X'$, and we have $X'(x,y) = \\kbb$ and $X'(x-\\epsilon,y) = X'(x,y-\\epsilon) = 0$.\n\\end{definition}\n\nSee \\cref{figure:diagram-proof-main-lemma}(1.) 
for a schematic depiction of two modules admitting a thin corner.\n\n\\begin{lemma}\n \\label{lemma:add-1-dim-corner}\n Let $A : \\Rbf^2 \\to \\mathbf{vec}$ be finitely presentable, indecomposable, and isomorphic to the extension of a $\\Pscr$-persistence module for some regular grid $\\Pscr \\subseteq \\Rbf^2$.\n For every $\\delta > 0$, there exists $A_1 : \\Rbf^2 \\to \\mathbf{vec}$ indecomposable, finitely presentable, admitting a thin corner, isomorphic to an extension of a $\\Qscr$-persistence module for some regular grid $\\Qscr \\supseteq \\Pscr$, and such that $\\mathsf{im}\\left(\\eta^A_\\delta\\right) \\cong \\mathsf{im}\\big(\\eta^{A_1}_\\delta\\big)$.\n\\end{lemma}\n\\begin{proof}\n Assume that $\\Pscr = (\\epsilon \\Zbf)^2$.\n Let $m \\in \\Nbb$ be such that $\\epsilon\/m < \\delta$.\n Let $\\tau = \\epsilon\/m$.\n Since $A$ is finitely presentable, there exists $(x,y) \\in (\\tau\\Zbf)^2$ such that $A(x,y) \\neq 0$ and $A(x-\\tau,y) = 0 = A(x,y-\\tau)$.\n Thus, on the regular grid $((\\tau\/2)\\Zbf)^2$, and around index $(x,y)$ (highlighted entry), the module $A$ restricts as follows:\n \\[ \\footnotesize\n \\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes,row sep=3em,column sep=1.5em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { A(x-\\tau,y+\\tau) & A(x-\\tau\/2,y+\\tau) & A(x,y+\\tau) \n & A(x+\\tau\/2,y+\\tau) & A(x+\\tau,y+\\tau) \\\\\n 0 & 0 & A(x,y+\\tau\/2)\n & A(x+\\tau\/2,y+\\tau\/2) & A(x+\\tau,y+\\tau\/2) \\\\\n 0 & 0 & |[fill=lightgray!25, rectangle, outer sep = 2pt, minimum size = 0]| A(x,y) & A(x+\\tau\/2,y) & A(x+\\tau,y) \\\\\n 0 & 0 & 0 & 0 & A(x+\\tau,y-\\tau\/2) \\\\\n 0 & 0 & 0 & 0 & A(x+\\tau,y-\\tau) \\\\};\n \\path[line width=0.5pt, -{>[width=4pt]}]\n (m-1-1) edge node [above] {$\\cong$} (m-1-2)\n (m-2-1) edge node [above] {$\\cong$} (m-2-2)\n (m-3-1) edge node [above] {$\\cong$} (m-3-2)\n (m-4-1) edge node [above] {$\\cong$} (m-4-2)\n (m-5-1) edge node [above] {$\\cong$} (m-5-2)\n\n (m-1-2) edge 
(m-1-3)\n (m-2-2) edge (m-2-3)\n (m-3-2) edge (m-3-3)\n (m-4-2) edge (m-4-3)\n (m-5-2) edge (m-5-3)\n\n (m-1-3) edge node [above] {$\\cong$} (m-1-4)\n (m-2-3) edge node [above] {$\\cong$} (m-2-4)\n (m-3-3) edge node [above] {$\\cong$} (m-3-4)\n (m-4-3) edge node [above] {$\\cong$} (m-4-4)\n (m-5-3) edge node [above] {$\\cong$} (m-5-4)\n\n (m-1-4) edge (m-1-5)\n (m-2-4) edge (m-2-5)\n (m-3-4) edge (m-3-5)\n (m-4-4) edge (m-4-5)\n (m-5-4) edge (m-5-5)\n\n (m-5-1) edge node [left] {$\\cong$} (m-4-1)\n (m-5-2) edge node [left] {$\\cong$} (m-4-2)\n (m-5-3) edge node [left] {$\\cong$} (m-4-3)\n (m-5-4) edge node [left] {$\\cong$} (m-4-4)\n (m-5-5) edge node [left] {$\\cong$} (m-4-5)\n\n (m-4-1) edge (m-3-1)\n (m-4-2) edge (m-3-2)\n (m-4-3) edge (m-3-3)\n (m-4-4) edge (m-3-4)\n (m-4-5) edge (m-3-5)\n\n (m-3-1) edge node [left] {$\\cong$} (m-2-1)\n (m-3-2) edge node [left] {$\\cong$} (m-2-2)\n (m-3-3) edge node [left] {$\\cong$} (m-2-3)\n (m-3-4) edge node [left] {$\\cong$} (m-2-4)\n (m-3-5) edge node [left] {$\\cong$} (m-2-5)\n\n (m-2-1) edge (m-1-1)\n (m-2-2) edge (m-1-2)\n (m-2-3) edge (m-1-3)\n (m-2-4) edge (m-1-4)\n (m-2-5) edge (m-1-5)\n ;\n \\end{tikzpicture}\n \\]\n where the isomorphisms are due to the fact that $A$ is an extension of a persistence module on the regular grid $(\\tau\\Zbf)^2$.\n\n Let $\\kbb \\to A(x,y)$ be any non-zero morphism.\n We now define a persistence module $A'$, and then define $A_1$ to be an extension of $A'$.\n Let $A' : ((\\tau\/2)\\Zbf)^2 \\to \\mathbf{vec}$ coincide with the restriction of $A$ to $((\\tau\/2)\\Zbf)^2$ at all places, except at $(x,y)$ where it takes the value $\\kbb$.\n The structure morphisms $A'(x,y) \\to A'(x,y+\\tau\/2) = A(x,y+\\tau\/2) $ and $A'(x,y) \\to A'(x+\\tau\/2, y) = A(x+\\tau\/2, y)$ are defined by composing the corresponding structure morphisms of $A$ with the map $\\kbb \\to A(x,y)$.\n Thus, around index $(x,y)$ (again highlighted), the module $A'$ looks as follows:\n \\[ \\footnotesize\n 
\\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes,row sep=3em,column sep=1.5em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { A(x-\\tau,y+\\tau) & A(x-\\tau\/2,y+\\tau) & A(x,y+\\tau) \n & A(x+\\tau\/2,y+\\tau) & A(x+\\tau,y+\\tau) \\\\\n 0 & 0 & A(x,y+\\tau\/2)\n & A(x+\\tau\/2,y+\\tau\/2) & A(x+\\tau,y+\\tau\/2) \\\\\n 0 & 0 & |[fill=lightgray!25, rectangle, outer sep = 2pt, minimum size = 0]| \\kbb & A(x+\\tau\/2,y) & A(x+\\tau,y) \\\\\n 0 & 0 & 0 & 0 & A(x+\\tau,y-\\tau\/2) \\\\\n 0 & 0 & 0 & 0 & A(x+\\tau,y-\\tau) \\\\};\n \\path[line width=0.5pt, -{>[width=4pt]}]\n (m-1-1) edge (m-1-2)\n (m-2-1) edge (m-2-2)\n (m-3-1) edge (m-3-2)\n (m-4-1) edge (m-4-2)\n (m-5-1) edge (m-5-2)\n\n (m-1-2) edge (m-1-3)\n (m-2-2) edge (m-2-3)\n (m-3-2) edge (m-3-3)\n (m-4-2) edge (m-4-3)\n (m-5-2) edge (m-5-3)\n\n (m-1-3) edge (m-1-4)\n (m-2-3) edge (m-2-4)\n (m-3-3) edge (m-3-4)\n (m-4-3) edge (m-4-4)\n (m-5-3) edge (m-5-4)\n\n (m-1-4) edge (m-1-5)\n (m-2-4) edge (m-2-5)\n (m-3-4) edge (m-3-5)\n (m-4-4) edge (m-4-5)\n (m-5-4) edge (m-5-5)\n\n (m-5-1) edge (m-4-1)\n (m-5-2) edge (m-4-2)\n (m-5-3) edge (m-4-3)\n (m-5-4) edge (m-4-4)\n (m-5-5) edge (m-4-5)\n\n (m-4-1) edge (m-3-1)\n (m-4-2) edge (m-3-2)\n (m-4-3) edge (m-3-3)\n (m-4-4) edge (m-3-4)\n (m-4-5) edge (m-3-5)\n\n (m-3-1) edge (m-2-1)\n (m-3-2) edge (m-2-2)\n (m-3-3) edge (m-2-3)\n (m-3-4) edge (m-2-4)\n (m-3-5) edge (m-2-5)\n\n (m-2-1) edge (m-1-1)\n (m-2-2) edge (m-1-2)\n (m-2-3) edge (m-1-3)\n (m-2-4) edge (m-1-4)\n (m-2-5) edge (m-1-5)\n ;\n \\end{tikzpicture}\n \\]\n We have the following isomorphisms of endomorphism rings:\n \\[\n \\End(A) \\cong \\End\\left(A|_{((\\tau\/2)(2\\Zbf+1))^2}\\right) = \\End\\left(A'|_{((\\tau\/2)(2\\Zbf+1))^2}\\right) \\cong \\End\\left(A'|_{((\\tau\/2)\\Zbf)^2}\\right).\n \\]\n The first isomorphism is due to the fact that $A$ is isomorphic to the extension of a $(\\tau\\Zbf)^2$-persistence module.\n The equality is because\n 
$A|_{((\\tau\/2)(2\\Zbf+1))^2} = A'|_{((\\tau\/2)(2\\Zbf+1))^2}$.\n For the last isomorphism, note that the value of $A'$ at any element of the grid $((\\tau\/2)\\Zbf)^2$ other than $(x,y)$ is isomorphic, through a structure morphism of $A'$, to the value of $A'$ at an element of the grid $((\\tau\/2)(2\\Zbf+1))^2$, by construction.\n Moreover, by construction, the structure morphism $A'(x,y) = \\kbb \\to A'(x+\\tau\/2, y+\\tau\/2)$ is non-zero, and thus a monomorphism; and $(x+\\tau\/2, y+\\tau\/2) \\in ((\\tau\/2)(2\\Zbf+1))^2$.\n It follows that the action of an endomorphism of $A'$ is completely determined by the action of the endomorphism on the restriction $A'|_{((\\tau\/2)(2\\Zbf+1))^2}$, proving the last isomorphism.\n\n Thus, $A'$ is indecomposable by \\cref{lemma:indecomposable-local-ring} and the fact that $A$ is indecomposable.\n Define $A_1 = \\widehat{A'}$, the extension of $A'$ to $\\Rbf^2$.\n Note that $A_1$ is indecomposable, finitely presentable, and satisfies $\\mathsf{im}\\big(\\eta^{A_1}_\\delta\\big) \\cong \\mathsf{im}\\left(\\eta^{A}_\\delta\\right)$.\n Moreover, $A_1$ admits a thin corner, by construction.\n\\end{proof}\n\n\\begin{definition}\n \\label{definition:horizontal-antenna-attachment}\n Let $X : \\Rbf^2 \\to \\mathbf{vec}$ and $(x,y) \\in \\Rbf^2$.\n We say that $X$ \\define{admits a horizontal antenna attachment} at $(x,y)$ if there exists $\\epsilon > 0$ and $X' : (\\epsilon \\Zbf)^2 \\to \\mathbf{vec}$, such that $(x,y) \\in (\\epsilon \\Zbf)^2$, $X$ is isomorphic to the extension of $X'$, and we have $X'(x,y) = \\kbb$, $X'(x-k\\epsilon,y) = 0$ for all integers $k \\geq 1$, and the structure morphism $X'(x,y) \\to X'(x,y+\\epsilon)$ is zero.\n\\end{definition}\n\nFor example, the extension $\\widehat\\Gsf: \\Rbf^2 \\to \\mathbf{vec}$ admits a horizontal antenna attachment at $(0,3)$.\nSee \\cref{figure:diagram-step-3} for a schematic depiction of two modules admitting a horizontal antenna attachment.\n\n\\begin{lemma}\n 
\\label{lemma:add-antenna-attachment}\n Let $A : \\Rbf^2 \\to \\mathbf{vec}$ be finitely presentable, indecomposable, and admitting a thin corner.\n For every $\\delta > 0$, there exists $A_1 : \\Rbf^2 \\to \\mathbf{vec}$ indecomposable, finitely presentable, admitting a horizontal antenna attachment, and such that $\\mathsf{im}\\left(\\eta^A_\\delta\\right) \\cong \\mathsf{im}\\big(\\eta^{A_1}_\\delta\\big)$.\n\\end{lemma}\n\n\\begin{proof}\n Since $A$ admits a thin corner, we may assume the following:\n there exists a regular grid $(\\epsilon \\Zbf)^2$ for some $\\epsilon > 0$ and $X : (\\epsilon \\Zbf)^2 \\to \\mathbf{vec}$ such that $A \\cong \\widehat{X}$, and such that there exists $(x,y) \\in (\\epsilon \\Zbf)^2$ with $X(x,y) = \\kbb$ and $X(x-\\epsilon,y) = X(x,y-\\epsilon) = 0$.\n\n Let $m \\in \\Nbb$ be such that $\\epsilon\/m < \\delta$ and let $\\tau = \\epsilon\/m$.\n We make use of the regular grid $((\\tau\/5)\\Zbf)^2$.\n By extending $X$ to $\\Rbf^2$ and restricting to $((\\tau\/5)\\Zbf)^2$, we get a module $X' : ((\\tau\/5)\\Zbf)^2 \\to \\mathbf{vec}$ that, when restricted to the finite grid\n \\begin{align*}\n \\Tscr = \\{(&x-\\tau\/5,x,x+\\tau\/5,x+2\\tau\/5,x+3\\tau\/5,x+4\\tau\/5)\\}\\\\\n &\\times \\{(y-\\tau\/5,y,y+\\tau\/5,y+2\\tau\/5,y+3\\tau\/5,y+4\\tau\/5)\\},\n \\end{align*}\n looks as follows (index $(x,y)$ highlighted):\n \\[\\footnotesize\n \\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes,row sep=3em,column sep=3em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { 0 & \\kbb & \\kbb & \\kbb & \\kbb & \\kbb \\\\\n 0 & \\kbb & \\kbb & \\kbb & \\kbb & \\kbb \\\\\n 0 & \\kbb & \\kbb & \\kbb & \\kbb & \\kbb \\\\\n 0 & \\kbb & \\kbb & \\kbb & \\kbb & \\kbb \\\\\n 0 & |[fill=lightgray!25, rectangle, outer sep = 2pt, minimum size = 0]| \\kbb & \\kbb & \\kbb & \\kbb & \\kbb \\\\\n 0 & 0 & 0 & 0 & 0 & 0 \\\\};\n \\path[line width=0.5pt, -{>[width=6pt]}]\n (m-1-2) edge [-,double equal sign distance] (m-1-3)\n (m-1-3) 
edge [-,double equal sign distance] (m-1-4)\n (m-1-4) edge [-,double equal sign distance] (m-1-5)\n (m-1-5) edge [-,double equal sign distance] (m-1-6)\n (m-2-3) edge [-,double equal sign distance] (m-2-4)\n (m-2-4) edge [-,double equal sign distance] (m-2-5)\n (m-3-4) edge [-,double equal sign distance] (m-3-5)\n (m-5-6) edge [-,double equal sign distance] (m-4-6)\n (m-4-6) edge [-,double equal sign distance] (m-3-6)\n (m-3-6) edge [-,double equal sign distance] (m-2-6)\n (m-2-6) edge [-,double equal sign distance] (m-1-6)\n (m-4-5) edge [-,double equal sign distance] (m-3-5)\n (m-3-5) edge [-,double equal sign distance] (m-2-5)\n (m-3-4) edge [-,double equal sign distance] (m-2-4)\n (m-2-2) edge [-,double equal sign distance] (m-1-2)\n (m-2-2) edge [-,double equal sign distance] (m-2-3)\n (m-2-3) edge [-,double equal sign distance] (m-1-3)\n (m-2-4) edge [-,double equal sign distance] (m-1-4)\n (m-2-5) edge [-,double equal sign distance] (m-1-5)\n (m-5-5) edge [-,double equal sign distance] (m-5-6)\n (m-5-5) edge [-,double equal sign distance] (m-4-5)\n (m-4-5) edge [-,double equal sign distance] (m-4-6)\n (m-3-5) edge [-,double equal sign distance] (m-3-6)\n (m-2-5) edge [-,double equal sign distance] (m-2-6)\n (m-3-3) edge [-,double equal sign distance] (m-2-3)\n (m-3-3) edge [-,double equal sign distance] (m-3-4)\n (m-4-4) edge [-,double equal sign distance] (m-3-4)\n (m-4-4) edge [-,double equal sign distance] (m-4-5)\n (m-3-2) edge [-,double equal sign distance] (m-2-2)\n (m-4-2) edge [-,double equal sign distance] (m-3-2)\n (m-5-2) edge [-,double equal sign distance] (m-4-2)\n (m-4-3) edge [-,double equal sign distance] (m-3-3)\n (m-5-3) edge [-,double equal sign distance] (m-4-3)\n (m-5-4) edge [-,double equal sign distance] (m-4-4)\n (m-5-2) edge [-,double equal sign distance] (m-5-3)\n (m-5-3) edge [-,double equal sign distance] (m-5-4)\n (m-5-4) edge [-,double equal sign distance] (m-5-5)\n (m-4-2) edge [-,double equal sign distance] (m-4-3)\n (m-4-3) 
edge [-,double equal sign distance] (m-4-4)\n (m-3-2) edge [-,double equal sign distance] (m-3-3)\n\n (m-1-1) edge (m-1-2)\n (m-2-1) edge (m-2-2)\n (m-3-1) edge (m-3-2)\n (m-4-1) edge (m-4-2)\n (m-5-1) edge (m-5-2)\n (m-6-1) edge (m-6-2)\n\n (m-6-2) edge (m-6-3)\n (m-6-3) edge (m-6-4)\n (m-6-4) edge (m-6-5)\n (m-6-5) edge (m-6-6)\n\n (m-2-1) edge (m-1-1)\n (m-3-1) edge (m-2-1)\n (m-4-1) edge (m-3-1)\n (m-5-1) edge (m-4-1)\n (m-6-1) edge (m-5-1)\n\n (m-6-2) edge (m-5-2)\n (m-6-3) edge (m-5-3)\n (m-6-4) edge (m-5-4)\n (m-6-5) edge (m-5-5)\n (m-6-6) edge (m-5-6)\n ;\n \\end{tikzpicture}\n \\]\n Let $A' : ((\\tau\/5)\\Zbf)^2 \\to \\mathbf{vec}$ coincide with $X'$ at all places, except at the subgrid $\\Tscr$, where we use a copy of the module $\\Gsf$ as follows (index $(x,y)$ highlighted):\n \\[ \\footnotesize\n \\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes,row sep=3em,column sep=3em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]\n { 0 & \\kbb & \\kbb & \\kbb & \\kbb & \\kbb \\\\\n 0 & \\kbb & \\kbb^2 & \\kbb^2 & \\kbb^2 & \\kbb \\\\\n 0 & 0 & \\kbb & \\kbb^2 & \\kbb^2 & \\kbb \\\\\n 0 & 0 & 0 & \\kbb & \\kbb^2 & \\kbb \\\\\n 0 & |[fill=lightgray!25, rectangle, outer sep = 2pt, minimum size = 0]|0 & 0 & 0 & \\kbb & \\kbb \\\\\n 0 & 0 & 0 & 0 & 0 & 0 \\\\};\n \\path[line width=0.5pt, -{>[width=6pt]}]\n (m-1-2) edge [-,double equal sign distance] (m-1-3)\n (m-1-3) edge [-,double equal sign distance] (m-1-4)\n (m-1-4) edge [-,double equal sign distance] (m-1-5)\n (m-1-5) edge [-,double equal sign distance] (m-1-6)\n\n (m-2-3) edge [-,double equal sign distance] (m-2-4)\n (m-2-4) edge [-,double equal sign distance] (m-2-5)\n\n (m-3-4) edge [-,double equal sign distance] (m-3-5)\n\n (m-5-6) edge [-,double equal sign distance] (m-4-6)\n (m-4-6) edge [-,double equal sign distance] (m-3-6)\n (m-3-6) edge [-,double equal sign distance] (m-2-6)\n (m-2-6) edge [-,double equal sign distance] (m-1-6)\n\n (m-4-5) edge [-,double equal sign distance] 
(m-3-5)\n (m-3-5) edge [-,double equal sign distance] (m-2-5)\n\n (m-3-4) edge [-,double equal sign distance] (m-2-4)\n\n (m-2-2) edge node [left] {\\footnotesize $0$} (m-1-2)\n (m-2-2) edge node [above] {\\footnotesize $\\begin{pmatrix} 1\\\\ 1\\end{pmatrix}$} (m-2-3)\n (m-2-3) edge node [right] {\\footnotesize $(1,-1)$} (m-1-3)\n (m-2-4) edge node [right] {\\footnotesize $(1,-1)$} (m-1-4)\n (m-2-5) edge node [right] {\\footnotesize $(1,-1)$} (m-1-5)\n\n (m-5-5) edge node [above] {\\footnotesize $0$} (m-5-6)\n (m-5-5) edge node [left] {\\footnotesize $\\begin{pmatrix} 1\\\\ 1\\end{pmatrix}$} (m-4-5)\n (m-4-5) edge node [above] {\\footnotesize $(1,-1)$} (m-4-6)\n (m-3-5) edge node [above] {\\footnotesize $(1,-1)$} (m-3-6)\n (m-2-5) edge node [above] {\\footnotesize $(1,-1)$} (m-2-6)\n\n (m-3-3) edge node [left] {\\footnotesize $\\begin{pmatrix} 1\\\\ 0\\end{pmatrix}$} (m-2-3)\n (m-3-3) edge node [above] {\\footnotesize $\\begin{pmatrix} 1\\\\ 0\\end{pmatrix}$} (m-3-4)\n\n (m-4-4) edge node [left] {\\footnotesize $\\begin{pmatrix} 0\\\\ 1\\end{pmatrix}$} (m-3-4)\n (m-4-4) edge node [above] {\\footnotesize $\\begin{pmatrix} 0\\\\ 1\\end{pmatrix}$} (m-4-5)\n\n (m-3-2) edge (m-2-2)\n (m-4-2) edge (m-3-2)\n (m-5-2) edge (m-4-2)\n\n (m-4-3) edge (m-3-3)\n (m-5-3) edge (m-4-3)\n\n (m-5-4) edge (m-4-4)\n\n (m-5-2) edge (m-5-3)\n (m-5-3) edge (m-5-4)\n (m-5-4) edge (m-5-5)\n\n (m-4-2) edge (m-4-3)\n (m-4-3) edge (m-4-4)\n \n (m-3-2) edge (m-3-3)\n\n (m-1-1) edge (m-1-2)\n (m-2-1) edge (m-2-2)\n (m-3-1) edge (m-3-2)\n (m-4-1) edge (m-4-2)\n (m-5-1) edge (m-5-2)\n (m-6-1) edge (m-6-2)\n\n (m-6-2) edge (m-6-3)\n (m-6-3) edge (m-6-4)\n (m-6-4) edge (m-6-5)\n (m-6-5) edge (m-6-6)\n\n (m-2-1) edge (m-1-1)\n (m-3-1) edge (m-2-1)\n (m-4-1) edge (m-3-1)\n (m-5-1) edge (m-4-1)\n (m-6-1) edge (m-5-1)\n\n (m-6-2) edge (m-5-2)\n (m-6-3) edge (m-5-3)\n (m-6-4) edge (m-5-4)\n (m-6-5) edge (m-5-5)\n (m-6-6) edge (m-5-6)\n ;\n ;\n \\end{tikzpicture}\n \\]\n We have the following 
isomorphisms of endomorphism rings:\n \\[\n \\End\\left(X'\\right) \\cong \\End\\left(X'|_{((\\tau\/5)(5\\Zbf + 4))^2}\\right) = \\End\\left(A'|_{((\\tau\/5)(5\\Zbf + 4))^2}\\right) \\cong \\End\\left(A'|_{((\\tau\/5)\\Zbf)^2}\\right).\n \\]\n The first isomorphism is due to the fact that $X'$ is isomorphic to the restriction of an extension of an $(\\epsilon\\Zbf)^2$-persistence module, and $\\epsilon$ is an integer multiple of $\\tau$, by construction.\n The equality is because \n $X'|_{((\\tau\/5)(5\\Zbf + 4))^2} = A'|_{((\\tau\/5)(5\\Zbf + 4))^2}$, by construction.\n For the last isomorphism, note that the value of $A'$ at any element of the grid $((\\tau\/5)\\Zbf)^2$ not belonging to $\\Tscr$ is isomorphic, through a structure morphism of $A'$, to the value of $A'$ at an element of the grid $((\\tau\/5)(5\\Zbf+4))^2$, by construction.\n Moreover, the action of an endomorphism of $A'$ on $A'|_\\Tscr$ is determined by the action of the endomorphism on $A'(x+4\\tau\/5,y+4\\tau\/5)$, by \\cref{lemma:G-is-indecomposable}; and $(x+4\\tau\/5,y+4\\tau\/5) \\in ((\\tau\/5)(5\\Zbf+4))^2$.\n It follows that the action of an endomorphism of $A'$ is completely determined by the action of the endomorphism on the restriction $A'|_{((\\tau\/5)(5\\Zbf+4))^2}$, proving the last isomorphism.\n\n Thus, $A'$ is indecomposable by \\cref{lemma:indecomposable-local-ring} and the fact that $A$, and hence $X'$, is indecomposable.\n Define $A_1$ to be the extension of $A'$ to the full poset $\\Rbf^2$.\n Note that $A_1$ is indecomposable, finitely presentable, and such that $\\mathsf{im}\\left(\\eta^{A}_\\delta\\right) = \\mathsf{im}\\big(\\eta^{A_1}_\\delta\\big)$.\n Moreover, $A_1$ admits a horizontal antenna attachment at $(x,y+3\\tau\/5)$, by construction.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lemma:sheaf-condition-persistence-module}\n Let $\\epsilon > 0$ and let $a \\in \\epsilon \\Zbf$.\n Let $s(a) = \\{(x,y) \\in (\\epsilon \\Zbf)^2 : x \\leq a\\}$, $g(a) = \\{(x,y) \\in 
(\\epsilon \\Zbf)^2 : x \\geq a\\}$, and $e(a) = \\{(x,y) \\in (\\epsilon \\Zbf)^2 : x = a\\}$, which are all subposets of $(\\epsilon \\Zbf)^2$.\n Assume given persistence modules $X : s(a) \\to \\mathbf{vec}$ and $Y : g(a) \\to \\mathbf{vec}$ such that $X|_{e(a)} = Y|_{e(a)}$.\n Then, there exists a unique persistence module $C : (\\epsilon\\Zbf)^2 \\to \\mathbf{vec}$ such that $C|_{s(a)} = X$ and $C|_{g(a)} = Y$.\n If, moreover, $X$ is indecomposable and, for every indecomposable summand $I$ of $Y$, we have that $I|_{e(a)} \\neq 0$, then $C$ is indecomposable.\n\\end{lemma}\n\\begin{proof}\n The statement about the existence and uniqueness of $C$ is clear.\n Now, assume that $C \\cong S \\oplus T$.\n On one hand, we have $X \\cong S|_{s(a)} \\oplus T|_{s(a)}$.\n Since $X$ is indecomposable, without loss of generality, it must be the case that $T|_{s(a)} = 0$.\n In particular, $T|_{e(a)} = 0$.\n On the other hand, $Y \\cong S|_{g(a)} \\oplus T|_{g(a)}$, and, since all the indecomposable summands $I$ of $Y$ satisfy $I|_{e(a)} \\neq 0$, we must have $T|_{g(a)} = 0$.\n Thus, $T = 0$ and $C$ does not admit any non-trivial decompositions.\n\\end{proof}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=1\\linewidth]{pictures\/step-3.eps}\n \\caption{A schematic summary of the main constructions in the proof of \\cref{lemma:actual-tacking}.}\n \\label{figure:diagram-step-3}\n\\end{figure}\n\n\n\\begin{lemma}\n \\label{lemma:actual-tacking}\n Let $A,B : \\Rbf^2 \\to \\mathbf{vec}$ be finitely presentable, indecomposable, and isomorphic to extensions of $\\Pscr$-persistence modules for some regular grid $\\Pscr \\subseteq \\Rbf^2$.\n Assume that $A$ and $B$ both admit horizontal antenna attachments at points in the grid $\\Pscr$.\n For every $\\delta > 0$, there exists a regular grid $\\Qscr \\supseteq \\Pscr$ and $M : \\Rbf^2 \\to \\mathbf{vec}$ with $M$ indecomposable, finitely presentable, isomorphic to the extension of a $\\Qscr$-persistence module, and 
such that\n \\[\n \\mathsf{im}\\left(\\eta^M_\\delta\\right) \\cong \\mathsf{im}\\left(\\eta^A_\\delta\\right) \\oplus \\mathsf{im}\\left(\\eta^B_\\delta\\right).\n \\]\n\\end{lemma}\n\\begin{proof}\n Without loss of generality, we may assume that $A$ and $B$ are extensions of modules $A'$ and $B'$ defined over a regular grid $\\Qscr = (\\epsilon\\Zbf)^2$ with $\\epsilon < \\delta\/5$.\n Let $(x,y) \\in \\Qscr$ and $(w,z) \\in \\Qscr$ be horizontal antenna attachments for $A$ and $B$, respectively.\n Without loss of generality, we may assume that $y \\geq z$.\n Choose $a,b \\in \\epsilon\\Zbf$ with $a < \\min(x,w)$ and $b > y$.\n Let the subposets $s(a)$, $g(a)$, and $e(a)$ of $(\\epsilon\\Zbf)^2$ be as in \\cref{lemma:sheaf-condition-persistence-module}.\n\n We now use \\cref{lemma:sheaf-condition-persistence-module} to construct a persistence module $M$ as in the statement.\n In order to do this, we construct persistence modules $X$ and $Y$ over the grids $s(a)$ and $g(a)$, respectively.\n \\cref{figure:diagram-step-3} is a schematic summary of these constructions.\n\n Define a persistence module $Y_1 : g(a) \\to \\mathbf{vec}$ as follows:\n \\[\n Y_1(c,d) =\n \\begin{cases}\n \\kbb & \\text{if $d = y$ and $a \\leq c \\leq x$} \\\\\n A'(c,d) & \\text{else}\n \\end{cases}\n \\]\n The structure morphisms are taken to be those of $A'$ wherever that makes sense.\n The structure morphisms $\\kbb = Y_1(c,y) \\to Y_1(c+\\epsilon,y) = \\kbb$ for $a \\leq c \\leq x-\\epsilon$ are taken to be the identity, and the structure morphisms $Y_1(c,y) \\to Y_1(c,y+\\epsilon)$ and $Y_1(c,y-\\epsilon) \\to Y_1(c,y)$ are taken to be zero.\n It is straightforward to check that $Y_1$ is a well-defined persistence module and that $\\End(Y_1) \\cong \\End(A')$.\n Thus, $Y_1$ is indecomposable.\n Define $Y_2$ in an entirely analogous way using $B'$ and its horizontal antenna attachment.\n Let $Y = Y_1 \\oplus Y_2$.\n\n Consider the following subgrid of $(\\epsilon \\Zbf)^2$:\n \\[\n \\Tscr = 
\\{a-5\\epsilon,a-4\\epsilon,a-3\\epsilon,a-2\\epsilon,a-\\epsilon\\} \\times \\{b, b+\\epsilon, b+2\\epsilon, b+3\\epsilon, b+4\\epsilon\\}.\n \\]\n Let us assume that $y > z$; the case $y=z$ is similar.\n Define a persistence module $X : s(a) \\to \\mathbf{vec}$ as follows:\n \\[\n X(c,d) =\n \\begin{cases}\n \\Gsf\\left(\\frac{(c,d)-(a-5\\epsilon,b)}{\\epsilon}\\right) & \\text{if $(c,d) \\in \\Tscr$} \\\\\n \\kbb & \\text{if $c=a-\\epsilon$ and $y \\leq d \\leq b$} \\\\\n \\kbb & \\text{if $c=a-2\\epsilon$ and $z \\leq d \\leq b$} \\\\\n \\kbb & \\text{if $(c,d)=(a-\\epsilon,z)$} \\\\\n 0 & \\text{else}\n \\end{cases}\n \\]\n For the structure morphisms corresponding to pairs of elements of $\\Tscr$ we use the structure morphisms of $\\Gsf$; for the remaining vertical structure morphisms we use the identity of $\\kbb$ when possible and the zero morphism otherwise; having done this, for the remaining horizontal morphisms we use the identity of $\\kbb$ when possible (taking commutativity into account) and the zero morphism otherwise (in particular, all horizontal maps with second coordinate different from $y$ and $z$ are zero).\n \n It is again straightforward to check that $X$ is a well-defined persistence module and that $\\End(X) \\cong \\End(\\Gsf)$.\n Thus, $X$ is indecomposable.\n\n Next, define $C : (\\epsilon \\Zbf)^2 \\to \\mathbf{vec}$ using \\cref{lemma:sheaf-condition-persistence-module}; it follows that $C$ is indecomposable.\n Finally, extend $C$ to the poset $\\Rbf^2$ to get a module $M$, which satisfies the conditions in the statement by construction.\n\\end{proof}\n\n\n\n\n\\begin{proof}[Proof of \\cref{lemma:main-lemma-attaching}]\n Let $A$ and $B$ be as in the statement.\n Using \\cref{lemma:add-1-dim-corner}, we may assume that $A$ and $B$ admit a thin corner, and using \\cref{lemma:add-antenna-attachment}, we may assume that $A$ and $B$ admit horizontal antenna attachments.\n \\cref{lemma:actual-tacking} now finishes the 
proof.\n\\end{proof}\n\n\\ifarxivversion\n\\bibliographystyle{alpha}\n\\fi\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{introduction}\\label{sec_1}\nUltrafast pump-probe spectroscopy is a good tool to investigate the nonequilibrium properties of a given system since a pump pulse triggers ultrafast processes and a subsequent probe pulse monitors the pump-induced dynamical processes~\\cite{Mukamel1995, Diels1996, Krausz2009, Giannetti2016}.\nIn particular, by using femtosecond pulses, the nonequilibrium dynamics of electrons can be detected, since the timescale of the motion of electrons is of the order of a femtosecond.\nHowever, increasing the resolution of optical measurements in both the time and energy domains is difficult and limited by the uncertainty principle.\n\nRecently, ultrafast spectroscopic techniques have been advanced by using a transform-limited pulse, i.e., a pulse that has the minimum possible duration for a given spectral bandwidth, and have opened a new door to make both temporal and spectral resolutions as high as possible~\\cite{Diels1996}.\nThese techniques can disclose new ultrafast nonequilibrium phenomena.\nIn fact, by applying these techniques, interference in the energy domain has been observed in atomic systems and nanometric tips~\\cite{Wollenhaupt2002, Lindner2005, Kiffner2006, Milosevic2006, Kruger2018}.\nThis interference has been applied to control atomic storage media for recording the information of optical pulses~\\cite{Leung1982, Carlson1983, Hemmer1994, Fleischauer2002, Ohmori2009}.\nHowever, as far as we know, there has been no report, either experimental or theoretical, of such transient interference in pump-probe spectroscopy of band and Mott insulators.\n\nIn this paper, we investigate ultrafast pump-probe spectroscopy of band and Mott insulators and propose transient interference between temporally well-separated pulses in electron systems, as in the case of atomic systems.\nWe formulate such transient 
interference in pump-probe spectroscopy of a two-band model.\nWe find that the existence of a continuum structure in the excitation spectrum is important for generating the transient interference since the continuum structure acts as a medium for storing the spectral information of the pump pulse and for creating interference between temporally well-separated pump and probe photons.\nThe information persists due to a memory effect, i.e., a relaxation process of electron systems.\nAs a result, the time-domain pump-probe spectrum depends on both probe energy $\\omega$ and the central frequency of the pump and probe pulses $\\Omega$ and thus oscillates with a frequency\n\\begin{align}\n\\label{probe_dep}\n\\omega_0 = \\omega - \\Omega.\n\\end{align}\nIn order to demonstrate the transient interference in the presence of electron correlation, we perform numerical calculations of the pump-probe spectrum in a one-dimensional (1D) half-filled Hubbard model.\nMoreover, we find that bosons coupled to electrons in the two-band model make an additional contribution to the interference.\nBased on the result, we speculate that the transient interference will be observed in Mott insulators strongly coupled to magnons.\nFor the observation of the proposed transient interference, high resolution in both the time and energy domains is required in ultrafast pump-probe spectroscopy.\nRecently, oscillations of electronic states above the charge-transfer gap in a two-dimensional (2D) Mott insulator Nd$_2$CuO$_4$ were observed in the reflectivity changes detected by pump-probe measurements with ultrashort pulses~\\cite{Miyamoto2018}. 
\nThe time and energy resolutions of the measurement are as high as $10 \\text{fs}$ and $0.01 \\text{eV}$, respectively.\nBy extracting the oscillatory components from the pump-probe spectrum, the oscillation component with the frequency indicated by Eq.~(\\ref{probe_dep}) was found~\\cite{Miyamoto2018}.\nWe propose that the transient interference will be one of the possible origins of the observed oscillations.\n\nThis paper is organized as follows. We introduce a two-band model, which is a minimal model to describe the interference effect by two photon pulses through an electron system, and show the pump-probe absorption spectrum in Sec.~\\ref{sec_2}. In Sec.~\\ref{sec_3}, we calculate the time-dependent optical conductivity at half filling just after pumping. The effect of bosons coupled to electrons on the pump-probe spectrum is discussed in Sec.~\\ref{sec_4}. Finally, a summary is given in Sec.~\\ref{sec_5}.\n\n\\section{Two-band model}\\label{sec_2}\nWe first introduce a two-band model, which is the minimal model to describe the interference effect by two photon pulses through an electron system, and analytically calculate the pump-probe absorption spectrum.\nWith the assumption of dipole transitions, the Hamiltonian of the two-band model under the time-dependent electric field reads\n\\begin{align*}\n\\mathcal{H} =& \\sum_{\\bm{k}} \\varepsilon_{\\bm{k}} c_{\\text{c}\\bm{k}}^\\dag c_{\\text{c}\\bm{k}} + \\sum_{\\bm{k}} \\varrho_{\\bm{k}} c_{\\text{v}\\bm{k}}^\\dag c_{\\text{v}\\bm{k}} \\nonumber\\\\\n&-\\sum_{\\bm{k}} \\left( d_{\\text{cv}} \\mathcal{E}(t) c_{\\text{c}\\bm{k}}^\\dag c_{\\text{v}\\bm{k}} + d_{\\text{cv}}^* \\mathcal{E}(t) c_{\\text{v}\\bm{k}}^\\dag c_{\\text{c}\\bm{k}} \\right), \\nonumber\n\\end{align*}\nwhere $c_{\\text{c(v)}\\bm{k}}$ is an annihilation operator for fermions in the conduction (valence) band with momentum $\\bm{k}$. 
The energies of the conduction and valence bands are $\\varepsilon_{\\bm{k}} = \\varepsilon + \\frac{\\hbar ^2 \\bm{k}^2}{2m_{\\text{c}}}$ and $ \\varrho_{\\bm{k}} = \\varrho + \\frac{\\hbar ^2 \\bm{k}^2}{2m_{\\text{v}}}$,\nwhere $\\varepsilon$ and $\\varrho$ are the minimum and maximum of the conduction and valence bands, respectively, and $m_c$ and $m_v$ are the effective masses of electrons in the conduction and valence bands, respectively.\nWe introduce the interband dipole matrix element $d_{\\text{cv}}$ and external electric field $\\mathcal{E}(t)$. Hereafter, we set $\\hbar=1$.\n\nBy taking the long-wave-length limit of the electric field, the optical Bloch equation is written as~\\cite{HaugKoch}\n\\begin{align} \\label{Eq_bloch_eq_1}\n&\\left( \\frac{\\partial}{\\partial t} + i\\{ \\varepsilon _{\\bm{k}}-\\varrho_{\\bm{k}}-i \\gamma \\} \\right) p_{\\text{vc}}^0(\\bm{k},t) = d_{\\text{cv}} \\mathcal{E}(t) \\{ 1-2f_c(\\bm{k}) \\}\n\\end{align}\nand\n\\begin{align} \\label{Eq_bloch_eq_2}\n\\left( \\frac{\\partial}{\\partial t} + \\Gamma \\right) f_{\\text{c}}(\\bm{k},t) = -2\\text{Im}\\left[ d_{\\text{cv}} \\mathcal{E}(t) p_{\\text{vc}}^{0*}(\\bm{k},t) \\right],\n\\end{align}\nwhere $f_{\\text{c}}(\\bm{k}) = \\langle c_{\\text{c}\\bm{k}}^\\dag c_{\\text{c}\\bm{k}} \\rangle$ and $p_{\\text{vc}}^0(\\bm{k}) = \\langle c_{\\text{v}\\bm{k}}^\\dag c_{\\text{c}\\bm{k}} \\rangle$, with $\\langle \\cdots \\rangle$ representing the expectation value.\nWe introduce a phenomenological damping rate $\\Gamma$ for $f_{\\text{c}}$ and dephasing rate $\\gamma$ for $p_{\\text{vc}}^0$.\nWe consider an electric field $\\mathcal{E}(t) = \\frac{1}{2}\\left(\\mathcal{\\tilde{E}}(t)e^{-i\\Omega t} + \\mathcal{\\tilde{E}}^*(t)e^{i\\Omega t}\\right)$, where $\\mathcal{\\tilde{E}}(t)=2\\left\\{ \\mathcal{\\tilde{E}}_{\\text{p}}(t)e^{i\\bm{k}_p\\cdot \\bm{r}} + \\mathcal{\\tilde{E}}_{\\text{t}}(t)e^{i\\bm{k}_t\\cdot \\bm{r}} \\right\\}$, and the electric field and wave vector of the 
pump (probe) pulse are $\\mathcal{\\tilde{E}}_{\\text{p}}$ and $\\bm{k}_{\\text{p}}$ ($\\mathcal{\\tilde{E}}_{\\text{t}}$ and $\\bm{k}_{\\text{t}}$), respectively.\nIntroducing an expansion parameter $\\lambda$ through $\\mathcal{E}(t) \\rightarrow \\lambda \\mathcal{E}(t)$, we obtain $p_{\\text{vc}}^0 = \\lambda p_{\\text{vc}}^{0(1)} + \\lambda^2 p_{\\text{vc}}^{0(2)} + \\lambda^3 p_{\\text{vc}}^{0(3)} + \\cdots,\\; f_{\\text{c}} = \\lambda f_{\\text{c}}^{(1)} + \\lambda^2 f_{\\text{c}}^{(2)} + \\lambda^3 f_{\\text{c}}^{(3)} + \\cdots$.\nThe shape of the probe pulse is represented by the delta function $\\mathcal{\\tilde{E}}_{\\text{t}}(t) = \\mathcal{\\tilde{E}}_{\\text{t}} \\delta (t -\\tau)$ $(\\tau >0)$, where $\\tau$ is the delay time between the pump and probe pulses.\nThe pump-induced absorption change is given by $\\alpha = -\\textrm{Im}\\left[ d_{cv}^*\\chi(\\bm{k},\\omega) \\right].$\nTaking $\\mathcal{\\tilde{E}}_{\\text{p}}(t)=\\mathcal{\\tilde{E}}_{\\text{p}}e^{-\\sigma|t|}$ and with the rotating-wave approximation, the probe susceptibility is given by (see Appendix A)\n\\begin{align} \\label{chi0}\n&\\chi (\\bm{k},\\omega)\\simeq \\frac{p_{\\text{vc}}^{0(3)}(\\bm{k},\\omega)}{\\mathcal{E}_{\\text{t}}(\\omega)}\\nonumber \\\\\n&=\\frac{8d_{\\text{cv}}\\left|d_{\\text{cv}}\\right| {}^2 \\left| \\mathcal{\\tilde{E}}_p\\right| {}^2 e^{-\\left(\\sigma - \\gamma \\right)\\tau} e^{i \\tau \\left( - \\Omega +\\varepsilon _k-\\varrho _k\\right)} \\Gamma \\sigma }{\\left(i \\gamma +\\omega -\\varepsilon _k+\\varrho _k\\right)(i\\Gamma +i\\sigma + \\omega -\\Omega ) v_{\\bm{k}}^+ u_{\\bm{k}}^+ u_{\\bm{k}}^-}+\\cdots,\n\\end{align}\nwhere $u_{\\bm{k}}^{\\pm}=i\\gamma \\pm i\\sigma + \\Omega - \\varepsilon _{\\bm{k}} + \\varrho _{\\bm{k}}$ and $v_{\\bm{k}}^+ = i\\gamma +i\\Gamma -i\\sigma + \\Omega - \\varepsilon _{\\bm{k}}+ \\varrho _{\\bm{k}}$. 
\nIn the limit $\\gamma\\rightarrow 0$, the pole of the energy denominator $\\omega = \\varepsilon_{\\bm{k}} - \\varrho _{\\bm{k}}$ in the third term of $\\chi (\\bm{k},\\omega)$ gives rise to an oscillatory behavior of $e^{i(\\omega -\\Omega)\\tau}$ with decay $e^{-(\\sigma-\\gamma)\\tau}$.\nThis is the oscillation component indicated by Eq.~(\\ref{probe_dep}).\nSince the timescale where the oscillation persists is on the order of $\\gamma^{-1}$, real-time ultrafast dynamics should be observed with high accuracy~\\cite{Rhodes2013}.\n\nIn order to maintain the oscillation in the two-band model, we have to select a proper set of parameters that leads to the coherence and memory effect in the energy domain.\nFirst of all, we examine the coherence in the energy domain.\nWhen $\\sigma \\gg 1\/\\tau$, i.e., the pulse duration is much shorter than the time delay $\\tau$, we obtain $\\Delta t \\sim 0$, where $\\Delta t$ is the uncertainty in the time domain.\nSimultaneously, the energy uncertainty $\\Delta E \\sim \\infty$, leading to low energy resolution.\nAs a result, the interference in the energy domain is invisible.\nThis corresponds to the fact that the interference pattern vanishes in Young's double-slit experiment if the path of light is measured~\\cite{Englert1996, Durr1998}.\nIn fact, if the electric field of the pump pulse is represented by the $\\delta$ function, $p_{vc}^{0(3)}(\\bm{k},\\omega)$ completely cancels out $\\mathcal{E}_t(\\omega)$, which means that $\\chi(\\bm{k},\\omega)$ does not have the interference term $e^{i(\\omega -\\Omega)\\tau}$ (see Appendix A).\nIn contrast, when $\\sigma \\lesssim 1\/\\tau$, the coherence in the energy domain is obtained, which leads to the interference in energy space.\n\nSecond, we examine the memory effect.\nWhen $\\sigma \\ll \\gamma$, i.e., the pulse duration is longer than the dephasing time, $\\Delta t \\sim \\infty$ and $\\Delta E\\sim 0$ are simultaneously obtained.\nThis leads to the relaxation that holds 
true as long as electrons have well-defined energies, and their energy changes are slow with the timescale of $1\/\\Delta \\epsilon$, where $\\Delta \\epsilon$ is the characteristic energy exchange in a scattering event~\\cite{Aihara1982, Kuznetsov1991, Kuznetsov1991_2, Rossi2002, SchaeferWegener}.\nWhen $\\sigma \\gtrsim \\gamma$, the relaxation involving electrons with ill-defined energies starts to contribute to the memory effect.\nTherefore, if $\\sigma$ and $1\/\\tau$ are carefully controlled to realize $1\/\\tau \\gtrsim \\sigma \\gtrsim \\gamma$, both coherence in the energy domain and the memory effect are relevant, and the interference in the energy domain is maintained for the time $\\gamma^{-1}$.\nUsually, $\\gamma$ of a given system cannot be changed. However, if we make use of the quantum Zeno effect~\\cite{Misra1977, Itano1990, Kaulakys1997, Streed2006}, we might be able to control $\\gamma$, which can help us to observe our finding.\n\n\\section{Hubbard model}\\label{sec_3}\nPump-probe spectroscopy has been performed in strongly correlated systems to investigate exotic phenomena~\\cite{Okamoto2010, Okamoto2011, Filippis2012, Matsueda2012, Zala2013, Golez2014, Eckstein2014, Novelli2014, Prelovsek2015, Giannetti2016, Bittner2017, Bittner2018, Miyamoto2018}.\nEven in correlated electron systems, there is a continuum structure in the excitation spectrum. 
This indicates that interference effects similar to those in the two-band model may be realized, which we demonstrate using a 1D half-filled Hubbard model, given by\n\\begin{align}\nH=-t_\\mathrm{h}\\sum_{i,\\sigma} \\left( c^\\dagger_{i,\\sigma} c_{i+1,\\sigma} + \\mathrm{H.c.}\\right) + U\\sum_i n_{i,\\uparrow}n_{i,\\downarrow},\n\\label{H}\n\\end{align}\nwhere $c^\\dagger_{i\\sigma}$ is the creation operator of an electron with spin $\\sigma$ at site $i$, $n_{i,\\sigma}=c^\\dagger_{i,\\sigma}c_{i,\\sigma}$, $n_i=\\sum_\\sigma n_{i,\\sigma}$, and $t_\\mathrm{h}$ and $U$ are the nearest-neighbor hopping and on-site Coulomb interaction, respectively. Taking $t_\\mathrm{h}$ to be the unit of energy ($t_\\mathrm{h}=1$), we use $U=10$.\n\nWe investigate the probe-energy dependence of the optical conductivity of a Hubbard open chain with $L=10$, where $L$ is the number of sites.\nWe assume that both pulses have a vector potential of the same shape, given by $A(t)=A_0 e^{-(t-t_0)^2\/(2t_\\mathrm{d}^2)} \\cos[\\Omega(t-t_0)]$.\nWe set $A_0=0.1$, $t_0=3.0$, $t_\\mathrm{d}=0.5$, and $\\Omega=E_g =7.1$ for the pump pulse and $A_0=0.001$, $t_0=\\tau+3.0$, $t_\\mathrm{d}=0.02$, and $\\Omega=E_g =7.1$ for the probe pulse unless otherwise specified, where $E_g$ is the energy of the Mott gap.\nAn external spatially homogeneous electric field applied along the chain in the Hamiltonian can be incorporated via the Peierls substitution in the hopping terms as $c_{i,\\sigma}^\\dag c_{i+1,\\sigma} \\rightarrow e^{iA(t)}c_{i,\\sigma}^\\dag c_{i+1,\\sigma}$.\nUsing the method discussed in Refs.~\\cite{Lu2015, Shao2016}, we obtain the optical conductivity in the nonequilibrium system, $\\sigma(\\omega,\\tau) = \\frac{j_\\text{probe}(\\omega,\\tau) }{i(\\omega +i\\eta)LA_\\text{probe}(\\omega)}$, where $j_\\text{probe}(\\omega,\\tau)$ is the Fourier transform of the current induced by the probe pulse and $A_{\\text{probe}}(\\omega)$ is the Fourier transform of the 
vector potential of the probe pulse (see Appendix B for details).\n\nTo trace the temporal evolution of the system, we employ the time-dependent Lanczos method to evaluate $|\\psi (t)\\rangle$. \nHere $|\\psi(t+\\delta{t})\\rangle\\simeq\\sum_{l=1}^{M}{e^{-i\\epsilon_l\\delta{t}}}|\\phi_l\\rangle\\langle\\phi_l|\\psi(t)\\rangle$, where $\\epsilon_l$ and $|\\phi_l\\rangle$ are eigenvalues and eigenvectors of the tridiagonal matrix generated in the Lanczos iteration, respectively, $M$ is the dimension of the Lanczos basis, and $\\delta t$ is the minimum time step. We set $M=50$ and $\\delta t=0.02$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[clip, width=20pc]{Fig_OC_L10p_wpump10_wprobe_t_high.eps}\n \\caption{$\\text{Re} \\sigma (\\omega,\\tau)$ in the 1D half-filled Hubbard chain with $L=10$ and $U=10$, before pumping ($\\tau<0$) and after pumping ($\\tau=10$,\\;20,\\;30, and 40). Since the system is weakly excited, the dashed line for $\\tau<0$ is almost overlapped by the solid lines above $\\omega =7$.}\n \\label{band}\n\\end{figure}\n\nFigure~\\ref{band} shows the real part of the time-dependent optical conductivity $\\text{Re} \\sigma(\\omega,\\tau)$ of the Hubbard model.\nPhotoinduced decreases in the spectral weights at absorption peaks above the Mott gap are small since the system is weakly excited. 
\nThe pump photon excites carriers into an optically allowed odd-parity state.\nThe probe pulse couples in part to the odd-parity state, resulting in an excitation from the optically allowed state to an optically forbidden even-parity state.\nIn 1D Mott insulators with open boundary conditions, the optically forbidden state is located slightly above the optically allowed state~\\cite{Mizuno2000}.\nLow-energy in-gap excitation comes from the excitation from the optically allowed to forbidden state~\\cite{Lu2015}.\nInside the Mott gap, we find photoinduced low-energy spectral weights at $\\omega \\simeq1.2$, 2.2, and 3.3.\nThese energies correspond to the energy differences between the optically allowed populated state at $\\omega=7.1$ and the optically forbidden states.\n\nFigures~\\ref{probe_energy_dep_w0}(a)-\\ref{probe_energy_dep_w0}(e) show the $\\tau$ dependence of $\\text{Re} \\sigma(\\omega,\\tau)$ above the Mott gap with probe energy $\\omega=7.10$, $7.92$, $8.98$, $10.08$, and $11.18$, respectively, whose energies agree with the peak energies of the absorption spectrum in Fig.~\\ref{band}.\nWe find that the frequencies of the oscillations depend on $\\omega$.\nThe larger $\\omega$ is, the larger the frequency is, which is consistent with our argument in the two-band model discussed above.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[clip, width=20pc]{OC_power.eps}\n \\caption{$\\text{Re} \\sigma(\\omega,\\tau)$ in the 1D half-filled Hubbard chain with $L=10$ and $U=10$ for (a) $\\omega=7.1$, (b) $\\omega=7.92$, (c) $\\omega=8.98$, (d) $\\omega=10.08$, and (e) $\\omega=11.18$. 
The power spectra of $\\text{Re} \\sigma (\\omega,\\tau)$ for (f) $\\omega=7.1$, (g) $\\omega=7.92$, (h) $\\omega=8.98$, (i) $\\omega=10.08$, and (j) $\\omega=11.18$.}\n \\label{probe_energy_dep_w0}\n\\end{figure}\n\nIn order to further examine the probe-energy dependence, we show the power spectra of $\\text{Re} \\sigma(\\omega,\\tau)$ with respect to $\\tau$ in Figs.~\\ref{probe_energy_dep_w0}(f)-\\ref{probe_energy_dep_w0}(j) for $\\omega=7.1$, 7.92, 8.98, 10.08, and 11.18, respectively.\nWe discuss two possible contributions to the power spectra.\nThe first one is the contribution from the Rabi oscillation, whose frequencies are related to the low-energy in-gap states at $\\omega=1.2$, 2.2, and 3.3.\nIn fact, we find the Rabi-oscillation contributions to the spectral weights at $\\omega_0=1.2$, 2.2, and 3.3 in Figs.~\\ref{probe_energy_dep_w0}(f)-\\ref{probe_energy_dep_w0}(j).\nSince our system is of finite size, energy levels are discretized.\nTherefore, there are oscillations with resonant frequencies between the levels.\nIn the thermodynamic limit, the number of levels is infinite, and thus we expect that the contributions from a huge number of such resonances with various frequencies cancel out, giving rise to an infinite number of infinitesimal weights in the power spectra.\nThus, we consider that the Rabi-oscillation contribution to the power spectra is visible only in finite-size systems and negligible in the thermodynamic limit.\n\nThe second one is the contribution from the interference effect, which gives rise to the $\\omega$ dependence of the pump-probe spectra as discussed in the two-band model. 
\nThe oscillations with the frequencies $\\omega-\\Omega$ appear at $\\omega_0=7.92-7.10=0.82$, $8.98-7.10=1.88$, $10.08-7.10=2.98$, and $11.18-7.10=4.08$.\nThese energies correspond to the energy differences between the levels at $\\omega=\\Omega=7.1$ and the excited states above the Mott gap, all of which belong to the same electronic states with odd parity.\nWe consider that this origin makes the dominant contribution to the power spectra in the thermodynamic limit.\nIn order to induce the transient interference, we should use a pump pulse whose spectrum covers several energy levels. Then we can store the information of the pump pulse in electronic states with a wide range of energies above the Mott gap.\n\nIn light of these two possible contributions to the power spectra, in Fig.~\\ref{probe_energy_dep_w0}(g), for example, we find peak structures at $\\omega_0=0.82$, 1.2, and 2.2.\nThe peak structures at $\\omega_0=1.2$ and 2.2 come from the Rabi oscillation between the odd- and even-parity states.\nOn the other hand, the origin of the structure at $\\omega_0=0.82$ is the interference because $\\omega_0=0.82$ corresponds to one of the energy differences between the odd-odd states mentioned above.\nFigures~\\ref{probe_energy_dep_w0}(h)-\\ref{probe_energy_dep_w0}(j) can be understood in the same way (see Appendix B for details).\n\n\\section{electron-boson coupling in the two-band model} \\label{sec_4}\n\nFinally, we discuss the effect of bosons coupled to electrons on the probe-energy-dependent oscillation.\nThe nonequilibrium dynamics of electrons coupled to bosons and driven by a laser has been extensively studied.\nFurthermore, since non-Markovian relaxation is important in electron systems coupled to a bosonic environment, open quantum systems with non-Markovian properties have been studied for a long time~\\cite{Jaynes1963, Caldeira1985, Leggett1987, BreuerPetruccione, Zurek2003, Reiter2014, Seetharam2015, Nazir2016, deVega2017}.\nThe additional Hamiltonian due to 
boson degrees of freedom is \n\\begin{align}\\label{H_ph}\n\\mathcal{H}_{\\text{ph}} = \\sum_{\\bm{q}} \\omega_{\\bm{q}} a_{\\bm{q}}^\\dag a_{\\bm{q}} + \\sum_{\\bm{k},\\bm{q}} g_{\\bm{q}} (a_{-\\bm{q}}^\\dag + a_{\\bm{q}}) (c_{\\text{c}\\bm{k+q}}^\\dag c_{\\text{c}\\bm{k}} + c_{\\text{v}\\bm{k+q}}^\\dag c_{\\text{v}\\bm{k}}),\n\\end{align}\nwhere $a_{\\bm{q}}$ is an annihilation operator for bosons with momentum $\\bm{q}$, $\\omega_{\\bm{q}}$ is the boson frequency, and $g_{\\bm{q}}$ is an electron-boson coupling constant.\n\nWe examine the two-band model with electron-boson coupling under the application of the exponential pump pulse.\nThe total polarization is given by $p_{\\text{vc}}^{}(\\bm{k},t)= p_{\\text{vc}}^{0}(\\bm{k},t) + p_{\\text{vc}}^{b}(\\bm{k},t)$, where $p_{\\text{vc}}^{0}(\\bm{k},t)$ is the one-particle contribution, as discussed above, and $p_{\\text{vc}}^{b}(\\bm{k},t)$ is the contribution from the electron-boson coupling.\nSolving the kinetic equation with $\\mathcal{H}_{\\text{ph}}$ (see Appendix A), the probe susceptibility is given by\n\\begin{align} \\label{Eq_chi_b_main}\n&\\chi ^{b}(\\bm{k},\\omega)\\simeq \\frac{p_{\\text{vc}}^{b(3)}(\\bm{k},\\omega)}{\\mathcal{E}_{\\text{t}}(\\omega)}\\nonumber =\\sum _{\\bm{q}} g_{\\bm{q}}^2\\mathcal{N}_{\\bm{q}} \\cdot 4i\\sigma d_{\\text{cv}}\\left| d_{\\text{cv}}\\right|{}^2 \\left| \\mathcal{\\tilde{E}}_p\\right| {}^2 \\nonumber \\\\\n&\\times \\Biggl[\n\\frac{e^{-\\tau \\left(\\sigma-\\gamma\\right)} e^{i \\tau \\left( -\\Omega +\\varepsilon_{\\bm{k}} -\\varrho _{\\bm{k}}\\right)} \\left(-i\\gamma -2i \\Gamma - \\omega + \\varepsilon _{\\bm{k}}- \\varrho _{\\bm{k}}\\right) }{\\left(i \\gamma +\\omega -\\varepsilon _{\\bm{k}}+\\varrho _{\\bm{k}}\\right){}^2 \\left(i \\gamma +\\omega -\\varepsilon _{\\bm{k+q}}+\\varrho _{\\bm{k}}+\\omega _{\\bm{q}}\\right) v_{\\bm{k}}^+}\\nonumber \\\\\n&\\times \\frac{\\left(2 i \\gamma +2 \\omega -\\varepsilon _{\\bm{k}}-\\varepsilon _{\\bm{k+q}}+\\varrho _{\\bm{k}}+\\varrho 
_{\\bm{k-q}}+2 \\omega _{\\bm{q}}\\right)}{(i\\Gamma +i\\sigma + \\omega -\\Omega ) \\left(i \\gamma +\\omega -\\varepsilon _{\\bm{k}}+\\varrho _{\\bm{k-q}}+\\omega _{\\bm{q}}\\right) u_{\\bm{k}}^+u_{\\bm{k}}^-}\\Biggr]\\nonumber \\\\\n&+\\cdots,\n\\end{align}\nwhere $\\mathcal{N}_{\\bm{q}}=\\frac{1}{e^{\\omega_{\\bm{q}}\/k_BT}-1}$.\nIn the limit $\\gamma \\rightarrow 0$, the pole of the energy denominator $\\omega = \\varepsilon_{\\bm{k}} - \\varrho _{\\bm{k}}$ gives rise to an oscillatory behavior of $e^{i(\\omega -\\Omega)\\tau}$ with decay $e^{-(\\sigma -\\gamma)\\tau}$, which is the same behavior as the third term in Eq.~(\\ref{chi0}).\nTherefore, the information of pump and probe pulses is transmitted with the help of boson-assisted electron scattering, which gives one of the possible origins of the transient interference.\n\nMagnons are strongly coupled to photo-excited electrons in 2D Mott insulators, in contrast to 1D Mott insulators, where spin and charge degrees of freedom are separated.\nTherefore, the interference proposed in this work will be easily realized in 2D Mott insulators.\nWe thus speculate that the oscillations observed by pump-probe spectroscopy of the 2D Mott insulator Nd$_2$CuO$_4$~\\cite{Miyamoto2018} come from the interference effect. 
\nIn order to confirm this speculation, we need to investigate theoretically the pump-probe spectrum of the 2D half-filled Hubbard model, but this remains for future work.\n\n\\section{summary}\\label{sec_5}\nIn summary, we proposed the transient interference in the energy domain between temporally well-separated light pulses using electronic states of band and Mott insulators as a medium, which manifests itself as an oscillation of the pump-probe spectrum whose frequency is given by Eq.~(\\ref{probe_dep}).\nThis interference has become observable only thanks to recent developments in ultrafast spectroscopic techniques.\nThe transient interference reflects the universal property of interference between two photon pulses mediated by electron systems, which does not depend on the details of the electron systems. Therefore, the interference is also realized in the presence of electron correlation, since a continuum structure is present. We examined this by calculating the pump-probe spectrum in the 1D half-filled Hubbard model.\nTo verify our prediction, we suggested an experiment for Nd$_2$CuO$_4$ with varying pump-pulse duration and delay. \nSince our theory predicts the transient oscillation even in 1D Mott insulators, we proposed a pump-probe experiment in Sr$_2$CuO$_3$.\nFurthermore, we found that bosons coupled to electrons in the two-band model make an additional contribution to the transient interference.\nBased on these results, both magnons coupled to electrons and the continuum structure in the electronic excitation spectrum are possible origins of the oscillation observed in Nd$_2$CuO$_4$.\n\n\\begin{acknowledgments}\nWe would like to thank H. Okamoto, T. Miyamoto, I. Eremin, and P. Prelov\\v{s}ek for fruitful discussions. This work was supported by CREST (Grant No. 
JPMJCR1661), the Japan Science and Technology Agency, the creation of new functional devices and high-performance materials to support next-generation industries (CDMSI), and the challenge of basic science exploring extremes through multi-physics and multi-scale simulations to be tackled by using a post-K computer.\n\\end{acknowledgments}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusion and Future Work}\n\\label{sec:conclusions}\n\nWe presented the first distributed optimal primal-dual resource allocation\nalgorithm for uplink OFDM systems. The key features of the proposed\nalgorithm include: (a) incorporating practical OFDM system\nconstraints such as self-noise and maximum SNR constraints, (b)\na reduced primal-dual algorithm which eliminates unnecessary variable\nupdates, (c) distributed implementation by the end users and base\nstation, (d) simple local updates with limited message passing, (e)\nglobal convergence despite the existence of multiple global\noptimal solutions, and (f) fast convergence compared with the\nstate-of-the-art centralized optimal algorithm. We are currently\nextending this work in several directions, including proving the\ntheoretical convergence speed of the algorithm.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbmwx b/data_all_eng_slimpj/shuffled/split2/finalzzbmwx new file mode 100644 index 0000000000000000000000000000000000000000..7240bb5aa68c7352666d5a763d74a2e50daafb08 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbmwx @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLeft Handed Materials (LHM) are a new class of materials which were theoretically envisioned by Veselago \\cite{veselago} as early as 1967. Such materials have simultaneously negative relative permittivity ($\\varepsilon_{\\mathrm{r}}$) and negative relative permeability ($\\mu_{\\mathrm{r}}$). 
This theoretical curiosity became a real field of research in 2000, after Pendry showed the potential of LHM to overcome the diffraction limit \\cite{pendry_prl00} and Smith \\textit{et al.} \\cite{smith_nr} proposed a first realization of such extraordinary materials based on periodic lattices combining Split Ring Resonators (SRR: concentric annular rings with splits) and wires. The latter work can be considered as the experimental foundation of LHM (as the first experimental evidence of negative refraction), and it builds on a theoretical study by Pendry \\textit{et al.} which shows that negative permittivity can be obtained with a periodic arrangement of parallel wires and that a periodic lattice of SRR has a negative magnetic response around its resonance frequency \\cite{pendry_ieee99}.\n\n\\noindent In recent years, there has been a keen interest in wave propagation in periodically structured media \\cite{wavesPC}. Investigation of photonic crystals has paved the way to the theoretical prediction and experimental realization of photonic band gaps \\cite{yablono87,john,krauss,zengerle87,gralak2000,notomi2002,luo2002}, i.e. ranges of frequencies for which light, or a light polarization, cannot propagate. Soon after, the focus was extended to the study of acoustic waves in periodic media, and the existence of phononic band gaps was verified both theoretically and experimentally \\cite{2-ross1,2-ross2,acoustic,zhang2004,hu2004,feng2006}. Recently, the interest was even extended to the study of different types of waves, e.g. liquid surface waves \\cite{chou97,chou98,mciver2000,torres2000,hu2003} or biharmonic waves \\cite{ross-platonic,sasha-prsa,farhat-apl,farhat-epl,farhat-prl2009} in perforated thin plates. It has been shown that complete bandgaps also exist for these waves when propagating through a periodic lattice of vertically standing rods or over a periodically perforated thin plate \\cite{torres2000,farhat-apl}. 
In addition, many interesting phenomena have been reported, including negative refraction \\cite{veselago,pendry2000,maystre2004,guenneau2005,smith2000,ramak2005}, the superlensing effect and cloaking \\cite{schurig,norris,farhatprl,farhatwm, farhatpre}. The essential condition for the all-angle negative refraction (AANR) effect is that the constant frequency surfaces (EFS: equifrequency surfaces) should become convex everywhere about some point in the reciprocal space, and the size of these EFS should shrink with increasing frequency \\cite{zengerle87,gralak2000,notomi2002}.\\\\\n\n\\noindent In this paper, we focus on the application of split ring resonator (SRR) structures \\cite{pendry_ieee99,Seb_SRR,guenneau_physb06} to the domain of elastic waves. We first derive the homogenized governing equations of bending waves propagating within a thin-plate with a doubly periodic square array of freely vibrating holes shaped as SRR, starting from the generalized biharmonic equation and using an asymptotic analysis involving three scales (one for the thickness of the thin-cut of each SRR, one for the array-pitch, and one for the wavelength). We then present an analysis of dispersion curves. To do this, we set the spectral problem for the biharmonic operator within a doubly periodic square array of SRR: homogeneous stress-free boundary conditions are prescribed on the contour of each resonator, and the standard Floquet-Bloch conditions are set on the boundary of an elementary cell of the periodic structure. Such a structure presents an elastic bandgap at low frequencies. It turns out that the asymptotic analysis of our structure allows us to obtain analytically the frequency of the first localized mode and then the frequency of the first band gap.\nThe aim of our work is actually to demonstrate the AANR effect at low frequencies for elastic thin perforated plates as well as their superlensing properties. 
Ultra-refraction is also considered and shows the versatility and power of using such structured media to realize new functionalities for surface elastic waves.\n\n\\section{Homogenization of a thin-plate with an array of stress-free SRR inclusions near resonance}\n\nThe equations for bending of plates can be found in many textbooks \\cite{timoshenko,graff}.\nThe wavelength $\\lambda$ is assumed to be large compared to the thickness\n of the plate $H$ and small compared to its in-plane dimension $L$, i.e. $H \\ll \\lambda \\ll L$.\nIn this case we can adopt the hypotheses of the Von-Karman theory\n\\cite{timoshenko,graff}. In this way, the mathematical setup is essentially two-dimensional,\nthe thickness $H$ of the plate appearing simply as a parameter in the governing equation.\n\nWe would like to homogenize a periodically structured thin-plate involving resonant elements. The resonances are associated with fast-oscillating displacement fields in the thin bridges of perforations shaped as split ring resonators (SRR), and we filter these oscillations by introducing a third scale in the usual two-scale expansion. We start with the Kirchhoff-Love equation and we consider an open bounded region $\\Omega_f\\subset\\mathbb{R}^2$. This region is e.g. a slab lens consisting of a square array of SRR shaped as the letter $C$.\n\n\\noindent When the bending wave penetrates the structured area $\\Omega_f$ of the plate\nwhose geometry is shown in Figs. \\ref{fig6}(c)-(d), it\nundergoes fast periodic oscillations. 
To filter these oscillations,\nwe consider an asymptotic expansion of the associated vertical\ndisplacement $U_\\eta$ solution of the biharmonic equation given in (\\ref{bihaeta}) in terms of a\nmacroscopic (or slow) variable ${\\bf x}=(x_1,x_2)$\nand a microscopic (or fast) variable ${\\bf\nx}_\\eta={\\bf x}\/\\eta$, where $\\eta$ is a small positive\nreal parameter.\n\n\\begin{figure}[]\n\\begin{center}\n\\scalebox{0.6}{\\includegraphics[angle=0]{Fig1.eps}}\n\\caption{(a) Geometry of a split ring resonator C consisting of a disc $\\Sigma$ connected to a thin ligament $\\Pi_\\eta$ of length $l$ and thickness $\\eta h$ where $0<\\eta\\ll 1$ in a unit cell Y; (b) Helmholtz resonator consisting of a mass connected to a wall via a spring which models resonances of SRR in (a); (c) Doubly periodic square lattice of SRR with the first Brillouin zone $\\Gamma$XM\nin reciprocal space; (d) Geometry of the thin plate of thickness $H$, with a source on the left side of a platonic crystal (PC) slab occupying\nthe region $\\Omega_f$.}\n\\label{fig6}\n\\end{center}\n\\end{figure}\n\n\n\n\\noindent With all the above assumptions, the out-of-plane\ndisplacement ${\\bf u}_\\eta=(0,0,U_\\eta(x_1,x_2))$ in the\n$x_3$-direction (along the vertical axis) is solution of\n(assuming a time-harmonic dependence $\\exp(-i\\omega t)$\nwith $\\omega$ the angular wave frequency):\n\\begin{equation}\n\\begin{array}{ll}\n\\displaystyle{\\frac{\\partial^2}{\\partial x_1\\partial x_1}\\left({{D}_\\eta}\\left(\\frac{\\partial^2}{\\partial x_1\\partial x_1}+\\nu_\\eta \\frac{\\partial^2}{\\partial x_2\\partial x_2}\\right)\\right)U_\\eta} \\nonumber \\\\\n+\\displaystyle{\\frac{\\partial^2}{\\partial x_2\\partial x_2}\\left({{D}_\\eta}\\left(\\frac{\\partial^2}{\\partial x_2\\partial x_2}+\\nu_\\eta \\frac{\\partial^2}{\\partial x_1\\partial x_1}\\right)\\right)U_\\eta} \\nonumber \\\\\n+\\displaystyle{2\\frac{\\partial^2}{\\partial x_1\\partial 
x_2}\\left({D}_{\\eta}\\left(1-\\nu_\\eta\\right)\\frac{\\partial^2}{\\partial x_1\\partial x_2}\\right)U_\\eta\n-\\,\\beta_\\eta^4\\,U_\\eta}\n=0 \\; ,\n\\end{array}\n\\label{bihaeta}\n\\end{equation}\ninside the heterogeneous isotropic region $\\Omega_f$ (the platonic crystal, PC, in Fig. \\ref{fig6}(d)), where\n$$D_\\eta=D(\\frac{{\\bf x}}{\\eta}) \\; , \\; \\nu_\\eta=\\nu(\\frac{{\\bf x}}{\\eta}) \\; \\hbox{ and }\n\\; \\beta^4_\\eta=\\beta_0^4\\rho(\\frac{{\\bf x}}{\\eta}) \\; , $$\nare nondimensionalized spatially varying parameters related to the flexural rigidity of the plate, its Poisson ratio and the\nwave frequency, respectively. In most cases, $D$ and $\\nu$ take piecewise constant values, with\n$D>0$ and $-1\/2<\\nu<1\/2$. Note that $\\beta_0^2=\\omega\\sqrt{\\,\\rho_0 H\/D_0}$, where $D_0$ is the\nflexural rigidity of the plate outside the platonic crystal, $\\rho_0$ its density\nand H its thickness.\n\nRemark that (\\ref{bihaeta}) is written in weak form and we notably retrieve the classical boundary conditions for a\nhomogeneous plate with stress-free inclusions (vanishing of bending moments and shearing stress for vanishing\n$D_\\eta$ and $\\nu_\\eta$ in the soft phase) \\cite{graff}.\nSince there is only one phase in the problem which we consider (homogeneous medium outside freely-vibrating inclusions),\nit is also possible to recast (\\ref{bihaeta}) as\n\\begin{equation}\n(\\sqrt{D_\\eta}\\nabla^2 U_\\eta+\\beta_\\eta^2)(\\sqrt{D_\\eta}\\nabla^2 U_\\eta-\\beta_\\eta^2) U_\\eta=0 \\; , \\hbox{ in\n$\\Omega_f\\setminus\\overline{\\Theta_\\eta}$} \\; , \\;\n\\label{6-helmholtz}\n\\end{equation}\nsince $D_\\eta$ vanishes inside the inclusions $\\Theta_\\eta=\\bigcup_{i\\in\\mathbb{Z}^2}\\{ \\eta (i+C) \\}$ and it is a constant in the matrix.\nBear in mind that the number of SRR in $\\Omega_f$ is an integer which scales as $\\eta^{-2}$.\nNote also that the vanishing of bending moment and shearing stress deduced from (\\ref{bihaeta}) requires 
that\n\\begin{equation}\n\\begin{array}{ll}\n\\left((1+\\nu_\\eta)\\left(\\frac{\\partial^2}{\\partial x_1^2}+\\frac{\\partial^2}{\\partial x_2^2}\\right)\n+2(1-\\nu_\\eta)\\left(\\frac{\\partial^2}{\\partial x_1\\partial x_2}\\right)\\right)U_\\eta=0 \\; ,\n\\nonumber \\\\\n\\left((3-\\nu_\\eta)\\left(\\frac{\\partial^3}{\\partial x_1^3}+\\frac{\\partial^3}{\\partial x_2^3}\\right)\n+(1+\\nu_\\eta)\\left(\\frac{\\partial^3}{\\partial x_1^2\\partial x_2}\n+\\frac{\\partial^3}{\\partial x_1\\partial x_2^2}\\right)\\right)U_\\eta=0 \\; ,\n\\end{array}\n\\end{equation}\nat the boundary $\\partial\\Theta_\\eta$ of $\\Theta_\\eta$, which is consistent with our former work on thin perforated plates \\cite{farhat-epl}.\n\n\\noindent In the present case, perforations are shaped as split ring resonators, and each SRR $C$ can be modeled as\n\\begin{equation}\nC=\\{a < \\sqrt{x_1^2+x_2^2} < b\\} \\setminus\\overline{\\Pi_\\eta} \\;\n\\end{equation}\nwhere $a$ and $b$ are functions of variables $x_1,x_2$,\nunless the ring is circular and\n\\begin{equation}\n\\Pi_\\eta = \\Bigl \\{ (x_1,x_2) \\, : \\, 0 0$, $a(x)$ is a $T$-periodic sign-changing (i.e.~indefinite) weight function and \n$g(u)$ is a nonlinear term satisfying $g(0) = 0$.\nAmong other results, we proved therein that a two-solution theorem holds for the $T$-periodic boundary value problem associated with equation \\eqref{eq-ode}: more precisely, for weight functions $a(x)$ satisfying the mean value condition $\\int_{0}^{T} a(x)\\,\\mathrm{d}x < 0$\nand for a large class of nonlinear terms $g(u)$ which are superlinear at zero (namely, $g(u)\/u \\to 0$ for $u \\to 0^{+}$), two positive $T$-periodic solutions of \\eqref{eq-ode} exist, whenever the parameter $\\lambda$ is large enough (see \\cite[Theorem~3.1]{BoFe-PP} for the precise statement of this result). 
We refer the reader to the introduction of \\cite{BoFe-PP} for several comments about this solvability pattern, arising as a result of a delicate interplay between the behaviors of the nonlinear differential operator driving equation \\eqref{eq-ode} and the nonlinear term $a(x)g(u)$ when $u \\to +\\infty$.\n\nThe technical tool used in \\cite{BoFe-PP} to prove the above-mentioned result is topological degree theory in Banach spaces, along a line of research started in \\cite{FeZa-15jde} and later developed and applied in several different situations (cf.~\\cite{Fe-18}), always dealing with nonlinear BVPs for semilinear equations of the type\n\\begin{equation}\\label{eq-ode2}\nu'' + q(x)g(u) = 0,\n\\end{equation}\nwhere $q(x)$ is an indefinite weight function. As is well known, the first step within this approach is the formulation of the differential equation as a nonlinear functional equation in a Banach space: this can be done in a standard way when considering equation \\eqref{eq-ode2}, since the differential operator $Lu = -u''$ is a (linear) Fredholm operator of index zero. Then, depending on the invertibility\/non-invertibility of $L$ (and, hence, on the boundary conditions), either classical Leray--Schauder degree theory or Mawhin's coincidence degree theory applies. \n\nAs far as the strongly nonlinear equation \\eqref{eq-ode} is concerned, different strategies can be followed to achieve this goal. In \\cite{BoFe-PP}, we chose to write \\eqref{eq-ode} as the equivalent planar system\n\\begin{equation*}\nu' = \\frac{v}{\\sqrt{1+v^2}}, \\qquad v' = -\\lambda a(x)g(u),\n\\end{equation*}\nin order to directly apply coincidence degree theory in the product space, as proposed in the recent paper \\cite{FeZa-17tmna}.\nThis approach, which looks very natural when dealing with the periodic problem, has the drawback of not being suited for other boundary conditions. 
In particular, in spite of the well-known strong analogies existing in this setting between the periodic and the Neumann boundary value problem (see, for instance, \\cite{BoZa-15}), the possibility of proving the Neumann counterpart of the result in \\cite{BoFe-PP} is not discussed therein.\n\nThe aim of this brief paper is to provide a positive answer to this question. \nMore generally, we deal with the Neumann boundary value problem for the PDE version of equation \\eqref{eq-ode}, namely\n\\begin{equation}\\label{eq-main-pde}\n\\begin{cases}\n\\, \\mathrm{div}\\,\\Biggl{(} \\dfrac{\\nabla u}{\\sqrt{1- | \\nabla u |^{2}}}\\Biggr{)} + \\lambda a(|x|) g(u) = 0, & \\text{in $B$,} \\\\\n\\, \\partial_{\\nu}u=0, & \\text{on $\\partial B$,}\n\\end{cases}\n\\end{equation}\nwhere $B$ is a ball of the $N$-dimensional Euclidean space and $a(\\vert x \\vert)$ is a (sign-changing) radial weight function. \n\nIn this framework, we prove the following two-solution theorem for positive radial solutions of \\eqref{eq-main-pde}.\n\n\\begin{theorem}\\label{th-main}\nLet $N \\geq 1$ be an integer and let $B \\subseteq \\mathbb{R}^N$ be an open ball of center the origin and radius $R>0$.\nLet $a \\colon \\mathopen{[}0,R\\mathclose{]} \\to \\mathbb{R}$ be an $L^{1}$-function such that\n\\begin{itemize}[leftmargin=30pt,labelsep=12pt,itemsep=5pt]\n\\item [$(a_{*})$]\nthere exist $m\\geq 1$ closed and pairwise disjoint intervals $I^{+}_{1},\\ldots,I^{+}_{m}$ in $\\mathopen{[}0,R\\mathclose{]}$ such that\n\\begin{align*}\n&\\qquad\\qquad a(r)\\geq0, \\; \\text{ for a.e.~$r\\in I^{+}_{i}$,} \\quad a\\not\\equiv0 \\; \\text{ on $I^{+}_{i}$,} \\quad \\text{for $i=1,\\ldots,m$,} \\\\\n&\\qquad\\qquad a(r)\\leq0, \\; \\text{ for a.e.~$r\\in \\mathopen{[}0,R\\mathclose{]}\\setminus\\bigcup_{i=1}^{m}I^{+}_{i}$;}\n\\end{align*}\n\\item [$(a_{\\#})$] $\\displaystyle \\int_{B} a(|x|) \\,\\mathrm{d}x < 0$.\n\\end{itemize}\nLet $g \\colon \\mathopen{[}0,+\\infty\\mathclose{[} \\to 
\\mathopen{[}0,+\\infty\\mathclose{[}$ be a continuous function satisfying \n\\begin{itemize}[leftmargin=30pt,labelsep=12pt,itemsep=5pt]\n\\item [$(g_{*})$] $g(0)=0$ and $g(u)>0$, for all $u > 0$;\n\\item [$(g_{0})$] $\\displaystyle \\lim_{u\\to 0^{+}} \\dfrac{g(u)}{u} = 0$ and $\\displaystyle \\lim_{\\substack{u\\to0^{+} \\\\ \\omega\\to1}}\\dfrac{g(\\omega u)}{g(u)}=1$;\n\\item [$(g_{\\infty})$] $\\displaystyle\\lim_{\\substack{u\\to+\\infty \\\\ \\omega\\to1}}\\dfrac{g(\\omega u)}{g(u)}=1$.\n\\end{itemize}\nThen, there exists $\\lambda^{*}>0$ such that for every $\\lambda>\\lambda^{*}$ there exist at least two positive radial solutions of problem \\eqref{eq-main-pde}.\n\\end{theorem}\n\nNotice that no Sobolev subcriticality assumptions are required for the nonlinear term: for instance, the function $g(u) = u^{p}$ enters the setting of Theorem~\\ref{th-main} for every $p > 1$. The only restrictions on the growth at infinity come from assumption\n$(g_{\\infty})$: as an example, a function behaving, for $u$ large, like $e^u$ cannot be treated. 
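For instance, the model nonlinearity $g(u)=u^{p}$ with $p>1$ satisfies both regular-oscillation conditions; a quick verification (ours, spelled out for convenience):

```latex
\[
\frac{g(\omega u)}{g(u)} \,=\, \frac{(\omega u)^{p}}{u^{p}} \,=\, \omega^{p} \,\longrightarrow\, 1,
\qquad \text{as } \omega \to 1, \text{ uniformly in } u > 0,
\]
```

so $(g_{0})$ and $(g_{\infty})$ hold, together with $g(u)/u = u^{p-1} \to 0$ as $u \to 0^{+}$. For $g(u)=e^{u}$, instead, $g(\omega u)/g(u) = e^{(\omega-1)u}$ takes the constant value $e$ along the path $u = 1/(\omega-1)$, so the joint limit in $(g_{\infty})$ fails.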
Observe that a dual condition is required also at $u = 0$; for some comments about these assumptions (of regular-oscillation type) we refer to \\cite[p.~452--453]{BoFeZa-16}.\n\n\\begin{figure}[!htb]\n\\begin{tikzpicture}[scale=1]\n\\begin{axis}[\n tick label style={font=\\scriptsize},\n axis y line=left, \n axis x line=middle,\n xtick={0.359781, 1.39176, 2.60244, 4.3119 ,5},\n ytick={-1,0,1},\n xticklabels={,,,,$5$},\n yticklabels={$-1$,$0$,$1$},\n xlabel={\\small $|x|$},\n ylabel={\\small $a(|x|)$},\nevery axis x label\/.style={\n at={(ticklabel* cs:1.0)},\n anchor=west,\n},\nevery axis y label\/.style={\n at={(ticklabel* cs:1.0)},\n anchor=south,\n},\n width=5.5cm,\n height=4.5cm,\n xmin=0,\n xmax=6,\n ymin=-1.5,\n ymax=1.5]\n\\addplot graphics[xmin=0,xmax=5,ymin=-1.1,ymax=1.1] {BoFe-fig01-a.pdf};\n\\end{axis}\n\\end{tikzpicture}\n\\qquad\n\\begin{tikzpicture}[scale=1]\n\\begin{axis}[\n tick label style={font=\\scriptsize,major tick length=3pt},\n scale only axis,\n enlargelimits=false,\n xtick={0, 0.359781, 1.39176, 2.60244, 4.3119 ,5},\n xticklabels={$0$,,,,,$5$},\n ytick={0,8},\n max space between ticks=50,\n minor y tick num=3, \n xlabel={\\small $|x|$},\n ylabel={\\small $u(|x|)$},\nevery axis x label\/.style={\nbelow,\nat={(1.9cm,0.1cm)},\n yshift=-8pt\n },\nevery axis y label\/.style={\nbelow,\nat={(0cm,1.4cm)},\n xshift=-3pt},\n y label style={rotate=90,anchor=south},\n width=3.8cm,\n height=2.8cm, \n xmin=0,\n xmax=5,\n ymin=0,\n ymax=8]\n\\addplot graphics[xmin=0,xmax=5,ymin=0,ymax=8] {BoFe-fig01-profile.pdf};\n\\end{axis}\n\\end{tikzpicture}\n\\vspace{10pt}\\\\\n\\includegraphics[width=0.46\\linewidth]{BoFe-fig01-graph1.pdf}\n\\includegraphics[width=0.46\\linewidth]{BoFe-fig01-graph2.pdf}\n\\caption{The picture shows the graph of the weight function $a(|x|)=\\cos(||x|-5|^{3\/2}+1)$ in $\\mathopen{[}0,R\\mathclose{]}=\\mathopen{[}0,5\\mathclose{]}$ and the graphs of the two radial solutions of problem \\eqref{eq-main-pde} where $N=2$, 
$g(u)=u^{2}+u^{3}$ and $\\lambda=0.1$. We notice that the large solution appears quite ``sharp-cornered'', in agreement with the analysis in \\cite[Section~4]{BoFe-PP} and \\cite[Section~3.2]{BoGa-19ccm}.} \n\\label{fig-01}\n\\end{figure}\n\nThe proof of our main result is based again on topological degree theory, but, of course, with some differences with respect to the approach in \\cite{BoFe-PP}. In particular, after having converted the radial Neumann problem \\eqref{eq-main-pde} into the (singular) one-dimensional problem\n\\begin{equation}\\label{eq-main}\n\\begin{cases}\n\\, \\Biggl{(} r^{N-1}\\dfrac{u'}{\\sqrt{1-(u')^{2}}}\\Biggr{)}' + \\lambda r^{N-1} a(r) g(u) = 0,\\\\\n\\, u'(0) = u'(R) = 0,\n\\end{cases}\n\\end{equation}\nwe write it as a fixed point equation in the Banach space $\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$, by exploiting a strategy comparable to the one used, for instance, in \\cite{BeJeMa-10,GaMaZa-93} (see also Remark~\\ref{rem-2.1}). At this point, Leray--Schauder degree theory can be applied: the computation of the degree on three different balls of the space $\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ leads to the result, similarly as in \\cite{BoFe-PP,BoFeZa-16}.\n\nThe paper is organized as follows. In Section~\\ref{section-2}, we describe the abstract setting and we present two lemmas for the computation of the degree. We give here all the details of the arguments, since it seems that no appropriate reference exists for the auxiliary results that we need. In Section~\\ref{section-3}, we give the proof of Theorem~\\ref{th-main}, by showing the applicability of the above mentioned lemmas, under the present assumptions. Compared to the case $N=1$, some technical difficulties arise when $N \\geq 2$. First, the convexity (respectively, concavity) of the solutions is not a priori prescribed by the negativity (respectively, positivity) of the weight function $a(r)$. 
Second, \nwhen the weight function is positive near the center of the ball, as expected, some further care is needed in the estimates in order to exclude the possible appearance of a ``sudden loss of energy'' for the solutions (cf.~\\cite{GaMaZa-97}). We stress that, due to the peculiar features of the Minkowski curvature operator, this can be successfully done even with no subcriticality assumptions on the nonlinear term.\n\n\n\\section{The abstract setting}\\label{section-2}\n\nIn this section we present an abstract formulation of the problem and we prove some preliminary results based on topological degree theory.\n\n\\subsection{The fixed point reduction}\\label{section-2.1}\n\nThroughout this section, we deal with the boundary value problem \n\\begin{equation}\\label{eq-phi}\n\\begin{cases}\n\\, \\bigl{(} r^{N-1}\\varphi(u')\\bigr{)}' + r^{N-1} f(r,u) = 0, \\\\\n\\, u'(0) = u'(R) = 0,\n\\end{cases}\n\\end{equation} \nwhere $N \\geq 1$ is an integer,\n\\begin{equation*}\n\\varphi(s) = \\frac{s}{\\sqrt{1-s^2}}, \\quad s \\in \\mathopen{]}-1,1\\mathclose{[},\n\\end{equation*}\nand $f \\colon \\mathopen{[}0,R\\mathclose{]} \\times \\mathbb{R} \\to \\mathbb{R}$ is an $L^{1}$-Carath\\'{e}odory function (i.e.~$f(\\cdot,u)$ is measurable for every $u \\in \\mathbb{R}$, $f(r,\\cdot)$ is continuous for a.e. $r \\in \\mathopen{[}0,R\\mathclose{]}$, and for every $K > 0$\nthere exists $\\eta_K \\in L^1(0,R)$ such that $|f(r,u)| \\leq \\eta_K(r)$ for a.e. $r \\in \\mathopen{[}0,R\\mathclose{]}$ and for every \n$|u| \\leq K$). 
Let us recall that a solution of \\eqref{eq-phi} is a continuously differentiable function $u\\colon \\mathopen{[}0,R\\mathclose{]} \\to \\mathbb{R}$, with $u'(0) = u'(R) = 0$ and $|u'(r)| < 1$ for every $r \\in \\mathopen{]}0,R\\mathclose{[}$, such that the map $r \\mapsto r^{N-1}\\varphi(u')$ is absolutely continuous on $\\mathopen{[}0,R\\mathclose{]}$ and the differential equation in \\eqref{eq-phi} is satisfied almost everywhere (see also Remark \\ref{regolarita}).\n\nIt is convenient for the sequel to embed problem \\eqref{eq-phi} into the two-parameter family of problems\n\\begin{equation}\\label{eq-phi-theta}\n\\begin{cases}\n\\, \\bigl{(} r^{N-1}\\varphi(u')\\bigr{)}' + \\vartheta r^{N-1} \\bigl{[} f(r,u) + \\alpha v(r) \\bigr{]} = 0, \\\\\n\\, u'(0) = u'(R) = 0,\n\\end{cases}\n\\end{equation} \nwhere $\\vartheta\\in\\mathopen{[}0,1\\mathclose{]}$, $\\alpha\\geq0$ and $v(r)$ is a fixed Lebesgue integrable function.\nNotice that \\eqref{eq-phi} is exactly \\eqref{eq-phi-theta} with $\\vartheta=1$ and $\\alpha=0$.\n\nOur aim is to write \\eqref{eq-phi-theta} as a fixed point problem in the Banach space $\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ of continuous functions in $\\mathopen{[}0,R\\mathclose{]}$, endowed with the usual norm $\\|\\cdot\\|_{\\infty}$. 
Accordingly, for $\\vartheta\\in\\mathopen{[}0,1\\mathclose{]}$ and $\\alpha\\geq0$, we introduce the operator\n\\begin{equation*}\nT_{\\vartheta,\\alpha} \\colon \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]}) \\to \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]}),\n\\end{equation*}\ndefined as\n\\begin{equation}\\label{operator}\n\\begin{aligned}\n(T_{\\vartheta,\\alpha} u) (r) &= u(0) - \\dfrac{1}{R^{N-1}} \\int_{0}^{R} \\zeta^{N-1} \\bigl{[} f(\\zeta,u(\\zeta)) + \\alpha v(\\zeta) \\bigr{]} \\,\\mathrm{d}\\zeta\n \\\\ &\\quad + \\int_{0}^{r} \\varphi^{-1} \\biggl{(} -\\dfrac{\\vartheta}{\\zeta^{N-1}} \\int_{0}^{\\zeta}\\xi^{N-1} \\bigl{[} f(\\xi,u(\\xi)) + \\alpha v(\\xi) \\bigr{]}\\,\\mathrm{d}\\xi \\biggr{)} \\,\\mathrm{d}\\zeta,\n\\end{aligned}\n\\end{equation}\nfor every $u\\in\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$.\nNotice that the definition of $T_{\\vartheta,\\alpha}$ is well-posed, since the function\n\\begin{equation*}\n\\mathopen{]}0,R\\mathclose{]} \\ni \\zeta \\mapsto \\dfrac{1}{\\zeta^{N-1}} \\int_{0}^{\\zeta}\\xi^{N-1} \\bigl{[} f(\\xi,u(\\xi)) + \\alpha v(\\xi) \\bigr{]}\\,\\mathrm{d}\\xi\n\\end{equation*}\ncontinuously extends up to $\\zeta=0$. 
For future convenience, we also observe that $T_{\\vartheta,\\alpha} u\\in\\mathcal{C}^{1}(\\mathopen{[}0,R\\mathclose{]})$ with\n\\begin{equation}\\label{diff-operator}\n(T_{\\vartheta,\\alpha} u)' (r) = \n\\begin{cases}\n\\, \\displaystyle \\varphi^{-1} \\biggl{(} -\\dfrac{\\vartheta}{r^{N-1}} \\int_{0}^{r}\\xi^{N-1} \\bigl{[} f(\\xi,u(\\xi)) + \\alpha v(\\xi) \\bigr{]}\\,\\mathrm{d}\\xi \\biggr{)}, &\\text{if $r\\in\\mathopen{]}0,R\\mathclose{]}$,} \\\\\n\\, 0, &\\text{if $r=0$,}\n\\end{cases}\n\\end{equation}\nfor every $u\\in\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$.\n\nThe following result holds true.\n\n\\begin{theorem}\\label{th-operator}\nLet $f \\colon \\mathopen{[}0,R\\mathclose{]} \\times \\mathbb{R} \\to \\mathbb{R}$ be an $L^{1}$-Carath\\'{e}odory function, $v \\in L^1(0,R)$, \n$\\vartheta\\in\\mathopen{[}0,1\\mathclose{]}$, and $\\alpha\\geq0$. Then, the operator $T_{\\vartheta,\\alpha}$ defined in \\eqref{operator} is completely continuous. Moreover, $u(r)$ is a solution of \\eqref{eq-phi-theta} if and only if $u\\in\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ is a fixed point of $T_{\\vartheta,\\alpha}$.\n\\end{theorem}\n\n\\begin{proof}\nThe continuity of the operator $T_{\\vartheta,\\alpha}$ follows straightforwardly by observing that $T_{\\vartheta,\\alpha}$ is a composition of continuous maps. We claim that $T_{\\vartheta,\\alpha}$ sends bounded sets into relatively compact sets. As a well-known consequence of the Ascoli--Arzel\\`{a} theorem, this is true if we prove that $\\{T_{\\vartheta,\\alpha}u_{n}\\}_{n}$ and $\\{(T_{\\vartheta,\\alpha}u_{n})'\\}_{n}$ are uniformly bounded for every bounded sequence $\\{u_{n}\\}_{n}$ in $\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$. 
This easily follows from the regularity assumptions on $f(r,u)$ and $v(r)$.\n\nWe prove the equivalence between the Neumann boundary value problem \\eqref{eq-phi-theta} and the fixed point problem $u=T_{\\vartheta,\\alpha}u$.\nLet $u(r)$ be a solution of \\eqref{eq-phi-theta}, hence $u\\in\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$. By integrating the equation in \\eqref{eq-phi-theta} in $\\mathopen{[}0,r\\mathclose{]}$, recalling that $u'(0)=0$, and dividing by $r^{N-1}$, we obtain\n\\begin{equation}\\label{eq-phi-th2.1}\n\\varphi(u'(r)) = -\\dfrac{\\vartheta}{r^{N-1}} \\int_{0}^{r} \\zeta^{N-1} \\bigl{[} f(\\zeta,u(\\zeta)) + \\alpha v(\\zeta) \\bigr{]} \\,\\mathrm{d}\\zeta, \\quad \\text{for all $r\\in\\mathopen{]}0,R\\mathclose{]}$.}\n\\end{equation}\nSince $u'(R)=0$, we deduce that\n\\begin{equation}\\label{eq-averR}\n\\dfrac{1}{R^{N-1}} \\int_{0}^{R}\\zeta^{N-1} \\bigl{[} f(\\zeta,u(\\zeta)) + \\alpha v(\\zeta) \\bigr{]}\\,\\mathrm{d}\\zeta =0.\n\\end{equation}\nBy applying $\\varphi^{-1}$ to \\eqref{eq-phi-th2.1} and integrating in $\\mathopen{[}0,r\\mathclose{]}$, we thus deduce that $u=T_{\\vartheta,\\alpha}u$.\n\nOn the other hand, let $u\\in\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ be such that $u=T_{\\vartheta,\\alpha}u$. Therefore, we have $u\\in\\mathcal{C}^{1}(\\mathopen{[}0,R\\mathclose{]})$ and $u'$ is given by the right-hand side of formula \\eqref{diff-operator}. In particular, $u'(0)=0$ and $|u'(r)|<1$ for every $r\\in\\mathopen{]}0,R\\mathclose{[}$. \nNext, by computing $u(0)=(T_{\\vartheta,\\alpha}u)(0)$, we obtain that \\eqref{eq-averR} holds and so $u'(R)=0$.\nBy computing $\\varphi(u')$, multiplying by $r^{N-1}$, and differentiating, we finally obtain that $u(r)$ solves \\eqref{eq-phi-theta}. The proof is thus completed.\n\\end{proof}\n\n\\begin{remark}\\label{regolarita}\nLet us notice that, for every solution $u(r)$ of \\eqref{eq-phi-theta}, from \\eqref{eq-phi-th2.1} it follows that $\\varphi(u')$ is absolutely continuous on $\\mathopen{[}0,R\\mathclose{]}$. 
Therefore, since $\\varphi^{-1}$ is smooth, $u'$ is absolutely continuous as well.\nIn particular, if both $f(r,u)$ and $v(r)$ are continuous functions, then $u \\in \\mathcal{C}^2(\\mathopen{[}0,R\\mathclose{]})$ (cf.~\\cite[Remark~3.3]{CoCoRi-14}).\n$\\hfill\\lhd$\n\\end{remark}\n\nAs a consequence of Theorem~\\ref{th-operator}, if $\\Omega\\subseteq \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ is an open and bounded set such that\n\\begin{equation*}\nu \\neq T_{\\vartheta,\\alpha} u, \\quad \\text{for all $u\\in\\partial\\Omega$,}\n\\end{equation*}\nthe Leray--Schauder degree $\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{\\vartheta,\\alpha},\\Omega,0)$ is well-defined (we refer to \\cite{De-85} for a classical reference about topological degree theory).\n\n\\begin{remark}\\label{rem-2.1}\nDealing with the (one-dimensional) periodic boundary value problem\n\\begin{equation}\\label{eq-per}\n\\begin{cases}\n\\, \\bigl{(} \\varphi(u')\\bigr{)}' + f(r,u) = 0, \\\\\n\\, u(0) = u(R), \\; u'(0) = u'(R),\n\\end{cases}\n\\end{equation} \na similar strategy can be followed in order to obtain a functional analytic formulation. 
Precisely, it can be seen (cf.~\\cite{BeMa-07,MaMa-98}) that $u(r)$ is a solution of \\eqref{eq-per} if and only if \n$u \\in \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ is a fixed point of the operator $\\widetilde T \\colon \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]}) \\to \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ defined as \n\\begin{align*}\n (\\widetilde T u) (r) &= u(0) -\\int_{0}^{R} f(\\zeta,u(\\zeta)) \\,\\mathrm{d}\\zeta\n \\\\ &\\quad + \\int_{0}^{r} \\varphi^{-1} \\biggl{(} - \\int_{0}^{\\zeta} f(\\xi,u(\\xi))\\,\\mathrm{d}\\xi + Q_{\\varphi}\\left( \\int_0^{\\odot} \nf(\\eta,u(\\eta))\\,\\mathrm{d}\\eta\\right) \\biggr{)} \\,\\mathrm{d}\\zeta,\n\\end{align*}\nwhere, for each $h \\in \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$, $Q_\\varphi(h) = Q_\\varphi(h(\\odot)) \\in \\mathbb{R}$ is defined as the unique solution of the equation\n\\begin{equation*}\n\\int_0^R \\varphi^{-1}\\left( -h(r) + Q_\\varphi(h)\\right)\\,\\mathrm{d}r = 0.\n\\end{equation*}\nIt is worth noticing that, compared with the Neumann case, this formulation is slightly less transparent: indeed, due to the non-locality of the\nperiodic boundary conditions, the additional term $Q_\\varphi$ appears. 
\nAn alternative fixed point formulation for \\eqref{eq-per}, relying on a direct use of coincidence degree theory for the equivalent planar system\n$u' = \\varphi^{-1}(v)$, $v' = - f(r,u)$, has recently been proposed in \\cite{FeZa-17tmna}.\n$\\hfill\\lhd$\n\\end{remark}\n\n\n\\subsection{Two degree lemmas}\\label{section-2.2}\n\nTaking advantage of the abstract setting just presented, we now prove two lemmas for the computation of the degree on open balls $B(0,d)$ (with center $0$ and radius $d>0$) of the Banach space $\\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ in the framework of the Neumann problem\n\\begin{equation}\\label{eq-phi-ag}\n\\begin{cases}\n\\, \\bigl{(} r^{N-1}\\varphi(u')\\bigr{)}' + \\lambda r^{N-1} a(r)g(u) = 0, \\\\\n\\, u'(0) = u'(R) = 0,\n\\end{cases}\n\\end{equation} \nwhere $a \\colon \\mathopen{[}0,R\\mathclose{]} \\to \\mathbb{R}$ is an $L^1$-function, $g \\colon \\mathopen{[}0,+\\infty\\mathclose{[} \\to \\mathopen{[}0,+\\infty\\mathclose{[}$ is a continuous function satisfying $g(0) = 0$, and $\\lambda>0$. \nNotice that, to enter the setting of the previous section, we need the nonlinearity appearing in the equation to be defined for every $u \\in \\mathbb{R}$; accordingly, we set\n\\begin{equation*}\nf(r,u) = \\begin{cases}\n\\, \\lambda a(r) g(u), & \\text{if $u\\geq0$,} \\\\\n\\, -u, & \\text{if $u<0$.} \n\\end{cases}\n\\end{equation*}\nObserve that, due to the assumptions on $a(r)$ and $g(u)$, the function $f(r,u)$ is $L^{1}$-Carath\\'{e}odory.\n\nRecalling the definition \\eqref{operator} of $T_{\\vartheta,\\alpha}$, we thus compute the Leray--Schauder degree of the map $\\mathrm{Id}-T_{1,0}$. 
Observe that, by standard maximum principle arguments (based on the monotonicity of the map $r \\mapsto r^{N-1}\\varphi(u'(r))$ when $u(r)<0$), $u = T_{1,0} u$ implies that $u(r)$ is a \\emph{non-negative} solution of \\eqref{eq-phi} and thus solves \\eqref{eq-phi-ag}.\n\nThe first lemma gives conditions for zero degree.\n\n\\begin{lemma}\\label{lem-deg0}\nLet $a \\colon \\mathopen{[}0,R\\mathclose{]} \\to \\mathbb{R}$ be an $L^1$-function, let $g \\colon \\mathopen{[}0,+\\infty\\mathclose{[} \\to \\mathopen{[}0,+\\infty\\mathclose{[}$ be a continuous function satisfying $g(0) = 0$, and $\\lambda>0$. \nLet $d>0$ and assume that there exists a non-negative function $v \\in L^1(0,R)$, with $v\\not\\equiv0$, such that the following properties hold:\n\\begin{itemize}\n\\item[$(H_{1})$]\nIf $\\alpha \\geq 0$ and $u(r)$ is a non-negative solution of\n\\begin{equation}\\label{eq-lem-deg0}\n\\begin{cases}\n\\, \\bigl{(} r^{N-1}\\varphi(u')\\bigr{)}' + \\lambda r^{N-1} a(r) g(u) + \\alpha r^{N-1} v(r) = 0, \\\\\n\\, u'(0)=u'(R)=0,\n\\end{cases}\n\\end{equation}\nthen $\\|u\\|_{\\infty}\\neq d$.\n\\item[$(H_{2})$]\nThere exists $\\alpha_{0} \\geq 0$ such that problem \\eqref{eq-lem-deg0}, with $\\alpha=\\alpha_{0}$, has no non-negative solutions $u(r)$ with $\\|u\\|_{\\infty}\\leq d$.\n\\end{itemize}\nThen, it holds that $\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,d),0) = 0$.\n\\end{lemma}\n\n\\begin{proof}\nFirst, recalling Theorem~\\ref{th-operator}, we have that $u(r)$ is a solution of \n\\begin{equation}\\label{eq-lem-deg0f}\n\\begin{cases}\n\\, \\bigl{(} r^{N-1}\\varphi(u')\\bigr{)}' + r^{N-1} \\bigl{[} f(r,u) + \\alpha v(r) \\bigr{]}= 0, \\\\\n\\, u'(0)=u'(R)=0,\n\\end{cases}\n\\end{equation}\nif and only if $u = T_{1,\\alpha} u$.\nBy maximum principle arguments, since $v \\geq 0$, every solution of \\eqref{eq-lem-deg0f} is non-negative and thus solves \\eqref{eq-lem-deg0}.\nTherefore, for every $\\alpha \\geq 0$, condition $(H_{1})$ ensures that 
\n\\begin{equation*}\nu \\neq T_{1,\\alpha} u, \\quad \\text{for all $u\\in \\partial B(0,d)$.}\n\\end{equation*}\nWe deduce that $\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,\\alpha},B(0,d),0)$ is well-defined for every $\\alpha \\geq 0$. As a final step, we apply the homotopy invariance property of the degree to conclude that\n\\begin{equation*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,d),0)=\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,\\alpha_{0}},B(0,d),0)=0,\n\\end{equation*}\nwhere the last equality follows from hypothesis $(H_{2})$.\n\\end{proof}\n\nThe second lemma ensures non-zero degree. Notice that here we need to assume further conditions on $a(r)$ and $g(u)$.\n\n\\begin{lemma}\\label{lem-deg1}\nLet $a \\colon \\mathopen{[}0,R\\mathclose{]} \\to \\mathbb{R}$ be an $L^1$-function satisfying $(a_{\\#})$, let $g \\colon \\mathopen{[}0,+\\infty\\mathclose{[} \\to \\mathopen{[}0,+\\infty\\mathclose{[}$ be a continuous function satisfying $(g_{*})$, and $\\lambda>0$. 
Let $d>0$ and assume that the following property holds:\n\\begin{itemize}\n\\item[$(H_{3})$]\nIf $\\vartheta\\in \\mathopen{]}0,1\\mathclose{]}$ and $u(r)$ is a non-negative solution of\n\\begin{equation}\\label{eq-lem-deg1}\n\\begin{cases}\n\\, \\bigl{(} r^{N-1}\\varphi(u')\\bigr{)}' + \\vartheta \\lambda r^{N-1} a(r) g(u) = 0, \\\\\n\\, u'(0)=u'(R)=0,\n\\end{cases}\n\\end{equation}\nthen $\\|u\\|_{\\infty} \\neq d$.\n\\end{itemize}\nThen, it holds that $\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,d),0) = 1$.\n\\end{lemma}\n\n\\begin{proof}\nFirst, recalling Theorem~\\ref{th-operator}, we have that $u(r)$ is a solution of\n\\begin{equation}\\label{eq-lem-deg11f}\n\\begin{cases}\n\\, \\bigl{(} r^{N-1}\\varphi(u')\\bigr{)}' + \\vartheta r^{N-1} f(r,u) = 0, \\\\\n\\, u'(0)=u'(R)=0,\n\\end{cases}\n\\end{equation}\nif and only if $u = T_{\\vartheta,0} u$.\nBy maximum principle arguments, every solution of \\eqref{eq-lem-deg11f} is non-negative and thus solves \\eqref{eq-lem-deg1}.\n\nHypothesis $(H_{3})$ implies that $\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{\\vartheta,0},B(0,d),0)$ is well-defined for every $\\vartheta\\in\\mathopen{]}0,1\\mathclose{]}$.\nLet us consider the case $\\vartheta=0$. 
The fixed point problem $u = T_{0,0} u$ reduces to\n\\begin{equation*}\nu(r) = (T_{0,0} u)(r) = u(0) - \\dfrac{1}{R^{N-1}} \\int_{0}^{R} \\zeta^{N-1} f(\\zeta,u(\\zeta)) \\,\\mathrm{d}\\zeta, \\quad u\\in \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]}),\n\\end{equation*}\nwhose solutions are constant functions.\nObserving that\n\\begin{equation*}\n(\\mathrm{Id}-T_{0,0}) u \\equiv - \\dfrac{1}{R^{N-1}} \\int_{0}^{R} \\zeta^{N-1} f(\\zeta,s) \\,\\mathrm{d}\\zeta, \\quad \\text{for all $u\\in \\mathcal{C}(\\mathopen{[}0,R\\mathclose{]})$ with $u\\equiv s\\in\\mathbb{R}$,}\n\\end{equation*}\nwe consider the function\n\\begin{equation*}\nf^{\\#}(s) = \\dfrac{1}{R^{N-1}} \\int_{0}^{R} \\zeta^{N-1} f(\\zeta,s) \\,\\mathrm{d}\\zeta =\n\\begin{cases}\n\\, -\\dfrac{R}{N} s, & \\text{if $s \\leq 0$,} \\\\\n\\, \\dfrac{\\lambda}{R^{N-1}} \\biggl{(}\\displaystyle \\int_{0}^{R} \\zeta^{N-1}a(\\zeta) \\,\\mathrm{d}\\zeta \\biggr{)} g(s), & \\text{if $s \\geq 0$.}\n\\end{cases}\n\\end{equation*}\nBy hypothesis $(a_{\\#})$ we deduce that\n\\begin{equation*}\n\\int_{0}^{R} r^{N-1} a(r) \\,\\mathrm{d}r = \\dfrac{1}{\\omega_{N}} \\int_{B}a(|x|) \\,\\mathrm{d}x < 0\n\\end{equation*}\n(where $\\omega_{N}$ is the measure of the unit sphere in $\\mathbb{R}^{N}$) \nand, consequently, we obtain that $f^{\\#}(s)s < 0$ for $s \\neq 0$.\nBy the reduction property, we can reduce the study of the degree of $\\mathrm{Id}-T_{0,0}$ to the set $B(0,d)\\cap\\mathbb{R}=\\mathopen{]}-d,d\\mathclose{[}$.\nSince $f^{\\#}(s)$ has no zeros on $\\partial(B(0,d)\\cap\\mathbb{R})=\\{\\pm d\\}$ and, more precisely, $f^{\\#}(d)<0<f^{\\#}(-d)$, the Brouwer degree of $\\mathrm{Id}-T_{0,0}$ on $\\mathopen{]}-d,d\\mathclose{[}$ is equal to $1$ and so\n\\begin{equation*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{0,0},B(0,d),0) = 1.\n\\end{equation*}\nFinally, by the homotopy invariance property of the degree, we conclude that\n\\begin{equation*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,d),0) = \\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{0,0},B(0,d),0) = 1,\n\\end{equation*}\nwhich concludes the proof.\n\\end{proof}\n\n\\subsection{Proof of the main result}\n\nWe are now in a position to prove our main result. The proof is divided into five steps.\n\n\\subsubsection*{Step~1. Fixing the constants $\\varepsilon$, $\\delta^{*}$, $\\delta_{*}$, $\\lambda^{*}$.}\nLet $\\varepsilon>0$ be such that\n\\begin{equation*}\n\\varepsilon<\\dfrac{ |I^{+}_{i} |}{4} \\quad \\text{ and } \\quad \\int_{\\sigma_{i}+2\\varepsilon}^{\\tau_{i}-2\\varepsilon} r^{N-1} a(r)\\,\\mathrm{d}r > 0, \\quad \\text{for every $i = 1,\\ldots,m$,}\n\\end{equation*}\nand let us define\n\\begin{equation*}\n\\delta^{*} = 
\\dfrac{2^{N-1}\\varepsilon^{N}}{R^{N-1}}\n\\end{equation*}\nand\n\\begin{equation*}\n\\delta_{*} = \\min_{i=1,\\ldots,m} \n\\dfrac{\\delta^{*}}{1+2\\varphi\\biggl{(}\\dfrac{1}{2} \\biggr{)} |I^{+}_{i}| \\dfrac{\\tau_{i}^{2N-2}}{\\sigma_{i}^{N-1}(2\\varepsilon)^{N}}},\n\\end{equation*}\nif $\\sigma_{1}\\neq0$, or \n\\begin{equation*}\n\\delta_{*} = \\min \\left\\{ \n\\dfrac{\\delta^{*}-\\gamma}{1+2\\varphi\\biggl{(}\\dfrac{1}{2} \\biggr{)} |I^{+}_{1}| \\dfrac{\\tau_{1}^{2N-2}}{\\gamma^{N-1}(2\\varepsilon)^{N}}},\n\\min_{i=2,\\ldots,m} \\dfrac{\\delta^{*}}{1+2\\varphi\\biggl{(}\\dfrac{1}{2} \\biggr{)} |I^{+}_{i}| \\dfrac{\\tau_{i}^{2N-2}}{\\sigma_{i}^{N-1}(2\\varepsilon)^{N}}} \\right\\},\n\\end{equation*}\nif $\\sigma_{1}=0$, where $\\gamma=\\min\\{\\delta^{*},\\tau_{1}\\}\/2$. Notice that, in both cases, $\\delta_{*}\\in\\mathopen{]}0,\\delta^{*}\\mathclose{[}$.\nMoreover, let\n\\begin{equation*}\n\\lambda^{*} = \\max_{i=1,\\ldots,m}\\dfrac{2 R^{N-1}\\varphi(1\/2)}{ \\min \\bigl{\\{} g(u) \\colon u \\in \\mathopen{[} \\delta_{*},\\delta^{*} \\mathclose{]} \\bigr{\\}} \\displaystyle{\\int_{\\sigma_{i}+2\\varepsilon}^{\\tau_{i}-2\\varepsilon} r^{N-1} a(r)\\,\\mathrm{d}r}}.\n\\end{equation*}\nFrom now on, let $\\lambda > \\lambda^{*}$ be fixed.\n\n\\subsubsection*{Step~2. Computation of the degree in $B(0,\\delta^{*})$.}\nWe are going to apply Lemma~\\ref{lem-deg0} to the open ball $B(0,\\delta^{*})$ taking as $v(r)$ the indicator function of the set $\\bigcup_{i} I^{+}_{i}$.\n\nFirst we verify condition $(H_{1})$. We suppose by contradiction that there exist $\\alpha \\geq 0$ and a non-negative solution $u(r)$ to \\eqref{eq-lem-deg0} such that $\\|u\\|_{\\infty} = \\delta^{*}$. 
\n\n\\smallskip\n\\noindent\n\\textit{Claim~1.} There exists $i\\in\\{1,\\ldots,m\\}$ such that\n\\begin{equation}\\label{eq-cl1}\n\\max_{r\\in I^{+}_{i}} u(r) = \\delta^{*}.\n\\end{equation}\nSince $v\\equiv0$ on $\\mathopen{[}0,R\\mathclose{]}\\setminus \\bigcup_{i} I^{+}_{i}$, by conditions $(a_{*})$ and $(g_{*})$ we deduce that the map $r\\mapsto r^{N-1}\\varphi(u'(r))$ is non-increasing on each interval $I^{+}_{i}$ and non-decreasing on each interval $J \\subseteq \\mathopen{[}0,R\\mathclose{]}\\setminus \\bigcup_{i} I^{+}_{i}$. \nWe show that\n\\begin{equation}\\label{eq-convexity}\n\\max_{r\\in J}u(r) = \\max_{r\\in\\partial J} u(r),\n\\end{equation}\nfor every interval $J \\subseteq \\mathopen{[}0,R\\mathclose{]}\\setminus \\bigcup_{i} I^{+}_{i}$.\nIndeed, let $J=\\mathopen{[}\\tau,\\sigma\\mathclose{]}$ and $\\hat{r}\\in\\mathopen{]}\\tau,\\sigma\\mathclose{[}$. If $u'(\\hat{r}) \\geq 0$, then $u'(r) \\geq 0$ for all $r\\in\\mathopen{[}\\hat{r},\\sigma\\mathclose{]}$, and so $u(\\hat{r})\\leq u(\\sigma)$. Analogously, if $u'(\\hat{r}) \\leq 0$, then $u'(r) \\leq 0$ for all $r\\in\\mathopen{[}\\tau,\\hat{r}\\mathclose{]}$, and so $u(\\hat{r})\\leq u(\\tau)$. Therefore, \\eqref{eq-convexity} holds. \n\nWe further observe that if $\\tau=0$ then $u'(\\tau)=0$ and so $\\max_{r\\in J}u(r) = u(\\sigma)$; if $\\sigma=R$ then $u'(\\sigma)=0$ and so $\\max_{r\\in J}u(r) = u(\\tau)$. As a consequence of this and \\eqref{eq-convexity}, \\eqref{eq-cl1} follows.\n\n\nFrom now on we focus on the behavior of $u(r)$ on $I^{+}_{i}=\\mathopen{[}\\sigma_{i},\\tau_{i}\\mathclose{]}$. \n\n\\smallskip\n\\noindent\n\\textit{Claim~2.} It holds that\n\\begin{equation}\\label{estim-u'}\n|u'(r)| \\leq \\dfrac{\\tau_{i}^{N-1}}{(2\\varepsilon)^{N}} \\, u(r), \\quad \\text{for all $r\\in \\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}$.}\n\\end{equation}\nIndeed, let us fix $r\\in \\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}$. 
If $u'(r)=0$ then the estimate is obvious. If $u'(r)>0$, by using the monotonicity of the map $r\\mapsto r^{N-1}\\varphi(u'(r))$, we have that \n\\begin{equation*}\nu'(\\xi) \\geq \\varphi^{-1} \\biggl{(} \\biggl{(}\\dfrac{r}{\\xi}\\biggr{)}^{\\!N-1} \\varphi(u'(r)) \\biggr{)} \\geq \\varphi^{-1} \\bigl{(} \\varphi(u'(r)) \\bigr{)} = u'(r), \\quad \\text{for all $\\xi\\in\\mathopen{]}\\sigma_{i},r\\mathclose{]}$.}\n\\end{equation*}\nBy integrating the above inequality in $\\mathopen{[}\\sigma_{i},r\\mathclose{]}$ we obtain \n\\begin{equation*}\nu(r)\\geq u(r)-u(\\sigma_{i})= \\int_{\\sigma_{i}}^{r} u'(\\xi)\\,\\mathrm{d}\\xi\n\\geq (r-\\sigma_{i}) u'(r)\\geq 2\\varepsilon u'(r) \\geq \\dfrac{(2\\varepsilon)^{N}}{\\tau_{i}^{N-1}} u'(r),\n\\end{equation*}\nwhere the last inequality follows from $2\\varepsilon<\\tau_{i}$. This implies~\\eqref{estim-u'}. Finally, if $u'(r)<0$, by arguing as above, we have that \n\\begin{align*}\n-u'(\\xi) &\\geq \\varphi^{-1} \\biggl{(} \\biggl{(}\\dfrac{r}{\\xi}\\biggr{)}^{\\!N-1} \\varphi(-u'(r)) \\biggr{)} \n\\geq -\\biggl{(}\\dfrac{r}{\\xi}\\biggr{)}^{\\!N-1} u'(r)\n\\\\\n& \\geq -\\biggl{(}\\dfrac{r}{\\tau_{i}}\\biggr{)}^{\\!N-1} u'(r), \\quad \\text{for all $\\xi\\in\\mathopen{[}r,\\tau_{i}\\mathclose{]}$,}\n\\end{align*}\nwhere we have used the oddness of $\\varphi$ and $\\varphi^{-1}$, and the elementary inequality\n\\begin{equation*}\n\\varphi^{-1} \\bigl{(} \\vartheta \\varphi(s) \\bigr{)} \\geq \\vartheta s, \\quad \\text{for all $\\vartheta\\in\\mathopen{[}0,1\\mathclose{]}$ and $s\\in\\mathopen{[}0,1\\mathclose{[}$,}\n\\end{equation*}\ncoming from the convexity of $\\varphi$ in $\\mathopen{[}0,1\\mathclose{[}$.\nThen, by integrating in $\\mathopen{[}r,\\tau_{i}\\mathclose{]}$ we obtain \n\\begin{equation*}\nu(r)\\geq u(r)-u(\\tau_{i}) = - \\int_{r}^{\\tau_{i}} u'(\\xi)\\,\\mathrm{d}\\xi \n\\geq -(\\tau_{i}-r) \\biggl{(}\\dfrac{r}{\\tau_{i}}\\biggr{)}^{\\!N-1}u'(r) \n\\geq - \\dfrac{(2\\varepsilon)^{N}}{\\tau_{i}^{N-1}} 
u'(r),\n\\end{equation*}\nfinally implying \\eqref{estim-u'}.\n\nFor further convenience, we observe that from \\eqref{estim-u'} we have in particular that\n\\begin{equation}\\label{u'_1_2}\n|u'(r)| \\leq \\dfrac{1}{2}, \\quad \\text{for all $r \\in \\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}$.}\n\\end{equation}\n\n\\smallskip\n\\noindent\n\\textit{Claim~3.} It holds that\n\\begin{equation}\\label{eq-delta}\n\\min_{r\\in \\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}} u(r)\\geq \\delta_{*}.\n\\end{equation}\nTo show this, let $r^{*}\\in I^{+}_{i}$ and $\\check{r}\\in\\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}$ be such that\n\\begin{equation*}\nu(r^{*}) = \\max_{r\\in \\mathopen{[}\\sigma_{i},\\tau_{i}\\mathclose{]}} u(r) = \\delta^{*}, \\quad u(\\check{r}) = \\min_{r\\in \\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}} u(r).\n\\end{equation*}\nThe case $r^{*}=\\check{r}$ is trivial. So we consider the two cases: $r^{*}<\\check{r}$ and $r^{*}>\\check{r}$. \n\nIf $\\sigma_{i} \\leq r^{*} < \\check{r}\\in \\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}$, then $u'(r^{*}) \\leq 0$. \nUsing the monotonicity of $r\\mapsto r^{N-1} \\varphi(u'(r))$ in $\\mathopen{[}\\sigma_{i},\\tau_{i}\\mathclose{]}$, we deduce that\n\\begin{equation}\\label{eq-monot1}\nu'(\\xi) \\geq \\varphi^{-1} \\biggl{(} \\biggl{(} \\dfrac{\\zeta}{\\xi}\\biggr{)}^{\\! N-1} \\varphi(u'(\\zeta)) \\biggr{)}, \\quad \\text{for all $\\xi,\\zeta\\in \\mathopen{]}\\sigma_{i},\\tau_{i}\\mathclose{]}$ with $\\xi\\leq\\zeta$,}\n\\end{equation}\nand so, from \\eqref{eq-monot1} with $\\xi=r^{*}$, $u'(\\zeta) \\leq 0$ for all $\\zeta\\in \\mathopen{[}r^{*},\\check{r}\\mathclose{]}$, in particular $u'(\\check{r}) \\leq 0$. \nAssume now $i\\geq 2$, or $i =1$ and $\\sigma_{1}\\neq 0$. 
\nAn integration of \\eqref{eq-monot1} with $\\zeta=\\check{r}$ on $\\mathopen{[}r^{*},\\check{r}\\mathclose{]}$ leads to\n\\begin{align*}\n\\delta^{*} - u(\\check{r}) &= - \\int_{r^{*}}^{\\check{r}} u'(\\xi) \\,\\mathrm{d}\\xi \n\\leq \\int_{r^{*}}^{\\check{r}} \\varphi^{-1} \\biggl{(} \\biggl{(} \\dfrac{\\check{r}}{\\xi}\\biggr{)}^{\\! N-1} \\varphi(-u'(\\check{r})) \\biggr{)} \\,\\mathrm{d}\\xi\n\\\\ &\\leq \\int_{r^{*}}^{\\check{r}} \\biggl{(} \\dfrac{\\check{r}}{\\xi}\\biggr{)}^{\\! N-1} \\varphi(-u'(\\check{r})) \\,\\mathrm{d}\\xi\n\\leq (\\check{r}-r^{*}) \\biggl{(} \\dfrac{\\check{r}}{r^{*}}\\biggr{)}^{\\! N-1} \\varphi(-u'(\\check{r}))\n\\\\ &\\leq |I^{+}_{i}| \\biggl{(} \\dfrac{\\tau_{i}}{\\sigma_{i}}\\biggr{)}^{\\! N-1} \\varphi(-u'(\\check{r})).\n\\end{align*}\nUsing \\eqref{u'_1_2} together with the fact that\n\\begin{equation*}\n\\varphi(s) \\leq 2\\varphi\\biggl{(}\\dfrac{1}{2} \\biggr{)}s, \\quad \\text{for all $s\\in\\biggl{[}0,\\dfrac{1}{2}\\biggr{]}$,}\n\\end{equation*}\nand estimate \\eqref{estim-u'}, we obtain\n\\begin{equation*}\n\\delta^{*} - u(\\check{r}) \\leq 2\\varphi\\biggl{(}\\dfrac{1}{2} \\biggr{)} |I^{+}_{i}| \\dfrac{\\tau_{i}^{2N-2}}{\\sigma_{i}^{N-1}(2\\varepsilon)^{N}} \\, u(\\check{r}).\n\\end{equation*}\nThen, \\eqref{eq-delta} holds.\nConsider now the case $i=1$ and $\\sigma_{1}=0$. Then, recalling that $u'(\\xi)\\leq 0$ on $\\mathopen{[}0,\\tau_{1}\\mathclose{]}$, we have $r^{*}=0$ and $\\check{r}=\\tau_{1}-2 \\varepsilon$. 
Now, arguing as above and using the fact that $|u'(\\xi)|<1$ on $\\mathopen{[}0,\\tau_{1}\\mathclose{]}$, we deduce\n\\begin{align*}\n\\delta^{*} - u(\\tau_{1}-2\\varepsilon) &= - \\int_{0}^{\\tau_{1}-2 \\varepsilon} u'(\\xi) \\,\\mathrm{d}\\xi \n= - \\int_{0}^{\\gamma} u'(\\xi) \\,\\mathrm{d}\\xi - \\int_{\\gamma}^{\\tau_{1}-2\\varepsilon} u'(\\xi) \\,\\mathrm{d}\\xi\n\\\\&\\leq \\gamma + \\int_{\\gamma}^{\\tau_{1}-2\\varepsilon} \\varphi^{-1} \\biggl{(} \\biggl{(} \\dfrac{\\tau_{1}-2\\varepsilon}{\\xi}\\biggr{)}^{\\! N-1} \\varphi(-u'(\\tau_{1}-2\\varepsilon)) \\biggr{)} \\,\\mathrm{d}\\xi\n\\\\ &\\leq \\gamma + 2\\varphi\\biggl{(}\\dfrac{1}{2} \\biggr{)} |I^{+}_{1}| \\dfrac{\\tau_{1}^{2N-2}}{\\gamma^{N-1}(2\\varepsilon)^{N}} \\, u(\\tau_{1}-2 \\varepsilon).\n\\end{align*}\nThen, \\eqref{eq-delta} holds.\n\nOn the other hand, if $\\tau_{i}\\geq r^{*} > \\check{r}\\in \\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}$, then $u'(r^{*}) \\geq 0$. Notice that this can happen only when $i\\geq 2$, or $i=1$ and $\\sigma_{1}\\neq 0$.\nUsing the monotonicity of $r\\mapsto r^{N-1} \\varphi(u'(r))$ in $\\mathopen{[}\\sigma_{i},\\tau_{i}\\mathclose{]}$, we deduce that\n\\begin{equation}\\label{eq-monot2}\n\\varphi^{-1} \\biggl{(} \\biggl{(} \\dfrac{\\xi}{\\zeta}\\biggr{)}^{\\! N-1} \\varphi(u'(\\xi)) \\biggr{)} \\geq u'(\\zeta), \\quad \\text{for all $\\xi,\\zeta\\in \\mathopen{]}\\sigma_{i},\\tau_{i}\\mathclose{]}$ with $\\xi\\leq\\zeta$,}\n\\end{equation}\nand so, from \\eqref{eq-monot2} with $\\zeta=r^{*}$, $u'(\\xi) \\geq 0$ for all $\\xi\\in \\mathopen{[}\\check{r},r^{*}\\mathclose{]}$, in particular $u'(\\check{r}) \\geq 0$. 
\nAn integration of \\eqref{eq-monot2} with $\\xi=\\check{r}$ on $\\mathopen{[}\\check{r},r^{*}\\mathclose{]}$ and an application of \\eqref{estim-u'} lead to\n\\begin{align*}\n\\delta^{*} - u(\\check{r}) &= \\int_{\\check{r}}^{r^{*}} u'(\\zeta) \\,\\mathrm{d}\\zeta\n\\leq \\int_{\\check{r}}^{r^{*}} \\varphi^{-1} \\biggl{(} \\biggl{(} \\dfrac{\\check{r}}{\\zeta}\\biggr{)}^{\\! N-1} \\varphi(u'(\\check{r})) \\biggr{)} \\,\\mathrm{d}\\zeta\n\\\\ &\\leq \\int_{\\check{r}}^{r^{*}}\\varphi^{-1} \\bigl{(} \\varphi(u'(\\check{r})) \\bigr{)} \\,\\mathrm{d}\\zeta\n\\leq |I^{+}_{i}| \\dfrac{\\tau_{i}^{N-1}}{(2\\varepsilon)^{N}} \\, u(\\check{r})\n\\leq |I^{+}_{i}| \\dfrac{\\tau_{i}^{N-1}}{(2\\varepsilon)^{N}} \\dfrac{\\tau_{i}^{N-1}}{\\sigma_{i}^{N-1}} \\, u(\\check{r}).\n\\end{align*}\nThen, we have \\eqref{eq-delta}.\n\n\\smallskip\n\nWe are now ready to verify condition $(H_{1})$ of Lemma~\\ref{lem-deg0}. We integrate the equation in \\eqref{eq-lem-deg0} on $\\mathopen{[}\\sigma_{i}+2\\varepsilon,\\tau_{i}-2\\varepsilon\\mathclose{]}$ and, recalling \\eqref{u'_1_2}, \\eqref{eq-delta} and that $\\varphi$ is odd, we obtain\n\\begin{equation*}\n\\lambda \\min \\bigl{\\{} g(u) \\colon u \\in \\mathopen{[}\\delta_{*},\\delta^{*} \\mathclose{]}\\bigr{\\}} \\int_{\\sigma_{i}+2\\varepsilon}^{\\tau_{i}-2\\varepsilon} r^{N-1}a(r)\\,\\mathrm{d}r \\leq 2 R^{N-1}\\varphi \\biggl{(} \\dfrac{1}{2}\\biggr{)}, \n\\end{equation*}\na contradiction with respect to $\\lambda > \\lambda^{*}$.\n\nAs for assumption $(H_{2})$, we integrate the equation in \\eqref{eq-lem-deg0} in $\\mathopen{[}0,R\\mathclose{]}$ and, passing to the absolute value, we deduce\n\\begin{equation*}\n\\alpha \\| r^{N-1}v \\|_{L^{1}} \\leq \\lambda R^{N-1} \\|a\\|_{L^{1}} \\max_{u \\in \\mathopen{[}0,\\delta^{*}\\mathclose{]}} g(u).\n\\end{equation*}\nA contradiction follows for $\\alpha$ sufficiently large. 
From Lemma~\\ref{lem-deg0} we thus obtain\n\\begin{equation}\\label{deg-B0rho*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,\\delta^{*}),0) = 0.\n\\end{equation}\n\n\\subsubsection*{Step~3. Fixing the constant $d^{*}$ and computation of the degree in $B(0,d^{*})$.}\nFirst, we claim that there exists $d^{*}\\in\\mathopen{]}0,\\delta^{*}\\mathclose{[}$ such that, for every $\\vartheta\\in \\mathopen{]}0,1\\mathclose{]}$, every non-negative solution $u(r)$ of \\eqref{eq-lem-deg1} with $\\|u\\|_{\\infty} \\leq d^{*}$ is such that $u\\equiv0$.\n\nWe assume, by contradiction, that there exists a sequence $\\{u_{n}\\}_{n}$ of non-negative solutions of \\eqref{eq-lem-deg1} for $\\vartheta=\\vartheta_{n}$ satisfying $0<\\|u_{n}\\|_{\\infty}=d_{n}\\to0$.\nWe define\n\\begin{equation*}\nv_{n}(r) = \\dfrac{u_{n}(r)}{d_{n}}, \\quad r\\in\\mathopen{[}0,R\\mathclose{]},\n\\end{equation*}\nand observe that $v_{n}(r)$ is a non-negative solution of the Neumann problem associated with\n\\begin{equation}\\label{eq-v_n}\n\\Biggl{(} \\dfrac{r^{N-1} v_{n}'}{\\sqrt{1-(u_{n}')^{2}}}\\Biggr{)}' + \\vartheta_{n} \\lambda r^{N-1} a(r) q(u_{n}(r)) v_{n} = 0,\n\\end{equation}\nwhere we set $q(u) = g(u)\/u$ for $u > 0$ and $q(0) = 0$. 
\nIntegrating equation \\eqref{eq-v_n} between $0$ and $r$ and dividing by $r^{N-1}$ we obtain that\n\\begin{equation*}\n\\dfrac{v_{n}'(r)}{\\sqrt{1-u_{n}'(r)^{2}}} = -\\dfrac{1}{r^{N-1}} \\vartheta_{n} \\lambda \\int_{0}^{r} \\xi^{N-1}a(\\xi)q(u_{n}(\\xi))v_{n}(\\xi) \\,\\mathrm{d}\\xi, \\quad \\text{for all $r \\in \\mathopen{]}0,R\\mathclose{]}$.}\n\\end{equation*}\nPassing to the absolute value we have\n\\begin{equation*}\n|v_{n}'(r)| \\leq \\dfrac{|v_{n}'(r)|}{\\sqrt{1-u_{n}'(r)^{2}}}\n\\leq \\lambda \\int_{0}^{R} |a(\\xi)| |q(u_{n}(\\xi))||v_{n}(\\xi)| \\,\\mathrm{d}\\xi, \\quad \\text{for all $r \\in \\mathopen{]}0,R\\mathclose{]}$.}\n\\end{equation*}\nTherefore, using the first condition in $(g_{0})$ and the fact that $\\| v_{n} \\|_{\\infty} \\leq 1$, we obtain that $v_{n}' \\to 0$ uniformly.\nAs a consequence, $v_{n} \\to 1$ uniformly in $\\mathopen{[}0,R\\mathclose{]}$, since\n\\begin{equation*}\n|v_{n}(r) - 1 | = |v_{n}(r) - v_{n}(\\hat{\\eta}_{n}) | \\leq \\int_{0}^{R} | v_{n}'(\\xi) | \\,\\mathrm{d}\\xi, \\quad \\text{for all $r \\in \\mathopen{[}0,R\\mathclose{]}$,}\n\\end{equation*}\nwhere $\\hat{\\eta}_{n}\\in \\mathopen{[}0,R\\mathclose{]}$ is such that $u_{n}(\\hat{\\eta}_{n}) = \\|u_{n}\\|_{\\infty} = d_{n}$.\nAn integration of equation \\eqref{eq-v_n} in $\\mathopen{[}0,R\\mathclose{]}$ gives\n\\begin{equation*}\n\\int_{0}^{R} r^{N-1} a(r) g(u_{n}(r))\\,\\mathrm{d}r = 0\n\\end{equation*}\nand hence\n\\begin{equation*}\n\\int_{0}^{R} r^{N-1} a(r) g(d_{n})\\,\\mathrm{d}r + \\int_{0}^{R} r^{N-1} a(r)\\bigl{[}g(d_{n} v_{n}(r)) - g(d_{n})\\bigr{]}\\,\\mathrm{d}r=0.\n\\end{equation*}\nDividing by $g(d_{n}) > 0$, we have\n\\begin{equation*}\n0 < - \\int_{0}^{R} r^{N-1} a(r)\\,\\mathrm{d}r \\leq R^{N-1}\\|a\\|_{L^{1}} \\sup_{r\\in \\mathopen{[}0,R\\mathclose{]}}\\biggl{|}\\dfrac{g(d_{n} v_{n}(r))}{g(d_{n})} - 1\\biggr{|}.\n\\end{equation*}\nUsing the second condition in $(g_{0})$ and recalling that $v_{n} \\to 1$ uniformly, we find a contradiction. 
The claim is thus proved and we can fix $d^{*}\\in\\mathopen{]}0,\\delta^{*}\\mathclose{[}$.\n\nFinally, condition $(H_{3})$ of Lemma~\\ref{lem-deg1} is trivially satisfied for $d=d^{*}$, and therefore\n\\begin{equation}\\label{deg-B0d*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,d^{*}),0) = 1.\n\\end{equation}\n\n\\subsubsection*{Step~4. Fixing the constant $D^{*}$ and computation of the degree in $B(0,D^{*})$.}\nFirst, we claim that there exists $D^{*}>\\delta^{*}$ such that, for every $\\vartheta\\in \\mathopen{]}0,1\\mathclose{]}$, every non-negative solution $u(r)$ of \\eqref{eq-lem-deg1} satisfies $\\|u\\|_{\\infty} < D^{*}$.\n\nWe assume, by contradiction, that there exists a sequence $\\{u_{n}\\}_{n}$ of non-negative solutions of \\eqref{eq-lem-deg1} for $\\vartheta=\\vartheta_{n}$ satisfying $\\|u_{n}\\|_{\\infty}=D_{n}\\to+\\infty$. We proceed similarly to the previous step. We define\n\\begin{equation*}\nv_{n}(r) = \\dfrac{u_{n}(r)}{D_{n}}, \\quad r\\in\\mathopen{[}0,R\\mathclose{]},\n\\end{equation*}\nwhich solves the Neumann problem associated with equation \\eqref{eq-v_n}. Since $\\|u_{n}'\\|_{\\infty} \\leq 1$, we easily find $\\|v_{n}'\\|_{\\infty} \\to 0$ and, consequently, $v_{n} \\to 1$ uniformly in $\\mathopen{[}0,R\\mathclose{]}$ (proceeding as shown in Step~3).\nIntegrating equation \\eqref{eq-v_n} and dividing by $g(D_{n}) > 0$, we thus obtain\n\\begin{equation*}\n0 < - \\int_{0}^{R} r^{N-1}a(r)\\,\\mathrm{d}r \\leq R^{N-1}\\|a\\|_{L^{1}} \\sup_{r\\in \\mathopen{[}0,R\\mathclose{]}}\\biggl{|}\\dfrac{g(D_{n} v_{n}(r))}{g(D_{n})} - 1\\biggr{|}.\n\\end{equation*}\nUsing $(g_{\\infty})$, a contradiction easily follows. 
The claim is thus proved and we can fix $D^{*}>\\delta^{*}$.\n\nFinally, condition $(H_{3})$ of Lemma~\\ref{lem-deg1} is trivially satisfied for $d=D^{*}$, and therefore\n\\begin{equation}\\label{deg-B0D*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,D^{*}),0) = 1.\n\\end{equation}\n\n\\subsubsection*{Step~5. Concluding the proof.}\nStarting from the formulas \\eqref{deg-B0rho*}, \\eqref{deg-B0d*}, \\eqref{deg-B0D*} proved in the last three steps, we apply the additivity property of the Leray--Schauder degree to obtain\n\\begin{equation*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,\\delta^{*}) \\setminus \\overline{B(0,d^{*})},0) = -1\n\\end{equation*}\nand\n\\begin{equation*}\n\\mathrm{deg}_{\\mathrm{LS}}(\\mathrm{Id}-T_{1,0},B(0,D^{*}) \\setminus \\overline{B(0,\\delta^{*})},0) = 1.\n\\end{equation*}\nAs a consequence of the existence property of the degree, there exist two solutions $u_{s}\\in B(0,\\delta^{*}) \\setminus \\overline{B(0,d^{*})}$ and $u_{\\ell}\\in B(0,D^{*}) \\setminus \\overline{B(0,\\delta^{*})}$ of problem \\eqref{eq-phi}. Then, $u_{s}(r)$ and $u_{\\ell}(r)$ satisfy \n\\begin{equation*}\nd^{*} < \\| u_{s} \\|_{\\infty} < \\delta^{*} < \\| u_{\\ell} \\|_{\\infty} < D^{*}.\n\\end{equation*}\nBy maximum principle arguments, both these solutions are non-negative and hence they solve \\eqref{eq-main}.\nIt thus remains to show that they are positive, that is, $u(r) > 0$ for every $r \\in \\mathopen{[}0,R\\mathclose{]}$, for both $u = u_s$ and $u = u_{\\ell}$.\n\nBy contradiction, assume that $u(r_0) = 0$ for some $r_0 \\in \\mathopen{[}0,R\\mathclose{]}$. 
Then $u'(r_0) = 0$, coming from the boundary conditions if $r_0 = 0$ and from the fact that $u(r)$ is non-negative if $r_0 \\in \\mathopen{]}0,R\\mathclose{]}$.\n\nIf $r_0 = 0$, integrating the equation we find\n\\begin{equation*}\nu(r) = - \\int_{0}^r \\varphi^{-1}\\biggl{(}\\dfrac{1}{\\zeta^{N-1}} \\int_{0}^{\\zeta} \\xi^{N-1}a(\\xi)g(u(\\xi))\\,\\mathrm{d}\\xi \\biggr{)} \\,\\mathrm{d}\\zeta, \\quad \\text{for all $r \\in \\mathopen{[}0,R\\mathclose{]}$,}\n\\end{equation*}\nimplying, since $|\\varphi^{-1}(s)|\\leq |s|$ for all $s \\in \\mathbb{R}$, that\n\\begin{equation*}\n|u(r)| \\leq \\int_{0}^r \\int_{0}^{\\zeta} |a(\\xi)| |g(u(\\xi))|\\,\\mathrm{d}\\xi \\,\\mathrm{d}\\zeta \n\\leq R \\int_{0}^r |a(\\xi)| |g(u(\\xi))|\\,\\mathrm{d}\\xi, \n\\end{equation*}\nfor all $r \\in \\mathopen{[}0,R\\mathclose{]}$.\nRecalling that $g(u)\/u \\to 0$ for $u \\to 0^{+}$, we finally obtain \n\\begin{equation*}\n|u(r)| \\leq M R \\int_{0}^r |a(\\xi)| |u(\\xi)|\\,\\mathrm{d}\\xi , \\quad \\text{for all $r \\in \\mathopen{[}0,R\\mathclose{]}$,}\n\\end{equation*}\nwhere $M > 0$ is a suitable constant. By Gronwall's lemma, $u(r) = 0$ for every $r \\in \\mathopen{[}0,R\\mathclose{]}$, a contradiction.\n\nIn the case $r_0 \\in \\mathopen{]}0,R\\mathclose{]}$, the contradiction is reached again using the assumption $g(u)\/u \\to 0$ for $u \\to 0^{+}$, which, as is well known, implies (together with the smoothness of $\\varphi^{-1}$)\nthat the only solution of the planar system\n\\begin{equation*}\nu' = \\varphi^{-1}\\biggl{(} \\dfrac{v}{r^{N-1}} \\biggr{)}, \\qquad v' = -r^{N-1} a(r) g(u),\n\\end{equation*}\nsatisfying the initial condition $(u(r_0),v(r_0)) = (0,0)$ is the trivial one.\n\\qed\n\n\\bibliographystyle{elsart-num-sort}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStable broadband sources are essential for a variety of applications, from spectroscopy \\cite{Liu2013} to imaging \\cite{Kano2005} and nonlinear optical parametric amplifiers \\cite{Reed1995}. 
Supercontinuum (also known as white light) generation offers a promising tool for these multidisciplinary uses. Sapphire is regarded as the crystal of choice for visible supercontinuum generation \\cite{Yakovlev1994} and is the focus of our particular study. Supercontinua are generated in sapphire through the process of single filamentation. This general process encompasses a wide variety of linear and nonlinear effects including self-steepening, self-phase modulation, dispersion, four-wave mixing, Raman excitation, second and third harmonic generation, and plasma generation, absorption, and refraction \\cite{Alfano2006}. Since in the most general case it is difficult to separate the exact contribution of each of these, individual optimization of each effect is neither particularly feasible nor especially desirable. However, each of these effects can be highly influenced by the spatial distribution of the beam. One of the most basic effects of spatial shaping is to focus the beam more tightly, thereby generating more plasma and influencing the resultant spectrum. Although studies with Bessel \\cite{Majus2013} and Laguerre-Gauss beams \\cite{Sztul2006a} have failed to find significant spectral deviations or improvements in efficiency relative to the Gaussian regime, systematic spatial optimization has not been performed. \n\nPrevious work that has shown promise in this direction includes a study performed with microlenses generated via spatial light modulators \\cite{Borrego-Varillas2014}. Moreover, there have been several successful studies of spectral pump-beam optimization in filamentation \\cite{Thompson2017, Ackermann2006} and other nonlinear effects, such as the control of second harmonic generation \\cite{Thompson2016} and the enhancement of spontaneous Raman signals through a turbid medium \\cite{Thompson2016a}. 
We extend these results and methods to the theoretically challenging regime of supercontinuum generation by using the wavefront shaping algorithm to influence the supercontinuum spectrum.\n\n\n\nThere are several aspects of interest here, including reshaping the physics of filamentation and drawing attention to the potentials of non-Gaussian beams. More commercial applications are also possible. We envision that this work may lead to universally stronger seeds for spectroscopic applications that depend on nonlinear effects for a large signal-to-noise ratio. Our preliminary results are a proof-of-concept that the idea of spatial optimization has merit and may be further expanded with additional work.\n\n\n\n\\section{Experimental Setup}\n\nThe experimental setup is depicted in Fig. \\ref{fig:setup}. We used a Ti:Sapphire regenerative amplifier (Coherent, Legend) to produce infrared ($\\lambda = 802$ nm) $35$ fs pulses with a $1$ kHz repetition rate and $4$ W average power that we attenuated to produce supercontinuum. We then investigated two regimes of supercontinuum generation: chirped and unchirped. In the first case, we added positive chirp by changing the grating distance within the compressor unit of the amplifier. This produced pulses of 900 fs FWHM duration, measured using a commercial autocorrelator (Pulse Check; A.P.E.).\n\n\n\\begin{figure}\n\t\\centering\n\t\n\n\n\t\\includegraphics[width=0.5\\textwidth]{setup}\n\t\\caption{Setup for generating supercontinua (SC) from shaped pulses; a photograph of the generated SC is shown in the inset. The angle of the spatial light modulator (SLM) is greatly exaggerated. OAP stands for off-axis parabola and was used to collimate the SC spectrum after the crystal.}\n\t\\label{fig:setup}\n\t\n\n\\end{figure} \n\nDifferent powers per pulse were needed to produce supercontinua in each regime: we used 5--6 $\\mu$J in the chirped and 1 $\\mu$J in the unchirped cases. 
In both regimes, we used a phase-only spatial light modulator (Hamamatsu; x10468; abbreviated SLM) to shape the originally Gaussian beam. The beam has a 15 mm diameter prior to being shaped by the SLM; this is the largest diameter to which we could expand the beam without clipping on the SLM screen. We then split the beam and focused part of it with a 5 cm focal length lens to generate a supercontinuum in a 3 mm thick single crystal c-cut sapphire plate (Newlight Photonics; SAP0030-C; Toronto, ON). The resultant diverging supercontinuum was collimated by an off-axis paraboloid to a 1-inch diameter. The pump beam was then filtered out via a 750 nm short pass filter (Semrock; FF01-745\/SP-25; Rochester, NY). The filtered beam was subsequently refocused with a 20 cm focal length lens into a multi-mode 600 $\\mu$m core fiber. Since the supercontinuum light should roughly focus to 5 $\\mu$m, the fiber core is of sufficient size to collect all the light and not be affected by any spatial phase changes the SLM adds. We checked this assumption by translating the fiber in x-y dimensions in the focal plane; this operation revealed no unmeasured light. Hence, we are confident that the optimization algorithm (the VCSA, described below) is not optimizing the light-collection system.\n\nThe other part of the beam was sent through a long (2 m focal length) lens to be very loosely focused onto a CCD array (Spiricon; SP620U) and recorded by the computer. These images did not take part in the spatial optimization at all -- they are there to help visualize the effect of different phase maps on the beam's spatial profile in the focus. \n\n\\section{Optimization Details}\n\nFor all optimization regimes, we used a variation of the continuous sequential algorithm (abbreviated VCSA) \\cite{Vellekoop2008, Thompson2015}. The VCSA groups pixels on the SLM together and cycles through $2\\pi$ phase values in 8 steps. The algorithm then compares spectrometer output in a particular spectral range before and after adding different phase values. 
If the average of the spectrometer reading in that spectral range improves, then the algorithm keeps the phase value. This cycle is repeated three times and the results averaged to minimize influence from shot-to-shot fluctuations and other noise. The algorithm then moves on to another pixel group and repeats the process. Each iteration takes 12 seconds, with the spectrometer integration time forming the largest limit on speed. \n\nFor all results given in this paper, we employed the ``spiral out\" method of this algorithm, which starts with large pixel groups (of 264 $\\times$ 300 pixels) in the center and spirals out to the edges, as in Figure \\ref{fig:algorithm}. It then starts a new stage at the center with smaller pixel groups (of 132 $\\times$ 150 pixels) and spirals out until it is forced to repeat itself with even smaller groups of pixels (of 72 $\\times$ 60 pixels). The final run consists of groups of 24 $\\times$ 24 pixels. In total, we let the algorithm optimize for roughly half an hour for the results given in this paper. We do not consider time to be a major limit in our experiment, as there are no discernible differences between spectra taken at the beginning of the day and those taken at the end. Further, for spectroscopic applications, it will not be necessary to quickly reoptimize the masks so long as the user takes care to produce a bank of working masks that they may easily switch between.\n\n\\begin{figure}\n\t\\centering\n\t\n\t\n\t\n\n\t\\includegraphics[width=0.5\\textwidth]{algorithm_3}\n\t\\caption{The smaller the block the longer the program takes to finish; the process can be stopped at any time if the user is satisfied with the results. Our SLM comprises 792 $\\times$ 600 pixels total.}\n\t\\label{fig:algorithm}\n\t\n\t\n\t\n\\end{figure} \n\n\\section{Preliminary Results}\n\nUsing these methods, we were able to obtain a general 10\\% broadening of the spectral width of the supercontinuum generation for highly positively chirped pulses (900 fs), as shown in Fig. 
\\ref{fig:chirpedresults}. However, this regime is tricky to work with as the damage threshold for these focusing conditions in sapphire is near the critical power of self-focusing. This makes further optimization difficult but not impossible, potentially under different focusing conditions.\n\nFor 35 fs unchirped pulses, we discovered that it is possible to shift the supercontinuum spectral cutoff peak between 450 and 650 nm, as our preliminary results indicate in Figure \\ref{fig:results}. The region from 450--500 nm is completely absent in the supercontinuum spectrum generated without any phase mask applied and so represents a significant broadening ($\\gtrsim 20\\%$). In this case, the effect of the added phase mask on the supercontinuum spectrum is easily noticeable by eye and hence cannot be due to any limitations in our light-collection system.\n\nFurther, the phase masks shown in Figure \\ref{fig:results} generate the same spectrum from day-to-day without any special additional environmental control, making our experiment repeatable in a variety of conditions. However, the spatial profile of the shaped beam is very sensitive to the alignment of the pump beam on the SLM screen. This is because any displacement in this region will result in different parts of the beam obtaining different phase values, and hence not reproducing the original phase-optimized beam. In this case, each phase mask will need to be re-optimized to obtain a tailored spectrum.\n\n\\begin{figure}\n\t\\centering\n\t\n\t\n\t\n\n\t\\includegraphics[width=0.5\\textwidth]{chirped_results2}\n\t\\caption{SC spectrum before (blue line) and after (red line, dash-dot) spatially optimizing the pump pulse. 
The range of optimization was 500--550 nm.}\n\t\\label{fig:chirpedresults}\n\t\n\t\n\t\n\t\n\t\n\\end{figure} \n\n\n\n\\begin{figure}\n\t\\centering\n\t\n\t\n\t\n\n\t\\includegraphics[width=1\\textwidth]{dotdash_results2}\n\t\\caption{(a) Measured supercontinuum spectrum for different optimization regimes -- the supercontinuum cutoff peak is spectrally shifted as the spatial shape changes. Each entry in the legend corresponds to the optimization range of that particular run of the algorithm (i.e.\\ for the second entry, the algorithm attempts to optimize the average spectrometer-measured counts in the range of 450--500 nm). All spectra were taken with the phase masks and profiles in (b). (b) SLM phase masks (top), beam profiles in the focus magnified approximately 40 times and with the left three profiles integrated 5$\\times$ longer than the right-most profile (middle), and true-color photographs of the resultant supercontinuum (bottom; taken with a Sony DSLR camera) for different optimization regimes.}\n\t\\label{fig:results}\n\t\n\t\n\t\n\\end{figure} \n\n\\section{Conclusions}\n\nOur preliminary results indicate that spatial beam shaping has a substantial untapped potential in optimizing supercontinuum generation by enhancing a particular spectral region. We envision that this technique can dramatically improve the ability to tailor the supercontinuum spectrum for any particular application. For example, we can provide significantly stronger seed pulses for optical parametric amplifiers and substantially enhance signals in broadband coherent anti-Stokes Raman spectroscopy\/microscopy. An SLM provides a much more flexible platform, as compared to a micro-structured fiber, to tailor the spectral properties of the supercontinuum \\cite{Zheltikov2006}. The user will simply load the SLM with the phase mask for the particular spectral range they desire. 
By pre-generating optimal phase masks, the frequency can be tuned at the 10 Hz refresh rate of the SLM.\n\nIn the future, we envision that this will lead to higher available powers for various nonlinear spectroscopy experiments and hence a greater signal-to-noise ratio, paving the way for future precision measurements. Further work will include explorations of the theoretical foundations of spatial effects in high-order nonlinear optical interactions, which we initiated in \\cite{Thompson2016}. We also plan thorough investigations of algorithmic shaping in the IR.\n\nThis research was partially supported by the NSF (Grant \\# PHY-1307153, DBI-1532188, and ECCS-1509268), the US Department of Defense (award FA9550-15-1-0517), the Welch Foundation (Awards No. A-1547 and No. A-1261), the Cancer Prevention Research Institute of Texas (grant RP160834), and the Office of Naval Research (Award No. N00014-16-1-3054).\n\n\\bibliographystyle{jmo_test2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe $\\ast$-involution, sometimes referred to as Kashiwara's involution, is an involution on the crystal $B(\\infty)$ that is induced from a subtle involutive antiautomorphism of $U_q(\\mathfrak{g})$. \nThe importance of $\\ast$ in the theory of crystal bases and their applications cannot be overstated. Here are just a few of its applications.\n\\begin{enumerate}\n\\item Saito~\\cite{Sai94} used the involution during the proof that Lusztig's PBW basis has a crystal structure isomorphic to $B(\\infty)$, provided that $\\mathfrak{g}$ is a finite-dimensional semisimple Lie algebra.\n\\item Kamnitzer and Tingley~\\cite{KT09} generalize the definition of the crystal commutor of Henriques and Kamnitzer \\cite{HK06} in terms of the $\\ast$-involution. 
This leads to a proof by Savage~\\cite{Sav09} that the category of crystals forms a coboundary category over any symmetrizable Kac-Moody algebra.\n\\item In affine type $A$, Jacon and Lecouvey~\\cite{JL09} prove that $\\ast$ coincides with the Zelevinsky involution~\\cite{MW86,Zel80} on the set of simple modules for the affine Hecke algebra.\n\\end{enumerate}\nSeveral combinatorial realizations of the $\\ast$-involution are known in the literature. For example, Lusztig~\\cite{Lusztig93} gave a description of the behavior of $\\ast$ on Lusztig's PBW basis in the finite types, Kamnitzer~\\cite{Kamnitzer07} showed that $\\ast$ acts on an MV polytope by negation, Kashiwara and Saito~\\cite{KS97} gave a description of $\\ast$ in terms of quiver varieties, and Jacon and Lecouvey~\\cite{JL09} gave a description of the involution in terms of the multisegment model. Such model-specific calculations of the $*$-crystal operators are important as, {\\it a priori}, the algorithm for computing the action of these operators is not efficient \\cite[Thm. 2.2.1]{K93} (see also \\cite[Prop. 8.1]{K95}).\n\n\nIn this paper, the authors continue their development of the rigged configuration model of $B(\\infty)$~\\cite{SalS15,SalS15II}. Rigged configurations are sequences of partitions, one for every node of the underlying Dynkin diagram, where each part is paired with an integer, satisfying certain conditions. These objects arose as an important tool in mathematical physics from the studies of the Bethe Ansatz by Kerov, Kirillov, and Reshetikhin~\\cite{KKR86,KR86}, and they have been shown to correspond to the action and angle variables of box-ball systems~\\cite{KOSTY06}. Additionally, rigged configurations have been used extensively in the theory of Kirillov-Reshetikhin crystals \\cite{HKOTT02,HKOTY99,OSS13,OSS03,OSS03II,Sakamoto14,S05,S06,SchillingS15,SW10,Scrimshaw15}. 
During the course of this study, a crystal structure was given to rigged configurations~\\cite{S06, SchillingS15}.\n\nOur description of $\\ast$ is as nice as one could hope: in contrast to the definition of $e_a$ and $f_a$ on rigged configurations, one interchanges ``label'' and ``colabel'' to obtain a definition of $e_a^*$ and $f_a^*$ (see Definition \\ref{def:RC_star_crystal_ops}). In turn, applying $\\ast$ to a rigged configuration replaces all labels with its corresponding colabels and leaves the partitions fixed (see Corollary~\\ref{cor:RC_star_involution}).\n\nThe method of proof applied here is to use a classification theorem of $B(\\infty)$ asserted by Tingley and Webster~\\cite{TW} by translating the $\\ast$-involution directly into the classification theorem of Kashiwara and Saito~\\cite{KS97} without the use of Kashiwara's embedding. This classification theorem requires several assertions to be satisfied, and proving these assertions hold in $\\RC(\\infty)$ with our new $\\ast$-crystal operators consumes most of Section \\ref{sec:*crystal}.\n\n\nThe (conjectural) bijection $\\Phi$ between $U_q'(\\mathfrak{g})$-rigged configurations and tensor products of Kirillov-Reshetikhin crystals~\\cite{OSS13, OS12, OSS03, OSS03II, OSS03III, S05, SchillingS15, SS2006, Scrimshaw15} is given roughly as follows. It removes the largest row with a colabel of 0, which is the minimal colabel, for each $e_a$ from $b$ to the highest weight element in $B(\\Lambda_1)$, where $b$ is the leftmost factor in the tensor product. \nLet $\\theta$ be the involution on $U_q'(\\mathfrak{g})$-rigged configurations which interchanges labels with colabels on classically highest weight $U_q'(\\mathfrak{g})$-rigged configurations and let $\\widetilde{\\ast}^L$ denote the involution which is the composition of Lusztig's involution and the map sending the result to the classically highest weight element~\\cite{S05, SS2006} (where it is also denoted by $\\ast$). 
It is known that $\\Phi \\circ \\theta = \\widetilde{\\ast}^L \\circ \\Phi$ on classically highest weight elements. In particular, the latter map reverses the order of the tensor product. Thus, given the description of the crystal commutor, our work suggests there is a strong link between the $\\ast$-involution and the bijection $\\Phi$. We hope this could lead to a more direct description of the bijection $\\Phi$, its related properties, and a (combinatorial) proof of the $X = M$ conjecture of~\\cite{HKOTT02, HKOTY99}.\n\nAnother model for $B(\\infty)$ uses marginally large tableaux, as developed by Hong and Lee \\cite{HL08, HL12}. It is known that the bijection $\\Phi$ mentioned above can be extended to a $U_q(\\mathfrak{g})$-crystal isomorphism between rigged configurations and marginally large tableaux~\\cite{SalS15III} when $\\mathfrak{g}$ is of finite classical type or type $G_2$. An ambitious hope of this paper is that it may lead to a description of the $\\ast$-crystal structure on marginally large tableaux. (In finite type $A$, this result is in~\\cite{CT15}.) However, this appears to be a hard problem as the bijection $\\Phi$ is highly recursive and depends on conditions on colabels, many of which can change under applying the $\\ast$-crystal operators.\n\nThere is also a model for $B(\\infty)$ using Littelmann paths constructed by Li and Zhang~\\cite{LZ11}. From~\\cite{PS15}, natural virtualization maps arise to the embeddings on the underlying geometric information. The virtualization map on rigged configurations is also quite natural, giving evidence that rigged configurations encode more geometry than their combinatorial origins and description suggests. This is also evidence that there exists a straightforward and natural explicit combinatorial bijection between rigged configurations and the Littelmann path model. 
Thus this work could potentially lead to a description of the $\\ast$-crystal on the Littelmann path model.\n\nIn a similar vein, the virtualization map is known to act naturally on MV polytopes~\\cite{JS15,NS08}, also reflecting the geometric information of the root systems via the Weyl group. This is evidence that there should be a natural explicit combinatorial bijection between MV polytopes and rigged configurations (and the Littelmann path model). Moreover, considering the $\\ast$-involution, which acts by negation on MV polytopes~\\cite{Kamnitzer07, Kamnitzer10}, this work gives further evidence that such a bijection should exist. Furthermore, this bijection would suggest a natural generalization beyond finite type, which the authors expect to recover the KLR polytopes of~\\cite{TW}.\n\nThis paper is organized as follows.\nIn Section~\\ref{sec:background}, we give the necessary background on crystals and the $\\ast$-involution.\nIn Section~\\ref{sec:RC}, we give background information on the rigged configuration model for $B(\\infty)$.\nIn Section~\\ref{sec:*crystal}, we give the proof of our main theorem and some consequences.\nIn Section~\\ref{sec:hw_crystals}, we give a description of highest weight crystals using the $\\ast$-crystal structure and describe the natural projection from $B(\\infty)$ in terms of rigged configurations.\n\n\n\n\n\n\n\n\n\\section{Crystals and the $\\ast$-involution}\n\\label{sec:background}\n\nLet $\\mathfrak{g}$ be a symmetrizable Kac-Moody algebra with quantized universal enveloping algebra $U_q(\\mathfrak{g})$ over $\\mathbf{Q}(q)$, index set $I$, generalized Cartan matrix $A = (A_{ij})_{i,j\\in I}$, weight lattice $P$, root lattice $Q$, fundamental weights $\\{\\Lambda_i \\mid i \\in I\\}$, simple roots $\\{\\alpha_i \\mid i\\in I\\}$, and simple coroots $\\{h_i \\mid i\\in I\\}$. 
There is a canonical pairing $\\langle\\ ,\\ \\rangle\\colon P^\\vee \\times P \\longrightarrow \\mathbf{Z}$ defined by $\\langle h_i, \\alpha_j \\rangle = A_{ij}$, where $P^{\\vee}$ is the dual weight lattice.\n\nAn \\defn{abstract $U_q(\\mathfrak{g})$-crystal} is a set $B$ together with maps\n\\[\n e_i, f_i \\colon B \\longrightarrow B\\sqcup\\{0\\},\\qquad\n\\varepsilon_i,\\varphi_i\\colon B \\longrightarrow \\mathbf{Z} \\sqcup \\{-\\infty\\},\\qquad\n\\mathrm{wt}\\colon B \\longrightarrow P\n\\]\nsatisfying certain conditions (see \\cite{HK02,K95}). Any $U_q(\\mathfrak{g})$-crystal basis, defined in the classical sense (see \\cite{K91}), is an abstract $U_q(\\mathfrak{g})$-crystal. In particular, the negative half $U_q^-(\\mathfrak{g})$ of the quantized universal enveloping algebra of $\\mathfrak{g}$ has a crystal basis which is an abstract $U_q(\\mathfrak{g})$-crystal. We denote this crystal by $B(\\infty)$ (rather than using the entire tuple $(B(\\infty),e_i,f_i,\\varepsilon_i,\\varphi_i,\\mathrm{wt})$), and denote its highest weight element by $u_\\infty$. As a set, one has\n\\[\nB(\\infty) = \\{ f_{i_d} \\cdots f_{i_2} f_{i_1} u_\\infty : i_1,\\dots,i_d \\in I, \\ d \\ge 0 \\}.\n\\]\nThe remaining crystal structure on $B(\\infty)$ is\n\\begin{align*}\n\\mathrm{wt}( f_{i_d} \\cdots f_{i_2} f_{i_1} u_\\infty) &= -\\alpha_{i_1}-\\alpha_{i_2}-\\cdots-\\alpha_{i_d} ,\\\\\n\\varepsilon_i(b) &= \\max\\{ k \\in \\mathbf{Z} : e_i^k b \\neq 0 \\}, \\\\\n\\varphi_i(b) &= \\varepsilon_i(b) + \\langle h_i,\\mathrm{wt}(b) \\rangle. 
\n\\end{align*}\nWe say that $b\\in B(\\infty)$ has \\defn{depth} $d$ if $b = f_{i_d} \\cdots f_{i_2} f_{i_1} u_\\infty$ for some $i_1,\\dots,i_d \\in I$.\n\n\n\nThere is a $\\mathbf{Q}(q)$-antiautomorphism $*\\colon U_q(\\mathfrak{g}) \\longrightarrow U_q(\\mathfrak{g})$ defined by\n\\[\nE_i \\mapsto E_i, \\qquad\nF_i \\mapsto F_i, \\qquad\nq \\mapsto q, \\qquad\nq^h \\mapsto q^{-h}.\n\\]\nThis is an involution which leaves $U_q^-(\\mathfrak{g})$ stable. Thus, the map $\\ast$ induces a map on $B(\\infty)$, which we also denote by $*$, and is called the \\defn{$\\ast$-involution} or \\defn{star involution} (and is sometimes known as Kashiwara's involution). Denote by $B(\\infty)^*$ the image of $B(\\infty)$ under $*$. \n\n\\begin{thm}[\\cite{K93,Lusztig90}]\nWe have\n$\nB(\\infty)^* = B(\\infty).\n$\n\\end{thm}\n\nThis induces a new crystal structure on $B(\\infty)$ with Kashiwara operators\n\\[\n e_i^* = * \\circ e_i \\circ *,\\ \\ \\ \\ \\ \\ \n f_i^* = * \\circ f_i \\circ *,\n\\]\nand the remaining crystal structure is given by\n\\[\n\\varepsilon_i^* = \\varepsilon_i \\circ * , \\qquad \\qquad\n\\varphi_i^* = \\varphi_i \\circ *,\n\\]\nand weight function $\\mathrm{wt}$, the usual weight function on $B(\\infty)$.\nAdditionally, for $b\\in B(\\infty)$ and $i\\in I$, define\n\\begin{equation}\n\\label{eq:jump}\n\\kappa_i(b) := \\varepsilon_i(b) + \\varepsilon_i^*(b) + \\langle h_i, \\mathrm{wt}(b)\\rangle.\n\\end{equation}\nThis was called the {\\it $i$-jump} in \\cite{LV11}.\n\n\n\n\n\n\nWe will appeal to the following statement from \\cite{CT15}, which was proven in a dual form in \\cite{TW} based on Kashiwara and Saito's classification theorem for $B(\\infty)$ from \\cite{KS97}. First, a \\defn{bicrystal} is a set $B$ with two abstract $U_q(\\mathfrak{g})$-crystal structures $(B,e_i,f_i,\\varepsilon_i,\\varphi_i,\\mathrm{wt})$ and $(B,e_i^\\star,f_i^\\star,\\varepsilon_i^\\star,\\varphi_i^\\star,\\mathrm{wt})$ with the same weight function. 
In such a bicrystal $B$, we say $b\\in B$ is a \\defn{highest weight element} if $e_ib = e_i^\\star b = 0$ for all $i \\in I$.\n\n\\begin{prop}\n\\label{prop:star_properties}\nFix a bicrystal $B$ with highest weight $b_0$ such that the crystal data is determined by setting $\\mathrm{wt}(b_0) = 0$. Assume further that, for all $i \\neq j$ in $I$ and all $b\\in B$, \n\\begin{enumerate}\n\\item\\label{item:star1} $f_ib$, $f_i^\\star b \\neq 0$;\n\\item\\label{item:star2} $f_i^\\star f_jb = f_jf_i^\\star b$;\n\\item\\label{item:star3} $\\kappa_i(b) \\ge 0$;\n\\item\\label{item:star4} $\\kappa_i(b) = 0$ implies $f_ib = f_i^\\star b$;\n\\item\\label{item:star5} $\\kappa_i(b) \\ge 1$ implies $\\varepsilon_i^\\star(f_ib) = \\varepsilon_i^\\star(b)$ and $\\varepsilon_i(f_i^\\star b) = \\varepsilon_i(b)$;\n\\item\\label{item:star6} $\\kappa_i(b) \\ge 2$ implies $f_if_i^\\star b = f_i^\\star f_ib$.\n\\end{enumerate}\nThen \n\\[\n(B,e_i,f_i,\\varepsilon_i,\\varphi_i,\\mathrm{wt}) \\cong (B,e_i^\\star,f_i^\\star,\\varepsilon_i^\\star,\\varphi_i^\\star,\\mathrm{wt}) \\cong B(\\infty),\\]\nwith $e_i^\\star = e_i^*$ and $f_i^\\star = f_i^*$.\n\\end{prop}\n\nHowever, we will need to slightly weaken the assumptions of Proposition~\\ref{prop:star_properties}.\n\n\\begin{prop}\n\\label{prop:weaker_conditions}\nLet $(B,e_i,f_i,\\varepsilon_i,\\varphi_i,\\mathrm{wt})$ and $(B^\\star,e_i^\\star,f_i^\\star,\\varepsilon_i^\\star,\\varphi_i^\\star,\\mathrm{wt})$ be highest weight abstract $U_q(\\mathfrak{g})$-crystals with the same highest weight vector $b_0 \\in B \\cap B^\\star$, where the remaining crystal data is determined by setting $\\mathrm{wt}(b_0) = 0$. Suppose also that (\\ref{item:star1})--(\\ref{item:star6}) are satisfied. 
Then\n\\[\n(B,e_i,f_i,\\varepsilon_i,\\varphi_i,\\mathrm{wt}) \\cong (B^\\star,e_i^\\star,f_i^\\star,\\varepsilon_i^\\star,\\varphi_i^\\star,\\mathrm{wt}) \\cong B(\\infty),\n\\]\nwith $e_i^\\star = e_i^*$ and $f_i^\\star = f_i^*$.\n\\end{prop}\n\n\\begin{proof}\nWe prove that $B \\cap B^\\star$ is closed under $f_i$ and $f_i^\\star$ by using induction on the depth and making repeated use of conditions (\\ref{item:star1})--(\\ref{item:star6}) above. The base case is depth $0$, where we just have $b_0$. Suppose all elements of depth at most $d$ in $B$ and $B^\\star$ are in $B \\cap B^\\star$. Next, fix some $b \\in B \\cap B^\\star$ at depth $d$, and write $b = f_j b'$ for some $b'$ of depth $d-1$. If $\\kappa_i(b) = 0$, then $f_i b = f_i^\\star b$ for all $i\\in I$. If $i \\neq j$, then $f_i^\\star f_j b' = f_j f_i^\\star b'$. Hence $f_i^\\star b \\in B \\cap B^\\star$ and $f_j b \\in B \\cap B^\\star$ by our induction assumption and the fact that $B$ (resp., $B^\\star$) is closed under $f_j$ (resp., $f_i^\\star$). A similar argument shows that $f_i b \\in B \\cap B^\\star$ if $\\kappa_i(b) \\geq 1$ since $\\kappa_i(b') \\geq 2$. Therefore $B = B \\cap B^\\star = B^\\star$ since $B \\cap B^\\star$ is closed under $f_i$ and $f_i^\\star$ and generated by $b_0$ (along with $B$ and $B^\\star$). Thus the claim follows by Proposition~\\ref{prop:star_properties}.\n\\end{proof}\n\n\n\n\n\\section{Rigged configurations}\n\\label{sec:RC}\n\nLet $\\mathcal{H} = I \\times \\mathbf{Z}_{>0}$. A rigged configuration is a sequence of partitions $\\nu = (\\nu^{(a)} \\mid a \\in I)$ such that each row $\\nu_i^{(a)}$ has an integer called a \\defn{rigging}, and we let $J = \\bigl(J_i^{(a)} \\mid (a, i) \\in \\mathcal{H} \\bigr)$, where $J_i^{(a)}$ is the multiset of riggings of rows of length $i$ in $\\nu^{(a)}$. We consider there to be an infinite number of rows of length $0$ with rigging $0$; i.e., $J_0^{(a)} = \\{0, 0, \\dotsc\\}$ for all $a \\in I$. The term rigging will be interchanged freely with the term \\defn{label}. 
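\n\nAs a purely illustrative aside (our own, not part of the paper's development), the data of $(\\nu^{(a)}, J^{(a)})$ for a single node $a$ can be encoded as a list of (row length, rigging) pairs, with the multisets $J_i^{(a)}$ recovered by grouping riggings by row length; the encoding and the helper name below are hypothetical.

```python
# Hypothetical encoding (not from the paper): one node a of a rigged
# configuration as a list of (row_length, rigging) pairs.
from collections import defaultdict

def rigging_multisets(rows):
    """Recover the multisets J_i^{(a)}: group riggings by row length.

    Sorting each group lets us compare multisets as plain lists.
    """
    J = defaultdict(list)
    for length, rigging in rows:
        J[length].append(rigging)
    return {i: sorted(riggings) for i, riggings in J.items()}

# Two listings of the same rigged partition yield the same multisets,
# reflecting the identification of rigged configurations in the text:
a = rigging_multisets([(3, -2), (1, -1), (3, 0)])
b = rigging_multisets([(3, 0), (3, -2), (1, -1)])
assert a == b == {3: [-2, 0], 1: [-1]}
```
\n\n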
We identify two rigged configurations $(\\nu, J)$ and $(\\widetilde{\\nu}, \\widetilde{J})$ if \n\\[\nJ_i^{(a)} = \\widetilde{J}_i^{(a)}\n\\]\nfor any fixed $(a, i) \\in \\mathcal{H}$. Let $(\\nu, J)^{(a)}$ denote the rigged partition $(\\nu^{(a)}, J^{(a)})$.\n\nDefine the \\defn{vacancy numbers} of $\\nu$ to be \n\\begin{equation}\n\\label{eq:vacancy}\np_i^{(a)}(\\nu) = p_i^{(a)} = - \\sum_{(b,j) \\in \\mathcal{H}} A_{ab} \\min(i, j) m_j^{(b)},\n\\end{equation}\nwhere $m_i^{(a)}$ is the number of parts of length $i$ in $\\nu^{(a)}$. The \\defn{corigging}, or \\defn{colabel}, of a row in $(\\nu,J)^{(a)}$ with rigging $x$ is $p_i^{(a)} - x$. In addition, we can extend the vacancy numbers to\n\\[\np_{\\infty}^{(a)} = \\lim_{i\\to\\infty} p_i^{(a)} = - \\sum_{b \\in I} A_{ab} \\lvert \\nu^{(b)} \\rvert\n\\]\nsince $\\sum_{j=1}^{\\infty} \\min(i,j) m_j^{(b)} = \\lvert \\nu^{(b)} \\rvert$ for $i \\gg 1$. Note this is consistent with letting $i = \\infty$ in Equation~\\eqref{eq:vacancy}.\n\nLet $\\RC(\\infty)$ denote the set of rigged configurations generated by $(\\nu_{\\emptyset}, J_{\\emptyset})$, where $\\nu_{\\emptyset}^{(a)} = 0$ for all $a \\in I$, and closed under the crystal operators as follows.\n\n\\begin{dfn}\n\\label{def:RC_crystal_ops}\nFix some $a \\in I$, and let $x$ be the smallest rigging in $(\\nu,J)^{(a)}$.\n\\begin{itemize}\n\\item[\\defn{$e_a$}:] If $x =0$, then $e_a(\\nu, J) = 0$. Otherwise, let $r$ be a row in $(\\nu, J)^{(a)}$ of minimal length $\\ell$ with rigging $x$. Then $e_a(\\nu, J)$ is the rigged configuration which removes a box from row $r$, sets the new rigging of $r$ to be $x+1$, and changes all other riggings such that the coriggings remain fixed.\n\n\\item[\\defn{$f_a$}:] Let $r$ be a row in $(\\nu, J)^{(a)}$ of maximal length $\\ell$ with rigging $x$. 
Then $f_a(\\nu, J)$ is the rigged configuration which adds a box to row $r$, sets the new rigging of $r$ to be $x-1$, and changes all other riggings such that the coriggings remain fixed.\n\\end{itemize}\n\\end{dfn}\n\n\nWe define the remainder of the crystal structure on $\\RC(\\infty)$ by\n\\begin{gather*}\n\\varepsilon_a(\\nu, J) = \\max \\{ k \\in \\mathbf{Z} \\mid e_a^k(\\nu, J) \\neq 0 \\}, \\hspace{20pt} \\varphi_a(\\nu, J) = \\inner{h_a}{\\mathrm{wt}(\\nu,J)} + \\varepsilon_a(\\nu, J),\n\\\\ \\mathrm{wt}(\\nu, J) = -\\sum_{a \\in I} \\lvert \\nu^{(a)} \\rvert \\alpha_a.\n\\end{gather*}\nFrom this structure, we have $p_\\infty^{(a)} = \\inner{h_a}{\\mathrm{wt}(\\nu,J)}$ for all $a\\in I$.\n\n\\begin{thm}[{\\cite{SalS15, SalS15II}}]\n\\label{thm:binf_isomorphism}\nLet $\\mathfrak{g}$ be of symmetrizable type. Then $\\RC(\\infty) \\cong B(\\infty)$ as $U_q(\\mathfrak{g})$-crystals.\n\\end{thm}\n\n\\begin{prop}[{\\cite{SalS15, S06}}]\n\\label{prop:ep_phi}\nLet $(\\nu, J) \\in \\RC(\\infty)$ and fix some $a \\in I$. Let $x$ denote the smallest label in $(\\nu,J)^{(a)}$. 
Then we have\n\\[\n\\varepsilon_a(\\nu, J) = -\\min(0, x), \\hspace{40pt} \\varphi_a(\\nu, J) = p_{\\infty}^{(a)} - \\min(0, x).\n\\]\n\\end{prop}\n\nIt is a straightforward computation from the vacancy numbers to show that\n\\begin{equation}\n\\label{eq:convexity_exact}\n\\inner{h_a}{\\lambda} - \\sum_{b \\in I} A_{ab} m_i^{(b)} = -p_{i-1}^{(a)} + 2 p_i^{(a)} - p_{i+1}^{(a)}.\n\\end{equation}\nFrom this, we obtain the well-known convexity properties of the vacancy numbers.\n\n\\begin{lemma}[Convexity]\n\\label{lemma:convexity}\nIf $m_i^{(a)} = 0$, then we have\n\\[\n2 p_i^{(a)} \\geq p_{i-1}^{(a)} + p_{i+1}^{(a)}.\n\\]\nMoreover, $p_{i-1}^{(a)} \\geq p_i^{(a)} \\leq p_{i+1}^{(a)}$ if and only if $p_{i-1}^{(a)} = p_i^{(a)} = p_{i+1}^{(a)}$.\n\\end{lemma}\n\nIn the sequel, we will refer to this lemma simply as convexity as we will frequently use it.\n\n\n\n\n\n\n\\section{Star-crystal structure}\n\\label{sec:*crystal}\n\n\\begin{dfn}\n\\label{def:RC_star_crystal_ops}\nFix some $a \\in I$, and let $x$ be the smallest \\emph{co}rigging in $(\\nu,J)^{(a)}$.\n\\begin{itemize}\n\\item[\\defn{$e_a^*$}:] If $x = 0$, then $e_a^*(\\nu, J) = 0$. Otherwise, let $r$ be a row in $(\\nu, J)^{(a)}$ of minimal length $\\ell$ with corigging $x$. Then $e_a^*(\\nu, J)$ is the rigged configuration which removes a box from row $r$ and sets the new corigging of $r$ to be $x+1$.\n\n\\item[\\defn{$f_a^*$}:] Let $r$ be a row in $(\\nu, J)^{(a)}$ of maximal length $\\ell$ with corigging $x$. 
Then $f_a^*(\\nu, J)$ is the rigged configuration which adds a box to row $r$ and sets the new colabel of $r$ to be $x-1$.\n\\end{itemize}\n\\end{dfn}\n\nIf $e_a^*$ removes a box from a row in $(\\nu, J)$, leaving a row of length $\\ell$, then the vacancy numbers change by the formula\n\\begin{equation}\n\\label{eq:change_vac_e}\n\\widetilde{p}_i^{(b)} = \\begin{cases}\np_i^{(b)} & \\text{if } i \\leq \\ell, \\\\\np_i^{(b)} + A_{ab} & \\text{if } i > \\ell.\n\\end{cases}\n\\end{equation}\nOn the other hand, if $f_a^*$ adds a box to a row, resulting in a row of length $\\ell$, then the vacancy numbers change by\n\\begin{equation}\n\\label{eq:change_vac_f}\n\\widetilde{p}_i^{(b)} = \\begin{cases}\np_i^{(b)} & \\text{if } i < \\ell, \\\\\np_i^{(b)} - A_{ab} & \\text{if } i \\geq \\ell.\n\\end{cases}\n\\end{equation}\nSimilar equations hold for $e_a$ and $f_a$, respectively. So the riggings of unchanged rows are changed according to Equation~\\eqref{eq:change_vac_e} and Equation~\\eqref{eq:change_vac_f} under $e_a$ and $f_a$, respectively. 
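\n\nTo make Equation~\\eqref{eq:vacancy} and Equation~\\eqref{eq:change_vac_f} concrete, here is a small illustrative computation in type $A_2$; the encoding of $\\nu$ as lists of row lengths and the function name are our own, not part of the paper.

```python
# Illustrative sketch (not from the paper): vacancy numbers via
#   p_i^{(a)} = - sum_{b, j} A_{ab} * min(i, j) * m_j^{(b)},
# where nu is given as one list of row lengths per Dynkin node.

def vacancy(A, nu, a, i):
    """Vacancy number p_i^{(a)} for the configuration nu with Cartan matrix A."""
    return -sum(A[a][b] * min(i, j) for b in range(len(nu)) for j in nu[b])

# Type A_2 Cartan matrix; partitions (2) and (1) at the two nodes (0-indexed).
A2 = [[2, -1], [-1, 2]]
assert vacancy(A2, [[2], [1]], 0, 2) == -3  # -(2*2) - (-1*1)

# Growing the first row from length 2 to length 3 shifts p_i at that node by
# -A_{aa} = -2 exactly for i >= 3 (the new row length), matching the formula
# stated for f_a^* above.
for i in range(1, 6):
    shift = vacancy(A2, [[3], [1]], 0, i) - vacancy(A2, [[2], [1]], 0, i)
    assert shift == (-2 if i >= 3 else 0)
```
\n\n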
\n\n\\begin{remark}\nBy Equation~\\eqref{eq:change_vac_e} and Equation~\\eqref{eq:change_vac_f}, the crystal operators $e_a$ and $f_a$ preserve all colabels of $(\\nu, J)$ other than the row changed in $(\\nu, J)^{(a)}$.\n\\end{remark}\n\n\\begin{ex}\n\\label{ex:running}\nConsider type $D_4$ with Dynkin diagram\n\\[\n\\begin{tikzpicture}[xscale=2,yscale=.75]\n\\node[circle,fill,scale=.45,label={below:$1$}] (1) at (0,0) {};\n\\node[circle,fill,scale=.45,label={below:$2$}] (2) at (1,0) {};\n\\node[circle,fill,scale=.45,label={right:$3$}] (3) at (2,1) {};\n\\node[circle,fill,scale=.45,label={right:$4.$}] (4) at (2,-1) {};\n\\path[-]\n (1) edge (2)\n (2) edge (3)\n (2) edge (4);\n\\end{tikzpicture}\n\\]\nLet $(\\nu,J)$ be the rigged configuration\n\\begin{align*}\n(\\nu, J) \n&= f_2^*f_3^* f_1^* f_2^* f_2^* f_4^* f_3^* f_1^* f_2^* (\\nu_{\\emptyset}, J_{\\emptyset}) \\\\\n&=\n\\begin{tikzpicture}[scale=.35,anchor=top,baseline=-18]\n \\rpp{2}{0}{-1}\n \\begin{scope}[xshift=6cm]\n \\rpp{3,1}{-2,-1}{-3,-1}\n \\end{scope}\n \\begin{scope}[xshift=14cm]\n \\rpp{2}{0}{-1}\n \\end{scope}\n \\begin{scope}[xshift=20cm]\n \\rpp{1}{0}{0}\n \\end{scope}\n\\end{tikzpicture}.\n\\end{align*}\nThen\n\\[\nf_2^*(\\nu, J) = \\begin{tikzpicture}[scale=.35,anchor=top,baseline=-18]\n \\rpp{2}{0}{-1}\n \\begin{scope}[xshift=6cm]\n \\rpp{4,1}{-3,-1}{-5,-1}\n \\end{scope}\n \\begin{scope}[xshift=14cm]\n \\rpp{2}{0}{-1}\n \\end{scope}\n \\begin{scope}[xshift=20cm]\n \\rpp{1}{0}{0}\n \\end{scope}\n\\end{tikzpicture}.\n\\]\n\\end{ex}\n\nLet $\\RC(\\infty)^*$ denote the closure of $(\\nu_{\\emptyset}, J_{\\emptyset})$ under $f_a^*$ and $e_a^*$. 
We define the remaining crystal structure by\n\\begin{gather*}\n\\varepsilon_a^*(\\nu, J) = \\max \\{ k \\in \\mathbf{Z} \\mid (e_a^*)^k(\\nu, J) \\neq 0 \\}, \n\\hspace{20pt} \n\\varphi_a^*(\\nu, J) = \\inner{h_a}{\\mathrm{wt}(\\nu,J)} + \\varepsilon_a^*(\\nu, J),\n\\\\ \\mathrm{wt}(\\nu, J) = -\\sum_{a \\in I} \\lvert \\nu^{(a)} \\rvert \\alpha_a.\n\\end{gather*}\n\n\\begin{remark}\n\\label{remark:duality}\nWe will say an argument holds by duality when we can interchange:\n\\begin{itemize}\n\\item ``label'' and ``colabel'';\n\\item $e_a$ and $e_a^*$;\n\\item $f_a$ and $f_a^*$.\n\\end{itemize}\nAs an example, compare the proof of Proposition~\\ref{prop:ep_phi_star} with~\\cite[Thm.~3.8]{Sakamoto14}.\n\\end{remark}\n\n\\begin{lemma}\nThe tuple $(\\RC(\\infty)^*, e_a^*, f_a^*, \\varepsilon_a^*, \\varphi_a^*, \\mathrm{wt})$ is an abstract $U_q(\\mathfrak{g})$-crystal.\n\\end{lemma}\n\n\\begin{proof}\nThe proof is dual to the proof that $\\RC(\\infty)$ is an abstract $U_q(\\mathfrak{g})$-crystal under $e_a$ and $f_a$ given in~\\cite[Lemma~3.3]{SalS15}.\n\\end{proof}\n\n\n\\begin{prop}\n\\label{prop:ep_phi_star}\nLet $(\\nu, J) \\in \\RC(\\infty)$ and fix some $a \\in I$. Let $x$ denote the smallest colabel in $(\\nu,J)^{(a)}$. Then we have\n\\[\n\\varepsilon_a^*(\\nu, J) = -\\min(0, x), \\hspace{40pt} \\varphi_a^*(\\nu, J) = p_{\\infty}^{(a)} - \\min(0, x).\n\\]\n\\end{prop}\n\n\\begin{proof}\nThe following argument for $\\varepsilon_a^*$ is essentially the dual to that given in~\\cite[Thm.~3.8]{Sakamoto14}. We include it here as an example of Remark~\\ref{remark:duality}.\n\nIt is sufficient to prove $\\varepsilon_a^*(\\nu, J) = -\\min(0, x)$ since $p_{\\infty}^{(a)} = \\inner{h_a}{\\mathrm{wt}(\\nu, J)}$. If $x \\geq 0$, then $e_a^*(\\nu, J) = 0$ by definition, so $\\varepsilon_a^*(\\nu, J) = 0 = -\\min(0, x)$. 
Thus we proceed by induction on $\\varepsilon_a^*(\\nu,J)$ and assume $x < 0$. Let $(\\nu',J') = e_a^*(\\nu, J)$, where $e_a^*$ removes a box from a row of length $\\ell$, and let $y'$ denote the resulting colabel from a colabel $y$. In particular, we have $x' = x + 1$ and all other colabels follow Equation~\\eqref{eq:change_vac_e}. Next, let $y$ denote the colabel of a row of length $j$. For $j < \\ell$, we have $y > x$ (equivalently $y \\geq x + 1$) because we chose $\\ell$ as large as possible. Thus $y' = y$, and hence $y' = y \\geq x + 1 = x'$. For $j \\geq \\ell$, we have $y \\geq x$ by the minimality of $x$ and $y' = y + 2$. Hence, $y' = y + 2 \\geq x + 1 = x'$, and so $\\varepsilon_a^*(\\nu', J') = \\varepsilon_a^*(\\nu, J) - 1$ as desired.\n\\end{proof}\n\n\nThe rest of this section will amount to showing that Conditions~(\\ref{item:star1})--(\\ref{item:star6}) of Proposition~\\ref{prop:weaker_conditions} hold.\nNote that using Proposition \\ref{prop:ep_phi_star} and \\cite[Prop. 4.2]{SalS15}, we can rewrite Equation \\eqref{eq:jump} as \n\\begin{equation}\n\\label{eq:RCjump}\n\\begin{aligned}\n\\kappa_a(\\nu,J) &= -\\min(0,x_\\ell) - \\min(0,x_c) + \\langle h_a , \\mathrm{wt}(\\nu,J) \\rangle \\\\\n&= -\\min(0,x_\\ell) - \\min(0,x_c) + p_\\infty^{(a)},\n\\end{aligned}\n\\end{equation}\nwhere $x_\\ell$ and $x_c$ are the smallest label and colabel, respectively, in $(\\nu,J)^{(a)}$.\n\n\n\\begin{lemma}\n\\label{lemma:kappa0}\nFix $(\\nu, J) \\in \\RC(\\infty)$ and $a \\in I$. Assume $\\kappa_a(\\nu, J) = 0$. Then $f_a(\\nu, J) = f_a^*(\\nu, J)$.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that $f_a$ adds a box to a row of length $i$ with rigging $x$. Recall that $x = -\\varepsilon_a(\\nu, J)$. Suppose the longest row of $\\nu^{(a)}$ has length $\\ell > i$ and let $x_\\ell$ denote any rigging of the longest row. Therefore, we have $x_\\ell > x$ by the definition of $f_a$, and we have $p_\\ell^{(a)} \\leq p_{\\infty}^{(a)}$ by convexity. 
Thus from the definition of $\\varepsilon_a^*$, we have\n\\begin{equation}\n\\label{eq:contradiction_kappa0}\n\\varepsilon_a^*(\\nu, J) \\geq x_\\ell - p_\\ell^{(a)} > x - p_{\\infty}^{(a)} = -\\varepsilon_a(\\nu, J) - p_{\\infty}^{(a)}.\n\\end{equation}\nThis implies $\\varepsilon_a^*(\\nu, J) + \\varepsilon_a(\\nu, J) + p_{\\infty}^{(a)} > 0$, which contradicts $\\kappa_a(\\nu, J) = 0$ by Equation~\\eqref{eq:RCjump}. Therefore $f_a$ must add a box to one of the longest rows of $\\nu^{(a)}$. Moreover, if $p_\\ell^{(a)} < p_{\\infty}^{(a)}$, then Equation~\\eqref{eq:contradiction_kappa0} would still hold and result in a contradiction. Similar statements hold for $f_a^*$ by duality.\n\nTherefore $f_a$ and $f_a^*$ act on the longest row of $\\nu^{(a)}$ and $p_i^{(a)} = p_{i+1}^{(a)} = p_{\\infty}^{(a)}$. Let $x$ and $x^*$ denote the label of the row on which $f_a$ and $f_a^*$ act, respectively. Both of these labels decrease by $1$ after applying $f_a$ and $f_a^*$ by Equation~\\eqref{eq:colabel_change} and Equation~\\eqref{eq:label_change}, respectively. So it is sufficient to show $x = x^*$. Note that $x \\leq x^*$ as the smallest colabel is the one with the largest rigging. Suppose $x < x^*$; then we have\n\\[\n\\varepsilon_a^*(\\nu, J) \\geq x^* - p_i^{(a)} > x - p_{\\infty}^{(a)} \\geq -\\varepsilon_a(\\nu, J) - p_{\\infty}^{(a)},\n\\]\nwhich is a contradiction. Therefore we have $f_a = f_a^*$.\n\\end{proof}\n\n\\begin{ex}\n\\label{ex:kappa0}\nLet $(\\nu,J)$ be the rigged configuration of type $D_4$ from Example \\ref{ex:running}. 
Then $\\kappa_2(\\nu,J) = 0$, and\n\\[\nf_2(\\nu,J) = \n\\begin{tikzpicture}[scale=.35,anchor=top,baseline=-18]\n \\rpp{2}{0}{-1}\n \\begin{scope}[xshift=6cm]\n \\rpp{4,1}{-3,-1}{-5,-1}\n \\end{scope}\n \\begin{scope}[xshift=14cm]\n \\rpp{2}{0}{-1}\n \\end{scope}\n \\begin{scope}[xshift=20cm]\n \\rpp{1}{0}{0}\n \\end{scope}\n\\end{tikzpicture}.\n\\]\nOne can check that this agrees with $f_2^*(\\nu,J)$ from Example \\ref{ex:running}.\n\\end{ex}\n\n\n\n\\begin{lemma}\n\\label{lemma:kappa1}\nFix $(\\nu, J) \\in \\RC(\\infty)$ and $a \\in I$. Assume $\\kappa_a(\\nu, J) \\geq 1$. Then\n\\[\n\\varepsilon_a^*\\bigl(f_a(\\nu, J)\\bigr) = \\varepsilon_a^*(\\nu, J),\n\\hspace{40pt}\n\\varepsilon_a\\bigl(f_a^*(\\nu,J)\\bigr) = \\varepsilon_a(\\nu,J).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nLet $x^c$ denote the smallest colabel of $(\\nu,J)^{(a)}$.\nLet $x$ and $i$ denote the rigging and length of the row on which $f_a$ acts. By the minimality of $x^c$, we have $x_c := p_i^{(a)} - x \\geq x^c$. Note that the colabel of the row after the application of $f_a$ becomes\n\\begin{equation}\n\\label{eq:new_colabel}\n\\widetilde{x}_c := p_{i+1}^{(a)} - 2 - (x - 1) = p_{i+1}^{(a)} - x - 1,\n\\end{equation}\nwhich implies that\n\\begin{equation}\n\\label{eq:colabel_change}\n\\widetilde{x}_c = x_c + p_{i+1}^{(a)} - p_i^{(a)} - 1.\n\\end{equation}\n\nThe remainder of the proof will be split into two cases: $x^c = p_i^{(a)} - x$ and $x^c < p_i^{(a)} - x$.\n\n\\case{$x^c = p_i^{(a)} - x$}\n\nFirst, consider the case $p_{i+1}^{(a)} \\leq p_i^{(a)}$. We also assume there exists a row of length $\\ell > i$ in $\\nu^{(a)}$, and let $x_\\ell$ denote the rigging of that row. Thus $x < x_\\ell$ and $x^c \\leq p_\\ell^{(a)} - x_\\ell$ by the definition of $f_a$ and the minimality of $x^c$. Hence \n\\[\np_i^{(a)} - x = x^c \\leq p_\\ell^{(a)} - x_\\ell < p_\\ell^{(a)} - x,\n\\]\nwhich is equivalent to $p_i^{(a)} < p_\\ell^{(a)}$. 
It must be the case, then, that $p_{i+1}^{(a)} > p_i^{(a)}$ by convexity, which is impossible. Thus the longest row of $\\nu^{(a)}$ must be of length $i$. Convexity implies $p_i^{(a)} = p_{i+1}^{(a)} = p_\\infty^{(a)}$, which results in\n\\begin{align*}\n\\kappa_a(\\nu, J) & = p_{\\infty}^{(a)} - \\min(x, 0) - \\min(x^c, 0)\n\\\\ & = p_i^{(a)} - \\min(x, 0) - \\min\\bigl(p_i^{(a)} - x, 0\\bigr).\n\\end{align*}\nSince $x$ was the rigging chosen by $f_a$, we must have $x \\leq 0$. Additionally, if $p_i^{(a)} - x = x^c \\leq 0$, we have\n\\[\n1 \\leq \\kappa_a(\\nu, J) = p_i^{(a)} - x - (p_i^{(a)} - x) = 0,\n\\]\nwhich is a contradiction. Thus $x^c \\geq 1$, which implies $\\varepsilon_a^*(\\nu, J) = 0 = \\varepsilon_a^*\\bigl(f_a(\\nu, J)\\bigr)$ since $\\widetilde{x}_c \\geq 0$.\n\nNext, if $p_{i+1}^{(a)} = p_i^{(a)} + 1$, then $\\varepsilon_a^*\\bigl(f_a(\\nu, J)\\bigr) = \\varepsilon_a^*(\\nu, J)$ since $\\widetilde{x}_c = x_c$ by Equation~\\eqref{eq:colabel_change} and all other coriggings are fixed.\n\nSo we now assume $p_{i+1}^{(a)} \\geq p_i^{(a)} + 2$, which implies $\\widetilde{x}_c > x^c$. If there is another row with a corigging of $x^c$ or $x^c \\geq 0$, then $\\varepsilon_a^*\\bigl(f_a(\\nu, J)\\bigr) = \\varepsilon_a^*(\\nu, J)$. So assume $f_a$ acts on the only row with a corigging of $x^c < 0$. Note that $p_i^{(a)} - x = x^c < 0$ and $x \\leq 0$ imply $p_i^{(a)} < 0$.\n\nWe have $m_i^{(a)} = 1$ as, otherwise, we would either have a second corigging of $x^c$ or a smaller corigging from the minimality of $x$. 
Thus, by Equation~\\eqref{eq:convexity_exact}, \n\\begin{align*}\n-2 - \\sum_{b \\neq a} A_{ab} m_i^{(b)} &= -p_{i-1}^{(a)} + 2p_i^{(a)} - p_{i+1}^{(a)} \\\\\n&\\leq -p_{i-1}^{(a)} + 2p_i^{(a)} - p_i^{(a)} - 2 \\\\\n&= -p_{i-1}^{(a)} + p_i^{(a)} - 2.\n\\end{align*}\nSince $m_i^{(b)} \\geq 0$ and $-A_{ab} \\geq 0$ for all $a \\neq b$, we then have\n\\[\n0 \\leq -p_{i-1}^{(a)} + p_i^{(a)},\n\\]\nor, equivalently, $p_{i-1}^{(a)} \\leq p_i^{(a)}$. If $i$ is the length of the smallest row, then $0 \\leq p_{i-1}^{(a)} \\leq p_i^{(a)} < 0$ by convexity, which is a contradiction. Thus let $x_\\ell$ denote the rigging of the longest row of $\\nu^{(a)}$ of length $\\ell < i$, and by convexity, we have $p_\\ell^{(a)} \\leq p_i^{(a)}$. By the definition of $f_a$, we have $x_\\ell \\geq x$. Thus, we have $p_\\ell^{(a)} - x_\\ell \\leq p_i^{(a)} - x$. However, by the unique minimality of $x^c$, we have $p_\\ell^{(a)} - x_\\ell > x^c = p_i^{(a)} - x$. This is a contradiction. Therefore $\\varepsilon_a^*\\bigl(f_a(\\nu, J)\\bigr) = \\varepsilon_a^*(\\nu, J)$.\n\n\\case{$x^c < p_i^{(a)} - x$}\n\nAssume $\\varepsilon_a^*\\bigl(f_a(\\nu,J)\\bigr) \\neq \\varepsilon_a^*(\\nu, J)$. Then\n\\begin{equation}\n\\label{eq:difference_epsilon_change}\np_{i+1}^{(a)} - p_i^{(a)} - 1 < x^c + x - p_i^{(a)} < 0,\n\\end{equation}\nas, otherwise, the new corigging is not smaller than the minimal corigging (i.e., $\\widetilde{x}_c < x^c$), which occurs on a different row and does not change under $f_a$. We rewrite Equation~\\eqref{eq:difference_epsilon_change} as\n\\begin{equation}\n\\label{eq:rewrite_less_than}\np_{i+1}^{(a)} - 1 < x^c + x < p_i^{(a)}.\n\\end{equation}\nSuppose there exists a row of length $\\ell > i$ in $\\nu^{(a)}$. Then $x_\\ell > x$ and $x^c \\leq p_\\ell^{(a)} - x_\\ell$. 
Therefore,\n\\begin{equation}\n\\label{eq:double_inequality}\np_{i+1}^{(a)} - \\ell < p_\\ell^{(a)} - x_\\ell + x < p_\\ell^{(a)},\n\\end{equation}\nwhich implies $p_{i+1}^{(a)} \\leq p_j^{(a)}$ for all $\\ell \\geq j > i$ by convexity. Note that Equation~\\eqref{eq:double_inequality} implies that $\\ell > i+1$ since, otherwise, we would have $p_{i+1}^{(a)} < p_{i+1}^{(a)}$. We also have $p_{i+1}^{(a)} \\leq p_j^{(a)}$ for all $j > i$ if there does not exist a row of length $\\ell > i$. Since there does not exist a row of length $i+1$, we must have $p_i^{(a)} \\leq p_{i+1}^{(a)} \\leq p_{i+2}^{(a)}$ by convexity. Yet, Equation~\\eqref{eq:rewrite_less_than} implies\n\\[\np_{i+1}^{(a)} < p_i^{(a)} \\leq p_{i+1}^{(a)},\n\\]\nbut this is a contradiction. Therefore $\\varepsilon_a^*\\bigl(f_a(\\nu, J)\\bigr) = \\varepsilon_a^*(\\nu, J)$.\n\\end{proof}\n\n\n\\begin{ex}\n\\label{ex:kappa1}\nAgain, let $(\\nu,J)$ be the rigged configuration of type $D_4$ from Example \\ref{ex:running}. Then $\\varepsilon_3(\\nu,J) = 0$, $\\varepsilon_3^*(\\nu,J) = 1$, and $\\kappa_3(\\nu,J) = 1$. We have\n\\begin{align*}\nf_3(\\nu,J) &=\n\\begin{tikzpicture}[scale=.35,anchor=top,baseline=-18]\n \\rpp{2}{0}{-1}\n \\begin{scope}[xshift=6cm]\n \\rpp{3,1}{-1,-1}{-1,-1}\n \\end{scope}\n \\begin{scope}[xshift=14cm]\n \\rpp{3}{-1}{-2}\n \\end{scope}\n \\begin{scope}[xshift=20cm]\n \\rpp{1}{0}{0}\n \\end{scope}\n\\end{tikzpicture}\\\\\nf_3^*(\\nu,J) &= \n\\begin{tikzpicture}[scale=.35,anchor=top,baseline=-18]\n \\rpp{2}{0}{-1}\n \\begin{scope}[xshift=6cm]\n \\rpp{3,1}{-2,-1}{-2,-1}\n \\end{scope}\n \\begin{scope}[xshift=14cm]\n \\rpp{3}{0}{-2}\n \\end{scope}\n \\begin{scope}[xshift=20cm]\n \\rpp{1}{0}{0}\n \\end{scope}\n\\end{tikzpicture}.\n\\end{align*}\nThen $\\varepsilon_3^*\\bigl(f_3(\\nu,J)\\bigr) = 1$ and $\\varepsilon_3\\bigl(f_3^*(\\nu,J)\\bigr) = 0$.\n\\end{ex}\n\n\n\\begin{lemma}\n\\label{lemma:kappa2}\nFix $(\\nu, J) \\in \\RC(\\infty)$ and $a \\in I$. 
Assume $\\kappa_a(\\nu, J) \\geq 2$. Then\n\\[\nf_af_a^*(\\nu, J) = f_a^* f_a(\\nu, J).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nSuppose $f_a$ (resp., $f_a^*$) acts on row $r$ of length $i$ (resp., row $r^*$ of length $i^*$) with rigging $x$ (resp., $x^*$). Without loss of generality, let $r$ (resp., $r^*$) be the northernmost such row in the diagram of $\\nu^{(a)}$. Let $x_c^* = p_{i^*}^{(a)} - x^*$ and $x_c = p_i^{(a)} - x$. Note that $x \\leq x^*$ and $x_c^* \\leq x_c$. Applying $f_a^*$, the new rigging (and the only changed rigging) is\n\\begin{equation}\n\\label{eq:label_change}\n\\widetilde{x}^* = x^* + p_{i^*+1}^{(a)} - p_{i^*}^{(a)} - 1.\n\\end{equation}\nRecall that Equation~\\eqref{eq:colabel_change} gives the new corigging (and only changed corigging)\n\\[\n\\widetilde{x}_c = x_c + p_{i+1}^{(a)} - p_i^{(a)} - 1\n\\]\nafter applying $f_a$. We split the proof into three cases: the first two are cases in which $r \\neq r^*$ and the last is when $r = r^*$.\n\n\\case{$f_a$ acts on row $r \\neq r^*$ in $f_a^*(\\nu, J)$}\n\nSuppose $f_a f_a^*(\\nu, J) \\neq f_a^* f_a(\\nu, J)$. This is equivalent to $f_a^*$ acting on row $r' \\neq r^*$ in $f_a(\\nu, J)$. Note that we must have $r' = r$, since $f_a$ preserves all other colabels. From Equation~\\eqref{eq:colabel_change}, we must have $p_{i+1}^{(a)} < p_i^{(a)} + 2$, as otherwise $\\widetilde{x}_c > x_c \\geq x_c^*$, which would imply $r = r' = r^*$ and be a contradiction. Next, consider when $p_{i+1}^{(a)} = p_i^{(a)} + 1$. Thus we have $\\widetilde{x}_c = x_c$. Since $r' \\neq r^*$, we must have $i = i^*$ and $x_c^* = \\widetilde{x}_c = x_c$. However, this contradicts the assumption $r \\neq r^*$ as we have $x = x^*$.\n\nHence $p_{i+1}^{(a)} \\leq p_i^{(a)}$. Suppose $i$ is the length of the longest row of $\\nu^{(a)}$. Then $p_i^{(a)} = p_{i+1}^{(a)} = p_{\\infty}^{(a)}$ by convexity. Moreover, we have $\\widetilde{x}_c = x_c - 1 = x_c^*$ since $r, r' \\neq r^*$. 
Note that since $f_a$ (resp., $f_a^*$) acts on $r$ (resp., $r^*$), we must have $x \\leq 0$ (resp., $x_c^* \\leq 0$). Therefore, we have\n\\begin{align*}\n2 \\leq \\kappa_a(\\nu, J) & = p_i^{(a)} - \\min(x, 0) - \\min(x_c^*, 0)\n\\\\ & = p_i^{(a)} - x - x_c^*\n\\\\ & = p_i^{(a)} - x - (p_i^{(a)} - x - 1) = 1,\n\\end{align*}\nwhich is a contradiction.\n\nSuppose there exists a row $r_\\ell$ of length $\\ell > i$ in $\\nu^{(a)}$. Let $x_\\ell$ denote the rigging of $r_\\ell$, and note $x < x_\\ell$ by our assumption. Therefore, we have\n\\[\np_{i+1}^{(a)} - x - 1 = \\widetilde{x}_c \\leq x_c^* \\leq p_\\ell^{(a)} - x_\\ell < p_\\ell^{(a)} - x,\n\\]\nwhich implies $p_{i+1}^{(a)} \\leq p_\\ell^{(a)}$. Assume there exists a row of length $i+1$ in $\\nu^{(a)}$ with rigging $x_{i+1}$. It follows that\n\\[\np_{i+1}^{(a)} - x > p_{i+1}^{(a)} - x_{i+1} \\geq x_c^* = p_{i^*}^{(a)} - x^*,\n\\]\nwhich is equivalent to\n\\[\np_{i+1}^{(a)} - p_{i^*}^{(a)} > x - x^*.\n\\]\nFurthermore,\n\\[\np_{i+1}^{(a)} - x - 1 = \\widetilde{x}_c \\leq x_c^* = p_{i^*}^{(a)} - x^*,\n\\]\nwhich results in\n\\begin{equation}\n\\label{eq:changing_f_star_inequality}\nx - x^* \\geq p_{i+1}^{(a)} - p_{i^*}^{(a)} - 1.\n\\end{equation}\nAdditionally, Equation~\\eqref{eq:changing_f_star_inequality} is necessarily a strict inequality if $i^* > i$ because it must be the case that $\\widetilde{x}_c < x_c^*$.\nHence\n\\[\np_{i+1}^{(a)} - p_{i^*}^{(a)} > x - x^* \\geq p_{i+1}^{(a)} - p_{i^*}^{(a)} - 1,\n\\]\nwhich is a contradiction for $i^* > i$ as the right inequality becomes a strict inequality.\n\nNext, note that $\\widetilde{x}^* = x^* + p_{i^*+1}^{(a)} - p_{i^*}^{(a)} - 1 \\geq x$ since $f_a$ acts on $r$. Hence \n\\begin{equation}\n\\label{eq:unchanged_f_inequality}\np_{i^*+1}^{(a)} - p_{i^*}^{(a)} - 1 \\geq x - x^*,\n\\end{equation}\nwhich is a strict inequality for $i \\leq i^*$. 
Thus \n\\[\np_{i^*+1}^{(a)} - p_{i^*}^{(a)} - 1 \\geq x - x^* \\geq p_{i+1}^{(a)} - p_{i^*}^{(a)} - 1,\n\\]\nor, equivalently, $p_{i^*+1}^{(a)} \\geq p_{i+1}^{(a)}$. Since $i \\leq i^*$, we have $p_{i^*+1}^{(a)} > p_{i+1}^{(a)}$, and hence $i = i^*$ cannot occur.\n\nNow suppose $i^* < i$. Therefore $x_c < p_{i+1}^{(a)} - x_{i+1}$, which implies\n\\[\np_{i+1}^{(a)} - x - 1 = \\widetilde{x}_c \\leq x_c^* < p_{i+1}^{(a)} - x_{i+1} < p_{i+1}^{(a)} - x.\n\\]\nSo $p_{i+1}^{(a)} < p_{i+1}^{(a)}$, which is a contradiction.\n\nFinally, if there does not exist a row of length $i+1$, then $p_i^{(a)} = p_{i+1}^{(a)} = p_\\ell^{(a)}$ by convexity, so the argument given above will still yield a contradiction. Hence, $f_a f_a^*(\\nu,J) = f_a^* f_a(\\nu, J)$.\n\n\\case{$f_a$ acts on row $r' \\neq r$ in $f_a^*(\\nu, J)$}\n\nNote that $r' = r^*$, where $r^* \\neq r$, as $f_a^*$ fixes all other riggings. So from Lemma~\\ref{lemma:kappa1}, we have\n\\[\n\\widetilde{x}^* = -\\varepsilon_a\\bigl(f_a^*(\\nu, J)\\bigr) = -\\varepsilon_a(\\nu, J) = x,\n\\]\nand hence $i \\leq i^*$. Therefore $x < x^*$ as $x = x^*$ implies $r = r^*$. Thus,\n\\[\nx = \\widetilde{x}^* = x^* + p_{i^*+1}^{(a)} - p_{i^*}^{(a)} - 1 > x + p_{i^*+1}^{(a)} - p_{i^*}^{(a)} - 1,\n\\]\nand hence $p_{i^*+1}^{(a)} \\leq p_{i^*}^{(a)}$. Dually, $f_a^*$ must act on row $r$ in $f_a(\\nu, J)$, as otherwise we would contradict the previous case. Similarly, \n\\[\n\\widetilde{x}_c = -\\varepsilon_a^*\\bigl(f_a^*(\\nu, J)\\bigr) = -\\varepsilon_a^*(\\nu, J) = x_c^*\n\\]\nby the dual version of Lemma~\\ref{lemma:kappa1}, implying $i^* \\leq i$. Hence, $i = i^*$ and\n\\[\np_i^{(a)} - x^* = x_c^* = \\widetilde{x}_c = x_c + p_{i+1}^{(a)} - p_i^{(a)} - 1 = - x + p_{i+1}^{(a)} - 1,\n\\]\nwhich yields $x - x^* = p_{i+1}^{(a)} - p_i^{(a)} - 1$. 
Thus\n\\begin{align*}\nx & = \\widetilde{x}^* = x^* + p_{i+1}^{(a)} - p_i^{(a)} - 1\n\\\\ p_i^{(a)} - x^* = x_c^* & = \\widetilde{x}_c = x_c + p_{i+1}^{(a)} - p_i^{(a)} - 1 = p_{i+1}^{(a)} - x - 1 \\leq p_i^{(a)} - x - 1,\n\\end{align*}\nwhich implies $x - x^* \\leq -1$. Hence,\n\\[\np_i^{(a)} - p_{i+1}^{(a)} = x^* - x - 1 \\leq -2,\n\\]\nand this contradicts $0 \\leq p_i^{(a)} - p_{i+1}^{(a)}$. \n\n\n\\case{$r = r^*$}\n\nFrom $x = x^*$ and $i = i^*$, we have \n\\begin{align*}\n\\widetilde{x}^* & = x^* + p_{i+1}^{(a)} - p_i^{(a)} - 1 = x + p_{i+1}^{(a)} - p_i^{(a)} - 1,\n\\\\ \\widetilde{x}_c & = x_c + p_{i+1}^{(a)} - p_i^{(a)} - 1 = p_i^{(a)} - x + p_{i+1}^{(a)} - p_i^{(a)} - 1 = p_{i+1}^{(a)} - x - 1,\n\\\\ x_c^* & = x_c = p_i^{(a)} - x.\n\\end{align*}\n\nIf $p_{i+1}^{(a)} \\leq p_i^{(a)} + 1$, then $\\widetilde{x}^* \\leq x$ and $\\widetilde{x}_c \\leq x_c^*$. Hence, $f_a$ and $f_a^*$ select row $r$ in $f_a^*(\\nu, J)$ and $f_a(\\nu, J)$, respectively, and so we have $f_a f_a^*(\\nu, J) = f_a^* f_a(\\nu, J)$. Next, consider the case when $p_{i+1}^{(a)} \\geq p_i^{(a)} + 2$. Then $\\widetilde{x}^* > x$ and $\\widetilde{x}_c > x_c^*$. If $m_i^{(a)} \\geq 2$, then there exists a row $r' \\neq r$ such that $x^* = x$ and $x_c^* = x_c$. So $f_a$ and $f_a^*$ select row $r'$ in $f_a^*(\\nu, J)$ and $f_a(\\nu, J)$, respectively, and thus we have $f_a f_a^*(\\nu, J) = f_a^* f_a(\\nu, J)$.\nIf $m_i^{(a)} = 1$, then, as in Lemma~\\ref{lemma:kappa1}, we have\n\\[\n-2 - \\sum_{b \\neq a} A_{ab} m_i^{(b)} = -p_{i-1}^{(a)} + 2p_i^{(a)} - p_{i+1}^{(a)} \\leq -p_{i-1}^{(a)} + p_i^{(a)} - 2\n\\]\nfrom Equation~\\eqref{eq:convexity_exact}. This implies $p_{i-1}^{(a)} \\leq p_i^{(a)}$. We consider the case when $r$ is the smallest row of $\\nu^{(a)}$, which implies $r$ is the unique row with rigging $x$ and corigging $x_c^*$. Moreover, we have $0 \\leq p_{i-1}^{(a)} \\leq p_i^{(a)}$ by convexity. 
Because we are acting on $r$ by $f_a$ and $f_a^*$, we have $x \\leq 0$ and $x_c^* \\leq 0$. Hence,\n\\[\n0 \\leq p_i^{(a)} \\leq p_i^{(a)} - x = x_c^* \\leq 0 \\implies \n0 = p_i^{(a)} = x = x_c^*. \n\\]\nTherefore $f_a$ (resp., $f_a^*$) acts on a row of length $0$ in $f_a^*(\\nu, J)$ (resp., $f_a(\\nu, J)$) as all other riggings (resp., coriggings) are positive. Moreover, the resulting rigging is $-1$ in both cases, and so $f_a f_a^*(\\nu, J) = f_a^* f_a(\\nu, J)$.\n\nNow assume there exists a row $r_\\ell$ of length $\\ell < i$ in $\\nu^{(a)}$, and without loss of generality, suppose $\\ell$ is maximal. Let $x_\\ell$ denote the rigging of $r_\\ell$, and by the definition of $f_a$ and $f_a^*$, we have $x_\\ell \\geq x$ and $p_\\ell^{(a)} - x_\\ell \\geq x_c^* = p_i^{(a)} - x$. By convexity, we have $p_\\ell^{(a)} \\leq p_{i-1}^{(a)} \\leq p_i^{(a)}$. Therefore, we have\n\\[\np_i^{(a)} - x \\leq p_\\ell^{(a)} - x_\\ell \\leq p_i^{(a)} - x,\n\\]\nand so $p_i^{(a)} - x = p_\\ell^{(a)} - x_\\ell$. Moreover, if $p_\\ell^{(a)} < p_i^{(a)}$, then we have $x_\\ell < x$, which cannot occur, and hence we also have $x = x_\\ell$. Therefore $f_a$ and $f_a^*$ act on $r_\\ell$ in $f_a^*(\\nu, J)$ and $f_a(\\nu, J)$, respectively. Thus we have $f_a f_a^*(\\nu, J) = f_a^*f_a(\\nu, J)$.\n\\end{proof}\n\n\n\\begin{ex}\n\\label{ex:kappa2}\nContinuing our running example, let $(\\nu,J)$ be the rigged configuration of type $D_4$ from Example \\ref{ex:running}. 
Then $\\kappa_4(\\nu,J) = 2$ and \n\\[\nf_4^*f_4(\\nu,J) = f_4f_4^*(\\nu,J) = \n\\begin{tikzpicture}[scale=.35,anchor=top,baseline=-18]\n \\rpp{2}{0}{-1}\n \\begin{scope}[xshift=5cm]\n \\rpp{3,1}{-1,-1}{-1,-1}\n \\end{scope}\n \\begin{scope}[xshift=11.5cm]\n \\rpp{2}{0}{-1}\n \\end{scope}\n \\begin{scope}[xshift=16.5cm]\n \\rpp{3}{-1}{-2}\n \\end{scope}\n\\end{tikzpicture}.\n\\]\nWith $a=4$, we have the diagram\n\\[\n\\begin{tikzpicture}[xscale=2,yscale=1.5,font=\\normalsize]\n\\node (t) at (0,0) {$(\\nu,J)$};\n\\node (s) at (-1,-1) {$f_4^*(\\nu,J)$};\n\\node (a) at (1,-1) {$f_4(\\nu,J)$};\n\\node (ss) at (-2,-2) {$f_4^*f_4^*(\\nu,J)$};\n\\node (aa) at (2,-2) {$f_4f_4(\\nu,J)$};\n\\node (as) at (0,-2) {$f_4^*f_4(\\nu,J) = f_4f_4^*(\\nu,J)$};\n\\node (sss) at (-2,-3) {$\\substack{f_4^*f_4^*f_4^*(\\nu,J) \\\\ = f_4f_4^*f_4^*(\\nu,J)}$};\n\\node (asa) at (0,-3) {$\\substack{f_4^*f_4^*f_4(\\nu,J) \\\\ = f_4^*f_4f_4^*(\\nu,J) \\\\ = f_4f_4^*f_4(\\nu,J) \\\\ = f_4f_4f_4^*(\\nu,J)}$};\n\\node (aaa) at (2,-3) {$\\substack{f_4^*f_4f_4(\\nu,J) \\\\ = f_4f_4f_4(\\nu,J)}$};\n\\foreach \\x in {-2,0,2}\n {\\node at (\\x,-4) {$\\vdots$};}\n\\path[->]\n (t) edge (s)\n (t) edge (a)\n (s) edge (ss)\n (s) edge (as)\n (a) edge (aa)\n (a) edge (as)\n (ss) edge (sss)\n (as) edge (asa)\n (aa) edge (aaa)\n (sss) edge (-2,-3.85)\n (asa) edge (0,-3.85)\n (aaa) edge (2,-3.85);\n\\end{tikzpicture}\n\\]\nAs discussed in \\cite[Cor. 2.8]{CT15}, $\\kappa_4(\\nu,J)$ counts how many times one must apply either $f_4$ or $f_4^*$ to $(\\nu,J)$ to reach a point where $f_4$ and $f_4^*$ have the same effect.\n\\end{ex}\n\n\n\n\n\\begin{thm}\nLet $e_a$ and $f_a$ be the crystal operators given by Definition~\\ref{def:RC_crystal_ops}, and let $e_a^*$ and $f_a^*$ be given by Definition~\\ref{def:RC_star_crystal_ops}. 
Then we have\n\\[\ne_a^* = * \\circ e_a \\circ *, \\ \\ \\ \\ \\ \\\nf_a^* = * \\circ f_a \\circ *.\n\\]\n\\end{thm}\n\n\\begin{proof}\nWe show the conditions of Proposition~\\ref{prop:star_properties} hold for $\\RC(\\infty)$ with the given crystal operations. Fix some $(\\nu, J) \\in \\RC(\\infty)$ and $a \\in I$.\n\nWe first note that $f_a(\\nu, J)$, $f_a^*(\\nu, J) \\neq 0$ follows immediately from the definitions. So we have Condition~(\\ref{item:star1}). Now let $b \\in I$. As $f_b$ acts on labels and preserves colabels in $(\\nu,J)^{(k)}$, for $k \\neq b$ in $I$, and $f_a^*$ acts on colabels and preserves labels in $(\\nu,J)^{(k)}$, for $k \\neq a$ in $I$, it follows that $f_a^* f_b (\\nu, J) = f_b f_a^* (\\nu, J)$ for all $a \\neq b$. Hence Condition~(\\ref{item:star2}) is satisfied.\n\nLemma~\\ref{lemma:kappa0} implies Condition~(\\ref{item:star4}).\n\nLemma~\\ref{lemma:kappa1} implies Condition~(\\ref{item:star5}).\n\nLemma~\\ref{lemma:kappa2} implies Condition~(\\ref{item:star6}).\n\nThus it remains to prove Condition~(\\ref{item:star3}), that $\\kappa_a(\\nu,J) \\ge 0$. We prove this by induction on the depth of $(\\nu,J)$. Observe that $\\kappa_a(\\nu_\\emptyset,J_\\emptyset) = 0$, which is our base case. Now suppose $\\kappa_a(\\nu,J) \\ge 0$ for all $(\\nu,J) \\in \\RC(\\infty)$ at depth at most $d$. It suffices to show that $\\kappa_a\\bigl(f_a(\\nu,J)\\bigr) \\ge 0$ and $\\kappa_a\\bigl(f_a^*(\\nu,J)\\bigr) \\ge 0$.\n\nNote that all labels, except for the row of $\\nu^{(a)}$ at which the box was added, possibly change by adding $-A_{ab}$ under $f_a$ by Equation~\\eqref{eq:change_vac_f}. Additionally, $p_{\\infty}^{(b)}$ changes by $-A_{ab}$. Thus, for $b \\neq a$, a label, and hence possibly $-\\varepsilon_b(\\nu, J)$, increases by $-A_{ab}$, while the colabels, and hence $-\\varepsilon_b^*(\\nu, J)$, stay fixed. 
Therefore, by the above and its dual, for $a \\neq b$, we have\n\\[\n\\kappa_b\\bigl(f_a(\\nu, J)\\bigr), \\kappa_b\\bigl(f_a^*(\\nu, J)\\bigr) \\geq \\kappa_b(\\nu, J) \\geq 0,\n\\]\nsince $-A_{ab} \\geq 0$. Now it is sufficient to show that $\\varepsilon_a^*(\\nu, J)$ increases by at least $1 - \\kappa_a(\\nu, J)$ because $\\varphi_a(\\nu, J) = p_{\\infty}^{(a)} + \\varepsilon_a(\\nu, J)$ decreases by $1$ after the application of $f_a$. If $\\kappa_a(\\nu, J) = 0$, then Lemma~\\ref{lemma:kappa0} gives $f_a(\\nu, J) = f_a^*(\\nu, J)$, and so $\\varepsilon_a^*(\\nu, J)$ is increased by $1$. By Lemma~\\ref{lemma:kappa1}, we have $\\varepsilon_a^*(\\nu, J) = \\varepsilon_a^*\\bigl(f_a(\\nu, J)\\bigr)$ when $\\kappa_a(\\nu, J) \\geq 1$. Since $\\kappa_a(\\nu, J) \\geq 0$ by the inductive hypothesis, this covers all possible values. Therefore, both $\\kappa_a\\bigl(f_a(\\nu, J)\\bigr), \\kappa_a\\bigl(f_a^*(\\nu, J)\\bigr) \\geq 0$, as required.\n\nThus Conditions~(\\ref{item:star1})--(\\ref{item:star6}) are satisfied. Moreover, $\\RC(\\infty)$ and $\\RC(\\infty)^*$ are both generated by the highest weight element $(\\nu_{\\emptyset}, J_{\\emptyset})$. Hence $\\RC(\\infty) = \\RC(\\infty)^*$ and the result follows from Proposition~\\ref{prop:weaker_conditions}.\n\\end{proof}\n\nNow from Definition~\\ref{def:RC_crystal_ops} and Definition~\\ref{def:RC_star_crystal_ops}, we have that the $*$-involution is given as follows.\n\n\\begin{cor}\n\\label{cor:RC_star_involution}\nThe $*$-involution on $\\RC(\\infty)$ is given by replacing every rigging $x$ of a row of length $i$ in $(\\nu, J)^{(a)}$ by the corresponding corigging $p_i^{(a)} - x$ for all $(a, i) \\in \\mathcal{H}$.\n\\end{cor}\n\nLet $\\mathfrak{g}$ and $\\widehat{\\mathfrak{g}}$ be symmetrizable Kac-Moody algebras such that there exists a folding of the Dynkin diagram of $\\mathfrak{g}$ to the Dynkin diagram of $\\widehat{\\mathfrak{g}}$ with corresponding index sets $I$ and $\\widehat{I}$, respectively. 
Consider the map $\\phi \\colon \\widehat{I} \\searrow I$ induced by such a Dynkin diagram folding and consider a sequence $(\\gamma_a \\in \\mathbf{Z}_{>0})_{a \\in I}$ such that the map $\\Psi \\colon P \\longrightarrow \\widehat{P}$ given by\n\\[\n\\Lambda_a \\mapsto \\gamma_a \\sum_{b \\in \\phi^{-1}(a)} \\Lambda_b\n\\]\nalso satisfies\n\\[\n\\alpha_a \\mapsto \\gamma_a \\sum_{b \\in \\phi^{-1}(a)} \\alpha_b.\n\\]\nThis induces a \\defn{virtualization map} $v$ of $B(\\infty)$ of type $\\mathfrak{g}$ to that of type $\\widehat{\\mathfrak{g}}$. In particular, on $\\RC(\\infty)$, the image $(\\widehat{\\nu}, \\widehat{J})$ of a rigged configuration $(\\nu, J)$ is given by\n\\[\n\\widehat{m}_{\\gamma_a i}^{(b)} = m_i^{(a)},\n\\hspace{30pt}\n\\widehat{J}_{\\gamma_a i}^{(b)} = \\gamma_a J_i^{(a)},\n\\]\nfor all $b \\in \\phi^{-1}(a)$. We refer the reader to~\\cite{OSS03III, SalS15III, SchillingS15} for more details.\n\n\\begin{cor}\nLet $v$ be a virtualization map on $B(\\infty)$ of type $\\mathfrak{g}$ to $\\widehat{\\mathfrak{g}}$. Then\n\\[\n\\ast \\circ v = v \\circ \\ast.\n\\]\n\\end{cor}\n\n\\begin{proof}\nThis follows from the fact that\n$\n\\widehat{p}_{\\gamma_a i}^{(b)} = \\gamma_a p_i^{(a)}\n$\nfor all $b \\in \\phi^{-1}(a)$ and Corollary~\\ref{cor:RC_star_involution}.\n\\end{proof}\n\n\n\n\n\n\\section{Highest weight crystals}\n\\label{sec:hw_crystals}\n\nWe wish to classify the subcrystal of $\\RC(\\infty)$ which is isomorphic to $B(\\lambda)$ with respect to the $\\ast$-crystal structure. In particular, defining $B(\\lambda)$ requires the additional condition that $\\varphi_a^*(\\nu, J) = \\max\\{k \\in \\mathbf{Z} \\mid (f_a^*)^k(\\nu, J) \\neq 0\\}$. For example, the condition $\\varphi_a(\\nu,J) = \\max\\{ k \\in \\mathbf{Z} \\mid f_a^k(\\nu,J) \\neq 0 \\}$ means, for all riggings $x$ corresponding to a row of length $i$ in $\\nu^{(a)}$, we have $x \\leq p_i^{(a)}$. 
If we consider the natural dual to this, we have $p_i^{(a)} - x \\leq p_i^{(a)}$, or equivalently $x \\geq 0$. We show this is the correct condition by proving the dual version of~\\cite[Thm.~6.1]{SalS15}.\n\n\nFor any $\\lambda \\in P^+$, we define\n\\[\n\\RC(\\lambda) := \\{ (\\nu, J) \\in \\RC(\\infty) \\mid \\max J_i^{(a)} \\leq p_i^{(a)}(\\nu; \\lambda) \\text{ for all } (a, i) \\in \\mathcal{H} \\},\n\\]\nwhere\n\\begin{equation}\n\\label{eq:vacancy_numbers}\np_i^{(a)}(\\nu; \\lambda) := \\inner{h_a}{\\lambda} - \\sum_{b \\in I} A_{ab} \\sum_{j \\in \\mathbf{Z}_{>0}} \\min(i,j) m_j^{(b)}.\n\\end{equation}\nNote that Equation~\\eqref{eq:vacancy_numbers} differs from Equation~\\eqref{eq:vacancy} by\n\\[\np_i^{(a)}(\\nu) + \\inner{h_a}{\\lambda} = p_i^{(a)}(\\nu; \\lambda).\n\\]\nWhen there is no danger of confusion, we will simply write $p_i^{(a)} = p_i^{(a)}(\\nu; \\lambda)$.\n\nWe consider a crystal structure on $\\RC(\\lambda)$ as that inherited from $\\RC(\\infty)$ under the natural projection except with $\\mathrm{wt}(\\nu, J) = \\lambda - \\sum_{a \\in I} \\lvert \\nu^{(a)} \\rvert \\alpha_a$.\n\n\\begin{thm}[{\\cite{SalS15,SalS15II,S06}}]\nWe have\n$\n\\RC(\\lambda) \\cong B(\\lambda).\n$\n\\end{thm}\n\nUsing the $\\ast$-crystal structure, we easily obtain~\\cite[Prop.~8.2]{K95}. (We refer the reader to \\cite{K95} or \\cite{HK02} for an exposition on the tensor product of crystals. Note that we are using the opposite, anti-Kashiwara, convention. The precise definition in this setting may be found, for example, in \\cite{SalS15}.)\n\n\\begin{prop}\nLet $\\lambda \\in P^+$. Then we have\n\\[\n\\RC(\\lambda) \\cong \\{ t_\\lambda \\otimes (\\nu,J) \\in T_\\lambda \\otimes \\RC(\\infty) \\mid \\varepsilon_a^*(\\nu,J) \\le \\langle h_a , \\lambda \\rangle \\text{ for all } a \\in I\\}.\n\\]\n\\end{prop}\n\n\\begin{proof}\nFix some $(\\nu, J) \\in \\RC(\\infty)$. Let $x$ be a rigging of a row of length $i$. 
We have\n\\[\n\\inner{h_a}{\\lambda} \\geq \\varepsilon_a^*(\\nu, J) = -\\min(0, p_i^{(a)} - x)\n\\]\nif and only if\n\\[\np_i^{(a)} + \\inner{h_a}{\\lambda} \\geq x.\n\\]\nRecall that the left-hand side is the vacancy number in $\\RC(\\lambda)$ by Equation~\\eqref{eq:vacancy_numbers}, and so we have the defining relation for $\\RC(\\lambda)$.\n\\end{proof}\n\nBy letting $\\pi_{\\lambda} \\colon B(\\infty) \\longrightarrow B(\\lambda)$ be the natural projection, we can rephrase the last proposition as\n\\[\n\\tau(b^*) = \\varepsilon(b), \\qquad \\qquad \\varepsilon(b^*) = \\tau(b),\n\\]\nwhere $\\varepsilon(b) = \\sum_{a \\in I} \\varepsilon_a(b) \\Lambda_a$ and $\\tau(b) = \\min \\{ \\lambda \\mid \\pi_{\\lambda}(b) \\in B(\\lambda) \\}$. In~\\cite{SalS15II}, $\\tau$ was called the difference statistic and can be explicitly given on rigged configurations by\n\\[\n\\tau(\\nu, J) = \\sum_{a \\in I} \\min_{i \\in \\mathbf{Z}_{>0}} \\{p_i^{(a)} - \\max J_i^{(a)} \\} \\Lambda_a.\n\\]\n\n\n\n\n\nNow we formalize the dual version of $\\RC(\\lambda)$.\n\n\\begin{dfn}\nLet $\\RC(\\lambda)^*$ denote the closure of $(\\nu_{\\emptyset}, J_{\\emptyset})$ under $e_a^*$ and the following modified $f_a^*$, both using the vacancy numbers given by Equation~\\eqref{eq:vacancy_numbers} to determine the colabels. Consider $f_a^*$ as in Definition~\\ref{def:RC_star_crystal_ops} except we define $f_a^*(\\nu, J) = 0$ if, in the result, there exists a rigging $x < 0$.\n\\end{dfn}\n\nNote that the condition that $x \\geq 0$ is equivalent to $p_i^{(a)} - x \\leq p_i^{(a)}$. Hence, by duality, the proof of~\\cite[Lemma~3.6]{S06} holds, and we obtain the following.\n\n\\begin{lemma}\n\\label{lemma:lower_regular}\nLet $(\\nu, J) \\in \\RC(\\lambda)^*$. 
Then\n\\[\n\\varphi_a^*(\\nu, J) = \\max\\{ k \\in \\mathbf{Z} \\mid (f_a^*)^k (\\nu, J) \\neq 0 \\}\n\\]\nfor all $a \\in I$.\n\\end{lemma}\n\nLet $\\RC_{\\lambda}(\\infty)^* = T_{\\lambda} \\otimes \\RC(\\infty)$ with the $\\ast$-crystal structure. Let $C = \\{c\\}$ be the crystal given by\n\\[\n\\mathrm{wt}(c) = 0, \\qquad\n\\varphi_a^*(c) = \\varepsilon_a^*(c) = 0, \\qquad\nf_a^*c = e_a^*c = 0,\n\\]\nfor all $a \\in I$. Nakashima \\cite[Thm. 3.1]{N99} has shown that the connected component generated by $c \\otimes t_{\\lambda} \\otimes u_{\\infty}$ is isomorphic to $B(\\lambda)$.\n\nIn~\\cite{SalS15}, the map $\\psi_{\\lambda,\\mu} \\colon \\RC(\\lambda) \\longrightarrow \\RC(\\mu)$, for $\\lambda \\leq \\mu$ in $P^+ \\sqcup \\{\\infty\\}$, is the identity map on rigged configurations. This follows because $e_a$ and $f_a$ are determined by the riggings alone, not the vacancy numbers, and so preserving the labels is sufficient to show $\\psi_{\\lambda,\\mu}$ commutes with the crystal operators. However, for the $\\ast$-crystal structure, we need to preserve coriggings, and as such, we need to take into account the shift in vacancy numbers. Thus, define a map $\\psi_{\\lambda,\\mu}^* \\colon \\RC(\\lambda)^* \\longrightarrow \\RC(\\mu)^*$ as the identity on the partitions but with new riggings\n\\[\nx' = x + \\langle h_a , \\mu - \\lambda\\rangle,\n\\]\nwhere we make the convention that $\\inner{h_a}{\\infty} = 0$. Note that $\\psi_{\\lambda,\\mu}^*$ commutes with the crystal operators (however, it only becomes a crystal embedding after an appropriate tensor product is taken to shift weights).\n\nWith this modification, Proposition~\\ref{prop:ep_phi_star}, and Lemma~\\ref{lemma:lower_regular}, we have the dual argument of~\\cite[Thm.~6.1]{SalS15}. \n\n\\begin{thm}\nLet $C_{\\emptyset}^*$ denote the connected component of $C \\otimes \\RC_{\\lambda}(\\infty)^*$ generated by $c \\otimes (\\nu_{\\emptyset}, J_{\\emptyset})$. 
The map $\\Psi \\colon C_{\\emptyset}^* \\longrightarrow \\RC(\\lambda)^*$ given by \n\\[\nc \\otimes (\\nu_{\\lambda}, J_{\\lambda}) \\mapsto (\\psi_{\\lambda,\\infty}^*)^{-1}(\\nu_{\\lambda}, J_{\\lambda})\n\\]\nis a weight-preserving bijection which commutes with $e_a^*$ and $f_a^*$ for every $a\\in I$.\n\\end{thm}\n\n\\begin{cor}\nLet $\\mathfrak{g}$ be of symmetrizable type. Then $\\RC(\\lambda)^* \\cong B(\\lambda)$.\n\\end{cor}\n\nHence, we can now construct an explicit crystal isomorphism $\\RC(\\lambda)^* \\cong \\RC(\\lambda)$ by passing through $\\RC(\\infty)$.\n\n\\begin{cor}\nLet $\\mathfrak{g}$ be a symmetrizable Kac-Moody algebra and let $\\lambda \\in P^+$. Define $\\Xi \\colon \\RC(\\lambda) \\longrightarrow \\RC(\\lambda)^*$ by $\\Xi(\\nu, J) = (\\nu, J')$, where the resulting riggings are\n\\[\nx' = x + \\inner{h_a}{\\lambda}.\n\\]\nThen $\\Xi$ is a crystal isomorphism.\n\\end{cor}\n\n\\begin{proof}\nWe have $\\Xi = (\\psi_{\\lambda,\\infty}^*)^{-1} \\circ \\psi_{\\lambda,\\infty}$.\n\\end{proof}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbsxo b/data_all_eng_slimpj/shuffled/split2/finalzzbsxo new file mode 100644 index 0000000000000000000000000000000000000000..352a029efa2147498aa77f32eec9498ba69efae1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbsxo @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nBranching programs are one of the well-known models of computation. These models have been shown useful in a variety of domains, such as hardware verification, model checking, and other CAD applications (see, for example, the book by Wegener \\cite{Weg00}). It is known that the class of Boolean functions computed by polynomial-size branching programs coincides with the class of functions computed by non-uniform log-space machines. 
Moreover, branching programs are a convenient model for considering different (natural) restricted variants and different complexity measures such as size (number of inner nodes), length, and width.\n\nOne important restricted variant of branching programs is the oblivious read-once branching program, also known in applied computer science as the Ordered Binary Decision Diagram (OBDD) \\cite{Weg00}. Since the length of an OBDD is at most linear (in the length of the input), the main complexity measure is ``width''.\n\nOBDDs can also be seen as nonuniform automata (see, for example, \\cite{ag05}). During the last decades, different variants of OBDDs were considered, i.e., deterministic, nondeterministic, probabilistic, and quantum, and\nmany results have been proved on the comparative power of deterministic, nondeterministic, and randomized OBDDs \\cite{Weg00}. For example, Ablayev and Karpinski \\cite{ak96} presented the first function that is polynomially easy for randomized and exponentially hard for deterministic and even nondeterministic OBDDs. More specifically, it was proven that the OBDD variants of $ \\mathsf{coRP} $ and $\\mathsf{NP}$ are different.\n\nIn the last decade, the quantum model of OBDDs came into play \\cite{agk01},\\cite{nhk00},\\cite{SS05}. It was proven that quantum OBDDs can be exponentially cheaper than classical ones, and it was shown that this bound is tight \\cite{agkmp05}.\n\n\nIn this paper we present the first results on the comparative complexity of classical and quantum OBDDs computing partial functions. Then, we focus on the width complexity of deterministic and nondeterministic OBDDs, which has been investigated in different papers (see \\cite{hs00}, \\cite{hs03} for more information and citations). Here we present very strict hierarchies for the classes of Boolean functions computed by deterministic and nondeterministic OBDDs.\n\nThe paper is organized as follows. Section 2 contains the definitions and notations used in the paper. 
In Section 3, we compare classical and exact quantum OBDDs. We consider a partial function depending on a parameter $k$ such that, for any $k>0$, this function is computed by an exact quantum OBDD of width $2$ but deterministic or bounded-error probabilistic OBDDs need width $2^{k+1}$. Also, it is easy to show that a nondeterministic OBDD needs width $k+1$. In Section 4, we consider quantum and classical nondeterminism. We show that quantum nondeterministic OBDDs can be more efficient than their classical counterparts. We present an explicit function which is computed by a quantum nondeterministic OBDD of constant width, while any classical nondeterministic OBDD for it needs non-constant width. Section 5 contains our results on hierarchies for sublinear (5.1) and larger (5.2) widths of deterministic and nondeterministic OBDDs.\n\nThe proofs of the lower bound results (Theorem \\ref{th-DOBDD1} and Lemma \\ref{peq1s}) are based on the Pigeonhole principle. The lower bound in Theorem \\ref{th-POBDD1} uses the technique of Markov chains. \n\n\\section{Preliminaries}\n\nWe refer to \\cite{Weg00} for more information on branching programs. The main model investigated throughout the paper is the OBDD (Ordered Binary Decision Diagram), a restricted version of branching programs.\n\nIn this paper we use the following notation for vectors. We use subscripts for enumerating elements of vectors and strings, and superscripts for enumerating vectors and strings. For a binary string $\\nu$, $\\#_1(\\nu)$ and $\\#_0(\\nu)$ are the number of $1$'s and $0$'s in $\\nu$, respectively. 
We denote by $\\#^k_{0}(\\nu)$ and $\\#^k_{1}(\\nu)$ the numbers of $0$'s and $1$'s in the first $k$ elements of the string $\\nu$, respectively.\n \n\nFor a given $ n>0 $, a probabilistic OBDD $ P_n $ with width $d$, defined on $ \\{0,1\\}^n $, is a 4-tuple\n$\n\tP_n=(T,v^0,Accept{\\bf,\\pi}),\n$\nwhere\n\\begin{itemize}\n\t\\item $ T = \\{ T_j : 1 \\leq j \\leq n \\mbox{ and } T_j = (A_j(0),A_j(1)) \\} $ is a sequence of ordered pairs of (left) stochastic matrices representing the transitions, where, at the $j$-th step, $ A_j(0) $ or $ A_j(1) $, determined by the corresponding input bit, is applied.\n\t\\item $ v^0 $ is a zero-one initial stochastic vector (initial state of $P_n$).\n\t\\item $ Accept \\subseteq \\{1,\\ldots,d\\} $ is the set of accepting nodes.\n\\item $ \\pi $ is a permutation of $ \\{1,\\ldots,n\\} $ defining the order of testing the input bits.\n\\end{itemize}\n\n For any given input $ \\nu \\in \\{0,1\\}^n $, the computation of $P_n$ on $\\nu$ can be traced by a stochastic vector which is initially $ v^0 $. In each step $j$, $1 \\leq j \\leq n$, the input bit $ x_{\\pi(j)} $ is tested and then the corresponding stochastic operator is applied:\n\\[\n\tv^j = A_{j}(x_{\\pi(j)}) v^{j-1},\n\\]\nwhere $ v^j $ represents the probability distribution vector over the nodes after the $ j$-th step, $ 1 \\leq j \\leq n $.\nThe accepting probability of $ P_n $ on $ \\nu $ is \n\\[\n\t\\sum_{i \\in Accept} v^n_i.\n\\]\n\nWe say that a function $f$ is computed by $ P_n $ with bounded error if there exists an $ \\varepsilon \\in (0,\\frac{1}{2}] $ such that $ P_n $ accepts all inputs from $ f^{-1}(1) $ with a probability at least $ \\frac{1}{2}+\\varepsilon $ and $ P_n $ accepts all inputs from $ f^{-1}(0) $ with a probability at most $ \\frac{1}{2}-\\varepsilon $. We say that $P_n$ computes $f$ \\textit{exactly} if $ \\varepsilon =1\/2 $. \n\nA deterministic OBDD is a probabilistic OBDD restricted to use only 0-1 transition matrices. 
In other words, the system is always in a single node and, from each node, there is exactly one outgoing transition for each value of the tested input bit.\n\nA nondeterministic OBDD (NOBDD) can make more than one outgoing transition from each node for each tested input bit, and so the program can follow more than one computational path; if one of the paths ends in an accepting node, then the input is accepted (and rejected otherwise). \n\n\n\\begin{itemize}\n\\item \n\nAn OBDD is called {\\em stable} if each transition set $T_j$ is identical for each level.\n\\item\n\nAn OBDD is called ID (ID-OBDD) if the input bits are tested in the order $\\pi=(1,2,\\dots, n)$.\nIf a {\\em stable} ID-OBDD has a fixed width and transition rules for each $ n $, then it can be considered as a realtime finite automaton.\n\n\\end{itemize}\n\nQuantum computation is a generalization of classical computation \\cite{Wat09A}. Therefore, each quantum model can simulate its probabilistic counterpart. In some cases, on the other hand, the quantum models are defined in a restricted way, e.g., using only unitary operators during the computation followed by a single measurement at the end, and so they may not simulate their probabilistic counterparts. The quantum automata literature contains many such results, e.g., \\cite{KW97,AF98,BC01B}. A similar result was also given for OBDDs in \\cite{SS05}, in which a function with a small deterministic OBDD was given, while a quantum OBDD defined in this restricted way needs exponential size to compute it.\n\nQuantum OBDDs defined with general quantum operators, i.e., superoperators \\cite{Wat03,Wat09A,YS11A}, followed by a measurement in the computational basis at the end, can simulate their classical counterparts with the same size and width. So we can always conclude that any quantum class contains its classical counterpart. 
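The matrix-vector trace in the probabilistic OBDD definition above is straightforward to simulate. The following sketch (plain Python; the encoding of the transitions and the width-2 parity example are our own illustration, not taken from the paper) applies $v^j = A_j(x_{\\pi(j)}) v^{j-1}$ level by level and sums the final distribution over the accepting nodes:

```python
def mat_vec(A, v):
    # multiply a d x d matrix A by a length-d column vector v
    return [sum(A[r][c] * v[c] for c in range(len(v))) for r in range(len(A))]

def run_obdd(transitions, v0, accept, order, x):
    """Trace a width-d (probabilistic or deterministic) OBDD on input x.
    transitions[j] = (A_j(0), A_j(1)): left-stochastic matrices;
    v0: zero-one initial distribution; accept: set of accepting nodes;
    order: the variable ordering pi (0-indexed here)."""
    v = list(v0)
    for j, var in enumerate(order):
        v = mat_vec(transitions[j][x[var]], v)  # v^j = A_j(x_{pi(j)}) v^{j-1}
    return sum(v[i] for i in accept)            # accepting probability

# Width-2 deterministic OBDD for parity: reading a 1 swaps the two nodes.
I2, SWAP = [[1, 0], [0, 1]], [[0, 1], [1, 0]]
parity = [(I2, SWAP)] * 3
print(run_obdd(parity, [1, 0], {1}, [0, 1, 2], [1, 1, 1]))  # -> 1
```

Since the matrices in this example are 0-1, the program is deterministic; replacing them by arbitrary left-stochastic matrices gives the bounded-error model.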
\n\nIn this paper, we base our quantum results on stable ID-OBDDs, which are realtime quantum finite automata. But we give the definition of general quantum OBDDs for the completeness of the paper.\n\nA quantum OBDD is the same as a probabilistic OBDD with the following modifications:\n\\begin{itemize}\n\t\\item The state set is represented by a $ d $-dimensional Hilbert space over the field of complex numbers. The initial state is $ \\ket{\\psi}_0=\\ket{q_0}$ where $ q_0 $ corresponds to the initial node.\n\t \n\t \\item Instead of a stochastic matrix, we apply a unitary matrix in each step. That is, $ T = \\{ T_j : 1 \\leq j \\leq n \\mbox{ and } T_j = (U_j^0,U_j^1) \\} $, where, at the $j$-th step, $ U_j^0 $ or $ U_j^1 $, determined by the corresponding input bit, is applied.\n\t \n\t \\item At the end, we make a measurement in the computational basis.\n\\end{itemize}\n\n\n\nThe state of the system is updated as follows after the $ j$-th step:\n\\[\n\t\\ket{\\psi}_j = U_j^{x_{\\pi(j)}} (\\ket{\\psi}_{j-1}),\n\\]\nwhere $ \\ket{\\psi}_{j-1} $ and $ \\ket{\\psi}_j $ represent the state of the system after the $ (j-1)$-th and $ j$-th steps, respectively, where $ 1 \\leq j \\leq n $.\n\n\n\nThe accepting probability of the quantum program on $ \\nu $ is calculated from $\\ket{\\psi}_n=(z_1, \\dots, z_d)$ as\n\\[\n\t\\sum_{i \\in Accept} |z_i|^2.\n\\]\n\n\n\\section{Exact Quantum OBDDs.}\n\nIn \\cite{AY12}, Ambainis and Yakary{\\i}lmaz defined a new family of unary promise problems: For any $ k>0 $, $ A^k = (A^k_{yes},A^k_{no}) $ such that $ A^k_{yes} = \\{ a^{(2i)2^k} : i \\geq 0 \\} $ and $ A^k_{no} = \\{ a^{(2i+1)2^k} : i \\geq 0 \\} $. They showed that each member of this family ($A^k$) can be solved exactly by a 2-state realtime quantum finite automaton (QFA) but any exact probabilistic finite automaton (PFA) needs at least $ 2^{k+1} $ states. 
Recently, Rashid and Yakary{\\i}lmaz \\cite{RY14A} showed that bounded-error realtime PFAs also need at least $ 2^{k+1} $ states for solving $A^k$.\\footnote{The same result also follows for two-way nondeterministic finite automata, by Geffert and Yakary{\\i}lmaz \\cite{GY14A}.} Based on this promise problem, we define a partial function:\n\\[\n\t\\mathtt{PartialMOD^k_n} (\\nu) = \\left\\lbrace \\begin{array}{lcll}\n\t\t1 & , & \\mbox{if } \\#_1(\\nu) \\equiv 0 & \\pmod{2^{k+1}} \\\\\n\t\t0 & , & \\mbox{if } \\#_1(\\nu) \\equiv 2^{k} & \\pmod{2^{k+1}} \\\\\n\t\t* & , & \\multicolumn{2}{l}{\\mbox{otherwise}} \\\\\t\n\\end{array}\t \\right. ,\n\\]\nwhere the function is not defined for the inputs mapping to ``*''. We call the inputs on which the function takes the value 1 (respectively, 0) yes-instances (respectively, no-instances).\n\n\\begin{theorem}\n\tFor any $k \\geq 0$, $ \\mathtt{PartialMOD^k_n} $ can be solved exactly by a stable quantum ID-OBDD of width 2.\n\\end{theorem}\n\n\nThe OBDD can be constructed in the same way as the QFA that solves the promise problem $ A^k $ \\cite{AY12}.\n\n\n\nWe show that the width of deterministic or bounded-error stable probabilistic OBDDs that solve $ \\mathtt{PartialMOD^k_n} $ cannot be less than $ 2^{k+1} $.\n\n\n\\begin{remark} \n Note that a proof for deterministic OBDDs is not similar to the proof for automata because, potentially, nonstability can give an advantage. Also, this proof is different from the proofs for total functions (for example, $ MOD_p$) due to the existence of incomparable inputs.\nNote that classical one-way communication complexity techniques also fail for partial functions (for example, it can be shown that the communication complexity of $ \\mathtt{PartialMOD^k_n} $ is $1$), and we need to use a more careful analysis in the proof. \n\\end{remark}\n \n \n A deterministic stable ID-OBDD of width $2^{k+1}$ for $ \\mathtt{PartialMOD^k_n} $ can be easily constructed. 
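For concreteness, the width-2 construction can be checked numerically. The sketch below is our own illustration; it assumes the rotation construction of the 2-state QFA of \\cite{AY12}, in which each 1 in the input rotates the single qubit by $\\pi \/ 2^{k+1}$ (a 0 applies the identity), so yes-instances are accepted with probability 1 and no-instances with probability 0:

```python
import math

def accept_prob(k, x):
    """Simulate the (assumed) width-2 exact quantum ID-OBDD for
    PartialMOD^k_n: each 1-bit rotates the qubit by pi / 2**(k+1)."""
    theta = math.pi / 2 ** (k + 1)
    a, b = 1.0, 0.0                      # initial state |q0>
    for bit in x:
        if bit:                          # apply the rotation by theta
            a, b = (a * math.cos(theta) - b * math.sin(theta),
                    a * math.sin(theta) + b * math.cos(theta))
    return a * a                         # probability of measuring q0

# #_1 = 2^{k+1} (yes-instance) vs #_1 = 2^k (no-instance), for k = 1:
print(round(accept_prob(1, [1, 1, 1, 1]), 6), round(accept_prob(1, [1, 1, 0, 0]), 6))
```

On a yes-instance the accumulated rotation is a multiple of $\\pi$, so the state returns to $\\pm\\ket{q_0}$; on a no-instance it is $\\pi\/2$ off, so the state is $\\pm\\ket{q_1}$, which is exactly why the separation is exact rather than bounded-error.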
We leave open the case of bounded-error non-stable probabilistic OBDDs.\n\n\\begin{theorem}\n\\label{th-DOBDD1}\n\tFor any $k\\geq 0$, there are infinitely many $n$ such that any deterministic OBDD computing the partial function $ \\mathtt{PartialMOD^k_n} $ has width at least $2^{k+1}$.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\nu\\in\\{0,1\\}^n, \\nu=\\sigma\\gamma$. We call $\\gamma$ valid for $\\sigma$ if $\\nu \\in (\\mathtt{PartialMOD^k_n})^{-1}(0) \\cup (\\mathtt{PartialMOD^k_n})^{-1}(1).$ We call two substrings $\\sigma'$ and $\\sigma''$ comparable if for all $\\gamma$ it holds that $\\gamma$ is valid for $\\sigma'$ iff $\\gamma$ is valid for $\\sigma''$.\nWe call two substrings $\\sigma'$ and $\\sigma''$ nonequivalent if they are comparable and there exists a valid substring $\\gamma$ such that $\\mathtt{PartialMOD^k_n}(\\sigma'\\gamma)\\neq\\mathtt{PartialMOD^k_n}(\\sigma''\\gamma)$.\n\nLet $P$ be a deterministic OBDD computing the partial function $ \\mathtt{PartialMOD^k_n} $. Note that paths associated with nonequivalent strings must lead to different nodes. Otherwise, if $\\sigma$ and $\\sigma'$ are nonequivalent strings leading to the same node, then for a valid string $\\gamma$ with $\\mathtt{PartialMOD^k_n}(\\sigma\\gamma)\\neq\\mathtt{PartialMOD^k_n}(\\sigma'\\gamma)$ the computations on these two inputs lead to the same final node, which is a contradiction.\n\nLet $N=2^k$ and $\\Gamma=\\{\\gamma: \\gamma\\in \\{0,1\\}^{2N-1}, \\gamma=0\\dots 0 1\\dots 1\\}$.\nWe will naturally identify any string $\\nu$ with the element $a=\\#_1(\\nu) \\pmod{2N}$ of the additive group $\\mathbb{Z}_{2N}$. We call two strings of the same length different if the numbers of ones modulo $2N$ in them are different. 
We denote by $\\rho(\\gamma^1,\\gamma^2)=\\gamma^1-\\gamma^2$ the distance between the numbers $\\gamma^1$ and $\\gamma^2$. Note that $\\rho(\\gamma^1,\\gamma^2)\\neq \\rho(\\gamma^2,\\gamma^1) $.\n\nLet the width of $P$ be $t<2N.$ On each step $i$ ($i=1,2,\\dots$) of the proof we will count the number of different strings that lead to the same node \n(denote this node by $v_i$). On the $i$-th step we consider the $(2N-1)i$-th level of $P$.\n\nLet $i=1$.\nBy the pigeonhole principle there exist two different strings $\\sigma^1$ and $ \\sigma^2$ from the set $\\Gamma$ such that the corresponding paths lead to the same node $v_1$ of the $(2N-1)$-th level of $P$. Note that $\\rho(\\sigma^1, \\sigma^2)\\neq N,$ because otherwise $\\sigma^1$ and $\\sigma^2$ would be nonequivalent and could not lead to the same node.\n\nWe will show by induction that on each step of the proof the number of different strings which lead to the same node increases.\n\nStep 2. By the pigeonhole principle there exist two different strings $\\gamma^1$ and $ \\gamma^2$ from the set $\\Gamma$ such that the corresponding paths from the node $v_1$ lead to the same node $v_2$ of the $(2N-1)2$-th level of $P$. In this case, the strings $\\sigma^1 \\gamma^1,\\sigma^2 \\gamma^1,\\sigma^1 \\gamma^2 $, and $\\sigma^2 \\gamma^2$ lead to the node $v_2$. Note that $\\rho(\\gamma^1, \\gamma^2)\\neq N,$ because otherwise $\\sigma^1\\gamma^1$ and $\\sigma^1\\gamma^2$ would be nonequivalent and could not lead to the same node. \n\nAdding the same number does not change the distance between the numbers, so we have\n \\[\\rho(\\sigma^1+\\gamma^1, \\sigma^2+\\gamma^1)=\\rho(\\sigma^1,\\sigma^2)\\]\n and\n\\[\\rho(\\sigma^1+\\gamma^2, \\sigma^2+\\gamma^2)=\\rho(\\sigma^1,\\sigma^2).\\]\nLet $\\gamma^2>\\gamma^1$. Denote $\\Delta=\\gamma^2-\\gamma^1 $. 
Let us count the number of different numbers among $\\sigma^1+\\gamma^1,$ $\\sigma^2+\\gamma^1 $, $\\sigma^1+\\gamma^1+\\Delta $, and $\\sigma^2+\\gamma^1+\\Delta $.\nBecause $\\sigma^1$ and $\\sigma^2$ are different and $\\rho(\\sigma^1, \\sigma^2)\\neq N$, the numbers from the pair $\\sigma^1+\\gamma^1 $ and $\\sigma^2+\\gamma^1 $ coincide with the corresponding numbers from the pair $\\sigma^1+\\gamma^1+\\Delta $ and $\\sigma^2+\\gamma^1+\\Delta $ iff $\\Delta=0 \\pmod {2N}$. But $\\Delta \\neq 0 \\pmod {2N}$ since the numbers $\\gamma^1 $ and $ \\gamma^2$ are different and $\\gamma^1, \\gamma^2<2N.$ The numbers $\\sigma^1+\\gamma^1+\\Delta $ and $\\sigma^2+\\gamma^1+\\Delta $ cannot be a permutation of the numbers $\\sigma^1+\\gamma^1 $ and $\\sigma^2+\\gamma^1 $ since $\\rho(\\gamma^1, \\gamma^2)\\neq N$ and $\\rho(\\sigma^1, \\sigma^2)\\neq N$.\nIn this case, at least 3 numbers from $\\sigma^1+\\gamma^1 $, $\\sigma^2+\\gamma^1 ,$ $\\sigma^1+\\gamma^2 $, and $\\sigma^2+\\gamma^2 $ are different.\n\nInduction step. Let the numbers $\\sigma^1, \\dots, \\sigma^i$ be different at step $i-1$ and let the corresponding paths lead to the same node $v_{i-1}$ of the $(2N-1)(i-1)$-th level of $P$. \n\nBy the pigeonhole principle there exist two different strings $\\gamma^1 $ and $ \\gamma^2$ from the set $\\Gamma$ such that the corresponding paths from the node $v_{i-1}$ lead to the same node $v_i$ of the $(2N-1)i$-th level of $P$. So the paths $\\sigma^1\\gamma^1,\\dots, \\sigma^i\\gamma^1,\\sigma^1\\gamma^2,\\dots, \\sigma^i\\gamma^2 $ lead to the same node $v_i$. Let us estimate the number of different strings among them. Note that $\\rho(\\gamma^1, \\gamma^2)\\neq N$, since otherwise the strings $\\sigma^1\\gamma^1$ and $ \\sigma^1\\gamma^2$ would be nonequivalent but lead to the same node. \n\nThe numbers $\\sigma^1, \\dots,\\sigma^{i}$ are different and $\\rho(\\sigma^l, \\sigma^j)\\neq N$ for each pair $(l,j) $ such that $ l\\neq j$.\nLet $\\sigma^1 < \\dots <\\sigma^i$. 
We will show that among $\\sigma^1+\\gamma^1 ,\\dots, \\sigma^i+\\gamma^1 $ and $\\sigma^1+\\gamma^1+\\Delta , \\dots, \\sigma^i+\\gamma^1+\\Delta $ at least $i+1$ numbers are different.\n\n\nThe sequence of numbers $\\sigma^1+\\gamma^1 ,\\dots, \\sigma^i+\\gamma^1 $ coincides with the sequence $\\sigma^1+\\gamma^1+\\Delta , \\dots, \\sigma^i+\\gamma^1+\\Delta $ iff $\\Delta=0 \\pmod {2N}$. But $\\Delta\\neq 0 \\pmod {2N}$ since $\\gamma^1$ and $ \\gamma^2$ are different and $\\gamma^1, \\gamma^2<2N.$\n \nSuppose that the sequence $\\sigma^1+\\gamma^1+\\Delta , \\dots, \\sigma^i+\\gamma^1+\\Delta $ is a permutation of the sequence $\\sigma^1+\\gamma^1 $,$\\dots,$ $\\sigma^i+\\gamma^1 $. In this case, \nwe have numbers $a_0, \\dots, a_r$ from $\\mathbb{Z}_{2N}$ such that all $a_j$ are from the sequence $\\sigma^1+\\gamma^1 $,$\\dots,$ $\\sigma^i+\\gamma^1 $, $a_0=a_r=\\sigma^1+\\gamma^1 $, and $a_j=a_{j-1}+\\Delta $, where $j=1, \\dots, r$. \nIn this case, $r\\Delta=2Nm$ for some integer $m$.\nBecause $N=2^k$, $\\Delta<2N$, and $\\Delta\\neq N$, we have that $r$ is even. For $z=r\/2$ we have $z\\Delta=Nm$.\nSince all numbers from $\\sigma^1+\\gamma^1,\\dots, \\sigma^i+\\gamma^1$ are different, we have that $\\rho(a_0,a_z)=N$.\nSo we have that $a_0 $ and $ a_z$ are nonequivalent but the corresponding strings lead to the same node $v_i$. \nSo after the $i$-th step, we have that at least $i+1$ different strings lead to the same node $v_i.$\n\nOn the $N$-th step, we have that $N+1$ different strings lead to the same node $v_N$. Among these strings, there must be at least two nonequivalent strings. Thus we conclude that $P$ cannot compute the function $\\mathtt{PartialMOD^k_n}$ correctly. 
\n\\qed\\end{proof}\n\n\\begin{theorem}\n\\label{th-NOBDD1}\n\tFor any $k\\geq 0$, there are infinitely many $n$ such that any nondeterministic OBDD computing the partial function $ \\mathtt{PartialMOD^k_n} $ has width at least $2^{k+1}$.\n\\end{theorem}\n\nThe proof is similar to the deterministic case, with the following modifications. \nLet $P$ be an NOBDD that computes $ \\mathtt{PartialMOD^k_n} $. \nWe consider only accepting paths in $ P $. Note that if $P$ computes $ \\mathtt{PartialMOD^k_n} $ correctly, then accepting paths associated with nonequivalent strings cannot pass through the same nodes. Denote $N=2^k$. Let $\\Gamma=\\{\\gamma: \\gamma\\in \\{0,1\\}^{2N-1}, \\gamma=\\underbrace{0\\dots 0}_{2N-1-j} \\underbrace{1\\dots 1}_{j}, j=0, \\dots,2N-1\\}$.\n\n\n Denote by $V_{l}$ the set of nodes on the $l$-th level of $ P $. Assume that the width of $P$ is less than $2N$, that is, $|V_l|<2N$ for each $ l=0, \\dots,n $.\n\nDenote by $V_l(\\gamma, V)$ the set of nodes of the $l$-th level through which pass the accepting paths that lead from the nodes of the set $ V $ and correspond to the string $\\gamma$.\n\nOn step $i$ ($ i=1,2,\\dots $) of the proof we consider the $(2N-1)i$-th level of $P$.\nBecause $|V_{2N-1}|<2N$, on the first step of the proof there exist two different strings $\\sigma^1,\\sigma^2$ from the set $\\Gamma$ such that $ V_{2N-1}(\\sigma^1, V_0) \\cap V_{2N-1}(\\sigma^2, V_0) \\neq \\emptyset$. Denote this nonempty intersection by $G_1$. Next we continue our proof considering only accepting paths which pass through the nodes of $G_1.$\n\nUsing the same ideas as in the deterministic case, we can show that the number of strings with different numbers of ones modulo $2N$, such that the corresponding accepting paths lead to the same set of nodes, increases with each step of the proof. \n\nOn the $N$-th step of the proof there must be two different nonequivalent strings such that the corresponding accepting paths lead to the same set $G_N$ of nodes. 
That implies that $P$ does not compute the function $\\mathtt{PartialMOD^k_n}$ correctly. \n\n\n\n\n\\begin{theorem}\n\\label{th-POBDD1}\nFor any $k\\geq 0$, there are infinitely many $n$ such that any stable probabilistic OBDD computing $ \\mathtt{PartialMOD^k_n} $ with bounded error has width at least $2^{k+1}$. \n\\end{theorem}\n\nThe proof of the theorem is based on the technique of Markov chains; the details are given in Appendix \\ref{app-POBDD1}.\n\n\\section{Nondeterministic Quantum and Classical OBDDs.}\n\nIn \\cite{YS10A}, Yakary{\\i}lmaz and Say showed that nondeterministic QFAs can define a superset of the regular languages, called exclusive stochastic languages \\cite{Paz71}. This class contains the \\textit{complements} of some interesting languages:\n$ PAL = \\{ w \\in \\{0,1\\}^* : w = w^r \\} $, where $w^r$ is the reverse of $w$, $ O = \\{ w \\in \\{0,1\\}^* : \\#_1(w)=\\#_0(w)\\} $, $ SQUARE = \\{ w \\in \\{0,1\\}^* : \\#_1(w)=(\\#_0(w))^2\\} $, and $ POWER = \\{ w \\in \\{0,1\\}^* : \\#_1(w)=2^{\\#_0(w)}\\} $.\n\nBased on these languages, we define three symmetric functions for any input $ \\nu \\in \\{0,1\\}^n $:\n\t \\[ \\mathtt{NotO_n}(\\nu) = \\left\\lbrace \\begin{array}{lll}\n\t\t0 & , & \\mbox{if } \\#_0(\\nu) = \\#_1(\\nu) \\\\\n\t\t1 & , & \\mbox{otherwise }\n\\end{array}\t \\right., \\]\n\t\\[ \\mathtt{NotSQUARE_n}(\\nu) = ~~~~ \\left\\lbrace \\begin{array}{lll}\n\t\t0 & , & \\mbox{if } (\\#_0(\\nu))^2 = \\#_1(\\nu) \\\\\n\t\t1 & , & \\mbox{otherwise }\n\\end{array}\t \\right., \\]\n\t\\[ \\mathtt{NotPOWER_n}(\\nu) = \\left\\lbrace \\begin{array}{lll}\n\t\t0 & , & \\mbox{if } 2^{\\#_0(\\nu)} = \\#_1(\\nu) \\\\\n\t\t1 & , & \\mbox{otherwise }\n\\end{array}\t \\right.. 
\\]\n\n\n\\begin{theorem}\n\tThe Boolean functions $ \\mathtt{NotO_n} $, $ \\mathtt{NotSQUARE_n} $, and $ \\mathtt{NotPOWER_n} $ can be computed by nondeterministic quantum OBDDs of constant width.\n\\end{theorem}\n\nFor all three functions, we can define nondeterministic quantum (stable ID-) OBDDs of constant width based on the nondeterministic QFAs for the languages $ O $, $SQUARE$, and $POWER$, respectively \\cite{YS10A}. \n\n\nThe complements of $ PAL, O, SQUARE $ and $POWER$ cannot be recognized by classical nondeterministic finite automata. But, for example, the function version of the complement of $PAL$, $ \\mathtt{NotPAL_n} $, which returns 1 only for non-palindrome inputs, is quite easy since it can be computed by a deterministic OBDD of width $3$. Note that the order of such an OBDD is not the natural one $(1,\\dots,n)$. However, as will be shown soon, this is not the case for the function versions of the complements of the other three languages.\n\n\\begin{theorem}\\label{nondetermenisticBoundTheorem}\nThere are infinitely many $n$ such that any NOBDD $P_n$ computing $ \\mathtt{NotO_n} $ has width at least $\\lfloor\\log n\\rfloor -1 $.\n\\end{theorem}\n\nThe proof of the theorem is based on the complexity properties of the Boolean function $\\mathtt{NotO_n}$. First we discuss the complexity properties of this function in Lemma \\ref{leqN}; after that we prove the claim of the theorem.\n\t\n\\begin{lemma}\\label{leqN}\nThere are infinitely many $n$ such that any OBDD computing $ \\mathtt{NotO_n} $ has width at least $n\/2 + 1$. 
(For the proof see \\ref{app-leqN}.)\n\\end{lemma}\n{\\em Proof of Theorem \\ref{nondetermenisticBoundTheorem}}\\quad\\quad\n Suppose the function $\\mathtt{NotO_n}$ is computed by an NOBDD $P_n$ of width $d$. Then, in the same way as in the proof of Theorem \\ref{th-NOBDD1}, we have\n$d\\geq\\log (n\/2 + 1)>\\log n -1.$\n\\hfill$\\Box$\\\\\n\nIn the same way we can show that there are infinitely many $n$ such that any NOBDD $P_n$ computing the function $\\mathtt{NotSQUARE_n}$ has width at least $ \\Omega(\\log (n)) $ and any NOBDD $P'_n$ computing the function $\\mathtt{NotPOWER_n}$ has width at least $ \\Omega(\\log\\log (n)) $.\n\n\n\\section{Hierarchies for Deterministic and Nondeterministic OBDDs}\n\nWe denote by $\\mathsf{OBDD^d}$ and $\\mathsf{NOBDD^d}$ the sets of Boolean functions that can be computed by OBDDs and NOBDDs of width $d=d(n)$, respectively, where $n$ is the number of variables.\nIn this section, we present some width hierarchies for $\\mathsf{OBDD^d}$ and $\\mathsf{NOBDD^d}$. Moreover, we discuss relations between these classes.\nWe consider $\\mathsf{OBDD^d}$ and $\\mathsf{NOBDD^d}$ with small (sublinear) widths and large widths.\n\n\\subsection{Hierarchies and relations for small width OBDDs}\n\n \nWe have the following width hierarchy for deterministic and nondeterministic models.\n\n\\begin{theorem}\\label{smallwh}\nFor any integer $n$, $d=d(n)$, and $1{\\mbox{\\bf{generateCookie}}(Miner $M$)\\{}\\\\\n3.\\>\\>{$R_M$ := getRandom(256);}\\\\\n4.\\>\\>{$C_M$ := $H^2(R_M, M.uname)$;}\\\\\n5.\\>\\>{$K_M$ := $M.key$;}\\\\\n6.\\>\\>{store($M.uname$, $K_M$, $R_M$, $target$);}\\\\\n7.\\>\\>{sendEncrypted($M$, $E_{K_M}(R_M)$);}\\\\\n\n8.\\>{\\mbox{\\bf{verifyJob}}(Miner $M$,\\ $nonce$,\\ $extranonce2$)\\{}\\\\\n9.\\>\\>{($K_M$,\\ $R_M$,\\ $target$) := getMParams($M.uname$);}\\\\\n10.\\>\\>{$C_M$ := $H^2(R_M, M.uname)$;}\\\\\n11.\\>\\>{$F$ := computeF($C_M$,\\ $extranonce2$);}\\\\\n12.\\>\\>{\\mbox{\\bf{if}}\\ ($H^2(nonce || F) < 
target$)}\\\\\n13.\\>\\>\\>{sendToMiner($M$, result, ``accept'');}\\\\\n14.\\>\\>{\\mbox{\\bf{else}}\\ sendToMiner($M$, result, ``reject'');}\\\\\\\\\n\n15.{Implementation\\ \\mbox{\\bf{Miner}}}\\\\\n16.\\>{$K_M$ : int[256] \\% key\\ shared\\ with\\ pool}\\\\\n17.\\>{$C_M$ : int[256] \\% mining\\ cookie;}\\\\\n18.\\>{\\mbox{\\bf{solvePuzzle}}($target$: int)\\{}\\\\\n19.\\>\\>{\\mbox{\\bf{do}}}\\\\\n20.\\>\\>\\>{$randPerm$ := newPseudoRandPerm(32);}\\\\\n21.\\>\\>\\>{$extranonce2$ := getRandom(32);}\\\\\n22.\\>\\>\\>{$F$ := computeF($C_M$,\\ $extranonce2$);}\\\\\n23.\\>\\>\\>{\\mbox{\\bf{while}}\\ ($randPerm$.isNext())\\{}\\\\\n24.\\>\\>\\>\\>{$nonce$ := $randPerm$.next();}\\\\\n25.\\>\\>\\>\\>{\\mbox{\\bf{if}}\\ ($H^2(nonce || F) < target$)}\\\\\n26.\\>\\>\\>\\>\\>{sendToPool($uname$, $nonce$, $extranonce2$);}\\\\\n27.\\>\\>{\\mbox{\\bf{while}} ($clean\\_jobs$ != 1)}\n\\vspace{-25pt}\n\\end{tabbing}\n\\caption{Bedrock pseudo-code for cookie generation and job verification (pool\nside), and job solving (miner side).}\n\\label{alg:bedrock}\n\\end{algorithm}\n\\end{minipage}\n\\normalsize\n\\vspace{-15pt}\n\\end{figure}\n\n\nBedrock has 3 components, each addressing different Stratum vulnerabilities.\nThe first component authenticates and obfuscates the job assignment and share\nsubmission messages. The second component secures the share difficulty\nnotifications, and the third component secures the pool's inference of the\nminer's capabilities. In the following we detail each component. 
We assume\nthat the pool shares a unique secret symmetric key $K_M$ with each miner $M$.\nThe miner and the pool create the key during the first authorization protocol\n(see $\\S$~\\ref{sec:stratum}), e.g., using authenticated Diffie-Hellman.\n\n\\vspace{-5pt}\n\n\\subsubsection{Mining Cookies}\n\\label{sec:bedrock:solution:cookies}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{{bedrock.puzzle}.pdf}\n\\vspace{-15pt}\n\\caption{Bedrock puzzle illustration. The cookie $C_M$ is generated on the\npool, while the $nonce$ and $extranonce2$ are generated on the miner. The\ncoinbase transaction contains both $C_M$ and $extranonce2$, see\nFigure~\\ref{fig:coinbase}.}\n\\label{fig:bedrock:puzzle}\n\\vspace{-15pt}\n\\end{figure}\n\nThe share submission packets are particularly vulnerable. First, they can\nreveal the target value, thus the difficulty of the jobs on which the miner\nworks and then the miner's hashrate (see $\\S$~\\ref{sec:bedrock:discussion}).\nSecond, share submissions can be hijacked by an active adversary, see\n$\\S$~\\ref{sec:attacks:active}. Encryption of share submissions\nwould prevent these attacks, but it would strain the pool's resources.\n\nTo efficiently address these vulnerabilities, we introduce the concept of a {\\it\nmining cookie}, a secret that each miner shares with the pool, see\nFigure~\\ref{fig:bedrock:puzzle} and Algorithm~\\ref{alg:bedrock}. The miner uses\nits mining cookie as an additional, secret field in the Bitcoin puzzle. Without\nknowledge of the mining cookie, an adversary cannot infer the progress made by\nthe miner, thus its hashrate and payout, and thus cannot hijack shares submitted by\nthe miner.\n\nSpecifically, let $R_M$ be a random cookie seed that the pool generates for a\nminer $M$ (Algorithm~\\ref{alg:bedrock}, line 3). The pool associates $R_M$ with\n$M$, and stores it along with $M$'s symmetric key $K_M$ and its current\n$target$ value (line 6). 
The pool computes $M$'s cookie as $C_M = H^2(R_M,\nM.uname)$ (line 4), where $M.uname$ is the username of the miner. It then\nsends $R_M$ to $M$, encrypted with the key $K_M$ (line 7), see\n$\\S$~\\ref{sec:bedrock:solution:set}. The miner similarly uses $R_M$ and its\nusername to compute $C_M$.\n\nTo minimally modify Bitcoin, Bedrock stores the cookie as part of the coinbase\ntransaction (see Figure~\\ref{fig:coinbase}), in the place of its unused {\\it\nprevious hash} field. This field is unused since the coinbase transaction does\nnot need a meaningful input address hash (see\n$\\S$~\\ref{sec:model:coinbase}). Thus, the puzzle remains the same: the miner\niterates over the $nonce$ and $extranonce2$ values, and reports the pairs that\nsolve the puzzle, along with its username, in share submission packets (lines\n23-26).\n\nTo verify the shares, the pool retrieves the miner's key $K_M$, random seed\n$R_M$ and $target$ values (line 9). It uses $R_M$ to reconstruct the cookie\n(line 10) and uses $target$, and the reported $nonce$ and $extranonce2$ values,\nto reconstruct and verify the puzzle (lines 11 and 12).\n\n\n\\noindent\n{\\bf Random iterators}.\nIn the Bitcoin protocol and the Stratum implementation on F2Pool, the $nonce$\nand $extranonce2$ values are incremented sequentially: once the miner exhausts\n$nonce$, it increments $extranonce2$, then continues to iterate over a reset\n$nonce$ value. In $\\S$~\\ref{sec:bedrock:discussion} we show that this further\nexposes the miner to hashrate inference attacks. We address this problem by\nrequiring the miner to choose random values for $nonce$ and $extranonce2$ at\neach puzzle iteration.
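The cookie derivation and the pool-side share check (lines 4 and 9--12 of Algorithm~\ref{alg:bedrock}) can be sketched as follows. This is a minimal sketch: \texttt{build\_F} stands in for the construction of the puzzle field $F$ from the coinbase transaction and Merkle branches, and is an assumption of this illustration.

```python
import hashlib

def H2(data: bytes) -> bytes:
    """Double SHA-256, the hash used throughout Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def compute_cookie(R_M: bytes, uname: str) -> bytes:
    # C_M = H^2(R_M, uname): binds the cookie to the miner's identity, so a
    # share hijacked and resubmitted under another username fails verification.
    return H2(R_M + uname.encode())

def verify_share(R_M, uname, build_F, nonce, extranonce2, target):
    # Pool-side verification: rebuild the cookie, then the puzzle field F
    # (which embeds the cookie via the coinbase transaction), and finally
    # test the double hash of nonce || F against the stored target.
    C_M = compute_cookie(R_M, uname)
    F = build_F(C_M, extranonce2)   # hypothetical helper, see lead-in
    return int.from_bytes(H2(nonce + F), "big") < target
```

Because the username enters the cookie, the same `(nonce, extranonce2)` pair verifies only for the miner that produced it.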
To prevent the miner from recomputing an expensive\nMerkle tree root at each iteration, we iterate through the $nonce$ space\nusing a pseudo random permutation (lines 20, 24).\n\n\\noindent\n{\\bf Cookie refresh}.\nWhen a miner mines the current block, i.e., when $H^2(nonce || F)$ is\nless than the target corresponding to the $Nbits$ value, see\n$\\S$~\\ref{sec:model:puzzle}, the puzzle solution needs to be published in the\nblockchain. The published block needs to include all the fields that defined\nthe puzzle (see $\\S$~\\ref{sec:model:puzzle}), including the miner's cookie, to\nbe publicly verified.\n\nTo prevent an adversary who monitors the blockchain from learning the\nmining cookie of a victim miner and then launching a successful BiteCoin attack\n(see $\\S$~\\ref{sec:bedrock:discussion}), Bedrock changes the mining cookie of\nthe miner once the miner mines the current block. This is an infrequent\nevent: for AntMiner S7 mining equipment, with a hashrate of 4.73 TH\/s, and\nthe current Bitcoin network difficulty (2.58522748405e+11),\nEquation~\\ref{eq:expected_time_to_share} shows that the expected time to mine a\nblock is 7.44 years. This is a conservative lower bound, since it assumes a\nconstant difficulty. In reality, the difficulty has increased exponentially\nsince the creation of Bitcoin.\nTo change the cookie, the pool invokes generateCookie (line 2).\n\n\n\\vspace{-5pt}\n\n\\subsubsection{Protect Communicated Secrets}\n\\label{sec:bedrock:solution:set}\n\n\\vspace{-5pt}\n\nStratum's share difficulty notification messages reveal the difficulty that the\npool assigns to the miner and that the miner uses in subsequent jobs.\nKnowledge of the puzzle difficulty value, coupled with the (regulated) share\nsubmission rate, enables the adversary to infer the hashrate of the miner\n(see Equation~\\ref{eq:e}), thus its payout.
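To illustrate why this leak matters, the standard pool-mining relation (a difficulty-$d$ share requires $d \cdot 2^{32}$ hash evaluations in expectation) lets an eavesdropper turn the leaked difficulty plus observed share timing into a hashrate estimate. A minimal sketch, assuming Equation~\ref{eq:e} has this standard form:

```python
def estimated_hashrate(difficulty: float, mean_share_interval_s: float) -> float:
    # A difficulty-d share takes ~d * 2**32 hash evaluations in expectation,
    # so the leaked difficulty plus the observed mean time between share
    # submissions yields the miner's hashrate (in hashes per second).
    return difficulty * 2**32 / mean_share_interval_s

# At difficulty 1024, roughly one share per second corresponds to ~4.4 TH/s,
# in the ballpark of the AntMiner S7 measurements reported later.
hr = estimated_hashrate(1024, 1.0)
```

From the hashrate, the pool's public hashrate-to-BTC conversion then gives the miner's payout.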
In addition, Bedrock also needs to\ncommunicate secret values (e.g., the random $R_M$, see\n$\\S$~\\ref{sec:bedrock:solution:cookies}). Bedrock addresses these problems by\nextending Stratum's set difficulty notifications to the following {\\bf mining\nencrypted} message:\n\n\\vspace{-10pt}\n\n\n\\[\n{\\tt mining.encrypted,\\ E_{K_M}(param\\_list)}\n\\]\n\n\\noindent\nwhere \\textit{param\\_list} is a list of values that need protection, i.e.,\ndifficulty values and the secret $R_M$. Specifically, \\textit{param\\_list} can\ncontain any number of sensitive values in the format {\\tt\n[[``difficulty'',1024],[``secret'',$R_M$]]}.\n\n\n\\vspace{-5pt}\n\n\\subsubsection{Secure Hashrate Computation}\n\n\\vspace{-5pt}\n\nThe hashrate inference protocol following a miner subscription and\nauthorization, documented in $\\S$~\\ref{sec:attacks:passive:isp} and\n$\\S$~\\ref{sec:eval:passive:isp}, can also be exploited by an adversary to infer\nthe miner's hashrate. To address this vulnerability, Bedrock requires the miner\nto directly report its hashrate during the initial subscription message, along\nwith other miner capabilities. The miner can locally estimate its hashrate,\ne.g., by creating and executing random jobs with a difficulty of 1024. The\nminer also needs to factor in its communication latency to the pool, which it\ncan infer during the subscription protocol.
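A minimal sketch of how such a notification could be assembled follows. The JSON framing and the \texttt{encrypt} callable (standing in for AES-256 encryption under the shared key $K_M$) are assumptions of this sketch, not the paper's exact wire format.

```python
import json

def mining_encrypted(encrypt, param_list):
    # Wrap sensitive values (difficulty, cookie seed R_M, reported hashrate)
    # in a single encrypted Stratum notification. `encrypt` stands in for
    # AES-256 encryption under the shared key K_M.
    payload = json.dumps(param_list).encode()
    return json.dumps({"method": "mining.encrypted",
                       "params": [encrypt(payload).hex()]})

# Placeholder "cipher" (byte reversal) purely for illustration:
msg = mining_encrypted(lambda b: b[::-1],
                       [["difficulty", 1024], ["secret", "R_M_hex"]])
```

Only the ciphertext crosses the wire, so a passive observer learns neither the difficulty nor $R_M$.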
The miner sends its hashrate\nencrypted, using the ``mining encrypted'' message defined above.\n\n\nIf the pool subsequently receives share submissions from the miner outside\nthe desired rate range, it can adjust the difficulty (through the above\nencrypted share difficulty notifications) to reflect its more accurate\ninference of the miner's hashrate.\n\n\\vspace{-5pt}\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\\subsection{Security Discussion}\n\\label{sec:bedrock:discussion}\n\n\\vspace{-5pt}\n\nWe now discuss attacks against Stratum and Bedrock, and detail the defenses\nprovided by Bedrock.\n\n\\noindent\n\\textbf{Target reconstruction attack}.\nAn attacker that can inspect cleartext subscription response, job assignment\nand share submission packets can reconstruct the job (i.e., puzzle) solved by\nthe victim miner: recover $extranonce1$ from an early miner subscription\nmessage, $coinbase1$, $coinbase2$ and the Merkle tree branches from a job\nassignment, and $nonce$ and $extranonce2$ from a subsequent share submission\npacket. The attacker then reconstructs the $F$ field of the puzzle (see\n$\\S$~\\ref{sec:model:puzzle}) and uses it to infer the miner's hashrate, even\nwithout knowing the puzzle's associated $target$ value. Specifically, the\nattacker computes the double hash of $F$ concatenated with $nonce$, then\ncounts the number of leading zeroes to obtain an upper bound on the job's\ntarget. The attacker then uses recorded inter-share submission time stats and\nEquation~\\ref{eq:e} to estimate the miner's hashrate.\n\nBedrock thwarts this attack through its use of the cookie $C_M$, a secret known\nonly by the miner and the pool. The cookie is part of the puzzle. Without its\nknowledge, the attacker cannot reconstruct the entire puzzle, and thus cannot\ninfer the target.\n\n\\noindent\n\\textbf{Brute force the cookie}.\nThe attacker can try to brute force the cookie value.
To gain confidence, the\nattacker uses the fields from multiple jobs assigned to the same miner to try\neach candidate cookie value. A candidate is considered ``successful'' if it\nproduces a high target value for all the considered jobs. However, in\n$\\S$~\\ref{sec:implementation} we leverage the unused, 256-bit long ``previous\nhash'' field of the coinbase transaction to store the mining cookie. Brute\nforcing this field is considered infeasible.\n\n\\noindent\n{\\bf Resilience to cryptographic failure}.\nWe assume now an adversary that is able to break the encryption employed by the\npool and the miner, e.g., due to the use of weak random values. Giechaskiel et\nal.~\\cite{GCR16} studied the effect of broken cryptographic primitives on\nBitcoin, see $\\S$~\\ref{sec:related}. While such an adversary can compromise the\nprivacy of the miner, by recovering the miner's cookie, he will be prevented\nfrom launching active attacks. This is because the miner's cookie is a function\nof both a random number and the miner's username.\n\nSpecifically, if the attacker hijacks a miner's share submission, the pool\nwould use the attacker's username instead of the victim's username to construct\nthe cookie, the coinbase transaction and eventually the block header. The share\nwill only validate if the attacker managed to find a username that produced a\ndouble hash that was still smaller than the target corresponding to the\ndifficulty set by the pool. However, the attacker will need to find such\nusernames for each hijacked share. If the attacker were able to quickly find\nsuch partial collisions, it would be much easier to simply compute the shares\nwithout doing any interception and hijacking.\n\nWe further consider an attacker able to break the hash function (invert and\nfind collisions). Such an attacker can recover a miner's $R_M$ value, then find\na username that produces a collision with the miner's cookie $C_M$.
We observe,\nhowever, that such an attacker would then also be able to mine blocks quickly,\ne.g., by inverting hash values that are smaller than the target corresponding\nto the $Nbits$ value.\n\n\\vspace{-5pt}\n\n\\subsection{Limitations}\n\n\\vspace{-5pt}\n\n\\noindent\n\\textbf{Opportunistic cookie discovery}.\nWhen the miner mines the current block, i.e., the double hash of the puzzle is\nsmaller than the target corresponding to $Nbits$, the miner's cookie is\npublished in the blockchain. An adversary who has captured job assignments and\nshare submissions from the miner, just before this takes place, can use them,\nalong with the published cookie, to reconstruct the entire puzzle and infer the\nminer's hashrate.\n\nThis opportunistic attack may take years (e.g., 7.44 years\nfor an AntMiner S7, see $\\S$~\\ref{sec:bedrock:solution:cookies}), while, from\nour experience, mining equipment has a useful lifetime of around 2 years.\n\\newmaterial{However, this attack may be more effective against an entity\nthat owns many homogeneous miners: an adversary may only need days to infer\nthe rate of a single miner.}\n\nTo address this limitation, each miner could, at random intervals,\nchange its operation frequency to a randomly chosen value within an\n``acceptable'' operation range. Assuming that the adversary only captures a\nlimited window of the victim miner's communications, he will only be able to\n(i) recover temporary, past hashrate values of the miner, and (ii) reconstruct\nthe miner's payouts over the monitored interval.
Since the miner changes its\noperation frequency once a new cookie is assigned, the adversary will not be\nable to predict the miner's future hashrates and payouts.\n\n\\noindent\n{\\bf Verification scope}.\nWe have only investigated the implementation of Stratum in the pool F2Pool.\nHowever, the identified privacy issues also likely affect other pools, as any\nobfuscation to the set difficulty messages would break the compatibility with\nthe Stratum protocol implemented in current mining equipment.\n\n\\vspace{-5pt}\n\n\\section{Implementation and Testbed}\n\\label{sec:implementation}\n\n\\vspace{-5pt}\n\nIn our experiments, we have used AntMiner S7, a specialized ASIC device for\nBitcoin mining that achieves a hashrate of 4.73 TH\/s at 700MHz~\\cite{s7_miner}.\nWe have configured the device for mining on the F2Pool pool, using the Stratum\nprotocol~\\cite{f2pool_help}.\n\n\n\\subsection{Passive Attacks}\n\\label{sec:implementation:passive}\n\nTo collect experimental traffic for the passive attacks, we have\nleveraged the ability of the AntMiner S7 device to operate at different chip\nclock frequencies to simulate miner devices with different\ncapabilities. Specifically, we carried out 24 hour long mining experiments with\nthe AntMiner S7 operating at frequencies ranging from 100 MHz to 700MHz, with\n50MHz granularity. We have used Tcpdump~\\cite{jacobson2003tcpdump} to capture\n138MB of Stratum traffic of AntMiner S7 devices in the US (May 27 - June\n8, 2016) and Venezuela (March 8 - April 2, 2016). We have sliced the resulting\npcap files into 24 hour intervals using editcap, then processed the\nresults using python scripts with the scapy library~\\cite{biondi2011scapy}.\n\nIn addition to the mining traffic, for each of the 24 hour runs, we collected\nthe empirical payout as reported by the pool, as well as the device hashrate\nreported by its internal functionality.
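The passive analyses described below reduce to simple bookkeeping over the captured Stratum message stream. A sketch of this bookkeeping follows; the one-JSON-message-per-line input and the \texttt{ts} timestamp field (which we assume was attached to each message at capture time, from the pcap) are assumptions of this illustration.

```python
import json

def difficulty_intervals(lines):
    """Group a captured Stratum stream into constant-difficulty intervals,
    counting the share submissions observed in each interval."""
    intervals, current = [], None
    for line in lines:
        msg = json.loads(line)
        if msg.get("method") == "mining.set_difficulty":
            # A set_difficulty notification closes the previous interval
            # and opens a new one at the announced difficulty.
            if current:
                intervals.append(current)
            current = {"difficulty": msg["params"][0],
                       "shares": 0, "start": msg["ts"]}
        elif msg.get("method") == "mining.submit" and current:
            current["shares"] += 1
            current["end"] = msg["ts"]
    if current:
        intervals.append(current)
    return intervals
```

Per-interval share counts and durations are exactly the inputs needed for the hashrate (and hence payout) predictions.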
We used 24 hour runs because the pool\nuses 24 hour cycles for executing payouts. We have manually synchronized the\nruns and payout cycles so as to easily correlate the data collected with its\ncorresponding payout.\n\n\\noindent\n{\\bf StraTap attack}.\nTo implement the StraTap attack, we have created a script that selects packets\nfrom the captured traces with the ``set\\_difficulty'' pattern (invoked method\nof the share difficulty notification messages). This pattern signals our script\nto reset the share submission count and to record the new difficulty.\n\n\\noindent\n{\\bf ISP Log attack}.\nFor the ISP Log attack, we used packets sent after the 3-way handshake\ninitiated by the pool. In addition, to compute more accurate inter-packet\ntimes, we only considered packets that had the PUSH flag set (captured by most\nfirewall logs, e.g., Snort IDS), thus with non-empty payloads (i.e., no ack\npackets that originated on the miner). The PUSH flag is used to mitigate the\neffects of delays on the processing of share submissions, which may end up\ncausing share rejections. By setting the PUSH flag, miners try to increase the\nchance that their shares are quickly processed.\n\n\\vspace{-5pt}\n\n\\subsection{BiteCoin Attack Implementation}\n\\label{sec:implementation:bitecoin}\n\n\\vspace{-5pt}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{bitecoin_system}\n\\caption{Architecture of BiteCoin attack implementation.}\n\\label{fig:bitecoin:system}\n\\vspace{-15pt}\n\\end{figure}\n\nThe BiteCoin attack system is illustrated in Figure~\\ref{fig:bitecoin:system}.\nWe have built WireGhost using the iptables nfqueue target, in order to pass\npackets into user space. Once it receives network segments in the user space,\nit uses the scapy python library to parse and modify packets.
Additionally, it\nuses the python nfqueue bindings to pass a verdict on the packets.\n\nTo test BiteCoin and WireGhost, we set up the victim miner behind an\nattacker controlled server that performed ``source NAT'' and packet forwarding\nfor it. This architecture allowed us to emulate an active attacker\nintercepting the communication between the miner and the pool. We have\nimplemented the attacker as a python script that connects to the F2Pool using\nStratum, then intercepts and modifies job assignments and share submissions on\nthe victim's connection to the pool. While the attacker script does not perform\nany mining, in $\\S$~\\ref{sec:eval:bitecoin} we show that it is able to steal\nthe victim's hashing power.\n\n\\vspace{-5pt}\n\n\\subsection{Bedrock Implementation}\n\n\\vspace{-5pt}\n\nOne requirement of Bedrock is to minimally disrupt the Stratum protocol, see\n$\\S$~\\ref{sec:bedrock:requirements}. Thus, instead of designing the cookie to\nbe an external field, we seek to leverage unused fields of the coinbase\ntransaction. An obvious candidate for the cookie placement is the input script\nwhere the $extranonce1$ and $extranonce2$ reside. However, most pools have\nalready started using this space for their own internal procedures, e.g., in\nF2Pool, to store the miner's name.\n\nInstead, Bedrock uses the yet unused, 32 byte (256 bit) long ``previous input\naddress'' field of the coinbase transaction, see Figure~\\ref{fig:coinbase}.\nSince the coinbase transaction rewards the pool with the value of the mined\nblock (if that event occurs), its input is not used. We have investigated the\nStratum implementation of several pools, including F2Pool~\\cite{f2pool_help},\nGHash.io~\\cite{GHash.io}, SlushPool~\\cite{SlushPool} and have confirmed that\nnone of them use this field.
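A sketch of how the cookie fits into the coinbase input serialization follows. The single-byte script length (sufficient for short scripts) is a simplification of Bitcoin's varint encoding, and the helper name is ours, not the paper's.

```python
def coinbase_input_with_cookie(cookie: bytes, script_sig: bytes) -> bytes:
    # A coinbase input spends no real output, so its 32-byte "previous
    # input address" hash is normally all zeros; Bedrock stores C_M there.
    assert len(cookie) == 32                         # one double-SHA-256 digest
    prev_index = (0xFFFFFFFF).to_bytes(4, "little")  # coinbase marker index
    sequence = b"\xff\xff\xff\xff"
    return (cookie + prev_index +
            len(script_sig).to_bytes(1, "little") +  # simplified varint
            script_sig + sequence)
```

Since the field is simply repurposed rather than added, the transaction's layout, and thus miner and pool tooling, is otherwise unchanged.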
In addition, we note that the size of this field\nmakes it ideal to store the output of a double SHA-256 hash.\n\n\\begin{table}\n\\centering\n\\textsf{\n\\small\n\\begin{tabular}{l r r}\n\\textbf{Freq(MHz)} & \\textbf{Hashrate(GH\/s)} & \\textbf{StraTap Hashrate(GH\/s)}\\\\\n\\midrule\n700 & 4720.55 & 4571.48\\\\\n650 & 4371.85 & 4309.96\\\\\n600 & 4040.49 & 4151.27\\\\\n550 & 3693.90 & 3624.13\\\\\n500 & 3365.38 & 3524.57\\\\\n450 & 3030.01 & 3154.80\\\\\n400 & 2689.34 & 2696.72\\\\\n350 & 2364.61 & 2382.17\\\\\n300 & 2023.65 & 2039.55\\\\\n250 & 1687.17 & 1699.91\\\\\n200 & 1347.14 & 1274.29\\\\\n150 & 1010.19 & 1007.06\\\\\n100 & 672.55 & 703.28\\\\\n\\end{tabular}\n}\n\\caption{Operation frequency, actual hashrate and StraTap inferred hashrate.\nWe observe the correlation between the actual and the inferred hashrate, which\nallowed StraTap to achieve a good payout estimate.\n}\n\\label{table:freq_hashrate_payout}\n\\vspace{-15pt}\n\\end{table}\n\n\n\\vspace{-5pt}\n\n\\section{Evaluation}\n\\label{sec:eval}\n\n\\vspace{-5pt}\n\nIn this section we evaluate the StraTap, ISP Log and BiteCoin attacks, as well\nas the performance of Bedrock. We use the mean squared error (MSE) and the mean\npercentage error (MPE) to evaluate the accuracy of the predictions made by the\npassive attacks. Specifically, let $P = \\{ P_1, .., P_n\\}$ be a set of observed\ndaily payments over $n$ days, and let $\\bar{P} = \\{ \\bar{P_1}, .., \\bar{P_n}\n\\}$ be the corresponding predicted daily payments for the same days.
Then,\nMSE($\\bar{P},P$) = $\\frac{1}{n} \\sum_i^n (\\bar{P_i} - P_i)^2$, and\nMPE($\\bar{P},P$) = $\\frac{100\\%}{n} \\sum_i^n \\frac{P_i - \\bar{P_i}}{P_i}$.\n\n\\vspace{-5pt}\n\n\\subsection{The StraTap Attack}\n\\label{sec:eval:passive:stratap}\n\n\\vspace{-5pt}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{isplog_stratap_payout}\n\\caption{Payout prediction by StraTap and ISP Log attacks, compared to\nempirical payout, in milli-Bitcoin (mBTC), as a function of the miner's\nfrequency of operation (MHz). The \\textit{actual payout} series (red diamonds)\ncorresponds to daily payouts collected from the F2Pool account records. The\nStraTap payout series (blue disks) shows daily payout predictions based on\nentire Stratum messages intercepted. The ISP Log series (green triangles) shows\nthe daily payout prediction when using the average inter-packet times over 50\npackets. StraTap's prediction error ranges between 1.75-6.5\\% (MSE=0.062,\nMPE=-3.46\\%). ISP Log has an error between 0.53 - 34.4\\% (MSE = 2.02, MPE =\n-9.49\\%).}\n\\label{fig:eval:stratap}\n\\vspace{-15pt}\n\\end{figure}\n\nWe have used the StraTap script described in\n$\\S$~\\ref{sec:implementation:passive} to calculate the average time of share\ncreation for each of the detected intervals of constant difficulty. For each of\nthe 24 hour runs, we also calculated the weighted average difficulty as well as\nthe weighted average hashrate for the entire run. In addition, we used\nEquation~\\ref{eq:e}, along with the computed average time and recorded\ndifficulty values, to compute a prediction of the weighted average hashrate of\nthe miner.\n\nTable~\\ref{table:freq_hashrate_payout} shows the AntMiner's frequency of\noperation, the output hashrate achieved at that frequency, and the predicted\nhashrate. As expected, there is a linear relationship between the frequency of\noperation and the device's hashrate achieved.
As a consequence, this\nrelationship is preserved across the empirical payout reported by the pool\noperators.\n\nSpecifically, we have used the pool's hashrate to BTC conversion (see\n$\\S$~\\ref{sec:stratum}) to predict the miner's resulting daily payout.\nFigure~\\ref{fig:eval:stratap} shows the data series for the empirical and\npredicted payouts, versus the operation frequency of the miner. The StraTap\nattack achieves a prediction error of between 1.75\\% and 6.5\\%, with a mean\nsquare error (MSE) of 0.062 and mean percentage error (MPE) of -3.46\\%. Thus,\nStraTap's predictions tend to be slightly larger than the actual payout values.\n\n\\subsection{The ISP Log Attack}\n\\label{sec:eval:passive:isp}\n\n\\vspace{-5pt}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[]\n{\\label{fig:timeline:hashrate:200MHz}{\\includegraphics[width=0.49\\textwidth]{timeline_80packet_200MHz_indicator}}}\n\\vspace{-5pt}\n\\subfigure[]\n{\\label{fig:timeline:hashrate:600MHz}{\\includegraphics[width=0.49\\textwidth]{timeline_125packet_600MHz_indicator}}}\n\\vspace{-5pt}\n\\caption{Timelines that focus on the interval between the first two share\ndifficulty notifications, following a miner subscription and authorization\nprotocol, when\n(a) the miner operates at 200MHz and\n(b) the miner operates at 600MHz.\nWhile the intervals between the first two such notifications at both\nfrequencies contain approximately 50 share submission packets, this interval is\nsignificantly shorter at 600MHz. This is because at 600MHz the miner can solve\nthe 1024 difficulty puzzles much faster than at 200MHz. The ``ISP Log'' attack\nexploits this observation to infer the hashrate of the miner, while only\ncounting packets (i.e., without being able to inspect them).}\n\\label{fig:timeline:hashrate}\n\\vspace{-15pt}\n\\end{figure}\n\nWe first present results of our analysis of F2Pool's hashrate inference\nprotocol. 
We then show the ability of the ISP Log attack to leverage these\nfindings to infer the miner's daily payouts, given only metadata of the miner's\npackets.\n\n\\begin{table}\n\\centering\n\\textsf{\n\\small\n\\begin{tabular}{l r r}\n\\textbf{Freq(MHz)} & \\textbf{\\# of Packets} & \\textbf{Time Interval (s)}\\\\\n\\midrule\n100 & 57 & 288.87\\\\\n150 & 56 & 256.15\\\\\n200 & 51 & 153.62\\\\\n250 & 63 & 146.01\\\\\n300 & 55 & 131.09\\\\\n350 & 62 & 146.26\\\\\n400 & 54 & 101.95\\\\\n450 & 67 & 104.67\\\\\n500 & 50 & 58.22\\\\\n550 & 62 & 76.06\\\\\n600 & 54 & 50.74\\\\\n650 & 56 & 45.67\\\\\n\\end{tabular}\n}\n\\caption{Number of share submission packets for the initial $1024$ difficulty\nperiod, as well as the length of the time interval when the pool accepted those\nshares, for various miner frequencies of operation. At any miner operation\nfrequency, at least $50$ share submission packets are accepted, irrespective of\nwait time. This process enables the pool and the ISP Log attack to infer the\nminer's hashrate.\n}\n\\label{table:timeline_1024}\n\\vspace{-15pt}\n\\end{table}\n\n\\noindent\n{\\bf Hashrate inference protocol}.\nAs mentioned in $\\S$~\\ref{sec:attacks:passive:isp}, immediately following the\nminer subscription and authorization, the pool sets the difficulty to 1024, and\nchanges it only after receiving a sufficient number of share submissions to\ninfer the miner's hashrate. For instance,\nFigure~\\ref{fig:timeline:hashrate:200MHz} shows that when the miner operates at\n200MHz, the number of share submissions between the first two share difficulty\nnotification messages is similar to the number of share submissions when the\nminer operates at 600MHz (Figure~\\ref{fig:timeline:hashrate:600MHz})\n(approximately 50).
However, the time interval between the first two share\ndifficulty notifications is much shorter at 600MHz: the miner can compute 50\nshares at the constant difficulty 1024 much faster than when operating at\n200MHz.\n\nMore generally, Table~\\ref{table:timeline_1024} shows the number of share\nsubmission packets sent during this initial measurement period for each of the\nfrequencies analyzed. We observe that the pool requires that this process\ngenerate at least $50$ share submissions, irrespective of the miner operation\nfrequency. The pool waits up to 288 seconds to receive the required number of\nshares, before sending the second share difficulty notification.\n\nWe conjecture that the pool uses this process to infer the hashrate of\nthe miner, which it needs in order to assign jobs (puzzles) that a miner can\nsolve at a ``desirable'' rate. Specifically, large pools handle thousands of miners\nsimultaneously\\footnote{The Bitcoin network currently has around 100,000\nminers~\\cite{MinerCount,Corti}, of which at least 16\\% work with\nF2Pool~\\cite{F2PoolShare}.}. To minimize the time it takes to process\nshare submissions received from thousands of miners, the pool\nneeds to regulate the rate at which a miner submits shares, irrespective of the\nminer's computing capabilities. Figure~\\ref{fig:timeline:rate} illustrates\nthis share submission rate control. In our experiments we observed that for F2Pool,\nthis rate ranges between 1 and 4 share submissions per minute. A second reason\nfor this process stems from the need of miners to prove computation progress\nand gain regular payouts.\n\n\\begin{figure}[t!]\n\\vspace{-15pt}\n\\centering\n\\includegraphics[width=0.43\\textwidth]{passive_blind_restricted_30_R}\n\\vspace{-5pt}\n\\caption{\n1st, 2nd and 3rd quartile for the inter-packet times of the first 50\npackets during the initial difficulty setting procedure, as a function of the\nminer's operating frequency.
We observe a monotonically decreasing tendency\nof the inter-packet times, with an increase in the miner capabilities.\nThis suggests that inter-packet time stats over the first 50 packets can provide\na good hashrate estimator for the ISP Log attack.\n\\label{fig:isp:log:predictor}}\n\\vspace{-15pt}\n\\end{figure}\n\n\\noindent\n{\\bf ISP Log attack results}.\nWe have implemented the ISP Log attack using statistics of the inter-packet\narrival time of the first 50 packets sent by the miner to the pool, after a\ndetected 3-way miner subscription and authorization protocol.\nFigure~\\ref{fig:isp:log:predictor} shows the 1st, 2nd (median) and 3rd\nquartiles of the inter-packet times, for the first 50 packets, when the miner\noperates at frequencies ranging from 100 to 650 MHz. The linearly decreasing\nbehavior of the median, 1st and 3rd quartiles indicates that statistics over\nthe inter-packet times of the first 50 packets may make a good predictor.\n\n\n\nTo confirm this, we have used the mean inter-packet time over the first 50\npackets to predict the miner's hashrate and then its payout.\nFigure~\\ref{fig:eval:stratap} compares the ISP Log attack daily payout\nprediction with that of StraTap and with the empirical payout. The ISP Log has\nan error that ranges between 0.53\\% and 34.4\\%, with an MSE of 2.02 and MPE of\n-9.49\\%. Thus, ISP Log over-predicts the daily payouts and, as expected, it\nexceeds the error of the StraTap attack.\n\n\\vspace{-5pt}\n\n\\subsection{BiteCoin: Proof of Concept}\n\\label{sec:eval:bitecoin}\n\n\\vspace{-5pt}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[]\n{\\label{fig:bitecoin:timeline:attacker}{\\includegraphics[width=0.49\\textwidth]{bitecoin_attacker_marked}}}\n\\vspace{-5pt}\n\\subfigure[]\n{\\label{fig:bitecoin:timeline:victim}{\\includegraphics[width=0.49\\textwidth]{bitecoin_victim}}}\n\\vspace{-5pt}\n\\caption{Greedy BiteCoin attack timelines for\n(a) adversary and (b) victim miner.
In a 5h interval, the attacker hijacked 342\njob assignments and 72 corresponding share submissions of the victim miner.\n23 shares (the green clumps marked with red arrows) were accepted by the pool.}\n\\label{fig:bitecoin:timeline}\n\\vspace{-15pt}\n\\end{figure}\n\nWe have experimented with the BiteCoin implementation described in\n$\\S$~\\ref{sec:implementation:bitecoin}. Specifically, the attacker greedily\ninjected all the jobs assigned by the pool into the victim communication stream\nduring the attack time and without any modification. Our implementation\ninjected a total of 342 job assignments in a time interval of 5 hours, from\nhour 21:25 to 02:24. The attacker monitored the share submissions from the\nvictim, and hijacked shares corresponding to the injected jobs.\n\nFigure~\\ref{fig:bitecoin:timeline} shows the results of this attack. The\nadversary, whose timeline is shown in\nFigure~\\ref{fig:bitecoin:timeline:attacker}, hijacked 72 share submissions from\nthe victim miner. 23 shares (the green clumps marked with red arrows) were\naccepted by the pool, i.e., as if they were mined by the attacker and not by\nthe victim. 49 shares were rejected. Figure~\\ref{fig:bitecoin:timeline:victim}\nshows the timeline of the attack from the perspective of the victim miner.\n\nThe gaps in both timelines are likely due to the attacker script repeatedly\ndisconnecting and reconnecting in an attempt to keep receiving work.\nEvery disconnection and reconnection of the attacker triggers a subscribe\nprotocol where the first job has the true flag set. This would explain why\nthere are no hijacked shares between around 22:00 and 1:00 in the attacker\ntimeline, and also the gap in activity in the victim timeline.
These\nconstant reconnects may have repeatedly blanked the job pool of the victim\nuntil the attacker was able to maintain its connection to submit the shares.\n\n\n\\vspace{-5pt}\n\n\\subsection{The Bedrock Evaluation}\n\\label{sec:eval:bedrock}\n\n\\vspace{-5pt}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{overhead}\n\\caption{Overhead comparison of Bedrock and a complete encryption approach, for\nminer and pool. Bedrock imposes a small daily overhead on both the pool (12.03s\nto handle 16,000 miners) and miner (0.002s). However, a solution that\nencrypts all Stratum packets imposes a daily overhead of 1.36 hours on the\npool.}\n\\label{fig:eval:bedrock}\n\\vspace{-15pt}\n\\end{figure}\n\nWe measured Bedrock's encryption times when using AES-256 in CBC mode on the\nAntMiner S7 and on a server with a 40-core Intel(R) Xeon(R) E5-2660 v2 CPU @\n2.20GHz and 64 GB of RAM. The AntMiner was able to encrypt 1024-byte blocks at\n32,231.09 Kb\/sec, while the server was able to encrypt at 86,131.07 Kb\/sec for\nthe same block size.\n\n\nBased on the collected data, Stratum generates an average of 31.63 set\ndifficulty messages per day. Figure~\\ref{fig:eval:bedrock} shows that Bedrock\nimposes a 0.002s decryption overhead per day on an AntMiner S7, while on a pool\nusing the above server to handle 16,000 miners, it imposes an encryption\noverhead of 12.03 seconds per day.\n\nIn contrast, a solution that encrypts each Stratum packet imposes an overhead\nof 0.13 seconds per day on the AntMiner, and an unacceptable 1.36 hours per day\non the pool server, to handle 16,000 miners.\n\n\\vspace{-15pt}\n\n\\subsubsection{TLS Overheads}\n\n\\newmaterial{\nWe also compare Bedrock against Stratum protected with TLS.
We have used a\nreplay of a 24 hour subset of our Stratum traffic dataset\n($\\S$~\\ref{sec:implementation:passive}), sent over TLS between a laptop used as\nminer (AntMiner does not support TLS) and the server above, used as the pool.}\n\n\\noindent\n\\newmaterial{\n{\\bf Computation overheads}.\nTo measure the TLS computation overheads, we have used\nTcpdump~\\cite{jacobson2003tcpdump} to capture the times when Stratum\/TLS\npackets leave from and arrive at the pool application, and also captured the\ntime when the packets are sent from\/received by the pool TLS socket. We have\ncomputed the total daily pool side TLS overhead of sending and receiving\nStratum packets (job assignment, share submission, notifications, set\ndifficulty change, etc). Figure~\\ref{fig:eval:bedrock} shows the difference\nbetween this overhead and the same overhead but when using bare TCP. It shows\nthat the daily computation overhead imposed by TLS on the pool, through the\ntraffic of 16,000 miners, is 1.01 hours.} This amounts to a\ncomputational overhead percentage of at least 4.3\\%.\n\n\n\\noindent\n\\newmaterial{\n{\\bf Bandwidth overhead}.\nIn addition, we have measured the bandwidth overhead imposed by TLS. The total\nminer-to-pool payload (single miner) for cleartext Stratum\/TCP traffic is\n465,875 bytes and for Stratum\/TLS is 738,873 bytes. The total pool-to-miner\npayload of Stratum\/TCP is 3,852,795 bytes while for Stratum\/TLS is 4,062,956\nbytes. Thus, TLS imposes a 58\\% overhead on the miner-to-pool bandwidth, for a\ntotal of 4.05GB daily overhead on the pool from 16,000 miners. This uplink\noverhead is significant, especially for miners in countries with poor Internet\nconnectivity.}\n\n\\newmaterial{\nTLS imposes a 5\\% overhead on the pool-to-miner bandwidth, for a total of\n3.13GB daily overhead on the pool. 
The TLS overhead is much larger in\nminer-to-pool communications, even though there are more pool-to-miner packets.\nThis is because the miner-to-pool share submission packets are much smaller\nthan the pool-to-miner job assignments, thus the TLS overhead (125 to 160\nbytes) becomes a significant factor for them.} In contrast, the\npercentage bandwidth overhead for Bedrock is only 0.04\\%.\n\n\\noindent\n\\newmaterial{\n{\\bf Conclusions}.\nBedrock is more efficient than blanket encryption and TLS. While the pool could\nuse more equipment to handle encryption more efficiently, blanket encryption\nand TLS do not address the hashrate inference vulnerability. In addition, TLS\nimposes a significant uplink bandwidth overhead on miners.}\n\n\\vspace{-10pt}\n\n\\section{Related Work}\n\\label{sec:related}\n\n\\vspace{-5pt}\n\n\\noindent\n{\\bf Bitcoin mining attacks}.\nDecker and Wattenhofer~\\cite{DW13} study Bitcoin's use of a multi-hop broadcast\nto propagate transactions and blocks through the network to update the ledger\nreplicas, then study how the network can delay or prevent block propagation.\nHeilman et al.~\\cite{HKZG15} propose eclipse attacks on the Bitcoin network,\nwhere an attacker leverages the reference client's policy for updating peers to\nmonopolize all the connections of a victim node, by forcing it to accept only\nfraudulent peers. The victim can then be exploited to attack the mining and\nconsensus systems of Bitcoin. Bissas et al.~\\cite{BLOAH16} present and\nvalidate a novel mathematical model of the blockchain mining process and use it\nto conduct an economic evaluation of double-spend attacks, both with and\nwithout a concurrent eclipse attack.\n\nCourtois and Bahack~\\cite{CB14} propose a practical block withholding attack,\nin which dishonest miners seek to obtain a higher reward than their relative\ncontribution to the network. 
They also provide an excellent background\ndescription of the motivation and functionality of mining pools and the mining\nprocess.\n\n\\noindent\n{\\bf Bitcoin anonymity}.\nSignificant work has focused on breaking the anonymity of Bitcoin\nclients~\\cite{BKP14,KKM14,MPJLMVS13,AKRSC13}. For instance, Biryukov et\nal.~\\cite{BKP14} proposed a method to deanonymize Bitcoin users, which links\nuser pseudonyms to the IP addresses where the transactions are\ngenerated. Koshy et al.~\\cite{KKM14} use statistical metrics for mappings of\nBitcoin to IP addresses, and identify pairs that may represent ownership\nrelations.\n\nSeveral solutions arose to address this problem. Miers et al.~\\cite{MGGR13}\nproposed ZeroCoin, which extends Bitcoin with a cryptographic accumulator and\nzero knowledge proofs to provide fully anonymous currency transactions.\nBen-Sasson et al.~\\cite{BSCG0MTV14} introduced Zerocash, a decentralized\nanonymous payment solution that hides all information linking the source and\ndestination of transactions. Bonneau et al.~\\cite{BNMCKF14} proposed\nMixcoin, a currency mix with accountability assurances and randomized\nfee-based incentives.\n\nOur work is orthogonal to previous work on Bitcoin anonymity, as it identifies\nvulnerabilities in Stratum, the communication protocol employed by\n\\newmaterial{cryptocurrency} mining solutions. As such, our concern is for the\nprivacy and security of the miners, as they generate coins. Our attacks are\nalso more general, as they apply not only to Bitcoin, but to a suite of other\npopular altcoin solutions,\ne.g.,~\\cite{litecoin_stratum,ethereum_stratum,monero_stratum} that build on\nStratum.\n\n\\noindent\n{\\bf Effects of broken crypto on Bitcoin}.\nGiechaskiel et al.~\\cite{GCR16} systematically analyze the effects of broken\ncryptographic primitives on Bitcoin. They reveal a wide range of possible\neffects, ranging from minor privacy violations to a complete breakdown of the\ncurrency. 
Our attacks do not need broken crypto to succeed. However, we show\nthat Bedrock, our secure Stratum extension, is resilient to broken crypto\nprimitives.\n\n\\vspace{-10pt}\n\n\\section{Conclusions}\n\n\\vspace{-5pt}\n\nIn this paper we have shown that the lack of security in Stratum, Bitcoin's\nmining communication protocol, makes miners vulnerable to a suite of passive\nand active attacks that expose their owners to hacking, coin and equipment\ntheft, loss of revenue, and prosecution. We have implemented the attacks that\nwe introduced and shown that they are efficient. Our attacks reveal that encryption is\nnot only undesirable, due to its significant overheads, but also ineffective:\nan adversary can predict miner earnings even when given access to only the\ntimestamps of miner communications. We have developed Bedrock, a minimal and\nefficient Stratum extension that protects the privacy and security of mining\nprotocol participants. We have shown that Bedrock imposes an almost negligible\ncomputation overhead on the mining participants and is resilient to active\nattacks even if the used cryptographic tools are compromised.\n\n\\section{Acknowledgments}\n\n\\newmaterial{\nWe thank the shepherds and the anonymous reviewers for their excellent\nfeedback. We thank Patrick O'Callaghan for suggesting this problem and for\ninsightful discussions. This research was supported by NSF grants 1526494 and\n1527153.}\n\n\\vspace{-10pt}\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nThe problem of designing auctions that maximize the seller's revenue in settings with many heterogeneous goods has attracted a large amount of interest in recent years, from both the Computer Science and the Economics communities (see e.g.~\\citep{Manelli:2006vn,Pavlov:2011fk,Hart:2012uq,Hart:2012zr,Daskalakis:2012fk,Daskalakis:2013vn,gk2014,Menicucci:2014jl,Daskalakis:2014fk}). 
Here the seller faces a buyer whose true values for the $m$ items come from a probability distribution over $\\mathbb R_+^m$ and, based only on this incomplete prior knowledge, he wishes to design a selling mechanism that will maximize his expected revenue. For the purposes of this paper, the prior distribution is a product one, meaning that the item values are independent. The buyer is additive, in the sense that her happiness from receiving any subset of items is the sum of her values of the individual items in that bundle. The buyer is also selfish and completely rational, thus willing to lie about her true values if this is to improve her own happiness. So, the seller should also make sure to give the right incentives to the buyer in order to avoid manipulation of the protocol by misreporting. \n\nThe special case of a single item has been very well understood since the seminal work of~\\citet{Myerson:1981aa}. However, when one moves to settings with multiple goods, the problem becomes notoriously difficult and novel approaches are necessary. Despite the significant effort of the researchers in the field, essentially only specialized, partial results are known: there are exact solutions for two items in the case of identical uniform distributions over unit-length intervals~\\citep{Pavlov:2011fk,Manelli:2006vn}, exponential over $[0,\\infty)$ \\citep{Daskalakis:2013vn} or identical Pareto distributions with tail index parameters $\\alpha\\geq 1\/2$ \\citep{Hart:2012uq}. For more than two items, optimal results are only known for uniform values over the unit interval~\\citep{gk2014}, and due to the difficulty of exact solutions most of the work focuses on showing approximation guarantees for simple selling mechanisms~\\citep{Hart:2012uq,Li:2013ty,Babaioff:2014ys,g2014,Bateni:2014ph,Rubinstein:2015kx}. This difficulty is further supported by the complexity ($\\#P$-hardness) results of~\\citet{Daskalakis:2012fk}. 
It is important to point out that even for two items \\emph{we know of no general and simple framework of closed-form conditions under which optimality can be extracted when the item distributions are given as input, in the case where these are not necessarily identical.} This is our goal in the current paper. \n \n\\paragraph{Our contribution}\nWe introduce general but simple and clear, closed-form distributional conditions that can guarantee optimality and immediately give the form of the revenue-maximizing selling mechanism (its payment and allocation rules), for the setting of two goods with values distributed over bounded intervals (Theorem~\\ref{th:characterization_main}). For simplicity and a clearer exposition we study distributions supported over the real unit interval $[0,1]$. By scaling, the results generalize immediately to intervals that start at $0$, but more work would be needed to generalize them to arbitrary intervals. We use the closed forms to get optimal solutions for a wide class of distributions satisfying certain simple analytic assumptions (Theorem~\\ref{th:characterization_iid} and Sect.~\\ref{sec:non-iid}). As useful examples, we provide exact solutions for families of monomial ($\\propto x^c$) and exponential ($\\propto e^{-\\lambda x}$) distributions (Corollaries~\\ref{th:optimal_two_power} and \\ref{th:optimal_two_expo} and Sect.~\\ref{sec:non-iid}), and also near-optimal results for power-law ($\\propto (x+1)^{-\\alpha}$) distributions (Sect.~\\ref{sec:approximate_convex_fail}). 
This last approximation is an application of a more general result (Theorem~\\ref{th:two_iid_approx}) involving the relaxation of some of the conditions for optimality in the main Theorem~\\ref{th:characterization_main}; the ``solution'' one gets in this new setting might not always correspond to a feasible selling mechanism; however, it still provides an upper bound on the optimal revenue as well as hints as to how to design a well-performing mechanism, by ``convexifying'' it into a feasible mechanism (Sect.~\\ref{sec:approximate_convex_fail}).\n\nParticularly for the family of monomial distributions it turns out that the optimal mechanism is a very simple deterministic mechanism that offers to the buyer a menu of size just $4$ (using the menu-complexity notion of Hart and Nisan \\citep{Hart:2012ys,Wang:2013ab}): fixed prices for each one of the two items and for their bundle, as well as the option of not buying any of them. For other distributions studied in the current paper randomization is essential for optimality, as is generally expected in such problems of multidimensional revenue maximization (see e.g.~\\citep{Hart:2012zr,Pavlov:2011fk,Daskalakis:2013vn}). For example, this is the case for two i.i.d. exponential distributions over the unit interval $[0,1]$, which gives the first such example where determinism is suboptimal even for regular\\footnote{A probability distribution $F$ is called \\emph{regular} if $t-\\frac{1-F(t)}{f(t)}$ is increasing. This quantity is known as the \\emph{virtual valuation}.} i.i.d.\\ items. 
\nA point worth noting here is the striking difference between this result and previous results~\\citep{Daskalakis:2013vn,g2014} about i.i.d.\\ exponential distributions which have as support the entire $\\mathbb R_+$: the optimal selling mechanism there is the deterministic one that just offers the full bundle of both items.\n\nAlthough the conditions that the probability distributions must satisfy are quite general, they leave out a large class of distributions. For example, they do not apply to power-law distributions with parameter $\\alpha>2$. In other words, this work goes some way towards the complete solution for arbitrary distributions for two items, but the general problem is still open. In this paper, we opted for simple conditions rather than full generality, but we believe that extensions of our method can significantly generalize the range of distributions; we expect that a proper ``ironing'' procedure will enable our technique to resolve the general problem for two items.\n\n\\paragraph{Techniques}\nThe main result of the paper (Theorem~\\ref{th:characterization_main}) is proven by utilizing the \\emph{duality} framework of~\\citep{gk2014} for revenue maximization, and in particular using complementarity: the optimality of the proposed selling mechanism is shown by verifying the existence of a dual solution with which it jointly satisfies the required complementary slackness conditions of the duality formulation. Constructing these dual solutions explicitly seems to be a very challenging task and in fact there might not even be a concise way to do it, especially in closed-form. So instead we just prove the existence of such a dual solution, using a \\emph{max-flow min-cut} argument as the main tool (Lemma~\\ref{lemma:coloring}, Fig.~\\ref{fig:flows_graph}). This is, in a way, an abstraction of a technique followed in~\\citep{gk2014} for the case of uniform distributions which was based on Hall's theorem for bipartite matchings. 
Since here we are dealing with general and non-identical distributions, this kind of refinement is essential and non-trivial, and in fact forms the most technical part of the paper. Our approach has a strong geometric flavor, enabled by introducing the notion of the \\emph{deficiency} of a two-dimensional body (Definition~\\ref{def:deficiency}, Lemma~\\ref{lemma:no_positive_def}), which is inspired by classic matching theory~\\citep{Ore:1955fk,Lovasz:1986qf}. \n\n\\subsection{Model and Notation}\nWe study a two-good monopoly setting in which a seller deals with a buyer who has values $x_1, x_2\\in I$ for the items, where $I=[0,1]$. The seller has only incomplete knowledge of the buyer's preference, in the form of two independent distributions (with densities) $f_1$, $f_2$ over $I$ from which $x_1$ and $x_2$ are drawn, respectively. The cdf of $f_j$ will be denoted by $F_j$. As in the seminal work of~\\citet{Myerson:1981aa}, the density functions will be assumed to be absolutely continuous and positive.\nWe will also use vector notation $\\vecc x=(x_1,x_2)$. For any item $j\\in\\{1,2\\}$, index $-j$ will refer to the complementary item, that is $3-j$, and as is standard in game theory $\\vecc x_{-j}=x_{-j}$ will denote the remainder of vector $\\vecc x$ when the $j$-th coordinate is removed, so $\\vecc x=(x_j,x_{-j})$ for any $j=1,2$.\n\nThe seller's goal is to design a selling mechanism that will maximize his revenue. 
Without loss\\footnote{This is due to the celebrated Revelation Principle~\\citep{Myerson:1981aa}.} we can focus on direct-revelation mechanisms: the bidder will be asked to submit bids $b_1,b_2$ and the mechanism consists simply of an allocation rule $a_1,a_2:I^2\\to I$ and a payment function $p:I^2\\to\\mathbb R_+$ such that $a_j(b_1,b_2)$ is the probability of item $j$ being sold to the buyer (notice how we allow for randomized mechanisms, i.e.~lotteries) and $p(b_1,b_2)$ is the payment that the buyer expects to pay; it is easier to consider the expected payment for all allocations, rather than individual payments that depend on the allocation of items. The reason why the bids $b_j$ are denoted differently from the original values $x_j$ for the items is that, since the bidder is a rational and selfish agent, she might lie and misreport $b_j\\neq x_j$ if this is to increase her personal gain given by the quasi-linear \\emph{utility} function \n\\begin{equation}\n\\label{eq:utility}\nu(\\vecc b;\\vecc x)\\equiv a_1(\\vecc b) x_1+a_2(\\vecc b) x_2-p(\\vecc b),\n\\end{equation}\nthe expected happiness she will receive from the mechanism minus her payment. 
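As a toy illustration of this utility model (ours, not a mechanism studied in this paper), consider the deterministic mechanism that posts a single price for the full bundle of both items; a brute-force check over a grid of values and bids confirms that misreporting can never increase the buyer's utility~\eqref{eq:utility}, and that truthful participation never hurts her:

```python
# Toy example (ours): a posted-price mechanism for the full bundle.
# Both items are allocated (a1 = a2 = 1) and the payment P is charged
# iff the reported bids satisfy b1 + b2 >= P; otherwise nothing is sold.
import itertools

P = 0.8  # hypothetical bundle price

def utility(b, x):
    # Quasi-linear utility u(b; x) = a1(b) x1 + a2(b) x2 - p(b).
    if b[0] + b[1] >= P:
        return x[0] + x[1] - P
    return 0.0

grid = [i / 20 for i in range(21)]
points = list(itertools.product(grid, repeat=2))
# Truth-telling maximizes utility, and truthful utility is nonnegative.
truthful_best = all(utility(b, x) <= utility(x, x) + 1e-12
                    for x in points for b in points)
never_hurts = all(utility(x, x) >= 0 for x in points)
print(truthful_best, never_hurts)  # prints: True True
```
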
\nThus, we will require our selling mechanisms to satisfy the following standard properties: \n\\begin{itemize}\n\\item \\emph{Incentive Compatibility (IC)}, also known as truthfulness, saying that the player would have no incentive to misreport and manipulate the mechanism, i.e.\\ her utility is maximized by truth-telling: $u(\\vecc b;\\vecc x)\\leq u(\\vecc x;\\vecc x)$.\n\\item \\emph{Individual Rationality (IR)}, saying that the buyer cannot harm herself just by truthfully participating in the mechanism: $u(\\vecc x;\\vecc x)\\geq 0$.\n\\end{itemize}\nIt turns out that the critical IC property comes without loss\\footnote{Also due to the Revelation Principle.} for our revenue-maximization objective, so from now on we will only consider truthful mechanisms, meaning we can also relax the notation $u(\\vecc b;\\vecc x)$ to just $u(\\vecc x)$.\n\nThere is a very elegant and helpful analytic characterization of truthfulness, going back to~\\citet{Rochet:1985aa} (for a proof see e.g.~\\citep{Hart:2012uq}), which states that the player's utility function must be \\emph{convex} and that the allocation probabilities are simply given by the utility's derivatives, i.e.\\ $\\partial u(\\vecc x)\/\\partial x_j=a_j(\\vecc x)$. Taking this into consideration and rearranging~\\eqref{eq:utility} with respect to the payment, we define\n$$\n\\mathcal R_{f_1,f_2}(u)\\equiv \\int_0^1\\int_0^1\\left(\\frac{\\partial u(\\vecc x)}{\\partial x_1}x_1+\\frac{\\partial u(\\vecc x)}{\\partial x_2}x_2-u(\\vecc x) \\right)f_1(x_1)f_2(x_2)\\,dx_1\\,dx_2\n$$\nfor every absolutely continuous function $u:I^2\\longrightarrow\\mathbb R_+$. If $u$ is convex with partial derivatives in $[0,1]$ then $u$ is a valid utility function and \\emph{$\\mathcal R_{f_1,f_2}(u)$ is the expected revenue of the seller under the mechanism induced by $u$}. 
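As a concrete numerical check of this identity (our sketch, for the special case of two i.i.d.\ uniform values on $[0,1]$), take the full-bundle utility $u(\vecc x)=\max\{0,x_1+x_2-p\}$; the functional $\mathcal R(u)$ should then equal the expected payment $p\cdot\Pr[x_1+x_2\geq p]=p(1-p^2/2)$ for $p\leq 1$:

```python
# Numerical check (ours): for two i.i.d. uniform values on [0,1] and the
# full-bundle utility u(x) = max(0, x1 + x2 - p), the functional R(u)
# should equal the expected payment p * Pr[x1 + x2 >= p] = p(1 - p^2/2).
N = 400   # midpoint-rule grid resolution
p = 0.8   # hypothetical bundle price

def u(x1, x2):
    return max(0.0, x1 + x2 - p)

step = 1.0 / N
total = 0.0
for i in range(N):
    for j in range(N):
        x1, x2 = (i + 0.5) * step, (j + 0.5) * step
        du = 1.0 if x1 + x2 > p else 0.0  # du/dx1 = du/dx2 for the bundle
        # integrand of R(u), with f1 = f2 = 1 (uniform densities)
        total += (du * x1 + du * x2 - u(x1, x2)) * step * step

closed_form = p * (1 - p**2 / 2)
print(round(total, 3), round(closed_form, 3))
```

The two printed values agree up to the grid discretization error.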
Let $\\ensuremath{\\text{\\rm\\sc Rev}}(f_1,f_2)$ denote the best possible such revenue, i.e.\\ the supremum of $\\mathcal R_{f_1,f_2}(u)$ when $u$ ranges over the space of all feasible utility functions over $I^2$. So the problem we want to deal with in this paper is exactly that of $\\sup_u \\mathcal R_{f_1,f_2}(u)$.\n\nWe now present the condition on the probability distributions which will enable our technique to provide a closed-form of the optimal auction.\n\n\\begin{assumption}\n\\label{assume:upwards_def}\n\\label{assume:regularity}\nThe probability distributions $f_1,f_2$ are such that functions $h_{f_1,f_2}(\\vecc x)-f_2(1)f_1(x_1)$ and $h_{f_1,f_2}(\\vecc x)-f_1(1)f_2(x_2)$ are nonnegative, where\n\\begin{equation}\n\\label{eq:def_h}\nh_{f_1,f_2}(\\vecc x)\\equiv 3 f_1(x_1)f_2(x_2)+x_1f_1'(x_1)f_2(x_2)+x_2f'_2(x_2)f_1(x_1).\n\\end{equation}\nFunction $h_{f_1,f_2}$ will also be assumed to be absolutely continuous with respect to each of its coordinates.\n\\end{assumption}\nWe will drop the subscript $f_1,f_2$ in the above notations whenever it is clear which distributions we are referring to. Assumption~\\ref{assume:upwards_def} is a slightly stronger condition than $h(\\vecc x)\\geq 0$ which is a common regularity assumption in the economics literature for multidimensional auctions with $m$ items: $(m+1)f(\\vecc x)+\\nabla f(\\vecc x)\\cdot \\vecc x\\geq 0$, where $f$ is the joint distribution for the item values (see e.g.~\\citep{Manelli:2006vn,Pavlov:2011fk,McAfee:1988nx}). In fact, \\citet{Manelli:2006vn} make the even stronger assumption that for each item $j$, $x_j f_j(x_j)$ is an increasing function. Even more recently, that assumption has also been deployed by \\citet{Wang:2013ab} in a two-item setting as one of their sufficient conditions for the existence of optimal auctions with small-sized menus. 
It has a strong connection with the standard single-dimensional regularity condition of~\\citet{Myerson:1981aa}, since for $m=1$ condition $h(\\vecc x)\\geq 0$ gives that $f(x)\\left( x-\\frac{1-F(x)}{f(x)} \\right)$ is increasing, thus ensuring the single-crossing property of the virtual valuation function (see also the discussion in \\citep[Sect.~2]{Manelli:2006vn}). \n\nStrengthening the regularity condition $h(\\vecc x)\\geq 0$ to that of Assumption~\\ref{assume:upwards_def} is essentially only used \nas a technical tool within the proof of Lemma~\\ref{lemma:no_positive_def}, and\nas a matter of fact we don't really need it to hold in the entire unit box $I^2$ but just in a critical sub-region $D_{1,2}$ which corresponds to the valuation subspace where both items are sold with probability $1$ (see Fig.~\\ref{fig:Exp_Uniform} and Sect.~\\ref{sec:partition_optimal}). As mentioned earlier in the Introduction, we introduce these technical conditions in order to simplify our exposition and reinforce the clarity of the techniques, but we believe that a proper ``ironing''~\\citep{Myerson:1981aa} process can probably bypass these restrictions and generalize our results.\nThe critical Assumption~\\ref{assume:upwards_def} is of course satisfied by all distributions considered in the results of this paper, namely monomial $\\propto x^c$ for any power $c\\geq 0$ (Corollary~\\ref{th:optimal_two_power}), exponential $\\propto e^{-\\lambda x}$ with rates $\\lambda\\leq 1$ (Corollary~\\ref{th:optimal_two_expo}), power-law $\\propto (t+1)^{-\\alpha}$ with parameters $\\alpha\\leq 2$ (Example~\\ref{example:power-law}), as well as combinations of these (see Example~\\ref{example:uniform-expo}). However, there is still a large class of distributions not captured by Assumption~\\ref{assume:upwards_def} as it is, e.g.\\ exponential with rates larger than $1$, power-law with parameters greater than $2$ and some beta-distributions (take, for example, $\\propto x^2(1-x)^2$). 
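These membership claims are straightforward to verify numerically. The following sketch (ours) checks the two nonnegativity conditions of Assumption~\ref{assume:upwards_def} on a grid, for two exponential densities with rate $\lambda=1$ truncated to $[0,1]$, where $f'(x)=-\lambda f(x)$ and hence $h(\vecc x)=f_1(x_1)f_2(x_2)(3-\lambda_1 x_1-\lambda_2 x_2)$:

```python
# Numerical check (ours) of Assumption 1 for two exponential densities
# with rate lam = 1, truncated to [0,1]:
#   f(x) = lam * exp(-lam*x) / (1 - exp(-lam)),   f'(x) = -lam * f(x),
# so h(x) = f1(x1) * f2(x2) * (3 - lam1*x1 - lam2*x2).
import math

lam1 = lam2 = 1.0

def density(lam):
    z = 1 - math.exp(-lam)  # normalizing constant on [0,1]
    return lambda x: lam * math.exp(-lam * x) / z

f1, f2 = density(lam1), density(lam2)

def h(x1, x2):
    return f1(x1) * f2(x2) * (3 - lam1 * x1 - lam2 * x2)

grid = [i / 100 for i in range(101)]
ok = all(h(x1, x2) - f2(1) * f1(x1) >= -1e-12 and
         h(x1, x2) - f1(1) * f2(x2) >= -1e-12
         for x1 in grid for x2 in grid)
print(ok)  # prints: True
```

The minimum of both expressions is attained at $(1,1)$, where they vanish, consistent with the rate bound $\lambda\leq 1$ being tight.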
See Footnote~\\ref{foot:alter-assumption-monotone} for an alternative condition that can replace Assumption~\\ref{assume:upwards_def}.\n\\section{Sufficient Conditions for Optimality}\nThis section is dedicated to proving the main result of the paper:\n\\begin{theorem}\n\\label{th:characterization_main}\nIf there exist decreasing, concave functions $s_1,s_2:I\\to I$, with $s_1'(t),s_2'(t)> -1$ for all $t\\in I$, such that for almost every\\footnote{Everywhere except a subset of zero Lebesgue measure.} (a.e.) $x_1,x_2\\in I$\n\\begin{equation}\n\\label{eq:1slice_gen_functions}\n\\frac{s_1(x_2)f_1(s_1(x_2))}{1-F_1(s_1(x_2))} =2+\\frac{x_2f_2'(x_2)}{f_2(x_2)}\n\\quad\\text{and}\\quad\n\\frac{s_2(x_1)f_2(s_2(x_1))}{1-F_2(s_2(x_1))} =2+\\frac{x_1f_1'(x_1)}{f_1(x_1)}, \n\\end{equation}\nthen \nthere exists a constant $p\\in[0,2]$ such that \n\\begin{equation}\n\\label{eq:2slice_gen}\n\\int_{D}h(\\vecc x)\\,dx_1\\,dx_2 \n=f_1(1)+f_2(1)\n\\end{equation}\nwhere $D$ is the region of $I^2$ enclosed by curves\\footnote{See Fig.~\\ref{fig:Exp_Uniform}.} $x_1+x_2=p$, $x_1=s_1(x_2)$ and $x_2=s_2(x_1)$ and including point $(1,1)$, i.e.~$D=\\sset{\\vecc x\\in I^2\\fwh{x_1+x_2\\geq p\\lor x_1\\geq s_1(x_2) \\lor x_2\\geq s_2(x_1)}}$, \nand the optimal selling mechanism is given by the utility function\n\\begin{equation}\n\\label{eq:optimal_auction_gen}\nu(\\vecc x)=\\max\\sset{0,x_1-s_1(x_2),x_2-s_2(x_1),x_1+x_2-p}.\n\\end{equation}\nIn particular, if $p\\leq\\min\\sset{s_1(0),s_2(0)}$, then the optimal mechanism is the deterministic full-bundling with price $p$. 
\n\\end{theorem}\n\nNotice that for any $s\\in I$ we have\n\\begin{align*}\n\\int_s^1h(\\vecc x)\\,dx_1 &=\\int_s^1 3f_1(x_1)f_2(x_2)+x_1f_1'(x_1)f_2(x_2)+x_2f_2'(x_2)f_1(x_1) \\,dx_1\\\\\n\t\t\t\t&=3f_2(x_2)(1-F_1(s))+f_2(x_2)\\int_s^1x_1f_1'(x_1)\\,dx_1+x_2f_2'(x_2)(1-F_1(s))\\\\\n\t\t\t\t&=3f_2(x_2)(1-F_1(s))+f_2(x_2)\\left(\\left[x_1f_1(x_1)\\right]_s^1-(1-F_1(s))\\right)+x_2f_2'(x_2)(1-F_1(s))\\\\\n\t\t\t\t&=2f_2(x_2)(1-F_1(s))+f_2(x_2)(f_1(1)-sf_1(s))+x_2f_2'(x_2)(1-F_1(s))\\\\\n\t\t\t\t&=(1-F_1(s))f_2(x_2)\\left[2+\\frac{x_2f_2'(x_2)}{f_2(x_2)}-\\frac{sf_1(s)}{1-F_1(s)} \\right] +f_1(1)f_2(x_2)\n\\end{align*}\nwhich means that an equivalent way of looking at~\\eqref{eq:1slice_gen_functions} is, more simply, by\n\\begin{equation}\n\\label{eq:1slice_gen_integrals}\n\\int_{s_1(x_2)}^1h(\\vecc x)\\,dx_1=f_1(1)f_2(x_2)\n\\quad\\text{and}\\quad\n\\int_{s_2(x_1)}^1h(\\vecc x)\\,dx_2=f_2(1)f_1(x_1).\n\\end{equation}\nThis also means that~\\eqref{eq:1slice_gen_integrals} can take the place of~\\eqref{eq:1slice_gen_functions} in the statement of Theorem~\\ref{th:characterization_main} whenever this gives an easier way to solve for functions $s_1$ and $s_2$.\n\\subsection{Partitioning of the Valuation Space}\n\\label{sec:partition_optimal}\n\\begin{figure}\n\\centering\n\\includegraphics[width=10cm]{Figures\/Exp_Uniform.pdf}\n\\caption{\\footnotesize The valuation space partitioning of the optimal selling mechanism for two independent items, one following a uniform distribution and the other an exponential with parameter $\\lambda=1$. Here $s_1(t)=(2-t)\/(3-t)$, $s_2(t)=2-W(2e)\\approx 0.625$ and $p\\approx 0.787$. 
In region $D_{1}$ (light grey) item $1$ is sold deterministically and item $2$ with a probability of $-s_1'(x_2)$; in $D_{2}$ (light grey) only item $2$ is sold; and region $D_{1,2}$ (dark grey) is where the full bundle is sold deterministically, for a price of $p$.}\n\\label{fig:Exp_Uniform}\n\\end{figure}\nSince the derivatives of the functions $s_j$ in Theorem~\\ref{th:characterization_main} are above $-1$, each curve $x_1=s_1(x_2)$ and $x_2=s_2(x_1)$ can intersect the full-bundle line $x_1+x_2=p$ at most at a single point. So let $x_2^*=x_2^*(p), x_1^*=x_1^*(p)$ be the coordinates of these intersections, respectively, i.e.~$s_1(x_2^*)=p-x_2^*$ and $s_2(x_1^*)=p-x_1^*$. If such an intersection does not exist, just define $x_2^*=0$ or $x_1^*=0$.\n\nThe construction and the optimal mechanism given in Theorem~\\ref{th:characterization_main} then give rise to the following partitioning of the valuation space $I^2$ (see Fig.~\\ref{fig:Exp_Uniform}):\n\\begin{itemize}\n\\item Region $\\bar D=I^2\\setminus D$ where no item is allocated\n\\item Region $D_1=\\sset{\\vecc x\\in I^2\\fwh{x_1\\geq s_1(x_2)\\land x_2\\leq x_2^*}}$ where item $1$ is sold with probability $1$ and item $2$ with probability $-s_1'(x_2)$ for a price of $s_1(x_2)-x_2s_1'(x_2)$\n\\item Region $D_2=\\sset{\\vecc x\\in I^2\\fwh{x_2\\geq s_2(x_1)\\land x_1\\leq x_1^*}}$ where item $2$ is sold with probability $1$ and item $1$ with probability $-s_2'(x_1)$ for a price of $s_2(x_1)-x_1s_2'(x_1)$\n\\item Region $D_{1,2}=D\\setminus(D_1\\union D_2)=\\sset{\\vecc x\\in I^2\\fwh{x_1+x_2\\geq p \\land x_1\\geq x_1^* \\land x_2\\geq x_2^*}}$ where both items are sold deterministically in a full bundle of price $p$.\n\\end{itemize}\n\nUnder this decomposition, by \\eqref{eq:1slice_gen_integrals}:\n$$\n\\int_{D_{1}}h(\\vecc x)\\,dx_1\\,dx_2=\\int_{0}^{x_2^*}\\int_{s_1(x_2)}^1h(\\vecc x)\\,dx_1\\,dx_2=f_1(1)F_2(x_2^*)\n$$\nso expression~\\eqref{eq:2slice_gen} can be written equivalently 
as\n\\begin{equation}\n\\label{eq:2slice_gen_bundle_region}\n\\int_{D_{1,2}}h(\\vecc x)\\,dx_1\\,dx_2 \n= f_1(1)(1-F_2(x_2^*))+f_2(1)(1-F_1(x_1^*)).\n\\end{equation}\n\\subsection{Duality}\n\\label{sec:duality}\nThe major underlying tool to prove Theorem~\\ref{th:characterization_main} will be the duality framework of~\\citep{gk2014}. For completeness we briefly present here the formulation and key aspects, and the interested reader is referred to the original text for further details. \n\nRemember that the revenue optimization problem we want to solve here is to maximize $\\mathcal R(u)$ over the space of all convex functions $u:I^2\\longrightarrow\\mathbb R_+$ with\n\\begin{equation}\n\\label{eq:allocs_probs_01}\n0\\leq \\frac{\\partial u(\\vecc x)}{\\partial x_j}\\leq 1,\\qquad j=1,2,\n\\end{equation}\nfor a.e. $\\vecc x\\in I^2$. First we relax this problem by dropping the convexity assumption and replacing it with (absolute) continuity. We also drop the lower bound in~\\eqref{eq:allocs_probs_01}. Then this new relaxed program is dual to the following: minimize $\\int_0^1\\int_0^1 z_1(\\vecc x)+z_2(\\vecc x)\\,d\\vecc x$ where the new dual variables $z_1,z_2:I^2\\longrightarrow\\mathbb R_+$ are such that $z_j$ is (absolutely) continuous with respect to its $j$-coordinate and the following conditions are satisfied for all $x_1,x_2\\in I$:\n\\begin{align}\nz_j(0,x_{-j}) &=0, &&j=1,2, \\label{eq:dual_const_1}\\\\\nz_j(1,x_{-j}) &\\geq f_j(1)f_{-j}(x_{-j}), &&j=1,2, \\label{eq:dual_const_2}\\\\\n\\frac{\\partial z_1(\\vecc x)}{\\partial x_1}+\\frac{\\partial z_2(\\vecc x)}{\\partial x_2} &\\leq 3 f_1(x_1)f_2(x_2)+ x_1f_1'(x_1)f_2(x_2)+x_2f_1(x_1)f_2'(x_2).\\label{eq:dual_const_3}\n\\end{align}\nWe will refer to the first optimization problem, where $u$ ranges over the relaxed space of continuous, nonnegative functions with derivatives at most $1$, as the \\emph{primal program} and to the second as the \\emph{dual}. 
Intuitively, every dual solution $z_j$ must start at zero and grow all the way up to $f_j(1)f_{-j}(x_{-j})$ while travelling in interval $I$, in a way that the sum of the rates of growth of both $z_1$ and $z_2$ is never faster than the right-hand side of~\\eqref{eq:dual_const_3}.\nIn~\\citep{gk2014} it is proven that these two programs indeed satisfy weak duality, i.e.~for any feasible $u,z_1,z_2$ we have\n$$\n\\mathcal R(u)\\leq \\int_{0}^1\\int_0^1 z_1(\\vecc x)+z_2(\\vecc x)\\,d\\vecc x\n$$ \nas well as complementary slackness, in the following even stronger form of $\\varepsilon$-complementarity: \n\n\\begin{lemma}[Complementarity]\\label{lemma:complementarity}\nIf $u,z_1,z_2$ are feasible primal and dual solutions, respectively, $\\varepsilon>0$ and the following complementarity constraints hold for a.e. $\\vecc x\\in I^2$,\n\\begin{align}\n u(\\vecc x) \\left( h(\\vecc x)\n -\\frac{\\partial z_1(\\vecc x)}{\\partial x_1}-\\frac{\\partial z_2(\\vecc x)}{\\partial x_2}\n \\right) &\\leq \\varepsilon f_1(x_1)f_2(x_2), \\label{eq:e_compl_2}\\\\\nu(1,x_{-j}) \\left( z_j(1, x_{-j})\n -f_j(1)f_{-j}(x_{-j}) \\right) &\\leq \\varepsilon f_j(1)f_{-j}(x_{-j}) \\label{eq:e_compl_3}, &&j=1,2,\\\\\n z_j(\\vecc x) \\left( 1 - \\frac{\\partial\n u(\\vecc x)}{\\partial x_j} \\right) &\\leq \\varepsilon f_1(x_1)f_2(x_2), &&j=1,2, \\label{eq:e_compl_4}\n\\end{align}\nwhere $h$ is defined in~\\eqref{eq:def_h}, then the values of the primal and dual programs differ by at most\n$7\\varepsilon$. In particular, if the conditions are satisfied\nwith $\\varepsilon=0$, both solutions are optimal.\n\\end{lemma} \n\nOur approach to proving Theorem~\\ref{th:characterization_main} will be to show the existence of a pair of dual solutions $z_1,z_2$ with respect to which the utility function $u$ given by the theorem indeed satisfies complementarity. 
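For intuition on the weak-duality bound, here is a toy example of ours (not a dual used in the paper): for two i.i.d.\ uniform values, $h(\vecc x)=3$, and the pair $z_j(\vecc x)=x_j$ is dual feasible, since $z_j(0,x_{-j})=0$, $z_j(1,x_{-j})=1=f_j(1)f_{-j}(x_{-j})$, and the growth rates sum to $2\leq 3$; its objective value $\int_0^1\int_0^1 x_1+x_2\,d\vecc x=1$ therefore upper-bounds the revenue of every feasible mechanism:

```python
# Toy weak-duality check (ours): two i.i.d. uniform items, h(x) = 3.
# The feasible dual pair z_j(x) = x_j has objective value
# int_0^1 int_0^1 (x1 + x2) dx = 1, so R(u) <= 1 for every feasible u.
def bundle_revenue(p, n=200):
    # Midpoint-rule value of R(u) for u(x) = max(0, x1 + x2 - p);
    # above the line x1 + x2 = p the integrand equals p, below it 0.
    step = 1.0 / n
    cells = sum(1 for i in range(n) for j in range(n)
                if (i + 0.5) * step + (j + 0.5) * step > p)
    return p * cells * step * step

dual_objective = 1.0
revs = [bundle_revenue(p) for p in (0.2, 0.5, 0.8, 1.2)]
print(all(r <= dual_objective for r in revs))  # prints: True
```

Of course, this particular dual is far from tight; the complementarity conditions above are exactly what pins down a dual whose objective matches the optimal revenue.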
Notice here the existential character of our technique: our duality approach offers the advantage of proving just the existence of such duals, without having to explicitly describe them and compute their objective value in order to prove optimality, i.e.~that the primal and dual objectives are indeed equal. Also notice that the utility function $u$ given by Theorem~\\ref{th:characterization_main} is convex by construction, so if one shows optimality of $u$ in the relaxed setting, then $u$ must also be optimal among all feasible mechanisms.\n\nDefine function $W:I^2\\to\\mathbb R_+$ by\n$$\nW(\\vecc x)=\n\\begin{cases}\nh(\\vecc x), &\\text{if}\\;\\; \\vecc x\\in D,\\\\\n0, &\\text{otherwise},\n\\end{cases} \n$$\nwhere $D$ is defined in Sect.~\\ref{sec:partition_optimal} (see Fig.~\\ref{fig:Exp_Uniform}).\nIf one could decompose $W$ into functions $w_1,w_2:I^2\\to\\mathbb R_+$ such that\n\\begin{align}\nw_1(\\vecc x)+w_2(\\vecc x) &=W(\\vecc x)\\label{eq:Wdecomp_sum} \\\\\n\\int_0^1w_j(\\vecc x)\\,d x_j &= f_j(1)f_{-j}(x_{-j}) \\label{eq:Wdecomp1}, \\qquad j=1,2,\n\\end{align}\nfor all $\\vecc x\\in I^2$, and $w_j$ is almost everywhere continuous with respect to its $j$-th coordinate, then by defining \n$$\nz_j(\\vecc x)=\\int_0^{x_j} w_j(t,x_{-j})\\,dt\n$$\nwe'll have\n\\begin{align}\n\\frac{\\partial z_1(\\vecc x)}{\\partial x_1}+\\frac{\\partial z_2(\\vecc x)}{\\partial x_2} &=\n\\begin{cases}\n h(\\vecc x), & \\text{for}\\;\\; \\vecc x\\in D,\\\\\n 0, &\\text{otherwise},\n \\end{cases}\n \\label{eq:prop_dual_1}\n \\\\\nz_j(0,x_{-j}) &=0, && j=1,2, \\label{eq:prop_dual_2} \\\\\nz_j(1,x_{-j}) &= f_j(1)f_{-j}(x_{-j}), && j=1,2. \\label{eq:prop_dual_3} \n\\end{align}\nIf the requirements of Theorem~\\ref{th:characterization_main} hold, then it is fairly straightforward to get such a decomposition in certain regions. In particular, we can set $w_1=w_2=0$ in $I^2\\setminus D$, $w_1=W=h$ and $w_2=0$ in $D_1$ and $w_2=W=h$ and $w_1=0$ in $D_2$. 
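For the special case of two i.i.d.\ uniform values (our worked sketch), the construction can be made fully explicit: $h\equiv 3$, condition~\eqref{eq:1slice_gen_functions} reads $s/(1-s)=2$ and gives the constant cutoffs $s_1=s_2=2/3$, and condition~\eqref{eq:2slice_gen} forces the area of $D$ to be $2/3$, which solves to the bundle price $p=(4-\sqrt 2)/3\approx 0.862$, recovering the known optimal mechanism for two uniform items~\citep{Pavlov:2011fk,Manelli:2006vn}. A numerical check of the mass condition:

```python
# Worked check (ours) for two i.i.d. uniform items: s1 = s2 = 2/3 from
# eq. (1slice_gen_functions) (s/(1-s) = 2), and eq. (2slice_gen) with
# h = 3 requires area(D) = 2/3, which yields p = (4 - sqrt(2))/3.
import math

s = 2.0 / 3.0
p = (4 - math.sqrt(2)) / 3  # ~0.862

def in_D(x1, x2):
    return x1 + x2 >= p or x1 >= s or x2 >= s

n = 1000
step = 1.0 / n
mass = sum(3.0 * step * step
           for i in range(n) for j in range(n)
           if in_D((i + 0.5) * step, (j + 0.5) * step))
print(round(mass, 2))  # should be close to f1(1) + f2(1) = 2
```
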
Then, by~\\eqref{eq:1slice_gen_integrals}, it is not difficult to see that indeed conditions~\\eqref{eq:Wdecomp_sum}--\\eqref{eq:Wdecomp1} are satisfied. However, \\emph{it is highly non-trivial how to create such a decomposition in the remaining region $D_{1,2}$} and that is what the proof of Lemma~\\ref{lemma:coloring} achieves, with the assistance of the geometric Lemma~\\ref{lemma:no_positive_def}, in the remainder of this section. This is the most technical part of the paper.\n\nIn any case, if we are able to get such a decomposition, by the previous discussion that would mean that functions $z_1,z_2:I^2\\to\\mathbb R_+$ are \\emph{feasible dual} solutions: it is trivial to verify that properties~\\eqref{eq:prop_dual_1}--\\eqref{eq:prop_dual_3} satisfy the dual constraints \\eqref{eq:dual_const_1}--\\eqref{eq:dual_const_3}. But most importantly, the \\emph{equalities} in properties~\\eqref{eq:prop_dual_1}--\\eqref{eq:prop_dual_3} and the way $w_1$ and $w_2$ are defined in regions $D_1$ and $D_2$ tell us something more: that this pair of solutions would satisfy complementarity with respect to the primal given in~\\eqref{eq:optimal_auction_gen} and whose allocation is analyzed in detail in Sect.~\\ref{sec:partition_optimal}, proving that this mechanism is optimal and thus establishing Theorem~\\ref{th:characterization_main}.\n\n\\subsection{Deficiency}\n\\label{sec:nodef}\nThe following notion will be the tool that gives a very useful geometric interpretation to the rest of the proof of Theorem~\\ref{th:characterization_main} and it will be critical in proving Lemma~\\ref{lemma:coloring}. 
\n\\begin{definition}\n\\label{def:deficiency}\nFor any body $S\\subseteq I^2$ define its \\emph{deficiency} (with respect to distributions $f_1,f_2$) to be\n$$\n\\delta(S)\\equiv \\int_S h(\\vecc x)\\,d\\vecc x - f_2(1)\\int_{S_1}f_1(x_1)\\,dx_1-f_1(1)\\int_{S_2}f_2(x_2)\\,dx_2,\n$$\nwhere $S_1$, $S_2$ denote $S$'s projections to the $x_1$ and $x_2$ axis, respectively.\n\\end{definition}\n\\begin{lemma}\n\\label{lemma:no_positive_def}\nIf the requirements of Theorem~\\ref{th:characterization_main} hold, then no body $S\\subseteq D_{1,2}$ has positive deficiency.\n\\end{lemma}\n\\begin{proof}\nTo reach a contradiction, assume that there is a body $S\\subseteq D_{1,2}$ with $\\delta(S)>0$. \nFirst, we'll show that without loss of generality $S$ can be assumed to be upwards closed. Intuitively, we'll show that one can push mass of $S$ to the right or upwards, without reducing its deficiency. By Assumption~\\ref{assume:upwards_def} function $h(\\vecc x)-f_2(1)f_1(x_1)$ is nonnegative. Then, if there exists a nonempty horizontal line segment $\\slice{S}{x_2}{t}$ of $S$ at some height $x_2=t$, then we can assume that this line segment fills the entire available horizontal space of $D_{1,2}$: if that was not the case, and there existed a small interval $[\\alpha,\\beta]\\times\\{t\\}$ that was not in $S$, then we could add it to $S$, not increasing the projection towards the $x_2$-axis (it is already covered by the other existing points at $x_2=t$), while the projection towards the $x_1$-axis is increased at most by $\\beta-\\alpha$; thus the overall deficiency changes by at least $\\int_{\\alpha}^{\\beta}h(\\vecc x)\\,dx_1-f_2(1)\\int_{\\alpha}^{\\beta}f_1(x_1)\\,dx_1$, which is nonnegative\\footnote{We must mention here that the assumption of the nonnegativity of $h(\\vecc x) -f_2(1)f_1(x_1)$ could be replaced by that of $h(\\vecc x)-f_2(1)f_1(x_1)$ being increasing with respect to $x_1$ and the argument would still carry through: we can move entire columns of $S$ to the right, 
pushing elements horizontally; the projection towards axis $x_2$ again remains unchanged, and because of the monotonicity of $h(\\vecc x)-f_2(1)f_1(x_1)$, the overall deficiency will not decrease since we are integrating over higher values of $x_1$.\n\nThis means that the monotonicity of $h(\\vecc x)-f_j(x_j)f_{-j}(1)$ with respect to $x_j$ can replace its nonnegativity in the initial Assumption~\\ref{assume:upwards_def} (while still maintaining the regularity requirement of $h(\\vecc x)$ being nonnegative) without affecting the main results of this paper, namely Theorems \\ref{th:characterization_main}, \\ref{th:characterization_iid} and \\ref{th:two_iid_approx}.\n\\label{foot:alter-assumption-monotone}\n}.\n\nSo $S$ can be assumed to be the intersection of $D_{1,2}$ with a box, i.e.~$S=[t_1,1]\\times[t_2,1]\\inters D_{1,2}$, where $t_1\\geq x_1^*$ and $t_2\\geq x_2^*$. This also means that its projections are $S_1=[t_1,1]$ and $S_2=[t_2,1]$.\nNow consider the lowest horizontal slice $\\slice{S}{x_2}{t_2}$ of $S$. It obviously lies within $D_{1,2}$. But from condition~\\eqref{eq:1slice_gen_integrals} so do all horizontal line segments of the form $[s_1(x_2),1]$ for any $x_2\\in[x_2^*, t_2]$: $s_1(x_2)$ is decreasing, and specifically less steeply than the line $x_1=-x_2+p$ which is the boundary of $D_{1,2}$. So, by adding all these segments to $S$ we won't increase the projections towards the $x_1$-axis (these are covered already by $\\slice{S}{x_2}{t_2}$, which has to be a superset of $[s_1(t_2),1]$, otherwise it would have a negative deficiency, see~\\eqref{eq:1slice_gen_integrals}) and the new projections towards the $x_2$-axis are dominated by the increase of the area of $S$ (these segments have nonnegative deficiency). So, $S$ can be assumed to project onto the entire intervals $[x_1^*,1]$ and $[x_2^*,1]$ of $D_{1,2}$ and thus, since $h$ is nonnegative, $S$ can be assumed to fill the entire $D_{1,2}$ region. 
But by the definition of price $p$ in Theorem~\\ref{th:characterization_main}, $\\delta(D_{1,2})=0$, which concludes the proof. \n\\end{proof}\n\\subsection{Dual Solution and Optimality}\n\\label{sec:optimality}\nNotice that Theorem~\\ref{th:characterization_main} presupposes the existence of a full-bundling price satisfying~\\eqref{eq:2slice_gen}; this needs to be proven. Indeed,\nquantity $\\int_Dh(\\vecc x)\\,d\\vecc x$ continuously (weakly) increases as $p$ decreases, and for $p=0$ %\n\\begin{align*}\n\\int_Dh(\\vecc x)\\,d\\vecc x &=\\int_0^1\\int_0^1 3f_1(x_1)f_2(x_2)+x_1f_1'(x_1)f_2(x_2)+x_2f_2'(x_2)f_1(x_1)\\,dx_1\\,dx_2\\\\\n\t\t&=3+(f_1(1)-1)+(f_2(1)-1)=1+f_1(1)+f_2(1)\n\t\t>f_1(1)+f_2(1)\n\\end{align*}\nwhile for $p=\\hat x_1+\\hat x_2$, where $(\\hat x_1,\\hat x_2)$ is the unique point of intersection of the curves $x_1=s_1(x_2)$ and $x_2=s_2(x_1)$ in $I^2$ (such a point certainly exists because $s_1$ and $s_2$ are defined over the entire $I$), \n\\begin{align*}\n\\int_{D_{1,2}}h(\\vecc x)\\,d\\vecc x &=\\int_{\\hat x_2}^1\\int_{\\hat x_1}^1 h(\\vecc x)\\,d\\vecc x\n\t\t\\leq \\int_{\\hat x_2}^1\\int_{s_1(x_2)}^1 h(\\vecc x)\\,d\\vecc x\n\t\t= \\int_{\\hat x_2}^1f_1(1)f_2(x_2)\\,dx_2\\\\\n\t\t&= f_1(1)(1-F_2(\\hat x_2))\n\t\t\\leq f_1(1)(1-F_2(\\hat x_2))+f_2(1)(1-F_1(\\hat x_1)),\n\\end{align*}\nthe first inequality holding because $h$ is nonnegative and $s_1(x_2)\\leq s_1(\\hat x_2)=\\hat x_1$ ($s_1$ is decreasing), and the second equality by substituting~\\eqref{eq:1slice_gen_integrals}, and from~\\eqref{eq:2slice_gen_bundle_region} this means that $\\int_{D}h\\,d\\vecc x\\leq f_1(1)+f_2(1)$.\n\nCombining the above, indeed there must be a $p\\in[0,\\hat x_1+\\hat x_2]$ such that $\\int_{D}h\\,d\\vecc x=f_1(1)+f_2(1)$. In fact, using this argument, if for $p=\\min\\sset{s_1(0),s_2(0)}$ it is $\\int_{D}h\\,d\\vecc x\\leq f_1(1)+f_2(1)$, then the price satisfying~\\eqref{eq:2slice_gen} is at most $\\min\\sset{s_1(0),s_2(0)}$, in which case the optimal mechanism is to deterministically offer only the full bundle.\n\nWe can now state the main lemma of this section, whose proof occupies the remainder of it.\n\\begin{lemma}\n\\label{lemma:coloring}\nIf the requirements of Theorem~\\ref{th:characterization_main} hold, then for any $\\varepsilon>0$, there exist feasible dual solutions $z_1,z_2$ which are $\\varepsilon$-complementary to the (primal) $u$ given by~\\eqref{eq:optimal_auction_gen}. 
Therefore, the mechanism induced by $u$ is optimal.\n\\end{lemma}\n\\begin{proof}\nFollowing the discussion in Sect.~\\ref{sec:duality}, we would like to decompose $W$ into the desired functions $w_1$ and $w_2$ within $D_{1,2}$, i.e.~such that they satisfy~\\eqref{eq:Wdecomp_sum}--\\eqref{eq:Wdecomp1}. In fact, we are aiming for $\\varepsilon$-complementarity, so we can relax conditions~\\eqref{eq:Wdecomp1} a bit: \n\\begin{equation}\n\\int_0^1w_j(\\vecc x)\\,dx_j \\leq f_j(1)f_{-j}(x_{-j})+\\varepsilon' \n\\label{eq:relax_dual_boundary}\n\\end{equation}\nTo be precise, the $\\varepsilon$-complementarity of Lemma~\\ref{lemma:complementarity} dictates that, regarding these conditions, we must show that for a.e.\\ $\\vecc x\\in D_{1,2}$ property~\\eqref{eq:e_compl_3} holds (conditions~\\eqref{eq:e_compl_2} and~\\eqref{eq:e_compl_4} are immediately satisfied with strong equality, by~\\eqref{eq:prop_dual_1} and the fact that within $D_{1,2}$ both items are sold deterministically with probability $1$).\nBut since $u(\\vecc x)\\leq x_1+x_2 \\leq 2$ for all $x_1,x_2\\in I$ ($u$'s derivatives are at most $1$ with respect to any direction) and there also exists $M>0$ such that $f_1(1)f_2(x_2),f_2(1)f_1(x_1)\\geq M$ for all $\\vecc x\\in D_{1,2}$ (the density functions are continuous over the closed interval $I$ and positive\\footnote{We would like to note here that this is the only point in the paper where the fact that the densities are \\emph{strictly} positive is used. As a matter of fact, a closer look will reveal that the proof just needs the property to hold in the closure of $D_{1,2}$ and not necessarily in the entire domain $I^2$. 
This allows the consideration of a wider family of feasible distributional priors, for example the monomial distributions of Corollary~\\ref{th:optimal_two_power}: their densities $f(t)=(c+1)t^c$ may vanish at $t=0$ but these ``problematic'' points happen to lie outside the area $D_{1,2}$ where both items are sold.}), indeed~\\eqref{eq:relax_dual_boundary} is enough to guarantee $\\varepsilon$-complementarity if one ensures $\\varepsilon'\\leq \\varepsilon M\/2$. So, the remainder of the proof is dedicated to constructing nonnegative, a.e.~continuous functions $w_1$ and $w_2$ over $D_{1,2}$, such that $w_1+w_2=h$ and~\\eqref{eq:relax_dual_boundary} are satisfied.\n\nWe will do that by constructing an appropriate graph and recovering $w_1$ and $w_2$ as ``flows'' through its nodes, deploying the min-cut max-flow theorem to prove existence. To start, we pick an arbitrarily small $\\delta>0$ and discretize $I^2$ into a lattice of $\\delta$-size boxes $[(i-1)\\delta,i\\delta] \\times [(j-1)\\delta,j\\delta]$, where $i,j=1,2,\\dots,1\/\\delta$, selecting $\\delta$ such that $1\/\\delta$ is an integer. Denote the intersection of such a box with $D_{1,2}$ by $B_{i,j}$. Also, let $B^1_i$ denote the projection of all nonempty $B_{i,j}$'s, as $j$ ranges, towards the $x_1$-axis and $B^2_j$ towards the $x_2$-axis, as $i$ ranges. Note that these are well-defined in this way, since by the geometry of region $D_{1,2}$ two nonempty $B_{i,j}$, $B_{i',j'}$ will have the same vertical projection if $i=i'$ and the same horizontal if $j=j'$. Also, it is simple to observe that all $B^1_i$ and $B^2_j$ are single-dimensional real intervals of length at most $\\delta$.\n\nNow let's construct a directed graph $G=(V,E)$, together with a capacity function $c(e)$ for all edges $e\\in E$. Initially, for any pair $(i,j)$ such that $B_{i,j}$ has positive (two-dimensional Lebesgue) measure we insert a node $v(i,j)$ in $V$. 
We'll call these nodes \\emph{internal} and we'll denote them by $V_o$. Also, for any internal node $v(i,j)$ we add nodes $v_1(i)$ and $v_2(j)$ corresponding to entire columns and rows, calling them \\emph{column} and \\emph{row} vertices and denoting them by $V_1$ and $V_2$, respectively. Finally there are two special nodes, a source $\\sigma$ and a destination $\\tau$. From the source to all internal nodes $v=v(i,j)$ we add an edge $(\\sigma,v)$ with capacity equal to the area of $B_{i,j}$ under $h$, i.e.~$c(\\sigma,v)=\\int_{B_{i,j}}h(\\vecc x)\\,d\\vecc x$. From any internal node $v=v(i,j)$ to its external column and row nodes $v_1=v_1(i)$ and $v_2=v_2(j)$ we add edges with capacities $c(v,v_1)=c(v,v_2)=c(\\sigma,v)$ equal to the internal node's incoming edge capacity from the source. Finally, for all external nodes $v_1(i)\\in V_1$ and $v_2(j)\\in V_2$ we add edges towards the destination $\\tau$ with capacities $c(v_1,\\tau)=f_2(1)\\int_{B^1_i}f_1(x_1)\\,dx_1$ and $c(v_2,\\tau)=f_1(1)\\int_{B^2_j}f_2(x_2)\\,dx_2$, respectively. The structure of graph $G$ is depicted in Fig.~\\ref{fig:flows_graph}.\n \\begin{figure}\n \\centering\n \\includegraphics[width=10cm]{Figures\/flows.pdf}\n \\caption{\\footnotesize The graph $G$ in the proof of Lemma~\\ref{lemma:coloring}. Every internal node $B_{i,j}$ of region $D_{1,2}$ can receive at most $\\int_{B_{i,j}}h(\\vecc x)\\,d\\vecc x$ flow from the source node $\\sigma$ and can send at most that amount to each one of its neighbouring external nodes $B_{i}^1$ and $B_{j}^2$. Every external node $B^1_i$ and $B^2_j$ is connected to the destination $\\tau$ with edges of capacity $f_2(1)\\int_{B_i^1}f_1(x_1)\\,dx_1$ and $f_1(1)\\int_{B_j^2}f_2(x_2)\\,dx_2$, respectively. 
Internal $B_{i,j}$'s are two-dimensional intersections of $\\delta$-boxes with $D_{1,2}$, while the external ones, $B^1_i$ and $B^2_j$ are single dimensional intervals of length $\\delta$.}\n \\label{fig:flows_graph}\n \\end{figure}\n\nAs a first observation, notice that the maximum flow that can be sent from $\\sigma$ within the graph is $\\int_{D_{1,2}}h(\\vecc x)\\,dx_1\\,dx_2$ and the maximum flow that $\\tau$ can receive is \n$$f_2(1)\\int_{x_1^*}^1f_1(x_1)\\,dx_1+f_1(1)\\int_{x_2^*}^1f_2(x_2)\\,dx_2$$ (remember that the projection of $D_{1,2}$ to the $x_1$-axis is $[x_1^*,1]$ and to the $x_2$-axis $[x_2^*,1]$). But, from the way the entire region $D$ is constructed, we know that the above two quantities are equal (see~\\eqref{eq:2slice_gen_bundle_region}). Let's denote this value by $\\psi$. Next, we will prove that indeed one can create a feasible flow through $G$ that achieves that maximum value $\\psi$. From the max-flow min-cut theorem, it is enough to show that the minimum $(\\sigma,\\tau)$-cut of $G$ has a value of at least $\\psi$. To do that, we'll show that $(\\sigma,V\\setminus\\{\\sigma\\})$ is a minimum cut of $G$.\n\nIndeed, let $(S,V\\setminus S)$ be a $(\\sigma,\\tau)$-cut of $G$. First, let there be an edge $(v,v_j)$ crossing the cut, i.e.~$v\\in S$ and $v_j\\notin S$, with $v$ internal node and $v_j$ external. Then, by moving $v$ at the other side of the cut, i.e.~removing it from $S$, we would create at most a new edge contributing to the cut, namely $(\\sigma,v)$ but also destroy at least one edge $(v,v_j)$. Since the capacities of these two edges are the same, the overall effect would be to get a new cut with weakly smaller value. So, from now on we can assume that for all edges $(v,v_j)$ of $G$, if $v\\in S$ then also $v_j\\in S$. Under this assumption, if $S_{o}=V_{o}\\inters S$ denotes the set of internal nodes belonging at the left side of the cut, for every $v\\in S_{o}$ all edges $(v,v_j)$ adjacent to $v$ will not cross the cut. 
However, this means that all edges $(v_j,\\tau)$, where $v_j\\in N(v)$\\footnote{$N(v)$ denotes the set of neighbours of $v$ in graph $G$.}, do contribute to the cut. But then, if we remove all nodes in $S_{o}$, together with their neighbouring external nodes $N(S_{o})$ at the other side of the cut, we increase the cut's value by at most $\\sum_{v\\in S_{o}}c(\\sigma,v)$ and at the same time reduce it by at least $\\sum_{v_j\\in N(S_{o})}c(v_j,\\tau)$. However, by the way graph $G$ is constructed, this corresponds to an overall change of the cut's value by at most \n$$\\int_B h(\\vecc x)\\,d\\vecc x -f_2(1) \\int_{B_{1}}f_1(x_1)\\,dx_1 -f_1(1)\\int_{B_{2}}f_2(x_2)\\,dx_2,$$ \nwhere $B=\\union_{v(i,j)\\in S_{o}}B_{i,j}$ is the region of $D_{1,2}$ covered by the boxes of nodes in $S_{o}$ and $B_1$, $B_2$ are the projections of this body to the horizontal and vertical axis, respectively. From Lemma~\\ref{lemma:no_positive_def} this difference must be nonpositive, thus this change results in a cut of an even (weakly) smaller value. The above arguments show that indeed the cut that has only $\\sigma$ remaining at its left side is a minimum one.\n\nSo, there must be a flow $\\phi:E\\longrightarrow\\mathbb R_+$ transferring a total value of $\\psi$ through $G$. As we argued above though, by the construction of $G$, in order to achieve this value of $\\psi$ the full capacity of \\emph{all} edges $(\\sigma, v)$ as well as that of all $(v_j,\\tau)$ must be used. So, this flow $\\phi$ manages to elegantly separate all incoming flow $\\phi(\\sigma,v(i,j))=\\int_{B_{i,j}}h(\\vecc x)\\,d\\vecc x$ towards an internal box of $D_{1,2}$, into a sum of flows $\\phi(v(i,j),v_1(i))+\\phi(v(i,j),v_2(j))$ towards its external neighbours. But this is exactly what we need in order to construct our feasible dual solution! For simplicity, denote this incoming flow $\\phi(i,j)$ and the outgoing ones $\\phi_1(i,j)$ and $\\phi_2(i,j)$, respectively. 
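To make the existence argument tangible, here is a small illustrative sketch (not from the paper; the toy integer capacities stand in for the box integrals $\\int_{B_{i,j}}h(\\vecc x)\\,d\\vecc x$ and the boundary integrals): a minimal Edmonds--Karp max-flow computation on a $2\\times 2$ grid of internal boxes, checking that every source edge gets saturated, which is exactly what the min-cut argument above guarantees.

```python
from collections import deque

def max_flow(edges, s, t):
    """Edmonds-Karp max-flow; edges is a list of (u, v, capacity) triples."""
    res = {}
    for u, v, c in edges:
        res.setdefault(u, {})
        res.setdefault(v, {})
        res[u][v] = res[u].get(v, 0) + c   # forward residual capacity
        res[v].setdefault(u, 0)            # reverse residual capacity
    value = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value, res              # no augmenting path left: maximum reached
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        value += aug

n = 2  # toy 2x2 grid of internal boxes, standing in for the delta-discretization
edges = []
for i in range(n):
    for j in range(n):
        box = ("box", i, j)
        edges.append(("sigma", box, 2))     # c(sigma, v): stands in for the h-mass of B_{i,j}
        edges.append((box, ("col", i), 2))  # c(v, v1(i)) = c(sigma, v)
        edges.append((box, ("row", j), 2))  # c(v, v2(j)) = c(sigma, v)
for k in range(n):
    edges.append((("col", k), "tau", 2))    # stands in for f2(1) * integral of f1 over B^1_k
    edges.append((("row", k), "tau", 2))    # stands in for f1(1) * integral of f2 over B^2_k

value, res = max_flow(edges, "sigma", "tau")
print(value)  # -> 8: equals the total source capacity, so all source edges are saturated
```

Any feasible split $\\phi_1(i,j)+\\phi_2(i,j)=\\phi(i,j)$ recovered from such a flow yields the per-box ratios used to define $w_1$ and $w_2$.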
Then, define the functions $w_1$, $w_2$ throughout $D_{1,2}$ by\n$$\nw_1(\\vecc x)=\\frac{\\phi_1(i,j)}{\\phi(i,j)}h(\\vecc x)\\qquad\\text{and}\\qquad w_2(\\vecc x)=\\frac{\\phi_2(i,j)}{\\phi(i,j)}h(\\vecc x),\n$$\nwhere $B_{i,j}$ is the discretization box to which point $\\vecc x$ of $D_{1,2}$ belongs. In that way, first notice that we achieve $w_1+w_2=h$. Secondly, functions $w_1$ and $w_2$ are almost everywhere continuous, since the values of the flows are constant within the boxes, and our discretization is finite. The only remaining property to prove is~\\eqref{eq:relax_dual_boundary}. \n\nFix some height $x_2=\\tilde x_2$ such that this horizontal line intersects $D_{1,2}$. We'll prove that \n$$\\int_{0}^{1}w_1(x_1,\\tilde x_2)\\,dx_1-f_1(1)f_2(\\tilde x_2)\\leq \\varepsilon'.$$ \nValue $\\tilde x_2$ falls within some interval of the discretization, say $\\tilde x_2\\in[(\\tilde j-1)\\delta,\\tilde j\\delta]=B_{\\tilde j}^2$. The average value of function $f_1(1)f_2(x_2)$ (with respect to $x_2$) within this interval is $$\\frac{1}{\\delta}f_1(1)\\int_{B^2_{\\tilde j}}f_2(x_2)\\,dx_2=c(v_2(\\tilde j),\\tau)\/\\delta$$ and the average value of $\\int_0^1w_1(\\vecc x)\\,dx_1$ is \n$$\n\\frac{1}{\\delta}\\int_{B^2_{\\tilde j}}\\int_0^1w_1(\\vecc x)\\,dx_1\\,dx_2 = \\frac{1}{\\delta}\\sum_i\\int_{B_{i,\\tilde j}}w_1(\\vecc x)\\,d\\vecc x = \\frac{1}{\\delta}\\sum_i\\frac{\\phi_1(i,\\tilde j)}{\\phi(i,\\tilde j)}\\int_{B_{i,\\tilde j}}h(\\vecc x)\\,d\\vecc x \n= \\sum_i \\phi_1(i,\\tilde j)\/\\delta.\n$$\nBut since the sum of the outgoing flows over any horizontal line of internal nodes of the graph (here $j=\\tilde j$) must equal the outgoing flow of the corresponding external node (here $v_2(\\tilde j)$), the above quantities are equal. 
Thus, by selecting the discretization parameter $\\delta$ small enough, we can indeed make the values $\\int_{0}^{1}w_1(x_1,\\tilde x_2)\\,dx_1$ and $f_1(1)f_2(\\tilde x_2)$ $\\varepsilon'$-close to each other\n\\footnote{This should feel intuitively clear, and it relies on the uniform continuity of functions $f_2$ and $h$, but we also give a formal proof in Appendix~\\ref{append:technical_flow_rest}.}.\n\\end{proof}\n\n\n\\section{The Case of Identical Items}\nIn this section we focus on the case of identically distributed values, i.e.~$f_1(t)=f_2(t)\\equiv f(t)$ for all $t\\in I$, and we provide clear and simple conditions under which the critical property~\\eqref{eq:1slice_gen_functions} of Theorem~\\ref{th:characterization_main} holds. \n\nFirst notice that in this case the regularity Assumption~\\ref{assume:regularity} gives $3+\\frac{x_1f'(x_1)}{f(x_1)}+\\frac{x_2f'(x_2)}{f(x_2)}\\geq 0$ a.e. in $I^2$ (since $f$ is positive) and thus $\\frac{tf'(t)}{f(t)}\\geq -\\frac{3}{2}$ for a.e. $t\\in I$. An equivalent way of writing this is that $t^{3\/2}f(t)$ is increasing (indeed, $\\frac{d}{dt}\\left[t^{3\/2}f(t)\\right]=t^{1\/2}f(t)\\left(\\frac{3}{2}+\\frac{tf'(t)}{f(t)}\\right)$ a.e.), which interestingly is the complementary case of that studied by~\\citet{Hart:2012uq} for two i.i.d. items: they show that when $t^{3\/2}f(t)$ is decreasing, then deterministically selling the full bundle is optimal.\n\n\\begin{theorem}\n\\label{th:characterization_iid}\nAssume that $G(t)=tf(t)\/(1-F(t))$ and $H(t)=tf'(t)\/f(t)$ give rise to well-defined, differentiable functions over $I$, $G$ being strictly increasing and convex, $H$ decreasing and concave, with $G+H$ increasing and $G(1)\\geq 2+H(0)$. Then the requirements of Theorem~\\ref{th:characterization_main} are satisfied. 
In particular \n$$s(t)=G^{-1}(2+H(t))$$\nand, if \n\\begin{equation}\n\\label{eq:two_iid_full_bundle_price}\n\\int_0^1\\int_0^1h(\\vecc x)\\,d\\vecc x-\\int_0^p\\int_0^{p-x_2}h(\\vecc x)\\,d\\vecc x-2f(1)\n\\end{equation}\nis nonpositive for $p=s(0)$ then the optimal selling mechanism is the one offering deterministically the full bundle for a price of $p$ being the root of \\eqref{eq:two_iid_full_bundle_price} in $[0,s(0)]$, otherwise the optimal mechanism is the one defined by the utility function\n$$\nu(\\vecc x)=\\max\\sset{0,x_1-s(x_2),x_2-s(x_1),x_1+x_2-p}\n$$\nwith $p=x^*+s(x^*)$, where $x^*\\in [0,s(0)]$ is the constant we get by solving \n\\begin{equation}\n\\label{eq:price_bundle_eq_iid}\n\\int_{x^*}^{s(x^*)}\\int_{s(x^*)+x^*-x_2}^1h(\\vecc x)\\,d\\vecc x+\\int_{s(x^*)}^1\\int_{x^*}^1h(\\vecc x)\\,d\\vecc x= 2f(1)(1-F(x^*)).\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nFunction $G$ is strictly monotone, thus invertible and has a range of $[G(0),G(1)]=[0,G(1)]\\supseteq [0,2+H(0)]$. By Assumption~\\ref{assume:regularity} and the previous discussion, it must be $tf'(t)\/f(t)\\geq -3\/2$, so $2+H(t)\\geq 1\/2>0$ for all $t\\in I$. Thus, $s(t)=G^{-1}(2+H(t))$ is well defined and furthermore it is decreasing, since $G$ is increasing and $H$ decreasing. Also, by the way $s$ is defined we get that for all $t$: $G(s(t))=2+H(t)$, which is exactly condition~\\eqref{eq:1slice_gen_functions} of Theorem~\\ref{th:characterization_main}. \n\nIt remains to be shown that $s$ is concave and that $s'(t)>-1$. From the definition of $s$, $s'(t)=H'(t)\/G'(s(t))$. Function $H$ is decreasing and concave, so $H'(t)$ is negative and decreasing, and function $G$ is increasing and convex and $s$ decreasing, so $G'(s(t))$ is positive and decreasing. Combining these we get that the ratio $H'(t)\/G'(s(t))$ is decreasing, proving that $s$ is concave. Finally, notice that since we are in a two item i.i.d. 
setting, the only part of curve $x_2=s(x_1)$ that matters and may appear in the utility of the resulting mechanism~\\eqref{eq:optimal_auction_gen} is the one where $x_1\\leq x_2$ (curves $x_2=s(x_1)$ and $x_1=s(x_2)$ will intersect on the line $x_1=x_2$), so we only have to show that $s'(t)>-1$ for $t\\leq s(t)$. Indeed, in that case $G'(t)\\leq G'(s(t))$, so $s'(t)=H'(t)\/G'(s(t))\\geq H'(t)\/G'(t)$ and thus it is enough to show that $H'(t)+G'(t)\\geq 0$, which we know holds since $H+G$ is assumed to be increasing.\n\n\\end{proof}\n\n\n\n\\begin{corollary}[Monomial Distributions]\n\\label{th:optimal_two_power}\nThe optimal selling mechanism for two items with i.i.d.\\ values from the family of distributions with densities $f(t)=(c+1)t^c$, $c\\geq 0$, is deterministic. In particular, it offers each item for a price of $s=\\sqrt[c+1]{\\frac{c+2}{2c+3}}$ and the full bundle for a price of $p=s+x^*$, where $x^*$ is the solution to~\\eqref{eq:price_bundle_eq_iid}.\n\\end{corollary}\n\\begin{proof}\nFor two monomial i.i.d.\\ items with $f_1(t)=f_2(t)=(c+1)t^c$ we have $h(\\vecc x)=(c+1)^2 (2 c+3) x_1^c x_2^c\\geq 0$, thus $h(\\vecc x)-f_2(1)f_1(x_1)=(c+1)^2 x_1^c \\left((2 c+3) x_2^c-1\\right)$ which is nonnegative for all $x_2\\geq \\sqrt[c]{1\/(2c+3)}\\equiv \\omega$. So, in order to make sure that Assumption~\\ref{assume:upwards_def} is satisfied, it is enough to show that $x^*\\geq\\omega$ because then $D_{1,2}\\subseteq [\\omega,1]^2$. We'll soon show that this is indeed satisfied for all $c\\geq 0$.\n\nApplying Theorem~\\ref{th:characterization_iid} we compute: $G(t)=(c+1)t^{c+1}\/(1-t^{c+1})$ which is strictly increasing and convex in $I$ and $H(t)=c$ which is constant and thus decreasing and concave. Also, it is trivial to deduce that $G+H$ is increasing and $\\lim_{t\\to 1^{-}}G(t)=\\infty>2+c=2+H(0)$. Then, it is valid to compute $G^{-1}(t)=\\left(\\frac{t}{c+1+t}\\right)^{\\frac{1}{c+1}}$ and thus $s(t)=G^{-1}(2+c)=\\sqrt[c+1]{\\frac{c+2}{2c+3}}$ which is constant. 
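For completeness, the inversion of $G$ used in this last step can be spelled out (routine algebra, solving $G(x)=y$ for $x$):

```latex
% Solving G(x) = y, with G(t) = (c+1)t^{c+1}\/(1-t^{c+1}):
(c+1)x^{c+1} = y\\left(1-x^{c+1}\\right)
\\;\\Longrightarrow\\;
x^{c+1}\\left(c+1+y\\right) = y
\\;\\Longrightarrow\\;
G^{-1}(y) = \\left(\\frac{y}{c+1+y}\\right)^{\\frac{1}{c+1}}.
```

Evaluating at $y=2+H(t)=2+c$ recovers the constant $s=\\sqrt[c+1]{(c+2)\/(2c+3)}$.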
\n\nRegarding the computation of the full-bundle price $p$, condition~\\eqref{eq:price_bundle_eq_iid} gives rise to the quantity\n$$\n\\int_{x^*}^{s}\\int_{s+x^*-x_2}^1x_1^cx_2^c\\,d\\vecc x+\\int_{s}^1\\int_{x^*}^1x_1^cx_2^c\\,d\\vecc x- \\frac{2}{(c+1)(2c+3)}(1-{x^*}^{c+1}),\n$$\nwhich, by plugging in $x^*=\\omega$ and using the values of $s$ and $\\omega$ (as functions of $c$), can be seen to be positive for all $c\\geq 0$. So, by the discussion in the beginning of Sect.~\\ref{sec:optimality} it can be deduced that the solution to~\\eqref{eq:price_bundle_eq_iid} will be such that $x^*>\\omega$. \n\\end{proof}\nNotice that for $c=0$ the setting of Corollary~\\ref{th:optimal_two_power} reduces to a setting of two uniformly distributed goods, and gives the well-known results of $s=2\/3$ and $p=(4-\\sqrt{2})\/3$ (see e.g.~\\citep{Manelli:2006vn}). For the linear distribution $f(t)=2t$, where $c=1$, we get $s=\\sqrt{3\/5}$ and $p\\approx1.091$.\n\n\\begin{corollary}[Exponential Distributions]\n\\label{th:optimal_two_expo}\nThe optimal selling mechanism for two items with exponentially i.i.d.\\ values over $[0,1]$, i.e.\\ having densities $f(t)=\\lambda e^{-\\lambda t}\/(1-e^{-\\lambda})$, with $0<\\lambda\\leq 1$, is the one having $s(t)=\\frac{1}{\\lambda}\\left[2-\\lambda t-W\\left(e^{2-\\lambda-\\lambda t}(2-\\lambda t)\\right)\\right]$ and a price of\n$p=x^*+s(x^*)$ for the full bundle, where $x^*$ is the solution to~\\eqref{eq:price_bundle_eq_iid}. Here $W$ is Lambert's product logarithm function\\footnote{Function $W$ can be defined as the solution to $W(t)e^{W(t)}=t$.}.\n\\end{corollary}\n\\begin{proof}\nFor two i.i.d. 
exponentially distributed items with $f_1(t)=f_2(t)=\\lambda e^{-\\lambda t}\/(1-e^{-\\lambda})$ we have \n$$\n\\hspace{-0.5cm}\nh(\\vecc x)-f_2(1)f_1(x_1)=\\frac{\\lambda^2}{\\left(e^\\lambda-1\\right)^2} e^{2\\lambda-\\lambda (x_1+x_2)} (3-\\lambda (x_1+x_2)-e^{\\lambda (x_2-1)})\\geq \\frac{\\lambda^2}{\\left(e^\\lambda-1\\right)^2} e^{2\\lambda-\\lambda (x_1+x_2)} (2-\\lambda (x_1+x_2))\\geq 0\n$$ \nfor all $x_1,x_2\\in I$, since $\\lambda\\leq 1$.\n\nApplying Theorem~\\ref{th:characterization_iid} we compute: $G(t)=\\lambda t\/(1-e^{-\\lambda(1-t)})$ which is strictly increasing and convex in $I$ and $H(t)=-\\lambda t$ which is decreasing and concave. Also, $G(t)+H(t)=\\lambda te^{-\\lambda(1-t)}\/(1-e^{-\\lambda(1-t)})$ is increasing and $\\lim_{t\\to 1^{-}}G(t)=\\infty>2=2+H(0)$. Then, it is valid to compute $G^{-1}(t)=t\/\\lambda-W\\left(t e^{t-\\lambda}\\right)\/\\lambda$ and thus $s(t)=\\frac{1}{\\lambda}\\left[2-\\lambda t-W\\left(e^{2-\\lambda-\\lambda t}(2-\\lambda t)\\right)\\right]$.\n\\end{proof}\n\nFor example, for $\\lambda=1$ we get $s(t)=2-t-W\\left(e^{1-t} (2-t)\\right)$ and $p\\approx 0.714$. \nInterestingly, to our knowledge this is the first example of an i.i.d.\\ setting with values coming from a regular, continuous distribution over an interval $[0,b]$ where an optimal selling mechanism is \\emph{not} deterministic. Also notice how this case of exponential i.i.d.\\ items on a bounded interval is different from the one on $[0,\\infty)$: by \\citep{Daskalakis:2013vn,g2014} we know that in the unbounded case the optimal selling mechanism for two exponential i.i.d.\\ items is simply deterministic full-bundling, but in our case of the bounded $I$ this is not the case any more.\n\n\\section{Non-Identical Items}\n\\label{sec:non-iid}\n\nAn interesting aspect of the technique of Theorem~\\ref{th:characterization_iid} is that it can readily be used also for non-identically distributed values. 
One just has to define $G_j(t)\\equiv tf_j(t)\/(1-F_j(t))$ and $H_j(t)=tf_j'(t)\/f_j(t)$ for both items $j=1,2$ and check again whether $G_1,G_2$ are strictly increasing and convex and $H_1,H_2$ nonnegative, decreasing and concave. Then, we can get $s_j(t)=G_j^{-1}(2+H_{-j}(t))$ and check that $s_j'(t)> -1$; the price $p$ of the full bundle is then given by~\\eqref{eq:2slice_gen}. Again, a quick check of whether full bundling is optimal is to see if for $p=\\min\\sset{s_1(0),s_2(0)}$ the expression $\\int_0^1\\int_0^1h(\\vecc x)\\,d\\vecc x-\\int_0^p\\int_0^{p-x_2}h(\\vecc x)\\,d\\vecc x-f_1(1)-f_2(1)$ is nonpositive.\n\n\\begin{example}\n\\label{example:uniform-expo}\nConsider two independent items, one having uniform valuation $f_1(t)=1$ and one exponential $f_2(t)=e^{-t}\/(1-e^{-1})$. Then we get that $s_1(t)=(2-t)\/(3-t)$, $s_2(t)=2-W(2e)\\approx 0.625$ and $p \\approx0.787$. The optimal selling mechanism offers either only item $2$ for a price of $s_2\\approx 0.625$, or item $1$ deterministically and item $2$ with probability $-s_1'(x_2)$ for a price of $s_1(x_2)-x_2s_1'(x_2)$, or the full bundle for a price of $p\\approx 0.787$. The allocation space of this mechanism is depicted in Fig.~\\ref{fig:Exp_Uniform}.\n\\end{example}\n\n\\section{Approximate Solutions}\n\\label{sec:approximate_convex_fail}\nIn the previous sections we developed tools that, under certain assumptions, can give a complete closed-form description of the optimal selling mechanism. However, remember that the initial primal-dual formulation upon which our analysis was based assumes a relaxed optimization problem. Namely, we dropped the convexity assumption on the utility function $u$. In the results of the previous sections this comes for free: the optimal solution to the relaxed program turns out to be convex anyway, as a result of the requirements of Theorem~\\ref{th:characterization_main}. But what happens if that was not the case? 
The following tool shows that even in that case our results are still applicable and very useful, both for finding good upper bounds on the optimal revenue (Theorem~\\ref{th:two_iid_approx}) and for designing almost-optimal mechanisms that have provably very good performance guarantees (Sect.~\\ref{sec:convexification}).\n\\begin{theorem}\n\\label{th:two_iid_approx}\nAssume that all conditions of Theorem~\\ref{th:characterization_main} are satisfied, except for the concavity of functions $s_1,s_2$. Then, the function $u$ given by that theorem might not be convex any more and thus not a valid utility function, but it generates an \\emph{upper bound} to the optimal revenue, i.e.\\ $\\ensuremath{\\text{\\rm\\sc Rev}}(f_1,f_2)\\leq\\mathcal R_{f_1,f_2}(u)$. In particular, this is the case if all the requirements of Theorem~\\ref{th:characterization_iid} hold except the concavity of $H$.\n\\end{theorem}\n\\begin{proof}\nThe proof is a straightforward consequence of the duality framework (see Sect.~\\ref{sec:duality}): By dropping only the concavity requirement of functions $s_1$ and $s_2$ but satisfying all the remaining conditions of Theorem~\\ref{th:characterization_main}, we still construct an optimal solution to the pair of primal-dual programs, meaning that function $u$ produced in \\eqref{eq:optimal_auction_gen} maximizes $\\mathcal R_{f_1,f_2}(u)$ over the space of all functions $u: I^2\\longrightarrow \\mathbb R_+$ with partial derivatives in $[0,1]$ (see~\\eqref{eq:allocs_probs_01}); the only difference is that $u$ might not be convex since $s_1,s_2$ might not be concave any more. The actual optimal revenue objective $\\ensuremath{\\text{\\rm\\sc Rev}}(f_1,f_2)$ has the extra constraint of $u$ being convex, thus, given that it is a maximization problem, it has to be that $\\ensuremath{\\text{\\rm\\sc Rev}}(f_1,f_2)\\leq \\mathcal R_{f_1,f_2}(u)$. 
Finally, it is easy to verify in the proof of Theorem~\\ref{th:characterization_iid} that dropping just the concavity requirement for $H$ can only affect the concavity of functions $s_1,s_2$ and hence the convexity of $u$.\n\\end{proof}\n\n\\begin{example}[Power-Law Distributions]\n\\label{example:power-law}\n An important class of distributions that falls under the description of Theorem~\\ref{th:two_iid_approx} are the power-law distributions with parameters $0<\\alpha\\leq 2$. More specifically, these are the distributions having densities $f(t)=c\/(t+1)^\\alpha$, with the normalization factor $c$ selected so that $\\int_0^1f(t)\\,dt=1$, i.e.~$c=(\\alpha-1)\/(1-2^{1-\\alpha})$. It is not difficult to verify that these distributions satisfy Assumption~\\ref{assume:upwards_def}. For example, for $\\alpha=2$ one gets $f(x)=2\/(x+1)^2$, the \\emph{equal revenue} distribution shifted in the unit interval. For this we can compute via Theorem~\\ref{th:two_iid_approx} that $s(t)=\\frac{1}{2} \\sqrt{5+2 t+t^2}-\\frac{1}{2} (1+t)$ and $p\\approx 0.665$, which gives an upper bound of $\\mathcal R_{f,f}(u)\\approx 0.383$ to the optimal revenue $\\ensuremath{\\text{\\rm\\sc Rev}}(f,f)$.\n\\end{example}\n\n\\subsection{Convexification}\n\\label{sec:convexification}\nThe approximation results described in Theorem~\\ref{th:two_iid_approx} can be used not only for giving upper bounds on the optimal revenue, but also as a \\emph{design} technique for good selling mechanisms. Since the only deviation from a feasible utility function is the fact that function $s$ is not concave (and thus $u$ is not convex), why don't we try to ``convexify'' $u$, by replacing $s$ by a concave function $\\tilde s$? If $\\tilde s$ is ``close enough'' to the original $s$, by the previous discussion this would also result in good approximation ratios for the new, feasible selling mechanism. \n\nLet's demonstrate this by an example, using the equal revenue distribution $f(t)=2\/(t+1)^2$ of the previous example. 
We need to replace $s$ with a concave $\\tilde s$ in the interval $[0,x^*]$. So let's choose $\\tilde s$ to be the concave hull of $s$, i.e.~the minimum concave function that dominates $s$. Since $s$ is convex, this is simply the line that connects the two ends of the graph of $s$ in $[0,x^*]$, that is, the line\n$$\n\\tilde s(t)=\\frac{s(0)-s(x^*)}{x^*}(x^*-t)+s(x^*).\n$$\nA calculation shows that this new \\emph{valid} mechanism has an expected revenue which is within a factor of just $1+3\\times 10^{-9}$ of the upper bound given by $s$ using Theorem~\\ref{th:two_iid_approx}, rendering it essentially optimal.\n\n\\paragraph{Acknowledgements:} We thank Anna Karlin, Amos Fiat, Costis Daskalakis and Ian Kash for insightful discussions. We also thank the anonymous reviewers for their useful comments on the conference version of this paper.\n\n\\bibliographystyle{abbrvnat} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Appendix: Experimental Details}\n\\label{sec:appendix_exp_details}\n\n\\subsection{Models}\nWe use the following families of architectures.\nThe PyTorch~\\cite{paszke2017automatic} specification of our ResNets and CNNs\nare available at \\url{https:\/\/gitlab.com\/harvard-machine-learning\/double-descent\/tree\/master}.\n\n\\paragraph{ResNets.}\nWe define a family of ResNet18s of increasing size as follows.\nWe follow the Preactivation ResNet18 architecture of \\cite{he2016identity},\nusing 4 ResNet blocks, each consisting of two BatchNorm-ReLU-Convolution layers. 
The layer widths for the 4 blocks are $[k, 2k, 4k, 8k]$ for varying $k \\in \\mathbb{N}$ and the strides are $[1, 2, 2, 2]$.\nThe standard ResNet18 corresponds to $k=64$ convolutional channels in the first layer.\nThe scaling of model size with $k$ is shown in Figure~\\ref{fig:resnet_params}.\nOur implementation is adapted from \\url{https:\/\/github.com\/kuangliu\/pytorch-cifar}.\n\n\\paragraph{Standard CNNs.}\nWe consider a simple family of 5-layer CNNs,\nwith four Conv-BatchNorm-ReLU-MaxPool layers and a fully-connected output layer.\nWe scale the four convolutional layer widths as $[k, 2k, 4k, 8k]$. The MaxPool kernel sizes are $[1, 2, 2, 8]$. All convolution layers use kernel size 3, stride 1, and padding 1. \nThis architecture is based on the ``backbone'' architecture from \n\\cite{mcnn}.\nFor $k=64$, this CNN has 1558026 parameters and can reach $>90\\%$ test accuracy on CIFAR-10 (\\cite{cifar}) with data-augmentation.\nThe scaling of model size with $k$ is shown in Figure~\\ref{fig:mcnn_params}.\n\n\n\\paragraph{Transformers.}\nWe consider the encoder-decoder Transformer model from \\cite{Vaswani}\nwith 6 layers and 8 attention heads per layer, as implemented by fairseq \\cite{ott2019fairseq}.\nWe scale the size of the network by modifying the embedding dimension ($d_{\\text{model}}$),\nand scale the width of the fully-connected layers proportionally ($d_{\\text{ff}} = 4 d_{\\text{model}}$).\nWe train with 10\\% label smoothing and no drop-out, for 80K gradient steps.\n\n\\begin{figure}[!h]\n \\centering\n \\begin{subfigure}{.3\\textwidth}\n \\includegraphics[width=\\textwidth]{mcnn_parameters}\n \\caption{5-layer CNNs}\n \\label{fig:mcnn_params}\n \\end{subfigure}\\hfill\n \\begin{subfigure}{.3\\textwidth}\n \\includegraphics[width=\\textwidth]{resnet_parameters}\n \\caption{ResNet18s}\n \\label{fig:resnet_params}\n \\end{subfigure}\\hfill\n \\begin{subfigure}{.3\\textwidth}\n \\includegraphics[width=\\textwidth]{nlp_parameters}\n \\caption{Transformers}\n 
\\label{fig:resnet_params2}\n \\end{subfigure}\n \\label{fig:nparams}\n \\caption{Scaling of model size with our\n parameterization of width \\& embedding dimension.}\n\\end{figure}\n\n\\subsection{Image Classification: Experimental Setup}\nWe describe the details of training for CNNs and ResNets below.\n\n\\textbf{Loss function:} Unless stated otherwise, we use the cross-entropy loss for all the experiments.\n\n\\textbf{Data-augmentation:} In experiments where data-augmentation was used, we apply \\texttt{RandomCrop(32, padding=4)} and \\texttt{RandomHorizontalFlip}. In experiments with added label noise, all augmentations of a given training sample are given the same label.\n\n\\textbf{Regularization:} No explicit regularization like weight decay or dropout was applied unless explicitly stated.\n\n\\textbf{Initialization:} We use the default initialization provided by PyTorch for all the layers.\n\n\\textbf{Optimization:}\n\\begin{itemize}\n \\item \\textbf{Adam:} Unless specified otherwise, the learning rate was held constant at $1\\mathrm{e}{-4}$ and all other parameters were set to their default PyTorch values. \n \\item \\textbf{SGD:} Unless specified otherwise, the inverse-square root learning rate schedule (defined below) was used with initial learning rate\n $\\gamma_{0} = 0.1$ and updates every $L = 512$ gradient steps. 
No momentum was used.\n\\end{itemize}\nWe found our results are robust to various other \nnatural choices of optimizers and learning rate schedules.\nWe used the above settings because\n(1) they optimize well,\nand (2) they do not require experiment-specific hyperparameter tuning, and allow us to use the same optimization across many experiments.\n\n\\textbf{Batch size}: All experiments use a batch size of 128.\n\n\\textbf{Learning rate schedule descriptions:}\n\\begin{itemize}\n \\item \\textbf{Inverse-square root $(\\gamma_0, L)$}:\n At gradient step $t$, the learning rate is set to\n $\\gamma(t) := \\frac{\\gamma_0}{\\sqrt{1+\\lfloor t \/ L\\rfloor}}$.\n We set the learning rate with respect to the number of gradient steps, and not epochs, in order to allow comparison between experiments with varying train-set sizes.\n \\item \\textbf{Dynamic drop ($\\gamma_0$, drop, patience)}: Starts with an initial learning rate of $\\gamma_0$ and drops by a factor of \\emph{drop} if the training loss has remained constant or become worse for \\emph{patience} gradient steps.\n\\end{itemize}\n\n\\subsection{Neural Machine Translation: Experimental Setup}\n\nHere we describe the experimental setup for the neural machine translation experiments.\n\n{\\bf Training procedure.}\n\nIn this setting, the distribution $\\mathcal D$ consists of triples \\[\n (x, y, i)\\;:\\;x \\in V_{src}^*,\\;y \\in V_{tgt}^*,\\;i \\in \\{0, \\dots, |y|\\}\n\\] where $V_{src}$ and $V_{tgt}$ are the source and target vocabularies, the string $x$ is a sentence in the source language, $y$ is its translation in the target language, and $i$ is the index of the token to be predicted by the model. 
We assume that $i|x, y$ is distributed uniformly on $\\{0, \\dots, |y|\\}$.\n\nA standard probabilistic model defines an autoregressive factorization of the likelihood: \\[\n p_M(y|x) = \\prod_{i = 1}^{|y|} p_M(y_i|y_{<i}, x).\n\\]\n\n\\begin{definition}[Effective Model Complexity]\nThe \\emph{Effective Model Complexity} (EMC) of a training procedure $\\cT$, w.r.t.\\ distribution $\\cD$ and parameter $\\epsilon > 0$,\nis defined as:\n\\begin{align*}\n \\mathrm{EMC}_{\\cD,{\\epsilon}}(\\cT)\n := \\max \\left\\{n ~|~ \\mathbb{E}_{S \\sim \\cD^n}[ \\mathrm{Error}_S( \\cT( S ) ) ] \\leq {\\epsilon} \\right\\}\n \\end{align*}\n where $\\mathrm{Error}_S(M)$ is the mean error of model $M$ on train samples $S$.\n\\end{definition}\n\nOur main hypothesis can be informally stated as follows:\n\n\\begin{hypothesis}[Generalized Double Descent hypothesis, informal] \\label{hyp:informaldd}\nFor any natural data distribution $\\cD$, neural-network-based training procedure $\\cT$, and small $\\epsilon>0$,\nif we consider the task of predicting labels based on $n$ samples from $\\cD$ then:\n\\begin{description}\n \\item[Under-parameterized regime.] If~$\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT)$ is sufficiently smaller than $n$, any perturbation of $\\cT$ that increases its effective complexity will decrease the test error.\n \\item[Over-parameterized regime.] If $\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT)$ is sufficiently larger than $n$,\n any perturbation of $\\cT$ that increases its effective complexity will decrease the test error.\n \n \\item[Critically parameterized regime.] If $\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT) \\approx n$, then\n a perturbation of $\\cT$ that increases its effective complexity\n might decrease {\\bf or increase} the test error.\n\\end{description}\n\\end{hypothesis}\n\n\\iffalse\nHypothesis~\\ref{hyp:informaldd} is stated informally\nas we are yet to fully understand the behavior at the critically parameterized regime.\nFor example, it is an open question what determines the width of the \\emph{critical interval}---the interval around the point in which $\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT) = n$, where increasing complexity might hurt performance. 
Specifically, we lack a formal definition for ``sufficiently smaller'' and ``sufficiently larger''. Another parameter that lacks principled understanding is the choice of $\\epsilon$. In our experiments, we use $\\epsilon=0.1$ heuristically.\n\nWhile in both the under-parameterized and over-parameterized regimes increasing complexity helps performance, the dynamics of the learning process seem very different in these two regimes\nFor example, in the over-parameterized regime, the gain comes not from classifying more training samples correctly but rather from increasing the confidence for the training samples that have already been correctly classified. \\gal{iffalsed previous stuff}\n\\fi\n\nHypothesis~\\ref{hyp:informaldd} is informal in several ways.\nWe do not have a principled way to choose the parameter $\\epsilon$ (and currently heuristically use $\\epsilon=0.1$). \nWe also are yet to have a formal specification for ``sufficiently smaller'' and ``sufficiently larger''.\nOur experiments suggest that there is a \\emph{critical interval}\naround the \\emph{interpolation threshold} when $\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT) = n$: below and above\nthis interval increasing complexity helps performance,\nwhile within this interval it may hurt performance.\nThe width of the critical interval depends on both the distribution and\nthe training procedure in ways we do not yet completely understand.\n\n\nWe believe Hypothesis~\\ref{hyp:informaldd} sheds light on the interaction between optimization algorithms, model size, and test performance and helps reconcile some of the competing intuitions about them.\nThe main result of this paper is an experimental validation of Hypothesis~\\ref{hyp:informaldd} under a variety of settings, where we considered several natural choices of datasets, architectures, and optimization algorithms,\nand we changed the ``interpolation threshold''\nby varying the number of model parameters,\nthe length of training, the amount of label noise 
in the distribution, and the number of train samples.\n\n\n{\\bf Model-wise Double Descent.}\nIn Section~\\ref{sec:model-dd},\nwe study the test error of models of increasing size,\nfor a fixed large number of optimization steps.\nWe show that ``model-wise double-descent'' occurs\nfor various modern datasets\n(CIFAR-10, CIFAR-100, \\iwslt~de-en, with varying amounts of label noise),\nmodel architectures (CNNs, ResNets, Transformers),\noptimizers (SGD, Adam),\nnumber of train samples,\nand training procedures (data-augmentation, and regularization).\nMoreover, the peak in test error systematically occurs at the interpolation threshold.\nIn particular, we demonstrate realistic settings in which\n\\emph{bigger models are worse}.\n\n{\\bf Epoch-wise Double Descent.}\nIn Section~\\ref{sec:epoch-dd},\nwe study the test error of a fixed, large architecture over the course of training.\nWe demonstrate, in similar settings as above,\na corresponding peak in test performance when\nmodels are trained just long enough to reach $~\\approx 0$ train error.\nThe test error of a large model\nfirst decreases (at the beginning of training), then increases (around the critical regime),\nthen decreases once more (at the end of training)---that is,\n\\emph{training longer can correct overfitting.}\n\n\n{\\bf Sample-wise Non-monotonicity.}\nIn Section~\\ref{sec:samples},\nwe study the test error of a fixed model and training procedure, for varying number of train samples.\nConsistent with our generalized double-descent hypothesis,\nwe observe distinct test behavior in the ``critical regime'',\nwhen the number of samples is near the maximum that the model can fit.\nThis often manifests as a long plateau region,\nin which taking significantly more data might not help when training to completion\n(as is the case for CNNs on CIFAR-10).\nMoreover, we show settings (Transformers on \\iwslt~en-de),\nwhere this manifests as a peak---and for a fixed architecture and training procedure, 
\\emph{more data actually hurts.}\n\n{\\bf Remarks on Label Noise.} \nWe observe all forms of double descent most strongly in settings\nwith label noise in the train set\n(as is often the case when collecting train data in the real-world).\nHowever, we also show several realistic settings with a test-error peak even without label noise:\nResNets (Figure~\\ref{fig:resnet-cifar-left}) and CNNs (Figure~\\ref{fig:app-cifar100-clean-sgd}) on CIFAR-100;\nTransformers on \\iwslt~ (Figure~\\ref{fig:more-mdd2}).\nMoreover, all our experiments demonstrate distinctly different test behavior\nin the critical regime--- often manifesting as a ``plateau'' in the test error in the noiseless case which \ndevelops into a peak with added label noise.\nSee Section~\\ref{sec:discuss} for further discussion.\n\n\\section{Related work}\nModel-wise double descent was first proposed as a general phenomenon\nby \\cite{belkin2018reconciling}.\nSimilar behavior had been observed in\n\\cite{opper1995statistical, opper2001learning},\n\\cite{advani2017high},\n\\cite{spigler2018jamming}, and \\cite{geiger2019jamming}.\nSubsequently, there has been a large body of work studying the double descent phenomenon.\nA growing list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes \\cite{belkin2019two, hastie2019surprises, bartlett2019benign, muthukumar2019harmless, bibas2019new, Mitra2019UnderstandingOP, mei2019generalization}. Moreover, \\cite{geiger2019scaling} provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. 
Our work differs from the above papers in two crucial aspects: First, we extend the idea of double-descent beyond the number of parameters to incorporate the training procedure under a unified notion of ``Effective Model Complexity'', leading to novel insights like epoch-wise double descent and sample non-monotonicity.\nThe notion that increasing train time corresponds to\nincreasing complexity was also presented in~\\cite{nakkiran2019sgd}.\nSecond, we provide an extensive\nand rigorous demonstration of double-descent for modern practices spanning a variety of architectures, datasets, and optimization procedures.\nAn extended discussion of the related work is provided in Appendix \\ref{sec:related_appendix}.\n\n\n\n\\subsubsection*{Acknowledgments}\nWe thank Mikhail Belkin for extremely useful discussions in the early stages of this work.\nWe thank Christopher Olah for suggesting the Model Size $\\x$ Epoch visualization, which led to the investigation of epoch-wise double descent,\nas well as for useful discussion and feedback.\nWe also thank Alec Radford, Jacob Steinhardt, and Vaishaal Shankar for helpful discussion and suggestions.\nP.N. thanks OpenAI, the Simons Institute, and the Harvard Theory Group for a research environment that enabled this kind of work.\n\nWe thank Dimitris Kalimeris, Benjamin L. Edelman, Sharon Qian, and Aditya Ramesh for comments on an early draft of this work.\n\nThis work was supported in part by NSF grant CAREER CCF 1452961, BSF grant 2014389, NSF USICCS proposal 1540428, a Google Research award, a Facebook research award, a Simons Investigator Award,\na Simons Investigator Fellowship,\nand NSF Awards\nCCF 1715187,\nCCF 1565264, CCF 1301976, IIS 1409097,\nand CNS 1618026. Y.B. 
would like to thank the MIT-IBM Watson AI Lab for contributing computational resources for experiments.\n\n\n\n\\section{Model-wise Double Descent}\n\\label{sec:model-dd}\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar100_resnet_pVar}\n \\caption[width=0.8\\linewidth]{{\\bf CIFAR-100.}\n There is a peak in test error even with no label noise.\n } \\label{fig:resnet-cifar-left}\n \\end{subfigure}\\hfill\n \\begin{subfigure}[t]{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_resnet_pVar}\n \\caption[width=0.8\\linewidth]{{\\bf CIFAR-10.}\n There is a ``plateau'' in test error around the interpolation point with no label noise,\n which develops into a peak for added label noise.\n }\n \\end{subfigure}\n \\caption{{\\bf Model-wise double descent for\n ResNet18s.} Trained on CIFAR-100 and CIFAR-10, with varying label noise. Optimized using Adam with LR $0.0001$ for 4K epochs, and data-augmentation.\n }\n \\label{fig:resnet-cifar}\n\\end{figure}\n\nIn this section, we study the test error of models of increasing size,\nwhen training to completion (for a fixed large number of optimization steps). 
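As a concrete reference for what ``models of increasing size'' means here, the parameter count of the width-$k$ CNN family described in the appendix (conv widths $[k, 2k, 4k, 8k]$, $3\\times 3$ kernels, BatchNorm, one fully-connected output layer) can be checked with a short pure-Python sketch. The bookkeeping below is our own reconstruction from that description (assuming 3 input channels, 10 classes, PyTorch-default convolution biases, and affine BatchNorm); it reproduces the 1558026 parameters quoted for $k=64$:

```python
def cnn_param_count(k, in_ch=3, n_classes=10):
    """Parameter count for the 5-layer CNN family: four 3x3
    Conv-BatchNorm layers of widths [k, 2k, 4k, 8k], followed by a
    fully-connected output layer (spatial dims pooled down to 1x1,
    so the FC input size is 8k)."""
    widths = [k, 2 * k, 4 * k, 8 * k]
    total, prev = 0, in_ch
    for w in widths:
        total += prev * w * 3 * 3 + w  # conv weights + biases (assumed)
        total += 2 * w                 # BatchNorm affine scale + shift
        prev = w
    total += widths[-1] * n_classes + n_classes  # final FC layer
    return total

print(cnn_param_count(64))  # 1558026
```

That the count matches exactly suggests these bookkeeping assumptions line up with the implementation, though we have not checked this against the released code; the quadratic growth in $k$ also matches the parameter-scaling plots referenced above.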
\nWe demonstrate model-wise double descent across different architectures,\ndatasets, optimizers, and training procedures.\nThe critical region exhibits distinctly different test behavior\naround the interpolation point and there is often a peak in test error that becomes more prominent in settings with label noise.\n\nFor the experiments in this section (Figures~\\ref{fig:resnet-cifar}, \\ref{fig:cnn-cifar},\n\\ref{fig:more-mdd1appendix},\n\\ref{fig:mcnn_cf100},\n\\ref{fig:more-mdd2}), notice that all modifications which increase the interpolation threshold\n(such as adding label noise, using data augmentation, and increasing the number of train samples)\nalso correspondingly shift the peak in test error towards larger models.\nAdditional plots showing the early-stopping behavior of these models,\nand additional experiments showing double descent\nin settings with no label noise (e.g. Figure~\\ref{fig:app-cifar100-clean})\nare in Appendix~\\ref{subsec:appendix_model}.\nWe also observed model-wise double descent for adversarial training,\nwith a prominent robust test error peak even in settings without label noise.\nSee Figure~\\ref{fig:adv_training} in Appendix~\\ref{subsec:appendix_model}.\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_mcnn_noaug_pVar}\n \\caption{Without data augmentation.}\n \\end{subfigure}\\hfill\n \\begin{subfigure}{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_mcnn_aug_pVar_trunc}\n \\caption{With data augmentation.}\n \\end{subfigure}\n \\caption{{\\bf Effect of Data Augmentation.}\n 5-layer CNNs on CIFAR10, with and without data-augmentation.\n Data-augmentation shifts the interpolation threshold to the right,\n shifting the test error peak accordingly.\n Optimized using SGD for 500K steps.~\n See Figure~\\ref{fig:mcnn_aug_large} for larger models.\n 
}\\label{fig:cnn-cifar}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\captionsetup{calcwidth=.9\\linewidth}\n\\begin{minipage}[t]{0.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_mcnn_sgd_adam}\n \\caption[width=0.8\\linewidth]{{\\bf SGD vs. Adam.}\n 5-Layer CNNs on CIFAR-10 with no label noise, and no data augmentation.\n Optimized using SGD for 500K gradient steps, and Adam for 4K epochs.}\\label{fig:more-mdd1appendix}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar100_mcnn_noaug}\n \\caption{{\\bf Noiseless settings.} \n 5-layer CNNs on CIFAR-100 with no label noise;\n note the peak in test error.\n Trained with SGD and no data augmentation.\n See Figure~\\ref{fig:app-cifar100-clean-sgd}\n for the early-stopping behavior of these models.\n }\n \\label{fig:mcnn_cf100}\n\\end{minipage}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\begin{minipage}[c]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{nlp_de_en_fr}\n \\end{minipage}\\hfill\n \\begin{minipage}[c]{0.45\\textwidth}\n \\caption[0.9\\textwidth]{{\\bf Transformers on language translation tasks:} Multi-head-attention encoder-decoder Transformer model trained for 80k gradient steps with labeled smoothed cross-entropy loss on \\iwslt\\ German-to-English (160K sentences) and WMT`14 English-to-French (subsampled to 200K sentences) dataset. Test loss is measured as per-token perplexity.}\n \\label{fig:more-mdd2}\n \\end{minipage}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\paragraph{Discussion.}\nFully understanding the mechanisms behind model-wise double descent\nin deep neural networks remains an important open question.\nHowever, an analog of model-wise double descent occurs even\nfor linear models. 
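The linear case is simple enough to simulate directly. The following numpy sketch is our own construction (not code from any cited work): it forms the minimum-$\\ell_2$-norm least-squares solution, which for linear models is the interpolant that gradient descent from zero initialization converges to, on noisy data from a sparse ground-truth signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_max = 20, 60
X = rng.normal(size=(n, d_max))
# Noisy labels from a sparse ground-truth signal (mis-specified for d < 5).
w_true = rng.normal(size=5)
y = X[:, :5] @ w_true + 0.1 * rng.normal(size=n)

def min_norm_fit(d):
    """Minimum-l2-norm least-squares fit using the first d features.
    For d > n this is the interpolating solution that gradient descent
    from zero initialization converges to."""
    w, *_ = np.linalg.lstsq(X[:, :d], y, rcond=None)
    return w

w_over = min_norm_fit(2 * n)   # over-parameterized: d = 40 > n = 20
resid = X[:, :2 * n] @ w_over - y
assert np.allclose(resid, 0)   # the train set is interpolated exactly
# Near the threshold d = n, the unique interpolant typically has a much
# larger norm -- the noise sensitivity discussed above.
print(np.linalg.norm(min_norm_fit(n)), np.linalg.norm(w_over))
```

Sweeping $d$ across the threshold $d=n$ and measuring error on fresh samples reproduces the peak-then-descend shape in this toy setting.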
A recent stream of theoretical works\nanalyzes this setting\n(\\cite{bartlett2019benign,muthukumar2019harmless,belkin2019two,mei2019generalization,hastie2019surprises}).\nWe believe similar mechanisms may be at work in deep neural networks.\n\nInformally, our intuition is that for model-sizes at the interpolation threshold, there is effectively\nonly one model that fits the train data and this interpolating model is very sensitive\nto noise in the train set and\/or model mis-specification.\nThat is, since the model is just barely able to fit the train data,\nforcing it to fit even slightly-noisy or mis-specified\nlabels will destroy its global structure, and result in high test error.\n(See Figure~\\ref{fig:ens_resnet} in the Appendix for an experiment\ndemonstrating this noise sensitivity, by showing that ensembling helps significantly\nin the critically-parameterized regime).\nHowever for over-parameterized models,\nthere are many interpolating models that fit the train set,\nand SGD is able to find one that ``memorizes''\n(or ``absorbs'') the noise while still performing well on the distribution.\n\nThe above intuition is theoretically justified for linear models.\nIn general, this situation manifests even without label noise for linear models\n(\\cite{mei2019generalization}),\nand occurs whenever there is \\emph{model mis-specification}\nbetween the structure of the true distribution and the model family.\nWe believe this intuition extends to deep learning as well,\nand it is consistent with our experiments.\n\n\n\n\\iffalse\n\\textcolor{blue}{ {\\bf Omit this paragraph? 
--Preetum.}\nThe situation for linear models is most clear with added label noise.\nHere, the fundamental reason why there is high test error\nat the interpolation threshold is: when the number of samples $n$\nis equal to the dimension $d$ of the linear model,\nthen there is exactly one model which fits the train set--- and this \ninterpolating model is very sensitive\nto noise in the train set.\nThat is, since the linear model is just barely able to fit the train data,\nforcing it to fit even slightly-noisy\nlabels will destroy its global structure, and result in high test error.\nHowever for ``overparameterized'' linear models, when $d \\gg n$,\nthere are many interpolating models that fit the train set,\nand selecting the minimum-norm one will often\nachieve good test performance\\footnote{Note that for linear problems,\nthe minimum-norm solution is the one found by gradient descent from $0$ initialization.}.\nThat is, overparameterized linear models are able to\nfit the label noise, while maintaining good performance on the distribution.\nIn general, this situation manifests even without label noise, whenever there is\n\\emph{model mis-specification} between the structure of the true distribution, and the\nlinear model family.\n}\n\nWe believe similar mechanisms may be at work in deep neural networks.\nInformally, our intuition is:\nfor model-sizes at the interpolation threshold, there is effectively\nonly one model that fits the train data.\nThis model is very sensitive to [noise in] the specific train samples.\nOverparameterized models, however, have the capacity to ``memorize the noise'',\nwithout affecting their global structure as much.\nMoreover, SGD (through unexplained means) manages to find such a minima.\nFor overparameterized models, there are many minima which fit the train data,\nan\nand SGD (through unexplained means) is able to ``memorize the noise''\nwithout destroying the global structure of the model.\n\\fi\n\n\n\n\\section{On The Mechanisms 
and Characterizations of Double Descent}\n\\label{sec:morediscuss}\n\n\\paragraph{On Effective Model Complexity.}\nWe stress that Effective Model Complexity is \\emph{not} a single-letter characterization\nof test error.\nFor example, two training procedures with the same EMC can output models with\nvery different test errors (e.g. consider a CNN and a fully-connected net, which both are capable of interpolating the same number of samples from a natural image distribution; here, we would expect the CNN to have much lower test error than the fully-connected net).\nOur generalized double descent hypothesis applies as we vary EMC in a ``smooth'' way--- say, by only\nchanging the model by a small amount--- and does not apply if we, say, jump discontinuously from a CNN to a fully-connected net.\n\\section{Extended discussion of related work}\n\\label{sec:related_appendix}\n\n\n\n\\paragraph{\\cite{belkin2018reconciling}:} \nThis paper proposed, in very general terms, that the apparent contradiction between traditional notions of the bias-variance trade-off and empirically successful practices in deep learning can be reconciled under a double-descent curve---as model complexity increases, the test error follows the traditional ``U-shaped curve'', but beyond the point of interpolation, the error starts to \\textit{decrease}. This work provides empirical evidence for the double-descent curve with fully connected networks trained on subsets of MNIST, CIFAR10, SVHN and TIMIT datasets. They use the $l_2$ loss for their experiments. 
They demonstrate that neural networks are not an aberration in this regard---double-descent is a general phenomenon observed also in linear regression with random features and random forests.\n\n\\paragraph{Theoretical works on linear least squares regression:}\nA variety of papers have attempted to theoretically analyze this behavior in restricted settings, particularly the case of least squares regression under various assumptions on the training data, feature spaces, and regularization method.\n\n\\begin{enumerate}\n \\item \\cite{advani2017high, hastie2019surprises} both consider the linear regression problem stated above and analyze the generalization behavior in the asymptotic limit $N, D \\rightarrow \\infty$ using random matrix theory. \\cite{hastie2019surprises} highlight that when the model is mis-specified, the minimum of training error can occur for over-parameterized models.\n \\item \\cite{belkin2019two} study linear least squares regression for two data models, where the input data is sampled from a Gaussian or a Fourier series model for functions on a circle. They provide a finite-sample analysis for these two cases.\n \\item \\cite{bartlett2019benign} provides generalization bounds for the minimum $l_2$-norm interpolant for Gaussian features.\n \\item \\cite{muthukumar2019harmless} characterize the fundamental limit of any interpolating solution in the presence of noise and provide some interesting Fourier-theoretic interpretations.\n \\item \\cite{mei2019generalization} provides an asymptotic analysis for ridge regression over random features.\n\\end{enumerate}\n\nSimilar double descent behavior was investigated in \\cite{opper1995statistical, opper2001learning}.\n\n\\cite{geiger2019jamming} showed that deep fully connected networks trained on the MNIST dataset with hinge loss exhibit a ``jamming transition'' when the number of parameters exceeds a threshold that allows training to near-zero train loss. 
\\cite{geiger2019scaling} provide further experiments on CIFAR-10 with a convolutional network. They also highlight interesting behavior with ensembling around the critical regime, which is consistent with our\ninformal intuitions in Section~\\ref{sec:model-dd} and our experiments in Figures \\ref{fig:ens_resnet}, \\ref{fig:ens_mcnn}.\n\n\\cite{advani2017high, geiger2019jamming, geiger2019scaling} also point out that double-descent is not observed when optimal early-stopping is used.\n\n\n\n\n\n\n\n\\section{Random Features: A Case Study}\n\\label{sec:rff}\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\textwidth]{rff_fmnist.png} \\\\\n \n \n \\caption{\\textbf{Random Fourier Features} on the Fashion MNIST dataset. The setting is equivalent to two-layer neural network with $e^{-ix}$ activation, with randomly-initialized first layer that is fixed throughout training. The second layer is trained using gradient flow.}\n \\label{fig:rff}\n\\end{figure}\n\n\nIn this section, for completeness sake, we show that both the model- and sample-wise double descent phenomena are not unique to deep neural networks---they exist even in the setting of Random Fourier Features of \\cite{rahimi2008random}. This setting is equivalent to a two-layer neural network with $e^{-ix}$ activation. The first layer is initialized with a $\\mathcal{N}(0, \\frac{1}{d})$ Gaussian distribution and then fixed throughout training. The width (or embedding dimension) $d$ of the first layer parameterizes the model size. The second layer is initialized with $0$s and trained with MSE loss.\n\nFigure \\ref{fig:rff} shows the grid of Test Error as a function of\nboth number of samples $n$ and model size $d$.\nNote that in this setting $\\mathrm{EMC} = d$ (the embedding dimension). As a result, as demonstrated in the figure, the peak follows the path of $n=d$. 
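The mechanics of this experiment can be sketched in a few lines of numpy. The version below is a simplification of our own: real cosine features with random phases stand in for the complex-exponential activation, a direct least-squares solve stands in for gradient flow, and the data is synthetic rather than Fashion-MNIST. It illustrates the over-parameterized side of the grid, where $d \\geq n$ and the second layer interpolates the training set:

```python
import numpy as np

rng = np.random.default_rng(1)
n, in_dim, d = 50, 10, 200     # train samples, input dim, embedding dim
X = rng.normal(size=(n, in_dim))
y = rng.normal(size=n)         # arbitrary real labels; interpolation is the point

# Fixed random first layer; cosine features with random phases stand in
# for the complex-exponential activation described above.
W = rng.normal(size=(in_dim, d)) / np.sqrt(in_dim)
b = rng.uniform(0.0, 2.0 * np.pi, size=d)
Z = np.cos(X @ W + b)          # random features, one column per unit

# Second layer: minimum-norm least-squares fit under MSE loss.
a, *_ = np.linalg.lstsq(Z, y, rcond=None)
train_mse = np.mean((Z @ a - y) ** 2)
assert train_mse < 1e-10       # d >= n, so the train set is interpolated
```

Sweeping $d$ from below $n$ to above it, and evaluating on held-out points, traces out the horizontal (model-wise) slice of the grid described above.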
Both model-wise and sample-wise (see Figure~\\ref{fig:rff-samples}) double descent phenomena are captured by horizontally and vertically crossing the grid, respectively. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/rff_fmnist_samples.png} \\\\\n \n \n \\caption{Sample-wise double-descent slice for Random Fourier Features on the Fashion MNIST dataset. In this figure the embedding dimension (number of random features) is 1000.}\n \\label{fig:rff-samples}\n\\end{figure}\n\\section{Sample-wise Non-monotonicity}\n\\label{sec:samples}\n\nIn this section, we investigate the effect of varying the number of train samples,\nfor a fixed model and training procedure.\nPreviously, in model-wise and epoch-wise double descent, we explored behavior in the critical regime, where $\\mathrm{EMC}_{\\cD, {\\epsilon}}(\\cT) \\approx n$, by varying the EMC.\nHere, we explore the critical regime by varying the number of train samples $n$.\nBy increasing $n$, the same training procedure $\\cT$ can switch from being effectively over-parameterized\nto effectively under-parameterized.\n\n\nWe show that increasing the number of samples has two different effects on the test error vs. model complexity graph.\nOn the one hand, as expected, increasing the number of samples shrinks the area under the curve.\nOn the other hand, increasing the number of samples also has the effect of ``shifting the curve to the right'' and increasing the model complexity at which test error peaks.\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[c]{.45\\textwidth}\n \n \\centering\n \n \\includegraphics[width=\\textwidth]{mcnn_p10_big}\n \\includegraphics[width=\\textwidth]{mcnn_p20_big}\n \\caption{Model-wise double descent for 5-layer CNNs on CIFAR-10, for varying dataset sizes. 
\n \n {\\bf Top:}\n There is a range of model sizes (shaded green)\n where training on $2\\x$ more samples does not improve test error.\n {\\bf Bottom:}\n There is a range of model sizes (shaded red)\n where training on $4\\x$ more samples does not improve test error.}\n \\label{fig:sample-modeld}\n \\end{subfigure}\\hspace{0.7cm}\n \\begin{subfigure}[c]{.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{nlp_sampledd_test-annotated}\n \\caption{\\textbf{Sample-wise non-monotonicity.}\n Test loss (per-word perplexity) as a function of number of train samples, for two transformer models trained to completion on IWSLT'14. For both model sizes, there is a regime where more samples hurt performance. Compare to Figure~\\ref{fig:moresamplesareworse}, of model-wise double-descent in the identical setting.}\n \\label{fig:nlpdd}\n \\end{subfigure}\n \\caption{Sample-wise non-monotonicity.}\n\\end{figure}\n\nThese twin effects are shown in Figure~\\ref{fig:sample-modeld}.\nNote that there is a range of model sizes\nwhere the effects ``cancel out''---and\nhaving $4\\x$ more train samples does not help test\nperformance when training to completion.\nOutside the critically-parameterized regime, for sufficiently under- or over-parameterized models, having more samples helps.\nThis phenomenon is corroborated in Figure~\\ref{fig:grid},\nwhich shows test error as a function of both model and sample size,\nin the same setting as Figure~\\ref{fig:sample-modeld}.\n\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\textwidth]{joint_grid}\n \\caption{{\\bf Left:} Test Error as a function of model size and\n number of train samples, for 5-layer CNNs on CIFAR-10 + 20\\% noise.\n Note the ridge of high test error again lies along the interpolation threshold.\n {\\bf Right: } Three slices of the left plot,\n showing the effect of more data for models of different sizes.\n Note that, when training to completion, more data helps for\n small and large models, 
but does not help\n for near-critically-parameterized models (green).\n }\n \\label{fig:grid}\n\\end{figure}\n\nIn some settings, these two effects combine to yield a regime of model sizes\nwhere more data actually hurts test performance as in\nFigure~\\ref{fig:moresamplesareworse} (see also \nFigure~\\ref{fig:nlpdd}).\nNote that this phenomenon is not unique to DNNs:\nmore data can hurt even for linear models \n(see Appendix~\\ref{sec:rff}).\n\n\\section{Experimental Setup}\n\n\nWe briefly describe the experimental setup here; full details are in Appendix~\\ref{sec:appendix_exp_details}\n\\footnote{The raw data from our experiments\nare available at:\n\\url{https:\/\/gitlab.com\/harvard-machine-learning\/double-descent\/tree\/master}}.\nWe consider three families of architectures: ResNets, standard CNNs, and Transformers.\n\\textbf{ResNets:}\nWe parameterize a family of ResNet18s (\\cite{he2016identity})\nby scaling the width (number of filters) of convolutional layers.\nSpecifically, we use layer widths $[k, 2k, 4k, 8k]$ for varying $k$.\nThe standard ResNet18 corresponds to $k=64$.\n\\textbf{Standard CNNs:}\nWe consider a simple family of 5-layer CNNs, with 4 convolutional layers of widths $[k, 2k, 4k, 8k]$ for varying $k$, and a fully-connected layer. 
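As a rough illustration of how this width parameterization scales model size, the layer widths and an approximate convolutional weight count can be computed as follows (a minimal sketch assuming 3x3 kernels and ignoring biases and the final fully-connected layer; the function names are ours, not the authors'):

```python
def cnn_widths(k):
    """Convolutional layer widths of the 5-layer CNN family for a given k."""
    return [k, 2 * k, 4 * k, 8 * k]

def conv_weight_count(widths, in_channels=3, kernel=3):
    """Approximate number of convolutional weights (no biases),
    illustrating that model size grows roughly quadratically in k."""
    total, c_in = 0, in_channels
    for c_out in widths:
        total += c_in * c_out * kernel * kernel  # one conv layer's weight tensor
        c_in = c_out
    return total
```

For instance, `conv_weight_count(cnn_widths(64))` gives 1,550,016 convolutional weights for the widest standard model, versus 405 at `k=1`.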
For context, the CNN with width $k=64$ can reach over $90\\%$ test accuracy on CIFAR-10 with data-augmentation.\n\\textbf{Transformers:}\nWe consider the 6-layer encoder-decoder from \\cite{Vaswani},\nas implemented by \\cite{ott2019fairseq}.\nWe scale the size of the network by\nmodifying the embedding dimension $d_{\\text{model}}$,\nand setting the width of the fully-connected layers proportionally\n($d_{\\text{ff}} = 4\\cdot d_{\\text{model}}$).\nFor ResNets and CNNs, we train with cross-entropy loss, and the following optimizers:\n(1) Adam with learning-rate $0.0001$ for 4K epochs; (2) SGD with learning rate $\\propto \\frac{1}{\\sqrt{T}}$ for 500K gradient steps.\nWe train Transformers for 80K gradient steps, with 10\\% label smoothing and no drop-out.\n\n\\paragraph{Label Noise.}\nIn our experiments, label noise of probability $p$ refers to\ntraining on samples which have the correct label\nwith probability $(1-p)$, and a uniformly random incorrect label otherwise (label noise is sampled only once and not per epoch).\nFigure~\\ref{fig:errorvscomplexity} plots test error on the noisy distribution, while the remaining figures plot test error with respect to the clean distribution (the two curves are just linear rescalings of one another).","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Pseudo-code of proposed methods}\n\\label{appendix:ap_pseudocode}\nIn this section, we provide the pseudo-code of the methods proposed in the main paper. First, Algorithm~\\ref{alg:ap_white} shows the pseudo-code of ODI for white-box attacks in Section~\\ref{sec_ODS_white}. 
Lines 4--6 of the algorithm describe the iterative update by ODI.\n\n\n\\begin{algorithm}[htbp]\n \\caption{Initialization by ODS (ODI) for white-box attacks }\n \\label{alg:ap_white}\n\\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} A targeted image $\\bm{\\mathrm{x}}_{org}$, a target classifier $\\bm{\\mathrm{f}}$, perturbation set $B(\\bm{\\mathrm{x}}_{org})$, number of ODI steps $N_{\\text{ODI}}$, step size $ \\eta_{\\text{ODI}}$, number of restarts $N_R$\n \\STATE {\\bfseries Output:} Starting points $\\{x^{start}_i \\}$ for adversarial attacks\n \\FOR{$i=1$ {\\bfseries to} $N_R$}\n \\STATE Sample $\\bm{\\mathrm{x}}_{0}$ from $B(\\bm{\\mathrm{x}}_{org})$, and sample $\\bm{\\mathrm{w}}_{\\text{d}} \\sim U(-1,1)^C$\n \\FOR{$k=0$ {\\bfseries to} $N_{\\text{ODI}}-1$}\n \\STATE $\\bm{\\mathrm{x}}_{k+1} \\gets \\mathrm{Proj}_{B(\\bm{\\mathrm{x}}_{org})} \\left( x_{k} + \\eta_{\\text{ODI}} \\, \\mathrm{sign}(\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_k,\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}}) ) \\right)$\n \\ENDFOR\n \\STATE $\\bm{\\mathrm{x}}^{start}_i \\gets x_{N_{\\text{ODI}}}$\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\nWe also describe the algorithm of Boundary-ODS, used in Section~\\ref{sec_black_decision} of the main paper. Algorithm~\\ref{alg:ap_boundary} shows the pseudo-code of Boundary-ODS. 
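The ODI loop above (lines 4--6) can be sketched in NumPy for an l-infinity ball of radius eps around the original image. To keep the sketch self-contained we use a toy linear classifier f(x) = Wx, for which the ODS direction (the gradient of the randomly weighted output, as defined in the main text) is W.T @ w_d in closed form; a real attack would obtain this gradient by backpropagation through the network, and the normalization of v_ODS is dropped here since only its sign is used:

```python
import numpy as np

def odi_starting_points(x_org, W, eps, n_odi=2, eta=None, n_restarts=10, seed=0):
    """Sketch of ODI: for each restart, sample x_0 in the l_inf ball and
    w_d ~ U(-1,1)^C, then take n_odi signed ODS steps projected back onto
    the ball. W is the (C x d) weight matrix of a toy linear classifier."""
    rng = np.random.default_rng(seed)
    eta = eps if eta is None else eta                 # default step size: eta_ODI = eps
    C, d = W.shape
    starts = []
    for _ in range(n_restarts):
        x = x_org + rng.uniform(-eps, eps, size=d)    # x_0 sampled from B(x_org)
        w_d = rng.uniform(-1.0, 1.0, size=C)          # random output-space direction
        for _ in range(n_odi):
            v = W.T @ w_d                             # grad_x <w_d, W x> (ODS direction)
            x = x + eta * np.sign(v)                  # diversified signed step
            x = np.clip(x, x_org - eps, x_org + eps)  # projection onto B(x_org)
        starts.append(x)
    return starts
```

Each returned point would then serve as the starting point of one restart of the subsequent attack (e.g., PGD).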
\nThe original Boundary Attack~\\citep{Brendel18} first samples a random noise vector $\\bm{\\mathrm{q}}$ from a Gaussian distribution $\\mathcal{N}(\\mathbf{0}, \\mathbf{I})$ and then orthogonalizes the vector to keep the distance from the original image (line 7 in Algorithm~\\ref{alg:ap_boundary}).\nAfter that, the attack refines the vector $\\bm{\\mathrm{q}}$ to reduce the distance from the original image such that the following equation holds:\n\\begin{equation}\n\\label{eq_decision_update}\nd(\\bm{\\mathrm{x}},\\bm{\\mathrm{x}}_{adv}) - d(\\bm{\\mathrm{x}},\\bm{\\mathrm{x}}_{adv}+\\bm{\\mathrm{q}}) = \\epsilon \\cdot d(\\bm{\\mathrm{x}},\\bm{\\mathrm{x}}_{adv})\n\\end{equation}\nwhere $d(a,b)$ is the distance between $a$ and $b$.\nWe replace the random Gaussian sampling with ODS, as in lines 5 and 6 of Algorithm~\\ref{alg:ap_boundary}.\nVectors sampled by ODS yield large changes in the outputs of the target model and increase the probability that the updated image is adversarial (i.e. 
the image satisfies the condition in line 9 of Algorithm~\\ref{alg:ap_boundary}), so ODS makes the attack efficient.\n\n\\begin{algorithm}[htbp]\n \\caption{Boundary Attack~\\citep{Brendel18} with sampling update direction by ODS }\n \\label{alg:ap_boundary}\n\\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} A targeted image $\\bm{\\mathrm{x}}$, a label $y$, a target classifier $\\bm{\\mathrm{f}}$, a set of surrogate models $\\mathcal{G}$\n \\STATE {\\bfseries Output:} attack result $\\bm{\\mathrm{x}}_{adv}$\n \\STATE Set the starting point $\\bm{\\mathrm{x}}_{adv} = \\bm{\\mathrm{x}}$ which is adversarial\n \\WHILE {$k<$ number of steps}\n \\STATE Choose a surrogate model $\\bm{\\mathrm{g}}$ from $\\mathcal{G}$, sample $\\bm{\\mathrm{w}}_{\\text{d}} \\sim U(-1,1)^C$\n \\STATE Set $\\bm{\\mathrm{q}} = \\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_{adv},\\bm{\\mathrm{g}},\\bm{\\mathrm{w}}_{\\text{d}})$\n \\STATE Project $\\bm{\\mathrm{q}}$ onto a sphere around the original image $\\bm{\\mathrm{x}}$\n \\STATE Update $\\bm{\\mathrm{q}}$ with a small movement toward the original image $\\bm{\\mathrm{x}}$ such that Equation~(\\ref{eq_decision_update}) holds\n \\IF{$\\bm{\\mathrm{x}}_{adv}+\\bm{\\mathrm{q}}$ is adversarial}\n \\STATE Set $\\bm{\\mathrm{x}}_{adv}=\\bm{\\mathrm{x}}_{adv}+\\bm{\\mathrm{q}}$ \n \\ENDIF\n \\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Details of experiment settings}\n\\label{appendix:ap_parameter}\n\n\n\\subsection{Hyperparameters and settings for attacks in Section~\\ref{sec_white_various}}\n\\label{appendix:ap_parameter_whiteall}\nWe describe the hyperparameters and settings for the PGD and C\\&W attacks in Section~\\ref{sec_white_various}. \n\nMultiple loss functions $L(\\cdot)$ can be used for PGD attacks, including the cross-entropy loss and the margin loss defined as $\\max_{i \\neq y} f_{i}(\\bm{\\mathrm{x}}) - f_{y}(\\bm{\\mathrm{x}})$. 
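For a single example with logits f(x) and true label y, the margin loss defined above can be implemented in a few lines (an illustrative sketch, not the authors' code):

```python
import numpy as np

def margin_loss(logits, y):
    """Margin loss max_{i != y} f_i(x) - f_y(x): positive iff the model
    misclassifies x, so an attack maximizes it to cross the decision boundary."""
    f = np.asarray(logits, dtype=float)
    rival = np.delete(f, y).max()   # best logit among the wrong classes
    return rival - f[y]
```

For example, on logits [2.0, 5.0, 1.0] with y=1 the loss is -3.0: the input is correctly classified with a margin of 3, and the attack succeeds once the loss becomes positive.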
%\nWe use the margin loss for PGD attacks to make the attacks under consideration stronger.\n\n\n\nPGD attacks have three hyperparameters: perturbation size $\\epsilon$, step size $\\eta$ and number of steps $N$. \nWe chose $\\epsilon=0.3,8\/255,4\/255$, $\\eta= 0.02, 2\/255, 0.5\/255$ and $N=40,20,50$ for MadryLab (MNIST), MadryLab (CIFAR-10), ResNet152 Denoise (ImageNet), respectively. We use the whole test set except for ImageNet, where the first 1000 test images are used. \n\nFor C\\&W attacks, we define na\\\"{i}ve random initialization to make sure the starting points are within an $\\ell_2$ $\\epsilon$-radius ball: we first sample Gaussian noise $\\bm{\\mathrm{w}} \\sim \\mathcal{N}(\\mathbf{0}, \\mathbf{I})$ and then add the clipped noise $\\epsilon \\cdot \\bm{\\mathrm{w}} \/ \\|\\bm{\\mathrm{w}}\\|_2$ to an original image. We set the perturbation radius of initialization $\\epsilon$ by reference to attack bounds in other studies: $\\epsilon= 2.0, 1.0, 5.0$ for MadryLab (MNIST), MadryLab (CIFAR-10), ResNet152 Denoise (ImageNet), respectively.\nWe also set the hyperparameters of C\\&W attacks as follows: the maximum number of iterations is 1000 (MNIST) and 100 (CIFAR-10 and ImageNet), the number of search steps is 10, the learning rate is 0.1, and the initial constant is 0.01. The attack is performed for the first 1000 images (MNIST and CIFAR-10) and the first 500 images (ImageNet).\n\n\n\\subsection{Hyperparameter tuning for tuned ODI-PGD in Section~\\ref{sec_sota} }\n\\label{appendix:ap_parameter_ODI}\nWe describe hyperparameter tuning for our tuned ODI-PGD in Section~\\ref{sec_sota}. \nWe summarize the setting in Table~\\ref{tab_para}. 
\n\n\\begin{table*}[ht]\n\\caption{Hyperparameter setting for tuned ODI-PGD in Section~\\ref{sec_sota}.}\n\\label{tab_para}\n\\begin{center}\n\\setlength{\\tabcolsep}{4.5pt}\n\\begin{tabular}{c|cc|ccc}\n\\toprule\n & \\multicolumn{2}{c|}{\\text{ODI}} & \\multicolumn{3}{c}{PGD} \\\\ \nmodel & \\begin{tabular}{c} total step \\\\ $N_{\\text{ODI}}$ \\end{tabular} & \\begin{tabular}{c} step size \\\\ $\\eta_{\\text{ODI}}$ \\end{tabular} & \n optimizer & \n \\begin{tabular}{c} total step \\\\ $N$ \\end{tabular} & \n \\begin{tabular}{c} step size (learning rate) \\\\ $\\eta_k$ \\end{tabular} \\\\ \\midrule\n\\begin{tabular}{c}MNIST\n\\end{tabular} & 50 & \n 0.05 & Adam & 950 & \n $\\begin{array}{l@{~}l}\n 0.1 & (k < 475) \\\\\n 0.01 & (475 \\leq k < 712) \\\\\n 0.001 & (712 \\leq k)\n \\end{array}$ \\\\ \\hline\n\\begin{tabular}{c}CIFAR-10\n\\end{tabular} & 10 & \n 8\/255 & \\begin{tabular}{c}sign \\\\ function \\end{tabular} & 140 & \n $\\begin{array}{l@{~}l}\n 8\/255 & (k < 46) \\\\\n 0.8\/255 & (46 \\leq k < 92) \\\\\n 0.08\/255 & (92 \\leq k)\n \\end{array}$ \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\end{table*}\nFor ODI, we increase the number of ODI step $N_{\\text{ODI}}$ to obtain more diversified inputs than ODI with $N_{\\text{ODI}}=2$. In addition, we make step size $\\eta_{\\text{ODI}}$ smaller than $\\epsilon$ on MNIST, because $\\epsilon$-ball with $\\epsilon=0.3$ is large and $\\eta_{\\text{ODI}}=0.3$ is not suitable for seeking the diversity within the large $\\epsilon$-ball. \nIn summary, we set $(N_{\\text{ODI}},\\eta_{\\text{ODI}})=(50,0.05), (10, 8\/255)$ for the MNIST model and the CIFAR-10 model, respectively.\n\n\nWe tune hyperparameters of PGD based on Gowal et al.~\\citep{MT19}. \nWhile several studies used the sign function to update images for the PGD attack, some studies \\citep{SPSA18,MT19} reported that updates by Adam optimizer~\\citep{Adam15} brought better results than the sign function. 
Following the previous studies~\\citep{SPSA18,MT19}, we consider the sign function as an optimizer and the choice of optimizer as a hyperparameter. \nWe use Adam for the PGD attack on the MNIST model and the sign function on the CIFAR-10 model. \n\nWe adopt a scheduled step size instead of a fixed one. \nBecause we empirically found that starting from a large step size brings better results, we set the initial step size $\\eta_0$ as $\\eta_0=\\epsilon$ on CIFAR-10. We update the step size at $k=0.5N, 0.75N$ on MNIST and $k=N\/3, 2N\/3$ on CIFAR-10. \nWhen we use Adam, the step size is treated as the learning rate.\nFinally, we set the number of PGD steps $N$ such that $N_{\\text{ODI}}+N=1000$ on MNIST and $150$ on CIFAR-10.\n\n\\subsection{Setting for training on ImageNet in Section~\\ref{sec_black_limited}}\n\\label{appendix:ap_parameter_blackOOD}\nWe describe the setting for training surrogate models on ImageNet in the experiment of Section~\\ref{sec_black_limited}.\nWe use the implementation of training provided in PyTorch with default hyperparameters.\nNamely, training epochs are 90 and learning rates are changed depending on the epoch: 0.1 until 30 epochs, 0.01 until 60 epochs, 0.001 until 90 epochs. Batch size is 256 and weight decay is 0.0001.\n\n\n\n\n\\section{Additional results and experiments for ODI with white-box attacks}\n\\label{appendix:ap_white}\n\n\\subsection{Diversity offered by ODI}\n\\label{appendix:ap_white_diversity}\n\\label{sec_append_white_diversity}\nWe empirically demonstrate that ODI can find a more diverse set of starting points than random uniform initialization, as pictorially shown in the left figures of Figure~\\ref{figure1} of the main paper. \n\nAs an example of target models, we train a robust classification model using adversarial training~\\citep{madry17} on CIFAR-10. 
We adopt popular hyperparameters for adversarial training under the $\\ell_\\infty$ PGD attack on CIFAR-10: perturbation size $\\epsilon = 8\/255$, step size $\\eta=2\/255$, and number of steps $N=10$. Training epochs are 100 and learning rates are changed depending on the epoch: 0.1 until 75 epochs, 0.01 until 90 epochs, and 0.001 until 100 epochs. Batch size is 128 and weight decay is 0.0002.\n\nOn the target model, we quantitatively evaluate the diversity of starting points from each initialization in terms of pairwise distances of the output values $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$. Each initialization is bounded within the $\\ell_\\infty$ $\\epsilon$-ball with $\\epsilon=8\/255$.\nWe pick 100 images from CIFAR-10 and run each initialization 10 times to calculate the mean pairwise distance among outputs for different starting points. As a result, the mean pairwise distance obtained from ODI is 6.41, which is about 15 times larger than that from uniform initialization (0.38). This corroborates our intuition that starting points obtained by ODI are more diverse than those from uniform initialization. We note that PGD does not generate diverse samples: when we use PGD with 2 steps as an initialization, the mean pairwise distance is only 0.43. \n\nWe also visualize the diversity offered by ODI. \nFirst, we focus on the loss histogram of starting points from ODI and na\\\"{i}ve uniform initialization. We pick an image from the CIFAR-10 test dataset and run each initialization 100 times. Then, we calculate loss values for the starting points to visualize their diversity in the output space. The left panel of Figure~\\ref{fig_div_visualize} is the histogram of loss values for each initialization. We can easily observe that images from na\\\"{i}ve initialization are concentrated in terms of loss values (around $-1.0$), whereas images from ODI-2 (ODI with 2 steps) are much more diverse in terms of the loss values. We observe that images from PGD-2 also take similar loss values. 
\nBy starting attacks from these initial inputs, we obtain the histogram of loss values in the center panel of Figure~\\ref{fig_div_visualize}. We can observe that ODI-PGD generates more diverse results than PGD with na\\\"{i}ve initialization (PGD-20).\n\nIn addition, we apply \nt-SNE~\\citep{tSNE} to the output logits for starting points by each initialization.\nWe visualize the embedding produced by t-SNE in the right panel of Figure~\\ref{fig_div_visualize}.\nAs expected, starting points produced by ODI are more diversified than those by na\\\"{i}ve initialization.\n\n\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{ccc}\n\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/neurips_losshist1_camera.png}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/neurips_losshist2_camera.png}\n \\end{center}\n \\end{minipage}\n \\hspace{0.3cm}\n \\begin{minipage}{0.23\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/ODI_loss_tsne.png}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{(Left): Histogram of loss values evaluated at starting points by ODI, na\\\"{i}ve uniform initialization and PGD. PGD-2 means 2-step PGD with na\\\"{i}ve initialization. The loss function is the margin loss. (Center): Histogram of loss values after attacks with 20 total steps. ODI-PGD-18 means 18-step PGD with 2-step ODI. \n (Right): Embedding for starting points sampled from each initialization, produced by t-SNE.\n }\n \\label{fig_div_visualize}\n\\end{figure*}\n\n\n\n\\subsection{Analysis of the sensitivity to hyperparameters of ODI}\n\\label{appendix:ap_white_sensitivity}\nFor ODI, we mainly set the number of ODI steps $N_{\\text{ODI}}=2$ and step size $\\eta_{\\text{ODI}}=\\epsilon$. \nTo validate the setting, we confirm that ODI-PGD is not sensitive to these hyperparameters. 
\nWe attack adversarially trained models on CIFAR-10 introduced in Section~\\ref{sec_append_white_diversity}, and adopt the same attack setup for ODI-PGD on CIFAR-10 as Section~\\ref{sec_white_various}.\nWe test $N_{\\text{ODI}}=2,4,8,16$ and $\\eta_{\\text{ODI}} = \\epsilon, \\epsilon\/2, \\epsilon\/4, \\epsilon\/8$, but \n exclude patterns with $N_{\\text{ODI}} \\cdot \\eta_{\\text{ODI}} < 2 \\epsilon$ to make $N_{\\text{ODI}} \\cdot \\eta_{\\text{ODI}}$ larger than or equal to the diameter of the $\\epsilon$-ball. \nWe calculate the mean accuracy for five repetitions of the attack, each with 20 restarts. \n\n\n\\begin{table}[htb]\n\\caption{The sensitivity to the number of ODI steps $N_{\\text{ODI}}$ and step size $\\eta_{\\text{ODI}}$. \nWe repeat each experiment 5 times to calculate statistics.\n}\n\\begin{center}\n\\begin{tabular}{cc|ccc}\n\\toprule\n$N_{\\text{ODI}}$ & $\\eta_{\\text{ODI}}$ & mean & max & min \\\\ \\midrule\n2 &$\\epsilon$ & 44.46\\% &44.50\\% & 44.45\\% \\\\\n4 &$\\epsilon \/2$ & 44.47\\% &44.50\\% & 44.42\\% \\\\\n4 &$\\epsilon $ & 44.42\\% &44.48\\% & 44.40\\% \\\\\n8 &$\\epsilon \/4$ & 44.47\\% &44.52\\% & 44.44\\% \\\\\n8 &$\\epsilon \/2$ & 44.42\\% &44.48\\% & 44.36\\% \\\\\n8 &$\\epsilon $ & 44.46\\% &44.49\\% & 44.42\\% \\\\\n16 &$\\epsilon \/8$ & 44.46\\% &44.50\\% & 44.43\\% \\\\\n16 &$\\epsilon \/4$ & 44.46\\% &44.50\\% & 44.40\\% \\\\\n16 &$\\epsilon \/2$ & 44.45\\% &44.48\\% & 44.43\\% \\\\\n16 &$\\epsilon$ & 44.44\\% &44.47\\% & 44.41\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_sensi}\n\\end{table}\n\nTable~\\ref{tab_sensi} shows the mean accuracy under ODI-PGD for different hyperparameters. 
\nThe maximum difference in the mean accuracy among different hyperparameters of ODI is only 0.05\\%.\nAlthough large $N_{\\text{ODI}}$ and $\\eta_{\\text{ODI}}$ may help find more diversified starting points, the performance of ODI is not very sensitive to these hyperparameters.\nThus, we restrict $N_{\\text{ODI}}$ to a small value to keep the comparison as fair as possible in terms of computation time. \nTable~\\ref{tab_sensi} also shows that the difference between the maximum and minimum accuracy is about 0.1\\% for all hyperparameter pairs. This result supports the stability of ODI.\n\n\\subsection{Accuracy curve for adversarial attacks with ODI}\n\\label{appendix:ap_white_accuracycurve}\nIn Section~\\ref{sec_white}, we experimentally demonstrated that the diversity offered by ODI improved white-box $\\ell_\\infty$ and $\\ell_2$ attacks. \nHere we describe the accuracy curve with the number of restarts for attacks with ODI and na\\\"{i}ve initialization.\n\nFigure~\\ref{fig_Linf} shows how the attack performance improves as the number of restarts increases in the experiment of Section~\\ref{sec_white_various}. \nAttacks with ODI outperform those with na\\\"{i}ve initialization as the number of restarts increases in all settings. These curves further corroborate that restarts facilitate attack algorithms, and ODI restarts are more effective than na\\\"{i}ve ones. We note that the first restart of ODI is sometimes worse than na\\\"{i}ve initialization. This is because diversity can also lead to poor local optima, i.e. the random directions of ODI are not always useful. 
With the increase of restarts, at least one direction is useful and the accuracy drops.\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cccc}\n \\rotatebox[origin=c]{90}{PGD}&\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_PGD_mnist.png}\n \\end{center}\n \\end{minipage}\n\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_PGD_cifar.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_PGD_image.png}\n \\end{center}\n \\end{minipage}\n \\\\\n \\rotatebox[origin=c]{90}{C\\&W}&\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_cw_mnist.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_cw_cifar.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_cw_image.png}\n \\end{center}\n \\end{minipage}\n \\\\ \n & \\begin{tabular}{c} MNIST \n \\end{tabular}&\n \\begin{tabular}{c} CIFAR-10 \n \\end{tabular}&\n \\begin{tabular}{c} ImageNet \n \\end{tabular}\n \\end{tabular}\n \\caption{The attack performance against the number of restarts for attacks with ODI. (Top): the model accuracy for PGD, (Bottom): the average of minimum $\\ell_2$ perturbations for C\\&W. %\n }%\n \\label{fig_Linf}\n\\end{figure*}\n\nNext, we describe the accuracy curve for the comparison between state-of-the-art attacks and ODI-PGD in Section~\\ref{sec_sota}. \nTo emphasize the stability of the improvement, \nwe evaluate the confidence intervals of our results against MadryLab's MNIST and CIFAR-10 models. We run the tuned ODI-PGD attack with 3000 restarts on MNIST and \n100 restarts on CIFAR-10. 
\nThen, we sample 1000 runs on MNIST and 20 runs on CIFAR-10 from the results to evaluate the model accuracy, \nand re-sample 100 times to calculate statistics. \nFigure~\\ref{fig_SOTAmadry} shows the accuracy curve under tuned ODI-PGD. \nWe observe that confidence intervals become tighter as the number of restarts grows, and tuned ODI-PGD consistently outperforms the state-of-the-art attack after 1000 restarts on MNIST and 20 restarts on CIFAR-10.\n\n\n\\begin{figure}[ht]\n\\centering\n\n \\begin{tabular}{cc}\n \\begin{minipage}{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=.9\\textwidth]{image\/white_tuned_mnist.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=.9\\textwidth]{image\/white_tuned_cifar.png}\n \\end{center}\n \\end{minipage}\n \\\\\n MadryLab (MNIST) & MadryLab (CIFAR-10) \n \\end{tabular}\n \\caption{Model accuracy under tuned ODI-PGD and the current state-of-the-art attacks~\\citep{MT19}. \n The solid lines represent values from Table~\\ref{tab_sota} and \n the error bars show 95\\% confidence intervals. }\n \\label{fig_SOTAmadry}\n\\end{figure}\n\n\n\\subsection{Tighter estimation of robustness for various models}\n\\label{appendix:ap_white_recent}\nOne important application of powerful adversarial attacks is to evaluate and compare different defense methods. In many previous works on defending against adversarial examples, PGD attack with na\\\"{i}ve uniform initialization (called na\\\"{i}ve-PGD) is a prevailing benchmark and its attack success rate is commonly regarded as a tight estimation on (worst-case) model robustness. 
In this section, we conduct a case study on six recently published defense methods~\\citep{UAT19,carmon19,scatter19,metric19,free19,YOPO19} to show that ODI-PGD outperforms na\\\"{i}ve-PGD in terms of upper bounding the worst model accuracy under all possible attacks.\n\n\\paragraph{Setup}\nWe use pre-trained models from four of those studies,\nand train the other two models~\\citep{free19,YOPO19}\n using the settings and architectures described in their original papers. We run attacks with $\\epsilon = 8\/255$ on all test images. Other attack settings are the same as the experiment for CIFAR-10 in Section~\\ref{sec_white_various}. Apart from comparing ODI-PGD and na\\\"{i}ve-PGD, we also evaluate PGD attack without restarts (denoted as $\\text{PGD}_{1}$) as it is adopted in several existing studies \\citep{UAT19,carmon19,scatter19,YOPO19}.\n \n\n\\begin{table*}[ht]\n\\caption{Accuracy of models after performing ODI-PGD and na\\\"{i}ve-PGD attacks against recently proposed defense models.}\n\\begin{center}\n\\begin{tabular}{c|ccc|cc}\n\\toprule\nmodel& (1) $\\text{PGD}_{1}$ & (2) $\\text{na\\\"{i}ve-PGD}$ \n& (3) $\\text{ODI-PGD}$ \n&(1)$-$(2) &(2)$-$(3)\n\\\\ \\midrule\nUAT~\\citep{UAT19} & 62.63\\% & 61.93\\% & {\\bf 57.43\\%} & 0.70\\% & 4.50\\% \\\\\nRST~\\citep{carmon19} & 61.17\\% & 60.77\\% & {\\bf 59.93\\%} & 0.40\\% & 0.84\\% \\\\\nFeature-scatter~\\citep{scatter19} & 59.69\\% & 56.49\\% & {\\bf 39.52\\%} & 3.20\\% & 16.97\\% \\\\\nMetric learning~\\citep{metric19} & 50.57\\% & 49.91\\% & {\\bf 47.64\\%} & 0.56\\% & 2.27\\% \\\\\nFree~\\citep{free19} & 47.19\\% & 46.39\\% &{\\bf 44.20\\%} & 0.80\\% & 2.19\\% \\\\\nYOPO~\\citep{YOPO19} & 47.70\\% & 47.07\\% & {\\bf 45.09\\%} & 0.63\\% & 1.98\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_recent}\n\\end{table*}\n\n\n\n\\paragraph{Results}\nAs shown in Table~\\ref{tab_recent}, ODI-PGD uniformly outperforms na\\\"{i}ve-PGD against all six recently-proposed defense methods, lowering the 
estimated model accuracy by 1--17\\%. In other words, ODI-PGD provides uniformly tighter upper bounds on the worst-case model accuracy than na\\\"{i}ve-PGD. \nAdditionally, the accuracy ranking of the defense methods under ODI-PGD is different from that under na\\\"{i}ve-PGD and $\\text{PGD}_{1}$.\nThese results indicate that ODI-PGD might be {a better benchmark for comparing and evaluating different defense methods} than na\\\"{i}ve-PGD and $\\text{PGD}_{1}$.\n\n\n\\section{Additional results and experiments for ODS with black-box attacks}\n\\label{appendix:ap_black}\n\n\n\n\\subsection{Diversified samples by ODS}\n\\label{appendix:ap_black_diversity}\nWe empirically show that ODS can yield diversified changes in the output space of the target model, as shown in the right figures of Figure~\\ref{figure1} of the main paper. \nSpecifically, we evaluate the mean pairwise distance among outputs for different perturbations by ODS and \ncompare it with the distance among outputs for random Gaussian sampling. \n\nWe use pre-trained ResNet50~\\citep{resnet16} and VGG19~\\citep{VGG19} models \nas the target and surrogate models, respectively. \nWe pick 100 images from the ImageNet validation set and sample perturbations 10 times by each sampling method. For comparison, we normalize the perturbations to the same size in the input space. \nThen, the obtained pairwise distance on the target model by ODS is 0.79, which is 10 times larger than the pairwise distance by random Gaussian sampling (0.07). This indicates that the diversity offered by ODS is transferable. \n\n\\subsection{Success rate curve in Section~\\ref{sec_black_score} and Section~\\ref{sec_black_decision} }\n\\label{appendix:ap_black_successcurve}\nIn Section~\\ref{sec_black_score}, we demonstrated that SimBA-ODS outperformed state-of-the-art attacks in terms of query efficiency. As an additional result, we give the success rate curve of score-based attacks with respect to the number of queries in the experiments. 
Figure~\\ref{fig_append_simba} shows how the success rate changes with the number of queries for SimBA-ODS and SimBA-DCT for the experiment of Table~\\ref{tab_black_score}. SimBA-ODS especially brings query-efficiency at small query levels. In Figure~\\ref{fig_append_square}, we also describe the success rate curve for the experiment of Table~\\ref{tab_black_square}. ODS-RGF outperforms other methods in the $\\ell_2$ norm.\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cc}\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/simba_untarget.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/simba_target.png}\n \\end{center}\n \\end{minipage}\\\\\n untargeted & targeted\n \\end{tabular}\n \\caption{Relationship between success rate and number of queries for score-based SimBA-ODS and SimBA-DCT.}\n \\label{fig_append_simba}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cccc}\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_targeted.png}\n \\end{center}\n \\end{minipage} &\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_linf_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_linf_targeted.png}\n \\end{center}\n \\end{minipage}\\\\\n untargeted ($\\ell_2$) & targeted ($\\ell_2$) & untargeted ($\\ell_\\infty$) & targeted ($\\ell_\\infty$)\n \\end{tabular}\n \\caption{Relationship between success rate and number of queries for SimBA-ODS, ODS-RGF, and Square Attack. 
Each attack is evaluated with norm bound $\\epsilon=5 (\\ell_2), 0.05 (\\ell_\\infty)$.}\n \\label{fig_append_square}\n\\end{figure*}\n\nIn Section~\\ref{sec_black_decision}, we demonstrated that Boundary-ODS outperformed state-of-the-art attacks in terms of median $\\ell_2$ perturbation. Here, we depict the relationship between the success rate and perturbation size (i.e. the frequency distribution of the perturbations) to show the consistency of the improvement. \nFigure~\\ref{fig_append_decision_curve} describes the cumulative frequency distribution of $\\ell_2$ perturbations for each attack at 10000 queries.\nBoundary-ODS consistently decreases $\\ell_2$ perturbations compared to other attacks in both untargeted and targeted settings. \n\n\n\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cc}\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/decision_l2hist_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/decision_l2hist_targeted.png}\n \\end{center}\n \\end{minipage}\\\\\n untargeted & targeted\n \\end{tabular}\n \\caption{Cumulative frequency distribution of $\\ell_2$ perturbations at 10000 queries for decision-based attacks.}\n \\label{fig_append_decision_curve}\n\\end{figure*}\n\n\n\n\\subsection{Comparison of ODS with TREMBA}\n\\label{appendix:ap_black_TREMBA}\nWe run experiments to compare ODS with TREMBA, which is a state-of-the-art attack with surrogate models, as we mentioned in Section~\\ref{sec_black_score_RGF}. \nTREMBA leverages surrogate models to learn a\nlow-dimensional embedding \nso as to obtain initial adversarial examples using a transfer-based attack and then update them using a score-based attack. \nAlthough TREMBA uses random sampling, ODS does not work well with TREMBA because random sampling of TREMBA is performed in the embedding space. 
In addition, it is difficult to directly compare attacks with ODS (e.g., ODS-RGF) and TREMBA because we do not discuss the combination of ODS with transfer-based attacks in this paper. \n\nHowever, we can start attacks with ODS (e.g., ODS-RGF) from images generated by any transfer-based attack and compare the resulting attack with TREMBA. \nWe generate starting points by SI-NI-DIM~\\citep{NesterovTransfer2020} (Scale-Invariant Nesterov Iterative FGSM integrated with the diverse input method), which is a state-of-the-art transfer-based attack, and run ODS-RGF from these starting points. \n\nWe adopt the same experiment setup as TREMBA~\\citep{Huang20TREMBA}: we evaluate attacks against four target models (VGG19, ResNet34, DenseNet121, MobileNetV2) for 1000 images on ImageNet and use four surrogate models (VGG16, Resnet18, Squeezenet~\\citep{squeezenet} and Googlenet~\\citep{googlenet} ).\nWe set the same hyperparameters as in the original paper~\\citep{Huang20TREMBA} for TREMBA. For fair comparisons, we set the same sample size (20) and use the same surrogate models as TREMBA for ODS-RGF. We also set the step size of ODS-RGF to 0.001. \nAs for SI-NI-DIM, we set hyperparameters following the original paper~\\citep{NesterovTransfer2020}: maximum iterations as 20, decay factor as 1, and number of scale copies as 5.\nWe describe the results in Table~\\ref{fig_black_TREMBA}. We can observe that \nODS-RGF with SI-NI-DIM is comparable to TREMBA. \n\nWe note that ODS is more flexible than TREMBA in some aspects. First, TREMBA is specific to the $\\ell_\\infty$ norm, whereas ODS can be combined with attacks in at least the $\\ell_\\infty$ and $\\ell_2$ norms. \nIn addition, TREMBA needs to train a generator per target class in targeted settings, whereas ODS does not need additional training. \n\n\n\\begin{table}[htb]\n\\caption{Comparison of ODS-RGF with TREMBA against four target models. 
The first two rows and the bottom two rows describe results for untargeted (U) attacks and targeted (T) attacks, respectively. The target class for targeted attacks is class 0. }\n\\begin{center}\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{cc|cc|cc|cc|cc}\n\\toprule\n&& \\multicolumn{2}{c|}{VGG19}& \\multicolumn{2}{c|}{ResNet34}& \\multicolumn{2}{c|}{DenseNet121}& \\multicolumn{2}{c}{MobilenetV2}\\\\\n&attack& success & query& success & query& success & query& success & query\\\\ \\hline\n\\multirow{2}{*}{U}&TREMBA~\\citep{Huang20TREMBA} & {\\bf 100.0\\%} & 34\n& {\\bf 100.0\\%} & 161 & {\\bf 100.0\\%} & 157 & {\\bf 100.0\\%}& 63\\\\\n&SI-NI-DIM~\\citep{NesterovTransfer2020} + ODS-RGF& {\\bf 100.0\\%} & {\\bf 18}\n& 99.9\\% & {\\bf 47} & 99.9\\% & {\\bf 50} & {\\bf 100.0\\%}& {\\bf 29} \\\\ \\hline\n\\multirow{2}{*}{T}&TREMBA~\\citep{Huang20TREMBA} & {98.6\\%} & 975\n& {96.7\\%} & {\\bf 1421} & {\\bf 98.5\\%} & {\\bf 1151} & {\\bf 99.0\\%}& {\\bf 1163}\\\\\n&SI-NI-DIM~\\citep{NesterovTransfer2020} + ODS-RGF& {\\bf 99.4\\%} & {\\bf 634}\n& {\\bf 98.7\\%} & {1578} & {98.2\\%} & 1550 & {98.3\\%}& {2006} \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{fig_black_TREMBA}\n\\end{table}\n\n\n\\subsection{Performance of ODS against different target models}\n\\label{appendix:ap_black_targetchange}\nIn this paper, we used the pre-trained ResNet50 model as the target model for all experiments in Section~\\ref{sec_black_experiment}. Here we set the pre-trained VGG19 model as the target model and run experiments to show that the efficiency of ODS does not depend on the target model.\nAs surrogate models, we replace VGG19 with ResNet50, i.e. we use four pre-trained models (ResNet50, ResNet34, DenseNet121, MobileNetV2).\n\nWe rerun the experiments for SimBA-ODS in Section~\\ref{sec_black_score} and Boundary-ODS in Section~\\ref{sec_black_decision}.
\nAll settings except the target model and surrogate models are the same as in the previous experiments.\nIn Tables~\\ref{tab_black_score_VGG} and \\ref{tab_black_decision_VGG}, ODS significantly improves attacks against the VGG19 model for both SimBA and the Boundary Attack. This indicates that the efficiency of ODS does not depend on the target model. \n\n\\begin{table}[htb]\n\\caption{Query counts and $\\ell_2$ perturbations for score-based Simple Black-box Attacks (SimBA) against the pre-trained VGG19 model on ImageNet.}\n\\begin{center}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{3-8}\n& num. of &success & average & median $\\ell_2$& success & average & median $\\ell_2$ \\\\ \nattack& surrogates& rate & query & distance & rate & query & distance \\\\ \\midrule\nSimBA-DCT~\\citep{Guo19} &0& {\\bf 100.0\\%} & 619 & 2.85 & {\\bf 100.0\\%} & 4091 &6.81 \\\\\nSimBA-ODS &4& {\\bf 100.0\\%} & {\\bf 176} & {\\bf 1.35} & 99.7\\% & {\\bf 1779} & {\\bf 3.31} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_score_VGG}\n\\end{table}\n\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for decision-based Boundary Attacks against the pre-trained VGG19 model on ImageNet.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{6}{c}{number of queries} \\\\\n& num.
of & \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ %\nattack&surrogates & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} &0& 45.62&11.79&4.19 & 75.10&41.63&27.34 \\\\\nBoundary-ODS&4 & {\\bf 6.03} & {\\bf 0.69} & {\\bf 0.43} & {\\bf 24.11} & {\\bf 5.44} & {\\bf 2.97}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_decision_VGG}\n\\end{table}\n\n\\subsection{Effect of the choice of surrogate models}\n\\label{appendix:ap_black_surrogatechange}\nIn Sections~\\ref{sec_black_score} and \\ref{sec_black_decision}, we mainly used four pre-trained models as surrogate models. To investigate the effect of the choice of surrogate models, we run attacks with seven different sets of surrogate models.\nAll settings except the surrogate models are the same as in the previous experiments.\n\nTables~\\ref{tab_black_score_surrogate} and \\ref{tab_black_decision_surrogate} show results for SimBA-ODS and Boundary-ODS, respectively. First, the first four rows in both tables are results for a single surrogate model. The degree of improvement depends on the model: ResNet34 gives the largest improvement and VGG19 the smallest. Next, the fifth and sixth rows show results for sets of two surrogate models. Combining surrogate models improves query efficiency, especially for targeted attacks. This suggests that the diversity provided by multiple surrogate models generally strengthens the attacks. Finally, the seventh row reports results for four surrogate models, which are not always better than those for the combination of two models (ResNet34 and DenseNet121). When the individual surrogate models perform very differently, combining them can be harmful.\n\n\\begin{table}[htb]\n\\caption{Query counts and $\\ell_2$ perturbations for SimBA-ODS attacks with various sets of surrogate models.
In the column of surrogate models, R:ResNet34, D:DenseNet121, V:VGG19, M:MobileNetV2.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{3-8}\nsurrogate& & success & average & median $\\ell_2$& success & average & median $\\ell_2$ \\\\ \\\nmodels&num.& rate & query & distance & rate & query & distance \\\\\n\\midrule\nR&1& {\\bf 100.0\\%} & {274} & 1.35 & 95.3\\%& 5115 & 3.50 \\\\\nD&1& {\\bf 100.0\\%} &342&1.38 &96.7\\% & 5282 & 3.51\\\\\nV&1& {\\bf 100.0\\%} &660 &1.78 &88.0\\% & 9769 & 4.80 \\\\\nM&1& {\\bf 100.0\\%} &475&1.70 &95.3\\%& 6539 & 4.53\\\\\nR,D&2& {\\bf 100.0\\%} &{\\bf 223}&{\\bf 1.31} & 98.0\\%& {\\bf 3381} & {\\bf 3.39}\\\\\nV,M&2& {\\bf 100.0\\%} &374&1.60 & 96.3\\%& 4696 & 4.27\\\\\nR,V,D,M &4& {\\bf 100.0\\%}& {241} & 1.40 & {\\bf 98.3\\%} & {3502} & {3.55}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_score_surrogate}\n\\end{table}\n\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS attacks with various sets of surrogate models. In the column of surrogate models, R:ResNet34, D:DenseNet121, V:VGG19, M:MobileNetV2.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n& & \\multicolumn{6}{c}{number of queries} \\\\ \nsurrogate& & \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nmodels & num. 
& 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nR&1 & 9.90&1.41&0.79 & 31.32&11.49&7.89\\\\\nD&1 & 10.12&1.39&0.76 &32.63&11.30&7.44\\\\\nV&1 & 22.68&3.47&1.52 & 49.18&24.26&17.75 \\\\\nM&1 & 20.67&2.34&1.10 & 44.90&18.62&12.01\\\\\nR,D&2 & {\\bf 7.53}&1.07&0.61 & {\\bf 26.00} & 8.08& 6.22\\\\\nV,M&2 & 17.60& 1.70& 0.92&39.63&14.97&9.21 \\\\\nR,V,D,M &4 & 7.57 & {\\bf 0.98} & {\\bf 0.57} & 27.24 & {\\bf 6.84} & {\\bf 3.76}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_decision_surrogate}\n\\end{table}\n\nIn Section~\\ref{sec_black_score_RGF}, we compared ODS-RGF with P-RGF using only the ResNet34 surrogate model. To show that the effectiveness of ODS-RGF is robust to the choice of surrogate models, we evaluate ODS-RGF with different surrogate models. Table~\\ref{fig_black_RGF_VGG} shows the query efficiency of ODS-RGF and P-RGF with the VGG19 surrogate model. We can observe that ODS-RGF outperforms P-RGF in all settings, \nand the results are consistent with the experiment in Section~\\ref{sec_black_score_RGF}.\n\n\\begin{table}[htb]\n\\caption{Comparison between ODS-RGF and P-RGF with the VGG19 surrogate model. Settings in the comparison are the same as in Figure~\\ref{fig_black_RGF}. }\n\\begin{center}\n\\setlength{\\tabcolsep}{3.7pt}\n\\begin{tabular}{ccc|ccc|ccc}\n\\toprule\n& & &\\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{4-9}\n & &num.
of & success & average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \nnorm& attack &surrogates & rate & queries & perturbation & rate & queries & perturbation \\\\ \n\\midrule\n\\multirow{3}{*}{$\\ell_2$}&RGF & 0& \\textbf{100.0\\%} & 633 & 3.07 &\\textbf{99.3\\%} &3141 & 8.23\\\\\n&P-RGF~\\citep{Cheng19prior} &1& \\textbf{100.0\\%} &467 & 3.03\n&{97.0\\%} &3130 & 8.18\\\\\n&ODS-RGF& 1& \\textbf{100.0\\%} &\\textbf{294}& \\textbf{2.24}\n&{98.0\\%} &\\textbf{2274} & \\textbf{6.60}\\\\ \\hline\n\\multirow{3}{*}{$\\ell_\\infty$} &RGF &0& {97.0\\%} & 520 & - \n&{25.0\\%} & 2971 & - \\\\\n&P-RGF~\\citep{Cheng19prior} &1& {98.7\\%} &337& - & {29.0\\%} & 2990 & -\\\\\n&ODS-RGF& 1& \\textbf{99.7\\%} &\\textbf{256}& - & \\textbf{45.7\\%} & \\textbf{2116} & - \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{fig_black_RGF_VGG}\n\\end{table}\n\n\\subsection{Effect of the number of surrogate models for the experiment in Section~\\ref{sec_black_limited}}\n\\label{appendix:ap_black_OOD_surrogatenum}\nIn Section~\\ref{sec_black_limited}, we showed that surrogate models trained on a limited out-of-distribution dataset are still useful for ODS. \nIn that experiment, we used five surrogate models with the same ResNet18 architecture. \nHere, we examine the importance of the number of surrogate models through experiments with different numbers of models. \nTable~\\ref{tab_black_limited_append} shows results for Boundary-ODS with different numbers of surrogate models. As the number of models increases, query efficiency consistently improves.\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS attacks with different numbers of surrogate models against out-of-distribution images on ImageNet.}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{6}{c}{number of queries} \\\\ \nnum.
of& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nsurrogates & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\n1 & 19.45 & 2.90 & 1.66 & 47.86& 25.30 & 20.46\\\\ \n2 & 15.45 & 2.42 & 1.35 & 43.45 & 19.30 & 13.78\\\\ \n3 & 13.75& 1.96&1.14& {\\bf 41.63}& 16.91 & 11.14\\\\ \n4 & 14.23& 1.86 & 1.21 & 41.65& 14.86 & 9.64\\\\ \n5 & {\\bf 11.27}& {\\bf 1.63}& {\\bf 0.98}& 41.67& {\\bf 13.72} & {\\bf 8.39}\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_limited_append}\n\\end{table}\n\n\n\\subsection{Score-based attacks with ODS against out-of-distribution images}\n\\label{appendix:ap_black_OOD_scorebased}\nIn Section~\\ref{sec_black_limited}, we demonstrated that the decision-based Boundary-ODS attack works well even if we only have surrogate models trained with a limited out-of-distribution dataset.\nHere, we evaluate score-based SimBA-ODS with these surrogate models. Except for the surrogate models, we adopt the same setting as in Section~\\ref{sec_black_score}.\n\nIn Table~\\ref{tab_black_OOD_SimBA}, SimBA-ODS with the out-of-distribution dataset outperforms SimBA-DCT in untargeted settings. \nIn targeted settings, while SimBA-ODS improves the $\\ell_2$ perturbation, the average queries for SimBA-ODS are comparable to those of SimBA-DCT. \nWe hypothesize that this is because ODS only explores a subspace of the input space; this restriction may lead to bad local optima. \nWe can mitigate this local optimum problem by temporarily applying random sampling when SimBA-ODS fails to update a target image for many steps in a row.\n\nWe note that decision-based Boundary-ODS with the OOD dataset is effective, as shown in Section~\\ref{sec_black_limited}.
\nWe hypothesize that the difference in effectiveness arises because Boundary-ODS does not use scores of the target model and thus does not get trapped in local optima.\n\n\n\n\\begin{table}[htb]\n\\caption{Query counts and $\\ell_2$ perturbations for SimBA-ODS attacks with surrogate models trained with OOD images on ImageNet.}\n\\begin{center}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{2-7}\n& success & average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \nattack& rate & queries & perturbation & rate & queries & perturbation \\\\ \\midrule\nSimBA-DCT~\\citep{Guo19} & {\\bf 100.0\\%} & 909 & 2.95 & {\\bf 97.0\\%} & 7114 &7.00 \\\\\nSimBA-ODS \n(OOD dataset) & {\\bf 100.0\\%} & {\\bf 491} & {\\bf 1.94} & 94.7\\% & {\\bf 6925} & {\\bf 4.92} \\\\ \\hline\nSimBA-ODS (full dataset) & {100.0\\%} & {242} & {1.40} & {98.3\\%} & {3503} & {3.55} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_OOD_SimBA}\n\\end{table}\n\n\n\\subsection{ODS against robust defense models}\n\\label{appendix:ap_black_defensemodel}\nIn this paper, we mainly discuss transferability when surrogate models and the target model are trained with similar training schemes. On the other hand, it is known that transferability decreases when models are trained with different training schemes, \ne.g. the target model uses adversarial training and the surrogate models use natural training. \nIf all surrogate models are trained with a natural training scheme, \nODS will also not work against adversarially trained target models. \nHowever, we can mitigate the problem by simultaneously using surrogates trained with various training schemes (most of which are publicly available).
In order to confirm this, we run an experiment to attack a robust target model using SimBA-ODS with both natural and robust surrogate models (a natural model and a robust model). In Table~\\ref{tab_black_advmodel}, the first row shows the attack performance of SimBA-DCT (without surrogate models) and the others show the performance of SimBA-ODS. \nIn the fourth row of Table~\\ref{tab_black_advmodel}, SimBA-ODS with natural and robust surrogate models significantly outperforms SimBA-DCT without surrogate models. This suggests that if the set of surrogates includes one that is similar to the target, ODS still works (even when some other surrogates are ``wrong'').\nWhile using both natural and robust surrogate models slightly underperforms the single adversarial surrogate model in the third row, dynamically selecting surrogate models during the attack could improve the performance, as we mentioned in the conclusion of the paper.\n\n\\begin{table*}[htbp]\n\\caption{Transferability of ODS when the training schemes of the surrogate models differ from that of the target model. \nR50 denotes the pretrained ResNet50 model, and R101(adv) and R152(adv) are the adversarially trained ResNeXt101 and ResNet152 denoise models from~\\citep{featureDenoise19}, respectively. All attacks are run in the same setting as in Section~\\ref{sec_black_score}.
\n}\n\\begin{center}\n\\begin{tabular}{cc|ccc}\n\\toprule\ntarget & surrogate & success rate & average queries & median $\\ell_2$ perturbation \\\\ \n \\midrule\nR101(adv) &- & 89.0\\% & 2824& 6.38\\\\\nR101(adv) &R50 & 80.0\\% & 4337& 10.15\\\\\nR101(adv) &R152(adv) & \\textbf{98.0\\%} &\\textbf{1066} & \\textbf{4.93}\\\\ \nR101(adv) &R50, R152(adv) & \\textbf{98.0\\%} &1304 & {5.62}\\\\\n\\bottomrule \n\\end{tabular}\n\\end{center}\n\\label{tab_black_advmodel}\n\\end{table*}\n\n\\section{Relationship and Comparison between ODS and MultiTargeted}\n\\label{appendix:ap_multitargeted}\nIn this section, we show that ODS provides better diversity than the MultiTargeted attack~\\citep{MT19} for initialization and sampling. \n\nMultiTargeted is a variant of white-box PGD attacks which maximizes $f_t(\\bm{\\mathrm{x}})-f_y(\\bm{\\mathrm{x}})$, where $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$ denotes the logits, $y$ is the original label and $t$ is a target label. The target label is changed at each restart. In other words, MultiTargeted moves a target image in a particular direction in the output space, which can be represented as, e.g., $\\bm{\\mathrm{w}}_{\\text{d}}=(1,0,-1,0)$, where 1 and -1 correspond to the target and original label, respectively. In this sense, the procedure of MultiTargeted is technically similar to ODS. \n\n\nHowever, there are some key differences between MultiTargeted and ODS. One of the differences is the motivation: MultiTargeted was proposed as a white-box attack, and the study only focused on $\\ell_p$-bounded white-box attacks. On the other hand, our study provides broader applications to both white- and black-box attacks. As far as we know, ODS is the first method that exploits output diversity for initialization and sampling.\n\nAnother difference is the necessity of the original label of target images.
\nODS does not require the original class of the target image,\nand thus ODS is applicable to black-box attacks \neven if surrogate models are trained with an out-of-distribution training dataset, as shown in Section~\\ref{sec_black_limited}.\nOn the other hand, \nsince MultiTargeted exploits the original label of target images to calculate the direction of the attack, \nwe cannot apply MultiTargeted to sampling for black-box attacks against out-of-distribution images. \n\n\nFinally, the level of diversity is also different. \nAs we mentioned in Section~\\ref{sec_related}, the direction of MultiTargeted is restricted to directions away from the original class.\nThis restriction could be harmful for diversity because it limits the subspace of explored directions. \nTo demonstrate this, we apply MultiTargeted to initialization for white-box attacks and to sampling for black-box attacks, and show that ODI provides better diversity than MultiTargeted for initialization and sampling (especially for sampling).\n\n\\paragraph{Initialization in white-box settings}\nWe apply MultiTargeted to initialization for white-box attacks in Section~\\ref{sec_white_various}. \nTable~\\ref{tab_appendix_MTwhite} compares attack performance with initialization by MultiTargeted and by ODI.\nFor PGD attacks, MultiTargeted is slightly better than ODI. We hypothesize that this is because MultiTargeted was developed as a variant of PGD attacks and the initialization by MultiTargeted also works as an attack method. \nOn the other hand, ODI outperforms MultiTargeted for C\\&W attacks. In this setting, MultiTargeted does not work as an attack method, and thus the difference in diversity accounts for the difference in performance.\n\\begin{table*}[htb]\n\\caption{Comparison of model performance under attacks with MultiTargeted (MT) and ODI. The values are model accuracy (lower is better) for PGD and the average of the minimum $\\ell_2$ perturbations (lower is better) for C\\&W.
All results are the average of three trials. Results for ODI are from Table~\\ref{tab_Linf}.\n}\n\\begin{center}\n\\setlength{\\tabcolsep}{4.5pt}\n\\begin{tabular}{c|cc|cc}\n\\toprule\n & \\multicolumn{2}{|c}{PGD } &\n\\multicolumn{2}{|c}{C\\&W} \\\\ \nmodel & MT & ODI & MT & ODI \n\\\\ \\midrule\nMNIST & $\\textbf{89.95}\\pm 0.05\\%$ & ${90.21}\\pm 0.05\\%$ & ${2.26}\\pm0.01$ & $\\textbf{2.25}\\pm0.01$ \\\\ \nCIFAR-10 & $\\textbf{44.33}\\pm 0.01\\%$ &${44.45}\\pm 0.02\\%$ & $0.69\\pm0.01$ &$\\textbf{0.67}\\pm0.00$ \\\\ \nImageNet & $\\textbf{42.2}\\pm 0.0\\%$ & ${42.3}\\pm 0.0\\%$& $2.30\\pm0.01$ & $\\textbf{1.32}\\pm0.01$ \\\\ \n\\bottomrule \n\\end{tabular}\n\\end{center}\n\\label{tab_appendix_MTwhite}\n\\end{table*}\n\n\n\\paragraph{Sampling in black-box settings} \nWe use MultiTargeted for sampling in the Boundary Attack in Section~\\ref{sec_black_decision} (called Boundary-MT), and compare it with Boundary-ODS. Table~\\ref{tab_append_MTblack} and Figure~\\ref{fig_append_MTblack} show the results of the comparison. While Boundary-MT outperforms the original Boundary Attack, Boundary-ODS finds much smaller adversarial perturbations than Boundary-MT. \n\nIn Figure~\\ref{fig_append_MTblack}, Boundary-MT slightly outperforms Boundary-ODS when the number of queries is small. We hypothesize that this is because MultiTargeted acts not as a source of diversity but as an approximation of the gradient of the loss function. However, as the number of queries grows, the curve of Boundary-MT saturates, and Boundary-MT underperforms Boundary-ODS. This is evidence that restricting directions is harmful for sampling.
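The limited-direction argument can be made concrete with a small numeric sketch (our own illustration, not code from either paper): for a $C$-class model, MultiTargeted's output-space direction is always a two-hot vector with $+1$ on a target class and $-1$ on the original class, so there are only $C-1$ distinct directions per image, whereas ODS draws $\\bm{\\mathrm{w}}_{\\text{d}}$ from a continuous distribution over $[-1,1]^C$:

```python
import numpy as np

rng = np.random.default_rng(2)
C = 10  # number of classes (toy value)

def mt_direction(y, t):
    # MultiTargeted output-space direction: +1 on target t, -1 on original y
    w = np.zeros(C)
    w[t], w[y] = 1.0, -1.0
    return w

def ods_direction():
    # ODS output-space direction: uniform over [-1, 1]^C
    return rng.uniform(-1.0, 1.0, size=C)

y = 0
mt_set = {tuple(mt_direction(y, t)) for t in range(C) if t != y}
ods_set = {tuple(np.round(ods_direction(), 6)) for _ in range(100)}
# MT offers only C-1 fixed directions per image; ODS samples from a
# continuous distribution, so essentially every draw is new.
print(len(mt_set), len(ods_set))
```

The sketch only counts distinct directions; the experiments above measure how this difference plays out in attack performance.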
%\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary Attack with ODS and MultiTargeted (MT).}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{6}{c}{number of queries} \\\\\n& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nattack & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} & 45.07& 11.46 & 4.30 & 73.94 &41.88 &27.05 \\\\\nBoundary-ODS & {\\bf 7.57} & {\\bf 0.98} & {\\bf 0.57} & {\\bf 27.24} & {\\bf 6.84} & {\\bf 3.76}\\\\\nBoundary-MT & 7.65&2.20&2.01&28.16&18.48&16.59 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_append_MTblack}\n\\end{table}\n\\begin{figure}[ht]\n\\centering\n\n \\begin{tabular}{cc}\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/boundary_untarget_MT.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/boundary_target_MT.png}\n \\end{center}\n \\end{minipage}\n \\\\\n Untargeted & Targeted \n \\end{tabular}\n \\caption{Relationship between median $\\ell_2$ perturbations and the number of queries for Boundary Attack with ODS and MultiTargeted. Error bars show 25\\%ile and 75\\%ile of $\\ell_2$ perturbations. }\n \\label{fig_append_MTblack}\n\\end{figure}\n\n\\section{Introduction}\nDeep neural networks have achieved great success in image classification. However, it is known that they are vulnerable to adversarial examples~\\citep{Szegedy13} --- small perturbations imperceptible to humans that cause classifiers to output wrong predictions. \nSeveral studies have focused on improving model robustness against these malicious perturbations. 
Examples include adversarial training~\\citep{madry17,Ian15}, %\ninput purification using generative models~\\citep{song2017pixeldefend,samangouei2018defense}, \nregularization of the training loss~\\citep{regGrad18,regStGrad18,regCurv19,regLinear19}, \nand certified defenses~\\citep{Certify17,Certify18_2,Certify19}. \n\n\nStrong attacking methods are crucial for evaluating the robustness of classifiers and defense mechanisms. \nMany existing adversarial attacks rely on random sampling, i.e., adding small random noise to the input.\nIn white-box settings, random sampling is widely used for random restarts~\\citep{PGD17,Dist19,fab19,MT19} to find a diverse set of starting points for the attacks.\nSome black-box attack methods also use random sampling to explore update directions~\\citep{Brendel18,Guo19} or to estimate gradients of the target models~\\citep{ilyas2018blackbox,IEM2018PriorCB,autozoom2019}. \nIn these attacks, random perturbations are typically sampled from a na\\\"{i}ve uniform or Gaussian distribution in the input pixel space.\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{ccc|ccc}\n\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.88\\textwidth]{image\/input_target.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.908\\textwidth]{image\/output_target.png}\n \\end{center}\n \\end{minipage} &&&\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.908\\textwidth]{image\/output_target.png}\n \\end{center}\n \\end{minipage} &\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.908\\textwidth]{image\/output_surrogate.png}\n \\end{center}\n \\end{minipage}\n \\\\ \n Input space& Output space&&& Output space &Output space \n \\\\\n &&&& (surrogate model) &(target model)\n \\end{tabular}\n \\caption{Illustration of the differences between random sampling (blue dashed arrows) and ODS (red solid 
arrows). In each figure, the black `o' corresponds to an original image, and white `o's represent sampled perturbations. (Left): white-box setting. Perturbations by ODS in the input space are crafted by maximizing the distance in the output space. (Right): black-box setting. Perturbations crafted on the surrogate model transfer well to perturbations on the target model. \n }\n \\label{figure1}\n \\vskip -0.0in\n\\end{figure*}\n\nRandom sampling in the input space, however, may not sufficiently explore the output (logits) space of a neural network --- diversity in the input space does not directly translate to diversity in the output space of a deep nonlinear model. We illustrate this phenomenon in the left panel of Figure~\\ref{figure1}. When we add random perturbations to an image in the input space (see dashed blue arrows in the first plot of Figure~\\ref{figure1}), the corresponding output logits could be very similar to the output for the original image (as illustrated by the second plot of Figure~\\ref{figure1}). \nEmpirically, we observe that this phenomenon can negatively impact the performance of attack methods.\n\nTo overcome this issue, we propose a sampling strategy designed to obtain samples that are diverse in the output space. Our idea is to perturb an input away from the original one as measured directly by distances in the output space (see solid red arrows in the second plot in Figure~\\ref{figure1}). \nFirst, we randomly specify a direction in the output space. Next, we perform gradient-based optimization to generate a perturbation in the input space that \nyields a large change in the specified direction. \nWe call this new sampling technique \\uline{Output Diversified Sampling} (ODS).\n\n\nODS can improve adversarial attacks under both white-box and black-box settings. For white-box attacks, we exploit ODS to initialize the optimization procedure of finding adversarial examples (called ODI). 
ODI typically provides much more diverse (and effective) starting points for adversarial attacks. \nMoreover, this initialization strategy is agnostic to the underlying attack method, and can be incorporated into most optimization-based white-box attack methods.\nEmpirically, we demonstrate that ODI improves the performance of $\\ell_\\infty$ and $\\ell_2$ attacks compared to na\\\"{i}ve initialization methods.\nIn particular, the PGD attack with ODI outperforms the state-of-the-art MultiTargeted attack~\\citep{MT19} against pre-trained defense models, with 50 times smaller computational complexity on CIFAR-10.\n\n\nIn black-box settings, we cannot directly apply ODS because we do not have access to gradients of the target model. As an alternative, we apply ODS to surrogate models and observe that the resulting samples are diverse with respect to the target model: diversity in the output space transfers (see the rightmost two plots in Figure~\\ref{figure1}). \nEmpirically, \nwe demonstrate that ODS can reduce the number of queries needed for a score-based attack (SimBA~\\citep{Guo19}) by a factor of two on ImageNet. \n{ODS also shows better query-efficiency than P-RGF~\\citep{Cheng19prior}, which is another method exploiting surrogate models to improve a black-box attack.}\nThese attacks with ODS achieve better query-efficiency than the state-of-the-art Square Attack~\\citep{ACFH2019square}.\nIn addition, ODS with a decision-based attack (Boundary Attack~\\citep{Brendel18}) reduces the median perturbation distances of adversarial examples by a factor of three compared to the state-of-the-art HopSkipJump~\\citep{chen2019hop} and Sign-OPT~\\citep{cheng20sign} attacks. 
\n\n\n\n\n\\section{Preliminaries}\nWe denote an image classifier as $\\bm{\\mathrm{f}}: \\bm{\\mathrm{x}}\\in[0,1]^D \\mapsto \\bm{\\mathrm{z}}\\in \\mathbb{R}^{C}$, where $\\bm{\\mathrm{x}}$ is an input image, $\\bm{\\mathrm{z}}$ represents the logits, and $C$ is the number of classes.\nWe use\n$ h(\\bm{\\mathrm{x}}) = \\argmax_{c=1,\\ldots,C} f_{c}(\\bm{\\mathrm{x}})$ to denote the model prediction, where $f_{c}(\\bm{\\mathrm{x}})$ is the $c$-th element of $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$. %\n\nAdversarial attacks can be classified into targeted and untargeted attacks.\nGiven an image $\\bm{\\mathrm{x}}$, a label $y$ and a classifier $\\bm{\\mathrm{f}}$, \nthe purpose of untargeted attacks \nis to find an adversarial example $\\bm{\\mathrm{x}}^{\\text{adv}}$ that is similar to $\\bm{\\mathrm{x}}$ but causes misclassification $ h(\\bm{\\mathrm{x}}^{\\text{adv}}) \\neq y $. \nIn targeted settings, attackers aim to change the model prediction $h(\\bm{\\mathrm{x}}^{\\text{adv}})$ to a particular target label $t \\neq y$. \nThe typical goal of adversarial attacks is to find an adversarial example $\\bm{\\mathrm{x}}^{\\text{adv}}$ within $B_{\\epsilon}(\\bm{\\mathrm{x}}) = \\{ \\bm{\\mathrm{x}} + \\bm{\\delta} : \\|\\bm{\\delta}\\|_{p} \\leq \\epsilon \\}$, i.e., the $\\epsilon$-radius ball around an original image $\\bm{\\mathrm{x}}$. Another common setting is to find a valid adversarial example with the smallest $\\ell_p$ distance from the original image. \n\n\\paragraph{White-box attacks}\nIn white-box settings, attackers can access full information of the target model. 
One strong and popular example is the Projected Gradient Descent (PGD) attack~\\citep{madry17}, which iteratively applies the following update rule:\n\\begin{equation}\n\\bm{\\mathrm{x}}^{\\text{adv}}_{k+1} \n= \\mathrm{Proj}_{B_{\\epsilon}(\\bm{\\mathrm{x}})} \\left( \\bm{\\mathrm{x}}^{\\text{adv}}_k + \\eta \\, \\mathrm{sign} \\left(\\nabla_{\\bm{\\mathrm{x}}^{\\text{adv}}_k} L(\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}^{\\text{adv}}_k),y) \n \\right) \\right)\n\\label{eq_pgd}\n\\end{equation}\nwhere $\\mathrm{Proj}_{B_{\\epsilon}(\\bm{\\mathrm{x}})}(\\bm{\\mathrm{x}}^{\\text{adv}}) \\triangleq \\argmin_{\\bm{\\mathrm{x}}' \\in B_{\\epsilon}(\\bm{\\mathrm{x}})} \\|\\bm{\\mathrm{x}}^{\\text{adv}}-\\bm{\\mathrm{x}}'\\|_{p}$, $\\eta$ is the step size, and $L(\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}),y)$ is a loss function, e.g. the margin loss defined as $\\max_{i \\neq y} f_{i}(\\bm{\\mathrm{x}}) - f_{y}(\\bm{\\mathrm{x}})$. \nTo increase the odds of success, the procedure is restarted multiple times with initial inputs sampled uniformly from $B_{\\epsilon}(\\bm{\\mathrm{x}})$.\n\n\n\\paragraph{Black-box attacks}\nIn black-box settings, the attacker only has access to outputs of the target model without knowing its architecture and weights. Black-box attacks can be largely classified into transfer-based, score-based, and decision-based methods. Transfer-based attacks craft white-box adversarial examples with respect to surrogate models, and transfer them to the target model. The surrogate models are typically trained with the same dataset as the target model so that they are close to each other.\nIn score-based settings, attackers can observe the output scores (logits) of the classifier, while in decision-based settings, attackers can only access the output labels of the classifier. \nFor the latter two approaches, attacks are typically evaluated in terms of query efficiency, i.e.
the number of queries needed to generate an adversarial example and its perturbation size.\n\nRecently, several studies~\\citep{Cheng19prior,subspaceattack,Cai19transferSMBdirect} employed surrogate models to estimate the gradients of the loss function of the target model.\nSome attack methods used random sampling in the input space, \nsuch as the decision-based Boundary Attack~\\citep{Brendel18} and the score-based Simple Black-Box Attack~\\citep{Guo19}.\n\n\\section{Output Diversified Sampling}\n\nAs intuitively presented in Figure~\\ref{figure1}, random sampling in the input space does not necessarily produce samples with high diversity as measured in the output space. To address this problem, we propose Output Diversified Sampling (ODS). Given an image $\\bm{\\mathrm{x}}$, a classifier $\\bm{\\mathrm{f}}$ and the direction of diversification $\\bm{\\mathrm{w}}_{\\text{d}} \\in \\mathbb{R}^{C}$, \nwe define the normalized perturbation vector of ODS as follows:\n\\begin{equation}\n\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}},\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}}) = \\frac{\\nabla_{\\bm{\\mathrm{x}}} (\\bm{\\mathrm{w}}_{\\text{d}}^\\intercal \\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}))}{\\| \\nabla_{\\bm{\\mathrm{x}}} (\\bm{\\mathrm{w}}_{\\text{d}}^\\intercal \\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})) \\|_2},\n\\end{equation}\nwhere $\\bm{\\mathrm{w}}_{\\text{d}}$ is sampled from the uniform distribution over $[-1,1]^C$. \nBelow we show how to enhance white- and black-box attacks with ODS.\n\n\n\n\\subsection{Initialization with ODS for white-box attacks}\n\\label{sec_ODS_white}\nIn white-box settings, we utilize ODS for initialization (ODI) to generate output-diversified starting points. 
Given an original input $\\bm{\\mathrm{x}}_{\\text{org}}$ and the direction for ODS $\\bm{\\mathrm{w}}_{\\text{d}}$, we try to find a restart point $\\bm{\\mathrm{x}}$ that is as far away from $\\bm{\\mathrm{x}}_{\\text{org}}$ as possible by maximizing $\\bm{\\mathrm{w}}_{\\text{d}}^\\intercal (\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})-\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}_{\\text{org}}))$ via the following iterative update:\n\\begin{equation}\n\\label{eq_linf}\n\\bm{\\mathrm{x}}_{k+1} = \\mathrm{Proj}_{B(\\bm{\\mathrm{x}}_{\\text{org}})} \\left( \\bm{\\mathrm{x}}_{k} + \\eta_{\\text{ODI}} \\, \\mathrm{sign}(\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_k,\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}})) \\right)\n\\end{equation}\nwhere $B(\\bm{\\mathrm{x}}_{\\text{org}})$ is the set of allowed perturbations, which is typically an $\\epsilon$-ball in $\\ell_p$ norm, and $\\eta_{\\text{ODI}}$ is a step size. When applying ODI to $\\ell_2$ attacks, we omit the sign function. \nAfter some steps of ODI, we start an attack from the restart point obtained by ODI. \nWe sample a new direction $\\bm{\\mathrm{w}}_{\\text{d}}$ for each restart in order to obtain diversified starting points for the attacks.\nWe provide the pseudo-code for ODI in Algorithm~\\ref{alg:ap_white} of the Appendix. \n\n\n\nOne sampling step of ODI costs roughly the same time as one iteration of most gradient-based attacks (e.g., PGD). \nEmpirically, we observe that the number of ODI steps $N_{\\text{ODI}}=2$ is already sufficient to obtain diversified starting points (details of the sensitivity analysis are in Appendix~\\ref{appendix:ap_white_sensitivity}), and fix $N_{\\text{ODI}}=2$ in all our experiments unless otherwise specified. 
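A minimal sketch of the ODI loop for the $\ell_\infty$ case, under the same stand-in assumption of a closed-form surrogate gradient; the projection onto the $\epsilon$-ball reduces to elementwise clipping, and for $\ell_2$ attacks the sign function would be omitted, as noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def odi_init(x_org, grad_fn, num_classes, eps=8 / 255, eta_odi=8 / 255, n_odi=2):
    # A few signed ODS steps, each projected back onto the l_inf
    # eps-ball around x_org (for l_2 attacks, omit np.sign).
    x = x_org.copy()
    for _ in range(n_odi):
        w_d = rng.uniform(-1.0, 1.0, size=num_classes)
        g = grad_fn(x, w_d)
        v_ods = g / np.linalg.norm(g)
        x = x + eta_odi * np.sign(v_ods)
        x = np.clip(x, x_org - eps, x_org + eps)  # Proj onto B(x_org)
        x = np.clip(x, 0.0, 1.0)                  # stay a valid image
    return x

# Toy linear logits f(x) = W x as a stand-in for a real network.
W = rng.standard_normal((10, 3072))
grad_fn = lambda x, w_d: W.T @ w_d
x_org = rng.uniform(0.0, 1.0, 3072)
x_start = odi_init(x_org, grad_fn, num_classes=10)  # PGD restarts from here
```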
We emphasize that ODS is not limited to PGD, and can be applied to a wide family of optimization-based adversarial attacks.\n\n\\textbf{Experimental verification of increased diversity:} \nWe quantitatively evaluate the diversity of starting points in terms of pairwise distances of output values $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$, confirming the intuition presented in the left two plots of Figure~\\ref{figure1}. We take a robust model on CIFAR-10 as the target model, and generate starting points with both ODI and standard uniform initialization to calculate the mean pairwise distance. The pairwise distance (i.e. diversity) obtained by ODI is 6.41, which is about 15 times larger than that from uniform initialization (0.38). In addition, PGD with the same number of steps as ODI does not generate diverse samples (the pairwise distance is 0.43). \nDetails are in Appendix~\\ref{appendix:ap_white_diversity}.\n\n\\subsection{Sampling update directions with ODS for black-box attacks}\n\\label{sec_ODS_black}\nIn black-box settings, we employ ODS to sample update directions instead of random sampling. \nGiven a target classifier $\\bm{\\mathrm{f}}$, we cannot directly calculate the ODS perturbation $\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}},\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}})$ because gradients of the target model $\\bm{\\mathrm{f}}$ are unknown. Instead, we introduce a surrogate model $\\bm{\\mathrm{g}}$ and calculate the ODS vector $\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}},\\bm{\\mathrm{g}},\\bm{\\mathrm{w}}_{\\text{d}})$. \n\nODS can be applied to attack methods that rely on random sampling in the input space. Since many black-box attacks use random sampling to explore update directions~\\citep{Brendel18,Guo19} or estimate gradients of the target models~\\citep{ilyas2018blackbox,autozoom2019,chen2019hop}, ODS has broad applications. 
\nIn this paper, we apply ODS to two popular black-box attacks that use random sampling: the decision-based Boundary Attack~\\citep{Brendel18} and the score-based Simple Black-Box Attack (SimBA~\\citep{Guo19}). In addition, we compare ODS with P-RGF~\\citep{Cheng19prior}, another attack method that uses surrogate models.\n\nTo illustrate how we apply ODS to existing black-box attack methods, we provide the pseudo-code of SimBA~\\citep{Guo19} with ODS in Algorithm~\\ref{alg:black}.\nThe original SimBA algorithm picks an update direction $\\bm{\\mathrm{q}}$ randomly from a group of candidates $Q$ that are orthonormal to each other. We replace it with ODS, as shown in lines 5 and 6 of Algorithm~\\ref{alg:black}. For other attacks, we replace random sampling with ODS in a similar way.\nNote that in Algorithm~\\ref{alg:black}, we make use of multiple surrogate models and uniformly sample one each time, since we empirically found that using multiple surrogate models can make attacks stronger.\n\n\\textbf{Experimental verification of increased diversity:} \nWe quantitatively verify that ODS can lead to high diversity in the output space of the target model, as shown in the right two plots of Figure~\\ref{figure1}. We use pre-trained Resnet50~\\citep{resnet16} and VGG19~\\citep{VGG19} models on ImageNet as the target and surrogate models, respectively. We calculate and compare the mean pairwise distances of samples with ODS and random Gaussian sampling. The pairwise distance (i.e. diversity) for ODS is 0.79, which is 10 times larger than that of Gaussian sampling (0.07). Details are in Appendix~\\ref{appendix:ap_black_diversity}. We additionally observe that ODS does not produce diversified samples when we use random networks as surrogate models. This indicates that good surrogate models are crucial for transferring diversity. 
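The diversity numbers quoted in these verification paragraphs are mean pairwise $\ell_2$ distances between output vectors, which can be computed with a short helper; the toy inputs below are illustrative only.

```python
import numpy as np

def mean_pairwise_distance(outputs):
    # Mean l_2 distance over all unordered pairs of output vectors
    # f(x_1), ..., f(x_n); larger values mean more output diversity.
    outputs = np.asarray(outputs)
    n = len(outputs)
    dists = [np.linalg.norm(outputs[i] - outputs[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

# Identical outputs give zero diversity; spread-out outputs do not.
print(mean_pairwise_distance([np.zeros(10)] * 5))  # 0.0
print(mean_pairwise_distance(np.eye(10)) > 0.0)    # True
```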
\n\n\n\\begin{algorithm}[tb]\n \\caption{Simple Black-box Attack~\\citep{Guo19} with ODS }\n \\label{alg:black}\n\\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} A target image $\\bm{\\mathrm{x}}$, loss function $L$, a target classifier $\\bm{\\mathrm{f}}$, a set of surrogate models $\\mathcal{G}$\n \\STATE {\\bfseries Output:} attack result $\\bm{\\mathrm{x}}_{\\text{adv}}$\n \\STATE Set the starting point $\\bm{\\mathrm{x}}_{\\text{adv}} = \\bm{\\mathrm{x}}$ \n \\WHILE {$\\bm{\\mathrm{x}}_{\\text{adv}}$ is not adversarial}\n \\STATE Choose a surrogate model $\\bm{\\mathrm{g}}$ from $\\mathcal{G}$, and sample $\\bm{\\mathrm{w}}_{\\text{d}} \\sim U(-1,1)^C$\n \\STATE Set $\\bm{\\mathrm{q}} = \\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_{\\text{adv}},\\bm{\\mathrm{g}},\\bm{\\mathrm{w}}_{\\text{d}})$\n \\FOR{$\\alpha \\in \\{\\epsilon, -\\epsilon\\}$}\n \\IF{$L(\\bm{\\mathrm{x}}_{\\text{adv}}+\\alpha \\cdot \\bm{\\mathrm{q}}) > L(\\bm{\\mathrm{x}}_{\\text{adv}})$}\n \\STATE Set $\\bm{\\mathrm{x}}_{\\text{adv}}=\\bm{\\mathrm{x}}_{\\text{adv}}+\\alpha \\cdot \\bm{\\mathrm{q}}$ and {\\bf break}\n \\ENDIF\n \\ENDFOR\n \\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\section{Experiments in white-box settings}\n\\label{sec_white}\nIn this section, we show that the diversity offered by ODI can improve white-box attacks for both $\\ell_\\infty$ and $\\ell_2$ distances. \nMoreover, we demonstrate that a simple combination of PGD and ODI achieves new state-of-the-art attack success rates. All experiments are for untargeted attacks.\n\n\n\n\\subsection{Efficacy of ODI for white-box attacks}\n\\label{sec_white_various}\nWe combine ODI with two popular attacks: the PGD attack~\\citep{madry17} with the $\\ell_\\infty$ norm and the C\\&W attack~\\citep{cw17} with the $\\ell_2$ norm. \nWe run these attacks on MNIST, CIFAR-10 and ImageNet. 
\n\n\\paragraph{Setup} \nWe perform attacks against three adversarially trained models from MadryLab\\footnote{\\url{https:\/\/github.com\/MadryLab\/mnist_challenge} and \\url{https:\/\/github.com\/MadryLab\/cifar10_challenge}. We use their secret model.}~\\citep{madry17} for MNIST and CIFAR-10 and the Feature Denoising ResNet152 network\\footnote{\\url{https:\/\/github.com\/facebookresearch\/ImageNet-Adversarial-Training}.}~\\citep{featureDenoise19} for ImageNet. For PGD attacks, we evaluate the model accuracy with 20 restarts, where starting points are uniformly sampled over an $\\epsilon$-ball for the na\\\"{i}ve resampling. For C\\&W attacks, we calculate the minimum $\\ell_2$ perturbation that yields a valid adversarial example among 10 restarts for each image, and measure the average of the minimum perturbations. Note that the original C\\&W paper~\\citep{cw17} did not apply random restarts. Here, for the na\\\"{i}ve initialization of C\\&W attacks, we sample starting points from a Gaussian distribution and clip them into an $\\epsilon$-ball (details in Appendix~\\ref{appendix:ap_parameter_whiteall}). \n\nFor fair comparison, we test different attack methods with the same amount of computation. Specifically, we compare $k$-step PGD with na\\\"{i}ve initialization (denoted as PGD-$k$) against ($k$-2)-step PGD with 2-step ODI (denoted as ODI-PGD-($k$-2)). We do not adjust the number of steps for C\\&W attacks because the computation time of 2-step ODI is negligible for C\\&W attacks. %\n\n\n\\begin{table*}[htb]\n\\caption{Comparing different white-box attacks. We report model accuracy (lower is better) for PGD and the average of the minimum $\\ell_2$ perturbations (lower is better) for C\\&W. 
All results are the average of three trials.\n}\n\\begin{center}\n\\begin{tabular}{c|cc|cc}\n\\toprule\n & \\multicolumn{2}{|c}{PGD } &\n\\multicolumn{2}{|c}{C\\&W} \\\\ \nmodel & na\\\"{i}ve (PGD-$k$) & ODI (ODI-PGD-($k$-2)) & na\\\"{i}ve & ODI \n\\\\ \\midrule\nMNIST & $90.31\\pm 0.02\\%$ & $\\textbf{90.21}\\pm 0.05\\%$ & ${2.27}\\pm0.00$ & $\\textbf{2.25}\\pm0.01$ \\\\ \nCIFAR-10 & $46.06\\pm 0.02\\%$ &$\\textbf{44.45}\\pm 0.02\\%$ & $0.71\\pm0.00$ & $\\textbf{0.67}\\pm0.00$ \\\\ \nImageNet & $43.5\\pm 0.0\\%$ & $\\textbf{42.3}\\pm 0.0\\%$& $1.58\\pm0.00$ & $\\textbf{1.32}\\pm0.01$ \\\\ \n\\bottomrule \n\\end{tabular}\n\\end{center}\n\\label{tab_Linf}\n\\end{table*}\n\n\n\n\n\n\\paragraph{Results} \nWe summarize all quantitative results in Table~\\ref{tab_Linf}. %\nAttack performance with ODI is better than with na\\\"{i}ve initialization for all models and attacks. \nThe improvement by ODI on the CIFAR-10 and ImageNet models is more significant than on the MNIST model. We hypothesize that this is due to the difference in model non-linearity. When a target model includes more non-linear transformations,\nthe difference in diversity between the input and output space could be larger, in which case ODI will be more effective in providing a diverse set of restarts. \n\n\n\n\\subsection{Comparison between PGD attack with ODI and state-of-the-art attacks}\n\\label{sec_sota}\nTo further demonstrate the power of ODI, we perform ODI-PGD against MadryLab's robust models~\\citep{madry17} on MNIST and CIFAR-10 \nand compare ODI-PGD with state-of-the-art attacks.\n\n\\paragraph{Setup}\nOne state-of-the-art attack we compare with is the well-tuned PGD attack~\\citep{MT19}, which achieved 88.21\\% accuracy for the robust MNIST model. The other attack we focus on is the MultiTargeted attack~\\citep{MT19}, which obtained 44.03\\% accuracy against the robust CIFAR-10 model. \nWe use all test images on each dataset and perform ODI-PGD under two different settings. 
One is the same as Section~\\ref{sec_white_various}. \nThe other is ODI-PGD with tuned hyperparameters, e.g. increasing the number of steps and restarts. Please see Appendix~\\ref{appendix:ap_parameter_ODI} for more details of tuning.\n\n\n\\begin{table*}[htbp]\n\\caption{Comparison of ODI-PGD with state-of-the-art attacks against pre-trained defense models. The complexity rows display products of the number of steps and restarts. Results for ODI-PGD are the average of three trials.\nFor ODI-PGD, the number of steps is the sum of ODS and PGD steps. \n}\n\\begin{center}\n\\setlength{\\tabcolsep}{4.5pt}\n\\begin{tabular}{cc|cccc}\n\\toprule\nmodel& & \\begin{tabular}{c}ODI-PGD \\\\ \n(in Sec.~\\ref{sec_white_various}) \\end{tabular} & \\begin{tabular}{c}tuned \\\\ODI-PGD \\end{tabular}\n & \\begin{tabular}{c}tuned PGD \\\\~\\citep{MT19} \\end{tabular}\n & \\begin{tabular}{c}MultiTargeted \\\\~\\citep{MT19} \\end{tabular} \\\\ \\midrule\n \\multirow{2}{*}{MNIST} &accuracy & $90.21\\pm 0.05\\%$ & $\\textbf{88.13}\\pm 0.01\\%$ & 88.21\\% & 88.36\\% \\\\\n &complexity & \n$40 \\times 20$ & $1000\\times 1000 $ & $1000 \\times 1800$ &\n$1000 \\times 1800$ \\\\ \\hline \n\\multirow{2}{*}{CIFAR-10}&accuracy& $44.45\\pm 0.02\\%$ & $\\textbf{44.00} \\pm 0.01\\%$ & 44.51\\% & 44.03\\% \\\\\n &complexity & $20 \\times 20$ & $150 \\times 20$ & $1000 \\times 180$ & $1000 \\times 180$ \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_sota}\n\\end{table*}\n\n\n\n\\paragraph{Results}\nWe summarize the comparison between ODI-PGD and state-of-the-art attacks in Table~\\ref{tab_sota}. \nOur tuned ODI-PGD reduces the accuracy \nto 88.13\\% for the MNIST model, and to 44.00\\% for the CIFAR-10 model. These results outperform existing state-of-the-art attacks. 
\n\n\nTo compare their running time, we report the total number of steps (the number of steps multiplied by the number of restarts) as a metric of complexity, because the total number of steps is equal to the number of gradient computations (the computation time per gradient evaluation is comparable for all gradient-based attacks).\nIn Table~\\ref{tab_sota}, the computational cost of tuned ODI-PGD is smaller than that of state-of-the-art attacks, and in particular 50 times smaller on CIFAR-10. \nSurprisingly, even without tuning, ODI-PGD (in the first column) can still outperform tuned PGD~\\citep{MT19}\nwhile also being \ndrastically more efficient computationally.\n\n\\section{Experiments in black-box settings}\n\\label{sec_black_experiment}\nIn this section, we demonstrate that \nblack-box attacks combined with ODS significantly reduce the number of queries needed to generate adversarial examples. \nIn the experiments below, we randomly sample 300 correctly classified images from the ImageNet validation set. We evaluate both untargeted and targeted attacks. For targeted attacks, we uniformly sample target labels.\n\n\\subsection{Query-efficiency of score-based attacks with ODS}\n\\label{sec_black_score}\n\\subsubsection{Applying ODS to score-based attacks}\n\\label{sec_black_score_simba}\nTo show the efficiency of ODS, we combine ODS with the score-based Simple Black-Box Attack (SimBA)~\\citep{Guo19}. \nSimBA randomly samples a vector and \neither adds the vector to or subtracts it from the target image to explore update directions. \nThe vector is sampled from a pre-defined set of orthonormal vectors in the input space. \nThese are the discrete cosine transform (DCT) basis vectors in the original paper~\\citep{Guo19}. 
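The SimBA exploration step just described (try a move of plus or minus the step size along a sampled direction, and keep it only if the attack loss increases) can be sketched as follows. The toy loss function here is hypothetical; in SimBA-ODS the direction q would instead come from the ODS vector computed on a surrogate model.

```python
import numpy as np

def simba_step(x_adv, q, loss_fn, eps=0.2):
    # Try +eps and -eps along direction q; accept the first move that
    # strictly increases the loss, otherwise leave x_adv unchanged.
    base = loss_fn(x_adv)
    for alpha in (eps, -eps):
        candidate = x_adv + alpha * q
        if loss_fn(candidate) > base:
            return candidate
    return x_adv

# Hypothetical toy loss that grows along the first coordinate.
loss_fn = lambda x: float(x[0])
x_adv = np.zeros(3)
q = np.array([1.0, 0.0, 0.0])
x_adv = simba_step(x_adv, q, loss_fn)
print(x_adv[0])  # 0.2: the +eps move was accepted
```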
We replace the DCT basis vectors with ODS sampling (called SimBA-ODS). %\n\n\n\\paragraph{Setup}\nWe use a pre-trained ResNet50 model as the target model \nand select four pre-trained models \n(VGG19, ResNet34, DenseNet121~\\citep{densenet}, MobileNetV2~\\citep{mobilenetv2}) as surrogate models.\nWe set the same hyperparameters for SimBA as in~\\citep{Guo19}: the step size is $0.2$ and the number of iterations (max queries) is 10000 (20000) for untargeted attacks and 30000 (60000) for targeted attacks. As the loss function in SimBA, we employ the margin loss for untargeted attacks and the cross-entropy loss for targeted attacks.\n\n\\paragraph{Results}\nFirst, we compare SimBA-DCT~\\citep{Guo19} and SimBA-ODS. Table~\\ref{tab_black_score} reports the number of queries and the median $\\ell_2$ perturbations. Remarkably, SimBA-ODS reduces the average number of queries by a factor between 2 and 3 compared to SimBA-DCT in both untargeted and targeted settings. This confirms that ODS not only helps white-box attacks, but also leads to significant improvements in query-efficiency in black-box settings. In addition, SimBA-ODS decreases the average perturbation sizes by around a factor of two, which means that ODS helps find better adversarial examples that are closer to the original image. \n\n\\begin{table}[htb]\n\\caption{Number of queries and size of $\\ell_2$ perturbations for score-based attacks.}\n\\begin{center}\n\\setlength{\\tabcolsep}{3.9pt}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{3-8}\n& num. 
of & success & average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \\\nattack& surrogates & rate & queries & perturbation & rate & queries & perturbation \\\\\n\\midrule\nSimBA-DCT~\\citep{Guo19} &0& {\\bf 100.0\\%} & 909 & 2.95 & 97.0\\% & 7114 &7.00 \\\\\nSimBA-ODS &4& {\\bf 100.0\\%} & {\\bf 242} & {\\bf 1.40} & {\\bf 98.3\\%} & {\\bf 3503} & {\\bf 3.55} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_score}\n\\end{table}\n\n\n\\subsubsection{Comparing ODS with other methods using surrogate models}\n\\label{sec_black_score_RGF}\nWe consider another black-box attack that relies on surrogate models: P-RGF~\\citep{Cheng19prior}, which improves over the original RGF (random gradient-free) method for gradient estimation. \nP-RGF exploits prior knowledge from surrogate models to estimate the gradient more efficiently than RGF. Since RGF uses random sampling to estimate the gradient, we propose to apply ODS to RGF (a new attack named ODS-RGF) and compare it with P-RGF under $\\ell_2$ and $\\ell_\\infty$ norms.\n\n\n\\begin{table}[htbp]\n\\caption{ Comparison of ODS-RGF and P-RGF on ImageNet. Hyperparameters for RGF are the same as in~\\citep{Cheng19prior}: max queries are 10000, sample size is 10, step size is 0.5 ($\\ell_2$) and 0.005 ($\\ell_\\infty$), and epsilon is $\\sqrt{0.001 \\cdot 224^2 \\cdot 3}$ ($\\ell_2$) and 0.05 ($\\ell_\\infty$). }\n\\begin{center}\n\\setlength{\\tabcolsep}{3.5pt}\n\\begin{tabular}{ccc|ccc|ccc}\n\\toprule\n& & &\\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{4-9}\n & &num. 
of & success& average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \nnorm& attack &surrogates & rate & queries & perturbation & rate & queries & perturbation \\\\ \n\\midrule\n\\multirow{3}{*}{$\\ell_2$}&RGF & 0& \\textbf{100.0\\%} & 633 & 3.07 &\\textbf{99.3\\%} &3141 & 8.23\\\\\n&P-RGF~\\citep{Cheng19prior} &1& \\textbf{100.0\\%} &211 & 2.08 \n&{97.0\\%} &2296 & 7.03\\\\\n&ODS-RGF& 1& \\textbf{100.0\\%} &\\textbf{133}& \\textbf{1.50}\n&\\textbf{99.3\\%} &\\textbf{1043} & \\textbf{4.47}\\\\ \\hline\n\\multirow{3}{*}{$\\ell_\\infty$} &RGF &0& {97.0\\%} & 520 & - \n&{25.0\\%} & 2971 & - \\\\\n&P-RGF~\\citep{Cheng19prior} &1& {99.7\\%} &88& - & {65.3\\%} & 2123 & -\\\\\n&ODS-RGF& 1& \\textbf{100.0\\%} &\\textbf{74}& - & \\textbf{92.0\\%} & \\textbf{985} & - \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{fig_black_RGF}\n\\end{table}\n\nFor fair comparison, we use a single surrogate model as in~\\citep{Cheng19prior}.\nWe choose a pre-trained ResNet50 model as the target model and a ResNet34 model as the surrogate model. \nWe give query-efficiency results of both methods in Table~\\ref{fig_black_RGF}. The average number of queries required by ODS-RGF is less than that of P-RGF in all settings. This suggests that ODS-RGF can estimate the gradient more precisely than P-RGF by exploiting the diversity obtained via ODS and surrogate models. \nThe differences between ODS-RGF and P-RGF are significant in targeted settings, since ODS-RGF achieves smaller perturbations than P-RGF (see the median perturbation column). \nTo verify the robustness of our results, we also ran experiments using VGG19 as a surrogate model and obtained similar results. %\n\n\nWe additionally consider TREMBA~\\citep{Huang20TREMBA}, a black-box attack (restricted to the $\\ell_\\infty$-norm) that is state-of-the-art among those using surrogate models. In TREMBA, a\nlow-dimensional embedding is learned via surrogate models so as to obtain initial adversarial examples\nwhich\nare then updated using a score-based attack. 
%\nOur results show that ODS-RGF combined with SI-NI-DIM~\\citep{NesterovTransfer2020}, which is a state-of-the-art transfer-based attack, is comparable to TREMBA even though ODS-RGF is not restricted to the $\\ell_\\infty$-norm. \nResults and more details are provided in Appendix~\\ref{appendix:ap_black_TREMBA}.\n\n\n\\subsubsection{Comparison of ODS with state-of-the-art score-based attacks}\nTo show the advantage of ODS and surrogate models, \n we compare SimBA-ODS and ODS-RGF with the Square Attack~\\citep{ACFH2019square}, which is a state-of-the-art attack for both $\\ell_{\\infty}$ and $\\ell_2$ norms when surrogate models are not allowed.\nFor comparison, we regard SimBA as $\\ell_2$ bounded attacks: the attack is successful when adversarial $\\ell_2$ perturbation is less than a given bound $\\epsilon$.\nWe set $\\epsilon=5 \\ (\\ell_2)$ and $0.05 \\ (\\ell_\\infty)$ as well as other hyperparameters according to the original paper~\\citep{ACFH2019square}, except that we set the max number of queries to $20000$ for untargeted attacks and $60000$ for targeted attacks. For ODS-RGF, we use four surrogate models as discussed in Section~\\ref{sec_black_score_simba} for SimBA-ODS. %\n\n\n\n\\begin{table}[htbp]\n \\caption{Number of queries for attacks with ODS versus the Square Attack. }\n \\begin{center}\n \\label{tab_black_square}\n \\begin{tabular}{ccc|cc|cc}\n \\toprule\n &&& \\multicolumn{2}{c}{untargeted} &\\multicolumn{2}{|c}{targeted} \\\\ \\cline{4-7}\n && num. 
of & success & average & success & average \\\\ \n norm &attack& surrogates& rate & queries & rate & queries \\\\ \\midrule\n \\multirow{3}{*}{$\\ell_2$}&Square~\\citep{ACFH2019square} &0& {99.7\\%} & 647\n & {96.7\\%} & {11647} \\\\\n &SimBA-ODS &4& {99.7\\%} & {237} & {90.3\\%} & {2843} \\\\\n &ODS-RGF & 4& {\\bf 100.0\\%} & {\\bf 144} & {\\bf 99.0\\%} & {\\bf 1285}\\\\\n \\hline\n \\multirow{2}{*}{$\\ell_\\infty$} &Square~\\citep{ACFH2019square} &0& {\\bf 100.0 \\%} & {\\bf 60} & {\\bf 100.0\\%} & {2317} \\\\\n &ODS-RGF &4& {\\bf 100.0 \\%} & 78 & {97.7\\%}& {\\bf 1242} \\\\\n \\bottomrule\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nAs shown in Table~\\ref{tab_black_square}, \nthe numbers of queries required by ODS-RGF and SimBA-ODS are lower than that of the Square Attack under the $\\ell_2$ norm. The improvement is especially large for ODS-RGF. The difference between ODS-RGF and SimBA-ODS mainly comes from the different base attacks (i.e., RGF and SimBA).\nFor the $\\ell_\\infty$ norm setting, ODS-RGF is comparable to the Square Attack. We hypothesize that the benefit of the gradients estimated by RGF decreases under the $\\ell_\\infty$ norm due to the sign function. However, because ODS can be freely combined with many base attacks, a stronger base attack is likely to further improve query-efficiency.\n\n\\subsection{Query-efficiency of decision-based attacks with ODS}\n\\label{sec_black_decision}\nWe demonstrate that ODS also improves query-efficiency for decision-based attacks. We combine ODS with the decision-based Boundary Attack~\\citep{Brendel18}. \nThe Boundary Attack starts from an image that is already adversarial, and iteratively updates the image to find smaller perturbations. \nTo generate the update direction, the authors of~\\citep{Brendel18} sampled a random noise vector from a Gaussian distribution $\\mathcal{N}(\\mathbf{0}, \\mathbf{I})$ at each step.\nWe replace this random sampling procedure with sampling by ODS (we call the new method Boundary-ODS). 
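The only change Boundary-ODS makes to the Boundary Attack is this per-step proposal. A sketch of the two sampling modes, again assuming a closed-form surrogate gradient purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_direction(x_adv, surrogate_grad_fn, num_classes, use_ods=True):
    # Original Boundary Attack: isotropic Gaussian noise.
    # Boundary-ODS: normalized ODS direction from a surrogate model.
    if not use_ods:
        return rng.standard_normal(x_adv.shape)
    w_d = rng.uniform(-1.0, 1.0, size=num_classes)
    g = surrogate_grad_fn(x_adv, w_d)
    return g / np.linalg.norm(g)

# Stand-in linear surrogate f(x) = W x; grad of w_d^T f(x) is W^T w_d.
W = rng.standard_normal((10, 3072))
surrogate_grad_fn = lambda x, w_d: W.T @ w_d
x_adv = rng.uniform(0.0, 1.0, 3072)
d = propose_direction(x_adv, surrogate_grad_fn, num_classes=10)
```

The rest of the Boundary Attack (the orthogonal and source-direction steps, and the decision queries) is unchanged.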
We give the pseudo-code of Boundary-ODS in Algorithm~\\ref{alg:ap_boundary} (in the Appendix).\n\n\\paragraph{Setup}\nWe use the same settings as the previous section for score-based attacks: 300 validation images on ImageNet, pre-trained ResNet50 target model, and four pre-trained surrogate models.\nWe test on both untargeted and targeted attacks. In targeted settings, we give randomly sampled images with target labels as initial images.\nWe use the implementation in Foolbox~\\citep{foolbox2017} for Boundary Attack with default parameters, which is more efficient than the original implementation.\n\nWe also compare Boundary-ODS with two state-of-the-art decision-based attacks: \nthe HopSkipJump attack~\\citep{chen2019hop} and the Sign-OPT attack~\\citep{cheng20sign}. We use the implementation in ART~\\citep{art2018} for HopSkipJump and the author's implementation for Sign-OPT. We set default hyperparameters for both attacks.\n\n\\paragraph{Results}\nTable~\\ref{tab_black_decision} summarizes the median sizes of $\\ell_2$ adversarial perturbations obtained with a fixed number of queries. Clearly, Boundary-ODS significantly improves query-efficiency compared to the original Boundary Attack. In fact, Boundary-ODS outperforms state-of-the-art attacks: it decreases the median $\\ell_2$ perturbation at 10000 queries to less than one-third of previous best untargeted attacks and less than one-fourth of previous best targeted attacks. \nWe additionally describe the relationship between median $\\ell_2$ perturbations and the number of queries in Figure~\\ref{fig_black_decision}. Note that Boundary-ODS outperforms other attacks, especially in targeted settings. 
Moreover, Boundary-ODS only needs fewer than 3500 queries to achieve the adversarial perturbation obtained by other attacks with 10000 queries.\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS and decision-based state-of-the-art attacks.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n& & \\multicolumn{6}{c}{number of queries} \\\\\n& num. of & \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nattack &surrogates& 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} & 0& 45.07& 11.46 & 4.30 & 73.94 &41.88 &27.05 \\\\\nBoundary-ODS & 4& {\\bf 7.57} & {\\bf 0.98} & {\\bf 0.57} & {\\bf 27.24} & {\\bf 6.84} & {\\bf 3.76}\\\\\nHopSkipJump~\\citep{chen2019hop} & 0& 14.86 &3.50 & 1.79 & 65.88 & 33.98 & 18.25 \\\\\nSign-OPT~\\citep{cheng20sign} & 0& 21.73 & 3.98 & 2.01 & 68.75 & 36.93&22.43\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_decision}\n\\end{table}\n\n\\begin{figure}[htbp]\n\\centering\n\n \\begin{tabular}{cc}\n \\begin{minipage}{0.35\\hsize} %\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/decision_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/decision_targeted.png}\n \\end{center}\n \\end{minipage}\n \\\\\n Untargeted & Targeted \n \\end{tabular}\n \\caption{Relationship between median $\\ell_2$ perturbations and the number of queries for decision-based attacks. Error bars show 25th and 75th percentile of $\\ell_2$ perturbations. } %\n \\label{fig_black_decision}\n\\end{figure}\n\n\n\n\n\\subsection{Effectiveness of ODS with out-of-distribution images}\n\\label{sec_black_limited}\nAlthough several studies use prior knowledge from surrogate models to improve performance of black-box attacks, there is a drawback---those approaches require a dataset to train surrogate models. 
\nIn reality, it is typically impossible to obtain the same dataset used for training the target model. We show that ODS is applicable even when we only have a limited dataset that is out-of-distribution (OOD) and may contain only images with irrelevant labels.\n\nWe select 100 ImageNet classes which do not overlap with the classes used in the experiments of Section~\\ref{sec_black_decision}. \nWe train surrogate models using an OOD training dataset with these 100 classes. \nWe train five surrogate models with the same ResNet18 architecture because multiple surrogate models provide diversified directions. \nThen, we run Boundary-ODS with the trained surrogate models under the same setting as Section~\\ref{sec_black_decision}. As shown in Table~\\ref{tab_black_limited1}, although Boundary-ODS with the OOD training dataset underperforms Boundary-ODS with the full dataset, it is still significantly better than the original Boundary Attack with random sampling. This demonstrates that the improved diversity achieved by ODS improves black-box attacks even if we only have OOD images to train a surrogate. 
\n\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS with surrogate models trained on OOD images.}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{6}{c}{number of queries} \\\\\n& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nattack & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} & 45.07& 11.46 & 4.30 & 73.94 &41.88 &27.05 \\\\\nBoundary-ODS (OOD dataset) & {\\bf 11.27}& {\\bf 1.63}& {\\bf 0.98}&{\\bf 41.67}&{\\bf 13.72} & {\\bf 8.39}\\\\ \\hline\nBoundary-ODS (full dataset in Sec.~\\ref{sec_black_decision}) & 7.57& 0.98& 0.57& 27.24& 6.84& 3.76\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_limited1}\n\\end{table}\n\n\n\n\\section{Related work}\n\\label{sec_related}\n\nThe closest approach to ours is the white-box MultiTargeted attack~\\citep{MT19}. This attack changes the target class per restart, and it can be regarded as a method that aims to obtain more diversified starting points. However, the MultiTargeted attack is limited to the setting of $\\ell_p$-bounded white-box attacks. In contrast, ODS can be applied to more general white- and black-box attacks. In addition, ODS does not require the original class of the target image, and is therefore more broadly applicable. Further discussion is in Appendix~\\ref{appendix:ap_multitargeted}.\nAnother related work is the Interval Attack~\\citep{wang19symbolic}, which generates diverse starting points by leveraging symbolic interval propagation. While the Interval Attack shows good performance against MNIST models, it is not scalable to large models. \n\nODS utilizes surrogate models, which are commonly used for black-box attacks. 
Most previous methods exploit surrogate models to estimate gradients of the loss function on the target model~\\citep{trans_papernot17,trans_liu17,Brunner19GuessingSmart,Cheng19prior,subspaceattack,Cai19transferSMBdirect}.\nSome recent works exploit surrogate models to train other models~\\citep{Du2020Query-efficient,Huang20TREMBA} \nor update surrogate models during attacks~\\citep{Suya20hybridBlack}. Integrating ODS with these training-based methods is an interesting direction for future work.\n\n\n\\section{Conclusion}\nWe propose ODS, a new sampling strategy for white- and black-box attacks. \nBy generating more diverse perturbations as measured in the output space, ODS can create more effective starting points for white-box attacks. Leveraging surrogate models, ODS also improves the exploration of the output space for black-box attacks. Moreover, ODS for black-box attacks is applicable even if the surrogate models are trained with out-of-distribution datasets. Therefore, black-box attacks with ODS are more practical than other black-box attacks using ordinary surrogate models. Our empirical results demonstrate that ODS with existing attack methods outperforms state-of-the-art attacks in various white-box and black-box settings. \n\nWhile we only focus on ODS with surrogate models trained with labeled datasets, ODS may also work well using unlabeled datasets, which we leave as future work. One additional direction is to improve the efficiency of ODS by selecting suitable surrogate models with reinforcement learning. \n\n\\section*{Broader Impact}\nThe existence of adversarial examples is a major source of concern for machine learning applications in the real world. For example, imperceptible perturbations crafted by malicious attackers could deceive safety critical systems such as autonomous driving and facial recognition systems. 
Since adversarial examples exist not only for images, but also for other domains such as text and audio, the \npotential impact is large. \nOur research provides new state-of-the-art black-box adversarial attacks in terms of query-efficiency and makes adversarial attacks more practical and strong. \nWhile all experiments in this paper are for images, the proposed method is also applicable to other modalities. Because of this, our research could be used in harmful ways by malicious users. \n\nOn the positive side, strong attacks are necessary to develop robust machine learning models. \nFor the last few years, several researchers have proposed adversarial attacks which break previous defense models.\nIn response to these strong attacks, new and better defense mechanisms have been developed. \nIt is this feedback loop between attacks and defenses that advances the field. %\nOur research not only provides a state-of-the-art attack, \nbut also sheds light on a new perspective, namely the importance of diversity, for improving adversarial attacks. This may have a long term impact on inspiring more effective defense methods.\n\n\\section*{Acknowledgements and Disclosure of Funding}\nThis research was supported in part by AFOSR (FA9550-19-1-0024), NSF (\\#1651565, \\#1522054, \\#1733686), ONR, and FLI. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdjww b/data_all_eng_slimpj/shuffled/split2/finalzzdjww new file mode 100644 index 0000000000000000000000000000000000000000..e2f8f3485007abf9bfe450c31c7421238f2deb8c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdjww @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIt has been over 12 years since blockchain was invented to serve as the public transaction ledger of Bitcoin~\\cite{bitcoin}. Since, blockchain technologies have evolved significantly. 
This includes hundreds of public blockchain platforms, thousands of cryptocurrencies~\\cite{cryptocurrency}, tens of thousands of decentralized applications (DApps) and blockchain-based services~\\cite{dapp}, millions of smart contracts~\\cite{he2020characterizing}, hundreds of millions of user accounts~\\cite{eos-account}, and billions of transactions~\\cite{eth-tx}.\nThese technologies and entities have formed a symbiotic relationship, which we refer to as the \\textit{\\textbf{blockchain ecosystem}}.\n\nThe global blockchain economy is estimated to be worth 1.8 trillion USD at the end of May 2021~\\cite{cryptocurrency}.\nAs a result, its continuing evolution has increased its complexity, which \\textit{itself needs considerable effort to understand: particularly its characteristics, operations, trends and security issues}.\n\nDue to this, the blockchain ecosystem has attracted a lot of attention from the research community. A large number of studies have focused on exploring the characteristics of blockchain networks~\\cite{huang2020understanding, chen2020traveling, wu2019t}, detecting attacks and vulnerabilities~\\cite{chen2018understanding, suevil, ji2020deposafe}, deanonymizing users and tracking money-flow~\\cite{ober2013structure, awan2017blockchain, moser2018empirical, wang2020identifying}, and so on.\nHowever, little is known (at a comprehensive level) about the evolution of the overall blockchain ecosystem. Although several prior studies have performed measurement studies on transactions and smart contracts, they characterize activities based either on a static snapshot~\\cite{chen2020traveling} or on a single platform~\\cite{chen2018understanding, huang2020understanding}. 
\nFor example, Chen et al.~\\cite{chen2018understanding} and Huang et al.~\\cite{huang2020understanding} characterized Ethereum and EOSIO, respectively, based on large-scale transaction analysis.\nTo the best of our knowledge, \\textit{no existing studies have characterized the evolution of blockchain ecosystems comprehensively at scale, longitudinally and across multiple representative platforms.}\n\n\\textbf{This Work.}\nTo fill this gap, we perform a multi-dimensional and large-scale study covering all of the transactions on three of the most representative blockchains: Bitcoin, Ethereum and EOSIO. \nBitcoin is the first distributed blockchain implementation.\nEthereum is the second largest blockchain system after Bitcoin, and the first second-generation blockchain platform that supports smart contract functionality~\\cite{eth-blockchain-2}.\nEOSIO claims to be the third-generation blockchain, and adopts Delegated Proof-of-Stake (DPoS) consensus, which significantly improves throughput and enables new applications~\\cite{eos-dapp-prosperity-1, eos-dapp-prosperity-2}.\n\nWe first collect the largest ever dataset across these three platforms (\\textbf{see \\S\\ref{sec:study-design}}), with over 35 billion traces in total. \nWe use this dataset to provide a high-level characterization and temporal analysis of the three platforms (\\textbf{\\S\\ref{sec:blockchain-evolution}}), including assessing the behavior of money transfers (\\textbf{\\S\\ref{sec:blockchain-evolution:mtg}}), account creation (\\textbf{\\S\\ref{sec:blockchain-evolution:acg}}) and contract invocation (\\textbf{\\S\\ref{sec:blockchain-evolution:cig}}).\nFollowing this, we explore ``outliers'' across the evolution of the three blockchains. 
We identify these as either misbehavior by attackers or highly popular ``killer'' DApps that trigger major fluctuations in activity (\\textbf{\\S\\ref{sec:abnormal-behaviors}}).\nAmong many interesting results, the following are the most prominent:\n\\begin{itemize}\n \\item \\textit{The overall blockchain ecosystem has shown significant growth over the last decade. Ethereum and EOSIO are on their way to becoming key decentralized systems, featuring powerful and popular DApps.} \n Bitcoin has shown a steady upward trend, with the number of transactions rising 15K times over 12 years. The number of transactions in Ethereum and EOSIO rose 7.6K times and 2.2K times, respectively.\n However, along with this growth in the number of transactions, the proportion of money transfers has dropped to 16.5\\% and 15.1\\% for Ethereum and EOSIO until Nov. 2019, while the proportion of smart contract invocations has reached over 80\\%.\n Smart contracts and DApps are therefore becoming increasingly popular.\n \n \\item \\textit{Many activities in the blockchain ecosystem follow the Pareto principle across their evolution, and we are witnessing increasing centralization.}\n For all three types of activities, i.e., money transfer, account creation, and contract invocation, the Pareto principle is followed (across all blockchains). This applies to every month we analyze, indicating that a small group of accounts is increasingly controlling the entire network. \n \n \\item \\textit{Interactions among smart contracts are increasingly popular.}\n During the development of Ethereum and EOSIO, the emergence of ``killer'' DApps has led to greater interactions between smart contracts. \n In other words, contract invocation transactions are more likely to be initiated by another smart contract than by a human.\n However, Ethereum and EOSIO show quite different behaviors regarding DApp evolution. 
For example, in Ethereum, the explosion of Decentralized Finance (DeFi) DApps accounts for over 1\/3 of contract invocations since Mar. 2020. In contrast, nearly half (49.04\\%) of EOSIO contract invocation transactions were related to gambling DApps.\n \n \\item \\textit{87 ``outliers'' are identified from our time-series analysis. These have a significant impact that either facilitates or impedes the progress of the blockchain ecosystem}. 48 of these are due to emerging DApps, while 39 of them are introduced by attacks or scams.\n\\end{itemize}\n\nTo the best of our knowledge, this is the first longitudinal and large-scale study across these three generations of blockchain. Our results motivate the need for further research efforts. Particularly, we argue it is necessary to measure and mitigate the widely unexplored trends and misbehaviors of these blockchain technologies. \n\n\\section{Background}\n\\label{sec:background}\n\n\\subsection{Blockchain and its Evolution}\n\\label{sec:background:blockchain-evolution}\n\nBlockchain has a history spanning 12 years. \n\\textit{Bitcoin}, as the first platform that utilized blockchain, is widely called \\textit{Blockchain 1.0}. \nFollowing the growth of cryptocurrencies, users \nstarted to explore the possibility of decentralized applications (DApps). This led to the development of Ethereum. As a representative system of \\textit{Blockchain 2.0}, \\textit{Ethereum} is widely adopted by developers. In fact, millions of smart contracts have already been deployed on Ethereum. However, the performance of Ethereum is limited by its consensus protocol (Proof-of-Work, PoW). As a result, in 2018, EOSIO came online, which adopts Delegated Proof-of-Stake (DPoS) consensus. This has been termed \\textit{Blockchain 3.0}, improving transaction throughput significantly. To boost performance, Ethereum 2.0 (Phase 0 test) has also adopted Proof-of-Stake (PoS) consensus in Dec. 
2020.\nAlthough many kinds of other blockchain systems have emerged, \\textit{we believe these three blockchains are the most representative and popular ones, making them ideal for studying the evolution of the overall blockchain ecosystem}.\n\n\\begin{table*}[tbp]\n\\caption{A Comparison of Bitcoin, Ethereum and EOSIO.}\n\\centering\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{lccccccc}\t\n\\toprule\n &\\textbf{\\begin{tabular}[c]{@{}c@{}}Native\\\\ Cryptocurrency\\end{tabular}} & \\textbf{Model} & \\textbf{Consensus} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Support for\\\\ Smart Contract\\end{tabular}} & \\textbf{TPS}~\\cite{garriga2020blockchain} & \\textbf{Resource} & \\textbf{Role} \\\\\n\\midrule \n\\textbf{Bitcoin} & Bitcoin & UTXO & PoW & No & 7 & Transaction Fee & Input \/ Output \\\\\n\\textbf{Ethereum} & Ether & Account\/Balance & PoW & Yes & 15 & Gas & EOA \/ Smart Contract \\\\\n\\textbf{EOSIO} & EOS & Account\/Balance & DPoS & Yes & 1000+ & CPU \/ NET & Account \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\label{table:basic-info}\n\\end{table*}\n\n\n\\subsection{Bitcoin, Ethereum and EOSIO}\n\\label{sec:background:general}\n\nWe next briefly compare the working mechanisms of Bitcoin, Ethereum and EOSIO, as summarized in Table~\\ref{table:basic-info}.\n\n\n\\subsubsection{Record-Keeping Models}\n\\label{sec:background:general:model}\n\nTwo types of record-keeping models are adopted in existing blockchain networks. The first method is called the UTXO (Unspent Transaction Output) Model~\\cite{utxo}, and the second one is the Account\/Balance Model~\\cite{account-based}. Bitcoin is the only one that adopts the UTXO model, which is illustrated in Fig.~\\ref{fig:utxo} in the Appendix \\S\\ref{sec:appendix:money-transfer}.\nSpecifically, each transaction is composed of a set of \\textit{inputs} and \\textit{outputs}, which all correspond to a certain amount of Bitcoin. 
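To make this bookkeeping concrete, the sketch below models a tiny UTXO ledger in Python. The field names and amounts are illustrative only, not Bitcoin's actual wire format, and the balance check ignores miner fees, matching the simplified description in this section:

```python
# Toy model of UTXO bookkeeping. All names/amounts are illustrative,
# not Bitcoin's actual data format; miner fees are ignored.

# The UTXO set maps (txid, output_index) -> (amount, pubkey).
utxo_set = {("tx0", 0): (50, "alice_pubkey")}

def apply_transaction(utxo_set, txid, inputs, outputs):
    """Spend the referenced unspent outputs and record the new ones.

    inputs:  list of (prev_txid, prev_output_index) references
    outputs: list of (amount, pubkey) pairs
    """
    spent = 0
    for ref in inputs:
        if ref not in utxo_set:
            raise ValueError(f"input {ref} is not an unspent output")
        amount, _pubkey = utxo_set.pop(ref)  # an output can be spent only once
        spent += amount
    created = sum(amount for amount, _ in outputs)
    if spent != created:  # conservation between inputs and outputs
        raise ValueError("inputs and outputs do not balance")
    for index, out in enumerate(outputs):
        utxo_set[(txid, index)] = out

# Alice pays Bob 30 and sends 20 back to herself as change.
apply_transaction(utxo_set, "tx1",
                  inputs=[("tx0", 0)],
                  outputs=[(30, "bob_pubkey"), (20, "alice_pubkey")])
```

Note how the "change" output is simply another new output locked to the payer, which is why one entity typically accumulates many distinct outputs over time.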
The UTXO model ensures that the sum of the amounts represented by the inputs is equal to that of the outputs.\nMoreover, each output is locked by a \\textit{scriptPubKey}. Only the holder of the corresponding private key can unlock the output and spend the money via an input. Therefore, in this paper, we refer to each of the inputs and outputs as $pubkey$.\n\nHowever, Bitcoin suffers from several drawbacks: \n1) the script in Bitcoin is not Turing-complete~\\cite{turing}, which means it cannot perform more sophisticated functions (e.g., smart contracts); \n2) due to its consensus algorithm, the transactions per second (TPS) is low; \nand \n3) UTXOs are stateless and can be executed in parallel, which is not well suited for many applications.\nTherefore, Ethereum and EOSIO adopt the Account\/Balance model. It works similarly to a debit card transaction: the bank, equivalent to a consensus algorithm, tracks the balance of each debit card to make sure enough money exists before approving the transaction.\n\n\\subsubsection{Accounts and Smart Contracts}\n\\label{sec:background:general:account}\nIn Ethereum, there are two types of accounts: \\textit{Externally Owned Accounts} (EOA) and \\textit{smart contracts}.\nSmart contracts contain immutable and executable bytecode, while an EOA does not and is controlled directly by a private key.\nA smart contract can be created by either an EOA or another smart contract. \nIn contrast, the smart contract in EOSIO is \\textit{updatable}. That is because an executable smart contract's bytecode in EOSIO is equivalent to an attribute of an account, like one's balance. Therefore, in EOSIO, owners can easily update the smart contracts in the accounts they own after paying the required cost. \n\n\\subsubsection{Transactions}\n\\label{sec:background:general:tx}\nAccounts in Ethereum and EOSIO can interact with each other by invoking \\textit{transactions}, including transferring money, invoking contracts, or creating accounts. 
\nIn Ethereum, the transactions initiated by EOAs and smart contracts are termed \\textit{external transactions} and \\textit{internal transactions}, respectively. \nIn EOSIO, a transaction consists of one or more \\textit{action(s)}, and an action can be invoked by an account.\n\nTo better illustrate the working mechanisms, we depict the process of transferring tokens between accounts in Ethereum and EOSIO in Fig.~\\ref{fig:transfer-token} (see Appendix \\S\\ref{sec:appendix:money-transfer}).\nSpecifically, when $account_a$ transfers 1 Ether to $account_b$, it initiates a transaction with an empty \\textit{input} field and 1 Ether filled in the \\textit{value} field. In general, the data in the \\textit{input} field can be parsed by the recipient as a function signature with the corresponding parameters, and the \\textit{value} indicates the attached Ether.\nThe situation in EOSIO is more complicated. \nAn official smart contract, named \\texttt{eosio.token}, issues EOS tokens and maintains a balance table for all holders. If $account_a$ intends to transfer EOS to $account_b$, it has to request \\texttt{eosio.token} to update the corresponding rows. After updating, \\texttt{eosio.token} immediately \\textit{notifies} both of them that the transfer request has been handled properly. Once $account_b$ receives the notification, it can perform follow-up operations, e.g., initiating another action toward other accounts.\nNote that, when someone intends to create an account, they need to request another official account (called \\texttt{eosio}) to perform the action, instead of transferring an arbitrary amount of EOS to a non-existent address as in Ethereum.\n\n\\subsubsection{Consensus Protocol \\& Resource Model}\n\\label{sec:background:general:consensus}\nThe main difference between EOSIO and the other two platforms is the consensus protocol and the resource model. 
PoW requires miners to compute meaningless hash values that must be prefixed by a certain number of zeros. Therefore, the longer the prefix of zeros, the more computation is required, which leads to higher delays and resource consumption.\nIn contrast, DPoS is more resource-conservative and efficient. Compared to PoW, the theoretical TPS in EOSIO is hundreds-fold higher, reaching up to 1000+. \n\nBitcoin and Ethereum adopt a similar resource model, i.e., \\textit{transaction fees} and \\textit{gas}. Both are used to incentivize miners to pack the transactions with the highest transaction fee or gas. \nIn EOSIO, the resources are named \\textit{CPU} and \\textit{NET}, which represent the allowed time to execute an action and the space that can be occupied within a single transaction, respectively. However, these two types of resources do not directly cost initiators anything. Initiators can mortgage a certain amount of EOS in exchange for \\textit{CPU} and \\textit{NET}, which can be reused in each transaction.\nNote that the price of CPU fluctuates according to the total value of EOS mortgaged for CPU, and NET can be freely used if some is available. \nIn other words, resources in EOSIO are \\textit{nearly free under normal conditions}.\n\n\n\n\n\\section{Data Collection}\n\\label{sec:study-design}\n\n\\subsection{Methodology}\n\\label{sec:study-design:data-collection}\n\nTo obtain all transactions across Bitcoin, Ethereum, and EOSIO, we deploy their official clients to synchronize data, i.e., \\textit{Bitcoin Core}~\\cite{bitcoin-core}, \\textit{OpenEthereum}~\\cite{openethereum}, and \\textit{nodeos}~\\cite{nodeos}. \nTo conduct a fine-grained analysis, we also collect all \\textit{traces}, i.e., internal transactions in Ethereum and actions in EOSIO (see \\S\\ref{sec:background:general:tx}). \nAs the traces are too large to crawl directly from mainstream blockchain explorers (like \\textit{etherscan}~\\cite{etherscan}), we collect them by replaying each transaction. 
Specifically, \\textit{trace\\_transaction} in the trace module of OpenEthereum takes a transaction as an input, and replays it to collect all internal transactions that result from the given transaction. \nTo obtain actions in EOSIO, we take advantage of the approach proposed by \\cite{huang2020understanding} to customize the core service daemon (nodeos) to accelerate the data collection process.\n\nTo facilitate our analysis of major events across the blockchains (see \\S\\ref{sec:abnormal-behaviors}), we further crawl the available information (i.e., name, category, and corresponding addresses or accounts) of all DApps from three well-known platforms: DAppTotal~\\cite{dapptotal}, DappRadar~\\cite{dappradar}, and DappReview~\\cite{dappreview}. \n\n\\begin{table}[t]\n\\caption{Dataset overview (as of March 31st, 2020).}\n\\centering\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{lccc}\n\\toprule\n\\textbf{Category} & \\textbf{Bitcoin} & \\textbf{Ethereum} & \\textbf{EOSIO} \\\\\n\\midrule\nLaunch Date & 2009.01.09 & 2015.07.30 & 2018.06.09 \\\\\nBlocks & 623,837 & 9,782,602 & 113,124,658 \\\\\n\\midrule\nTraces & 2,371,617,384 & 1,655,111,086 & 31,588,572,466 \\\\\nAccount & 630,562,205 & 68,790,386 & 1,816,578 \\\\\nSmart Contract & - & 24,201,516 & 6,139 \\\\\n\\midrule\nDApp & - & 3,292 & 652 \\\\\nDApp Addresses & - & 13,218 & 1,556 \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\label{table:dataset}\n\\end{table}\n\n\n\\subsection{Dataset Overview}\n\\label{sec:study-design:overview}\n\n\\subsubsection{Dataset.} In total, we have collected over \\textit{35 Billion traces}. Table~\\ref{table:dataset} summarizes the dataset. To the best of our knowledge, this is the \\textit{largest ever} dataset studied in the research community. The \\textit{trace} is the basic unit of transaction in each blockchain. 
For Bitcoin, a trace is an input or an output; for Ethereum, traces are composed of all the external transactions, as well as the internal ones; for EOSIO, a trace corresponds to an action.\n\nObviously, \\textit{although EOSIO has the latest launch time, the numbers of traces and blocks are much higher than those of the other two blockchains.}\nAs Bitcoin adopts the UTXO model, which does not have the concept of an account, the number of Bitcoin accounts indicates the distinct \\textit{pubkeys} of inputs and outputs. \nIn Bitcoin, an entity may have multiple pubkeys to avoid reuse (to protect anonymity). Thus, the number of accounts in Bitcoin is tens of times that of Ethereum and hundreds of times that of EOSIO.\nMoreover, we can observe a significant distinction in the number of accounts and smart contracts between Ethereum and EOSIO. As explained in \\S\\ref{sec:background:general:account}, this is because: 1) creating an account in EOSIO costs more than just a transaction fee; and 2) EOSIO smart contracts are updatable. Both factors restrain the growth in the number of accounts and smart contracts in EOSIO.\n\\begin{figure}[h]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/traces.pdf}}\n\\caption{The evolution of traces on a monthly basis.}\n\\label{fig:traces}\n\\end{figure}\n\n\\subsubsection{General Trends.}\n\\label{sec:dataset:overview:general-trend}\nFig.~\\ref{fig:traces} shows the number of traces collected on a monthly basis, and the corresponding Transactions per Second (TPS). \nThe numbers of traces in all three platforms have an upward trend. Interestingly, it only took EOSIO three months to catch up and overtake the other two platforms. However, it took Ethereum almost three years to surpass Bitcoin.\nCompared to the relatively steady growth of Bitcoin, Ethereum and EOSIO experience sudden increases. For example, in Oct. 2016, the TPS of Ethereum reached up to 73.0, around 15.9 times higher than the previous month. 
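For reference, monthly TPS figures like this one are simply monthly trace counts averaged over the seconds in that month. A minimal sketch of the conversion (the trace count below is a made-up round number, not our measured data):

```python
import calendar

def avg_tps(year, month, trace_count):
    """Average traces-per-second over one calendar month."""
    days = calendar.monthrange(year, month)[1]  # days in that month
    return trace_count / (days * 24 * 3600)

# Hypothetical example: 190 million traces in a 31-day month
# averages out to roughly 71 traces per second.
rate = avg_tps(2016, 10, 190_000_000)
```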
\nInterestingly, this explosive growth was actually due to a DoS~\\cite{eth-dos} attack, which far exceeded the theoretical TPS it could handle.\nAs for EOSIO, the TPS reached over 1,240 in Nov. 2019, and finally reached over 3,080. This resulted from the launch of a spam DApp, which will be detailed in \\S\\ref{sec:abnormal-behaviors:case-study:EIDOS}.\n\n\n\n\n\n\\section{Graph-based Evolution Analysis}\n\\label{sec:blockchain-evolution}\n\n\\subsection{Method}\nWe seek to characterize the evolution of the blockchain ecosystems by investigating three representative behaviors: \\textit{money transfers}, \\textit{account creation}, and \\textit{contract invocation}, corresponding to the types of interactions between accounts (see \\S\\ref{sec:background:general:tx}).\nTo this end, we represent these activities in three graph structures: the \\textit{Money Transfer Graph (MTG)}, \\textit{Account Creation Graph (ACG)}, and \\textit{Contract Invocation Graph (CIG)}. \nFor each blockchain platform, we will denote each graph using its first three lowercase letters in subscript, e.g., $CIG_{btc}$ for Bitcoin.\nWe build these three types of graphs on a monthly basis and specify the month (formatted as $yyyy-MM$) in superscript, e.g., $CIG_{btc}^{2019-10}$.\nOverall, we have created 372 graphs (135 for Bitcoin, 171 for Ethereum and 66 for EOSIO).\n\n\\begin{table*}[t]\n\\caption{Notations with their corresponding explanations.}\n\\centering\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{llll}\n\\toprule\n\\textbf{Notations} & \\textbf{Explanations} & \\textbf{Notations} & \\textbf{Explanations} \\\\ \\midrule\n$MTG$ & Money Transfer Graph & $\\alpha$ & The exponent of fitting function in degree\/indegree\/outdegree distribution \\\\\n$ACG$ & Account Creation Graph & $R$ & The Pearson's Correlation Coefficient between indegree and outdegree \\\\\n$CIG$ & Contract Invocation Graph & $WCC$, $SCC$ & Weakly and strongly connected component \\\\\n$T_m$ & Money transfer 
transactions & $(v_i, v_j, w, t)$ & \\begin{tabular}[c]{@{}l@{}}Formal definition of edges, indicating at timestamp $t$, $v_i$ initiates\\\\ $w$ transactions of the corresponding type to $v_j$\\end{tabular} \\\\\n$T_a$ & Account creation transactions & $d$ & Date, converting from timestamp $t$ and formatting as ($yyyy-MM$) \\\\\n$T_c$ & Contract invocation transactions & $V$, $E$ & The node\/edge set of the graph \\\\ \\bottomrule \n\\end{tabular}\n}\n\\label{table:notations}\n\\end{table*}\n\n\n\\subsection{Metrics and Notations}\n\\label{sec:blockchain-evolution:metrics}\n\nFollowing previous studies~\\cite{chen2018understanding, huang2020understanding}, we measure the graphs using the following four metrics. For each graph, we later inspect each of these metrics in turn. Table~\\ref{table:notations} summarizes the key notations used in this paper.\n\n\\subsubsection{The number of traces.}\nWe use the simplified notations $T_m$, $T_a$ and $T_c$ to refer to the \\textit{money transfer traces}, \\textit{account creation traces} and \\textit{contract invocation traces}, respectively. \\textit{This metric reflects the popularity of a certain type of activity along the timeline}. For example, a popular gambling DApp will introduce a large number of $T_c$ (as the player invokes the contracts of the gambling DApp) and $T_m$ (as the player sends and receives money from the gambling DApp).\nMoreover, for each kind of trace, we tag the \\textit{role of invokers}, i.e., from a smart contract or a user account. The trends can reflect whether the corresponding blockchain is mainly used as a \\textit{payment network} or a \\textit{decentralized system with emerging and powerful applications (i.e., smart contracts)}.\n\n\\subsubsection{Distribution of node degree.}\n\\label{sec:blockchain-evolution:metrics:degree}\nThe \\textit{indegree} and \\textit{outdegree} of a node represent how many edges are directed to, or away from, that node. 
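Given the edge representation $(v_i, v_j, w, t)$ from the notation table, these per-node counts can be tallied in a single pass over a graph's edge list. A sketch (each aggregated edge counts once here, regardless of its weight $w$):

```python
from collections import Counter

def degree_counts(edges):
    """Per-node (indegree, outdegree, degree) from (src, dst, w, t) edges.

    Each aggregated edge contributes one to the counts; the weight w
    (number of underlying transactions) is deliberately ignored.
    """
    indeg, outdeg = Counter(), Counter()
    for src, dst, _w, _t in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    nodes = set(indeg) | set(outdeg)
    # degree is the sum of indegree and outdegree
    return {v: (indeg[v], outdeg[v], indeg[v] + outdeg[v]) for v in nodes}

# Three aggregated edges: a->b, a->c, b->c.
stats = degree_counts([("a", "b", 5, 1), ("a", "c", 2, 2), ("b", "c", 1, 3)])
```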
The \\textit{degree} of a node is the sum of its indegree and outdegree. \n\\textit{The node degree distribution reflects the centralization of the blockchain in terms of certain activities.}\nTo measure this in our later analysis, we plot the degree\/indegree\/outdegree distribution for each graph, in which the y-axis is the proportion of nodes and the x-axis is the size of degree\/indegree\/outdegree. On the basis of these plots, we take the logarithm of both axes, and perform a fit with the function $y \\sim x^{\\alpha}$.\n\\textit{The lower the $\\alpha$, the shorter the tail, and the less centralized the corresponding traces}.\nWe illustrate a concrete example of degree distribution in \\S\\ref{sec:blockchain-evolution:mtg:degree}.\n\n\n\\subsubsection{Pearson Correlation Coefficient ($R$)}\nOn top of the indegree and outdegree of different nodes, we further measure the \\textit{Pearson Correlation Coefficient}~\\cite{pearson} (denoted as $R$ in the following) between them.\n$R$ reflects whether the outdegree and indegree distributions have a linear relationship. It ranges between $-1$ and $1$, indicating strong negative or positive correlation.\nFor example, a popular DApp interacts frequently with users, leading to both high indegree and outdegree for all participants. In this case, $R$ would be close to 1, indicating strong positive correlation. \nThus, \\textit{$R$ indicates the consistency of indegree and outdegree of nodes within a graph}.\n\n\\subsubsection{Weakly connected component (WCC) \\& Strongly connected component (SCC)}\n\\label{sec:metric:wcc}\nIn graph theory, a \\textit{path} from $v_i$ to $v_j$ in a directed graph is a set of edges joined end-to-end whose starting and ending points are $v_i$ and $v_j$, respectively.\nIn a WCC, a path exists between any two nodes, without differentiating the starting point from the end point. 
\nAn SCC, however, requires that a bi-directional path exists between any two nodes.\nTo this end, if the number of WCCs drops, it means a node with extensive coverage has emerged; and if the number increases, it means such a node has disappeared.\nFurthermore, the number of SCCs examines the possibility of two-way interaction of such a node.\nConsequently, \\textit{the number of WCCs and SCCs reflects the connectivity between nodes in terms of one-way and two-way interaction, respectively}. For example, a spammer would initiate a large number of $T_m$ or $T_c$ to deliver its message, but few accounts would reply. Thus, the number of WCCs and SCCs would respectively drop and rise due to the many one-way edges emerging.\n\n\n\n\\section{Evolution of Money Transfers}\n\\label{sec:blockchain-evolution:mtg}\n\n\n\\subsection{Graph Construction}\n\\label{sec:blockchain-evolution:mtg:construction}\n\nTo build $MTG$, we first define the money transfer trace, i.e., $T_m$, in Bitcoin, Ethereum and EOSIO.\nSpecifically, in Ethereum, a transaction with a certain amount of Ether and a blank input field is regarded as a $T_m$. The payer and payee can be identified easily by their addresses.\nIn EOSIO, we further parse the follow-up notifications initiated by \\texttt{eosio.token} after transfer requests. Hence, we can extract the real participants and construct a $T_m$ directly between them.\nFor Bitcoin, a transaction may have multiple inputs and outputs, which are regarded as two sets: payers and payees. 
Therefore, \nwe follow previous work~\\cite{akcora2018forecasting} and insert an additional node, labeling it with the corresponding transaction id, $txid$.\n\nAs defined in Table~\\ref{table:notations}, each $T_m$ can be formally represented as $(v_i, v_j, w, t)$, indicating that at time $t$, $v_i$ transferred $w$ tokens to $v_j$.\nThen, by converting the timestamp $t$ to date $d$ (formatted as $yyyy-MM$) and adding $w$ up as $w^*$, we can group $T_m$s with the same payer and payee on a monthly basis for each platform. This can be formalized as $\\{(v_i, v_j, w^*, d) | v_i, v_j \\in V, w^* \\in \\mathbb{R}^+\\}$.\nConsequently, each $MTG$ can be defined as a \\textit{weighted directed graph}: $MTG = (V, E)$, where $E$ is the above-mentioned set and $V$ is a set of nodes represented as: $(id, label)$. Specifically, $id$ is the identifier to distinguish nodes within a platform; and the $label$ is the category of DApps to which the $id$ belongs, according to the off-chain information we collected.\nIn total, we have created 214 $MTG$s, 135 for Bitcoin, 57 for Ethereum and 22 for EOSIO.\nWe spend the rest of this section inspecting $MTG$ using the four metrics defined in \\S\\ref{sec:blockchain-evolution}.\n\n\n\\subsection{Number of Traces}\n\\label{sec:blockchain-evolution:mtg:traces}\n\n\n\\subsubsection{Overall evolution.}\nThe number of $T_m$ over time across the three platforms is shown in Fig.~\\ref{fig:traces-mtg}.\nThe number of $T_m$ in Bitcoin has always been greater than Ethereum's.\nSince Oct. 2018, the number of $T_m$ in EOSIO has surpassed the other two platforms. 
Though there was a decline from late 2018, its number reached 2.1 billion in $MTG_{eos}^{2019-11}$, which is more than an order of magnitude higher than Bitcoin's and Ethereum's at the same time.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=1\\columnwidth]{images\/traces-mtg.pdf}}\n\\caption{The evolution of monthly money transfer traces across three blockchains.}\n\\label{fig:traces-mtg}\n\\end{figure}\n\nWe further show the proportion of $T_m$ out of all traces by a dashed line in Fig.~\\ref{fig:traces-mtg}. Note that we omit the percentage for Bitcoin, as it is always 100\\% (the Bitcoin network is solely composed of $T_m$).\nFor Ethereum, we observe three troughs in late-2015, Oct. 2016 and Nov. 2017. These were caused by Dwarfpool~\\cite{dwarf}, a DoS attack~\\cite{eth-dos}, and a DApp called CryptoKitties~\\cite{cryptokitties}, respectively. \nTo be specific, the mining pool and the DoS attack both invoked a huge number of $T_c$ (see \\S\\ref{sec:blockchain-evolution:cig}). The DoS attack took advantage of the flawed design of the price of gas (see \\S\\ref{sec:background:general:consensus}).\nCryptoKitties is a collectible game where players can trade and exchange \\textit{kitties}. However, \\textit{kitties} are represented by alternative tokens, whose transfer is regarded as $T_c$ in our paper.\nExcept for these three troughs, the overall share of $T_m$ in Ethereum has been gradually decreasing, which we believe is due to the emergence of DeFi since May 2017. According to our data, in just six months, the number of DeFi-related $T_c$ rose by a remarkable 486.3\\%, and by another 7,420.4\\% over the next half year.\n\nIn contrast, the amount of $T_m$ in EOSIO surged in late-2018, which was due to the popularity of gambling DApps. This type of DApp requires frequent and abundant money transfers between players and contracts. Our collected data indicates there were 237 million $T_m$ related to gambling DApps in EOSIO in Nov. 
2018 (equivalent to 23.4 times all $T_m$ in Ethereum in the same month).\nAfter this, the share of $T_m$ also started to go down until Nov. 2019. The emergence of EIDOS\\footnote{EIDOS accepts a user's transfer of EOS and returns it immediately. Meanwhile, it initiates another transaction with a certain amount of EIDOS tokens, which are issued by the DApp itself but can be traded on exchanges. Therefore, the ratio of $T_m$ to $T_c$ related to EIDOS is around 2 to 1.} \\cite{eidos} forced the percentage of $T_m$ to around 66\\% (i.e., two-thirds) of all traces in each month (see \\S\\ref{sec:abnormal-behaviors:case-study:EIDOS}).\n\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{0.8\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/percentage-mtg-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:percentage-mtg-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.8\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/percentage-mtg-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:percentage-mtg-eos}\n \\end{subfigure}\n \\caption{Percentage of money transfer traces by initiator role.}\n \\label{fig:percentage-mtg}\n\\end{figure}\n\n\n\n\\subsubsection{The role of initiators.}\nFig.~\\ref{fig:percentage-mtg} shows the breakdown of $T_m$ initiators by role, in Ethereum and EOSIO.\nAn obvious trough can be observed in Oct. 2016 in Fig.~\\ref{fig:percentage-mtg-eth}. 
\nThis is because the DoS attack requires frequent invocations of $T_m$, which can be easily achieved by smart contracts instead of regular accounts (i.e., humans).\nThat aside, we observe that users in Ethereum prefer transferring money on their own, though the ratio was still gradually going down.\nAccording to our data, along with the increase of the absolute number of $T_m$ (see Fig.~\\ref{fig:traces-mtg}), since 2018, the number of $T_m$ initiated by EOA has dropped by 67.6\\%.\nMoreover, in EOSIO, we see a surge of $T_m$ initiated by smart contracts in late-2018, as the contracts of gambling DApps needed to invoke $T_m$ to return money to their winners. Our data illustrates that among these 237M gambling-related $T_m$, 78.38\\% were initiated by gambling DApps.\n\n\\textbf{\\textit{Insight: }}\n\\textit{\nThe activity of the blockchain ecosystems has shown continuous growth in terms of transferring native tokens, in part incentivized by killer DApps.\nThe proportion of transfer transactions reveals that both Ethereum and EOSIO have been experiencing a decline, as they depart from being pure value-transfer networks.\n}\n\n\\subsection{Degree}\n\\label{sec:blockchain-evolution:mtg:degree}\nAs explained in \\S\\ref{sec:blockchain-evolution:metrics}, we use $\\alpha$ to measure the degree distribution. \nWe first illustrate its meaning using a concrete example. \nFig.~\\ref{fig:alpha-eg} illustrates the $\\alpha$ of the indegree and outdegree distributions in $MTG_{eos}^{2019-10}$ and $MTG_{eth}^{2019-10}$.\nEach dot in Fig.~\\ref{fig:alpha-eg} shows that there are $y$ percent of nodes with the degree\/indegree\/outdegree valued as $x$ (in the corresponding month and platform). \nThe distribution of node degree is quite different across the blockchains.\nNote, the larger the $\\alpha$, the more obvious the long-tail distribution, indicating less variability in the nodes' degree. 
\nMore importantly, a high $\\alpha$ means greater centralization.\nFor example, we identify a family of accounts in EOSIO that invoke spam advertisements in Oct. 2019 (more details in \\S\\ref{sec:abnormal-behaviors:case-study:spam}).\nDue to this, Fig.~\\ref{fig:alpha-eg-eos-in} has points in the top left corner. These are nodes with a small indegree, representing victims of this spam attack. In contrast, we see a set of points located in the bottom right corner of Fig.~\\ref{fig:alpha-eg-eos-out}, i.e., the spammers.\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{0.4\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-eg-eos-in.pdf}\n \\caption{EOSIO indegree}\n \\label{fig:alpha-eg-eos-in}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.4\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-eg-eos-out.pdf}\n \\caption{EOSIO outdegree}\n \\label{fig:alpha-eg-eos-out}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.4\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-eg-eth-in.pdf}\n \\caption{Ethereum indegree}\n \\label{fig:alpha-eg-eth-in}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.4\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-eg-eth-out.pdf}\n \\caption{Ethereum outdegree}\n \\label{fig:alpha-eg-eth-out}\n \\end{subfigure}\n \\caption{Indegree and outdegree distribution for $MTG_{eos}^{2019-10}$ and $MTG_{eth}^{2019-10}$.}\n \\label{fig:alpha-eg}\n\\end{figure}\n\n\n\\subsubsection{Overall evolution.}\nTo study this centralization, we further calculate the $\\alpha$ of the degree\/indegree\/outdegree distribution for each platform on a monthly basis. 
The trends of the $\\alpha$s are shown in Fig.~\\ref{fig:alpha-mtg}.\nThe general trend shows that, in Bitcoin, except for the infant stage (prior to 2012), the $\\alpha$ of the indegree and outdegree distributions was quite stable and changed almost simultaneously.\nMoreover, the $\\alpha$ of Ethereum's indegree distribution shows an overall gradual upward trend, while that of the outdegree was gradually stabilizing. This signals growing centralization of the indegree distribution.\nIn EOSIO, however, the $\\alpha$ metrics fluctuate significantly. Compared to Ethereum, the indegree was less centralized, while the outdegree was more centralized.\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-mtg-btc.pdf}\n \\caption{Bitcoin}\n \\label{fig:alpha-mtg-btc}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.63\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-mtg-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:alpha-mtg-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.36\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-mtg-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:alpha-mtg-eos}\n \\end{subfigure}\n \\caption{$\\alpha$ of degree\/indegree\/outdegree distribution of blockchain platforms over time in terms of $MTG$.}\n \\label{fig:alpha-mtg}\n\\end{figure}\n\n\\subsubsection{Per-chain analysis.}\nWhen we look at each chain in detail, subtle differences arise.\nAs we see from Fig.~\\ref{fig:alpha-mtg-btc}, prior to 2012, \nBitcoin was still in its infancy and the number of $T_m$ per month was so small that a few transactions could cause a large shift in the $\\alpha$.\nFrom Fig.~\\ref{fig:alpha-mtg-eth}, we see that prior to mid-2018, the $\\alpha$ of the outdegree distribution was higher than that of the indegree distribution. 
Combining this with the historical prices of Ether~\\cite{eth-cap}, we see that the price was going up and reached a local maximum in mid-2017. This would motivate frequent buy and sell transactions through exchanges, which typically have multiple entries used to collect deposit requests from users. Hence, this resulted in a more centralized indegree distribution than outdegree. According to our data, the exchange-related $T_m$ during late 2017 accounted for 15.7\\%--48.18\\% of the total DApp-related $T_m$.\nFor EOSIO, except for Oct. 2019, we see the trend of $\\alpha$ was similar to Ethereum's (see Fig.~\\ref{fig:alpha-mtg-eos}). Also, if we combine the historical price of EOS~\\cite{eos-cap}, we see that there were some price peaks, especially before Dec. 2018 and Jun. 2019.\nHowever, the existence of exchanges in EOSIO cannot drive the $\\alpha$ so dramatically. \nOther driving forces include the growth of gambling DApps in Sep. 2018,\ni.e., around 6.9 million transactions were initiated from gambling DApps. The top 1\\% accounted for 98.1\\% of them, which explains the greater centralization of outdegree.\n\n\\textbf{\\textit{Insight: }}\n\\textit{\nCompared to the relatively stable degree distribution in Bitcoin, the degree distribution in Ethereum and EOSIO has been significantly affected by the price of tokens. 
This has resulted in greater centralization (with some dominant accounts), which is reflected in the fluctuation of $\\alpha$.\n}\n\n\\subsection{Pearson Correlation Coefficient ($R$)}\n\\label{sec:blockchain-evolution:mtg:pearson}\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/pearson-mtg.pdf}}\n\\caption{Pearson's correlation between indegree and outdegree of blockchain platforms over time.}\n\\label{fig:pearson-mtg}\n\\end{figure}\n\nThe correlation ($R$) between the indegree and outdegree of nodes for these platforms is depicted in Fig.~\\ref{fig:pearson-mtg}.\nAfter the infant stage of Bitcoin,\nits $R$ was relatively high, reflecting a strong positive correlation of indegree and outdegree. However, this may be due to the topology of $MTG_{btc}$, i.e., each node\nmust have one indegree (the referring output is directed from its $txid$) and zero\/one outdegree (depending on whether it is spent).\nIn Ethereum, after the DoS attack, there were two peaks that indicate a strong correlation: Mar. 2017 (0.521) and Dec. 2017 (0.607). These were caused by ShapeShift~\\cite{shapeshift} and Cryptokitties, respectively. The former is an exchange which allows users to exchange cryptocurrencies. Though Cryptokitties led to frequent token exchanges, it still required a huge number of Ether transfers to perform game logic, like trading and breeding \\textit{kitties}. In terms of $T_m$, the core contract of Cryptokitties accounted for 469K outdegree and 458K indegree.\nHowever, $R$ has been gradually decreasing since 2018, due to the emergence of DeFi, which tends to initiate $T_c$ instead of $T_m$ (see \\S\\ref{sec:blockchain-evolution:cig}).\n\nAs for EOSIO, the overall tendency of $R$ shows there was nearly no correlation between indegree and outdegree, except for two months: Apr. 2019 and Nov. 2019. One previous work~\\cite{huang2020understanding} analyzed the EOS Global event that happened in Apr. 2019. 
This was a bot-like account that sent and received a huge amount of EOS to fake an exaggerated trading volume. This resulted in huge outdegree and indegree.\nThe peak in Nov. 2019 resulted from EIDOS, a spam airdrop event, whose contract received a lot of EOS from investors and returned it immediately (detailed in \\S\\ref{sec:abnormal-behaviors:case-study:EIDOS}).\n\n\\textbf{\\textit{Insight: }}\n\\textit{\nBitcoin has a relatively high $R$, but Ethereum's $R$ has been gradually decreasing. This may be related to the emergence of popular DeFi DApps.\nIn contrast, EOSIO has maintained a relatively low level of $R$ in transferring money, except for the peaks caused by exchanges and spam.\n}\n\n\n\\subsection{Connected Components}\n\\label{sec:blockchain-evolution:mtg:cc}\nFig.~\\ref{fig:cc-mtg} shows the evolution of the numbers of WCC and SCC.\nGenerally, these numbers reflect the (dis)appearance of nodes that could introduce huge amounts of (bi-)directional edges (see \\S\\ref{sec:metric:wcc}).\nNote that, due to the impossibility of constructing bi-directional edges in $MTG_{btc}$ for the Bitcoin network, we only measure its number of WCC.\nUntil Apr. 2018, the numbers of WCCs for Ethereum and Bitcoin had been almost equal (48,529 for Ethereum and 55,387 for Bitcoin). In contrast, the number of WCC for EOSIO was only 230 at most (in Apr. 2019). There are two possible reasons: \n1) a relatively lower number of accounts and smart contracts in EOSIO compared to Ethereum and Bitcoin (see Table~\\ref{table:dataset}); \nand 2) $T_m$ is more likely to be invoked in EOSIO (see Fig.~\\ref{fig:traces-mtg}).\nIn Oct. 2019, the number of WCC in EOSIO dropped to 3. This was caused by an enormous amount of spam advertisements (detailed in \\S\\ref{sec:abnormal-behaviors:case-study:spam}) that carried tiny amounts of EOS but covered many accounts. 
Astonishingly, this pushed the size of the largest WCC to 579,964, covering more than 99.9\\% of nodes in $MTG_{eos}^{2019-10}$.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/cc-mtg.pdf}}\n\\caption{The number of WCC and SCC in different blockchain platforms over time.}\n\\label{fig:cc-mtg}\n\\end{figure}\n\nAs for SCC, the difference between EOSIO and Ethereum is smaller, which suggests that one-way directed edges in $T_m$ still dominate. In EOSIO, we also see a sudden drop in Nov. 2019 (the number of SCC is 55,452), as EIDOS enforces bi-directional $T_m$.\n\n\\textbf{\\textit{Insight: }}\n\\textit{The number of WCC was relatively low in EOSIO compared to the other two platforms, suggesting that accounts in EOSIO have stronger connections than those in Bitcoin and Ethereum. That said, severe fluctuations were introduced by certain misbehaviors.\nMoreover, the numbers of WCCs and SCCs have an upward trend, meaning that the platforms are constantly flooded with new users. However, our results show that the existing contracts do not perform many money transfers with them. \n}\n\n\\section{Evolution of Account Creation}\n\\label{sec:blockchain-evolution:acg}\n\n\\subsection{Graph Construction}\nThough we uniformly defined accounts in all three platforms (see \\S\\ref{sec:study-design:overview}), the concept of an `account' in Bitcoin is still vague. 
Therefore, we only discuss $T_a$, \\textit{account creation transactions}, in Ethereum and EOSIO.\nIn Ethereum, we focus on the smart contract, which can be created by an internal transaction whose type is \\textit{create} (see \\S\\ref{sec:background:general:tx}).\nAs for EOSIO, we parse the notifications followed by function \\texttt{newaccount} in theofficial account \\texttt{eosio} to parse the actual creator and the created account (explained in \\S\\ref{sec:background:general:tx}).\n\nSimilar to the definition of $T_m$ (see \\S\\ref{sec:blockchain-evolution:mtg:construction} and Table~\\ref{table:notations}), $T_a$ is also defined as $(v_i, v_j, w, t)$. However, as each $v_j$ can only be created once, the $w$ is always $1$ here.\nTherefore, we can group $T_a$s by only converting $t$ as $d$: $\\{(v_i, v_j, d) | v_i, v_j \\in V\\}$.\nEventually, we obtain an \\textit{unweighted directed graph} $ACG$ for each month: $ACG = (V, E)$, where $E$ is the above set and $V$ is the set composed of nodes as introduced in \\S\\ref{sec:blockchain-evolution:mtg:construction}.\nIn total, we have created 79 ACGs: 57 for Ethereum and 22 for EOSIO.\nWe spend the rest of this section inspecting $ACG$ using three of the metrics defined in \\S\\ref{sec:blockchain-evolution}.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/traces-acg.pdf}}\n\\caption{The evolution of account creation traces.}\n\\label{fig:traces-acg}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{0.8\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/percentage-acg-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:percentage-acg-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.8\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/percentage-acg-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:percentage-acg-eos}\n \\end{subfigure}\n \\caption{The distribution of initiators of $T_a$.}\n 
\\label{fig:percentage-acg}\n\\end{figure}\n\n\n\\subsection{Number of Traces}\n\\label{sec:blockchain-evolution:acg:traces}\nThe number of $T_a$ and its proportion to the number of all traces in the corresponding month are shown in Fig.~\\ref{fig:traces-acg}.\nTaking the number of $T_a$ in Ethereum as a whole, we can observe a conspicuous upward trend. Contrary to Ethereum, the number of newly created accounts in EOSIO has remained at a relatively low but stable level. The exception is Apr. 2019, when the EOS Global contract created more than 241K bot-accounts.\nMoreover, in EOSIO, except for the first month, the ratio of $T_a$ was almost negligible (0.17\\% in Aug. 2018 was the maximum ratio).\nIn contrast, there was always a certain percentage of $T_a$ in Ethereum in each month (around 1\\%--2.5\\%), though the percentage was much lower than $T_m$'s (see Fig.~\\ref{fig:traces-mtg}).\nThe main reason is that creating an account in Ethereum is almost free compared to EOSIO\\footnote{\nCreating an EOSIO account cost 5.39 USD~\\cite{eos-create-account}, while it cost less than 0.01 USD in Ethereum on the same day.\n}, which may make users more reluctant to reuse existing accounts for different purposes.\n\n\nFig.~\\ref{fig:percentage-acg} further shows the role of the initiators who invoke $T_a$s.\nEspecially in Mar. 2020, the ratio of Ethereum accounts created by smart contracts is 33.7 times higher than that of EOAs. Such an astonishing ratio shows a dramatic contrast to the ratio in $MTG$ (see Fig.~\\ref{fig:percentage-mtg-eth}).\nAfter manual investigation, we find that this is due to a contract named GasToken~\\cite{gastoken}, which helps users spend less gas to complete transactions --- it has created a huge number of accounts since 2019. \nThe $T_a$ it initiated accounted for 62\\% of all $T_a$ in Mar. 
2020.\nMoreover, we find that the ratio of $T_a$ initiated from contracts has little correlation with the prosperity of DApps.\nOverall, DApp-related account creation transactions have never accounted for more than 15\\% in any month.\n\n\\textbf{\\textit{Insight: }}\n\\textit{In Ethereum and EOSIO, the absolute number and ratio of account creation transactions are relatively low compared to transfer transactions. Creating accounts is less necessary in Ethereum, and EOSIO charges expensive fees.\nMoreover, accounts are more likely to be created by contracts, indicating the tendency for more relationships between smart contracts.\n}\n\n\n\\subsection{Degree}\n\\label{sec:blockchain-evolution:acg:degree}\nAs each node can only be created once, this leads to a constant indegree value for each node. Hence, we only measure the distribution of outdegree and degree, and the evolution of the corresponding metric $\\alpha$, as shown in Fig.~\\ref{fig:alpha-acg}.\n\nFor Ethereum, prior to mid-2017, the values of $\\alpha$ show volatility, which we believe is caused by the relatively small number of $T_a$ during this period (see Fig.~\\ref{fig:traces-acg}). From then on, both platforms exhibit an upward trend for the $\\alpha$ of the outdegree distribution, especially EOSIO. In other words, the account creation behavior has become increasingly centralized. For example, account \\texttt{genialwombat}, which belongs to Wombat Wallet~\\cite{wombat}, offers a free account creation service for its users. This wallet has created more than 32.2K accounts. Furthermore, the top 1\\% of accounts in EOSIO have created 97.3\\% of all accounts in Mar. 2020.\nInterestingly, we can observe two sudden decreases in both sub-graphs: Aug. 2018 in Ethereum and Apr. 2019 in EOSIO. 
Combining the off-chain information we collected, we conclude that these two outliers were caused by an attack against a Fomo3D-like game called Last Winner~\\cite{last-winner} and the EOS Global event, respectively. The former took advantage of a large number of newly created accounts to attack the DApp. This account creation process followed a multi-level structure~\\cite{last-winner}, leading to an enormous number of $T_a$ invocations.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[t]{0.63\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-acg-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:alpha-acg-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.36\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-acg-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:alpha-acg-eos}\n \\end{subfigure}\n \\caption{$\\alpha$ of degree\/outdegree distribution of blockchain platforms over time in terms of $ACG$.}\n \\label{fig:alpha-acg}\n\\end{figure}\n\n\\textbf{\\textit{Insight: }}\n\\textit{Except for the early stage of Ethereum, the $\\alpha$ of the outdegree distributions shows an upward trend for both platforms. The exceptions are two troughs, which resulted from the attack against Last Winner and the EOS Global event, respectively.\nEspecially in EOSIO, the upward $\\alpha$ indicates the emergence of large account creation providers, which made the invocations of account creation transactions more centralized.}\n\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/cc-acg.pdf}}\n\\caption{The number of WCC in different blockchain platforms over time in terms of $ACG$.}\n\\label{fig:cc-acg}\n\\end{figure}\n\n\\subsection{Connected Components}\n\\label{sec:blockchain-evolution:acg:cc}\nFig.~\\ref{fig:cc-acg} shows the number of WCCs in Ethereum and EOSIO.\nEthereum shows an upward trend followed by a decline in terms of the number of WCC. 
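As a side note on how such WCC counts can be reproduced: treating every directed trace as an undirected edge and running a union-find pass yields the number of WCCs in a monthly graph. The sketch below assumes a simple in-memory edge list, which is not our actual pipeline.

```python
def count_wcc(edges):
    """Count weakly connected components of a directed trace graph by
    treating each (v_i, v_j) edge as undirected (union-find sketch)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for u, v in edges:
        union(u, v)
    # One root per component among all observed nodes.
    return len({find(n) for n in parent})
```

Counting SCCs additionally requires a directed algorithm such as Tarjan's, which we omit here.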
\nIn EOSIO, it was also gradually decreasing (due to the emergence of account creation services).\nMoreover, we observe an obvious trough during Oct. 2016 in Ethereum, which had 294 WCCs.\nWe believe this is related to the DoS attack, which created a large number of accounts and prevented others from creating accounts properly.\nBesides, the single WCC in EOSIO in Jun. 2018 arose because the account creation tree composed of invoked $T_a$ has a common root node: the official account \\texttt{eosio}.\n\n\n\n\\textbf{\\textit{Insight: }}\n\\textit{The account creation process has become more centralized, causing the number of WCC to go down continuously. This further reflects the tighter relationship between smart contracts in terms of account creation behavior.\n}\n\n\n\\section{Evolution of Contract Invocation}\n\\label{sec:blockchain-evolution:cig}\n\n\n\\subsection{Graph Construction}\nSimilar to $ACG$, we do not discuss $T_c$ in Bitcoin.\nWe treat all the traces in Ethereum that target any smart contract as $T_c$.\nAs for EOSIO, we exclude the transfer actions to \\texttt{eosio.token} and the account creation actions to \\texttt{eosio}; all the other actions are treated as $T_c$.\nConsequently, we collect all the $T_c$ as $(v_i, v_j, w, t)$.\nIdentical to the process introduced in \\S\\ref{sec:blockchain-evolution:mtg:construction}, the $CIG$ is defined as a \\textit{weighted directed graph}: $CIG = (V, E)$, where $E = \\{(v_i, v_j, w^*, d) | v_i, v_j \\in V, w^* \\in \\mathbb{R}^+\\}$ and $V$ is the set of nodes.\nIn total, we have created 79 CIGs: 57 for Ethereum and 22 for EOSIO.\n\n\\subsection{Number of Traces}\n\\label{sec:blockchain-evolution:cig:traces}\nFig.~\\ref{fig:traces-cig} depicts the number of $T_c$ and its corresponding ratio across time.\nAll these four lines fluctuate wildly.\nFor Ethereum, except for the spike in Oct. 
2016 caused by the DoS attack, which led to the number of $T_c$ exceeding 189 million, its absolute number shows a gradual upward trend.\nMoreover, the growth of the ratio is dramatic: from the launch of Ethereum to Mar. 2020, the ratio grew from 9.60\\% to 84.69\\%, so $T_c$ started to dominate Ethereum's network. \n\nAs for EOSIO, its absolute number was also rising and was about an order of magnitude more than that of Ethereum, reaching almost two orders of magnitude after Nov. 2019. However, the trends of its ratio and absolute numbers are inconsistent. To be specific, we can see an obvious spike in Jul. 2018 that resulted from two accounts: \\texttt{blocktwitter}\nand \\texttt{chaintwitter}, which introduced more than 64.9M and 21.0M $T_c$, respectively. \nThen, during late-2018, a gambling DApp led to frequent invocations between accounts, thereby resulting in a trough in the ratio during this period. Since then, the ratio rose continuously until Nov. 2019, after which EIDOS's mechanism has almost fixed the ratio at 33.3\\%.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/traces-cig.pdf}}\n\\caption{The evolution of contract invocation traces.}\n\\label{fig:traces-cig}\n\\end{figure}\n\n\nThe role of the invokers who initiated $T_c$ is also interesting, as shown in Fig.~\\ref{fig:percentage-cig}.\nThe DoS attack in Ethereum (Oct. 2016) was mainly initiated by smart contracts. \nSince then, smart contracts increasingly invoke other smart contracts.\nBut in EOSIO, a different tendency is observed: users in EOSIO preferred to invoke smart contracts from their own accounts. In May 2019, 41.42\\% of $T_c$ was initiated by regular accounts. \nThe reason is unclear; however, from Nov. 
2019, EIDOS started to dominate the ecosystem, resulting in the percentage of invocations initiated by smart contracts rising to 98.27\\%.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[t]{0.8\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/percentage-cig-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:percentage-cig-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.8\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/percentage-cig-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:percentage-cig-eos}\n \\end{subfigure}\n \\caption{The distribution of initiators of $T_c$.}\n \\label{fig:percentage-cig}\n\\end{figure}\n\n\\textbf{\\textit{Insight: }}\n\\textit{The number of smart contract invocations has been increasing across both platforms, confirming their growing use.\nHowever, the number and the corresponding ratio were inconsistent in EOSIO, as they were significantly affected by gambling DApps and EIDOS.}\n\n\n\\subsection{Degree}\n\\label{sec:blockchain-evolution:cig:degree}\nFig.~\\ref{fig:alpha-cig} illustrates the degree distribution.\nInterestingly, the $\\alpha$ of the outdegree distribution of EOSIO is overall higher than that of Ethereum. The $\\alpha$ of the indegree distribution, however, shows the opposite pattern. This indicates that \\textit{most accounts in Ethereum invoke fewer contracts than accounts in EOSIO, but there exist some \\textit{super nodes} that accept most of the $T_c$ in Ethereum.} \nMoreover, in EOSIO, the $\\alpha$ of the outdegree distribution is higher than the indegree's almost all the time. This illustrates the existence of some killer DApps that invoke many $T_c$.\nThe significant upward trend in 2020 in Fig.~\\ref{fig:alpha-cig-eos} reflects that the initiation of $T_c$ became more centralized. Recall the mechanism of EIDOS we briefly introduced in \\S\\ref{sec:blockchain-evolution:mtg:traces}, which initiates lots of EIDOS token transfers to benefit users. 
Alone, it accounted for 92.2\\% of $T_c$ in Nov. 2019, which indicates the tendency towards centralization in EOSIO.\nAdditionally, in Ethereum, for the $\\alpha$ of the outdegree distribution, the trough throughout 2018 in Fig.~\\ref{fig:alpha-cig-eth} is highly conspicuous. The reason is the emergence of some popular DeFi DApps that attract users to invoke them; this makes the outdegree distribution less centralized.\n\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[t]{0.63\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-cig-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:alpha-cig-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.36\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/alpha-cig-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:alpha-cig-eos}\n \\end{subfigure}\n \\caption{$\\alpha$ of degree\/indegree\/outdegree distribution of blockchain platforms over time in terms of $CIG$.}\n \\label{fig:alpha-cig}\n\\end{figure}\n\n\\textbf{\\textit{Insight: }}\n\\textit{The $\\alpha$ of the outdegree\/indegree distributions in EOSIO and Ethereum follow opposite trends. This reflects the centralization of the outdegree in EOSIO, caused by popular gambling DApps and misbehaviors; and the centralization of the indegree in Ethereum due to the emergence of DeFi DApps.\n}\n\n\n\\subsection{Pearson Coefficient ($R$)}\n\\label{sec:blockchain-evolution:cig:pearson}\nFig.~\\ref{fig:pearson-cig} gives the $R$ between indegree and outdegree in terms of contract invocation.\nWe see there was little correlation between the indegree and outdegree in $CIG_{eth}$ and $CIG_{eos}$, and a negative correlation never appeared.\nThat said, in some months, the value of $R$ rises sharply. This is caused by popular DApps or malicious behaviors; these have such a large indegree and outdegree that they are able to affect the entire network.\nFor example, in Aug. 
2018, a gambling game called \\textit{Last Winner} was launched in Ethereum; and in Aug. 2019, a popular DApp \\textit{pornhashbaby} appeared in EOSIO. The former resulted in 876K indegree and 311K outdegree, accounting for more than 45\\% of gambling-related $T_c$ in $CIG_{eth}^{2018-08}$.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/pearson-cig.pdf}}\n\\caption{Pearson's correlation between indegree and outdegree of blockchain platforms over time in $CIG$.}\n\\label{fig:pearson-cig}\n\\end{figure}\n\n\\textbf{\\textit{Insight: }}\n\\textit{The indegree and outdegree of nodes in both platforms had little correlation in terms of contract invocation. However, this metric can be severely impacted by misbehaviors or popular DApps.\n}\n\n\\subsection{Connected Components}\n\\label{sec:blockchain-evolution:cig:cc}\nFig.~\\ref{fig:cc-cig} shows the number of WCCs and SCCs in Ethereum and EOSIO in $CIG$s.\nThese have been gradually increasing. \nThe peak in Jan. 2018 in Ethereum's WCC is obvious, caused by the sudden decrease of users in Bittrex due to a hack event~\\cite{bittrex-hack}.\nMoreover, the number of WCCs in EOSIO is nearly two orders of magnitude smaller than Ethereum's.\nCombining Fig.~\\ref{fig:traces-cig} and Fig.~\\ref{fig:alpha-cig-eos}, we conclude that \\textit{users on EOSIO were calling contracts more frequently}.\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/cc-cig.pdf}}\n\\caption{The number of WCC and SCC in Ethereum and EOSIO over time in terms of $CIG$.}\n\\label{fig:cc-cig}\n\\end{figure}\n\nBesides, some accounts initiated a large number of $T_c$, \nwhich covered numerous contracts and kept the number of WCC at a low level. The largest WCC covered at least 99.24\\% of nodes in the corresponding months. However, for SCC, we can conclude that callbacks from smart contracts are unusual. 
This may be due to two reasons: 1) the initiating account is a regular account, thus it cannot be called back via a $T_c$; and 2) the callback trace just sends some native tokens, which is classified as a $T_m$. \nNote that there is a missing point of \\#SCC in Nov. 2016. This is because Ethereum officially deployed a sweeper contract\\footnote{0xa43ebd8939d8328f5858119a3fb65f65c864c6dd} that initiated more than 16 million one-way $T_c$ to remove all the redundant and meaningless contracts created by the DoS attack in October. This resulted in a large number of SCCs ($>$16M).\n\n\\textbf{\\textit{Insight: }}\n\\textit{\nThe growing number of accounts indicates that smart contracts are gaining increasing popularity.\n}\n\n\\fi\n\n\n\n\\section{DApp Ecosystem}\n\\label{sec:dapp}\nSmart contracts were first used in blockchain by Ethereum, where the ability of smart contracts to interact with each other made the appearance of DApps possible. 
Moreover, the resource model of EOSIO encourages the emergence of gambling and DeFi DApps that require frequent money transfers or contract invocations.\nTherefore, based on the $MTG$ and $CIG$ for both of the above platforms and the DApp information we collected (see \\S\\ref{sec:study-design:data-collection}), we extracted all the DApp-related traces for each month.\nSpecifically, if the $label$ attribute of either one of the two nodes of a trace contains a DApp's information, we regard this trace as a DApp-related trace and classify it into the corresponding category by $label$.\nThen, we utilized these DApp-related traces to construct a series of $MTG$ and $CIG$ following the methodology we adopted in \\S\\ref{sec:blockchain-evolution}.\nIn this section, we first give an overview of the DApp information we collected. Then, we picture the overall ecosystem of DApps in Ethereum and EOSIO. Finally, we pick out the DApp-related traces belonging to the two most representative categories, gambling and DeFi, to explore some interesting findings.\n\n\\subsection{DApp Overview}\n\\label{sec:dapp:overview}\nAs we mentioned in \\S\\ref{sec:study-design:data-collection}, we have collected the largest-ever dataset on DApp information on both Ethereum and EOSIO. 
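The trace-labeling rule described above (a trace is DApp-related if either endpoint carries a DApp $label$) reduces to a two-key lookup; a minimal sketch with a hypothetical `labels` mapping from account to category, not our real schema:

```python
def classify_trace(v_i, v_j, labels):
    """Return the DApp category of a trace (v_i -> v_j), or None if
    neither endpoint is labeled. `labels`: account -> category string
    (hypothetical record shape for illustration)."""
    return labels.get(v_i) or labels.get(v_j)
```

In practice the sender's label takes precedence here; a real pipeline would also need a tie-breaking rule when both endpoints belong to different DApps.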
Moreover, we made a great effort to verify the correctness of the DApps' category labels.\nTable~\\ref{table:dapp} shows brief descriptions of the categories that we have adopted, as well as the number of the corresponding DApps.\n\n\\begin{table}[tbp]\n\\caption{The description of categories, as well as the number of corresponding DApps.}\n\\centering\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{llcc}\n\\toprule\n\\textbf{Category} & \\textbf{Desc.} & \\textbf{\\#Ethereum (\\#Addr)} & \\textbf{\\#EOSIO (\\#Addr)} \\\\\n\\midrule\n\\textbf{DeFi} & Financial products built directly on blockchain technology & 73 (4,386) & 8 (23) \\\\\n\\textbf{Exchange} & A place that allows users to exchange fiat currency and cryptocurrency & 162 (328) & 40 (96) \\\\\n\\textbf{Finance} & DApps that are related to financial products & 95 (316) & 10 (19) \\\\\n\\textbf{Gambling} & Games that require players to invest tokens and may return more & 377 (1,107) & 372 (1,052) \\\\\n\\textbf{Game} & All non-gambling games, e.g., collectible card games & 1,109 (5,126) & 97 (163) \\\\\n\\textbf{High-Risk} & Applications that are often characterized by Ponzi schemes & 335 (453) & 47 (68) \\\\\n\\textbf{Platform} & On which users can develop or deploy other applications or services & 72 (158) & 16 (34) \\\\\n\\textbf{Social} & Social applications (like dating) that adopt blockchain technology & 76 (134) & 20 (31) \\\\\n\\textbf{Token} & Unofficial tokens issued on the blockchain platform & 854 (909) & 3 (4) \\\\\n\\textbf{Tool} & DApps that provide users with convenient and efficient tools & 139 (301) & 38 (65) \\\\\n\\textbf{EIDOS} & A malicious but phenomenal DApp in EOSIO & - & 1 (1) \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\label{table:dapp}\n\\end{table}\n\nAs we can see from Table~\\ref{table:dapp}, gambling DApps absolutely dominate EOSIO's DApp ecosystem in number, and the category is also popular in Ethereum. 
However, game DApps in Ethereum occupy a larger share: around 38.78\\% of addresses are related to game DApps. Moreover, the 73 DeFi DApps in Ethereum account for about one third of DApp addresses. We speculate that the reason for this phenomenon is that decentralized exchanges need different smart contracts to act as different trading pairs. For example, Uniswap~\\cite{}, a well-known decentralized exchange, has 410 contract addresses according to our dataset.\nLast but not least, we cannot omit the EIDOS DApp in EOSIO, which receives EOS tokens transferred from any account, and then immediately returns all EOS tokens and attaches a certain amount (independent of the number of the transferred EOS tokens) of EIDOS tokens. Since EIDOS tokens can be circulated on the secondary market and have a certain value, users can transfer EOS to this DApp at a very high frequency and profit at no cost. To this end, EIDOS has caused a huge amount of spam transactions, which will be detailed in \\S\\ref{sec:dapp:evolution} and \\S\\ref{sec:abnormal-behaviors:case-study:EIDOS}.\n\n\n\\subsection{Overall Evolution}\n\\label{sec:dapp:evolution}\nAfter extracting all the DApp-related traces, we analyzed the percentage of $T_m$ and $T_c$ belonging to different DApp categories, which are shown in Fig.~\\ref{fig:dapp-mtg} and Fig.~\\ref{fig:dapp-cig}, respectively.\n\n\\subsubsection{DApp's Money Transfer}\n\\label{sec:dapp:evolution:mtg}\nFrom the perspective of money transfer, we can easily observe from Fig.~\\ref{fig:dapp-mtg-eos} that gambling DApps overwhelmingly dominate EOSIO's DApp ecosystem. To be specific, from the third month after its launch, i.e., Aug. 2018, gambling DApps continuously accounted for more than 90\\% of $T_m$ in the corresponding months. 
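The per-category percentages analyzed in this subsection come down to grouping traces by month and category; a sketch over hypothetical `(month, category)` records, with `None` marking non-DApp traces (not our real schema):

```python
from collections import defaultdict

def category_shares(traces):
    """traces: iterable of (month, category) pairs; category is None
    for non-DApp traces. Returns {month: {category: share of all
    traces in that month}} (illustrative record shape only)."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for month, category in traces:
        totals[month] += 1
        if category is not None:
            counts[month][category] += 1
    return {m: {c: n / totals[m] for c, n in cats.items()}
            for m, cats in counts.items()}
```

Dividing by the per-month total (including unlabeled traces) is what makes the gambling share directly comparable across months.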
Astonishingly, this share peaked at 99.01\\% in November 2018.\nHowever, from November 2019, EIDOS began to dominate the whole ecosystem by an overwhelming margin, accounting for around 99.39\\% in that month.\nBesides these two categories, we can see that exchanges also play an important role in EOSIO's ecosystem. Prior to November 2019, exchange-related transfers became increasingly active, which indicates that users began to buy and sell EOS tokens more frequently. According to our statistics, the most active exchanges are DEXEOS~\\cite{}, WhaleEx~\\cite{}, Findex~\\cite{} and Newdex~\\cite{}.\nAlso, in July 2018 alone, 79.51\\% of $T_m$ was related to game DApps. After a comprehensive analysis, we found this was caused by a game DApp called PumpDumpWars~\\cite{}, \\he{which is a xxx}.\n\nFrom Fig.~\\ref{fig:dapp-mtg-eth}, we can see that platform DApps (a mining pool called Dwarfpool~\\cite{}) took up all the initial market share until mid-2017. However, their absolute number was actually relatively small: according to our statistics, they took up at most 20.22\\% of all $T_m$, which happened in February 2016 (see Fig.~\\ref{fig:no-dapp}).\nIn addition, we can see a spike of tool DApps in mid-2017. It is a DApp called Ethereum Name Service (ENS)~\\cite{}, which provides a service that maps human-readable names to Ethereum addresses.\nMoreover, two obvious spikes related to game and gambling DApps in December 2017 and August 2018 can easily be noticed, which resulted from the prosperity of CryptoKitties~\\cite{} and Last Winner~\\cite{}, respectively.\nDeFi started to flourish at the beginning of 2018 and became more and more important. Meanwhile, gambling DApps also started to capture the market of game DApps. 
We think the reason may be the popularity of Fomo3D~\\cite{}, a phenomenal gambling DApp in Ethereum in mid-2018, and the prosperity of gambling DApps on the EOSIO platform, both of which contributed to the prevalence of gambling DApps.\n\nMoreover, Fig.~\\ref{fig:no-dapp} illustrates the percentage of non-DApp-related traces in the corresponding months, which shows a significant difference between the DApp ecosystems of Ethereum and EOSIO.\nWe can easily observe that more than three quarters of $T_m$ in EOSIO was related to DApps most of the time. In Ethereum, by contrast, less than one quarter of $T_m$ was related to DApps.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/dapp-mtg-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:dapp-mtg-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/dapp-mtg-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:dapp-mtg-eos}\n \\end{subfigure}\n \\caption{Percentage of different categories of DApps in terms of money transferring.}\n \\label{fig:dapp-mtg}\n\\end{figure}\n\n\\textbf{\\textit{Insight:}} In terms of DApp-related $T_m$, gambling DApps nearly dominated EOSIO's DApp ecosystem until the appearance of EIDOS. Along with the prosperity of gambling DApps, exchanges also played a vital role in EOSIO. Moreover, except for the first four months, no less than 80.64\\% of $T_m$ was related to DApps of various kinds.\nCompared to EOSIO, the ecosystem of Ethereum's DApps was less active. However, no single category of DApps dominated Ethereum for any period; in other words, the rise and fall of the percentages of various categories of DApps reflect the diversity of this ecosystem.\n\n\n\\subsubsection{DApp's Contract Invocation}\n\\label{sec:dapp:evolution:cig}\nFig.~\\ref{fig:dapp-cig} reveals the contract invocation behavior related to DApps on both platforms. 
\nSpecifically, Fig.~\\ref{fig:dapp-cig-eos} reveals the DApp-related $T_c$ in EOSIO. Compared with the money transfers in EOSIO (see Fig.~\\ref{fig:dapp-mtg-eos}), the biggest distinction is the appearance of traces related to social and platform DApps in July 2018 and almost the whole of 2019, respectively. According to our statistics, the social DApps are ChallengeDapp~\\cite{} and KARMA~\\cite{}, and the platform DApp is pornhashbaby~\\cite{}, which provided a pornography storing and sharing service but was shut down in November 2019.\n\nSimilarly, the DApp-related $T_c$ in Ethereum (see Fig.~\\ref{fig:dapp-cig-eth}) is more complicated than that of money transfers.\nWe can easily observe that the number of $T_c$ categorized as token DApps increased suddenly. That is because we regard all unofficial token transfers (see category \\textbf{Token} in \\S\\ref{sec:dapp:overview}) in Ethereum as contract invocations.\nBesides the emergence of token DApps, DeFi has been capturing more and more of the market. DeFi requires frequent interactions between DeFi DApps, which pushed the number of $T_c$ higher than ever.\nMoreover, we can observe that gambling-related $T_c$ occupied much of the share that originally belonged to platform DApps (mainly Dwarfpool) before mid-2017. We think the reason is that these gambling DApps accepted unofficial tokens as bets from players, which would only increase the number of $T_c$.\nAdditionally, we can observe some abnormal spikes: `tool' in May 2017, `gambling' in September 2019, and `game' in November 2019, which were caused by ENS, Fairwin~\\cite{} (a well-known but transitory gambling DApp), and Gods Unchained TCG~\\cite{} (a collectible trading card game, like CryptoKitties), respectively.\n\nThe percentage of non-DApp $T_c$ has completely different characteristics from that of money transfers. 
\nSpecifically, in Ethereum, the percentage of non-DApp $T_c$ dropped to nearly 50\\%, compared with 80\\% for non-DApp $T_m$ over the same time period. For EOSIO, however, the opposite is true.\nWe believe the main reason for this phenomenon is that traces related to unofficial token transfers are more active than Ether transfers in Ethereum, while the opposite is true in EOSIO.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/dapp-cig-eth.pdf}\n \\caption{Ethereum}\n \\label{fig:dapp-cig-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/dapp-cig-eos.pdf}\n \\caption{EOSIO}\n \\label{fig:dapp-cig-eos}\n \\end{subfigure}\n \\caption{Percentage of different categories of DApps in terms of contract invocation.}\n \\label{fig:dapp-cig}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\centerline{\\includegraphics[width=0.7\\columnwidth]{images\/no-dapp.pdf}}\n\\caption{Percentage of non-DApp money transferring and contract invocation traces in Ethereum and EOSIO.}\n\\label{fig:no-dapp}\n\\end{figure}\n\n\n\\textbf{\\textit{Insight:}} In EOSIO's DApp ecosystem, the characteristics of $T_c$ are similar to those of $T_m$. However, the pornography platform occupied a certain share until its shutdown in November 2019. Compared with $T_m$, the percentage of DApp-related $T_c$ is slightly lower, but still higher than that of Ethereum.\nEthereum still illustrated the diversity mentioned in \\S\\ref{sec:dapp:evolution:mtg}. However, token-related transfers have accounted for a large proportion, and DeFi has been dominating the ecosystem, reaching 66.38\\% in March 2020. 
Moreover, the game DApps, which initiated and received lots of $T_c$ to perform their game logic, cannot be neglected from late 2017 until the time of writing.\n\n\n\\subsection{Gambling \\& DeFi DApps}\n\\label{sec:dapp:defi-gambling}\nTo delve deeper into these two representative categories, we extracted the subgraphs of $MTG$ and $CIG$ composed of all the traces related to gambling and DeFi DApps, respectively.\n\n\\subsubsection{Gambling Ecosystem}\n\\label{sec:dapp:defi-gambling:gambling}\nGambling DApps have existed since the very beginning of these two platforms and have always held a certain share of the DApp ecosystems, especially in EOSIO (see \\S\\ref{sec:dapp:evolution}). Therefore, we first characterize the subgraphs related to gambling DApps.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/gambling-basic-mfg-node.pdf}\n \\caption{$MTG$}\n \\label{fig:gambling-basic-mfg-node}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/gambling-basic-cig-node.pdf}\n \\caption{$CIG$}\n \\label{fig:gambling-basic-cig-node}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/gambling-basic-mfg-trace.pdf}\n \\caption{$MTG$}\n \\label{fig:gambling-basic-mfg-trace}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/gambling-basic-cig-trace.pdf}\n \\caption{$CIG$}\n \\label{fig:gambling-basic-cig-trace}\n \\end{subfigure}\n \\caption{Number of nodes and traces within sub-$MTG$ and sub-$CIG$ that are related to gambling DApps.}\n \\label{fig:gambling-basic}\n\\end{figure}\n\nFig.~\\ref{fig:gambling-basic} shows the number of nodes included in each month's gambling-related $MTG$ and $CIG$, as well as the number of traces. 
\nWe can see that gambling DApps were much more popular in EOSIO in terms of both absolute number and relative percentage, at least prior to November 2019.\nSpecifically, though gambling DApps emerged earlier in Ethereum and held a certain market share (see Fig.~\\ref{fig:dapp-cig-eth}), only 10.84\\% of $T_c$ in April 2017 was related to gambling DApps, as shown in Fig.~\\ref{fig:gambling-basic-cig-trace}, which is the maximum in Ethereum's early stage.\nIn EOSIO, gambling-related $T_c$ occupied up to 88.53\\% in October 2018. The percentage for $T_m$ went even higher, reaching 94.67\\% in the same month, as shown in Fig.~\\ref{fig:gambling-basic-mfg-trace}.\nInterestingly, we can see several strange spikes in Fig.~\\ref{fig:gambling-basic-mfg-node}: August 2018 and May 2019 in Ethereum, and April and June 2019 in EOSIO. For Ethereum, the reasons behind them were Last Winner~\\cite{} and CoinGathernator~\\cite{}.\nIn EOSIO, however, they were caused by \\textit{spam advertisements} initiated by some gambling DApps. Specifically, the advertisement content was embedded in transactions carrying a tiny amount of EOS to attract players' attention. 
We can see from Fig.~\\ref{fig:gambling-basic-mfg-trace} that these few pieces of advertisement did not have the ability to change the whole ecosystem, but they covered lots of users in those months, i.e., 61.53\\% and 73.95\\% of the users included in $MTG_{eos}^{2019-04}$ and $MTG_{eos}^{2019-06}$, respectively.\n\n \\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/gambling-alpha-mfg.pdf}\n \\caption{$MTG$}\n \\label{fig:gambling-alpha-mfg}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/gambling-alpha-cig.pdf}\n \\caption{$CIG$}\n \\label{fig:gambling-alpha-cig}\n \\end{subfigure}\n \\caption{$\\alpha$ of indegree\/outdegree distribution within sub-$MTG$ and sub-$CIG$ that are related to gambling DApps. Blue lines are for Ethereum, and red lines are for EOSIO.}\n \\label{fig:gambling-alpha}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\centerline{\\includegraphics[width=0.5\\columnwidth]{images\/gambling-pearson-eth.pdf}}\n\\caption{R of indegree and outdegree distribution in Ethereum related to gambling DApps.}\n\\label{fig:gambling-pearson-eth}\n\\end{figure}\n\nWhen we delve deeper into the change of the $\\alpha$ of the indegree\/outdegree distributions, we can easily observe from Fig.~\\ref{fig:gambling-alpha-cig} that the $\\alpha$ of the outdegree distribution in Ethereum is much higher than that of the indegree distribution. Combined with the relatively small number of $T_c$ for gambling DApps (see Fig.~\\ref{fig:gambling-basic-cig-trace}), it is easy to draw a conclusion: a small number of gambling DApps in Ethereum received almost all the gambling-related $T_c$, while the players are dispersed across Ethereum, which leads to such a small $\\alpha$ of the outdegree distribution.\nMoreover, Fig.~\\ref{fig:gambling-pearson-eth} depicts the R between the indegree and outdegree distributions in Ethereum in terms of gambling DApps. 
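The two graph metrics discussed here, the exponent $\alpha$ of a degree distribution and the Pearson correlation R between indegree and outdegree, can be sketched as follows. This is a toy illustration with made-up degree sequences, using the standard continuous maximum-likelihood power-law estimator; the exact fitting procedure used in our measurements may differ.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def powerlaw_alpha(degrees, xmin=1):
    """Continuous maximum-likelihood estimate of a power-law exponent:
    alpha = 1 + n / sum(ln(x / xmin)) over all degrees >= xmin."""
    xs = [d for d in degrees if d >= xmin]
    return 1 + len(xs) / sum(math.log(x / xmin) for x in xs)

# Toy monthly snapshot: one hub (a gambling DApp) receives many traces but
# initiates few, while dispersed players each have small indegree.
indeg  = [50, 1, 1, 2, 1, 1, 1, 2, 1, 1]
outdeg = [2, 5, 6, 4, 7, 5, 6, 4, 5, 6]
print(pearson_r(indeg, outdeg))   # negative: heavy receivers initiate little
print(powerlaw_alpha(indeg))      # > 1: heavy-tailed indegree
```

A negative R, as in this sketch, matches the interpretation above: nodes that receive many traces rarely initiate them.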
During the most active period of gambling DApps in Ethereum, i.e., mid-2017, the R of $CIG$ even shows a negative correlation, which indicates that the gambling DApps that received lots of $T_c$ hardly initiated any $T_c$ themselves.\n\nRecall the appearance of Fairwin in September 2019, which led to conspicuous spikes in Fig.~\\ref{fig:dapp-cig-eth} and Fig.~\\ref{fig:gambling-alpha-cig}. In our analysis, the percentage of $T_c$ reached up to 25.81\\%, while the $\\alpha$ of the indegree distribution was -4.988. Moreover, the R values of $CIG$ and $MTG$ were opposite: -0.078 and 0.997, respectively.\nThese values jointly indicate that this gambling DApp centralized more than one quarter of Ethereum's $T_c$ in that month. However, the number of scattered players was so high (see the spikes in Fig.~\\ref{fig:gambling-basic-mfg-node} and Fig.~\\ref{fig:gambling-basic-cig-node}) that it pulled the $\\alpha$ down. In addition, the two values of R indicate that Fairwin did not initiate many $T_c$ but did initiate a high number of $T_m$.\n\n\\textbf{\\textit{Insight:}} Gambling DApps dominated the EOSIO ecosystem, though some of their high coverage is due to spam advertisement behavior. Contrary to the long-term prosperity of gambling DApps in EOSIO, Ethereum's popular ones were more transitory. In other words, some killer gambling DApps emerged and attracted lots of attention from players in Ethereum, but then disappeared suddenly.\n\n\\subsubsection{DeFi Ecosystem}\n\\label{sec:dapp:defi-gambling:defi}\nDeFi, now the most popular category of DApps, has attracted lots of investors, as well as malicious users. For example, Lendf.me, a famous lending platform based on decentralized finance technology, was attacked in April 2020, leading to a financial loss of more than 25 million USD~\\cite{}.\nMoreover, from Fig.~\\ref{fig:dapp-mtg-eth} and Fig.~\\ref{fig:dapp-cig-eth}, we can easily observe the liveness of DeFi DApps. 
Thus, in this section, we explore the characteristics of DeFi DApps in Ethereum and EOSIO.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/defi-basic-mfg-node-scale.pdf}\n \\caption{$MTG$}\n \\label{fig:defi-basic-mfg-node}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/defi-basic-cig-node.pdf}\n \\caption{$CIG$}\n \\label{fig:defi-basic-cig-node}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/defi-basic-mfg-trace-scale.pdf}\n \\caption{$MTG$}\n \\label{fig:defi-basic-mfg-trace}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/defi-basic-cig-trace.pdf}\n \\caption{$CIG$}\n \\label{fig:defi-basic-cig-trace}\n \\end{subfigure}\n \\caption{Number of nodes and traces within sub-$MTG$ and sub-$CIG$ that are related to DeFi DApps.}\n \\label{fig:defi-basic}\n\\end{figure}\n\nFig.~\\ref{fig:defi-basic} shows the number of related nodes, $T_m$, and $T_c$ in $MTG$ and $CIG$.\nWe can easily observe that DeFi-related traces mostly fall into the $CIG$.\nAs we have observed in Fig.~\\ref{fig:dapp-mtg-eos} and Fig.~\\ref{fig:dapp-cig-eos}, DeFi accounted for a tiny fraction of $T_m$ and $T_c$ in EOSIO: less than 1\\% for both types of graphs. However, we can still see three conspicuous spikes in Fig.~\\ref{fig:defi-basic-mfg-node}, in November 2018, May 2019 and November 2019, respectively. After manual analysis, we found that the first two spikes were caused by Chintai~\\cite{} and EOS REX~\\cite{} (a resource exchange platform in EOSIO), respectively. However, for November 2019, the number of nodes (7,366) did not increase much compared to the preceding month (7,324 nodes in October 2019). 
Therefore, we think this spike is due to the congestion caused by EIDOS.\nAs for Ethereum, steep growth trends in both $MTG$ and $CIG$ can be observed. Except for the trough in late 2019, which was caused by the declining coverage of USDT (a well-known stablecoin in Ethereum), DeFi accounted for 34.62\\% of $T_c$ in March 2020 and still kept an upward tendency. Compared to contract invocations, DeFi led to fewer money (Ether) transfers; however, it still accounted for about 7\\% of $T_m$ over the same period.\n\n\\begin{figure}[tbp]\n\\centerline{\\includegraphics[width=0.5\\columnwidth]{images\/defi-alpha-cig.pdf}}\n\\caption{$\\alpha$ of indegree\/outdegree distribution within sub-$CIG$ that are related to DeFi DApps.}\n\\label{fig:defi-alpha-cig}\n\\end{figure}\n\nAs DeFi DApps mainly interact with each other in the $CIG$, we only measured the $\\alpha$ of the indegree\/outdegree distributions in $CIG$ for both platforms, as shown in Fig.~\\ref{fig:defi-alpha-cig}.\nThe point at 2019-01 is striking: the $\\alpha$ of the outdegree and indegree distributions moved in opposite directions simultaneously. We think the reason was the appearance of a DeFi DApp called Chintai~\\cite{}, a platform where users can lend their EOS on the market to earn interest.\nMoreover, for both platforms, we can observe an interesting phenomenon: the $\\alpha$ of the indegree distribution is nearly always higher than that of the outdegree distribution. This indicates that there were several head DeFi DApps that absorbed users' contract invocation requests.\nAccording to our statistics, the 10 most popular DeFi DApps in Ethereum's history, the number of $T_c$ they received, and the users they covered are shown in Table~\\ref{table:top10-defi}.\nWe can see that 4 out of the top 5 are stablecoins, which maintain a relatively stable price through various financial measures. 
The most widely used stablecoin is USDT~\\cite{}, which has more than 3 million users who have invoked more than 4.3 million $T_c$.\nBesides stablecoins, exchanges and money markets also appear in the top 10. Specifically, exchanges here refer to decentralized exchanges (DEXes), which utilize smart contracts to automatically perform matchmaking. However, several blogs~\\cite{} and previous works~\\cite{} have reported and studied vulnerabilities in DEXes. As for money markets, they provide platforms that allow users to lend and borrow cryptocurrency.\n\n\\begin{table}[tbp]\n\\caption{Top 10 DeFi DApps in Ethereum and their corresponding numbers of $T_c$ and nodes.}\n\\centering\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{ccccccccccc}\n\\toprule\n & \\textbf{USDT} & \\textbf{USDC} & \\textbf{IDEX} & \\textbf{PAX} & \\textbf{DAI} & \\textbf{Oasis} & \\textbf{CDP Portal} & \\textbf{Uniswap} & \\textbf{TUSD} & \\textbf{Bancor} \\\\\n\\midrule\n\\textbf{Category} & \\begin{tabular}[c]{@{}c@{}}Stable\\\\ Coin\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Stable\\\\ Coin\\end{tabular} & Exchanges & \\begin{tabular}[c]{@{}c@{}}Stable\\\\ Coin\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Stable\\\\ Coin\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Money\\\\ Market\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Money\\\\ Market\\end{tabular} & Exchanges & \\begin{tabular}[c]{@{}c@{}}Stable\\\\ Coin\\end{tabular} & Exchanges \\\\\n\\textbf{\\# $T_c$} & 4,335,452 & 624,793 & 603,680 & 451,442 & 406,847 & 261,888 & 261,423 & 148,544 & 146,433 & 121,159 \\\\\n\\textbf{\\# Nodes} & 3,074,449 & 508,592 & 273,815 & 407,656 & 301,911 & 106,761 & 122,859 & 41,131 & 105,198 & 34,501 \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\label{table:top10-defi}\n\\end{table}\n\n\n\\begin{figure}[tbp]\n\\centerline{\\includegraphics[width=0.5\\columnwidth]{images\/defi-cc-cig.pdf}}\n\\caption{The number of WCC and SCC within $CIG$ that are related to DeFi DApps for both platforms.}\n\\label{fig:defi-cc-cig}\n\\end{figure}\n\nFig.~\\ref{fig:defi-cc-cig} depicts the number of WCC and SCC related to DeFi DApps in $CIG$. We can observe that the numbers of SCC in Ethereum were overall higher than EOSIO's. We think this is due to the frequent interaction between users and DeFi DApps, as the corresponding numbers of WCC only fluctuated around a relatively low level.\nThe three spikes in the number of SCC in EOSIO correspond exactly to the events mentioned in Fig.~\\ref{fig:defi-basic-mfg-node}.\n\n\\textbf{\\textit{Insight:}} DeFi in Ethereum has been flourishing; some head DeFi DApps have attracted lots of users who initiate $T_c$ to interact with them. Meanwhile, DeFi has not developed well in EOSIO, where only short-lived DeFi DApps emerged.\n\n\n\\subsection{Summary}\n\\he{high level summary for this whole section, TBD}\n\n\n\\section{Characterizing Outliers}\n\\label{sec:abnormal-behaviors}\n\n\n\\subsection{Definition of Outlier}\n\nOur previous exploration suggests that some events (e.g., DoS attacks) and influential accounts have a large impact on ecosystems.\nIn this section, we explore such events and addresses. \\textit{Note that our outlier analysis is not intended to identify attacks, as many attack events may not cause noticeable changes to the overall ecosystems.} Instead, our purpose is to pinpoint what kinds of events\/DApps can facilitate or impede the development of the blockchain ecosystems.\nFor all metrics considered, some data points deviate significantly from the average~\\cite{outlier}. We refer to these points as \\textit{outliers}.\n\n\n\\subsection{Detecting and Labelling Outliers}\n\\label{sec:abnormal-behaviors:method}\n\n\n\\subsubsection{Detecting Outliers.}\nWe use a simple but efficient algorithm, \\textit{z-score}~\\cite{z-score}, to flag outliers in each time series. 
A z-score quantifies how unusual an observation is: it is the number of standard deviations by which a value falls above or below the mean. Specifically, if the value of a metric for a month is $x$, the z-score is $z=\\frac{x-\\mu}{\\sigma}$, where $\\mu$ is the average of $x$ and the corresponding values of the three months before and after (7 values in total), and $\\sigma$ is the standard deviation of these 7 values.\nTherefore, the absolute value of the z-score represents the distance between $x$ and the mean of its adjacent values, in units of standard deviations.\n\n\\subsubsection{Labelling Outliers}\n\\label{sec:abnormal-behaviors:detecting:labelling}\nWe propose a semi-automated method to label outliers.\nOur idea is that \\textit{outliers result from suddenly popular DApps or misbehaviors}; thus, removing the responsible node(s) should make the absolute value of the z-score smaller than a threshold.\nFor each metric, we have a different strategy to identify the responsible node(s).\n\nSpecifically, (1) for the number-of-traces metric, the node with the highest degree (named the \\textit{supernode}) in the corresponding graph will be identified;\n(2) for the $\\alpha$ of the degree distribution, there are two cases, trough or peak, corresponding to dispersion and centralization, respectively. For the former, the strategy tries to identify the node(s) that introduce many nodes with a small degree; for the latter, the supernode with the highest degree\/indegree\/outdegree will be labeled;\n(3) based on our extensive manual investigation, a node with simultaneously high indegree and outdegree is likely to push the $R$ higher; therefore, the strategy focuses on the node with the highest product of indegree and outdegree;\n(4) a significant change in the number of connected components is always related to the (dis)appearance of nodes that are directly connected to lots of nodes in the network. 
To this end, the nodes with the most one-way or two-way edges to the others will be identified.\n\nAfter the node(s) are identified according to the above strategies, they and their connected edges in the corresponding graph are removed. We then recalculate the corresponding metric to see if the absolute value of the z-score drops below a threshold. If not, the above process is repeated until the outlier disappears.\nTo guarantee accuracy, we manually recheck all removed nodes to confirm they are the actual responsible ones.\nIn other words, for each outlier, \\textit{our method extracts a set of responsible nodes that are related to attack\/spam events or killer DApps.}\n\nWe classify the outliers into two major groups: \\textit{Killer DApps} and \\textit{Misbehaviors}, as shown in Table~\\ref{table:category-outliers}.\nThe Killer DApp category indicates the outlier is due to the activity of a popular DApp.\nThe misbehavior category is divided into three sub-categories: \\textit{Attack}, \\textit{Resource Manipulation}, and \\textit{Spam}. An attack means the outlier was caused by a well-known attack, like the DAO reentrancy attack~\\cite{the-dao}. Resource manipulation indicates the outlier was caused by an exploitation of the resource model, like EIDOS (see \\S\\ref{sec:background:general:consensus}). The spam sub-category covers spam advertisements and bot accounts.\nNote that the z-score algorithm also flags some outliers in the infant stage of the blockchains. 
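The detection-and-labelling loop described above can be sketched as follows. This is our own simplification: the threshold and function names are assumptions, and the real pipeline operates on the monthly graphs rather than a bare list, but the windowed z-score and the remove-and-recheck loop follow the description in the text.

```python
import statistics

def z_score(series, i):
    """Windowed z-score: compare series[i] against the mean/stdev of the
    7-value window centred on it (the three months before and after)."""
    window = series[max(0, i - 3):i + 4]
    mu = statistics.mean(window)
    sigma = statistics.stdev(window)
    return (series[i] - mu) / sigma

def label_outlier(series, i, remove_supernode, threshold=2.0, max_iter=10):
    """Iteratively remove the suspected responsible node and recompute the
    metric until the outlier disappears. `remove_supernode` is a caller-
    supplied callback that deletes one node from the underlying graph and
    returns the recomputed metric value plus the removed node's identifier."""
    removed = []
    for _ in range(max_iter):
        if abs(z_score(series, i)) < threshold:
            break  # outlier gone; the removed nodes explain it
        series[i], node = remove_supernode()
        removed.append(node)
    return removed
```

In the real analysis, the callback would implement the per-metric strategies (highest degree, highest indegree-outdegree product, etc.), and the returned node set is then manually rechecked.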
We omit these early-stage outliers, as a few transactions can lead to a volatile swing in the value of a metric.\n\n\n\\subsection{Results}\n\\label{sec:abnormal-behaviors:overall}\n\n\n\\begin{table*}[t]\n\\caption{The overall result of outlier labelling.}\n\\centering\n\\resizebox{0.7\\textwidth}{!}{%\n\\begin{tabular}{l|ccccccc|ccc|c}\n\\toprule\t\n & \\multicolumn{7}{c|}{\\textbf{Killer DApp}} & \\multicolumn{3}{c|}{\\textbf{Misbehavior}} & \\multirow{2}{*}{\\textbf{Total}} \\\\ \\cline{2-11}\n & DeFi & Exchange & Gambling & Game & Platform & Token & Tool & Attack & \\begin{tabular}[c]{@{}c@{}}Resource\\\\Manipulation\\end{tabular} & Spam & \\\\ \\midrule\n\\textbf{Bitcoin} & 0 & 0 & 2 & 2 & 5 & 0 & 0 & 3 & 0 & 1 & \\textbf{13} \\\\ \\midrule\n\\textbf{Ethereum} & 2 & 10 & 3 & 6 & 0 & 8 & 2 & 2 & 13 & 3 & \\textbf{49} \\\\ \\midrule\n\\textbf{EOSIO} & 0 & 0 & 5 & 0 & 1 & 1 & 1 & 0 & 2 & 15 & \\textbf{25} \\\\ \\midrule\n\\textbf{Total} & \\textbf{2} & \\textbf{10} & \\textbf{10} & \\textbf{8} & \\textbf{6} & \\textbf{9} & \\textbf{3} & \\textbf{5} & \\textbf{15} & \\textbf{19} & \\textbf{87} \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\label{table:category-outliers}\n\\end{table*}\n\n\nTable~\\ref{table:category-outliers} shows the overall results (see Table~\\ref{table:anomalies-1} to Table~\\ref{table:anomalies-3} in the Appendix for details): 87 outliers are identified in total, 13 for Bitcoin, 49 for Ethereum, and 25 for EOSIO.\nAs concluded before, Bitcoin is the most stable platform. Though EOSIO fluctuates in several plots, due to its relatively short history, its number of outliers is below Ethereum's.\n\nAmong the 87 outliers, 45\\% (39) belong to the misbehavior category, and most of them fall into the resource manipulation and spam sub-categories. Ethereum suffers more from malicious exploitation of resources. The main reason is the DoS attack, which was so powerful that it significantly altered many metrics in Oct. 2016. 
Due to its resource model (see \\S\\ref{sec:background:general:consensus}), EOSIO is more likely to suffer from spam.\nThis is partly due to the EOS Global event, which created many useless accounts and transferred EOS between them. The spam advertisements in EOSIO also played an important role (detailed in \\S\\ref{sec:abnormal-behaviors:case-study:spam}).\nMoreover, we observe that, compared to EOSIO, Ethereum is impacted more by killer DApps, which are evenly distributed across categories. This likely indicates that Ethereum is still preferred by DApp developers. The only killer DApp in EOSIO is pornhashbaby, which brought significant traffic until it was shut down.\n\n\n\n\\subsection{Case Studies}\n\\label{sec:abnormal-behaviors:case-study}\nWe next briefly introduce two case studies; the details can be found in Appendix \\S\\ref{sec:appendix:case-study}.\n\n\\subsubsection{EIDOS Event}\n\\label{sec:abnormal-behaviors:case-study:EIDOS}\nFrom Fig.~\\ref{fig:traces} we observe that, in Nov. 2019, the number of traces in EOSIO jumped by 1,009.82\\% compared to the preceding month (reaching 3.21 billion pieces). Surprisingly, over 99\\% of the transactions are related to the EIDOS contract, yet only a few hundred accounts interacted with it. This suggests that billions of transactions were introduced by a small number of addresses.\nWe further calculate the total volume of money transferred to the EIDOS contract, and the average EOS per $T_m$. After a slight increase in Dec. 2019, both of these metrics dropped significantly. In Mar. 2020, only 1.43 million EOS were transferred to EIDOS (around 6.53\\% of the peak). Meanwhile, the average EOS per $T_m$ decreased to $6 \\times 10^{-4}$, while $1 \\times 10^{-4}$ is the minimum allowed transfer amount in EOSIO. 
We conclude that players were becoming rational and centralized, and a number of services have emerged~\\cite{eidos-miner-1, eidos-miner-2} that allow participants to maximize benefits at minimal cost.\nEIDOS also overtook other DApps, e.g., previously popular gambling DApps, and has had a significant negative impact on the entire ecosystem.\n\n\\subsubsection{Spam Advertisement}\n\\label{sec:abnormal-behaviors:case-study:spam}\nDuring the analysis of Fig.~\\ref{fig:alpha-mtg-eos}, we observed two strange spikes in the $\\alpha$ of the outdegree distribution (Mar. and Oct. 2019).\nWe discover that these two outliers are due to \\textit{spam advertisement}.\nSpammers have taken advantage of the resource model in EOSIO, which allows them to send spam advertisements covering huge numbers of accounts at almost no cost. Specifically, the memo field~\\cite{eos-memo} of a transaction can carry free text, and we observe spammers using it to write things like bait-and-switch advertisements.\nUsing $MTG$ and $CIG$, we further propose an automated method to identify spammers (see Appendix \\S\\ref{sec:appendix:case-study:spam} for details). The key idea is that, within a given time frame, a spammer would initiate spam advertisements carrying identical content at the lowest possible cost, so as to cover as many accounts as possible. Thus, we can group together small transactions with identical content to flag spam candidates.\nThrough this, we identify \\textbf{206} distinct spam accounts in EOSIO, including two account families, \\texttt{peostoken} and \\texttt{defind.io}, which are responsible for the spikes in Mar. 2019 and Oct. 2019.\n\n\n\\section{Discussion}\n\n\n\\subsection{Cross-chain Comparison}\n\n\\noindent\n\\textbf{Bitcoin.} Bitcoin is a value-transfer network and does not support features like smart contracts. Moreover, its simple but effective transaction fee mechanism guarantees that it is hard to abuse. 
As a result, Bitcoin has grown steadily and has been hard to manipulate, judging from its number of transfer transactions and its degree distributions.\n\n\n\\noindent\n\\textbf{Ethereum.} Ethereum is still evolving, especially in terms of contract invocation. Prior to 2018, the most dominant transaction in Ethereum was the Ether transfer: the price of Ether incentivized frequent and centralized Ether transfers between users and exchanges. Since then, the emergence of DeFi has increased the number of interactions between smart contracts. Along with the growing number of transactions, the relatively stable ratio of newly created smart contracts also implies the liveness of Ethereum.\n\n\\noindent\n\\textbf{EOSIO.}\nVia DPoS, EOSIO is able to carry hundreds of times more traces than Bitcoin and Ethereum (which adopt PoW).\nPrior to Nov. 2019, though there was a slight decline in the number of transfers and contract invocations, the popularity of gambling DApps attracted many users. However, the appearance of EIDOS nearly collapsed the entire network: the TPS was pushed to its limit, and all other normal transactions suffered heavy congestion.\n\n\n\n\\subsection{Threats to Validity and Future Work}\nThis study carries several limitations.\nFirst, we primarily study the evolution of the ecosystems based on graph analysis and, by definition, the metrics used are limited. Although we have considered the most widely used metrics, more metrics (e.g., global clustering coefficient~\\cite{clustering}, PageRank~\\cite{page-rank}, reciprocity~\\cite{reciprocity}) could be adopted to gain further understanding of the ecosystems.\nSecond, in several cases, we have relied on heuristics and manual validation due to the paucity of ground-truth data. For example, during the outlier analysis, we manually confirmed the underlying reasons for several cases. 
We acknowledge that automated approaches (e.g., via machine learning) could be used to facilitate the process.\nThird, our study relies solely on transaction analysis. We believe additional code analysis (of the smart contracts) could help us to gain a better understanding of the ecosystems.\nFinally, although we have discovered some highly influential attacks and previously unreported scams (e.g., the spam advertisements in EOSIO), we did not consider a number of attacks already known in the community (as they did not have a noticeable impact on the overall ecosystem). \nFor future work, we plan to address the above limitations.\n\n\n\\section{Related Work}\n\\label{sec:related}\n\n\\textbf{Blockchain Transaction Analysis.}\n\\label{sec:related:tx-analysis}\nSome previous works have focused on blockchain transaction analysis for a single blockchain platform, e.g., Bitcoin~\\cite{awan2017blockchain,ober2013structure, ron2013quantitative, houstudy}, Ethereum~\\cite{chen2018understanding, wu2019t, chen2020traveling, kiffer2018analyzing, lee2020measurements, zhao2021temporal}, EOSIO~\\cite{huang2020understanding, perez2020revisiting} and Monero~\\cite{moser2018empirical}.\nOber et al.~\\cite{ober2013structure} and Awan et al.~\\cite{awan2017blockchain} both analyzed the topology and dynamics of the Bitcoin transaction graph.\nChen et al.~\\cite{chen2018understanding} used graph analysis to explore the characteristics of transactions in Ethereum.\nSimilarly, Chen et al.~\\cite{chen2020traveling} and Huang et al.~\\cite{huang2020understanding} also adopted graph analysis, but they focused on the behaviors of ERC20 tokens and on fraudulent activities in EOSIO, respectively.\n\n\\noindent\n\\textbf{Smart Contract \\& DApp Analysis.}\n\\label{sec:related:sc-analysis}\nThere are some works \\cite{wu2019empirical, wang2020identifying} dedicated to exploring the ecosystem of DApps.\nFor example, Wu et al.~\\cite{wu2019empirical} defined the popularity of DApps by some metrics across different 
categories. They measured the reuse of DApps, and the transformation of traditional applications into DApps. Wang et al.~\\cite{wang2020identifying} classified DApps and user behaviors according to the patterns extracted from transactions.\nMoreover, some works~\\cite{suevil, chen2020survey, he2020characterizing, ji2020deposafe, he2021eosafe} have focused on the security issues of DApps across platforms.\nFor example, Chen et al.~\\cite{chen2020survey} conducted a detailed survey about vulnerabilities and defenses related to smart contracts in DApps. Su et al.~\\cite{suevil} and He et al.~\\cite{he2020characterizing} measured attack instances and clone behaviors against DApps, respectively.\n\n\n\\section{Conclusion}\n\nThis paper has presented a longitudinal comparative study of three representative blockchains: Bitcoin, Ethereum and EOSIO. \nWe utilized billions of transaction records to offer a unique insight into their overall ecosystems.\nThese records have revealed complementary trends amongst the three platforms, highlighting their differing evolutions. \nAlthough our findings have revealed some promising trends for these technologies, a number of ``outliers'' have also been identified and analyzed, e.g., abuse by spammers.\nWe believe that our research efforts contribute positively to informing best operational practices and will benefit the wider research community and regulators. 
\n\n\\section{Money Transfer in Bitcoin, Ethereum and EOSIO}\n\\label{sec:appendix:money-transfer}\nFig.~\\ref{fig:utxo} shows the UTXO model adopted by Bitcoin, and Fig.~\\ref{fig:transfer-btc} depicts the corresponding topology of $MTG_{btc}$.\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{0.9\\columnwidth}\n\t\t\\includegraphics[width=\\textwidth]{images\/utxo.pdf}\n\t\t\\caption{UTXO model of Bitcoin.}\n\t\t\\label{fig:utxo}\n\t\\end{subfigure}\n \\begin{subfigure}[t]{0.9\\columnwidth}\n\t\t\\includegraphics[width=\\textwidth]{images\/transfer-btc.pdf}\n\t\t\\caption{Its Structure of Money Transfer Graph.}\n\t\t\\label{fig:transfer-btc}\n\t\\end{subfigure}\n \\caption{UTXO model and its corresponding structure of Money Transfer Graph.}\n \\label{fig:bitcoin}\n\\end{figure}\n\nFig.~\\ref{fig:transfer-eth} and Fig.~\\ref{fig:transfer-eos} respectively illustrate the transferring Ether and EOS in Ethereum and EOSIO.\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{0.9\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/transfer-eth.pdf}\n \\caption{Transferring Ether in Ethereum}\n \\label{fig:transfer-eth}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.9\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/transfer-eos.pdf}\n \\caption{Transferring EOS in EOSIO}\n \\label{fig:transfer-eos}\n \\end{subfigure}\n \\caption{Transferring official tokens.}\n \\label{fig:transfer-token}\n\\end{figure}\n\n\n\\newpage\n\\section{Outliers}\n\\label{sec:appendix:anomalies}\nThe detected outliers and their corresponding sub-category (category is omitted, details in Table~\\ref{table:category-outliers}), as well as the reason, are shown in Table~\\ref{table:anomalies-1} to Table~\\ref{table:anomalies-3}.\n\n\\begin{table*}[h]\n\\caption{The detected outliers and its corresponding sub-category, as well as the specific 
reason.}\n\\centering\n\\resizebox{0.68\\textwidth}{!}{%\n\\begin{tabular}{lccccc}\n\\toprule\n\\textbf{Chain} & \\textbf{Date} & \\textbf{Metric} & \\textbf{Figure} & \\textbf{Sub-category} & \\textbf{DApp \/ Event} \\\\\n\\midrule\n\\multirow{8}{*}{\\textbf{BTC}} & 2012-02 & indegree $\\alpha$ & Fig.~\\ref{fig:alpha-mtg-btc} & Platform & Deepbit \\\\\n & 2015-07 & outdegree $\\alpha$ & Fig.~\\ref{fig:alpha-mtg-btc} & Game & 1Luc* \\\\\n & 2015-12 & \\# WCC & Fig.~\\ref{fig:cc-mtg} & Game & 1Luc* \\\\\n & 2012-02 & degree $\\alpha$ & Fig.~\\ref{fig:alpha-mtg-btc} & Platform & Deepbit \\\\\n & 2012-02 & outdegree $\\alpha$ & Fig.~\\ref{fig:alpha-mtg-btc} & Platform & Deepbit \\\\\n & 2010-11 & \\# traces & Fig.~\\ref{fig:traces-mtg} & Spam & 14mU** \\\\\n & 2013-07 & \\# traces & Fig.~\\ref{fig:traces} & Gambling & Satoshi Dice\n \\\\\n & 2015-07 & \\# traces & Fig.~\\ref{fig:traces} & Gambling & LuckyBit hot wallet\n \\\\\n & 2012-01 & \\# traces & Fig.~\\ref{fig:traces} & Platform & Deepbit\n \\\\\n & 2012-02 & \\# traces & Fig.~\\ref{fig:traces} & Platform & Deepbit \\\\\n & 2015-07 & \\# traces & Fig.~\\ref{fig:traces} & Attack & Spam Attack \\\\\n & 2015-07 & \\# traces & Fig.~\\ref{fig:traces-mtg} & Attack & Spam Attack \\\\\n & 2015-07 & \\# traces & Fig.~\\ref{fig:traces} & Attack & Spam Attack \\\\\n\\midrule\n\\multirow{30}{*}{\\textbf{EOS}} & 2018-10 &$R$ & Fig.~\\ref{fig:pearson-cig} & Gambling & EOSTiger \\\\\n & 2018-10 &$R$ & Fig.~\\ref{fig:pearson-cig} & Gambling & Dice \\\\\n & 2018-10 &$R$ & Fig.~\\ref{fig:pearson-cig} & Gambling & eos sicbo \\\\\n & 2018-10 &$R$ & Fig.~\\ref{fig:pearson-cig} & Gambling & EOS.Win \\\\\n & 2019-04 &$R$ & Fig.~\\ref{fig:pearson-cig} & Spam & EOS Global \\\\\n & 2019-08 &$R$ & Fig.~\\ref{fig:pearson-cig} & Platform & pornhashbaby \\\\\n & 2019-04 & indegree $\\alpha$ & Fig.~\\ref{fig:alpha-cig-eos} & Spam & EOS Global \\\\\n & 2019-04 &$R$ & Fig.~\\ref{fig:pearson-mtg} & Spam & EOS Global \\\\\n & 2019-11 &$R$ & 
Fig.~\\ref{fig:pearson-mtg} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & EIDOS \\\\\n & 2019-04 & \\# WCC & Fig.~\\ref{fig:cc-mtg} & Spam & EOS Global \\\\\n & 2019-10 & outdegree $\\alpha$ & Fig.~\\ref{fig:alpha-mtg-eos} & Spam & defind.io \\\\\n & 2019-04 & \\# traces & Fig.~\\ref{fig:traces-acg} & Spam & EOS Global \\\\\n & 2019-04 & trace ratio & Fig.~\\ref{fig:traces-acg} & Spam & EOS Global \\\\\n & 2019-04 & From Account & Fig.~\\ref{fig:percentage-acg-eos} & Spam & EOS Global \\\\\n & 2019-04 & degree $\\alpha$ & Fig.~\\ref{fig:alpha-acg-eos} & Spam & EOS Global \\\\\n & 2019-04 & outdegree $\\alpha$ & Fig.~\\ref{fig:alpha-acg-eos} & Spam & EOS Global \\\\\n & 2019-04 & \\# SCC & Fig.~\\ref{fig:cc-acg} & Spam & EOS Global \\\\\n & 2019-11 & degree $\\alpha$ & Fig.~\\ref{fig:alpha-cig-eos} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & EIDOS \\\\\n & 2019-10 & degree $\\alpha$ & Fig.~\\ref{fig:alpha-mtg-eos} & Spam & defind.io \\\\\n & 2019-10 & From Account & Fig.~\\ref{fig:percentage-acg-eos} & Spam & defind.io \\\\\n & 2019-11 & \\# WCC & Fig.~\\ref{fig:cc-cig} & Token & krownairdrop \\\\\n & 2019-05 & \\# WCC & Fig.~\\ref{fig:cc-mtg} & Spam & EOS Global \\\\\n & 2019-05 & \\# SCC & Fig.~\\ref{fig:cc-mtg} & Spam & EOS Global \\\\\n & 2019-05 & \\# SCC & Fig.~\\ref{fig:cc-cig} & Tool & AirDropsDAC \\\\\n & 2018-11 & $R$ & Fig.~\\ref{fig:pearson-mtg} & Gambling & BET24 \\\\\n\\midrule\n\\multirow{7}{*}{\\textbf{ETH}} & 2016-06 & \\# WCC & Fig.~\\ref{fig:cc-mtg} & Attack & The DAO \\\\\n & 2016-10 & \\# traces & Fig.~\\ref{fig:traces-mtg} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2016-10 & From Smart Contract & Fig.~\\ref{fig:percentage-mtg-eth} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack 
\\\\\n\\bottomrule\n\\multicolumn{6}{l}{\\begin{tabular}[c]{@{}l@{}}* 1LuckyR1fFHEsXYyx5QK4UFzv3PEAepPMK\\\\ ** 14mUbjiofYY2F6h3ZGUSoTo3kxdqtajVTp\\end{tabular}} \n\\end{tabular}%\n}\n\\label{table:anomalies-1}\n\\end{table*}\n\n\\newpage\n\\begin{table*}[h]\n\\caption{The detected outliers and its corresponding sub-category, as well as the specific reason.}\n\\centering\n\\resizebox{0.68\\textwidth}{!}{%\n\\begin{tabular}{lccccc}\n\\toprule\n\\textbf{Chain} & \\textbf{Date} & \\textbf{Metric} & \\textbf{Figure} & \\textbf{Sub-category} & \\textbf{DApp \/ Event} \\\\\n\\midrule\n\\multirow{50}{*}{\\textbf{ETH}} & 2018-08 & From Smart Contract & Fig.~\\ref{fig:percentage-mtg-eth} & Gambling & Last Winner \\\\\n & 2016-08 &$R$ & Fig.~\\ref{fig:pearson-cig} & Exchange & ReplaySafeSplit \\\\\n \t& 2017-05 &$R$ & Fig.~\\ref{fig:pearson-cig} & Tool & ENS \\\\\n & 2018-08 &$R$ & Fig.~\\ref{fig:pearson-cig} & Gambling & Last Winner \\\\\n & 2019-09 &$R$ & Fig.~\\ref{fig:pearson-cig} & DeFi & NEST Protocol \\\\\n & 2016-10 & From Smart Contract & Fig.~\\ref{fig:percentage-cig-eth} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2016-10 & \\# traces \t& Fig.~\\ref{fig:traces} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular}\t& DoS Attack\\\\\n & 2016-10 & \\# traces & Fig.~\\ref{fig:traces-cig} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2019-11 & trace ratio & Fig.~\\ref{fig:traces-cig} & Game & Gods Unchained \\\\\n & 2018-01 & \\# SCC & Fig.~\\ref{fig:cc-cig} & Attack & Bittrex Hacked \\\\\n & 2019-02 & \\# SCC & Fig.~\\ref{fig:cc-cig} & Game & CryptoKitties \\\\\n & 2019-02 & \\# SCC & Fig.~\\ref{fig:cc-cig} & Token & ethairdrop \\\\\n & 2019-09 & \\# SCC & Fig.~\\ref{fig:cc-cig} & DeFi & USDT \\\\\n & 2018-05 & indegree $\\alpha$ & Fig.~\\ref{fig:alpha-cig-eth} & Token & EOS \\\\\n & 2018-05 & indegree 
$\\alpha$ & Fig.~\\ref{fig:alpha-cig-eth} & Token & NePay \\\\\n & 2019-06 & degree $\\alpha$ & Fig.~\\ref{fig:alpha-cig-eth} & Token & MGC TOKEN \\\\\n & 2019-06 & degree $\\alpha$ & Fig.~\\ref{fig:alpha-cig-eth} & Token & VOKEN \\\\\n & 2016-10 & From Smart Contract & Fig.~\\ref{fig:percentage-acg-eth} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2016-10 & From EOA & Fig.~\\ref{fig:percentage-acg-eth} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2018-07 & From EOA & Fig.~\\ref{fig:percentage-acg-eth} & Spam & 0x004b * \\\\\n & 2019-05 & From EOA & Fig.~\\ref{fig:percentage-acg-eth} & Spam & 0x8c4b ** \\\\\n & 2019-09 & From EOA & Fig.~\\ref{fig:percentage-acg-eth} & Spam & 0x8c4b ** \\\\\n & 2016-10 & \\# traces & Fig.~\\ref{fig:traces-acg} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2016-10 & degree $\\alpha$ & Fig.~\\ref{fig:alpha-acg-eth} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2016-10 & outdegree $\\alpha$ & Fig.~\\ref{fig:alpha-acg-eth} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2016-10 & \\# SCC & Fig.~\\ref{fig:cc-acg} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n & 2019-10\t & \\# WCC\t & Fig.~\\ref{fig:cc-mtg} \t& Gambling\t& FairWin \\\\\n\\bottomrule\n\\multicolumn{6}{l}{\\begin{tabular}[c]{@{}l@{}}* 0x004bd3562a42c8a7394794849b8ff5ad71c527b2\\\\ ** 0x8c4b7870fc7dff2cb1e854858533ceddaf3eebf4\\end{tabular}} \n\\end{tabular}%\n}\n\\label{table:anomalies-2}\n\\end{table*}\n\n\\newpage\n\\begin{table*}[h]\n\\caption{The detected outliers and its corresponding sub-category, as well as the specific 
reason.}\n\\centering\n\\resizebox{0.68\\textwidth}{!}{%\n\\begin{tabular}{lccccc}\n\\toprule\n\\textbf{Chain} & \\textbf{Date} & \\textbf{Metric} & \\textbf{Figure} & \\textbf{Sub-category} & \\textbf{DApp \/ Event} \\\\\n\\midrule\n\\multirow{20}{*}{\\textbf{ETH}} & 2016-11\t& \\# SCC\t& Fig.~\\ref{fig:cc-mtg} \t& Exchange\t& ReplaySafeSplit\\\\\n & 2017-01\t& \\# WCC\t& Fig.~\\ref{fig:cc-acg} \t& Exchange\t& Yunbi \\\\\n & 2017-08 &\t\\# WCC \/ \\# nodes\t & Fig.~\\ref{fig:cc-acg} &Exchange\t& Bittrex \\\\\n & 2018-05\t& degree $\\alpha$ \t& Fig.~\\ref{fig:alpha-cig-eth} \t& Token\t& NePay \\\\\n & 2016-05\t& degree $\\alpha$ \t& Fig.~\\ref{fig:alpha-mtg-eth}\t \t& Exchange & ShapeShift \\\\\n & 2016-05\t& outdegree $\\alpha$\t\t& Fig.~\\ref{fig:alpha-mtg-eth} \t\t & Token\t & TheDAO\\\\\n & 2018-01\t& \\# traces \t& Fig.~\\ref{fig:traces-mtg} \t & Exchange & Binance \\\\\n \t& 2018-01\t& trace ratio \t& Fig.~\\ref{fig:traces-mtg} \t & Exchange & Binance \\\\\n \t& 2018-01\t& trace ratio \t& Fig.~\\ref{fig:traces-mtg} \t & Game &\tCryptoKitties \\\\\n \t& 2018-05 & \\# SCC & Fig.~\\ref{fig:cc-mtg} \t & Exchange & Binance \\\\\n \t& 2018-11\t& \\# WCC\t\t& Fig.~\\ref{fig:cc-mtg} \t \t& Game\t& MegaCryptoPolis \\\\\n \t& 2019-02 &\t\\# traces \t& Fig.~\\ref{fig:traces} \t & Game &\tDragonereum \\\\\n \t& 2019-02 &\t\\# traces \t& Fig.~\\ref{fig:traces-cig} \t & Game &\tDragonereum \\\\\n \t& 2016-10 & \\# SCC & Fig.~\\ref{fig:cc-cig} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n \t& 2016-10 & \\# WCC \/ \\# nodes & Fig.~\\ref{fig:cc-acg} & \\begin{tabular}[c]{@{}c@{}}Malicious\\\\ Exploitation\\\\ on Resource\\end{tabular} & DoS Attack \\\\\n \t& 2020-03 & \t\\# SCC \t & Fig.~\\ref{fig:cc-acg}\t\t& Token &\tGasToken.io \\\\\n \t& 2016-02 & $R$ & Fig.~\\ref{fig:pearson-cig} & Tool & EthereumAlarmClock \\\\\n \t& 2016-03 & $R$ & Fig.~\\ref{fig:pearson-mtg} & Exchange & ShapeShift \\\\\n \t& 2016-03 & $R$ & 
Fig.~\\ref{fig:pearson-mtg} & Exchange & Poloniex \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\label{table:anomalies-3}\n\\end{table*}\n\n\\newpage\n\\section{EIDOS \\& Spam Advertisements}\n\\label{sec:appendix:case-study}\nIn \\S\\ref{sec:abnormal-behaviors:case-study}, we briefly introduced our conclusions from studying the EIDOS project and the spam advertisement behavior in EOSIO.\nHere, we provide the full details.\n\n\\subsection{EIDOS Event}\n\\label{sec:appendix:case-study:EIDOS}\nAs we briefly explained in \\S\\ref{sec:background:blockchain-evolution}, EIDOS attracts users to transfer EOS to it incessantly. The mechanism of EIDOS is illustrated in Fig.~\\ref{fig:eidos-mechanism}.\n\n\\begin{figure*}[h]\n\\centerline{\\includegraphics[width=0.7\\textwidth]{images\/eidos-mechanism.pdf}}\n\\caption{The mechanism of EIDOS.}\n\\label{fig:eidos-mechanism}\n\\end{figure*}\n\nAs Fig.~\\ref{fig:eidos-mechanism} shows, once a user transfers an arbitrary amount of EOS via $T_m$ to its main contract, \\texttt{eidosonecoin} (see steps 1 to 3), the EOS is immediately returned by another $T_m$ (see steps 4 to 6). Then, EIDOS initiates a $T_c$ that invokes the \\texttt{transfer} function in its own smart contract (see steps 9 and 10). In the \\texttt{transfer} function, EIDOS increases the user's balance of EIDOS tokens by an amount unrelated to the amount of EOS in the previous $T_m$. Users can spend these EIDOS tokens in exchanges or other contracts that accept the EIDOS token.\nNote that the EIDOS token is issued by the \\texttt{issue} function, and part of the issued tokens is transferred to its official team account, \\texttt{eidosoneteam} (see steps 7 and 8). Therefore, there are also another two $T_c$s, which invoke \\texttt{issue} and \\texttt{transfer}. 
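The reward flow described above can be sketched as a small Python model. This is purely illustrative: the per-call reward and the team's share are hypothetical placeholders, since the actual numbers used by \texttt{eidosonecoin} are not given here.

```python
# Illustrative model of the EIDOS reward flow (NOT the real contract code).
# REWARD_PER_CALL and TEAM_SHARE are hypothetical placeholders.

REWARD_PER_CALL = 100.0   # EIDOS issued per incoming T_m, independent of the EOS amount
TEAM_SHARE = 0.2          # fraction of each issuance sent to eidosoneteam (steps 7-8)

class EidosModel:
    def __init__(self):
        self.balances = {"eidosoneteam": 0.0}

    def on_transfer(self, sender: str, eos_amount: float) -> float:
        """Handle one incoming T_m: refund the EOS, then credit EIDOS tokens."""
        refund = eos_amount                      # steps 4-6: EOS returned in full
        team_cut = REWARD_PER_CALL * TEAM_SHARE  # steps 7-8: team's share of issuance
        self.balances["eidosoneteam"] += team_cut
        # steps 9-10: the user's credit is unrelated to the invested EOS amount
        self.balances[sender] = self.balances.get(sender, 0.0) + REWARD_PER_CALL - team_cut
        return refund

model = EidosModel()
model.on_transfer("alice", 1.0)   # invests 1 EOS
model.on_transfer("bob", 1e-4)    # invests the minimum allowed amount
# Both users receive the same reward, so sending the minimum is the dominant strategy.
assert model.balances["alice"] == model.balances["bob"]
```

Because the reward is independent of the invested amount, the model reproduces the rational-player behavior discussed in this appendix: the dominant strategy is to send the minimum allowed $1 \times 10^{-4}$ EOS per $T_m$.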
\nHowever, the issuance of EIDOS is not executed each time; thus, the ratio of $T_m$ to $T_c$ is roughly two to one, as illustrated by the `trace ratio' in Fig.~\\ref{fig:traces-mtg} and Fig.~\\ref{fig:traces-cig}.\n\nAccording to our statistics, in November 2019, the number of traces in EOSIO jumped by 1009.82\\% over the preceding month, reaching 3.21 billion. Therefore, we further extract all the traces related to EIDOS, and calculate several related metrics as shown in Fig.~\\ref{fig:eidos-statistics}.\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{0.49\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/eidos-user.pdf}\n \\caption{Number of users.}\n \\label{fig:eidos-user}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/eidos-trace.pdf}\n \\caption{Number of $T_m$.}\n \\label{fig:eidos-trace}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/eidos-volume.pdf}\n \\caption{Transferred volume in EOS.}\n \\label{fig:eidos-volume}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/eidos-eos-per-trace.pdf}\n \\caption{Average transferred EOS per $T_m$.}\n \\label{fig:eidos-eos-per-trace}\n \\end{subfigure}\n \\caption{The statistics of the EIDOS project.}\n \\label{fig:eidos-statistics}\n\\end{figure}\n \nWe see that the number of users decreased dramatically in Mar. 2020: it dropped to 643 (a decline of around 95.13\\% compared to Nov. 2019). Meanwhile, the number of related $T_m$ increased by 150.43\\%. These two opposite changes indicate that the players of EIDOS tended to become specialized. In other words, some accounts started to provide a service that allows users to invest their own resources and get dividends back. 
Indeed, some such service providers existed~\\cite{eidos-miner-1, eidos-miner-2}. This can also be observed from the $\\alpha$ of the outdegree distribution during the corresponding time period (see Fig.~\\ref{fig:alpha-mtg-eos}).\n\nMoreover, these players were becoming \\textit{rational}. As we have explained in \\S\\ref{sec:abnormal-behaviors:case-study:EIDOS}, the amount of the returned EIDOS tokens has nothing to do with the amount of invested EOS. Thus, as we can see from Fig.~\\ref{fig:eidos-volume} and Fig.~\\ref{fig:eidos-eos-per-trace}, after an ephemeral increase in the volume of invested EOS and the amount of EOS per invested trace, the two curves start to drop sharply. In Mar. 2020, the average EOS per trace was as low as $6 \\times 10^{-4}$, while $1 \\times 10^{-4}$ is the minimum transfer amount allowed in EOSIO, and thus the most rational choice.\nIndeed, according to our data, 99.80\\% of $T_m$ in Nov. 2019 were valued below 1 EOS, whereas in Oct. 2019 this percentage was only 74.81\\%.\nTherefore, we conclude that EIDOS has captured the whole EOSIO ecosystem via meaningless $T_m$ and $T_c$ transactions, thereby reducing the TPS available to other users.\n\n\n\\subsection{Spam Advertisement}\n\\label{sec:appendix:case-study:spam}\nDuring the analysis of Fig.~\\ref{fig:alpha-mtg-eos}, we have observed two strange spikes, located in Mar. 2019 and Oct. 2019, in the $\\alpha$ of the outdegree distribution. 
After applying the method mentioned in \\S\\ref{sec:abnormal-behaviors:method} and performing manual verification, we found that these two outliers resulted from \\textit{spam advertisements}.\n\nTo be specific, taking advantage of the memo field in EOSIO transactions~\\cite{eos-memo}, where the initiator can freely write anything, an account or a smart contract is able to initiate large numbers of $T_m$, each carrying $1 \\times 10^{-4}$ EOS and a bait-and-switch advertisement, to cover as many users as possible.\n\nBased on our investigation, we proposed a detection methodology. Specifically, our methodology traverses the $MTG$ and $CIG$ for each month, and detects whether node $N$ obeys the following rules:\n\\begin{enumerate}\n\t\\item The average amount of EOS transferred from $N$ to each user is no more than $x$;\n\t\\item $N$ initiates at most $y$ $T_m$ to each user within a single month;\n\t\\item More than $z$ users who received $T_m$ from $N$ did not in turn initiate $T_m$ or $T_c$ to $N$;\n\t\\item The memo carries luring content, typically an exaggerated statement and a URL.\n\\end{enumerate}\n\nThe thresholds $x$, $y$ and $z$ are heuristically set to 0.001, 30, and 500, which are relatively conservative and may lead to false positives.\nThus, we further conduct manual verification focusing on the memo content and the patterns of behavior.\nWe discover \\textbf{206} distinct accounts in total that have initiated spam advertisements for at least one month. The distribution of the first appearance time of these 206 accounts is shown in Fig.~\\ref{fig:ad-distribution}.\nWe can see that the two obvious spikes correspond to the outliers located in Mar. 2019 and Oct. 
2019, respectively.\nHowever, apart from these two spikes and the peak in late 2018 (caused by gambling DApps), the advertisement accounts are not very active, especially after the appearance of EIDOS.\n\nAfter investigation, we find that these two spikes are caused by two clusters of advertisement accounts: the \\texttt{peostoken} family and the \\texttt{defind.io} family.\nTherefore, taking advantage of our $ACG$, we have built a family tree to explore the creation relationships, which is shown in Fig.~\\ref{fig:ad-family-tree}.\nWe observe that there are clusters (framed by red boxes in the figure) in which the proportion of advertisement nodes is relatively high. We have marked the \\texttt{peostoken} family and the \\texttt{defind.io} family in Fig.~\\ref{fig:ad-family-tree}, which were the reasons behind the two outliers. These two clusters are composed of accounts with multi-level creation relationships.\n\n\n\\begin{figure*}[h]\n\\centerline{\\includegraphics[width=0.7\\textwidth]{images\/ad-distribution.pdf}}\n\\caption{The distribution of the 206 detected advertisement accounts according to their first appearance time.}\n\\label{fig:ad-distribution}\n\\end{figure*}\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics[width=\\columnwidth]{images\/ad-family-tree.pdf}}\n\\caption{The family tree of spam advertisement related accounts, where the red nodes are advertisement nodes.}\n\\label{fig:ad-family-tree}\n\\end{figure}\n\n\nIn addition, we have observed some interesting characteristics of spam advertisement accounts:\n\\begin{itemize}\n\t\\item The memo content is mainly in English and Chinese, with a small percentage in Korean and Japanese.\n\t\\item The same account may send out memos with different content in different time periods, and even change the language used in the memo.\n\t\\item Most of the advertisements promote their own gambling games, and some promote their own tokens or services (e.g., airdrop services). 
Moreover, advertisements promoting DeFi-related DApps have been appearing since Sep. 2019.\n\t\\item \\texttt{eostribalert} and \\texttt{eostribealrt} are a pair of accounts that launch advertisements. The memo of the latter contains a URL and convinces users to apply for TLOS tokens. Immediately afterwards, the former also started sending out a large number of $T_m$ containing a memo stating that the URL is fraudulent. However, the numbers of $T_m$ they sent differ greatly: 5.3K and 101K, respectively.\n\\end{itemize}\n\nWe conclude that spam advertisement behavior was popular in EOSIO. Spammers mainly target users who speak English and Chinese. Moreover, the advertisement accounts are usually related through account creation behavior.\n\n\n\\end{document}\n\\endinput\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\\label{Sec:Intro}\n\nThe search for life on other worlds looms large in NASA's 30-year Strategic Plan.\\cite{2014arXiv1401.3741K,2015arXiv150704779D} To enable this, NASA is studying a Large UV-Optical-IR Surveyor (LUVOIR) that would use advanced coronagraphs for starlight suppression.\\cite{2014arXiv1401.3741K} ATLAST is one concept for LUVOIR. Alternatively, a Habitable Exoplanet Imaging Mission\\cite{NASATownhall:2015tm} has been proposed; this might pair a smaller-aperture space telescope with a starshade flying in formation, or use an off-axis non-segmented telescope with a coronagraph. In any case, better detectors than exist today are highly desirable.\n\nOur emphasis in this paper is on ATLAST\\cite{Bolcar:2015td}, and specifically on ATLAST's detectors for spectroscopic biosignature characterization in the VISIR (hereafter just ``biosignature characterization''). 
Although not discussed here, other ATLAST technology needs include precision large-scale optics, ultra-stable structures, starlight suppression, and mirror coatings (see Ref.~\\citenum{Bolcar:2015td}). This emphasis sets aside, but does not diminish, the importance of the UV to ATLAST's overall mission. Within the ATLAST study, detector and other technology development is envisioned across ATLAST's $90~\\textrm{nm}-2.5~\\mu\\textrm{m}$ ``stretch goal'' wavelength range.\\cite{Bolcar:2015td}\n\nThis paper closely follows the SPIE presentation. We begin with an introduction to biosignature characterization, and show that biosignature characterization is ultra-low background astronomy. The extremely low background count rates motivate the need for further work on VISIR detectors. In Sec.~\\ref{ATLASTNeeds}, we briefly summarize ATLAST's VISIR detector needs in the context of existing technologies.\n\nSec.~\\ref{DetStatus} discusses what are arguably the two most mature detector technologies for biosignature characterization in greater detail. These are electron multiplying CCDs (EMCCDs) for the visible and HgCdTe avalanche photodiode (APD) arrays for the VISIR. We also include a more speculative discussion of what might be achieved in conventional HgCdTe arrays with appropriately optimized readout integrated circuits (ROIC). 
\n\n\\section{BIOSIGNATURE CHARACTERIZATION}\label{BioSig}\n\nBiosignature characterization uses low-resolution spectroscopy, $R=\\lambda\/\\Delta\\lambda>70$ (required) or $R>150$ (goal), to characterize atmospheric features that are either required for life, or caused by it.\\footnote{For purposes of this discussion, the word ``life'' refers to life as we know it on earth today.} Fig.~\\ref{EarthAsExoplanet} shows several important biosignatures overlaid on a spectrum of earth as it would appear if seen as an exoplanet.\n\nTo make this spectrum, Turnbull~\\etal\\cite{2006ApJ...644..551T} observed the night side of the moon and used modeling to solve for the earth's contribution as it would appear to a distant observer. We define a life detection as consisting of: (1) a rocky planet, (2) with water vapor, (3) a primary biosignature, and (4) a confirming biosignature to rule out false positives. Lacking a confirming biosignature, one could attempt to increase the statistical significance of a result by resolving the temporal dependence of a feature. Arguments for a biological source could also be strengthened by placing the detection in a more comprehensive geological and astrophysical context, by measuring other atmospheric gases, including ${\\rm CO_2}$, and by characterizing the host star's energy distribution.\n\nWith regard to false positives, methane is thought to be particularly important because it is difficult to simultaneously maintain significant concentrations of oxygen, ozone, and methane. Non-equilibrium concentrations are most straightforwardly explained by biological processes. The methane feature at 2.32~$\\mu$m is unfortunately blended with water vapor. However, there is another methane feature between 3~$\\mu$m and 3.5~$\\mu$m that might be better if the observatory could observe it. The spectrum shows a few other features, notably ${\\rm CO_2}$ and ${\\rm O_4}$. 
Although these features do not provide as much information as the primary and secondary biosignatures, they can still be useful, especially when no confirming biosignature is available.\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.8\\columnwidth]{f10.eps}\n\\caption{This figure overlays a number of important biosignatures on Turnbull~\\etal's\\cite{2006ApJ...644..551T} spectrum of earth seen as an exoplanet. In addition to the features shown here, there is a strong $\\textrm{O}_3$ bandhead at about 260~nm that is considered a primary biosignature. The vegetation red edge (VRE) is caused by chlorophyll from plants. The individual spectral features are discussed in the text.\\label{EarthAsExoplanet}}\n\\end{center}\n\\end{figure}\n\n\\section{ULTRA-LOW BACKGROUND}\\label{ULB}\n\nEven using a $\\geq 8~\\textrm{m}$ space telescope, biosignature characterization is extreme ultra-low background astronomy, potentially requiring days to observe each exoEarth candidate. Consider a simple toy model with these assumptions: a perfect coronagraph, 25\\% efficient integral field unit (IFU) spectrograph, $\\lambda=550$~nm, pixel size $=0.7\\times 1.22\\lambda\/D$, $R=150$, and the background is $3\\times$ the earth's zodiacal light. With these assumptions, the background count rate is $<0.001~\\textrm{cts}~\\textrm{s}^{-1}~\\textrm{pix}^{-1}$. More sophisticated models that include the effects of imperfect coronagraphs and simulated exoEarths reach the same conclusion: biosignature characterization is extremely photon starved.\\cite{Stark:2015er} \n\nFor such extremely low count rates, a single photon detector (SPD) is clearly preferred. More than merely photon counting, an SPD counts individual photons without adding appreciable noise from any source. The needed SPD combines high QE, zero read noise, ultra-low dark current, and ultra-low spurious count rate. In an EMCCD, clock induced charge (CIC) is one example of spurious counts that it would be beneficial to reduce. 
In IR APD arrays, glow from non-optimized ROICs is another example of spurious counts that it would be beneficial to eliminate. We discuss both EMCCDs and IR APD arrays in more detail later.\n\n\\section{ATLAST Detector Needs}\\label{ATLASTNeeds}\n\nThe ATLAST technology development plan has been discussed elsewhere at this conference.\\cite{Bolcar:2015td} Tab.~\\ref{ReqTab} summarizes the ATLAST detector needs and ``technology gaps''. Because detectors for the UV through near-IR are equally important to the ATLAST mission, we show them all here. However, this presentation is focused specifically on the VISIR, which we highlight with a red box. The grayed out technologies are no less important to the mission.\n\\begin{table}[t]\n\\begin{center}\n\\caption{Detector Technology Components and Identified ``Gaps''$\\rm ^a$}\\label{ReqTab}\n\\includegraphics[width=\\columnwidth]{f21.eps}\n\\end{center}\n\\end{table}\n\nBroadly speaking, the need is for $\\textrm{QE}>80\\%$ SPDs from 400~nm through 1.8~$\\mu$m (2.5~$\\mu$m goal). The $\\rm 2K\\times 2K$ pixel format is needed if used with an IFU. For space flight, the detectors need to be radiation hard in the L2 radiation environment and able to survive launch loads and vibration. \n\nThe ATLAST study team has indicated a strong preference for non-cryogenic detectors if they can enable the science. This is driven by goals that include: (1) simplifying the system engineering, (2) simplifying the integration and test flow, and (3) completely retiring the risks associated with cooling the detectors to $\\textrm{T}\\sim 100~\\textrm{mK}$.\n\nCoronagraphs capable of achieving contrasts of $10^{-10}$ require wavefront error stability at the level of tens of picometers. Exported vibrations from a conventional cryocooler would present an obvious threat to achieving this. If cryogenic detectors are to be used, then cooling technology development is needed to provide essentially zero vibration cooling. 
If the cooling challenges could be creatively overcome, then cryogenic detectors including microwave kinetic inductance devices (MKID) and transition edge sensor (TES) arrays might become attractive. \n\nOnce cooled, both TESs and MKIDs already function as true SPDs with built-in energy resolution. Both MKIDs\\cite{2013PASP..125.1348M} and TESs\\cite{1999ApJ...521L.153R,Romani:2001wl} have been used for refereed astrophysics publications. For biosignature characterization, both would require further development to improve parameters including their VISIR energy resolution and the efficiency of coupling light to detector elements. However, since this publication is about ATLAST, we defer further discussion of cryogenic detectors to a later publication.\n\nConsistent with ATLAST's preference for non-cryogenic detectors if possible, we take it as a requirement that the detectors will be operated at a temperature that can be achieved using only passive cooling, $\\textrm{T}\\gtrsim 30~\\textrm{K}$. In early JWST studies, this emerged as a practical detector temperature that could be achieved with margin at L2.\n\n\\section{Candidate VISIR Detector Technologies for ATLAST}\\label{DetTech}\n\nAlthough no completely satisfactory VISIR detector candidate exists for ATLAST today, Tab.~\\ref{DetCandidates} summarizes a number of promising technologies. In making this list, we limited consideration to detectors that we believe to be at least NASA TRL-3. This unavoidably leaves some lower TRL, but nevertheless promising, technologies off the list. We encourage all efforts that aim to meet the needs outlined in Sec.~\\ref{ATLASTNeeds}, even if the specific technology does not appear in this table. Tab.~\\ref{DetCandidates} includes a few detectors that we will not be discussing further here because they would operate at $\\textrm{T}\\leq 30~\\textrm{K}$.
These are MKIDs, TESs, superconducting nanowire single photon detectors (SNSPD), and Si:As hybrids.\n\\begin{table}[t]\n\\begin{center}\n\\caption{ATLAST VISIR Detector Candidates}\\label{DetCandidates}\n\\includegraphics[width=0.65\\columnwidth]{f31.eps}\n\\end{center}\n\\end{table}\n\nTab.~\\ref{DetCandidates} attempts to condense a wide trade space into a simple graphic for presentation purposes. Both dark and light blue represent existing technologies that we believe hold significant promise. Dark blue is arguably higher TRL than light blue for biosignature characterization. Yellow indicates a technology that we did not discuss to comply with presentation time limits, but that nevertheless may hold promise for further investigation.\n\nTwo of the technologies that we discuss in depth, EMCCDs and IR APDs, are both shaded dark blue. Of these, EMCCDs are currently closer to meeting performance requirements in the visible. IR APDs are the most mature non-cryogenic candidate spanning the VISIR. Although HgCdTe hybrids are shaded yellow in Tab.~\\ref{DetCandidates}, we also discuss these in Sec.~\\ref{DetStatus} because we plan to investigate them further in our labs at Goddard.\n\n\\section{Status of a Few Detector Candidates}\\label{DetStatus}\n\n\\subsection{EMCCD Status}\\label{EMCCDStatus}\n\nFor over a decade, EMCCDs have been leading candidates for low background photon counting in the visible. Starting in the early 2000s, several groups have explored individual photon counting with EMCCDs. In 2004, Daigle~\\textit{et al.}\\cite{Daigle:2004jka} studied how an e2v CCD97 camera, ``operating in pure photon counting mode would behave based on experimental data.'' In 2006, Wen~\\textit{et al.}\\cite{Wen:2006gv} evaluated an e2v CCD201 for space astronomy and published images of a test pattern showing that the EMCCD operated as a photon counter. 
Over the ensuing decade, steady progress has been made, and today it is possible to buy a commercial EMCCD camera from NuVu Cameras that uses shaped clocks and high readout rates to achieve $\\rm CIC<0.002~cts~pix^{-1}~frame^{-1}$. EMCCDs have been baselined for WFIRST's coronagraph and several presentations at this conference discuss WFIRST's EMCCD efforts.\\cite{Harding:2015tz,Bush:2015uf}\n\nWhen new and un-degraded by the space radiation environment, EMCCDs are arguably able to meet even ATLAST's challenging performance needs. However, like any conventional n-channel CCD, they will degrade when irradiated. This is a consequence of the phosphorus that is used to dope the n-type channels. Radiation damage, including charge transfer efficiency degradation and pixel operability degradation, has been one of the major reasons for replacing the Hubble Space Telescope's (HST) CCDs. Understanding how radiation affects EMCCDs is important to both WFIRST and ATLAST. Although the ATLAST detector requirements will ultimately be more challenging than those for WFIRST, WFIRST nevertheless provides a valuable early opportunity to understand the issues and address them. For WFIRST, JPL has begun radiation testing and mitigation studies.\\cite{Bush:2015uf}\n\nAlthough EMCCDs are promising detectors for ATLAST, more work in selected areas would be very beneficial. These include efforts aimed at: (1) improving radiation tolerance, (2) further reducing clock induced charge, and (3) improving the red QE from about 850~nm to the bandgap wavelength. Any improvements in radiation tolerance will lead to longer usable life at L2. Further reduction in CIC is important for ATLAST because it is currently a major component of the noise budget.
Improving the red QE is important because the strongest water line that is in band for a silicon detector is found at about 950~nm, where the QE of conventional CCDs tends to be falling rapidly.\n\n\\subsection{HgCdTe Photodiode Status}\\label{MCTStatus}\n\nCompared to many other detector materials, HgCdTe has shown very good QE from 400~nm through 2.5~$\\mu$m and beyond (Fig.~\\ref{QE}). For example, the James Webb Space Telescope's (JWST) near-IR detectors achieve $\\rm QE>70\\%$ from $0.6-1~\\mu\\rm m$, and $\\rm QE>80\\%$ from $1-2.5~\\mu$m. For non-astronomical applications, the major vendors have delivered HgCdTe detectors that function at wavelengths at least as short as 400~nm. This impressive QE and high overall maturity begs the question, could conventional HgCdTe photodiode arrays achieve much lower noise than they do today if paired with the right readout approach?\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.8\\columnwidth]{f42.eps}\n\\caption{AR coated HgCdTe has demonstrated $\\rm QE > 70\\%$ from $0.6-1~\\mu\\rm m$ and $\\rm QE > 80\\%$ from $1-2.5~\\mu\\rm m$ for JWST. The major vendors claim that using optimized designs, they can extend this performance to about 400~nm. We show the full potential range here, although the QE performance from $400-600$~nm needs to be confirmed in an astronomical detector.\\label{QE}}\n\\end{center}\n\\end{figure}\n\nSince the mid-1980s, most low background astronomy arrays have used a source-follower per detector architecture (SFD; Fig.~\\ref{SFD}). The SFD architecture has the advantages that it is simple, low power, low glow (when properly designed), and has met performance requirements up to and including those for WFIRST. A major factor driving ROIC design up to the present day has been the need to multiplex a large number of pixels out through a small number of video outputs. 
This necessitates very wide measurement bandwidth in the video electronics to reproduce the complicated waveform as the output changes from pixel to pixel.\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.5\\columnwidth]{f50.eps}\n\\caption{Since the mid 1980s, most HgCdTe arrays for low background astronomy have used an SFD architecture to multiplex many pixels onto a few video outputs. However, the overall SFD system may not be optimal for achieving the lowest possible noise. To understand the full potential of HgCdTe photodiode arrays as ultra-low noise detectors, it would be helpful to better understand the noise that originates in: (a) the photodiode itself, (b) the resistive contact, (c) the pixel source-follower, and (d) the output source follower if one is used. Armed with a comprehensive understanding of the noise components, it might be possible to design other ROIC architectures to achieve the noise floor that is set by the photodiode alone. This figure is based closely on a corresponding figure from Ref.~\\citenum{Loose:2003vh}.\\label{SFD}}\n\\end{center}\n\\end{figure}\n\nA good start toward understanding the full potential of HgCdTe photodiode arrays as ultra-low noise detectors would be detailed characterization of existing JWST and WFIRST arrays aimed at separating the noise contributions from elements $a-d$ of Fig.~\\ref{SFD}. The aim would be an itemized noise budget rather than the lumped ``read noise'' that is conventionally reported.\n\nAlthough SFD arrays are well adapted to many kinds of astronomy, the current SFD design may not be optimal for achieving the lowest possible noise. The fundamental noise floor of, \\eg, a JWST HgCdTe photodiode (a) is potentially of order $\\sqrt{i_d t}$, where $i_d$ is the dark current and $t$ is integration time. The JWST NIRCam arrays have $i_d\\sim 0.001~e^-~s^{-1}~\\textrm{pixel}^{-1}$.
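The $\sqrt{i_d t}$ floor quoted above is easy to evaluate numerically; the following is a minimal sketch (the 24-hour integration time is an illustrative choice, not a value from the text):

```python
import math

def dark_noise_floor(i_d_e_per_s: float, t_s: float) -> float:
    """Shot-noise floor (electrons, rms) set by dark current alone,
    assuming Poisson statistics: sigma = sqrt(i_d * t)."""
    return math.sqrt(i_d_e_per_s * t_s)

# JWST NIRCam-like dark current, one-day integration (assumed duration)
i_d = 0.001          # e-/s/pixel, as quoted for NIRCam
t = 24 * 3600.0      # s
print(f"{dark_noise_floor(i_d, t):.2f} e- rms")  # ~9.30 e- over 24 h
```

Even over a full day, the dark-current-limited noise is only of order ten electrons per pixel, which sets the scale that an itemized noise budget would try to approach.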
Although conventional HgCdTe photodiode arrays will never function as ideal SPDs on account of leakage current at temperatures $\\rm T>30~K$, it is possible that today's H2RG and H4RG detectors are not yet approaching the fundamental noise limits of the photodiodes themselves. It would be interesting to see what could be achieved if noise from the resistive interconnect (b) could be reduced, and\/or different ROICs and readout strategies could substantially reduce or eliminate $1\/f$ noise from (c) the pixel source follower. The output source follower (d) can already be bypassed in many cases.\n\n\\subsection{IR APD Status}\\label{IRAPDStatus}\n\nHgCdTe APD arrays are a promising technology that initially entered astronomy for comparatively high background applications including adaptive optics and interferometry\\cite{Finger:2012dv} and wavefront sensing and fringe tracking\\cite{Finger:2014we}. More recently, they have been used at the telescope to provide diffraction-limited imaging via the ``lucky imaging'' technique.\\cite{2014SPIE.9154E..19A} Although HgCdTe APD arrays have been made by DRS, Raytheon, and Teledyne, those made by Selex are the focus of most attention in astronomy now.\n\nA group at the University of Hawaii has been evaluating Selex SAPHIRA arrays for applications that include low background astronomy.\\cite{2014SPIE.9154E..19A} With an appropriately optimized process, the HgCdTe itself is probably capable of the same QE performance as the JWST arrays. Moreover, because gain is built into the pixels before the first amplifier, they promise photon counting and potentially even single photon detection if ``dark current'' can be reduced to acceptable levels.\n\n``Dark current'' is the most significant obstacle to using Selex APD arrays for ultra-low background astronomy today.
The $\\sim 10-20~e^-~s^{-1}~\\textrm{pixel}^{-1}$ gain-corrected ``dark current'' that has been reported\\cite{2014SPIE.9154E..19A} is almost certainly dominated by glow from the ROIC. The ROIC in current devices was not optimized for ultra-low background, or even low background astronomy. Work continues at the University of Hawaii to try to disentangle ROIC glow from more fundamental leakage currents in current generation APD arrays. Over the longer term, work is also underway aimed at optimizing the ROIC design.\n\nAlthough HgCdTe APD arrays hold out the promise of read noise below what can be achieved using conventional photodiodes, like conventional photodiodes they will ultimately face a leakage current noise floor that is determined by thermally activated defect states in the HgCdTe. However, it is likely that today's performance is still far from that floor, and more work is needed to better understand the full potential of HgCdTe APD arrays for ultra-low background astronomy in the context of missions like ATLAST.\n\n\\section{Summary}\\label{Summary}\n\nThe search for life on other worlds promises to be a major focus for astrophysics in coming decades. In space, the tools will include new observatories like ATLAST equipped with high performance coronagraphs and\/or starshades. These will enable biosignature characterization of exoEarths.\n\nFortunately, good detector prototypes exist, although more work is needed to mature them for ATLAST. In the VISIR, these include EMCCDs and IR APD arrays. More speculatively, HgCdTe photodiode arrays may still have room for improvement, even beyond the impressive performance that has been shown for JWST and that is expected for WFIRST. Although the challenges are real, they are solvable.
To get from where we are today to what is needed to fully enable ATLAST, focused strategic investment in VISIR detectors is needed.\n\n\\acknowledgments\nThis work was supported by NASA as part of the Goddard Space Flight Center Science and Exploration Directorate Life Finder Detectors Science Task Group (STG). \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\footnotetext[12]{Based on observations obtained with MegaPrime\/MegaCam, a joint project of CFHT and CEA\/DAPNIA, at\nthe Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the\nInstitut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the\nUniversity of Hawaii.}\nSystematic deep photometric observations of the surroundings of the Andromeda galaxy have revolutionized our knowledge of its satellite system. Before 2004, 12 dwarf galaxies were known to be M31 companions, including only 6 dwarf spheroidal galaxies, while 12 new dwarf galaxies have been discovered to inhabit this region of the Local Group sky over the last five years. A dedicated scan of the Sloan Digital Sky Survey (SDSS), targeting a region within $2^\\circ$ of M31's major axis, unveiled the presence of Andromeda~IX and~X \\citep{zucker04,zucker07}. In parallel, a contiguous mapping of the region within $\\sim50{\\rm\\,kpc}$ of Andromeda, performed with the Wide Field Camera on the Isaac Newton Telescope, revealed the picture of a strikingly substructured inner stellar halo \\citep{ferguson02}, and led to the discovery of Andromeda~XVII \\citep{irwin08}.\n\nMapping the outer regions of M31's halo was the goal of an extension to this survey. Performed with the MegaPrime\/MegaCam $1\\,\\textrm{deg}^2$ camera on the Canada-France-Hawaii Telescope (CFHT), it mapped a quarter of the M31 stellar halo, out to $\\sim150{\\rm\\,kpc}$ \\citep{ibata07}. 
Confirming the clumpy nature of Andromeda's stellar halo, the survey has also revealed the presence of numerous other dwarf galaxies: Andromeda~XI, XII, XIII, XV, XVI, XVIII, XIX and~XX \\citep{martin06b,ibata07,mcconnachie08}. At about the same time, Andromeda~XIV was discovered serendipitously by \\citet{majewski07}, just outside of the edge of the MegaCam survey.\n\nOur understanding of these systems remains sparse, mainly due to the distance at which they reside, translating into difficult photometric observations that cannot easily reach much deeper than the horizontal branch at the distance of M31. Spectroscopic observations are also limited to the handful of bright red giant branch (RGB) stars that can be targeted in each system. Although progress is expected along these lines in the coming years, much can already be said of the generic properties of these new objects from the current survey data alone. Their absolute magnitude ranges from a very faint $M_V=-6.4$ for And~XII and And~XX \\citep{martin06b,mcconnachie08} to a surprisingly bright $M_V\\lta-9.7$ for And~XVIII \\citep{mcconnachie08}, patently showing the incompleteness of the M31 satellite luminosity function at the faint end, in regions that have so far only been surveyed with photographic plates.\n\nGiven their relative faintness and significant sizes, these new systems are usually assumed to be dwarf spheroidal galaxies, in other words dwarf galaxies that are devoid of any significant amount of gas. This is currently consistent with the analysis of \\textsc{Hi} surveys but the upper limits on their \\textsc{Hi} content remain relatively high ($2-3\\times10^5{\\rm\\,M_\\odot}$; \\citealt{grcevich09}).
Therefore, it cannot be entirely ruled out that some of these galaxies still contain non-negligible amounts of gas.\n\nBuilding upon the previous MegaCam survey, we have initiated the Pan-Andromeda Archaeological Survey (PAndAS), a Large Program using the CFHT MegaCam imager to map the entire stellar halo of M31 and M33 out to distances of $\\sim 150$ and $\\sim50{\\rm\\,kpc}$ respectively. \nHere, we report on the discovery of two new dwarf galaxies in the vicinity of the Andromeda and Triangulum galaxies based on the first year of PAndAS data. These systems, dubbed Andromeda~XXI and~XXII, are sparse but unmistakable overdensities of stars that are also aligned along a RGB at the distance of M31. Section~2 of this paper briefly summarizes the PAndAS data while section~3 presents the new systems and details their properties. Section~4 briefly discusses the new discoveries and section~5 concludes the paper.\n\nThroughout this paper, the distance moduli of M31 and M33 are assumed to be $(m-M)_0=24.47\\pm0.07$ and $(m-M)_0=24.54\\pm0.06$, or $783\\pm25$ and $809\\pm24{\\rm\\,kpc}$ respectively \\citep{mcconnachie05}.\n\n\\section{The PAndAS survey}\nPAndAS builds upon our previous CFHT\/MegaCam surveys of M31 whose results are presented in \\citet{martin06b}, \\citet{ibata07} and \\citet{mcconnachie08}. We use a similar observational set-up and refer the reader to these papers for a description of the observing strategy, data reduction and data quality. Maps showing early results and the complete survey area to date can be found in \\citet{mcconnachie09} and led to the discovery of And~XXI and And~XXII. The survey consists of contiguous exposures, performed with the $1\\times1$ deg$^2$ MegaPrime\/MegaCam camera mounted on the CFHT. The camera is a mosaic of 36 $2048\\times4612$ CCDs with a pixel size of 0.187\\,arcsec.
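The distance moduli quoted above convert to physical distances through $d = 10^{(m-M)_0/5 + 1}$ pc; the following quick numerical check is a sketch, not part of the original analysis:

```python
def modulus_to_kpc(mu: float) -> float:
    """Convert a distance modulus (m - M)_0 to a distance in kpc,
    using d[pc] = 10**(mu/5 + 1)."""
    return 10 ** (mu / 5.0 + 1.0) / 1000.0

# M31 and M33 moduli quoted in the text
print(round(modulus_to_kpc(24.47)))  # 783 (kpc)
print(round(modulus_to_kpc(24.54)))  # 809 (kpc)
```

Both values reproduce the distances stated in the text ($783$ and $809{\rm\,kpc}$).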
Small gaps within the survey lead to a scientifically useable field-of-view of $0.96\\times0.94\\,\\textrm{deg}^2$ for each of the current 250\\,pointings of the survey.\n\nEach field has been observed for 1350\\,s in each of the MegaCam $g$ and $i$ filters, split into $3\\times450$\\,s dithered subexposures. Good seeing ($<0.8''$) ensures that the photometry reaches $g\\sim25.5$ and $i\\sim24.5$ with a signal-to-noise ratio of 10 and guarantees that the star\/galaxy separation only degrades at magnitudes fainter than $g\\sim25.0$ and $i\\sim24.0$. Data were preprocessed (de-biased, flat-fielded and fringe corrected) by the Elixir system at CFHT that also determines the photometric zero point of the observations, and then processed using a version of the Cambridge Astronomical Survey Unit (CASU) photometry pipeline \\citep{irwin01}, adapted to CFHT\/MegaCam observations. The pipeline registers and stacks the images, and also generates catalogues with object morphological classification before creating band-merged $g$, $i$ products.\n\nIn the following, dereddened magnitudes ($g_0$ and $i_0$) have been determined from the \\citet{schlegel98} $E(B-V)$ extinction maps, using the following correction coefficients: $g_0=g-3.793E(B-V)$ and $i_0=i-2.086E(B-V)$, listed in their Table~6.\n\n\\section{Andromeda~XXI and Andromeda~XXII}\n\\subsection{Discovery}\n\nTwo new dwarf galaxies were directly identified on matched-filter surface density maps of RGB candidate stars selected to have colors consistent with the average (generally) metal-poor locus of M31 dwarf spheroidals. Satellites in the very sparsely populated outer halo regions correspond to stellar overdensities that are striking enough for them to be easily spotted, even down to magnitudes of $M_V\\sim-6.5$. This was the case for And~XII \\citep{martin06b} and And~XX \\citep{mcconnachie08} and also here with Andromeda~XXII. 
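The dereddening relations of the previous section are a simple linear subtraction; the following sketch applies them (the example $E(B-V)$ value and magnitudes are illustrative, not survey measurements):

```python
def deredden(g: float, i: float, ebv: float):
    """Apply the Schlegel et al. (1998) based corrections quoted in the text:
    g0 = g - 3.793 E(B-V), i0 = i - 2.086 E(B-V)."""
    return g - 3.793 * ebv, i - 2.086 * ebv

# Hypothetical star: g = 23.50, i = 22.80, with E(B-V) = 0.06 (assumed)
g0, i0 = deredden(23.50, 22.80, 0.06)
print(f"g0={g0:.3f}, i0={i0:.3f}")  # g0=23.272, i0=22.675
```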
An automated search is certainly warranted to better quantify the detection limits of the survey in uncovering new dwarf galaxies. However, it is beyond the scope of this initial discovery paper and will be presented elsewhere.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\hsize,angle=270]{f1.ps}\n\\caption{\\label{map} Map of dwarf galaxies located in the surroundings of M31 and M33. M31 is located at the center of the coordinate system and known satellites are represented by empty circles. The two new discoveries, And~XXI and And~XXII, are represented by filled circles. The current extent of the PAndAS survey is shown as dotted squares, with each square representing a single $1\\times1\\,\\textrm{deg}^2$ MegaCam field. The ellipse represents the extent of M31's \\textsc{Hi} disk. Constellation boundaries, taken from \\citet{davenhall97}, are shown as thin lines.}\n\\end{center}\n\\end{figure}\n\nThe locations of the two new satellites are shown on the map of the PAndAS survey (Figure~\\ref{map}). As with many previous detections, these two systems are noticeably close to the edge of the survey, suggesting that the survey limit of $150{\\rm\\,kpc}$ in projected distance from M31 does not represent the true extent of Andromeda's satellite system (see also Figure~2 of \\citealt{mcconnachie09}). In the spirit of previous conventions, we dub these two satellites Andromeda~XXI and Andromeda~XXII (And~XXI and And~XXII; see the discussion below, in Appendix~\\ref{naming_conventions}).\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=0.44\\hsize,angle=270]{f2a.ps}\n\\includegraphics[width=0.44\\hsize,angle=270]{f2b.ps}\n\\caption{\\label{map_AndXXI}\\emph{Left panel:} Spatial distribution of stellar sources around And~XXI. Small dots represent all stars in the PAndAS survey whereas large dots correspond to likely RGB stars of the dwarf galaxy, selected within the dashed box shown on the CMD of the middle panel. 
These stars are clearly clumped into an overdensity of stars. MegaCam CCDs are shown as dashed rectangles and white regions correspond to holes in-between CCDs or holes in the survey. Open circles correspond to regions that are lost to the survey due to the presence of saturated bright stars. The central dashed ellipse corresponds to the region within two half-light radii of the dwarf galaxy, assuming the structural parameters listed in Table~\\ref{parameters}. \\emph{Right panels:} Color-magnitude diagrams within two half-light radii of And~XXI (middle panel) and, for comparison, of a field region at a distance of $\\sim20'$ covering the same area after correcting for gaps in the survey coverage (right-most panel). The galaxy's RGB is clearly visible as an overdensity of stars with $0.8\\lta g-i\\lta1.5$ and $i\\gta21.2$ that does not appear in the reference CMD.}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=0.44\\hsize,angle=270]{f3a.ps}\n\\includegraphics[width=0.44\\hsize,angle=270]{f3b.ps}\n\\caption{\\label{map_AndXXII} Same as Figure~\\ref{map_AndXXI} but for And~XXII. Although this system is much fainter, it still appears as a spatial overdensity of stars (left panel) that are aligned along a RGB in the CMD (middle panel), a feature that does not appear in the reference CMD (right panel).}\n\\end{center}\n\\end{figure*}\n\nBoth systems appear as overdensities of stars on the sky, as is visible in the left panels of Figures~\\ref{map_AndXXI} and~\\ref{map_AndXXII}. These stars are also aligned along a RGB that would be at, or close to, the distance of M31 or M33.
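The detection approach described here, a color-magnitude box cut followed by a sky-position density map, can be sketched with a mock catalogue (the catalogue values below are synthetic; only the CMD box limits come from the figure caption):

```python
import random

random.seed(0)
# Mock catalogue of 5000 stars: sky position (arcmin), g-i color, i magnitude
stars = [(random.uniform(0, 60), random.uniform(0, 60),
          random.uniform(-0.5, 3.0), random.uniform(20.0, 24.5))
         for _ in range(5000)]

# CMD box cut bracketing an M31-distance RGB: 0.8 < g-i < 1.5, i > 21.2
selected = [(x, y) for x, y, color, imag in stars
            if 0.8 < color < 1.5 and imag > 21.2]

# Crude surface-density map: count selected stars in 2x2 arcmin bins;
# a dwarf galaxy would show up as a strongly overpopulated bin
bins = {}
for x, y in selected:
    key = (int(x // 2), int(y // 2))
    bins[key] = bins.get(key, 0) + 1
print(len(selected), max(bins.values()))
```

A real search would additionally weight stars by a matched filter built from the expected RGB locus, but the box-cut-plus-histogram above captures the core idea.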
The CMDs within 2 half-light radii of the dwarfs (determined in \\S~\\ref{strParam} below) are shown in the middle panels of these figures and, when compared to the CMD of reference fields chosen in an annulus covering the same area at a distance of $\\sim20'$ from the dwarfs' centers (right panels), indeed reveal an alignment of stars that follow the typical shape of a RGB. Isolating these stars enhances the contrast of the overdensity of stars on the sky (large symbols in the left panels).\n\nAnd~XXI is typical of the relatively bright dwarf galaxies that we have found before (such as And~XV or And~XVI). The overdensity of stars in the CMD region $24.55\\,$GeV, the three-gluon mSTI is however fulfilled, where the small deviation for very large momenta stems from the tuning procedure, as described before.\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{figures\/ThreeGluondecPlotnewMeasure}\n\t\t\\caption{Absolute value of individual contributions: fRG (solid lines), STI (dashed lines). \\hspace*{\\fill}}\n\t\t\\label{fig:three_gluon_msti_abs}\n\t\\end{subfigure}%\n\t\\hspace{0.05\\textwidth}\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{figures\/ThreeGluonmSTIdecPlotnewMeasure}\n\t\t\\caption{Normalized difference.}\n\t\t\\label{fig:three_gluon_mSTI_relative}\n\t\\end{subfigure}\n\t\\caption{Results for the longitudinal three-gluon dressing $\\lambda_{A^3,2}(p)$ as a function of the momentum $p$. The colors differentiate IR scenarios.\\hspace*{\\fill}}\n\t\\label{fig:YM:theree_gluon_msti_rel}\n\\end{figure*}\n\n\\subsection{Discussion of Different Solutions}\n\\label{sec:YM:Discussiondec}\n\nIn this section, we present a comparison of the mSTIs for different types of solutions, i.e.
the scaling, decoupling, and a Higgs-type solution, that are highlighted amongst the range of solutions in \\Cref{fig:gluon_mass_tuning}.\n\n \nStudying the normalized difference of the transverse and longitudinal gluon mass, see \\Cref{fig:YM:massMeasuredec}, where the longitudinal mass is obtained from the mSTI and the effective transverse mass from the fRG, one can see that all three solutions agree equally well up until $p \\approx 20\\,$GeV. There, the Higgs-type solution starts to deviate. It is expected that for a more massive Higgs-type solution, the deviation starts at an even larger momentum scale $p$. Vice versa, the scaling solution starts to deviate at a smaller scale than the decoupling solution. \n\nThe ghost-gluon mSTI is fulfilled for all three types of solutions; the small deviations shown in \\Cref{fig:YM:MassPlot} simply depict the numerical precision in the computation.\n\nGenerally, we do not expect the mSTIs to be fulfilled for scales $p \\ll 1\\,$GeV, since non-classical vertices and tensor structures that were not taken into account in our truncation contribute significantly in this regime. We furthermore expect IR cutoff effects contributing below this scale, being within one order of magnitude of $k_{\\mathrm{min}}$.\n\nHowever, we can clearly see the expected divergence in the gluon two-point, \\Cref{fig:YM:MassPlot}, and three-point mSTI, \\Cref{fig:YM:theree_gluon_msti_rel}, for momenta $p \\ll 1\\,$GeV.\n\nComparing the longitudinal three-gluon dressing $\\lambda_{A^3,2}$ from the fRG and from the STI for different types of solutions, one can see that the scaling solution yields the best qualitative agreement of the dressings.\n\n\n\n\\section{Conclusion}\n\\label{sec:YM:Conclusion}\nWe studied the gauge consistency of functional approaches to Yang-Mills within the Landau gauge.
For this purpose, we resolved the transverse and longitudinal sector by solving the corresponding fRG flow equations for the dressings of transverse and longitudinal propagators and vertices in a sufficiently advanced truncation. To quantify the violation of gauge consistency in such a set-up, we also computed longitudinal dressings with the associated modified Slavnov-Taylor identities. The results agree within numerical accuracy for momentum scales $p\\gtrsim 1\\,\\mathrm{GeV}$. We found that the agreement of STI dressings and fRG dressings differs for the solution branches, i.e. for scaling, decoupling and Higgs type solutions: in general, the scaling solution shows the best agreement. In turn, the longitudinal fRG dressings for decoupling and Higgs-type solutions start to deviate from the ST dressings for successively larger momenta, the farther they are from scaling. Interestingly, the longitudinal three-gluon vertex dressing is an exception, as there the deviation of fRG and STI dressings happens at roughly the same scale. This structure deserves further investigation, one of the reasons being that the STIs used in the present work are the standard Landau gauge STIs and the decoupling and Higgs-type solutions triggered here are effectively induced by an explicit mass parameter as in the Curci-Ferrari (CF) model. This suggests repeating the present investigation on the basis of the CF BRST transformations.\n\nIn summary, these comparisons provide us with a tool to control the gauge consistency in truncations of the quantum effective action. In our opinion, the present level of gauge consistency supports the reliability of such truncations in applications of non-perturbative functional methods to Yang-Mills theory.\n\nMonitoring the mSTIs in such a manner provides additional, much needed, guidance to improve on truncations. Alternatively, they can be used as constraints to determine correlation functions.
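A pointwise normalized-difference diagnostic of the kind used to compare fRG and STI dressings can be sketched as follows; the specific measure here is an assumption chosen for illustration, not the paper's definition:

```python
def normalized_difference(a: float, b: float) -> float:
    """Relative deviation between two dressings at one momentum point,
    bounded to [0, 1]; defined as 0 when both values vanish."""
    denom = abs(a) + abs(b)
    return abs(a - b) / denom if denom > 0 else 0.0

# Perfect agreement gives 0; maximally disagreeing (opposite-sign) values give 1
print(normalized_difference(1.0, 1.0))   # 0.0
print(normalized_difference(1.0, -1.0))  # 1.0
```

Plotting such a quantity as a function of momentum for each solution branch makes the scale at which gauge consistency is lost directly visible.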
\n\nThis work provides the foundation for an advanced study thereof in QCD, where the gauge consistency of the coupled self-consistent quark-gluon system poses an intricate problem. We hope to report on this in the near future.\n\n\\section*{Acknowledgments}\n We thank G.~Eichmann, J.~Horak, J.~Papavassiliou and U.~Reinosa for discussions. This work is done within the fQCD-collaboration~\\cite{fQCD} and we thank the members for discussion and collaborations on related projects. This work is supported by EMMI, and is part of\nand supported by the DFG Collaborative Research Centre \"SFB 1225 (ISOQUANT)\". It is also supported by Germany's Excellence Strategy EXC 2181\/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).\nN.~W.~is additionally supported by the Hessian collaborative research cluster ELEMENTS and by the DFG Collaborative Research Centre \"CRC-TR 211 (Strong-interaction matter under extreme conditions)\".\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe micro-lens array (MLA) based plenoptic cameras, including the conventional plenoptic camera \\cite{ng2006digital} and the focused plenoptic camera \\cite{lumsdaine2009focused}, capture the radiance information of rays in both spatial and angular dimensions, e.g. the 4D light field data \\cite{levoy1996light,gortler1996lumigraph}. The data from the plenoptic camera is equivalent to narrow baseline images of traditional cameras with projection centers on the lens aperture plane. The measurement of the same position in multiple directions allows or strengths applications on computer photography, such as digital refocusing \\cite{ng2005fourier}, depth estimation \\cite{Jeon2015Accurate}, saliency detection \\cite{li2014saliency} and so on. However, the angular and spatial resolution of a light field data is limited by the physical parameters of the plenoptic camera. 
Recent work proposed methods for light field registration and stitching to expand the field of view \\cite{johannsen2015linear,birklbauer2014panorama,guo2015enhancing}. To support these applications, calibrating the plenoptic camera and decoding an accurate 4D light field in metric distance from the 2D image sensor are crucial.\n\nTo calibrate the plenoptic camera, it is essential to build a model that relates the measurements on the 2D raw image to the rays in 3D space. Prior work dealt with the intrinsic calibration of plenoptic cameras in different optical designs \\cite{dansereau2013decoding,bok2014geometric,heinze2015automated,vaish2004using}. However, the parameters of the proposed models are redundant or incomplete, and the models' description of plenoptic cameras can still be improved. Some of the calibration methods have issues with the initialization estimate or the optimization procedure.\n\nIn this paper, we propose a novel unconstrained TPP model with 7 parameters to describe the light field structure inside and outside the plenoptic camera concisely. The 7 parameters are sufficient to constrain the rays in a 4D light field. Based on the 7-parameter TPP model, the pixels on the raw image can be related to the rays by a virtual MLA directly. We deduce the projective transformation \\cite{hartley2003multiple} on the reconstructed 3D points with different TPP parameters, which is the theoretical foundation of the closed-form solution of our calibration method. Then we employ a nonlinear optimization to refine the parameters via minimizing the re-projection error on the raw images.
We conduct experiments on both simulated data and a physical plenoptic camera.\n\nIn summary, our main contributions are listed as follows:\n\n(1) We simplify the plenoptic camera system as a 7-parameter unconstrained TPP coordinate and deduce a projective transformation matrix to relate the measurement on the image sensor to the scene points in 3D space.\n\n(2) We solve the parameters of the TPP using a robust and efficient method, which consists of a linear initialization and an optimization via re-projection error.\n\nThe remainder of this paper is organized as follows: Section~\\ref{sec:RelatedWork} summarizes the related work on the plenoptic camera models and calibration methods. Section~\\ref{sec:TPPModel} describes the 7-parameter unconstrained TPP model and its relationship with the physical plenoptic camera, and derives the projective transformation involving the TPP's parameters. Section~\\ref{sec:Solve} provides the details of our proposed calibration method. Section~\\ref{sec:ExpResults} shows the calibration results on simulated data and real scene data.\n\n\n\n\\section{Related Work}\n\\label{sec:RelatedWork}\n\nTo acquire light fields, various imaging systems have been developed from the traditional camera. Wilburn et al. \\cite{wilburn2005high} presented a camera array to obtain light fields with high spatial and angular resolution. Prior work dealt with the calibration of the camera arrays \\cite{vaish2004using}. Unfortunately, applications on camera arrays are limited by their high cost and complex control. In contrast, an MLA enables a single camera to record a 4D light field more conveniently and efficiently, though the baseline and spatial resolution are smaller than those of a camera array. Recent work has been devoted to calibrating the intrinsic parameters of the plenoptic cameras in two designs \\cite{ng2006digital,lumsdaine2009focused}, which differ considerably in the image structure of the micro lenses. 
Moreover, in traditional multi-view geometry, multiple cameras in different poses are defined as a set of unconstrained rays, which is known as the Generalized Camera Model (GCM) \\cite{pless2003using}. The ambiguity of the reconstructed scene has been discussed in that context. For a plenoptic camera, different views of the same scene point are obtained, and the calibration of a plenoptic camera can therefore draw on traditional multi-view theory.\n\nSome work explored the calibration of the focused plenoptic camera, where the multiple projections of the same scene point are easy to recognize. Johannsen et al. \\cite{johannsen2013calibration} proposed a method for the intrinsic parameter calibration of a focused plenoptic camera. By reconstructing 3D points from the parallax in adjacent micro-lens images, the parameters including the depth distortion were estimated by nonlinear optimization directly without a linear initialization. However, their method assumed that the geometric center of each micro image lies on its micro-lens's optical axis. This assumption caused inaccuracy in the reconstructed points and was compensated by the depth distortion coefficients. Hahne et al. \\cite{hahne2015refocusing} discussed the influence of the deviation between the micro image's center and the optical center of its micro lens. Heinze et al. \\cite{heinze2015automated} applied a method similar to \\cite{johannsen2015linear} and proposed a linear initialization for the intrinsic parameters.\n\nSome work explored the calibration of the conventional plenoptic camera, where the sub-aperture images are easy to synthesize. Dansereau et al. \\cite{dansereau2013decoding} presented a model to decode the pixels into rays for a conventional plenoptic camera, where the 12-free-parameter transformation matrix was connected with the reference plane outside the camera. 
However, the calibration method was initialized using traditional camera calibration techniques and there were redundant parameters in the decoding matrix. Bok et al. \\cite{bok2014geometric} formulated a geometric projection model consisting of a main lens and an MLA to estimate the intrinsic and extrinsic parameters by utilizing raw images directly, including an analytical solution and a nonlinear optimization. Moreover, Thomason et al. \\cite{thomason2014calibration} concentrated on the misalignment of the MLA and estimated its position and orientation.\n\nDifferent from the previous models, we represent the image sensor, the MLA and the main lens as a simple TPP model with 7 parameters. The 7 parameters are connected with the physical parameters of the plenoptic camera and sufficient to relate the pixels on the raw image to the rays without redundancy. To reveal the relationship between the light field data and the scene structure, we explore the ray-ray association on the TPP coordinate. Then the 3D projective transformation of the reconstructed structure with different TPP parameters is deduced. Based on the projective transformation, we solve a linear initialization for the intrinsic and extrinsic parameters. In our method, the prior scene points are supported by a planar calibration board in different poses. This initial solution is refined by minimizing the re-projection error using the Levenberg-Marquardt algorithm. Theoretical derivation and experimental results demonstrate the validity of our calibration method.\n\n\n\n\\section{Unconstrained TPP model}\n\\label{sec:TPPModel}\n\nIn the traditional TPP model, the distance between the two planes is normalized to 1 unit to describe a set of rays \\cite{levoy1996light,gortler1996lumigraph}. To describe the decoded rays of a plenoptic camera, a TPP model with free parameters is needed, i.e. the unconstrained TPP model. As shown in Fig.\\ref{fig:TPP_coordinate}, we define a TPP coordinate, where $\\vec{r}\\!=\\! 
\\left( x, y, u, v, f \\right)\\!\\! ^\\mathrm{T}$ defines a ray passing through $\\left(x, y, 0 \\right)\\!\\! ^\\mathrm{T}$ and $\\left(u, v, f \\right)\\!\\! ^\\mathrm{T}$. In this section, we discuss the 3D projective transformation of reconstructed points in a light field based on the TPP model. Then we establish the relationship between the TPP parameters and the physical parameters of a focused plenoptic camera.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60mm]{fig1_TPP_coordinate}\n\\vspace{-3mm}\n\\caption{\nAn illustration of the TPP coordinate system. The origins of the two coordinates $x$-$y$ and $u$-$v$ lie on the axis $Z$. Axis $x$ and axis $y$ are parallel to axis $u$ and axis $v$ respectively. The distance between the $x$-$y$ plane and the $u$-$v$ plane is $f$.}\n\\label{fig:TPP_coordinate}\n\\vspace{-3mm}\n\\end{figure}\n\n\n\\subsection{Projective Transformation on TPP}\n\\label{subsec:tpp}\n\nLet $\\vec{r}$ pass through the point $\\left(X, Y, Z\\right)^\\mathrm{T}$; then we have:\n\n\\vspace{-2mm}\n\\begin{equation}\n \\underbrace{ \\left[ \\begin{array}{cccc}\n f & 0 & x\\!-\\!u & -\\!fx \\\\\n 0 & f & y\\!-\\!v & -\\!fy \\end{array} \\right] }_{\\bm{M}}\n \\left[ \\begin{array} {c}\n \tX \\\\ Y \\\\ Z \\\\ 1 \\end{array} \\right] = \\bm{0},\n \\label{eq_Mx=0}\n\\end{equation}\n\n\\noindent where $\\left(X, Y, Z\\right)^\\mathrm{T}$ can be solved iff there are at least two rays and any two of the rays $\\vec{r}_i$ and $\\vec{r}_j$ satisfy $\\frac{u_i-u_j}{x_i-x_j}=\\frac{v_i-v_j}{y_i-y_j}$.\n\nTransforming $\\vec{r}$ into $\\vec{r}^\\prime \\!\\!=\\!\\! \\left( k_{\\!x}x, k_{\\!y}y, k_{\\!u}u\\!+\\!u_0, k_{\\!v}v\\!+\\!v_0, f^\\prime \\right)\\!\\! ^\\mathrm{T}$, the intersection point $\\left(X, Y, Z \\right)\\!\\!^\\mathrm{T}$ becomes $\\left(X^\\prime, Y^\\prime, Z^\\prime \\right)\\!\\! 
^\\mathrm{T}$, which satisfies:\n\n\\vspace{-2mm}\n\\begin{equation}\n \\setlength\\arraycolsep{4.0pt}\n \\underbrace{ \\left[ \\begin{array}{cccc}\n fk_uk_x & 0 & k_x u_0 & 0 \\\\\n 0 & fk_vk_x & k_x v_0 & 0 \\\\\n 0 & 0 & f^\\prime k_x & 0 \\\\\n 0 & 0 & k_x\\!-\\!k_u & fk_u \\end{array} \\right] }_{=:P\\left( \\bm{\\mathscr{X}}, f \\right)}\n \\left[\\!\\! \\begin{array} {c}\n \tX \\\\ Y \\\\ Z \\\\ 1 \\end{array} \\!\\!\\right] = s\n \t\\left[\\!\\! \\begin{array} {c}\n \tX^\\prime \\\\ Y^\\prime \\\\ Z^\\prime \\\\ 1 \\end{array} \\!\\!\\right],\n \\label{eq_PX=sX_}\n\\end{equation}\n\\vspace{-1mm}\n\n\\noindent where $\\mathscr{X}\\!\\!\\!=\\!\\! \\left( k_x, k_y, k_u, k_v, u_0, v_0, f^\\prime \\right)$ contains the transformation parameters of the TPP coordinate and $s$ is a scalar factor. To make the set of transformed rays $\\vec{r}^\\prime$ all intersect at the point $\\left(X^\\prime, Y^\\prime, Z^\\prime \\right)^\\mathrm{T}$, $\\mathscr{X}$ must satisfy $k_u\/k_x\\!=\\! k_v\/k_y$. Equation \\ref{eq_PX=sX_} indicates that $\\mathscr{X}$ affects the geometric structure of the recovered scene, {\\it i.e.} the intersections of rays. In addition, the non-zero elements in the last row of $\\bm{P}$ are equivalent to the refraction of a lens in a traditional camera. Therefore, by transforming the coordinate of the TPP via $\\mathscr{X}$, the light field inside a camera can be transformed into the real world scene, where the scale of $u$-$v$ handles the projective transformation and the scale of $x$-$y$ handles the zoom of the recovered scene. The transformations on the scene structure with a single parameter in $\\mathscr{X}$ are shown in Fig.\\ref{fig:projective_distortion} separately.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=90mm]{fig2_projective_distortion}\n\\vspace{-4mm}\n\\caption{\nTPP light field recording a Lambertian cube. 
The left one is the original cube and the others are distorted cubes with the change of $f$, $k_{u\\!v}\\left(k_{uv}\\!\\!=\\!\\! k_u\\!\\!=\\!\\! k_v\\right)$, $\\left(u_0, v_0 \\right)^\\mathrm{T}$ respectively.}\n\\label{fig:projective_distortion}\n\\vspace{-3mm}\n\\end{figure}\n\nIn Section~\\ref{sec:Solve}, we will discuss the calibration method using the projective transformation in Eq.\\ref{eq_PX=sX_}.\n\n\n\n\\subsection{TPP Coordinate Inside and Outside the Camera}\n\\label{subsec:tpp2}\n\nWe model the main lens as a thin lens and the micro-lens as a pinhole, thus every pixel on the image sensor can be regarded as a ray passing through the coordinate on the image sensor and the optical center of its corresponding micro-lens \\cite{dansereau2013decoding,johannsen2013calibration,bok2014geometric}.\nThe TPP coordinate system consists of an image sensor and an MLA, {\\it i.e.} the $x^\\prime y^\\prime u^\\prime v^\\prime$ coordinate shown in Fig.\\ref{fig:TPP2System}. Moreover, there is another TPP coordinate $xyuv$ outside the camera, where the $x$-$y$ is related to the image sensor and the $u$-$v$ is related to the MLA. The $x$-$y$ and $u$-$v$ planes can be regarded as a zoomed raw image and a virtual MLA with a larger diameter respectively.\n\nDue to the refraction of the main lens, the rays and the scene structure in the two TPP coordinates are different. Obviously, there is a projective transformation on the reconstructed 3D points with different TPP parameters (Eq.\\ref{eq_PX=sX_}). By transforming the rays $\\vec{r}^\\prime$ in $x^\\prime y^\\prime u^\\prime v^\\prime$ coordinate to $\\vec{r}$ in $xyuv$ coordinate via $\\mathscr{X}$, {\\it i.e.} reparameterizing the coordinate of the TPP, the projective distortion of the reconstructed points can be rectified. Therefore, the complex compositions of the plenoptic camera can be equivalently replaced by two parallel planes. 
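The ray intersection of Eq.\ref{eq_Mx=0} and the projective transformation of Eq.\ref{eq_PX=sX_} can be checked numerically. The following Python sketch is an illustration only (not the paper's implementation), written under the assumption $k_x = k_y$ and $k_u = k_v$, so that the intersection condition holds automatically:

```python
def intersect(r1, r2, f):
    # A TPP ray r = (x, y, u, v) passes (x, y, 0) and (u, v, f).
    # Equating x1 + (u1 - x1) * Z / f = x2 + (u2 - x2) * Z / f, which follows
    # from Eq. (1), gives the depth Z of the intersection; X and Y follow.
    x1, y1, u1, v1 = r1
    x2, y2, u2, v2 = r2
    Z = f * (x2 - x1) / ((u1 - x1) - (u2 - x2))
    return (x1 + (u1 - x1) * Z / f, y1 + (v1 - y1) * Z / f, Z)

def apply_P(X, kx, ku, u0, v0, f_new, f):
    # Eq. (2) with k_x = k_y and k_u = k_v: maps the old intersection point
    # to the intersection of the transformed rays, after dividing by s.
    s = (kx - ku) * X[2] + f * ku
    return ((f * ku * kx * X[0] + kx * u0 * X[2]) / s,
            (f * ku * kx * X[1] + kx * v0 * X[2]) / s,
            f_new * kx * X[2] / s)

# Two rays through the point (0.3, -0.2, 5.0) on a TPP with f = 1.
r1 = (0.0, 0.0, 0.06, -0.04)
r2 = (0.5, 0.4, 0.46, 0.28)

# Transform each ray r -> (kx*x, kx*y, ku*u + u0, ku*v + v0) on planes f' apart.
kx, ku, u0, v0, f_new = 2.0, 3.0, 0.1, -0.2, 1.5
t1 = (kx * r1[0], kx * r1[1], ku * r1[2] + u0, ku * r1[3] + v0)
t2 = (kx * r2[0], kx * r2[1], ku * r2[2] + u0, ku * r2[3] + v0)
```

Intersecting `t1` and `t2` with plane distance `f_new` reproduces `apply_P(intersect(r1, r2, 1.0), ...)`: the transformed rays still meet at a single, projectively distorted point.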
In other words, the two-plane light field structure in the real world scene is similar to the one inside the camera. Then we discuss the relationship between the physical structure inside a focused plenoptic camera and the virtual structure outside the camera.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=80mm]{fig3_TPP2System}\n\\vspace{-3mm}\n\\caption{\nA focused plenoptic camera with an MLA. There are two TPP coordinates, {\\it i.e.} $x^\\prime y^\\prime u^\\prime v^\\prime$ inside the camera and $xyuv$ in the real world scene.}\n\\label{fig:TPP2System}\n\\end{figure}\n\n\nIn the plenoptic camera, a ray which passes through the pixel $\\left(x, y \\right)$ on the image coordinate and the micro-lens with label $\\left(i, j \\right)$ can be represented as a virtual ray $\\left(x, y, i, j, 1 \\right)^{\\mathrm{T}}$ where $i \\! \\in \\! \\mathbb{Z}$, $j \\! \\in \\! \\mathbb{Z}$. There is a geometric relationship between the two coordinates $xyuv$ and $x^\\prime y^\\prime u^\\prime v^\\prime$. Let $\\mathscr{X}_{in}\\!\\!=\\!\\!\\left( k_{x\\!,in},k_{y\\!,in},k_{u\\!,in},k_{v\\!,in}, u_{in}, v_{in}, f_{\\!in} \\right)\\!\\! ^\\mathrm{T}$ and $\\mathscr{X}_{\\!out}\\!\\!=\\!\\!( k_{x\\!,out},k_{y\\!,out},k_{u\\!,out},k_{v\\!,out}, u_{out}$, $v_{out}, f_{\\!out} )\\!^\\mathrm{T}$ be the parameters of the TPP inside and outside the camera respectively; then the virtual ray $\\left(x, y, i, j, 1 \\right)\\!\\! ^\\mathrm{T}$ is related to two physical rays $\\left(k_{x,in}x, k_{y,in}y, k_{u,in}i+\\!u_{in}, k_{v,in}j+\\!v_{in}, f_{in} \\right)\\! ^\\mathrm{T}$ and $(k_{x,out}x,\\!k_{y,out}y, k_{u,out}i+u_{out}$, $k_{v,out}j+v_{out}, f_{out} )^\\mathrm{T}$ respectively. The parameters $k_{x\\!,in}$ (or $k_{y\\!,in}$), $f_{in}$ can be regarded as the diameter of the micro-lens and the distance between the image sensor and the MLA. 
With the image sensor origin $\\left( X_{os}, Y_{os}, Z_{os} \\right)\\!\\!^{\\mathrm{T}}$ and the reference micro-lens $\\left( X_{oa}, Y_{oa}, Z_{oa} \\right)\\!\\! ^{\\mathrm{T}}$ whose label is $\\left(0,0\\right)$ (in the main lens coordinate), we can get the relationship between $\\mathscr{X}_{in}$ and $\\mathscr{X}_{out}$:\n\n\\vspace{-1mm}\n\\begin{equation}\n\\vspace{-3.0pt}\n\\frac{k_{x,in}}{k_{x,out}} = \\frac{k_{y,in}}{k_{y,out}} = \\frac{F}{Z_{os}-F},\t\n\\vspace{-3.0pt}\n\\end{equation}\n\n\\begin{equation}\n\\vspace{-3.0pt}\n\\frac{k_{u,in}}{k_{u,out}} = \\frac{k_{v,in}}{k_{v,out}} = \\frac{F}{Z_{oa}-F},\n\\vspace{-3.0pt}\n\\end{equation}\n\n\\begin{equation}\n\\vspace{-3.0pt}\n\\left[\\!\\! \\begin{array} {c}\nu_{out}-u_{in} \\\\v_{out}-v_{in}\n\\end{array} \\!\\!\\right] = \\frac{Z_{oa}}{F-Z_{oa}}\n\\left[\\!\\! \\begin{array} {c}\nX_{oa} \\\\ Y_{oa}\n\\end{array} \\!\\!\\right] - \\frac{Z_{os}}{F-Z_{os}}\n\\left[\\!\\! \\begin{array} {c}\nX_{os} \\\\ Y_{os}\n\\end{array} \\!\\!\\right],\n\\vspace{-3.0pt}\n\\end{equation}\n\n\\begin{equation}\n\\vspace{-1.0pt}\nf_{in} = Z_{oa}\\!\\!-\\!Z_{os},\\quad\nf_{out} = \\frac{F}{F\\!\\!-\\!Z_{oa}} Z_{oa}- \\frac{F}{F\\!\\!-\\!Z_{os}}Z_{os}.\\\\\n\\vspace{-0.0pt}\n\\end{equation}\n\nIn addition, to simplify the discussion, we assume that the layout of the micro-lens array is square-like. For a hexagonal configuration, the micro-lens labels differ according to the layout.\n\n\n\n\\section{Calibration Method}\n\\label{sec:Solve}\n\nTo decode the virtual ray $\\left(x, y, i, j, 1 \\right)^{\\!\\mathrm{T}}$ into the ray in the real world scene, we need to calibrate the parameters of the TPP coordinate system, {\\it i.e.} the intrinsic parameters of the plenoptic camera. The theorem in Section~\\ref{sec:TPPModel} indicates that given an arbitrary setting of TPP parameters, a set of 3D points can be recovered, and there is a projective transformation between the real scene points and the recovered points. 
Moreover, the projective transformation is determined by the TPP parameters.\n\nThis section provides the details of how to solve the parameters effectively, including a linear closed-form solution and a nonlinear optimization to minimize the re-projection error.\n\n\n\\subsection{Linear Initialization}\n\\label{subsec:linearInit}\n\nGiven an arbitrary setting $\\mathscr{X}^\\prime \\!\\!=\\!\\! \\left( k_x^\\prime, k_y^\\prime, k_u^\\prime, k_v^\\prime, u_0^\\prime, v_0^\\prime, f^\\prime \\right)^\\mathrm{T}$\\!\\!,\nwe decode the virtual ray $\\left(x, y, i, j, 1 \\right)^{\\!\\mathrm{T}}$\\!\\! into the ray passing through\n$\\left( k_x^\\prime x,k_y^\\prime y, 0 \\right)^{\\mathrm{T}}$\\!\\! and $(k_u^\\prime i + u_0^\\prime,k_v^\\prime j +v_0^\\prime$,$f^\\prime)^{\\mathrm{T}}$. Then the distorted 3D point $\\bm{X}_d$ can be reconstructed.\nObviously, there is a projective transformation between $\\bm{X}_d$ and the real scene point $\\bm{X}_c$ (Eq.\\ref{eq_PX=sX_}). Therefore, using a transformation parameter setting $\\mathscr{X}_d \\!\\!=\\!\\! \\left(k_x, k_y, k_u, k_v, u_0, v_0, f^\\prime \\right)^\\mathrm{T}$, $\\bm{X}_c$ can be transformed to $\\bm{X}_d$. Then we assume that the points in the world coordinate $\\bm{X}_w$ are related to the TPP coordinate by a rigid motion, $\\bm{X}_c \\!=\\! \\bm{R}\\bm{X}_w+\\bm{t}$, with rotation $\\bm{R} \\in SO(3)$ and translation $\\bm{t}\\in \\mathbb{R}^3 $. Let's denote the $i^\\mathrm{th}$ column of $\\bm{R}$ by $\\bm{r}_i$. Here we assume that $k_x^\\prime \\!=\\! k_y^\\prime \\!=\\! k_{xy}^\\prime$, $k_u^\\prime \\!=\\! k_v^\\prime \\!=\\! k_{uv}^\\prime$, and the same for $\\mathscr{X}_d$. The relationship between $\\bm{R}$, $\\bm{t}$, $\\bm{X}_w$, $\\bm{X}_d$ and $\\mathscr{X}_d$ is:\n\n\\vspace{-2mm}\n\\begin{equation}\ns\\bm{X}_d \\!= \\bm{P} \\left( \\mathscr{X}_d, f \\right) \\left[\\!\\! 
\\begin{array} {cccc}\n\\bm{r_1} & \\bm{r_2} & \\bm{r_3} & \\bm{t} \\\\\n0 & 0 & 0 & 1 \\end{array} \\!\\!\\right]\n \\left[ \\!\\!\\! \\begin{array}{c}\nX_w \\\\ Y_w \\\\ Z_w \\\\ 1 \\end{array} \\!\\!\\! \\right],\n\\label{eq_sXd=PRtX}\n\\end{equation}\n\n\\noindent where $f$ is the distance between the two calibrated parallel planes. Obviously, there is a $4\\times3$ homography matrix:\n\n\\vspace{-3mm}\n\\begin{equation}\n\\bm{H} \\!=\\! \\bm{P} \\left[ \\begin{array} {ccc}\n\\bm{r}_1 & \\bm{r}_2 & \\bm{t} \\\\\n0 & 0 & 1\n\\end{array} \\right].\n\\label{eq_H=Prt}\n\\end{equation}\n\nWe assume that the calibration board plane is $Z=0$ on the world coordinate, thus $Z_w=0$. Let's denote the $i^\\mathrm{th}$ prior point by $\\bm{X}_{\\!w\\!,i}\\!=\\!( X_{\\!w\\!,i}, Y_{\\!w\\!,i}, 1 )^{\\!\\mathrm{T}}$. Combining Eq.\\ref{eq_Mx=0} and Eq.\\ref{eq_sXd=PRtX}, we have:\n\n\\vspace{-1mm}\n\\begin{equation}\n\\bm{M}_i \\, \\bm{H} \\left[ \\!\\! \\begin{array}{c}\nX_w \\\\ Y_w \\\\ 1\t\n\\end{array}\\!\\! \\right] = \\bm{0},\n\\label{eq_MiHXw=0}\n\\end{equation}\n\n\\begin{equation}\n\\left[\\! \\begin{array}{c}\n\\bm{M}_1 \\otimes \\left( X_{w\\!,1} \\,\\,\\, Y_{w\\!,1} \\,\\,\\, 1 \\right) \\\\\n\\vdots \\\\\n\\bm{M}_n \\otimes \\left( X_{w\\!,n} \\,\\,\\, Y_{w\\!,n} \\,\\,\\, 1 \\right)\n\\end{array} \\!\\right] \\overrightarrow{\\bm{H}} = \\bm{0},\n\\label{eq_MH=0}\n\\end{equation}\n\n\\noindent where $\\bm{M}_i$ is a $2m_i\\times4$ matrix which contains $\\bm{X}_{w,i}$'s $m_i$ decoded rays from the raw image, $\\overrightarrow{\\bm{H}}$ is a $12\\times1$ vector obtained by stacking the rows of $\\bm{H}$, and $\\otimes$ is the direct product operator. The homography $\\bm{H}$, up to an unknown factor, can be estimated from Eq.\\ref{eq_MH=0}. Let's denote the $i^\\mathrm{th}$ column vector of $\\bm{H}$ by $\\bm{h}_i= \\left(h_{1i}, h_{2i}, h_{3i}, h_{4i} \\right)\\!\\! ^\\mathrm{T}$. 
Utilizing the orthogonality and identity of $\\bm{R}$, we have:\n\n\\vspace{-1mm}\n\\begin{equation}\n\\begin{aligned}\n&\\bm{h}_1^{\\mathrm{T}} \\bm{P}^{-\\mathrm{T}} \\bm{P}^{-1} \\bm{h}_2 = 0, \\\\\n&\\bm{h}_1^{\\mathrm{T}} \\bm{P}^{-\\mathrm{T}} \\bm{P}^{-1} \\bm{h}_1 = \\bm{h}_2^{\\mathrm{T}} \\bm{P}^{-\\mathrm{T}} \\bm{P}^{-1} \\bm{h}_2,\n\\end{aligned}\n\\label{eq_hPPh}\n\\end{equation}\n\n\\noindent where $\\bm{P}\\!=\\! \\bm{P}\\left(\\mathscr{X}_d, f \\right)$. Let's denote $\\bm{P}^{-\\!\\mathrm{T}} \\!\\bm{P}^{-\\!1}$ by a symmetric matrix $\\bm{Q}$, thus:\n\n\\vspace{-3mm}\n\\begin{equation}\n\\setlength{\\arraycolsep}{4.0pt}\n\\bm{Q} = \\left[\\!\\! \\begin{array} {cccc}\n\\frac{1}{ f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2} } & 0 & - \\frac{u_0}{f^{\\!\\prime} \\! f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2}} & 0 \\\\\n0 & \\frac{1}{ f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2} } & - \\frac{v_0}{f^{\\!\\prime} \\! f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2}} & 0 \\\\\n- \\frac{u_0}{f^{\\!\\prime} \\! f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2}} &\\! -\\frac{v_0}{\\!f^{\\!\\prime} \\! f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2}} & \\frac{1}{f^{\\!\\prime 2}k_{\\!x\\!y}^2} \\!+\\! \\frac{u_0^2+v_0^2}{f^{\\!\\prime2} \\! f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2}} \\!+\\! \\frac{ \\left( k_{\\!u\\!v}\\!-\\!k_{\\!x\\!y} \\right) ^2 }{ f^{\\!\\prime 2}\\! f^{2} k_{\\!x\\!y}^{2}k_{\\!u\\!v}^{2} } & \\frac{k_{\\!u\\!v}\\!-k_{\\!x\\!y}}{ f^{\\!\\prime} \\! f^{2} k_{\\!x\\!y} k_{\\!u\\!v}^{2} } \\\\\n0 & 0 & \\frac{k_{\\!u\\!v}-k_{\\!x\\!y}}{ f^{\\!\\prime} \\! f^{2} k_{\\!x\\!y}k_{\\!u\\!v}^{2} } & \\frac{1}{ f^{2}k_{\\!u\\!v}^{2} }\n\\end{array}\t\\!\\!\\right].\n\\label{eq_Q=PP}\n\\end{equation}\n\nNote that there are only six distinct non-zero elements in $\\bm{Q}$, denoted by $\\bm{q}= \\left(q_{11}, q_{13}, q_{23}, q_{33}, q_{34}, q_{44} \\right)^\\mathrm{T}$. 
To solve $\\bm{Q}$, we have:\n\n\\vspace{-2mm}\n\\begin{equation}\n\\setlength{\\arraycolsep}{8.0pt}\n\\left[ \\begin{array}{cc}\nh_{11}h_{12}+h_{21}h_{22} & h_{11}^2\\!\\!-\\!h_{12}^2\\!+\\!h_{21}^2\\!\\!-\\!h_{22}^2\\\\\nh_{11}h_{32}+h_{12}h_{31} & 2\\left( h_{11}h_{31}\\!\\!-\\!h_{12}h_{32} \\right)\\\\\nh_{21}h_{32}+h_{22}h_{31} & 2\\left( h_{21}h_{31}\\!\\!-\\!h_{22}h_{32} \\right)\\\\\nh_{31}h_{32} & h_{31}^2\\!\\!-\\!h_{32}^2\\\\\nh_{31}h_{42}+h_{32}h_{41} & 2\\left( h_{31}h_{41}\\!\\!-\\!h_{32}h_{42} \\right)\\\\\nh_{41}h_{42} & h_{41}^2\\!\\!-\\!h_{42}^2\\\\\n\\end{array} \\right]^{\\!\\!\\mathrm{T}} \\left[ \\!\\!\n\\begin{array}{c}\nq_{11} \\\\ q_{13} \\\\ q_{23} \\\\ q_{33} \\\\ q_{34} \\\\ q_{44}\n\\end{array} \\!\\! \\right] = \\bm{0}.\n\\label{eq_hq=0}\n\\end{equation}\n\n\nBy stacking at least three such equations as in Eq.\\ref{eq_hq=0}, we in general have a unique non-zero solution for $\\bm{q}$ denoted as $\\hat{\\bm{q}}= \\left(\\hat{q}_{11}, \\hat{q}_{13}, \\hat{q}_{23}, \\hat{q}_{33}, \\hat{q}_{34}, \\hat{q}_{44}\\right)\\!\\! ^\\mathrm{T}$, which is defined up to an unknown scale factor $\\lambda \\left( \\lambda \\bm{q}=\\hat{\\bm{q}}\\right)$. 
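As a sanity check (an illustrative Python sketch, not part of the paper's MATLAB pipeline), one can verify that the two columns in Eq.\ref{eq_hq=0} are exactly the coefficients of $\bm{q}$ in the constraints $\bm{h}_1^{\mathrm{T}}\bm{Q}\bm{h}_2 = 0$ and $\bm{h}_1^{\mathrm{T}}\bm{Q}\bm{h}_1 - \bm{h}_2^{\mathrm{T}}\bm{Q}\bm{h}_2 = 0$:

```python
import random

def Q_from_q(q):
    # Rebuild the symmetric Q of Eq. (12) from its six distinct entries
    # q = (q11, q13, q23, q33, q34, q44); note Q[1][1] = Q[0][0] = q11.
    q11, q13, q23, q33, q34, q44 = q
    return [[q11, 0.0, q13, 0.0],
            [0.0, q11, q23, 0.0],
            [q13, q23, q33, q34],
            [0.0, 0.0, q34, q44]]

def quad(h, Q, g):
    # The bilinear form h^T Q g for 4-vectors h, g.
    return sum(h[i] * Q[i][j] * g[j] for i in range(4) for j in range(4))

def rows_eq14(h1, h2):
    # The two coefficient columns of Eq. (14), written as rows acting on q.
    h11, h21, h31, h41 = h1
    h12, h22, h32, h42 = h2
    row_a = [h11*h12 + h21*h22, h11*h32 + h12*h31, h21*h32 + h22*h31,
             h31*h32, h31*h42 + h32*h41, h41*h42]
    row_b = [h11**2 - h12**2 + h21**2 - h22**2, 2*(h11*h31 - h12*h32),
             2*(h21*h31 - h22*h32), h31**2 - h32**2,
             2*(h31*h41 - h32*h42), h41**2 - h42**2]
    return row_a, row_b

random.seed(1)
q = [random.uniform(-1.0, 1.0) for _ in range(6)]
h1 = [random.uniform(-1.0, 1.0) for _ in range(4)]
h2 = [random.uniform(-1.0, 1.0) for _ in range(4)]
row_a, row_b = rows_eq14(h1, h2)
Q = Q_from_q(q)
```

For random `q`, `h1`, `h2`, the dot products of `row_a` and `row_b` with `q` reproduce the two bilinear constraints, which is why stacking such rows over image pairs determines $\hat{\bm{q}}$ up to scale.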
Once $\\bm{Q}$ is estimated, we can solve for all the parameters in $\\mathscr{X}_d$:\n\n\\begin{equation}\n\\begin{aligned}\n& \\lambda = f^{\\prime 2} \/ \\hat{q}_{11} \\left[ \\left(\\hat{q}_{33}\\hat{q}_{44}\\!-\\!\\hat{q}_{34}^2 \\right)-\\hat{q}_{44}\/\\hat{q}_{11} \\left( \\hat{q}_{13}^2+\\hat{q}_{23}^2 \\right) \\right], \\\\\n& k_{xy} = \\sqrt{ \\hat{q}_{44} \/ \\hat{q}_{11}} ,\\\\\n& k_{uv} = \\sqrt{ \\hat{q}_{44} \/ \\hat{q}_{11} } \\left( 1+f^\\prime \\hat{q}_{34} \/ \\hat{q}_{44} \\right),\\\\\n& u_0 = -f^\\prime \\hat{q}_{13} \/ \\hat{q}_{11},\\\\\n& v_0 = -f^\\prime \\hat{q}_{23} \/ \\hat{q}_{11},\\\\\n& f = \\left( \\sqrt{ \\lambda \/ \\hat{q}_{44} } \\right) \/ k_{uv}.\n\\end{aligned}\n\\label{eq_linearSln}\n\\end{equation}\n\nAfter solving $\\mathscr{X}_d$, the virtual ray $\\left(x,y,i,j,1\\right)^\\mathrm{T}$ is related to a ray passing through\n$\\left( k_{xy}^\\prime \/ k_{xy} x, k_{xy}^\\prime \/ k_{xy} y, 0 \\right)^\\mathrm{T}$ and \n$\\left(k_{u\\!v}^\\prime \/ k_{u\\!v} i \\!+\\! u_0^\\prime \\!-\\! u_0, k_{u\\!v}^\\prime \/ k_{u\\!v} j \\!+\\! v_0^\\prime \\!-\\! v_0, f \\right)^\\mathrm{T}$\nin the real world scene in metric distance, and the 3D points $\\bm{X}_c$ are recovered. Let's denote the parameters of the TPP in the real world scene by $\\mathscr{X}\\!\\!=\\!\\! \\left( k_{x\\!y}^\\prime \/ k_{x\\!y}, k_{x\\!y}^\\prime \/ k_{x\\!y}, k_{u\\!v}^\\prime \/ k_{u\\!v}, k_{u\\!v}^\\prime \/ k_{u\\!v}, u_0^\\prime\\!\\!-\\! u_0, v_0^\\prime\\!\\!-\\! v_0, f \\right)\\!^\\mathrm{T}$. 
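The decoding step just described can be written as a tiny helper. The following sketch is ours for illustration (the tuple layouts and the name `decode_ray` are assumptions, not the paper's code):

```python
def decode_ray(x, y, i, j, setting, calibrated):
    # Decode the virtual ray (x, y, i, j, 1)^T into a metric ray in the scene.
    # 'setting' holds the arbitrary initial guess (k_xy', k_uv', u0', v0') and
    # 'calibrated' the solved (k_xy, k_uv, u0, v0, f); returns the two points
    # the ray passes through, on the planes Z = 0 and Z = f.
    k_xy_p, k_uv_p, u0_p, v0_p = setting
    k_xy, k_uv, u0, v0, f = calibrated
    p0 = (k_xy_p / k_xy * x, k_xy_p / k_xy * y, 0.0)
    p1 = (k_uv_p / k_uv * i + u0_p - u0, k_uv_p / k_uv * j + v0_p - v0, f)
    return p0, p1
```

For example, with an identity initial setting `(1, 1, 0, 0)`, the pixel/micro-lens pair `(x, y, i, j)` maps to the points `(x / k_xy, y / k_xy, 0)` and `(i / k_uv - u0, j / k_uv - v0, f)`.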
Then the extrinsic parameters $\\bm{R}_i$, $\\bm{t}_i$\nfor the $i^\\mathrm{th}$ raw image are computed:\n\n\\vspace{-2mm}\n\\begin{equation}\n\\begin{aligned}\n& \\bm{r}_1 = \\bm{P}^{-1} \\bm{h}_1 \/ \\left\\| \\bm{P}^{-1} \\bm{h}_1 \\right\\|,\\\\\n& \\bm{r}_2 = \\bm{P}^{-1} \\bm{h}_2 \/ \\left\\| \\bm{P}^{-1} \\bm{h}_2 \\right\\|,\\\\\n& \\bm{r}_3 = \\bm{r}_1 \\times \\bm{r}_2,\\\\\n& \\bm{t}\\,\\,\\, = \\bm{P}^{-1} \\bm{h}_3 \/ \\left\\| \\bm{P}^{-1} \\bm{h}_1 \\right\\|.\n\\end{aligned}\n\\label{eq_r1r2r3}\n\\end{equation}\n\n\n\\subsection{Nonlinear Optimization}\n\\label{subsec:nonlinearOpt}\n\nThe optical properties of the lenses and the physical machining error of the MLA lead to distortion of the rays. The distortion is primarily radially symmetric due to the symmetric design of the plenoptic camera. Moreover, the $x$-$y$ plane and $u$-$v$ plane are related to the image sensor and the MLA respectively. Therefore, we employ the radial distortion on the two coordinates of the TPP \\cite{Zhang2000A}:\n\n\\vspace{-2mm}\n\\begin{equation}\n\\begin{aligned}\n& \\hat{x}^d = (\\hat{x}-x_c)(1+s_1r_{\\!x\\!y}^2+s_2r_{\\!x\\!y}^4)+x_c,& \\\\\t\n& \\hat{y}^d = (\\hat{y}-y_c)(1+s_1r_{\\!x\\!y}^2+s_2r_{\\!x\\!y}^4)+y_c,& \\\\\n& \\hat{u}^d = (\\hat{u}-u_c)(1+t_1r_{\\!u\\!v}^2+t_2r_{\\!u\\!v}^4)+u_c,& \\\\\n& \\hat{v}^d = (\\hat{v}-v_c)(1+t_1r_{\\!u\\!v}^2+t_2r_{\\!u\\!v}^4)+v_c,&\n\\end{aligned}\n\\label{eq_dirtortion}\n\\end{equation}\n\n\\noindent where $\\left(x_c, y_c\\right)\\!\\! ^\\mathrm{T}$ and $\\left(u_c, v_c\\right)\\!\\! ^\\mathrm{T}$ are the offsets serving as the distortion centers on the two planes, \n$\\left(\\hat{x}^d, \\hat{y}^d\\right)\\!\\! ^\\mathrm{T}$ and $\\left(\\hat{u}^d, \\hat{v}^d\\right)\\!\\! ^\\mathrm{T}$ are the distorted points, $r_{\\!xy}\\!=\\!\\sqrt{\\left(\\hat{x}\\!-\\!x_c\\! \\right)^2 \\!+\\! \\left(\\hat{y}\\!-\\!y_c\\! \\right)^2}$ and $r_{\\!uv}\\!=\\!\\sqrt{\\left(\\hat{u}\\!-\\!u_c\\! \\right)^2 \\!+\\! \\left(\\hat{v}\\!-\\!v_c\\! 
\\right)^2}$. The parameters $s_i$ and $t_i$ are the distortion coefficients.\n\nWe minimize the following cost function with the initialization solved in Section~\\ref{subsec:linearInit} to refine the intrinsic and extrinsic parameters, including the distortion coefficients:\n\n\\vspace{-2mm}\n\\begin{equation}\n\\sum_{i=1}^{n}\\!\\sum_{j=1}^{p_i} { \\left\\| \\bm{x}_{i,j} - \\hat{\\bm{x}}_{i,j}^d \\left( \\mathscr{X}, s_1, s_2, t_1, t_2, \\bm{R}_i, \\bm{t}_i, \\bm{X}_{w,i} \\right) \\right\\| },\n\\label{eq_re-projectionError}\n\\end{equation}\n\n\\noindent where $\\bm{x}_{i,j}$ is the $j^\\mathrm{th}$ projection of the prior scene point $\\bm{X}_{w,i}$ on the image coordinate, and $p_i$ is the number of the projections of $\\bm{X}_{w,i}$. In Eq.\\ref{eq_re-projectionError}, $\\bm{R}$ is parameterized by the Rodrigues formula \\cite{faugeras1993three}. Equation \\ref{eq_re-projectionError} is the re-projection error in traditional computer vision. In addition, the Jacobian matrix of the cost function is simple and sparse. The problem can be solved with the LM algorithm based on the trust region method \\cite{madsen2004methods}. 
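The distortion model of Eq.\ref{eq_dirtortion} is a direct per-plane map; a minimal Python sketch of this inner step of the residual (variable names are ours, for illustration):

```python
def radial_distort(p, center, c1, c2):
    # Eq. (15): radially distort a point p = (px, py) about 'center' with
    # second- and fourth-order coefficients c1, c2. The same form is applied
    # on the x-y plane (with s1, s2) and on the u-v plane (with t1, t2).
    dx, dy = p[0] - center[0], p[1] - center[1]
    r2 = dx * dx + dy * dy
    factor = 1.0 + c1 * r2 + c2 * r2 * r2
    return (dx * factor + center[0], dy * factor + center[1])
```

With `c1 = c2 = 0` the map is the identity, which is why zero distortion coefficients are a safe starting point for the LM refinement.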
We use MATLAB's lsqnonlin function to complete the optimization.\n\n\\subsection{Summary}\n\\label{subsec:summary}\n\nOur proposed calibration procedure is listed as follows:\n\n\\begin{enumerate}\n\\item Take at least 3 raw images with different poses of the calibration board by moving either the calibration board or the camera.\n\\item Detect the multiple projections corresponding to the scene points.\n\\item Calculate the $4\\times3$ homography $\\bm{H}_i$ for the $i^\\mathrm{th}$ raw image via Eq.\\ref{eq_MH=0}.\n\\item Estimate the intrinsic and extrinsic parameters via Eqs.\\ref{eq_hq=0}, \\ref{eq_linearSln} and \\ref{eq_r1r2r3}.\n\\item Refine all the parameters via Eq.\\ref{eq_re-projectionError} using the LM algorithm.\n\\end{enumerate}\n\n\n\\section{Experimental Results}\n\\label{sec:ExpResults}\n\nIn experiments, we apply our calibration method to simulated data and real world scene data. The prior scene points $\\bm{X}_w$ are obtained by a planar calibration board with a circular grid pattern (Fig.\\ref{fig:physicalCamera}). Due to the inevitable misalignment of the MLA and the image sensor, a preprocessing step on the raw image is needed (Section~\\ref{subsec:rect}).\n\n\n\\subsection{Rectification}\n\\label{subsec:rect}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=50mm]{fig4_misalignment_MLA}\n\\vspace{-3mm}\n\\caption{\nThe misalignment of the MLA in a plenoptic camera.}\n\\label{fig:misalignment_MLA}\n\\end{figure}\n\n\nIn a physical plenoptic camera, there is a slight rotation between the MLA and the image sensor \\cite{thomason2014calibration}, as shown in Fig.\\ref{fig:misalignment_MLA}. Let's denote the optical center and diameter of the micro-lens by $(x_g, y_g, z_g )^\\mathrm{T}$ and $d_m$ respectively; then we have:\n\n\\begin{equation}\n\\left[ \\begin{array}{c;{2pt\/2pt}c}\n & x_m \\\\\t\n\\bm{R}_{m\\!l\\!a} & y_m \\\\\t\n & L\n\\end{array} \\right]\t\\left[\\!\\! 
\\begin{array}{c}\nid_{m} \\\\ jd_{m} \\\\ 0 \\\\ 1\n\\end{array} \\!\\!\\right] = \\left[\\! \\begin{array}{c}\nx_g \\\\ y_g \\\\ z_g \\\\ 1\t\n\\end{array} \\!\\right],\n\\label{eq_RtXg}\n\\end{equation}\n\n\\noindent where $\\bm{R}_{mla} \\!\\!\\in\\!\\! SO(3)$, and $\\left( x_m,y_m,L \\right)\\!\\! ^\\mathrm{T}$ is the offset between the reference micro-lens and the main lens (Fig.\\ref{fig:misalignment_MLA}). Therefore, the geometric center of the micro-lens image $(x_{g}^\\prime, y_{g}^\\prime)\\! ^\\mathrm{T}$ is:\n\n\\vspace{-1mm}\n\\begin{equation}\n\\left[\\! \\begin{array}{c}\nx_g^{\\,\\prime}\t\\\\ y_g^{\\,\\prime}\n\\end{array} \\!\\right] = \\frac{L+l}{z_g} \\left[\\! \\begin{array}{c}\nx_g\t\\\\ y_g\n\\end{array} \\!\\right].\n\\label{eq_xg_}\n\\end{equation}\n\nThe centers of the micro images $\\left(x_g^\\prime, y_g^\\prime \\right)\\!\\!^\\mathrm{T} $ are recognized from a raw image of a white scene or another pure color \\cite{cho2013modeling,Chunping2016Decoding}. The misalignment of the MLA makes the diameters of the micro-lens images non-uniform. As shown in Fig.\\ref{fig:slopes_centers}, the slopes fitted to each line of micro-lens image centers descend linearly. The rate of descent is related to the rotation of the MLA. The range of the slopes of the rectified images is smaller than that of the original images. It indicates that the diameters of the rectified micro images are more uniform. Taking the misalignment of the MLA into account \\cite{thomason2014calibration}, we rectify the raw image by a homography to make the micro-lens images uniform. More importantly, by Eq.\\ref{eq_RtXg} and Eq.\\ref{eq_xg_}, a homography with 8 degrees of freedom is sufficient to preprocess the raw image, and thus the two non-parallel light field planes are reparameterized to parallel planes. 
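The effect of the MLA rotation on the micro-image centers (Eq.\ref{eq_RtXg} and Eq.\ref{eq_xg_}) can be reproduced numerically. The sketch below is illustrative: it considers a single row of micro-lenses and an assumed small rotation about the $y$ axis, not a calibrated misalignment:

```python
import math

def micro_image_centers(n, d_m, theta, x_m, y_m, L, l):
    # One row of n micro-lenses with pitch d_m: the MLA is rotated by theta
    # about the y axis (Eq. (16)), and each optical center is projected onto
    # the image sensor, which lies L + l behind the main lens (Eq. (17)).
    centers = []
    for i in range(n):
        x_g = math.cos(theta) * i * d_m + x_m
        y_g = y_m
        z_g = -math.sin(theta) * i * d_m + L
        scale = (L + l) / z_g
        centers.append((scale * x_g, scale * y_g))
    return centers
```

With `theta = 0` the projected centers are equally spaced; any nonzero rotation makes the spacing drift across the row, which is exactly the non-uniformity that the homography-based rectification removes.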
The experiments in Section~\\ref{subsec:simu} and \\ref{subsec:physical} are based on the rectification.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=60mm]{fig5_slopes_centers}\n\\caption{\nThe slopes fitted to 88 lines of micro-lens image centers from the original and rectified white images of a physical focused plenoptic camera. Every slope is fitted from 115 centers in the same row of micro-images.}\n\\label{fig:slopes_centers}\n\\end{figure}\n\n\n\\subsection{Simulated data}\n\\label{subsec:simu}\n\nWe first verify the calibration method on the simulated images rendered in MATLAB. The image sensor resolution is $4008\\times2672$ with 9 $\\mu m$ pixel width. The focused plenoptic camera consists of a main lens with 50 $mm$ focal length and an MLA with 300 $\\mu m$ diameter and 2.726 $mm$ focal length in a hexagonal layout. The calibration board is a pattern with $5\\times5$ points of $54.0\\times54.0 \\, mm$ cells. We render 12 raw images with different poses of the board.\n\nFor a focused plenoptic camera, to recognize the multiple projections of the same scene point, we preprocess the raw image using a white image and then use template matching by normalized cross-correlation (NCC).\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=105mm]{fig67_simuResult}\n\\caption{\nThe calibration errors on simulated data with different numbers of poses (the left-two) and different noise levels (the right-two).}\n\\label{fig:SimuResults}\n\\end{figure}\n\nWe test the performance with respect to the number of poses of the calibration board. We vary the number of images from 3 to 12. The calibration results with an increasing number of poses are shown in Fig.\\ref{fig:SimuResults}. The errors decrease when more images are used. From 3 to 4 poses, the errors of most parameters decrease significantly. When the number of poses is more than 6, all parameters tend to be stable. 
The ground truth is calculated from the input parameters of the simulation and the equations in Section~\\ref{subsec:tpp2}. \n\nIn addition, we perturb the projections of all 12 raw images with Gaussian noise of zero mean and standard deviation varied from 0.1 pixels to 0.8 pixels. The results are shown in Fig.\\ref{fig:SimuResults}. Since there are at least 12 projections of the same scene point in a single raw image, the calibration results are still reasonable at the different noise levels. This verifies the robustness of the calibration method.\n\n\n\n\\subsection{Physical camera}\n\\label{subsec:physical}\n\n\\vspace{-3mm}\n\\begin{figure}\n\\centering\n\\includegraphics[width=90mm]{Fig8_physicalCamera}\n\\vspace{-2mm}\n\\caption{\nThe self-assembled focused plenoptic camera and the MLA inside the camera.}\n\\label{fig:physicalCamera}\n\\end{figure}\n\n\nWe capture raw images of a calibration board and a real scene using a self-assembled focused plenoptic camera. The camera and the MLA are shown in Fig.\\ref{fig:physicalCamera}. The camera consists of a GigE camera with a CCD image sensor whose resolution is $4008\\times2672$ with 9 $\\mu m$ pixel width, a Nikon AF Nikkor f\/1.4D F-mount lens with 50 $mm$ focal length, and a MLA with 300 $\\mu m$ diameter and 2.726 $mm$ focal length in hexagon layout. The calibration board is $10\\times10$ points with 2 $cm$ $\\times$ 2 $cm$ cells (Fig.\\ref{fig:physicalCamera}). We shoot 9 raw images with different poses of the board, and 9 raw images with the same real scene and a calibration board. 
All the raw images are preprocessed by the method mentioned in Section~\\ref{subsec:rect}.\n\n\\begin{table}[tbp]\n\\caption{Calibration results of a physical camera with different numbers of poses.}\n\\centering\n\\tiny\n\\label{cp_tab4}\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n\\hline\nParame & \\multicolumn{2}{|c|}{3 poses} & \\multicolumn{2}{|c|}{5 poses} & \\multicolumn{2}{|c|}{7 poses} & \\multicolumn{2}{|c}{9 poses} \\\\\n\\cline{2-9}\n-ter & $4\\times 5$ & $8\\times10$ & $4\\!\\times\\! 5$ & $8\\!\\times\\! 10$ & $4\\!\\times\\! 5$ & $8\\!\\times\\! 10$ & $4\\!\\times\\! 5$ & $8\\!\\times\\! 10$ \\\\\n\\hline\n$k_x$ & 31.1525 & 32.2787 & 29.0403 & 28.7948 & 27.5295 & 27.7046 & 27.1743 & 27.0686\\\\\n$k_y$ & 30.9687 & 32.1397 & 29.0474 & 28.8172 & 27.5140 & 27.6846 & 27.1752 & 27.0632\\\\\n$k_u$ & 1230.23 & 1271.44 & 1152.98 & 1143.66 & 1097.09 & 1103.30 & 1084.01 & 1079.72\\\\\n$k_v$ & 1224.51 & 1267.02 & 1153.47 & 1144.48 & 1096.8 & 1102.77 & 1084.28 & 1079.66\\\\\n$u_0$\/pixel & -3364.99 & -3598.28 & -3440.39 & -3604.41 & -3475.55 & -3685.79 & -3508.18 & -3691.41\\\\\n$v_0$\/pixel & -7162.50 & -7417.96 & -6960.49 & -7075.08 & -6891.50 & -7043.57 & -6867.73 & -6987.77\\\\\n$f$\/pixel & 31098.7 & 31605.3 & 30309.7 & 30162.6 & 29878.2 & 30012.6 & 29742.5 & 29701.1\\\\\n\\hline\n$s_1$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n$s_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n$t_1$ & -2.44e-13 & -2.54e-13 & -3.81e-13 & -3.84e-13 & -3.85e-13 & -3.85e-13 & -3.72e-13 & -3.80e-13\\\\\n$t_2$ & 1.79e-22 & 1.60e-22 & 3.059e-22 & 2.89e-22 & 3.65e-22 & 3.32e-22 & 3.69e-22 & 3.47e-22\\\\\n\\hline\nRMS & 0.30394 & 0.31505 & 0.27223 & 0.29952 & 0.26344 & 0.29151 & 0.24877 & 
0.27846\\\\\n\\hline\n\\end{tabular}\n\\label{tab:RealResult}\n\\vspace{-2mm}\n\\end{table}\n\n\\begin{figure}[tbp]\n\n\\centering\n\\subfigure[\\,]{\n\\label{fig:RealResultPoses}\n\\includegraphics[width=0.55\\textwidth]{fig12_RealResultPoses}}\n\\vspace{-1mm}\n\\subfigure[\\,]{\n\\label{fig:RealResultErrors}\n\\includegraphics[width=0.42\\textwidth]{fig12_RealResultErrors}}\n\\caption{\n(a) shows the estimated poses of the 9 raw images with a calibration board (the black parallelogram on the top). (b) shows the histograms of the distribution of the re-projection error. The histograms are calculated without and with the distortion coefficients, and the mean errors are 0.31623 and 0.27846 pixels respectively.}\n\\label{fig:RealResult}\n\\end{figure}\n\nThe results of the estimated intrinsic parameters are listed in Tab.\\ref{tab:RealResult}. We apply our calibration method to the first 3, 5, 7, and all 9 raw images in different poses. As shown in Tab.\\ref{tab:RealResult}, the re-projection RMS error is less than 0.3 pixels when at least 5 poses are used. The RMS errors are quite consistent with different numbers of poses. With the same number of poses, the parameters are close when the number of calibration points is changed from $4\\times 5$ points to $8\\times 10$ points. The poses estimated using the 9 raw images with a calibration board and the histogram of the re-projection errors are shown in Fig.\\ref{fig:RealResult}. 
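The RMS value reported in Tab.\ref{tab:RealResult} is the usual root-mean-square of the 2-D re-projection residuals; a minimal sketch with made-up residual values (not the measured ones):

```python
import numpy as np

def reprojection_rms(observed, reprojected):
    """Root-mean-square of the 2-D re-projection residuals, in pixels."""
    residuals = np.linalg.norm(observed - reprojected, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# illustrative detected vs. re-projected point locations, in pixels
obs = np.array([[100.0, 200.0], [150.0, 250.0], [210.0, 90.0]])
rep = np.array([[100.3, 199.8], [149.7, 250.2], [210.1, 90.2]])
rms = reprojection_rms(obs, rep)
```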
Fig.\\ref{fig:RealResultErrors} shows that most of the errors are less than 0.5 pixels, and the re-projection errors decrease after the optimization of the distortion coefficients.\n\n\\begin{figure}\n\\vspace{-2mm}\n\\centering\n\\includegraphics[width=120mm]{fig11_RenderPipeline}\n\\caption{\nThe refocus rendering pipeline of a focused plenoptic camera.}\n\\label{fig:RenderPipeline}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\subfigure[\\,]{\n\\label{fig:RenderImgsSub1}\n\\includegraphics[width=0.52\\textwidth]{fig10_RenderImgs}}\n\\subfigure[\\,]{\n\\label{fig:RenderImgsSub2}\n\\includegraphics[width=0.45\\textwidth]{fig10_pose2fig}}\n\\vspace{-4mm}\n\\caption{\n(a) shows the rendered images from the physical focused plenoptic camera. The top row shows the leftmost and the rightmost views, and the bottom is the stitched image of the 9 different poses. (b) shows two of the views of the estimated poses from the 9 raw images.}\n\\label{fig:RenderImgs}\n\\end{figure}\n\nAfter the estimation of $\\mathscr{X}$, the virtual optical centers of the MLA are calculated, and thus the rays' directions are obtained \\cite{perwass2012single}. The refocus rendering pipeline of a focused plenoptic camera is shown in Fig.\\ref{fig:RenderPipeline}. The images rendered by ray tracing using the 9 raw images in different poses are shown in Fig.\\ref{fig:RenderImgs}.\n\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\n\nIn this paper, we present a novel unconstrained TPP model to describe the relationship between the 4D rays and the 3D scene structure by a projective transformation. To calibrate the focused plenoptic camera, we simplify the imaging system as a 7-parameter TPP model. Compared with previous calibration methods for focused plenoptic cameras, we substitute for the refraction of the main lens and simplify the imaging system as a 7-intrinsic-parameter unconstrained TPP model, which is closer to the optical path and the imaging principles. 
We derive the closed-form solution for the intrinsic and extrinsic parameters and then refine the parameters by minimizing the re-projection error via the LM algorithm. Both simulated data and real data verify the robustness and validity of our proposed method. Due to the image features, the multiple projections are easier to recognize in a focused plenoptic camera. The recognition of projections in a conventional plenoptic camera is addressed in \\cite{bok2014geometric,bergamasco2015adopting}. Moreover, the TPP coordinate system for the conventional plenoptic camera can be regarded as the main lens plane and the focal plane of the MLA outside the camera. Therefore, the calibration for a conventional plenoptic camera can be completed by estimating the view coordinates, the focal length, and the principal points of the sub-aperture images.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdrea b/data_all_eng_slimpj/shuffled/split2/finalzzdrea new file mode 100644 index 0000000000000000000000000000000000000000..79ff5daeb8a3fd65d44b61580476b7a9a3989f4b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdrea @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\nIt is known that several deeply-embedded young stellar objects are driving highly-collimated molecular outflows with an extremely-high velocity (EHV) and jet-like component running along the axes of the lobes \\citep[e.g.,][]{Bac96}. 
\nSince the EHV molecular jets, having terminal velocities of 50--150 km s$^{-1}$, are concentrated within narrow angles along the axes of the lobes and have large momenta comparable to those of the slowly moving (20--30 km s$^{-1}$) ``classical'' outflows, the EHV jets are considered to be closely connected to the ``primary jet'' which is responsible for driving molecular outflows.\nTherefore, studying the physical and kinematical properties of the EHV jet will allow us to understand the properties of the ``primary jet'', which provides clues to constrain the launching mechanism of the outflows.\n\nThe low-luminosity (7.5 $L_{\\odot}$; \\citet{Tob07}) class 0 source L1448C (also known as L1448-mm) in the Perseus molecular cloud complex (D$\\sim$250 pc: e.g., \\citet{Eno06}) is a spectacular example of an outflow with an EHV jet.\nThe EHV component of this source was identified as the secondary peaks of the CO $J$=2--1 spectra at ${\\sim}{\\pm}$60 km s$^{-1}$ from the cloud systemic velocity \\citep{Bac90}.\nThe CO emission in the EHV range was found to be confined to a series of discrete clumps, called ``bullets'', which are aligned along the axes of the outflow lobes and are symmetrically placed with respect to the central source.\nThe EHV bullets are also observed in several transitions of SiO \\citep[e.g.][]{Bac91, Dut97}.\nSince the SiO emission has been barely detected in quiescent dark clouds because of its very low abundance (of the order of 10$^{-12}$; \\citet{Ziu89,Mar92}), the detection of SiO in EHV bullets suggests the presence of shocks that enhanced the SiO abundance in the bullet gas by a factor of $\\gtrsim$10$^4$.\nAlthough the lower transition of SiO, i.e. $J$=2--1, was observed not only in the EHV bullets but also in the lower velocity component that delineates the tips and walls of the outflow cavities, the $J$=5--4 emission was confined to a pair of EHV bullets located at the closest positions to the star \\citep{Bac91}. 
\nThis suggests that the excitation condition of the EHV jet varies along the axes, and that the jet gas is highly excited in the close vicinity of the driving source.\nRecently, higher excitation SiO up to $J$=11-10 has been observed by \\citet{Nis07}.\nTheir results have revealed that the innermost pair of bullets, labeled B1 and R1 by \\citet{Bac90}, have a density of $n_{\\rm H_2}\\sim10^6$ cm$^{-3}$ and a kinetic temperature of $T_{\\rm kin}\\gtrsim$ 500 K, which is denser and warmer than the bullets in the downstream.\nIt is also known that the innermost pair of bullets, B1 and R1 are resolved into two clumps, BI-BII and RI-RII, respectively, in the higher resolution ($\\sim$2\\arcsec) interferometric SiO $J$=2--1 observations \\citep{Gui92, Gir01}.\nThe high resolution SiO data exhibit the kinematic structure of the EHV jet near the source, which shows an apparent acceleration of the jet up to 70 km s$^{-1}$ within a region of 6$''$ ($\\sim$2000 AU).\nThe proper motion measurement of the SiO clumps carried out by \\citet{Gir01} suggests that the outflow axis is inclined by 21$^{\\circ}$ with respect to the plane of the sky, and therefore, the SiO clumps in the EHV jet are likely to be moving with absolute velocities of 180 km s$^{-1}$.\n\n\nIn this paper, we present the SiO $J$=8--7, CO $J$=3--2, and 350 GHz continuum images obtained with the Submillimeter Array\\footnote{The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.} (SMA) \\citep{Ho04} at $\\sim$1 arcsecond resolution, which is a factor of three higher than the previous SiO $J$=2--1 images \\citep{Gui92, Gir01}.\nA high angular resolution is crucial for studying the structure and kinematics of the jet near the base, at which the jet velocity increases up to the terminal velocity.\nIn addition, higher transitions of SiO and CO in 
the submillimeter wavelength range enable us to segregate the dense and warm gas in the EHV jet from the lower excitation gas in the cavity wall.\n\n\\section{OBSERVATIONS}\n\nThe observations of the SiO $J$=8--7 and CO $J$=3--2 lines and 350 GHz continuum emission were carried out with the SMA on 2006 December 5 in the extended configuration and on 2006 December 25 in the compact configuration.\nThe two array configurations provided projected baselines ranging from 12\\,m to 222\\,m.\nSince the primary beam of the SMA antenna has a size of $\\sim$35\\arcsec, two pointings separated by 17\\arcsec were observed in order to cover the EHV bullet pair closest to the central source.\nThe receivers have two sidebands; the lower and upper sidebands covered the frequency ranges from 345.5 to 347.5 GHz, and from 355.5 to 357.5 GHz, respectively.\nThe SiO $J$=8--7 and CO $J$=3--2 lines were observed simultaneously in the lower sideband.\nThe SMA correlator divides each sideband of 2 GHz bandwidth into 24 ``chunks'' of 104 MHz width.\nWe used the configuration that gave 256 channels to all chunks, which provided a uniform frequency resolution of 406.25\\,kHz across a 2\\,GHz-wide band.\nThe corresponding velocity resolution was 0.35 km s$^{-1}$.\nWe used Titan for flux calibration, and a pair of quasars 3C84 and 3C111 for amplitude and phase calibrations.\nThe flux calibration was estimated to be accurate to 25\\%.\nThe bandpass was calibrated by observing 3C273.\n\nThe calibrated visibility data were Fourier transformed and CLEANed using the MIRIAD package.\nThe velocity-channel maps of the SiO and CO were made with a velocity interval of 1 km s$^{-1}$.\nThe synthesized beam size of the SiO map was 0\\farcs96$\\times$0\\farcs84 with a position angle of $-$84$^{\\circ}$ and that of the CO map was 0\\farcs96$\\times$0\\farcs86 with a position angle of $-$76$^{\\circ}$ with uniform weighting.\nThe rms noise level of the velocity-channel map at 1.0 km s$^{-1}$ width was 0.15 Jy 
beam$^{-1}$.\nA non-linear joint deconvolution, MOSSDI, which is based on the CLEAN algorithm and is part of the MIRIAD package \\citep{Sau96}, was used for deconvolving the images. \n\nThe 350 GHz continuum map was obtained by averaging the line-free chunks of both sidebands.\nTo improve the signal-to-noise ratio, the upper and lower sideband data were combined.\nThe synthesized beam size of the map made with uniform weighting was 0\\farcs93$\\times$0\\farcs83 with a position angle of $-$86$^{\\circ}$.\nThe rms noise level of the 350 GHz continuum map was 6.4 mJy beam$^{-1}$.\n\n\\section{RESULTS}\n\\subsection{Continuum emission}\n\nThe 350 GHz continuum map reveals a bright compact source at the center.\nThis source has a peak intensity of 352 mJy beam$^{-1}$, and is surrounded by spatially extended emission.\nIn addition to this bright source, there is a faint emission peak at the $\\sim$5$\\sigma$ level at $\\sim$8\\farcs3 southeast of the center.\nRecent {\\it Spitzer Space Telescope} observations at mid-infrared wavelengths have resolved L1448C into two components, L1448C(N) and L1448C(S) \\citep{Jor06}.\nThe bright submillimeter source at the center corresponds to L1448C(N), which is considered to be the driving source of the highly-collimated molecular outflow and is also referred to as L1448-mm, and the southern faint source corresponds to L1448C(S).\nDust continuum emission from L1448C(S) was also detected by \\citet{Jor07} at 230 GHz and 350 GHz with the SMA, and by \\citet{Mau10} at 107 GHz with the Plateau de Bure Interferometer (PdBI).\n\nThe visibility amplitude plot for L1448C(N) as a function of {\\it uv} distance (Fig.~\\ref{fig2}) suggests that this source consists of two components; one is from a spatially extended envelope that dominates the flux at a {\\it uv} distance of $<$50 k$\\lambda$, and the other is from a compact source that is prominent at $>$50 k$\\lambda$.\nThe visibility amplitude profile was fit by two circular Gaussian 
components; one is an extended component with a deconvolved size of $\\sim$3.6\\arcsec and a flux of $\\sim$440 mJy, and the other is a compact component with a deconvolved size of $\\sim$0.3\\arcsec and a flux of $\\sim$330 mJy.\nThe peak flux values of the extended and compact components correspond to $\\sim$27 mJy beam$^{-1}$ and $\\sim$330 mJy beam$^{-1}$, respectively, per 0\\farcs93$\\times$0\\farcs83 beam.\nAs shown in Fig. 1a, the extended component appears as a bump in the northwest and a tail in the southeast (both at the 3$\\sigma$ level) of the central source, suggesting that this emission extends along the outflow axis.\nA similar emission feature along the outflow axis is also seen in the maps at 3 mm \\citep{Gui92}, 2.6 mm \\citep{Bac95}, 1.4 mm \\citep{Sch04}, and 1.3 mm \\citep{Mau10} observed with 1.5--3\\arcsec resolution.\nSuch a faint elongated feature was clearly seen in L1157-mm, and was interpreted as the edges of the cavity excavated by the outflow \\citep{Gue97}.\nIt is, therefore, possible that the extended emission in L1448C(N) also delineates the inner part of the envelope which is disturbed by the outflow, although the cavity-like structure is not clearly seen.\nAn alternative interpretation is that the faint emission comes from an embedded companion.\nA faint secondary peak seen in the 230 GHz map of \\citet{Jor07} supports this possibility.\nHowever, the position of the secondary peak at 230 GHz in \\citet{Jor07} is offset by 1\\arcsec toward the northwest from that of our 350 GHz map.\nAs discussed in \\citet{Jor07}, such a small difference in position between the 350 GHz peak and 230 GHz peak could be introduced by the extended emission from the envelope component sampled by different {\\it uv} coverages.\nSince the total flux at 350 GHz observed with the SMA (extended + compact components) corresponds to 43 \\% of the flux observed by \\citet{Hat05} at 850 $\\mu$m with SCUBA at the JCMT (1.737 Jy per 14\\arcsec beam), it is possible that 
the extended component that is not sampled well with the SMA affects the morphology of the faint component.\nAlthough the northwestern bump may harbor an embedded companion, it is likely that most of the extended emission arises from the inner part of the envelope.\nTherefore, the former scenario, which attributes the extended component to the envelope--outflow interacting region, is preferable in this case.\n\nThe visibility amplitude profile at {\\it uv} distances longer than $\\sim$50 k$\\lambda$ implies that the compact component is not point-like but has a spatially resolved structure.\nIn order to exclude the contamination from the extended envelope component, we made a map using the visibility data with the {\\it uv} distances greater than 70 k$\\lambda$ (see Fig.1b).\nThe synthesized beam of this map is 0\\farcs70$\\times$0\\farcs50 with a position angle of $-$87.2$^{\\circ}$.\nIt is shown that the compact source has an elongation along the axis perpendicular to the outflow axis.\nA two-dimensional Gaussian fit to the visibility data with {\\it uv} distances larger than 70 k$\\lambda$ yields the deconvolved major and minor axes of 0.37\\arcsec (90 AU) and 0.26\\arcsec (65 AU), respectively, with a position angle of $\\sim$70$^{\\circ}$.\nThe source position derived from the fit is $\\alpha$(J2000) = 3$^h$25$^m$38.873$^s$, $\\delta$(J2000)=30$^{\\circ}$44$'$05\\farcs35.\nThis position agrees well (to less than 0\\farcs1) with the 3.6 cm continuum position observed with a smaller beam of 0\\farcs31$\\times$0\\farcs27 \\citep{Rei02}.\n\nThe 350 GHz flux of L1448C(S) was measured to be $\\sim$60 mJy, which is consistent with that reported by \\citet{Jor07}.\nL1448C(S) was detected at 230 GHz by \\citet{Jor07} but not by \\citet{Mau10} at the same frequency.\nThe non-detection of this source by \\citet{Mau10} is probably because this source is located near the edge of the primary beam of the PdBI ($\\sim$22\\arcsec).\nIf the response of their primary beam is 
taken into account, the 230 GHz continuum source with a flux of 12.8$\\pm$3 mJy detected by \\citet{Jor07} could be below the detection limit of their observations ($\\sim$8.4 mJy beam$^{-1}$).\nAlthough L1448C(S) is bright in the mid-infrared, its sub-mm flux is more than ten times weaker than that of L1448C(N).\nIt is unlikely that such a small sub-mm flux is due to the effect of missing flux, because single-dish measurements at 450 and 350 $\\mu$m \\citep{Cha00} showed no hint of the secondary component to the south of L1448C(N) even though the angular resolution ($\\sim$8\\arcsec) was comparable to the separation of the two sources.\n\n\n\\subsection{The SiO Jet}\n\nThe SiO $J$=8--7 emission was detected in two velocity ranges from $-$70 to $-$12 km s$^{-1}$ (blueshifted) and from 20 to 71 km s$^{-1}$ (redshifted) with respect to the systemic velocity of $V_{\\rm LSR}{\\sim}$5.0 km s$^{-1}$.\nFig.~\\ref{fig3} shows velocity channel maps of the SiO $J$=8--7 emission at 10 km s$^{-1}$ intervals.\nIt is shown that the SiO emission comes from a narrow jet-like region with its blueshifted part to the northwest and the redshifted part to the southeast of L1448C(N).\nThe SiO $J$=8--7 jet is partially resolved along its minor axis; after deconvolution from the SMA beam, the width of the SiO jet is $\\sim$0.8\\arcsec ($\\sim$200 AU) FWHM on average.\nIn order to estimate the missing flux, we have smoothed the SMA map to a resolution of 14\\arcsec and compared it with the SiO $J$=8--7 spectra observed with the James Clerk Maxwell Telescope \\citep[JCMT;][]{Nis07}.\nIt is found that 85--100 \\% of the single-dish flux is recovered by the SMA, implying that almost all the SiO $J$=8--7 emission arises from the narrow jet.\nThe SiO $J$=8--7 emission mainly comes from the B1 and R1 ``bullets'' identified in the single-dish CO $J$=2--1 map by \\citet{Bac90}.\nThe SiO jet consists of a chain of knots with a typical size scale of $\\sim$1--1.5\\arcsec.\nA comparison with the previous SiO 
$J$=2--1 maps with $\\sim$3\\arcsec resolution \\citep{Gui92, Gir01} reveals that the three pairs of knots close to L1448C(N) seen in the SiO $J$=8--7 map correspond to the inner pair BI and RI in the SiO $J$=2--1 map, and the two pairs downstream correspond to the outer pair BII and RII.\nThe innermost knot pair BIa and RIa are located within 1\\arcsec (250 AU) from L1448C(N).\nThe high-resolution image also shows that the SiO jet is not straight.\nA close-up view of the high velocity component (Fig.~\\ref{fig5}) shows that the jet changes its position angle from +15$^{\\circ}$ at BII, $-$25$^{\\circ}$ at BI, $-$20$^{\\circ}$ at RI, to $-$5$^{\\circ}$ at RII.\nThe kinks between BI and BII, and RI and RII are also seen in the previous SiO $J$=2--1 maps of $\\sim$3\\arcsec resolution \\citep{Gui92, Gir01}.\nHowever, they are more obvious in the higher resolution image.\nIn addition, it is clear that the jet axes in BI and RI are also misaligned by 5$^{\\circ}$.\n\n\\subsection{CO Jet and outflow}\n\nThe CO $J$=3--2 emission was detected in the wide velocity range from $-$77 km s$^{-1}$ to $+$79 km s$^{-1}$ with respect to the systemic velocity.\nIn the velocity ranges of ${\\Delta}V < {\\pm}$40 km s$^{-1}$ (Fig. 4a--4d), the CO $J$=3--2 emission delineates two V-shaped structures open to the northwest and southeast with a common apex at the position of L1448C(N).\nThe opening angles of the V-shape features become narrower as the velocity offset increases, suggesting that the CO emission in the V-shaped features comes from the limb-brightened shells.\nThe largest opening angle of the shell is $\\sim$60$^{\\circ}$ in the blueshifted lobe, while it is $\\sim$40$^{\\circ}$ in the redshifted lobe.\nIn the higher velocity ranges of ${\\Delta}V = {\\pm}$41--70 km s$^{-1}$ (Fig. 
4e, 4f, and 4g), the CO emission comes from a narrow jet-like region.\nThe CO flux recovered by the SMA in each velocity range was estimated by comparing the CO spectra observed by the SMA with those observed by the JCMT.\nThe SMA map was smoothed to a 14\\arcsec resolution to match the beam of the JCMT.\nIt is found that the recovered flux is only $\\sim$20~\\% in the lowest velocity ranges (${\\Delta}V = {\\pm}$1--10 km s$^{-1}$; Fig. 4a), and is $\\sim$50\\% in the next velocity range of ${\\Delta}V = {\\pm}$11--20 km s$^{-1}$ (Fig. 4b). \nThe edges of the shells in Fig. 4a and 4b look very steep probably because a significant amount of the CO emission from the spatially extended component was filtered out by the interferometer.\nIn fact, previous examples of the L1157 outflow and IRAS 04166+2706 outflow revealed that the edges of the bipolar cavities were emphasized in the maps made with the interferometer data alone, while the cavities were filled by the diffuse CO emission when the single-dish data were added \\citep{Gue96,San09}.\nOn the other hand, in the higher velocity ranges with ${\\Delta}V > {\\pm}$20 km s$^{-1}$, 80--100\\% of the CO flux was recovered by the SMA.\nThis suggests that almost all the CO $J$=3--2 flux in the extremely high velocity ranges (Fig. 
4e, 4f, and 4g) comes from the narrow jet.\n\nAs in the case of the SiO jet, the CO $J$=3--2 jet also shows a clumpy structure.\nIn addition, most of the knots seen in the SiO $J$=8--7 map have their counterparts in the CO map.\nHowever, the innermost knots BIa and RIa, which are significant in the SiO map at ${\\Delta}V = {\\pm}$21--60 km s$^{-1}$, are barely seen in the CO map.\nThis is probably because most of the CO molecules in these knots are excited to levels higher than $J$=3 because of the high density and high temperature.\nA similar feature with strong SiO and weak CO in the close vicinity of the protostar was also observed in the highly collimated jet in the HH211 outflow \\citep{Pal06, Lee07b}.\nAnother difference between the CO jet and the SiO jet is seen between BIc and BIIa, and RIc and RIIa. \nThe CO map reveals the knot pair labeled BI-II and RI-II, while the SiO map shows only faint emission (Fig.~\\ref{fig5}).\nIn the CO $J$=3--2, the overall distribution of the EHV jet is more continuous than the SiO distribution. 
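The flux-recovery comparison above (smoothing the SMA maps to the 14\arcsec JCMT beam) can be illustrated with a toy sky: a compact knot plus an extended envelope, where the interferometer is crudely assumed to filter out the envelope entirely. All sizes and amplitudes below are invented for illustration, not fitted to the data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_to_beam(img, pix_arcsec, fwhm_arcsec):
    """Convolve a map with a Gaussian to mimic a single-dish beam."""
    sigma_pix = fwhm_arcsec / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_arcsec
    return gaussian_filter(img, sigma_pix)

n, pix = 128, 0.5                                   # 0.5 arcsec per pixel grid
y, x = np.mgrid[:n, :n] - n // 2
# compact "jet knot" (sigma = 1") and broad "envelope" (sigma = 10")
compact = np.exp(-(x**2 + y**2) * pix**2 / (2 * 1.0**2))
extended = 0.05 * np.exp(-(x**2 + y**2) * pix**2 / (2 * 10.0**2))
sky = compact + extended                            # what a single dish sees
sma = compact                                       # envelope filtered out (toy assumption)
recovered = smooth_to_beam(sma, pix, 14.0).max() / smooth_to_beam(sky, pix, 14.0).max()
# recovered < 1: the extended envelope lowers the apparent recovery fraction
```

The same logic explains the velocity dependence seen above: at low velocity offsets the shell contributes a large extended term and the recovery drops toward ~20%, while in the jet-dominated EHV ranges the emission is compact and the recovery approaches unity.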
\n\nAs in the case of the SiO jet, the blue and red axes of the CO jet are also misaligned by $\\sim$5$^{\\circ}$.\nThe kinks between BI and BII, and RI and RII are also seen in the CO jet.\nIn addition, the ridge of the CO emission wiggles along the jet axis.\nThis wiggling feature is clearly seen in the maps of the highest velocity ranges (Fig.~\\ref{fig6}).\nIt is likely that each knot has been ejected in a slightly different direction.\nSince the typical knot separation of 2--3\\arcsec corresponds to a time interval of 15--20 yr, the observed jet wiggling suggests that the direction of jet ejection also varies on a similar time scale.\nOn the other hand, the lower intensity CO emission surrounding the emission ridge tends to extend linearly along the axes.\nThe transverse width of the lower level CO emission component increases with distance from the source.\nThis is probably because a significant part of the CO emission comes from the outflow shell even in the extremely high velocity ranges (see the next section).\nThere is faint CO emission at the highest velocities seen ahead of the SiO jet.\nSince this highest velocity emission is spatially extended, it is likely that it arises from the highest velocity part of the shells.\n\n\\subsection{Kinematics of the jet along its axis}\n\nPosition-velocity (P-V) diagrams of the SiO and CO emission along the jet axes (the position angle is $-$25$^{\\circ}$ for the blueshifted part and $-$20$^{\\circ}$ for the redshifted part) are shown in Fig.~\\ref{fig7}.\nDue to the change of the position angle, the outer knots BIIb and RIIb do not appear in these P-V diagrams.\n\nThe velocity structure of the jet is well traced in the SiO.\nThe jet velocity rapidly increases within $\\backsimeq$1\\arcsec from the star, and reaches close to its highest velocity (${\\pm}$65 km s$^{-1}$ from $V_{\\rm sys}$) at $\\sim$5\\arcsec from the star, which corresponds to the positions of the BIc and RIc knots.\nThe velocity 
dispersion is extremely large (${\\Delta}V{\\sim}$50 km s$^{-1}$ at the 1$\\sigma$ level) at the base, while it narrows to ${\\Delta}V{\\sim}$20 km s$^{-1}$ at the positions of the BIc and RIc knots. \nAt positions farther downstream, from BIc to BIIa and from RIc to RIIa, the observed radial velocity slightly decreases on the blueshifted side, while it increases on the redshifted side.\nThese radial velocity changes are probably due to the change of inclination angle, because the position angle of the jet also changes at these positions.\nThe velocity pattern shown in the SiO is similar to those of the atomic jets from class I sources and T Tauri stars, in which the low-velocity components with broad line widths are located near the base and the high-velocity components with narrower line widths are located farther from the source \\citep[e.g.][]{Pyo02}.\nIn addition to the global velocity structure, each knot shows an internal velocity gradient with its higher velocity on the upstream side and lower velocity on the downstream side.\n\n\\subsection{Kinematics of the CO outflow shells}\n\nIn the CO $J$=3--2, the jet shows a similar velocity pattern with a similar velocity centroid as the SiO.\nIn addition to the jet, the P-V map of the CO shows extended low velocity features (slower than $\\pm$15 km s$^{-1}$) and linear velocity features with the velocity magnitude increasing with the distance from the source (``Hubble-law''). 
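A minimal sketch of how such linear (``Hubble-law'') P-V features arise for a radially expanding shell: when the space velocity of a shell element is proportional to its distance from the source, the line-of-sight velocity grows linearly with projected offset. The parametrization is that of the wind-driven shell model fitted in this section (C = 0.8 arcsec$^{-1}$, v$_0$ = 5 km s$^{-1}$ arcsec$^{-1}$, inclination 21$^{\circ}$), but the projection geometry below is our own simplification.

```python
import numpy as np

def shell_pv(C, v0, incl_deg, z_max, n=200):
    """P-V locus of a shell z = C*R**2 with v_R = v0*R, v_z = v0*z
    (the Lee et al. 2000 parametrization), viewed at an inclination
    incl_deg from the plane of the sky.  Simplified projection:
    offsets in arcsec, velocities in km/s for v0 in km/s/arcsec."""
    i = np.radians(incl_deg)
    z = np.linspace(0.0, z_max, n)
    R = np.sqrt(z / C)
    pv = []
    for phi in np.linspace(0.0, 2.0 * np.pi, 90):  # azimuth around the axis
        offset = z * np.cos(i) - R * np.sin(phi) * np.sin(i)   # on-sky offset
        v_los = v0 * (z * np.sin(i) + R * np.sin(phi) * np.cos(i))
        pv.append((offset, v_los))
    return pv

# blue-lobe fit: C = 0.8 arcsec^-1, v0 = 5 km/s/arcsec, inclination 21 deg
pv = shell_pv(0.8, 5.0, 21.0, 15.0)
off, v = pv[0]   # phi = 0: shell elements in the plane containing the axis
# these follow a strict Hubble law: v_los = v0 * tan(i) * offset
```

The dynamical age 1/v$_0$ follows directly from this parametrization: at D $\sim$ 250 pc, 1 arcsec corresponds to 250 AU, so 1/v$_0$ = 250 AU / (5 km s$^{-1}$) $\approx$ 240 yr, as quoted in the text.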
\nThese Hubble-law features are obviously different from the jets, and are not seen in the SiO, suggesting that they are from the outflow shells.\nIt should be noted that the radial velocity offsets of the linear velocity components become larger than 50 km s$^{-1}$ at around $\\backsimeq$10\\arcsec from the star, and contaminate the CO map in the EHV range.\n\nThe observed velocity pattern of the outflow shell is different from that predicted by the jet-driven bow shock model, in which the P-V structure shows a convex spur with high-velocity components at the jet head \\citep[e.g.][]{Mas93, Che94}; it is rather similar to the velocity pattern produced by the wide-angle wind model \\citep{Shu91, Shu00, Li96}.\nIn the wide-angle wind model, an outflow shell consists of ambient material swept up by a radial wind from a young star.\nA ``Hubble-law'' in the shell velocity is expected if the shell is expanding into an ambient medium with a radial density profile of ${\\propto}r^{-2}$ \\citep{Shu91}.\nThe shape of the shell is determined by the combination of the poloidal density profiles of the wind and ambient gas.\nIt is approximately parabolic for an $X$-wind type wide-opening-angle wind with an angle-dependent density profile of ${\\propto}1\/{\\rm sin}^2{\\theta}$ (where $\\theta$ is an angle measured from the axis of the flow) expanding into an ambient medium with a ${\\propto}{\\rm sin}^2{\\theta}\/r^2$ density profile, which is appropriate for magnetized cores \\citep{Li96}.\nTherefore, we adopted the simplified analytical wind-driven shell model proposed by \\citet{Lee00} to examine whether the observed morphology and kinematics of the CO outflow shells can be explained by means of a wide-opening-angle wind model.\nIn the cylindrical coordinate system, the structure and velocity of the shell can be written as follows:\n\\begin{equation}\nz = CR^{2}, \\quad v_{R} = v_{0}R, \\quad v_{z} = v_{0}z,\n\\end{equation}\nwhere $z$ is the distance along the 
outflow axis; $R$ is the radial size of the outflow perpendicular to $z$;\n$C$ and $v_{0}$ are free parameters which describe the spatial and velocity distributions of the outflow shell, respectively.\nThe observed outflow shell features and the velocity patterns of the redshifted and blueshifted lobes were successfully reproduced by the model curves with $C$=0.8 arcsec$^{-1}$ and $v_{0}$=5.0 km s$^{-1}$ arcsec$^{-1}$, and $C$=0.6 arcsec$^{-1}$ and $v_{0}$=5.0 km s$^{-1}$ arcsec$^{-1}$, respectively.\nHere the inclination angle of the outflow axis with respect to the plane of the sky was assumed to be 21$^{\\circ}$, which is derived from the proper motion measurement for the SiO $J$=2--1 jet by \\citet{Gir01}.\nThe dynamical age of the outflow shell is given by 1\/$v_0$ and is estimated to be $\\sim$240 yr.\nThis number is roughly consistent with the dynamical age of $\\sim$500 yr derived from the extent of the shell ($\\sim$25\\arcsec = 6300 AU), the mass-weighted-mean radial velocity of the shell ($\\sim$22 km s$^{-1}$), and the inclination angle of the outflow axis ($\\sim$21$^{\\circ}$).\nThe model curves that delineate the outer boundaries of the lobes projected onto the plane of the sky are shown in Fig.~\\ref{fig8} on top of the contours of the outflow shells.\nThe model curves of the P-V maps are shown in Fig.~\\ref{fig7}.\nHowever, this simplified wind-driven model has difficulty in reproducing several observed features.\nFirst, the observed CO intensity drops sharply at the systemic velocity and does not extend to the opposite velocity ranges, while the model curves on the P-V maps predict emission at $V_{\\rm LSR}{\\sim}$10 km s$^{-1}$ in the blueshifted lobe and at $V_{\\rm LSR}{\\sim}$0 km s$^{-1}$ in the redshifted lobe.\nSecond, the shapes of the shells at the different velocities cannot be reproduced (Fig.~\\ref{fig9}).\nThe observed CO emission shows V-shaped distributions in the velocity ranges close to the systemic velocity.\nOn 
the other hand, the model curves predict elliptical shapes (Fig. 9a and 9b).\nIn addition, the transverse width of the CO shell becomes narrower as the velocity offset increases, while the model predicts the opposite trend.\nIt should be noted that the CO emission near the protostar in Fig. 9c and 9d arises from the jet.\n\n\n\\subsection{Kinematics across the jet}\n\nIn order to search for signs of jet rotation, the velocity gradient along the minor axis of the jet was examined.\nThe P-V diagrams of the SiO and CO at the positions of the two innermost pairs of knots (Fig.~\\ref{fig11}) show no clear velocity gradient at the positions of BIa, RIa, and RIb.\nOn the other hand, there is some hint of a velocity gradient in the SiO at the position of BIb; the southwestern side of the jet tends to be more blueshifted than the northeastern part.\nHowever, this velocity gradient is in the opposite sense to the rotation pattern of the NH$_3$ core, with its blueshifted part in the northeast and the redshifted part in the southwest \\citep{Cur99}.\nThis suggests that the observed velocity gradient is not due to rotation.\n\n\n\\subsection{Physical properties of the EHV jet}\n\nThe physical parameters of the EHV jet were estimated using the CO flux measured in the velocity ranges of ${\\Delta}V$=50--70 km s$^{-1}$, in which most of the CO flux was recovered by the SMA.\nWe assumed that the CO emission in these velocity ranges is optically thin, and that the excitation of the CO is in LTE.\nThe fractional abundance of the CO and the mean atomic weight were adopted to be 10$^{-4}$ and 1.41, respectively.\nThe kinetic temperature of molecular gas in the EHV jet was derived to be 500--1200 K from the far infrared lines of CO, H$_2$O, and H$_2$ \\citep{Nis99,Nis00}, and $>$500 K from the millimeter and submillimeter SiO lines \\citep{Nis07}.\nHowever, it is uncertain whether the CO (3-2) emission arises from the same gas component that contributes to the higher 
transition lines in the far-infrared.\nIn fact, \\citet{Gus08a} have modeled the multi-transition SiO data observed by \\citet{Nis07} using their face-on C-type shock models, and obtained a much lower temperature of 70--90 K.\nThis implies that most of the gas in the EHV jet is not as warm as $>$500 K. \nTherefore, an excitation temperature of $\\sim$100 K was assumed for the calculation.\n\nThe mass for each of the blueshifted and the redshifted jets is estimated to be $\\sim$10$^{-3}$ $M_{\\odot}$, which is a few times higher than the bullet mass estimated by \\citet{Bac90} under the assumption of $T_{\\rm ex}{\\sim}$20 K.\nThe momentum and kinetic energy of the bipolar jet are estimated to be 0.3 $M_{\\odot}$ km s$^{-1}$ and 5.0$\\times$10$^{44}$ erg, respectively.\nHere, the calculation was done using the de-projected jet velocity under the assumption that the jet axis is inclined by 21$^{\\circ}$ from the plane of the sky \\citep{Gir01}.\nThe dynamical timescale of the jet derived from the length ($\\sim$ 20\\arcsec = 5000 AU) and the mass-weighted mean (de-projected) velocity is estimated to be only $\\sim$150 yr.\nBecause of the rather large mass and short timescale, the obtained mass loss rate is also large, $\\sim$10$^{-5}$ $M_{\\odot}$ yr$^{-1}$.\nFurthermore, the high velocity of $\\sim$160 km s$^{-1}$ (corrected for the inclination of 21$^{\\circ}$) implies an extremely large momentum supply rate of $\\sim$2${\\times}$10$^{-3}$ $M_{\\odot}$ km s$^{-1}$ yr$^{-1}$ and a mechanical luminosity of $\\sim$26 $L_{\\odot}$ for the jet.\n\nIt should be noted that the derived total mechanical luminosity of $\\sim$26 $L_{\\odot}$ is a factor of 3.5 larger than the bolometric luminosity of the central source (7.5 $L_{\\odot}$).\nOne possible reason for such a discrepancy is that the mass of the jet was overestimated.\nSince the CO $J$=3--2 emission comes from both the jet and shell, the high velocity part of the shell may contaminate the CO flux that was used to calculate 
the mass of the jet.\nHowever, this effect should not be significant, because the CO flux from the shell does not dominate the CO flux in the ${\\Delta}V >$50 km s$^{-1}$ velocity range.\nThe second possible reason is the excitation temperature.\nThe assumed excitation temperature of $T_{\\rm ex}{\\sim}$100 K is much lower than the kinetic temperature of $>$500 K derived from the far infrared measurements \\citep{Nis99,Nis00}.\nHowever, assuming a higher excitation temperature increases the mass and dynamical parameters (i.e. momentum, kinetic energy, mass loss rate, momentum rate, and mechanical luminosity).\nFor example, the assumption of $T_{\\rm ex}$= 500 K increases the mass and dynamical parameters by a factor of 3.8, and yields an extremely large mechanical luminosity of $\\sim$90 $L_{\\odot}$.\nThis implies that the bulk of the molecular gas in the jet is not as warm as 500 K, and that the warm gas which contributes to the far infrared emission is not the major component.\nOn the other hand, a lower excitation temperature of 40--50 K reduces the mass.\nHowever, this effect is limited to a factor of 1.5 at the most.\nAnother possible reason is the CO abundance, which was assumed to be 10$^{-4}$.\nThis value can be as high as 4$\\times$10$^{-4}$ in the chemical model of protostellar winds proposed by \\citet{Gla91}, in which molecules such as CO and SiO are formed via gas-phase reactions in an initially atomic protostellar wind.\nIf the higher CO abundance of 4$\\times$10$^{-4}$ is adopted, the mass and all dynamical parameters are reduced by a factor of 4, and the mechanical luminosity of the jet becomes $\\sim$7 $L_{\\odot}$, which is comparable to the bolometric luminosity of the central source.\nHowever, this value for the mechanical luminosity of $\\sim$7 $L_{\\odot}$ should be the lower limit, because it is derived under the assumption of optically thin CO emission.\nThe physical parameters derived using $T_{\\rm ex}$ = 100 K and CO\/H$_2$ 
abundance ratio of 4$\\times$10$^{-4}$ are given in Table 1.\n\n\\subsection{Physical parameters of the outflow shell}\n\nThe mass, momentum, and kinetic energy of the outflow shell were estimated using the CO flux measured in the velocity ranges of $\\pm$1--40 km s$^{-1}$ from the systemic velocity.\nThe CO emission is assumed to be optically thin and in LTE with an excitation temperature of 40 K, which is derived from the observed peak brightness temperature of the CO $J$=3--2 line.\nSince most of the gas in the shell component is considered to be swept-up ambient material, the canonical CO abundance of 10$^{-4}$ was used for the calculations.\nThe dynamical parameters of the outflow shell are summarized in Table 2. \nSince a significant fraction of the CO flux is missed in the low velocity ranges, Table 2 shows the parameters with and without correction for the effect of missing flux. \nThe inclination angle of the outflow axis is assumed to be 21$^{\\circ}$ as in the case of the EHV jet.\nThe dynamical timescale of the outflow shell is adopted to be $\\sim$240 yr, which is derived from the modeling described in the previous section.\nTable 2 shows that the dynamical parameters of the outflow shell are comparable to those of the EHV jet listed in Table 1.\nIf the effect of the missing flux is corrected, the outflow shell has a momentum supply rate of $\\sim$1.8$\\times$10$^{-3}$ $M_{\\odot}$ km s$^{-1}$ yr$^{-1}$, and a mechanical luminosity of $\\sim$8.6 $L_{\\odot}$.\nThe mechanical luminosity derived here is comparable to the bolometric luminosity of the central source and the mechanical luminosity of the EHV jet.\nIt should be noted that the dynamical parameters for the redshifted outflow are affected by the contamination from the L1448C(S) outflow (see the next section).\nHowever, the effects of the L1448C(S) outflow on the dynamical parameters are not significant, because this outflow is seen only in the low velocity ranges.\n\n\n\\subsection{CO 
outflow from L1448C(S)}\n\nIn the CO maps in the low velocity ranges such as Fig. 4a and 4b, there is a blueshifted component in the redshifted lobe.\nThis blueshifted component shows a triangular shape with its apex at the position of L1448C(S), and extends to the northeast direction with a position angle of $\\sim$40$^{\\circ}$.\nIt is likely that this blueshifted component is related to the activity of L1448C(S).\nFig.~\\ref{fig11}, which provides a close-up view of the L1448C(S) region, shows a redshifted counterpart to the southwest of L1448C(S): a compact component at $\\sim$1\\arcsec southwest of L1448C(S) and another extended component to the southwest of the V-shaped shell of the L1448C(N) outflow.\nThis NE-SW outflow from L1448C(S) is also highly collimated.\nThe opening angle of the lobes is $\\sim$40$^{\\circ}$, which is similar to the redshifted lobe of the L1448C(N) outflow.\nHowever, this outflow is seen only in the velocity ranges slower than 15 km s$^{-1}$, and has no high-velocity jet-like component.\nThere is no SiO $J$=8--7 emission either.\n\nThe redshifted part of the L1448C(S) outflow overlaps with the western wall of the L1448C(N) outflow.\nAt the place where the two outflows are superposed, the CO emission is significantly enhanced and the wall of the L1448C(N) outflow lobe is bent.\nTherefore, it is possible that the two outflows are intersecting, although the three-dimensional geometries of the two outflows are uncertain.\n\nThe physical parameters of this outflow were estimated assuming that the CO emission is optically thin and that the excitation of the CO is in LTE.\nThe fractional abundance of the CO was adopted to be the canonical value, 10$^{-4}$, because the V-shaped morphology and low velocity suggest that the bulk of the L1448C(S) outflow is likely to be swept-up ambient gas.\nThe mean atomic weight of the gas was assumed to be 1.41.\nThe excitation temperature was assumed to be 20 K, which is the same as the rotational 
temperature of the dense gas surrounding L1448C derived from the NH$_3$ observations \\citep{Cur99}.\nThe mass of the blueshifted component is estimated to be 5.3$\\times$10$^{-4}$ $M_{\\odot}$ and that of the redshifted component is 2.7$\\times$10$^{-4}$ $M_{\\odot}$.\nSince the CO $J$=3--2 emission is assumed to be optically thin, the values derived here are lower limits.\nIn addition, the mass of the redshifted component is likely to be underestimated, because we excluded the region where the two outflows are superposed.\nIt is also possible that part of the flux from the spatially extended component is missing.\nThe dynamical parameters of the flow are summarized in Table~\\ref{table3}.\nTable 3 gives two kinds of values; one is corrected for the inclination effect and the other is without this correction (uncorrected).\nHere, the inclination angle of the L1448C(S) outflow from the plane of the sky was assumed to be 32.7$^{\\circ}$, which corresponds to the mean inclination angle expected for randomly oriented outflows.\nWe also assumed that the outflow lobes have a size of $\\sim$20\\arcsec (5000 AU).\nThe actual size of this outflow is not clear because of confusion with the L1448C(N) outflow and its sidelobes.\nHowever, there is no counterpart of this outflow in the single-dish CO $J$=3--2 map \\citep{Hat07}.\nThis suggests that the spatial extent of this outflow is not large, or that the spatially extended component is fainter than the sensitivity limit of the single-dish measurement. \nIn the latter case, the faint component would not make a significant contribution to the dynamical parameters even if its spatial extent is large.\n\n\\section{DISCUSSION}\n\n\\subsection{Compact disk around L1448C(N)}\n\nThe compact component of dust continuum emission is partially resolved by the 0\\farcs7$\\times$0\\farcs5 beam.\nThe observed structure is elongated perpendicular to the outflow axis, suggesting that this component traces the disk surrounding the protostar. 
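The intrinsic size of such a partially resolved component is commonly obtained by deconvolving the beam in quadrature, assuming Gaussian source and beam profiles. A minimal sketch of this step; the input sizes, the helper name, and the $\sim$250 pc distance scale (implied by the angular-to-linear conversions quoted in this paper, e.g. 25\arcsec $\simeq$ 6300 AU) are illustrative assumptions, not the measured values:

```python
import math

def deconvolve_fwhm(theta_obs, theta_beam):
    """Intrinsic FWHM (arcsec) of a Gaussian source observed with a
    Gaussian beam: subtract the beam width in quadrature."""
    if theta_obs <= theta_beam:
        return 0.0  # unresolved along this axis
    return math.sqrt(theta_obs**2 - theta_beam**2)

DIST_PC = 250.0  # assumed distance; 1 arcsec then corresponds to ~250 AU

# Illustrative numbers: a 0.8" observed major axis with a 0.7" beam
theta_src = deconvolve_fwhm(0.8, 0.7)   # ~0.39 arcsec intrinsic FWHM
size_au = theta_src * DIST_PC           # ~100 AU scale structure
```

Quadrature subtraction is exact only for Gaussian source and beam shapes; for a marginally resolved component like this one, the deconvolved size is sensitive to small errors in the fitted observed size.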
\nIf the beam-deconvolved major axis, $\\sim$90 AU, represents the diameter of the disk, the disk size of L1448C(N) is comparable to those around the youngest protostars such as HH211 \\citep{Lee07b} and HH212 \\citep{Cod07, Lee07a, Lee08}.\nIf the measured flux (330 mJy) comes from the region with 0\\farcs37$\\times$0\\farcs26 size, the brightness temperature corresponds to $\\sim$35 K.\nThe spectral energy distribution (SED) of the compact source from 8.3 GHz to 350 GHz is shown in Fig.~\\ref{fig12}.\nThe measured flux densities are fit by a single power law with a spectral index $\\alpha$ of 1.98 (solid line in Fig~\\ref{fig12}), which agrees with the previous result ($\\alpha$ = 1.84) of \\citet{Sch04}.\nThis spectral index is smaller than $\\alpha$ = 3.4 derived from the photometric broadband measurement including the contribution of the larger scale envelope component \\citep{Fro05}.\nSince the emission at cm wavelengths is likely to be the free-free emission from shock-ionized gas,\nthe contribution of the free-free component at mm and sub-mm wave ranges was estimated by using the equations given by \\citet{Cur90}.\nUsing the parameters of the stellar wind in \\citet{Cur90}, the flux densities of the free-free emission at mm and sub-mm wave ranges were estimated to be less than 1 mJy (dotted line in Fig~\\ref{fig12}).\nTherefore, the small index number is unlikely to be due to the contribution of the free-free component.\nThe power law fit assumes that the emission is optically thin and that the Rayleigh-Jeans approximation is valid.\nHowever, the observed index $\\alpha\\sim$2 is close to the value for the blackbody radiation, suggesting that the emission is optically thick.\nIn addition, a brightness temperature of 35 K suggests that the Rayleigh-Jeans approximation is not applicable in the mm and sub-mm wave ranges.\nTherefore, we applied an optically thick fit without Rayleigh-Jeans approximation using the 
formula,\n\\begin{equation}\nS_{\\nu}={\\Omega}_s B_{\\nu}[1-{\\rm exp}(-{\\tau}_{\\nu})],\n\\end{equation}\nwhere $S_{\\nu}$ is the flux density, ${\\Omega}_s$ is the source size, $B_{\\nu}$ is the Planck function, and ${\\tau}_{\\nu}$ is the dust optical depth, which is assumed to follow the power law ${\\tau}_{\\nu} {\\propto} {\\nu}^{\\beta}$.\nAssuming a source size of 0\\farcs37$\\times$0\\farcs26 and a dust temperature of 40 K, the SED fit provides $\\beta$ = 1.3 and ${\\tau}_{350 {\\rm GHz}}$ = 7.5 (dash-dotted line in Fig~\\ref{fig12}).\nThe average optical depth at 350 GHz for a disk of mass (gas+dust) $M_D$ and radius $R_D$ is given by \n\\begin{equation}\n\\langle \\tau_{350 {\\rm GHz}} \\rangle = \\left( \\frac{0.5}{{\\rm cos} \\theta} \\right) \\left( \\frac{M_D}{0.1 M_{\\odot}} \\right) \\left( \\frac{R_D}{100 {\\rm AU}} \\right) ^{-2},\n\\end{equation}\nwhere $\\theta$ is the disk inclination angle to the line of sight \\citep{Jor07}.\nUsing this relation, the mass of the disk with ${\\tau}_{350 {\\rm GHz}}$ = 7.5, $R_D$ = 45 AU, and $\\theta$ = 69$^{\\circ}$ (assuming that the disk is perpendicular to the jet axis) is estimated to be 0.11 $M_{\\odot}$.\nThe mass derived here is approximately twice as large as the lower limit of 0.047 $M_{\\odot}$, which is derived under the assumption of optically thin emission.\n\n\\subsection{Stellar mass loss rate and its implication for protostellar evolution}\n\nSince the protostellar jet is considered to be closely linked to mass accretion, the stellar mass-loss rate gives us a rough estimate of the mass accretion rate onto the star.\nThe theoretical estimate of the ratio of the mass outflow rate to the mass accretion rate ($\\dot{M}_{\\rm out}\/\\dot{M}_{\\rm acc}$) is ${\\sim}$1\/3 for an X-wind type magneto-centrifugal wind \\citep[e.g.][]{Shu94}.\nIf the $\\dot{M}_{\\rm out}\/\\dot{M}_{\\rm acc}$ ratio is assumed to be $\\sim$0.3, the total mass-loss rate (blue + red) derived from the CO flux, 2.4$\\times$10$^{-6}$ 
$M_{\\odot}$ yr$^{-1}$, gives a mass accretion rate of 8$\\times$10$^{-6}$ $M_{\\odot}$ yr$^{-1}$.\nIn spite of the rather high accretion rate, the observed bolometric luminosity is only 7.5 $L_{\\odot}$, suggesting that the mass of the central star is still very small.\nIf most of the observed bolometric luminosity is released by means of accretion, the mass of the central star can be calculated by using the relation $M_*$ = $L_{\\rm acc}R_{*}\/G\\dot{M}_{\\rm acc}$, where $M_*$ is the mass of the central star, $L_{\\rm acc}$ is the accretion luminosity, and $\\dot{M}_{\\rm acc}$ is the mass accretion rate onto the protostar.\nThe radius of the protostar, $R_*$, is considered to be $\\sim$1 $R_{\\odot}$ in the earliest evolutionary stage with very low mass and $\\sim$3 $R_{\\odot}$ in the later stage \\citep[e.g.][]{Sta88}.\nThe mass of the central star is estimated to be 0.03--0.09 $M_{\\odot}$ for an accretion luminosity of 7.5 $L_{\\odot}$.\nWith a stellar mass of 0.03--0.09 $M_{\\odot}$, the Keplerian velocity at the surface of the protostar becomes $\\sim$80 km s$^{-1}$.\nIn this case, the ratio of the jet velocity to the Keplerian velocity becomes $\\sim$2, which is reasonable if the jet is launched by the magneto-centrifugal force \\citep{Shu94,Pud07}.\nIf a constant mass accretion rate of 8$\\times$10$^{-6}$ $M_{\\odot}$ yr$^{-1}$ is assumed, the age of the central star is estimated to be (4--12)$\\times$10$^3$ yr.\nThe timescale derived here is consistent with the kinematic age of the larger scale ($\\sim$0.3 pc) outflow \\citep{Bac90}.\nHowever, the morphology of the EHV jets implies that the mass accretion was variable.\nThe highly collimated EHV jets terminate at $\\sim$20\\arcsec from the source, suggesting that L1448C(N) experienced a lower activity phase in the past and enhanced its activity significantly in the last $\\sim$150 yr.\n\n\n\\subsection{Clumpy structure in the L1448C(N) jet}\n\nHigh resolution SiO and CO maps show that the EHV bullets B1 and R1 
identified by \\citet{Bac90} consist of chains of knots.\nIf the BI-II and RI-II knots are included, the knots are aligned with almost equal intervals of $\\sim$2\\arcsec (500 AU).\nA similar knotty structure with semi-regular intervals is also seen in the SiO and CO jets in HH211 \\citep{Hir06, Pal06, Lee07b} and HH212 \\citep{Cod07, Lee07a}.\nIn the case of HH211 and HH212, the knots seen in the SiO and CO have their counterparts in the near infrared H$_2$ emission except for the innermost knot pairs, which are highly obscured.\nIn the case of the L1448C(N) outflow, H$_2$ emission knots are seen only in the northern blueshifted side and not in the southern redshifted side \\citep{Dav94, Eis00}.\nThis is probably because the axis of the L1448C(N) jet is inclined from the plane of the sky and the near infrared emission in the southern part is obscured by the dense gas envelope traced by the NH$_3$ emission \\citep{Cur90}.\nIn the northern side, the morphology of the SiO jet coincides well with that of the H$_2$ jet \\citep{Eis00}, which also shows a kink between the BI and BII components.\nTherefore, it is likely that the knots in the L1448 jet are internal bow shocks in the jet beam as in the cases of HH211 \\citep{Hir06, Pal06, Lee07b} and HH212 \\citep{Cod07, Lee07a}.\nIn fact, some of the SiO knots are partially resolved in the transverse direction, and the RII-a knot shows an arc-shaped structure typical of a bow shock (Fig. 3f).\nThe SiO emission is weak at the positions of the BI-II and RI-II knots, suggesting that the shocks at these positions are rather weak as compared to those at the other knot positions.\nSince the jet is deflected at the positions of the BI-II and RI-II knots, it is likely that the jet material there is impacting less dense material surrounding the jet beam.\n\n\n\\subsection{Jet bending}\n\nAs shown in Fig. 
5, the blue part and the red part of the jet are misaligned by $\\sim$5$^{\\circ}$, forming a C-shaped structure bending toward the west.\nSuch a C-shaped bending of the jet could be due to the Lorentz force between the jet and the interstellar magnetic field \\citep{Fen98}, the orbital motion of the jet source in a binary system \\citep{Fen98, Mas02}, or dynamical pressure from the external medium \\citep{Fen98}. \nIn the case of the Lorentz force, a C-shaped bending is expected if the poloidal current in the jet and the counter jet flows in the same direction \\citep{Fen98}.\nHowever, it is difficult for this mechanism to account for the observed bending of the L1448C(N) jet, because a typical interstellar magnetic field of several tens of microgauss is not strong enough to bend a jet beam with a density of $>$10$^6$ cm$^{-3}$.\n\nIf the C-shaped bending is produced by the orbital motion of a binary system, the orbital radius and orbital velocity can be estimated by using the analytical model of \\citet{Mas02}.\nHere the jet is assumed to be ejected at a velocity of $v_j$ from one of the binary protostars in a circular orbit of radius $r_0$ and orbital velocity $v_0$.\nThe $z$-axis is parallel to the orbital rotation axis, and $z$=0 is the orbital plane.\nAs shown in Fig. 
3 of \\citet{Mas02}, the deflection angle $\\alpha$ of the jet beam near the source is approximated by the $x$=${\\kappa}z\/{\\rm cos}i$ line, where $\\kappa$=$v_0\/v_j$ and $i$ is the inclination angle of the orbital axis with respect to the plane of the sky.\nIn the case of the L1448C(N) jet, the deflection angle $\\alpha$ is estimated to be 2.5$^{\\circ}$, which corresponds to half of the misalignment angle.\nUsing the jet velocity of $\\sim$160 km s$^{-1}$, the orbital velocity is calculated to be 6.5 km s$^{-1}$.\nSince the mass of the protostar with the jet is only 0.03--0.09 $M_{\\odot}$, the total mass of the binary system is considered to be less than 0.18 $M_{\\odot}$.\nTherefore, the radius and period of the orbital motion are estimated to be smaller than 4.2 AU and 20 yr, respectively.\nHowever, such a short period orbital motion cannot account for the observed C-shaped pattern, because the C-shaped bending is seen in the BI and RI parts of the jet, with a length of $\\sim$2000 AU and a dynamical time scale of 47 yr.\nIn order to produce the C-shaped pattern with the orbital motion, the orbital period needs to be longer than twice the dynamical time scale.\n \nIn the case of the dynamical pressure of the external medium, ambient gas with $n$(H$_2$)$<$10$^4$ cm$^{-3}$ cannot account for the bending of the jet with a density of $>$10$^6$ cm$^{-3}$, unless the protostar is moving with a velocity that is comparable to the jet velocity.\nOn the other hand, the dynamical pressure caused by the outflow from the nearby protostar, L1448N, cannot be ruled out.\nAs shown in the CO map of \\citet{Bac90}, the redshifted lobe of the L1448N outflow overlaps with the blue lobe of the L1448C(N) outflow.\nThe interaction between the two outflows from L1448C(N) and L1448N has been suggested by \\citet{Bac95}, because the large scale outflow from L1448C(N) shows a considerable bending at the place where the two outflows are overlapping.\nSince the redshifted emission from the L1448N 
outflow reaches close to the position of L1448C(N) \\citep{Bac95}, it is possible that the jet from L1448C(N) is propagating under the influence of the L1448N outflow.\nIn this case, the dynamical pressure from the L1448N outflow acts from north to south, and deflects the jet beams to the west if they were ejected in the northwest and southeast directions. \n\n\\subsection{Deflection and wiggling of the jet}\n\nIn addition to the C-shaped bending, the jet is also deflected toward the east by $\\sim$40$^{\\circ}$ at the position of BI-II and toward the south by $\\sim$15$^{\\circ}$ at the RI-II position.\nSince both sides of the jet are deflected at almost the same distance from the central star, the jet deflection is likely to be caused by some variability intrinsic to the driving source rather than by an external perturbation.\nThe observed morphology is similar to the S-shaped point-reflection symmetric pattern that is expected if the disk is precessing or wobbling.\nThe jet is not exactly S-shaped but asymmetric in deflection angle, probably because of the projection effect.\nIn a binary protostellar system with a disk misaligned with the orbital plane of the binary, the disk wobbles with a period of approximately half of the binary orbital period and precesses with a period of $\\sim$20 orbital periods \\citep{Bat00}.\nS-shaped point symmetry will be observed if the precession or wobbling time scale is longer than four times the dynamical timescale.\nSince the jet deflection occurs at BI-II and RI-II, the time scale of which is $\\sim$50 yr, the time scale of the precession or wobbling should be longer than $\\sim$200 yr.\nTherefore, if the deflection is due to the precession, the lower limit of the binary orbital period is $\\sim$10 yr.\nSince this orbital period of $\\sim$10 yr is comparable to the period of the small scale wiggling shown in Fig. 
6, 15--20 yr, this orbital motion can also explain the wiggling feature.\nIf the binary system consists of equal mass protostars with 0.03--0.09 $M_{\\odot}$, the orbital radius is estimated to be 2.4--4.2 AU.\nOn the other hand, if the jet deflection is due to the wobbling of the disk, the orbital period and the separation of the binary are estimated to be $\\sim$400 yr and 30 AU, respectively.\nSince the estimated separation of the binary is smaller than the size of the disk observed in the 350 GHz continuum emission, it is possible that the observed 90 AU scale disk harbors two sources separated by 60 AU.\nHowever, a binary with a separation of $>$60 AU cannot account for the small scale wiggling feature.\n\n\\subsection{Velocity variation of the jet}\n\nThe P-V diagram of the SiO shows that the velocity of the jet varies semi-periodically.\nThe velocity variation is more obvious in Fig.~\\ref{fig13}, which plots the velocity centroid of the SiO emission in the redshifted part of the jet as a function of the distance from L1448C(N).\nThe typical amplitude of the variation in the velocity centroid is $\\sim$7 km s$^{-1}$.\nSuch a velocity variation is expected if the jet is precessing, the jet is launched from an orbiting object, or the ejection velocity itself varies as a function of time \\citep[e.g.][]{Smi97}.\nThe period of the velocity variation estimated from the de-projected jet velocity and the knot separation is $\\sim$15--20 yr.\nSince this time scale is much shorter than the precession time scale, which is estimated to be 200 yr, it is unlikely that the velocity variation is caused by the precession of the jet.\nOn the other hand, the orbital motion of the driving source can account for the velocity variation; a binary system with an orbital period of $\\sim$15--20 yr, an orbital radius of 2.4--4.2 AU, and a total mass of 0.06--0.18 $M_{\\odot}$ has an orbital velocity of $\\sim$4.7--6.2 km s$^{-1}$, which is comparable to the amplitude of the 
radial velocity variation.\nHowever, the orbital motion cannot explain the relation between the SiO intensity and the velocity gradient.\nAs shown in the P-V map (Fig.~\\ref{fig7}), each SiO knot has its higher velocity on the upstream side and lower velocity on the downstream side.\nThe opposite velocity gradient is always seen in the faint emission between the knots.\nSuch a structure is more likely to be formed by a periodic variation of the ejection velocity. \nIn such a case, the SiO knots are considered to be formed as the fast moving material plunges into the slow moving material in the downstream \\citep[e.g.][]{Sto93, Sut97}.\nThe periodic variation in the ejection velocity is probably due to the modulation of the mass accretion by a companion.\nIn such a case, the variation amplitude of the jet velocity corrected for the inclination is calculated to be $\\sim$20 km s$^{-1}$. \nThis velocity amplitude corresponds to the shock velocity, which is consistent with the velocity of C-type shocks that can account for the excitation conditions of the far infrared molecular lines \\citep{Nis99, Nis00}.\nSimilar velocity gradients in the knots, with the faster part on the upstream side and the slower part on the downstream side, were also observed in the optical jet of HH111 \\citep{Rag02} and in the CO and SiO jets from IRAS 04166+2706 \\citep{San09}.\n\n\n\\subsection{Driving mechanism of the CO outflow}\n\nThe P-V diagram of the CO $J$=3--2 along the axis (Fig.~\\ref{fig7}) exhibits two kinematic components, i.e. 
the EHV jet with an almost constant velocity and the outflow shell with a parabolic velocity pattern.\nAlthough the highly collimated jet is clearly seen in both SiO and CO, the parabolic velocity pattern seen in the outflow shells is reproduced by the wind-driven model.\nTherefore, the observational results require a wide-opening angle wind and a collimated jet at the same time.\n\nOne possible mechanism to explain the observed jet+shell structure is the ``unified model'' proposed by \\citet{Sha06}, in which the highly-collimated jet component is explained as an on-axis density enhancement of the {\\it X}-wind type of wide opening angle wind launched magnetocentrifugally.\nIn this model, the jet along the axis corresponds to the densest part of the primary wind, and the shell consists mostly of the swept-up ambient material.\nTheir numerical model successfully reproduced the structure of a dense and narrow jet surrounded by a conical shell. \nOther models that can explain the two-component structure have been proposed by \\citet{Mac08} and \\citet{Ban06}.\nThe model proposed by \\citet{Mac08} predicts two distinct flows from the adiabatic core and the protostar.\nThe flow from the adiabatic core driven by the magnetocentrifugal mechanism has a low velocity and a wide opening angle, while the flow from the protostar, which is mainly driven by the magnetic pressure gradient force, has a high velocity and is well collimated.\nOn the other hand, the model proposed by \\citet{Ban06} predicts a structure with the jet powered by the magnetocentrifugal force enclosed by the large-scale outflow driven by the magnetic pressure.\nAlthough these two models reproduce a jet+shell structure similar to the observational results, the velocities of the shells predicted in these models are rather small ($\\sim$5 km s$^{-1}$ for the model of \\citet{Mac08}) because of the shallow gravitational potential at the launching point.\nIn the case of the L1448C(N) outflow, the terminal velocity 
of the outflow shell reaches ${\\Delta}V {\\sim} {\\pm}$70 km s$^{-1}$ without inclination correction, which is comparable to the velocity of the EHV jet.\nIn order to launch such a high velocity wind, the launching point of this wind should be close to that of the EHV jet.\nTherefore, the models with two components launched from two different regions do not explain the jet+shell structure in the L1448C(N) outflow.\nIn the case of the {\\it X}-wind with density stratification \\citep{Sha06}, the rather high velocity of the shell component is naturally explained, because the shell is driven by the high-velocity primary wind launched in the same region as the EHV jet.\n\n\n\\subsection{Origin of the SiO in the jet}\n\nIt is considered that the SiO molecules observed in jets and outflows are formed as a consequence of grain sputtering in a C-shock releasing Si-bearing material into the gas phase, followed by the reaction with O and OH \\citep{Sch97, Cas97, Gus08a}.\nThe multi-transition SiO lines from the L1448C(N) jet observed by \\citet{Nis07} have been successfully modeled by the steady-state C-shock model of \\citet{Gus08a} with a pre-shock density of $\\sim$10$^5$ cm$^{-3}$ and a shock velocity of 30--45 km s$^{-1}$.\nHowever, the conversion of Si into SiO is initially rather slow in their models, and the predicted SiO line emission predominantly arises from postshock gas $>$100 yr after the passage of the shocks.\nSince most of the SiO knots in the L1448C(N) jet have dynamical timescales shorter than the SiO formation timescale, the steady-state C-type shock models of \\citet{Gus08a} do not account for the SiO in the knots close to the central star, especially in the innermost knot pair with an extremely short time scale of less than 10 yr.\nOne possible explanation is that the SiO molecules existed on the grain mantles and are released into the gas phase by shocks, as suggested by \\citet{Gus08b}.\n\nAnother possibility is the formation of SiO in 
high density primary jet \\citep{Sha06}.\n\\citet{Gla89, Gla91} studied the formation of molecules in protostellar winds, which are originally neutral atomic, and found that significant quantities of SiO can be formed quickly in the close vicinity ($<$0.1 AU) of the central star if the mass-loss rate is high ($>$10$^{-6}$ $M_{\\odot}$).\nSince the mass-loss rate of the L1448C(N) jet is high enough, this scenario of in situ formation also can be the origin of the SiO in the jet.\nThe chemical models of \\citet{Gla89, Gla91} also predict significant amount of CO synthesized in the winds; the CO abundance reaches an equilibrium value of 4$\\times$10$^{-4}$ under the conditions in which observable amount of SiO is formed.\nThe morphological and kinematical similarity of the CO and SiO jets supports the idea that both CO and SiO are formed in the protostellar wind.\n\n\n\\subsection{Properties of L1448C(S)}\n\nThe secondary source L1448C(S) is located at $\\sim$8\\farcs3 (2000 AU) southeast of L1448C(N).\nThe 350 GHz continuum flux of this source is $\\sim$60 mJy, which is five times smaller than the flux from L1448C(N).\nIf an optically thin condition is assumed, the mass of the circumstellar material surrounding L1448C(S) is estimated from the observed flux using the dust mass opacity of 1.75 cm$^2$ g$^{-1}$ \\citep{Oss94} and the formula given by \\citet{Jor07}.\nFor a dust temperature of $\\sim$40 K, the estimated mass in the optically thin limit is 8.6$\\times$10$^{-3}$ $M_{\\odot}$.\nSince L1448C(S) is associated with a molecular outflow, it is highly probable that this source is a protostar rather than a mere dust clump at the cavity wall as claimed by \\citet{Mau10}.\nThe NH$_3$ data of \\citet{Cur99} suggest that L1448C(S) is formed in the same dense core as L1448C(N).\nHowever, the SED of L1448C(S) is significantly different from that of L1448C(N).\nIn table 4, the broad-band spectra of L1448C(N) and L1448C(S) measured at different wavelengths are listed.\nIn 
the near-infrared, L1448C(S) is much dimmer than L1448C(N); only L1448C(N) appeared in the $K_s$ band image of \\citet{Tob07}.\nOn the contrary, in the mid-infrared at three IRAC bands, band 2 (4.5 $\\mu$m), band 3 (5.8 $\\mu$m), and band 4 (8.0 $\\mu$m), L1448C(S) becomes brighter than L1448C(N); especially in bands 3 and 4, L1448C(S) is more than six times brighter than L1448C(N).\nIn the MIPS 24 $\\mu$m image, L1448C(S) is also seen clearly \\citep{Tob07, Reb07}.\nThe flux from L1448C(S) at 24 $\\mu$m looks similar to that from L1448C(N), although the accurate flux value of each source is not easy to measure because of the confusion.\nIn the sub-mm and mm wavebands, L1448C(S) is much weaker than L1448C(N).\nThe mass of the circumstellar material surrounding L1448C(S), $\\sim$0.01 $M_{\\odot}$, is approximately 10 times smaller than that of L1448C(N), $\\sim$0.1 $M_{\\odot}$.\nDue to the small amount of circumstellar material, the central star of L1448C(S) is likely to be less obscured in the mid-infrared as compared to that of L1448C(N) enshrouded by the thick cocoon.\nThe outflow activities of the two sources also differ significantly.\nThe CO outflow from L1448C(S) is compact and substantially weaker than the L1448C(N) outflow.\nThe momentum flux of the L1448C(S) outflow is only $\\sim$10$^{-6}$ $M_{\\odot}$ km s$^{-1}$ yr$^{-1}$, which is two or three orders of magnitude smaller than that of the L1448C(N) outflow and is comparable to those of class I outflows studied by \\citet{Bon96}.\nIn addition, there is neither an EHV component nor SiO emission associated with the outflow from L1448C(S).\n\nThe small amount of circumstellar material of less than 0.01 $M_{\\odot}$ suggests that L1448C(S) may have accumulated most of its circumstellar mass.\nThe less energetic outflow also supports the idea that L1448C(S) has a nature close to class I rather than class 0.\nThese results imply that the two sources, L1448C(N) and L1448C(S), were formed in different epochs in the 
same dense core.\nAnother possibility is the effect of the L1448C(N) outflow.\nSince L1448C(S) is located along the same line of sight as the L1448C(N) outflow, it is possible that the outflowing gas has stripped away the dense gas surrounding L1448C(S).\nAlthough the three-dimensional geometries of the sources and outflows are not clear, the high rotational temperature of NH$_3$ observed at the position of L1448C(S) \\citep{Cur99} suggests the possibility that the gas around L1448C(S) is heated by the interaction with the jet from L1448C(N).\nIn such a case, the {\\it apparent} age of L1448C(S) is older than that of L1448C(N) simply because the amount of material left around L1448C(S) is smaller than that around L1448C(N).\nThe effect of outflow is also proposed to explain the difference in the apparent evolutionary stages of the protostellar pair L1448N(A) and L1448N(B), which is located at $\\sim$75\\arcsec northwest of L1448C \\citep{Oli06}.\n\nThe third scenario is the disintegration of an unstable multiple system as proposed by \\citet{Rei00}.\nSince nonhierarchical triple systems are unstable, they break up, ejecting the lightest member, while the remaining two members form a close binary system with a highly eccentric orbit.\nIn this scenario, the disks around escaping stars will be highly truncated; the typical disk size is expected to be around half of the distance between the stars in the close triple encounter.\nIf L1448C(S) is the escaping member, the small amount of its circumstellar material can be explained by means of disk truncation.\nThis scenario also implies that L1448C(N) is a close binary system.\nThe observed deflection, wiggling, and periodic velocity variation of the jet suggest the possibility that L1448C(N) is a close binary system with an orbital radius of $\\sim$2--4 AU.\nTherefore, it is possible that such a close binary system was formed by means of disintegration of a triple system.\nIn order to assess this scenario, kinematic information 
of L1448C(S) becomes important.\nAlthough previous NH$_3$ results of \\citet{Cur99} did not show peculiar motions in the dense core containing L1448C(N) and L1448C(S), a detailed study with higher angular resolution would be helpful.\n\n\n\n\n\\section{CONCLUSIONS}\nThe central region of the highly-collimated molecular outflow driven by L1448C was mapped in the SiO $J$=8--7, CO $J$=3--2, and 350 GHz continuum emission with the SMA at $\\sim$1 arcsecond resolution.\nOur main conclusions are the following:\n\\begin{enumerate}\n\\item The 350 GHz continuum emission was detected from two {\\it Spitzer} sources, L1448C(N) and L1448C(S). \nThe continuum emission from L1448C(N) consists of an extended component and a compact component. \nThe compact component is elongated perpendicular to the outflow axis, and is likely to be a circumstellar disk with a size of $\\sim$90 AU. \nThe spectral index of this compact component derived from the data from 86 GHz to 350 GHz is ${\\alpha}{\\sim}$2, suggesting the possibility that the continuum emission is optically thick at 350 GHz.\nThe mass of the disk is estimated to be $\\sim$0.11 $M_{\\odot}$.\n\\item The continuum flux from L1448C(S) is $\\sim$60 mJy, which is $\\sim$10 times lower than the flux from L1448C(N), although L1448C(S) is brighter than L1448C(N) in the mid-infrared wavebands.\nThe mass of the circumstellar material surrounding L1448C(S) is estimated to be 8.6$\\times$10$^{-3}$ $M_{\\odot}$.\n\\item A narrow jet from L1448C(N) along the outflow axis was observed in the SiO and the high-velocity CO. The width of the jet measured in the SiO images is $\\sim$200 AU FWHM on average. \nThe jet consists of a chain of emission knots with an inter-knot spacing of $\\sim$500 AU.\nIt is likely that the knots in the L1448 jet are the internal bow shocks in the jet beam.\n\\item The dynamical timescale of the innermost pair of knots, which are significant in the SiO but barely seen in the CO, is only $\\sim$10 yr. 
\nIt is likely that the SiO may have been formed quickly in the protostellar wind through the gas-phase reaction, or been formed on the dust grain and directly released into the gas phase by means of shocks.\n\\item The low velocity CO emission delineates two V-shaped shells with a common apex at L1448C(N). The kinematics of this shell component is reproduced by the model of a wide opening angle wind. \nTherefore, the outflow from L1448C(N) consists of both highly-collimated jets and shells produced by a wide opening angle wind.\nThe observed jet+shell structure can be explained by the ``unified model'' proposed by \\citet{Sha06}, in which highly-collimated jet components are explained as an on-axis density enhancement of the {\\it X}-wind type of wide opening angle wind.\n\\item The jet from L1448C(N) is extremely active with a momentum supply rate of $\\sim$5$\\times$10$^{-4}$ $M_{\\odot}$ km s$^{-1}$ yr$^{-1}$ and a mechanical luminosity of $\\sim$7 $L_{\\odot}$. \nThe mass accretion rate derived from the mass loss rate is $\\sim$10$^{-5}$ $M_{\\odot}$ yr$^{-1}$.\nSuch a high mass-accretion rate and a rather low bolometric luminosity of the central source, 7.5 $L_{\\odot}$, imply that the central protostar is still in the very early phase of its evolution with a mass of 0.03--0.09 $M_{\\odot}$ and a dynamical age of (4--12)$\\times$10$^3$ yr.\n\\item The blue part and the red part of the jet are misaligned by $\\sim$5$^{\\circ}$, forming a C-shaped bend toward the west.\nThe possible origin of this bending is the dynamical pressure caused by the outflow from the nearby protostar, L1448N.\n\\item The jet is deflected toward the east in the blueshifted part and toward the south in the redshifted part. 
\nIn addition, the jet is wiggling with a period of $\\sim$15--20 yr.\nThe deflection and wiggling of the jet can be explained if the driving source is a member of a binary system with an orbital radius of 2--4 AU.\n\\item The jet shows a semi-periodic variation in radial velocity with an amplitude of $\\sim$7 km s$^{-1}$.\nEach SiO knot has its higher velocity in the upstream side and lower velocity in the downstream side. The opposite velocity gradient is seen in the faint emission between the knots.\nIt is likely that the ejection velocity varies periodically by means of modulation of mass accretion. \n\\item A bipolar outflow in the NE-SW direction centered at L1448C(S) was discovered in the CO $J$=3--2. \nThis provides strong evidence that L1448C(S) is a protostar.\nThe momentum flux of this outflow is only $\\sim$10$^{-6}$ $M_{\\odot}$ km s$^{-1}$ yr$^{-1}$, which is two to three orders of magnitude smaller than that of the L1448C(N) outflow, and is comparable to those of class I outflows.\n\\item L1448C(S) is surrounded by a rather small amount of circumstellar material of less than 0.01 $M_{\\odot}$ and is powering a less energetic outflow, suggesting that this source has a nature close to class I rather than class 0, even though this source is formed in the same dense core as L1448C(N).\nOne possible scenario to explain this dichotomy is the effect of the outflow from L1448C(N); a significant amount of material in the envelope surrounding L1448C(S) might have been stripped off by the powerful outflow from L1448C(N).\nAnother possibility is the disintegration of an unstable multiple system, in which L1448C(S) is an escaping member with a truncated disk.\n\n\\end{enumerate}\n\n\\acknowledgments\n\nWe wish to thank all the SMA staff in Hawaii, Cambridge, and Taipei for their \nenthusiastic help during these observations. \nN. Hirano thanks M. Machida and S. Inutsuka for fruitful discussion.\nN. 
Hirano is supported by NSC grant 96-2112-M-001-023.\n\n \\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{Intro}\nGiven a (commutative) integral domain $D$ with fraction field $K$, we define $\\textnormal{Int}(D) := \\{f \\in K[X] \\mid f(D) \\subseteq D\\}$, which is the ring of integer-valued polynomials on $D$. Integer-valued polynomials and the properties of $\\textnormal{Int}(D)$ have been well studied; the book \\cite{CaCh} covers the major theory in this area and provides an extensive bibliography. In recent years, researchers have begun to study a generalization of $\\textnormal{Int}(D)$ to polynomials that act on a $D$-algebra rather than on $D$ itself \\cite{EFJ}, \\cite{EvrJoh}, \\cite{Fri1}, \\cite{Fri1Corr}, \\cite{Fri2}, \\cite{LopWer}, \\cite{Per1}, \\cite{PerDivDiff}, \\cite{PerFinite}, \\cite{PerWer}, \\cite{PerWer1}, \\cite{Wer}. For this generalization, we let $A$ be a torsion-free $D$-algebra such that $A \\cap K = D$, and let $B = K \\otimes_D A$, which is the extension of $A$ to a $K$-algebra. By identifying $K$ and $A$ with their images under the injections $k \\mapsto k \\otimes 1$ and $a \\mapsto 1 \\otimes a$, we can evaluate polynomials in $K[X]$ at elements of $A$. This allows us to define $\\textnormal{Int}_K(A) := \\{f \\in K[X] \\mid f(A) \\subseteq A\\}$, which is the ring of integer-valued polynomials on $A$ with coefficients in $K$. With notation as above, the condition $A \\cap K = D$ ensures that $D[X] \\subseteq \\textnormal{Int}_K(A) \\subseteq \\textnormal{Int}(D)$.\n\n\\begin{Def}\\label{Nontrivial}\nWe say that $\\textnormal{Int}_K(A)$ is \\textit{nontrivial} if $\\textnormal{Int}_K(A) \\ne D[X]$.\n\\end{Def}\n\nThe goal of this paper is to determine when $\\textnormal{Int}_K(A)$ is nontrivial. Some results in this direction were proved by Frisch in \\cite[Lem. 4.1]{Fri2} and \\cite[Thm. 4.3]{Fri2}; these are restated below in Proposition \\ref{Fin gen nontrivial}. 
In the traditional case, necessary and sufficient conditions for $\\textnormal{Int}(D)$ to be nontrivial were given by Rush in \\cite{Rush}. Using Rush's criteria, we prove (Theorem \\ref{Nontriv fin gen criterion}) that when $D$ is any integral domain and $A$ is finitely generated as a $D$-module, $\\textnormal{Int}_K(A)$ is nontrivial if and only if $\\textnormal{Int}(D)$ is nontrivial. Part of this work involves conditions under which we have $D[X] \\subseteq \\textnormal{Int}_K(M_n(D)) \\subseteq \\textnormal{Int}_K(A)$ for some $n$, where $M_n(D)$ is the algebra of $n \\times n$ matrices with entries in $D$. This led us to investigate whether having $\\textnormal{Int}_K(M_n(D)) = \\textnormal{Int}_K(A)$ implies that $A \\cong M_n(D)$. While this is not true in general, the result does hold if $D$ is a Dedekind domain and $A$ can be embedded in $M_n(D)$ (Theorem \\ref{Uniqueness of M_n(D)}).\n\nIf we drop the assumption that $A$ is finitely generated as a $D$-module, determining whether $\\textnormal{Int}_K(A)$ is nontrivial becomes more complicated. However, when $D$ is Dedekind, we are able to give necessary and sufficient conditions for $\\textnormal{Int}_K(A)$ to be nontrivial (Theorem \\ref{criterion}). Our work on this topic also allows us to prove that if $D$ is Dedekind, then $\\textnormal{Int}_K(A)$ has Krull dimension 2 (Corollary \\ref{Krull dimension}). This generalizes another theorem of Frisch \\cite[Thm. 5.4]{Fri1} where it was assumed that $A$ was finitely generated as a $D$-module.\n\n\\section{Integral Algebras of Bounded Degree}\\label{Non-trivial}\nThroughout, $D$ denotes an integral domain with field of fractions $K$, and $A$ denotes a $D$-algebra. We will always assume that $A$ satisfies certain conditions, which we call our \\textit{standard assumptions}.\n\n\\begin{Def}\\label{Standard assumptions}\nWhen $A$ is a torsion-free $D$-algebra such that $A \\cap K = D$, we say that $A$ is a $D$-algebra with \\textit{standard assumptions}. 
When $A$ is finitely generated as a $D$-module, we say that $A$ is of \\textit{finite type}.\n\\end{Def}\n\nAs mentioned in the introduction, the condition that $A \\cap K = D$ implies that\n\\begin{equation*}\nD[X] \\subseteq \\textnormal{Int}_K(A) \\subseteq \\textnormal{Int}(D)\n\\end{equation*}\nand it is natural to consider when $D[X] = \\textnormal{Int}_K(A)$ or $\\textnormal{Int}_K(A) = \\textnormal{Int}(D)$. This latter equality is investigated in \\cite{IntdecompII}, where the following theorem is proved. Unless stated otherwise, all isomorphisms are ring isomorphisms.\n\n\\begin{Thm}\\label{Int_K(A)=Int(D)}\n\\cite[Thms. 2.10, 3.10]{IntdecompII} Let $D$ be a Dedekind domain with finite residue rings. Let $A$ be a $D$-algebra of finite type with standard assumptions. For each maximal ideal $P$ of $D$, let $\\widehat{A}_P$ and $\\widehat{D}_P$ be the $P$-adic completions of $A$ and $D$, respectively. Then, the following are equivalent.\n\\begin{enumerate}[(1)]\n\\item $\\textnormal{Int}_K(A) = \\textnormal{Int}(D)$.\n\\item For each nonzero prime $P$ of $D$, there exists $t \\in \\mathbb{N}$ such that $A\/PA \\cong \\bigoplus_{i=1}^t D\/P$.\n\\item For each nonzero prime $P$ of $D$, there exists $t \\in \\mathbb{N}$ such that $\\widehat{A}_P \\cong \\bigoplus_{i=1}^t \\widehat{D}_P$.\n\\end{enumerate}\n\\end{Thm}\n\nIn this paper, we examine the containment $D[X] \\subseteq \\textnormal{Int}_K(A)$. In the traditional setting of integer-valued polynomials, the ring $\\textnormal{Int}(D)$ is said to be \\textit{trivial} if $\\textnormal{Int}(D) = D[X]$, and we adopt the same terminology for $\\textnormal{Int}_K(A)$. Clearly, for $\\textnormal{Int}_K(A)$ to be nontrivial it is necessary that $\\textnormal{Int}(D)$ be nontrivial, so we begin by reviewing the situation for $\\textnormal{Int}(D)$. Section I.3 of \\cite{CaCh} and a paper by Rush \\cite{Rush} give several results regarding the triviality or non-triviality of $\\textnormal{Int}(D)$. 
We will summarize these theorems after recalling several definitions.\n\n\\begin{Def}\nAn ideal $\\mathfrak{a}$ of $D$ is said to be the colon ideal or conductor ideal of $q \\in K$ if \n$$\\mathfrak{a} = (D :_D q) = \\{d \\in D \\mid dq \\in D\\}.$$\nFor a commutative ring $R$, we denote by $\\textnormal{nil}(R)$ the nilradical of $R$, which is the set of all nilpotent elements of $R$, or, equivalently, the intersection of all prime ideals of $R$. For $x \\in \\textnormal{nil}(R)$, we let $\\nu(x)$ equal the nilpotency of $x$, i.e., the smallest positive integer $n$ such that $x^n = 0$. If $I\\subseteq R$ is an ideal, let $V(I) = \\{P \\in \\text{Spec}(R) \\mid P \\supseteq I\\}$.\n\\end{Def}\n\nThe following proposition summarizes several sufficient and necessary conditions on $D$ in order for $\\textnormal{Int}(D)$ to be nontrivial.\n\\begin{Prop}\\label{IntD nontrivial}\\mbox{}\n\\begin{enumerate}[(1)]\n\\item \\cite[Cor. I.3.7]{CaCh} If $D$ is a domain with all residue fields infinite, then $\\textnormal{Int}(D)$ is trivial.\n\\item \\cite[Prop. I.3.10]{CaCh} Let $D$ be a domain. If there is a proper conductor ideal $\\mathfrak{a}$ of $D$ such that $D\/\\mathfrak{a}$ is finite, then $\\textnormal{Int}(D)$ is nontrivial.\n\\item \\cite[Thm. I.3.14]{CaCh} Let $D$ be a Noetherian domain. Then, $\\textnormal{Int}(D)$ is nontrivial if and only if there is a prime conductor ideal of $D$ with finite residue field.\n\\item \\cite[Cor. 1.7]{Rush} Let $D$ be an integral domain. 
Then, the following are equivalent:\n\\begin{enumerate}[(i)]\n\\item $\\textnormal{Int}(D)$ is nontrivial.\n\\item There exist $a,b\\in D$ with $b\\notin aD$ such that the two sets $\\{\\;|D\/P|\\; \\mid P \\in V((aD:b))\\}$ and $\\{\\nu(x) \\mid x\\in {\\rm nil}(D\/(aD:b))\\}$ are bounded.\n\\end{enumerate}\n\\end{enumerate}\n\\end{Prop}\n\nIf $A$ is finitely generated as a $D$-module, Frisch has shown that the analogs of the above conditions in Proposition \\ref{IntD nontrivial} hold for $\\textnormal{Int}_K(A)$:\n\n\\begin{Prop}\\label{Fin gen nontrivial}\nLet $D$ be a domain. Let $A$ be a $D$-algebra of finite type with standard assumptions.\n\\begin{enumerate}[(1)]\n\\item \\cite[Lem. 4.1]{Fri2} Assume there is a proper conductor ideal $\\mathfrak{a}$ of $D$ such that $D\/\\mathfrak{a}$ is finite. Then, $\\textnormal{Int}_K(A)$ is nontrivial.\n\\item \\cite[Thm. 4.3]{Fri2} Assume that $D$ is Noetherian. Then, $\\textnormal{Int}_K(A)$ is nontrivial if and only if there is a prime conductor ideal of $D$ with finite residue field.\n\\end{enumerate}\n\\end{Prop}\n\nIn particular, \\cite[Thm. 4.3]{Fri2} shows that for a Noetherian domain $D$ and a finitely generated algebra $A$, $\\textnormal{Int}_K(A)$ is nontrivial if and only if $\\textnormal{Int}(D)$ is nontrivial. In Theorem \\ref{Nontriv fin gen criterion}, we will show that this holds even if $D$ is not Noetherian. Additionally, we can weaken our assumptions on $A$. Recall the following definition, which can be found in \\cite{Jac} or \\cite{Lam}, among other sources.\n\n\\begin{Def}\nLet $R$ be a commutative ring and $A$ an $R$-algebra. We say that $A$ is an \\emph{algebraic algebra} (over $R$) if every element of $A$ satisfies a polynomial equation with coefficients in $R$. We say that $A$ is an \\emph{algebraic algebra of bounded degree} if there exists $n\\in\\mathbb{N}$ such that the degree of the minimal polynomial equation of each of its elements is bounded by $n$. 
If we insist that each element of $A$ satisfy a monic polynomial with coefficients in $R$, then we say that $A$ is an \\emph{integral algebra} over $R$.\n\\end{Def}\n\nAlgebraic algebras are usually discussed over fields, in which case an algebraic algebra is also an integral algebra. Over a domain, however, the two structures are not equivalent. For example, $A = \\mathbb{Z}[\\frac{1}{2}]$ is an algebraic algebra over $\\mathbb{Z}$ that is not an integral algebra. In this case, $A$ does not satisfy our standard assumption that $A \\cap \\mathbb{Q}$ should equal $\\mathbb{Z}$. However, if we instead take $A = \\mathbb{Z} \\oplus \\mathbb{Z}[\\frac{1}{2}]$ (so that $B = \\mathbb{Q} \\otimes_\\mathbb{Z} A \\cong \\mathbb{Q} \\oplus \\mathbb{Q}$, $D$ is the diagonal copy of $\\mathbb{Z}$ in $B$, and $K$ is the diagonal copy of $\\mathbb{Q}$ in $B$), then $A$ is an algebraic algebra over $D$, $A$ is not an integral algebra over $D$, and $A \\cap K = D$.\n\nNote also that if $A$ is finitely generated as a $D$-module, then $A$ is an integral algebra of bounded degree, with the bound given by the number of generators (see \\cite[Thm. 1, Chap. V]{BourbakiAlg} or \\cite[Prop. 2.4]{AtMc}). However, the converse does not hold. For instance, $A = D[X_1, X_2, \\ldots]\/(\\{X_i X_j \\mid i, j \\geq 1 \\})$ is not finitely generated, but if $f \\in A$ with constant term $d \\in D$, then $f$ satisfies the polynomial $(X-d)^2$. Thus, this $A$ is an integral algebra of bounded degree. \n\nFor our purposes, the importance of having a bounding degree $n$ is that it guarantees that $\\textnormal{Int}_K(A)$ contains $\\textnormal{Int}_K(M_n(D))$, where $M_n(D)$ denotes the algebra of $n \\times n$ matrices with entries in $D$.\n\n\\begin{Lem}\\label{Matrix containment}\nLet $D$ be a domain and let $A$ be a $D$-algebra with standard assumptions. Assume that $A$ is an integral $D$-algebra of bounded degree $n$. 
Then, $\\textnormal{Int}_K(M_n(D)) \\subseteq \\textnormal{Int}_K(A)$.\n\\end{Lem}\n\\begin{proof}\nLet $a \\in A$ and let $\\mu_a \\in D[X]$ be monic of degree $n$ such that $\\mu_a(a) = 0$. Let $f(X) = g(X)\/d \\in \\textnormal{Int}_K(M_n(D))$, where $g \\in D[X]$ and $d \\in D \\setminus \\{0\\}$. By \\cite[Lem. 3.4]{FriSep}, $g$ is divisible modulo $dD[X]$ by every monic polynomial in $D[X]$ of degree $n$; hence, $\\mu_a$ divides $g$ modulo $d$. It follows that $g(a) \\in dA$ and $f(a) \\in A$. Since $a$ was arbitrary, $f \\in \\textnormal{Int}_K(A)$.\n\\end{proof}\n\n\\begin{Rem}\nThe converse of Lemma \\ref{Matrix containment} does not hold, even in the case when $\\textnormal{Int}_K(M_n(D))$ is nontrivial, as Example \\ref{integralalgebraboundeddegree not necessary} below will show.\n\\end{Rem}\n\nThus, in the case of an integral algebra of bounded degree $n$, to prove that $\\textnormal{Int}_K(A)$ is nontrivial it suffices to show that $\\textnormal{Int}_K(M_n(D))$ is nontrivial. This task is more tractable, because the polynomials given in the next definition can be used to map $M_n(D)$ into $M_n(P)$, where $P$ is a maximal ideal of $D$ with a finite residue field.\n\n\\begin{Def}\\label{BCL polynomials}\nFor each prime power $q$ and each $n > 0$, let \n\\begin{equation*}\n\\phi_{q,n}(X) = (X^{q^n} - X)(X^{q^{n-1}}-X) \\cdots (X^q - X).\n\\end{equation*}\n\\end{Def}\n\n\\begin{Lem}\\label{BCL lemma}\n\\cite[Thm. 3]{BrawCarLev} Let $\\mathbb{F}_q$ be the finite field with $q$ elements. Then, $\\phi_{q,n}$ sends each matrix in $M_n(\\mathbb{F}_q)$ to the zero matrix. Consequently, if $P \\subset D$ is a maximal ideal of $D$ with residue field $D\/P \\cong \\mathbb{F}_q$, then $\\phi_{q, n}$ maps $M_n(D)$ into $M_n(P)$.\n\\end{Lem}\n\n\\begin{Prop}\\label{Nontriv matrix criterion}\nLet $D$ be a domain. 
If $\\textnormal{Int}(D)$ is nontrivial, then $\\textnormal{Int}_K(M_n(D))$ is nontrivial, for all $n \\geq 1$.\n\\end{Prop}\n\\begin{proof}\nLet $n \\geq 1$ be fixed. Since $\\textnormal{Int}(D)$ is nontrivial, by \\cite[Cor. 1.7]{Rush} there exist $a,b\\in D$ with $b\\notin aD$ such that $\\{\\;|D\/P|\\; \\mid P \\in V((aD:b))\\}$ and $\\{\\nu(x) \\mid x\\in \\textnormal{nil}(D\/(aD:b))\\}$ are bounded. Let $I = (aD:b)$. Note that, because the former condition holds, each prime ideal containing $I$ is maximal, so the nilradical of $D\/I$ is equal to the Jacobson radical of $D\/I$.\n\nLet $\\{q_1,\\ldots,q_s\\}=\\{\\;|D\/P|\\; \\mid P\\in V(I)\\}$. By Lemma \\ref{BCL lemma}, we have $\\phi_{q,n}(M_n(D))\\subseteq M_n(P)$ for each maximal ideal $P\\subset D$ whose residue field has cardinality $q$. Then\n\\begin{equation*}\ng(X)=\\prod_{i=1,\\ldots,s}\\phi_{q_i,n}(X)\n\\end{equation*}\nis a monic polynomial such that $g(M_n(D))\\subseteq \\prod_{i}M_n(P_i)\\subseteq M_n(J)$, where $J=\\sqrt{I}$. Considering everything modulo $I$, we have $\\overline{g}(M_n(D\/I))\\subseteq M_n(J\/I)$. \n\nNow, since $\\{\\nu(x) \\mid x\\in \\textnormal{nil}(D\/I)\\}$ is bounded, the nilpotency of every element in $J\/I$ is bounded by some positive integer $t$. It is a standard exercise that a matrix over a commutative ring with nilpotent entries is a nilpotent matrix. Moreover, it easily follows that the nilpotency of every matrix in $M_n(J\/I)$ is bounded by some $m\\in\\mathbb{N}$, depending only on $t$ and $n$. Hence, $\\overline{g}(X)^{m}$ maps every matrix in $M_n(D\/I)$ to $0$, so that $g(X)^{m}$ maps $M_n(D)$ into $M_n(I)$. 
Finally, it is now easy to see that the polynomial $\\frac{b}{a}\\cdot g(X)^m$ is in $\\textnormal{Int}_K(M_n(D))$ but not in $D[X]$.\n\\end{proof}\n\nCombining Lemma \\ref{Matrix containment} with Proposition \\ref{Nontriv matrix criterion}, we obtain our desired theorem.\n\n\\begin{Thm}\\label{Nontriv fin gen criterion}\nLet $D$ be a domain and let $A$ be a $D$-algebra with standard assumptions. Assume that $A$ is an integral $D$-algebra of bounded degree. Then, $\\textnormal{Int}_K(A)$ is nontrivial if and only if $\\textnormal{Int}(D)$ is nontrivial. In particular, if $A$ is finitely generated as a $D$-module, then $\\textnormal{Int}_K(A)$ is nontrivial if and only if $\\textnormal{Int}(D)$ is nontrivial.\n\\end{Thm}\n\nLemma \\ref{Matrix containment} shows that, for an integral algebra $A$ of bounded degree $n$, the following containments hold:\n\\begin{equation*}\nD[X] \\subseteq \\textnormal{Int}_K(M_n(D)) \\subseteq \\textnormal{Int}_K(A) \\subseteq \\textnormal{Int}(D).\n\\end{equation*}\nWhile our focus has been on whether $\\textnormal{Int}_K(A)$ equals $D[X]$, for the remainder of this section we will consider the containment $\\textnormal{Int}_K(M_n(D)) \\subseteq \\textnormal{Int}_K(A)$. In particular, we will examine to what extent $\\textnormal{Int}_K(M_n(D))$ is unique among rings of integer-valued polynomials. That is, if $\\textnormal{Int}_K(M_n(D)) = \\textnormal{Int}_K(A)$, then can we conclude that $A \\cong M_n(D)$? In general, the answer is no, as we show below in Example \\ref{Quaternion example}. However, in Theorem \\ref{Uniqueness of M_n(D)} we will prove that for $D$ Dedekind, if $A$ can be embedded in $M_n(D)$, then having $\\textnormal{Int}_K(M_n(D)) = \\textnormal{Int}_K(A)$ implies that $A \\cong M_n(D)$.\n\nWe first recall the definition of a null ideal of an algebra.\n\n\\begin{Def}\\label{Null ideal}\nLet $R$ be a commutative ring and $A$ an $R$-algebra. 
The \\textit{null ideal} of $A$ with respect to $R$, denoted $N_R(A)$, is the set of polynomials in $R[X]$ that kill $A$. That is, $N_R(A) = \\{ f \\in R[X] \\mid f(A) = 0\\}$. In particular, $N_{D\/P}(A\/PA) = \\{f \\in (D\/P)[X] \\mid f(A\/PA) = 0\\}$ denotes the null ideal of $A\/PA$ with respect to $D\/P$.\n\\end{Def}\n\nThere is a close relationship between polynomials in rings of integer-valued polynomials and polynomials in null ideals.\n\n\\begin{Lem}\\label{Null ideal lemma}\nLet $D$ be a domain and let $A$ and $A'$ be $D$-algebras with standard assumptions.\n\\begin{enumerate}[(1)]\n\\item Let $g(X)\/d \\in K[X]$, where $g \\in D[X]$ and $d \\ne 0$. Then, $g(X)\/d \\in \\textnormal{Int}_K(A)$ if and only if the residue of $g$ (mod $d$) is in $N_{D\/dD}(A\/dA)$. \n\\item $\\textnormal{Int}_K(A) = \\textnormal{Int}_K(A')$ if and only if $N_{D\/dD}(A\/dA) = N_{D\/dD}(A'\/dA')$ for all $d \\in D$.\n\\end{enumerate}\n\\end{Lem}\n\\begin{proof}\nNotice that $g\/d \\in \\textnormal{Int}_K(A)$ if and only if $g(A) \\subseteq dA$ if and only if $g(A\/dA) = 0$ mod $d$. This proves (1), and (2) follows easily.\n\\end{proof}\n\n\\begin{Ex}\\label{Quaternion example}\nLet $D = \\mathbb{Z}_{(p)}$ be the localization of $\\mathbb{Z}$ at an odd prime $p$. Take $A$ to be the quaternion algebra $A = D \\oplus D\\mathbf{i} \\oplus D\\mathbf{j} \\oplus D\\mathbf{k}$, where $\\mathbf{i}$, $\\mathbf{j}$, and $\\mathbf{k}$ are the imaginary quaternion units satisfying $\\mathbf{i}^2 = \\mathbf{j}^2 = -1$ and $\\mathbf{i}\\mathbf{j} = \\mathbf{k} = -\\mathbf{j}\\mathbf{i}$. It is well known (cf.\\ \\cite[Exercise 3A]{Goodearl} or \\cite[Sec. 2.5]{DavSarVal}) that $A\/p^k A \\cong M_2(\\mathbb{Z}\/p^k \\mathbb{Z}) \\cong M_2(D\/p^k D)$ for all $k > 0$. By Lemma \\ref{Null ideal lemma}, $\\textnormal{Int}_\\mathbb{Q}(A) = \\textnormal{Int}_\\mathbb{Q}(M_2(D))$. 
However, $A$ contains no nonzero nilpotent elements (and is in fact contained in the division ring $\\mathbb{Q} \\oplus \\mathbb{Q}\\mathbf{i} \\oplus \\mathbb{Q}\\mathbf{j} \\oplus \\mathbb{Q}\\mathbf{k}$) and so cannot be isomorphic to $M_2(D)$.\n\\end{Ex}\n\nThus, in general, $\\textnormal{Int}_K(A) = \\textnormal{Int}_K(M_n(D))$ does not imply that $A \\cong M_n(D)$. However, as mentioned above, we do have such an isomorphism if $A$ can be embedded in $M_n(D)$. Proving this theorem involves some results of Racine \\cite{Racine}, \\cite{Racine2} about maximal subalgebras of matrix rings, which we now summarize.\n\n\\begin{Prop}\\label{Racine classification}\\mbox{}\n\\begin{enumerate}[(1)]\n\\item (\\cite[Thm. 1]{Racine}) Let $\\overline{A}$ be a maximal $\\mathbb{F}_q$-subalgebra of $M_n(\\mathbb{F}_q)$. Let $V$ be an $\\mathbb{F}_q$-vector space of dimension $n$, so that $M_n(\\mathbb{F}_q)\\cong{\\rm End}_{\\mathbb{F}_q}(V)$. Then, $\\overline{A}$ is one of the following two types.\n\\begin{itemize}\n\\item[(I)] The stabilizer of a proper nonzero subspace of $V$. That is, $\\overline{A} = S(W) = \\{\\varphi\\in {\\rm End}_{\\mathbb{F}_q}(V) \\mid \\varphi(W)\\subseteq W\\}$, where $W$ is a proper nonzero $\\mathbb{F}_q$-subspace of $V$.\n\n\\item[(II)] The centralizer of a minimal field extension of $\\mathbb{F}_q$. That is, $\\overline{A} = C_{{\\rm End}_{\\mathbb{F}_q}(V)}(\\mathbb{F}_{q^l})=\\{\\varphi\\in{\\rm End}_{\\mathbb{F}_q}(V) \\mid \\varphi x=x\\varphi, \\forall x\\in \\mathbb{F}_{q^l} \\}$, where $l\\in\\mathbb{Z}$ is a prime dividing $n$.\n\\end{itemize}\n\n\\item (\\cite[Theorem p.\\ 12]{Racine2}) Let $D$ be a Dedekind domain and let $A$ be a maximal $D$-subalgebra of $M_n(D)$. Then, there exists a maximal ideal $P$ of $D$ such that $A\/P A$ is a maximal subalgebra of $M_n(D\/P)$. 
\n\\end{enumerate}\n\\end{Prop}\n\nRacine's classification allows us to establish a partial uniqueness result for the null ideal of $M_n(\\mathbb{F}_q)$, and hence for $\\textnormal{Int}_K(M_n(D))$.\n\n\\begin{Lem}\\label{FqsubalgebrasMnFq}\nLet $\\overline{A}$ be an $\\mathbb{F}_q$-subalgebra of $M_n(\\mathbb{F}_q)$ such that $N_{\\mathbb{F}_q}(\\overline{A})=N_{\\mathbb{F}_q}(M_n(\\mathbb{F}_q))$. Then $\\overline{A} = M_n(\\mathbb{F}_q)$.\n\\end{Lem}\n\\begin{proof}\nSuppose the claim is not true, so that $\\overline A$ is contained in a maximal $\\mathbb{F}_q$-subalgebra of $M_n(\\mathbb{F}_q)$; hence, without loss of generality, we may assume that $\\overline A\\subsetneq M_n(\\mathbb{F}_q)$ is a maximal $\\mathbb{F}_q$-subalgebra. We will show that $N_{\\mathbb{F}_q}(\\overline{A})$ properly contains $N_{\\mathbb{F}_q}(M_n(\\mathbb{F}_q))$. Note that $N_{\\mathbb{F}_q}(M_n(\\mathbb{F}_q)) = (\\phi_{q,n}(X))$ by \\cite[Thm. 3]{BrawCarLev}, where $\\phi_{q,n}$ is the polynomial from Definition \\ref{BCL polynomials}.\n\nLet $V$ be an $\\mathbb{F}_q$-vector space of dimension $n$, so that $M_n(\\mathbb{F}_q)\\cong{\\rm End}_{\\mathbb{F}_q}(V)$. Assume first that $\\overline{A} = S(W)$ is of Type I as in Proposition \\ref{Racine classification}, and let $m = \\dim_{\\mathbb{F}_q}(W)$. Note that conjugating $\\overline{A}$ by an element of $GL(n, q)$ will change the matrices in $\\overline{A}$, but not the polynomials in the null ideal $N_{\\mathbb{F}_q}(\\overline{A})$. Moreover, up to conjugacy by an element in $GL(n, q)$, we may assume that $W$ has basis $e_1, e_2, \\ldots, e_m$, where $e_i$ is the standard basis vector with $1$ in the $i^\\text{th}$ component and 0 elsewhere. Under this basis, the matrices in $\\overline{A}$ are block matrices of the form \n$\\big(\\begin{smallmatrix}\nA_1 & A_2 \\\\ 0 & A_3\n\\end{smallmatrix}\\big)$ \nwhere $A_1$ is $m \\times m$ and $A_3$ is $(n-m) \\times (n-m)$. 
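As a concrete sanity check on this setup (illustrative only, not part of the original argument), the case $q = 2$, $n = 2$ can be verified by brute force. Here $\phi_{2,2}(X) = X^2(X+1)^2(X^2+X+1)$ is the least common multiple of all monic quadratics over $\mathbb{F}_2$, and $S(W)$ for $W = \mathrm{span}(e_1)$ consists of the upper-triangular matrices. The sketch below checks that $\phi_{2,2}$ kills all of $M_2(\mathbb{F}_2)$, while its quotient by the irreducible factor $X^2+X+1$ kills only $S(W)$:

```python
from itertools import product

MOD = 2  # we work over F_2

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % MOD
             for j in range(2)] for i in range(2)]

def poly_eval(coeffs, m):
    # Evaluate sum(coeffs[i] * m**i) by Horner's rule; coeffs[i] is the
    # coefficient of X**i, and the constant term is multiplied by the identity.
    ident = [[1, 0], [0, 1]]
    acc = [[0, 0], [0, 0]]
    for c in reversed(coeffs):
        acc = mat_mul(acc, m)
        acc = [[(acc[i][j] + c * ident[i][j]) % MOD for j in range(2)]
               for i in range(2)]
    return acc

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % MOD
    return out

# phi_{2,2} = X^2 (X+1)^2 (X^2+X+1): lcm of all monic quadratics over F_2.
x2 = [0, 0, 1]                          # X^2
x_plus_1_sq = poly_mul([1, 1], [1, 1])  # (X+1)^2
irred = [1, 1, 1]                       # X^2 + X + 1 (irreducible over F_2)
phi_full = poly_mul(poly_mul(x2, x_plus_1_sq), irred)
phi_quot = poly_mul(x2, x_plus_1_sq)    # phi_{2,2} with the irreducible factor removed

zero = [[0, 0], [0, 0]]
all_mats = [[[a, b], [c, d]] for a, b, c, d in product(range(2), repeat=4)]
upper = [m for m in all_mats if m[1][0] == 0]  # the stabilizer S(W) of span(e_1)

assert all(poly_eval(phi_full, m) == zero for m in all_mats)   # phi_{2,2} kills M_2(F_2)
assert all(poly_eval(phi_quot, m) == zero for m in upper)      # the quotient kills S(W)
assert any(poly_eval(phi_quot, m) != zero for m in all_mats)   # but not all of M_2(F_2)
```

For instance, the companion matrix of $X^2+X+1$ is not killed by the quotient polynomial, confirming the strict containment of null ideals used in the proof.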
\n\nOne consequence of this representation is that every matrix in $S(W)$ has a reducible characteristic polynomial. As shown in the proof of \\cite[Thm. 3]{BrawCarLev}, $\\phi_{q, n}$ is the least common multiple of all monic polynomials in $\\mathbb{F}_q[X]$ of degree $n$. Hence, $\\phi_{q, n} \\in N_{\\mathbb{F}_q}(\\overline{A})$, because the characteristic polynomial of each matrix in $\\overline{A}$ divides $\\phi_{q,n}$. However, if $\\phi$ is the quotient of $\\phi_{q,n}$ by an irreducible polynomial in $\\mathbb{F}_q[X]$ of degree $n$, then $\\phi \\in N_{\\mathbb{F}_q}(\\overline{A})$, but $\\phi \\notin N_{\\mathbb{F}_q}(M_n(\\mathbb{F}_q))$. Thus, $N_{\\mathbb{F}_q}(\\overline{A})$ properly contains $N_{\\mathbb{F}_q}(M_n(\\mathbb{F}_q))$.\n\nNow, assume that $\\overline{A}$ is of Type II of Proposition \\ref{Racine classification}, so that $\\overline{A} = C_{{\\rm End}_{\\mathbb{F}_q}(V)}(\\mathbb{F}_{q^l})$ for some prime $l$ dividing $n$. Then, by \\cite[Thm. VIII.10]{McD}, we have $\\overline{A} \\cong M_{n\/l}(\\mathbb{F}_{q^l})$, and so \n\\begin{equation*}\nN_{\\mathbb{F}_q}(\\overline{A})=(\\phi_{q^l,n\/l}(X))\\supsetneq (\\phi_{q,n}(X)) = N_{\\mathbb{F}_q}(M_n(\\mathbb{F}_q)).\n\\end{equation*}\nAs before, the null ideal of $\\overline{A}$ strictly contains the null ideal of $M_n(\\mathbb{F}_q)$.\n\\end{proof}\n\n\\begin{Thm}\\label{Uniqueness of M_n(D)}\nLet $D$ be a Dedekind domain with finite residue fields. Let $A$ be a $D$-algebra of finite type with standard assumptions. Assume that $n \\geq 1$ is such that $A$ can be embedded in $M_n(D)$. Then, $\\textnormal{Int}_K(A) = \\textnormal{Int}_K(M_n(D))$ if and only if $A \\cong M_n(D)$.\n\\end{Thm}\n\\begin{proof}\nClearly, $A \\cong M_n(D)$ implies that $\\textnormal{Int}_K(A) = \\textnormal{Int}_K(M_n(D))$. So, assume that $\\textnormal{Int}_K(M_n(D)) = \\textnormal{Int}_K(A)$. 
As we will prove shortly in Lemma \\ref{wellbehaviourlocalization}, $\\textnormal{Int}_K(A)$ (and likewise $\\textnormal{Int}_K(M_n(D))$) is well-behaved with respect to localization at primes of $D$: for each prime $P$ of $D$, we have $\\textnormal{Int}_K(A)_P = \\textnormal{Int}_K(A_P)$. Hence, $\\textnormal{Int}_K(M_n(D_P)) = \\textnormal{Int}_K(A_P)$ for each $P$. Since $D$ is Dedekind, $D_P$ is a discrete valuation ring, so there exists $\\pi \\in D_P$ such that $PD_P = \\pi D_P$. Moreover, we have $D_P\/\\pi D_P \\cong D\/P$ and $A_P \/ \\pi A_P \\cong A\/PA$, so that $N_{D_P\/\\pi D_P}(A_P \/ \\pi A_P) = N_{D\/P}(A\/PA)$ (and likewise for $M_n(D)$). By Lemma \\ref{Null ideal lemma} (2), we conclude that the null ideals $N_{D\/P}(M_n(D\/P))$ and $N_{D\/P}(A\/P A)$ are equal for all maximal ideals $P$ of $D$.\n\nNow, suppose by way of contradiction that the image of $A$ in $M_n(D)$ does not equal $M_n(D)$. As in Lemma \\ref{FqsubalgebrasMnFq}, we may assume that the image of $A$ in $M_n(D)$ is a maximal $D$-subalgebra of $M_n(D)$. By Proposition \\ref{Racine classification}, there exists a maximal ideal $P$ of $D$ such that $A\/P A$ is isomorphic to a maximal subalgebra of $M_n(D\/P)$. By Lemma \\ref{FqsubalgebrasMnFq}, the null ideals $N_{D\/P}(A\/P A)$ and $N_{D\/P}(M_n(D\/P))$ are not equal. This is a contradiction. Therefore, $A \\cong M_n(D)$.\n\\end{proof}\n\n\\section{General Case}\\label{General case section}\n\nWe return now to the study of when $\\textnormal{Int}_K(A)$ is nontrivial. Because of Theorem \\ref{Nontriv fin gen criterion}, $A$ being an integral $D$-algebra of bounded degree can be sufficient for $\\textnormal{Int}_K(A)$ to be nontrivial, but it is not necessary. 
There exist $D$-algebras $A$ that are neither finitely generated, nor algebraic over $D$ (let alone integral or of bounded degree), but for which $\\textnormal{Int}_K(A)$ is nontrivial, as the next example shows.\n\n\\begin{Ex}\\label{integralalgebraboundeddegree not necessary}\nLet $D = \\mathbb{Z}$ and let $A = \\prod_{i \\in \\mathbb{N}} \\mathbb{Z}$ be an infinite direct product of copies of $\\mathbb{Z}$. Then, the element $(1, 2, 3, \\ldots)$ cannot be killed by any polynomial in $\\mathbb{Z}[X]$, so $A$ is not algebraic over $\\mathbb{Z}$. However, since operations in $A$ are done component-wise, any polynomial in $\\textnormal{Int}(\\mathbb{Z})$ is also in $\\textnormal{Int}_\\mathbb{Q}(A)$. Hence, $\\textnormal{Int}_\\mathbb{Q}(A) = \\textnormal{Int}(\\mathbb{Z})$, so in particular $\\textnormal{Int}_\\mathbb{Q}(A)$ is nontrivial. \n\\end{Ex}\n\nUltimately, the previous example works because for each prime $p$ there exists a polynomial that sends each element of $A\/pA$ to 0. More explicitly, each element of $\\prod_{i \\in \\mathbb{N}} \\mathbb{F}_p$ is killed by the polynomial $X^p-X$. This suggests that for $\\textnormal{Int}_K(A)$ to be nontrivial, it may be enough that there exists a finite index prime $P$ of $D$ with $A\/PA$ algebraic of bounded degree over $D\/P$ (since $D\/P$ is a field in this case, this is equivalent to having $A\/PA$ be integral of bounded degree over $D\/P$). We will prove below in Theorem \\ref{criterion} that if $D$ is a Dedekind domain, then this condition is necessary and sufficient for $\\textnormal{Int}_K(A)$ to be nontrivial.\n\nOur work will involve localizing $D$, $A$, and $\\textnormal{Int}_K(A)$ at $P$, and exploiting properties of $D_P$. In \\cite[Prop. 3.2]{Wer}, it is shown that if $D$ is Noetherian and $A$ is a free $D$-module of finite rank, then $\\textnormal{Int}_K(A)_P = \\textnormal{Int}_K(A_P)$ (in fact, \\cite[Prop. 3.2]{Wer} will hold if $A$ is merely finitely generated as a $D$-module). 
The next lemma shows that we can drop this finiteness assumption if $D$ is Dedekind.\n\n\\begin{Lem}\\label{wellbehaviourlocalization}\nLet $D$ be a Dedekind domain and $A$ a $D$-algebra with standard assumptions. Let $P$ be a prime ideal of $D$. Then $\\textnormal{Int}_K(A_P) = \\textnormal{Int}_K(A)_P$.\n\\end{Lem}\n\\begin{proof}\nThe containment $\\textnormal{Int}_K(A)_P \\subseteq \\textnormal{Int}_K(A_P)$ follows from the proof of \\cite[Prop. 3.2]{Wer}, which itself is an adaptation of a technique of Rush involving induction on the degrees of the relevant polynomials (see \\cite[Thm. I.2.1]{CaCh} or \\cite[Prop. 1.4]{Rush}). \n\nFor the other inclusion, let $f \\in \\textnormal{Int}_K(A_P)$ and write $f(X)=\\frac{g(X)}{d}$ for some $g \\in D[X]$ and $d\\in D \\setminus \\{0\\}$. Since $D$ is Dedekind, we may write $dD=P^aI$, where $a \\geq 0$ and $I$ is an ideal of $D$ coprime with $P$ (possibly equal to $D$ itself). If $a=0$ then $f \\in D_P[X] \\subseteq \\textnormal{Int}_K(A)_P$. If $a \\geq 1$, let $c\\in I \\setminus P$. We claim that $cf \\in\\textnormal{Int}_K(A)$, from which the statement follows since $c \\in D \\setminus P$. \n\nIf $Q \\subset D$ is a prime ideal different from $P$, then $cf \\in D_Q[X] \\subseteq \\textnormal{Int}_K(A_Q)$; that is, $cf(A_Q) \\subset A_Q$. Now, $f(A) \\subseteq f(A_P) \\subseteq A_P$ by assumption, so $cf(A) \\subset cA_P = A_P$, since $c \\notin P$. Since $A=\\bigcap_{Q\\in{\\rm Spec}(D)} A_Q$, it follows that $cf(A)\\subset A$, and we are done.\n\\end{proof}\n\n\nRecall (Definition \\ref{Null ideal}) that the null ideal of $A$ in $R$ is $N_R(A) = \\{ f \\in R[X] \\mid f(A) = 0\\}$.\n\n\\begin{Prop}\\label{equivalent conditions}\nLet $D$ be a Dedekind domain and $A$ a $D$-algebra with standard assumptions. Let $P$ be a prime ideal of $D$. 
Then, the following are equivalent.\n\\begin{enumerate}[(1)]\n\\item $N_{D\/P}(A\/PA) \\supsetneq (0)$.\n\\item $D_P[X] \\subsetneq \\textnormal{Int}_K(A_P)$.\n\\item $D\/P$ is finite and $A\/PA$ is a $D\/P$-algebraic algebra of bounded degree.\n\\end{enumerate}\n\\end{Prop}\n\\begin{proof}\n$(1) \\Rightarrow (2)$ Let $g\\in D[X]$ be a monic pullback of a nontrivial element $\\overline{g}\\in N_{D\/P}(A\/PA)$ and let $\\pi\\in P\\setminus P^2$. Then, $g(A_P)\\subseteq PA_P=\\pi A_P$, so $\\frac{g(X)}{\\pi}\\in \\textnormal{Int}_K(A_P)\\setminus D_P[X]$. \n\n$(2) \\Rightarrow (1)$ Let $f(X)=\\frac{g(X)}{d} \\in \\textnormal{Int}_K(A_P) \\setminus D_P[X]$, with $g \\in D[X] \\setminus P[X]$ and $d\\in P$. Let $v_P$ denote the canonical valuation on $D_P$. If $v_P(d)=e>1$ and $\\pi \\in P\\setminus P^2$, then $\\pi^{e-1} f(X)$ is still an element of $\\textnormal{Int}_K(A_P)$ which is not in $D_P[X]$. So, $g(A_P) \\subseteq \\frac{d}{\\pi^{e-1}} A_P \\subseteq \\pi A_P$. Hence, $\\overline{g}\\in(D_P\/PD_P)[X]\\cong(D\/P)[X]$ is a nontrivial element of $N_{D\/P}(A\/PA)$.\n\n$(1) \\Leftrightarrow (3)$ Note that\n\\begin{equation*}\nN_{D\/P}(A\/PA)=\\bigcap_{\\overline{a}\\in A\/PA}N_{D\/P}(\\overline{a})=\\bigcap_{\\overline{a}\\in A\/PA}(\\mu_{\\overline{a}}(X))\n\\end{equation*}\nwhere, for each $\\overline{a}\\in A\/PA$, $\\mu_{\\overline{a}}\\in (D\/P)[X]$ is the minimal polynomial of $\\overline{a}$ over the field $D\/P$.\n\nIf $N_{D\/P}(A\/PA)$ is nonzero, then it is equal to a principal ideal generated by a monic non-constant polynomial $\\overline{g}\\in (D\/P)[X]$. Since $N_{D\/P}(A\/PA)\\subseteq N_{D\/P}(D\/P)$, it follows that $D\/P$ is finite (if not, then $N_{D\/P}(D\/P) = (0)$, because the only polynomial which is identically zero on an infinite field is the zero polynomial). 
Moreover, each element $\\overline{a}\\in A\/PA$ is algebraic over $D\/P$ (otherwise the corresponding $N_{D\/P}(\\overline{a})$ is zero) and its degree over $D\/P$ is bounded by $\\deg(\\overline{g})$. \n\nConversely, assume $D\/P$ is finite and $A\/PA$ is a $D\/P$-algebraic algebra of bounded degree $n$. Then, there are finitely many monic polynomials over $D\/P$ of degree at most $n$, and the product of all such polynomials is a nontrivial element of $N_{D\/P}(A\/PA)$, since it is divisible by the minimal polynomial of each element of $A\/PA$.\n\\end{proof}\n\nWe can now establish the promised criterion for $\\textnormal{Int}_K(A)$ to be nontrivial.\n\n\\begin{Thm}\\label{criterion}\nLet $D$ be a Dedekind domain and let $A$ be a $D$-algebra with standard assumptions. Then $\\textnormal{Int}_K(A)$ is nontrivial if and only if there exists a prime ideal $P$ of $D$ of finite index such that $A\/PA$ is a $D\/P$-algebraic algebra of bounded degree.\n\\end{Thm}\n\\begin{proof}\nClearly, $D[X]\\subsetneq \\textnormal{Int}_K(A)$ if and only if there exists a prime ideal $P\\subset D$ such that the two $D$-modules $D[X]$ and $\\textnormal{Int}_K(A)$ are not equal locally at $P$, that is, $D_P[X]\\subsetneq \\textnormal{Int}_K(A)_P$. Since $\\textnormal{Int}_K(A)_P=\\textnormal{Int}_K(A_P)$ by Lemma \\ref{wellbehaviourlocalization}, we can apply Proposition \\ref{equivalent conditions} and we are done.\n\\end{proof}\n\n\\begin{Ex}\\label{Nontriv examples}\nTheorem \\ref{criterion} applies to the following examples.\n\\begin{itemize}\n\\item[(1)] Let $D=\\mathbb{Z}$ and $A=\\overline{\\mathbb{Z}}$, the absolute integral closure of $\\mathbb{Z}$. Then, for each $n\\in\\mathbb{N}$, there exists $\\alpha\\in\\overline{\\mathbb{Z}}$ of degree $d>n$ such that $O_{\\mathbb{Q}(\\alpha)}=\\mathbb{Z}[\\alpha]$. It follows that for each prime $p\\in\\mathbb{Z}$, $\\overline{\\mathbb{Z}}\/p\\overline{\\mathbb{Z}}$ is an algebraic $\\mathbb{Z}\/p\\mathbb{Z}$-algebra of unbounded degree. 
Thus, $\\textnormal{Int}_{\\mathbb{Q}}(\\overline{\\mathbb{Z}})=\\mathbb{Z}[X]$.\n\n\\item[(2)] Let $D=\\mathbb{Z}_{(p)}$ and $A=\\mathbb{Z}_p$. Then, $\\mathbb{Z}_p\/p\\mathbb{Z}_p\\cong\\mathbb{Z}\/p\\mathbb{Z}$, so $\\mathbb{Z}_{(p)}[X]\\subsetneq \\textnormal{Int}_{\\mathbb{Q}}(\\mathbb{Z}_p)$.\n\n\\item[(3)] Let $D=\\mathbb{Z}$ and $A=\\widehat{\\mathbb{Z}}=\\prod_{p\\in\\mathbb{P}}\\mathbb{Z}_p$, the profinite completion of $\\mathbb{Z}$, where $\\mathbb{P}$ denotes the set of all prime numbers. For each prime $p\\in\\mathbb{Z}$, we have $p\\widehat{\\mathbb{Z}}=\\prod_{p'\\not=p}\\mathbb{Z}_{p'}\\times p\\mathbb{Z}_p$, so $\\widehat{\\mathbb{Z}}\/p\\widehat{\\mathbb{Z}}\\cong \\mathbb{Z}_p\/p\\mathbb{Z}_p\\cong\\mathbb{Z}\/p\\mathbb{Z}$. Thus, $\\mathbb{Z}[X]\\subsetneq\\textnormal{Int}_{\\mathbb{Q}}(\\widehat{\\mathbb{Z}})$.\n\\end{itemize}\n\\end{Ex}\n\nIf $\\widehat{A}$ is the $P$-adic completion of a $D$-algebra $A$, then we can say more about $\\textnormal{Int}_K(\\widehat{A})$. The following lemma also appears in \\cite{IntdecompII}. We include it in its entirety since the proof is quite short.\n\n\\begin{Lem}\\label{DVR lemma}\nLet $D$ be a discrete valuation ring (DVR) with maximal ideal $P = \\pi D$. Let $A$ be a $D$-algebra with standard assumptions, and let $\\widehat{A}$ be the $P$-adic completion of $A$. Then, $\\textnormal{Int}_K(\\widehat{A}) = \\textnormal{Int}_K(A)$.\n\\end{Lem}\n\\begin{proof}\nThe containment $\\textnormal{Int}_K(\\widehat{A}) \\subseteq \\textnormal{Int}_K(A)$ is clear, since $A$ embeds in $\\widehat{A}$. Conversely, let $f \\in \\textnormal{Int}_K(A)$ and $\\alpha \\in \\widehat{A}$. Suppose $f(X) = g(X)\/\\pi^k$, where $g \\in D[X]$ and $k \\in \\mathbb{N}$. If $k = 0$, then the conclusion is clear, so assume that $k \\geq 1$. \n\nVia the canonical projection $\\widehat{A} \\to A\/\\pi^k A$, we see that there exists $a\\in A$ such that $\\alpha \\equiv a \\pmod{\\pi^k \\widehat{A}}$. 
Since the coefficients of $g$ are central in $A$, we get $g(\\alpha)\\equiv g(a) \\pmod{\\pi^k \\widehat{A}}$. Thus, $f(\\alpha)=f(a)+\\lambda\/\\pi^k$, where $\\lambda\\in\\pi^k \\widehat{A}$, so that $f(\\alpha)\\in \\widehat{A}$. Hence, $f \\in \\textnormal{Int}_K(\\widehat{A})$ and $\\textnormal{Int}_K(\\widehat{A}) = \\textnormal{Int}_K(A)$.\n\\end{proof} \n\nThus, in Example \\ref{Nontriv examples} (2), we have $\\textnormal{Int}_\\mathbb{Q}(A) = \\textnormal{Int}(\\mathbb{Z}_{(p)})$. Moreover, in Example \\ref{Nontriv examples} (3) we have $\\textnormal{Int}_\\mathbb{Q}(A) = \\textnormal{Int}(\\mathbb{Z})$ (see also \\cite{ChabPer} where the profinite completion of $\\mathbb{Z}$ was considered in order to study the polynomial overrings of $\\textnormal{Int}(\\mathbb{Z})$). A more general example, which results in proper containments among all of $D[X]$, $\\textnormal{Int}_K(A)$, and $\\textnormal{Int}(D)$, is the following.\n\n\\begin{Ex}\\label{DVR example}\nLet $D$ be a DVR with maximal ideal $P = \\pi D$ and finite residue field. Let $A$ be a $D$-algebra of finite type with standard assumptions and such that $\\textnormal{Int}_K(A) \\subsetneq \\textnormal{Int}(D)$. Let $\\widehat{A}$ be the $P$-adic completion of $A$. Then, $P$ satisfies the conditions of Theorem \\ref{criterion} with respect to $A$, so $D[X] \\subsetneq \\textnormal{Int}_K(A)$; and $\\textnormal{Int}_K(\\widehat{A}) = \\textnormal{Int}_K(A)$ by Lemma \\ref{DVR lemma}. Thus,\n\\begin{equation*}\nD[X] \\subsetneq \\textnormal{Int}_K(\\widehat{A}) = \\textnormal{Int}_K(A) \\subsetneq \\textnormal{Int}(D).\n\\end{equation*}\nIn general, $\\widehat{A}$ is not finitely generated as a $D$-module (this is the case, for instance, when $A$ is countable but $\\widehat{A}$ is uncountable). 
So, $\\widehat{A}$ can provide an example of a $D$-algebra that is not finitely generated and for which the integer-valued polynomial ring is properly contained between $D[X]$ and $\\textnormal{Int}(D)$.\n\\end{Ex}\n\n\\begin{Rem}\\label{Quaternion example again}\nLemma \\ref{DVR lemma} also gives us another approach to Example \\ref{Quaternion example}. With notation as in that example, we have $\\widehat{A} \\cong M_2(\\mathbb{Z}_p)$ (indeed, this follows from the fact that $A\/p^k A \\cong M_2(\\mathbb{Z}\/p^k \\mathbb{Z})$ for all $k > 0$). Thus, $\\textnormal{Int}_\\mathbb{Q}(A) = \\textnormal{Int}_\\mathbb{Q}(M_2(\\mathbb{Z}_p)) = \\textnormal{Int}_\\mathbb{Q}(M_2(\\mathbb{Z}_{(p)}))$ even though $A \\not\\cong M_2(\\mathbb{Z}_{(p)})$.\n\\end{Rem}\n\nWe close this paper by using the conditions of Proposition \\ref{equivalent conditions} to prove that when $D$ is Dedekind, $\\textnormal{Int}_K(A)$ has Krull dimension 2. This result was shown by Frisch \\cite[Thm. 5.4]{Fri1} in the case where $A$ is of finite type. Our work does not require $A$ to be finitely generated, and somewhat surprisingly does not require a full classification of the prime ideals of $\\textnormal{Int}_K(A)$.\n\nRecall that a nonzero prime ideal $\\mathfrak{P}$ of $\\textnormal{Int}_K(A)$ is called unitary if $\\mathfrak{P} \\cap D \\ne (0)$, and is called non-unitary if $\\mathfrak{P} \\cap D = (0)$.\n\n\\begin{Thm}\\label{Height thm}\nLet $D$ be a Dedekind domain and let $A$ be a $D$-algebra with standard assumptions. 
Let $\\mathfrak{P}$ be a nonzero prime ideal of $\\textnormal{Int}_K(A)$.\n\\begin{enumerate}[(1)]\n\\item If $\\mathfrak{P}$ is non-unitary, then $\\mathfrak{P}$ has height 1.\n\\item If $\\mathfrak{P}$ is unitary, then let $P = \\mathfrak{P} \\cap D$.\n\\begin{itemize}\n\\item[(i)] If $P$ does not satisfy any of the conditions of Proposition \\ref{equivalent conditions}, then $\\mathfrak{P}$ has height 2.\n\n\\item[(ii)] If $P$ satisfies one of the conditions of Proposition \\ref{equivalent conditions}, then $\\mathfrak{P}$ is maximal \nand has height at most 2.\n\\end{itemize}\n\\end{enumerate}\n\\end{Thm}\n\\begin{proof}\n(1) Following \\cite[Lem. 5.3]{Fri1}, the non-unitary prime ideals of $\\textnormal{Int}_K(A)$ are in one-to-one correspondence with the prime ideals of $K[X]$. Since $K[X]$ has dimension 1, the non-unitary primes of $\\textnormal{Int}_K(A)$ are all of height 1.\n\n(2) Let $P$ be a nonzero prime of $D$. Assume first that $P$ does not satisfy any of the conditions of Proposition \\ref{equivalent conditions}. Then, $D_P[X] = \\textnormal{Int}_K(A_P) = \\textnormal{Int}_K(A)_P$. It follows that the unitary primes of $\\textnormal{Int}_K(A)$ are in one-to-one correspondence with the primes of $D_P[X]$. Since $D$ is Dedekind, we know that $D_P[X]$ has dimension 2; hence, all the primes of $\\textnormal{Int}_K(A)$ under consideration have height 2.\n\nFor the remainder of the proof, assume that $P = \\mathfrak{P} \\cap D$ does satisfy the conditions of Proposition \\ref{equivalent conditions}. Since $\\mathfrak{P} \\cap D = P$, the prime ideal $\\mathfrak{P}$ survives in $\\textnormal{Int}_K(A)_P=\\textnormal{Int}_K(A_P)$ and clearly its extension $\\mathfrak{P}^e$ is still a prime unitary ideal (so, $\\mathfrak{P}^e\\cap D_P=PD_P$). It is sufficient to show that $\\mathfrak{P}^e$ is a maximal ideal, so we may work over the localizations. Thus, without loss of generality we will assume that $D$ is a DVR. 
In particular, this means that $P =\\pi D$, for some $\\pi\\in D$.\n\nLet $\\overline{g}\\in N_{D\/P}(A\/PA)$, $\\overline{g}\\not=0$, and let $g\\in D[X]$ be a pullback of $\\overline{g}(X)$. Then $g(A) \\subseteq PA=\\pi A$. Consequently, for each $f\\in\\textnormal{Int}_K(A)$ we have $(g\\circ f)(A)\\subset \\pi A$. Consider the ideal $\\mathfrak{A} = \\{F \\in \\textnormal{Int}_K(A) \\mid F(A) \\subseteq PA\\}$ of $\\textnormal{Int}_K(A)$. Because $P = \\pi D$ is principal, we have $\\mathfrak{A} = \\pi \\textnormal{Int}_K(A)$, which is contained in $\\mathfrak{P}$. Hence, for each $f \\in \\textnormal{Int}_K(A)$, $g \\circ f \\in \\mathfrak{P}$. \n\nNow, if we consider the $D\/P$-algebra $\\textnormal{Int}_K(A)\/\\mathfrak{P}$, we see that each element of $\\textnormal{Int}_K(A)\/\\mathfrak{P}$ is annihilated by $\\overline{g}(X)$. But $\\textnormal{Int}_K(A)\/\\mathfrak{P}$ is a domain, and for it to be annihilated by a nonzero polynomial, it must be finite. Thus, in fact $\\textnormal{Int}_K(A)\/\\mathfrak{P}$ is a finite field, and so $\\mathfrak{P}$ is maximal.\n\nFinally, to show that $\\mathfrak{P}$ has height at most 2, let $\\mathfrak{Q}$ be a prime of $\\textnormal{Int}_K(A)$ such that $(0) \\subsetneq \\mathfrak{Q} \\subseteq \\mathfrak{P}$. If $\\mathfrak{Q}$ is unitary, then we have $\\mathfrak{Q} \\cap D = P$, and by our work above $\\mathfrak{Q}$ is maximal, hence equal to $\\mathfrak{P}$. If $\\mathfrak{Q}$ is non-unitary, then it has height 1 by part (1) of the theorem. It follows that $\\mathfrak{P}$ has height at most 2.\n\\end{proof}\n\n\\begin{Cor}\\label{Krull dimension}\nLet $D$ be a Dedekind domain with quotient field $K$. Let $A$ be a $D$-algebra with standard assumptions. Then, $\\textnormal{Int}_K(A)$ has Krull dimension 2.\n\\end{Cor}\n\\begin{proof}\nIf $\\textnormal{Int}_K(A) = D[X]$, then its dimension equals that of $D[X]$, which is 2. So, assume that $\\textnormal{Int}_K(A)$ is nontrivial. 
By Theorem \\ref{criterion}, there exists a prime $P$ of $D$ that satisfies the conditions of Proposition \\ref{equivalent conditions}.\n\nLet $\\mathfrak{P} = \\{f \\in \\textnormal{Int}_K(A) \\mid f(0) \\in P \\}$. Since $\\textnormal{Int}_K(A) \\subseteq \\textnormal{Int}(D)$, $\\mathfrak{P}$ is an ideal of $\\textnormal{Int}_K(A)$, and it is easily seen to be prime and unitary, with $\\mathfrak{P} \\cap D = P$. Moreover, it contains the non-unitary ideal $XK[X] \\cap \\textnormal{Int}_K(A)$. Hence, $\\mathfrak{P}$ has height at least 2, and so $\\dim(\\textnormal{Int}_K(A)) \\geq 2$. However, $\\dim(\\textnormal{Int}_K(A)) \\leq 2$ by Theorem \\ref{Height thm}, so we conclude that $\\dim(\\textnormal{Int}_K(A)) = 2$.\n\\end{proof}\n\n\\subsection*{Acknowledgments}\n\\noindent This research has been supported by the grant ``Assegni Senior'' of the University of Padova. The authors wish to thank the referee for several suggestions which improved the quality of the paper.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nScene text recognition is an essential process in computer vision tasks. Many practical applications, such as traffic sign reading, product recognition, intelligent inspection, and image searching, benefit from the rich semantic information of scene text. 
With the development of scene text detection methods \\cite{gomez2017textproposals,khare2016blind,sun2015robust,zhu2016could}, scene character recognition has emerged at the forefront of this research topic and is regarded as an open and very challenging research problem \\cite{su2017accurate}.\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[]{\n\\begin{minipage}[c]{0.15\\textwidth}\n\\centering\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/regular-a.jpg}\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/regular-b.jpg}\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/regular-c.jpg}\n\\end{minipage}%\n}%\n\\subfigure[]{\n\\begin{minipage}[c]{0.15\\textwidth}\n\\centering\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/irregular-d.jpg}\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/irregular-e.jpg}\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/irregular-f.jpg}\n\\end{minipage}%\n}%\n\\subfigure[]{\n\\begin{minipage}[c]{0.15\\textwidth}\n\\centering\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/irregular-g.jpg}\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/irregular-h.jpg}\n \\includegraphics[width=2.5cm,height=1.5cm]{picture\/irregular-i.jpg}\n\\end{minipage}%\n}%\n\n\\caption{Examples of regular and irregular scene text. (a) Regular text. (b) Slanted and perspective text. (c) Curved text.}\n\\label{fig:1-scene-text}\n\\end{figure}\n\nNowadays, regular text recognition methods \\cite{bissacco2013photoocr,neumann2012real,shi2017end,su2017accurate,wang2012end} have achieved notable success. Moreover, methods based on convolutional neural networks \\cite{bissacco2013photoocr,jaderberg2016reading,wang2012end} have been broadly applied. 
Integrating recognition models with recurrent neural networks \\cite{he2016reading,shi2017end,shi2016robust} and attention mechanisms \\cite{cheng2017focusing,cheng2017arbitrarily,lee2016recursive,yang2017learning} yields better performance for these models.\n\nNevertheless, most current recognition models remain too unstable to handle multiple disturbances from the environment. Furthermore, the various shapes and distorted patterns of irregular text cause additional challenges in recognition. As illustrated in Fig. \\ref{fig:1-scene-text}, scene text with irregular shapes, such as perspective and curved text, is still very challenging to recognize.\n\nReading text is naturally regarded as a multi-classification task involving sequence-like objects \\cite{shi2017end}. Usually, the characters in one text are of the same size. However, characters in different scene texts can vary in size. Therefore, we propose the multi-object rectified attention network (MORAN), which can read rotated, scaled and stretched characters in different scene texts. The MORAN consists of a multi-object rectification network \\textbf{(MORN)} to rectify images and an attention-based sequence recognition network \\textbf{(ASRN)} to read the text. We separate the difficult recognition task into two parts. First, as one kind of spatial transformer, the MORN rectifies images that contain irregular text. As Fig. \\ref{fig:2-MORAN-system} shows, after rectification by the MORN, the slanted text becomes more horizontal, tightly bounded, and easier to read. Second, the ASRN takes the rectified image as input and outputs the predicted word.\n\nThe training of the MORN is guided by the ASRN, which requires only text labels. Without any geometric-level or pixel-level supervision, the MORN is trained in a weakly supervised manner. To facilitate this manner of network training, we initialize a basic coordinate grid. Every pixel of an image has its own position coordinates. 
The MORN learns and generates an offset grid based on these coordinates and samples the pixel value accordingly to rectify the image. The rectified image is then obtained for the ASRN.\n\n\\begin{figure}\n\\centering\n\\begin{overpic}[width=8.5cm,height=3.2cm]{picture\/Moran-system.jpg}\n\n\\put(11,33){Input}\n\\put(10,28){Image}\n\\put(45,33){Rectified}\n\\put(48,28){Image}\n\\put(82,26){Result}\n\n\\put(79,17){JOHNNY}\n\\put(28,26){MORN}\n\\put(66,26){ASRN}\n\n\\put(29,7){Weak}\n\\put(25,3){Supervision}\n\n\\put(65,7){Text Label}\n\\put(64,3){Supervision}\n\n\\end{overpic}\n\n\\caption{Overview of the MORAN. The MORAN contains a MORN and an ASRN. The image is rectified by the MORN and given to the ASRN. The dashed lines show the direction of gradient propagation, indicating that the two sub-networks are jointly trained.}\n\\label{fig:2-MORAN-system}\n\\end{figure}\n\nWith respect to the ASRN, a decoder with an attention mechanism is more likely to predict the correct words because of the rectified images. However, Cheng et al. \\cite{cheng2017focusing} found that existing attention-based methods cannot obtain accurate alignments between feature areas and targets. Therefore, we propose a fractional pickup method to train the ASRN. By adopting several scales of stretch on different parts of the feature maps, the feature areas are changed randomly at every iteration in the training phase. Owing to training with fractional pickup, the ASRN is more robust to the variation of context. Experiments show that the ASRN can accurately focus on objects.\n\nIn addition, we designed a curriculum learning strategy for the training of the MORAN. Because the MORN and ASRN are mutually beneficial in terms of performance, we first fix one of them to more efficiently optimize the other. Finally, the MORN and ASRN are optimized in an end-to-end fashion to improve performance. 
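The grid-based rectification described above (a base coordinate grid, plus learned per-pixel offsets, followed by sampling) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code: the `offsets` array stands in for the grid the MORN would predict, and bilinear interpolation is used for the sampling step.

```python
import numpy as np

def rectify(image, offsets):
    """Resample `image` at positions given by a base coordinate grid plus
    per-pixel `offsets` (shape H x W x 2, in pixels, ordered (dy, dx)),
    using bilinear interpolation."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sampling positions = base grid + predicted offsets, clipped to the image.
    sy = np.clip(ys + offsets[..., 0], 0, h - 1)
    sx = np.clip(xs + offsets[..., 1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation of the four neighbouring pixels.
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bottom = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bottom

# Zero offsets reproduce the input; a constant (dy=1, dx=0) offset shifts rows up.
img = np.arange(16, dtype=float).reshape(4, 4)
assert np.allclose(rectify(img, np.zeros((4, 4, 2))), img)
shifted = rectify(img, np.full((4, 4, 2), [1.0, 0.0]))
assert np.allclose(shifted[0], img[1])
```

Because the offsets are unconstrained per pixel, this sampling scheme is free of the geometric constraints of an affine transformer, which is the property the MORN exploits.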
In short, the contributions of our research are as follows:\n\n\\begin{itemize}\n\n\\item We propose the MORAN framework to recognize irregular scene text. The framework contains a multi-object rectification network (MORN) and an attention-based sequence recognition network (ASRN). The image rectified by the MORN is more readable for the ASRN.\n\n\\item Trained in a weakly supervised manner, the MORN sub-network is flexible. It is free of geometric constraints and can rectify images with complicated distortion.\n\n\\item We propose a fractional pickup method for the training of the attention-based decoder in the ASRN. To address noise perturbations, we expand the visual field of the MORAN, which further improves the sensitivity of the attention-based decoder.\n\n\\item We propose a curriculum learning strategy that enables the MORAN to learn efficiently. Owing to training with this strategy, the MORAN outperforms state-of-the-art methods on several standard text recognition benchmarks, including the IIIT5K, SVT, ICDAR2003, ICDAR2013, ICDAR2015, SVT-Perspective, and CUTE80 datasets.\n\n\\end{itemize}\n\nThe rest of the paper is organized as follows. Section 2 reviews related work. Section 3 details the proposed method. Experimental results are given in Section 4, and the conclusions are presented in Section 5.\n\n\\section{Related Work}\n\\label{section:Related work}\nIn recent years, the recognition of scene text has greatly advanced because of the rapid development of neural networks \\cite{gu2017recent}. Zhu et al. \\cite{zhu2016scene} and Ye et al. \\cite{ye2015text} have provided an overview of the major advances in the field of scene text detection and recognition. 
Based on the sliding window method \\cite{wang2011end,wang2010word}, features extracted by neural networks have come to dominate hand-crafted features, such as connected components \\cite{neumann2012real}, strokelet generation \\cite{yao2014strokelets}, histogram of oriented gradients descriptors \\cite{dalal2005histograms,su2014accurate}, tree-structured models \\cite{shi2014end}, semi-Markov conditional random fields \\cite{seok2015scene} and generative shape models \\cite{lou2016generative}. For instance, Bissacco \\cite{bissacco2013photoocr} applied a network with five hidden layers for character classification. Using convolutional neural networks (CNNs), Jaderberg et al. \\cite{Jaderberg2015Deep} and Yin et al. \\cite{yin2017scene} proposed respective methods for unconstrained recognition.\n\nWith the widespread application of recurrent neural networks (RNNs) \\cite{cho2014learning,hochreiter1997long}, CNN-based methods are combined with RNNs for better learning of context information. As a feature extractor, the CNN obtains the spatial features of images. Then, the RNN learns the context of the features. Shi et al. \\cite{shi2017end} proposed an end-to-end trainable network with both CNNs and RNNs, named CRNN. Guided by the CTC loss \\cite{graves2006connectionist}, the CRNN-based network learns the conditional probability between predictions and sequential labels.\n\nFurthermore, attention mechanisms \\cite{bahdanau2014neural} focus on informative regions to achieve better performance. Lee et al. \\cite{lee2016recursive} proposed a recursive recurrent network with attention modeling for scene text recognition. Yang et al. \\cite{yang2017learning} introduced a two-dimensional attention mechanism. Cheng et al. 
\\cite{cheng2017focusing} used the focusing attention network (FAN) to correct shifts in attentional mechanisms and achieved more accurate position predictions.\n\nCompared with regular text recognition work, irregular text recognition is more difficult. One kind of irregular text recognition method is the bottom-up approach \\cite{cheng2017arbitrarily,yang2017learning}, which searches for the position of each character and then connects them. Another is the top-down approach \\cite{liu2016star,shi2016robust}. This type of approach matches the shape of the text, attempts to rectify it, and reduces the degree of recognition difficulty.\n\nIn the bottom-up manner, a two-dimensional attention mechanism for irregular text was proposed by Yang et al. \\cite{yang2017learning}. Based on the sliced Wasserstein distance \\cite{rabin2011wasserstein}, the attention alignment loss is adopted in the training phase, which enables the attention model to accurately extract the character features while ignoring the redundant background information. Cheng et al. \\cite{cheng2017arbitrarily} proposed an arbitrary-orientation text recognition network, which uses more direct information of the position to instruct the network to identify characters in special locations.\n\nIn the top-down manner, STAR-Net \\cite{liu2016star} used an affine transformation network that transforms the rotated and differently scaled text into more regular text. Meanwhile, a ResNet \\cite{he2016deep} is used to extract features and handle more complex background noise. RARE \\cite{shi2016robust} regresses the fiducial transformation points on sloped text and even curved text, thereby mapping the corresponding points onto standard positions of the new image. Using thin-plate-spline \\cite{bookstein1989principal} to back propagate the gradients, RARE is end-to-end optimized.\n\nOur proposed MORAN model uses the top-down approach. 
The fractional pickup training method is thus designed to improve the sensitivity of the MORAN in focusing on characters. For the training of the MORAN, we propose a curriculum learning strategy for better convergence.\n\n\\section{Methodology}\n\nThe MORAN contains two parts. One is the MORN, which is trained in a weakly supervised manner to learn the offset of each part of the image. According to the predicted offsets, we apply sampling and obtain a rectified text image. The other is the ASRN, a CNN-LSTM framework followed by an attention decoder. The proposed fractional pickup further improves attention sensitivity. The curriculum learning strategy guides the MORAN to achieve state-of-the-art performance.\n\n\\subsection{Multi-Object Rectification Network}\n\\label{section:Multi-Object Rectification Network}\nCommon rectification methods, such as the affine transformation network, are limited by certain geometric constraints. The affine transformation, for instance, is limited to rotation, scaling, and translation. However, one image may have several kinds of deformations, so the distortion of scene text can be complicated. As shown in Fig. \\ref{fig:compare-stn}, the characters in the image become slanted after rectification by the affine transformation. The black edges introduce additional noise. Therefore, transformations with geometric constraints cannot cover all complicated deformations.\n\nAnother method that is free of geometric constraints is the deformable convolutional network \\cite{deformable2017}. Using deformable convolutional kernels, the feature extractor automatically selects informative features. We attempted to combine the recognition network with a deformable convolutional network. However, as a sequence-to-sequence problem, irregular text recognition is more challenging. The network sometimes failed to converge. 
The best accuracy we achieved on IIIT5K was only 78.1\\%, which is far behind the state-of-the-art result (91.2\\%).\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=5cm,height=5cm]{picture\/compare-stn.jpg}\n\\caption{Comparison of the MORN and affine transformation. The MORN is free of geometric constraints. The main direction of rectification predicted by the MORN for each character is indicated by a yellow arrow. The offset maps generated by the MORN are visualized as a heat map. The offset values on the boundary between red and blue are zero. The directions of rectification on both sides of the boundary are opposite and outward. The depth of the color represents the magnitude of the offset value. The gradual change in color indicates the smoothness of the rectification.}\n\\label{fig:compare-stn}\n\\end{figure}\n\nBecause recognition models are not yet strong enough to handle the multiple disturbances caused by various shapes, we instead rectify images to reduce the difficulty of recognition. As demonstrated in Fig. \\ref{fig:3-MORAN-overview}, the MORN architecture rectifies the distorted image. The MORN predicts the offset of each part of the image without any geometric constraint. Based on the predicted offsets, the image is rectified and becomes easier to recognize.\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=17cm]{picture\/Moran-overview.jpg}\n\\caption{Overall structure of MORAN. }\n\\label{fig:3-MORAN-overview}\n\\end{figure*}\n\nFurthermore, the MORN predicts the position offsets but not the categories of characters. Character details for classification are therefore unnecessary. 
We hence place a pooling layer before the first convolutional layer to suppress noise and reduce the amount of computation.\n\n\\begin{table}[h]\n\\centering\n\\caption{Architecture of the MORN}\n\\label{table:The architecture of MORN}\n\\begin{tabular*}{8.5cm}{c|p{3.5cm}<{\\centering}|c}\n\\hline\nType & Configurations & Size \\\\\n\\hline\nInput & - & 1$\\times$32$\\times$100 \\\\\n\\hline\nMaxPooling & k2, s2 & 1$\\times$16$\\times$50 \\\\\n\\hline\nConvolution & maps:64, k3, s1, p1 & 64$\\times$16$\\times$50 \\\\\n\\hline\nMaxPooling & k2, s2 & 64$\\times$8$\\times$25 \\\\\n\\hline\nConvolution & maps:128, k3, s1, p1 & 128$\\times$8$\\times$25 \\\\\n\\hline\nMaxPooling & k2, s2 & 128$\\times$4$\\times$12 \\\\\n\\hline\nConvolution & maps:64, k3, s1, p1 & 64$\\times$4$\\times$12 \\\\\n\\hline\nConvolution & maps:16, k3, s1, p1 & 16$\\times$4$\\times$12 \\\\\n\\hline\nConvolution & maps:2, k3, s1, p1 & 2$\\times$4$\\times$12 \\\\\n\\hline\nMaxPooling & k2, s1 & 2$\\times$3$\\times$11 \\\\\n\\hline\nTanh & - & 2$\\times$3$\\times$11 \\\\\n\\hline\nResize & - & 2$\\times$32$\\times$100 \\\\\n\\hline\n\\end{tabular*}\n\\begin{tablenotes}\n\\item Here, k, s, p are kernel, stride and\npadding sizes, respectively. For example, $k3$ represents a $3\\times3$ kernel size.\n\\end{tablenotes}\n\\end{table}\n\nThe architecture of the MORN is given in Table \\ref{table:The architecture of MORN}. Each convolutional layer is followed by a batch normalization layer and a ReLU layer, except for the last one. The MORN first divides the image into several parts and then predicts the offset of each part. With an input size of $32\\times100$, the MORN divides the image into $3\\times 11 = 33$ parts. All the offset values are activated by $Tanh(\\cdot)$, resulting in values within the range of $(-1, 1)$. The offset maps contain two channels, which denote the x-coordinate and y-coordinate, respectively. 
Then, we apply bilinear interpolation to smoothly resize the offset maps to a size of $32\\times100$, which is the same size as the input image. After allocating the specific offset to each pixel, the transformation of the image is smooth. As demonstrated in Fig. \\ref{fig:compare-stn}, the color depth gradually changes on both sides of the boundary between the red and blue colors in the heat map, which evidences the smoothness of the rectification. There are no indented edges in the rectified image.\n\nMoreover, because every value in the offset maps represents the offset from the original position, we generate a basic grid from the input image to represent the original positions of the pixels. The basic grid is generated by normalizing the coordinate of each pixel to $[-1, 1]$. The coordinates of the top-left pixel are $(-1, -1)$, and those of the bottom-right one are $(1, 1)$. Pixels at the same positions on different channels have the same coordinates. Similar to the offset maps, the grid contains two channels, which represent the x-coordinate and y-coordinate, respectively. Then, the basic grid and the resized offset maps are summed as follows,\n\\begin{equation}\noffset_{(c,i,j)}^{'} = offset_{(c,i,j)}+basic_{(c,i,j)} , c = 1,2\n\\end{equation}\nwhere $(i,j)$ denotes the position at the $i$-th row and $j$-th column.\n\nBefore sampling, the x-coordinate and y-coordinate on the offset maps are normalized to $[0, W]$ and $[0, H]$, respectively. Here, $H\\times W$ is the size of the input image. The pixel value at the $i$-th row and $j$-th column of the rectified image $I'$ is\n\\begin{equation}\nI'_{(i, j)} = I_{(i^{'}, j^{'})} \\label{sampling}\n\\end{equation}\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\ni^{'} = offset_{(1, i, j)}^{'} \\\\\nj^{'} = offset_{(2, i, j)}^{'}\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $I$ is the input image. Further, $i^{'}$ is obtained from the first channel of the offset maps, whereas $j^{'}$ is from the second channel. 
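The sampling in Equation (\ref{sampling}) can be sketched as follows. This is a minimal, framework-free illustration (all names are ours, and the border clamping is our assumption); in practice this is implemented as a differentiable bilinear sampler so that gradients can flow back to the offset maps:

```python
def rectify(image, offsets):
    """Sample a rectified image from per-pixel offset maps.

    image:   H x W nested list of gray values (the input image I)
    offsets: 2 x H x W summed offset maps (offset'); channel 0 holds the
             real-valued source row i', channel 1 the source column j',
             already normalized to [0, H] and [0, W], respectively.
    """
    H, W = len(image), len(image[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            si, sj = offsets[0][i][j], offsets[1][i][j]
            # Bilinear interpolation: blend the four integer neighbors,
            # clamping indices to the image border.
            i0 = min(max(int(si), 0), H - 1)
            j0 = min(max(int(sj), 0), W - 1)
            i1, j1 = min(i0 + 1, H - 1), min(j0 + 1, W - 1)
            di, dj = si - i0, sj - j0
            out[i][j] = ((1 - di) * (1 - dj) * image[i0][j0]
                         + (1 - di) * dj * image[i0][j1]
                         + di * (1 - dj) * image[i1][j0]
                         + di * dj * image[i1][j1])
    return out
```

Because each output pixel is a smooth function of the real-valued source coordinates, the sampling is both visually smooth and differentiable.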
Both $i^{'}$ and $j^{'}$ are real values as opposed to integers, so the rectified image $I'$ is sampled from $I$ using bilinear interpolation.\n\nBecause Equation (\\ref{sampling}) is differentiable, the MORN can back-propagate the gradients. The MORN can be trained in a weakly supervised manner with only images and their associated text labels, which means that it does not need pixel-level labeling information about the deformation of the text.\n\nAs Fig. \\ref{fig:4-ori-rectified-img} shows, the text in the input images is irregular. However, the text in the rectified images is more readable. Slanted or perspective texts become tightly bounded after rectification. Furthermore, redundant noise is eliminated by the MORN for the curved texts. The background textures are removed in the rectified images of Fig. \\ref{fig:4-ori-rectified-img} (b).\n\n\\begin{figure}[h]\n\\centering\n\\subfigure[Perspective texts]{\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n \\includegraphics[width=7cm,height=4cm]{picture\/ori_img.jpg}\n\\end{minipage}%\n}%\n\n\\subfigure[Curved texts]{\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n \\includegraphics[width=7cm,height=4cm]{picture\/ori_img_curve.jpg}\n\\end{minipage}%\n}%\n\n\n\\caption{Results of the MORN on challenging image text. The input images are shown on the left and the rectified images are shown on the right. The heat maps visualize the offset maps in the same way as in Fig. \\ref{fig:compare-stn}. (a) Slanted and perspective text. (b) Curved text, which is more challenging for recognition. Removed background textures are indicated by red circles.}\n\\label{fig:4-ori-rectified-img}\n\\end{figure}\n\nThe advantages of the MORN are manifold. 1) The rectified images are more readable owing to the regular shape of the text and the reduced noise. 2) The MORN is more flexible than the affine transformation. It is free of geometric constraints, which enables it to rectify images with complicated transformations. 
3) The MORN is more flexible than methods using a specific number of regressing points. The existing method \\cite{shi2016robust} cannot capture the text shape in detail if the width of the image is large. In contrast, the MORN places no limit on the image size, especially the width of the input image. 4) The MORN does not require extra labeling information about character positions. Therefore, it can be trained in a weakly supervised manner using existing training datasets.\n\n\\subsection{Attention-based Sequence Recognition Network}\n\\label{section:Attention-based Sequence Recognition Network}\nAs Fig. \\ref{fig:3-MORAN-overview} shows, the major structure of the ASRN is a CNN-BLSTM framework. We adopt a one-dimensional attention mechanism on top of the CRNN. The attention-based decoder, proposed by Bahdanau et al. \\cite{bahdanau2014neural}, is used to accurately align the target and label. It is based on an RNN and directly generates the target sequence $(y_{1},y_{2} ...,y_{N})$. The maximum number of steps that the decoder generates is $T$. The decoder stops processing when it predicts an end-of-sequence token $``EOS\"$ \\cite{sutskever2014sequence}. At time step $t$, the output $y_t$ is\n\\begin{equation}\ny_{t} = Softmax(W_{out}s_{t}+b_{out})\n\\end{equation}\nwhere $s_{t}$ is the hidden state at time step $t$. We update $s_{t}$ using a GRU \\cite{cho2014learning}. State $s_{t}$ is computed as:\n\\begin{equation}\ns_t = GRU(y_{prev}, g_{t}, s_{t-1})\n\\end{equation}\nwhere $y_{prev}$ denotes the embedding vector of the previous output $y_{t-1}$ and $g_{t}$ represents the glimpse vector, respectively calculated as\n\\begin{equation}\ny_{prev} = Embedding(y_{t-1})\n\\end{equation}\n\\begin{equation}\ng_{t} = \\sum_{i=1}^L(\\alpha_{t,i} h_{i}) \\label{equ-g}\n\\end{equation}\nwhere $h_{i}$ denotes the sequential feature vectors and $L$ is the length of the feature maps. 
In addition, $\\alpha_{t,i}$ is the $i$-th element of the attention weight vector, computed as follows,\n\\begin{equation}\n\\alpha_{t,i} = {exp(e_{t,i})} \/ {\\sum_{j=1}^L(exp(e_{t,j}))} \\label{equ-alpha}\n\\end{equation}\n\\begin{equation}\ne_{t,i} = Tanh(W_{s}s_{t-1}+W_{h}h_{i}+b) \\label{equ-e}\n\\end{equation}\n\nHere, $W_{out}$, $b_{out}$, $W_{s}$, $W_{h}$ and $b$ are trainable parameters. Note that $y_{prev}$ is embedded from the ground truth of the previous step in the training phase, whereas the ASRN uses only the predicted output of the previous step as $y_{t-1}$ in the testing phase.\n\nThe decoder outputs the predicted word in an unconstrained manner in lexicon-free mode. If lexicons are available, we evaluate the probability distributions for all words and choose the word with the highest probability as the final result.\n\nThe architecture of the ASRN is given in Table \\ref{table:The architecture of ASRN}. Each convolutional layer is followed by a batch normalization layer and a ReLU layer.\n\\begin{table}[t]\n\\centering\n\\caption{Architecture of the ASRN}\n\\label{table:The architecture of ASRN}\n\\begin{tabular*}{8.5cm}{c|p{3.5cm}<{\\centering}|c}\n\\hline\nType & Configurations & Size \\\\\n\\hline\nInput & - & 1$\\times$32$\\times$100 \\\\\n\\hline\nConvolution & maps:64, k3, s1, p1 & 64$\\times$32$\\times$100 \\\\\n\\hline\nMaxPooling & k2, s2 & 64$\\times$16$\\times$50 \\\\\n\\hline\nConvolution & maps:128, k3, s1, p1 & 128$\\times$16$\\times$50 \\\\\n\\hline\nMaxPooling & k2, s2 & 128$\\times$8$\\times$25 \\\\\n\\hline\nConvolution & maps:256, k3, s1, p1 & 256$\\times$8$\\times$25 \\\\\n\\hline\nConvolution & maps:256, k3, s1, p1 & 256$\\times$8$\\times$25 \\\\\n\\hline\nMaxPooling & k2, s2x1, p0x1 & 256$\\times$4$\\times$26 \\\\\n\\hline\nConvolution & maps:512, k3, s1, p1 & 512$\\times$4$\\times$26 \\\\\n\\hline\nConvolution & maps:512, k3, s1, p1 & 512$\\times$4$\\times$26 \\\\\n\\hline\nMaxPooling & k2, s2x1, p0x1 & 512$\\times$2$\\times$27 \\\\\n\\hline\nConvolution & 
maps:512, k2, s1 & 512$\\times$1$\\times$26 \\\\\n\\hline\nBLSTM & hidden unit:256 & 256$\\times$1$\\times$26 \\\\\n\\hline\nBLSTM & hidden unit:256 & 256$\\times$1$\\times$26 \\\\\n\\hline\nGRU & hidden unit:256 & 256$\\times$1$\\times$26 \\\\\n\\hline\n\\end{tabular*}\n\\begin{tablenotes}\n\\item Here, k, s, p are kernel, stride and\npadding sizes, respectively. For example, $s2\\times1$ represents a $2\\times1$ stride size. ``BLSTM\" stands for bidirectional-LSTM. ``GRU\" denotes the GRU in the attention-based decoder.\n\\end{tablenotes}\n\\end{table}\n\n\\subsection{Fractional Pickup}\n\nThe decoder in the ASRN learns the matching relationship between labels and target characters in images. It is a data-driven process. The ability to choose focus-worthy regions is enhanced by feedback from correct alignments.\n\nHowever, scene text is surrounded by various types of noise. In practical applications, the decoder is often deceived into focusing on ambiguous background regions. If the decoder generates an incorrect region of focus, the non-corresponding features are chosen, which can cause a failed prediction.\n\nSome challenging samples for recognition are presented in Fig. \\ref{fig:5-fractional-pickup}. In this figure, the images contain text with shadows and unclear boundaries between characters or complicated backgrounds. 
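For reference, the attention weighting of Section \ref{section:Attention-based Sequence Recognition Network}, which fractional pickup perturbs, can be sketched for one decoding step as follows. This is a minimal illustration in which scalars stand in for the feature vectors $h_{i}$, the state $s_{t-1}$ and the parameters; all names are ours:

```python
import math

def attention_step(h, s_prev, w_s, w_h, b):
    """One attention step: scores -> softmax weights -> glimpse.

    h:      list of L sequential features (scalars here for brevity)
    s_prev: previous decoder state s_{t-1} (scalar)
    w_s, w_h, b: trainable parameters (scalars in this sketch)
    """
    # e_{t,i} = Tanh(W_s s_{t-1} + W_h h_i + b)
    e = [math.tanh(w_s * s_prev + w_h * hi + b) for hi in h]
    # alpha_{t,i} = exp(e_{t,i}) / sum_j exp(e_{t,j})
    denom = sum(math.exp(ej) for ej in e)
    alpha = [math.exp(ei) / denom for ei in e]
    # g_t = sum_i alpha_{t,i} h_i  (the glimpse)
    g = sum(a * hi for a, hi in zip(alpha, h))
    return alpha, g
```

Every $\alpha_{t,i}$ is strictly positive and the weights sum to one, so a narrow, peaked distribution is what lets the decoder drift onto a single wrong region.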
Moreover, the focus regions generated by the decoder are narrow, which increases the probability of drifting from the correct regions.\n\n\\begin{figure}[t]\n\\centering\n\\rule{8cm}{0.05em}\n\\\\\n-------------------------------------------------------------\n\\begin{overpic}[width=8cm,height=3cm]{picture\/fp_1.jpg}\n\n\\put(13,39){Without FP}\n\\put(67,39){With FP}\n\n\\put(20,-3){\\color{blue}{hot}\\color{red}{l}}\n\\put(72,-3){\\color{blue}{hotel}}\n\\end{overpic}\n\n-------------------------------------------------------------\n\n\\begin{overpic}[width=8cm,height=3cm]{picture\/fp_2.jpg}\n\\put(20,-3){\\color{blue}{a}\\color{red}{r}\\color{blue}{ge}\\color{red}{h}}\n\\put(72,-3){\\color{blue}{angels}}\n\\end{overpic}\n\n-------------------------------------------------------------\n\n\\begin{overpic}[width=8cm,height=3cm]{picture\/fp_3.jpg}\n\\put(20,-3){\\color{red}{e}\\color{blue}{aser}}\n\\put(72,-3){\\color{blue}{laser}}\n\\end{overpic}\n\n-------------------------------------------------------------\n\n\\caption{Difference in $\\alpha_{t}$ for training with and without fractional pickup. Here, $\\alpha_{t}$ is visualized as a heat map. We omit the $\\alpha_{t}$ corresponding to ``EOS\".}\n\\label{fig:5-fractional-pickup}\n\\end{figure}\n\nWe propose a training method called fractional pickup that fractionally picks up the neighboring features in the training phase. An attention-based decoder trained with the fractional pickup method can perceive adjacent characters. The wider field of attention contributes to the robustness of the MORAN.\n\nWe hence adopt fractional pickup at each time step of the decoder. In other words, a pair of attention weights is selected and modified at every time step. 
At time step $t$, $\\alpha_{t,k}$ and $\\alpha_{t,k+1}$ are updated as,\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n\\alpha_{t,k}^{'} = \\beta \\alpha_{t,k}+(1-\\beta)\\alpha_{t,k+1} \\\\\n\\alpha_{t,k+1}^{'} = (1-\\beta) \\alpha_{t,k}+\\beta\\alpha_{t,k+1}\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere decimal $\\beta$ and integer $k$ are randomly generated as,\n\\begin{equation}\n\\beta = rand(0,1)\n\\end{equation}\n\\begin{equation}\nk = rand[1,T-1]\n\\end{equation}\nHere, $T$ is the maximum number of steps of the decoder.\n\n\\textbf{Variation of Distribution}\nFractional pickup adds randomness to $\\alpha_{t,k}$ and $\\alpha_{t,k+1}$ in the decoder. This means that, even for the same image, the distribution of $\\alpha_{t}$ changes at every time step in the training phase. As noted in Equation (\\ref{equ-g}), the glimpse vector $g_{t}$ gathers the sequential feature vectors $h_{i}$ according to the varying distributions of $\\alpha_{t}$, which is equivalent to a change in the feature areas. The randomness of $\\beta$ and $k$ avoids over-fitting and contributes to the robustness of the decoder.\n\n\\textbf{Shortcut of Forward Propagation}\nThe sequential feature vector $h_{i}$ is the output of the last bidirectional-LSTM in the ASRN. As shown in Fig. \\ref{fig:short-cut}, for step $k+1$ in the bidirectional-LSTM, a shortcut connecting to step $k$ is created by fractional pickup. The shortcut retains some features of the previous step in the training phase, which interferes with the forget gate of the bidirectional-LSTM. Therefore, fractional pickup provides more information about the previous step and increases the robustness of the bidirectional-LSTM in the ASRN.\n\n\\begin{figure}\n\\centering\n\\begin{overpic}[width=4cm]{picture\/short-cut.jpg}\n\\put(-5,65){$h_{i}$}\n\\end{overpic}\n\\caption{Fractional pickup creates a shortcut of forward propagation. 
The shortcut is drawn as a red arrow.}\n\\label{fig:short-cut}\n\\end{figure}\n\n\\textbf{Broader Visual Field}\nTraining with fractional pickup disturbs the decoder through the local variation of $\\alpha_{t,k}$ and $\\alpha_{t,k+1}$. Note that $\\alpha_{t,k}$ and $\\alpha_{t,k+1}$ are neighbors. Without fractional pickup, the error term of the sequence feature vector $h_{k}$ is,\n\\begin{equation}\n\\delta_{h_{k}} = \\delta_{g_{t}}\\alpha_{t,k}\n\\end{equation}\nwhere $\\delta_{g_{t}}$ is the error term of the glimpse vector $g_{t}$. Thus, $\\delta_{h_{k}}$ is relevant only to $\\alpha_{t,k}$. However, with fractional pickup, the error term becomes,\n\\begin{equation}\n\\delta_{h_{k}} = \\delta_{g_{t}}(\\beta \\alpha_{t,k}+(1-\\beta)\\alpha_{t,k+1})\n\\end{equation}\nwhere $\\alpha_{t,k+1}$ is relevant to $h_{k+1}$, as noted in Equations (\\ref{equ-alpha}) and (\\ref{equ-e}), which means that $\\delta_{h_{k}}$ is influenced by the neighboring features. Owing to this disturbance, the back-propagated gradients are able to dynamically optimize the decoder over a broader range of neighboring regions.\n\nThe MORAN trained with the fractional pickup method generates a smoother $\\alpha_{t}$ at each time step. Accordingly, it extracts features not only of the target characters, but also of the foreground and background context. As demonstrated in Fig. \\ref{fig:5-fractional-pickup}, the expanded visual field enables the MORAN to correctly predict target characters. To the best of our knowledge, this is the first attempt to adopt a shortcut in the training of the attention mechanism.\n\n\\subsection{Curriculum Training}\n\\label{section:curriculum-training}\n\nThe MORAN is end-to-end trainable with random initialization. However, end-to-end training consumes considerable time. We found that the MORN and ASRN can hinder each other during training. The MORN cannot be guided to rectify images when the input images have been correctly recognized by the high-performance ASRN. 
For the same reason, the ASRN will not gain robustness because the training samples have already been rectified by the MORN. The reasons above lead to inefficient training.\n\nTherefore, we propose a curriculum learning strategy to guide each sub-network in the MORAN. The strategy is a three-stage process. We first optimize the MORN and the ASRN individually and then join them for further end-to-end training. The difficulty of the training samples is gradually increased. The training set is denoted as $D = \\left \\{I_{i}, Y_{i} \\right \\}, i=1...N $. We minimize the negative log-likelihood of the conditional probability over $D$ as follows:\n\n\\begin{equation}\nLoss = -\\sum_{i=1}^N{ \\sum_{t=1}^{\\left| Y_{i} \\right|}{\\log p(Y_{i,t} \\left| \\right. I_{i}; \\theta)} }\n\\end{equation}\nwhere $Y_{i,t}$ is the ground truth of the $t$-th character in $I_{i}$, and $\\theta$ denotes the parameters of the MORAN.\n\n\\textbf{First Stage for ASRN}\nWe first optimize the ASRN by using regular training samples. The dataset released by Gupta et al. \\cite{gupta2016synthetic} has tightly bounded annotations, which makes it possible to crop a text region with a tightly bounded box. The ASRN is first trained with these regular samples. Then, we simply crop every text using a minimum circumscribed horizontal rectangle to obtain irregular training samples. The commonly used datasets released by Jaderberg et al. \\cite{jaderberg2014synthetic} and Gupta et al. \\cite{gupta2016synthetic} offer abundant irregular training samples, which we use for the following training. Taking advantage of them, we further optimize the ASRN, which thus achieves higher accuracy.\n\n\\textbf{Second Stage for MORN}\nThe ASRN trained using regular training samples is chosen to guide the MORN training. This ASRN is not adequately robust for irregular text recognition, so it is able to provide informative gradients for the MORN. We fix the parameters of this ASRN and stack it after the MORN. 
If the transformation of the MORN does not reduce the difficulty of recognition, few meaningful gradients will be provided by the ASRN, and the optimization of the MORN will not progress. Only a correct transformation that decreases the recognition difficulty gives positive feedback to the MORN.\n\n\\textbf{Third Stage for End-to-end Optimization}\nAfter the MORN and ASRN are optimized individually, we connect them for joint training in an end-to-end fashion. Joint training enables the MORAN to complete end-to-end optimization and outperform state-of-the-art methods.\n\n\\section{Experiments}\nIn this section, we describe extensive experiments conducted on various benchmarks, including regular and irregular datasets. The performance of all methods is measured by word accuracy.\n\n\\begin{table*}[t]\n\\centering\n\\caption{Comparison of pooling layers in lexicon-free mode. ``No\", ``AP\" and ``MP\" respectively indicate no pooling layer, an average-pooling layer and a max-pooling layer at the top of the MORN. The kernel size is 2. ``s\" represents the stride. }\n\\label{table:comparison-of-pooling}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{} & s & IIIT5K & SVT & IC03 & IC13 & SVT-P & CUTE80 & IC15 \\\\\n\\cline{2-9}\n\\hline\nNo & - & 85.7 & 87.9 & 92.9 & 91.5 & 75.8 & 65.9 & 59.4 \\\\\nAP & 2 & 89.2 & 87.4 & 94.8 & 91.1 & 75.9 & 71.1 & 64.6 \\\\\nAP & 1 & 89.3 & 87.9 & 94.7 & 91.6 & 75.9 & 72.9 & 64.9 \\\\\nMP & 2 & 90.4 & 88.2 & 94.5 & 91.8 & \\textbf{76.1} & 76.4 & 68.4 \\\\\nMP & 1 & \\textbf{91.2} & \\textbf{88.3} & \\textbf{95.0} & \\textbf{92.4} & \\textbf{76.1} & \\textbf{77.4} & \\textbf{68.8} \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}[t]\n\\centering\n\\caption{Performance of the MORAN. 
}\n\\label{table:Performance-of-MORAN}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{1}{*}{Method} & IIIT5K & SVT & IC03 & IC13 & SVT-P & CUTE80 & IC15 \\\\\n\\cline{2-8}\n\\hline\nEnd-to-end training & 89.9 & 84.1 & 92.5 & 90.0 & 76.1 & 77.1 & 68.8 \\\\\n\\hline\nOnly ASRN & 84.2 & 82.2 & 91.0 & 90.1 & 71.0 & 64.6 & 65.6 \\\\\nMORAN without FP & 89.7 & 87.3 & 94.5 & 91.5 & 75.5 & 77.1 & 68.6 \\\\\nMORAN with FP & \\textbf{91.2} & \\textbf{88.3} & \\textbf{95.0} & \\textbf{92.4} & \\textbf{76.1} & \\textbf{77.4} & \\textbf{68.8} \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\\subsection{Datasets}\n\n\\textbf{IIIT5K-Words (IIIT5K)} \\cite{mishra2012scene} contains 3000 cropped word images for testing. Every image has a 50-word lexicon and a 1000-word lexicon. Each lexicon consists of the ground truth and some randomly picked words.\n\n\\textbf{Street View Text (SVT)} \\cite{wang2011end} was collected from Google Street View and consists of 647 word images. Many images are severely corrupted by noise and blur, or have very low resolutions. Each image is associated with a 50-word lexicon.\n\n\\textbf{ICDAR 2003 (IC03)} \\cite{lucas2003icdar} contains 251 scene images that are labeled with text bounding boxes. For a fair comparison, we discarded images that contain non-alphanumeric characters or have fewer than three characters, following Wang, Babenko, and Belongie \\cite{wang2011end}. The filtered dataset contains 867 cropped images. The lexicons comprise a 50-word lexicon defined by Wang et al. \\cite{wang2011end} and a ``full lexicon\". The latter combines all the lexicon words.\n\n\\textbf{ICDAR 2013 (IC13)} \\cite{karatzas2013icdar} inherits most of its samples from IC03. It contains 1015 cropped text images. No lexicon is associated with this dataset.\n\n\\textbf{SVT-Perspective (SVT-P)} \\cite{quy2013recognizing} contains 645 cropped images for testing. Images are selected from side-view angle snapshots in Google Street View. 
Therefore, most images are perspective-distorted. Each image is associated with a 50-word lexicon and a full lexicon.\n\n\\textbf{CUTE80} \\cite{risnumawan2014robust} contains 80 high-resolution images taken in natural scenes. It was specifically collected for evaluating the performance of curved text recognition. It contains 288 cropped natural images for testing. No lexicon is associated with this dataset.\n\n\\textbf{ICDAR 2015 (IC15)} \\cite{karatzas2015icdar} contains 2077 cropped images, including more than 200 irregular text images. No lexicon is associated with this dataset.\n\n\\subsection{Implementation Details}\n\\textbf{Network: }Details about the MORN and the ASRN of the MORAN are given in Table \\ref{table:The architecture of MORN} and Table \\ref{table:The architecture of ASRN}, respectively. The number of hidden units of the GRU in the decoder is $256$. The ASRN outputs 37 classes, including 26 letters, 10 digits and a symbol standing for $``EOS\"$.\n\n\\textbf{Training Model: }As stated in Section \\ref{section:curriculum-training}, the training of the MORAN is guided by a curriculum learning strategy. The training data consists of 8 million synthetic images released by Jaderberg et al. \\cite{jaderberg2014synthetic} and 6 million synthetic images released by Gupta et al. \\cite{gupta2016synthetic}. No extra data is used. We do not use any geometric-level or pixel-level labels in our experiments. Without any fine-tuning for each specific dataset, the model is trained using only synthetic text. Using the ADADELTA \\cite{zeiler2012adadelta} optimization method, we set the learning rate to $1.0$ at the beginning and decreased it to $0.01$ in the third stage of the curriculum learning strategy. Following similar settings to those in \\cite{liu2016star}, we found that a decreased learning rate contributes to better convergence. The batch size was set to 64. We trained the model for 600,000, 20,000 and 300,000 iterations, respectively, in the three stages of the curriculum learning strategy. 
The training consumed 30 hours in total.\n\n\\textbf{Implementation: }We implemented our method under the PyTorch framework \\cite{pytorch}. CUDA 8.0 and CuDNN v7 backends are used in our experiments, so our model is GPU-accelerated. All the images are resized to $32\\times 100$. With an NVIDIA GTX-1080Ti GPU, the MORAN takes 10.4 ms to recognize an image containing five characters in lexicon-free mode.\n\n\\subsection{Performance of the MORAN}\n\nWe used a max-pooling layer at the top of the MORN. To evaluate the effectiveness of this technique, a comparison of pooling layers with different configurations is shown in Table \\ref{table:comparison-of-pooling}. The accuracy is the highest when we use a max-pooling layer with a kernel size of 2 and a stride of 1.\n\nBefore conducting a comparison with other methods, we list three results with a progressive combination of methods in Table \\ref{table:Performance-of-MORAN}. The MORAN trained in an end-to-end manner already achieves very promising performance. In curriculum learning, the first experiment is carried out using only an ASRN. Then, a MORN is added to the bottom of the above network to rectify the images. The last result is from the entire MORAN, including the MORN and ASRN trained with the fractional pickup method. The contribution of each part of our method is hence clearly demonstrated. For ICDAR OCR tasks, we report the total edit distance in Table \\ref{table:Performance-of-MORAN(TED)}.\n\n\\begin{table}[h]\n\\centering\n\\caption{Performance of the MORAN (total edit distance). 
}\n\\label{table:Performance-of-MORAN(TED)}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multirow{1}{*}{Method} & IC03 & IC13 & IC15 \\\\\n\\cline{2-4}\n\\hline\nEnd-to-end training & 29.1 & 57.7 & 368.8 \\\\\n\\hline\nOnly ASRN & 33.8 & 69.1 & 376.8 \\\\\nMORAN without FP & 22.7 & 45.3 & 345.2 \\\\\nMORAN with FP & \\textbf{19.8} & \\textbf{42.0} & \\textbf{334.0} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Comparisons with Rectification Methods}\n\\textbf{Affine Transformation}: The results using the affine transformation are provided by Liu et al. \\cite{liu2016star}. For a fair comparison, we replace the ASRN with the R-Net proposed by Liu et al. \\cite{liu2016star}. A direct comparison of the results is shown in Table \\ref{table:comparison-stn}. As demonstrated in Fig. \\ref{fig:compare-stn} and described in Section \\ref{section:Multi-Object Rectification Network}, the affine transformation is limited by the geometric constraints of rotation, scaling and translation. However, the distortion of scene text is complicated. The MORAN is more flexible than the affine transformation. It is able to predict smooth rectification for images free of geometric constraints.\n\n\\begin{table}[h]\n\\centering\n\\caption{Comparison with STAR-Net. }\n\\label{table:comparison-stn}\n\\begin{tabular}{|p{2.2cm}<{\\centering}|p{0.9cm}<{\\centering}|p{0.6cm}<{\\centering}|p{0.6cm}<{\\centering}| p{0.6cm}<{\\centering}|p{1.1cm}<{\\centering}|}\n\\hline\n\\multirow{1}{*}{Method} & IIIT5K & SVT & IC03 & IC13 & SVT-P \\\\\n\\cline{2-6}\n\\hline\nLiu et al. \\cite{liu2016star} & 83.3 & 83.6 & 89.9 & \\textbf{89.1} & 73.5 \\\\\n\\hline\nOurs & \\textbf{87.5} & \\textbf{83.9} & \\textbf{92.5} & \\textbf{89.1} & \\textbf{74.6} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\textbf{RARE} \\cite{shi2016robust}: The results of RARE given by Shi et al. \\cite{shi2016robust} are given in Table \\ref{table:Results on general benchmarks} and Table \\ref{table:Results on irregular text}. 
We directly compare the networks using exactly the same recognition network as that proposed in RARE. The results are shown in Table \ref{table:comparison-rare}.\n\nThe MORAN has some benefits and drawbacks compared with RARE. RARE, which uses fiducial points, can only capture the overall text shape of an input image, whereas the MORAN can rectify every character in an image. As shown in Fig. \ref{fig:comparison-other}, all the characters in the image rectified by the MORAN are more normal in appearance than those of RARE. Furthermore, the MORAN, without any fiducial points, is theoretically able to rectify text of infinite length.\n\n\begin{table*}[t]\n\centering\n\caption{Comparison with RARE. }\n\label{table:comparison-rare}\n\begin{tabular}{|c|c|c|c|c|c|c|}\n\hline\n\multirow{1}{*}{Method} & IIIT5K & SVT & IC03 & IC13 & SVT-P & CUTE80 \\\n\cline{2-7}\n\hline\nShi et al. \cite{shi2016robust} & 81.9 & 81.9 & 90.1 & 88.6 & 71.8 & 59.2 \\\n\hline\nOurs & \textbf{87.9} & \textbf{83.9} & \textbf{92.7} & \textbf{90.0} & \textbf{73.2} & \textbf{72.6} \\\n\hline\n\end{tabular}\n\end{table*}\n\n\begin{figure}[h]\n\centering\n\begin{overpic}[width=8.5cm,height=5cm]{picture\/rare-a-wide.jpg}\n\n\put(70,50){Predict:\color{red}{stink}\color{green}{er}}\n\put(75,30){GT:denver}\n\put(70,10){Predict:\color{green}{denver}}\n\n\end{overpic}\n\n\caption{Comparison of the MORAN and RARE. All characters are cropped for further comparison. The recognition results are on the right. ``GT\" denotes the ground truth.}\n\n\label{fig:comparison-other}\n\end{figure}\n\nThe training of the MORAN is more difficult than that of RARE. We thus designed a curriculum learning strategy to enable the stable convergence of the MORAN. As for RARE, although it is optimized end-to-end with special initialization, a randomly initialized network may fail to converge.\n\n\begin{table*}[t]\n\centering\n\caption{Results on general benchmarks. 
``50\" and ``1k\" are lexicon sizes. ``Full\" indicates the combined lexicon of all images in the benchmarks. ``None\" means lexicon-free.}\n\\label{table:Results on general benchmarks}\n\\begin{tabular}{| c | c c c | c c | c c c | c |}\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{3}{c|}{IIIT5K} & \\multicolumn{2}{c|}{SVT} &\\multicolumn{3}{c|}{IC03} & IC13 \\\\\n\\cline{2-10}\n & 50 & 1k & None & 50 & None & 50 & Full & None & None \\\\\n\\hline\nAlmaz\\'{a}n et al \\cite{almazan2014word} & 91.2 & 82.1 & - & 89.2 & - & - & - & - & - \\\\\nYao et al. \\cite{yao2014strokelets} & 80.2 & 69.3 & - & 75.9 & - & 88.5 & 80.3 & - & - \\\\\nR.-Serrano et al. \\cite{rodriguez2015label} & 76.1 & 57.4 & - & 70.0 & - & - & - & - & - \\\\\nJaderberg et al. \\cite{jaderberg2014deep} & - & - & - & 86.1 & - & 96.2 & 91.5 & - & - \\\\\nSu and Lu \\cite{su2014accurate} & - & - & - & 83.0 & - & 92.0 & 82.0 & - & - \\\\\nGordo \\cite{gordo2015supervised} & 93.3 & 86.6 & - & 91.8 & - & - & - & - & - \\\\\nJaderberg et al. \\cite{Jaderberg2015Deep} & 95.5 & 89.6 & - & 93.2 & 71.7 & 97.8 & 97.0 & 89.6 & 81.8 \\\\\nJaderberg et al. \\cite{jaderberg2016reading} & 97.1 & 92.7 & - & 95.4 & 80.7* & \\textbf{98.7} & \\textbf{98.6} & 93.1* & 90.8* \\\\\nShi, Bai, and Yao \\cite{shi2017end} & 97.8 & 95.0 & 81.2 & \\textbf{97.5} & 82.7 & \\textbf{98.7} & 98.0 & 91.9 & 89.6 \\\\\nShi et al. \\cite{shi2016robust} & 96.2 & 93.8 & 81.9 & 95.5 & 81.9 & 98.3 & 96.2 & 90.1 & 88.6 \\\\\nLee and Osindero \\cite{lee2016recursive} & 96.8 & 94.4 & 78.4 & 96.3 & 80.7 & 97.9 & 97.0 & 88.7 & 90.0 \\\\\nLiu et al. \\cite{liu2016star} & 97.7 & 94.5 & 83.3 & 95.5 & 83.6 & 96.9 & 95.3 & 89.9 & 89.1 \\\\\nYang et al. \\cite{yang2017learning} & 97.8 & 96.1 & - & 95.2 & - & 97.7 & - & - & -\\\\\nYin et al. \\cite{yin2017scene} & 98.7 & 96.1 & 78.2 & 95.1 & 72.5 & 97.6 & 96.5 & 81.1 & 81.4 \\\\\nCheng et al. \\cite{cheng2017focusing} & 98.9 & 96.8 & 83.7 & 95.7 & 82.2 & 98.5 & 96.7 & 91.5 & 89.4 \\\\\nCheng et al. 
\\cite{cheng2017arbitrarily} & \\textbf{99.6} & \\textbf{98.1} & 87.0 & 96.0 & 82.8 & 98.5 & 97.1 & 91.5 & - \\\\\n\\hline\nOurs & 97.9 & 96.2 & \\textbf{91.2} & 96.6 & \\textbf{88.3} & \\textbf{98.7} & 97.8 & \\textbf{95.0} & \\textbf{92.4} \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\\subsection{Results on General Benchmarks}\nThe MORAN was evaluated on general benchmarks in which most of the testing samples are regular text and a small part of them are irregular text. The MORAN was compared with 16 methods and the results are shown in Table \\ref{table:Results on general benchmarks}.\n\nIn Table \\ref{table:Results on general benchmarks}, the MORAN outperforms all current state-of-the-art methods in lexicon-free mode. As Jaderberg \\cite{jaderberg2016reading} treated each word as a category and the model cannot predict out-of-vocabulary words, we highlight these results by adding an asterisk. FAN \\cite{cheng2017focusing} trained with pixel-level supervision is also beyond the scope of consideration. We hence compare the MORAN with the baseline of FAN.\n\n\\begin{table*}[t]\n\\centering\n\\caption{Results on irregular datasets. ``50\" is lexicon sizes. ``Full\" indicates the combined lexicon of all images in the benchmarks. ``None\" means lexicon-free.}\n\\label{table:Results on irregular text}\n\\begin{tabular}{|c|p{1.cm}<{\\centering} p{1.cm}<{\\centering} p{1.cm}<{\\centering} |p{1.5cm}<{\\centering}|p{1.5cm}<{\\centering}|}\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{3}{c|}{SVT-Perspective} & CUTE80 & IC15 \\\\\n\\cline{2-6}\n & 50 & Full & None & None & None \\\\\n\\hline\nABBYY et al. \\cite{wang2011end} & 40.5 & 26.1 & - & - & - \\\\\nMishra et al. \\cite{mishra2012scene} & 45.7 & 24.7 & - & - & - \\\\\nWang et al. \\cite{wang2012end} & 40.2 & 32.4 & - & - & - \\\\\nPhan et al. \\cite{quy2013recognizing} & 75.6 & 67.0 & - & - & - \\\\\nShi et al. \\cite{shi2016robust} & 91.2 & 77.4 & 71.8 & 59.2 & - \\\\\nYang et al. 
\\cite{yang2017learning} & 93.0 & 80.2 & 75.8 & 69.3 & - \\\\\nLiu et al. \\cite{liu2016star} & \\textbf{94.3} & 83.6 & 73.5 & - & - \\\\\nCheng et al. \\cite{cheng2017focusing} & 92.6 & 81.6 & 71.5 & 63.9 & 66.2 \\\\\nCheng et al. \\cite{cheng2017arbitrarily} & 94.0 & 83.7 & 73.0 & 76.8 & 68.2 \\\\\n\\hline\nOurs & \\textbf{94.3} & \\textbf{86.7} & \\textbf{76.1} & \\textbf{77.4} & \\textbf{68.8}\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\\subsection{Results on Irregular Text}\n\n\\begin{figure}[t]\n\\rule{8.25cm}{0.05em}\n\\\\\n\\\\\n\\\\\n\\centering\n\\begin{overpic}[width=8.5cm,height=8cm]{picture\/irregular-wide.jpg}\n\n\\put(5,100){Input Image}\n\\put(35,100){Rectified Images}\n\\put(67,101){Ground Truth}\n\\put(70,97){Prediction}\n\\put(0,94){-----------------------------------------------------------------}\n\n\\put(75,90){west}\n\\put(75,84){\\color{blue}{west}}\n\\put(70,77){---------------}\n\n\\put(75,71){united}\n\\put(75,64){\\color{blue}{united}}\n\\put(70,60){---------------}\n\n\n\\put(75,55){arsenal}\n\\put(75,50){\\color{blue}{arsenal}}\n\\put(70,45){---------------}\n\n\n\\put(75,40){football}\n\\put(75,35){\\color{blue}{football}}\n\\put(70,30){---------------}\n\n\n\\put(72,26){manchester}\n\\put(72,20){\\color{blue}{m}\\color{red}{essageid}}\n\\put(70,15){---------------}\n\n\n\\put(71,10){briogestone}\n\\put(71,4){\\color{red}{contracers}}\n\n\\put(0,-2){-----------------------------------------------------------------}\n\\end{overpic}\n\n\\caption{Effects of different curve angles of scene text. The first four rows are text with small curve angles and the last two rows are text with large curve angles. The MORAN can rectify irregular text with small curve angles.}\n\n\\label{fig:6-irregular-samples}\n\\end{figure}\n\nThe MORAN was also evaluated on irregular text datasets to reveal the contribution of the MORN. The results on SVT-Perspective, CUTE80 and IC15 are shown in Table \\ref{table:Results on irregular text}. 
The MORAN is still the best of all methods.\n\nFor the SVT-Perspective dataset, many samples are of low resolution and perspectively distorted. The result of the MORAN with the 50-word lexicon is the same as that of the method of Liu et al. \cite{liu2016star}. However, the MORAN outperforms all methods in the setting without any lexicon.\n\nIn addition to perspective text, the MORAN is able to recognize curved text. Some examples are demonstrated in Fig. \ref{fig:6-irregular-samples}. The MORAN is able to rectify most of the curved text in CUTE80 and recognize it correctly. It is hence adequately robust in rectifying text with small curve angles.\n\n\subsection{Limitation of the MORAN}\n\nFor fair comparisons and good repeatability, we chose the widely used training datasets, which contain only horizontal synthetic text. Therefore, in the presence of complicated backgrounds, the MORAN will fail when the curve angle is too large. Such cases are given in the last two rows of Fig. \ref{fig:6-irregular-samples}. The MORAN mistakenly regards the complicated background as foreground. However, such samples are rare in training datasets.\n\nFurthermore, with the existing training datasets and without any data augmentation, the MORAN focuses more on horizontal irregular text. Note that there is much vertical text in IC15. However, the MORAN is not designed for vertical text. Our method was proposed for the complicated deformation of text within a cropped horizontal rectangle.\n\nThe experiments above are all based on cropped text recognition. A MORAN without a text detector is not an end-to-end scene text recognition system. In fact, in many application scenarios, irregular and multi-oriented text is challenging for both detection and recognition, and has attracted great interest. For instance, Liu et al. \cite{yuliang2017detecting} and Ch'ng et al. \cite{CK2017} released complicated datasets. Sain et al. \cite{sain2018multi} and He et al. 
\cite{he2018multi} proposed methods to improve the performance of multi-oriented text detection. Scene text recognition therefore remains a challenging open problem.\n\n\section{Conclusion}\nIn this paper, we presented a multi-object rectified attention network (MORAN) for scene text recognition. The proposed framework involves two stages: rectification and recognition. First, a multi-object rectification network, which is free of geometric constraints and flexible enough to handle complicated deformations, was proposed to transform an image containing irregular text into a more readable one. The rectified patterns decrease the difficulty of recognition. Then, an attention-based sequence recognition network was designed to recognize the rectified image and output the characters in sequence. Moreover, a fractional pickup method was proposed to expand the visual field of the attention-based decoder. The attention-based decoder thus obtains more context information and gains robustness. To efficiently train the network, we designed a curriculum learning strategy to strengthen each sub-network separately. The proposed MORAN is trained in a weakly supervised way, which requires only images and the corresponding text labels. Experiments on both regular and irregular datasets, including IIIT5K, SVT, ICDAR2003, ICDAR2013, ICDAR2015, SVT-Perspective and CUTE80, demonstrate the outstanding performance of the MORAN.\n\nIn the future, it is worth extending this method to deal with arbitrary-oriented text recognition, which is more challenging due to the wide variety of text and background. Moreover, the improvements in end-to-end text recognition performance come not just from the recognition model, but also from the detection model. 
Therefore, finding a proper and effective way to combine the MORAN with a scene text detector is also a direction worthy of study.\n\n\section*{Acknowledgement}\nThis research was supported by the National Key R\&D Program of China (Grant No.: 2016YFB1001405), GD-NSF (Grant No.: 2017A030312006), NSFC (Grant No.: 61472144, 61673182), GDSTP (Grant No.: 2015B010101004, 2015B010130003, 2017A030312006), GZSTP (Grant No.: 201607010227).\n\n\n{\small\n\bibliographystyle{ieee}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\section{Sample structures and low field transport data}\n\begin{figure*}[h!]\n\includegraphics[width=16cm] {FigS1.png}\n\caption{(color online) (a) A schematic drawing of the sample structure. The samples studied here are from the same growth as in Ref.\cite{Oh_AM}, and their thicknesses are also characterized therein. A 20-QL In$_2$Se$_3$\/15-QL (Sb$_{0.65}$In$_{0.35}$)$_2$Te$_3$ layer was first grown on a sapphire (Al$_2$O$_3$) substrate as a buffer layer to match the lattice constant of Sb$_2$Te$_3$. This buffer layer helps to reduce the defect density in the Sb$_2$Te$_3$ layer and allows tuning the carriers from p-type to n-type through titanium doping. Then, the Sb$_2$Te$_3$ layer is deposited and capped \textit{in situ} with another 15-QL (Sb$_{0.65}$In$_{0.35}$)$_2$Te$_3$ layer to protect it from aging and maintain a symmetric environment between its top and bottom surfaces. (b) Carrier concentrations in the studied samples. The carrier densities in the samples are characterized with magneto-transport at low temperature in the Van der Pauw geometry. The carrier types are different in these two samples due to different amounts of titanium dopants. From the sign of the slope in the $R_{Hall}$ vs $B$ plot, the 8-QL sample is determined to be p-doped, and the 10-QL sample is n-doped. 
The extracted carrier densities from the slope of the data give $n_e= 2.2 \times 10^{11}$ cm$^{-2}$ and $n_h=3.1 \times 10^{11}$ cm$^{-2}$ for the 10-QL and 8-QL samples, respectively. }\n\end{figure*}\n\n\newpage\n\section{Additional data on the 8-QL sample}\n\begin{figure*}[h!]\n\includegraphics[width=16cm] {FigS2.png}\n\caption{(color online) Normalized magneto-infrared spectra of the 8-QL sample for (a)(c) transmission and (b) reflection measurements outside and inside the substrate Reststrahlen band, respectively. All the modes are labeled with a Latin or Greek letter. The blue dashed lines are guides to the eye showing the mode evolution in magnetic field. Compared to the 10-QL sample, the A mode is not observable here, probably due to its broad linewidth. All spectra are shifted for clarity.}\n\end{figure*}\n\n\section{Symmetrization and anti-symmetrization of the surface state dispersion}\nThe conduction surface state (CSS) and valence surface state (VSS) can be written generally as \cite{eh_Theory1}:\n\begin{align*}\n E_{CSS}=E_0+Ck^2+\sqrt{A^2k^2+(M-Bk^2)^2},\\\n E_{VSS}=E_0+Ck^2-\sqrt{A^2k^2+(M-Bk^2)^2}.\n\end{align*}\nThe first term is the overall offset $E_0$, and the second term is the band asymmetry term described by $C$. The last term is the dominant component of the SS, which is described by the linear band parameter $A$, the Dirac mass $M$, and the band inversion parameter $B$, and exhibits an electron-hole (e-h) symmetry. We can arrive at the e-h symmetric part $E_{sym}$ and the e-h asymmetric part $E_{asy}$ by\n\begin{align*}\n &E_{sym}=(E_{CSS}-E_{VSS})\/2=\sqrt{A^2k^2+(M-Bk^2)^2},\\\n &E_{asy}=(E_{CSS}+E_{VSS})\/2=Ck^2+E_0.\n\end{align*}\nFrom here, it is evident that the anti-symmetrized part yields the band asymmetry term $Ck^2$ and an offset $E_0$, and the symmetrized part yields the dominant dispersion, also with an offset given by the Dirac mass $M$. 
For clear comparisons between the dispersions of different thicknesses, we shift all the $E_{Sym}$ and $E_{Asy}$ to the same origin in Fig. 4(c-d) in the main text. \n\nFinally, we note that when the gap $M$ is sufficiently large, we can Taylor expand $E_{sym}$:\n\\begin{align*}\n E_{sym}=\\sqrt{A^2k^2+(M-Bk^2)^2}=M+\\frac{(A^2-2M B)k^2}{2M}+\\dots,\n\\end{align*}\n which explains the quadratic behavior in the ultrathin film limit.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRandom matrix theory (RMT)~\\cite{RMT} accurately describes eigenvalue\ncorrelations in complex systems. More precisely, if we consider a\nquantum system governed by a Hamiltonian $H$ whose classical\ncounterpart is chaotic, then the statistical properties of the\neigenvalue spectrum of $H$ can be modeled by an ensemble of matrices\nwith random entries (distributed according to some statistical weight)\nand with the same global symmetries as $H$. This description is\ninsensitive to the details of the interaction and predicts universal\nfeatures that are unveiled when different spectra are rescaled\n(unfolded) to the same mean density.\n\nAmong the many applications of RMT in mathematics and in physics, a\nparticularly interesting one is relevant for the description of the\nspectrum of the Dirac operator in quantum chromodynamics. QCD\nin the $\\varepsilon$-regime can be described by a chiral RMT with the\nsame chiral and flavor symmetries as QCD~\\cite{chRMT}. This approach\ncan also be extended to non-vanishing temperature and\/or chemical\npotential and has been confirmed in many numerical studies. 
In this\nformulation, the anti-Hermitian massless Dirac operator $D=\\gamma_\\mu\n\\left(\\partial_\\mu + i A_\\mu \\right)$ is described in terms of a\nmatrix with an off-diagonal block structure,\n\\begin{equation}\n \\label{dmatrix}\n D\\to \\left( \n \\begin{tabular}{cc}\n 0 & $i W$ \\\\\n $i W^\\dagger$ & 0 \\\\\n \\end{tabular}\n \\right) \\: , \n\\end{equation}\nwhere $W$ is a complex $(n+\\nu)\\times n$ matrix and $\\nu$ plays the\nrole of the topological charge.\n\nDepending on the color gauge group $G$ and on the fermion field\nrepresentation, the Dirac operator may also be invariant under some\ndiscrete antiunitary symmetries, leading to the following symmetry\nclasses~\\cite{Verbaarschot:1994qf}:\n\\begin{enumerate}\n\\item For $G=\\text{SU}(2)$ and fermions in the fundamental\n representation, the pseudo-real nature of the group generators\n allows us to recast the Dirac operator in a form with real matrix\n entries. The corresponding matrix ensemble is the chiral orthogonal\n ensemble (chOE) with Dyson index $\\beta_D=1$.\n\\item For $G=\\text{SU}(N_C)$ with $N_C \\ge 3$ and fermions in the\n fundamental representation, the Dirac operator generically has\n complex entries. The appropriate matrix ensemble is the chiral\n unitary ensemble (chUE) with Dyson index $\\beta_D=2$.\n\\item For gauge group $G=\\text{SU}(N_C)$ and fermions in the adjoint\n representation, the generators are antisymmetric matrices with\n imaginary entries, and the Dirac operator can be written as a matrix\n of real quaternions. The associated matrix ensemble is the chiral\n symplectic ensemble (chSE) with Dyson index $\\beta_D=4$.\n\\end{enumerate}\nThe behavior of the universal quantities depends on the symmetry\nclasses listed above. 
In particular, the probability density $P(s)$\nfor the spacing $s$ of adjacent unfolded levels can be computed\nexactly and is well approximated by the Wigner surmise,\n\\begin{align}\n \\label{eq:Wigner}\n P(s) = a\\,s^{\\beta_D}e^{-bs^2}\\quad\\text{with}\\quad\n a=2\\,\\frac{\\Gamma^{\\beta_D+1} \\left( \\beta_D\/2 +1 \\right)}\n {\\Gamma^{\\beta_D+2} \\left( (\\beta_D+1)\/2 \\right)} \\:,\\quad\n b= \\frac{\\Gamma^2\\left( \\beta_D\/2+1 \\right)}\n {\\Gamma^2 \\left( (\\beta_D +1)\/2 \\right)}\\:.\n\\end{align}\nFor quantum systems whose classical analog is integrable, $P(s)$ is\ngiven by the result for a Poisson process, $P(s)=e^{-s}$.\n\nThe massless staggered Dirac operator on a lattice in $d$ dimensions\nwith lattice spacing $a$,\n\\begin{equation}\n \\label{eq:DKS}\n (D_\\text{KS})_{x,y} = \\frac{1}{2a} \\sum_{\\mu=1}^{d}\n (-1)^{\\sum\\limits_{\\nu<\\mu}\\!\\! x_\\nu} \\left[ \\delta_{x+\\hat\\mu,y} \n U_\\mu^\\dagger(x) - \\delta_{x-\\hat\\mu,y} U_\\mu(x-\\hat\\mu) \\right] \\: , \n\\end{equation}\nwhich is widely used in numerical simulations, exhibits a peculiar\nfeature: For gauge group SU(2) and fundamental fermions, its\nantiunitary symmetry is that of the chSE \\cite{Halasz:1995vd} instead\nof the chOE symmetry of the continuum operator.\\footnote{A similar\n situation occurs for adjoint fermions: The staggered Dirac operator\n has chOE symmetry in this case, as opposed to the chSE symmetry of\n the continuum operator.} This discrepancy is due to the replacement\nof the $\\gamma$-matrices by the staggered phases in $D_\\text{KS}$.\nFig.~\\ref{fundsmallbetafig} confirms the expectation using $P(s)$ as\nan example.\n\n\\begin{figure}[-t]\n \\includegraphics[width=.33\\textwidth]{fundL10beta3}\n \\includegraphics[width=.33\\textwidth]{fundL14beta3}\n \\includegraphics[width=.33\\textwidth]{fundL16beta3}\n \\caption{The distribution of the unfolded level spacing $s$ obtained\n for different lattice sizes from the SU(2) staggered Dirac\n operator with 
fundamental fermions and gauge action parameter\n $\\beta=4\/g^2=3.0$ (histograms) is consistent with the chSE, which\n is not the symmetry of the continuum Dirac operator.}\n \\label{fundsmallbetafig}\n\\end{figure}\n\nA transition of the symmetry properties of the staggered Dirac\noperator from chSE to chOE is expected in the continuum limit. A\nfirst indication of such a transition has been reported in\nRef.~\\cite{Follana:2006zz}. We plan to study this transition in more\ndetail, but here we first consider a numerically cheaper case, namely\nthe free limit. This limit is approached by increasing $\\beta$ at\nfixed (or mildly varying) lattice size, i.e., the physical volume is\nshrinking to zero. From the RMT point of view, this limit is\ninteresting since it might result in a transition to Poisson behavior\nin, e.g., $P(s)$. We shall see that the situation is actually a bit\nmore complicated.\n\n\n\\section{The Dirac spectrum of vacuum configurations and the influence\n of Polyakov loops}\n\\label{sec:vac}\n\nIn this section we present a short theoretical interlude in\npreparation for our numerical results. Let us consider a particular\ngauge configuration. For reasons that will become clear below, we now\nconstruct a corresponding vacuum configuration (i.e., a configuration\nwith all plaquettes equal to unity) that is built from uniform links\nin an Abelian subgroup of SU(2) which reproduce the average traced\nPolyakov loops $P_\\mu$ (for all directions $\\mu$) of the configuration\nunder consideration. The eigenvalue spectrum of the staggered Dirac\noperator \\eqref{eq:DKS} can be computed analytically for such a vacuum\nconfiguration. 
If the lattice extent in the $\\mu$-direction is\n$L_\\mu$, we obtain\n\\begin{equation}\n \\label{clustersandpolyakovloops}\n \\lambda=\\pm i \\sqrt{\\sum_{\\mu=1}^d \\sin^2 \\left[ \\frac{2 \\pi}{L_\\mu}\n \\left( k_\\mu + c_\\mu + \\frac{\\arccos P_\\mu}{2 \\pi} \\right)\\right] }\n \\quad \\text{with }\\: k_\\mu\\in\\mathbb N\\:,\\; 0 \\le k_\\mu < \\frac{L_\\mu}2\\:,\n\\end{equation}\nwhere $c_\\mu = 0$ $\\left(c_\\mu= \\frac{1}{2}\\right)$ for (anti-)\nperiodic boundary conditions (b.c.s) of the Dirac operator in\ndirection $\\mu$. In the free limit, i.e., for $\\beta \\to \\infty$, the\nPolyakov loops take values in the center $\\mathbb Z_2$ of SU(2), i.e.,\n$P_\\mu=\\pm1$. In this case it is clear from\nEq.~\\eqref{clustersandpolyakovloops} that changing $P_\\mu$ from $+1$\nto $-1$, or vice versa, is equivalent to switching between periodic\nand antiperiodic b.c.s in that direction. In the following we always\nuse (anti-) periodic b.c.s for $\\mu=1$, 2, 3 ($\\mu=4$). Close to the\nfree limit, i.e., for large values of $\\beta$, the distribution of\n$P_\\mu$ is peaked at $\\pm1$. The eigenvalues predicted by\nEq.~\\eqref{clustersandpolyakovloops} are degenerate, see below.\n\n\n\\section{Numerical results for the eigenvalue spectrum close to the\n free limit}\n\n\\begin{figure}[-t]\n \\includegraphics[width=.33\\textwidth]{polyakov1}\n \\includegraphics[width=.33\\textwidth]{polyakov2}\n \\includegraphics[width=.33\\textwidth]{polyakov6}\n \\includegraphics[width=.33\\textwidth]{polyakov8}\n \\includegraphics[width=.33\\textwidth]{polyakov18}\n \\includegraphics[width=.33\\textwidth]{L6_polyakov5}\n \\caption{Separation of scales in the level spacings close to the\n free limit. The eigenvalues obtained for each configuration\n (black dots) arrange themselves in clusters of eight. 
These\n clusters are spread about the well-separated plateaux\n corresponding to the free case (dashed blue lines).\n Eq.~\\protect\\eqref{clustersandpolyakovloops} yields an accurate\n prediction for the location of each cluster (solid red\n lines). The different plateau structures are due to the different\n signs of $P_\\mu\\approx\\pm1$ in each configuration. The last plot\n confirms that the agreement between the data and\n Eq.~\\protect\\eqref{clustersandpolyakovloops} persists on larger\n lattices, for which the theoretical formula predicts more\n clusters.}\n \\label{threescalesfig}\n\\end{figure}\n\nWhen $\\beta$ is increased to very large values, we observe that the\neigenvalue spectrum of $D_\\text{KS}$ arranges itself as shown in\nFig.~\\ref{threescalesfig}. Only the eigenvalues with positive\nimaginary part are plotted, and an overall double (Kramers) degeneracy\n\\cite{Hands:1990wc} has been divided out from all of our results. The\ndashed blue lines, which will be called \\emph{plateaux} in the\nfollowing, correspond to the highly degenerate eigenvalues predicted\nby Eq.~\\eqref{clustersandpolyakovloops} in the free limit, i.e., for\n$P_\\mu=\\pm1$. The numerically obtained eigenvalues form\n\\emph{clusters} consisting of eight eigenvalues each, and these\nclusters are located close to the plateaux of the free limit. We\nobserve a clear separation of three energy scales (from largest to\nsmallest),\n\\begin{enumerate}\\itemsep-1mm\n\\item the spacings between the plateaux of the free limit,\n\\item the spacings between adjacent clusters (which, by definition, do\n not overlap), and\n\\item the spacings between adjacent eigenvalues within a cluster.\n\\end{enumerate}\n\nThe question now arises to what extent the locations of the clusters\nfor a particular configuration can be described by the levels obtained\nfrom Eq.~\\eqref{clustersandpolyakovloops} for the vacuum configuration\nconstructed as described in Sec.~\\ref{sec:vac}. 
The answer is given\nby the solid red lines, which correspond to the predictions of\nEq.~\\eqref{clustersandpolyakovloops} using the $P_\\mu$ computed from\nthe configuration under consideration. These lines are essentially\nhidden by the data points and thus give very good approximations to\nthe cluster locations. This statement holds for all configurations,\nsome of which are shown in Fig.~\\ref{threescalesfig}.\n\nWe can understand the observation that the clusters contain eight\nnearly degenerate eigenvalues. A careful analysis shows that the\neigenvalues predicted by Eq.~\\eqref{clustersandpolyakovloops} have a\nmultiplicity of $2^d$. For $d=4$, this predicts an eightfold\ndegeneracy in addition to Kramers' degeneracy. A small perturbation\nof the vacuum configuration lifts this eightfold degeneracy but not\nthe Kramers degeneracy, which is exact for $D_\\text{KS}$.\n\nWe also remark that in the continuum limit, in which the lattice\nspacing goes to zero at fixed physical volume, the eigenvalues of\n$D_\\text{KS}$ should arrange themselves in multiplets corresponding to\nthe taste degeneracy of staggered fermions. This effect was observed\nfor SU(3)~\\cite{quadruplets} (quadruplets) and\nSU(2)~\\cite{Follana:2006zz} (doublets) for improved versions of\n$D_\\text{KS}$, but it should not be confused with the effect we are\nstudying here.\n\nOur observations may be related to other recent work~\\cite{chisbconf}\ndiscussing the connection between the spectrum of the Dirac operator\n(which is relevant for chiral symmetry breaking) and the Polya\\-kov\nloop (which is an order parameter for confinement in the quenched\ntheory).\n\n\n\\section{Spectral fluctuations on different scales}\n\nWe now turn to a study of spectral correlations close to the free\nlimit, using again the nearest-neighbor spacing distribution $P(s)$ as\nan example. 
To construct $P(s)$, the average spectral density must be\nseparated from the spectral fluctuations by an unfolding procedure.\nBecause of the separation of scales observed above, a uniform\nunfolding of the entire spectral density is not sensible close to the\nfree limit. Rather, we should consider the spectral fluctuations\nseparately on the three scales we identified.\n\nFirst, we construct $P(s)$ for the level spacings within the clusters\nby unfolding the spectral density only within a given cluster and then\naveraging $P(s)$ over all clusters. Fig.~\\ref{still_chse} (left)\nshows that $P(s)$ within the clusters continues to agree with the chSE\neven for very large values of $\\beta$. This is consistent with the\ntheoretical expectation, since the perturbation that lifts the\ndegeneracy of the eigenvalues in each cluster has the same symmetries\nas the full $D_\\text{KS}$ operator, which are those of the chSE.\n\nSecond, to construct $P(s)$ for the spacings between clusters, we\ndefine a cluster by the average of its eight members and unfold the\ndensity of the clusters. Fig.~\\ref{still_chse} (right) shows that the\nresulting $P(s)$ differs from the chSE. It also differs from the\nPoisson distribution, but we believe that this is due to the small\nlattice size and to the fact that on a $10^4$ lattice the free\nstaggered operator has many ``accidental'' degeneracies. These\ndegeneracies can be removed by choosing a lattice with\n$L_\\mu=2\\ell_\\mu$, where the $\\ell_\\mu$ are four different prime\nnumbers. As an example, we generated quenched configurations close to\nthe free limit ($\\beta=10000$) for a $34\\times38\\times46\\times58$\nlattice, computed the averaged traced Polyakov loops $P_\\mu$ and used\nthese to calculate the ``cluster spectrum'' according to\nEq.~\\eqref{clustersandpolyakovloops}. 
The resulting $P(s)$ is shown\nin Fig.~\\ref{poissonlargelatticefig} (left) and now agrees with the\nPoisson distribution.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.33\\textwidth]{fundL10beta10000}\n \\hspace*{20mm}\n \\includegraphics[width=.33\\textwidth]{clusters_L10_beta_10000}\n \\caption{$P(s)$ for the eigenvalue spacings within the clusters\n (left) and for the spacings between clusters (right), both for\n $L^4=10^4$ and $\\beta=10000$.}\n \\label{still_chse}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.33\\textwidth]{incommensurable_sizes_clusters}\n \\hspace*{20mm}\n \\includegraphics[width=.33\\textwidth]{incommensurable_sizes_plateaux}\n \\caption{$P(s)$ for the ``cluster spectrum'' predicted by\n Eq.~\\protect\\eqref{clustersandpolyakovloops} for a single\n configuration on a $ 34 \\times 38 \\times 46 \\times 58 $ lattice at\n $\\beta=10000$ (left) and $P(s)$ for the free Dirac eigenvalues (or\n plateaux) on the same lattice (right). Both agree with the\n Poisson distribution.}\n \\label{poissonlargelatticefig}\n\\end{figure}\n\nThird, we consider $P(s)$ for the spacings between the free\neigenvalues (or plateaux), which are known analytically\n(Eq.~\\eqref{clustersandpolyakovloops} with $P_\\mu=\\pm1$). Again it is\nsensible to remove accidental degeneracies by choosing a ``prime\nlattice''. The result for a $34\\times38\\times46\\times58$ lattice,\nobtained after unfolding the free eigenvalues, is shown in\nFig.~\\ref{poissonlargelatticefig} (right) and agrees with Poisson as\nexpected \\cite{GoesToPoisson}.\n\nAlthough the two plots in Fig.~\\ref{poissonlargelatticefig} look very\nsimilar, it should be noted that they come from data at very different\nscales. 
The average spacing between the levels of the free spectrum\nis more than ten times larger than the average spacing between the\nlevels predicted by Eq.~(\\ref{clustersandpolyakovloops}).\n\n\n\\section{Summary and outlook}\n\nWe have investigated the spectrum of the staggered Dirac operator with\nSU(2) gauge fields close to the free limit. Three different energy\nscales emerge:\n\\begin{enumerate}\n\\item Overall plateau structure: The spectrum arranges itself in\n clusters of eight eigenvalues each, lying close to the plateaux\n predicted for the free Dirac operator. The plateau structure only\n depends on the lattice geometry (i.e., on the $L_\\mu$ and on the\n b.c.s) and on the signs of the average traced Polyakov loops in the\n different directions. (Note that the distribution of the traced\n Polyakov loops is peaked at $\\pm1$, corresponding to the center\n elements of SU(2).)\n\\item Plateau-breaking and cluster separation at an intermediate\n scale: At a finer scale, the spread of the clusters about the\n plateaux of the free limit is due to the deviations of the $P_\\mu$\n from $\\pm1$ and can be accurately modeled by\n Eq.~\\eqref{clustersandpolyakovloops}.\n\\item Eigenvalue splitting within the clusters: The system dynamics\n removes the degeneracy of the eight eigenvalues belonging to the\n same cluster.\n\\end{enumerate}\nIn the regime we have studied, these three scales are well separated\nand can be unambiguously disentangled from each other.\n\nThe nearest-neighbor spacing distribution $P(s)$ computed within the\nclusters shows a behavior compatible with the chSE, consistent with\nthe symmetries of the staggered Dirac operator. For large enough\n``prime lattices'', the spacing distributions between the clusters and\nbetween the plateaux tends to the Poisson distribution. In the near\nfuture, we will also present a study of the spectrum of the Dirac\noperator for adjoint fermions close to the free limit. 
Ultimately, of\ncourse, we would like to obtain a more detailed understanding of the\ncontinuum limit, in which a chSE to chOE (for SU(2) with fundamental\nfermions) or chOE to chSE (for adjoint fermions) transition is\nexpected.\n\n\\acknowledgments\n\nWe thank J.J.M.~Verbaarschot for helpful discussions and acknowledge\nsupport from DFG (FB, SK, TW) and from the Alexander von Humboldt\nFoundation (MP).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdttu b/data_all_eng_slimpj/shuffled/split2/finalzzdttu new file mode 100644 index 0000000000000000000000000000000000000000..9de097524c72657d41997721b2de170ccfe6b846 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdttu @@ -0,0 +1,5 @@ +{"text":"\\section{Field-aligned downward current sheets}\nThe observations concerning auroral electron fluxes and the related currents are the following:\n\n\\subsection{Upward current region properties}\nDownward electrons\/upward currents occupy an extended spatial interval of low density and barely structured fluxes. The variation of the perpendicular (to the main field) magnetic field is smooth; it changes about linearly from $-\\delta B_\\perp$ to $+\\delta B_\\perp$ signalling that the spacecraft has crossed a homogeneous broad structureless upward sheet current carried by the as well structureless medium energy electron flux which, in the energy-time spectrum occupies a narrow band of constant energy and small energy spread. \n\n\nAbsence of an ionospheric electron background at (FAST) spacecraft altitude either suggests that the ionosphere does not reach up to those altitudes ($\\sim2000- 3000$ km) which sometimes, in a diffusive model of the ionosphere, is interpreted as presence of a field aligned electric potential which holds the ionospheric electrons down while attracting magnetospheric electrons. 
The validity of such an assumption can be questioned in terms of tail reconnection as the inflow of reconnection accelerated electrons from the tail does not require such an electric potential field, the origin of which is difficult to justify over a region of upward current extension while being natural when considering tail reconnection where it simply maps the large reconnection affected interval of the cross tail current down into the ionosphere. The small number of downward electrons does not require any presence of electric fields. The flux consists of nearly mono-energetic auroral electrons. These form a field-aligned beam and are accompanied by observed low frequency Langmuir-wave excitation which allows for the determination of the beam density being roughly $N_\\downarrow\\approx 10^6$ m$^{-3}$ (one electron per cubic centimetre). \n\n\\subsection{Upward topside electrons}\nFigure \\ref{fig3} shows simultaneous upward\/downward FAST measurements of electron fluxes when crossing a very active substorm topside auroral ionosphere. The upward-electron downward-current region behaves differently. Its spatial extension is narrow. In view of the electron flux it consists of a large number of very closely spaced spikes. The flux in each spike (generally) maximizes at the lowest energies $\\epsilon_e\\lesssim 0.1$ keV. Electron number densities are high estimated to be around $N_\\uparrow \\sim 10^7$ m$^{-3}$ (ten per cubic centimetre) or higher. The total integrated up and down currents must be similar for perfect closure. This is however not guaranteed for the divergence of currents in the ionosphere perpendicular to the magnetic field, current dissipation, and the high spatial structuring of the downward currents such that it cannot be checked whether the indication of the different downward current sheets all belong to closure of the single upward current. 
Some uncertainty in comparison remains, which however for our purposes does not matter.\n\nThe important observation is the high local structure of the downward currents, their obvious spatial closeness, and their differences in energies and flux level which is reflected in both the flux fluctuations across the narrow downward current region, and in the high spatial fluctuation of the main-field-perpendicular magnetic component $\\mathbf{b}_\\perp$ {(from here on denoting the magnetic variation $\\delta B_\\perp= b_\\perp$)} which indicates the crossing of many downward current sheets or filaments. All these downward currents flow parallel while being closely spaced in the direction perpendicular to the main field $\\mathbf{B}_0$. Electrodynamics requires that they should attract each other and merge. Why is this not happening in the auroral downward current region?\n\nOne might argue that the acceleration of electrons in the ionosphere below observation altitude is probably highly localized, depending on processes in the resistive ionospheric plasma. Therefore there would be no need for upward escaping electrons to merge laterally. This argument is invalid because they transport current. Lorentz attraction forces the currents to approach each other to form a broad unstructured downward current sheet. This is, however, inhibited by the strong main auroral geomagnetic field $B_0$. The argument that this should also happen in the upward current region fails because the current sheet there is broad by its origin from the tail reconnection site.\n\\begin{figure*}[t!]\n\\centerline{\\includegraphics[width=0.75\\textwidth,clip=]{fig2.jpg}}\n\\caption{Full sequence of FAST measurements across dow-up-auroral current system on 02-01-1997 \\citep[after][]{treumann2011}. 
{$(a)$ Magnetic field component $b_\\perp$ transverse to main field $B_0$, $(b)$ electric field fluctuation wave form $\\delta E$, $(c)$ low frequency electric fluctuation spectrum, $(d)$ high frequency electric spectrum, showing emission of auroral kilometric radiation bands $(e)$ electron energy spectrum, $(f)$ electron flux versus pitch-angle, $(g)$ ion energy flux, $(h)$ ion flux versus pitch-angle. The most intriguing part here is the smoothness of the magnetic signature of the upward current in its linear course showing that the upward current is a broad homogeneous current sheet. The downward current region (DCR) flanks the upward current region to both its sides, is comparably narrow in its spatial extent, and exhibits strong current and flux variations. This is seen in the electron flux panels $(e-f)$. Downward fluxes around few keV are relatively smooth indicating a relatively stable tail reconnection over observation time, upward fluxes have maximum at low energy and are highly variable in time and space.}The magnetic field being the integrated response to the spatial flux fluctuations exhibits a much smoother course which is inverse with respect to that of the upward current thus indicating the reversed current direction. Note the low energies of the upward electron fluxes in panel five as well as the clear separation of upward and downward fluxes as seen in the left part.} \\label{fig2}\n\\end{figure*}\n\nSince anti-parallel currents reject each other the transition region between upward and downward currents is quiet. {This is seen in panels $(b - f)$ of Fig. \\ref{fig2} and} is in contrast to our previous investigation where we assumed that reconnection would happen there between parallel kinetic Alfv\\'en waves. 
The Lorentz force between two equally strong sheet currents $\\mathbf{J}_\\|$ is \n\\begin{equation}\n\\mathbf{J}_\\| \\times \\mathbf{b}_\\perp= -\\nabla_\\perp b_\\perp^{2}\/\\mu_0 \n\\end{equation}\nwhere $\\mathbf{b}_\\perp$ is the magnetic field between the two currents {(in the following we suppress the index $\\perp$ on the magnetic field component $\\mathbf{b}$)}, and $\\nabla_\\perp$ refers to the gradient in the direction from current sheet to current sheet. The current consists, however, of gyrating electrons whose Lorentz force is the cross product of the azimuthal gyration speed times the very strong stationary field with gradient $\\nabla_c$ taken only over the gyro-radius $r_{ce}=v_{e\\perp}\/\\omega_{ce}$ of the electrons. For a separation of the sheet current exceeding the electron gyroradius and low current density the sheet currents will approach each other only on very long diffusive time scales of no interest. For a thin current sheet only a few gyroradii thick the condition for this time to be long is simply that the electron inertial length exceeds the gyroradius or\n\\begin{equation}\nv_{e\\perp}\/c\\ll\\omega_{ce}\/\\omega_e\n\\end{equation}\nwhich holds under very weak conditions in the topside auroral ionosphere. This implies that downward current sheets separated by say an electron inertial length $\\lambda_e=c\/\\omega_e$ will not merge under no circumstance. They remain separated over the observational spacecraft crossing time scales. It is their secondary magnetic field $\\mathbf{b}$ which will undergo reconnection without affecting the ambient magnetic field which just serves as guide field directed along the current flow. This distinguishes topside reconnection from other guide field mediated reconnection. {One may note, however, that sometimes in simulations when plasmoids form \\citep[cf., e.g.,][and others]{malara1991} parallel currents apparently do not attract each other. 
This happens, when the Lorentz force between the parallel currents does not overcome the mechanical forces exerted by the massive plasmoids, i.e. forces induces by their inertia and impulse. The Lorentz force is then too weak to push the parallel currents toward each other, an effect which can also be observed in highly turbulent plasmas. Such cases, when the currents remain close enough will, by the mechanism proposed below, be subject to reconnection between the opposing fields of the parallel current, leading to a cascade in the current structure towards smaller scales and to local reconnection as a main dissipation process of magnetic and turbulent energy.}\n\n\n\\subsection{Kinetic (shear) Alfv\\'en waves}\nIn the complementary wave picture of field-aligned currents in the auroral region, the current is carried by (kinetic) Alfv\\'en waves in the frequency range well below the local ion-cyclotron frequency. In addition, a large number of low-frequency electromagnetic waves are known to be present there \\citep{labelle2002}. We are in the downward current region with highly sheared upward particle flow along the magnetic field consisting of moderately fast electrons and much slower ions. Such flows are capable of generating Alfv\\'en waves \\citep{hasegawa1982} on perpendicular scales of the ion inertial length $\\lambda_i=c\/\\omega_i=\\lambda_e\\sqrt{m_i\/m_e}$ and below and long wavelength parallel to the ambient field. For the current-carrying electrons such waves are about stationary magnetic structures. \n\nThese Alfv\\'en waves cannot be body waves like in the solar wind \\citep{goldstein2005,narita2020} because they are strictly limited to the narrow field-aligned current sheets. 
Since they propagate in the strong auroral geomagnetic field, they are rather different from the usual kind of kinetic Alfv\\'en waves which one refers to in solar wind turbulence \\citep{goldstein2005}, where the magnetic field ist very weak and the turbulence is dominated by the mechanics of the flow \\citep{maiorano2020}. There the ion-temperature plays an important role imposing kinetic effects on the wave. \n\nUnder auroral conditions, in particular close to the ionosphere, the magnetic field is so strong that thermal ion effects on the wave are barely important. Their mass effect enters the Alfv\\'en speed. Instead, however, under those conditions electron inertia on scales $\\lambda_i\\sim\\Delta\\gtrsim\\lambda_e= c\/\\omega_e$ below the ion scale comes into play. For sufficiently narrow field-aligned current sheets of width the order of inertial scales, the field does not allow the electrons to leave their flux tube unless they have large perpendicular moment. Field-aligned electrons remain inside their gyration flux tube, and the currents cannot react to merge with neighbouring parallel current sheets. The Lorentz force on the field-aligned current in the magnetic field of its neighbour is not strong enough to move the currents. In this case the kinetic Alfv\\'en waves transporting the currents in pulses become inertially dominated with dispersion relation\n\\begin{equation}\n\\omega^2= \\frac{k_\\|^2V_A^2\\big(1+k_\\perp^2\\rho_i^2\\big)}{1+k_\\perp^2\\lambda_e^2}\n\\end{equation}\nwhere $\\rho_i$ is some modified ion gyro-radius \\citep[cf., e.g.][]{baumjohann1996} containing kinetic temperature contributions. For the cold ions in the topside auroral ionosphere the term containing $\\rho_i$ in the numerator vanishes. The kinetic Alfv\\'en wave under those conditions becomes an inertial or shear wave. It propagates at a reduced though still fast speed along the magnetic field, being of very long parallel wavelength. 
It also propagates slowly perpendicular to the magnetic field at short wavelength $\\lambda_\\perp\\sim\\lambda_e\/2\\pi$. It is, in principle, this wave which carries the current. Thus the current is not stationary on time scales long compared to the inverse frequency $\\Delta t>\\omega^{-1}$ but can be considered stationary for shorter time scales $\\omega\\Delta t < 1$. \n\nThe above dispersion relation, neglecting the ion contribution in the numerator, gives the well known relations for the parallel and perpendicular energy transport in the shear wave\n\\begin{equation}\n\\frac{\\partial\\omega}{\\partial k_\\|}=\\frac{V_A}{\\sqrt{1+k_\\perp^2\\lambda_e^2}},\\qquad \\frac{\\partial\\omega}{\\partial k_\\perp}=-\\frac{\\partial\\omega}{\\partial k_\\|}\\frac{k_\\|}{k_\\perp} \\frac{1}{\\big[1+1\/(k_\\perp\\lambda_e)^{2}\\big]}\n\\end{equation}\nEnergy transport in the perpendicular direction is smaller than parallel by the ratio of wave numbers.\n\nRepeating that we are in the downward current upward electron flux region causality requires that the upward electrons carry information from the ionosphere to the magnetosphere. Hence the kinetic Alfv\\'en waves in this region also propagate upward being produced in the topside by the transverse shear on ion-inertial scales below $\\lambda_i$. \n\n\\begin{figure*}[t!]\n\\centerline{\\includegraphics[width=0.8\\textwidth,clip=]{fig3.jpg}}\n\\caption{Sequence of downward (top) and upward (bottom) auroral electron fluxes observed by FAST on 02-07-1997 in the topside auroral region when crossing a substorm aurora \\citep[after][]{treumann2011a}. The sequence distinguishes nicely between the intense downward electron fluxes at energies $\\epsilon_e\\sim 10$ keV and downward fluxes at energies $\\epsilon_e<0.1$ keV. Upward fluxes are confined to narrow spatial regions, downward fluxes are distributed over a much wider domain. 
In the transition regions between both domains one observes flux mixing which indicates that the current systems are not simply two dimension and also that there are many overlapping flux and current sources which the one-domensional path of the spacecraft does not resolve spatially. } \\label{fig3}\n\\end{figure*}\n\n\\section{Reconnection under auroral conditions}\nAssuming stationarity of the field aligned current $\\mathbf{J}=J_\\|\\mathbf{B}_0\/B_0$ in two adjacent but separated parallel sheets and assuming, for simplicity, that the two currents are of equal strength, reconnection will occur in the central region of separation of the current sheets. (Figure \\ref{fig4} shows a two-cross section schematic of the downward current-field configuration for two closely spaced current sheets.) Here the two magnetic fields of the field-aligned currents are antiparallel. According to the above discussion, we are in the downward current region \\citep[as in the case of our previous radiation model][]{treumann2011}. In fact an analogue model would apply to the upward current region. The current flows in direction $z$; the direction of $y$ is longitudinal (eastward), $x$ is latitudinal (northward). If the sheet ist extended mostly in $y$ the antiparallel magnetic fields are in $y$ along the sheets. They will touch each other and reconnect between the two sheets thereby forming reconnection X-points with field component $\\pm b_x$ and extended magnetic field free electron exhausts in $x$ and in $y$, which contain the local main-field parallel reconnection electric field, and accelerate electrons along the ambient field. Tjhese exhausts will propagate along the main field together with the wave. Plasmoids might also form in the separation between the sheets perpendicular to the ambient field, and the presence of the strong ambient field will impose electron gyration and scatter of electrons causing secondary effects like bursts of field aligned energetic electrons. 
Moreover, the exhausts will serve as source of radiation and various kinds of electrostatic instabilities (for instance Bernstein modes). \n\n{There are two essential differences between this type of reconnection and ordinary reconnection models. The first is that the ambient field serves as a strong guide field which, as noted, inhibits the adjacent field-aligned current to merge. The second is that initially the set-up lacks the presence of any central current sheet which in conventional models of reconnection is crucial and imposed from the beginning. In topside reconnection such a current flowing along the magnetic field inside the separation region would imply a return current which, however, is absent. Return currents flow through the bottomside ionosphere and close in the upward current region. Nevertheless formally a fictitious return current forms locally and temporarily in the centre of the separator, which can be assumed as distributed over the separating region and belonging to the antiparallel fields $\\pm b$.}\n\n{This current builds up dynamically and locally during the reconnection process itself when the two kinetic Alfv\\'en waves slowly move perpendicular. This is a difficult dynamical problem in that reconnection will set on when the encountering magnetic fields exceed some threshold. Since electrons in this region are magnetized by the strong ambient field, they gyrate but do not take notice of the weak field $b$ of the kinetic Alfv\\'en waves which is transported across the separating region by the perpendicular phase and group speeds of the waves to get into contact and merge.} \n\n{The reconnection process is thus solely between the two waves, primarily not affecting the ambient field and not based on any real central primary current sheet. Observations so far do not resolve the magnetic nor the particle effects of such fictitious return currents though some of the structure seen in the low energy electron fluxes in Fig. 
\\ref{fig3} could be interpreted as such without proof. In fact, in order to avoid formation of the fictitious return current, which would imply that this current would be equally strong in the gap between the current sheets, reconnection is required over the full length of half a wavelength along $z$. Thus it necessarily generates elongated field-aligned vertical X-lines and electron exhausts in $z$.}\n\n \\subsection{First step}\nAll these effects are of vital interest. However, one particularly interesting question concerns the dissipation produced by this kind of reconnection. It is frequently argued that it leads to sliding of main-field field lines. In order to understand such a mechanism one needs to know the anomalous resistance caused by reconnection. In electrodynamic formulation, reconnection is conventionally dealing solely with the merging and energy transfer of fields. The microscopic mechanism of energy transfer is accounted for in the transport coefficients. Hence the appropriate way of inferring their value is referring to the electromagnetic energy exchange. This leads to the application of Poynting's theorem\n\\begin{equation}\n\\frac{\\partial b^2}{\\partial t}=-\\mu_0\\,\\eta_{an}J_\\|^2-\\nabla_\\perp\\cdot\\big(\\mathbf{E}_\\|\\times\\mathbf{b}\\big)-\\nabla_\\perp\\cdot\\big(\\mathbf{E}_{rec}\\times\\mathbf{B}_0\\big)\n\\end{equation}\nwhere the contribution of the electric field to the left-hand side is neglected as it is relativistically small, and $\\mathbf{b}$ is the magnetic field of the field-aligned current. It allows for a convenient estimate of the anomalous resistivity $\\eta_{an}$ in reconnection without going into any microscopic detail of the mechanism of its generation. The electric field in this expression is along the ambient magnetic field, essentially being the electric field of the kinetic Alfv\\'en wave. 
Estimates of this parallel field have been provided by \\citet{lysak1996} and were taken as the important agent for accelerating auroral electrons. \n\nThe above expression shows that reconnection in this case is a two-step process. In the first step the parallel field $E_\\|$ along the ambient magnetic guide field sets up reconnection. In the second step the reconnection electric field $E_{rec}$ and exhaust have evolved. The cross-product with the main magnetic field then modifies the dynamics of the exhaust. \n\n\\subsection{Anomalous collision frequency}\nIn this subsection we are not interested in this effect here as it is overwritten once reconnection really sets on but enters in the determination of the perpendicular inflow speed. It causes it to be different from tailward reconnection. Instead we proceed to an estimate of the anomalous collision frequency. \n\nThe parallel electric field $E_\\|$ of the kinetic Alfv\\'en wave plays an important role in the first step of the topside reconnection process. Since this field is parallel to the ambient geomagnetic field $\\mathbf{B}_0$, the cross product with the wave magnetic field is responsible for the two current-sheet magnetic field components $\\pm\\mathbf{b}$ to approach each other in the region between the sheets. Hence, referring to this fact, the second term on the right can be expressed through the perpendicular velocity $\\mathbf{V}_\\perp=\\mathbf{E}_\\|\\times\\mathbf{b}\/b^2$, and we have\n\\begin{equation}\n\\nabla_\\perp\\cdot\\big(\\mathbf{E}_\\|\\times\\mathbf{b}\\big)=\\nabla_\\perp\\cdot\\big(\\mathbf{V}_\\perp b^2\\big)\n\\end{equation}\n\nIn order to get some information about the perpendicular velocity $\\mathbf{V}_\\perp$ which according to our coordinate system points to the centre of the region which separates the two current sheets, i.e. 
along $y$, we refer to the wave picture, noting that these pictures are equivalent: the field-aligned current $J_\\|$ is carried by (upward topside ionospheric) electrons, on the other hand these electrons are transported (or pushed) by the kinetic Alfv\\'en wave. {In fact, of course, $V_\\perp$ is counted from each of the two parallel currents as pointing to the center of the separating sheet. It thus in our water-bag model changes abruptly sign in the center where due to the two antiparallel magnetic fields which collide there a fictitious weak return current of strength $j_\\|\\approx 2b\/\\mu_0\\delta$ arises, with $\\delta$ the fictitious width of this narrow current layer which we do not explicitly consider. The simplest is in our water-bag model to assume that for closely separated parallel current sheets we have essentially \n$\\delta\\to\\alpha\\Delta$, with $\\alpha\\lesssim1$, and a return current distributed over almost the entire separation width. One should also keep in mind that any field-aligned current carried by the Alfv\\'en wave is a current pulse with both $E_\\|$ and $b$ changing direction (oscillating) over half the wavelength. Thus $V_\\perp$ for each current pulse on one ambient field line has same sign over the full wave length while maximizing twice. On using this equivalence the perpendicular velocities $\\pm V_\\perp$ are just the perpendicular phase speeds of the kinetic Alfv\\'en waves on the two adjacent current sheets}\n\\begin{equation}\nV_\\perp\\sim\\frac{\\omega}{k_\\perp}\\approx \\frac{V_A}{\\sqrt{1+k_\\perp^2\\lambda_e^2}}\\frac{k_\\|}{k_\\perp}\\ll V_A\n\\end{equation}\n{This velocity apparently diverges for $k_\\perp\\to0$ which, however, is not the case because the kAW is a surface wave being defined only for $k_\\perp\\neq0$ while becoming a body wave for $k_\\perp\\to0$ carrying no current anymore. 
Its most probable wavenumber is about $k_\\perp\\lambda_e\\sim1$ attributing to the parallel phase and group velocity $\\sim V_A\/\\sqrt{2}$ and a perpendicular group velocity $\\sim-V_A(k_\\|\\lambda_e)\/2^{3\/2}$. However, since $V_\\perp$ indeed transports not only the field but also energy, one may argue that the use of the latter expression would be more appropriate than the phase speed. Since this does not make any big difference for our purposes, we in the following for reasons of simplicity understand $V_\\perp$ as phase speed. For more precise expressions one may replace it in the following with the perpendicular group speed}. \n\nThe velocity $V_\\perp$ is small because $k_\\|\\ll k_\\perp$, i.e. the kinetic Alfv\\'en wave is long-wavelength parallel to the ambient field but of short perpendicular wavelength, a very well-known property. Moreover, $V_\\perp(z)$ may vary along the ambient field but, in the frame of the wave, which corresponds to a water-bag model, is constant in the perpendicular direction. Hence, of the above vector product just remains the variation of the magnetic field $b(x)$ over the distance between the two current sheets. This insight enables us to rewrite Poynting's equation as\n\\begin{equation}\n\\frac{\\partial b^2}{\\partial t}\\approx -\\mu_0\\,\\eta_{an}J_\\|^2-V_\\perp\\frac{\\partial b^2}{\\partial x}\n\\end{equation}\nwhich, assuming a stationary state, enables us to estimate the anomalous resistivity of stationary reconnection (in the wave frame) where the inflow of magnetic energy attributed by the current, i.e. the field-aligned electron flux whose origin is found in reconnection in the magnetotail, is balanced by anomalous energy transfer to the plasma in the region separating the two current sheets. 
Putting the left-hand side to zero we thus find that in this kind of topside reconnection the anomalous resistivity is bound from above as\n\\begin{equation}\n\\eta_{an}\\lesssim \\frac{4V_A}{\\sqrt{1+k_\\perp^2\\lambda_e^2}}\\frac{k_\\|}{k_\\perp}\\frac{b^2}{\\mu_0 \\Delta J_\\|^2}{\\approx \\frac{2V_Ab^2}{\\mu_0\\Delta J_\\|^2} \\frac{k_\\|^2\\lambda_e^2}{(1+k_\\perp^2\\lambda_e^2)^{3\/2}}}\n\\end{equation}\nwhere $\\Delta$ is the spatial separation of the two field-aligned current sheets, and we have taken into account that each of the two identical current layers contributes a field $b$. {The second part of this expression makes use of the perpendicular group speed. This resistivity is small as $k_\\|^2\\lambda_e^2\/\\Delta$ but finite. It gives rise to a finite diffusion coefficient that can be interpreted as an anomalous diffusivity for the ambient magnetic field in the auroral topside ionosphere, caused by topside reconnection between anti-parallel current sheets in the downward current region. We might note at this occasion that the restriction to the downward current region is motivated by the observation of narrow current sheets in the downward current region. Observations do not suggest that similarly narrow current sheets evolve in the upward current region. If this would be the case, the same arguments would apply there, causing reconnection and a similar anomalous resistivity.} \n\n\\begin{figure*}[t!]\n\\centerline{\\includegraphics[width=0.75\\textwidth,clip=]{fig4.jpg}}\n\\caption{Schematic of the field configuration between two parallel field aligned flux tubes in the downward current region. \\emph{Left}: Geometry along the ambient field. {Currents are in red. Included would be the (red dashed) fictitious return current which locally would correspond to the antiparallel wave magnetic fields $\\pm b$. This current would be local over the wavelength of the inertial Alfv\\'en wave. 
In any stationary reconnecting current picture it would be this current whose magnetic field reconnects. However, here this current does not exist in the exhaust. It is completely reconnected and gives rise to the reconnection electric field $E_z$ instead (dashed red) in the exhaust along the main magnetic field. Electrons are directly accelerated by it along $\\mathbf{B}_0$.} \\emph{Right}: Reconnection geometry with perpendicular velocity $\\mathbf{V}_\\perp$, field free exhaust, reconnection fields $\\pm b_x$ indicated, and $E_z$. {The anomalous collisions caused in the exhaust volume also permit for weak diffusion of the ambient field. This may cause what is believed to be magnetic field diffusion, a very slow process compared to the wave\/current induced spontaneous reconnection.}} \\label{fig4}\n\\end{figure*}\n\nWhat concerns the spatial separation of the current sheets (see Fig. \\ref{fig3}), the best available observations (FAST) do not resolve any single sheets; it can however be assumed that their scales are the order of or below the ion-inertial length, such that $\\Delta\\lesssim \\lambda_i\\sim$ several to many $\\lambda_e$. This may overestimate the real value but has been accounted for in writing the expression as an upper limit. Determination of the anomalous resistivity thus requires knowledge of the field aligned current density, current sheet separation, and the transverse magnetic field component of the sheet current. We then can estimate the anomalous collision frequency $\\nu_{an}= \\eta_{an}\\epsilon_0\\omega_e^2$ in this kind of reconnection\n\\begin{equation}\n\\nu_{an}\\lesssim \\frac{V_A\/\\alpha^2\\lambda_e}{\\sqrt{1+k_\\perp^2\\lambda_e^2}}\\frac{k_\\|\\Delta}{k_\\perp\\lambda_e }\n\\end{equation}\nwhere we used that $J_\\|\\approx 2b\/\\alpha\\Delta$. Note that $V_A=B_0\/\\sqrt{\\mu_0 m_i N}$ is based on the ambient magnetic field and plasma density. 
This simple estimate shows that reconnection in this case can, under stationary condition be described as being equivalent to a diffusive process based on the anomalous collision frequency which is provided by the merging of the transverse magnetic fields of the two neighbouring field-aligned current sheets. Since the related diffusivity is felt in the entire region it is remarkable that it could effect also the main ambient guide field. In other words, topside reconnection could become responsible for diffusion of the main magnetic field lines in a locally restricted domain possibly causing effects on a larger scale in the auroral region. \n\nReal reconnection will not occur between field-aligned current sheets of same strength. Thus the above resistivity respectively the collision frequency must be reduced by another factor proportional to the involved current and field fractions. \n\n\\subsection{Second step: Reconnection electric field}\n{So far we just investigated the energy balance in order to obtain an anomalous collision frequency in this kind of reconnection. Reconnection however manifests itself in X points generating transverse magnetic fields and in addition electric fields. Since there is no primary return current flowing, it cannot be used as input into the two-dimensional reconnection equation for the vector potential $A_z$\n\\begin{equation}\n\\nabla^2 A_z=-\\mu_0j_z(x), \\qquad \\nabla= (\\partial_x,\\partial_y,0), \\qquad j_z(x)= -2\\epsilon b(x)\/\\mu_0\\Delta\n\\end{equation}\nwithout prescribing the built-up of the central current profile $j_z(x)$, which is possible only when assuming that the $b$ is independent of $x$, in which case it provides the usual stationary tearing mode solution \\citep[see, e.g.,][]{schindler1974} rewritten for electrons alone. 
Under these simplifying restrictions the two components of the reconnected magnetic field including the X point are given by $\\mathbf{b}=(\\partial_yA_z,-\\partial_xA_z,0)$, which to refer to suffices for our qualitative considerations. The a priori assumption of a return current is, however, incorrect. On the topside there may weak local return currents exist filling the separations between the narrow downward current sheets, but the main return current flows in the upward current region and is distributed over a wide domain. Hence just a fraction $\\epsilon$ of return current can flow in the gap, as included in the last expression. The electric field in this case primarily has only one component, which is along the main field and is given by $E_z=-\\partial_t A_z-\\nabla U$ where $U$ is the scalar electrostatic gauge potential which may occur if an inhomogeneity exists or the system is not ideally symmetric. This field adds to the field aligned kinetic Alfv\\'en wave electric field and contributes to electron acceleration. It is the wanted reconnection electric field and can be much larger than the small linear wave electric field. Unfortunately its precise knowledge requires solution of the equation for the vector potential $A_z$ and some interpretation of the time derivative operator. The latter can be transformed into a spatial derivative $\\partial_t= \\pm\\mathbf{V}_\\perp\\cdot\\nabla$, still requiring the solution $A_z(x,y)$. }\n\n{The important conclusion in the case of topside reconnection is rather different from usual reconnection. It tells that the exhaust is, over half the wavelength of the inertial Alfv\\'en wave free of wave magnetic fields $b$, while being bounded by the reconnected wave fields $\\pm b_x$. 
The exhaust instead contains the reconnection electric field which, being along the main field, directly contributes to the acceleration or deceleration of electrons (and also ions) along the main magnetic field, one of the most important and still unresolved problems in auroral physics. There, acceleration is attributed to a variety of waves, ranging from kinetic Alfv\\'en waves through whistlers and several electrostatic waves to electron and ion holes. Except for the latter nonlinear structures, all wave electric fields are quite weak and, in addition, fluctuate. Acceleration thus becomes a second-order process.} \n\n{In the case of topside reconnection, a mesoscale first-order electric field $E_z$ is produced which directly accelerates particles, depending on its direction along the main field. Moreover, the source of the accelerated particles is the gap region between the two current sheets, the so-called exhaust, such that the kinetic Alfv\\'en wave electric field and the reconnection electric field barely interfere. Hence the full strength of the reconnection exhaust field acts to accelerate particles. One may thus conclude that topside reconnection, if it takes place, will substantially contribute to auroral particle acceleration. \n }\n\n{In order to circumvent the above-named difficulty of calculating $A_z$ and to obtain an estimate of the reconnection electric field, we may return to the induction equation in its integral form, where the electric field is given by the integral over the surface of the reconnection site\n\\begin{equation}\n\\oint \\mathbf{E}\\cdot d\\mathbf{s}=-\\frac{d\\Phi}{dt}= -\\frac{d}{dt}\\int\\mathbf{b}\\cdot d\\mathbf{F}\n\\end{equation}\nand the right-hand side is the exchange of magnetic flux in the reconnection process within the typical time $dt=\\tau_{rec}$. This time is not necessarily the same as the anomalous collision time. The magnetic flux is given by \n$\\Delta\\Phi\\approx 4\\pi b\\Delta\/k_\\|$. 
The line integral over the boundary of the reconnection site becomes $\\approx4\\pi E_z\/k_\\|+2\\Delta \\delta E_x$. Under ideally symmetric conditions the second term would vanish because the two contributions of the $x$ integration would cancel out. If some asymmetry is retained, a finite component $\\delta E_x$ arises. Taking these together yields, dimensionally (ignoring signs),\n\\begin{equation}\n4\\pi E_z\/k_\\|+2\\Delta \\delta E_x \\approx 4\\pi b\\Delta\/k_\\|\\tau_{rec}\n\\end{equation}\nNeglecting the small second term on the left then gives a simple order-of-magnitude estimate of the reconnection electric field\n\\begin{equation}\nE_z\\approx \\frac{b\\Delta}{\\tau_{rec}}\n\\end{equation}\nwhich could have been guessed from the beginning. This contains the reconnection time $\\tau_{rec}$, which so far is undetermined. It can be taken, for instance, as the anomalous collision time $\\tau_{an}=\\nu_{an}^{-1}$ derived above. Below we derive another characteristic time. Which one has to be chosen cannot be decided from these theoretical order-of-magnitude estimates; this must be settled by observation or numerical simulations.} \n\nThe small additional term $2\\Delta\\delta E_x =-U$ is a potential field produced by a possible asymmetry between the original current sheets or by some gradient in the particle density. Such a gradient can be produced if a substantial part of the electron component in the gap is accelerated away along the main field, causing a dilution of plasma in the exhaust. Being perpendicular to the magnetic fields $\\mathbf{B}_0$ and $\\mathbf{b}$, it leads to weak shear motions and circulation of the electrons inside the gap-exhaust region, which should be observationally detectable. \n\n\n\\subsection{Reconnection time}\nIn the above we have made use of the notion of a reconnection time $\\tau_{rec}$. Here we attempt to clarify this time. Topside reconnection will not be stationary. 
It should vary on the time scale of the kinetic Alfv\\'en frequency, moving together with the wave along the magnetic field. This motion should mainly be upward, since causality requires that the wave transports information back upward with the upward moving electrons in the downward current region. It will thus be modulated, lead to quasi-periodic acceleration, and generate medium-energy electron bursts ejected from the local electron exhaust reconnection region along the sheet current magnetic field. These bursts flow perpendicular to the ambient field, start gyrating, and immediately become scattered along the ambient field, spiralling mainly upward into the weak ambient field region. Their pitch-angle distribution should exhibit a well-defined downward loss-cone. \n\nWith the above estimate of the anomalous resistivity in this kind of reconnection, we can proceed to ask for the typical reconnection time scale. For this purpose we return to Poynting's full theorem and take its variation with respect to the stationary state, indexing the latter with 0 while keeping the slow perpendicular velocity $V_\\perp$ fixed but varying the resistivity. We need to express the parallel current through the resistivity. 
This can be done via the electric field $E_\\|$ to obtain\n\\begin{equation}\nJ_\\|^2=\\eta^{-2}E_\\|^2=\\eta^{-2}b^2V_\\perp^2\n\\end{equation}\nThis procedure, after some straightforward algebra and rearranging, leads to the following expressions\n\\begin{eqnarray}\n\\frac{d(\\delta b)^2}{dt}\\equiv\\Big(\\frac{\\partial}{\\partial t}+V_\\perp\\nabla_\\perp\\Big)(\\delta b)^2&=&- 2\\mu_0J_{\\|0}^2\\delta\\eta\\\\\n\\delta\\eta&=&-\\frac{V_\\perp}{\\mu_0J^2_{\\|0}\\Delta} (\\delta b)^2\n\\end{eqnarray}\nand we obtain dimensionally for the typical time of reconnection\n\\begin{equation}\n\\tau_{rec}\\sim\\frac{2\\Delta}{V_\\perp}\n\\end{equation}\nThis seems a trivial result, but it tells us that reconnection is a process which annihilates the excess magnetic field provided by the perpendicular inflow, under the condition that we are close to a stationary state. This time can be compared with the times of energy flow in the shear Alfv\\'en wave. Since clearly $V_\\perp\\approx\\partial\\omega\/\\partial k_\\perp$, one obtains\n\\begin{equation}\n\\tau_{rec}\\approx\\frac{2\\Delta}{V_A}\\frac{k_\\perp}{k_\\|}\\frac{\\big(1+k_\\perp^2\\lambda_e^2\\big)^\\frac{3}{2}}{k_\\perp^2\\lambda_e^2}\n\\end{equation}\na time whose length depends essentially on the spacing of the current sheets. Since $V_A$ is large, there will be a balance between the spacing and the domain of the kinetic Alfv\\'en wave spectrum which allows reconnection to occur in the topside. Let the vertical topside width be $L_z$ and the Alfv\\'en time $\\tau_A=L_z\/V_A$; then we have the condition \n\\begin{equation}\n\\frac{\\tau_{rec}}{\\tau_A}\\approx \\frac{2\\Delta}{L_z}\\frac{k_\\perp}{k_\\|}\\frac{\\big(1+k_\\perp^2\\lambda_e^2\\big)^\\frac{3}{2}}{k_\\perp^2\\lambda_e^2}< 1\n\\end{equation}\nfor reconnection to occur in topside parallel field-aligned current sheets. 
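For orientation, the last inequality can be evaluated numerically. All input values below are illustrative assumptions of ours (the text quotes no specific numbers): closely spaced sheets with $\Delta\sim 0.002\,L_z$ and a strongly oblique kinetic Alfv\'en wave with $k_\perp\/k_\|=30$ and $k_\perp\lambda_e\approx1.4$.

```python
def tau_ratio(delta_over_Lz, kperp_over_kpar, kperp_lambda_e):
    """Evaluate tau_rec/tau_A = (2*Delta/L_z)*(k_perp/k_par)
    * (1 + k_perp^2 lambda_e^2)^(3/2) / (k_perp^2 lambda_e^2)."""
    x2 = kperp_lambda_e ** 2
    return 2.0 * delta_over_Lz * kperp_over_kpar * (1.0 + x2) ** 1.5 / x2

# Illustrative (assumed) dimensionless inputs, chosen for orientation only.
r = tau_ratio(delta_over_Lz=0.002, kperp_over_kpar=30.0, kperp_lambda_e=1.4)
print(f"tau_rec/tau_A ~ {r:.2f} -> reconnection allowed: {r < 1.0}")
```

For these assumed values the ratio comes out well below unity, whereas widening the sheet spacing by a factor of five pushes it above one, illustrating how sensitively the condition depends on $\Delta\/L_z$.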
This essentially is a condition on the spacing $\\Delta$ of the sheets, meaning that \n\\begin{equation}\n\\frac{\\sqrt{32}\\,\\Delta}{L_z}< \\frac{k_\\|}{k_\\perp}=\\frac{\\lambda_\\perp}{\\lambda_\\|}\\ll 1\n\\end{equation}\nAny current sheet separation is strictly limited. Since it must be larger than the upward electron gyro-radius, we have $\\Delta>r_{ce}$. Both conditions are easily satisfied.\n\n\n\n\\subsection{Conclusions}\n{In the present letter we propose that reconnection might occur not only in given current sheets but also in the topside ionosphere-magnetosphere auroral transition region, where the main magnetic field is very strong, almost vertical, and directly connects to the tail reconnection region. It serves as a guide for any particle flow exchange between the topside ionosphere and the tail plasma sheet, for the exchange between low-frequency electromagnetic waves (in our case kinetic Alfv\\'en waves) trapped in flux tubes and the accompanying field-aligned current sheets, and ultimately as an inhibitor that keeps the parallel field-aligned current sheets from merging. This enables reconnection in the gap between the current sheets, between their oppositely directed magnetic fields, i.e., the kinetic Alfv\\'en wave magnetic fields.} \n\n{Dealing with reconnection, one is not primarily interested in the change of magnetic topology but in the transformation of magnetic into kinetic energy, the diffusion of plasma and magnetic field across the reconnection region, the generation of electric fields, and ultimately selective particle acceleration, as these are the observed effects. }\nThe generality of reconnection is not the best argument. The decades-old claim that reconnection converts magnetic energy into mechanical energy is no fundamental insight; in all processes involving reconnection, the main energy is stored in the basic mechanical motion and by no means in the magnetic field. 
This motion, convection in inhomogeneous media with boundaries, like the magnetotail or the magnetopause, or turbulence, necessarily produces currents and transports magnetic fields to bring them into contact. The amount of energy released by reconnection is in all cases just the minor electromagnetic part, a fraction of the mechanical energy. \n\n{Topside reconnection is expected predominantly in the downward current region, which observationally seems to be highly structured, consisting of several adjacent parallel current sheets. Similar conditions may also occur in the upward current region, though no such structuring is obvious from observations. If it exists, the physics will be similar. We have shown that topside reconnection is possible: it generates elongated field-aligned regions (exhausts) where the fields of parallel current sheets merge, anomalous collisions are generated, energy is exchanged and dissipated, and, most importantly, a first-order reconnection electric field $E_{rec}$ is produced in the exhaust along the ambient magnetic field but restricted to the gap region between the current sheets. This field is capable of accelerating electrons along the main field, as is much desired in auroral physics. Here it comes out as a natural result of topside reconnection.} Topside reconnection generates parallel electron beams; it lifts the escaping electrons in the exhaust to an elevated parallel energy level. These beams then cause a wealth of auroral effects in the environment and when impinging onto the upper ionosphere. Acceleration of electrons by the reconnection electric field leaves behind an electron-depleted exhaust mainly containing only an anisotropic electron component whose pitch-angle distribution peaks at perpendicular energies. \n\n{It is instructive to briefly inspect Fig. \\ref{fig3}. It shows the downward (upper panel) and upward (lower panel) electron fluxes. 
The fluxes are highly structured in time and space, still obeying the spatial differences between the downward and upward current regions imposed by the tail-source of the downward fluxes, which result from variations in tail-reconnection or from several tail-reconnection sites. In addition, one occasionally observes the simultaneous presence of upward and downward fluxes in the downward current region. One particular case is at $t\\approx 60$ s. The upward electron fluxes maximize below $\\sim0.1$ keV. Simultaneously, a banded flux of downward electrons with central energy $\\sim 0.3$ keV appears in the upper panel. This event is labeled flux mixing. It could also be understood as acceleration of electrons resulting from local reconnection in the gap between current sheets. }\n\nAside from acceleration, radiation generation may be taken as a signature of topside reconnection. Radiation is preferably generated by the electron cyclotron maser mechanism. It requires low electron densities, strong magnetic fields, and a rather particular particle distribution with excess energy in its component perpendicular to the ambient magnetic field \\citep{sprangle1977,melrose1985}. Such a state in dilute plasmas lacks sufficiently many electrons to re-absorb the spontaneously emitted radiation, while the excited state causes an inversion of the absorption coefficient. These conditions allow the plasma to become an emitter \\citep{twiss1958,schneider1959,gaponov1959} by the electron cyclotron maser mechanism \\citep{wu1979} based on a loss-cone distribution \\citep{louarn1996}. It requires weakly relativistic electrons \\citep[see][for reviews]{melrose1985,treumann2006} and a low-density electron background embedded in a strong field. It nicely accounts for the weak auroral kilometric background radiation but fails to explain the intense narrow-band drifting emission seen in panel $d$ of Fig. \\ref{fig2}. 
\n\nTo explain the latter, in earlier work we referred to electron hole formation \\citep{pottelette2005,treumann2011}. Hole models apply favourably to electron-depleted exhausts in topside reconnection, where densities become low \\citep[see, e.g.][]{treumann2013} and the remaining trapped electron component maximizes at perpendicular speeds, having large anisotropy. Intense narrow-band drifting emissions in the frequency range 300--600 kHz may be a signature of topside reconnection in the strong main auroral field. They were originally attributed to Debye-scale electrostatic electron holes \\citep{ergun1998b,pottelette1999} observed by Viking \\citep{deferaudy1987} and FAST \\citep{carlson1998,ergun1998a,pottelette2005}, but these are too small-scale to act as radiation sources. Topside reconnection exhausts instead have dimensions along the magnetic field of half a kinetic Alfv\\'en wavelength and transverse scales of a few ion inertial lengths $\\lambda_i$, or $\\sim 100\\lambda_e$. Such scales can host and amplify one or more radiation wavelengths.\n\nOf course, details of this process should be developed both analytically, as far as possible, and by numerical simulations. If confirmed, this mechanism would also map, with appropriate modification, to any moderately or strongly magnetized astrophysical object. \n\nThe present qualitative considerations, which we have spiced with a few simple estimates based on energy conservation arguments, propose that reconnection in the topside auroral ionosphere is a process which has so far been overlooked and is probably the mechanism that releases the largest amount of so-called magnetically stored energy, and from the smallest spatial regions. Reconnection in much weaker fields, as in turbulence and broad current sheets, will be substantially less efficient because of the weakness of the reconnecting magnetic fields. 
Nevertheless, in very large extended systems with reconnection proceeding on the microscales \\citep{treumann2015}, where the total number of reconnection regions is very large, the emission measure is large as well, and radiation from reconnection may become a non-negligible signature even in weak fields. However, in very strong fields like those of magnetized planets and magnetized stars (predominantly neutron stars and white dwarfs, but also including the outer atmospheres of magnetized stars like the Sun), reconnection, following our argumentation, may be more important than assumed so far. \n\n\n\\section*{Acknowledgments}\n This work was part of a brief Visiting Scientist Programme at the International Space Science Institute Bern. RT acknowledges the interest of the ISSI directorate as well as the generous hospitality of the ISSI staff, in particular the assistance of the librarians Andrea Fischer and Irmela Schweitzer, and the Systems Administrator Saliba F. Saliba. We acknowledge discussions with R. Nakamura and Y. Narita. RT acknowledges the cooperation with R. 
Pottelette two decades ago on the data reduction and the radiation and electron hole problems.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nCollections of interacting self-motile objects\nfall into the class of systems known as active\nmatter \\cite{1,2},\nwhich can be biological in nature\nsuch as swimming bacteria \\cite{3} or animal herds \\cite{4},\na social system such as pedestrian or traffic flow \\cite{5},\nor a robotic swarm \\cite{6,7}.\nThere are also a wide range of artificial active matter\nsystems such as self-propelled colloidal particles \\cite{8,9,10}.\nStudies of these systems have generally focused on the case\nwhere the motile objects interact with either a smooth or\na static substrate; however, the field is now advancing to a point where\nit is possible to ask\nhow such systems behave\nin more complex static or dynamic environments.\n\nOne subclass of active systems\nis a collection of interacting disks that undergo either run-and-tumble \\cite{11,12} or\ndriven diffusive \\cite{13,14,15} motion.\nSuch systems\nhave been shown to exhibit a transition\nfrom a uniform density liquid state\nto a motility-induced phase separated state\nin which the disks form dense clusters surrounded\nby a low density gas phase \\cite{9,10,11,12,13,14,15,16,17,18}.\nRecently it was shown that when phase-separated run-and-tumble disks are\ncoupled to a random pinning substrate, a transition to a uniform density liquid state\noccurs as a function of the maximum force exerted by the substrate \\cite{19}.\nIn other studies of run-and-tumble disks driven over an obstacle array by a dc driving force,\nthe onset of clustering coincides with a drop in the net disk transport since a large cluster\nacts like a rigid object that can only move through the obstacle array with difficulty; in\naddition, it was shown that the disk transport was maximized at an optimal\nactivity level or disk running time \\cite{20}.\nStudies of 
flocking or swarming disks that obey modified Vicsek models of\nself-propulsion \\cite{21}\ninteracting with obstacle arrays\nindicate \nthat there is an optimal\nintrinsic noise level at which collective swarming occurs \\cite{22,23},\nand that transitions between swarming and non-swarming states can occur as\na function of increasing substrate disorder \\cite{24}.\nThe dynamics in such swarming models differ from those of\nthe active disk systems, so it is not clear whether the same behaviors will occur across\nthe two different systems.\n\nA number of studies have already considered\nactive matter such as bacteria or run-and-tumble disks\ninteracting with periodic obstacle arrays \\cite{25} or\nasymmetric arrays \\cite{26,27,28,29}.\nSelf-ratcheting behavior occurs for the asymmetric arrays\nwhen the combination of broken detailed balance and the substrate asymmetry\nproduces\ndirected or ratcheting motion of the active matter particles\n\\cite{30,31},\nand it is even possible to couple passive particles to the active matter particles\nin such arrays in order to shuttle cargo across the sample \\cite{29}.\nIn the studies described above, the substrate is static, and external driving\nis introduced via fluid flow or chemotactic effects; however, it is also possible\nfor the substrate itself to be dynamic, such as in the case of\ntime dependent optical traps \\cite{32,33} or a traveling wave substrate.\nTheoretical and experimental studies of colloids in traveling wave potentials\nreveal a rich variety of dynamical phases,\nself-assembly behaviors, and directed transport \\cite{34,35,36,37,38,39,40}.\n\nHere we examine a two-dimensional system of run and tumble active\nmatter disks that can exhibit motility induced phase separation\ninteracting with a periodic quasi-one dimensional (q1D) traveling wave substrate.\nIn the low activity limit, the substrate-free system forms a\nuniform liquid state, while in the presence of a substrate,\nthe disks are readily trapped 
by the substrate minima\nand swept through the system by the traveling wave.\nAs the activity increases, a partial decoupling transition of the disks and the substrate\noccurs, producing a drop in the net effective transport. This transition is correlated\nwith the onset of the phase separated state,\nin which the clusters act as large scale composite objects that cannot be transported\nas easily as individual disks by the traveling wave.\nWe also find that the net disk transport is optimized at particular\ntraveling wave speeds, disk run length, and substrate strength.\nIn the phase separated state we observe an interesting effect where\nthe center of mass of each cluster moves in the direction opposite to that in which\nthe traveling wave is moving, and we also find reversals\nto states in which the clusters and the traveling wave move in the same direction.\nThe reversed motion of the clusters arises due to asymmetric growth and shrinking rates\non different sides of the cluster.\nThe appearance of backward motion of the cluster center of mass\nsuggests that certain biological or social active systems can move against biasing\ndrifts by forming large collective objects or swarms.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig1.png}\n\\caption{Schematic of the system.\n Red spheres represent the active run and tumble disks in a two-dimensional system\n interacting with a periodic q1D traveling wave potential\n which is moving in the positive $x$-direction (arrow) with a wave speed of $v_{w}$.\n}\n\\label{fig:1}\n\\end{figure}\n\n\n\\section{Simulation}\nWe model a two-dimensional system of $N$ run and tumble disks\ninteracting with a q1D traveling wave periodic substrate,\nas shown in the schematic in Fig.~\\ref{fig:1} where the substrate\nmoves to the right at a constant velocity $v_{w}$.\nThe dynamics of each disk is governed by the following overdamped equation of motion:\n\\begin{equation}\n\\eta \\frac{d{\\bf r}_i}{dt} = {\\bf F}^{\\rm inter}_i + 
{\\bf F}^m_i + {\\bf F}^{s}_i ,\n\\end{equation}\nwhere the damping constant is $\\eta = 1.0$.\nThe disk-disk repulsive interaction force ${\\bf F}^{\\rm inter}_i$\nis modeled as a harmonic spring,\n${\\bf F}^{\\rm inter}_i=\\sum_{j\\neq i}^N\\Theta(2R-d_{ij})k(2R-d_{ij}){\\bf \\hat d}_{ij}$,\nwhere $R=1.0$ is the disk radius, $d_{ij}=|{\\bf r}_i-{\\bf r}_j|$ is the distance between\ndisk centers, ${\\bf \\hat d}_{ij}=({\\bf r}_i-{\\bf r}_j)\/d_{ij}$, and the spring constant\n$k=20.0$\nis large enough to prevent significant disk-disk overlap under the conditions we\nstudy\nyet small enough to permit a computationally efficient time step\nof $\\delta t=0.001$ to be used.\nWe consider a sample of size $L \\times L$ with $L=300$, and describe the disk density in\nterms of the area coverage\n$\\phi = N\\pi R^2\/L^2$.\nThe run and tumble self-propulsion is modeled with a motor force ${\\bf F}^{m}_i$\nof fixed magnitude $F^m=1.0$ that acts in a randomly chosen direction during\na run time of $\\tilde{t}_{r}$.\nAfter this run time, the motor force instantly reorients into a new\nrandomly chosen direction for the next run time.\nWe take $\\tilde{t}_r$ to be uniformly distributed over the range\n[$t_r,2t_r$], using run times ranging from $t_r=1 \\times 10^3$ to $t_r=3 \\times 10^5$.\nFor convenience we describe \nthe activity in terms of the run length $r_l=F^mt_r\\delta t$,\nwhich is the distance a disk would move during a single run time\nin the absence of a substrate or other disks.\nThe substrate is modeled as a time-dependent\nsinusoidal force $F^s_i(t)=A_{s}\\sin(2\\pi (x_i-v_w t)\/a)$ acting in the $x$ direction,\nwhere $A_s$ is the substrate strength, $a$ is the substrate periodicity, and $x_i$ is the $x$ position of disk $i$. 
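As a minimal sketch of these dynamics, the overdamped run-and-tumble motion on the traveling-wave substrate can be integrated for a single disk. This is our own simplified reading, not the full simulation: disk-disk interactions are omitted, the substrate force is taken along $x$ with period $a=15$, one $(A_s, v_w)$ pair from the text is used, and the seed and run duration are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)            # seed is an arbitrary choice
eta, Fm, dt, L, a = 1.0, 1.0, 0.001, 300.0, 15.0
A_s, v_w = 2.0, 0.6                       # one parameter set from the text
t_r = 10_000                              # base run time, in time steps

x, y = rng.uniform(0, L, size=2)
theta, steps_left, x_unwrapped = 0.0, 0, 0.0
n_steps = 50_000
for step in range(n_steps):
    if steps_left == 0:                   # tumble: new direction, new run time
        theta = rng.uniform(0.0, 2.0 * np.pi)
        steps_left = int(rng.uniform(t_r, 2 * t_r))
    steps_left -= 1
    Fs = A_s * np.sin(2.0 * np.pi * (x - v_w * step * dt) / a)
    vx = (Fm * np.cos(theta) + Fs) / eta  # overdamped: velocity = force / eta
    vy = Fm * np.sin(theta) / eta
    x, y = (x + vx * dt) % L, (y + vy * dt) % L   # periodic boundaries
    x_unwrapped += vx * dt

V = x_unwrapped / (n_steps * dt)          # single-disk analogue of <V>
print(f"drift velocity V = {V:.2f} (wave speed v_w = {v_w})")
```

Since here $v_w + F^m \le A_s$, the lone disk remains locked to a substrate minimum for every motor direction, and the measured drift comes out close to $v_w$; the collective declustering effects discussed below require the omitted disk-disk forces.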
We take\na substrate periodicity of $a = 15$ so that the system contains\n$20$ minima.\nThe substrate travels at a constant velocity of $v_{w}$ in the positive $x$-direction.\nWe measure the average drift velocity of the disks in the direction of the traveling wave,\n$\\langle V\\rangle = N^{-1}\\sum^{N}_{i}{\\bf v}_{i}\\cdot {\\hat {\\bf x}}$.\nWe vary the run length, substrate strength, disk density, and wave speed.\nIn each case \nwe wait for a fixed time\nof $5 \\times 10^6$ simulation time steps\nbefore taking measurements to avoid any transient effects.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig2.png}\n\\caption{(a) The average velocity per disk $\\langle V\\rangle$\n vs wave speed $v_{w}$ for a system with\n $N=13000$ disks, $\\phi = 0.45376$, and\n $r_l = 300$ for varied substrate strengths of $A_s=0.5$ to 3.0.\n The dashed line indicates the limit in which all the disks move at the wave\n speed, $\\langle V\\rangle = v_{w}$.\n (b) The corresponding number $\\tilde{C}_L$ of disks that are in a cluster\nvs wave speed. \nThe inset shows the regions of cluster and non-cluster (NC) states\nas a function of $v_{w}$ vs $A_{s}$. \n(c) The number $\\tilde{P}_6$ of sixfold-coordinated\ndisks vs wave speed $v_w$.\n}\n\\label{fig:2}\n\\end{figure}\n\n\\section{Results}\nIn Fig.~\\ref{fig:2}(a) we plot the average velocity per disk\n$\\langle V\\rangle$ versus wave speed $v_{w}$ at different substrate strengths\n$A_{s}$ for a system containing $N=13000$ active disks,\ncorresponding to $\\phi = 0.45376$,\nat $r_l = 300$, a running length at which the substrate-free system\nforms a phase separated state.\nThe number of disks that are in\nthe largest cluster, $\\tilde{C}_L$, serves as an effective measure\nof whether the system is in a phase separated state or not. We measure $\\tilde{C}_L$\nusing the cluster identification algorithm described in Ref.~\\cite{Hermann}, and call\nthe system phase separated when $\\tilde{C}_L\/N>0.55$. 
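The text measures $\tilde{C}_L$ with the cluster identification algorithm of its Ref.~[Hermann]; for modest $N$, an equivalent count can be obtained with a plain breadth-first search over the contact graph of disks whose centers lie within $2R$ under periodic boundaries. The following $O(N^2)$ sketch is our own, not the paper's implementation:

```python
import numpy as np
from collections import deque

def largest_cluster(pos, R=1.0, L=300.0):
    """Size of the largest cluster of contacting disks (centers closer
    than 2R under periodic boundaries), via BFS on the contact graph."""
    N = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                    # minimum-image convention
    touch = (d ** 2).sum(-1) < (2.0 * R) ** 2   # adjacency (incl. self)
    seen, best = np.zeros(N, dtype=bool), 0
    for i in range(N):
        if seen[i]:
            continue
        queue, size = deque([i]), 0
        seen[i] = True
        while queue:
            j = queue.popleft()
            size += 1
            for k in np.flatnonzero(touch[j] & ~seen):
                seen[k] = True
                queue.append(k)
        best = max(best, size)
    return best

# Toy check: two touching disks plus one far away -> largest cluster is 2.
pts = np.array([[10.0, 10.0], [11.5, 10.0], [100.0, 100.0]])
CL = largest_cluster(pts)   # phase separated when CL / N > 0.55
```

For the $N=13000$ disks of the actual runs, a cell list or k-d tree would replace the $O(N^2)$ distance matrix, but the clustering criterion is the same.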
In Fig.~\\ref{fig:2}(b) we plot\n$\\tilde{C}_L$ versus $v_w$ at varied $A_s$,\nand in Fig.~\\ref{fig:2}(c) we show the corresponding number of sixfold-coordinated\ndisks, $\\tilde{P}_6=\\sum_i^N\\delta(z_i-6)$, where $z_i$ is the coordination number of\ndisk $i$ determined from a Voronoi construction \\cite{cgal}. In phase separated states,\nmost of the disks within a cluster have $z_i=6$ due to the triangular ordering of the\ndensely packed state.\nIn Fig.~\\ref{fig:2}(a), the linearly increasing dashed line denotes\nthe limit in which all the disks\nmove with the substrate so that $\\langle V\\rangle = v_{w}$ .\nAt $A_{s} = 3.0$, $\\langle V\\rangle$ initially increases linearly, following the dashed\nline, up to $v_{w} = 1.25$, \nindicating that there is a complete locking of the\ndisks to the substrate.\nFor $v_w > 1.25$, there is a slipping process in which the disks\ncannot keep up with the traveling wave and jump to the next well.\nA maximum in $\\langle V\\rangle$ appears near $v_{w} = 2.0$,\nand there is a sharp drop in $\\langle V\\rangle$ near $v_{w} = 5.0$,\nwhich also coincides with a sharp increase in $\\tilde{C}_L$ and $\\tilde{P}_{6}$.\nThe $\\langle V\\rangle$ versus $v_{w}$ curves for $A_{s} > 1.0$\nall show similar trends, with a sharp drop\nin $\\langle V\\rangle$ accompanied by an increase in\n$\\tilde{C}_{L}$ and $\\tilde{P}_{6}$, showing that the onset of clustering\nresults in a sharp decrease in $\\langle V\\rangle$.\nFor $A_{s} \\leq 1.0$, the substrate is weak enough that the system\nremains in a cluster state even at $v_{w} = 0$,\nindicating that a transition from a cluster to a non-cluster state\ncan also occur as a function of substrate strength.\nIn the inset of Fig.~\\ref{fig:2}(b) we show the regions in which\nclustering and non-clustering states appear as a function of $v_w$ versus $A_s$.\nAt $v_{w} = 0$, there is a substrate-induced transition from a\ncluster to a non-cluster state near $A_{s} = 1.0$,\nwhile for higher $A_s$, the 
location of the transition shifts linearly to higher $v_w$\nwith increasing $A_s$.\nSince the motor force is $F^m=1.0$,\nwhen $A_{s} < 1.0$ individual disks\ncan escape from the substrate minima,\nso provided that $r_{l}$ is large enough, the disks can freely move\nthroughout the entire system and form a cluster state.\nFor $A_{s} > 1.0$, the disks are confined by the substrate minima,\nbut when $v_w$ becomes large enough, the disks can readily escape\nthe minima and again form a cluster state.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig3.png}\n\\caption{(a) $\\langle V\\rangle\/v_{w}$\n vs $A_{s}$ for the system in Fig.~\\ref{fig:2}\n at $v_{w} = 0.6$.\n (b) The corresponding normalized $C_{L}$ showing\n that the transition from a cluster to a non-cluster state\n coincides with an increase in $\\langle V\\rangle\/v_w$.\n (c) $\\langle V\\rangle\/v_{w}$ vs $A_s$ for the same system with $v_w=2.0$\n where the cluster to non-cluster transition occurs at a higher value of $A_{s}$.\n (d) The corresponding normalized $C_{L}$ vs $A_s$.\n}\n\\label{fig:3}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig4.png}\n\\caption{ The real space positions of the active disks for the system in Fig.~\\ref{fig:3}(a,b)\n with $v_{w} = 0.6$.\n (a) At $A_{s} = 0.75$, a phase separated state appears.\n(b) At $A_{s} = 2.5$, the disks are strongly localized in the substrate minima and move\nwith the substrate.\n}\n\\label{fig:4}\n\\end{figure}\n\nTo highlight the correlation between the changes in\nthe transport and the onset of clustering, in\nFig.~\\ref{fig:3}(a,b)\nwe plot $\\langle V\\rangle\/v_{w}$ and the normalized\n$C_{L}=\\tilde{C}_L\/N$ versus $A_{s}$ at a fixed value of $v_{w} = 0.6$\nfrom the system in Fig.~\\ref{fig:2}.\nHere\nthe cluster to non-cluster transition occurs at $A_{s} = 1.25$,\nas indicated by the drop in $C_{L}$\nwhich also coincides with a jump in $\\langle V\\rangle\/v_{w}$.\nFor this value of $v_{w}$, a complete locking between 
the disks and the traveling\nwave occurs for $A_{s} \\geq 3.0$, where\n$\\langle V\\rangle\/v_{w} = 1.0$.\nIn Fig.~\\ref{fig:3}(c,d) we plot $\\langle V\\rangle\/v_w$ and $C_L$ versus $A_s$ for the\nsame system at\n$v_{w} = 2.0$, where the cluster to non-cluster transition\noccurs at a higher value of $A_{s} = 2.0$.\nThis transition again coincides with a sharp\nincrease in $\\langle V\\rangle\/v_{w}$.\nIn Fig.~\\ref{fig:4}(a) we show images of the disk configurations for\nthe system in Fig.~\\ref{fig:3}(a,b) with $v_w=0.6$ at\n$A_{s} = 0.75$,\nwhere the disks form a cluster state,\nwhile in Fig.~\\ref{fig:4}(b), at $A_s=2.5$ in the same system,\nthe clustering is lost and the disks are\nstrongly trapped in the substrate minima, forming\nchain like states that move with the substrate.\nThese results indicate that the clusters act as composite objects that\nonly weakly couple to the substrate.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig5.png}\n\\caption{(a) $\\langle V\\rangle$ vs $r_{l}$\n in samples with $A_{s} = 2.0$ and $\\phi = 0.453$ for varied $v_{w}$ from\n $v_w=0.25$ to $v_w=6.0$.\n(b) The corresponding $\\tilde{C}_{L}$ vs $r_{l}$.}\n\\label{fig:5}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig6.png}\n\\caption{A sample with $A_s=2.0$ and $\\phi=0.453$\n for $r_{l} = 5$ (red squares) and $r_l=200$ (blue circles).\n (a) $\\langle V\\rangle$ vs $v_{w}$.\n(b) $C_{L}$ vs $v_{w}$.\n}\n\\label{fig:6}\n\\end{figure}\n\nWe next examine the case with a fixed\nsubstrate strength of $A_{s} = 2.0$ and varied $r_{l}$.\nFigure~\\ref{fig:5}(a) shows\n$\\langle V\\rangle$ versus $r_{l}$\nfor $v_{w}$ values ranging from $v_w=0.25$\nto $v_w=6.0$, and Fig.~\\ref{fig:5}(b) shows the corresponding $\\tilde{C}_{L}$\nversus $r_{l}$.\nFor $v_{w} < 3.0$ the system remains in a non-cluster state for all values of $r_{l}$, while\nfor $v_{w} \\geq 3.0$ there is a transition\nfrom a non-cluster to a cluster state\nwith increasing $r_l$ as indicated by 
the simultaneous drop in\n$\\langle V\\rangle$ and increase in $\\tilde{C}_{L}$.\nIn Fig.~\\ref{fig:6}(a,b) we plot $\\langle V\\rangle$ and\n$C_{L}$ versus $v_{w}$ at $A_{s} = 2.0$\nfor $r_{l} = 200$ and $r_l=5.0$.\nThe system is in a non-cluster state for all $v_w$ when\n$r_{l} = 5.0$,\nand there is a peak in $\\langle V\\rangle$ near $v_{w} = 1.0$,\nwhile for $r_{l} = 200$ there is a transition to a cluster state\nclose to $v_{w} = 3.0$\nwhich coincides with a drop in $\\langle V\\rangle$ that is much sharper than the decrease\nin $\\langle V\\rangle$ with increasing $v_w$ for the $r_{l} = 5$ system.\nIn general, when $r_{l}$ is small, the net transport of disks through the sample is\ngreater than in samples with larger $r_l$.\nThe fact that the net disk transport varies with varying $r_l$ suggests\nthat traveling wave substrates could be used as a method for separating\ndifferent types of active matter, such as clustering and non-clustering species.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig7.png}\n\\caption{\n (a) $\\langle V\\rangle$ vs $v_{w}$ at $\\phi = 0.56$ and\n $r_{l} = 300$ for varied $A_{s} = 0.5$ to $A_s=4.0$.\n(b) The corresponding $\\tilde{C}_{L}$ vs $v_{w}$.\n(c) $\\langle V\\rangle$ vs $\\phi$ for $A_{s} = 2.5$, $r_{l} = 300$ and $v_{w} = 2.0$.\n (d) The corresponding\n $\\tilde{C}_{L}$ (blue circles) and $\\tilde{P}_{6}$ (red squares) vs $\\phi$\n where the onset of clustering occurs\n near $\\phi = 0.6$ at the same point for which there is a drop in\n $\\langle V\\rangle$ in panel (c).\n}\n\\label{fig:7}\n\\end{figure}\n\nWhen we vary the disk density $\\phi$ while holding $r_l$ fixed, we find\nresults similar to those described above.\nIn Fig.~\\ref{fig:7}(a) we plot\n$\\langle V\\rangle$ versus $v_{w}$\nat $\\phi = 0.56$ for varied $A_{s}$ from $A_s=0.5$ to $A_s=4.0$, where we find\na similar trend in which\n$\\langle V\\rangle$ increases with increasing wave speed\nwhen the disks are strongly coupled to the substrate.\nA 
transition to a cluster state occurs at higher $v_w$ as shown in\nFig.~\\ref{fig:7}(b) where we plot $\\tilde{C}_{L}$ versus $v_{w}$ for the same\nsamples. The increase in $\\tilde{C}_{L}$ at the cluster state onset\ncoincides\nwith a drop in $\\langle V\\rangle$.\nIn Fig.~\\ref{fig:7}(c) we plot $\\langle V\\rangle$ versus $\\phi$\nfor a system with fixed $v_{w} = 2.0$, $A_{s} = 2.5$, and $r_{l} = 300$, while\nin Fig.~\\ref{fig:7}(d) we show the corresponding\n$\\tilde{C}_{L}$ and $\\tilde{P}_{6}$ versus $\\phi$.\nA transition from the\nnon-cluster to the cluster state occurs\nnear $\\phi=0.6$, which correlates with a sharp drop in $\\langle V\\rangle$ and\na corresponding increase\nin $\\tilde{C}_{L}$ and $\\tilde{P}_{6}$.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig8.png}\n\\caption{\n The disk positions on the traveling wave substrate for the system in\n Fig.~\\ref{fig:7}(a) at $\\phi = 0.56$. Colors indicate disks belonging to\n the five largest clusters.\n (a) Complete locking\n at $A_{s} = 4.0$ and $v_{w} = 1.0$,\n where the transport efficiency is $\\langle V\\rangle\/v_{w} = 0.998$.\n (b) Partial locking\n at $A_{s} = 2.0$ and $v_{w} = 1.5$, \n where $\\langle V\\rangle\/v_{w} = 0.41$.\n (c) Weak locking\n at $A_{s} = 1.0$ and $v_{w} = 0.6$ \n with $\\langle V\\rangle\/v_{w} = 0.078$.\n}\n\\label{fig:8}\n\\end{figure}\n\nIn Fig.~\\ref{fig:8}(a) we show the disk configurations from the system in\nFig.~\\ref{fig:7}(a) at $A_{s} = 4.0$ and $v_{w} = 1.0$.\nHere $\\langle V\\rangle\/v_{w} = 0.998$,\nindicating that the disks are almost completely locked with\nthe traveling wave motion\nand there is little to no slipping of the disks out of the substrate minima. 
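The transport efficiencies quoted in these panels are simple normalized drift velocities; a minimal sketch of such a measurement (array names and shapes are our assumptions, not the authors' code):

```python
import numpy as np

def transport_efficiency(vx, v_w):
    """Normalized transport efficiency <V>/v_w, where <V> is the
    x-velocity averaged over all disks and all sampled time steps."""
    return np.asarray(vx).mean() / v_w

# Fully locked disks move with the wave, so the efficiency is 1;
# disks that slip back out of the minima lower the average.
vx = np.full((1000, 50), 1.0)              # (time steps, disks), all at v_w
print(transport_efficiency(vx, v_w=1.0))   # 1.0
```
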
\nIn Fig.~\\ref{fig:8}(b), the same system at\n$A_{s} = 2.0$ and $v_{w} = 1.5$\nhas a transport efficiency of $\\langle V\\rangle\/v_{w} = 0.41$.\nNo clustering occurs but there are numerous disks that \nslip as the traveling wave moves.\nAt $A_{s} = 1.0$ and $v_{w} = 0.6$ in\nFig.~\\ref{fig:8}(c)\nthere is a low transport efficiency of\n$\\langle V\\rangle\/v_w=0.078$.\nThe system forms a cluster state and smaller numbers of individual disks\noutside of the cluster are transported by the traveling wave.\n\n\\section{Forward and Backward Cluster Motion}\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig9.png}\n\\caption{\n The center of mass $X_{\\rm COM}$ location of a cluster vs\n time in simulation time steps for a system with $\\phi = 0.454$ and $r_{l} = 300$.\n(a) At $A_{s} = 1.25$ and $v_{w} = 0.6$, the cluster moves in the negative $x$-direction,\n against the direction of the traveling wave.\n (b) At $A_{s} = 0.5$ and $v_{w} = 4.0$, the cluster\n is stationary.\n (c) At $A_{s} = 3.0$ and $v_{w} = 7.0$, the cluster moves in the positive\n $x$-direction, with the traveling wave.\n The dip indicates the point at which the center of mass\n passes through the periodic boundary conditions.\n}\n\\label{fig:9}\n\\end{figure}\n\nIn general, we find that when the traveling wave is moving in the positive $x$-direction,\n$\\langle V\\rangle > 0$; however, within the\ncluster phase,\nthe center of mass motion of a cluster can be\nin the positive or negative $x$ direction or the cluster can be almost stationary.\nBy using the cluster algorithm we can track the $x$-direction motion of the\ncluster center of mass $X_{\\rm COM}$ over fixed time periods,\nas shown in Fig.~\\ref{fig:9}(a) for a system with\n$\\phi = 0.454$, $r_{l} = 300$, $A_{s} = 1.25$, and $v_{w} = 0.6$.\nDuring the course of $6\\times 10^6$ simulation time steps\nthe cluster moves in the negative $x$-direction a distance of\n$235$ units, corresponding to a space containing 16 potential minima.\nEven 
though the net disk flow is in the positive\n$x$ direction, the cluster itself drifts in the negative $x$ direction.\nIn Fig.~\\ref{fig:9}(b) at $A_{s} = 0.5$ and $v_{w} = 4.0$,\nthe disks are weakly coupled to the substrate\nand the cluster is almost completely stationary.\nFigure~\\ref{fig:9}(c) shows that at $A_{s} = 3.0$ and $v_{w} = 7.0$,\nthe cluster center of mass motion is now\nin the positive $x$ direction, and the cluster\ntranslates a distance equal to almost $20$ substrate minima\nduring the time period shown.\nThe apparent dip in the center of mass motion\nis due to the periodic boundary conditions.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig10.png}\n\\caption{\n Height field of the direction and magnitude of the center of mass motion $V_{\\rm COM}$\n as a function of $A_s$ vs $v_w$ for the cluster\n obtained after $4\\times 10^6$ simulation steps.\n The gray area indicates a regime in which there is no cluster formation.\n}\n\\label{fig:10}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Fig11.png}\n\\caption{ The disk positions for the system in Figs.~\\ref{fig:9} and\n ~\\ref{fig:10}.\n (a)\n At $A_{s} = 1.0$ and $v_{w} = 0.4$, the cluster drifts in the negative $x$-direction.\n (b)\n At $A_{s} = 3.0$ and $v_{w} = 5.0$, \n the cluster drifts in the positive $x$-direction.\n}\n\\label{fig:11}\n\\end{figure}\n\n\nWe have conducted a series of simulations and measured\nthe direction and amplitude $V_{\\rm COM}$ of the center\nof mass motion, as plotted\nin Fig.~\\ref{fig:10} as a function of $A_{s}$ versus wave speed\nfor the system in Fig.~\\ref{fig:7}.\nThe gray area indicates a region in which clusters do not occur,\nand in general we find that the negative cluster motion occurs\nat lower wave speeds while the positive motion occurs\nfor stronger substrates and higher wave speeds.\nThere are two mechanisms that control the cluster center of mass motion.\nThe first is the motion of the substrate itself,\nwhich drags the 
cluster in the positive $x$ direction, and\nthe second is the manner in which the cluster grows or shrinks\non its positive $x$ and negative $x$ sides.\nAt lower substrate strengths and low wave speeds,\nthe disks in the cluster are weakly coupled to the substrate\nso the cluster does not move with the substrate.\nIn this case the disks can leave or\njoin the cluster anywhere around its edge; \nhowever, disks tend to join the cluster at a higher rate on its\nnegative $x$ side since individual disks, driven by the moving\nsubstrate, collide with the negative $x$ side of the cluster and can become\ntrapped in this higher density area. The positive $x$ side of the cluster\ntends to shed disks at a higher rate since the disks can be carried\naway by the moving substrate into the low density gas region.\nThe resulting asymmetric growth rate causes the cluster to drift in\nthe negative $x$ direction.\nThere is a net overall transport of disks in the positive $x$ direction due to the\nlarge number of gas phase disks outside of the cluster region which follow\nthe motion of the substrate.\nFigure~\\ref{fig:11}(a) shows the\ndisk positions at $A_{s} = 1.0$ and $v_{w} = 0.4$, where the cluster\nis drifting in the negative $x$ direction. \nFor strong substrate strengths, all the disks that are outside of the cluster become\nstrongly confined in the q1D substrate minima,\nand the disk density inside the cluster itself starts to become modulated by the substrate.\nUnder these conditions, the cluster is dragged along with the traveling substrate\nin the positive $x$ direction, as illustrated\nin Fig.~\\ref{fig:11}(b) for $A_{s} = 3.0$ and $v_{w} = 5.0$. 
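The dips in the center-of-mass traces described earlier are bookkeeping artifacts of the periodic boundaries. One standard way to avoid them (a sketch under our own assumptions, not necessarily the authors' implementation) is to compute the cluster center of mass with a circular mean:

```python
import numpy as np

def periodic_com(x, L):
    """Center of mass of x-coordinates on a periodic domain of length L.
    Mapping positions to angles and averaging on the circle handles
    clusters that straddle the boundary without a spurious jump."""
    theta = 2.0 * np.pi * np.asarray(x) / L
    mean_angle = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    return (mean_angle % (2.0 * np.pi)) * L / (2.0 * np.pi)

# A cluster wrapped across the boundary of an L=10 box:
print(periodic_com([9.8, 9.9, 0.1, 0.2], L=10.0))
# close to 0 (equivalently L), whereas a naive mean would give 5.0
```

Tracking this quantity over successive frames, plus unwrapping jumps larger than L/2, yields a continuous trajectory for $X_{\rm COM}$.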
\nThese results\nsuggest that it may be possible for\ncertain active matter systems to \ncollectively form a cluster state in order to \nmove against an external bias\neven when isolated individual particles on average move with the\nbias.\n\n\n\\section{Summary}\n\nWe have examined run and tumble active matter disks interacting with\ntraveling wave periodic substrates.\nWe find that\nin the non-phase separated state,\nthe disks couple to the traveling waves,\nand that at the transition\nto the cluster state,\nthere is a partial decoupling from the substrate and the net transport of disks by\nthe traveling wave is strongly reduced.\nWe also find a transition from a cluster\nstate to a periodic quasi-1D liquid\nstate for increasing substrate strength,\nas well as a transition back to a cluster state\nfor increasing traveling wave speed.\nWe show that there is a transition\nfrom a non-cluster to a cluster state as a function of increasing\ndisk density which is correlated with a drop in the net disk transport.\nSince disks with different run times drift with different velocities,\nour results indicate that traveling wave substrates\ncould be an effective method for separating active matter particles with different mobilities.\nWithin the regime in which the system forms a cluster state,\nwe find that as a function of wave speed and substrate strength,\nthere are weak substrate regimes where the center of mass of the\ncluster moves in the opposite direction from that of the traveling wave,\nwhile for stronger substrates,\nthe cluster center of mass moves in the same direction as the traveling wave.\nThe reversed cluster motion occurs due to the\nspatial asymmetry of the rate at which disks leave or join the cluster.\nThis suggests that collective clustering could be an effective method\nfor forming an emergent object that\ncan move against gradients or drifts even\nwhen individual disks on average move with the drift.\n\n\\acknowledgments\nThis work was carried out 
under the auspices of the \nNNSA of the \nU.S. DoE\nat \nLANL\nunder Contract No.\nDE-AC52-06NA25396.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the past few years, the field of text generation has witnessed many significant advances, including but not limited to neural machine translation \\cite{Transformer:17,gehring2017convolutional}, dialogue systems \\cite{liu-etal-2018-knowledge,zhang2019memory} and text generation \\cite{clark-etal-2018-neural,guo2018long}.\nBy utilizing the power of the sequence-to-sequence (S2S) framework \\cite{Sutskever:14}, generation models can predict the next token based on the previously generated outputs and contexts.\nHowever, S2S models are not perfect.\nOne of the obvious drawbacks is that S2S models tend to be short-sighted on long context and are unaware of global knowledge.\nTherefore, how to incorporate global or local knowledge into S2S models has been a long-standing research problem.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{images\/figure0.png}\n \\caption{\\label{example-table} An example from a novel called ``Fights Break Sphere''. The relations between \\textcolor{y}{Yanxiao}, \\textcolor{x}{Xuner} and \\textcolor{n}{Nalanyanran} are evolutionary. And the characteristics of \\textcolor{y}{Yanxiao} change over time.}\n\\end{figure}\n\nThere are two different directions for including knowledge in S2S models.\nOn the one hand, many efforts \\cite{zhang-etal-2018-improving,guan2019story,li-etal-2019-incremental} have been made to address the short-sighted problem in text generation by explicitly modeling unstructured context. 
Nevertheless, these approaches rely heavily on the quality and scale of unstructured context, and become intractable when applied to scenarios where the context length increases drastically (e.g., commenting on a full-length novel).\nOn the other hand, researchers~\\citep{beck-etal-2018-graph,marcheggiani-perez-beltrachini-2018-deep,li-etal-2019-coherent} try to combine knowledge with S2S by employing pre-processed structured data~(e.g., knowledge graphs), which naturally avoids the difficulty of context length. However, those models are oriented to static knowledge, and hence can hardly model events where temporal knowledge evolution occurs.\n\n\n\n\nDynamic knowledge evolution is very common in full-length novels. In a novel, a knowledge graph can be constructed by using entities (characters, organizations, locations, etc.) as vertices together with the entity relations as edges. Obviously, a single static knowledge graph can hardly represent a dynamic storyline full of dramatic changes. For example, a naughty boy can grow up into a hero, friends may become lovers, etc. In this paper, we propose the \\textbf{E}volutionary \\textbf{K}nowledge \\textbf{G}raph~(\\textbf{EKG}), which contains a series of sub-graphs, one for each time step. Figure~\\ref{example-table} illustrates the EKG for the novel ``Fights Break Sphere''. In three different scenes, ``Yanxiao'', the leading role of the novel, is characterized as ``proud boy'', ``weak fighter'', and ``magic master'', respectively. At the same time, the relation between ``Yanxiao'' and ``Xuner'' evolves from friendship into love and finally marriage, and the relation between ``Yanxiao'' and ``Nalanyanran'' changes over time as ``engagement$\\rightarrow$divorce$\\rightarrow$friend''.\n\nThe EKG is important for commenting on passages of novels, since it is the dramatic evolution and comparison in the storyline, not the static facts, that resonate with readers most. This is illustrated in Figure~\\ref{example-table}. 
When commenting on a passage sampled from the $T$-th chapter of a novel, user-A refers to a historical fact that ``Nalanyanran'' has abandoned ``Yanxiao'', while user-B refers to the future relation between ``Yanxiao'' and a related entity ``Xuner'', namely that they will go through difficulties together. In this paper, the EKG is trained within a multi-task framework to represent the latent dynamic context, and then a novel graph-to-sequence model is designed to select the relevant context from the EKG for comment generation.\n\n\\subsection{Related Work}\nGraph-to-sequence models have been proposed for text generation.~\\citet{song-etal-2018-graph}, ~\\citet{beck-etal-2018-graph}, and~\\citet{guo-etal-2019-densely} used graph neural networks to solve the AMR-to-text problem.~\\citet{bastings-etal-2017-graph} and ~\\citet{GraphSeq2Seq:18} utilized graph convolutional networks to incorporate syntactic structure into neural attention-based encoder-decoder models for machine translation. In comment generation, Graph2Seq~\\citep{li-etal-2019-coherent} is proposed to generate comments by modeling the input news as a topic graph. These methods use static graphs and do not involve dynamic knowledge evolution.\n\nRecently, more research attention has been focused on dynamic knowledge modeling.~\\citet{taheri2019www} used gated graph neural networks to learn the temporal dynamics of an evolving graph for dynamic graph classification.~\\citet{KnowEvolve:17, trivedi2018dyrep, kumar2019kdd} learned evolving entity representations over time for dynamic link prediction. Unlike the EKG in this paper, they did not model the embeddings of the relations between dynamic entities.\n~\\citet{iyyer-etal-2016-feuding} proposed an unsupervised deep learning algorithm to model the dynamic relationship between characters in a novel without considering the entity embedding. 
Unlike these methods, our EKG-based model represents the temporal evolution of entities and relations simultaneously by learning their temporal embeddings, and hence has an advantage in supporting text generation tasks.\n\nTo our knowledge, few studies make use of evolutionary knowledge graphs for text generation. This may be due to the lack of datasets involving dynamic temporal evolution. We observe that novel commenting requires understanding long context full of dramatic changes, and hence we build such a dataset by collecting full-length novels and real user comments. The dataset with its EKG will be made publicly available, and more details can be found in Section~\\ref{sect:Dataset}.\n\nThe main contributions of our work are three-fold:\n\n\\begin{itemize}\n \\item We build a new dataset to facilitate the research of evolutionary knowledge based text generation.\n \\item We propose a multi-task framework for the learning of the evolutionary knowledge graph to model the long and dynamic context.\n \\item We propose a novel graph-to-sequence model to incorporate the evolutionary knowledge graph for text generation.\n\n\\end{itemize}\n\n\\section{Dataset Development and Evolutionary Knowledge Graph Building}\\label{sect:Dataset}\nTo facilitate the research of modeling knowledge evolution for text generation, we build a dataset called \\textit{GraphNovel} by collecting full-length novels and real user comments. Together with the corresponding EKG embeddings, we will make the dataset publicly available soon.\nWe detail the collection of the dataset below.\n\n\\subsection{Data collection}\n\\label{sub:collect}\nThe data is collected from well-known Chinese novel websites. To increase the diversity of the data, the top-1000 hottest novels are crawled, covering different genres including science fiction, fantasy, action, romance, historical, and so on. 
Then we filter out novels based on the following three criteria: 1) the number of chapters is less than 10, 2) few entities are mentioned, and 3) lack of user comments. Each remaining novel includes chapters in chronological order, a set of user-underlined passages, and user comments for the passages.\n\nThen, we use the lexical analysis tool~\\citep{jiao2018LAC} to recognize three types of entities (persons, organizations, locations) from each novel. Because many characters in novels have nicknames, the entities identified by the tool contain much noise. To improve the knowledge quality, human annotators are asked to verify the entities, and add missing ones. Then all the paragraphs containing mentions of two entities are identified, and will later serve as a representation of the entity relations at that specific time step.\n\nAs for the highlighted novel passages and user comments, three criteria are used to select high quality and informative data: 1) the selected passage must contain at least one entity; 2) the selected passage must be commented on by at least three users; 3) comments related to the same passage are ranked according to the upvotes, and the bottom 20\\% are dropped. Notably, the highlighted passages have a degree of redundancy because users tend to highlight and comment on passages at very similar positions. Thus we merge passages which have an overlapping rate of more than 50\\%. This operation effectively reduces the number of passages by 30\\%.\n\n\n\n\\subsection{Core Statistics}\nThe dataset contains 203 novels, 349,695 highlighted passages and 3,136,210 comments in total. Due to the diverse genres of the novels in our dataset, the number of entities and relations per novel varies widely over the range [$10^2$, $10^5$]. 
And the number of comments per passage varies widely over the range [$3$, $2\\times{10^3}$], because it depends on how interesting the corresponding passage is.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{images\/figure2.png}\n \\caption{Architecture of our model. First, the EKG is trained under a multi-task learning framework. Then a graph-to-sequence model is trained to utilize the EKG for text generation.}\\label{fig:model}\n\\end{figure*}\n\nWe partitioned the dataset into non-overlapping training, validation, and test portions, along novels (see Table~\\ref{dataset-table} for detailed statistics). The five most relevant comments for each passage in the validation and test sets are selected by human annotators, while the comments in the training set are all preserved to ensure flexible use.\n\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{l|r|r|r}\n\\hline\n & \\textbf{train} & \\textbf{valid} & \\textbf{test} \\\\\n\\hline\n{\\small{\\# novels}} & 173 & 10 & 20 \\\\\n\\hline\n{\\small{\\# passages}} & 324,803 & 7,976 & 16,916 \\\\\n\\hline\n{\\small{\\# comments}} & 3,011,750 & 39,880 & 84,580 \\\\\n\\hline\n\\tabincell{l}{\\small{Avg. length}\\\\\\small{~~of context}} & 520,571 & 305,492 & 277,847 \\\\\n\\hline\n{\\small{Avg. \\# entities}} & 383.4 & 720.6 & 281.7 \\\\\n\\hline\n{\\small{Avg. \\# relations}} & 9,013 & 19,919 & 7,084 \\\\\n\\hline\n\\tabincell{l}{\\small{Avg. \\# comments}\\\\\\small{~~per passage}} & 9.3 & 5.0 & 5.0 \\\\\n\\hline\n\\tabincell{l}{\\small{Avg. \\# entities}\\\\\\small{~~per passage}} & 2.6 & 3.1 & 3.4 \\\\\n\\hline\n\\tabincell{l}{\\small{Avg. \\# relations}\\\\\\small{~~per passage}} & 4.4 & 9.2 & 11.0 \\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{dataset-table} Statistics of the dataset. }\n\\end{table}\n\n\\subsection{Build up Evolutionary Knowledge Graph}\n\\label{sub:build}\n\n\nThen for each novel, we build up a knowledge graph which consists of a sequence of sub-graphs. 
Obviously, it is sensible to build up a sub-graph for each important scene of the novel, and to build up more sub-graphs around the critical transitions in the storyline. In this paper, we assume each chapter usually represents a self-contained scene, and hence build a sub-graph for each chapter of the novel.\n\nThen for each chapter, the entities mentioned in the chapter are the vertices of the corresponding sub-graph, and if a paragraph contains two of the entities, an edge is created between the two entities. In this way, a sequence of sub-graphs is constructed, forming our EKG. In the next section, we will formulate the embedding computation of the EKG, and its application to comment generation.\n\n\\section{Model Formalism}\\label{sec:formalism}\nIn this section, we formulate our approach in detail. First, the training of the EKG embedding is presented. Then a graph-to-sequence model is shown to utilize the EKG for comment generation. The architecture of the model is shown in Figure~\\ref{fig:model}.\n\n\\subsection{Definition}\nFor a novel with $n_e$ entities and $n_r$ relations, define a global evolutionary knowledge graph: $G_{ekg}^{global} = \\{G(t)\\}|_{t:1\\rightarrow{T}}$,\nwhere $T$ is the number of chapters\\footnote{We will cluster successive chapters into a longer one if they are too short.} (or time periods); $G(t)=\\langle{V(t), E(t)}\\rangle$ is a temporal knowledge graph of the chapter $t$; $V(t)=\\{v_1(t), v_2(t),...,v_{n_e}(t)\\}$ is the set of vertices and $E(t)=\\{e_1(t), e_2(t),...,e_{n_r}(t)\\}$ is the set of edges between two vertices.\n\nGiven a passage $C$ from the chapter $t$ with $c_e$ entities and $c_r$ relations, a local EKG related to it is a sub-graph of the global EKG: $G_{ekg}^{local}=\\{G_c(t)\\}|_{t:1\\rightarrow{T}}$, where $G_c(t)=\\langle{V_c(t), E_c(t)}\\rangle$; $V_c(t)$ is a subset of $V(t)$ of size $c_e$ and $E_c(t)$ is a subset of $E(t)$ of size $c_r$. 
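Before turning to embeddings, the per-chapter construction described above can be sketched in a few lines. Entity matching is simplified here to substring tests (the paper uses a lexical analysis tool plus human verification), so all names are illustrative:

```python
from collections import defaultdict
from itertools import combinations

def build_ekg(chapters, entities):
    """One sub-graph per chapter: vertices are entities mentioned in the
    chapter; an edge links two entities that co-occur in a paragraph,
    with the paragraphs kept as the representation of that relation.
    `chapters` is a list of chapters, each a list of paragraph strings."""
    ekg = []
    for paragraphs in chapters:
        vertices, edges = set(), defaultdict(list)
        for p in paragraphs:
            present = [e for e in entities if e in p]
            vertices.update(present)
            for a, b in combinations(sorted(present), 2):
                edges[(a, b)].append(p)
        ekg.append((vertices, dict(edges)))
    return ekg

ekg = build_ekg([["Yanxiao met Xuner.", "Xuner left."]], ["Yanxiao", "Xuner"])
```
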
Then the local EKG of the passage is a sequence of local temporal knowledge sub-graphs with $T \\times c_e$ vertex embeddings and $T \\times c_r$ edge embeddings.\n\\subsection{EKG Embedding Training}\nInspired by its consistent state-of-the-art performance on language understanding tasks, we use the off-the-shelf Chinese BERT model~\\citep{devlin-etal-2019-bert} to calculate the initial semantic representation of sentences. Considering the fact that entities are either out-of-vocabulary or associated with special semantics within the novel context, we propose the following algorithm to jointly learn entity and relation embeddings in the EKG:\n\n\\paragraph{Vertex embedding learning.} The passages containing mentions of entity $v$ contribute to learning the embedding of $v$. Specifically, the $i$-th passage is tokenized while the entity mention $v_i$ is masked with the token ``[MASK]''. The resulting tokens are fed into the pre-trained Chinese BERT model, and $f_{v_i}$ is obtained as the output feature corresponding to the mask token.\nWithin the chapter $t$, there exist $N_{v}^{t}$ sentences containing vertex $v$. The embedding of $v$ is learned by optimizing the following softmax loss summation, which models the probability of predicting the masked entities as $v$:\n\\begin{equation}\\label{node_learn}\n L_{v}^{t} = -\\sum_{i=1}^{N_{v}^{t}}{\\log{p_{v_i}^{t}}}\n\\end{equation}\n\\begin{equation}\\label{node_softmax}\n p_{v_i}^{t} = \\mathrm{softmax}(W_{v}^{t}\\cdot{f_{v_i}})\n\\end{equation}\nwhere $W_v^t$ is a learnable parameter and denotes the embedding of $v$ from the chapter $t$. 
Usually the semantic representations of entities change smoothly over time, so we propose a temporally smoothed softmax loss to retain the similarity of entity embeddings from successive time periods:\n\\begin{equation}\\label{node_improve}\n \\tilde{L}_{v}^{t} = -\\sum_{i=1}^{N_{v}^{t}}{(\\lambda_0\\log{p_{v_i}^{t-1}}+\\lambda_1\\log{p_{v_i}^{t}}+\\lambda_2\\log{p_{v_i}^{t+1}})}\n\\end{equation}\nwhere $\\lambda_0$, $\\lambda_1$ and $\\lambda_2$ are smoothing factors, and only valid probabilities are included when $t=1$ or $T$. Then the overall loss for all time periods and all vertices is:\n\\begin{equation}\\label{node_overall}\n L_{vertex} = \\sum_{t=1}^{T}\\sum_{v=1}^{n_e}{\\tilde{L}_{v}^{t}}\n\\end{equation}\n\n\n\\paragraph{Edge embedding learning.} Since the number of relations equals the number of co-occurrences for any two entities, it is infeasible to employ an embedding matrix to model the relations. Therefore, a \\textit{Relation Network} (RN) is proposed to learn the edge embeddings in the TKGs, as shown in Figure~\\ref{fig:rn}. Specifically, the RN takes two vertex embeddings as input, and feeds them into the first hidden layer to obtain the embedding of the edge $r$. Then the embeddings of the two vertices and the edge are concatenated and fed into the second hidden layer to reconstruct the sentence. We also use the pre-trained BERT~\\citep{devlin-etal-2019-bert} to obtain the representation $f_c$ for the whole sentence. 
The $f_c$ is taken from the final hidden state corresponding to the ``[CLS]'' token because it aggregates the sequence representation.\n\nA reconstruction loss applied to the network is optimized to jointly learn the RN and the edge embeddings:\n\\begin{equation}\\label{edge_l0}\n L_r = \\max(d(f_{p^+}, f_c)- d(f_{p^-}, f_c) + \\alpha, 0)\n\\end{equation}\nwhere $p$ stands for a pair of vertices; $p^+$ represents a positive pair with both vertices related to the edge, and $p^-$ represents a negative pair with one vertex related to the edge and the other unrelated. The overall loss for learning the edge embeddings is:\n\\begin{equation}\\label{edge_loss}\n L_{edge} = \\sum_{r=1}^{N_r}L_r\n\\end{equation}\n\nCombining the vertex and edge losses above, our final multi-task loss is:\n\\begin{equation}\\label{tkg_loss}\n L_{EKG} = L_{vertex} + \\lambda_{r}L_{edge}\n\\end{equation}\nwhere $\\lambda_{r}$ is a hyperparameter to be tuned.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{images\/rn.png}\n \\caption{Relation Network with a reconstruction loss. The edge embedding is shown in \\textcolor{red}{red}.}\\label{fig:rn}\n\\end{figure}\n\n\\subsection{Graph-to-sequence modeling}\nAfter EKG embedding learning, we propose a graph encoder to utilize the embeddings of the EKG for comment generation. From the learned $G_{ekg}^{local}$, we can obtain vector sequences $V_i=\\{v_i(1), v_i(2),...,v_i(T)\\}$ for each vertex, and $E_i=\\{e_i(1),e_i(2),...,e_i(T)\\}$ for each edge. All these sequences are fed into a Bi-LSTM to integrate information from all time periods. 
The final representations of the vertices and edges are taken from the final hidden state of the Bi-LSTM corresponding to the time step $t$.\n\nFurther, our graph encoder employs graph convolutional networks~\\citep{GNN:18} to aggregate the structured knowledge from the EKG, and is then combined into a widely used encoder-decoder framework~\\citep{Transformer:17} for generation.\n\nOur graph encoder is based on the implementation of GAT~\\citep{v2018graph}. The input to a single GAT layer is a set of vertex and edge features, denoted as $\\mathbf{F}_v=\\{\\vec{v}_1, \\vec{v}_2,..., \\vec{v}_{c_e}\\}$ and $\\mathbf{F}_r=\\{\\vec{r}_1, \\vec{r}_2,..., \\vec{r}_{c_r}\\}$, where ${c_e}$ is the number of vertices, and ${c_r}$ is the number of edges from the passage. The layer produces a new set of vertex features, $\\mathbf{F'}_v=\\{\\vec{v'}_1, \\vec{v'}_2,..., \\vec{v'}_{c_e}\\}$, as its output.\n\nIn order to aggregate the structured knowledge from the input features and transform them into higher-level features, we then perform self-attention on both the vertices and the edges to compute attention coefficients\n\\begin{equation}\\label{gat_atten1}\n \\alpha_{ij}^e = g(\\mathbf{W}\\vec{v}_i, \\mathbf{W}\\vec{v}_j)\n\\end{equation}\n\n\\begin{equation}\\label{gat_atten2}\n \\alpha_{ij}^r = h(\\mathbf{W}\\vec{v}_i, \\mathbf{W}\\vec{r}_{ij})\n\\end{equation}\nwhere $\\mathbf{W}$ is a learnable parameter; $g$ and $h$ are mapping functions; $\\alpha_{ij}^e$ and $\\alpha_{ij}^r$ indicate the importance of neighbor features to vertex $i$, and they are normalized using softmax.\n\nOnce obtained, the normalized attention coefficients are used to compute a linear combination of the features corresponding to them, to serve as the final output features for every vertex:\n\\begin{equation}\\label{gat_output}\n \\vec{v'}_i=\\sum_{j\\in{\\mathcal{N}_i}}\\alpha_{ij}^e\\mathbf{W}\\vec{v}_j + \\alpha_{ij}^r\\mathbf{W}\\vec{r}_{ij}\n\\end{equation}\n\nThen the graph encoder is combined into the 
encoder-decoder framework~\\citep{Transformer:17}, in which a self-attention based encoder is used to encode the passage. To aggregate structured knowledge, the encodings of all vertices from the graph encoder are concatenated with the output of the passage encoder, and fed into a Transformer decoder for text generation. The whole graph-to-sequence model is trained to minimize the negative log-likelihood of the related comment.\n\\section{Experiment}\nIn this section, we first introduce the experimental details, and then present results from automatic and human evaluations.\n\\subsection{Details}\nWe train all the models using the training set and tune hyperparameters on the validation set. The automatic and human evaluations are carried out on the test set.\nDuring the training of the EKG, we first learn the vertex embeddings and fix them during the subsequent training of the edge embeddings. Our GAT-based graph encoder is based on the entities from the passage. We keep $K$ entities for each passage: if the number of entities\\footnote{Note that the number will not be zero because the passage contains at least one entity in our dataset.} is smaller than $K$, we use breadth-first search on the global graph to fill the gap; otherwise we filter out the low-order entities according to entity frequency. We set $K$ to 5, selected by validation. Label smoothing is used in the smoothed softmax loss. 
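The $K$-entity selection rule can be sketched as follows; the adjacency and frequency structures are hypothetical, since the paper gives no code:

```python
from collections import deque

def select_entities(passage_entities, global_adj, freq, K=5):
    """Keep (up to) K entities per passage: a breadth-first search on the
    global graph fills a deficit where possible; an excess is trimmed by
    dropping low-frequency entities (a sketch of the selection rule)."""
    if len(passage_entities) >= K:
        return sorted(passage_entities, key=lambda e: -freq[e])[:K]
    kept = list(passage_entities)
    queue, seen = deque(passage_entities), set(passage_entities)
    while queue and len(kept) < K:
        for nb in global_adj.get(queue.popleft(), []):
            if nb not in seen:
                seen.add(nb)
                kept.append(nb)
                queue.append(nb)
                if len(kept) == K:
                    break
    return kept
```

For example, a passage mentioning only one entity borrows its nearest neighbors in the global graph, while a passage with too many entities keeps only the most frequent ones.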
We denote our full model as ``\\textbf{EKG+GAT(V+E)}'', its variant which only uses the first term in Eq.~\\ref{gat_output} as ``\\textbf{EKG+GAT(V)}'', and name the other variant ``\\textbf{EKG}'', which does not use the GAT-based graph encoder and feeds the encodings of the vertices into the Transformer decoder directly.\n\\subsection{Hyperparameters}\nIn our model, we set $\\lambda_0=0.5$, $\\lambda_1=1.0$, $\\lambda_2=0.3$ for the smoothed softmax loss; set $\\alpha$ to 0.0 for the reconstruction loss and $\\lambda_r$ to 1.0 for the multi-task loss; the number of self-attention layers in our passage encoder is 6; the number of Bi-LSTM layers is 2 and the length of its hidden state is 768; and the GAT-based graph encoder has two layers. To stabilize the training, we use the Adam optimizer~\\citep{adam:14} and follow the learning rate strategy of~\\citet{klein-etal-2017-opennmt}, increasing the learning rate linearly during the first $5000$ steps for warm-up and then decaying it exponentially. For inference, the maximum decoding length is 50, and beam search with beam size 4 is used for all the models.\n\\subsection{Evaluation metrics}\nWe use both automatic metrics and human evaluations to evaluate the quality of the generated novel comments.\n\n\\noindent{\\textbf{Automatic metrics}}:~1) \\textbf{BLEU} is commonly employed in evaluating translation systems. It has also been introduced into the comment generation task~\\citep{qin-etal-2018-automatic, yang-etal-2019-read}. We use \\texttt{multi-bleu.perl}\\footnote{https:\/\/github.com\/moses-smt\/mosesdecoder\/blob\/master\/scripts\/generic\/multi-bleu.perl} to calculate the BLEU score. 2) \\textbf{ROUGE-L}~\\citep{lin-2004-rouge} uses the longest common subsequence to calculate the similarity score between candidates and references. For the calculation, we use a python package called \\texttt{pyrouge}\\footnote{https:\/\/pypi.org\/project\/pyrouge\/}. 
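ROUGE-L reduces to a longest-common-subsequence computation over token lists; a self-contained sketch of the F-score (the paper itself relies on the pyrouge package, so this is illustrative only):

```python
def rouge_l_f(candidate, reference, beta=1.2):
    """ROUGE-L F-score from the longest common subsequence (LCS) of two
    token lists, using the standard dynamic-programming LCS table."""
    m, n = len(candidate), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if candidate[i] == reference[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    p, r = lcs / m, lcs / n          # LCS-based precision and recall
    return (1 + beta ** 2) * p * r / (r + beta ** 2 * p)

print(rouge_l_f(["the", "cat", "sat"], ["the", "cat", "ran"]))
# 2/3, since precision = recall = 2/3
```
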
These metrics also support the multi-reference evaluations on our dataset.\n\n\\noindent{\\textbf{Human evaluations}}:~1) \\textbf{Relevance}: This metric evaluates how relevant the comment is to the passage. It measures the degree to which the comment is about the main storyline of the novel. 2) \\textbf{Fluency}: This metric evaluates whether the sentence is fluent, judging whether it follows the grammar and has clear logic. 3) \\textbf{Informativeness}: This metric evaluates how much structured knowledge the comment contains. It measures whether the comment reflects the evolution of entities and relations, or is just a general description that can be used for many passages. All these metrics have three levels; the final scores are projected to 0$\\sim$3.\n\n\\subsection{Baseline Models}\nWe describe three kinds of models used as baselines. All the baselines are implemented according to the related works and tuned on the validation set.\n\\begin{table*}[!t]\n\\centering\n\\begin{tabular}{lccccc}\n\\hline \\textbf{Model} & \\textbf{BLEU} &\\textbf{ROUGE-L}\n& \\textbf{Relevance} & \\textbf{Fluency} & \\textbf{Informativeness}\\\\ \\hline\n\\textbf{Seq2Seq}\\citep{qin-etal-2018-automatic} & 2.59 & 14.71 & 0.12 & 1.71 & 0.09\\\\\n\\textbf{Attn}\\citep{qin-etal-2018-automatic} & 3.71 & 16.44 & 0.34 & 1.70 & 0.33 \\\\\n\\textbf{Trans}\\citep{Transformer:17} & 6.11 & 19.21 & 0.57 & 1.62 & 0.58 \\\\\n\\textbf{Trans.+CTX}\\citep{zhang-etal-2018-improving} & 6.52 & 19.11 & 0.68 & 1.68 & 0.67\\\\\n\\textbf{Graph2Seq}\\citep{li-etal-2019-coherent} & 4.93 & 16.91 & 0.35 & 1.69 & 0.31 \\\\\n\\textbf{Graph2Seq++}\\citep{li-etal-2019-coherent} & 5.56 & 17.51 & 0.85 & 1.67 & 0.60\\\\\n\\hline\n\\textbf{EKG} & 6.59 & 20.00 & 0.81 & \\textbf{1.83} & 0.64 \\\\\n\\textbf{EKG+GAT(V)} & 6.72 & 20.09 & 0.88 & 1.77 & 0.70 \\\\\n\\textbf{EKG+GAT(V+E)} & \\textbf{7.01} & \\textbf{20.10} & \\textbf{0.89} & 1.74 & \\textbf{0.75} \\\\\n\\hline\n\\textbf{Human 
Performance} & 100 & 100 & 1.09 & 1.85 & 1.04\\\\\n\\hline\n\n\\end{tabular}\n\\caption{\\label{metric-table} Automatic metrics and human evaluations. }\n\\end{table*}\n\\begin{itemize}\n \\item \\textbf{Seq2Seq models}~\\citep{qin-etal-2018-automatic}:~these models generate comments for news either from the title or the entire article. Considering there are no titles in our dataset, we compare two kinds of models from their work. 1) \\textbf{Seq2Seq:} a basic sequence-to-sequence model~\\citep{Sutskever:14} that generates comments from the passage; 2) \\textbf{Attn:} a sequence-to-sequence model with an attention mechanism~\\citep{Bahdanau:14}. For the input of the attention model, we append the related entities to the end of the passage.\n\n \\item \\textbf{Self-attention models}:~our model includes a graph encoder to encode knowledge from the graph, and a passage encoder that uses multiple self-attention layers. To show the power of the graph encoder, we use the encoder-decoder framework (\\textbf{Trans.})~\\citep{Transformer:17} for passage-based comparison. We also introduce an improved Transformer~\\citep{zhang-etal-2018-improving} with a context encoder to represent document-level context and denote it as \\textbf{Trans.+CTX}. For the context input, we use up to 512 tokens before the passage as context.\n\n \\item \\textbf{Graph2Seq}~\\citep{li-etal-2019-coherent}:~this is a graph-to-sequence model that builds a static topic graph for the input and generates comments based on representations of entities only. A two-layer transformer encoder is used in their work. 
For fair comparison, we use a 6-layer transformer encoder to replace the original and denote the new model as \\textbf{Graph2Seq++}.\n\\end{itemize}\n\n\n\\subsection{Evaluation results}\nTable~\\ref{metric-table} shows the results of both automatic metrics and human evaluations.\n\nIn automatic metrics, our proposed model has the best BLEU and ROUGE-L scores.\nFor BLEU, our full model EKG+GAT(V+E) achieves a score of 7.01, which is 0.59 higher than that of the best baseline Trans.+CTX.\nGraph2Seq++ has a BLEU score of 5.56, which is clearly lower than that of EKG+GAT(V+E). The main reason is that Graph2Seq++ is based on a static graph and cannot make use of the dynamic knowledge.\nFor ROUGE-L, our models all have scores higher than 20\\%, at least 0.79\\% better than that of Trans., which is the best among all baselines; the ROUGE-L score of Trans. is higher than that of Trans.+CTX, the opposite of the BLEU results.\n\nIn human evaluations, we randomly select 100 passages from the test set and run all the models in Table~\\ref{metric-table} to generate respective comments. We also provide one user comment for each passage to get evaluations of human performance. All these passage-comment pairs are labeled by human annotators.\nIn the relevance metric, our full model EKG+GAT(V+E) has a better relevance score than all the baselines. It means our model can generate more relevant comments and better reflect the main storyline of the novel. Even so, significant gaps still exist when compared to the human performance.\nIn the fluency and informativeness metrics, our EKG+GAT(V+E) model achieves higher scores than all baselines. 
It illustrates that the generated comments by our proposed model are more fluent and contain more attractive information.\n\n\\begin{table*}[!t]\\small\n\\centering\n\\begin{tabular}{|l|}\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{P1:}~\\textbf{\u8fd9\u4eba\u5bb6\u59d3\u66fe,\u4f4f\u5728\u53bf\u57ce\u4ee5\u5357\u4e00\u767e\u4e09\u5341\u91cc\u5916\u7684\u8377\u53f6\u5858\u90fd\u3002}\\\n(This family, surnamed Zeng, lives in Heyetangdu, 130 miles \\textcolor{orange}{south of the county}.)\n}\\end{CJK*} \\\\\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{T1:}~\u539f\u6765\u662f\u8fd9\u4e48\u6765\u7684\u3002~(That's how it turned out.)}\\end{CJK*} \\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{G1:}~\u8fd9\u4e2a\u5730\u65b9\u5440!~(This is the place!)}\\end{CJK*} \\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{E1:}~\u4ed6\u4e00\u76f4\u601d\u5ff5\u7740\u5bb6\u91cc\u4eba\u3002~(He has been \\textcolor{cyan}{missing his family}.)}\\end{CJK*}~\\textcolor{blue}{\\textbf{[P2]}}\\\\\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{P2:}~\\textbf{\u56fd\u85e9\u4eca\u65e5\u4e43\u6234\u5b5d\u4e4b\u8eab,\u8001\u6bcd\u5e76\u672a\u5b89\u846c\u59a5\u5e16,\u600e\u5fcd\u79bb\u5bb6\u51fa\u5c71?}\\\n(Today Guofan is wearing mourning. \\textcolor{cyan}{My mother hasn't been buried yet}. 
How can I leave home?)\n}\\end{CJK*} \\\\\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{T2:}~\u771f\u662f\u4e00\u4e2a\u806a\u660e\u4eba~(He is so clever.)\\\\}\\end{CJK*} \\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{G2:}~\u8001\u592a\u592a\u4e5f\u662f\u4e2a\u597d\u4eba~(The old lady is also a good person.)\\\\}\\end{CJK*} \\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{E2:}~\u8fd9\u4e2a\u65f6\u5019\u7684\u56fd\u5bb6\u5df2\u7ecf\u6709\u4e86\u53d8\u5316~(\\textcolor{green}{The country is changing} at this time.)}\\end{CJK*} ~\\textcolor{blue}{\\textbf{[P3]}}\\\\\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{P3:}~\\textbf{\u9762\u4e34\u5927\u654c,\u66fe\u6697\u81ea\u4e0b\u5b9a\u51b3\u5fc3,\u4e00\u65e6\u57ce\u7834,\u7acb\u5373\u81ea\u520e,\u8ffd\u968f\u5854\u9f50\u5e03\u3001\u7f57\u6cfd\u5357\u4e8e\u5730\u4e0b\u3002}}\\end{CJK*} \\\\\n(\\textcolor{green}{Facing the enemy}, Zeng made up his mind to \\textcolor{red}{commit suicide as soon as the city broke}, following Ta Qibu and Luo Zenan.)\\\\\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{T3:}~\u8fd9\u5c31\u662f\u539f\u6765\u6218\u4e89\u7684\u6837\u5b50}\\end{CJK*} (This is what the war looks like.)\\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{G3:}~\u4e00\u4e2a\u4eba\u7684\u547d\u8fd0\u603b\u662f\u5982\u6b64}\\end{CJK*}(One's destiny is always like this.)\\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{E3:}~\u81ea\u7acb\u4e8e\u5357\u57ce,\u81ea\u7834\u800c\u7acb~(He established himself in \\textcolor{orange}{the south of the country} through constant breakthroughs.)}\\end{CJK*}~\\textcolor{blue}{\\textbf{[P1]}}\\\\\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{P4:}~\\textbf{\u66fe\u56fd\u85e9\u7684\u8138\u4e0a\u9732\u51fa\u4e00\u4e1d\u6d45\u6d45\u7684\u7b11\u610f,\u5934\u4e00\u6b6a,\u5012\u5728\u592a\u5e08\u6905\u4e0a.}\\\n(Zeng Guofan smiled slightly. 
His head tilted and fell on the chair.)\n}\\end{CJK*} \\\\\n\\hline\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{T4:}~\u8fd9\u4e00\u6bb5\u63cf\u5199\u771f\u7684\u5f88\u6709\u753b\u9762\u611f~(This description is really picturesque.)}\\end{CJK*} \\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{G4:}~\u8fd9\u4e2a\u4eba\u7684\u5fc3\u601d\u7f1c\u5bc6~(This man is very thoughtful.)}\\end{CJK*} \\\\\n\\begin{CJK*}{UTF8}{gkai}\\tabincell{l}{\\textbf{E4:}~\u4ed6\u4e00\u751f\u5fe0\u541b\u4e3a\u56fd,\u5c31\u8fd9\u6837\u8d70\u4e86~(He was \\textcolor{red}{loyal to his country} all his life; he is gone.)}\\end{CJK*}~\\textcolor{blue}{\\textbf{[P3]}}\\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{case-table}Comments generated by Trans.+CTX~(\\textbf{T}), Graph2Seq++~(\\textbf{G}) and our EKG+GAT(V+E)~(\\textbf{E}). The passages (i.e., P1, P2, P3, P4) are extracted from the same novel called \\emph{Zeng Guofan}. We highlight the passage corresponding to the generated comment from our model~\\textbf{E} with \\textcolor{blue}{blue color}. Moreover, the relevant fragments are marked with the same color.}\n\\end{table*}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{images\/ablation.png}\n \\caption{Ablation results for the number of entities (a) and the number of time periods (b).}\\label{fig:ablation}\n\\end{figure}\n\\subsection{Analysis and Discussion}\n\\paragraph{Ablation study:} We compare the results of EKG, EKG+GAT(V), and EKG+GAT(V+E). The EKG, which does not use the graph encoder, achieves a BLEU score of 6.59, which is 1.03 higher than that of Graph2Seq++. The BLEU score can be further improved to 6.72 by introducing the vertex-only variant EKG+GAT(V). Comparing EKG+GAT(V) and EKG+GAT(V+E) to the EKG, the BLEU scores increase by 0.13 and 0.42, respectively; this indicates the usefulness of the graph encoder and that the evolutionary knowledge from edges can be treated as a good supplement to that of vertices. 
In human evaluations, EKG+GAT(V+E) and EKG+GAT(V) have higher relevance and informativeness scores than those of the EKG. It also indicates that the graph encoder can effectively utilize the evolutionary knowledge from vertices and edges, and make the generated comments more relevant and informative.\n\\paragraph{Analysis of the number of entities:} The corresponding local EKG is constructed based on the entities from the passage. To explore the influence of the number ($N$) of entities, we report BLEU scores of our full model based on different numbers of entities\footnote{We do not report the BLEU score of the full model when $N=1$ because there are no edges included.} in Figure~\\ref{fig:ablation}(a). The best BLEU score is achieved at $N=5$. The BLEU score at $N=0$ corresponds to the Transformer (Trans.). Our full model is robust to the number of entities, because the BLEU scores are stable when $N$ is in the range $[2, 7]$.\n\\paragraph{Analysis of the number of time periods:} We also report the BLEU scores for different numbers of time periods in Figure~\\ref{fig:ablation}(b). Our full model achieves the best BLEU score of 7.01 at $N=4$, which is 0.41 higher than that of the static graph at $N=1$. It illustrates that the dynamic knowledge is useful for improving the performance.\n\n\\paragraph{Case study:} We provide a case study here. Four passages to be commented on are extracted chronologically from a novel and shown in Table~\\ref{case-table}. For comparison, we use \\textbf{T}rans.+CTX and \\textbf{G}raph2Seq++, which have the best relevance and informativeness scores among the baselines, respectively. To start with, within each case, we find that the generated comments from our model are more informative, while the generated outputs from other models tend to be general or common replies, which demonstrates the effectiveness of our knowledge usage.\n\nFrom another perspective, we observe that our model can make use of the dynamics of knowledge. 
Let us take a look at \\textbf{P3}: our generated comment describes that \\textit{Zeng Guofan} was born in the south of the country, which is in accordance with the passage described in \\textbf{P1}. Similar interactions can be found in all four cases, which support our claims above.\n\\section{Conclusion}\nIn this paper, we propose to encode evolutionary knowledge for automatically commenting on long novels. We learn an \\textit{Evolutionary Knowledge Graph} under a multi-task framework and then design a graph-to-sequence model to utilize the EKG for generating comments. In addition, we collect a new generation dataset called \\textit{GraphNovel} to advance the corresponding research. Experimental results show that our EKG-based model is superior to several strong baselines on both automatic metrics and human evaluations. In the future, we plan to develop new graph-based encoders to generate personalized comments with this dataset.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{INTRODUCTION}\n\nAnisotropy of molecular interactions plays an important role in many\nphysical, chemical and biological processes. Attractive forces are\nresponsible for the tendency toward particle association, while the\ndirectionality of the resulting bonds determines the geometry of the\nresulting clusters. Aggregation may thus lead to very different structures:\nin particular, chains, globular forms, and bi- or three-dimensional\nnetworks. Understanding the microscopic mechanisms underlying such phenomena\nis clearly very important both from a theoretical and a technological point\nof view. 
Polymerization of inorganic molecules, phase behaviour of\nnon-spherical colloidal particles, building up of micelles, gelation,\nformation of $\\alpha $-helices from biomolecules, DNA-strands, and other\nordered structures in living organisms, protein folding and crystallization,\nself-assembly of nanoparticles into composite objects designed for new\nmaterials, are all subjects of considerable interest, belonging to the same\nclass of systems with anisotropic interactions.\n\nModern studies on these complex systems strongly rely upon computer\nsimulations, which have provided a wealth of useful information about many\nproperties of molecular fluids.\n\nNevertheless, analytic models with explicit expressions for structural and\nthermodynamic properties still represent an irreplaceable tool, in view of\ntheir ability to capture the essential features of the investigated\nphysical systems.\n\nAt the lowest level in this hierarchy of minimal models on assembling\nparticles lies the problem of the formation of linear aggregates, from\ndimers \\cite{Spinozzi02,Giacometti05} up to polymer chains. This topic has\nbeen extensively investigated, through both computer simulations and\nanalytical methods. In the latter case a remarkable example is Wertheim's\nanalytic solution of the \\textit{mean spherical approximation} (MSA)\nintegral equation for dipolar hard spheres (DHS), i.e. hard spheres (HS)\nwith a point dipole at their centre \\cite{Wertheim71} (hereafter referred to\nas I). For the DHS model, several studies predict chain formation, whereas\nlittle can be said about the existence of a fluid-fluid coexistence line,\nsince computer simulations and mean field theories provide contradictory\nresults \\cite{Weis93,Leeuwen93,Sear96,Camp00,Tlusty00}. 
On the other hand,\nfor mesoscopic fluids the importance of combining \\textit{short-ranged}\nanisotropic attractions and repulsions has been well established \\cite%\n{Gazzillo06,Fantoni07}, and hence the long range of the dipolar interaction\nis less suited to the mesoscopic systems considered here, at variance with\ntheir atomistic counterpart.\n\nThe aim of the present paper is to address both the above points, by\nstudying a model with anisotropic surface adhesion that is amenable to an\nanalytical solution, within an approximation which is expected to be valid\nin significant experimental regimes.\n\nIn the isotropic case, the first model with `surface adhesion' was\nintroduced a long time ago by Baxter \\cite{Baxter68,Baxter71}. The interaction\npotential of these `sticky hard spheres' (SHS) includes a HS repulsion plus\na spherically symmetric attraction, described by a square-well (SW) which\nbecomes infinitely deep and narrow, according to a limiting procedure\n(Baxter's sticky limit) that keeps the second virial coefficient finite.\n\nPossible anisotropic variations include `sticky points'\\ \\cite%\n{Sciortino05,Bianchi06,Michele06,Lomakin99,Starr03,Zhang04,Glotzer04a,Glotzer04b,Sciortino07}%\n, `sticky patches'\\ \\cite%\n{Jackson88,Ghonasgi95,Sear99,Mileva00,Kern03,Zhang05,Fantoni07} and, more\nrecently, `Gaussian patches'\\ \\cite{Wilber06,Doye07}. The most common\nversion of patchy sticky models refers to HS with one or more `uniform\ncircular patches', all of the same species. 
This kind of patch has a\nwell-defined circular boundary on the particle surface, and is always\nattractive, with a `uniform' strength of adhesion, which does not depend on\nthe contact point within the patch \\cite{Jackson88}.\n\nIn the present paper we consider a `dipolar-like' SHS model, where the sum\nof a uniform surface adhesion (isotropic background) plus an appropriate\ndipolar sticky correction -- which can be either positive or negative,\ndepending on the orientations of the particles -- yields a nonuniform\nadhesion. Although the adhesion varies continuously and no discontinuous\nboundary exists, the surface of each molecule may be regarded as formed by\ntwo hemispherical `patches' (colored red and blue, respectively, in the\nonline Figure 1). One of these hemispheres is `stickier' than the other, and\nthe entire molecular surface is adhesive, but its stickiness is nonuniform\nand varies in a dipolar fashion. By varying the dipolar contribution, the\ndegree of anisotropy can be changed, in such a way that the total sticky\npotential can be continuously tuned from very strong attractive strength\n(twice the isotropic one) to vanishing adhesion (HS limit). The physical\norigin of this model may be manifold (non-uniform distribution of surface\ncharges, or hydrophobic attraction, or other physical mechanisms), one\nsimple realization being due to an `extremely screened' attraction. The\npresence of a solvent together with a dense ionic atmosphere could induce\nany electrostatic interaction to vanish close to the molecular surface, and\n-- in the idealized sticky limit -- to become \\textit{truncated }exactly at\ncontact.\n\nFor this model, we solve analytically the molecular Ornstein-Zernike (OZ)\nintegral equation, by using a truncated \\textit{Percus-Yevick} (PY)\napproximation, \\textit{with orientational linearization }(PY-OL), since it\nretains only the lowest order terms in the expansions of the correlation\nfunctions in angular basis functions. 
This already provides a clear\nindication of the effects of anisotropy on the surface adhesion.\n\nThe idea of an anisotropic surface adhesion is not new. In a series of\npapers on hydrogen-bonded fluids such as water, Blum and co-workers \\cite%\n{Cummings86,Wei88,Blum90} already studied models of spherical molecules with\nanisotropic pair potentials, including both electrostatic multipolar\ninteractions and sticky adhesive terms of multipolar symmetry. Within\nappropriate closures, these authors outlined the general features of the\nanalytic solutions of the OZ equation by employing a very powerful formalism\nbased upon expansions in rotational invariants. In particular, Blum,\nCummings and Bratko \\cite{Blum90} obtained an analytic solution within a\nmixed MSA\/PY closure (extended to mixtures by Protsykevich \\cite%\n{Protsykevich03}) for molecules which have surface adhesion of dipolar\nsymmetry and at most dipole-dipole interactions. From the physical point of\nview, our model -- with `dipolar-like' adhesion resulting from the sum of an\nisotropic plus a dipolar term -- is different and more specifically\ncharacterized with respect to the one of Ref. [32], whose adhesion has a\nsimpler, strictly `dipolar', symmetry. From the mathematical point of view,\nhowever, the same formalism employed by Blum \\textit{et al.} \\cite{Blum90}\ncould also be applied to our model. Unfortunately, the solution given in\nRef. 
[32] is not immediately usable for the actual computation of\ncorrelation functions, since the explicit determination of the parameters\ninvolved in their analytical expressions is lacking.\n\nIn the present paper we adopt a simpler solution method, by extending the\nelegant approach devised by Wertheim for DHS within the MSA closure \\cite%\n{Wertheim71}, and, most importantly, we aim at providing a \\textit{complete}\nanalytic solution -- including the determination of all required parameters\n-- within our PY-OL approximation.\n\nThe paper is organized as follows. Section II defines the model. In Section\nIII we recall the molecular OZ integral equation and the basic formalism. In\nSection IV we present the analytic solution. Exact numerical results for\nsome necessary parameters, as well as very accurate analytic approximations\nfor them, will be shown in Section V. Some preliminary plots illustrating\nthe effects of the anisotropic adhesion on the local structure are reported\nin Section VI. Phase stability is briefly discussed in Section VII, while\nfinal remarks and conclusions are offered in Section VIII.\\bigskip\n\n\\bigskip\n\n\\section{HARD SPHERES WITH\\ ADHESION OF\\ DIPOLAR-LIKE\\ SYMMETRY}\n\nLet the symbol $i\\equiv \\left( \\mathbf{r}_{i},\\Omega _{i}\\right) $ (with $%\ni=1,2,3,\\ldots $) denote both the position $\\mathbf{r}_{i}$ of the molecular\ncentre and the orientation $\\Omega _{i}$ of molecule $i$; for linear\nmolecules, $\\Omega _{i}\\equiv \\left( \\theta _{i},\\varphi _{i}\\right) $\nincludes the usual polar and azimuthal angles. 
Translational invariance for\nuniform fluids allows one to write the dependence of the pair correlation\nfunction $g\\left(1,2\\right)$ as \n\\begin{equation*}\n(1,2)=(\\mathbf{r}_{12},\\Omega _{1},\\Omega _{2})=(r,\\Omega _{1},\\Omega _{2},%\n\\widehat{\\mathbf{r}}_{12})=(r,\\Omega _{1},\\Omega _{2},\\Omega _{r}),\n\\end{equation*}%\nwith $\\mathbf{r}_{12}=\\mathbf{r}_{2}-\\mathbf{r}_{1}$, $r=|\\mathbf{r}_{12}|$,\nand $\\Omega _{r}$ being the solid angle associated with $\\widehat{\\mathbf{r}}%\n_{12}=\\mathbf{r}_{12}\/r.$\n\nIn the spirit of Baxter's isotropic counterpart \\cite{Baxter68,Gazzillo04},\nour model is defined by the Mayer function given by \n\\begin{equation}\nf^{\\mathrm{SHS}}(1,2)=f^{\\mathrm{HS}}(r)+t\\ \\epsilon (1,2)\\ \\sigma \\delta\n\\left( r-\\sigma \\right) , \\label{eq2}\n\\end{equation}%\nwhere $f^{\\mathrm{HS}}(r)=\\Theta \\left( r-\\sigma \\right) -1$ is its HS\ncounterpart, $\\Theta $ is the Heaviside step function ($\\Theta (x<0)=0$, $%\n\\Theta (x>0)=1$) and $\\delta \\left( r-\\sigma \\right) $ the Dirac delta\nfunction, which ensures that the adhesive interaction occurs only at contact\n($\\sigma $ being the hard sphere diameter). An appropriate limit of the\nfollowing particular square well potential of width $R-\\sigma $ \n\\begin{equation*}\n\\Phi ^{\\mathrm{SW}}\\left( 1,2\\right) =\\left\\{ \n\\begin{array}{ccc}\n+\\infty & & 0<r<\\sigma \\\\ \n-k_{B}T\\ \\ln \\left[ 1+t\\ \\epsilon (1,2)\\ \\dfrac{\\sigma }{R-\\sigma }\\right] & & \\sigma <r<R \\\\ \n0 & & r>R\\text{ ,}%\n\\end{array}%\n\\right.\n\\end{equation*}%\ncan be shown to lead to Eq. 
(\\ref{eq2}).\n\nThe angular dependence is buried in the angular factor%\n\\begin{equation}\n\\epsilon (1,2)=1+\\alpha D(1,2),\n\\end{equation}%\nincluding the dipolar function%\n\\begin{equation*}\nD(1,2)=D(\\Omega _{1},\\Omega _{2},\\Omega _{r})=3(\\mathbf{u}_{1}\\cdot \\hat{%\n\\mathbf{r}})(\\mathbf{u}_{2}\\cdot \\hat{\\mathbf{r}})-\\mathbf{u}_{1}\\cdot \n\\mathbf{u}_{2}\n\\end{equation*}%\nwhich stems from the dipole-dipole potential $\\phi ^{\\mathrm{dip-dip}%\n}(1,2)=-\\mu ^{2}D(1,2)\/r^{3}$ ($\\mu $ is the magnitude of the dipole moment)\nand is multiplied by the tunable \\textit{anisotropy parameter} $\\alpha $. In\nthe isotropic case, $\\alpha =0$, one has $\\epsilon (1,2)=1$. Here and in the\nfollowing, $\\hat{\\mathbf{r}}$ coincides with $\\hat{\\mathbf{r}}_{12}=-\\hat{%\n\\mathbf{r}}_{21}$ , while $\\mathbf{u}_{i\\text{ }}$is the versor attached to\nmolecule $i$ (drawn as yellow arrow in Figure 1) which completely determines\nits orientation $\\Omega _{i}$. Note the symmetry $D(2,1)=D(1,2)$.\n\nThe condition $\\epsilon (1,2)\\geq 0$ must be enforced in order to preserve a\ncorrect definition of the sticky limit, ensuring that the total sticky\ninteraction remains attractive for all orientations, and the range of\nvariability $-2\\leq D(1,2)\\leq 2$ yields the limitation $0\\leq \\alpha \\leq \n\\frac{1}{2}$ on the anisotropy degree. 
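As a quick numerical illustration of these bounds (a hypothetical helper, not part of the paper's formalism), one can evaluate $D(1,2)$ and $\epsilon(1,2)$ for the limiting configurations of Figure 1 and check that $-2\leq D\leq 2$ keeps $\epsilon$ nonnegative whenever $\alpha\leq 1/2$:

```python
def dipolar_factor(u1, u2, r_hat):
    """D(1,2) = 3 (u1.r)(u2.r) - u1.u2 for unit vectors u1, u2, r_hat."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return 3.0 * dot(u1, r_hat) * dot(u2, r_hat) - dot(u1, u2)

def stickiness_factor(u1, u2, r_hat, alpha):
    """Angular factor eps(1,2) = 1 + alpha * D(1,2); nonnegative for alpha <= 1/2."""
    return 1.0 + alpha * dipolar_factor(u1, u2, r_hat)

z = (0.0, 0.0, 1.0)
x = (1.0, 0.0, 0.0)
# Head-to-tail parallel (u1 = u2 = r_hat): D = 2, so eps = 1 + 2*alpha (maximum adhesion).
# Antiparallel head-to-head (u1 = -u2 = r_hat): D = -2, so eps = 1 - 2*alpha (minimum adhesion).
# Orthogonal (u1 . u2 = 0, u2 . r_hat = 0): D = 0, so eps = 1 (isotropic SHS case).
```

For $\alpha=1/2$ the antiparallel contact indeed gives $\epsilon=0$, i.e. the vanishing-adhesion (HS) limit described in the text.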
The stickiness parameter $t$ -- equal\nto $\\left( 12\\tau \\right) ^{-1}$ in Baxter's original notation \\cite%\n{Baxter68} -- measures the strength of surface adhesion relative to the\nthermal energy $k_{B}T$ ($k_{B}$ being the Boltzmann constant, $T$ the\nabsolute temperature) and increases with decreasing temperature.\n\nIf we adopt an `inter-molecular reference frame' (with both polar axis and\ncartesian $z$-axis taken along $\\mathbf{r}_{12}$), then the cartesian\ncomponents of $\\hat{\\mathbf{r}}$ and $\\mathbf{u}_{i}$ are $(0,0,1)$ and $%\n(\\sin \\theta _{i}\\cos \\varphi _{i}$, $\\sin \\theta _{i}\\sin \\varphi _{i}$, $%\n\\cos \\theta _{i})$, respectively, and thus%\n\\begin{equation}\nD(1,2)=2\\cos \\theta _{1}\\cos \\theta _{2}-\\sin \\theta _{1}\\sin \\theta\n_{2}\\cos \\left( \\varphi _{1}-\\varphi _{2}\\right) . \\label{eq4b}\n\\end{equation}\n\nThe strength of adhesion between two particles $1$ and $2$ at contact\ndepends -- in a continuous way -- on the relative orientation of $\\mathbf{u}%\n_{1}$ and $\\mathbf{u}_{2}$ as well as on the versor $\\widehat{\\mathbf{r}}%\n_{12}$ of the intermolecular distance. We shall call \\textit{parallel} any\nconfiguration with $\\mathbf{u}_{1}\\cdot \\mathbf{u}_{2}=1$, while \\textit{%\nantiparallel} configurations are those with $\\mathbf{u}_{1}\\cdot \\mathbf{u}%\n_{2}=-1$ (see Figure 1). For all configurations with $D(1,2)>0$, the\nanisotropic part of adhesion is attractive and adds to the isotropic one.\nThus, the surface adhesion is maximum, and larger than in the isotropic\ncase, when $\\mathbf{u}_{1}=\\mathbf{u}_{2}=$ $\\widehat{\\mathbf{r}}_{12}$ and\nthus $\\epsilon (1,2)=1+2\\alpha $ (head-to-tail parallel configuration, shown\nin Figure 1b). On the contrary, when $D(1,2)<0$ the anisotropic contribution\nis repulsive and subtracts from the isotropic one, so that the total sticky\ninteraction still remains attractive. 
Then, the stickiness is minimum, and\nmay even vanish for $\\alpha =1\/2$, when $\\mathbf{u}_{1\\text{ }}=-$ $\\mathbf{u%\n}_{2}=\\widehat{\\mathbf{r}}_{12}$ and thus $\\epsilon (1,2)=1-2\\alpha $\n(head-to-head or tail-to-tail antiparallel configurations, reported in\nFigure 1c). The intermediate case of \\textit{orthogonal }configuration ($%\n\\mathbf{u}_{2}$ perpendicular to $\\mathbf{u}_{1}$) corresponds to $D(1,2)=0,$\nwhich is equivalent to the isotropic SHS interaction.\n\nIt proves convenient to `split' $f^{\\mathrm{SHS}}(1,2)$ as \n\\begin{equation}\n\\text{\\ }f^{\\mathrm{SHS}}(1,2)=f_{0}(r)+f_{\\mathrm{ex}}(1,2), \\label{eq6a}\n\\end{equation}\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\nf_{0}(r)=f^{\\mathrm{HS}}(r)+t\\ \\sigma \\delta \\left( r-\\sigma \\right) \\equiv\nf^{\\mathrm{isoSHS}}(r) \\\\ \nf_{\\mathrm{ex}}(1,2)=\\left( \\alpha t\\right) \\ \\sigma \\delta \\left( r-\\sigma\n\\right) \\ D(1,2),\\text{ \\ \\ \\ \\ \\ \\ \\ \\ }%\n\\end{array}%\n\\right. \\label{eq6b}\n\\end{equation}%\nwhere the spherically symmetric $f_{0}(r)$ corresponds to the `reference'\nsystem with isotropic background adhesion, while $f_{\\mathrm{ex}}(1,2)$ is\nthe orientation-dependent `excess' term.\n\nWe remark that, as shown in Ref. I (see also Table I in Appendix A of the\npresent paper), convolutions of $f^{\\mathrm{SHS}}$-functions generate\ncorrelation functions with a more complex angular dependence. Therefore, in\naddition to $D(1,2)$, it is necessary to consider also \n\\begin{equation}\n\\Delta (1,2)=\\mathbf{u}_{1}\\cdot \\mathbf{u}_{2}\\text{ }=\\cos \\theta _{1}\\cos\n\\theta _{2}+\\sin \\theta _{1}\\sin \\theta _{2}\\cos \\left( \\varphi _{1}-\\varphi\n_{2}\\right) ,\\ \n\\end{equation}%\nwhere the last equality holds true in the inter-molecular frame. 
The limits\nof variation for $\\Delta (1,2)$ are clearly $-1\\leq \\Delta (1,2)\\leq 1$.\n\n\\section{BASIC FORMALISM}\n\nThis section, complemented by Appendix A, presents the main steps of \\\nWertheim's formalism, as well as its extension to our model.\n\n\\subsection{Molecular Ornstein-Zernike equation}\n\nThe \\textit{molecular OZ integral equation} for a pure and homogeneous fluid\nof molecules interacting via non-spherical pair potentials is\n\n\\begin{equation}\nh(1,2)=c(1,2)+\\rho \\int d\\mathbf{r}_{3}\\ \\left\\langle \\ c(1,3)\\ h(3,2)\\\n\\right\\rangle _{\\Omega _{3}}\\ , \\label{oz4}\n\\end{equation}%\nwhere $h(1,2)$ and $c(1,2)$ are the total and direct correlation functions,\nrespectively, $\\rho $ is the number density, and $g(1,2)=1+h(1,2)$ is the\npair distribution function \\cite{Friedman85,Lee88,Hansen06}. Moreover, the\nangular brackets with subscript $\\Omega $ denote an average over the\norientations, i.e. $\\left\\langle \\cdots \\right\\rangle _{\\Omega }=\\left( 4\\pi\n\\right) ^{-1}\\int d\\Omega \\ \\cdots .$\n\nThe presence of convolution makes it convenient to Fourier transform (FT) this\nequation, by integrating with respect to the space variable $\\mathbf{r}$\nalone according to\n\n\\begin{equation}\n\\widehat{F}\\left( \\mathbf{k},\\Omega _{1},\\Omega _{2}\\right) =\\int d\\mathbf{r}%\n\\ F(\\mathbf{r},\\Omega _{1},\\Omega _{2})\\ \\exp (i\\mathbf{k\\cdot r}).\n\\label{oz5}\n\\end{equation}%\nThe $\\mathbf{r}$-space convolution becomes a product in $\\mathbf{k}$-space,\nthus leading to%\n\\begin{equation}\n\\widehat{h}(\\mathbf{k},\\Omega _{1},\\Omega _{2})=\\widehat{c}(\\mathbf{k}%\n,\\Omega _{1},\\Omega _{2})+\\rho \\ \\left\\langle \\widehat{c}(\\mathbf{k},\\Omega\n_{1},\\Omega _{3})\\ \\widehat{h}(\\mathbf{k},\\Omega _{3},\\Omega\n_{2})\\right\\rangle _{\\Omega _{3}}\\ . 
\\label{oz6}\n\\end{equation}\n\nAs usual the OZ equation involves two unknown functions, $h$ and $c$, and\ncan be solved only after adding a closure, that is a second (approximate)\nrelationship among $c$, $h$ and the potential.\n\n\\subsection{Splitting of the OZ equation: reference and excess part}\n\nThe particular form of our potential, as defined by the Mayer function of\nEq. (\\ref{eq2}), gives rise to a remarkable exact splitting of the original\nOZ equation. Using diagrammatic methods \\cite{Friedman85,Lee88,Hansen06} it\nis easy to see that both $c$ and $h$ can be expressed as graphical series\ncontaining the Mayer function $f$ as \\textit{bond function}. If \\ $f^{%\n\\mathrm{SHS}}=f_{0}+f_{\\mathrm{ex}}$ is substituted into all graphs of the\nabove series, each diagram with $n$ $f$-bonds will generate $2^{n}$ new\ngraphs. In the cluster expansion of $c$, the sum of all graphs having only $%\nf_{0}$-bonds will yield $c_{0}(r)=c^{\\mathrm{isoSHS}}(r)$, i.e. the\ndirect correlation function (DCF) of the reference fluid with isotropic\nadhesion. On the other hand, all remaining diagrams have \\textit{at least one%\n} $f_{\\mathrm{ex}}$-bond, whose expression is given by Eq. (\\ref{eq6b}).\nThus, in the sum of this second subset of graphs it is possible to factorize \n$\\alpha t$, and we can write \n\\begin{equation}\n\\text{\\ }c^{\\mathrm{SHS}}(1,2)=c_{0}(r)+c_{\\mathrm{ex}}(1,2), \\label{oz7a}\n\\end{equation}%\n\\begin{equation}\n\\text{ \\ \\ \\ \\ \\ \\ \\ \\ \\ }\\left\\{ \n\\begin{array}{c}\nc_{0}(r)=c^{\\mathrm{isoSHS}}(r),\\text{\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }\n\\\\ \nc_{\\mathrm{ex}}(1,2)=\\left( \\alpha t\\right) \\ c^{\\dagger }(1,2).\\text{ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ }%\n\\end{array}%\n\\right. 
\\label{oz7b}\n\\end{equation}%\nSimilarly, for $h$ we get\n\n\\begin{equation}\n\\text{\\ }h^{\\mathrm{SHS}}(1,2)=h_{0}(r)+h_{\\mathrm{ex}}(1,2), \\label{oz8a}\n\\end{equation}%\n\\begin{equation}\n\\text{ \\ \\ \\ \\ \\ \\ \\ \\ \\ }\\left\\{ \n\\begin{array}{c}\nh_{0}(r)=h^{\\mathrm{isoSHS}}(r),\\text{\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }\n\\\\ \nh_{\\mathrm{ex}}(1,2)=\\left( \\alpha t\\right) \\ h^{\\dagger }(1,2).\\text{ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ }%\n\\end{array}%\n\\right. \\label{oz8b}\n\\end{equation}%\n\\ \\ \\ \\ \\ \n\nNote that this useful separation into reference and excess part may also be\nextended to other correlation functions, such as $\\gamma (1,2)\\equiv\nh(1,2)-c(1,2)$, $g(1,2)=1+h(1,2)$, and the `cavity' function $%\ny(1,2)=g(1,2)\/e(1,2)$. The function $\\gamma $ coincides with the OZ\nconvolution integral, without singular $\\delta $-terms. Similarly $y$ is\nalso `regular', and its exact expression reads $y\\left( 1,2\\right) =\\exp %\n\\left[ \\ \\gamma \\left( 1,2\\right) +B(1,2)\\ \\right] $, where the `bridge'\nfunction $B$ is defined by a complicated cluster expansion \\cite%\n{Friedman85,Lee88,Hansen06}.\n\nFrom Eqs. 
(\\ref{oz7a})-(\\ref{oz8b}), which are merely a consequence of the\nparticular form of $f_{\\mathrm{ex}}$ in the splitting of $f^{\\mathrm{SHS}}$,\none immediately sees that, if the anisotropy degree\\textit{\\ }$\\alpha $\ntends to zero, then%\n\\begin{equation}\n\\lim_{\\alpha \\rightarrow 0}c_{\\mathrm{ex}}(1,2)=\\lim_{\\alpha \\rightarrow\n0}h_{\\mathrm{ex}}(1,2)=\\lim_{\\alpha \\rightarrow 0}y_{\\mathrm{ex}}(1,2)=0.\n\\label{oz9}\n\\end{equation}\n\nNote that the spherically symmetric parts $c_{0}$ and $h_{0}$ must be\nrelated through the OZ equation for the reference fluid with isotropic\nadhesion (\\textit{reference OZ equation})\n\n\\begin{equation}\nh_{0}(r)=c_{0}(r)+\\rho \\int d\\mathbf{r}_{3}\\ c_{0}(r_{13})\\ h_{0}(r_{32}).\\ \n\\label{ozeq1}\n\\end{equation}%\nThus, substituting $c$ and $h$ of Eq. (\\ref{oz4}) with $c_{0}+c_{\\mathrm{ex}%\n} $ and $h_{0}+h_{\\mathrm{ex}}$, respectively, and subtracting Eq. (\\ref%\n{ozeq1}), we find that $c_{\\mathrm{ex}}$ and $h_{\\mathrm{ex}}$ must obey the\nfollowing relation\n\n\\begin{eqnarray*}\nh_{\\mathrm{ex}}(1,2) &=&c_{\\mathrm{ex}}(1,2)+\\rho \\int d\\mathbf{r}_{3}\\ %\n\\left[ \\ c_{0}(r_{13})\\ \\left\\langle \\ h_{\\mathrm{ex}}(3,2)\\ \\right\\rangle\n_{\\Omega _{3}}\\right. \\\\\n&&\\left. 
+\left\langle \ c_{\mathrm{ex}}(1,3)\ \right\rangle _{\Omega\n_{3}}h_{0}(r_{32})+\left\langle \ c_{\mathrm{ex}}(1,3)\ h_{\mathrm{ex}%\n}(3,2)\ \right\rangle _{\Omega _{3}}\ \right] \ .\n\end{eqnarray*}%\nWhen \n\begin{equation}\n\left\langle \ c_{\mathrm{ex}}(1,3)\ \right\rangle _{\Omega\n_{3}}=\left\langle \ h_{\mathrm{ex}}(3,2)\ \right\rangle _{\Omega _{3}}=0,\n\label{eq_cond}\n\end{equation}%\nthe orientation-dependent excess parts $c_{\mathrm{ex}}$ and $h_{\mathrm{ex}%\n} $ satisfy the equality \n\begin{equation}\nh_{\mathrm{ex}}(1,2)=c_{\mathrm{ex}}(1,2)+\rho \int d\mathbf{r}_{3}\\n\left\langle \ c_{\mathrm{ex}}(1,3)\ h_{\mathrm{ex}}(3,2)\ \right\rangle\n_{\Omega _{3}}\ , \label{ozeq2}\n\end{equation}%\nwhich is decoupled from that of the reference fluid and may be regarded as\nan OZ equation for the excess part (\textit{excess OZ equation}). As we\nshall see, condition (\ref{eq_cond}) is satisfied in our scheme.\n\nWe stress that, in principle, the closures for Eq. (\ref{ozeq1}) and Eq. (%\n\ref{ozeq2}), respectively, might be \textit{different}. In addition,\nalthough the two OZ equations are decoupled, a suitably selected closure\nmight establish a relationship between $F_{0}$ and $F$ $(F=c,h).$\n\n\subsection{Percus-Yevick closure with orientational linearization}\n\nFor hard-core fluids, $h$ inside the core and $c$ outside it are given by \n\begin{equation}\n\left\{ \n\begin{array}{ccc}\nh(1,2)=-1 & & \text{for \ }0<r<\sigma , \\ \nc(1,2)=0 & & \text{for \ }r>\sigma .%\n\end{array}%\n\right. \label{c11}\n\end{equation}%\nAt $r=2\sigma $ $h_{D,\text{\textrm{reg}}}$ and $h_{D,\text{\textrm{reg}}%\n}^{0}$ have the same discontinuity. We also get\n\n\begin{equation}\nh_{D,\text{\textrm{reg}}}(\sigma ^{+})=h_{D,\text{\textrm{reg}}}^{0}(\sigma\n^{+})+3K_{\mathrm{reg}}. \label{f7}\n\end{equation}%\nClearly, these results must agree with those obtained from Eq. 
(\\ref{oz12}),\ni.e. \n\\begin{equation*}\n\\begin{array}{c}\nh_{D}^{0}(r)=h_{D}(r)-3\\psi (r),\\text{ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \n\\psi (r)\\equiv \\int_{r}^{\\infty }h_{D}(x)\\ x^{-1}\\ dx=\\Lambda _{D}\\ \\theta\n\\left( \\sigma -r\\right) +\\int_{r}^{\\infty }h_{D,\\text{\\textrm{reg}}}(x)\\\nx^{-1}\\ dx.%\n\\end{array}%\n\\text{\\ }\n\\end{equation*}%\nIn order to recover Eq. (\\ref{f7}) along this second route, note that $\\psi\n(r)$ is not continuous at $r=\\sigma $. In fact, from Eqs. (\\ref{oz14b}) and (%\n\\ref{oz14bb}) follows $\\psi (\\sigma ^{-})=K$ whereas $\\psi (\\sigma ^{+})=K_{%\n\\mathrm{reg}}.$\n\nii) Similarly, for $c_{D}(r)$ we obtain $c_{D}(r)=c_{D,\\text{\\textrm{reg}}%\n}(r)+\\Lambda _{D}\\ \\sigma \\delta (r-\\sigma )$, with\n\n\\begin{equation}\nc_{D,\\text{\\textrm{reg}}}(r)=c_{D,\\text{\\textrm{reg}}}^{0}(r)-3r^{-3}\\left[\n\\int_{0}^{r}\\ c_{D,\\text{\\textrm{reg}}}^{0}(x)x^{2}\\ dx+\\Lambda _{D}\\sigma\n^{3}\\ \\theta (r-\\sigma )\\right] , \\label{c12}\n\\end{equation}%\nsince $\\int_{0}^{r}\\ \\delta (x-\\sigma )x^{2}dx=\\sigma ^{2}\\theta (r-\\sigma )$%\n. On the other hand, from Eq. (\\ref{oz12}) one easily finds that%\n\\begin{equation} \\label{cDcD0}\nc_{D}(r)=c_{D}^{0}(r)\\text{ \\ \\ \\ for \\ }r\\geq \\sigma .\n\\end{equation}%\n\\ \\ \\ \n\niii) By applying the relationship (\\ref{oz12a}) to $c_{D}(r)$, using Eq. 
(%\n\ref{cDcD0}) and noticing that $c_{D}(r)=0$ for $r>\sigma $ within the PY-OL\napproximation, leads to a \textit{sum rule}: \n\begin{equation}\n\int_{0}^{\infty }c_{D}^{0}(x)\ x^{2}\ dx=\int_{0}^{\sigma }\ c_{D,\text{%\n\textrm{reg}}}^{0}(x)x^{2}\ dx+\Lambda _{D}\sigma ^{3}\ =0, \label{f4b}\n\end{equation}%\nthat we will exploit later.\n\n\section{ANALYTIC\ SOLUTION}\n\nWe have seen that the molecular PY-OL integral equation (IE) for our \textit{%\nanisotropic}-SHS model splits into three IE's \n\begin{equation}\n\left\{ \n\begin{array}{cc}\nh_{m}(r)=c_{m}(r)+\rho _{m}\ (h_{m}\star c_{m})\text{\ \ } & \\ \nh_{m}(r)=-1 & 0<r<\sigma , \\ \nc_{m}(r)=\Lambda _{m}\ \sigma \delta \left( r-\sigma \right) & r>\sigma ,%\n\end{array}%\n\right. \label{gpy1}\n\end{equation}%\nwith $m=0,1,2$. Each of these IE's has the same structure as in the isotropic\nSHS case, and can therefore be solved by Baxter's factorization technique,\nwhich expresses the OZ equation in terms of an auxiliary \textit{factor\nfunction} $q_{m}(r)$. Since the DCF vanishes, apart from the adhesive $\delta $-term,\nfor $r>\sigma $, one finds $%\nq\left( r\right) =0$ for $r>\sigma $ \cite{Gazzillo04}.\n\nOn applying Baxter's factorization to Eqs. ($\ref{gpy1}$), we get \n\begin{equation}\nrh_{m}\left( r\right) =-q_{m}^{\prime }(r)+2\pi \rho _{m}\int_{0}^{\sigma\n}du\ q_{m}\left( u\right) \left( r-u\right) h_{m}\left( |r-u|\right) ,\n\label{ie3}\n\end{equation}%\nwith $m=0,1,2$. Now the closure $c_{m}(r)=\Lambda _{m}\ \sigma \delta \left(\nr-\sigma \right) $ for \ $r\geq \sigma $ implies that the same $\delta $%\n-term must appear in $h_{m}\left( r\right) $. Thus, for $0\leq r\leq \sigma $%\n, using $h_{m}(r)=-1+\Lambda _{m}\ \sigma \delta \left( r-\sigma \right) $,\nwe find\n\n\begin{equation*}\nq_{m}^{\prime }(r)=a_{m}r+b_{m}\sigma -\Lambda _{m}\ \sigma ^{2}\delta\n\left( r-\sigma \right) ,\n\end{equation*}%\nwith%\n\begin{equation}\n\left\{ \n\begin{array}{c}\na_{m}=\ 1-2\pi \rho _{m}\int_{0}^{\sigma }du\ q_{m}\left( u\right) \ , \\ \nb_{m}\sigma =\ 2\pi \rho _{m}\int_{0}^{\sigma }du\ q_{m}\left( u\right) \ u\\n.\text{ \ \ }%\n\end{array}%\n\right. 
\label{ie4}\n\end{equation}\n\nThe $\delta $-term of $q_{m}^{\prime }(r)$ means that $q_{m}(r)$ has a\ndiscontinuity $q_{m}(\sigma ^{+})-q_{m}(\sigma ^{-})=-\Lambda _{m}\sigma\n^{2} $, with $q_{m}(\sigma ^{+})=0.$ Integrating $q_{m}^{\prime }(r)$,\nsubstituting this result into Eqs. ($\ref{ie4}$), and solving the\ncorresponding algebraic system, we find the following solution\n\n\begin{equation}\nq_{m}(r)=\left\{ \n\begin{array}{cc}\n\ \frac{1}{2}a_{m}(r-\sigma )^{2}+\left( a_{m}+b_{m}\right) \sigma (r-\sigma\n)\ +\Lambda _{m}\ \sigma ^{2} & \text{ \ \ \ \ }0\leq r\leq \sigma , \\ \n0 & \text{ \ \ \ otherwise,}%\n\end{array}%\n\right. \label{so1}\n\end{equation}%\n\begin{eqnarray}\na_{m} &=&\ a^{\mathrm{HS}}(\eta _{m})-\frac{12\eta _{m}\ \Lambda _{m}\ }{%\n1-\eta _{m}}\ \label{so2} \\\n&& \notag \\\nb_{m} &=&\ b^{\mathrm{HS}}(\eta _{m})+\frac{6\eta _{m}\ \Lambda _{m}\ }{%\n1-\eta _{m}}\ \ \label{so3} \\\n&& \notag \\\n\text{\ }\eta _{m} &=&\left( \pi \/6\right) \rho _{m}\sigma ^{3}\text{ \ \ \ }\n\\\n&& \notag \\\na^{\mathrm{HS}}(x) &=&\frac{1+2x}{\left( 1-x\right) ^{2}},\text{ \ \ \ \ \ \\n\ }b^{\mathrm{HS}}(x)=-\frac{3x}{2\left( 1-x\right) ^{2}}.\n\end{eqnarray}\n\nFrom the first of Eqs. ($\ref{ie2b}$) we get the DCFs $c_{m}(r)=c_{m,\text{%\n\textrm{reg}}}(r)+\ \Lambda _{m}\ \sigma \delta (r-\sigma )$, where $c_{m,%\n\text{\textrm{reg}}}(r)=0$ for $r\geq \sigma $, while for $0<r<\sigma $ it\nis a simple polynomial in $r$ determined by $q_{m}$.\n\nFor $r>\sigma $, Eqs. 
($\ref{ie3}$) becomes%\n\begin{equation}\nH_{m,\text{\textrm{reg}}}\left( r\right) =12\eta _{m}\ \sigma ^{-3}\left\{ \n\begin{array}{cc}\n\begin{array}{c}\n\int_{0}^{r-\sigma }du\ q_{m}\left( u\right) \ H_{m,\text{\textrm{reg}}%\n}\left( r-u\right) \text{ \ \ \ \ \ \ \ } \\ \n+\ \int_{r-\sigma }^{\sigma }du\ q_{m}\left( u\right) \left( u-r\right)\n+\Lambda _{m}\sigma ^{2}\ q_{m}\left( r-\sigma \right) \text{ \ \ \ }%\n\end{array}\n& \sigma <r<2\sigma , \\ \n\int_{0}^{\sigma }du\ q_{m}\left( u\right) \ H_{m,\text{\textrm{reg}}}\left(\nr-u\right) & r>2\sigma ,%\n\end{array}%\n\right. \label{so7}\n\end{equation}%\nwhere $H_{m}\left( r\right) \equiv rh_{m}\left( r\right) $. Due to the last\nterm of Eq. ($\ref{so7}$) and the discontinuity of $q_{m}\left( r\right) $\nat $r=\sigma $, $h_{m,\text{\textrm{reg}}}(r)$ has a jump at $r=2\sigma $ \n\cite{Kranendonk88,Miller04}: $h_{m,\text{\textrm{reg}}}(2\sigma ^{+})-h_{m,%\n\text{\textrm{reg}}}(2\sigma ^{-})=-6\eta _{m}\ \Lambda _{m}^{2}.$\n\n\bigskip\n\n\subsection{An important relationship}\n\nIn Appendix B it is shown that a remarkable consequence of the sum rule (\ref%\n{f4b}) is the condition%\n\begin{equation}\na_{2}=a_{1}\text{ ,} \label{so10}\n\end{equation}%\nthat will play a significant role in the determination of the unknown\nparameters $\Lambda _{1},$ $\Lambda _{2}$ and $K$ \ (see Appendix B).\n\n\bigskip\n\n\subsection{Reference fluid coefficients}\n\nThe $m=0$ case corresponds to Baxter's PY results for the reference fluid of\nisotropic SHS particles \cite{Baxter68,Baxter71}. 
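The closed-form factor function of Eqs. (\ref{so1})-(\ref{so3}) is straightforward to evaluate and to check against the defining relations (\ref{ie4}). A minimal Python sketch (illustrative names of our own; $\sigma =1$ units, and $2\pi \rho _{m}=12\eta _{m}\/\sigma ^{3}$):

```python
import math

def q_factor(eta_m, lam_m, sigma=1.0):
    """Baxter factor function q_m(r) of Eq. (so1), with the
    coefficients a_m, b_m of Eqs. (so2)-(so3)."""
    a_hs = (1.0 + 2.0 * eta_m) / (1.0 - eta_m) ** 2
    b_hs = -3.0 * eta_m / (2.0 * (1.0 - eta_m) ** 2)
    a_m = a_hs - 12.0 * eta_m * lam_m / (1.0 - eta_m)
    b_m = b_hs + 6.0 * eta_m * lam_m / (1.0 - eta_m)

    def q(r):
        if 0.0 <= r <= sigma:
            return (0.5 * a_m * (r - sigma) ** 2
                    + (a_m + b_m) * sigma * (r - sigma)
                    + lam_m * sigma ** 2)
        return 0.0  # q_m vanishes outside [0, sigma]

    return q, a_m, b_m
```

Note that $q_{m}(\sigma ^{-})=\Lambda _{m}\sigma ^{2}$ follows at once from Eq. (\ref{so1}), and the identities $a_{m}=1-2\pi \rho _{m}\int_{0}^{\sigma }du\,q_{m}(u)$ and $b_{m}\sigma =2\pi \rho _{m}\int_{0}^{\sigma }du\,q_{m}(u)\,u$ of Eqs. (\ref{ie4}) can be confirmed by direct quadrature.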
We have: $q_{0}(r)=q^{%\n\\mathrm{isoSHS}}(r;\\eta ,\\Lambda _{0}),$ and%\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\nc_{0}(r)=c^{\\mathrm{isoSHS}}(r;\\eta ,\\Lambda _{0})=c_{\\text{\\textrm{reg}}}^{%\n\\mathrm{isoSHS}}(r;\\eta ,\\Lambda _{0})+\\Lambda _{0}\\ \\sigma \\delta (r-\\sigma\n)\\text{ } \\\\ \nh_{0}(r)=h^{\\mathrm{isoSHS}}(r;\\eta ,\\Lambda _{0})=h_{\\text{\\textrm{reg}}}^{%\n\\mathrm{isoSHS}}(r;\\eta ,\\Lambda _{0})+\\Lambda _{0}\\ \\sigma \\delta (r-\\sigma\n)%\n\\end{array}%\n\\right. \\label{f1}\n\\end{equation}%\n(for simplicity, we omit -- here and in the following -- the superscript PY).\n\n\\subsection{$\\Delta -$ and $D-$coefficients}\n\nWe write $q_{m}(r)=q^{\\mathrm{isoSHS}}(r;\\eta _{m},\\Lambda _{m})$ with $%\nm=1,2 $. Then,\n\ni) For the $\\Delta $\\textit{-coefficients}, after recalling Eq. (\\ref{oz15a}%\n) and exploiting Eqs. (\\ref{c10}), we end up with:%\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\nc_{\\Delta }(r)=2K\\left[ c_{0,\\text{\\textrm{reg}}}(r;2K\\eta ,\\Lambda\n_{2})-c_{0,\\text{\\textrm{reg}}}(r;-K\\eta ,\\Lambda _{1})\\right] +\\Lambda\n_{\\Delta }\\ \\sigma \\delta (r-\\sigma )\\text{\\ \\ \\ } \\\\ \nh_{\\Delta }(r)=2K\\left[ h_{0,\\text{\\textrm{reg}}}(r;2K\\eta ,\\Lambda\n_{2})-h_{0,\\text{\\textrm{reg}}}(r;-K\\eta ,\\Lambda _{1})\\right] +\\Lambda\n_{\\Delta }\\ \\sigma \\delta (r-\\sigma ).\\text{ \\ }%\n\\end{array}%\n\\right. 
\label{f2}\n\end{equation}\n\nii) For the $D$\textit{-coefficients}, we get\n\n\begin{equation}\n\left\{ \n\begin{array}{c}\nc_{D}^{0}(r)=2K\left[ c_{0,\text{\textrm{reg}}}(r;2K\eta ,\Lambda _{2})+%\n\frac{1}{2}c_{0,\text{\textrm{reg}}}(r;-K\eta ,\Lambda _{1})\right] \\n+\Lambda _{D}\ \sigma \delta (r-\sigma )\text{ \ } \\ \nh_{D}^{0}(r)=2K\left[ h_{0,\text{\textrm{reg}}}(r;2K\eta ,\Lambda _{2})+%\n\frac{1}{2}h_{0,\text{\textrm{reg}}}(r;-K\eta ,\Lambda _{1})\right] \\n+\Lambda _{D}\ \sigma \delta (r-\sigma ).\text{ }%\n\end{array}%\n\right. \label{f3}\n\end{equation}%\nFinally, from $c_{D}^{0}(r)$ and $h_{D}^{0}(r)$ we can calculate $c_{D}(r)$\nand $h_{D}(r)$, as described by Eqs. (\ref{c12}) and (\ref{c11}),\nrespectively.\n\nIn short, a) our PY-OL solution -- $\left\{ c_{0},c_{\Delta },c_{D}\right\} $\nand $\left\{ h_{0},h_{\Delta },h_{D}\right\} $ -- satisfies both the PY\nclosures and the core conditions; b) all coefficients contain a\nsurface adhesive $\delta $-term; c) $\left\{ h_{0},h_{\Delta },h_{D}\right\} \n$ all exhibit a step discontinuity at $r=2\sigma $.\n\n\bigskip\n\n\section{EVALUATION OF THE PARAMETERS $K$, $\Lambda _{1}$ AND $\Lambda _{2}$}\n\nThe calculation of the Baxter functions $q_{m}$'s ($m=0,1,2$) requires the\nevaluation of $K,$ $\Lambda _{1}$, and $\Lambda _{2}$, for a given set of $%\n\alpha ,\eta $ and $t$ values, a task that we address next.\n\n\bigskip\n\n\subsection{Exact expressions}\n\nFour equations are needed to find the three quantities $\Lambda\n_{m}=q_{m}(\sigma ^{-})\/\sigma ^{2}$ $(m=0,1,2)$, as well as the parameter $%\nK\left( \eta ,t,\alpha \right) $. We stress that the \textit{almost fully\nanalytical} determination of these unknown parameters was lacking in Ref.\n[32] and represents an important part of the present work. 
Our detailed\nanalysis is given in Appendix B, and we quote here the main results.\n\ni) For $\Lambda _{0}$, we recover the same PY equation found by Baxter for\nisotropic SHS \cite{Baxter68,Baxter71} \n\begin{equation}\n12\eta t\ \Lambda _{0}^{2}-\left( 1+\frac{12\eta }{1-\eta }t\right) \Lambda\n_{0}+y_{\sigma }^{\mathrm{HS}}(\eta )t=0. \label{b5}\n\end{equation}%\nOnly the smaller of the two real solutions (when they exist) is physically\nsignificant \cite{Baxter68,Baxter71}, and reads\n\n\begin{equation}\n\Lambda _{0}=y_{0}^{\mathrm{PY}}(\sigma )t=\frac{y_{\sigma }^{\mathrm{HS}%\n}(\eta )t}{\frac{1}{2}\left[ 1+\frac{12\eta }{1-\eta }t+\sqrt{\left( 1+\frac{%\n12\eta }{1-\eta }t\right) ^{2}-48\eta \ y_{\sigma }^{\mathrm{HS}}(\eta )\\nt^{2}}\right] }. \label{p1}\n\end{equation}\n\nii) For $\Lambda _{1}$ and $\Lambda _{2}$, we obtain two other quadratic\nequations, i.e. \ \ \ \ \n\begin{equation}\n12\eta _{m}t\ \Lambda _{m}^{2}-\left( 1+\frac{12\eta _{m}}{1-\eta _{m}}%\nt\right) \Lambda _{m}+h_{\sigma }^{\mathrm{HS}}(\eta _{m})t=-\mathcal{P}%\n\text{\quad ~~}(m=1,2). 
\label{p1b}\n\end{equation}\n\niii) The fourth equation is the following linear relationship between $%\n\Lambda _{1}$ and $\Lambda _{2}$%\n\begin{equation}\n\frac{12\eta _{2}\ \Lambda _{2}\ }{1-\eta _{2}}-\frac{12\eta _{1}\ \Lambda\n_{1}\ }{1-\eta _{1}}=\frac{\eta _{2}\left( 4-\eta _{2}\right) \ }{\left(\n1-\eta _{2}\right) ^{2}}-\frac{\eta _{1}\left( 4-\eta _{1}\right) \ }{\left(\n1-\eta _{1}\right) ^{2}}, \label{p1c}\n\end{equation}%\nwhich stems from the condition $a_{2}=a_{1}$.\n\nThe analysis of Appendix B gives\n\n\begin{equation}\n\Lambda _{2}\left( \eta _{1},\eta _{2},t,\alpha \right) =\Lambda _{1}\left(\n\eta _{2},\eta _{1},t,\alpha \right)\n\end{equation}%\nwith\n\n\begin{equation}\n\Lambda _{m}=\Lambda +\Lambda _{m}^{\mathrm{ex}}\text{ \ \ \ \ \ \ \ }(m=1,2)\n\label{p2a}\n\end{equation}%\n\begin{equation}\n\Lambda =\frac{1}{3}+\frac{1}{4}\left( \frac{\eta _{1}}{1-\eta _{1}}+\frac{%\n\eta _{2}}{1-\eta _{2}}\right) =\frac{1}{3}+\allowbreak \frac{x(1+4x)}{%\n4\left( 1+x\right) \left( 1-2x\right) } \label{p2b}\n\end{equation}%\n\begin{equation}\n\Lambda _{1}^{\mathrm{ex}}=\frac{\eta _{2}}{4\left( 1-\eta _{2}\right) }%\nW_{0}^{\mathrm{ex}},\qquad \Lambda _{2}^{\mathrm{ex}}=\frac{\eta _{1}}{%\n4\left( 1-\eta _{1}\right) }W_{0}^{\mathrm{ex}}, \label{p2c}\n\end{equation}%\nwhere we have introduced $\eta _{1}=-x$, $\eta _{2}=2x$ \ ( $x\equiv K\eta $\n), and $W_{0}^{\mathrm{ex}}$ is defined in Appendix B. All these quantities\nare analytic functions of $x=K\eta $. 
Thus, to complete the solution, we\nneed an equation for $K$, which can be written as\n\n\begin{equation}\nK=\alpha t\ \mathcal{K},\text{ }\ \ \ \ \ \text{with \ \ \ }\ \mathcal{K}%\n\text{ }=\frac{y_{0}^{\mathrm{PY}}(\sigma )}{Z(\eta _{1},\eta _{2},t)},\n\label{p3}\n\end{equation}%\n\begin{equation}\nZ=\frac{3}{2}\left( \Lambda _{1}+\Lambda _{2}\right) -3\left\{ \frac{1}{2}%\n\sum_{m=1}^{2}\ \left[ 12\eta _{m}\ \Lambda _{m}^{2}-\frac{12\eta\n_{m}\Lambda _{m}}{1-\eta _{m}}+h_{\sigma }^{\mathrm{HS}}(\eta _{m})\right] +%\n\frac{K_{\mathrm{reg}}}{K}\right\} t \label{p4}\n\end{equation}%\nand $\lim_{\eta \rightarrow 0}Z(\eta _{1},\eta _{2},t)=1$. Insertion of the\nexpressions found for $\Lambda _{1},$ $\Lambda _{2}$ and $K_{\mathrm{reg}}$\n(see Appendix B) into Eq. (\ref{p3}) yields a single equation for $K$ that\nwe have solved numerically, although some further analytic simplifications\nare probably possible.\n\nOur solution is then almost fully analytical, as only the final equation for \n$K$ is left to be solved numerically.\n\n\subsection{Approximate expressions}\n\nFor practical use we next derive very accurate analytical approximations to $%\nK$, $\Lambda _{1}$ and $\Lambda _{2}$, which provide a useful tool for\nfully analytical calculations. Since in all cases of our interest we always\nfind $x=K\eta \ll 1$, a series expansion leads to:\n\n\begin{equation}\nW_{0}^{\mathrm{ex}}=\frac{2}{3}\allowbreak \left( 1+5x\right) t+\mathcal{O}%\n\left( x^{2}\right) ,\n\end{equation}%\nand, consequently,\n\n\begin{equation}\n\Lambda _{1}^{\mathrm{ex}}=\frac{x\left( 1+5x\right) }{3\left( 1-2x\right) }%\nt+\mathcal{O}\left( x^{3}\right) ,\qquad \Lambda _{2}^{\mathrm{ex}}=-\frac{%\nx\left( 1+5x\right) }{6\left( 1+x\right) }t+\mathcal{O}\left( x^{3}\right) .\n\label{r5}\n\end{equation}%\nSimilarly we can expand $Z$ in Eq. 
(\ref{p4}) as\n\n\begin{equation}\nZ(x,t)=1+z_{1}(t)x+z_{2}(t)x^{2}+O\left( x^{3}\right) ,\n\end{equation}%\nwith%\n\begin{equation}\n\left\{ \n\begin{array}{c}\nz_{1}(t)=\frac{1}{4}\left( 3+11t\right) \text{ \ \ \ \ \ \ \ \ \ } \\ \nz_{2}(t)=\frac{1}{4}\left( 15+61t-4t^{2}\right) .%\n\end{array}%\n\right. \label{r6}\n\end{equation}%\nInsertion of this result into Eq. (\ref{p3}) yields a cubic equation for $K,$%\n\begin{equation*}\nz_{2}(t)\eta ^{2}K^{3}+z_{1}(t)\eta K^{2}+K-\alpha t\ y_{0}^{\mathrm{PY}%\n}(\sigma )=0,\n\end{equation*}%\nwhich, again with the help of Eq. (\ref{p3}), is equivalent to a cubic\nequation for $Z$ \n\begin{equation}\nZ^{3}-Z^{2}-z_{1}(t)\left[ \alpha t\ y_{0}^{\mathrm{PY}}(\sigma )\eta \right]\nZ-z_{2}(t)\left[ \alpha t\ y_{0}^{\mathrm{PY}}(\sigma )\eta \right] ^{2}=0.\n\label{r7}\n\end{equation}%\nThe physically acceptable solution then reads%\n\begin{equation}\nZ(\eta ,t)=\frac{1}{3}\left( 1+\sqrt[3]{\mathcal{B}+\sqrt{\mathcal{B}^{2}-%\n\mathcal{C}^{3}}}+\sqrt[3]{\mathcal{B}-\sqrt{\mathcal{B}^{2}-\mathcal{C}^{3}}%\n}\right) , \label{r8}\n\end{equation}%\nwhere\n\n\begin{equation}\n\left\{ \n\begin{array}{c}\n\mathcal{B}=1+\frac{9}{2}z_{1}(t)\left[ \alpha t\ y_{0}^{\mathrm{PY}}(\sigma\n)\eta \right] +\frac{27}{2}z_{2}(t)\left[ \alpha t\ y_{0}^{\mathrm{PY}%\n}(\sigma )\eta \right] ^{2} \\ \n\mathcal{C}=1+3z_{1}(t)\left[ \alpha t\ y_{0}^{\mathrm{PY}}(\sigma )\eta %\n\right] .\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }%\n\end{array}%\n\right. \label{r9}\n\end{equation}%\nIn conclusion, our approximate analytic solution for $K$, $\Lambda _{1}$ and \n$\Lambda _{2}$ includes three simple steps: i) calculate $K$ by using Eqs. (%\n\ref{p3}), (\ref{r8})-(\ref{r9}), (\ref{r6}); ii) evaluate $x=K\eta $; iii)\nsolve for $\Lambda _{1}$ and $\Lambda _{2}$ by means of Eqs. 
(\ref{p2b}) and\n(\ref{r5}).\n\n\bigskip\n\n\subsection{Numerical comparison}\n\nIn order to assess the precision of the previous approximations, we have\ncalculated $K$, $\Lambda _{1}$ and $\Lambda _{2}$ by two methods: i) solving\nnumerically Eqs. (\ref{s2}), and ii) using our analytic approximations.\nAfter fixing $\alpha =1\/2,$ we have increased the adhesion strength (or\ndecreased the temperature) from $t=0$ (HS limit) up to $t=0.8$, for some\nrepresentative values of the volume fraction ($\eta =0.01$, $0.1$, $0.2$ and \n$0.4)$. The maximum value of $t$ corresponds to $\tau =1\/(12t)\simeq 0.1$,\nwhich lies close to the critical temperature of the isotropic SHS fluid. On\nthe other hand, $\eta =0.01$ has been chosen to illustrate the fact that, as \n$\eta \rightarrow 0$, the parameter $K$ tends to $\alpha t$. The linear\ndependence of $K$ on $t$ in this case is clearly visible in the top panel of\nFigure 2.\n\nIn Figures 2 and 3 the exact and approximate results for $K$, $\Lambda _{1}$\nand $\Lambda _{2}$ are compared. The agreement is excellent: at $\eta =0.1$, \n$0.2$ and $0.4$, the relative error on $K$ does not exceed $0.1\%$, $0.4\%$\nand $1\%$, respectively, while the maximum of the absolute relative errors\non $\Lambda _{1}$ and $\Lambda _{2}$ always remains less than $0.05\%$, $0.2\%$\nand $0.6\%$ in the three above-mentioned cases. 
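The three-step recipe above is easy to translate into a short computation. The following minimal Python sketch is illustrative only (function names are ours; $\sigma =1$ units); it additionally assumes the standard PY hard-sphere contact value $y_{\sigma }^{\mathrm{HS}}(\eta )=(1+\eta \/2)\/(1-\eta )^{2}$, which is not restated in this section:

```python
import math

def y_hs_contact(eta):
    # PY contact value of the HS cavity function (assumed standard result):
    # y_sigma^HS(eta) = (1 + eta/2) / (1 - eta)^2
    return (1.0 + 0.5 * eta) / (1.0 - eta) ** 2

def y0_py(eta, t):
    # Eq. (p1): Lambda_0 = y_0^PY(sigma) * t, smaller root of the quadratic (b5); t > 0
    A = 1.0 + 12.0 * eta * t / (1.0 - eta)
    y = y_hs_contact(eta)
    lam0 = y * t / (0.5 * (A + math.sqrt(A * A - 48.0 * eta * y * t * t)))
    return lam0 / t

def solve_K(eta, t, alpha):
    # Step i): Z via Cardano's formula, Eqs. (r8)-(r9) with (r6), then K from (p3)
    z1 = 0.25 * (3.0 + 11.0 * t)                    # Eq. (r6)
    z2 = 0.25 * (15.0 + 61.0 * t - 4.0 * t * t)     # Eq. (r6)
    u = alpha * t * y0_py(eta, t) * eta
    B = 1.0 + 4.5 * z1 * u + 13.5 * z2 * u * u      # Eq. (r9)
    C = 1.0 + 3.0 * z1 * u                          # Eq. (r9)
    s = math.sqrt(B * B - C ** 3)                   # real in the regime x = K*eta << 1
    cbrt = lambda v: math.copysign(abs(v) ** (1.0 / 3.0), v)
    Z = (1.0 + cbrt(B + s) + cbrt(B - s)) / 3.0     # Eq. (r8)
    return alpha * t * y0_py(eta, t) / Z, Z

def lambdas(K, eta, t):
    # Steps ii)-iii): x = K*eta, then Lambda_m = Lambda + Lambda_m^ex, Eqs. (p2b), (r5)
    x = K * eta
    lam = 1.0 / 3.0 + x * (1.0 + 4.0 * x) / (4.0 * (1.0 + x) * (1.0 - 2.0 * x))
    lam1 = lam + x * (1.0 + 5.0 * x) * t / (3.0 * (1.0 - 2.0 * x))
    lam2 = lam - x * (1.0 + 5.0 * x) * t / (6.0 * (1.0 + x))
    return lam1, lam2
```

As $\eta \rightarrow 0$ one recovers $Z\rightarrow 1$ and hence $K\rightarrow \alpha t\ y_{0}^{\mathrm{PY}}(\sigma )\rightarrow \alpha t$, in agreement with the limits quoted above.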
It is worth noting that, as \n$\eta $ increases, the variations of $\Lambda _{1}$ and $\Lambda _{2}$ are\nalways relatively small; on the contrary, $K$ experiences a marked change,\nwith a progressive lowering of the relevant curve.\n\n\bigskip\n\n\section{SOME ILLUSTRATIVE RESULTS ON THE LOCAL ORIENTATIONAL STRUCTURE}\n\nArmed with the analytic expressions for the $q_{m}$'s, the three harmonic\ncoefficients $\left\{ h_{0},h_{\Delta },h_{D}\right\} $ appearing in\n\n\begin{equation}\ng^{\mathrm{PY-OL}}(1,2)=1+h_{0}(r)+h_{\Delta }(r)\Delta (1,2)+h_{D}(r)D(1,2)\n\label{g1}\n\end{equation}%\ncan be rapidly computed as follows. From the second Baxter IE ($\ref{ie2b}$),\none can generate $h(r)$ directly from $q(r)$, avoiding the passage through $%\nc(r)$. From $\left\{ q_{0},q_{1},q_{2}\right\} $ one first obtains $\left\{\nh_{0},h_{1},h_{2}\right\} $, by applying a slight extension of Perram's\nnumerical method \cite{Perram75}, and then derives $\left\{ h_{0},h_{\Delta\n},h_{D}\right\} $, according to the above-mentioned recipes.\n\nThe main aim of the present paper is to present the necessary mathematical\nmachinery to investigate thermophysical properties. We now illustrate the\ninterest of the model by reporting some preliminary numerical results on the\norientational dependence of $g^{\mathrm{PY-OL}}(1,2)$ -- i.e. on the local\norientational structure -- as a consequence of the anisotropic adhesion. A\nmore detailed analysis will be reported in a forthcoming paper.\n\nConsider the configuration depicted in Figure 4. Let a generic particle $1$\nbe fixed at a position $\mathbf{r}_{1}$ in the fluid with orientation \n$\mathbf{u}_{1}$, and consider another particle $2$ located along\nthe straight half-line which originates from the center of $1$ and with\ndirection $\mathbf{u}_{1}$. 
This second particle then has a fixed\ndistance $r$ from $1$, but can assume all possible orientations $\mathbf{u}%\n_{2}$, which -- by axial symmetry -- can be described by a single\npolar angle $\theta \equiv \theta _{2}$ (i.e., the angle between $\mathbf{u}%\n_{1}$ and $\mathbf{u}_{2}$) with respect to the\nintermolecular reference frame. Within this geometry, we have $\left( \theta\n_{1},\varphi _{1}\right) =(0,0)$ and $\varphi _{2}=0$, obtaining $\Delta\n(1,2)=\cos \theta $, $D(1,2)=2\cos \theta $. Consequently, $%\ng(1,2)=g(r,\theta _{1},\varphi _{1},\theta _{2},\varphi _{2})$ reduces to \n\begin{equation}\ng(r,\theta )=g_{0}(r)+\left[ h_{\Delta }(r)+2h_{D}(r)\right] \cos \theta ,\n\label{eq6}\n\end{equation}%\nwhere $\theta \equiv \theta _{2}$, and $g_{0}(r)=1+h_{0}(r)$ is the radial\ndistribution function of the reference isotropic SHS fluid.\n\nClearly, $g(r,\theta )$ is proportional to the probability of finding, at a\ndistance $r$ from a given molecule $1$, a molecule $2$ having a \textit{%\nrelative} orientation $\theta $. We consider the three most significant\nvalues of this angle: i) $\theta =0$, which corresponds to the `parallel'\nconfiguration of $\mathbf{u}_{1}$ and $\mathbf{u}_{2}$; ii) $\theta =\pi \/2$,\nfor the `orthogonal' configuration; and iii) $\theta =\pi $, for the two\n`antiparallel' (head-to-head and tail-to-tail) configurations. From Eq. 
(\ref%\n{eq6}) it follows that%\n\begin{equation}\n\begin{array}{c}\ng^{\mathrm{par}}(r)=g(r,0)=g_{0}(r)+\left[ h_{\Delta }(r)+2h_{D}(r)\right] ,%\n\text{ \ \ \ } \\ \ng^{\mathrm{ortho}}(r)=g(r,\pi \/2)=g_{0}(r),\text{ \ \ \ } \\ \ng^{\mathrm{antipar}}(r)=g(r,\pi )=g_{0}(r)-\left[ h_{\Delta }(r)+2h_{D}(r)%\n\right] .\text{ \ }%\n\end{array}\n\label{g2}\n\end{equation}%\nNote that $g^{\mathrm{ortho}}(r)$ coincides with the isotropic result $%\ng_{0}(r)$.\n\nIn Figure 5 we depict the above sections through the three-dimensional\nsurface corresponding to $g(r,\theta )$, i.e., $g^{\mathrm{par}}(r)$, $g^{%\n\mathrm{ortho}}(r)$ and $g^{\mathrm{antipar}}(r)$, for $\eta =0.3$ with $%\nt=0.2$ and $t=0.6$, respectively, at the highest asymmetry value admissible\nin the present model, i.e. $\alpha =1\/2$. The most significant features from\nthese plots are: i) $g^{\mathrm{antipar}}(\sigma ^{+})>g^{\mathrm{par}%\n}(\sigma ^{+})$; ii) for $r>2\sigma $ $g^{\mathrm{antipar}}(r)\approx g^{%\n\mathrm{par}}(r)\approx g_{0}(r)$, i.e., the anisotropic adhesion seems to\naffect only the first coordination layer, $\sigma <r<2\sigma $.\n\n\section{STABILITY ANALYSIS}\n\nWe conclude by examining the stability of the homogeneous and isotropic\nphase with respect to small density-orientation fluctuations $\delta \rho\n(1)$. The standard second-order stability condition reads\n\n\begin{eqnarray}\n\int d(1) \int d(2)~ \delta \rho \left(1\right) \left[ \frac{4 \pi}{\rho}\n\delta \left(\mathbf{r}_{12}\right) \delta \left(\Omega_1 -\Omega_2\right)\n- c\left(1,2\right) \right] \delta \rho \left(2\right) &>& 0.\n\label{stability}\n\end{eqnarray}\n\nHere $d(i)$ stands for $d \mathbf{r}_i ~d \Omega_i$, $i=1,2$, and we assume the\nequilibrium one-particle density to be $\rho\/4\pi$ \cite{Stecki81,Chen92,Klapp97}. \n\nWe expand the fluctuations both in Fourier modes and in spherical harmonics\n\cite{Gray84}\n\n\begin{eqnarray}\n\delta \rho \left(j\right) \equiv \delta \rho \left(\mathbf{r}_j,\Omega_j\right)\n&=& \int \frac{d \mathbf{k}}{\left(2\pi\right)^3} ~\mathrm{e}^{\mathrm{i} \mathbf{k} \cdot\n\mathbf{r}_j} \sum_{l=0}^{+\infty} \sum_{m=-l}^{+l} \delta \tilde{\rho}_{lm} \left(\mathbf{k}\n\right) Y_{lm} \left(\Omega_j\right). 
\n\\label{expansion}\n\\end{eqnarray}\n\nUsing the orthogonality relation \\cite{Gray84}\n\n\\begin{eqnarray}\n\\int d \\Omega ~Y_{lm}^{*} \\left(\\Omega\\right) Y_{l'm'}\\left(\\Omega\\right) &=&\n\\delta_{l l'} \\delta_{m m'},\n\\label{orthogonality}\n\\end{eqnarray}\n\nstandard manipulations \\cite{Klapp97} show that condition (\\ref{stability}) can\nbe recast into the form\n\n\\begin{eqnarray}\n\\sum_{l_{1},l_{2}=0}^{+\\infty} \\sum_{m_{1}=-l_{1}}^{+l_{1}} \\sum_{m_{2}=-l_{2}}^{+l_{2}}\n\\int \\frac{d \\mathbf{k}}{\\left(2 \\pi\\right)^3} ~ \\delta \\tilde{\\rho}_{l_{1} m_{1}} \n\\left(\\mathbf{k} \\right) \\delta \\tilde{\\rho}_{l_{1} m_{1}}^{*} \n\\left(\\mathbf{k} \\right) \\tilde{A}_{l_{1} m_{1} l_{2} m_{2}} \\left(\\mathbf{k} \\right) &>0&,\n\\label{stability2}\n\\end{eqnarray}\n\nwhere the matrix elements $\\tilde{A}_{l_{1} m_{1} l_{2} m_{2}} \\left(\\mathbf{k} \\right)$\nare given by\n\n\\begin{eqnarray} \\label{matrix}\n\\tilde{A}_{l_{1} m_{1} l_{2} m_{2}} \\left(\\mathbf{k} \\right) &=& \\left(-1\\right)^{m_{1}}\n\\frac{4 \\pi}{\\rho} \\delta_{l_{1} l_{2}} \\delta_{m_{1},-m_{2}} -\n\\int d \\Omega_1 \\int d ~ \\Omega_{2} Y_{l_{1} m_{1}} \\left(\\Omega_1\\right) \nY_{l_{2} m_{2}} \\left(\\Omega_2\\right) \\\\ \\nonumber\n&\\times& \\int d\\mathbf{r} ~\\mathrm{e}^{\\mathrm{i} \\mathbf{k}\n\\cdot \\mathbf{r}} c\\left(\\mathbf{r},\\Omega_1,\\Omega_2\\right).\n\\end{eqnarray}\n\nThe problem of the stability has been reported to the character \nof the eigenvalues of matrix (\\ref{matrix}). This turns out to be particularly\nsimple in our case. 
Using the results (\ref{wertheim_int}) it is easy to\nsee that\n\n\begin{eqnarray}\n\int d \mathbf{r}~ \mathrm{e}^{\mathrm{i} \mathbf{k} \cdot \mathbf{r}}\nc\left(\mathbf{r},\Omega_1,\Omega_2 \right) &=& \tilde{c}_{0} \left(k\right)\n+\tilde{c}_{\Delta} \left(k\right) \Delta\left(\Omega_1,\Omega_2\right) +\n\overline{c}_{D} \left(k\right) D\left(\Omega_1,\Omega_2,\Omega_k\right).\n\label{integral_c}\n\end{eqnarray}\n\nInsertion of Eq.(\ref{integral_c}) into Eq.(\ref{matrix}) leads to\n\n\begin{eqnarray} \label{matrix2}\n\tilde{A}_{l_{1} m_{1} l_{2} m_{2}} \left(\mathbf{k} \right) &=& \n\left(-1\right)^{m_{1}}\n\frac{4 \pi}{\rho} \delta_{l_{1} l_{2}} \delta_{m_{1},-m_{2}} \\ \nonumber\n&-& \left[\n\tilde{c}_{0}\left(k\right) I_{l_{1} m_{1} l_{2} m_{2}}^{(0)}\n+\tilde{c}_{\Delta} \left(k\right) I_{l_{1} m_{1} l_{2} m_{2}}^{(\Delta)}\n+\overline{c}_{D} \left(k\right) I_{l_{1} m_{1} l_{2} m_{2}}^{(D)}\n\right],\n\end{eqnarray}\n\n\noindent where we have introduced the following integrals, which can be evaluated\nin the intermolecular frame, using standard properties of the\nspherical harmonics \cite{Gray84}\n\n\begin{eqnarray} \label{integrals}\nI_{l_{1} m_{1} l_{2} m_{2}}^{(0)} &\equiv& \int d \Omega_1 \int d \Omega_2\n~Y_{l_{1} m_{1}} \left(\Omega_1\right) Y_{l_{2} m_{2}} \left(\Omega_2\n\right) = 4 \pi \delta_{l_{1},0} \delta_{l_{2},0} \delta_{m_{1},0}\n\delta_{m_{2},0} \\ \nonumber\nI_{l_{1} m_{1} l_{2} m_{2}}^{(\Delta)} &\equiv& \int d \Omega_1 \int d \Omega_2\n~Y_{l_{1} m_{1}} \left(\Omega_1\right) Y_{l_{2} m_{2}} \left(\Omega_2\n\right) ~\Delta\left(\Omega_1,\Omega_2\right)= \frac{4}{3} \pi \delta_{l_{1},1} \n\delta_{l_{2},1} \delta_{m_{1},0} \delta_{m_{2},0} \\ \nonumber\nI_{l_{1} m_{1} l_{2} m_{2}}^{(D)} \left(\cos \theta \right) &\equiv& \n\int d \Omega_1 \int d \Omega_2\n~Y_{l_{1} m_{1}} \left(\Omega_1\right) Y_{l_{2} 
m_{2}} \\left(\\Omega_2\n\\right) D\\left(\\Omega_1,\\Omega_2,\\Omega_k\\right) \\\\ \\nonumber \n&=& \\frac{4}{3} \\pi \\delta_{l_{1},1} \n\\delta_{l_{2},1} \\delta_{m_{1}, 0}\n\\delta_{m_{2},0} ~2 ~P_{2}\\left(\\cos \\theta \\right) \n\\end{eqnarray}\n\nand where $P_2(x)=(3 x^2 -1)\/2$ is the second Legendre polynomial.\n\nHence, the matrix (\\ref{matrix}) is diagonal and the relevant terms\nare\n\n\\begin{eqnarray}\n\\tilde{A}_{0000} \\left(k\\right) &=& 4 \\pi\n\\left[ \\frac{1}{\\rho}-\\tilde{c}_{0} \\left(k\\right) \\right],\n\\label{element00}\n\\end{eqnarray}\nwhose positiveness is recognized as the isotropic stability condition,\nand\n\\begin{eqnarray}\n\\tilde{A}_{1010}\\left(\\mathbf{k}\\right) &=& 4 \\pi \\left\\{\n\\frac{1}{\\rho} -\\frac{1}{3} \\left[\\tilde{c}_{\\Delta}\\left(k\\right)\n+ 2 P_{2} \\left(\\cos \\theta\\right) \\overline{c}_{D}\\left(k\\right)\n\\right] \\right\\}.\n\\label{element11}\n\\end{eqnarray} \n\n\\noindent All remaining diagonal terms have the form $\\tilde{A}_{l0l0}=\n4 \\pi\/\\rho>0$. \n\nIn order to test for possible angular instabilities, we consider\nthe limit $k \\to 0$ of Eq.~(\\ref{element11}) namely\n\n\\begin{eqnarray}\n\\tilde{A}_{1010}\\left(0\\right) &=& \\frac{4 \\pi}{\\rho} \\left\\{\n1 -\\frac{\\rho}{3} \\left[\\tilde{c}_{\\Delta}\\left(0\\right)\n+ 2 P_{2} \\left(\\cos \\theta\\right) \\overline{c}_{D}\\left(0\\right)\n\\right] \\right\\}.\n\\label{element11k0}\n\\end{eqnarray} \n\\noindent\nThis can be quickly computed with the aid of Eqs.~(\\ref{so11}), (\\ref{bf1}),\nthe fact that $\\bar{c}_{D}(0)=\\tilde{c}_{D}^{0}(0)$ and the identity\n(\\ref{so10}). We find\n\n\\begin{eqnarray}\n\\label{element11_res}\n\\tilde{A}_{1010}\\left(0\\right) &=& \\frac{4 \\pi}{\\rho} a_1^2,\n\\end{eqnarray}\nwhich is independent of the angle $\\theta$. 
This value is found to be always\npositive, since $a_1>0$ (see Fig.\\ref{fig6}).\nWithin this first-order approximation, therefore, the only instability\nin the system stems from the isotropic compressibility. \nThe reason for this can be clearly traced back to the first-order\napproximation to the angular dependence of the correlation functions. If\nquadratic terms in $\\Delta $ and $D$ were included in the series expansion\nfor the correlation functions, the particular combination leading to a cancellation\nof the angular dependence in the stability matrix \n$\\tilde{A}_{l_{1}m_{1}l_{2}m_{2}}\\left(0\\right)$ would not occur, leading to\na different result. \n\nThis fact is consistent with the more general statement that, in any\napproximate theory, thermodynamics usually requires a higher degree of\ntheoretical accuracy than the one sufficient for obtaining significant\nstructural data. Conceptually, the need to distinguish structural results\nfrom thermodynamic ones is rather common. For instance, in the statistical\nmechanics of liquids it is known that approximating the model potential only\nwith its repulsive part (for instance, the hard sphere term) can account for\nall essential features of the structure, but yields unsatisfactory\nthermodynamics. On the other hand, the present paper refers to a \\textit{%\nsimplified} statistical-mechanical tool, i.e. the OZ equation within our\nPY-OL closure, which has been explicitly selected to allow an analytical\nsolution. Our results, however, indicate that the first-order expansion used\nin the PY-OL closure can give reasonable information about structure, but\nnot about thermodynamics, where a higher level of sophistication is required.\n\n\\section{Concluding remarks}\n\nIn this paper we have discussed an anisotropic variation of the original\nBaxter model of hard spheres with surface adhesion. 
In addition to the HS\npotential, molecules of the fluid interact via an isotropic sticky\nattraction plus an additional anisotropic sticky correction, whose strength\ndepends on the orientations of the particles in a dipolar way. By varying the\nvalue of a parameter $\\alpha $, the anisotropy degree can be changed.\nConsequently, the strength of the total sticky potential can vary from twice\nthe isotropic one down to the limit of no adhesion (HS limit). These\nparticles may be regarded as having two non-uniform, hemispherical,\n`dipolar-like patches', thus providing a link with uniformly adhesive\npatches \\cite{Jackson88,Ghonasgi95,Sear99,Mileva00,Kern03,Zhang05,Fantoni07}.\n\nWe have obtained a full analytic solution of the molecular OZ equation,\nwithin the PY-OL approximation, by using Wertheim's technique \\cite%\n{Wertheim71}. Our PY-OL approximation should be tested against exact\ncomputer simulations, in order to assess its reliability. Nevertheless, we\nmay reasonably expect the results to be reliable even at experimentally\nsignificant densities, notwithstanding the truncation of the higher-order\nterms in the angular expansion. Only one equation, for the parameter $K$,\nhas to be solved numerically. In addition, we have provided analytic\napproximations to $K$, $\\Lambda _{1}$ and $\\Lambda _{2}$ so accurate that,\nin practice, the whole solution can really be regarded as fully analytical.\nFrom this point of view, the present paper complements the above-mentioned\nprevious work by Blum \\textit{et al. }\\cite{Blum90}.\n\nWe have also seen that thermophysical properties require a more detailed\ntreatment of the angular part than the PY-OL closure. Nonetheless, \neven within the PY-OL oversimplified framework, our findings\nare suggestive of a dependence of the fluid-fluid coexistence line on\nanisotropy.\n\nOur analysis envisions a number of interesting perspectives, already hinted\nat by the preliminary numerical results reported here. 
It would be very\ninteresting to compare the structural and thermodynamic properties of this\nmodel with those stemming from truly dipolar hard spheres \\cite{Stecki81,\nChen92,Klapp97}.\nThe possibility of local orientational ordering can be assessed by computing\nthe pair correlation function $g(1,2)$ for the most significant\ninterparticle orientations. We have shown that this task can be easily\nperformed within our scheme. This should provide important information about\npossible chain formation and its subtle interplay with the location of the\nfluid-fluid transition line. The latter is of particular interest in view\nof the fact that computer simulations on DHS are notoriously difficult and\ntheir predictions regarding the location of such a transition line have\nso far proven inconclusive \\cite{Frenkel02}. The long-range nature of DHS\ninteractions may in fact promote polymerization, preempting the usual\nliquid-gas transition \\cite{Tlusty00}. Our preliminary results on the\npresent model strongly suggest that this is not the case for sufficiently\nshort-ranged interactions, thus allowing the location of such a transition\nline to be studied as a function of the anisotropy degree of the model. Our\nsticky interactions have only attractive adhesion, the only repulsive part\nbeing that pertinent to hard spheres, whereas the DHS potential is both\nattractive and repulsive, depending on the orientations.\n\nFinally, information about the structural ordering in the present model\nwould neatly complement that obtained by us in a recent parallel study on a\nSHS fluid with one or two uniform circular patches \\cite{Fantoni07}. Work\nalong this line is in progress and will be reported elsewhere.\n\n\\acknowledgments \nWe acknowledge financial support from PRIN 2005027330. 
It is our pleasure to\nthank Giorgio Pastore and Mark Miller for enlightening discussions on the\nsubject.\n\n\\bigskip ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:intro}Introduction}\n\nMillisecond pulsars are commonly believed to be descendants of normal neutron stars that have been spun up and recycled back as radio pulsars by acquiring angular momentum from their companion during the low-mass X-ray binary (LMXB) phase \\citep{ACR82,RS82}.\n\nThere are about $\\sim$20 high-confidence nuclear- or accretion-powered (see Table~1) millisecond X-ray pulsars (MSXPs), which are thought to be the progenitors of millisecond radio pulsars (MSRPs) \\citep{WK98}. These MSXPs may become observable at radio wavelengths once accretion ceases, or once the column density of the plasma from the fossil disk around the neutron star becomes thin enough to allow vacuum gap formation that leads to the production of coherent radio emission. Towards the end of the secular LMXB evolution, as accretion rates fall below a critical value above which detection presumably may be hampered due to absorption or dispersion \\citep{TBE94}, the neutron star can re-appear as an MSRP.\n\nAlthough the connection between LMXBs and MSRPs has been significantly strengthened after the discovery of quasi-periodic kHz oscillations and X-ray pulsations in some transient X-ray sources \\citep{WK98, MSS02, GCM02, GMM05}, no radio pulsations from MSXPs have been detected so far \\citep{BBP03}. 
\n\nAt the end of the recycling process the neutron star will reach an equilibrium period \\citep{BH91} which is approximated by the Keplerian orbital period at the Alfven radius \\citep{GL92}:\n\n\\begin{equation}\nP_{eq} \\sim 1.9\\, ms\\, B_{9}^{6\/7} \\biggl (\\frac{M}{1.4 \\,M_{\\sun}}\\biggr)^{-5\/7} \\biggl(\\frac{\\dot{m}}{\\dot{M}_{Edd}}\\biggr)^{-3\/7} R^{16\/7}_{6}\n\\end{equation}\nwhere $B_{9}$ and $R_{6}$ are the neutron star surface magnetic dipole field and radius in units of $10^{9}$ G and $10^{6}$ cm, respectively. The Eddington-limited accretion rate $\\dot{M}_{Edd}$ for a neutron star is typically $\\sim10^{-8}\\,M_{\\sun}\\,yr^{-1}$, above which the radiation pressure generated by accretion will stop the accretion flow. This equilibrium period, combined with the dominant mechanism for energy loss, delineates the subsequent kinematics of the spun-up millisecond pulsar. The magnetic dipole model then implies a ``spin-up region'' ($\\dot{P}\\, \\sim\\, P_{0}^{4\/3} $) \\citep[see][]{ACW99} on which the recycled neutron stars will be reborn as MSRPs. At the end of the active phase, MSXPs accreting with $\\dot{m}$ and spinning with $P_{eq}$ presumably transition into MSRPs with an initial spin period of $P_{0} \\sim P_{eq}$.\n\nIn the standard spin-down model, the MSRP evolution is driven by pure magnetic dipole radiation, i.e. braking index $n=3$ in vacuum \\citep[see][]{MT77, LK04}. Alternative energy loss mechanisms such as multipole radiation or gravitational wave emission, especially during the initial phases of the reborn millisecond pulsars, have been suggested by several authors \\citep{K91,B98} but have yet to be observationally corroborated. 
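The scaling in Eq. (1) is easy to evaluate numerically; the helper below is our illustrative sketch (the function name and default arguments are ours, not the paper's):

```python
# Equilibrium spin period from Eq. (1):
# P_eq ~ 1.9 ms * B9^(6/7) * (M/1.4 Msun)^(-5/7) * (mdot/Mdot_Edd)^(-3/7) * R6^(16/7)
def p_eq_ms(B9, mdot_over_edd, M_sun=1.4, R6=1.0):
    return (1.9 * B9 ** (6 / 7) * (M_sun / 1.4) ** (-5 / 7)
            * mdot_over_edd ** (-3 / 7) * R6 ** (16 / 7))

# B = 1e9 G, accreting at the Eddington rate:
print(p_eq_ms(1.0, 1.0))   # 1.9 ms
# The same star accreting at 1% of the Eddington rate:
print(p_eq_ms(1.0, 0.01))  # ~13.7 ms: lower late-time accretion -> longer P_eq
```

The second call illustrates the point made later in the text: low accretion rates during the late LMXB stages push the equilibrium period, and hence the initial spin period $P_0$, to larger values.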
Advanced Laser Interferometer Gravitational Wave Observatory (LIGO) will be able to probe the frequency space at which millisecond pulsars are expected to radiate gravitational waves, thereby putting stringent constraints on the micro physics of millisecond pulsars.\n\nThe advances in radio observations, increased sky coverage with deep exposures of current surveys combined with robust post-bayesian statistical techniques that incorporate minimal assumptions, give us unprecedented predictive power on the joint period-spindown ($P-\\dot{P}$) and implied magnetic field ($B$) distributions.\n\nIn this {\\it Letter}, we attempt to go beyond phenomenological arguments and test whether MSXPs can produce the characteristics of the observed MSRPs within the framework of the standard model \\citep[and references therein]{BH91}.\n\t\n\t\\section{\\label{sec:dist}The Joint Period - Spindown ($P-\\dot{P}$) Distribution}\n\t\t\t\t\t\t\t\nThe evolution of millisecond pulsars can be consistently described in terms of {\\bf i)} the equilibrium period distribution ($D$) of MSXPs at the end of the LMXB evolution {\\bf ii)} the mass accretion rates ($\\dot{M}$) of the progenitor population during the recycling process {\\bf iii)} Galactic birth rates ($R$), and {\\bf iv)} the dominant energy loss mechanism after the onset of radio emission. \n\n\t\t\t\\subsection{\\label{sec:stat}Statistics}\n\nWe devise a semi-analytical evolution function $\\mathcal{E}$ to parametrize the evolution of millisecond pulsars after the accretion phase, which can be described in closed form as:\n\\begin{eqnarray}\n\\displaystyle\\sum_{i=0}^{r} \\mathcal{E}(D_{i},\\dot{M}_{i},R_{i}\\, | \\, \\alpha_{i}^{k},\\beta_{i}^{k}) \\xrightarrow{n=3} \\mathcal{PDF}(P,\\dot{P}) \\label{stat.eq} \n\\end{eqnarray}\nwhere $\\mathcal{PDF}$ is the probability distribution function. 
The shape parameters $\\alpha$ and $\\beta$ define the distributions (i.e., $D,\\dot{M}, R$ for k=1,2,3) for the Beta functions\\footnote{Beta functions are commonly preferred in Bayesian statistics as the least restrictive and most flexible prior distributions. It can take the form of an uninformative (e.g. uniform) prior, a monotonic line, concave, convex, unimodal (e.g. normal) or any extreme combinations of these shapes.}\\citep{EHP00} inferred from observations at each Monte-Carlo realization ``r''. \n\nThe evolution function $\\mathcal{E}$ is built by randomly choosing initiation seeds from a period distribution $D$, which is then convolved via the standard model to consequently sample the $P-\\dot{P}$ parameter space. For the observed MSXPs, the period distribution which seeds will be randomly chosen from is the observed $P_{MSXP}$ distribution (table~1). We uniquely construct a ``relaxed multidimensional Kolmogorov-Smirnov (K-S) filter'' (fig.~\\ref{fig:probdist}) to check population consistencies by calculating the 2D K-S \\citep{FF87} probabilities ($P_{2DK-S}$) between observed MSRPs and the synthetic population that is formed by these properly evolved progenitor seeds. The filtering is reiterated for each realization to obtain synthetic populations with consistent distributions as:\n\\begin{eqnarray}\nD\\,(\\alpha_{i}^{1},\\beta_{i}^{1}) \\xrightarrow{filter} D_{i} \\label{statD.eq} \\\\\n\\dot{M}(\\alpha_{i}^{2},\\beta_{i}^{2}) \\xrightarrow{filter} \\dot{M}_{i} \\label{statM.eq} \\\\\nR\\,(\\alpha_{i}^{3},\\beta_{i}^{3}) \\xrightarrow{filter} R_{i} \\label{statR.eq}\n\\end{eqnarray}\nwhich is then used to construct the $\\mathcal{PDF}$ in Equation ~\\protect\\ref{stat.eq}.\n\nNominally any $P_{2DK-S} > 0.2$ value would imply consistent populations in a 2D K-S test. By allowing $0.005 < P_{2DK-S} < 0.2$ with lower fractions (see fig.~\\ref{fig:probdist}), we oversample outliers to compensate for possible statistical fluctuations and contaminations. 
A peak sampling rate around the nominal acceptance value of $P_{2DK-S}\\sim0.2$ is the optimal scheme that prevents strong biases due to over- or under-sampling. The main goal of oversampling outliers and relaxing the K-S filter is to test whether the standard model can at least marginally produce very fast millisecond pulsars with relatively high magnetic fields, like PSR B1937+21.\n\nThe predictive significance of the $P-\\dot{P}$ distribution for the probability map (fig~\\protect\\ref{fig:MSPs}) is obtained from a Monte-Carlo run with $r=10^{7}$ valid realizations that produce consistent synthetic samples. Whilst sampling the $P-\\dot{P}$ space, no assumptions were made regarding the progenitor period distribution ($D$), the accretion ($\\dot{M}$), or the Galactic birth ($R$) rates. The filter (eq.~\\protect\\ref{statD.eq}, \\protect\\ref{statM.eq}, \\protect\\ref{statR.eq}) is implicitly driven by the observed MSRPs.\n\t\nFig \\protect\\ref{fig:MSPs} shows the expected $P-\\dot{P}$ distribution for the standard model assuming that MSRPs have evolved from a progenitor population similar to the observed MSXPs. We do not include MSRPs in globular clusters because the $P-\\dot{P}$ values in these cases may not necessarily be the sole imprint of the binary evolution, but can be significantly changed by possible gravitational interactions due to the crowded field. To explore the extent of the effects of an unevenly sampled progenitor population, we also show the region in the $P-\\dot{P}$ space that is sensitive to alternative $P_{MSXP}$ distributions. The probability map is overlaid with the observed MSRPs. \n\n\t\\section{\\label{sec:dis}Discussion and Conclusions}\n\nThe discovery of millisecond pulsations from neutron stars in LMXBs has substantiated the theoretical prediction that links MSRPs and LMXBs. 
Since then, the recycling process that produces MSRPs on a spin-up region from LMXBs, followed by spin-down due to dipole radiation has been conceived as the ``standard evolution'' of millisecond pulsars. However, the question whether all observed MSRPs could be produced within this framework has not been quantitatively addressed until now.\n\nThe standard evolutionary process produces millisecond pulsars with periods ($P$) and spin-downs ($\\dot{P}$) that are not entirely independent. The possible $P-\\dot{P}$ values that MSRPs can attain are {\\it jointly} constrained by the equilibrium period distribution ($D$) of the progenitor population, the mass accretion rates ($\\dot{M}$) during the recycling process and the dominant energy loss mechanism after the onset of radio emission. \n\nIn order to test whether the observed MSRPs can be reconciled with a single coherent progenitor population that evolves via magnetic dipole braking after the spin-up process, we have produced the predictive joint $P-\\dot{P}$ distribution of MSRPs for the standard model. We did not put restrictions on any of the parameters that drive the evolution. Acceptable $D,\\dot{M}$ and $R$ values were implicitly filtered. We have relaxed the K-S filter (see fig.~\\ref{fig:probdist}) in order to oversample outliers and see whether it is even remotely feasible to produce young millisecond pulsars, like PSR B1937+21 or J0218+4232, that have higher B fields. The color contours in Figure~\\ref{fig:MSPs} represent the $P-\\dot{P}$ densities for MSRPs that are direct descendants of observed MSXPs (i.e. initial spin periods $P_{0}\\sim P_{MSXP}$). \n\nThe standard evolutionary model is able to successfully produce the general demographics of older MSRPs. 
It fails, however, to predict the youngest and fastest MSRP sub-population, which has higher $B$ fields.\n\nThe accretion rates that MSRPs have experienced during their accretion phase, deduced from observed $P-\\dot{P}$ values, combined with the observed MSXP period distribution ($D \\equiv P_{MSXP}$), produce mostly older MSRPs, including MSRPs with spin-down ages $\\tau_{c} > 10^{10}$ yrs. Figure ~\\ref{fig:MSPs} shows clearly that the apparent enigma of millisecond pulsars with spin-down ages older than the age of the Galaxy is mainly a manifestation of very low accretion rates during the late stages of the LMXB evolution. \n\nOn the other hand, no physically motivated $P_{MSXP}$ distribution has been able to produce the whole MSRP population consistently. The observed period distribution of MSXPs is likely to be under-sampled due to observational selection effects. It is also possible that some neutron stars in LMXBs simply do not produce observable pulses. In order to understand how the predicted $P-\\dot{P}$ distribution is affected by different MSXP period distributions, we have estimated the whole extent of the $P-\\dot{P}$ region that is sensitive to the prior. The values that may be produced for different $P_{MSXP}$ distributions are shown by the shaded areas in Figure ~\\ref{fig:MSPs}. No MSXP period distribution could mimic the observed relative ratios of young\/old pulsars with high B fields. The fraction of the observed young\/old MSRPs with high $B$ fields is higher than what the standard model predicts by several orders of magnitude. This may further be exacerbated by strong selection effects that limit our ability to observe very fast millisecond pulsars \\citep{HRS07}. The choice of a standard K-S test instead of the relaxed 2D K-S only increases the statistical significance. Hence, we argue that young millisecond pulsars with higher magnetic fields (e.g. PSR B1937+21) are inconsistent with the standard model. 
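The qualifiers "young" and "high field" can be made concrete with the usual magnetic-dipole estimates $\tau_c = P/2\dot{P}$ and $B \simeq 3.2\times10^{19}\sqrt{P\dot{P}}$ G (standard pulsar relations for braking index $n=3$, not derived in this Letter); the rounded inputs for PSR B1937+21 below are our illustrative values, not taken from the text:

```python
import math

SEC_PER_YR = 3.156e7

def tau_c_yr(P, Pdot):
    # Characteristic (spin-down) age tau_c = P / (2 Pdot), braking index n = 3
    return P / (2.0 * Pdot) / SEC_PER_YR

def b_dipole_gauss(P, Pdot):
    # Conventional vacuum-dipole surface field estimate B ~ 3.2e19 sqrt(P Pdot) G
    return 3.2e19 * math.sqrt(P * Pdot)

# Rounded values for PSR B1937+21 (P in s, Pdot dimensionless):
P, Pdot = 1.558e-3, 1.05e-19
print(f"tau_c ~ {tau_c_yr(P, Pdot):.1e} yr")   # ~2.4e8 yr: young for an MSRP
print(f"B ~ {b_dipole_gauss(P, Pdot):.1e} G")  # ~4.1e8 G: high for an MSRP
```

With these numbers the point of the argument is visible directly: reaching such a short period at such a high implied field requires a high late-stage accretion rate, in tension with the low rates that reproduce the bulk of the old MSRP population.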
\n\nTherefore, it is tempting to suggest that the fastest spinning millisecond pulsars, in particular PSR B1937+21, may originate from a different evolutionary channel. While it appears that ordinary magnetic-dipole spin down from a source population similar to the observed MSXPs is adequate to explain the great majority of observed MSRPs, the low final accretion rates that are required cannot be reconciled with the high accretion rates needed to produce the fastest, youngest pulsars. We believe that it is necessary to posit the existence of a separate class of progenitors, most likely with a different distribution of magnetic fields, accretion rates and equilibrium spin periods, presumably among the LMXBs that have not been revealed as MSXPs. Understanding this additional channel is clearly critical to developing a natural solution to the long-lasting ``birth rate problem'' \\citep[see, e.g.][]{KN88}.\n\nIt is also possible that the standard evolutionary model fails at another point. For example, if MSRPs during some portion of their evolution lose energy through a dominant mechanism other than magnetic dipole radiation (e.g. multipole radiation, gravitational wave or neutrino emission), then the evolution of pulsars through the $P-\\dot{P}$ diagram could be complex. \n\nA combination of the above-mentioned factors (i.e. alternative progenitors and subsequent non-standard radiation) is then likely to play a role in millisecond pulsar evolution. An MSXP period distribution that has sharp multimodal features, coupled with non-standard energy loss mechanisms, may be able to account for the joint $P-\\dot{P}$ distribution of millisecond pulsars.\n\n\\acknowledgements \nThe research presented here has made use of the August 2008 version of the ATNF Pulsar Catalogue \\citep{MHT93}. 
The authors acknowledge NSF grant AST-0506453.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzedpm b/data_all_eng_slimpj/shuffled/split2/finalzzedpm new file mode 100644 index 0000000000000000000000000000000000000000..dfa1f73469d8dc575458b80827b22403a3365410 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzedpm @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nFor many interpretations of the modal operators -- e.g., for deontic, epistemic, game-theoretic, and high-probability interpretations -- it is necessary to adopt logics that are weaker than the normal ones; e.g., deontic paradoxes, see \\cite{G13,M06}, are one of the main motivations for adopting a non-normal deontic logic. Non-normal logics, see \\cite{C80} for naming conventions, are quite well understood from a semantic point of view by means of neighbourhood semantics \\cite{H09,P17}.\n Nevertheless, until recent years their proof theory has been rather limited since it was mostly confined to Hilbert-style axiomatic systems. This situation seems to be rather unsatisfactory since it is difficult to find derivations in axiomatic systems. When the aim is to find derivations and to analyse their structural properties, sequent calculi are to be preferred to axiomatic systems. 
Recently different kinds of sequent calculi for non-normal logics have been proposed: Gentzen-style calculi \\cite{I05,I11,L00,O15}; labelled \\cite{GM14,CO18} and display \\cite{P19} calculi based on translations into normal modal logics; labelled calculi based on the internalisation of neighbourhood \\cite{N17,NO19} and bi-neighbourhood \\cite{DON} semantics; and, finally, \\mbox{linear nested sequents \\cite{L17}.}\n \n \n This paper, which extends the results presented in \\cite{O15}, concentrates on Gentzen-style calculi since they are better suited than labelled calculi, display calculi, and nested sequents to give decision procedures (computationally well-behaved) and constructive proofs of interpolation theorems. We consider cut- and contraction-free G3-style sequent calculi for all the logics in the cube of non-normal modalities and for their extensions with the deontic axioms $D^\\Diamond:=\\Box A\\supset\\Diamond A$ and $D^\\bot:=\\neg\\Box\\bot$. The calculi we present have the subformula property and allow for a straightforward decision procedure by a terminating loop-free proof search. Moreover, with the exception of the calculi for {\\bf EC(N)} and its deontic extensions, they are \\emph{standard} \\cite{G16} -- i.e., each operator is handled by a finite number of rules with a finite number of premisses -- and they admit of a Maehara-style constructive proof of Craig's interpolation theorem.\n \n This work improves on previous ones on Gentzen-style calculi for non-normal logics in that we prove cut admissibility for non-normal modal and deontic logics, and not only for the modal ones \\cite{L00,I05,I11}. Moreover, we prove height-preserving admissibility of weakening and contraction, whereas neither weakening nor contraction is admissible in \\cite{L00,I05} and weakening but not contraction is admissible in \\cite{I11}. 
The admissibility of contraction is a major improvement since, as is well known, contraction can be as bad as cut for proof search: we may continue to duplicate some formula forever and, therefore, we need a (computationally expensive) loop-checker to ensure termination. Proof search procedures based on contraction-free calculi terminate because the height of derivations is bounded by a number depending on the complexity of the end-sequent and, therefore, we avoid the need for loop-checkers. To illustrate, the introduction of contraction-free calculi has made it possible to give computationally optimal decision procedures for propositional intuitionistic logic ($\\mathbf{IL_p}$) \\cite{H93} and for the normal modal logics {\\bf K} and {\\bf T} \\cite{B97,H95}. The existence of a loop-free terminating decision procedure has also made it possible to give a constructive proof of uniform interpolation for $\\mathbf{IL_p}$ \\cite{P92} as well as for {\\bf K} and {\\bf T} \\cite{B07}. The cut- and contraction-free calculi for non-normal logics considered here are such that the height of each derivation is bounded by the weight of its end-sequent and, therefore, we easily obtain a polynomial space upper complexity bound for proof search. This upper bound is optimal for the logics having $C$ as a theorem (the satisfiability problem for non-normal modal logics without $C$ is in {\\sc NP}, see \\cite{V89}).\n \n Moreover, the introduction of well-behaved calculi for non-normal deontic logics is interesting since proof analysis can be applied to the deontic paradoxes \\cite{M06}, which are one of the central topics of deontic reasoning. 
We illustrate this in Section \\ref{forrester} by considering Forrester's Paradox \\cite{F84} and by showing that proof analysis casts doubt on the widespread opinion \\cite{M06,P17,T97} that Forrester's argument provides evidence against rule $RM$ (see Table \\ref{rulesinf}).\nIf Forrester's argument is formalized as in \\cite{M06} then it does not compel us to adopt a deontic logic weaker than {\\bf KD}. If, instead, it is formalized as in \\cite{T97} then it forces the adoption of a logic where $RM$ fails, but the formal derivation differs substantially from Forrester's informal argument. \n \n We also give a constructive proof of interpolation for all logics having a standard calculus. To our knowledge, there is no other constructive study of interpolation in non-normal logics in the literature. In \\cite[Chap(s). 3.8 and 6.6]{F83} a constructive proof of Craig's (and Lyndon's) interpolation theorem is given for the modal logics {\\bf K} and {\\bf R}, and for some of their extensions, including the deontic ones, but the proof makes use of model-theoretic notions. A proof of interpolation by the Maehara technique for {\\bf KD} is given in \\cite{V93}. For a thorough study of interpolation in modal logics we refer the reader to \\cite{G05}. A model-theoretic proof of interpolation for {\\bf E} is given in \\cite{H09}, and a coalgebraic proof of (uniform) interpolation for all the logics considered here, as well as all other rank-1 modal logics (see below), is given in \\cite{P13}. 
As it is explained in Example \\ref{prob}, we have not been able to prove interpolation for calculi containing the non-standard rule $LR$-$C$ (see Table \\ref{Modal rules}) and, as far as we know, it is still an open problem whether it is possible to give a constructive proof of interpolation for these logics.\n \n \\paragraph{Related Work.}\\label{related}\n The modal rules of inference presented in Table \\ref{Modal rules} are obtained from the rules presented in \\cite{L00} by adding weakening contexts to the conclusion of the rules. This minor modification, used also in \\cite{I11,P13,PS10} for several modal rules, allows us to shift from set-based sequents to multiset-based ones and to prove not only that cut is admissible, as it is done in \\cite{I05,I11,L00}, but also that weakening and contraction are height-preserving admissible. Given that implicit contraction is not eliminable from set-based sequents, the decision procedure for non-normal logics given in \\cite{L00} is based on a model-theoretic inversion technique so that it is possible to define a procedure that outputs a derivation for all valid sequents and a finite countermodel for all invalid ones. One weakness of this decision procedure is that it does not respect the subformula property for logics without rule $RM$ (the procedure adds instances of the excluded middle).\n \n\n \nThe paper \\cite{I05} considers multiset-based calculi for the non-normal logic {\\bf M(N)} and for its extensions with axioms $D^\\Diamond,T, 4, 5$, and $B$. Nevertheless, neither weakening nor contraction is eliminable because there are no weakening contexts in the conclusion of the modal rules. In \\cite{I11} multiset-based sequent calculi for the non-normal logic {\\bf E(N)} and for its extensions with axioms $D^\\Diamond,T$, 4, 5, and $B$ are given. 
The rules $LR$-$E$ and $R$-$N$ are as in Table \\ref{Modal rules}, but the deontic axiom $D^\\Diamond$ is expressed by the following rule:\n\n$$\n\\infer[\\infrule D\\text{-}2]{\\Box A,\\Box B,\\Gamma\\Rightarrow\\Delta}{A,B\\Rightarrow&(\\Rightarrow A,B)}\n$$\nwhere the right premiss is present when we are working over $LR$-$E$ and it has to be omitted when we work over $LR$-$M$. In the calculi in \\cite{I05,I11} weakening and contraction are taken as primitive rules and not as admissible ones as in the present approach. Even if it is easy to show that weakening is eliminable from the calculi in \\cite{I11}, contraction cannot be eliminated because rule \\emph{D-2} has exactly two principal formulas and, therefore, it is not possible to permute contraction up with respect to instances of rule \\emph{D-2} (see Theorem \\ref{contr}). \nThe presence of a non-eliminable rule of contraction makes the elimination of cut more problematic: in most cases we cannot eliminate the cut directly, but we have to consider the rule known as multicut \\cite[p. 88]{NP01}. Moreover, cut is not eliminable from the calculus given in \\cite{I11} for the deontic logic {\\bf END}. 
The formula $D^\\bot:= \\neg\\Box\\bot$ is a theorem of this logic, but it can be derived only with a non-eliminable instance of cut as in:\n\n$$\n\\infer[\\infrule R\\neg]{\\Rightarrow \\neg\\Box\\bot}{\n\\infer[\\infrule Cut]{\\Box\\bot\\Rightarrow}{\n\\infer[\\infrule R\\mbox{-}N]{\\Rightarrow\\Box\\top}{\\Rightarrow\\top}&\n\\infer[\\infrule D\\mbox{-}2]{\\Box\\top,\\Box\\bot\\Rightarrow}{\\bot,\\top\\Rightarrow&\\Rightarrow\\bot,\\top}}}\n$$ \n\nFinally, it is worth noticing that all the non-normal logics we consider here are \\emph{rank-1} logics in the sense of \\cite{P13,PS10,PS09} -- i.e., logics whose modal axioms are propositional combinations of formulas of shape $\\Box\\phi$, where $\\phi$ is purely propositional -- and the calculi we give for the modal logics {\\bf E}, {\\bf M}, {\\bf K} and {\\bf KD} are explicitly considered in \\cite{P13,PS09}. Thus, they are part of the family of modal coalgebraic logics \\cite{P13,PS10,PS09} and most of the results in this paper can be seen as instances of general results that hold for rank-1 (coalgebraic) logics. If, in particular, we consider cut-elimination for coalgebraic logics \\cite{PS10} then all our calculi absorb congruence and Theorem \\ref{contr} and case 3 of Theorem \\ref{cut} show that they absorb contraction and cut. Hence, \\cite[Thm. 5.7]{PS10} entails that cut and contraction are admissible in these calculi; moreover, \\cite[Props. 5.8 and 5.11]{PS10} entail that they are one-step cut free complete w.r.t. coalgebraic semantics. This latter result gives a semantic proof of cut admissibility in the calculi considered here. Analogously, if we consider decidability, the polynomial space upper bound we find in Section \\ref{decision} coincides with that found in \\cite{PS09} for rank-1 modal logics. \n\n\n\\paragraph{Synopsis. }\nSection \\ref{secaxiom} summarizes the basic notions of axiomatic systems and of neighbourhood semantics for non-normal logics. 
Section \\ref{seccalculi} presents G3-style sequent calculi for these logics and then shows that weakening and contraction are height-preserving admissible and that cut is (syntactically) admissible. Section \\ref{secdecax} describes a terminating proof-search decision procedure for all calculi, it shows that each calculus is equivalent to the corresponding axiomatic system, and it applies proof search to Forrester's paradox. Finally, Section \\ref{secinterpol} gives a Maehara-style constructive proof of Craig's interpolation theorem for the logics having a standard calculus.\n\\section{Non-normal Logics}\\label{secaxiom}\n\\subsection{Axiomatic Systems}\nWe introduce, following \\cite{C80}, the basic notions of non-normal logics. Given a countable set of propositional variables $\\{p_n\\,|\\,n\\in \\mathbb{N}\\}$, the formulas of the modal language $\\mathcal{L}$ are generated by:\n\n$$\nA::= \\;p_n\\;|\\;\\bot\\;|\\;A\\wedge A\\;|\\;A\\lor A\\;|\\;A\\supset A\\;|\\;\\Box A\n$$\nWe remark that $\\bot$ is a 0-ary logical symbol. This will be extremely important in the proof of Craig's interpolation theorem. As usual $\\neg A$ is a shorthand for $A\\supset\\bot$, $\\top$ for $\\bot\\supset\\bot$, $A \\leftrightarrow B$ for $(A\\supset B)\\wedge(B\\supset A)$, and $\\Diamond A$ for $\\neg\\Box\\neg A$. We follow the usual conventions for parentheses. 
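As a side illustration (ours, not part of the paper), the grammar of $\mathcal{L}$ and the defined connectives translate directly into a small datatype, with $\neg$, $\top$ and $\Diamond$ as derived constructors:

```python
from dataclasses import dataclass

# Constructors mirroring the grammar  A ::= p | bot | A & A | A v A | A -> A | box A
@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Bot: pass
@dataclass(frozen=True)
class And: left: object; right: object
@dataclass(frozen=True)
class Or: left: object; right: object
@dataclass(frozen=True)
class Imp: left: object; right: object
@dataclass(frozen=True)
class Box: arg: object

# Defined connectives, exactly as in the text
def Neg(a): return Imp(a, Bot())      # ~A   :=  A -> bot
def Top():  return Imp(Bot(), Bot())  # top  :=  bot -> bot
def Dia(a): return Neg(Box(Neg(a)))   # dia A := ~ box ~A

# The deontic axiom D^bot = ~ box bot unfolds to  box bot -> bot:
print(Neg(Box(Bot())) == Imp(Box(Bot()), Bot()))  # True
```

Treating $\neg$, $\top$ and $\Diamond$ as abbreviations rather than primitives keeps the case analysis in the calculi (and in the interpolation proof, where the 0-ary status of $\bot$ matters) as small as possible.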
\n\n\n\n\\begin{table}\n\\caption{ Rules of inference}\\label{rulesinf}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\hline\\hline\\noalign{\\smallskip}\n\\infer[\\infrule RE]{\\Box A\\leftrightarrow\\Box B}{A\\leftrightarrow B}\n&$\\quad$&\n\\infer[\\infrule RM]{\\Box A\\supset\\Box B}{A\\supset B}\n\\\\\\noalign{\\smallskip\\smallskip\\smallskip}\n\\infer[\\infrule RR,\\; n\\geq 1]{(\\Box A_1\\wedge\\dots\\wedge\\Box A_n)\\supset\\Box B}{(A_1\\wedge\\dots\\wedge A_n)\\supset B}\n&$\\quad$&\n\\infer[\\infrule RK,\\; n\\geq 0]{(\\Box A_1\\wedge\\dots\\wedge\\Box A_n)\\supset\\Box B}{(A_1\\wedge\\dots\\wedge A_n)\\supset B}\n\\\\\n\\noalign{\\smallskip}\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}\n\\caption{ Axioms}\\label{axioms}\n\\begin{center}\n\\begin{tabular}{cclccclcccl}\n\\hline\\hline\\noalign{\\smallskip}\n$M$)&$\\quad$&$\\Box (A\\wedge B)\\supset(\\Box A\\wedge\\Box B)$&${}$\\qquad{}\\qquad{}\\qquad&$C$)&$\\quad$&$(\\Box A\\wedge\\Box B)\\supset\\Box(A\\wedge B)$ \\\\\\noalign{\\smallskip\\smallskip\\smallskip}$N$)&$\\quad$&$\\Box \\top$&${}$\\qquad{}\\qquad{}\\qquad&\n$D^\\bot$)&$\\quad$&$\\neg\\Box\\bot$ \\\\\\noalign{\\smallskip\\smallskip\\smallskip}$D^\\Diamond$)&$\\quad$&$ \\Box A\\supset\\Diamond A$\n\\\\\n\\noalign{\\smallskip}\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nLet {\\bf L} be the logic containing all $\\mathcal{L}$-instances of propositional tautologies as axioms, and modus ponens ($MP$) as inference rule. The minimal non-normal modal logic {\\bf E} is the logic {\\bf L} plus the rule $RE$ of Table \\ref{rulesinf}. We will consider all the logics that are obtained by extending {\\bf E} with some set of axioms from Table \\ref{axioms}. We will denote the logics according to the axioms that define them, e.g. {\\bf EC} is the logic {\\bf E}$\\,\\oplus \\,C$, and {\\bf EMD$^\\bot$} is {\\bf E}$\\,\\oplus\\, M\\oplus D^\\bot$. 
By {\\bf X} we denote any of these logics and we write \\mbox{{\\bf X} $\\vdash A$} whenever $A$ is a theorem of {\\bf X}. We will call \\emph{modal} the logics containing neither $D^\\bot$ nor $D^\\Diamond$, and \\emph{deontic} those containing at least one of them. We have followed the usual naming conventions for the modal axioms, but we have introduced new conventions for the deontic ones: $D^\\bot$ is usually called either $CON$ or $P$ and $D^\\Diamond$ is usually called $D$, cf. \\cite{G00,G13,M06}.\n \n It is also possible to give an equivalent rule-based axiomatization of some of these logics. In particular, the logic {\\bf EM}, also called {\\bf M}, can be axiomatized as {\\bf L} plus the rule $RM$ of Table \\ref{rulesinf}. The logic {\\bf EMC}, also called {\\bf R}, can be axiomatized as {\\bf L} plus the rule $RR$ of Table \\ref{rulesinf}. Finally, the logic {\\bf EMCN}, i.e. the smallest normal modal logic {\\bf K}, can be axiomatized as {\\bf L} plus the rule $RK$ of Table \\ref{rulesinf}. These rule-based axiomatizations will be useful later on since they simplify the proof of the equivalence between axiomatic systems and sequent calculi (Theorem \\ref{comp}).\n\n\n\n\n\n\n\nThe following proposition states the well-known relations between the theorems of non-normal modal logics. For a proof the reader is referred to \\cite{C80}.\n\n\n\\begin{proposition} For any formula $A\\in \\mathcal{L}$ we have that {\\bf E} $\\vdash A$ implies {\\bf M} $\\vdash A$; {\\bf M} $\\vdash A$ implies {\\bf R} $\\vdash A$; {\\bf R} $\\vdash A$ implies {\\bf K} $\\vdash A$. Analogously for the logics containing axiom $N$ and\/or axiom $C$.\\end{proposition}\n\n\nAxiom $D^\\bot$ is {\\bf K}-equivalent to $D^\\Diamond$, but the correctness of $D^\\Diamond$ has been much debated in the literature on deontic logic. This motivates the study of logics weaker than {\\bf KD}, in which $D^\\bot$ and $D^\\Diamond$ are no longer equivalent \\cite{C80}. 
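For instance, in any extension of {\\bf E} containing $N$, axiom $D^\\Diamond$ yields $D^\\bot$; the following axiomatic derivation sketch is ours, not taken from \\cite{C80}:\n\n$$\n\\begin{array}{lll}\n1. & \\Box\\top & N\\\\\n2. & \\Box\\top\\supset\\neg\\Box\\neg\\top & D^\\Diamond\\\\\n3. & \\neg\\Box\\neg\\top & MP\\; 1,2\\\\\n4. & \\neg\\top\\leftrightarrow\\bot & \\textnormal{propositional tautology}\\\\\n5. & \\Box\\neg\\top\\leftrightarrow\\Box\\bot & RE\\; 4\\\\\n6. & \\neg\\Box\\bot & \\textnormal{from 3 and 5 by propositional logic}\n\\end{array}\n$$\n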
\nThe deontic formulas $D^\\bot$ and $D^\\Diamond$ have the following relations in the logics we are considering.\n\\begin{proposition} $D^\\bot$ and $D^\\Diamond$ are independent in {\\bf E}; $D^\\bot$ is derivable from $D^\\Diamond$ in non-normal logics containing at least one of the axioms $M$ and $N$; $D^\\Diamond$ is derivable from $D^\\bot$ in non-normal logics containing axiom $C$.\n\\end{proposition}\nIn Figure \\ref{cube} the reader finds the lattice of non-normal modal logics, see \\cite[p. 237]{C80}, and in Figure \\ref{cubed} the lattice of non-normal deontic logics. \n \\begin{figure}[t]\n\\begin{center}\n\\begin{tikzpicture}\n\\node at (1,-2) {EM={\\bf M}};\n\\node at (4,-4) {{\\bf E}};\n\\node at (4,-2) {{\\bf EC}};\n\\node at (7,-2) {{\\bf EN}};\n\\node at (1,0) {EMC={\\bf R}};\n\\node at (4,2) {EMCN={\\bf K}};\n\\node at (4,0) {{\\bf EMN}};\n\\node at (7,0) {{\\bf ECN}};\n\n\\draw (3.8,-3.8) -- (1,-2.2);\n\\draw (4,-3.8) -- (4,-2.2); \n\\draw (4.2,-3.8) -- (7,-2.2);\n\n\\draw (1,-1.8) -- (1,-0.2);\n\\draw (4,0.2) -- (4,1.8); \n\\draw (7,-1.8) -- (7,-0.2);\n\n\\draw (1.2,0.2) -- (3.8,1.8);\n\\draw (6.8,0.2) -- (4.2,1.8);\n\\draw (1.2,-1.8) -- (3.8,-0.2);\n\\draw (6.8,-1.8) -- (4.2,-0.2);\n\n\\draw (3.8,-1.8) -- (1.2,-0.2);\n\\draw (4.2,-1.8) -- (6.8,-0.2);\n\\end{tikzpicture}\n\\caption{Lattice of non-normal modal logics}\\label{cube}\n\\end{center}\n\\end{figure}\n\n \\begin{figure}[t]\n\\begin{center}\n\\scalebox{0.80000}{\\begin{tikzpicture}\n\\filldraw [black] (5,-4.2) circle (4pt) \n\t\t (6.7,-4.2) circle (4pt)\n\t\t (5,-3) circle (4pt);\n\\node[below] at (4.8,-4.3) {\\small{\\bf ED$^\\bot$}};\n\\node[below] at (6.7,-4.3) {\\small{\\bf ED$^\\Diamond$}};\n\\node[left] at (4.9,-3) {\\small{ ED$^\\bot$D$^\\Diamond$}=};\n\\node[left] at (4.9,-3.3) {\\small{\\bf ED}};\n\\draw (5,-4.2) -- (1,-1.7);\n\\draw (5,-3) -- (1,-0.5);\n\\draw (5,-4.2) -- (5,-3);\n\\draw (6.7,-4.2) -- (5,-3);\n\\draw (5,-3) -- (5,-0.5);\n\\draw (6.7,-4.2) -- (6.7,-1.6);\n\\draw (5,-4.2) -- (9,-1.7);\n\\draw (5,-3) -- (9,-0.5);\n\n\n\n\\filldraw [black] (6.7,-1.5) circle (4pt) \n\t\t (5,-0.5) circle (4pt);\n\\node[left] at (6.5,-1.5) {\\small{\\bf ECD$^\\Diamond$}};\n\\node[left] at (4.9,-0.5) {\\small{ ECD$^\\bot$=}};\n\\node[left] at (4.9,-0.8) {\\small{ \\bf ECD}};\n\\draw (5,-0.5) -- (9,2);\n\\draw (6.7,-1.5)-- (5,-0.5);\n\\draw (5,-0.5) -- (1,2);\n\n\n\\filldraw [black] (9,-1.7) circle (4pt) \n\t\t \n\t\t (9,-0.5) circle (4pt);\n\\node[right] at (9.2,-1.7) {\\small{\\bf END$^\\bot$}};\n\\node[right] at (9.1,-0.5) {\\small{ END$^\\Diamond$}= {\\bf END}};\n\\draw (9,-0.5) -- (9,2);\n\\draw (9,-1.7) -- (9,-0.5);\n\\draw (9,-1.7) -- (5,0.8);\n\\draw (9,-0.5) -- (5,2);\n\n\n\\filldraw [black] (1,2) circle (4pt);\n\\node[left] at (0.9,2) {\\small{{ RD$^\\bot$}= {RD$^\\Diamond$}= {\\bf RD}}};\n\\draw (1,2) -- (5,4.5);\n\n\\filldraw [black] (5,4.5) circle (4pt);\n\\node[above] at (5,4.6) {\\small{{KD$^\\bot$}= {KD$^\\Diamond$}= {\\bf KD}}};\n\n\\filldraw [black] (1,-1.7) circle (4pt) \n\t\t (1,-0.5) circle (4pt);\n\\node[left] at (0.9,-1.7) {\\small{\\bf MD$^\\bot$}};\n\\node[left] at (0.9,-0.5) {\\small{ MD$^\\Diamond$}= {\\bf MD}};\n\\draw (1,-1.7) -- (5,0.8);\n\\draw (1,-0.5) -- (5,2);\n\\draw (1,-1.7) -- (1,-0.5);\n\\draw (1,-0.5) -- (1,2);\n\n\\filldraw [black] (5,0.8) circle (4pt) \n\t\t (5,2) circle (4pt); \n\\node[right] at (5.2,0.8) {\\small{\\bf MND$^\\bot$}};\n\\node[right] at (5.1,2) {\\small{ MND$^\\Diamond$}= {\\bf MND}};\n\\draw (5,0.8) -- (5,2);\n\\draw (5,2) -- (5,4.5);\n\n\\filldraw [black]\n\t\t (9,2) circle (4pt);\n\\node[right] at (9.1,2) {\\small{ECND$^\\bot$} = {ECND$^\\Diamond$}= {\\bf ECND}};\n\\draw (9,2) -- (5,4.5);\n\\end{tikzpicture}}\n\\caption{Lattice of non-normal deontic logics}\\label{cubed}\n\\end{center}\n\\end{figure}\n\n\n\n\\subsection{Semantics}\\label{semantics}\nThe most widely known semantics for non-normal logics is neighbourhood semantics. 
We sketch its main tenets following \\cite{C80}, where neighbourhood models are called \\emph{minimal models}. \n\n\\begin{definition}A \\emph{neighbourhood model} is a triple $\\mathcal{M}:=\\langle W,\\, N,\\,P\\rangle$, where $W$ is a non-empty set of possible worlds; $N:W\\longrightarrow 2^{2^W}$ is a neighbourhood function that associates to each possible world $w$ a set $N(w)$ of subsets of $W$; and $P$ gives a truth value to each propositional variable at each world. \\end{definition}\n\n\\noindent The definition of truth of a formula $A$ at a world $w$ of a neighbourhood model $\\mathcal{M}$ -- $\\models_w^\\mathcal{M}A$ -- is the standard one for the classical connectives with the addition of\n\n$$\n\\models_w^\\mathcal{M}\\Box A\\qquad \\textnormal{iff}\\qquad || A||^\\mathcal{M}\\in N(w)\n$$\nwhere $||A||^\\mathcal{M}$ is the truth set of $A$ -- i.e., $||A||^\\mathcal{M}=\\{w\\,|\\,\\models_w^\\mathcal{M} A\\}$. We say that a formula $A$ is \\emph{valid} in a class $\\mathcal{C}$ of neighbourhood models iff it is true in every world of every $\\mathcal{M}\\in\\mathcal{C}$.\n\nIn order to give soundness and completeness results for non-normal modal and deontic logics with respect to (classes of) neighbourhood models, we introduce the following definition.\n\\begin{definition} Let $\\mathcal{M}=\\langle W,\\, N,\\,P\\rangle$ be a neighbourhood model, $X,Y \\in 2^W$, and $w\\in W$. We say that: \n\\begin{itemize}\n\\item $\\mathcal{M}$ is \\emph{supplemented} if $X\\cap Y\\in N(w)$ implies $X\\in N(w)$ and $Y\\in N(w)$;\n\\item $\\mathcal{M}$ is \\emph{closed under finite intersection} if $X\\in N(w)$ and $Y\\in N(w)$ imply $X\\cap Y\\in N(w)$;\n\\item $\\mathcal{M}$ \\emph{contains the unit} if $W\\in N(w)$;\n\\item $\\mathcal{M}$ is \\emph{non-blind} if $X\\in N(w)$ implies $X\\neq \\emptyset$;\n\\item $\\mathcal{M}$ is \\emph{complement-free} if $X\\in N(w)$ implies $W-X\\not\\in N(w)$. 
\n\\end{itemize}\\end{definition}\n\n\n\\begin{proposition}\\label{corrax} We have the following correspondence results between $\\mathcal{L}$-formulas and the properties of the neighbourhood function defined above:\n\\begin{itemize}\n\\item Axiom $M$ corresponds to supplementation;\n\\item Axiom $C$ corresponds to closure under finite intersection;\n\\item Axiom $N$ corresponds to containment of the unit;\n\\item Axiom $D^\\bot$ corresponds to non-blindness;\n\\item Axiom $D^\\Diamond$ corresponds to complement-freeness.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{theorem}\\label{compax} {\\bf E} is sound and complete with respect to the class of all neighbourhood models. Any logic {\\bf X} which is obtained by extending {\\bf E} with some axioms from Table \\ref{axioms} is sound and complete with respect to the class of all neighbourhood models which satisfy all the properties corresponding to the axioms of {\\bf X}.\n\\end{theorem}\n\nSee \\cite{C80} for the proof of Proposition \\ref{corrax} and of Theorem \\ref{compax}.\n\n\\section{Sequent Calculi}\\label{seccalculi}\n\n We introduce sequent calculi for non-normal logics that extend the multiset-based sequent calculus {\\bf G3cp} \\cite{NP01,NP11,T00} for classical propositional logic -- see Table \\ref{G3cp} -- by adding some modal and deontic rules from Table \\ref{Modal rules}. In particular, we consider the modal sequent calculi given in Table \\ref{modcalculi}, which will be shown to capture the modal logics of Figure \\ref{cube}, and their deontic extensions given in Table \\ref{deoncalculi}, which will be shown to capture all deontic logics of Figure \\ref{cubed}.\nWe adopt the following notational conventions: we use {\\bf G3X} to denote a generic calculus from either Table \\ref{modcalculi} or Table \\ref{deoncalculi}, and we use {\\bf G3Y(Z)} to denote both {\\bf G3Y} and {\\bf G3YZ}. 
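To make the root-first reading of such calculi concrete, here is a naive backward proof-search sketch for the propositional rules of {\bf G3cp} plus the rule $LR$-$E$ (i.e., a toy version of {\bf G3E}). The tuple encoding and all function names are our own; this is only an illustration of backward search, not the paper's terminating procedure.

```python
# Naive backward proof search for G3cp + LR-E (an illustrative sketch of
# G3E, not the paper's procedure).  Formulas are tuples:
#   ("atom", n), ("bot",), ("and", A, B), ("or", A, B),
#   ("imp", A, B), ("box", A).   Sequents G => D are lists (multisets).

def provable(G, D):
    # Initial sequents: bottom on the left (rule L-bot), or a shared atom.
    if ("bot",) in G:
        return True
    if any(f[0] == "atom" and f in D for f in G):
        return True
    # Invertible propositional rules, applied root-first.
    for i, f in enumerate(G):
        rest = G[:i] + G[i+1:]
        if f[0] == "and":                 # L-and
            return provable(rest + [f[1], f[2]], D)
        if f[0] == "or":                  # L-or: two premisses
            return provable(rest + [f[1]], D) and provable(rest + [f[2]], D)
        if f[0] == "imp":                 # L-imp: two premisses
            return provable(rest, D + [f[1]]) and provable(rest + [f[2]], D)
    for i, f in enumerate(D):
        rest = D[:i] + D[i+1:]
        if f[0] == "and":                 # R-and: two premisses
            return provable(G, rest + [f[1]]) and provable(G, rest + [f[2]])
        if f[0] == "or":                  # R-or
            return provable(G, rest + [f[1], f[2]])
        if f[0] == "imp":                 # R-imp
            return provable(G + [f[1]], rest + [f[2]])
    # LR-E: pick Box A on the left and Box B on the right whose premisses
    # A => B and B => A (with empty contexts) are both provable.
    for f in G:
        if f[0] == "box":
            for g in D:
                if g[0] == "box" and provable([f[1]], [g[1]]) \
                                 and provable([g[1]], [f[1]]):
                    return True
    return False

p, q = ("atom", "p"), ("atom", "q")
# Congruence is absorbed: Box(p and q) => Box(q and p) is derivable...
print(provable([("box", ("and", p, q))], [("box", ("and", q, p))]))
# ...while monotonicity is not available without LR-M: Box(p and q) => Box p fails.
print(provable([("box", ("and", p, q))], [("box", p)]))
```

The search terminates because every propositional step strictly decreases the weight of the sequent and the $LR$-$E$ step strictly decreases its modal depth; extending the sketch with $LR$-$M$, $LR$-$R$, etc. would amount to adding further branching choices of the same shape.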
All the rules in Tables \\ref{G3cp} and \\ref{Modal rules} but $LR$-$C$ and $L$-$D^{\\Diamond_C}$ are standard rules in the sense of \\cite{G16}: each of them is a single rule with a fixed number of premisses; $LR$-$C$ and $L$-$D^{\\Diamond_C}$, instead, stand for a recursively enumerable set of rules with a variable number of premisses. \n\n For an introduction to {\\bf G3cp} and the relevant notions, the reader is referred to \\cite[Chapter 3]{NP01}.\n We sketch here the main notions that will be used in this paper. A \\emph{sequent} is an expression $\\Gamma\\Rightarrow \\Delta$, where $\\Gamma$ and $\\Delta$ are finite, possibly empty, multisets of formulas. If $\\Pi$ is the (possibly empty) multiset $A_1,\\dots,A_m$ then $\\Box\\Pi$ is the (possibly empty) multiset $\\Box A_1,\\dots,\\Box A_m$. A \\emph{derivation} of a sequent $\\Gamma\\Rightarrow\\Delta$ in {\\bf G3X} is an upward growing tree of sequents having $\\Gamma\\Rightarrow\\Delta$ as root, initial sequents or instances of rule $L\\bot$ as leaves, and such that each non-initial node is the conclusion of an instance of one rule of {\\bf G3X} whose premisses are its children. In the rules in Tables \\ref{G3cp} and \\ref{Modal rules}, the multisets $\\Gamma$ and $\\Delta$ are called \\emph{contexts}, the other formulas occurring in the conclusion (premiss(es), resp.) are called \\emph{principal} (\\emph{active}). In a sequent the \\emph{antecedent} (\\emph{succedent}) is the multiset occurring to the left (right) of the sequent arrow $\\Rightarrow$. As for {\\bf G3cp}, a sequent $\\Gamma\\Rightarrow\\Delta$ has the following \\emph{denotational interpretation}: the conjunction of the formulas in $\\Gamma$ implies the disjunction of the formulas in $\\Delta$.\n \nAs measures for inductive proofs we use the weight of a formula and the height of a derivation. 
The \\emph{weight} of a formula $A$, $w(A)$, is defined inductively as follows: $w(\\bot)=w(p_i)=0$; $w(\\Box A)=w(A)+1$; $w(A\\circ B)=w(A)+w(B)+1$ (where $\\circ$ is one of the binary connectives $\\wedge,\\,\\lor,\\,\\supset$). The \\emph{weight} of a sequent is the sum of the weight of the formulas occurring in that sequent. The \\emph{height} of a derivation is the length of its longest branch minus one. A rule of inference is said to be (\\emph{height-preserving}) \\emph{admissible} in {\\bf G3X} if, whenever its premisses are derivable in {\\bf G3X}, then also its conclusion is derivable (with at most the same derivation height) in {\\bf G3X}. The \\emph{modal depth} of a formula (sequent) is the maximal number of nested modal operators occurring in it(s members).\n \n\n \\begin{table}\n\\caption{ The sequent calculus {\\bf G3cp}}\\label{G3cp}\n\\begin{center}\n\\scalebox{0.85000}{\\begin{tabular}{cccc}\n\\hline\\hline\\noalign{\\smallskip}\n Initial sequents: &$\\qquad p_n,\\Gamma\\Rightarrow\\Delta,p_n$&& $p_n$ propositional variable \\\\\n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n Propositional rules:&\n \\infer[\\infrule L\\wedge]{A\\wedge B,\\g\\To\\d}{A,B,\\g\\To\\d}\n\n&$\\qquad$&\n\\infer[\\infrule R\\wedge]{\\g\\To\\d,A\\wedge B}{\\g\\To\\d,A\\quad&\\g\\To\\d,B}\n\n\\\\\\noalign{\\smallskip\\smallskip}\n\\infer[\\infrule L\\bot]{\\bot,\\g\\To\\d}{}&\n\\infer[\\infrule L\\lor]{A\\lor B,\\g\\To\\d}{A,\\g\\To\\d\\quad&B,\\g\\To\\d}\n\n&&\n\\infer[\\infrule R\\lor]{\\g\\To\\d,A\\lor B}{\\g\\To\\d,A,B}\n\n\\\\\\noalign{\\smallskip\\smallskip}&\n\\infer[\\infrule L\\supset]{A\\supset B,\\g\\To\\d}{\\g\\To\\d,A\\quad&B,\\g\\To\\d}\n\n&&\n\\infer[\\infrule R\\supset]{\\g\\To\\d,A\\supset B}{A,\\g\\To\\d,B}\n\n\\\\\n\\noalign{\\smallskip}\\hline\\hline\n\\end{tabular}}\n\\end{center}\n\\caption{Modal and deontic rules}\\label{Modal 
rules}\n\\begin{center}\n\\scalebox{0.87000}{\\begin{tabular}{llll}\n\\hline\\hline\\noalign{\\smallskip}\n\n\\infer[\\infrule LR\\mbox{-}E]{\\Box A,\\g\\To\\d,\\Box B}{A\\Rightarrow B\\quad&B\\Rightarrow A}\n&\n\\infer[\\infrule LR\\mbox{-}M ]{\\Box A,\\g\\To\\d,\\Box B}{A\\Rightarrow B}\n&\n\\multicolumn{2}{c}{\\infer[\\infrule LR\\mbox{-}R]{\\Box A,\\Box \\Pi,\\g\\To\\d,\\Box B}{A,\\Pi\\Rightarrow B}}\n\n\\\\\\noalign{\\smallskip\\smallskip}\n\\multicolumn{2}{c}{\\infer[\\infrule LR\\mbox{-}C ]{\\Box A_1,\\dots,\\Box A_n,\\g\\To\\d,\\Box B}{A_1,\\dots,A_n\\Rightarrow B&B\\Rightarrow A_1&{}^{\\dots}&B\\Rightarrow A_n}\n}\n&\n\\infer[\\infrule LR\\mbox{-}K]{\\Box\\Pi,\\g\\To\\d,\\Box B}{\\Pi\\Rightarrow B}&\n\\infer[\\infrule R\\mbox{-}N]{\\g\\To\\d,\\Box B}{\\Rightarrow B}\n\\\\\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\\end{tabular}}\n\n\\scalebox{0.8700}{\\begin{tabular}{llllll}\n\n\n&\n\\infer[\\infrule L\\mbox{-}D^\\bot]{\\Box A,\\g\\To\\d}{A\\Rightarrow}\n&\n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_E}, \\,|\\Pi|\\leq 2]{\\Box\\Pi,\\g\\To\\d}{\\Pi\\Rightarrow&\\Rightarrow\\Pi}\n&\n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_M}, \\,|\\Pi|\\leq 2]{\\Box\\Pi,\\g\\To\\d}{\\Pi\\Rightarrow}\n\\\\\\noalign{\\smallskip\\smallskip}\n\\phantom{aaaaaaaa}&\n\\multicolumn{2}{c}{\\infer[\\infrule L\\mbox{-}D^{\\Diamond_C}]{\\Box \\Pi,\\Box\\Sigma,\\g\\To\\d}{\\Pi,\\Sigma\\Rightarrow&\\{ \\Rightarrow A, B|\\; A\\in\\Pi, B\\in \\Sigma \\}}\n}\n&\n\\infer[\\infrule L\\mbox{-}D^*]{\\Box\\Pi,\\g\\To\\d}{\\Pi\\Rightarrow}&\\phantom{aaaaaaa}\n\\\\\n\\noalign{\\smallskip}\\hline\\hline\n\\end{tabular}}\\end{center}\n \\caption{Modal sequent calculi (\\checkmark= rule of the calculus, $\\star$ = admissible rule, $-$ = neither)}\\label{modcalculi}\n \n \\begin{center}\n\\scalebox{0.66000}{ \\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \n\\noalign{\\smallskip} \\hline\\hline\\noalign{\\smallskip\\smallskip}\n&{\\bf G3E}&{\\bf G3EN}&{\\bf G3M}& {\\bf G3MN}&{\\bf G3C}&{\\bf 
G3CN}& {\\bf G3R}&{\\bf G3K}\\\\\n\n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n$LR$-$E$&\\checkmark&\\checkmark&$\\star$&$\\star$&$\\star$&$\\star$&$\\star$&$\\star$\\\\\n\n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n$LR$-$M$&$-$&$-$&\\checkmark&\\checkmark&$-$&$-$&$\\star$&$\\star$\\\\\n\n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n$LR$-$C$&$-$&$-$&$-$&$-$&$\\checkmark$&$\\checkmark$&$\\star$&$\\star$\\\\\n\n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n$LR$-$R$&$-$&$-$&$-$&$-$&$-$&$-$&\\checkmark&$\\star$\\\\\n\n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n$LR$-$K$&$-$&$-$&$-$&$-$&$-$&$-$&$-$&\\checkmark\\\\\n\n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n$R$-$N$&$-$&$\\checkmark$&$-$&$\\checkmark$&$-$&$\\checkmark$&$-$&$\\star$\\\\\\noalign{\\smallskip}\\hline\\hline\n \\end{tabular}}\\end{center}\n\\caption{Deontic sequent calculi (\\checkmark= rule of the calculus, $\\star$ = admissible rule, $-$ = neither)}\\label{deoncalculi}\n\\begin{center}\n\\scalebox{0.66000}{\\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|}\n\n\\hline\\hline\\noalign{\\smallskip\\smallskip\\smallskip}\n &{\\bf G3E(N)D$^\\bot$}&{\\bf G3ED$^\\Diamond$}&{\\bf G3E(N)D}&{\\bf G3M(N)D$^\\bot$}&{\\bf G3M(N)D}&{\\bf G3CD$^\\Diamond$}&{\\bf G3C(N)D}&{\\bf G3RD}&{\\bf G3KD}\\\\\n \n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n $L$-$D^\\bot$&\\checkmark&$-$&\\checkmark&\\checkmark&$\\star$&$-$&$\\star$&$\\star$&$\\star$\\\\\n \n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n $L$-$D^{\\Diamond_E}$&$-$&\\checkmark&\\checkmark&$-$&$\\star$&$\\star$&$\\star$&$\\star$&$\\star$\\\\\n \n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n $L$-$D^{\\Diamond_M}$&$-$&$-$&$-$&$-$&\\checkmark&$\\star$&$\\star$&$\\star$&$\\star$\\\\\n \n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n 
$L$-$D^{\\Diamond_C}$&$-$&$-$&$-$&$-$&$-$&\\checkmark&$\\star$&$\\star$&$\\star$\\\\\n \n \\noalign{\\smallskip}\\hline\\noalign{\\smallskip\\smallskip}\n $L$-$D^*$&$-$&$-$&$-$&$-$&$-$&$-$&\\checkmark&\\checkmark&\\checkmark\n \n\\\\\n\\noalign{\\smallskip}\\hline\\hline\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Structural rules of inference}\nWe are now going to prove that the calculi {\\bf G3X} have the same good structural properties as {\\bf G3cp}: weakening and contraction are height-preserving admissible and cut is admissible. All proofs are extensions of those for {\\bf G3cp}, see \\cite[Chapter 3]{NP01}; in most cases, the modal rules have to be treated differently from the propositional ones because of the presence of empty contexts in the premiss(es) of the modal ones. We adopt the following notational convention: given a derivation tree $\\mathcal{D}_k$, the derivation tree of the $n$-th leftmost premiss of its last step is denoted by $\\mathcal{D}_{kn}$. We begin by showing that the restriction to atomic initial sequents, which is needed to have the propositional rules invertible, is not restrictive, since initial sequents with arbitrary principal formula are derivable in {\\bf G3X}.\n\n\\begin{proposition}\\label{genax} Every instance of $A,\\Gamma\\Rightarrow\\Delta, A$ is derivable in {\\bf G3X}.\\end{proposition}\n\n\\begin{proof} By induction on the weight of $A$. If $w(A)=0$ -- i.e., $A$ is atomic or $\\bot$ -- then we have an instance of an initial sequent or of a conclusion of $L\\bot$ and there is nothing to prove. If $w(A)\\geq1$, we argue by cases according to the construction of $A$. In each case we apply, root-first, the appropriate rule(s) in order to obtain sequents where some proper subformula of $A$ occurs both in the antecedent and in the succedent. The claim then holds by the inductive hypothesis (IH). 
To illustrate, if $A\\equiv\\Box B$ and we are in {\\bf G3M(ND)}, we have:\n\n$$\n\\infer[\\infrule LR\\mbox{-}M]{\\Box B,\\g\\To\\d,\\Box B}{\\infer[\\infrule IH]{B\\Rightarrow B}{}}\n$$\n \\end{proof}\n\n\\begin{theorem}\\label{weak} The left and right rules of weakening are height-preserving admissible in {\\bf G3X}\n\n$$\n\\infer[\\infrule LW]{A,\\g\\To\\d}{\\g\\To\\d}\n\\qquad\n\\infer[\\infrule RW]{\\g\\To\\d,A}{\\g\\To\\d}\n$$\n \\end{theorem}\n\n\\begin{proof}The proof is a straightforward induction on the height of the derivation $\\mathcal{D}$ of $\\Gamma\\Rightarrow \\Delta$.\n If the last step of $\\mathcal{D}$ is by a propositional rule, we have to apply the same rule to the weakened premiss(es), which are derivable by IH, see \\cite[Thm. 2.3.4]{NP01}. If it is by a modal or deontic rule, we proceed by adding $A$ to the appropriate weakening context of the conclusion of that rule instance. To illustrate, if the last rule is $LR$-$E$, we transform \n \n$$\n\\infer[\\infrule LR\\mbox{-}E]{\\Box B,\\g\\To\\d,\\Box C}{\\deduce{B\\Rightarrow C}{\\vdots\\;\\mathcal{D}_1}&\\deduce{C\\Rightarrow B}{\\vdots\\;\\mathcal{D}_2} }\n\\qquad\\mbox{into}\\qquad\n\\infer[\\infrule LR\\mbox{-}E]{\\Box B,A,\\g\\To\\d,\\Box C}{\\deduce{B\\Rightarrow C}{\\vdots\\;\\mathcal{D}_1}&\\deduce{C\\Rightarrow B}{\\vdots\\;\\mathcal{D}_2} }\n$$\n\n{}\\end{proof}\n\n\n\n\\noindent Before considering contraction, we recall some facts that will be useful later on. 
\n\\begin{lemma} In {\\bf G3X} the rules$\\quad$\n\\infer[\\infrule L\\neg]{\\neg A,\\g\\To\\d}{\\g\\To\\d,A}\nand$\\quad$\n\\infer[\\infrule R\\neg]{\\g\\To\\d,\\neg A}{A,\\g\\To\\d}\nare admissible.\n\\end{lemma}\n\\begin{proof}\nWe have the following derivations (the step by $RW$ is admissible thanks to Theorem \\ref{weak}):\n\n$$\\infer[\\infrule L\\supset]{A\\supset\\bot,\\g\\To\\d}{\\g\\To\\d,A&\\infer[\\infrule L\\bot]{\\bot,\\g\\To\\d}{}}\n\\qquad\\qquad\n\\infer[\\infrule R\\supset]{\\g\\To\\d,A\\supset\\bot}{\\infer[\\infrule RW]{A,\\g\\To\\d,\\bot}{A,\\g\\To\\d}}\n$$\n\\end{proof}\n\n\\begin{lemma}\\label{inv}All propositional rules are height-preserving invertible in {\\bf G3X}, that is, the derivability of (an instance of) a conclusion of a propositional rule entails the derivability, with at most the same derivation height, of its premiss(es).\\end{lemma}\n\n\\begin{proof} We have only to extend the proof for {\\bf G3cp}, see \\cite[Thm. 3.1.1]{NP01}, with new cases for the modal and deontic rules. If $A\\circ B$ occurs in the antecedent (succedent) of the conclusion of an instance of a modal or deontic rule then it must be a member of the weakening context $\\Gamma$ ($\\Delta$) of this rule instance, and we have only to change the weakening context according to the rule we are inverting.\\end{proof}\n\n\\begin{theorem}\\label{contr}The left and right rules of contraction are height-preserving admissible in {\\bf G3X}\\vspace{0.3cm}\n\n$$\n\\infer[\\infrule LC]{A,\\g\\To\\d}{A,A,\\g\\To\\d}\n\\qquad\n\\infer[\\infrule RC]{\\g\\To\\d,A}{\\g\\To\\d,A,A}\n$$\n \\end{theorem}\n \n \\begin{proof} The proof is by simultaneous induction on the height of the derivation $\\mathcal{D}$ of the premiss for left and right contraction. The base case is straightforward. For the inductive steps, we have different strategies according to whether the last step in $\\mathcal{D}$ is by a propositional rule or not. 
If the last step in $\\mathcal{D}$ is by a propositional rule, we have two subcases: if the contraction formula is not principal in that step, we apply the inductive hypothesis and then the rule. Else we start by using the height-preserving invertibility -- Lemma \\ref{inv} -- of that rule, and then we apply the inductive hypothesis and the rule, see \\cite[Thm. 3.2.2]{NP01} for details. \n \n If the last step in $\\mathcal{D}$ is by a modal or deontic rule, we have two subcases: either (the last step is by one of $LR$-$C$, $LR$-$R$, $LR$-$K$, $L$-$D^{\\Diamond_E}$, $L$-$D^{\\Diamond_M}$, $L$-$D^{\\Diamond_C}$ and $L$-$D^*$ and) both occurrences of the contraction formula $A$ of $LC$ are principal in the last step, or some instance of the contraction formula is introduced in the appropriate weakening context of the conclusion. In the first subcase, we apply the inductive hypothesis to the premiss and then the rule. An interesting example is when the last step in $\\mathcal{D}$ is by $L$-$D^{\\Diamond_E}$. We transform\n \n$$\n\\infer[\\infrule LC]{\\Box B,\\Gamma\\Rightarrow\\Delta}{\\infer[\\infrule L\\mbox{-}D^{\\Diamond_E}]{\\Box B,\\Box B,\\Gamma\\Rightarrow\\Delta}{\\deduce{B,B\\Rightarrow\\quad}{\\vdots\\;\\mathcal{D}_1}&\\deduce{\\Rightarrow B,B}{\\vdots\\;\\mathcal{D}_2}}}\n\\qquad\\textrm{ into }\\qquad\n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_E}]{\\Box B,\\Gamma\\Rightarrow\\Delta}{\\deduce{B\\Rightarrow\\qquad}{\\vdots\\;IH(\\mathcal{D}_1)}&\\deduce{\\Rightarrow B\\qquad}{\\vdots\\;IH(\\mathcal{D}_2)}}\n$$\n \n\\noindent where $IH(\\mathcal{D}_1)$ is obtained by applying the inductive hypothesis for the left rule of contraction to $\\mathcal{D}_1$ and $IH(\\mathcal{D}_2)$ is obtained by applying the inductive hypothesis for the right rule of contraction to $\\mathcal{D}_2$.\n\nIn the second subcase, we apply an instance of the same modal or deontic rule which introduces one less occurrence of $A$ in the appropriate context of the conclusion. 
Let's consider $RC$. If the last step is by $LR$-$M$ and no instance of $A$ is principal in the last rule, we transform\\vspace{0.3cm}\n$$\n\\infer[\\infrule RC]{\\Box B,\\Gamma'\\Rightarrow\\Delta',A,\\Box C}{ \\infer[\\infrule LR\\mbox{-}M]{\\Box B,\\Gamma'\\Rightarrow\\Delta',A,A,\\Box C}{\\deduce{B\\Rightarrow C}{\\vdots\\;\\mathcal{D}_1}}}\n\\qquad\\mbox{into}\\qquad\n\\infer[\\infrule LR\\mbox{-}M]{\\Box B,\\Gamma'\\Rightarrow\\Delta',A,\\Box C}{\\deduce{B\\Rightarrow C\\quad}{\\vdots\\;\\mathcal{D}_1}}\n$$\n\n\\end{proof}\n \n \\begin{theorem}\\label{cut}The rule of cut is admissible in {\\bf G3X}\n$$\n\\infer[\\infrule Cut]{\\Gamma,\\Pi\\Rightarrow\\Delta,\\Sigma}{\\deduce{\\g\\To\\d,D}{\\vdots\\;\\mathcal{D}_1}&\\deduce{D,\\Pi\\Rightarrow\\Sigma}{\\vdots\\;\\mathcal{D}_2}}\n$$ \n\\end{theorem}\n \n \\begin{proof}\n We consider an uppermost application of $Cut$ and we show that either it is eliminable, or it can be permuted upward in the derivation until we reach sequents where it is eliminable. The proofs, one for each calculus, are by induction on the weight of the cut formula $D$ with a sub-induction on the sum of the heights of the derivations of the two premisses (cut-height, for short). The proof can be organized into three exhaustive cases:\n \n \\begin{enumerate}\n \\item At least one of the premisses of cut is an initial sequent or a conclusion of $L\\bot$;\n\\item The cut formula is not principal in the last step of at least one of the two premisses;\n \\item The cut formula is principal in both premisses.\n \\end{enumerate}\n \n\n\n\\medskip\n\\noindent {\\bf{\\large\\textbullet}$\\quad$ Case (1).}$\\quad$ Same as for {\\bf G3cp}, see \\cite[Thm. 3.2.3]{NP01} for the details.\n \\medskip\n \n\n \\medskip\n\\noindent {\\bf{\\large\\textbullet}$\\quad$ Case (2).}$\\quad$ We have many subcases according to the last rule applied in the derivation ($\\mathcal{D}^\\star$) of the premiss where the cut formula is not principal. 
For the propositional rules, we refer the reader to \\cite[Thm. 3.2.3]{NP01}, where a procedure is given that reduces the cut-height. If the last rule applied in $\\mathcal{D}^\\star$ is a modal or deontic one, we can transform the derivation into a cut-free one because the conclusion of \\mbox{\\it{Cut}} is derivable by replacing the last step of $\\mathcal{D}^\\star$ with the appropriate instance of the same modal or deontic rule. We present explicitly only the cases where the last step of the left premiss is by $LR$-$E$ or $L$-$D^\\bot$ and the cut formula is not principal in it, all other transformations being similar.\n \\medskip\n\n \n\n\n\\noindent $\\mathbf{LR\\textrm{-}E}:\\quad$ \\, If the left premiss is by rule $LR$-$E$ (and $\\Gamma\\equiv \\Box A,\\Gamma'$ and $\\Delta\\equiv \\Delta',\\Box B$), we transform\n\n$$\n\\infer[\\infrule Cut]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta',\\Box B,\\Sigma}{\\infer[\\infrule LR\\mbox{-}E]{\\Box A,\\Gamma'\\Rightarrow\\Delta',\\Box B,D}{\\deduce{A\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{B\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{12}}}&\\deduce{D,\\Pi\\Rightarrow\\Sigma}{\\vdots\\;\\mathcal{D}_2}}\n\\qquad\\mbox{into}\\qquad\n\\infer[\\infrule LR\\mbox{-}E ]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta',\\Box B,\\Sigma}{\\deduce{A\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{B\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{12}}}\n$$\n\n\n\n\\noindent $\\mathbf{L}${\\bf-}$\\mathbf{D^\\bot}:\\quad$ \\, If the left premiss is by rule $L$-$D^\\bot$, we transform\n\n$$\n\\infer[\\infrule Cut]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule L\\mbox{-}D^\\bot]{\\Box A,\\Gamma'\\Rightarrow\\Delta,D}{\\deduce{A\\Rightarrow }{\\vdots\\;\\mathcal{D}_{11}}}&\\deduce{D,\\Pi\\Rightarrow\\Sigma}{\\vdots\\;\\mathcal{D}_2}}\n\\qquad\\mbox{into}\\qquad\n\\infer[\\infrule L\\mbox{-}D^\\bot ]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma}{\\deduce{A\\Rightarrow 
}{\\vdots\\;\\mathcal{D}_{11}}}\n$$\n\n\n\n\n \n\n\n\\medskip\n\n\\noindent {\\bf {\\large\\textbullet}$\\quad$ Case (3).}$\\quad$ If the cut formula $D$ is principal in both premisses, we have cases according to the principal operator of $D$. In each case we have a procedure that reduces the weight of the cut formula, possibly increasing the cut-height. For the propositional cases, which are the same for all the logics considered here, see \\cite[Thm. 3.2.3]{NP01}.\n\n If $D\\equiv\\Box C$, we consider the different logics one by one, without repeating the common cases. \\\\\n\n\\noindent\\textbullet$\\quad${\\bf G3E(ND).}$\\quad$ Both premisses are by rule $LR$-$E$; we have\n\n$$\n\\infer[\\infrule Cut]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma',\\Box B}{\\infer[\\infrule LR\\mbox{-}E]{\\Box A,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{12}}}&\\infer[\\infrule LR\\mbox{-}E]{\\Box C,\\Pi\\Rightarrow\\Sigma',\\Box B}{\\deduce{C\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{21}}&\\deduce{B\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{22}}}}\n$$\n\n\\noindent and we transform it into the following derivation that has two cuts with cut formulas of lesser weight, which are admissible by IH.\n\n$$\n\\infer[\\infrule LR\\mbox{-}E]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma',\\Box B}{\\infer[\\infrule Cut]{ A\\Rightarrow B}{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{21}}}&\\infer[\\infrule Cut]{B\\Rightarrow A}{\\deduce{B\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{22}}&\\deduce{C\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{12}}}}\n$$\n\n\n\n\\noindent\\textbullet$\\quad${\\bf G3EN(D).}$\\quad$ Left premiss by $R$-$N$ and right one by $LR$-$E$. 
We transform\n\n$$\n\\infer[\\infrule Cut]{\\Gamma,\\Pi\\Rightarrow\\Delta,\\Sigma',\\Box A}{\\infer[\\infrule R\\mbox{-}N]{\\g\\To\\d,\\Box C}{\\deduce{\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}}&\\infer[\\infrule LR\\mbox{-}E]{\\Box C,\\Pi\\Rightarrow\\Sigma',\\Box A}{\\deduce{C\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{21}}&\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{22}}}}\n\\qquad\\mbox{into}\\qquad\n\\infer[\\infrule R\\mbox{-}N ]{\\Gamma,\\Pi\\Rightarrow\\Delta,\\Sigma',\\Box A}{\\infer[\\infrule Cut]{\\Rightarrow A}{\\deduce{\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow A}{\\vdots\\,\\mathcal{D}_{21}}}}\n$$\n\n\n\\noindent \\textbullet$\\quad${\\bf G3E(N)D$^\\bot$.}$\\quad$\nLeft premiss is by $LR$-$E$, and right one by $L$-$D^\\bot$. We transform \n\n $$\n\\infer[\\infrule Cut]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule LR\\mbox{-}E]{\\Box A,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{12}}}&\\infer[\\infrule L\\mbox{-}D^\\bot]{\\Box C,\\Pi\\Rightarrow\\Sigma}{\\deduce{C\\Rightarrow }{\\vdots\\;\\mathcal{D}_{21}}}}\n\\qquad\\mbox{into}\\qquad\n\\infer[\\infrule L\\mbox{-}D^\\bot ]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule Cut]{A\\Rightarrow }{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow }{\\vdots\\,\\mathcal{D}_{21}}}}\n$$\n\n\\noindent \\textbullet$\\quad${\\bf G3E(N)D$^\\Diamond$.}$\\quad$\nLeft premiss is by $LR$-$E$, and right one by $L$-$D^{\\Diamond_E}$. 
We transform ($|\\Xi|\\leq 1$)\n\n{\\small $$\n\\infer[\\infrule Cut]{\\Box A,\\Gamma',\\Box\\Xi,\\Pi'\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule LR\\mbox{-}E]{\\Box A,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{12}}}&\\infer[\\infrule L\\mbox{-}D^{\\Diamond_E}]{\\Box C,\\Box\\Xi,\\Pi'\\Rightarrow\\Sigma}{\\deduce{C,\\Xi\\Rightarrow }{\\vdots\\;\\mathcal{D}_{21}}&\\deduce{\\Rightarrow C,\\Xi}{\\vdots\\;\\mathcal{D}_{22}}}}\n\\text{into} \n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_E}]{\\Box A,\\Gamma',\\Box\\Xi,\\Pi'\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule Cut]{ \\Rightarrow\\Xi,A}{\\deduce{\\Rightarrow \\Xi, C}{\\vdots\\;\\mathcal{D}_{22}}&\\deduce{C\\Rightarrow A}{\\vdots\\;\\mathcal{D}_{12}}}&\\infer[\\infrule Cut]{A,\\Xi\\Rightarrow }{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C,\\Xi\\Rightarrow }{\\vdots\\;\\mathcal{D}_{21}}}}\n$$}\n\n\\noindent\\textbullet$\\quad${\\bf G3E(N)D.}$\\quad$ Left premiss by $LR$-$E$ and right one by $L$-$D^\\bot$ or $L$-$D^{\\Diamond_E}$. Same as above.\\vspace{0.3cm}\n\n\n\n\n\n\\noindent\\textbullet$\\quad${\\bf G3END$^\\bot$.}$\\quad$ Left premiss by $R$-$N$ and right one by $L$-$D^\\bot$. We transform\n\n $$\n\\infer[\\infrule Cut]{\\Gamma,\\Pi\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule R\\mbox{-}N]{\\g\\To\\d,\\Box C}{\\deduce{\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}}&\\infer[\\infrule L\\mbox{-}D^\\bot]{\\Box C,\\Pi\\Rightarrow\\Sigma}{\\deduce{C\\Rightarrow }{\\vdots\\;\\mathcal{D}_{21}}}}\n\\qquad\\mbox{into}\\qquad\n\\infer=[\\infrule LWs\\mbox{ and }RWs ]{\\Gamma,\\Pi\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule Cut]{\\phantom{C}\\Rightarrow^{} }{\\deduce{\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow }{\\vdots\\,\\mathcal{D}_{21}}}}\n$$\n\n\n\n\\noindent\\textbullet$\\quad${\\bf G3END.}$\\quad$ Left premiss by $R$-$N$ and right one by $L$-$D^{\\Diamond_E}$. 
We transform ($|\\Xi|\\leq 1$)\n\n$$\n\\infer[\\infrule Cut]{\\Box \\Xi,\\Gamma,\\Pi'\\Rightarrow\\Delta,\\Sigma}{\n\\infer[\\infrule R\\mbox{-}N]{\\Gamma\\Rightarrow\\Delta,\\Box C}{\\deduce{\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}}&\n\\infer[\\infrule L\\mbox{-}D^\\Diamond]{\\Box C,\\Box \\Xi,\\Pi'\\Rightarrow\\Sigma}{\\deduce{C,\\Xi}{\\vdots\\;\\mathcal{D}_{21}}\\Rightarrow&\\Rightarrow \\deduce{C,\\Xi}{\\vdots\\;\\mathcal{D}_{22}}}}\n\\qquad\\text{into}\\qquad\n\\infer[\\infrule (\\star)]{\\Box \\Xi,\\Gamma,\\Pi'\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule Cut]{\\Xi\\Rightarrow}{\n\\deduce{\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\n\\deduce{C,\\Xi\\Rightarrow}{\\vdots\\;\\mathcal{D}_{21}}}}\n$$\nwhere $(\\star)$ is an instance of $L$-$D^\\bot$ if $|\\Xi|=1$, else ($|\\Xi|=0$ and) it is some instances\\mbox{ of $LW$ and $RW$.} \n\n\\noindent \\textbullet$\\quad${\\bf G3M(ND).}$\\quad$\n Both premisses are by rule $LR$-$M$, we transform\n $$\n\\infer[\\infrule Cut]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma',\\Box B}{\\infer[\\infrule LR\\mbox{-}M]{\\Box A,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}}&\\infer[\\infrule LR\\mbox{-}M]{\\Box C,\\Pi\\Rightarrow\\Sigma',\\Box B}{\\deduce{C\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{21}}}}\n\\qquad\\mbox{into}\\qquad\n\\infer[\\infrule LR\\mbox{-}M]{\\Box A,\\Gamma',\\Pi\\Rightarrow\\Delta,\\Sigma',\\Box B}{\\infer[\\infrule Cut]{A\\Rightarrow B }{\\deduce{A\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow B}{\\vdots\\,\\mathcal{D}_{21}}}}\n$$\n\n\n\n\\noindent\\textbullet$\\quad${\\bf G3MN(D).}$\\quad$ Left premiss by $R$-$N$ and right one by $LR$-$M$. Similar to the case with left premiss by $R$-$N$ and right one by $LR$-$E$.\\vspace{0.3cm}\n\n\\noindent \\textbullet$\\quad${\\bf G3M(N)D$^\\bot$} and {\\bf G3M(N)D.}$\\quad$\nLeft premiss is by $LR$-$M$, and right one by $L$-$D^\\bot$ or $L$-$D^{\\Diamond_M}$. 
Similar to the case with left premiss by $LR$-$E$ and right by $L$-$D^\\bot$ or $L$-$D^{\\Diamond_E}$, respectively.\\vspace{0.3cm}\n\n\n\n\\noindent\\textbullet$\\quad${\\bf G3MND$^\\bot$} and {\\bf G3MND.}$\\quad$ The cases with left premiss by $R$-$N$ and right one by a deontic rule are like the analogous ones we have already considered.\n\n\n\\medskip\n\\noindent \\textbullet$\\quad${\\bf G3C(ND).} Both premisses are by rule $LR$-$C$. Let us agree to use $\\Lambda$ to denote the non-empty multiset $A_1,\\dots,A_n$, and $\\Xi$ for the (possibly empty) multiset $B_2,\\dots, B_m$. The derivation\n\n$$\n\\infer[\\infrule Cut]{\\Box\\Lambda,\\Gamma',\\Box\\Xi,\\Pi'\\Rightarrow\\Delta,\\Sigma', \\Box E}{\\infer[\\infrule LR\\mbox{-}C]{\\Box\\Lambda,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{\\Lambda\\Rightarrow C}{\\vdots \\;\\mathcal{D}_{11}}& \\deduce{C\\Rightarrow A_1}{\\vdots \\;\\mathcal{D}_{A_1}}&{}^{\\dots}&\\deduce{C\\Rightarrow A_n}{\\vdots \\;\\mathcal{D}_{A_n}}}&\n\\infer[\\infrule LR\\mbox{-}C]{\\Box C,\\Box\\Xi,\\Pi'\\Rightarrow\\Sigma',\\Box E}{\\deduce{C,\\Xi\\Rightarrow E}{\\vdots \\;\\mathcal{D}_{21}}&\\deduce{ E\\Rightarrow C}{\\vdots \\;\\mathcal{D}_{C}}&{}^{\\dots}& \\deduce{E\\Rightarrow B_m}{\\vdots \\;\\mathcal{D}_{B_m}}}}\n$$\n\n\\noindent is transformed into the following derivation having $n+1$ cuts on formulas of lesser weight\n\\noindent\\scalebox{0.8000}{$$\n\\infer[\\infrule LR\\mbox{-}C]{\\Box\\Lambda,\\Gamma',\\Box\\Xi,\\Pi'\\Rightarrow\\Delta,\\Sigma', \\Box E}{\\infer[\\infrule Cut]{\\Lambda,\\Xi\\Rightarrow E}{\\deduce{\\Lambda\\Rightarrow C}{\\vdots\\:\\mathcal{D}_{11}}&\\deduce{C,\\Xi\\Rightarrow E}{\\vdots\\;\\mathcal{D}_{21}}}&\n\\infer[\\infrule{Cut}\\;\\dots]{E\\Rightarrow A_1}{\\deduce{E\\Rightarrow C}{\\vdots\\:\\mathcal{D}_{C}}&\\deduce{C\\Rightarrow A_1}{\\vdots\\;\\mathcal{D}_{A_1}}}&\n\\infer[\\infrule Cut]{E\\Rightarrow A_n}{\\deduce{E\\Rightarrow C}{\\vdots\\:\\mathcal{D}_{C}}&\\deduce{C\\Rightarrow 
A_n}{\\vdots\\;\\mathcal{D}_{A_n}}}&\n \\deduce{E\\Rightarrow B_2}{\\vdots \\;\\mathcal{D}_{B_2}}\n &{}^{\\dots}& \\deduce{E\\Rightarrow B_m}{\\vdots \\;\\mathcal{D}_{B_m}}\n }\n$$}\\vspace{0.3cm}\n\n\\noindent \\textbullet$\\quad${\\bf G3CN(D).} Left premiss by $R$-$N$ and right premiss by $LR$-$C$. We have\n\n$$\n\\infer[\\infrule Cut]{\\Gamma,\\Box A_1,\\dots,\\Box A_n,\\Pi'\\Rightarrow\\Delta,\\Sigma', \\Box B}{\\infer[\\infrule R\\mbox{-}N]{\\g\\To\\d,\\Box C}{\\deduce{\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}}&\n\\infer[\\infrule LR\\mbox{-}C]{\\Box C,\\Box A_1,\\dots,\\Box A_n,\\Pi'\\Rightarrow\\Sigma',\\Box B}{\\deduce{C,A_1,\\dots,A_n\\Rightarrow B}{\\vdots \\;\\mathcal{D}_{21}}&\\deduce{ B\\Rightarrow C}{\\vdots \\;\\mathcal{D}_{C}}&{}^{\\dots}& \\deduce{B\\Rightarrow A_n}{\\vdots \\;\\mathcal{D}_{A_n}}}}\n$$\n where $A_1,\\dots,A_n$ (and thus also $\\Box A_1,\\dots,\\Box A_n$) may or may not be the empty multiset. If $A_1,\\dots,A_n$ is not empty, we transform it into the following derivation having one cut with cut formula of lesser weight\n \n$$\n\\infer[\\infrule LR\\mbox{-}C]{\\Gamma,\\Box A_1,\\dots,\\Box A_n,\\Pi'\\Rightarrow\\Delta,\\Sigma', \\Box B}{\n\\infer[\\infrule Cut]{A_1,\\dots,A_n\\Rightarrow B}{\\deduce{\\Rightarrow C\\quad}{\\vdots\\:\\mathcal{D}_{11}}&\\deduce{C,A_1,\\dots, A_n\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{21}}}&\n \\deduce{B\\Rightarrow A_1}{\\vdots \\;\\mathcal{D}_{A_1}}\n &{}^{\\dots}& \\deduce{B\\Rightarrow A_n}{\\vdots \\;\\mathcal{D}_{A_n}}\n }\n$$\nIf, instead, $A_1,\\dots,A_n$ is empty, we transform it into\n\n$$\n\\infer[\\infrule R\\mbox{-}N]{\\Gamma,\\Pi'\\Rightarrow\\Delta,\\Sigma',\\Box B}{\\infer[\\infrule Cut]{\\Rightarrow B}{\\deduce{\\Rightarrow C\\qquad}{\\vdots\\:\\mathcal{D}_{11}}&\\deduce{C\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{21}}}}\n$$\n\n\\noindent \\textbullet$\\quad${\\bf G3CD$^{\\Diamond}$.} Left premiss by $LR$-$C$ and right premiss by $L$-$D^{\\Diamond_C}$. 
We transform (we assume $\\Xi= A_1,\\dots, A_k$, $\\Theta= C, B_2,\\dots, B_m$ and $\\Lambda= D_1,\\dots, D_n$) \n\n$$\n\\infer[\\infrule Cut ]{\\Box\\Xi,\\Box B_2,\\dots, \\Box B_m,\\Box \\Lambda,\\Gamma',\\Pi'\\Rightarrow\\Delta,\\Sigma}{\n\\infer[\\infrule LR\\mbox{-}C]{\\Box\\Xi,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{\\Xi\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{\\{ C\\Rightarrow A_i\\,|\\, A_i\\in\\Xi\\}}{\\vdots\\;\\mathcal{D}_{1A_i}}}&\n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_C}]{\\Box C,\\Box B_2,\\dots,\\Box B_m,\\Box\\Lambda,\\Pi'\\Rightarrow\\Sigma}{\n\\deduce{\\Theta,\\Lambda\\Rightarrow}{\\vdots\\;\\mathcal{D}_{21}}&\n\\deduce{\\{ \\Rightarrow E,D_j\\,|\\, E\\in\\Theta\\text{ and } D_j\\in\\Lambda\\}}{\\vdots\\;\\mathcal{D}_{\\Theta_{i}\\Lambda_j}}}}\n$$\ninto the following derivation having $1+(k\\times n)$ cuts on formulas of lesser weight\n\n$$\\scalebox{0.830}{\n\\infer[\\infrule{ L\\mbox{-}D^{\\Diamond_C}}]{\\Box\\Xi,\\Box B_2,\\dots, \\Box B_m,\\Box \\Lambda,\\Gamma',\\Pi'\\Rightarrow\\Delta,\\Sigma}{\n\\infer[\\infrule Cut]{\\Xi,B_2,\\dots, B_m,\\Lambda\\Rightarrow}{\n\\deduce{\\Xi\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C,B_2,\\dots,B_m,\\Lambda\\Rightarrow}{\\vdots\\;\\mathcal{D}_{21}}}\n&\n\\infer[\\infrule Cut]{\\{\\Rightarrow A_i,D_j| A_i\\in\\Xi,\\, D_j\\in\\Lambda\\}}{\\deduce{\\Rightarrow D_j,C}{\\vdots\\;\\mathcal{D}_{\\Theta_1,\\Lambda_j}}&\\deduce{C\\Rightarrow A_i}{\\vdots\\;\\mathcal{D}_{1A_i}}}\n&\n\\deduce{\\{\\Rightarrow B_i,D_j| B_i\\in\\Theta-C,\\, D_j\\in \\Lambda\\}}{\\vdots\\;\\mathcal{D}_{\\Theta_i\\Lambda_j}}\n}\n}$$\n\n\n\n\n\n\n\n\n\n\n\\noindent \\textbullet$\\quad${\\bf G3C(N)D.}$\\quad$ Left premiss by $LR$-$C$ and right one by $L$-$D^*$. It is straightforward to transform the derivation into another one having one cut with cut formula of lesser weight. 
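The common pattern in all these Case (3) transformations is that a cut on $\Box C$ is traded for one or more cuts on $C$ itself, so the weight of the cut formula strictly decreases. A minimal sketch of this pattern for the simplest case, the {\bf G3M} reduction; the tuple encoding of formulas and of derivations below is our own illustrative assumption, not the paper's formal definition:

```python
# Sketch (our own encoding): weight of a formula and the G3M reduction step,
# where Cut on Box(C) between two LR-M inferences becomes LR-M over Cut on C.

def weight(formula):
    """Weight of a formula encoded as ('var', p), ('bot',), ('box', A) or (op, A, B)."""
    tag = formula[0]
    if tag in ('var', 'bot'):
        return 1
    if tag == 'box':
        return 1 + weight(formula[1])
    return 1 + weight(formula[1]) + weight(formula[2])  # binary connectives

def reduce_lrm_cut(d11, d21, cut_formula):
    """d11 derives A => C, d21 derives C => B; the cut on Box(C) is replaced
    by a single cut on C below one LR-M inference, so the new cut formula is C."""
    assert cut_formula[0] == 'box'
    inner = cut_formula[1]
    return ('LR-M', ('Cut', inner, d11, d21)), inner

boxed_c = ('box', ('imp', ('var', 'p'), ('var', 'q')))
_, new_cut = reduce_lrm_cut('D11', 'D21', boxed_c)
assert weight(new_cut) < weight(boxed_c)  # the new cut formula is strictly lighter
```

Since every transformation either lowers the cut-height (Case 2) or, as here, the weight of the cut formula (Case 3), the double induction on weight and cut-height terminates.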
\\medskip\n\n\n\n\n\\noindent \\textbullet$\\quad${\\bf G3R(D).}$\\quad$ Both premisses are by rule $LR$-$R$, we transform\n\n\\noindent\\scalebox{0.88000}{ $$\n\\infer[\\infrule Cut]{\\Box A,\\Box\\Xi,\\Gamma',\\Box\\Psi,\\Pi'\\Rightarrow\\Delta,\\Sigma',\\Box B}{\\infer[\\infrule LR\\mbox{-}R]{\\Box A,\\Box\\Xi,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{A,\\Xi\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}}&\\infer[\\infrule LR\\mbox{-}R]{\\Box C,\\Box\\Psi,\\Pi'\\Rightarrow\\Sigma',\\Box B}{\\deduce{C,\\Psi\\Rightarrow B}{\\vdots\\;\\mathcal{D}_{21}}}}\n\\quad\\mbox{into}\\quad\n\\infer[\\infrule LR\\mbox{-}R]{\\Box A,\\Box\\Xi,\\Gamma',\\Box\\Psi,\\Pi'\\Rightarrow\\Delta,\\Sigma',\\Box B}{\\infer[\\infrule Cut]{A,\\Xi,\\Psi\\Rightarrow B }{\\deduce{A,\\Xi\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C,\\Psi\\Rightarrow B}{\\vdots\\,\\mathcal{D}_{21}}}}\n$$}\n\\medskip\n\n\\noindent \\textbullet$\\quad${\\bf G3RD$^\\star$.}$\\quad$ Left premiss is by $LR$-$R$, and right one by $L$-$D^\\star$, we transform\n\n\\noindent$$\n\\infer[\\infrule Cut]{\\Box A,\\Box\\Xi,\\Gamma',\\Box\\Psi,\\Pi'\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule LR\\mbox{-}R]{\\Box A,\\Box\\Xi,\\Gamma'\\Rightarrow\\Delta,\\Box C}{\\deduce{A,\\Xi\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}}&\\infer[\\infrule L\\mbox{-}D^*]{\\Box C,\\Box\\Psi,\\Pi'\\Rightarrow\\Sigma}{\\deduce{C,\\Psi\\Rightarrow }{\\vdots\\;\\mathcal{D}_{21}}}}\n\\quad\\mbox{into}\\quad\n\\infer[\\infrule L\\mbox{-}D^*]{\\Box A,\\Box\\Xi,\\Gamma',\\Box\\Psi,\\Pi'\\Rightarrow\\Delta,\\Sigma}{\\infer[\\infrule Cut]{A,\\Xi,\\Psi\\Rightarrow }{\\deduce{A,\\Xi\\Rightarrow C}{\\vdots\\;\\mathcal{D}_{11}}&\\deduce{C,\\Psi\\Rightarrow }{\\vdots\\,\\mathcal{D}_{21}}}}\n$$ \n\n\n\\noindent \\textbullet$\\quad${\\bf G3K(D).}$\\quad$\nThe new cases with respect to {\\bf G3R(D)} are those with left premiss by an instance of $LR$-$K$ that has no principal formula in the antecedent. 
These cases can be treated like cases with left premiss by $R$-$N$.\\end{proof}\n\n\n\n\n\n\\section{Decidability and syntactic completeness}\\label{secdecax}\n\\subsection{Decision procedure for {\\bf G3X}}\\label{decision}\nEach calculus {\\bf G3X} has the strong subformula property since all active formulas of each rule in Tables \\ref{G3cp} and \\ref{Modal rules} are proper subformulas of the principal formulas and no formula disappears in moving from premiss(es) to conclusion. As usual, this gives us a syntactic proof of consistency.\n\\begin{proposition}\\label{cons}\\\n\\begin{enumerate}\n\\item Each premiss of each rule of {\\bf G3X} has smaller weight than its conclusion;\n\\item Each premiss of each modal or deontic rule of {\\bf G3X} has smaller modal depth than its conclusion;\n\\item The calculus {\\bf G3X} has the subformula property: a {\\bf G3X}-derivation of a sequent $\\mathcal{S}$ contains only sequents composed of subformulas of $\\mathcal{S}$;\n\\item The empty sequent is not {\\bf G3X}-derivable.\n\\end{enumerate}\\end{proposition} \nWe also have an effective method to decide the derivability of a sequent in {\\bf G3X}: we start from the desired sequent $\\Gamma\\Rightarrow\\Delta$ and we construct all possible {\\bf G3X}-derivation trees until either we find a tree where each leaf is an initial sequent or a conclusion of $L\\bot$ -- we have found a {\\bf G3X}-derivation of $\\Gamma\\Rightarrow\\Delta$ -- or we have checked all possible {\\bf G3X}-derivations and we have found none -- $\\Gamma\\Rightarrow\\Delta$ is not {\\bf G3X}-derivable. \n\nIn more detail, we present a depth-first procedure that tests {\\bf G3X}-derivability in polynomial space. As is usual in decision procedures involving non-invertible rules, we have trees involving two kinds of branching. Application of a rule with more than one premiss produces an \\emph{AND-branching} point, where all branches have to be derivable to obtain a derivation. 
Application of a non-invertible rule to a sequent that can be the conclusion of different instances of non-invertible rules produces an \\emph{OR-branching} point, where only one branch need be derivable to obtain a derivation.\nIn the procedure below we will assume that, given a calculus {\\bf G3X} and given a sequent $\\Box\\Pi,\\Gamma^p\\Rightarrow\\Delta^p,\\Box\\Sigma$ (where $\\Gamma^p$ and $\\Delta^p$ are multisets of propositional variables), there is some fixed way of ordering the finite (see below) set of instances of modal and deontic rules of {\\bf G3X} (\\emph{{\\bf X}-instances}, for shortness) having that sequent as conclusion. Moreover, we will represent the root of branches above an OR-branching point by nodes of shape $\\Box_i$, where $\\Box_i$ is the name of the $i$-th {\\bf X}-instance applied (in the order of all {\\bf X}-instances having that conclusion). To illustrate, if we are in {\\bf G3EN} and we have to consider $\\Box A,\\Box B,\\Gamma^p\\Rightarrow\\Delta^p,\\Box C$ then we obtain (fixing one way of ordering the three {\\bf X}-instances having that sequent as conclusion):\n \n$$\n\\infer{\\Box A,\\Box B,\\Gamma^p\\Rightarrow\\Delta^p,\\Box C}{\n\\infer[\\qquad]{LR\\mbox{-}E}{A\\Rightarrow C& C\\Rightarrow A}&\n\\infer[\\qquad]{LR\\mbox{-}E}{B\\Rightarrow C&C\\Rightarrow B}&\n\\infer{R\\mbox{-}N}{\\Rightarrow C}}\n$$\nwhere the lowermost sequent is an OR-branching point and the two nodes $LR$-$E_1$ and $LR$-$E_2$ are AND-branching points. Finally, given an AND(OR)-branching point \n\n$$\\infer{\\mathcal{S}}{\\mathcal{S}_1&\\dots&\\mathcal{S}_n}$$ we say that the branch above $\\mathcal{S}_i$ is an \\emph{unexplored AND(OR)-branch} if none of its nodes has already been active.\n\n\n\\begin{definition}[{\\bf G3X}-decision procedure]\\label{decisiontree}\\\n\\begin{description}\n\\item[Stage 1.] We write the one node tree $\n\\Gamma\\Rightarrow\\Delta$\nand we label $\\Gamma\\Rightarrow\\Delta$ as active.\n\\item[Stage n+1.] 
Let $\\mathcal{T}_n$ be the tree constructed at stage $n$, let $\\mathcal{S}\\equiv\\Pi\\Rightarrow\\Sigma$ be its active sequent, and let $\\mathcal{B}$ be the branch going from the root of $\\mathcal{T}_n$ to $\\mathcal{S}$. \n\\begin{description}\n\\item[Closed.] If $\\mathcal{S}$ is such that $p\\in\\Pi\\cap\\Sigma$ (for some propositional variable $p$) or $\\bot\\in\\Pi$, then we label $\\mathcal{S}$ as closed and\n\\begin{description}\n\\item[Derivable.] If $\\mathcal{B}$ contains no unexplored AND-branch, the procedure ends and \\mbox{$\\Gamma\\Rightarrow\\Delta$} is {\\bf G3X}-derivable;\n\\item[AND-backtrack.] If, instead, $\\mathcal{B}$ contains unexplored AND-branches, we choose the topmost one and we label as active its leftmost unexplored leaf. {\\bf Else}\n\\end{description}\n\\item[Propositional.] If $\\mathcal{S}$ can be the conclusion of some instances $\\circ_1,\\dots,\\circ_m$ of the invertible propositional rules, we extend $\\mathcal{B}$ by applying one such instance: \n\n$$\\infer[\\infrule{\\circ_i\\quad 1\\leq i\\leq m}]{\\mathcal{S}}{\\mathcal{S}_1&(\\mathcal{S}_2)}$$ \\noindent where, if $\\mathcal{S}_2$ is present, $\\mathcal{S}$ is an AND-branching point. {\\bf Else}\n\\item[Modal.] If $\\mathcal{S}$ can be the conclusion of the following canonically ordered list of {\\bf X}-instances:\n\n\n$$\n\\infer[\\infrule \\Box_1]{\\mathcal{S}}{\\mathcal{S}_1^1&\\dots&\\mathcal{S}^1_k }\n\\qquad\\deduce[\\dots]{\\phantom{a}}{}\\qquad\n\\infer[\\infrule \\Box_m]{\\mathcal{S}}{\\mathcal{S}_1^m&\\dots&\\mathcal{S}^m_l }\n$$ \nthen we extend $\\mathcal{B}$ as follows:\n\n\n$$\n\\infer{\\mathcal{S}}{\\infer{\\Box_1\\phantom{^1}}{\\mathcal{S}_1^1&\\dots&\\mathcal{S}^1_k }&\\deduce[\\dots]{\\phantom{a}}{}&\\infer{\\Box_m\\phantom{^1}}{\\mathcal{S}_1^m&\\dots&\\mathcal{S}^m_l}\n}\n$$\nwhere, if $m\\geq 2$, $\\mathcal{S}$ is OR-branching and, if $\\Box_i$ is a rule with more than one premiss, $\\Box_i$ is AND-branching. 
Moreover, we label $\\mathcal{S}_1^1$ as active. {\\bf Else} \n\\item[Open.] If no rule of {\\bf G3X} can be applied to $\\mathcal{S}$, then we label $\\mathcal{S}$ as open and\n\\begin{description}\n\\item[Underivable.] If $\\mathcal{B}$ contains no unexplored OR-branch, the procedure ends and \\mbox{$\\Gamma\\Rightarrow\\Delta$} is not {\\bf G3X}-derivable;\n\\item[OR-backtrack.] If, instead, $\\mathcal{B}$ contains unexplored OR-branches, we choose the topmost one and we label as active its leftmost unexplored leaf. \n\\end{description}\n\\end{description}\n\\end{description}\n\\end{definition}\n \nTermination can be shown as follows. Proposition \\ref{cons}.1 entails that the height of each branch of the tree $\\mathcal{T}$ constructed in a {\\bf G3X}-decision procedure for a sequent $\\Gamma\\Rightarrow\\Delta$ is bounded by the weight of $\\Gamma\\Rightarrow\\Delta$ (in particular, given Proposition \\ref{cons}.2, the number of OR-branching points occurring in a branch is bounded by the modal depth of $\\Gamma\\Rightarrow\\Delta$). Moreover, $\\mathcal{T}$ is finitary branching since all rules of {\\bf G3X} are finitary branching rules, and since each sequent can be the conclusion of a finite number $k$ of {\\bf X}-instances (for each {\\bf G3X} $k$ is bounded by a function of $|\\Gamma|$ and $|\\Delta|$). Hence, after a finite number of stages we are either in case {\\bf Derivable} or in case {\\bf Underivable} and, in both cases, the procedure ends. In the first case we can easily extract a {\\bf G3X}-derivation of $\\Gamma\\Rightarrow\\Delta$ from $\\mathcal{T}$ (we just have to delete all unexplored branches as well as all underivable sub-trees above an OR-branching point). In the latter case, thanks to Proposition \\ref{cons}.3, we know that (modulo the order of the invertible propositional rules) we have explored the whole search space for a {\\bf G3X}-derivation of $\\Gamma\\Rightarrow\\Delta$ and we have found none. 
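To make the two kinds of branching concrete, the following sketch runs the depth-first search on a toy fragment of {\bf G3E} whose only formulas are propositional variables, $\bot$ and boxed formulas; the Python encoding is our own assumption, and it omits the propositional rules and the explicit backtracking bookkeeping of the procedure above:

```python
# Sketch (our own encoding) of the AND/OR depth-first search for a toy
# fragment of G3E: formulas are ('var', p), ('bot',) or ('box', A).
# Each LR-E instance is an OR-branch; its two premisses are AND-branches.

def derivable(gamma, delta):
    # initial sequents: a shared propositional variable, or bot on the left
    if any(f[0] == 'var' and f in delta for f in gamma):
        return True
    if ('bot',) in gamma:
        return True
    # OR-branching: try every LR-E instance having this sequent as conclusion ...
    for box_a in (f for f in gamma if f[0] == 'box'):
        for box_b in (f for f in delta if f[0] == 'box'):
            a, b = box_a[1], box_b[1]
            # ... AND-branching: both premisses A => B and B => A must close
            if derivable([a], [b]) and derivable([b], [a]):
                return True
    return False  # open leaf: no rule applies, or every OR-branch failed

p, q = ('var', 'p'), ('var', 'q')
assert derivable([('box', p)], [('box', p)])        # Box p => Box p closes
assert not derivable([('box', p)], [('box', q)])    # Box p => Box q is open
```

Each recursive call strictly decreases the modal depth of the sequent, which is the computational counterpart of the termination argument via Proposition \ref{cons}.2.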
\n\n\n\nWe prove that it is possible to test {\\bf G3X}-derivability in polynomial space by showing how it is possible to store only the active node together with a stack containing information sufficient to reconstruct unexplored branches. For the propositional part of the calculi, we proceed as in \\cite{B97,H93,H95}: each entry of the stack is a triple containing the name of the rule applied, an index recording which of its premisses is active, and its principal formula. For the {\\bf X}-instances two complications arise: we need to record which OR-branches are still unexplored, and we have to keep track of the weakening contexts of the conclusion in the premisses of {\\bf X}-instances. The first problem has already been solved by having assumed that the {\\bf X}-instances applicable to a given sequent have a fixed canonical order. The second problem is solved by adding a numerical superscript to the formulas occurring in a sequent and by imposing that:\\\\\n- All formulas in the end-sequent have 1 as superscript;\\\\\n - The superscripts $k$ of the principal formulas of rules and of initial sequents are maximal in that sequent;\\\\- Active formulas of {\\bf X}-instances (propositional rules) have $k+2$ ($k$, respectively) as superscript;\\\\- \n Contexts are copied in the premisses of each rule.\\\\ By doing so, the contexts of the conclusion are copied in the premisses in each rule of {\\bf G3X}, but they cannot be principal in the trees above the premisses of the {\\bf X}-instances because their superscript is never maximal therein. 
It is immediate to see that the superscripts occurring in a derivation are bounded by (twice) the modal depth of the end-sequent.\n \n Instances of all modal and deontic rules in Table \\ref{Modal rules} but $LR$-$C$ and $L$-$D^{\\Diamond_C}$ are such that there is no need to record their principal formulas in the stack entry: they are the boxed version of the formulas having maximal superscript in the active premiss; moreover, the name of the rule and the number of the premiss allow us to reconstruct the position of the principal formulas (for the right premiss of $LR$-$E$ and $L$-$D^{\\Diamond_E}$, we have to switch the two formulas). In instances of rules $LR$-$C$ and $L$-$D^{\\Diamond_C}$, instead, this does not hold since in all premisses but the leftmost one there is no subformula of some of the principal formulas. We can overcome this problem by copying in each premiss all principal formulas having no active subformula in that premiss and by adding one to their superscript. We also keep fixed the position of all formulas (modulo the swapping of the two active formulas). To illustrate, one such instance is:\n \n $$ \\scalebox{0.9}{\\infer[\\infrule LR\\mbox{-}C]{\\Box A^k_1,\\Box A^k_2,\\Gamma\\Rightarrow\\Delta,\\Box B^k}{A^{k+2}_1,A^{k+2}_2,\\Gamma\\Rightarrow\\Delta, B^{k+2}\\quad&\n B^{k+2},\\Box A^{k+1}_2,\\Gamma\\Rightarrow\\Delta, A_1^{k+2}\n \\quad&\n \\Box A^{k+1}_1,B^{k+2},\\Gamma\\Rightarrow\\Delta, A_2^{k+2}\n }\n}$$\n In this way, given the name of the modal or deontic rule applied, any premiss of this rule instance, and its position among the premisses of this rule, we can reconstruct both the conclusion of this rule instance and its position in the fixed order of {\\bf X}-instances concluding that sequent (thus we know which OR-branches are still unexplored). 
In doing so, we use the hp-admissibility of contraction to ensure that no formula has more than one occurrence in the antecedent or in the succedent of the conclusion of {\\bf X}-instances (otherwise we might be unable to reconstruct which of two identical {\\bf X}-instances we are considering). Hence, for {\\bf X}-instances each stack entry records the name of the rule applied and an index recording which premiss we are considering. \n \n\nThe decision procedure is as in Definition \\ref{decisiontree}. The only novelty is that at each stage, instead of storing the full tree constructed so far, we store only the active node and the stack, we push an entry onto the stack and, if we are in a backtracking case, we pop stack entries (and we use them to reconstruct the corresponding active sequent) until we reach an entry recording unexplored branches of the appropriate kind, if any occurs. \n\n\\begin{theorem}\n{\\bf G3X}-derivability is decidable in $\\mathcal{O}(n\\, \\log{} n)$-{\\sc space}, where $n$ is the weight of the end-sequent.\n\\end{theorem}\n\n\\begin{proof}\nWe have already argued that proof search terminates.\nAs in \\cite{B97,H93,H95}, Proposition \\ref{cons}.1 entails that the stack depth is bounded by $\\mathcal{O}(n)$ and, by storing the principal formulas of propositional rules as indexes into the end-sequent, each entry requires $\\mathcal{O}(\\log{}n)$ space. Hence we have an $\\mathcal{O}(n\\, \\log{} n)$ space bound for the stack. Moreover, the active sequent contains at most $\\mathcal{O}(n)$ subformulas of the end-sequent and their numerical superscripts. Each such subformula requires $\\mathcal{O}(\\log{}n)$ space since it can be recorded as an index into the end-sequent; its numerical superscript requires $\\mathcal{O}(\\log{}n)$ too since there are at most $\\mathcal{O}(n)$ superscripts. 
Hence also the active sequent requires $\\mathcal{O}(n\\log{}n)$ space.\n\\end{proof}\n\n\n\\subsection{Equivalence with the axiomatic systems}\nIt is now time to show that the sequent calculi introduced are equivalent to the non-normal logics of Section \\ref{secaxiom}. We write {\\bf G3X} $\\vdash \\Gamma\\Rightarrow\\Delta$ if the sequent $\\Gamma\\Rightarrow\\Delta$ is derivable in {\\bf G3X}, and we say that $A$ is derivable in {\\bf G3X} whenever {\\bf G3X} $\\vdash \\;\\Rightarrow A$. We begin by proving the following \n\n\\begin{lemma}\\label{ax} All the axioms of the axiomatic system {\\bf X} are derivable in {\\bf G3X}.\\end{lemma}\n\n\\begin{proof} A straightforward application of the rules of the appropriate sequent calculus, possibly using Proposition \\ref{genax}. As an example, we show that the deontic axiom $D^\\bot$ is derivable by means of rule $L$-$D^\\bot$ and that axiom $C$ is derivable by means of $LR$-$C$.\n\n$$\n\\infer[\\infrule R\\neg]{\\Rightarrow\\neg\\Box\\bot}{\\infer[\\infrule L\\mbox{-}D^\\bot]{\\Box\\bot\\Rightarrow}{\\infer[\\infrule L\\bot]{\\bot\\Rightarrow}{}}}\n\\qquad\n\\infer[\\infrule R\\supset]{\\Rightarrow \\Box A\\wedge\\Box B\\supset\\Box(A\\wedge B)}{\\infer[\\infrule L\\wedge]{\\Box A\\wedge \\Box B\\Rightarrow \\Box (A\\wedge B)}{\\infer[\\infrule LR\\mbox{-}C]{\\Box A,\\Box B\\Rightarrow \\Box(A\\wedge B)}{\\infer[\\infrule R\\wedge]{A,B\\Rightarrow A\\wedge B}{\\infer[\\infrule{\\ref{genax}}]{A,B\\Rightarrow A}{}&\\infer[\\infrule{\\ref{genax}}]{A,B\\Rightarrow B}{}}&\n\\infer[\\infrule L\\wedge ]{A\\wedge B\\Rightarrow A}{\\infer[\\infrule{\\ref{genax}}]{A,B\\Rightarrow A}{}}&\n\\infer[\\infrule L\\wedge]{A\\wedge B\\Rightarrow B}{\\infer[\\infrule{\\ref{genax}}]{A,B\\Rightarrow B}{}}}}}\n$$\n\n\\end{proof}\n\nNext we prove the equivalence of the sequent calculi for non-normal logics with the corresponding axiomatic systems in the sense that a sequent $\\Gamma\\Rightarrow\\Delta$ is derivable in {\\bf G3X} if and 
only if its characteristic formula $\\bigwedge\\Gamma\\supset\\bigvee \\Delta$ is derivable in {\\bf X} (where the empty antecedent stands for $\\top$ and the empty succedent for $\\bot$). As a consequence each calculus is sound and complete with respect to the appropriate class of neighbourhood models (see Section \\ref{semantics}).\n\n\\begin{theorem}\\label{comp} Derivability in the sequent system {\\bf G3X} and in the axiomatic system {\\bf X} are equivalent, i.e.\n\n\\begin{center}\n{\\bf G3X} $\\vdash\\;\\Gamma\\Rightarrow\\Delta \\qquad$iff$\\qquad${\\bf X} $\\vdash\\bigwedge\\Gamma\\supset\\bigvee\\Delta$\n\\end{center}\\end{theorem}\n\n\\begin{proof}\nTo prove the right-to-left implication, we argue by induction on the height of the axiomatic derivation in {\\bf X}. The base case is covered by Lemma \\ref{ax}. For the inductive steps, the case of $MP$ follows by the admissibility of Cut and the invertibility of rule $R\\supset$. If the last step is by $RE$, then $\\Gamma=\\emptyset$ and $\\Delta$ is $\\Box C\\leftrightarrow \\Box D$. We know that (in {\\bf X}) we have derived $\\Box C\\leftrightarrow\\Box D$ from $C\\leftrightarrow D$. Remember that $C\\leftrightarrow D$ is defined as $(C\\supset D)\\wedge (D\\supset C)$. Thus we assume, by inductive hypothesis (IH), that {\\bf G3ED} $\\vdash\\; \\Rightarrow (C\\supset D)\\wedge (D\\supset C)$. From this, by invertibility of $R\\wedge$ and $R\\supset$ (Lemma \\ref{inv}), we obtain that {\\bf G3ED} $\\vdash\\; C\\Rightarrow D$ and {\\bf G3ED} $\\vdash\\; D\\Rightarrow C$. 
We can thus proceed as follows \\vspace{0.3cm}\n\n$$\n\\infer[\\infrule R\\wedge]{\\Rightarrow (\\Box C\\supset \\Box D)\\wedge(\\Box D\\supset \\Box C)}{\\infer[\\infrule R\\supset]{\\Rightarrow \\Box C\\supset\\Box D}{\\infer[\\infrule LR\\mbox{-}E]{\\Box C\\Rightarrow \\Box D}{\\infer[IH+\\ref{inv}]{C\\Rightarrow D}{}&\\infer[IH+\\ref{inv}]{D\\Rightarrow C}{}}\n}&\n\\infer[\\infrule R\\supset]{\\Rightarrow \\Box D\\supset\\Box C}{\\infer[\\infrule LR\\mbox{-}E]{\\Box D\\Rightarrow \\Box C}{\\infer[IH+\\ref{inv}]{D\\Rightarrow C}{}&\\infer[IH+\\ref{inv}]{C\\Rightarrow D}{}}}}\n$$\n\n\nFor the converse implication, we assume {\\bf G3X} $\\vdash \\Gamma\\Rightarrow\\Delta$, and show, by induction on the height of the derivation in sequent calculus, that {\\bf X} \\mbox{$\\vdash \\bigwedge \\Gamma\\supset\\bigvee\\Delta$.} If the derivation has height 0, we have an initial sequent -- so $\\Gamma\\cap\\Delta\\neq\\emptyset$ -- or an instance of $L\\bot$ -- thus $\\bot\\in\\Gamma$. In both cases the claim holds. If the height is $n+1$, we consider the last rule applied in the derivation. If it is a propositional one, the proof is straightforward. If it is a modal rule, we argue by cases. \n\n\n\nIf the last step of a derivation in {\\bf G3E(ND)} is by $LR$-$E$, we have derived $\\Box C,\\Gamma'\\Rightarrow\\Delta',\\Box D$ from $C\\Rightarrow D$ and $D\\Rightarrow C$. By IH and propositional reasoning, {\\bf ED} $\\vdash C\\leftrightarrow D$, thus {\\bf ED} $\\vdash \\Box C\\supset \\Box D$. 
By some propositional steps we conclude {\\bf ED} $\\vdash (\\Box C\\wedge\\bigwedge\\Gamma')\\supset (\\bigvee\\Delta'\\lor\\Box D).$ The cases of $LR$-$M$, $LR$-$R$, and $LR$-$K$ can be treated in a similar manner (thanks, respectively, to the rules $RM$, $RR$, and $RK$ from Table \\ref{rulesinf}).\n\nIf we are in {\\bf G3C(ND)}, suppose the last step is the following instance of $LR$-$C$:\n$$\n\\infer[\\infrule LR\\mbox{-}C]{\\Box C_1,\\dots,\\Box C_k,\\Gamma'\\Rightarrow \\Delta',\\Box D}{C_1,\\dots, C_k\\Rightarrow D&D\\Rightarrow C_1&\\dots& D\\Rightarrow C_k}\n$$\nBy IH, we have that {\\bf C(ND)} $\\vdash D\\supset C_i$ for all $i\\leq k$, and, by propositional reasoning, we have that {\\bf C(ND)} $\\vdash D\\supset C_1\\wedge\\dots\\wedge C_k$. We also know, by IH, that {\\bf C(ND)} $\\vdash C_1\\wedge\\dots\\wedge C_k\\supset D$. By applying $RE$ to these two theorems we get that \n\\begin{equation}\\label{3}\n\\mathbf{ C(ND)} \\vdash \\Box(C_1\\wedge\\dots\\wedge C_k)\\supset \\Box D\n\\end{equation}\n By using axiom $C$ and propositional reasoning, we know that \n \\begin{equation}\\label{4}\n \\mathbf{ C(ND)} \\vdash \\Box C_1\\wedge\\dots\\wedge\\Box C_k\\supset\\Box(C_1\\wedge\\dots\\wedge C_k)\n \\end{equation}\n By applying transitivity to (\\ref{4}) and (\\ref{3}) and some propositional steps, we conclude that \n $$\n \\mathbf{ C(ND)} \\vdash (\\Box C_1\\wedge\\dots\\wedge\\Box C_k\\wedge \\bigwedge\\Gamma')\\supset (\\bigvee\\Delta'\\lor\\Box D)\n $$\n \n Let's now consider rule $L$-$D^\\bot$. Suppose we are in {\\bf G3XD$^\\bot$} and we have derived $\\Box C,\\Gamma'\\Rightarrow\\Delta$ from $C\\Rightarrow$. By IH, {\\bf XD$^\\bot$} $\\vdash C\\supset\\bot$, and we know that {\\bf XD$^\\bot$} $\\vdash \\bot \\supset C$. Thus by $RE$ (or $RM$), we get {\\bf XD$^\\bot$} $\\vdash \\Box C\\supset \\Box \\bot$. By contraposing it and then applying $MP$ with the axiom $D^\\bot$, we get that {\\bf XD$^\\bot$} $\\vdash \\neg\\Box C$. 
By some easy propositional steps we conclude {\bf XD$^\bot$} $\vdash ( \Box C\wedge\bigwedge\Gamma')\supset \bigvee\Delta$. The case $R$-$N$ is similar.\n\nLet's consider rule $L$-$D^{\Diamond_E}$. Suppose we are in {\bf G3ED$^\Diamond$} and we have derived \mbox{$\Box A,\Box B,\Gamma'\Rightarrow\Delta$} from the premisses $A,B\Rightarrow$ and $\Rightarrow A,B$. By induction we get that {\bf ED$^\Diamond$}$\vdash A\wedge B\supset\bot$ and {\bf ED$^\Diamond$}$\vdash A\lor B$. Hence, {\bf ED$^\Diamond$}$\vdash B\supset \neg A$ and {\bf ED$^\Diamond$}$\vdash\neg A\supset B$. By applying $RE$ we get that \n\n$$ \mathbf{ED^\Diamond}\vdash \Box B\supset\Box \neg A$$\n which, thanks to axiom $D^\Diamond$, entails that \n \n $$\mathbf{ ED^\Diamond}\vdash \Box B\supset\neg\Box A$$ \n By some propositional steps we conclude \n \n $$\mathbf{ ED^\Diamond}\vdash (\Box A\wedge\Box B\wedge \bigwedge\Gamma')\supset\bigvee\Delta$$\n Notice that, thanks to Proposition \ref{cons}.4 and Theorem \ref{cut}, we can assume that instances of rule $L$-$D^\Diamond$ always have two principal formulas. Otherwise the calculus would prove the empty sequent (we will also assume that neither $\Pi$ nor $\Sigma$ is empty in instances of rule $L$-$D^{\Diamond_C}$).\n \n The case of $L$-$D^{\Diamond_M}$ is analogous to that of $L$-$D^\bot$ for instances with one principal formula and to that of $L$-$D^{\Diamond_E}$ for instances with two principal formulas.\n \n\nLet's consider rule $L$-$D^{\Diamond_C}$. Suppose we have a {\bf G3CD$^\Diamond$}-derivation whose last step is:\n$$\n\infer{\Box\Pi,\Box\Sigma,\Gamma'\Rightarrow \Delta'}{\Pi,\Sigma\Rightarrow &\{\Rightarrow A,B| \,A\in\Pi\text{ and }B\in \Sigma\}}\n$$\nBy induction and by some easy propositional steps we know that {\bf ECD$^\Diamond$} $\vdash \bigwedge\Pi\leftrightarrow\neg\bigwedge\Sigma$. 
By rule $RE$ we derive {\bf ECD$^\Diamond$} $\vdash\Box\bigwedge\Pi\supset\Box\neg\bigwedge\Sigma$, which, thanks to axiom $D^\Diamond$, entails that {\bf ECD$^\Diamond$} $\vdash\Box\bigwedge\Pi\supset\neg\Box\bigwedge\Sigma$. By transitivity with two (generalized) instances of axiom $C$ we obtain {\bf ECD$^\Diamond$} $\vdash \bigwedge\Box\Pi\supset \neg\bigwedge\Box\Sigma$. By some easy propositional steps we conclude that \mbox{{\bf ECD$^\Diamond$} $\vdash (\bigwedge\Box\Pi\wedge\bigwedge\Box\Sigma\wedge\bigwedge\Gamma')\supset\bigvee\Delta'$.}\n\nThe admissibility of $L$-$D^*$ in {\bf EC(N)D}, {\bf RD}, and {\bf KD} is similar to that of $LR$-$C$: in (\ref{3}) we replace $\Box D$ with $\Box \bot$ and then we use theorem $D^\bot$ to transform it into $\bot$.\n\end{proof}\nBy combining this and Theorem \ref{compax} we have the following result.\n\begin{corollary}\nThe calculus {\bf G3X} is sound and complete with respect to the class of all neighbourhood models for {\bf X}.\n\end{corollary}\n\n\n\subsection{Forrester's Paradox}\label{forrester}\n As an application of our decision procedure, we use it to analyse two formal reconstructions of Forrester's paradox \cite{F84}, which is one of the many paradoxes that endanger the normal deontic logic {\bf KD} \cite{M06}. Forrester's informal argument goes as follows:\n\n\begin{quote}\n\nConsider the following three statements:\n\begin{enumerate}\n\item Jones murders Smith.\n\item Jones ought not murder Smith.\n\item If Jones murders Smith, then Jones ought to murder Smith gently.\n\end{enumerate}\nIntuitively, these sentences appear to be consistent. However, 1 and 3 together imply that\n\begin{itemize}\n\item[4.] Jones ought to murder Smith gently.\n\end{itemize}\nAlso we accept the following conditional:\n\begin{itemize}\n\item[5.] 
If Jones murders Smith gently, then Jones murders Smith.\n\end{itemize}\nOf course, this is \emph{not} a logical validity but, rather, a fact about the world we live in. Now, if we assume that the monotonicity rule is valid, then statement 5 entails\n\begin{itemize}\n\item[6.] If Jones ought to murder Smith gently, then Jones ought to murder Smith.\n\end{itemize}\nAnd so, statements 4 and 6 together imply\n\begin{itemize}\n\item[7.] Jones ought to murder Smith.\n\end{itemize}\nBut [given the validity of $D^\Diamond$] this contradicts statement 2. The above argument suggests that classical deontic logic should \emph{not} validate the monotonicity rule [$RM$] \cite[p. 16]{P17} \n\end{quote} \n \n\n We show that Forrester's paradox is not a valid argument in deontic logics by presenting, in Figure \ref{fig}, a failed {\bf G3KD}-proof search of the sequent that expresses it:\n\begin{equation}\ng\supset m,m\supset\Box g,\Box\neg m,m\Rightarrow\n\end{equation}\n where $m$ stands for `Jones \underline{m}urders Smith' and $g$ for `Jones murders Smith \underline{g}ently' \cite[pp. 
87--91]{M06}.\nNote that, by Theorem \ref{comp}, if Forrester's paradox is not {\bf G3KD}-derivable, then it is not valid in any of the weaker deontic logics we have considered.\n\n\begin{figure}\n$$\infer[\infrule L\supset]{g\supset m,m\supset\Box g,\Box\neg m,m\Rightarrow}{\n\infer[\infrule L\supset]{m\supset\Box g,\Box\neg m,m\Rightarrow g}{\n\deduce{m,\Box \neg m\Rightarrow g,m^{\phantom{a}}}{\textnormal{closed}}&\n\infer{\Box g,\Box\neg m,m\Rightarrow g^{\phantom{a}}}{\n\infer{L\mbox{-}D^\star}{\n\infer[\infrule L\neg]{g,\neg m\Rightarrow}{\deduce{g\Rightarrow m}{\textnormal{open}}}}\n&\n\infer{L\mbox{-}D^\star}{\deduce{g\Rightarrow }{\textnormal{open}}}\n&\n\infer{L\mbox{-}D^\star}{\infer[\infrule L\neg]{\neg m\Rightarrow}{\deduce{\Rightarrow m}{\textnormal{open}}}}}\n}&\n\deduce{m,m\supset\Box g,\Box\neg m,m\Rightarrow^{\phantom{A}}}{\vdots}}$$\n\caption{Failed {\bf G3KD}-proof search of Forrester's paradox \cite{M06}}\label{fig}\n\end{figure}\n\n To make our failed proof search into a derivation of Forrester's paradox, we would have to add (to {\bf G3MD$^\Diamond$} or stronger calculi) a non-logical axiom $\Rightarrow g\supset m$, and to have cut as a primitive -- and ineliminable -- rule of inference. A Hilbert-style axiomatization of Forrester's argument -- e.g., \cite[p. 88]{M06} -- hides this cut with a non-logical axiom in the step where $\Box g\supset\Box m$ is derived from $ g\supset m$, by one of $RM$, $RR$ or $RK$. This step -- i.e., the step from 5 to 6 in the informal argument above -- is not acceptable because none of these rules allows one to infer its conclusion when the premiss is an assumption and not a theorem. 
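In schematic form (our own rendering, using $g$ and $m$ as in the sequent above), the step from 5 to 6 amounts to the application

$$
\infer[\infrule RM]{\Box g\supset\Box m}{g\supset m}
$$

which is legitimate only when the premiss $g\supset m$ is a theorem; here it is a contingent, non-logical assumption, so the inference is not available.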
We have here an instance of the same problem that has led many authors to conclude that the deduction theorem fails in modal logics, a conclusion that has been shown to be wrong in \cite{NH12}.\n \n An alternative formulation of Forrester's argument is given in \cite{T97}, where the sentence `Jones murders Smith gently' is expressed by the complex formula $g\wedge m$ instead of by the atomic $g$. In this case Forrester's argument becomes valid whenever the monotonicity rule is valid, as shown in Figure \ref{figfig}. Nevertheless, whereas it was an essential ingredient of the informal version, under this formalization premiss 5 becomes dispensable. Hence it is disputable that this is an acceptable way of formalising Forrester's argument.\n \begin{figure}\n$$ \infer[\infrule L\supset]{m\supset\Box(m\wedge g),\Box\neg m,m\Rightarrow}{\n \deduce{\Box\neg m,m\Rightarrow m^{\phantom{a}}}{\textnormal{closed}}&\n\infer{\Box(g\wedge m),\Box\neg m,m\Rightarrow}{ \infer{L\mbox{-}D^\star}{\n\infer[\infrule L\wedge]{g\wedge m,\neg m\Rightarrow}{\n\infer[\infrule L\neg]{g, m,\neg m\Rightarrow}{\n\deduce{g,m\Rightarrow m}{\textnormal{closed}}}}}&\n \infer{L\mbox{-}D^\star}{g\wedge m\Rightarrow}&\n \infer{L\mbox{-}D^\star}{\neg m\Rightarrow}}\n }$$\n \caption{Successful {\bf G3MD}-proof search for the alternative version of Forrester's paradox \cite{T97}}\label{figfig}\n \end{figure}\n \n This is not the place to discuss at length the correctness of the formal representations of Forrester's argument and their implications for deontic logics. We just wanted to illustrate how the calculi {\bf G3XD} can be used to analyse formal representations of the deontic paradoxes. If Forrester's argument is formalised as in \cite{M06} then it does not force us to adopt a deontic logic weaker than {\bf KD}. 
If, instead, it is formalised as in \\cite{T97} then it forces the adoption of a logic where $RM$ fails, but the formal derivation differs substantially from Forrester's informal argument \\cite{F84}.\n \n\\section{Craig's Interpolation Theorem}\\label{secinterpol}\nIn this section we use Maehara's \\cite{M60,M61} technique to prove Craig's interpolation theorem for each modal or deontic logic {\\bf X} which has $C$ as theorem only if it has also $M$ (Example \\ref{prob} illustrates the problem with the non-standard rule $LR$-$C$).\n\n\\begin{theorem}[Craig's interpolation theorem] \\label{Craig}\nLet $A\\supset B$ be a theorem of a logic {\\bf X} that differs from {\\bf EC(N)} and its deontic extensions {\\bf EC(N)D} and {\\bf ECD$^\\Diamond$}, then \nthere is a formula $I$, which contains propositional variables common to $A$ and $B$ only, such that both $A\\supset I$ and $I\\supset B$ are theorems of {\\bf X}.\n\n\\end{theorem}\n\n\\noindent In order to prove this theorem, we use the following notions.\n\n\\begin{definition}\n\nA \\emph{partition} of a sequent $\\Gamma\\Rightarrow \\Delta$ is any pair of sequents \\\\$\\<\\Gamma_1\\Rightarrow\\Delta_1\\;||\\;\\Gamma_2\\Rightarrow\\Delta_2\\>$ such that $\\Gamma_1,\\Gamma_2=\\Gamma$ and $\\Delta_1,\\Delta_2=\\Delta$.\\\\\nA \\emph{{\\bf G3X}-interpolant of a partition} $\\<\\Gamma_1\\Rightarrow\\Delta_1\\;||\\;\\Gamma_2\\Rightarrow\\Delta_2\\>$ is any formula $I$ such that:\n\\begin{enumerate} \n\\item All propositional variables in $I$ are in $(\\Gamma_1\\cup\\Delta_1)\\cap(\\Gamma_2\\cup\\Delta_2)$;\n \\item {\\bf G3X} $\\vdash\\Gamma_1\\Rightarrow\\Delta_1,I$ and {\\bf G3X} $\\vdash I,\\Gamma_2\\Rightarrow\\Delta_2$.\n \\end{enumerate}\n\n\\end{definition}\n\nIf $I$ is a {\\bf G3X}-interpolant of the partition $\\<\\Gamma_1\\Rightarrow\\Delta_1\\;||\\;\\Gamma_2\\Rightarrow\\Delta_2\\>$, we write\n\n$$(\\textrm{{\\bf 
G3X}}\\vdash)\\;\\<\\Gamma_1\\Rightarrow\\Delta_1\\;\\stackrel{I}{||}\\;\\Gamma_2\\Rightarrow\\Delta_2\\>\n$$\n\n\\noindent where one or more of the multisets $\\Gamma_1,\\Gamma_2,\\Delta_1,\\Delta_2$ may be empty. When the set of propositional variables in $(\\Gamma_1\\cup\\Delta_1)\\cap(\\Gamma_2\\cup\\Delta_2)$ is empty, the {\\bf X}-interpolant has to be constructed from $\\bot$ (and $\\top$).\nThe proof of Theorem \\ref{Craig} is by the following lemma, originally due to Maehara \\cite{M60,M61} for (an extension of) classical logic.\n\n\\begin{lemma}[Maehara's lemma]\\label{Maehara} If {\\bf G3X} $\\vdash \\Gamma\\Rightarrow\\Delta$ and $LR$-$C$ (and $L$-$D^{\\Diamond_C}$) is not a rule of {\\bf G3X} (see Tables \\ref{modcalculi} and \\ref{deoncalculi}), every partition of $\\Gamma\\Rightarrow\\Delta$ has a {\\bf G3X}-interpolant.\n\\end{lemma}\n\n\\begin{proof}\nThe proof is by induction on the height of the derivation $\\mathcal{D}$ of $\\Gamma\\Rightarrow\\Delta$. We have to show that each partition of an initial sequent (or of a conclusion of a 0-premiss rule) has a {\\bf G3X}-interpolant and that for each rule of {\\bf G3X} (but $LR$-$C$ and $L$-$D^{\\Diamond_C}$) we have an effective procedure that outputs a {\\bf G3X}-interpolant for any partition of its conclusion from the interpolant(s) of suitable partition(s) of its premiss(es). 
The proof is modular and, hence, we can consider the modal rules without having to reconsider them in the different calculi.\n\nFor the base case of initial sequents with principal formula $p$, we have four possible partitions, whose interpolants are:\begin{center}\begin{tabular}{ccc}\n$(1)\;\<p,\Gamma_1'\Rightarrow\Delta_1',p\;\stackrel{\bot}{||}\;\Gamma_2\Rightarrow\Delta_2\>\qquad$&$\qquad(2)\;\<p,\Gamma_1'\Rightarrow\Delta_1\;\stackrel{p}{||}\;\Gamma_2\Rightarrow\Delta'_2,p\>$\\\noalign{\smallskip\smallskip}\n$(3)\;\<\Gamma_1\Rightarrow\Delta_1',p\;\stackrel{\neg p}{||}\;p,\Gamma'_2\Rightarrow\Delta_2\>\qquad$&$\qquad(4)\;\<\Gamma_1\Rightarrow\Delta_1\;\stackrel{\top}{||}\;p,\Gamma'_2\Rightarrow\Delta'_2,p\>$\n\end{tabular}\end{center} \n\n\noindent and for the base case of rule $L\bot$, we have:\n\begin{center}\begin{tabular}{ccc}\n$(1)\;\<\bot,\Gamma_1'\Rightarrow\Delta_1\;\stackrel{\bot}{||}\;\Gamma_2\Rightarrow\Delta_2\>\qquad$&$\qquad(2)\;\<\Gamma_1\Rightarrow\Delta_1\;\stackrel{\top}{||}\;\bot,\Gamma'_2\Rightarrow\Delta_2\>$\n\end{tabular}\end{center}\n\n For the proof of (some of) the propositional cases the reader is referred to \cite[pp. 117--118]{T00}. 
Thus, we have only to prove that all the modal and deontic rules of Table \ref{Modal rules} (modulo $LR$-$C$ and $L$-$D^{\Diamond_C}$) behave as desired.\n\n\noindent\textbullet$\quad$ {\bf LR-E}$)\quad$ If the last rule applied in $\mathcal{D}$ is \n\n\begin{center} \begin{prooftree}\n A\Rightarrow B\qquad B\Rightarrow A\n \justifies\n \Box A,\Gamma\Rightarrow\Delta,\Box B\n \using LR\textrm{-}E\n \end{prooftree} \n \end{center}\n \noindent we have four kinds of partitions of the conclusion:\n\n\begin{center}\begin{tabular}{ccc}\n$(1)\;\<\Box A,\Gamma_1'\Rightarrow\Delta_1',\Box B\;||\;\Gamma_2\Rightarrow\Delta_2\>\qquad$&$\qquad(2)\;\<\Box A,\Gamma_1'\Rightarrow\Delta_1\;||\;\Gamma_2\Rightarrow\Delta'_2,\Box B\>$\\\noalign{\smallskip\smallskip}\n$(3)\;\<\Gamma_1\Rightarrow\Delta_1',\Box B\;||\;\Box A,\Gamma'_2\Rightarrow\Delta_2\>\qquad$&$\qquad(4)\;\<\Gamma_1\Rightarrow\Delta_1\;||\;\Box A,\Gamma'_2\Rightarrow\Delta'_2,\Box B\>$\n\end{tabular}\end{center}\n\n\noindent In each case we have to choose partitions of the premisses that permit us to construct a {\bf G3E(ND)}-interpolant for the partition under consideration.\n\n In case {\bf(1)} we have \n $$\n\framebox{ \infer[\infrule LR\mbox{-}E]{\<\Box A,\Gamma_1'\Rightarrow\Delta_1',\Box B\;\stackrel{C}{||}\;\Gamma_2\Rightarrow\Delta_2\>}{\<A\Rightarrow B\;\stackrel{C}{||}\;\Rightarrow\;\>&\<B\Rightarrow A\;\stackrel{D}{||}\;\Rightarrow\;\>}\n} $$\n \n\noindent This can be shown as follows. By IH there is some $C$ ($D$) that is a {\bf G3E(ND)}-interpolant of the given partition of the left (right) premiss. Thus both $C$ and $D$ contain only propositional variables common to $A$ and $B$; and (i) $\vdash A\Rightarrow B,C\;$ (ii) $\vdash C\Rightarrow\;$ (iii) $\vdash B\Rightarrow A,D\;$ and (iv) $\vdash D\Rightarrow\;$. Since the common language of the partitions of the premisses is empty, no propositional variable can occur in $C$ or in $D$. 
Here is a proof that $C$ is a {\bf G3E(ND)}-interpolant of the partition under consideration (the sequents $A\Rightarrow B$ and $B\Rightarrow A$ are derivable since they are the premisses of the instance of $LR$-$E$ we are considering):\n \n$$\n\infer[\infrule LR\mbox{-}E]{\Box A,\Gamma'_1\Rightarrow\Delta_1',\Box B,C}{\infer{A\Rightarrow B}{}&\infer{B\Rightarrow A}{}}\n\qquad\qquad\n\infer[\infrule LWs+RWs]{C,\Gamma_2\Rightarrow\Delta_2}{\infer[\infruler{(ii)}]{C\Rightarrow}{}}\n$$\n\n\n\n\noindent In case {\bf(2)} we have\n\n$$\n\framebox{\infer[\infrule LR\mbox{-}E]{\<\Box A,\Gamma_1'\Rightarrow\Delta_1\;\stackrel{\Box C}{||}\;\Gamma_2\Rightarrow\Delta'_2,\Box B\>}{\<A\Rightarrow\;\stackrel{C}{||}\;\Rightarrow B\>&\<B\Rightarrow\;\stackrel{D}{||}\;\Rightarrow A\>}\n}$$\n \n\noindent By IH it holds that some $C$ and $D$ are {\bf G3E(ND)}-interpolants of the given partitions of the premisses. Thus, (i) $\vdash A\Rightarrow C\;$ (ii) $\vdash C\Rightarrow B\;$ (iii) $\vdash B\Rightarrow D\;$ (iv) $\vdash D\Rightarrow A\;$ and (v) all propositional variables in $C\cup D$ are in $A\cap B$.\n Here is a proof that $\Box C$ is a {\bf G3E(ND)}-interpolant of the given partition (the language condition is satisfied thanks to (v)\,):\n \n$$\n\infer[\infrule LR\mbox{-}E]{\Box A,\Gamma_1'\Rightarrow\Delta_1,\Box C}{\infer[\infruler{(i)}]{A\Rightarrow C}{}&\n\infer[\infrule Cut]{C\Rightarrow A}{\infer[\infrule Cut]{C\Rightarrow D}{\infer[\infruler{(ii)}]{C\Rightarrow B}{}&\infer[\infruler{(iii)}]{B\Rightarrow D}{}}&\n\infer[\infruler{(iv)}]{D\Rightarrow A}{}}}\n$$\n\n$$\n\infer[\infrule LR\mbox{-}E]{\Box C,\Gamma_2\Rightarrow\Delta_2',\Box B}{\infer[\infruler{(ii)}]{C\Rightarrow B}{}&\infer[\infrule Cut]{B\Rightarrow C}{\infer[\infruler{(iii)}]{B\Rightarrow D}{}&\infer[\infrule Cut]{D\Rightarrow C}{\infer[\infruler{(iv)}]{D\Rightarrow A}{}&\infer[\infruler{(i)}]{A\Rightarrow C}{}}}}\n$$\n\n\noindent In case {\bf(3)} we have 
\n\n$$\n\framebox{\infer[\infrule LR\mbox{-}E]{\<\Gamma_1\Rightarrow\Delta_1',\Box B\;\stackrel{\Diamond C}{||}\;\Box A,\Gamma'_2\Rightarrow\Delta_2\>}{ \<\Rightarrow B\;\stackrel{C}{||}\;A\Rightarrow \> & \<\Rightarrow A\;\stackrel{D}{||}\;B\Rightarrow \>}\n}$$\n\noindent By IH, there are $C$ and $D$ that are {\bf G3E(ND)}-interpolants of the partitions of the premisses. Thus (i) $\vdash \Rightarrow B,C\;$ (ii) $\vdash C,A\Rightarrow\;$ (iii) $\vdash \Rightarrow A,D\;$ and (iv) $\vdash D,B\Rightarrow\;$. We prove that $\Diamond C$ is a {\bf G3E(ND)}-interpolant of the (given partition of the) conclusion as follows:\n\n$$\n\infer[\infrule R\neg]{\Gamma_1\Rightarrow\Delta_1',\Box B,\neg\Box\neg C}{\infer[\infrule LR\mbox{-}E]{\Box\neg C,\Gamma_1\Rightarrow\Delta_1',\Box B}{\n\infer[\infrule L\neg]{\neg C\Rightarrow B}{\infer[\infruler{(i)}]{\Rightarrow B,C}{}}&\n\infer[\infrule R\neg]{B\Rightarrow\neg C}{\infer[\infrule Cut]{B,C\Rightarrow}{\n\infer[\infrule Cut]{C\Rightarrow D}{\infer[\infruler{(iii)}]{\Rightarrow D,A}{}&\infer[\infruler{(ii)}]{A,C\Rightarrow}{}}&\n\infer[\infruler{(iv)}]{D,B\Rightarrow}{}}}}}\n$$\n\n$$\n\infer[\infrule L\neg]{\neg\Box\neg C,\Box A,\Gamma_2'\Rightarrow\Delta_2}{\infer[\infrule LR\mbox{-}E]{\Box A,\Gamma_2'\Rightarrow\Delta_2,\Box\neg C}{\infer[\infrule R\neg]{A\Rightarrow \neg C}{\infer[\infruler{(ii)}]{C,A\Rightarrow}{}}&\n\infer[\infrule L\neg]{\neg C\Rightarrow A}{\infer[\infrule Cut]{\Rightarrow A,C}{\n\infer[\infruler{(iii)}]{\Rightarrow A,D}{}&\n\infer[\infrule Cut]{D\Rightarrow C}{\infer[\infruler{(i)}]{\Rightarrow C,B}{}&\infer[\infruler{(iv)}]{B,D\Rightarrow}{}}}}}}\n$$\n\noindent In case {\bf(4)} we have \n\n$$\n\framebox{\n\infer[\infrule LR-E]{\<\Gamma_1\Rightarrow\Delta_1\;\stackrel{C}{||}\;\Box A,\Gamma'_2\Rightarrow\Delta'_2,\Box B\>}{\<\Rightarrow 
\;\stackrel{C}{||}\;A\Rightarrow B\>&\<\Rightarrow \;\stackrel{D}{||}\;B\Rightarrow A \>}\n}\n$$\n \noindent By IH, there are {\bf G3E(ND)}-interpolants $C$ and $D$ of the partitions of the premisses. Thus (i) $\vdash \Rightarrow C\;$ (ii) $\vdash C,A\Rightarrow B\;$ (iii) $\vdash \Rightarrow D\;$ and (iv) $\vdash D,B\Rightarrow A\;$. Since the common language of the partitions of the premisses is empty, no propositional variable occurs in $C$ or in $D$. We show that $C$ is a {\bf G3E(ND)}-interpolant of the partition under consideration as follows (as in case {\bf (1)}, $A\Rightarrow B$ and $B\Rightarrow A$, being the premisses of the instance of $LR$-$E$ under consideration, are derivable):\n \n$$\n\infer[\infrule LWs+RWs]{\Gamma_1\Rightarrow\Delta_1,C}{\infer[\infruler{(i)} ]{\Rightarrow C}{}}\n\qquad\n\infer[\infrule LR\mbox{-}E]{C,\Box A,\Gamma_2'\Rightarrow\Delta_2',\Box B}{\infer{A\Rightarrow B}{}&\infer{B\Rightarrow A}{}}\n$$\n\n\noindent\textbullet$\quad$ {\bf LR-M}$)\quad$ If the last rule applied in $\mathcal{D}$ is \n\n$$\n\infer[\infrule LR\mbox{-}M]{\Box A,\Gamma\Rightarrow\Delta,\Box B}{A\Rightarrow B}\n$$\n\n\noindent we give directly the {\bf G3M(ND)}-interpolants of the possible partitions of the conclusion (and of the appropriate partition of the premiss). 
The proofs are parallel to those for $LR$-$E$.\n\n$$\n\framebox{ \infer[\infrule LR\mbox{-}M]{\<\Box A,\Gamma_1'\Rightarrow\Delta_1',\Box B\;\stackrel{C}{||}\;\Gamma_2\Rightarrow\Delta_2\>}{\<A\Rightarrow B\;\stackrel{C}{||}\;\Rightarrow\;\>} \n\qquad\n \infer[\infrule LR\mbox{-}M]{\<\Box A,\Gamma_1'\Rightarrow\Delta_1\;\stackrel{\Box C}{||}\;\Gamma_2\Rightarrow\Delta'_2,\Box B\>}{\<A\Rightarrow\;\stackrel{C}{||}\;\Rightarrow B\>}\n }\n$$\n$$\framebox{\n\infer[\infrule LR\mbox{-}M]{\<\Gamma_1\Rightarrow\Delta_1',\Box B\;\stackrel{\Diamond C}{||}\;\Box A,\Gamma'_2\Rightarrow\Delta_2\>}{\<\Rightarrow B\;\stackrel{C}{||}\;A\Rightarrow \>} \n\qquad\n\infer[\infrule LR\mbox{-}M]{\<\Gamma_1\Rightarrow\Delta_1\;\stackrel{C}{||}\;\Box A,\Gamma'_2\Rightarrow\Delta'_2,\Box B\>}{ \<\Rightarrow \;\stackrel{C}{||}\;A\Rightarrow B\> }\n}$$\n \n \n\n\n\noindent\textbullet$\quad$ {\bf LR-R}$)\quad$ If the last rule applied in $\mathcal{D}$ is \n\n$$\n\infer[\infrule LR\mbox{-}R]{\Box A,\Box \Pi,\Gamma\Rightarrow\Delta,\Box B}{A,\Pi\Rightarrow B}\n$$\n \noindent we have four kinds of partitions of the conclusion:\n\n\n\begin{center}\begin{tabular}{ll}\n$(1)\qquad$&$\<\Box A,\Box\Pi_1,\Gamma_1'\Rightarrow\Delta_1',\Box B\;||\;\Box\Pi_2,\Gamma'_2\Rightarrow\Delta_2\>\quad$\\\noalign{\smallskip\smallskip}\n$(2)$&$\<\Box A,\Box\Pi_1,\Gamma_1'\Rightarrow\Delta_1\;||\;\Box\Pi_2,\Gamma'_2\Rightarrow\Delta'_2,\Box B\>$\\\noalign{\smallskip\smallskip}\n$(3)$&$\<\Box\Pi_1,\Gamma'_1\Rightarrow\Delta_1',\Box B\;||\;\Box A,\Box\Pi_2,\Gamma'_2\Rightarrow\Delta_2\>\quad$\\\noalign{\smallskip\smallskip}\n$(4)$&$\<\Box\Pi_1,\Gamma'_1\Rightarrow\Delta_1\;||\;\Box A,\Box\Pi_2,\Gamma'_2\Rightarrow\Delta'_2,\Box B\>$\n\end{tabular}\end{center}\n\n\n In case {\bf(1)} we have two subcases according to whether $\Pi_2$ is empty or not. 
If it is not empty we have \n\n\n$$\n\framebox{\infer[\infrule LR\mbox{-}R]{\<\Box A,\Box\Pi_1,\Gamma_1'\Rightarrow \Delta_1',\Box B\;\stackrel{\Diamond C}{||}\; \Box\Pi_2,\Gamma_2'\Rightarrow\Delta_2\> }{\<A,\Pi_1\Rightarrow B\;\stackrel{C}{||}\;\Pi_2\Rightarrow\;\>}\n}$$\n \n\noindent By IH, there is a {\bf G3R(D$^\star$)}-interpolant $C$ of the chosen partition of the premiss. Thus (i) $\vdash A,\Pi_1\Rightarrow B,C$ and (ii) $\vdash C,\Pi_2\Rightarrow$, and we have the following derivations:\n\n$$\n\infer[\infrule R\neg]{\Box A,\Box\Pi_1,\Gamma_1'\Rightarrow\Delta_1',\Box B,\neg\Box\neg C}{\infer[\infrule LR\mbox{-}R]{\Box\neg C,\Box A,\Box\Pi_1,\Gamma_1'\Rightarrow\Delta_1',\Box B}{\infer[\infrule L\neg]{\neg C,A,\Pi_1\Rightarrow B}{\infer[\infruler{(i)}]{A,\Pi_1\Rightarrow B,C}{}}}}\n\qquad\n\infer[\infrule L\neg]{\neg\Box\neg C,\Box\Pi_2,\Gamma'_2\Rightarrow\Delta_2}{\infer[\infrule LR\mbox{-}R]{\Box\Pi_2,\Gamma'_2\Rightarrow\Delta_2,\Box \neg C}{\infer[\infrule R\neg]{\Pi_2\Rightarrow \neg C}{\infer[\infruler{(ii)}]{C,\Pi_2\Rightarrow}{}}}}\n$$\n\n\noindent When $\Pi_2$ (and $\Box \Pi_2$) is empty we cannot proceed as above since we cannot apply $LR$-$R$ in the right derivation. 
But in this case, reasoning as in case {\bf (1)} for rule $LR$-$E$, we can show that\n\n$$\n\framebox{\infer[\infrule LR\mbox{-}R]{\<\Box A,\Box\Pi_1,\Gamma_1'\Rightarrow \Delta_1',\Box B\;\stackrel{C}{||}\; \Gamma_2'\Rightarrow\Delta_2\> }{\<A,\Pi_1\Rightarrow B\;\stackrel{C}{||}\;\Rightarrow\;\>}\n}$$\n\n\n\n\noindent Cases {\bf(2)} and {\bf(3)} are similar to the corresponding cases for rule $LR$-$E$:\n{\footnotesize$$\n\framebox{\infer[\infrule LR\mbox{-}R]{\<\Box A,\Box\Pi_1,\Gamma'_1\Rightarrow\Delta_1\;\stackrel{\Box C}{||}\;\Box\Pi_2,\Gamma'_2\Rightarrow\Delta'_2,\Box B\>}{\< A,\Pi_1\Rightarrow\;\stackrel{C}{||}\;\Pi_2\Rightarrow B\>}\quad\n\infer[\infrule LR\mbox{-}R]{\<\Box\Pi_1,\Gamma'_1\Rightarrow\Delta'_1,\Box B\;\stackrel{\Diamond C}{||}\;\Box A,\Box\Pi_2,\Gamma'_2\Rightarrow\Delta_2\>}{\<\Pi_1\Rightarrow B\;\stackrel{C}{||}\; A,\Pi_2\Rightarrow\;\>}\n}$$}\n \n\n\n In case {\bf(4)} we have two subcases according to whether $\Pi_1$ is empty or not:\n{\small$$\n\framebox{\infer[\infrule LR\mbox{-}R]{\<\Gamma'_1\Rightarrow\Delta_1\;\stackrel{C}{||} \Box A,\Box\Pi_2,\Gamma_2'\Rightarrow\Delta_2',\Box B\>}{\<\;\Rightarrow\;\stackrel{C}{||}A,\Pi_2\Rightarrow B\>}\quad\n\infer[\infrule LR\mbox{-}R]{\<\Box\Pi_1,\Gamma'_1\Rightarrow\Delta_1\;\stackrel{\Box C}{||} \Box A,\Box\Pi_2,\Gamma_2'\Rightarrow\Delta_2',\Box B\>}{\<\Pi_1\Rightarrow\;\stackrel{C}{||}A,\Pi_2\Rightarrow B\>}\n}$$ }\nThe proofs are similar to those for case {\bf (1)}.\n \n\n\n\n\noindent\textbullet$\quad$ {\bf LR-K}$)\quad$ If the last rule applied in $\mathcal{D}$ is \n\n$$\n\infer[\infrule LR\mbox{-}K]{\Box \Pi,\Gamma\Rightarrow\Delta,\Box B}{\Pi\Rightarrow B}\n$$\n\noindent we give directly the {\bf G3K(D)}-interpolants of the two possible partitions of the conclusion:\n\n{\small$$\n\framebox{\infer[\infrule 
LR\\mbox{-}K]{\\<\\Box\\Pi_1,\\Gamma_1'\\Rightarrow\\Delta_1\\;\\stackrel{\\Box C}{||}\\;\\Box\\Pi_2,\\Gamma_2'\\Rightarrow\\Delta_2',\\Box B\\>}{\\<\\Pi_1\\Rightarrow\\;\\stackrel{C}{||}\\;\\Pi_2\\Rightarrow B\\>}\\qquad\n\\infer[\\infrule LR\\mbox{-}K]{\\<\\Box\\Pi_1,\\Gamma_1'\\Rightarrow\\Delta_1',\\Box B\\;\\stackrel{\\Diamond C}{||}\\;\\Box\\Pi_2,\\Gamma_2'\\Rightarrow\\Delta_2\\>}{\\< \\Pi_1\\Rightarrow B\\;\\stackrel{C}{||}\\;\\Pi_2\\Rightarrow\\;\\>}\n}$$}\nThe proofs are, respectively, parallel to those for cases {\\bf (2)} and {\\bf(3)} of $LR$-$E$ (when $\\Pi=\\emptyset$, we can proceed as for rule $R$-$N$ and use $C$ instead of $\\Box C$ and of $\\Diamond C$, respectively).\n \n\n\n \n\n\n\\noindent\\textbullet$\\quad$ {\\bf L-D}$^\\bot)\\quad$ If the last rule applied in $\\mathcal{D}$ is \n\n\\begin{center} \\begin{prooftree}\n A\\Rightarrow \n \\justifies\n \\Box A,\\Gamma\\Rightarrow\\Delta\n \\using \\infrule{L\\mbox{-}D^\\bot}\n \\end{prooftree} \n \\end{center}\n \\noindent we have two kinds of partitions of the conclusion, whose {\\bf G3XD$^\\bot$}-interpolants are, respectively:\n\n \\begin{center}\\framebox{ \\begin{prooftree}\n\\\n \\justifies\n\\<\\Box A,\\Gamma_1'\\Rightarrow\\Delta_1\\;\\stackrel{C}{||}\\;\\Gamma_2\\Rightarrow\\Delta_2\\>\n \\using \\infrule{L\\mbox{-}D^\\bot}\n \\end{prooftree}\\qquad\n \\begin{prooftree}\n\\<\\;\\Rightarrow\\;\\stackrel{C}{||}\\; A\\Rightarrow\\;\\>\n \\justifies\n\\<\\Gamma_1\\Rightarrow\\Delta_1\\;\\stackrel{C}{||}\\; \\Box A,\\Gamma_2'\\Rightarrow\\Delta_2\\>\n \\using \\infrule{L\\mbox{-}D^\\bot}\n \\end{prooftree} }\n \\end{center}\n \n\n\\noindent\\textbullet$\\quad$ {\\bf L-D$^\\Diamond$}$)\\quad$ If the last rule applied in $\\mathcal{D}$ is \n\n$$\n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_E}]{\\Box A,\\Box B,\\Gamma\\Rightarrow \\Delta}{A,B\\Rightarrow\\qquad&\\Rightarrow A,B }\\qquad\n\\textnormal{or}\\qquad\n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_M}]{\\Box A,\\Box B,\\Gamma\\Rightarrow 
\\Delta}{A,B\\Rightarrow}\n$$\n \\noindent we have three kinds of partitions of the conclusion:\n\n\n\\begin{center}\\begin{tabular}{ll}\n$(1)\\qquad$&$\\<\\Box A,\\Box B,\\Gamma'_1\\Rightarrow\\Delta_1\\;||\\;\\Gamma_2\\Rightarrow\\Delta_2\\>\\quad$\\\\\\noalign{\\smallskip\\smallskip}\n$(2)$&$\\<\\Gamma_1\\Rightarrow\\Delta_1\\;||\\;\\Box A,\\Box B,\\Gamma'_2\\Rightarrow\\Delta_2\\>$\\\\\\noalign{\\smallskip\\smallskip}\n$(3)$&$\\<\\Box A,\\Gamma'_1\\Rightarrow\\Delta_1\\;||\\;\\Box B,\\Gamma'_2\\Rightarrow\\Delta_2\\>\\quad$\\\\\\noalign{\\smallskip\\smallskip}\n\\end{tabular}\\end{center}\n\n In cases {\\bf (1)} and {\\bf (2)} we have, respectively (omitting the right premiss for $L$-$D^{\\Diamond_M}$):\n \n $$\n\\framebox{\\infer[\\infrule L\\mbox{-}D^\\Diamond]{\\<\\Box A,\\Box B,\\Gamma_1'\\Rightarrow \\Delta_1\\;\\stackrel{ C}{||}\\; \\Gamma_2\\Rightarrow\\Delta_2\\> }{\\ \\qquad \\<\\Rightarrow \\;\\stackrel{D}{||}\\; \\Rightarrow \\;A,B\\>}\n\\infer[\\infrule L\\mbox{-}D^\\Diamond]{\\<\\Gamma_1\\Rightarrow \\Delta_1\\;\\stackrel{ C}{||}\\; \\Box A,\\Box B,\\Gamma'_2\\Rightarrow\\Delta_2\\> }{\\ \\qquad \\<\\Rightarrow \\;\\stackrel{D}{||}\\; \\Rightarrow \\;A,B\\>}\n}$$\n\nFinally, in case {\\bf (3)} we have:\n\n $$\n\\framebox{\\infer[\\infrule L\\mbox{-}D^\\Diamond]{\\<\\Box A,\\Gamma_1'\\Rightarrow \\Delta_1\\;\\stackrel{ \\Box C}{||}\\; \\Box B,\\Gamma'_2\\Rightarrow\\Delta_2\\> }{\\ \\qquad \\<\\Rightarrow \\; A\\stackrel{D}{||}\\; \\Rightarrow \\;B\\>}\n}$$\nBy IH, we can assume that $C$ is an interpolant of the partition of the left premiss and $D$ of the right one.\nWe have the following {\\bf G3YD$^\\Diamond$}-derivations ({\\bf Y} $\\in\\{$ {\\bf E,M}$\\}$):\n\n$$\n\\infer[\\infrule LR\\mbox{-}E]{\\Box A,\\Gamma_1'\\Rightarrow\\Delta_1,\\Box C}{\\infer[\\infrule IH]{A\\Rightarrow C}{}&\n\\infer[\\infrule Cut]{C\\Rightarrow A}{\\infer[\\infrule IH]{\\Rightarrow A,D}{}&\n\\infer[\\infrule Cut]{D,C\\Rightarrow}{\n\\infer[\\infrule IH]{D\\Rightarrow 
B}{}&\\infer[\\infrule IH]{B,C\\Rightarrow}{}\n}}}\n$$\n\n$$\n\\infer[\\infrule L\\mbox{-}D^{\\Diamond_{E}}]{\\Box C,\\Box B,\\Gamma_2'\\Rightarrow\\Delta_2}{\n\\infer[\\infrule IH]{C,B\\Rightarrow}{}&\\infer[\\infrule Cut]{\\Rightarrow C,B}{\n\\infer[\\infrule Cut]{\\Rightarrow C,D}{\n\\infer[\\infrule IH]{\\Rightarrow D,A}{}&\n\\infer[\\infrule IH]{A\\Rightarrow C}{}}&\n\\infer[\\infrule IH]{D\\Rightarrow B}{}}\n}\n$$\nIt is also immediate to notice that $\\Box C$ satisfies the language condition for being a {\\bf G3YD$^\\Diamond$}-interpolant of the conclusion since, by IH, we know that each propositional variable occurring in $C$ occurs in $A\\cap B$.\n\n\\noindent\\textbullet$\\quad$ {\\bf L-D$^\\star$}$)\\quad$ If the last rule applied in $\\mathcal{D}$ is \n\n\\begin{center} \\begin{prooftree}\n \\Pi\\Rightarrow \n \\justifies\n \\Box\\Pi, \\Gamma\\Rightarrow\\Delta\n \\using \\infrule{L\\mbox{-}D^\\star}\n \\end{prooftree} \n \\end{center}\n \\noindent we have the following kind of partition:\n\\quad$\\< \\Box\\Pi_1,\\Gamma'_1\\Rightarrow\\Delta_1\\;{||}\\;\\Box\\Pi_2,\\Gamma'_2\\Rightarrow\\Delta_2\\>$\n\n\nIf $\\Pi_1$ is not empty we have: \n\\begin{center} \\framebox{\\begin{prooftree}\n\\<\\Pi_1\\Rightarrow\\;\\stackrel{C}{||}\\;\\Pi_2\\Rightarrow\\;\\>\n \\justifies\n\\< \\Box\\Pi_1,\\Gamma'_1\\Rightarrow\\Delta_1\\;\\stackrel{\\Box C}{||}\\;\\Box\\Pi_2,\\Gamma'_2\\Rightarrow\\Delta_2\\>\n \\using \\infrule{L\\mbox{-}D^\\star}\n \\end{prooftree} }\n \\end{center}\n\n\\noindent By IH, there is some $C$ that is an interpolant of the premiss. It holds that $\\vdash \\Pi_1\\Rightarrow C$ and $\\vdash C,\\Pi_2\\Rightarrow\\;$. 
We show that $\\Box C$ is a {\\bf G3YD}-interpolant ({\\bf Y} $\\in\\{${\\bf R,K}$\\}$) of the partition of the conclusion as follows:\n\n$$\n\\infer[\\infrule LR\\mbox{-}Y]{\\Box\\Pi_1,\\Gamma_1'\\Rightarrow\\Delta_1,\\Box C}{\\infer[\\infruler{IH}]{\\Pi_1\\Rightarrow C}{}}\n\\qquad\n\\infer[\\infrule L\\mbox{-}D^*]{\\Box C,\\Box \\Pi_2,\\Gamma_2'\\Rightarrow\\Delta_2}{\\infer[\\infruler{IH}]{C,\\Pi_2\\Rightarrow}{}}\n$$\nIf, instead, $\\Pi_1$ is empty then $\\Pi_2$ cannot be empty and we have\n\\begin{center} \\framebox{\\begin{prooftree}\n\\<\\;\\Rightarrow\\;\\stackrel{C}{||}\\;\\Pi_2\\Rightarrow\\;\\>\n \\justifies\n\\< \\Gamma_1\\Rightarrow\\Delta_1\\;\\stackrel{\\Diamond C}{||}\\;\\Box\\Pi_2,\\Gamma'_2\\Rightarrow\\Delta_2\\>\n \\using \\infrule{ L\\mbox{-}D^\\star}\n \\end{prooftree} }\n \\end{center} \n \n\\noindent By IH there is a formula $C$, containing no propositional variable, such that $\\vdash \\;\\Rightarrow C$ and $\\vdash C,\\Pi_2\\Rightarrow\\;$ . Thus, {\\bf G3YD} $\\vdash\\Gamma_1\\Rightarrow\\Delta_1,\\Diamond C$ ($L$-$D^*$ makes $\\Rightarrow\\Diamond C$ derivable from $\\Rightarrow C$) and {\\bf G3YD} $\\vdash\\Diamond\\top,\\Box\\Pi_2,\\Gamma_2'\\Rightarrow\\Delta_2$ ($LR$-$Y$ makes $\\Diamond C,\\Box\\Pi_2\\Rightarrow$ derivable from $C,\\Pi_2\\Rightarrow$ when $\\Pi_2\\neq \\emptyset$).\n \n \n\n \\noindent\\textbullet$\\quad$ {\\bf R-N}$)\\quad$ If the last rule applied in $\\mathcal{D}$ is \n\n\\begin{center} \\begin{prooftree}\n \\Rightarrow A \n \\justifies\n\\Gamma\\Rightarrow\\Delta,\\Box A\n \\using \\infrule{R\\mbox{-}N}\n \\end{prooftree} \n \\end{center}\n\n\\noindent The interpolants for the two possible partitions are\n\n\\noindent\\framebox{ \\begin{tabular}{cccc}\n $(1)\\;$& \\begin{prooftree}\n\\<\\;\\Rightarrow A\\stackrel{\\bot}{||}\\;\\Rightarrow\\;\\>\n \\justifies\n\\< \\Gamma_1\\Rightarrow\\Delta_1',\\Box A\\;\\stackrel{\\bot}{||}\\;\\Gamma_2\\Rightarrow\\Delta_2\\>\n \\using \\infrule{R\\mbox{-}N}\\quad\n 
\\end{prooftree} &\n \n \n\n$(2)\\;$& \\begin{prooftree}\n\\<\\;\\Rightarrow \\;\\stackrel{\\top}{||}\\;\\Rightarrow A\\>\n \\justifies\n\\< \\Gamma_1\\Rightarrow\\Delta_1\\;\\stackrel{\\top}{||}\\;\\Gamma_2\\Rightarrow\\Delta_2',\\Box A\\>\n \\using \\infrule{R\\mbox{-}N}\n \\end{prooftree} \n \\end{tabular}}\\vspace{0.3cm}\n \n \n\n\\noindent This completes the proof.{}\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{Craig}]\nAssume that $A\\supset B$ is a theorem of {\\bf X}. By Theorem \\ref{comp} and Lemma \\ref{inv} we have that {\\bf G3X} $\\vdash A\\Rightarrow B$. By Lemma \\ref{Maehara} (taking $A$ as $\\Gamma_1$ and $B$ as $\\Delta_2$ and $\\Gamma_2,\\Delta_1$ empty) and Theorem \\ref{comp} there exists a formula $I$ that is an interpolant of $A\\supset B$ -- i.e. $I$ is such that all propositional variables occurring in $I$, if any, occur in both $A$ and $B$ and such that $A\\supset I$ and $I\\supset B$ are theorems of {\\bf X}.{}\n\\end{proof}\n\nObserve that the proof is constructive in that Lemma \\ref{Maehara} gives a procedure to extract an interpolant for $A\\supset B$ from a given derivation of $A\\Rightarrow B$. 
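To make the extraction concrete, here is a minimal propositional example of our own (hypothetical, not taken from the paper): for a derivation of $p\wedge q\Rightarrow p\vee r$, Maehara's procedure yields the interpolant $p$.

```latex
% Hypothetical propositional instance of Maehara's lemma (not from the paper):
% the end-sequent partition and its interpolant, in the notation used above.
$$
\< p\wedge q\Rightarrow\;\stackrel{p}{||}\;\Rightarrow p\vee r\>
$$
% Indeed, \vdash p\wedge q\Rightarrow p and \vdash p\Rightarrow p\vee r hold,
% and p is the only propositional variable common to p\wedge q and p\vee r,
% so both (p\wedge q)\supset p and p\supset(p\vee r) are theorems.
```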
Furthermore, the proof is purely proof-theoretic in that it makes no use of model-theoretic notions.\n\nCraig's theorem is often -- e.g., in \\cite{M61} for an extension of classical logic -- stated in the following stronger version:\n\\begin{quote}\nIf $A\\supset B$ is a theorem of the logic {\\bf X}, then \n\\begin{enumerate}\n\\item If $A$ and $B$ share some propositional variable, there is a formula $I$, which contains propositional variables common to $A$ and $B$ only, such that both $A\\supset I$ and $I\\supset B$ are theorems of {\\bf X};\n\\item Else, either $\\neg A$ or $B$ is a theorem of {\\bf X}.\n\\end{enumerate}\n\\end{quote}\n\n\\noindent But the second condition doesn't hold for modal and deontic logics where at least one of $N:=\\Box\\top$ and $D^\\bot:=\\Diamond\\top$ is not a theorem.\nTo illustrate, it holds that $\\Box\\top\\supset\\Box\\top$ is a theorem of {\\bf E} and its interpolant is $\\Box\\top$ (see Figure \\ref{fig}), but neither $\\neg \\Box\\top$ nor $\\Box\\top$ is a theorem of {\\bf E}. Analogously, we have that $\\Box\\bot\\supset\\Box\\bot$ is a theorem of {\\bf E} and its interpolant is $\\Box\\bot$ (see Figure \\ref{fig}), but neither $\\neg \\Box\\bot$ nor $\\Box\\bot$ is a theorem of {\\bf E}. 
These counterexamples work in all extensions of {\\bf E} that don't have both $N$ and $D^\\bot$ as theorems: to prove the stronger version of Craig's theorem we need $N$ and $D^\\bot$, respectively.\n\n\\begin{figure}\n\\scalebox{0.9900}{\\infer[\\infrule LR\\mbox{-}E]{\\< \\;\\Box\\top\\Rightarrow\\;\\stackrel{\\Box\\top}{||}\\;\\Rightarrow\\Box\\top\\;\\>}{\n\\< \\;\\top\\Rightarrow\\;\\stackrel{\\top}{||}\\;\\Rightarrow\\top\\;\\>& \\< \\;\\top\\Rightarrow\\;\\stackrel{\\top}{||}\\;\\Rightarrow\\top\\;\\>}\\qquad\n\\infer[\\infrule LR\\mbox{-}E]{\\< \\;\\Box\\bot\\Rightarrow\\;\\stackrel{\\Box\\bot}{||}\\;\\Rightarrow\\Box\\bot\\;\\>}{\n\\< \\;\\bot\\Rightarrow\\;\\stackrel{\\bot}{||}\\;\\Rightarrow\\bot\\;\\>& \\< \\;\\bot\\Rightarrow\\;\\stackrel{\\bot}{||}\\;\\Rightarrow\\bot\\;\\>}\n\n\n\\caption{Construction of an {\\bf ED}-interpolant for $\\Box\\top\\supset\\Box\\top$ and for $\\Box\\bot\\supset\\Box\\bot$}\\label{fig}\n\\end{figure}\n\n\nAmong the deontic logics considered here, the stronger version of Craig's theorem holds only for {\\bf END$^{\\bot(\\Diamond)}$}, {\\bf MND$^{\\bot(\\Diamond)}$}, and {\\bf KD}, as shown by the following \n\n\\begin{corollary}\\label{cor} Let {\\bf XD} be one of {\\bf END$^{\\bot(\\Diamond)}$}, {\\bf MND$^{\\bot(\\Diamond)}$}, and {\\bf KD}. If $A\\supset B$ is a theorem of {\\bf XD} and $A$ and $B$ share no propositional variable, then either $\\neg A$ or $B$ is a theorem of {\\bf XD}.\n\\end{corollary}\n\n\\begin{proof}\nSuppose that {\\bf XD} $\\vdash A\\supset B$ and that $A$ and $B$ share no propositional variable, then the interpolant $I$ is constructed from $\\bot $ and $\\top$ by means of classical and deontic operators. Whenever $D^\\bot$ and $N$ are theorems of {\\bf XD}, we have that $\\Diamond\\top\\leftrightarrow \\top$, $\\Box\\top\\leftrightarrow \\top$, $\\Diamond\\bot\\leftrightarrow \\bot$, and $\\Box\\bot\\leftrightarrow \\bot$ are theorems of {\\bf XD}. 
Hence, the interpolant $I$ is (equivalent to) either $\\bot$ or $\\top$. In the first case {\\bf XD} $\\vdash\\neg A$ and in the second one {\\bf XD} $\\vdash B$.{}\n\\end{proof}\n\n\\noindent As noted in \\cite[p. 298]{F83}, Corollary \\ref{cor} is a Halld\\'en-completeness result. A logic {\\bf X} is \\emph{Halld\\'en-complete} if, for all formulas $A$ and $B$ that share no propositional variable, {\\bf X} $\\vdash A\\lor B$ if and only if {\\bf X} $\\vdash A$ or {\\bf X} $\\vdash B$. All the modal and deontic logics considered here, being based on classical logic, are such that $A\\supset B$ is equivalent to $\\neg A \\lor B$. Thus the deontic logics considered in Corollary \\ref{cor} are Halld\\'en-complete, whereas all other non-normal logics for which we have proved interpolation are Halld\\'en-incomplete since they don't satisfy Corollary \\ref{cor}.\n\n\n\\begin{example}[Maehara's lemma and rule $LR$-$C$]\\label{prob}\n We have not been able to prove Maehara's Lemma \\ref{Maehara} for rule $LR$-$C$ because of the cases where the principal formulas of the antecedent are split between the two elements of the partition. In particular, if we have two principal formulas in the antecedent, the problematic partitions are (omitting the weakening contexts):\n \\begin{center}\n (1)\\quad $\\<\\Box A_1\\Rightarrow\\;||\\;\\Box A_2\\Rightarrow \\Box B\\>$\\qquad\\qquad\n (2)\\quad $\\<\\Box A_1\\Rightarrow\\Box B\\;||\\;\\Box A_2\\Rightarrow\\>$\n \\end{center}\nTo illustrate, an interpolant of the first partition would be a formula $I$ such that:\n\n$$\n (i)\\quad \\vdash \\Box A_1\\Rightarrow I\\qquad (ii)\\quad\\vdash I,\\Box A_2\\Rightarrow\\Box B\\qquad(iii)\\quad p\\in I\\textnormal{ only if }p\\in (A_1)\\cap(A_2,B)\n$$\nBut we have not been able to find partitions of the premisses that would allow us to find such an $I$. 
In more detail, for the first premiss it is natural to consider the partition $\\< A_1\\Rightarrow\\;\\stackrel{C}{||}\\; A_2\\Rightarrow B\\>$ in order to find an $I$ that satisfies $(iii)$. But, for any combination of the partitions of the other two premisses that is compatible with $(iii)$, we can prove that $(ii)$ is satisfied (by $\\Box C$) but we have not been able to prove that $(i)$ is satisfied as well.\n\\end{example}\n\n\\section{Conclusion}\\label{conc}\nWe presented cut- and contraction-free sequent calculi for non-normal modal and deontic logics. We have proved that these calculi have good structural properties in that weakening and contraction are height-preserving admissible and cut is (syntactically) admissible. Moreover, we have shown that these calculi allow for a terminating decision procedure whose complexity is in {\\sc Pspace}. Finally, we have given a constructive proof of Craig's interpolation property for all the logics that do not contain rule $LR$-$C$. As far as we know, it is still an open problem whether it is possible to give a constructive proof of interpolation for the logics containing rule $LR$-$C$. Another open question is whether the calculi given here can be used to give a constructive proof of the uniform interpolation property for non-normal logics as it is done in \\cite{P92} for $\\mathbf{IL_p}$ and in \\cite{B07} for {\\bf K} and {\\bf T}. 
\n\n\n\n\n\n\n\\vspace{0.3cm}\n\n\\noindent {\\bf Thanks.} Thanks are due to Tiziano Dalmonte, Simone Martini, and two anonymous referees for many helpful suggestions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sxn0intrdxn}\nMRI is a household name as a diagnostic tool in the medical field\n\\cite{vp,tey},\nwith an impressive resume in many other fields, including the\nstudy of materials \\cite{callaghan,blumch,bjj,rugar},\ncorrosion of metals, monitoring batteries and supercapacitors\n\\cite{britton2010,ilott2014}.\n\nHowever, historically, MRI of bulk metals has been very rare, limited to\nspecialized cells using r.f. gradients (with limited control) \\cite{si0grld},\ninstead of the magnetic field gradients employed in conventional MRI.\nIn other studies involving bulk metals, the MRI targeted the surrounding\ndielectric (electrolyte in electrochemical and fuel cells with metallic\nelectrodes, or tissues with embedded metallic implants)\n\\cite{si0pines,si0camacho,si0bennet,si0olsen,si0alm,si0viano,si0shafiei,si0graf,\nsi0koch,si0jfz,si0garwood2011,si0garwood}.\nNotwithstanding, these studies addressed issues that can cause distortions and\nlimit sensitivity of the MRI images, such as bulk magnetic susceptibility (BMS) \neffects and eddy currents (produced on bulk metal surface due to\ngradient switching).\n\nThe dearth of mainstream bulk metal MRI is rooted in unique challenges posed by \nthe physics of propagation of r.f. EM fields in bulk metals\n(MRI employs r.f. pulses to generate the MR signal leading to the images).\nAll along, it has been known that the incident r.f. 
field decays rapidly and\nexponentially inside the metallic conductor (Fig.\\ref{figSI0dltaEff}), a\nphenomenon known as skin effect \\cite{jackson,si0griffiths,si0ulaby}.\nThe characteristic length of decay ($\\delta$, the {\\em skin depth}), typically\nof the order of several microns (Eq.(\\ref{eq:eq0sknDpth})),\ncharacterizes the limited r.f. penetration into the metal. This, in turn, results\nin attenuated MR signal intensity for bulk metals \\cite{si0pines,rangeet}.\nTurning the tables, Bhattacharyya et al. \\cite{rangeet} exploited the skin\neffect to separate and quantify bulk and non-bulk metal NMR signals in Li ion\nbatteries to monitor the growth of dendritic metallic structures.\nSubsequently, for bulk metal MRI, yet another impediment was correctly diagnosed\n\\cite{chandrashekar}.\nIt was found that the orientation of the bulk metal surface, relative to \\mbox{$\\bf B_1$}\n(the r.f. magnetic field),\ncritically affected the outcome.\nUsing optimal alignment of the bulk metal (electrodes), relative to \\mbox{$\\bf B_1$},\nrecent studies have successfully demonstrated and highlighted bulk metal MRI,\nalbeit primarily applied to batteries and electrochemical cells\n\\cite{chandrashekar,ilott,romanenko,britton2014,hjc2015}.\n\nThough unanticipated at the time, the recent bulk metal MRI findings eased the\nimplementation of MRI of liquid electrolyte, by helping mitigate adverse effects\ndue to the metal in the vicinity of lithium, zinc and titanium electrodes,\nyielding fresh insights\n\\cite{romanenko,britton2014,si0britton2013,si0klett,si0furo}.\nSimilar benefits may be expected to accrue for MRI-based radiology of soft\ntissues with embedded metallic implants (pacemakers, prosthetics, dental\nimplants, etc.).\n\\\\\n\n\n\nHere, we present several key findings on bulk metal MRI and CSI:\n\\begin{itemize}\n\\item\nDuring a systematic noninvasive thickness measurement of bulk metal strips\nby MRI \\cite{romanenko}\n(section S\\ref{sxnSI0thcknss}),\nwe come 
across unexpected regions of intensity, and assign them to two mutually \northogonal pairs of faces of the strip.\n\n\\item\nTo explain the peculiar ratios of intensities from these different regions in\nbulk metal MRI and CSI, we derive formulae from first principles, unveiling\na surprising underlying reason: \n{\\em differing effective elemental volumes for these different regions}.\n\n\\item\nIn the process, the images enable a visualization of a virtual EM vacuum\ninside the bulk metal via an {\\em MRI tunnel}.\n\n\\item\nAdditionally, we demonstrate that the bulk metal CSI distinguishes different\nfaces (surfaces) of a metal block according to their distinct NMR chemical\nshifts.\n\n\\end{itemize}\n\nWe attained these results by employing three {\\em phantoms} (samples)\n{\\bf P0, P1, P3}, depicted schematically in Fig.\\ref{fig0phntms} and described\nin Methods section \\ref{subsxn0mthdsPhntms}.\nAll phantoms were derived from the same stock of 0.75 $mm$ thick lithium (Li)\nstrip. Phantom P3 is a super strip composed of three Li strips pressed together,\nforming an {\\em effective} single strip three times thicker than the\nindividual strips in phantoms P0 and P1.\n\nThe setup of phantoms, r.f. 
coil and the gradient assembly ensures that the\nimaging directions $x,y,z$ fulfill the condition that \n$x \\parallel a \\parallel \\mbox{$\\bf B_1$}$ and $z \\parallel \\mbox{$\\bf B_0$}$\n(the static main magnetic field), with the possibility to reorient the phantom\nabout the $x$-axis; $a,b,c$ are the three sides of the strips.\n\nSince all MRI and CSI images to follow were acquired with the given phantom's\n$bc$ faces {\\em normal} ($\\perp$, {\\em perpendicular}) to \\mbox{$\\bf B_1$}, these images\nbear the imprint of having no contribution from these faces to the magnetic\nresonance (MR) signal \\cite{chandrashekar,ilott,romanenko,britton2014}, since\n\\mbox{$\\bf B_1$}\\ penetration into the metal is maximal when it is {\\em parallel}\n($\\parallel$) to the metal surface, and minimal when $\\perp$ metal surface\n\\cite{jackson,chandrashekar,ilott}.\n\nFor details on the MRI experiments, including the nomenclature, the reader is\nreferred to \nMethods section \\ref{subsxn0mthdsMri}.\n\n\n\\section{MRI}\n\\label{sxn0mri}\nFig.\\ref{fig0xy0yz} furnishes stackplots (intensity along the vertical axis) of\n\\mbox{$^7$Li} 2d MRI (without slice selection) of phantom P3.\nPanel (a) displays MRI($xy$). 
Panel (b) displays MRI($yz$).\n\nIt is straightforward to infer that the {\\em walls} of high intensity regions in\neither image emanate from $ac$ faces of the P3 strip\n\\cite{chandrashekar,ilott,romanenko}, as we did while measuring the\nthickness of metal strips (Figs.\\ref{fig0twoStrps} and \\ref{fig0xy},\nsection S\\ref{sxnSI0thcknss}).\nIn either image, contributions along the non imaged axis sum up to yield the\nhigh intensity walls.\n\nHowever, the unexpected intensity between the two $ac$ faces of the super strip,\nin both the images, is perplexing.\nThe 2d MRI($xy$) in Fig.\\ref{fig0xy0yz}a exhibits a low intensity {\\em plateau}\nspanning the walls.\nThe 2d MRI($yz$) in Fig.\\ref{fig0xy0yz}b displays low intensity {\\em ridges} \nbridging the walls.\n\\\\\n\n\\subsection{Visualizing a virtual eletromagnetic vacuum by MRI}\nTo understand better these unexpected regions of intensity, we acquired\n\\mbox{$^7$Li} 3d MRI($xyz$) of phantom P3, shown in Fig.\\ref{fig0xyz}.\nIn addition to the $ac$ faces (separated along $y$),\nthe $ab$ faces (separated along $z$) are revealed for the first time.\n\nAs noted earlier, $bc$ faces (being $\\perp$ to \\mbox{$\\bf B_1$}) are absent.\nThe hollow region in MRI($xyz$) arises from the skin depth phenomenon\n\\cite{rangeet,chandrashekar,ilott,jackson,si0griffiths,si0ulaby},\nrestricting the EM fields to effectively access only a limited\nsubsurface underneath the $ac$ and $ab$ faces\n(section S\\ref{sxnSI0subSurface} and Fig.\\ref{figSI0dltaEff}).\nThe presence of faces $\\parallel$ \\mbox{$\\bf B_1$}, coupled with the conspicuous absence\nof faces $\\perp$ \\mbox{$\\bf B_1$}, in combination with the hollow region, imparts the 3d\nimage an appearance of an {\\em MRI tunnel}, supplying a compelling visualization\nof a virtual {\\em EM vacuum} in the interior of a metallic conductor\n(hitherto depicted only schematically in literature\n(for e.g. 
Ref.\\cite{rangeet})).\n\\\\\n\nWith the aid of 3d MRI in Fig.\\ref{fig0xyz}, the intensity regions in\n2d MRI($xy$) and 2d MRI($yz$) images of Fig.\\ref{fig0xy0yz}, can be easily\ninterpreted as simply regions resulting respectively from projections\nalong $z$ and $x$ axes of the 3d image.\nIt is convincingly clear that the intensity between $ac$ faces\n(either the plateau or the ridges), is due to the pair of $ab$ faces\nof the superstrip P3.\n\nYet, the basis for the relative intensity values remains elusive at\nthis stage.\n\nFor the 2d MRI($xy$) in Fig.\\ref{fig0xy0yz}a, it can be argued that,\nfor the $ac$ face the entire length of side $c=$7 $mm$\n(Fig.\\ref{fig0phntms}) contributes to the signal,\nwhile for the $ab$ face, only a subsurface depth\n$ \\mbox{$\\delta_{\\text{eff}}$} \\approx$ 9.49 $\\mu m$ contributes\n(Eq.(\\ref{eq:eq0sknDpth}), Eq.(\\ref{eq:eq0dltaEff}),\nsection S\\ref{sxnSI0subSurface} and Fig.\\ref{figSI0dltaEff}).\nThis would lead to a ratio of the corresponding intensities, $S_{ac}\/S_{ab}$,\nto be of the order of $c\/(2 \\mbox{$\\delta_{\\text{eff}}$}) \\approx 368 $ (Fig.\\ref{fig0xy0yz0sim}a),\nin obvious and jarring disagreement with the observed ratio\n(of maxima of $S_{ac}$ and $S_{ab}$) of 6.6.\n\nFor the 2d MRI($yz$) in Fig.\\ref{fig0xy0yz}b, the expected intensity pattern in \na stack plot would be one with equal intensities from $ab$ and $ac$ faces,\nsince they share the same side, $a$, along $x$ (non imaged) axis\n(Fig.\\ref{fig0xy0yz0sim}b).\nThis again, is in stark contrast with the observed ratio\n(of maxima of $S_{ac}$ and $S_{ab}$) of 10.\n\nFor the MRI($xyz$), naively, uniform intensity would be expected from both $ab$ \nand $ac$ faces, resulting in a ratio of unity. 
Instead, the observed ratio\n(of maxima of $S_{ac}$ and $S_{ab}$) is found to be 3.8.\n \nThus, the MRI images bear peculiar intensity ratios from comfortably identified \n(from 3d MRI) regions of the bulk metal.\nWe will return to this topic later.\n\\\\\n\n\\section{CSI}\n\\label{sxn0csi}\nThe \\mbox{$^7$Li} NMR spectrum of phantom P3 (Fig.\\ref{fig0csi} inset)\ncontains two distinct peaks in the Knight shift region for metallic \\mbox{$^7$Li}\n(see Methods section \\ref{subsxn0mthdsMri}),\ncentered at $\\delta_1$= 256.4 and $\\delta_2$= 266.3 ppm.\nAt first sight it might seem odd that a metallic strip of regular geometry and\nuniform density, composed entirely of identical Li atoms, gives\nrise to two NMR peaks instead of the expected single peak.\n\nTo gain additional insight as to the spatial distribution of the Li metal\nspecies with different NMR shifts, we turn to CSI, which combines an NMR\nchemical shift (CS) dimension with one or more imaging (I) dimensions\n\\cite{callaghan,haacke,si0spnglr,si0kwf}.\n\nThe 2d CSI($y$) shown in Fig.\\ref{figSI0csi} comprises two bands separated\nalong $y$ located at $\\delta_2$, along the CS dimension,\nwhile a low intensity band spans them along $y$ at a CS of $\\delta_1$, strongly\nhinting that the two bands (at $\\delta_2$) are associated with $ac$ faces.\n\nThis observation called for adding an additional\nimaging dimension along $z$, leading to 3d CSI($yz$), which is realized in\nFig.\\ref{fig0csi},\nwhere $y$ and $z$ are the imaging dimensions, accompanied by the CS dimension.\nThe bands separated along $z$ occur at $\\delta_1$.\nThe bands separated along $y$ occur at $\\delta_2$.\nIn conjunction with the 3d MRI image in Fig.\\ref{fig0xyz},\nit is evident that the pairs of bands at $\\delta_1$ and $\\delta_2$ arise from\n$ab$ and $ac$ faces respectively, completing the spatio-chemical assignment.\n\nThese assignments readily carry over to 2d CSI($y$) in Fig.\\ref{figSI0csi},\nwith the pair of bands 
at $\\delta_2$ and the low ridge spanning them at \n$\\delta_1$, being respectively identified with $ac$ and $ab$ faces. Similarly,\nin the NMR spectrum, the short and tall peaks\nrespectively at $\\delta_{1,2}$ are assigned to $ab$ and $ac$ faces, consistent\nwith the reported \\cite{rangeet,ilott} experiments and simulations.\n\nThat different types of faces of the bulk metal strip suffer different NMR\n(Knight) shifts according to their orientations {\\em relative} to \\mbox{$\\bf B_0$},\nis consistent with previous observations and simulations \\cite{rangeet,ilott},\nand has been traced to bulk magnetic susceptibility (BMS) effect\n\\cite{rangeet,chandrashekar,ilott,hjc2015,hoffman,lina,hjc}.\n\nInterestingly, the 3d CSI sheds new light on previous bulk metal NMR studies.\nFor instance, in an earlier study \\cite{rangeet}, a similar shift\ndifference between NMR peaks was observed at $\\parallel$ and $\\perp$\norientations (relative to \\mbox{$\\bf B_0$}) of the major faces of a thinner metal strip,\nby carrying out two {\\em separate} experiments.\nHere, phantom P3 furnishes these two orientations in a {\\em single} experiment, \nvia $ac$ and $ab$ faces (Fig.\\ref{fig0phntms}).\nThe present work provides physical insight into another previous \\cite{ilott}\nobservation. 
It was found that the intensity of NMR peak arising from $ab$\nfaces, unlike that from the $ac$ faces, was invariant under rotation about\n$z \\parallel c \\parallel B_0$ axis.\nOur 3d MRI (Fig.\\ref{fig0xyz}) and 3d CSI (Fig.\\ref{fig0csi}), directly\ndemonstrate that such a rotation leaves the orientation of \\mbox{$\\bf B_1$}\\ relative to\n$ab$ face (but not the $ac$ face) the same (signal intensity from a given face\ndepends on its orientation relative to \\mbox{$\\bf B_1$} \\cite{chandrashekar,ilott}).\nNote that the shifts themselves remain unshifted since they depend on the\norientation of the faces relative to \\mbox{$\\bf B_0$}, which does not change under this\nrotation ($ac$ and $bc$ faces remain $\\parallel \\mbox{$\\bf B_0$}$, whilst $ab$ faces\nremain $\\perp \\mbox{$\\bf B_0$}$).\n\nThus, bulk metal CSI supplies direct\nevidence, that the bulk metal chemical (Knight) shifts resulting from BMS are\ncorrelated with the differing orientations (relative to \\mbox{$\\bf B_0$}) of different parts \nof the bulk metal.\\\\\n\nLike for MRI, the basis for the ratio of intensities from the $ac$ and $ab$\nfaces ($S_{ac}\/S_{ab} \\approx$ 2.8), is not immediately intuitively obvious\nand will be explored next.\n\n\n\n\\section{Intensity ratio formulae for bulk metal MRI and CSI}\n\\label{sxn0formulae}\nThe peculiar intensity ratios in MRI and CSI, of signals $S_{ab}$ and $S_{ac}$,\narising respectively from $ab$ and $ac$ faces of phantom P3\n(Fig.\\ref{fig0xy0yz}, sections \\ref{sxn0mri} and \\ref{sxn0csi}),\ncould be due to gradient switching involved in the MRI experiments (the\nresultant eddy currents could be different for $ab$ and $ac$ faces).\nHowever, as shown in section S\\ref{sxnSI0mri2dNtnst}, this can be ruled out on\nthe basis of 2d MRI($yz$) and MRI($zy$) at mutually orthogonal orientations\n(related by a rotation about $x \\parallel a \\parallel \\mbox{$\\bf B_1$}$),\nshown in Fig.\\ref{fig0p3hrzntl}.\n\nAnd yet, it is possible to derive, from 
elementary considerations\nand first principles, expressions for the {\\em ratios} of MRI\nand CSI intensities from $ab$ and $ac$ faces.\\\\\n\nFor the 2d MRI($xy$), in Fig.\\ref{fig0xy0yz}a, the signal intensity from the\n$ab$ faces can be written as (see Eq.(\\ref{eq:eq1dltaEff}))\n\\begin{equation}\n S_{ab} (x,y)\\propto dx\\ dy\\ \\int dz = dx\\ dy\\ 2\\delta_{\\text{eff}}\n\\label{eq:eq0sabxy}\n\\end{equation}\nwith $dx\\ dy\\ dz$ denoting the {\\em elemental} volume of the metal,\nand \\mbox{$\\delta_{\\text{eff}}$}\\ is the {\\em effective} subsurface {\\em depth} that would account\nfor the MR signal in the {\\em absence} of \\mbox{$\\bf B_1$}\\ decay\n(see Eq.(\\ref{eq:eq0dltaEff})\nand Fig.\\ref{figSI0dltaEff}).\nAbove, the integral over $z$ is replaced by \\mbox{$\\delta_{\\text{eff}}$}\\, underneath the two $ab$\nfaces separated along $z$.\n\nSimilarly, for the signal intensity from {\\em either} of the $ac$ faces,\n\\begin{equation}\n S_{ac} (x,y)\\propto dx\\ dy\\ \\int dz = dx\\ \\delta_{\\text{eff}}\\ c\n\\label{eq:eq0sacxy}\n\\end{equation}\nsince the subsurface now is $\\perp y$.\n\nEq.(\\ref{eq:eq0sabxy}) and Eq.(\\ref{eq:eq0sacxy}) reveal {\\em differing\neffective elemental volumes} (voxels) underneath these faces:\n\\begin{equation}\ndV_{\\text{eff}}^{\\text{ab}} = dx\\ dy\\ \\delta_{\\text{eff}}\n\\label{eq:eq0dVeffab}\n\\end{equation}\n\\begin{equation}\ndV_{\\text{eff}}^{\\text{ac}} = dx\\ \\delta_{\\text{eff}}\\ dz \n\\label{eq:eq0dVeffac}\n\\end{equation}\nFrom Eq.(\\ref{eq:eq0sabxy}) and Eq.(\\ref{eq:eq0sacxy}),\n\\begin{equation}\n \\frac{S_{ac}}{S_{ab}} =\\frac{c}{2\\Delta y}\n\\label{eq:eq0ratioxy}\n\\end{equation}\nwhere we have replaced $dy$ by $\\Delta y$, the resolution along\n$y\\ \\parallel b$.\nConsulting the Methods section \\ref{subsxn0mthdsMri} and Fig.\\ref{fig0phntms},\n$c=7$ $mm$, $\\Delta y$=0.25 $mm$ and Eq.(\\ref{eq:eq0ratioxy}) yields a\ncalculated ratio of $S_{ac}\/S_{ab}$= 14\n(as illustrated in 
Fig.\\ref{fig0xy0yz0drvd}a),\nwithin an order of magnitude of the observed ratio (section \\ref{sxn0mri},\nFig.\\ref{fig0xy0yz}a) and a 25-fold improvement relative to the expected ratio\n(Fig.\\ref{fig0xy0yz0sim}a).\n\nAlso, Eq.(\\ref{eq:eq0ratioxy}) reveals that $S_{ac}\/S_{ab}$ increases with\nincreasing resolution along $y$, as shown in the three images in\nFig.\\ref{figSI0td20td40td80},\nwith relative resolutions increasing by factors of 1, 2 and 4,\nyielding calculated $S_{ac} \/ S_{ab}$ ratios of 7, 14 and 28 respectively.\nThe corresponding observed ratios (of maxima of $S_{ac}$ and $S_{ab}$) of\n3.3, 6.6, and 11.6, are within an order of magnitude of the calculated values.\nRemarkably, these observed ratios themselves increase by factors of 1, 2 and\n3.5, mimicking closely the corresponding factors of resolution increase.\n\\\\\n\nContinuing in the same vein, for the 2d MRI($yz$), in Fig.\\ref{fig0xy0yz}b,\n\\begin{equation}\n S_{ab} (y,z) \\propto dy\\ dz \\int dx = a\\ dy\\ \\delta_{\\text{eff}}\n\\label{eq:eq0sabyz}\n\\end{equation}\nwhile,\n\\begin{equation}\n S_{ac} (y,z) \\propto dy\\ dz \\int dx = a\\ \\delta_{\\text{eff}}\\ dz\n\\label{eq:eq0sacyz}\n\\end{equation}\nleading to\n\\begin{equation}\n \\frac{S_{ac}}{S_{ab}}= \\frac{\\Delta z}{\\Delta y}\n\\label{eq:eq0ratioyz}\n\\end{equation}\nonce again, replacing $dy,\\ dz$ by $\\Delta y,\\ \\Delta z$, the respective\nresolutions along $y,\\ z$.\nUsing the values of $\\Delta y, \\Delta z$= 0.0357, 1 $mm$\nin Eq.(\\ref{eq:eq0ratioyz}) yields\na calculated ratio of $S_{ac} \/ S_{ab} \\approx$ 28\n(as illustrated in Fig.\\ref{fig0xy0yz0drvd}b),\nwithin an order of magnitude of the observed ratio (section \\ref{sxn0mri},\nFig.\\ref{fig0xy0yz}b), and 3.5-fold better than the expected ratio\n(Fig.\\ref{fig0xy0yz0sim}b). 
More importantly, the expected intensity pattern is \neven {\\em qualitatively} (visually) different from the experiment, unlike the\nderived pattern.\n\n\nSimilarly, for the 2d MRI($zy$) in Fig.\\ref{fig0p3hrzntl}b, of phantom P3 in\n{\\em horizontal} orientation, it can be easily shown that, \n\\begin{equation}\n \\frac{S_{ac}}{S_{ab}}= \\frac{\\Delta y}{\\Delta z}\n\\label{eq:eq0ratiozy}\n\\end{equation}\nUsing the values of $\\Delta z, \\Delta y$= 0.0357, 1 $mm$\nin Eq.(\\ref{eq:eq0ratiozy}), results in a calculated ratio of\n$S_{ac} \/ S_{ab} \\approx$ 28, within an order of magnitude of the observed\nratio (section S\\ref{sxnSI0mri2dNtnst}).\\\\\n\nProceeding along the same lines, for the MRI($xyz$) in Fig.\\ref{fig0xyz},\n\\begin{equation}\n S_{ab} (x,y,z) \\propto dx\\ dy\\ dz = dx\\ dy\\ \\delta_{\\text{eff}}\n\\label{eq:eq0sabxyz}\n\\end{equation}\nwhile,\n\\begin{equation}\n S_{ac} (x,y,z) \\propto dx\\ dy\\ dz = dx\\ \\delta_{\\text{eff}}\\ dz\n\\label{eq:eq0sacxyz}\n\\end{equation}\nAs usual by now, replacing $dy,\\ dz$ by $\\Delta y,\\ \\Delta z$, the respective\nresolutions along $y,\\ z$, we obtain again Eq.(\\ref{eq:eq0ratioyz}).\nConsulting the Methods section \\ref{subsxn0mthdsMri}, \n$\\Delta y, \\Delta z$= 0.25, 1 $mm$, respectively. 
Using these values in \nEq.(\\ref{eq:eq0ratioyz}) yields a calculated ratio of $S_{ac} \/ S_{ab}$= 4,\nwithin an order of magnitude of the observed ratio\n(section \\ref{sxn0mri}).\n\nFig.\\ref{fig0xyzSlc25} shows (in stack plot representation, with vertical axis\ndenoting intensity) $xy$ slices (along $z$) from the 3d MRI($xyz$).\nThe central slice contains no signal between the walls of intensity (from $ac$\nfaces) as expected.\nHowever, the slice from the top $ab$ face exhibits a {\\em plateau} of intensity\nbetween the $ac$ faces, visually demonstrating that $S_{ab} \\neq S_{ac}$ in the \nnon-hollow regions of the 3d image, in place of the expected uniform intensity.\n\\\\\n\nSimilarly, for the 3d CSI($yz$) in Fig.\\ref{fig0csi}, it can be shown that\nthe ratio $S_{ac} \/ S_{ab}$ is given by Eq.(\\ref{eq:eq0ratioyz}), which along\nwith the relevant experimental parameters for this image,\nyields a calculated value of 2, within an order of magnitude of the observed\nratio (of maxima of $S_{ac}$ and $S_{ab}$) of 2.8.\n\nOn the other hand, for the 2d CSI($y$), in Fig.\\ref{figSI0csi}, it can be shown \nthat the ratio $S_{ac} \/ S_{ab}$ is given by Eq.(\\ref{eq:eq0ratioxy}), from\nwhich we obtain a calculated value of 14, using the experimental parameters in\nsection \\ref{subsxn0mthdsMri}. 
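The derived ratios quoted throughout this section follow mechanically from Eq.(\ref{eq:eq0ratioxy}) and Eq.(\ref{eq:eq0ratioyz}); a short numerical sketch (our own check, not part of the paper's workflow; the values of $c$, $\delta_{\text{eff}}$ and the resolutions are those quoted in the text) reproduces them:

```python
# Numerical check (ours, not from the paper) of the derived S_ac / S_ab
# ratios, using the strip dimension and image resolutions quoted in the
# text. All lengths are in mm.

c = 7.0               # side c of the Li strip
delta_eff = 9.49e-3   # effective subsurface depth, ~9.49 um

def ratio_xy(dy):
    """Eq. (ratioxy): S_ac/S_ab = c / (2*Delta y), for 2d MRI(xy) and 2d CSI(y)."""
    return c / (2.0 * dy)

def ratio_yz(dy, dz):
    """Eq. (ratioyz): S_ac/S_ab = Delta z / Delta y, for MRI(yz), MRI(xyz), 3d CSI(yz)."""
    return dz / dy

print(round(ratio_xy(0.25)))         # 14: 2d MRI(xy) with Delta y = 0.25 mm
print(round(ratio_yz(0.0357, 1.0)))  # 28: 2d MRI(yz) with Delta y = 35.7 um, Delta z = 1 mm
print(round(ratio_yz(0.25, 1.0)))    # 4:  3d MRI(xyz) with Delta y = 0.25 mm, Delta z = 1 mm
print(round(c / (2.0 * delta_eff)))  # "expected" ratio from skin depth alone (~368 in the text)
```

The same two functions also give the 7/14/28 sequence for the three MRI($xy$) resolutions, with $\Delta y$ = 0.5, 0.25 and 0.125 $mm$.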
The measured ratio (of maxima of $S_{ac}$ and\n$S_{ab}$) of 9.5, is again within an order of magnitude of the calculated value.\n\\\\\n\nThus, the $S_{ac} \/ S_{ab}$ ratios calculated from\nEqs.(\\ref{eq:eq0ratioxy}), (\\ref{eq:eq0ratioyz}) and (\\ref{eq:eq0ratiozy}),\nagree with the observed values within an order of magnitude\nfor 2d MRI($xy$), 2d MRI($yz$), 2d MRI($zy$), 3d MRI($xyz$), 3d CSI($yz$) and\n2d CSI($y$).\nIn fact, discrepancies between observed and derived $S_{ac}\/S_{ab}$ ratios range\nonly by factors of 0.7 to 2.8 across various MRI and CSI images\n(see Table.\\ref{table:tbl0ratio}).\nMore importantly, the derived patterns {\\em resemble} the observed patterns,\nunlike the expected patterns, which differ even visually from the observed\npatterns (for e.g., see\nFigs.\\ref{fig0xy0yz}, \\ref{fig0xy0yz0sim} and \\ref{fig0xy0yz0drvd}).\n\n\\begin{table}[h]\n \\caption{\n The observed, derived (from\n Eqs.(\\ref{eq:eq0ratioxy}), (\\ref{eq:eq0ratioyz}), (\\ref{eq:eq0ratiozy})),\n and expected (from skin depth arguments alone) $S_{ac} \/ S_{ab}$ ratios.\n }\n \\begin{tabular}{|c| c c c|}\n \\hline\n & & $S_{ac} \/ S_{ab}$ & \\\\\n \\hline\n Experiment & Observed & Derived & Expected \\\\\n\\hline\nMRI($xy$) & & & \\\\\nFig.\\ref{figSI0td20td40td80}a & 3.3 & 7 & 368 \\\\\nFig.\\ref{figSI0td20td40td80}b & 6.6 & 14 & 368 \\\\\nFig.\\ref{figSI0td20td40td80}c & 11.6 & 28 & 368 \\\\\n & & & \\\\\nMRI($yz$) & 10 & 28 & 1 \\\\\nMRI($zy$) & 10 & 28 & 1 \\\\\nMRI($xyz$) & 3.8 & 4 & 1 \\\\\n2d CSI($y$) & 9.5 & 14 & 368 \\\\\n3d CSI($yz$) & 2.8 & 2 & 1 \\\\\n\\hline\n \\end{tabular}\n \\label{table:tbl0ratio}\n\\end{table}\n\nIn summary, the formulae unveil the underlying reason for the significant\ndeparture of observed $S_{ac}\/S_{ab}$ from expected values:\n{\\em differing effective elemental volumes underneath these faces},\nas revealed by Eqs.(\\ref{eq:eq0dVeffab}) and (\\ref{eq:eq0dVeffac}).\nThe derived patterns bear closer resemblance to\nexperiment, than 
what is expected from skin depth considerations alone, or from\nconventional specifications of the voxel= $\\Delta x \\Delta y \\Delta z$ (see,\ne.g., Figs.\\ref{fig0xy0yz}, \\ref{fig0xy0yz0sim} and \\ref{fig0xy0yz0drvd}).\n\nOn a practical note, these formulae can guide experimental strategies to\nrelatively enhance MRI and CSI signals from different regions of the bulk metal.\n\n\n\\section{Conclusions}\n\\label{sxn0cnclusn}\nIn conclusion, the unexpected findings presented here may impact bulk metal MRI \nand CSI studies in general,\nvia fresh insights\nfor data collection, analysis and interpretation.\nThe bulk metal MRI and CSI\n(correlating different bulk metal surfaces with distinct chemical shifts)\nresults in this study have noninvasive diagnostic potential in other fields\nsuch as\nstructure of metals and alloys \\cite{rdrgz,flyn},\nmetallurgy (metal fatigue, fracture, strain)\n\\cite{mtllurg,bppag,mvrkks},\ncatalysis\n\\cite{zhong,yuan},\nbulk metal surface science and surface chemistry\n\\cite{tlptr,whttn,rgg},\nmetallic medical implants, dielectric MRI in the vicinity of bulk metals,\netc.\n(section \\ref{sxn0intrdxn}).\n\nThe findings may also lead to as yet unforeseen applications\n(section \\ref{sxn0intrdxn}) since:\n(i) they are of a fundamental nature,\n(ii) there are no inherent limitations to the approach employed\n(scalability, different metals, systems other than batteries, etc., are all\npossible),\n(iii) the study utilizes only standard MRI tools (hardware, pulse sequences,\ndata acquisition and processing), ensuring ease of implementation and\nreproducibility. 
Thus it is likely to benefit from advances made in the\nmainstream (medical) MRI field.\n\n\n\n\n\\section{Methods}\n\\label{sxn0mthds}\n\\subsection{Phantoms}\n\\label{subsxn0mthdsPhntms}\nAll phantoms were assembled and sealed in an argon filled glove box.\nAll three phantoms, P0, P1 and P3, shown in Fig.\\ref{fig0phntms} \n(of dimensions $a \\times b \\times c$),\nwere derived from a (0.75 $mm$ thick) stock Li strip (Alfa Aesar 99.9\\%).\nThe Li strips were mounted on a 2.3 $mm$ thick teflon strip and the resulting\nsandwich bound together with Kapton tape.\nEach phantom was placed in a flat bottom glass tube (9.75 $mm$ inner diameter\n(I.D.), 11.5 $mm$ outer diameter (O.D.) and 5 $cm$ long), with\nthe longest side, $a$, $\\parallel$ to the tube axis and to the axis of the\nhome built horizontal loop gap resonator (LGR) r.f. coil (32 $mm$ long, 15 $mm$\nO.D.), thus guaranteeing $\\mbox{$\\bf B_1$} \\parallel a$.\nThe phantom containing glass tubes were wrapped with Scotch tape\nto snugly fit into the r.f. coil.\n\nIn Fig.\\ref{fig0phntms}, $x,y,z$ specify the imaging (gradient) directions,\nwith $z \\parallel \\mbox{$\\bf B_0$}$ (the main magnetic field).\nFor our horizontal LGR r.f. 
coil (the MR resonator) and the gradient assembly\nsystem, $\mbox{$\bf B_1$} \parallel x$, resulting in $\mbox{$\bf B_1$} \parallel x \parallel a$.\n\begin{itemize}\n\item {\bf P0}: Pair of Li strips separated by a teflon strip;\nfor each Li strip,\n$a \times b \times c=$ 20 x 0.75 x 7 $mm^3$.\n\item\n{\bf P1}: Single Li strip.\n$a \times b \times c=$ 15 x 0.75 x 7 $mm^3$.\n\item\n{\bf P3}: Three Li strips pressed together to yield a single\ncomposite super strip.\n$a \times b \times c=$ 15 x 2.25 x 7 $mm^3$.\n\end{itemize}\n\n\n\subsection{MRI and CSI}\n\label{subsxn0mthdsMri}\nMagnetic resonance experiments were conducted on a $B_0$=21$T$ magnet\n(corresponding to a \mbox{$^7$Li} Larmor frequency of 350 MHz)\noperating under a Bruker Avance III system with Topspin spectrometer control and\ndata acquisition, and equipped with a triple axes ($x,y,z$) gradient amplifier\nassembly, using a multinuclear MRI probe\n(for a triple axes 63 $mm$ I.D. gradient stack by Resonance Research Inc.), \nemploying the LGR r.f. coil (resonating at 350 MHz) described above.\n\n\nThe MRI and CSI data were acquired using a spin-echo imaging pulse sequence\nwithout slice selection \cite{callaghan,haacke} (yielding the sum total of signal\ncontributions from the non-imaged dimensions).\nA frequency encoding gradient was\nemployed for the directly detected dimension and phase encoding gradients for\nthe indirect dimensions \cite{callaghan,haacke}.\nThe CSI experiments were carried out with the NMR chemical shift as the directly\ndetected dimension, with phase encoding gradients along the indirectly detected \nimaging dimensions.\nThe r.f. 
pulses were applied at a carrier frequency of 261 ppm (to excite the\nmetallic \mbox{$^7$Li} nuclear spins in the Knight shift region\n\cite{rangeet,abragam}),\ntypically with a strength of 12.5 $kHz$, with a recycle (relaxation) delay of\n0.5 $s$.\nThe gradient dephasing delay and phase encoding gradient duration were 0.5 $ms$.\n\nThroughout this manuscript,\nthe first axis label ($x,y,z$) describing an MRI experiment stands for the frequency\nencoding dimension and the remaining ones correspond to phase encoded\ndimensions. For example, MRI($xyz$) implies frequency encoding along the $x$ axis,\nand phase encoding along the remaining directions.\n\n$G_x,G_y,G_z$ and $N_x,N_y,N_z$ denote respectively the gradient strengths in\nunits of $T\/m$ and the number of data points in $k$-space ($^*$ denoting the complex\nnumber of points acquired in quadrature) \cite{callaghan,haacke}, along the\n$x,y,z$ axes.\n$L_x,L_y,L_z$ and $\Delta x,\Delta y,\Delta z$ are respectively the resultant\nnominal field of view (FOV) and resolution, in units of $mm$, along the $x,y,z$ axes\n\cite{callaghan,haacke}.\n\nAlso, $n$ is the number of transients accumulated for signal averaging and\n$SW$ is the spectral width (in units of $kHz$) for the directly detected\ndimension in MRI and CSI.\n\\ \\\n{\bf 1d MRI({\em y}):} \\\n$n=64,\ SW=50$ \\\n$G_y=0.42,\ N_y^*=200,\ L_y=7.143,\ \Delta y= 0.0357$\n\\\n{\bf 2d MRI({\em xy}):} \\\n$n=32,\ SW=100$ \\\n$G_x=0.24,\ N_x^*=200,\ L_x=25,\ \Delta x= 0.125$ \\\n$L_y=10$ \\\n(1) $G_y=0.12,\ N_y=20,\ \Delta y= 0.500$\n (Figs.\ref{fig0twoStrps}a,\ref{figSI0td20td40td80}a) \\\n(2) $G_y=0.24,\ N_y=40,\ \Delta y= 0.250$\n (Figs.\ref{fig0twoStrps}b,\ref{fig0xy}b,\ref{fig0xy0yz}a,\n \ref{figSI0td20td40td80}b) \\\n(3) $G_y=0.48,\ N_y=80,\ \Delta y= 0.125$ (Fig.\ref{figSI0td20td40td80}c) \\\n{\bf 2d MRI({\em yz}):} \\\n$n=32,\ SW=50$ \\\n$G_y=0.42,\ N_y^*=200,\ L_y=7.143,\ \Delta y= 0.0357$ \\\n$G_z=0.06,\ 
N_z=16,\\ L_z=16,\\ \\Delta z= 1.000$\n\\\\\n{\\bf 2d MRI({\\em zy}):} \\\\\n$n=32,\\ SW=50$ \\\\\n$G_z=0.42,\\ N_z^*=200,\\ L_z=7.143,\\ \\Delta z= 0.0357$ \\\\\n$G_y=0.06,\\ N_y=16,\\ L_y=16,\\ \\Delta y= 1.000$\n\\\\\n{\\bf MRI({\\em xyz}):} \\\\\n$n=16,\\ SW=100$ \\\\ \n$G_x=0.24,\\ N_x^*=200,\\ L_x=25,\\ \\Delta x= 0.125$ \\\\\n$G_y=0.24,\\ N_y=40,\\ L_y=10,\\ \\Delta y= 0.250$ \\\\\n$G_z=0.06,\\ N_z=16,\\ L_z=16,\\ \\Delta z= 1.000$\n\\\\\n{\\bf 2d CSI({\\em y}):} \\\\\n$n=8,\\ SW=100$, number of data points(complex)=$1024$ \\\\ \n$G_y=0.24,\\ N_y=40,\\ L_y=10,\\ \\Delta y= 0.250$\n\\\\\n{\\bf 3d CSI({\\em yz}):} \\\\\n$n=24,\\ SW=100$, number of data points(complex)=$1024$ \\\\ \n$G_y=0.12,\\ N_y=20,\\ L_y=10,\\ \\Delta y= 0.500$ \\\\\n$G_z=0.06,\\ N_z=16,\\ L_z=16,\\ \\Delta z= 1.000$ \\\\\n\\\\\nAll data were processed in Bruker's Topspin, with one zero fill prior to complex\nfast Fourier Transform (FFT) along each dimension either without any window\nfunction or with sine-bell window function.\nAll data were 'normalized' (to $\\approx 10$, for plotting convenience) to aid\ncomparing {\\em relative} intensities from different regions {\\em within} a given\nimage.\nFor the purpose of determining the ratios of signal intensities associated\nwith different regions of the bulk metal, the intensity values were measured\ndirectly from the processed images either in Topspin or Matlab (for e.g.,\n'datatip' utility in Matlab, yields the coordinates and the 'value' (intensity) \nof a data point by clicking on it, in 1d, 2d and 3d plots). \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nApplication of external pressure is a powerful method to tune the intricate interplay of competing energy scales in correlated materials and the emergence of novel unconventional phases in a clean fashion. 
It offers significant advantages as a control parameter compared with chemical substitution and application of magnetic fields, because it does not introduce additional disorder, as in the case of substitution of one element by another, or polarize the electrons, as a magnetic field does.\n\nA large variety of experimental setups has been developed to probe physical properties under hydrostatic pressure.\cite{Nicklas2015} In contrast, experiments under uniaxial pressure appeared to be limited to low pressures and only a few experimental probes. Recently, however, the development of piezoelectric-driven pressure devices opened a new perspective.\cite{Hicks2014a,Barber2019} These devices allow the application of large positive and negative pressures, and the amplitude of the applied pressure can be easily changed at low temperatures. In a short period of time experimental stages to access a large number of physical properties of materials have been developed. These include electrical transport,\cite{Stern2017,Steppke2017} magnetic susceptibility,\cite{Hicks2014b} nuclear-magnetic resonance,\cite{Kissikov2017,Luo2019,Pustogow2019} muon-spin resonance,\cite{Grinenko2020} and angle-resolved photoemission, for which mechanically or thermally activated cells have also been introduced.\cite{Flototto2018,Ricco2018,Pfau2019a,Pfau2019b,Sunko2019}\n\nAn important quantity to characterize a material is the specific heat, which is the fundamental thermodynamic property giving information on the internal degrees of freedom of a material and the entropy related to them.\nTo address the experimental challenge of studying the heat capacity under large uniaxial pressures, we employ a variation of known AC heat-capacity measurement techniques.\cite{Sullivan1968} Heat capacity measurement has been combined with uniaxial pressure previously,\cite{Jin1992,Reinders1994,Miclea2002,Zieve2004,Dix2009} but with traditional, anvil-based uniaxial pressure cells. 
Samples have been thermally isolated by using low thermal conductivity materials, such as stainless steel or superconducting NbTi, as a piston or an additional spacer. However, in previous anvil-based uniaxial-pressure measurements, e.g.\ on the unconventional superconductor Sr$_2$RuO$_4$, it did not prove practical to maintain high stress homogeneity,\cite{Kittaka2010,Taniguchi2015} which is one of the main challenges in carrying out this kind of experiment. Furthermore, under applied uniaxial pressure the samples may even deform plastically. To reduce these effects we apply force to the sample through a layer of epoxy,\cite{Hicks2014a} which acts as a conformal layer that dramatically improves stress homogeneity. However, it also makes heat-capacity measurement more challenging, because the epoxy layer provides an unavoidably strong thermal link to the pressure cell.\n\nFor our study we have used Sr$_2$RuO$_4$, which provides a demanding test of our new apparatus. Sr$_2$RuO$_4$ is an unconventional superconductor with a superconducting transition temperature up to $T_c=1.5$~K in the best crystals.\cite{Mackenzie1998,Mackenzie2003,Mackenzie2017} From resistivity and magnetic susceptibility experiments it is known that $T_c$ shows a pronounced dependence on the applied uniaxial pressure.\cite{Hicks2014b,Steppke2017,Barber2018} This and the sharp superconducting transition anomaly make it an ideal material to demonstrate the potential of our technique for the study of correlated materials. 
A successful experiment on Sr$_2$RuO$_4$ can only be done using a technique that introduces no disorder or plastic deformations, and that probes a region in which the strain induced in the sample is highly homogeneous.\n\n\n\n\section{METHOD}\n\nFor a setup in which the sample is strongly coupled to the environment as it is in a pressure cell, whether hydrostatic or uniaxial, standard quasi-adiabatic or relaxation techniques are limited to cases where the heat capacity of the whole pressure cell including the sample is measured and the heat capacity of the sample can then be separated from the (large) addenda. That implies restrictions on the materials which can be investigated and limits experiments to low temperatures. The advantage of such a technique is that one obtains absolute values of the heat capacity, but the resolution and the pressure regime are limited. In heat-capacity measurements at higher pressure, where anvil-type cells are used, or for uniaxial pressure experiments, the application of this technique is not possible anymore. The mass of the sample is negligible with respect to that of the pressure apparatus. In these cases the heat capacity can only be determined using an AC heat capacity measurement technique.\cite{Sullivan1968} With the AC technique it is possible to record heat capacity data in a wide range of parameter space on a sample which is not well thermally isolated from its environment by adjusting the measurement frequency. The drawback is that it is generally challenging to obtain absolute values of the heat capacity and one usually has to be content with data having arbitrary units. 
As we demonstrate, however, it can still yield a wealth of useful information.\n\n\\subsection{AC Heat Capacity}\\label{AC Heat Capacity}\n\n\\begin{figure}[tb!]\n\\includegraphics[width=0.9\\linewidth]{Scheme_HC}\n\\centering\n\\caption{\n(a) Schematic drawing of the thermal couplings of a sample in an AC heat-capacity setup.\n(b) A schematic diagram of the frequency response curve $F$ against $\\omega$. $\\omega$ is the angular frequency of the temperature oscillation. The curve can be divided into three regions separated by $\\omega_1$ and $\\omega_2$.\n }\n\\label{Scheme_HC}\n\\end{figure}\n\nIn the AC heat capacity measurement technique an alternating current is applied at frequency $\\omega\/2$ to the heater, leading to an AC heat power at frequency $\\omega$ to determine the heat capacity $C_{AC}$. Here $\\omega=2\\pi f$ is the angular frequency.\nThe governing relationship for measurements of the AC heat capacity is\n\\begin{equation}\n C_{AC}= \\frac{P}{\\omega T_{AC} } F(\\omega).\n\\label{Cac}\n\\end{equation}\n$P$ is the average power and $F(\\omega)$ is a frequency response curve that characterizes the thermalization of the sample, and differs from sample to sample, because it depends on time constants determined by thermal conductances and heat capacities of the system.\n$F(\\omega)$ depends on the time constants $\\tau_1$ and $\\tau_2$:\n\\begin{equation}\\label{F_omega}\n F(\\omega)=\\left[ 1+\\frac{1}{\\omega^2\\tau_1^2}+\\omega^2\\tau_2^2\\right] ^{-1\/2}.\n\\end{equation}\n$\\tau_1=C_{AC}\/k_b$ describes the time scale of the applied heat power decaying to the environment, whereas $\\tau_2=\\sqrt{\\tau_h^2+\\tau_{\\theta}^2+\\tau_{\\rm int}^2}$ describes the internal thermal time scale within the system itself.\nHere $\\tau_h=C_h\/k_h$,$\\tau_{\\theta}=C_{\\theta}\/k_{\\theta}$ and $\\tau_{\\rm int}=C_s\/k_s$ (see also Fig.\\ \\ref{Scheme_HC}a). 
The time constants $\tau_h$, $\tau_{\theta}$, and $\tau_{\rm int}$ describe the time scales for the heater, thermometer, and sample to be thermalized, respectively. $C_h$, $C_{\theta}$, and $C_s$ are the heat capacities of the heater, thermometer, and sample, respectively. For a good design, the responses of heater and thermometer need to be fast, so one should aim at $\tau_{\rm int}\gg \tau_h$ and $\tau_{\theta}$.\n\nA schematic diagram of $F(\omega)$ is shown in Fig.\ \ref{Scheme_HC}b. At low frequencies, indicated as regime I, $\omega\ll\omega_1= 1\/\tau_1=k_b\/C_{AC}$, $F(\omega)$ is reduced due to dissipation of temperature oscillations into the environment, and at high frequencies, $\omega\gg\omega_2= 1\/\tau_2\cong k_s\/C_s$, because the heater-sample-temperature sensor system does not thermalize, marked as regime III.\nIn the plateau region between these limits $F(\omega)\approx 1$ and\n\begin{equation}\label{C_ac_prox}\nC_{AC}\approx \frac{P}{\omega T_{AC} }.\n\end{equation}\n\nIn addition to the temperature oscillations, the application of the oscillatory heating power leads to a temperature offset $T_{DC}$ in the sample, which can be determined in the low frequency limit $\omega\ll\omega_1$. Here $F(\omega)=\omega\tau_1$ and the temperature offset can be estimated as\n\begin{equation}\label{T_DC}\nT_{DC}\approx \frac{P}{k_b}.\n\end{equation}\n\n\n\n\n\subsection{Experimental Setup}\n\n\nThe general considerations in the section above show that it should be possible to measure the heat capacity in a uniaxial pressure apparatus as shown in Fig.\ \ref{Scheme_thermalConductivity}a by choosing the correct set of experimental parameters. 
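The role of the plateau in $F(\omega)$ can be illustrated with a short numerical sketch; the time constants used below are assumed, order-of-magnitude values (not measured ones), chosen only so that $\tau_1 \gg \tau_2$ as required for a usable plateau.

```python
import math

def F(omega, tau1, tau2):
    """Frequency response of the AC calorimeter:
    F = [1 + 1/(omega^2 tau1^2) + omega^2 tau2^2]^(-1/2)."""
    return (1.0 + 1.0 / (omega * tau1) ** 2 + (omega * tau2) ** 2) ** -0.5

# Assumed illustrative time constants: slow decay to the bath (tau1)
# and fast internal thermalization (tau2), with tau1 >> tau2.
tau1, tau2 = 0.1, 1e-4   # seconds

for f in (1, 10, 100, 1000, 10000, 100000):   # frequency of the heat power, Hz
    omega = 2 * math.pi * f
    print(f"{f:>6} Hz  F = {F(omega, tau1, tau2):.3f}")

# In the plateau 1/tau1 << omega << 1/tau2, F ~ 1 and C_AC ~ P/(omega*T_AC);
# outside the plateau this simple relation fails (regimes I and III).
```

With these inputs $F$ rises from about 0.5 at 1 Hz to essentially 1 between roughly 100 Hz and 1 kHz, then falls again, mirroring the three regimes of the schematic response curve.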
In the following we will explain the experimental setup and describe the details of the preparation process using the example of a Sr$_2$RuO$_4$ single crystal.\n\n\nThe sample is marked by a red circle in Fig.\ \ref{Scheme_thermalConductivity}a and shown in detail in Fig.\ \ref{Scheme_thermalConductivity}b. In this setup the applied force results in a normal strain\n\begin{equation}\label{strain}\n\varepsilon_{xx}=\frac{l-l_0}{l_0}\n\end{equation}\nin the sample. Here $l_0$ is the length of the unstrained sample and $l$ the length of the strained sample. The length change is measured capacitively and can be controlled. \MN{}{The applied strain can go beyond $1\%$. In Sr$_2$RuO$_4$ the Young's modulus is about 180~GPa and correspondingly the applied uniaxial pressure can reliably reach up to about 2~GPa. However, the maximum uniaxial pressure depends strongly on the mechanical properties of the investigated material.} Further details can be found in Ref.\ \onlinecite{Hicks2014a}.\nThe present AC heat-capacity technique can be adapted to different types of uniaxial pressure devices, e.g.\ to a stress-controlled apparatus \cite{Barber2019} \MN{}{and is fully compatible with experiments in magnetic fields}.\n\n\n\begin{figure}[tb!]\n\includegraphics[width=0.9\linewidth]{Scheme_thermalConductivity}\n\centering\n\caption{\n(a) Photograph of the uniaxial pressure apparatus used in the present study. The red circle marks the sample region.\n(b) Photograph of the setup of the heat capacity measurements under strain, including heater and thermometer. The sample is glued between the jaws of the uniaxial pressure device. \MN{}{The exposed length, width and thickness of the shown sample are 2~mm, 200~$\mu$m and 150~$\mu$m, respectively. The device} allows the application of compressive and tensile strains. 
The red, yellow, and white rectangles represent the (quasi)homogeneous, inhomogeneous, and unstrained regions, respectively, see text for details.\n(c) Schematic diagram of the setup illustrating the photograph in (b).}\n\label{Scheme_thermalConductivity}\n\end{figure}\n\n\nIn Fig.\ \ref{Scheme_thermalConductivity}b we show a photograph of the bar-shaped sample that has been carefully cut, polished, and then mounted within the jaws of the uniaxial pressure rig.\nThe nature of the apparatus means that only the central part of the sample is homogeneously strained. Force is transferred to the sample through the epoxy layer around the sample. The sample ends protruding beyond it are unstrained, and there are intermediate regions, marked in yellow in Fig.\ \ref{Scheme_thermalConductivity}b, where the strain is built up. Therefore, we have to choose the measurement conditions in a way that we only probe the homogeneous part of the sample. Using the example of a Sr$_2$RuO$_4$ single crystal, we will demonstrate that this is in principle possible by varying the excitation frequency $f_{\rm exc}=f\/2$ of the heater, if the characteristic parameters of the setup, such as the different thermal conductances, have been chosen in the appropriate range.\n\nFor the experiments single crystalline Sr$_2$RuO$_4$ was aligned using a bespoke Laue x-ray camera, and cut using a wire saw into thin bars whose long axis was aligned with the [100] direction of the crystal. For the best results these bars were polished using a home-made apparatus based on diamond-impregnated paper with a minimum grit size of 1~$\mu$m. The bar was then mounted within the jaws of the uniaxial pressure rig using Stycast 2850FT epoxy with Catalyst 23LV (Henkel Loctite). A resistive thin film resistor chip (State of the Art, Inc., Series No.:\ S0202DS1001FKW) as a heater and a Au-AuFe(0.07\%) thermocouple are fixed to opposite sides of the sample using Dupont 6838 single component silver-filled epoxy. 
\MN{}{The resistance of the heater is about 640~$\Omega$ and the applied power is in the range of $\mu$W. The heater is connected electrically using manganin wires providing a low thermal conductance to the bath. At 1~K the thermal conductance of the Stycast layers and the manganin wires is about $10^{-4}$ and $10^{-7}$~W\/K, respectively. Thus, the heat loss is largely dominated by the Stycast layers.} The thermocouple was spot-welded in-house and its calibration fixed by reference to that of a calibrated RuO$_2$ thermometer.\footnote{In the temperature range between 0.15 and 4.5~K the thermopower $S$ of the Au-AuFe(0.07\%) thermocouple is described by $S(T)=[10.1483\cdot T\/{\rm K}-8.75772\cdot (T\/{\rm K})^2+4.00231\cdot (T\/{\rm K})^3-0.838741\cdot (T\/{\rm K})^4+0.0667604\cdot (T\/{\rm K})^5]{\rm ~\mu V\/K}$.} Special care was taken when epoxying to the pressure cell to minimize tilt and ensure a strain field that is as homogeneous as possible.\n\n\nThe uniaxial pressure apparatus was mounted on a dilution refrigerator (Oxford Instruments), with thermal coupling to the mixing chamber via a high purity silver wire. The data were acquired between 500~mK and 4.2~K, with operation above 1.5~K achieved by circulating a small fraction of the mixture. The extremely low noise level of 20~${\rm pV\/\sqrt{Hz}}$ on the thermocouple readout was achieved by the combination of an EG\&G 7265 lock-in amplifier and a \MN{}{high frequency} low temperature transformer (\MN{}{LTT-h from} CMR direct) mounted on the 1~K pot of the dilution refrigerator, operating at a gain of 300. \MN{}{The input impedance of the transformer is about $0.1~\Omega$, which ensured a flat frequency response from several hundred Hz to several tens of kHz.} A Keithley 6221 low-noise current source was used to drive the heater. 
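For orientation, the thermocouple calibration polynomial quoted in the footnote can be evaluated directly to convert a measured voltage into a temperature-oscillation amplitude via $T_{AC}=V\/S(T)$; a minimal sketch, in which the 1.5~nV input voltage is an assumed example value, not a measured one:

```python
def thermopower_uV_per_K(T):
    """Au-AuFe(0.07%) thermocouple sensitivity S(T) in uV/K,
    from the calibration polynomial (valid between 0.15 K and 4.5 K)."""
    coeffs = (10.1483, -8.75772, 4.00231, -0.838741, 0.0667604)
    return sum(c * T ** (i + 1) for i, c in enumerate(coeffs))

T = 1.45                        # K, near the zero-strain T_c of Sr2RuO4
S = thermopower_uV_per_K(T)     # sensitivity, roughly 5 uV/K at this T

V_rms = 1.5e-3                  # uV: an assumed 1.5 nV thermocouple signal
T_ac = V_rms / S                # K, temperature-oscillation amplitude
print(f"S({T} K) = {S:.2f} uV/K, T_AC = {T_ac * 1e3:.2f} mK")
```

A nanovolt-scale signal thus corresponds to a temperature oscillation of a few tenths of a millikelvin, which is why the low-noise transformer readout described above is essential.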
The piezo-electric actuators were driven at up to $\pm400$ V using a bespoke high-voltage amplifier.\n\n\n\n\subsection{Strain inhomogeneity}\n\nThe nature of our setup is that the strain profile along the direction of the application of the uniaxial pressure is not homogeneous.\nAs we will describe below, by adjusting the excitation frequency $f_{\rm exc}$ to an appropriate value the actual heat-capacity measurement can be confined to the quasi-homogeneously strained region of the sample. Besides this source of strain inhomogeneity there are other sources which can be reduced in the preparation and mounting process of the sample in the apparatus.\n\n\n\subsubsection*{Imperfections of sample surface and geometry}\n\nThe bar-shaped needles cut from crystals typically have terraces and irregular shapes on their surfaces, which can induce inhomogeneous strain fields when they are under uniaxial pressure. Imperfections may also lead to an early failure of pressurized samples, reducing the maximum achievable pressure.\nA perfect sample is a cuboid, i.e.\ each surface is parallel to the opposite one, and has low surface roughness. Therefore, we carefully polish our samples and inspect the shape and the surface quality under a microscope before mounting in the uniaxial pressure apparatus.\n\n\begin{figure}[tb!]\n\includegraphics[width=0.95\linewidth]{heater_mounting}\n\centering\n\caption{(a) Heater fixed with a silver foil to the sample. The contact to the sample is on the whole plane.\n(b) The simulation of the strain $\varepsilon_{xx}$ pattern corresponding to the setup in (a). One of the silver-filled epoxy blocks was set to be invisible, as indicated by the\ndash-dotted lines, such that the strain profile on the edge of the sample is visible.\n(c) Heater fixed to the sample by four thin silver wires on the edges of the sample.\n(d) Corresponding simulation to (c). 
The strain inhomogeneity is reduced in the center compared with the setup shown in (a).\n }\n\label{heater_mounting}\n\end{figure}\n\n\subsubsection*{Bending}\n\nAsymmetric mounting of a sample leads to bending.\cite{Barber2017} An ideal sample mounting is a sample mounted between two plates with symmetrical epoxy layers on top and at the bottom. However, the sample might end up with a small offset in height. To reduce inhomogeneity in preparing the sample we aim for an aspect ratio $l_s\/t >10$, where $l_s$ is the exposed length and $t$ the thickness of the sample.\n\n\subsubsection*{Mounting of the heater}\n\nOne of the main sources of inhomogeneous strain fields originates from the sample configuration in the AC heat-capacity setup. In order to transmit the heating power from the heater resistance to the sample, we use thermal contacts made by silver wires glued to the sample using silver-filled epoxy. Since the Young's modulus of silver and the sample are generally very different, as in our example of Sr$_2$RuO$_4$, the contacts create inhomogeneous strain fields. We tried to minimize this effect. We realized two different types of silver contacts to the sample. Figures \ref{heater_mounting}a and \ref{heater_mounting}c show photographs of the setups. In the first, a silver strip was glued on a contact length of about 300~$\mu$m on both edges to the sample using silver-filled epoxy. In the second, the thermal contact is divided into 4 smaller areas instead of a large one, by gluing 8 silver wires with a diameter of 50~$\mu$m on both edges. 
The total contact area in both cases is almost the same.\nThe experiments on our test sample Sr$_2$RuO$_4$ indeed showed a significant sharpening of the superconducting transition anomaly in the latter case.\n\n\nIn addition to the experiments we simulated the strain fields in the sample by a finite element method using a commercial software package.\footnote{Autodesk Inventor 2015, Autodesk Inc.}\nFor the simulation we set the Young's modulus of the sample to 180 GPa and the Poisson's ratio to 0.33. For the dimensions of the sample we used the values from the experiment, a thickness of 100~$\mu$m, width of 300~$\mu$m, and length of 2~mm. One of the sample ends was set to be fixed and the other end was subjected to a pressure of 0.18~GPa, leading to $\varepsilon_{xx}=0.1$\%. The silver and silver-filled epoxy were set to have a Poisson's ratio of 0.35. The Young's modulus for the silver epoxy was set to be $1\/3$ of that of the silver, which is 110~GPa. The results for both configurations are shown in Figs.\ \ref{heater_mounting}b and \ref{heater_mounting}d. The color bar shows the strain scale, ranging from 0.07 to 0.13\%. In the first configuration, with the silver strip glued with silver-filled epoxy on both edges to the sample, the strain inhomogeneity on the sample is greater than 60\% in the center (see Fig.\ \ref{heater_mounting}b).\nIn the second design using 8 silver wires with a diameter of 50~$\mu$m on both edges for the thermal contact, the strain inhomogeneity is strongly reduced in the bulk, except in the regions very close to the contact surfaces. Since heat capacity is a bulk-sensitive measurement, the inhomogeneity near the surface is negligible. The strain inhomogeneity in this configuration is only about $10$\% in the center region of the sample. This shows that it is highly desirable to have separated smaller contact areas to transmit the heat to the sample in order to reduce strain inhomogeneities in accordance with the experimental results. 
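As a quick consistency check on the simulation inputs (a minimal sketch, separate from the finite-element analysis itself), the stated pressure and Young's modulus reproduce the quoted nominal strain through Hooke's law, $\varepsilon_{xx}=\sigma\/E$:

```python
E = 180e9        # Pa, Young's modulus used for the sample in the simulation
sigma = 0.18e9   # Pa, uniaxial pressure applied to the free sample end

eps = sigma / E                  # dimensionless strain
print(f"eps_xx = {eps:.4%}")     # nominal strain of the simulated sample

# The color-bar limits of the simulation, 0.07% to 0.13%, then correspond
# to roughly a +-30% spread about this nominal 0.1% value.
```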
In the following we continued with the second configuration.\n\n\n\n\n\section{RESULTS}\n\nWe demonstrate the capabilities of our setup and discuss its advantages and limitations by showing representative data from experiments on Sr$_{2}$RuO$_{4}$. The first step in an AC heat-capacity experiment is to find a suitable measurement frequency in the plateau region of the frequency response curve $F(\omega)$. We note that the existence of this plateau depends on the respective characteristics of the setup as discussed in Sec.\ \ref{AC Heat Capacity}. If a suitable frequency has been found, temperature \MN{}{sweeps, also in applied magnetic field, or pressure\/magnetic field ramps} can be conducted and the heat capacity recorded. According to Eq.\ \ref{C_ac_prox} we will plot our results on Sr$_{2}$RuO$_{4}$ as $P\/[\omega T_{AC}(T)]$. As we will discuss in Sec.\ \ref{Cv}, $C_{AC}\approx P\/(\omega T_{AC})$ is not strictly valid in our setup and has to be treated with caution.\n\n\n\subsection{Measuring frequency}\n\n\begin{figure}[tb!]\n\includegraphics[width=0.95\linewidth]{frequency_effect}\n\centering\n\caption{(a) Frequency sweeps at 1 and 4.23~K.\n(b) Data recorded at 313~Hz for zero and small strains up to $\varepsilon_{xx}=-0.19$\%.\n(c) Data on Sr$_2$RuO$_4$ in the region around its superconducting transition at $\varepsilon_{xx}=-0.19$\% for different frequencies.\n }\n\label{frequency_effect}\n\end{figure}\n\n\n\nFigure \ref{frequency_effect}a shows the frequency response at 1 and 4.23~K in the case of our example Sr$_2$RuO$_4$ crystal. It shows a broad plateau between a few hundred hertz and several kilohertz at both temperatures, attesting that in principle heat-capacity measurements should be possible in the desired temperature range. 
By raising the temperature from 1 to 4.23~K the plateau narrows slightly but remains well-defined.\n\nIn the lower frequency part of the plateau in Fig.\ \ref{frequency_effect}a, temperature oscillations extend throughout the sample, and all three regions (the homogeneously strained center, the unstrained portions at the ends, and the regions where strain builds up) are probed in a measurement (see Fig.\ \ref{Scheme_thermalConductivity}b). Figure \ref{frequency_effect}b shows $P\/[\omega T_{AC}(T)]$ recorded at $f_{\rm exc}=313$~Hz for different $\varepsilon_{xx}$. At zero strain we see a single sharp transition anomaly at $T_c\approx1.45$~K. Upon increasing $|\varepsilon_{xx}|$ the step-like feature moves to higher temperatures, consistent with the increase in $T_c$ with strain,\cite{Hicks2014b} but a second feature remains at the original zero-strain transition. This latter feature stems from the unstrained part of the sample.\n\nTo reduce the size of the probed part of the sample and restrict it to the homogeneously strained region in the center, we increased the measurement frequency. We note that we still stay in the plateau region of the frequency response curve. To demonstrate the importance of this increase in measurement frequency, we applied a modest strain $\varepsilon_{xx}=-0.19$~\% and increased $f_{\rm exc}$ from 313~Hz in steps to 2503~Hz. The data are displayed in Fig.\ \ref{frequency_effect}c.\nAt 313 and 613~Hz, in addition to the peak at $\approx 1.65$~K corresponding to the transition in the central, strained, portion of the sample, a smaller peak is visible at $\approx 1.45$~K, corresponding to the transition in the end portions. This feature shows that temperature oscillations extend into the sample ends at these frequencies.\nTo avoid this, one has to work at the high end of the feasible range of frequencies. 
For this particular sample, a measurement frequency above $\sim1.5$~kHz was required.\nWorking at high frequencies with low enough power to avoid heating gives a very small signal, an r.m.s.\ thermocouple voltage of only $1 - 2$~nV. Therefore, the described low temperature passive amplification was employed to achieve an r.m.s.\ noise level of 20~${\rm pV\/\sqrt{Hz}}$, ensuring a signal-to-noise ratio in excess of 50.\n\n\n\n\n\subsection{Heat-capacity results on Sr$_2$RuO$_4$}\n\nBased on the considerations outlined in the previous section, we selected an excitation frequency of $f_{\rm exc}=1503$~Hz to measure the heat capacity of Sr$_2$RuO$_4$. The results for three different strains $\varepsilon_{xx}=0$\%, $-0.25$\%, and $-0.37$\% are presented in Fig.\ \ref{HC_Sr2RuO4} as $P\/[\omega T_{AC}(T)]$. Additionally, the inset shows the results from a standard relaxation-type heat-capacity measurement from a piece of sample cut from the same crystal. It is qualitatively similar to the results in the uniaxial pressure cell at zero strain.\n\n\begin{figure}[tb!]\n\includegraphics[width=0.95\linewidth]{HC_Sr2RuO4}\n\centering\n\caption{Recorded signal $P\/[\omega T_{AC}(T)]$ of Sr$_2$RuO$_4$ as function of temperature for three different strains $\varepsilon_{xx}=0$\%, $-0.25$\%, and $-0.37$\%. The inset shows a specific-heat experiment on a piece from the same crystal using a standard relaxation time method.\n}\n\label{HC_Sr2RuO4}\n\end{figure}\n\nAccording to Eq.\ \ref{C_ac_prox} we find $C_{AC}(T)\approx P\/[\omega T_{AC}(T)]$. However, this relation has to be taken with caution since the probed sample volume is not constant as a function of temperature. We have selected $f_{\rm exc}$ in order to probe the homogeneously strained portion of the sample, but we note that the thermal conductivity $\kappa$ of any studied material varies as a function of temperature and strain, and as a consequence the probed sample volume also changes. 
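The frequency dependence of the probed volume can be estimated from the thermal diffusion length $l_d=\sqrt{2\kappa\/(\omega c_v)}$ introduced in the next subsection; a minimal sketch in which the values of $\kappa$ and $c_v$ are assumed, order-of-magnitude inputs, not measured ones:

```python
import math

def diffusion_length_mm(f_hz, kappa, c_v):
    """Thermal diffusion length l_d = sqrt(2*kappa/(omega*c_v)) in mm,
    with kappa in W/(m K), c_v in J/(m^3 K), and omega = 2*pi*f the
    angular frequency of the temperature oscillation."""
    omega = 2 * math.pi * f_hz
    return math.sqrt(2 * kappa / (omega * c_v)) * 1e3

# Assumed order-of-magnitude low-temperature values for a metallic sample.
kappa = 10.0   # W/(m K)
c_v = 1e3      # J/(m^3 K)

for f in (313, 613, 1503, 2503):   # representative measurement frequencies, Hz
    print(f"{f:>5} Hz  l_d ~ {diffusion_length_mm(f, kappa, c_v):.2f} mm")

# l_d scales as 1/sqrt(f): going from 313 Hz to 2503 Hz shrinks the probed
# length by sqrt(2503/313) ~ 2.8, confining the measurement toward the
# homogeneously strained center of the sample.
```

With these assumed inputs $l_d$ drops from a few millimeters at the lowest frequency to about one millimeter at the highest, comparable to the exposed sample length, which is consistent with the observed suppression of the unstrained-end feature at high frequency.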
To obtain absolute values of the volume specific heat $c_v(T)$ at a certain strain, the temperature dependence of the thermal conductivity $\\kappa(T)$ has to be known at that strain as well.\n\n\n\n\\subsection{Determination of the volume specific heat}\\label{Cv}\n\nThe conversion between the measured signal and the volume or molar specific heat is trivial in a conventional setup, because the volume (or mass) of the sample is constant. In our measurements, the probed sample volume varies since the thermal diffusion length $l_d$, which depends on thermal conductivity, specific heat, and frequency, changes as a function of temperature. Therefore, it is nontrivial to convert our data $P\/[\\omega T_{AC}(T)]$ to the volume specific heat $c_v$. We start with an ideal case to demonstrate the relation between $P\/[\\omega T_{AC}(T)]$ and $c_v$ in the case of our experimental setup.\nSuppose that the heater contact is point-like in the center of a very narrow sample, such that the heat flow is one-dimensional, propagating to the left and to the right. 
The probed volume $V$ is equal to the cross-sectional area $A$ times twice the diffusion length $l_d$, which is a function of the angular frequency $\\omega$, the volume specific heat $c_v$, and the thermal conductivity $\\kappa$:\n\\begin{equation}\n l_d=\\sqrt{\\frac{2\\kappa(T)}{\\omega c_v(T)}}.\n \\label{S1}\n\\end{equation}\n$C_{AC}$ can be expressed as follows:\n\\begin{equation}\\label{S2}\n C_{AC}=c_v \\times V = c_v \\times A\\times 2l_d = \\frac{2A}{\\sqrt{\\omega}} \\sqrt{2\\kappa(T) c_v(T)}.\n\\end{equation}\nBy using Eqs.\\ \\ref{Cac} and \\ref{S2} we finally obtain the volume specific heat $c_v$:\n\\begin{equation}\\label{S3}\n c_{v}(T)=\\left (\\frac{P \\times F(\\omega)}{2A}\\right )^2 \\times \\frac{1}{\\omega\\times 2\\kappa(T)} \\times \\frac{1}{[T_{AC}(T)]^2}.\n\\end{equation}\nThis exemplifies the reciprocal dependence of $c_v(T)$ on the thermal conductivity and on the square of the temperature-oscillation amplitude in the case of a simplified one-dimensional model.\n\nWe further note that the excitation frequency in our current measurement is not too far away from the upper cut-off frequency, which describes the time scale for the heat propagating from the heater to the thermocouple. At this excitation frequency $F(\\omega)<1$ and depends on temperature, adding a further uncertainty to the determination of $c_v(T)$.\n\n\nThe validity of Eqs.\\ \\ref{S2} and \\ref{S3} is based on the above-mentioned assumptions that the heater contact is point-like and the heat flow is one-dimensional. In reality, both the sample width and the heater contact size are finite. This implies that, for the experimental setup to satisfy the assumptions of the examined model system, the exposed sample length ($l_s$) must be far longer than the heater length ($l_h$) and the sample width ($w_s$), $l_s \\gg l_h,w_s$. 
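As a numerical sanity check of this simplified one-dimensional model, the conversion chain of Eqs.\ \ref{S1}--\ref{S3} can be sketched in a few lines. All parameter values below ($P$, $A$, $\kappa$, $T_{AC}$, and $F(\omega)=1$) are illustrative placeholders, not measured quantities; the check verifies that the recovered $c_v$ reproduces the measured combination $C_{AC}=P/(\omega T_{AC})$ of Eq.\ \ref{S2}:

```python
import math

def diffusion_length(kappa, c_v, omega):
    # Eq. (S1): l_d = sqrt(2*kappa / (omega * c_v))
    return math.sqrt(2.0 * kappa / (omega * c_v))

def c_v_from_signal(P, T_ac, omega, kappa, A, F=1.0):
    # Eq. (S3): c_v = (P*F/(2A))^2 * 1/(2*omega*kappa) * 1/T_ac^2
    return (P * F / (2.0 * A)) ** 2 / (2.0 * omega * kappa * T_ac ** 2)

# illustrative placeholder values (SI units), not measured parameters:
omega = 2.0 * math.pi * 1503.0   # angular excitation frequency [rad/s]
P, A = 1e-6, 1e-7                # heater power [W], cross-sectional area [m^2]
kappa, T_ac = 5.0, 1e-3          # thermal conductivity [W/(m K)], oscillation amplitude [K]

c_v = c_v_from_signal(P, T_ac, omega, kappa, A)

# consistency with Eq. (S2): C_AC = (2A/sqrt(omega)) * sqrt(2*kappa*c_v)
# must equal the measured combination P/(omega*T_ac) when F(omega) = 1
C_ac = (2.0 * A / math.sqrt(omega)) * math.sqrt(2.0 * kappa * c_v)
assert math.isclose(C_ac, P / (omega * T_ac), rel_tol=1e-9)
```

The probed volume itself, $V=2A\,l_d$, follows from `diffusion_length`, which makes explicit that the measured signal weighs a temperature- and strain-dependent sample volume.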
Our present setup is already a good approximation to an ideal configuration but could in principle be further optimized.\n\nIn spite of the above caveats, we note that in some cases quantitative statements on the evolution of the specific heat under varying uniaxial pressure are possible based on the presently accessible data. For example, in superconductors, as in the case of Sr$_2$RuO$_4$, it is possible to obtain information on the evolution of the size of the superconducting transition anomaly with pressure, which is an important quantity characterizing superconductivity. In that case the thermal conductivity does not show any abrupt change across the transition, and close to $T_c$\n\\begin{equation}\\label{S4}\n \\frac{c_{v}^s}{c_{v}^n}=\\frac{\\kappa_n}{\\kappa_s} \\times \\left(\\frac{T_{AC}^n}{T_{AC}^s}\\right)^2 \\approx \\left(\\frac{T_{AC}^n}{T_{AC}^s}\\right)^2\n\\end{equation}\nwith $\\kappa_n\\approx\\kappa_s$.\nThe indices $s$ and $n$ indicate the corresponding values in the superconducting and in the normal state, respectively.\n\n\n\\section{CONCLUSION}\n\nWe have developed a new experimental setup using piezoelectric-driven uniaxial pressure cells for probing heat capacity at low temperatures. By optimizing our preparation and measurement processes, we achieve an extremely high resolution and a high strain homogeneity in the probed sample volume.\nThe technique can be easily extended to different temperature regions. 
In addition to temperature sweeps, heat capacity can be recorded as a function of applied pressure, and our apparatus is also fully compatible with work in magnetic fields.\n\n\n\n\n\n\\begin{acknowledgments}\nWe thank A.\\ S.\\ Gibbs, F.\\ Jerzembeck, N.\\ Kikugawa, Y.\\ Maeno, and D.\\ A.\\ Sokolov for providing and characterizing the samples, and M.\\ Brando and U.\\ Stockert for experimental support.\n\\end{acknowledgments}\n\n\\section*{DATA AVAILABILITY}\nThe data that support the findings of this study are available from the corresponding author upon reasonable request.\n\n\n\\section*{REFERENCES}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe ubiquitous appearance of translation symmetry in physical systems signals the importance of having a complete picture of the complex role it may play. In particular, although the ground state energy (associated with time-translation symmetry) of a many-body quantum system or a quantum field theory is frequently studied, the ground state \\textit{momentum} (associated with space-translation symmetry) is rarely discussed. Rather, in most cases one focuses on the momentum difference between excited states and the ground state. In this work we reveal a connection between the momentum and the entanglement structure of a quantum state, in the context of lattice spin (boson) systems:\n\\begin{theorem}\nIf a quantum state $|\\Psi\\rangle$ in a lattice spin (boson) system is an eigenstate of the lattice translation operator $T:|\\Psi\\rangle\\to e^{iP}|\\Psi\\rangle$ with a non-trivial momentum $e^{iP}\\neq1$, then $|\\Psi\\rangle$ must be long-range entangled, namely $|\\Psi\\rangle$ cannot be transformed to an un-entangled product state $|000...\\rangle$ through an adiabatic evolution or a finite-depth quantum circuit (local unitary). 
\n\n\n\\label{Thm}\n\\end{theorem}\n\nThe intuition behind this statement follows from the sharp difference between translation $T$ and an ordinary onsite symmetry $G$ that is defined as a tensor product of operators acting on each lattice-site (such as the electromagnetic $U(1)$). A product state may recreate any total symmetry charge $Q$ under $G$ by simply assigning individual local Hilbert space states to carry some charge $Q_\\alpha$ such that $Q=\\sum_{\\alpha}Q_\\alpha$. However in the case of non-onsite translation symmetry, all translation-symmetric product states, which take the form $|\\alpha\\rangle^{\\otimes L}$, can only carry trivial charge (lattice momentum). This suggests that non-trivial momentum is an inherently non-local quantity that cannot be reproduced without faraway regions still retaining some entanglement knowledge of each other, i.e. the state must be long-range entangled.\n\nIn condensed matter physics, we are often interested in ground states of translational-invariant local Hamiltonians. If the ground state is short-range entangled~\\cite{PhysRevB.82.155138} (SRE) in the sense that it is connected to a product state through a finite-depth (FD) quantum circuit, then we expect the ground state to be unique, with a finite gap separating it from the excited states. In contrast for long-range entangled~\\cite{PhysRevB.82.155138} (LRE) ground states, we expect certain ``exotic'' features: possible options include spontaneous symmetry-breaking cat states (e.g. GHZ-like states), topological orders (e.g. fractional quantum Hall states), and gapless states (e.g. metallic or quantum critical states). Theorem~\\ref{Thm} provides us an opportunity to explore the interplay between translation symmetry and the above modern notions. An immediate corollary is\n\\begin{corollary}\nIf a non-zero momentum state $|\\Psi\\rangle$ is realized as a ground state of a local spin Hamiltonian, then the ground state cannot be simultaneously unique and gapped. 
Possible options include (1) gapless spectrum, (2) intrinsic topological order, and (3) spontaneous translation symmetry breaking.\n\\end{corollary}\nIn fact, we show in Sec.~\\ref{sec:wCDW} that option (2) is a special subset of option (3) through the mechanism of ``weak symmetry-breaking\"~\\cite{Kitaev06}.\n\nOur result is reminiscent of the celebrated Lieb-Schultz-Mattis-Oshikawa-Hastings (LSMOH) theorems \\cite{LSM,Oshikawa1999,Hastings04}, which state that in systems with charge $U(1)$ and translation symmetries, a ground state with fractional $U(1)$ charge filling (per unit cell) cannot be SRE. In our case the non-trivial lattice momentum $e^{iP}\\neq1$ plays a role very similar to that of the fractional charge density in LSMOH. In fact, as we discuss in Sec.~\\ref{sec: LSMOH}, our theorem can be viewed as a more basic version of LSMOH that only involves translation symmetry, from which the standard LSMOH can be easily derived. As a by-product, we also discover a previously unknown version of the LSMOH constraint that involves an onsite $\\mathbb{Z}_n$ symmetry and lattice translations.\n\nThe rest of this paper is structured as follows: in Sec.~\\ref{sec:proof} we provide a proof of Theorem~\\ref{Thm} via a quantum circuit approach, and generalize it to fermion systems. Three consequences of Theorem~\\ref{Thm} are discussed in Sec.~\\ref{sec:consequences}: in Sec.~\\ref{sec: LSMOH} we discuss several LSMOH-type theorems; in Sec.~\\ref{sec:wCDW} we show that a gapped topological order must \\textit{weakly} break translation symmetry if one of its ground states on the torus has nonzero momentum -- this is a generalization of the Tao-Thouless physics in the fractional quantum Hall effect~\\cite{PhysRevB.28.1142,PhysRevB.77.155308}; in Sec.~\\ref{sec:cSPT} we discuss the implication of Theorem~\\ref{Thm} for the classification of crystalline symmetry-protected topological (SPT) phases. 
We end with some discussions in Sec.~\\ref{sec:discussions}.\n\n\n\\section{Proof}\n\\label{sec:proof}\n\nIn this section we prove that SRE states necessarily possess trivial momentum, conversely implying that all non-trivial momentum ground states must be LRE. The approach that we take utilizes the quantum circuit formalism, which is equivalent to the usual adiabatic Hamiltonian evolution formulation~\\cite{doi:10.1126\/science.273.5278.1073,PhysRevB.82.155138} but conceptually cleaner. In particular we will harness the causal structure of quantum circuits, which will allow us to `cut and paste' existing circuits to create useful new ones.\n\nWe shall first prove Theorem~\\ref{Thm} in one space dimension, from which the higher-dimensional version follows immediately.\n\n\\subsection{Proof in $1$d}\n\\label{sec:1dproof}\n\nFirst let us specify our setup more carefully. We consider a spin (boson) system with a local tensor product Hilbert space $\\mathcal{H}=\\otimes_i\\mathcal{H}_i$ where $\\mathcal{H}_i$ is the local Hilbert space at unit cell $i$. The system is put on a periodic ring with $L$ unit cells so $i\\in\\{1,2...L\\}$. In each unit cell the Hilbert space $\\mathcal{H}_i$ is $q$-dimensional ($q$ does not depend on $i$), with a basis labeled by $\\{|a_i\\rangle_i\\}$ ($a_i\\in\\{0,1...q-1\\}$). The translation symmetry is implemented by a unitary operator that is uniquely defined through its action on the tensor product basis \n\\begin{eqnarray}\n\\label{eq:Tboson}\n T:\\hspace{5pt} & & |a_1\\rangle_1\\otimes|a_2\\rangle_2\\otimes...|a_{L-1}\\rangle_{L-1}\\otimes|a_L\\rangle_L \\nonumber \\\\ & &\\longrightarrow |a_L\\rangle_1\\otimes|a_1\\rangle_2\\otimes...|a_{L-2}\\rangle_{L-1}\\otimes|a_{L-1}\\rangle_L.\n\\end{eqnarray}\nUnder this definition of translation symmetry (which is the usual definition), we have\\footnote{Importantly, we are not dealing with translation under twisted boundary condition, in which case $T^L=g$ for some global symmetry $g$. 
Many of our conclusions in this work need to be rephrased or reexamined for such twisted translations.} $T^L=1$ and any translation-symmetric product state $|\\varphi\\rangle^{\\otimes L}$ has trivial lattice momentum $e^{iP}=1$.\n\nNow consider a SRE state $|\\Psi_{P(L)}\\rangle$ with momentum $P(L)$. By SRE we mean that there is a quantum circuit $U$ with depth $\\xi\\ll L$ that sends $|\\Psi_{P(L)}\\rangle$ to the product state $|\\mathbf{0}\\rangle\\equiv |0\\rangle^{\\otimes L}$ (we do not assume $U$ to commute with translation). The depth $\\xi$ will be roughly the correlation length of $|\\Psi_{P(L)}\\rangle$. Our task is to prove that $P(L)=0$ mod $2\\pi$ as long as $\\xi\\ll L$. Notice that this statement is in fact stronger than that for a FD circuit, which requires $\\xi\\sim O(1)$ as $L\\to \\infty$. For example, our result holds even if $\\xi\\sim {\\rm PolyLog}(L)$, which is relevant if we want the quantum circuit to simulate an adiabatic evolution more accurately~\\cite{Haah2018}. Our result is also applicable if the existence of $U$ requires extra ancilla degrees of freedom (DOF) that enlarge the onsite Hilbert space to $\\tilde{\\mathcal{H}}_i$ with dimension $\\tilde{q}>q$ (for example see Ref.~\\cite{ElsePoWatanabe2019}), since ancilla DOFs by definition come in product states and therefore cannot change the momentum.\n\nThe proof will be split into two steps: in \\textit{Step 1} we first prove that the momentum is trivial for all $L=mn$ where $m,n\\in\\mathbb{Z}^+$ are mutually coprime satisfying $m,n\\gg\\xi$. In \\textit{Step 2} we use the results of \\textit{Step 1} to show that this may be extended to all other lengths.\n\n\n\n\\underline{\\textit{Step 1:}} A key ingredient of the proof is to recognize that the entanglement structure of the SRE state $|\\Psi_{P(L)}\\rangle$ on system size $L=mn$, where $m,n\\in\\mathbb{Z}^+$ and $n\\gg\\xi$, is adiabatically connected to that of $m$ identical unentangled length-$n$ SRE systems. 
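The two facts used here -- that translation-symmetric product states carry trivial momentum, while nontrivial-momentum eigenstates can be built as plane-wave superpositions of translates -- can be checked by brute force on a tiny ring. The sketch below (illustrative sizes $q=2$, $L=4$; the dense-matrix construction is a convenience for the check, not the circuit machinery of the proof) builds the translation operator of Eq.~\ref{eq:Tboson} explicitly:

```python
import numpy as np
from itertools import product

q, L = 2, 4  # local dimension and ring length (illustrative)

basis = list(product(range(q), repeat=L))
pos = {b: i for i, b in enumerate(basis)}

# translation T|a_1 a_2 ... a_L> = |a_L a_1 ... a_{L-1}>, built as a permutation matrix
T = np.zeros((q**L, q**L))
for b in basis:
    T[pos[b[-1:] + b[:-1]], pos[b]] = 1.0

# any translation-symmetric product state |phi>^{otimes L} has trivial momentum e^{iP}=1
phi = np.array([1.0, 1.0]) / np.sqrt(2)
prod_state = phi
for _ in range(L - 1):
    prod_state = np.kron(prod_state, phi)
assert np.allclose(T @ prod_state, prod_state)

# a nontrivial-momentum eigenstate (e^{iP} = i for P = 2*pi/L, L = 4), built as a
# plane-wave superposition of translates of a single basis state
b0 = np.zeros(q**L)
b0[pos[(1, 0, 0, 0)]] = 1.0
P = 2 * np.pi / L
psi = sum(np.exp(-1j * P * x) * (np.linalg.matrix_power(T, x) @ b0) for x in range(L))
psi /= np.linalg.norm(psi)
assert np.allclose(T @ psi, np.exp(1j * P) * psi)
```

The momentum eigenstate constructed at the end is a W-type superposition, in line with the theorem: its momentum $e^{iP}=i$ cannot be carried by any SRE state.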
The existence of such an adiabatic deformation, which is of a similar flavour to those presented in Refs.~\\cite{PhysRevX.7.011020} and \\cite{PhysRevB.96.205106}, is due to the finite correlation length of SRE systems, and will be explained in the following paragraph.\n\n\n\nTake the SRE state $|\\Psi_{P(L)}\\rangle$ placed on a periodic chain of length $L=mn$ with $m,n\\in\\mathbb{Z}^+$ and $n\\gg\\xi$. Let us try to decouple this system at some point (say between sites $i$ and $i+1$) via an adiabatic evolution, creating an `open' chain.\nTo show that such a decoupling cut exists, we use the fact that SRE states always have a FD quantum circuit $U$ that sends the ground state to the $|\\mathbf{0}\\rangle\\equiv |0\\rangle^{\\otimes L}$ product state (see Fig.~\\ref{fig:unitaries}(a)). The appropriate cut is then created by modifying this circuit to form a new lightcone-like FD quantum circuit $\\tilde{U}$ with all unitaries outside the `lightcone', i.e. those that do not affect the transformation that sends the two sites $i$ and $i+1$ to $|0\\rangle$, set to identity (see Fig.~\\ref{fig:unitaries}(b)). Such a modified circuit would span $\\sim\\xi$ qudits on either side of the cut and by construction takes the two sites on either side of the cut to $|0\\rangle$, thus completely removing any entanglement across the link\\footnote{This can be better understood in reverse: consider the state constructed by $\\tilde{U} U^\\dag|\\mathbf{0}\\rangle$ ($=\\tilde{U}|\\Psi_{P(L)}\\rangle$), which never directly couples qudits on either side of the cut. Thus $\\tilde{U}$ can be understood as completely removing the entanglement across the applied link.}.\nLet us concretely take $\\tilde{U}^{[0]}$ to denote the appropriate lightcone cut between the last and first qudit (recall that we are on a ring), and define the shifted adiabatic cut between the $(x-1)$th and $x$th qudits to be $\\tilde{U}^{[x]}\\equiv T^x \\tilde{U}^{[0]}T^{-x}$. 
If the ground state is translation-symmetric, we have $\\tilde{U}^{[x]}|\\Psi_{P(L)}\\rangle=e^{-i x P(L)}T^x\\tilde{U}^{[0]}|\\Psi_{P(L)}\\rangle$,\nso we see that $\\tilde{U}^{[x]}$ performs the same cut (up to a phase factor) at any link. By construction this means that the local density matrix of a region surrounding the cut obeys $\\rho_{lr}=\\rho_l\\otimes|00\\rangle\\langle00|\\otimes\\rho_r$, where the region to the left (right) of the cut is denoted $l$ ($r$), which in turn implies that the operations $\\tilde{U}^{[x]}$ disentangle the system along that cut.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.98\\columnwidth]{unitaries.pdf}\n \\caption{\\label{fig:unitaries} (Color online) Depiction of finite-depth quantum circuits applied on $|\\Psi_P\\rangle$. Here qudits are depicted as solid circles while unitaries are depicted as rectangles. (a) A SRE state $|\\Psi_P\\rangle$ is always connected to the $|\\mathbf{0}\\rangle$ trivial state via a FD quantum circuit $U$. From $U$ a lightcone-like `adiabatic cut' $\\tilde{U}$ can be created (framed in blue). (b) $\\tilde{U}$ connects $|\\Psi_P\\rangle$ to a state that is completely decoupled across the cut.}\n\\end{figure}\n\nThe cutting procedure may be simultaneously applied to two separate links, as long as they are separated by a distance much greater than the correlation length. With this in mind, let us identically apply the cut on an $L=mn$ length system with a cut after every $n$th qudit, as depicted in Fig.~\\ref{fig:adiabaticcutting}, via the FD quantum circuit $\\tilde{U}^{[0]}\\tilde{U}^{[n]}...\\tilde{U}^{[(m-1)n]}$. Since the adiabatic deformation fully disentangles the system across the cuts, the resulting state should take the form $|\\tilde{\\Psi}_1\\rangle\\otimes|\\tilde{\\Psi}_2\\rangle\\otimes...|\\tilde{\\Psi}_m\\rangle$ where each $|\\tilde{\\Psi}_i\\rangle$ is an $n$-block SRE state.\n\nNow let us examine the symmetries of this resultant system. 
The original $\\mathbb{Z}_{mn}$ translation symmetry, generated by operator $T$, of the original system is broken by the adiabatic deformation. However the $\\mathbb{Z}_m$ translation symmetry subgroup, generated by operator $T^n$, is preserved since by construction identical cuts occurs at every $n$th junction. This immediately implies that all the $n$-block states are identical $|\\tilde{\\Psi}_i\\rangle=|\\tilde{\\Psi}\\rangle$ and the total state after the cut is simply $|\\tilde{\\Psi}\\rangle^{\\otimes m}$.\nThus we know that the original $\\mathbb{Z}_m$ quantum number is the same as the final one which must be trivial since we are dealing with an $n$-block product state $|\\tilde{\\Psi}\\rangle^{\\otimes m}$. This implies\n\\begin{align}\nnP(L)=0\\mod 2\\pi\\quad,\n\\label{eq:LP}\n\\end{align}\n$\\forall L=mn$ with $m,n\\in\\mathbb{Z}^+$ and $n\\gg\\xi$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{cutting_chains.pdf}\n \\caption{\\label{fig:adiabaticcutting} (Color online) Illustration of the adiabatic cutting procedure on a periodic length $L=mn$ chain. Here we take $m=4$ example to demonstrate how four identical cuts, applied by $\\tilde{U}$ (blue rectangle) at every $n$th link, on a length $L=4n$ state $|\\Psi_{P(L)}\\rangle$ (purple circle) produces four decoupled length $n$ SRE states.}\n\\end{figure}\n\nUsing this relation on a general system length $L=p_1^{q_1}p_2^{q_2}...p_d^{q_d}$ (here we are using prime factorisation notation) we arrive at the condition\n\\begin{align}\nP(L)=0\\mod \\frac{2\\pi}{p_1^{r_1}p_2^{r_2}...p_d^{r_d}}\\quad,\n\\end{align}\n$\\forall{r_i\\in\\{1,...,q_i\\}}$ such that $p_1^{r_1}p_2^{r_2}...p_d^{r_d}\\gg\\xi$. 
If $L$ factorises into at least two mutually coprime numbers $m,n$ with $m,n\\gg\\xi$, then these conditions can only be satisfied if\n\\begin{align}\nP(L)=0\\mod 2\\pi\\quad,\n\\label{eq:zeroP}\n\\end{align}\nwhich is satisfied for almost all large enough $L$.\n\n\\underline{\\textit{Step 2:}} There is a sparse set of cases for which \\textit{Step 1} does not enforce trivial momentum, the most notable being when $\\tilde{L}=p^{q}$ with $p$ prime and $q\\in\\mathbb{Z}^+$. Factorisations such as $\\tilde{L}=p_1^{q_1}p_2$ are also not covered if $p_1^{q_1}\\not\\gg \\xi$.\n\nTo show that these cases also possess trivial momentum, once again take a SRE state $|\\Psi_{P(L)}\\rangle$ on a general length $L$ system with momentum $P(L)\\mod 2\\pi$. By the definition of a SRE state, there exists a FD quantum circuit $V_{L}$ such that $|\\Psi_{P(L)}\\rangle=V_L|\\mathbf{0}\\rangle$. This circuit obeys $TV_LT^\\dag|\\mathbf{0}\\rangle=e^{iP(L)}V_L|\\mathbf{0}\\rangle$, meaning that it boosts the trivial momentum of the $|\\mathbf{0}\\rangle$ state by $P(L)\\mod 2\\pi$. Consider the composed circuit\n\\begin{align}\n \\left(TV_L^\\dag T^\\dag\\right) V_L|\\mathbf{0}\\rangle=e^{-iP(L)}|\\mathbf{0}\\rangle\\quad.\n \\label{eq:boostL}\n\\end{align}\nAs may be understood via the causality structure, the phase $e^{-iP(L)}$ will come piecewise from lightcone circuits. Let us understand this in detail: split $\\tilde{V}_L\\equiv TV_L^\\dag T^\\dag V_L$ into a light-cone circuit $\\tilde{V}_{L,1}$ and a reverse lightcone circuit $\\tilde{V}_{L,2}$ such that $\\tilde{V}_L=\\tilde{V}_{L,1}\\tilde{V}_{L,2}$, as depicted in Fig.~\\ref{fig:UgeneralL}. 
The causal structure of the light cone guarantees that a gate $U_1$ in $\\tilde{V}_{L,1}$ and a gate $U_2$ in $\\tilde{V}_{L,2}$ must commute if $U_2$ appears in a layer after $U_1$, which then allows for the decomposition $\\tilde{V}_L=\\tilde{V}_{L,1}\\tilde{V}_{L,2}$.\nAlthough the exact form of this decomposition is quite malleable, for concreteness let us define $\\tilde{V}_{L,1}$ to be constructed causally such that the 1st (lowest) layer consists of a single 2-qudit gate (as seen in Fig.~\\ref{fig:UgeneralL}).\n$\\tilde{V}_{L,1}$ will have support over qudits in the range $[L-\\eta,L]$, where by the SRE nature $\\eta\\ll L$. By construction we see that\n\\begin{align}\n \\tilde{V}_{L,2} |\\mathbf{0}\\rangle=|0...0\\rangle^{[1,L-\\eta-1]}\\otimes|\\alpha\\rangle^{[L-\\eta,L]}\n\\end{align}\nfor some $|\\alpha\\rangle$. Eq.~\\ref{eq:boostL} then requires\n\\begin{align}\n \\tilde{V}_{L,1} |\\alpha\\rangle=e^{-iP(L)}|0...0\\rangle^{[L-\\eta,L]}\\quad.\n \\label{eq:circ1tildeU}\n\\end{align}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{UgeneralLf.pdf}\n \\caption{\\label{fig:UgeneralL} (Color online) Illustration of splitting $TV_L^\\dag T^\\dag V_L=\\tilde{V}_{L,1}\\tilde{V}_{L,2}$ with $\\tilde{V}_{L,1}\\tilde{V}_{L,2}|\\mathbf{0}\\rangle=e^{-iP(L)}|\\mathbf{0}\\rangle$. Here we have taken a snapshot of the circuit to focus on $\\tilde{V}_{L,1}$ (framed in blue); however, the support of $\\tilde{V}_{L,1}$ (in the depicted example 16 qudits) is actually much smaller than the system length. Recall that the circuit is periodic such that the orange arrows, corresponding to components of $\\tilde{V}_{L,2}$ (framed in orange), eventually connect on the far side of the ring.\n }\n\\end{figure}\n\nNow we will extend the circuit $V_L$ from length $L$ to $nL$ for some $n\\in\\mathbb{Z}^+$, where $n,L$ are coprime and $\\gg\\xi$, and denote this extended circuit $V_{nL}$. 
To do this, we simply unstitch the circuit $V_L$ at some link and reconnect the ends of $n$ consecutive copies of this unstitched $V_L$ circuit to create a FD quantum circuit $V_{nL}$. Let us see what happens to $\\tilde{V}_{nL}\\equiv TV_{nL}^\\dag T^\\dag V_{nL}$ by once again splitting the circuit into two parts, $\\tilde{V}_{nL}=\\tilde{V}_{nL,1}\\tilde{V}_{nL,2}$, where $\\tilde{V}_{nL,k}=\\prod_{j=0}^{n-1} T^{jL}\\tilde{V}_{L,k}(T^\\dag)^{jL}$ with $k\\in\\{1,2\\}$. By construction and due to the SRE nature of the state,\n\\begin{align}\n \\tilde{V}_{nL,2} |\\mathbf{0}\\rangle^{\\otimes n}=\\left(|0...0\\rangle^{[1,L-\\eta-1]}\\otimes|\\alpha\\rangle^{[L-\\eta,L]}\\right)^{\\otimes n}\\quad.\n\\end{align}\nHowever, by Eq.~\\ref{eq:circ1tildeU}, we have\n\\begin{align}\n \\tilde{V}_{nL,1} \\tilde{V}_{nL,2} |\\mathbf{0}\\rangle^{\\otimes n}=e^{-inP(L)}|\\mathbf{0}\\rangle^{\\otimes n}\\quad,\n \\label{eq:circ1tildeUnL}\n\\end{align}\nso this implies\n\\begin{align}\n TV_{nL} T^\\dag|\\mathbf{0}\\rangle^{\\otimes n} =e^{inP(L)}V_{nL}|\\mathbf{0}\\rangle^{\\otimes n}\\quad,\n \\label{eq:boostnL}\n\\end{align}\nwhich means that $V_{nL}$ boosts the momentum of $|\\mathbf{0}\\rangle$ on a length $nL$ system to a state with momentum $P(nL)=nP(L)\\mod 2\\pi$. In \\textit{Step 1} we showed that $P(nL)=0\\mod 2\\pi$, so this implies $nP(L)=0\\mod2\\pi$. Since this holds for two mutually coprime values of $n$, one concludes that $1$d SRE translation-symmetric states have $P(L)=0\\mod2\\pi$ for all $L\\gg\\xi$.\n\n\n\n\n\n\\subsection{Higher-dimensional extension}\n\nOur result can be extended to higher dimensions. Consider a $d$-dimensional lattice system and a state $|\\Psi\\rangle$ that has nontrivial momentum $P$ along, say, the $\\hat{x}$ direction. We can view the state as a 1d state along the $\\hat{x}$ axis, with an enlarged Hilbert space per unit cell (generally exponentially large in $\\prod_i L_i$ with $i$ denoting the transverse directions). 
A finite-depth quantum circuit of the $d$-dimensional system will also be a finite-depth quantum circuit when viewed as a $1$d circuit along the $\\hat{x}$-direction (a proof and a somewhat subtle example are presented in Appendix~\\ref{app:higherdSRE}; the converse is not true but that does not concern us here). This immediately implies that a SRE state on the $d$-dimensional system must also be SRE when viewed as a $1$d state along $\\hat{x}$. What we proved in Sec.~\\ref{sec:1dproof} thus implies that the non-trivial momentum state $|\\Psi\\rangle$ must be long-range entangled. In particular, imposing locality in the transverse directions will only further restrict the possible FD circuits, and will certainly not lead to possibilities beyond the $1$d proof. This completes the proof of Theorem~\\ref{Thm}.\\hfill$\\blacksquare$\n\n\n\\subsection{Fermion systems}\n\\label{sec:Fermions}\n\n\n\nIt is not difficult to generalize our Theorem~\\ref{Thm} to fermionic systems. The only subtlety is that the usual definition of translation symmetry in fermion systems has an extra $\\mathbb{Z}_2$ sign structure compared to the naive implementation in Eq.~\\ref{eq:Tboson}. Instead of specifying the sign structure in the tensor product basis as in Eq.~\\ref{eq:Tboson}, it is more convenient to define the translation operator through $Tc_{i,\\alpha}T^{-1}=c_{i+1,\\alpha}$, where $c_{i,\\alpha}$ is a fermion operator in unit cell $i$ with some internal index $\\alpha$, and $c_{L+1,\\alpha}=c_{1,\\alpha}$. This operator relation, together with $T|\\mathbf{0}\\rangle=|\\mathbf{0}\\rangle$ for the fermion vacuum, uniquely determines the action of $T$ on any state. Now consider a product state $|\\varphi\\rangle^{\\otimes L}$: it is easy to verify that the momentum is $e^{iP}=1$ for odd $L$ and $e^{iP}=\\pm1$ for even $L$, where the sign is the fermion parity on each site, $\\langle\\varphi|(-1)^{\\sum_\\alpha c^{\\dagger}_{\\alpha}c_{\\alpha}}|\\varphi\\rangle$. 
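The parity statement can be made concrete for a single fermion mode per site: writing a Fock state as an ordered product of creation operators, the relation $Tc_iT^{-1}=c_{i+1}$ moves the operator of the last site past all others, giving $T|n_1...n_L\rangle=(-1)^{n_L(N-n_L)}|n_L n_1...n_{L-1}\rangle$ with $N$ the total fermion number. A minimal check on the fully filled product state (site parity $-1$) then reproduces the momenta quoted above:

```python
# sign structure of fermionic translation on occupation-number states:
# T c_i T^{-1} = c_{i+1} implies, for a single mode per site,
# T|n_1 ... n_L> = (-1)^{n_L (N - n_L)} |n_L n_1 ... n_{L-1}>, N = total fermion number
def translate_fock(occ):
    n_last, N = occ[-1], sum(occ)
    sign = (-1) ** (n_last * (N - n_last))  # reordering sign from anticommutation
    return sign, occ[-1:] + occ[:-1]

for L in range(2, 9):
    filled = (1,) * L  # the translation-symmetric product state |1>^{otimes L}
    sign, shifted = translate_fock(filled)
    assert shifted == filled
    # momentum e^{iP} = +1 for odd L; -1 for even L (matching the per-site parity -1)
    assert sign == (1 if L % 2 == 1 else -1)
```

The empty chain and any single-fermion-free product state return sign $+1$, consistent with the trivial-momentum case of the theorem.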
We can then go through the proof in Sec.~\\ref{sec:proof}, but now with fermion-parity-preserving FD quantum circuits, and conclude the following:\n\\begin{theorem}\nAny short-range entangled translation eigenstate $|\\Psi\\rangle$ in a lattice fermion system must have momentum (say in the $x$-direction) $e^{iP_x}=1$ if $L_x$ is odd, and $e^{iP_x}=\\pm1$ if $L_x$ is even. States violating this condition must in turn be long-range entangled.\n\\label{FermionThm}\n\\end{theorem}\nThe details of the proof are presented in Appendix~\\ref{app:fermionproof}.\n\nUsing the same proof technique, we can extend the above result further in various directions. We mention two such extensions without going into the details: (1) for $L_x$ even, the option of $e^{iP_x}=-1$ is possible for a SRE state only if $V\/L_x$ is odd ($V=L_xL_y...$ being the volume); (2) if the total fermion parity is odd in a system with even $V$, then any translation eigenstate must be LRE.\n\n\n\\section{Consequences}\n\\label{sec:consequences}\n\nOne of the beauties of Theorem~\\ref{Thm} lies in the non-trivial consequences that easily follow. For this section, it is useful to introduce an alternative, but equivalent, formulation of Theorem~\\ref{Thm}:\n\\begin{manualtheorem}{1}[Equivalent]\nIf there exists a finite-depth local unitary that boosts a state's momentum to a different value (mod $2\\pi$), then the state is necessarily long-range entangled.\n\\end{manualtheorem}\nThe equivalence of this new formulation with the one introduced in Sec.~\\ref{sec:intro} can be understood as follows: if all translation-symmetric SRE states possess trivial momentum, then non-trivial momentum states must be LRE. Thus if there exists a finite-depth local unitary that can boost a state's momentum to a different value, then at least one of the original and final states possesses non-trivial momentum and must be LRE. 
The other state is connected to the LRE state via a finite-depth local unitary and thus must also be LRE. The converse follows by contradiction: assume there exists a SRE state that has non-trivial momentum. Such a state (by definition of SRE) is connected via a FD local unitary to the translation-symmetric direct-product state $|\\alpha\\rangle^{\\otimes L}$, which in turn has trivial momentum. Since there now exists a FD local unitary that boosts the momentum to a different value, this implies that the original state was LRE, which leads to a contradiction.\n\nThis equivalent formulation allows for a direct test for long-range entanglement that we will demonstrate on known and previously unknown LSM theorems, and on topological orders. In the following discussions we will mostly focus on spin (boson) systems for simplicity, but the results can be generalized quite readily to fermion systems as well.\n\n\n\\subsection{LSMOH constraints}\n\\label{sec: LSMOH}\n\n\nThe original Lieb-Schultz-Mattis (LSM) theorem~\\cite{LSM}, along with the extensions by Oshikawa~\\cite{Oshikawa1999} and Hastings~\\cite{Hastings04}, collectively referred to as LSMOH, and their descendants are powerful tools for understanding the low-energy nature of lattice systems. In one of its most potent forms the theorem states that systems with $U(1)$ and translational symmetry that have non-commensurate $U(1)$ charge filling must be `exotic', meaning that they cannot be SRE states. Since the conception of the original LSM theorem, the field has flourished rapidly with many extensions that impose similar simple constraints based on symmetry~\\cite{Nachtergaele2007,Watanabe14551,Po2017,LU2020168060,lu2017liebschultzmattis,PhysRevB.98.125120,PhysRevB.99.075143}, and connections to various fields of physics such as symmetry-protected topological (SPT) order and 't Hooft anomaly in quantum field theory~\\cite{Cheng2015,Jian2017,Cho2017,Metlitski2017,PhysRevB.101.224437,Ye2021}. 
These sorts of constraints also have immediate experimental consequences, as they provide general guidance in identifying candidate materials for exotic states such as quantum spin liquids~\\cite{Balents2010}. Thus, unsurprisingly, there has been a lot of interest in generating more LSMOH-like theorems that provide simple rules to find exotic states. In the following section we provide simple proofs of some known and previously unknown LSMOH theorems.\n\n\n\nThe first example we consider is the aforementioned non-commensurate 1d $U(1)\\times T$ LSM ($T$ denotes the translation symmetry). In this case there exists a local unitary momentum boost that is the large gauge transformation $U=e^{i\\frac{2\\pi }{L}\\sum x\\hat{n}_x}$, where $\\hat{n}_x$ is the local number operator at $x$. Notice that this transformation is an on-site phase transformation and thus a FD quantum circuit of depth 1. The commutation relation with translation is $TUT^\\dag=e^{i 2\\pi \\frac{\\hat{N}}{L}}U$ ($\\hat{N}$ being the total charge), which means that for non-commensurate filling $\\frac{\\hat{N}}{L}\\notin\\mathbb{Z}$, we may always boost the momentum by a non-trivial value $2\\pi \\frac{\\hat{N}}{L}$. Via the equivalent formulation of Theorem~\\ref{Thm}, this immediately implies that non-commensurate filling leads to a LRE state. 
This observation may be summarised as\n\\begin{corollary}\n{\\normalfont($U(1)\\times T$ LSM) A $1$d translation and $U(1)$ symmetric state that possesses non-commensurate $U(1)$ charge filling must be long-range entangled.}\n\\end{corollary}\nThe standard LSM theorem follows from this statement since we may now apply it to a \\textit{ground} state of a $1$d translation and $U(1)$ symmetric local spin Hamiltonian to show that the state must be either gapless or a spontaneously symmetry-broken cat state.\nNotice that, strictly speaking, the statement we proved differs slightly from the standard LSM theorem, in that we did not directly prove the vanishing of the energy gap. Rather we showed that any simultaneous eigenstate of translation and $\\hat{N}$ such that $\\langle\\hat{N}\\rangle\/L\\notin\\mathbb{Z}$ must be LRE. In principle we do not even need to assume the parent Hamiltonian to be translation or $U(1)$ symmetric, just that the state itself be translation and $U(1)$ symmetric. In fact the statement encompasses all states, not just the ground state, which is perhaps unsurprising since LRE is fundamentally a property of a state and not the Hamiltonian.\n\nThe higher-dimensional $U(1)\\times T$ LSMOH theorem may be proved following the same logic if $\\langle\\hat{N}\\rangle\/L_i\\notin\\mathbb{Z}$ for some direction $i$ (similar to what was done in Ref.~\\cite{Oshikawa1999}). For generic values of $L_i$ the above condition may not hold, and more elaborate arguments are needed (for example see Ref.~\\cite{YaoOshikawa2020}) which we will not discuss here. 
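As a concrete sanity check of the one-dimensional argument, the commutation relation $TUT^\dag=e^{i2\pi\hat{N}/L}U$ can be verified numerically: both sides act diagonally on the occupation basis, so it suffices to compare phases state by state. The sketch below is our illustration (not part of the original argument); the chain length $L$ is an arbitrary small value, hard-core boson configurations stand in for a generic bosonic chain, and the translation direction is chosen to match the sign convention above.

```python
import numpy as np
from itertools import product

L = 6  # chain length (illustrative choice)

def u_phase(b):
    # U = exp(i 2pi/L sum_x x n_x) is diagonal in the occupation basis
    return np.exp(2j * np.pi / L * sum(x * n for x, n in enumerate(b)))

for b in product([0, 1], repeat=L):   # hard-core boson configurations
    N = sum(b)                        # total charge of |b>
    # T translates occupations by one site, so T U T^dag acts on |b>
    # with the phase U assigns to the translated pattern
    lhs = u_phase(np.roll(b, 1))
    rhs = np.exp(2j * np.pi * N / L) * u_phase(b)
    assert np.isclose(lhs, rhs)

print("T U T^dag = exp(i 2pi N/L) U verified on all", 2**L, "basis states")
```

Note that the two sides agree exactly even with the mod-$L$ wraparound of the site index, because the mismatch in the exponent is a multiple of $2\pi$.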
\n\nOur proof of the LSM theorem has an appealing feature compared to the classic proof~\\cite{LSM}: we did not need to show that the state $|\\Omega'\\rangle=U|\\Omega\\rangle$ had excitation energy $\\sim O(1\/L)$ (relative to the ground state $|\\Omega\\rangle$); rather it suffices for us to show that $|\\Omega'\\rangle$ has a different lattice momentum compared to $|\\Omega\\rangle$, which is enough to establish the LRE nature of $|\\Omega\\rangle$. Next we shall use this simplifying feature to generalize the $U(1)\\times T$ LSM theorem to a new constraint involving only discrete $\\mathbb{Z}_n\\times T$ symmetries.\n\n\n\nLet us consider a spin chain ($1$d) with translation symmetry and an onsite $\\mathbb{Z}_n$ symmetry generated by $Z\\equiv\\otimes_iZ_i$ ($Z_i^n=1$). We consider the case when the system size $L=nM$ for some $M\\in\\mathbb{N}$, and study simultaneous eigenstates of the translation and $\\mathbb{Z}_n$ symmetries. If such a state is an unentangled product state $\\otimes_i|\\varphi\\rangle_i$, then by definition $Z=1$ when acting on this state, namely the state carries trivial $\\mathbb{Z}_n$ charge. This turns out to be true for any symmetric SRE state, which we now prove. Suppose a translation eigenstate $|\\Psi\\rangle$ has $Z|\\Psi\\rangle=e^{i2\\pi Q\/n}|\\Psi\\rangle$ for some $Q\\neq0$ (mod $n$). We can construct a local unitary which is a $\\mathbb{Z}_n$-analogue of the large gauge transform\n\\begin{equation}\nU=\\otimes_iZ_i^i,\n\\end{equation}\nwhere $i$ is the unit cell index. For system size $L=nM$ one can verify that $TUT^{-1}U^{\\dagger}=Z^{\\dagger}$. This means that the momentum of the twisted state $U|\\Psi\\rangle$ will differ from that of the untwisted $|\\Psi\\rangle$ by $\\langle\\Psi|Z^{\\dagger}|\\Psi\\rangle=e^{-i2\\pi Q\/n}\\neq1$. By the equivalent form of Theorem~\\ref{Thm} $|\\Psi\\rangle$ must be LRE. 
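The twist relation $TUT^{-1}U^{\dagger}=Z^{\dagger}$ is also easy to verify by direct computation. In the sketch below (our illustration; $n$ and $M$ are arbitrary small values) every operator involved is either diagonal or a permutation in the clock basis, so the identity reduces to a phase check on each basis state.

```python
import numpy as np
from itertools import product

n, M = 3, 2          # Z_n symmetry, system size L = n*M (illustrative)
L = n * M
w = np.exp(2j * np.pi / n)

def u(a):
    # phase of U = prod_i Z_i^i on the clock basis state |a_0 ... a_{L-1}>
    return w ** (sum(i * ai for i, ai in enumerate(a)) % n)

for a in product(range(n), repeat=L):
    # T U T^{-1} U^dag is diagonal; its phase on |a> must equal that of Z^dag
    lhs = u(np.roll(a, -1)) * np.conj(u(a))
    rhs = np.conj(w ** (sum(a) % n))
    assert np.isclose(lhs, rhs)

print("T U T^{-1} U^dag = Z^dag verified on all", n**L, "basis states")
```

The check only passes because $L$ is a multiple of $n$: the wraparound of the unit-cell index contributes a factor $w^{L a_0}=1$, exactly as in the proof above.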
We therefore have\n\\begin{corollary}\n\\label{ZnLSM}\n{\\normalfont($\\mathbb{Z}_n\\times T$ LSM)} A $1$d translation and $\\mathbb{Z}_n$ symmetric ground state that possesses non-trivial $\\mathbb{Z}_n$ charge on system lengths $L=nM$ for some $M\\in\\mathbb{N}$ cannot be short-range entangled, and thus is either gapless or a spontaneously symmetry-broken cat state.\n\\end{corollary}\n\nThe above statement also generalizes to higher dimensions if $L_i=nM$ for some direction $i$. For systems with $U(1)$ symmetry, we can choose to consider a $\\mathbb{Z}_L$ subgroup of the $U(1)$, and the above $\\mathbb{Z}_n\\times T$ LSM theorem leads to the familiar $U(1)\\times T$ LSMOH theorem.\n\n\nThe two LSM-type theorems discussed so far, together with our Theorem~\\ref{Thm}, can all be viewed as ``filling-type\" LSM theorems, in the sense that these theorems constrain a symmetric many-body state $|\\Psi\\rangle$ to be LRE when $|\\Psi\\rangle$ carries certain non-trivial quantum numbers, such as lattice momentum $e^{iP}\\neq1$, total $U(1)$ charge $Q\\notin L\\mathbb{Z}$ or total $\\mathbb{Z}_n$ charge $Q\\notin L\\mathbb{Z}\/n\\mathbb{Z}$.\n\nThere is another type of LSM theorem that involves projective symmetry representations in the onsite Hilbert space, the most familiar example being the spin-$1\/2$ chain with $SO(3)$ symmetry. Our Theorem~\\ref{Thm} can also be used to understand some (but possibly not all) of the projective symmetry types of LSM. Here we discuss one illuminating example with onsite $\\mathbb{Z}_2\\times\\mathbb{Z}_2$ symmetry in one dimension~\\cite{PhysRevB.83.035107,Ogata2019,Ogata2021}, such that the generators of the two $\\mathbb{Z}_2$ groups anti-commute when acting on the local Hilbert space: $X_iZ_i=-Z_iX_i$ (this can simply be represented by the Pauli matrices $\\sigma_x$, $\\sigma_z$). 
Now set the length $L=2N$ with odd $N$, and consider the three local unitaries $U_x=(\\mathds{1}\\otimes\\sigma_x)^{\\otimes N}$, $U_z=(\\mathds{1}\\otimes\\sigma_z)^{\\otimes N}$, and $U_{xz}=(\\sigma_x\\otimes\\sigma_z)^{\\otimes N}$. One can verify the commutation relations $TU_{x}T^\\dag=(-1)^{Q_x}U_x$, $TU_{z}T^\\dag=(-1)^{Q_z}U_z$, and $TU_{xz}T^\\dag=(-1)^{N+Q_x+Q_z} U_{xz}$, where $(-1)^{Q_x}$ and $(-1)^{Q_z}$ denote the total $\\mathbb{Z}_2$ charges associated with the two symmetry generators. These commutation relations imply that the momentum of any symmetric state $|\\Psi\\rangle$ will be boosted by $\\Delta P=\\pi$ by at least one of the three unitaries (if $Q_x$ and $Q_z$ are both even, then $N+Q_x+Q_z$ is odd since $N$ is odd); therefore $|\\Psi\\rangle$ must be LRE by Theorem~\\ref{Thm}.\n\n\n\n\n\\subsection{Topological orders: weak CDW}\n\\label{sec:wCDW}\n\n\nWe now consider an intrinsic (bosonic) topological order on a $d$-dimensional torus. By definition there will be multiple degenerate ground states, separated from the excitation continuum by a finite energy gap. If one of the ground states $|\\Psi_a\\rangle$ has a non-trivial momentum, say along the $\\hat{x}$ direction, then according to Theorem~\\ref{Thm} this state should be LRE even when viewed as a one-dimensional system in the $\\hat{x}$ direction (with the other dimensions $y,z...$ viewed as internal indices). Since there is no intrinsic topological order in one dimension, the only mechanism for the LRE ground state is spontaneous symmetry breaking. The lattice translation symmetry is the only relevant symmetry here -- all the other symmetries can be explicitly broken without affecting the LRE nature of $|\\Psi_a\\rangle$, since the state will still have nontrivial momentum. Therefore $|\\Psi_a\\rangle$ must be a cat state that spontaneously breaks the $\\hat{x}$-translation symmetry~\\footnote{Another way to see this is to note that a cat state is composed of individual SRE states. 
Since we have proven that translation-symmetric SRE states possess trivial momentum, it follows that the cat state may only achieve non-trivial momentum when the individual SRE states break translation symmetry, i.e. the cat state must correspond to translation symmetry breaking.}, also known as a charge density wave (CDW) state~\\cite{RevModPhys.60.1129}. Furthermore, any other ground state $|\\Psi_{b\\neq a}\\rangle$ can be obtained from $|\\Psi_a\\rangle$ by a unitary operator $U_{ba}$ that is non-local in the directions transverse to $\\hat{x}$, but crucially is local in $\\hat{x}$ -- for example in two dimensions $U_{ba}$ corresponds to moving an anyon around the transverse cycle. By Theorem~\\ref{Thm} we then conclude that $|\\Psi_b\\rangle$ is also a CDW in $\\hat{x}$. \n\nPerhaps the most familiar example of the above statement is the fractional quantum Hall effect. It is known that the $1\/k$ Laughlin state on the torus is adiabatically connected to a quasi-one-dimensional CDW state in the Landau gauge, also known as the Tao-Thouless state~\\cite{PhysRevB.28.1142,PhysRevB.77.155308}. For example, for $k=2$ the Tao-Thouless state with momentum $P=\\pi n$, in the Landau orbit occupation number basis, reads\n\\begin{equation}\n |101010...\\rangle+e^{i\\pi n}|010101...\\rangle.\n\\end{equation}\n\n\nThe CDW nature of the ground states is perfectly compatible with the topological order being a symmetric state, since there are no \\textit{local} CDW order parameters with nonzero expectation value. The CDW order parameter in this case is non-local in the directions transverse to $\\hat{x}$. For example, in two dimensions the CDW order parameter is defined on a large loop that wraps around the cycle transverse to $\\hat{x}$. This phenomenon is dubbed \\textit{weak} symmetry breaking in Ref.~\\cite{Kitaev06}. The weak spontaneous symmetry breaking requires a certain degeneracy for the ground state. 
This degeneracy is naturally accommodated by the ground state manifold of the topological order. For example, for the above Tao-Thouless state at $k=2$ the CDW order requires a two-fold ground state degeneracy, which is nothing but the two degenerate Laughlin states on the torus.\n\n\nThe above results can be summarized as follows:\n\\begin{corollary}\n\\label{cor:toporderCDW}\nIf a ground state of a gapped topological order on a $d$-dimensional torus ($d>1$) has a non-trivial momentum in $\\hat{x}$, then any ground state of this topological order must \\textit{weakly} break translation symmetry in $\\hat{x}$.\n\\end{corollary}\nA further example of these results, together with the effects of anyon condensation, is demonstrated for the $\\mathbb{Z}_2$ topologically ordered toric code in Appendix~\\ref{app:Toriccode}. The above result also implies the following constraint on the possible momenta carried by a topologically ordered ground state:\n\\begin{corollary}\nIf a gapped topological order has $q$ degenerate ground states on the torus, then the momentum of any ground state in any direction is quantized: \n\\begin{equation}\nP^{(a)}_i=2\\pi N^{(a)}_i\/q,\n\\end{equation}\nwhere $N^{(a)}_i$ is an integer depending on the ground state (labeled by $a$) and direction $i$.\n\\end{corollary}\nThis is simply because for other values of the momentum, the ground state degeneracy required by the spontaneous translation-symmetry-breaking order will be larger than the ground state degeneracy from the topological order, which results in an inconsistency. 
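The quantization rule can be made concrete on the $k=2$ Tao-Thouless example. The following sketch (our illustration; the even length $L$ is an arbitrary small value) builds the one-site translation on the occupation basis and checks that the two cat states carry momenta $P=0$ and $P=\pi$, i.e. the allowed values $2\pi N/q$ with $q=2$.

```python
import numpy as np
from itertools import product

L = 6  # even chain length (illustrative)
basis = list(product([0, 1], repeat=L))
index = {b: i for i, b in enumerate(basis)}

# one-site translation as a permutation matrix on the 2^L occupation basis
T = np.zeros((2**L, 2**L))
for b in basis:
    T[index[tuple(np.roll(b, 1))], index[b]] = 1.0

A = index[tuple((x + 1) % 2 for x in range(L))]  # |101010...>
B = index[tuple(x % 2 for x in range(L))]        # |010101...>

# the cat states |A> + sign*|B> are translation eigenstates with
# eigenvalues +1 and -1, i.e. momenta P = 0 and P = pi = 2*pi*N/q for q = 2
for sign, P in [(1, 0.0), (-1, np.pi)]:
    psi = np.zeros(2**L)
    psi[A], psi[B] = 1.0, float(sign)
    psi /= np.sqrt(2)
    assert np.allclose(T @ psi, np.cos(P) * psi)
```

Since translation exchanges the two density-wave patterns, a two-fold degeneracy is exactly what the weak symmetry breaking requires, matching the two Laughlin ground states.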
An immediate consequence of the above corollary is that invertible topological orders (higher-dimensional states that are LRE by our definition but have only a unique gapped ground state on closed manifolds), such as the chiral $E_8$ state~\\cite{Kitaev06}, cannot have nontrivial momentum on closed manifolds since $q=1$.\n\n\n\nThe above statement immediately implies that the momenta of topologically ordered ground states are robust under adiabatic deformations, as long as the gap remains open and translation symmetries remain unbroken. For the Tao-Thouless states this conclusion can also be drawn from the LSM theorem if the $U(1)$ symmetry is unbroken. Our result implies that the momenta of Laughlin-Tao-Thouless states are robustly quantized even if the $U(1)$ symmetry is explicitly broken.\n\n\n\n\n\\subsection{Crystalline symmetry-protected topological phases}\n\\label{sec:cSPT}\n\n\nThere has been growing interest and success in understanding the symmetry-protected topological (SPT) phases associated with crystalline symmetries~\\cite{PhysRevX.7.011020,ThorngrenElse,PhysRevB.96.205106,Shiozaki2018,SongFangQi2018,Else2018}. When the protecting symmetry involves lattice translation, a crucial ``smoothness'' assumption~\\cite{ThorngrenElse,PhysRevB.96.205106} is used. Essentially one assumes that for such SPT phases the inter-unit-cell entanglement can be adiabatically removed, possibly with the help of additional ancilla degrees of freedom. This allows one to formally ``gauge'' the translation symmetry~\\cite{ThorngrenElse} and build crystalline topological phases out of lower-dimensional states~\\cite{PhysRevB.96.205106,SongFangQi2018,Else2018}. \n\nOur result, namely Theorem~\\ref{Thm}, serves as a non-trivial check on the smoothness assumption in the following sense. If there were SRE states with non-trivial lattice momenta, such states would have irremovable inter-unit-cell entanglement since unentangled states cannot have non-trivial momentum. 
Equivalently, the correlation length $\\xi$ cannot be tuned to be smaller than the unit cell size $a$. In fact, if such states existed, they would by definition be non-trivial SPT states protected solely by translation symmetry -- such SPT states would be beyond all the recent classifications. \n\nWe note that our result is a necessary condition, but not a proof, for the smoothness assumption, as there may be other ways to violate the assumption without involving a ground state momentum. It will be interesting to see if the arguments used in this work can be extended to fully justify the smoothness assumption.\n\n\n\\section{Discussions}\n\\label{sec:discussions}\n\nIn this paper we have shown that a quantum many-body state with non-trivial lattice momentum is necessarily long-range entangled, hence establishing a simple yet intriguing connection between two extremely familiar concepts in physics: translation symmetry and quantum entanglement. Many directions can be further explored, which we briefly comment on in the remainder of this Section.\n\nOne important aspect that we have so far skipped over is that LSM-type theorems are in fact intimately connected to quantum anomalies~\\cite{Cheng2015,Jian2017,Cho2017,Metlitski2017,PhysRevB.101.224437,Ye2021}. This is natural since they both provide UV conditions that constrain the low-energy behaviours. For the ``projective symmetry\" type of LSM theorems, this connection has been precisely established and it is known that such LSM constraints correspond to certain discrete (quantized) 't Hooft anomalies. For the ``partial filling'' type of LSM such as the familiar $U(1)\\times T$ constraint, however, the connection has been discussed~\\cite{Song2021Polarization,Else2021FL,Wang20,PhysRevResearch.3.043067} but has yet to be fully developed. As we discussed in Sec.~\\ref{sec: LSMOH}, our main result (Theorem~\\ref{Thm}) can be viewed as a ``partial filling'' type of LSM that only involves translation symmetry. 
It is therefore natural to ask whether Theorem~\\ref{Thm} can be understood from an anomaly perspective. To achieve this goal, it is clear that the standard quantized 't Hooft anomaly is insufficient (a point which was also emphasized in Ref.~\\cite{Else2021FL} for the $U(1)\\times T$ LSM) -- for example, the toric code discussed in Appendix~\\ref{app:Toriccode} has no 't Hooft anomaly since one can condense the $e$ particle to obtain a trivial symmetric state. One would therefore need to expand the notion of anomaly to accommodate the partial-filling type of LSM constraints including the one discussed in this work, possibly along the lines of the ``unquantized anomaly\" discussed in Ref.~\\cite{PhysRevResearch.3.043067}. We leave this aspect to future work.\n\nAnother powerful consequence of the traditional $U(1)\\times T$ LSM theorem is on the stability of the LRE ground states (with partial charge filling) under symmetric perturbations: assuming the charge compressibility is finite (possibly zero), a small perturbation will not change the charge filling discontinuously, so the system remains LRE under small symmetric perturbations (unless the perturbation leads to spontaneous symmetry-breaking like the BCS attraction). It is natural to ask whether the other ``partial filling\" types of LSM theorems can serve similar purposes. In fact Ref.~\\cite{PhysRevResearch.3.043067} discussed precisely this point under the notion of ``unquantized anomaly''. The unquantized anomalies are very similar to Theorems~\\ref{Thm} and~\\ref{FermionThm} and Corollary~\\ref{ZnLSM}, except that the key quantity is not the discrete charges (lattice momentum or $\\mathbb{Z}_n$ charges) on a specific system size $L$, but the charge densities (momentum density or $\\mathbb{Z}_n$ charge density). Such discrete charge densities cannot be defined for a fixed $L$, but may be defined for a sequence of systems with $L\\to \\infty$. 
Ref.~\\cite{PhysRevResearch.3.043067} argued that, in the context of Weyl and Dirac semimetals, as long as these discrete charge densities are well behaved in the $L\\to\\infty$ limit, the unquantized anomalies will protect the LRE nature of the states under symmetric perturbations. Our work here can be viewed as a rigorous justification of the unquantized anomalies in Ref.~\\cite{PhysRevResearch.3.043067} on fixed system sizes.\n\nAssuming a well-behaved momentum density in the thermodynamic limit, we can also apply our results to a Fermi liquid with a generic Fermi surface shape, such that the ground state from the filled Fermi sea has a non-vanishing momentum density (this requires breaking of time-reversal, inversion and reflection symmetries). This can be viewed as a non-perturbative explanation for the stability of such low-symmetry Fermi surfaces, even in the absence of the charge $U(1)$ symmetry. (Recall that perturbatively the stability comes from the fact that the Cooper pairing terms no longer connect opposite points on the Fermi surface.)\n\nAnother question one may ask is whether broader classes of non-onsite symmetries obey similar charge and entanglement restrictions. It is easy to see that exactly the same constraint holds for glide reflections and screw rotations, since when the system is viewed as $1$d there is no difference between glide reflection, screw rotation and translation. It is also easy to see that the constraint does \\textit{not} hold for point group symmetries (rotations and reflections), because such symmetries will be onsite at some points in space (the fixed points of point groups). It is therefore important that translation symmetry is \\textit{everywhere} non-onsite. The question becomes even more intriguing if we consider more general unitary operators (such as quantum cellular automata~\\cite{GNVW}).\n\nThere are many more natural avenues for further exploration. 
The interplay of the non-local nature of translation symmetry with crystalline symmetry anomalies is not yet well understood and requires more concrete mathematical grounding, such as a rigorous proof of when the smoothness condition is valid. Relatedly, it remains to be determined whether translation symmetry may be truly treated as an onsite symmetry and gauged, or whether its non-locality and non-trivial momentum may hinder or require modifications to the usual gauging process. Implications of our results on the ``emergibility\" of phases may also provide fruitful insights into achievable and unachievable states on the lattice~\\cite{PhysRevX.11.031043,Ye2021}. Our work has shown without a doubt that translation symmetry is many-faceted and plays a crucial role in the entanglement properties of crystalline materials.\n\n\n\n\\begin{acknowledgments}\nWe acknowledge insightful discussions with Yin-Chen He, Timothy Hsieh, and especially Liujun Zou. We thank Anton Burkov for a previous collaboration that inspired this work. We thank the anonymous referees for their careful reading of our manuscript and their illuminating comments and questions. LG was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada and by a Vanier Canada Graduate Scholarship. \nResearch at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nIn many models of set theory, Souslin trees offer a variety of different\nhomogeneity or rigidity properties.\nProbably the most prominent homogeneity property for Souslin trees is\n\\emph{strong homogeneity} (cf. 
Section \\ref{sec:str_hom} for the definition)\nwhich implies that the tree is in a certain sense minimal with respect to its\nautomorphism group.\nOn the other hand, a great number of rigidity notions\n(i.e. absence of nontrivial automorphisms) for Souslin trees\nand an array of implications between most of them are known.\nIn this paper, which resulted from part of the author's PhD thesis\n\\cite{diss}, we present some interrelations between the class\nof strongly homogeneous Souslin trees and that of free trees,\nthe latter consisting of those Souslin trees which have the strongest known\nrigidity properties.\n\nThe key result which leads to these correspondences\nis a certain method for decomposing a strongly homogeneous Souslin tree\ninto $n$ free factors\n(Theorem \\ref{thm:str_hom=free_x_free}, which is a strengthening of a known\nthough unpublished result).\nThis decomposition uses an elementary,\nbut apparently new combinatorial tool,\nan $n$-optimal matrix of partitions,\nwhich we introduce in the first section.\nAs will be seen in Section 2, there are several ways to decompose a strongly\nhomogeneous Souslin tree into $n$ free trees.\nBut the construction we give using an $n$-optimal matrix of partitions\nenables us to prove strong consequences about the behaviour of the factors,\nwhich finally are used in the third section to separate certain notions\nof parametrized rigidity for Souslin trees (which are all weakenings of\nfreeness) in Corollaries \\ref{cor:n-free} and \\ref{cor:n-free_not_UBP}.\n\nA few words on the structure of the paper and the assumed background,\nwhich differs strongly from section to section.\nThe first section\nis about the very elementary notion of $n$-optimal matrices of\npartitions and does not assume any prerequisites.\nThe other two sections treat Souslin trees and their structural properties.\nIn Section 2 we review strong homogeneity and freeness for Souslin trees\nand prove two decomposition theorems for strongly 
homogeneous\nSouslin trees.\nThe final section collects several rigidity notions for Souslin trees\n(most of them taken from \\cite{degrigST}) and gives the aforementioned\nseparation results.\nSome definitions and proofs in Section 3 refer to the technique of forcing\nwhich we do not review here.\nAnd even though we give the necessary definitions concerning Souslin trees\nat the beginning of Section 2, some acquaintance with this subject will\ncertainly enhance the reader's understanding\nof the constructions in Section 2\n(very good references, also on forcing, are,\ne.g. \\cite{devlin-johnsbraten,kunen,jechneu}).\nAnyway, we have made an effort to write a paper that is accessible to an\naudience exceeding the circle of experts on Souslin trees.\n\n\\section{Optimal matrices of partitions}\n\nThe main idea is as follows:\nConsider an infinite matrix with $\\omega$ rows and $n$ columns\nwhere $n$ is a natural number larger than 1:\n$$\n\\begin{pmatrix}\nP_{0,0}&\\ldots&P_{0,m}&\\ldots&P_{0,n-1}\\\\\n\\vdots&\\vdots&\\vdots&\\vdots&\\vdots\\\\\nP_{k,0}&\\ldots&P_{k,m}&\\ldots&P_{k,n-1}\\\\\n\\vdots&\\vdots&\\vdots&\\vdots&\\vdots\\\\\n\\end{pmatrix}\n$$\nSuppose that the entries of this matrix are partitions of the set $\\omega$\nof natural numbers.\nWe want to choose these partitions in a way such that\n(i) we get an infinite set whenever we intersect a finite family of subsets\nof $\\omega$ coming from (distinct) partitions of a single column and\n(ii) we get a singleton whenever we intersect $n$ sets belonging\nto partitions each coming from different columns.\nIn the following definition the latter requirement is stated\nin a slightly stronger form: we want to obtain a singleton whenever we\nintersect $n$ sets not all coming from the same column.\nThe construction in the proof of Lemma \\ref{lm:opt_mtx_prt} actually yields\nmatrices that satisfy this stronger condition,\nand we will use it in the proof of Proposition \\ref{prp:decomp}\nto derive an additional 
result.\n\n\\begin{defi}\n\\label{defi:opt_mtx_prt}\nFor $n\\in\\omega$,\nan $n$\\emph{-optimal matrix of partitions}\nis a family $(P_{k,m}\\mid k\\in\\omega,\\,m<n)$ of partitions of $\\omega$\nsuch that\n\\begin{enumerate}[(i)]\n\\item the intersection of any finite family of sets taken from distinct\npartitions of a single column $m$ is infinite, and\n\\item the intersection of any $n$ sets taken from $n$ distinct partitions\nthat do not all belong to the same column is a singleton.\n\\end{enumerate}\n\\end{defi}\n\n\\begin{lm}\n\\label{lm:opt_mtx_prt}\nThere is an $n$-optimal matrix of partitions for every natural number $n>1$.\n\\end{lm}\n\\begin{proof}\nTo start we fix a bijective enumeration\n$h=(h_0,\\ldots,h_{n-1}):\\omega\\to\\omega^n$ and define $a_i^{0,m}$\nto be the pre-image of $i$ under $h_m$.\nLet $P_{0,m}:=\\{a_i^{0,m}\\mid i\\in\\omega\\}$.\n\nThe rest of the construction consists of a three-fold recursion.\nThe outer loop is indexed with $(k,m)\\in\\omega\\times n$,\nand goes row by row, from the left to the right.\nOne could also say that the progression of the indices\nfollows the lexicographic order of $\\omega\\times n$, i.e.,\n$m$ grows up to $n-1$ and then drops down to 0 while $k$ increases to $k+1$.\n(The first $n$ stages of the outer loop, where $k=0$,\nhave been included in the recursive anchor in the first line of the proof.)\n\nThe inner recursion loops are common $\\omega$-recursions.\nIn each stage of the middle one we define one element $a^{k,m}_i$\nof the partition $P_{k,m}$,\nand the innermost consists of a choice procedure for the elements\nof that set $a^{k,m}_i$.\n\nSo assume that the partitions $P_{\\ell,m}=\\{a_i^{\\ell,m}\\mid i\\in\\omega\\}$\nhave already been defined for $(\\ell,m)<_\\mathrm{lex}(k,n)$\nand also the first $i$ sets\n$a^{k,m}_0=a_0,\\ldots,a^{k,m}_{i-1}=a_{i-1}$ of $P_{k,m}$\nhave been fixed.\nAssume also that the family constructed so far\nhas the properties (i) and (ii) from Definition~\\ref{defi:opt_mtx_prt}.\nWe inductively choose three sequences\n$x_\\ell,\\,y_\\ell$ and $z_\\ell$ of members of $\\omega\\setminus\\bigcup_{h\\alpha$ we let $s\\!\\!\\upharpoonright\\!\\!\\alpha$ be the unique\npredecessor of $s$ in level $\\alpha$.\n\nThe \\emph{height of a tree} $T$, $\\hgt T$,\nis the minimal ordinal $\\alpha$ such that $T_\\alpha$ is empty.\nAn \\emph{antichain} is a set of pairwise incomparable nodes of $T$,\nso for $\\alpha<\\hgt T$,\nthe level $T_\\alpha$ is 
an antichain of $T$.\n\nNodes that do not have $<_T$-successors are called \\emph{leaves}, and\n$T$ is called \\emph{$\\kappa$-splitting} or \\emph{$\\kappa$-branching},\n$\\kappa$ a cardinal, if all nodes of $T$\nhave exactly $\\kappa$ immediate successors, except for the leaves.\n\nA \\emph{branch} is a subset $b$ of $T$ that is linearly ordered by $<_T$ and\nclosed downwards, i.e. if $s<_T t\\in b$ then $s\\in b$.\nUnder the notion of a \\emph{normal} tree we subsume the following\nfour conditions:\n\\begin{enumerate}[a)]\n\\item there is a single minimal node called the \\emph{root};\n\\item each node $s$ with $\\hgt(s)+1<\\hgt T$ has at least two immediate successors;\n\\item each node has successors in every higher non-empty level;\n\\item branches of limit length have unique limits (if they are extended in the tree),\ni.e., if $s,t$ are nodes of $T$ of limit height\nwhose sets of predecessors coincide, then $s=t$.\n\\end{enumerate}\nNote that by condition c) leaves can\nonly appear in the top level of a normal tree.\n\nFor a node $t\\in T$ we denote by $T(t)$ the set $\\{s\\in T:t\\leq_T s\\}$\nof nodes above (and including) $t$ which becomes a tree when equipped\nwith the ordering inherited from $T$.\nA tree $T$ is said to be \\emph{homogeneous} if for all pairs $s,t\\in T$ of the same height there is\na tree isomorphism (of partial orders) between $T(s)$ and $T(t)$,\nthe trees of nodes in $T$ above $s$ and $t$ respectively.\nFor many classes of trees, such as Souslin trees, this is equivalent to \nthe condition that for each pair $s,t\\in T$ of nodes of the same height there\nis an automorphism of $T$ mapping $s$ to $t$.\nA tree is \\emph{rigid} if it does not admit any non-trivial automorphism.\n\nWe will consider two operations on the class of trees: sum and product.\nGiven trees $(T^i,<_i)$ for $i\\in I$, the \\emph{tree sum} of this family,\ndenoted by $\\bigoplus_{i\\in I}T^i$, is the disjoint union of the sets $T^i$\nplus a common root $r\\notin 
\\bigcup T^i$.\nThe tree order $<$ on $\\bigoplus T^i$ is\ngiven by the (disjoint) union of the tree orders of summands as well as\nthe relation $r< t$ for all $t\\in \\bigcup T^i$.\nThe height of $\\bigoplus T^i$ is given by the ordinal\n$1 + \\sup\\{\\hgt T^i: i\\in I\\}$.\n\nLet now all trees $T^i$ be of height $\\mu$.\nThe \\emph{tree product} $\\bigotimes_{i\\in I}T^i$\nover the family $(T^i)_{i\\in I}$\nis given by the union over the cartesian products of the levels\n$T^i_\\alpha$:\n$$\\bigotimes_{i\\in I}T^i := \\bigcup_{\\alpha<\\mu}\\prod_{i\\in I} T^i_\\alpha.$$\nThe product tree order is simply the conjunction of the relations $<_i$.\n\nIn order to make a decomposition of a tree into a product feasible\nwe also introduce the notion of a nice tree equivalence relation.\nLet $T$ be a normal and $\\aleph_0$-splitting tree and $\\equiv$\nan equivalence relation on $T$.\nThen we say that $\\equiv$ is a \\emph{nice tree equivalence relation (nice\n t.e.r.)} if $\\equiv$ respects levels (i.e., it refines $T\\otimes T$),\nis compatible with the tree order (i.e., $\\hgt(s)=\\hgt(r) $ and\n$s r$ imply $s\\equiv r$), the quotient partial order\n$T\/\\!\\!\\equiv$ of $\\equiv$-classes ordered by the inherited partial order, i.e.\n$$[s]<_\\equiv [t]\\quad\\iff\\quad s< t\\,,$$\nis a normal and $\\aleph_0$-splitting tree and the relation is nice,\nby which we mean that for all triples of nodes $s,r,t$ such that\n$s\\equiv r$ and $t$ is above $s$ there is a node $u\\equiv t$, $u$ above\n$r$.\nAnother way to formulate this last property ``niceness'' associates to each\nbranch $b$ through $T$ a subtree $T^{b}_\\equiv:=\\bigcup_{s\\in\n b}s\/\\!\\!\\equiv$ of $T$ and requires that it satisfies point c) in our\ndefinition of normal trees, i.e., every node $t\\in T^{b}_\\equiv$ has\nsuccessors in every higher level of $T^b_\\equiv$. 
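For readers who prefer a concrete finite picture, the level-wise product construction can be sketched on finite approximations. The code below is purely illustrative (finite binary trees stand in for the $\omega_1$-trees considered here): it forms the product levels as cartesian products, orders them coordinate-wise as the conjunction of the factor orders, and checks that every node of level $\alpha+1$ has a unique predecessor in level $\alpha$, so the product is again a tree.

```python
from itertools import product as cartesian

# Levels of the complete binary tree of height h: nodes are 0/1-strings,
# and s <= t iff t end-extends s.  (A finite stand-in, purely illustrative.)
def levels(h):
    return [[''.join(b) for b in cartesian('01', repeat=a)] for a in range(h)]

def leq(s, t):
    return t.startswith(s)

h, n = 4, 2                  # height and number of factors (arbitrary)
factor = levels(h)

# tree product: level alpha is the cartesian product of the factors'
# levels, ordered coordinate-wise (the conjunction of the factor orders)
prod_levels = [list(cartesian(factor[a], repeat=n)) for a in range(h)]

def prod_leq(ss, tt):
    return all(leq(s, t) for s, t in zip(ss, tt))

# every product node of level a+1 has exactly one predecessor in level a,
# and the level sizes multiply
for a in range(h - 1):
    for tt in prod_levels[a + 1]:
        assert sum(prod_leq(ss, tt) for ss in prod_levels[a]) == 1
    assert len(prod_levels[a]) == len(factor[a]) ** n
```

Of course, the interesting properties of tree products in this paper (such as the c.c.c. of products of derived trees) only make sense for uncountable trees; the sketch merely illustrates the order-theoretic definition.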
\n\nNow consider the case that a tree $T$ carries nice tree equivalence relations\n$\\equiv_i$ for $is$ are mapped by the automorphism\n$\\varphi$ according to the rule\nstated above with $\\alpha=\\hgt(s)$.\n\nTo reach a statement contradicting the transitivity\nof the family $(\\psi_{st})$,\nwe assume that there is a node $r\\in T$,\nsuch that for each successor $s$ of $r$ there is\na node $t\\geq s$, such that $\\varphi(t)\\neq\\psi_{s\\varphi(s)}(t)$.\nWe can inductively choose an increasing sequence of ordinals\n$\\alpha_n$ such that for all nodes $t\\in T_{\\alpha_{n+1}}$ we have\n$$\\varphi(t)\\neq\n\\psi_{t\\upharpoonright\\alpha_n \\varphi(t\\upharpoonright\\alpha_n)}(t).$$\nLet $\\alpha$ be the supremum of the $\\alpha_n$ and pick any node\n$t\\in T_\\alpha$.\nSince $\\alpha$ is a limit ordinal and by transitivity of the coherent family\nwe find an $n\\in\\omega$ such that\n$\\varphi(t)=\\psi_{t\\upharpoonright\\alpha_n \\varphi(t\\upharpoonright\\alpha_n)}(t)$\nwhich is of course impossible by the choice of the $\\alpha_n$.\n\\end{proof}\n\nNow we come to \\emph{free} trees. Also this property has several different\nnames, e.g. full\n(Jensen, Todor\\c{c}evi\\'{c}, \\cite{GKH,todorcevic_trees_and_orders}) or\n'Souslin and all derived trees Souslin'\n(Abraham and Shelah, \\cite{abraham-shelah_aronszajn_trees,abraham-shelah}).\nIn the context of \\cite{degrigST}\n(cf. 
Section \\ref{sec:free} of the present article)\nfree trees could also be called\n'$<\\!\\!\\omega$-fold Souslin off the generic branch'.\n\n\\begin{defi}\nA normal tree $T$ of height $\\omega_1$ is \\emph{free}\nif for every finite (and non-empty) set of nodes\n$s_0,\\ldots,s_n$ of $T$ of the same height,\nthe tree product $\\bigotimes_{i=0}^n T(s_i)$ satisfies the c.c.c.\n\\end{defi}\n\nFree trees are easily seen to be rigid Souslin trees\nas the product of two isomorphic relative trees $T(s)$ and $T(t)$\nwould clearly not be Souslin.\nIn Section \\ref{sec:separating} we will also consider weaker,\nparametrized forms of freeness.\n\n\\subsection{Decompositions of strongly homogeneous Souslin trees}\n\\label{sec:dec_free}\n\nWe now come to the key result of this paper.\nThe following theorem is stated in \\cite[p.246]{shelah-zapletal}\nin the case $n=2$ without proof.\nLarson gives the construction of a single free subalgebra\nof a strongly homogeneous Souslin algebra in terms of trees\nin the proof of Theorem 8.5 in his paper \\cite{larson}.\nSome ideas in the following proof are borrowed from that construction.\n\n\\begin{thm}\\label{thm:str_hom=free_x_free}\nFor every natural number $n>1$ and every\n$\\aleph_0$-branching, strongly homogeneous Souslin tree $T$\nthere are free Souslin trees $S_0,\\ldots,S_{n-1}$\nsuch that $T\\cong \\bigotimes_{ms_0,$$\nand let $r_i:=\\psi_{s_0,s_i}(r_0)>s_i$ for $i0$.\nStarting with $\\alpha=1$ we know that\n$(s_m\/\\!\\!\\equiv_m)=a_{i_m}^{k_m,m}(\\mathrm{root})$\nfor some $i_m$ and $k_m$.\nSo property (ii) of our $n$-optimal matrix is all we need here.\nFor $\\alpha=\\gamma+1$ we assume that the classes $s_m^-\/\\!\\!\\equiv_m$\nmeet in a single node, say $r\\in T_\\gamma$.\nThe set of elements of $s_m\/\\!\\!\\equiv_m$ which lie above $r$\nis then just $a_{i_m}^{h_m(r),m}(r)$ \nand again property (ii) of the matrix proves the claim.\nIn the limit case we once more use the transitivity of the coherent family.\nSo let $\\alpha$ 
be a limit and $\\gamma<\\alpha$ large enough such that\n$\\psi_{q_m,q_\\ell}(s_m)=s_\\ell$ where we abbreviate $s_m\\!\\!\\upharpoonright\\!\\!\\gamma=q_m$.\nFor a last time in this proof we use the commutativity of the\ncoherent family:\nLet $r$ be the unique element of the intersection\nof the classes $q_m\/\\!\\!\\equiv_m$.\nThen $t=\\psi_{q_m,r}(s_m)$ is well defined and\nindependent from the choice of $m2$.\nSo assume that $R:=\\bigotimes_{i0$.\n\\end{thm}\n\\begin{proof}\nThis is just a simpler variant of the construction in the proof\nof Theorem \\ref{thm:str_hom=free_x_free} where\nwe use only the first row of the matrix of partitions\n(or just any bijection between $\\omega^n$ and $\\omega$).\nIt is then easy to verify that the coherent family of $T$ descends\nto the thus obtained factor trees and renders them strongly homogeneous.\n\\end{proof}\n\n\\begin{rem}\n\\label{rem:decomp}\n\\begin{enumerate}[(i)]\n\\item Though, of course, not every tree product\nof two strongly homogeneous Souslin trees is Souslin again\n(e.g.\\ take $T\\otimes T$),\nthere is a converse to the last theorem:\nIf $S$ and $T$ are strongly homogeneous Souslin trees and\nthe tree product $S\\otimes T$ satisfies the c.c.c.,\nthen $S\\otimes T$ is a strongly homogeneous Souslin tree as well.\n\\item We see that there are two essentially distinct ways to decompose\na strongly homogeneous tree into (at least three) free factors.\nAn application of Theorem \\ref{thm:str_hom=str_hom_x_str_hom}\nto decompose a given strongly homogeneous Souslin tree $T$ into $\\ell$\nstrongly homogenous factors $S_0,\\ldots,S_{\\ell-1}$\nfollowed by an $\\ell$-fold application of the procedure used in the proof of\nTheorem \\ref{thm:str_hom=free_x_free} to decompose the tree $S_k$ into $m_k$\nfree trees $R^k_i$ for $0\\leq i < m_k$\nnever results in the same decomposition\nas directly using the proof of Theorem\n\\ref{thm:str_hom=free_x_free} to decompose $T$ into\n$\\sum_{k=0}^{\\ell-1} m_k$ free 
factors.\nThe partial products of the latter decomposition are all rigid by\nProposition \\ref{prp:decomp} while\nthe first also has partial products that are strongly homogeneous.\n\\end{enumerate}\n\\end{rem}\n\n\\section{Separating high degrees of rigidity}\n\\label{sec:separating}\n\nIn this section we review several families of rigidity notions\nfor Souslin trees, all of them weaker than freeness.\nThese definitions (except for that of an \\emph{$n$-free} Souslin tree)\nare all taken from \\cite{degrigST}.\nMost of these definitions refer to the technique of forcing applied\nwith a Souslin tree as the forcing partial order.\nWe do not review forcing here.\nBut recall\nthat forcing with a Souslin tree always assumes the inverse order on the tree\n(i.e., trees grow downwards when considered as forcing partial orders,\nthe root is the maximal element, etc.) and adjoins a cofinal branch.\n\nThis section is divided into five short subsections.\nThe first two introduce the rigidity notions to be considered\nand the last three state many and prove some separations between them.\nWe only give proofs that are either elementary or use the\nproof of the Decomposition\nTheorem \\ref{thm:str_hom=free_x_free}.\n\n\\subsection{Parametrized freeness}\n\\label{sec:free}\n\nConsidering the definition of the property \\emph{free} for Souslin trees,\nit is natural to ask whether or not it makes any difference\nif the number of factors in the tree products\nthat are required to be Souslin is bounded.\nThis leads to the following definition, which we immediately connect\nto the definition of \\emph{being $n$-fold Souslin off the generic branch}\nencountered in \\cite{degrigST}.\n\n\\begin{defi}\nLet $n$ be a positive natural number.\n\\begin{enumerate}[a)]\n\\item We say that a Souslin tree $T$ is \\emph{$n$-free} if for every subset\n$P$ of size $n$ of some level $T_\\alpha$, $\\alpha<\\omega_1$, the\ntree product $\\bigotimes_{s\\in P} T(s)$ satisfies the c.c.c.\n\\item A Souslin tree 
is said to be\n$n$\\emph{-fold Souslin off the generic branch},\nif for any sequence $\\vec{b}=(b_0,\\ldots,b_{n-1})$\ngeneric for the $n$-fold forcing product of (the inverse partial order\nof) $T$ and any node $s\\in T\\setminus\\bigcup_{i\\in n}b_i$,\nthe subtree $T(s)$ of all nodes of $T$ above $s$ is a Souslin tree\nin the generic extension $M[\\vec{b}]$ (which amounts to requiring that the\nadjunction of $\\vec{b}$ does not collapse $\\omega_1$ and preserves the\nc.c.c. of the $T(s)$, $s\\notin\\bigcup b_i$).\n\\end{enumerate}\n\\end{defi}\nIt is easy to see that a 2-free Souslin tree or a tree which is Souslin\noff the generic branch cannot be decomposed as the product of two Souslin\ntrees. And this common feature is no coincidence.\n\n\\begin{prp}\\label{prp:free_Sotgb}\nFor a positive natural number $n$ and a normal Souslin tree $T$\nthe following statements are equivalent.\n\\begin{enumerate}[a)]\n\\item $T$ is $n$-fold Souslin off the generic branch.\n\\item $T$ is $(n\\!+\\!1)$-free.\n\\end{enumerate}\n\\end{prp}\n\n\\begin{proof}\nWe start with the implication (b$\\to$a).\nAssume that $T$ is $n+1$-free and let $\\vec{b}=(b_0,\\ldots,b_{n-1})$ be\ngeneric for $T^{\\otimes n}$, the $n$-fold tree product of $T$ with itself.\nChoose $\\alpha<\\omega_1$ large enough,\nsuch that the nodes $t_i:=b_i(\\alpha)$ are pairwise incompatible.\nFinally, pick a node $t_n\\in T_\\alpha$ distinct from all the $b_i(\\alpha)$.\nBy our freeness assumption on $T$,\nthe product tree $\\bigotimes_{i\\in{n+1}}T(t_i)$\nsatisfies the countable chain condition.\nBut then $M[\\vec{b}]\\vDash$''$T(t_n)$ is Souslin''\nby a standard argument concerning chain conditions in forcing iterations.\nNow it is easy to see that $T$ is $n$-fold Souslin off the generic branch.\n\nFor the other direction we inductively show that\n$T$ is $m$-free for $m\\leq n+1$,\nassuming that $T$ is $n$-fold Souslin off the generic branch.\nThe inductive claim is trivial for $m=1$. 
\nSo let $m\\geq1$ and let $s_0,\\ldots,s_m$ be pairwise distinct nodes\nof the same height.\nThen for any generic sequence\n$\\vec{b}=(b_0,\\ldots,b_{m-1})$ for $\\bigotimes_{i\\in m}T(s_i)$\nwe know that $T(s_m)$ is Souslin in the generic extension $M[\\vec{b}]$.\nFinally the two-step iteration $\\bigotimes_{i\\in m}T(s_i)\\ast \\check{T}(s_m)$\nis isomorphic to $\\bigotimes_{i\\in m+1}T(s_i)$\nand satisfies the countable chain condition.\n\\end{proof}\n\nThis proposition implies that a free tree $T$ is also\n\\emph{free off the generic branch} in the sense that\nin the generic extension obtained by adjoining\na cofinal branch $b$ through $T$, for every node $t\\in T\\setminus b$,\nthe tree $T(t)$ is still free.\n\n\\subsection{Further types of rigidity}\n\nIn Sections 1-4 of \\cite{degrigST} different notions of rigidity\nfor Souslin trees are collected:\n(ordinary) rigidity, total rigidity and the unique branch property and their\nabsolute counterparts,\nwhere absoluteness refers to forcing extensions obtained by\nadjoining a generic branch to the Souslin tree under consideration.\nIn this context also the stronger notion of being\n($n$-fold) Souslin off the generic branch is introduced \nwhich we already considered in the last section.\n\n\\begin{defi}\n\\begin{enumerate}[a)]\n\\item A Souslin tree $T$ is called \\emph{$n$-absolutely rigid}, if $T$ is a rigid tree in the generic extension\nobtained by forcing with $T^n$ (or equivalently $T^{\\otimes n}$).\n\\item A Souslin tree is \\emph{totally rigid}, if the trees $T(s)$ and $T(t)$ are non-isomorphic for all pairs\nof distinct nodes $s$ and $t$ of $T$. It is \\emph{$n$-absolutely totally rigid} if it is totally rigid\nafter forcing with $T^n$.\n\\item A Souslin tree $T$ has the \\emph{unique branch property (UBP)}, if forcing with $T$ adjoins only a single\ncofinal branch to $T$. 
For $n>0$ we say, that $T$ has the $n$-\\emph{absolute UBP}, if forcing with\n$T^{n+1}$ adjoins exactly $n+1$ cofinal branches to $T$.\n\\end{enumerate}\n\\end{defi}\n\n\nFuchs and Hamkins prove implications as well as some independencies\nbetween these rigidity notions.\nThey also give in \\cite[Section 4]{degrigST}\na diagram of implications between the\ndegrees of rigidity that we have approximately\nreconstructed here for the convenience of the reader.\n\n\\begin{table}[htb]\n\\begin{center}\n\\begin{small}\n\\begin{tabular}{ccccccc}\n2-free&$\\longleftarrow$&\\makebox[2.5cm][c]{3-free}&$\\longleftarrow$&\\makebox[2.5cm][c]{4-free}&$\\longleftarrow$&$\\ldots$\\\\\n$\\downarrow$&&$\\downarrow$&&$\\downarrow$&&\\\\\nUBP&$\\longleftarrow$&\\makebox[2.5cm][c]{absolutely UBP}&$\\longleftarrow$&\\makebox[2.5cm][c]{2-absolutely UBP}&$\\longleftarrow$&$\\ldots$\\\\\n$\\downarrow$&&$\\downarrow$&&$\\downarrow$&&\\\\\ntotally rigid&$\\longleftarrow$&\\makebox[3cm][c]{abs. totally rigid}&$\\longleftarrow$&\n\\makebox[3cm][c]{2-abs. totally rigid}&$\\longleftarrow$&$\\ldots$\\\\\n$\\downarrow$&&$\\downarrow$&&$\\downarrow$&&\\\\\nrigid&$\\longleftarrow$&\\makebox[2.5cm][c]{absolutely rigid}&$\\longleftarrow$&\\makebox[2.5cm][c]{2-absolutely rigid}&$\\longleftarrow$&$\\ldots$\\\\\n&&&&&&\n\\end{tabular} \\end{small}\n\\end{center}\n\\caption{Implications between degrees of rigidity for Souslin trees.}\n\\label{diagram}\n\\end{table}\n\nFuchs and Hamkins show that\nthe part of the diagram to the left and below ``absolutely UBP''\nis complete in the sense that there are no further\ngeneral implications between these rigidity properties.\nThey ask whether the rest of the diagram\nis complete as well, cf.~\\cite[Question 4.1]{degrigST}.\nWe will show (resp. 
state) below that there are\nneither implications from left to right\n(including downwards diagonals,\ncf.~Corollaries \\ref{cor:n-free} and \\ref{cor:n-free_not_UBP}),\nnor from the second to the upper row (Theorem \\ref{thm:not_simple_UBP}).\n\n\\begin{rem}\n\\label{rem:diagram}\nUsing a standard $\\diamondsuit$-construction\nscheme for a Souslin tree (e.g., cf. \\cite[Section 2]{degrigST})\nit is not hard to construct a Souslin tree $T$ with the following two\nfeatures:\n\\begin{itemize}\n\\item On each level $T_\\alpha$ no two distinct nodes\nhave the same number of immediate successors.\nSo in particular $T$ is $n$-absolutely totally rigid\nfor every $n\\in \\omega$.\n\\item The substructure $R$ of $T$, obtained by\nrestricting the supporting set to the nodes on the limit levels of $T$\nplus the root, is a homogeneous Souslin tree.\nThen in a generic extension obtained by forcing with $T$ there are\nmany cofinal branches in $R$, and each of them gives rise to a cofinal branch\nof $T$, which is thus not a UBP tree.\n(In fact, every $\\aleph_0$-splitting Souslin tree can be extended to an\n$n$-absolutely totally rigid Souslin tree by inserting new successor levels\nsuch that every two nodes of the same height have a different number of\nimmediate successors.)\n\\end{itemize}\nThis shows that in Diagram \\ref{diagram} there can be no arrows that\npoint upwards from the two lower rows.\nSo the only question left open is whether there should be any more arrows\nbetween the two lower rows, but a construction similar to the one alluded to\nabove should also eliminate those.\n\\end{rem}\n\n\\subsection{Distinct degrees of freeness}\n\nOur next corollary of (the proof of)\nTheorem~\\ref{thm:str_hom=free_x_free} gives the separation of\nthe finite degrees of freeness, i.e.,\nit shows that the family of parametrized freeness\nconditions is properly increasing in strength.\n\n\\begin{cor}\\label{cor:n-free}\nIf there is a strongly homogeneous Souslin tree,\nthen 
there is an $n$-free, but not $n+1$-free tree.\n\\end{cor}\n\\begin{proof}\nLet the strongly homogeneous Souslin tree $T$ be decomposed\nas the tree product of $n$ free trees $S_i$ for $is_i$ in $T_\\beta$ and\nany sequence $m:n\\to n$ the intersection\n$$\\bigcap_{i1$.\nIf there is a strongly homogeneous Souslin tree,\nthen there is an $n$-free tree\nwhich is not $(n\\!-\\!1)$-absolutely rigid.\n\\end{cor}\n\\begin{proof}\nWe fix $n>1$ and\nuse the tree $R$ from the proof of Corollary~\\ref{cor:n-free}\nobtained from a strongly homogeneous tree $T$\nas the tree sum $R=\\bigoplus_{is$ and $v,w>t$ be of the same height, where $v\\ne w$.\nThen\n$$S(u,v)\\otimes S(u,w) \\cong T(u)\\otimes T(v)\\otimes T(u)\\otimes T(w)$$\nhas an uncountable antichain,\nbecause it has the square of the Souslin tree $T(u)$ as a factor.\n\\end{proof}\n\nThis result cannot be improved by simply requiring $T$ to be free, because\nby iterating the forcing with a tree product of two factors\n$n\\!+\\!1$ times, we adjoin at least $2^n$ cofinal branches.\n\nWe do have the following non-implication result for the $n$-absolute UBP\nand $2$-freeness under the stronger assumption of $\\diamondsuit$.\n\n\\begin{thm}\n\\label{thm:not_simple_UBP}\nAssume $\\diamondsuit$.\nThen there is a Souslin tree\nwhich is not 2-free but has the $n$-absolute UBP for all $n\\in\\omega$.\n\\end{thm}\n\nThe methods of proof for this theorem lie beyond the scope of this paper.\nIt uses ideas from \\cite{degrigST} and \\cite{SAE}.\nA proof-sketch can be found in \\cite[Theorem 1.6.3]{diss}.\n\n\\subsection{Further directions}\n\nAs a closing remark we mention how Diagram \\ref{diagram},\nwhich captures the implications between four families of rigidity notions and\nimplications between them, could possibly be extended.\n\n\\begin{description}\n\\item[Real rigidity] \nIn \\cite{abraham-shelah_aronszajn_trees} two Aronszajn trees are called\n\\emph{really different} if there is no isomorphism between any of 
their\nrestrictions to some club set of levels. In this vein, we could call a Souslin\ntree \n\\emph{really rigid} if all of its restrictions to club sets of levels are\nrigid. This property is clearly stronger than ordinary rigidity yet\nindependent of total rigidity (cf. Remark \\ref{rem:diagram}) and is implied\nby the unique branching property. \nAlso the variant of \\emph{real, total\n rigidity} and the $n$-absolute versions of real and of real, total rigidity\ncould be considered. \n\n\\item[Self-specializing trees] A normal tree $T$ of height $\\omega_1$ is called \\emph{special} if there is a\ncountable family $(A_n)_{n\\in\\omega}$ of antichains of $T$ that covers all of\n$T$. As $T$ is uncountable, one of the $A_n$ has to be uncountable as well,\nso a special tree $T$ is not Souslin.\nOn the other hand, every branch of $T$ meets each antichain $A_n$\nin at most one node and is therefore countable.\n\nA \\emph{self-specializing tree} is a Souslin tree $T$ that specializes itself\nby forcing a generic branch $b$ through it, i.e., in the generic extension\nobtained by adjoining $b$ to the universe, the tree $T\\setminus b$ is special.\nSelf-specializing trees can be found in models of $\\diamondsuit$.\nThey are UBP: a second cofinal branch in $T$ would prevent $T\\setminus b$\nfrom being special. But of course they are not Souslin off the generic branch, \nand they can neither be 2-absolutely really rigid nor absolutely UBP,\nbecause forcing with a special tree collapses $\\omega_1$,\nand in this second generic extension the limit levels of $T$ form an\n$\\aleph_0$-splitting tree of countable height which must be homogeneous\nby a result of Kurepa (cf.\\cite[p.102]{kurepa}). \n\nNow let us call a Souslin tree $T$ \\emph{$n$-self-specializing} if it is\n$n$-free (i.e. 
$(n\\!-\\!1)$-fold Souslin off the generic branch) and\nforcing a generic branch $\\vec{b}$ through $T^n$ makes $T\\setminus\\tilde{b}$\nspecial where $\\tilde{b}$ is the set of components of the elements of\n$\\vec{b}$. It is not yet verified but seems quite plausible\nthat one can construct an $n$-self-specializing tree under $\\diamondsuit$.\nIn the implication diagram its place could be between $n$-free\nand $(n\\!-\\!1)$-absolutely UBP, yet it is stronger than both of these\nproperties. And there would be no horizontal implications, for\nan $n$-self-specializing tree is neither $(n\\!-\\!1)$-self-specializing nor\n$(n\\!+\\!1)$-self-specializing.\n\n\n\\end{description}\nAs is clear from the outset, adding these families to Diagram\n\\ref{diagram} results in a far more complicated directed graph which is in\nparticular non-planar.\nWe leave such considerations for future work.\n\n\\subsection*{Acknowledgements}\nThanks are due to Piet Rodenburg for pointing out a flaw in the proof of Lemma\n\\ref{lm:opt_mtx_prt}.\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeugc b/data_all_eng_slimpj/shuffled/split2/finalzzeugc new file mode 100644 index 0000000000000000000000000000000000000000..bb221933f6726cc16bceb5542564c962fdaf8f8b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeugc @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nGravitational lensing is the process in which light from background galaxies is deflected as it travels towards us. The deflection is a result of the gravitation of the intervening mass.\nMeasuring the deformations in a large sample of galaxies offers a direct probe of the matter distribution in the Universe (including dark matter) and can thus be directly compared to theoretical models of structure formation. 
The statistical properties of the weak-lensing field can be assessed by a statistical analysis of either the shear field or the convergence field.\nOn the one hand, convergence is a direct tracer of the total matter distribution integrated along the line of sight, and is therefore directly linked with the theory. On the other hand, the shear (or more exactly, the reduced shear) is a direct observable and usually preferred for simplicity reasons.\n\nAccordingly, the most common method for characterising the weak-lensing field distribution is the shear two-point correlation function. It is followed very closely by the mass-aperture two-point correlation functions, which are the result of convolving the shear two-point correlation functions by a compensated filter \\citep{2pcf:schneider02} that is able to separate the E and B modes of the two-point correlation functions \\citep{wl:crittenden02}.\nHowever, gravitational clustering is a non-linear process, and in particular, the mass distribution is highly non-Gaussian at small scales. For this reason, several estimators of the three-point correlation functions have been proposed, either in the shear field \\citep{wl:bernardeau02,wl:benabed06} or using the mass-aperture filter \\citep{map:kilbinger05}. The three-point correlation functions are the lowest order statistics to quantify non-Gaussianity in the weak-lensing field and thus provide additional information on structure formation models.\n\nThe convergence field can also be used to measure the two- and three-point correlation functions and other higher-order statistics. When we assume that the mass inversion (the \ncomputation of the convergence map from the measured shear field) is properly conducted, the shear field contains the same information as the convergence maps \\citep[e.g.][]{2pcf:schneider02,3pcf:shi11}. 
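For intuition, the shear two-point correlation functions mentioned above can be estimated by a direct pair count. The following is a minimal brute-force sketch (not part of any survey pipeline; all names are illustrative), using the standard identities xi_+ = <Re(gamma gamma'*)> and xi_- = <Re(gamma gamma' exp(-4i phi))>, with phi the polar angle of the separation vector.

```python
import numpy as np

def xi_pm(x, y, g, bins):
    """Brute-force shear two-point correlation functions xi_+ / xi_-.

    x, y : galaxy positions; g : complex shear gamma_1 + i*gamma_2;
    bins : angular-separation bin edges (same units as the positions).
    O(N^2) pair count, so only suitable for small catalogues;
    bins are assumed to be non-empty.
    """
    i, j = np.triu_indices(len(g), k=1)         # all distinct pairs
    dx, dy = x[j] - x[i], y[j] - y[i]
    theta = np.hypot(dx, dy)                    # pair separation
    phi = np.arctan2(dy, dx)                    # polar angle of separation
    p = (g[i] * np.conj(g[j])).real             # per-pair xi_+ contribution
    m = (g[i] * g[j] * np.exp(-4j * phi)).real  # per-pair xi_- contribution
    idx = np.digitize(theta, bins) - 1
    nb = len(bins) - 1
    xip = np.array([p[idx == b].mean() for b in range(nb)])
    xim = np.array([m[idx == b].mean() for b in range(nb)])
    return xip, xim
```

For a spatially constant shear, xi_+ equals |gamma|^2 in every populated bin, which makes a convenient sanity check.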
While it carries the same information, the lensing signal is more compressed in the convergence maps than in the shear field, which makes it easier to extract and computationally less expensive.\nConvergence maps thus become a new tool that might bring additional constraints complementary to those that can be obtained from the shear field.\nHowever, because the weak-lensing signal is highly non-Gaussian at small scales, mass-inversion methods that use smoothing or de-noising to regularise the problem are not optimal.\n\nReconstructing convergence maps from weak lensing is a difficult task because of shape noise, irregular sampling, complex survey geometry, and the fact that the shear is not a direct observable.\nThis is an ill-posed inverse problem and requires regularisation to avoid pollution from spurious B modes.\nSeveral methods have been derived to reconstruct the projected mass distribution from the observed shear field. The first non-parametric mass reconstruction was proposed by \\cite{wl:kaiser93} and was further improved by \\cite{wl:bartelmann95}, \\cite{wl:kaiser95}, \\cite{wl:schneider95}, and \\cite{wl:squires96}. These linear inversion methods are based on smoothing with a fixed kernel, which acts as a regularisation of the inverse problem. 
Non-linear reconstruction methods were also proposed using different sets of priors and noise-regularisation techniques \\citep{wl:bridle98,wl:seitz98,wl:marshall02,wl:pires09,wl:jullo14, wl:lanusse16}.\nConvergence mass maps have been built from many surveys, including\nthe COSMOS Survey \\citep{cosmos:massey07}, \nthe Canada France Hawa\\\"i Telescope Lensing Survey CFHTLenS \\citep{cfhtlens:vanwaerbeke13}, the CFHT\/MegaCam Stripe-82 Survey \\citep{cs82:shan14}, \nthe Dark Energy Survey Science Verification DES SV \\citep{dessv:chang15,dessv:vikram15,dessv:jeffrey18}, \nthe Red Cluster Sequence Lensing Survey RCSLenS \\citep{rcslens:hildebrandt16}, and the Hyper SuprimeCam Survey \\citep{hsc:oguri18}.\nWith the exception of \\cite{dessv:jeffrey18}, who used the non-linear reconstruction proposed by \\cite{wl:lanusse16}, all these methods are based on the standard Kaiser \\& Squires method.\n \n \nIn the near future, several wide and deep weak-lensing surveys are planned: Euclid \\citep{euclid:laureijs11}, Large Synoptic Survey Telescope LSST \\citep{lsst:abell09}, and Wide Field Infrared Survey Telescope WFIRST \\citep{wfirst:green12}.\nIn particular, the Euclid satellite will survey 15 000 deg$^2$ of the sky to map the geometry of the dark Universe.\nOne of the goals of the Euclid mission is to produce convergence maps for non-Gaussianity studies and constrain cosmological parameters.\nTo do this, two different mass inversion methods are being included into the official Euclid data processing pipeline. \nThe first method is the standard Kaiser \\& Squires method (hereafter KS). Although it is well known that the KS method has several shortcomings, it is taken as the reference for cross-checking the results. The second method is a new non-linear mass-inversion method (hereafter KS+) based on the formalism developed in \\cite{wl:pires09}. The KS+ method aims at performing the mass inversion with minimum information loss. 
This is done by performing the mass inversion with no other regularisation than binning while controlling systematic effects. \n \n\nIn this paper, the performance of these two mass-inversion methods is investigated using the Euclid Flagship mock galaxy catalogue (version 1.3.3, Castander F. et al. in prep) with realistic observational effects (i.e. shape noise, missing data, and the reduced shear). The effect of intrinsic alignments is not studied in this paper because we lack simulations that would properly model intrinsic alignments.\nHowever, intrinsic alignments also need to be considered seriously because they affect second- and higher-order statistics. A contribution of several percent is expected to two-point statistics \\citep[see e.g.][]{ia:joachimi13}.\n\nWe compare the results obtained with the KS+ method to those obtained with a version of the KS method in which no smoothing step is performed other than binning. \nWe quantify the quality of the reconstruction using two-point correlation functions and moments of the convergence.\nOur tests illustrate the efficacy of the different mass-inversion methods in preserving the second-order statistics and higher-order moments.\n\n\n\n\nThe paper is organised as follows.\nIn Sect. 2 we present the weak-lensing mass-inversion problem and the standard KS method.\nSection 3 presents the KS+ method we used to correct for the different systematic effects.\nIn Sect. 4 we explain the method with which we compared these two mass-inversion methods.\nIn Sect. 5 we use the Euclid Flagship mock galaxy catalogue with realistic observational effects such as shape noise and complex survey geometry and consider the reduced shear to investigate the performance of the two mass-inversion methods. First, we derive simulations including only one issue at a time to test each systematic effect independently. 
Then we derive realistic simulations that include them all and study the systematic effects simultaneously.\nWe conclude in Sect. 6.\n\n\\section{Weak-lensing mass inversion}\n\\label{inversion}\n\n\\subsection{Weak gravitational lensing formalism}\n\\label{formalism}\nIn weak-lensing surveys, the shear field $\\gamma({\\vec{\\theta}})$ is derived from the ellipticities of the background galaxies at position {\\vec{\\theta}} in the image. The two components of the shear can be written in terms of the lensing potential $\\psi({\\vec{\\theta}})$ as \\citep[see e.g.][]{wl:bartelmann01}\n\\begin{eqnarray}\n\\label{eq:gamma_psi} \n\\gamma_1 & = & \\frac{1}{2}\\left( \\partial_1^2 - \\partial_2^2 \\right) \\psi, \\nonumber \\\\\n\\gamma_2 & = & \\partial_1 \\partial_2 \\psi,\n\\end{eqnarray}\nwhere the partial derivatives $\\partial_i$ are with respect to the angular coordinates $\\theta_i$, $i = 1,2$ representing the two dimensions of sky coordinates. \nThe convergence $\\kappa({\\vec{\\theta}})$ can also be\nexpressed in terms of the lensing potential as\n\\begin{eqnarray}\n\\label{eq:kappa_psi}\n\\kappa = \\frac{1}{2}\\left(\\partial_1^2 + \\partial_2^2 \\right) \\psi.\n\\end{eqnarray}\nFor large-scale structure lensing, assuming a spatially flat Universe, the convergence at a sky position ${\\vec{\\theta}}$ from sources at comoving distance $r$ is defined by \n\\begin{eqnarray}\n\\label{eq:kappa_r}\n\\kappa(\\vec{\\theta}, r) =\\frac{3H^2_0\\Omega_{\\rm m}}{2 c^2}\\int_0^r {\\rm d}r' \\frac{r'(r-r')}{r} \\frac{\\delta(\\vec{\\theta}, r')}{a(r')},\n\\end{eqnarray}\nwhere $H_0$ is the Hubble constant, $\\Omega_{\\rm m}$ is the matter density, $a$ is the scale factor, and $\\delta \\equiv (\\rho-\\bar\\rho)\/\\bar\\rho$ is the density contrast (where $\\rho$ and $\\bar\\rho$ are the 3D density and the mean 3D density, respectively).\nIn practice, the expression for $\\kappa$ can be generalised to sources with a distribution in redshift, or equivalently, in comoving 
distance $f(r)$, yielding\n\\begin{eqnarray}\n\\label{eq:kappa}\n\\kappa(\\vec{\\theta}) =\\frac{3H^2_0\\Omega_{\\rm m}}{2 c^2}\\int_0^{r_{\\rm H}} {\\rm d}r'p(r')r' \\frac{\\delta(\\vec{\\theta}, r')}{a(r')},\n\\end{eqnarray}\nwhere $r_{\\rm H}$ is the comoving distance to the horizon.\nThe convergence map reconstructed over a region on the sky gives us the integrated mass-density fluctuation weighted by the lensing-weight function $p(r')$,\n\\begin{eqnarray}\n\\label{eq:kappa_r}\np(r') =\\int_{r'}^{r_{\\rm H}} {\\rm d}r f(r)\\frac{r-r'}{r}.\n\\end{eqnarray}\n\n\n\n\n\\subsection{Kaiser \\& Squires method (KS)} \n\\label{ks}\n\n\n\\subsubsection{KS mass-inversion problem} \n\n\nThe weak lensing mass inversion problem consists of reconstructing the convergence $\\kappa$ from the measured shear field $\\gamma$. We can use complex notation to represent the shear field, $\\gamma = \\gamma_1 + {\\rm i} \\gamma_2$, and the convergence field, $\\kappa = \\kappa_{\\rm E} + {\\rm i} \\kappa_{\\rm B}$, with $\\kappa_{\\rm E}$ corresponding to the curl-free component and $\\kappa_{\\rm B}$ to the gradient-free component of the field, called E and B modes by analogy with the electromagnetic field.\nThen, from Eq.~(\\ref{eq:gamma_psi}) and Eq.~(\\ref{eq:kappa_psi}), we can derive the relation between the shear field $\\gamma$ and the convergence field $\\kappa$.\nFor this purpose, we take the Fourier transform of these equations and obtain\n\\begin{eqnarray}\n\\label{eq:kappa2gamma}\n\\hat \\gamma = \\hat P \\, \\hat \\kappa,\n\\end{eqnarray}\nwhere the hat symbol denotes Fourier transforms, $\\hat P = \\hat P_1 + {\\rm i} \\hat P_2$,\n\\begin{eqnarray}\n\\hat{P_1}(\\vec{\\ell}) & = & \\frac{\\ell_1^2 - \\ell_2^2}{\\ell^2}, \\nonumber \\\\\n\\hat{P_2}(\\vec{\\ell}) & = & \\frac{2 \\ell_1 \\ell_2}{\\ell^2},\n\\end{eqnarray}\nwith $\\ell^2 \\equiv \\ell_1^2 + \\ell_2^2$ and $\\ell_i$ the wave numbers corresponding to the angular coordinates $\\theta_i$. 
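The lensing-weight function p(r') defined above can be checked numerically. The sketch below is illustrative only (the distance grid and the tabulated source distribution are made-up numbers): for a narrow source shell centred on a comoving distance r_s, p(r') should reduce to the single-plane efficiency (r_s - r')/r_s.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule, written out to avoid NumPy version differences."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def lensing_weight(r_prime, r, f):
    """p(r') = int_{r'}^{r_H} dr f(r) (r - r') / r for a source
    distribution f tabulated on the grid r and normalised so that
    int f dr = 1."""
    m = r >= r_prime
    return trapezoid(f[m] * (r[m] - r_prime) / r[m], r[m])

# Narrow source shell around r_s = 3000 (arbitrary comoving units).
r = np.linspace(1.0, 4000.0, 40001)
f = np.where((r >= 2990.0) & (r <= 3010.0), 1.0, 0.0)
f = f / trapezoid(f, r)          # normalise to unit integral
```

lensing_weight(1000.0, r, f) is then close to the single-plane value (3000 - 1000)/3000 = 2/3, and p decreases monotonically from p(0) = 1 to zero beyond the source shell.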
\n\n$\\hat P$ is a unitary operator. The inverse operator is its complex conjugate $\\hat P^* = \\hat P_1 - {\\rm i} \\hat P_2$, as shown by \\cite{wl:kaiser93},\n\\begin{eqnarray}\n\\label{eq:gamma2kappa}\n\\hat \\kappa = \\hat P^* \\, \\hat \\gamma.\n\\end{eqnarray}\nWe note that to recover $\\kappa$ from $\\gamma$, there is a degeneracy when $\\ell_1 = \\ell_2 = 0$. Therefore the mean value of $\\kappa$ cannot be recovered if only shear information is available. This is the so-called mass-sheet degeneracy \\citep[see e.g.][for a discussion]{wl:bartelmann95}.\nIn practice, we impose that the mean convergence vanishes across the survey by setting the reconstructed $\\ell = 0$ mode to zero. This is a reasonable assumption for large-field reconstruction \\citep[e.g.][]{wl:seljak98}.\n\n\nWe can easily derive an estimator of the E-mode and B-mode convergence in the Fourier domain,\n\\begin{eqnarray}\n\\label{eq:fourier}\n\\hat{\\tilde \\kappa}_{\\rm E} &=& \\hat P_1 \\hat \\gamma_1 + \\hat P_2 \\hat \\gamma_2,\\\\ \\nonumber \n\\hat{\\tilde \\kappa}_{\\rm B} &=& - \\hat P_2 \\hat \\gamma_1 + \\hat P_1 \\hat \\gamma_2.\n\\end{eqnarray}\nBecause weak lensing arises from a scalar potential (the lensing potential $\\psi$), it can be shown that weak lensing only produces E modes. However, intrinsic alignments and imperfect corrections of the point spread function (PSF) generally generate both E and B modes. The presence of B modes can thus be used to test for residual systematic effects in current weak-lensing surveys.\n\n\\subsubsection{Missing-data problem in weak lensing}\nThe shear is only sampled at the discrete positions of the galaxies where the ellipticity is measured. 
\nThe first step of the mass-inversion method is therefore to bin the observed ellipticities of galaxies on a regular pixel grid to create what we refer to as the observed shear maps $\\gamma^{\\rm{obs}}$.\nSome regions remain empty because various masks were applied to the data, such as the masking-out of bright stars or camera CCD defects. In such cases, the shear is set to zero in the original KS method,\n\\begin{eqnarray}\n\\label{eq:mask}\n\\gamma^{\\rm{obs}} &=& M \\gamma^{\\rm n},\n\\end{eqnarray}\nwith $M$ the binary mask (i.e. $M = 1$ when we have information at the pixel, $M = 0$ otherwise) and $\\gamma^{\\rm n}$ the noisy shear maps.\nAs the shear at any sky position is non-zero in general, this introduces errors in the reconstructed convergence maps.\nSome specific methods address this problem by discarding masked pixels at the noise-regularisation step \\cite[e.g.][]{cfhtlens:vanwaerbeke13}. However, as explained previously, this intrinsic filtering results in substantial signal loss at small scales. Instead, inpainting techniques are used in the KS+ method to fill the masked regions (see Appendix A).\n\n\n\n\\subsubsection{Weak-lensing shape noise}\n\\label{sect_shape_noise}\nThe gravitational shear is derived from the ellipticities of the background galaxies.\nHowever, the galaxies are not intrinsically circular; therefore, their measured ellipticity is a combination of their intrinsic ellipticity and the gravitational lensing shear.\nThe shear is also subject to measurement noise and uncertainties in the PSF correction. 
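The leakage caused by setting masked pixels to zero can be made concrete: inverting a complete, pure E-mode shear field yields kappa_B of roughly machine precision, while the same inversion after zero-filling a rectangular hole produces spurious B modes of much larger amplitude. A self-contained sketch (illustrative field and mask, not the KS+ inpainting):

```python
import numpy as np

def ks_b_mode(g1, g2):
    """B-mode convergence of the Kaiser & Squires inversion."""
    n1, n2 = g1.shape
    l1 = np.fft.fftfreq(n1)[:, None]
    l2 = np.fft.fftfreq(n2)[None, :]
    ell2 = l1**2 + l2**2
    ell2[0, 0] = 1.0
    p1 = (l1**2 - l2**2) / ell2
    p2 = 2.0 * l1 * l2 / ell2
    return np.fft.ifft2(-p2 * np.fft.fft2(g1) + p1 * np.fft.fft2(g2)).real

# Pure E-mode shear maps generated from a random convergence field.
rng = np.random.default_rng(2)
kappa = rng.normal(size=(128, 128))
l1 = np.fft.fftfreq(128)[:, None]
l2 = np.fft.fftfreq(128)[None, :]
ell2 = l1**2 + l2**2
ell2[0, 0] = 1.0
kh = np.fft.fft2(kappa)
g1 = np.fft.ifft2((l1**2 - l2**2) / ell2 * kh).real
g2 = np.fft.ifft2(2.0 * l1 * l2 / ell2 * kh).real

mask = np.ones((128, 128))
mask[40:80, 30:60] = 0.0        # a rectangular hole with no galaxies
b_full = ks_b_mode(g1, g2)
b_masked = ks_b_mode(mask * g1, mask * g2)
```

The rms of b_masked dwarfs that of b_full, which is why the zero values are said to leak during the inversion.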
All these effects can be modelled as an additive noise, $N = N_1 + {\\rm i} N_2$,\n\\begin{eqnarray}\n\\gamma^{\\rm n} &=& \\gamma+ N\n\\label{eq:noise1}\n\\end{eqnarray}\nThe noise terms $N_1$ and $N_2$ are assumed to be Gaussian and uncorrelated with zero mean and standard deviation,\n\\begin{eqnarray}\n\\sigma_{\\rm n}^i = \\frac{\\sigma_{\\rm \\epsilon}}{\\sqrt{N_{\\rm g}^i}}, \n\\label{eq:noise2}\n \\end{eqnarray}\nwhere $N_{\\rm g}^i$ is the number of galaxies in pixel $i$.\nThe root-mean-square shear dispersion per galaxy, $\\sigma_{\\rm \\epsilon}$, arises both from the measurement uncertainties and the intrinsic shape dispersion of galaxies. The Gaussian assumption is a reasonable assumption, and $\\sigma_{\\rm \\epsilon}$ is set to 0.3 for each component as is generally found in weak-lensing surveys \\citep[e.g.][]{sigmae:leauthaud07, sigmae:schrabback15, sigmae:schrabback18}. The surface density of usable galaxies is expected to be around $n_{\\rm g} = 30$ gal. arcmin$^{-2}$ for the Euclid Wide survey \\citep{euclid:cropper13}.\n\n\nThe derived convergence map is also subject to an additive noise,\n\\begin{eqnarray}\n\\hat{\\tilde \\kappa}^{\\rm n} = \\hat P^* \\, \\hat{ \\gamma}^{\\rm n} = \\hat \\kappa + \\hat P^* \\, \\hat{N}.\n\\label{kappan}\n\\end{eqnarray}\nIn particular, the E component of the convergence noise is\n\\begin{eqnarray}\nN_{\\rm E}= N_1* P_1 + N_2 * P_2 , \n\\label{eq:noise3}\n\\end{eqnarray}\nwhere the asterisk denotes the convolution operator, and $P_1$ and $P_2$ are the inverse Fourier transforms of $\\hat{P_1}$ and $\\hat{P_2}$.\nWhen the shear noise terms $N_1$ and $N_2$ are Gaussian, uncorrelated, and with a constant standard deviation across the field, the convergence noise is also Gaussian and uncorrelated.\nIn practice, the number of galaxies varies slightly across the field. 
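For the survey values quoted above (sigma_eps = 0.3 per component, n_g = 30 gal arcmin^-2), the per-pixel noise level follows directly from sigma_n = sigma_eps / sqrt(N_g). A quick sketch (the pixel size is an assumed example value):

```python
import numpy as np

def pixel_noise_sigma(sigma_eps, n_gal_arcmin2, pixel_arcmin):
    """Shear noise rms per pixel, sigma_n = sigma_eps / sqrt(N_g),
    with N_g the expected number of galaxies falling in the pixel."""
    n_g = n_gal_arcmin2 * pixel_arcmin**2
    return sigma_eps / np.sqrt(n_g)

# Euclid-like numbers from the text, with an assumed 1 arcmin pixel.
sigma_n = pixel_noise_sigma(0.3, 30.0, 1.0)
```

With 1 arcmin pixels this gives sigma_n of about 0.055 per shear component; the noise rms scales inversely with the pixel side, since N_g grows with the pixel area.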
The variances of $N_1$ and $N_2$ might also be slightly different, which can be modelled by different values of $\\sigma_{\\epsilon}$ for each component. These effects introduce noise correlations in the convergence noise maps, but they were found to remain negligible compared to other effects studied in this paper.\n\n\nIn the KS method, smoothing with a Gaussian filter is frequently applied to the background ellipticities before mass inversion to regularise the solution.\nAlthough performed in most applications of the KS method, this noise regularisation step is not mandatory. It was introduced to avoid infinite noise and divergence at very small scales.\nHowever, the pixelisation already provides an intrinsic regularisation. This means that there is no need for an additional noise regularisation prior to the inversion. Nonetheless, for specific applications that require denoising in any case, the filtering step can be performed before or after the mass inversion.\n\n\n\\section{Improved Kaiser \\& Squires method (KS+)} \n\\label{iks}\n\nSystematic effects in mass-inversion techniques must be fully controlled in order to use convergence maps as cosmological probes for future wide-field weak-lensing experiments such as \\textit{Euclid}\\xspace. \nWe introduce the KS+ method based on the formalism developed in \\cite{wl:pires09} and \\cite{wl:jullo14}, which integrates the corrections necessary for realistic, imperfect measurements.\nWe summarise its improvements over KS in this section and evaluate its performance in Sect. \\ref{results_1}.\n\n\nIn this paper, the mass-mapping formalism is developed in the plane. \nThe mass inversion can also be performed on the sphere, as proposed in \\cite{sphere:pichon09} and \\cite{sphere:chang18}, and the extension of the KS+ method to the curved sky is being investigated. 
\nHowever, the computation time and memory required to process the spherical mass inversion impose limitations on the convergence-map resolution and\/or the complexity of the algorithm.\nThus, planar mass inversions remain important for reconstructing convergence maps with a good resolution and probing the non-Gaussian features of the weak-lensing field (e.g. for peak-count studies).\n\n\\subsection{Missing data}\n\\label{KS+}\n\nWhen the weak-lensing shear field $\\gamma$ is sampled on a grid of $N \\times N$ pixels, we can describe the complex shear and convergence fields by their respective matrices. In the remainder of this paper, the notations $\\bm \\gamma$ and $\\bm \\kappa$ stand for the matrix quantities.\n\n\n\n\nIn the standard version of the KS method, the pixels with no galaxies are set to zero.\nFig.~\\ref{shear_mask} shows an example of simulated shear maps without shape noise derived from the Euclid Flagship mock galaxy catalogue (see Sect. \\ref{sect_simu} for more details). The upper panels of Fig.~\\ref{shear_mask} show the two components of the shear with zero values (displayed in black) corresponding to the mask of the missing data.\nThese zero values generate significant leakage during the mass inversion.\n\\begin{figure*}[h]\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_mask,height=6.cm,width=7.cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_mask,height=6.cm,width=7.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_inp_mask,height=6.cm,width=7.cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_inp_mask,height=6.cm,width=7.cm,clip=}\n}}\n}\n\\caption{Simulated shear maps with missing data covering a field of $5^\\circ \\times 5^\\circ$. The left panels show the first component of the shear $\\gamma_1$, and the right panels present the second component of the shear $\\gamma_2$. The upper panels show the incomplete shear maps, where the pixels with no galaxies are set to zero (displayed in black). 
The lower panels show the result of the inpainting method that allows us to fill the gaps judiciously.}\n\\label{shear_mask}\n\\end{figure*}\n\n\nWith KS+, the problem is reformulated by including additional assumptions to regularise it. \nThe convergence $\\bm{\\kappa}$ can be analysed using a transformation $\\mathbf \\Phi$, which yields a set of coefficients $\\bm \\alpha = \\mathbf \\Phi^{\\rm T} \\bm{\\kappa}$ ($\\mathbf \\Phi$ is an orthogonal matrix operator, and $\\mathbf \\Phi^{\\rm T}$ represents the transpose matrix of $\\mathbf \\Phi$).\nIn the case of the Fourier transformation, $\\mathbf \\Phi^{\\rm T}$ would correspond to the discrete Fourier transform (DFT) matrix, and $\\bm \\alpha$ would be the Fourier coefficients of $\\bm{\\kappa}$.\nThe KS+ method uses a sparsity prior, that is, it assumes that there is a transformation $\\mathbf \\Phi$ in which the convergence $\\bm{\\kappa}$ can be decomposed into a set of coefficients $\\bm \\alpha$, most of which are close to zero. \nIn this paper, $\\mathbf \\Phi$ was chosen to be the discrete cosine transform (DCT) following \\cite{wl:pires09}. The DCT expresses a signal in terms of a sum of cosine functions with different frequencies and amplitudes. It is similar to the DFT, but uses smoother boundary conditions, which provides a sparser representation; hence the use of the DCT in JPEG compression.\n\nWe can rewrite the relation between the observed shear $\\bm{ \\gamma}^{\\rm{obs} }$ and the noisy convergence $\\bm{\\kappa}^{\\rm n}$ as\n\\begin{eqnarray}\n\\bm{\\gamma}^{\\rm{obs}} =\\mathbf M \\mathbf{P} \\bm{\\kappa}^{\\rm n},\n\\label{miss}\n\\end{eqnarray}\nwith $\\mathbf M$ the mask operator and $\\mathbf P$ the KS mass-inversion operator.\nThere are infinitely many convergence maps $\\bm{\\kappa}^{\\rm n}$ that can fit the observed shear $\\bm \\gamma^{\\rm{obs}}$.\nWith KS+, we first impose that the mean convergence vanishes across the survey, as in the KS method. 
\nThen, among all possible solutions, KS+ searches for the sparsest solution $\\tilde{\\bm{\\kappa}}^{\\rm n}$ in the DCT $\\mathbf \\Phi$ (i.e. the convergence $\\bm{\\kappa}^{\\rm n}$ that can be represented with the fewest large coefficients). \nThe solution of this mass-inversion problem is obtained by solving\n\\begin{equation}\n\\min_{\\bm{\\tilde{\\kappa}}^{\\rm n}} \\| \\mathbf \\Phi^{\\rm T} \\bm{\\tilde{\\kappa}}^{\\rm n} \\|_0 \\textrm{ subject to } \\parallel \\bm{\\gamma}^{\\rm{obs}} - \\mathbf M \\mathbf P \\bm{\\tilde{\\kappa}}^{\\rm n} \\parallel^2 \\le \\sigma^2,\n\\label{eq1}\n\\end{equation}\nwhere $|| z ||_0$ is the pseudo-norm, that is, the number of non-zero entries in $z$, $|| z ||$ is the classical $l_2$ norm (i.e. $|| z || =\\smash{\\sqrt{ \\sum_{k}(z_{k})^2}}$), and $\\sigma$ stands for the standard deviation of the input shear map measured outside the mask.\nThis optimisation task can be solved through an iterative thresholding algorithm called morphological component analysis (MCA), which was introduced by \\cite{mca:elad05} and adapted to the weak-lensing problem in \\cite{wl:pires09}. \n\n\\cite{wl:pires09} used an additional constraint to force the B modes to zero. This is optimal when the shear maps have no B modes. However, any real observation has some residual B modes as a result of intrinsic alignments, imperfect PSF correction, etc. The B-mode power is then transferred to the E modes, which degrades the E-mode convergence reconstruction. Here we instead leave the B modes free and set an additional constraint on the power spectrum of the convergence map. \nTo this end, we used a wavelet transform to decompose the convergence maps into a set of aperture mass maps using the starlet transform algorithm \\citep{starck:book98,starck:book02}. 
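As an illustration of this iterative-thresholding idea, the toy sketch below first shows that a smooth test field is well represented by 1\% of its DCT coefficients, and then fills artificial gaps by alternating a decreasing DCT hard threshold with data fidelity outside the mask. This is a schematic stand-in for the actual MCA-based KS+ algorithm; the test field, threshold schedule, and all names are our assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)

# Toy "convergence" map: a smooth, band-limited Gaussian random field.
npix = 128
k = np.fft.fftfreq(npix)
k2 = k[:, None]**2 + k[None, :]**2
amp = np.exp(-k2 / (2 * 0.02**2))                  # steep spectrum (assumption)
kappa = np.real(np.fft.ifft2(amp * np.fft.fft2(rng.normal(size=(npix, npix)))))
kappa /= kappa.std()

# Sparsity: keeping only the 1% largest DCT coefficients loses little signal.
alpha = dctn(kappa, norm='ortho')
cut = np.quantile(np.abs(alpha), 0.99)
kappa_1pc = idctn(np.where(np.abs(alpha) >= cut, alpha, 0.0), norm='ortho')
err_sparse = np.linalg.norm(kappa_1pc - kappa) / np.linalg.norm(kappa)

# Toy inpainting: 20% of the pixels are masked; iterate a DCT hard
# threshold (decreasing linearly towards zero, in the spirit of MCA)
# while keeping the observed pixels fixed.
mask = rng.random((npix, npix)) > 0.2
x = np.where(mask, kappa, 0.0)
lam_max = np.abs(dctn(x, norm='ortho')).max()
n_iter = 50
for i in range(n_iter):
    lam = lam_max * (1.0 - (i + 1) / n_iter)
    a = dctn(x, norm='ortho')
    x = idctn(np.where(np.abs(a) >= lam, a, 0.0), norm='ortho')
    x = np.where(mask, kappa, x)                   # data fidelity outside gaps
err_gaps = np.linalg.norm((x - kappa)[~mask]) / np.linalg.norm(kappa[~mask])
```

The linearly decreasing threshold mimics the MCA strategy of progressively admitting smaller coefficients as the gaps fill in.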
\nThen, the constraint consists of renormalising the standard deviation (or equivalently, the variance) of each aperture mass map inside the mask regions to the values measured in the data outside the masks, and then reconstructing the convergence through the inverse wavelet transform.\nThe variance per scale, which corresponds to the power spectrum at that scale, allows us to constrain a broadband power spectrum of the convergence $\\bm \\kappa$ inside the gaps.\n\n\nAdding the power spectrum constraints yields the final sparse optimisation problem,\n\\begin{equation}\n\\min_{\\bm{\\tilde{\\kappa}}^{\\rm n}} \\| \\mathbf \\Phi^{\\rm T} \\bm{\\tilde{\\kappa}}^{\\rm n} \\|_0 \\textrm{ s.t. } \\parallel \\bm{\\gamma}^{\\rm{obs}} - \\mathbf M \\mathbf P \\mathbf{W^{\\rm T}} \\mathbf Q \\mathbf{W} \\bm{\\tilde{\\kappa}}^{\\rm n} \\parallel^2 \\le \\sigma^2,\n\\label{eq2}\n\\end{equation}\nwhere $\\mathbf{W}$ is the forward wavelet transform, $\\mathbf{W^{\\rm T}}$ its inverse transform, and $\\mathbf Q$ the linear operator used to impose the power spectrum constraint.\nMore details about the KS+ algorithm are given in Appendix A.\n\nThe KS+ method allows us to reconstruct the in-painted convergence maps and the corresponding in-painted shear maps, where the empty pixels are replaced by non-zero values. These interpolated values preserve the continuity of the signal and reduce the signal leakage during the mass inversion (see lower panels of Fig.~\\ref{shear_mask}). \nThe quality of the convergence-map reconstruction with respect to missing data is evaluated in Sect. 
\\ref{results_1}.\nAdditionally, the new constraint allows us to use the residual B modes of the reconstructed maps to test for the presence of residual systematic effects and possibly validate the shear measurement processing chain.\n\n\n\n\n\\subsection{Field border effects}\n\\label{border}\n\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_border,height=6.cm,width=7.cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_border,height=6.cm,width=7.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_inp_border,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_inp_border,height=6.cm,width=7cm,clip=}\n}}\n}\n\\caption{Upper panels: Simulated shear maps covering a field of $5^\\circ \\times 5^\\circ$, extended to a field of $10^\\circ \\times 10^\\circ$ by zero padding (zero values are displayed in black). Lower panels: Result of the inpainting method that allows us to extrapolate the shear on the borders. The left panels show the first component of the shear $\\gamma_1$, and the right panels present the second component of the shear $\\gamma_2$.}\n\\label{shear_border}\n\\end{figure*}\n\n\n\nThe KS and KS+ mass-inversion methods relate the convergence and the shear fields in Fourier space.\nHowever, the discrete Fourier transform implicitly assumes that the image is periodic along both dimensions. Because there is no reason for opposite borders to be alike, the periodic image generally presents strong discontinuities across the frame border. These discontinuities cause several artefacts at the borders of the reconstructed convergence maps. The field border effects can be addressed by removing the image borders, which throws away a large fraction of the data.\nDirect finite-field mass-inversion methods have also been proposed \\citep[e.g.][]{wl:seitz96,wl:seitz01}. 
Although unbiased, convergence maps reconstructed using these methods are noisier than those obtained with the KS method.\nIn the KS+ method, the problem of borders is solved by taking a larger support for the image and by considering the borders as masked regions to be in-painted.\nThe upper panels of Fig.~\\ref{shear_border} show the two components of a shear map covering $5^{\\circ} \\times 5^{\\circ}$ and extended to a field of $10^{\\circ} \\times 10^{\\circ}$.\nThe inpainting method is then used to recover the shear at the field boundaries, as shown in the lower panels of Fig.~\\ref{shear_border}. \nAfter the mass inversion is performed, the additional borders are removed. This technique reduces the field border effects by pushing the border discontinuities farther away. \n\n\n\\subsection{Reduced shear}\n\\label{reduced}\nIn Sect. \\ref{ks} we assumed knowledge of the shear, in which case the mass inversion is linear.\nIn practice, the observed galaxy ellipticity is not induced by the shear $\\gamma$, but by the reduced shear $g$ that depends on the convergence $\\kappa$ corresponding to that particular line of sight,\n\\begin{eqnarray}\ng \\equiv \\frac{\\gamma}{1-\\kappa}.\n\\label{reducedshear}\n\\end{eqnarray}\nWhile the difference between the shear $\\gamma$ and the reduced shear $g$ is small in the regime of cosmic shear ($\\kappa \\ll 1$), neglecting it might nevertheless cause a measurable bias at small angular scales \\citep[see e.g.][]{wl:white05, wl:shapiro09}.\nIn the standard version of KS, the Fourier estimators are only valid when the convergence is small ($\\kappa \\ll 1$),\nand they no longer hold near the centre of massive galaxy clusters.\nThe mass-inversion problem becomes non-linear, and it is therefore important to properly account for reduced shear.\n\nIn the KS+ method, an iterative scheme is used to recover the E-mode convergence map, as proposed in \\cite{wl:seitz95}. 
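In outline, such a scheme alternates the linear mass inversion with a reduced-shear correction based on Eq.~(\ref{reducedshear}). The following is a minimal numpy illustration of this fixed-point iteration, not the actual KS+ implementation; `ks_inverse` is our own stand-in for the linear planar inversion, and the self-test at the end uses a synthetic convergence blob of our choosing:

```python
import numpy as np

def ks_inverse(g1, g2):
    """Linear planar mass inversion: returns kappa_E + i*kappa_B."""
    n1, n2 = g1.shape
    k1 = np.fft.fftfreq(n1)[:, None]
    k2 = np.fft.fftfreq(n2)[None, :]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0                         # avoid dividing by zero at k = 0
    p_conj = ((k1**2 - k2**2) - 2j * k1 * k2) / k_sq
    kappa_hat = p_conj * np.fft.fft2(g1 + 1j * g2)
    kappa_hat[0, 0] = 0.0                    # impose a zero-mean convergence
    return np.fft.ifft2(kappa_hat)

def reduced_shear_inversion(g1, g2, n_iter=3):
    """Fixed-point iteration: shear estimate gamma = g * (1 - kappa_E)."""
    kappa_e = np.zeros_like(g1)
    for _ in range(n_iter):
        kappa_e = ks_inverse(g1 * (1.0 - kappa_e), g2 * (1.0 - kappa_e)).real
    return kappa_e

# Self-consistency demo: synthesise a reduced-shear field from a known
# zero-mean convergence blob and recover it.
npix = 64
x = np.arange(npix) - npix / 2
kappa_true = 0.2 * np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * 8.0**2))
kappa_true -= kappa_true.mean()
k1 = np.fft.fftfreq(npix)[:, None]
k2 = np.fft.fftfreq(npix)[None, :]
k_sq = k1**2 + k2**2
k_sq[0, 0] = 1.0
p_hat = ((k1**2 - k2**2) + 2j * k1 * k2) / k_sq     # forward KS operator
gamma = np.fft.ifft2(p_hat * np.fft.fft2(kappa_true))
g1, g2 = gamma.real / (1 - kappa_true), gamma.imag / (1 - kappa_true)
kappa_rec = reduced_shear_inversion(g1, g2, n_iter=5)
```

Because each pass multiplies the residual by a factor of order $\kappa$, a handful of iterations suffices for cosmic-shear amplitudes.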
The method consists of solving the linear inverse problem iteratively (see Eq.~\\ref{eq:fourier}), using at each iteration the previous estimate of the E-mode convergence to correct the reduced shear using Eq.~(\\ref{reducedshear}). \nEach iteration then provides a better estimate of the shear. This iterative algorithm was found by \\cite{wl:jullo14} to converge quickly to the solution (about three iterations). The KS+ method uses the same iterative scheme to correct for reduced shear, and we find it to be adequate in the case of large-scale structure lensing.\n\n\\subsection{Shape noise}\n\n\\label{section:shearnoise}\n\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_512,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=kappa_noise,height=6.cm,width=7.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_g5,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=kappa_mrlens,height=6.cm,width=7cm,clip=}\n}}\n\n}\n\\caption{Shape-noise effect. The upper panels show the original E-mode convergence $\\kappa$ map (left) and the noisy convergence map with $n_{\\rm g} = 30$ gal. arcmin$^{-2}$ (right). The lower panels show the reconstructed maps using a linear Gaussian filter with a kernel size of $\\sigma = 3\\arcmin$ (left) and the non-linear MRLens filtering using $\\alpha_{\\rm FDR} = 0.05$ (right). The field is $5^\\circ \\times 5^\\circ$ downsampled to $512 \\times 512$ pixels.}\n\n\\label{kappa_noise}\n\\end{figure*}\n\n\nIn the original implementation of KS, the shear maps are first regularised with a smoothing window (i.e. a low-pass filter) to obtain a smoothed version of the shear field. Then, Eq.~(\\ref{eq:fourier}) is applied to derive the convergence maps. \nIn contrast, the KS+ method aims at producing very general convergence maps for many applications. In particular, it produces noisy maps with minimum information loss. \n\nHowever, for specific applications (e.g. 
galaxy cluster detection and characterisation), it can be useful to add a de-noising step, using any of the many regularisation techniques that have been proposed \\citep{wl:bridle98,wl:seitz98,wl:marshall02, wl:starck06, wl:lanusse16}. \nTo compare the results of the KS and KS+ methods on noisy maps, we used a linear Gaussian filter\nand the non-linear MRLens filter \\citep{wl:starck06} for noise suppression.\nFig.~\\ref{kappa_noise} illustrates the effect of shape noise on reconstructing the convergence map.\nThe upper panels show one E-mode convergence map reconstructed from noise-free (left) and noisy (right) shear data. The convergence map is dominated by the noise. The lower panels show the results of the Gaussian filter (left) and MRLens filter (right). The Gaussian filter gives a smoothed version of the noisy convergence map, whose level of smoothness is set by the width of the Gaussian ($\\sigma$). Thus, the amplitudes of the over-densities (in blue) are systematically lowered by the Gaussian filter. \nIn contrast, \nthe MRLens filter uses a sparsity prior to better recover the amplitudes of the structures and uses a parameter, the false-discovery rate ($\\alpha_{\\rm FDR}$), to control the average fraction of false detections (i.e. pixels that are truly inactive but declared active) among the total number of detections \\citep{benjamini95}. For some other applications (e.g. 
two- or three-point correlation), the integrity of the reconstructed noisy convergence maps might be essential and this denoising step can be avoided.\n\n\n\\section{Method}\n\\label{method}\n\n\n\n\\subsection{Comparing second-order statistics}\n\\label{2pcf}\nThe most common tools for constraining cosmological parameters in weak-lensing studies are the shear two-point correlation functions.\nFollowing \\cite{wl:bartelmann01}, they are defined by considering pairs of positions $\\vec{\\vartheta}$ and $\\vec{\\theta+\\vartheta}$, and defining the tangential and cross-component of the shear $\\gamma_{\\rm t}$ and $\\gamma_{\\times}$ at position $\\vec{\\vartheta}$ for this pair as\n\\begin{eqnarray}\n\\gamma_{\\rm t} &=& -\\operatorname{\\mathcal{R}e}(\\gamma \\operatorname{e}^{-2{\\rm i}\\varphi}),\\\\\n\\gamma_{\\times} &=& -\\operatorname{\\mathcal{I}m}(\\gamma \\operatorname{e}^{-2{\\rm i}\\varphi}),\n\\end{eqnarray}\nwhere $\\varphi$ is the polar angle of the separation vector $\\vec{\\theta}$.\nThen we define the two independent shear correlation functions\n\\begin{eqnarray}\n\\xi_\\pm(\\theta) &:=& \\langle \\gamma_{\\rm t} \\gamma_{\\rm t}' \\rangle \\pm \\langle \\gamma_\\times \\gamma_\\times' \\rangle \\\\\n&=& \\frac{1}{2\\pi} \\int_0^{\\infty} d\\ell \\, \\ell \\, P_{\\rm \\kappa}(\\ell) \\,{\\rm J}_{0,4}(\\ell \\theta) ,\n\\end{eqnarray}\nwhere the Bessel function ${\\rm J}_0$ $({\\rm J}_4)$ corresponds to the plus (minus) correlation function, $P_{\\kappa}(\\ell)$ is the power spectrum of the projected matter density, and $\\ell$ is the Fourier variable on the sky.\nWe can also compute the two-point correlation functions of the convergence ($\\kappa = \\kappa_{\\rm E} + \\rm i \\kappa_{\\rm B}$), defined as\n\\begin{eqnarray}\n\\xi_{\\kappa_{\\rm E}}(\\theta) = \\langle \\kappa_{\\rm E} \\kappa_{\\rm E}' \\rangle,\\nonumber \\\\\n\\xi_{\\kappa_{\\rm B}}(\\theta) = \\langle \\kappa_{\\rm B} \\kappa_{\\rm B}' \\rangle.\n\\end{eqnarray}\nWe can verify 
that these two quantities are related by \\citep{2pcf:schneider02}:\n\\begin{eqnarray}\n \\xi_+(\\theta) = \\xi_{\\kappa_{\\rm E}}(\\theta) + \\xi_{\\kappa_{\\rm B}}(\\theta).\n \\end{eqnarray}\nWhen the B modes in the shear field are consistent with zero, the two-point correlation of the shear ($\\xi_+$) is equal to the two-point correlation of the convergence $(\\xi_{\\kappa_{\\rm E}})$. Any differences between the two are then due to the errors introduced by the mass inversion when going from shear to convergence.\n\nWe computed these two-point correlation functions using the tree code \\texttt{athena} \\citep{athena:kilbinger14}. The shear two-point correlation functions were computed by averaging over pairs of galaxies of the mock galaxy catalogue, whereas the convergence two-point correlation functions were computed by averaging over pairs of pixels in the convergence map. The convergence two-point correlation functions can only be computed for separation vectors $\\vec{\\theta}$ allowed by the binning of the convergence map. \n\n\n\\subsection{Comparing higher-order statistics}\n\\label{hos}\n\nTwo-point statistics cannot fully characterise the weak-lensing field at small scales where it becomes non-Gaussian \\citep[e.g.][]{pt:bernardeau02}. Because the small-scale features carry important cosmological information, we computed the third-order moment, $\\langle \\kappa_{\\rm E}^3 \\rangle$, and the fourth-order moment, $\\langle \\kappa_{\\rm E}^4\\rangle$, of the convergence. Computations were performed on the original convergence maps provided by the Flagship simulation, as well as on the convergence maps reconstructed from the shear field with the KS and KS+ methods. \nWe evaluated the moments of convergence at various scales by computing aperture mass maps \\citep{map:schneider96, map:schneider97}. Aperture mass maps are typically obtained by convolving the convergence maps with a filter function of a specific scale (i.e. the aperture radius). 
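Schematically, such scale-dependent moments can be obtained by smoothing the map with kernels of increasing size and taking moments of the filtered maps. The sketch below is a simplified illustration using plain Gaussian apertures on a skewed toy field rather than the compensated filters or wavelet transform of this work; the field and all names are our assumptions:

```python
import numpy as np

def gaussian_smooth(img, sigma_pix):
    """FFT-based Gaussian smoothing (periodic boundaries)."""
    n = img.shape[0]
    k = np.fft.fftfreq(n)
    k2 = k[:, None]**2 + k[None, :]**2
    kernel_hat = np.exp(-2.0 * np.pi**2 * k2 * sigma_pix**2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_hat))

def moments_per_scale(kappa, scales_pix):
    """<kappa^3> and <kappa^4> of the smoothed map at each aperture scale."""
    out = []
    for s in scales_pix:
        sm = gaussian_smooth(kappa, s)
        sm -= sm.mean()
        out.append((np.mean(sm**3), np.mean(sm**4)))
    return out

# Skewed toy field standing in for the non-Gaussian convergence.
rng = np.random.default_rng(2)
kappa = rng.lognormal(mean=0.0, sigma=0.5, size=(256, 256))
kappa -= kappa.mean()
moments = moments_per_scale(kappa, scales_pix=[2, 4, 8, 16])   # dyadic scales
```

Larger apertures average over more structure, so the moments shrink with scale, which is why the comparisons below are made scale by scale.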
We performed this here by means of a wavelet transform using the starlet transform algorithm \\citep{starck:book98,starck:book02}, which simultaneously produces a set of aperture mass maps on dyadic (powers of two) scales (see Appendix A for more details). Leonard et al. (2012) demonstrated that the aperture mass is formally identical to a wavelet transform at a specific scale and derived the aperture mass filter corresponding to this transform. The wavelet transform offers significant advantages over the usual aperture mass algorithm in terms of computation time, providing speed-up factors of about 5 to 1200 depending on the scale.\n\n\n\\subsection{Numerical simulations}\n\\label{sect_simu}\n\n\n\nWe used the Euclid Flagship mock galaxy catalogue version 1.3.3\n(Castander F. et al., in prep) derived from an N-body cosmological simulation \\citep{flagship:potter17} with parameters $\\Omega_{\\rm m}=0.319$, $\\Omega_{\\rm b} = 0.049$, $\\Omega_{\\Lambda} = 0.681$, $\\sigma_8=0.83$, $n_{\\rm s}=0.96$, $h=0.67$, and a particle mass of \\smash{$m_{\\rm p} \\sim 2.398 \\times10^9\\,\\text{\\ensuremath{\\textup{M}_{\\odot}}} h^{-1}$}. The galaxy light-cone catalogue contains 2.6 billion galaxies over $5000\\,\\deg^2$, and it extends up to $z=2.3$. 
It has been built using a hybrid halo occupation distribution and halo abundance matching (HOD+HAM) technique, whose galaxy-clustering properties were discussed in detail in \\cite{flagship:crocce15}.\nThe lensing properties were computed using the Born approximation and projected mass density maps (in \\texttt{HEALPix} format with $N_{\\rm side}=8192$) generated from the particle light-cone of the Flagship simulation.\nMore details on the lensing properties of the Flagship mock galaxy catalogue can be found in \\cite{flagship:fosalba15,flagship:fosalba18}.\n\nIn order to evaluate the errors introduced by the mass-mapping methods, we extracted ten contiguous shear and convergence fields of $10^\\circ \\times 10^\\circ$ from the Flagship mock galaxy catalogue, yielding a total area of 1000 deg$^2$. The fields correspond to galaxies that lie in the range \\smash{$15^\\circ < \\alpha < 75^\\circ$} and \\smash{$15^\\circ < \\delta < 35^\\circ$}, where $\\alpha$ and $\\delta$ are the right ascension and declination, respectively.\nIn order to obtain the density of 30 galaxies per arcmin$^2$ foreseen for the Euclid Wide survey, we randomly selected one quarter of all galaxies in the catalogue. \nThen, projected shear and convergence maps were constructed by combining all the redshifts of the selected galaxies.\nMore sophisticated selection methods based on galaxy magnitude would produce slightly different maps. However, they would not change the performance of the two methods studied here.\nThe fields were down-sampled to $1024 \\times 1024$ pixels, which corresponds to a pixel size of about \\ang[angle-symbol-over-decimal]{;0.6;}. Throughout the paper, the shaded regions stand for the uncertainties on the mean estimated from the total 1000 deg$^2$ of the ten fields. Because the Euclid Wide survey is expected to be 15 000 deg$^2$, the sky coverage will be 15 times larger than the current mock. 
Thus, the uncertainties will be smaller by a factor of about 4 (uncertainties on the mean scale as the inverse square root of the area, and $\\sqrt{15} \\approx 3.9$).\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=diff_kappa_mask_ks_m20,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=diff_kappa_mask_ki_m20,height=6.cm,width=7cm,clip=}\n}}\n}\n\\caption{Missing data effects: Pixel difference outside the mask between the original E-mode convergence $\\kappa$ map and the map reconstructed from the incomplete simulated noise-free shear maps using the KS method (left) and the KS+ method (right). The field is $10^\\circ \\times 10^\\circ$ downsampled to $1024 \\times 1024$ pixels. The missing data represent roughly 20\\% of the data.}\n\\label{kappa_missing}\n\\end{figure*}\n\n\n\\subsection{Shear field projection}\n\\label{projections}\n\nWe considered fields of $10^\\circ \\times 10^\\circ$. The fields were taken to be sufficiently small to be approximated by a tangent plane. \nWe used a gnomonic projection to project the points of the celestial sphere onto a tangent plane, following \\cite{cmb:pires12}, who found that this preserves the two-point statistics. We note, however, that higher-order statistics may behave differently under different projections.\n\nThe shear field projection is obtained by projecting the galaxy positions from the sphere ($\\alpha$, $\\delta$) in the catalogue onto a tangent plane ($x$, $y$).\nThe projection of a non-zero spin field such as the shear field requires a projection of both the galaxy positions and their orientations.\nProjections of the shear do not preserve the spin orientation, which can generate substantial B modes (depending on the declination) if not corrected for. \nTwo problems must be considered because of the orientation. First, the projections of the meridians are not parallel, so that north is not the same everywhere in the same projected field of view. 
Second, the projections of the meridians and great circles are not perpendicular to each other, so that the system is locally non-Cartesian.\nBecause we properly correct for the other effects (e.g. shape noise, missing data, or border effects) and consider large fields of view ($10^{\\circ} \\times 10^{\\circ}$) possibly at high latitudes, these projection effects cannot be neglected.\nThe first effect is dominant and generates substantial B modes (increasing with latitude) if not corrected for. This can be easily corrected \nfor by measuring the shear orientation with respect to local north. We find that this correction is sufficient for the residual errors due to projection to become negligible compared to errors due to other effects.\n\n\n\n\n\\section{Systematic effects on the mass-map inversion}\n\\label{results_1}\nIn this section, we quantify the effect of field borders, missing data, shape noise, and the approximation of shear by reduced shear on the KS and KS+ mass-inversion methods. The quality of the reconstruction is assessed by comparing the two-point correlation functions and the third- and fourth-order moments.\n\n\n\\subsection{Missing data effects}\n\\label{sect_missing}\n\nWe used the ten noise-free shear fields of $10^\\circ \\times 10^\\circ$ described in Sect.~\\ref{sect_simu} and the corresponding noise-free convergence maps.\nWe converted the shear fields into planar convergence maps using the KS and KS+ methods, masking 20\\% of the data as expected for the \\textit{Euclid}\\xspace survey. 
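The leakage mechanism probed by this experiment can be reproduced in a toy setup: zero-filling the gaps before the Fourier-space inversion transfers part of the E-mode energy into spurious B modes. The numpy sketch below uses a random pixel mask instead of a realistic survey mask; the field and all names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
npix = 256

# Build a pure E-mode toy convergence field and its shear via the KS kernel.
k1 = np.fft.fftfreq(npix)[:, None]
k2 = np.fft.fftfreq(npix)[None, :]
k_sq = k1**2 + k2**2
k_sq[0, 0] = 1.0
p_hat = ((k1**2 - k2**2) + 2j * k1 * k2) / k_sq
kappa_hat = np.exp(-k_sq / (2 * 0.02**2)) * np.fft.fft2(rng.normal(size=(npix, npix)))
kappa_hat[0, 0] = 0.0
kappa_e = np.real(np.fft.ifft2(kappa_hat))
gamma = np.fft.ifft2(p_hat * np.fft.fft2(kappa_e))        # gamma1 + i*gamma2

# Zero-fill 20% of the pixels and invert: energy leaks into B modes.
mask = rng.random((npix, npix)) > 0.2
kappa_rec = np.fft.ifft2(np.conj(p_hat) * np.fft.fft2(np.where(mask, gamma, 0.0)))
b_leak = np.var(kappa_rec.imag) / np.var(kappa_e)         # leaked B-mode fraction

# Without the mask the inversion is exact and the B mode vanishes.
kappa_full = np.fft.ifft2(np.conj(p_hat) * np.fft.fft2(gamma))
b_full = np.var(kappa_full.imag) / np.var(kappa_e)
```

With a random mask the leaked B-mode fraction is of the order of a few percent to ten percent of the E-mode energy; contiguous survey masks concentrate the errors around the gaps, as quantified next for the realistic case.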
\nThe mask was derived from the Data Challenge 2 catalogues produced by the Euclid collaboration using the code \\texttt{FLASK} \\citep{flask:xavier16}.\n\n\\begin{figure}\n\\centerline{\n\\psfig{figure=Residual_PDF_mask.pdf,height=6.5cm,clip=}\n}\n\\caption{Missing data effects: PDF of the residual errors between the original E-mode convergence map and the reconstructed maps using KS (blue) and KS+ (red), measured outside the mask.}\n\\label{missing_residual_PDF}\n\\end{figure}\n\n\n\nFig.~\\ref{kappa_missing} compares the results of the KS and KS+ methods in the presence of missing data.\nThe figure shows the residual maps, that is, the pixel difference between the original E-mode convergence map and the reconstructed maps.\nThe amplitude of the residuals is larger with the KS method. Detailed investigation shows that the excess error is essentially localised around the gaps. \nBecause the mass-inversion operator $\\mathbf P$ is intrinsically non-local, it generates artefacts around the gaps.\nIn order to quantify the average errors,\nFig.~\\ref{missing_residual_PDF} shows the probability distribution function (PDF) of the residual maps, estimated outside the mask. \nThe standard deviation is 0.0080 with KS and 0.0062 with KS+. The residual errors obtained with KS are then 30\\% larger than those obtained with KS+.\n\n\n\\begin{figure}[h!]\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_missing_m20.pdf,width=9.cm,clip=}\n}\n\\caption{Missing data effects: Mean shear two-point correlation function $\\xi_+$ (black) and corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ reconstructed using the KS method (blue) and using the KS+ method (red) from incomplete shear maps. The estimation is only made outside the mask $M$. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. 
The lower panel shows the relative two-point correlation errors introduced by missing data effects, that is, the normalised difference between the upper curves.}\n\\label{missing_corr}\n}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_mask.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_mask.pdf,width=9.cm,clip=}\n}}\n}\n\\caption{Missing data effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original E-mode convergence map (black) compared to the moments estimated on the KS (blue) and KS+ (red) convergence maps at the same scales. The KS and KS+ convergence maps are reconstructed from incomplete noise-free shear maps. The estimation of the third- and fourth-order moments is made outside the mask. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative higher-order moment errors introduced by missing data effects, that is, the normalised difference between the upper curves.}\n\\label{missing_hos}\n\\end{figure}\n\n\n\nThe quality of the mass inversion can be estimated scale by scale using the two-point correlation function and higher-order moments.\nFig.~\\ref{missing_corr} compares the two-point correlation functions computed on the convergence and shear maps outside the mask.\nBecause the B mode is consistent with zero in the simulations, we expect these two quantities to be equal within the precision of the simulations (see Sect.~\\ref{2pcf}).\nThe KS method systematically underestimates the original two-point correlation function by a factor of about 2 on arcminute scales, and the discrepancy can reach factors of 5 at larger scales.\nBecause the mass-inversion operator $\\mathbf P$ is unitary, the signal energy is conserved by the transformation (i.e. 
$\\sum(\\gamma_1^2+ \\gamma_2^2) = \\sum(\\kappa_{\\rm E}^2+ \\kappa_{\\rm B}^2)$, where the summation is performed over all the pixels of the maps). We found that about 10\\% of the total energy leaks into the gaps and about 15\\% into the B-mode component.\nIn contrast, the errors of the KS+ method are of the order of a few percent at scales smaller than $1^\\circ$. At any scale, the KS+ errors are about 5--10 times smaller than the KS errors, remaining within the $1\\sigma$ uncertainty of the original two-point correlation function.\n\nFig.~\\ref{missing_hos} shows the third-order (upper panel) and fourth-order (lower panel) moments estimated at six different wavelet scales (\\ang[angle-symbol-over-decimal]{;2.34;}, \\ang[angle-symbol-over-decimal]{;4.68;}, \\ang[angle-symbol-over-decimal]{;9.37;}, \\ang[angle-symbol-over-decimal]{;18.75;}, \\ang[angle-symbol-over-decimal]{;37.5;}, and \\ang[angle-symbol-over-decimal]{;75.0;}) using the KS and KS+ methods. For this purpose, the pixels inside the mask were set to zero in the reconstructed convergence maps. The aperture mass maps corresponding to each wavelet scale were computed, and the moments were calculated outside the masks.\n\nThe KS method systematically underestimates the third- and fourth-order moments at all scales.\nBelow 10$\\arcmin$, the errors on the moments remain smaller than 50\\%, and they increase with scale up to a factor of 3. In comparison, the KS+ errors are much smaller at all scales and remain within the 1$\\sigma$ uncertainty.\n\n\n\\subsection{Field border effects}\nFig.~\\ref{kappa_borders} compares the results of the KS (left) and KS+ (right) methods for border effects. It shows the residual error maps corresponding to the pixel difference between the original E-mode convergence map and the reconstructed maps.\nWith KS, as expected, the pixel difference shows errors at the border of the field. 
With KS+, there are also some low-level boundary effects, but these errors are considerably reduced and do not show any significant structure at the field border.\nIn KS+, the image is extended to reduce the border effects. The effect of borders decreases when the size of the borders increases. A border size of 512 pixels has been selected for \\textit{Euclid}\\xspace as a good compromise between precision and computational speed. It corresponds to extending the image to be inpainted to 2048 $\\times$ 2048 pixels.\nAgain, the PDF of these residuals can be compared to quantify the errors. For the two methods, Fig.~\\ref{border_residual_PDF} shows the residual PDFs computed at the boundaries (as dotted lines) and in the remaining central part of the image (as solid lines). The border width used to compute the residual PDF is 100 pixels, which corresponds to about one degree. \nWith the KS method, the standard deviation of the residuals in the centre of the field is 0.0062.\nIn the outer regions, the border effect causes errors of 0.0076 (i.e. 25\\% larger than at the centre). Away from the borders, the KS+ method gives results similar to the KS method (0.0060). However, it performs much better at the border, where the error only reaches 0.0061. The small and uniform residuals of the KS+ method show how efficiently it corrects for border effects. \n \n\n\\begin{figure*}\n\\centerline{\n\\hbox{\n\\psfig{figure=diff_kappa_nomask_ks,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=diff_kappa_nomask_ki,height=6.cm,width=7cm,clip=}\n}}\n\\caption{Field border effects: Pixel difference between the original E-mode convergence $\\kappa$ map and the map reconstructed from the corresponding simulated shear maps using the KS method (left) and the KS+ method (right). 
The field is $10^\\circ \\times 10^\\circ$ downsampled to $1024 \\times 1024$ pixels.}\n\\label{kappa_borders}\n\\end{figure*}\n\n\n\\begin{figure}\n\\centerline{\n\\psfig{figure=Residual_PDF_nomask.pdf,height=6.5cm,clip=}\n}\n\\caption{Field border effects: PDF of the residual errors between the original E-mode convergence map and the convergence maps reconstructed using KS (blue) and KS+ (red). The dotted lines correspond to the PDF of the residual errors measured at the boundaries of the field, and the solid lines show the PDF of the residual errors measured in the centre of the field. The borders are 100 pixels wide.}\n\\label{border_residual_PDF}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_border.pdf,width=9.cm,clip=}\n}\n\\caption{Field border effects: Mean shear two-point correlation function $\\xi_+$ (black) compared to the corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ reconstructed using the KS method (blue) and the KS+ method (red). The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative two-point correlation error introduced by border effects.}\n\\label{border_corr}\n}\n\\end{figure}\n\nAs before, the scale dependence of the errors can be estimated using the two-point correlation function and higher-order moments computed at different scales. \nFig.~\\ref{border_corr} shows the two-point correlation functions.\nFor both methods, the errors increase with angular scale because the fraction of pairs of pixels that include boundaries increases with scale. The loss of amplitude at the image border is responsible for significant errors in the two-point correlation function of the KS convergence maps. 
\nIn contrast, the errors are about five to ten times smaller with the KS+ method and remain in the $1\\sigma$ uncertainty range of the original two-point correlation function.\n\nFig.~\\ref{border_hos} shows field border effects on the third-order (upper panel) and fourth-order (lower panel) moments of the convergence maps at different scales.\nAs was observed earlier for the two-point correlation estimation, the KS method introduces errors at large scales on the third- and fourth-order moment estimates. With KS+, the discrepancy is about $1\\%$ and within the $1\\sigma$ uncertainty.\n\nWhen the two-point correlation functions and higher-order moments are computed far from the borders, the errors of the KS method decrease, as expected. In contrast, we observe no significant improvement when the statistics are computed similarly on the KS+ maps, indicating that KS+ corrects for border effects properly.\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_border.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_border.pdf,width=9.cm,clip=}\n}}}\n\\caption{Field border effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original convergence (black) compared to the moments estimated on the KS (blue) and KS+ (red) convergence maps reconstructed from noise-free shear maps. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative higher-order moment errors introduced by border effects.}\n\\label{border_hos}\n\\end{figure}\n\n\n\n\\subsection{Reduced shear}\nIn this section we quantify the errors due to the approximation of shear ($\\gamma$) by the reduced shear ($g$).\nTo this end, we used the noise-free shear fields described in Sect.~\\ref{sect_simu} and computed the reduced shear fields using Eq.~(\\ref{reducedshear}) and the convergence provided by the catalogue. 
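The KS+ outer loop corrects for this approximation with the fixed-point update $\tilde{\gamma}^{k} = \tilde{\gamma}\,(1-\kappa_{\rm E}^{k})$ (step 6 of Algorithm~\ref{algo}). A scalar toy model sketches why this iteration converges in the weak-lensing regime $|\kappa| \ll 1$; all numerical values are illustrative, and the linear map standing in for the mass inversion is hypothetical:

```python
# Reduced shear: g = gamma / (1 - kappa), hence gamma = g * (1 - kappa).
kappa_true, gamma_true = 0.10, 0.05
g = gamma_true / (1.0 - kappa_true)        # the observable (reduced shear)

# Hypothetical linear "mass inversion" gamma -> kappa, standing in for P*:
invert = lambda gamma: (kappa_true / gamma_true) * gamma

kappa = 0.0                                 # k = 0: treat g as if it were gamma
for _ in range(20):
    gamma = g * (1.0 - kappa)               # correct the shear (outer-loop update)
    kappa = invert(gamma)                   # re-run the mass inversion

print(abs(kappa - kappa_true))              # error shrinks geometrically
```

The iteration contracts with rate $\sim g\,\kappa/\gamma \approx 0.11 \ll 1$ here, so a handful of outer iterations suffices, which is consistent with the small size of the reduced-shear effect reported above.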
We then derived the reconstructed convergence maps using the KS and KS+ methods.\n\nFor both methods, the errors on the convergence maps are dominated by field border effects. \nWe did not find any estimator able to separate these two effects and then identify the reduced shear effect in the convergence maps. \nHowever, the errors introduced by the reduced shear can be assessed by comparing the shear and reduced shear two-point correlation functions (see Fig.~\\ref{reduced_corr}). While the differences are negligible at large scales, they reach the percent level on arcminute scales \\citep[in agreement with][]{wl:white05}, where they become comparable to or larger than the KS+ errors due to border effects.\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_reduced_2.pdf,width=9.cm,clip=}\n}\n\\caption{Reduced shear effects: Relative two-point correlation error between the mean two-point correlation functions \\smash{$\\xi_+^{\\gamma}$} estimated from the shear fields and corresponding mean two-point correlation function \\smash{$\\xi_+^{\\rm g}$} estimated from the reduced shear fields without any correction.}\n\\label{reduced_corr}\n}\n\\end{figure}\n\n\n\\subsection{Shape noise}\n\nIn this section we study the effect of the shape noise on convergence maps.\nWe derived noisy shear maps, assuming Gaussian noise ($\\sigma_{\\epsilon} = 0.3$).\nThen, we compared the two mass-inversion methods. \nThe pixel difference cannot be used in this case because the convergence maps are noise dominated (see Fig.~\\ref{kappa_noise}, upper right panel).\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_border_noise.pdf,width=9.cm,clip=}\n}\n\\caption{Shape noise effects: Mean shear two-point correlation function $\\xi_+$ (black) and corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ estimated from complete noisy shear fields. 
The convergence maps have been estimated using the KS method (blue) and using the KS+ method (red). The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative two-point correlation error introduced by shape noise.}\n\\label{border_noise_corr}\n}\n\\end{figure}\nHowever, we can still assess the quality of the convergence maps using two-point correlation functions because the ellipticity correlation is an unbiased estimate of the shear correlation, and similarly, the convergence two-point correlation function is unbiased by the shape noise.\n\n \nFig.~\\ref{border_noise_corr} compares the results of the KS and KS+ methods when shape noise is included.\nCompared to Fig.~\\ref{border_corr}, the two-point correlation of the noisy maps is less smooth because the noise fluctuations do not completely average out. However, the amplitude of the errors introduced by the mass inversion remains remarkably similar to the errors computed without shape noise for the KS and KS+ methods. The same conclusions then hold: the errors are about five times smaller with the KS+ method.\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_border_noise.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_border_noise.pdf,width=9.cm,clip=}\n}}}\n\\caption{Shape noise effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original convergence with realistic shape noise (black) compared to the moments estimated on the KS (blue) and KS+ (red) convergence reconstructed from noisy shear maps. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative higher-order moment errors introduced by shape noise.}\n\\label{noise_border_hos}\n\\end{figure}\n\nMoments of noisy maps are biased and potentially dominated by the shape noise contribution. 
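This contrast, a two-point correlation that is unbiased by independent noise versus moments that are not, can be illustrated with a one-dimensional toy example (Gaussian white noise assumed independent of the signal; the field, the noise level, and the lag are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
# Correlated toy "convergence" signal plus independent white shape noise.
kappa = np.convolve(rng.normal(size=n), np.ones(8) / 8.0, mode="same")
noisy = kappa + rng.normal(scale=0.3, size=n)

def xi(x, lag):
    """Simple autocovariance estimator at a given nonzero lag."""
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).mean()

# The variance (a second-order moment) picks up the full noise power ...
print(noisy.var() - kappa.var())          # close to 0.3**2 = 0.09
# ... while the two-point correlation at nonzero lag is essentially unbiased.
print(abs(xi(noisy, 3) - xi(kappa, 3)))   # small compared to xi(kappa, 3)
```

For white noise the cross terms and the noise autocovariance at nonzero lag average to zero, which is why the two-point correlation can be compared directly on noisy maps while the moments cannot.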
For instance, the total variance in the noisy convergence map is expected to be the sum of the variance in the noise-free convergence map and the noise variance. Therefore, moments of the noisy KS and KS+ convergence maps cannot be directly compared to moments of the original noise-free convergence maps. \nInstead, Fig.~\\ref{noise_border_hos} compares them to the moments of the original convergence maps where noise was added with properties similar to the noise expected in the convergence maps. For this purpose, we generated noise maps $N_1$ and $N_2$ for each field using Eqs.~(\\ref{eq:noise1}) and (\\ref{eq:noise2}), and we derived the noise to be added to the convergence using Eq.~(\\ref{eq:noise3}).\n\nThe comparison of Fig.~\\ref{noise_border_hos} to Fig.~\\ref{border_hos} shows that the third-order moment of the convergence is not affected by shape noise. In contrast, the fourth-order moment is biased for scales smaller than 10$\\arcmin$. The two methods slightly underestimate the third- and fourth-order moments at large scales. However, with KS+, the errors are reduced by a factor of 2 and remain roughly within the $1\\sigma$ uncertainty.\n\n\n\n\n\n\n\\subsection{All systematic effects taken into account simultaneously}\nIn this section, we assess the performance of KS and KS+ for realistic data sets by combining the effects of shape noise, reduced shear, borders, and missing data.\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_1024_m20,height=6.cm,width=6.7cm,clip=}\n\\hspace{0.6cm}\n\\psfig{figure=mask_m20,height=6.cm,width=6.4cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_mask_g5_m20,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=kappa_mask_mrlens_m20,height=6.cm,width=7cm,clip=}\n}}\n\n}\n\\caption{All systematic effects: The upper panels show the original E-mode convergence $\\kappa$ map (left) and the mask that is applied to the shear maps (right). 
The lower panels show the convergence map reconstructed from an incomplete noisy shear field using the KS method (left) and using the KS+ method (right), after applying a nonlinear MRLens filtering with $\\alpha_{\\rm FDR} = 0.05$. The field is $10^\\circ \\times 10^\\circ$ downsampled to $1024 \\times 1024$ pixels.}\n\n\\label{kappa_complete}\n\\end{figure*}\n\n\nFig.~\\ref{kappa_complete} compares the results of the KS method and the KS+ method combined with a filtering step to correct for all systematic effects in one field.\nWe used the nonlinear MRLens filter to reduce the noise in the KS and KS+ convergence maps because it is particularly well suited for the detection of isotropic structures \\citep{stat:pires09a, wl:pires12, wl:lin16}. Again, KS+ recovers the over-densities better because it reduces the signal leakage during the mass inversion compared to KS.\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_missing_noise_m20.pdf,width=9.cm,clip=}\n}\n\\caption{All systematic effects: Mean shear two-point correlation function $\\xi_+$ (black) and corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ estimated from incomplete noisy shear fields. The convergence maps have been estimated using KS (blue) and KS+ (red). The convergence two-point correlations were estimated outside the mask. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the normalised difference between the two upper curves.}\n\\label{missing_noise_corr} \n}\n\\end{figure}\n\n\n\nFig.~\\ref{missing_noise_corr} shows the two-point correlation computed with the two methods. \nThe masked regions were excluded from the two-point correlation computation, resulting in fewer pairs and higher noise than in Fig.~\\ref{border_noise_corr}.\nAgain, the strong leakage due to missing data is clearly observed with the KS method. 
\nThe results obtained with the KS+ method reduce the errors in the mean convergence two-point correlation function by a factor of about 5, and the errors remain roughly within the 1$\\sigma$ uncertainty.\n\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_mask_noise.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_mask_noise.pdf,width=9.cm,clip=}\n}}}\n\\caption{All systematic effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original convergence with realistic noise (black) compared to the moments estimated using KS (blue) and KS+ (red) obtained from incomplete noisy shear maps. The third- and fourth-order moments are estimated outside the mask. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative higher-order moment errors.}\n\\label{noise_missing_hos}\n\\end{figure}\n\n\n\n\nIn Fig.~\\ref{noise_missing_hos} we test the efficacy of the mass-inversion methods in preserving higher-order moments of the convergence maps in a realistic setting. As before, realistic noise was added to the original convergence maps for comparison.\nAs was observed earlier in the noise-free case, the KS method systematically underestimates the third- and fourth-order moments at all scales. \nWith KS+, the errors are significantly reduced, by a factor of about 2 in the third-order moment and by a factor of about 10 in the fourth-order moment estimation, at all scales.\nAlthough reduced, the errors of the KS+ method on the third-order moment cannot be neglected.\nThese errors might result from noise correlations introduced by the inpainting method in the shear maps. Inside the gaps, the noise is indeed correlated because it is interpolated from the remaining data. 
These noise correlations propagate into the convergence maps and can explain the bias in the moment estimation.\n\nWe note that the two-point correlation functions and higher-order moments are here only used to probe the accuracy of the reconstruction methods. For specific applications, the small residuals of the KS+ method can be reduced even more using additional treatment such as down-weighting the region around the mask when the moments are computed \\citep[e.g.][]{cfhtlens:vanwaerbeke13}.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sect_cl}\n\nThis paper was motivated by the use of convergence maps in \\textit{Euclid}\\xspace to constrain cosmological parameters and to assess other physical constraints. \nConvergence maps encode the lensing information in a different manner than the shear, allowing more optimised computations.\nHowever, the mass-inversion process is subject to field border effects, missing data, reduced shear, intrinsic alignments, and shape noise. This requires accurate control of the systematic effects during the mass inversion to reduce the information loss as much as possible. \nWe presented and compared the two mass-inversion methods that are included in the official \\textit{Euclid}\\xspace data-processing pipeline: the standard Kaiser \\& Squires (KS) method, and an improved Kaiser \\& Squires (KS+) mass-inversion technique that integrates corrections for the mass-mapping systematic effects.\n The systematic effects on the reconstructed convergence maps were studied using the Euclid Flagship mock galaxy catalogue.\n\n\nIn a first step, we analysed and quantified one by one the systematic effects on reconstructed convergence maps using two-point correlation functions and moments of the convergence.\nIn this manner, we quantified the contribution of each effect to the error budget to better understand the error distribution in the convergence maps. With KS, missing data are the dominant effect at all scales. 
Field border effects are also strong, but only at the map borders. These two effects are significantly reduced with KS+. The reduced shear is the smallest effect in terms of contribution and only affects small angular scales. \nThe study also showed that pixellisation provides an intrinsic regularisation and that no additional smoothing step is required to avoid infinite noise in the convergence maps.\n\n\nIn a second step, we quantified the errors introduced by the KS and KS+ methods in a realistic setting that included the systematic effects. \nWe showed that the KS+ method reduces the errors on the two-point correlation functions and on the moments of the convergence compared to the KS method. \nThe errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5. \nThe errors on the third-order and fourth-order moment estimates are reduced by factors of about 2 and 10, respectively.\nSome errors remain in the third-order moment, although they stay within the $2\\sigma$ uncertainty. They might result from noise correlations introduced by the inpainting method inside the gaps.\n\nOur study was conducted on a mock of 1000 deg$^2$ divided into ten fields of 10$^{\\circ}$ $\\times$ 10$^{\\circ}$ to remain in the flat-sky approximation. \\textit{Euclid}\\xspace will observe a field of 15 000 deg$^2$. Until KS+ is extended to the curved sky, it is not possible to apply the method to larger fields without introducing significant projection effects. However, the \\textit{Euclid}\\xspace survey can be divided into small fields, which allows reducing the uncertainties in the statistics that are estimated on the convergence maps. 
Moreover, we can expect that part of the errors will average out.\n\nRecent studies have shown that combining the shear two-point statistics with higher-order statistics of the convergence such as higher-order moments \\citep{hos:vicinanza18}, Minkowski functionals \\citep{hos:vicinanza19}, or peak counts \\citep{hos:liu15,hos:martinet18} allows breaking common degeneracies. \nThe precision of the KS+ mass inversion makes the E-mode convergence maps a promising tool for such cosmological studies.\nIn future work, we plan to propagate these errors into cosmological parameter constraints using higher-order moments and peak counts.\n\n\n\n\n\n\n\\section*{Acknowledgments}\nThis study has been carried out within the Mass Mapping Work Package of the Weak Lensing Science Working Group of the \\textit{Euclid}\\xspace project to better understand the impact of the mass inversion systematic effects on the convergence maps. \nThe authors would like to thank the referees and editors for their valuable comments, which helped to improve the manuscript.\nS. Pires thanks F. Sureau, J. Bobin, M. Kilbinger, A. Peel and J.-L. Starck for useful discussions. \n\\AckEC\n\n\\section*{Appendix A: KS+ inpainting algorithm}\n\n\nThis appendix describes the KS+ method presented in Sect.~\\ref{iks} in more detail.\nThe solution of the KS+ mass inversion is obtained through the iterative algorithm described in Algorithm~\\ref{algo}.\n\nThe outer loop starting at step 5 is used to correct for the reduced shear using the iterative scheme described in Sect.~\\ref{reduced}. The inner loop starting at step 7 is used to solve the optimisation problem defined by Eq.~(\\ref{eq2}). \n$\\bm \\Phi$ is the discrete cosine transform operator matrix.\nIf the convergence ${\\bm \\kappa}$ is sparse in $\\bm \\Phi$, most of the signal is contained in the strongest DCT coefficients. The smallest coefficients result from missing data, border effects, and shape noise. 
Thus, the method is based on an iterative algorithm whose threshold decreases exponentially at each iteration from a maximum value to zero, following the decreasing law $F$ described in \\cite{wl:pires09}. \nBy retaining more and more significant DCT coefficients at each iteration, the gaps in $\\bm{\\tilde \\gamma}$ fill up steadily, and the power of the spurious B modes due to the gaps decreases.\nThe algorithm uses the fast Fourier transform at each iteration to compute the shear maps $\\bm \\gamma$ from the convergence maps $\\bm \\kappa$ (step 14) and the inverse relation (step 16). \n\n\n\nA data-driven power spectrum prior is introduced at steps 11-13. To do so, the KS+ algorithm uses the undecimated isotropic wavelet transform that decomposes an image $\\bm \\kappa$ into a set of coefficients $\\{ \\bm{w_1}, \\bm{w_2}, ..., \\bm{w_J}, \\bm{c_J}\\}$, as a superposition of the form\n\\begin{eqnarray}\n\\bm \\kappa[i_1, i_2]= \\bm{c_{J}}[i_1, i_2] + \\sum_{j=1}^{J} \\bm{w_{j}}[i_1,i_2], \n \\label{wavelet}\n \\end{eqnarray}\nwhere $\\bm{c_{J}}$ is a smoothed version of the image $\\bm \\kappa$, and the $\\bm{w_{j}}$ are a set of aperture mass maps (usually called wavelet bands) at scale $\\theta = 2^{j}$. Then, we estimate the variance on each wavelet band $\\bm{w_j}$. The variance per scale estimated in this way can be directly compared to the power spectrum. This provides a way to estimate a broadband power spectrum of the convergence $\\bm \\kappa$ from incomplete data. \nThe power spectrum is then enforced by multiplying each wavelet coefficient by the factor \\smash{$\\sigma_j^{\\rm{out}}\/\\sigma_j^{\\rm{in}}$} inside the gaps, where \\smash{$\\sigma_j^{\\rm{in}}$} and \\smash{$\\sigma_j^{\\rm{out}}$} are the standard deviations of the wavelet band $\\bm{w_j}$ estimated inside and outside the mask, respectively.\nThis normalisation can be described by a linear operator $\\mathbf Q$ as used in Eq.~(\\ref{eq2}). 
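A rough numerical sketch of this per-scale renormalisation follows. An à trous B3-spline starlet is assumed for the undecimated isotropic wavelet transform, periodic boundaries are used for simplicity, and the helper names are ours, not the pipeline's:

```python
import numpy as np

def b3_smooth(c, step):
    """One a trous smoothing pass with the B3-spline kernel [1,4,6,4,1]/16."""
    kern = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    for axis in (0, 1):
        acc = np.zeros_like(c)
        for shift, w in zip((-2, -1, 0, 1, 2), kern):
            acc += w * np.roll(c, shift * step, axis=axis)
        c = acc
    return c

def starlet(img, J):
    """Undecimated isotropic wavelet transform: img = c_J + sum_j w_j."""
    c, bands = img.astype(float), []
    for j in range(J):
        c_next = b3_smooth(c, 2**j)
        bands.append(c - c_next)   # wavelet band w_{j+1}
        c = c_next
    return bands, c

def renormalise(img, mask, J=4):
    """Sketch of the operator Q: rescale each band by sigma_out/sigma_in
    inside the gaps (mask is True on observed pixels, False in the gaps)."""
    bands, c_J = starlet(img, J)
    out = c_J.copy()
    for w in bands:
        s_out, s_in = w[mask].std(), w[~mask].std()
        w = w.copy()
        if s_in > 0:
            w[~mask] *= s_out / s_in   # enforce the broadband power spectrum
        out += w                       # backward transform: sum of bands + c_J
    return out
```

The reconstruction in `starlet` is exact by construction (the bands telescope back to the input), and `renormalise` leaves the observed pixels untouched while rescaling the variance per scale inside the gaps.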
\nThe constraint is applied on the E- and B-mode components before reconstructing the convergence $\\bm \\kappa$ by backward wavelet transform.\n\\begin{algorithm}[H]\n\\caption{KS+ algorithm} \n\\label{algo} \n\\begin{enumerate}\n\\item[1.] Project the shear from the celestial sphere onto a tangent plane by projecting the galaxy positions and applying a local rotation to the shear field.\n\\item[2.] Bin the projected shear onto a grid and define $\\bm{\\tilde{\\gamma}}$ as the average shear in each pixel.\n\\item[3.] Set the mask $\\mathbf M$: $M[i_1, i_2] = 1$ for pixels where we have information and $M[i_1, i_2] = 0$ for pixels with no galaxies, and take a support twice as large for the shear maps and include the borders in the masked region (see Fig.~\\ref{shear_mask}).\n\\item[4.] Set the maximum number of iterations to $I_{\\rm max}=100$, the maximum threshold $\\lambda_{\\rm max} = \\max(\\mid \\bm \\Phi^{\\rm T} \\mathbf P^* \\bm{\\tilde{\\gamma}} \\mid),$ and the minimum threshold $\\lambda_{\\rm min} = 0$.\n\\item[5.] Set $k = 0$, $\\bm{\\kappa_{\\rm E}^{k}}=0$ and iterate:\n\\begin{enumerate}\n\\item[6.] Update the shear $\\bm{\\tilde{\\gamma}^{k}} = \\bm{\\tilde{\\gamma}} \\, (1-\\bm{\\kappa_{\\rm E}^{k}})$ and initialise the solution to $\\bm{\\kappa^{k}} = \\mathbf P^* \\bm{\\tilde{\\gamma}^{k}}$.\n\\item[7.] Set $i = 0$, $\\lambda^{0}=\\lambda_{\\rm max}$, $\\bm{\\kappa^{i}}=\\bm{\\kappa^{k}}$ and iterate: \n \\begin{enumerate}\n \\item[8.] Compute the forward transform: $\\bm \\alpha = \\bm{\\Phi^{\\rm T}} \\bm{\\kappa^{i}}$.\n \\item[9.] Compute $\\bm{\\tilde \\alpha}$ by setting to zero the coefficients $\\bm \\alpha$ below the threshold $\\lambda^{i}$.\n \\item[10.] Reconstruct $\\bm{\\kappa^{i}}$ from $\\bm{\\tilde \\alpha}$: $\\bm{\\kappa^{i}} = \\bm{\\Phi} \\bm{ \\tilde\\alpha}$.\n \\item[11.] Decompose $\\bm{\\kappa^{i}}$ into its wavelet coefficients $\\{ \\bm{w_1}, \\bm{w_2}, ..., \\bm{w_J}, \\bm{c_J}\\}$.\n \\item[12.] 
Renormalise the wavelet coefficients $\\bm{w_j}$ by a factor $\\sigma_j^{\\rm{out}}\/\\sigma_j^{\\rm{in}}$ inside the gaps.\n \\item[13.] Reconstruct $\\bm{\\kappa^{i}}$ by performing the backward wavelet transform from the normalised coefficients. \n \\item[14.] Perform the inverse mass relation: $\\bm{\\gamma^{i}} = \\mathbf P \\bm{\\kappa^{i}}$.\n \\item[15.] Enforce the observed shear $\\bm{\\tilde\\gamma}$ outside the gaps:\n $\\bm{\\gamma^{i}} = (1-\\mathbf M) \\, \\bm{\\gamma^{i}} + \\mathbf M \\bm{\\tilde{ \\gamma}^{k}}$.\n \\item[16.] Perform the direct mass inversion: $\\bm{\\kappa^{i}} = \\mathbf P^* \\bm{\\gamma^{i}}$.\n \\item[17.] Update the threshold: $\\lambda^i = F(i, \\lambda_{\\rm min}, \\lambda_{\\rm max})$.\n \\item[18.] Set $i=i+1$. If $i < I_{\\rm max}$, return to step 8.\n \\end{enumerate}\n\\end{enumerate}\n\\end{enumerate}\n\\end{algorithm}\n\n\\begin{rem}\nThe projectives and $\\cat{P}$-epic maps determine each other via the lifting property\n\\[\n\\xymatrix{\n& X \\ar@{->>}[d]^f \\\\\nP \\ar@{-->}[ur] \\ar@{->}[r] & Y. \\\\\n}\n\\]\nDually, the injectives and $\\cat{I}$-monic maps determine each other via the extension property\n\\[\n\\xymatrix{\nX \\ar[r] \\ar@{ >->}[d]_f & I \\\\\nY . \\ar@{-->}[ur] & \\\\\n}\n\\]\nThis is part of the equivalent definition of a projective (resp.\\ injective) class described in \\cite{Christensen98}*{Proposition 2.4}. \n\\end{rem}\n\n\n\n\\begin{convention}\\label{co:SuspensionIso}\nWe will implicitly use the natural isomorphism $\\cat{T}(A,B) \\cong \\cat{T}(\\Sigma^k A, \\Sigma^k B)$ sending a map $f$ to $\\Sigma^k f$. 
\n\\end{convention}\n\n\\begin{defn}\\label{def:AdamsResol}\nAn \\textbf{Adams resolution} of an object $X$ in $\\cat{T}$ with respect to a projective class $(\\cat{P}, \\cat{N})$ is a diagram\n\\begin{equation} \\label{eq:AdamsResolProj}\n\\cxymatrix{\n\\mathllap{X =\\ }X_0 \\ar[rr]^{i_0} & & X_1 \\circar[dl]^{\\delta_0} \\ar[rr]^{i_1} & & X_2 \\circar[dl]^{\\delta_1} \\ar[rr]^{i_2} & & X_3 \\circar[dl]^{\\delta_2} \\ar[r] & \\cdots \\\\\n& P_0 \\ar@{->>}[ul]^{p_0} & & P_1 \\ar@{->>}[ul]^{p_1} & & P_2 \\ar@{->>}[ul]^{p_2} & & \\\\\n}\n\\end{equation}\nwhere every $P_s$ is projective, every map $i_s$ is in $\\cat{N}$, and every triangle $P_s \\ral{p_s} X_s \\ral{i_s} X_{s+1} \\ral{\\delta_s} \\Sigma P_s$ is distinguished. Here the arrows $\\delta_s \\colon X_{s+1} {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} P_{s}$ denote degree-shifting maps, namely, maps $\\delta_s \\colon X_{s+1} \\to \\Sigma P_{s}$.\n\nDually, an \\textbf{Adams resolution} of an object $Y$ in $\\cat{T}$ with respect to an injective class $(\\cat{I}, \\cat{N})$ is a diagram\n\\begin{equation}\\label{eq:AdamsResolInj}\n\\cxymatrix{\n\\mathllap{Y =\\ }Y_0 \\ar@{ >->}[dr]_{p_0} & & Y_1 \\ar@{ >->}[dr]_{p_1} \\ar[ll]_{i_0} & & Y_2 \\ar@{ >->}[dr]_{p_2} \\ar[ll]_{i_1} & & Y_3 \\ar[ll]_{i_2} & \\cdots \\ar[l] \\\\\n& I_0 \\circar[ur]_{\\delta_0} & & I_1 \\circar[ur]_{\\delta_1} & & I_2 \\circar[ur]_{\\delta_2} & & \\\\\n}\n\\end{equation}\nwhere every $I_s$ is injective, every map $i_s$ is in $\\cat{N}$, and every triangle $\\Sigma^{-1} I_s \\ral{\\Sigma^{-1} \\delta_s} Y_{s+1} \\ral{i_s} Y_s \\ral{p_s} I_s$ is distinguished.%\n\\end{defn}\nFrom now on, fix a triangulated category $\\cat{T}$ and a (stable) injective class $(\\cat{I}, \\cat{N})$ in $\\cat{T}$. 
\n\n\\begin{lem}\nEvery object $Y$ of $\\cat{T}$ admits an Adams resolution.\n\\end{lem}\n\nGiven an object $X$ and an Adams resolution of $Y$, applying $\\cat{T}(X,-)$ yields an exact couple\n\\[\n\\xymatrix{\n\\bigoplus_{s,t} \\cat{T}(\\Sigma^{t-s} X, Y_s) \\ar[rr]^-{i = \\oplus (i_s)_*} & & \n\\bigoplus_{s,t} \\cat{T}(\\Sigma^{t-s} X, Y_s) \\ar[dl]^-{p = \\oplus (p_s)_*} \\\\\n& \\bigoplus_{s,t} \\cat{T}(\\Sigma^{t-s} X, I_s) \\ar[ul]^-{\\delta = \\oplus (\\delta_s)_*}\n}\n\\]\nand thus a spectral sequence with $E_1$ term\n\\[\nE_1^{s,t} %\n = \\cat{T}\\left( \\Sigma^{t-s} X, I_s \\right)\n \\cong \\cat{T}\\left( \\Sigma^t X, \\Sigma^s I_s \\right)\n\\]\nand differentials\n\\[\nd_r \\colon E_r^{s,t} \\to E_r^{s+r, t+r-1}\n\\]\ngiven by $d_r = p \\circ i^{-(r-1)} \\circ \\delta$, where $i^{-1}$ means choosing an $i$-preimage.\nThis is called the \\textbf{Adams spectral sequence} with respect to the injective class $\\cat{I}$\nabutting to $\\cat{T}(\\Sigma^{t-s} X, Y)$.\n\n\n\\begin{lem}\nThe $E_2$ term is given by\n\\[\nE_2^{s,t} = \\Ext_{\\cat{I}}^{s,t}(X,Y) := \\Ext_{\\cat{I}}^{s}(\\Sigma^t X,Y)\n\\]\nwhere $\\Ext_{\\cat{I}}^{s}(X,Y)$ denotes the $s^{\\text{th}}$ derived functor of $\\cat{T}(X,-)$ (relative to the injective class $\\cat{I}$) applied to the object $Y$.\n\\end{lem}\n\n\\begin{proof}\nThe Adams resolution of $Y$ yields an $\\cat{I}$-injective resolution of $Y$\n\\begin{equation}\\label{eq:InjResol}\n\\xymatrix @C=3.3pc {\n0 \\ar[r] & Y \\ar[r]^-{p_0} & I_0 \\ar[r]^-{(\\Sigma p_1) \\delta_0} & \\Sigma I_1 \\ar[r]^-{(\\Sigma^2 p_2) (\\Sigma \\delta_1)} & \\Sigma^2 I_2 \\ar[r] & \\cdots \\\\\n} \\qedhere\n\\end{equation}\n\\end{proof}\n\n\\begin{rem}\\label{re:NotGenerate}\nWe do not assume that the injective class $\\cat{I}$ generates, \ni.e., that every non-zero object $X$ admits a non-zero map $X \\to I$ to an injective. 
\nHence, we do not expect the Adams spectral sequence to be conditionally convergent in general; cf.~\\cite{Christensen98}*{Proposition~4.4}.\n\\end{rem}\n\n\n\n\n\n\n\n\n\n\\begin{ex}\\label{ex:EBased}\nLet $E$ be a commutative (homotopy) ring spectrum. \nA spectrum is called \\textbf{$E$-injective} if it is a retract of $E \\sm W$ for some $W$ \\cite{HoveyS99}*{Definition 2.22}. \nA map of spectra $f \\colon X \\to Y$ is called \\textbf{$E$-monic} if the map $E \\sm f \\colon E \\sm X \\to E \\sm Y$ is a split monomorphism. \nThe $E$-injective objects and $E$-monic maps form an injective class in the stable homotopy category. \nThe Adams spectral sequence associated to \nthis injective class\nis the \\emph{Adams spectral sequence based on $E$-homology}, as described in \\cite{Ravenel04}*{Definition~2.2.4}, also called the \\emph{unmodified Adams spectral sequence} in \\cite{HoveyS99}*{\\S 2.2}. Further assumptions are needed in order to identify the $E_2$ term as $\\Ext$ groups in $E_*E$-comodules.\n\\end{ex}\n\n\n\n\n\\begin{defn}\nThe \\textbf{$\\cat{I}$-cohomology} of an object $X$ is the family of abelian groups\\break $H^I(X) := \\cat{T}(X,I)$ indexed by the injective objects $I \\in \\cat{I}$.\n\nA \\textbf{primary operation} in $\\cat{I}$-cohomology is a natural transformation $H^I(X) \\to H^J(X)$ of functors $\\cat{T}^{\\mathrm{op}} \\to \\mathrm{Ab}$. Equivalently, by the (additive) Yoneda lemma, a primary operation is a map $I \\to J$ in $\\cat{T}$.\n\\end{defn}\n\n\\begin{ex}\nThe differential $d_1$ is given by primary operations. More precisely, let $x \\in E_1^{s,t}$ be a map $x \\colon \\Sigma^{t-s} X \\to I_s$. Then $d_1(x) \\in E_1^{s+1,t}$ is the composite\n\\[\n\\xymatrix{\n\\Sigma^{t-s} X \\ar[r]^-{x} & I_s \\ar[r]^-{\\delta_s} & \\Sigma Y_{s+1} \\ar[r]^-{\\Sigma p_{s+1}} & \\Sigma I_{s+1}. 
\\\\\n}\n\\]\nIn other words, $d_1(x)$ is obtained by applying the primary operation $d_1 := (\\Sigma p_{s+1}) \\delta_s \\colon I_s \\to \\Sigma I_{s+1}$ to $x$.\n\\end{ex}\n\n\\begin{prop}\nA primary operation $\\theta \\colon I \\to J$ appears as $d_1 \\colon I_s {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} I_{s+1}$ in some Adams resolution if and only if $\\theta$ admits an $\\cat{I}$-epi -- $\\cat{I}$-mono factorization.\n\\end{prop}\n\n\\begin{proof}\nThe condition is necessary by construction. In the factorization $d_1 = (\\Sigma p_{s+1}) \\delta_s$, the map $\\delta_s$ is $\\cat{I}$-epic while $p_{s+1}$ is $\\cat{I}$-monic.\n\nTo prove sufficiency, assume given a factorization $\\theta = iq \\colon I \\to W \\to J$, where $q \\colon I \\twoheadrightarrow W$ is $\\cat{I}$-epic and $i \\colon W \\hookrightarrow J$ is $\\cat{I}$-monic. Taking the fiber of $q$ twice yields the distinguished triangle\n\\[\n\\xymatrix{\n\\Sigma^{-1} W \\ar[r] & Y_0 \\ar@{ >->}[r] & I \\ar@{->>}[r]^q & W \\\\\n}\n\\]\nwhich we relabel\n\\[\n\\xymatrix{\nY_1 \\ar[r]^-{i_0} & Y_0 \\ar@{ >->}[r]^-{p_0} & I \\ar@{->>}[r]^-{\\delta_0} & \\Sigma Y_1. \\\\\n}\n\\]\nRelabeling the given map $i \\colon W \\hookrightarrow J$ as $\\Sigma p_1 \\colon \\Sigma Y_1 \\hookrightarrow \\Sigma I_1$, we can continue the usual construction of an Adams resolution of $Y_0$ as illustrated in Diagram~\\eqref{eq:AdamsResolInj}, in which $\\theta = iq$ appears as the composite $(\\Sigma p_1) \\delta_0$. Note that by the same argument, for any $s \\geq 0$, $\\theta$ appears as $d_1 \\colon I_s {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} I_{s+1}$ in some (other) Adams resolution.\n\\end{proof}\n\n\\begin{ex}\nNot every primary operation appears as $d_1$ in an Adams resolution. 
For example, consider the stable homotopy category with the projective class $\\cat{P}$ generated by the sphere spectrum $S = S^0$, that is, $\\cat{P}$ consists of retracts of wedges of spheres. The $\\cat{P}$-epis (resp. $\\cat{P}$-monos) consist of the maps which are surjective (resp. injective) on homotopy groups. The primary operation $2 \\colon S \\to S$ does \\emph{not} admit a $\\cat{P}$-epi -- $\\cat{P}$-mono factorization.\n\nIndeed, assume that $2 = iq \\colon S \\twoheadrightarrow W \\hookrightarrow S$ is such a factorization. We will show that this implies $\\pi_2 (S\/2) = \\mathbb{Z}\/2 \\oplus \\mathbb{Z}\/2$, contradicting the known fact $\\pi_2 (S\/2) = \\mathbb{Z}\/4$. Here $S\/2$ denotes the mod $2$ Moore spectrum, sitting in the cofiber sequence $S \\ral{2} S \\to S\/2$.\n\nBy the octahedral axiom applied to the factorization $2 = iq$, there is a diagram\n\\[\n\\xymatrix{\nS \\ar@{=}[d] \\ar@{->>}[r]^-{q} & W \\ar@{ >->}[d]^{i} \\ar[r] & C_q \\ar[d]^{\\alpha} \\ar@{ >->}[r]^{\\delta'} & S^1 \\ar@{=}[d] \\\\\nS \\ar[r]^2 & S \\ar@{->>}[d]^{j} \\ar[r] & S\/2 \\ar[d]^{\\beta} \\ar[r]^{\\delta} & S^1 \\\\\n& C_i \\ar@{=}[r] & C_i & \\\\\n}\n\\]\nwith distinguished rows and columns. The long exact sequence in homotopy yields $\\pi_n C_q = \\mbox{}_2 \\pi_{n-1} S$, \nwhere the induced map $\\pi_n(\\delta') \\colon \\pi_n C_q \\to \\pi_n S^1$ corresponds to the inclusion $\\mbox{}_2 \\pi_{n-1} S \\hookrightarrow \\pi_{n-1} S$. Likewise, we have $\\pi_n C_i = \\left( \\pi_{n} S \\right) \/ 2$, \nwhere the induced map $\\pi_n(j) \\colon \\pi_n S \\to \\pi_n C_i$ corresponds to the quotient map $\\pi_{n} S \\twoheadrightarrow \\left( \\pi_{n} S \\right) \/ 2$. 
The defining cofiber sequence $S \\ral{2} S \\to S\/2$ yields the exact sequence\n\\[\n\\xymatrix{\n\\pi_n S \\ar[r]^2 & \\pi_n S \\ar[r] & \\pi_n (S\/2) \\ar[r]^{\\pi_n \\delta} & \\pi_{n-1} S \\ar[r]^2 & \\pi_{n-1} S \\\\\n}\n\\] \nwhich in turn yields the short exact sequence\n\\begin{equation*}%\n\\xymatrix{\n0 \\ar[r] & \\left( \\pi_n S \\right) \/ 2 \\ar[r] & \\pi_n (S\/2) \\ar[r]^{\\pi_n \\delta} & \\mbox{}_2 \\pi_{n-1} S \\ar[r] & 0. \\\\\n}\n\\end{equation*}\nThe map $\\pi_n (\\alpha) \\colon \\mbox{}_2 \\pi_{n-1} S \\to \\pi_n(S\/2)$ is a splitting of this sequence, because of the equality $\\pi_n(\\delta) \\pi_n (\\alpha) = \\pi_n(\\delta \\alpha) = \\pi_n(\\delta')$. However, the short exact sequence does not split in the case $n=2$, by the isomorphism $\\pi_2(S\/2) = \\mathbb{Z}\/4$. \nFor references, see~\\cite{Schwede12}*{Proposition II.6.48}, \\cite{Schwede10}*{Proposition 4},\nand~\\cite{MO100272}.\n\\end{ex}\n\n\n\\section{\\texorpdfstring{$3$}{3}-fold Toda brackets}\\label{se:3-fold-Toda-brackets}\n\n\nIn this section, we review different constructions of $3$-fold Toda brackets and some of their properties.\n\n\\enlargethispage{3pt}\n\\begin{defn} \\label{def:TodaBracket}\nLet $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ be a diagram in a triangulated category $\\cat{T}$. 
We define subsets of $\\cat{T}(\\Sigma X_0, X_3)$ as follows.\n\\begin{itemize}\n\\item The \\textbf{iterated cofiber Toda bracket} $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc} \\subseteq \\cat{T}(\\Sigma X_0, X_3)$ consists of all maps $\\psi \\colon \\Sigma X_0 \\to X_3$ that appear in a commutative diagram\n\\begin{equation} \\label{eq:CofCof}\n\\cxymatrix{\nX_0 \\ar@{=}[d] \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar[d]^{\\varphi} \\ar[r] & \\Sigma X_0 \\ar[d]^{\\psi} \\\\\nX_0 \\ar[r]^-{f_1} & X_1 \\ar[r]^-{f_2} & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n\\end{equation}\nwhere the top row is distinguished.\n\\item The \\textbf{fiber-cofiber Toda bracket} $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc} \\subseteq \\cat{T}(\\Sigma X_0, X_3)$ consists of all composites $\\beta \\circ \\Sigma \\alpha \\colon \\Sigma X_0 \\to X_3$, where $\\alpha$ and $\\beta$ appear in a commutative diagram\n\\begin{equation} \\label{eq:FibCof}\n\\vcenter{\n\\xymatrix @C=3.3pc {\nX_0 \\ar[d]_-{\\alpha} \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] & & \\\\\n\\Sigma^{-1} C_{f_2} \\ar[r] & X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^-{\\beta} \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n}\n\\end{equation}\nwhere the middle row is distinguished.\n\\item The \\textbf{iterated fiber Toda bracket} $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\ff} \\subseteq \\cat{T}(\\Sigma X_0, X_3)$ consists of all maps $\\Sigma \\delta \\colon \\Sigma X_0 \\to X_3$ where $\\delta$ appears in a commutative diagram\n\\begin{equation} \\label{eq:FibFib}\n\\vcenter{\n\\xymatrix @C=3.3pc {\nX_0 \\ar[d]_{\\delta} \\ar[r]^-{f_1} & X_1 \\ar[d]_{\\gamma} \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r]^-{f_3} & X_3 \\ar@{=}[d] \\\\\n\\Sigma^{-1} X_3 \\ar[r] & \\Sigma^{-1} C_{f_3} \\ar[r] & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n}\n\\end{equation}\nwhere the bottom row is distinguished.\n\\end{itemize}\n\\end{defn}\n\n\\begin{rem}\\label{re:3-fold-negation}\nIn the literature, there are variations 
of these definitions, which sometimes\ndiffer by a sign.\nWith the notion of cofiber sequence implicitly used in~\\cite{Toda62},\nour definitions agree with Toda's.\nThe Toda bracket also depends on the choice of triangulation.\nGiven a triangulation, there is an associated negative triangulation whose\ndistinguished triangles are those triangles whose negatives are distinguished\nin the original triangulation (see~\\cite{Balmer02}).\nNegating a triangulation negates the $3$-fold Toda brackets.\nDan Isaksen has pointed out to us that in the stable homotopy category\nthere are $3$-fold Toda brackets which are not equal to their own \nnegatives.\nFor example, Toda showed in~\\cite{Toda62}*{Section~VI.v, and Theorems~7.4\nand~14.1} that the Toda bracket $\\left\\langle 2 \\sigma, 8, \\nu \\right\\rangle$\nhas no indeterminacy and contains an element $\\zeta$ of order $8$.\nWe give another example in Example~\\ref{ex:negative}.\n\\end{rem}\n\nThe following proposition can be found in \\cite{Sagave08}*{Remark 4.5 and Figure 2} and was kindly pointed out by Fernando Muro. It is also proved in \\cite{Meier12}*{\\S 4.6}. We provide a different proof more in the spirit of what we do later. In the case of spaces, it was originally proved by Toda \\cite{Toda62}*{Proposition 1.7}. \n\n\\begin{prop} \\label{TodaBracketsAgree}\nThe iterated cofiber, fiber-cofiber, and iterated fiber definitions of Toda brackets coincide. 
More precisely, for any diagram $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ in $\\cat{T}$, the following subsets of $\\cat{T}(\\Sigma X_0, X_3)$ are equal:\n\\[\n\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc} = \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc} = \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\ff}.\n\\]\n\\end{prop}\n\n\\begin{proof}\nWe will prove the first equality; the second equality is dual.\n\n($\\supseteq$) Let $\\beta (\\Sigma \\alpha) \\in \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc}$ be obtained from maps $\\alpha$ and $\\beta$ as in Diagram \\eqref{eq:FibCof}. Now consider the diagram with distinguished rows\n\\[\n\\xymatrix{\nX_0 \\ar[d]^{\\alpha} \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar@{-->}[d]^{\\varphi} \\ar[r] & \\Sigma X_0 \\ar[d]^{\\Sigma \\alpha} \\\\\n\\Sigma^{-1} C_{f_2} \\ar[r] & X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^{\\beta} \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n\\]\nwhere there exists a filler $\\varphi \\colon C_{f_1} \\to X_2$. The commutativity of the tall rectangle on the right exhibits the membership $\\beta (\\Sigma \\alpha) \\in \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc}$.\n\n($\\subseteq$) Let $\\psi \\in \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc}$ be as in Diagram \\eqref{eq:CofCof}. 
The octahedral axiom comparing the cofibers of $q_1$, $\\varphi$, and $\\varphi \\circ q_1 = f_2$ yields a commutative diagram\n\\[\n\\xymatrix @C=1.1pc @R=0.88pc {\n&& && \\Sigma^{-1} C_{\\varphi} \\ar[dd]_{- \\Sigma^{-1} \\iota} \\ar@{=}[rr] & & \\Sigma^{-1} C_{\\varphi} \\ar[dd]_{- \\Sigma^{-1} \\eta} & \\\\ \\\\\nX_0 \\ar[dd]_{\\alpha} \\ar[rr]^-{f_1} && X_1 \\ar@{=}[dd] \\ar[rr]^-{q_1} && C_{f_1} \\ar[dd]_{\\varphi} \\ar[rr]^-{\\iota_1} & & \\Sigma X_0 \\ar[dd]_{\\Sigma \\alpha} \\ar@\/_1pc\/[dddl]_(0.35){\\psi} \\ar[rr]^-{- \\Sigma f_1} && \\Sigma X_1 \\ar@{=}[dd] \\\\ \\\\\n\\Sigma^{-1} C_{f_2} \\ar[rr]^(0.52){- \\Sigma^{-1} \\iota_2} && X_1 \\ar[rr]^-{f_2} && X_2 \\ar[dd]_{q} \\ar[dr]_{f_3} \\ar[rr]^(0.4){q_2} & & C_{f_2} \\ar[dd]^{\\xi} \\ar@{-->}[dl]^{\\!\\beta} \\ar[rr]^{\\iota_2} && \\Sigma X_1 \\\\\n&& && & X_3 & & \\\\\n&& && C_{\\varphi} \\ar@{-->}[ur]^-{\\theta} \\ar@{=}[rr] & & C_{\\varphi}, & \\\\\n}\n\\]\nwhere the rows and columns are distinguished. By exactness of the sequence\n\\[\n\\xymatrix @C=3.3pc {\n\\cat{T}(C_{f_2}, X_3) \\ar[r]^-{(\\Sigma \\alpha)^*} & \\cat{T}(\\Sigma X_0, X_3) \\ar[r]^-{(- \\Sigma^{-1} \\eta)^*} & \\cat{T}(\\Sigma^{-1} C_{\\varphi}, X_3)\n}\n\\]\nthere exists a map $\\beta \\colon C_{f_2} \\to X_3$ satisfying $\\psi = \\beta (\\Sigma \\alpha)$ if and only if the restriction of $\\psi$ to the fiber $\\Sigma^{-1} C_{\\varphi}$ of $\\Sigma \\alpha$ is zero. That condition does hold: one readily checks the equality $\\psi (- \\Sigma^{-1} \\eta) = 0$.\nThe chosen map $\\beta \\colon C_{f_2} \\to X_3$ might \\emph{not} satisfy the equation $\\beta q_2 = f_3$, but we will correct it to another map $\\beta'$ which does. The error term $f_3 - \\beta q_2$ is killed by restriction along $\\varphi$, %\nand therefore factors through the cofiber of $\\varphi$, i.e., there exists a factorization\n\\[\nf_3 - \\beta q_2 = \\theta \\iota\n\\]\nfor some $\\theta \\colon C_{\\varphi} \\to X_3$. 
The corrected map $\\beta' := \\beta + \\theta \\xi \\colon C_{f_2} \\to X_3$ satisfies $\\beta' q_2 = f_3$. \nMoreover, this corrected map $\\beta'$ still satisfies $\\beta' (\\Sigma \\alpha) = \\psi = \\beta (\\Sigma \\alpha)$, since the correction term satisfies $\\theta \\xi (\\Sigma \\alpha) = 0$. %\n\\end{proof}\n\nThanks to the proposition, we can write $\\left\\langle f_3, f_2, f_1 \\right\\rangle$ if we \ndo not need to specify a particular definition of the Toda bracket.\n\nWe also recall this well-known fact, and leave the proof as an exercise:\n\n\\begin{lem}\\label{le:indeterminacy}\nFor any diagram $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ in $\\cat{T}$,\nthe subset $\\left\\langle f_3, f_2, f_1 \\right\\rangle$ of $\\cat{T}(\\Sigma X_0, X_3)$ is a coset of\nthe subgroup\n\\[\n (f_3)_* \\, \\cat{T}(\\Sigma X_0, X_2) + (\\Sigma f_1)^* \\, \\cat{T}(\\Sigma X_1, X_3) .\n\\vspace{-18pt}\n\\]\n\\qed\n\\end{lem}\n\nThe displayed subgroup is called the \\textbf{indeterminacy}, and when\nit is trivial, we say that the Toda bracket \\textbf{has no indeterminacy}.\n\n\\begin{lem}\\label{le:MoveAround}\nConsider maps $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3 \\ral{f_4} X_4$. 
Then the following inclusions of subsets of $\\cat{T}(\\Sigma X_0, X_4)$ hold.\n\\begin{enumerate}\n\\item\n\\[\nf_4 \\left\\langle f_3, f_2, f_1 \\right\\rangle \\subseteq \\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle\n\\]\n\\item\n\\[\n\\left\\langle f_4, f_3, f_2 \\right\\rangle f_1 \\subseteq \\left\\langle f_4, f_3, f_2 f_1 \\right\\rangle\n\\]\n\\item\n\\[\n\\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle\n\\]\n\\item\n\\[\n\\left\\langle f_4, f_3, f_2 f_1 \\right\\rangle \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle.\n\\]\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n(1)-(2) These inclusions %\nare straightforward.\n\n(3)-(4) Using the iterated cofiber definition, the subset $\\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle_{\\cc}$ consists of the maps $\\psi \\colon \\Sigma X_0 \\to X_4$ appearing in a commutative diagram\n\\[\n\\xymatrix{\nX_0 \\ar@{=}[d] \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar[d]^{\\varphi} \\ar[rr] & & \\Sigma X_0 \\ar[d]^{\\psi} \\\\\nX_0 \\ar[r]^-{f_1} & X_1 \\ar[r]^-{f_2} & X_2 \\ar[r]^-{f_3} & X_3 \\ar[r]^-{f_4} & X_4 \\\\\n}\n\\]\nwhere the top row is distinguished. Given such a diagram, the diagram\n\\[\n\\xymatrix{\nX_0 \\ar@{=}[d] \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar[dr]^{f_3 \\varphi} \\ar[rr] & & \\Sigma X_0 \\ar[d]^{\\psi} \\\\\nX_0 \\ar[r]^-{f_1} & X_1 \\ar[r]^-{f_2} & X_2 \\ar[r]^-{f_3} & X_3 \\ar[r]^-{f_4} & X_4 \\\\\n}\n\\]\nexhibits the membership $\\psi \\in \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle_{\\cc}$. A similar argument can be used to prove the inclusion $\\left\\langle f_4, f_3, f_2 f_1 \\right\\rangle_{\\ff} \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle_{\\ff}$.\n\\end{proof}\n\n\\begin{ex}\nThe inclusion $\\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle$ need not be an equality. 
For example, consider the maps $X \\ral{0} Y \\ral{1} Y \\ral{0} Z \\ral{1} Z$. The Toda brackets being compared are\n\\begin{align*}\n\\left\\langle 1_Z 0, 1_Y, 0 \\right\\rangle &= \\left\\langle 0, 1_Y, 0 \\right\\rangle \\\\\n&= \\left\\{ 0 \\right\\} \\\\\n\\left\\langle 1_Z, 0 1_Y, 0 \\right\\rangle &= \\left\\langle 1_Z, 0, 0 \\right\\rangle \\\\\n&= \\cat{T}(\\Sigma X, Z).\n\\end{align*}\n\\end{ex}\n\n\\begin{defn}\nIn the setup of Definition \\ref{def:TodaBracket}, the \\textbf{restricted Toda brackets} are the subsets of the Toda bracket\n\\begin{align*}\n&\\left\\langle f_3, \\stackrel{\\alpha}{f_2, f_1} \\right\\rangle_{\\fc} \\subseteq \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc} \\\\\n&\\left\\langle \\stackrel{\\beta}{f_3, f_2}, f_1 \\right\\rangle_{\\fc} \\subseteq \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc}\n\\end{align*}\nconsisting of all composites $\\beta (\\Sigma \\alpha) \\colon \\Sigma X_0 \\to X_3$, where $\\alpha$ and $\\beta$ appear in a commutative diagram \\eqref{eq:FibCof} where the middle row is distinguished, with the prescribed map $\\alpha \\colon X_0 \\to \\Sigma^{-1} C_{f_2} $ (resp. $\\beta \\colon C_{f_2} \\to X_3$).\n\\end{defn}\n\n\n\n\nThe lift to the fiber $\\alpha \\colon X_0 \\to \\Sigma^{-1} C_{f_2}$ is a witness of the equality $f_2 f_1 = 0$. Dually, the extension to the cofiber $\\beta \\colon C_{f_2} \\to X_3$ is a witness of the equality $f_3 f_2 = 0$.\n\n\\begin{rem} \\label{ComposeWitness}\nLet $X_1 \\ral{f_2} X_2 \\ral{q_2} C_{f_2} \\ral{\\iota_2} \\Sigma X_1$ be a distinguished triangle. By definition, we have equalities of subsets\n\\begin{align*}\n&\\left\\langle f_3, \\stackrel{\\alpha}{f_2, f_1} \\right\\rangle_{\\fc} = \\left\\langle f_3, \\stackrel{1}{f_2, {-}}\\!\\! 
\\Sigma^{-1} \\iota_2 \\right\\rangle_{\\fc} (\\Sigma \\alpha) \\\\\n&\\left\\langle \\stackrel{\\beta}{f_3, f_2}, f_1 \\right\\rangle_{\\fc} = \\beta \\left\\langle \\stackrel{1}{q_2, f_2}, f_1 \\right\\rangle_{\\fc}.\n\\end{align*}\n\\end{rem}\n\n\n\n\n\n\\section{Adams \\texorpdfstring{$d_2$}{d2} in terms of \\texorpdfstring{$3$}{3}-fold Toda brackets}\\label{se:AdamsD2}\n\nIn this section, we show that the Adams differential $d_r$ can be expressed in several ways \nusing $3$-fold Toda brackets. One of these expressions is as a secondary cohomology operation.\n\nGiven an injective class $\\cat{I}$,\nan Adams resolution of an object $Y$ as in Diagram~\\eqref{eq:AdamsResolInj}, and an object $X$, consider a class $[x] \\in E_2^{s,t}$ represented by a cycle $x \\in E_1^{s,t} = \\cat{T}(\\Sigma^{t-s} X, I_s)$. Recall that $d_2 [x] \\in E_2^{s+2,t+1} $ is obtained as illustrated in the diagram \n\\begin{equation*}\n\\xymatrix{\n\\cdots & Y_s \\ar[l] \\ar@{ >->}[dr]_{p_s} & & Y_{s+1} \\ar@{ >->}[dr]_{p_{s+1}} \\ar[ll]_{i_s} & & Y_{s+2} \\ar@{ >->}[dr]_{p_{s+2}} \\ar[ll]_{i_{s+1}} & & Y_{s+3} \\ar[ll]_{i_{s+2}} & \\cdots \\ar[l] \\\\\n& & I_s \\circar[ur]_{\\delta_s} & & I_{s+1} \\circar[ur]_{\\delta_{s+1}} & & I_{s+2} \\circar[ur]_{\\delta_{s+2}} & & \\\\\n& & \\Sigma^{t-s} X \\ar[u]_x \\ar@\/^0.5pc\/@{-->}[uurrr]_(0.4){\\widetilde{x}} \\ar@\/_0.5pc\/[urrrr]_{d_2(x)} & & & & & & \\\\\n}\n\\end{equation*}%\nExplicitly, since $x$ satisfies $d_1(x) = (\\Sigma p_{s+1}) \\delta_s x = 0$, we can choose a lift $\\widetilde{x} \\colon \\Sigma^{t-s} X {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} \\Sigma Y_{s+2}$ of $\\delta_s x$ to the fiber of $\\Sigma p_{s+1}$. Then the differential $d_2$ is given by \n\\[\nd_2 [x] = \\left[ (\\Sigma p_{s+2}) \\widetilde{x} \\right].\n\\]\n\nFrom now on, we will unroll the distinguished triangles and keep track of the suspensions. 
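\nBefore unrolling, we record a standard check, included here for completeness: the representative $(\\Sigma p_{s+2}) \\widetilde{x}$ is automatically a $d_1$-cycle. Indeed, consecutive maps in the distinguished triangle $Y_{s+3} \\ral{i_{s+2}} Y_{s+2} \\ral{p_{s+2}} I_{s+2} \\ral{\\delta_{s+2}} \\Sigma Y_{s+3}$ compose to zero, so that $\\delta_{s+2} p_{s+2} = 0$ and hence\n\\[\n(\\Sigma d_1) \\left( (\\Sigma p_{s+2}) \\widetilde{x} \\right) = (\\Sigma^2 p_{s+3}) (\\Sigma \\delta_{s+2}) (\\Sigma p_{s+2}) \\widetilde{x} = 0.\n\\]\n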
%\nFollowing Convention \\ref{co:SuspensionIso}, we will use the identifications\n\\[\nE_1^{s+2,t+1} = \\cat{T}(\\Sigma^{t-s-1} X, I_{s+2}) \\cong \\cat{T}(\\Sigma^{t-s} X, \\Sigma I_{s+2}) \\cong \\cat{T}(\\Sigma^{t-s+1} X, \\Sigma^2 I_{s+2}).\n\\]\n\n\\begin{prop} \\label{pr:DifferentD2}\nDenote by $d_2 [x] \\subseteq E_1^{s+2,t+1}$ the subset of all representatives of the class $d_2 [x] \\in E_2^{s+2,t+1}$. Then the following equalities hold:\n\\begin{enumerate}\n\\item\\label{it:d1pdex}\n\\begin{align*}\nd_2 [x] &= \\left\\langle \\stackrel{\\Sigma^2 p_{s+2}}{\\Sigma d_1,\\strut \\Sigma p_{s+1}}, \\delta_s x \\right\\rangle_{\\fc} \\\\\n&= \\left\\langle \\Sigma d_1, \\Sigma p_{s+1}, \\delta_s x \\right\\rangle\n\\end{align*}\n\\item\n\\begin{align*}\nd_2 [x] &= (\\Sigma^2 p_{s+2}) \\left\\langle \\stackrel{\\!\\!1}{\\Sigma \\delta_{s+1}, \\Sigma p_{s+1}}, \\delta_s x \\right\\rangle_{\\fc} \\\\\n&= (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, \\Sigma p_{s+1}, \\delta_s x \\right\\rangle\n\\end{align*}\n\\item\\label{it:d1d1x}\n\\[\nd_2 [x] = \\left\\langle \\stackrel{\\ \\beta}{\\Sigma d_1, d_1}, x \\right\\rangle_{\\fc} ,\n\\]\nwhere $\\beta$ is the composite $C \\ral{\\widetilde{\\beta}} \\Sigma^{2} Y_{s+2} \\ral{\\Sigma^2 p_{s+2}} \\Sigma^{2} I_{s+2}$ and $\\widetilde{\\beta}$ is obtained from the octahedral axiom applied to the factorization $d_1 = (\\Sigma p_{s+1}) \\delta_{s} \\colon I_s \\to \\Sigma Y_{s+1} \\to \\Sigma I_{s+1}$.\n\\end{enumerate}\n\\end{prop}\n\nIn~\\eqref{it:d1d1x}, $\\beta$ is a witness to the fact that the composite $(\\Sigma d_1) d_1$ \nof primary operations is zero, and so the restricted Toda bracket is a secondary operation.\n\n\\begin{proof}\nNote that $t$ plays no role in the statement, so we will assume without loss of generality that $t=s$ holds.\n\n(1) The first equality holds by definition of $d_2 [x]$, namely choosing a lift of $\\delta_s x$ to the fiber of $\\Sigma p_{s+1}$. 
\nThe second equality follows from the fact that $\\Sigma^2 p_{s+2}$ is the \\emph{unique} extension of $\\Sigma d_1 = (\\Sigma^2 p_{s+2}) (\\Sigma \\delta_{s+1})$ to the cofiber of $\\Sigma p_{s+1}$. Indeed, $\\Sigma \\delta_{s+1}$ is $\\cat{I}$-epic and $\\Sigma I_{s+2}$ is injective, so that the restriction map\n\\[\n(\\Sigma \\delta_{s+1})^* \\colon \\cat{T}(\\Sigma^2 Y_{s+2}, \\Sigma^2 I_{s+2}) \\to \\cat{T}(\\Sigma I_{s+1}, \\Sigma^2 I_{s+2})\n\\]\nis injective.\n\n(2) The first equality holds by Remark \\ref{ComposeWitness}. The second equality holds because $\\Sigma \\delta_{s+1}$ is $\\cat{I}$-epic and $\\Sigma I_{s+2}$ is injective, as in part (1).\n\n\n(3) The map $d_1 \\colon I_s \\to \\Sigma I_{s+1}$ is the composite $I_s \\ral{\\delta_s} \\Sigma Y_{s+1} \\ral{\\Sigma p_{s+1}} \\Sigma I_{s+1}$. The octahedral axiom applied to this factorization yields the dotted arrows in a commutative diagram\n\\[\n\\cxymatrix{\nI_s \\ar@{=}[d] \\ar[r]^-{\\delta_s} & \\Sigma Y_{s+1} \\ar[d]_{\\Sigma p_{s+1}} \\ar[r]^-{\\Sigma i_s} & \\Sigma Y_s \\ar@{-->}[d]^{\\widetilde{\\alpha}} \\ar[r]^-{-\\Sigma p_s} & \\Sigma I_s \\ar@{=}[d] \\\\\nI_s \\ar[r]^-{d_1} & \\Sigma I_{s+1} \\ar[d]_{\\Sigma \\delta_{s+1}} \\ar@{-->}[r]^-{q} & C_{d_1} \\ar@{-->}[d]^{\\widetilde{\\beta}} \\ar@{-->}[r]^-{\\iota} & \\Sigma I_s \\\\\n& \\Sigma^2 Y_{s+2} \\ar[d]_{-\\Sigma^{2} i_{s+1}} \\ar@{=}[r] & \\Sigma^2 Y_{s+2} \\ar[d] & \\\\\n& \\Sigma^2 Y_{s+1} \\ar[r]^{\\Sigma^{2} i_s} & \\Sigma^2 Y_{s} & \\\\\n}\n\\]\nwhere the rows and columns are distinguished and the equation $(-\\Sigma^2 i_{s+1}) \\widetilde{\\beta} = (\\Sigma \\delta_s) \\iota$ holds. 
The restricted bracket $\\left\\langle \\stackrel{\\ \\beta}{\\Sigma d_1, d_1}, x \\right\\rangle_{\\fc}$ consists of the maps $\\Sigma X \\to \\Sigma^2 I_{s+2}$ appearing as downward composites in the commutative diagram\n\\[\n\\xymatrix@C-7pt@R-3pt{\n& & & & \\Sigma X \\ar@{-->}[d]_-{\\Sigma \\alpha} \\ar[rr]^{- \\Sigma x} & & \\Sigma I_s \\ar@{=}[d] \\\\\nI_s \\ar[rr]^-{d_1} & & \\Sigma I_{s+1} \\ar@{=}[dd] \\ar[rr]^-{q} & & C_{d_1} \\ar[dl]_(0.55){\\widetilde{\\beta}\\!\\!} \\ar[dd]^{\\beta} \\ar[rr]^-{\\iota} && \\Sigma I_s \\\\\n& & & \\Sigma^2 Y_{s+2} \\ar[dr]^-{\\!\\Sigma^2 p_{s+2}} & & \\\\\n& & \\Sigma I_{s+1} \\ar[rr]_-{\\Sigma d_1} \\ar[ur]^-{\\Sigma \\delta_{s+1}\\!} & & \\Sigma^2 I_{s+2} & \\\\\n}\n\\]\n\n($\\supseteq$) Let $\\beta (\\Sigma \\alpha) \\in \\left\\langle \\stackrel{\\ \\beta}{\\Sigma d_1, d_1}, x \\right\\rangle_{\\fc}$. By definition of $\\beta$, we have $\\beta (\\Sigma \\alpha) = (\\Sigma^2 p_{s+2}) \\widetilde{\\beta} (\\Sigma \\alpha)$. Then $\\widetilde{\\beta} (\\Sigma \\alpha) \\colon \\Sigma X \\to \\Sigma^{2} Y_{s+2}$ is a valid choice of the lift $\\widetilde{x}$ in the definition of $d_2[x]$:\n\\begin{align*}\n(\\Sigma^2 i_{s+1}) \\widetilde{\\beta} (\\Sigma \\alpha) &= -(\\Sigma \\delta_s) \\iota (\\Sigma \\alpha) \\\\%By the equation -i_{s+1} \\widetilde{\\beta} = \\delta_s \\iota in the octahedron\n&= - (\\Sigma \\delta_s) (-\\Sigma x) \\\\\n&= \\Sigma (\\delta_s x).\n\\end{align*}\n($\\subseteq$) Given a representative $(\\Sigma p_{s+2}) \\widetilde{x} \\in d_2 [x]$, let us show that $\\Sigma \\widetilde{x} \\colon \\Sigma X \\to \\Sigma^2 Y_{s+2}$ factors as $\\Sigma X \\ral{\\Sigma \\alpha} C_{d_1} \\ral{\\widetilde{\\beta}} \\Sigma^{2} Y_{s+2}$ for some $\\Sigma \\alpha$, yielding a factorization of the desired form\n\\begin{align*}\n(\\Sigma^2 p_{s+2}) (\\Sigma \\widetilde{x}) &= (\\Sigma^2 p_{s+2}) \\widetilde{\\beta} (\\Sigma \\alpha) \\\\\n&= \\beta (\\Sigma \\alpha).\n\\end{align*}\nBy construction, the map $(\\Sigma^2 i_s) 
(-\\Sigma^2 i_{s+1}) \\colon \\Sigma^2 Y_{s+2} \\to \\Sigma^2 Y_{s}$ is a cofiber of $\\widetilde{\\beta}$. The condition\n\\[\n(\\Sigma^2 i_s) (\\Sigma^2 i_{s+1}) (\\Sigma \\widetilde{x}) = (\\Sigma^2 i_s) \\Sigma (\\delta_s x) = 0\n\\]\nguarantees the existence of some lift $\\Sigma \\alpha \\colon \\Sigma X \\to C_{d_1}$ of $\\Sigma \\widetilde{x}$. The chosen lift $\\Sigma \\alpha$ might \\emph{not} satisfy $\\iota (\\Sigma \\alpha) = - \\Sigma x$, but we will correct it to a lift $\\Sigma \\alpha'$ which does. The two sides of the equation become equal after applying $-\\Sigma \\delta_s$, i.e., $(-\\Sigma \\delta_s) (-\\Sigma x) = (-\\Sigma \\delta_s) \\iota (\\Sigma \\alpha)$ holds.\nHence, the error term factors as\n\\[\n-\\Sigma x - \\iota \\Sigma \\alpha = (-\\Sigma p_s)(\\Sigma \\theta)\n\\]\nfor some $\\Sigma \\theta \\colon \\Sigma X \\to \\Sigma Y_s$, since $-\\Sigma p_s$ is a fiber of $-\\Sigma \\delta_s$. The corrected map $\\Sigma \\alpha' := \\Sigma \\alpha + \\widetilde{\\alpha} (\\Sigma \\theta) \\colon \\Sigma X \\to C_{d_1}$ satisfies $\\iota (\\Sigma \\alpha') = - \\Sigma x$ \nand still satisfies $\\widetilde{\\beta} (\\Sigma \\alpha') = \\widetilde{\\beta} (\\Sigma \\alpha) = \\Sigma \\widetilde{x}$, since the correction term $\\widetilde{\\alpha} (\\Sigma \\theta)$ satisfies $\\widetilde{\\beta} \\widetilde{\\alpha} (\\Sigma \\theta) = 0$. 
\n\\end{proof}\n\n\\begin{prop}\\label{pr:inclusions}\nThe following inclusions of subsets hold in $E_1^{s+2,t+1}$: %\n\\[\nd_2 [x] \\subseteq (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, d_1, x \\right\\rangle \\subseteq \\left\\langle \\Sigma d_1, d_1, x \\right\\rangle .\n\\]\n\\end{prop}\n\n\\begin{proof}\nThe first inclusion is \n\\[\nd_2 [x] = (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, \\Sigma p_{s+1}, \\delta_s x \\right\\rangle \\subseteq (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, (\\Sigma p_{s+1}) \\delta_s, x \\right\\rangle,\n\\]\nwhereas the second inclusion is\n\\[\n(\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, d_1, x \\right\\rangle \\subseteq \\left\\langle (\\Sigma^2 p_{s+2}) (\\Sigma \\delta_{s+1}), d_1, x \\right\\rangle,\n\\] \nboth using Lemma~\\ref{le:MoveAround}.\n\\end{proof}\n\n\\begin{prop}\\label{pr:proper-inclusion}\nThe inclusion $(\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, d_1, x \\right\\rangle \\subseteq \\left\\langle \\Sigma d_1, d_1, x \\right\\rangle$ need \\emph{not} be an equality in general.\n\\end{prop}\n\nIt was pointed out to us by Robert Bruner that this can happen in principle.\nWe give an explicit example in Proposition~\\ref{pr:proper-inclusion-C4}.\n\n\n\n\\section{Higher Toda brackets}\\label{se:HigherBrackets}\n\n\n\n\n\n\nWe saw in Section~\\ref{se:3-fold-Toda-brackets} that there are several equivalent ways\nto define $3$-fold Toda brackets.\nFollowing the approach given in~\\cite{McKeown12}, we show that\nthe fiber-cofiber definition generalizes nicely to $n$-fold Toda brackets.\nThere are $(n-2)!$ ways to make this generalization, and we prove\nthat they are all the same up to a specified sign.\nWe also show that this Toda bracket is self-dual.\n\nOther sources that discuss higher Toda brackets in triangulated categories\nare~\\cite{Shipley02}*{Appendix A}, \\cite{Gelfand03}*{IV \\S 2} and~\\cite{Sagave08}*{\\S 4},\nwhich all give definitions 
that follow Cohen's approach for spectra or spaces~\\cite{Cohen68}.\nWe show that our definition agrees with those of~\\cite{Shipley02} and~\\cite{Sagave08}.\n(We believe that it sometimes differs in sign from~\\cite{Cohen68}. We have not compared carefully with~\\cite{Gelfand03}.)\n\n\\begin{defn}\\label{def:TodaFamily}\nLet $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ be a diagram\nin a triangulated category $\\cat{T}$.\nWe define the \\textbf{Toda family}\nof this sequence to be the collection $\\mathrm{T}(f_3, f_2, f_1)$\nconsisting of all pairs $(\\beta, \\Sigma \\alpha)$, where $\\alpha$ and $\\beta$ appear in a commutative diagram\n\\[\n\\xymatrix @C=3.3pc {\nX_0 \\ar[d]_-{\\alpha} \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] & & \\\\\n\\Sigma^{-1} C_{f_2} \\ar[r] & X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^-{\\beta} \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n\\]\nwith distinguished middle row.\nEquivalently,\n\\[\n\\xymatrix @C=3.3pc {\n& & & \\Sigma X_0 \\ar[d]_-{\\Sigma \\alpha} \\ar[r]^-{-\\Sigma f_1} & \\Sigma X_1 \\ar@{=}[d] \\\\\n& X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^-{\\beta} \\ar[r] & \\Sigma X_1 \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 , \\\\\n}\n\\]\nwhere the middle row is again distinguished. 
(The negative of $\\Sigma f_1$\nappears, since when a triangle is rotated, a sign is introduced.)\nNote that the maps in each pair form a composable sequence\n$\\Sigma X_0 \\ral{\\Sigma \\alpha} C_{f_2} \\ral{\\beta} X_3$,\nwith varying intermediate object,\nand that the collection of composites $\\beta \\circ \\Sigma \\alpha$ is exactly the\nToda bracket $\\langle f_3, f_2, f_1 \\rangle$, using the fiber-cofiber definition\n(see Diagram~\\eqref{eq:FibCof}).\n(Also note that the Toda family is generally a proper class,\nbut this is only because the intermediate object can be varied up to isomorphism,\nand so we will ignore this.)\n\nMore generally, if $S$ is a set of composable triples of maps,\nstarting at $X_0$ and ending at $X_3$, we define $\\mathrm{T}(S)$ to\nbe the union of $\\mathrm{T}(f_3, f_2, f_1)$ for each triple\n$(f_3, f_2, f_1)$ in $S$.\n\\end{defn}\n\n\\begin{defn}\\label{def:HigherToda}\nLet\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} \\cdots \\ral{f_n} X_n$\nbe a diagram in a triangulated category $\\cat{T}$.\nWe define the \\textbf{Toda bracket} $\\langle f_n, \\ldots, f_1 \\rangle$\ninductively as follows.\nIf $n = 2$, it is the set consisting of just the composite $f_2 f_1$.\nIf $n > 2$, it is the union of the sets\n$\\langle \\beta, \\Sigma \\alpha, \\Sigma f_{n-3}, \\ldots, \\Sigma f_1 \\rangle$,\nwhere $(\\beta, \\Sigma \\alpha)$ is in $T(f_n, f_{n-1}, f_{n-2})$.\n\\end{defn}\n\n\nIn fact, there are $(n-2)!$ such definitions, depending on a\nsequence of choices of which triple of consecutive maps to apply\nthe Toda family construction to.\nIn Theorem~\\ref{th:n-fold} we will enumerate these choices\nand show that they all agree up to sign.\n\n\\begin{ex}\\label{ex:4FoldBracket}\nLet us describe $4$-fold Toda brackets in more detail. 
We have\n\\[\n\\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle\n = \\bigcup_{\\beta, \\alpha} \\left\\langle \\beta, \\Sigma \\alpha, \\Sigma f_1 \\right\\rangle\n = \\bigcup_{\\beta, \\alpha} \\bigcup_{\\beta', \\alpha'} \\{ \\beta' \\circ \\Sigma \\alpha' \\}\n\\]\nwith $(\\beta, \\Sigma \\alpha) \\in T(f_4, f_3, f_2)$ and $(\\beta', \\Sigma \\alpha') \\in T(\\beta, \\Sigma \\alpha, \\Sigma f_1)$. These maps fit into a commutative diagram\n\\[\n \\xymatrix{\n \\Sigma^2 X_0 \\ar[r]^-{\\Sigma \\alpha'} &\tC_{\\Sigma \\alpha} \\ar[r] \\ar[ddr]^(0.3){\\beta'} &\t\\Sigma^2 X_1 & \\text{row = $\\mathrlap{-\\Sigma^2 f_1}$} \\\\\n \\Sigma X_1 \\ar[r]^{\\Sigma \\alpha} &\tC_{f_3} \\ar[r] \\ar[dr]_(0.45){\\beta} \\ar[u] &\t\\Sigma X_2 &\t\\text{row = $\\mathrlap{-\\Sigma f_2}$} \\\\\n X_2 \\ar[r]^{f_3} &\tX_3 \\ar[u] \\ar[r]_{f_4} &\tX_4 \\\\\n & 0 \\ar[u] \\\\\n }\n\\]\nwhere the horizontal composites are specified as above, and each ``snake''\n\\[\n\\xymatrix{\n& \\cdot \\ar[r] & \\cdot \\\\\n\\cdot \\ar[r] & \\cdot \\ar[u] & \\\\\n}\n\\]\nis a distinguished triangle.\nThe middle column is an example of a \\emph{$3$-filtered object} as defined below.\n\\end{ex}\n\nNext, we will show that Definition \\ref{def:HigherToda} coincides with the definitions of higher Toda brackets in~\\cite{Shipley02}*{Appendix A} and~\\cite{Sagave08}*{\\S 4}, which we recall here.\n\n\\begin{defn}\\label{def:NFiltered}\nLet $n \\geq 1$ and consider a diagram in $\\cat{T}$\n\\[\n\\xymatrix{\nY_0 \\ar[r]^-{\\lambda_1} &\tY_1 \\ar[r]^-{\\lambda_2} &\tY_2 \\ar[r] &\t\\cdots \\ar[r]^-{\\lambda_{n-1}} &\tY_{n-1} \\\\\n} \n\\]\nconsisting of $n-1$ composable maps. 
An \\textbf{$n$-filtered object} $Y$ based on $(\\lambda_{n-1}, \\ldots, \\lambda_1)$ consists of a sequence of maps\n\\[\n\\xymatrix{\n0 = F_0 Y \\ar[r]^-{i_{0}} &\tF_{1} Y \\ar[r]^-{i_{1}} &\t\\cdots \\ar[r]^-{i_{n-1}} &\tF_n Y = Y \\\\ \n}\n\\]\ntogether with distinguished triangles\n\\[\n\\xymatrix{\nF_{j} Y \\ar[r]^-{i_j} &\tF_{j+1} Y \\ar[r]^-{q_{j+1}} & \\Sigma^{j} Y_{n-1-j} \\ar[r]^-{e_j} &\t\\Sigma F_j Y \\\\\t\n}\n\\]\nfor $0 \\leq j \\leq n-1$, such that for $1 \\leq j \\leq n-1$, the composite\n\\[\n\\xymatrix{\n\\Sigma^j Y_{n-1-j} \\ar[r]^-{e_j} &\t\\Sigma F_j Y \\ar[r]^-{\\Sigma q_j} &\t\\Sigma^{j} Y_{n-j} \\\\\n}\n\\]\nis equal to $\\Sigma^j \\lambda_{n-j}$. In particular, the $n$-filtered object $Y$ comes equipped with maps\n\\[\n\\sigma'_Y \\colon Y_{n-1} \\cong F_1 Y \\to Y \n\\]\n\\[\n\\sigma_Y \\colon Y = F_n Y \\to \\Sigma^{n-1} Y_0.\n\\]\n\\end{defn}\n\n\\begin{defn}\\label{def:HigherTodaSS}\nLet $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} \\cdots \\ral{f_n} X_n$\nbe a diagram in a triangulated category $\\cat{T}$.\nThe \\textbf{Toda bracket} in the sense of Shipley--Sagave $\\langle f_n, \\ldots, f_1 \\rangle_{\\Ship} \\subseteq \\cat{T}(\\Sigma^{n-2} X_0, X_n)$ is the set of all composites appearing in the middle row of a commutative diagram\n\\[\n\\xymatrix{\n& X_{n-1} \\ar[d]_{\\sigma'_X} \\ar[dr]^-{f_n} & \\\\\n\\Sigma^{n-2} X_0 \\ar[dr]_{\\Sigma^{n-2} f_1} \\ar@{-->}[r] & X \\ar[d]^{\\sigma_X} \\ar@{-->}[r] & X_n \\\\\n& \\Sigma^{n-2} X_1 , & \\\\\n}\n\\]\nwhere $X$ is an $(n-1)$-filtered object based on $(f_{n-1}, \\ldots, f_3, f_2)$.\n\\end{defn}\n\n\\begin{ex}\nFor a $3$-fold Toda bracket $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\Ship}$, a $2$-filtered object $X$ based on $f_2$ amounts to a cofiber of $-f_2$, more precisely, a distinguished triangle\n\\[\n\\xymatrix{\nX_2 \\ar[r]^-{\\sigma'_X} & X \\ar[r]^-{\\sigma_X} & \\Sigma X_1 \\ar[r]^-{\\Sigma f_2} & \\Sigma X_2.\n}\n\\]\nUsing this, one readily checks the equality 
$\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\Ship} = \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc}$, as noted in \\cite{Sagave08}*{Definition 4.5}.\n\\end{ex}\n\n\\begin{ex}\\label{ex:4FoldBracketSS}\nFor a $4$-fold Toda bracket $\\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle_{\\Ship}$, a $3$-filtered object $X$ based on $(f_3, f_2)$ consists of the data displayed in the diagram\n\\[\n \\xymatrix{\n & F_3 X = X \\ar[r]^-{q_3 = \\sigma_X} & \\Sigma^2 X_1 & \\\\\n \\Sigma X_1 \\ar[r]^-{- \\Sigma^{-1} e_2} & F_2 X \\ar[r]^-{q_2} \\ar[u]_{i_2} & \\Sigma X_2 & \\text{row = $\\mathrlap{-\\Sigma f_2}$} \\\\\n X_2 \\ar[r]^-{- \\Sigma^{-1} e_1} & F_1 X \\ar[u]_{i_1} \\ar[r]^-{q_1}_-{\\cong} & X_3 & \\text{row = $\\mathrlap{-f_3}$} \\\\\n & F_0 X = 0 , \\ar[u]_{i_0} \\\\\n }\n\\]\nwhere the two snakes are distinguished. The bracket consists of the maps $\\Sigma^2 X_0 \\to X_4$ appearing as composites of the dotted arrows in a commutative diagram\n\\[\n \\xymatrix{\n \\Sigma^2 X_0 \\ar@{-->}[r] & X \\ar[r]^-{\\sigma_X} \\ar@\/^1pc\/@{-->}[ddr] & \\Sigma^2 X_1 & \\text{row = $\\mathrlap{\\Sigma^2 f_1}$} \\\\\n \\Sigma X_1 \\ar[r]^-{- \\Sigma^{-1} e_2} & F_2 X \\ar[r]^-{q_2} \\ar[u] & \\Sigma X_2 & \\text{row = $\\mathrlap{-\\Sigma f_2}$} \\\\\n X_2 \\ar[r]^-{- f_3} & X_3 \\ar[u] \\ar[r]^-{f_4} & X_4 & \\\\\n & 0 , \\ar[u] \\\\\n }\n\\]\nwhere the two snakes are distinguished. By negating the first and third map in each snake,\nthis recovers the description in Example \\ref{ex:4FoldBracket}, thus proving the equality of subsets\n\\[\n\\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle_{\\Ship} = \\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle.\n\\]\n\\end{ex}\n\n\\begin{prop}\nDefinitions \\ref{def:HigherToda} and \\ref{def:HigherTodaSS} agree. 
In other words, we have the equality\n\\[\n\\left\\langle f_n, \\ldots, f_1 \\right\\rangle_{\\Ship} = \\left\\langle f_n, \\ldots, f_1 \\right\\rangle\n\\]\nof subsets of $\\cat{T}(\\Sigma^{n-2} X_0, X_n)$.%\n\\end{prop}\n\n\\begin{proof}\nThis is a straightforward generalization of Example \\ref{ex:4FoldBracketSS}.\n\\end{proof}\n\nWe define the \\textbf{negative} of a Toda family $T(f_3, f_2, f_1)$\nto consist of pairs $(\\beta, -\\Sigma \\alpha)$ for $(\\beta, \\Sigma \\alpha) \\in T(f_3, f_2, f_1)$.\n(Since changing the sign of two maps in a triangle doesn't affect\nwhether it is distinguished, it would be equivalent to put the\nminus sign with the $\\beta$.)\n\n\\begin{lem}\\label{le:four-fold}\nLet\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3 \\ral{f_4} X_4$\nbe a diagram in a triangulated category $\\cat{T}$.\nThen the two sets of pairs\n$T(T(f_4, f_3, f_2), \\Sigma f_1)$ and\n$T(f_4, T(f_3, f_2, f_1))$ are negatives of each other.\n\\end{lem}\n\nThis is stronger than saying that the two ways of computing the Toda bracket\n$\\langle f_4, f_3, f_2, f_1 \\rangle$ are negatives, and the stronger statement\nwill be used inductively to prove Theorem~\\ref{th:n-fold}.\n\n\\begin{proof}\nWe will show that the negative of\n$T(T(f_4, f_3, f_2), \\Sigma f_1)$\nis contained in the family\n$T(f_4, T(f_3, f_2, f_1))$.\nThe reverse inclusion is proved dually.\n\nSuppose $(\\beta, \\Sigma \\alpha)$ is in $T(T(f_4, f_3, f_2), \\Sigma f_1)$,\nthat is, $(\\beta, \\Sigma \\alpha)$ is in $T(\\beta', \\Sigma \\alpha', \\Sigma f_1)$\nfor some $(\\beta', \\Sigma \\alpha')$ in $T(f_4, f_3, f_2)$.\nThis means that we have the following commutative diagram\n\\[\n \\xymatrix@!@C=-0.2ex@R=-0.2ex{\n && && \\Sigma X_1 \\ar[rr]^{-\\Sigma f_2} \\ar@{-->}[dd]^{\\Sigma \\alpha'} && \\Sigma X_2 \\ar@{=}[dd] \\\\\n \\\\\nX_2 \\ar[rr]^{f_3} && X_3 \\ar[dr]_-{f_4} \\ar[rr] && C_{f_3} \\ar@{-->}[dl]^{\\!\\beta'} \\ar[rr] \\ar[dd] && \\Sigma X_2 \\\\\n && & X_4 \\\\\n && && C_{\\Sigma 
\\alpha'} \\ar@{-->}[ul]_{\\!\\beta} \\ar[dd] \\\\\n && & \\Sigma^2 X_0 \\ar@{-->}[ur]^{\\Sigma \\alpha} \\ar[dr]_-{- \\Sigma^2 f_1} \\\\\n && && \\Sigma^2 X_1 && \\\\\n }\n\\]\nin which the long row and column are distinguished triangles.\n\nUsing the octahedral axiom, there exists a map $\\delta : C_{f_2} \\to X_3$\nin the following diagram making the two squares commute\nand such that the diagram can be extended as shown,\nwith all rows and columns distinguished:\n\\[\n \\xymatrix@!@C=-0.7ex@R=-0.7ex{\n && & \\Sigma X_0 \\ar@{-->}[dl]_{\\gamma\\!} \\ar[dr]^{\\!-\\Sigma f_1}\\\\\nX_2 \\ar[rr] \\ar@{=}[dd] && C_{f_2} \\ar[rr] \\ar@{-->}[dd]^{\\delta} && \\Sigma X_1 \\ar[rr]^{-\\Sigma f_2} \\ar@{-->}[dd]^{\\Sigma \\alpha'} && \\Sigma X_2 \\ar@{=}[dd] \\\\\n \\\\\nX_2 \\ar[rr]^{f_3} && X_3 \\ar[dr]^-{f_4} \\ar[rr] \\ar[dd] && C_{f_3} \\ar@{-->}[dl]^{\\!\\!\\beta'} \\ar[rr] \\ar[dd] && \\Sigma X_2 \\\\\n && & X_4 \\\\\n && C_{\\delta} \\ar@{=}[rr] \\ar[dd] && C_{\\Sigma \\alpha'} \\ar@{-->}[ul]_{\\!\\!\\beta} \\ar[dd] && \\\\\n && & \\Sigma^2 X_0 \\ar@{-->}[ur]^(0.4){\\Sigma \\alpha\\!} \\ar[dr]_(0.4){-\\Sigma^2 \\!f_1\\!\\!\\!\\!} \\ar@{-->}[dl]_{\\Sigma \\gamma\\!\\!\\!} \\\\\n && \\Sigma C_{f_2} \\ar[rr] && \\Sigma^2 X_1 . 
&& \\\n }\n\]\nDefine $\Sigma \gamma$ to be the composite \n$\Sigma^2 X_0 \to C_{\Sigma \alpha'} = C_{\delta} \to \Sigma C_{f_2}$, where the first map is $\Sigma \alpha$.\nThen the small triangles at the top and bottom of the last diagram commute as well.\nTherefore, $(\delta, \gamma)$ is in $T(f_3, f_2, f_1)$.\nMoreover, this diagram shows that\n$(\beta, - \Sigma \alpha)$ is in $T(f_4, \delta, \gamma)$,\ncompleting the argument.\n\end{proof}\n\n\nTo concisely describe different ways of computing higher Toda\nbrackets, we introduce the following notation.\nFor $0 \leq j \leq n-3$, write $T_j(f_n, f_{n-1}, \ldots, f_1)$ for the set of tuples\n\[\n \{ (f_n, f_{n-1}, \ldots, f_{n-j+1}, \beta, \Sigma \alpha, \Sigma f_{n-j-3}, \ldots, \Sigma f_1) \},\n\]\nwhere $(\beta, \Sigma \alpha)$ is in $T(f_{n-j}, f_{n-j-1}, f_{n-j-2})$.\n(There are $j$ maps to the left of the three used for the Toda family.)\nIf $S$ is a set of $n$-tuples of composable maps, we define\n$T_j(S)$ to be the union of the sets $T_j(f_n, f_{n-1}, \ldots, f_1)$\nfor $(f_n, f_{n-1}, \ldots, f_1)$ in $S$.\nWith this notation, the standard Toda bracket $\left\langle f_n, \ldots, f_1 \right\rangle$ consists of the composites of all the pairs occurring in the iterated Toda family\n\[\n\mathrm{T}(f_n, \ldots, f_1) := T_0(T_0(T_0(\cdots T_0(f_n, \ldots, f_1) \cdots ))).\n\]\nA general Toda bracket is of the form \n$T_{j_1}(T_{j_2}(T_{j_3}(\cdots T_{j_{n-2}}(f_n, \ldots, f_1) \cdots )))$,\nwhere $j_1, j_2, \ldots, j_{n-2}$ is a sequence of natural numbers\nwith $0 \leq j_i < i$ for each $i$.\nThere are $(n-2)!$ such sequences.\n\n\begin{rem}\nThere are six ways to compute the five-fold Toda bracket \n$\langle f_5, f_4, f_3, f_2, f_1 \rangle$, as the set of composites\nof the pairs of maps in one of the following sets:\n\begin{align*}\n&T_0(T_0(T_0(f_5, f_4, f_3, f_2, f_1))) = T(T(T(f_5, f_4, f_3), \Sigma f_2), 
\\Sigma^2 f_1),\\\\\n&T_0(T_0(T_1(f_5, f_4, f_3, f_2, f_1))) = T(T(f_5, T(f_4, f_3, f_2)), \\Sigma^2 f_1),\\\\\n&T_0(T_1(T_1(f_5, f_4, f_3, f_2, f_1))) = T(f_5, T(T(f_4, f_3, f_2), \\Sigma f_1)),\\\\\n&T_0(T_1(T_2(f_5, f_4, f_3, f_2, f_1))) = T(f_5, T(f_4, T(f_3, f_2, f_1))),\\\\\n&T_0(T_0(T_2(f_5, f_4, f_3, f_2, f_1))), \\quad\\text{and}\\\\\n&T_0(T_1(T_0(f_5, f_4, f_3, f_2, f_1))).\n\\end{align*}\nThe last two cannot be expressed directly just using $T$.\n\\end{rem}\n\nNow we can prove the main result of this section.\n\n\\begin{thm}\\label{th:n-fold}\nThe Toda bracket computed using the sequence $j_1, j_2, \\ldots, j_{n-2}$\nequals the standard Toda bracket up to the sign $(-1)^{\\sum j_i}$.\n\\end{thm}\n\n\\begin{proof}\nLet $j_1, j_2, \\ldots, j_{n-2}$ be a sequence with $0 \\leq j_i < i$ for each $i$.\nLemma~\\ref{le:four-fold} tells us that if we replace consecutive entries \n$k, k+1$ with $k, k$ in any such sequence, the two Toda brackets agree up to a sign.\nTo begin with, we ignore the signs.\nWe will prove by induction on $\\ell$ that the initial portion\n$j_1, \\ldots, j_\\ell$ of such a sequence can be converted into\nany other sequence, using just the move allowed by Lemma~\\ref{le:four-fold} and its inverse,\nand without changing $j_i$ for $i > \\ell$.\nFor $\\ell = 1$, there is only one sequence $0$.\nFor $\\ell = 2$, there are two sequences, $0, 0$ and $0, 1$, and Lemma~\\ref{le:four-fold} applies.\nFor $\\ell > 2$, suppose our goal is to produce the sequence $j'_1, \\ldots, j'_\\ell$.\nWe break the argument into three cases:\n\\medskip\n\n\\noindent\nCase 1: $j'_\\ell = j_\\ell$. We can directly use the induction hypothesis to\nadjust the entries in the first $\\ell - 1$ positions.\n\\medskip\n\n\\noindent\nCase 2: $j'_\\ell > j_\\ell$. 
By induction, we can change the first $\\ell - 1$\nentries in the sequence $j$ so that the entry in position $\\ell - 1$ is $j_\\ell$,\nsince $j_{\\ell} < j'_{\\ell} \\leq \\ell - 1$.\nThen, using Lemma~\\ref{le:four-fold}, we can change the entry in position $\\ell$ to $j_\\ell + 1$.\nContinuing in this way, we get $j'_\\ell$ in position $\\ell$, and then we\nare in Case 1.\n\\medskip\n\n\\noindent\nCase 3: $j'_\\ell < j_\\ell$. Since the moves are reversible, this is equivalent to Case 2.\n\nTo handle the sign, first note that signs propagate through the Toda family construction.\nMore precisely, suppose $S$ is a set of $n$-tuples of maps, and let $S'$ be a set obtained\nby negating the $k^{\\text{th}}$ map in each $n$-tuple for some fixed $k$.\nThen $T_j(S)$ has the same relationship to $T_j(S')$, possibly for a different value of $k$.\n\nAs a result, applying the move of Lemma~\\ref{le:four-fold} changes the resulting\nToda bracket by a sign.\nThat move also changes the parity of $\\sum_i j_i$.\nSince we get a plus sign when each $j_i$ is zero, it follows that the\ndifference in sign in general is $(-1)^{\\sum_i j_i}$.\n\\end{proof}\n\nAn animation of this argument is available at~\\cite{Canim}.\nIt was pointed out by Dylan Wilson that the combinatorial part of the above proof \nis equivalent to the well-known fact that if a binary operation is associative on triples,\nthen it is associative on $n$-tuples.\n\n\\medskip\n\nIn order to compare our Toda brackets to the Toda brackets in the opposite\ncategory, we need one lemma.\n\n\\begin{lem}\\label{le:suspension-of-Toda-family}\nLet\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$\nbe a diagram in a triangulated category $\\cat{T}$.\nThen the Toda family $T(\\Sigma f_3, \\Sigma f_2, \\Sigma f_1)$ is the negative\nof the suspension of $T(f_3, f_2, f_1)$.\nThat is, it consists of $(\\Sigma \\beta, -\\Sigma^{2} \\alpha)$ for $(\\beta, \\Sigma \\alpha)$ \nin $T(f_3, f_2, 
f_1)$.\n\\end{lem}\n\n\\begin{proof}\nGiven a distinguished triangle $\\Sigma^{-1} C_{f_2} \\ral{k} X_1 \\ral{f_2} X_2 \\ral{\\iota} C_{f_2}$,\na distinguished triangle involving $\\Sigma f_2$ is\n\\[\nC_{f_2} \\ral{-\\Sigma k} \\Sigma X_1 \\ral{\\Sigma f_2} \\Sigma X_2 \\ral{\\Sigma \\iota} \\Sigma C_{f_2} .\n\\]\nBecause of the minus sign at the left, the maps that arise in the Toda\nfamily based on this triangle are $-\\Sigma^2 \\alpha$ and $\\Sigma \\beta$,\nwhere $\\Sigma \\alpha$ and $\\beta$ arise in the Toda family based on the starting triangle.\n\\end{proof}\n\nGiven a triangulated category $\\cat{T}$, the opposite category $\\cat{T}^{\\mathrm{op}}$\nis triangulated in a natural way. The suspension in $\\cat{T}^{\\mathrm{op}}$ is $\\Sigma^{-1}$\nand a triangle\n\\[\n\\xymatrix{Y_0 \\ar[r]^{g_1} & Y_1 \\ar[r]^{g_2} & Y_2 \\ar[r]^-{g_3} & \\Sigma^{-1} Y_0}\n\\]\nin $\\cat{T}^{\\mathrm{op}}$ is distinguished if and only if the triangle\n\\[\n\\xymatrix{\\Sigma \\Sigma^{-1} Y_0 & Y_1 \\ar[l]_-{g_1'} & Y_2 \\ar[l]_-{g_2} & \\Sigma^{-1} Y_0 \\ar[l]_-{g_3} }\n\\]\nin $\\cat{T}$ is distinguished, where $g_1'$ is the composite of $g_1$ with\nthe natural isomorphism $Y_0 \\cong \\Sigma \\Sigma^{-1} Y_0$.\n\n\\begin{cor}\\label{co:SelfDual}\nThe Toda bracket is self-dual up to suspension.\nMore precisely, let\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} \\cdots \\ral{f_n} X_n$\nbe a diagram in a triangulated category $\\cat{T}$.\nThen the subset\n\\[\n \\left\\langle f_1, \\ldots, f_n \\right\\rangle^{\\cat{T}^{\\mathrm{op}}} \\subseteq \\cat{T}^{\\mathrm{op}}(\\Sigma^{-(n-2)} X_n, X_0)\n= \\cat{T}(X_0, \\Sigma^{-(n-2)} X_n)\n\\]\ndefined by taking the Toda bracket in $\\cat{T}^{\\mathrm{op}}$ is sent to the subset\n\\[\n \\left\\langle f_n, \\ldots, f_1 \\right\\rangle^{\\cat{T}} \\subseteq \\cat{T}(\\Sigma^{n-2} X_0, X_n)\n\\]\ndefined by taking the Toda bracket in $\\cat{T}$ under the bijection\n$\\Sigma^{n-2} : \\cat{T}(X_0, \\Sigma^{-(n-2)} X_n) \\to 
\\cat{T}(\\Sigma^{n-2} X_0, X_n)$.\n\\end{cor}\n\n\\begin{proof}\nFirst we compare Toda families in $\\cat{T}$ and $\\cat{T}^{\\mathrm{op}}$.\nIt is easy to see that the Toda family\n$T^{\\cat{T}^{\\mathrm{op}}}(f_1, f_2, f_3)$\ncomputed in $\\cat{T}^{\\mathrm{op}}$ consists of the pairs\n$(\\alpha, \\Sigma^{-1} \\beta)$ for $(\\Sigma \\alpha, \\beta)$ in the Toda family\n$T^{\\cat{T}}(f_3, f_2, f_1)$ computed in $\\cat{T}$.\nIn short, one has to desuspend and transpose the pairs.\n\nUsing this, one can see that the iterated Toda family\n\\[\nT^{\\cat{T}^{\\mathrm{op}}}(T^{\\cat{T}^{\\mathrm{op}}} \\cdots T^{\\cat{T}^{\\mathrm{op}}}(f_1, f_2, f_3), \\ldots, \\Sigma^{-(n-3)} f_n)\n\\]\nis equal to the transpose of\n\\[\n\\Sigma^{-1} T^{\\cat{T}}(\\Sigma^{-(n-3)} f_n, \\Sigma^{-1} T^{\\cat{T}}(\\Sigma^{-(n-4)} f_{n-1}, \\Sigma^{-1} T^{\\cat{T}} \\cdots \\Sigma^{-1} T^{\\cat{T}}(f_3, f_2, f_1) \\cdots ))\n\\]\nBy Lemma~\\ref{le:suspension-of-Toda-family}, the desuspensions pass\nthrough all of the Toda family constructions, introducing an overall\nsign of $(-1)^{1+2+3+\\cdots+(n-3)}$, and producing\n\\[\n\\Sigma^{-(n-2)} T^{\\cat{T}}(f_n, T^{\\cat{T}}(f_{n-1}, T^{\\cat{T}} \\cdots T^{\\cat{T}}(f_3, f_2, f_1) \\cdots ))\n\\]\nBy Theorem~\\ref{th:n-fold}, composing the pairs gives the usual\nToda bracket up to the sign\\break $(-1)^{0+1+2+\\cdots+(n-3)}$.\nThe two signs cancel, yielding the result.\n\\end{proof}\n\nWe do not know a direct proof of this corollary.\nTo summarize, our insight is that\nby generalizing the corollary to all $(n-2)!$ methods of computing the\nToda bracket, we were able to reduce the argument to the $4$-fold case (Lemma~\\ref{le:four-fold}) and some combinatorics.\n\n\\begin{rem}\nAs with the $3$-fold Toda brackets (see Remark~\\ref{re:3-fold-negation}),\nthe higher Toda brackets depend on the triangulation.\nIf the triangulation is negated, the $n$-fold Toda brackets change\nby the sign $(-1)^n$.\n\\end{rem}\n\n\n\n\\section{Higher order 
operations determine \\texorpdfstring{$d_r$}{dr}}\\label{se:AdamsDr}\n\nIn this section, we show that the higher Adams differentials can be\nexpressed in terms of higher Toda brackets, in two ways.\nOne of these expressions is as an $r^{\\text{th}}$ order cohomology operation.\n\nGiven an injective class $\\cat{I}$,\nan Adams resolution of an object $Y$ as in Diagram~\\eqref{eq:AdamsResolInj}, and an object $X$, consider a class $[x] \\in E_r^{s,t}$ represented by an element $x \\in E_1^{s,t} = \\cat{T}(\\Sigma^{t-s} X, I_s)$.\nThe class $d_r[x]$ is the set of all $(\\Sigma p_{s+r}) \\widetilde{x}$, where $\\widetilde x$\nruns over lifts of $\\delta_s x$ through the $(r-1)$-fold composite $\\Sigma(i_{s+1} \\cdots i_{s+r-1})$\nwhich appears across the top edge of the Adams resolution.\n\nOur first result will be a generalization of\nProposition~\\ref{pr:DifferentD2}\\eqref{it:d1pdex},\nexpressing $d_r$ in terms of an $(r+1)$-fold Toda bracket.\n\n\\begin{thm}\\label{th:d1pdex}\nAs subsets of $E_1^{s+r,t+r-1}$, we have\n\\[\nd_r [x] = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots , \\Sigma^2 d_1, \\Sigma d_1 , \\Sigma p_{s+1} , \\delta_s x \\right\\rangle .\n\\]\n\\end{thm}\n\n\\begin{proof}\nWe compute the Toda bracket, applying the Toda family construction starting from\nthe right, which introduces a sign of $(-1)^{1+2+\\cdots+(r-2)}$, by Theorem~\\ref{th:n-fold}.\nWe begin with the Toda family $T(\\Sigma d_1, \\Sigma p_{s+1}, \\delta_s x)$.\nThere is a distinguished triangle\n\\[\n \\cxymatrix{\\Sigma Y_{s+2} \\ar[r]^-{\\Sigma i_{s+1}} & \\Sigma Y_{s+1} \\ar[r]^-{\\Sigma p_{s+1}} & \\Sigma I_{s+1} \\ar[r]^-{\\Sigma \\delta_{s+1}} & \\Sigma^2 Y_{s+2},}\n\\]\nwith no needed signs.\nThe map $\\Sigma d_1$ factors through $\\Sigma \\delta_{s+1}$ as $\\Sigma^2 p_{s+2}$, and this\nfactorization is unique because $\\Sigma \\delta_{s+1}$ is $\\cat{I}$-epic and $\\Sigma^2 I_{s+2}$ is injective.\nThe other maps in the Toda family are $\\Sigma x_1$, where $x_1$ is a lift \nof 
$\\delta_s x$ through $\\Sigma i_{s+1}$.\nSo \n\\[\n T(\\Sigma d_1, \\Sigma p_{s+1}, \\delta_s x) = \\{ (\\Sigma^2 p_{s+2} , \\, \\Sigma x_1) \\mid x_1 \\text{ a lift of $\\delta_s x$ through $\\Sigma i_{s+1}$} \\}.\n\\]\n(The Toda family also includes $(\\Sigma^2 p_{s+2} \\, \\phi, \\, \\phi^{-1} (\\Sigma x_1))$, where $\\phi$\nis any isomorphism, but these contribute nothing additional to the later computations.)\nThe composites of such pairs give $d_2[x]$, up to suspension, recovering\nProposition~\\ref{pr:DifferentD2}\\eqref{it:d1pdex}.\n\nContinuing, for each such pair we compute\n\\[\n\\begin{aligned}\n T(\\Sigma^2 d_1, \\Sigma^2 p_{s+2}, \\Sigma x_1)\n&= -\\Sigma T(\\Sigma d_1, \\Sigma p_{s+2}, x_1) \\\\\n&= -\\Sigma \\{ (\\Sigma^2 p_{s+3} , \\, \\Sigma x_2) \\mid x_2 \\text{ a lift of $x_1$ through $\\Sigma i_{s+2}$} \\}.\n\\end{aligned}\n\\]\nThe first equality is Lemma~\\ref{le:suspension-of-Toda-family}, and the second reuses\nthe work done in the previous paragraph, with $s$ increased by $1$.\nComposing these pairs gives $-d_3[x]$.\nThe sign which is needed to produce the standard Toda bracket is $(-1)^1$,\nand so the signs cancel.\n\nAt the next step, we compute\n\\[\n\\begin{aligned}\n T(\\Sigma^3 d_1, \\Sigma^3 p_{s+3}, -\\Sigma^2 x_2)\n&= -\\Sigma^2 T(\\Sigma d_1, \\Sigma p_{s+3}, x_2) \\\\\n&= -\\Sigma^2 \\{ (\\Sigma^2 p_{s+4} , \\, \\Sigma x_3) \\mid x_3 \\text{ a lift of $x_2$ through $\\Sigma i_{s+3}$} \\}.\n\\end{aligned}\n\\]\nAgain, the composites give $-d_4[x]$.\nSince it was a double suspension that passed through the Toda family, no additional\nsign was introduced.\nSimilarly, the sign to convert to the standard Toda bracket is $(-1)^{1+2}$,\nand since $2$ is even, no additional sign was introduced.\nTherefore, the signs still cancel.\n\nThe pattern continues. 
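Explicitly, the $k$-th step, for $2 \leq k \leq r-1$, takes the following form (consistent with the cases computed above, with the sign dictated by Lemma~\ref{le:suspension-of-Toda-family} applied $k-1$ times):\n\[\n T(\Sigma^k d_1, \Sigma^k p_{s+k}, \pm \Sigma^{k-1} x_{k-1})\n = (-1)^{k-1} \Sigma^{k-1} \{ (\Sigma^2 p_{s+k+1} , \, \pm \Sigma x_k) \mid x_k \text{ a lift of $x_{k-1}$ through $\Sigma i_{s+k}$} \} ,\n\]\nand composing the resulting pairs recovers $d_{k+1}[x]$ up to sign.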
\nIn total, there are $1+2+\cdots+(r-2)$ suspensions that pass through the Toda\nfamily, and the sign to convert to the standard Toda bracket is also based on\nthat number, so the signs cancel.\n\end{proof}\n\n\begin{rem}\nTheorem~\ref{th:d1pdex} can also be proved using the definition of Toda\nbrackets based on $r$-filtered objects,\nas in Definitions~\ref{def:NFiltered} and~\ref{def:HigherTodaSS}.\nHowever, one must work in the opposite category $\cat{T}^{\mathrm{op}}$.\nIn that category, there is a unique $r$-filtered object, up to isomorphism,\nbased on the maps in the Toda bracket.\nOne of the dashed arrows in the diagram from Definition~\ref{def:HigherTodaSS}\nis also unique, and the other corresponds naturally to the choice of lift\nin the Adams differential.\n\end{rem}\n\n\medskip\n\nIn the remainder of this section, we describe the analog of\nProposition~\ref{pr:DifferentD2}\eqref{it:d1d1x}.\nWe begin by defining restricted higher Toda brackets, in terms of\nrestricted Toda families.\n\nConsider a Toda family $T(g h_1, g_1 h_0, g_0 h)$, where the maps\nfactor as shown, there are distinguished triangles\n\begin{equation}\label{eq:tri}\n \cxymatrix{Z_i \ar[r]^{g_i} & J_i \ar[r]^{h_i} & Z_{i+1} \ar[r]^{k_i} & \Sigma Z_i}\n\end{equation}\nfor $i = 0, 1$,\nand $g$ and $h$ are arbitrary maps $Z_2 \to A$ and $B \to Z_0$, respectively.\nThis information determines an essentially unique element of the Toda family in the following way.\nThe octahedral axiom applied to the factorization $g_1 h_0$\nyields the dotted arrows in a commutative diagram\n\[\n\cxymatrix{\nJ_0 \ar@{=}[d] \ar[r]^-{h_0} & Z_1 \ar[d]_(0.45){g_1} \ar[r]^-{k_0} & \Sigma Z_0 \ar@{-->}[d]^{\alpha_2} \ar[r]^-{-\Sigma g_0} & \Sigma J_0 \ar@{=}[d] \\\nJ_0 \ar[r]^-{g_1 h_0} & J_1 \ar[d]_{h_1} \ar@{-->}[r]^-{q} & W_2 \ar@{-->}[d]^{\beta_2} \ar@{-->}[r]^-{\iota} & \Sigma J_0 \\\n& Z_2 \ar[d]_{k_1} \ar@{=}[r] & Z_2 \ar[d]^{\gamma_2} & \\\n& \Sigma Z_1 
\\ar[r]^{\\Sigma k_0} & \\Sigma^2 Z_0 , & \\\\\n}\n\\]\nwhere the rows and columns are distinguished and $\\gamma_2 := (\\Sigma k_0) k_1$.\nIt is easy to see that $-\\Sigma(g_0 h)$ lifts through $\\iota$ as $\\alpha_2 (\\Sigma h)$,\nand that $g h_1$ extends over $q$ as $g \\beta_2$.\nWe define the \\textbf{restricted Toda family} to be the set \n$T(g h_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_1 h_0 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_0 h)$ consisting of the pairs $(g \\beta_2, \\, \\alpha_2 (\\Sigma h))$\nthat arise in this way.\nSince $\\alpha_2$ and $\\beta_2$ come from a distinguished triangle involving a fixed map $\\gamma_2$,\nsuch pairs are unique up to the usual ambiguity of replacing\nthe pair with $(g \\beta_2 \\phi, \\, \\phi^{-1} \\alpha_2 (\\Sigma h))$, where $\\phi$ is an isomorphism.\nSimilarly, given any map $x : B \\to J_0$,\nwe define $T(g h_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_1 h_0 , x)$ to be the set \nconsisting of the pairs $(g \\beta_2, \\, \\Sigma \\alpha)$,\nwhere $\\beta_2$ arises as above and $\\Sigma \\alpha$ is any lift of $-\\Sigma x$ through $\\iota$.\n\n\\begin{defn}\nGiven distinguished triangles as in Equation~\\eqref{eq:tri}, for $i = 1, \\ldots, n-1$,\nand maps $g : Z_n \\to A$ and $x : B \\to J_1$, we define the \\textbf{restricted Toda bracket}\n\\[\n\\left\\langle g h_{n-1} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_{n-1} h_{n-2} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_3 h_2 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_2 h_1 , x \\right\\rangle\n\\]\ninductively as follows:\nIf $n = 2$, it is the set consisting of just the composite $g h_1 x$.\nIf $n = 3$, it is the set of composites of the pairs in $T(g h_2 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_2 h_1 , x)$.\nIf $n > 3$, it is the union of the sets\n\\[\n\\left\\langle g \\beta_2 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}}\\, \\alpha_2 (\\Sigma 
h_{n-3}) \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}}\\, \\Sigma (g_{n-3} h_{n-4}) \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots , \\Sigma x \\right\\rangle ,\n\\]\nwhere $(g \\beta_2, \\alpha_2 (\\Sigma h_{n-3}))$ is in $T(g h_{n-1} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_{n-1} h_{n-2} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_{n-2} h_{n-3})$.\n\\end{defn}\n\n\\begin{rem}\nDespite the notation, we want to make it clear that these restricted\nToda families and restricted Toda brackets depend on the choice of\nfactorizations and on the distinguished triangles in Equation~\\eqref{eq:tri}.\nMoreover, the elements of the restricted Toda families are not simply pairs,\nbut also include the factorizations of the maps in those pairs, and\nthe distinguished triangle involving $\\alpha_2$ and $\\beta_2$.\nThis information is used in the $(n-1)$-fold restricted Toda bracket\nthat is used to define the $n$-fold restricted Toda bracket.\n\\end{rem}\n\nRecall that the maps $d_1$ are defined to be $(\\Sigma p_{s+1}) \\delta_s$, and that we\nhave distinguished triangles\n\\[\n \\cxymatrix{Y_s \\ar[r]^{p_s} & I_s \\ar[r]^-{\\delta_s} & \\Sigma Y_{s+1} \\ar[r]^{\\Sigma i_s} & \\Sigma Y_s}\n\\]\nfor each $s$.\nThe same holds for suspensions of $d_1$, with the last map\nchanging sign each time it is suspended. 
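For instance, a single suspension gives the distinguished triangle\n\[\n \cxymatrix{\Sigma Y_s \ar[r]^-{\Sigma p_s} & \Sigma I_s \ar[r]^-{\Sigma \delta_s} & \Sigma^2 Y_{s+1} \ar[r]^-{-\Sigma^2 i_s} & \Sigma^2 Y_s ,}\n\]\nso that $\Sigma d_1 = (\Sigma^2 p_{s+1})(\Sigma \delta_s)$.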
\nThus for $x : \Sigma^{t-s} X \to I_s$ in the $E_1$ term, the $(r+1)$-fold restricted Toda bracket\n$\left\langle \Sigma^{r-1} d_1 \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} \ldots \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} \Sigma d_1 \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} d_1 , x \right\rangle$\nmakes sense for each $r$, where we are implicitly using the defining factorizations\nand the triangles from the Adams resolution.\n\n\begin{thm}\label{th:AdamsDrCohomOp}\nAs subsets of $E_1^{s+r,t+r-1}$, we have\n\[\nd_r [x] = \left\langle \Sigma^{r-1} d_1 \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} \ldots \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} \Sigma d_1 \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} d_1 , x \right\rangle .\n\]\n\end{thm}\n\nThis is a generalization of Proposition~\ref{pr:DifferentD2}\eqref{it:d1d1x}.\nThe data in the Adams resolution is the witness that the composites of\nthe primary operations are zero in a sufficiently coherent way to permit\nan $r^{\text{th}}$ order cohomology operation to be defined.\n\n\begin{proof}\nThe restricted Toda bracket $\left\langle \Sigma^{r-1} d_1 \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} \ldots \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} \Sigma d_1 \overset{!\hspace*{1.5pt}}{,\rule{0pt}{3pt}} d_1 , x \right\rangle$\nis defined recursively, working from the left.\nEach of the $r-2$ doubly restricted Toda families has essentially one element.\nThe first one involves maps $\alpha_2$, $\beta_2$ and $\gamma_2$ that form a distinguished\ntriangle, and $\gamma_2$ is equal to $[(-1)^r \Sigma^r i_{s+r-2}][-(-1)^r \Sigma^r i_{s+r-1}]$.\nWe will denote the corresponding maps in the following octahedra by $\alpha_k$, $\beta_k$ and $\gamma_k$,\nwhere each $\gamma_k$ equals $[(-1)^r \Sigma^r i_{s+r-k}] \, \gamma_{k-1}$,\nand so $\gamma_k = -(-1)^{rk} \Sigma^r (i_{s+r-k} \cdots i_{s+r-1})$.\nOne is left to compute the singly restricted Toda 
family\n$\\left\\langle \\Sigma^r p_{s+r} \\beta_{r-1} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}}\\, \\alpha_{r-1} \\Sigma^{r-2} \\delta_s ,\\, \\Sigma^{r-2} x \\right\\rangle$,\nwhere $\\alpha_{r-1}$ and $\\beta_{r-1}$ fit into a distinguished triangle\n\\[\n \\cxymatrix{\\Sigma^{r-1 }Y_{s+1} \\ar[r]^-{\\alpha_{r-1}} & W_{r-1} \\ar[r]^-{\\beta_{r-1}} & \\Sigma^{r} Y_{s+r} \\ar[r]^-{\\gamma_{r-1}} & \\Sigma^r Y_{s+1} ,}\n\\]\nand $\\gamma_{r-1} = - \\Sigma^r (i_{s+1} \\cdots i_{s+r-1})$.\nThus, to compute the last restricted Toda bracket, one uses the following diagram,\nobtained as usual from the octahedral axiom:\n\\[\n\\xymatrix@C+24pt{\n& & & \\Sigma^{t-s+r-1} X \\ar[d]^(0.45){-\\Sigma^{r-1} x} \\\\\n\\Sigma^{r-2} I_{s} \\ar@{=}[d] \\ar[r]^-{\\Sigma^{r-2} \\delta_{s}} & \\Sigma^{r-1} Y_{s+1} \\ar[d]_{\\alpha_{r-1}} \\ar[r]^-{\\!(-1)^r \\, \\Sigma^{r-1} i_{s}} & \\Sigma^{r-1} Y_{s} \\ar@{-->}[d]^{\\alpha_r} \\ar[r]^-{- \\Sigma^{r-1} p_{s}} & \\Sigma^{r-1} I_{s} \\ar@{=}[d] \\\\\n\\Sigma^{r-2} I_{s} \\ar[r]^-{} & W_{r-1} \\ar[d]_{\\beta_{r-1}} \\ar@{-->}[r]^-{q_{r-1}} & W_r \\ar@{-->}[d]^{\\beta_r} \\ar@{-->}[r]^-{\\iota_{r-1}} & \\Sigma^{r-1} I_{s} \\\\\n\\Sigma^r I_{s+r} & \\Sigma^r Y_{s+r} \\ar[l]_{\\Sigma^r p_{s+r}} \\ar[d]_{\\gamma_{r-1}} \\ar@{=}[r] & \\Sigma^r Y_{s+r} \\ar[d]^{\\gamma_r} & \\\\\n& \\Sigma^r Y_{s+1} \\ar[r]^{(-1)^r \\, \\Sigma^r i_{s}} & \\Sigma^r Y_{s} . 
& \\\\\n}\n\\]\nUp to suspension, both $d_r[x]$ and the last restricted Toda bracket are computed by\ncomposing certain maps $\\widetilde{x} : \\Sigma^{t-s+r-2} X \\to \\Sigma^r Y_{s+r}$ with $\\Sigma^r p_{s+r}$.\nFor $d_r[x]$, the maps $\\widetilde{x}$ must lift $\\Sigma^{r-1} (\\delta_s x)$ through $- \\gamma_{r-1}$.\nFor the last bracket, the maps $\\widetilde{x}$ are of the form $\\beta_r y$,\nwhere $y : \\Sigma^{t-s+r-1} X \\to W_r$ is a lift of $-\\Sigma^{r-1} x$ through $\\iota_{r-1}$.\nAs in the proof of Proposition~\\ref{pr:DifferentD2}\\eqref{it:d1d1x}, one can\nsee that the possible choices of $\\widetilde{x}$ coincide.\n\\end{proof}\n\nWe next give a description of $d_r[x]$ using higher Toda brackets defined\nusing filtered objects, as in Definitions~\\ref{def:NFiltered} and~\\ref{def:HigherTodaSS}.\nThe computation of the restricted Toda bracket above produces a sequence \n\\begin{equation}\\label{eq:W}\n \\cxymatrix{0 = W_0 \\ar[r]^-{q_0} & W_1 \\ar[r]^{q_1} & \\cdots \\ar[r]^{q_{r-1}} & W_r , }\n\\end{equation}\nwhere $W_k$ is the fibre of the $k$-fold composite $\\Sigma^r (i_{s+r-k} \\cdots i_{s+r-1})$.\n(The map $\\gamma_k$ may differ in sign from this composite, but that doesn't affect the fibre.)\nFor each $k$, we have a distinguished triangle\n\\[\n \\xymatrix@C+3pt{W_k \\ar[r]^-{q_k} & W_{k+1} \\ar[r]^-{\\iota_k}\n & \\Sigma^{r-1} I_{s+r-k-1} \\ar[rrr]^-{-(\\Sigma \\alpha_k)(\\Sigma^{r-1} \\delta_{s+r-k-1})} &&& \\Sigma W_k ,}\n\\]\nwhere we extend downwards to $k=0$ by defining $W_1 = \\Sigma^{r-1} I_{s+r-1}$\nand using the non-obvious triangle\n\\[\n \\xymatrix@C+7pt{W_0 \\ar[r]^-{q_0 = 0} & W_{1} \\ar[r]^-{\\iota_0 = -1}\n & \\Sigma^{r-1} I_{s+r-1} \\ar[r]^-{0} & \\Sigma W_0 .}\n\\]\nOne can check that \n\\[\n(\\Sigma \\iota_{k-1})(-\\Sigma \\alpha_k)(\\Sigma^{r-1} \\delta_{s+r-k-1}) = (\\Sigma^r p_{s+r-k})(\\Sigma^{r-1} \\delta_{s+r-k-1})\n= \\Sigma^{r-1} d_1 = \\Sigma^k (\\Sigma^{r-k-1} d_1) ,\n\\]\nwhere $\\Sigma^{r-k-1} d_1$ is the map 
appearing in the $(k+1)$st spot of the Toda bracket.\nIn other words, the sequence~\\eqref{eq:W} is an $r$-filtered object based on\n$(\\Sigma^{r-2} d_1, \\ldots, d_1)$.\n\nThe natural map $\\sigma_W : W_r \\to \\Sigma^{r-1} I_s$ is $\\iota_{r-1}$,\nand the natural map $\\sigma_W' : \\Sigma^{r-1} I_{s+r-1} \\cong W_1 \\to W_r$ is the composite\n$q_{r-1} \\cdots q_1 \\iota_0 = -q_{r-1} \\cdots q_1$.\nThe Toda bracket computed using the filtered object $W$ consists of all\ncomposites appearing in the middle row of this commutative diagram:\n\\begin{equation}\\label{eq:Wlift}\n\\cxymatrix{\n& \\Sigma^{r-1} I_{s+r-1} \\ar[d]_{\\sigma'_W} \\ar[dr]^-{\\Sigma^{r-1} d_1} & \\\\\n\\Sigma^{t-s+r-1} X \\ar[dr]_{\\Sigma^{r-1} x} \\ar@{-->}[r]^-a & W_r \\ar[d]^{\\sigma_W} \\ar@{-->}[r]_-b & \\Sigma^r I_{s+r} \\\\\n& \\Sigma^{r-1} I_s . & \\\\\n}\n\\end{equation}\nWe claim that there is a natural choice of extension $b$.\nSince $\\Sigma^{r-1} d_1 = (\\Sigma^r p_{s+r})(\\Sigma^{r-1} \\delta_{s+r-1})$, it suffices\nto extend $\\Sigma^{r-1} \\delta_{s+r-1}$ over $\\sigma_W'$.\nWell, $\\beta_2$ by definition is an extension of $\\Sigma^{r-1} \\delta_{s+r-1}$ over $q_1$,\nand each subsequent $\\beta_k$ gives a further extension.\nBecause $\\iota_0 = -1$, $-(\\Sigma^r p_{s+r}) \\beta_r$ is a valid choice for $b$.\n\nOn the other hand, as described at the end of the previous proof,\nthe lifts $a$ of $\\Sigma^{r-1} x$ through $\\sigma_W = \\iota_{r-1}$, when \ncomposed with $-(\\Sigma^r p_{s+r}) \\beta_r$, give exactly\nthe Toda bracket computed there.\n\nIn summary, we have:\n\n\\begin{thm}\nGiven an Adams resolution of $Y$ and $r \\geq 2$, there is an associated\n$r$-filtered object $W$ and a choice of a map $b$ in Diagram~\\eqref{eq:Wlift},\nsuch that for any $X$ and class $[x] \\in E_r^{s,t}$,\nwe have\n\\[\nd_r [x] = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots , \\Sigma d_1 , d_1 , x \\right\\rangle ,\n\\]\nwhere the Toda bracket is computed only using the $r$-filtered object 
$W$\nand the chosen extension $b$.\n\\end{thm}\n\n\n\\section{Sparse rings of operations}\\label{se:sparse}\n\nIn this section, we focus on injective and projective classes which \nare generated by an object with a ``sparse'' endomorphism ring.\nIn this context, we can give conditions under which the restricted Toda bracket\nappearing in Theorem~\\ref{th:AdamsDrCohomOp} is equal to the unrestricted Toda bracket,\nproducing a cleaner correspondence between Adams differentials and Toda brackets.\nWe begin in Subsection~\\ref{ss:sparse-injective} by giving the results in the case\nof an injective class, and then briefly summarize the dual results in Subsection~\\ref{ss:sparse-projective}.\nSubsection~\\ref{ss:sparse-examples} gives examples.\n\nLet us fix some notation and terminology, also discussed in \\cite{Sagave08}, \\cite{Patchkoria12}, \n\\cite{SchwedeS03}*{\\S 2}, \nand \\cite{BensonKS04}.\n\n\\begin{defn}\nLet $N$ be a natural number. A graded abelian group $R_*$ is \\textbf{$N$-sparse} if $R_*$ is concentrated in degrees which are multiples of $N$, i.e., $R_i = 0$ whenever $i \\not\\equiv 0 \\pmod{N}$.\n\\end{defn}\n\n\n\\subsection{Injective case}\\label{ss:sparse-injective}\n\n\\begin{nota}\nLet $E$ be an object of the triangulated category $\\cat{T}$. Define the \\textbf{$E$-cohomology} of an object $X$ to be the graded abelian group $E^*X$ given by $E^n X := \\cat{T}(X,\\Sigma^n E)$. Postcomposition makes $E^*X$ into a left module over the graded endomorphism ring $E^*E$.%\n\\end{nota}\n\n\\begin{assum}\nFor the remainder of this subsection, we assume the following.\n\\begin{enumerate}\n\\item The triangulated category $\\cat{T}$ has infinite %\nproducts.\n\\item The graded ring $E^* E$ is $N$-sparse for some $N \\geq 2$.%\n\\end{enumerate}\n\\end{assum}\n\nLet $\\cat{I}_E$ denote the injective class generated by $E$, as in Example~\\ref{ex:InjClass}. 
Explicitly, $\\cat{I}_E$ consists of retracts of (arbitrary) products $\\prod_i \\Sigma^{n_i} E$.\n\n\\begin{lem}\\label{le:SparseInj}\nWith this setup, we have:\n\\begin{enumerate}\n\\item Let $I$ be an injective object such that $E^* I$ is $N$-sparse. Then $I$ is a retract of a product $\\prod_i \\Sigma^{m_i N} E$.\n\\item If, moreover, $W$ is an object such that $E^*W$ is $N$-sparse, then we have $\\cat{T}(W,\\Sigma^t I) = 0$ for $t \\not\\equiv 0 \\pmod{N}$. \n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n(1) $I$ is a retract of a product $P = \\prod_i \\Sigma^{n_i} E$, with a map $\\iota \\colon I \\hookrightarrow P$ and retraction $\\pi \\colon P \\twoheadrightarrow I$. Consider the subproduct $P' = \\prod_{N \\mid n_i} \\Sigma^{n_i} E$, with inclusion $\\iota' \\colon P' \\hookrightarrow P$ (via the zero map into the missing factors) and projection $\\pi' \\colon P \\twoheadrightarrow P'$. Then the equality\n\\[\n\\iota' \\pi' \\iota = \\iota \\colon I \\to P\n\\]\nholds, using the fact that $E^*I$ is $N$-sparse. %\nTherefore, we obtain $\\pi \\iota' \\pi' \\iota = \\pi \\iota = 1_I$, so that $I$ is a retract of $P'$. %\n\n(2) By the first part, $\\cat{T}(W,\\Sigma^t I)$ is a retract of\n\\begin{align*}\n\\cat{T}(W,\\Sigma^t \\prod_i \\Sigma^{m_i N} E) &= \\cat{T}(W,\\prod_i \\Sigma^{m_i N+ t} E) \\\\\n&= \\prod_i \\cat{T}(W,\\Sigma^{m_i N+ t} E) \\\\ \n&= \\prod_i E^{m_i N+ t}W \\\\\n&= 0 ,\n\\end{align*}\nusing the assumption that $E^*W$ is $N$-sparse.\n\\end{proof}\n\n\n\\begin{lem}\\label{le:SparseBracket}\nLet $I_0 \\ral{f_1} I_1 \\ral{f_2} I_2 \\to \\cdots \\ral{f_r} I_r$ be a diagram\nin $\\cat{T}$, with $r \\leq N+1$. Assume that each object $I_j$ is injective and \nthat each $E^*(I_j)$ is $N$-sparse. 
Then the iterated Toda family $\\mathrm{T}(f_r, f_{r-1}, \\ldots, f_1)$ is either empty or consists of a single composable pair $\\Sigma^{r-2} I_0 \\to C \\to I_r$, up to automorphism of $C$.\n\\end{lem}\n\n\\begin{proof}\nIn the case $r=2$, there is nothing to prove, so we may assume $r \\geq 3$. The iterated Toda family is obtained by $r-2$ iterations of the $3$-fold Toda family construction. The first iteration computes the Toda family of the diagram\n\\[\n\\xymatrix{\nI_{r-3} \\ar[r]^-{f_{r-2}} & I_{r-2} \\ar[r]^-{f_{r-1}} & I_{r-1} \\ar[r]^-{f_{r}} & I_{r}. \\\\\n}\n\\]\nChoose a cofiber of $f_{r-1}$, i.e., a distinguished triangle $I_{r-2} \\ral{f_{r-1}} I_{r-1} \\to C_1 \\to \\Sigma I_{r-2}$.\nA lift of $f_{r-2}$ to the fiber $\\Sigma^{-1} C_1$, if it exists, is determined up to\n\\[\n\\cat{T}(I_{r-3}, \\Sigma^{-1} I_{r-1}) = \\cat{T}(\\Sigma I_{r-3}, I_{r-1}),\n\\]\nwhich is zero by Lemma~\\ref{le:SparseInj}(2).\nLikewise, an extension of $f_{r}$ to the cofiber $C_1$, if it exists, is determined up to\n\\[\n\\cat{T}(\\Sigma I_{r-2}, I_{r}) = 0.\n\\]\nHence, $\\mathrm{T}(f_r, f_{r-1}, f_{r-2})$ is either empty or consists of a single pair $(\\beta_1, \\Sigma \\alpha_1)$,\nup to automorphisms of $C_1$.\nIt is easy to see that the object $C_1$ has the following property:\n\\begin{equation}\\label{eq:SparseCohom1}\n\\text{If $E^*W = 0$ for $\\ast \\equiv 0,1 \\;(\\bmod\\; N)$, then $\\cat{T}(W,C_1) = 0$.}\n\\end{equation}\nFor $r \\geq 4$, the next iteration computes the Toda family of the diagram\n\\[\n\\xymatrix{\n\\Sigma I_{r-4} \\ar[r]^-{\\Sigma f_{r-3}} & \\Sigma I_{r-3} \\ar[r]^-{\\Sigma \\alpha_1} & C_1 \\ar[r]^-{\\beta_1} & I_{r}. 
\\\\\n}\n\\]\nThe respective indeterminacies are\n\\[\n\\cat{T}(\\Sigma^2 I_{r-4}, C_1),\n\\]\nwhich is zero by Property~\\eqref{eq:SparseCohom1}, and \n\\[\n\\cat{T}(\\Sigma^2 I_{r-3}, I_{r}),\n\\]\nwhich is zero by Lemma~\\ref{le:SparseInj}(2), since $N \\geq 3$ in this case.\nHence, $\\mathrm{T}(\\beta_1, \\Sigma \\alpha_1, \\Sigma f_{r-3})$ is either empty or consists of a single pair $(\\beta_2, \\Sigma \\alpha_2)$,\nup to automorphism of the cofiber $C_2$ of $\\Sigma \\alpha_1$.\nRepeating the argument inductively, the successive iterations compute the Toda family of a diagram\n\\[\n\\xymatrix @C=3.3pc {\n\\Sigma^j I_{r-3-j} \\ar[r]^-{\\Sigma^j f_{r-2-j}} & \\Sigma^j I_{r-2-j} \\ar[r]^-{\\Sigma \\alpha_j} & C_j \\ar[r]^-{\\beta_j} & I_{r} \\\\\n}\n\\]\nfor $0 \\leq j \\leq r-3$, where $C_j$ has the following property:\n\\begin{equation}\\label{eq:SparseCohomj}\n\\text{If $E^*W = 0$ for $\\ast \\equiv 0,1,\\ldots,j \\;(\\bmod\\; N)$, then $\\cat{T}(W,C_j) = 0$.}\n\\end{equation}\nThe indeterminacies $\\cat{T}(\\Sigma^{j+1} I_{r-3-j}, C_j)$ and $\\cat{T}(\\Sigma^{j+1} I_{r-2-j}, I_{r})$\nagain vanish.\nHence,\\break $\\mathrm{T}(\\beta_j, \\Sigma \\alpha_j, \\Sigma^j f_{r-2-j})$ is either empty or consists of a single pair \n$(\\beta_{j+1}, \\Sigma \\alpha_{j+1})$, up to automorphism of $C_{j+1}$.\nNote that the argument works until the last iteration $j = r-3$, by the assumption $r-2 < N$.\n\\end{proof}\n\nWe will need the following condition on an object $Y$:\n\\begin{condition}\\label{co:InjSparse}\n$Y$ admits an $\\cat{I}_E$-Adams resolution $Y_{\\bullet}$ (see \\eqref{eq:AdamsResolInj}) such that for each injective $I_j$ in the resolution,\n$E^* (\\Sigma^j I_j)$ is $N$-sparse.\n\\end{condition}\n\n\\pagebreak[2]\n\\begin{rem}\\leavevmode\n\\begin{enumerate}\n\\item Condition~\\ref{co:InjSparse} implies that $E^* Y$ is itself $N$-sparse, because of the surjection $E^* I_0 \\twoheadrightarrow E^* Y$.\n\\item The condition can be generalized to: there is an 
integer $m$ such that for each $j$, $E^* (\\Sigma^j I_j)$ is concentrated in degrees $\\ast \\equiv m \\pmod{N}$. We take $m=0$ for notational convenience.\n\\item We will see in Propositions~\\ref{pr:CohomProduct} and~\\ref{pr:CoherentRing} situations in which\nCondition~\\ref{co:InjSparse} holds.\n\\end{enumerate}\n\\end{rem}\n\n\\begin{thm}\\label{th:SparseDr}\nLet $X$ and $Y$ be objects in $\\cat{T}$ and consider the Adams spectral sequence abutting to $\\cat{T}(X,Y)$ with respect to the injective class $\\cat{I}_E$. Assume that $Y$ satisfies Condition~\\ref{co:InjSparse}. Then for all $r \\leq N$, the Adams differential is given, as subsets of $E_1^{s+r,t+r-1}$, by\n\\[\nd_r [x] = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1 , x \\right\\rangle.\n\\]\nIn other words, the restricted bracket appearing in Theorem~\\ref{th:AdamsDrCohomOp} coincides with the full Toda bracket.\n\\end{thm}\n\n\\begin{proof}\nWe will show that \n\\[\n \\left\\langle \\Sigma^{r-1} d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\Sigma d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} d_1 , x \\right\\rangle = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1 , x \\right\\rangle.\n\\]\nConsider the diagram \n\\[\n\\xymatrix{\nI_s \\ar[r]^-{d_1} & \\Sigma I_{s+1} \\ar[r]^-{\\Sigma d_1} & \\Sigma^2 I_{s+2} \\ar[r] & \\cdots \\ar[r] & \\Sigma^{r-1} I_{s+r-1} \\ar[r]^-{\\Sigma^{r-1} d_1} & \\Sigma^{r} I_{s+r} \\\\\nX \\ar[u]^{x} & & & & & \\\\\n}\n\\]\nwhose Toda bracket is being computed. 
The corresponding Toda family is\n\\[\n\\mathrm{T}(\\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1 , x) = \\mathrm{T} \\left( \\mathrm{T}(\\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1), \\Sigma^{r-2} x \\right).\n\\]\nWe know that\n\\[\n \\mathrm{T}(\\Sigma^{r-1} d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\Sigma d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} d_1) \\subseteq \\mathrm{T}(\\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1).\n\\]\nBy Lemma~\\ref{le:SparseBracket}, %\nthe Toda family on the right has at most one element, up to automorphism.\nBut fully-restricted Toda families are always non-empty, so the inclusion must be an equality.\nWrite $\\Sigma^{r-2} I_{s} \\ral{f} C \\ral{g} \\Sigma^r I_{s+r}$ for an element of these families.\nIt remains to show that the inclusion\n\\[\n \\left\\langle g \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} f, \\Sigma^{r-2} x \\right\\rangle \\subseteq \\left\\langle g, f, \\Sigma^{r-2} x \\right\\rangle \n\\]\nis an equality, i.e., that the extension of $g$ to the cofiber of $f$ is unique.\nThis follows from the equality $\\cat{T}(\\Sigma^{r-1} I_{s}, \\Sigma^r I_{s+r}) = 0$, which uses the assumption on the injective objects $I_j$ and that $r-1 < N$.\n\\end{proof}\n\nNext, we describe situations in which Theorem~\\ref{th:SparseDr} applies.\n\n\\begin{prop}\\label{pr:CohomProduct}\nAssume that every product of the form $\\prod_i \\Sigma^{m_i N} E$ has cohomology\\break $E^* \\left( \\prod_i \\Sigma^{m_i N} E \\right)$ which is $N$-sparse. Then every object $Y$ such that $E^* Y$ is $N$-sparse also satisfies Condition~\\ref{co:InjSparse}.\n\\end{prop}\n\n\\begin{proof}\nLet $(y_i)$ be a set of non-zero generators of $E^* Y$ as an $E^* E$-module. %\nThen the corresponding map $Y \\to \\prod_i \\Sigma^{\\abs{y_i}} E$ is $\\cat{I}_E$-monic into an injective object;\nwe take this map as the first step $p_0 \\colon Y_0 \\to I_0$, with cofiber $\\Sigma Y_1$. 
By our assumption on $Y$, each degree $\\abs{y_i}$ is a multiple of $N$, and thus $E^* I_0$ is $N$-sparse, by the assumption on $E$. The distinguished triangle $Y_1 \\to Y_0 \\ral{p_0} I_0 \\to \\Sigma Y_1$ induces a long exact sequence on $E$-cohomology which implies that the map $I_0 \\to \\Sigma Y_1$ is injective on $E$-cohomology. It follows that $E^*(\\Sigma Y_1)$ is $N$-sparse as well. Repeating this process, we obtain an $\\cat{I}_E$-Adams resolution of $Y$ such that for every $j$, $E^* (\\Sigma^j Y_j)$ and $E^* (\\Sigma^j I_j)$ are $N$-sparse.\n\\end{proof}\n\nThe condition on $E$ is discussed in Example \\ref{ex:Compact}.\n\n\\begin{prop}\\label{pr:CoherentRing}\nAssume that the ring $E^* E$ is left coherent, and that $E^* Y$ is $N$-sparse and finitely presented as a left $E^*E$-module. Then $Y$ satisfies Condition~\\ref{co:InjSparse}.\n\\end{prop}\n\n\\begin{proof}\nSince $E^*Y$ is finitely generated over $E^*E$, the map $p_0 \\colon Y \\to I_0$ can be chosen so that $I_0 = \\prod_i \\Sigma^{m_i N} E \\cong \\oplus_i \\Sigma^{m_i N} E$ is a finite product. \nIt follows that $E^* I_0$ is $N$-sparse and finitely presented.\nWe have that $E^{*-1}Y_1 = \\ker \\left( p_0^* \\colon E^*I_0 \\twoheadrightarrow E^*Y \\right)$.\nThis is $N$-sparse, since $E^* I_0$ is, and is finitely presented over $E^*E$, since both $E^* I_0$ and $E^*Y$ are, and $E^*E$ is coherent \\cite{Bourbaki98}*{\\S I.2, Exercises 11--12}. %\nRepeating this process, we obtain an $\\cat{I}_E$-Adams resolution of $Y$ such that for every $j$, $\\Sigma^j I_j$ is a finite product of the form $\\prod_i \\Sigma^{m_i N} E$.\n\\end{proof}\n\n\n\n\\subsection{Projective case}\\label{ss:sparse-projective}\n\nThe main applications of Theorem~\\ref{th:SparseDr} are to projective classes instead of injective classes. 
For future reference, we state here the dual statements of the previous subsection and adopt a notation inspired by stable homotopy theory.\n\n\\begin{nota}\nLet $R$ be an object of the triangulated category $\\cat{T}$. Define the \\textbf{homotopy} (with respect to $R$) of an object $X$ as the graded abelian group $\\pi_* X$ given by $\\pi_n X := \\cat{T}(\\Sigma^n R,X)$. Precomposition makes $\\pi_*X$ into a right module over the graded endomorphism ring $\\pi_* R$.%\n\\end{nota}\n\n\\begin{assum}\nFor the remainder of this subsection, we assume the following.\n\\begin{enumerate}\n\\item The triangulated category $\\cat{T}$ has infinite %\ncoproducts.\n\\item The graded ring $\\pi_* R$ is $N$-sparse for some $N \\geq 2$.\n\\end{enumerate}\n\\end{assum}\n\nLet $\\cat{P}_R$ denote the stable projective class spanned by $R$, as in Example~\\ref{ex:GhostSphere}. Explicitly, $\\cat{P}_R$ consists of retracts of (arbitrary) coproducts $\\oplus_i \\Sigma^{n_i} R$.\n\n\\begin{condition}\\label{co:ProjSparse}\n$X$ admits a $\\cat{P}_R$-Adams resolution $X_{\\bullet}$ as in Diagram~\\eqref{eq:AdamsResolProj} such that for each projective $P_j$ in the resolution, $\\pi_* (\\Sigma^{-j} P_j)$ is $N$-sparse.\n\\end{condition}\n\n\\begin{thm}\\label{th:SparseDrProj}\nLet $X$ and $Y$ be objects in $\\cat{T}$ and consider the Adams spectral sequence abutting to $\\cat{T}(X,Y)$ with respect to the projective class $\\cat{P}_R$. Assume that $X$ satisfies Condition~\\ref{co:ProjSparse}. Let $[y] \\in E_r^{s,t}$ be a class represented by $y \\in E_1^{s,t} = \\cat{T}(\\Sigma^{t-s} P_s, Y)$. 
Then for all $r \\leq N$, the Adams differential is given, as subsets of $E_1^{s+r,t+r-1}$, by\n\\[\nd_r [y] = \\left\\langle y, d_1, \\Sigma^{-1} d_1, \\ldots, \\Sigma^{-(r-1)} d_1 \\right\\rangle.\n\\]\n\\end{thm}\n\nNote that we used Corollary~\\ref{co:SelfDual} to ensure that the equality holds as stated, not merely up to sign.\n\n\\begin{prop}\\label{pr:HomotCoproduct}\nAssume that every coproduct of the form $\\oplus_i \\Sigma^{m_i N} R$ has homotopy\\break $\\pi_* \\left( \\oplus_i \\Sigma^{m_i N} R \\right)$ which is $N$-sparse. Then every object $X$ such that $\\pi_* X$ is $N$-sparse also satisfies Condition~\\ref{co:ProjSparse}.\n\\end{prop}\n\nRecall the following terminology:\n\n\\begin{defn}\nAn object $X$ of $\\cat{T}$ is \\textbf{compact} if the functor $\\cat{T}(X,-)$ preserves infinite coproducts.%\n\\end{defn}\n\n\\begin{ex}\\label{ex:Compact}\nIf $R$ is compact in $\\cat{T}$, then $R$ satisfies the assumption of Proposition~\\ref{pr:HomotCoproduct}. This follows from the isomorphism\n\\[\n\\pi_* \\left( \\oplus_i \\Sigma^{m_i N} R \\right) \\cong \\bigoplus_i \\pi_* (\\Sigma^{m_i N} R) = \\bigoplus_i \\Sigma^{m_i N} \\pi_* R \n\\]\nand the assumption that $\\pi_* R$ is $N$-sparse. The same argument works if $R$ is a retract of a coproduct of compact objects.\n\nDually, if $E$ is cocompact in $\\cat{T}$, then $E$ satisfies the assumption of Proposition~\\ref{pr:CohomProduct}. \nThis holds more generally if $E$ is a retract of a product of cocompact objects.\n\\end{ex}\n\n\\begin{rem}\nSome of the related literature deals with compactly generated triangulated categories. As noted in Remark~\\ref{re:NotGenerate}, we do \\emph{not} assume that the object $R$ is a generator, i.e., that the condition $\\pi_* X = 0$ implies $X=0$.\n\\end{rem}\n\n\\begin{prop}\\label{pr:CoherentRingProj}\nAssume that the ring $\\pi_* R$ is right coherent, and that $\\pi_* X$ is $N$-sparse and finitely presented as a right $\\pi_* R$-module. 
Then $X$ satisfies Condition~\\ref{co:ProjSparse}.\n\\end{prop}\n\nThe following is a variant of \\cite{Patchkoria12}*{Lemma 2.2.2}, where we do not assume that $R$ is a generator.\nIt identifies the $E_2$ term of the spectral sequence associated to the projective class $\\cat{P}_R$.\nThe proof is straightforward.\n\n\\begin{prop}\\label{pr:ExtEndoRing}\nAssume that the object $R$ is compact.\n\\begin{enumerate}\n\\item Let $P$ be in the projective class $\\cat{P}_R$. Then the map of abelian groups\n\\[ %\n\\cat{T}(P,Y) \\to \\Hom_{\\pi_* R} (\\pi_* P, \\pi_* Y)\n\\] %\nis an isomorphism for every object $Y$.\n\\item There is an isomorphism\n\\[\n\\Ext^{s}_{\\cat{P}_R}(X,Y) \\cong \\Ext^{s}_{\\pi_* R}(\\pi_* X, \\pi_* Y)\n\\]\nwhich is natural in $X$ and $Y$. \n\\end{enumerate}\n\\end{prop}\n\n\n\n\n\\subsection{Examples}\\label{ss:sparse-examples}\n\nTheorem~\\ref{th:SparseDrProj} applies to modules over certain ring spectra. We describe some examples, along the lines of \\cite{Patchkoria12}*{Examples 2.4.6 and 2.4.7}.\n\n\\begin{ex}\\label{ex:RingSpectrum}\nLet $R$ be an $A_{\\infty}$ ring spectrum, and let $h\\Mod{R}$ denote the homotopy category of the stable model category of (right) $R$-modules %\n\\cite{SchwedeS03}*{Example 2.3(ii)} \n\\cite{EKMM97}*{\\S III}. Then $R$ itself, the free $R$-module of rank $1$, is a compact generator for $h\\Mod{R}$. The $R$-homotopy of an $R$-module spectrum $X$ is the usual homotopy of $X$, as suggested by the notation:\n\\[\nh\\Mod{R}(\\Sigma^n R, X) \\cong h\\Mod{S}(S^n, X) = \\pi_n X.\n\\]\nIn particular, the graded endomorphism ring $\\pi_* R$ is the usual coefficient ring of $R$.\n\nThe projective class $\\cat{P}_R$ is the ghost projective class \\cite{Christensen98}*{\\S 7.3}, generalizing Example~\\ref{ex:GhostSphere}, where $R$ was the sphere spectrum $S$. 
The Adams spectral sequence relative to $\\cat{P}_R$ is \nthe universal coefficient spectral sequence\n\\[\n\\Ext_{\\pi_* R}^{s}(\\Sigma^t \\pi_* X, \\pi_* Y) \\Ra h\\Mod{R}(\\Sigma^{t-s} X,Y)\n\\]\nas described in \\cite{EKMM97}*{\\S IV.4} and~\\cite{Christensen98}*{Corollary 7.12}. %\nWe used Proposition~\\ref{pr:ExtEndoRing} to identify the $E_2$ term.\n\nSome $A_{\\infty}$ ring spectra $R$ with sparse homotopy $\\pi_* R$ are discussed in \\cite{Patchkoria12}*{\\S 4.3, 5.3, 6.4}. In view of Proposition~\\ref{pr:ExtEndoRing}, the Adams spectral sequence in $h\\Mod{R}$ collapses at the $E_2$ page if $\\pi_* R$ has (right) global dimension less than $2$. %\n\nThe Johnson--Wilson spectrum $E(n)$ has coefficient ring\n\\[\n\\pi_* E(n) = \\mathbb{Z}_{(p)}[v_1, \\ldots, v_n, v_n^{-1}], \\quad \\abs{v_i} = 2(p^i - 1),\n\\]\nwhich has global dimension $n$ and is $2(p-1)$-sparse. Hence, Theorem~\\ref{th:SparseDrProj} applies in this case to the differentials $d_r$ with $r \\leq 2(p-1)$, while $d_r$ is zero for $r > n$.\nLikewise, connective complex $K$-theory $ku$ has coefficient ring\n\\[\n\\pi_* ku = \\mathbb{Z}[u], \\quad \\abs{u} = 2,\n\\]\nwhich has global dimension $2$ and is $2$-sparse.\n\\end{ex}\n\n\\begin{ex}\\label{ex:DGA}\nLet $R$ be a differential graded (\\emph{dg} for short) algebra over a commutative ring $k$, and consider the category of dg $R$-modules $\\dgMod{R}$. The homology $H_* X$ of a dg $R$-module is a (graded) $H_* R$-module. %\nThe derived category $D(R)$ is defined as the localization of $\\dgMod{R}$ with respect to quasi-isomorphisms. \nThe free dg $R$-module $R$ is a compact generator of $D(R)$. The $R$-homotopy of an object $X$ of $D(R)$ is its homology $\\pi_* X = H_* X$. 
In particular, the graded endomorphism ring of $R$ in $D(R)$ is the graded $k$-algebra $H_* R$.\n\nThe Adams spectral sequence relative to $\\cat{P}_R$ is \nan Eilenberg--Moore spectral sequence\n\\[\n\\Ext_{H_* R}^{s} \\left( \\Sigma^t H_* X, H_* Y \\right) \\Ra D(R)(\\Sigma^{t-s} X, Y)\n\\]\nfrom ordinary $\\Ext$ to differential $\\Ext$, as described in \\cite{BarthelMR14}*{\\S 8, 10}. See also \\cite{KrizM95}*{\\S III.4}, \\cite{HoveyPS97}*{Example 10.2(b)}, \nand \\cite{EilenbergM66}.\n\\end{ex}\n\n\\begin{rem}\nExample~\\ref{ex:DGA} can be viewed as a special case of Example~\\ref{ex:RingSpectrum}. Letting $HR$ denote the Eilenberg--MacLane spectrum associated to $R$, the categories $\\Mod{HR}$ and $\\dgMod{R}$ are Quillen equivalent, by \\cite{SchwedeS03}*{Example 2.4(i)} \\cite{Shipley07HZ}*{Corollary 2.15}, yielding a triangulated equivalence $h\\Mod{HR} \\cong D(R)$. The generator $HR$ corresponds to the generator $R$ via this equivalence.\n\\end{rem}\n\n\\begin{ex}\\label{ex:Ring}\nLet $R$ be a ring, viewed as a dg algebra concentrated in degree $0$. Then Example~\\ref{ex:DGA} yields the ordinary derived category $D(R)$. The graded endomorphism ring of $R$ in $D(R)$ is $H_* R$, which is $R$ concentrated in degree $0$.\nThis is $N$-sparse for any $N \\geq 2$. 
\n\nThe Adams spectral sequence relative to $\\cat{P}_R$ is the hyperderived functor spectral sequence\n\\[\n\\Ext_{H_* R}^{s} \\left( \\Sigma^t H_* X, H_* Y \\right) = \\prod_{i \\in \\mathbb{Z}} \\Ext_{R}^{s} \\left( H_{i-t} X, H_{i} Y \\right) \\Ra D(R)(\\Sigma^{t-s}X, Y) = \\mathbf{Ext}_{R}^{s-t}(X,Y)\n\\]\nfrom ordinary $\\Ext$ to hyper-$\\Ext$, as described in \\cite{Weibel94}*{\\S 5.7, 10.7}.%\n\\end{ex}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRogers \\cite{Rog58} introduced the notion of computable numbering\n(of all partial computable functions) and the notion of reducibility.\nHe showed that the set of all equivalence classes of computable numberings\nforms an upper semilattice with respect to reducibility, which is called\nthe \\emph{Rogers semilattice}. He asked whether this semilattice is\na lattice, and if not, whether any two elements have a lower bound.\nFriedberg \\cite{Fri58} constructed an injective computable numbering\n(called a Friedberg numbering) by a finite-injury priority argument.\nPour-El \\cite{Pou64} showed that every Friedberg numbering is\nminimal. She also showed that there are two incomparable Friedberg\nnumberings by modifying Friedberg's construction. These numberings\nare non-equivalent minimal elements, and therefore they have no lower\nbound. Thus both of Rogers' questions were answered negatively.\n\nShen \\cite{She12} gave some examples of game-theoretic proofs of\ntheorems in computability theory and algorithmic information theory.\nIn particular, he gave a game-theoretic proof of the theorem of\nFriedberg. The game representation of Friedberg's construction is\nclear and intuitive, and can be used to prove other existence theorems\nfor Friedberg numberings, as we demonstrate in this paper.\n\nIn \\prettyref{sec:Friedberg's construction} we present Shen's proof\nfor later use. 
We provide game-theoretic proofs of certain well-known existence\ncriteria for Friedberg numberings of general classes of partial computable\nfunctions. In \\prettyref{sec:Modifications}, we give two proofs of\nthe theorem of Pour-El using two games. We also prove the existence\nof an infinite c.e. sequence and of an independent sequence\nof Friedberg numberings. These are essentially modifications of Shen's\nproof.\n\n\n\\section{Notations and Definitions}\n\nWe denote by $\\mathcal{P}^{\\left(1\\right)}$ the set of all partial\ncomputable functions from $\\mathbb{N}$ to $\\mathbb{N}$. $\\braket{\\cdot,\\cdot}$\nis a computable pairing function, that is, a computable bijection between\n$\\mathbb{N}^{2}$ and $\\mathbb{N}$. Let $\\mathcal{A}$ be any set.\nA surjective map $\\nu:\\mathbb{N}\\to\\mathcal{A}$ is called a \\emph{numbering}\nof $\\mathcal{A}$. Let $\\nu$ and $\\mu$ be numberings of $\\mathcal{A}$.\nWe say that $\\nu$ is \\emph{reducible to} $\\mu$, denoted by $\\nu\\leq\\mu$,\nif there is a total computable function $f:\\mathbb{N}\\to\\mathbb{N}$\nsuch that $\\nu=\\mu\\circ f$. We say that $\\nu$ and $\\mu$ are \\emph{equivalent}\nif they are reducible to each other. We say that $\\nu$ and $\\mu$\nare \\emph{incomparable} if they are not reducible to each other. In\nthis paper, we often identify a numbering $\\nu$ of a set of partial\nmaps from $X$ to $Y$ with the partial map $\\nu\\left(i,x\\right)=\\nu\\left(i\\right)\\left(x\\right)$\nfrom $\\mathbb{N}\\times X$ to $Y$. A numbering $\\nu$ of a subset\nof $\\mathcal{P}^{\\left(1\\right)}$ is said to be \\emph{computable}\nif it is computable as a partial function from $\\mathbb{N}^{2}$ to\n$\\mathbb{N}$. A computable injective numbering is called a \\emph{Friedberg\nnumbering}. A sequence $\\set{\\nu_{i}}_{i\\in\\mathbb{N}}$ of numberings\nof a subset of $\\mathcal{P}^{\\left(1\\right)}$ is said to be \\emph{uniformly\nc.e.} if it is uniformly c.e. 
as a sequence of partial functions from\n$\\mathbb{N}^{2}$ to $\\mathbb{N}$, or equivalently, if it is computable\nas a partial function from $\\mathbb{N}^{3}$ to $\\mathbb{N}$. We\nsay that a sequence $\\set{\\nu_{i}}_{i\\in\\mathbb{N}}$ of numberings\nof a set $\\mathcal{A}$ is \\emph{independent} if $\\nu_{i}\\nleq\\bigoplus_{j\\neq i}\\nu_{j}$\nfor all $i\\in\\mathbb{N}$, where $\\bigoplus_{i\\in\\mathbb{N}}\\nu_{j}$\nis the direct sum of $\\set{\\nu_{i}}_{i\\in\\mathbb{N}}$ defined by\n$\\bigoplus_{i\\in\\mathbb{N}}\\nu_{i}\\left(\\braket{j,k}\\right)=\\nu_{j}\\left(k\\right)$.\n\n\n\\section{\\label{sec:Friedberg's construction}Friedberg's construction and\nthe infinite game with two boards}\n\\begin{thm}[{Friedberg \\cite[Corollary to Theorem 3]{Fri58}}]\n\\label{thm:Fri58-Corollary-to-Theorem3}$\\mathcal{P}^{\\left(1\\right)}$\nhas a Friedberg numbering.\\end{thm}\n\\begin{proof}[Proof (Shen \\cite{She12})]\nFirst, we consider an infinite game $\\mathcal{G}_{0}$ and prove\nthat the existence of a computable winning strategy of $\\mathcal{G}_{0}$\nfor one of the players implies the existence of a Friedberg numbering\nof $\\mathcal{P}^{\\left(1\\right)}$. 
The game $\\mathcal{G}_{0}$ is\nas follows:\n\\begin{description}\n\\item [{Players}] Alice, Bob.\n\\item [{Protocol}] FOR $s=0,1,2,\\ldots$:\n\n\nAlice announces a finite partial function $A_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\nBob announces a finite partial function $B_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\\item [{Collateral duties}] $A_{s}\\subseteq A_{s+1}$ and $B_{s}\\subseteq B_{s+1}$\nfor all $s\\in\\mathbb{N}$.\n\\item [{Winner}] Let $A=\\bigcup_{s\\in\\mathbb{N}}A_{s}$ and $B=\\bigcup_{s\\in\\mathbb{N}}B_{s}$.\nBob wins if\n\n\\begin{enumerate}\n\\item for each $i\\in\\mathbb{N}$, there is a $j\\in\\mathbb{N}$ such that\n$A\\left(i,\\cdot\\right)=B\\left(j,\\cdot\\right)$;\n\\item for any $i,j\\in\\mathbb{N}$, if $i\\neq j$, then $B\\left(i,\\cdot\\right)\\neq B\\left(j,\\cdot\\right)$.\n\\end{enumerate}\n\\end{description}\n\nWe consider $A$ and $B$ as two boards, $A$-table and $B$-table.\nEach board is a table with an infinite number of rows and columns.\nEach player plays on his or her own board. At each move, a player can fill finitely\nmany cells with arbitrary natural numbers. The collateral duties prohibit\nplayers from erasing cells.\n\n\nA strategy is a map that determines the next action based on the previous\nactions of the opponent. Since any action in this game is a finitary\nobject, we can define the computability of strategies via g\\\"odelization.\nSuppose that there is a computable winning strategy for Bob. Let Alice\nfill $A$-table with the values of some computable numbering of $\\mathcal{P}^{\\left(1\\right)}$\nby using its finite approximations, and let Bob use some computable\nwinning strategy. Clearly $B$ is a Friedberg numbering of $\\mathcal{P}^{\\left(1\\right)}$.\n\n\nSecond, we consider an infinite game $\\mathcal{G}_{1}$, which is\na simplified version of $\\mathcal{G}_{0}$, and describe a computable\nwinning strategy of $\\mathcal{G}_{1}$. 
The game $\\mathcal{G}_{1}$\nis as follows:\n\\begin{description}\n\\item [{Players}] Alice, Bob.\n\\item [{Protocol}] FOR $s=0,1,2,\\ldots$:\n\n\nAlice announces a finite partial function $A_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\nBob announces a finite partial function $B_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$\nand a finite set $K_{s}\\subseteq\\mathbb{N}$.\n\n\\item [{Collateral duties}] $A_{s}\\subseteq A_{s+1}$, $B_{s}\\subseteq B_{s+1}$\nand $K_{s}\\subseteq K_{s+1}$ for all $s\\in\\mathbb{N}$.\n\\item [{Winner}] Let $A=\\bigcup_{s\\in\\mathbb{N}}A_{s}$, $B=\\bigcup_{s\\in\\mathbb{N}}B_{s}$\nand $K=\\bigcup_{s\\in\\mathbb{N}}K_{s}$. Bob wins if\n\n\\begin{enumerate}\n\\item for each $i\\in\\mathbb{N}$, there is a $j\\in\\mathbb{N}\\setminus K$\nsuch that $A\\left(i,\\cdot\\right)=B\\left(j,\\cdot\\right)$;\n\\item for any $i,j\\in\\mathbb{N}\\setminus K$, if $i\\neq j$, then $B\\left(i,\\cdot\\right)\\neq B\\left(j,\\cdot\\right)$.\n\\end{enumerate}\n\\end{description}\n\nWe consider that in this game Bob can \\emph{invalidate} some rows\nand that we ignore invalid rows when we decide the winner. Bob cannot\nvalidate invalid rows again.\n\n\nTo win this game, Bob hires a countable number of assistants who guarantee\nthat each of the rows in $A$-table appears in $B$-table exactly\nonce. At each move, the assistants work one by one. The $i$-th assistant\nstarts working at move $i$. She can reserve a row in $B$-table exclusively,\nfill her reserved row, and invalidate her reserved row. The instruction\nfor the $i$-th assistant: \\textit{if you have no reserved row, reserve\na new row. Let $k$ be the number of rows such that you have already\ninvalidated. If in the current state of $A$-table the first $k$\npositions of the $i$-th row are identical to the first $k$ positions\nof some previous row, invalidate your reserved row. 
If you have a\nreserved row, copy the current contents of the $i$-th row of $A$-table\ninto your reserved row.} These instructions guarantee in the limit\nthat\n\\begin{itemize}\n\\item if the $i$-th row in $A$-table is identical to some previous row,\nthen the $i$-th assistant invalidates her reserved row infinitely\nmany times, so she has no permanently reserved row;\n\\item if not, the $i$-th assistant invalidates her reserved row only finitely\nmany times, so she has a permanently reserved row.\n\\end{itemize}\n\n\\noindent In the second case, she faithfully copies the contents of\nthe $i$-th row of $A$-table into her permanently reserved row. We\ncan assume that each of the rows in $B$-table has been reserved or\ninvalidated in the limit: when some assistant reserves a row let her\nselect the first unused row. Then Bob wins the simplified game. Now\nwe prove the above properties. Suppose that the $i$-th row in $A$-table\nis not identical to any previous row in the limit. For each of the\nprevious rows, select some column witnessing that this row is not\nidentical to the $i$-th row. Let $k$ be the maximum of the selected\ncolumns. Wait for convergence of the rectangular area $\\left[0,i\\right]\\times\\left[0,k\\right]$\nof $A$-table. After that, the first $k$ positions of the $i$-th\nrow in $A$-table are not identical to the first $k$ positions of\nany previous row, and hence the $i$-th assistant invalidates her\nreserved row at most $k$ times. Conversely, suppose that the $i$-th\nassistant invalidates her reserved row only finitely many times. Let\n$k$ be the number of invalidations. After the $k$-th invalidation,\nthe $i$-th row in $A$-table is not identical to any previous row,\nand the same is true in the limit.\n\n\nFinally, we describe a computable winning strategy of $\\mathcal{G}_{0}$\nthrough modifying the winning strategy of $\\mathcal{G}_{1}$ described\nabove. We say that a row is odd if it contains a finite odd number\nof non-empty cells. 
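As an aside, the winning strategy for $\mathcal{G}_{1}$ analyzed above is completely effective at every stage. The following Python sketch is purely illustrative and not part of the proof: the function name, the encoding of rows as dictionaries from columns to values, and the stage bound are our own hypothetical choices. It runs the assistants' instructions on a finite approximation of $A$-table and returns the rows of $B$-table that were never invalidated:

```python
# Illustrative finite-stage simulation of Bob's assistants in the game G_1.
# A row is a finite partial function, encoded as a dict {column: value}.

def simulate(a_stages, num_stages):
    """a_stages(s) returns Alice's A-table at stage s as a list of rows."""
    b_rows = []          # contents of the B-table rows
    valid = []           # validity flag for each B-table row
    reserved = {}        # assistant i -> index of her reserved B-row
    invalidations = {}   # assistant i -> number of invalidations so far

    def prefix(row, k):
        # The restriction of a row to its first k columns.
        return tuple(sorted((c, v) for c, v in row.items() if c < k))

    for s in range(num_stages):
        a = a_stages(s)
        for i in range(min(s + 1, len(a))):   # assistant i starts at move i
            if i not in reserved:             # reserve a fresh B-row
                b_rows.append({})
                valid.append(True)
                reserved[i] = len(b_rows) - 1
                invalidations.setdefault(i, 0)
            k = invalidations[i]
            # If the first k positions of row i agree with those of some
            # previous row, invalidate the reservation; otherwise copy.
            if any(prefix(a[i], k) == prefix(a[j], k) for j in range(i)):
                valid[reserved[i]] = False
                del reserved[i]
                invalidations[i] = k + 1
            else:
                b_rows[reserved[i]].update(a[i])
    return [row for row, ok in zip(b_rows, valid) if ok]
```

For instance, on a table whose third row duplicates the first, `simulate(lambda s: [{0: 1, 1: 2}, {0: 1, 1: 3}, {0: 1, 1: 2}], 20)` keeps invalidating the duplicate, and only the two distinct rows survive as valid $B$-rows, each exactly once.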
We can assume without loss of generality that\nodd rows never appear in $A$-table: if Alice fills some cells in\na row making this row odd, Bob ignores one of these cells until Alice\nfills other cells in this row. We replace invalidation with \\emph{odd-ification}:\ninstead of invalidating a row, fill some cells in this row making\nit new and odd. Bob considers odd-ified rows in $B$-table to be\ninvalid. This modification guarantees that each of the non-odd rows\nof $A$-table appears in $B$-table exactly once. Bob hires an additional\nassistant who guarantees that each odd row appears in $B$-table exactly\nonce. At each move, the additional assistant reserves some row exclusively\nand fills some cells in this row making it new and odd so that all\nodd rows are exhausted in the limit. Thus Bob wins this game, and\nthe theorem is proved.\n\\end{proof}\nKummer \\cite{Kum90a} gave a priority-free proof of the existence\nof a Friedberg numbering of $\\mathcal{P}^{\\left(1\\right)}$. The key\npoint of his proof is to split $\\mathcal{P}^{\\left(1\\right)}$ into\n$\\mathcal{P}^{\\left(1\\right)}\\setminus\\mathcal{O}$ and $\\mathcal{O}$,\nwhere $\\mathcal{O}$ is the set of all \\emph{odd} partial functions.\nObserve that $\\mathcal{P}^{\\left(1\\right)}\\setminus\\mathcal{O}$ has\na computable numbering, $\\mathcal{O}$ has a Friedberg numbering,\nand any finite subfunction of a partial function in $\\mathcal{P}^{\\left(1\\right)}\\setminus\\mathcal{O}$\nhas infinitely many extensions in $\\mathcal{O}$. He provided the\nfollowing useful criterion.\n\\begin{cor}[{Kummer \\cite[Extension Lemma]{Kum89}}]\n\\label{cor:Kum89-Extension-Lemma}Let $\\mathcal{A}$ and $\\mathcal{B}$\nbe disjoint subsets of $\\mathcal{P}^{\\left(1\\right)}$. 
If $\\mathcal{A}$\nhas a computable numbering, $\\mathcal{B}$ has a Friedberg numbering,\nand every finite subfunction of a member of $\\mathcal{A}$ has infinitely\nmany extensions in $\\mathcal{B}$, then $\\mathcal{A}\\cup\\mathcal{B}$\nhas a Friedberg numbering.\\end{cor}\n\\begin{proof}[Proof sketch]\nLet us play the game $\\mathcal{G}_{0}$ where Alice fills $A$-table\nwith the values of some computable numbering of $\\mathcal{A}$,\nand Bob uses the strategy which is obtained by modifying the winning\nstrategy of $\\mathcal{G}_{0}$ as follows. In this strategy, we do\nnot assume that odd rows never appear in $A$-table, and Bob does\nnot ignore cells in $A$-table. Assistants use partial functions in\n$\\mathcal{B}$ instead of odd partial functions. Replace odd-ification\nwith \\emph{$\\mathcal{B}$-ification}: instead of odd-ifying a row,\nfill some cells in this row making it an unused member of $\\mathcal{B}$\nin the limit. The additional assistant guarantees that each member\nof $\\mathcal{B}$ appears in $B$-table exactly once. These actions\nare possible since $\\mathcal{B}$ has a Friedberg numbering. Then\nBob wins, and $B$ becomes a Friedberg numbering of $\\mathcal{A}\\cup\\mathcal{B}$.\\end{proof}\n\\begin{cor}[{Pour-El and Putnam \\cite[Theorem 1]{PP65}}]\nLet $\\mathcal{A}$ be a subset of $\\mathcal{P}^{\\left(1\\right)}$\nand $f$ be a member of $\\mathcal{P}^{\\left(1\\right)}$ with an infinite\ndomain. 
If $\\mathcal{A}$ has a computable numbering, then there is\na subset $\\mathcal{B}$ of $\\mathcal{P}^{\\left(1\\right)}$ such that\n\\begin{enumerate}\n\\item $\\mathcal{A}\\subseteq\\mathcal{B}$,\n\\item the domain of every member of $\\mathcal{B}\\setminus\\mathcal{A}$ is\nfinite,\n\\item for any $g\\in\\mathcal{B}\\setminus\\mathcal{A}$, there is an $h\\in\\mathcal{A}$\nwith $g\\subseteq f\\cup h$,\n\\item $\\mathcal{B}$ has a Friedberg numbering.\n\\end{enumerate}\n\\end{cor}\n\\begin{proof}[Proof sketch]\nLet us play the game $\\mathcal{G}_{0}$ where Alice fills $A$-table\nwith the values of some computable numbering of $\\mathcal{P}^{\\left(1\\right)}$,\nand Bob uses the strategy which is obtained by modifying the winning\nstrategy of $\\mathcal{G}_{0}$ as follows. When some assistant fills\na cell in the $j$-th column making this row odd, she must use $f\\left(j\\right)$\nfor filling. The instruction for the additional assistant: \\textit{if\nthere is an odd row $i$ in $A$-table such that this row is not identical\nto any row in $B$-table, reserve a new row, copy the current contents\nof the $i$-th row of $A$-table into your reserved row, and release\nyour reserved row. }Released rows cannot be used forever. 
Note that\nthe additional assistant, as an exception, does not ignore cells in $A$-table.\nThen, Bob wins, $\mathcal{B}=\set{B\left(i,\cdot\right)|i\in\mathbb{N}}$\nhas the desired properties, and $B$ becomes a Friedberg numbering\nof $\mathcal{B}$.\n\end{proof}\n\n\section{\label{sec:Modifications}Modifications}\n\begin{thm}[{Pour-El \cite[Theorem 2]{Pou64}}]\n\label{thm:Pou64-Theorem2}There are two incomparable Friedberg numberings\nof $\mathcal{P}^{\left(1\right)}$.\n\end{thm}\nThe first proof is obtained from the proof of \prettyref{thm:Fri58-Corollary-to-Theorem3}\nby modifying it in the same way as Pour-El did.\n\begin{proof}[Proof (asymmetric version)]\nWe consider the following game $\mathcal{G}_{2}$:\n\begin{description}\n\item [{Players}] Alice, Bob.\n\item [{Protocol}] FOR $s=0,1,2,\ldots$:\n\n\nAlice announces a finite partial function $A_{s}:\mathbb{N}^{2}\rightharpoonup\mathbb{N}$.\n\n\nBob announces a finite partial function $B_{s}:\mathbb{N}^{2}\rightharpoonup\mathbb{N}$.\n\n\item [{Collateral duties}] $A_{s}\subseteq A_{s+1}$ and $B_{s}\subseteq B_{s+1}$\nfor all $s\in\mathbb{N}$.\n\item [{Winner}] Let $A=\bigcup_{s\in\mathbb{N}}A_{s}$ and $B=\bigcup_{s\in\mathbb{N}}B_{s}$.\nBob wins if\n\n\begin{enumerate}\n\item for each $i\in\mathbb{N}$, there is a $j\in\mathbb{N}$ such that\n$A\left(i,\cdot\right)=B\left(j,\cdot\right)$;\n\item for any $i,j\in\mathbb{N}$, if $i\neq j$, then $B\left(i,\cdot\right)\neq B\left(j,\cdot\right)$;\n\item for each $i\in\mathbb{N}$, if $A\left(i,\cdot\right)$ is total,\nthen there is a $j\in\mathbb{N}$ such that $B\left(A\left(i,j\right),\cdot\right)\neq A\left(j,\cdot\right)$.\n\end{enumerate}\n\end{description}\n\nSuppose that there is a computable winning strategy of $\mathcal{G}_{2}$\nfor Bob.
Let Alice fill $A$-table with the values of some Friedberg\nnumbering of $\mathcal{P}^{\left(1\right)}$, and let Bob use some\ncomputable winning strategy. Then, $A$ is a Friedberg numbering of\n$\mathcal{P}^{\left(1\right)}$, and $B$ is a Friedberg numbering\nof $\mathcal{P}^{\left(1\right)}$ to which $A$ is not reducible.\nSince $A$ is minimal, $B$ is also not reducible to $A$.\n\n\nWe describe a computable winning strategy of $\mathcal{G}_{2}$. Bob\nuses the winning strategy of $\mathcal{G}_{0}$ described in the proof\nof \prettyref{thm:Fri58-Corollary-to-Theorem3}, which guarantees\nthat the first two winning conditions are satisfied. For the third\nwinning condition, Bob adds the following instruction for the $i$-th\nassistant: \textit{if the cell in the $i$-th row and $i$-th column of $A$-table\nhas been filled, and you have reserved the $A\left(i,i\right)$-th\nrow, then odd-ify the $A\left(i,i\right)$-th row.} Each of these\ninstructions is done at most once because after doing this the corresponding\nrequirement is permanently satisfied. Hence they do not interfere\nwith satisfying the first two winning conditions. It remains to show that\nthe third winning condition is also satisfied. Suppose that $A\left(i,\cdot\right)$\nis total.
We can assume without loss of generality that $i$ is the\nleast index of $A\left(i,\cdot\right)$, i.e., there is no $j<i$\nwith $A\left(j,\cdot\right)=A\left(i,\cdot\right)$. For a hyperuniform point pattern whose structure\nfactor vanishes as $S(k)\sim k^{\alpha}$ in the limit $k \rightarrow 0$, the number\nvariance grows asymptotically as \cite{ref13}\n\begin{equation}\n\label{numasymp}\n\sigma^2(R) \sim \left\{{\begin{array}{c@{\hspace{0.8cm}}c}\nR^{d-1} & \alpha>1,\n\\\\ R^{d-1}\ln R & \alpha=1,\n\\\\ R^{d-\alpha} & \alpha<1. \end{array}} \right .\n\end{equation}\nNote that in all cases, the number variance of a hyperuniform\npoint pattern grows more slowly than $R^d$.\n\n\subsection{Order metrics}\n\nThe local bond-orientational-order metric $q_6$ is defined as\n\cite{ref25}\n\begin{equation}\nq_6 = \left |{\frac{1}{N_b}\sum_j\sum_k \exp(6i\n\theta_{jk})}\right |,\n\end{equation}\nwhere $j$ runs over all cells in the system, $k$ runs over all\nneighbors of cell $j$, $\theta_{jk}$ is the angle between some\nfixed reference axis in the system and the bond connecting the\ncenters of cells $j$ and $k$, and $N_b$ is the total number of\nsuch bonds in the system. This quantity indicates the degree of\norientational order in the local arrangement of the immediate\nneighbors of a cell, and it is maximized (i.e., $q_6=1$) for the\nperfect hexagonal arrangement.\n\nTo characterize the translational order of a configuration, we use the\nfollowing translational order metric $T$ introduced in Ref.\n\cite{order_T} and further applied in Ref. \cite{ref26},\n\begin{equation}\nT = \frac{1}{\eta_c}\int_0^{\eta_c}|g_2(r)-1|dr =\n\frac{1}{\eta_c}\int_0^{\eta_c}|h(r)|dr,\n\end{equation}\nwhere $g_2(r)$ is the pair correlation function, $h(r) = g_2(r)-1$\nis the total correlation function, and $\eta_c$ is a numerical\ncutoff determined by the linear size of the system. The\ntranslational order metric measures the deviation of the spatial\narrangement of cell centers in a pattern from that of a totally\ndisordered system (i.e., a Poisson distribution of points). The\ngreater the deviation of $T$ from zero, the more ordered the point\nconfiguration is.\n\n\n\section{Structural properties of experimentally obtained photoreceptor\npatterns}\n\nThe chicken retina contains five different cone cell types of\ndifferent sizes: violet, blue, green, red and double.
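The bond-orientational metric $q_6$ defined in the preceding subsection can be evaluated directly from a set of cell-center coordinates. The sketch below is our own illustration, not code from the original study; in particular, taking the "neighbors" of a cell to be the edges of the Delaunay triangulation is an assumed convention, since the text does not fix a neighbor criterion.

```python
import numpy as np
from scipy.spatial import Delaunay

def bond_orientational_q6(points):
    """q6 = |(1/N_b) * sum over bonds (j,k) of exp(6 i theta_jk)|.

    points: (N, 2) array of cell-center coordinates.
    Bonds are the edges of the Delaunay triangulation (an assumed
    neighbor definition); theta_jk is measured from the x-axis.
    """
    tri = Delaunay(points)
    bonds = set()
    for simplex in tri.simplices:          # each simplex is a triangle (a, b, c)
        for a in range(3):
            for b in range(3):
                if a != b:
                    bonds.add((simplex[a], simplex[b]))
    acc = 0.0 + 0.0j
    for j, k in bonds:
        dx, dy = points[k] - points[j]
        acc += np.exp(6j * np.arctan2(dy, dx))  # exp(6 i theta_jk)
    return abs(acc) / len(bonds)
```

For a patch of the perfect triangular lattice every bond angle is a multiple of $60^\circ$, so $q_6$ evaluates to 1; for a spatially uncorrelated (Poisson) pattern it is close to 0.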
Each cell\ntype of this {\it multicomponent} system is maximally sensitive to\nvisible light of a different wavelength. The spatial coordinates\nof each cell can be determined by the presence of a colored oil\ndroplet in the cell's inner segment (Fig. \ref{fig1_cone}). Since\nthe oil droplets used to identify the locations of individual\nphotoreceptors are not always in exactly the same plane\n\cite{ref12}, pairs of real photoreceptors sometimes appear\ncloser to one another than they actually are (and than they do in the\nsimulations). In addition, the original slightly curved retina\nepithelium was flattened for imaging purposes \cite{ref12}. These\neffects introduce small errors in the intercell small-distance\nbehavior but do not affect the overall statistics, especially on\nlarge length scales. The spatial coordinate datasets of post-hatch\nday 15 chicken (\textit{Gallus gallus}) cone photoreceptors were obtained\nfrom a published study \cite{ref12}. Each dataset contains\napproximately 4430 photoreceptors, and the average numbers of\nviolet, blue, green, red and double species are respectively 350,\n590, 880, 670 and 1840. To clearly illustrate the photoreceptor\npatterns of different species, only a portion of the entire system\nis shown in Fig. \ref{fig_cellpacking}. We compute a variety of\nthe associated statistical structural descriptors and order\nmetrics to quantify the degree of spatial regularity (or disorder)\nof the cell arrangements.\n\n\begin{figure*}\n\begin{center}\n\includegraphics[width=10.5cm,keepaspectratio]{fig3.eps}\\\n\end{center}\n\caption{Experimentally obtained configurations representing the\nspatial arrangements of chicken cone photoreceptors. Upper panels:\nThe configurations shown from left to right respectively\ncorrespond to violet, blue, green species.
Lower panels: The\nconfigurations shown from left to right respectively correspond to\nred, double species and the overall pattern.}\n\label{fig_cellpacking}\n\end{figure*}\n\n\n\subsection{Disordered Hyperuniformity}\n\n\begin{figure*}\n\begin{center}\n\includegraphics[height=10cm,keepaspectratio]{fig4.eps} \\\n\end{center}\n\caption{Structure factors $S(k)$ of the experimentally obtained\npoint configurations representing the spatial arrangements of\nchicken cone photoreceptors. The experimental data were obtained\nby averaging 14 independent patterns. The estimated values of\n$S(k=0)$ by extrapolation for violet, blue, green, red, double and\nthe overall population in the actual pattern are respectively\ngiven by $2.11\times 10^{-3}$, $6.10\times10^{-4}$,\n$1.06\times10^{-3}$, $5.72\times10^{-4}$, $1.38\times10^{-4}$,\n$1.13\times10^{-3}$.} \label{fig_Sk}\n\end{figure*}\n\n\nAs discussed in Sec. IIB, a point pattern is hyperuniform if the\nnumber variance $\sigma^2(R)$ within a spherical sampling window\nof radius $R$ (in $d$ dimensions) grows more slowly than the\nwindow volume for large $R$, i.e., more slowly than $R^d$\n\cite{ref13}. The property of hyperuniformity can also be\nascertained from the small-wavenumber behavior of the structure\nfactor, i.e., $S(k=0)=0$ for the pattern \cite{ref13}, which\nencodes information about large-scale spatial correlations (see\nSec. IIB for details). We find that the cell\nconfigurations associated with both the total population and the\nindividual photoreceptor species are hyperuniform, and each of\nthe corresponding structure factors vanishes linearly with $k$ as $k$ tends to\nzero, i.e., $S(k) \sim k$ ($k \rightarrow 0$) (see Fig.\n\ref{fig_Sk}). As discussed in Sec. IIB [cf. Eq.
(\\ref{eq_S0})],\nsuch a linear behavior indicates a power-law decay for large $r$\nvalues in the pair correlation function (i.e., $g_2(r)-1 \\sim\n-1\/r^{3}$) instead of an exponential decay and therefore\nquasi-long-range correlations in the system. We will elaborate on\nthis point in the ensuing discussion.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=8.5cm,keepaspectratio]{fig5.eps} \\\\\n\\end{center}\n\\caption{The number variance $\\sigma^2(R)$ associated with the\nphotoreceptor patterns in chicken retina as well as the associated\nfitting function of the form $\\sigma^2(R) = A R^2 + B R\\ln(R) + C\nR$. We found that the values of the parameter $A$ are several\norders of magnitude smaller than the other two parameters,\nindicating that the associated patterns are effectively\nhyperuniform. Also shown in each plot is the ``surface term'' $C\nR$ for purposes of comparison. The window radius $R$ is normalized\nwith respect to the mean nearest neighbor distance $d_0$ of the\ncorresponding point configurations.} \\label{fig_sigma}\n\\end{figure*}\n\n\nWe have directly computed the number variance $\\sigma^2(R)$ for\nthe individual and overall patterns and verified that they are\nalso consistent with hyperuniformity, i.e., the ``volume term'' in\n$\\sigma^2(R)$ is several orders of magnitude smaller than the\nother terms [c.f. Eq.~\\eqref{numasymp}]. Specifically, for each\n$R$ value, 2500 windows are randomly placed in the system without\noverlapping the system boundary. The finite system size $L$\nimposes an upper limit on the largest window size, which is chosen\nto be $R_{max} = L\/2$ here. Figure \\ref{fig_sigma} shows the\nexperimental data as well as the associated fitting functions of\nthe form\n\\begin{equation}\n\\sigma^2(R) = A R^2 + B R\\ln(R) + C R,\n\\end{equation}\nwhere $A = S(k=0)$ and $B, C>0$. Note that in the plots, the\nwindow size $R$ is normalized by the corresponding\nnearest-neighbor distance $d_0$ for each species. 
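The window-sampling estimate of $\sigma^2(R)$ described above can be sketched as follows. This is a minimal Python illustration of the procedure (random circular windows placed so as to avoid the system boundary, as in the text); the square box geometry and the default window count are our assumptions.

```python
import numpy as np

def number_variance(points, L, R, n_windows=2500, rng=None):
    """Estimate sigma^2(R): variance of the number of points inside a
    randomly placed circular window of radius R.

    points: (N, 2) coordinates in an L x L box.  Window centers are drawn
    uniformly at least R away from every edge, so windows never overlap
    the boundary (the text uses R_max = L/2 for the largest window).
    """
    rng = np.random.default_rng(rng)
    centers = rng.uniform(R, L - R, size=(n_windows, 2))
    counts = np.empty(n_windows)
    for i, c in enumerate(centers):
        d2 = np.sum((points - c) ** 2, axis=1)
        counts[i] = np.count_nonzero(d2 <= R * R)
    return counts.var()
```

For a Poisson pattern at density $\rho$ this estimate grows like the window area, $\sigma^2(R) \approx \rho \pi R^2$, whereas for the (effectively hyperuniform) photoreceptor patterns the area term is strongly suppressed and the $R\ln R$ and $R$ terms dominate.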
Also shown in\neach plot is the corresponding ``surface term'' $C R$ for purposes\nof comparison. The numerical values of the fitting parameters for\nboth the overall pattern and the individual species are given in\nTable \\ref{tab_1}. It can be clearly seen that the values of the\nparameter $A$ are several orders of magnitude smaller than the\nother two parameters, indicating that the associated patterns are\neffectively hyperuniform. These values are also consistent with\nthe numerical values of $S(k=0)$ obtained by directly fitting\n$S(k)$ for small $k$ values \\cite{footnote0}.\n\n\\begin{table*}[h]\n\\caption{The numerical values of the fitting parameters for both\nthe overall pattern and the individual species.}\n\\begin{tabular}{c|@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c}\n\\hline\n& Violet & Blue & Green & Red & Double & Overall \\\\\n\\hline $A$ & $2.53\\times 10^{-4}$ & $9.24\\times 10^{-4}$ &\n$1.07\\times 10^{-3}$ & $1.77\\times 10^{-3}$ & $4.46\\times 10^{-3}$\n& $1.93\\times 10^{-3}$ \\\\\n$B$ & 0.203 & 0.198 & 0.169 & 0.146 & 0.122 & 0.127\\\\\n$C$ & 1.22 & 1.14 & 1.03 & 1.09 & 1.17 & 1.06 \\\\\n\\hline\n\\end{tabular}\n\\label{tab_1}\n\\end{table*}\n\nThe fact that the photoreceptor patterns display both overall\nhyperuniformity and homotypic hyperuniformity implies that if any\nsubpopulation of the individual species is removed from the\noverall population, the remaining pattern is still hyperuniform.\nWe term such patterns {\\it multi-hyperuniform} because distinct\nmultiple subsets of the overall point pattern are themselves\nhyperuniform. These are highly unusual and unique structural\nattributes. Until now, the property of \\textit{overall}\nhyperuniformity was identified only in a special subset of\ndisordered physical systems \\cite{ref15, ref16, ref18, berthier,\nweeks, aleksPRL, jiaoPRE, helium, plasma,universe, fermion,\nref17}. 
The chicken photoreceptor patterns provide the first\nexample of a disordered hyperuniform biological system. In\naddition, the photoreceptor patterns possess quasi-long-range\n(QLR) correlations as indicated by the linear small-$k$ behavior\nin $S(k)$. We will elaborate on these points in Sec. V.\n\n\subsection{Pair Correlation Functions}\n\n\begin{figure*}\n\begin{center}\n\includegraphics[height=10cm,keepaspectratio]{fig6.eps} \\\n\end{center}\n\caption{Pair correlation functions $g_2(r)$ of the experimentally\nobtained point configurations representing the spatial\narrangements of chicken cone photoreceptors. The experimental data\nwere obtained by averaging 14 independent patterns. The distance\nis rescaled by the average nearest neighbor distance $d_n$ in the\nsystem.} \label{fig_g2}\n\end{figure*}\n\nWe find that each cell is associated with an effective exclusion\nregion (i.e., an area in 2D) with respect to any other cells,\nregardless of the cell types. The size of these exclusion regions\nroughly corresponds to the size of the cells themselves\n\cite{ref12}. In addition, cells belonging to the same subtype\n(i.e., like-cells) are found to be mutually separated from one\nanother almost as far as possible, leading to a larger effective\nexclusion region associated with like-cells of each species. The\nexclusion effects are quantitatively captured by the associated\npair-correlation functions (Fig. \ref{fig_g2}). The hard-core\nexclusion effect is manifested in $g_2(r)$ as an interval of $r$\nfor which $g_2(r) = 0$ (i.e., an ``exclusion gap''), and $g_2(r)$\napproaches its large-$r$ asymptotic value of unity very quickly,\nindicating the absence of any long-range spatial ordering.
This is\nto be contrasted with ordered systems, such as crystals, whose\npair correlation functions are composed of individual Dirac delta\nfunctions at specific $r$ values.\n\n\subsection{Order Metrics}\n\n\begin{table*}[h]\n\caption{Bond-orientational and translational order metrics, $q_6$\nand $T$, respectively, of the chicken photoreceptor patterns. The\nexperimental data were obtained by averaging 14 independent\npatterns.}\n\begin{tabular}{@{\vrule height 10.5pt depth4pt width0pt}c|c|c}\n\hline\n~Species~ & ~$q_6$~ & ~$T$~ \\\n\hline\nViolet & ~0.150~ & ~0.304~ \\\nBlue & ~0.158~ & ~0.411~ \\\nGreen & ~0.130~ & ~0.278~ \\\nRed & ~0.147~ & ~0.254~ \\\nDouble & ~0.184~ & ~0.390~ \\\nAll & ~0.058~ & ~0.096~ \\\n\hline\n\end{tabular}\n\label{tab_2}\n\end{table*}\n\n\nA bond-orientational order metric $q_6$ \cite{ref25} and a\ntranslational order metric $T$ \cite{ref26} were used next to\nquantify the degree of spatial regularity in the photoreceptor\npatterns (see Tab. \ref{tab_2}), each of which is maximized by\nthe triangular lattice and minimized by a spatially uncorrelated\npoint pattern. Interestingly, the $q_6$ and $T$ values for the\ntotal population are close to the corresponding values for\npolydisperse hard-disk packings we obtained, implying that the local\ncell-exclusion effect plays a primary role in determining the\noverall pattern. In contrast, the higher $q_6$ and $T$ values for\nindividual cell species suggest that like-cells interact with one\nanother on a length scale larger than the size of a single cell,\nwhich tends to increase the degree of order in the arrangements of\nlike-cells.\n\nFrom a functional point of view, photoreceptor cells of a given\ntype maximize their sampling efficiency when arranged on an\nordered triangular lattice, as in the case of the compound eye of\ninsects \cite{ref5, ref6}.
Importantly, the triangular lattice has\nbeen shown to be the most hyperuniform pattern \cite{ref13}, i.e.,\nit minimizes the large-scale density fluctuations among all 2D\npatterns. However, this most hyperuniform pattern may not be\nachieved if other constraints (e.g., cell size polydispersity)\nare operative. We therefore hypothesize that the disordered\nhyperuniformity of avian photoreceptor patterns represents a\ncompromise between the tendency of the individual cell types to\nmaximize their spatial regularity and the countervailing effects\nof packing heterotypic cell types within a single epithelium,\nwhich inhibits the spatial regularity of the individual cell\ntypes. In other words, the avian photoreceptors are driven to\nachieve the most ``uniform'' spatial distribution subject to\nheterotypic cell packing constraints.\n\n\n\section{Computational Model That Yields Multi-Hyperuniform Patterns}\n\nOur initial attempt to model the avian photoreceptor cell patterns\nemployed classic packing models of polydisperse hard disks that\nare driven to their ``jammed states'' \cite{ref27}. However, these\nmodels failed to generate patterns with multi-hyperuniformity. Such standard\njamming models, which involve interactions on a single length scale, are\ninsufficient to represent the two competing effects leading to the\nphotoreceptor patterns; this motivated us to develop the unique\nmultiscale packing model described below.\n\n\begin{figure*}\n\begin{center}\n\includegraphics[width=10.5cm,keepaspectratio]{fig7.eps}\n\end{center} \caption{Illustration of the hard-core and soft-core\ninteractions in a two-species system containing black and red\ncells. The left panel shows the exclusion regions (circular disks\nwith two distinct sizes) associated with the two types of cells,\nwhose sizes are proportional to the actual sizes of the cells. The black\ncells have a larger exclusion region than the red cells.
The\nmiddle panel illustrates the soft-core repulsive interaction\n(large concentric overlapping circles of the solid black disks)\nbetween the black cells. Such a repulsive interaction will drive\nthe black cells to arrange themselves in a perfect triangular\nlattice in the absence of other species. The right panel\nillustrates the soft-core repulsive interaction (large concentric\noverlapping circles of the solid red disks) between the red\ncells.} \label{fig_packing}\n\end{figure*}\n\nIn the experimental data representing the spatial arrangements of\nchicken cone photoreceptors, each cell is represented by a point.\nWe refer to these points as ``cell centers'', although they may\nnot correspond to the actual geometrical centers of the cells.\n\nTo go beyond a simple hard-core interaction, we consider two\ntypes of effective cell-cell interactions: isotropic short-range\nhard-core repulsions between any pair of cells and isotropic\nlong-range soft-core repulsions between pairs of like-cells (i.e.,\ncells of the same subtype). The multiscale nature of the model\nresults from the multiple length scales involved in these\ninteractions for different species, as we discuss now. The\nstrength of the hard-core repulsion is characterized by the radius\n$R^i_h$ of a hard-disk exclusion region associated with a cell\ntype $i$. This interaction imposes a nonoverlap constraint such\nthat the distance between cells of types $i$ and $j$ cannot be smaller\nthan $(R^i_h + R^j_h)$, which mimics the physical cell packing\nconstraint. In this regard, $R^i_h$ will also be referred to as\nthe radius of a cell $i$ in the ensuing discussions. The relative\nmagnitudes of $R^i_h$ are estimated from an electron micrograph\nshowing photoreceptor cell packing at the level of the inner\nsegment (see discussion below) \cite{ref11}.
The characteristic\nradius $R_s$ of the soft-core repulsion is associated with the\nmean nearest-neighbor distance of the cells of the same type.\nSpecifically, the pair potential between two like-cells is given\nby\n\begin{equation}\n\label{potential} E(r) =\n\left\{{\begin{array}{c@{\hspace{0.8cm}}c}\n\displaystyle{\frac{\alpha}{\beta+1}}(2R_s-r)^{\beta+1} & r\le\n2R_s,\n\\\\ 0 & r>2R_s, \end{array}} \right .\n\end{equation}\nwhere the parameters $\alpha>0$ and $\beta>0$ set the scale of the\ninteraction energy \cite{footnote2}. In our simulations, we\nrequire that the value of $R_s$ be uniquely determined by the\nassociated cell number density $\rho$, i.e., $R_s =\n\frac{1}{2}\sqrt{2\/(\sqrt{3}\rho)}$. This implies that a system\ncomposed of cells of the same type (i.e., a single-component\nsystem) interacting via a pair potential given by Eq.\n\eqref{potential} at number density $\rho$ (i.e., the number of\ncells per unit area) possesses the triangular-lattice ground\nstate, i.e., an arrangement associated with a minimal total energy\n(the sum of the interaction energies over all pairs of\nlike-cells). In other words, when the total energy in a\nsingle-component system is reduced sufficiently slowly to its minimal\nvalue (e.g., zero) from an arbitrary initial\nconfiguration, the cells will reorganize themselves into a\ntriangular-lattice arrangement.\n\n\n\nWhen the system contains multiple species, the hard-core and\nsoft-core interactions represent two competing effects in\ndetermining the packing arrangement of the cells; see Fig.\n\ref{fig_packing}. Specifically, the polydisperse hard-disk\nexclusion regions induce geometrical frustration in the packing,\ni.e., in this five-component system, it is not possible for a\nsubset of disks of the same size, surrounded by disks of\ndifferent sizes, to be arranged on a perfect triangular lattice.
On\nthe other hand, the long-range soft interaction between like\nspecies tends to drive the cells of the same type to arrange\nthemselves on a perfect triangular lattice. Note that although the\nrelative magnitudes of $R^i_h$ for different species (i.e., the\nratio between any two $R^i_h$) are fixed, the actual values of\n$R^i_h$ are variable and used as a tuning parameter in our model.\nAs stated above, the ratios between $R^i_h$ are estimated from a\npreviously published study \cite{ref11}. Specifically, the\nrelative sizes of the violet, blue, green, red and double species\nare 1.00, 1.19, 1.13, 1.06 and 1.50, respectively. Given the\nnumber of cells of each species, the values of $R^i_h$ can be\nuniquely determined from the packing fraction $\phi$ of the cells\n(i.e., the fraction of space covered by the cells) and vice versa,\n\begin{equation}\n\label{eq_phi}\n\phi = \displaystyle{\frac{1}{A}\sum_i N_i \pi (R^i_h)^2},\n\end{equation}\nwhere $N_i$ is the number of cells of species $i$ and $A$ is the\narea of the system.\n\nOur Monte Carlo algorithm, which involves iterating ``growth'' and ``relaxation'' steps,\nworks as follows:\n\n\begin{itemize}\n\n\item{(1) Initialization. At the beginning of the simulation, cell centers of each species are generated in a\nsimulation box using the random-sequential-addition (RSA) process\n\cite{ref27}. Specifically, for each species $i$, $N_i$ cell\ncenters are randomly generated such that these cell centers are\nmutually separated by a minimal distance $\mu R_s$ $(0<\mu<1)$. In addition,\nthe newly added cell cannot overlap any existing cells in the box\n(as determined by the hard-core radii $R^i_h$), regardless of cell types. The initial covering fraction $\phi_I$\nassociated with the hard-core exclusion regions is determined by\n$R^i_h$ via Eq. (\ref{eq_phi}), and is about $80\%$ of the RSA saturation density \cite{ref27}.}\n\n\n\item{(2) Growth step.
At each stage $n$, each cell is allowed to move in a random direction\nby a random distance up to a prescribed maximum ($\sim 0.25 R^i_h$), such that no\npairs of cells overlap. After a certain number ($\approx$1,000) of\nsuch random movements for each cell, the radius $R^i_h$ of each\ncell is increased by a small amount such that the size ratios of\nthe cells remain the same. This leads to an increase of the\npacking fraction $\phi_n$ at this stage by an amount of about $1\% - 3\%$.\nNote that in this ``growth'' step, the long-range\nsoft interactions between the like-cells are turned off.}\n\n\item{(3) Relaxation step. At the end of the ``growth'' step,\nthe soft interactions are turned on, and the cells are\nallowed to relax from their current positions to reduce the total\nsystem energy subject to the nonoverlap condition. The steepest\ndescent method is used to drive the system to the closest local\nenergy minimum (i.e., the inherent structure \cite{ref27}) associated with the starting configuration. This is\nreferred to as the ``relaxation'' process.}\n\n\item{(4) Statistics. After the relaxation process, structural statistics of the\nresulting configuration of cell centers are obtained and compared\nto the corresponding experimental data. To ensure that the simulations\nmatch the data for the pair statistics to the best extent possible,\nwe introduce a deviation metric $\Delta$.
Specifically, $\\Delta$ is the\nnormalized sums of the squared differences between the simulated and experimental $S(k)$\nand $g_2(r)$ associated with the simulated and actual patterns, i.e.,\n\\begin{equation}\n\\label{eq_delta}\n\\Delta = \\frac{1}{n_S}\\sum_i^{n_S}\\sum_r[g^{(i)}_2(r) - \\bar{g}^{(i)}_2(r)]^2\n +\\frac{1}{n_S}\\sum_i^{n_S}\\sum_k[S^{(i)}(k) - \\bar{S}^{(i)}(k)]^2,\n\\end{equation}\nwhere $n_S = 6$ is the total number of species including both the 5 individual species\nand the overall pattern, $g_2^{(i)}(r)$ and $S^{(i)}(k)$ are the simulated functions associated with\nspecies $i$, and $\\bar{g}_2^{(i)}(r)$ and $\\bar{S}^{(i)}(k)$ are the corresponding\nexperimentally measured functions.}\n\n\n\\item{(5) The growth and relaxation steps described in the bullet items (2) and (3),\nrespectively, are repeated until $\\phi_n$ reaches a prescribed\nvalue $\\phi_F$. Specifically, the configuration obtained by relaxation at stage $n$ is used\nas the starting point for the growth step at stage $n+1$. The best simulated pattern (i.e., that with the smallest\ndeviation metric $\\Delta_{min}$) and the associated $\\phi^*$ value are\nthen identified.}\n\n\\end{itemize}\n\nAt a given packing fraction $\\phi$ (or equivalently a set of\n$R^i_h$), the polydispersity of the exclusion regions associated\nwith different species and the resulting nonoverlap constraints\nfrustrate the spatial order in the system. For example, the\nlong-range soft interaction drives a single-species system to the\ntriangular-lattice arrangement in the absence of other species. On\nthe other hand, for any $\\phi >0$, it is impossible for cells of a\nparticular species, surrounded by cells of other species to sit on\na perfect triangular lattice \\cite{ref12}. 
Therefore, the\ndisordered point configurations obtained by minimizing the energy\nassociated with the soft repulsive interactions subject to the\nhard-core packing constraints are the local energy minima (i.e., inherent structures)\nof the system. The extent to which the structure deviates from that of a\nperfect triangular lattice (i.e., global energy minimum) is\ndetermined by the parameter $\\phi$ (or, equivalently, $R^i_h$).\nTherefore, by tuning this parameter in our algorithm, one can, in\nprinciple, generate a continuous spectrum of configurations of\ncell centers with varying degrees of spatial order (see Appendix).\nNote that in the limit $R^i_h \\rightarrow 0$, triangular-lattice\narrangements for individual species are accessible again and the\nresulting configuration is a superposition of five\ntriangular-lattice arrangements of the cell centers.\n\n\nWe note that the order of the aforementioned growth and relaxation\nsteps can be interchanged without affecting the final\nconfiguration. In addition, instead of starting from a disordered\nRSA arrangement of cell centers as described above, we have also\nused ordered initial configurations (i.e., superposition of\ntriangular-lattice arrangements), leading to the same\nconfiguration at a given number density $\\rho$. However, the\ninitial packing density $\\phi_I$ associated with ordered initial\nconfigurations is very low and thus, it is computationally\ninefficient to start from such initial configurations. 
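The pair potential of Eq. (\ref{potential}) and the iterated growth and relaxation steps can be sketched in strongly simplified form. The Python sketch below is our own illustration, not the authors' code: it keeps a single species in a periodic box, uses placeholder parameter values, and inflates the radii without re-checking feasibility, whereas the actual model evolves five species with fixed size ratios.

```python
import numpy as np

def soft_core_energy(r, R_s, alpha=1.0, beta=2.0):
    """E(r) of Eq. (potential): alpha/(beta+1) * (2 R_s - r)^(beta+1)
    for r <= 2 R_s, and zero beyond the cutoff."""
    core = np.clip(2.0 * R_s - np.asarray(r, dtype=float), 0.0, None)
    return alpha / (beta + 1.0) * core ** (beta + 1.0)

def R_s_from_density(rho):
    """R_s = (1/2) sqrt(2 / (sqrt(3) rho)); 2 R_s is then the lattice constant
    of the triangular lattice at number density rho, so the single-species
    ground state has zero energy."""
    return 0.5 * np.sqrt(2.0 / (np.sqrt(3.0) * rho))

def grow_and_relax(pts, radii, L, phi_F, grow=1.02, moves=200, sweeps=5,
                   lr=0.01, R_s=1.0, alpha=1.0, beta=2.0, rng=None):
    """Iterate growth and relaxation until the packing fraction reaches phi_F."""
    rng = np.random.default_rng(rng)
    n = len(pts)

    def overlaps(i, p):
        d = np.linalg.norm((pts - p + L / 2) % L - L / 2, axis=1)  # periodic
        d[i] = np.inf
        return np.any(d < radii[i] + radii)

    while np.pi * np.sum(radii ** 2) / L ** 2 < phi_F:
        # Growth step: random non-overlapping moves, then inflate all radii.
        # (A careful implementation would ensure the inflation stays feasible.)
        for _ in range(moves):
            i = rng.integers(n)
            trial = (pts[i] + rng.uniform(-0.25, 0.25, 2) * radii[i]) % L
            if not overlaps(i, trial):
                pts[i] = trial
        radii *= grow
        # Relaxation step: steepest descent on the soft-core energy, rejecting
        # steps that violate the non-overlap constraint.  Since
        # dE/dr = -alpha (2 R_s - r)^beta, each cell is pushed away from its
        # soft-core neighbors.
        for _ in range(sweeps):
            for i in range(n):
                diff = (pts - pts[i] + L / 2) % L - L / 2
                d = np.linalg.norm(diff, axis=1)
                d[i] = np.inf
                near = d < 2.0 * R_s
                if not np.any(near):
                    continue
                f = alpha * (2.0 * R_s - d[near]) ** beta
                step = -lr * np.sum(f[:, None] * diff[near] / d[near, None], axis=0)
                trial = (pts[i] + step) % L
                if not overlaps(i, trial):
                    pts[i] = trial
    return pts, radii
```

The cutoff $2R_s$ in `soft_core_energy` equals the nearest-neighbor distance of the triangular lattice at density $\rho$, which is why the energy of that lattice is exactly zero and any compressed configuration has positive energy.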
By tuning\nthe ``strength'' of the hard-core interactions via the packing\nfraction associated with the exclusion regions, our multiscale\npacking model enables us to produce disordered point configurations\nwith various degrees of hyperuniformity, examples of which are\nprovided in the Appendix for a three-component system for\nillustrative purposes.\n\n\n\subsection{Modeling Avian Photoreceptor System via Multiscale\nParticle Packing}\n\n\begin{figure*}\n\begin{center}\n\includegraphics[width=11.5cm,keepaspectratio]{fig8.eps} \\\n\end{center}\n\caption{Left panel: The bond-orientational order metric $q_6$ of\nthe individual species as a function of the packing fraction\n$\phi$ associated with the exclusion regions. Right panel: The\ntranslational order metric $T$ of the individual species as a\nfunction of the packing fraction $\phi$ associated with the\nexclusion regions.} \label{fig_order}\n\end{figure*}\n\nBy using the multiscale packing model, we were able to accurately\nreproduce the unique features of the native avian photoreceptors.\nWe modeled the aforementioned two competing effects as two types\nof effective interactions between the cells: a long-range\nsoft-core repulsion between the cells of the same type (that would\nlead to an ordered triangular-lattice arrangement in the absence\nof packing constraints) and a short-range hard-core repulsion\n(with polydisperse exclusion regions associated with different\ncell species) between any pair of cells that frustrates spatial\nordering in the system.
Given the sizes of the hard-core exclusion\nregions associated with each cell species (or equivalently the\npacking fraction $\\phi$ of the exclusion regions), the system is\nallowed to relax to a state that is a local energy minimum for the\nlong-range soft-core repulsive interactions between like-species.\nSuch long-range interactions would drive each of the five cell\nspecies in the multicomponent system to the associated\ntriangular-lattice arrangement (global energy minimum) in the\nabsence of the hard-core repulsions. As we increase the strength\nof the hard-core repulsions by increasing $\\phi$, the degree of\norder in the system, which is quantified by the order metrics\n$q_6$ and $T$, decreases (see Fig. \\ref{fig_order}). It is\nimportant to emphasize that these disordered hyperuniform avian\nphotoreceptor patterns are {\\it not} simple random perturbations\nof a triangular-lattice pattern. Statistically equivalent\ndisordered hyperuniform patterns have also been obtained from\ndisordered initial configurations (e.g., RSA packings). Thus, the\nunique structural features in these patterns are not attributed to\nparticular initial configurations but rather arise from the two\ncompeting effects, which are well captured by our multiscale\npacking model.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=10.5cm,keepaspectratio]{fig9.eps}\\\\\n\\end{center}\n\\caption{Simulated point configurations representing the spatial\narrangements of chicken cone photoreceptors. Upper panels: The\nconfigurations shown from left to right respectively correspond to\nviolet, blue, green species. Lower panels: The configurations\nshown from left to right respectively correspond to red, double\nspecies and the overall pattern. 
The simulated patterns for\nindividual photoreceptor species are virtually indistinguishable\nfrom the actual patterns obtained from experimental measurements.}\n\\label{fig_simupacking}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=10cm,keepaspectratio]{fig10.eps} \\\\\n\\end{center}\n\\caption{Comparison of the structure factors $S(k)$ of the\nexperimentally obtained and simulated point configurations\nrepresenting the spatial arrangements of chicken cone\nphotoreceptors. The simulation data were obtained by averaging 50\nindependent configurations.} \\label{fig_simuSk}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=10cm,keepaspectratio]{fig11.eps} \\\\\n\\end{center}\n\\caption{Comparison of the pair correlation functions $g_2(r)$ of\nthe experimentally obtained and simulated point configurations\nrepresenting the spatial arrangements of chicken cone\nphotoreceptors. The simulation data were obtained by averaging 50\nindependent configurations. The distance is rescaled by the\naverage nearest neighbor distance $d_n$ in the system.}\n\\label{fig_simug2}\n\\end{figure*}\n\n\n\\begin{table*}[h]\n\\caption{Comparison of the bond-orientational and translational\norder metrics, $q_6$ and $T$, of the experimental and simulated\npoint configurations. 
The simulation data were obtained by\naveraging 50 independent configurations.}\n\\begin{tabular}{@{\\vrule height 10.5pt depth4pt width0pt}c|c|c|c|c }\n\\hline\n& \\multicolumn{2}{|c|}{$q_6$} & \\multicolumn{2}{|c}{$T$} \\\\\n~Species~& ~Exp.~ & ~Sim.~ & ~Exp.~ & ~Sim.~ \\\\\n\\hline\nViolet & 0.150 & 0.148 & 0.304 & 0.327 \\\\\nBlue & 0.158 & 0.164 & 0.411 & 0.395 \\\\\nGreen & 0.130 & 0.134 & 0.278 & 0.266 \\\\\nRed & 0.147 & 0.149 & 0.254 & 0.263 \\\\\nDouble & 0.184 & 0.189 & 0.390 & 0.363\\\\\nAll & 0.058 & 0.063 & 0.096 & 0.108 \\\\\n\\hline\n\\end{tabular}\n\\label{tab_simu}\n\\end{table*}\n\n\n\nThe simulation box contains 2600 cell centers and the\nnumbers of violet, blue, green, red and double species are\nrespectively 210, 355, 530, 405, and 1100. The\nrelative sizes of the violet, blue, green, red and double species\nare 1.00, 1.19, 1.13, 1.06 and 1.50, respectively.\nThe initial packing fraction associated with the hard cores\nis $\\phi_I = 0.45$ and the simulation stops at $\\phi_F = 0.7$.\nAt $\\phi \\approx 0.58$, the resulting\nconfigurations (see Fig. \\ref{fig_simupacking}) are virtually\nindistinguishable from the actual photoreceptor patterns, as\nquantified using a variety of descriptors. Specifically, the\nassociated structure factors (see Fig. \\ref{fig_simuSk}) and pair\ncorrelation functions (see Fig. \\ref{fig_simug2}) match the\nexperimental data very well, as quantified by the minimum\ndeviation metric value of $\\Delta_{min} \\approx 0.4$ [c.f. Eq.(\\ref{eq_delta})].\nWe note that the major contributions to $\\Delta_{min}$ are the large\nfluctuations in the experimental data due to a limited number of samples.\n(The initial value of $\\Delta$ is roughly $3.16$.)\nThe order metrics $q_6$ and $T$ of the simulated pattern also match\nthose of the experimental data very well (see Tab. \\ref{tab_simu}).\nThis is a stringent test for the simulations to pass. 
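The structure-factor comparison above can be reproduced for any simulated point pattern. The following is a minimal sketch, assuming a periodic square box of side $L$ (function and variable names are illustrative, not taken from the simulation code used in this work):

```python
import numpy as np

def structure_factor(points, L, n_max=20):
    """Structure factor S(k) = |sum_j exp(-i k.r_j)|^2 / N of a 2-D point
    pattern in a periodic square box of side L, evaluated at the wave
    vectors k = (2*pi/L)*(n1, n2) allowed by the periodic boundary."""
    N = len(points)
    k_mag, Sk = [], []
    for n1 in range(-n_max, n_max + 1):
        for n2 in range(-n_max, n_max + 1):
            if n1 == 0 and n2 == 0:
                continue                                 # skip forward scattering
            k = (2.0 * np.pi / L) * np.array([n1, n2])
            rho_k = np.exp(-1j * points @ k).sum()       # collective density variable
            k_mag.append(np.hypot(*k))
            Sk.append(abs(rho_k) ** 2 / N)
    k_mag, Sk = np.array(k_mag), np.array(Sk)
    order = np.argsort(k_mag)                            # sort by |k| for angular averaging
    return k_mag[order], Sk[order]
```

Binning the sorted values by $|k|$ gives the angular average plotted in such comparisons; hyperuniformity corresponds to $S(k)\rightarrow 0$ as $|k|\rightarrow 0$.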
The success\nof the simulations strongly suggests that the disordered\nhyperuniform photoreceptor patterns indeed arise from the\ncompetition between cell packing constraints and the tendency to\nmaximize the degree of regularity for efficient light sampling,\nsuggesting that the individual photoreceptor types are as uniform\nas they can be, given the packing constraints within the\nphotoreceptor epithelium.\n\n\n\\section{Conclusions and Discussion}\n\nBy analyzing the chicken cone photoreceptor patterns using a\nvariety of sensitive microstructural descriptors arising in\nstatistical mechanics and particle-packing theory, we found that\nthese disordered patterns display both overall and homotypic\nhyperuniformity, i.e., the system is multi-hyperuniform. This\nsingular property implies that if any subset of the individual\nspecies is removed from the overall population, the remaining\npattern is still hyperuniform. Importantly, it is highly\nnontrivial to devise an algorithm that would remove a large\nfraction of the points from a disordered hyperuniform system while\nleaving the remaining point pattern hyperuniform, and yet Nature\nhas found such a design.\n\nUntil now, the property of \\textit{overall} hyperuniformity was\nidentified only in a special subset of disordered physical\nsystems, including ground-state liquid helium \\cite{helium},\none-component plasmas \\cite{plasma}, Harrison-Zeldovich power\nspectrum of the density fluctuations of the early Universe\n\\cite{universe}, fermionic ground states \\cite{fermion}, classical\ndisordered ground states \\cite{ref17}, and maximally random jammed\npackings of equal-sized hard particles \\cite{aleksPRL, jiaoPRE}.\nAll of these examples involve single-component systems. 
More\nrecently, disordered multicomponent physical systems such as\nmaximally random jammed (MRJ) hard-particle packings \\cite{ref16,\nref18, ref15} have been identified that possess an appropriately\ngeneralized hyperuniformity property ascertained from the local\nvolume fraction fluctuations. However, the multicomponent\navian photoreceptor pattern, which represents the first\nexample of a disordered hyperuniform system in a living organism,\nis singularly different from any of these hyperuniform physical\nsystems in that each individual species as well as the total\npopulation is hyperuniform, i.e., the avian patterns are\nmulti-hyperuniform. Although it is not very difficult to construct\nan overall hyperuniform system by superposing subsystems that are\nindividually hyperuniform, the reverse process (i.e., decomposing\na hyperuniform system into individually hyperuniform subsets) is\nhighly nontrivial. It will be of interest to identify other\ndisordered hyperuniform biological systems. It is likely that some\nother epithelial tissues and phyllotactic systems \\cite{ref27}\npossess such attributes. Interestingly, it has been shown that the\nlarge-scale number-density fluctuations associated with the\nmalignant cells in brain tumors are significantly suppressed,\nalthough the cell patterns in such brain tumors are not\nhyperuniform \\cite{plos}.\n\n\nIn addition, the photoreceptor patterns possess quasi-long-range\n(QLR) correlations as indicated by the linear small-$k$ behavior\nin $S(k)$. Such QLR correlations are also observed in the\nground-state liquid helium \\cite{helium}, the density fluctuations\nof the early Universe \\cite{universe}, fermionic ground states\n\\cite{fermion} and MRJ packings of hard particles \\cite{ref16,\nref18, ref15}. In the MRJ particle packings, it is believed that\nthe QLR correlations arise from the competition between the\nrequirement of jamming and maximal disorder in the system\n\\cite{ref16, ref18, ref15}. 
As we showed employing the unique\nmultiscale packing model, the multicomponent avian system that is\nboth homotypic and overall hyperuniform, i.e., multi-hyperuniform,\ncan result from two competing interactions between the\nphotoreceptors.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=10.5cm,keepaspectratio]{fig2.eps}\\\\\n\\end{center}\n\\caption{Left panel: A random-sequential-addition (RSA) packing of\nhard, identical circular disks in two-dimensions with a packing\nfraction $\\phi = 0.54$, which is close to the saturation state.\nRight panel: An equilibrium system of hard, identical disks at\n$\\phi = 0.54$. The fact that neither of these systems is\nhyperuniform, as discussed in the text, indicates that hard-core\nexclusion effects alone are not sufficient to induce\nhyperuniformity.} \\label{fig2_rsa}\n\\end{figure*}\n\nIt is noteworthy that while hard-core exclusion and high density\nin a disordered particle packing are necessary conditions to\nachieve a hyperuniform state, these are not sufficient conditions.\nFigure \\ref{fig2_rsa} shows a nonequilibrium\nrandom-sequential-addition (RSA) packing of hard circular disks in\ntwo-dimensions with a packing fraction $\\phi = 0.54$ (left panel),\nwhich is generated by randomly and sequentially placing hard disks\nin a domain without overlapping existing disks, until there is no\nroom for additional disks \\cite{rsa}. The right panel of Fig.\n\\ref{fig2_rsa} shows an equilibrium system of hard disks at $\\phi\n= 0.54$ (right panel). The structure factor values at $k=0$ for\nthe RSA and equilibrium systems are respectively given by $S(0)=\n0.059$ \\cite{rsa} and $S(0) = 0.063$ \\cite{salbook, newref43,\nfootnote}. 
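The RSA protocol just described is simple to implement. Below is a minimal sketch for identical hard disks in a periodic square box (parameter names are illustrative; a truly saturated packing requires running until no further trial insertion succeeds, which a fixed trial budget only approximates):

```python
import numpy as np

def rsa_disks(box, diameter, n_trials, seed=0):
    """Random sequential addition of identical hard disks: accept each
    uniformly random trial center only if it overlaps no disk placed
    earlier (periodic boundary conditions)."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(n_trials):
        trial = rng.uniform(0.0, box, size=2)
        ok = True
        for c in centers:
            d = np.abs(trial - c)
            d = np.minimum(d, box - d)       # minimum-image convention
            if d @ d < diameter ** 2:        # centers closer than one diameter
                ok = False
                break
        if ok:
            centers.append(trial)
    return np.array(centers)

# packing fraction of the resulting configuration:
# phi = len(centers) * np.pi * (diameter / 2) ** 2 / box ** 2
```

Near the two-dimensional saturation value ($\phi \approx 0.547$) nearly all trials are rejected, so production runs replace the linear scan over placed disks with a cell list.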
Although hard-core exclusion plays a central role in\nthese two distinct high-density packings, neither packing is\nhyperuniform, as indicated by the relatively large positive values\nof the corresponding $S(0)$.\n\n\n\n\nTo understand the origin of the unique spatial features of the\navian photoreceptor patterns, we have devised a unique multiscale\ncell packing model that suggests that photoreceptor types interact\nwith both short- and long-ranged repulsive forces and that the\nresultant competition between the types gives rise to the singular\ncell patterns. The fact that a disordered hyperuniform pattern\ncorresponds to a local optimum associated with the multiscale\npacking problem indicates that such a pattern may represent the\nmost uniform sampling arrangement attainable in the avian system,\ninstead of the theoretical optimal solution of a regular hexagonal\narray. Specifically, our studies show how fundamental physical\nconstraints can change the course of a biological optimization\nprocess. Although it is clear that physical cell packing\nconstraints are the likely cause of the short-range hard-core\nrepulsion, the origin of the effective longer-range soft-core\nrepulsion is less obvious. We hypothesize that repulsive forces of\nthis type occur during retinal development and may be secondary to\ncell-cell interactions during photoreceptor neurogenesis. However,\na comprehensive test of this hypothesis is beyond the scope of\nthis investigation, and therefore its resolution represents a\nfascinating avenue for future research.\n\n\nRecent studies have shown that disordered hyperuniform materials\ncan be created that possess unique optical properties, such as\nbeing ``stealthy'' (i.e., transparent to incident radiation at\ncertain wavelengths) \\cite{ref17}. 
Moreover, such disordered\nhyperuniform point patterns have been employed to design isotropic\ndisordered network materials that possess complete photonic band\ngaps (blocking all directions and polarizations of light)\ncomparable in size to those in photonic crystals \\cite{ref28,\nman13}. While the physics of these systems is not directly\nrelated to the avian photoreceptor patterns, such investigations\nand our present findings demonstrate that a class of disordered\nhyperuniform materials is endowed with novel photonic properties.\n\nBesides capturing the unusual structural features of photoreceptor\npatterns, our multiscale packing model represents a unique\nalgorithm that allows one to generate multi-hyperuniform\nmulticomponent systems with varying degrees of order by tuning the\npacking fraction $\\phi$ of the hard-core exclusion regions (see\nAppendix for additional examples). This knowledge could now be\nexploited to produce multi-hyperuniform disordered structures for\napplications in condensed matter physics and materials science. For example, it would be of\ninterest to explore whether colloidal systems can be synthesized\nto have such repulsive interactions in order to self-assemble\ninto the aforementioned unique disordered arrangements and to study\nthe resulting optical properties. It is noteworthy that it has\nalready been demonstrated that three-dimensional disordered hyperuniform\npolymer networks can be fabricated for photonic applications using direct\nlaser writing \\cite{polymer}.\n\n\n\n\n\n\n\\begin{acknowledgments}\nThe authors are grateful to Paul Steinhardt for useful\ndiscussions. Y. J. and S. T. were supported by the National Cancer\nInstitute under Award No. U54CA143803 and by the Division of\nMathematical Sciences at the National Science Foundation under\nAward No. DMS-1211087. J.C.C. was supported by NIH grants\n(EY018826, HG006346 and HG006790) and J.C.C., M.M.-H. 
and H.H.\nwere supported by a grant from the Human Frontier Science Program.\nH.H. also acknowledges the support of the German Research\nFoundation (DFG) within the Cluster of Excellence, ``Center for\nAdvancing Electronics Dresden''. This work was partially supported\nby a grant from the Simons Foundation (Grant No. 231015 to\nSalvatore Torquato).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzgatf b/data_all_eng_slimpj/shuffled/split2/finalzzgatf new file mode 100644 index 0000000000000000000000000000000000000000..b7ba99a0fd29957bc9ab500e4690b8b2ec698f9b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzgatf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe Ap star $\\gamma$ Equ (HD 201601, BS 8097) is one of the brightest\nobjects of this class, with the apparent luminosity $V=4.66$ mag. The exact \nspectral type of this object\nis A9p (SrCrEu subclass). The magnetic field of $\\gamma$ Equ has been\nstudied for more than 50 years, starting from October 1946 (see Babcock\n1958). The longitudinal magnetic field $B_e$ of this star does not exhibit\nperiodic variations in time scales typical of stellar rotation,\n$0.5 - 30$ days. Such a variability of the $B_e$ field was observed in most\nAp stars. The above effect is commonly interpreted as the result of\nstellar rotation (oblique dipole model).\n\nThe first measurements by Babcock (1958) showed that the value of the\nlongitudinal magnetic field $B_e$ of $\\gamma$ Equ was positive in 1946--52,\nand approached nine hundred G. From that time on the value of $B_e$ slowly\ndecreased and even changed sign in 1970\/71. One could interpret the magnetic \nbehavior of $\\gamma$ Equ either as secular variations, or variations\ncaused by extremely slow rotation. 
If the latter picture is correct,\nthen the corresponding magnetic and rotational periods are in the range\nfrom 72 to 110 years (Bonsack \\& Pilachowski 1974; Leroy et al. 1994; \nBychkov \\& Shtol' 1997; Scholz et al. 1997).\n\nThe behavior of the $B_e$ field in $\\gamma$ Equ was investigated by many\nauthors in the second half of the twentieth century. For this research\nwe compiled $B_e$ observations published by Bonsack \\& Pilachowski (1974),\nScholz (1975; 1979), Borra \\& Landstreet (1980), Zverko et al. (1989), \nMathys (1991), Bychkov et al. (1991), Bychkov \\& Shtol' (1997), Scholz et\nal. (1997), Mathys \\& Hubrig (1997), Hildebrandt et al. (2000),\nLeone \\& Kurtz (2003) and Hubrig et al. (2004).\n\nWe included in this paper our unpublished magnetic $B_e$ measurements which\nwere obtained during the past seven years. All the new magnetic observations\nshowed that the $B_e$ field in $\\gamma$ Equ, after its slow decrease, apparently\nreached its minimum in 1996--2002 and has actually started to increase. \n\nIn this paper we determined accurate parameters of the secular\nvariability of $\\gamma$ Equ: the period $P_{mag}$, the amplitude and the\ntime of zero phase for $B_e$ variations, which were approximated by a sine\nwave. We support the hypothesis that the long-term $B_e$ \nvariation in $\\gamma$ Equ is a periodic feature. The possible origin of this\nvariation cannot be uniquely determined; see the discussion in \nSection~\\ref{sec:discussion} of this paper.\n\n\\section{ Observations and data processing }\n\nWe have performed spectropolarimetric observations of Zeeman line splitting\nfor $\\gamma$ Equ at the Coude focus of the 1-m optical telescope (Special\nAstrophysical Observatory, Russian Academy of Sciences).\nZeeman spectra were obtained with the echelle spectrograph GECS (Musaev\n1996). We have put the achromatic analyser of circularly polarised light\nin front of the spectrometer slit. 
Images of the Zeeman echelle spectra\nwere recorded from CCD detectors in standard FITS format.\nFinal reduction of the archived spectra was performed with the standard\nMIDAS software (Monin 1999).\n\nEffects of instrumental polarisation on $B_e$ measurements obtained with\nthis instrument were investigated by Bychkov et al. (1998, 2000).\n\nTable~\\ref{tab:saores} presents the full set of our $B_e$ measurements\nof $\\gamma$ Equ (total 33 $B_e$ points). \nThe meaning of the first 3 columns is obvious. The fourth column\ngives the number $N$ of spectral lines which were\nused for the measurement of $B_e$ for a given exposure. \nThe time length $\\Delta t$ of the exposure (in min) is given in the last column\nof Table~\\ref{tab:saores}.\n\nA single $B_e$ value listed in \nTable~\\ref{tab:saores} was obtained by averaging the $B_e$ measurements\nfrom 500--1300 spectral lines. The standard deviation $\\sigma_{B_e}$\nfor the resulting value of $B_e$ was computed in the standard manner as\nthe error of an arithmetic mean value.\n\nErrors $\\sigma_{B_e}$ determined in the above way reached rather low values\nin several observations listed in Table~\\ref{tab:saores}. In 2005\/2006\nwe plan to verify the reality of such $\\sigma_{B_e}$ by a special program\nof $B_e$ observations. For now we accept these errors {\\sl bona fide}\nand note the following properties of our $B_e$ measurements.\n\nThe referee pointed out that a few pairs of $B_e$ measurements of one night\nin Table~\\ref{tab:saores} differ by only a few G, which is substantially\nless than the corresponding standard deviation $\\sigma_{B_e}$. 
\nWe can explain this only as a purely random effect, and do not see\nany reason for it either in the acquisition of observational data or\ntheir reduction.\n\nSecondly, series of measurements taken within a few nights generally \nshow a scatter of the order of 100 G, which is much higher than the \nstandard errors $\\sigma_{B_e}$ in Table~\\ref{tab:saores}. The latter are \nof the order of $20-30$ G, and such a discrepancy suggests that our\nstandard deviations are systematically underestimated, and are in fact\nof the order of 100 G. On the other hand,\nsuch a scatter of $\\approx$ 100 G is not inconsistent with the\nshort-term variability of light and the longitudinal magnetic field\n$B_e$ in $\\gamma$ Equ on time scales of minutes or longer. \n\nLeone \\& Kurtz (2003) recently discovered periodic variations of the\nlongitudinal magnetic field $B_e$ in $\\gamma$ Equ over the pulsation\nperiod of this star, $P_{puls} = 12.1$ min. The estimated amplitude is\n$\\Delta B_e = 240$ G for this period; therefore, these variations \ncan at least contribute to the scatter of our $B_e$ points collected\nin Table~\\ref{tab:saores}. \n\nA study of the rapid periodic $B_e$ variations on a time scale of minutes \nwas also presented in Bychkov et al. (2005b) for $\\gamma$ Equ. They\ndid not find conclusive evidence of such variations above the noise\nlevel at $\\approx 240$ G. \n\nWe also performed spectral analysis of the full 298-point $B_e$ time\nseries from the years 1946--2004. We concluded that there are no short-period\nfield variations with periods above ca. 1 day, but were not able to extend\nour analysis to shorter periods; see Section 4 of this paper.\n\n\n{\n\\newdimen\\digitwidth\n\\setbox0=\\hbox{\\rm0}\n\\digitwidth=\\wd0\n\\catcode`?=\\active\n\\def?{\\kern\\digitwidth}\n\n\\newcommand{\\displaystyle}{\\displaystyle}\n\\begin{table}\n\\caption{Measurements of $B_e$ in $\\gamma$ Equ (HD 201601). 
}\n\\label{tab:saores}\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{tabular}{|l|r|c|r|c|}\n\\noalign{\\vskip 2 mm}\n\\hline\nJD \\hskip1mm 2400000.+ & $B_e$ (G) & $\\sigma_{B_e}$ (G)&\\hskip-2mm $N$ &\n\\hskip2mm $\\Delta t$ (min)\\\\\n\\hline\n49648.323 & --1045 & 21 & 706 & 30\\\\[-1.0pt]\n49648.345 & --1315 & 26 & 755 & 30\\\\[-1.0pt]\n49649.229 & --1463 & 37 & 576 & 30\\\\[-1.0pt]\n49649.257 & --1159 & 31 & 656 & 30\\\\[-1.0pt]\n49932.424 & --1317 & 26 & 691 & 60\\\\[-1.0pt]\n49932.469 & --1317 & 26 & 675 & 60\\\\[-1.0pt]\n49933.460 & --1316 & 26 & 700 & 60\\\\[-1.0pt]\n49933.507 & --1317 & 29 & 704 & 60\\\\[-1.0pt]\n50023.158 & --1291 & 22 & 501 & 40\\\\[-1.0pt]\n50023.189 & --1380 & 23 & 650 & 40\\\\[-1.0pt]\n50066.128 & --1539 & 26 & 718 & 40\\\\[-1.0pt]\n50066.157 & --1611 & 62 & 532 & 40\\\\[-1.0pt]\n51533.1229 & --1014 & 16 & 966 & 30\\\\[-1.0pt]\n51533.1451 & --1011 & 14 & 701 & 30\\\\[-1.0pt]\n51535.1847 & -- 902 & 16 & 955 & 40\\\\[-1.0pt]\n51535.2153 & -- 901 & 19 & 855 & 40\\\\[-1.0pt]\n51536.1069 & -- 670 & 18 & 821 & 30\\\\[-1.0pt]\n51536.1285 & -- 642 & 24 & 508 & 30\\\\[-1.0pt]\n51888.166 & --1069 & 18 & 847 & 30\\\\[-1.0pt]\n51888.190 & --1092 & 20 &1353 & 30\\\\[-1.0pt]\n51889.103 & -- 890 & 20 & 847 & 30\\\\[-1.0pt]\n51889.126 & -- 865 & 20 & 817 & 30\\\\[-1.0pt]\n51890.142 & -- 742 & 21 & 770 & 30\\\\[-1.0pt]\n52163.3000 & -- 845 & 19 & 833 & 30\\\\[-1.0pt]\n52163.3201 & -- 855 & 19 & 732 & 30\\\\[-1.0pt]\n52164.2861 & -- 956 & 16 & 947 & 30\\\\[-1.0pt]\n52164.3076 & -- 967 & 16 & 914 & 30\\\\[-1.0pt]\n52165.2812 & --1061 & 17 & 835 & 40\\\\[-1.0pt]\n52165.3111 & --1029 & 16 & 991 & 40\\\\[-1.0pt]\n52186.2229 & -- 922 & 17 &1085 & 30\\\\[-1.0pt]\n52186.2451 & -- 942 & 17 &1055 & 30\\\\[-1.0pt]\n52187.2673 & -- 882 & 16 &1072 & 30\\\\[-1.0pt]\n52188.2395 & -- 908 & 18 & 838 & 30\\\\\n\\hline\n\\end{tabular}\n\\end{table} }\n\n\\section{ Magnetic period of $\\gamma$ Equ }\n\\label{sec:magnet}\n\nMagnetic observations presented in 
Table~\\ref{tab:saores} represent \ncompletely new data. They cover a time span of ca. 7 years \nand include the\nphase when the effective magnetic field $B_e$ in $\\gamma$ Equ apparently\nreached its minimum value, and then the slow decrease of $B_e$ observed\nover the recent $\\approx$ 50 years was reversed. This fact is of \nextraordinary importance, because it allows a fairly accurate \ndetermination of the magnetic period and the amplitude of $B_e$ variations\nin $\\gamma$ Equ.\n\nWe have compiled the set of 298 observations of the $B_e$ field in\n$\\gamma$ Equ scattered in the literature, and appended our measurements.\nThese data cover the time period 1946--2004 (58 years). They are\ndisplayed in Fig.~\\ref{fig:long}. \nNote that the $B_e$ measurements obtained by Babcock (1958) apparently\ncover the phase of the maximum longitudinal magnetic field in $\\gamma$ Equ.\n\nThe set of $B_e$ measurements analysed in this paper is rather \nheterogeneous. The data have been obtained by several different observers\nover a long time period using various instruments and techniques, and it\nis impossible to estimate or test credibly their systematic and random errors,\nparticularly for the earliest observations of the longitudinal magnetic\nfield in $\\gamma$ Equ.\n\nTherefore, we arbitrarily assumed that systematic errors of the $B_e$\nobservations are equal to zero. In other words, all the $B_e$ points for\n$\\gamma$ Equ which were found in the literature are fully compatible.\n\nRandom errors of individual $B_e$ points were frequently given in the\nsource papers, and are denoted by vertical bars in Fig.~\\ref{fig:long}.\nThese errors\nwere not directly available for the earliest \nphotographic measurements by H.W. Babcock (1958) and Bonsack \\& Pilachowski \n(1974). We adopted here an estimated error for Babcock's data equal to\n238 G, and 151 G for Bonsack \\& Pilachowski. 
These numbers were obtained \nin our thorough reanalysis of the earliest papers dealing with measurements\nof stellar magnetic fields, cf. Section 3.1 in Bychkov et al. (2003).\n\nDetermination of the period and other parameters of the apparent magnetic\nvariability for $\\gamma$ Equ was performed in the following manner.\nAssuming that the run of the observed longitudinal field $B_e$ with time\n$T$ can be approximated by a sine wave \n\\begin{equation}\n B_e (T)=B_0+B_1 \\sin\\left[{2\\pi (T-T_0)\\over P}-{\\pi\\over 2}\\right] \\, ,\n\\label{equ:sigma1}\n\\end{equation} \nwe determined all four parameters: the period $P$, the average field $B_0$,\nthe amplitude $B_1$ and the time of zero phase $T_0$\nusing the iterative technique of nonlinear fitting.\n\nStarting values of $P$, $B_0$, $B_1$, $T_0$ and their standard deviations\nwere found by our computer code for the nonlinear least squares method\n(Bychkov et al. 2003). \nThe final values and their errors were then computed with the public domain\ncode ``nlfit.f'', which is designed for curve and surface fitting with the \nLevenberg-Marquardt procedure ({\\sc ODRPACK v. 2.01} subroutines). The code\nis available at the site {\\tt www.netlib.org}.\n\n\nFitting of a sine wave to all the 298 $B_e$ points with errors as in\nFig.~\\ref{fig:long} gave very poor results with the $\\chi^2$ for\na single degree of freedom $\\chi^2\/\\nu = 18.0420$. Such fits are unacceptable,\nand in case of $\\gamma$ Equ the poor fit is the result of underestimated\nerrors of many $B_e$ points. 
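The four-parameter fit of Eq.~(\ref{equ:sigma1}) can be reproduced with any Levenberg-Marquardt implementation. The following sketch uses SciPy in place of the ODRPACK-based ``nlfit.f''; the $B_e$ series here is synthetic, generated from the fitted parameters with 213 G Gaussian scatter, not the actual 298-point compilation:

```python
import numpy as np
from scipy.optimize import curve_fit

def be_model(T, B0, B1, P, T0):
    """Eq. (1): B_e(T) = B0 + B1 * sin(2*pi*(T - T0)/P - pi/2)."""
    return B0 + B1 * np.sin(2.0 * np.pi * (T - T0) / P - np.pi / 2.0)

# synthetic B_e time series over JD ~2431000-2453000 (roughly 1944-2004)
rng = np.random.default_rng(0)
T = np.sort(rng.uniform(2431000.0, 2453000.0, 298))
B = be_model(T, -262.0, 839.0, 33278.0, 2417795.0) + rng.normal(0.0, 213.0, T.size)

# Levenberg-Marquardt fit with uniform 213 G errors, as in the text
popt, pcov = curve_fit(be_model, T, B,
                       p0=[-300.0, 800.0, 30000.0, 2418000.0],
                       sigma=np.full(T.size, 213.0), absolute_sigma=True,
                       maxfev=20000)
perr = np.sqrt(np.diag(pcov))   # 1-sigma parameter errors
```

Because $T_0$ is only defined modulo $P$, the starting guess must place the phase roughly correctly for the iteration to converge to the physical solution.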
Many $B_e$ observations presented in \nFig.~\\ref{fig:long} have very low errors, which sometimes are less than 20 G.\nOur new $B_e$ points, which are collected in Table~\\ref{tab:saores}, are\nalso of such high formal accuracy.\n\nWe cannot judge whether the apparent scatter of $B_e$ points in\nFig.~\\ref{fig:long} is due to unrealistic error estimates or the intrinsic\nshort-term variability of the longitudinal magnetic field in $\\gamma$ Equ.\nThe estimated random error of $B_e$ points about the starting sine wave\nequals 213 G. For the final fitting of a sine we assumed that all the\n298 points have identical errors of 213 G. \n\nFinal\nvalues of the fitted parameters and their standard deviations $\\sigma$\nfor the sine phase curve are given below. \n\\halign {\\hskip 1 cm #\\hfil\\hskip1mm &#\\hfil \\cr\n\\noalign{\\vskip 5 mm}\n $P_{mag}$ & = $33278 \\pm 1327$ days $= 91.1 \\pm 3.6$ years \\cr\n $T_0 $ & = JD $2417795.0 \\pm 1057. $ \\cr\n $B_0 $ & = $-\\, 262 \\pm 22.4 $ G \\cr\n $B_1 $ & = $+\\, 839 \\pm 22.1 $ G \\cr\n $r $ & = $-\\, 0.524 \\pm 0.043$ \\cr\n\\noalign{\\vskip 5 mm}\n}\n\\noindent\nIn other words, the parameter range from $-\\sigma $ to $+\\sigma$ is just\nthe true 68\\% confidence interval for each parameter. \n\nThe above fit of a sine wave with uniform errors of 213 G is very good, with\n$\\chi^2\/\\nu = 1.0134$. The effect of inhomogeneity in the $B_e$ time series\nplus the possible existence of rapid magnetic variability in $\\gamma$ Equ\nwere compensated by the increase of the random error, and neither should\ninfluence the above parameters of secular magnetic variability in \n$\\gamma$ Equ.\n\nThe standard parameter $r$ was defined for the oblique rotator model\nof an Ap star. 
It is related to the angle $\\beta$ between the magnetic\ndipole axis and the rotational axis, and the angle $i$ between the rotational\naxis and the line of sight (Preston 1967):\n\\begin{equation}\nr = {{\\cos\\beta \\cos i - \\sin\\beta \\sin i} \\over\n {\\cos\\beta \\cos i + \\sin\\beta \\sin i}} \n = {B_e (\\min) \\over {B_e (\\max)}} \\, .\n\\label{eqn:rrr}\n\\end{equation}\nParameters $B_e (\\min)$ and $B_e (\\max)$ of the $B_e$ sine wave for \n$\\gamma$ Equ are given by\n\\halign {\\hskip 1 cm #\\hfil\\hskip1mm &#\\hfil\\hskip1mm &# \\hfil \\cr\n\\noalign{\\vskip 2 mm}\n $B_e({\\rm max}) $ & $=B_0+B_1$ & = $+\\,\\,\\, 577 \\pm 31.4$ G \\cr\n $B_e({\\rm min}) $ & $=B_0-B_1$ & = $ -1101 \\pm 31.4$ G \\cr\n\\noalign{\\vskip 2 mm}\n}\nNote that the meaning of $B_e({\\rm max}) $ and $B_e({\\rm min}) $ for\nuse in Eq.~\\ref{eqn:rrr} is different: there $B_e({\\rm max}) $ denotes\nthe value of magnetic intensity which has the larger absolute value, and\n$B_e({\\rm min})$ the smaller absolute value. In this way we obtained the\nvalue of $r$ for $\\gamma$ Equ equal to $r=577 \/ (-1101) = -0.524 $.\n\nBychkov et al. (2005a) presented an extensive catalog of the magnetic\nphase curves and their parameters for 136 stars on the main sequence and\nabove it. We quoted there the \npreviously estimated period for $\\gamma$ Equ, $P_{mag}=27027^d$,\nwhich was obtained on the basis of a shorter series of $B_e$ data.\nThe new, more accurate $P_{mag} = 33278^d$ obtained in this paper represents a \nmajor revision of the previously known magnetic period of $\\gamma$ Equ.\n\n\\begin{figure}\n\\resizebox{\\hsize}{0.8\\hsize}{\\rotatebox{0}{\\includegraphics{fig1.eps}}}\n\\caption[]{The longitudinal magnetic field $B_e$ for $\\gamma$ Equ in years\n 1946--2004. 
}\n\\label{fig:long}\n\\end{figure}\n\n \n\\section{Search for additional magnetic periods in $\\gamma$ Equ}\n\nSignificant scatter of the observed points in the long-term run of $B_e (T)$\nin Fig.~\\ref{fig:long} suggests a search for short-term periodicities.\nWe applied the strategy of prewhitening to the set of available $B_e$\nmeasurements, and removed the principal sine-wave variations from the data.\nPrewhitened data were then analysed with the method developed by Kurtz \n(1985), and with his Fortran code (Kurtz 2004). \n\nSuch a search for peaks in the $B_e$ amplitude spectrum of $\\gamma$ Equ\nin this paper was restricted to trial periods longer than 1 day. \nThis is because many of the earlier magnetic observations for this star \neither have poorly determined times of measurement, or have \nlong times of exposure (see e.g. Babcock 1958). \nThe star $\\gamma$ Equ exhibits rapid nonradial pulsations and the\ncorresponding $B_e$ variations with the period $P_{puls}=12.1$ min (Leone \\& Kurtz\n2003) and, possibly, with simultaneous shorter periods \n(Bychkov et al. 2005b). None of them were analysed in this paper.\n\nWe have identified two additional periods of low statistical significance\nin the range $P_{mag} > 1^d$, see Fig.~\\ref{fig:short}:\n\n\\vskip 3 mm\n$P_1 = 348.07$ days, amplitude $=122$ G \\par\n$P_2 = 23.44$ days, amplitude $=110$ G \\par\n\\vskip 3 mm\n\n\\noindent\nBoth peaks in the amplitude spectrum in Fig.~\\ref{fig:short} exhibit a low\nsignal-to-noise ratio, with the noise level at ca. 80 G. The period $P_1$ \nis close to 1 year. 
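The prewhitening-and-scan procedure can be sketched as follows: subtract the fitted sine wave from the data, then evaluate the least-squares amplitude of a trial sinusoid on a dense frequency grid. This is a brute-force analogue of the Kurtz code, not a reproduction of it:

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """Least-squares amplitude of y(t) at each trial frequency f:
    fit y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) and report sqrt(a^2 + b^2)."""
    amps = []
    y0 = y - y.mean()                      # remove the constant term
    for f in freqs:
        A = np.column_stack([np.cos(2.0 * np.pi * f * t),
                             np.sin(2.0 * np.pi * f * t)])
        (a, b), *_ = np.linalg.lstsq(A, y0, rcond=None)
        amps.append(np.hypot(a, b))
    return np.array(amps)
```

A peak that appears in only a single bin of such a dense grid, like the narrow $P_2$ peak discussed in the text, is exactly the signature expected from noise.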
Since most of the existing $B_e$ observations for $\\gamma$\nEqu were performed in the months July--November, the peak $P_1$ in the\namplitude spectrum represents a false period which most likely reflects the\naverage 1-year repetition time in the acquisition of the existing magnetic\nmeasurements.\n\nWe believe that the peak $P_2$ in the amplitude spectrum of the $B_e$ field\nof $\\gamma$ Equ is a random effect of pure noise. The peak is very\nnarrow; in fact, it appears in only a single bin of a very dense discrete\nfrequency mesh.\n\nKurtz (1983) discussed the possible existence of a period of $\\approx 38$\ndays in his photometric observations of $\\gamma$ Equ\nin 1981. That period was of low probability, but \npossibly could be identified with the real rotational period of\nthis star. We do not confirm the existence of the 38-day period in \nlong-term $B_e$ observations of $\\gamma$ Equ, see Fig.~\\ref{fig:short}.\n\n\n\n\\begin{figure}\n\\resizebox{\\hsize}{0.8\\hsize}{\\rotatebox{0}{\\includegraphics{fig2.eps}}}\n\\caption[]{Amplitude spectrum of the $B_e$ time series for \n $\\gamma$ Equ, years 1946--2004. }\n\\label{fig:short}\n\\end{figure}\n\n\n\\section{ Discussion }\n\\label{sec:discussion}\n\nThere exist three possible explanations for the observed long-term behavior \nof the longitudinal magnetic field in $\\gamma$ Equ:\n\n\\vskip 3 mm\n\\noindent\n{\\bf 1.} Precession of the rotational axis (Lehmann 1987). \\par\n\\noindent\n{\\bf 2.} A solar-like magnetic cycle (Krause \\& Scholz 1981). \\par\n\\noindent\n{\\bf 3.} Rotation with the period of 91.1 years. \n\\vskip 3 mm\n\nThe Ap star $\\gamma$ Equ $=$ HD 201601 is in fact a binary system. One can\nassume that the gravitational force from the secondary companion can cause\nprecession of the Ap star. As a result, the angle between the rotational\naxis and the direction towards the Earth varies periodically. 
Therefore,\nchanges of the aspect can in principle cause apparent variations of the\nlongitudinal magnetic field $B_e$ or of the amplitude of its variations.\n\nEffects of precession in long-period Ap stars were studied by Lehmann\n(1987), who showed that the oblateness of stars caused by rotational\nor magnetic flattening is not adequate to produce observable precession\neffects. The only exception was 52 Her, where the observed behavior of\nthe star could be interpreted as a precessional motion. \n\nThe above considerations indicate that the precession theory does not\nconvincingly explain the $B_e$ variations in this star. \n\nThe idea by Krause \\& Scholz (1981) that we actually observe a solar-like\nmagnetic cycle in $\gamma$ Equ, in which the global magnetic field\nreverses its polarity, cannot be easily verified by the existing\nobservations of the global longitudinal magnetic field $B_e$. Moreover,\none can note that such an idea requires the existence of a mechanism in the\ninterior of $\gamma$ Equ which ensures the transfer of huge magnetic\nenergy into electric currents and vice versa. Note that the required\nefficiency of such a mechanism and the amplitude of the magnetic field\nvariations in $\gamma$ Equ are ca. four orders of magnitude larger than\nthose in the Sun on a similar timescale.\n\nFollowing the widely accepted picture of an Ap\nstar, we believe that the magnetic field of $\gamma$\nEqu can be approximated by a dipole located in the center\nof the star. The dipole is inclined to the rotational axis of $\gamma$ Equ.\nWe assume that the magnetic field is stable and remains frozen in the\ninterior of the rotating star at the time of observations, i.e. during at\nleast 58 years. Therefore, slow variations of the $B_e$ field\nin $\gamma$ Equ are caused by an extremely slow rotation, in which case\n$P_{mag} = P_{rot} = 33278^d$. Such an explanation is supported to some\nextent by the polarimetric measurements by Leroy et al. 
(1994).\n\nWe plan to perform high-accuracy polarimetric\nmeasurements of $\gamma$ Equ with the new version of MINIPOL. The device\nwas constructed to measure the angles and the degree of linear polarisation\nof stellar radiation, and will be operational at the Special\nAstrophysical Observatory in 2006. We also expect that we shall be able to\nverify the extremely slow rotation of $\gamma$ Equ by measuring the rate of\nchange of the polarisation angle of stellar radiation.\n\n\n\\section{Summary }\n\nThe Ap star $\gamma$ Equ (HD 201601) exhibited a slow and systematic\ndecrease of the longitudinal magnetic field $B_e$ starting from 1946,\nwhen the global magnetic field of this star was discovered (Babcock 1958).\nWe have compiled the full set of 298 existing $B_e$ measurements, which\nconsists of the $B_e$ data published in the literature and our observations\nobtained during the recent 7 years. The latter magnetic data (33 $B_e$ points)\nwere measured with the echelle spectrograph in the Coude focus of the 1-m\ntelescope at the Special Astrophysical Observatory.\nOur newest observations showed that the longitudinal magnetic field $B_e$\nof $\gamma$ Equ reached its local minimum and started to rise in 1998-2004.\n\nAll the available data cover a time period of 58 years (1946-2004) and\ninclude the phases of both maximum and minimum $B_e$. \nAssuming that the secular variability of the $B_e$ field is a periodic\nfeature, we determined the parameters of the magnetic field curve in\n$\gamma$ Equ and give the value of its period, $P=91.1 \\pm 3.6$ years,\nwith the zero phase (maximum of $B_e$) at\n$T_0 =$ JD $2417795.0 \\pm 1057$. A sine-wave fit to the $B_e$ phase curve\nyields $B_e({\\rm max}) =+577 \\pm 31$ G and $B_e({\\rm min}) =-1101 \\pm 31$ G.\n\nSpectral analysis of the 58-year-long $B_e$ time series essentially does not\nshow the existence of shorter periods, down to trial periods of $\\approx$\n1 day. 
More specifically, there are no real shorter periods in the run of the\nlongitudinal magnetic field $B_e$ with amplitudes exceeding the noise\nlevel of 80 G.\n\n\n\\section*{Acknowledgments}\n\nWe are grateful to John D. Landstreet, the referee, for his criticism\nand suggestions regarding our computations and the manuscript. \nWe thank Don Kurtz for providing his Fortran software used\nhere to compute the amplitude spectrum of $\\gamma$ Equ. \nThis research was supported by the Polish Committee for Scientific\nResearch grant No. 1 P03D 001 26.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix}\n\n\\renewcommand{\\thesubsection}{\\Alph{subsection}}\n\n\\subsection{Results under more attacks}\n\nIn order to verify the effectiveness of the proposed method, in this section, we further evaluate the robustness of our method under a broader range of powerful attacks: $1)$ AutoAttack~\\cite{croce2020reliable} (an ensemble of four strong diverse attacks, which is widely considered as the strongest attack for robustness evaluation), $2)$ CW attack~\\cite{carlini2017towards} (CW-200), $3)$ PGD attack with restart~\\cite{madry2018towards} (PGD-200), $4)$ One-pixel attack~\\cite{su2019one}, $5)$ Spatial Transformation attack~\\cite{xiao2018spatially}, as well as $6)$ Color Channel attack~\\cite{kantipudi2020color}. PGD-200 and CW-200 both restart 5 times with 40 optimization steps each restart.\n\nIn Table~\\ref{tab:other_attacks-1}, we report the robust accuracy under these attacks with AdvCL serving as baseline on CIFAR100.\nThe results show that our methods can improve robustness under all different attacks across almost all settings, \\textit{e.g.}, 21.43\\% vs. 19.57\\% under AutoAttack and 29.56\\% vs. 
27.13\\% under PGD-200 attack, with loss function $\\mathcal{L}^{IP+HN}$ (Equation~\\ref{eq:ip+hn}), under Linear Probing.\n\n\\begin{table}[h]\n\\fontsize{8}{9}\\selectfont \n\\renewcommand\\arraystretch{1.5}\n\\centering\n\\caption{Robustness evaluation under diverse attacks on CIFAR100 with AdvCL as baseline.}\n\\vspace{5pt}\n\\label{tab:other_attacks-1}\n\\scalebox{0.88}{\n\\begin{tabular}{clcccccc}\n\\toprule\n\\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}} Training Methods\\end{tabular}} & PGD-200 & CW-200 & AA & One-pix. & Spatial-Tr. & Color-Ch. \\\\ \\hline\\hline\n\\multirow{4}{*}{Linear Probing} & AdvCL & 27.13 & 21.85 & 19.57 & 72.10 & 47.94 & 25.62 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 27.87 & 22.10 & 19.80 & 69.60 & 49.31 & 25.88 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 29.43 & 23.10 & 21.23 & \\textbf{73.20} & 51.57 & 28.01 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{29.56} & \\textbf{23.60} & \\textbf{21.43} & 73.00 & \\textbf{52.62} & \\textbf{28.94} \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & AdvCL & 27.29 & 22.01 & 20.09 & \\textbf{72.80} & 47.31 & 24.98 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 27.84 & 22.37 & 20.06 & 71.60 & 46.22 & 24.23 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{29.79} & \\textbf{23.79} & 21.52 & 70.80 & \\textbf{51.04} & \\textbf{27.84} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 29.58 & 23.64 & \\textbf{21.66} & 71.70 & 49.87 & 27.14 \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & AdvCL & 29.48 & 25.73 & 24.46 & \\textbf{72.20} & 57.86 & 25.12 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 30.10 & 26.05 & 24.73 & 71.00 & 58.95 & 25.55 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{30.46} & \\textbf{26.60} & \\textbf{25.22} & 69.30 & 59.04 & \\textbf{26.02} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{30.46} & 26.54 & 25.06 & 69.00 & \\textbf{59.33} & 25.70 \\\\ 
\\toprule\n\\end{tabular}}\n\\end{table}\n\nTable~\\ref{tab:other_attacks-2} provides results on CIFAR10 under canonical optimization-based attack methods: PGD-200, CW-200 and AutoAttack. Our methods also yield robustness gain in almost all settings. \n\n\\begin{table}[h]\n\\fontsize{8}{9}\\selectfont \n\\renewcommand\\arraystretch{1.5}\n\\centering\n\\vspace{10pt}\n\\caption{Robustness evaluation under optimization-based attacks on CIFAR10, with AdvCL as baseline.}\n\\vspace{5pt}\n\\label{tab:other_attacks-2}\n\\begin{tabular}{clccc}\n\\toprule\n\\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}}Training Methods\\end{tabular}} & PGD-200 & CW-200 & AutoAttack \\\\ \\hline\\hline\n\\multirow{4}{*}{Linear Probing} & AdvCL & 51.05 & 45.65 & 43.48 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 51.99 & 46.02 & 43.57 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{52.36} & \\textbf{46.09} & \\textbf{43.68} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 52.01 & 45.35 & 42.92 \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & AdvCL & 52.30 & 46.04 & 43.93 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 52.77 & 46.60 & \\textbf{44.22} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{53.22} & \\textbf{46.44} & 44.15 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 52.77 & 45.55 & 43.01 \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & AdvCL & 52.90 & 50.92 & 49.58 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & \\textbf{53.61} & 51.25 & 49.90 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 53.25 & 51.11 & 49.93 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 53.51 & \\textbf{51.46} & \\textbf{50.28} \\\\ \\toprule\n\\end{tabular}\n\\end{table}\n\\vspace{5pt}\n\nBesides, we also report results compared with RoCL under PGD-200, CW-200 and AutoAttack in Table~\\ref{tab:rocl-other_attacks}, which further validate the effectiveness of the proposed methods. For instance, 25.09\\% vs. 
23.51\\% under CW-200 attack, Adversarial Full Finetuning scheme, on CIFAR100.\n\n\\begin{table}[h]\n\\fontsize{8}{9}\\selectfont \n\\renewcommand\\arraystretch{1.7}\n\\centering\n\\vspace{10pt}\n\\caption{Robustness evaluation under optimization-based attacks, with RoCL as baseline, on CIFAR-10 and CIFAR-100.}\n\\vspace{5pt}\n\\label{tab:rocl-other_attacks}\n\\begin{tabular}{cclccc}\n\\toprule\n\\multicolumn{1}{l}{Dataset} & \\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}}Training Methods\\end{tabular}} & PGD-200 & CW-200 & AutoAttack \\\\ \\hline\\hline\n\\multirow{6}{*}{CIFAR10} & \\multirow{2}{*}{Linear Probing} & RoCL & 32.47 & 33.33 & 24.11 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{34.13} & \\textbf{34.59} & \\textbf{24.58} \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & RoCL & 42.58 & 40.21 & \\textbf{31.81} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{43.54} & \\textbf{41.26} & 30.37 \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & RoCL & 50.33 & 47.57 & 46.69 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{51.47} & \\textbf{48.26} & \\textbf{47.05} \\\\ \\hline\n\\multirow{6}{*}{CIFAR100} & \\multirow{2}{*}{Linear Probing} & RoCL & 14.93 & 14.75 & 7.58 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{17.95} & \\textbf{16.57} & \\textbf{8.58} \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & RoCL & 22.59 & 18.99 & \\textbf{11.93} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{24.46} & \\textbf{20.69} & 11.69 \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & RoCL & 27.95 & 23.51 & 22.70 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{29.37} & \\textbf{25.09} & \\textbf{24.01} \\\\ 
\\toprule\n\\end{tabular}\n\\end{table}\n\\vspace{5pt}\n\\end{document}\n\n\\section{Introduction}\n\n\nThough Deep Neural Networks (DNNs) have exhibited highly expressive performance and even surpass humans on many tasks, they are rigorously challenged by their vulnerability to \\textit{adversarial examples} \\cite{szegedy2014intriguing,goodfellow2014explaining}, which are artificially crafted to mislead the state-of-the-art models into incorrect predictions.\nDNN's weakness in the face of adversarial examples poses severe threats to their applications in security-critical systems, e.g. autonomous driving, surveillance and face recognition~\\cite{akhtar2018threat,kurakin2018adversarial,cao2019adversarial,vakhshiteh2020threat}.\nOne of the most powerful defense strategies against adversarial attacks is adversarial training (AT)~\\cite{kannan2018adversarial,shafahi2019adversarial,zhang2019you,wong2020fast,zhang2019theoretically,zhu2019freelb}, built upon a min-max optimization problem where the inner one constructs adversaries by maximizing the loss and the outer one fits the network's parameters to them by minimizing the expected loss~\\cite{madry2018towards}. AT is effective yet highly depends on labeled data~\\cite{chen2020adversarial,kim2020adversarial}.\n\n\nHow to leverage unlabeled data to perform AT is an important and worth-exploring problem~\\cite{gowal2020self,chen2020adversarial}.\nIn real-world practice, we can easily acquire huge amounts of domain-related unlabeled data, while it is often highly expensive to get them labeled.\nMoreover, to emphasize the value of unlabeled data, some recent works claim that robustness should not inherently require labels because we essentially ask predictors to be stable around naturally occurring inputs~\\cite{carmon2019unlabeled}, and they resort to semi-supervised learning to boost model's robustness \\cite{alayrac2019labels,carmon2019unlabeled}. 
\nNevertheless, these methods still require labeled images to generate pseudo-supervision for subsequent adversarial training, and can in essence be regarded as fully-supervised adversarial training. How to train robust models entirely from unlabeled data remains an important but less explored question.\nMost recently, some works attempted to combine contrastive and adversarial training to perform AT on unlabeled data. The main idea is to first generate adversaries against the contrastive loss, then maximize the similarity between clean views and their adversarial counterparts~\\cite{kim2020adversarial,fan2021does,jiang2020robust}.\nRoCL\\cite{kim2020adversarial} first proposed the above unlabeled AT scheme against the contrastive loss, and AdvCL\\cite{fan2021does} proposes to minimize the learning task gap between unlabeled contrast and labeled finetuning by introducing pseudo-supervision in the pretraining stage, achieving state-of-the-art performance.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.41]{images\/intro\u56fe4.png}\n\\caption{\nSchematic illustration of the conflict brought by adversarial contrastive learning and our proposed new views on adversaries to reduce such conflicts.\n}\n\\label{fig:intro}\n\\end{figure}\n\n\n\n\nAlthough existing \\textit{Adversarial Contrastive Learning} works have achieved satisfactory robustness in unlabeled settings, we argue that the current simple\nextension from contrastive learning to adversarial training\ndoes not treat adversaries correctly and brings severe conflicts in the pretraining objectives. The adversaries in ACL are generated against the contrastive loss, i.e., they are pushed as far away from the anchor as possible and close to other datapoints, contrary to CL's objective. 
Conventional CL pushes an anchor away from other data points. When we introduce adversarial learning, the contrast between adversaries and their anchors attracts the anchors towards the other data points that the adversaries are close to, which is the opposite of the direction in which CL exerts its pull from a geometric perspective, as shown in Figure \\ref{fig:intro}(b), engendering outright conflicts in the training objective. Moreover, we will show by a limit case that the current introduction of adversaries actually sets a potential objective to shrink all images into one point. The problem is rooted in the fact that current adversarial contrastive learning treats adversaries the same as other clean augmentations and asks the anchors to head towards them.\n\nSo how should adversaries be treated correctly in adversarial contrastive learning? We present two new treatments for adversaries, from the perspectives of both positives and negatives, to alleviate this conflict. Firstly, as positives, we propose to view adversaries as \\textit{inferior positives} that have asymmetric similarity with normal positives (clean augmentations) to directly reduce the conflict, as shown in Figure~\\ref{fig:intro}(c). On the other hand, we propose to treat them as \\textit{hard negatives} that should be upweighted when pushed away from other data points, to further help relieve the conflict and make the model more robustness-aware, as shown in Figure~\\ref{fig:intro}(d).\n\nWhen adversaries are viewed as inferior positives, we propose that the similarity between clean views and adversarial views should not be symmetric, i.e., we want adversarial views to be similar to clean ones and pulled back towards them, but we do not want the model to learn to represent clean images as adversaries, which are intentionally perturbed. 
We put forward an adaptive gradient stopping strategy to model this asymmetry.\nMoreover, we argue that adversarial views have excellent intrinsic properties to become hard negatives, in which case they should be upweighted to make the model more robustness-aware. As proposed in \\cite{robinson2020contrastive}, two principles for a good negative sample for $x$ are: 1. a \"true negative\" $x^-$ whose label differs from that of the anchor $x$; 2. an embedding that the model currently believes to be similar to $x$.\nAdversaries, as stated above, are often close to other data points that could have different labels, perfectly satisfying our desire for hard negatives; what we need to do is distinguish each data point from the surrounding adversarial views of other classes. Here we combine ideas from positive-unlabeled learning~\\cite{du2014analysis,elkan2008learning} with adversarial training, and resort to the prevalent practice of reweighting negatives in contrastive learning \\cite{robinson2020contrastive,chuang2020debiased} to effectively sample true adversarial negatives and reweight each sample according to the current similarity.\n\nTo sum up, our contributions are as follows: \n\\begin{itemize}\n \\item We are the first to consider modeling the asymmetric property of \\textit{Adversarial Contrastive Learning} and propose to perform asymmetric gradient stopping, which can boost both standard accuracy and robustness.\n \\item We provide a new perspective on adversaries in contrastive learning, i.e., viewing them as hard negatives, which can also be extended to other adversarial scenarios.\n \\item We present a generalized asymmetric InfoNCE loss, A-InfoNCE, which unifies current contrastive learning methods and can conveniently integrate our two proposed new views.\n \\item Our methods are compatible with current adversarial contrastive learning methods, outperform the chosen baselines and achieve new state-of-the-art performance.\n 
\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\nIn this work, we study enhancing model robustness using unlabeled data and investigate the \\textit{identity confusion} issue in Adversarial CL, \\textit{i.e.}, adversaries with different identities attract their anchors together, contradicting the objective of CL. \nWe present a generic asymmetric objective, \\textit{A-InfoNCE}, and treat adversaries discriminatingly as \\textit{inferior positives} or \\textit{hard negatives}, which can overcome the identity confusion challenge.\nComprehensive experiments with quantitative and qualitative analysis show that our methods can effectively enhance existing Adversarial CL methods.\nIn future work, we plan to extend the proposed asymmetric form to other CL settings to take into consideration the asymmetric characteristics between different views.\n\n\\section*{Acknowledgement}\n\nThis work was supported in part by the National Key R\\&D Program of China under Grant 2021ZD0112100, and in part by Baidu Inc. through the Apollo-AIR Joint Research Center. 
We would also like to thank the anonymous reviewers for their insightful comments.\n\\section{Asymmetric InfoNCE}\n\n\\subsection{Notations}\n\n\n\\paragraph{Contrastive Learning (CL) }\n\nCL aims to learn generalizable features by maximizing agreement between self-created positive samples while contrasting to negative samples.\nIn typical contrastive learning, each instance $x$ will be randomly transformed into two views $(x_1, x_2)$, then fed into a feature encoder $f$ with parameters $\\theta$ to acquire normalized projected features, \\textit{i.e.}, $z_i = f(x_i;\\theta)$.\nLet $\\mathcal{P}(i)$ denote the set of positive views of $x_i$, containing the views transformed from $x$ with the same instance-level \\textit{identity} (\\textit{e.g.}, augmentations of the original image $x_i$); $\\mathcal{N}(i)$ denotes the set of negative views of $x_i$, containing all the views from other instances.\nThe conventional InfoNCE loss function~\\cite{oord2018representation} used in CL for a positive pair $(x_i,x_j)$ is defined as:\n\\begin{align}\n \\mathcal{L}_{\\rm CL}(x_i,x_j) \n =\n - \\log \\frac\n {\\exp({\\rm sim}(z_i, z_j)\/t)}\n {\n \\exp({\\rm sim}(z_i, z_j)\/t) + \n \\sum_{k\\in \\mathcal{N}(i)} \\exp({\\rm sim}(z_i, z_k)\/t)\n }\n \n\\end{align}\nwhere $x_i$ serves as the anchor, ${\\rm sim}(z_i, z_j)$ denotes a similarity metric (\\textit{e.g.}, cosine similarity) between $z_i$ and $z_j$, and $t$ is a temperature parameter.\nThe final loss of the CL problem is averaged over all positive pairs of instances.\n\n\\paragraph{Adversarial CL}\nAdversarial CL can be regarded as an extension of CL by adding adversarial samples into the positive sets $\\mathcal{P}(\\cdot)$ to contrast. 
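As a concrete reference for the InfoNCE loss defined above, here is a minimal NumPy sketch for a single anchor (a hedged illustration, not the authors' code: the name `info_nce`, the toy vectors and the temperature value are ours; features are assumed l2-normalized so that dot products equal the cosine similarity ${\rm sim}(z_i, z_j)$):

```python
import numpy as np

def info_nce(z_i, z_j, z_neg, t=0.5):
    # InfoNCE for one positive pair (anchor z_i, positive z_j) against a
    # stack of negative embeddings z_neg; all inputs are l2-normalized,
    # so dot products are cosine similarities.
    pos = np.exp(np.dot(z_i, z_j) / t)
    neg = np.exp(z_neg @ z_i / t).sum()
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
anchor = unit(rng.normal(size=8))
negatives = np.stack([unit(rng.normal(size=8)) for _ in range(4)])

# A perfectly aligned positive gives a lower loss than a random one.
loss_aligned = info_nce(anchor, anchor, negatives)
loss_random = info_nce(anchor, unit(rng.normal(size=8)), negatives)
assert 0.0 < loss_aligned < loss_random
```

Averaging this quantity over all positive pairs recovers the final CL objective described in the text.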
Adversarial CL is typically modeled as the following min-max optimization formulation to incorporate instance-wise attacks~\\cite{madry2018towards,fan2021does}:\n\\begin{align}\n \\min_\\theta \\mathbb{E}_{x\\in \\mathcal{X}} \\max_{||\\delta||_\\infty \\leq \\epsilon} \\sum_i\\sum_{j\\in \\mathcal{P}(i)}\n \\mathcal{L}_{\\rm CL}(x_i, x_j),\\quad \\mathcal{P}(i)\\leftarrow \\mathcal{P}(i) \\cup \\{\\hat{x}_i+\\delta\\}\n\\end{align}\nwhere $\\hat{x}_i$ is the view of $x_i$ used to generate adversarial samples, and $\\delta$ is the adversarial perturbation whose infinity norm is constrained to be less than $\\epsilon$.\nIn the above formulation, the inner maximization problem constructs adversarial samples by maximizing the contrastive loss, and the outer minimization problem optimizes the expected worst-case loss w.r.t. the feature encoder $f$. \n\n\\subsection{Asymmetric InfoNCE: A Generic Learning Objective}\nCurrent Adversarial CL frameworks directly inherit CL's conventional contrastive loss (\\textit{e.g.}, InfoNCE) to evaluate the similarity between adversarial and clean views in a symmetric fashion. 
This can result in ineffective or even conflicting updates during CL training, as mentioned above.\nTo address this challenge, we propose a generic Asymmetric InfoNCE loss (\\textit{A-InfoNCE}) to incorporate the asymmetric influences between different contrast instances, given by:\n\\begin{equation} \\label{eq:1}\n\\resizebox{\\textwidth}{!}{\n $\\mathcal{L}^{\\rm asym}_{\\rm CL} (x_i, x_j;{\\alpha},\\lambda^p, \\lambda^n) \n = \n - \\log \\frac\n {\\lambda^p_j \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_j)\/t)}\n {\\lambda^p_j \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_j)\/t) + {\\sum_{k\\in \\mathcal{N}(i)} \\lambda^n_k\\cdot\\exp({\\rm sim^{\\alpha}}(z_i, z_k)\/t)}}\n \n $\n}\n\\end{equation}\nwhere $\\rm sim^{\\alpha}(\\cdot)$ is a generalized similarity metric that enables the incorporation of asymmetric relationships (a concrete instantiation is described in the next section); $\\lambda^p$ and $\\lambda^n$ are asymmetric weighting factors for positive and negative pairs, respectively.\nIt is worth noting that although A-InfoNCE is proposed to address the \\textit{identity confusion} issue in Adversarial CL, it can be easily extended to other CL settings where the asymmetric characteristics between different views need to be captured.\nA-InfoNCE can also be generalized to many existing CL methods; for example, $\\mathcal{P}(i)$ and $\\mathcal{N}(i)$ can be altered to different choices of positive and negative views, and ${\\rm sim^{\\alpha}}(z_i, z_j)$ can be replaced by a symmetric similarity metric for $z_i$ and $z_j$. $\\lambda^p$ and $\\lambda^n$ control the weights of different positive\/negative pairs. 
Generalization strategies are itemized below:\n\\begin{itemize}\n \\item If ${\\rm sim^{\\alpha}}(z_i, z_j)$ is a symmetric similarity metric and $\\lambda^p, \\lambda^n = 1$, it degrades to the conventional InfoNCE loss used in CL~\\cite{pmlr-v119-chen20j}.\n \\item If $\\mathcal{P}(i)$ is altered, it corresponds to positive sampling~\\cite{tian2020contrastive,bachman2019learning,tian2020makes}. When we add adversaries into $\\mathcal{P}(i)$, it degenerates to the conventional Adversarial CL objectives, where $\\lambda^p, \\lambda^n = 1$ with symmetric ${\\rm sim^{\\alpha}}(z_i, z_j)$~\\cite{kim2020adversarial,jiang2020robust,fan2021does}.\n \\item If we seek a better $\\mathcal{N}(i)$, it echoes negative sampling methods~\\cite{robinson2020contrastive,kalantidis2020hard} such as MoCo~\\cite{he2020momentum}, which maintains a queue of consistent negatives, or mimics DCL~\\cite{chuang2020debiased}, which debiases $\\mathcal{N}(i)$ into true negatives. \n \\item If we change $\\lambda^p$ and $\\lambda^n$, it mirrors the pair reweighting works~\\cite{chuang2020debiased,robinson2020contrastive} that assign different weights to each pair according to a heuristic measure of importance such as similarity.\n\\end{itemize}\nWhile most existing methods adopt a symmetric similarity metric, we claim that in some scenarios the asymmetric similarity perspective needs to be taken into account, especially when the quality and properties of different views vary significantly.\nIn this paper, we focus on the study of Adversarial CL and demonstrate the benefits of\ncapturing the asymmetric relationships between adversaries and clean views.\nSpecifically, we design two instantiations to model the asymmetric relationships between adversarial and clean samples, as detailed in the next section.\nBoth instantiations\ncan be integrated into the proposed \\textit{A-InfoNCE} framework.\n\n\n\n\n\n\n\\section{Related Work}\n\n\\paragraph{Contrastive Learning}\n\nCL has been widely applied to 
learn generalizable features from unlabeled data~\\cite{pmlr-v119-chen20j,he2020momentum,tian2020contrastive,grill2020bootstrap,chen2020improved,caron2020unsupervised,bachman2019learning,oord2018representation,chen2020big,chen2021large,khosla2020supervised}. The basic idea is instance discrimination~\\cite{wu2018unsupervised}.\nRepresentative works include CMC~\\cite{tian2020contrastive}, SimCLR\\cite{pmlr-v119-chen20j},\nMoCo\\cite{he2020momentum}, SwAV\\cite{caron2020unsupervised} and BYOL\\cite{grill2020bootstrap}.\nThere is also a stream of work focusing on refined sampling of different views for improved performance~\\cite{tian2020contrastive,kalantidis2020hard,chuang2020debiased,robinson2020contrastive,tao2021clustering}.\nFor example, DCL\\cite{chuang2020debiased} proposed to \\textit{debias} the assumption that all negative pairs are true negatives. HCL\\cite{robinson2020contrastive} extended DCL and proposed to mine hard negatives for contrastive learning, whose embeddings are hard to discriminate. \n\n\\paragraph{Adversarial Training}\n\nAdversarial training (AT) stems from \\cite{goodfellow2014explaining} and adopts a min-max training regime that optimizes the objective over adversaries generated by maximizing the loss~\\cite{madry2018towards,zhang2019theoretically,shafahi2019adversarial,zhang2019you,wong2020fast,zhu2019freelb,gan2020large,pang2019rethinking}. Some recent works introduced unlabeled data into AT~\\cite{hendrycks2019using,chen2020adversarial,carmon2019unlabeled,alayrac2019labels,kim2020adversarial}. By leveraging a large amount of unlabeled data, \\cite{carmon2019unlabeled,alayrac2019labels} performed semi-supervised self-training to first generate pseudo-supervision, and then conducted conventional supervised AT. 
Our work explores how to learn robust models without any class labels.\n\n\n\\paragraph{Adversarial Contrastive Learning}\n\nSome recent studies applied CL to adversarial training~\\cite{kim2020adversarial,jiang2020robust,fan2021does,gowal2020self}, by considering adversaries as positive views for contrasting, such that the learned encoder renders robust data representations. RoCL~\\cite{kim2020adversarial} was the first to successfully show that robust models can be learned in an unsupervised manner.\nAdvCL~\\cite{fan2021does} proposed to empower CL with a pseudo-supervision stimulus.\nAs in CL, these Adversarial CL methods perform symmetric contrast for all pairs, which can potentially induce conflicts between the CL and AT training objectives. We are the first to investigate the asymmetric properties of Adversarial CL, by treating adversaries discriminatingly.\n\\section{Adversarial Asymmetric Contrastive Learning}\n\n\nThis section explains the instantiations of the \\textit{A-InfoNCE} loss for Adversarial CL. From the \\textit{inferior-positive} perspective, to reduce the impact of identity confusion, we first design a new asymmetric similarity metric ${\\rm sim^{\\alpha}}(z_i, z_j^{adv})$ for modeling the asymmetric relationships and weakening the learning signals from adversarial examples. From the \\textit{hard-negative} perspective, we view adversaries as hard negatives for other negative samples, and reweight each negative pair by assigning similarity-dependent weights to ease the identity confusion. \n\n\n\\subsection{Adversarial Samples as Inferior Positives}\n\n\nAdversarial samples with different identities may attract their anchors (clean samples) in a manner that contradicts the exertion of CL. 
By weakening the learning signal from these adversarial examples in positive contrast (as \\textit{inferior positives} that attract the anchors less), we can effectively mitigate the undesired pull on clean samples via an adaptive gradient stopping strategy.\n\n\\subsubsection{Asymmetric Similarity Function.}\n\nAs the symmetric nature of InfoNCE can bring conflicts in Adversarial CL, we design a new asymmetric similarity function ${\\rm sim^{\\alpha}}(z_i, z_j)$ for \\textit{A-InfoNCE}, by manipulating the scale of the gradient for each contrasted branch. We decompose it into two parts, one for each branch:\n\\begin{align} \\label{eq:2}\n {\\rm sim^{\\alpha}}(z_i, z_j) \n = \n \\alpha \\cdot {\\rm \\overline{sim}}(z_i, z_j) \n +\n (1 - \\alpha) \\cdot {\\rm \\overline{sim}}(z_j, z_i)\n\\end{align}\nwhere ${\\overline{\\rm sim}(a, b)}$ means the one-sided similarity of $a$ to $b$, \\textit{i.e.}, when maximizing ${\\overline{\\rm sim}(a, b)}$, we freeze $b$ and only move $a$ towards $b$. This can be implemented by stopping the gradient back-propagation for $b$ and only optimizing $a$.\n\nWe use a hyperparameter $\\alpha$ to control how much $z_i$ and $z_j$ head towards each other. For a clean sample and an adversarial sample, we let $\\alpha$ denote the coefficient of the clean branch's movement. If $\\alpha$ is 0, it performs total gradient freezing on the clean branch and only adversarial representations are optimized through training. Our empirical analysis finds that $\\alpha$ is relatively easy to tune for boosted performance. We show that any value lower than 0.5 brings a reasonable performance boost (see Figure~\\ref{fig:ablation1}), where clean samples move less towards adversaries, following the intrinsic asymmetric property of Adversarial CL.\n\n\\subsubsection{Adaptive $\\alpha$-annealing.}\n\n\nWhen the \\textit{identity confusion} is at play, it is necessary to treat adversarial samples as inferior to ensure model robustness. 
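The one-sided similarity $\overline{\rm sim}(a, b)$ above amounts to a stop-gradient on the frozen branch: the forward value of ${\rm sim^{\alpha}}$ equals the plain similarity, while the gradient flowing into each branch is rescaled by $\alpha$ or $1-\alpha$. A minimal NumPy sketch with hand-written gradients for a dot-product similarity (our own illustration with our own names, not the paper's implementation):

```python
import numpy as np

def asym_sim_and_grads(z_clean, z_adv, alpha):
    # sim^alpha(z_clean, z_adv) = alpha * sim(z_clean, stop(z_adv))
    #                           + (1-alpha) * sim(stop(z_clean), z_adv)
    # for a dot-product similarity. Stop-gradient leaves the forward value
    # unchanged but rescales the gradient reaching each branch.
    value = np.dot(z_clean, z_adv)        # forward pass: plain similarity
    grad_clean = alpha * z_adv            # clean branch moves with weight alpha
    grad_adv = (1.0 - alpha) * z_clean    # adversarial branch with 1 - alpha
    return value, grad_clean, grad_adv

z_c = np.array([1.0, 0.0])
z_a = np.array([0.6, 0.8])
v, g_c, g_a = asym_sim_and_grads(z_c, z_a, alpha=0.2)
assert np.isclose(v, 0.6)                 # value identical to the symmetric sim
assert np.allclose(g_c, 0.2 * z_a)        # clean branch attenuated
assert np.allclose(g_a, 0.8 * z_c)        # adversary pulled back more strongly
```

With $\alpha < 0.5$ the clean anchor receives the smaller gradient, which is exactly the "inferior positive" behavior described in the text.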
But as training progresses, the model learns robust representations and the negative identity-changing impact of adversarial perturbations wanes, so we can regard adversarial perturbation as a strong augmentation, equal to other typical transformations~\\cite{pmlr-v119-chen20j}. \n\nThe question is how to measure the reduction of the instance confusion effect. Here we take a geometric perspective and propose to adaptively tune the proportional coefficient $\\alpha$ on-the-fly based on Euclidean distance. Let $d_{i,j} = ||z_i - z_j||_2$ denote the distance between an original image and its adversarial view in the representation space.\nGiven $\\alpha_{min}$, $d_{max}$, $\\alpha_{max}$, $d_{min}$,\nthe goal is for $\\alpha$ to approach $\\alpha_{max}$ as the distance approaches $d_{min}$, and $\\alpha_{min}$ as it approaches $d_{max}$. During training, we first compute the current representation distance $d$, then use a simple linear annealing strategy to compute $\\alpha$:\n\\begin{align}\n \\alpha = \\alpha_{min} + (d_{max}-d)\\frac{\\alpha_{max}-\\alpha_{min}}{d_{max}-d_{min}}\n\\end{align}\n$d_{min}$ and $\\alpha_{min}$ can be treated as hyperparameters. $\\alpha_{max}$ is set to 0.5, at which point adversarial perturbation is treated as equal to other transformations and ${\\rm sim}^\\alpha(z_i, z_j)$ degrades to the symmetric similarity. 
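As a concrete illustration, the asymmetric similarity and the $\alpha$-annealing schedule above can be sketched in PyTorch as follows (a minimal sketch; function names are ours, and one-sided similarity is realized with \texttt{detach()}-based gradient stopping):

```python
import torch
import torch.nn.functional as F

def asym_sim(z_i, z_adv, alpha):
    """Asymmetric similarity sim^alpha: the forward value equals the plain
    cosine similarity, but alpha scales the gradient reaching the clean
    branch z_i, and (1 - alpha) the gradient reaching z_adv."""
    # detach() freezes one branch, so each term only moves the other one.
    s_clean = F.cosine_similarity(z_i, z_adv.detach(), dim=-1)  # moves z_i
    s_adv = F.cosine_similarity(z_i.detach(), z_adv, dim=-1)    # moves z_adv
    return alpha * s_clean + (1.0 - alpha) * s_adv

def anneal_alpha(d, d_min, d_max, alpha_min, alpha_max=0.5):
    """Linear alpha-annealing driven by the clean-adversarial distance d."""
    alpha = alpha_min + (d_max - d) * (alpha_max - alpha_min) / (d_max - d_min)
    return min(max(alpha, alpha_min), alpha_max)  # clamp to [alpha_min, alpha_max]
```

With $\alpha = 0.5$ both branches receive equal gradients and the function reduces to the ordinary symmetric cosine similarity, matching the degenerate case described above.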
Moreover, we use the first $N$ epochs as a warm-up to compute the average distance as $d_{max}$, during which $\\alpha$ is fixed.\n\n\\subsubsection{Adversarial CL Loss with Inferior Positives.}\nWith the above asymmetric similarity function $\\rm sim^\\alpha(\\cdot)$ and the \\textit{A-InfoNCE} loss function $\\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j;{\\alpha},\\lambda^p, \\lambda^n)$, the complete Adversarial CL loss with \\textit{inferior positives} (IP) can be written as: \n\\begin{small}\n\\begin{align} \\label{eq:ip}\n \\mathcal{L}^{\\rm IP} \n = \n \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j; 0.5, 1, 1) \n +\n \\gamma \\cdot \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j^{adv}; \\alpha, 1, 1)\n\\end{align}\n\\end{small}\nwhere the first part stands for the standard CL loss that maximizes the similarity between two clean views, which is symmetric ($\\alpha=0.5$) with $\\lambda^p = \\lambda^n = 1$, degrading to the conventional InfoNCE loss. The second part is a robust CL loss that maximizes the agreement between clean and adversarial views, but uses the asymmetric similarity function (\\ref{eq:2}) with a hyperparameter $\\alpha$ that gives weaker learning signals to the counterparts of inferior adversarial samples. The hyperparameter $\\gamma$ balances the robustness and accuracy objectives.\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Adversarial Samples as Hard Negatives}\n\nBesides inferior positives, we also propose an alternative view of adversaries as \\textit{hard negatives}~\\cite{robinson2020contrastive} that should be pushed away from surrounding data points with higher weights. This can potentially assuage the confusion brought by adversarial samples of the current instance residing too close to the negative samples of the same instance (as illustrated in Figure 1 (d)). 
Furthermore, this strategy encourages the model to be more robustness-aware, by giving adversarial samples possessing undiscriminating features higher weights in the pretraining stage, further enhancing Adversarial CL.\n\n\nIn practice, we assign a similarity-based weight to each pair. To set a basis for weight assigning, we adopt a simple and adaptive weighting strategy used in~\\cite{robinson2020contrastive}, \\textit{i.e.}, taking each pair's similarity as its weight, with $w_{i,j} = \\exp({\\rm {sim}}(z_i, z_j)\/t)$. By doing so, the adversaries with bad instance-level identity (greater similarity to negative samples) can automatically be assigned higher weights. The weights can adaptively decay as the instance identity recovers during training.\n\nHowever, as the commonly-used $\\mathcal{N}(i)$ is uniformly sampled from the entire data distribution $p(x)$~\\cite{chuang2020debiased} (\\textit{e.g.}, SimCLR~\\cite{pmlr-v119-chen20j} uses other instances in the current batch as negative samples), simply taking similarities as weights may heavily repel semantically-similar instances whose embeddings should be close. To estimate the true negative distribution $p^-(x)$, we take advantage of PU-learning~\\cite{du2014analysis,elkan2008learning} and resort to DCL and HCL~\\cite{chuang2020debiased,robinson2020contrastive} to debias negative sampling.\n\nPU-learning~\\cite{du2014analysis} decomposes the data distribution as: $p(x) = \\tau p^+ (x) + (1-\\tau) p^- (x)$, where $p^+(x), p^-(x)$ denote the distributions of data from the same or a different class as $x$, and $\\tau$ is the class prior. Thus\n$p^{-}(x)$ can be rearranged as $p^{-}(x)=\\big(p(x) - \\tau p^+ (x)\\big)\/(1-\\tau)$. We can use all instances and positive augmentations containing adversarial samples of $x$ to estimate $p(x)$ and $p^+(x)$, respectively. 
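A minimal NumPy sketch of this reweighted, debiased negative term (per anchor; variable names are ours, and the $e^{-1/t}$ lower clamp follows the debiased estimator of~\cite{chuang2020debiased}):

```python
import numpy as np

def debiased_weighted_neg(sim_neg, sim_pos, tau, t=0.5):
    """Estimate the negative-contrast term under the PU decomposition
    p(x) = tau * p+(x) + (1 - tau) * p-(x).

    sim_neg: similarities of the anchor to its N sampled "negatives" (drawn from p)
    sim_pos: similarities of the anchor to its M positive views (estimate of p+)
    Each negative is reweighted by w = exp(sim / t), so adversaries lying
    close to the anchor act as hard negatives with larger weight.
    """
    N, M = len(sim_neg), len(sim_pos)
    w_neg = np.exp(sim_neg / t)                # hardness weights
    neg = np.sum(w_neg * np.exp(sim_neg / t))  # weighted negative term
    pos = np.sum(np.exp(sim_pos / t))          # positive correction term
    est = (neg - (N / M) * tau * pos) / (1.0 - tau)
    # Clamp at the theoretical minimum to keep the estimate positive.
    return max(est, N * np.exp(-1.0 / t))
```

With $\tau = 0$ (no debiasing) the estimate reduces to the plain similarity-weighted sum over sampled negatives.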
Following~\\cite{chuang2020debiased}, we debias the negative contrast part in (\\ref{eq:1}) as:\n\\begin{small}\n\\begin{align}\n \\frac{1}{1-\\tau}\n \\Big(\n \\sum_{k\\in \\mathcal{N}(i)} w_{i,k}^n \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_k)\/t)\n -\n \\frac{N}{M} \\cdot \\tau \n \\sum_{j\\in \\mathcal{P}(i)} w_{i,j}^p \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_j)\/t)\n \\Big)\n\\end{align}\n\\end{small}\nwhere $M, N$ are the numbers of positives and negatives, $w_{i,k}^n$ is the aforementioned weight for negatives, and $w_{i,j}^p$ is an expandable weight for positives (set to 1 in our implementation; other choices can be explored in future work).\n\n\\subsubsection{Adversarial CL Loss with Hard Negatives.}\nWe substitute (7) into the \\textit{A-InfoNCE} loss function (\\ref{eq:1}) and rearrange it, acquiring the instantiation of the \\textit{A-InfoNCE} loss with \\textit{hard negatives} (HN), with concrete forms of $\\lambda^p$ and $\\lambda^n$ as:\n\\begin{small}\n\\begin{align} \\label{eq:hn}\n \\mathcal{L}^{HN} \n = \n \\sum_i\\sum_{j\\in \\mathcal{P}(i)}\n \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j; \\alpha, \\frac{M-(M+N)\\tau}{M-M\\tau}w_{i,j}^p, \\frac{1}{1-\\tau}w_{i,k}^n),\\quad k\\in\\mathcal{N}(i)\n\\end{align}\n\\end{small}\nDue to the lack of class information, we treat $\\tau$ as a hyperparameter and set it as suggested in~\\cite{chuang2020debiased}. \n\n\\subsubsection{Combined Adversarial CL Loss.}\nFinally, we can view adversaries both as inferior positives and as hard negatives for other negative samples. 
This leads to the following combined Adversarial CL loss:\n\\begin{small}\n\\begin{align} \\label{eq:ip+hn}\n \\mathcal{L}^{IP+HN}\n &=\n \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j; 0.5, \\frac{M-(M+N)\\tau}{M-M\\tau}w_{i,j}^p, \\frac{1}{1-\\tau}w_{i,k}^n) \\ \n + \\nonumber \\\\\n & \\gamma \\cdot \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j^{adv}; \\alpha, \\frac{M-(M+N)\\tau}{M-M\\tau}w_{i,j}^p, \\frac{1}{1-\\tau}w_{i,k}^n),\\quad k\\in\\mathcal{N}(i)\n\\end{align}\n\\end{small}\n\\section{Experiments}\n\nTo demonstrate the effectiveness and generalizability of the proposed approach, we present experimental results across different datasets and model training strategies. Our methods are compatible with existing Adversarial CL frameworks, and can be easily incorporated by replacing their CL loss. \nWe choose two baselines and replace their loss with $\\mathcal{L}^{IP}$ (Equation~\\ref{eq:ip}), $\\mathcal{L}^{HN}$ (\\ref{eq:hn}) and $\\mathcal{L}^{IP+HN}$ (\\ref{eq:ip+hn}) for evaluation.\n\n\\textbf{Datasets.} We mainly use CIFAR-10 and CIFAR-100 for our experiments. Each dataset has 50,000 images for training and 10,000 for testing. STL-10 is also used for transferability experiments. 
Following previous work~\\cite{fan2021does}, we use ResNet-18~\\cite{he2016deep} as the encoder architecture in all experiments.\n\n\\setlength{\\tabcolsep}{0pt}\n\\setlength{\\arrayrulewidth}{0.2mm}\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{table}[t]\n\\caption{Results for replacing the objectives of the two baselines with $\\mathcal{L}^{IP}$, $\\mathcal{L}^{HN}$ and $\\mathcal{L}^{IP+HN}$, in Standard Accuracy (SA) and Robust Accuracy (RA).\nThe pre-trained methods are evaluated under the Linear Probing (LP), Adversarial Linear Finetuning (ALF) and Adversarial Full Finetuning (AFF) strategies.\nSupervised methods are trained under conventional adversarial training scheme\n}\n\\centering\n\\fontsize{8.5}{9}\\selectfont \n\\scalebox{0.9}{\n\\begin{tabularx}{1\\textwidth}\n{ m{1.2cm}\n m{1.6cm}\n m{2.2cm}\n \n \n \n \n \n \n P{1.1cm}\n P{1.2cm}\n P{1.1cm}\n P{1.3cm}\n P{1.1cm}\n P{1.1cm}\n \n \n \n \n \n \n P{0.5cm}\n }\n\\toprule\n\\multicolumn{1}{l}{\\multirow{3}{*}{Dataset}} & \\multicolumn{2}{c}{\\multirow{3}{*}{\\makecell[c]{Pre-training \\\\ Methods}}} & \\multicolumn{6}{c}{Finetuning Strategies} \\\\ \\cmidrule(l){4-9} \n\\multicolumn{1}{l}{} & \\multicolumn{2}{c}{} & \\multicolumn{2}{c}{Linear Probing} & \\multicolumn{2}{l}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & \\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} \\\\ \\cmidrule(l){4-9} \n\\multicolumn{1}{c}{} & \\multicolumn{2}{c}{} & \\multicolumn{1}{c}{SA} & \\multicolumn{1}{c}{RA} & \\multicolumn{1}{c}{SA} & \\multicolumn{1}{c}{RA} & \\multicolumn{1}{c}{SA} & \\multicolumn{1}{c}{RA} \\\\\n\\hline\\hline\n\\multirow{10}{*}{\\makecell[c]{CIFAR \\\\ 10}} & \\multirow{2}{*}{Supervised} & AT~\\cite{madry2018towards} &-&-&-&-& 78.99 & 47.41 & \\scriptsize{1}\\\\\n & & TRADES~\\cite{zhang2019theoretically} &-&-&-&-& 81.00 & 53.27 & \\scriptsize{2} \\\\ \\cmidrule(l){2-9} \n & 
\\multirow{8}{*}{\\begin{tabular}[c]{@{}c@{}}Self-\\\\ Supervised \\end{tabular}} & RoCL~\\cite{kim2020adversarial} & 83.84 & 38.98 & 79.23 & 47.82 & 77.83 & 50.54 & \\scriptsize{3} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & \\textbf{87.63} & 41.46 & \\textbf{84.15} & 50.08 & 78.97 & 50.29 & \\scriptsize{4} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 84.14 & 40.00 & 79.40 & 48.31 & 78.84 & 51.73 & \\scriptsize{5} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 85.69 & \\textbf{42.96} & 81.91 & \\textbf{50.90} & \\textbf{80.06} & \\textbf{52.95} & \\scriptsize{6} \\\\ \\cmidrule(l){3-9}\n & & AdvCL~\\cite{fan2021does} & 81.35 & 51.00 & 79.24 & 52.38 & 83.67 & 53.35 & \\scriptsize{7} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 82.37 & 52.33 & 80.05 & \\textbf{53.22} & \\textbf{84.12} & 53.56 & \\scriptsize{8} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 81.34 & 52.61 & 78.69 & 53.20 & 83.44 & \\textbf{54.07} & \\scriptsize{9} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{83.15} & \\textbf{52.65} & \\textbf{80.41} & 53.19 & 83.93 & 53.74 & \\scriptsize{10} \\\\ \\midrule\n\\multirow{10}{*}{\\makecell[c]{CIFAR \\\\ 100}} & \\multirow{2}{*}{Supervised} & AT~\\cite{madry2018towards} & -&-&-&- & 49.49 & 23.00 & \\scriptsize{11} \\\\\n & & TRADES~\\cite{zhang2019theoretically} & -&-&-&- & 54.59 & 28.43 & \\scriptsize{12} \\\\ \\cmidrule(l){2-9} \n & \\multirow{8}{*}{\\begin{tabular}[c]{@{}c@{}}Self-\\\\ Supervised\\end{tabular}} & RoCL~\\cite{kim2020adversarial} & 55.71 & 18.49 & 49.30 & 25.84 & 51.19 & 26.69 & \\scriptsize{13} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 59.30 & 21.34 & 54.49 & \\textbf{30.33} & 52.39 & 27.84 & \\scriptsize{14} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 58.77 & 21.17 & 56.38 & 28.03 & \\textbf{55.85} & 29.57 & \\scriptsize{15} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{59.74} & \\textbf{22.54} & \\textbf{57.57} & 29.22 & 55.79 & \\textbf{29.92} & \\scriptsize{16} \\\\ \\cmidrule(l){3-9}\n & & AdvCL~\\cite{fan2021does} & 47.98 
& 27.99 & \\textbf{47.45} & 28.29 & 57.87 & 29.48 & \\scriptsize{17} \\\\ \n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 49.48 & 28.84 & 45.39 & 28.40 & \\textbf{59.44} & 30.49 & \\scriptsize{18} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 49.44 & 29.01 & 47.32 & \\textbf{28.69} & 58.41 & 29.93 & \\scriptsize{19} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{50.59} & \\textbf{29.12} & 45.72 & 28.45 & 58.70 & \\textbf{30.66} & \\scriptsize{20} \\\\ \\bottomrule\n\\end{tabularx}}\n\\label{table:1}\n\\end{table}\n\n\\textbf{Baselines.} We compare with two baselines: RoCL~\\cite{kim2020adversarial}, the first method to combine CL and AL; and AdvCL~\\cite{fan2021does}, the current state-of-the-art framework. During experiments, we observe severe overfitting of AdvCL when training for 1000 epochs (the experiment setting in the original paper), with performance inferior to training for 400 epochs.\nThus, we pre-train AdvCL for 400 epochs at its best-performance setting. All other settings are the same as in the original papers, except for some hyperparameter tuning. Our methods are also compatible with some recent work such as SwARo~\\cite{wahed2022adversarial} and CLAF~\\cite{rahamim2022robustness}, by modeling the asymmetry between clean and adversarial views as aforementioned.\n\n\\textbf{Evaluation.}\nFollowing \\cite{jiang2020robust} and \\cite{fan2021does}, we adopt three finetuning strategies to evaluate the effectiveness of contrastive pre-training: $1)$ Linear Probing (LP): fix the encoder and train the linear classifier; $2)$ Adversarial Linear Finetuning (ALF): adversarially train the linear classifier; $3)$ Adversarial Full Finetuning (AFF): adversarially train the full model. We consider two evaluation metrics: $1)$ Standard Accuracy (SA): classification accuracy over clean images; $2)$ Robust Accuracy (RA): classification accuracy over adversaries generated via PGD-20 attacks~\\cite{madry2018towards}. 
\nRobustness evaluation under more diverse attacks is provided in the appendix.\n\n\n\\subsection{Main Results}\nIn Table \\ref{table:1}, we report the standard accuracy and robust accuracy of each model, learned by different pre-training methods over CIFAR-10 and CIFAR-100. Following previous works~\\cite{kim2020adversarial,jiang2020robust,fan2021does} and common practice in contrastive learning~\\cite{pmlr-v119-chen20j,he2020momentum}, we first use unlabeled images in CIFAR-10\/-100 to pre-train, then introduce labels to finetune the model. As shown in Table \\ref{table:1}, our methods achieve noticeable performance improvements over the baselines in almost all scenarios, when replacing the original loss with our proposed adversarial CL loss.\n\n\nIn comparison with RoCL, $\\mathcal{L}^{IP}$ brings a significant performance boost on both standard and robust accuracy consistently across different training methods (row 4 vs. 3, row 14 vs. 13) (except for the RA of AFF on CIFAR10).\nCompared to AdvCL, $\\mathcal{L}^{IP}$ also brings a noticeable margin (row 8 vs. 7, row 18 vs. 17). This can be attributed to the fact that $\\mathcal{L}^{IP}$ aims to lower the priority of adversaries and prevent clean samples from moving towards other instances, which results in better instance discrimination and improves clean~\\cite{wu2018unsupervised} and robust accuracy.\n$\\mathcal{L}^{HN}$ also yields a substantial boost on robust and standard accuracy (\\textit{e.g.}, row 15 vs. 13).\nWe hypothesize this is because $\\mathcal{L}^{HN}$ helps\nalert the model to adversarial samples by assigning higher weights to adversaries in the negative contrast. \nWhen combined together, in most settings both standard and robust accuracy are further boosted, especially for Linear Probing. 
This is because directly mitigating the negative impact of \\textit{identity confusion} by $\\mathcal{L}^{IP}$ and helping adversarial samples escape false identities by $\\mathcal{L}^{HN}$ complement each other, bringing a further performance boost.\n\\subsection{Transferring Robust Features}\n\n\\setlength{\\tabcolsep}{0pt}\n\\setlength{\\arrayrulewidth}{0.2mm}\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{table}[t]\n\\fontsize{8.5}{10}\\selectfont \n\\centering\n\\caption{Transferring results from CIFAR-10\/100 to STL-10, compared with AdvCL~\\cite{fan2021does}, evaluated in Standard accuracy (SA) and Robust accuracy (RA) across different finetuning methods with ResNet-18\n}\n\\scalebox{0.9}{\n\\begin{tabularx}{1\\textwidth}\n{\n P{2cm}\n m{2.4cm}\n \n \n \n \n \n \n P{1.2cm}\n P{1.3cm}\n P{1.2cm}\n P{1.4cm}\n P{1.2cm}\n P{1.3cm}\n}\n\\toprule\n\\multirow{3}{*}{Dataset} & \\multirow{3}{*}{\\makecell[c]{Pre-training \\\\ Methods}} & \\multicolumn{6}{c}{Finetuning Strategies} \\\\ \\cline{3-8} \n & & \\multicolumn{2}{c}{Linear Probing} & \\multicolumn{2}{c}{\\makecell[c]{Adversarial Linear \\\\ Finetuning}} & \\multicolumn{2}{c}{\\makecell[c]{Adversarial Full \\\\ Finetuning}} \\\\ \\cline{3-8} \n & & SA & RA & SA & RA & SA & RA \\\\\n \\hline\\hline\n\\multirow{4}{*}{\\makecell[c]{CIFAR10\\\\$\\downarrow$\\\\STL10}} & AdvCL~\\cite{fan2021does} & 64.45 & 37.25 & 60.86 & 38.84 & 67.89 & 38.78 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 64.83 & 37.30 & 61.95 & 38.90 & \\textbf{68.25} & 39.03 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 65.24 & \\textbf{38.18} & \\textbf{62.83} & \\textbf{39.70} & 67.88 & \\textbf{39.75} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{67.19} & 37.00 & 61.34 & 39.35 & 67.95 & 39.12 \\\\ \\hline\n\\multirow{4}{*}{\\makecell[c]{CIFAR100\\\\$\\downarrow$\\\\STL10}} & AdvCL~\\cite{fan2021does} & 52.28 & 30.01 & 49.84 & 32.14 & 63.13 & 35.24 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 52.65 & \\textbf{31.33} & 50.18 & 33.15 & 63.26 & 
\\textbf{35.34} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 51.88 & 31.29 & \\textbf{50.73} & \\textbf{33.62} & 62.91 & 34.88 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{53.41} & 31.30 & 51.10 & 33.23 & \\textbf{63.69} & 35.09 \\\\ \\bottomrule\n\\end{tabularx}}\n\\label{table:2}\n\\end{table}\n\nLearning robust features that are transferable is a main goal of self-supervised adversarial learning. It is of great significance if models pre-trained on a huge amount of unlabeled data possess good transferability with merely lightweight finetuning. For example, Linear Probing is often 10$\\times$ quicker than conventional adversarial training, with only a linear classifier trained.\n\nHere we evaluate the robust transferability of the proposed approach, by transferring from CIFAR-10 and CIFAR-100 to STL-10, \\textit{i.e.}, we use unlabeled images in CIFAR-10\/-100 to pretrain, then use STL-10 to finetune and evaluate the learned models. As shown in Table~\\ref{table:2}, our methods yield both clean and robust accuracy gains in most settings, up to 1.48\\% (33.62\\% vs. 32.14\\%) in robust accuracy and 2.74\\% (67.19\\% vs. 64.45\\%) in clean accuracy.\n\n\n\\subsection{Ablation Studies}\n\nWe design a basic adversarial contrastive model, named CoreACL, to study the effect of each component in our proposed methods.\nCoreACL only contains the contrastive component with three positive views: two clean augmented views and one adversarial view of the original image.\n\n\\subsubsection{Fixed $\\alpha$ for Asymmetric Similarity Function.}\nWe first use a fixed $\\alpha$ without adaptive annealing to explore the effectiveness of \\textit{inferior positives}. 
Figure~\\ref{fig:ablation1}\n\\setlength{\\intextsep}{12pt}\n\\begin{wrapfigure}[14]{r}{8.0cm}\n \\centering\n \\includegraphics[scale=0.27]{images\/ab2.png}\n \\caption{Deep probing for the asymmetric similarity function with different $\\alpha$.}\n \\label{fig:ablation1}\n\\end{wrapfigure}\npresents the results with different $\\alpha$ values when training models for 200 epochs.\nRecall that $\\alpha$ represents the tendency of the clean sample to head towards the adversarial sample. $\\alpha < 0.5$ means clean samples move less toward the adversaries\n(vice versa for $\\alpha > 0.5$), and $\\alpha = 0.5$ degenerates to the original symmetric similarity function. \n\n\nCompared with symmetric CoreACL ($\\alpha=0.5$), our approach achieves better robustness and accuracy when $\\alpha<0.5$ (adversarial examples are treated as \\textit{inferior positives}). Intriguingly, when $\\alpha=1.0$, the extreme case in which only clean samples are attracted by adversaries, we observe the presence of a trivial solution~\\cite{chen2021exploring}, that is, all images collapse into one point. This validates our observation that adversaries with false identities are indeed pulling their positives towards other instances in the positive contrasts, with the risk of drawing all samples together.\nIt is also worth noting that when $\\alpha < 0.2$, performance begins to drop, showing that a small but non-zero $\\alpha$ is empirically the optimal setting.\n\n\\subsubsection{Fixed $\\alpha$ vs. $\\alpha$-Annealing.}\nAs shown in Table 3, compared to CoreACL, fixed $\\alpha$ obtains higher clean accuracy (81.29\\% vs. 78.90\\%) but with no gain on robust accuracy. Adaptive $\\alpha$-annealing achieves both higher robust accuracy (51.37\\% vs. 50.27\\%) and better clean accuracy (79.46\\% vs. 
78.90\\%).\n\n\\setlength{\\intextsep}{3pt} \n\\begin{wraptable}[12]{r}{5.5cm}\n\t\\centering \n\t\\fontsize{8.5}{9}\\selectfont \n\t\\begin{threeparttable}\n\t\t\\caption{Ablation studies, evaluated in SA, RA and time cost. Trained for 400 epochs on 2 Tesla V100 GPUs.}\n\t\t\\label{tab:headings} \n\t\t\\begin{tabular}\n {\n m{1.95cm}\n P{1cm}\n P{1cm}\n P{1.5cm}\n }\n\t\t\t\\toprule \n\t\t\t\\makecell[c]{Methods} & SA & RA & Time Cost (s\/epoch)\\cr \n\t\t\t\\midrule \n\t\t\t\\noalign{\\smallskip}\n\t\t\tCoreACL & 78.90 & 50.27 & 96 \\\\\n \\ w\/fixed $\\alpha$ & 81.29 & 50.24 & 96\\\\\n \\ w\/annealing $\\alpha$ & 79.46 & 51.37 & 101 \\\\ \\hline\n \\ w\/$\\mathcal{L}^{IP+HN}$ & 81.19 & 51.31 & 101 \\\\\n AdvCL & 81.35 & 51.00 & 182 \\\\\n\t\t\t\\bottomrule \n\t\t\\end{tabular} \n\t\\end{threeparttable} \n\t\\label{table:3}\n\\end{wraptable}\n\\subsubsection{Comparison with AdvCL.}\nTable 3 reports the performance and computation cost comparisons with AdvCL.\nCoreACL with $\\mathcal{L}^{IP+HN}$ achieves performance similar to AdvCL, which is equivalent to integrating additional components (a high-frequency view and pseudo-supervision) into CoreACL. The computation time of AdvCL is almost twice that of $\\rm w\/\\mathcal{L}^{IP+HN}$, which could be due to the extra computation on contrasting high-frequency views and the pseudo-labeled adversarial training.\nOur methods only need to compute the pair-wise Euclidean distance for $\\alpha$-annealing in $\\mathcal{L}^{IP}$, and no extra cost is introduced by $\\mathcal{L}^{HN}$. 
\n\n\\subsubsection{Effect of Hard Negatives.}\nTo investigate the effect of hard negatives, we\n\\setlength{\\tabcolsep}{4pt}\n\\setlength{\\intextsep}{6pt} \n\\begin{wraptable}[12]{r}{8cm}\n\\renewcommand\\arraystretch{1.5}\n\t\\centering \n\t\\fontsize{9}{9}\\selectfont \n\t\\scalebox{0.9}{\n\t\\begin{threeparttable}\n\t\t\\caption{Ablation studies for AdvCL with hard negatives (AdvCL-HN), evaluated under Linear Probing (LP), Adversarial Linear Finetuning (ALF) and Adversarial Full Finetuning (AFF)} \n\t\t\\label{tab:performance_comparison} \n\t\t\\begin{tabular}{ccccccc} \n\t\t\t\\toprule \n\t\t\t\\multirow{2}{*}{Methods}& \n\t\t\t\\multicolumn{2}{c}{LP}&\\multicolumn{2}{c}{ALF}&\\multicolumn{2}{c}{AFF}\\cr \n\t\t\t\\cmidrule(lr){2-3} \\cmidrule(lr){4-5} \\cmidrule(lr){6-7} \n\t\t\t&SA&RA&SA&RA&SA&RA\\cr \n\t\t\t\\midrule \n \n\t\t\t{AdvCL-HN}&{81.34}&{\\bf 52.96}&{78.69}&{\\bf 53.20}&83.44&{\\bf 54.07}\\cr \n\t\t\t{w\/o debias}&{\\bf 81.52}&51.61&{\\bf 78.89}&52.34&{\\bf 83.73}&{54.01}\\cr \n\t\t{w\/o reweight}&76.93&50.01&73.49&49.86&81.74&52.60\\cr \n\t\t\t\\bottomrule \n\t\t\\end{tabular} \n\t\\end{threeparttable}} \n\\end{wraptable}\nevaluate each component (negatives debiasing~\\cite{chuang2020debiased} and reweighting~\\cite{robinson2020contrastive}), as shown in Table 4. With negatives-debiasing removed, we observe a decrease in robust accuracy, with slightly increased standard accuracy. We hypothesize that without debiasing, semantically similar adversarial representations that should be mapped closely are pushed away instead. 
\nIn addition, the removal of negatives reweighting results in a sharp performance drop, showing that viewing adversarial views as \\textit{hard negatives} with higher weights plays a key role in discriminating adversarial samples.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.42]{images\/hist-2.png}\n \\caption{Histograms of Euclidean distance (normalized) distribution of all negative pairs learned by different objectives in (a) CIFAR10 (first row) and (b) CIFAR100 (second row). Baseline is AdvCL~\\cite{fan2021does}; IP: baseline with Inferior Positives; HN: baseline with Hard Negatives. On each dataset, our methods are better at differentiating different instances (with larger distance between negative pairs)\n }\n \\label{fig:ablation2}\n\\end{figure}\n\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[scale=0.39]{images\/rocl-tsne3.png}\n \\caption{t-SNE visualizations in a global view on CIFAR-10 validation set. The embeddings are learned by different self-supervised pre-training methods (SimCLR(a), RoCL(b) and RoCL-IP(c)) (\\textit{colored figure})\n }\n \\label{fig:global tsne}\n\\end{figure}\n\n\\subsection{Qualitative Analysis}\n\nFigure \\ref{fig:ablation2} shows the distribution of normalized Euclidean distance over all negative pairs. We take AdvCL~\\cite{fan2021does} as the baseline and compare it with its enhanced versions with our methods.\nGenerally, our methods can shift the original distribution curve right (larger distance), meaning that treating adversaries as inferior positives or hard negatives encourages the model to separate negative pairs further apart and induce better instance discrimination. 
This suggests that our proposed methods effectively mitigate the negative impacts of \\textit{identity confusion}.\n\nFigure \\ref{fig:global tsne} provides 2-D visualizations (t-SNE~\\cite{van2008visualizing} on CIFAR-10) of the embeddings learned by SimCLR~\\cite{pmlr-v119-chen20j}, RoCL~\\cite{kim2020adversarial} and RoCL enhanced by $\\mathcal{L}^{IP}$ (RoCL-IP). Each class is represented in one color.\nCompared to SimCLR, RoCL representations are corrupted by adversaries and exhibit poor class discrimination. RoCL-IP yields better class separation compared with RoCL.\nThis shows that the asymmetric similarity consideration eases instance-level identity confusion.\n\n\n\\section{Introduction}\n\nWell-performing models trained on clean data can suffer miserably when exposed to simply-crafted adversarial samples~\\cite{szegedy2014intriguing,goodfellow2014explaining,carlini2017towards,dong2018boosting}.\nThere have been many adversarial defense mechanisms designed to boost model robustness using labeled data~\\cite{kannan2018adversarial,shafahi2019adversarial,zhang2019you,wong2020fast,zhang2019theoretically,zhu2019freelb,athalye2018obfuscated}. In practice, however, obtaining large-scale annotated data can be far more difficult and costly than acquiring unlabeled data. Leveraging easily-acquired unlabeled data for adversarial learning thus becomes particularly attractive.\n\nContrastive Learning (CL)~\\cite{hadsell2006dimensionality}, which performs instance discrimination~\\cite{wu2018unsupervised} (Figure 1 (a)) by maximizing agreement between augmentations of the same instance in the learned latent features while minimizing the agreement between different instances,\nhas made encouraging progress in self-supervised learning~\\cite{pmlr-v119-chen20j,he2020momentum,chen2020improved,grill2020bootstrap}. 
Due to its effectiveness in learning rich representations and competitive performance over fully-supervised methods, CL has seen a surge of research in recent years, such as\npositive sampling~\\cite{pmlr-v119-chen20j,tian2020contrastive,bachman2019learning,tian2020makes}, negative sampling~\\cite{he2020momentum,kalantidis2020hard,chuang2020debiased,wu2018unsupervised}, \npair reweighting~\\cite{chuang2020debiased,robinson2020contrastive}, and different contrast methods~\\cite{grill2020bootstrap,caron2020unsupervised,li2020prototypical}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.43]{images\/intro-18.png}\n\\caption{\nIllustrations of ($a$) Contrastive Learning; ($b$) Adversarial Contrastive Learning; and our proposed methods for viewing adversarial samples asymmetrically as: ($c$) Inferior Positives (asymmetric contrast), and ($d$) Hard Negatives. In each circle, data points are augmentations of the same instance, sharing the same \\textit{Identity}. In ($b$), the Adversarial sample (\\textit{A}) shares the same Identity (\\textit{ID:2}) as the current Instance (\\textit{I}), but resides close to a different Identity (\\textit{ID:1}); thus the \\textit{Identity Confusion} problem occurs. Specifically, the Adversarial sample (\\textit{A}) of Instance \\textit{(I)} exhibits representations similar to those of the Negative sample (\\textit{N}) of (\\textit{I}), which makes the positive contrast (\\textit{A}$\\leftrightarrow$\\textit{I}) and the negative contrast (\\textit{N}$\\leftrightarrow$\\textit{I}) undermine each other in the training process (\\textit{colored figure}). \n}\n\\label{fig:intro}\n\\end{figure}\n\nRecently, contrastive learning has been extended to adversarial learning tasks in a self-supervised manner, leading to a new area of \\textit{adversarial contrastive learning} (Adversarial CL)~\\cite{kim2020adversarial,fan2021does,jiang2020robust,gowal2020self}. 
\nThe main idea is to generate adversarial samples as additional positives of the same instance~\\cite{kim2020adversarial,fan2021does,jiang2020robust} through instance-wise attacks, and to maximize the similarity between clean views of the instance and their adversarial counterparts as in CL, while also solving the min-max optimization problem following the canonical adversarial learning objective~\\cite{madry2018towards,shafahi2019adversarial,zhang2019you,wong2020fast,zhang2019theoretically,zhu2019freelb}.\nFor example, RoCL~\\cite{kim2020adversarial} first proposed an attack mechanism against the contrastive loss to confuse the model on instance-level identity, in a self-supervised adversarial training framework. AdvCL~\\cite{fan2021does} proposed to minimize the gap between unlabeled contrast and labeled finetuning by introducing pseudo-supervision in the pre-training stage.\n\n\nAlthough these Adversarial CL methods showed improvements in model robustness, we observe that a direct\nextension from CL to adversarial learning (AL) can introduce ineffective CL updates during training.\nThe core problem is that the added worst-case perturbations $\\delta$ no longer guarantee the preservation of instance-level identity~\\cite{kim2020adversarial} (\\textit{i.e.}, different from other data augmentation methods, adversarial samples can reside far away from the current instance in the feature space after several attack iterations, because the attack objective is to push adversaries away from the current instance while approximating other instances, counter to the CL objective). 
\nAs illustrated in Figure~\\ref{fig:intro}(b), when the adversarial sample (\\textit{A}) of the current instance (\\textit{I}) is in close proximity to negative samples (\\textit{N}), the CL objective minimizes the agreement between negative samples and the current instance (\\textit{I} and \\textit{N} are pushed away from each other), while the AL objective maximizes the agreement between adversarial samples and the current instance (\\textit{A} and \\textit{I} are pulled together, as \\textit{A} is considered an augmented view of \\textit{I}). Meanwhile, \\textit{A} and \\textit{N} share similar representations, which renders the two objectives contradictory to each other. We term this conflict ``\\textit{identity confusion}'': $A$ attracts and `confuses' $I$ with a false identity induced by $N$, which impedes both CL and AL from achieving their respective best performance.\n\nTo address this issue of \\textit{identity confusion}, we propose to treat adversarial samples unequally and discriminatively, and design a generic asymmetric InfoNCE objective (\\textit{A-InfoNCE}), in order to model the asymmetric contrast strengths between positive\/negative samples.\nFirstly, to mitigate the direct pull between the adversarial sample (\\textit{A}) and the current instance (\\textit{I}) (Figure~\\ref{fig:intro} (c)) that might dampen the effectiveness of CL, we propose to treat adversarial samples as \\textit{inferior positives} that induce weaker learning signals and attract their counterparts to a lower degree when performing positive contrasts.\nThis asymmetric consideration in AL provides a trade-off and reduces the conflicting impact on the CL loss.\n\nSecondly, to encourage adversarial samples (\\textit{A}) to escape from false identities induced by negative samples (\\textit{N}) that share similar representations to (\\textit{A}) (pushing \\textit{A} away from \\textit{N}) (Figure~\\ref{fig:intro}(d)), we consider adversarial samples (\\textit{A}) as \\textit{hard 
negatives}~\\cite{robinson2020contrastive} of other negative samples (\\textit{N}), by strengthening the negative contrast between \\textit{A} and \\textit{N} in CL computation.\nTo effectively sample true adversarial negatives and re-weight each sample, we follow positive-unlabeled learning~\\cite{du2014analysis,elkan2008learning} and contrastive negative reweighting~\\cite{robinson2020contrastive,chuang2020debiased} practices. \n\nOur contributions are summarized as follows: \n1)\n We propose a generic asymmetric InfoNCE loss, \\textit{A-InfoNCE}, to address the \\textit{identity confusion} problem in Adversarial CL, by viewing adversarial samples \n as \\textit{inferior positives} or \\textit{hard negatives}.\n \n \n \n 2) Our approach is compatible with existing Adversarial CL methods, by simply replacing the standard CL loss with \\textit{A-InfoNCE}. \n 3) Experiments on CIFAR-10, CIFAR-100 and STL-10 show that our approach consistently outperforms existing Adversarial CL methods.\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{\\large\\textsf{\\refname}}%\n \\@mkboth{\\MakeUppercase\\refname}{\\MakeUppercase\\refname}%\n \\list{\\@biblabel{\\@arabic\\c@enumiv}}%\n {\\settowidth\\labelwidth{\\@biblabel{#1}}%\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\n \\@openbib@code\n \\usecounter{enumiv}%\n \\let\\p@enumiv\\@empty\n \\renewcommand\\theenumiv{\\@arabic\\c@enumiv}}%\n \\sloppy\n \\clubpenalty4000\n \\@clubpenalty \\clubpenalty\n \\widowpenalty4000%\n \\sfcode`\\.\\@m}\n {\\def\\@noitemerr\n {\\@latex@warning{Empty `thebibliography' environment}}%\n \\endlist}\n\\makeatother\n\n\\topmargin -2.0cm\n\\oddsidemargin -1.0cm\n\\textwidth 18.5cm\n\\textheight 24cm\n\\footskip 1.0cm\n\n\n\n\\newenvironment{sciabstract}{%\n\\begin{quote} \\bf}\n{\\end{quote}}\n\n\\title{\\textsf{\\textbf{A quantum Fredkin gate}}}\n\n\n\\author\n{Raj B. 
Patel,$^{1,\\ast}$ Joseph Ho,$^{1}$ Franck Ferreyrol,$^{1,2}$ Timothy C. Ralph,$^{3}$ \\\\\n\\& Geoff J. Pryde,$^{1,\\ast}$\\\\\n\\\\\n\\normalsize{$^{1}$CQC2T and Centre for Quantum Dynamics, Griffith University,}\\\\\n\\normalsize{Brisbane 4111, Australia}\\\\\n\\normalsize{$^{2}$ Laboratoire Photonique, Numerique et Nanosciences, Institut d'Optique,}\\\\\n\\normalsize{CNRS and Universit\\'{e} de Bordeaux, Talence, France}\\\\\n\\normalsize{$^{3}$ CQC2T and School of Mathematics and Physics, University of Queensland, }\\\\\n\\normalsize{Brisbane 4072, Australia}\\\\\n\\normalsize{$^\\ast$To whom correspondence should be addressed; E-mail: r.patel@griffith.edu.au}\\\\\n\\normalsize{or g.pryde@griffith.edu.au}\n}\n\n\n\\date{}\n\n\n\n\n\n\\begin{document}\n\n\n\\baselineskip12pt\n\n\\twocolumn[\n \\begin{@twocolumnfalse}\n\\maketitle\n\n\\begin{sciabstract}\nKey to realising quantum computers is minimising the resources required to build logic gates into useful processing circuits. While the salient features of a quantum computer have been shown in proof-of-principle experiments, difficulties in scaling quantum systems have made more complex operations intractable. This is exemplified by the classical Fredkin (controlled-SWAP) gate for which, despite theoretical proposals, no quantum analogue has been realised. By adding control to the SWAP unitary, we use photonic qubit logic to demonstrate the first quantum Fredkin gate, which promises many applications in quantum information and measurement. We implement example algorithms and generate the highest-fidelity three-photon GHZ states to date. The technique we use allows one to add a control operation to a black-box unitary, something impossible in the standard circuit model. 
Our experiment represents the first use of this technique to control a two-qubit operation and paves the way for larger controlled circuits to be realised efficiently.\n\\end{sciabstract}\n\\end{@twocolumnfalse}\n]\n\n\\section*{\\large\\textsf{Introduction}}\nOne of the greatest challenges in modern science is the realisation of quantum computers\\cite{Kok2007,OBrien2009,Ladd2010} which, as they increase in scale, will allow enhanced performance of tasks in secure networking, simulations, distributed computing and other key tasks where exponential speedups are available. Processing circuits to realise these applications are built up from logic gates that harness quantum effects such as superposition and entanglement. At present, even small-scale and medium-scale quantum computer circuits are hard to realise because of the requirement to control enough quantum systems sufficiently well in order to chain together many gates into circuits. One example of this is the quantum Fredkin gate, which requires at least five two-qubit gates\\cite{Smolin1996} to be implemented in the standard circuit model. Thus, despite featuring prominently in schemes for quantum computation\\cite{Vandersypen2001,Lopez2012,Lanyon2007}, error-correction\\cite{Chuang1996,Barenco1997}, cryptography\\cite{Buhrman2001,Horn2005,Gottesman2001}, and measurement\\cite{Ekert2002,FuiasekFilip2002}, no such gate has been realised to date.\n\nThe quantum Fredkin gate, shown in Fig. 1A, is a three-qubit gate whereby, conditioned on the state of the control qubit, the quantum states of the two target qubits are swapped. The original, classical version of the gate, first proposed by Fredkin \\cite{Fredkin1982}, also serves as one of the first examples of a reversible logic operation where the number of bits is conserved and no energy is dissipated as a result of erasure. 
In the framework of universal quantum computation, gates are also reversible, so it may seem natural to ask whether it is possible to construct a quantum version of the Fredkin gate. The first design of the quantum Fredkin gate, proposed by Milburn \\cite{Milburn1989}, used single photons as qubits and cross-Kerr nonlinearities to produce the necessary coherent interactions. Subsequent schemes utilising linear optics developed these ideas \\cite{Chau1995,Smolin1996,Fiurasek2006,Fiurasek2008,Gong2008} by using ancilla photons, interference, and multiple two-qubit\\cite{OBrien2003,Pooley2012} and single-qubit gates. However, concatenating multiple probabilistic gates in this fashion typically leads to a multiplicative reduction in the overall probability of success of $<1\/100$. Hence it would be desirable to construct a quantum Fredkin gate directly without decomposition and avoid the associated resource overhead.\n\nWe begin by describing the concept of our experiment. We perform the controlled-SWAP operation by adding control to the SWAP unitary $U_{SWAP}$, applying the technique of Zhou \\textit{et al.} \\cite{Zhou2011} to greatly reduce the complexity of quantum circuits. The notion of adding control to a black-box unitary is forbidden or difficult in many architectures\\cite{Araujo2014,Thompson2013} -- optics lends itself well to this approach because the optical implementation of the unitary leaves the vacuum state unchanged. Here we utilise this method to simplify a controlled multi-qubit operation.\n \\begin{figure*}\n \\begin{center}\n \\includegraphics[width=\\textwidth]{Fig1_S.pdf}\n \\end{center}\n\\noindent\\small{{\\bf Fig. 1.} Experimental arrangement and truth table measurements. \\textbf{A}, The quantum Fredkin gate circuit. The states of the target qubits are either swapped or not swapped depending on the state of the control qubit. \\textbf{B}, Concept of our experiment. 
Two SPDC photon sources allow production of path entanglement such that modes $R$ and $Y$ are entangled with modes $B$ and $G$. The SWAP operation is carried out on the path modes, depending on the control photon's state, such that arrival of the control photon indicates a system state of $\\alpha|H\\rangle^{C}|\\psi\\rangle^{T1}|\\varphi\\rangle^{T2} + \\beta|V\\rangle^{C}|\\varphi\\rangle^{T1}|\\psi\\rangle^{T2}$. \\textbf{C}, The experimental arrangement. Entangled photons are produced via SPDC (see Materials and Methods). Entering the gate via single-mode fiber, the two target photons are sent through a PBS. The path-entangled state in Eq. \\ref{4GHZ} is produced after each target photon enters a displaced Sagnac interferometer and the which-path information is erased on an NPBS. QWPs and HWPs encode the polarisation state in Eq. \\ref{4GHZU}. The control consists of a polarisation beam displacer interferometer. The desired control state is encoded onto modes $1R$ and $1B$ and coherently recombined. A tilted HWP is used to set the phase of the output state. Successful operation is heralded by four-fold coincidence events between the control, target, and trigger detectors. \\textbf{D}, Ideal (transparent bars) and measured (solid bars) truth table data for our gate. A total of 620 four-fold events were measured for each of the eight measurements, giving $\\left\\langle\\mathcal{O}\\right\\rangle = 96\\pm4\\%$.}\n \\end{figure*}\nA key idea in our demonstration is to use entanglement in a non-qubit degree of freedom (we use the photon's path mode) to drive the operation of the gate. This path entanglement can be produced in different ways. In our demonstration (Fig. 1B), it is generated from spontaneous parametric down-conversion (SPDC). 
Given the physical arrangement of the circuit and that we only accept detection events where a single photon is counted at each of the four outputs simultaneously, the optical quantum state produced by SPDC is converted to the required four-photon path-mode entangled state (see Materials and Methods) and has the form\n\\begin{equation}\\label{4GHZ}\n \\left(|11\\rangle_{B}|11\\rangle_{G}|00\\rangle_{R}|00\\rangle_{Y} + |00\\rangle_{B}|00\\rangle_{G}|11\\rangle_{R}|11\\rangle_{Y}\\right)\/\\sqrt{2}\n\\end{equation}\n where $B$, $R$, $Y$, and $G$ refer to path-modes and, for example, $|11\\rangle_{B}$ indicates a photon occupying mode $1B$ and another occupying $2B$. The path-modes are distributed throughout the circuit such that $U_{SWAP}$ is applied only to the $B$ and $G$ modes. The qubit state is encoded on the polarisation of the photon. Because the photons are in a spatial superposition, polarisation preparation optics must be applied to both path-modes of each photon. Hence, an arbitrary, separable, three-qubit state $|\\xi\\rangle|\\psi\\rangle|\\varphi\\rangle$ can be prepared as an input to the gate. In particular, the control qubit is encoded on modes $1R$ and $1B$, target 1 is encoded on modes $2R$ and $2B$, and target 2 is encoded on modes $1G$ and $1Y$, yielding\n\\begin{equation}\\label{4GHZU}\n \\left(|\\xi\\rangle^{C}_{1B}|\\psi\\rangle^{T1}_{2B}|\\varphi\\rangle^{T2}_{1G}|H\\rangle^{Tr}_{2G} + |\\xi\\rangle^{C}_{1R}|\\psi\\rangle^{T1}_{2R}|\\varphi\\rangle^{T2}_{1Y}|V\\rangle^{Tr}_{2Y}\\right)\/\\sqrt{2}\n\\end{equation}\nThe two control modes $1R$ and $1B$ are mixed on a polarising beam splitter (PBS), whereas a 50:50 non-polarising beam splitter (NPBS) is used to erase the path information in the target and trigger arms. The SWAP is implemented via rearrangement of the path-modes such that the target modes $2B$ and $1G$ are swapped whereas $2R$ and $1Y$ are not. 
Successful operation of the gate occurs when photons are detected at the control, target 1, and target 2 detectors (simultaneously with a photon detection at either trigger detector). The polarisation state of the three-qubit system, given the required modes are occupied, is $\\alpha|H\\rangle^{C}|\\psi\\rangle^{T1}|\\varphi\\rangle^{T2} + \\beta|V\\rangle^{C}|\\varphi\\rangle^{T1}|\\psi\\rangle^{T2}$ as expected from application of the Fredkin gate on the state $|\\xi\\rangle^{C}|\\psi\\rangle^{T1}|\\varphi\\rangle^{T2}$ where $|\\xi\\rangle = \\alpha|H\\rangle + \\beta|V\\rangle$. Taking into consideration the probability of recording a four-fold coincidence, successful execution of the gate occurs one-sixteenth of\n the time, on average. This can be increased to one-fourth of the time by collecting the target photons from both NPBS outputs.\n\n \\section*{\\large\\textsf{Results}}\nThe experimental arrangement of the quantum Fredkin gate is shown in Fig. 1C and consists of three interferometers designed to be inherently phase-stable. Pairs of polarisation-entangled photons, produced by two SPDC crystals (see Materials and Methods), impinge on a PBS. Two orthogonally polarised photons, one from each source, are sent to separate displaced Sagnac interferometers. Initially, they are incident on a beam splitter where one half of the interface acts as a PBS and the other half acts as an NPBS. Entering at the PBS side, photons may travel along counterpropagating path modes where the polarisation state $|\\psi\\rangle$ is encoded onto one mode and the state $|\\varphi\\rangle$ is encoded on the other. The two paths are then recombined on the NPBS side of the beam splitter where the path information is erased (see Materials and Methods), giving the path-mode entangled state in Eq. \\ref{4GHZ} whilst the polarisation encoding procedure leads to the state in Eq. \\ref{4GHZU}. 
The control of the gate is realised in a polarisation interferometer consisting of two calcite beam displacers. The desired polarisation state of the control is encoded onto modes $1R$ and $1B$, which are coherently recombined in the second beam displacer. Given successful operation (arrival of a photon at the control detector), the preparation of the control photon in $|H\\rangle = |1\\rangle$ projects the target photons onto path modes $1G$ and $2B$ which undergo SWAP; conversely, preparing $|V\\rangle = |0\\rangle$ projects the target photons onto path modes $2R$ and $1Y$, which undergo the identity operation. In practice, the trigger arm consists of a half-wave plate (HWP) whose optic axis (OA) is set to $22.5^{\\circ}$, producing diagonal $|D\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle + |V\\rangle)$ or anti-diagonal $|A\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle - |V\\rangle)$ polarised photons, and a PBS. Successful operation is heralded by measuring four-fold coincidences across the trigger, control and two target detectors.\n\nThe logical operation of the gate was measured by performing eight measurements, one for each of the possible logical inputs. For each input we measure a total of 620 four-fold events distributed across the eight possible output states. Under ideal operation, for a given input, there is a single output. The solid bars in Fig. 1D depict the experimentally measured truth table data, $M_{exp}$, whereas the transparent bars represent the ideal truth table $M_{ideal}$. To quantify the mean overlap between $M_{exp}$ and $M_{ideal}$, we calculate $\\left\\langle\\mathcal{O}\\right\\rangle = Tr \\left(M_{exp}M_{ideal}^{T}\\right)\/Tr\\left(M_{ideal}M_{ideal}^{T}\\right)= 96\\pm4\\%$, which confirms excellent performance in the logical basis. The slight reduction in fidelity is most likely due to the imperfect extinction of our polarisation optics.\n \\begin{figure*}\n \\begin{center}\n \\includegraphics{Fig2_S.pdf}\n \\end{center}\n \\noindent\\small{{\\bf Fig. 
2.} Real (left) and imaginary (right) parts of the reconstructed density matrices for our four GHZ states. Fidelity and purity were calculated for each state. \\textbf{A}, $|GHZ_{1}^{+}\\rangle$: $F = 0.88 \\pm 0.01$ and $P = 0.79 \\pm 0.02$. \\textbf{B}, $|GHZ_{1}^{-}\\rangle$: $F = 0.90 \\pm 0.01$ and $P = 0.83 \\pm 0.02$. \\textbf{C}, $|GHZ_{2}^{+}\\rangle$: $F = 0.93 \\pm 0.01$ and $P = 0.87 \\pm 0.02$. \\textbf{D}, $|GHZ_{2}^{-}\\rangle$: $F = 0.92 \\pm 0.01$ and $P = 0.85 \\pm 0.02$.}\n \\end{figure*}\n\nWe demonstrate the full quantum nature of our gate by preparing the control in a superposition $|\\xi\\rangle = \\frac{1}{\\sqrt{2}}(|0\\rangle \\pm |1\\rangle)$ which places the gate in a superposition of the SWAP and identity operations. Using our gate, we produce four of the eight maximally entangled three-photon Greenberger-Horne-Zeilinger (GHZ) states, namely\n\\begin{align}\\label{3GHZa}\n \\frac{1}{\\sqrt{2}}(|0\\rangle &\\pm |1\\rangle)^{C}|1\\rangle^{T1}|0\\rangle^{T2}\\rightarrow |GHZ_1^{\\pm}\\rangle \\nonumber\\\\\n &= \\frac{1}{\\sqrt{2}}\\left(|0\\rangle^{C}|1\\rangle^{T1}|0\\rangle^{T2} \\pm e^{i(\\phi + \\theta(\\vartheta))}|1\\rangle^{C}|0\\rangle^{T1}|1\\rangle^{T2}\\right)\n\\end{align}\nand\n \\begin{align}\\label{3GHZb}\n \\frac{1}{\\sqrt{2}}(|0\\rangle &\\pm |1\\rangle)^{C}|0\\rangle^{T1}|1\\rangle^{T2}\\rightarrow |GHZ_2^{\\pm}\\rangle \\nonumber\\\\\n &= \\frac{1}{\\sqrt{2}}\\left(|0\\rangle^{C}|0\\rangle^{T1}|1\\rangle^{T2} \\pm e^{i(\\phi + \\theta(\\vartheta))}|1\\rangle^{C}|1\\rangle^{T1}|0\\rangle^{T2}\\right)\n\\end{align}\nHere $\\phi$ is a phase shift intrinsic to the gate, and $\\theta(\\vartheta)$ is a corrective phase shift that can be applied by tilting a HWP at OA by an angle $\\vartheta$, such that $\\phi + \\theta(\\vartheta) = 2n\\pi$ (see Materials and Methods). 
In doing so, we are able to test the coherent interaction of all three qubits in the gate, which is a key requirement for constructing universal quantum computers. For each of the four states in Eqs. \\ref{3GHZa} and \\ref{3GHZb}, we perform three-qubit quantum state tomography (QST) to fully characterise the state. The control and target qubits are measured independently in the $D\/A$ basis, which we denote as $\\sigma_x$; in the $R\/L$ basis $(\\sigma_y)$, where $|R\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle + i|V\\rangle)$ and $|L\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle - i|V\\rangle)$; and in the $H\/V$ basis $(\\sigma_z)$. Therefore, full state reconstruction can be carried out by a set of 27 measurement settings $(\\sigma_x\\sigma_x\\sigma_x, \\sigma_x\\sigma_x\\sigma_y...)$ effectively resulting in an over-complete set of 216 projective measurements as each measurement setting has eight possible outcomes. Figure 2 shows the real (left) and imaginary (right) parts of the reconstructed density matrices of the four GHZ states, each of which was calculated from $\\sim5000$ four-fold events using a maximum-likelihood algorithm. We measure fidelities and purities of $F = 0.88 \\pm 0.01$ and $P = 0.79 \\pm 0.02$ for $|GHZ_{1}^{+}\\rangle$, $F = 0.90 \\pm 0.01$ and $P = 0.83 \\pm 0.02$ for $|GHZ_{1}^{-}\\rangle$, $F = 0.93 \\pm 0.01$ and $P = 0.87 \\pm 0.02$ for $|GHZ_{2}^{+}\\rangle$, and $F = 0.92 \\pm 0.01$ and $P = 0.85 \\pm 0.02$ for $|GHZ_{2}^{-}\\rangle$. The errors were calculated from 500 samples of a Monte-Carlo simulation. These values are most likely limited by imperfect mode overlap at the NPBS in each displaced Sagnac interferometer. Nevertheless, to the best of our knowledge, these values are the highest reported for photonic GHZ states, surpassing the previous values reported in Hamel \\textit{et al.} \\cite{Hamel2014}.\n \\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\linewidth]{Fig3_S.pdf}\n \\end{center}\n \\noindent\\small{{\\bf Fig. 
3.} Measured correlations for violations of Mermin's and Svetlichny's inequalities. \\textbf{A}, Mermin's inequality resulting in $S_M = 3.58 \\pm 0.06$, a violation by 24 standard deviations. \\textbf{B}, Svetlichny's inequality with $S_{Sv} = 4.88 \\pm 0.13$, a violation by 7 standard deviations. Error bars were calculated from Poissonian counting statistics.}\n \\end{figure}\n\nWe perform further measurements to characterise the quality of the $|GHZ_{2}^{+}\\rangle$ state. GHZ states can show a strong contradiction between local hidden-variable theories and quantum mechanics \\cite{Pan2000}. Mermin \\cite{Mermin1990} derived a Bell-like inequality by imposing locality and realism for three particles, which holds for any local hidden-variable theory\n\\begin{align}\\label{Mermin}\nS_M &= |E(a',b,c') + E(a,b',c') + E(a,b,c) - E(a',b',c)|\\nonumber\\\\\n &\\leq 2\n\\end{align}\n\\begin{figure}[H]\n \\begin{center}\n \\includegraphics[width=\\linewidth]{Fig4_S.pdf}\n \\end{center}\n \\noindent\\small{{\\bf Fig. 4.} Estimations of nonlinear functionals of a single-qubit state with the quantum Fredkin gate. \\textbf{A}, Circuit diagram of the network. \\textbf{B}, Measurements of the overlap of two single-qubit states, $|\\langle{T1}|{T2}\\rangle|^2$. The fringe visibility or overlap was measured for states $|0\\rangle^{T1}|0\\rangle^{T2}$ (black), $\\frac{1}{\\sqrt{2}}\\left(|0\\rangle \\pm |1\\rangle\\right)^{T1}|0\\rangle^{T2}$ (red), and $|0\\rangle^{T1}|1\\rangle^{T2}$ (blue) with values $0.82 \\pm 0.02$, $0.52 \\pm 0.02$, and $0.05 \\pm 0.01$, respectively. \\textbf{C}, Measurements of the state purity. We measure visibilities ranging from $0.82 \\pm 0.02$ for a pure state to $0.03 \\pm 0.02$ for a maximally mixed state.}\n \\end{figure}\nThis inequality can be violated by performing measurements with settings $a = b = c = \\sigma_x$ and $a' = b' = c' = \\sigma_y$ with a maximum violation of $S_M = 4$. 
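As a numerical sanity check, the Mermin combination with these settings can be evaluated directly for an ideal GHZ state. The sketch below is our own illustration (the bit ordering and the identification of $|GHZ_2^{+}\rangle$ with $(|001\rangle + |110\rangle)/\sqrt{2}$ are assumptions); it reproduces the quantum maximum $S_M = 4$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
Y = np.array([[0, -1j], [1j, 0]])               # sigma_y

def corr(state, a, b, c):
    # Three-qubit correlation E(a,b,c) = <psi| a (x) b (x) c |psi>
    return (state.conj() @ np.kron(np.kron(a, b), c) @ state).real

# Assumed |GHZ_2^+> = (|001> + |110>)/sqrt(2); first qubit = most significant bit
psi = np.zeros(8, dtype=complex)
psi[0b001] = psi[0b110] = 1 / np.sqrt(2)

# Mermin settings: a = b = c = sigma_x, a' = b' = c' = sigma_y
S_M = abs(corr(psi, Y, X, Y) + corr(psi, X, Y, Y)
          + corr(psi, X, X, X) - corr(psi, Y, Y, X))
assert np.isclose(S_M, 4.0)   # ideal quantum maximum; cf. measured 3.58 +/- 0.06
```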
From the QST of $|GHZ_{2}^{+}\\rangle$, 747 of the total 5029 four-fold events can be used to calculate the correlation functions $E$ in Eq. \\ref{Mermin}; these results are shown in Fig. 3A. This leads to $S_M = 3.58 \\pm 0.06$, which is a violation by 24 standard deviations. The implication of using these particular measurement settings is that the state exhibits genuine tripartite entanglement.\n\nAn additional test, namely, the violation of Svetlichny's inequality, is required to test whether the state is capable of displaying tripartite non-locality \\cite{Svetlichny1987,Lavoie2009}. Non-local hidden variable theories cannot be ruled out with Mermin's inequality, as Mermin's inequality can be violated by models with arbitrarily strong correlations between two of the three particles. Svetlichny's inequality takes the form\n\\begin{align}\\label{Svet}\nS_{Sv} =& |E(a,b,c) + E(a,b,c') + E(a,b',c) - E(a,b',c')\\nonumber\\\\\n+& E(a',b,c) - E(a',b,c') - E(a',b',c) - E(a',b',c')|\\nonumber\\\\\n&\\leq 4\n\\end{align}\nwith settings $a = Sv1_\\pm$ (where $|Sv1_\\pm\\rangle = \\frac{1}{\\sqrt{2}}(|H\\rangle \\pm e^\\frac{i3\\pi}{4}|V\\rangle)$), $a' = Sv2_\\pm$ (where $|Sv2_\\pm\\rangle = \\frac{1}{\\sqrt{2}}(|H\\rangle \\pm e^\\frac{i\\pi}{4}|V\\rangle)$), $b' = c = \\sigma_x$, and $b = c' = \\sigma_y$. The maximum violation allowed by quantum mechanics is $S_{Sv} = 4\\sqrt{2}$. Figure 3B shows the correlations calculated from 2348 four-fold events leading to $S_{Sv} = 4.88 \\pm 0.13$, which is a violation by 7 standard deviations.\n\nAn application of the quantum Fredkin gate is the direct estimation of non-linear functionals\\cite{Ekert2002} of a quantum state, described by a density matrix $\\rho$, without recourse to QST. Here $\\rho = \\varrho_{T1} \\otimes \\varrho_{T2}$ is the density matrix of two separable subsystems. The circuit we employ is shown in Fig. 4A, where an interferometer is formed using two Hadamard gates and a variable phase shift $\\theta(\\vartheta)$. 
This interferometer is coupled to the controlled-SWAP operation of our quantum Fredkin gate such that measuring the control in the logical basis leads to an interference pattern given by $\\textrm{Tr}[U_{SWAP}\\varrho_{T1} \\otimes \\varrho_{T2}] = \\textrm{Tr}[\\varrho_{T1}\\varrho_{T2}] = ve^{i\\theta(\\vartheta)}$. If $\\varrho_{T1} \\neq \\varrho_{T2}$ then measurement of the fringe visibility provides, for pure states, a direct measure of the state overlap $|\\langle{T1}|{T2}\\rangle|^2$, where $\\varrho_{T1} = |{T1}\\rangle\\langle{T1}|$ and $\\varrho_{T2} = |{T2}\\rangle\\langle{T2}|$. Conversely, if $\\varrho_{T1} = \\varrho_{T2}$ then the fringe visibility provides an estimate of the length of the Bloch vector (that is, the purity $P = \\textrm{Tr}[\\varrho^2]$). We realise the Hadamard operations in Fig. 4A by setting the quarter wave plate (QWP) and HWP combinations to prepare or measure $\\sigma_x$.\n\nFigure 4B shows the results of preparing the target qubits in the states $|0\\rangle^{T1}|0\\rangle^{T2}$, $\\frac{1}{\\sqrt{2}}\\left(|0\\rangle + |1\\rangle\\right)^{T1}|0\\rangle^{T2}$, and $|0\\rangle^{T1}|1\\rangle^{T2}$, corresponding to ideal (measured) overlaps and visibilities of 1 ($0.82 \\pm 0.02$), 0.5 ($0.52 \\pm 0.02$), and 0 ($0.05 \\pm 0.01$), respectively. Although the maximum visibility we are able to measure is limited by the performance of the three interferometers in the circuit, our measurements show a clear reduction in visibility as the single-qubit states are made orthogonal. Figure 4C shows the result of setting $\\varrho_{T1} = \\varrho_{T2}$. As we increase the degree of mixture (see Materials and Methods), we observe a reduction in visibility from $0.82 \\pm 0.02$ for a pure state to $0.03 \\pm 0.02$ for a maximally mixed state.\n\n\\section*{\\large\\textsf{Discussion}}\nIn conclusion, we have used linear optics to perform the first demonstration of the quantum Fredkin gate. 
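The identity $\textrm{Tr}[U_{SWAP}\,\varrho_{T1}\otimes\varrho_{T2}] = \textrm{Tr}[\varrho_{T1}\varrho_{T2}]$ behind this visibility measurement is easy to verify numerically. A minimal sketch (helper names are our own) reproducing the ideal overlaps 1, 0.5 and 0 quoted in the Results:

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def dm(psi):
    # Density matrix |psi><psi| of a (normalised) pure state.
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def visibility(rho1, rho2):
    # Ideal fringe visibility of the controlled-SWAP interferometer:
    # Tr[U_SWAP (rho1 (x) rho2)], which equals Tr[rho1 rho2].
    return np.trace(SWAP @ np.kron(rho1, rho2)).real

zero, one, plus = [1, 0], [0, 1], [1, 1]
print(visibility(dm(zero), dm(zero)))   # identical states   -> 1.0
print(visibility(dm(plus), dm(zero)))   # overlap |<+|0>|^2  -> 0.5
print(visibility(dm(zero), dm(one)))    # orthogonal states  -> 0.0
```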
This is achieved by exploiting path-mode entanglement to add control to the SWAP operation. Our implementation has a success rate more than one order of magnitude higher than previous proposals and does not require ancilla photons or decomposition into two-qubit gates. Our gate performs with high accuracy in the logical basis and operates coherently on superposition states. We have used the gate to generate genuine tripartite entanglement with the highest fidelities to date for photonic GHZ states and have implemented a small-scale algorithm to characterise quantum states without QST.\n\nAn alternative method for generating the polarisation-path entanglement that drives the gate is the use of C-path gates\\cite{Zhou2011} at the input. Our implementation varies from a fully heralded quantum Fredkin gate (see Materials and Methods), which does not require preexisting entanglement; however, it demonstrates the key properties of a quantum Fredkin gate. For completely general quantum circuits that incorporate Fredkin (or similar controlled-arbitrary-unitary) gates at arbitrary circuit locations, the C-path methodology may be necessary at the cost of some additional resources and success probability (see Materials and Methods), though we conjecture that specific circuits comprising multiple Fredkin gates might be optimised using similar techniques to those that allow us to simplify the Fredkin down from a circuit of five two-qubit gates. Nevertheless, for small algorithms or operations and whenever possible, it is significantly more favourable to directly generate path entanglement.\n\nThe quantum Fredkin gate has many applications across quantum information processing. Our demonstration should stimulate the design and implementation of even more complex quantum logic circuits. 
Later we became aware of related work carried out by Takeuchi\\cite{Takeuchi2015}.\n\n\\section*{\\large\\textsf{Materials and Methods}}\n\n\\paragraph*{Source\\\\}\nOur source consisted of a $150\\textrm{ fs}$ pulsed Ti-Sapphire laser operating at a rate of $80\\textrm{ MHz}$ and at a wavelength of $780\\textrm{ nm}$, which was frequency doubled using a $2\\textrm{ mm}$ LBO crystal. Two dispersion-compensating ultrafast prisms spatially filter any residual $780\\textrm{ nm}$ laser light. The frequency-doubled light (with power $100\\textrm{ mW}$) pumped two $2\\textrm{ mm}$ type-II $\\beta$ barium borate (BBO) crystals in succession. Entangled photons, generated via SPDC, were collected at the intersection of each set of emission cones. They then encountered an HWP with its OA at $45^{\\circ}$ and an additional $1\\textrm{ mm}$ type-II BBO crystal used to compensate for spatial and temporal walk-offs. The single photons were coupled into single-mode fiber and delivered to the gate. This configuration gave, on average, a four-fold coincidence rate of 2.2 per minute at the output of the gate.\n\\paragraph*{Entangled state preparation\\\\}\nEach SPDC source emitted pairs of entangled photons of the form $|\\psi^{+}_{1}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|H\\rangle_{1B}|V\\rangle_{2B}+|V\\rangle_{1R}|H\\rangle_{2R}\\right)$ and $|\\psi^{+}_{2}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|H\\rangle_{1Y}|V\\rangle_{2Y}+|V\\rangle_{1G}|H\\rangle_{2G}\\right)$. Polarisation optics were used to distribute the path-modes throughout the circuit and thus convert these states into the path-entangled states $|\\psi^{+}_{1}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|1\\rangle_{1B}|1\\rangle_{2B}|0\\rangle_{1R}|0\\rangle_{2R}+|0\\rangle_{1B}|0\\rangle_{2B}|1\\rangle_{1R}|1\\rangle_{2R}\\right)$ and \\\\$|\\psi^{+}_{2}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|1\\rangle_{1Y}|1\\rangle_{2Y}|0\\rangle_{1G}|0\\rangle_{2G}+|0\\rangle_{1Y}|0\\rangle_{2Y}|1\\rangle_{1G}|1\\rangle_{2G}\\right)$. 
Path modes from $|\\psi^{+}_{1}\\rangle$ and $|\\psi^{+}_{2}\\rangle$ were combined on a PBS (Fig. 1C, PBS with outputs $2R$, $1G$, $1Y$, and $2B$); along with post-selection of four-fold coincidence events at the control, target, and trigger outputs, this led to Eq. \\eqref{4GHZ} in the main text. Each qubit was encoded using photon polarisation: using Eq. \\eqref{4GHZ}, considering that each photon exists in a superposition of path-modes and omitting the unoccupied modes, an arbitrary polarisation state can be encoded onto each qubit by performing a local unitary operation on each mode, giving Eq. \\eqref{4GHZU}. The state encoding was performed inside the beam displacer (control qubit) and displaced Sagnac (target qubits) interferometers.\n\\paragraph*{Tuning the phase\\\\}\nThe phase was tuned by tilting an HWP about its OA. To set the correct phase for each of the four GHZ states, we varied the tilt of the HWP and measured fringes in the four-fold coincidences with our measurement apparatus in the $\\sigma_x\\sigma_y\\sigma_y$ basis. For the $|GHZ_{1,2}^{+}\\rangle$ $\\left(|GHZ_{1,2}^{-}\\rangle\\right)$ we set the tilt to maximise (minimise) the occurrence of the $|DRR\\rangle$, $|DLL\\rangle$, $|ARL\\rangle$, and $|ALR\\rangle$ events.\n\\paragraph*{Mixed state preparation\\\\}\nThe mixed states of the form $\\varrho = m|0\\rangle\\langle{0}| + \\frac{(1-m)}{2}\\left(|0\\rangle\\langle{0}| + |1\\rangle\\langle{1}|\\right)$ were obtained by measuring output statistics for a combination of pure input states. The input states of the target were prepared, in varying proportions given by the parameter $m$, as $0.25 (1 + m)^2|0\\rangle^{T1}|0\\rangle^{T2}$, $0.25(1 - m^2)|0\\rangle^{T1}|1\\rangle^{T2}$, $0.25(1 - m^2)|1\\rangle^{T1}|0\\rangle^{T2}$, and $0.25 (1 - m)^2|1\\rangle^{T1}|1\\rangle^{T2}$. 
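As a quick consistency check (using the same $m$, $\varrho$ and weights as above), the four preparation weights form a probability distribution and reproduce the single-qubit marginal $\langle 0|\varrho|0\rangle = (1+m)/2$:

```python
import numpy as np

I2 = np.eye(2)
ket0 = np.array([1.0, 0.0])

for m in np.linspace(0, 1, 11):
    # Preparation weights for |00>, |01>, |10>, |11>, as given in the text
    w = np.array([0.25 * (1 + m) ** 2,
                  0.25 * (1 - m ** 2),
                  0.25 * (1 - m ** 2),
                  0.25 * (1 - m) ** 2])
    assert np.isclose(w.sum(), 1.0)      # a valid probability distribution
    # Marginal probability that target 1 is |0> ...
    p0 = w[0] + w[1]
    # ... matches <0|rho|0> for rho = m|0><0| + (1-m)/2 * I
    rho = m * np.outer(ket0, ket0) + 0.5 * (1 - m) * I2
    assert np.isclose(p0, ket0 @ rho @ ket0)
```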
The aggregated data resulted in a fringe pattern that reflects the purity of the mixed single-qubit state.\n\\paragraph*{Erasing the which-path information\\\\}\nGeneration of path-mode entanglement and successful operation of the gate in the quantum regime relied on the erasure of the which-path information in the two displaced Sagnac interferometers. We tested this by performing a Hong-Ou-Mandel (HOM) two-photon interference measurement after each interferometer. After overlapping path modes $2R$ and $1G$ on an NPBS, an HWP with its OA set to $22.5^{\\circ}$ rotated the polarisation of the photons to $|D\\rangle$ and $|A\\rangle$, respectively. Sending these photons into the same port of a PBS led to bunching at the output if the path-modes were indistinguishable. Doing the same for modes $2B$ and $1Y$ gave two separate HOM dips (see Materials and Methods) with visibilities of $90 \\pm 5\\%$ and $91 \\pm 6\\%$.\n\\paragraph*{Heralding the quantum Fredkin gate\\\\}\nIn order to use a quantum Fredkin gate as part of a much larger quantum circuit (with gates in series), it is preferable for the gate to be heralded. Realising our gate in this manner involves adding C-path gates\\cite{Zhou2011} to each input. For the best probability of success $P_{success}$, each C-path gate requires two heralded C-NOT gates\\cite{Pittman2001} which, in turn, require two entangled pair ancillae. Execution of the C-path gate succeeds with $P_{success} = (1\/4)^2$\\cite{Zhou2011,Pittman2001}. C-path gates are not a necessity at the output if successful execution is heralded by non-detections at the relevant NPBS ports, at an additional probability cost of factor $1\/4$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDue to the rapid growth of 3D point cloud data, point cloud semantic segmentation has been receiving increasing attention in the 3D computer vision community. 
Most of these segmentation methods focus on fully supervised segmentation with manually annotated points \\cite{hu2020randla, thomas2019kpconv, lei2020seggcn, zhao2019pointweb, wang2019graph}. However, annotating large-scale 3D point clouds is a cumbersome process, which is costly in labor and time. Particularly, the number of points in a real scene, such as an indoor scene, can often reach the order of millions. Therefore, it is difficult to obtain accurate labels for these millions of points for fully supervised segmentation.\n\n\nDifferent from fully supervised point cloud segmentation, semi-supervised segmentation aims to learn a good label prediction for point clouds with partially annotated points. Recent works have been dedicated to the semi-supervised point cloud segmentation task.\nGuinard~\\emph{et al.}~\\cite{guinard2017weakly} propose a weakly supervised conditional random field classifier for 3D LiDAR point cloud segmentation.\nHowever, it converts the segmentation task into an optimization problem and ignores the contextual information in point clouds. Mei~\\emph{et al.} propose a semi-supervised 3D LiDAR point cloud segmentation method~\\cite{mei2019semantic}, where the 3D data is projected to range images for feature embedding, and the inter-frame constraints are combined with some labeled samples to encourage feature consistency. Nonetheless, the constraints along the LiDAR sequential frames are not available in general 3D segmentation datasets. Lately, \\cite{XuLee_CVPR20} proposes a semi-supervised point cloud segmentation method, which employs three constraints to enhance the feature learning of unlabeled points, including block-level label penalization, data augmentation with rotation and flipping for prediction consistency, and a spatial and color smoothness constraint in local regions. 
Although this method obtains effective segmentation results, it ignores long-range relations.\n\n\nAlthough some efforts have been made on semi-supervised point cloud segmentation, how to accurately predict the labels of unannotated points for segmentation is still a challenging problem. Particularly, since point clouds are irregular, it is difficult to exploit the geometric structures of point clouds to accurately infer pseudo labels of unannotated points for label propagation. In addition, the uncertainty of the inferred pseudo labels of unannotated points hinders the network from learning discriminative features of point clouds, leading to inaccurate label prediction.\n\nAiming at the aforementioned two problems, in this paper, we propose a novel semi-supervised semantic point cloud segmentation network, named SSPC-Net. We first divide the point clouds into superpoints and build the superpoint graph, where a superpoint is a set of points with isotropic geometric features. Thus, we can convert the point-level label prediction problem in the point cloud segmentation task into a superpoint-level label prediction problem. Following the method in~\\cite{landrieu2018large}, we employ the gated graph neural network (GNN)~\\cite{li2015gated} for superpoint feature embedding.\nIn order to fully exploit the local geometric structure of the constructed superpoint graph, we then develop a dynamic label propagation method to accurately infer pseudo labels for unsupervised superpoints. 
Specifically, the labels of supervised superpoints are gradually extended to the adjacent superpoints with high semantic similarity along the edges of the superpoint graph.\nWe also adopt a superpoint dropout strategy to obtain high-quality pseudo labels during the label propagation process, where the extended superpoints with low confidences are dynamically pruned.\nFurthermore, we propose a coupled attention mechanism to learn discriminative contextual features of superpoints. We alternately perform attention on the supervised and extended superpoints so that the discrimination of the features of the supervised and extended superpoints can be mutually boosted, alleviating the uncertainty of the inferred pseudo labels of the unsupervised superpoints.\nFinally, we employ a combined cross-entropy loss to train the segmentation network.\nExtensive results on various indoor and outdoor datasets demonstrate that our method can yield good performance with only a few point-level annotations.\n\n\nThe main contributions of this paper are summarized as follows: \\textbf{(1)} We develop a dynamic superpoint label propagation method to accurately infer the pseudo labels of unsupervised superpoints. We also present a superpoint dropout strategy to select high-quality pseudo labels. \\textbf{(2)} We propose a coupled attention mechanism on the supervised and extended superpoints to learn discriminative features of the superpoints. \\textbf{(3)} Our proposed method yields better performance than the current semi-supervised point cloud semantic segmentation methods with fewer labels.\n\n\\section{Related Work}\n\\textbf{Deep learning on 3D point clouds.}\nRecently, many deep learning methods have been proposed to tackle point cloud classification and segmentation.\nSome methods~\\cite{wu20153d,maturana2015voxnet,sedaghat2016,qi2016volumetric} voxelize point clouds and employ 3D CNNs for feature embedding. 
However, the voxel-based methods suffer from the large memory cost due to the high-resolution voxels. By projecting point clouds into 2D views, \\cite{su15mvcnn,boulch2017unstructured,tatarchenko2018tangent} use classic CNNs to extract features from point clouds. However, the view-based methods are sensitive to the density of 3D data.\nTo reduce memory cost and additional preprocessing, Qi~\\emph{et al.} propose PointNet, which directly processes the unordered point clouds and uses multi-layer perceptrons (MLPs) and the maxpooling function for feature embedding. Following PointNet, many efforts~\\cite{qi2017pointnet++,klokov2017escape,wang2019graph,hua2018pointwise,li2018pointcnn,zhao2019pointweb,wang2019dynamic,thomas2019kpconv,wu2019point,liu2019point2sequence,han2020point2node,zhao2020jsnet,feng2018gvcnn,ma2018learning} are proposed for point cloud processing. Although these methods have achieved decent performance, their models depend on fully annotated 3D point clouds for training.\nHowever, in this paper, we focus on the semi-supervised point cloud semantic segmentation.\n\n\\textbf{Semi-\/Weakly supervised deep learning on 3D point clouds.}\nMany efforts~\\cite{mei2019semantic,wei2020multi,XuLee_CVPR20} have been proposed to tackle semi-\/weakly supervised point cloud semantic segmentation. In \\cite{mei2019semantic}, Mei~\\emph{et al.} introduce a semi-supervised 3D LiDAR data segmentation method. It first converts the 3D data to depth maps and then applies CNNs for feature embedding. In addition to a small part of supervised data, it also leverages the temporal constraints along the LiDAR scans sequence to boost feature consistency. Therefore, it is not practicable for general point cloud segmentation cases. Inspired by CAM~\\cite{zhou2016learning}, Wei~\\emph{et al.} propose MPRM~\\cite{wei2020multi} with scene-level and subcloud-level labels for weakly supervised segmentation. 
Specifically, it leverages a point class activation map (PCAM) to obtain the localization of each class and then generates point-wise pseudo labels with a multi-path region mining module. In this way, the segmentation network can be trained in a fully supervised manner. However, in practice, generating the subcloud-level annotation is still time-consuming.\nLately, in~\\cite{XuLee_CVPR20}, Xu~\\emph{et al.} propose a semi-supervised algorithm, which uses three constraints on the unlabeled points, $i.e.$, the block level labels for penalizing the negative categories in point clouds, data augmentation with random in-plane rotation and flipping for feature consistency and a spatial and color smoothness constraint in point clouds. \n\n\\begin{figure*}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.9\\linewidth]{.\/img\/framework_final_v2.pdf}\n\t\\end{center}\n\t\\caption{Overview of the proposed semi-supervised semantic point cloud segmentation network (SSPC-Net). We first leverage the gated GNN to extract superpoints features. Then based on the superpoint graph, we conduct the dynamic label propagation strategy to generate pseudo labels. Next, based on the supervised superpoints and the extended superpoints, we perform a coupled attention mechanism to further boost the extraction of discriminative contextual features in the point cloud.}\n\t\\label{fig_outline}\n\\end{figure*}\n\n\n\\section{Our Method}\nIn this section, we present our semi-supervised point cloud segmentation network and the outline of our framework is shown in Fig. \\ref{fig_outline}. We first introduce the superpoint graph embedding module. Then we propose a dynamic label propagation approach combined with a superpoint dropout strategy. Next, we propose a coupled attention mechanism to learn discriminative contextual features of superpoints. 
Finally, we depict the framework of our method.\n\n\n\\subsection{Superpoint Graph Embedding}\\label{sec_graph_embeddding}\n\nTo obtain the superpoints and learn the superpoint features, following \\cite{landrieu2018large}, we perform an unsupervised superpoint partition approach to generate superpoints and then build superpoint graphs combined with a graph neural network (GNN) for superpoint feature embedding.\nDenote $\\mathcal{G=(V, E)}$ as the superpoint graph built upon superpoints, where $\\mathcal{V}$ is the node set and $\\mathcal{E}$ is the edge set. Edge $(i,j) \\in \\mathcal{E}$ links node $i \\in \\mathcal{V}$ with $j \\in \\mathcal{V}$. \nWe first apply a lightweight PointNet-like structure on the superpoints to obtain superpoint features. After that, we learn the superpoint embedding with the gated GNN used in~\\cite{li2015gated}.\nGiven the superpoint embeddings and the semi-supervision, we can penalize the model with incomplete supervision. For a point cloud consisting of $N$ superpoints, we define $a_i \\in \\{0,1\\}$ to indicate whether the $i$-th superpoint has supervision. Then the segmentation loss $\\mathcal{L}_{s}$ on the superpoint graph embedding module can be formulated as:\n\\begin{equation}\n\\mathcal{L}_{s} = \\frac{1}{A} \\sum \\nolimits _{i=1}^{N}a_i \\cdot \\mathcal{F}_{loss}\\left(z_i, \\bm{y}_i\\right)\n\\end{equation}\nwhere $\\mathcal{F}_{loss}$ is the loss function, for which we choose the cross-entropy loss in experiments, $A=\\sum \\nolimits _{i=1}^{N} a_i$ is adopted for normalization, $z_i$ represents the superpoint-level label of the $i$-th superpoint and $\\bm{y}_i$ is the prediction logit.\n\nThe reason why we choose the superpoint graph as the representation of the point cloud is twofold. On the one hand, the superpoint is geometrically isotropic and therefore we can directly extend the point-level label to the superpoint-level label, which alleviates the lack of supervision. 
On the other hand, the superpoint graph is rooted in the geometric structure of the point cloud, and the linking edges between the superpoints greatly facilitate feature propagation. Thus, we can obtain more discriminative contextual features of superpoints. \n\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.95\\linewidth]{.\/img\/extension_v3.pdf}\n\t\\end{center}\n\t\\caption{The procedure of our dynamic label propagation. We progressively propagate the superpoint-level labels and discard the extended superpoints with low confidence.}\n\t\\label{fig_extension}\n\\end{figure}\n\n\n\\subsection{Dynamic Label Propagation}\\label{sec_extension_growing}\nTo propagate superpoint labels, we propose a dynamic label propagation strategy to generate pseudo labels. \nSuppose we have constructed three sets: the supervised superpoints set $S$, the unsupervised superpoints set $U$, and the extended superpoints set $E$. Note that at the beginning we set $E=\\varnothing$. The elements in each set are the indices of superpoints. \n\n\nFor $\\forall ~i \\in T$, $T=S\\cup E$, we use the adjacent superpoints to construct a candidate set $\\mathcal{N}_i$, in which we consider propagating labels. Suppose $z_i$ is the label of the $i$-th superpoint. Note that $\\forall ~j \\in\\mathcal{N}_i$ must satisfy two constraints: $j\\in U$, and the predicted category of the $j$-th superpoint should be the same as that of the $i$-th superpoint, that is, $z_i$. Compared with other unsupervised superpoints, the elements in $\\mathcal{N}_i$ have higher possibilities of being assigned pseudo labels, due to their close geometric relations and close distances to the $i$-th superpoint. To generate high-quality pseudo labels, we assess the confidence scores of the superpoints in $\\mathcal{N}_i$ and denote the scores as $\\bm{m}_i\\in\\mathbb{R}^{|\\mathcal{N}_i|}$. 
Then, we enumerate all the superpoints in $\\mathcal{N}_i$ and select the superpoint with the highest confidence score. The operation can be formulated as:\n\\begin{equation}\nj^*=\\mathop{\\arg\\max}\\limits_{j=1,2,\\ldots, |\\mathcal{N}_i|}({m}_{i,j})\n\\label{eqn:constraint}\n\\end{equation}\nwhere $j^*$ represents the index of the superpoint with the highest confidence score in $\\mathcal{N}_i$. To further ensure the high quality of the pseudo labels, we set a threshold $\\tau$ to filter out selected superpoints with unsatisfactory confidence values. When the confidence score $m_{i,j^*}\\geqslant \\tau$, the $j^*$-th superpoint is selected and assigned the pseudo label $z_i$. Then $j^*$ is removed from the unsupervised superpoints set $U$ and added to the extended superpoints set $E$. On the contrary, if there is no superpoint satisfying the constraint, no superpoint will be extended from $\\mathcal{N}_{i}$. In the experiments, $\\tau$ is empirically set to 0.9. Note that for each extension procedure, we merge the supervised superpoints set $S$ and the extended superpoints set $E$ into the new set $T=S\\cup E$ for further extension, because the extended superpoints with pseudo labels can also be treated as superpoints with supervision for further label propagation. In this way, we can progressively propagate the labels of the supervised superpoints and generate more high-quality pseudo labels for the unsupervised superpoints in $U$. 
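As an illustrative sketch (not the authors' released implementation), one propagation step for a single superpoint $i$ can be written as follows; the names `propagate_one`, `candidates`, and `confidences` are hypothetical stand-ins for $\\mathcal{N}_i$, $\\bm{m}_i$, and the sets $U$ and $E$:

```python
def propagate_one(i, labels, candidates, confidences, U, E, tau=0.9):
    """One step of dynamic label propagation for superpoint i.

    candidates[i]       -- adjacent unsupervised superpoints N_i whose
                           predicted class equals labels[i]
    confidences[(i, j)] -- confidence score m_{i,j} for j in candidates[i]
    """
    cand = candidates.get(i, [])
    if not cand:
        return None  # nothing to extend from N_i
    # j* = argmax_j m_{i,j}
    j_star = max(cand, key=lambda j: confidences[(i, j)])
    # only extend when the best confidence passes the threshold tau
    if confidences[(i, j_star)] >= tau:
        labels[j_star] = labels[i]   # assign pseudo label z_i
        U.discard(j_star)            # remove from the unsupervised set
        E.add(j_star)                # add to the extended set
        return j_star
    return None
```

Repeating this step for every $i \\in T$ reproduces the inner loop of the extension procedure.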
Algorithm \\ref{algorithm_extension} details the graph-based supervision extension procedure.\n\n\n\\begin{algorithm}[t]\n\t\\DontPrintSemicolon\n\t\\KwIn{Supervised superpoints set $S$, unsupervised superpoints set $U$, extended superpoints set $E$, threshold $\\tau$}\n\t\\KwOut{Updated sets $U$ and $E$}\n\t$T = S\\cup E$ \\\\\n\t\\For {$~i \\in T$}\n\t{\n\t\tDenote $z_i$ as the label of the $i$-th superpoint \\\\\n\t\tConstruct the candidate superpoints set $\\mathcal{N}_i$ \\\\\n\t\t\\If{$\\mathcal{N}_i\\neq \\varnothing $}\n\t\t{\n\t\t\tGenerate the confidence scores $\\bm{m}_i$ \\\\\n\t\t\t$j^*=\\mathop{\\arg\\max}\\limits_{j=1,2,\\ldots, |\\mathcal{N}_i|}({m}_{i,j})$ \\\\\n\t\t\t\\If{$m_{i,j^*}\\geqslant \\tau$}\n\t\t\t{\n\t\t\t\tAssign pseudo label $z_i$ to the $j^*$-th superpoint \\\\\n\t\t\t\t$U := U \\setminus \\{j^*\\} \\quad E := E \\cup \\{j^*\\}$\n\t\t\t}\n\t\t}\n\t\t\n\t}\n\t\\caption{Graph-based supervision extension}\\label{algorithm_extension}\n\\end{algorithm}\n\n\nSince our extension strategy is performed progressively, we consider removing the low-confidence superpoints from the extended superpoints set $E$. Hence, we propose a superpoint dropout strategy that assesses the reliability of the extended superpoints in the embedding space. In the superpoints set $T=S\\cup E$, we cluster the superpoints into $c$ classes according to the superpoint labels or pseudo labels, where $c$ is the number of categories. Suppose $\\mathcal{C}_i$ is the $i$-th cluster set that contains the indices of the superpoints belonging to the $i$-th category. In addition, we denote $\\bm{v}_i$ as the feature of the cluster center of $\\mathcal{C}_i$, which is computed by averaging the features of all the superpoints in $\\mathcal{C}_i$. We assess the confidence of an extended superpoint by considering its distance to the corresponding cluster center in the feature space. 
For $\\forall ~j\\in E\\cap \\mathcal{C}_i$, its Euclidean distance to the cluster center in the feature space is formulated as: \n\\begin{equation}\nd_i^j = \\left\\|\\bm{f}_{j}-\\bm{v}_i \\right \\|_2\n\\end{equation} \nwhere $\\bm{f}_j\\in\\mathbb{R}^{D}$ is the feature of the $j$-th superpoint, and $\\bm{v}_i\\in\\mathbb{R}^{D}$ is the feature of the cluster center. A smaller distance indicates higher reliability of an extended superpoint, whereas a larger distance means higher uncertainty. Therefore, in each cluster, we discard the $k$ extended superpoints that are furthest from the cluster center, where $k$ is set to $0.05\\cdot|E\\cap\\mathcal{C}_i|$. In other words, we retain the most reliable 95\\% of the superpoints and drop the 5\\% unreliable superpoints in the set $E\\cap\\mathcal{C}_i$. Our superpoint dropout strategy is summarized in Algorithm \\ref{algorithm_dropout}.\n\nConcretely, as shown in Fig. \\ref{fig_extension}, we perform our graph-based dynamic label propagation strategy every ${M}$ epochs; therefore, the extended superpoints gradually ``grow'' on the graph from the supervised superpoints. The reason why we conduct the extension operation in a multi-stage manner instead of every epoch is that our extension strategy is cumulative, which means that too many extension operations would cause redundant extended superpoints and aggravate the memory cost. Meanwhile, the model is not stable at the beginning of training, which is not conducive to generating extended superpoints. 
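The per-cluster dropout rule can be sketched as follows; this is a hedged illustration with hypothetical container names (`features`, `clusters`), not the authors' code:

```python
import numpy as np

def superpoint_dropout(features, E, clusters, drop_ratio=0.05):
    """Prune the least reliable extended superpoints per class cluster.

    features -- dict: superpoint index -> D-dim feature vector f_j
    E        -- set of extended superpoint indices (modified in place)
    clusters -- dict: class id -> indices of the superpoints in S U E
                belonging to that class
    Returns the set of dropped indices (to be moved back to U).
    """
    dropped = set()
    for c, members in clusters.items():
        # v_i: cluster center = mean feature over the cluster
        center = np.mean([features[j] for j in members], axis=0)
        ext = [j for j in members if j in E]
        if not ext:
            continue
        # Euclidean distance d_i^j of each extended superpoint to the center
        dists = {j: np.linalg.norm(features[j] - center) for j in ext}
        k = int(drop_ratio * len(ext))  # k = 0.05 * |E ∩ C_i|
        # drop the k extended superpoints farthest from the center
        for j in sorted(ext, key=dists.get, reverse=True)[:k]:
            dropped.add(j)
    E -= dropped
    return dropped
```

Dropped superpoints rejoin the unsupervised set, so they may be re-extended with a better label in a later propagation stage.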
\n\n\\begin{algorithm}[t]\n\t\\DontPrintSemicolon\n\t\\KwIn{Number of classes $c$, supervised superpoints set $S$, unsupervised superpoints set $U$, extended superpoints set $E$}\n\t\\KwOut{Updated sets $U$ and $E$}\n\t$T = S\\cup E$ \\\\\n\tCluster on $T$ and obtain $c$ cluster sets: $\\mathcal{C}_1,\\mathcal{C}_2, \\ldots,\\mathcal{C}_c$ \\\\\n\t\\For {$i = 1:c $}\n\t{\t\t\t\n\t\tCompute the feature $\\bm{v}_i$ of the cluster center of $\\mathcal{C}_i$ \\\\\n\t\t\\For{each $j \\in E\\cap\\mathcal{C}_i$}\n\t\t{\n\t\t\tGenerate the feature $\\bm{f}_j$ of the $j$-th superpoint \\\\\n\t\t\tCompute the distance $d_i^j = \\left\\|\\bm{f}_{j}-\\bm{v}_i \\right \\|_2$\n\t\t}\n\t\tFind the farthest 5\\% superpoints (set as $\\mathcal{C}_{drop}$) in $E\\cap\\mathcal{C}_i$ from the cluster center according to the distance $\\bm{d}_i$ \\\\\n\t\t$E := E \\setminus \\mathcal{C}_{drop}\\quad U := U \\cup \\mathcal{C}_{drop}$\n\t}\n\t\\caption{Superpoint dropout strategy}\\label{algorithm_dropout}\n\\end{algorithm}\n\n\n\n\\subsection{Coupled Attention for Feature Enhancement}\\label{sec_feature_enhance}\nAiming to learn more discriminative contextual features in point clouds, we propose a coupled attention mechanism.\nFor $\\forall ~i\\in S$, we denote the corresponding embedding as $\\bm{h}_i \\in \\mathbb{R}^{D}$. Similarly, for $\\forall ~j\\in E$, we denote the corresponding embedding as $\\bm{h}_j$. 
By weighing all the extended superpoints, we extract the novel contextual feature of $i$-th superpoint with attention mechanism:\n\\begin{equation}\n\\bm{x}_i=\\sum \\nolimits_{j\\in E} g\\left(\\phi(\\bm{h}_i, \\bm{h}_j) \\right)\\odot \\alpha(\\bm{h}_j)\n\\end{equation}\nwhere $\\phi(\\bm{h}_i, \\bm{h}_j)=\\mathop{MLP}(\\bm{h}_i-\\bm{h}_j)$ embeds the channel-wise relations between superpoints, $\\alpha(\\bm{h}_j)=\\mathop{MLP}(\\bm{h}_j)$ is a unary function for individual superpoint embedding, $\\phi(\\cdot,\\cdot):\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$ and $\\alpha:\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$, $\\odot$ is the Hadamard product. $g$ is a normalization function and is defined as:\n\\begin{equation}\ng\\left(\\phi_l(\\bm{h}_i, \\bm{h}_j) \\right) = \\frac{\\exp(\\phi_{l}({\\bm h}_{i}, {\\bm h}_{j}))}{\\sum\\nolimits_{r \\in E}\\exp(\\phi_{l}({\\bm h}_{i}, {\\bm h}_{r}))}\n\\end{equation}\nwhere $l=1,2,\\ldots,D$, represents $l$-th element of embedding $\\phi_{l}(\\cdot,\\cdot)$. Consequently, the matrix representation of the attention operation on the supervised superpoints in $S$ can be formulated as:\n\\begin{equation}\n\\bm{X}_s = \\sum \\nolimits_{j\\in E}\\bm{W}_{es,j} \\odot \\bm{H}_{e,j}\n\\end{equation}\nwhere $\\bm{X}_s\\in\\mathbb{R}^{|S|\\times D}$, $\\bm{W}_{es,j}\\in\\mathbb{R}^{|S|\\times D}$, $\\bm{H}_{e,j} \\in \\mathbb{R}^{|S|\\times D}$ and $j$ enumerates the extended superpoints in $E$. 
Note that $\\bm{W}_{es}\\in\\mathbb{R}^{|S|\\times|E|\\times D}$ represents the channel-wise weights from the extended superpoints to the supervised superpoints.\n\n\nOnce we obtain the attention embedding $\\bm{X}_s\\in\\mathbb{R}^{|S|\\times D}$, we can derive new segmentation logits of the supervised superpoints and formulate the loss as:\n\\begin{equation}\n\\mathcal{L}_{es} = \\frac{1}{|S|} \\sum \\nolimits _{i \\in S}\\mathcal{F}_{loss}\\left(z_i, \\mathop{FC} \\left(\\bm{X}_{s,i}\\right)\\right)\n\\end{equation}\nwhere $z_i$ is the superpoint-level label, $\\bm{X}_{s,i}$ is the attention feature of the corresponding supervised superpoint, and $\\mathcal{F}_{loss}$ is the cross-entropy loss adopted in experiments. Note that $\\mathop{FC}$ is the fully connected layer, which maps $\\bm{X}_{s,i}\\in\\mathbb{R}^{D}$ from the $D$-dimensional feature space to the dimension of the categories.\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.95\\linewidth]{.\/img\/coupled_attention_v6.pdf}\n\t\\end{center}\n\t\\caption{The coupled attention for feature enhancement.\n\t}\n\t\\label{fig_attention}\n\\end{figure}\n\n\n\nSimilarly, to promote the feature characterization of the extended superpoints, we then perform attention on the extended superpoints in reverse. By weighting the new features enhanced by the attention operation on the supervised superpoints, we boost the contextual feature propagation and thus enhance the robustness of the features of the extended superpoints. Thus, for $\\forall j \\in E$, the new embedding of the corresponding superpoint can be calculated as:\n\\begin{equation}\n\\bm{y}_j=\\sum \\nolimits_{i\\in S} g\\left(\\psi(\\bm{h}_j, \\bm{x}_i) \\right)\\odot \\beta(\\bm{x}_i)\n\\end{equation}\nwhere $\\psi(\\bm{h}_j, \\bm{x}_i)=\\mathop{MLP}(\\bm{h}_j-\\bm{x}_i)$ characterizes the dependencies of the extended superpoints on the attention embeddings of the supervised superpoints. 
$\\beta(\\bm{x}_i)=\\mathop{MLP}(\\bm{x}_i)$ is a unary function similar to $\\alpha$, $\\psi(\\cdot,\\cdot):\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$ and $\\beta:\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$. $g$ is a normalization function defined as:\n\\begin{equation}\ng\\left(\\psi_{l}(\\bm{h}_j, \\bm{x}_i) \\right) = \\frac{\\exp(\\psi_{l}({\\bm h}_{j}, {\\bm x}_{i}))}{\\sum\\nolimits_{r \\in S}\\exp(\\psi_{l}({\\bm h}_{j}, {\\bm x}_{r}))}\n\\end{equation}\nwhere $l=1,2,\\ldots,D$, denotes $l$-th element of embedding $\\psi(\\cdot,\\cdot)$. Then the matrix representation of the attention operation on the extended superpoints in $E$ can be defined as:\n\\begin{equation}\n\\bm{Y}_e = \\sum \\nolimits_{i\\in S}\\bm{W}_{ese,i} \\odot \\bm{\\mathcal{X}}_{s,i}\n\\end{equation}\nwhere $\\bm{Y}_e\\in\\mathbb{R}^{|E|\\times D}$, $\\bm{W}_{ese,i}\\in\\mathbb{R}^{|E|\\times D}$, $\\bm{\\mathcal{X}}_{s,i}\\in\\mathbb{R}^{|E|\\times D}$ and $i$ enumerates superpoints in $S$. Note that $\\bm{\\mathcal{X}}_{s}$ is the feature after employing function $\\beta(\\cdot)$ on attention feature $\\bm{X}_{s}$. In this way, we develop the coupled attention, $i.e.$, $\\bm{W}_{ese}\\in\\mathbb{R}^{|S|\\times|E|\\times D}$ denotes the channel-wise weights from the attentional supervised superpoints to extended superpoints.\n\nThen the loss $\\mathcal{L}_{ese}$ on the extended superpoints with enhanced attention features can be formulated as:\n\\begin{equation}\n\\mathcal{L}_{ese} = \\frac{1}{|E|} \\sum \\nolimits _{j \\in E}\\mathcal{F}_{loss}\\left(z^{p}_j, \\mathop{FC} \\left(\\bm{Y}_{e,j}\\right)\\right)\n\\end{equation}\nwhere $z^{p}_j$ is the pseudo label and $\\mathcal{F}_{loss}$ is the cross-entropy loss as well. $\\mathop{FC}$ maps the feature to the category space.\n\nSpecifically, as shown in Fig. \\ref{fig_attention}, our coupled attention considers the intra- and inter-relations concurrently. 
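The shared channel-wise attention operation underlying both directions can be sketched in numpy as follows; for illustration only, the MLPs $\\phi$, $\\psi$, $\\alpha$, and $\\beta$ are replaced by single linear maps, which is an assumption rather than the paper's exact architecture:

```python
import numpy as np

def channelwise_attention(H_q, H_k, W_phi, W_alpha):
    """x_i = sum_j g(phi(h_i - h_j)) ⊙ alpha(h_j), channel-wise over j.

    H_q: (Nq, D) query embeddings (e.g. supervised superpoints h_i)
    H_k: (Nk, D) key/value embeddings (e.g. extended superpoints h_j)
    W_phi, W_alpha: (D, D) linear maps standing in for the MLPs
    """
    # phi(h_i, h_j) = MLP(h_i - h_j): one D-dim relation per (i, j) pair
    diff = H_q[:, None, :] - H_k[None, :, :]      # (Nq, Nk, D)
    rel = diff @ W_phi                            # (Nq, Nk, D)
    # g(.): softmax over the key axis, independently for each channel l
    rel = rel - rel.max(axis=1, keepdims=True)    # numerical stability
    w = np.exp(rel) / np.exp(rel).sum(axis=1, keepdims=True)
    vals = H_k @ W_alpha                          # alpha(h_j), (Nk, D)
    # channel-wise (Hadamard) weighted aggregation over j
    return (w * vals[None, :, :]).sum(axis=1)     # (Nq, D)

# Coupled attention = two calls in sequence:
#   X_s = channelwise_attention(H_s, H_e, ...)   # S attends over E
#   Y_e = channelwise_attention(H_e, X_s, ...)   # E attends over refined S
```

Calling it twice, with the second call consuming the first call's output, mirrors the coupling between $\\bm{X}_s$ and $\\bm{Y}_e$ described above.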
To encourage the feature consistency in different point clouds, \nwe integrate the supervised superpoints and extended superpoints in various point clouds into sets $S$ and $E$, respectively.\nThe connections between $S$ and $E$ are constructed within and across various point cloud samples, and superpoints with the same labels are encouraged to have more similar semantic embeddings compared to those with diverse classes. \nAs a result, by alternatively performing attention on the supervised and extended superpoints, more long-range dependencies between superpoints are built. Hence, the model learns more discriminative and robust contextual features of the supervised and unsupervised superpoints. \n\n\n\n\n\\subsection{Framework}\\label{sec_framework}\nThe framework of our model is illustrated in Fig. \\ref{fig_outline}. In our framework, the superpoint graph embedding module is the basis of our point cloud feature embedding. Based on this module, the dynamic label propagation method assesses the semantic similarity between the superpoints and propagates the superpoint-level supervision along the edges of the superpoint graph. Then, with the extended superpoints searched by the dynamic label propagation module, we propose a coupled attention mechanism to boost the contextual feature learning of the point cloud. \n\nThe final objective function is a combination of the three objectives $\\mathcal{L}_{final}=\\mathcal{L}_{s} + \\lambda_1\\cdot \\mathcal{L}_{es} +\\lambda_2\\cdot \\mathcal{L}_{ese}$ and we empirically set $\\lambda_1, \\lambda_2$ to 1.\nAs shown in Fig. \\ref{fig_outline}, the dynamic label propagation module and coupled attention module are only conducted in the training stage. For testing, we obtain the inferred prediction directly from the superpoint graph embedding module. \n\n\n\\section{Experiments}\n\\subsection{Implementation Details}\nTo train our model, we adopt Adam optimizer with a base learning rate of 0.01. 
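As a hedged sketch of the objective described above (illustrative, not the released implementation), the masked superpoint loss $\\mathcal{L}_{s}$ and the combination $\\mathcal{L}_{final}$ could be written as:

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy of one softmax prediction against an integer label."""
    z = logits - logits.max()                    # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def segmentation_loss(logits, labels, mask):
    """L_s: cross-entropy averaged over supervised superpoints only.

    logits: (N, C) per-superpoint predictions, labels: (N,) class indices,
    mask:   (N,) 0/1 flags a_i marking superpoints that have supervision.
    """
    A = mask.sum()                               # normalizer A = sum_i a_i
    return sum(m * cross_entropy(l, y)
               for l, y, m in zip(logits, labels, mask)) / A

def final_loss(L_s, L_es, L_ese, lam1=1.0, lam2=1.0):
    # L_final = L_s + lambda_1 * L_es + lambda_2 * L_ese (lambdas set to 1)
    return L_s + lam1 * L_es + lam2 * L_ese
```

The mask zeroes out unsupervised superpoints, so only labeled (or pseudo-labeled, for $\\mathcal{L}_{ese}$) predictions contribute to the gradient.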
For the S3DIS~\\cite{armeni20163d}, ScanNet~\\cite{dai2017scannet} and vKITTI~\\cite{Gaidon2016Virtual} datasets, we employ mini-batch sizes of 4, 8, and 8, respectively. We empirically run the dynamic label propagation module every $M=40$ epochs. \n\n\n\n\\textbf{Semi-supervision generation.}\nTo produce the semi-supervision of point clouds, we randomly select a part of the points with annotations in each class. For example, given a point cloud containing $n$ points with $c$ classes, suppose the supervision rate is $r$; then we evenly distribute the supervision budget $r\\cdot n$ and randomly sample $ (r\\cdot n)\/c$ points in each category as the supervised points.\nThe label of a superpoint is the category with the most annotated points.\nIf a superpoint contains no supervised point, it remains unsupervised. \nNote that compared with the strategy of randomly sampling annotated points directly in point clouds, our labeling mechanism is more consistent with human annotation behavior, since the random sampling strategy would concentrate most of the supervised points in areas with simple geometric structure but many points, $e.g.$, walls, roads, etc.\nFor evaluation, all the quantitative results are computed at the point level.\n\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\begin{adjustbox}{width=0.88\\linewidth}\n\t\t\\large\n\t\t\\begin{tabular}{c|l|c|ccc}\n\t\t\n\t\t\t\\toprule\n\t\t\t\\multicolumn{2}{c|}{{\\bf Method}} &\\multicolumn{1}{c|}{Rate}&mIoU&mAcc&OA\\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multicolumn{6}{c}{6-fold cross validation} \\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{7}{*}{Full}\n\t\t\t&PointNet&100\\%&47.6&66.2&78.5\\\\\t\t\t\n\t\t\t&SPGraph &100\\%& 62.1 &73.0 &85.5 \\\\\n\t\t\t&PointCNN &100\\%&{\\bf65.3} &{\\bf75.6} &{\\bf88.1} \\\\\n\t\t\n\t\t\n\t\t\t&RSNet&100\\%&56.4&66.4&-\\\\\n\t\t\n\t\t\t& G+RCU2 &100\\%&49.7 &66.4 &81.1 \\\\\n\t\t\t& 3P-RNN &100\\%&56.3 &73.6 &86.9 
\\\\\n\t\t\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{3}{*}{Semi-}\n\t\t\n\t\t\n\t\t\n\t\t\t&Baseline &0.002\\% &45.1 &63.7 &73.9 \\\\\n\t\t\t&{\\bf SSPC-Net} &0.002\\% &48.5 &68.3 &79.1\\\\\t\n\t\t\t&{\\bf SSPC-Net} &0.01\\%&{\\bf54.5}&{\\bf70.8} &{\\bf 80.4}\\\\\t\t\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multicolumn{6}{c}{Fold 5} \\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{5}{*}{Full}\n\t\t\t&PointNet&100\\%&41.1&49.0&-\\\\\n\t\t\t&PointNet++&100\\%&47.8&-&-\\\\\n\t\t\t&SPGraph &100\\%&{\\bf58.0} &{\\bf66.5} &{\\bf86.3} \\\\\n\t\t\n\t\t\t&SegCloud&100\\%&48.9&57.3&-\\\\\n\t\t\t&PointCNN&100\\%&57.2&63.8&85.9\\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{6}{*}{Semi-}\n\t\t\n\t\t\t&Semi-Seg&1pt &44.5&-&-\\\\\n\t\t\t&Semi-Seg&10\\% &48.0&-&-\\\\\t\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t&Baseline &0.002\\% &39.6 &52.1 &72.4 \\\\\t \t\n\t\t\n\t\t\t&{\\bf SSPC-Net}&0.002\\% &43.0 &56.4 &76.2\\\\\n\t\t\t&{\\bf SSPC-Net}&0.01\\%&51.5 &63.8 &82.0\\\\\n\t\t\t&{\\bf SSPC-Net}&1pt &{\\bf53.8} &{\\bf63.9} &{\\bf83.8}\\\\\n\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Evaluation on the S3DIS dataset.}\n\t\\label{tab_results_s3dis}\n\\end{table}\n\n\n\\subsection{Semi-supervised Semantic Segmentation}\n\\textbf{S3DIS.}\nS3DIS~\\cite{armeni20163d} dataset is an indoor 3D dataset including 6 areas and 13 categories. Three metrics are adopted for quantitative evaluation: mean IoU (mIoU), mean class accuracy (mAcc), and overall accuracy (OA).\n\nThe quantitative and visual results are shown in Tab. \\ref{tab_results_s3dis} and Fig. \\ref{fig_visual_results}, respectively. For a fair comparison, we test our framework with the ``1pt'' labeling strategy adopted in \\cite{XuLee_CVPR20} (dubbed ``Semi-Seg'' in Tab. \\ref{tab_results_s3dis}) as well, which samples one point in each category of each block as the supervised point. It can be seen that our SSPC-Net achieves a significant gain of 9.3\\% in terms of mIoU with the ``1pt'' labeling strategy. 
In \\cite{XuLee_CVPR20}, Xu {\\em et~al.} split the point cloud into blocks and then train and test their model on each block separately. In contrast, our model learns the embeddings of superpoints in the whole point cloud; therefore, we can obtain more discriminative contextual features and yield better performance. Note that in Tab.~\\ref{tab_results_s3dis}, ``Baseline'' represents our method without the label propagation strategy and the coupled attention mechanism. One can see that our SSPC-Net improves the performance from 39.6\\% to 43.0\\% in terms of mIoU with the supervision rate of 0.002\\% on Area 5 of the S3DIS dataset, benefiting from the pseudo labels generated by the label propagation and the discriminative contextual features extracted by the coupled attention mechanism.\n\n\\textbf{ScanNet.}\nScanNet~\\cite{dai2017scannet} is an indoor scene dataset containing 1513 point clouds with 20 categories. We split the dataset into a training set with 1201 scenes and a testing set with 312 scenes following \\cite{qi2017pointnet++}. We adopt overall semantic voxel labeling accuracy (OA) and mean IoU (mIoU) for evaluation.\n\n\nWe list the quantitative results on the testing set in Tab.~\\ref{tab_scannet_vkitti}. Similar to S3DIS, ScanNet is also an indoor dataset, but the point clouds of ScanNet are much sparser than those of S3DIS. This brings greater challenges to the propagation of supervised labels. However, the proposed model can still achieve good segmentation results and even outperform some fully supervised methods like PointNet \\cite{qi2017pointnet} with semi-supervision. 
Furthermore, the performance of the proposed model is much better than the baseline method, which further validates the effectiveness of our method.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\LARGE\n\t\\begin{adjustbox}{width=0.99\\linewidth}\n\t\t\\begin{tabular}{c|l|c|cc|ccc}\n\t\t\t\\toprule\n\t\t\t\\multicolumn{2}{c|}{\\multirow{2}{*}{{\\bf Method}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Rate}} &\n\t\t\t\\multicolumn{2}{c|}{ScanNet} & \\multicolumn{3}{c}{vKITTI} \\\\\n\t\t\t\\multicolumn{2}{c|}{} & & mIoU & OA & mIoU & mAcc & OA \\\\ \n\t\t\t\\midrule\n\t\t\t\\multirow{7}{*}{Full}\n\t\t\t&PointNet&100\\%\t&-&73.9&34.4&47.0&79.7\\\\\n\t\t\t&PointNet++&100\\%&-&{\\bf84.5} &-&-&-\\\\\n\t\t\t\n\t\t\t&SSP + SPG&100\\% &- &- &{\\bf52.0} &{\\bf67.3} &84.3 \\\\\n\t\t\n\t\t\t&G+RCU&100\\% &-&- &35.6&57.6&79.7\\\\\n\t\t\n\t\t\n\t\t\t&RSNet&100\\% &{\\bf39.3} &79.2 &-&-&- \\\\\n\t\t\t&3P-RNN&100\\%&-&- &41.6&54.1&{\\bf87.8} \\\\\n\t\t\n\t\t\n\t\t\t&3DCNN&100\\%&-&73.0 &-&-&- \\\\\n\t\t\t\\midrule\n\t\t\t\\multirow{3}{*}{Semi-}\n\t\t\n\t\t\n\t\t\t&Baseline&0.01\\% &24.1&38.2 &35.7 &53.4 &79.2 \\\\\n\t\t\t&{\\bf SSPC-Net}&0.01\\% &27.1&66.6 &41.0 &55.7 &81.2 \\\\\n\t\t\t&{\\bf SSPC-Net}&0.05\\% &{\\bf39.3}&{\\bf77.1} &{\\bf50.6} &{\\bf64.8} &{\\bf85.4} \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Evaluation on the ScanNet and vKITTI datasets.}\n\t\\label{tab_scannet_vkitti} \t \n\\end{table}\n\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.87\\linewidth]{.\/img\/visual_v4.pdf}\n\t\\end{center}\n\t\\caption{The visual results on the S3DIS dataset with supervision rate of 0.002\\%.}\n\t\\label{fig_visual_results}\n\\end{figure}\n\n\n\\textbf{vKITTI.}\nvKITTI~\\cite{Gaidon2016Virtual} dataset mimics the real-world KITTI dataset and contains the synthetic outdoor scenes with 13 classes (including road, tree, terrain, car, etc.). 
For evaluation, we split the dataset into 6 non-overlapping sub-sequences and employ 6-fold cross validation following \\cite{ye20183d}. Mean IoU (mIoU), mean class accuracy (mAcc)\nand overall accuracy (OA) are employed for evaluation.\n\n\nThe quantitative results are presented in Tab.~\\ref{tab_scannet_vkitti}. With 0.01\\% point-level annotations, our model achieves better segmentation results than the baseline method, owing to the dynamic label propagation strategy and the discriminative contextual features generated by the coupled attention module. In addition, our model achieves performance better than or comparable to some fully supervised methods with only 0.01\\% and 0.05\\% of the points supervised.\n\n\n\\subsection{Ablation Study}\n\\textbf{Contribution of individual components.}\nIn this section, we investigate the contribution of the proposed components to model performance. The evaluation results on Area 5 of the S3DIS dataset for the different components with supervision ratios of 0.002\\% and 0.01\\% are shown in Tab. \\ref{tab_components}, where the components are the graph embedding (Graph Emb.), dynamic label propagation (Label Prop.), and coupled attention for feature enhancement (Coup. Attn.). It can be observed that the addition of the dynamic label propagation and the coupled attention module yields a clear improvement in performance, which further demonstrates the effectiveness of these strategies for semi-supervision.\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\huge\n\t\\begin{adjustbox}{width=1.0\\linewidth}\n\t\n\t\n\t\t\\begin{tabular}{ccc|ccc|ccc}\n\t\t\t\\toprule\n\t\t\t\\multicolumn{3}{c|}{{\\bf Components}} & \\multicolumn{3}{c|}{Rate$=$0.002\\%} & \\multicolumn{3}{c}{Rate$=$0.01\\%} \\\\\n\t\t\t\\midrule\n\t\t\n\t\t\tGraph & Label & Coup. & \\multirow{2}{*}{mIoU} & \\multirow{2}{*}{mAcc} & \\multirow{2}{*}{OA} & \\multirow{2}{*}{mIoU} & \\multirow{2}{*}{mAcc} & \\multirow{2}{*}{OA} \\\\\n\t\t\tEmb. & Prop. & Attn. 
& & & & & & \\\\\t\t\n\t\t\t\\midrule\n\t\t\t\\checkmark\t&\t\t\t&\t\t\t&39.6 &52.1 &72.4 &48.5 &61.2 &80.3 \\\\\n\t\t\t\\checkmark\t&\\checkmark &\t &40.9 &55.8 &73.6 &50.0 &60.6 &80.8 \\\\\n\t\t\t\\checkmark\t&\\checkmark\t&\\checkmark\t&{\\bf 43.0} &{\\bf 56.4} &{\\bf 76.2} &{\\bf 51.5} &{\\bf 63.8} &{\\bf 82.0}\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{The contribution of different components on Area 5 of the S3DIS dataset with different annotation rates.}\n\t\\label{tab_components}\t\n\\end{table}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.78\\linewidth]{.\/img\/curve_v4_cropped.pdf}\n\t\\caption{The percentage of supervised superpoints (ss) and extended superpoints (es) during training. Note that ``all'' means the overall superpoints.}\n\t\\label{fig_ext_ablation}\n\\end{figure}\n\n\n\n\n\\textbf{Supervision rate.} The number of supervised points plays an important role in the segmentation performance. The more labeled points there are, the smaller the gap in data distribution between semi-supervision and full supervision. To examine the effect of various labeling rates on model performance, we test our method on Area 5 of the S3DIS dataset. The results are shown in Tab. \\ref{tab_comparison}. Combined with Tab. \\ref{tab_results_s3dis}, it can be observed that with only a few labeled points, our model already achieves effective segmentation results. As the amount of supervision grows, the performance of our model further increases. 
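As a side note, the three metrics reported in these tables (mIoU, mAcc, OA) can all be read off a single per-class confusion matrix. The following minimal sketch (our illustration of the standard definitions, not the authors' evaluation code) makes this concrete:

```python
def segmentation_metrics(conf):
    """Compute (mIoU, mAcc, OA) from a square per-class confusion matrix.

    conf[i][j] counts points of ground-truth class i predicted as class j.
    Illustrative sketch only: it assumes every class occurs in the ground truth,
    so no per-class denominator is zero.
    """
    n = len(conf)
    tp = [conf[i][i] for i in range(n)]                   # correct points per class
    gt = [sum(conf[i]) for i in range(n)]                 # ground-truth points per class
    pred = [sum(row[j] for row in conf) for j in range(n)]  # predicted points per class
    iou = [tp[i] / (gt[i] + pred[i] - tp[i]) for i in range(n)]  # per-class IoU
    acc = [tp[i] / gt[i] for i in range(n)]                      # per-class accuracy
    miou = sum(iou) / n          # mean IoU over classes
    macc = sum(acc) / n          # mean class accuracy
    oa = sum(tp) / sum(gt)       # overall point-level accuracy
    return miou, macc, oa
```

Here mIoU averages the per-class intersection-over-union, mAcc averages the per-class recall, and OA is the fraction of correctly labeled points overall.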
It is worth noting that we pay particular attention to the cases with extremely few supervision signals, which are more challenging for the point cloud segmentation task.\n\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\large\n\t\\begin{adjustbox}{width=0.83\\linewidth}\n\t\t\\begin{tabular}{c|ccc|c}\n\t\t\t\\toprule\n\t\t\t\\;\\;\\;\\;\\;{\\bf Rate}\\;\\;\\;\\;\\;&\\;\\;mIoU & mAcc&OA\\;\\;&\\;OA of es\\; \\\\\n\t\t\t\\midrule\n\t\t\t0.002\\% &43.0 &56.4 &76.2 & 87.3 \\\\\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t0.01\\% &51.5 &63.8 &82.0 &90.9 \\\\\n\t\t\n\t\t\n\t\t\t0.1\\% &56.2 &66.1 &84.6 & 91.0 \\\\\n\t\t\n\t\t\n\t\t\t1.0\\% &58.3 &66.5 &85.7 &90.1 \\\\\n\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Comparison of various supervision rates on Area 5 of the S3DIS dataset, where ``es'' represents the extended superpoints.} \n\t\\label{tab_comparison}\n\\end{table}\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\LARGE\n\t\\begin{adjustbox}{width=0.82\\linewidth}\n\t\t\\begin{tabular}{c|ccc}\n\t\t\t\\toprule\n\t\t\t\\;\\;\\;\\;{\\bf Interval} \\bm{$M$}\\;\\;\\;\\; &\\;\\;\\;mIoU\\;\\;\\; &\\;\\;\\;mAcc\\;\\;\\; &\\;\\;\\;OA\\;\\;\\; \\\\\n\t\t\t\\midrule\n\t\t\t20 &50.2 &61.1 &81.2 \\\\\n\t\t\t30 &50.8 &63.3 &81.5 \\\\\t\t\t\n\t\t\t40 &{\\bf 51.5} &{\\bf 63.8} &{\\bf 82.0} \\\\\n\t\t\t50 &49.6 &61.5 &80.7 \\\\\n\t\t\t60 &49.9 &62.2 &81.0 \\\\\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Comparison of segmentation results with various intervals $M$ of the dynamic label propagation method in the case of the supervision rate of 0.01\\%.} \n\t\\label{tab_extension_M}\n\\end{table}\n\n\n\\textbf{Number of the extended superpoints.} The dynamic label propagation strategy plays an important role in our model. Fig. \\ref{fig_ext_ablation} shows the proportion of the supervised superpoints and extended superpoints in the training set when testing on Area 5 of the S3DIS dataset. 
As the number of annotated points increases, the proportion of supervised superpoints grows rapidly, because the probability that a superpoint contains at least one supervised point becomes higher as well. However, when there are fewer supervised points, the percentage of extended superpoints is markedly larger.\nThis demonstrates the importance of pseudo labels in the presence of extremely few point annotations.\n\n\n\n\\textbf{Quality of the extended superpoints.} To analyze the quality of the extended superpoints, we evaluate the overall accuracy of the extended superpoints (OA of es) in Tab. \\ref{tab_comparison}. Note that, similar to the aforementioned metrics, the quantitative results for the extended superpoints are computed at the point level as well. \nFrom Tab. \\ref{tab_comparison}, one can see that the overall accuracy of the extended superpoints is around 90\\%, which demonstrates their high quality. This further proves the effectiveness of our label propagation strategy, which generates high-quality pseudo labels. In addition, the high quality of the pseudo labels of the extended superpoints further explains the improved performance brought by the label propagation module.\n\n\n\\textbf{Epoch interval in dynamic label propagation.}\nDuring training, we perform the dynamic label propagation method every $M$ epochs. For comparison, we train our model with various intervals $M$ while keeping the other parameters unchanged at the supervision rate of 0.01\\%. The evaluation results on Area 5 of the S3DIS dataset are shown in Tab. \\ref{tab_extension_M}. It can be observed that our model achieves the best performance when $M=40$.\n\n\\section{Conclusion}\nIn this paper, we proposed a semi-supervised point cloud segmentation network. We first partitioned the point cloud into superpoints and built superpoint graphs to explore the long-range relations in the point cloud. 
Then based on superpoint graphs, we proposed a dynamic label propagation method combined with a superpoint dropout strategy to generate high-quality pseudo labels for the unsupervised superpoints. Next, we proposed a coupled attention module to learn discriminative contextual features of superpoints and fully exploit the generated pseudo labels. Our method can achieve better performance than the current semi-supervised point cloud segmentation methods with fewer labels.\n\n\\section{Acknowledgments}\nThis work was supported by the National Science Fund of China (Grant Nos.\nU1713208, 61876084), Program for Changjiang Scholars.\n\n{\n\t\\bibliographystyle{IEEEtran}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Relative systoles}\n\n\nWe prove a systolic inequality for a~$\\phi$--relative systole of a\n\\mbox{$\\phi$--essential}\n$2$--complex~$X$, where~$\\phi \\colon\\thinspace \\pi_1(X) \\to G$ is a homomorphism to a\nfinitely presented group~$G$. Thus, we show that universally for\nany~$\\phi$--essential Riemannian~$2$--complex~$X$, and any~$G$, we\nhave~$\\sys(X, \\phi)^2 \\leq 8 \\, \\area(X)$. Combining our results with\na method of L.~Guth, we obtain new quantitative results for\ncertain~$3$--manifolds: in particular for the Poincar\\'e\nhomology sphere~$\\Sigma$, we have~$\\sys(\\Sigma)^3 \\leq 24 \\, \\vol(\\Sigma)$. To\nstate the results more precisely, we need the following definition.\n\nLet~$X$ be a finite connected~$2$--complex. Let~$\\phi \\colon\\thinspace \\pi_{1}(X) \\to\nG$ be a group homomorphism. 
Recall that~$\\phi$ induces a classifying\nmap (defined up to homotopy)~$X \\to K(G,1)$.\n\n\\begin{definition}\nThe complex~$X$ is called~$\\phi$--{\\em essential\\\/} if the classifying\nmap~$X \\to K(G,1)$ cannot be homotoped into the~$1$--skeleton\nof~$K(G,1)$.\n\\end{definition}\n\n\n\n\\begin{definition}\nGiven a piecewise smooth Riemannian metric on~$X$, the~$\\phi$--relative\nsystole of~$X$, denoted~$\\sys(X,\\phi)$, is the least length of a loop\nof~$X$ whose free homotopy class is mapped by~$\\phi$ to a nontrivial\nclass.\n\\end{definition}\n\nWhen~$\\phi$ is the identity homomorphism of the fundamental group, the\nrelative systole is simply called the systole, and denoted~$\\sys(X)$.\n\n\\begin{definition} \n\\label{def:sigma}\nThe~$\\phi$--systolic area~$\\sigma_\\phi(X)$ of~$X$ is defined as\n\\begin{equation*}\n\\sigma_{\\phi}(X) = \\frac{\\area(X)}{\\sys(X,\\phi)^{2}}.\n\\end{equation*}\nFurthermore, we set\n\\begin{equation*}\n\\sigma_{*}(G) = \\inf_{X, \\phi} \\sigma_{\\phi}(X),\n\\end{equation*}\nwhere the infimum is over all~$\\phi$--essential piecewise Riemannian\nfinite connected \\mbox{$2$--complexes}~$X$, and homomorphisms~$\\phi$\nwith values in~$G$.\n\\end{definition}\n\nIn the present text, we prove a systolic inequality for\nthe~$\\phi$--relative systole of a~$\\phi$--essential~$2$--complex~$X$.\nMore precisely, in the spirit of Guth's text \\cite{Gu09}, we prove a\nstronger, {\\em local\\\/} version of such an inequality, for almost\nextremal complexes with minimal first Betti number. Namely, if~$X$\nhas a minimal first Betti number among all~$\\phi$--essential piecewise\nRiemannian \\mbox{$2$--complexes} satisfying~$\\sigma_{\\phi}(X) \\leq\n\\sigma_*(G) +\\varepsilon$ for an~$\\varepsilon>0$, then the area of a\nsuitable disk of~$X$ is comparable to the area of a Euclidean disk of\nthe same radius, in the sense of the following result.\n\n\n\\begin{theorem} \n\\label{13}\nLet~$\\varepsilon >0$. 
Suppose~$X$ has a minimal first Betti number\namong all~$\\phi$--essential piecewise Riemannian\n$2$--complexes satisfying~$\\sigma_{\\phi}(X) \\leq \\sigma_*(G)\n+\\varepsilon$. Then each ball centered at a point~$x$ on\na~$\\phi$--systolic loop in~$X$ satisfies the area lower bound\n\\begin{equation*}\n\\area \\, B(x,r) \\geq\n\\frac{\\left(r-\\varepsilon^{1\/3}\\right)^2}{2+\\varepsilon^{1\/3}}\n\\end{equation*}\nwhenever~$r$ satisfies~$\\varepsilon^{1\/3} \\leq r \\leq\n\\frac{1}{2}\\sys(X,\\phi)$.\n\\end{theorem}\n\nA more detailed statement appears in~Proposition~\\ref{prop:minB}. The\ntheorem immediately implies the following systolic inequality.\n\n\\begin{corollary} \n\\label{coro:A}\nEvery finitely presented group~$G$ satisfies\n\\begin{equation*}\n\\sigma_*(G) \\geq \\frac{1}{8}, \n\\end{equation*}\nso that every piecewise Riemannian~$\\phi$--essential~$2$--complex~$X$\nsatisfies the inequality\n\\begin{equation*}\n\\sys(X,\\phi)^{2} \\leq 8 \\, \\area(X).\n\\end{equation*}\n\\end{corollary}\n\nIn the case of the absolute systole, we prove a similar lower bound\nwith a Euclidean exponent for the area of a suitable disk, when the\nradius is smaller than half the systole, without the assumption of\nnear-minimality. Namely, we will prove the following theorem.\n\n\\begin{theorem} \n\\label{theo:B}\nEvery piecewise Riemannian essential~$2$--complex~$X$ admits a\npoint~$x\\in X$ such that the area of the~$r$--ball centered at~$x$ is\nat least~$r^2$, that is,\n\\begin{equation}\n\\label{11c}\n\\area ( B(x,r)) \\geq r^2,\n\\end{equation}\nfor all~$r \\leq \\frac{1}{2} \\sys(X)$.\n\\end{theorem}\n\nWe conjecture a bound analogous to \\eqref{11c} for the area of a\nsuitable disk of a \\mbox{$\\phi$--essential}~$2$--complex~$X$, with\nthe~$\\phi$--relative systole replacing the systole, {\\it cf.}~the GG-property\nbelow. 
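Let us also record, for completeness, the routine verification that Theorem~\\ref{13} yields Corollary~\\ref{coro:A}. Normalizing so that~$\\sys(X,\\phi)=1$, choosing~$x$ on a~$\\phi$--systolic loop, and taking~$r=\\frac{1}{2}$ in Theorem~\\ref{13}, we obtain\n\\begin{equation*}\n\\sigma_{\\phi}(X) = \\area(X) \\geq \\area \\, B\\left(x,\\tfrac{1}{2}\\right) \\geq \\frac{\\left(\\tfrac{1}{2}-\\varepsilon^{1\/3}\\right)^{2}}{2+\\varepsilon^{1\/3}}.\n\\end{equation*}\nSince~$\\varepsilon>0$ can be chosen arbitrarily small, letting~$\\varepsilon \\to 0$ gives~$\\sigma_{*}(G) \\geq \\frac{1}{8}$, that is, the inequality~$\\sys(X,\\phi)^{2} \\leq 8 \\, \\area(X)$.\n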
The application we have in mind is in the case\nwhen~$\\phi \\colon\\thinspace \\pi_1(X)\\to \\Z_p$ is a homomorphism from the fundamental\ngroup of~$X$ to a finite cyclic group. Note that the conjecture is\ntrue in the case when~$\\phi$ is a homomorphism to~$\\Z_2$, by Guth's\nresult \\cite{Gu09}.\n\n\n\\begin{definition}[GG-property%\n\\footnote{GG-property stands for the property analyzed by M.~Gromov\nand L.~Guth}\n] \n\\label{def:GG}\nLet~$C>0$. Let~$X$ be a finite connected~$2$--complex,\nand~$\\phi \\colon\\thinspace \\pi_1(X) \\to G$, a group homomorphism. We say that~$X$ has\nthe~$\\rm{GG}_{C}$-property for~$\\phi$ if\nevery piecewise smooth Riemannian metric on~$X$ admits a point~$x \\in\nX$ such that the~$r$--ball of~$X$ centered at~$x$ satisfies the bound\n\\begin{equation} \n\\label{eq:ball}\n\\area \\, B(x,r) \\geq C r^2,\n\\end{equation}\nfor every~$r \\leq \\frac{1}{2} \\sys(X,\\phi)$.\n\\end{definition}\n\nNote that if the~$2$--complex~$X$ is~$\\varepsilon$--almost minimal,\ni.e., satisfies the bound $\\sigma_{\\phi}(X) \\leq G_*(G) +\n\\varepsilon$, and has least first Betti number among all such\ncomplexes, then it satisfies~\\eqref{eq:ball} for some~$C>0$ and\nfor~$r\\geq \\varepsilon^{1\/3}$ by Theorem~\\ref{13}.\n\nModulo such a conjectured bound, we prove a systolic inequality for\nclosed~$3$--manifolds with finite fundamental group.\n\n\n\\begin{theorem}\n\\label{theo:main}\nLet~$p\\geq 2$ be a prime. Assume that\nevery~$\\phi$--essential~$2$--complex has the~$\\rm{GG}_C$-property\n\\eqref{eq:ball} for each homomorphism~$\\phi$ into~$\\Z_p$ and for some\nuniversal constant~$C>0$. 
Then every orientable closed\nRiemannian~$3$--manifold~$M$ with finite fundamental group of order\ndivisible by~$p$ satisfies the bound\n\\begin{equation*}\n\\sys(M)^3 \\leq 24 \\, C^{-1} \\; \\vol(M).\n\\end{equation*}\nMore precisely, there is a point~$x\\in M$ such that the volume of\nevery~$r$--ball centered at~$x$ is at least~$\\frac{C}{3}r^{3}$, for all\n$r \\leq \\frac{1}{2} \\sys(M)$.\n\\end{theorem}\n\n\nA slightly weaker bound can be obtained modulo a weaker GG-property,\nwhere the point~$x$ is allowed to depend on the radius~$r$.\n\nSince the GG-property is available for~$p=2$ and~$C=1$ by Guth's article\n\\cite{Gu09}, we obtain the following corollary.\n\n\\begin{corollary}\nEvery closed Riemannian~$3$--manifold~$M$ with fundamental group of\neven order satisfies\n\\begin{equation}\n\\label{Poincare}\n\\sys(M)^3 \\leq 24 \\; \\vol(M).\n\\end{equation}\n\\end{corollary}\n\nFor example, the Poincar\\'e homology~$3$--sphere satisfies the systolic\ninequality \\eqref{Poincare}. \\\\\n\nIn the next section, we present related developments in systolic\ngeometry and compare some of our arguments in the proof of\nTheorem~\\ref{theo:main} to Guth's in~\\cite{Gu09},\n{\\it cf.}~Remark~\\ref{rem:compare}. Additional recent developments in\nsystolic geometry include \\cite{AK, BB10, Bal08, e7, BT, Be08, Bru,\nBru2, Bru3, DKR, DR09, Elm10, EL, Gu09, KK, KK2, Ka4, KR2, KSh, NR,\nPar10, Ro, RS08, Sa08, Sa10}.\n\n\n\n\n\\section{Recent progress on Gromov's inequality}\n\nM.~Gromov's upper bound for the~$1$--systole of an essential\nmanifold~$M$ \\cite{Gr1} is a central result of systolic geometry.\nGromov's proof exploits the Kuratowski imbedding of~$M$ in the Banach\nspace~$L^\\infty$ of bounded functions on~$M$. A complete analytic\nproof of Gromov's inequality \\cite{Gr1}, but still using the\nKuratowski imbedding in~$L^\\infty$, was recently developed by\nL.~Ambrosio and the second-named author~\\cite{AK}. 
See\nalso~\\cite{AW}.\n\nS.~Wenger~\\cite{wen} gave a complete analytic proof of an\nisoperimetric inequality between the volume of a manifold~$M$, and its\nfilling volume, a result of considerable independent interest. On the\nother hand, his result does not directly improve or simplify the proof\nof Gromov's main filling inequality for the filling radius. Note that\nboth the filling inequality and the isoperimetric inequality are\nproved simultaneously by Gromov, so that proving the isoperimetric\ninequality by an independent technique does not directly simplify the\nproof of either the filling radius inequality, or the systolic\ninequality.\n\nL.~Guth \\cite{Gu11} gave a new proof of Gromov's systolic inequality\nin a strengthened {\\em local\\\/} form. Namely, he proved Gromov's\nconjecture that every essential manifold with unit systole contains a\nball of unit radius with volume uniformly bounded away from zero.\n\nMost recently, Guth \\cite {Gu09} re-proved a significant case of\nGromov's systolic inequality \\cite{Gr1} for essential manifolds,\nwithout using Gromov's filling invariants.\n\nActually, in the case of surfaces, Gromov himself had proved better\nestimates, without using filling invariants, by sharpening a technique\nindependently due to Y.~Burago and V.~Zalgaller \\cite[p.~43]{BZ}, and\nJ.~Hebda \\cite{Hebda}. Here the essential idea is the following.\n\nLet~$\\gamma(s)$ be a minimizing non-contractible closed geodesic of\nlength~$L$ in a surface~$S$, where the arclength parameter~$s$ varies\nthrough the interval~$[-\\frac{L}{2}, \\frac{L}{2}]$. We consider\nmetric balls (metric disks)~$B(p,r) \\subset S$ of radius~$r<\n\\frac{L}{2}$ centered at~$p=\\gamma(0)$. The two points~$\\gamma(r)$\nand~$\\gamma(-r)$ lie on the boundary sphere (boundary curve)~$\\partial\nB(p,r)$ of the disk. 
If the points lie in a common connected\ncomponent of the boundary (which is necessarily the case if~$S$ is a\nsurface and~$L=\\sys(S)$, but may fail if~$S$ is a more\ngeneral~$2$--complex), then the boundary curve has length at\nleast~$2r$. Applying the coarea formula\n\\begin{equation}\n\\label{11b}\n\\area \\, B(p,r)=\\int_0^r \\length \\, \\partial B(p,\\rho) \\, d\\rho,\n\\end{equation}\nwe obtain a lower bound for the area which is quadratic in~$r$.\n\nGuth's idea is essentially a higher-dimensional analogue of Hebda's,\nwhere the minimizing geodesic is replaced by a minimizing\nhypersurface. Some of Guth's ideas go back to the even earlier texts\nby Schoen and Yau \\cite{SY78, SY79}.\n\nThe case handled in \\cite{Gu09} is that of~$n$--dimensional manifolds\nof maximal~$\\Z_2$--cuplength, namely~$n$. Thus, Guth's theorem covers\nboth tori and real projective spaces, directly generalizing the\nsystolic inequalities of Loewner and Pu, see \\cite{Pu} and \\cite{SGT}\nfor details.\n\n\n\n\n\\begin{remark} \\label{rem:compare}\nTo compare Guth's argument in his text~\\cite{Gu09} and our proof of\nTheorem~\\ref{theo:main}, we observe that the topological ingredient of\nGuth's technique exploits the multiplicative structure of the\ncohomology ring~$H^*(\\Z_2;\\Z_2)=H^*(\\R {\\mathbb P}^\\infty; \\Z_2)$.\nThis ring is generated by the~$1$--dimensional class. Thus, every\n$n$--dimensional cohomology class decomposes into the cup product\nof~$1$--dimensional classes. This feature enables a proof by\ninduction on~$n$.\n\nMeanwhile, for~$p$ odd, the cohomology ring~$H^*(\\Z_p;\\Z_p)$ is not\ngenerated by the~$1$--dimensional class; see Proposition~\\ref{42} for\na description of its structure. 
Actually, the square of\nthe~$1$--dimensional class is zero, which seems to yield no useful\ngeometric information.\n\nAnother crucial topological tool used in the proof of~\\cite{Gu09} is\nPoincar\\'e duality, which can be applied to the manifolds representing\nthe homology classes in~$H_*(\\Z_2;\\Z_2)$. For~$p$ odd, the homology\nclasses of~$H_{2k}(\\Z_p;\\Z_p)$ cannot be represented by manifolds.\nOne could use D.~Sullivan's notion of~$\\Z_p$--manifolds,\n{\\it cf.}~\\cite{Su,MS}, to represent these homology classes, but they do not\nsatisfy Poincar\\'e duality.\n\nFinally, we mention that, when working with cycles representing\nhomology classes with torsion coefficients in~$\\Z_p$, we exploit a\nnotion of volume which ignores the multiplicities in~$\\Z_p$,\n{\\it cf.}~Definition~\\ref{def:Vol}. This is a crucial feature in our proof. \nNote that minimal cycles with torsion coefficients were studied by\nB.~White \\cite{Wh2}.\n\\end{remark}\n\n\n\n\\section{Area of balls in~$2$--complexes}\n\nIt was proved in~\\cite{Gr1} and~\\cite{KRS} that a finite~$2$--complex\nadmits a systolic inequality if and only if its fundamental group is\nnonfree, or equivalently, if it is~$\\phi$--essential for~$\\phi= {\\rm\nId}$.\n\n\n\nIn~\\cite{KRS}, we used an argument by contradiction, relying on an\ninvariant called {\\em tree energy\\\/}, to prove a bound for the\nsystolic ratio of a~$2$--complex. We present an alternative short proof\nwhich yields a stronger result and simplifies the original argument.\n\n\\begin{theorem} \n\\label{theo:r2}\nLet~$X$ be a piecewise Riemannian finite essential~$2$--complex. 
There\nexists~$x \\in X$ such that the area of every~$r$--ball centered at~$x$\nis at least~$r^2$ for every~$r \\leq \\frac{1}{2} \\sys(X)$.\n\\end{theorem}\n\nAs mentioned in the introduction, we conjecture that this result still\nholds for~$\\phi$--essential complexes and with the~$\\phi$--relative\nsystole in place of~$\\sys$.\n\n\\begin{proof}\nWe can write the Grushko decomposition of the fundamental group of~$X$\nas\n\\begin{equation*}\n\\pi_1(X) = G_1*\\cdots*G_r*F,\n\\end{equation*}\nwhere~$F$ is free, while each group~$G_i$ is nontrivial,\nnon-isomorphic to~$\\Z$, and not decomposable as a nontrivial free\nproduct.\n\nConsider the equivalence class~$[G_1]$ of~$G_1$ under external\nconjugation in~$\\pi_1(X)$. Let~$\\gamma$ be a loop of least length\nrepresenting a nontrivial class~$[\\gamma]$ in~$[G_1]$. Fix~$x \\in\n\\gamma$ and a copy of~$G_1 \\subset \\pi_1(X,x)$ containing the homotopy\nclass of~$\\gamma$. Let~$\\overline{X}$ be the cover of~$X$ with\nfundamental group~$G_1$.\n\n\\begin{lemma}\nWe have~$\\sys(\\overline{X}) = \\length(\\gamma)$.\n\\end{lemma}\n\n\\begin{proof}\nThe loop~$\\gamma$ lifts to~$\\overline{X}$ by construction of the subgroup\n$G_1$. Thus,~$\\sys(\\overline{X}) \\leq \\length(\\gamma)$. Now, the\ncover~$\\overline{X}$ does not contain noncontractible loops~$\\delta$\nshorter than~$\\gamma$, because such loops would project to~$X$ so that\nthe nontrivial class~$[\\delta]$ maps into~$[G_1]$, contradicting our\nchoice of~$\\gamma$.\n\\end{proof}\n\nContinuing with the proof of the theorem, let~$\\bar{x} \\in \\overline{X}$\nbe a lift of~$x$. Consider the level curves of the distance function\nfrom~$\\bar{x}$. Note that such curves are necessarily connected, for\notherwise one could split off a free-product-factor~$\\Z$\nin~$\\pi_1(\\overline{X})=G_1$, {\\it cf.}~\\cite[Proposition 7.5]{KRS},\ncontradicting our choice of~$G_1$. 
In particular, the\npoints~$\\gamma(r)$ and~$\\gamma(-r)$ can be joined by a path contained\nin the curve at level~$r$. Applying the coarea formula~\\eqref{11b},\nwe obtain a lower bound~$\\area \\, B(\\bar{x},r)\\geq r^2$ for the area\nof an~$r$--ball~$B(\\bar{x},r) \\subset \\overline{X}$, for all~$r \\leq\n\\frac{1}{2} \\length(\\gamma)=\\frac{1}{2} \\sys(\\overline{X})$.\n\nIf, in addition, we have~$r \\leq \\frac{1}{2}\\sys(X)$ (which a priori\nmight be smaller than~$\\frac{1}{2} \\sys(\\overline{X})$), then the ball\nprojects injectively to~$X$, proving that\n\\begin{equation*}\n\\area(B(x,r)\\subset X) \\geq r^2\n\\end{equation*}\nfor all~$r\\leq \\frac{1}{2}\\sys(X)$.\n\\end{proof}\n\n\n\\section{Outline of argument for relative systole}\n\\label{three}\n\nLet~$X$ be a piecewise Riemannian connected~$2$--complex, and\nassume~$X$ is~$\\phi$--essential for a group homomorphism\n$\\phi \\colon\\thinspace \\pi_1(X)\\to G$. We would like to prove an area lower bound\nfor~$X$, in terms of the~$\\phi$--relative systole as in\nTheorem~\\ref{theo:r2}. Let~$x \\in X$. Denote by~$B=B(x,r)$ and\n$S=S(x,r)$ the open ball and the sphere (level curve) of radius~$r$\ncentered at~$x$ with~$r < \\frac{1}{2} \\sys(X,\\phi)$. Consider the\ninterval~$I=[0,\\frac{L}{2}]$, where~$L=\\length(S)$.\n\n\\begin{definition} \n\\label{def:Y}\nWe consider the complement~$X \\setminus B$, and attach to it a buffer\ncylinder along each connected component~$S_i$ of~$S$. Here a buffer\ncylinder with base~$S_i$ is the quotient\n\\begin{equation*}\nS_i \\times I\/ \\!\\! \\sim\n\\end{equation*}\nwhere the relation~$\\sim$ collapses each subset~$S_i \\times \\{ 0 \\}$\nto a point~$x_i$. We thus obtain the space\n\\[\n\\left( S_i \\times I\/ \\!\\! 
\\sim \\right) \\cup_f \\left( X \\setminus B\n\\right),\n\\]\nwhere the attaching map~$f$ identifies\n$S_i\\times\\left\\{\\tfrac{L}{2}\\right\\}$ with~$S_i\\subset X\\setminus B$.\nTo ensure the connectedness of the resulting space, we attach a\ncone~$CA$ over the set of points~$A=\\{ x_i \\}$. We set the length of the\nedges of the cone~$CA$ equal to~$\\sys(X,\\phi)$. We will denote by\n\\begin{equation} \\label{eq:Y}\nY=Y(x,r)\n\\end{equation}\nthe resulting~$2$--complex. The natural metrics on~$X \\setminus B$ and\non the buffer cylinders induce a metric on~$Y$.\n\\end{definition}\n\nIn the next section, we will show that~$Y$ is~$\\psi$--essential for\nsome homomorphism~$\\psi \\colon\\thinspace \\pi_1(Y) \\to G$ derived from~$\\phi$. The\npurpose of the buffer cylinder is to ensure that the relative systole\nof~$Y$ is at least as large as the relative systole of~$X$. Note that\nthe area of the buffer cylinder is~$L^2\/2$.\n\n\nWe normalize~$X$ to unit relative systole and take a point~$x$ on a\nrelative systolic loop of~$X$.\nSuppose~$X$ has a minimal first Betti number among the complexes\nessential in~$K(G,1)$ with almost minimal systolic area (up to\nepsilon). We sketch below the proof of the local relative systolic\ninequality satisfied by~$X$.\n\nIf for every~$r$, the space~$Y=Y(x,r)$ has a greater area than~$X$,\nthen\n\\begin{equation*}\n\\area \\, B(r) \\leq \\tfrac{1}{2}(\\length \\, S(r))^2\n\\end{equation*}\nfor every~$r < \\frac{1}{2} \\sys(X,\\phi)$. Using the coarea\ninequality, this leads to the differential inequality~$y(r) \\leq\n\\tfrac{1}{2} y'(r)^2$. Integrating this relation shows that the area\nof~$B(r)$ is at least~$\\frac{r^2}{2}$, and the conclusion follows.\n\nIf for some~$r$, the space~$Y$ has a smaller area than~$X$, we argue\nby contradiction. We show that a~$\\phi$--relative systolic loop of~$X$\n(passing through~$x$) meets at least two connected components of the\nlevel curve~$S(r)$. 
These two connected components project to two\nendpoints of the cone~$CA$ connected by an arc of~$Y \\setminus CA$.\nUnder this condition, we can remove an edge~$e$ from~$CA$ so that the\nspace~$Y'=Y \\setminus e$ has a smaller first Betti number than~$X$.\nHere~$Y'$ is still essential in~$K(G,1)$, and its relative systolic\narea is smaller than that of~$X$, contradicting\nthe definition of~$X$.\n\n\n\\section{First Betti number and essentialness of~$Y$} \\label{sec:remov}\n\nLet~$G$ be a fixed finitely presented group. We are mostly interested\nin the case of a finite group~$G=\\Z_p$. Unless specified otherwise,\nall group homomorphisms have values in~$G$, and all complexes are\nassumed to be finite. Consider a homomorphism~$\\phi \\colon\\thinspace \\pi_1(X) \\to\nG$ from the fundamental group of a piecewise Riemannian finite\nconnected~$2$--complex~$X$ to~$G$.\n\n\n\\begin{definition}\nA loop~$\\gamma$ in~$X$ is said to be~$\\phi$--contractible if the image\nof the homotopy class of~$\\gamma$ by~$\\phi$ is trivial, and\n$\\phi$--noncontractible otherwise. Thus, the~$\\phi$--systole of~$X$,\ndenoted by~$\\sys(X,\\phi)$, is defined as the least length of a\n$\\phi$--noncontractible loop in~$X$. Similarly, the\n\\mbox{$\\phi$--systole} based at a point~$x$ of~$X$, denoted\nby~$\\sys(X,\\phi,x)$, is defined as the least length of\na~$\\phi$--noncontractible loop based at~$x$.\n\\end{definition}\n\n\\forget\n\\begin{definition}\n\\label{42b}\nThe~$\\phi$--systolic area of~$X$ is defined as\n$$\n\\sigma_{\\phi}(X) = \\frac{\\area(X)}{\\sys(X,\\phi)^{2}}.\n$$\n\\end{definition}\n\\forgotten\n\nThe following elementary result will be used repeatedly in the sequel.\n\n\\begin{lemma} \n\\label{lem:trivial}\nIf~$r < \\frac{1}{2} \\sys(X,\\phi,x)$, then\nthe~$\\pi_{1}$--homomorphism~$i_{*}$ induced by the inclusion~$B(x,r)\n\\subset X$ is trivial when composed with~$\\phi$, that is~$\\phi \\circ\ni_{*}=0$. 
More specifically, every loop in~$B(x,r)$ is homotopic to a\ncomposition of loops based at~$x$ of length at most~$2r+\\varepsilon$, for\nevery~$\\varepsilon>0$.\n\\end{lemma}\n\n\n\nWithout loss of generality, we may assume that the piecewise\nRiemannian metric on~$X$ is piecewise flat. Let~$x_{0} \\in X$. The\npiecewise flat~$2$--complex~$X$ can be embedded into some~$\\R^N$ as a\nsemialgebraic set and the distance function~$f$ from~$x_0$ is a\ncontinuous semialgebraic function on~$X$, {\\it cf.}~\\cite{BCR98}.\nThus,~$(X,B)$ is a CW-pair when~$B$ is a ball centered at~$x_0$ (see\nalso \\cite[Corollary~6.8]{KRS}). Furthermore, for almost every~$r$,\nthere exists an~$\\eta >0$ such that the set\n\\begin{equation*}\n\\{ x \\in X \\mid r-\\eta < f(x) < r+\\eta \\}\n\\end{equation*}\nis homeomorphic to~$S(x_0,r) \\times (r-\\eta,r+\\eta)$, where~$S(x_0,r)$\nis the~$r$--sphere centered at~$x_{0}$ and the~$t$--level curve of~$f$\ncorresponds to~$S(x_0,r) \\times \\{t\\}$, {\\it cf.}~\\cite[\\S~9.3]{BCR98}\nand~\\cite{KRS} for a precise description of level curves on~$X$. \nIn this case, we say that~$r$ is a \\emph{regular value} of~$f$. \\\\\n\n\\forget \n\nSince the function~$\\ell(r) = \\length \\, f^{-1}(r)$ is piecewise\ncontinuous, {\\it cf.}~\\cite[\\S~9.3]{BCR98}, the condition~$\\area \\, B >\n\\lambda \\, (\\length \\, S)^{2}$ is open (see ~\\eqref{eq:lambda} below).\nTherefore, slightly changing the value of~$r$ if necessary, we can\nassume that~$r$ is regular. \n\n\\forgotten\n\nConsider the connected~$2$--complex~$Y=Y(x_0,r)$ introduced in\nDefinition~\\ref{def:Y}, with~$r < \\frac{1}{2} \\sys(X,\\phi)$ and~$r$\nregular. 
Since~$r$ is a regular value, there exists~$r_- \\in (0,r)$\nsuch that~$B \\setminus B(x_0,r_-)$ is homeomorphic to the product\n\\[\nS \\times [r_-,r) = \\coprod_i S_i \\times [r_-,r).\n\\]\nConsider the map\n\\begin{equation} \n\\label{eq:XY}\n\\pi \\colon\\thinspace X \\to Y\n\\end{equation}\nwhich leaves~$X \\setminus B$ fixed, takes~$B(x_0,r_-)$ to the vertex\nof the cone~$CA$, and sends~$B \\setminus B(x_0,r_-)$ to the union of\nthe buffer cylinders and~$CA$.\nThis map induces an epimorphism between the first\nhomology groups. In particular,\n\\begin{equation} \\label{eq:b1}\nb_1(Y) \\leq b_1(X).\n\\end{equation}\n\n\\medskip\n\n\\forget\n\\begin{lemma}\n\\label{lem:betti}\nWe have\n\\[\nb_1(Y) \\leq b_1(X).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nSince~$r$ is a regular value, there exists~$r_- \\in (0,r)$ such that~$B \\setminus B(x,r_-)$ is homeomorphic to~$S \\times [r_-,r) = \\coprod_i S_i \\times [r_-,r)$.\nThe map~$X \\to Z$ which leaves~$X \\setminus B$ fixed and takes~$B(x,r_-)$ to the vertex of the cone~$C$ of~$Z$ induces an epimorphism between the first homology groups.\nHence the result.\n\\end{proof}\n\\forgotten\n\n\n\\forget\n\\begin{proof}\nLet~$f$ be the distance function from~$x_0$. It is convenient to\nintroduce the Reeb space~$\\widehat{X}$ obtained from~$X$ by collapsing to\npoints the connected components of the level curves~$f^{-1}(t)$, for\nevery~$t \\in [0,r]$ (level curves for~$t>r$ are unaffected). Note\nthat the lower bound for the systole no longer holds for~$\\widehat{X}$ due\nto possible ``shortcuts'' created in the graph~$T\\subset\\widehat{X}$\ncorresponding to~$t\\leq r$.\n\nSince the fibers of the map~$X\\to \\widehat{X}$ are connected, by the\ncovering homotopy property, we obtain that every closed path in\n$\\widehat{X}$ lifts to a closed path in~$X$, proving the surjectivity of\n$\\pi_1(X)\\to \\pi_1(\\widehat{X})$.\n\nWe first assume that~$X\\setminus B$ is connected. 
Then the Reeb\nspace~$\\widehat{X}$ is homotopy equivalent%\n\\footnote{The fact that the ``Reeb graph'' is indeed a finite graph\nfollows from semialgebraicity; see \\cite{KRS} for a detailed\ndiscussion.}\nto the union~$Y \\cup T$ obtained by attaching a finite graph~$T$ to\nthe finite set~$\\{x_1,\\ldots,x_n\\} \\subset Y$,\ncf.~Definition~\\eqref{def:Y}. By van Kampen's theorem, the removal of\nthe graph leads to a further decrease in the Betti number. The\nnon-closed path~$\\alpha$ closes up to a loop in~$X$ but not in~$Y$.\n\nIf~$X\\setminus B$ is not connected, our space~$Y$ is homotopy\nequivalent to a connected component of~$\\widehat{X}$ with the graph~$T$\nremoved, proving the lemma.\n\\end{proof}\n\\forgotten\n\n\n\\forget\nLet~$A=\\{x_1,\\ldots, x_n\\}$ be the finite set formed by the\npoints~$x_i$. Let~$Y \\cup CA$ be the space obtained by attaching a\ncone over~$A$ to~$Y$. Consider the map\n\\[\n\\widehat{X} \\to Y \\cup CA\n\\]\nwhich leaves~$Y$ fixed and takes~$T \\setminus (\\cup_i e_i)$ to the\nvertex of~$CA$, where the~$e_i$ are the semi-edges of~$T$ with\nendpoints~$x_i$. The composite\n\\[\nX \\to \\widehat{X} \\to Y \\cup CA,\n\\]\nwhere~$X \\to \\widehat{X}$ is the quotient map, leaves~$X \\setminus\n\\overline{B}$ fixed and induces an epimorphism between the first\nhomology groups. Hence,\n$$\nb_1(Y) \\leq b_1(Y \\cup CA) \\leq b_1(X).\n$$\n\nNow, suppose that the projection of some arc~$\\alpha$ of~$X \\setminus\nB$ to~$Y$ connects two points of~$A$. Then the space~$Y \\cup CA$ is\nhomotopy equivalent to~$(Y \\cup CA') \\vee S^{1}$, where~$A'\n\\subset A$. That is,\n$$\nY \\cup CA \\simeq (Y \\cup CA') \\vee S^{1}.\n$$\nWe deduce that\n$$\nb_1(Y) < b_1(Y \\cup CA) \\leq b_1(X). 
\n$$\n\\forgotten\n\n\n\\begin{lemma} \\label{lem:class}\nIf~$r < \\frac{1}{2} \\sys(X,\\phi)$, then~$Y$ is~$\\psi$--essential for\nsome homomorphism~$\\psi \\colon\\thinspace \\pi_{1}(Y) \\to G$ such that\n\\begin{equation} \\label{eq:circ}\n\\psi \\circ \\pi_* =\\phi\n\\end{equation}\nwhere~$\\pi_*$ is the~$\\pi_1$--homomorphism induced by \\mbox{$\\pi \\colon\\thinspace X \\to Y$}.\n\\end{lemma}\n\n\\begin{proof}\nConsider the CW-pair~$(X,B)$ where~$B=B(x_0,r)$. By\nLemma~\\ref{lem:trivial}, the restriction of the classifying\nmap~$\\varphi \\colon\\thinspace X \\to K(G,1)$ induced by~$\\phi$ to~$B$ is homotopic to a\nconstant map. Thus, the classifying map~$\\varphi$ extends to~$X \\cup\nCB$ and splits into\n\\[\nX \\hookrightarrow X \\cup CB \\to K(G,1),\n\\]\nwhere~$CB$ is a cone over~$B \\subset X$ and the first map is the\ninclusion map. Since~$X \\cup CB$ is homotopy equivalent to the\nquotient~$X\/B$, {\\it cf.}~\\cite[Example~0.13]{Hat}, we obtain the following\ndecomposition of~$\\varphi$ up to homotopy:\n\\begin{equation} \\label{eq:XB}\nX \\stackrel{\\pi}{\\longrightarrow} Y \\to X\/B \\to K(G,1).\n\\end{equation}\n\nHence,~$\\psi \\circ \\pi_* = \\phi$ for the~$\\pi_1$--homomorphism~$\\psi \\colon\\thinspace \\pi_1(Y) \\to G$ induced by the map~$Y \\to K(G,1)$. \nIf the map~$Y \\to K(G,1)$ can be homotoped into the~$1$--skeleton of~$K(G,1)$, the same\nis true for\n\\[\nX \\to Y \\to K(G,1)\n\\]\nand so for the homotopy equivalent map~$\\varphi$, which contradicts\nthe~$\\phi$--essentialness of~$X$.\n\\end{proof}\n\n\n\\section{Exploiting a ``fat\" ball}\n\nWe normalize the~$\\phi$--relative systole of~$X$ to one, i.e.~$\\sys(X,\\phi)=1$. 
\nChoose a fixed~$\\delta \\in (0,\\frac{1}{2})$\n(close to~$0$) and a real parameter~$\\lambda > \\frac{1}{2}$ (close\nto~$\\frac{1}{2}$).\n\n\\begin{proposition} \n\\label{prop:reeb}\nSuppose there exist a point~$x_{0} \\in X$ and a value~$r_{0} \\in\n(\\delta,\\frac{1}{2})$ regular for~$f$ such that\n\\begin{equation} \n\\label{eq:lambda}\n\\area \\, B > \\lambda \\, (\\length \\, S)^{2}\n\\end{equation}\nwhere~$B=B(x_{0},r_{0})$ and~$S=S(x_{0},r_{0})$. \nThen there exists a\npiecewise flat metric on~$Y=Y(x_{0},r_{0})$\nsuch that the systolic areas ({\\it cf.}~Definition~\\ref{def:sigma}) satisfy\n$$\n\\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X).\n$$\n\\end{proposition}\n\n\\begin{proof}\nConsider the metric on~$Y$ described in Definition~\\ref{def:Y}.\nStrictly speaking, the metric on~$Y$ is not piecewise flat since the\nconnected components of~$S$ are collapsed to points, but it can be\napproximated by piecewise flat metrics.\n\nDue to the presence of the buffer cylinders, every loop of~$Y$ of\nlength less than~$\\sys(X,\\phi)$ can be deformed into a loop of~$X\n\\setminus B$ without increasing its length. Thus, by~\\eqref{eq:circ},\none obtains\n\\begin{equation*}\n\\sys(Y,\\psi) \\geq \\sys(X,\\phi) = 1.\n\\end{equation*}\nFurthermore, we have\n\\begin{equation*}\n\\area \\, Y \\leq \\area \\, X - \\area \\, B + \\tfrac{1}{2} (\\length \\,\nS)^{2}.\n\\end{equation*}\nCombined with the inequality~\\eqref{eq:lambda}, this leads to\n\\begin{equation} \\label{eq:wh}\n\\sigma_{\\psi}(Y) < \\sigma_{\\phi}(X) - \\left( \\lambda - \\tfrac{1}{2}\n\\right) (\\length \\, S)^{2}.\n\\end{equation}\nHence,~$\\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X)$,\nsince~$\\lambda > \\frac{1}{2}$.\n\\end{proof}\n\n\n\n\\section{An integration by separation of variables}\n\nLet~$X$ be a piecewise Riemannian finite connected~$2$--complex. Let\n\\mbox{$\\phi \\colon\\thinspace \\pi_{1}(X) \\to G$} be a nontrivial homomorphism to a\ngroup~$G$. 
We normalize the metric to unit relative systole:\n$\\sys(X,\\phi)=1$. The following area lower bound appeared\nin~\\cite[Lemma~7.3]{RS08}.\n\n\\begin{lemma} \\label{lem:BS}\nLet~$x \\in X$,~$\\lambda >0$ and~$\\delta \\in (0,\\frac{1}{2})$.\nIf \n\\begin{equation} \\label{eq:BS}\n\\area \\, B(x,r) \\leq \\lambda \\, (\\length \\, S(x,r))^{2}\n\\end{equation}\nfor almost every~$r \\in (\\delta,\\frac{1}{2})$, then \n$$\n\\area \\, B(x,r) \\geq \\frac{1}{4\\lambda} (r-\\delta)^{2}\n$$\nfor every~$r \\in (\\delta,\\frac{1}{2})$.\n\nIn particular,~$\\displaystyle \\area(X) \\geq \\frac{1}{16 \\lambda} \\,\n\\sys(X,\\phi)^{2}$.\n\\end{lemma}\n\n\\begin{proof}\nBy the coarea formula, we have\n\\begin{equation*}\na(r) := \\area \\, B(x,r) = \\int_0^r \\ell(s) \\, ds\n\\end{equation*}\nwhere~$\\ell(s)=\\length \\, S(x,s)$. Since the function~$\\ell(r)$ is\npiecewise continuous, the function~$a(r)$ is continuously\ndifferentiable for all but finitely many~$r$ in~$(0,\\frac{1}{2})$\nand~$a'(r)=\\ell(r)$ for all but finitely many~$r$\nin~$(0,\\frac{1}{2})$. 
By hypothesis, we have\n$$\na(r) \\leq \\lambda \\, a'(r)^2\n$$\nfor all but finitely many~$r$ in~$(\\delta,\\frac{1}{2})$.\nThat is,\n$$ \\left( \\sqrt{a(r)} \\right)' = \\frac{a'(r)}{2 \\sqrt{a(r)}} \\geq\n\\frac{1}{2\\sqrt{\\lambda}}.~$$\nWe now integrate this differential inequality from~$\\delta$ to~$r$, to\nobtain\n$$\n \\sqrt{a(r)} \\geq \\frac{1}{2\\sqrt{\\lambda}} (r-\\delta).\n$$\nHence, for every~$r \\in (\\delta, \\frac{1}{2})$, we obtain\n\\[\n a(r) \\geq \\frac{1}{4 \\lambda} (r-\\delta)^{2},\n\\]\ncompleting the proof.\n\\end{proof}\n\\forget\n, or\n\\begin{equation*}\ndr \\leq \\lambda^{1\/2} a^{-1\/2} da.\n\\end{equation*}\nWe now integrate this differential inequality from~$\\delta$ to~$r$, to\nobtain\n\\begin{equation*}\nr-\\delta \\leq \\int_{a(\\delta)}^{a(r)} \\lambda^{1\/2} a^{-1\/2} da,\n\\end{equation*}\nand hence\n\\begin{equation*}\nr-\\delta \\leq 2 \\lambda^{1\/2} \\left( a(r)^{1\/2} - a(\\delta^{1\/2})\n\\right) \\leq 2 \\lambda^{1\/2} a(r)^{1\/2},\n\\end{equation*}\nproving the result so long as we have the inequality for all the\nintermediate values of~$r$. \n\nWhy is that?\n\\forgotten\n\n\n\n\\section{Proof of relative systolic inequality}\n\nWe prove that if~$X$ is a~$\\phi$--essential piecewise\nRiemannian~$2$--complex which is almost minimal (up to~$\\varepsilon$),\nand has least first Betti number among such complexes, then~$X$ possesses\nan~$r$--ball of large area for each~$r< \\tfrac{1}{2} \\sys(X, \\phi)$.\nWe have not been able to find such a ball for an\narbitrary~$\\phi$--essential complex (without the assumption of almost\nminimality), but at any rate the area lower bound for almost minimal\ncomplexes suffices to prove the~$\\phi$--systolic inequality for\nall~$\\phi$--essential complexes, as shown below.\n\n\\forget\n\\begin{definition}\nLet~$G$ be a group. 
We set\n\\begin{equation*}\n\\sigma_{*}(G) = \\inf_{X} \\sigma_{\\phi}(X),\n\\end{equation*}\nwhere the infimum is over all~$\\phi$--essential piecewise Riemannian\nfinite~$2$--complexes~$X$, where the homomorphism~$\\phi$ has values\nin~$G$.\n\\end{definition}\n\\forgotten\n\n\\begin{remark}\nWe do not assume at this point that~$\\sigma_{*}(G)$ is nonzero,\n{\\it cf.}~Definition~\\ref{def:sigma}. In fact, the proof of\n$\\sigma_{*}(G)>0$ does not seem to be any easier than the explicit\nbound of Corollary~\\ref{coro:A}.\n\\end{remark}\n\nTheorem~\\ref{13} and Corollary~\\ref{coro:A} are consequences of the\nfollowing result.\n\n\\begin{proposition} \n\\label{prop:minB}\nLet~$\\varepsilon >0$. Suppose~$X$ has a minimal first Betti number\namong all~$\\phi$--essential piecewise Riemannian~$2$--complexes\nsatisfying \n\\begin{equation} \\label{eq:eps}\n\\sigma_{\\phi}(X) \\leq \\sigma_*(G) +\\varepsilon.\n\\end{equation} \nThen each ball centered at a point~$x$ on a~$\\phi$--systolic loop in~$X$\nsatisfies the area lower bound\n\\begin{equation*}\n\\area \\, B(x,r) \\geq\n\\frac{(r-\\delta)^2}{2+\\frac{\\varepsilon}{\\delta^2}}\n\\end{equation*}\nfor every~$r \\in \\left(\\delta,\\frac{1}{2}\\sys(X,\\phi) \\right)$, where\n$\\delta \\in \\left(0,\\frac{1}{2}\\sys(X,\\phi)\\right)$. 
In particular,\nwe obtain the bound\n\begin{equation*}\n\sigma_*(G) \geq \frac{1}{8}.\n\end{equation*}\n\end{proposition}\n\n\forget\n\begin{proof}\nIf for each~$r$ we have~$a(r) \leq a'(r)$ then we separate variables\nas in the previous section to obtain the area lower bound.\n\nIf for some~$r$, we have~$a(r) > a'(r)$, then there are two\npossibilities: either~$S(r)$ is connected, and we obtain a lower bound\nof~$r^2$ for the area by Hebda's trick, or~$S(r)$ is disconnected.\nBut the latter case is impossible by the hypothesis of minimality of\nthe Betti number.\n\end{proof}\n\nWe can now proceed with the proof of the relative systolic inequality\nfor essential~$2$--complexes.\n\forgotten\n\n\begin{proof}\nWe will use the notation and results of the previous sections.\nChoose~$\lambda > 0$ such that\n\begin{equation}\n\label{52}\n\varepsilon < 4 \left(\lambda - \tfrac{1}{2} \right) \delta^{2}.\n\end{equation}\nThat is,\n\[\n\lambda > \frac{1}{2} + \frac{\varepsilon}{4 \delta^2} \quad \mbox{\n(close to } \frac{1}{2} + \frac{\varepsilon}{4 \delta^2}).\n\]\nWe normalize the metric on~$X$ so that its~$\phi$--systole is equal to one.\nChoose a point~$x_{0} \in X$ on a~$\phi$--systolic loop~$\gamma$ of~$X$. \n\nIf the balls centered at~$x_0$ are too ``thin'',\ni.e., the inequality~\eqref{eq:BS} is satisfied for~$x_{0}$ and almost\nevery~$r \in (\delta,\frac{1}{2})$, then the result follows from\nLemma~\ref{lem:BS}. 
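For the reader's convenience, here is a sketch of the arithmetic behind the constant in Proposition~\ref{prop:minB}; we take~$\lambda$ at the limiting value allowed by~\eqref{52}, an assumption made only for this illustration. With~$\lambda = \frac{1}{2} + \frac{\varepsilon}{4 \delta^{2}}$, one gets\n\[\n\frac{1}{4\lambda} = \frac{1}{2 + \frac{\varepsilon}{\delta^{2}}},\n\]\nso the conclusion of Lemma~\ref{lem:BS} reads\n\[\n\area \, B(x,r) \geq \frac{(r-\delta)^{2}}{2 + \frac{\varepsilon}{\delta^{2}}}.\n\]\nLetting~$r \to \frac{1}{2}$ and then choosing~$\delta$ and~$\varepsilon\/\delta^{2}$ arbitrarily small, the right-hand side tends to~$\frac{1}{8}$, which accounts for the bound~$\sigma_{*}(G) \geq \frac{1}{8}$. 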
\n\nWe can therefore assume that there exists a ``fat'' ball centered at~$x_0$, i.e., the hypothesis of Proposition~\\ref{prop:reeb} holds\nfor~$x_{0}$ and some regular~$f$--value~$r_{0} \\in\n(\\delta,\\frac{1}{2})$, where~$f$ is the distance function from~$x_0$.\n(Indeed, almost every~$r$ is regular for~$f$.)\nArguing by contradiction, we show that the assumption on the minimality of the first Betti number rules out this case.\n\nWe would like to construct a~$\\psi$--essential piecewise\nflat~$2$--complex~$Y'$ with~$b_1(Y') < b_1(X)$ such that\n$\\sigma_{\\psi}(Y') \\leq \\sigma_{\\phi}(X)$ and therefore\n\\begin{equation}\n\\sigma_{\\psi}(Y') \\leq \\sigma_{*}(G) + \\varepsilon\n\\end{equation}\nfor some homomorphism~$\\psi \\colon\\thinspace \\pi_{1}(Y') \\to G$.\n\nBy Lemma~\\ref{lem:class} and Proposition~\\ref{prop:reeb}, the space~$Y = Y(x_{0},r_{0})$, endowed with the piecewise Riemannian metric of Proposition~\\ref{prop:reeb}, satisfies\n\\begin{equation*}\n\\sigma_{*}(G) \\leq \\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X).\n\\end{equation*}\nCombined with the inequalities~\\eqref{eq:wh} in the proof of\nProposition~\\ref{prop:reeb} and~\\eqref{eq:eps}, this yields\n\\begin{equation*}\n\\left( \\lambda - \\frac{1}{2} \\right) (\\length \\, S)^{2} < \\varepsilon.\n\\end{equation*}\nFrom~$\\varepsilon < 4 (\\lambda - \\frac{1}{2}) \\delta^{2}$\nand~$\\delta \\leq r_{0}$, we deduce that\n$$\n\\length \\, S < 2 r_{0}.\n$$\n\nNow, by Lemma~\\ref{lem:trivial}, the~$\\phi$--systolic\nloop~$\\gamma\\subset X$ does not entirely lie in~$B$. Therefore, there\nexists an arc~$\\alpha_0$ of~$\\gamma$ passing through~$x_{0}$ and lying\nin~$B$ with endpoints in~$S$. We have\n\\[\n\\length(\\alpha_0) \\geq 2r_{0}.\n\\]\nIf the endpoints of~$\\alpha_0$ lie in the same connected component\nof~$S$, then we can join them by an arc~$\\alpha_1 \\subset S$ of length\nless than~$2r_{0}$. 
By Lemma~\\ref{lem:trivial}, the loop~$\\alpha_0\n\\cup \\alpha_1$, lying in~$B$, is~$\\phi$--contractible. Therefore, the\nloop~$\\alpha_1 \\cup (\\gamma \\setminus \\alpha_0)$, which is shorter\nthan~$\\gamma$, is~$\\phi$--noncontractible. Hence a contradiction.\n\nThis shows that the~$\\phi$--systolic loop~$\\gamma$ of~$X$ meets two\nconnected components of~$S$. \n\nSince a~$\\phi$--systolic loop is length-minimizing, the loop~$\\gamma$\nintersects~$S$ exactly twice. Therefore, the complementary\narc~$\\alpha=\\gamma \\setminus \\alpha_0$, joining two connected\ncomponents of~$S$, lies in~$X \\setminus B$.\nThe two endpoints of~$\\alpha$ are connected by a length-minimizing arc\nof~$Y \\setminus (X \\setminus \\overline{B})$ passing exactly through\ntwo edges of the cone~$CA$.\n\nLet~$Y'$ be the~$2$--complex obtained by removing the interior of one\nof these two edges from~$Y$. The complex~$Y'=Y \\setminus e$ is\nclearly connected and the space~$Y$, obtained by gluing back the\nedge~$e$ to~$Y$, is homotopy equivalent to~$Y' \\vee S^1$. That is,\n\\begin{equation} \\label{eq:Y'}\nY \\simeq Y' \\vee S^1.\n\\end{equation}\nThus,~$Y'$ is~$\\psi$--essential if we still denote by~$\\psi$ the\nrestriction of the homomorphism~$\\psi \\colon\\thinspace \\pi_1(Y) \\to G$\nto~$\\pi_1(Y')$. Furthermore, we clearly have\n\\[\n\\sigma_{\\psi}(Y') = \\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X).\n\\]\nCombined with~\\eqref{eq:b1}, the homotopy equivalence~\\eqref{eq:Y'}\nalso implies\n$$\nb_1(Y') < b_1(Y) \\leq b_1(X).\n$$\nHence the result.\n\\end{proof}\n\n\n\\begin{remark}\nWe could use round metrics (of constant positive Gaussian curvature)\non the ``buffer cylinders\" of the space~$Y$ in the proof of\nProposition~\\ref{prop:reeb}. 
This would allow us to choose~$\lambda$\nclose to~$\frac{1}{2 \pi}$ and to derive the lower bound\nof~$\frac{\pi}{8}$ for~$\sigma_{\phi}(X)$ in Corollary~\ref{coro:A}.\nWe chose to use flat metrics for the sake of simplicity.\n\end{remark}\n\n\n\section{Cohomology of Lens spaces}\n\nLet~$p$ be a prime number. The group~$G=\Z_p$ acts freely on the\ncontractible sphere~$S^{2\infty+1}$ yielding a model for the\nclassifying space\n\begin{equation*}\nK = K(\Z_{p},1) = S^{2\infty+1}\/\Z_{p}.\n\end{equation*}\nThe following facts are well-known, {\it cf.}~\cite{Hat}.\n\n\begin{proposition}\n\label{42}\n\n\nThe cohomology ring~$H^*(\Z_p;\Z_p)$ for~$p$ an odd prime is the\nalgebra~$\Z_p(\alpha)[\beta]$ which is exterior on one\ngenerator~$\alpha$ of degree~$1$, and polynomial with one\ngenerator~$\beta$ of degree~$2$. Thus,\n\begin{itemize}\n\item\n$\alpha$ is a generator of~$H^1(\Z_p;\Z_p)\simeq \Z_p$,\nsatisfying~$\alpha^2=0$;\n\item\n$\beta$ is a generator of~$H^2(\Z_p;\Z_p)\simeq \Z_p$.\n\end{itemize}\n\end{proposition}\n\nHere the~$2$--dimensional class is the image under the Bockstein\nhomomorphism of the~$1$--dimensional class. The cohomology of the\ncyclic group is generated by these two classes. The cohomology is\nperiodic with period~$2$ by Tate's theorem. Every even-dimensional\nclass is proportional to~$\beta^n$. Every odd-dimensional class is\nproportional to~$\alpha \cup \beta^n$.\n\nFurthermore, the reduced integral homology is~$\Z_p$ in odd dimensions\nand vanishes in even dimensions. The integral cohomology is~$\Z_p$ in\neven positive dimensions, generated by a lift of the class~$\beta$\nabove to~$H^2(\Z_p;\Z)$.\n\n\begin{proposition}\n\label{41}\n\label{33}\nLet~$M$ be a closed~$3$--manifold with~$\pi_1(M)=\Z_{p}$. 
Then its\nclassifying map~$\\varphi \\colon\\thinspace M \\to K$ induces an\nisomorphism\n\\[\n\\varphi_i \\colon\\thinspace H_i(M;\\Z_p)\\simeq H_i(K;\\Z_p)\n\\]\nfor~$i=1,2,3$.\n\\end{proposition}\n\n\\begin{proof}\nSince~$M$ is covered by the sphere, for~$i=2$ the isomorphism is a\nspecial case of Whitehead's theorem. Now consider the exact sequence\n(of Hopf type)\n\\begin{equation*}\n\\pi_3(M) \\overset{\\times p}{\\longrightarrow}H_3(M;\\Z)\\to\nH_3(\\Z_p;\\Z)\\to 0\n\\end{equation*}\nsince~$\\pi_2(M)=0$. Since the homomorphism~$H_3(M;\\Z) \\to\nH_3(\\Z_p;\\Z)$ is onto, the result follows by reduction modulo~$p$.\n\\end{proof}\n\n\n\n\n\\section{Volume of a ball}\n\nOur Theorem \\ref{theo:main} is a consequence of the following result.\n\n\\begin{theorem} \n\\label{theo:ball}\nAssume the~$\\rm{GG}_C$-property~\\eqref{eq:ball} is satisfied for some universal constant\n$C>0$ and every homomorphism~$\\phi$ into a finite group~$G$. \nThen every closed\nRiemannian~$3$--manifold~$M$ with fundamental group~$G$ contains a\nmetric ball~$B(R)$ of radius~$R$ satisfying\n\\begin{equation}\n\\label{24}\n\\vol \\, B(R) \\geq \\frac{C}{3} R^3,\n\\end{equation}\nfor every~$R\\leq\\frac{1}{2}\\sys(M)$.\n\\end{theorem}\n\n\\forget\nRecall the following result.\n\n\\begin{proposition}\n\\label{006}\nIn an orientable~$3$--manifold, cup product on~$H^1\\otimes H^2$ in\ncohomology with~$\\Z_p$ coefficients is dual to intersection between\na~$2$--cycle and a ~$1$--cycle with coefficients in~$\\Z_p$.\n\\end{proposition}\n\nHere the global orientation allows one to count an integer\nintersection index, which is then reduced modulo~$p$. \\\\\n\\forgotten\n\nWe will first prove Theorem~\\ref{theo:ball} for a\nclosed~$3$--manifold~$M$ of fundamental group~$\\Z_{p}$, with~$p$\nprime. We assume that~$p$ is odd (the case~$p=2$ was treated by\nL.~Guth). In particular,~$M$ is orientable. 
Let~$D$ be a~$2$--cycle\nrepresenting a nonzero class~$[D]$ in\n\\begin{equation*}\nH_2(M;\\Z_p) \\simeq H_{1}(M;\\Z_{p}) \\simeq \\Z_p.\n\\end{equation*}\nDenote by~$D_0$ the finite~$2$--complex of~$M$ given by the support\nof~$D$. Without loss of generality, we can assume that~$D_0$ is\nconnected. The restriction of the classifying map~$\\varphi \\colon\\thinspace M \\to\nK$ to~$D_0$ induces a homomorphism~$\\phi \\colon\\thinspace \\pi_{1}(D_0) \\to \\Z_{p}$.\n\n\\begin{lemma} \\label{lem:DB}\nThe cycle~$D$ induces a trivial relative class in the homology of\nevery metric~$R$--ball~$B$ in~$M$ relative to its boundary, with~$R <\n\\frac{1}{2} \\sys(M)$. That is,\n$$\n[D \\cap B] = 0 \\in H_{2}(B,\\partial B;\\Z_{p}).\n$$\n\\end{lemma}\n\n\\begin{proof}\nSuppose the contrary. By the Lefschetz-Poincar\\'e duality theorem,\nthe relative~$2$--cycle~$D \\cap B$ in~$B$ has a nonzero intersection\nwith an (absolute)~$1$--cycle~$c$ of~$B$. Thus, the intersection\nbetween the~$2$--cycle~$D$ and the~$1$--cycle~$c$ is nontrivial\nin~$M$. Now, by Lemma~\\ref{lem:trivial}, the~$1$--cycle~$c$ is\nhomotopically trivial in~$M$. Hence a contradiction.\n\\end{proof}\n\nWe will exploit the following notion of volume for cycles with torsion\ncoefficients.\n\n\\begin{definition} \n\\label{def:Vol}\nLet~$D$ be a~$k$--cycle with coefficients in~$\\Z_p$ in a Riemannian\nmanifold~$M$. We have\n\\begin{equation}\n\\label{11}\nD= \\sum_i n_i \\sigma_i\n\\end{equation}\nwhere each~$\\sigma_i$ is a~$k$--simplex, and each~$n_i\\in \\Z_p^*$ is\nassumed nonzero. 
We define the notion of~$k$--area~$\area$ for cycles\nas in \eqref{11} by setting\n\begin{equation}\n\label{12}\n\area(D)= \sum_i |\sigma_i|,\n\end{equation}\nwhere~$|\sigma_i|$ is the~$k$--area induced by the Riemannian metric\nof~$M$.\n\end{definition}\n\n\begin{remark}\nThe non-zero coefficients~$n_i$ in \eqref{11} are ignored in defining\nthis notion of volume.\n\end{remark}\n\n\begin{proof}[Proof of Theorem~\ref{theo:ball}]\nWe continue the proof of Theorem~\ref{theo:ball} when the fundamental\ngroup of~$M$ is isomorphic to~$\Z_p$, with~$p$ an odd prime. We will\nuse the notation introduced earlier. Suppose now that~$D$ is a\npiecewise smooth~$2$--cycle area minimizing in its homology\nclass~$[D]\not=0\in H_2(M;\Z_p)$ up to an arbitrarily small error\nterm~$\varepsilon>0$, for the notion of volume (area) as defined\nin~\eqref{12}.\n\n\nRecall that~$\phi \colon\thinspace \pi_1(D_0)\to \Z_p$ is the homomorphism induced\nby the restriction of the classifying map~$\varphi \colon\thinspace M \to K$ to the\nsupport~$D_0$ of~$D$. By Proposition~\ref{33}, the~$2$--complex~$D_0$\nis~$\phi$--essential. Thus, by the hypothesis of Theorem~\ref{theo:ball},\nwe can choose a point~$x \in D_0$ satisfying\nthe~$\rm{GG}_C$-property~\eqref{eq:ball}, i.e., the area of~$R$--balls\nin~$D_0$ centered at~$x$ grows at least as~$C R^2$ for~$R <\n\frac{1}{2} \sys(D_0,\phi)$.\nTherefore, the intersection of~$D_0$ with the~$R$--balls of~$M$\ncentered at~$x$ satisfies\n\begin{equation}\n\label{111}\n\area(D_0\cap B(x,R)) \geq CR^2\n\end{equation}\nfor every~$R < \frac{1}{2} \sys(D_0,\phi)$.\nThe idea of the proof is to control the area of distance spheres\n(level surfaces of the distance function) in~$M$, in terms of the\nareas of the distance disks in~$D_0$.\n\nLet~$B=B(x,R)$ be the metric~$R$--ball in~$M$ centered at~$x$\nwith~$R<\frac{1}{2} \sys(M)$. 
We subdivide and slightly perturb~$D$\nfirst, to make sure that~$D \\cap \\bar B$ is a subchain of~$D$. Write\n\\[\nD=D_- + D_+,\n\\]\nwhere~$D_-$ is a relative~$2$--cycle of~$\\bar B$, and~$D_+$ is a\nrelative~$2$--cycle of~$M\\setminus B$. By Lemma~\\ref{lem:DB},~$D_-$\nis homologous to a~$2$--chain~$\\mathcal{C}$ contained in the distance\nsphere~$\\partial B = S(x,R)$ with\n\\[\n\\partial \\mathcal{C} = \\partial D_- = - \\partial D_+.\n\\]\nWe subdivide and perturb~$\\mathcal{C}$ in~$S(x,R)$ so that the\ninteriors of its~$2$--simplices either agree or have an empty\nintersection. Here the simplices of the~$2$--chain~$\\mathcal{C}$ may\nhave nontrivial multiplicities.\nSuch multiplicities necessarily affect the volume of a chain if one\nworks with integer coefficients.\nHowever, these multiplicities are ignored for the notion\nof~$2$--volume~\\eqref{12}. This special feature allows us to derive\nthe following: the~$2$--volume~\\eqref{12} of the chain~$\\mathcal{C}$\nis a lower bound for the usual area of the distance sphere~$S(x,R)$.\n\nNote that the homology class~$[\\mathcal{C}+D_+]=[D] \\in H_2(M;\\Z_p)$\nstays the same. We chose~$D$ to be area minimizing up\nto~$\\varepsilon$ in its homology class in~$M$ for the notion of\nvolume~\\eqref{12}. Hence we have the following bound:\n\\begin{equation}\n\\label{112}\n\\area(S(x,R)) \\geq \\area(\\mathcal{C}) \\geq \\area(D_-) - \\varepsilon\n\\geq \\area (D_0 \\cap B) - \\varepsilon.\n\\end{equation}\nNow, clearly~$\\sys(M) \\leq \\sys(D_0,\\phi)$. Combining the\nestimates~\\eqref{111} and~\\eqref{112}, we obtain\n\\begin{equation}\n\\label{113}\n\\area ( S(x,R)) \\geq C R^2 - \\varepsilon\n\\end{equation}\nfor every~$R<\\frac{1}{2} \\sys(M)$. 
Integrating the\nestimate~\\eqref{113} with respect to~$R$ and letting~$\\varepsilon$ go\nto zero, we obtain a lower bound of~$\\frac{C}{3} R^3$ for\nthe~$3$--volume of some~$R$--ball in the closed manifold~$M$, proving\nTheorem~\\ref{theo:ball} for closed~$3$--manifolds with fundamental\ngroup~$\\Z_{p}$. \\\\\n\nSuppose now that~$M$ is a closed~$3$--manifold with finite (nontrivial)\nfundamental group. Choose a prime~$p$ dividing the\norder~$|\\pi_1(M)|$ and consider a cover~$N$ of~$M$ with fundamental group cyclic\nof order~$p$.\nThis cover satisfies~$\\sys(N) \\geq \\sys(M)$, and we apply the\nprevious argument to~$N$.\n\nNote that the reduction to a cover could not have been done in the\ncontext of M.~Gromov's formulation of the inequality in terms of the\nglobal volume of the manifold. Meanwhile, in our formulation using a\nmetric ball, following L.~Guth, we can project injectively the ball of\nsufficient volume, from the cover to the original manifold. Namely,\nthe proof above exhibits a point~$x \\in N$ such that the volume of\nthe~$R$--ball~$B(x,R)$ centered at~$x$ is at least~$\\frac{C}{3} R^3$\nfor every~$R < \\frac{1}{2} \\sys(M)$. Since~$R$ is less than half the\nsystole of~$M$, the ball~$B(x,R)$ of~$N$ projects injectively to an\n$R$--ball in~$M$ of the required volume, completing the proof of\nTheorem~\\ref{theo:ball}.\n\\end{proof}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjvan b/data_all_eng_slimpj/shuffled/split2/finalzzjvan new file mode 100644 index 0000000000000000000000000000000000000000..2ec26e7e3a652149671d84aa8e52874fbaada3d3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjvan @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:1}\nThe understanding of diffusion and transport of passive \ntracers in a given velocity field has both theoretical and practical \nrelevance in many fields of science and engineering, \ne.g. 
mass and heat transport in geophysical flows \n(for a review see \cite{davis,davis2}),\ncombustion and chemical engineering \cite{Moffatt}.\n\n\nOne common interest is the study of the mechanisms\nwhich lead to transport enhancement as a fluid is driven \nfarther from the motionless state. \nThis is related to the fact that the Lagrangian motion of \nindividual tracers can be rather complex even in simple laminar flows\n\cite{Ottino,lagran}.\n\nThe dispersion of passive scalars in a given velocity field is the result,\nusually highly nontrivial, of two different contributions:\nmolecular diffusion and advection.\nIn particular, one can have rather fast transport, even without\nmolecular diffusion, in the presence of {\it Lagrangian chaos \/}, \nwhich is the sensitivity to initial conditions of\nLagrangian trajectories.\nIn addition, even for a 2D stationary velocity field, \nwhere one cannot have Lagrangian chaos \cite{Licht}, in the presence of \na particular geometry of the streamlines the diffusion can \nbe much larger than the one due only to the molecular \ncontribution, as in the case of spatially periodic stationary\nflows \cite{Rosenbluth,Shraiman}.\n\nTaking into account the molecular diffusion, the motion of \na test particle (the tracer) is described by the following Langevin\nequation:\n\begin{equation}\n\frac{d{\bf x}}{dt}= {\bf u}({\bf x},t)+\n\mbox{\boldmath $\eta$}(t),\n\label{eq:langevin}\n\end{equation}\nwhere \n${\bf u}({\bf x},t)$ \nis the Eulerian \nincompressible velocity field at the point ${\bf x}$ \nand time $t$, $\mbox{\boldmath $\eta$}(t)$ is a Gaussian white noise \nwith zero mean and\n\begin{equation}\n<\eta_{i}(t) \eta_{j}(t^{'}) >= 2 D_{0} \delta_{ij} \delta(t-t^{'})\,,\n\label{eq:whitenoise}\n\end{equation}\nwhere $D_{0}$ is the (bare) molecular diffusivity.\n\nDenoting by \n$\Theta({\bf x},t)$ \nthe concentration of tracers,\none has:\n\begin{equation}\n{\partial}_{t} \Theta+ \n\left( {\bf u} \cdot 
\\mbox{\\boldmath $\\nabla$} \\right) \\Theta=\nD_{0} \\,\\Delta \\Theta \\,.\n\\label{eq:fokker}\n\\end{equation}\n\nFor an Eulerian velocity field \nperiodic in space, or anyway defined in infinite domains, \nthe long-time, large-distance behavior of the diffusion process\nis described by the effective diffusion tensor $D_{ij}^{E}$\n({\\it eddy-diffusivity tensor}\\\/):\n\\begin{equation}\nD_{ij}^{E}= \\lim_{t\\rightarrow \\infty} \\frac {1}{2t}\n<(x_{i}(t)-)(x_{j}(t)-)>\\,,\n\\label{def:eddydiff}\n\\end{equation}\nwhere now ${\\bf x}(t)$ is the position of the the tracer \nat time $t$, $i,j=1,\\cdots,d$ (being $d$ the spatial dimension) , and\nthe average is taken over the initial positions\nor, equivalently, over an ensemble of test particles.\nThe tensor $D^{E}_{ij}$ gives the long-time, large-distance\nequation for $< \\! \\Theta \\!>$ i.e. the concentration field locally averaged\n over a volume of linear distance much larger than the\ntypical length $l_{u}$ of the velocity field, according to\n\\begin{equation}\n{\\partial}_{t} <\\Theta>= \\sum_{i,j=1}^{d} D_{ij}^{E}\\, \n\\frac{{\\partial}^{2}}{\\partial x_{i} \\partial x_{j}} <\\Theta>\\,.\n\\label{eq:eddydiff}\n\\end{equation}\nThe above case, with finite $D_{ij}^{E}$, is\nthe typical situation where the diffusion, for very large \ntimes, is a standard diffusion process. 
\nHowever there are also cases \nshowing the so-called {\it anomalous diffusion}\/: the spreading \nof the particles does not behave linearly with time but\nhas a power law $t^{2\nu}$ with $\nu \neq 1\/2$.\nTransport anomalies are, in general, indicators of the \npresence of strong \ncorrelation in the dynamics, even at large time and space scales\n\cite{georges}.\n\nIn the case of infinite spatial domains and periodic \nEulerian fields the powerful multiscale technique \n(also known as homogenization in the mathematical literature) \ngives a useful tool for studying standard diffusion\nand, with some precautions, also the anomalous situations \n\cite{BCVV}.\n\nOn the other hand, we have to stress the fact that the \ndiffusivity tensor (\ref{def:eddydiff}) is \nmathematically well defined only in the limit of infinite times; \ntherefore it gives \na sensible result only if the characteristic length\n$l_{u}$ of the velocity field is much smaller than the size \n$L$ of the domain. \n\nThe case when $l_{u}$ and $L$ are not well separated\nis rather common in many geophysical problems, e.g.\nthe spreading of pollutants in the Mediterranean or Baltic Sea, \nand also in plasma physics.\nTherefore it is important to introduce some other \ncharacterizations of the diffusion properties\nwhich can be used also in non-ideal cases.\nFor instance, \cite{Zambia} propose to employ exit times \nfor the study of transport in basins with complicated geometry.\n\nIn Section \ref{sec:2} we introduce a characterization of\nthe diffusion behavior in terms of the typical time\n$\tau(\delta)$ at scale $\delta$;\nthis allows us to define a finite size diffusion \ncoefficient \n$D(\delta) \sim \delta^{2}\/\tau(\delta)$.\nFrom the shape of $\tau(\delta)$ \nas a function of $\delta$, one can distinguish different\nspreading regimes.\n\nIn Section \ref{sec:3} we present the results of numerical experiments\nin closed basins and present new results\non the behavior of the 
diffusion coefficient \nnear the boundary (a detailed discussion \nis in the appendix).\n\nIn Section \\ref{sec:4} we summarize our results and\npresent conclusions and we discuss \nthe possibility of treatment of experimental data \naccording to the method introduced in Section \\ref{sec:2}.\n\n\\section{Finite size diffusion coefficient}\n\\label{sec:2}\nBefore a general discussion let us start with a simple example.\nConsider the relative diffusion of a cloud of N test particles\nin a smooth, spatially periodic velocity field with characteristic\nlength $l_{u}$. We assume that the Lagrangian motion is chaotic \ni.e. the maximum Lyapunov exponent $\\lambda$ is positive.\nDenoting with $R^{2}(t)$ the square of the typical radius of\nthe cloud\n\\begin{equation}\nR^{2}(t)= \n\\ll|{\\bf x}_{i}(t)-\\ll{\\bf x}_{i}(t)\\gg|^{2}\\gg\\,,\n\\label{def:disprel}\n\\end{equation}\nwhere\n\\begin{equation}\n\\ll{\\bf x}_i(t)\\gg={1 \\over N} \\sum_{i=1}^N {\\bf x}_i(t)\n\\end{equation}\nwe expect the following regimes to hold\n\\begin{equation}\n\\overline{R^{2}(t)} \\simeq \\left\\{ \n\\begin{array}{ll}\nR^{2}(0)\\exp(L(2)t) & \\;\\;\\;\\;\n{\\mbox {if $\\overline{R^{2}(t)}^{1\/2} \\ll l_{u}$}}\n \\\\\n2 D t & \\;\\;\\;\\;\n{\\mbox {if $\\overline{R^{2}(t)}^{1\/2} \\gg l_{u}$}}\n\\end{array}\n\\label{eq:regimiperR}\n\\right.\n\\,,\n\\label{example1} \n\\end{equation}\nwhere $L(2) \\geq 2\\lambda$ is the generalized Lyapunov exponent\n\\cite{BPPV85,PV87}, $D$ is the diffusion coefficient and \nthe overbar denotes the average over initial conditions.\n\nIn this paper we prefer to study the relative diffusion \n(\\ref{def:disprel}) instead of the usual absolute diffusion.\nFor spatially infinite cases, without mean drift\nthere is no difference;\nfor closed basins the relative dispersion is,\nfor many aspects, more interesting than the absolute one\nand, in addition, the latter is dominated by \nthe sweeping induced by large scale flow.\n\nFurthermore we underline \nthat 
although the dynamics of the ocean circulation is dominated\nby large mesoscale gyres, the smaller-scale \nactivities within the gyres\ncontrol important local phenomena such as deep water \nformation in the North Atlantic and in the Mediterranean \nbasin \\cite{marshal}.\nTherefore the study of relative diffusion could be \nrelevant to describe this small-scale motion\nand can give crucial information on the way \nto parameterize the subgrid scales \nin global ocean numerical models \\cite{garret}. \n\nAnother, at first sight rather artificial, way to describe\nthe above behavior is by introducing the ``doubling \ntime\\\/'' $\\tau(\\delta)$ at scale $\\delta$ as follows:\nwe define a series of thresholds $\\delta^{(n)}= r^{n} \\delta^{(0)}$,\nwhere $\\delta^{(0)}$ is the initial size of the cloud, defined according\nto (\\ref{def:disprel}), and then we measure the time $T(\\delta^{(0)})$ \nit takes for the growth\nfrom $\\delta^{(0)}$ to $\\delta^{(1)}= r \\delta^{(0)}$, and so on\nfor $T(\\delta^{(1)})\\,,\\;T(\\delta^{(2)})\\,,\\ldots$\nup to the largest scale under consideration.\nFor the threshold rate $r$ any value can be chosen, but too large ones\nmight not separate different scale contributions, \nthough strictly speaking the term ``doubling time''\nrefers to the threshold rate $r=2$.\n\nPerforming ${\\cal N} \\gg 1$ experiments with\ndifferent initial conditions for the cloud, we define the \ntypical doubling time $\\tau(\\delta)$ at scale \n$\\delta$ as\n\\begin{equation}\n\\tau(\\delta) = < T(\\delta) >_e =\\frac{1}{{\\cal N}}\n \\sum_{i=1}^{{\\cal N}} T_{i}(\\delta)\\,.\n\\label{def:taudelta}\n\\end{equation}\nLet us stress the fact that the average \nin (\\ref{def:taudelta}) is different from the usual\ntime average.\n\nFrom the average doubling time we can define the finite size \nLagrangian Lyapunov exponent as\n\\begin{equation}\n\\lambda(\\delta)=\\frac{\\ln r}{\\tau(\\delta)}\\,,\n\\end{equation}\nwhich is a measure of the average rate of separation of 
two\nparticles at a distance $\\delta$. Let us remark that $\\lambda(\\delta)$\nis independent of $r$, for $r \\rightarrow 1^{+}$. \nFor very small separations (i.e. $\\delta \\ll l_u$) one recovers the standard \nLagrangian Lyapunov exponent $\\lambda$,\n\\begin{equation}\n\\lambda=\\lim_{\\delta \\rightarrow 0} \\frac{1}{\\tau(\\delta)}\n\\ln r\\,.\n\\label{def:liapfromtau}\n\\end{equation}\nSee \\cite{ABCPV} for a detailed discussion about \nthese points.\nIn this framework the\nfinite size diffusion coefficient $D(\\delta)$ dimensionally turns out to be\n\\begin{equation}\nD(\\delta)=\\delta^{2}\\lambda(\\delta)\\,.\n\\label{def:fsd}\n\\end{equation}\nNote the absence of the factor $2$, as one can expect by\nthe definition (\\ref{def:eddydiff}), in the denominator of\n$D(\\delta)$ in equation (\\ref{def:fsd}); this is due \nto the fact that $\\tau(\\delta)$ is a difference of times.\nFor a standard diffusion process $D(\\delta)$ approaches the diffusion\ncoefficient $D$ (see eq. (\\ref{eq:regimiperR})) in the limit of \nvery large separations ($\\delta \\gg l_u$). This result stems from \nthe scaling of the doubling times $\\tau(\\delta) \\sim \\delta^2$ for \nnormal diffusion. 
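The construction of $\lambda(\delta)$ from doubling times can be sketched numerically as follows (a minimal illustration with assumed names and parameter values, not taken from the paper): the crossing times of the thresholds $\delta^{(n)}=r^{n}\delta^{(0)}$ are read off a separation record $\delta(t)$, and the estimator is checked against a purely exponential growth, for which the finite size exponent must be flat and equal to the true rate at every scale.

```python
import numpy as np

# Sketch: finite size Lyapunov exponent lambda(delta) = ln(r)/tau(delta)
# from threshold-crossing (doubling) times of a separation record.
# Fed with a purely exponential growth delta(t) = delta(0) exp(lam t),
# it must return the true rate at every threshold.

def fsle(delta_t, dt, thresholds, r):
    lam = []
    for d in thresholds:
        i0 = np.argmax(delta_t >= d)         # first crossing of delta
        i1 = np.argmax(delta_t >= r * d)     # first crossing of r*delta
        lam.append(np.log(r) / ((i1 - i0) * dt))
    return np.array(lam)

lam_true, dt = 0.7, 1e-4
t = np.arange(0.0, 20.0, dt)
delta = 1e-6 * np.exp(lam_true * t)          # regime delta << l_u
thresholds = 1e-5 * 2.0 ** np.arange(8)      # delta^(n) = r^n delta^(0)
lam = fsle(delta, dt, thresholds, r=2.0)
print(lam)   # approximately 0.7 at every scale
```

For a diffusive record, $\delta^2(t) \simeq 2Dt$, the same function returns $\lambda(\delta) \propto D/\delta^{2}$ up to an $O(1)$ factor depending on $r$, in line with the asymptotic behaviors quoted in the text.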
\n\nThus the finite size Lagrangian Lyapunov exponent $\\lambda(\\delta)$, or\nits counterpart $D(\\delta)$, embodies the asymptotic behaviors \n\\begin{equation}\n\\lambda(\\delta) \\sim \\left\\{ \n\\begin{array}{ll}\n\\lambda & \\;\\;\\;\\;\n{\\mbox {if $\\delta \\ll l_{u}$}}\n \\\\\nD\/\\delta^{2} & \\;\\;\\;\\;\n{\\mbox {if $\\delta \\gg l_{u}$}}\n\\end{array}\n\\right.\n\\,.\n\\label{eq:regimipertau} \n\\end{equation}\nOne could naively conclude, matching the behaviors \nat $\\delta \\sim l_{u}$, that $D \\sim \\lambda l_{u}^{2}$.\nThis is not always true, since one can have a rather large range\nfor the crossover due to the \nfact that nontrivial correlations can be present in \nthe Lagrangian dynamics \\cite{FV89}.\n\nAnother case where the \nbehavior of $\\tau(\\delta)$ as a function of $\\delta$ \nis essentially well understood\nis 3D fully developed turbulence. \nFor the sake of simplicity we neglect intermittency \neffects. \nThere are then three different ranges:\n\\begin{enumerate}\n\\item \n$\\delta \\ll \\eta ={\\mbox {Kolmogorov length}}$ :\n$1\/\\tau(\\delta) \\sim \\lambda$;\n\\item\n$\\eta \\ll \\delta \\ll l={\\mbox{ typical size of the \nenergy containing eddies}}$: \nfrom the Richardson law \n$\\overline{R^{2}(t)} \\sim t^{3}$ \none has \n$1\/\\tau(\\delta) \\sim \\delta^{-2\/3}$;\n\\item\n$\\delta \\gg l$ : usual diffusion behavior \n$1\/\\tau(\\delta) \\sim \\delta^{-2}\\,.$\n\\end{enumerate}\n\nOne might wonder whether the proposal to introduce\nthe time $\\tau(\\delta)$ is just another way \nto look at\n$\\overline{R^{2}(t)}$ as a function of $t$.\nThis is true only in limiting cases, when\nthe different characteristic lengths are \nwell separated and intermittency is weak.\nIn \\cite{previouswork1,previouswork2,sabot} rather \nsimilar techniques are used for the computation of\nthe diffusion coefficient in nontrivial cases.\n\nThe method of working at fixed scale $\\delta$\nallows us to extract the physical information at that spatial\nscale 
avoiding the difficulties that affect the method of\nworking at a fixed delay time $t$.\nFor instance, if one has a strong intermittency, and this is a rather\nusual situation, $R^{2}(t)$ as a function of \n$t$ can appear very different in each realization.\nTypically one can have, see figure \\ref{fig1}a,\ndifferent exponential rates of growth for different\nrealizations, producing a rather odd behavior\nof the average $\\overline{R^{2}(t)}$ without\nany physical meaning. For instance in figure \\ref{fig1}b we show\nthe average $\\overline{R^{2}(t)}$ versus time $t$; at large times we\nrecover the diffusive behavior, but at intermediate times \nan apparent anomalous regime appears, which is only due to \nthe superposition of exponential and diffusive contributions\nby different samples at the same time.\nOn the other hand, exploiting the tool of doubling times one has \nan unambiguous result (see figure \\ref{fig1}c).\n\nOf course the interesting situations are those where\nthe different characteristic lengths ($\\eta\\,,\\;l\\,,\\;L$) \nare not very different and therefore each \nscaling regime for $\\overline{R^2(t)}$ is not clearly evident.\n\n\\section{Numerical results}\n\\label{sec:3}\nHere we present some numerical experiments \nin simple models with\nLagrangian chaos in the zero molecular diffusion limit.\nBefore showing the results, we describe the numerical \nmethod adopted.\n\nWe choose a passive tracer trajectory with chaotic behavior, \ni.e. with a positive maximum Lyapunov \nexponent, computed by using standard algorithms \\cite{BeneGalg}.\nThen we place $N-1$ passive tracers around the first one\nin a cloud of initial size \n\\begin{eqnarray}\nR(0)=\\delta(0)=\\delta^{(0)}\\,,\n\\nonumber\n\\end{eqnarray} \nwith $R(0)$ defined by equation (\\ref{def:disprel}). 
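A minimal numerical sketch of such an expansion experiment, with two simplifications that are assumptions of this illustration only: a single pair of tracers stands in for the cloud, and the logistic map $x \mapsto 4x(1-x)$, whose Lyapunov exponent is exactly $\ln 2$, plays the role of the chaotic dynamics. The companion tracer is repeatedly re-placed at distance $\delta^{(0)}$ from the reference trajectory.

```python
import numpy as np

# Sketch of the expansion experiment: a companion is placed at distance
# delta0 from a chaotic reference orbit, the pair is iterated until each
# threshold delta0 * r**n is crossed, and the accumulated log-growth per
# iteration estimates the Lyapunov exponent (ln 2 for the logistic map).

def f(x):
    return 4.0 * x * (1.0 - x)      # logistic map, lambda = ln 2 exactly

delta0, n_thresh, r = 1e-12, 10, 2.0
x_ref, log_sum, t_sum = 0.3, 0.0, 0
for _ in range(300):                # repeated expansion runs
    x1, x2 = x_ref, x_ref + delta0
    for n in range(n_thresh):
        target = delta0 * r ** (n + 1)
        d_prev = abs(x2 - x1)
        while abs(x2 - x1) < target:
            x1, x2 = f(x1), f(x2)
            t_sum += 1
        log_sum += np.log(abs(x2 - x1) / d_prev)
    x_ref = x1                      # rebuild the pair around the reference
lam_est = log_sum / t_sum
print(lam_est)   # close to ln 2 ~ 0.693
```

Using the actual overshoot size in the logarithm mirrors the correction needed for discrete-time processes, where the size jumps past the threshold instead of touching it.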
\nIn order to have average properties we repeat this procedure, \nreconstructing the passive cloud around the last\nposition reached by the reference chaotic tracer in the previous \nexpansion.\nThis ensures that the initial expansion of the cloud \nis exponential\nin time, with typical exponential rate equal to the\nLyapunov exponent.\n\nFurthermore, we define a series of thresholds $\\delta^{(n)}=r^{n}\\delta^{(0)}$\n(as described in Section 2), with \n$n=1,\\cdots,n_{max}$, and we measure the time $T_{n}$ \nspent in expanding from $\\delta^{(n)}$ to $\\delta^{(n+1)}\\,$.\nThe value of $n_{max}$ has to be chosen in such a way that\n$\\delta^{(n_{max})}\\sim \\delta_{max}$, where $\\delta_{max}$\ncorresponds to the uniform distribution of the tracers in the basin\n (see forthcoming discussion and the Appendix). Each realization stops\nwhen $\\delta(t)=\\delta^{(n_{max})}$.\n\nTherefore, following \\cite{ABCPV}, we define a scale dependent\nLagrangian Lyapunov exponent as:\n\\begin{equation}\n\\lambda(\\delta^{(n)}) = \\frac{1}{<{T_{n}}>_e} \\ln r =\n\\frac{1}{\\tau(\\delta^{(n)})} \\, \\ln r.\n\\label{def:lambdadidelta}\n\\end{equation}\nIn equation (\\ref{def:lambdadidelta}) we have implicitly assumed that\nthe evolution of the size $\\delta(t)$ of the cloud is continuous in time.\nThis is not true in the case of discontinuous processes such as maps or\nin the analysis of experimental data taken at \nfixed delay times.\nDenoting by $T_{n}$ the time to reach a size \n$\\tilde{\\delta} \\geq \\delta^{(n+1)}$ from $\\delta^{(n)}$,\nwhere $\\tilde{\\delta}$ is now a fluctuating quantity,\nequation (\\ref{def:lambdadidelta}) has to be modified as follows \n\\cite{ABCPV}:\n\\begin{equation}\n\\lambda(\\delta^{(n)}) = \\frac{1}{<{T_{n}}>_e} \n\\left< \\ln \\left( \\frac{\\tilde{\\delta}}{\\delta^{(n)}}\\right) \\right>_e\n\\,.\n\\label{def:lambdadidelta1}\n\\end{equation}\n\nIn our numerical experiments we have the regimes \ndescribed in Section 2: the exponential regime, i.e. 
$\\lambda(\\delta)=\\lambda$, and the diffusion-like regime,\ni.e. $\\lambda(\\delta)=D\/\\delta^{2}$, at least if the size $L$ \nof the basin is large enough.\n\nFor cloud sizes close to the saturation value $\\delta_{max}$\nwe expect the following behavior to hold for a broad class \nof systems:\n\\begin{equation}\n \\lambda(\\delta)=\\frac{D(\\delta)}{\\delta^{2}} \\propto\n\\frac{(\\delta_{max}-\\delta)}{\\delta} \\,.\n\\label{eq:nearbound}\n\\end{equation}\nThe constant of proportionality\nis given by the second eigenvalue of the \nPerron-Frobenius operator, which is related to the typical time \nof exponential relaxation of the tracers' density to the uniform distribution.\nActually, the analytical evaluation of this eigenvalue can be \nperformed only for extremely simple dynamical systems\n(for instance random walkers, as shown in the Appendix).\nAs a consequence the range of validity for (\\ref{eq:nearbound})\ncan be assessed only by numerical simulation.\n\n\\subsection{A model for transport in Rayleigh-B\\'enard convection}\nThe advection in two dimensional incompressible flows is described,\nin the absence of molecular diffusion, by Hamiltonian equations of motion, \nwhere the Hamilton function is the stream function $\\psi$:\n\\begin{equation}\n\\frac{dx}{dt}=\\frac{\\partial \\psi}{\\partial y}\\,, \\;\\;\\;\n\\frac{dy}{dt}=-\\frac{\\partial \\psi}{\\partial x}\\,.\n\\label{eq:hamilton}\n\\end{equation}\nIf $\\psi$ is time-dependent the system (\\ref{eq:hamilton})\nis non-autonomous and in general non-integrable, and\nchaotic trajectories may exist. 
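The integration of the Hamiltonian equations (eq:hamilton) for a single tracer can be sketched as follows. The time-periodic stream function used here, $\psi = A\sin(x + B\sin\omega t)\sin y$, is an illustrative choice of this sketch (not a model from the paper); since $\partial\psi/\partial x$ vanishes at $y=0$ and $y=\pi$, the tracer stays confined between these rigid walls.

```python
import numpy as np

# Sketch: 4th-order Runge-Kutta integration of dx/dt = d(psi)/dy,
# dy/dt = -d(psi)/dx for an illustrative time-periodic stream function
# psi = A sin(x + B sin(w t)) sin(y). Parameter values are assumptions.

A, B, w = 0.5, 0.4, 0.8

def velocity(state, t):
    x, y = state
    phase = x + B * np.sin(w * t)
    u = A * np.sin(phase) * np.cos(y)      # dx/dt =  d(psi)/dy
    v = -A * np.cos(phase) * np.sin(y)     # dy/dt = -d(psi)/dx
    return np.array([u, v])

def rk4_step(state, t, dt):
    k1 = velocity(state, t)
    k2 = velocity(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(state + dt * k3, t + dt)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt, n_steps = 0.02, 10000
state, ys = np.array([0.3, 1.0]), []
for n in range(n_steps):
    state = rk4_step(state, n * dt, dt)
    ys.append(state[1])
print(min(ys), max(ys))   # the tracer never leaves 0 < y < pi
```

The same integrator, applied to a cloud of initial conditions, is all that is needed to measure the doubling times of the previous section for a flow of this type.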
\n\nOne example is the model introduced in \\cite{gollub}\nto describe the chaotic advection\nin the time-periodic Rayleigh-B\\'enard convection.\nIt is defined by the stream function:\n\\begin{equation}\n\\psi(x,y,t)=\\frac{A}{k} \n\\sin\\left\\{ k \\left[ x+B \\sin(\\omega t)\\right]\\right\\}\nW(y)\\,,\n\\label{eq:gollubinf}\n\\end{equation}\nwhere $W(y)$ is a function that satisfies rigid \nboundary conditions on the surfaces $y=0$ and $y=a$ \n(we use $W(y)=\\sin(\\pi y\/a)$).\nThe direction $y$ is identified with the vertical direction\nand the two surfaces $y=a$ and $y=0$ are the top and bottom\nsurfaces of the convection cell.\nThe time dependent term $B\\sin(\\omega t)$ represents \nlateral oscillations of the roll pattern \nwhich mimic the even oscillatory instability \\cite{gollub}.\n\nTrajectories starting near the roll\nseparatrices could have positive Lyapunov exponent and thus\ndisplay chaotic motion and diffusion in the x direction. \nIt is remarkable that in spite of the simplicity of the model,\nthe agreement of the numerical results with experimental ones is quite\ngood \\cite{gollub}.\n\nDefining a passive cloud in the $x$ direction (i.e. a\nsegment) and performing the expansion experiment described \nin the previous section\nwe have that, until $\\delta$ is below a fraction of the\ndimension of the cell, $\\lambda(\\delta)=\\lambda$ (figure \\ref{fig2}a).\nFor larger values of $\\delta$ we have \nthe standard diffusion $\\lambda(\\delta)=D\/\\delta^{2}$ \nwith good quantitative agreement with the value of the \ndiffusion coefficient evaluated by the standard technique, i.e.\nusing $\\overline{R^{2}(t)}$ as a function of time $t$\n(compare figure \\ref{fig2}a with figure \\ref{fig2}b).\n\nTo confine the motion of tracers in a closed domain,\ni.e. $x \\in [-L,L]$, we must slightly modify the streamfunction\n(\\ref{eq:gollubinf}). 
\nWe have modulated the oscillating term\nin such a way that for $|x|=L$ the amplitude of the oscillation\nis zero, i.e. $B \\rightarrow B \\sin(\\pi x\/L)$ with $L=2\\,\\pi n\/k$\n($n$ is the number of convective cells).\nIn this way\nthe motion is confined in $[-L,L]$.\n\nIn figure \\ref{fig3} we show $\\lambda(\\delta)$ for two values of $L$. \nIf $L$ is large enough one can clearly see the three regimes: \nthe exponential one, the diffusive one, and the saturation given\nby equation (\\ref{eq:nearbound}).\nAs $L$ decreases, the range\nof the diffusive regime shrinks, and for small values of\n$L$ it disappears. \n\n\\subsection{Modified Standard Map}\nOne of the simplest deterministic dynamical systems displaying both\nexponential growth of separation for close trajectories \nand asymptotic diffusive behavior \nis the standard (Chirikov - Taylor) mapping \\cite{Chi79}.\nIt is customarily defined as \n\\begin{equation}\n\\left\\{\n\\begin{array}{ll}\nx_{n+1}=x_n+K \\sin y_n & \\\\\n y_{n+1}=y_n+x_{n+1}& \\mbox{ mod $2 \\pi $}\n\\end{array}\n\\right.\n\\label{eq:standard}\n\\end{equation}\nThis mapping conserves the area in the phase space.\nIt is widely known that for large enough values of the \nnonlinearity strength parameter $K \\gg K_c \\simeq 1$ the motion\nis strongly chaotic in almost all the phase space.\nIn this case the standard map, in the $x$-direction,\nmimics the behavior of a one-dimensional random walker,\nwhile still being deterministic, and so one expects the behavior of\n$\\lambda(\\delta)$ to be quite similar to the one already \nencountered in the model for Rayleigh-B\\'enard convection\nwithout boundaries. \nNumerical iteration of (\\ref{eq:standard}) for a cloud of particles\nclearly shows the two regimes described in \n(\\ref{eq:regimipertau}), similar to those shown for the \nmodel discussed in the previous section.\n\nWe turn now to the more interesting case in which the domain is limited\nby boundaries that reflect the particles back. 
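The diffusive behavior of the unbounded standard map mentioned above can be checked with a few lines (a sketch; the ensemble size and the value $K=10$, comfortably in the strongly chaotic regime, are illustrative assumptions):

```python
import numpy as np

# Sketch: iterate the standard map x' = x + K sin y, y' = y + x' (mod 2pi)
# for a cloud of particles and check that the spread along x grows
# diffusively, var(x) ~ D n, i.e. the variance doubles when n doubles.

def iterate_cloud(x, y, K, n_iter):
    """Vectorized iteration; returns var(x) after each iteration."""
    var = []
    for _ in range(n_iter):
        x = x + K * np.sin(y)
        y = np.mod(y + x, 2.0 * np.pi)
        var.append(np.var(x))
    return np.array(var)

rng = np.random.default_rng(1)
n_part, K = 20000, 10.0
x0 = np.zeros(n_part)
y0 = rng.uniform(0.0, 2.0 * np.pi, n_part)
var = iterate_cloud(x0, y0, K, n_iter=400)
ratio = var[-1] / var[199]    # variance after 400 vs after 200 iterations
print(ratio)                  # close to 2 for normal diffusion
```

Tracking instead the size of an initially tiny cloud would show the exponential regime before this diffusive one, as in (eq:regimipertau).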
\nTo achieve the confinement of the trajectory\ninside a bounded region we modify the standard map in the following\nway\n\\begin{equation}\n\\left\\{\n\\begin{array}{ll}\nx_{n+1}=x_n+K f(x_{n+1})\\sin y_n & \\\\ \ny_{n+1}=y_n+x_{n+1}-K f'(x_{n+1}) \\cos y_n & \\mbox{ mod $ 2 \\pi$}.\n\\end{array}\n\\right.\n\\label{eq:modified}\n\\end{equation}\nwhere $f(x)$ is a function whose only zeros are at $\\pm L$. \nSince the mapping is defined in implicit form, \nthe shape of $f$ must be chosen in such a way as to ensure\na unique definition for $(x_{n+1},y_{n+1})$ given $(x_n,y_n)$. \nFor any $f$ fulfilling this requirement the mapping \n(\\ref{eq:modified}) conserves the area.\nA trial choice could be\n\\begin{equation}\nf(x)=\n\\left\\{\n\\begin{array}{ll}\n1 & |x|<\\ell \\\\\n\\displaystyle{L-|x| \\over L-\\ell}\n & \\ell<|x|<L\n\\end{array}\n\\right.\n\\label{}\n\\end{equation}\n\n\\subsection{Point vortices in a disk}\nAs a last example we consider the advection due to $N$ point vortices,\nwith circulations $\\Gamma_i$ and positions $(r_i,\\theta_i)$, moving in\nthe unit disk according to the Hamiltonian\n\\begin{equation}\nH = - {1 \\over 4 \\pi} \\sum_{i>j} \\Gamma_i \\Gamma_j \\log \\left[\n{r_i^2+r_j^2-2 r_i r_j \\cos (\\theta_i-\\theta_j) \\over 1 + r_i^2 r_j^2 -\n2 r_i r_j \\cos (\\theta_i-\\theta_j)} \\right] +\n{1 \\over 4 \\pi} \\sum_{i=1}^N \\Gamma_i^2 \\log (1-r_i^2)\n\\label{}\n\\end{equation}\n\nPassive tracers evolve according to (\\ref{eq:hamilton}) with $\\psi$ given\nby\n\\begin{equation}\n\\psi(x,y) = - {1 \\over 4\\pi} \\sum_{i=1}^{N} \\Gamma_i\n\\log \\left[{r^2+r_i^2-2 r r_i \\cos(\\theta-\\theta_i) \\over\n1 + r^2 r_i^2 - 2 r r_i \\cos(\\theta-\\theta_i)} \\right]\n\\label{}\n\\end{equation}\nwhere $(x=r \\cos \\,\\theta,y=r \\sin \\,\\theta)$ denotes the tracer\nposition.\n\nFigure \\ref{fig5} shows the relative diffusion as a function of \ntime in a system with 4 vortices. \nApparently there is an intermediate regime of anomalous diffusion.\nOn the other hand from figure \\ref{fig6} one can see rather clearly \nthat, with the method of working at fixed scale, only\ntwo regimes survive: the exponential one and the one \ndue to the saturation. 
\nComparing figure \\ref{fig5} and figure \\ref{fig6} one understands that\nthe mechanism described in Section 2 \nmust be held responsible for this spurious anomalous diffusion.\nWe stress the fact that these misleading behaviors are \ndue to the superposition of different regimes and that \nthe method of working at fixed scale has the advantage \nof eliminating this problem.\n\nThe absence of the diffusive range \n$\\lambda(\\delta) \\sim \\delta^{-2}$\nis due to the fact that the characteristic \nlength of the velocity field, which is comparable with \nthe typical distance between two close vortices, is not \nmuch smaller than the size of the basin.\n\n\\section{Conclusions}\n\\label{sec:4}\nIn this paper we investigated the relative dispersion of passive tracers\nin closed basins. Instead of the customary approach based on \nthe average size of the cloud of tracers as a function of time,\nwe introduced a typical inverse time $\\lambda(\\delta)$ which \ncharacterizes the diffusive process at fixed scale $\\delta$.\n\nFor very small values of $\\delta$, $\\lambda(\\delta)$ coincides with the\nmaximum Lagrangian Lyapunov exponent, which is positive in\nthe case of chaotic Lagrangian motion.\nFor larger $\\delta$ the shape of $\\lambda(\\delta)$ \ndepends on the detailed mechanism of spreading, which is determined\nby the structure of the advecting flow, which is in turn conditioned \nby the presence of boundaries. In the case of a diffusive regime, one\nexpects the scaling $\\lambda(\\delta) \\simeq \\delta^{-2}$, which leads to a\nnatural generalization of the diffusion coefficient as\n$D(\\delta)=\\lambda(\\delta) \\delta^2$. \n\nThe effectiveness of the finite size quantities $\\lambda(\\delta)$ \nand $D(\\delta)$ in characterizing the dispersion properties of\na cloud of particles is demonstrated by several numerical examples.\n\nFurthermore, when $\\delta$ gets close to its saturation value\n(i.e. 
the characteristic size of the basin), a simple argument gives \nthe shape of $\\lambda(\\delta)$, which is expected to be universal\nwith respect to a wide class of dynamical systems.\n\nIn the limiting case when the characteristic length of\nthe Eulerian velocity $l_u$ and the size of the basin $L$ are\nwell separated, the customary approach and the proposed method\ngive the same information. \nIn the presence of strongly intermittent Lagrangian motion, or when\n$l_u\/L$ is not much smaller than one, the traditional method\ncan give misleading results, for instance apparent anomalous\nscaling over a rather wide time interval, as demonstrated by\na simple example.\n\nWe want to stress that our method is very \npowerful in separating the different scales acting on diffusion, \nand consequently it could improve the parameterization \nof small-scale motions of complex flows.\nThe proposed method could also be relevant in the analysis of\ndrifter experimental data or in numerical models for Lagrangian\ntransport, in particular for addressing the question of the\nexistence of low dimensional chaotic flows.\n\n\\section{Acknowledgments}\nWe thank E. Aurell and A. Crisanti for useful suggestions and \na first reading of the paper. G.B. and A.C. thank the Istituto di\nCosmogeofisica del CNR, Torino, for hospitality.\nThis work was partially supported by INFN {\\it Iniziativa specifica\nMeccanica Statistica FI11}, by CNR (Progetto speciale coordinato\n{\\it Variabilit\\`a e Predicibilit\\`a del Clima}) and by EC-Mast contract\nMAS3-CT95-0043 (CLIVAMP).\n\n\\section*{Appendix}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \nRecently, the HADES collaboration has measured $\\Lambda(1405)$ production in proton-proton reactions at a beam kinetic energy of 3.5~GeV \\cite{Agakishiev:2012xk}, where the $\\Lambda(1405)$ hyperon has been reconstructed in the charged $\\Sigma^{\\pm}\\pi^{\\mp}$ decay channels \\cite{Agakishiev:2012xk}. 
By investigating the $\\Sigma\\pi$ invariant mass distributions, clear peak structures below 1400~MeV\/$c^2$ were observed. These structures were interpreted as a small contribution of $\\Sigma(1385)^0$ and a large contribution from a low-mass $\\Lambda(1405)$ signal (see also \\cite{Agakishiev:2012ja}). Besides this, the spectra also showed a considerable contribution by $\\Lambda(1520)$ production and by non$\\--$resonant $\\Sigma\\pi$ production, resulting in a phase-space-like background below the $\\Lambda(1405)$. The experimental data were finally described by an incoherent sum of Monte Carlo simulations, where the $\\Lambda(1405)$ was simulated to follow a Breit-Wigner type distribution with a Breit-Wigner mass of 1385~MeV\/$c^2$ and a width of 50~MeV\/$c^2$. With the help of these simulations the experimental data were corrected for the effects of acceptance and efficiency. These corrected data allow any (more advanced) model to be compared to the obtained $\\Sigma\\pi$ invariant mass distributions. \\\\\nThe experimental data, where the maximum of the $\\Sigma\\pi$ missing mass (see Fig.~\\ref{fig:LA1405_HADES_Thomas_Comp}) lies below the nominal value of $1405\\,\\mathrm{MeV\/}c^2$ associated with the $\\Lambda(1405)$, suggest a shift of this resonance towards lower masses.\nIn this paper we want to address different possible explanations for the observed mass shift of the $\\Lambda(1405)$ in the new HADES data and aim to stimulate theorists to further investigate the production of this resonance in $p+p$ reactions.\nThe $\\Lambda(1405)$ spectral function measured in $p+p$ collisions by the HADES collaboration differs from the predictions of theoretical models and also from some of the experimental observations. 
These differences reside in the complex nature of the resonance.\n\n From the theoretical point of view, the $\\Lambda(1405)$ is normally treated in a unitarized coupled channel framework based on chiral SU(3) dynamics \\cite{Hyodo:2007jq,Borasoy:2006sr,Kaiser:1995eg,Oset:1997it}, where this resonance is generated dynamically as a coherent sum of two different poles. The first pole, $z_1$, is located at higher energies of around 1420~MeV and is mainly associated with a narrow quasi-bound $\\bar{K}N$ state. The second pole, $z_2$, is found at lower energies of about 1390~MeV and strongly couples to a broad $\\Sigma\\pi$ resonance. As the relative contribution of these two states depends on the entrance channel, the observed properties of the $\\Lambda(1405)$ could also differ for different reactions. Therefore, in order to understand the complex formation process of the $\\Lambda(1405)$, it is important that experiments measure this resonance in different collision systems, and that, at the same time, theory provides appropriate models for each of those systems. \n \nFirst we refer to the measurement presented by the ANKE collaboration in \\cite{Zychor:2007gf}, where the $\\Lambda(1405)$ spectral shape is reconstructed in $p+p$ collisions at 2.83~GeV beam kinetic energy out of the decay into $\\Sigma^0$ and $\\pi^0$ pairs.\nThese data are, within the systematic errors and the statistical significance of the mass bins around 1400~MeV\/$c^2$, consistent with the HADES results.\\\\\nFrom the theory side, the authors of \\cite{Geng:2007vm} followed a unitarized coupled channel approach based on the chiral Lagrangian in order to \npredict the $\\Lambda(1405)$ line shape in $p+p$ reactions for the $\\Sigma^0\\pi^0$ decay channel. In their Ansatz the $\\Lambda(1405)$ was \ngenerated from pion, kaon and $\\rho$ meson exchange mechanisms, all of them leading to a different coupling to the two $\\Lambda(1405)$ poles. 
The \ncoherent sum of all contributions results in a $\\Lambda(1405)$ line shape with a maximum in the $\\Sigma\\pi$ mass distribution at around \n$1410\\,\\mathrm{MeV\/}c^2$. With this approach the authors of \\cite{Geng:2007vm} delivered a result compatible with the $\\Lambda(1405)$ signal measured by the ANKE collaboration in the $\\Sigma^0\\pi^0$ decay channel \\cite{Zychor:2007gf}. \nThis calculation is the only one available for $p+p$ reactions, but, since the HADES data refer to the charged decay channels,\na quantitative comparison is difficult. Nevertheless, the results of \\cite{Geng:2007vm} have been used in this work as a starting point to model the $\\Lambda(1405)$ as the combination of two Breit-Wigner functions.\\\\\nIn general, the HADES data show a larger contribution by the non$\\--$resonant $\\Sigma\\pi$ production in comparison with ANKE. In particular, it has \nbeen shown in \\cite{Agakishiev:2012qx} that the $\\Delta^{++}$ couples strongly to the $\\Sigma^- \\pi^+ p K^+$ final state via the reaction $p+p\n\\rightarrow \\Sigma^- + \\Delta^{++} + K^+$, and it cannot be excluded that $N^*\/\\Delta^0$ resonances contribute to the $\\Sigma^+\\pi^-pK^+$ channel\n as well. These contributions might appear in the $I=0$ channels and hence interfere with the resonant amplitude.\n\nAside from the predictions by models that consider the $\\Lambda(1405)$ as the combination of two main poles, one has to consider those models where \na single pole is associated with the formation of the resonance. \nIn \\cite{Hassanvand:2012dn} a phenomenological ${\\mathrm{\\bar{K}p}}$ interaction was employed to derive the mass distribution of the\n$\\Lambda(1405)$.\nThis approach was used to fit the HADES data. For this purpose all simulated contributions, mentioned in\n\\cite{Agakishiev:2012xk}, were subtracted so that only the pure $\\Lambda(1405)$ signal was left. 
The fit within this phenomenological approach\n results in a good description of the experimental data and allowed the mass and width of the $\\Lambda(1405)$ to be extracted as\n$1405^{+11} _{-9}\\mathrm{ MeV\/c^2}$ and $62\\pm 10 \\mathrm{MeV\/}c^2$, which are in good agreement with the PDG values \\cite{PDG} and\ndo not imply any evident shift of the spectral function.\n\nIn the sector of $\\gamma$-induced reactions, the CLAS collaboration has recently published new high-quality data on the $\\Lambda(1405)$ \\cite{Moriya:2013eb,Moriya:2013hwg,Schumacher:2013vma}. In these reports, all three $\\Sigma\\pi$ decay channels have been investigated simultaneously for different incident photon energies. The observed $\\Lambda(1405)$ spectral shape partially appears at higher masses, clearly above 1400 MeV\/$c^2$. \n\\begin{figure}[tbp]\n\t\\centering\n\t\t\\includegraphics[width=0.45\\textwidth]{LA1405_HADES_Thomas_Comp}\n\t\\caption{Comparison between the HADES data and the data of \\cite{Thomas:1973uh} and \\cite{Engler:1965zz}, measured in $\\pi^-+p$ reactions. All three spectra show the sum of the $\\Sigma^+\\pi^-$ and $\\Sigma^-\\pi^+$ data samples.}\n\t\\label{fig:LA1405_HADES_Thomas_Comp}\n\\end{figure}\nThe structures measured by CLAS show a dependency on the incident photon energy and also differ among the three $\\Sigma\\pi$ decay channels. \nIndeed, one has to consider that for photon-induced reactions the interference between the $I=0$ and $I=1$ channels is not negligible, in contrast to proton- and pion-induced reactions. \nRecent theoretical works \\cite{PhysRevC.87.055201,Roca:2013cca} employ parameters fitted to the CLAS experimental data, which \nallow for a small SU(3) breaking. This study shows that more precise calculations including higher order corrections could be needed in this sector \nand also suggests the possible existence of an $I=1$ bound state in the vicinity of the \n$\\bar{\\mathrm{K}N}$ threshold. 
Additionally, the CLAS collaboration recently reported the first observation of the $\\Lambda(1405)$ in electron-induced reactions \\cite{Lu:2013nza}, showing very different features with respect to the photon-induced results. \n\nIn the sector of pion-induced reactions, Thomas et al. \\cite{Thomas:1973uh} and Engler et al. \\cite{Engler:1965zz} have measured $\\pi^-+p$ \ncollisions at a beam momentum of 1.69~GeV\/$c$ and have reconstructed the $\\Lambda(1405)$ from its decay into $\\Sigma^{\\pm}\\pi^{\\mp}$. The \nresults for the $\\Sigma^+\\pi^-+\\Sigma^-\\pi^+$ invariant mass spectra are shown by the black and open data points in Fig.~\n\\ref{fig:LA1405_HADES_Thomas_Comp}, respectively.\nAccording to \\cite{Thomas:1973uh}, the spectra consist of several contributions, namely 46\\% $\\Lambda(1405)$, 8\\% $\\Sigma(1385)^0$, 3\\% $\\Lambda(1520)$ and 43\\% non$\\--$resonant $\\Sigma\\pi$ production. The broad peak structure around 1400~MeV\/$c^2$ is mainly identified with the $\\Lambda(1405)$ signal. \nThis experimental spectrum, however, is not fully understood from the theoretical side, which expects a large contribution from the first $\\Lambda(1405)$ pole, shifting the expected $\\Lambda(1405)$ distribution to higher mass values \\cite{Hyodo:2003jw}. Therefore, the question arises whether, in the case of $\\pi^-$-induced reactions, the coupling to the second, broad pole at $\\approx 1390$~MeV is underestimated by theory.\\\\\nIt is interesting to compare the results from $\\pi^-$-induced reactions to the new HADES data. The gray histogram in Fig.~\\ref{fig:LA1405_HADES_Thomas_Comp} shows the summed invariant mass spectrum of $\\Sigma^+\\pi^-+\\Sigma^-\\pi^+$ of \\cite{Agakishiev:2012xk}. As the relative contributions from $\\Lambda(1405)$, $\\Sigma(1385)^0$, $\\Lambda(1520)$ and non$\\--$resonant $\\Sigma\\pi$ channels in these data are quite similar to the ones in the considered $\\pi^-+p$ reactions, a comparison between the data sets is justified. 
In order to allow such a comparison, the data from \\cite{Thomas:1973uh} and \\cite{Engler:1965zz} have been scaled appropriately. The agreement between the three spectra in the region around 1400~MeV\/$c^2$ is excellent, indicating that all measurements observe a similar low-mass $\\Lambda(1405)$ signal. Furthermore, this suggests that in both $p+p$ and $\\pi^-+p$ reactions, the broad $\\Sigma\\pi$ pole might be dominant in the coupling to the $\\Lambda(1405)$ state. \n\\noindent However, since the measured $\\Sigma\\pi$ final state does not contain the pure signature of the $\\Lambda(1405)$, but also contains a \nconsiderable contribution of non$\\--$resonant background, the interpretation of this result is not straightforward.\n \\section{Influence of interference effects}\n\\begin{figure}[tbp]\n\t\\centering\n\t\t\\includegraphics[width=0.45\\textwidth]{OsetFitted}\n\t\\caption{The $\\Lambda(1405)$ spectral shape calculated in \\cite{Geng:2007vm} (gray histogram). The spectrum is fitted with Eq.~(\\ref{equ:DoubleBW}). The two phase-space-modified Breit-Wigner functions $C_{\\mathrm{p.s.}}(m)\\left|BW_1(m)\\right|^2q_{\\mathrm{c.m.}}$ and $C_{\\mathrm{p.s.}}(m)\\left|BW_2(m)\\right|^2q_{\\mathrm{c.m.}}$ are shown as the dotted and dashed lines.}\n\t\\label{fig:OsetFitted}\n\\end{figure} \nInterference between the resonant and the non$\\--$resonant amplitudes can affect the observed mass distribution considerably. In order to evaluate this scenario, we take the result of the chiral Ansatz \\cite{Geng:2007vm} as a starting point to develop a simple model, which we use to parametrize the $\\Lambda(1405)$ amplitude and to interpret the HADES data. The predicted $\\Sigma\\pi$ spectrum of \\cite{Geng:2007vm} is shown in the gray band of Fig.~\\ref{fig:OsetFitted}, with a peak at 1410~MeV and with the typical $\\Lambda(1405)$ shape, having a sharp drop to the $\\bar{K}N$ threshold. 
The idea is to reconstruct this spectrum by a coherent sum of two Breit-Wigner functions (BW), where each of these BW amplitudes represents one of the two $\\Lambda(1405)$ poles. This approach is similar to what has been proposed in \\cite{Jido:2003cb}, namely to represent the results of the unitarized coupled channel calculations via Breit-Wigner distributions. It is clear that any variation of the parameters associated with the two poles no longer fulfills unitarity, and hence our procedure is not equivalent to a full-fledged calculation. Still, it is interesting to see how much the parameters have to be modified to fit the experimental data.\n Within this approach the total $\\Lambda(1405)$ amplitude then reads as follows:\n\\begin{eqnarray}\n\\frac{d \\sigma}{d m}&=&\\left|T_{\\Lambda(1405)}\\right|^2= \\nonumber \\\\\n&=&C_{\\mathrm{p.s.}}(m)\\left|BW_1(m)e^{i\\varphi_1}+BW_2(m) \\right|^2q_{\\mathrm{c.m.}} \\label{equ:DoubleBW} \\\\\n&&\\mbox{with } BW_i=A_i\\frac{1}{\\left(m-m_{0,i}\\right)^2+im_{0,i}\\Gamma_{0,i}} \\nonumber\n\\end{eqnarray}\n$C_{\\mathrm{p.s.}}(m)$ is a dimensionless weight function, normalized to unity in the mass range of 1280-1730~MeV\/$c^2$. This function accounts for the limited production phase space of the $\\Lambda(1405)$ in $p+p$ reactions. $q_{\\mathrm{c.m.}}$ is the decay momentum of the $\\Sigma$ and $\\pi$ in the $\\Lambda(1405)$ rest frame (in units of [MeV\/c]). The Breit-Wigner function is a simple relativistic parametrization with amplitude $A_i$ in units of $\\left[\\sqrt{\\mu b\/\\mbox{c}}\\cdot \\mbox{MeV\/c}^2\\right]$, mass $m_{0,i}$ and width $\\Gamma_{0,i}$ in units of [MeV\/$c^2$]. Thus, the whole expression has dimensions of $\\left[\\frac{\\mu b}{\\mbox{MeV\/c}^2}\\right]$. We also introduce a free phase $e^{i\\varphi_1}$, which determines the interference between the two Breit-Wigner functions. Furthermore, we make use of the recent coupled channel calculations by Ikeda et al. 
\\cite{Ikeda:2012au}, which are constrained by the new SIDDHARTA data on kaonic hydrogen \\cite{Bazzi:2011zj}. In this work the $\\mathrm{\\bar{K}N}$ pole was found at $z_1=1424^{+7}_{-23}+i26^{+3}_{-14}$~MeV, while the $\\Sigma\\pi$ pole was found at $z_2=1381^{+18}_{-6}+i81^{+19}_{-8}$~MeV. With these values we constrain the Breit-Wigner mass $m_{0,i}c^2=\\mathrm{Re}(z_i)$ and the Breit-Wigner width $\\Gamma_{0,i}c^2=2\\,\\mathrm{Im}(z_i)$ of our model to vary only within the given ranges.\\\\\nAlthough the parametrization of Eq.~(\\ref{equ:DoubleBW}) is simplified compared to the advanced calculations in \\cite{Geng:2007vm}, it still allows us to reconstruct the spectral shape in Fig.~\\ref{fig:OsetFitted} (gray band).\nBy fitting Eq.~(\\ref{equ:DoubleBW}) to the theoretical prediction, we obtain the black distribution with the fit parameters listed in Table~\\ref{tab:tableInter}.\n\\begin{table}[tbp]\n\\caption{\\label{tab:tableInter}%\nFit parameters obtained by fitting Eq.~(\\ref{equ:DoubleBW}) to the theoretical prediction of \\cite{Geng:2007vm} shown in Fig.~\\ref{fig:OsetFitted}.}\n\\begin{ruledtabular}\n\\begin{tabular}{cccccc}\n\\textrm{$m_{0,1}$}&\n\\textrm{$\\Gamma_{0,1}$}&\n\\textrm{$m_{0,2}$}&\n\\textrm{$\\Gamma_{0,2}$}&\n\\textrm{$A_1\/A_2$}&\n\\textrm{$\\varphi_1$}\\\\\n\\colrule\n1426 & 28 & 1375 & 147 & 0.23 & 205$^{\\circ}$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table} \nThis distribution is consistent with the gray band in Fig.~\\ref{fig:OsetFitted}. In particular, the peak structure around 1410~MeV\/$c^2$ and the drop towards the $\\bar{K}N$ threshold are reproduced correctly. Additionally included in the figure are the absolute contributions of the two Breit-Wigner functions $C_{p.s.}(m)\\left|BW_1(m)\\right|^2q_{c.m.}$ and $C_{p.s.}(m)\\left|BW_2(m)\\right|^2q_{c.m.}$ (dotted and dashed lines).\\\\ \nIn this way we have fixed the parametrization of the $\\Lambda(1405)$ and can now study the maximal interference effects with the non$\\--$resonant background. 
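As an illustrative aside (not part of the original analysis), the coherent two-pole parametrization of Eq.~(\\ref{equ:DoubleBW}) is straightforward to sketch numerically. In this hedged sketch the phase-space weight $C_{\\mathrm{p.s.}}(m)$ and the decay momentum $q_{\\mathrm{c.m.}}$ are set to unity, the parameter values are those of Table~\\ref{tab:tableInter} (with $A_2=1$ so that $A_1$ equals the quoted ratio $A_1\/A_2$), and all function and variable names are our own:

```python
import numpy as np

# Parameters from Table (tab:tableInter); masses and widths in MeV/c^2.
# Assumption for illustration: C_ps(m) = q_cm = 1 and A_2 = 1,
# so A_1 is the quoted ratio A_1/A_2 = 0.23.
PARAMS = dict(a1=0.23, m1=1426.0, g1=28.0,
              a2=1.0, m2=1375.0, g2=147.0,
              phi1=np.deg2rad(205.0))

def bw(m, a, m0, gamma):
    # Breit-Wigner amplitude exactly as printed in Eq. (equ:DoubleBW):
    # BW_i(m) = A_i / ((m - m_{0,i})^2 + i m_{0,i} Gamma_{0,i})
    return a / ((m - m0) ** 2 + 1j * m0 * gamma)

def intensity(m, a1, m1, g1, a2, m2, g2, phi1, c_ps=1.0, q_cm=1.0):
    # Coherent sum of the two pole amplitudes; the free phase phi1
    # controls the interference between the two Breit-Wigner terms.
    amp = bw(m, a1, m1, g1) * np.exp(1j * phi1) + bw(m, a2, m2, g2)
    return c_ps * np.abs(amp) ** 2 * q_cm
```

Scanning $\\varphi_1$ in such a sketch shows how strongly the relative phase reshapes the summed line shape, which is the mechanism the interference study relies on.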
\nFor this purpose, we fit the HADES results for the $\\Sigma^+\\pi^-$ and $\\Sigma^-\\pi^+$ invariant mass distributions simultaneously with the following two functions:\n\\begin{widetext}\n\\begin{eqnarray}\n\\left(\\frac{d\\sigma}{dm}\\right)_{\\Sigma^+\\pi^-}=\\left|A_{\\Lambda(1405)}T_{\\Lambda(1405)}+e^{i\\alpha}A_{\\Sigma^+\\pi^-}T_{\\Sigma^+\\pi^-}\\right|^2+\\left|BW_{\\Sigma(1385)^0}\\right|^2 + \\left|BW_{\\Lambda(1520)}\\right|^2 \\label{equ:DistSpPm} \\\\\n\\left(\\frac{d\\sigma}{dm}\\right)_{\\Sigma^-\\pi^+}=\\left|A_{\\Lambda(1405)}T_{\\Lambda(1405)}+e^{i\\beta}A_{\\Sigma^-\\pi^+}T_{\\Sigma^-\\pi^+}\\right|^2+\\left|BW_{\\Sigma(1385)^0}\\right|^2 + \\left|BW_{\\Lambda(1520)}\\right|^2 \\label{equ:DistSmPp}\n\\end{eqnarray} \n\\end{widetext}\nThe contributions from $\\Sigma(1385)^0$ and $\\Lambda(1520)$ are parameterized as Breit-Wigner functions so that they match the extracted shapes and yields reported in \\cite{Agakishiev:2012xk}. We assume here that they do not interfere with the other contributions to the $\\Sigma\\pi$ invariant mass spectra and thus add them incoherently in Eqs.~(\\ref{equ:DistSpPm}) and (\\ref{equ:DistSmPp}). \\\\\nThe $\\Lambda(1405)$ is parameterized as described above. $A_{\\Lambda(1405)}$ is a free fit parameter which determines the absolute yield of the $\\Lambda(1405)$.\\\\ \nThe non$\\--$resonant background shapes for the $\\Sigma^+\\pi^-$ and $\\Sigma^-\\pi^+$ channels ($T_{\\Sigma^+\\pi^-}$ and $T_{\\Sigma^-\\pi^+}$) are described by modified polynomials of 4th order, which read as follows: \n\\begin{eqnarray}\nT_{\\Sigma\\pi}(m)=\\left[C_{p.s.}(m)q_{c.m.}\\sum^{4}_{n=0}a_nm^n\\right]^{\\frac{1}{2}} \\label{equ:NonRes}\n\\end{eqnarray}\nThis parametrization has no physical meaning, but it was chosen so as to describe the simulated $\\Sigma\\pi$ invariant mass distributions in \\cite{Agakishiev:2012xk}. 
The parameters $a_n$ are given in units of $\\left[\\left(\\mbox{MeV\/c}^2\\right)^{-n}\\right]$ and their values are listed in Table~\\ref{tab:table1}.\n\\begin{table}[tbp]\n\\caption{\\label{tab:table1}%\nTable with coefficients for the description of the non$\\--$resonant background according to Eq.~(\\ref{equ:NonRes}).}\n\\begin{ruledtabular}\n\\begin{tabular}{cccc}\n\\textrm{$T_{\\Sigma\\pi}$}&\n\\textrm{$a_0$}&\n\\textrm{$a_1$}&\n\\textrm{$a_2$}\\\\\n\\colrule\n$T_{\\Sigma^+\\pi^-}$ & $7.949\\cdot10^{-3}$ & $-4.412\\cdot10^{-7}$ & $-1.558\\cdot10^{-9}$ \\\\\n$T_{\\Sigma^-\\pi^+}$ & $-2.387\\cdot10^{0}$ & $2.512\\cdot10^{-3}$ & $1.315\\cdot10^{-6}$ \\\\\n\\colrule\\\\\n\\textrm{$T_{\\Sigma\\pi}$}&\n\\textrm{$a_3$}&\n\\textrm{$a_4$}\\\\\n\\colrule\n$T_{\\Sigma^+\\pi^-}$ & $2.942\\cdot10^{-12}$ & $-1.121\\cdot10^{-15}$ \\\\\n$T_{\\Sigma^-\\pi^+}$ & $-2.254\\cdot10^{-9}$ & $6.481\\cdot10^{-13}$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\nThe modified polynomials are multiplied by constant factors $A_{\\Sigma^+\\pi^-}$ and $A_{\\Sigma^-\\pi^+}$ (see~Eq. (\\ref{equ:DistSpPm}) and (\\ref{equ:DistSmPp})), which have dimensions of $\\left[\\frac{\\sqrt{\\mu b\/\\mbox{c}}}{\\mbox{MeV\/c}^2}\\right]$. These are again free fit parameters, determining the absolute yields of the non$\\--$resonant channels, where a value of $A_{\\Sigma\\pi}=1$ corresponds to the yield extracted in \\cite{Agakishiev:2012xk}. Furthermore, complex phases $e^{i\\alpha}$ and $e^{i\\beta}$ have been included so that the modeled background can interfere with the $\\Lambda(1405)$ amplitude. The values of these phases are determined by the fitting procedure as well. 
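As a side note, the background parametrization of Eq.~(\\ref{equ:NonRes}) together with the coefficients of Table~\\ref{tab:table1} can be sketched as follows. This is only an illustration under the simplifying assumption $C_{p.s.}(m)=q_{c.m.}=1$; the dictionary keys and function names are ours, not notation from the paper:

```python
import math

# Coefficients a_0..a_4 of Eq. (equ:NonRes), copied from Table (tab:table1);
# the invariant mass m is in MeV/c^2.
COEFFS = {
    "Sigma+pi-": [7.949e-3, -4.412e-7, -1.558e-9, 2.942e-12, -1.121e-15],
    "Sigma-pi+": [-2.387e0, 2.512e-3, 1.315e-6, -2.254e-9, 6.481e-13],
}

def t_sigma_pi(m, coeffs, c_ps=1.0, q_cm=1.0):
    # T(m) = [C_ps(m) q_cm sum_n a_n m^n]^(1/2); taking the square root keeps
    # T an amplitude, so that |A T|^2 in Eqs. (equ:DistSpPm)/(equ:DistSmPp)
    # gives the non-resonant yield.
    poly = sum(a * m ** n for n, a in enumerate(coeffs))
    return math.sqrt(c_ps * q_cm * poly)
```

Around the $\\Lambda(1405)$ region (e.g. $m\\approx 1400$~MeV\/$c^2$) both polynomials are positive, so the square root is well defined where it matters for the fit.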
Hence, the simultaneous fit of the two functions (\\ref{equ:DistSpPm}) and (\\ref{equ:DistSmPp}) to the experimentally determined $\\Sigma^+\\pi^-$ and $\\Sigma^-\\pi^+$ invariant mass distributions is characterized by five free parameters.\\\\\nWe consider here a scenario of maximal interference, which means that the whole non$\\--$resonant background interferes with the $\\Lambda(1405)$ amplitude.\n The best results of the fitting procedure (gray lines) are illustrated together with the experimental data in panels a) and b) of Fig.~\\ref{fig:Interferences}. The black lines show the amplitude of the $\\Lambda(1405)$, the red lines the contribution of the non$\\--$resonant $\\Sigma\\pi$ channels and the gray lines correspond to the coherent sum of all contributions. The fit parameters are listed in Table~\\ref{tab:table2}. \n\\begin{table}[tbp]\n\\caption{\\label{tab:table2}%\nFit parameters obtained by fitting Eqs.~(\\ref{equ:DistSpPm}) and (\\ref{equ:DistSmPp}) to the experimental data points in Fig.~\\ref{fig:Interferences}.}\n\\begin{ruledtabular}\n\\begin{tabular}{ccccc}\n\\textrm{$A_{\\Lambda(1405)}$}&\n\\textrm{$A_{\\Sigma^+\\pi^-}$}&\n\\textrm{$A_{\\Sigma^-\\pi^+}$}&\n\\textrm{$\\alpha$}&\n\\textrm{$\\beta$}\\\\\n\\colrule\n$1.06$ & $0.93$ & $1.04$ & $67^{\\circ}$ & $109^{\\circ}$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\\begin{figure}[tbp]\n\t\\centering\n\t\t\\includegraphics[width=0.46\\textwidth]{Interferences_1}\n\t\\caption{Missing mass spectrum with respect to the proton and $K^+$. The black data points are the measurements of \\cite{Agakishiev:2012xk}, for the $\\Lambda(1405)$ in the $\\Sigma^+\\pi^-$ (a) and $\\Sigma^-\\pi^+$ (b) decay channels. Panel c) shows the summed spectrum of a) and b). The black lines in a) and b) are the results from the simultaneous fit with Eqs.~(\\ref{equ:DistSpPm}) and (\\ref{equ:DistSmPp}); the gray line represents the sum of all fitted functions. 
In c) the fit functions corresponding to the $\\Sigma^+\\pi^-$ and $\\Sigma^-\\pi^+$ channels, respectively, are shown in gray, and the sum of both functions is shown as the black line.}\n\t\\label{fig:Interferences}\n\\end{figure}\nA reasonable description of the data is achieved, as reflected in a normalized $\\chi^2$ value of 1.23. In panel c) of Fig.~\\ref{fig:Interferences} we show the sum of the two invariant mass distributions and compare it to the sum of our model (black line). \\\\\nThe main message of the three pictures is that, assuming a maximal interference between the $\\Lambda(1405)$ and the non$\\--$resonant contributions and the large phases given in Table~\\ref{tab:table2}, one obtains a shift of the maximum to lower masses. This is the case although the mass maximum of the initial $\\Lambda(1405)$ amplitude is located around 1400~MeV\/$c^2$. By integrating $\\left|A_{\\Lambda(1405)}T_{\\Lambda(1405)}\\right|^2$ (black lines in panels a) and b)), the total cross section of the reaction $p+p\\rightarrow\\Lambda(1405)+p+K^+$ is determined to be 3.3~$\\mu b$. This is considerably smaller than the value extracted in \\cite{Agakishiev:2012xk}, where an incoherent approach was used with a low mass $\\Lambda(1405)$ signal.\\\\ \nAt this point one has to emphasize some details of the model used in this work. First, the parametrization of the $\\Lambda(1405)$ as a simple sum of two Breit-Wigner amplitudes is not equivalent to the full coupled channel calculation of \\cite{Geng:2007vm}. A second point is the description of the non$\\--$resonant background $T_{\\Sigma\\pi}$ by polynomial functions. This is certainly a simplification. 
Also the assumption that the non$\\--$resonant channels carry just a constant complex phase is oversimplified.\nFurthermore, we assume that the whole $\\Sigma\\pi$ non$\\--$resonant background appears in the s-wave channel with I=0, testing here the maximal possible interference.\nHowever, this maximal interference scenario turns out to be unlikely, since no comparable mass shifts have been observed in the spectral shape of the $\\Sigma(1385)^+$ in the $\\mathrm{\\Lambda-p}$ final state \\cite{Agakishiev:2011qw} or for the $\\Lambda(1520)$ in the $\\Sigma\\pi$ decay channel. Moreover, it seems rather peculiar that interferences between the $\\Lambda(1405)$ and the non$\\--$resonant background should result in the same mass shift for both the $\\Sigma^+\\pi^-$ and the $\\Sigma^-\\pi^+$ invariant mass distributions, while the physical origin of the non$\\--$resonant background is quite different in the two cases. According to \\cite{Agakishiev:2012xk}, the $\\Sigma^-\\pi^+$ non$\\--$resonant background arises from a strong contribution of a $\\Delta^{++}$, whereas other mechanisms, e.g. $N^*\/\\Delta^0$ production via the reaction $p+p\\rightarrow\\Sigma^+ + N^*\/\\Delta^0+K^+\\rightarrow\\Sigma^++(\\pi^-+p)+K^+$, could contribute to the $\\Sigma^+\\pi^-$ non$\\--$resonant spectrum.\\\\\nOn the other hand, the presented results are merely meant to emphasize that interference effects can play a significant role. Indeed, our model, even though it is not a full-fledged theoretical approach, has shown that interference effects can significantly shift the observed mass peak in the experimental spectra. We cannot prove that these effects are indeed responsible for the observed low mass $\\Lambda(1405)$ signal. 
We merely aim to illustrate the importance of a serious treatment of the non$\\--$resonant background in any theoretical approach.\n\\section{Contributions of the two poles}\nHaving illustrated the maximal possible influence of interference effects between the $\\Lambda(1405)$ resonance and the non$\\--$resonant background, we now consider the second extreme case, where this particular interference term is neglected. The observed low mass peaks in the HADES data are then completely attributed to the ``pure'' $\\Lambda(1405)$ signal. With this assumption one can try to determine the parameters and the relative contribution of the two $\\Lambda(1405)$ poles. \nAs a starting point, we again parameterize the $\\Lambda(1405)$ as a coherent sum of the two Breit-Wigner amplitudes (see Eq.~(\\ref{equ:DoubleBW})). This time, however, $m_{0,1}$, $\\Gamma_{0,1}$, $m_{0,2}$ and $\\Gamma_{0,2}$ as well as $A_1$ and $A_2$ shall be determined directly from the HADES data. As before, the real and imaginary parts of both poles are constrained by the results of \\cite{Ikeda:2012au} to $z_1=1424^{+7}_{-23}+i26^{+3}_{-14}$~MeV and $z_2=1381^{+18}_{-6}+i81^{+19}_{-8}$~MeV. Now the two functions~(\\ref{equ:DistSpPm2}) and (\\ref{equ:DistSmPp2}) are used to describe the experimental data points of Fig.~\\ref{fig:Interferences} a) and b). 
\n\\begin{widetext}\n\\begin{eqnarray}\n\\left(\\frac{d\\sigma}{dm}\\right)_{\\Sigma^+\\pi^-}=C_{p.s.}(m)\\left|BW_1(m)e^{i\\varphi_1}+BW_2(m) \\right|^2q_{c.m.}+\\left|A_{\\Sigma^+\\pi^-}T_{\\Sigma^+\\pi^-}\\right|^2+\\left|BW_{\\Sigma(1385)^0}\\right|^2 + \\left|BW_{\\Lambda(1520)}\\right|^2 \\label{equ:DistSpPm2} \\\\\n\\left(\\frac{d\\sigma}{dm}\\right)_{\\Sigma^-\\pi^+}=C_{p.s.}(m)\\left|BW_1(m)e^{i\\varphi_1}+BW_2(m) \\right|^2q_{c.m.}+\\left|A_{\\Sigma^-\\pi^+}T_{\\Sigma^-\\pi^+}\\right|^2+\\left|BW_{\\Sigma(1385)^0}\\right|^2 + \\left|BW_{\\Lambda(1520)}\\right|^2 \\label{equ:DistSmPp2}\n\\end{eqnarray} \n\\end{widetext}\nIn these equations all individual contributions apart from the two $\\Lambda(1405)$ amplitudes are added incoherently. The non$\\--$resonant background is fixed in yield to the HADES results \\cite{Agakishiev:2012xk} by setting $A_{\\Sigma^+\\pi^-}$ and $A_{\\Sigma^-\\pi^+}$ to 1. In this way only $m_{0,1}$, $\\Gamma_{0,1}$, $m_{0,2}$, $\\Gamma_{0,2}$, $A_1$, $A_2$ and $\\varphi_1$ remain as free fit parameters in Eqs.~(\\ref{equ:DistSpPm2}) and (\\ref{equ:DistSmPp2}). The results for the best fit ($\\chi^2\/ndf=1.04$) are shown in Fig.~\\ref{fig:Fit_Weise} and the obtained fit parameters are listed in Table~\\ref{tab:table3}. \n\\begin{figure}[tbp]\n\t\\centering\n\t\t\\includegraphics[width=0.46\\textwidth]{Fit_Weise_1}\n\t\\caption{(Color online) Same as Fig.~\\ref{fig:Interferences} but now the data are fitted with Eqs.~(\\ref{equ:DistSpPm2}) and (\\ref{equ:DistSmPp2}). 
See text for details.}\n\t\\label{fig:Fit_Weise}\n\\end{figure}\n\\begin{table}[tbp]\n\\caption{\\label{tab:table3}%\nFree fit parameters obtained from the fit with Eqs.~(\\ref{equ:DistSpPm2}) and (\\ref{equ:DistSmPp2}).}\n\\begin{ruledtabular}\n\\begin{tabular}{cccccc}\n\\textrm{$m_{0,1}$}&\n\\textrm{$\\Gamma_{0,1}$}&\n\\textrm{$m_{0,2}$}&\n\\textrm{$\\Gamma_{0,2}$}&\n\\textrm{$A_1\/A_2$}&\n\\textrm{$\\varphi_1$}\\\\\n\\colrule\n$1418$ & $58$ & $1375$ & $146$ & $0.395$ & $178^{\\circ}$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\nA very good description of the data is achieved, with the maximum of the $\\Lambda(1405)$ distribution now appearing at around $1385$~MeV\/$c^2$. Integrating this signal results in a total cross section of $\\sigma=9.0$~$\\mu b$ for the reaction $p+p\\rightarrow\\Lambda(1405)+p+K^+$, which is in good agreement with the result quoted in \\cite{Agakishiev:2012xk}. The composition of the $\\Lambda(1405)$ signal itself is illustrated in Fig.~\\ref{fig:L1405_Weise}. The resulting $\\Lambda(1405)$ amplitude differs strongly from the results reported in \\cite{Geng:2007vm} and shown in Fig.~\\ref{fig:OsetFitted}. \n\\begin{figure}[tbp]\n\t\\centering\n\t\t\\includegraphics[width=0.45\\textwidth]{L1405_Weise}\n\t\\caption{$\\Lambda(1405)$ signal (black line) obtained by fitting Eqs.~(\\ref{equ:DistSpPm2}) and (\\ref{equ:DistSmPp2}) to the experimental data in Fig.~\\ref{fig:Fit_Weise}. The dotted and dashed lines show the contributions of $C_{p.s.}(m)\\left|BW_1(m)\\right|^2q_{c.m.}$ and $C_{p.s.}(m)\\left|BW_2(m)\\right|^2q_{c.m.}$.}\n\t\\label{fig:L1405_Weise}\n\\end{figure}\nThe $z_2$ pole (dashed line) has the dominant contribution, with mass and width values at the edge of the allowed fit range ($m_{0,2}=1375$~MeV, $\\Gamma_{0,2}=146$~MeV). 
This very broad signal interferes with the $z_1$ pole (dotted line) in such a way that the high mass region of the $\\Lambda(1405)$ is strongly suppressed, creating a peak below 1400~MeV\/$c^2$ (solid black line). However, with the large number of free parameters, the broad ranges within which the mass and width values are constrained, and the limited number of data points, it is difficult to draw firm conclusions about the relative contributions and the positions of the two poles. One can only state that a significant part of the $\\Lambda(1405)$ amplitude could be associated with the second, broad pole. One should also notice here that the $\\Lambda(1405)$ signal shows a tail for masses above 1440~MeV\/$c^2$. This tail was not considered by the authors of \\cite{Agakishiev:2012xk}, who assumed the $\\Lambda(1405)$ to be located only at lower masses and who therefore used the high mass range of the $\\Sigma\\pi$ spectra to determine the contribution of the $\\Sigma\\pi$ non$\\--$resonant channels. From the results in Figs.~\\ref{fig:Fit_Weise} and \\ref{fig:L1405_Weise} one sees, however, that this assumption might be too simple. 
Thus, a further possibility is not to fix the non$\\--$resonant background but to treat $A_{\\Sigma^+\\pi^-}$ and $A_{\\Sigma^-\\pi^+}$ as two additional fit parameters.\nApplying this new fit to the data results in the fit values listed in Table~\\ref{tab:table4}.\n\\begin{table}[tbp]\n\\caption{\\label{tab:table4}%\nFree fit parameters obtained from the fit with Eqs.~(\\ref{equ:DistSpPm2}) and (\\ref{equ:DistSmPp2}), with $A_{\\Sigma^+\\pi^-}$ and $A_{\\Sigma^-\\pi^+}$ as additional free parameters.}\n\\begin{ruledtabular}\n\\begin{tabular}{cccccccc}\n\\textrm{$m_{0,1}$}&\n\\textrm{$\\Gamma_{0,1}$}&\n\\textrm{$m_{0,2}$}&\n\\textrm{$\\Gamma_{0,2}$}&\n\\textrm{$A_1\/A_2$}&\n\\textrm{$\\varphi_1$} &\n\\textrm{$A_{\\Sigma^+\\pi^-}$} & \n\\textrm{$A_{\\Sigma^-\\pi^+}$} \\\\\n\\colrule\n$1431$ & $58$ & $1375$ & $146$ & $0.204$ & $164^{\\circ}$ & $0.857$ & $0.887$\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table} \nThe fit quality of $\\chi^2\/ndf=1.03$ is as good as before, but the contribution of the $z_1$ pole is further reduced, as seen by comparing the $A_1\/A_2$ ratios in Tables~\\ref{tab:table3} and \\ref{tab:table4}. However, as the number of free parameters has further increased, the fit to the data is no longer robust, so that the results of Table~\\ref{tab:table4} are not very reliable.\\\\ \nIn summary, it can be concluded that it is rather difficult to precisely determine the relative contribution of $z_1$ and $z_2$ and, simultaneously, their exact positions in the complex energy plane just by a fit to the HADES data alone. In fact, an appropriate theory model for proton-proton reactions with a serious treatment of all possible background contributions is required to draw further conclusions. 
Nevertheless, the results obtained in this work clearly show that, in order to describe the new HADES data, a rather large contribution of the broad, low mass $\\Lambda(1405)$ pole is needed, provided that no interference with the non$\\--$resonant background is present.\\\\\nIn this context, it would also be important to have more restrictive constraints on the real and imaginary parts of $z_1$ and $z_2$. The values quoted above allow a rather large variation of these parameters. The situation is, however, even less clear. In a recent work by Mai and Meissner \\cite{Mai:2012dt}, who also used the latest SIDDHARTA results, the positions of the two poles were derived as $z_1=1428^{+2}_{-1}+i8^{+2}_{-2}$~MeV and $z_2=1497^{+11}_{-7}+i75^{+9}_{-9}$~MeV. \nThe imaginary part of the first pole is much smaller than the one of \\cite{Ikeda:2012au}, but even more striking is the completely different real part of $z_2$, which is shifted by about 100~MeV to higher energies. We can also take these values to fit the HADES data. For this purpose, the non$\\--$resonant background amplitudes $A_{\\Sigma^+\\pi^-}$ and $A_{\\Sigma^-\\pi^+}$ are again fixed to 1 and the mass and width values are allowed to vary within the given ranges. The best fit result is shown in Fig.~\\ref{fig:Fit_Meissner} and the obtained fit parameters are listed in Table~\\ref{tab:tableMeissner}. The corresponding decomposition of the $\\Lambda(1405)$ spectrum is shown in Fig.~\\ref{fig:L1405_Meissner1}. \n\\begin{figure}[tbp]\n\t\\centering\n\t\t\\includegraphics[width=0.46\\textwidth]{Fit_Meissner_1}\n\t\\caption{Same as Fig.~\\ref{fig:Fit_Weise} but the mass and width constraints for the two $\\Lambda(1405)$ poles are taken from \\cite{Mai:2012dt}. 
See text for details.}\n\t\\label{fig:Fit_Meissner}\n\\end{figure}\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=0.45\\textwidth]{L1405_Meissner}\n\t\\caption{Same as Fig.~\\ref{fig:L1405_Weise} but the constraints of \\cite{Mai:2012dt} for the mass and width values of the two $\\Lambda(1405)$ poles were used in the fits. See text for details.}\n\t\\label{fig:L1405_Meissner1}\n\\end{figure}\n\n\\begin{table}[tbp]\n\\caption{\\label{tab:tableMeissner}%\nFree fit parameters obtained from the fit with Eqs.~(\\ref{equ:DistSpPm2}) and (\\ref{equ:DistSmPp2}), using the constraints of \\cite{Mai:2012dt} for the mass and width values of the two $\\Lambda(1405)$ poles.}\n\\begin{ruledtabular}\n\\begin{tabular}{cccccc}\n\\textrm{$m_{0,1}$}&\n\\textrm{$\\Gamma_{0,1}$}&\n\\textrm{$m_{0,2}$}&\n\\textrm{$\\Gamma_{0,2}$}&\n\\textrm{$A_1\/A_2$}&\n\\textrm{$\\varphi_1$}\\\\\n\\colrule\n$1427$ & $20$ & $1490$ & $168$ & $0.17$ & $264^{\\circ}$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table} \nThe fit result is very poor, as also reflected in the normalized value $\\chi^2\/ndf=3.4$. With the $z_2$ pole having such a large real part, it becomes impossible to create a peak structure at around 1385~MeV\/$c^2$ as observed by the HADES collaboration. Within our simplified model, we can thus conclude that the pole positions extracted in \\cite{Mai:2012dt} are not compatible with the new HADES data.\n\n\\section{Summary} \n\nWe have presented different interpretations of the low mass $\\Lambda(1405)$ signal, measured by the HADES collaboration in $p+p$ reactions. It was shown that the observed signal is very similar to the results obtained in $\\pi^-+p$ reactions. One possible explanation for the observed mass shift is based on interference effects. For that purpose we have developed a simple model, in which we assume the $\\Lambda(1405)$ to consist of two poles, parameterized as Breit-Wigner amplitudes. 
The contribution and position of these poles were first chosen so as to match the $\\Lambda(1405)$ line shape of \\cite{Geng:2007vm}, with the mass peak appearing at $\\approx 1410$~MeV\/$c^2$. With this model we could show that maximal interference between the $\\Lambda(1405)$ amplitude and the non$\\--$resonant $\\Sigma\\pi$ background can indeed create a low mass signal and thus describe the HADES data. However, it was argued that this explanation is rather unlikely. In a second approach we have neglected interference effects with the non$\\--$resonant background and have used the HADES data themselves to determine the position and the relative contribution of the two $\\Lambda(1405)$ poles. With a relatively large contribution of the second, broad $\\Lambda(1405)$ pole, $z_2$, we could achieve a very good description of the HADES data. However, the limited statistics of the data did not allow firm conclusions to be drawn on the precise contribution of the two poles. Also, their position in the complex energy plane could not be determined precisely just by a fit to this single data set. Nevertheless, it was shown that, within our simple model, a high energy pole position of $z_2=1497^{+11}_{-7}+i75^{+9}_{-9}$~MeV, as obtained in \\cite{Mai:2012dt}, is not compatible with the new HADES data. \\\\ \n\n\nThe authors would like to thank Wolfram Weise, Wolfgang Koenig and Piotr Salabura for very useful discussions. This work is supported by the Munich funding agency, BMBF 06DR9059D, 05P12WOGHH, TU M\\"unchen, Garching\n(Germany) MLL M\\"unchen: DFG EClust 153, VH-NG-330 BMBF 06MT9156 TP6 GSI. \n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nLet $S$ be a set of rectangles in the plane, with vertical and horizontal sides, whose interiors do not intersect. 
We say that two rectangles $A$ and $B$ in $S$ \\textit{\\textbf{see each other}} if there is a vertical or horizontal line segment intersecting the interiors of both $A$ and $B$ and intersecting no other (closed) rectangles in $S$, like the dotted lines in Figure~\\ref{fig:RVG-example}. We refer to such segments as \\textit{\\textbf{lines of sight}}, and under this definition we may consider them to have small positive width. For example, there is no line of sight between rectangles $B$ and $F$ in Figure~\\ref{fig:RVG-example}, since a line of sight needs positive width. \n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=.6\\textwidth]{RVG-example.pdf}\n\\caption{A rectangle visibility graph and a corresponding RV-representation with integer rectangles.}\n\\label{fig:RVG-example} \n\\end{figure} \n\nWe construct a graph $G$ with a vertex for every rectangle in $S$, and an edge between two vertices if and only if their corresponding rectangles see each other. We say that $S$ is a \\textbf{\\textit{rectangle visibility}} (\\textit{\\textbf{RV-}}) \\textit{\\textbf{representation}} of $G$, and $G$ is a \\textit{\\textbf{rectangle visibility graph}} or \\textit{\\textbf{RVG}}. We allow rectangles in $S$ to share edges. \n\nA similar notion of rectangle visibility graph was first introduced in 1976 by Garey, Johnson and So \\cite{garey76} as a tool to study the design of printed circuit boards. Their RVGs have only $1\\times 1$ squares, located at a set of lattice points in a grid. Hutchinson continued work on this problem in 1993 \\cite{hutchinson93}. In \\cite{bose1996rectangle}, Bose, Dean, Hutchinson and Shermer described the problem of \\textit{two-layer routing} in ``very large-scale integration'' (VLSI) design as follows: \n\\begin{quote} {\\small\nIn two-layer routing, one embeds processing components and their connections (sometimes called \\textit{wires}) in two layers of silicon (or other VLSI material). The components are embedded in both layers. 
The wires are also embedded in both layers, but one layer holds only horizontal connections, and the other holds only vertical ones. If a connection must be made between two components that are not cohorizontal or covertical, then new components (called \\textit{vias}) are added to connect horizontal and vertical wires together, resulting in bent wires that alternate between the layers. However, vias are large compared to wires and their use should be minimized. In this setting, asking if a graph is a rectangle-visibility graph is the same as asking if a set of components can be embedded so that there is a two-layer routing of their connections that uses no vias. Our requirement that visibility bands have positive width is motivated by the physical constraint that wires must have some minimum width. A similar problem arises in printed-circuit board design, as printed-circuit boards naturally have two sides, and connecting wires from one side to the other (the equivalent of making vias) is relatively expensive.}\n\\end{quote}\nIn 1995, Dean and Hutchinson began the study of RVGs in their own right. They focused on bipartite RVGs, and showed that $K_{p,q}$ is an RVG if and only if $p \\leq 4$, and that every bipartite RVG with $n$ vertices has at most $4n-12$ edges \\cite{Dean95}. In 1996 Bose, Dean, Hutchinson, and Shermer sought to characterize families of graphs that are RVGs. They proved that every graph with maximum degree four is an RVG, and every graph that can be decomposed into two caterpillar forests is an RVG, among other results \\cite{bose1996rectangle}. In 1999 Hutchinson, Shermer, and Vince \\cite{hutchinson1999representations} proved that every RVG with $n$ vertices has at most $6n-20$ edges, and this bound is tight for $n\\geq 8$. 
RVGs have since been studied by many other authors \\cite{angelini18,biedl16, Cahit98, GD2014, Dean97,Dean98,dean08,Dean2010,Streinu03}, including generalizations to 3-dimensional boxes \\cite{bose99,develin03,Fekete99,gethner11}, rectilinear polygons with more than four edges \\cite{digiacomo18, liotta21}, and other variations.\n\nIn 1997, Kant, Liotta, Tamassia, and Tollis considered the minimum area, height, and width required to represent a tree as an RVG, as measured by the smallest bounding box containing all of the rectangles in the RV-representation \\cite{kant97}. They obtained asymptotic bounds on the area, width, and height of these representations and found a linear-time algorithm to construct them. In this paper we consider a similar problem, but seek exact bounds on the area, width, height, and perimeter of an RV-representation of any graph with $n$ vertices. We say that $\\area(G)$, $\\perimeter(G)$, $\\height(G)$, and $\\width(G)$ are the minimum area, perimeter, height, and width, respectively, of the bounding box of any integer rectangle visibility representation of the graph $G$. These are the objects of study in this paper.\n\nIn Section~\\ref{Definitions} we specify the rectangle visibility graphs we consider and provide definitions and notation needed for the paper. We finish the section with lemmas we will use in later sections.\n\nIn Section~\\ref{SeparatingExamples} we show that these four measures of size of a rectangle visibility graph are all distinct, in the sense that there exist two graphs $G_1$ and $G_2$ with $\\area(G_1)<\\area(G_2)$ but $\\perimeter(G_2)<\\perimeter(G_1)$, and analogously for all other combinations of these parameters. 
\n\nIn Section~\\ref{SeparatingRepresentations} we show that these measures are not necessarily all attained by the same representation; i.e., there is a graph $G_3$ with two RVG representations $S_1$ and $S_2$ with $\\area(G_3)=\\area(S_1)<\\area(S_2)$ but $\\perimeter(G_3)=\\perimeter(S_2)<\\perimeter(S_1)$, and analogously for all other combinations of these parameters.\n\nIn Section \\ref{SmallParameters} we characterize the graphs that have the smallest height, width, area, and perimeter among all graphs with $n$ vertices.\n\nIn Section~\\ref{LargeParameters} we investigate the graphs with largest height, width, area, and perimeter. We show that, among graphs with $n \\leq 6$ vertices, the empty graph $E_n$ has largest area, and for graphs with 7 or 8 vertices, the complete graphs $K_7$ and $K_8$ have larger area than $E_7$ and $E_8$, respectively. Using this, we show that for all $n \\geq 7$, the empty graph $E_n$ does not have largest area among all RVGs on $n$ vertices. The graphs with more than 6 vertices that maximize these parameters are still unknown.\n\nIn Section~\\ref{Conclusions}, we conclude with a number of open questions.\n\n\\section{Basic Definitions and Results} \\label{Definitions}\n\nA rectangle with horizontal and vertical sides whose corners are integer lattice points is said to be an \\textit{\\textbf{integer rectangle}}. We consider only integer rectangles for the remainder of the paper. Each rectangle is specified by the two $x$-coordinates and two $y$-coordinates of its corners. For a set $S$ of rectangles, the smallest rectangle with horizontal and vertical sides containing $S$ is the \\textit{\\textbf{bounding box}} of $S$.\n\nSuppose $G$ is an RVG with RV-representation $S$ contained in the bounding box $R$, and say $R$ has corners with $x$-coordinates $0$ and $u$ and $y$-coordinates $0$ and $v$, with $u,v\\in \\mathbb{Z}$. 
We can view $R$ as a $u \\times v$ grid, with $v$ rows and $u$ columns, and with rectangles in $S$ each contained in a consecutive set of rows and columns. For example, in the representation shown in Figure~\\ref{fig:RVG-example}, rectangle $A$ is contained in rows 3, 4, 5, and 6, and column 1.\n\nWe use the convention that lower case letters are vertices of the graph $G$, and the corresponding upper case letters are rectangles in the RV-representation $S$ of $G$; e.g., $a$ is a vertex in $G$, and $A$ is its corresponding rectangle in $S$. For a given rectangle $A$ in $S$, we denote the $x$-coordinates of its vertical sides by $x_1^A$ and $x_2^A$ with $x_1^A < x_2^A$, and the $y$-coordinates of its horizontal sides by $y_1^A$ and $y_2^A$, with $y_1^A < y_2^A$. In other words, as a Cartesian product of intervals, we have $$A=[x_1^A,x_2^A] \\times [y_1^A,y_2^A].$$\n\nWe also introduce notation to refer to the set of rectangles in $S$ that are above (respectively, below, to the left of, or to the right of) a given rectangle $A$. Specifically, let the set of rectangles above (north of) $A$ be denoted by\n$$ {\\mathcal{N}}(A) = \\{ X \\in S : y_1^X \\geq y_2^A \\mbox{ and } (x_1^X,x_2^X) \\cap (x_1^A,x_2^A) \\not= \\emptyset \\}.$$ Similarly define ${\\mathcal{S}}(A)$, ${\\mathcal{W}}(A)$, and ${\\mathcal{E}}(A)$ (rectangles south, west, and east of $A$, respectively). For example, in Figure \\ref{fig:RVG-example}, $\\mathcal{N}(D)=\\{B,E\\},$ while $\\mathcal{E}(A)=\\{B, E, D\\}$ and $\\mathcal{S}(A)=\\varnothing.$ Note that $A$ might not see every rectangle in ${\\mathcal{N}}(A)$ if there are other rectangles obstructing the view (and similarly for rectangles in the other three sets). \n\nLet $R$ be the smallest bounding box having horizontal and vertical sides and containing all the rectangles in a set of integer rectangles $S$. For the remainder of the paper, we turn $R$ so that $\\height(R) \\leq \\width(R)$. 
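Since the sets $\mathcal{N}(A)$, $\mathcal{S}(A)$, $\mathcal{W}(A)$, and $\mathcal{E}(A)$ are defined purely by coordinate comparisons, small examples are easy to check mechanically. The following sketch is our own illustration (the rectangles in it are invented, not taken from Figure~\ref{fig:RVG-example}); it computes $\mathcal{N}(A)$ and $\mathcal{S}(A)$ directly from the definitions, with $\mathcal{E}$ and $\mathcal{W}$ analogous.

```python
# Sketch: the directional sets N(A) and S(A) from the definitions above.
# A rectangle is a tuple (x1, x2, y1, y2) with x1 < x2 and y1 < y2.
# The example rectangles below are illustrative, not taken from the figures.

def open_overlap(a1, a2, b1, b2):
    """True iff the open intervals (a1, a2) and (b1, b2) intersect."""
    return max(a1, b1) < min(a2, b2)

def north(A, S):
    """N(A): rectangles of S whose bottom edge is at or above the top edge
    of A and whose open x-extent meets that of A."""
    return {X for X in S
            if X != A and X[2] >= A[3] and open_overlap(X[0], X[1], A[0], A[1])}

def south(A, S):
    """S(A): the symmetric set below A."""
    return {X for X in S
            if X != A and X[3] <= A[2] and open_overlap(X[0], X[1], A[0], A[1])}

A = (0, 2, 0, 1)   # a 2-wide, 1-tall rectangle
B = (1, 3, 2, 3)   # above A; open x-extents overlap on (1, 2)
C = (2, 4, 2, 3)   # above A, but open x-extents only touch at x = 2
S = {A, B, C}
```

As noted above, membership in $\mathcal{N}(A)$ only makes a rectangle a candidate neighbor; visibility can still be blocked by other rectangles.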
\nGiven a graph $G$, the \\textit{\\textbf{area}}, \\textit{\\textbf{perimeter}}, \\textit{\\textbf{height}}, and \\textit{\\textbf{width}} of $G$ are the minimums of the corresponding parameters taken over all bounding boxes of RV-representations of $G$ with height less than or equal to width.\n\nWe conclude this section with some preliminary results. \nFirst we explore the extent to which we can focus on the parameters of connected graphs, and in what ways the values for disconnected graphs are determined or bounded by the parameters of their connected components.\n\nFor convenience in stating the next result, we introduce the following notation.\nFor any positive integers $h$ and $w$, let ${\\mathcal{F}}_{h,w}$ denote the (finite) set of graphs that have RV-representations in an $h \\times w$ bounding box.\n\n\n\n\n\n\n\\begin{lemma}\\label{NewDisjointLem}\nIf $G$ is the disjoint union of graphs $H$ and $J$, then:\n\\begin{enumerate}[label=\\rm{(\\roman*).}]\n\\item $\\height(G)=\\height(H)+\\height(J),$\n\\item $\\perimeter(G)=\\perimeter(H)+\\perimeter(J),$ \n\\item $\\width(G)=\\min \\{ \\max \\{x+b, y+a\\} \\, | \\, H \\in {\\mathcal{F}}_{x,y}, J \\in {\\mathcal{F}}_{a,b} \\}$,\n\\item $\\area(G)=\\min \\{ (x+a)(y+b) \\, | \\, H \\in {\\mathcal{F}}_{x,y}, J \\in {\\mathcal{F}}_{a,b} \\}$.\n\\end{enumerate} \n\\end{lemma}\n\n\\begin{proof}\nSuppose $G$ is the disjoint union of graphs $H$ and $J$. Given any RV-representations $S_1$ and $S_2$ of $H$ and $J$, we construct two RV-representations of $G$. As indicated in Figure~\\ref{Glue}, we identify the upper right corner of $S_1$ with the lower left corner of either $S_2$ or $S_2^T$, where $S_2^T$ denotes the RV-representation of $J$ formed by transposing $S_2$ across its main (top left to lower right) diagonal. 
If $S_1$ is $x \\times y$ and $S_2$ is $a \\times b$, then it follows that\n$ G \\in {\\mathcal{F}}_{x+a,y+b} \\cap {\\mathcal{F}}_{x+b,y+a}.$\nThis implies that each of the expressions in (i)-(iv) is an upper bound for the corresponding parameter of $G$.\n\nTo prove these expressions are also lower bounds, note that \n any RV-representation $S$ of $G$ must have the rectangles corresponding to vertices of $H$ in separate rows and columns from the rectangles corresponding to vertices of $J$. If $S$ has height smaller than $\\height(H)+\\height(J)$, then either the rectangles in $S$ corresponding to vertices of $H$ must form an RV-representation of $H$ with height less than $\\height(H)$ or the rectangles in $S$ corresponding to vertices of $J$ must form an RV-representation of $J$ with height less than $\\height(J)$. Neither is possible, so no such representation exists. Similar arguments apply to the other parameters.\n\\end{proof}\n\n\\begin{remark}\nThe following example illustrates a subtlety captured by the formula in Lemma~\\ref{NewDisjointLem}(iv). Consider the graphs $P_4+K_{1,4}$ and $K_{1,4}+K_{1,4}$. We have $\\area(P_4+K_{1,4})=27$, obtained from a $1 \\times 4$ RV-representation of $P_4$ and a $2 \\times 5$ RV-representation of $K_{1,4}$. But we also have $\\area(K_{1,4}+K_{1,4})=36$, obtained from two copies of a $3 \\times 3$ RV-representation of $K_{1,4}$ (both representations of $K_{1,4}$ are given in Figure~\\ref{fig:width-examples2a}). 
So the minimum area of $H + K_{1,4}$ uses different representations of $K_{1,4}$ depending on $H$.\n\\end{remark}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=.7\\textwidth]{disjoint-union.pdf}\n\\caption{Two options for a combined representation of a disjoint union of two RVGs}\n\\label{Glue} \n\\end{figure} \n\n\\begin{corollary}\\label{NewDisjointCor}\nIf $G$ is the disjoint union of graphs $H$ and $J$, then:\n\\begin{enumerate}[label=\\rm{(\\roman*).}]\n\\item $\\width(G) \\leq \\width(H) + \\width(J)$,\n\\item $\\area(G) \\leq (\\width(H) + \\width(J))^2$.\n\\end{enumerate} \n\\end{corollary}\n\n\\begin{proof}\nThese both follow immediately from the \nconstruction in the proof of Lemma~\\ref{NewDisjointLem} shown in Figure~\\ref{Glue}. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\begin{lemma}\nSuppose $G$ is a graph with RV-representation $S$ with bounding box $R$. If $\\height(G)=\\height(R)$ and $\\width(G)=\\width(R)$, then $\\area(G)=\\area(R)$ and $\\perimeter(G)=\\perimeter(R)$. In other words, if $S$ realizes the height and width of $G$, then $S$ also realizes the area and perimeter of $G$. \n\\label{lemma-same-height-width}\n\\end{lemma}\n\n\\begin{proof}\nAny representation with smaller area or perimeter must have smaller height or smaller width, which is impossible by hypothesis.\n\\end{proof}\n\nLater (in Table~\\ref{table-separating2}), we will see that the hypotheses of Lemma~\\ref{lemma-same-height-width} are necessary. In particular, $G_6$ has two different representations for minimizing area and perimeter.\n\n\n\\section{Height, Width, Area, and Perimeter induce distinct orderings of RVGs} \\label{SeparatingExamples}\n\nIn this section, we consider the various notions of height, width, perimeter, and area of RVGs. We show that these parameters represent independent measures of RVGs, in the sense that they do not always give identical orderings of the sets of graphs on a given number of vertices. 
\nExamples to illustrate these results are summarized in Tables~\\ref{table-separating1} and \\ref{table-separating-new}. Graphs $G_1$, $G_2$, $G_3$, and $G_4$ in these tables are shown in Figures~\\ref{fig:width-examplesBB} and~\\ref{fig:width-examplesCC}. For each pair of parameters, there exists a pair of graphs with an equal number of vertices that are oppositely ordered by those parameters. We have verified the height, width, area, and perimeter of all connected graphs with 6 or fewer vertices by computer search \\cite{WebList}, and the claims regarding $P_6$, $C_6$, $G_1$, and $G_2$ in Tables~\\ref{table-separating1} and \\ref{table-separating-new} follow easily; see Figures~\\ref{fig:width-examplesAA} and \\ref{fig:width-examplesBB}.\nThe claims regarding $G_3$ and $G_4$, each with 15 vertices, are proved in Theorems~\\ref{width-theorem} and \\ref{width-theorem2}. \n\n \n\n\\begin{table}[ht!]\n\\begin{tabular}{c|c|cccc} \n Graph & Vertices & Height & Width & Area & Perimeter \\\\ \\hline \\hline\n $P_6$ & 6 & {\\bf 1} & 4 & {\\bf 6} & 14 \\\\ \n $C_6$ & 6 & 2 & {\\bf 3} & 8 & {\\bf 12} \\\\ \\hline\n $G_1$ & 6 & {\\bf 2} & -- & 10 & -- \\\\\n $G_2$ & 6 & 3 & -- & {\\bf 9} & -- \\\\ \\hline\n $G_3$ & 15 & -- & 6 & -- & {\\bf 18} \\\\\n $G_4$ & 15 & -- & {\\bf 5} & -- & 20 \\\\ \\hline\n\\end{tabular} \n\\smallskip\n\\caption{Separating examples for height, width, area, and perimeter.}\n\\label{table-separating1}\n\\end{table}\n\n\n\\begin{table}[ht!]\n\\begin{tabular}{c|ccc}\n & Perimeter & Height & Width \\\\ \\hline \\hline\n Area & $P_6,C_6$ & $G_1,G_2$ & $P_6,C_6$ \\\\ \n Perimeter & -- & $P_6,C_6$ & $G_3,G_4$ \\\\ \n Height & -- & -- & $P_6,C_6$ \\\\\n \\hline\n\\end{tabular} \n\\smallskip\n\\caption{Graph pairs that are oppositely ordered by each of the parameter pairs.}\n\\label{table-separating-new}\n\\end{table}\n \n\n\\begin{figure}[ht!]\n \\centering \\includegraphics[width=.9\\textwidth]{P6-and-C6.pdf}\n \\caption{Graphs $P_6$ and $C_6$ show that 
height and area order graphs differently than width and perimeter.}\n \\label{fig:width-examplesAA}\n\\end{figure}\n\n\n\n\\begin{figure}[ht!]\n \\includegraphics[width=.8\\textwidth]{G1-and-G2-new.pdf}\n \\caption{Graphs $G_1$ and $G_2$ show that height orders graphs differently than area.}\n \\label{fig:width-examplesBB}\n\\end{figure}\n\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=.4\\textwidth]{width-example1.pdf} \\hspace{1em}\n \\includegraphics[width=.4\\textwidth]{width-example2.pdf}\n \\caption{Graphs $G_3$ and $G_4$ show that width orders graphs differently than perimeter.}\n \\label{fig:width-examplesCC}\n\\end{figure}\n\n\n\n \n\n\n\n\n\n\\begin{theorem}\nThe graph $G_3$ shown on the left in Figure~\\ref{fig:width-examplesCC} has perimeter 18 and width 6. \\label{G1-proof}\n\\label{width-theorem}\n\\end{theorem}\n\n\\begin{proof}\n\\noindent To find the perimeter of $G_3$, we first consider its area. Let $v$ be the vertex of degree 10 in $G_3$. In any RV-representation of $G_3$, the corresponding rectangle $V$ must have perimeter at least 10 (each of the 10 neighbors of $v$ needs a distinct unit segment of the boundary of $V$ for its line of sight), so its area is at least 4. Together with the 14 other vertices, we see $\\area(G_3)\\ge 18.$ Now suppose $G_3$ can be represented in a bounding box of height $h$, width $w$, and perimeter $p=2h+2w$. Such a rectangle has maximum area when $h=w=p\/4$, so $\\area(R) \\leq p^2\/16$. But $\\area(R) \\geq 18$, so $p \\geq \\sqrt{18 \\cdot 16} > 16.$ Since $p$ must be even, $\\perimeter(G_3)=18$ by Figure \\ref{fig:width-examplesCC}.\n\nNext we consider $\\width(G_3)$. If $\\width(G_3)<6$, then $G_3$ can be represented in a $5\\times 5$ box $R$. In a $5 \\times 5$ box with 14 other vertices, $V$ has area at most 11, and hence $V$ must be $1 \\times 4$, $1 \\times 5$, $2 \\times 3$, $2 \\times 4$, $2 \\times 5$, or $3 \\times 3$. 
We rule out each possibility below.\n \n \n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=.9\\textwidth]{G1-cases.pdf}\n \\caption{Diagrams illustrating several of the cases in the proof of Theorem~\\ref{G1-proof}.}\n \\label{fig:width-proof}\n\\end{figure} \n \n $\\bullet \\;$ If $V$ is $1 \\times 4$, at least one unit of the perimeter of $V$ is on the boundary of $R$, and therefore $V$ cannot represent a degree-10 vertex. \n\n $\\bullet \\;$ If $V$ is $1 \\times 5$ or $2 \\times 5$, it must touch opposite sides of $R$; thus $V$ must see 5 rectangles on each of its other two sides in order to have degree 10. But then $v$ is a cut vertex, and $G_3$ has none. \n\n $\\bullet \\;$ If $V$ is $3\\times 3$, it cannot have an edge on the boundary of $R$ and represent a degree-10 vertex. In this case, $V$ occupies the middle of $R$ as shown on the left in Figure~\\ref{fig:width-proof}. The four vertices not adjacent to $v$ must be represented by $1\\times 1$ squares in the corners of $R$. Among these four, two (disjoint) pairs have a unique common neighbor in $G_3$. Locating the rectangles for these common neighbors in their respective $3 \\times 1$ blocks of $R$, we see there now remain only 6 locations for the other 8 vertices. \n\n $\\bullet \\;$ If $V$ is $2\\times 4$, it must (by symmetry) appear as in the middle of Figure~\\ref{fig:width-proof}. To have degree 10, $V$ must see a $1\\times 1$ square at $W$. But $v$ has no neighbor of degree less than $3$. \n\n $\\bullet \\;$ If $V$ is $2\\times 3$, it must (by symmetry) appear as on the right of Figure~\\ref{fig:width-proof}. To have degree 10, $V$ must see distinct rectangles on each unit of its perimeter. Numbering the locations in $R$ as in the right of Figure~\\ref{fig:width-proof}, if location 3 is empty, location 2 must contain a $1\\times 1$ square, but $v$ has no neighbor of degree less than $3$. 
If a $2 \\times 1$ rectangle $X$ covers location 3, the rectangles in locations 1 through 6 form a path $P_5$ in the neighborhood of $V$, but $G_3$ has no such subgraph. Thus, locations 3 and (by symmetry) 7 contain $1 \\times 1$ squares. But $G_3 -v$ has no vertices of degree 2 that are at distance 4.\n \\end{proof} \n \n\n\\begin{theorem}\nThe graph $G_4$ shown on the right in Figure~\\ref{fig:width-examplesCC} has perimeter 20 and width 5. \\label{G2-proof}\n\\label{width-theorem2}\n\\end{theorem}\n \n\\begin{proof} \nWe use the labeling in Figure~\\ref{fig:width-examplesCC}. We begin with perimeter. Note that since $G_4$ has 15 vertices, every RV-representation of $G_4$ has area at least 15. Then if $\\perimeter(G_4)<20$, it must have a representation that fits in a $3 \\times 6$ or $4 \\times 5$ bounding box. But a $3 \\times 6$ box has area 18, so the 3 vertices of degree more than four in $G_4$ must be represented with $1 \\times 2$ rectangles. But then all the vertices of $G_4$ must have degree $\\leq 6$, a contradiction.\n\nThus $G_4$ must be representable in a $4 \\times 5$ box. The two vertices of degree 5 in $G_4$ must each be represented with a rectangle of area at least 2. In a $4 \\times 5$ box $R$, $U$ then has area 3 or 4 and cannot lie in a corner. All vertices of $G_4$ have degree at least 3, so no corner of $R$ has a $1 \\times 1$ square. Therefore, each corner of $R$ is either empty or occupied by a rectangle of area at least 2. With 15 vertices, this exceeds the total area of 20 in $R$: each vertex contributes at least one, the four corners require at least one additional unit of area each, and $U$ needs two or three additional units of area not lying in a corner. Now $\\perimeter(G_4)=20$ by Figure~\\ref{fig:width-examplesCC}.\n\nWe next consider width. If $\\width(G_4)<5$, then $G_4$ can be represented in a $4 \\times 4$ box. But then it also has a $4 \\times 5$ representation, which we just proved impossible. 
By Figure~\\ref{fig:width-examplesCC}, we see that $\\width(G_4) =5$. \n\\end{proof}\n\n\n\\section{Minimizing Height, Width, Area, and Perimeter can require distinct representations of an RVG} \\label{area} \\label{SeparatingRepresentations}\n\nIn this section, we further explore our four parameters and we observe that, even for a single RVG, it is possible that the set of representations minimizing one of them may be disjoint from the set of representations minimizing another. \nExamples to illustrate these results are summarized in Tables~\\ref{table-separating2} and \\ref{table-separating2b}. For each pair of parameters, there exists a graph that requires distinct representations to separately minimize each parameter in that pair. Specifically, the star $K_{1,4}$ has representations minimizing height that are distinct from those minimizing width, area, and perimeter; the graph $G_5$ shown in Figure~\\ref{fig:width-examples2a} has representations minimizing width that are distinct from those minimizing the other parameters; and the graph $G_6$ shown in Figure~\\ref{fig:width-examples2b} has distinct representations minimizing area and perimeter. Since $K_{1,4}$ has fewer than 7 vertices, the claims regarding it are verified by our computer search \\cite{WebList}. The claims regarding $G_5$ and $G_6$, each with 7 vertices, are proved in Theorems~\\ref{G5-theorem} and \\ref{G6-theorem} below. 
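The phenomenon behind these examples is simply that the four minima are taken independently over the set of representations, so they need not be attained by the same bounding box. As a small illustration (our own sketch; each representation is collapsed to its $h \times w$ bounding box, here the $2 \times 5$ and $3 \times 3$ boxes of the $K_{1,4}$ representations $S_1$ and $S_2$ from Table~\ref{table-separating2}):

```python
# Sketch: which bounding box attains each minimum, for K_{1,4}'s two
# representations (S1 has a 2x5 bounding box, S2 a 3x3 one).

def params(h, w):
    """The four size parameters of an h x w bounding box (h <= w)."""
    return {"height": h, "width": w, "area": h * w, "perimeter": 2 * (h + w)}

reps = {"S1": params(2, 5), "S2": params(3, 3)}

def minimizers(reps, key):
    """Names of the representations attaining the minimum of one parameter."""
    best = min(v[key] for v in reps.values())
    return {name for name, v in reps.items() if v[key] == best}
```

Here the representations attaining the minimum height are disjoint from those attaining the minimum width, area, and perimeter, which is exactly the sense in which $K_{1,4}$ needs two different representations.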
\n\n\\begin{table}[ht!]\n\\begin{tabular}{c|c|cccc} \nGraph (representation) & Vertices & Height & Width & Area & Perimeter\\\\ \\hline \\hline \n $K_{1,4}$ ($S_1$) & 5 & {\\bf 2} & 5 & 10 & 14\\\\\n $K_{1,4}$ ($S_2$) & 5 & 3 & {\\bf 3} & {\\bf 9} & {\\bf 12}\\\\ \\hline\n $G_5$ ($S_1$) & 7 & {\\bf 2} & 5 & {\\bf 10} & {\\bf 14}\\\\\n $G_5$ ($S_2$) & 7 & 4 & {\\bf 4} & 16 & 16\\\\ \\hline\n $G_6$ ($S_1$) & 7 & {\\bf 2} & 7 & {\\bf 14} & 18\\\\\n $G_6$ ($S_2$) & 7 & 4 & {\\bf 4} & 16 & {\\bf 16}\\\\ \\hline\n\\end{tabular} \n\\smallskip\n\\caption{Graph representations used to minimize height, width, area, and perimeter.}\n\\label{table-separating2}\n\\end{table} \n\n\\begin{table}[ht!]\n\\begin{tabular}{c|ccc}\n & Perimeter & Height & Width \\\\ \\hline \\hline\n Area & $G_6$ & $K_{1,4}$ & $G_5$ \\\\ \n Perimeter & -- & $K_{1,4}$ & $G_5$ \\\\ \n Height & --& --& $G_5$ \\\\\n \\hline\n\\end{tabular} \n\\smallskip\n\\caption{Graphs requiring different representations to minimize area, perimeter, height, and width.}\n\\label{table-separating2b}\n\\end{table}\n\n\\begin{theorem}\nThe graph $G_5$ shown in Figure~\\ref{fig:width-examples2a} has height 2, width 4, area 10, and perimeter 14. Furthermore, minimizing width requires a different RV-representation from those minimizing height, area, or perimeter. \\label{G5-theorem}\n\\end{theorem}\n\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{K14-and-G5-new.pdf}\n \\caption{Graphs $K_{1,4}$ and $G_5$ are shown. For $K_{1,4}$, one representation ($S_1$) minimizes height and another ($S_2$) minimizes area, perimeter, and width. 
For $G_5$, one representation ($S_1$) minimizes height, area, and perimeter, and another ($S_2$) minimizes width.}\n \\label{fig:width-examples2a}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=.6\\textwidth]{G6-new.pdf}\n \\caption{Graph $G_6$ with distinct representations minimizing area and perimeter.}\n \\label{fig:width-examples2b}\n\\end{figure}\n \n\\begin{proof}\n The representation $S_1$ in Figure \\ref{fig:width-examples2a} has height 2, width 5, area 10, and perimeter 14. The representation $S_2$ has height 4, width 4, area 16, and perimeter 16. \n \nWe claim $G_5$ has no $3 \\times 4$ representation. Each of the 3-cycles in $G_5$ must have at least 2 rows in which its rectangles occupy at least 2 units of area. This implies that some row in a $3 \\times 4$ box contains 2 units of area from each 3-cycle, so the two 3-cycles must see each other. Since $G_5$ has no edges joining these 3-cycles, this is impossible.\n\nIt follows that $G_5$ has no $3 \\times 3$ or $2 \\times 4$ representation. Since $G_5$ is not a path, $\\height(G_5)\\geq 2$, and these facts together imply that $G_5$ has height 2, width 4, area 10, and perimeter 14. \nSince the only representations of $G_5$ with width 4 are $4 \\times 4$, any representation minimizing width does not minimize height, area, or perimeter.\n\\end{proof}\n \n \n\n\n\\begin{theorem} \nThe graph $G_6$ shown in Figure~\\ref{fig:width-examples2b} has perimeter 16 and area 14. Furthermore, minimizing perimeter requires a\ndifferent RV-representation than minimizing area. \\label{G6-theorem}\n\\end{theorem}\n\n\n\n\n\\begin{proof}\n The first representation in Figure \\ref{fig:width-examples2b} has perimeter 18 and area 14. The second has perimeter 16 and area 16. \n \n The 4 vertices of degree 1 each require at least 3 units of length on the perimeter with free lines of sight. The degree 2 vertex requires at least 2 units, and the 2 degree 3 vertices each require at least 1 unit. So $\\perimeter(G_6)=16$. 
\n \n If $\\area(G_6)<14,$ then $G_6$ can be represented in a rectangle of area 7, 8, ..., or 13. Since $G_6$ is not a path, it must have height $>1$, so prime areas are not possible. Since $\\perimeter(G_6)=16$, the only possible bounding box must be $2 \\times 6$. We claim this is impossible. Since $G_6$ has 7 vertices, a box with 6 columns would force some column to contain portions of 2 rectangles. Neither of these rectangles can represent a vertex of degree 1, or else the remaining graph must be a path, so they must have degrees 2 and 3, which implies that $D$ has an empty horizontal line of sight. But now, taken together, the 5 vertices of degree 1 or 2 require 5 empty horizontal lines of sight, while the bounding box only has height 2, with at most 4 such lines. \n \n The only rectangle with height $>1$ and area 14 is $2 \\times 7$, which has perimeter 18, so the perimeter and area must be achieved with distinct representations.\n\\end{proof}\n\n\n\\section{Graphs with small area, perimeter, height, and width} \\label{SmallParameters} \n\nIn this section, we address the question of which graphs on a given number of vertices minimize each parameter. \n\nFor any real number $x$, we use $\\lceil x \\rceil$ and $\\lfloor x \\rfloor$ to denote the integer ceiling and floor of $x$, respectively. We use $[[ x ]] = \\lfloor x + 1\/2 \\rfloor$ to denote $x$ rounded to the nearest integer. Recall that, for any positive integers $h$ and $w$, we let ${\\mathcal{F}}_{h,w}$ denote the (finite) set of graphs that have RV-representations in an $h \\times w$ bounding box.\n\n\\begin{theorem}\\label{lowerbds}\nLet $G$ be any graph with $n$ vertices and suppose $G$ has an RV-representation. 
Then the following hold.\n\\begin{enumerate}[label=\\rm{(\\roman*).}]\n \\item The height of $G$ satisfies\n $$\\height(G) \\geq 1.$$\n Equality holds if and only if $G \\cong P_n$, the path on $n$ vertices.\n \\item The area of $G$ satisfies\n $$\\area(G) \\geq n.$$ \n Equality holds if and only if $G \\cong P_h \\openbox P_w$, for some positive integers $h$ and $w$, where $n=h \\cdot w$. \n \\item The width of $G$ satisfies\n $$\\width(G) \\geq \\lceil \\sqrt{n} \\rceil.$$ \n Equality holds if and only if $G \\in {\\mathcal{F}}_{w,w}$ where $w = \\lceil \\sqrt{n} \\rceil$.\n \\item The perimeter of $G$ satisfies\n $$\\perimeter(G) \\geq 2 \\cdot [[\\sqrt{n}]] + 2 \\cdot \\lceil \\sqrt{n} \\rceil.$$ \n Equality holds if and only if $G \\in {\\mathcal{F}}_{h,w}$ for some positive integers $h$ and $w$, where $h+w = [[\\sqrt{n}]] + \\lceil \\sqrt{n} \\rceil$ and $hw \\geq n$.\n\\end{enumerate} \n\\end{theorem}\n\n\\begin{proof}\n To see (i) and (ii), note that every RV-representation of a graph must have height at least 1 and area at least $n$, since each vertex requires at least a $1\\times 1$ rectangle. For $G$ to have height exactly 1, every rectangle in such a representation must be on the same row in the representation, so $G$ is a path. For $G$ to have area exactly $n$, every rectangle in such a representation must be $1 \\times 1$, with no empty space in the bounding box $R$. Thus $G$ is the grid $P_h \\openbox P_w$, where $h$ is the height of $R$ and $w$ is the width of $R$. \n\n \n We show (iii) by way of contradiction. Suppose $G$ can be represented in an $a \\times b$ bounding box $R$, where $a \\leq b < \\lceil \\sqrt{n} \\rceil$. Then $ b \\leq \\lceil \\sqrt{n} \\rceil -1$, so $b < \\sqrt{n}$. But now the area of $R$ is $ab < n$, which is impossible since $G$ has $n$ vertices. It follows that $\\width(G) = \\lceil \\sqrt{n} \\rceil$ if and only if $G \\in {\\mathcal{F}}_{w,w}$ where $w = \\lceil \\sqrt{n} \\rceil$. 
\n \n We also show (iv) by way of contradiction. Suppose $G$ can be represented in an $a \\times b$ bounding box $R$, where $a + b < [[\\sqrt{n}]] + \\lceil \\sqrt{n} \\rceil$. By (ii), we know $ab \\geq n$. We consider cases for whether $\\sqrt{n}$ rounds up or down. In each case, we find that\n $$ (a+b)^2 < 4n \\leq 4ab,$$\n which implies that $(a-b)^2 <0,$ a contradiction. \n It follows that $\\perimeter(G) = 2 \\cdot [[\\sqrt{n}]] + 2 \\cdot \\lceil \\sqrt{n} \\rceil$ if and only if $G \\in {\\mathcal{F}}_{h,w}$ for some positive integers $h$ and $w$, where $h+w = [[\\sqrt{n}]] + \\lceil \\sqrt{n} \\rceil$ and $hw \\geq n$. \n\n\\end{proof}\n\n\\begin{remark}\nThe condition for equality in Theorem~\\ref{lowerbds}(iv) restricts the bounding box to be very nearly square. Specifically, we can say the following. \nLet $k$ denote the integer such that $k^2 < n \\leq (k+1)^2$.\n\nIf $k^2 < n \\leq k(k+1)$, then equality holds in Theorem~\\ref{lowerbds}(iv) if and only if $G \\in {\\mathcal{F}}_{k-t,k+1+t}$ for some integer $t$ where $$0 \\leq t \\leq \\sqrt{(k+{\\textstyle\\frac{1}{2}})^2-n}-{\\textstyle\\frac{1}{2}}.$$\n\nIf $k(k+1) < n \\leq (k+1)^2$, then equality holds in Theorem~\\ref{lowerbds}(iv) if and only if $G \\in {\\mathcal{F}}_{k+1-t,k+1+t}$ for some integer $t$ where $$0 \\leq t \\leq \\sqrt{(k+1)^2-n}.$$\n\\end{remark}\n\nFor example, any RVG with $n=70$ vertices must have perimeter at least 34, with equality only when $G$ has a $7 \\times 10$ or $8 \\times 9$ representation.\nSimilarly, any RVG with $n=120$ vertices must have perimeter at least 44, with equality only when $G$ has a $10 \\times 12$ or $11 \\times 11$ representation. \n\n\n\\section{Graphs with large area, perimeter, height, and width} \\label{CompleteArea} \\label{LargeParameters}\n\nIn this section we turn to the question of which graphs on a given number of vertices maximize our four parameters. 
\n\nRecall that the \\textit{\\textbf{empty graph}} $E_n$ is the graph with $n$ vertices and no edges. Among small graphs (at most 6 vertices), the empty graphs maximize each of the four parameters. When the number of vertices is at least 7, however, we will see that the empty graph no longer reigns supreme. Our proof is constructive, as we will provide specific graphs that we will prove are larger than $E_n$ in each parameter. But our results here leave open, perhaps for future work, the more difficult question of which graphs on $n$ vertices actually achieve the maximum values for the four parameters.\n\nWe begin with the small graphs.\n\n\\begin{theorem}\nFor $1 \\leq n \\leq 6$, among all graphs with $n$ vertices, the empty graph $E_n$ has largest height, width, area, and perimeter.\n\\end{theorem}\n\n\\begin{proof}\nBy Lemma~\\ref{NewDisjointLem}, the empty graph $E_n$ has height $n$, width $n$, area $n^2$, and perimeter $4n$. Figures~\\ref{fig:small representations} and \\ref{fig:small representations6} show RV-representations of all connected graphs with at most 6 vertices. These figures show that no connected graph with $2 \\leq n \\leq 6$ vertices exceeds any of these values. For a disconnected graph $G$, Lemma~\\ref{NewDisjointLem} implies that, as long as $G$ has at least one component with more than one vertex, we can combine the representations of the components as in Figure~\\ref{Glue} to obtain height less than $n$, area less than $n^2$, and perimeter less than $4n$. 
Because $K_2$ has width 2, the graph $K_2 + E_{n-2}$ has width $n$, but no graph has larger width than $E_n$ for $n \\leq 6$.\n\\end{proof}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.7\\textwidth]{small-representations.pdf}\n\\caption{RV-representations of all connected graphs with between 1 and 5 vertices.}\n\\label{fig:small representations}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=.9\\textwidth]{small-representations6-labeled-updated.pdf}\n\\caption{RV-representations of all connected graphs with 6 vertices, using the labeling from~\\cite{small-graphs-website}. These representations have smallest area, by computer search.}\n\\label{fig:small representations6}\n\\end{figure} \n\n\nOur next results focus heavily on the RV-representations of the complete graph $K_n$. Let $S$ be any set of rectangles representing $K_n$ and let $R = [0,u] \\times [0,v]$ denote the smallest bounding box containing them. \nWe define the set ${\\mathcal{T}}_S$ of {\\bf top rectangles of $S$} as follows:\n$$\n{\\mathcal{T}}_S = \\{ X \\in S : y_1^X \\geq y_1^Y \\mbox{ for all } Y \\in S\\}.$$\nWe now prove that for $K_n$, ${\\mathcal{T}}_S$ contains a single rectangle when $n\\geq 6$.\n\n\\medskip\n\n\\begin{figure}\n \\centering\n \\includegraphics[height=2in]{top-box-proof.pdf}\n \\caption{An example to illustrate the proof of Lemma~\\ref{lonely}}\n \\label{fig:lonely rectangle}\n\\end{figure}\n \n \n\\begin{lemma}\\label{lonely} Suppose $n\\geq 6$ and let $S$ be any rectangle visibility representation of $K_n$.\nThen $|{\\mathcal{T}}_S | =1.$\n\\end{lemma}\n \n\\begin{proof} By way of contradiction, suppose $|{\\mathcal{T}}_S | \\geq 2.$ Fix any distinct rectangles $A, B \\in {\\mathcal{T}}_S$, and note that ${\\mathcal{N}}(A)$ and ${\\mathcal{N}}(B)$ are empty, as illustrated by the example in Figure~\\ref{fig:lonely rectangle}. 
Without loss of generality, assume that $x_1^A \\leq x_1^B$ and $y_2^A \\geq y_2^B$ (i.e., the taller rectangle is on the left). We observe the following:\n\\medskip\n \n$\\bullet \\;$ \n\n${\\mathcal{W}}(A)$ is empty:\nIf $F \\in {\\mathcal{W}}(A)$, then $F$ cannot see $B.$\n\\medskip\n \n$\\bullet \\;$ \n$|{\\mathcal{E}}(B)| \\leq 1$:\nOtherwise, fix $I$, $J \\in {\\mathcal{E}}(B)$ with $x_1^I < x_1^J$, and note that $y_1^I, y_1^J\\le y_1^B$. Then $y_2^I > y_2^B$ since $I$ must see $A$. But now $J$ cannot see $B$, a contradiction.\n\\medskip \n \n$\\bullet \\;$ \n${\\mathcal{S}}(A)={\\mathcal{S}}(B)$:\nIf $C \\in {\\mathcal{S}}(A)$ then $y_2^C \\leq y_1^A = y_1^B$. But $C$ sees $B$, so $C \\in {\\mathcal{S}}(B)$. Thus ${\\mathcal{S}}(A) \\subseteq {\\mathcal{S}}(B)$ and, similarly, ${\\mathcal{S}}(B) \\subseteq {\\mathcal{S}}(A)$. \n\\medskip\n \n \n$\\bullet \\;$ \n$|{\\mathcal{S}}(A)| >1$: \nOtherwise $|{\\mathcal{E}}(A)| \\geq 4$, since $n \\geq 6$ and ${\\mathcal{N}}(A)$ and ${\\mathcal{W}}(A)$ are empty.\nSince $|{\\mathcal{E}}(B)|\\leq 1$, this implies $|{\\mathcal{E}}(A) \\cap {\\mathcal{W}}(B)| \\geq 2$. Fix distinct $G$, $H \\in {\\mathcal{E}}(A) \\cap {\\mathcal{W}}(B)$ with $x_1^G \\leq x_1^H$. Now $y_2^H > y_2^G$ since $H$ sees $A$. But then $G$ cannot see $B$, a contradiction.\n\\medskip\n \n$\\bullet \\;$ \n$|{\\mathcal{S}}(A)| \\leq 1$: \nOtherwise, fix distinct $C$,$D \\in {\\mathcal{S}}(A)$ with $y_1^C \\geq y_1^D$. \n Since ${\\mathcal{S}}(A)={\\mathcal{S}}(B),$ both $C$, $D$ see $A$ and $B$ from below. Now ${\\mathcal{E}}(A)\\cap {\\mathcal{W}}(B)$ is empty, since if $G \\in {\\mathcal{E}}(A) \\cap {\\mathcal{W}}(B)$, then $D$ cannot see $G$. 
Since ${\\mathcal{N}}(A)$ and ${\\mathcal{W}}(A)$ are empty and $n \\geq 6$, it follows that\n $|{\\mathcal{E}}(A)| \\geq 3$ and thus $|{\\mathcal{E}}(B)| \\geq 2$, a contradiction.\n\\medskip\n\nHaving shown that $|{\\mathcal{S}}(A)| > 1$ and $|{\\mathcal{S}}(A)| \\leq 1$, we have arrived at a contradiction, and we conclude that $|{\\mathcal{T}}_S | =1.$\n\\end{proof}\n\n\\begin{figure}\n \\includegraphics[height=1.9in]{extracting-operation.pdf}\n \n \\caption{Applying the extracting operation $S \\uparrow A$.}\n \\label{fig:my_label}\n\\end{figure}\n\nAnother operation that will be useful is an extraction operation that can move a certain rectangle to the top row of the bounding box. Specifically, for a rectangle visibility representation $S$ of $K_n$ with bounding box $R = [0,u] \\times [0,v]$ and a rectangle $A \\in \\mathcal{T}_S$, we define $S \\uparrow A$ to be the set of rectangles in $R$ given by\t$$ S \\uparrow A = \\{f(X) : X \\in S\\},$$\nwhere $f(A) = [0,u] \\times [v-1,v],$ and where, for every $X \\not= A$, $$ f(X) = [x_1^X,x_2^X] \\times [y_1^X, \\min\\{y_2^X,v-1\\}].$$ As illustrated in Figure~\\ref{fig:my_label}, the function $f$ maps the rectangle $A$ to the top row of $R$ and maps no other rectangle to that row. Furthermore, $f(X) \\subseteq X$ for all $X \\not= A$, so the rectangles of $ S \\uparrow A$ do not overlap.\n\n\\begin{lemma}\\label{extract} For any $n \\geq 1$ let $S$ be a rectangle visibility representation of $K_n$ with bounding box $R = [0,u] \\times [0,v]$. If ${\\mathcal{T}}_S =\\{A\\}$ then $S \\uparrow A$ also represents $K_n$ and has bounding box $R$.\n\\end{lemma}\n \n \\begin{proof} Since $f$ bijectively maps the $n$ rectangles of $S$ to the $n$ rectangles of $S\\uparrow A$, and since these representations share the same bounding box $R$, it remains only to show that $f$ preserves adjacency. 
\n \t\nSince ${\\mathcal{N}}(A)$ is empty and the graph is complete, $S$ is partitioned as $$ S = \\{A\\} \\cup {\\mathcal{E}}(A) \\cup {\\mathcal{W}}(A) \\cup {\\mathcal{S}}(A).$$ For any distinct $X$ and $Y$ in $S$, we claim $f(X)$ and $f(Y)$ see each other: \n\n\\smallskip\n\n\\textbf{Case 1. $X=A$ and $Y \\in {\\mathcal{E}}(A)$.} Since ${\\mathcal{T}}_S=\\{A\\}$, note ${\\mathcal{N}}(Y)$ is empty. So ${\\mathcal{N}}(f(Y))=\\{f(A)\\}$, and $f(Y)$ sees $f(X)$ vertically. \n\n\\smallskip\n\n\\textbf{Case 2. $X=A$ and $Y \\in {\\mathcal{S}}(A)$.} Note\n${\\mathcal{S}}(f(A)) \\supseteq {\\mathcal{S}}(A)$\nand so $f(Y)$ sees $f(X)$ vertically.\n\n\\smallskip\n\n\\textbf{Case 3. $X\\not=A$ and $Y \\in {\\mathcal{S}}(A)$.} Since ${\\mathcal{T}}_S=\\{A\\}$, any line of sight between $X$ and $Y$ must be contained in the region below the top row of $R$. The only change to this region in $S \\uparrow A$ is the removal of $A$, so $f(X)$ still sees $f(Y)$ along the original line of sight.\n\n\\smallskip\n\n\\textbf{Case 4. $\\{X,Y\\} \\subseteq {\\mathcal{E}}(A)$.} If $X$ sees $Y$ vertically, say with $X$ above $Y$, then $y_1^X > y_1^A$ so that $Y$ can see $A$. Since ${\\mathcal{T}}_S=\\{A\\}$, $X$ must see $Y$ horizontally. If the only line of visibility from $X$ to $Y$ were in the top row of $R$, then $y_2^X =y_2^Y = v$ and one of $X$,$Y$ could not see $A$. Therefore, $X$ must see $Y$ in a lower row, and so $f(X)$ still sees $f(Y)$ horizontally.\n\n\\smallskip\n\n\\textbf{Case 5. $X \\in {\\mathcal{E}}(A)$ and $Y \\in {\\mathcal{W}}(A)$.} Since ${\\mathcal{T}}_S=\\{A\\}$, $X$ must see $Y$ horizontally. If $X$ sees $Y$ in any row below the top row of $R$, then $f(X)$ still sees $f(Y)$ in that same row. If $X$ {\\em only} sees $Y$ in the top row of $R$, then $y_2^A < v$. Since ${\\mathcal{T}}_S=\\{A\\}$, both $X$,$Y$ see $A$ horizontally in the top row of $A$. 
Since the bottom row of $f(A)$ is above the top row of $A$, now $f(X)$ sees $f(Y)$ horizontally in what was the top row of $A$.\n\n\\smallskip\n\nBy symmetry, these cover all possible cases.\n\\end{proof}\n\n\\begin{figure}[ht!]\n \\centerline{\\includegraphics[height=1.9in]{K7-height.pdf} }\n \\caption{The seven rectangles in the proof of Theorem~\\ref{htk7}.}\n\\label{fig:K7proof}\n \\end{figure}\n \n\\begin{lemma}\\label{4out} Assume $n\\geq 6$ and $K_n$ has a rectangle visibility representation with bounding box $R = [0,u] \\times [0,v]$. Then $K_n$ has a rectangle visibility representation with bounding box $R$ in which the boundary of $R$ is covered by 4 rectangles of height or width 1.\n\\end{lemma}\n\n\\begin{proof}\n\tApply\n\tLemma \\ref{extract} successively in each of the four directions. Each time, a rectangle is brought to the corresponding boundary without changing the bounding box $R$.\n\\end{proof}\n\nRecall that a \\textit{\\textbf{bar visibility graph}} $G$ is a graph representable with a set of disjoint horizontal bars in the plane, with edges between bars that have vertical lines of sight between them. All bar visibility graphs are planar \\cite{Duchet83, Tamassia86, Wismath85}.\n\n\\begin{lemma}\nSuppose $S$ is an RV-representation for a graph $G$. If we partition the edges of $G$ into those with vertical lines of sight and horizontal lines of sight in $S$, then the subgraphs $G_V(S)$ and $G_H(S)$ of $G$ with these edges are bar visibility graphs, and hence planar graphs. \\label{bar-planar}\n\\end{lemma}\n\n\\begin{proof}\nReplace each rectangle $A$ in $S$ with a horizontal line segment at the top edge of $A$. This is a bar visibility representation of $G_V$. 
Rotating $S$ by 90 degrees and then replacing each rectangle by its new top edge yields a bar visibility representation of $G_H$.\n\\end{proof}\n\n\\begin{theorem} \\label{htk7} The complete graph $K_7$ has $\\height(K_7)=7$.\n\\end{theorem}\n\\begin{proof}\nSuppose $S$ is an RV-representation of $K_7$ with minimum height. By Lemma \\ref{4out}, we may assume the boundary of the bounding box $R$ is covered by 4 rectangles of height or width 1, as in Figure \\ref{fig:K7proof}. Label these $A$, $B$, $C$, and $D$ clockwise from the top.\n\nThe remaining 3 rectangles $E$, $F$, and $G$ in the interior induce a 3-clique. If the 3 edges of this clique all correspond to horizontal lines of sight, then, together with rectangles $B$ and $D$, the edges of a 5-clique are represented entirely by horizontal lines of sight. This is a 5-clique in $G_H$, which is impossible by Lemma~\\ref{bar-planar}. Similarly, the edges among $E$, $F$, and $G$ cannot all be vertical lines of sight. Rotating and renaming if necessary, assume $E$ sees $F$ and $G$ vertically and $F$ sees $G$ horizontally. Then $F$ and $G$ must be on the same side of $E$ and we may assume $F,G \\in {\\mathcal{S}}(E)$ and $G \\in {\\mathcal{E}}(F)$, as shown in Figure~\\ref{fig:K7proof}. \n\nThe following five horizontal lines of sight must occupy five distinct rows in $R$:\n $BD$, $BE$, $FG$, $BF$, and $DG$. To see why, notice first that edge $BD$ must have its own row to reach all the way across the representation. The row for $BE$ must be distinct from $FG$, $BF$, and $DG$ since $F,G \\in {\\mathcal{S}}(E)$. Rows for $FG$ and $BF$ are distinct since $B,G \\in {\\mathcal{E}}(F)$. Rows for $FG$ and $DG$ are distinct since $F, D \\in {\\mathcal{W}}(G)$. Rows for $BF$ and $DG$ are distinct since $G \\in {\\mathcal{E}}(F)$.\n \nSince $A$ and $C$ each take their own row by construction, it follows that $R$ has height at least 7. 
The representation of $K_7$ in Figure \\ref{fig:complete-graph-examples} proves equality.\n\\end{proof}\n\n\\begin{theorem}\\label{wk7} The complete graph $K_7$ has $\\width(K_7)=8$.\n\\end{theorem}\n\\begin{proof} If $\\width(K_7)=7$, then Lemma~\\ref{lowerbds}(iii) would guarantee an RV-representation $S$ in a $7 \\times 7$ box $R$.\nBy Lemma \\ref{4out} and Theorem \\ref{htk7}, we may assume the boundary of $R$ is covered by 4 rectangles of height or width 1. Label these $A,B,C,D$ clockwise from the top. As before, assume $F,G \\in {\\mathcal{S}}(E)$ and $G \\in {\\mathcal{E}}(F)$. \n\nThe following six vertical lines of sight must occupy six distinct columns in $R$:\n $AC,AF,EF,EG,AG,CE$. To see why, note that edge $AC$ must have its own column to reach all the way across the representation. Any column meeting $F$ cannot meet $G$, since $G \\in {\\mathcal{E}}(F)$. Columns for $CE$,$EF$,$EG$ are distinct since $C,F,G \\in {\\mathcal{S}}(E)$. Columns for $EG$,$AG$ are distinct since $A,E \\in {\\mathcal{N}}(G)$. Columns for $AF$,$EF$ are distinct since $A,E \\in {\\mathcal{N}}(F)$. Columns for $AF$,$CE$ are distinct since $E \\in {\\mathcal{N}}(F)$. Columns for $AG$,$CE$ are distinct since $E \\in {\\mathcal{N}}(G)$.\n \n Thus $R$ has width at least 8. By Figure \\ref{fig:complete-graph-examples}, $\\width(K_7)=8$. \n\\end{proof}\n\n\\begin{corollary}\n$\\area(K_7)=56$ and $\\perimeter(K_7)=30$.\n\\end{corollary}\n\n\\begin{proof}\nTheorems~\\ref{htk7} and \\ref{wk7} show that any representation of $K_7$ requires height at least 7 and width at least 8. Figure~\\ref{fig:complete-graph-examples} shows a representation of $K_7$ with height 7 and width 8 exactly. 
Therefore this representation also has smallest area and perimeter by Lemma~\\ref{lemma-same-height-width}.\n\\end{proof}\n\n\\begin{theorem} \\label{htk8}\nThe complete graph $K_8$ has $\\height(K_8)=10$.\n\\end{theorem} \n\n\\begin{proof} Suppose $S$ is an RV-representation of $K_8$ with minimum height. By Lemma \\ref{4out}, we may assume the boundary of the bounding box $R$ is covered by 4 rectangles of height or width 1, as in the proof of Theorem~\\ref{htk7}. Label these $A$, $B$, $C$, and $D$ clockwise from the top.\n\nRectangles $E$, $F$, $G$, and $H$ in the interior must induce a 4-clique. By Lemma~\\ref{bar-planar}, the edges in this clique that correspond to vertical lines of sight must form a triangle-free subgraph (and similarly for the horizontal edges). Up to isomorphism, there are only two decompositions of $K_4$ into a pair of triangle-free graphs, shown in Figure~\\ref{fig:K8proof}.\n \n \\begin{figure}[ht!]\n \\centerline{\\includegraphics[height=1in]{K8-coloring.pdf}}\n \\caption{The partition of the edges of $K_8$ in Theorem~\\ref{htk8}.}\n\\label{fig:K8proof} \n \\end{figure}\n\nFirst we claim that the graph on the right of Figure~\\ref{fig:K8proof} is impossible. To see why, let the solid edges denote vertical lines of sight. Since $FG$ is horizontal and $EF$ and $EG$ are vertical, $F$ and $G$ are on the same side of $E$. Assume $F,G \\in {\\mathcal{S}}(E)$. Since $EH$ is horizontal and $FH$ and $GH$ are vertical, $F, G \\in {\\mathcal{S}}(H)$. Renaming if necessary, $E \\in {\\mathcal{E}}(H)$ and $F \\in {\\mathcal{E}}(G)$. So $x_2^G \\leq x_1^F$. But $x_1^E < x_2^G$ and $x_1^F < x_2^H$. It follows that $x_1^E < x_2^H$, contradicting that $E \\in {\\mathcal{E}}(H)$. A similar argument eliminates the case when the solid edges denote horizontal lines of sight.\n\nNext we claim that the graph on the left requires $R$ to contain at least 10 rows. The graph is symmetric in solid and dotted edges, so assume the solid edges denote vertical lines of sight. 
Since $FG$ is horizontal and $EF$ and $EG$ are vertical, $F$ and $G$ are on the same side of $E$. Assume $F,G \\in {\\mathcal{S}}(E)$.\nSince $EH$ is horizontal and $EF$ and $FH$ are vertical, $E$ and $H$ are on the same side of $F$. So $H \\in {\\mathcal{N}}(F)$.\nSince $FH$ is vertical and $GH$ and $FG$ are horizontal, $F$ and $H$ are on the same side of $G$. Assume $F,H \\in {\\mathcal{W}}(G)$.\nSince $EG$ is vertical and $GH$ and $EH$ are horizontal, $E$ and $G$ are on the same side of $H$. So $H \\in {\\mathcal{W}}(E)$. \n\nThe following 8 horizontal lines of sight occupy distinct rows in $R$: \n $$BD, DE, EH, BH, GH, DG, FG, BF. $$\nEdge $BD$ must have its own row to reach all the way across the representation. Any row meeting $E$ cannot meet $F$ or $G$, since $F,G \\in {\\mathcal{S}}(E)$. Any row meeting $F$ cannot meet $H$, since $H \\in {\\mathcal{N}}(F)$. \nRows $DE$ and $EH$ are distinct since $D,H \\in {\\mathcal{W}}(E)$.\nRows $EH$ and $BH$ are distinct since $B,E \\in {\\mathcal{E}}(H)$.\nRows $BH$ and $GH$ are distinct since $B,G \\in {\\mathcal{E}}(H)$.\nRows $GH$ and $DG$ are distinct since $D,H \\in {\\mathcal{W}}(G)$.\nRows $DG$ and $FG$ are distinct since $F,D \\in {\\mathcal{W}}(G)$.\nRows $FG$ and $BF$ are distinct since $B,G \\in {\\mathcal{E}}(F)$.\nRows $DE$ and $BH$ are distinct since $H \\in {\\mathcal{W}}(E)$.\nRows $BH$ and $DG$ are distinct since $H \\in {\\mathcal{W}}(G)$.\nRows $DG$ and $BF$ are distinct since $F \\in {\\mathcal{W}}(G)$.\n \nSince $A$ and $C$ each take their own row by construction, $R$ has height $\\geq 10$. 
The representation of $K_8$ shown in Figure~\\ref{fig:complete-graph-examples} proves equality.\n\\end{proof}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.3\\textwidth]{complete7-rectangles1.pdf} \\hspace{1cm} \\includegraphics[width=.4\\textwidth]{complete8-rectangles1.pdf}\n\\caption{A representation of $K_7$ in a $7 \\times 8$ bounding box, and a representation of $K_8$ in a $10 \\times 10$ bounding box.}\n\\label{fig:complete-graph-examples}\n\\end{figure}\n\n\\begin{theorem} \\label{wk8} \nThe complete graph $K_8$ has $\\width(K_8)=10$.\n\\end{theorem} \n\n\\begin{proof} Since, by definition, $\\width(K_8) \\geq \\height(K_8) = 10$, the representation of $K_8$ shown in Figure \\ref{fig:complete-graph-examples} proves equality.\n\\end{proof}\n\n\\begin{corollary}\nThe complete graph $K_8$ has $\\area(K_8)=100$ and $\\perimeter(K_8)=40$.\n\\end{corollary}\n\n\\begin{proof}\nTheorems~\\ref{htk8} and \\ref{wk8} show that any representation of $K_8$ requires height at least 10 and width at least 10. Figure~\\ref{fig:complete-graph-examples} shows a representation of $K_8$ with height 10 and width 10 exactly. Therefore this representation also has smallest area and perimeter by Lemma~\\ref{lemma-same-height-width}.\n\\end{proof}\n\nNote that $K_8$ is the largest complete RVG \\cite{hutchinson1999representations}, so we cannot investigate the size of RV-representations of larger complete graphs. But using disjoint unions of complete graphs and Lemma~\\ref{NewDisjointLem}, we can construct graphs on $n$ vertices whose RV-representations are larger than those of the empty graph for all $n \\geq 8$, as follows.\n\n\\begin{corollary} \\label{disjoint complete graphs}\nFix any positive integer $n$ and write $n=8q+r$ for integers $q$ and $r$ with $0 \\leq r < 8$. Define the graph $G_n = q K_8 + E_r$, which has $q$ disjoint copies of $K_8$ and $r$ isolated vertices. 
Then $G_n$ has $n$ vertices, $\\height(G_n) = \\width(G_n)=n+2q$, $\\area(G_n)=(n+2q)^2$ and $\\perimeter(G_n)=4n+8q$.\n\\end{corollary}\n\n\\begin{proof}\nBy Lemma~\\ref{NewDisjointLem}, $\\height(G_n)= q \\cdot \\height(K_8)+ \\height(E_r)$. By Theorem~\\ref{htk8}, $\\height(K_8)=10$, and again by Lemma~\\ref{NewDisjointLem}, $\\height(E_r)=r$. So $\\height(G_n)=10q+r=n+2q$.\n\nSince the bounding boxes of the representations of $K_8$ and $E_r$ are square, the same argument holds for $\\width(G_n)$. Since the representations of $G_n$ with minimum height and width are the same, these representations also yield the minimum area and perimeter of $G_n$ by Lemma~\\ref{lemma-same-height-width}.\n\\end{proof}\n\n\\begin{corollary}\nAmong all rectangle visibility graphs with $n \\geq 7$ vertices, the empty graph $E_n$ does not have the largest width, area, or perimeter.\nAmong all rectangle visibility graphs with $n \\geq 8$ vertices, the empty graph $E_n$ does not have the largest height.\n\\end{corollary}\n\n\n\\section{Directions for Further Research} \\label{Conclusions}\n\nWe conclude with a number of open problems and questions that could further this line of research.\n\n\\begin{enumerate}[wide]\n\\item We have established that for $n=7,8$ the complete graph exceeds the empty graph in area, perimeter, height, and width. We have also shown that for $n>8$, the empty graph does not maximize any of these parameters. Accordingly, it is natural to ask, in general, which rectangle visibility graph(s) with $n$ vertices have largest height, perimeter, width, and area? Note that for $n>8$, $K_n$ is not a rectangle visibility graph \\cite{hutchinson1999representations}. Furthermore, when $r=7$ in Corollary~\\ref{disjoint complete graphs}, we can replace $E_7$ by $K_7$ to obtain a graph with width $n+2q+1$. 
Are there any other graphs that beat the graph in Corollary~\\ref{disjoint complete graphs}?\n\n\\item Tables~\\ref{table-separating1} and~\\ref{table-separating-new} show that, for each pair of parameters of area, perimeter, height, and width, there are pairs of graphs that share the same number of vertices, but that are ordered oppositely by that pair of parameters. However, these tables do not consider triples and quadruples of parameters. For example, is there a pair of graphs $G_1$ and $G_2$ for which $\\area(G_1)<\\area(G_2)$ but both $\\height(G_2)<\\height(G_1)$ and $\\width(G_2)<\\width(G_1)$?\n\n\\item Tables~\\ref{table-separating2} and~\\ref{table-separating2b} show that, for each pair of parameters of area, perimeter, height, and width, there is a graph with two representations that are ordered oppositely by that pair of parameters. However, these tables do not consider triples and quadruples of parameters. For example, is there a graph $G$ that requires three distinct RV-representations to minimize its area, perimeter, and height?\n\n\\item Say that an RV-representation $S$ is \\textit{compressible} if we can delete a row or column of $S$ and still have a representation of the same graph. For a given number of vertices $n$, which graphs have the largest incompressible representations, in terms of area, perimeter, height, or width? How large are these values?\n\n\\item We might consider additional measures of size in terms of the rectangles in an RV-representation, rather than the bounding box. For example, for an RV-representation $S$, say $\\recarea(S)$ is the area of the largest rectangle in $S$. Then $\\recarea(G)$ is the smallest value of $\\recarea(S)$ for any RV-representation of $G$. Which graphs $G$ with $n$ vertices have largest $\\recarea(G)$?\n\n\\item We conjecture that if $\\height(G)=2$ then $G$ is outerplanar. Is this true? 
Can we characterize other families of graphs of specific area, perimeter, height, or width?\n\n\\item We can consider other dimensions. For dimension 1, what is the minimum length of an integer bar visibility graph on $n$ vertices? For 3-dimensional box visibility graphs, there are many parameters measuring the size of an integer box visibility representation. Which graphs require the largest 3-dimensional representation, as measured by these parameters? Note that in \\cite{Fekete99} Fekete and Meijer proved that $K_{56}$ is a 3-dimensional box visibility graph.\n \n \\end{enumerate}\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{NGC3516: X-ray and UV Observations}\n\n NGC3516 contains the strongest UV absorption system known in a\nSeyfert 1 galaxy. This system contains at least two distinct\ncomponents: a broad (FWHM$\\sim$2000 km\/s) variable component and a\nnarrow ($\\sim$500 km\/s) non-variable component. Both the broad and\nthe narrow systems contain high as well as low ionization absorption\nlines. Recent observations have shown that the broad high ionization\nabsorption lines have {\\bf \\it disappeared} since $\\sim$1992 (Koratkar\net al. 1996 and references therein, Kriss et al. 1996)\n\n We analyze a high signal-to-noise (S\/N) ROSAT PSPC archival spectrum\nof NGC3516 obtained in 1992. The high S\/N allows the strong detection\nof both OVII and OVIII edges independently, in spite of the limited\nspectral resolution of the PSPC. A warm absorber fit to the data shows\nthat the absorber is highly ionized (U$=10^{+2.6}_{-2.1}$), and has a\nlarge column density N$_H \\sim 10^{22}$ cm$^{-2}$.\n\n\\section{The XUV Absorber}\n\n In several AGN, the X-ray and the UV absorbers were found to be one\nand the same (the `XUV' absorbers, Mathur et al. 1994, 1995). Is it\nalso true for NGC3516? The absorption systems in NGC3516 are clearly\ncomplex with multiple components (Kriss et al. ~1996). 
It is the {\\it\nhigh ionization, broad} absorption system that is most likely to be\nassociated with the X-ray warm absorber. Investigation of this\nquestion is tricky, however, because the broad absorption lines have\ndisappeared. Here we argue that the XUV absorption picture is {\\it\nconsistent} with the presence of a highly ionized X-ray absorber and the\ncurrent non-detection of CIV and NV broad absorption lines (see Fig. 1).\nThe X-ray absorber {\\it MUST} have a UV signature showing OVI\nabsorption lines (Fig. 1). Since there were no simultaneous\nROSAT \\& far-UV observations in 1992, this cannot be directly determined.\nHowever, we note that in the 1995 HUT observations, OVI doublets are\nunresolved, but consistent with being broad\n(FWHM=1076$\\pm$146 km\/s, Kriss et al. 1996).\n\\begin{figure}\n\\vspace*{-0.9in}\n\\centerline{\n\\epsscale{.75}\n\\plotone{mathurs.eps}\n}\n\\vspace*{-0.7in}\n\\caption{Ionization fractions f of OVI, OVII, OVIII, CIV and NV as a\nfunction of ionization parameter, U (with CLOUDY, Ferland 1991). The\nvertical lines define the range of U for which the ratio\nf$_{\\mbox{OVII}}$\/f$_{\\mbox{OVIII}}$ lies within the observed ROSAT\nrange. The arrows on the CIV and NV curves indicate the lower limits\nof f$_{\\mbox{CIV}}$${_>\\atop^{\\sim}}$ $3\\times 10^{-4}$ and f$_{\\mbox{NV}}$${_>\\atop^{\\sim}}$\n$3.1\\times 10^{-4}$ based on the published IUE data. The + mark\ncorresponds to the HUT data in Kriss et al. 1996.}\n\\end{figure}\n\n We argue that the XUV absorber in NGC3516 has evolved with time (Fig.\n1). {\\bf Pre-1992:} It showed broad, high ionization CIV and NV\nabsorption lines and an X-ray ionized absorber (U ${_<\\atop^{\\sim}}$ 7). As it\nevolves, outflowing and expanding, the density falls and the\nionization parameter increases. {\\bf 1992:} CIV and NV absorption lines\ndisappeared; X-ray absorber is still present with OVI lines in the UV\n(U$\\sim$10) (No UV data available to verify). 
{\\bf 1995:} CIV and NV\nabsorption lines remain absent; X-ray absorber is present. OVI lines\nare present, and detected with HUT (U$\\sim$13.5). {\\bf Post-1996:} We\npredict that the OVI absorption lines will disappear as the ionization\nparameter increases further (U${_>\\atop^{\\sim}}$ 20). The OVIII edge will continue\nto strengthen relative to the OVII edge. Eventually, the X-ray\nabsorber itself will disappear.\n\n\\acknowledgments\nSM gratefully acknowledges the financial support of NASA grant\nNAGW-4490 (LTSA) and BW, TA of NASA contract NAS8-39073 (ASC).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAn ordinary heavy baryon constitutes a pair of light quarks and a\nheavy quark. Since the charm and bottom quarks are very heavy in\ncomparison with the light quarks, it is plausible to take the limit of\nthe infinitely heavy mass of the heavy quark, i.e. $m_Q\\to \\infty$. In\nthis limit, the physics of heavy baryons becomes simple. The spin of\nthe heavy quark is conserved because of its infinitely heavy mass. This\nresults in the conservation of the total spin of the light quarks:\n$\\bm{J}_L \\equiv \\bm{J}-\\bm{J}_Q$, where $\\bm{J}_L$, $\\bm{J}_Q$, and\n$\\bm{J}$ denote the spin of the light-quark pair, that of the\nheavy quark, and the total spin of the heavy baryon. This is called \nthe heavy-quark spin symmetry, which allows $\\bm{J}_L$ to be a\ngood quantum number. Moreover, the physics is kept intact under the\nexchange of heavy-quark flavors. This is called the heavy-quark\nflavor symmetry~\\cite{Isgur:1989vq, Isgur:1991wq,\n Georgi:1990um, Manohar:2000dt}. Then a heavy quark becomes static,\nso that it can be considered as a static color source. Its importance\nlies only in making the heavy baryon a color singlet, and\nin giving higher-order contributions arising from $1\/m_Q$ \ncorrections. Consequently, the dynamics inside a heavy baryon is\nmainly governed by the light quarks. 
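\nThe conservation of $\\bm{J}_L$ noted above can be stated compactly: with $H$ the Hamiltonian governing the light degrees of freedom, rotational invariance gives $[\\bm{J},H]=0$, while the decoupling of the heavy-quark spin as $m_Q\\to \\infty$ gives $[\\bm{J}_Q,H]=0$, so that \n\\begin{align}\n[\\bm{J}_L,H] = [\\bm{J}-\\bm{J}_Q,H] = 0,\n\\end{align}\nwhich is why $\\bm{J}_L$ is a good quantum number in this limit.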
\n\nThe flavor structure of the heavy baryon is also determined by\nthe light quarks. Since there are two light quarks inside the heavy baryon, we\nhave two different flavor $SU_{\\mathrm{f}}(3)$ irreducible\nrepresentations, i.e. $\\bm{3}\\otimes \\bm{3}=\\overline{\\bm{3}} \\oplus\n\\bm{6}$. In the language of a quark model, the spatial part of the \nheavy-baryon ground state is symmetric due to the zero orbital angular\nmomentum, and the color part is totally antisymmetric. Since the\nflavor anti-triplet ($\\overline{\\bm{3}}$) is antisymmetric, the spin\nstate corresponding to $\\overline{\\bm{3}}$ should be antisymmetric. Thus, the\nbaryons belonging to the anti-triplet should have $J_L=0$. Similarly, the\nflavor-symmetric sextet ($\\bm{6}$) should be symmetric in spin space,\ni.e. $J_L=1$. This leads to the fact that the baryon antitriplet has\nspin $J=1\/2$, while the baryon sextet carries spin $J=1\/2$ or $J=3\/2$,\nwith the spin of the light-quark pair being coupled with the heavy\nquark spin $J_Q=1\/2$. So, we can classify 15 different lowest-lying\nheavy baryons as shown in Fig.~\\ref{fig:1} in the case of charmed\nbaryons. \n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.35]{fig1.pdf}\\qquad\n\\caption{The anti-triplet ($\\overline{\\bm{3}}$) and sextet ($\\bm{6}$)\n representations of the lowest-lying heavy baryons. The left panel\n draws the weight diagram for the anti-triplet with the total spin\n $\\frac{1}{2}$. The center panel corresponds to that for the sextet\n with the total spin $1\/2$ and the right panel depicts that\n for the sextet with the total spin $3\/2$.} \n\\label{fig:1}\n\\end{figure}\n\nRecently, there has been a series of new experimental data on the\nspectra of heavy baryons~\\cite{Aaltonen:2007ar, Chatrchyan:2012ni,\n Abazov:2008qm, Kuhr:2011up, Aaij:2012da, Aaij:2013qja, Aaij:2014esa,\n Aaij:2014lxa, Aaij:2014yka}, which renewed interest\nin the physics of the heavy baryons. 
The lowest-lying singly heavy baryons\nare now almost classified except for $\\Omega_b^\\ast$. Meanwhile, the\nLHCb Collaboration announced the first finding of two heavy\npentaquarks, $P_c(4380)$ and $P_c(4450)$~\\cite{Aaij:2015tga,\n Aaij:2016phn, Aaij:2016ymb, Aaij:2016iza}. Very recently, five\nexcited $\\Omega_c$ baryons were reported~\\cite{Aaij:2017nav}, four\nof which were confirmed by the Belle\nexperiment~\\cite{Yelton:2017qxg}. Interestingly, two of the excited\n$\\Omega_c$'s, i.e. $\\Omega_c(3050)$ and $\\Omega_c(3119)$, have very narrow\nwidths: $\\Gamma_{\\Omega_c(3050)}=(0.8\\pm0.2\\pm0.1)\\,\\mathrm{MeV}$ and\n$\\Gamma_{\\Omega_c(3119)}=(1.1\\pm0.8\\pm0.4)\\,\\mathrm{MeV}$. \n\nWhile there are many theoretical approaches for the\ndescription of heavy baryons, we will focus on a pion mean-field\napproach in the present short review. This mean-field \napproach was first proposed by E. Witten in his seminal\npapers~\\cite{Witten:1979kh,Witten:1983}, where he asserted that in the\nlimit of the large number of colors ($N_c$) the nucleon can be\nregarded as a bound state of $N_c$ \\textit{valence} quarks in a pion\nmean field with a hedgehog symmetry~\\cite{Pauli:1942kwa,\n Skyrme:1961vq}. Since a baryon mass \nis proportional to $N_c$ whereas the quantum fluctuations around the\nsaddle point of the pion field are suppressed by $1\/N_c$, the\nmean-field approach is a rather plausible method for explaining\nproperties of baryons. In this large $N_c$ limit, the $N_c$\n\\textit{valence} quarks that constitute the lowest-lying baryons \nproduce the pion mean fields by which they themselves are influenced\n\\emph{self-consistently}. This picture is very similar to a Hartree \napproximation in many-body theories. Witten also showed how to\nconstruct the mean-field theory for the baryon schematically in\ntwo-dimensional quantum chromodynamics (QCD). Though his idea was\ncriticized some time ago by S. 
Coleman~\\cite{Coleman} because of its\ntechnical difficulties, it is worthwhile to pursue it to see how far\nwe can describe the structure of the baryon in the pion mean-field\napproach. \n\nThe chiral quark-soliton model ($\\chi$QSM)~\\cite{Diakonov:1987ty,\n Christov:1995vm, Diakonov:1997sj} has been\nconstructed based on Witten's argument. The $\\chi$QSM starts from the\neffective chiral action (E$\\chi$A) that was derived from the instanton\nvacuum~\\cite{Diakonov:1983hh, Diakonov:1985eg}. The E$\\chi$A respects\nchiral symmetry and its spontaneous breakdown, in which the essential\nphysics of the lowest-lying hadrons consists. One can derive the\nclassical energy of the nucleon by computing the nucleon correlation\nfunction in Euclidean space, taking the Euclidean time to go to\ninfinity. Minimizing the classical energy self-consistently in the\nlarge $N_c$ limit with the $1\/N_c$ meson quantum fluctuations\nsuppressed, we obtain the classical mass and the self-consistent\nprofile function of the chiral soliton. While we ignore the $1\/N_c$\nquantum fluctuations around the saddle point of the soliton field, we\nneed to take into account the zero modes that do not change the\nsoliton energy. Since the soliton with hedgehog symmetry is not\ninvariant under translational, rotational and isotopic\ntransformations, we impose these symmetry properties on the\nsoliton and obtain a completely new solution with the same classical\nenergy. Because of the hedgehog symmetry, an \n$\\mathrm{SU(2)}$ soliton needs to be embedded into the isospin\nsubgroup of the flavor\n$\\mathrm{SU(3)}_{\\mathrm{f}}$~\\cite{Witten:1983}, which was \nalready utilized by various chiral soliton\nmodels~\\cite{Guadagnini:1983uv, Mazur:1984yf, Jain:1984gp}. This\ncollective quantization of the chiral soliton leads to the collective\nHamiltonian with effects of flavor $\\mathrm{SU(3)}_{\\mathrm{f}}$\nsymmetry breaking. 
The $\\chi$QSM has one salient feature: the \nright hypercharge is constrained to be $Y'=N_c\/3$, as imposed by the $N_c$ \nvalence quarks. This right hypercharge selects allowed representations\nof light baryons such as the baryon octet ($\\bm{8}$), the decuplet\n($\\bm{10}$), etc. The $\\chi$QSM was successfully applied to the\nproperties of the lowest-lying light baryons such as the mass\nsplittings~\\cite{Blotz:1992pw, Yang:2010fm}, the form\nfactors~\\cite{Kim:1995mr, Silva:2001st, Ledwig:2010tu},\nthe magnetic moments~\\cite{Kim:1995ha, Wakamatsu:1996xm, Kim:1997ip,\n Kim:2005gz}, hyperon \nsemileptonic decays~\\cite{Ledwig:2008ku, Yang:2015era}, \nparton distributions~\\cite{Diakonov:1996sr, Wakamatsu:2003wg},\ntransversities of the nucleon~\\cite{Kim:1995bq, Kim:1996vk,\n Schweitzer:2001sr}, generalized parton\ndistributions~\\cite{Goeke:2001tz}, and so on. \n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.25]{fig2.pdf}\n\\caption{Schematic picture of a heavy baryon. The $N_c-1$ valence\n quarks are filled in the lowest-lying valence level $K^P=0^+$ with\n the heavy quark stripped off. $K^P$ denotes the grand spin, which we\n will explain later, and $P$ is the corresponding parity of the\n level. The valence quarks and the sea quarks filled in the Dirac\n sea interact with each other. This interaction will\n bring about the pion mean field.} \n\\label{fig:2}\n\\end{figure}\nVery recently, Ref.~\\cite{Yang:2016qdz} extended a mean-field \napproach to describe the masses of singly heavy baryons, being \nmotivated by Ref.~\\cite{Diakonov:2010tf}. A singly heavy baryon\nconsists of a heavy quark and $N_c-1$ light valence quarks (see\nFig.~\\ref{fig:2}). In the limit of $m_Q\\to\\infty$, the heavy quark can\nbe considered as a static color source. Thus, the dynamics inside a\nheavy baryon is governed by the $N_c-1$ valence quarks. 
The presence\nof the $N_c-1$ valence quarks will produce the pion mean fields as in\nthe case of the light baryons. However, there is one very significant\ndifference: \nthe right hypercharge is now constrained to be $Y'=(N_c-1)\/3$, which\nallows the lowest-lying representations: the baryon anti-triplet\n($\\overline{\\bm{3}}$), the baryon sextet ($\\bm{6}$), and the baryon\nanti-decapentaplet ($\\overline{\\bm{15}}$). The model successfully\nreproduced the mass splittings of the baryon anti-triplet and sextet\nin both the charm and bottom sectors. In addition, the mass of the\n$\\Omega_b^\\ast$ baryon, which has not yet been found, was predicted. The model\nwas further extended by including the second-order perturbative\ncorrections of flavor $SU_{\\mathrm{f}}(3)$ symmetry\nbreaking~\\cite{Kim:2018xlc}. The magnetic\nmoments~\\cite{Yang:2018uoj} and electromagnetic form\nfactors~\\cite{Kim:2018nqf} of the singly heavy baryons were also\nstudied within the same framework. The $\\chi$QSM was also used to\ninterpret the five $\\Omega_c$ baryons newly found by the LHCb\nCollaboration~\\cite{Kim:2017jpx, Kim:2017khv}. Within the present\nframework, the two $\\Omega_c$'s with the smaller widths are\nclassified as members of the baryon $\\overline{\\bm{15}}$, whereas\nall other $\\Omega_c$'s belong to the excited baryon sextet. The widths\nwere quantitatively well reproduced without any free parameter. In the\npresent work, we will briefly review these recent investigations of the singly\nheavy baryons. \n\nWe sketch the present work as follows: In Section\nII, we review the general formalism of the $\\chi$QSM for singly heavy\nbaryons. In Section III, we examine the mass splittings of the heavy\nbaryons, emphasizing the effects of\n$\\mathrm{SU(3)}_{\\mathrm{f}}$ symmetry breaking. In Section IV, we discuss the\nrecent results of the magnetic moments and electromagnetic form\nfactors of the heavy baryons. 
In Section V, we briefly introduce a\ntheoretical interpretation of the excited $\\Omega_c$ baryons found by the\nLHCb, based on the present mean-field approach. The final Section is\ndevoted to the conclusions and outlook.\n\n\\section{The chiral quark-soliton model for singly heavy baryons} \nIn the present approach, a heavy baryon is considered as a bound state\nof the $N_c-1$ valence quarks in the pion mean field with a heavy\nquark stripped off from the valence level. Thus, the correlation\nfunction of the heavy baryon can be expressed in terms of the $N_c-1$\nvalence quarks\n\\begin{align}\n\\Pi_{B}(0, T) = \\langle J_B (0, T\/2) J_B^\\dagger\n (0,-T\/2) \\rangle_0 = \\frac{1}{Z}\\int \\mathcal{D} U\n \\mathcal{D}\\psi^\\dagger \n \\mathcal{D}\\psi J_B(0,T\/2) J_B^\\dagger (0,-T\/2)\n e^{\\int d^4 x\\,\\psi^\\dagger (i\\rlap{\/}{\\partial} + i\n MU^{\\gamma_5}+ i \\hat{m})\\psi} , \n\\label{eq:corr1}\n\\end{align}\nwhere $J_B$ denotes the light-quark current with the $N_c-1$ \nlight quarks for a heavy baryon $B$\n\\begin{align}\nJ_B(\\bm{x}, t) = \\frac1{(N_c-1)!}\n \\varepsilon^{\\beta_1\\cdots\\beta_{N_c-1}} \\Gamma_{J'J_3',TT_3}^{\\{f\\}}\n \\Psi_{\\beta_1f_1}(\\bm{x}, t) \\cdots \\Psi_{\\beta_{N_c-1}f_{N_c-1}}\n (\\bm{x}, t). \n\\end{align}\n$\\beta_i$ stand for color indices and\n$\\Gamma_{J'J_3',TT_3}^{\\{f_1\\cdots f_{N_c-1}\\}}$ represents a matrix \nwith both flavor and spin indices. $J'$ and $T$ are the spin and\nisospin of the heavy baryon, respectively. $J_3'$ and $T_3$ are their\nthird components, respectively. The notation $\\langle \\cdots \\rangle_0$ in\nEq.~(\\ref{eq:corr1}) is the vacuum expectation value, $M$\nthe dynamical quark mass, and the chiral field $U^{\\gamma_5}$ is\ndefined as \n\\begin{align}\nU^{\\gamma_5} = U\\frac{1+\\gamma_5}{2} + U^\\dagger \\frac{1-\\gamma_5}{2} \n\\end{align}\nwith \n\\begin{align}\nU = \\exp(i\\pi^a \\lambda^a). 
\n\\end{align}\nHere, $\\pi^a$ represents the pseudo-Goldstone boson field and\n$\\hat{m}$ denotes the flavor matrix of the current quarks, written as\n$\\hat{m}=\\mathrm{diag}(m_{\\mathrm{u}},\\,m_{\\mathrm{d}},\\,m_{\\mathrm{s}})$. We \nassume isospin symmetry, i.e. $m_{\\mathrm{u}}=m_{\\mathrm{d}}$. Since\nthe strange current quark mass is small enough, we will treat it\nperturbatively. \n\nIntegrating over the quark fields, we derive the correlation function\nas \n\\begin{align}\n\\Pi_{B}(0, T) =\n \\frac{1}{Z}\\Gamma_{J'J_3',TT_3}^{\\{f\\}}\\Gamma_{J'J_3',TT_3}^{\\{g\\}*} \\int\n \\mathcal{D} U \\prod_{i=1}^{N_c-1} \\left\\langle 0,T\/2\\left|\n \\frac1{D(U)} \\right|0,-T\/2\\right\\rangle\n e^{-S_{\\mathrm{eff}}(U)}, \n\\label{eq:corr2}\n\\end{align} \nwhere the single-particle Dirac operator $D(U)$ is defined as \n\\begin{align}\nD(U) = i\\gamma_4 \\partial_4 + i\\gamma_k \\partial_k + i MU^{\\gamma_5} +\n i \\hat{m} \n\\end{align}\nand $S_{\\mathrm{eff}}$ is the effective chiral action written\nas \n\\begin{align}\nS_{\\mathrm{eff}} = -N_c \\mathrm{Tr}\\log D(U). \n\\label{eq:effecXac}\n\\end{align}\nEquation~(\\ref{eq:corr2}) can be schematically depicted as in\nFig.~\\ref{fig:3}. It consists of two different terms: the first and\nsecond ones are respectively called the \\textit{valence-quark\n contribution} and the \\textit{sea-quark contribution} within the\n$\\chi$QSM. \n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.3]{fig3.pdf}\\qquad\n\\caption{Correlation function for a heavy baryon.} \n\\label{fig:3}\n\\end{figure}\nWhen the Euclidean time $T$ is taken from $-\\infty$ to $\\infty$, \nthe correlation function picks up the ground-state\nenergy~\\cite{Diakonov:1987ty, Christov:1995vm} \n\\begin{align}\n\\lim_{T\\to\\infty} \\Pi_B(T) \\sim \\exp[-\\left\\{(N_c-1) E_{\\mathrm{val}} +\n E_{\\mathrm{sea}}\\right\\} T], \n\\end{align}\nwhere $E_{\\mathrm{val}}$ and $E_{\\mathrm{sea}}$ denote the valence and\nsea-quark energies. 
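The large-$T$ projection onto the ground state is the standard spectral-decomposition argument: for any correlator $\Pi(T)=\sum_n c_n e^{-E_n T}$ with positive overlaps, the effective energy $-\partial_T \ln\Pi(T)$ flattens to the lowest energy. A toy numerical sketch (the energies and overlaps below are hypothetical, not model output):

```python
import math

# Toy spectral sum Pi(T) = sum_n c_n exp(-E_n T); hypothetical values
E = [1.0, 1.8, 2.5]   # "energies", ground state E[0] = 1.0
c = [0.5, 0.3, 0.2]   # "overlaps"

def pi(T):
    return sum(cn * math.exp(-En * T) for cn, En in zip(c, E))

def e_eff(T, dT=1e-4):
    # finite-difference estimate of -d/dT ln Pi(T)
    return -(math.log(pi(T + dT)) - math.log(pi(T))) / dT

# excited-state contamination dies off exponentially in T
assert abs(e_eff(40.0) - E[0]) < 1e-6
```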
Minimizing self-consistently the energies around\nthe saddle point of the chiral field $U$\n\\begin{align}\\label{eq:sol}\n\\left.\\frac{\\delta}{\\delta U}[ (N_c-1) E_{\\mathrm{val}} +\n E_{\\mathrm{sea}}]\\right|_{U_c} = 0, \n\\end{align}\nwe get the classical soliton mass \n\\begin{align}\\label{eq:solnc}\nM_{\\mathrm{sol}} = (N_c-1) E_{\\mathrm{val}}(U_c) + E_{\\mathrm{sea}}(U_c). \n\\end{align}\nNote that a singly heavy baryon has a heavy quark, so its classical\nmass is expressed as the sum of the classical soliton mass and the heavy-quark mass\n\\begin{align}\nM_{\\mathrm{cl}} = M_{\\mathrm{sol}} + m_Q. \n\\label{eq:classical_mass}\n\\end{align}\nWe want to mention that $m_Q$ is the \\textit{effective} heavy quark\nmass that is different from that of QCD and will be absorbed\nin the center mass of each representation. \n\nThe rotational excitations of the soliton with $N_c-1$ valence quarks\nwill produce the lowest-lying heavy baryons. To keep the hedgehog\nsymmetry, the SU(2) soliton $U_c(\\bm{r})$ will be embedded into\nSU(3)~\\cite{Witten:1983} \n\\begin{align}\nU(\\bm{r}) = \\begin{pmatrix}\nU_c (\\bm{r}) & 0 \\\\\n0 & 1\n\\end{pmatrix}.\n\\label{eq:embed}\n\\end{align}\nAs mentioned in the Introduction, we consider explicitly the rotational zero\nmodes. Assuming that the soliton $U(\\bm{r})$ in Eq.~(\\ref{eq:embed})\nrotates slowly, we apply the rotation matrix $A(t)$ in\n$\\mathrm{SU}_{\\mathrm{f}}(3)$ space \n\\begin{align}\nU(\\bm{r},\\,t) = A(t) U(\\bm{r}) A^\\dagger (t). \n\\end{align}\nThen, we can derive the collective Hamiltonian for heavy\nbaryons\n\\begin{align}\nH =& H_{\\mathrm{sym}} + H^{(1)}_{\\mathrm{sb}} + H^{(2)}_{\\mathrm{sb}},\n\\end{align}\nwhere $H_{\\mathrm{sym}}$ represents the flavor SU(3) symmetric part, \nand $H^{(1)}_{\\mathrm{sb}}$ and $H^{(2)}_{\\mathrm{sb}}$ denote the\nfirst- and second-order SU(3) symmetry-breaking parts, respectively. 
$H_{\\mathrm{sym}}$ is expressed\nas \n\\begin{align} \nH_{\\mathrm{sym}}=M_{\\mathrm{cl}}+\\frac{1}{2I_{1}}\\sum_{i=1}^{3}\n\\hat{J}^{2}_{i} +\\frac{1}{2I_{2}}\\sum_{a=4}^{7}\\hat{J}^{2}_{a}, \n\\end{align} \nwhere $I_{1}$ and $I_{2}$ are the moments of inertia of the\nsoliton and the operators $\\hat{J}_{i}$ and $\\hat{J}_{a}$ denote the SU(3) \ngenerators. We get the eigenvalue of the quadratic Casimir operator\n$\\sum_{i=1}^8 J_i^2$ in the $(p,\\,q)$ representation, given as \n\\begin{align}\nC_2(p,\\,q) = \\frac13 \\left[p^2 +q^2 + pq + 3(p+q)\\right], \n\\label{eq:Casimir}\n\\end{align}\nwhich leads to the eigenvalues of $H_{\\mathrm{sym}}$ \n\\begin{align} \nE_{\\mathrm{sym}}(p,q) = M_{\\mathrm{cl}}+ \\frac{1}{2I_{1}} J_L(J_L+1) \n+\\frac{1}{2I_{2}}\\left[C_2(p,\\,q) - J_L(J_L+1)\\right] \n-\\frac{3}{8I_{2}} {Y'}^{2}.\n\\label{eq:RotEn}\n\\end{align} \nThe right hypercharge $Y'$ is constrained by the $N_c-1$ valence\nquarks inside a singly heavy baryon, i.e. $Y'=(N_c-1)\/3$. The\ncorresponding collective wave function of a singly heavy baryon is\nthen obtained as \n\\begin{align} \n\\psi_B^{({\\mathcal{R}})}(JJ_3,J_L;A)=\n\\sum_{m_{3}=\\pm1\/2}C^{J J_3}_{J_{Q} m_{3} J_L \nJ_{L3}} \\chi_{m_{3}} \\sqrt{\\mathrm{dim}(p,\\,q)}\n(-1)^{-\\frac{ Y' }{2}+{J}_{L3}}\n D^{(\\mathcal{R})\\ast}_{(Y,T,T_3)(Y' ,J_L,-J_{L3})}(A), \n\\label{eq:waveftn}\n\\end{align} \nwhere \n\\begin{align}\n\\mathrm{dim}(p,\\,q) = (p+1)(q+1)\\left(1+\\frac{p+q}{2}\\right). \n\\end{align}\n $J$ and $J_3$ in Eq.~(\\ref{eq:waveftn}) are the spin angular\n momentum and its third component of the heavy baryon, respectively. \n$J_L$ and $J_{Q}$ represent the soliton spin and\nheavy-quark spin, respectively. ${J_{L3}}$ and ${m_{3}}$ are the\ncorresponding third components, respectively. Since the spin operator\nfor the heavy baryon is given as \n\\begin{align}\n\\label{eq:quantum}\n\\bm{J}=\\bm{J}_{Q}+\\bm{J}_L,\n\\end{align} \nthe relevant Clebsch-Gordan coefficients appear in\nEq.~(\\ref{eq:waveftn}). 
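As a quick numerical check of the dimension and Casimir formulas above (using the standard $(p,\,q)$ labels, with $(0,1)$ for $\overline{\bm{3}}$, $(2,0)$ for $\bm{6}$, and, as an assumption of this sketch, $(1,2)$ for the dimension-15 representation):

```python
# Sanity check of C2(p,q) and dim(p,q) quoted in the text
def casimir(p, q):
    # C2(p,q) = [p^2 + q^2 + pq + 3(p+q)] / 3
    return (p**2 + q**2 + p*q + 3*(p + q)) / 3

def dim(p, q):
    # dim(p,q) = (p+1)(q+1)(1 + (p+q)/2)
    return (p + 1) * (q + 1) * (1 + (p + q) / 2)

assert dim(0, 1) == 3 and dim(2, 0) == 6 and dim(1, 2) == 15
assert abs(casimir(0, 1) - 4/3) < 1e-12   # anti-triplet
assert abs(casimir(2, 0) - 10/3) < 1e-12  # sextet
```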
The SU(3) Wigner $D$ function in\nEq.~(\\ref{eq:waveftn}) is just the wave function for the quantized\nsoliton with the $N_c-1$ valence quarks, and $\\chi_{m_3}$\nis the Pauli spinor for the heavy quark. $\\mathcal{R}$ designates an\nSU(3) irreducible representation corresponding to $(p,\\,q)$. \nSince the soliton is coupled to the heavy quark, we finally obtain the\nthree lowest-lying representations illustrated in\nFig.~\\ref{fig:1}. In the limit of $m_Q\\to\\infty$, the two sextet\nrepresentations are degenerate. One needs to introduce a \nhyperfine spin-spin interaction to lift this degeneracy. As will be\ndiscussed soon, this hyperfine interaction will be determined by using\nthe experimental data on the masses of heavy baryons. \n\nIn the present zero-mode quantization scheme, we find the following\ntwo important selection rules: the allowed SU(3) representations\nmust contain states with $Y'=(N_c-1)\/3$, and the isospin $\\bm{T}$ of\nthe states with $Y'=(N_c-1)\/3$ is coupled with the soliton spin so that we\nhave a singlet $\\bm{K}=\\bm{T}+\\bm{J}_{L}=\\bm{0}$, where $\\bm{K}$ is called \nthe grand spin. The lowest-lying heavy baryons have the grand spin\n$K=0$, that is, we must always have $J_{L}=T$ with $Y'=(N_c-1)\/3$ for the\nground-state heavy baryons, as shown in Fig.~\\ref{fig:4}. 
\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.2]{fig4.pdf}\\qquad\n\\caption{The baryon anti-triplet has the $J_{L}=T=0$ state with $Y'=2\/3$ whereas\n the baryon sextet contains the $J_{L}=T=1$ state with $Y'=2\/3$.}\n\\label{fig:4}\n\\end{figure} \n\nAn observable of the heavy baryon can be expressed in general as a\nthree-point correlation function\n\\begin{align}\n\\langle B,\\,p'| J_\\mu(0) |B,\\,p\\rangle &= \\frac1{\\mathcal{Z}}\n \\lim_{T\\to\\infty} \\exp\\left(i p_4\\frac{T}{2} - i p_4'\n \\frac{T}{2}\\right) \\int d^3x d^3y \\exp(-i \\bm{p}'\\cdot \\bm{y} + i\n \\bm{p}\\cdot \\bm{x}) \\cr\n& \\hspace{-1cm} \\times \\int \\mathcal{D}U\\int \\mathcal{D} \\psi \\int\n \\mathcal{D} \\psi^\\dagger J_{B}(\\bm{y},\\,T\/2) \\psi^\\dagger(0)\n \\gamma_4 \\Gamma \\mathcal{O} \\psi(0) J_B^\\dagger (\\bm{x},\\,-T\/2)\n \\exp\\left[-\\int d^4 z \\psi^\\dagger iD(U) \\psi\\right],\n \\label{eq:3corrftn}\n\\end{align}\nwhere $\\Gamma$ and $\\mathcal{O}$ represent respectively generic Dirac\nspin and flavor matrices. Computing Eq.~(\\ref{eq:3corrftn}), one can \nstudy heavy baryonic observables such as form factors, magnetic moments,\naxial-vector constants, etc. For the detailed formalism, we refer to\nRefs.~\\cite{Kim:1995mr, Christov:1995vm}. \n\n\\section{Mass splittings of the singly heavy baryons}\nWe first discuss the mass splittings of the singly heavy baryons. 
In\norder to obtain the mass splittings, one should include the\nsymmetry-breaking part of the collective\nHamiltonian~\\cite{Blotz:1992pw, Christov:1995vm} \n\\begin{align} \nH^{(1)}_{\\mathrm{sb}} \n&= \\overline{\\alpha} D^{(8)}_{88}+ \\beta \\hat{Y}\n+ \\frac{\\gamma}{\\sqrt{3}}\\sum_{i=1}^{3}D^{(8)}_{8i}\n\\hat{J}_{i},\n\\label{sb}\n\\end{align}\nwhere\n\\begin{align} \n\\overline{\\alpha} = \\left (-\\frac{\\overline{\\Sigma}_{\\pi N}}{3m_0}+\\frac{\n K_{2}}{I_{2}}Y' \n\\right )m_{\\mathrm{s}},\n \\;\\;\\; \\beta=-\\frac{ K_{2}}{I_{2}}m_{\\mathrm{s}}, \n\\;\\;\\; \\gamma = 2\\left (\n \\frac{K_{1}}{I_{1}}-\\frac{K_{2}}{I_{2}} \\right ) m_{\\mathrm{s}}.\n\\label{eq:alphaetc}\n\\end{align}\nThe parameters $\\overline{\\alpha}$, $\\beta$, and\n$\\gamma$ are the essential ones in determining the masses of the\nlowest-lying singly heavy baryons; they are expressed in terms of the\nmoments of inertia $I_{1,\\,2}$ and the anomalous moments of inertia\n$K_{1,\\,2}$. However, we do not\nneed to fit them, since they are related to $\\alpha$,\n$\\beta$, and $\\gamma$ in the light-baryon sector. \nThe valence parts differ from those in the light-baryon\nsector only by the color factor $N_c-1$. So, we need to replace $N_c$ by $N_c-1$\nin the valence parts of all the relevant dynamical parameters\ndetermined in the light-baryon sector. The valence part of\n$\\overline{\\Sigma}_{\\pi N}$ is just the $\\pi N$ sigma term with a\ndifferent $N_c$ factor: $\\overline{\\Sigma}_{\\pi N} = (N_c-1)N_c^{-1}\n\\Sigma_{\\pi N}$, where $\\Sigma_{\\pi N} = (m_u+m_d)\\langle N|\n\\bar{u} u + \\bar{d} d|N\\rangle = (m_u+m_d) \\sigma$. On the other\nhand, the sea parts should be kept intact as in the light-baryon\nsector. \n\nThe dynamical parameters $\\alpha$, $\\beta$, and $\\gamma$ have been\nfixed by using the experimental data on the baryon octet masses and\na part of the baryon decuplet and anti-decuplet masses with \nisospin symmetry breaking effects~\\cite{Yang:2010id}. 
The\nvalues of $\\alpha$, $\\beta$, and $\\gamma$ have been obtained by\nthe $\\chi^2$ fit~\\cite{Yang:2010fm}\n\\begin{align}\n\\alpha = -255.03\\pm5.82 \\;{\\rm MeV},\\;\\;\\;\n\\beta = -140.04\\pm3.20 \\;{\\rm MeV},\\;\\;\\;\n\\gamma = -101.08\\pm2.33 \\;{\\rm MeV}.\n\\label{eq:abrNumber}\n\\end{align}\nWhile $\\beta$ and $\\gamma$ are not required to be changed in the\nheavy-baryon sector, $\\alpha$ should be modified by\n\\begin{align}\n \\label{eq:alpha}\n\\overline{\\alpha} = \\rho \\alpha,\n\\end{align}\nwhere $\\rho=(N_c-1)\/N_c$.\nHowever, there is a caveat when one uses the values of\nEq.~\\eqref{eq:abrNumber}. As mentioned above, only the valence parts\nshould be modified, while the scaling in Eq.~\\eqref{eq:alpha} changes\nthe sea part too. To compensate for this, we choose $\\rho \\approx 0.9$. If\none computes the parameters $\\overline{\\alpha}$, $\\beta$, and $\\gamma$\nin a self-consistent way, this problem does not arise~\\cite{Kim:2018xlc}.\n\nConsidering the first-order perturbative corrections of $m_{\\mathrm{s}}$, \none can express the masses of the singly heavy baryons in\nrepresentation $\\mathcal{R}$ as \n\\begin{align}\nM_{B,\\mathcal{R}}^Q = M_{\\mathcal{R}}^Q + M_{B,\\mathcal{R}}^{(1)} \n\\label{eq:FirstOrderMass}\n\\end{align}\nwith\n\\begin{align}\nM_{\\mathcal{R}}^Q = m_Q + E_{\\mathrm{sym}}(p,q). \n\\end{align}\nHere, $M_{\\mathcal{R}}^Q$ is the center mass of a heavy baryon in\nrepresentation $\\mathcal{R}$. $E_{\\mathrm{sym}}(p,q)$ is the\neigenvalue of the symmetric part of the collective Hamiltonian\ndefined in Eq.~\\eqref{eq:RotEn}. Note that the lower index $B$\ndesignates a certain baryon in a specific representation\n$\\mathcal{R}$. The upper index $Q$ denotes either the charm sector\n($Q=c$) or the bottom sector ($Q=b$). 
\nThen the center masses for the anti-triplet and sextet representations\nare obtained as \n\\begin{align}\nM_{\\overline{\\bm{3}}}^Q = M_{\\mathrm{cl}} + \\frac1{2I_2}, \\;\\;\\; \nM_{\\bm{6}}^Q = M_{\\overline{\\bm{3}}}^Q + \\frac1{I_1},\n\\end{align}\nwhere $M_{\\mathrm{cl}}$ was defined in Eq.~\\eqref{eq:classical_mass}.\nThe second term in Eq.~(\\ref{eq:FirstOrderMass}), which arises from\nthe linear-order $m_{\\mathrm{s}}$ corrections, is proportional to the\nhypercharge of the soliton with the light-quark pair \n\\begin{align}\nM^{(1)}_{B,{\\cal{R}}} = \\langle B, {\\cal{R}} | H_{\\mathrm{sb}}^{(1)} \n| B, {\\cal{R}} \\rangle = Y\\delta_{{\\cal{R}}},\n\\end{align}\nwhere\n \\begin{align} \n\\delta_{\\overline{\\bm{3}}}=\\frac{3}{8}\\overline{\\alpha}+\\beta, \\;\\;\\;\\;\n\\delta_{\\bm{6}}=\\frac{3}{20}\n \\overline{\\alpha}+\\beta-\\frac{3}{10}\\gamma. \n\\label{eq:deltas}\n\\end{align}\nFinally, we arrive at the expressions for the masses of the\nlowest-lying baryon anti-triplet and sextet as follows \n\\begin{align}\nM_{B,\\overline{\\bm{3}}}^Q = M_{\\overline{\\bm{3}}}^Q +\n Y \\delta_{\\overline{\\bm{3}}} ,\\;\\;\\;\nM_{B,\\bm{6}}^Q =M_{\\bm{6}}^Q + Y \\delta_{\\bm{6}}, \n \\label{eq:firstms}\n\\end{align}\nwith the linear-order $m_{\\mathrm{s}}$ corrections taken into account. \n\nSince the baryon sextets with spin 1\/2 and 3\/2 are degenerate, we need\nto remove the degeneracy by introducing the hyperfine spin-spin\ninteraction Hamiltonian~\\cite{Zeldovich}. Typically, the hyperfine\nHamiltonian is written as \n\\begin{align}\nH_{LQ} = \\frac{2}{3}\\frac{\\kappa}{m_{Q}\\,M_{\\mathrm{sol}}}\\bm{J}_L\n\\cdot \\bm{J}_{Q} \n= \\frac{2}{3}\\frac{\\varkappa}{m_{Q}}\n\\bm{J}_{L} \\cdot \\bm{J}_{Q}, \n\\label{eq:ssinter}\n\\end{align}\nwhere $\\kappa$ stands for the flavor-independent hyperfine coupling.\n$M_{\\mathrm{sol}}$ has been incorporated into an unknown \ncoefficient $\\varkappa$ that will be fixed by using the experimental\ndata. 
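The hyperfine Hamiltonian splits the sextet through the standard identity $\langle\bm{J}_L\cdot\bm{J}_Q\rangle=\frac{1}{2}\left[J(J+1)-J_L(J_L+1)-J_Q(J_Q+1)\right]$ with $J_L=1$ and $J_Q=1/2$; a short exact-arithmetic check (a sketch of the algebra, not model code):

```python
from fractions import Fraction as F

def jl_dot_jq(J, JL, JQ):
    # <J_L . J_Q> = [J(J+1) - J_L(J_L+1) - J_Q(J_Q+1)] / 2
    return (J * (J + 1) - JL * (JL + 1) - JQ * (JQ + 1)) / 2

def shift(J):
    # sextet mass shift in units of varkappa/m_Q, from H_LQ above
    return F(2, 3) * jl_dot_jq(J, F(1), F(1, 2))

assert shift(F(1, 2)) == F(-2, 3)            # spin-1/2 sextet
assert shift(F(3, 2)) == F(1, 3)             # spin-3/2 sextet
assert shift(F(3, 2)) - shift(F(1, 2)) == 1  # splitting = varkappa/m_Q
```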
The Hamiltonian $H_{LQ}$ does not affect the \n$\\overline{\\bm{3}}$ states with $J_L=0$. On the other hand, \nthe baryon sextet acquires an additional contribution from $H_{LQ}$, which\nbrings about the splitting between the different spin states \n\\begin{align}\nM_{B,{\\bm{6}}_{1\/2}}^{Q} = \nM_{B,\\bm{6}}^Q\\;-\\;\\frac{2}{3}\\frac{\\varkappa}{m_{Q}}, \n\\;\\;\\;\nM_{B,{\\bm{6}}_{3\/2}}^{Q} = \nM_{B,\\bm{6}}^Q\\;+\\;\\frac{1}{3} \\frac{\\varkappa}{m_{Q}}, \n\\label{eq:Csextet}\n\\end{align}\nwhich leads to the splitting \n\\begin{align}\nM_{B,{\\bm{6}}_{3\/2}}^{Q}\\;-\\; M_{B,{\\bm{6}}_{1\/2}}^{Q} =\n \\frac{\\varkappa}{m_{Q}} . \n\\label{eq:DCsextet}\n\\end{align}\nThe numerical values of $\\varkappa\/m_Q$ were determined by using the central\nvalues of the masses of the baryon sextet~\\cite{Yang:2016qdz} \n\\begin{align}\n\\frac{\\varkappa}{m_c} = (68.1\\pm 1.1)\\,\\mathrm{MeV},\\;\\;\\;\n\\frac{\\varkappa}{m_b} = (20.3\\pm 1.0)\\,\\mathrm{MeV}.\n \\label{eq:kappavalue}\n\\end{align}\nNote that $\\varkappa$ is flavor-independent. So, knowing the ratio\n$m_c\/m_b$, one can extract the value of $\\varkappa$ from\nEq.~\\eqref{eq:kappavalue}.\n\nWe now present the numerical results of the masses of the heavy\nbaryons~\\cite{Yang:2016qdz}. Using the values of $\\overline{\\alpha}$,\n$\\beta$, and $\\gamma$, we can immediately determine the values of\n$\\delta_{\\overline{\\bm{3}}}$ and $\\delta_{\\bm{6}}$ defined in\nEq.~\\eqref{eq:deltas} \n\\begin{align}\n\\delta_{\\overline{\\bm{3}}} = (-203.8\\pm 3.5)\\,\\mathrm{MeV},\\;\\;\\;\n\\delta_{\\bm{6}} = (-135.2\\pm 3.3)\\,\\mathrm{MeV}.\n\\end{align}\nIncluding the results of $\\varkappa\/m_c$ and $\\varkappa\/m_b$, we can\nobtain the numerical results of the heavy baryon masses. \nIn Table~\\ref{tab:1} and Table~\\ref{tab:2} the numerical results of\nthe charmed and bottom baryon masses are presented, respectively. They\nare in good agreement with the experimental data taken from \nRef.~\\cite{PDG2017}. 
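As an arithmetic cross-check of our own (not taken from Ref.~\cite{Yang:2016qdz}): inserting $\overline{\alpha}=\rho\alpha$ with the naive scaling $\rho=(N_c-1)/N_c=2/3$ into Eq.~\eqref{eq:deltas} reproduces the quoted central values of $\delta_{\overline{\bm{3}}}$ and $\delta_{\bm{6}}$:

```python
# Central values from Eq. (abrNumber), in MeV
alpha, beta, gamma = -255.03, -140.04, -101.08
abar = (2 / 3) * alpha  # naive scaling rho = (N_c - 1)/N_c with N_c = 3 (assumption)

delta_3bar = 3 / 8 * abar + beta                 # Eq. (deltas)
delta_6 = 3 / 20 * abar + beta - 3 / 10 * gamma

assert abs(delta_3bar - (-203.8)) < 0.05
assert abs(delta_6 - (-135.2)) < 0.05
```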
The mass of $\\Omega_b^*$ is still experimentally\nunknown. Thus, the prediction of its mass is given as \n\\begin{align}\nM_{\\Omega_b^*} = (6095.0\\pm 4.4)\\,\\mathrm{MeV}. \n\\end{align}\nThe uncertainties in Tables~\\ref{tab:1} and \\ref{tab:2} are due to\nthose in $\\overline{\\alpha}$, $\\beta$, $\\gamma$, and $\\varkappa\/m_Q$. \n\\begin{table}[htp]\n\\begin{centering}\n\\begin{tabular}{c|ccc}\n\\hline \\hline\n$\\mathbf{\\mathcal{R}}_{J}^{Q}$ \n& $B_{c}$ \n& Mass \n& Experiment\n\\tabularnewline[0.1em]\n\\hline \n\\multirow{2}{*}{$\\mathbf{\\overline{3}}_{1\/2}^{c}$} \n& $\\Lambda_{c}$ \n& $2272.5 \\pm 2.3$\n& $2286.5 \\pm 0.1$\n\\tabularnewline\n& $\\Xi_{c}$ \n& $2476.3 \\pm 1.2$\n& $2469.4 \\pm 0.3$\n\\tabularnewline\n\\hline \n\\multirow{3}{*}{$\\mathbf{6}_{1\/2}^{c}$} \n& $\\Sigma_{c}$ \n& $2445.3 \\pm 2.5$\n& $2453.5 \\pm 0.1$\n\\tabularnewline\n& $\\Xi_{c}^{\\prime}$ \n& $2580.5 \\pm 1.6$\n& $2576.8 \\pm 2.1$\n\\tabularnewline\n& $\\Omega_{c}$ \n& $2715.7 \\pm 4.5$\n& $2695.2 \\pm 1.7$\n\\tabularnewline\n\\hline \n\\multirow{3}{*}{$\\mathbf{6}_{3\/2}^{c}$} \n& $\\Sigma_{c}^{\\ast}$ \n& $2513.4 \\pm 2.3$\n& $2518.1 \\pm 0.8$\n\\tabularnewline\n& $\\Xi_{c}^{\\ast}$ \n& $2648.6 \\pm 1.3$\n& $2645.9 \\pm 0.4$\n\\tabularnewline\n& $\\Omega_{c}^{\\ast}$ \n& $2783.8 \\pm 4.5$\n& $2765.9 \\pm 2.0$\n\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\par\\end{centering}\n\\caption{The numerical results of the charmed baryon masses in \n comparison with the experimental data~\\cite{PDG2017}.}\n\\label{tab:1}\n\\end{table}\n\n\\begin{table}[htp]\n\\centering{}\\vspace{3em}%\n\\begin{tabular}{c|ccc}\n\\hline\\hline \n$\\mathbf{\\mathcal{R}}_{J}^{Q}$ \n& $B_{b}$ \n& Mass\n& Experiment\n\\\\\n\\hline \n\\multirow{2}{*}{$\\mathbf{\\overline{3}}_{1\/2}^{b}$} \n& \\textcolor{black}{$\\Lambda_{b}$} \n& $5599.3 \\pm 2.4 $\n& $5619.5 \\pm 0.2$ \n \\tabularnewline\n& \\textcolor{black}{$\\Xi_{b}$} \n& $5803.1 \\pm 1.2 $\n& $5793.1 \\pm 0.7 $ \n \\tabularnewline\n\\hline 
\n\\multirow{3}{*}{$\\mathbf{6}_{1\/2}^{b}$} \n& \\textcolor{black}{$\\Sigma_{b}$} \n& $5804.3 \\pm 2.4 $\n& $5813.4 \\pm 1.3$ \n\\tabularnewline\n& \\textcolor{black}{$\\Xi_{b}^{\\prime}$} \n& $5939.5 \\pm 1.5 $\n& $5935.0 \\pm 0.05$ \n\\tabularnewline\n& \\textcolor{black}{$\\Omega_{b}$} \n& $6074.7 \\pm 4.5 $\n& $6048.0 \\pm 1.9$ \n \\tabularnewline\n\\hline \n\\multirow{3}{*}{$\\mathbf{6}_{3\/2}^{b}$} \n& \\textcolor{black}{$\\Sigma_{b}^{\\ast}$} \n& $5824.6 \\pm 2.3 $\n& $5833.6 \\pm 1.3$ \n\\tabularnewline\n&\\textcolor{black}{$\\Xi_{b}^{\\ast}$} \n& $5959.8 \\pm 1.2 $\n& $5955.3 \\pm 0.1$ \n \\tabularnewline\n& \\textcolor{black}{$\\Omega_{b}^{\\ast}$} \n& $6095.0 \\pm 4.4 $\n& $-$\n\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\caption{The results of the masses of the bottom baryons in comparison\nwith the experimental data~\\cite{PDG2017}.}\n\\label{tab:2}\n\\end{table}\n\n\\section{Magnetic moments of heavy baryons} \nIn this Section, we briefly summarize a recent work on the magnetic\nmoments of the heavy baryons~\\cite{Yang:2018uoj}.\nStarting from Eq.~\\eqref{eq:3corrftn}, one can derive the general\nexpression of the collective operator for the magnetic moments \n\\begin{align}\n \\label{eq:MagMomOp}\n \\hat{\\mu} = \\hat{\\mu}^{(0)} + \\hat{\\mu}^{(1)}, \n\\end{align}\nwhere $\\hat{\\mu}^{(0)}$ denotes the leading\nand rotational $1\/N_c$ contributions, and $\\hat{\\mu}^{(1)}$ the linear $m_{\\mathrm{s}}$\ncorrections \n\\begin{align}\n\\hat{\\mu}^{(0)} & = \n\\;\\;w_{1}D_{\\mathcal{Q}3}^{(8)}\n\\;+\\;w_{2}d_{pq3}D_{\\mathcal{Q}p}^{(8)}\\cdot\\hat{J}_{q}\n\\;+\\;\\frac{w_{3}}{\\sqrt{3}}D_{\\mathcal{Q}8}^{(8)}\\hat{J}_{3},\\cr\n\\hat{\\mu}^{(1)} & = 
\n\\;\\;\\frac{w_{4}}{\\sqrt{3}}d_{pq3}D_{\\mathcal{Q}p}^{(8)}D_{8q}^{(8)}\n+w_{5}\\left(D_{\\mathcal{Q}3}^{(8)}D_{88}^{(8)}+D_{\\mathcal{Q}8}^{(8)}D_{83}^{(8)}\\right)\n\\;+\\;w_{6}\\left(D_{\\mathcal{Q}3}^{(8)}D_{88}^{(8)}-D_{\\mathcal{Q}8}^{(8)}D_{83}^{(8)}\\right).\n\\label{eq:magop}\n\\end{align}\n $d_{pq3}$ is the SU(3) symmetric tensor, whose indices run over \n$p,\\,q=4,\\cdots,\\,7$. $\\hat{J}_{3}$ and $\\hat{J}_{p}$ denote the third\nand the $p$th components of the spin operator acting on the soliton\nwith the light-quark pair. $D_{\\mathcal{Q}3}^{(8)}$ arises from the\nrotation of the electromagnetic current \n\\begin{align}\nD_{\\mathcal{Q}3}^{(8)} = \\frac12 \\left( D_{33}^{(8)} + \\frac1{\\sqrt{3}}\n D_{83}^{(8)}\\right).\n\\end{align}\nThe coefficients $w_i$ in Eq.~\\eqref{eq:magop} are independent of the\nbaryons involved and encode the interaction of the light quarks with\nthe electromagnetic current. Each term has a physical meaning: $w_1$\nrepresents the leading-order contribution, a part of the rotational\n$1\/N_c$ corrections, and linear $m_{\\mathrm{s}}$ corrections, whereas\n$w_2$ and $w_3$ describe the rest of the rotational $1\/N_c$\ncorrections. $w_1$ includes the $m_s$-dependent term, which is not\nexplicitly involved in the breaking of flavor SU(3) symmetry. So, we\nneed to treat $w_1$ as if it contained the SU(3) symmetric part.\n On the other hand, $w_4$, $w_5$, and $w_6$ are the\nSU(3) symmetry-breaking terms. There are yet other $m_{\\mathrm{s}}$\ncorrections, which arise from the collective wave functions. Though \n$w_i$ can be determined within a specific chiral solitonic model such\nas the $\\chi$QSM~\\cite{Kim:1995mr, Kim:1995ha}, we will use the values\nof $w_i$ that have already been fixed from the experimental data on\nthe magnetic moments of the baryon octet. 
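The combination $D_{\mathcal{Q}3}^{(8)}$ reflects the light-quark charge operator $\hat{Q}=\frac{1}{2}\left(\lambda_3+\frac{1}{\sqrt{3}}\lambda_8\right)$, which reproduces the quark charges $(2/3,\,-1/3,\,-1/3)$; a one-line numerical check (illustrative only):

```python
import numpy as np

# Diagonal Gell-Mann matrices and the charge operator Q = (lambda_3 + lambda_8/sqrt(3))/2
lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)
Q = 0.5 * (lam3 + lam8 / np.sqrt(3.0))

assert np.allclose(np.diag(Q), [2/3, -1/3, -1/3])
```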
\n\nThe baryon wave function given in Eq.~\\eqref{eq:waveftn} is not enough\nto compute the magnetic moments, because the collective wave functions\nshould be revised when the perturbation coming from the strange\ncurrent quark mass is considered. In this case, the baryon is no longer\nin a pure state but is mixed with higher representations. \nIn Ref.~\\cite{Kim:2018nqf}, the collective baryon wave functions for\nthe heavy baryons have already been derived. Those for the baryon\nanti-triplet ($J_{L}=0$) and the sextet ($J_{L}=1$) are expressed respectively\nas~\\cite{Kim:2018nqf} \n\\begin{align}\n&|B_{\\overline{\\bm3}_{0}}\\rangle = |\\overline{\\bm3}_{0},B\\rangle + \np^{B}_{\\overline{15}}|\\overline{\\bm{15}}_{0},B\\rangle, \\cr\n&|B_{\\bm6_{1}}\\rangle = |{\\bm6}_{1},B\\rangle +\n q^{B}_{\\overline{15}}|{\\overline{\\bm{15}}}_{1},B \n\\rangle + q^{B}_{\\overline{24}}|{\n{\\overline{\\bm{24}}}_{1}},B\\rangle,\n\\label{eq:mixedWF1}\n\\end{align}\nwith the mixing coefficients\n\\begin{eqnarray}\np_{\\overline{15}}^{B}\n\\;\\;=\\;\\;\np_{\\overline{15}}\\left[\\begin{array}{c}\n-\\sqrt{15}\/10\\\\\n-3\\sqrt{5}\/20\n\\end{array}\\right], \n& \nq_{\\overline{15}}^{B}\n\\;\\;=\\;\\;\nq_{\\overline{15}}\\left[\\begin{array}{c}\n\\sqrt{5}\/5\\\\\n\\sqrt{30}\/20\\\\\n0\n\\end{array}\\right], \n& \nq_{\\overline{24}}^{B}\n\\;\\;=\\;\\;\nq_{\\overline{24}}\\left[\\begin{array}{c}\n-\\sqrt{10}\/10\\\\\n-\\sqrt{15}\/10\\\\\n-\\sqrt{15}\/10\n\\end{array}\\right],\n\\label{eq:pqmix}\n\\end{eqnarray}\nrespectively, in the basis $\\left[\\Lambda_{Q},\\;\\Xi_{Q}\\right]$ for\nthe anti-triplet and $\\left[\\Sigma_{Q}\\left(\\Sigma_{Q}^{\\ast}\\right),\\;\n \\Xi_{Q}^{\\prime}\\left(\\Xi_{Q}^{\\ast}\\right),\\;\\Omega_{Q}\n \\left(\\Omega_{Q}^{\\ast}\\right)\\right]$ for the sextets. 
The\nparameters $p_{\\overline{15}}$, $q_{\\overline{15}}$, and\n$q_{\\overline{24}}$ are given by \n\\begin{eqnarray}\np_{\\overline{15}}\n\\;\\;=\\;\\;\n\\frac{3}{4\\sqrt{3}}\\overline{\\alpha}{I}_{2}, \n& \nq_{\\overline{15}}\n\\;\\;=\\;\\;\n{\\displaystyle -\\frac{1}{\\sqrt{2}}\n\\left(\\overline{\\alpha}+\\frac{2}{3}\\gamma\\right)\nI_{2}}, \n& \nq_{\\overline{24}}\\;\\;=\\;\\;\n\\frac{4}{5\\sqrt{10}}\n\\left(\\overline{\\alpha}-\\frac{1}{3}\\gamma\\right)\nI_{2}.\n\\label{eq:pqmix2}\n\\end{eqnarray}\n Combining\nEq.~\\eqref{eq:mixedWF1} with the heavy-quark spinor as in\nEq.~\\eqref{eq:waveftn}, one can construct the collective wave\nfunctions for the heavy baryon states~\\cite{Yang:2018uoj}. \n\nComputing the baryon matrix elements of $\\hat{\\mu}$ in\nEq.~\\eqref{eq:MagMomOp}, we get the magnetic moments of the \nheavy baryons \n\\begin{equation}\n\\mu_{B}=\\mu_{B}^{(0)}+\\mu_{B}^{(\\mathrm{op})}+\\mu_{B}^{(\\mathrm{wf})},\n\\label{eq:mu_B}\n\\end{equation}\nwhere $\\mu_{B}^{(0)}$ is the part of the magnetic moment in\nthe chiral limit and $\\mu_{B}^{(\\mathrm{op})}$ comes from\n$\\hat{\\mu}^{(1)}$ in Eq.~\\eqref{eq:MagMomOp}, which includes $w_4$,\n$w_5$, and $w_6$. $\\mu_{B}^{(\\mathrm{wf})}$ is derived from the\ninterference between the $\\mathcal{O}(m_{\\mathrm{s}})$ and\n$\\mathcal{O}(1)$ parts of the collective wave functions in\nEq.~\\eqref{eq:mixedWF1}. \n\nSince the soliton with the light-quark pair for the baryon\nanti-triplet has spin $J_L=0$, the magnetic moments of the baryon\nanti-triplet vanish. In this case, the $1\/m_Q$ contributions are the\nleading ones. However, we will not include them, since we need to go\nbeyond the mean-field approximation to consider the $1\/m_Q$\ncontributions within the present framework. \n\nSince $w_1$ contains both the leading-order contributions and the \n$1\/N_c$ rotational corrections, we have to decompose them. Following\nthe argument of Ref.~\\cite{Kim:2017khv}, we can separately consider\neach contribution. 
The coefficients $w_1$, $w_2$, and $w_3$ are\nexpressed in terms of the model dynamical parameters \n\\begin{align}\n \\label{eq:w123}\nw_{1} = \nM_{0}\\;-\\;\\frac{M_{1}^{\\left(-\\right)}}{I_{1}^{\\left(+\\right)}},\\;\\;\\;\nw_{2} = -2\\frac{M_{2}^{\\left(-\\right)}}{I_{2}^{\\left(+\\right)}},\\;\\;\\;\nw_{3} = -2\\frac{M_{1}^{\\left(+\\right)}}{I_{1}^{\\left(+\\right)}},\n\\end{align}\nwhere the explicit forms of $M_0$, $M_1^{(\\pm)}$, $M_2^{(-)}$ are\ngiven in Refs.~\\cite{Kim:1995mr, Praszalowicz:1998j}. $I_1^{(+)}$ and \n$I_2^{(+)}$ are the moments of inertia in the notation of\nRef.~\\cite{Praszalowicz:1998j}. In the limit of the small soliton\nsize, the parameters in Eq.~\\eqref{eq:w123} can be simplified as \n\\begin{align}\nM_{0}\\;\\rightarrow\\;-2N_{c}K,\n\\;\\;\\;\n\\frac{M_{1}^{\\left(-\\right)}}{I_{1}^{\\left(+\\right)}}\n\\;\\rightarrow\\;\\frac{4}{3}K, \\;\\;\\;\n \\frac{M_{1}^{\\left(+\\right)}}{I_{1}^{\\left(+\\right)}}\n \\;\\rightarrow\\;-\\frac{2}{3}K,\\;\\;\\;\n\\frac{M_{2}^{\\left(-\\right)}}{I_{2}^{\\left(+\\right)}}\n\\;\\rightarrow\\;-\\frac{4}{3}K.\n\\label{eq:sss} \n\\end{align} \nThese results yield the expressions of the magnetic moments in the\nnonrelativistic (NR) quark model. For example, the ratio of the proton\nand neutron magnetic moments is correctly obtained as\n$\\mu_p\/\\mu_n=-3\/2$. In the NR limit, we also derive the relation\n$M_{1}^{\\left(-\\right)}\\;=\\;-2M_{1}^{\\left(+\\right)}$. Furthermore, we\nhave to assume that this relation can also be applied to the \ncase of the realistic soliton size. Then, we can write the \nleading-order contribution $M_0$ in terms of $w_1$ and $w_3$\n\\begin{align}\n \\label{eq:4}\nM_0= w_1 + w_3. \n\\end{align}\nSince a heavy baryon contains $N_c-1$ valence quarks, the original\n$M_0$ is modified by introducing the factor $(N_c-1)\/N_c$. As mentioned\npreviously, only the valence part of $M_0$ should be changed by this\nscaling factor. 
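With the small-soliton values quoted in Eq.~\eqref{eq:sss}, Eq.~\eqref{eq:4} can be verified directly (a sketch with an arbitrary overall constant $K$):

```python
from fractions import Fraction as F

K = F(1)                    # arbitrary overall constant
M0 = -6 * K                 # M0 -> -2 N_c K with N_c = 3
M1m_over_I1 = F(4, 3) * K   # M1(-)/I1(+) ->  4/3 K
M1p_over_I1 = F(-2, 3) * K  # M1(+)/I1(+) -> -2/3 K

w1 = M0 - M1m_over_I1       # Eq. (w123)
w3 = -2 * M1p_over_I1

assert M1m_over_I1 == -2 * M1p_over_I1  # NR relation M1(-) = -2 M1(+)
assert w1 + w3 == M0                    # Eq. (4)
```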
Since, however, we have determined the values of\n$w_i$ using the experimental data, we cannot separately fix the\nvalence and sea parts. Thus, we introduce an additional scaling factor\n$\\sigma$ to express a new coefficient $\\tilde{w}_1$ \n\\begin{align}\n\\label{eq:w1tilde}\n\\tilde{w}_1 = \\left[\\frac{N_c-1}{N_c} (w_1+w_3) - w_3\\right] \\sigma. \n\\end{align}\n$\\sigma$ also compensates for possible deviations from the NR relation\n$M_{1}^{\\left(-\\right)}\\;=\\;-2M_{1}^{\\left(+\\right)}$ assumed to be \nvalid in the realistic soliton case. The value of $\\sigma$ is taken to\nbe $\\sigma\\sim0.85$. \n\nConsidering the scaling parameters, we are able to determine the\nfollowing values for $w_i$\n\\begin{align}\n\\tilde{w}_{1} \n& = \n-10.08\\pm0.24,\n\\cr\nw_{2} & = 4.15\\pm0.93,\n\\cr\nw_{3} & = 8.54\\pm0.86,\n\\cr\n\\overline{w}_{4} & = -2.53\\pm0.14,\n\\cr\n\\overline{w}_{5} & = -3.29\\pm0.57,\n\\cr\n\\overline{w}_{6} & = -1.34\\pm0.56.\n\\label{eq:numW}\n\\end{align}\n\nBefore we carry out the calculation of the magnetic moments, we examine\nthe general relations between them. First, we find the generalized\nColeman and Glashow relations~\\cite{Coleman:1961jn}, which arise from \nisospin invariance\n\\begin{align}\n\\mu(\\Sigma_{c}^{++})\\;-\\;\\mu(\\Sigma_{c}^{+})\n& = \n\\mu(\\Sigma_{c}^{+})\\;-\\;\\mu(\\Sigma_{c}^{0}),\n\\cr\n\\mu(\\Sigma_{c}^{0})\\;-\\;\\mu(\\Xi_{c}^{\\prime0}) \n& = \n\\mu(\\Xi_{c}^{\\prime0})\\;-\\;\\mu(\\Omega_{c}^{0}),\n\\cr\n2 [\\mu(\\Sigma_{c}^{+})\\,-\\,\\mu(\\Xi_{c}^{\\prime0})]\n& = \n\\mu(\\Sigma_{c}^{++})\\,-\\,\\mu(\\Omega_{c}^{0}).\n\\label{eq:coleman} \n\\end{align}\nSimilar relations were also found in Ref.~\\cite{Banuls:1999mu}.\nHowever, there is one very important difference. While the \nColeman-Glashow relations are known to be valid in the chiral \nlimit, the relations in Eq.~\\eqref{eq:coleman} are justified \neven when the effects of SU(3) flavor symmetry breaking are\nconsidered. 
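As a numerical illustration, the total magnetic moments listed below in Table~\ref{tab:3} (in units of $\mu_N$) satisfy the relations in Eq.~\eqref{eq:coleman} within the rounding of the tabulated values, and the chiral-limit column sums to zero:

```python
# Total moments from Table 3 (in mu_N); 0.05 tolerance for table rounding
mu = {"Sc++": 2.15, "Sc+": 0.46, "Sc0": -1.24,
      "Xc+": 0.60, "Xc0": -1.05, "Oc0": -0.85}
tol = 0.05

assert abs((mu["Sc++"] - mu["Sc+"]) - (mu["Sc+"] - mu["Sc0"])) < tol
assert abs((mu["Sc0"] - mu["Xc0"]) - (mu["Xc0"] - mu["Oc0"])) < tol
assert abs(2 * (mu["Sc+"] - mu["Xc0"]) - (mu["Sc++"] - mu["Oc0"])) < tol

# chiral-limit column of Table 3 sums to zero
assert abs(2.00 + 0.50 - 1.00 + 0.50 - 1.00 - 1.00) < 1e-12
```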
We also find the relations that follow from\nthe $U$-spin symmetry\n\\begin{align}\n\\mu(\\Sigma_{c}^{0})\\;=\\;\n\\mu(\\Xi_{c}^{\\prime0})\\;=\\;\n\\mu(\\Omega_{c}^{0})\\;=\\;\n-2\\mu(\\Sigma_{c}^{+})\\;=\\;\n-2\\mu(\\Xi_{c}^{\\prime+})\\;=\\;\n-\\frac{1}{2}\\mu(\\Sigma_{c}^{++}),\n\\label{eq:Usym}\n\\end{align}\nwhich are only valid in the SU(3) symmetric case. \nWe also derive the sum rule \n\\begin{align}\n\\sum_{B_c\\in\\mathrm{sextet}}\\mu(B_c)\\;=\\;0\n\\label{eq:sum}\n\\end{align}\nin the SU(3) symmetric case. \n\n\\begin{table}[htp]\n\\caption{Numerical results of the magnetic moments for the charmed\n baryon sextet with $J=1\/2$ in units of the nuclear magneton $\\mu_N$.} \n\\renewcommand{\\arraystretch}{1.3}\n\\begin{tabular}{ccc}\n\\hline \\hline\n$\\mu\\left[6_{1}^{1\/2},\\;B_{c}\\right]$ \n& $\\mu^{(0)}$ \n& $\\mu^{(\\text{total})}$ \n\\tabularnewline \\hline\n$\\Sigma_{c}^{\\text{++}}$ \n& $2.00\\pm0.09$\n& $2.15\\pm0.10$ \n\\tabularnewline\n$\\Sigma_{c}^{\\text{+}}$ \n& $0.50\\pm0.02$ \n& $0.46\\pm0.03$ \n\\tabularnewline\n$\\Sigma_{c}^{0}$ \n& $-1.00\\pm0.05$ \n& $-1.24\\pm0.05$ \n\\tabularnewline\n\\hline \n$\\Xi_{c}^{\\prime+}$ \n& $0.50\\pm0.02$ \n& $0.60\\pm0.02$ \n\\tabularnewline\n$\\Xi_{c}^{\\prime0}$ \n& $-1.00\\pm0.05$ \n& $-1.05\\pm0.04$ \n\\tabularnewline\n\\hline \n$\\Omega_{c}^{0}$ \n& $-1.00\\pm0.05$ \n& $-0.85\\pm0.05$ \n\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\label{tab:3}\n\\end{table}\n\\begin{table}[htp]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Numerical results of the magnetic moments for the charmed baryon\n sextet with $J=3\/2$ in units of the nuclear magneton $\\mu_N$.} \n\\begin{tabular}{ccc}\n\\hline \\hline \n$\\mu\\left[6_{1}^{3\/2},\\;B_{c}\\right]$ \n& $\\mu^{(0)}$ \n& $\\mu^{(\\text{total})}$ \n\\tabularnewline\n\\hline \n$\\Sigma_{c}^{\\ast\\text{++}}$ \n& $3.00\\pm0.14$ \n& $3.22\\pm0.15$ \n\\tabularnewline\n$\\Sigma_{c}^{\\ast\\text{+}}$ \n& $0.75\\pm0.04$ \n& $0.68\\pm0.04$ 
\n\\tabularnewline\n$\\Sigma_{c}^{\\ast0}$ \n& $-1.50\\pm0.07$ \n& $-1.86\\pm0.07$ \n\\tabularnewline\n\\hline \n$\\Xi_{c}^{\\ast+}$ \n& $0.75\\pm0.04$ \n& $0.90\\pm0.04$ \n\\tabularnewline\n$\\Xi_{c}^{\\ast0}$ \n& $-1.50\\pm0.07$ \n& $-1.57\\pm0.06$ \n\\tabularnewline\n\\hline \n$\\Omega_{c}^{\\ast0}$ \n& $-1.50\\pm0.07$ \n& $-1.28\\pm0.08$ \n\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\label{tab:4}\n\\end{table}\nIn Tables~\\ref{tab:3} and \\ref{tab:4}, we list the numerical results\nfor the charmed baryon sextets with spin 1\/2 and 3\/2, respectively. We\nobtain exactly the same results for the bottom baryons because of the\nheavy-quark symmetry in the $m_Q\\to\\infty$ limit. A detailed\ndiscussion can be found in Ref.~\\cite{Yang:2018uoj}, where the present\nresults are compared with those from many other models. \n\\section{Excited $\\Omega_c$ baryons}\nThe present mean-field approach was applied to the classification of\nthe excited $\\Omega_c^0$'s that were recently reported by the LHCb\nCollaboration~\\cite{Aaij:2017nav}. The masses and decay widths of the\n$\\Omega_c^0$'s, which were reported by the LHCb Collaboration, are\nlisted in Table~\\ref{tab:5}. The Belle Collaboration has confirmed\nfour of them~\\cite{Yelton:2017qxg} (see Table~\\ref{tab:6}). The Belle\ndata unambiguously confirmed the existence of the $\\Omega_c(3066)$ and\n$\\Omega_c(3090)$, while $\\Omega_c(3000)$ and $\\Omega_c(3050)$ were also\nconfirmed with reasonable significance. On the other hand, the narrow\nresonance $\\Omega_c(3119)$ was not seen in the Belle experiment;\nhowever, this nonobservation does not contradict the LHCb result,\nsince it can be attributed to the small yield. 
\n\\begin{table}[htp]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Experimental data on the five $\\Omega_c^0$ baryons and the broad\n structure $\\Omega_{c}(3188)$ reported by the\n LHCb Collaboration~\\cite{Aaij:2017nav}.} \n\\begin{tabular}{ccc}\n\\hline \\hline \nResonance\n& Mass (MeV)\n& Decay width (MeV)\n\\tabularnewline\n\\hline \n $\\Omega_c(3000)^0$\n& $3000.4\\pm 0.2\\pm0.1_{-0.5}^{+0.3}$ \n& $4.5\\pm 0.6\\pm 0.3$\n\\tabularnewline\n $\\Omega_c(3050)^0$\n& $3050.2\\pm0.1\\pm0.1_{-0.5}^{+0.3}$\n& $0.8\\pm 0.2\\pm 0.1$\n\\tabularnewline\n $\\Omega_c(3066)^0$\n& $3065.6\\pm 0.1\\pm 0.3_{-0.5}^{+0.3}$ \n& $3.5\\pm0.4\\pm0.2$\n\\tabularnewline\n $\\Omega_c(3090)^0$\n& $3090.2\\pm 0.3\\pm 0.5_{-0.5}^{+0.3}$\n& $8.7\\pm 1.0 \\pm 0.8$\n\\tabularnewline\n $\\Omega_c(3119)^0$\n& $3119.1\\pm 0.3 \\pm 0.9_{-0.5}^{+0.3}$\n& $1.1 \\pm 0.8 \\pm 0.4$\n\\tabularnewline\n$\\Omega_{c}(3188)$ \n& $3188\\pm 5 \\pm 13$\n& $60\\pm 15 \\pm 11$\n\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\label{tab:5}\n\\end{table}\n\n\\begin{table}[htp]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Experimental data on the four $\\Omega_c^0$ baryons and the broad\n structure $\\Omega_{c}(3188)$ reported by the\n Belle Collaboration~\\cite{Yelton:2017qxg}.}\n\\begin{tabular}{cc}\n\\hline \\hline \nResonance\n& Mass (MeV)\n\\tabularnewline\n\\hline \n $\\Omega_c(3000)^0$\n& $3000.7\\pm 1.0\\pm 0.2$\n\\tabularnewline\n $\\Omega_c(3050)^0$\n& $3050.2\\pm0.4\\pm0.2$\n\\tabularnewline\n $\\Omega_c(3066)^0$\n& $3064.9\\pm 0.6\\pm 0.2$\n\\tabularnewline\n $\\Omega_c(3090)^0$\n& $3089.3\\pm 1.2\\pm 0.2$\n\\tabularnewline\n $\\Omega_c(3119)^0$\n& --\n\\tabularnewline\n$\\Omega_{c}(3188)$ \n& $3199\\pm 9 \\pm 4$\n\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\label{tab:6}\n\\end{table} \n\nWhen we examine the excited heavy baryons in the present work, we\nneed to consider states with the grand spin $K=1$. Since we have the\nquantization rule $\\bm{K}=\\bm{J}_L+\\bm{T}$, the possible values of\n$J_L$ are determined by \n\\begin{align}\nJ_L = |K-T|,\\cdots, K+T. 
\n\\end{align}\nThus, in the case of $T=0$, which corresponds to the anti-triplet with\n$Y'=2\/3$, we must have $J_L=1$ because of $K=1$. Combining it with the\nheavy-quark spin 1\/2, we have \\textit{two} excited baryon\nanti-triplets. Similarly, $T=1$ corresponds to the sextet. In this\ncase $J_L$ can have the values of 0, 1, and 2. Being coupled with the\nheavy-quark spin $1\/2$, we get \\textit{five} excited baryon sextets:\n$(1\/2)$, $(1\/2,\\,3\/2)$, and $(3\/2,\\,5\/2)$, corresponding to $J_L=0$,\n$J_L=1$, and $J_L=2$, respectively. In each sextet representation, we have an\nisosinglet $\\Omega_c^0$. Thus, it is natural to think that the newly\nfound five $\\Omega_c^0$'s are those in the excited baryon sextets. \nNote that the representations for each value of $J$ are degenerate in\nthe limit of $m_Q\\to\\infty$. So, we need to introduce an additional\nhyperfine spin-spin interaction as done for the ground-state baryon\nsextet\n\\begin{align}\nH_{LQ} = \\frac23 \\frac{\\varkappa'}{m_Q} \\bm{J}_L \\cdot \\bm{J}_Q,\n\\end{align}\nwhich is very similar to Eq.~\\eqref{eq:ssinter}. $\\varkappa'$ can be\nfixed by using the experimental data on the masses of the excited\nbaryon anti-triplet.\n\nFollowing Refs.~\\cite{Diakonov:2012zz, Diakonov:2013qta}, we revise the\neigenvalues of the symmetric Hamiltonian for the excited baryons\n($K\\neq 0$) as \nfollows \n\\begin{align}\nM_{\\mathcal{R}}^{(K)\\prime} &= M_{\\mathrm{cl}}^{(K)\\prime} + \\frac1{2I_2} \\left[\n C_2(\\mathcal{R}) - T (T+1) - \\frac34 Y^{\\prime 2} \\right] \\cr\n& + \\frac1{2I_1} \\left[(1-a_K) T(T+1) + a_K J_L(J_L+1) - a_K(1-a_K) K(K+1) \n \\right], \n\\label{eq:symmass}\n\\end{align}\nwhere $C_2(\\mathcal{R})$ is the eigenvalue of the SU(3) Casimir\noperator, which was already defined in Eq.~\\eqref{eq:Casimir}. The\nparameter $a_K$ is related to the one-quark excitation. 
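For later use, note that the eigenvalues of $H_{LQ}$ follow from the standard recoupling identity for $\\bm{J}=\\bm{J}_L+\\bm{J}_Q$ with $J_Q=1\/2$,\n\\begin{align}\n\\bm{J}_L\\cdot\\bm{J}_Q = \\frac12\\left[J(J+1)-J_L(J_L+1)-\\frac34\\right],\n\\end{align}\nso that the two states built on $J_L=1$ are split by $\\frac23\\frac{\\varkappa'}{m_Q}\\times\\frac32 = \\varkappa'\/m_Q$, while the two states built on $J_L=2$ are split by $\\frac23\\frac{\\varkappa'}{m_Q}\\times\\frac52 = \\frac53\\,\\varkappa'\/m_Q$.\n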
\nThe collective wave functions for the soliton are derived as \n\\begin{align}\n\\Phi_{B,J_L, J_{L3},(T,K)}^{\\mathcal{R}} = \\sqrt{\\frac{2J_L+1}{2K+1}} \\sum_{T_3\n J_{L3}' K_3'} C_{TT_3J_LJ_{L3}'}^{KK_3} (-1)^{(T+T_3)}\n \\Psi_{(\\mathcal{R^*};-Y'TT_3)}^{(\\mathcal{R};B)} D_{J_{L3}'J_{L3}}^{(J_L)*}(S)\n \\chi_{K_3'}, \n\\end{align}\nwhere index $(\\mathcal{R};YTT_3)$ denotes the SU(3) quantum numbers of\na corresponding baryon in representation $\\mathcal{R}$, and\n$(\\mathcal{R}^*;-Y'TT_3)$ is attached to a fixed value of $Y'$ and\nis formally given in a conjugate representation to $\\mathcal{R}$. The\nfunction $D^{(J_L)}$ represents the SU(2) Wigner $D$ function and\n$\\chi_{K_3}$ is the spinor corresponding to $K$ and $K_3$. The \nwave function for the excited baryons can be constructed by coupling\n$\\Phi_{B,J_L, J_{L3},(T,K)}^{\\mathcal{R}}$ with the heavy-quark\nspinor. \n\nThe SU(3) symmetry-breaking Hamiltonian in Eq.~\\eqref{sb} also needs to\nbe extended to describe the mass splittings of the excited heavy\nbaryons \n\\begin{align}\nH_{\\mathrm{sb}}^{(K)} = \\overline{\\alpha} D_{88}^{(8)} + \\beta \\hat{Y}\n + \\frac{\\gamma}{\\sqrt{3}} \\sum_{i=1}^3 D_{8i}^{(8)} \\hat{T}_i +\n \\frac{\\delta}{\\sqrt{3}} \\sum_{i=1}^3 D_{8i}^{(8)} \\hat{K}_i. \n\\label{eq:excitedsu3br}\n\\end{align}\nThe additional parameter $\\delta$ can be determined by using the mass\nspectrum of excited baryons. \n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.2]{fig5.pdf}\\qquad\n\\caption{Schematic picture of the first excited heavy baryons. A\n possible excitation of a quark from the Dirac sea to the valence\n level might have $K^P=1^-$.} \n\\label{fig:5}\n\\end{figure}\nAs shown in Fig.~\\ref{fig:5}, the transition from a $K^P=1^-$\nDirac-sea level to an unoccupied $K^P=0^+$ state may correspond to the\nfirst excited heavy baryons~\\cite{Diakonov:2010tf}. Note that such a transition\nis only allowed in the heavy-baryon sector, not in the light-baryon\nsector. 
As discussed already, there are two baryon anti-triplets and\nfive baryon sextets. From Eq.~\\eqref{eq:symmass}, we can derive the\nfollowing expressions\n\\begin{align}\nM_{\\overline{\\bm{3}}}^{\\prime} &= M_{\\mathrm{cl}}^{\\prime} +\n \\frac1{2I_2} + \\frac1{I_1} (a_1^2),\\cr\nM_{\\bm{6}}^{\\prime} &= M_{\\overline{\\bm{3}}}^{\\prime} +\n \\frac{1-a_1}{I_1} + \\frac{a_1}{I_1}\\times\n \\left\\{\n \\begin{array}{ll} \n-1 & \\mbox{ for $J_{L}=0$} \\\\\n0 & \\mbox{ for $J_{L}=1$} \\\\\n2 & \\mbox{ for $J_{L}=2$}\n \\end{array} \\right. .\n \\label{eq:excitedM3-6}\n\\end{align}\nConsidering the SU(3) symmetry breaking from\nEq.~\\eqref{eq:excitedsu3br}, we find the splitting parameters for the\n$\\overline{\\bm{3}}$ and $\\bm{6}$ \n\\begin{align}\n\\delta_{\\overline{\\bm{3}}}' &= \\frac38 \\overline{\\alpha} + \\beta =\n \\delta_{\\overline{\\bm{3}}} = -180\\,\\mathrm{MeV},\\cr\n \\delta_{\\bm{6}J_{L}}' &= \\delta_{\\bm{6}} -\\frac{3}{20}\\delta \\times\n \\left\\{\n \\begin{array}{ll} \n-1 & \\mbox{ for $J_{L}=0$} \\\\\n0 & \\mbox{ for $J_{L}=1$} \\\\\n2 & \\mbox{ for $J_{L}=2$}\n \\end{array} \\right.,\n\\label{eq:excitedeltas}\n\\end{align}\nwhere we see that $\\delta_{\\overline{\\bm{3}}}'$ is just the same as\n$\\delta_{\\overline{\\bm{3}}}$ given in\nEq.~\\eqref{eq:deltas}. $\\delta_{\\bm{6}}$ is given as $-120$\nMeV. Though we do not know the numerical value of the new parameter\n$\\delta$, we still can analyze the mass splittings of the newly found\n$\\Omega_c$'s, using the splittings between the states with different\nvalues of $J_{L}$. \n\nWe now turn to the hyperfine splittings. 
The two anti-triplets of spin\n1\/2 and 3\/2 and the two sextets of spin 1\/2 and 3\/2 are split by \n\\begin{align}\n\\Delta_{\\overline{\\bm{3}}}^{\\mathrm{hf}} =\n \\Delta_{\\bm{6}J_{L}=1}^{\\mathrm{hf}} = \\frac{\\varkappa'}{m_c}, \n\\end{align}\nwhereas the other two sextets of spin 3\/2 and 5\/2 are split by\n\\begin{align}\n \\Delta_{\\bm{6}J_{L}=2}^{\\mathrm{hf}} = \\frac53 \\frac{\\varkappa'}{m_c}. \n\\label{eq:hf6}\n\\end{align}\nOne sextet of spin 1\/2 from the $J_{L}=0$ case has no hyperfine\nsplitting. The results are depicted in Fig.~\\ref{fig:6}.\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.2]{fig6.pdf}\\qquad\n\\caption{Mass splitting of the five excited sextets.}\n\\label{fig:6}\n\\end{figure}\nNote that $\\Delta_1$ represents the splitting\nbetween the $J_{L}=0$ state and the degenerate $J_{L}=1$ states, whereas\n$\\Delta_2$ denotes that between the degenerate $J_{L}=1$ and $J_{L}=2$ states \n\\begin{align}\n\\Delta_1 = \\frac{a_1}{I_1} + \\frac{3}{20} \\delta,\\;\\;\\; \\Delta_2 =\n 2\\Delta_1. \n\\label{eq:Jsplit}\n\\end{align}\nWe will soon see that the relation $\\Delta_2=2\\Delta_1$ will play a\ncritical role \nin identifying the excited $\\Omega_c$'s within the $\\chi$QSM. \n\nIf one identifies $\\Lambda_c(2592)$ and $\\Xi_c(2790)$ as the members\nof the excited baryon anti-triplet with spin-parity $(1\/2)^-$,\nand $\\Lambda_c(2628)$ and $\\Xi_c(2818)$ as those of the\nexcited baryon anti-triplet with $(3\/2)^-$, then we find\n$\\delta_{\\overline{\\bm{3}}}=-198$ and $-190$ MeV, which are more or\nless in agreement with the value given in\nEq.~\\eqref{eq:excitedeltas}. 
The value of $\\varkappa'\/m_c$ can also be\ndetermined as \n\\begin{align}\n\\frac{\\varkappa'}{m_c} = \\frac13 (M_{\\Lambda_c(2628)} + 2\n M_{\\Xi_c(2818)}) - \\frac13 (M_{\\Lambda_c(2592)} + 2 M_{\\Xi_c(2790)})\n = 30\\,\\mathrm{MeV}, \n\\label{eq:exhfvalue}\n\\end{align}\nand $M_{\\overline{\\bm{3}}}$ is also fixed by \n\\begin{align}\nM_{\\overline{\\bm{3}}} = \\frac29 (M_{\\Lambda_c(2628)} + 2\n M_{\\Xi_c(2818)}) + \\frac19 (M_{\\Lambda_c(2592)} + 2 M_{\\Xi_c(2790)})\n = 2744\\,\\mathrm{MeV}. \n\\end{align}\n\nWe now assert, as a minimal scenario, that the $\\Omega_c$ baryons\nnewly found by the LHCb Collaboration belong to the five excited\nsextets. Then\n$\\Omega_c(3000)$ can be identified as the state with $(J_{L}=0,\\,1\/2^-)$,\nwhich corresponds to the lightest state in Fig.~\\ref{fig:6}. The other\nfour states can then be identified as depicted in\nFig.~\\ref{fig:6}. Including the hyperfine interactions, we obtain the\nresults summarized in Table~\\ref{tab:7}.\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{table}[thp]\n\\caption{Scenario 1: All five LHCb\n $\\Omega_{c}$ states are assigned to the excited baryon sextets.}\n\\label{tab:7}%\n\\begin{center}%\n\\begin{tabular}\n[c]{ccccc}\\hline\\hline\n$J_{L}$ & $S^{P}$ & $M$~[MeV] & $\\varkappa^{\\prime}\/m_{c}$~[MeV] &\n $\\Delta_{J_{L}}$~[MeV]\\\\ \\hline\n0 & $\\frac{1}{2}^{-}$ & 3000 & not applicable & not applicable\\\\ \n\\multirow{2}{*}{1} & $\\frac{1}{2}^{-}$ & 3050 & \\multirow{2}{*}{16} &\n\\multirow{2}{*}{61}\\\\\n~ & $\\frac{3}{2}^{-}$ & 3066 & & \\\\\n\\multirow{2}{*}{2} & $\\frac{3}{2}^{-}$ & 3090 & \\multirow{2}{*}{17} &\n\\multirow{2}{*}{47}\\\\\n& $\\frac{5}{2}^{-}$ & 3119 & & \\\\\\hline \\hline\n\\end{tabular}\n\\end{center}\n\\par\n\\end{table}\n\\renewcommand{\\arraystretch}{1}\nWe find at least three different contradictions arising from the\nassignment of these $\\Omega_c$ states as the members of the\nexcited sextets within the $\\chi$QSM. 
Firstly, this assignment\nrequires the hyperfine splitting to be almost twice as \nsmall as in the $\\overline{\\bm{3}}$ case. Secondly, the robust\nrelation $\\Delta_2=2\\Delta_1$ given in Eq.~\\eqref{eq:Jsplit} is badly\nbroken. Finally, there are two orthogonal sum rules\n$\\sigma_1=\\sigma_2=0$ derived from the $\\chi$QSM \n\\begin{align}\n\\sigma_1&=6\\; \\Omega_c(J_{L}=0,1\/2^-)- \\Omega_c(J_{L}=1,1\/2^-)-8 \\;\n \\Omega_c(J_{L}=1,3\/2^-)+3\\; \\Omega_c(J_{L}=2,5\/2^-) , \\label{sr}\\\\ \n\\sigma_2&=-4\\; \\Omega_c(J_{L}=0,1\/2^-)+9\\; \\Omega_c(J_{L}=1,1\/2^-)-3 \\;\n \\Omega_c(J_{L}=1,3\/2^-)-5 \\; \\Omega_c(J_{L}=2,3\/2^-) \n+3\\; \\Omega_c(J_{L}=2,5\/2^-), \\notag\n\\end{align}\nwhich are also badly broken: numerically, the scenario-1 masses in\nTable~\\ref{tab:7} give $\\sigma_1\\approx-221$ MeV and\n$\\sigma_2\\approx159$ MeV. Thus, we come to the conclusion that\nthe five $\\Omega_c$ baryons are unlikely to belong to the excited\nsextets. A similar conclusion was drawn in\nRef.~\\cite{Karliner:2017kfm} in a different theoretical framework. \nMoreover, the computed decay widths of the excited $\\Omega_c$'s do not\nmatch the experimental data. Therefore, the first scenario is\nunrealistic in the present mean-field approach. \n\nSince the first scenario is not suitable for identifying the five\nexcited $\\Omega_c$ baryons, we have to come up with another\nscenario. Observing that two of them have rather narrower decay widths\nthan the other three $\\Omega_c$'s, we assert that the narrow\n$\\Omega_c(3050)$ and $\\Omega_c(3119)$ belong to the possible exotic\nanti-decapentaplet ($\\overline{\\bm{15}}$), which is yet another\nlowest-lying allowed representation, whereas the other three belong to\nthe excited sextets. We find in this scenario that the two other members of\nthe excited baryon sextet with $J_{L}=2$ have masses above the $\\Xi D$\nthreshold at 3185 MeV. Since they have rather broad widths, they are\nnot clearly seen and may fall into the bump\nstructures appearing in the LHCb data. 
\n\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{table}[thp]\n\\caption{Scenario 2. Only three LHCb states are\nassigned to the sextets.}%\n\\label{tab:8}%\n\\begin{center}%\n\\begin{tabular}\n[c]{ccccc}\\hline \\hline\n$J_{L}$& $S^{P}$ & $M$~[MeV] & $\\varkappa^{\\prime}\/m_{c}$~[MeV] & $\\Delta_{J_{L}}%\n$~[MeV]\\\\\\hline\n0 & $\\frac{1}{2}^{-}$ & 3000 & not applicable & not applicable\\\\\n\\multirow{2}{*}{1} & $\\frac{1}{2}^{-}$ & 3066 & \\multirow{2}{*}{24} &\n\\multirow{2}{*}{82}\\\\\n~ & $\\frac{3}{2}^{-}$ & 3090 & & \\\\\n\\multirow{2}{*}{2} & $\\frac{3}{2}^{-}$ & \\emph{3222} & input & input\\\\\n& $\\frac{5}{2}^{-}$ & \\emph{3262} & 24 & 164\\\\\\hline \\hline\n\\end{tabular}\n\\end{center}\n\\par\n\\end{table}\n\\renewcommand{\\arraystretch}{1}\nThe results of the second scenario are summarized in Table~\\ref{tab:8},\nexcept for the $\\Omega_c(3050)$ and $\\Omega_c(3119)$, which will be\ndiscussed separately. The italic numbers correspond to the bump\nstructures, from which $\\Omega_c(3222)$ is used as input. Scenario 2\nprovides a much more plausible prediction than scenario 1\ndoes. Interestingly, the value of $\\varkappa'\/m_c\\approx 24$ MeV is\nclose to that determined from the excited baryon anti-triplets, given\nin Eq.~\\eqref{eq:exhfvalue}. Moreover, the relation\n$\\Delta_2=2\\Delta_1$ is nicely satisfied in this scenario. \n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.3]{fig7.pdf}\\qquad\n\\caption{Representation of the anti-decapentaplet\n ($\\overline{\\bm{15}}$). As in the case of the baryon sextet, there\n are two baryon anti-decapentaplets with spin 1\/2 and 3\/2. The\n $\\Omega_c$'s belong to the isotriplet in the $\\overline{\\bm{15}}$.}\n\\label{fig:7}\n\\end{figure}\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[scale=0.3]{fig8.pdf}\\qquad\n\\caption{The allowed representations for the lowest-lying heavy\n baryons. 
}\n\\label{fig:8}\n\\end{figure}\nThe anti-decapentaplet ($\\overline{\\bm{15}}$) was first suggested by\nDiakonov~\\cite{Diakonov:2010tf}. Figure~\\ref{fig:7} illustrates the\nrepresentation of the $\\overline{\\bm{15}}$. Since the\n$\\overline{\\bm{15}}$ belongs to the allowed representations for the\nground-state heavy baryons, it satisfies the quantization rule\n$\\bm{J}_{L}+\\bm{T}=\\bm{0}$, so $T=J_{L}=1$ (see Fig.~\\ref{fig:8}). When the\nlight-quark pair with $J_{L}=1$ is coupled to the heavy-quark spin, there\nare two possible $\\overline{\\bm{15}}$ representations that are\ndegenerate in the limit of $m_Q\\to \\infty$. It means that one needs to\nconsider the hyperfine interaction defined in Eq.~\\eqref{eq:ssinter}.\nAs given in Eq.~\\eqref{eq:kappavalue}, the value of $\\varkappa\/m_c$ is\naround 68 MeV. Surprisingly, the mass difference between the\n$\\Omega_c(3050)$ and the $\\Omega_c(3119)$ is \n\\begin{align}\nM_{\\Omega_c(3\/2^+)} (3119) - M_{\\Omega_c(1\/2^+)} (3050) =\n \\frac{\\varkappa}{m_c} \\approx 69\\,\\mathrm{MeV} \n\\end{align}\nwhich is almost the same as what was determined from the lowest-lying\nsextet baryons. The decay widths of the excited $\\Omega_c$ baryons\npredicted within the present framework further support the\nplausibility of scenario 2~\\cite{Kim:2017khv}. The decay widths for\nthe $\\Omega_c(3050)$ and $\\Omega_c(3119)$ are predicted to be \n\\begin{align}\n\\Gamma_{\\Omega_c(3050)(\\overline{\\bm{15}},1\/2^+)} =\n 0.48\\,\\mathrm{MeV}, \\;\\;\\; \n\\Gamma_{\\Omega_c(3119)(\\overline{\\bm{15}},3\/2^+)} = 1.12\\,\\mathrm{MeV}, \n\\end{align}\nwhich are in good agreement with the LHCb data\n$\\Gamma_{\\Omega_c(3050)}=(0.8\\pm0.2\\pm 0.1)$ MeV and\n$\\Gamma_{\\Omega_c(3119)}=(1.1\\pm0.8\\pm 0.4)$ MeV. For detailed\ndiscussion related to the decay widths of $\\Omega_c$, we refer to\nRef.~\\cite{Kim:2017khv}. 
\n\nIn addition to scenarios 1 and 2, we also examined\nseveral other scenarios but found that they all turn out to be\ninconsistent with the experimental data. Finally, we want to emphasize\nthat the $\\Omega_c(3050)$ and $\\Omega_c(3119)$ assigned to the members\nof the $\\overline{\\bm{15}}$ belong to isotriplets. It implies that if they\nindeed belong to the $\\overline{\\bm{15}}$, charged $\\Omega_c^{\\pm}$\nshould exist. Knowing that the excited $\\Omega_c^0$'s have been\nmeasured in the $\\Xi_c^+ K^-$ channel, we propose that the $\\Xi_c^+\nK^0$ and $\\Xi_c^0 K^-$ channels need to be scanned in the range of the\ninvariant mass between 3000 MeV and 3200 MeV to find the isovector\n$\\Omega_c$'s. If they do not exist, this will falsify the present\npredictions. \n\\section{Conclusion and outlook}\nIn the present short review, we briefly summarized a series of recent\nworks on the properties of the singly heavy baryons within a pion\nmean-field approach, also known as the chiral quark-soliton model.\nIn the limit of the infinitely heavy quark mass ($m_Q\\to \\infty$), \nthe heavy quark inside a heavy baryon can be treated as a mere static\ncolor source. Then a heavy baryon is portrayed as a state of $N_c-1$\nvalence quarks bound by the pion mean field, with a heavy quark stripped off\nfrom the valence level. This mean-field approach has a certain\nvirtue since both the light and heavy baryons can be dealt with on an\nequal footing. It means that we can bring all dynamical parameters\nwhich have already been determined in the light-baryon sector to\ndescribe the heavy baryons. Indeed, we can simply replace the\n$N_c$-counting prefactor by $N_c-1$ for the valence contributions to\nthe heavy baryons. Accordingly, we were able to explain the masses and\nmagnetic moments of the lowest-lying heavy baryons\nwithout introducing additional parameters except for the hyperfine\nspin-spin interactions. 
We have employed the same framework to\nidentify the newly found excited $\\Omega_c$ baryons reported by the\nLHCb Collaboration. Assigning three of them to the excited baryon\nsextets and the two with narrower decay widths to the possible\nexotic baryon anti-decapentaplet, we were able to classify the\n$\\Omega_c$'s successfully. Since the $\\Omega_c$ baryons in the\nanti-decapentaplet are isovector baryons, we anticipate that\ncharged $\\Omega_c$'s might be found in other channels such as the $\\Xi_c^+\nK^0$ and $\\Xi_c^0 K^-$. \n\nThe present model can be further applied to future investigations on\nvarious properties and form factors of heavy baryons. As already shown\nin Ref.~\\cite{Kim:2018nqf}, the electric form factor of the charged\nheavy baryon indicates that a heavy baryon is an electrically compact\nobject. Transition form factors of heavy baryons will further reveal\ntheir internal structure. Understanding excited heavy baryons is\nanother crucial issue that should be investigated. Related studies are\nunder way. \n\n\\begin{acknowledgments}\nI am very grateful to M. V. Polyakov, M. Prasza{\\l}owicz, and\nGh.-S. Yang for fruitful collaborations and discussions over decades. \nI am thankful to J.-Y. Kim for discussions related to the\nelectromagnetic form factors of heavy baryons. \nI want to express my gratitude to the editors of the Journal of the\nKorean Physical Society (JKPS) for giving me the opportunity to join\nthe very special 50th anniversary celebration of the JKPS. 
\nThe present work was supported by the Inha University Grant in 2017.\n\\end{acknowledgments}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjvpv b/data_all_eng_slimpj/shuffled/split2/finalzzjvpv new file mode 100644 index 0000000000000000000000000000000000000000..56deaa5c598e282a98d21db08ea04e283e315cc7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjvpv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\n\n\nOne of the first, and most important, problems to be tackled by the theory of linear elasticity is that of the buckling of a column under an axial load. \nUsing Bernoulli's beam equations, Euler found the critical load of compression $N_\\text{cr}$ leading to the buckling of a slender cylindrical column of radius $B$ and length $L$.\nAs recalled by Timoshenko and Gere \\cite{tige61}, Euler looked at the case of an ideal column with (built in)-(free) end conditions. \nWhat is now called ``Euler buckling'', or the ``\\emph{fundamental case} of buckling of a prismatic bar'' \\cite{tige61} corresponds to the case of a bar with hinged-hinged end conditions. \nThe corresponding critical load is given by\n\\begin{equation} \\label{euler}\n\\dfrac{N_\\text{cr}}{\\pi^3B^2} = \\frac{E}{4}\\left(\\frac{B}{L}\\right)^2,\n\\end{equation}\nwhere $E$ is the Young modulus.\nThe extension of this formula to the case of a thick column is a non-trivial, even sophisticated, problem of non-linear three-dimensional elasticity. \nIn general, progress can be only made by using reductive (rod, shells, etc.) theories. \nHowever, there is another choice of boundary conditions for which the criterion~(\\ref{euler}) is valid: namely, the case where both ends are ``guided'' or ''sliding'' (the difference between the two cases lies in the shape of the deflected bar, which is according to the half-period of a sine in one case and of a cosine in the other case.) 
\nIn exact non-linear elasticity, there exists a remarkable three-dimensional analytical solution to this problem (due to Wilkes \\cite{Wilk55}) which describes a small-amplitude (incremental) deflection superimposed upon the large homogeneous deformation of a cylinder compressed between two lubricated platens. \nIn this case, the Euler formula can be extended to the case of a column with finite dimensions, for arbitrary constitutive law.\n\nThe exact incremental solution allows for an explicit derivation of Euler's formula at the \\emph{onset of non-linearity}, which combines third-order elastic constants with a term in $(B\/L)^4$.\nIn Goriely et al. \\cite{Goriely08}, we showed that for an incompressible cylinder, \n\\begin{equation}\\label{incompressible_case}\n\\frac{N_\\text{cr}}{\\pi^3B^2} = \\frac{E}{4}\\left(\\frac{B}{L}\\right)^2\n - \\frac{\\pi^2}{96}\\left(\\frac{20}{3}E + 9 \\mathcal{A}\\right)\\left(\\frac{B}{L}\\right)^4,\n\\end{equation}\nwhere $\\mathcal{A}$ is Landau's third-order elasticity constant.\nThis formula clearly shows that \\emph{geometrical non-linearities} (term in $(B\/L)^4$) are intrinsically coupled to \\emph{physical non-linearities} (term in $\\mathcal{A}$) for this problem.\n(For connections between Euler's theory of buckling and incremental incompressible nonlinear elasticity, see the early works of Wilkes \\cite{Wilk55}, Biot \\cite{Biot63}, Fosdick and Shield \\cite{FoSh63}, and the references collected in \\cite{Goriely08}.)\n\nNow, in third-order \\emph{in}compressible elasticity, there are two elastic constants \\cite{Ogden74b}: the shear modulus $\\mu$ ($=E\/3$) and $\\mathcal{A}$.\nIn third-order \\emph{compressible} elasticity, there are five elastic constants:\n$\\lambda$ and $\\mu$, the (second-order) Lam\\'e constants (or equivalently, $E$ and $\\nu$, Young's modulus and Poisson's ratio), and $\\mathcal{A}$, $\\mathcal{B}$, and $\\mathcal{C}$, the (third-order) Landau constants. 
\nEuler's formula at order $(B\/L)^2$, equation (\\ref{euler}), involves only one elastic constant, $E$. \nIt is thus natural to ask whether Poisson's ratio, $\\nu$, plays a role in the non-linear correction to the Euler formula at order $(B\/L)^4$, the next-order term.\nThe final answer is found as formula \\eqref{correc} below, which shows that the non-linear correction involves all five elastic constants.\n\n\n\\section{Finite compression and buckling}\n\n\nIn this section, we recall the equations governing the homogeneous compression of a cylinder in the theory of exact (finite) elasticity. \nWe also recall the form of some incremental solutions, that is, of some small-amplitude static deformations which may be superimposed upon the large compression, and which indicate the onset of instability for the cylinder.\n\n\n\n\\subsection{Large deformation}\n\n\nWe take a cylinder made of a hyperelastic, compressible, isotropic solid with strain-energy density $W$, say, with radius $B$ and length $L$ in its undeformed configuration.\nWe denote by ($R$, $\\Theta$, $Z$) the coordinates of a particle in the cylinder at rest, in a cylindrical system.\n\nThen we consider that the cylinder is subject to the following deformation,\n\\begin{equation} \\label{deformation}\n{r} = \\lambda_1R, \\qquad {\\theta}= \\Theta, \\qquad {z}=\\lambda_3 Z,\n\\end{equation}\nwhere ($r$, $\\theta$, $z$) are the current coordinates of the particle initially at ($R$, $\\Theta$, $Z$), $\\lambda_1$ is the radial stretch ratio and $\\lambda_3$ is the axial stretch ratio.\nExplicitly, $\\lambda_1 = b\/B$ and $\\lambda_3 = l\/L$, where $b$ and $l$ are the radius and length of the deformed cylinder, respectively.\nThe physical components of $\\tens{F}$, the corresponding deformation gradient, are: $\\tens{F} = \\textrm{Diag} \\left(\\lambda_1, \\lambda_1, \\lambda_3 \\right)$, showing that the deformation is equi-biaxial and homogeneous (and thus, universal). 
\n\nThe (constant) Cauchy stresses required to maintain the large homogeneous compression are \n(see, for instance, Ogden \\cite{Ogden84}),\n\\begin{equation} \\label{Cauchy stress tensor}\n\\sigma_i = \\left( \\det \\tens{F} \\right)^{-1}\\lambda_i W_i, \\qquad i=1,2,3 \\quad \\textrm{(no sum)},\n\\end{equation}\nwhere $W_i \\equiv \\partial W\/ \\partial \\lambda_i$.\nIn our case, $\\sigma_1=\\sigma_2$ because the deformation is equi-biaxial, and $\\sigma_1=\\sigma_2=0$ because the outer face of the cylinder is free of traction. \nHence\n\\begin{equation}\n\\sigma_1=\\sigma_2= \\lambda_1^{-1}\\lambda_3^{-1}W_1=0, \\qquad \n\\sigma_3= \\lambda_1^{-2} W_3. \\label{sigma}\n\\end{equation}\nNote that we may use the first equality to express one principal stretch in terms of the other (provided, of course, that inverses can be performed).\n\n\n\\subsection{Incremental equations}\n\n\nNow we recall the equations governing the equilibrium of incremental solutions, in the neighborhood of the finite compression.\nThey read in general as \\cite{Ogden84}\n\\begin{equation} \\label{Incremental_equations_equilibrium}\n\\textrm{div}\\ \\tens{s}= \\mathbf{0},\n\\end{equation}\nwhere $\\tens{s}$ is the incremental nominal stress tensor.\nIt is related to $\\mathbf{u}$, the incremental mechanical displacement, through the incremental constitutive law,\n\\begin{equation} \\label{component_form}\n\\mathbf{s} = \\tens{B} \\left(\\text{grad} \\ \\mathbf{u}\\right)^T,\n\\end{equation}\nwhere $\\tens{B}$ is the fourth-order tensor of incremental elastic moduli and the gradient is computed in the current cylindrical coordinates \\cite{DoHa06}. 
\nThe non-zero components of $\\tens{B}$, in a coordinate system aligned with the principal axes associated with the deformation \\eqref{deformation}, are given in general by \\cite{Ogden84}\n\\begin{align} \n& JB_{iijj}= \\lambda_i\\lambda_j W_{ij}, \\nonumber \\\\\n&JB_{ijij}= (\\lambda_i W_i - \\lambda_j W_j)\\lambda_i^2\/(\\lambda_i^2-\\lambda_j^2), & i\\neq j,\\ \\lambda_i\\neq \\lambda_j, \\nonumber \\\\\n&JB_{ijji}= (\\lambda_j W_i - \\lambda_i W_j)\\lambda_i \\lambda_j\/(\\lambda_i^2-\\lambda_j^2), & i\\neq j,\\ \\lambda_i\\neq \\lambda_j, \\nonumber \\\\\n& JB_{ijij}=(B_{iiii}-B_{iijj} + \\lambda_i W_i)\/2, & i\\neq j,\\ \\lambda_i= \\lambda_j, \\nonumber \\\\\n& JB_{ijji}=B_{jiij}=B_{ijij} - \\lambda_i W_i, & i\\neq j,\\ \\lambda_i= \\lambda_j, \n\\end{align}\n(no sums), where $W_{ij}\\equiv \\partial^2 W\/(\\partial \\lambda_i\\partial \\lambda_j)$. \nNote that here, some of these components are not independent one from another because $\\lambda_1=\\lambda_2$ and $\\sigma_1=\\sigma_2=0$. \nIn particular, we find that \n\\begin{align}\n& B_{1212} = B_{2121} = B_{1221}, \\qquad \n B_{2323}=B_{1313}=B_{1331}=B_{2332},\n \\notag \\\\\n& B_{2222}=B_{1111}, \\quad \n B_{2233}=B_{1133},\n \\quad\n B_{3232}=B_{3131}, \\quad \n B_{1122}=B_{1111}-2B_{1212}. \n\\end{align}\n\n\n\\subsection{Incremental solutions}\n\n\nWe look for incremental static solutions that are periodic along the circumferential and axial directions, and have yet unknown radial variations. 
\nThus our ansatz for the components of the mechanical displacement is the same as Wilkes's \\cite{Wilk55}:\n\\begin{equation} \\label{slon}\nu_r = U_r(r) \\cos n\\theta \\cos kz, \\quad\nu_\\theta = U_{\\theta}(r) \\sin n \\theta \\cos kz, \\quad\nu_z = U_z(r) \\cos n\\theta \\sin k z, \n\\end{equation}\nwhere $n=0,1,2,\\ldots$ is the \\textit{circumferential mode number}; $k$ is the \\textit{axial wavenumber}; the subscripts $(r,\\theta,z)$ refer to components within the cylindrical coordinates $(r,\\theta,z)$; and all upper-case functions are functions of $r$ alone. \n\nDorfmann and Haughton \\cite{DoHa06} show that the following displacements $\\mathbf{U}^{(1)}$, $\\mathbf{U}^{(2)}$, and $\\mathbf{U}^{(3)}$ are solutions to the incremental equations \\eqref{Incremental_equations_equilibrium}, \n\\begin{equation} \\label{U12}\n\\mathbf{U}^{(1)}(r), \\, \\mathbf{U}^{(2)}(r) = \n\\left[ I'_{n}(qkr), -\\frac{n}{qkr}I_n(qkr), -\\dfrac{(B_{1111}q^2-B_{3131})}{q(B_{1313}+B_{1133})}I_n(qkr)\\right]^T,\n\\end{equation}\nand \n\\begin{equation} \\label{U3}\n\\mathbf{U}^{(3)}(r) = \\left[\\dfrac{1}{r}I_n(q_3kr), - \\dfrac{q_3k}{n}I'_{n}(q_3kr), 0\\right]^{T},\n\\end{equation}\nwhere $q=q_1, q_2$ and $I_n$ is the modified Bessel function of order $n$.\nHere $q_1$, $q_2$, and $q_3$ are the square roots of the roots $q_1^2$, $q_2^2$ of the following quadratic in $q^2$: \n\\begin{equation} \nB_{1313} B_{1111} q^4+[(B_{1133}+B_{1313})^2-B_{1313}B_{3131}-B_{3333}B_{1111}]q^2+B_{3333}B_{3131}=0,\n\\end{equation}\nand of the root of the following linear equation in $q^2$\n\\begin{equation}\nB_{1212}q^2-B_{3131}=0,\n\\end{equation}\nrespectively.\n\n\n\nFrom \\eqref{component_form} we find that the incremental traction on planes normal to the axial direction has components of the same form as that of the displacements, namely \n\\begin{align} \\label{soln}\n& s_{r r} = S_{r r}(r) \\cos n\\theta \\cos k z, \\notag \\\\\n& s_{r\\theta} = S_{r\\theta}(r) \\sin n \\theta \\cos k z, 
\\notag \\\\\n& s_{r z} = S_{r z}(r) \\cos n\\theta \\sin k z, \n\\end{align}\nsay, where again all upper-case functions are functions of $r$ alone. \nThen we find that the traction solutions corresponding to the solutions \\eqref{U12}-\\eqref{U3} are given by\n\\begin{multline}\nr \\mathbf{S}^{(1)}(r), \\ \\ r \\mathbf{S}^{(2)}(r)= \n\\left[ \n2B_{1212} I'_n(q k r) \\right.\n\\\\ \\left . - \\left(2B_{1212} \\dfrac{n^2}{q k r} + q k r B_{1111} - \\dfrac{k r B_{1133}\\left(B_{1111}q^2 - B_{3131}\\right)}{q\\left(B_{1313}+B_{1133}\\right)}\\right) I_n(q k r), \\right.\n \\\\ \n\\left. 2 n B_{1212}\\left(\\frac{I_n(q k r)}{q k r}-I'_n(q k r)\\right),\n - k r B_{1313}\\left(\\frac{B_{1111}q^2-B_{3131}}{B_{1313}+B_{1133}}+1\\right)I'_{n}(q k r)\n\\right]^T,\n\\end{multline}\nand\n\\begin{multline}\nr \\mathbf{S}^{(3)}(r) = \\left[\n 2 B_{1212}\\left(\\frac{I_n(q_3kr)}{r}-q_3 k I'_n(q_3kr)\\right), \\right.\n \\\\\n\\left. B_{1212}\\left(\\frac{2q_3k}{n}I'_n(q_3kr)-\\left(\\frac{2n}{r}+\\frac{q_3^2k^2r}{n}\\right)I_n(q_3kr)\\right),\n- k B_{1313}I_n(q_3kr)\\right]^{T}.\n\\end{multline}\n\nThe general solution to the incremental equations of equilibrium is thus of the form\n\\begin{equation} \nr \\mathbf{S}(r) = \n \\begin{bmatrix} [c|c|c]\nr \\mathbf{S}^{(1)}(r) & r \\mathbf{S}^{(2)}(r) & r \\mathbf{S}^{(3)}(r)\n\\end{bmatrix}\n\\mathbf{c},\n\\end{equation}\nwhere $\\mathbf{S} \\equiv [S_{r r}, S_{r\\theta}, S_{r z}]^T$ and $\\mathbf{c}$ is a constant three-component vector.\nNote that we use the quantity $r \\mathbf{S}$ for the traction (instead of $\\mathbf{S}$), because it is the Hamiltonian conjugate to the displacement in cylindrical coordinates \\cite{Shuv03}.\n\nNow when the cylinder is compressed (by platens say), its end faces should stay in full contact with the platens so that the first incremental boundary condition is \n\\begin{equation}\nu_{z} = 0, \\qquad \\text{on} \\quad z=0, l, \n\\end{equation}\nwhich leads to 
\\cite{DoHa06,Goriely08}\n\\begin{equation}\nk = m \\pi\/l,\\end{equation}\nfor some integer $m$, the \\emph{axial mode number}.\nFrom \\eqref{soln}, we now see that on the thrust faces, we have\n\\begin{equation}\ns_{r z} = 0, \\qquad \\text{on} \\quad z=0, l, \n\\end{equation}\nwhich means that the end faces of the column are in sliding contact with the thrusting platens.\nIn other words, in the limit of a slender column, we recover the Euler strut with \\emph{sliding-sliding}, or \\emph{guided-guided} end conditions. \nIn Figure \\ref{fig4}, we show the first two axi-symmetric and two asymmetric modes of incremental buckling.\n\\begin{figure}[!ht]\n\\begin{center}\n\\epsfig{figure=modes.pdf, width=.6\\textwidth}\n\\end{center}\n \\caption{First two axi-symmetric and two asymmetric modes of buckling for a compressed strut with guided-guided end conditions: $n$ is the circumferential mode number and $m$ the axial mode number. For slender enough cylinders, the $n=1$, $m=1$ mode is the first mode of buckling.}\n \\label{fig4}\n\\end{figure}\n\nThe other boundary condition is that the cylindrical face is free of incremental traction: $\\mathbf{S}(b) = \\mathbf{0}$.\nThis gives \n\\begin{equation}\n\\Delta \\equiv \\det \\begin{bmatrix} [c|c|c]\nb \\mathbf{S}^{(1)}(b) & b \\mathbf{S}^{(2)}(b) & b \\mathbf{S}^{(3)}(b)\n\\end{bmatrix} = 0.\n\\end{equation}\n\n\n\\section{Euler buckling}\n\n\n\n\n\\subsection{Asymptotic expansions}\n\\label{Asymptotic Euler buckling}\n\n\nWe now specialize the analysis to the asymmetric buckling mode $n=1$, $m=1$, corresponding to the Euler buckling with guided-guided end conditions, in the limit where the axial compressive stretch $\\lambda_3$ is close to 1 (the other modes are not reached for slender enough cylinders). 
\nTo this end, we only need to consider the so-called \\emph{third-order elasticity} expansion of the strain energy density, for example that of Landau and Lifshitz \\cite{LaLi86}, \n\\begin{equation}\nW = \n\\dfrac{\\lambda}{2}\\left(\\textrm{tr}\\tens{E}\\right)^2 +\\mu \\ \\textrm{tr}(\\tens{E}^2) + \\dfrac{\\mathcal{A}}{3}\\textrm{tr}(\\tens{E}^3) + \\mathcal{B}\\left(\\textrm{tr}\\tens{E}\\right)\\textrm{tr}(\\tens{E}^2)\n+ \\dfrac{\\mathcal{C}}{3}\\left(\\textrm{tr} \\tens{E}\\right)^3,\n\\end{equation}\nwhere $\\tens{E}=\\tens{E}^T$ is the Lagrange, or Green, strain tensor defined as $\\tens{E}=\\left(\\tens{F}^T\\tens{F}-\\tens{I}\\right)\/2$,\n$\\lambda$ and $\\mu$ are the Lam\\'e moduli, and $\\mathcal{A}$, $\\mathcal{B}$, $\\mathcal{C}$ are the Landau third-order elastic constants.\n(Note that there are other, equivalent expansions based on other invariants, such as those proposed by Murnaghan \\cite{Murn51}, Toupin and Bernstein \\cite{ToBe61}, Bland \\cite{Blan69}, or Eringen and Suhubi \\cite{ErSu74}; see Norris \\cite{Norr99} for the connections.)\n\nTo measure how close $\\lambda_3$ is to 1, we introduce $\\epsilon$, a small parameter proportional to the slenderness of the deformed cylinder,\n\\begin{equation} \\label{epsilon}\n\\epsilon=k b=\\pi b\/l.\n\\end{equation}\nThen we expand the radial stretch $\\lambda_1$ and the critical buckling stretch $\\lambda_3$ in terms of $\\epsilon$ up to order $M$,\n\\begin{equation}\n\\lambda_1=\\lambda_1(\\epsilon)=1+\\sum_{p=1}^{M}\\alpha_p\\epsilon^p + \\mathcal{O}(\\epsilon^{M+1}),\n\\qquad\n\\lambda_3=\\lambda_3(\\epsilon)=1+\\sum_{p=1}^{M}\\beta_p\\epsilon^p + \\mathcal{O}(\\epsilon^{M+1}),\n\\end{equation}\nsay, where the $\\alpha$'s and $\\beta$'s are to be determined shortly. 
\nSimilarly, we expand $\\Delta$ in powers of $\\epsilon$,\n\\begin{equation}\\nonumber\n\\Delta = \\sum_{p=1}^{M_d}d_p\\epsilon^p + \\mathcal{O}(\\epsilon^{M_d+1}),\n\\end{equation}\nsay, and solve each order $d_p=0$ for the coefficients $\\alpha_p$ and $\\beta_p$, making use of the condition $\\sigma_1=0$.\nWe find that $\\alpha_p$ and $\\beta_p$ vanish identically for all odd values of $p$, and that $\\lambda_1$ and $\\lambda_3$, up to the fourth-order in $\\epsilon$, are given by \n\\begin{equation}\\label{lambda1_3}\n\\lambda_1=1+\\alpha_{2}\\epsilon^2+\\alpha_{4}\\epsilon^4+O(\\epsilon^6),\n\\qquad\n\\lambda_3=1+\\beta_{2}\\epsilon^2+\\beta_{4}\\epsilon^4+O(\\epsilon^6),\n\\end{equation}\nwith $\\alpha_2$ and $\\alpha_4$ given by \n\\begin{align} \\label{alpha}\n \\alpha_2 = & \\dfrac{\\nu}{4}, \\notag \\\\\n \\alpha_4 = & - \\dfrac{\\nu(1+\\nu)}{32} \\notag \\\\ \n& - \\dfrac{(1+\\nu)(1-2\\nu)}{16E} \\left[\\nu^2\\mathcal{A} + (1-2\\nu + 6\\nu^2)\\mathcal{B} + (1 - 2\\nu)^2 \\mathcal{C} \\right] - \\nu \\beta_4, \n\\end{align}\nwherein \n\\begin{align} \\label{beta}\n \\beta_2 = & -\\dfrac{1}{4}, \\notag \\\\\n \\beta_4 = & \\dfrac{29 + 39 \\nu + 8 \\nu^2}{96(1+\\nu)}\\notag \\\\\n & + \\dfrac{1}{16E} \\left[( 1 - 2\\nu^3)\\mathcal{A} + 3(1-2\\nu)(1+2\\nu^2)\\mathcal{B} + (1 - 2\\nu)^3 \\mathcal{C} \\right]. \n\\end{align}\nNote that we switched from Lam\\'e constants to Poisson's ratio and Young's modulus for these expressions, using the connections $\\nu = \\lambda\/(2\\lambda+2\\mu)$ and $E = \\mu(3\\lambda+2\\mu)\/(\\lambda+\\mu)$.\n\n\n\\subsection{Onset of nonlinear Euler buckling}\n\n\nThe analytical results presented above are formulated in terms of the \\emph{current} geometrical parameter $\\epsilon$, defined in \\eqref{epsilon}. \nIn order to relate these results to the classical form of Euler buckling, we introduce the \\emph{initial} geometric slenderness $B\/L$. 
\nRecalling that $\\epsilon=\\pi b\/l$, $\\lambda_3=l\/L$, and $b=\\lambda_1B$, we find that \n\\begin{equation}\\label{condition}\n\\epsilon \\lambda_3 = \\pi \\lambda_1 (B\/L).\n\\end{equation} \nWe expand $\\epsilon$ in powers of $B\/L$, and solve \\eqref{condition} to obtain \n\\begin{eqnarray}\\label{epsilon2}\n\\nonumber \\epsilon \n&=& \\pi (B\/L) + (\\alpha_2-\\beta_2) \\pi^3 (B\/L)^3 \n+ \\mathcal{O}\\left((B\/L)^4\\right) \\\\\n&=& \\pi (B\/L) + (1 + \\nu)(\\pi^3\/4)(B\/L)^3 + \\mathcal{O}\\left((B\/L)^4\\right).\n\\end{eqnarray}\n\nSecond, we wish to relate the axial compression to the current axial load $N$. \nTo do so, we integrate the axial stress over the faces of the cylinder, \n\\begin{equation} \\label{axialforce}\n N = -2\\pi\\int_0^b r \\sigma_3 \\text{d}r = -\\pi b^2 \\sigma_3=-\\pi \\lambda_1^2 B^2 \\sigma_3,\n\\end{equation}\nbecause $\\sigma_3$ is constant, given by (\\ref{sigma})$_{2}$.\n\nFinally, in order to write the nonlinear buckling formula, we expand $\\lambda_1$ and $\\lambda_3$ in \\eqref{axialforce}, using \\eqref{lambda1_3}, and then expand $\\epsilon$ in powers of the slenderness ($B\/L$), using \\eqref{epsilon2}.\nIt gives the desired expression for the first \\emph{non-linear correction to the Euler formula},\n\\begin{equation} \\label{correc}\n\\dfrac{N_\\text{cr}}{\\pi^3 B^2} = \\dfrac{E}{4} \\left(\\dfrac{B}{L}\\right)^2 - \\dfrac{\\pi^2}{96} \\delta_\\text{\\ NL} \\left(\\dfrac{B}{L}\\right)^4,\n\\end{equation}\nwhere\n\\begin{multline}\n\\delta_\\text{\\ NL} = 2\\dfrac{13 + 12 \\nu - 2 \\nu^2}{(1+\\nu)} E \\\\\n + 12\\left[(1 - 2\\nu^3)\\mathcal{A} + 3 (1 - 2\\nu)(1 + 2\\nu^2)\\mathcal{B} + (1 - 2\\nu)^3\\mathcal{C}\\right]. 
\n\\end{multline}\n\nWe now check this equation against its incompressible counterpart \\eqref{incompressible_case}.\nTheoretical considerations and experimental measurements \\cite{Ogden74b,WHIZ08,CaGF03,DeOg10} show that in the incompressible limit, $E$ and $\\mathcal{A}$ remain finite, $\\nu \\rightarrow 1\/2$, $(1-2\\nu)\\mathcal{B} \\rightarrow -E\/3$, and $(1-2\\nu)^3 \\mathcal{C} \\rightarrow 0$.\nIt is then a simple exercise to verify that \\eqref{correc} is indeed consistent with \\eqref{incompressible_case} in those limits.\n\n\n\n\\subsection{Examples}\n\n\n\\begin{table}[!ht]\n\\caption{Lam\\'e constants and Landau third-order elastic moduli for five solids ($10^{9}$ N$\\cdot$ m$^{-2}$)}\n\\label{tab:1} \n\\begin{tabular}{lrrrrr}\n\\hline\\noalign{\\smallskip}\nmaterial & $\\lambda$ & $\\mu$ & $\\mathcal{A}$ & $\\mathcal{B}$ & $\\mathcal{C}$ \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n Polystyrene & $ 1.71$ & $0.95$ & $-10$ & $-8.3$ & $-10.6$ \\\\ \n\t\t\tSteel Hecla 37 & $111$ & $82.1$ & $-358$ & $-282$ & $-177$ \\\\\n\t\t\tAluminium 2S & $57$ & $27.6$ & $-228$ & $-197$ & $-102$ \\\\\n\t\t\tPyrex glass & $13.5$ & $27.5$ & $420$ & $-118$ & $132$ \\\\\n\t\t\tSiO$_2$ melted & $15.9$ & $31.3$ & $-44$ & $93$ & $36$ \\\\ \n\t\t\t\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table}\nTo evaluate the importance of the non-linear correction, we computed the critical axial stretch ratio of column buckling for two solids.\nIn Table \\ref{tab:1}, we list the second- and third-order elastic constants of five compressible solids, as collected by Porubov \\cite{Poru03} (in the Table we converted the ``Murnaghan constants'' given by Porubov to the Landau constants $\\mathcal{A,B,C}$).\nFigure \\ref{fig5} shows the variations of $\\lambda_3$ with the squared slenderness $(B\/L)^2$, for pyrex and silica (the last two lines of Table \\ref{tab:1}).\n\\begin{figure}[!ht]\n\\centering \\mbox{\\subfigure{\\epsfig{figure=Pyrex.pdf, 
width=.45\\textwidth}}}\n \\quad \\quad\n \\subfigure{\\epsfig{figure=SiO2.pdf, width=.45\\textwidth}}\n \\caption{Comparison of the different Euler formulas obtained by expanding the exact solution to order 2 (classical Euler buckling formula, plot labeled ``Euler$_2$'') and to order 4 (plot labeled ``Euler$_4$''), for pyrex (figure on the left) and for silica (figure on the right).}\n \\label{fig5}\n\\end{figure}\n\n\n\\section{Conclusions}\n\n\nThe present analysis provides an asymptotic formula for the critical value of the load for the Euler buckling problem, with guided-guided (sliding-sliding) end conditions. This formula was checked, both in the incompressible limit and in particular cases, against the exact value of the buckling load obtained from the exact solutions. Not surprisingly, it reinforces the universal and generic nature of the Euler buckling formula, as the correction is small for most systems even when nonlinear elastic effects and nonlinear geometric effects are taken into account. It would be of great interest to see if these effects could be observed experimentally.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Method, results of modeling, and application} \n \nThe most important and useful sources for determining the masses $M$ and radii $R$ of neutron stars (NSs) \nare X-ray bursting NSs with photospheric radius expansion \\cite{LvPT:93}. \n The relation between the observed normalization $K$ (for a blackbody fit of the spectra) and the \nactual ratio of the NS radius $R$ to the distance during late outburst phases is: \n\\begin{equation} \\label{u1} \n K^{1\/2} = \\frac{R_{\\rm BB}{\\rm (km)}}{D_{10}} = \\frac{R{\\rm (km)}}{\\fc^2~D_{10}}(1+z), \n\\end{equation} \nwhere $D_{10}$ is the distance in units of 10 kpc, and $\\fc = T_{\\rm c}\/T_{\\rm eff}$ is a color correction factor. \nTherefore, during these phases the $K(t)$ dependence reflects the $\\fc(t)$ dependence only. 
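As a quick numerical sketch of relation \\eqref{u1}: with purely illustrative values for $R$, $z$, and $\\fc$ (none of these are fitted results; the distance $D = 5.3$ kpc adopted below for 4U\,1724$-$307 gives $D_{10} = 0.53$), the normalization $K$ follows directly, and the scaling $K \\propto \\fc^{-4}$ at fixed $R$, $z$, $D$ is what makes $K(t)$ track $\\fc(t)$.

```python
import math

def blackbody_normalization(R_km, z, f_c, D10):
    """K from K^{1/2} = R (1+z) / (f_c^2 D_10): the squared apparent
    blackbody radius in units of (km / 10 kpc)^2."""
    R_bb = R_km * (1.0 + z) / f_c**2   # apparent blackbody radius, km
    return (R_bb / D10) ** 2

# Hypothetical inputs: R = 12 km, z = 0.3, f_c = 1.4, D = 5.3 kpc
K = blackbody_normalization(12.0, 0.3, 1.4, 0.53)

# With R, z, and D fixed during the late phases, K varies in time only
# through f_c: K(t) is proportional to f_c(t)**(-4).
```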
\nWe suggest fitting the observed $K^{-1\/4}$ -- $F$ relation with a theoretical $\\fc$ -- $l\\equiv L\/L_{\\rm Edd}$ relation, where $F$ is \nthe integral observed flux. From this fit we can obtain two independent values: $R{\\rm (km)}\\times(1+z)\/D_{10}$ \nand $F_{\\rm Edd} \\sim L_{\\rm Edd}\/((1+z)D^2_{10})$. Combining these values, we can obtain an observed $M\/R$ relation, \nwhich is independent of the distance and physically corresponds to the maximum possible effective temperature on \nthe NS surface. \nIf the distance is known, \n we can find $M$ and $R$ \nsimultaneously. \nFor this method, extended theoretical $\\fc(l)$ calculations are necessary. \n \nWe computed model \natmospheres of X-ray bursting NSs subject to the constraints \nof hydrostatic and radiative equilibrium, assuming planar geometry, \nin the LTE approximation with Compton scattering taken into account (see details \nof the code in \\cite{SP:06,SW:07}). \n \nWe calculated an extended set of NS model \natmospheres with 6 chemical compositions (pure H, He, and solar H\/He mixture \nwith $Z$ = 1, 0.3, 0.1 and 0.01 $Z_\\odot$), 3 surface gravities: $\\log~g$ = 14.0, 14.3 and \n14.6, and 20 luminosities $L$: 0.001, 0.003, 0.01, 0.03, 0.05, 0.07, 0.1, 0.15, 0.2, 0.3, 0.4, \n0.5, 0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, and 0.98 $L_{\\rm Edd}$. The corresponding $T_{\\rm eff}$ were \ncalculated from $L$ using $\\log g$ and the chemical composition. \nThe model emergent redshifted spectra were fitted \nby diluted blackbody spectra $F_{\\rm E} = wB_{\\rm E}(\\fc T_{\\rm eff})$ \nin the {\\it RXTE}\/PCA energy band $3 - 20$ keV. Here $w \\approx \\fc^{-4}$ is the dilution factor. \nThe accepted redshifts were calculated from $\\log g$ assuming $M = 1.4 M_\\odot$. \nResults are partially presented in Fig.\\,\\ref{v1sfig1}, left panel. 
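The fitting procedure suggested above can be sketched with a toy example. Since \\eqref{u1} gives $K^{-1\/4} = A\\,\\fc(F\/F_{\\rm Edd})$ with $A = [D_{10}\/(R(1+z))]^{1\/2}$, the two fit parameters are $A$ and $F_{\\rm Edd}$. Below, a simple quadratic stand-in replaces the tabulated $\\fc(l)$ curves (the real curves come from the model-atmosphere grid described above), synthetic "observed" data are generated, and both parameters are recovered by least squares; all numerical values are hypothetical.

```python
import numpy as np

# Toy stand-in for a tabulated theoretical color-correction curve f_c(l).
def f_c(l):
    return 1.4 + 0.6 * l**2

# Model for the data: K^{-1/4}(F) = A * f_c(F / F_Edd).
A_true, F_Edd_true = 0.25, 1.0           # hypothetical "true" parameters
rng = np.random.default_rng(0)
F = np.linspace(0.1, 0.9, 40)            # observed fluxes, arbitrary units
obs = A_true * f_c(F / F_Edd_true) + rng.normal(0.0, 1e-3, F.size)

# For this quadratic toy f_c the model is linear in (c0, c1):
#   obs = c0 + c1 * F**2,  c0 = 1.4 A,  c1 = 0.6 A / F_Edd**2
design = np.column_stack([np.ones_like(F), F**2])
(c0, c1), *_ = np.linalg.lstsq(design, obs, rcond=None)

A_fit = c0 / 1.4
F_Edd_fit = np.sqrt(0.6 * A_fit / c1)
```

In practice the theoretical $\\fc(l)$ curves are tabulated rather than polynomial, so a general nonlinear least-squares fit is used instead of this linearized shortcut.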
\n \n\\begin{figure} \n \\includegraphics[height=.185\\textheight]{sv1_01.eps} \n \\includegraphics[height=.175\\textheight]{sv1_02.eps} \n \\includegraphics[height=.18\\textheight]{sv1_03.eps} \n \\caption{ \\label{v1sfig1} \n{\\it Left:} Dependence of the color correction factor on the relative luminosity \nfor low gravity and various chemical compositions in NS atmosphere models. \n {\\it Middle:} Comparison of the observed $K^{-1\/4} - F$ dependence for 4U\\,1724$-$307 (crosses) with the best-fit theoretical models \n$\\fc - l$. \n {\\it Right:} Constraints on the mass and radius of the neutron star 4U\\,1724$-$307. \n} \n\\end{figure} \n \n \n We fitted the observed $K^{-1\/4}$ -- $F$ relation, obtained for the extremely long outburst of \n4U\\,1724$-$307~ on November 8, 1996 ({\\it RXTE}, \\cite{molkov:00}), with the computed $\\fc$ -- $l$ relations. \nWe obtained constraints on $R$ and $M$ for the adopted \ndistance $D = 5.3 \\pm 0.6$ kpc \\cite{ort:97} and various chemical compositions, see Fig.\\,\\ref{v1sfig1}. \nThe values of $M$ and $R$ obtained for H-rich atmospheres correspond to a stiff equation of state in the inner NS core. \nHelium atmospheres are not acceptable. More details can be found in \\cite{SPRW:10}. \n \n\\begin{theacknowledgments} \n \nThe work is supported by the DFG grant SFB \/ Transregio 7 ``Gravitational Wave Astronomy'' (V.S.), \nthe Russian Foundation for Basic Research (grant 09-02-97013-p-povolzhe-a, V.S.), and \nthe Academy of Finland (grant 127512, J.P.). 
\n\\end{theacknowledgments} \n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nStochastic optimization problems of the following form\n\\begin{align}\\label{eq-main01}\n\\inf_{\\boldsymbol{\\beta} \\in \\mathcal D} \\rho^F(Y \\cdot \\boldsymbol{\\beta}^\\top \\mathbf X)\n\\end{align}\narise naturally from many machine learning (ML) and operations research (OR) applications,\nwhere $F$ represents the joint distribution of random variables $(Y,\\mathbf X) \\in \\{-1,1\\} \\times {\\mathbb R}^n$, and $\\boldsymbol{\\beta}^\\top \\mathbf X$ is a random variable linearly depending on the decision variable $\\boldsymbol{\\beta}$ and can be interpreted generally as an affine decision rule. The binary random variable $Y$, taking values from $\\{-1,1\\}$, occurs most often in ML to represent binary outcomes\nfor a classification problem. In the setup where $Y$ is constant, the formulation \\eqref{eq-main01} covers a wide array of regression problems in ML and risk minimization problems in OR. The function $\\rho^F$ generally represents a measure of risk that maps a given random variable to a real value that quantifies the riskiness of the random variable. The notion of risk in this paper is broadly defined as the undesirability of a random variable $Z$, i.e., $Z$ with a larger value of $\\rho^F(Z)$ is less preferable. In ML, the measure $\\rho^F$ typically takes the form of an expected function, i.e.\n$ \\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$,\nwhere $\\ell$ represents a loss function used to penalize regression or classification errors and the expected penalty $\\mathbb{E}^F[\\ell(Z)]$ is considered as the risk of prediction errors. In OR applications, the function $\\ell$ often represents a disutility function and the expected disutility is considered as a measure of risk. In some important ML and OR applications, however, the measure $\\rho^F$ can go beyond an expected function. 
This is the case for example of $\\nu$-support vector machine (\\cite{SSWB00}) in ML, or Conditional Value-at-Risk (CVaR) minimization in portfolio optimization, where the measure $\\rho^F$ is a non-expected function, i.e. nonlinear in distributions.\n\nIn this paper, we study the distributionally robust counterpart of \\eqref{eq-main01}, i.e.\n\\begin{align}\\label{eq-main02}\n\\inf_{\\boldsymbol{\\beta} \\in \\mathcal D} \\sup_{F\\in {\\cal B}(F_0,\\epsilon)} \\rho^F(Y \\cdot \\boldsymbol{\\beta}^\\top \\mathbf X),\n\\end{align}\nfor a broad class of measures $\\rho^F$, where ${\\cal B}(F_0,\\epsilon)$ denotes a ball of distributions centred at a reference distribution $F_0$ with a radius $\\epsilon$. In particular, we consider the case where the distance between a distribution $F$ and the reference distribution $F_0$ is measured according to the optimal transportation cost of moving the probability mass from $F_0$ to $F$, also known as the Wasserstein distance (\\cite{K42}, \\cite{V09}). The transportation cost of moving a unit mass between any two points $\\xi_1, \\xi_2 \\in \\mathbb{R}^n$ is most often calculated by the norm $\\Vert \\xi_1-\\xi_2\\Vert^p$ for some order $p \\geq 1$, and the corresponding optimal transportation cost is called the type-$p$ Wasserstein distance (\\cite{K19}). Throughout this paper, a ball ${\\cal B}_p(F_0,\\epsilon)$ defined based on the type-$p$ Wasserstein distance is called type-$p$ Wasserstein ball. While there are several other ways to measure the distance between two distributions and define the ball ${\\cal B}(F_0,\\epsilon)$, such as $\\phi$-divergence (\\cite{B13}, \\cite{HH18}, \\cite{JG16}), the optimal transportation cost has become an increasingly popular distance measure in both OR and ML, given its many desirable theoretical properties (see e.g. \\cite{GK16}, \\cite{EK18}). 
The distributionally robust optimization problem \\eqref{eq-main02} with a type-$p$ Wasserstein ball ${\\cal B}_p(F_0,\\epsilon)$ has been studied extensively for the case where $\\rho^F$ is an expected function, i.e. $ \\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$ (see e.g. \\cite{K19}, \\cite{SKE19}, \\cite{GCK17}, \\cite{G22}).\n\nThere are two key findings in this stream of works. First, in the case where the empirical distribution is chosen as the\nreference distribution $F_0$, the Wasserstein distributionally robust optimization model \\eqref{eq-main02}, with ${\\cal B}(F_0,\\epsilon):={\\cal B}_p(F_0,\\epsilon)$ and $\\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$, can, in some settings, enjoy generalization bounds, i.e. upper confidence bounds on the out-of-sample performance (see e.g. \\cite{SKE19}, \\cite{G22}). In particular, \\cite{G22} recently shows that in the case where the Wasserstein ball ${\\cal B}_p(F_0,\\epsilon)$ is of type-1 or 2, i.e. $p=1,2$, the bounds can be established with the radius $\\epsilon$ of the Wasserstein ball ${\\cal B}_p(F_0,\\epsilon)$ chosen in the square-root order $N^{-1\/2}$, where $N$ denotes the sample size. These bounds are in sharp contrast with, and much less conservative than, the bounds derived directly from the measure concentration property of the Wasserstein distance (\\cite{EK18}). The latter require the radius $\\epsilon$ to be chosen in the order of $N^{-1\/\\max\\{2,n\\}}$, where $n$ denotes the dimension of the random vector in \\eqref{eq-main02}, and suffer from the curse of dimensionality (\\cite{SKE19}). Second, in the case where the Wasserstein ball is of type-1, i.e. $p=1$, the Wasserstein distributionally robust optimization model \\eqref{eq-main02} with $\\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$ has been found to be equivalent to the nominal problem \\eqref{eq-main01} with the addition of a regularizer on the decision variable $\\boldsymbol{\\beta}$ when the loss function $\\ell$ is Lipschitz continuous (\\cite{SKE19}). 
\\cite{BKM19} shows a similar equivalence relation holds when the Wasserstein ball is of type-2, i.e. $p=2$, and the loss function $\\ell$ is a square function. This relation to the classical regularization scheme, commonly applied in ML, provides a powerful interpretation of the distributionally robust optimization model and has stimulated considerable interest in its applications in ML and OR (see e.g. \\cite{BK21}, \\cite{BKM19}, \\cite{CP18}, \\cite{GCK17}, \\cite{GCK22}).\n\n\nIt remains largely unclear, however, whether these findings, particularly the generalization bounds, can be carried over to the general setting of \\eqref{eq-main02}, where the ball ${\\cal B}(F_0,\\epsilon)$ is a general type-$p$ Wasserstein ball ${\\cal B}_p(F_0,\\epsilon)$, $p \\in [1,\\infty]$ and $\\rho^F$ is a general measure of risk, i.e. potentially a non-expected function. The question of how to bound out-of-sample risk for a general measure of risk $\\rho^F$ arises naturally from many risk minimization problems (see e.g. \\cite{PDB16}, \\cite{L18}, and references therein) or regression\/classification problems that entail the use of a risk measure (see e.g. \\cite{RU13}, \\cite{RUZ08}, \\cite{GU17}). To date, however, even the notion of generalization bounds has not been well defined in the literature for a general measure of risk $\\rho^F$. It is perhaps doubtful also whether generalization bounds can be established for a general measure of risk $\\rho^F$ without suffering from the curse of dimensionality, given that such bounds are only known for the special case of type-1 and 2 Wasserstein ball and an expected function $\\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$ (\\cite{G22}), and that a general measure $\\rho^F$ can have distinctly different properties than an expected function. 
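To make the notion of a non-expected measure of risk concrete, consider CVaR, mentioned earlier as a measure $\\rho^F$ that is nonlinear in the distribution. The sketch below evaluates a standard empirical CVaR estimator, the average of the worst $\\alpha$-fraction of outcomes, on hypothetical margin values $Y \\cdot \\boldsymbol{\\beta}^\\top \\mathbf X$; it is an illustration of the object being bounded, not part of the paper's analysis.

```python
import numpy as np

def empirical_cvar(z, alpha):
    """Empirical CVaR_alpha of the loss -z: the average of the worst
    ceil(alpha * N) outcomes (small or negative margins are the bad ones).
    Unlike an expected loss E[l(Z)], this functional is nonlinear in the
    underlying distribution."""
    losses = np.sort(-z)[::-1]                 # largest losses first
    k = max(1, int(np.ceil(alpha * z.size)))
    return losses[:k].mean()

z = np.array([2.0, 1.5, 0.5, -0.5, -1.0])      # hypothetical margins
worst_40pct = empirical_cvar(z, 0.4)           # mean of the two worst losses
```

At $\\alpha = 1$ the estimator reduces to the plain expected loss, which is exactly the boundary case where the measure becomes an expected function.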
In fact, even in the literature of ML, the question of how to obtain generalization bounds, possibly through the classical regularization scheme, for the nominal problem \\eqref{eq-main01} in general has been largely left open (see e.g. \\cite{SB14}). This may have to do with the fact that existing analyses in Wasserstein DRO or ML for building generalization bounds rely heavily on exploiting the properties of the expected function. As a key contribution of this paper, we show how to establish generalization bounds for the general model \\eqref{eq-main02} and bypass the curse of dimensionality by leveraging the structure of affine decision rules. Our result may be viewed as a generalization of the result in \\cite{SKE19}, who study the special case of \\eqref{eq-main02} with the type-1 Wasserstein ball and an expected function as the measure $\\rho^F$. Their approach to derive \ngeneralization bounds with the $N^{-1\/2}$-rate, i.e. with the radius $\\epsilon$ chosen in the square-root order, is fundamentally different from ours and can hardly be applied beyond this special case, since it relies heavily on the equivalency of this special case to a regularized model. \\cite{G22} is a recent effort to break the curse of dimensionality in studying generalization bounds for a more general setting of Wasserstein DRO. While the setting in \\cite{G22} is more general than ours in that it does not impose the structure of affine decision rules, it is more restrictive in that the measure $\\rho^F$ is largely limited to an expected function. The approach of \\cite{G22} not only relies on the properties of the expected function, but is also applicable only to type-1 and 2 Wasserstein balls. Given the reliance of existing approaches on the properties of the expected function, one may suspect that the availability of $N^{-1\/2}$-rate bounds may hinge heavily on the functional form of $\\rho^F$. 
Yet, as shown in this paper, it turns out that the bounds can be established (almost) independently of the form of $\\rho^F$ and thus for the model \\eqref{eq-main02} in great generality. Our finding that the Wasserstein DRO model \\eqref{eq-main02} can break the curse of dimensionality for virtually any problem with affine decision rules offers a solid theoretical underpinning for justifying the power of Wasserstein DRO in general out-of-sample tests.\n\nAnother focus of this work is on investigating whether the other key finding of Wasserstein DRO, namely its equivalency to the classical regularization scheme in ML, can also be carried over to a more general setting in \\eqref{eq-main02}.\nTo answer this, we pay particular attention first to the setting where the Wasserstein ball ${\\cal B}(F_0,\\epsilon)$ is of any type, i.e. $p \\in [1,\\infty]$, and the measure $\\rho^F$ is an expected function $ \\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$ with a loss function $\\ell$ having a growth order $p$. The setting includes, as a special case, the distributionally robust least-squares regression problem studied in \\cite{BKM19} when $p=2$ and the loss function $\\ell$ is a square function, and many other important instances of classification, regression, and risk minimization problems (see Section \\ref{rcr}). We show that for many loss functions $\\ell$ arising from practical applications, it is possible to build an exact relation between the Wasserstein DRO model \\eqref{eq-main02} and the classical regularization scheme in more general terms. To demonstrate the generality of our result, we prove something stronger: there is no loss function $\\ell$, other than the ones we identify, under which the exact relation can hold. This gives a definite answer as to how far one can interpret the Wasserstein DRO model from a classical regularization perspective. 
Our regularization results reveal also the tractability of solving the Wasserstein DRO model for many non-Lipschitz continuous loss functions $\\ell$ and higher-order Wasserstein balls, i.e. $p>1$. It came to our attention that the recent work of \\cite{SZZ21} proposes to solve Wasserstein distributionally robust classification and regression problems for non-Lipschitz continuous loss functions $\\ell$ and higher order Wasserstein balls. Yet, their focus is on solving the problems by an approximation approach, given the challenge of identifying tractable reformulations for the Wasserstein DRO problems. Our results, on the other hand, show the cases that can be solved exactly via regularization reformulations. The results also enable us to discover the equivalency relation in the setting where the measure $\\rho^F$ is a non-expected function. To the best of our knowledge, in this setting, the exact relation between the Wasserstein DRO model \\eqref{eq-main02} and classical regularization has been known only for the case where the measure $\\rho^F$ is variance (\\cite{BCZ22}) or a distortion risk measure (\\cite{W14}). We show that the equivalency relation exists for a larger family of measures, including for instance higher-order risk measures (\\cite{K07}) and other measures emerging more recently from the literature of OR and ML (\\cite{RU13}, \\cite{RUZ08}, \\cite{GU17}).\n\n\n\\subsection*{Related work}\n\n\\emph{From the perspective of generalization bounds}.\nA series of works done by \\cite{BKM19}, \\cite{BK21}, \\cite{BMS22} take a different approach to tackle the curse of dimensionality. They study the classical setting of Wasserstein DRO, where the measure $\\rho^F$ is an expected function, and propose a radius selection rule for Wasserstein balls. They show the rule can be applied to build a confidence region of the optimal solution, and the radius can be chosen in the square-root order as the sample size goes to infinity. 
Although this allows for bypassing the curse of dimensionality, the bounds derived from the rule are only valid in the asymptotic sense. \\cite{BCZ22} also takes this approach to obtain generalization bounds for mean-variance portfolio selection problems. On the other hand, the generalization bounds established in this paper, like those in \\cite{SKE19}, \\cite{CP18}, and \\cite{G22}, break the curse of dimensionality in a non-asymptotic sense, i.e. applicable to any finite sample size and dimension.\n\n\n\n\n\n\\emph{From the perspective of the equivalency between Wasserstein DRO and regularization}. While the focus of this work is on studying the exact equivalency between Wasserstein DRO and regularization, there is an active stream of works studying the asymptotic equivalence in the setting where the measure $\\rho^F$ is an expected function\n(see \\cite{GCK17, BKM19, BMS22, VNSDMS18, BDSW20}). In particular, \\cite{GCK22} introduce the notion of variation regularization and show that for a broad class of loss functions, Wasserstein DRO is asymptotically equivalent to a variation regularization problem.\n\n\n\n\nThe rest of the paper is organized as follows. In Section \\ref{WDRO}, we start with some preliminaries regarding Wasserstein distributionally robust optimization and then provide a list of examples for different measures $\\rho^F$. In Section \\ref{Gen}, we provide generalization bounds for the general formulation \\eqref{eq-main02}. We perform in Section \\ref{reg} a systematic study of the equivalency between the distributionally robust optimization model \\eqref{eq-main02} and the regularized versions of the problem \\eqref{eq-main01}. Section \\ref{sec:conclusion} concludes the paper.\n\n\\section{Wasserstein Distributionally Robust Optimization Model} \\label{WDRO}\n\\subsection{Preliminaries}\\label{sec:preliminaries}\nLet $(\\Omega, \\mathcal A,\\mathbb{P})$ be an atomless probability space. 
A random vector $\\xi$ is a measurable mapping from $\\Omega$ to $\\mathbb{R}^{n+1}$, $n\\in\\mathbb{N}$. Denote by $F_{\\xi}$ the distribution of $\\xi$ under $\\mathbb{P}$.\nFor $p\\ge 1$, let $\\mathcal{M}_p:=\\mathcal{M}_p(\\Xi)$ be the set of all distributions on $\\Xi\\subseteq\\mathbb{R}^{n+1}$ with finite $p$th moment in each component. Denote $q$ as the H\\\"{o}lder conjugate of $p$, i.e. $1\/p+1\/q=1$.\nRecall that given any two distributions $F_1 \\in \\mathcal{M}_p$ and $F_2 \\in \\mathcal{M}_p$, the type-$p$ Wasserstein metric is defined as\n\\begin{align} \\label{eq:dwasser}\nW_{d,p}\\left(F_1, F_2\\right):= \\left( \\inf_{\\pi \\in \\Pi(F_1,F_2)}\n\\mathbb{E}^\\pi [d(\\xi_1, \\xi_2)^p] \\right)^{ {1}\/{p}},\n\\end{align}\nwhere $d(\\cdot,\\cdot):\\Xi\\times\\Xi\\to\\mathbb{R}_{\\geq 0}\\cup\\{+\\infty\\}$ is a metric on $\\Xi$.\nThe set $\\Pi(F_1,F_2)$ denotes the set of all joint distributions of $\\xi_1 \\in \\Xi$ and $\\xi_2 \\in \\Xi$ with marginals $F_1$ and $F_2$ respectively. The metric is often interpreted as the minimal transportation cost of moving the mass from the distribution $F_1$ to the distribution $F_2$ with the cost calculated according to the chosen function $d( \\xi_1, \\xi_2)^p$.\n\nConsider now the distributionally robust optimization problem \\eqref{eq-main02}, where a ball of distributions\nneeds to be defined for random variables $\\xi=(Y, \\mathbf X) \\in \\Xi$ with $\\Xi=\\{-1,1\\} \\times {\\mathbb R}^n \\subseteq \\mathbb{R}^{n+1}$. 
We apply the type-$p$ Wasserstein metric \\eqref{eq:dwasser} with $d( \\xi_1 ,\\xi_2 )= d ((Y_1,\\mathbf X_1),(Y_2,\\mathbf X_2)) $, defined by the following additively separable form\n\\begin{align}\\label{eq-classificationmetric}\nd((Y_1,\\mathbf X_1), (Y_2,\\mathbf X_2)) :=\n\\Vert \\mathbf X_1-\\mathbf X_2 \\Vert + \\Theta(Y_1-Y_2),\n\\end{align}\nwhere $\\Vert \\cdot \\Vert$ is any given norm on $\\mathbb{R}^n$ with its dual norm $\\|\\cdot\\|_*$ defined by $\\|\\mathbf y\\|_*=\\sup_{\\|\\mathbf x\\|\\le 1}\\mathbf x^\\top \\mathbf y$, and $\\Theta: \\mathbb{R} \\rightarrow \\{0,\\infty\\}$ satisfies $\\Theta(s) = 0$ if $s=0$ and $\\Theta(s) = \\infty$ otherwise. That is, the function \\eqref{eq-classificationmetric} assigns an infinitely large cost to any discrepancy in $Y$, i.e. $Y_1-Y_2 \\neq 0$, and reduces to a general norm on $\\mathbf X$ when there is no discrepancy in $Y$, i.e. $Y_1-Y_2=0$. With this choice of metric, we define the ball of distributions $\\overline{{\\cal B}}_p(F_0,\\epsilon)$ by\n\\begin{align}\\label{eq-WU-multi}\n\t\\overline{{\\cal B}}_{p}(F_0,\\epsilon) =\\left\\{F\\in \\mathcal{M}_p: W_{d,p}(F,F_0) \\le\\epsilon\\right\\},\n\\end{align}\nand call it the type-$p$ Wasserstein ball throughout this paper. In the remainder of this paper, we will show that the distributionally robust optimization problem \\eqref{eq-main02} with the above definition of Wasserstein ball, i.e.\n\\begin{align}\\label{eq-main05}\n\\inf_{\\boldsymbol{\\beta}\\in \\mathcal D} \\sup_{F\\in \\overline{{\\cal B}}_{p}(F_0,\\epsilon)} \\rho^F(Y \\cdot \\boldsymbol{\\beta}^\\top \\mathbf X)\n\\end{align}\ncan enjoy generalization guarantees for a general measure $\\rho^F$ and any type-$p$ Wasserstein ball. 
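Although the infimum in \\eqref{eq:dwasser} admits no closed form in general, in the one-dimensional case (to which our generalization analysis later reduces the problem) the optimal coupling between two empirical distributions with equally many atoms simply pairs the sorted samples. A minimal numerical sketch of this fact (NumPy assumed; illustrative only, not part of the formal development):

```python
import numpy as np

def wasserstein_1d(x, y, p=2.0):
    """Type-p Wasserstein distance between two one-dimensional empirical
    distributions with equally many, equally weighted atoms: in one
    dimension the optimal coupling pairs the sorted samples."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal sample sizes assumed for simplicity"
    return float(np.mean(np.abs(x - y) ** p) ** (1.0 / p))

# A point mass translated by 3 is at distance 3 for every order p.
print(wasserstein_1d([0, 0, 0], [3, 3, 3], p=1))  # -> 3.0
print(wasserstein_1d([0, 0, 0], [3, 3, 3], p=2))  # -> 3.0
```

For unequal sample sizes, or for the general ground metric $d$ on $\\Xi$, the distance is the full transportation problem \\eqref{eq:dwasser} and requires solving a linear program.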
Thereafter, we will shed light on this general guarantee by drawing the connection between the distributionally robust optimization model \\eqref{eq-main05} and the classical regularization scheme in Section \\ref{reg}.\n\nWe first highlight in the next section the generality of the model \\eqref{eq-main05} to accommodate a wide array of measures $\\rho^F$ arising from machine learning and operations research applications.\n\n\\subsection{Classification, Regression, and Risk Minimization} \\label{rcr}\n\n\n\\subsubsection{Classification}\nThe distributionally robust optimization model \\eqref{eq-main05} can be directly applied to classification problems in ML, where the random variable $Y \\in \\{-1,1\\}$ is often termed the output and the random variables $\\mathbf X \\in {\\mathbb R}^n$ are considered as the input. The use of a linear scoring function $\\boldsymbol{\\beta}^{\\top}\\mathbf X$ remains the most popular approach in classification to predict the sign of the output $Y$, and the product $Y \\cdot \\boldsymbol{\\beta}^\\top \\mathbf X$ would be positive if the prediction is correct and negative otherwise. The set $\\mathcal D$ encodes the prior knowledge of the decision variables $\\boldsymbol{\\beta}$, also called the weight variables. Distributionally robust classification problems \\eqref{eq-main05} that minimize the worst-case expected prediction error, i.e. $ \\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$, have been studied in \\cite{SKE19} for the case where $p=1$, i.e. the type-1 Wasserstein ball, and the loss function $\\ell$ is convex Lipschitz continuous. 
We present several examples of $\\rho^F$ below for classification, where either the loss function $\\ell$ is not\nLipschitz continuous or the measure $\\rho^F$ is not an expected function.\n\n\\paragraph{Example (i): Higher-order hinge loss}\n$\\rho(Z) = \\mathbb{E}[(1-Z)_+^s]$, $s \\geq 1$.\n~\n\nIn the case $s=2$, the classification model is also known as smooth support vector machine (SSVM), first proposed in \\cite{LM01}.\n\n\\paragraph{Example (ii): Higher-order SVM}\n$\\rho(Z) = \\mathbb{E}[|1-Z|^s]$, $s \\geq 1$.\n~\n\nIn the case $s=2$, the classification model is widely known as least-squares support vector machine (LS-SVM) (\\cite{SV99}).\n\n\\paragraph{Example (iii): Sum-Exp}\n$\\rho(Z) = \\frac{1}{t}\\left( \\mathbb{E}[e^{-tZ}]\\right)$, $t > 0$.\n~\n\nThis measure appears in the popular classification model, AdaBoost (\\cite{FS97}).\n\n\\paragraph{Example (iv): $\\nu$-support vector machine}\n$\\rho(Z) = {\\rm CVaR}_\\alpha(-Z)$, $\\alpha\\in(0,1)$, where CVaR denotes the conditional value-at-risk (\\cite{RU02}) defined by\n\\begin{align*}\n\t {\\rm CVaR}_\\alpha(Z)=\\frac{1}{1-\\alpha}\\int_\\alpha^1 F^{-1}_Z(s)\\mathrm{d} s,\n\\end{align*}\nwhere $F_Z^{-1}$ is the quantile function of $Z$. It is known that $\\nu$-support vector machine, introduced in the seminal work of \\cite{SSWB00}, employs CVaR as the measure (see e.g. \\cite{GU17}).\n\n\n\n\\subsubsection{Regression}\nThe distributionally robust optimization model \\eqref{eq-main05} can also be applied to regression problems in ML. Namely, without loss of generality, the output in regression problems can be represented by the first random variable, denoted by\n $X_{1}$, in $\\mathbf X$ given that the output in regression takes real values, whereas the input can be represented by the rest of the random variables, denoted by $\\mathbf X_{(2,:)}$, in $\\mathbf X$. 
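In code, this output\/input split means that a linear regressor corresponds to the score $X_1-\\boldsymbol{\\beta}_r^\\top \\mathbf X_{(2,:)}=(1,-\\boldsymbol{\\beta}_r)^\\top\\mathbf X$; a quick numerical check of this identification (NumPy assumed; the sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))       # rows: samples of (X_1, X_{(2,:)})
beta_r = rng.normal(size=4)       # regressor acting on X_{(2,:)}

beta = np.concatenate(([1.0], -beta_r))   # beta = (1, -beta_r)
score = X @ beta                          # beta^T X, sample by sample
residual = X[:, 0] - X[:, 1:] @ beta_r    # X_1 - beta_r^T X_{(2,:)}

assert np.allclose(score, residual)       # the two representations agree
```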
By setting $\\boldsymbol{\\beta}:= (1, - \\boldsymbol{\\beta}_r)$, $\\boldsymbol{\\beta}_r \\in \\mathcal D$, and letting $F_0$ be a reference distribution satisfying $F_0(\\{Y=1\\})=1$\nin \\eqref{eq-main05}, we arrive at the following distributionally robust regression model\n\\begin{align}\\label{regression}\n\\inf_{\\boldsymbol{\\beta}_r\\in \\mathcal D}\\sup_{F\\in \\overline{{\\cal B}}_{p}(F_0,\\epsilon)} \\rho^F((1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X).\n\\end{align}\nThe above model seeks a linear regressor $\\boldsymbol{\\beta}_r^{\\top} \\mathbf X_{(2,:)}$ to predict the output $X_1 \\in \\mathbb{R}$ from the input\n $\\mathbf X_{(2,:)}$. It has been studied in \\cite{SKE19} for the case where $p=1$, i.e. the type-1 Wasserstein ball, and $ \\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$ with a convex and Lipschitz continuous loss function $\\ell$. Several other examples of $\\rho^F$ are presented below.\n\n\n\n\\paragraph{Example (i): Higher-order regression}\n$\\rho(Z) = \\mathbb{E}[|Z|^s]$, $s\\ge 1$.\n~\n\nIn the case $s=2$, the regression model is the well-known least-squares regression.\n\n\n\\paragraph{Example (ii): Higher-order $c$-insensitive regression}\n$\\rho(Z) = \\mathbb{E}[(|Z|-c)_+^s]$, $s\\ge 1$ and $c\\ge 0$.\n~\n\nThe regression model is the well-known $c$-insensitive support vector regression ($c$-SVR) (\\cite{DBKSV97}) in the case $s=1$ and $c$-smooth support vector regression ($c$-SSVR) (\\cite{LHH05}) in the case $s=2$.\n\n\\paragraph{Example (iii): $\\nu$-support vector regression }\n$\\rho(Z) = {\\rm CVaR}_\\alpha(|Z|)$, $\\alpha\\in(0,1)$.\n~\n\n$\\nu$-support vector regression (\\cite{S98}) is a popular alternative to the $c$-insensitive support vector regression. 
It allows for bypassing the difficulty of specifying the insensitivity parameter $c$ in the $c$-insensitive support vector regression.\n\n\n\\subsubsection{Risk minimization}\nThe distributionally robust optimization model \\eqref{eq-main05} can also accommodate a general risk minimization problem, taking the form of\n\\begin{align}\\label{eq-mainp}\n\\inf_{\\boldsymbol{\\beta}\\in \\mathcal D} \\sup_{F\\in \\overline{{\\cal B}}_{p}(F_0,\\epsilon)} \\rho^F(\\boldsymbol{\\beta}^\\top \\mathbf X).\n\\end{align}\nThe most classical example of a risk minimization problem is portfolio optimization, in which case the variable $\\mathbf X$ represents a random vector of losses from $n$ different financial assets and $\\bm \\beta$ denotes a portfolio vector.\n\n\\paragraph{Example (i): Lower partial moments (LPM)} $\\rho(Z) = \\mathbb{E}[(Z-c)_+^s]$, $s\\ge 1$ and $c\\in\\mathbb{R}$.\n~\n\nLower partial moments represent an important class of downside risk measures, first introduced by \\cite{B75} and \\cite{F77} and more recently studied by \\cite{CSS11} in DRO.\n\n\\paragraph{Example (ii): CVaR-Deviation} $\\rho(Z) = {\\rm CVaR}_\\alpha(Z-\\mathbb{E}[Z])$, $\\alpha\\in(0,1)$.\n\nThis represents an important example of deviation measures built upon risk measures (\\cite{RUZ08}). 
Its generalization can be found in Section \\ref{reg}.\n\n\n\n\\paragraph{Example (iii): Higher moment coherent risk measures} $\\rho(Z) = \\inf_{t\\in\\mathbb{R}}\\{t+c(\\mathbb{E}[(Z-t)_+^s])^{1\/s}\\}$, $s,c\\ge 1$.\n\nThis measure is a well-known generalization of CVaR that is closely related to lower partial moments (LPM) and\ncompatible with second-order stochastic dominance and utility theory (\\cite{K07}).\n\n\n\n~\n\n\n\nWe summarize in Table \\ref{tab-R} a list of measures $\\rho^F$ considered in this paper.\n\n\n\n\\begin{table}\n\\caption{Risk functions analyzed in this paper.}\\label{tab-R}\\renewcommand{\\arraystretch}{2}\n\\begin{center}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{llc}\n\\hline\n \\textbf{Applications} & \\textbf{Risk function} & \\textbf{Formulation}\\\\[8pt]\n\\hline\n\\textbf{Classification} & \\makecell[l]{Higher-order hinge loss\\\\[5pt] Higher-order SVM\\\\\n[5pt] Sum-Exp \\\\\n[5pt] $\\nu$-support vector machine} & \\makecell{$\\mathbb{E}[(1-Z)_+^s]$, $s\\ge 1$\\\\[5pt] $\\mathbb{E}[|1-Z|^s]$, $s\\ge 1$\\\\[5pt] $\\frac1t \\mathbb{E}[e^{-tZ}]$, $t>0$\\\\[5pt] ${\\rm CVaR}_\\alpha(-Z)$, $\\alpha\\in(0,1)$}\\\\\n\\hline\n\\textbf{Regression} & \\makecell[l]{Higher-order regression\\\\[5pt] Higher-order $c$-insensitive\\\\[5pt] $\\nu$-support vector regression} &\\makecell{$\\mathbb{E}[|Z|^s]$, $s\\ge 1$\\\\[5pt] $\\mathbb{E}[(|Z|-c)^s_+]$, $s\\ge 1$, $c\\ge 0$\\\\[5pt] ${\\rm CVaR}_\\alpha(|Z|)$, $\\alpha\\in(0,1)$}\\\\[5pt]\n\\hline\n\\textbf{Risk minimization} & \\makecell[l]{Lower partial moments\\\\[5pt] CVaR-Deviation\\\\[5pt] Higher moment risk measure\n} & \\makecell{$\\mathbb{E}[(Z-c)_+^s]$, $s\\ge 1$, $c\\in\\mathbb{R}$\\\\[5pt] ${\\rm CVaR}_\\alpha(Z-\\mathbb{E}[Z])$, $\\alpha\\in(0,1)$\\\\[5pt] $\\inf_{t\\in\\mathbb{R}}\\{t+c(\\mathbb{E}[(Z-t)_+^s])^{1\/s}\\}$, $s,c\\ge 1$\n} \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\n\n\\section{Generalization bounds} \\label{Gen}\nWasserstein 
distributionally robust optimization is most commonly applied in a data-driven setting, where the joint distribution $F$ of random variables $(Y,\\mathbf X) \\in \\{-1,1\\} \\times {\\mathbb R}^n$ can only be partially observed through sample data $(\\widehat{y}_i,\\widehat{\\mathbf x}_i)$, $i=1,...,N$, independently drawn from $F$. In this setting, the empirical distribution,\ni.e. $\\widehat{F}_N:= \\frac{1}{N} \\sum_{i=1}^N \\delta_{(\\widehat{y}_i,\\widehat{\\mathbf x}_i)}$, is chosen as the reference distribution $F_0$ in Wasserstein DRO where $\\delta_{\\mathbf x}$ denotes the point-mass at $\\mathbf x$. The key question underlying Wasserstein DRO is whether there exist upper confidence bounds on the out-of-sample performances, i.e. generalization bounds, that can scale gracefully in the dimensionality of the random vector $(Y,\\mathbf X)$, i.e. breaking the curse of dimensionality. Such out-of-sample performance guarantees are only known for the case where the Wasserstein ball is of order $p \\in [1,2]$ and the measure $\\rho$ is an expected function, i.e. $ \\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$ for some $\\ell$ (\\cite{G22}, \\cite{SKE19} and \\cite{CP18}). Existing analyses hinge on the specific choice of the order $p$ and loss functions $\\ell$.\n\nIn this section, we seek to answer the question of whether generalization bounds exist for a more general setting of the Wasserstein DRO model \\eqref{eq-main05}. Let $\\rho^F$ be a distribution-invariant measure, i.e. $\\rho^F(Z_1)=\\rho^F(Z_2)$ for any $Z_1 \\equiv_{F} Z_2$\nand $F^*$ denote the true distribution of $(Y,\\mathbf X)$. 
We say that generalization bounds exist if the following condition holds with high probability\n\\begin{align*}\n\t\\rho^{F^*}(Y\\cdot\\bm \\beta^\\top\\mathbf X)\\le \\sup_{F\\in \\overline{{\\cal B}}_p(\\widehat{F}_N,\\epsilon_N)}\\rho^{F}(Y\\cdot\\bm \\beta^\\top\\mathbf X)+\\tau_N,~~\\forall~\\bm \\beta\\in\\mathcal D,\n\\end{align*}\nwhere the radius $\\epsilon_N$ can decrease in the order of $1\/\\sqrt{N}$ (or $\\sqrt{(\\log N)\/N}$) and the residual $\\tau_N$ diminishes in the order of $1\/N$. The bounds, if they exist, would break the curse of dimensionality. The order $1\/\\sqrt{N}$ for the radius closely follows the square-root law.\n\nWe take two steps to study generalization bounds. First, in Section \\ref{out-of-sample-risk} we study how to build a confidence bound on the out-of-sample performance for a single solution $\\bm \\beta$. Then in Section \\ref{union}, we extend the result to all $\\bm \\beta \\in\\mathcal D$ by building a union bound on the out-of-sample performances for all $\\bm \\beta \\in\\mathcal D$, i.e. the generalization bound. The first step is critical. As shown below, quite surprisingly, a confidence bound can be built for any measure $\\rho$ and a Wasserstein ball of any type.\n\n\n\n\n\\subsection{Confidence bounds on out-of-sample risks} \\label{out-of-sample-risk}\nThe notion of out-of-sample performance has not been well defined and studied for a general measure $\\rho$. 
Throughout this section, we adopt the following definitions.\n\\begin{definition}\nGiven a solution $\\bm \\beta \\in\\mathcal D$, its (Wasserstein) in-sample risk is defined by\n\\begin{equation} \\label{in-sample}\n\\sup_{F\\in \\overline{{\\cal B}}_p(\\widehat{F}_N,\\epsilon)}\\rho^{F}(Y\\cdot\\bm \\beta^\\top\\mathbf X),\n\\end{equation}\nwhereas its out-of-sample risk is defined by\n$$\\rho^{F^*}(Y\\cdot {\\bm \\beta}^\\top \\mathbf X).$$\nLet $\\widehat{J}_N$ denote the in-sample risk achieved by the solution of the DRO model \\eqref{eq-main05}, i.e.\n\\begin{align}\\label{eq-JN}\n\\widehat{J}_N=\\inf_{\\bm \\beta\\in \\mathcal D}\\sup_{F\\in \\overline{{\\cal B}}_p(\\widehat{F}_N,\\epsilon)}\\rho^{F}(Y\\cdot\\bm \\beta^\\top\\mathbf X),\n\\end{align}\nand $J_{oos}$ denote the out-of-sample risk of the solution, i.e.\n\\begin{align}\\label{eq-Joos}\nJ_{oos}=\\rho^{F^*}(Y\\cdot\\widehat{\\bm \\beta}_N^\\top \\mathbf X),\n\\end{align}\nwhere $\\widehat{\\bm \\beta}_N$ is the optimal solution of the problem \\eqref{eq-JN}.\n\\end{definition}\n\nWe show that under the following assumptions, the out-of-sample risk of a solution $\\bm \\beta \\in \\mathcal D$ can be bounded by its in-sample risk with high probability.\n\n\\begin{assumption}[Light-tailed distribution]\\label{assum-1}\nThere exists an exponent $a>p$ such that\n$$\nA:=\\mathbb{E}^{F^*}[\\exp({\\|\\mathbf X\\|^a})]<+\\infty.\n$$\n\\end{assumption}\n\n\\begin{assumption}\\label{assum-2}\nThe feasible set $\\mathcal D\\subseteq \\mathbb{R}^n$ is bounded, that is $U_{\\mathcal D}:=\\sup_{\\bm \\beta\\in \\mathcal D}\\|\\bm \\beta\\|_*<+\\infty$.\n\\end{assumption}\n\n\\begin{assumption}\\label{assum-3}\nThe feasible set ${\\mathcal D}\\subseteq \\mathbb{R}^n$ is away from the origin, that is $L_{\\mathcal D}:=\\inf_{\\bm \\beta\\in \\mathcal D}\\|\\bm \\beta\\|_*>0$.\n\\end{assumption}\n\nAssumption \\ref{assum-1} requires the tail of the distribution $F^*$ decays at an exponential rate, which is a common 
assumption in Wasserstein DRO. It enables invoking the measure concentration property of Wasserstein metrics (\\cite{FG15}). Assumptions \\ref{assum-2} and \\ref{assum-3} impose some restrictions on the decision space ${\\mathcal D}$.\nIt is interesting to note that these two assumptions also appeared in the earlier work of \\cite{SKE19}, even though the approach taken in \\cite{SKE19} to build $1\/\\sqrt{N}$-rate confidence bounds is fundamentally different from, and more restrictive than, our approach. Their approach applies only to the case where the Wasserstein ball is of type-1 and the measure $\\rho^F$ is an expected function $\\rho^F(Z) = \\mathbb{E}^F[\\ell(Z)]$ with the loss function $\\ell$ assumed to be convex and Lipschitz continuous. Our approach, on the other hand, allows for building $1\/\\sqrt{N}$-rate confidence bounds for any type of Wasserstein ball and measure $\\rho$.\n\n\n\nBefore delving into the details of our approach, we first present, as the main result of this section, the confidence bounds it delivers.\n\\begin{theorem}\\label{th-FSG}\nSuppose that Assumptions \\ref{assum-1}, \\ref{assum-2} and \\ref{assum-3} hold and $\\eta\\in(0,1)$. 
By setting the radius $\\epsilon$ in problem \\eqref{in-sample} as $\\epsilon_{p,N}(\\eta)\/L_{\\mathcal D}$, where\n\\begin{align}\\label{eq-epsilon}\n\\epsilon_{p,N}(\\eta)=\\begin{cases}\n\\left(\\frac{\\log(c_1\\eta^{-1})}{c_2 N}\\right)^{1\/2},~~&\\text{if}~N\\ge \\frac{\\log(c_1\\eta^{-1})}{c_2},\\\\\n\\left(\\frac{\\log(c_1\\eta^{-1})}{c_2 N}\\right)^{p\/a},~~&\\text{if}~N< \\frac{\\log(c_1\\eta^{-1})}{c_2},\n\\end{cases}\n\\end{align}\nand $c_1,c_2$ are positive constants that only depend on $U_{\\mathcal D},$ $a$, $A$, and $p$\\footnote{Theorem 2 of \\cite{FG15} (in the one-dimension case) shows that for a distribution $F$ on $\\mathbb{R}$ satisfying $\nA:=\\mathbb{E}^{F}[\\exp({\\gamma \\|X\\|^a})]<+\\infty\n$, $a>p$,\nit holds that \n\\begin{align*}\n\t\\mathbb{P}\\left(W_p\\left(\\widehat{F}_{N},F\\right)\\ge \\epsilon\\right)\\le \\begin{cases}\n\t\tc_1\\exp(-c_2 N\\epsilon^2)~~&\\text{if}~\\epsilon\\le 1,\\\\\n\t\tc_1\\exp(-c_2 N\\epsilon^{a\/p})~~&\\text{if}~\\epsilon> 1,\n\t\\end{cases}\n\\end{align*}\nwhere $\\widehat{F}_{N}$ is the empirical distribution based on the independent sample drawn from $F$, and $c_1$ and $c_2$ are constants depending on $\\gamma, a,A,p$; see more details in Theorem 2 of \\cite{FG15}. The constants $c_1$ and $c_2$ in \\eqref{eq-epsilon} are exactly the same as those in Theorem 2 of \\cite{FG15} in one-dimension case with $\\gamma= U_{\\mathcal D}^{-a}$. 
See Appendix for more details.}, we have\n\\begin{align*}\n\\mathbb{P}\\left(\\rho^{F^*}(Y\\cdot\\bm \\beta^\\top\\mathbf X)\\le \\sup_{F\\in \\overline{{\\cal B}}_p(\\widehat{F}_N,\\epsilon_{p,N}(\\eta)\/L_{\\mathcal D})}\\rho^{F}(Y\\cdot\\bm \\beta^\\top\\mathbf X)\\right)\\ge 1-\\eta,~~\\forall \\bm \\beta\\in \\mathcal D.\n\\end{align*}\nIn particular, letting $\\bm \\beta=\\widehat{\\bm \\beta}_N$ in the above inequality yields\n\\begin{align} \\label{perf}\n\\mathbb{P}(J_{oos}\\le \\widehat{J}_N)\\ge 1-\\eta.\n\\end{align}\n\\end{theorem}\nOne can see that the above confidence bounds are essentially dimension-independent, and for any $N\\ge {\\log(c_1\\eta^{-1})}\/{c_2}$, the radius decreases in the order of $1\/\\sqrt{N}$. They also provide an out-of-sample guarantee for the solution of the DRO model \\eqref{eq-JN}, i.e. \\eqref{perf}, for any choice of the type-$p$ Wasserstein ball and measure $\\rho$.\n\nThe approach we take to obtain the above general {dimension-independent} bounds consists of two steps. First, we reduce the Wasserstein ball defined over $(Y,\\mathbf X) \\sim F_{(Y,\\mathbf X)}$, with dimensionality $n+1$, to a Wasserstein ball defined over $Y\\cdot\\bm\\beta^{\\top}{\\bf X} \\sim F_{Y\\cdot\\bm\\beta^{\\top}\\bf X}$, with dimensionality $1$. Then, we derive confidence bounds by applying a measure concentration property of the Wasserstein metric to the one-dimensional Wasserstein ball. Our first step is formally summarized as follows. 
{To state it, we introduce the Wasserstein ball on $\\mathbb{R}^n$ with the metric $d(\\cdot,\\cdot)$ being a norm:\n\\begin{align}\\label{eq-WU-multi-d}\n\t{\\cal B}_{p}(F_0,\\epsilon) =\\left\\{F\\in \\mathcal{M}_p(\\mathbb{R}^n): W_{p}(F,F_0) \\le\\epsilon\\right\\},\n\\end{align}\nwhere \\begin{align} \\label{eq:dwasser-d}\nW_{p}\\left(F_1, F_2\\right):= \\left( \\inf_{\\pi \\in \\Pi(F_1,F_2)}\n\\mathbb{E}^\\pi [\\|\\xi_1- \\xi_2\\|^p] \\right)^{ {1}\/{p}},\n\\end{align}\nand $\\Vert \\cdot \\Vert$ is the norm of \\eqref{eq-classificationmetric} on $\\mathbb{R}^n$ with its dual norm $\\|\\cdot\\|_*$. Without loss of generality, for $n=1$, i.e., on $\\mathbb{R}$, assume that $\\|\\cdot\\|=|\\cdot|$ is the absolute-value norm.\n}\n\n\n\\begin{theorem}\\label{thm:221010-1}\n\tFor $p\\in[1,+\\infty]$, $\\epsilon\\ge0$, $\\bm \\beta\\in\\mathbb{R}^n$, a distribution $F_0$ on $\\{-1,1\\} \\times {\\mathbb R}^n$, and $(Y_0,\\mathbf X_0)\\sim F_0$, we have\n\\begin{align}\n\t\\label{eq:221010-1}\n\t\\{ F_{Y\\cdot\\bf X}: F_{(Y,{\\bf X})} \\in \\overline{\\cal B}_p (F_0,\\epsilon)\\}\n\t={\\cal B}_p (F_{Y_0\\cdot\\bf X_0},\\epsilon)\n\\end{align}\nand\n\\begin{align}\n\t\t\\label{eq:221010-2}\n\t\t\t\\{F_{Y \\cdot \\bm \\beta^\\top \\mathbf X}: F_{(Y,\\mathbf X)}\\in \\overline{\\cal B}_p (F_0,\\epsilon)\\}\n\t\t=\\mathcal B_p\\left(F_{Y_0\\cdot\\bm\\beta^{\\top}\\bf X_0},\\epsilon\\|\\bm\\beta\\|_*\\right).\n\\end{align}\n\\end{theorem}\n\nNote that the above theorem will also be invoked in Section \\ref{reg} for studying the Wasserstein DRO model \\eqref{eq-main05} from a regularization perspective.\nBy Theorem \\ref{thm:221010-1}, we know that\n\\begin{align*}\n\\{F_{Y \\cdot \\bm \\beta^\\top \\mathbf X}: F_{(Y,\\mathbf X)}\\in \\overline{\\cal B}_p (\\widehat{F}_N,\\epsilon)\\}\n= \\mathcal B_p(\\widehat{F}_{N,\\bm \\beta},\\epsilon\\|\\bm \\beta\\|_*),\n\\end{align*}\nwhere $\\widehat{F}_{N,\\bm\\beta}=\\frac{1}{N}\\sum_{i=1}^N 
\\delta_{\\widehat{y}_i\\cdot \\bm\\beta^\\top\\widehat{\\mathbf x}_i}$. If the one-dimensional Wasserstein ball $\\mathcal B_p(\\widehat{F}_{N,\\bm \\beta},\\epsilon\\|\\bm \\beta\\|_*)$ can contain the distribution $F^*_{\\bm\\beta} :=F_{Y^*\\cdot \\bm\\beta^\\top \\mathbf X^*}$, i.e. the distribution of a random variable projected from $(Y^*,\\mathbf X^*) \\sim F^*$, with high probability, this will then allow us to derive the confidence bounds in Theorem \\ref{th-FSG}. Our second step achieves this by applying the following measure concentration property of one-dimensional Wasserstein metric, built upon Theorem 2 of \\cite{FG15}.\n\n\n\n\\begin{lemma}\\label{th-MC}\nIf Assumptions \\ref{assum-1} and \\ref{assum-2} hold, then for any $\\bm \\beta\\in \\mathcal D$, $\\eta\\in(0,1)$ and $N\\ge 1$, we have\n\\begin{align*}\n\\mathbb{P}\\left(W_p\\left(\\widehat{F}_{N,\\bm\\beta},F^*_{\\bm\\beta}\\right)\\le \\epsilon_{p,N}(\\eta)\\right)\\ge1-\\eta,\n\\end{align*}\nwhere $\\widehat{F}_{N,\\bm\\beta}=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\widehat{y}_i\\cdot \\bm\\beta^\\top\\widehat{\\mathbf x}_i}$, $F^*_{\\bm\\beta} :=F_{Y^*\\cdot \\bm\\beta^\\top \\mathbf X^*}$ with $(Y^*,\\mathbf X^*) \\sim F^*$, and $\\epsilon_{p,N}$ is defined in \\eqref{eq-epsilon}.\n\\end{lemma}\n\nCombining the two steps, we obtain the confidence bounds in Theorem \\ref{th-FSG}.\n\n\n\n\\paragraph{Proof of Theorem \\ref{th-FSG}}\n \tDenote by $\\epsilon_N:=\\epsilon_{p,N}(\\eta)\/L_{\\mathcal D}$.\n\tThe result follows by noting that\n\t\\begin{align}\n \\mathbb{P}\\left(\\rho^{F^*}(Y\\cdot\\bm \\beta^\\top\\mathbf X)\\le\n\\sup_{F\\in \\overline{\\cal B}_p(\\widehat{F}_N,\\epsilon_N)}\\rho^{F}(Y\\cdot\\bm \\beta^\\top\\mathbf X)\\right)\n&= \\mathbb{P}\\left(\\rho^{F^*_{\\bm\\beta}}(Z)\\le\n\\sup_{F\\in \\{F_{Y \\cdot \\bm \\beta^\\top \\mathbf X}: F_{(Y,\\mathbf X)}\\in \\overline{\\cal B}_p(\\widehat{F}_N,\\epsilon_N)\\} }\\rho^{F}(Z)\\right) \\notag \\\\\n&\\ge 
\\mathbb{P}\\left(F^*_{\\bm\\beta}\\in\\{F_{Y \\cdot \\bm \\beta^\\top \\mathbf X}: F_{(Y,\\mathbf X)}\\in \\overline{\\cal B}_p (\\widehat{F}_N,\\epsilon_N)\\}\\right)\n\\notag\\\\\n\t&=\\mathbb{P}\\left(F^*_{\\bm\\beta}\\in \\mathcal B_p\\left(\\widehat{F}_{N,\\bm\\beta},\\|\\bm \\beta\\|_*\\epsilon_{p,N}(\\eta)\/L_{\\mathcal D}\\right)\\right) \\notag\\\\\n\t&=\\mathbb{P}\\left(W_p\\left(\\widehat{F}_{N,\\bm\\beta},F^*_{\\bm\\beta}\\right)\\le \\|\\bm \\beta\\|_*\\epsilon_{p,N}(\\eta)\/L_{\\mathcal D}\\right) \\nonumber\\\\\n\t&\\ge \\mathbb{P}\\left(W_p\\left(\\widehat{F}_{N,\\bm\\beta}\n\t,F^*_{\\bm\\beta}\\right)\\le \\epsilon_{p,N}(\\eta)\\right)\\notag\\\\\n\t&\\ge 1-\\eta\\notag,\n\t\\end{align}\nwhere the second equality and the last inequality follow from Theorem \\ref{thm:221010-1} and Lemma \\ref{th-MC}, respectively.\\qed\n\nIt should be clear from the above that one can also obtain naive confidence bounds by applying measure concentration results (\\cite{FG15}) directly to the probability $\\mathbb{P}(F^*\\in \\overline{\\mathcal B}_{p}(\\widehat{F}_N,\\epsilon_N))$.\nThis naive approach, unfortunately, suffers from the curse of dimensionality, with the radius scaling in the order of $O(N^{-\\min\\{p\/n,1\/2\\}})$ (see e.g., \\cite{EK18}).\n\n\n\n\\begin{remark}\nSuppose that the metric $d(\\cdot,\\cdot)$ on $\\{-1,1\\}\\times\\mathbb{R}^n$ is defined by\n\\begin{align} \\label{theta}\nd((Y_1,\\mathbf X_1),(Y_2,\\mathbf X_2)):=\\|\\mathbf X_1-\\mathbf X_2\\|+\\theta|Y_1-Y_2|\n\\end{align}\nfor some $\\theta\\in(0,\\infty]$, and we use the convention that $0\\cdot \\infty=0$.\nObviously, the metric in \\eqref{eq-classificationmetric} is a special case of \\eqref{theta} with $\\theta=\\infty$. Note that\nthe size of $\\overline{\\cal B}_p(F_0,\\epsilon)$ is smaller when $\\theta$ is larger. 
Therefore, the finite-sample guarantee of Theorem \\ref{th-FSG} remains valid if one chooses to apply the metric \\eqref{theta} with $\\theta\\in(0,\\infty)$.\n\\end{remark}\n\n\\subsection{Union bounds} \\label{union}\nWith Theorem \\ref{th-FSG}, we are now ready to move on to building generalization bounds. The confidence bounds in Theorem \\ref{th-FSG} apply to any fixed decision $\\bm \\beta\\in \\mathcal D$. Generalization bounds require further establishing that the in-sample risk can bound the out-of-sample risk uniformly for all $\\bm \\beta\\in\\mathcal D$ with high probability. In the case where the set $\\mathcal D$ is finite, one can easily build generalization bounds for any type of Wasserstein ball and measure $\\rho$ by applying the union bound to Theorem \\ref{th-FSG}. Clearly, these union bounds remain dimension-independent and the radius required to reach any probability level remains in the same order.\n\nThe focus of this section is on the case where the set $\\mathcal D$ is not finite. We build generalization bounds from Theorem \\ref{th-FSG} by applying the standard covering number argument (see e.g., \\cite{G22}). 
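For concreteness, the radius prescription \\eqref{eq-epsilon} of Theorem \\ref{th-FSG} is elementary to evaluate once $c_1,c_2$ are fixed; a sketch (the constants below are placeholders, not the actual constants of \\cite{FG15}):

```python
import math

def radius(N, eta, p, a, c1=2.0, c2=1.0):
    """eps_{p,N}(eta) of eq-epsilon: the rate is N^{-1/2} once N exceeds
    log(c1/eta)/c2, and the slower rate N^{-p/a} for smaller N.
    c1, c2 are placeholder constants, not those of FG15."""
    r = math.log(c1 / eta) / c2
    exponent = 0.5 if N >= r else p / a
    return (r / N) ** exponent

# In the 1/sqrt(N) regime, quadrupling N halves the radius.
assert abs(radius(400, 0.05, p=1, a=2) * 2 - radius(100, 0.05, p=1, a=2)) < 1e-12
```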
Recall that for $\\tau>0$, a $\\tau$-cover of $\\mathcal D$, denoted by $\\mathcal D_\\tau$, is a subset of $\\mathcal D$ such that\nfor each $\\bm \\beta\\in \\mathcal D$, there exists $\\widetilde{\\bm \\beta}\\in\\mathcal D_\\tau$ such that $\\|\\widetilde{\\bm \\beta}-\\bm \\beta\\|_\\mathcal D\\le \\tau$, where $\\|\\cdot\\|_\\mathcal D$ is a norm on $\\mathcal D$.\nThe covering number $\\mathcal N(\\tau;\\mathcal D,\\|\\cdot\\|_\\mathcal D)$ of $\\mathcal D$ with respect to $\\|\\cdot\\|_\\mathcal D$ is defined as the smallest cardinality of a $\\tau$-cover of $\\mathcal D$.\n\nWe show that under the following assumption on the measure $\\rho$, one can obtain $1\/\\sqrt{N}$-rate generalization bounds for any type of Wasserstein ball and measure $\\rho$.\n\n\\begin{assumption}\\label{assum-4}\nLet $p\\in[1,+\\infty]$ and $\\rho: L^p\\to \\mathbb{R}$. Assume that there exist $M>0$ and $k\\in[1,p]$ such that\n\\begin{align*}\n\t|\\rho(Z_1)-\\rho(Z_2)|\\le M \\left(\\mathbb{E}[|Z_1-Z_2|^k]\\right)^{1\/k},~~\\forall Z_1,Z_2\\in L^p.\n\\end{align*}\n\\end{assumption}\n\n\\begin{theorem}\\label{th-unionrho}\nSuppose that Assumptions \\ref{assum-1}-\\ref{assum-4} hold, $\\eta\\in(0,1)$, and\n\t\\begin{align}\\label{eq-epsilonunion}\n\t\t\\epsilon_N:=\\epsilon_{p,N}\\left(\\frac{\\eta}{\\mathcal N(1\/N;\\mathcal D,\\|\\cdot\\|_*)}\\right)\\big\/L_{\\mathcal D},\n\t\\end{align}\nwith $\\epsilon_{p,N}(\\cdot)$ defined in \\eqref{eq-epsilon}. Then\n\\begin{align*}\n\\mathbb{P}\\left(\t\\rho^{F^*}(Y\\cdot\\bm \\beta^\\top\\mathbf X)\\le \\sup_{F\\in \\overline{\\cal B}_p(\\widehat{F}_N,\\epsilon_N)}\\rho^{F}(Y\\cdot\\bm \\beta^\\top\\mathbf X)+\\tau_N,~~\\forall~\\bm \\beta\\in\\mathcal D \\right) \\ge 1-\\eta-\\frac1N,\n\\end{align*}\n where\n$\n\\tau_N= {M[2\\left(\\mathbb{E}^{F^*}[\\|\\mathbf X\\|^k]\\right)^{1\/k}+\\left({\\rm\\mathbb{V}ar}^{F^*}(\\|\\mathbf X\\|^k)\\right)^{1\/2k}+\\epsilon_N]}\/{N}.\n$\n\\end{theorem}\n\nIt is known that the covering 
number in \\eqref{eq-epsilonunion} satisfies $\\log(\\mathcal N(\\tau; \\mathcal D,\\|\\cdot\\|_*))\\le n\\log(1+2B\/\\tau)$, where $B$ is the diameter of $\\mathcal D$, i.e. $B=\\sup_{\\boldsymbol{\\beta},\\widetilde{\\boldsymbol{\\beta}}\\in\\mathcal D}\\|\\boldsymbol{\\beta}-\\widetilde{\\boldsymbol{\\beta}}\\|_*$ (see Example 5.8 of \\cite{W19}). Moreover, we have $B\\le 2 U_{\\mathcal D}$, which follows from Assumption \\ref{assum-2}. With these facts, Theorem \\ref{th-unionrho} implies that the radius $\\epsilon_N$ can be chosen in the order of $\\sqrt{(\\log N)\/N}$ such that the in-sample risk can be an upper bound of the \nout-of-sample risk, up to a residual in the order of $1\/N$, for all $\\boldsymbol{\\beta}\\in\\mathcal D$. That is, $1\/\\sqrt{N}$-rate generalization bounds exist. Assumption \\ref{assum-4} holds for a large class of measures $\\rho$ but does not hold for expected functions $\\rho(Z)=\\mathbb{E}[\\ell(Z)]$ when the loss function $\\ell$ is not Lipschitz continuous. 
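As an illustration, $\\rho={\\rm CVaR}_\\alpha$ satisfies Assumption \\ref{assum-4} with $k=1$ and $M=1\/(1-\\alpha)$: by monotonicity and subadditivity, ${\\rm CVaR}_\\alpha(Z_1)-{\\rm CVaR}_\\alpha(Z_2)\\le {\\rm CVaR}_\\alpha(Z_1-Z_2)\\le \\frac{1}{1-\\alpha}\\mathbb{E}[|Z_1-Z_2|]$. A numerical sanity check using the standard tail-average plug-in estimator (NumPy assumed; the estimator is our illustration, not taken from the paper):

```python
import numpy as np

def cvar(z, alpha):
    """Plug-in CVaR_alpha: average of the worst ceil((1-alpha)N) outcomes."""
    z = np.sort(np.asarray(z, dtype=float))
    m = int(np.ceil((1.0 - alpha) * z.size))
    return float(z[-m:].mean())

rng = np.random.default_rng(1)
alpha = 0.9
M = 1.0 / (1.0 - alpha)                    # Lipschitz constant for k = 1
for _ in range(100):
    z1 = rng.normal(size=500)
    z2 = z1 + 0.3 * rng.normal(size=500)   # a perturbed copy of z1
    lhs = abs(cvar(z1, alpha) - cvar(z2, alpha))
    rhs = M * float(np.mean(np.abs(z1 - z2)))   # M * E|Z1 - Z2|
    assert lhs <= rhs + 1e-12
```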
We show below that if only expected functions are considered, one can still obtain $1\/\\sqrt{N}$-rate generalization bounds even if the loss function is not Lipschitz continuous, by relaxing Assumption \\ref{assum-4} to the following.\n\\begin{assumption}\\label{assum-EU}\nLet $p\\in[1,+\\infty]$ and $\\rho(Z)=\\mathbb{E}[\\ell(Z)]$, $Z\\in L^p$ for some $\\ell:\\mathbb{R}\\to\\mathbb{R}$.\nAssume that there exist a function $f:\\mathbb{R}^n\\to\\mathbb{R}_+$, constants $a_1,a_2\\ge 0$, and $k\\in[1,p]$ satisfying $f(\\mathbf x)\\le a_1\\|\\mathbf x\\|^k+a_2$ for all $\\mathbf x\\in \\mathbb{R}^n$, such that\n\\begin{align*}\n\\left|\\ell(\\bm \\beta^\\top \\mathbf x)-\\ell({\\widetilde{\\bm \\beta}}^\\top \\mathbf x)\\right|\\le f(\\mathbf x)\\|\\bm \\beta-{\\widetilde{\\bm \\beta}}\\|_*,~~\\forall \\bm \\beta,\n{\\widetilde{\\bm \\beta}}\\in \\mathcal D.\n\\end{align*}\n\\end{assumption}\n\n\\begin{theorem}\\label{th-unionEL}\nSuppose that Assumptions \\ref{assum-1}-\\ref{assum-3} and \\ref{assum-EU} hold and $\\eta\\in(0,1)$. Then\n\\begin{align*}\n\\mathbb{P}\\left(\t\t\\mathbb{E}^{F^*}[\\ell(Y\\cdot\\bm \\beta^\\top\\mathbf X)]\\le \\sup_{F\\in \\overline{\\cal B}_p(\\widehat{F}_N,\\epsilon_N)}\\mathbb{E}^{F}[\\ell(Y\\cdot\\bm \\beta^\\top\\mathbf X)]+\\tau_N,~\\forall~\\bm \\beta\\in\\mathcal D\\right) \\ge 1-\\eta-\\frac1N,\n\t\\end{align*}\nwhere\n\t$\n\t\t\\tau_N=[{a_1(2^{k-1}+1)\\mathbb{E}^{F^*}[\\|\\mathbf X\\|^k]+a_1 2^{k-1}(\\sqrt{{\\rm\\mathbb{V}ar}^{F^*}(\\|\\mathbf X\\|^k)}+\\epsilon_N^k)+2a_2}]\/{N}\n\t$\nand $\\epsilon_N$ is defined by \\eqref{eq-epsilonunion}.\n\\end{theorem}\n\nBy the same reasoning as for Theorem \\ref{th-unionrho}, we can conclude from the above that $1\/\\sqrt{N}$-rate generalization bounds exist. \n\n\n\n\\section{A Regularization Perspective} \\label{reg}\nRegularization generally refers to any means that can be applied to avoid overfitting and enhance generalization. 
In this regard, with the generalization guarantees established in the previous section, the Wasserstein DRO model \\eqref{eq-main05} can be particularly well justified as a general regularization model. In this section, we seek to provide further insight into the regularization effect of the model \\eqref{eq-main05} on the decision variable $\\boldsymbol{\\beta}$, which offers an alternative, and practically useful, interpretation of the model. In particular, it is known that in some settings Wasserstein DRO is equivalent to a regularized empirical optimization model in ML, with a norm regularizer capturing the regularization effect on the decision variable $\\boldsymbol{\\beta}$. We show that a similar equivalence relationship can be established more generally for many instances of the model \\eqref{eq-main05}, such as those discussed in Section \\ref{rcr}. In addition to explaining the regularization effect, the equivalence relationship also reveals how these instances, whose tractability was previously unknown, can be solved efficiently via regularized empirical optimization models. Our results are general in that not only do they unify all the previously known equivalence relationships, but they also provide necessary conditions under which such an equivalence relationship can exist. Theorem \\ref{thm:221010-1} greatly facilitates our analysis. 
Note first that by \\eqref{eq:221010-1} in the theorem, the Wasserstein DRO model \\eqref{eq-main05} can be recast as\n\\begin{align}\\label{eq-main5}\n\\inf_{\\boldsymbol{\\beta} \\in \\mathcal D} \\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)}\\ \\rho^F(\\boldsymbol{\\beta}^\\top \\mathbf Z).\n\\end{align}\n\n\nIn the case where $\\rho^F$ is an expected function, the Wasserstein distributionally robust optimization model \\eqref{eq-main5}, i.e.\n\\begin{align}\\label{eq-main-ef}\n\\inf_{\\boldsymbol{\\beta} \\in \\mathcal D} \\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \\mathbb{E}^F[\\ell(\\boldsymbol{\\beta}^\\top \\mathbf Z)],\n\\end{align}\nis known to be equivalent to a regularized model of the form\n\\begin{align}\\label{eq-main21}\n\\inf_{\\boldsymbol{\\beta} \\in \\mathcal D} \\left\\{\\mathbb{E}^{F_0}[\\ell(\\boldsymbol{\\beta}^\\top \\mathbf Z)] + {\\rm Lip(\\ell)}\\epsilon\\Vert\\bm\\beta\\Vert_*\\right\\},\n\\end{align}\nwhen $p=1$, i.e. the type-1 Wasserstein ball, and ${\\rm Lip(\\ell)}$ is the Lipschitz constant of $\\ell$.\n\n\nIn the case where the Wasserstein ball is of a higher order, i.e. $p>1$, and\/or the loss function $\\ell$ is not Lipschitz continuous, the relationship between the model \\eqref{eq-main-ef} and a regularized model is largely unknown except the special case of $p=2$ and $\\ell(y) = y^2$ (\\cite{BKM19}). It remains largely open also whether \\eqref{eq-main-ef} in this case can be tractably solved. Higher-order Wasserstein balls can be attractive from a practical standpoint, given that they are less conservative than the type-1 Wasserstein ball. It is natural to wonder whether there exists an equivalence relationship between the model \\eqref{eq-main-ef} and a regularized model of the form \\eqref{eq-main21} in the case of a higher-order Wasserstein ball, i.e. $p>1$. 
We show below that such an equivalence relationship exists if and only if the loss function $\\ell$ is linear or takes the form of an absolute function.\n\n\\begin{theorem}\\label{th-RegularizationEL}\n\tLet $\\ell:\\mathbb{R}\\to\\mathbb{R}$ be a convex function. For $p\\in(1,\\infty]$, suppose that $\\mathbb{E}[|\\ell(Z)|]<+\\infty$ for all $Z\\in L^p$.\n\tThen the following statements are equivalent.\n\n\n\n\t\n\t\n\t\n\t%\n\n\n\n\n\n\t\\begin{itemize}\n\t\t\\item[(i)] For any $F_0\\in\\mathcal M_p(\\mathbb{R}^n)$, $\\epsilon\\ge 0$ and $\\mathcal D\\subseteq \\mathbb{R}^n$, we have\n\t\t$$\n\t\t\\inf_{\\bm\\beta\\in \\mathcal D} \\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \\mathbb{E}^F[\\ell(\\boldsymbol{\\beta}^\\top \\mathbf Z)]=\\inf_{\\bm\\beta\\in \\mathcal D} \\left\\{\\mathbb{E}^{F_0}[\\ell(\\boldsymbol{\\beta}^\\top \\mathbf Z)] + C\\epsilon \\Vert\\bm\\beta\\Vert_*\\right\\}\n\t\t$$\n\t\twith some $C >0$.\n\t\n\t\t\n\t\t\\item[(ii)] $\\ell$ has one of the following two forms:\n\t\t\n\t\t-$\\ell_1(x)=C x+b$ or $\\ell_1(x)=-C x+b$ with some $b\\in\\mathbb{R}$;\n\t\t\n\t\t-$\\ell_2(x)=C|x-m|+b$ with some $m,b\\in \\mathbb{R}$.\n\t\t\n\t\n\t\\end{itemize}\n\n\n\\end{theorem}\n\nThe above result, which applies to any type-$p$ Wasserstein ball, is perhaps surprising in that the Wasserstein DRO model is equivalent to the same regularized model, regardless of the order $p$. This turns out to be the case when the slope of the loss function $\\ell$ takes values only from a constant and its negative. The result indicates also that it would not be possible to obtain a regularized model in the form of \\eqref{eq-main21} for any other loss function $\\ell$. In other words, if there does exist a certain equivalence relationship between the Wasserstein DRO model \\eqref{eq-main-ef} and a regularized model for some other loss function $\\ell$, the regularized model must take some form other than \\eqref{eq-main21}. 
Before moving on to discussing other forms of regularization, it is worth noting first the following application of the above result in regression.\n\n\\begin{example} {\\bf (Regression)}\n~\n\n{\\bf - (Least absolute deviation (LAD) regression)}\nApplying $\\ell_2(x)=C|x-m|+b$ in Theorem \\ref{th-RegularizationEL} and setting $m=0$, $b=0$, and $C=1$, we arrive at the distributionally robust counterpart of the least absolute deviation regression, i.e.\n\\begin{align}\n\\inf_{\\boldsymbol{\\beta}_r\\in \\mathcal D}\\sup_{F\\in {\\cal B}_{p}(F_0,\\epsilon)}\\mathbb{E}^F[|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|].\n\\end{align}\nIt is equivalent to\n\\begin{align*}\n\\inf_{\\bm\\beta_r\\in \\mathcal D}\\left\\{\\mathbb{E}^{F_0}[|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|] + \\epsilon \\Vert (1, -\\boldsymbol{\\beta}_r)\\Vert_*\\right\\},\n\\end{align*}\nfor any $p \\geq 1$.\n\\end{example}\n\nWe now turn our attention to exploring whether there exists any other form of regularization equivalent to the Wasserstein DRO model \\eqref{eq-main-ef}. In particular, we seek to address the case where the loss function $\\ell$ is not Lipschitz continuous, such as those loss functions of a high order arising from the applications discussed in Section \\ref{rcr}. It is known that the worst-case expectation problem defined based on the type-1 Wasserstein ball can be unbounded if the loss function is not Lipschitz continuous. 
To study the case that goes beyond Lipschitz continuous functions, we consider the following formulation of the Wasserstein DRO model \\eqref{eq-main-ef}\n\\begin{align}\\label{eq-main12}\n\\inf_{\\boldsymbol{\\beta}\\in \\mathcal D} \\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \\mathbb{E}^F[\\ell^p(\\boldsymbol{\\beta}^\\top \\mathbf Z)],\n\\end{align}\nwhere $\\ell^p$ denotes a Lipschitz continuous function $\\ell$ raised to the power of $p>1$, with the same order as that of the type-$p$ Wasserstein ball.\n\n\n\n\n\n\nAs the main result, we show below that for several loss functions $\\ell$ of practical interest, the Wasserstein DRO model \\eqref{eq-main12} is equivalent to an alternative form of regularization.\n\n\\begin{theorem}\\label{th-highorderloss}\nSuppose that $\\ell$ has one of the following four forms:\n\n-$\\ell_1(x)=(x-m)_+$ with some $m\\in\\mathbb{R}$;\n\n-$\\ell_2(x)=(x-m)_-$ with some $m\\in\\mathbb{R}$;\n\n-$\\ell_3(x)=(|x-m_1|-m_2)_+$ with some $m_1\\in\\mathbb{R}$ and $m_2\\ge 0$;\n\n-$\\ell_4(x)=|x-m|+b$ with some $m\\in\\mathbb{R}$ and $b>0$.\\\\\nThen for any $p\\in(1,\\infty)$, $F_0\\in\\mathcal M_p(\\mathbb{R}^n)$, $\\epsilon\\ge 0$ and $\\mathcal D\\subseteq \\mathbb{R}^n$, we have\n$$\n\\inf_{\\boldsymbol{\\beta}\\in \\mathcal D} \\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \\mathbb{E}^F[\\ell^p(\\boldsymbol{\\beta}^\\top \\mathbf Z)]=\\inf_{\\boldsymbol{\\beta}\\in \\mathcal D} \\left(\\left(\\mathbb{E}^{F_0}[\\ell^p(\\boldsymbol{\\beta}^\\top \\mathbf Z)]\\right)^{1\/p} + \\epsilon||\\boldsymbol{\\beta}||_*\\right)^p.\n$$\n\\end{theorem}\n\nThe above result covers, quite remarkably, all the instances of expected functions in Section \\ref{rcr}, except Sum-Exp in classification.\n\n\\begin{example} {\\bf (Classification)} \\label{ex1}\n~\n\n{\\bf - (Higher-order hinge loss)}\nApplying $\\ell_2(x)=(x-m)_-$ and setting $m=1$, we have that the following classification problem with a higher-order hinge 
loss\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in \\overline{\\cal B}_{p}(F_0,\\epsilon)} \\mathbb{E}^F[(1-Y\\cdot \\bm \\beta^{\\top}\\mathbf X)_+^p]\n\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D} \\left(\\left(\\mathbb{E}^{F_0}[(1-Y\\cdot \\bm \\beta^{\\top}\\mathbf X)_+^p]\\right)^{1\/p}+ \\varepsilon ||\\bm \\beta||_* \\right)^p.\n\\end{align*}\n\n{\\bf - (Higher-order SVM)}\nApplying $\\ell_3(x)=(|x-m_1|-m_2)_+$ and setting $m_1=1$ and $m_2=0$, we have that the higher-order SVM classification problem\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in \\overline{\\cal B}_{p}(F_0,\\epsilon)} \\mathbb{E}^F[|1-Y\\cdot \\bm \\beta^{\\top}\\mathbf X|^p]\n\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D} \\left( (\\mathbb{E}^{F_0}[|1-Y\\cdot \\bm \\beta^{\\top}\\mathbf X|^p])^{1\/p}+ \\varepsilon ||\\bm \\beta||_* \\right)^p.\n\\end{align*}\n\\end{example}\n\n\\begin{example} {\\bf (Regression)} \\label{ex2}\n~\n\n{\\bf - (Higher-order error measure)}\nApplying $\\ell_3(x)=(|x-m_1|-m_2)_+$ and setting $m_1=0$ and $m_2=0$, we have that the regression with a higher order error measure\n\\begin{align*}\n\\inf_{\\bm\\beta_r\\in \\mathcal D}\\sup_{F\\in {\\cal B}_{p}(F_0,\\epsilon)} \\mathbb{E}^F[|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|^p],\n\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\\inf_{\\bm\\beta_r\\in \\mathcal D} \\left( \\left(\\mathbb{E}^{F_0}[|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|^p]\\right)^{1\/p}+ \\varepsilon \\|(1,-\\bm \\beta_r)\\|_* \\right)^p.\n\\end{align*}\n\n{\\bf - (Higher order $c$-insensitive measure)}\nApplying $\\ell_3(x)=(|x-m_1|-m_2)_+$ and setting $m_1=0$ and $m_2=c$, we have that the following regression problem with a higher order $c$-insensitive measure\n\\begin{align*}\n\\inf_{\\bm\\beta_r\\in \\mathcal D}\\sup_{F\\in {\\cal 
B}_{p}(F_0,\\epsilon)} \\mathbb{E}^F[(|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|-c)_+^p]\n\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\\inf_{\\bm\\beta_r\\in \\mathcal D} \\left( \\left(\\mathbb{E}^{F_0}[(|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|-c)_+^p]\\right)^{1\/p}+ \\varepsilon ||(1,-\\bm \\beta_r)||_* \\right)^p.\n\\end{align*}\n\\end{example}\n\n\\begin{example} {\\bf (Risk minimization)} \\label{ex3}\n~\n\n{\\bf - (Lower partial moments)}\nApplying $\\ell_1(x)=(x-m)_+$ and setting $m=c$, we have that the risk minimization with lower partial moments\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in {\\cal B}_{p}(F_0,\\epsilon)} \\mathbb{E}^F[(\\bm\\beta^{\\top}\\mathbf X-c)_+^p]\n\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D} \\left( \\left(\\mathbb{E}^{F_0}[(\\bm\\beta^{\\top}\\mathbf X-c)_+^p]\\right)^{1\/p}+ \\varepsilon ||\\bm\\beta||_* \\right)^p.\n\\end{align*}\n\\end{example}\n\nWith these many examples covered by the theorem, it is perhaps tempting to conjecture that the equivalence relationship may hold more broadly for other loss functions $\\ell$. We provide a negative answer below.\n\n\\begin{theorem}\\label{th-necessity-highorderloss}\nLet $\\ell:\\mathbb{R}\\to\\mathbb{R}$ be a nonnegative, Lipschitz continuous and convex function. For an integer $p\\in(1,\\infty)$, suppose that for any $F_0\\in\\mathcal M_p(\\mathbb{R}^n)$, $\\epsilon\\ge 0$ and $\\mathcal D\\subseteq \\mathbb{R}^n$, we have\n\\begin{equation} \\label{imposs}\n\\inf_{\\boldsymbol{\\beta}\\in \\mathcal D} \\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \\mathbb{E}^F[\\ell^p(\\boldsymbol{\\beta}^\\top \\mathbf Z)]=\\inf_{\\boldsymbol{\\beta}\\in \\mathcal D} \\left(\\left(\\mathbb{E}^{F_0}[\\ell^p(\\boldsymbol{\\beta}^\\top \\mathbf Z)]\\right)^{1\/p} + C\\epsilon||\\boldsymbol{\\beta}||_*\\right)^p\n\\end{equation}\nwith some $C>0$. 
Then $\\ell$ must be one of the four forms in Theorem \\ref{th-highorderloss} multiplying a constant $C$.\n\\end{theorem}\n\nThis is an ``impossibility\" theorem, which puts to rest any effort attempting to draw the equivalence relation for other convex Lipschitz continuous functions $\\ell : \\mathbb{R} \\rightarrow \\mathbb{R}_{\\geq 0}$. This theorem should be of fundamental importance to the study of Wasserstein DRO, given the continuous interests and efforts in motivating Wasserstein DRO from a classical regularization perspective. It shows {\\it exactly} how far one can take this perspective.\n\n\nEven though Theorem \\ref{th-necessity-highorderloss} points out the impossibility to draw the equivalence relation\n\\eqref{imposs} for a more general loss function $\\ell$, Theorem \\ref{th-highorderloss} can in fact be applied more broadly, as a powerful basis, to derive alternative equivalence relations for a richer family of measure $\\rho$. In particular, there is a large family of measures $\\rho$ that can be expressed generally by the following two forms\n\\begin{align}\\label{eq-infEU\n\t\\rho^F(Z) = \\inf_{t \\in \\mathbb{R}} \\left\\{t+\\left(\\mathbb{E}^F[\\ell^p(Z,t)]\\right)^{1\/p}\\right\\}~~~\n {\\rm and}~~~\n\t\\mathcal V^F(Z) = \\inf_{t \\in \\mathbb{R}} \\mathbb{E}^F[\\ell^p(Z,t)]\n\\end{align}\nfor some loss functions $\\ell$, such as the measure applied in $\\nu$-support vector regression, higher moment coherent risk measures in Section \\ref{rcr}, and variance.\n\n\nWe show in the appendix (see Lemma \\ref{lm-minmaxeq}) that for a wide range of loss functions $\\ell$, the following switching of $\\sup$ and $\\inf$ is valid\n$$\\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \\inf_{t \\in \\mathbb{R}} \\pi_{i,\\ell}(F,t)=\n\\inf_{t \\in \\mathbb{R}} \\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)}\\pi_{i,\\ell}(F,t),~~i=1,2,$$\nwhere\n$$\n\\pi_{1,\\ell}(F,t)=t+\\left(\\mathbb{E}^F[\\ell^p(\\boldsymbol{\\beta}^\\top \\mathbf Z,t)]\\right)^{1\/p} ~~{\\rm 
and}~~\\pi_{2,\\ell}(F,t)=\\mathbb{E}^F[\\ell^p(\\boldsymbol{\\beta}^\\top \\mathbf Z,t)].\n$$\nThis, combined with Theorem \\ref{th-highorderloss}, leads to the following.\n\n\n\n\\begin{corollary}\\label{co-1125-1}\nFor any $p\\in [1,\\infty)$ and $c>1$, let $\\rho^F$ be defined by \\eqref{eq-infEU}, where $\\ell(z,t):=c\\ell(z-t)$, and $\\ell$ is one of $\\ell_1,\\ell_3,\\ell_4$ in Theorem \\ref{th-highorderloss} or $\\ell(z,t)=c(|z|-t)_+$. \n\nIt holds that for $F_0\\in\\mathcal M_p(\\mathbb{R}^n)$ and $\\epsilon\\ge 0$\n$$\n \\inf_{\\boldsymbol{\\beta}\\in\\mathcal D}\\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \\rho^F (\\boldsymbol{\\beta}^\\top \\mathbf Z)= \\inf_{\\boldsymbol{\\beta}\\in\\mathcal D}\\left\\{\\rho^{F_0}(\\boldsymbol{\\beta}^\\top \\mathbf Z) + c \\epsilon||\\boldsymbol{\\beta}||_*\\right\\}.\n$$\n\\end{corollary}\n\n\n\n\\begin{example} {\\bf (Classification)} \\label{ex5}\nSetting $\\ell(z,t)=(z-t)_+$ in Corollary \\ref{co-1125-1} and $p=1$, we have\nthat the following $\\nu$-support vector machine problem\n\t\\begin{align*}\n\t\t\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in \\overline{{\\cal B}}_{1}(F_0,\\epsilon)} {\\rm CVaR}_\\alpha^F(-Y \\cdot \\boldsymbol{\\beta}^\\top\n\t\\mathbf X)\n\t\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\t\t\\inf_{\\bm\\beta\\in \\mathcal D} \\left\\{{\\rm CVaR}_\\alpha^{F_0}(-Y \\cdot \\boldsymbol{\\beta}^\\top\n\t\\mathbf X)+ \\frac{1}{1-\\alpha}\\varepsilon ||\\bm \\beta||_* \\right\\}.\n\\end{align*}\n\\end{example}\n\n\n\\begin{example} {\\bf (Regression)} \\label{ex5}\nSetting $\\ell(z,t)=(|z|-t)_+$ in Corollary \\ref{co-1125-1} and $p=1$, we have that the following $\\nu$-support vector regression problem\n\t\\begin{align*}\n\t\t\\inf_{\\bm\\beta_r\\in \\mathcal D}\\sup_{F\\in {\\cal B}_{1}(F_0,\\epsilon)} {\\rm CVaR}_\\alpha^F(|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|)\n\t\\end{align*}\nis equivalent to the regularization 
problem\n\\begin{align*}\n\t\t\\inf_{\\bm\\beta_r\\in \\mathcal D} \\left\\{{\\rm CVaR}_\\alpha^{F_0}(|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|)+ \\frac{1}{1-\\alpha}\\varepsilon ||(1,-\\bm \\beta_r)||_* \\right\\}.\n\\end{align*}\n\\end{example}\n\n\n\n\n\n\n\\begin{example} {\\bf (Risk minimization)} \\label{ex6}\n{Setting $\\ell(z,t)=\\ell_1(z-t)=(z-t)_+$ in Corollary \\ref{co-1125-1},} we have that the following problem of minimizing higher moment risk measures\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in {\\cal B}_{p}(F_0,\\epsilon)}\n\\inf_{t\\in\\mathbb{R}}\\left\\{t+c\\left(\\mathbb{E}^F[(\\bm\\beta^{\\top}\\mathbf X-t)_+^p]\\right)^{1\/p}\\right\\}\n\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D, t\\in\\mathbb{R}}\\left\\{t+c\\left(\\mathbb{E}^{F_0}[(\\bm\\beta^{\\top}\\mathbf X-t)_+^p]\\right)^{1\/p}\n+ c\\varepsilon ||\\bm\\beta||_*\\right\\}.\n\\end{align*}\n\\end{example}\n\n\n\\begin{corollary} \\label{cor1}\nFor any $p\\in [1,\\infty)$, let $\\mathcal V^F$ be defined by \\eqref{eq-infEU}, where $\\ell(z,t):=c\\ell(z-t)$ and $\\ell$ is $\\ell_3$ or $\\ell_4$ in Theorem \\ref{th-highorderloss}.\nIt holds that for $F_0\\in\\mathcal M_p(\\mathbb{R}^n)$ and $\\epsilon\\ge 0$\n$$\n \\inf_{\\boldsymbol{\\beta}\\in\\mathcal D}\\sup_{F\\in \\mathcal B_p(F_0,\\epsilon)} \t\\mathcal V^F (\\boldsymbol{\\beta}^\\top \\mathbf Z)= \\inf_{\\boldsymbol{\\beta}\\in\\mathcal D}\\left( \\left( \t\\mathcal V^{F_0}(\\boldsymbol{\\beta}^\\top \\mathbf Z) \\right)^{1\/p} + \\epsilon||\\boldsymbol{\\beta}||_* \\right) ^p.\n$$\n\\end{corollary}\n\nThe above corollary can accommodate, for example, the case where variance is applied as the measure. Variance is not listed as an example in Section \\ref{rcr} because it has been studied in \\cite{BCZ22}. 
Yet, our approach to derive the equivalency relation is different from, and significantly more general than, the approach taken in \\cite{BCZ22}. We give the following example to highlight the generality of our approach.\n\n\\begin{example} {(\\cite{BCZ22})} \\label{ex7}\nWhen $p=2$ and $\\ell(z,t) =\\ell_3(z-t) = |z-t|$ (with $m_1=0$ and $m_2=0$),\n$\\mathcal V^F = {\\rm\\mathbb{V}ar}^F$. That is,\n\\begin{align*}\n\\sup_{F\\in \\mathcal B_{2}(F_0,\\epsilon)}{\\rm\\mathbb{V}ar}^F(\\bm\\beta^\\top\\mathbf Z)\n=\\sup_{F\\in \\mathcal B_{2}(F_0,\\epsilon)}\\inf_{t\\in\\mathbb{R}} \\mathbb{E}^F[(\\bm\\beta^\\top\\mathbf Z-t)^2].\n\\end{align*}\nApplying Corollary \\ref{cor1} yields\n\\begin{align*}\n\t\\sup_{F\\in \\mathcal B_{2}(F_0,\\epsilon)}{\\rm\\mathbb{V}ar}^F(\\bm\\beta^\\top\\mathbf Z)\n\t&=\\left(\\sqrt{{\\rm\\mathbb{V}ar}^{F_0}(\\bm\\beta^\\top\\mathbf Z)}\n\t+\\epsilon\\|\\bm\\beta\\|_*\\right)^2.\n\\end{align*}\n\\end{example}\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection*{The case of exponential functions}\nSum-Exp for classification in Section \\ref{rcr}, i.e. $\\rho(Z):=\\frac1t \\mathbb{E}[e^{-tZ}]$, is a special case that cannot be covered by the theorems above. We proceed by assuming that the Wasserstein ball in \\eqref{eq-main5} is of type-$\\infty$, since otherwise the problem may be unbounded. 
We can make the following observation and identify the regularization counterpart of the Sum-Exp classification problems.\n\n\\begin{proposition}\\label{ex-MMinfinity}\nFor a monotonic function $\\rho: L^\\infty\\to\\mathbb{R}$, that is, $\\rho(Z_1)\\le\\rho(Z_2)$ whenever $Z_1\\le Z_2$ a.s.,\nwe have\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in {\\cal B}_\\infty(F_0,\\epsilon)}\\rho^F(\\bm\\beta^{\\top}\\mathbf Z)=\\inf_{\\bm\\beta\\in \\mathcal D} \\rho^{F_0}(\\bm\\beta^{\\top}\\mathbf Z+\\epsilon\\|\\bm\\beta\\|_*).\n\\end{align*}\n\\end{proposition}\n\n\\begin{example} {\\bf (Classification)} \\label{ex9}\n~\n\n{\\bf - (Sum-Exp)}\nApplying $\\rho(Z)=\\frac1t \\mathbb{E}[e^{tZ}]$, we have that the Sum-Exp classification problem\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in \\overline{\\cal B}_{\\infty}(F_0,\\epsilon)} \\frac1t \\mathbb{E}^F[e^{-tY\\cdot \\bm \\beta^{\\top}\\mathbf X}]\n\\end{align*}\nis equivalent to the regularization problem\n\\begin{align*}\n\\inf_{\\bm\\beta\\in \\mathcal D} \\frac1t \\mathbb{E}^{F_0}\\left[e^{-t(Y\\cdot \\bm \\beta^{\\top}\\mathbf X - \\epsilon\\|\\bm\\beta\\|_*)}\\right].\n\\end{align*}\n\\end{example}\n\n\n\n\\subsection*{The case of distortion functionals}\n $\\nu$-support vector machine (regression) and CVaR-deviation in Section \\ref{rcr} are two examples of how quantile-based risk measures such as CVaR play the role of building blocks for other measures. It is known in the literature of risk measures that CVaR is a special case of distortion functional,\n\n defined by\n\\begin{align*}\n\t\\rho_h^F(Z)=\\int_0^1 F^{-1}(s)\\mathrm{d} h(s),\n\\end{align*}\nwhere $h:[0,1]\\to\\mathbb{R}$ is called a distortion function and $F^{-1}$ is left-quantile function of $F$, i.e. $F^{-1}(s)=\\inf\\{x: F(x)\\ge s\\}$ for $s\\in(0,1]$, and $F^{-1}(0)=\\inf\\{x: F(x)>0\\}$.\nDistortion functionals allow for taking into account a spectrum of quantiles with respect to different probability levels. 
In particular, when the distortion function $h$ is a convex function, the distortion functional $\\rho_h^F$ is a mixture of CVaRs with the mixture weights determined by the derivative of $h$ (see e.g. \\cite{RU13}). The following generalization of CVaR-deviation appears in \\cite{RUZ08}\n\\begin{equation} \\label{gencvar}\n\\inf_{\\bm\\beta\\in \\mathcal D}\\sup_{F\\in {\\cal B}_{p}\n(F_0,\\epsilon)} \\rho^F_h(\\bm\\beta^{\\top}\\mathbf X\n-\\mathbb{E}^F[\\bm\\beta^{\\top}\\mathbf X]\n),\n\\end{equation}\nwhere $h$ is convex. Similarly, one can consider the following generalization of $\\nu$-support vector regression\n\\begin{equation} \\label{gen-nu}\n\\inf_{\\bm\\beta_r\\in \\mathcal D}\\sup_{F\\in {\\cal B}_{p}(F_0,\\epsilon)} \\rho^F_h(|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|),\n\\end{equation}\nwhere $h$ is increasing and convex. We show below that for both cases, there exists an exact equivalence relation between the Wasserstein DRO model and regularization.\n\n\\begin{proposition}\\label{prop:disabsolute} \\label{prop-DRdeviation}\n\\begin{itemize}\n\\item [(i)] Let $h: [0,1]\\to\\mathbb{R}$ be a convex function. For $p\\in[1,\\infty]$,\nthe problem \\eqref{gencvar} is equivalent to\n\\begin{equation}\n\\inf_{\\bm\\beta\\in \\mathcal D} \\left\\{\\rho^{F_0}_h(\\bm\\beta^{\\top}\\mathbf X\n-\\mathbb{E}^{F_0}[\\bm\\beta^{\\top}\\mathbf X])\n+ \\|h'_-+h(0)-h(1)\\|_q \\varepsilon||\\bm\\beta||_*\\right\\},\n\\end{equation}\nwhere $\\|g\\|_q=(\\int_0^1 |g(s)|^q \\mathrm{d} s)^{1\/q}$, and $h_-'$ is the left derivative of $h$.\n\\item [(ii)]Let $h:[0,1]\\to\\mathbb{R}$ be an increasing and convex distortion function. 
For $p\\in[1,\\infty]$,\nthe problem \\eqref{gen-nu} is equivalent to\n\\begin{equation}\n\\inf_{\\bm\\beta_r\\in \\mathcal D} \\left\\{\\rho^{F_0}_h(|(1, - \\boldsymbol{\\beta}_r)^{\\top}\\mathbf X|)+ \\|h'_-\\|_q \\varepsilon ||(1,-\\bm \\beta_r)||_*\\right\\}.\n\\end{equation}\n\\end{itemize}\n\\end{proposition}\n\nOne can easily apply the above proposition with $h(s)= (s-\\alpha)_+\/(1-\\alpha)$, where $(s)_+=\\max\\{s, 0\\}$, to derive the regularization counterpart for the example of CVaR-deviation in risk minimization (in Section \\ref{rcr}).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nIn this paper, we first show that Wasserstein DRO provides generalization guarantees for virtually any problem with affine decision rules - namely, for any type-$p$ Wasserstein ball and (almost) any measure of risk. This justifies the use and reveals the strength of Wasserstein DRO beyond the classical expectation optimization setting. Our approach sheds light on why generalization guarantees can be obtained (almost) independently of the choice of a Wasserstein ball and the measure of risk. Although our approach is limited to problems with affine decision rules, such problems are very common in practical ML and OR applications given the popularity of affine decision rules. Such problems also constitute an important basis for studying more sophisticated forms of problems. We further show how to draw exact equivalency relations between Wasserstein DRO and the classical regularization scheme for a wide range of regression, classification, and risk minimization problems that have not been considered to date using Wasserstein DRO. Our result broadens considerably the class of Wasserstein DRO problems that can be solved exactly via their regularization formulations. 
We also prove that our equivalency results are, in some sense, the most general that one can possibly obtain, in the case where the measure is an expected function (Theorems \\ref{th-RegularizationEL} and \\ref{th-necessity-highorderloss}). This provides valuable insights into how far the classical regularization interpretation can be applied to Wasserstein DRO.\nThe generalization and regularization results presented in this paper should be of high interest to ML and OR researchers\/practitioners who seek to gain confidence in Wasserstein DRO for a broader set of applications.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWhilst the Cold Dark Matter (CDM) paradigm is remarkably successful at explaining the universe's large-scale structure there are many known issues pertaining to structure formation on small-length scales. These include the ``missing satellites'' \\cite{MissingSat1, MissingSat2}, ``Too Big to Fail'' \\cite{TBTF1, TBTF2} and ``core-cusp'' \\cite{CuspCore1, CuspCore2, CuspCore3} problems all of which can broadly be thought of as an overprediction of density at small-scales by simulations of CDM when compared with observation. \n\nConsequently, solutions to these problems generally try to replicate the gross characteristics of CDM on large length scales where it performs well, whilst suppressing structure formation at small-scales where the CDM paradigm fares poorly. Examples of these solutions include Warm Dark Matter (WDM) \\cite{WDM, WDM1, WDM2, WDM3} in which the dark matter is endowed with a velocity dispersion allowing it to stream freely and wash out density perturbations below some scale. Another example is Late Kinetic Decoupling (LKD) \\cite{InteractingDM, InteractingDM2, LKD, LKD2, LKD3} in which the DM is allowed to couple to a relativistic ``heat bath'' (i.e. photons or standard model neutrinos) via elastic scattering until fairly late in the universe's evolution. 
The coupling allows small-scale modes (i.e. modes which entered the Hubble horizon early and therefore experience the strong effects of the coupling) to experience acoustic oscillations in their density while large scale modes enter the horizon after decoupling and therefore experience little or no suppression of power. \n\nBoth of these solutions lead to, all other things considered, a sharp cutoff in the matter power spectrum and as a result can alleviate some of the problems for CDM at small-length scales. Given that these scenarios are identical at large length scales and possess the same phenomenological characteristics on the small, the question arises as to whether a cosmological observable exists which will allow for these scenarios to be distinguished. \n\nThis manuscript is intended as a short summary of \\cite{Diacoumis:2017hff}, in which we investigated CMB spectral distortions as a probe of these scenarios, and is organised as follows. In Section 2 we give a brief overview of CMB spectral distortions. In sections 3 and 4 we examine the details of DM--neutrino and DM--photon interactions respectively and comment on the effect that these would have on spectral distortions. In section 5 we discuss the implications of our results for future experiments such as PRISM. For further details the reader is encouraged to refer to \\cite{Diacoumis:2017hff}.\n\n\\section{Spectral Distortions}\n\nCMB Spectral Distortions are defined as any deviation in the energy spectrum of CMB photons from a perfect ``blackbody'' shape, they occur during periods where thermal equilibrium is not maintained in the photon fluid i.e. periods of significant energy injection or energy release. One of the most well-known mechanisms for producing CMB Spectral Distortions is through the dissipation of standing sound waves in the photon-baryon fluid whereby energy is transferred from the sound wave to the energy spectrum of the photons due to diffusion damping \\cite{Hu:1993gc, ChlubaRev}. 
Since diffusion damping occurs at extremely small length scales \\(1 \\, \\textrm{Mpc}^{-1} \\, \\leq k \\leq 10^{4} \\, \\textrm{Mpc}^{-1}\\), roughly corresponding to the scales at which LKD and WDM offer solutions to the small-scale structure problems of CDM, we expect the details of the Dark Matter microphysics to have a significant impact on the form of the distortions.\n\nWe emphasise that spectral distortions are expected to occur via this mechanism even in CDM cosmology, the point is that DM microphysics can significantly alter the form of the perturbations which in turn leads to distortion patterns which deviate from CDM. We will refer throughout this manuscript to the \\(\\mu\\)-parameter which characterises the ``effective chemical potential'' in the photon energy spectrum created as a result of the distortion.\n\n\\section{DM -- neutrino interactions}\nIn the absence of interactions with DM, neutrinos stream freely and therefore carry away some of the power of the perturbation without sourcing a spectral distortion. This is reflected in the form of the photon temperature transfer function which has the form \n\\begin{equation} \\label{eq:Theta}\n\\Theta_{1} \\approx A \\, c_{s} \\sin\\left(k r_{s}\\right)\\exp\\left(-\\frac{k^{2}}{k^{2}_{D}}\\right).\n\\end{equation}\nHere the acoustically oscillating part of the perturbation depends on the sound horizon \\(r_{s}\\), the diffusion damping is characterised by the diffusion scale \\(k_{D}\\), and the WKB amplitude is given by\n\\begin{equation} \\label{eq:amp}\nA \\simeq \\left(1 +\\frac{4}{15}f_{\\nu}\\right)^{-1},\n\\end{equation}\nwhere \\(f_{\\nu} = \\rho_{\\nu}\/(\\rho_{\\nu} + \\rho_{\\gamma}) \\simeq 0.41\\) is the ratio of the neutrino energy density to the total energy density of the relativistic species. The factor of \\(f_{\\nu}\\) amounts to a small correction to the amplitude as a result of the anisotropic stress of the neutrino fluid. 
\n\nA coupled DM--neutrino system exhibits similar characteristics to the familiar photon--baryon system in that the DM--neutrino fluid experiences acoustic oscillations damped by neutrino diffusion in the tight-coupling limit. The amplitude of the DM density perturbations is reduced on small length scales which serves as the basis for the solution to the small-scale problems of CDM cosmology. At the same time, coupling the neutrinos to DM prevents them from free-streaming and allows them to behave more like a `perfect' relativistic fluid with vanishing anisotropic stress \\cite{GravCluster}.\n\nThe absence of anisotropic stress allows the neutrinos to participate in acoustic oscillations of the perturbations in the same way as photons as they are no longer free-streaming. In the tightly coupled limit the correction factor \\(f_{\\nu}\\) vanishes completely from the amplitude Eq. (\\ref{eq:amp}) and the oscillation amplitude is enhanced relative to CDM. The net result is an \\textit{enhanced} \\(\\mu\\)-parameter as the amplitude of the perturbation has increased and there is more power that can potentially be dissipated leading to larger spectral distortions.\nWe emphasise that this effect is purely gravitational and arises because the neutrinos anisotropic stress vanishes in the presence of the interaction. 
Figure \\ref{NeutrinoMu} shows clearly that this is a saturated effect and causes a maximum ~\\(20\\%\\) increase in the \\(\\mu\\)-parameter when the DM--neutrino coupling is tight.\n\n\\begin{figure}[h]\n\\includegraphics[width=22pc]{NeutrinoMu.png}\\hspace{2pc}%\n\\begin{minipage}[b]{10pc}\\caption{\\label{NeutrinoMu}Expected chemical potential \\(\\mu\\)-distortion as a function of DM--neutrino scattering cross section \\(\\sigma_{DM-\\nu}\\) Blue denotes time independant cross-sections and red denotes cross sections proportional to temperature squared.\\\\ \\\\ \\\\}\n\\end{minipage}\n\\end{figure}\n\n\\section{DM -- photon interactions} \nIn the photon--baryon fluid the diffusion damping scale is set by \n\\begin{equation} \\label{eq:diffusion}\n\\partial_{z}k_{D}^{-2} = - \\frac{c^{2}_{s}a}{2\\mathcal{H}\\dot{\\kappa}}\\left(\\frac{16}{15} + \\frac{R^{2}}{R+1}\\right),\n\\end{equation}\nwhere \\(c_{s}\\) is the sound speed of the photon--baryon fluid, \\(\\dot{\\kappa}\\) is the interaction rate of photons and baryons and \\(R = (3\/4)\\rho_{b}\/\\rho_{\\gamma}\\) is the baryon-to-photon energy density ratio. Note that the first term in Eq. (\\ref{eq:diffusion}) arises due to the viscosity of the photon fluid and the second term (proportional to \\(R^{2} << 1\\)) arises due to the heat conduction between photons and baryons, here the heat conduction term is extremely subdominant as the photons are tightly coupled to the baryons for the entire time period of interest and do not allow for efficient heat exchange between the two fluids \\cite{Weinberg:1972kfs, Hu:1996mn}.\n\nThe case of DM--photon elastic scattering is distinctly non-trivial as the DM is not tightly coupled to the photons for the entirety of it's evolution and therefore enters a \\textit{weak coupling regime} in which diffusion damping due to heat conduction can be significant or even the dominant mode of dissipation. 
Here the diffusion damping scale is given by\n\\begin{equation} \\label{analytic}\n\\partial_{z}k_{D}^{-2} \\simeq - \\frac{c^{2}_{s}a}{2\\mathcal{H}} \\left[ \\frac{1}{\\dot{\\kappa} + \\dot{\\mu}_{\\gamma}}\\frac{16}{15} + \\frac{3\\dot{\\mu}_{\\gamma}}{k^{2}}\\left(\\frac{k^{2}}{k^{2} +3 S_{\\gamma}^{-2}\\dot{\\mu}_{\\gamma}^{2}}\\right) \\right]\n\\end{equation}\nwhere \\(\\dot{\\mu}_{\\gamma}\\) is the DM--photon interaction rate and \\(S_{\\gamma} = (3\/4)\\rho_{c}\/\\rho_{\\gamma}\\) is the DM-to-photon energy density ratio. When tight-coupling is satisfied (\\(S^{-1}_{\\gamma}\\dot{\\mu}_{\\gamma} > k\\)) the heat conduction term in this expression reduces to a form proportional to \\(S_{\\gamma}^{2}\\), analogous to the heat conduction term in Eq. (\\ref{eq:diffusion}). \n\nThe effect of the heat conduction damping is twofold: firstly, it affects the evolution of the photon temperature transfer functions through additional damping by Eq. (\\ref{eq:Theta}), which thereby damps the amount of perturbation power able to be dissipated; secondly, it directly affects the heating rate, creating a temporally localised `burst' of energy. This second effect arises because the DM--photon coupling can enter a weak coupling regime during the \\(\\mu\\)-era, in contrast with the photon--baryon coupling, which remains tight for the entire duration of the \\(\\mu\\)-era. To understand the `burst' shape, consider that extremely tightly coupled fluids will be almost comoving and therefore not have a significant amount of heat conduction, whereas fluids that are completely decoupled cannot communicate with each other efficiently enough to allow heat conduction to take place. The `burst' shape is therefore a result of the DM and photons slipping past each other during the weak coupling regime and contributing to a large heat conduction between the two fluids. 
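The limiting behaviour of the heat-conduction term can be checked numerically. The sketch below uses purely illustrative, dimensionless values for the wavenumber, the interaction rate, and the DM-to-photon ratio (these are not physical inputs from the text):

```python
import math

def heat_conduction_term(k, mu_dot, S):
    """Heat-conduction factor from the damping expression above:
    (3*mu_dot/k^2) * k^2 / (k^2 + 3*mu_dot^2/S^2)."""
    return 3.0 * mu_dot / (k**2 + 3.0 * mu_dot**2 / S**2)

k, S = 1.0, 1e-3          # illustrative wavenumber and DM-to-photon ratio

# Tight coupling (mu_dot/S >> k): the term tends to S^2/mu_dot, i.e. it is
# suppressed by S^2, analogous to the photon--baryon R^2 term.
mu_tight = 1e3
assert math.isclose(heat_conduction_term(k, mu_tight, S),
                    S**2 / mu_tight, rel_tol=1e-3)

# Weak coupling (mu_dot/S << k): the term tends to 3*mu_dot/k^2 instead,
# and heat conduction can become the dominant dissipation channel.
mu_weak = 1e-6
assert math.isclose(heat_conduction_term(k, mu_weak, S),
                    3.0 * mu_weak / k**2, rel_tol=1e-3)
print("limits verified")
```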
Both effects are demonstrated in Figure \\ref{PhotonHeatingConst}, which shows the heating rate of the CMB photons; the `burst' shape at intermediate times and the suppression of power at late times are both visible.\n\n\\begin{figure}[h]\n\\begin{minipage}[c]{0.6\\textwidth}\n\\includegraphics[width=22pc]{Photon_Heating.png}%\n\\end{minipage}\\hspace{2pc}\n\\begin{minipage}[b]{0.3\\textwidth}\\caption{\\label{PhotonHeatingConst}Heating rate of CMB photons as a function of the scale factor \\(a\\) for different values of the parameter \\(u_{\\gamma} \\equiv \\sigma_{\\textrm{DM}-\\gamma}\/\\sigma_{\\textrm{T}}\\left(100 \\, \\textrm{GeV}\/m_{\\textrm{DM}}\\right)\\). \\\\}\n\\end{minipage}\n\\end{figure}\n\n\\begin{figure}[h]\n \\begin{minipage}[c]{0.6\\textwidth}\n \\includegraphics[width=22pc]{Sigma_Photon_Const.png}\n \\includegraphics[width=22pc]{Sigma_Photon_T2.png}\n \\end{minipage}\\hspace{2pc}\n \\begin{minipage}[c]{0.3\\textwidth}\n \\caption{\\label{blah}\\textit{Top}: Upper bounds for time-independent DM-X scattering cross sections as a function of DM mass. Existing constraints from Planck + WP \\cite{Wilknu, Wilkphoton} (dotted), MW satellites \\cite{Boehm:2014vja} (small-dashed) and Lyman-\\(\\alpha\\) forest \\cite{Wilknu} (dot-dashed) shown alongside projected constraints from PRISM in \\cite{ChlubaKam} (long-dashed) and this work (solid). Blue and orange denote DM-\\(\\gamma\\) and DM-\\(\\nu\\) scattering, respectively. \\\\ \\\\\n \\textit{Bottom}: Same as top but for DM-X scattering cross sections proportional to temperature squared as a function of DM mass. Existing constraints from Lyman-\\(\\alpha\\) forest \\cite{Wilknu} (dot-dashed). \\\\ \\\\ } \\label{Constraints}\n \\end{minipage}\n\\end{figure}\n\n\\section{Results}\nIn the case of DM--neutrino elastic scattering, the overall amplitude of the perturbations is enhanced by \\(\\sim 10\\%\\) relative to \\(\\Lambda\\)CDM in the limit of tight coupling. 
As a result, the expected \\(\\mu\\) distortion is enhanced significantly (see Figure \\ref{NeutrinoMu}) and may be distinguished from the \\(\\Lambda\\)CDM prediction if the present-day value of the scattering cross section is at least as large as \\(\\sigma_{\\textrm{DM}-\\nu} \\sim 4.8 \\times 10^{-32} \\left(m_{\\textrm{DM}}\/\\textrm{GeV}\\right) \\, \\textrm{cm}^{2}\\) for time-independent cross sections, and \\(\\sigma^{0}_{\\textrm{DM}-\\nu} \\sim 2.5 \\times 10^{-47} \\left(m_{\\textrm{DM}}\/\\textrm{GeV}\\right) \\, \\textrm{cm}^{2}\\) for $\\sigma_{{\\rm DM}-\\nu} \\propto T^2$. In the latter case, it is interesting to note that the constraining power of PRISM on dark matter--neutrino elastic scattering may potentially exceed current limits from the Lyman-\\(\\alpha\\) forest (see bottom of Figure \\ref{Constraints}).\n\nIn the case of DM--photon elastic scattering, we derive a new analytical expression, Eq. (\\ref{analytic}), for the diffusion damping scale in the presence of a DM--photon elastic scattering interaction, which accounts for the fact that the DM--photon fluid enters a weak coupling regime while the photons are still tightly coupled to the baryons. This expression shows that dissipation due to heat conduction is significant in the case of DM--photon elastic scattering and can actually be the dominant mode of dissipation during the \\(\\mu\\)-era for certain parameter values. We find that future experiments such as PRISM may be sensitive to DM--photon elastic scattering if the cross section is at least as large as \\(\\sigma_{\\textrm{DM}-\\gamma} \\sim 1.1 \\times 10^{-30} \\left(m_{\\textrm{DM}}\/\\textrm{GeV}\\right) \\, \\textrm{cm}^{2}\\) for time-independent cross sections, and \\(\\sigma^{0}_{\\textrm{DM}-\\gamma} \\sim 1.8 \\times 10^{-40} \\left(m_{\\textrm{DM}}\/\\textrm{GeV}\\right) \\, \\textrm{cm}^{2}\\) for $\\sigma_{{\\rm DM}-\\gamma} \\propto T^2$. 
This method is not as constraining as the case of DM--neutrino elastic scattering due to competing effects dominating the heating rate of the photons at different times (see Section 3 for further discussion.)\n\nIn summary, we have identified a number of physical features of DM--neutrino and DM--photon interacting models which influence the evolution of photon perturbations in the early universe. These physical features produce different CMB Spectral Distortions and can be probed with a high degree of sensitivity by future experiments such as PRISM.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\begin{table*}\n\\caption{Details of the observations.\\label{observations}}\n\\begin{center}\n\\tiny\n\\begin{tabular}{ccccccccc}\n\\hline\nObservation & MJD & \\multicolumn{2}{c}{Map peak} & \\multicolumn{2}{c}{Beam} & \\multicolumn{2}{c}{1$\\sigma$ rms} \n& Notes \\\\\ndate & & \\multicolumn{2}{c}{(mJy\/beam)} & \\multicolumn{2}{c}{(mas $\\times$ mas, $^\\circ$)} &\n\\multicolumn{2}{c}{(mJy\/beam)} & \\\\\n & & 15\\,GHz & 24\\,GHz & 15\\,GHz & 24\\,GHz& 15\\,GHz & 24\\,GHz& \\\\\n\\hline\n\\hline\n2011\/01\/14 & 55575 & 348 & 319 &\\ \\ 1.05 $\\times$ 0.65, 15.3 \\ \\ &\\ \\ 0.79 $\\times$ 0.47, 8.84 \\ \\ & 0.19 & 0.18 & No MK, no NL \\\\\n2011\/02\/25 & 55617 & 391 & 338 & 1.16 $\\times$ 0.74, 14.4 & 0.64 $\\times$ 0.39, $-$6.52 & 0.35 & 0.18 & NL snowing \\\\\n2011\/03\/29 & 55649 & 386 & 359 & 1.06 $\\times$ 0.66, $-$5.37 & 0.65 $\\times$ 0.39, $-$4.26 & 0.17 & 0.24 & No HK \\\\\n2011\/04\/25 & 55675 & 367 & 308 & 0.92 $\\times$ 0.50, $-$3.79 & 0.61 $\\times$ 0.34, $-$3.88 & 0.28 & 0.33 & - \\\\\n2011\/05\/31 & 55712 & 355 & 297 & 0.93 $\\times$ 0.51, $-$5.56 & 0.64 $\\times$ 0.35, $-$8.41 & 0.25 & 0.29 & - \\\\\n2011\/06\/29 & 55741 & 262 & 208 & 0.89 $\\times$ 0.50, $-$7.17 & 0.56 $\\times$ 0.32, $-$12.3 & 0.17 & 0.37 & No LA \\\\\n2011\/07\/28 & 55770 & 220 & 197 & 0.91 $\\times$ 0.56, $-$0.89 & 
0.60 $\\times$ 0.37, $-$1.14 & 0.20 & 0.30 & - \\\\\n2011\/08\/29 & 55802 & 275 & 200 & 0.97 $\\times$ 0.55, 0.26 & 0.63 $\\times$ 0.35, $-$2.70 & 0.18 & 0.27 & No HK \\\\ \n2011\/09\/28 & 55832 & 264 & 238 & 1.06 $\\times$ 0.67, 16.2 & 0.72 $\\times$ 0.47, 18.4 & 0.26 & 0.24 & No MK \\\\ \n2011\/10\/29 & 55863 & 261 & 167 & 1.06 $\\times$ 0.69, 0.09 & 0.73 $\\times$ 0.44, $-$3.73 & 0.26 & 0.14 & HK snowing \\\\ \n2011\/11\/28 & 55893 & 283 & 201 & 1.02 $\\times$ 0.59, 18.0 & 0.70 $\\times$ 0.42, 14.3 & 0.28 & 0.17 & No PT, FD, MK \\\\ \n2011\/12\/23 & 55918 & 295 & 287 & 0.89 $\\times$ 0.49, 1.82 & 0.61 $\\times$ 0.35, $-$5.74 & 0.15 & 0.21 & No HK \\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=7.5cm]{15ghz_stacked.eps}\n \\includegraphics[width=7.5cm]{24ghz_stacked.eps}\n \\caption{Images of Mrk 421 at 15\\,GHz (left panel) and 24\\,GHz (right panel). These two images were obtained by stacking all of the images of the twelve epochs, at the respective frequency. The restoring beam for the 15\\,GHz image is 0.9 mas$\\times$0.55 mas and the peak flux density is 307.2 mJy\/beam. For the 24\\,GHz image, the restoring beam is 0.6 mas$\\times$0.35 mas and the peak is 251.9 mJy\/beam. The first contour is 0.35 mJy\/beam, which corresponds to three times the off-source noise level. Contour levels are drawn at ($-$1, 1, 1.4, 2, 2.8, 4...) in steps of $\\sqrt{2}$.}\n\\label{maps}\n \\end{figure*}\n\n\nMarkarian 421 (R.A.=$11^h$\\ $04^m$\\ $27.313943^s$, Dec.=$+38^\\circ$\\ 12\\arcmin\\ 31.79906\\arcsec, J2000) is one of the nearest ($z=0.031$) and brightest BL~Lac objects in the sky. It was the first extragalactic source detected at \nTeV energies by the Cherenkov telescope at Whipple Observatory \\citep{Punch1992}. 
The spectral energy \ndistribution (SED) of this object, dominated by non-thermal emission, has two\nsmooth broad components: one at lower energies, from the radio band to the soft X-ray domain, and another at higher\nenergies peaking at $\\gamma$-ray energies \\citep{Abdo2011}. \nThe low-frequency peak is certainly due to synchrotron emission from relativistic electrons in the jet interacting \nwith the magnetic field, whereas the high-frequency peak is probably due to the inverse Compton scattering of the same population of \nrelativistic electrons with synchrotron low-energy photons \\citep[synchrotron self-Compton model, SSC, see][]{Abdo2011, Tavecchio2001}. \nIn this framework, multiwavelength (MWL) coordinated campaigns are a fundamental tool for understanding the physical properties of the source, e.g. by studying variability, which is present at all frequencies, but particularly at TeV energies, where \\citet{Gaidos1996} measured a doubling time of $\\sim15$ minutes. The accurate MWL study and SED modeling performed by \\citet{Abdo2011} revealed some interesting results, such as the size of the emitting region $R$ and the magnetic field $B$, which in the context of the leptonic scenario they constrained to be $R \\lesssim 10^4 R_g$ and $B \\sim$ 0.05 G. \nHowever, the details of the physical processes responsible for the observed emission are still poorly constrained.\nBecause of the considerable variability and the broadband spectrum, multiwavelength long-term observations are required for a good understanding of the emission mechanisms.\n \nThis study is part of a new multi-epoch and multi-instrument campaign, which also involves observations in the sub-mm (SMA), optical\/IR (GASP), UV\/X-ray (Swift, RXTE, MAXI), and $\\gamma$ rays (Fermi-LAT, MAGIC, VERITAS), as well as at the cm wavelengths with low resolution observations (e.g. F-GAMMA, Medicina). 
\nThe aim of this observational effort is to shed light on fundamental questions such \nas the nature of the radiating particles, the connection between the radio and $\\gamma$-ray emission, \nthe location of the emitting regions, and the origin of the flux variability.\nVery long baseline interferometry (VLBI) plays an important role in addressing these scientific questions because it is the only technique that can resolve (at least partially) the inner structure of the jet.\nTherefore, cross-correlation studies of Very Long Baseline Array (VLBA) data with data from other energy ranges (in particular $\\gamma$ rays) \ncan provide us with important information about the structure of the jet and the location of the blazar emission.\n\nAt radio frequencies, Mrk 421 clearly shows a one-sided jet structure aligned at a small angle with respect \nto the line of sight \\citep{Giroletti2006}.\nIn this work, we present new VLBA observations to study in detail the inner jet\nstructure on parsec scales. We are able to investigate the evolution of shocks that arise in the jet, \nby means of the model-fitting technique. In earlier works \\citep{Piner1999, Piner2004}, the jet\ncomponents show only subluminal apparent motion, which seems to be a common\ncharacteristic of TeV blazars. Thanks to accurate measurements of changes on parsec scales, by the VLBA, we can find\nvalid constraints on the geometry and kinematics of the jet. \n\nThis paper is structured as follows: in Section 2 we introduce the dataset, in Section 3 we report the results of this work (model fits, flux density variations, apparent speeds, jet sidedness, spectral index), and in Section 4 we discuss results giving our own interpretation in the astrophysical context. We used the following conventions for cosmological parameters: $H_0=70$ km sec$^{-1}$ Mpc$^{-1}$, $\\Omega_M=0.25$ and $\\Omega_\\Lambda=0.75$, in a flat Universe. 
We defined the spectral index $\\alpha$ such that $S_{\\nu}\\propto\\nu^\\alpha$.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=-90, clip, width=0.9\\columnwidth]{gen15.ps} \n\\includegraphics[angle=-90, clip, width=0.9\\columnwidth]{gen24.ps} \\\\\n\\caption{Images of Mrk 421 with model fit components for the first epoch at 15\\,GHz (left panel) and at 24\\,GHz (right panel). Levels are drawn at $(-1, 1, 2, 4...) \\times$ the lowest contour, that is at 1.0 mJy\/beam for both images, in steps of 2. The restoring beam is shown in the bottom left corner; its size is given in Table~\\ref{observations}.} \n\\label{components}\n\\end{figure*}\n\n\\section{Observations}\nWe observed Mrk 421 throughout 2011 with the VLBA. \nThe source was observed once per month, for a total of 12 epochs, at three \nfrequencies: 15, 24, and 43 GHz. In this paper, we present the complete analysis of the whole 15 and 24\\,GHz datasets.\nWe also observed, at regular intervals, three other sources \n(J0854+2006, J1310+3220, and J0927+3902) used as fringe finders and calibrators for the band pass, the instrumental (feed) polarization, and the electric-vector position angle.\nAt each epoch, Mrk 421 was observed for nearly 40 minutes at each frequency, spread into several scans of about 3 minutes each, interspersed with calibrator sources in order to improve the (u,v)-coverage. Calibrators were observed for about 10 minutes each, generally spread into three scans of 3 minutes. \n\nFor calibration and fringe-fitting, we used the AIPS software package \\citep{Greisen2003}, and for\nimage production the standard self-calibration procedures included in the DIFMAP software\npackage \\citep{Shepherd1997}, which uses the CLEAN algorithm proposed by \\citet{Hogbom1974}. 
\nIn some epochs, one or more antennas did not work properly because of technical\nproblems; for a complete report, see Table~\\ref{observations}.\n\n\\section{Results}\n\\subsection{Images}\nWe show in Fig.~\\ref{maps} two images of Mrk 421 at 15 and 24\\,GHz, which were produced by stacking all of the images created with DIFMAP at 15 and 24\\,GHz, respectively. The alignment of the images was checked by comparing the pixel position of the peak. In both images, we set the lowest contour equal to about three times the off-source residual rms noise level.\n\nAll 12 images at each frequency show a similar structure, consisting of a well-defined and well-collimated one-sided jet structure emerging from a compact nuclear region (core-dominated source). This is the typical structure of a BL~Lac object\n\\citep{Giroletti2004a}. \nThe jet extends for roughly 4.5 mas (2.67 pc)\\footnote{1 mas corresponds to 0.59 pc.}, \nwith a position angle (PA) of $\\sim-35^\\circ$ (measured from north through east). This morphology agrees with the results of other studies of similar angular resolution \\citep{Marscher1999}.\n\nSince the morphology was very stable from epoch to epoch, the stacking did not smear any details of the structure. A sample pair of 15 and 24\\,GHz images is shown in Fig.~\\ref{components}.\n\\subsection{Model fits}\nFor each epoch, we used the model-fitting routine in DIFMAP to fit the visibility data of the source in the $(u,v)$ plane with either elliptical or circular Gaussian components. In this way, we were able to investigate in detail the inner jet structure and its evolution. \nFor all epochs at 15\\,GHz, a good fit was obtained with five Gaussian components, while at 24\\,GHz we needed six components. At both frequencies, we identified the core with the brightest, innermost, and most compact feature.\nWe label the other components C1, C2, C3, and C4, starting from the outermost (C1) to the innermost (C4). 
\nThe higher angular resolution achieved at 24\\,GHz resolves the second innermost 15\\,GHz component (C4 located at $\\sim$0.45 mas from the core) into two features (C4b at $\\sim$0.3 mas and C4a $\\sim$0.7 mas from the core) (see Fig.~\\ref{components}).\n\nThanks to the extremely fine time-sampling, we were able to make an attempt to identify the same component in each epoch.\nOverall, the components extend out to a region of about 5 mas. In this way, with a limited number of components, it was possible to analyze the proper motions and flux density levels at various times. \nFrom Fig.~\\ref{modelfit}, we can clearly see that the data occupy well-defined regions in the radius vs. time plot, and that this behavior helps us to identify the individual components across epochs. \n\nAll details of the model fit analysis are shown in Table~\\ref{gaussian}.\nWe calculated the uncertainties in the position (error bars in Fig.~\\ref{modelfit}) using the ratio of the size of each component to the signal-to-noise ratio (S\/N). In the case of very bright, compact components, the nominal error value is too small and replaced by a conservative value equal to 10\\% of the beam size \\citep{Orienti2011}. On the other hand, when the calculated error is very large (i.e. comparable to the component radius, as in the case of a very extended component with a very low flux density), we replace it with a threshold value equal to the maximum value of the calculated errors for the same component at different epochs.\n\nBy comparing with \\citet{Piner2010} and \\citet{Piner2005}, we can argue that the components C2, C3, C4a, and C4b of the present analysis quite likely correspond respectively to the components C5, C6, C7, and C8 of the aforementioned works. We also suggest that our component C1 corresponds to their component C4a. 
We will consider a more accurate identification of our components with those of previous analyses in a forthcoming paper, where the 43\\,GHz data analysis will also be presented.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[clip,width=\\columnwidth]{modelfit.eps}\n\\caption{Results of model-fit analysis. Circles and triangles refer, respectively, to positions of the Gaussian components at 15 and 24\\,GHz.} \n\\label{modelfit}\n\\end{figure}\n\n\\addtocounter{table}{1}\n\n\\subsection{Flux density variability} \\label{flux}\nUsing the results of the model-fit technique, we analyzed the temporal evolution of\nthe flux density for each component of the source. The brightest component represents the core; at 15\\,GHz, it has a mean value of around 350 mJy, which decreases along the jet until values of about 10 mJy. By comparing the flux density of each component at the various epochs, it emerges that there are no significant variations in the flux densities of the C1-C4 components. The flux density of each component remains roughly constant\nat various times within the uncertainties calculated; in any case, there is no indication of flaring activity. \nAny small variations may be artifacts brought about by our fitting procedures: for instance, the flux density of the inner components may be underestimated in some cases, because part of it was incorporated into the core component flux, or the flux density of the most extended features may be underestimated at epochs missing some short baseline data because of telescope failures.\nThis is a remarkable result because it also allows us to identify components based on their flux density values and confirms our choice of components based on their positions.\n\nIn Fig.~\\ref{lightcurves}, we show the light curve for Mrk 421 during 2011 at 15\\,GHz (upper panel) and at 24\\,GHz (lower panel), considering the total flux density (squares) and the core flux density (triangles). 
The light curve reveals an interesting feature: in the second part of the year (starting at MJD $\\sim$55700), we clearly note a decrease in the total flux density. From the complete flux-density analysis, we found that the core is the component responsible for the decrease, while the extended region does not display any significant variations. To further exclude calibration effects, we performed the same analysis on the three calibrators. In Fig.~\\ref{lightcurves}, we present the light curves of the calibrator J1310+3220 (diamonds) at 15\\,GHz (upper panel) and 24\\,GHz (lower panel). From this comparison, we clearly see that the trend of the light curves for the two sources is very different. We can assert that the flux density decrease observed for the core of Mrk 421 is a real feature. Error bars were calculated by considering a calibration error of about $10\\%$ of the flux density and a statistical error equal to three times the map rms noise.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{lightcurve15.eps} \\\\\n\\includegraphics[width=\\columnwidth]{lightcurve24.eps}\n\\caption{Light curves for Mrk 421 (squares represent the total flux density and triangles represent the core flux density) and the calibrator J1310+3220 (diamonds represent the total flux density, with scale given on the right-hand $y$-axis). The upper and lower panels refer respectively to the 15\\,GHz and 24\\,GHz data.} \n\\label{lightcurves}\n\\end{figure}\n\n\\subsection{Spectral index analysis}\nThanks to our multi-frequency data set, we were able to carry out a detailed analysis of the spectral index distribution on the parsec scale. We conducted a quantitative assessment of the spectral index of the components by comparing the flux density at the two frequencies at each epoch and averaging the results to reduce the statistical fluctuations. 
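The two-point spectral index and its propagated uncertainty can be evaluated with a short helper. The flux densities and 10% errors below are illustrative placeholders, not the measured values:

```python
import math

def spectral_index(s15, s24, ds15, ds24):
    """Two-point spectral index alpha (S_nu ~ nu^alpha) between 15 and 24 GHz,
    with the uncertainty from standard error propagation."""
    alpha = math.log(s24 / s15) / math.log(24.0 / 15.0)
    dalpha = math.sqrt((ds15 / s15)**2 + (ds24 / s24)**2) / math.log(24.0 / 15.0)
    return alpha, dalpha

# Illustrative core flux densities in mJy with assumed 10% errors:
alpha, dalpha = spectral_index(350.0, 305.0, 35.0, 30.5)
print(f"alpha = {alpha:.2f} +/- {dalpha:.2f}")
```

With these placeholder inputs the helper returns a mildly steep index, of the same order as the core value reported in the text.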
For the core, we obtained a value of $\\alpha\\sim-0.3\\pm 0.2$, and for the outermost components (C1, C2, and C3, considered altogether), we obtained $\\alpha\\sim-1.2\\pm 0.5$. The uncertainties were calculated from the theory of the propagation of errors using the formula\n\\begin{equation}\n\\Delta \\alpha = \\frac{1}{\\log(24\/15)} \\sqrt{\\left(\\frac{\\Delta S_{15}}{S_{15}}\\right)^2 + \\left(\\frac{\\Delta S_{24}}{S_{24}}\\right)^2},\n\\end{equation}\nwhere $S$ and $\\Delta S$ represent the flux density and the respective uncertainty (see the last paragraph of section~\\ref{flux}).\nThese values agree with the results obtained from the spectral index image, which we produced with the following procedure: we first produced new images for each epoch and frequency using the same $(u,v)$ range and the same restoring beam ($0.4$ mas $\\times$ $0.7$ mas). We then produced the spectral index image by combining the two images, resulting respectively from the stacking of all images at 15\\,GHz and at 24\\,GHz, using the AIPS task COMB, clipping the pixels with S\/N $<$ 3 in the input images. By summing images we were able to increase the S\/N and make the results more reliable; the stability of the individual features guarantees that we do not lose structural information in the averaging process. The alignment of the images at the two frequencies was checked by comparing the pixel position of the peak.\nWhen a shift was present, we used the LGEOM task in AIPS to align the images.\nThe resulting image presents the typical flat spectrum in the core region, with a steepening along the jet radius. However, despite the higher S\/N obtained with image stacking, a significant spread in the spectral index values is present in the diffuse jet region, hence we do not show the image here.\n\n\n\\subsection{Apparent speeds}\nFrom our model fits, we infer little or no displacement for the jet components. 
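For reference, a fitted proper motion converts to an apparent speed in units of $c$ via the angular-to-linear scale quoted above (1 mas = 0.59 pc) together with a (1+z) time-dilation factor; a minimal sketch under those assumptions:

```python
def beta_app(mu_mas_per_yr, pc_per_mas=0.59, z=0.031):
    """Apparent speed (units of c) from a proper motion in mas/yr,
    assuming 1 mas = 0.59 pc (as quoted in the text) and a (1+z) factor."""
    c_pc_per_yr = 0.306601  # speed of light in parsecs per year
    return mu_mas_per_yr * pc_per_mas * (1.0 + z) / c_pc_per_yr

# A proper motion of 0.17 mas/yr (cf. component C1) maps to beta_app ~ 0.34:
print(round(beta_app(0.17), 2))
```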
To verify and quantify this statement, we determined the speeds of each component by means of linear fits to the separation of the individual features from the core at different epochs. For the \nthree outer components (C1, C2, and C3), we used combined data at 15 and 24 GHz, since the positions of each component at the two frequencies are generally consistent within the error bars.\nFor the two inner components (C4a and C4b), we only used data at 24\\,GHz.\nResults are shown in Table~\\ref{speeds}.\n\nWe found low values for the apparent speeds, in agreement with previous studies \\citep[e.g.][]{Piner2005}.\nThe two innermost components (C4a and C4b) are essentially stationary, with an upper limit to their separation velocity of $\\sim0.1$c. In addition, C2 and C3 are consistent with being stationary, while the outermost component C1 has a low-significance (1.5$\\sigma$) subluminal motion of $\\sim0.3$c. \nIf this trend of increasing velocity at larger radii were real and if the apparent speeds shown in Table~\\ref{speeds} represented the bulk apparent speed of the plasma in the jet, we could speculate that some acceleration mechanism acts in the outer region of the jet.\n\n\\begin{table}[t]\n\\begin{center}\n\\footnotesize\n\\caption{Apparent speeds from the linear fit analysis.\\label{speeds}}\n\\begin{tabular}{lcc}\n\\hline\nComponent & Apparent speed & $\\beta_{\\rm app}$ \\\\\n& (mas\/yr) & \\\\\n\\hline\n\\hline\nC1 &$0.17\\pm0.12$ & $0.34\\pm0.24$ \\\\\nC2 &$0.08\\pm0.10$ & $0.16\\pm0.20$ \\\\\nC3 &$0.05\\pm0.06$ & $0.10\\pm0.11$ \\\\\nC4a &$-0.01\\pm0.09$ & $-0.02\\pm0.17$ \\\\\nC4b &$-0.02\\pm0.06$ & $-0.03\\pm0.11$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Jet\/counter-jet ratio}\nWe estimated the ranges of viewing angles $\\theta$ and of $\\beta$ from the jet\/counter-jet brightness ratio. 
Assuming that the source has two symmetrical jets of the same intrinsic power, we used the equation\n$$\\frac{B_J}{B_{cJ}}=R=\\left(\\frac{1+\\beta \\cos\\theta}{1-\\beta \\cos\\theta}\\right)^{2-\\alpha},$$\nwhere $B_J$ and $B_{cJ}$ are, respectively, the jet and counter-jet brightnesses and $\\alpha$ represents the spectral index defined in the introduction; we adopted the $(2-\\alpha)$ exponent, since the jet is smooth and does not contain well-defined compact {\\it blobs}.\nFor the jet brightness, we used $B_J\\sim28.4$ mJy\/beam, measured at 24\\,GHz, in the image resulting from the stacking of all the 12 epoch images, in the jet region located at $\\sim1$ mas from the core. For the counter-jet, which is not visible, we used an upper limit provided by the $3\\sigma$ rms noise level measured in the image, which resulted in $B_{cJ}=0.11$ mJy\/beam; this consequently yields a lower limit to both $R$ and $\\beta\\cos\\theta$. With a value of $\\alpha=-0.4$, in agreement with our spectral index images, we obtained $R>254.8$ and then $\\beta\\cos\\theta>0.82$. Therefore, the minimum allowed jet bulk velocity is $\\beta_{\\rm min}=0.82$ (corresponding to a bulk Lorentz factor $\\gamma>1.74$) and the maximum viewing angle is $\\theta_{\\rm max} = 35.0^\\circ$.\n\\vspace{-0.25 cm}\n\\section{Discussion and conclusions}\n\nThe Doppler factor defined as\n\n$$\\delta=\\frac{1}{\\gamma(1-\\beta \\cos\\theta)}$$\n\n\\noindent is a key element in the study of blazars, since it affects various parameters such as the observed brightness, the SED peak frequency, the variability timescale, and more. Modeling of the SED and study of the variability in different wavebands generally require large values of the blazar Doppler factors; in the case of Mrk421, \\citet{Gaidos1996} estimated $\\delta>9$, from the observed TeV variability time of about 30 min and \\citet{Abdo2011} required a Doppler factor between 20 and 50 to reproduce the broadband SED. 
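A quick numerical illustration of this definition, using the sidedness lower limit $\beta = 0.82$ derived above and, purely as an assumption for illustration, a viewing angle of 4 degrees alongside a much faster flow:

```python
import math

def doppler(beta, theta_deg):
    """Relativistic Doppler factor delta = 1 / (gamma * (1 - beta*cos(theta)))."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

# The minimum speed allowed by the jet/counter-jet limit gives only mild
# boosting, while a near-light-speed flow at the same angle is strongly boosted:
print(f"delta(beta=0.82,  theta=4 deg) = {doppler(0.82, 4.0):.2f}")
print(f"delta(beta=0.998, theta=4 deg) = {doppler(0.998, 4.0):.2f}")
```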
In turn, VLBI observations can also constrain $\\delta$ by posing limits on $\\beta$ and $\\theta$, as provided by the various arguments discussed in this work.\n\nWhen closely spaced repeated observations are available, the study of the proper motion is a useful tool in determining the ranges for $\\beta$ and $\\theta$. Surprisingly, several works \\citep[e.g.][]{Giroletti2004b, Piner2008, Piner2010} have reported subluminal motions, sometimes consistent with the component being stationary, in the jets of all the $\\sim$10 TeV blazars for which \nproper motion studies have been performed in the literature. Thanks to the large number of dual-frequency observations, the fine time-sampling, and the high quality of the data provided by the good $(u,v)$-coverage, we performed a robust identification of the Gaussian components and constrained their motion to be consistent with no displacement at all. At the same time, the high sensitivity and in particular the stacked image place significant constraints on the jet\/counter-jet ratio.\n\nThe first immediate consequence is that we can reject the hypothesis that the small $\\beta_{\\rm app}$ is solely due to a projection effect, since it would require an unrealistically narrow viewing angle: in the case of component C4b, the upper limit to the observed motion implies a viewing angle $<1.3^\\circ$ to reproduce the observed jet\/counter-jet ratio (and even smaller to agree with the high energy limits). If the jets' distribution is isotropic on the sky, the real number of misaligned sources (parent population) is incompatible with these very small values of $\\theta$. For example, in the Bologna Complete Sample selected by \\citet{Giovannini2005} at low frequency (and thus free from Doppler favoritism bias), one would expect fewer than 0.03 sources with $\\theta<1.3^\\circ$ \\citep[see also][]{Tavecchio2005,Henri2006}. 
\n\nOn the other hand, since larger values of the viewing angle capable of reproducing the observed lack of proper motion are incompatible with the jet\/counter-jet ratio, we conclude that the pattern velocity cannot be representative of the bulk flow velocity. The Gaussian components obtained in our model fit provide a good description of the visibility data but do not represent well-defined, high-contrast jet features \\citep[see][]{Lyutikov2010}. In our interpretation, the low apparent speeds found imply that the proper motion of Mrk421 does not provide any information about the jet bulk velocity; even on the basis of the sole jet brightness ratio, untenable viewing angles would be necessary to match the pattern and bulk velocities. \n\nWhat, then, are the real values of the viewing angle and the jet bulk velocity in Mrk421? To reproduce the observed jet asymmetry, we need to consider a range of velocities $0.82<\\beta<1$ and angles $0^\\circ<\\theta<35.0^\\circ$. \nWe can exclude the upper range of the $\\theta$ values, since it would not reproduce the high Doppler factors required by high energy observations \\citep{Gaidos1996, Abdo2011}; in particular, it is not possible to achieve $\\delta>20, 10, 5, 3$ when $\\theta>3.0^\\circ, 5.7^\\circ, 11.5^\\circ, 19.4^\\circ$, respectively.\n\nSmaller angles thus seem to be favored since they are the only ones consistent with high values of $\\delta$; however, such small angles would still represent a challenge to the observed radio properties. 
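The quoted angle thresholds follow from maximising the Doppler factor over $\beta$ at fixed $\theta$: the maximum occurs at $\beta = \cos\theta$, where $\delta_{\rm max} = 1/\sin\theta$. A short check of the four thresholds (a sketch of the reasoning, not the exact computation performed in the text):

```python
import math

def delta_max(theta_deg):
    """Maximum Doppler factor over beta at a fixed viewing angle: the maximum
    of 1/(gamma*(1 - beta*cos(theta))) occurs at beta = cos(theta), where it
    equals 1/sin(theta)."""
    return 1.0 / math.sin(math.radians(theta_deg))

# Beyond each angle, the corresponding delta threshold cannot be reached:
for theta, threshold in [(3.0, 20), (5.7, 10), (11.5, 5), (19.4, 3)]:
    print(f"theta = {theta:4.1f} deg -> delta_max = {delta_max(theta):.1f} "
          f"(so delta > {threshold} is unreachable for larger theta)")
```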
We were able to estimate the intrinsic power of the radio core $P_c^{\\rm intr}$, by debeaming the observed monochromatic luminosity of the core $P_c^{\\rm obs}$ with the equation\n\n$$P_c^{\\rm obs}=P_c^{\\rm intr} \\times \\delta_c^{2-\\alpha}.$$\nWith a value of $P_c^{\\rm obs}\\sim6.8\\times10^{23}$ W\\,Hz$^{-1}$ at 15\\,GHz, $\\alpha=-0.3$, and $\\delta=20$, we obtained for the intrinsic power of the core $P_c^{\\rm intr}\\sim5.8\\times10^{20}$ W\\,Hz$^{-1}$; this value was at the very low end of the typical range of intrinsic power found for different samples of radio galaxies \\cite[e.g.][]{Liuzzo2011}, suggesting that lower values of $\\delta$ provide a more typical core power. \n\nMoreover, the 12 monthly observations have not revealed any dramatic flux-density variability in the core of the source, which further points to a lower Doppler factor for the radio jet.\nWe estimated the variability brightness temperature of the core ($T_{\\rm B,var}$) with the formula proposed by \\citet{Hovatta2009}\n$$ T_{\\rm B,var}= 1.548 \\times 10^{-32} \\frac{\\Delta S_{\\rm max}d_L^2}{\\nu^2\\tau^2(1+z)},$$\nwhere $\\nu$ is the observed frequency in GHz, $z$ is the redshift, $d_L$ is\nthe luminosity distance in meters, $\\Delta S_{\\rm max}$ is the difference between the maximum value for the core flux density and the minimum value, and $\\tau$ is the variability time. 
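The variability brightness temperature formula can be evaluated directly. The inputs below (a core flux-density change of about 0.13 Jy and a luminosity distance of about 134 Mpc for $z = 0.031$) are illustrative assumptions, not the exact values used in the text:

```python
def t_b_var(delta_s_jy, d_l_m, nu_ghz, tau_days, z):
    """Variability brightness temperature in the form quoted above:
    T = 1.548e-32 * dS_max * d_L^2 / (nu^2 * tau^2 * (1+z))."""
    return 1.548e-32 * delta_s_jy * d_l_m**2 / (nu_ghz**2 * tau_days**2 * (1.0 + z))

MPC_IN_M = 3.086e22                      # metres per megaparsec
T = t_b_var(0.13, 134.0 * MPC_IN_M, 15.0, 90.0, 0.031)
print(f"T_B,var ~ {T:.1e} K")            # of order 1e10 K: no strong beaming needed
```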
With the values provided by our observations and $\\tau \\sim 90$ days, we obtained a value of $T_{\\rm B,var} \\sim 2.1\\times 10^{10}$~K, which does not require any significant beaming.\n\nWe similarly calculated $T_{\\rm B}$ for the most compact component following the standard formula \\citep{Piner1999,Tingay1998}\n$$T_{\\rm B}=1.22 \\times 10^{12} \\frac{S(1+z)}{ab\\nu^2},$$\nwhere $S$ is the flux density of the component measured in Jy, $a$ and $b$ are the full widths at half maximum of the major and minor axes, respectively, of the component measured in mas, $z$ is the redshift, and $\\nu$ is the observation frequency in GHz. The resulting $T_{\\rm B}$ values are on the order of a few $\\times 10^{11}$ K, only slightly exceeding the limit derived by \\citet{Readhead1994} from equipartition arguments.\n\nTaken together, the lack of superluminal features, the low core dominance, and the weak variability suggest a scenario in which no strong beaming is required in the radio jet. This is not uncommon in TeV blazars \\citep{Piner2004, Piner2008}, but unprecedentedly firm observational support for it has been provided by our intensive campaign. Low values of the Doppler factor, e.g. $\\delta \\sim 3$, can reproduce the observational radio properties, including the jet brightness asymmetry. \n\nWe conclude that the Doppler factor must be different in the radio band than in the $\\gamma$-ray band. Since we do not expect the viewing angle to change significantly, this leads us to the necessity of a velocity structure in the jet, as previously discussed by e.g.\\ \\citet{Chiaberge2000}, \\citet{Georganopoulos2003}, and \\citet{Ghisellini2005}. Our images do not provide strong evidence in favour of either a radial or transverse velocity structure, although previous works have revealed a limb brightening in Mrk421, on both a milliarcsecond scale at 43 GHz \\citep{Piner2010} and at $d>10$ mas at 5 GHz \\citep{Giroletti2006}.
This would favor the presence of a transverse velocity structure across the jet axis. This structure consists of two components: a fast inner \\textit{spine} and a slower outer \\textit{layer}. Different Doppler factors were obtained depending on whether we measured the speed of the spine or the layer. \n\nA viable scenario for Mrk421 is that the viewing angle is between $2^\\circ$ and $5^\\circ$, which is consistent with the statistical counts of low-power radio sources and the possibility of reaching the high Doppler factors required by SED modeling and high-energy variability. The jet velocity is structured, with a typical Lorentz factor of $\\gamma\\sim 1.8$ in the radio region (yielding $\\delta \\sim 3$), and $\\gamma \\sim \\delta \\sim 20$ in the high-energy emission region. For example, assuming $\\theta=4^\\circ$, $\\beta_{\\rm h.e.}=0.998$, and $\\beta_{\\rm r}=0.82$, we obtained values of $\\delta_{\\rm h.e.}=14.3$ and $\\delta_{\\rm r}=3.2$ and successfully reproduced all the observational properties of the source.\n\nIn summary, the detailed analysis presented in this paper has largely confirmed, with improved data quality, the expectations based on the knowledge so far achieved for TeV blazars. We have also estimated with a good level of significance some important and fundamental parameters ($\\delta$, $\\theta$, $\\beta$, $\\alpha$) that characterize the physical processes in blazars. \nHowever, there is still much to be understood, and we expect to obtain other significant results from the analysis extended to other wavelengths, particularly in the $\\gamma$-ray domain. Additional studies of the dataset presented in this paper are planned and will deal with, e.g., the 43\\,GHz images and the polarization properties.
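As a numerical cross-check of the structured-jet scenario, the sketch below evaluates the Doppler factor for the two velocity components at $\theta=4^\circ$ and debeams the observed core power with the radio-region value, using the relation $P_c^{\rm obs}=P_c^{\rm intr}\,\delta_c^{2-\alpha}$ and the numbers given in the text (an illustrative check, not the analysis actually performed):

```python
import math

def doppler(beta, theta_deg):
    """Doppler factor: delta = 1 / (gamma * (1 - beta*cos(theta)))."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

theta = 4.0                   # deg, assumed common to both regions
d_he = doppler(0.998, theta)  # high-energy emission region
d_r = doppler(0.82, theta)    # radio (VLBI) region

# Debeam the observed 15 GHz core power with the radio-region Doppler
# factor: P_intr = P_obs / delta**(2 - alpha).
P_obs, alpha = 6.8e23, -0.3   # W/Hz, spectral index from the text
P_intr = P_obs / d_r ** (2.0 - alpha)
print(f"delta_h.e. = {d_he:.1f}, delta_r = {d_r:.1f}, P_intr = {P_intr:.1e} W/Hz")
```

This recovers the quoted $\delta_{\rm h.e.}\simeq14.3$ and $\delta_{\rm r}\simeq3.2$ to within rounding, and an intrinsic core power of a few $\times10^{22}$ W Hz$^{-1}$, far more typical of radio galaxies than the $\sim5.8\times10^{20}$ W Hz$^{-1}$ implied by $\delta=20$.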
Moreover, we intend to combine our dataset with those of other works (e.g.\\ \\citealt{Piner2005} or the MOJAVE survey, \\citealt{Lister2009}) to increase the temporal coverage of the observations and obtain even tighter constraints over a longer time frame.\n\\vspace{-0.2 cm}\n\\begin{acknowledgements}\nThis work is based on observations obtained through the BG207 VLBA project, which makes use of the Swinburne University of Technology software correlator, developed as part of the Australian Major National Research Facilities Programme and operated under licence \\citep{Deller2011}. The\nNational Radio Astronomy Observatory is a facility of the National Science Foundation operated\nunder cooperative agreement by Associated Universities, Inc. For this paper we made use of the NASA\/IPAC Extragalactic Database (NED), which is operated by the JPL, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We acknowledge financial contribution from grant PRIN-INAF-2011. This research is partially supported by KAKENHI (24540240). KVS and YYK are partly supported by the Russian Foundation for Basic Research (project 11-02-00368), and the basic research program ``Active processes in galactic and extragalactic objects'' of the Physical\nSciences Division of the Russian Academy of Sciences. YYK is also supported by the Dynasty Foundation. The research at Boston University was supported in part by NASA through Fermi grants NNX08AV65G, NNX08AV61G, NNX09AT99G, NNX09AU10G, and NNX11AQ03G, and by US National Science Foundation grant AST-0907893. We thank Dr.
Claire Halliday for the language editing work, which improved the text of the present manuscript.\n\\end{acknowledgements}\n\\vspace{-0.8 cm}\n\\section{INTRODUCTION} \nThe life cycles of dwarf galaxies with present-day stellar masses $\\lesssim10^{9}~M_{\\odot}$ illustrate many challenges to our current understanding of galaxy formation and evolution. While we see that starbursts do not play an important role in the global star-formation at the present epoch \\citep{lee}, it is likely that the star formation histories of dwarf galaxies are complex and varied \\citep{mateo98} and that their typical star-formation rates were higher in the past \\cite[e.g.][]{gallagher}. That star formation in low-mass galaxies may be very burst-like is predicted by hydrodynamical simulations \\cite[e.g.,][]{pelupessy,stinson,shensims}. In general, star formation appears to be regulated by stellar feedback in the form of supernovae and winds that heat and deplete the central reservoirs of cold gas required for continued star formation. In simulations of lower mass systems, feedback is predicted to eject gas out of the galaxy and into the halo, resulting in an episodic star formation history \nacross the entire galaxy \\citep{stinson}.
Repeated bursts have been cited as the driving force behind intense feedback mechanisms that can change the dynamical profile of the systems by driving baryons out of the center of the halo on short time scales, which also displaces the dark matter from the center and creates a cored dark matter profile \\citep{n96,pontzen,governato,zolotov,amorisco}, potentially addressing one of the principal challenges to the standard $\\Lambda$CDM cosmology. \n\n\nUntil recently, however, the progenitors of contemporary dwarf galaxies at high redshift could not be studied directly. ``Archaeological'' studies of resolved stellar populations in local dwarfs \\cite[e.g.,][]{grebel, weisz} have confirmed that their star-formation histories are indeed not smooth, and have also found that a large fraction of their stars formed at early epochs ($z > 1$). For such old stellar populations ($>7$ Gyr), the age resolution achieved by the archaeological approach is insufficient to distinguish star formation events with durations of $10^7$ and $10^8$ years, leaving the actual ``burstiness'' of the star formation history unconstrained. \n\n\n\n\nLook-back studies, directly observing the star formation activity in very low-mass systems at $z > 1$, were impractical until recently. But near-infrared spectroscopy and deep imaging from the Wide-field Camera 3 (WFC3) on the \\textit{Hubble Space Telescope} (HST) have provided a new opportunity. An abundant population of galaxies at $z\\sim 1.7$ with extremely high equivalent widths (EWs) was found by \\citet{vdw} using data from the Cosmic Assembly Near-infrared DEep Legacy Survey\\footnote{\\url{http:\/\/candels.ucolick.org\/}} \\cite[CANDELS,][]{candels1,candels2}. They find a large number of objects with unusual $I-J$ and $J-H$ colors, which imply the presence of a bright emission line in the $J$-band, likely to be [O III] after considering other photometric constraints.
Slitless grism spectroscopy confirms the redshifts and emission-line interpretation of four of these objects. These are the high-EW tail of the emission line galaxies found in \\citet{straughn} via slitless grism spectroscopy at the HST, a \ntail which is also probed by \\citet{atek}. \nOther objects with similarly low masses and metallicities at these redshifts have been discovered in \\citeauthor{gb2} (2012b; $EW_{[O III],rest}$=1499 \\AA) and \\citeauthor{vdw13} (2013; $EW_{[O III],rest}$=1200 \\AA) assisted by strong gravitational lensing, in \\citeauthor{masters} (2014; $EW_{[O III],rest}$=154 \\AA), in \\citeauthor{erb10} (2010; $EW_{[O III],rest}$=285 \\AA), and in \\citeauthor{maseda} (2013; discussed further below). All of these objects are emission-line dominated systems with low metallicities and high equivalent widths, the so-called ``Extreme Emission Line Galaxies'' (EELGs). \n\nThese systems are likely the high-EW tail of the distribution of high-redshift dwarf galaxies, and resemble the class of blue compact dwarf galaxies \\cite[BCDs,][]{sargent} observed locally in several ways: low masses, high SFRs relative to their mass, and strong emission lines. However, the EELGs are indeed ``extreme,'' with sSFR values an order of magnitude higher than the BCDs, similar to the strong [O III] emitters (``green peas'') discovered photometrically in the SDSS by \\citet{greenpea}, as well as spectroscopically by \\citet{amorin} and \\citet{izotovSDSS}. As suggested by those authors, the star-formation mode exhibited in the ``green peas'' is likely a relic of a mode that was much more prevalent in the earlier Universe: their comoving number density is one to two orders of magnitude lower than the value of $3.7\\times10^{-4}~Mpc^{-3}$ for EELGs at $z\\sim 1.7$ \\citep{vdw}. \n\nThe implication of strong emission lines and an extremely faint continuum are that these systems have low masses and are undergoing an intense burst of star formation. 
Equivalent widths $>$ 100~\\AA$ $ and stellar masses of $10^8-10^9 ~M_{\\odot}$ imply specific star formation rates (sSFR) in excess of 10 Gyr$^{-1}$, which is more than an order of magnitude higher than star-forming systems of equivalent masses at lower redshifts \\citep{karim}. These low stellar and dynamical masses are confirmed in \\citet{maseda}, who also rule out significant contributions to the total mass from older stellar populations for objects with restframe [O III]$\\lambda5007$ equivalent widths $> 500$ \\AA$ $ and intimate that the bursts are intense and have low metallicities. The implied star formation rates and masses have only been reproduced recently in hydrodynamical simulations \\cite[e.g. in][]{shensims}. \n\nAlthough these previous observational studies have placed constraints on various quantities, many uncertainties remain. In the case of \\citet{maseda} and \\citet{vdw}, all of the observed [O III], H$\\beta$, and H$\\alpha$ emission is attributed to star formation and not AGN. In \\citet{vdw}, upper limits are placed on the black hole masses from the UV-continuum slopes, but their starbursting nature is merely plausible given the lack of knowledge about low-metallicity AGN \\citep{izotovAGN, kewley13}. Low metallicities are simply inferred from the consistency of the SED fits using low-Z (0.2 $Z_{\\odot}$) templates with the observed photometry. \n\nThese objects represent a field of growing importance, both for studies of low-mass dwarf galaxies and for the highest-z galaxies observed. Depending on the strength, duration, and initial mass of these bursts, the descendants could display a wide range of masses, from present-day $\\sim 10^9~M_{\\odot}$ dwarf galaxies to potentially Milky Way-like systems, particularly if merging is important. In any case, these galaxies are also significant contaminants in searches for higher-z dropout sources \\citep{atek}, as in the case of \\textit{UDFj-30546284} \\citep{ellis,gb13,bouwens}.
As mentioned in \\citet{atek} and \\citet{coe}, great care must be taken in the interpretation of high-z candidates, as EELGs can potentially reproduce their observed colors (although the implied EWs in the latter case exceeded those that can be produced from star formation alone).\n\nSome issues still remain in our understanding of such systems, such as placing stringent limits on their low masses and metallicities, including for objects with EWs $<$ 500 \\AA, and establishing the starburst origin of their strong emission lines. Here we \ncombine both high- and low-resolution near-IR spectroscopy with broadband \nphotometry. With the low-resolution spectra, we select candidates for follow-up high-resolution spectroscopy and obtain emission line fluxes. The high-resolution spectra constrain various emission line ratios, some of which are useful diagnostics of AGN activity, as well as line widths, which are themselves a probe of the dynamical masses of the systems. This provides strong evidence for their low masses and low metallicities, as well as confirming their starbursting nature. Sophisticated modeling of the broadband SEDs constrains the stellar masses and ages, as well as providing information on the dust content and metallicities. Together, this tells us about the strength and duration of the star-forming event.\n\n\n\n\n\nThe remainder of this paper is organized as follows. In $\\S$ 2 we describe the near-infrared spectroscopy and multi-band photometry used in the initial candidate selection process and the subsequent follow-up observations. $\\S$ 3 presents the results of the spectroscopic study, including emission-line widths, physical sizes, and masses. In $\\S$ 4 we confirm their low metallicities and rule out AGN as a significant source of contamination. $\\S$ 5 presents the implications of this work for the gas content of these systems, and $\\S$ 6 summarizes our findings and puts the results into the overall context of the formation history of dwarf galaxies.
We adopt a flat\n$\\Lambda$CDM cosmology with $\\Omega_m=0.3$ and H$_0=70 ~$km s$^{-1}$\nMpc$^{-1}$ and a \\citet{chabrier03} IMF throughout.\n\n\\section{DATA}\n\n\n\n\\subsection{Candidate Selection}\n\\label{sec:sel}\nIn order to search for and investigate these ``starbursting dwarf galaxies,'' we take a multi-faceted approach. Our preliminary search utilizes data from the 3D-HST survey\\footnote{\\url{http:\/\/3dhst.research.yale.edu\/Home.html}} \\citep{vd,gb}, a near-infrared spectroscopic Treasury program utilizing the WFC3. This program provides WFC3\/IR primary and Advanced Camera for Surveys (ACS) parallel imaging and grism spectroscopy over approximately three-quarters (625 square arcminutes) of the CANDELS fields. The main source of spectroscopic data comes from the WFC3 G141 grism, with an effective wavelength coverage of 1.1 to 1.65 $\\mu$m.\n\n\\begin{figure}\n\n\n \\includegraphics[width=0.475\\textwidth]{f1.eps}\n\n\n\\caption{Restframe equivalent widths as a function of redshift for our entire sample. Circles represent equivalent widths from [O III]$_{5007}$ and squares represent equivalent widths from H$\\alpha$. The equivalent widths were determined from photometry and grism spectroscopy (see Section \\ref{sec:sel}). For objects in which both [O III] and H$\\alpha$ are visible in the grism spectrum, both equivalent widths are plotted. Open symbols show objects without LUCI1 line detections, due to intrinsic faintness or skyline contamination. Our emission-line detection rate is close to 100\\% for objects with EW $>$ 300 \\AA.}\n\\label{fig:ewz}\n\\end{figure}\n\n\n\n\n\nThe grism data allow us to select and confirm strong line emitters spectroscopically. Photometric cuts, such as the $iJH$ cut of \\citet{vdw} and a similar $ViJH$ selection (excess in $H$ compared to a blue continuum, all from CANDELS), are used to preselect objects with strong features in their SEDs.
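The grism coverage translates directly into redshift windows for the strongest lines. A short sketch: we assume here the full $\sim1.075$-$1.70~\mu$m G141 sensitivity range, slightly wider than the 1.1-1.65 $\mu$m effective coverage quoted above, which is what the selection windows correspond to:

```python
# Redshift window for a line of rest-frame wavelength lam0 (um) to fall
# inside the grism coverage; 1.075-1.70 um is an assumed G141 sensitivity
# range, not a number taken from this paper.
lam_min, lam_max = 1.075, 1.70

def z_window(lam0_um):
    return lam_min / lam0_um - 1.0, lam_max / lam0_um - 1.0

z_oiii = z_window(0.5007)   # [O III] 5007
z_ha = z_window(0.6563)     # H-alpha
print(f"[O III]: {z_oiii[0]:.2f} < z < {z_oiii[1]:.2f}")
print(f"H-alpha: {z_ha[0]:.2f} < z < {z_ha[1]:.2f}")
```

This reproduces the $1.15<z<2.40$ ([O III]) and $0.64<z<1.59$ (H$\alpha$) selection windows used below.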
The G141 grism data are reduced according to \\citet{gb} for the UKIDSS Ultra-Deep Survey (UDS), GOODS-South (GOODS-S), and COSMOS fields, and are then used to confirm bright lines with little or no associated continuum. While we find numerous examples of these objects at $z > 1$ (Maseda et al. in prep.), we focus here on objects where the emission lines do not fall in the wavelength range between the $J$-, $H$-, and $K$-bands, enhancing the chances for detectability from the ground. The low-resolution WFC3 grism data ($R\\sim 130$) provide redshift information such that targets can be selected with [O III] in the redshift range 1.15 $< z <$ 2.40 and H$\\alpha$ in the redshift range 0.64 $< z <$ 1.59 to $\\delta$z\/z $\\sim$ 0.005. However, our photometric preselection relies on flux excesses such that we mostly see objects at 1.3 $\\lesssim z \\lesssim$ 1.8 and 2.1 $\\lesssim z \\lesssim$ 2.3. Note that we do not resolve the continuum in our ground-based observations (discussed presently), so all EW values are calculated from the grism spectra directly. In total, we select 31 objects for ground-based spectroscopic observations using this method. An additional five candidates are taken from the sample of \\citet{vdw} and one from \\citet{straughn11}. Their equivalent widths as a function of redshift are shown in Figure \\ref{fig:ewz}. Non-detections are due to either intrinsic faintness in the lines (we are sensitive to line flux in our ground-based spectra, not EW) or contamination from OH-skylines.\n\n\n\n\n\n\n\n\\subsection{LBT\/LUCI1 Spectroscopy}\n\nWe observed our grism-selected sample with the LUCI1 multi-object spectrograph \\citep{seifert} on the 8.4 m \\textit{Large Binocular Telescope} (LBT). We use LUCI1 in MOS mode, splitting our 31 candidates between four masks, during April 2012 (two masks in the COSMOS field), October 2012 (one mask in the UDS field), and March 2013 (an additional mask in the COSMOS field). 
Approximately two hours are spent observing in each of the $J$- and $H$-bands for the first two COSMOS masks (A and B) using 1$''$ slits, two hours in each of the $H$- and $K$-bands for the UDS mask (C) with 0.6$''$ slits, and three total hours on the final COSMOS mask (D) in the $J$- and $H$-bands using 0.6$''$ slits. All data are taken using the high-resolution $210\\_zJHK$ grating ($R_J=8460$, $R_H=7838$, $R_K=6687$). The exposures are dithered by 3$''$ and are of varying durations, depending on the band: $J$-band data using 600s exposures, $H$-band data with 300s exposures, and $K$-band data with 120s exposures. The shorter \nintegrations in the $H$- and $K$-bands lead to lower signal-to-noise (S\/N) due to additional readnoise, but are necessary so as not to saturate the detector in the regions with the brightest OH sky lines. Seeing was generally good (between 0.5$''$ and 1$''$ in the optical) during the COSMOS observations, with good transparency. For the UDS observations, seeing was generally better than 1$''$. 
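The grating resolving powers above set the instrumental velocity resolution that is later removed from the measured line widths; a quick estimate, assuming a Gaussian line-spread function:

```python
# Instrumental velocity resolution implied by the resolving powers:
# FWHM = c / R; sigma = FWHM / 2.355 for an assumed Gaussian line-spread
# function.
C_KMS = 2.998e5
sigma_instr = {band: C_KMS / R / 2.355
               for band, R in (("J", 8460), ("H", 7838), ("K", 6687))}
for band, s in sigma_instr.items():
    print(f"{band}-band: sigma_instr ~ {s:.1f} km/s")
```

The resulting $\sim15$-$19$ km s$^{-1}$ is small compared with most of the measured dispersions, so the instrumental correction is modest.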
Table \\ref{tab:obs} details the observations of individual objects, including IDs (from the CANDELS catalogs, discussed in Section \\ref{sec:sed}), coordinates, and the main line detections.\n\\begin{deluxetable}{lcccc}\n\\tabletypesize{\\scriptsize} \n\\tablecaption{Summary of Near-IR Observations\\label{tab:obs}}\n\\tablewidth{0pt}\n\\tablehead{ \\colhead{ID} & \\colhead{RA} & \\colhead{Dec} & \\colhead{Mask} & \\colhead{Observed} \\\\\n & (deg) & (deg) & &Lines}\\\\\n \\startdata\nGOODS-S-7892&53.17194 &-27.75915&...&[O III], H$\\alpha$\\\\\nGOODS-S-43693&53.07129&-27.70580&...&H$\\alpha$\\\\\nGOODS-S-43928&53.05158&-27.70476&...&[O III]\\\\\nUDS-6195&34.42648&-5.25577&...&[O III], H$\\alpha$\\\\\nUDS-6377&34.42857&-5.25532&...&[O III], H$\\alpha$\\\\\nUDS-7665&34.39076 &-5.25080 &C&[O III]\\tablenotemark{a}\\\\\nUDS-10138&34.42336 &-5.24226&C&[O III]\\tablenotemark{a}\\\\\nUDS-12435&34.41087 &-5.23481&C&H$\\alpha$\\tablenotemark{a}\\\\\nUDS-12539&34.47389&-5.23423&...&[O III], H$\\alpha$\\\\\nUDS-12920&34.39870 &-5.23320&C&...\\\\\nUDS-15319&34.40516& -5.22493&C&...\\\\\nUDS-19167&34.43140 &-5.21212&C&[O III]\\\\\nUDS-24154&34.39137 &-5.19531&C&[O III], H$\\alpha$\\\\ \nCOSMOS-8509&150.09837 &2.26596 &B&...\\\\ \nCOSMOS-8700&150.09740 &2.26848 &B&...\\\\ \nCOSMOS-10599&150.09535 &2.28725 &B&[O III]\\\\\nCOSMOS-11530&150.11931 &2.29688 &B&...\\\\ \nCOSMOS-12102&150.09728 &2.30252 &B&[O III], H$\\alpha$\\\\\nCOSMOS-13184&150.12424 &2.31367 &B&[O III]\\\\ \nCOSMOS-14249&150.11011 &2.32459 &B&...\\\\\nCOSMOS-14435&150.16232&2.32602&D&...\\\\\nCOSMOS-15091&150.15955&2.33330&D&[O III]\\\\\nCOSMOS-16152&150.18762 &2.34469&A&...\\\\\nCOSMOS-16286&150.17699&2.34539&D&[O III]\\tablenotemark{b}\\\\\nCOSMOS-16566&150.17067&2.34830&D&[O III], H$\\alpha$\\\\\nCOSMOS-17118&150.15114&2.35410&D&[O III]\\\\\nCOSMOS-17162&150.15134 &2.35482&A&...\\\\\nCOSMOS-17295&150.18318&2.35537&D&...\\\\%H$\\beta$, [O III]& Frame-Frame\\\\\nCOSMOS-17539&150.12814 
&2.35810&A&...\\\\\nCOSMOS-17839&150.15677 &2.36080&A&[O III]\\\\\nCOSMOS-18299&150.17098 &2.36536&A&...\\\\\nCOSMOS-18358&150.16719 &2.36689&A&[O III]\\\\\nCOSMOS-18582&150.13281 &2.36878&A&...\\\\\nCOSMOS-18777&150.18628 &2.37054&A&...\\\\\nCOSMOS-19049&150.13886 &2.37340&A&[O III]\\tablenotemark{b}, H$\\alpha$\\\\\nCOSMOS-19077&150.18309 &2.37295&A&[O III]\\\\\nCOSMOS-20589&150.18056&2.38822&D&... \\enddata\n\\tablecomments{All IDs refer to the CANDELS catalog for that particular field. Mask A was observed on 21 April 2012, mask B on 22 April 2012, mask C on 10 and 11 October 2012, and mask D on 12 March 2013, all at the LBT; \\textit{GOODS-S-7892} was observed on 15 December 2012, \\textit{GOODS-S-43693} on 1 October 2012, \\textit{GOODS-S-43928} on 15 October 2012, \\textit{UDS-6195} and \\textit{UDS-6377} on 27 August 2012, and \\textit{UDS-12539} on 2 and 27 August 2012 (120 minutes in the NIR), all at the VLT. Line detections are at least 1$\\sigma$.}\n\\tablenotetext{a}{Due to technical problems during the first night of observations, the total exposure time used for these line extractions is only 3600s in $H$.}\n\\tablenotetext{b}{Only the [O III]$\\lambda$5007 component.}\n\\end{deluxetable}\n\\subsubsection{LUCI1 Data Reduction}\nWe first mask regions of the spectra that are affected by persistence due to the acquisition and alignment exposures. This effect is reduced with each readout, so we only mask the regions if they are 2$\\sigma$ higher than the background level. We then create flat-field images from lamp-illuminated exposures and remove cosmic rays using a median-stacking technique. The most important cosmetic step is the removal of bad pixels, which are identified in the lamp-illuminated exposures, and the hot pixels in the spectra, which are identified in dark exposures. Additionally, for an as-yet unknown reason, the first exposure of every series has small ``halos'' around the hot pixels. 
As such, we remove slightly larger regions around these hot pixels in the first exposure of every series. Our wavelength calibration is done using the OH sky lines, with a code based on \\texttt{XIDL} routines\\footnote{\\url{http:\/\/www.ucolick.org\/~xavier\/IDL\/}}. We also use \\texttt{XIDL} for the final sky subtraction, which uses a spline-fitting algorithm to measure and remove the lines. To maximize S\/N, we do not use frame-frame subtraction and instead measure the sky from the individual frames, with the exception being some objects with particularly bad skyline contamination, where frame-frame subtraction better removes the skylines but adds noise to the spectrum. Dithering the exposures by 3$''$ ensures a decreased dependence on the pixel-to-pixel variations in the detector.\n\nSince our objects of interest have virtually no visible continuum, one-dimensional spectral extraction must be done carefully. We must visually search for the lines in the stacked reduced spectra. Since we know the wavelengths of the brightest lines from the grism data, this exercise is straightforward. We isolate the line region according to a signal-to-noise cut, and then collapse that region in the wavelength direction to create a slice containing the spatial line profile. This profile is fit with a standard Gaussian function. The width of the distribution, $\\sigma$, and the center, $\\mu$, are then used in the full spectral extraction: a Gaussian function with these same $\\sigma$ and $\\mu$ values is fit along the spatial direction at each wavelength pixel, tracing a constant distance from the edge of the (curved) slit, with the amplitude as the only free parameter, reflecting the electron counts at that particular wavelength. All sets of observations were reduced and analyzed separately and the resolution is incorporated into the calculation of the intrinsic line widths to remove systematics in the velocity dispersion measurements.
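The amplitude-only extraction just described can be sketched on synthetic data as follows (an illustration, not the reduction code actually used; flux-weighted moments stand in for the Gaussian profile fit, and the slit curvature is ignored):

```python
import numpy as np

# Schematic of the extraction: collapse the line region of a synthetic 2-D
# spectrum to get the spatial profile, estimate the Gaussian centre mu and
# width sigma, then fit only the amplitude of that fixed profile at each
# wavelength (a linear least-squares solve per column).
rng = np.random.default_rng(1)
n_spat, n_wave = 40, 200
y = np.arange(n_spat)
profile = np.exp(-0.5 * ((y - 19.0) / 2.5) ** 2)               # spatial shape
line = 50.0 * np.exp(-0.5 * ((np.arange(n_wave) - 100) / 3.0) ** 2)
frame = np.outer(profile, line) + rng.normal(0.0, 0.5, (n_spat, n_wave))

# Spatial line profile from the collapsed line region.
prof = np.clip(frame[:, 90:111].sum(axis=1), 0.0, None)
mu = np.sum(y * prof) / prof.sum()
sigma = np.sqrt(np.sum((y - mu) ** 2 * prof) / prof.sum())

# Amplitude-only fit per wavelength column: a = (P.f) / (P.P).
P = np.exp(-0.5 * ((y - mu) / sigma) ** 2)
spec1d = P @ frame / (P @ P)
```

The recovered `spec1d` traces the input emission line while suppressing pixels far from the spatial centroid, which is the point of fixing $\mu$ and $\sigma$ before the per-column fit.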
Flux calibration is done by comparing the integrated counts from the LUCI1 spectrum to the integrated line flux from the 3D-HST grism spectrum when possible.\n\n\n\n\\subsection{VLT\/X-Shooter Spectroscopy}\n\n\nFor an additional six sources, we obtained near-IR and visible spectra using the X-Shooter spectrograph \\citep{vernet} on the 8.2 m \\textit{ESO Very Large Telescope} (VLT). We observed five of the objects initially found in \\citet{vdw}: three with previous grism-spectroscopic confirmation in \\citet{straughn11} and Weiner et al. (in prep.), and the two remaining candidates with the largest photometrically inferred line fluxes. A sixth candidate was selected from the \\citet{straughn11} sample. Observations were done in long-slit mode from August to December 2012 with 40-minute integrations using 1$''$\/0.9$''$\/0.9$''$ (UVB\/VIS\/NIR) slits and the 100k\/1pt\/hg\/1$\\times$2 readout mode. See Table \\ref{tab:obs} for the targets and observing dates. The proximity of objects \\textit{UDS-6377} and \\textit{UDS-6195} allowed them to be observed in the same slit. \n\n\n\nAlthough the X-Shooter spectrograph also observes in the UV-Blue, four of our six objects were not observed during dark time, rendering the data unusable. The near-IR region of X-Shooter spans the combined $YJHK$ region from 1024$-$2048 nm, while the visible region spans 559.5$-$1024 nm. Reduction of the X-Shooter data is performed using version 2.0.0 of the ESO XSHOOTER pipeline\\footnote{\\url{http:\/\/www.eso.org\/sci\/software\/pipelines\/xshooter\/xsh-pipe-recipes.html}}, which provides merged, 2D near-IR and visible spectra. Extraction is performed in a similar manner to the LUCI1 data.\n\n\n\n\n\n\n\\section{Dynamical and Stellar Masses}\nIn order to confirm our hypothesis that these systems represent starbursting dwarf galaxies, we must confirm their low stellar masses, low metallicities, and high star formation rates.
Stellar masses can be constrained through SED fits to broadband photometry, and metallicities can be constrained by observing emission-line ratios, such as [O III]\/H$\\beta$. Any star formation rate is contingent on the nature of the emission lines, since AGN can also produce very high excitations.\n\\subsection{Methods and Results}\n\n\\label{sec:lines}\n\n\n\n\\begin{deluxetable*}{lccccccccc}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{Sample of Emission Line Galaxies\\label{tab:lines}}\n\\tablewidth{0pt}\n\\tablehead{ \\colhead{ID} & \\colhead{m$_{F140W}$} & \\colhead{r$_{eff}$\\tablenotemark{a}} & \\colhead{f$_{H\\alpha}$} &\\colhead{EW$_{H\\alpha}$}&\\colhead{f$_{[O III]}$}&\\colhead{EW$_{[O III],5007}$}&\\colhead{z$_{spec}$}&\\colhead{$\\sigma_{H\\alpha}$}&\\colhead{$\\sigma_{[O III]}$} \\\\\n & (AB) & (kpc) & &(\\AA) &&(\\AA)& &($\\>{\\rm km}\\,{\\rm s}^{-1}$) &($\\>{\\rm km}\\,{\\rm s}^{-1}$)}\n\\startdata\n GOODS-S-33131 & 23.66$\\pm$0.08& 0.68$\\pm$0.62& ... & ... & 7.36$\\pm$2.99 & 693$\\pm$47& 1.687 & 48.7$\\pm$4.3 & 52.3$\\pm$5.7 \\\\\n GOODS-S-43693 & 24.36$\\pm$0.12& 0.35$\\pm$0.06& ... & ... & 16.9$\\pm$0.88 & 861$\\pm$66& 1.738 & 54.4$\\pm$4.5 &... \\\\\n GOODS-S-43928 & 24.59$\\pm$0.15& 1.9$\\pm$0.48& 4.5$\\pm$1.2\\tablenotemark{b} & 199\\tablenotemark{b} & 3.7$\\pm$1.6\\tablenotemark{b} & 176\\tablenotemark{b} & 1.472 &...& 31.4$\\pm$8.2 \\\\\n \n UDS-6195 & 24.26$\\pm$0.13& 1.4$\\pm$0.42& ... & ... & ... & 701$\\pm$95\\tablenotemark{c}& 1.687 & 69.9$\\pm$4.9 & 54.7$\\pm$6.1 \\\\\n UDS-6377 & 24.53$\\pm$0.17& 0.67$\\pm$0.04& ... & ... & ... & 731$\\pm$86\\tablenotemark{c}& 1.664 & 54.5$\\pm$4.5 & 48.2$\\pm$5.9 \\\\\n UDS-7665 & 25.40$\\pm$0.14& 0.51$\\pm$0.08& ... & ... & 13.7$\\pm$2.76 & 803$\\pm$162& 2.298 & ...& 57.8$\\pm$9.7 \\\\\n UDS-10138 & 23.77$\\pm$0.03& 0.75$\\pm$0.08& ... & ...& 10.9$\\pm$0.57 & 322$\\pm$17 & 2.151 & ...& 80.9$\\pm$10.0 \\\\\n UDS-12435 & 23.42$\\pm$0.03& 1.0$\\pm$0.06& ...& ... 
& 12.2$\\pm$0.94 & 263$\\pm$31 & 1.611 & 65.2$\\pm$11.3 & ...\\\\\n UDS-12539 & 23.39$\\pm$0.06& 1.3$\\pm$0.07& ... & ... & 35.5$\\pm$2.73 & 713$\\pm$42& 1.621 & 81.3$\\pm$4.3 & 71.1$\\pm$5.7\\\\\n UDS-19167 & 23.99$\\pm$0.04& 1.1$\\pm$0.20 & ...& ...& 21.5$\\pm$2.82& 723$\\pm$95 & 2.185 & ...& 54.2$\\pm$9.4 \\\\\n UDS-24154 & 23.78$\\pm$0.04 & 1.8$\\pm$0.16 & ...& ... & 21.9$\\pm$3.09 & 503$\\pm$34 & 2.297 & 72.5$\\pm$13.1 & 61.0$\\pm$10.8 \\\\\nCOSMOS-10599 & 24.47$\\pm$0.26 & 0.67$\\pm$0.16&...& ...& 11.8$\\pm$0.89 & 714$\\pm$85 & 2.220 & ...& 30.9$\\pm$9.0 \\\\\nCOSMOS-12102 & 22.82$\\pm$0.06 & 1.6$\\pm$0.08 & 19.9$\\pm$1.0 & 360$\\pm$18 & 49.4$\\pm$2.28 & 630$\\pm$29 & 1.463 & 230.8$\\pm$14.7 & 241.3$\\pm$12.7 \\\\\nCOSMOS-13184 & 23.91$\\pm$0.16 & 0.53$\\pm$0.10 &... & ... & 11.5$\\pm$2.94 & 598$\\pm$189 & 2.199 & ...& 40.3$\\pm$8.9 \\\\\nCOSMOS-15091 & 25.46$\\pm$0.13& 0.75$\\pm$0.11& ... & ... & 5.99$\\pm$3.61 & 628$\\pm$152& 1.583 & ...& 38.2$\\pm$10.0 \\\\\nCOSMOS-16286 & 24.64$\\pm$0.11& 1.1$\\pm$0.13& 0.39$\\pm$3.32 & 41$\\pm$345 & 8.76$\\pm$3.46 & 888$\\pm$351& 1.444 & ...& 46.7$\\pm$14.4 \\\\\nCOSMOS-16566 & 24.60$\\pm$0.09& 1.4$\\pm$0.09& 4.22$\\pm$3.26 & 560$\\pm$432 & 7.28$\\pm$3.58 & 388$\\pm$191& 1.437 & 25.5$\\pm$14.0 & 32.8$\\pm$8.4 \\\\\nCOSMOS-17118 & 24.16$\\pm$0.13 & 4.8$\\pm$0.33& ... & ... & 12.3$\\pm$4.22 & 493$\\pm$169& 1.656 & ...& 46.5$\\pm$8.8 \\\\\nCOSMOS-17839& 24.36$\\pm$0.24 & 0.99$\\pm$0.06 & 3.27$\\pm$3.20 & 325$\\pm$230 & 16.3$\\pm$3.58 & 1126$\\pm$247 & 1.412 & ...& 43.3$\\pm$8.9 \\\\\nCOSMOS-18358 & 22.84$\\pm$0.04 & 1.6$\\pm$0.02& ... & ... & 30.2$\\pm$0.97 & 316$\\pm$26 & 1.645 & ...& 55.9$\\pm$9.0 \\\\\n COSMOS-19049 & 22.93$\\pm$0.05 & 2.3$\\pm$0.06 & 16.4$\\pm$1.2 & 368$\\pm$28 & 20.9$\\pm$2.20 & 330$\\pm$35 & 1.370 & 81.9$\\pm$50.2 & 122.0$\\pm$11.0\\\\\n COSMOS-19077 & 23.69$\\pm$0.12 & 1.6$\\pm$0.05 & ...& ... 
& 20.5$\\pm$0.77 & 536$\\pm$20 & 1.649 & ...& 47.7$\\pm$9.5\\enddata\n\\tablecomments{All fluxes are given in units of 10$^{-17}$ erg s$^{-1}$ cm$^{-2}$. Equivalent widths are quoted in the rest-frame. A description of the size measurements is given in Section \\ref{sec:lines}.}\n\\tablenotetext{a}{\\citet{vdw12}}\n\\tablenotetext{b}{\\citet{straughn11}}\n\\tablenotetext{c}{\\citet{vdw}}\n\\end{deluxetable*}\n\n\n\nIn our near-IR spectra, the most prominent lines seen are [O III]$\\lambda$5007 and H$\\alpha$, along with their associated complexes ([O III]$\\lambda$4959+H$\\beta$, and [N II]$\\lambda\\lambda$6548,6584, respectively). With the exception of \\textit{COSMOS-12102}, the emission lines can be well-fit by a Gaussian function, as described in \\citet{maseda}. \\textit{COSMOS-12102} has the broadest emission lines of the sample. In addition to their broadness, they display some degree of asymmetry and are thus not well-fit by Gaussian functions. \n \nThe skewness could be caused by several processes, such as the presence of strong outflows. Such interpretations are beyond the scope of this paper and demand additional observations.\n\n \nThe best-fitting redshifts, velocity dispersions, and line ratios are given in Table \\ref{tab:lines}.\n\n\n\n\n\nAs described in \\citet{maseda}, velocity dispersions of the strong emission lines can be used to estimate the dynamical masses, assuming the line width comes entirely from gravitational motion in a virialized system such that\n\\begin{equation}\n M_{dyn} = 3\\frac{r_{\\rm{eff}}\\sigma^2}{G}.\n\\label{eqn:dyn}\n\\end{equation}\nHere, $\\sigma$ is the observed line width from our NIR spectrum and $r_{eff}$ is the effective radius of the galaxy from the public CANDELS catalog released in \\citet{vdw12}. 
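The dynamical mass relation above can be illustrated with representative round numbers for this sample ($r_{\rm eff}\sim1$ kpc and $\sigma\sim55$ km s$^{-1}$; assumed values, not a specific object):

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
KPC = 3.086e19     # m
MSUN = 1.989e30    # kg

def m_dyn(r_eff_kpc, sigma_kms):
    """M_dyn = 3 * r_eff * sigma^2 / G, returned in solar masses."""
    return 3.0 * (r_eff_kpc * KPC) * (sigma_kms * 1.0e3) ** 2 / G / MSUN

# Representative (assumed) values: r_eff ~ 1 kpc, sigma ~ 55 km/s.
log_m = math.log10(m_dyn(1.0, 55.0))
print(f"log10(M_dyn / M_sun) = {log_m:.2f}")
```

This gives a mass of order $10^9~M_\odot$, typical of the sample.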
Typical objects are 1.0$\\pm$0.1 kpc in both the $J_{F125W}$- and $H_{F160W}$-bands.\n\nThese dynamical masses are listed in Table \\ref{tab:dp}, ranging from $10^{8.39}$ to $10^{10.6} ~M_{\\odot}$, with a median mass of $10^{9.13}~M_{\\odot}$. The uncertainty in the dynamical mass estimate comes primarily from the systematic uncertainty in the proportionality constant of 3, which relates the intrinsic velocity $v$ to the observed velocity dispersion $\\sigma$; we assume the same 33\\% uncertainty on this constant as \\citet{rix}, since in most cases our observed line widths and physical sizes are well-constrained. Further details can be found in \\citet{maseda}. \\citet{amorin12}, in a study of ``green peas,'' observe multiple star-forming regions and gas flows which are seen as asymmetries and broad, low-intensity wings. While we do not see such clear evidence for outflows via asymmetric line profiles (\\textit{COSMOS-12102} excepted) or broad wings, we cannot currently rule out contributions of non-gravitational motions to the observed line widths since we do not resolve the continuum and can only observe the bright, central line regions. Any such contributions would mean that the intrinsic dispersion is lower and that our dynamical mass estimates are upper limits.\n\n\\section{SED Fitting}\n\\label{sec:sed}\nMulti-band photometry is obtained from 3D-HST \\citep{skelton}, covering 0.3-24 $\\mu$m for the GOODS-S (23 bands), UDS (17 bands), and COSMOS (31 bands) fields. Visual inspection of the Spitzer-IRAC frames shows contamination from nearby objects in \\textit{UDS-7665}, \\textit{COSMOS-13184}, and \\textit{COSMOS-15091}, so their SEDs do not include the contaminated points. For the same reason, we do not include any of the data from the 5.8 and 8$~\\mu$m IRAC channels for this sample, even though it is available as part of the publicly-released catalogs.
Only three objects have detections at 24 $\\mu$m with Spitzer-MIPS (discussed in Section \\ref{sec:starburstagn}), and an upper limit of 10 $\\mu$Jy is adopted for the remainder of the sample.\n\nWe fit the broadband spectral energy distributions of our galaxies in the same manner as \\citet{maseda} using a custom version of the \\texttt{MAGPHYS} code\\footnote{\\url{http:\/\/www.iap.fr\/magphys}} \\citep{magphys}, which computes the emission by stellar populations and the attenuation by dust in a two-component ISM, and includes nebular line emission computed self-consistently using the \\citet{pacifici12} model (\\citeauthor{cl} \\citeyear{cl}; C. Pacifici et al., in prep). The broad-band fluxes computed with this model include the contamination by emission lines, so they can be directly and robustly compared with the observed fluxes that we know are likely emission-line contaminated for these galaxies \\cite[at these and higher redshifts, it has been shown that an improper treatment of nebular contamination to broadband magnitudes results in an overestimate of stellar masses and hence an underestimate in sSFRs, e.g.][]{atek,curtislake,stark}. \\texttt{MAGPHYS} compares the input photometry to an extensive library of SED templates spanning a wide range in parameters such as star formation history, metallicity, age, and dust optical depth using a Bayesian method. As such, all results quoted are the medians of the posterior probability distributions \nfor each parameter, with \nuncertainties corresponding to the 16$^{th}$ and 84$^{th}$ percentiles for the distribution. In cases where the output probabilities are not well constrained, typically due to the (potentially systematic) uncertainties in the photometry, we adopt an uncertainty of 0.3 dex in the relevant parameter, the typical uncertainty in determining stellar masses from fits to broadband photometry \\citep{conroy}; a formal error of 0 is indicative of the models not fitting the data well.
An example probability distribution for some of the various parameters is shown in the Appendix, Figure \\ref{fig:pdf}. Results of the SED fitting are given in Table \\ref{tab:dp}, showing high sSFRs, low stellar masses, low metallicities, young ages, and low dust extinction. Since the NIR sizes represent the restframe optical and hence the stellar continuum at these redshifts, we are probing the same physical region in both mass estimates.\n\nThe median $\\tau_V$ ($V$-band optical depth seen by young stars in the birth clouds) is 0.2, consistent with the very blue observed SEDs. Even with the lack of infrared data to directly probe the dust content of these systems, we can place a limit on the dust mass based on the total dust attenuation and luminosity inferred from the SED fits and a prior on the dust temperature as in \\cite{dacunha}. Resulting limits are $\\lesssim 10^7~M_{\\odot}$ and hence negligible compared to the stellar masses.\n\nAs mentioned before, the critical piece of additional information that we include to perform these fits is the line fluxes. We see a median SFR of $\\sim$9 $M_{\\odot}~yr^{-1}$ which, combined with the low stellar masses, justifies our emission line criteria for the selection of starbursts. By separating the emission lines from the stellar continuum light, we are better able to trace the gas-phase metallicities. This results in metallicity estimates consistent with direct probes of the oxygen abundance using emission-line ratios (see Section \\ref{sec:metals}). In addition, it allows for better estimates of the extinction in the HII regions, which produce the aforementioned $\\tau_V$ values.\n\nOur model library of stellar population SEDs contains a broad range of\ncomplex star formation histories (SFHs), including bursts on top of\nextended SFHs with a variety of evolutionary trends (rising, falling,\nand constant). 
Despite these efforts, which far exceed the still\ncommon use of exclusively exponentially-declining SFHs, systematic\nuncertainties remain. In particular for galaxies with significant\nstar formation activity in the past $\\sim 50$~Myr, as is the case\nhere, red supergiants with individual luminosities of $\\sim\n10^5~L_{\\odot}$ can easily outshine more massive populations of stars\nwith any age $>50$~Myr, especially in the near-infrared. Prior\nknowledge of the SFH is needed to address this issue, producing a\ndegree of circularity in the problem of stellar mass determinations.\nKeeping this in mind, we proceed and note where necessary that for\ngalaxies with estimated ages $\\lesssim 50$~Myr the mass (and age)\nestimates must be lower limits, as seen in tails to higher masses and \nmass-weighted ages, e.g. Figure \\ref{fig:pdf}.\n\nExtracted near-IR spectra and SED fitting results for \\textit{GOODS-S-33131} are shown in Figure \\ref{fig:caspec}; all remaining objects in our sample are shown in Figures 2a, 2b, and \\ref{fig:seds}. Telluric corrections are applied as needed.\n\n\\begin{figure}\n \\includegraphics[width=.475\\textwidth]{ca22115.eps}\n\n\n\\caption{Example of a broadband SED including best-fitting model (top), WFC3 grism image\/spectrum (middle), and X-Shooter spectrum (bottom) for an object in our sample. The NIR spectra for the remaining objects can be seen in the Figures 2a for [O III] and 2b for H$\\alpha$; the SEDs for the remaining objects can be seen in the Appendix, Figure \\ref{fig:seds}. SED fits are performed as described in Section \\ref{sec:sed}, with red points denoting the measured photometry (open points are upper limits), the blue curve denoting the non-attenuated SED, and the black curve denoting the observed SED including dust attenuation. 
The direct F140W image is shown on the left and the dispersed G141 grism image is shown to the right, with important spectral lines labeled and contamination subtracted as described in Momcheva et al. (in prep.). The X-Shooter spectrum is smoothed by 3 pixels and is flux-calibrated according to the grism line flux. The shaded gray area represents the +\/$-$ 1$\\sigma$ flux uncertainties and the red curve shows the best-\nfitting model of \nthe emission lines.}\n\\label{fig:caspec}\n\n\\end{figure}\n\n\\begin{deluxetable*}{lcccccc}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{Derived Parameters\\label{tab:dp}}\n\\tablewidth{0pt}\n \\tablehead{ \\colhead{ID} & \\colhead{log $M_{dyn}$} & \\colhead{log $M_{\\star}$} & \\colhead{log Age} & \\colhead{Z} & \\colhead{log SFR} & \\colhead{$\\tau_{V}$} \\\\\n &($M_{\\odot}$)&($M_{\\odot}$)&(yr)&($Z_{\\odot})$&($M_{\\odot}~$yr$^{-1}$)&}\n\\startdata\nGOODS-S-33131 & 9.05$\\pm$0.30&8.91$_{-0.075}^{+0.050}$&7.78$_{-0.065}^{+0.190}$&0.321$_{-0.132}^{+0.180}$&0.892$_{-0.080}^{+0.030}$&0.242$_{-0.085}^{+0.030}$\\\\\nGOODS-S-43693 & 8.86$\\pm$0.31&8.28$_{-0.050}^{+0.195}$&7.50$_{-0.045}^{+0.240}$&0.147$_{-0.098}^{+0.160}$&0.557$_{-0.065}^{+0.090}$&0.137$_{-0.090}^{+0.050}$\\\\\nGOODS-S-43928 & 9.12$\\pm$0.38&8.83$_{-0.065}^{+0.070}$&8.39$_{-0.260}^{+0.175}$&0.169$_{-0.120}^{+0.272}$&0.262$_{-0.080}^{+0.190}$&0.327$_{-0.145}^{+0.505}$\\\\\nUDS-6195 & 9.47$\\pm$0.33&7.60$_{-0.300}^{+0.320}$&6.85$_{-0.300}^{+0.385}$&0.299$_{-0.130}^{+0.300}$&0.447$_{-0.065}^{+0.300}$&0.027$_{-0.300}^{+0.085}$\\\\\nUDS-6377 & 9.04$\\pm$0.31&8.32$_{-0.300}^{+0.140}$&6.96$_{-0.300}^{+0.735}$&0.081$_{-0.022}^{+0.096}$&1.08$_{-0.580}^{+0.300}$&1.12$_{-0.835}^{+0.025}$\\\\\nUDS-7665& 9.07$\\pm$0.33& 8.52$_{-0.300}^{+0.300}$& 7.04$_{-0.300}^{+0.300}$& 0.155$_{-0.300}^{+0.300}$& 1.21$_{-0.300}^{+0.300}$& 0.862$_{-0.300}^{+0.300}$\\\\\nUDS-10138 & 9.53$\\pm$0.31&8.43$_{-0.300}^{+0.270}$&7.10$_{-0.300}^{+0.430}$&0.177$_{-0.300}^{+0.010}$& 
1.05$_{-0.135}^{+0.300}$&0.112$_{-0.040}^{+0.300}$\\\\\nUDS-12435 & 9.47$\\pm$0.33&9.42$_{-0.055}^{+0.045}$&8.60$_{-0.130}^{+0.050}$&0.189$_{-0.138}^{+0.392}$& 0.677$_{-0.030}^{+0.070}$&0.082$_{-0.040}^{+0.085}$\\\\\nUDS-12539& 9.66$\\pm$0.30&8.67$_{-0.300}^{+0.300}$&7.05$_{-0.300}^{+0.300}$&0.581$_{-0.300}^{+0.300}$&1.29$_{-0.300}^{+0.300}$&0.667$_{-0.300}^{+0.300}$\\\\\nUDS-19167& 9.35$\\pm$0.34& 8.96$_{-0.300}^{+0.300}$ & 8.58$_{-0.300}^{+0.300}$& 0.085$_{-0.300}^{+0.300}$& 1.02$_{-0.300}^{+0.300}$& 0.192$_{-0.300}^{+0.300}$\\\\\nUDS-24154 & 9.67$\\pm$0.33&9.10$_{-0.300}^{+0.100}$&7.49$_{-0.300}^{+0.165}$&0.531$_{-0.030}^{+0.300}$& 1.38$_{-0.050}^{+0.300}$&0.462$_{-0.005}^{+0.105}$\\\\\nCOSMOS-10599 & 8.65$\\pm$0.40&8.72$_{-0.300}^{+0.240}$& 7.49$_{-0.300}^{+0.245}$ & 0.531$_{-0.362}^{+0.300}$& 1.00$_{-0.020}^{+0.015}$&0.462$_{-0.300}^{+0.080}$\\\\\nCOSMOS-12102& 10.8$\\pm$0.30&9.57$_{-0.300}^{+0.300}$& 8.40$_{-0.300}^{+0.300}$& 0.155$_{-0.300}^{+0.300}$& 1.63$_{-0.300}^{+0.300}$&1.77$_{-0.300}^{+0.300}$\\\\\nCOSMOS-13184 & 8.78$\\pm$0.36&9.00$_{-0.220}^{+0.070}$& 7.28$_{-0.095}^{+0.365}$ & 0.087$_{-0.040}^{+0.068}$& 1.29$_{-0.120}^{+0.230}$&0.407$_{-0.110}^{+0.245}$\\\\\nCOSMOS-15091& 8.88$\\pm$0.37&7.92$_{-0.120}^{+0.280}$& 7.61$_{-0.105}^{+0.965}$ & 0.169$_{-0.084}^{+0.042}$& 0.062$_{-0.020}^{+0.200}$&0.077$_{-0.300}^{+0.115}$\\\\\nCOSMOS-16286 & 9.22$\\pm$0.40&8.44$_{-0.125}^{+0.160}$& 7.92$_{-0.650}^{+0.525}$ & 0.171$_{-0.054}^{+0.044}$& 0.317$_{-0.325~}^{+0.440~}$&1.05$_{-0.850}^{+0.500}$\\\\\nCOSMOS-16566 & 9.02$\\pm$0.37&8.60$_{-0.045}^{+0.040}$& 8.29$_{-0.110}^{+0.065}$ & 0.137$_{-0.074}^{+0.324}$& 0.147$_{-0.045}^{+0.035}$&0.037$_{-0.020}^{+0.050}$\\\\\nCOSMOS-17118& 9.86$\\pm$0.33&8.47$_{-0.315}^{+0.210}$& 7.66$_{-0.425}^{+0.920}$ & 0.155$_{-0.070}^{+0.136}$& 0.752$_{-0.085}^{+0.300}$&0.192$_{-0.010}^{+0.105}$\\\\\nCOSMOS-17839 & 9.11$\\pm$0.34&8.19$_{-0.190}^{+0.135}$&7.43$_{-0.195}^{+0.335}$& 0.091$_{-0.018}^{+0.300}$& 
0.392$_{-0.035}^{+0.105}$&0.037$_{-0.300}^{+0.005}$\\\\\nCOSMOS-18358& 9.54$\\pm$0.32& 9.43$_{-0.300}^{+0.300}$& 8.12$_{-0.300}^{+0.300}$ & 0.629$_{-0.300}^{+0.300}$ & 1.11$_{-0.300}^{+0.300}$&0.037$_{-0.300}^{+0.300}$\\\\\nCOSMOS-19049 & 10.4$\\pm$0.30&9.85$_{-0.300}^{+0.168}$ &8.36$_{-0.300}^{+0.165}$& 0.081$_{-0.300}^{+0.046}$&1.31$_{-0.300}^{+0.120}$&1.72$_{-0.300}^{+0.200}$\\\\\nCOSMOS-19077 & 9.40$\\pm$0.34&8.92$_{-0.225}^{+0.300}$& 8.05$_{-0.555}^{+0.300}$ & 0.173$_{-0.300}^{+0.358}$& 0.692$_{-0.300}^{+0.285}$&0.197$_{-0.300}^{+0.265}$\\enddata\n\\tablecomments{Quoted values for $M_{\\star}$, Age (mass-weighted), Z, SFR, and $\\tau_{V}$ are the medians of the probability distributions from \\texttt{MAGPHYS} with associated +\/-- 1$\\sigma$ values. Cases where we have an uncertainty of 0 occur when the data cannot be well-explained by the models and not when the models constrain the output parameters well, which manifests itself as a large $\\chi^2$ value. As such, we will adopt an uncertainty of 0.3 dex \\cite[the typical uncertainty for stellar masses obtained from fitting broadband photometry,][]{conroy} in those cases to be used in the subsequent analysis. }\n\n\\end{deluxetable*}\n\n\n\\begin{figure*}\n\\label{fig:2a}\n\\epsscale{.98}\n\\figurenum{2a}\n \\plotone{f2_1.eps}\n \\caption{WFC3 G141 grism and LUCI1 or X-Shooter spectrum. The spectra are smoothed by 3 pixels and are flux-calibrated according to the grism line fluxes. The shaded gray area represents the +\/$-$ 1$\\sigma$ flux uncertainties and the red curve shows the best-fitting model of the emission lines. The dotted lines show the position of the [O III]$\\lambda\\lambda$5007, 4959 and H$\\beta$ emission lines.}\n\\end{figure*}\n\n \\begin{figure}\n \\label{fig:2b}\n \\figurenum{2b}\n \\epsscale{.98}\n \\plotone{f2_2.eps}\n \\caption{WFC3 G141 grism and LUCI1 or X-Shooter spectrum, same as Figure \\ref{fig:2a} but for the detected H$\\alpha$ lines. 
The positions of the [N II] lines as well as the H$\\alpha$ line are denoted by the dotted lines.}\n \\end{figure}\n\n\n\\begin{figure}\n\n\\includegraphics[width=0.475\\textwidth]{f3.eps}\n\n\\caption{Ratio of stellar mass to total dynamical mass versus age (mass-weighted) for the sample, with dynamical masses based on line measurements coming from the ground-based NIR spectra and stellar masses and ages from \\texttt{MAGPHYS}.}\n\\label{fig:dynmass}\n\\end{figure}\n\n\\begin{figure}\n \\includegraphics[width=0.475\\textwidth]{masshist.eps}\n \\caption{Histograms of stellar mass for our sample compared to the larger H$\\alpha$ samples of \\citet{fs09}, \\citet{erb06}, \\citet{mancini11}, and \\citet{contini} at $z>1$. While variations in the stellar templates and the IMF used can alter stellar mass estimates by $\\sim$0.3 dex, our sample still lies at considerably lower masses (up to 2 orders of magnitude lower) than the other samples.}\n \\label{fig:masshist}\n\\end{figure}\n\n\n\\begin{figure}\n\n\\includegraphics[width=0.475\\textwidth]{f4.eps}\n\n\\caption{Stellar mass versus observed line width for the sample. There is no clear trend of increasing line width with increasing stellar mass. The overplotted dashed line corresponds to $M_{\\star} \\sim M_{virial}$ for a fixed $r_{\\rm{eff}}$ of 1 kpc.}\n\\label{fig:sigmass}\n\\end{figure}\n\nAs demonstrated in \\citet{maseda}, there is a tight, linear relation between the total dynamical and stellar mass estimates. Figure \\ref{fig:dynmass} may show that the stellar-to-dynamical mass ratio correlates with the (mass-weighted) age as well. This qualitatively agrees with the model in which a non-replenished reservoir of gas is turned into stars, such that the older systems have formed a proportionally larger number of stars. For further discussion, see Section \\ref{sec:discussion}. 
To date, these systems represent the lowest dynamical masses ever measured at this epoch, and typically have lower stellar masses than those galaxies in similar star-forming spectroscopic surveys \\cite[e.g.][see also Figure \\ref{fig:masshist} and Section \\ref{sec:comp}]{masters}. Note that the systematic errors in $M_{dyn}$ dominate the size of the error bar. \\textit{COSMOS-12102} shows a broad and asymmetric line profile for which the line width most likely does not trace the underlying gravitational potential. The lack of a clear trend in the relationship \nbetween the two main ``observed'' quantities, $M_{\\star}$ and $\\sigma$, can be seen in Figure \\ref{fig:sigmass}, which suggests that any relationship between $M_{\\star}$ and $M_{dyn}$ must be driven by a relationship between the size of the galaxy and $M_{\\star}$. Figure \\ref{fig:sizemass} shows this relationship, as our sample lies on or below the observed size-mass relation for late-type galaxies at $2 < z < 2.25$ as found in \\cite{vdw14} in 3D-HST and CANDELS. \n\n\n\\begin{figure}\n\n\\includegraphics[width=0.475\\textwidth]{f5.eps}\n\\caption{Effective radius versus stellar mass for the sample. The overplotted lines are the low-mass extrapolation (and intrinsic scatter) to the relationship for star-forming galaxies at $2 < z < 2.25$ as found in \\citet{vdw14}. The emission line galaxies in our sample fall on or somewhat below the size-mass relation for normal star-forming galaxies at similar redshifts.}\n\\label{fig:sizemass}\n\\end{figure}\n\n\\begin{figure}\n\n \\includegraphics[width=0.475\\textwidth]{f6.eps}\n\\caption{sSFR versus stellar mass.
Black points are the results from this study, orange points are the values from \\citet{vdw}, the red point is from \\citet{erb10}, the green point is from \\citet{gb2}, the blue lines are the results for the high\/low-z bins of star-forming galaxies in \\citet{fumagalli}, and the green lines are the results for the characteristic sSFR (SFR*\/M$_{\\star}$, see the text for details) for narrow-band selected star-forming galaxies in \\citet{sobral}. The diagonal dotted line represents the nominal detection limit from the 3D-HST survey of 2.8 $M_{\\odot}$ yr$^{-1}$ at $z \\sim 1.5$. The sSFRs are averaged over 10 Myr.}\n\\label{fig:ssfr}\n\\end{figure}\n\nThe extreme nature of the star-formation in these systems is clearly seen in Figure \\ref{fig:ssfr}. The stellar masses measured here are beginning to probe the same regime as \\citet{vdw} and \\citet{gb2}, with similar sSFR values. Note that the sSFR values obtained from \\texttt{MAGPHYS} are averaged over the last 10 Myr: given the ephemeral nature of the bursts, the current (s)SFR may not reflect the most vigorous period of star formation in these systems. These are more than an order of magnitude in excess of the sSFR values characteristic of the star-forming population of massive galaxies at similar redshifts, measured in H$\\alpha$ from \\citet{fumagalli}.\n\n\n\n\n\\subsection{Comparison to Other Studies}\n\\label{sec:comp}\nAt lower redshifts 0.11 $\\leq z \\leq$ 0.93, \\citet{amorin14a,amorin14b} have isolated a sample of EELGs selected on [O III] flux that show remarkable similarities to our sample with sizes $r_{1\/2} \\sim$ 1.3 kpc, masses $10^7-10^{10} M_{\\odot}$, sSFR values $10^{-7}-10^{-9}$ yr$^{-1}$, and metallicity estimates of 0.05 $-$ 0.6 $Z_{\\odot}$ (some determined ``directly\" using the [O III]$\\lambda$4363 line as well, see Section \\ref{sec:metals}). 
Deep HST-ACS $I$-band images reveal that most (80\\%) of their EELG sample show non-axisymmetric morphologies indicative of recent mergers or interactions. Only samples that are complete over the redshift ranges in question will allow a direct comparison between the two populations, which are currently only split artificially according to either optical or near-IR observations. However, caution must be taken in any interpretation: the much higher number density at $z\\sim$1.7 from \\citet{vdw} than at very low-z from \\citet{greenpea} implies that there could be a very different mechanism involved to trigger the bursts. Connecting the two populations in a qualitative sense is the subject of ongoing work.\n\nOur current observations do not allow us to make strong conclusions about the internal dynamics of individual systems. Currently, the only opportunity to study the internal dynamics of such small systems at high redshift is with gravitational lensing: \\citet{jones10} note that their $z\\sim2-3$ strongly-lensed galaxies would resemble mergers or dispersion-dominated systems without the additional spatial resolution provided by the lensing. Two objects similar to those presented here are \\textit{MACS J2135-0102} ($M_{\\star}$=$10^{9.8}~M_{\\odot}$, z=3, $r_{1\/2}$=1.75 kpc, SFR=40 $M_{\\odot}$ yr$^{-1}$, $\\sigma_{H\\alpha}$=54$\\>{\\rm km}\\,{\\rm s}^{-1}$, $V_c$=67$\\>{\\rm km}\\,{\\rm s}^{-1}$) from \\citet{jones10} and \\textit{SHIZELS-10} ($M_{\\star}$=$10^{9.4}~M_{\\odot}$, z=1.45, $r_{1\/2}$=2.3 kpc, SFR=10 $M_{\\odot}$ yr$^{-1}$, $\\sigma_{H\\alpha}$=64$\\>{\\rm km}\\,{\\rm s}^{-1}$, $V_c$=26$\\>{\\rm km}\\,{\\rm s}^{-1}$) from \\citet{swinbank12}, which both appear to have a (relatively weak) rotational component to their dynamics. Further discussion of the dynamics of our present sample is deferred to Section \\ref{sec:discussion}.\n\nSeveral other studies have begun to characterize the starforming properties of high-z EELGs using various techniques.
The most obvious comparable study to this work is the WFC3 Infrared Spectroscopic Parallels survey \\cite[WISP,][]{wisp}, and specifically the study of \\citet{masters}. As a similar WFC3 grism survey, they are also able to select galaxies based on emission lines instead of photometric techniques, and thus can isolate a sample of high-EW ELGs. \\citet{masters} present a sample of 26 such ELGs with a median restframe [O III] EW of 154 \\AA $ $ (our median is 629 \\AA). These galaxies show similarly-low velocity dispersions ($\\sim$ 70 $\\>{\\rm km}\\,{\\rm s}^{-1}$) and hence also have dynamical masses $\\lesssim$ 10$^{10} M_{\\odot}$. They derive stellar masses using an assumed M\/L ratio and star-formation history in a similar fashion to \\citet{vdw}, which have been shown to generally agree with SED-derived stellar masses \\citep{maseda}, and have also begun to probe the $M_{\\star} \\lesssim 10^9 M_{\\odot}$ regime. Specific discussion of their metallicity estimates is deferred to Section \\ref{sec:metals}.\n\nIn addition to WISP, narrowband studies have also begun to uncover the general starforming population of galaxies at $z > 1$, probing stellar masses below 10$^{9.5} M_{\\odot}$. \\citet{sobral} use data from the HiZELS survey to study H$\\alpha$ emitters at redshifts $z = 0.84, 1.47$, and $2.23$. The size of their survey area ($\\sim$ 2 deg$^2$) and the depth of the imaging allows them to isolate large and pure samples of H$\\alpha$ emitters down to a restframe H$\\alpha$+[NII] EW of 25 \\AA, constraining the stellar mass function of star-forming galaxies down to 10$^{9.5} M_{\\odot}$ at these redshifts. Indeed, their sample also includes a number of objects with restframe H$\\alpha$+[NII] EW values in excess of 300 \\AA $ $ and with stellar masses from SED-fitting below 10$^{9} M_{\\odot}$. Their results for the characteristic sSFR (i.e. 
the typical SFR for a galaxy at a given mass and redshift divided by its mass) as a function of mass and redshift are shown in Figure \\ref{fig:ssfr}. While some of our objects can be considered ``normal'' at these redshifts according to this determination, a majority of them still have higher sSFRs than expected, albeit typically within 1 dex. This reinforces the notion that these objects are the high-EW tail of the total distribution of star-forming galaxies at these redshifts and do not comprise a genuinely separate population \\citep{vdw}.\n\n\\section{Emission-Line Ratios}\n\\subsection{Starbursts or AGN?}\n\\label{sec:starburstagn}\nSo far, the main caveat is that we have assumed that star formation is primarily responsible for the strong line emission. The most compelling evidence to support this assumption is the relation between stellar mass and dynamical mass, and that the dynamical masses, with two exceptions, do not exceed $10^{10}~M_{\\odot}$ \\citep{maseda}. Such a result would be entirely coincidental in the case that the emission lines are powered by AGN since the width of AGN emission lines is not coupled to the stellar mass of the host galaxy. In other words, we observe narrow emission lines in these small ($\\sim$ 1 kpc) systems, while typical AGN narrow line regions have much larger emission-line widths $\\sigma > 200 \\>{\\rm km}\\,{\\rm s}^{-1}$ \\citep{osterbrock}. However, it is useful to look for evidence of AGN contributions.\n\nAlthough low-metallicity, low-mass AGN are exceedingly rare in the local Universe \\citep{izotovAGN}, there is some evidence that they may be more common at higher-z \\citep{trump11,xue,reines}. Such AGN could cause large observed line fluxes. At $z> 1$, AGN identification with a simple diagnostic such as a high [O III]$\\lambda$5007\/H$\\beta$ ratio becomes insufficient by itself, as shown by \\citet{trump}.
We thus utilize the ``BPT'' diagnostics of \\citet{bpt} for the objects in our sample with measurements of [N II] and\/or [S II] in addition to H$\\alpha$. Given that these lines are typically quite weak in star-forming galaxies and that OH skylines strongly affect our NIR spectra, in some instances we can only place an upper limit on the ratios of those lines with H$\\alpha$. These two BPT diagrams are shown in the left and central panels of Figure \\ref{fig:mex} with contours showing galaxies from the SDSS MPA-JHU value-added DR7 catalog.\n\nMost of our points plausibly lie on an extension of the starforming region of the BPT diagrams and not on the extension of the AGN region, or at least they lie far from the main locus of AGN-powered emission lines. However, low-metallicity AGN can lie in the starforming region as well \\citep{groves,kewley13}, preventing us from completely ruling out the contribution of AGN to our sample with these diagnostics. In some cases we observe higher [O III]\/H$\\beta$ ratios compared to the other starforming galaxies, but this can be explained simply by higher ionization and lower metallicities in these systems compared to the low-z sample of SDSS galaxies. \\citet{kewley13} find that starforming galaxies at $z > 1$ are consistent with models that have more extreme ISM conditions than those in the local Universe\\footnote{While beyond the scope of this paper, we would like to point out the large uncertainties in assumptions about the ISM conditions in galaxies at high redshift given the lack of knowledge about the ionizing spectra of hot stars at these early times and low metallicities.}. High electron temperatures (discussed in \nthe next section) support such a hypothesis. If these objects were AGN, this would be a further, unexplained coincidence, in addition to\nthe low dynamical masses described previously. 
While \\citet{trump11} suggest that AGN are widespread in low-mass galaxies at $1.3 < z < 3.7$, the Mass-Excitation (MEx) diagnostic of \\citet{juneau} was calibrated to separate AGN from star-forming galaxies at $z > 1$ using a combination of the BPT diagnostics and X-ray data. While this diagnostic is easily applicable to our data, we cannot constrain the MEx AGN\/starforming \\textit{probabilities} for most of our sample given that it is not properly calibrated for objects with such low metallicities and high sSFRs, AGN or otherwise: the five objects with a probability of star formation ($P_{SF}$, compared to the probability that they are AGN; \\textit{COSMOS-18358}, \\textit{COSMOS-13184}, \\textit{COSMOS-10599}, \\textit{UDS-24154}, and \\textit{UDS-12435}) have a median $P_{SF}$ value of 0.940, firmly placing them in the starforming regime. The remaining objects, while probabilistically unconstrained, still lie far from the \npopulation of AGN in the \nMEx diagnostic plot, as shown in Figure \\ref{fig:mex}.\n\n\nWe can place other constraints on the AGN nature of these systems in much the same way as \\cite{vdw}, i.e. independent of any measured emission line ratio(s). Most objects in our sample do not have strong 24 $\\mu$m detections using Spitzer-MIPS: \\textit{COSMOS-18358} has a MIPS 24$\\mu$m flux of 21.5$\\pm$9.0 $\\mu$Jy and clearly appears to be a merger; \\textit{COSMOS-19049} has a flux of 14.1$\\pm$8.6 $\\mu$Jy, is physically large with $r_{eff}=2.3$ kpc, and also has broad lines with $\\sigma_{[O III]}=122 \\>{\\rm km}\\,{\\rm s}^{-1}$; \\textit{COSMOS-12102} has a flux of 55.3$\\pm$8.7 $\\mu$Jy and is further discussed below. While none of our objects have X-ray detections, GOODS-S is also the only field with sufficient depth to find $z\\sim2$ AGN which are not quasars. That being said, it is possible to hide even a rapidly accreting X-ray AGN in a low mass galaxy \\citep{aird}. The consistent (and resolved) $J_{F125W}$- and $H_{F160W}$-sizes, as well as the sizes of the emission lines in the grism spectra, rule out the presence of a strong point source dominating the emission. 
\n\nAn exception could be \\textit{COSMOS-12102}, illustrated by the star in Figure \\ref{fig:mex}, which also has the largest line width in our sample. \\citet{reines} find active black holes in similar-mass dwarf galaxies with broad H$\\alpha$ emission in the local Universe with $M_{BH} \\lesssim 10^6~M_{\\odot}$. Using their relation (Equation 5) of $L_{H\\alpha}$ and FWHM$_{H\\alpha}$ to $M_{BH}$, \\textit{COSMOS-12102} has a black hole mass of $\\sim 10^{6.2}~M_{\\odot}$, which is in their observed range. The mass determination is fraught with systematic uncertainties, such as variations in the geometry of the broad-line region and the likelihood that at least some of the H$\\alpha$ luminosity comes from star formation, but we cannot conclusively rule out some AGN contribution for this galaxy.\n\n\\begin{figure}\n\n\\includegraphics[scale=.46]{f7.eps}\n\n\\caption{AGN\/SF emission line diagnostic plots. From left to right, the BPT1 diagnostic of [N II]$\\lambda$6584\/H$\\alpha$, the BPT2 diagnostic of [S II]$\\lambda$6718+6731\/H$\\alpha$, and the MEx diagnostic \\citep{juneau} of stellar mass. Divisions between the AGN and the star-forming regions in the BPT diagrams come from \\citet{kewley}. In all cases, the gray contours represent data from the SDSS MPA-JHU value-added DR7 catalog: these emission line and stellar mass measurements are described by \\citet{tremonti} and \\citet{kauffmann}. Arrows denote 3$\\sigma$ upper limits. Each of the diagnostics points to star formation as the ionizing source with at most mild AGN contribution, the possible exception being \\textit{COSMOS-12102} (star symbol). 
The large uncertainties in [O III]\/H$\\beta$ are caused by very low and uncertain H$\\beta$ fluxes, as [O III] is robustly detected in all of these cases; the true ratio could be even higher than the values plotted here.}\n\\label{fig:mex}\n\\end{figure}\n\n\\subsection{Metallicity}\n\\label{sec:metals}\n\\begin{figure*}\n\\begin{center}\n \\includegraphics[scale=.65]{f8a.eps}\\includegraphics[scale=.65]{f8b.eps}\n\\end{center}\n\\caption{Left panel: Oxygen abundances as a function of stellar mass. Overplotted are the MZ relations of \\citet{kewleyellison}, \\citet{henry13}, and \\citet{erb06} with low-mass extensions given as the dotted lines, using the \\citet{maiolino} parameterization. The dashed line is the \\citet{amorin} relation for luminous compact ``green pea'' galaxies at $0.11 < z < 0.35$. Right panel: Oxygen abundances for our sample as a function of $\\mu_{32}$ as defined in \\citet{mannucci}, with the ``FMRs\" from \\citet{mannucci} and the high-SFR bin from \\citet{zahid} overplotted. In both panels, filled points denote abundances obtained from the ``direct'' $T_e$ method and open points denote other methods. The red point shows the result from \\citet{erb10} and the green point shows the result from \\citet{gb2}. Overall we observe that our objects lie on or below the low-mass\/high-SFR extrapolations to these observed relationships at similar redshifts.} \n\\label{fig:metallicity}\n\n\\end{figure*}\n\nIn order to measure the gas-phase oxygen abundances of these galaxies as a proxy for the metallicity, we first implement the so-called ``direct'' or $T_e$ method which requires a detection of the auroral [O III]$\\lambda$4363 line as well as the [O III]$\\lambda\\lambda$4959, 5007 and [O II]$\\lambda$3727 lines. Some of these lines lie in the UV\/Visible at these redshifts, so we can only apply this method to the X-Shooter sample. 
Using the calibrations outlined in \\citet{izotov06}, we convert the [O III] emission-line ratios into an electron temperature (with the electron density constrained by the ratio of [O II]$\\lambda$3729 to $\\lambda$3726 or [S II]$\\lambda$6717 to $\\lambda$6731) in the O$^{++}$ region. The total oxygen abundance in the galaxy is O\/H = O$^+$\/H$^+$ + O$^{++}$\/H$^+$, assuming $T_e($O$^+) = 0.7\\,T_e($O$^{++})+0.3$, with the temperatures in units of $10^4$ K. Two objects, \\textit{UDS-12539} and \\textit{UDS-6377}, have detections of $\\lambda$4363 and an upper limit can be obtained for a third, \\textit{GOODS-S-43928}. This object also lacks a detection of [O II]$\n\\lambda$3727 due to \nskyline \ncontamination, so the O$^+$\/H$^+$ component cannot be constrained directly. The relative contribution of the O$^+$\/H$^+$ component to the total oxygen abundance is 1.8\\% and 7.3\\% for the other two objects, so we neglect its contribution to the abundance of the third object. The N2 method of \\cite[][PP04]{pp04}, which uses the (log) ratio [N II]$\\lambda$6584\/H$\\alpha$, verifies this result. All derived temperatures are in excess of 20,000 K.\n\n\\begin{deluxetable}{lcc}\n\\tablecaption{Metallicity Estimates\\label{tab:z}}\n\\tablehead{ \\colhead{ID} & \\colhead{12 + log(O\/H)} & \\colhead{Method}}\n\\startdata\nGOODS-S-43693&8.10$\\pm$0.34&N2\\\\\nGOODS-S-43928&7.70$\\pm$0.53&$T_e$\/N2\\\\\nUDS-6195&8.11$\\pm$0.23&O3N2\\\\\nUDS-6377&7.52$\\pm$0.37&$T_e$\\\\\nUDS-12539&7.45$\\pm$0.09&$T_e$\\\\\nUDS-24154&8.25$\\pm$0.32&O3N2\\\\\nCOSMOS-19049&8.20$\\pm$0.20&N2\\enddata\n\\end{deluxetable}\n\nFor the remainder of the sample, we must resort to other methods, namely the aforementioned N2 method and also the O3N2 method, both from PP04; O3N2 is the (log) ratio of [O III]$\\lambda$5007\/H$\\beta$ to the N2 ratio. For both methods, uncertainties include contributions from the error in the line ratios as well as the systematic scatter in the relations. 
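Both strong-line methods reduce to one-line calibrations; the sketch below uses the published PP04 linear fits, with hypothetical input line ratios (not values measured for our sample):

```python
import math

def oh_n2(nii_ha):
    """PP04 N2 calibration: 12 + log(O/H) = 8.90 + 0.57 * log([N II]6584 / Halpha)."""
    return 8.90 + 0.57 * math.log10(nii_ha)

def oh_o3n2(oiii_hb, nii_ha):
    """PP04 O3N2 calibration: 12 + log(O/H) = 8.73 - 0.32 * O3N2, where
    O3N2 = log(([O III]5007 / Hbeta) / ([N II]6584 / Halpha))."""
    return 8.73 - 0.32 * math.log10(oiii_hb / nii_ha)

# Hypothetical ratios for a low-metallicity, high-excitation galaxy
print(f"N2:   12+log(O/H) = {oh_n2(0.1):.2f}")         # 8.33
print(f"O3N2: 12+log(O/H) = {oh_o3n2(5.0, 0.1):.2f}")  # 8.19
```

The quoted uncertainties then combine the measurement error on the line ratios with the systematic scatter of these linear fits.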
If H$\\beta$ is not detected at more than 1$\\sigma$ in our LUCI1 or X-Shooter spectrum, we estimate the [O III]\/H$\\beta$ ratio from the grism spectrum. Also note that in the two instances where the N2 method is used the \\citet{maiolino} calibration results in metallicities consistent with the PP04 values. Results are displayed in Table \\ref{tab:z}, with a median $12 + \\log(O\/H)$ value of 7.90 (0.15 $Z_{\\odot}$). This value agrees well with the median \\texttt{MAGPHYS}-derived metallicity of 0.17 $Z_{\\odot}$ for the full sample.\n\nGiven the relatively small number of direct measurements of the oxygen abundance in high-z galaxies, this provides an important piece of information. Since the standard R23 diagnostic \\citep{r23} using [O III]$\\lambda\\lambda$ 4959, 5007 + [O II] $\\lambda\\lambda$ 3726, 3729 + H$\\beta$ is double-valued, it is important to establish whether higher-z galaxies belong to the metal-rich upper branch or to the metal-poor lower branch. \\citet{henry13a} argue in favor of the upper-branch for galaxies with log ($M_{\\star}$\/$M_{\\odot}$) $>$ 8.2 at $z\\sim0.7$, and \\citet{henry13} favor the upper-branch for galaxies with log ($M_{\\star}$\/$M_{\\odot}$) $>$ 8.8 at $1.3 < z < 2.3$. Given that our objects lie on or below the low-mass\/high-SFR extrapolations of these relations, we may be observing the evolution of the FMR on the low-mass and\/or high-SFR end, although we do not have the number statistics yet to quantify any offset. At the very least, we can conclusively rule out that these objects lie above the FMR \\cite[cf.][who claim this relation is driven by the higher average SFRs of the systems probed at higher redshifts]{stott}.\n\n\n\\section{Constraints on the Gas Fraction and the Star Formation Efficiency}\n\\label{sec:discussion}\nIn this section we use the observed velocity dispersions $\\sigma$ and\nsizes $r_{\\rm{eff}}$, combined with dynamical stability criteria, to\nconstrain the gas fraction and its implications. 
We assume that the\nsystems consist entirely of stars and gas: we neglect the contribution\nof dark matter to the total dynamical mass as measured within the\ncentral kpc. We also assume that these systems are isolated and not embedded in larger (gaseous) structures that exert pressure.\n\nWe do not know the geometry of the systems, and therefore consider two\nextreme cases: for the case of a sphere with uniform density we calculate the\nJeans mass $M_J$; for the case of a thin rotating disk we calculate\nthe Toomre parameter $Q$. In both cases we assume that the gaseous\nbody has the same extent as the stellar body, and that the velocity\nwidth of the nebular lines traces the total gas kinematics.\n\nFor a uniform sphere the Jeans mass \\citep{binney} is given by\n\\begin{equation}\n\\label{eqn:jeans}\nM_J = \\frac{4\\pi}{3} \\rho_0\\bigg(\\frac{\\pi\\sigma^2}{4G\\rho_0}\\bigg)^{3\/2},\n\\end{equation}\nwhere we have equated the sound speed with the observed velocity\ndispersion. This velocity represents the combined effect of all\nsources of pressure that act against collapse, which include thermal\nmotions (associated with the physical sound speed) as well as\nturbulence and streaming motions.\n\n$\\rho_0$ is the density of the gas, which is given by the gas mass\n($f_{gas}\\times M_{dyn}$) and the size:\n\\begin{equation}\n\\label{eqn:rho}\n\\rho_0=\\frac{f_{gas}M_{dyn}}{(4\/3)\\pi r_{eff}^3},\n\\end{equation}\nwhere $f_{gas}$ is the gas fraction. 
The total dynamical mass\n$M_{dyn}$ is taken from Equation \\ref{eqn:dyn}, with a value of 5\n for the proportionality constant for consistency with\nthe case under consideration here: that of a sphere with uniform\ndensity\\footnote{Elsewhere in this paper we use a value of 3, which corresponds to other, more realistic geometries such as inclined disks and radially-concentrated density profiles (e.g., isothermal).}.\n\nFor typical values of $r_{eff}$ (1 kpc) and $\\sigma$ (50 $\\>{\\rm km}\\,{\\rm s}^{-1}$)\nwe find that $M_J \\simeq M_{gas} (\\equiv f_{gas} M_{dyn})$ for\n$f_{gas}=0.66$. Given the substantial star formation in these\nsystems, the gaseous body must be unstable: we conclude, assuming a\nhomogeneous gaseous sphere, that $f_{gas} \\gtrsim 0.66$.\n\nIn order to assess to what extent this conclusion is\naffected by the chosen geometry, we now consider the other (opposite)\ncase, and assume that these systems are rotating disks, where instability can be described by the Toomre parameter $Q$\n\\citep{toomre,binney}\n\\begin{equation}\n Q_{gas} = \\frac{\\sigma_z \\kappa}{\\pi G \\Sigma_{gas}}\n\\end{equation}\nwhere $\\sigma_z$ is the velocity dispersion perpendicular to the disk,\nand $\\Sigma_{gas}$ is the average gas-mass surface density, given by\n\\begin{equation}\n\\label{eqn:sigma}\n \\Sigma_{gas} = \\frac{M_{gas}}{2 \\pi r_{eff}^2} = \\frac{M_{dyn} f_{gas}}{2 \\pi r_{eff}^2},\n\\end{equation}\nwhere $f_{gas}$ is the gas fraction. The epicyclic frequency, $\\kappa$, in a rotating exponential disk is \n\\begin{equation}\n\\label{eqn:kappa}\n \\kappa = \\sqrt{2}(v_t\/r_{eff}),\n\\end{equation}\nwhere $v_t$ is the circular velocity of the disk.\n\n\nWhile $\\sigma_z$ and $v_t$ are not observed directly, both contribute to the observed linewidth, $\\sigma$. 
The contribution of rotation to $\\sigma$, assuming an average inclination of 60 degrees and an exponential disk, is $\\sim v_t\/\\sqrt{2}$, such that we have\n\\begin{equation}\n\\label{eqn:sigmaobs}\n \\sigma_{obs}^2 = \\sigma_z^2 + v_t^2\/2,\n\\end{equation}\nwhich is empirically supported \\citep{kassin}.\n\nCombining the above, we solve for $v_t$ as a function of $\\sigma_{obs}$:\n\\begin{equation}\n\\label{eqn:vt}\n v_t^2 = \\sigma_{obs}^2\\left(1 \\pm \\sqrt{1-(9\/4)f_{gas}^2Q_{gas}^2 } \\right),\n\\end{equation}\nsuch that an unstable system ($Q < 1$) requires\n\\begin{equation}\n f_{gas} > \\frac{2}{3}\n\\end{equation}\nin order to produce a unique, physical solution.\n\nRemarkably, regardless of whether we assume a homogeneous sphere or a\nrotating disk for the gas, we find that a high gas fraction is needed\nto explain the observed star formation activity. At the same time,\nthe non-negligible contribution of the stellar mass to the total mass\nimplies that $f_{gas}$ cannot be arbitrarily close to unity and should\nbe $\\lesssim 0.9$ (see Figure \\ref{fig:dynmass}).\n\nA significant caveat is that \nthe proportionality constants in Equations \\ref{eqn:jeans},\n\\ref{eqn:rho}, \\ref{eqn:dyn}, \\ref{eqn:kappa}, \\ref{eqn:sigmaobs}, and\n\\ref{eqn:vt} depend on the details of the assumed geometry and\ndynamical structure; their variation can alter the threshold value of\n$f_{gas}$. In addition, we ignore the stabilizing effect of the\nstellar disk on the gas disk; however, this increases the implied gas\nfraction further.\n\nThe median gas fraction in our sample is 72\\% (i.e. the y-axis in\nFigure \\ref{fig:dynmass} is a proxy for $f_{gas}$), in agreement with\nthe theoretical calculation. While some objects are observed to have\nlower gas fractions, 18 out of our 22 objects are consistent with\n$f_{gas} >$ 2\/3 within 1$\\sigma$.\n\nIn the following we assume $f_{gas}$=2\/3 in order to constrain the\nstar formation efficiency. 
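Both stability thresholds can be checked independently of the prose above. A symbolic sketch (assuming, as in the text, $M_{dyn} = 5\sigma^2 r_{eff}/G$ for the uniform sphere and $M_{dyn} = 3\sigma^2 r_{eff}/G$ for the disk case; sympy is used here purely as a verification tool):

```python
import sympy as sp

sigma, r, G, f, Q, w = sp.symbols('sigma r G f Q w', positive=True)

# Uniform sphere: threshold gas fraction from M_J = f*M_dyn, with M_dyn = 5*sigma^2*r/G.
M_dyn = 5 * sigma**2 * r / G
rho0 = f * M_dyn / (sp.Rational(4, 3) * sp.pi * r**3)
M_J = sp.Rational(4, 3) * sp.pi * rho0 * (sp.pi * sigma**2 / (4 * G * rho0))**sp.Rational(3, 2)
f_crit = sp.solve(sp.Eq(M_J, f * M_dyn), f)[0]
print(float(f_crit))  # pi**2/15 = 0.658, i.e. the quoted f_gas = 0.66

# Thin disk: eliminate sigma_z from sigma^2 = sigma_z^2 + v_t^2/2 using
# sigma_z = pi*G*Sigma*Q/kappa, kappa = sqrt(2)*v_t/r, Sigma = f*M_dyn/(2*pi*r^2),
# with M_dyn = 3*sigma^2*r/G here (w stands for v_t^2).
Sigma = f * (3 * sigma**2 * r / G) / (2 * sp.pi * r**2)
sigma_z = sp.pi * G * Sigma * Q * r / (sp.sqrt(2) * sp.sqrt(w))
roots = sp.solve(sp.Eq(sigma**2, sigma_z**2 + w / 2), w)
# Vieta's formulas confirm v_t^2 = sigma^2*(1 +/- sqrt(1 - (9/4)*f^2*Q^2)),
# which is real only for f*Q <= 2/3:
print(sp.simplify(roots[0] + roots[1]))   # 2*sigma**2
print(sp.simplify(roots[0] * roots[1]))   # (9/4)*f**2*Q**2*sigma**4
```

The sum and product of the two roots pin down the quadratic for $v_t^2$, so instability at $Q<1$ indeed forces $f_{gas}>2/3$.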
In Figure \\ref{fig:gas} we show the star\nformation rate surface density, assuming $\\Sigma_{SFR} = SFR \/ (2\\pi r_{eff}^2)$, versus the gas surface mass density $\\Sigma_{gas}$\n(from Equation \\ref{eqn:sigma}). The implied gas depletion time scale\nranges from $\\tau_{depl} = 10^{7.5}$ to $10^9$ yr, with a median of $3\n\\times 10^8$ yr. \n\nCompared to normal present-day galaxies, gas is efficiently\ntransformed into stars but, with the exception of a few objects, not\nas efficiently as observed in starbursting regions in the Milky Way\nand starbursting galaxies in the present-day or high-redshift universe\n\\citep[e.g.][]{kennicutt07,daddi}. This mostly reflects the large gas\nfractions needed to produce systems that are unstable against star\nformation, rather than a modest level of star formation: the inverse\nsSFR (stellar mass growth time scale) of $1\/sSFR = 5 \\times 10^7$ yr\nis very short, among the shortest ever measured. Hence, the stellar\nmass grows at a dynamical time scale ($\\tau_{dyn}\\sim3 \\times 10^7$\nyr).\n\nThe physical interpretation is that the star formation rate is not\nlimited by availability of fuel but by the dynamical time scale.\nHowever, we have to keep in mind that we selected galaxies based on\ntheir sSFR: we are biased against objects that are older and\/or have\nlonger star formation time scales. Further empirical investigation of\nlower levels of star formation in similarly massive galaxies is needed\nto address this issue.\n\nHowever, we propose that star formation will halt within $\\sim\n50$~Myr, long before the gas reservoir is depleted. First, the\nstability arguments given above imply that the gas fraction only needs\nto be reduced by a small amount (from the assumed $f_{gas}=2\/3$ to,\nsay, $f_{gas}<0.5$). Given the observed SFR, this takes $\\sim$50\nMyr. 
Second, feedback should play an important role in these low-mass\nsystems with small escape velocities (several hundred $\\>{\\rm km}\\,{\\rm s}^{-1}$) and high\nSFR: gas is easily transported out of the galaxy, and perhaps even out\nof the halo, preventing recycling. This paradigm is supported by \\citet{law09,law12},\nwho find that higher-mass starforming galaxies at $z \\sim 2$ can support more extended rotationally-supported disks\nand are less efficient at driving outflows than their lower-mass counterparts.\n\nIn the above we have ignored the infall of cold gas, which could\ncontinue to feed and maintain the starburst. Assuming that these\ngalaxies reside in relatively low-mass halos ($\\sim10^{11} M_{\\odot}$) the\ntypical accretion rate onto the halo is several $M_{\\odot}$\/yr\n\\citep{mcbride}, somewhat lower than the SFR. The accretion rate onto\nthe halo is not necessarily equal to the accretion rate onto the\ngalaxy, and the latter is likely not constant. A period of\nabove-average accretion for several hundred Myr could, in principle,\nsustain the starburst. However, above-average accretion events are\nmore likely to be of short duration, such that one such event can\nignite the observed starburst by pushing the gas mass surface density\nabove the threshold for star formation or by disturbing the\nalready-present gas, creating an instability. Hence, enhanced accretion could cause the star formation\nactivity but not maintain it.\n\nBased on the Jeans and Toomre instability arguments presented above,\npurely gaseous systems with velocity dispersions and sizes as observed\nbecome unstable once they reach a total mass of a few times\n$10^9~M_{\\odot}$, close to the observed masses of the objects in our\nsample. \n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.475\\textwidth]{f9.eps}\n\n\\end{center}\n\\caption{SFR surface density versus gas-mass surface density. 
The gas masses are estimated according to $M_{gas} = (2\/3) \\times M_{dyn}$, which is inferred as a probable gas fraction given the star formation rates and $M_{\\star}$\/$M_{dyn}$ ratios (which imply $0.5 \\lesssim f_{gas} \\lesssim 0.9$) in these systems. The solid black line shows the relation for local spiral galaxies from \\citet{kennicutt07} and the dotted line shows the result for local (U)LIRG and high-z SMGs\/QSOs from \\citet{daddi}. The location of the points suggests that these objects spend at least some of the time forming stars more efficiently than the normal, present-day spiral galaxies. Our constraints are not stringent enough to confirm or rule out gas depletion timescales that are on par with or even shorter than in more massive, starbursting systems.}\n\\label{fig:gas}\n\\end{figure}\n\n\n\n\n\\section{Summary}\nWe presented near-infrared spectroscopy from the LBT\/LUCI1\nmulti-object near-IR spectrograph and the VLT\/X-Shooter wide band spectrograph\nfor a sample of HST\/WFC3 grism-selected\nemission line objects with restframe equivalent widths of $EW = 200 - 1100$~\\AA\\ for [O III]$\\lambda$5007\nand\/or H$\\alpha$, and located in the redshift range $1.3 < z < 2.3$. The observed emission lines are narrow,\nwith measured velocity dispersions down to $\\sigma = 30$ km s$^{-1}$, implying low dynamical masses\nof $\\sim10^9~M_{\\odot}$, even for the lower-EW objects not included in \\cite{maseda}. Stellar masses determined\nusing sophisticated \\texttt{MAGPHYS} SED fitting to broadband magnitudes and the inclusion of line fluxes are low\nas well, $\\sim3\\times10^{8}~M_{\\odot}$. Ratios of $M_{\\star}$ to\n$M_{dyn}$ range from 1\/10 to 1, which makes AGN-dominated SEDs unlikely. 
Emission-line ratios and the narrow line widths also suggest that AGN do not significantly contribute to our sample, and therefore we conclude that the main ionizing source is hot, massive stars.\n\n\n\nDirect probes of the oxygen abundances within these galaxies and [O III]\/H$\\beta$ line ratios of typically $\\gtrsim$ 5 corroborate the expectation that these low mass systems have low metallicities, between 0.05 and 0.3 $Z_{\\odot}$. They lie on or below the (extrapolated) mass-metallicity relationships for these redshifts \\citep{henry13,erb06} which, combined with their young SED-derived ages, \nreinforces the notion that these are nascent galaxies undergoing their first major episode of star formation. \n\nMeasured sSFR values of $\\sim10^{-8}$ yr$^{-1}$ for these galaxies are up to two orders of magnitude larger than those of typical $10^{10}~M_{\\odot}$ starforming galaxies at $z\\gtrsim1$ \\citep{fumagalli}, as well as comparable to or greater than the values from other high-EW systems as discovered in deep narrowband searches \\citep{sobral} and in deep spectroscopic studies at both similar \\citep{masters} and lower redshifts \\citep{amorin14a,amorin14b}. Such high sSFR values have been difficult to reproduce in hydrodynamical simulations, but recently \\citet{shensims} made significant progress by combining a high gas density threshold for star formation and a blastwave scheme for supernova feedback in their simulations of low-mass galaxies.\n\nSuch low mass systems, with observed velocity dispersions of $\\sigma\\sim50\\>{\\rm km}\\,{\\rm s}^{-1}$ and sizes of $\\sim$ 1 kpc are only unstable against star formation if their gas fractions are high (above 2\/3), in agreement with the observed $M_{\\star}\/M_{dyn}$ relation. 
The bursts are likely to be short-lived ($\\sim$50 Myr), as, even in the absence of feedback, their intense star formation will rapidly build up stellar mass and lower their sSFR well before the gas depletion timescale ($\\sim$100 Myr).\n\nThese results strengthen the conclusions from \\citet{vdw}, who argued that EELGs represent\nlow-mass, starbursting galaxies. Additionally, the existence of (at least) two strong galaxy-galaxy lenses in the CANDELS\/3D-HST fields where the background galaxy is an EELG at $z=1.85$ and $3.42$ \\cite[][respectively]{gb2,vdw13} suggests that this type of object is common. The ubiquity of EELGs may be even more pronounced at high redshifts \\cite[$z>6$;][]{smit}. Such systems at $z = 1-2$ thus may present an opportunity to study how star formation proceeded in the early Universe before the advent of the next generation of observatories, such as the \\textit{James Webb Space Telescope}.\n\n\nThe new generation of submillimeter observatories, such as ALMA, can provide direct estimates of the gas masses. Searching for the presence of outflowing material would provide valuable clues about the feedback processes going on in these systems, which is especially relevant to support the hypothesis that these bursts can create the cored dark matter profiles observed in local dwarf galaxies \\cite[e.g.][]{amorisco}. \n\n\n\n\n\\acknowledgements{M.V.M. is a member of the International Max Planck Research School for Astronomy\nand Cosmic Physics at the University of Heidelberg, IMPRS-HD, Germany. C.P. acknowledges support by the KASI-Yonsei Joint Research Program for the Frontiers of Astronomy and Space Science funded by the Korea Astronomy and Space Science Institute. D.C.K. 
acknowledges funding from NSF grant AST-08-08133.}\n\n{\\it Facilities:} \\facility{LBT}, \\facility{VLT:Melipal}, \\facility{HST}.\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{\\Large\\bit Introduction}\n\nImpulsive pp-waves prominently arise as ultrarelativistic limits of \nblack-hole geometries \\cite{AS,BaNa3,BaNa4,LouSa}. Since their curvature is\ncompletely concentrated on a null hyperplane they are necessarily\ndistributional in nature. Penrose \\cite{Pen} gave an intrinsic \ndescription of these geometries in terms of his ``cut and paste''\napproach. A different approach to derive the AS-geometry, which\nrepresents the ultrarelativistic limit of Schwarzschild, was given by 't Hooft\nand Dray \\cite{tHo}. They made use of the limit of geodesics in\nSchwarzschild to derive Penrose's junction conditions. We will\ntake the opposite point of view in this work and \nconsider the trajectories of massive and massless particles in \ngeometries generated by an impulsive gravitational wave in their own right. \nIt turns out that the calculation inevitably requires the\ndefinition of the point value $\\theta(0)$ at the location of the pulse,\nthus leaving the usual framework of distribution theory. \nFrom a physical point of view this would induce a loss of predictability \nsince the above point value is completely arbitrary. The ambiguity\nis however resolved if one imposes the physically sensible requirement\nthat a freely falling (geodetic) observer experiences a constant flow\nof proper time, i.e. that the norm of the tangent vector remains constant.\nOur result may be reinterpreted in the sense of Colombeau's theory\nof multiplication of distributions if one replaces distributional equality by\nweak equivalence of Colombeau functions. \\par\nIn section one we calculate the geodesics for an arbitrary impulsive\nwave profile. 
The ambiguity that arises from $\\theta(0)$ is resolved\nby taking the covariant constancy of the tangent vector into account.\nSection two gives a brief account of the basic notions of Colombeau's\nnew generalized functions together with an extension to arbitrary \n$C^\\infty$ manifolds. Finally, in section three we look at the calculations\nof section one from the point of view of Colombeau theory, thereby justifying\nthe previously obtained results. \n\n\\section*{\\Large\\bit 1) Geodesics in (impulsive) pp-wave geometries }\n\nA pp-wave is usually characterized as\nbeing a vacuum geometry which admits a covariantly constant null-vector \n$p^a$ \\cite{JEK}. However, since we do not want to restrict ourselves\nto vacuum geometries, we require that\nthe Ricci-tensor is proportional to the tensor square of $p^a$.\nFrom these requirements one shows \\cite{SiGo,AiBa2} that the metric \nmay be written as\n\\begin{equation}\\label{impwave}\n g_{ab} = \\eta_{ab} + f p_a p_b \\quad \\mbox{together with}\\quad \n (p\\partial)f=0,\n\\end{equation}\nwhere $f$ denotes the so-called wave profile and $\\eta_{ab}$ is the\nflat part of the decomposition. Using the covariant \nderivative $\\partial_a$\nassociated with $\\eta_{ab}$ one obtains the difference tensor\n\\begin{equation}\n C^a{}_{bc} = \\frac{1}{2}\\left( p^a \\partial_bf p_c + \n p^a \\partial_c f p_b - \\partial^a f p_b p_c \\right)\n\\end{equation}\nChoosing a conjugate null vector $\\bar{p}^a$ (i.e. 
$\\bar{p}\\cdot p=-1$) \nallows us to decompose\n\\begin{displaymath}\n \\partial_af = - \\bar{p}_a \\underbrace{(p\\partial)f}_{=0} \n - p_a (\\bar{p}\\partial)f + \\tilde{\\partial}_a f,\n\\end{displaymath}\nwhere the tilde refers to quantities which are projected in the\northogonal two space of $p^a$ and $\\bar{p}^a$.\nThis decomposition in turn gives rise to\n\\begin{equation}\n \\label{diff}\n C^a{}_{bc} = \\frac{1}{2}\\left( p^a \\tilde{\\partial}_bf p_c + \n p^a \\tilde{\\partial}_c f p_b - \\tilde{\\partial}^a f p_b p_c \n - p^a p_b p_c (\\bar{p}\\partial)f \\right).\n \\end{equation}\nWith respect to an affine parameterization the geodesic equation \nbecomes\n\n\\begin{displaymath}\n \\ddot{x}^a + C^a{}_{bc}\\dot{x}^b \\dot{x}^c = 0.\n\\end{displaymath}\nDecomposition with respect to the conjugate null directions yields\n\\begin{eqnarray}\\label{geo}\n &&p\\ddot{x} =0,\\nonumber\\\\\n &&\\bar{p}\\ddot{x} + \\frac{1}{2}\\left( \n -2(p\\dot{x}) (\\dot{\\tilde{x}}\\tilde{\\partial})f + \n (p\\dot{x})^2(\\bar{p}\\partial)f \n \\right)=0,\\nonumber\\\\ \n &&\\ddot{\\tilde{x}} - \\frac{1}{2} (p\\dot{x})^2 \\tilde{\\partial}f =0.\n\\end{eqnarray}\nThe first line of (\\ref{geo}) allows us to choose $px$ as affine parameter\nunless we are moving in a $px=const$ surface. This choice leaves us with two\nequations\n\\begin{eqnarray}\n \\label{ngeo}\n &&(\\bar{p}x)'' + \\left(-x'\\tilde{\\partial}f + \n \\frac{1}{2}(\\bar{p}\\partial)f \\right) =0,\\nonumber\\\\\n &&\\tilde{x}'' - \\frac{1}{2} \\tilde{\\partial} f = 0,\n\\end{eqnarray}\nwhere the prime denotes differentiation with respect to $px$.\nUp to now our derivation did not make use of the fact that\nwe are considering impulsive wave profiles, i.e. 
\n$f(px,\\tilde{x}) = \\tilde{f}(\\tilde{x}) \\delta(px)$.\nBy a slight abuse of notation we will drop the tilde on the \n$f(\\tilde{x})$ in the following.\nPhysically the above form implies that the geodesics are straight lines\nabove and below the null hyperplane of the pulse\n\\begin{eqnarray}\\label{geoans}\n \\bar{p}x &=& a(px) + b + \\theta(px)\\left( a^+ (px) + b^+ \\right),\\nonumber\\\\ \n \\tilde{x} &=& \\tilde{a}(px) + \\tilde{b} + \\theta(px)\n \\left( \\tilde{a}^+ (px) + \\tilde{b}^+ \\right).\n\\end{eqnarray}\nPlugging (\\ref{geoans}) into the second line of (\\ref{ngeo}) gives\n$$\n\\delta(px)\\tilde{a}^+ + \\delta'(px)\\tilde{b}^+ = \n\\frac{1}{2}\\delta(px)\\tilde{\\partial} f,\n$$\nwhich requires \n\n$$\\tilde{b}^+=0 \\quad\\mbox{and}\\quad \n\\tilde{a}^+=\\frac{1}{2}\\tilde{\\partial} f(\\tilde{b}),\n$$\nwhereas the first line \n$$\na^+\\delta(px) + b^+ \\delta'(px) = \\frac{1}{2}\\tilde{x}'\\tilde{\\partial} f\n\\delta(px) + \\frac{1}{2} \\delta'(px) f(\\tilde{b})\n$$\nleaves us with \n$$\na^+=\\frac{1}{2}\\left( \\tilde{a} + \\theta(0)\\tilde{a}^+\\right)\n\\tilde{\\partial} f(\\tilde{b})\\quad\\mbox{and}\\quad\nb^+=\\frac{1}{2}f(\\tilde{b}).\n$$\nSo the geodesic becomes\n\\begin{eqnarray}\n \\label{geores}\n \\bar{p}x &=& a(px) + b + \\theta(px)\\left( \\frac{1}{2}(\n \\tilde{a} + \\theta(0)\\tilde{a}^+ )\\tilde{\\partial} f(\\tilde{b})(px) +\n \\frac{1}{2}f(\\tilde{b})\\right),\\nonumber\\\\\n \\tilde{x} &=& \\tilde{a} (px) + \\tilde{b} + \\theta(px)\\frac{1}{2}\n \\tilde{\\partial} f(\\tilde{b}) (px).\n\\end{eqnarray}\nAt first glance it seems as if we were done, were it not for the\nominous factor $\\theta(0)$ which destroys the predictability of the\nscattering process.\nHowever, the physical requirement of the covariantly constant \nnorm of the tangent vector will save the day, i.~e.~\n$$\n(\\dot{x}\\nabla)(g_{ab}\\dot{x}^a\\dot{x}^b) = \n2g_{ab}(\\dot{x}\\nabla)\\dot{x}^a\\dot{x}^b = 0 \\quad\\Rightarrow\\quad 
\ng_{ab}\\dot{x}^a\\dot{x}^b = const\n$$ \nFor a general pp-wave this gives\n\\begin{eqnarray}\n \\label{covconst}\n&&\\dot{x}^a\\dot{x}^b(\\eta_{ab} + fp_a p_b) = \n-2(p\\dot{x})(\\bar{p}\\dot{x}) + \\dot{\\tilde{x}}^2 + f (p\\dot{x})^2 = \nconst.\\nonumber\\\\\n&&\\quad\\Rightarrow\\quad -2(\\bar{p}x') + (\\tilde{x}')^2 + f = const\n\\end{eqnarray}\nInserting (\\ref{geores}) into (\\ref{covconst}) we obtain immediately\n\\begin{equation}\n -2a + \\tilde{a}^2 + \\theta(px) \n \\frac{1}{4}(\\tilde{\\partial}f)^2(\\tilde{b})\\left(1-2\\theta(0)\\right) \n = const,\n\\end{equation}\nwhich is only constant if we choose $\\theta(0)=1\/2$, thereby\nrestoring predictability.\n\n\\section*{\\Large\\bit 2) Weak equality and Colombeau's product of \ndistributions}\n\nAlthough the result of the previous section is physically \nsatisfactory, it suffers from mathematical deficiencies. This is\ndue to the fact that we implicitly multiplied discontinuous functions\nwith (singular) distributions, resulting in the ambiguous point value\n $\\theta(0)$. Therefore this section is devoted to a brief summary of \nColombeau's new generalized functions \\cite{Col1,Col2,ArBi}, \ntogether with an extension to arbitrary manifolds, since they\n provide a natural framework for the multiplication\nof distributions on a mathematically rigorous basis. From the \nphysical point of view Colombeau objects may be regarded as \n(arbitrary) regularisations of singular distributions. 
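This regularization picture can be made concrete numerically: replace $\delta$ by a narrow Gaussian $\delta_\epsilon$, pick an assumed profile $f(\tilde{x})=\tilde{x}^2$ (purely for illustration), and integrate the transverse geodesic equation $\tilde{x}''=\frac{1}{2}\tilde{\partial}f\,\delta_\epsilon(px)$. The slope jump reproduces $\tilde{a}^+=\frac{1}{2}\tilde{\partial}f(\tilde{b})$, and the slope at the pulse sits halfway through the jump, i.e. the regularized step effectively takes the value $\theta(0)=1/2$ there:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01            # width of the regularized delta pulse
a, b = 0.3, 1.0       # incoming geodesic: x~(u) = a*u + b, with u = px

def delta_eps(u):     # Gaussian mollifier with unit integral
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def rhs(u, y):        # profile f(x~) = x~^2, so x~'' = (1/2) f'(x~) delta = x~ * delta
    x, xp = y
    return [xp, x * delta_eps(u)]

sol = solve_ivp(rhs, [-1.0, 1.0], [a * (-1.0) + b, a],
                t_eval=[0.0, 1.0], max_step=eps / 5, rtol=1e-10, atol=1e-12)
slope_mid, slope_out = sol.y[1]

# Predicted jump a~+ = (1/2) f'(b) = b = 1.0: outgoing slope is a + a~+ = 1.3,
# while the slope at the pulse is a + a~+/2 = 0.8, consistent with theta(0) = 1/2.
print(slope_mid, slope_out)
```

As $\epsilon\to 0$ the numbers converge to the distributional jump conditions of section 1, independent of the mollifier details.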
More \nprecisely\\footnote{For a motivation of the growth condition see \\cite{Col1}} \none considers one-parameter families $(f_\\epsilon)_{0<\\epsilon<1}$ \nof $C^\\infty$ functions with moderate growth in the parameter $\\epsilon$,\nnamely\n\\begin{eqnarray}\\label{Mod}\n && {\\mathcal E}_M = \\{ (f_\\epsilon)\\vert f_\\epsilon \\in \n C^\\infty({\\mathbb R}^n)\\quad \\forall K \\subset {\\mathbb R}^n compact, \n \\forall \\alpha\\in {\\mathbb N}^n\\quad \\\\\n &&\\hspace*{1cm}\\exists\\, N\\in {\\mathbb N},\\exists\\> \\eta > 0,\\exists\\> c>0 \n \\quad s.t. \\sup_{x\\in K}\\vert D^\\alpha f_\\epsilon(x)\\vert \\leq \n \\frac{c}{\\epsilon^N}\\quad\\forall 0<\\epsilon< \\eta\\}.\\nonumber\n\\end{eqnarray}\nAddition, multiplication and differentiation are defined as pointwise\noperations, turning ${\\mathcal E}_M$ into an algebra.\n$C^\\infty$ functions $f$ are naturally embedded into ${\\mathcal E}_M$ as \nconstant families, i.e. $f_\\epsilon=f$, whereas continuous functions or \nelements of \n${\\mathcal D}'$ require the use of a so called ``mollifier'' \n$\\varphi$ for their embedding\n\\begin{equation}\\label{moll}\n f_\\epsilon(x) = \\int\\!\\! d^n\\!y \\frac{1}{\\epsilon^n}\n \\varphi\\fracwithdelims(){y-x}{\\epsilon}f(y)\\qquad \\int \nd^n\\!y\\varphi(y) =1,\n\\end{equation}\ntogether with additional conditions on the momenta of $\\varphi$ \n\\cite{Col1,Col2}. \nWith regard to distributions the above convolution integral is formal. \nIn order to identify the different embeddings of $C^\\infty$ functions \none takes the quotient of the algebra ${\\mathcal E}_M$ with respect to\nthe ideal \n\\begin{eqnarray}\n && {\\mathcal N} = \\{ (f_\\epsilon)\\vert f_\\epsilon \\in \n C^\\infty({\\mathbb R}^n)\\quad\n \\forall K \\subset {\\mathbb R}^n compact, \\forall \\alpha\\in {\\mathbb N}^n, \n \\forall N\\in {\\mathbb N}\\\\\n &&\\hspace*{1cm}\\exists\\> \\eta > 0,\\exists\\> c>0,\\quad \n s.t. 
\\sup_{x\\in K}\\vert D^\\alpha f_\\epsilon(x)\\vert \\leq \n c\\epsilon^N\\quad\\forall\\> 0<\\epsilon< \\eta\\}.\\nonumber\n\\end{eqnarray}\nthereby obtaining the Colombeau algebra $\\mathcal G$.\nElements of $\\mathcal G$ are therefore equivalence classes of \none-parameter families of $C^\\infty$ functions. In order to make contact\nwith distribution theory one considers a coarser equivalence relation which \nis usually called weak equality or association. It is intuitively \nmotivated by the fact that the limit $\\epsilon\\to 0$ (if it exists) should \nreproduce the corresponding distributional object. More precisely, two \nColombeau-objects $(f_\\epsilon)$ and $(g_\\epsilon)$ are associated if\n\\begin{equation}\\label{assoc}\n \\lim_{\\epsilon\\to 0} \\int\\! d^n\\!x\\left( f_\\epsilon(x)-g_\\epsilon(x) \\right)\n \\varphi(x)=0\\qquad \\forall \\varphi\\in C^\\infty_0.\n\\end{equation}\nAssociation behaves like equality on the level of distributions.\nIt is compatible with addition and differentiation and it allows\nmultiplication with $C^\\infty$ functions. However, it does not respect\nmultiplication of Colombeau objects, as might have been expected.\nThe classical example in this context is provided by the powers of the \n$\\theta$ function, which upon naive multiplication would lead to \ncontradictions about the point value of \n$\\theta(0)$. Specifically\n$$\n(\\theta(x))^n = \\theta(x) \\Rightarrow n\\theta(0)^{n-1}\\theta'(x) = \\theta'(x)\n$$\nwhich cannot hold for arbitrary $n$ since $\\theta(0)$ is $n$-independent.\nLooking at the above equality from the Colombeau point of view\nwe have\n$$\n(\\theta(x))^n \\approx \\theta(x) \\Rightarrow n (\\theta(x))^{n-1}\\theta'(x)\n\\approx \\theta'(x)\n$$\nwhich holds separately for each $n$ and thereby allows us to replace \n$(\\theta)^{n-1}\\theta'$ by $(1\/n)\\theta'$. The contradiction is\navoided because multiplication by $\\theta$ would break the association. 
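The robustness of this association result can be checked numerically with an explicitly asymmetric mollifier, for which the point value $\theta_\epsilon(0)$ is manifestly not $1/2$, while the smeared pairings of $n\,\theta_\epsilon^{n-1}\theta_\epsilon'$ with a test function still reproduce those of $\theta'$ for every $n$ (a numerical sketch):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

eps = 1e-3
x = np.linspace(-0.5, 0.5, 200001)
# Asymmetric mollifier (Gaussian shifted by 0.3*eps), with unit integral:
d = np.exp(-((x - 0.3 * eps) / eps)**2) / (eps * np.sqrt(np.pi))
theta = cumulative_trapezoid(d, x, initial=0.0)   # mollified step function
phi = np.exp(-x**2)                               # test function, phi(0) = 1

t0 = theta[len(x) // 2]                           # theta_eps(0): mollifier-dependent
pairings = [trapezoid(n * theta**(n - 1) * d * phi, x) for n in (2, 3)]
print(t0, pairings)  # t0 is about 0.34 here, yet both pairings are close to phi(0) = 1
```

The pairings are fixed by $\int n\,\theta_\epsilon^{n-1}\theta_\epsilon' = [\theta_\epsilon^n]$, independent of the mollifier shape, whereas the "point value" $\theta_\epsilon(0)$ carries no invariant meaning.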
\n\nThe above concepts made explicit use of the additive (group) structure of \n${\\mathbb R}^n$. Specifically the convolution integral used for the\nembedding of continuous functions and distributions in general lacks\ngeneralization to arbitrary manifolds. In the following we will \ngive a coordinate-independent formulation of the Colombeau \nalgebra\\footnote{A more detailed version will be given in \\cite{Bal2}}.\nTo begin with let us briefly recall the definition of distributions\non arbitrary (paracompact) $C^\\infty$ manifolds.\nThe natural generalization of distribution space as\nthe dual of a suitable test-function space \\cite{GeSh} is achieved by \nreplacing \ntest functions by test forms. That is, we define a distribution as a (continuous)\nlinear functional on the space of $C^\\infty$ n-forms with compact \nsupport together with the usual locally convex vector space topology\n\\cite{ChoBr}.\nLocally integrable functions $f$ give rise to regular functionals via\n$$\n\\tilde\\varphi \\mapsto (f,\\tilde{\\varphi}) := \n\\int\\limits_{M} f\\tilde{\\varphi} \\qquad \n\\forall\\tilde{\\varphi}\\in C_0^\\infty(M)\n$$\nThe natural generalization of the derivative of a distribution\nis given by the notion of the Lie-derivative with respect to\nan arbitrary $C^\\infty$ vector-field $X$, i.~e.~\n$$\n({\\mathcal L}_X f,\\tilde{\\varphi}) := (-1) (f,{\\mathcal L}_X\\tilde{\\varphi}).\n$$\nThe above definition shows that distribution space does not require\nadditional structure but is purely a concept\nwhich depends on the differentiable structure of the manifold.\nIt reduces to the usual notion of distribution space on ${\\mathbb R}^n$\nif one makes use of the natural volume form $d^n x$ and decomposes\nevery test-form $\\tilde{\\varphi}=\\varphi(x)d^n\\! 
x$.\nLet us now try to do the same for the Colombeau algebra.\nThe $C^\\infty$ functions of moderate growth in $\\epsilon$ are easily \ngeneralized\n\\begin{eqnarray*}\n{\\mathcal E}_M(M) &=& \\{ (f_\\epsilon) \\in C^\\infty(M) | \\forall\\>\nK \\subset M\\>\\mbox{compact}, \n\\forall\\>\\{X_1,\\dots,X_p\\}\\>\\\\\n&&\\hspace*{0.5cm}X_i\\in\\Gamma(TM),[X_i,X_j ]=0,\\exists N\\in{\\mathbb N},\\exists \n\\eta> 0, \n\\exists c>0\\> \\\\\n&&\\hspace*{0.5cm} s.t. \\sup_{x\\in K}|{\\mathcal L}_{X_1}\\dots{\\mathcal L}_{X_p}\nf_\\epsilon(x)|\n\\leq \\frac{c}{\\epsilon^N}\\quad 0<\\epsilon<\\eta\\}\n\\end{eqnarray*}\nIn the same manner we may extend the ideal ${\\mathcal N}$ \n\\begin{eqnarray*}\n{\\mathcal N}(M) &=& \\{ (f_\\epsilon) \\in C^\\infty(M) | \\forall\\>\nK \\subset M\\>\\mbox{compact},\\forall\\>\\{X_1,\\dots,X_p\\}\\> \\\\\n&&\\hspace*{0.5cm}X_i\\in\\Gamma(TM),\n [X_i,X_j]=0,\\forall q\\in{\\mathbb N},\\exists \\eta> 0, \\exists c>0\\>\\\\\n&&\\hspace*{0.5cm} s.t. \\sup_{x\\in K}|{\\mathcal L}_{X_1}\\dots\n{\\mathcal L}_{X_p}f_\\epsilon(x)|\n\\leq c\\epsilon^q\\quad 0<\\epsilon<\\eta\\},\n\\end{eqnarray*}\nAs usual the Colombeau-algebra ${\\mathcal G}(M)$ is defined to be the \nquotient of ${\\mathcal E}_M(M)$ with respect to ${\\mathcal N}(M)$. \nIn order to generalize the embedding of continuous functions \nand more generally of distributions, we have to find an analogue \nof the smoothing kernel $\\varphi$. The immediate problem we are facing\nis that the expression used in ${\\mathbb R}^n$\n\\begin{equation}\\label{falt}\nf_\\epsilon(x) = \\int f(y)\\varphi\\fracwithdelims(){y-x}{\\epsilon}\n\\frac{1}{\\epsilon^n}d^n\\! y\n\\end{equation}\nmakes explicit use of the linear structure of ${\\mathbb R}^n$ as may\nbe seen from the argument of $\\varphi$ in (\\ref{falt}). 
Moreover, in\norder to allow the identification of the two types of embeddings of\n$C^\\infty$ functions, as discussed at the beginning of this section, one \nhas to require\n$$\n\\int y^\\alpha \\varphi(y) d^n\\! y = 0 \\qquad 1\\leq |\\alpha | \\leq q,\n$$\nfor some finite but arbitrary $q\\in {\\mathbb N}$. Once again this expression\ndepends explicitly on the chosen coordinates. At first glance it seems \nthat the above conditions have to be modified \\cite{ColMer} to allow a \ngeneralization to an\narbitrary manifold. However, if one makes use of the tangent bundle $TM$\nof the manifold $M$ the above conditions may be taken as they are.\nIn order to show how this can be done let us consider local \ncoordinates on $TM$ that are induced by coordinates on $M$,\ni.~e.~ a bundle chart.\nLet $(U,x)$ denote a local coordinate system of $M$ then $(\\pi^{-1}(U),\n(x,\\lambda))$ will denote the corresponding bundle chart, where\n$TM@>\\pi>>M$ refers to the canonical projection and $\\lambda$ to the \ncoordinates along the fibres. Diffeomorphisms $M @>\\mu>>M$ induce\nfibre-preserving diffeomorphisms on \n$TM@>(\\mu,(\\partial\\mu\/\\partial x))>>TM$, namely \nso called bundle morphisms. 
The smoothing kernels will now be taken to\nbe $C^\\infty$ n-forms on $TM$, which allow a local ADM-like \nrepresentation\n$$\n\\tilde{\\varphi} = \\varphi(x,\\lambda)(d\\lambda^i+N^i{}_jdx^j)^n.\n$$\nMaking use of the embedding map \n$$\ni_x :T_x M@>>> TM\\qquad \\lambda \\mapsto (x,\\lambda)\n$$\nwe require\n$\\tilde{\\varphi}$ to obey\n\\begin{eqnarray}\\label{smoothk}\n &&\\int\\limits_{T_xM}\\!\\!i_x^*\\tilde{\\varphi} = \n \\int\\limits_{T_xM}\\!\\!\\varphi(x,\\lambda)d^n\\lambda=1,\\qquad\n i_x^*\\tilde{\\varphi} \\in \\Omega_0(T_xM),\\nonumber\\\\\n &&\\int\\limits_{T_xM}\\!\\!\\lambda^\\alpha i_x^*\\tilde{\\varphi} = \n \\int\\limits_{T_xM}\\!\\!\\lambda^\\alpha\\varphi(x,\\lambda)d^n\\lambda=0\n \\qquad\\forall\\>1\\leq |\\alpha|\\leq q.\n\\end{eqnarray}\nAll of the conditions (\\ref{smoothk}) are invariant under \n$M$-diffeomorphisms, since the latter act linearly on the fibres.\nMoreover, rescaling the smoothing kernel is a well defined concept, \nsince the transformation\n$$\n\\phi_\\epsilon: (x,\\lambda) \\mapsto (x,\\frac{1}{\\epsilon}\\lambda)\\qquad\n\\tilde{\\varphi}_\\epsilon := \\phi_\\epsilon^*\\tilde{\\varphi}\n$$\nis a specific case of the left action of the structure group \n$GL(n,{\\mathbb R})$ of $TM$.\nHaving solved the first part of the problem we now have to decide\nhow to generalize the convolution integral necessary for the embedding \nof the continuous functions in the Colombeau algebra. Let us therefore \nchoose a local coordinate system on $M$, which at the same time induces\ncoordinates on every tangent space in its domain. Let us, moreover, denote \nthe representative of a $C^\\infty(M)$ function with respect to the above \ncoordinates by $f(x)$. Identification of the local coordinates\non $M$ with those of the fibre attached at $x$ allows us to ``lift''\n$f$ to $T_xM$ by defining the value of the lift at $\\lambda$ to\nbe $f(x+\\lambda)$. 
The corresponding smoothed function $f_\\epsilon$ \non $M$ will then be defined to be\n$$ \nf_\\epsilon(x):=\\int\\limits_{T_xM}\\!\\!f(x+\\lambda)\\varphi(x,\n\\frac{\\lambda}{\\epsilon})\\frac{1}{\\epsilon^n}d^n\\!\\lambda\n=\\int\\limits_{T_xM}\\!\\!f(x+\\epsilon\\lambda)\\varphi(x,\\lambda)d^n\\!\\lambda\n$$\nThis definition is not coordinate invariant because the action of an\n$M$-diffeomorphism $\\mu$ yields\n\\begin{eqnarray}\\label{coinv1}\n \\bar{f}_\\epsilon(\\bar{x}) &=& \\int\\limits_{T_{\\bar{x}}M}\\!\\!\\bar{f}\n (\\bar{x}+\\bar{\\lambda})\\bar{\\varphi}(\\bar{x},\n \\frac{\\bar{\\lambda}}{\\epsilon})\\frac{1}{\\epsilon^n}\n d^n\\!\\bar{\\lambda}\n =\\int\\limits_{T_{\\bar{x}}M}\\!\\!\\bar{f}(\\bar{x}+\\epsilon\\bar{\\lambda})\n \\bar{\\varphi}(\\bar{x},\\bar{\\lambda})d^n\\!\\bar{\\lambda}\\nonumber\\\\\n &=&\\int\\limits_{T_xM}\\!\\!f(\\mu^{-1}(\\mu(x)+\\epsilon\n \\frac{\\partial\\mu}{\\partial x}\\lambda))\\varphi(x,\\lambda)d^n\\!\\lambda,\n\\end{eqnarray}\nwhich obviously differs from the definition in the unbarred system \n\\begin{equation}\\label{coinv2}\n f_\\epsilon(x)=\\int\\limits_{T_xM}\\!\\!f(x+\\epsilon\\lambda)\n \\varphi(x,\\lambda)d^n\\!\\lambda\n\\end{equation}\nThe difference of (\\ref{coinv1}) and (\\ref{coinv2}) is of order\n$\\epsilon^{q+1}$ if all the moments of $\\varphi$ up to order $q$ vanish,\nwhich is a necessary condition to belong to the ideal $\\mathcal N$.\nThus the corresponding Colombeau object is well-defined since the \ncoordinate change only generates a motion within its equivalence class. 
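The moment conditions (\ref{smoothk}) and the $\epsilon^{q+1}$ smoothing error can be illustrated numerically in one flat dimension. The sketch below is not part of the original analysis: it uses a Gaussian-type kernel $\varphi(\lambda)=(3/2-\lambda^2)e^{-\lambda^2}/\sqrt{\pi}$ for brevity (the text requires compactly supported kernels), whose moments vanish for $1\leq|\alpha|\leq 3$, i.e.\ $q=3$.

```python
import numpy as np

# Kernel with int phi = 1 and vanishing moments up to order q = 3.
# Gaussian-type profile chosen only to keep the sketch short; the text
# works with compactly supported C-infinity kernels.
def phi(lam):
    return (1.5 - lam ** 2) * np.exp(-lam ** 2) / np.sqrt(np.pi)

lam = np.linspace(-10.0, 10.0, 200001)
h = lam[1] - lam[0]
w = phi(lam)

# Conditions (smoothk): zeroth moment 1, higher moments (up to q) zero.
moments = [float(np.sum(lam ** k * w) * h) for k in range(4)]

def smooth(f, x, eps):
    """f_eps(x) = int f(x + eps*lam) phi(lam) dlam (flat coordinate version)."""
    return float(np.sum(f(x + eps * lam) * w) * h)

# For smooth f the error f_eps - f shrinks like eps^{q+1}:
err_big = abs(smooth(np.cos, 0.3, 0.10) - np.cos(0.3))
err_small = abs(smooth(np.cos, 0.3, 0.05) - np.cos(0.3))
```

Halving $\epsilon$ reduces the error by roughly $2^{q+1}=16$, consistent with the $\epsilon^{q+1}$ estimate above.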
\nFinally, the concept of association is also without problems, i.~e.~\n$$\n(f_\\epsilon) \\approx (g_\\epsilon)\\>\\mbox{iff}\\> \n\\lim_{\\epsilon\\to 0}\\int\\limits_M(f_\\epsilon - g_\\epsilon)\\tilde{\\varphi}=0\n\\>\\forall \\tilde{\\varphi}\\in\\Omega_0(M).\n$$ \nWe will say that the Colombeau object $(f_\\epsilon)$ gives rise to a\ndistribution $T$ if \n$$\n(T,\\tilde{\\varphi}):=\\lim_{\\epsilon\\to 0} \\int\\limits_M f_\\epsilon \n\\tilde{\\varphi}\n$$\ndefines a (continuous) linear functional over the space of test-forms.\nThe correspondence will in general be many to one. It is easy\nto see that two different distributions with associated Colombeau\nobjects require the latter to be also different. \nAs in the case of ${\\mathbb R}^n$, association of Colombeau-objects\nbecomes equality on the level of distributions and all vector space\noperations together with Lie-derivatives do not break the association.\n\n\\section*{\\Large\\bit 3) Geodesic equation and Colombeau theory}\nLet us now look at the geodesic equation from the point of view \nof Colombeau theory. 
That is, replace the delta-function\nappearing in (\\ref{impwave}) by an arbitrary regularisation \n$\\delta$ \nand do the same for the $\\theta$'s that enter in (\\ref{geoans}).\nTaking into account the three Killing-vectors of the general \nimpulsive wave-profile \\cite{AiBa2,AiBa3}\n\\begin{equation} \\label{killings}\n \\xi_1^a = p^a\\quad \\xi_{\\tilde{t}}^a = (\\tilde{t}x )p^a - \n (px)\\tilde{t}^a ,\n\\end{equation}\nwhere $\\tilde{t}$ denotes an arbitrary vector in the subspace transversal \nto $p^a,\\bar{p}^a$, yields\n\\begin{eqnarray}\\label{coconst}\n && p\\dot{x} \\approx const\\nonumber\\\\\n && (\\tilde{t}x )p\\dot{x} - (px)(\\tilde{t}\\dot{x})\\approx const.\n\\end{eqnarray}\nThe first line of (\\ref{coconst}) just tells us that $px$ is an \naffine parameter, while\nthe second line fixes the transversal shift $\\tilde{b}^+$ in (\\ref{geoans}) \nto be zero.\nThe requirement that the length of the tangent vector is covariantly\nconstant along the geodesic becomes\n\\begin{equation}\n -2a + \\tilde{a}^2 + (-2a^+ + (\\tilde{a}^+)^2)\\theta + \n ( -2b^+ + f(\\tilde{b}) )\\theta' \\approx const,\n\\end{equation}\nwhich implies that the coefficients of $\\theta$ and $\\theta'$\nhave to be zero \n\\begin{eqnarray}\n a^+ &=& \\tilde{a}\\tilde{a}^+ + \\frac{1}{2}(\\tilde{a}^+)^2,\\nonumber\\\\\n b^+ &=& \\frac{1}{2} f(\\tilde{b}).\n\\end{eqnarray}\nFrom the projection of the geodesic onto the orthogonal complement\nof $p^a$ and $\\bar{p}^a$\n\\begin{equation}\n \\tilde{x}'' - \\frac{1}{2}\\tilde{\\partial}f \\delta \\approx 0\n\\end{equation}\ntogether with the above results we obtain\n$$\n\\tilde{a}^+ = \\frac{1}{2}\\tilde{\\partial}f(\\tilde{b})\n$$\nSince all the plus-parameters are determined in terms of the\ninitial data, the remaining equation has to be satisfied consistently.\n\\begin{eqnarray}\\label{consist}\n &&(\\bar{p}x)'' - (x'\\tilde{\\partial})f\\>\\delta - \n \\frac{1}{2} f\\>\\delta'\\approx 0\\nonumber\\\\\n &&(a^+(px)+ b^+)\\theta'' + 2 
a^+\\theta' \n - \\frac{1}{2}(\\tilde{a} + ((px)\\theta)' (\\tilde{a}^+\n \\tilde{\\partial})f)\\>\\delta - \\frac{1}{2} f(\\tilde{b})\\>\\delta' \n \\approx 0\\nonumber\\\\\n &&b^+\\theta'' + a^+\\theta' -\\frac{1}{2} (\\tilde{a}\\tilde{\\partial})\n f(\\tilde{b})\\theta' -\\frac{1}{2}(\\tilde{a}^+\n \\tilde{\\partial})f\\>((px)\\theta)'\\delta - \\frac{1}{2} f(\\tilde{b})\\theta''\n \\approx 0\\nonumber\\\\\n &&\\frac{1}{8}(\\tilde{\\partial}f)^2(\\tilde{b})\\theta'\n -\\frac{1}{4} (\\tilde{\\partial}f(\\tilde{b})\n \\tilde{\\partial})f\\>((px)\\theta)'\\delta \\approx 0.\n\\end{eqnarray}\nTaking into account that $((px)\\theta)'$ is also a Theta-function,\nwhich will be denoted by $\\hat{\\theta}$ one has \n$\\hat{\\theta}\\delta \\approx C \\theta'$ as may be seen from \n\\begin{eqnarray}\\label{ambi}\n (\\hat{\\theta}\\delta,\\varphi) &=&\\int\\limits_{-\\infty}^\\infty\\!du\n \\frac{1}{\\epsilon}\\phi\\fracwithdelims(){u}{\\epsilon}\n \\int\\limits_{-\\infty}^u\\!dv\\frac{1}{\\epsilon}\n \\psi\\fracwithdelims(){v}{\\epsilon}\\varphi(u)\n \\qquad\\phi,\\psi\\in C^\\infty_0\\nonumber\\\\\n&=&\\int\\limits_{-\\infty}^\\infty\\!du\\phi(u)\\int\\limits_{-\\infty}^u\\!dv\\psi(v)\n\\varphi(\\epsilon u) = C\\varphi(0) + {\\mathcal O}(\\epsilon).\n\\end{eqnarray}\nIn (\\ref{ambi}) $\\phi$ and $\\psi$ respectively\ndenote the regularisations of $\\delta$ and $\\hat{\\theta}'$.\nTherefore (\\ref{consist}) becomes\n$$\n\\theta' \\frac{1}{8}(\\tilde{\\partial}f)^2(\\tilde{b})(1-2C) \\approx 0,\n$$\nwhich requires $C$ to be $1\/2$. However, this is precisely the type of \ncondition we encountered in section one. 
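The regularisation dependence of the constant $C$ in (\ref{ambi}) is easy to see numerically. The following sketch (illustrative only; it uses Gaussian mollifiers where the text uses compactly supported $C^\infty_0$ kernels) computes $C=\int\phi(u)\big[\int_{-\infty}^u\psi(v)\,dv\big]du$ for identical and for mismatched regularisations of $\delta$ and $\hat\theta'$:

```python
import numpy as np

u = np.linspace(-12.0, 12.0, 240001)
h = u[1] - u[0]

def mollifier(center=0.0):
    """Normalised Gaussian-type mollifier (stand-in for a C_0^infty kernel)."""
    g = np.exp(-((u - center) ** 2))
    return g / (np.sum(g) * h)          # int phi du = 1

def C_value(phi, psi):
    """C = int phi(u) [int_{-inf}^{u} psi(v) dv] du, as in (ambi)."""
    theta_psi = np.cumsum(psi) * h      # running primitive of psi
    return float(np.sum(phi * theta_psi) * h)

C_same = C_value(mollifier(), mollifier())             # same regularisation
C_other = C_value(mollifier(), mollifier(center=2.0))  # shifted regularisation
```

With $\psi=\phi$ one gets $C=1/2$ exactly (the integrand is $\frac{d}{du}\Phi^2/2$ with $\Phi$ the primitive of $\phi$), while a shifted $\psi$ gives a very different value, showing why consistency singles out the $C=1/2$ class of regularisations.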
It restricts the regularisations\none is allowed to choose for $\\delta$ and $\\theta$ in order to obtain\na consistent (regularisation--independent) result.\n\n\\vfill\n\\noindent\n{\\bf Acknowledgement:} The author wants to thank Peter Aichelburg for \nmany useful discussions and the critical reading of the manuscript, and \nMichael Oberguggenberger and Michael Kunzinger for introducing him\nto Colombeau theory during his stay in Innsbruck in February 1996. \n\\newpage\n\\section*{\\Large\\bit Conclusion}\n\nIn this work we solved the geodesic equation for arbitrary impulsive \npp-wave geometries. Due to the singular character of the wave profile,\nwhich contains a delta-function, one inevitably leaves the framework of\n(classical) distribution theory. This fact manifests itself in the\nappearance of the ambiguous pointvalue $\\theta(0)$. However, since\nwe made use of an affine parametrization, the length of the tangent vector \nremains covariantly constant along the geodesic. This requirement fixed\n$\\theta(0)$ to be $1\/2$. In order to justify our result mathematically \nwe made use of the recently developed framework of Colombeau's new \ngeneralized functions, which is designed to allow a (consistent) \nmultiplication of distributions. Although the original definition of the \nColombeau algebra $\\mathcal G$ made use of ${\\mathbb R}^n$-specific concepts \nwe showed that a generalisation to an arbitrary manifold $M$ is possible by \nemploying the corresponding tangent bundle $TM$. Finally we showed that our \ncondition on $\\theta(0)$ may be justified from the point of view of\nColombeau-theory as condition on the ``regularizations'' used for the pulse \nand the geodetic trajectory, respectively. 
\\par \nAs a next step we will investigate semiclassical scattering,\nnamely various wave equations in a general impulsive pp-background with\nspecific emphasis on those geometries arising as ultrarelativistic \nlimits of black-hole geometries.\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSupernovae are bright stellar explosions which can now be observed up to\nredshifts $z \\sim 1$. Several programmes (Perlmutter et al. 1997, 1998;\nSchmidt et al. 1997; Garnavich et al. 1997) are presently devoted to discover\nhigh--redshift supernovae for cosmological purposes. The goal is to determine\nthe geometry of the Universe through the effect that $\\Omega_{M}$,\n$\\Omega_{\\Lambda}$ have in the magnitude--redshift relationship of Type Ia\nsupernovae (SNe Ia). The same programmes provide rates of explosion up to\nlarge redshifts through proper evaluation of control time and efficiency of\ndetection. Indeed, the Supernova Cosmology Project has already achieved the\nfirst results after evaluating the data collected in its 1995 and 1996\ndiscovery runs (Pain et al. 1996, 1997).\n \nIn this ${\\it Letter}$ we outline the theoretical background for the\nprediction of those number counts for different geometries of the Universe\nand for various SNe Ia progenitors.\n\nThe pace at which stars formed in the past (the SFR) and the evolutionary\nclock to explosion of those binary supernovae is reflected in the counts. The\nevolutionary clock spans, in the case of SNe Ia ---as opposed to SNe II---,\nan important fraction of the age of the Universe.\n\nSNe Ia counts extending up to apparent red magnitudes $m_{R}\\sim 23-25$ could\nprovide a good test of the SNe Ia binary progenitor systems. We also find\nthat SNe Ia counts should be sensitive to a $\\Lambda$--dominancy in our\nUniverse through a larger increase at $z \\sim 1$. 
Refinements in the\ndetermination of the global SFR and continued SNe Ia searches at large\nredshifts should furnish, in the near future, an accurate enough basis for\nboth tests.\n\n\\section{Modeling}\n\nThe global SFR recently derived for high redshift galaxies in the Hubble Deep\nField (HDF) by Madau et al. (1996) has been extended to lower redshifts\n(Madau 1997) in accordance with the results of Lilly et al. (1995, 1996). The\nvariation of the SFR with $z$ is similar to that of the space density of\nquasars (Shaver et al. 1996), peaking at $z\\simeq 2$, the latter showing,\nhowever, a steeper slope both prior and after maximum. The above $SFR(z)$ at\n$z > 2$ has been obtained from the UV luminosity density along redshift.\nRowan--Robinson et al. (1997) have also derived a $SFR(z)$ from ISO infrared\nobservations of galaxies in the HDF. The general shape agrees well with\nMadau's (1996) data but the inferred values are higher, maybe implying that\nabout 2\/3 of star formation at high $z$ has taken place shrouded by dust. In\na different approach, Pei \\& Fall (1995) have dealt with the determination of\nthe global star formation history by tracking the evolution of the global HI\ncontents of the Universe as measured from $Ly\\alpha$ QSO absorption--line\nsystems. The results of Madau et al. (1996) agree with those from that very\ndifferent approach. On the other hand, Lilly et al. (1996) provide, from the\nCanada--France--Hawaii Redshift Survey, an estimate of the comoving\nluminosity density of the Universe over the redshift range $0 < z < 1$, which\ncan be interpreted in terms of a $SFR(z)$ which agrees with previous\nestimates of a significant increase with $z$ up to $z\\sim 1.5-2$. Prospects\nto further explore the global SFR include the number counts of SNe II\ndiscovered at high redshift, which should trace the activity of star\nformation in the Universe along $z$. Such numbers on SN II are obtainable in\ncurrent high--$z$ SNe searches. 
Given the continuous improvement in the\ndetermination of the global SFR, we base the present calculation of SNe Ia\nnumber counts on the most recent empirical results. We evaluate the rate of\nexplosion of SNe Ia by convolving their time to explosion with the SFR. The\nrate can then be calculated as:\n\n$$r_{SNeIa}(t) = \\int_{0}^{t} R(t - \\tau)SFR(\\tau) d\\tau\\eqno(1)$$\n\n\\noindent\nwhere $R(t)$ is the SNe Ia rate after an instantaneous outburst, and $t$ and\n$\\tau$ are in the SN rest frame. Depending on the progenitor systems, in some\ncases we would have SNe Ia with a peak rate of explosion at 10$^{9}$ yr. In\nother cases, there can be some SNe Ia exploding even only a few $10^{7}$ yr\nafter star formation already, thus becoming the brightest optical events at\n$z >> 1$. $SFR(\\tau)$ is derived from the global $SFR(z)$ (Madau et al.\n1996; Madau 1997). The adopted $SFR(z)$ (for $H_{0} = 50\\ km\\ s^{-1}\\\nMpc^{-1}$, $\\Omega_{M} = 1.0$, $\\Omega_{\\Lambda}$ = 0) is shown in the top\npanel of Figure 1, along with the measured data points and their error bars.\nNote that the data points from Connolly et al. (1997) for the $1\\hbox{\\lower .8ex\\hbox{$\\,\\buildrel < \\over\\sim\\,$}}\nz\\hbox{\\lower .8ex\\hbox{$\\,\\buildrel < \\over\\sim\\,$}} 2$ range fall somewhat below whereas those from Rowan--Robinson et\nal. (1997) for the $0.6\\hbox{\\lower .8ex\\hbox{$\\,\\buildrel < \\over\\sim\\,$}} z\\hbox{\\lower .8ex\\hbox{$\\,\\buildrel < \\over\\sim\\,$}} 1$ range are significantly above\nthe adopted curve. The $SFR(z)$ is transformed according to the geometries of\nthe model universes considered. 
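The convolution (1) can be sketched discretely. The code below is a toy model only: the analytic SFR shape and the $\propto 1/t$ delay-time distribution with a 1 Gyr minimum delay are illustrative stand-ins for the empirical $SFR(z)$ and the Monte Carlo binary models used in the text.

```python
import numpy as np

t = np.linspace(0.0, 13.0, 1301)          # cosmic time, Gyr
dt = t[1] - t[0]

sfr = t * np.exp(-t / 2.0)                # toy SFR: rises, peaks, declines

def dtd(tmin):
    """Toy delay-time distribution R(t) ~ 1/t above a minimum delay (DD-like)."""
    r = np.where(t >= tmin, 1.0 / np.maximum(t, tmin), 0.0)
    return r / (r.sum() * dt)             # normalised to unit integral

# Eq. (1): r(t) = int_0^t R(t - tau) SFR(tau) dtau, as a discrete convolution.
rate = np.convolve(dtd(1.0), sfr)[: t.size] * dt

t_sfr_peak = t[np.argmax(sfr)]
t_rate_peak = t[np.argmax(rate)]          # SNe Ia rate peaks after the SFR
```

The delayed rate peaks later than the star formation itself, which is the qualitative effect (shifted, flattened rise with $z$) discussed for the DD channel below.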
Here we restrict this presentation to three\nfavored models of the Universe: {\\it Model A}, a flat universe with $H_{0} =\n65\\ km\\ s^{-1}$, $\\Omega_{M} = 1$, $\\Omega_{\\Lambda} = 0$; {\\it Model B}, an\nopen universe with $H_{0} = 65\\ km\\ s^{-1}\\ Mpc^{-1}$, $\\Omega_{M} = 0.3$,\n$\\Omega_{\\Lambda} = 0$, and {\\it Model C}, a flat universe with $H_{0} = 65\\\nkm\\ s^{-1}\\ Mpc^{-1}$, $\\Omega_{M} = 0.3$, $\\Omega_{\\Lambda} = 0.7$.\n\nThe $SFR(z)$ derived for each case is next transformed into $SFR(t)$ (in the\ncomoving frame). The derived global $SFR(t)$, when time--integrated over the\nwhole history of the corresponding universe (model $A$: $t_{0} = 10.0\\ Gyr$;\nmodel $B$: $t_{0} = 12.2\\ Gyr$; model $C$: $t_{0} = 13.6\\ Gyr$), produces the\nobserved stars today (Guzm\\'an et al. 1997).\n\nThe global SNe Ia rates, $r_{SNe Ia}(t)$ ($yr^{-1}\\ Mpc^{-3}$), for each\nmodel universe and family of SNe Ia progenitor systems, are calculated in the\ncomoving frame according to (1), and integrated over comoving volume to\nobtain the expected SNe Ia counts ($yr^{-1} sq. deg^{-1}$) as a function of\nz. We integrate over $dV$:\n\n$$dV = {d_{M}^{2}\\over (1 + \\Omega_{k}H_{0}^{2}d_{M}^{2})^{1\/2}}\\\nd(d_{M})d\\Omega\\eqno(2)$$\n\n\\noindent\nwhere $d_{M}$ is the proper motion distance and $\\Omega_{k} = 1 - \\Omega_{M}\n- \\Omega_{\\Lambda}$ is the fractional contribution of the curvature to the\nexpansion (Carroll, Press, \\& Turner 1992). 
$d_{M}$ is related to the\nluminosity distance $d_{L}$ through $d_{L} = (1+z)\\ d_{M}$ (Weinberg 1972).\n$d_{L}$ is calculated as:\n\n$$d_{L} = { (1 + z) \\over H_{0}|\\Omega_{k}|^{1\/2}}\\\nsinn\\left\\{|\\Omega_{k}|^{1\/2}\\int_{0}^{z_{1}}\\left[(1+z)^{2} (1+\\Omega_{M}z)\n- z(2+z)\\Omega_{\\Lambda}\\right]^{-1\/2}dz\\right\\} \\eqno(3)$$\n\n\\noindent\nwhere $sinn$ stands for $sinh$ if $\\Omega_{k} > 0$ and for $sin$ if\n$\\Omega_{k} < 0$ (both $sinn$ and the $\\Omega_{k}$ terms disappear from (3)\nif $\\Omega_{k} = 0$, leaving only the integral times $(1 + z)\/H_{0}$).\n\n\\noindent\nThe dependence on cosmological parameters of the comoving volume derivative\n$dV\/dz d\\Omega$ differs from one model of Universe to another. A different\ncosmological effect is the age--redshift (t(z)) relationship for different\nmodel Universes, which changes the z at which the SNe Ia rates peak.\n\nThe results are tightly linked to the reliability of the global SFR. An\nestimate of its current uncertainty can be obtained by looking at the data\npoints in the top panel of Figure 1. Increasingly accurate SFRs should\nsteadily improve the SNe Ia rate predictions. There is, as well, increasing\nevidence of universality in the slope of the initial mass function (IMF) from\ntests in different metallicity and age environments. Such a ``universal''\nmass function would be well represented by the Salpeter (1955) power law with\n$x = 0.86 \\pm 0.23$. Both the overall trend to convergence in the estimate of\nthe global $SFR(z)$ and the almost constancy of the IMF favor the possibility\nof deriving global values for the SNe Ia explosion rates.\n\n\\noindent{\\it Stellar clock: binary progenitors}. All evolutionary models for\nSNe Ia progenitors involve the accretion of mass by a C+O WD from a companion\nin a close binary system. Current research on SNe Ia has discarded some\nformerly proposed SNe Ia scenarios. 
The two classes of binary systems\nconsidered here as the most likely systems giving SNe Ia are: (1) the merging\nof two white dwarfs ---also called double degenerate scenario (DD)---, and\n(2) the accretion by the WD of H from a less evolved star in a close binary\nsystem ---a single degenerate scenario (SD)---. We call this second case a\ncataclysmic--like system (CLS). The time evolution of the SNe Ia rates after\nstar formation differs from one family of progenitor systems to another, and\nthere are variations within each family. We have modeled the SNe Ia rates by\nmeans of the same Monte Carlo code as in previous works (Ruiz--Lapuente,\nBurkert, \\& Canal 1995,1997; Canal, Ruiz--Lapuente, \\& Burkert 1996,\nhereafter RCB95,97 and CRB96), and adopting different prescriptions for\neach evolutionary path and for the initial binary parameters. We have also\ncompared our predictions with those of other authors: (1) The physical input\nfor the merging of two C+O white dwarfs (WDs) is modeled both as in Iben \\&\nTutukov (1984) and in RBC95. We allow for a number of different physical\ndescriptions of the binary evolution, and we try different values of the\ncommon envelope parameter $\\alpha$ (a measure of the efficiency with which\norbital energy is used in envelope ejection). (2) The physical modeling of\nthe explosion of a C+O WD when it reaches the Chandrasekhar mass by accretion\nand burning of H from a main--sequence, subgiant, or giant companion is\nmodeled as in the early work by Iben \\& Tutukov (1984), as in CRB96, and\nfinally including the most recently proposed variation of the binary\nevolution by Hachisu, Kato, \\& Nomoto (1996), who find a ``wind'' solution\nfor fast hydrogen accretion by a WD from a red--(sub)giant companion, but we\napply the same evolutionary constraints as Yungelson et al. 
(1996).\nParticularly relevant to the modeling are the slope of the IMF, the\ndistribution of mass ratios $q$ of the secondary to the primary, and the\ndistribution of initial separations $A_{0}$ in the progenitor binary systems,\ntogether with the prescriptions for mass transfer. By trying various\nphysical approaches adopted by different modelers and exploring different\nvalues of the initial binary parameters (IMF, $q$, $A_{0}$ distributions) and\nassumptions as to the mass transfer rates, we obtain the characteristic\nbehaviour of each stellar clock and the uncertainties in the absolute scale\nof the rates.\n\nIn Figure 1, a comparison of $R(t)$ in (1) with those from other authors\n(middle panels: different IMF, $q$ and $A_{0}$ distributions; bottom panel:\ndifferent assumptions as to the allowed mass transfer rates) is made. From\nsuch a comparison we concluded in our previous work that the evolution of the\nSNe Ia rates (rise, peak, decline) along cosmic time, for a given class of\nsystems, has broad common features shared by the predictions from\ndifferent authors (Iben \\& Tutukov 1984; Tutukov \\& Yungelson 1994; Yungelson\net al. 1996; our own modeling), which basically describe the clock of the\ncorresponding systems. The Hachisu, Kato \\& Nomoto (1996) solution, here\ncalled CLS(W), would allow for some relatively young SNe Ia in the CLS\nscenario, and the SNe Ia rate would increase fast with redshift. On the other\nhand, the DD scenario predicts a flatter increase of the rate from z $\\simeq$\n0 to z$\\simeq$ 1, in all model universes. 
In all cases, the $R(t)$ shown here\nwould give, for our own Galaxy and the present time, SNe Ia rates which are\nwithin the current error bars of the observational estimates.\n\nVersions CLS and CLS(W) in the bottom panel of Figure 1 illustrate the range\nof uncertainty in the time of start of the explosions in the SD scenarios.\nFor the DD progenitors we have chosen the most favorable physical\nassumptions, those that enhance the numbers of SNe Ia at high $z$, in order\nto best reproduce the first observational results. Any other choices would\ngive lower numbers. In fact, the observed numbers seem to be above the\npredictions for this type of binary progenitor system (but one should stress\nhere the uncertainty in the global SFR). A clear feature of the DD scenario\nis not only that the SNe Ia rates do not increase fast towards higher\nredshifts but also that they are lower than those predicted for the SD\nprogenitors. We should note here that we are assuming the universality of the\ndistributions of binary parameters determined for the solar neighbourhood\n(Duquennoy \\& Mayor 1991). We have explored, however, the effects that\nplausible changes in the initial $q$ and $A_{0}$ distributions would have in\nthe outcome and found them to be only moderate (see below).\n\n\\noindent{\\it Counts as a function of magnitude}. We compare the model counts\nwith the observations by deriving the dependence of the number counts on\n$m_{R}$, the apparent magnitude of the SNe Ia in the $R$ photometric band. In\nfact, in this transformation the intrinsic dispersion in brightness of the\nSNe Ia is not a key factor, since we are using the $N-mag$ relationship\nmeasured in intervals of 0.2 to 0.5 mag for our tests (we could also use\n$N-z$, thus avoiding the transformation to magnitude). 
The intrinsic\ndispersion in magnitude is, in contrast, very important for cosmological\ntests using $m(z)$.\n\nWe assume the SNe Ia to have an average absolute blue magnitude:\n\n$$M_{B} = -18.52 + 5\\ log(h\/0.85)\\eqno(4)$$\n\n\\noindent\nwhere $h\\equiv H_{0}\/100$ (Perlmutter et al. 1997).\n\nThe distance modulus for each $z$ is calculated from the luminosity distance\n$d_{L}$. The apparent $m_{R}$ at maximum as a function of $z$ is determined\nfrom:\n\n$$m_{R} = M_{B} + 5\\ log (d_{L}(z)) - 5\\ + K_{corr}\\eqno(5)$$\n\n\\noindent\nwhere $K_{corr}$ is taken from Kim, Goobar, \\& Perlmutter (1996) (it includes\nthe full transformation from $B$ into $R$ magnitudes). We finally calculate\nthe variation of the SNe Ia rate ($yr^{-1}\\ sq. deg^{-1}$ $\\Delta mag^{-1}$)\nwith apparent red magnitude $m_{R}$ (we take $\\Delta m_{R}=0.5\\ mag$).\n\nThe results are displayed in the four panels of Figure 2. Shown in the Figure\nare the data points currently available (Pain et al. 1996; Pain, private\ncommunication). The dips in the curves are entirely due to the shape of the\n$K$--corrections. The different slopes of the curves from one family of\nprogenitors to another mainly reflect how fast the SNe Ia rate declines after\nreaching its peak value plus the time elapsed between star formation and the\npeak (steeper decline and longer time delay for the CLS and CLS(W) models\nthan for the DD model: see the middle and bottom panels of Figure 1), and the\nabsolute values of the rates at low redshift ($m_{R} \\simeq 20-21$) are\nsensitive to the ages of the corresponding model universes ($t_{0}(A)<\nt_{0}(B)< t_{0}(C)$). The slopes of the rates when comparing different models\nof the Universe are sensitive to the contribution of a non--zero $\\Lambda$.\nThe faster increase of the rates with magnitude (redshift) along the model\nsequence A--C corresponds to the increasing comoving volume derivatives\n($dV\/dz d\\Omega$) in the respective model universes. 
Such derivative is large\nif $\\Omega_{\\Lambda}$ gives a significant contribution to $\\Omega$. As a\ncheck of the sensitivity of the results to the choices of the binary model\nparameters, we have also calculated the $dN\/dm_{R}$--$m_{R}$ relationship,\nfor the DD progenitor, adopting different $q$ and $A_{0}$ distributions\n(dotted line in the middle panel of Figure 1): the final curve is not very\nsensitive to the various choices of the distributions explored in this work\n(the model is shown by the continuous line in the four panels of Figure 2).\nTaking at their face values the SFR and parameters for the SNe Ia progenitor\nevolution adopted, and also the two data points, the results for both the CLS\nand CLS(W) systems give better fits to the data than those for the DD\nsystems, model universe A being most favored and model C giving the worst fit\nfor any kind of system. But those conclusions are preliminary and we must\nawait the reduction of the uncertainties in the observed counts and in the\nglobal SFR for the intermediate--mass stars leading to SNe Ia. As we see from\nFigure 2, at present the uncertainty in the SNe Ia counts at $m_{R}\\simeq 22$\nis of the order of a factor of 2. That is the same as the difference between\nthe prediction for the DD model and the lowest one for the SD model, and also\nsimilar to the range covered by the two extreme predictions (CLS and CLS(W))\nfor the SD model. Any increase in the SFR would almost homologously shift\nupwards all the count predictions. At higher magnitudes, the range of\npredictions for the SD models becomes narrower while the differences with the\nDD model increase. Thus, if we knew both the SNe Ia counts at $m_{R}\\simeq\n23$ and the global SFR to better than a factor 1.5, we could already\ndiscriminate between the SD and the DD models. The sensitivity to the\ncosmological parameters $\\Omega_{M}$ and $\\Omega_{\\Lambda}$ is still low at\nthose magnitudes. 
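The chain from redshift to apparent magnitude through eqs. (3)-(5) can be sketched numerically. The code below is illustrative: it evaluates the $d_{L}$ integral of eq. (3) by simple quadrature for the three model universes and sets $K_{corr}=0$ as a placeholder for the full Kim, Goobar \& Perlmutter (1996) $B\to R$ correction.

```python
import numpy as np

C_KM_S = 2.99792458e5                      # speed of light, km/s

def d_L(z, H0, Om, OL, n=20001):
    """Luminosity distance (Mpc) from eq. (3); `sinn` becomes sinh, sin or
    the identity according to the sign of Omega_k = 1 - Om - OL."""
    Ok = 1.0 - Om - OL
    zz = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt((1.0 + zz) ** 2 * (1.0 + Om * zz) - zz * (2.0 + zz) * OL)
    chi = (f.sum() - 0.5 * (f[0] + f[-1])) * (zz[1] - zz[0])  # trapezoid rule
    if Ok > 0:
        s = np.sinh(np.sqrt(Ok) * chi) / np.sqrt(Ok)
    elif Ok < 0:
        s = np.sin(np.sqrt(-Ok) * chi) / np.sqrt(-Ok)
    else:
        s = chi
    return (1.0 + z) * (C_KM_S / H0) * s

def m_R(z, H0, Om, OL, K_corr=0.0):
    """Eqs. (4)-(5); K_corr = 0 is a placeholder for the full B->R K-correction."""
    h = H0 / 100.0
    M_B = -18.52 + 5.0 * np.log10(h / 0.85)
    return M_B + 5.0 * np.log10(d_L(z, H0, Om, OL) * 1.0e6) - 5.0 + K_corr

# Models A, B, C of the text at z = 1:
params = [(1.0, 0.0), (0.3, 0.0), (0.3, 0.7)]
dA, dB, dC = [d_L(1.0, 65.0, Om, OL) for Om, OL in params]
mags = [m_R(1.0, 65.0, Om, OL) for Om, OL in params]
```

For model A the integral reduces to the closed form $d_{L}=2(c/H_{0})(1+z)(1-1/\sqrt{1+z})$, a useful check; the ordering $d_{L}^{A}<d_{L}^{B}<d_{L}^{C}$ reflects the larger volume of the $\Lambda$-dominated model, and $z\simeq 1$ indeed corresponds to $m_{R}$ in the mid-24 range (before $K$-correction).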
If we had determined, for instance, the DD model to be the\nright one, then the SNe Ia counts at $m_{R} = 24.5$ (corresponding to\n$z\\simeq 1$) in the C universe ($\\Omega_{M} = 0.3$, $\\Omega_{\\Lambda} = 0.7$)\nshould be twice those in the A universe ($\\Omega_{M} = 1$, $\\Omega_{\\Lambda}\n= 0$), the factor of increase from $m_{R} = 22$ to $m_{R} = 24.5$ being also\ntwice as large in case C as compared with case A (both contrasts would be\nsharper if the SD model were the right one). The high--$z$ SN searches to\nmeasure $\\Omega_{M}$, $\\Omega_{\\Lambda}$ from the variation of the apparent\nmagnitudes of SNe Ia are now almost reaching $z = 1$ (Perlmutter et al. 1998;\nGarnavich et al. 1997). From samples large enough to derive the SNe Ia counts\nat $z\\simeq 1$, the constraints on both parameters should be tighter than\nthose obtainable from the counts alone, but the latter could provide a\nsupplementary test, since it is based on a different approach: the use of a\nvolume effect instead of a redshift--magnitude effect. Counts extending to\neven higher magnitudes would more clearly reveal the geometry of the\nUniverse. Suggested SN searches up to infrared magnitudes $K\\sim 26-27$\n(Miralda--Escud\\'e \\& Rees 1997) would extend well beyond that point.\n\n\\section{Conclusions and future prospects}\n\nThe theoretical results presented here are intended to show the potential of\nthe comparison between model predictions and the results of ongoing and\nfuture SNe Ia searches. The slopes of the $dN\/dm_{R}$--$m_{R}$ curves are\nmainly sensitive to the general characteristics of each SNe Ia model while\nthe absolute values depend more on model parameters and on the SFR. Hence the\ninterest of data covering a broader $m_{R}$ (or $z$) range than those\ncurrently available. 
On the other hand, the differences in the predictions\nfrom any SNe Ia progenitor assumed, for $\\Omega_{\\Lambda}$--dominated\nuniverses as compared with matter--dominated and open universes, become\nsignificant at large enough values of $m_{R}$. In a Universe dominated by\n$\\Omega_{\\Lambda}$ the number counts of SNe Ia show a larger increase at $z\n= 1$ due to the large volume encompassed at that redshift. Useful information\nwould be obtained by the measurement of the evolution of the SNe Ia counts up\nto $z = 1$ and beyond (possibly in the K--band for higher redshifts).\n\nThe first data obtained by Pain et al. (1996) gave a value of\n$34.4^{+23.9}_{-16.2}$ SNe Ia $yr^{-1}\\ sq. deg.^{-1}$ in a magnitude range\nof 21.3 $<$ R $<$ 22.3. These first results were obtained from three SNe\ndiscovered at redshifts 0.374, 0.420, and 0.354 (SN 1994H, SN 1994al \\& SN\n1994F) and the small number statistics dominates this very first\nmeasurement. A larger bulk of data, already obtained, will now reduce the\nstatistical uncertainties, and prospects to extend the observations up to\nhigher $z$ are on the way. It is thus too soon to extract firm conclusions\nfrom the data. Our purpose here is to propose a new useful test to be\ncompleted in the near future.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:1}\n\nThe CFT bootstrap program started in the 70's with the works \\cite{FERRARA1973161,Polyakov:1974gs}. Using the Virasoro algebra they solved every unitary 2D CFT with central charge $c<1$. However they were unable to extend their methods to higher dimensions or larger central charge. \n\nThe recent revival of the bootstrap program caused by \\cite{Rattazzi_2008} was due to its novel ways of analyzing crossing symmetry. 
Not only was it replicated in arbitrary dimensions, but it was also extended to feats similar to the original in special cases, such as the $3D$ Ising and $O\left(N\right)$ models, \cite{El_Showk_2012,Kos_2014,Kos_2015}. With their new formulation we are able to show for which values of the CFT data there is no unitary CFT.\n\nOur method is dual to theirs: we demonstrate how to construct simple unitary 4-point functions using the generalized polynomials. These will be points that cannot be excluded by their methods. An important distinction between the two is that ours is purely analytical. We won't be able to find every unitary CFT. However, as will be shown in section \ref{sec:4}, the generalized polynomials cover a substantial region of the space for small scaling dimension.\n\nThis work is divided as follows: in section \ref{sec:2} we review the numerical bootstrap approach to motivate why starting with crossing symmetric expressions is interesting. In section \ref{sec:4} we introduce and analyze the generalized polynomials. Lastly, in section \ref{sec:5} we present a computational method to find large scalar gaps.\n\nThe appendices cover the details not present in the main text: in appendix \ref{ap:A} we discuss a special case of generalized polynomials and the relation between crossing symmetry and a more general basis. Appendix \ref{ap:B} presents the properties of the hypergeometric functions in the OPE expansion and proves the important statements from the main text.\n\n\section{Reviewing the bootstrap equation and crossing symmetry} \label{sec:2}\n\nBefore starting it would be good to review some points. For a more in-depth review see \cite{Poland:2018epd,Rychkov:2016iqz,simmonsduffin2016tasi}. The two main elements in the CFT bootstrap program are the OPE and crossing symmetry. 
We can use the OPE to write all correlation functions in terms of the CFT data of primary operators, $\left\{\Delta_{i},\lambda_{ijk}\right\}$, and we can use crossing symmetry to relate different orderings in a correlator. A 4-point function of identical scalars $\phi$ is given by\n\n\begin{align}\n &\mathcal{G}\left(u,v\right)=\underset{J,\Delta}{\sum} \left(\lambda_{\phi\phi\mathcal{O}}\right)^{2} G^{\left(D\right)}_{J_{\mathcal{O}},\Delta_{\mathcal{O}}}\left(u,v\right), \\\n \label{eq:02} & \mathcal{G}\left(u,v\right)=\mathcal{G}\left(\frac{u}{v},\frac{1}{v}\right)=\left(\frac{u}{v}\right)^{\Delta_\phi}\mathcal{G}\left(v,u\right).\n\end{align}\n\nIn the numerical bootstrap the conformal block decomposition is applied to the crossing relations to obtain the bootstrap equation. This new manifestly unitary constraint is analyzed numerically with functionals to bound the CFT data\n\n\begin{align} \label{eq:06}\n \underset{J,\Delta}{\sum}\left(\lambda_{\phi \phi \mathcal{O}}\right)^{2}\left( \frac{G^{\left(D\right)}_{J_\mathcal{O},\Delta_{\mathcal{O}}}\left(u,v\right)}{u^{\Delta_{\phi}}}-\frac{G^{\left(D\right)}_{J_\mathcal{O},\Delta_{\mathcal{O}}}\left(v,u\right)}{v^{\Delta_{\phi}}}\right)=\underset{J,\Delta}{\sum}\left(\lambda_{\phi \phi \mathcal{O}}\right)^{2} F_{J,\Delta}^{\Delta_{\phi}}\left(u,v\right)=0, ~~~~ \left(\lambda_{\phi \phi \mathcal{O}}\right)^{2}\geq0.\n\end{align}\n\nIn practice we cannot expect to find an exact unitary 4-point function from this equation, but we can from the crossing relations $\left(\ref{eq:02}\right)$. If we construct a crossing symmetric expression and then show that it satisfies unitarity, we would have a point in the data space that cannot be excluded without new constraints. 
In the optimal case we would be able to pinch the boundary between the excluded and allowed region, similar to \cite{2020}; however, this is not what we have now. Although not optimal, this still allows us to find interesting results.\n\nAssuming we already have a crossing symmetric expression, whether a closed form or an approximation, the challenge becomes finding when it is unitary. The conformal blocks are closely related to the analytic structure of 4-point functions at the point $u=0$, whereas crossing symmetry is related to series at the point $u=v=1$. This loss of information about $u=0$ is where most of the difficulty resides. In appendix \ref{ap:A} we show this in a simple example.\n\nTo bypass this loss we search for solutions of crossing symmetry that behave similarly to a conformal block close to $u=0$. The conformal blocks behave as $G^{\left(D\right)}_{J,\Delta} \sim \mathcal{N}_{J,\Delta} u^{\frac{\Delta-J}{2}}$, $\left(u\xrightarrow[]{}0 ~~ v\xrightarrow[]{}1\right)$, with our normalization shown in appendix \ref{ap:B}. We use a basis of functions $\left\{f_{i}\left(u,v\right)\right\}$ that for small $u$ behave as a sum of power laws, $f_{i}\left(u,v\right) \sim \sum_{j} \mathcal{C}_{i, j}\left(v\right) u^{p_{i, j}}$, and are flexible enough to work for any scaling dimension in $\left(\ref{eq:02}\right)$. We will start with the simplest basis, the generalized polynomials, and show that it already has interesting results.\n\n\section{Generalized polynomials} \label{sec:4}\n\nThe generalized polynomials are polynomial-like functions with arbitrary powers of $u$ and $v$ and with a finite number of terms to keep crossing symmetry under control. 
They are given by the ansatz\n\n\begin{align} \label{eq:12}\n & \mathcal{G}\left(u,v\right) = \frac{1}{v^{\Delta_{\phi}}}\underset{i=1}{\overset{N<\infty}{\sum}}~ \xi_{i} ~ u^{a_{i}} v^{b_{i}} = \underset{i=1}{\overset{M}{\sum}}~ E_{i} \left(a_{i},b_{i},c_{i}\right), \\\n \label{eq:13} \left(a,b,c\right) &= \frac{u^{a} v^{b}+u^{a}v^{c}+u^{b}v^{a}+u^{b}v^{c}+u^{c}v^{a}+u^{c}v^{b}}{N_{a,b,c} ~ v^{\Delta_{\phi}}}, ~~ a+b+c=2\Delta_{\phi}, \\\n N_{a,b,c} &= \left\{\begin{tabular}{c}\n 1 if $a\neq b$, $b\neq c$, $c\neq a$, \\\n 2 if $a=b\neq c$, $b=c\neq a$, $c=a\neq b$, \\\n 6 if $a=b=c$.\n \end{tabular}\right.\n\end{align}\n\nFrom the polynomial-like basis $\left\{u^{a_{i}} v^{b_{i}-\Delta_{\phi}}\right\}$ we created a new crossing symmetric basis $\left\{\left(a_{i},b_{i},c_{i}\right)\right\}$ which we can use to construct our correlators. Using the power law behavior at $u=0$ we can predict the form of their conformal block decomposition. By matching the powers in the conformal blocks with those in the polynomial basis we find that the correct series should have the following structure\n\n\begin{align} \label{eq:15}\n u^{p} {v^{q}} \sim \underset{J,n}{\sum} A^{\left(D\right)}_{J,n} ~ G_{J,2p+J+n}^{\left(D\right)}\left(u,v\right).\n\end{align}\n\nWe can see that the series generated by two power laws $u^{p}$ and $u^{q}$ won't overlap unless their powers differ by an integer or half-integer.\n\nUnlike unitarity, crossing symmetry does not depend on the number of dimensions when $D\geq2$; as such, expressions $\left(\ref{eq:12}\right)$ and $\left(\ref{eq:15}\right)$ are general results. In\n appendix \ref{ap:B} we give all the properties needed to show that the correct conformal block decomposition for general $D$ has the form\n\n\begin{align} \label{eq:16}\n \frac{u^{p}\left(v^{q}+v^{r}\right)}{v^{\Delta_{\phi}}} = \underset{m\geq n}{\sum} \frac{1+\left(-1\right)^{n+m}}{n! 
m!} F^{n,m}_{p,D}\left(\left|\frac{q-r}{2}\right|\right) G^{\left(D\right)}_{m-n,2p+n+m}\left(u,v\right),\n\end{align}\n\nwhere $F_{p,D}^{n,m}\left(t\right)$ is a polynomial in $t$. For the rest of the main text we set $D=2$.\n\nThe relation between the $2D$ and $1D$ conformal blocks, $G_{J,\Delta}\sim k_{\frac{\Delta+J}{2}}k_{\frac{\Delta-J}{2}}$, allows us to use the results found in \cite{Hogervorst_2017} to calculate $F_{p,2}^{n,m}\left(t\right)$ exactly. Both the result and the relation between conformal blocks are written in the Dolan-Osborn variables, so we rewrite the functions $u^{p}v^{q}$ as $ \left(z^{p}\left(1-z\right)^{q}\right)\left(\bar{z}^{p}\left(1-\bar{z}\right)^{q}\right)$ and use \cite{Hogervorst_2017} to obtain\n\n\begin{align} \label{eq:18}\n z^{p}\left(1-z\right)^{q} = \underset{n=0}{\overset{\infty}{\sum}} \dfrac{\left(-1\right)^{n} \left(p\right)^{2}_{n}}{n! \left(2p-1+n\right)_{n}} {}_3F_{2} \left(\left.\begin{array}{c}\n \begin{array}{ccc}\n -n, & 2p-1+n, & -q \\\n \end{array}{}\\ \begin{array}{cc}\n p, & p \\\n \end{array}{} \end{array}{} \right| 1\right) k_{p+n}\left(z\right).\n\end{align}\n\nThese hypergeometric functions have multiple properties we can exploit; to make them simpler to express we rewrite them as\n\n\begin{align} \label{eq:19}\n f_{p}^{n}\left(t\right) &= \dfrac{\left(p\right)^{2}_{n}}{\left(2p-1+n\right)_{n}} {}_3F_{2} \left(\left.\begin{array}{c}\n \begin{array}{ccc}\n -n, & 2p-1+n, & \frac{p}{2}-t \\\n \end{array}{}\\ \begin{array}{cc}\n p, & p \\\n \end{array}{} \end{array}{} \right| 1\right),\n\end{align}\n\nwhich yields\n\n\begin{align}\n z^{p}\left(1-z\right)^{q} &= \underset{n=0}{\overset{\infty}{\sum}} \dfrac{\left(-1\right)^{n} }{n!} f_{p}^{n}\left(\frac{p}{2}+q\right) k_{p+n}\left(z\right),\\ \n \label{eq:20} F_{p,2}^{n,m}\left(t\right) &= 
2^{m-n}f_{p}^{n}\left(t\right)f_{p}^{m}\left(t\right).\n\end{align}\n\nThe first main property of $f_{p}^{n}\left(t\right)$ is its parity, $f_{p}^{n}\left(-t\right)=\left(-1\right)^{n}f_{p}^{n}\left(t\right)$, so we only need to analyze the region $t\geq0$. Second, $f$ is a continuous Hahn polynomial, meaning it has a simple recurrence relation\n\n\begin{align} \label{eq:22}\n f_{p}^{n+1}\left(t\right)=t f_{p}^{n}\left(t\right)+\dfrac{n\left(n+p-1\right)^{2} \left(n+2p-2\right)}{4\left(2n+2p-1\right)\left(2n+2p-3\right)} f_{p}^{n-1}\left(t\right).\n\end{align}\n\nUsing induction together with the initial conditions $f_{p}^{0}\left(t\right)=1$ and $f_{p}^{1}\left(t\right)=t$, for $p\geq0$ and $t\geq0$ we prove that $f_{p}^{n}\left(t\right)\geq0$. The only step left is to prove $\left(\ref{eq:16}\right)$ for $D=2$ and $a+b+c=2\Delta_{\phi}$\n\n\begin{align} \label{eq:23}\n & \nonumber \frac{u^{p}\left(v^{q}+v^{r}\right)}{v^{\frac{p+q+r}{2}}} = \underset{m\geq n}{\sum} \frac{\left(-1\right)^{n+m}}{n! m!} \left( F_{p,2}^{n,m}\left(\frac{q-r}{2}\right)+F_{p,2}^{n,m}\left(\frac{r-q}{2}\right) \right) G^{\left(2\right)}_{m-n,2p+n+m}\left(u,v\right) \\\n & \nonumber = \underset{m\geq n}{\sum} \frac{\left(-1\right)^{n+m}}{n! m!} \left( F_{p,2}^{n,m}\left(\left|\frac{q-r}{2}\right|\right)+F_{p,2}^{n,m}\left(-\left|\frac{q-r}{2}\right|\right) \right) G^{\left(2\right)}_{m-n,2p+n+m}\left(u,v\right) \\\n & = \underset{m\geq n}{\sum} \frac{\left(-1\right)^{n+m}}{n! 
m!}\\left(1+\\left(-1\\right)^{n+m}\\right) F_{p,2}^{n,m}\\left(\\left|\\frac{q-r}{2}\\right|\\right) G^{\\left(2\\right)}_{m-n,2p+n+m}\\left(u,v\\right).\n\\end{align}\n\nAlthough all these OPE coefficients are positive they are not sufficient to guarantee unitarity.\n\n\n\\subsection{Unitary generalized polynomials and the scalar gap}\n\nFor a 4-point function of identical real scalars we expect the identity to appear in the OPE, but according to $\\left(\\ref{eq:23}\\right)$ this is not the case when all parameters in $\\left(a,b,c\\right)$ are non-zero. Since we've only assumed identical scalars we don't know whether $\\phi$ is null or it is charged, in any case, we cannot call $\\mathcal{G}$ unitary. Still it is possible for any $\\left(a,b,c\\right)$ to generate an unitary 4-point function. Because of the positivity of $\\left(\\ref{eq:23}\\right)$ the sum $\\mathcal{G}_{\\mathbf{1}}+\\left(a,b,c\\right)$ will be unitary if $\\mathcal{G}_{\\mathbf{1}}$ is unitary. For us $\\mathcal{G}_{\\mathbf{1}}$ will be just another generalized polynomial. Bellow we plot the scalar gap of every possible unitary 4-point function we can guarantee exists from $\\left(\\ref{eq:23}\\right)$.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.5]{g2-.png}\n \\caption{The blue curve shows an approximation to the bound given by the numerical bootstrap, the other curves and regions are the places we can find unitary solutions using only $\\left(\\ref{eq:23}\\right)$.}\n \\label{fig:01}\n\\end{figure}\n\nThere are two cases that stand out in the plot, these correspond to the generalized free boson, $\\left(0,\\Delta_{\\phi},\\Delta_{\\phi}\\right)$, as the black line and the cosine operators in the free boson, $\\left(0,0,2\\Delta_{\\phi}\\right)$, as the red curve.\n\nUntil now all we've analyzed is equivalent to a basis with a single element $\\left\\{\\left(a,b,c\\right)\\right\\}$. 
For a general basis we cannot find a complete set of solutions like we did previously; however, we can still focus on searching for interesting 4-point functions. In our case, functions with a large scalar gap.\n\nNon-trivial solutions, those containing negative basis coefficients, form the majority of the interesting 4-point functions, yet most bases cannot create them. This is the result of two properties of $\left(\ref{eq:23}\right)$: first, all OPE coefficients are positive, and second, the operators in the OPE have twist $\tau=2p+2n$. The first means that whenever a basis coefficient becomes negative there are up to 3 potential series of only negative OPE coefficients. The second severely limits which kinds of terms could be used to counteract the first.\n\nChoosing a basis is, then, an extremely important first step when trying to analyze any property of a CFT. Once one is chosen we need to know which tools we have to say with confidence whether an expression is unitary or not. If the number of elements is small we can still use analytical methods to prove unitarity, but for larger bases we will usually rely on the asymptotic expansion of the $f_{p}^{n}\left(t\right)$ polynomials. To test this method we start by analyzing small scaling dimensions. Since we didn't need a large basis to get close to the boundary we used analytical methods to prove their existence.\n\n\begin{figure}[h]\n \centering\n \includegraphics[scale=0.5]{g3-.png}\n \caption{Regions where we could prove the existence of unitary solutions analytically.}\n \label{fig:02}\n\end{figure}\n\n\section{Searching for large gaps} \label{sec:5}\n\nDespite having exact expressions for the OPE coefficients, there is still a small but important hassle in $\left(\ref{eq:19}\right)$: the function $f_{0}^{n}\left(t\right)$ is not well defined. 
For $n\\geq2$ the following redefinition is enough\n\n\\begin{align}\n f_{p}^{n}\\left(t\\right) &= \\dfrac{\\left(\\Gamma\\left(p+n\\right)\\right)^{2}}{\\left(2p-1+n\\right)_{n}} {}_3 \\overset{\\sim}{F}_{2} \\left(\\left.\\begin{array}{c}\n \\begin{array}{ccc}\n -n, & 2p-1+n, & \\frac{p}{2}-t \\\\\n \\end{array}{}\\\\ \\begin{array}{cc}\n p, & p \\\\\n \\end{array}{} \\end{array}{} \\right| 1\\right).\n\\end{align}\n\nBy analyzing $\\left(1-z\\right)^{t}=1-t~z+\\mathcal{O}\\left(z^{2}\\right)$ it becomes clear that $f_{0}^{0}\\left(t\\right)=1$ and $f_{0}^{1}\\left(t\\right)=t$. With every value of $f_{p}^{n}\\left(t\\right)$ defined it will be useful to think of $\\frac{f_{p}^{n}}{n!}\\equiv0$ for $n<0$. As was said, the operators in the OPE for a generalized polynomials have twist $\\tau=2p+2n$, this means that these series overlap only when the parameter differ by an integer. We will represent this overlapping series by $\\mathcal{E}_{\\left[p\\right]}$, $\\left[p\\right]=p-\\lfloor p \\rfloor$\n\n\\begin{align}\n & \\mathcal{G} = \\underset{\\left[p\\right]}{\\sum} \\underset{m\\geq n}{\\sum} \\left(1+\\left(-1\\right)^{n+m}\\right)\\mathcal{E}_{\\left[p\\right]}^{n,m} G^{\\left(2\\right)}_{m-n,2\\left[p\\right]+n+m}, \\\\ \n \\label{eq:28} & \\mathcal{E}_{\\left[p\\right]}^{n,m} = \\underset{i}{\\sum} 2^{m-n}E_{i}\\frac{ f_{\\left[p\\right]+\\lfloor p_{i}\\rfloor}^{n-\\lfloor p_{i}\\rfloor}\\left(t_{i}\\right)}{\\left(n-\\lfloor p_{i}\\rfloor\\right)!} \\frac{f_{\\left[p\\right]+\\lfloor p_{i}\\rfloor}^{m-\\lfloor p_{i}\\rfloor}\\left(t_{i}\\right) }{\\left(m-\\lfloor p_{i}\\rfloor\\right)!}.\n\\end{align}\n\nNo matter which property we try to explore using this method the code should always search for a set of coefficients $E_{i}$ that makes every $\\mathcal{E}_{\\left[p\\right]}^{n,m}$ non-negative. To deal with the infinite number of inequalities we can use the asymptotic form of the $f$ polynomials. 
By making sure that the asymptotic expression is positive and that $\mathcal{E}_{\left[p\right]}^{n,m}\geq0$ for sufficiently large $n$ and $m$, we would know with confidence whether the expression is unitary.\n\nFor our search for large scalar gaps this can be coded by introducing two types of linear constraints, the positivity plus normalization, $\mathcal{E}_{0}^{0,0}=1$ and $\mathcal{E}_{\left[p\right]}^{n,n+J}\geq0$ for $n\leq n_{min}$ and $J\leq J_{min}$, and the gap condition, $\mathcal{E}_{\left[p\right]}^{n,n}=0$ if $0<2\left[p\right]+2n<\Delta_{1}$. Any set of $E_{i}$ that satisfies these conditions could be further tested for $\mathcal{E}_{\left[p\right]}^{n,n+J}\geq0$ when $n\leq n_{max}$ and $J\leq J_{max}$.\n\nFor the green points in the plots below we've used the basis\n $\mathcal{B}=\left\{\left(a,b,c\right) \middle| a,b,c\in \mathbb{Z}_{\geq0} \right\}$. In this special basis the values of $n_{max}$ and $J_{max}$ can be calculated using some analytical results from the continuous Hahn polynomials. This way we've proved that there are no negative OPE coefficients. 
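To make this concrete, the building block of such a search can be sketched in a few lines. The snippet below is purely illustrative and is not the actual program (the function names, the floating-point arithmetic, and the restriction to $p\geq1$ are our own simplifications): it evaluates $f_{p}^{n}\left(t\right)$ through the recurrence $\left(\ref{eq:22}\right)$ and assembles the OPE coefficients of a single element via $\left(\ref{eq:20}\right)$ and $\left(\ref{eq:23}\right)$, so that their signs can be scanned up to a cutoff in $n$ and $m$.

```python
# Illustrative sketch (not the actual program): evaluate f_p^n(t) from the
# recurrence (eq:22) and scan the OPE coefficients of a single element for
# negative entries. Function names and the p >= 1 restriction are ours.
from math import factorial

def f_poly(p, n, t):
    # f_p^0 = 1, f_p^1 = t, and
    # f_p^{n+1} = t f_p^n + n (n+p-1)^2 (n+2p-2) / (4 (2n+2p-1)(2n+2p-3)) f_p^{n-1};
    # for p >= 1 the recurrence coefficients are positive.
    if n == 0:
        return 1.0
    prev, cur = 1.0, t
    for k in range(1, n):
        coef = k * (k + p - 1) ** 2 * (k + 2 * p - 2) / (
            4.0 * (2 * k + 2 * p - 1) * (2 * k + 2 * p - 3))
        prev, cur = cur, t * cur + coef * prev
    return cur

def ope_coeff(p, t, n, m):
    # Coefficient of G_{m-n, 2p+n+m} in u^p (v^q + v^r) / v^{Delta_phi},
    # with t = |q - r| / 2; it vanishes whenever n + m is odd.
    parity = 1 + (-1) ** (n + m)
    return parity * 2 ** (m - n) * f_poly(p, n, t) * f_poly(p, m, t) / (
        factorial(n) * factorial(m))

# For t >= 0 every coefficient is non-negative, as proved above.
assert all(ope_coeff(1.0, 0.75, n, m) >= 0
           for n in range(8) for m in range(n, 8))
```

In the actual search the normalization $\mathcal{E}_{0}^{0,0}=1$ and the gap condition are imposed as linear constraints on the coefficients $E_{i}$; the sketch above only covers the positivity scan for a single element.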
This is not the biggest possible gap; as is shown in the second plot below, when adding half-integers to the basis the gap increases.\n\n\begin{figure}[h]\n\n\begin{minipage}[b]{0.48\linewidth}\n\n \centering\n \includegraphics[scale=0.5]{plot1----.png}\n \caption{The green dots are new unitary solutions compared to the last plot; the light blue are possibly unitary (we tested until twist and spin 200); the black are points we cannot reach with our current basis. The black line is the generalized free boson and the dashed line is $\frac{8\Delta}{3}$.}\n \label{fig:03}\n\n\end{minipage} \n\hfill\n\begin{minipage}[b]{0.48\linewidth}\n\n \centering\n \includegraphics[scale=0.5]{plot2--.png}\n \caption{By introducing half-integer parameters to the basis we were able to reach the red points; the black ones are points we cannot reach with this new basis.}\n \label{fig:04}\n\n\end{minipage} \n\n\end{figure}\n\nIn AdS\/CFT the large radius limit is a way to recover the flat space-time physics, \cite{Paulos_2017}. In this limit theories like the generalized free boson are interpreted as free particles with their masses proportional to the scaling dimension $\Delta_{\phi}$. The scalar gap of $2\Delta_{\phi}$ can then be interpreted as the 2-particle bound state having twice the particle mass. Figure \ref{fig:03} shows that for large $\Delta_{\phi}$ there are theories with scalar gap close to $\frac{8\Delta_{\phi}}{3}$. Any theories with the same type of interpretation in this region would have $\frac{8m}{3}$ as the bound state mass of particles of mass $m$. This behavior is not restricted to integer scaling dimensions; with a less robust program it was possible to find candidate solutions with non-integer scaling dimensions connecting these points.\n\n\section{Conclusion}\n\nIn this work we presented a method to construct simple unitary 4-point functions. 
Unlike the dual formalism from \cite{Rattazzi_2008}, this is completely analytical; as such, we are not able to construct every possible 4-point function. We've analyzed the generalized polynomials trying to maximize the scalar gap.\n\nWe used the properties of the hypergeometric polynomials $f_{p}^{n}\left(t\right)$ to study their OPE decomposition and to create a computational method to search for unitary solutions. The main results are given by figures \ref{fig:02} and \ref{fig:03}. Figure \ref{fig:02} shows that despite its simple form the generalized polynomials seem to cover a significant portion of the allowed physical space. Figure \ref{fig:03} shows that even for relatively large scaling dimensions it is still possible to find unitary 4-point functions with a significantly larger scalar gap than the generalized free boson. In the large radius limit of AdS\/CFT this can be related to the mass of the 2-particle bound state, that is, their bound state would have a mass significantly larger than twice the particle mass.\n\nThe results from the program, which can be found at \href{https:\/\/github.com\/rgnato\/Generalized_polynomials}{GitHub}, are valid only for $D=2$; however, in appendix \ref{ap:B} we used the results of \cite{Hogervorst_2016,Kaviraj_19} to find recursion relations in $D$ for the $F_{p,D}^{n,m}\left(t\right)$ polynomials. Most of the code could be re-purposed to analyze higher dimensional CFTs or other observables, but we may need another way to prove, or at least be confident about, unitarity.\n\n\section*{Acknowledgements}\n\nI would like to thank my advisor Pedro Vieira for helping me during this project, for the discussions and for the ideas exchanged. I also thank Pedro, Alexandre Homrich and Matheus Fabri for helping check this work. I would also like to thank CNPq for the partial financial support through grant No. 132286\/2018-1 during my stay at the IFT-UNESP\/ICTP-SAIFR. Part of this work was carried out during my stay at PI. 
The work was supported in part by the Natural Sciences and Engineering Research Council of Canada. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper we consider the problem of finding an invariant Riemannian metric for a mapping preserving a given volume form on a Riemannian manifold. The setup of the problem is the following.\n\nLet $M$ be a smooth, closed and oriented finite dimensional manifold. Then the set of all Riemannian metrics $\\mathcal{M}$ can be considered as an infinite dimensional manifold. Its tangent vectors can be given an $L^2$ inner product~\\cite{Ebin, Clarke} and thus computing the curvature of $\\mathcal{M}$ makes sense. The sectional curvature of $\\mathcal{M}$ is nonpositive, but $\\mathcal{M}$ is not geodesically complete~\\cite{Freed, Clarke}.\n\nInstead of $\\mathcal{M}$ we consider a submanifold $\\mathcal{M}_\\mu$ of $\\mathcal{M}$ consisting of Riemannian metrics which all have the same given volume form $\\mu$. The submanifold $\\mathcal{M}_\\mu$ is an infinite dimensional globally symmetric space. The exponential mapping is a diffeomorphism, it has nonpositive curvature and any two points can be joined by a unique geodesic. For details of these statements, see~\\cite{Ebin, Freed, Clarke}.\n\nDiffeomorphisms preserving a given volume form are called volumorphisms. They act naturally by pullback on $\\mathcal{M}_\\mu$ and the action is isometric. Let $g$ be some Riemannian metric on $M$ whose induced volume form is $\\mu$ and $\\phi$ a volumorphism of $M$. Tangent spheres defined by $g$ are mapped to tangent ellipsoids by the pushforward of $\\phi$. 
The sphere and its image under the pushforward of $\phi$, the ellipsoid, have the same volume due to the fact that $\phi$ is a volumorphism, but we have no control over how much the sphere is distorted.\n\nWe ask the following question. If the tangent spheres are boundedly distorted under the iterations of the pushforward of $\phi$, is there another Riemannian metric, say $h$, such that tangent spheres defined by $h$ are mapped to spheres? Since in this case the mapping $\phi$ considered as a mapping $(M,h)\to (M,h)$ would be an isometry, the Riemannian metric $h$ is in this sense an optimal Riemannian metric for $\phi$. A similar idea has been used in the study of quasiconformal mappings in~\cite{Tukia, Iwaniec}.\n\nWe formalise the idea explained above and ask: ``If the action of a volumorphism on $\mathcal{M}_\mu$ has a bounded orbit, is there a fixed point of this action?''. We consider the space of Riemannian metrics of Sobolev class $H^s$ and assume that the mapping $\phi$ is of Sobolev class $H^{s+1}$, $s>n\/2$. We show that the answer to the question above is affirmative if we allow the possibility that the fixed point is of lower regularity than $H^s$. \n\nA fixed point of the action of $\phi$ on $\mathcal{M}_\mu^s$, if it exists, will belong to a metric space $(X,\d)$ of $\mu$-a.e. positive definite symmetric $(0,2)$-tensor fields with volume form agreeing with $\mu$ a.e. The elements of $X$ are also assumed to satisfy a certain natural integrability condition. As a metric space, $(X,\d)$ is a complete global Alexandrov nonpositive curvature space containing $(\mathcal{M}_\mu^s,d)$ as an isometrically embedded subset. Here $d$ is the distance metric induced by the weak Riemannian metric on $\mathcal{M}_\mu^s$. The space $(X,\d)$ is defined in Theorem~\ref{X}, and Theorem~\ref{vol_fp} is the fixed point theorem for the action of volumorphisms on $\mathcal{M}_\mu^s$. 
We expect $(X,\\d)$ to be the metric completion of $(\\mathcal{M}_\\mu^s,d)$.\n\nTo find a fixed point for the action of a volumorphism on $\\mathcal{M}_\\mu$, we generalize a mean ergodic theorem and a fixed point theorem to suit our nonlinear setting. These are Theorems~\\ref{main_thm} and~\\ref{fp_thm}. Mean ergodic theorems consider the convergence of averages of the iterates of the points under the action. In a nonlinear setting there is no obvious notion of average, but on nonpositively curved spaces, such as $\\mathcal{M}_\\mu^s$, there is a natural generalization of averages~\\cite{Jost, Jostb, Karcher}. Averages are also called means or centers of mass in the literature.\n\nThe class of metric spaces where we formulate our mean ergodic theorem and fixed point theorem is that of complete global Alexandrov nonpositive curvature spaces, which are also known as (complete) $\\mbox{CAT}(0)$ spaces. Alexandrov nonpositive curvature spaces have been studied in a setting similar to ours in~\\cite{Jost, Jostb}. In both of the above-mentioned theorems, we assume that the mapping considered is nonexpansive. The mean ergodic theorem additionally assumes that the means of the iterates of the mapping satisfy a certain convexity property.\n\n\\section{Mean ergodic theorem and fixed point theorem in a global Alexandrov nonpositive curvature space}\nIn this section, we formulate and prove a mean ergodic theorem and a fixed point theorem in a nonlinear setting. First we give a brief review of the class of global Alexandrov nonpositive curvature (NPC) spaces we are working on. For details and examples of Alexandrov NPC spaces we refer to~\\cite{Jost,Jostb}.\n\nA metric space $(\\mathcal{N},d)$ is said to be a geodesic length space if for any two points $p,q\\in N$ there exists a rectifiable curve $\\gamma:[0,1]\\to \\mathcal{N}$ with $\\gamma(0)=p$ and $\\gamma(1)=q$ and length equal to $d(p,q)$. 
Such a curve is called a \\emph{geodesic}.\n\n\\begin{definition}\n A geodesic length space $(\\mathcal{N},d)$ is said to be a global Alexandrov nonpositive curvature (NPC) space if for any three points $p,q,r$ of $\\mathcal{N}$ and any geodesic $\\gamma:[0,1]\\to \\mathcal{N}$ with $\\gamma(0)=p$ and $\\gamma(1)=r$, we have for $0\\leq t\\leq 1$\n\\begin{equation*}\n d^2(q,\\gamma(t))\\leq (1-t) d^2(q,\\gamma(0))+td^2(q,\\gamma(1))-t(1-t)l(\\gamma)^2.\n\\end{equation*}\nHere $l(\\gamma)$ is the length of the geodesic $\\gamma$. \n\\end{definition}\nThe inequality above is called the Alexandrov NPC inequality. We remark that global Alexandrov NPC spaces are simply connected and that for any two given points there is a unique geodesic connecting the points. See Lemma 2.2.1 of~\\cite{Jostb} and the discussion that follows the lemma.\n\nWe will also need the concept of convex sets in geodesic length spaces. A subset of a geodesic length space is \\emph{convex} if any two points of the subset can be joined by a geodesic whose image is contained in that subset. The convex hull $co(S)$ of a subset $S$ of a geodesic length space is the smallest convex subset of $\\mathcal{N}$ containing $S$. \n\nThe convex hull of an arbitrary subset of a geodesic length space need not exist, but global Alexandrov NPC spaces have the property that the convex hull of any set exists~\\cite[Lemma 3.3.1]{Jostb}. By that same lemma, we can express the convex hull of a subset $S\\subset \\mathcal{N}$ as follows. Set $C_0=S$ and define $C_k$ to be the union of all geodesic arcs between points of $C_{k-1}$. We have\n\\begin{equation}\\label{cohull}\n co(S)=\\bigcup_{k=0}^\\infty C_k.\n\\end{equation}\n\nWe record for future reference that the diameter of a set $S$ and its convex hull $co(S)$ are the same. It holds trivially that $\\mbox{diam}(co(S))\\geq \\mbox{diam}(S)$. Let $\\epsilon>0$ and choose $p,q\\in co(S)$ such that $d(p,q)+\\epsilon=\\mbox{diam}(co(S))$. 
The Alexandrov NPC inequality implies that\n\\begin{equation}\n d^2(q,\\gamma(t))\\leq \\max\\{d^2(q,\\gamma(0)),d^2(q,\\gamma(1))\\}\n\\end{equation}\nfor any $q\\in \\mathcal{N}$, $\\gamma:[0,1]\\to\\mathcal{N}$ a geodesic and $t\\in [0,1]$. By this inequality, it can be seen from the definition of the sets $C_k$ that \n\\begin{equation*}\n d^2(p,q)\\leq \\max_{i\\in I}\\{d^2(p_i,q_i)\\},\n\\end{equation*}\nwhere $I$ is a finite index set and $p_i,q_i\\in C_0=S$. It follows that $d(p,q)$ is bounded from above by $\\mbox{diam}(S)$. Thus we also have $\\mbox{diam}(S)\\geq \\mbox{diam}(co(S))$. \n\nMean ergodic theorems, in general, are convergence theorems for means of iterates of points under a given action to a limit invariant under the action. We need the concept of mean to proceed. The mean in a vector space is just the arithmetic average of the vectors, but the concept of mean generalizes to many other spaces as follows.\n\n\\begin{definition}[Mean]\\label{mean_map}\n Let $S=\\{S_0,S_1,\\ldots, S_{n-1}\\}$ be a finite subset of a metric space $\\mathcal{N}$ with a metric $d$. The \\emph{mean function} of $S$ is the function $F_S$ on $\\mathcal{N}$ given by\n\\begin{equation*}\n F_S(p)=\\frac{1}{n}\\sum_{i=0}^{n-1}d^2(p,S_i).\n\\end{equation*}\nIf there exists a unique minimizer of $F_S$, then we denote \n\\begin{equation*}\nm(S)=\\mbox{ the unique minimizer of } F_S\n\\end{equation*}\nand call $m(S)$ the \\emph{mean} of $S$.\n\\end{definition}\nIn a complete global Alexandrov NPC space the unique minimizer for $F_S$ exists for all finite subsets $S$ of $\\mathcal{N}$ and belongs to the closure $\\overline{co}(S)$ of $co(S)$; see Theorem 3.2.1 and Lemma 3.3.4 of~\\cite{Jostb}. Thus, in this case, we can also consider the mean as a mapping $S\\to \\overline{co}(S)$.\n\nWe are mainly interested in the means of iterations of points by a given mapping $T$. 
In this case, we denote the mean function of $n$ iterates of $p\\in \\mathcal{N}$ by\n\\begin{equation}\\label{F_n}\n F_n(r,p)=\\frac{1}{n}\\sum_{i=0}^{n-1}d^2(r,T^ip)\n\\end{equation}\nand the mean of $n$ iterates of $p$ is denoted by\n\\begin{equation*}\n m_n(p)=\\mbox{ the unique minimizer of } F_n(\\cdot,p).\n\\end{equation*}\n\n\nAn important class of mappings on $(\\mathcal{N},d)$ we are going to consider is that of nonexpansive mappings. A mapping $T$ from a metric space $(\\mathcal{N},d)$ to itself is called \\emph{nonexpansive} if\n\\begin{equation*}\n d(Tp,Tq)\\leq d(p,q)\n\\end{equation*}\nfor all $p,q\\in \\mathcal{N}$. Thus, the class of nonexpansive mappings contains not only contractions, but also isometries.\n\nConvexity plays a crucial role in the formulation and proof of our main theorem. A function $F:\\mathcal{N}\\to \\mathbb{R}$ is said to be convex if for every geodesic $\\gamma:[0,1]\\to \\mathcal{N}$ the function $F\\circ \\gamma:[0,1]\\to\\mathbb{R}$ is convex. We say that a mapping $T:\\mathcal{N}\\to \\mathcal{N}$ is \\emph{distance convex} if for all $n \\in \\mathbb{N}$ and $q\\in \\mathcal{N}$ the mapping \n\\begin{equation}\\label{T_convex}\n d^2(m_n(\\cdot),q):\\mathcal{N}\\to \\mathbb{R}^+\n\\end{equation}\nis convex. In a normed vector space, any linear mapping is distance convex, yet this definition seems to be new. However, the proof of our main theorem naturally employs the definition, which suggests that the class of distance convex mappings might be of further interest.\n\nWe are now ready to state our main theorem.\n\\begin{thm}[Mean Ergodic Theorem]\\label{main_thm} Let $(\\mathcal{N},d)$ be a complete global Alexandrov NPC space and $T:\\mathcal{N}\\to\\mathcal{N}$ a nonexpansive distance convex mapping. 
Then, for any $p\\in \\mathcal{N}$ whose orbit is bounded, and any $q\\in \\mathcal{N}$, the following are equivalent:\n\\begin{align*}\n (i) \\qquad &Tq=q \\mbox{ and } q\\in \\overline{co}\\{p,Tp,T^2p,\\ldots\\}, \\\\\n (ii) \\qquad &q=\\lim_{n}m_n(p), \\\\\n (iii) \\qquad &q=\\mbox{w-}\\lim_n m_n(p), \\\\\n (iv) \\qquad &q \\mbox{ is a weak cluster point of the sequence } (m_n(p)).\n\\end{align*}\n\\end{thm}\n\nHere $\\mbox{w-}\\lim_n m_n(p)$ refers to \\emph{weak convergence} defined in terms of projections as follows. For any $p\\in \\mathcal{N}$ and any geodesic arc $\\gamma$ in $\\mathcal{N}$, there exists a unique point $\\pi(p,\\gamma)$ on $\\gamma$ that is closest to $p$. We call $\\pi(p,\\gamma)$ the \\emph{projection} of $p$ onto $\\gamma$. A point $q\\in \\mathcal{N}$ is the weak limit of a sequence $(p_n)\\subset \\mathcal{N}$ if for every geodesic arc through $q$ the sequence $\\pi(p_n,\\gamma)$ converges to $q$. Similarly, a point $q\\in \\mathcal{N}$ is a weak cluster point of a sequence $(p_n)\\subset \\mathcal{N}$ if, for every neighborhood $U$ of $q$, there are infinitely many natural numbers $n\\in \\mathbb{N}$ such that $\\pi(p_n,\\gamma)\\in U$ for every geodesic arc $\\gamma$ through $q$. See Definitions 2.5 and 2.7 of~\\cite{Jost} for details on weak convergence. \n\nWithout the assumption that the mapping is distance convex, we still get an interesting weaker version of the theorem.\n\n\\begin{thm}[Fixed point theorem]\\label{fp_thm} \nLet $(\\mathcal{N},d)$ be a complete global Alexandrov NPC space and $T:\\mathcal{N}\\to\\mathcal{N}$ a nonexpansive mapping. Then, for any $p\\in \\mathcal{N}$ whose orbit is bounded there exists a fixed point $q$ of $T$ in the subset $\\overline{co}\\{p,Tp,T^2p,\\ldots\\}$ of $\\mathcal{N}$. \n\\end{thm}\n\nThe proof of our main theorem is quite lengthy due to the nonstandard framework we are working in. 
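Before turning to the proofs, a hedged Euclidean illustration of the theorem may help (everything below is an invented toy example, not from the paper): a rotation of the plane is an isometry, hence nonexpansive, and being linear it is distance convex; every orbit is bounded, and the means $m_n(p)$, which in Euclidean space are simply the arithmetic averages of the iterates, converge to the fixed point $0$, in accordance with part (ii):

```python
import math

# Hedged toy illustration: a plane rotation T is an isometry (so nonexpansive)
# and linear (so distance convex); its only fixed point is the origin, and the
# means of the iterates converge to it.
def T(p, theta=1.0):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def m_n(p, n):
    # in Euclidean space the minimizer of F_n(., p) is the arithmetic average
    # of the first n iterates p, Tp, ..., T^{n-1}p
    xs, q = [], p
    for _ in range(n):
        xs.append(q)
        q = T(q)
    return (sum(x for x, _ in xs) / n, sum(y for _, y in xs) / n)

q = m_n((1.0, 0.0), 5000)   # approaches the fixed point (0, 0)
```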
Once the mean ergodic theorem is proved, the fixed point theorem follows easily by using the same techniques. The outline of the proof of Theorem~\\ref{main_thm} follows Krengel's proof of a mean ergodic theorem for Ces\\'aro bounded operators in Banach spaces~\\cite[Theorem 1.1. p.72]{Krengel}. \n\nBefore the proofs of the theorems, we give several auxiliary results to clarify the proofs. The first two statements are general convexity results in global Alexandrov NPC spaces. The statements that follow consider the behavior of minimizers of sequences of convex functions. Then, the results achieved so far are applied to study the behavior of means of iterates of distance convex nonexpansive mappings. The first of the last two auxiliary results shows that projections to convex sets are continuous. The last auxiliary result gives a sufficient condition for the existence of a fixed point of a nonexpansive mapping. \n\nWe begin with a definition.\n\n\\begin{definition}[Uniform convexity]\nA nonnegative lower semicontinuous function $\\psi:\\mathcal{N}\\to\\mathbb{R}^+$ on a geodesic length space $(\\mathcal{N},d)$ is said to be \\emph{uniformly convex} if the following quantitative strict convexity condition holds:\n\nFor any geodesic $\\gamma:[0,1]\\to \\mathcal{N}$ and $\\epsilon>0$ there exists $\\d>0$ such that if\n\\begin{equation*}\n \\psi(\\gamma(1\/2))\\geq \\frac{1}{2}\\psi(\\gamma(0))+\\frac{1}{2}\\psi(\\gamma(1))-\\d\n\\end{equation*}\nthen\n\\begin{equation*}\n d(\\gamma(0),\\gamma(1))<\\epsilon.\n\\end{equation*}\n\nA family $\\mathcal{F}$ of nonnegative lower semicontinuous functions $\\mathcal{N}\\to\\mathbb{R}^+$ is said to be \\emph{equiconvex} if there is a positive number $\\d(\\epsilon)$ such that the above holds for all $F\\in\\mathcal{F}$ for any $\\d$ smaller than $\\d(\\epsilon)$. 
In this case, we call $\\d(\\epsilon)$ the \\emph{modulus of convexity} of $\\mathcal{F}$.\n\\end{definition}\n \nWe have the following.\n\\begin{lemma}\\label{d2_uni_c}\n Let $(\\mathcal{N},d)$ be a global Alexandrov NPC space. The family of functions $\\mathcal{F}=\\{d^2(\\cdot,p):\\mathcal{N}\\to \\mathbb{R}^+ : \\ p\\in \\mathcal{N}\\}$ is equiconvex with $\\d(\\epsilon)=\\epsilon^2\/4$.\n\\end{lemma}\n\\begin{proof}\nLet $p\\in\\mathcal{N}$, $\\epsilon>0$ and $\\gamma:[0,1]\\to \\mathcal{N}$ be a geodesic. Let $\\d<\\epsilon^2\/4$ and assume that the inequality\n\\begin{equation}\\label{str_con}\n d^2(\\gamma(1\/2),p)\\geq \\frac{1}{2} d^2(\\gamma(0),p)+\\frac{1}{2} d^2(\\gamma(1),p)-\\d\n\\end{equation}\nholds. In a global Alexandrov NPC space the distance between two points is given by the length of the unique geodesic joining the points. Thus the NPC inequality reads \n\\begin{equation*}\n \\frac{1}{4}d^2(\\gamma(0),\\gamma(1))\\leq -d^2(\\gamma(1\/2),p)+ \\frac{1}{2} d^2(\\gamma(0),p)+\\frac{1}{2} d^2(\\gamma(1),p).\n\\end{equation*}\nTogether with~\\eqref{str_con} we have\n\\begin{equation*}\n \\frac{1}{4}d^2(\\gamma(0),\\gamma(1))\\leq \\d<\\epsilon^2\/4.\n\\end{equation*}\nHence $d(\\gamma(0),\\gamma(1))<\\epsilon$.\n\\end{proof}\n\n\\begin{cor}\\label{set_mean_str_conv}\nLet $(\\mathcal{N},d)$ be a global Alexandrov NPC space. The family $\\mathcal{F}=\\{F_S: S \\mbox{ a finite subset of } \\mathcal{N} \\}$ of mean functions $F_S$ is equiconvex with $\\d(\\epsilon)=\\epsilon^2\/4$.\n\\end{cor}\n\\begin{proof}\n Let $S=\\{S_0,\\ldots,S_{n-1}\\}$ be a finite set, let $\\epsilon>0$ and $\\gamma:[0,1]\\to \\mathcal{N}$ be a geodesic. Let $\\d<\\epsilon^2\/4$ and assume that the inequality \n\\begin{equation}\\label{F_str_con}\nF_S(\\gamma(1\/2))\\geq \\frac{1}{2} F_S(\\gamma(0))+\\frac{1}{2} F_S(\\gamma(1))-\\d\n\\end{equation}\nholds. 
If the inequality\n\\begin{equation*}\n d^2(\\gamma(1\/2),S_i)\\geq \\frac{1}{2} d^2(\\gamma(0),S_i)+\\frac{1}{2} d^2(\\gamma(1),S_i)-\\d\n\\end{equation*}\nis false for all $i=0,\\ldots, n-1$, then the inequality~\\eqref{F_str_con} is also false. Thus we have for some $0\\leq i_0\\leq n-1$,\n\\begin{equation*}\n d^2(\\gamma(1\/2),S_{i_0})\\geq \\frac{1}{2} d^2(\\gamma(0),S_{i_0})+\\frac{1}{2} d^2(\\gamma(1),S_{i_0})-\\d.\n\\end{equation*}\nHence we have\n\\begin{equation*}\n d(\\gamma(0),\\gamma(1))<\\epsilon\n\\end{equation*}\nby the previous lemma.\n\\end{proof}\n\nThe proof of our main theorem considers the asymptotic behavior of the mean functions of the iterates of points of a given mapping. In this case, we wish to analyze the asymptotic behavior of the minimizers of these mean functions. A criterion for the asymptotic minimizers to be close is given by the lemma below.\n\n\n\\begin{lemma}\\label{uni_strc_appr}\nLet $(\\mathcal{N},d)$ be a geodesic length space. Let $(F_n)$ and $(G_n)$ be two sequences of functions $\\mathcal{N}\\to \\mathbb{R}^+$, for which there exist unique minimizers, $(f_n)$ and $(g_n)$ respectively. Assume also that $(f_n)$ and $(g_n)$ belong to some subset $S$ of $\\mathcal{N}$ and that $(F_n)$ is equiconvex with modulus of convexity $\\d(\\epsilon)$.\n\nLet $\\epsilon>0$ and assume that there exists an $N\\in\\mathbb{N}$ such that the inequality\n\\begin{equation*}\n \\sup_{p\\in S}|F_n(p)-G_n(p)|<\\d(\\epsilon)\n\\end{equation*}\nholds for all $n\\geq N$. Then\n\\begin{equation*}\n d(f_n,g_n)<\\epsilon\n\\end{equation*}\nfor all $n\\geq N$.\n\\end{lemma}\n\n\\begin{proof}\nDenote $\\widetilde{F}_n=F_n-F_n(f_n)$ and $\\widetilde{G}_n=G_n-G_n(g_n)$. Now, the unique minima of $\\widetilde{F}_n$ and $\\widetilde{G}_n$ are zero and the sequence of functions $(\\widetilde{F}_n)$ is equiconvex with modulus of convexity $\\d(\\epsilon)$. \n\nLet $\\epsilon>0$. 
By assumption, there is an $N=N(\\epsilon)\\in\\mathbb{N}$ such that for all $n\\geq N$\n\\begin{equation*}\n \\sup_{p\\in S}|F_n(p)-G_n(p)|< \\d(\\epsilon).\n\\end{equation*}\nIf $G_n(g_n)-F_n(f_n)\\geq 0$, we have\n\\begin{align*}\n |G_n(g_n)-F_n(f_n)|&=G_n(g_n)-F_n(f_n)\\leq G_n(f_n)-F_n(f_n) \\\\\n &\\leq|G_n(f_n)-F_n(f_n)| < \\d(\\epsilon).\n\\end{align*}\nThis implies that\n\\begin{equation*}\n \\sup_{p\\in S}|\\widetilde{F}_n(p)-\\widetilde{G}_n(p)|\\leq \\sup_{p\\in S}|F_n(p)-G_n(p)|+ |G_n(g_n)-F_n(f_n)| < 2\\d(\\epsilon).\n\\end{equation*}\nIf $G_n(g_n)-F_n(f_n)\\leq 0$, an analogous proof shows that the same conclusion still holds.\n\nSince $\\widetilde{G}_n(g_n)=0$, we get the following inequality\n\\begin{equation*}\n 2\\d(\\epsilon)> \\sup_{p\\in S}|\\widetilde{F}_n(p)-\\widetilde{G}_n(p)| \\geq |\\widetilde{F}_n(g_n)-\\widetilde{G}_n(g_n)|=\\widetilde{F}_n(g_n).\n\\end{equation*}\nLet $\\gamma_n: [0,1]\\to \\mathcal{N}$ be a geodesic with $\\gamma_n(0)=f_n$ and $\\gamma_n(1)= g_n$. Then, at the midpoint of the geodesic, we have the following estimate \n\\begin{equation*}\n \\widetilde{F}_n(\\gamma_n(1\/2))+\\d(\\epsilon)\\geq \\d(\\epsilon) > \\frac{1}{2}(\\widetilde{F}_n(\\gamma_n(0))+\\widetilde{F}_n(\\gamma_n(1))).\n\\end{equation*}\nThe equiconvexity of the family $(\\widetilde{F}_n)$ implies\n\\begin{equation*}\nd(f_n,g_n) <\\epsilon\n\\end{equation*}\nfor all $n\\geq N$.\n\\end{proof}\n\n\\begin{lemma}\\label{Tdc_impl_close}\n Let $T:\\mathcal{N}\\to \\mathcal{N} $ be a distance convex nonexpansive mapping on an Alexandrov NPC space $(\\mathcal{N},d)$. Assume that $T$ has a bounded orbit $\\{p,Tp,T^2p,\\ldots\\}$ and that $s\\in co\\{p,Tp,T^2p,\\ldots\\}$. Then, for every $\\epsilon>0$, there is an $N\\in \\mathbb{N}$ such that the means of the first $n$ iterates of $p$ and $s$ satisfy \n\\begin{equation*}\n d(m_n(p),m_n(s))<\\epsilon\n\\end{equation*}\nfor all $n\\geq N$.\n\\end{lemma}\n\\begin{proof}\nDenote the orbit of $p$ by $S_p$. 
Assume that $s\\in co(S_p)$ and let $F_n:\\mathcal{N}\\times\\mathcal{N} \\to \\mathbb{R}$ be the mean function\n\\begin{equation*}\n F_n(r,q)=\\frac{1}{n}\\sum_{i=0}^{n-1}d^2(r,T^iq)\n\\end{equation*}\nof the first $n$ iterates of the second argument. The family $\\{F_n(\\cdot,s): n\\in \\mathbb{N}\\}$ is equiconvex with modulus of convexity $\\d(\\epsilon)=\\epsilon^2\/4$ by Corollary~\\ref{set_mean_str_conv}. \n\nSince $s$ belongs to the bounded set $co(S_p)$ and $T$ is nonexpansive, the orbit of $s$ is bounded: For $i\\in \\mathbb{N}$, we have \n\\begin{align}\\label{bded_orbits}\nd(s,T^is)&\\leq d(s,p)+d(p,T^ip)+ d(T^ip,T^is) \\nonumber \\\\\n &\\leq 2d(p,s)+d(p,T^ip)\\leq 3~\\mbox{diam}(co(S_p)).\n\\end{align}\nThus the unique minimizers of the functions $\\{F_n(\\cdot,s)\\}$ belong to the bounded set $\\overline{co}(S_s) \\subset \\mathcal{N}$ by the remarks following Definition~\\ref{mean_map}.\n\nRecall from Eq.~\\ref{cohull} that the convex hull of $S_p$ can be expressed as $\\cup_{k=0}^{\\infty}C_k$. We prove the claim of the lemma by induction on the index $k$. Assume $s\\in C_0=S_p$. Now $s=T^{i_0}p$ for some $i_0$ and we have the following estimate (for $n> i_0$):\n\\begin{align*}\n F_n(r,s)&=F_n(r,T^{i_0}p)=\\frac{1}{n}\\sum_{i=0}^{n-1}d^2(r,T^iT^{i_0}p)=\\frac{1}{n}\\sum_{i=0}^{n-1}d^2(r,T^ip) \\\\\n &-\\frac{1}{n}\\sum_{i=0}^{i_0-1}d^2(r,T^ip)+\\frac{1}{n}\\sum_{i=n}^{i_0+n-1}d^2(r,T^ip).\n\\end{align*}\nBy the remarks above, $\\overline{co}(S_s)$ is bounded and $\\overline{co}(S_p)$ is bounded by assumption. Thus we have that\n\\begin{equation*}\n \\sup_{r\\in \\overline{co}(S_s)\\cup\\overline{co}(S_p)}|F_n(r,s)-F_n(r,p)|< \\epsilon^2\/4\n\\end{equation*}\nfor all $n\\geq N$ for some sufficiently large $N=N(s)\\in \\mathbb{N}$. By Lemma~\\ref{uni_strc_appr}, there holds\n\\begin{equation*}\n d(m_n(s),m_n(p))<\\epsilon\n\\end{equation*}\nfor all $n\\geq N(s)$. 
We have shown that the claim holds for $k=0$ for any $s\\in C_0$.\n\nAssume then that the claim holds for some $k\\geq 0$ and that $s\\in C_{k+1}$. By the definition of $C_{k+1}$, the point $s$ is of the form $s=\\gamma(t_0)$, where $\\gamma:[0,1]\\to \\mathcal{N}$ is a geodesic with $\\gamma(0), \\gamma(1)\\in C_k$ and $t_0\\in [0,1]$. The induction assumption for $\\gamma(0)$ and $\\gamma(1)$ together with the assumption that $T$ is distance convex yields\n\\begin{align*}\n d^2(&m_n(s),m_n(p))\\leq t_0 d^2(m_n(\\gamma(0)),m_n(p)) \\\\\n &+(1-t_0) d^2(m_n(\\gamma(1)),m_n(p))<\\epsilon^2\n\\end{align*}\nfor all $n\\geq \\max\\{N(\\gamma(0)),N(\\gamma(1))\\}$, where $N(\\gamma(0))$ and $N(\\gamma(1))$ are defined by the induction assumption. This completes the induction step.\n\\end{proof}\n\nProjections to closed convex sets in a complete global Alexandrov NPC space exist by Lemma 2.5. of~\\cite{Jost}. The continuity of the projections is given by the following. \n\\begin{lemma}\n Let $(\\mathcal{N},d)$ be a complete global Alexandrov NPC space and $C\\subset \\mathcal{N}$ a closed convex set. Then the projection to $C$ is a continuous mapping.\n\\end{lemma}\n\\begin{proof}\nLet $\\epsilon>0$, and denote by $\\pi$ the projection to $C$. Let $p,q\\in \\mathcal{N}$ and let $\\gamma$ be a geodesic with $\\gamma(0)=\\pi(p)$ and $\\gamma(1)=\\pi(q)$. Since $C$ is convex, we have $\\gamma(1\/2)\\in C$. Thus, by the definition of $\\pi$ there holds\n\\begin{equation*}\n d(\\pi(p),p)\\leq d(\\gamma(1\/2),p) \\mbox{ and } d(\\pi(q),q)\\leq d(\\gamma(1\/2),q).\n\\end{equation*}\nWe also have, by the convexity of $d(\\cdot,p)$,\n\\begin{equation*}\n d(\\gamma(1\/2),p)\\leq \\max\\{d(\\pi(p),p),d(\\pi(q),p)\\}= d(\\pi(q),p),\n\\end{equation*}\nsince $d(\\pi(p),p)\\leq d(\\pi(q),p)$. 
The above inequalities together with the triangle inequality give\n\\begin{align*}\n d(\\gamma(1\/2),p)&\\leq d(\\pi(q),p)\\leq d(\\pi(q),q)+d(q,p) \\\\ \n &\\leq d(\\gamma(1\/2),q)+d(q,p)\\leq d(\\gamma(1\/2),p)+2d(q,p).\n\\end{align*}\nThus, for any $p$ close enough to $q$, we have $d^2(\\pi(q),p)\\leq d^2(\\gamma(1\/2),p)+\\epsilon^2\/4$ and therefore\n\\begin{equation*}\n \\frac{1}{2} d^2(\\pi(p),p)+ \\frac{1}{2} d^2(\\pi(q),p) \\leq d^2(\\gamma(1\/2),p)+\\epsilon^2\/4.\n\\end{equation*}\nThe NPC inequality gives\n\\begin{equation*}\n d(\\pi(p),\\pi(q))\\leq \\epsilon\n\\end{equation*}\nfor any $p$ close enough to $q$. Thus $\\pi$ is continuous.\n\\end{proof}\n\nThe last lemma before the proofs of the main theorems is the following one giving a sufficient condition for a nonexpansive mapping to have a fixed point.\n\\begin{lemma}\\label{cond_for_fpoint}\n Let $(\\mathcal{N},d)$ be a complete global Alexandrov NPC space and let $T:\\mathcal{N}\\to\\mathcal{N}$ be nonexpansive. If a sequence $p_n$ converges weakly to $q$ and $d(p_n,Tp_n)\\to 0$, $n\\to \\infty$, then $Tq=q$.\n\\end{lemma}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[scale=0.6]{.\/fig1.pdf}\n\\caption{\\label{jotai} The dashed line is a geodesic arc connecting $p_n$ and $c_n$. The geodesic arc connecting $Tp_n$ and $T(\\pi(p_n))$ has length at most $d(p_n,\\pi(p_n))$ and it will approach the dashed line as $n$ tends to infinity.}\n \\end{center}\n\\end{figure}\n\\begin{proof}\nThe claim follows from geometric considerations illustrated in Figure~\\ref{jotai}. Let $C$ be the maximal geodesic containing $q$ and $Tq$ and $\\pi:\\mathcal{N}\\to C$ the projection to this arc. Let us denote the point $\\pi (T(\\pi(p_n)))\\in C$ by $c_n$ and let $\\triangle_n$ be the geodesic triangle, whose vertices are $p_n, \\pi(p_n)$ and $c_n$. 
The angle between the geodesic arcs $\\overrightarrow{\\pi(p_n)p_n}$ and $\\overrightarrow{\\pi(p_n)c_n}$ is a right angle, because $\\pi(p_n)$ minimizes the distance from $p_n$ to $C$ and since $\\overrightarrow{\\pi(p_n)c_n}\\subset C$. We will show, in a sense which will be made precise, that, for any $n$ large enough, the angle between the geodesic arcs $\\overrightarrow{c_n p_n}$ and $\\overrightarrow{c_n\\pi(p_n)}$ is arbitrarily close to a right angle. Thus, as in Euclidean geometry, we will be able to deduce that the length of the side $\\overrightarrow{\\pi(p_n)c_n}$ of the geodesic triangles $\\triangle_n$ will tend to zero as $n\\to\\infty$. This observation will yield the claim $Tq=q$.\n\nWe have $c_n=\\pi (T(\\pi(p_n)))\\to Tq$, $n\\to\\infty$, because $p_n$ converges weakly to $q$, $\\pi$ and $T$ are continuous and $Tq\\in C$. The sequences $(T\\pi(p_n))$ and $(c_n)$ both converge to $Tq$. Thus\n\\begin{equation*}\n |d(T(\\pi(p_n)),p_n)-d(c_n,p_n)|\\to 0, \\ n\\to\\infty.\n\\end{equation*}\nBecause $d(Tp_n,p_n)\\to 0$, by assumption, we moreover have\n\\begin{equation}\\label{dist_est}\n |d(T(\\pi(p_n)),Tp_n)-d(c_n,p_n)|\\to 0, \\ n\\to\\infty.\n\\end{equation}\n\nLet $\\gamma_n$ be a geodesic with $\\gamma_n(0)=\\pi(p_n)$ and $\\gamma_n(1)=c_n$. Since $\\pi(p_n)$ and $c_n$ belong to $C$, $\\gamma_n(t)\\in C$ for all $t\\in [0,1]$. The nonexpansiveness of $T$ yields an estimate\n\\begin{align*}\n d^2&(\\gamma_n(1\/2),p_n)\\geq \\frac{1}{2} d^2(\\pi(p_n),p_n) + \\frac{1}{2} d^2(Tp_n,T(\\pi(p_n))) \\\\\n &=\\frac{1}{2} d^2(\\gamma_n(0),p_n) + \\frac{1}{2} d^2(Tp_n,T(\\pi(p_n))).\n\\end{align*}\nHere we have also used the fact that $\\pi(p_n)$ minimizes the distance from $p_n$ to $C$ and therefore also to the arc of $\\gamma_n$. 
Let $\\epsilon>0$. By the estimate~\\eqref{dist_est}, we can choose an $N\\in \\mathbb{N}$ such that for all $n\\geq N$ there holds\n\\begin{equation*}\n d^2(T(p_n),T(\\pi(p_n)))> d^2(c_n,p_n)-\\epsilon^2\/4=d^2(\\gamma_n(1),p_n)-\\epsilon^2\/4,\n\\end{equation*}\nwhich shows that\n\\begin{equation*}\nd^2(\\gamma_n(1\/2),p_n)> \\frac{1}{2} d^2(\\gamma_n(0),p_n)+ \\frac{1}{2} d^2(\\gamma_n(1),p_n)-\\epsilon^2\/4.\n\\end{equation*}\nNow the equiconvexity of the family $\\{d^2(\\cdot,p): p\\in \\mathcal{N}\\}$ of functions shows that the length of the geodesic arcs $\\overrightarrow{\\pi(p_n)c_n}$ of the triangles $\\triangle_n$ will tend to zero,\n\\begin{equation*}\n d(\\pi(p_n),c_n)<\\epsilon,\n\\end{equation*}\nwhenever $n\\geq N$. Since $\\pi(p_n)\\to q$ and $c_n\\to Tq$, we have shown that $Tq=q$.\n\\end{proof}\n\n\n\n\nWe are finally set up for the proof of our main theorem.\n\\begin{proof}[Proof of Theorem~\\ref{main_thm}]\n$(i)\\Rightarrow (ii)$: Let $\\epsilon>0$. Since by assumption $q\\in \\overline{co}\\{p,Tp,T^2p,\\ldots\\}$, we can find a point $s\\in co\\{p,Tp,T^2p,\\ldots\\}$ such that\n\\begin{equation}\\label{approximity}\n 2D d(s,q)+ 3d^2(s,q)< \\epsilon^2\/5,\n\\end{equation}\nwhere $D$ is the finite diameter of the union of the convex hulls of the orbits $S_q$ and $S_s$ of $q$ and $s$. The fact that $D$ is finite follows from considerations similar to those in~\\eqref{bded_orbits}. By Lemma~\\ref{Tdc_impl_close}, we have\n\\begin{equation*}\n d(m_n(s),m_n(p))<\\epsilon,\n\\end{equation*}\nwhenever $n$ is large enough. \n\nLet us then estimate the distance\n\\begin{equation*}\n d(m_n(s),m_n(q)).\n\\end{equation*}\nFor this, we estimate the corresponding functions $F_n(\\cdot,s)$ and $F_n(\\cdot,q)$ (see Eq.~\\ref{F_n}). Let $\\d>0$. 
For fixed $r\\in \\overline{co}(S_q)\\cup\\overline{co}(S_s)$ we get the following estimate:\n\\begin{align*}\n F_n(r,s) & =\\frac{1}{n}\\sum_{i=0}^{n-1}d^2(r,T^i s)\\leq \\frac{1}{n}\\sum_{i=0}^{n-1}(d(r,T^i q) + d(T^i q,T^i s))^2 \\\\\n & =\\frac{1}{n}\\sum_{i=0}^{n-1}\\l(d^2(r,T^i q) + 2 d(r,T^iq)d(T^i q,T^i s)+d^2(T^i q,T^i s)\\r) \\\\\n & \\leq \\frac{1}{n}\\sum_{i=0}^{n-1}\\l(d^2(r,T^i q) + 2Dd(s,q) +d^2(s,q)\\r) < F_n(r,q)+ \\epsilon^2\/5. \n\\end{align*}\nHere we have used~\\eqref{approximity} and the fact that $T$ is nonexpansive. A similar calculation shows that \n\\begin{equation*}\n F_n(r,q)\\leq F_n(r,s) + 2Dd(s,q)+3d^2(s,q)< F_n(r,s)+ \\epsilon^2\/5.\n\\end{equation*}\nThus we have\n\\begin{equation*}\n \\sup_{r\\in \\overline{co}(S_q)\\cup\\overline{co}(S_s)}|F_n(r,s)-F_n(r,q)|\\leq \\epsilon^2\/5<\\epsilon^2\/4\n\\end{equation*}\nand Lemma~\\ref{uni_strc_appr} yields\n\\begin{equation*}\n d(m_n(s),m_n(q))<\\epsilon\n\\end{equation*}\nfor all $n$ large enough.\n\nCombining the estimates above shows that\n\\begin{equation*}\nd(q,m_n(p))=d(m_n(q),m_n(p))\\leq d(m_n(q),m_n(s))+d(m_n(s),m_n(p))<2\\epsilon,\n\\end{equation*}\nwhenever $n$ is large enough. Here we have used the assumption $q=Tq$ to deduce $q=m_n(q)$.\n\n$(ii)\\Rightarrow (iii)$: Strong convergence implies weak convergence by the continuity of the projections. $(iii)\\Rightarrow (iv)$: This is obvious.\n\n$(iv)\\Rightarrow (i)$: First we show that the weak cluster point of the sequence $(m_n(p))$ belongs to $\\overline{co}\\{p,Tp,T^2p,\\ldots\\}=\\overline{co}(S_p)$. Since $q$ is a weak cluster point of the sequence $(m_n(p))$, there exists a subsequence $(m_{n_k}(p))$ converging weakly to $q$ as $k\\to\\infty$. The sequence $(m_{n_k}(p))$ belongs to $\\overline{co}(S_p)$ and in particular is bounded~\\cite[Lemma 3.3.4]{Jostb}. By the version of Mazur's lemma by Jost~\\cite[Thm. 2.2]{Jost}, the bounded sequence $(m_{n_k}(p))$ contains a subsequence such that its mean values converge to $q$. 
But now the mean values of elements of any subsequence of $(m_{n_k}(p))$ belong to the closed set $\\overline{co}\\{m_{n_k}(p): k\\in \\mathbb{N}\\}\\subset \\overline{co}(S_p)$. Thus $q\\in\\overline{co}(S_p)$.\n\nIt remains to prove that $q=Tq$. For this let $\\epsilon>0$. We use the nonexpansivity of $T$ to show first that, for all $n$ large enough,\n\\begin{equation*}\n d(Tm_n(p),m_n(p))<\\epsilon.\n\\end{equation*}\nWe have\n\\begin{align*}\n F_n(Tm_n(p),p)&=\\frac{1}{n}\\sum_{i=0}^{n-1}d^2(Tm_n(p),T^ip)\\leq \\frac{1}{n}\\sum_{i=0}^{n-1}d^2(m_n(p),T^ip) \\\\ \n &+\\frac{1}{n}d^2(T m_n(p),p)-\\frac{1}{n}d^2(m_n(p),T^{n-1}p)= F_n(m_n(p),p) \\\\\n &+\\frac{1}{n}d^2(Tm_n(p),p)-\\frac{1}{n}d^2(m_n(p),T^{n-1}p).\n\\end{align*}\nBy the boundedness of the orbit of $p$, we can choose $N\\in\\mathbb{N}$ such that, for all $n\\geq N$,\n\\begin{equation*}\n F_n(m_n(p),p)> F_n(Tm_n(p),p)-\\epsilon^2\/2.\n\\end{equation*}\nFor a geodesic $\\gamma:[0,1]\\to \\mathcal{N}$ with $\\gamma(0)=m_n(p)$ and $\\gamma(1)=Tm_n(p)$, we now have\n\\begin{equation*}\n F_n(\\gamma(1\/2),p)\\geq F_n(m_n(p),p)> \\frac{1}{2} F_n(\\gamma(0),p) + \\frac{1}{2} F_n(\\gamma(1),p) -\\frac{1}{2} \\epsilon^2\/2.\n\\end{equation*}\nHere the first inequality follows from the fact that $m_n(p)$ is the minimizer of the function $F_n(\\cdot,p)$. The equiconvexity of the family of functions $\\{F_n(\\cdot,p)\\}$ now yields\n\\begin{equation*}\n d(Tm_n(p),m_n(p))<\\epsilon\n\\end{equation*}\nwhenever $n$ is large enough. \n\nWe have seen that the sequence $(m_{n_k}(p))$ converges weakly to $q$ and \n\\begin{equation*}\n d(Tm_{n_k}(p),m_{n_k}(p))\\to 0, \\ k\\to \\infty.\n\\end{equation*}\nThus, by Lemma~\\ref{cond_for_fpoint}, we deduce that $Tq=q$.\n\\end{proof}\n\nFrom the proof of the theorem we can see that the distance convexity of $T$ was only used to prove (via Lemma~\\ref{Tdc_impl_close}) that the means converge strongly to the fixed point. 
We use this observation to prove Theorem~\\ref{fp_thm}.\n\n\\begin{proof}[Proof of Theorem~\\ref{fp_thm}]\nLet $p\\in \\mathcal{N}$ have bounded orbit. By Theorem 2.1 of~\\cite{Jost}, the bounded sequence $(m_n(p))$ contains a subsequence converging weakly to some element $q\\in \\mathcal{N}$. Thus $q$ is a weak cluster point of the sequence $(m_n(p))$. Now, the exact same argument as in the part $(iv)\\Rightarrow (i)$ of the proof of Theorem~\\ref{main_thm} concludes the proof.\n\\end{proof}\n\n\\section{Fixed point for volumorphisms on the space of Riemannian metrics with fixed volume form}\nIn this section we apply the fixed point theorem to volumorphisms acting on the space of all Sobolev Riemannian metrics having a fixed volume form. We give a natural condition for a fixed point to exist if we allow that the fixed point satisfies only mild regularity assumptions. We begin by describing the space of Riemannian metrics with a fixed volume form. We refer to~\\cite{Ebin,Clarke} for basic results and properties of this space and to~\\cite{Freed} for calculations of its geodesics and curvature.\n\nLet $M$ be a smooth compact oriented finite dimensional manifold. We denote by $\\mathcal{M}^s$ the set of all Riemannian metrics on $M$, which are of Sobolev class $H^s$. Throughout this section, we will assume $s>n\/2$, where $n$ is the dimension of $M$. The space $\\mathcal{M}^s$ is an infinite dimensional manifold with a weak Riemannian structure given by\n\\begin{equation*}\n \\langle U,V \\rangle_g=\\int_M \\tr{g^{-1}Ug^{-1}V}dV_g,\n\\end{equation*}\nwhere $U,V$ are tangent vectors at $g\\in\\mathcal{M}^s$ and $dV_g$ is the volume form induced by $g$. Tangent vectors of $\\mathcal{M}^s$ are symmetric $(0,2)$-tensor fields of Sobolev class $H^s$.\n\nLet $\\mu$ be a volume form on $M$. Consider next the subset $\\mathcal{M}_\\mu^s$ of $\\mathcal{M}^s$ consisting of the elements of $\\mathcal{M}^s$ whose induced volume form is $\\mu$. 
This subset is an infinite dimensional submanifold of $\\mathcal{M}^s$ with the induced inner product\n\\begin{equation*}\n \\langle U,V \\rangle_g=\\int_M \\tr{g^{-1}Ug^{-1}V}d\\mu.\n\\end{equation*}\nHere $U$ and $V$, tangent vectors at the point $g$, are traceless (with respect to $g$) symmetric $(0,2)$-tensor fields. \n\nThe geodesics of $\\mathcal{M}_\\mu^s$ can be given explicitly:\n\\begin{equation*}\\label{can_path}\n g(t)=g\\exp{t(g^{-1}A)}.\n\\end{equation*}\nHere $g(0)=g\\in \\mathcal{M}_\\mu^s$ and $\\dot{g}(0)=A\\in T_g\\mathcal{M}_\\mu^s$. The geodesic $g(t)$ is of constant speed\n\\begin{equation*}\n ||\\dot{g}(t)||^2_{g(t)}=\\int_M\\tr{(g^{-1}A)^2}d\\mu.\n\\end{equation*}\nGeodesics $g(t)$ exist for all times $t$ and we can easily see that the unique solution $A$ of the equation\n\\begin{equation*}\n g(t)|_{t=1}=h\n\\end{equation*}\nis\n\\begin{equation*}\n A=g\\log{(g^{-1}h)}.\n\\end{equation*}\nThus the distance, at least formally, is given by\n\\begin{equation*}\n d(g,h)=\\int_0^1 ||\\dot{g}(t)||_{g(t)}dt=\\l(\\int_M\\tr{(g^{-1}A)^2}d\\mu\\r)^{1\/2}.\n\\end{equation*}\nThat is,\n\\begin{equation}\\label{P_distance}\n d^2(g,h)=\\int_M\\tr{(\\log{(g^{-1}h)})^2}d\\mu.\n\\end{equation}\n\nThe calculation above is formal in the sense that for general infinite dimensional weak Riemannian manifolds it is nontrivial how geodesics relate to the distance function. However, in our case, the exponential mapping $T_g\\mathcal{M}_\\mu^s\\to \\mathcal{M}_\\mu^s$ is a diffeomorphism onto $\\mathcal{M}_\\mu^s$ for any $g\\in \\mathcal{M}_\\mu^s$, and therefore the distance between $g$ and $h$ is given by the norm of the tangent vector $A=g\\log{(g^{-1}h)}\\in T_g\\mathcal{M}_\\mu^s$ justifying~\\eqref{P_distance}. See~\\cite[Prop. 2.23, Prop. 2.46]{Clarke} for details. \n\nWe record that $\\mathcal{M}_\\mu^s$ is indeed a global Alexandrov NPC space. 
We omit the proof since it is essentially the same as the proof of Theorem~\\ref{X} below.\n\\begin{thm}\\label{M_mu_NPC}\n The space $\\mathcal{M}_\\mu^s$ of Riemannian metrics of Sobolev class $H^s$, $s>n\/2$, with a fixed volume form $\\mu$ on a compact orientable manifold is a global Alexandrov NPC space.\n\\end{thm}\n\nA volumorphism is a diffeomorphism preserving a given volume form. We denote by $\\mathcal{D}_\\mu^{s+1}$ the space of Sobolev $H^{s+1}$ volumorphisms on $M$. The natural action of $\\mathcal{D}_\\mu^{s+1}$ on $\\mathcal{M}_\\mu^s$ is given by pullback. A straightforward calculation shows that the action is actually an isometry in the sense of Riemannian geometry and Eq.~\\ref{P_distance} shows that it is also an isometry in the metric sense. In particular, the action is nonexpansive and we are, almost, in the setup of our fixed point Theorem~\\ref{fp_thm}.\n\nThe last needed assumption to apply Theorem~\\ref{fp_thm} would be the completeness of $\\mathcal{M}_\\mu^s$ as a metric space $(\\mathcal{M}_\\mu^s,d)$. However, $(\\mathcal{M}_\\mu^s,d)$ is not metrically complete, which is quite expected since we are in a sense considering an $L^2$ inner product in the subset of Sobolev $H^s$ Riemannian metrics on $M$.\n\nThe metric completion of the manifold of all Riemannian metrics $\\mathcal{M}^s$ with respect to its distance metric is characterized in~\\cite{Clarke}. The elements of the completion of $\\mathcal{M}^s$ can be identified with measurable semimetrics with finite volume. See~\\cite[Thm 5.25.]{Clarke} for details. As $\\mathcal{M}_\\mu^s$ is a submanifold of $\\mathcal{M}^s$, it follows that the $d$-metric completion $\\overline{\\mathcal{M}}_\\mu^s$ of $\\mathcal{M}_\\mu^s$ is a subset of the metric completion of $\\mathcal{M}^s$. The next theorem implies that actually more is true. \n\n\\begin{thm}\\label{X}\n Let $\\mu$ be a volume form on a smooth oriented compact manifold $M$. Let $X$ be the set of $\\mu$-measurable a.e. 
positive definite symmetric $(0,2)$-tensor fields $g$ on $M$ with volume form agreeing with $\\mu$ a.e. and\n\\begin{equation*}\n \\int_M\\tr{(\\log{(g^{-1}h)})^2}d\\mu <\\infty,\n\\end{equation*}\nfor all $h\\in \\mathcal{M}_\\mu^s$. Then, if $X$ is equipped with the metric\n\\begin{equation}\\label{metric}\n \\d(g,h)=\\l(\\int_M\\tr{(\\log{(g^{-1}h)})^2}d\\mu\\r)^{1\/2},\n\\end{equation}\n$(X,\\d)$ is a complete global Alexandrov NPC space.\n\nThe geodesics of $(X,\\d)$ are given by the formula\n\\begin{equation}\\label{olP_geo}\n g(t)=g \\exp{tg^{-1}A}.\n\\end{equation}\nHere $g\\in X$ and $A$ belongs to the set of $(L^2,|\\cdot|_g,\\mu)$-integrable symmetric $(0,2)$-tensor fields on $M$ satisfying $\\Trg{A}{g}=0$ a.e. Moreover, the space of mappings $\\mathcal{D}_\\mu^{s+1}$ acts by pullback isometrically on $(X,\\d)$.\n\\end{thm}\n\n\n\\begin{proof}\nWe first show that $X$ equipped with the mapping $\\d:X\\times X\\to [0,\\infty)$ is a metric space. This follows from noting that the integrand in the formula~\\eqref{metric} of the metric $\\d$ is, pointwise in any local coordinates, the square of the distance of matrices in a space isometric to $S:=SL(n,\\mathbb{R})\/SO(n,\\mathbb{R})$. This is because all elements of $X$ have the same volume form a.e., and therefore the coordinate representations of the elements of $X$ are pointwise positive definite symmetric matrices with \\emph{equal} determinants a.e.\n\nThe observation above yields the following. If $\\d(g,h)=0$, then the integrand of~\\eqref{metric} equals zero a.e. and consequently $g=h$ a.e. The triangle inequality, and the fact that $\\d$ is finite, follow from the Minkowski inequality for $L^2(M,\\mu)$ and the triangle inequality on $(S,d_S)$. We conclude that $(X,\\d)$ is a metric space. 
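For intuition on the metric~\eqref{metric}, the following hedged numerical sketch (illustrative only; the single-fibre, commuting-diagonal setting is an assumption made for simplicity) evaluates the pointwise integrand for diagonal data of determinant one, where the distance reduces to the root of the sum of squared eigenvalue log-ratios and the geodesic~\eqref{olP_geo} reduces entrywise to the weighted geometric mean $g_i^{1-t}h_i^t$:

```python
import math

# Hedged single-fibre toy model: commuting diagonal data given as positive
# eigenvalue lists with product 1, so log(g^{-1}h) is diagonal with entries
# log(h_i / g_i).
def dist(g, h):
    return math.sqrt(sum(math.log(hi / gi) ** 2 for gi, hi in zip(g, h)))

def geodesic(g, h, t):
    # g(t) = g * exp(t * log(g^{-1}h)) reduces entrywise to g_i^(1-t) * h_i^t
    return [gi * math.exp(t * math.log(hi / gi)) for gi, hi in zip(g, h)]

G = [2.0, 0.5]      # determinant 1
H = [8.0, 0.125]    # determinant 1
mid = geodesic(G, H, 0.5)   # entrywise geometric mean [4.0, 0.25]
```

Note that the midpoint again has determinant one, mirroring the fact that the geodesics of $(X,\d)$ stay inside the fixed-volume-form constraint.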
We also see that $\\mathcal{D}_\\mu^{s+1}$ acts isometrically on $X$.\n\nThe metric $d_S$ of $S$ is given by\n\\begin{align*}\n d^2_S(G,H)&=||\\log (H^{1\/2}G^{-1}H^{1\/2})||^2=\\tr{\\l(\\log(H^{1\/2}G^{-1}H^{1\/2})\\r)^2} \\\\\n &=\\tr{(\\log(G^{-1}H))^2}=\\tr{(\\log(H^{-1}G))^2}\n\\end{align*}\nfor $G,H\\in S$, see~\\cite[Ch. 20]{Iwaniec}. A straightforward calculation shows that a path $\\Gamma:\\mathbb{R} \\to S$ of the form\n\\begin{equation}\\label{S_geo}\n \\Gamma(t)=G\\exp{t\\log{(G^{-1}H)}}\n\\end{equation}\nhas on the interval $[0,1]$ the length $l(\\Gamma)$,\n\\begin{equation*}\n l(\\Gamma):=\\sup\\l\\{\\sum_{i=1}^n d_S(\\Gamma(t_i),\\Gamma(t_{i-1})): 0=t_0<t_1<\\cdots<t_n=1\\r\\}=d_S(G,H).\n\\end{equation*}\nThus the paths of the form~\\eqref{S_geo} are geodesics of $(S,d_S)$ and, applying this pointwise a.e. on $M$, the paths~\\eqref{olP_geo} are geodesics of $(X,\\d)$. The NPC inequality for $(X,\\d)$ then follows by integrating the pointwise NPC inequality of $(S,d_S)$ against $\\mu$.\n\nIt remains to prove that $(X,\\d)$ is complete. Let $(g_n)$ be a Cauchy sequence in $(X,\\d)$. Passing to a subsequence, we may assume that $\\d(g_j,g_{j+1})<2^{-j}$ for all $j\\in\\mathbb{N}$; then, by the Cauchy--Schwarz inequality and the monotone convergence theorem, $\\sum_{j}d_S(g_j(x),g_{j+1}(x))<\\infty$ for a.e. $x\\in M$. Let $\\epsilon>0$. It follows from what we observed that, for a.e. $x\\in M$, there is an index $N=N_x\\in \\mathbb{N}$ such that\n\\begin{align*}\n d_S&(g_{n}(x),g_{n+k}(x))\\leq \\sum_{j=n}^{n+k-1}d_S(g_{j}(x),g_{j+1}(x)) \\leq \\sum_{j=n}^{\\infty}d_S(g_{j}(x),g_{j+1}(x)) \\\\\n &=\\sum_{j=n}^{\\infty}\\l(\\tr{(\\log(g_j^{-1}(x)g_{j+1}(x)))^2}\\r)^{1\/2}<\\epsilon\n\\end{align*}\nwhenever $n\\geq N_x$ and for all $k\\in \\mathbb{N}$. Thus, $(g_n(x))$ is Cauchy a.e. By the metric completeness of $(S,d_S)$, it follows that, for a.e. $x\\in M$, the sequence $g_{n}(x)$ tends to some $g(x)$. \n\nThe limit is a symmetric positive semi-definite $(0,2)$ tensor field $g$, defined a.e., and has volume form equalling $\\mu$ a.e. The a.e. positiveness of the volume form $\\mu$ implies that $g$ is actually positive definite a.e. The fact that $\\d(g,h)<\\infty$, for all $h\\in \\mathcal{M}_\\mu^s$, follows from Fatou's lemma and the boundedness of Cauchy sequences. We also have $\\d(g_n,g)\\to 0$ as $n\\to\\infty$.\n\\end{proof}\n\nBy the discussion of this section we have that $(\\mathcal{M}_\\mu^s,d)$ is isometrically embedded in $(X,\\d)$. It is reasonable to expect that actually $(X,\\d)$ is the metric completion of $(\\mathcal{M}_\\mu^s,d)$. 
The proof of this, which is equivalent to proving that $\\mathcal{M}_\\mu^s$ is dense in $(X,\\d)$, does not, however, seem to be straightforward.\n\nWe have the main theorem of this section.\n\\begin{thm}\\label{vol_fp}\nLet $(X,\\d)$ be as in Theorem~\\ref{X}. If the pullback action $T$ of a volumorphism $\\phi\\in\\mathcal{D}_\\mu^{s+1}$ has a bounded orbit $\\{p,Tp,T^2p,\\ldots\\}$ in $(X,\\d)$ for some $p\\in X$, then there exists a fixed point $g$ for the action in the $\\d$-closure of the subset $co\\{p,Tp,T^2p,\\ldots\\}$ of $X$. With respect to this fixed point $\\phi$ is a Riemannian isometry,\n\\begin{equation*}\n \\phi^*g=g.\n\\end{equation*}\n\\end{thm}\n\n\\subsection*{Acknowledgements}\n\nThe author is supported by the Finnish National Graduate School in Mathematics and its Applications. This work was inspired by the international scientific workshop in honor of Steven Rosenberg's 60th birthday. The author also wishes to thank the referee for valuable comments and interest in this work.\n\n\n\n\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzleio b/data_all_eng_slimpj/shuffled/split2/finalzzleio new file mode 100644 index 0000000000000000000000000000000000000000..c7c6109158238d3b0d748c89413ae801735a64fb --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzleio @@ -0,0 +1,5 @@ +{"text":"\\section{\\label{sec:level1}INTRODUCTION}\n\nImage-guided radiotherapy (IGRT) is commonly used in commercial stereotactic radiotherapy systems for focusing irradiation on a tumour subject to motion. The technology to locate lung tumour position in real time during irradiation is important because of the trend towards real-time adaptive treatment.
Some commercial stereotactic radiotherapy systems support real-time tracking of gold fiducial markers\\cite{1SyncTraX}\\cite{2Harada}\\cite{3Shiinoki}. However, marker implantation is invasive and carries the risks of pneumothorax and marker migration\\cite{4Bhagat}. Markerless tracking is being investigated for the next generation of IGRT\\cite{5Bahig}\\cite{6Yang}\\cite{7Dhont}. In most cases, template matching\\cite{8Patel}\\cite{9Shiinoki}\\cite{10Teske} or a correlation model\\cite{11Shieh} trained on X-ray images is used. Other cases use a correlation model\\cite{12Li} with digitally reconstructed radiographs (DRRs) as training data sets. These methodologies involve the generation of a small personalized training data set for each patient. However, conventional template matching and correlation models often cause robustness problems due to inter- and intra-fractional changes or induced artefacts in computed tomography.\n\nImage recognition by machine learning, especially deep learning (DL), is another strategy to improve the robustness of markerless tracking\\cite{13Terunuma}. DL is the de facto standard for robust image recognition. However, DL for medical imaging usually requires multiple-subject data sets for training. Further, data collection is challenging and does not always work well due to the heterogeneity of patient data.\n\nHere, we propose a different strategy for real-time markerless lung tumour tracking: patient-specific DL with large personalized training data sets generated from individual patients, avoiding the need for collection of a large patient data set. Our strategy uses a personalized training data set generated from each patient's 4-dimensional treatment planning computed tomography (4D-CT). \\textbf{FIG. \\ref{fig1}} shows tumour tracking by patient-specific DL in the treatment workflow.
The personalized data generation and training process could be done end-to-end automatically using treatment planning with 4D-CT data. Moreover, these processes could be done in the treatment facility without taking patient data out of the facility, avoiding privacy problems. The tracking is performed during treatment, with the possibility of pretreatment rehearsal.\n\nWe validated the feasibility of our strategy by evaluating accuracy in both a digital phantom simulation study and an epoxy phantom study.\n\n\\begin{figure}[htb]\n \\begin{center}\n \\includegraphics[width=8cm]{fig1.png}\n \\caption{\\label{fig:epsart}Procedure for markerless tumour tracking by patient-specific DL in the treatment workflow.}\n \\label{fig1}\n \\end{center}\n\\end{figure}\n\n\n\\section{\\label{sec:level1}MATERIALS AND METHODS}\n\n\\subsection{\\label{sec:level2}Deep learning for markerless tracking}\n\nWe designed a neural network model (\\textbf{FIG. \\ref{fig2}}) for 2-class semantic segmentation based on a fully convolutional neural network (FCN)\\cite{14FCN}. Our model classifies each pixel of an input image as tumour area or background area for markerless tumour tracking. We combined the FCN with a pixel shuffle layer\\cite{15PixShuf} instead of deconvolution layers for \\textgreater 15 $\\mathrm{fps}$ (\\textless 66.7 $\\mathrm{ms\/frame}$) real-time processing. We used wider convolution sizes in our model than typical FCNs with deconvolution layers, such as the original FCN and U-net\\cite{16U-Net}, because we needed not only local textures but also non-local features such as tumour contours to recognize the tumour. A pixel shuffle layer is faster than deconvolution layers, making it suitable for real-time processing.\n\n\\begin{figure}[htb]\n \\begin{center}\n \\includegraphics[width=8cm]{fig2.png}\n \\caption{\\label{fig:epsart}The neural network model based on FCN combined with a pixel shuffle layer. F: Filter size. S: Stride width. P: Padding size.
The left-side image is an input image and the right-side image is a labelled image classified to tumour area (white) or background area (black).}\n \\label{fig2}\n \\end{center}\n\\end{figure}\n\nIn the training process, we used DRRs generated from each phantom's 4D-CT as input images. Also, we used labelled images classified to tumour area or background area as teacher images. We describe the details of the training data sets, namely pairs of a training DRR and a labelled image, in the simulation study and phantom study sections below. We trained models by using softmax cross entropy as a loss function and Adam (adaptive moment estimation)\\cite{17Adam} as an optimization algorithm, in mini-batches of 200 over 10 epochs. We consider that Adam is suitable for stable and fast training across a variety of data sets without tuning training parameters for each data set, because Adam adapts its learning rate automatically in its internal algorithm. Each projection view was trained independently by using its own training data set.\n\n\\subsection{\\label{sec:level2}The simulation study with a digital 4D-CT phantom}\n\\subsubsection{Digital 4D-CT phantom}\n\nIn the simulation study, a digital respiratory motion phantom including ribs and vertebrae in the form of a 4D-CT (XCAT\\textregistered, Duke University, Durham NC, USA)\\cite{18XCAT}\\cite{19XCAT_HP} was used. The XCAT consists of $512 \\times 512 \\times 400$ voxels (1 $\\mathrm{mm^3}$ voxel) and features 200 phases between peak inhalation and peak exhalation.\n\nWe created a digital tumour motion phantom synchronized with the XCAT respiratory motion with the same spatial resolution. The digital tumour phantom motion range was 40 $\\mathrm{mm}$ in the superior-inferior (SI) direction, the main component of respiratory motion.
We created 3-$\\mathrm{cm}$ spherical and $1.5 \\times 2.25 \\times 3$-$\\mathrm{cm}$ ovoid digital tumour phantoms with CT values of $100$ Hounsfield units (HU), which are large enough to be overlapped by ribs.\n\n\\subsubsection{Training data sets}\n\nWe used only ten phases of the XCAT digital phantom, at equal phase intervals, because a 4D-CT generally consists of ten phases. DRRs were generated by projecting each 4D-CT phase with the same projection geometry as the two-view fluoroscopic tracking systems in the epoxy phantom study. DRRs were $768 \\times 768$ pixels ($388 \\times 388$ $\\mu m^2$ pixels), identical to the flat panel detector (FPD) in the epoxy phantom study (below). The DRR generation algorithm was based on a ray tracing algorithm\\cite{20Siddon} and was also used for patient positioning with image registration\\cite{21Mori_DRR}\\cite{22Mori_first}.\n\nWe coupled an XCAT DRR and a tumour DRR using 4D-CT with the same projection geometry, allowing us to acquire an XCAT DRR with a tumour as a training DRR (\\textbf{FIG. \\ref{fig3}}). \n\n\\begin{figure}[htb]\n \\begin{center}\n \\includegraphics[width=8cm]{fig3.png}\n \\caption{\\label{fig:epsart}Generating XCAT digitally reconstructed radiographs (DRRs) with a tumour as training DRRs from XCAT and a digital motion tumour phantom on 4D-CT.}\n \\label{fig3}\n \\end{center}\n\\end{figure}\n\nWe solved the problem of overlapping bone by using large numbers of training DRRs generated with displaced projection geometries to simulate a range of tumour motion. These training DRRs used the digital tumour phantom with overlapping bone. This solution also works as data augmentation.\n\nTo generate the DRRs, we displaced the projection geometry with 3-dimensional translation ($x$, $y$, $z$) and rotation ($\\psi$, $\\varphi$, $\\theta$) of an X-ray tube\/FPD, by $-6$ to $+6$ $\\mathrm{mm}$ with 3-$\\mathrm{mm}$ steps and $-2^\\circ$ to $+2^\\circ$ with $1^\\circ$ steps.
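This displacement grid can be enumerated directly; the following sketch (our illustrative Python, not the study's code) confirms the resulting DRR counts:

```python
from itertools import product

translations = [-6, -3, 0, 3, 6]   # mm: -6 to +6 in 3-mm steps
rotations = [-2, -1, 0, 1, 2]      # degrees: -2 to +2 in 1-degree steps

# every combination of (x, y, z) translations and (psi, phi, theta) rotations
grid = list(product(translations, translations, translations,
                    rotations, rotations, rotations))
assert len(grid) == 5 ** 6 == 15625

# excluding the undisplaced original geometry leaves the per-phase DRR count
displaced = [g for g in grid if g != (0, 0, 0, 0, 0, 0)]
assert len(displaced) == 15624
assert 10 * len(displaced) == 156240   # over ten 4D-CT phases
```

The undisplaced geometry is excluded here because it is reserved for the tracking images.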
We then acquired $5^6 - 1 = 15,624$ DRRs per 4D-CT phase (\\textbf{FIG. \\ref{fig4}}). Finally, we acquired $156,240$ DRRs in ten 4D-CT phases. Because we can interpolate and extrapolate tumour motion in discrete 4D-CT phases, the geometric displacements were calculated from the tumour phantom motion range between 4D-CT phases.\n\n\\begin{figure*}\n\\begin{ruledtabular}\n \\begin{tabular}{ccc}\n \\rotatebox[origin=c]{90}{\\textbf{View 1}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig4_upper_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig4_upper_right.png}}\n \\end{minipage} \\\\\n & $(x, y, z) = (+6, +6, +6),$ & $(x, y, z) = (-6, -6, -6),$ \\\\\n & $(\\psi, \\varphi, \\theta) = (+2, +2, +2)$ & $(\\psi, \\varphi, \\theta) = (-2, -2, -2)$\\\\ \\hline \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 2}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig4_lower_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig4_lower_right.png}}\n \\end{minipage} \\\\\n & $(x, y, z) = (+6, +6, +6),$ & $(x, y, z) = (-6, -6, -6),$ \\\\\n & $(\\psi, \\varphi, \\theta) = (+2, +2, +2)$ & $(\\psi, \\varphi, \\theta) = (-2, -2, -2)$\\\\\n \\end{tabular}\n \n \\caption{\\label{fig:epsart}Examples of training digitally reconstructed radiographs (DRRs) of a spherical tumour phantom with displaced projection geometry using digital phantoms. 
($x$, $y$, $z$) and ($\\psi$, $\\varphi$, $\\theta$) are displacements from the original projection geometry.}\n \\label{fig4}\n\n \\vspace{4mm}\n\n \\begin{tabular}{ccc}\n \\rotatebox[origin=c]{90}{\\textbf{View 1}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig5_upper_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig5_upper_right.png}}\n \\end{minipage} \\\\\n & $(x, y, z) = (+6, +6, +6),$ & $(x, y, z) = (-6, -6, -6),$ \\\\\n & $(\\psi, \\varphi, \\theta) = (+2, +2, +2)$ & $(\\psi, \\varphi, \\theta) = (-2, -2, -2)$\\\\ \\hline \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 2}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig5_lower_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.25}{\\includegraphics{fig5_lower_right.png}}\n \\end{minipage} \\\\\n & $(x, y, z) = (+6, +6, +6),$ & $(x, y, z) = (-6, -6, -6),$ \\\\\n & $(\\psi, \\varphi, \\theta) = (+2, +2, +2)$ & $(\\psi, \\varphi, \\theta) = (-2, -2, -2)$\\\\\n \\end{tabular}\n \\caption{\\label{fig:epsart}Examples of training digitally reconstructed radiographs (DRRs) of a 3-cm tumour phantom with displaced projection geometry, random contrast transformation, and random noise. ($x$, $y$, $z$) and ($\\psi$, $\\varphi$, $\\theta$) are displacements from the base projection geometry.}\n \\label{fig5}\n\n\\end{ruledtabular}\n\\end{figure*}\n\nWe also generated large numbers of labelled images corresponding to training DRRs. In a labelled image, tumour area was calculated as the projection area of a tumour in 4D-CT. The other area was labelled as background area. 
This allowed us to automatically generate phantom-specific training data sets, namely pairs of a training DRR and a labelled image.\n\n\\subsubsection{Markerless tracking}\n\nDRRs for all 4D-CT phases with the same projection geometry, without displacements, were used as tracking images. These tracking images consisted of 200 frames and were not included in the training data set. \n\nA tumour coordinate was calculated from semantic segmentation of a tracking image by using a phantom-specific trained model for every frame. Trained models output the `tumour score' (0.0-1.0) pixel-wise. Therefore, we were able to calculate a tumour coordinate as the density centre weighted by the pixel-wise tumour score, using only pixels with a score \\textgreater 0.5. Using the density centre potentially allows acquisition of a tumour coordinate with sub-pixel accuracy in a tracking image. Misdetections of a few pixels had little effect on the density centre coordinate. Therefore, we expected higher accuracy than that of object detection methods outputting an object coordinate directly, such as Faster R-CNN\\cite{23Ren}.\n\n\\subsubsection{Tracking accuracy evaluation}\n\nThe ground truth of a tumour coordinate is the density centre of a tumour DRR weighted by pixel values with sub-pixel accuracy. We evaluated `tracking error' as the 2-dimensional distance between a tracking coordinate and a ground truth coordinate in each view. We evaluated not only absolute error on the isocentre in mm but also relative error on the FPD in pixel units, because tracking errors may be affected by pixel resolution. We defined `tracking accuracy' as the ratio of frames satisfying \\textless 1 $\\mathrm{mm}$ tracking error on the isocentre. Tracking accuracy corresponds to gating accuracy in typical isotropic PTV-to-CTV margins of 1 $\\mathrm{mm}$\\cite{24Mori_track}.
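The density-centre computation described above can be sketched as follows; this is our illustrative NumPy version, not the authors' implementation, with the score map assumed to be the model's pixel-wise output:

```python
import numpy as np

def tumour_centroid(score, thr=0.5):
    """Score-weighted density centre of pixels with tumour score > thr.

    score: (H, W) array of pixel-wise tumour scores in [0.0, 1.0].
    Returns (x, y) with sub-pixel accuracy, or None when no pixel
    exceeds the threshold (detection failure).
    """
    mask = score > thr
    if not mask.any():
        return None
    w = np.where(mask, score, 0.0)
    ys, xs = np.indices(score.shape)
    total = w.sum()
    return float((w * xs).sum() / total), float((w * ys).sum() / total)
```

Because the coordinate is a weighted mean over many pixels, a few misdetected pixels shift it only slightly, which is the robustness argument made above.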
In cases where no pixel had a tumour score \\textgreater 0.5, we considered that tumour detection had failed in that frame.\n\n\\subsection{\\label{sec:level2}The phantom study with an epoxy respiratory motion phantom}\n\\subsubsection{Epoxy respiratory motion phantom}\n\nIn the phantom study, we validated multi-modality DL, which uses DRRs for training and X-ray images for tracking, using an epoxy respiratory motion phantom. A programmable respiratory motion phantom (Dynamic Thorax Phantom MODEL 008A\\textregistered, Computerized Imaging Reference Systems, Inc., Norfolk VA, USA) \\cite{25CIRS} was used. This is an epoxy chest phantom with vertebrae but no ribs. It allows the operator to change spherical tumour size (1 $\\mathrm{cm}$, 2 $\\mathrm{cm}$ and 3 $\\mathrm{cm}$ diameter). The CT values of the epoxy `tumours' are approximately 30 $\\mathrm{HU}$.\n\nA 4D-CT of the MODEL 008A phantom was acquired (Aquilion ONE\\textregistered, Canon Medical Systems Corporation, Otawara City, Japan) with different-sized tumour phantoms. We marked phantom position with the CT scanner's laser markers to reposition the chest phantom for tracking in fluoroscopy. We then guided a tumour phantom on a rod in a sinusoidal motion ($\\pm20$ $\\mathrm{mm}$ amplitude and 4 second cycle) in the SI direction. 4D-CT phases were synchronized with tumour motion. This 4D-CT consisted of $512 \\times 512 \\times 204$ voxels ($0.625 \\times 0.625 \\times 1$ $\\mathrm{mm}$ voxel) and ten phases for a full single respiratory cycle (T00\u2013T90). Because the 4D-CT was acquired in the same manner as during treatment workflow, in discrete phases, the 4D-CT imaging range of tumour motion was approximately 38 $\\mathrm{mm}$, which was narrower than the actual tumour motion range.
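The roughly 38-mm imaging range follows from sampling the $\pm20$-mm sinusoid at ten discrete phases; a quick sketch (ours, with the phase offset assumed such that no sample lands exactly on a peak) reproduces it:

```python
import numpy as np

# +/- 20 mm sinusoid sampled at ten 4D-CT phases over one full cycle;
# the phase offset is assumed such that no sample lands exactly on a peak
phases = np.arange(10) / 10.0
si = 20.0 * np.sin(2 * np.pi * phases)   # SI position of the tumour rod, mm
imaging_range = si.max() - si.min()      # about 38.04 mm, below the full 40 mm
```

With this sampling the extreme phases sit at $\pm20\sin(72^\circ)\approx\pm19.02$ mm, hence the approximately 38-mm imaged range.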
Also, motion artefacts, which gave the tumours a more strongly differentiated appearance than most cases of inter- and intra-fractional changes, were detected in some 4D-CT phases.\n\n\\subsubsection{Training data sets}\n\nWe generated large numbers of training DRRs with displaced projection geometries from 4D-CT data of the chest phantom to simulate a variety of tumour motion patterns. We displaced the projection geometry by $-6$ to $+6$ $\\mathrm{mm}$ with 3-$\\mathrm{mm}$ steps and by $-2^\\circ$ to $+2^\\circ$ with $1^\\circ$ steps. We then acquired $5^6 = 15,625$ DRRs per 4D-CT phase (\\textbf{FIG. \\ref{fig5}}). Finally, we acquired $156,250$ DRRs in all ten 4D-CT phases. Because we can interpolate and extrapolate tumour motion in discrete 4D-CT phases, the geometry displacement range was decided by the tumour motion range between 4D-CT phases. \n\nIn multi-modality DL using DRRs for training and X-ray images for tracking, we faced the specific problem of differences in image quality between DRRs and X-ray images, caused mainly by the beam hardening effect, scattering, sensitivity characteristics of FPDs, and shot noise. Therefore, a DRR cannot be converted to an X-ray image precisely, because pixel values of a DRR do not correspond to pixel values of an X-ray image one-to-one. We solved this problem by using large numbers of training DRRs with random contrast transformation and random noise addition.\n\nThe contrast variation range was decided by analysing differences in contrast between DRRs and X-ray images. We applied gamma transformation with random $\\gamma$ values ($\\gamma = 0.4\u20132.5$) to each DRR as random contrast transformation.
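The random contrast transformation, together with the noise addition described next, can be sketched as follows. This is our illustrative NumPy version, not the authors' code; pixel values are assumed normalized to [0, 1], so the paper's noise level of $\sigma = 25$ is rescaled to an assumed 8-bit range:

```python
import numpy as np

rng = np.random.default_rng()

def augment_drr(drr, gamma_range=(0.4, 2.5), sigma=25.0 / 255.0):
    """Random gamma transformation plus additive Gaussian noise.

    drr: 2-D array with pixel values normalized to [0, 1] (our choice;
    sigma mimics a noise standard deviation of 25 on an 8-bit scale).
    """
    gamma = rng.uniform(*gamma_range)
    out = np.clip(drr, 0.0, 1.0) ** gamma          # random contrast
    out = out + rng.normal(0.0, sigma, drr.shape)  # random noise
    return np.clip(out, 0.0, 1.0)
```

Drawing a fresh gamma and noise field per training DRR gives the model a spread of contrast and noise conditions covering the DRR-to-X-ray gap.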
We also added Gaussian noise ($\\sigma = 25$) as random noise such that the noise variation range was $\\pm1$ standard deviation of the X-ray image noise.\n\nJust as in the simulation study, we also generated large numbers of labelled images corresponding to training DRRs.\n\n\\subsubsection{Markerless tracking}\n\nTwo-view X-ray fluoroscopy images of the chest phantom were simultaneously taken and marked with the laser positioning system (SyncTraX FX4\\textregistered, Shimadzu Corporation, Inc., Kyoto, Japan) \\cite{1SyncTraX} (\\textbf{FIG. \\ref{fig6}}). We manually positioned the phantom as close as possible to its location in the 4D-CT using the laser markers. Because we had not yet constructed a synchronizing system between X-ray fluoroscopy frame and measurement of tumour position, we acquired X-ray images at the 25 fixed tumour positions on the same trajectory taken by the 4D-CT with a tumour phantom inserted. The other structures of the phantom except the `tumour' were fixed at the same position. The X-ray settings were 110 $\\mathrm{kV}$, 50 $\\mathrm{mA}$, 4 $\\mathrm{ms}$, the same as in our gold marker tracking. These X-ray images consisted of $768 \\times 768$ pixels ($388 \\times 388$ $\\mu m^2$ pixel). They were taken in slightly different positions than in the 4D-CT because of differences in phase intervals.\n\n\\begin{figure}[htb]\n \\begin{center}\n \\includegraphics[width=8cm]{fig6.png}\n \\caption{\\label{fig:epsart}Geometry of the two-view X-ray fluoroscopy imaging system SyncTraX FX4\\textregistered with a radiotherapy system. Two views cross at the isocentre without being blocked by the gantry head. Source-object distance (SOD) = 2353 $\\mathrm{mm}$. 
Source-image distance (SID) = 4172 $\\mathrm{mm}$.}\n \\label{fig6}\n \\end{center}\n\\end{figure}\n\nWe applied a Gaussian blur filter to each X-ray image to simulate the same spatial resolution as the training DRRs, which was lower than that of the X-ray images because DRR spatial resolution was limited by the CT spatial resolution (this brought the image quality of DRRs and X-ray images closer together for multi-modality DL). These blurred X-ray images were used as tracking images.\n\nAs in the simulation study, a tumour coordinate was calculated from semantic segmentation of a tracking image by using a phantom-specific trained model for every frame. A tumour coordinate was calculated as the density centre of the tumour score with sub-pixel accuracy, which was essentially unaffected by the Gaussian blur filter.\n\n\\subsubsection{Tracking accuracy evaluation}\n\nWe measured displacement of the motion tumour rod of the MODEL 008A phantom as ground truth of the tumour coordinate using a laser displacement meter (LK-G155\\textregistered, Keyence Inc., Osaka, Japan, \\textless 20 $\\mu m$ accuracy) outside of the X-ray fields of view (\\textbf{FIG. \\ref{fig7}}). We had previously calibrated the relation between displacements of the tumour phantom and tumour coordinates on the X-ray images. We then acquired tumour coordinates from displacements of the tumour phantom at 25 different positions. \n\n\\begin{figure}[htb]\n \\begin{center}\n \\includegraphics[width=8cm]{fig7.png}\n \\caption{\\label{fig:epsart}Tracking accuracy evaluation system using a MODEL 008A phantom with a laser displacement meter (a). We measured displacement of a motion tumour rod of the MODEL 008A phantom outside both X-ray fields of view.}\n \\label{fig7}\n \\end{center}\n\\end{figure}\n\nAs in the simulation study, we evaluated not only absolute error on the isocentre in mm but also relative error on the FPD in pixel units.
Also, we evaluated `tracking accuracy'.\n\n\\subsection{\\label{sec:level2}Implementation}\n\nThe personalized data generation process was programmed with C++ and CUDA\\textregistered (NVIDIA Corporation, Santa Clara CA, USA) running on the GPU. The training process and the tracking process were programmed in Python with the open-source DL framework Chainer (Preferred Networks, Inc., Tokyo, Japan), running on the GPU.\n\nAll processes (i.e., personalized data generation, training, and tracking) were done with a single computer (Windows\\textregistered 7 64 bit, Intel Xeon CPU E5-2637 dual, 64GB RAM, Microsoft, Inc. Redmond WA, USA; NVIDIA Quadro M6000 24GB GPU dual). Personalized data generation, training, and tracking were run as two-view processes simultaneously by using dual GPUs.\n\n\n\\section{\\label{sec:level1}RESULTS}\n\\subsection{\\label{sec:level2}Digital simulation study}\n\n\\textbf{FIG. \\ref{fig8}} shows an example of a tracking image. There were no false detections (extra detections outside the tumour area), in spite of many similar rib structures in the tracking image.\n\n\\textbf{FIG. \\ref{fig9}} shows the tracking error distribution for all frames. The tracking error distribution was \\textless 1 $\\mathrm{mm}$ on the isocentre and almost \\textless 1 pixel on the FPD without effects of pixel resolution in both views and with both tumour shapes.\n\n\\textbf{TABLE \\ref{tab1}} shows a summary of tracking accuracy.
We achieved 100\\% tracking accuracy in both views and with both tumour shapes in spite of overlying bone.\n\n\\begin{ruledtabular}\n\\begin{figure*}\n \\begin{tabular}{ccc}\n & \\textbf{Spherical tumour} & \\textbf{Ovoid tumour}\\\\ \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 1}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig8_upper_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig8_upper_right.png}}\n \\end{minipage} \\\\ \\hline \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 2}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig8_lower_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig8_lower_right.png}}\n \\end{minipage} \\\\\n \\end{tabular}\n \\caption{\\label{fig:epsart}An example of a tracking image in the digital simulation study. A calculated tumour coordinate is the centre of the green rectangle in the tracking image.}\n \\label{fig8}\n\\end{figure*}\n \n\\begin{figure*}\n \\begin{tabular}{ccc}\n & \\textbf{Spherical tumour} & \\textbf{Ovoid tumour}\\\\ \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 1}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig9_upper_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig9_upper_right.png}}\n \\end{minipage} \\\\ \\hline \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 2}} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig9_lower_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.49\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig9_lower_right.png}}\n \\end{minipage} \\\\\n \\end{tabular}\n \\caption{\\label{fig:epsart}The 2-dimensional tracking error distribution on FPD for all frames in the digital
simulation study. The centre of a graph, the origin, means zero error. The scale units are one pixel on the FPD. The red circle means 1 mm error on the isocentre, the criterion for tracking accuracy.}\n \\label{fig9}\n\\end{figure*}\n\n\\begin{table*}\n \\caption{\\label{tab:table3}Summary of tracking accuracy in the digital simulation study. Tracking error shows `mean error' $\\pm$ `standard deviation' with (maximum error) for all frames.}\n\\begin{tabular}{cccccc}\n & &\\multicolumn{2}{c}{\\textbf{View 1}} & \\multicolumn{2}{c}{\\textbf{View 2}}\\\\\n \\multicolumn{2}{c}{\\textbf{Tumour shape}} & \\textbf{Spherical} & \\textbf{Ovoid} & \\textbf{Spherical} & \\textbf{Ovoid}\\\\ \\hline\n \\multicolumn{2}{c}{\\textbf{Tracking accuracy}} & 100\\% & 100\\% & 100\\% & 100\\%\\\\ \\hline\n & \\textbf{Isocentre} & 0.05 $\\pm$ 0.02 & 0.05 $\\pm$ 0.03 & 0.06 $\\pm$ 0.03 & 0.07 $\\pm$ 0.04\\\\\n \\textbf{Tracking error} & [$\\mathrm{mm}$] & (0.12) & (0.13) & (0.16) & (0.25)\\\\ \\cline{2-6}\n \\textbf{(Maximum error)} & \\textbf{FPD} & 0.22 $\\pm$ 0.11 & 0.23 $\\pm$ 0.13 & 0.27 $\\pm$ 0.15 & 0.32 $\\pm$ 0.20\\\\\n & [$\\mathrm{pixel}$] & (0.56) & (0.60) & (0.72) & (1.13)\\\\\n\\end{tabular}\n\\label{tab1}\n\\end{table*}\n\\end{ruledtabular}\n\n\\subsection{\\label{sec:level2}Epoxy phantom study}\n\n\\textbf{FIG. \\ref{fig10}} shows an example of a tracking image. There were no false detections in the search area using the 2- and 3-$\\mathrm{cm}$ tumours despite the tumour overlapping the spine in view 2. However, there were false detections in the search area in some frames using the 1-$\\mathrm{cm}$ tumours.\n\n\\textbf{FIG. \\ref{fig11}} shows the tracking error distribution for all frames. In the cases of the 2- and 3-$\\mathrm{cm}$ tumours, the tracking error distribution was almost \\textless 1 mm on the isocentre and almost \\textless 5 pixels on the FPD without effects of pixel resolution in both views.
However, with the 1-$\\mathrm{cm}$ tumours, the tracking error distribution extended beyond 1 mm and was worse than with the 2- and 3-$\\mathrm{cm}$ tumours. No common bias error across tumour sizes, such as manual positioning of the phantom would have caused, was detected.\n\n\\textbf{TABLE \\ref{tab2}} shows a summary of tracking accuracy. We achieved at least 94.7\\% tracking accuracy in 2- and 3-$\\mathrm{cm}$ tumours in spite of multi-modality DL using DRRs for training and X-ray images for tracking. However, tracking accuracy with the 1-$\\mathrm{cm}$ tumours was only 73.1\\% (view 1) and 40.3\\% (view 2).\n\n\\begin{ruledtabular}\n\\begin{figure*}\n \\begin{tabular}{cccc}\n & \\textbf{3-cm tumour} & \\textbf{2-cm tumour} & \\textbf{1-cm tumour}\\\\ \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 1}} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig10_upper_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig10_upper_middle.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig10_upper_right.png}}\n \\end{minipage} \\\\ \\hline \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 2}} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig10_lower_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig10_lower_middle.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig10_lower_right.png}}\n \\end{minipage} \\\\\n \\end{tabular}\n \n \\caption{\\label{fig:epsart}An example of a tracking image in an epoxy phantom study. The calculated tumour coordinate is in the centre of the green rectangle.
The red rectangle is the search area for a tumour.}\n \\label{fig10}\n\\end{figure*}\n\n\\begin{figure*}\n \\begin{tabular}{cccc}\n & \\textbf{3-cm tumour} & \\textbf{2-cm tumour} & \\textbf{1-cm tumour}\\\\ \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 1}} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig11_upper_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig11_upper_middle.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig11_upper_right.png}}\n \\end{minipage} \\\\ \\hline \\hline\n \\rotatebox[origin=c]{90}{\\textbf{View 2}} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig11_lower_left.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig11_lower_middle.png}}\n \\end{minipage} &\n \\begin{minipage}{0.33\\linewidth}\n \\centering\n \\scalebox{0.2}{\\includegraphics{fig11_lower_right.png}}\n \\end{minipage} \\\\\n \\end{tabular}\n \n \\caption{\\label{fig:epsart}The 2-dimensional tracking error distribution on flat panel detectors (FPD) for all frames (except \\textgreater 6-pixel errors) in the epoxy phantom study. The centre of a graph, the origin, means zero error. The scale units are one pixel on FPD. The red circle means 1 $\\mathrm{mm}$ error on the isocentre, the criteria of tracking accuracy.}\n \\label{fig11}\n\\end{figure*}\n\n\\begin{table*}\n\\caption{\\label{tab:table3}Summary of tracking accuracy in the epoxy phantom study. 
Tracking error shows `mean error' $\\pm$ `standard deviation' with (maximum error) for all frames.}\n\\begin{tabular}{cccccccc}\n & &\\multicolumn{3}{c}{\\textbf{View 1}} & \\multicolumn{3}{c}{\\textbf{View 2}}\\\\\n \\multicolumn{2}{c}{\\textbf{Tumour size}} & \\textbf{3 cm} & \\textbf{2 cm} & \\textbf{1 cm} & \\textbf{3 cm} & \\textbf{2 cm} & \\textbf{1 cm}\\\\ \\hline\n \\multicolumn{2}{c}{\\textbf{Tracking accuracy}} & 100\\% & 100\\% & 73.1\\% & 100\\% & 94.7\\% & 40.3\\%\\\\ \\hline\n & \\textbf{Isocentre} & 0.3 $\\pm$ 0.1 & 0.3 $\\pm$ 0.1 & 0.6 $\\pm$ 0.4 & 0.3 $\\pm$ 0.2 & 0.5 $\\pm$ 0.2 & 1.8 $\\pm$ 4.7\\\\\n \\textbf{Tracking error} & [$\\mathrm{mm}$] & (0.5) & (0.9) & (1.8) & (0.8) & (1.3) & (30.4)\\\\ \\cline{2-8}\n \\textbf{(Maximum error)} & \\textbf{FPD} & 1.3 $\\pm$ 0.4 & 1.5 $\\pm$ 0.6 & 2.9 $\\pm$ 1.9 & 1.5 $\\pm$ 0.8 & 2.4 $\\pm$ 1.1 & 8.4 $\\pm$ 21\\\\\n & [$\\mathrm{pixel}$] & (2.4) & (4.0) & (8.3) & (3.6) & (5.8) & (139.8)\\\\\n\\end{tabular}\n \\label{tab2}\n\\end{table*}\n\\end{ruledtabular}\n\n\\subsection{\\label{sec:level2}Processing time}\n\nThe personalized data generation processing took about 9 hours to output $156,250$ DRRs into a hard disk drive (HDD). This processing time is rate-limited by HDD access. The training process took about 17 hours, and is also rate-limited by HDD access. The tracking process (except image reading) took $32.5 \\pm 4.7$ $\\mathrm{ms\/frame}$ (30.8 $\\mathrm{fps}$) for real-time processing.\n\n\n\\section{\\label{sec:level1}DISCUSSION}\n\nWe demonstrated the potential feasibility of a real-time markerless tumour tracking framework for stereotactic lung radiotherapy, based on patient-specific DL with a personalized data generation strategy, using a digital phantom simulation study and an epoxy phantom study.
The personalized data generation and training process could be done end-to-end automatically between treatment planning and treatment, without manual procedures such as template creation just before the treatment process while a patient is positioned and fixed. Also, our framework does not require cone beam CT (CBCT) data taken while a patient is positioned. Therefore, our framework has the potential to improve treatment throughput and reduce patient burden, compared to some conventional markerless tracking frameworks which require manual procedures just before the treatment process or require CBCT\\cite{8Patel,9Shiinoki,10Teske,11Shieh}. Also, our framework using DL with a large personalized training data set has the potential to improve the robustness and accuracy of markerless tracking, compared to frameworks using conventional machine learning with a small personalized training data set\\cite{12Li}.\n\nThe digital simulation study had 100\\% tracking accuracy in spite of overlying bone. Also, as the tracking errors were significantly smaller than the displacements for training DRRs, they could be easily distinguished from each other. These results indicate that our method was able to learn tumour shape while ignoring overlying bone by using training data sets with a variety of overlying bone patterns. Also, we succeeded in tracking both spherical and ovoid tumour phantoms. This result indicates that our method has the potential to track a wide variety of tumour shapes.\n\nIn the epoxy phantom study, we achieved \\textgreater 94.7\\% tracking accuracy in 2- and 3-$\\mathrm{cm}$ tumours in spite of multi-modality DL using DRRs for training and X-ray images for tracking. This result indicates that we were able to make the network learn modality-invariant characteristics by using training data sets with a range of contrast and noise. This was achieved despite discrete 4D-CT phases and a narrower 4D-CT imaging range of tumour motion.
This result indicates its success at interpolation and extrapolation of tumour motion. Also, this was achieved without being affected by manual positioning error despite the fixed bone structures. This result indicates that our model did not learn only relative positional relationships between bone structures and a tumour. However, we need additional evaluation of tumour trajectories in all three dimensions, not only in the SI direction, in order to validate its robustness for irregular tumour motion compared with training data. We achieved this accuracy despite motion artefact from a tumour in 4D-CT. We consider that our method has the potential to detect tumours regardless of motion in 4D-CT or inter- and intra-fractional changes in tumour shape. However, we need additional evaluation with a retrospective clinical study to validate its robustness. In 1-$\\mathrm{cm}$ tumours, tracking was difficult because of the low contrast of the X-ray images due to scattered radiation. We require additional evaluation with 1-$\\mathrm{cm}$ `tumours' in the digital simulation study, which ignores scattered radiation. The tracking accuracy could be improved in these small masses by improving the contrast variation algorithm for training DRRs to simulate scattered radiation. Currently, we have calculated the tumour coordinate using only the present frame, without temporal information such as tumour coordinates in past frames. We may be able to reduce tumour detection inaccuracies by using temporal information. In this case, we can evaluate tracking accuracy by using irregular respiratory waveforms. \n\nIn the labelled image, the tumour area was labelled as the projection area of the tumour on the 4D-CT. However, in an actual treatment workflow, the tumour area can be labelled as the planning target volume (PTV) or clinical target volume (CTV) generated by treatment planning, by projecting the PTV or CTV onto a training DRR.
In this case, tumour contours can also be acquired by positioning the PTV or CTV on the tumour centre coordinate during tracking.\n\nWe consider that patient-specific DL has the potential to provide better accuracy for each patient than standard DL using multiple-patient data sets. Patient-specific DL can be considered overfitting to a specific patient. Generally speaking, standard DL provides good accuracy for all patients; such overfitting, conversely, provides better accuracy for the specific patient than standard DL does.\n\nThe personalized data generation process and training process took about 26 hours in total. These processes have to be done between treatment planning and treatment in the workflow. The training time of 26 hours might not be problematic because, generally, treatment commences about one week after treatment planning. However, a turnaround of more than 24 hours may be too long if our method is applied for retraining or fine tuning during the course of treatment, or, for instance, at an MRI linac.
We can substantially shorten these processing times by generating DRRs and training data sets in memory without rate-limiting HDD access.\n\nThe tracking process requires \\textgreater 15 $\\mathrm{fps}$ (\\textless 66.7 $\\mathrm{ms\/frame}$) real-time processing, so our tracking process of 32.5 $\\mathrm{ms\/frame}$ (30.8 $\\mathrm{fps}$) is sufficient.\n\n\n\\section{\\label{sec:level1}CONCLUSIONS}\n\nWe proved the potential feasibility of a real-time markerless tumour tracking framework for stereotactic lung radiotherapy based on patient-specific DL with personalized data generation by evaluating the tracking accuracy in both digital phantom and epoxy phantom studies.\n\n\n\\section*{\\label{sec:level1}CONFLICT OF INTEREST STATEMENT}\n\nWataru Takahashi and Shota Oshikawa are employees of Shimadzu Corporation, Japan.\n\n\n\\section*{\\label{sec:level1}FUNDING}\n\nThis work was supported by Shimadzu Corporation, Kyoto, Japan and National Institute of Radiological Sciences, Chiba, Japan.\n\n\n\\begin{acknowledgments}\nConstruction of the tracking accuracy evaluation system was supported by Ryuichi Ito, Seiji Yamanaka, and Yui Torigoe of Shimadzu Corporation. We thank Libby Cone, MD, MA, from DMC Corp. (\\url{http:\/\/www.dmed.co.jp\/}) for editing drafts of this manuscript.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Conclusion}\nThis work proposes a comprehensive analysis of how a DNN can describe emotional states. For this purpose, we first studied how many dimensions are sufficient to accurately represent an emotion resulting from a facial expression.
We then conclude that three dimensions are a good trade-off between accuracy and compactness, agreeing with the arousal-valence-dominance~\\cite{russell_circumplex_1980,mehrabian1996pleasure} psychological model.\nThereby, we came up with a DNN providing a 3-dimensional compact representation of emotion, learned in a multi-domain fashion on RAF~\\cite{li_reliable_2017}, SFEW~\\cite{dhall_static_2011} and AffectNet~\\cite{mollahosseini_affectnet:_2017}. We set up a comparison with the state of the art and showed that our model can compete with models having much larger feature sizes. This shows that larger representations are not necessary for emotion recognition. In addition, we implemented a visualization process enabling us to qualitatively evaluate the consistency of the compact features extracted from emotion faces by our model. We thus showed that DNNs trained on emotion recognition naturally learn an arousal-valence-like~\\cite{russell_circumplex_1980} encoding of the emotion. As future work, we plan to also apply state-of-the-art techniques -- such as Deep Locality Preserving Loss~\\cite{li_reliable_2017} or Covariance Pooling~\\cite{acharya_covariance_2018} -- to enhance our compact representation.
In addition, nothing warrants that the learned CAKE bears the same semantic meanings as arousal-valence-dominance does: further interpreting the perceived semantics of the dimensions would therefore be an interesting piece of work.\n\n\n\\section{Introduction}\n\\begin{figure}\n\\begin{floatrow}\n\\ffigbox[0.45\\textwidth]{\\includegraphics[width=\\linewidth]{Visualizations\/EmotionArouval_TEST_bigger-cropped.pdf}}{\\caption{Comparison of the discrete and continuous (arousal-valence) representations using AffectNet's annotations~\\cite{mollahosseini_affectnet:_2017}.}\\label{fig:affecnet_arouval}}\n\\ffigbox[.45\\textwidth]{\\input{plots\/courbeAVdim.tex}}{\\caption{Influence of adding supplementary dimensions to arousal-valence when predicting emotion on AffectNet~\\cite{mollahosseini_affectnet:_2017}.}\\label{fig:courbeAVdim}}\n\\end{floatrow}\n\\end{figure}\n\nFacial expression is one of the most used human means of communication after language. Thus, the automated recognition of facial expressions -- such as emotions -- has a key role in affective computing, and its development could benefit human-machine interactions. \n\nDifferent models are used to represent human emotion states. Ekman~\\textit{et al.}~\\cite{ekman_constants_1971} propose to classify the human facial expression resulting from an emotion into six classes (\\textit{resp.} happiness, sadness, anger, disgust, surprise and fear) supposed to be universal across cultures. This model has the benefit of simplicity but may not be sufficient to address the whole complexity of human affect. Moreover, it suffers from serious intra-class variations as, for instance, a soft smile and laughing equally belong to \\textit{happiness}. That is why Ekman's emotion classes are sometimes assembled into compound emotions~\\cite{du2014compound} (\\textit{e.g.} happily surprised).\nOthers have chosen to represent emotion with an n-dimensional continuous space, as opposed to Ekman's discrete classes.
Russell built the \\textit{Circumplex Model of Affect}~\\cite{russell_circumplex_1980}, in which emotion states are described by two values: arousal and valence.\n\\textit{Arousal} represents the excitation rate -- the higher the arousal, the more intense the emotion -- and \\textit{valence} defines whether the emotion has a positive or a negative impact on the subject. Russell suggests in \\cite{russell_circumplex_1980} that all Ekman's emotions~\\cite{ekman_constants_1971} and compound emotions can be mapped into the \\textit{circumplex model of affect}. Furthermore, this two-dimensional approach allows a more accurate specification of the emotional state, especially by taking its intensity into account.\n\nA third dimension has been added by Mehrabian~\\textit{et al.}~\\cite{mehrabian1996pleasure} -- the \\textit{dominance} -- which depends on the degree of control exerted by a stimulus. Last, Ekman and Friesen~\\cite{ekman_measuring_1976} came up with the \\textit{Facial Action Coding System} (FACS) using anatomically based action units. Developed for measuring facial movements, FACS is well suited for classifying facial expressions resulting from an affect. \n\nBased on these emotion representations, several large databases of face images have been collected and annotated according to emotion. EmotioNet~\\cite{benitez-quiroz_emotionet:_2016} gathers faces annotated with Action Units~\\cite{ekman_measuring_1976}; SFEW~\\cite{dhall_static_2011}, FER-13~\\cite{goodfellow_challenges_2013} and RAF~\\cite{li_reliable_2017} propose images in the wild annotated with basic emotions; AffectNet~\\cite{mollahosseini_affectnet:_2017} is a database annotated with both discrete emotions~\\cite{ekman_constants_1971} and arousal-valence~\\cite{russell_circumplex_1980}.\n\nThe emergence of these large databases has allowed the development of automatic emotion recognition systems, such as the recent approaches based on Deep Neural Networks (DNN).
AffectNet's authors~\\cite{mollahosseini_affectnet:_2017} use three AlexNet~\\cite{krizhevsky_imagenet_2017} to learn respectively emotion classes, arousal and valence. In \\cite{ng_deep_2015}, the authors make use of transfer learning to counteract the smallness of the SFEW~\\cite{dhall_static_2011} dataset, by pre-training their model on ImageNet~\\cite{deng_imagenet:_2009} and FER~\\cite{goodfellow_challenges_2013}. In \\cite{acharya_covariance_2018} authors implement {\\em Covariance Pooling} using second order statistics when training on emotion recognition (on RAF~\\cite{li_reliable_2017} and SFEW~\\cite{dhall_static_2011}). \n\nEmotion labels, FACS and continuous representations have their own benefits -- simplicity of the emotion classes, accuracy of the arousal-valence, objectivity of the FACS, \\textit{etc.} -- but also their own drawbacks -- imprecision, complexity, ambiguity, \\textit{etc}. \nTherefore several authors have tried to leverage the benefits of all these representations. Khorrami \\textit{et al.}~\\cite{khorrami_deep_2015} first showed that neural networks trained for expression recognition implicitly learn facial action units.\nContributing to highlighting the close relation between emotion and Action Units, Pons \\textit{et al.}~\\cite{pons_multi-task_2018} learned a multitask and multi-domain ResNet~\\cite{he_deep_2015} on both discrete emotion classes (SFEW~\\cite{dhall_static_2011}) and Action Units (EmotioNet~\\cite{benitez-quiroz_emotionet:_2016}). \nFinally, Li \\textit{et al.}~\\cite{li_reliable_2017} proposed a \"\\textit{Deep Locality-Preserving Learning}\" to handle the variability inside an emotion class, by making classes as compact as possible. \n\nIn this context, this paper focuses on the links between arousal-valence and discrete emotion representations for image-based emotion recognition. 
More specifically, the paper proposes a methodology for learning very compact embeddings, with no more than 3 dimensions, performing very well on the emotion classification task, making the visualization of emotions easy, and bearing similarity with the arousal-valence representation. \n\n\n\\section{Learning Very Compact Emotion Embeddings}\n\\label{methods}\n\\subsection{Some Intuitions About Emotion Representations}\n\\label{subsec:preliminary_study}\n\nWe first want to experimentally measure the dependence between emotion and arousal-valence as suggested in~\\cite{russell_circumplex_1980}. We thus display each sample of the AffectNet~\\cite{mollahosseini_affectnet:_2017} validation subset in the arousal-valence space and color them according to their emotion label (Figure~\\ref{fig:affecnet_arouval}). For instance, a face image labelled as \\textit{neutral} with an arousal and a valence of zero is located at the center of Figure~\\ref{fig:affecnet_arouval} and colored in blue. It clearly appears that a strong dependence exists between discrete emotion classes and arousal-valence. Obviously, it is due in part to the annotations of the AffectNet~\\cite{mollahosseini_affectnet:_2017} dataset, as the arousal-valence values have been constrained to lie in a predefined confidence area based on the emotion annotation. Nevertheless, this dependence agrees with the \\textit{Circumplex Model of Affect}~\\cite{russell_circumplex_1980}.\n\nTo evaluate further how the arousal-valence representation is linked to emotion labels, we train a classifier made of one fully connected layer\\footnote{By \"fully connected layer\" we denote a linear layer with biases and without activation function.} (fc-layer) to infer emotion classes from the arousal-valence values provided by the AffectNet~\\cite{mollahosseini_affectnet:_2017} dataset. We obtain an accuracy of 83\\%, confirming that arousal-valence can be an excellent \\textit{2-d} compact emotion representation.
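As an illustration, such a single fc-layer classifier can be sketched as follows; the data here is synthetic and purely hypothetical (a "neutral" cluster at the origin and six clusters on the unit circle), not AffectNet annotations, and only the setup -- a linear map with bias, no activation, trained with softmax cross-entropy -- follows the text:

```python
import numpy as np

# Synthetic, illustrative (arousal, valence) clusters: "neutral" at the
# origin and six other classes spread on the unit circle.
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
centres = np.vstack([[0.0, 0.0], np.c_[np.cos(angles), np.sin(angles)]])
n_per_class = 200
X = np.concatenate([c + 0.1 * rng.standard_normal((n_per_class, 2)) for c in centres])
y = np.repeat(np.arange(7), n_per_class)

# One fc-layer: logits = X W + b, trained by full-batch gradient descent
# on the softmax cross-entropy loss.
W, b = np.zeros((2, 7)), np.zeros(7)
for _ in range(2000):
    p = np.exp(X @ W + b)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(y.size), y] -= 1.0        # dLoss/dlogits for cross-entropy
    W -= 0.5 * X.T @ p / y.size
    b -= 0.5 * p.mean(axis=0)

train_acc = (np.argmax(X @ W + b, axis=1) == y).mean()
```

On such well-separated clusters the linear classifier reaches near-perfect accuracy, which is consistent with the idea that a 2-d space already carries most of the class information.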
\n\nThis raises the question of the optimality of this 2-\\textit{d} representation. Would adding a third dimension to arousal-valence make the classification performance better? To address this question, we used the 512-\\textit{d} hidden representation of a ResNet-18~\\cite{he_deep_2015} trained to predict discrete emotions on the AffectNet dataset~\\cite{mollahosseini_affectnet:_2017}. This representation is then projected into a more compact space using a fc-layer outputting $k$ dimensions, which are concatenated with the arousal-valence values. On top of this representation, we add another fc-layer predicting emotion classes. The two fc-layers are finally trained using the Adam optimizer~\\cite{kingma2014adam}.\nAdding 1 dimension to arousal-valence gives a gain of +3 points on the accuracy. It agrees with the assumption that a three-dimensional representation is more meaningful than a two-dimensional one~\\cite{mehrabian1996pleasure}. The benefit of adding more than 1 dimension is exponentially decreasing; with +512 dimensions, the gain is only +0.6 points compared to adding 1 dimension, as shown in Figure~\\ref{fig:courbeAVdim}.\n\nFrom these observations, the use of a compact representation seems to be consistent with discrete emotion classes, as it enables an accuracy of 83\\% and 86\\% -- respectively for a 2-\\textit{d} and a 3-\\textit{d} representation -- and it may even allow affect states to be described with more contrast and accuracy. \nEven if arousal-valence is a good representation for emotion recognition, the question of its optimality has not been answered by these preliminary experiments. In other words, is it possible to learn 2-\\textit{d} (or 3-\\textit{d}) embeddings better than those built on arousal-valence? We positively answer this question in Section~\\ref{subsec:method}.
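The wiring of this probing setup can be sketched as below; the 512-d features, the (arousal, valence) values and all weights are random placeholders, since only the data flow (project to $k$ dimensions, concatenate with arousal-valence, classify with a second fc-layer) is being illustrated, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, n_out):
    # "fc-layer" in the paper's sense: a linear map with bias and no
    # activation. Weights here are untrained random placeholders.
    W = rng.standard_normal((x.size, n_out)) / np.sqrt(x.size)
    return x @ W + 0.01 * rng.standard_normal(n_out)

feats = rng.standard_normal(512)   # stand-in for the ResNet-18 hidden features
av = rng.uniform(-1, 1, 2)         # stand-in for the (arousal, valence) pair

k = 1                              # number of extra learned dimensions
rep = np.concatenate([fc(feats, k), av])  # k-d projection + arousal-valence
logits = fc(rep, 7)                # second fc-layer scores the 7 classes
```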
\n\n\\subsection{Learning Compact and Accurate Representations of Emotions}\n\\label{subsec:method}\nBased on the previous observations, this section proposes a methodology for learning a compact embedding for emotion recognition from images.\n\\paragraph{Features extraction}\nThe basic input of our model is an image containing one face displaying a given emotion. We first extract 512-\\textit{d} features specialized in emotion recognition.\nTo do so, we detect the face, align its landmarks by applying an affine transform and crop the face region. The so-obtained face is then resized to $224\\times224$ and fed to a ResNet-18~\\cite{he_deep_2015} network (Figure~\\ref{fig:cross_arch}, \\textit{Features extraction}). The face image is augmented (\\textit{e.g.} jittering, rotation), mostly to take the face detector noise into account. We also use cutout~\\cite{devries_improved_2017} -- consisting in randomly cutting out a $45\\times45$-pixel patch from the image -- to regularize and improve the robustness of our model to facial occlusions.\nOur ResNet outputs 512-\\textit{d} features, on top of which a fc-layer can be added. At training time, we also use dropout~\\cite{srivastava2014dropout} regularization.\nThe neural network can be learned from scratch on two given tasks: discrete emotion classification or arousal-valence regression. \n\n\\paragraph{Compact emotion encoding}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{Diagrams\/model_schema.pdf}\n \\caption{Our approach's overview. Left: we use a ResNet-18 previously trained for discrete emotion recognition or arousal-valence regression to extract 512-d hidden representations from face images. Center: using these hidden representations, CAKE or AVk representations (center) are learned to predict discrete emotions. Right: the learning process is multi-domain, predicting emotions on three different datasets with three different classifiers.
Gray blocks are non-trainable weights while blue blocks are optimized weights.}\n \\label{fig:cross_arch}\n\\end{figure}\nCompact embedding is obtained by projecting the 512-\\textit{d} features provided by the ResNet-18 (pretrained on discrete emotion recognition) into smaller k-dimensional spaces (Figure~\\ref{fig:cross_arch}, \\textit{Emotion Encoding}) in which the final classification is done.\nThe $k$ features may be seen as a compact representation of the emotion, and the performance of the classifier can be measured for different values of $k$. CAKE-2, CAKE-3, \\textit{etc.}, denote such classifiers with $k=2$, $k=3$, \\textit{etc}.\n\nIn the same fashion we can train the ResNet-18 using arousal-valence regression. In this case, the so-obtained arousal-valence regressor can be used to infer arousal-valence values from novel images and concatenate them to the $k$ features of the embedding. Thus we reproduce here the exact experiment done in Section~\\ref{subsec:preliminary_study} in order to assess the benefit of a third (or more) dimension. The difference is that arousal-valence are not ground truth values but predicted ones. These methods are denoted as AV1, AV2, AV3, \\textit{etc.} for the different values of $k$.\n\n\\paragraph{Domain independent embedding}\nAs we want to ensure a generic compact enough representation, independent of the datasets, we learn the previously described model jointly on several datasets, without any further fine-tuning.\n\nOur corpus is composed of AffectNet~\\cite{mollahosseini_affectnet:_2017}, RAF~\\cite{li_reliable_2017} and SFEW~\\cite{dhall_static_2011}, labelled with seven discrete emotion classes: \\textit{neutral}, \\textit{happiness}, \\textit{sad}, \\textit{surprise}, \\textit{fear}, \\textit{disgust} and \\textit{anger}.\nOur training subset is composed of those of AffectNet (283901 elts., \\textit{95.9\\%} of total), RAF (11271 elts., \\textit{3.81\\%} of total) and SFEW (871 elts., \\textit{0.29\\%} of total). 
Our testing subset is composed of the subsets commonly used for evaluation in the literature (\\textit{validation} of SFEW and AffecNet, \\textit{test} of RAF).\n\nTo ease the multi-domain training, we first pre-train our features extractor model on AffectNet and freeze its weights. Then we apply the same architectures as described before, but duplicate the last fc-layer in charge of emotion classification in three dataset-specific layers (Figure~\\ref{fig:cross_arch}, \\textit{multi-domain learning}). The whole model loss is a modified softmax cross entropy defined as follows:\n\\begin{equation}\nLoss=\\frac{1}{N} \\sum_{i=1}^{N} \\sum_{j=1}^{3} w_{class}^{i,j} w_{dataset}^{j} \\ E(y^i,\\hat{y}^{i,j})\n\\end{equation}\nwhere $j$ is ranging in [AffectNet, RAF, SFEW], $y^i$ is the label of $i^{th}$ element, $\\hat{y}^{i,j}$ is the prediction of the $j^{th}$ classifier on the $i^{th}$ element, E is the softmax cross entropy loss, $N$ is the number of elements in the batch, $w_{class}^i$ is a weight given to the $i^{th}$ element of the batch depending on its emotion class and $w_{dataset}^{j}$ is a weight given to the $j^{th}$ classifier prediction.\nEach sample of the multi-domain dataset is identified according to its original database, allowing to choose the correct classifier's output when computing the softmax cross entropy.\n\nThe $ w_{class}$ weight is defined as:\n$\n w_{class}^{i,j} = \\frac{N_{total}^j}{N_{class}^{i,j} \\times nbclass}\n$\nwhere $N_{total}^j$ is the number of elements in the $j^{th}$ dataset, $N_{class}^{i,j}$ is the number of elements in the class of the $i^{th}$ element of the $j^{th}$ dataset and $nbclass$ is the number of classes (7 in our case). 
The goal here is to fix the important class imbalance in the dataset by forcing to fit the uniform distribution, as previously done by \\cite{mollahosseini_affectnet:_2017}.\n\nThe $w_{dataset}$ weight permits to take the imbalance between dataset's sizes into account.\n\\begin{align}\n w_{dataset}^{j}= \\left\\{\n \\begin{array}{cl}\n \\frac{1}{\\log N_{total}^{j}} & sample \\in j^{th} dataset\\\\\n 0 & sample \\notin j^{th} dataset\n\\end{array}\n\\right.\n\\end{align}\n\nWe thus define a global loss enabling to optimize the last two layers of our model (namely \\textit{Emotion Encoding} and \\textit{Multi-domain Learning} in Figure~\\ref{fig:cross_arch}) on the three datasets at the same time. The dimension $k$ (or $k+2$ in the case of the arousal-valence approach) can easily be changed and help to evaluate the interest of supplementary dimensions for emotion representation. \n\n\n\n\\section{Experiments}\n\\label{results}\n\\subsection{Evaluation Metrics}\nWe measure the classification performance with the \\textit{accuracy} and the \\textit{\\textbf{macro} F1 Score}~\\eqref{eq:f1}. \\textit{Accuracy} measures the number of correctly classified samples. Instead of accuracy, we prefer \\textit{macro F1 score} which gives the same importance to each class:\n\\begin{equation}\\begin{aligned}\nF_{1macro}=\\frac{1}{N_c}\\sum_{i}^{N_c}F_{1i} \\quad\nF_{1i}=2\\frac{prec_i \\cdot rec_i}{prec_i+rec_i} \\quad\nprec_i=\\frac{tp_i}{tp_i+fp_i} \\quad\nrec_i=\\frac{tp_i}{tp_i+fn_i} \\quad\n\\label{eq:f1}\n\\end{aligned}\n\\end{equation}\nwhere $i$ is the class index; $prec_i$, $rec_i$ and $F_{1i}$ are the precision, the recall and the F1-score of class $i$; $N_c$ is the number of classes; $tp$, $fp$ and $fn$ are the true positives, false positives and false negatives rates. 
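The class weight, the dataset weight and the macro F1 metric defined above can be sketched directly; the dataset sizes are the training-split sizes quoted in the text, while the per-class count used in the example call is an invented placeholder:

```python
import numpy as np

# Training-split sizes quoted in the text.
N_TOTAL = {"AffectNet": 283901, "RAF": 11271, "SFEW": 871}
NB_CLASS = 7

def w_dataset(j):
    # 1 / log(N_total^j) for a sample belonging to dataset j
    # (the weight is 0 for the other two classifier heads).
    return 1.0 / np.log(N_TOTAL[j])

def w_class(n_class_ij, j):
    # N_total^j / (N_class^{i,j} * nbclass): rare classes get larger weights,
    # pushing the effective class distribution towards uniform.
    return N_TOTAL[j] / (n_class_ij * NB_CLASS)

def macro_f1(y_true, y_pred, n_classes=NB_CLASS):
    # Unweighted mean of per-class F1 scores, as in the metric above.
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

# Example: a RAF sample whose class has, say, 500 training images (placeholder).
w = w_class(500, "RAF") * w_dataset("RAF")
```

Note that the smaller the dataset, the larger its $1/\log N$ weight, which partially compensates SFEW's tiny share of the corpus.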
All scores are averaged on 10 runs, with different initializations, and given with associated standard deviations, on our multi-domain testing subset.\n\n\\subsection{Compactness of the Representation}\n\\label{subsec:compactness}\n\\begin{figure}\n\\begin{floatrow}\n\\ffigbox[.5\\textwidth]{\\input{plots\/compare_dim.tex}\n}{\\caption{Influence of representation size on the multi-domain F1 score.}\\label{fig:repsize}}\n\n\\capbtabbox{\\input{plots\/multiDomainRes.tex}}{\\caption{Evaluation of compact representations on AffectNet, SFEW, RAF.}\n\\label{table:compact}}\n\\end{floatrow}\n\\end{figure}\n\nWe first evaluate the quality of the representations in a multi-domain setting. Table~\\ref{table:compact} reports the F1-score of CAKE-2, AV, CAKE-3 and AV1 trained on three datasets with three different classifiers, each one being specialized on a dataset as explained in Section~\\ref{methods}. Among the 2-\\textit{d} models (AV and CAKE-2), AV is better, taking benefits from the knowledge transferred from the AffectNet dataset. This is not true anymore for the 3D models, where CAKE-3 is better than AV1, probably because of its greater number of trainable parameters.\n\nTo validate the hypothesis of the important gain brought by adding a third dimension, we run the \"CAKE\" and \"AVk\" experiments with different representation sizes. To simplify the analysis of the results, we plot in Figure~\\ref{fig:repsize} a multi-domain F1-score, \\textit{i.e.} the weighted average of the F1-scores according to the respective validation set sizes.\nWe observe that the gain in multi-domain F1-score is exponentially decreasing for both representations -- note that the representation size axis is in log scale -- and thus the performance gap between a representation of size 2 and size 3 is the more important. We also observe that \"CAKE\" representations still seem to yield better results than \"AVk\" when the representation size is greater than 2. 
\n\nThis first experiment shows that a very compact representation can yield good performance for emotion recognition. It is also in line with the \"dominance\" dimension hypothesis, as a third dimension brought the most significant gain in performance. After 3 dimensions, the gain is much less significant.\n\n\\subsection{Accuracy of the Representation}\nTo evaluate the efficiency of the CAKE-3 compact representation, we compare its accuracy with \nstate-of-the-art approaches (Table~\\ref{table:comparison}) on the public datasets commonly used in the literature for evaluation (\\textit{validation} of SFEW and AffectNet, \\textit{test} of RAF). In order to get a fair comparison, we add a \\textit{\"Rep. Dim.\"} column corresponding to the size of the last hidden representation -- concretely, we take the penultimate fully connected output size.\nWe report the scores under the literature's metrics, namely the mean of the per-class recall for RAF~\\cite{li_reliable_2017} and the accuracy for SFEW~\\cite{dhall_static_2011} and AffectNet~\\cite{mollahosseini_affectnet:_2017}. To the best of the authors' knowledge, no other model has been evaluated before on AffectNet's seven classes.\n\nCAKE-3 is outperformed by Covariance Pooling~\\cite{acharya_covariance_2018} and Deep Locality Preserving~\\cite{li_reliable_2017}.\nNevertheless, it is still competitive as the emotion representation is far more compact -- 3-\\textit{d} \\textit{versus} 2000-\\textit{d} -- and learned in a multi-domain fashion. Moreover, we gain 1 point on RAF when we compare to models of the same size (2 million parameters), \\textit{e.g.} \\textit{Compact Model}~\\cite{kuo2018compact}. These results support the conclusion made in Section~\\ref{subsec:compactness}, as we show that a compact representation of the emotion learned by small models is competitive with larger representations.
This finally underlines that facial expressions may be encoded efficiently into a 3-\\textit{d} vector and that using a large embedding on small datasets may lead to exploit biases of the dataset more than to learn emotion recognition.\n\n\\begin{table}\n\\begin{tabular}{c|c|ccc|}\n\\cline{2-5}\n & Rep. Dim. & \\multicolumn{1}{c}{RAF~\\cite{li_reliable_2017}} & \\multicolumn{1}{c}{SFEW~\\cite{dhall_static_2011}} & AffectNet~\\cite{mollahosseini_affectnet:_2017} \\\\ \\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{Covariance Pooling~\\cite{acharya_covariance_2018}}} & 2000 & 79.4 & - & - \\\\ \\cline{2-5} \n\\multicolumn{1}{|c|}{} & 512 & - & 58.1 & - \\\\ \\hline\n\\multicolumn{1}{|c|}{Deep Locality Preserving~\\cite{li_reliable_2017}} & 2000 & 74.2 & 51.0 & - \\\\ \\hline\n\\multicolumn{1}{|c|}{Compact Model~\\cite{kuo2018compact}} & 64 & 67.6 & - & - \\\\ \\hline\n\\multicolumn{1}{|c|}{VGG\\cite{li_reliable_2017}} & 2000 & 58.2 & - & - \\\\ \\hline\n\\multicolumn{1}{|c|}{Transfer Learning~\\cite{ng_deep_2015}} & 4096 & - & 48.5 & - \\\\ \\hline\\hline\n\\multicolumn{1}{|c|}{{ours (CAKE-3)}} & {3} & {68.9} & {44.7} & {58.2} \\\\ \\hline\n\\multicolumn{1}{|c|}{{ours (Baseline)}} & {512} & {71.7} & {48.7} & {61.7} \\\\ \\hline\n\\end{tabular}\n\\caption{Accuracy of our model regarding state-of-the-art methods. The size of the representation is taken into account. Metrics are the average of per class recall for RAF and accuracy for SFEW and AffectNet.}\n\\label{table:comparison}\n\\end{table}\n\nOur experiments also allow to perform a cross-database study as done in \\cite{li_reliable_2017}. This study consists in evaluating a model trained on dataset B on a dataset A. Thereby we obtain Table~\\ref{table:cross_f1} with the evaluation of each classifier on each dataset.\nResults on SFEW~\\cite{dhall_static_2011} -- trained or evaluated -- are constantly lower than others, with a higher standard deviation. 
This could be due to the insufficient number of samples in the SFEW training set or more probably to the possible ambiguity in the annotation of SFEW compared to AffectNet and RAF. Confirming this last hypothesis, the \\textit{RAF classifier} has the better generalization among the datasets. It is in line with the claim of Li~\\textit{et al.}~\\cite{li_reliable_2017} that RAF has a really reliable annotation with a large consensus between different annotators. Finally, it also underlines the difficulty to find a reliable evaluation of an emotion recognition system because of the important differences between datasets annotations.\n\n\\begin{table}\n\\begin{tabular}{cl|l|l|l|}\n\\cline{3-5}\n\\multicolumn{1}{l}{} & & \\multicolumn{3}{c|}{Dataset} \\\\ \\cline{3-5} \n\\multicolumn{1}{l}{} & & AffectNet & SFEW & RAF \\\\ \\hline\n\\multicolumn{1}{|c|}{\\multirow{3}{*}{Classifier}} & AffectNet & \\textbf{58.1 } \\textit{($\\pm$ 0.5)} & 27.6 \\textit{($\\pm$ 2.6)} & 53.8 \\textit{($\\pm$ 0.6)} \\\\ \\cline{2-5} \n\\multicolumn{1}{|c|}{} & SFEW & 35.1 \\textit{($\\pm$ 2.1)} & \\textbf{34.1 } \\textit{($\\pm$ 1.0)} & 47.3 \\textit{($\\pm$ 1.2)} \\\\ \\cline{2-5} \n\\multicolumn{1}{|c|}{} & RAF & 51.8 \\textit{($\\pm$ 0.4)} & 31.5 \\textit{($\\pm$ 1.7)} & \\textbf{64.4 } \\textit{($\\pm$ 0.6)} \\\\ \\hline\n\\end{tabular}\n\\caption{Cross-database evaluation on CAKE-3 model (F1-Score).}\n\\label{table:cross_f1}\n\\end{table}\n\\subsection{Visualizing Emotion Maps}\nVisualizations are essential to better appreciate how DNN performs classifications, as well as to visualize emotion boundaries and their variations across datasets. Our visualization method consists in densely sampling the compact representation space -- 2-\\textit{d} or 3-\\textit{d} -- into a mesh grid, and feeding it to a formerly trained model -- AV, CAKE-2 or CAKE-3 -- in order to compute a dense map of the predicted emotions. 
Not all the coordinates of the mesh grid correspond to real emotions, and some of them would never occur in real applications. \n\nThe construction of the mesh grid depends on the model to be used. \nFor the AV and the CAKE-2 models, we have simply built it using 2-\\textit{d} vectors with all values ranging in intervals containing the maximum and minimum values of the coordinates observed with real images. As the CAKE-3 model deals with a three-dimensional representation, it is not possible to visualize it directly on a planar figure.\nTo overcome this issue we modify CAKE-3 into a CAKE-3-Norm representation where all the coordinates are constrained to lie on the surface of the unit sphere, and visualize spherical coordinates. \nEven if CAKE-3-Norm shows lower performance (about 2 points below CAKE-3), the visualization is still interesting, giving some insight into what has really been learned.\n\nFigure~\\ref{fig:visu} shows the visualization results for the CAKE-3-Norm, AV and CAKE-2 representations (\\textit{resp.} from top to bottom).\nEach dot is located by the coordinates of its compact representation -- \\((arousal, valence)\\) for AV, \\((k_1, k_2)\\) for CAKE-2 and spherical coordinates ($\\phi$ and $\\theta$) for CAKE-3-Norm -- and colored according to the classifier prediction. The per-class macro F1-score is displayed inside each emotion area.\n\nFirst, each compact representation -- CAKE-2, CAKE-3-Norm and AV -- exhibits a strong consistency across the datasets (in Figure~\\ref{fig:visu}, compare visualizations on the same row). Indeed, the three classifiers show a very similar organization of the emotion classes, which demonstrates the reliability of the learned representation. Notably, the \\textit{neutral} class -- in blue -- is always placed at the origin and tends to neighbor all other classes. This is in line with the idea of neutral as an emotion with a very low intensity.
Nevertheless, we can observe small inter-dataset variations, especially on SFEW~\\cite{dhall_static_2011} (in Figure~\\ref{fig:visu}, middle column) with \\textit{disgust} and \\textit{fear} -- \\textit{resp.} brown and purple -- which are almost missing. This underlines the disparities of annotations across the datasets and confirms the need for multi-domain frameworks to achieve a more general emotion recognition model.\n\nSecond, we can analyze variations between the different representations for a given dataset (in Figure~\\ref{fig:visu}, compare visualizations on the same column). As AV is based on arousal-valence, we observe the same emotion organization as in Figure \\ref{fig:affecnet_arouval}. In particular, as the majority of AffectNet's training (and validation) samples have a positive arousal, the classifier does not use the whole space (in Figure~\\ref{fig:visu}, second row: see green, blue and orange areas), unlike CAKE-2 and CAKE-3 which are not constrained by arousal-valence. \n\nWe can find many similarities between these three representations, but the most striking appear when comparing CAKE-2 and AV. Despite the difference in scaling -- which causes the \\textit{neutral} area (blue) to be smaller in CAKE-2 -- the AV and CAKE-2 compact representations are very close. Indeed, the class areas are organized in exactly the same fashion. The only difference is that for AV they are arranged in a clockwise order around \\textit{neutral} whereas for CAKE-2 they are arranged in an anticlockwise order. This observation shows that a DNN trained on emotion recognition classification is able to learn an arousal-valence-like representation of the emotion.
It contributes -- along with Khorrami~\\cite{khorrami_deep_2015}, who points out that DNNs trained to recognize emotions are learning action units~\\cite{ekman_measuring_1976} -- to bringing the dependence between emotion representations to the forefront.\n\n\\begin{figure}[hb]\n \\begin{minipage}{.86\\linewidth}\n \\includegraphics[width=\\linewidth]{Visualizations\/gridEval_K3_nolegend-cropped.pdf}\n \\end{minipage}\n \\begin{minipage}{.86\\linewidth}\n \\includegraphics[width=\\linewidth]{Visualizations\/gridEval_AV_nolegend-cropped.pdf}\n \\end{minipage}\n \\begin{minipage}{.86\\linewidth}\n \\includegraphics[width=\\linewidth]{Visualizations\/gridEval_K2-cropped_v2-cropped.pdf}\n \\end{minipage}\n \\caption{Visualization of CAKE-3-Norm, AV and CAKE-2. Rows indicate the evaluated representation -- \\textit{resp.} from top to bottom: CAKE-3-Norm, AV, CAKE-2 -- and columns indicate the datasets -- \\textit{resp.} from left to right: AffectNet~\\cite{mollahosseini_affectnet:_2017}, SFEW~\\cite{dhall_static_2011} and RAF~\\cite{li_reliable_2017}.}\n \\label{fig:visu}\n\\end{figure}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn recent years there has been a renewed interest in the spectral \ntheory of non-self-adjoint operators where, as opposed to the self-adjoint \ncase, the norm of the resolvent can be very large even far away from \nthe spectrum. Equivalently, the spectrum of such operators can be \nhighly unstable even under very small perturbations of the operator. \n\\par\nEmphasized by the works of L.N. Trefethen and M. Embree, see for example \n\\cite{TrEm05}, E.B. Davies, M. Zworski and many others \n\\cite{Da97,Da99,NSjZw04,ZwChrist10,DaHa09}, \nthe phenomenon of spectral instability of non-self-adjoint operators has become \na popular and vital subject of study.
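As a concrete numerical illustration of this instability (our own sketch, not from the paper): for an $N\times N$ Jordan block $A_0$, the point $z=1/2$ lies at distance $1/2$ from the spectrum $\{0\}$, yet the smallest singular value of $A_0-z$, i.e. $1/\|(A_0-z)^{-1}\|$, is of order $|z|^N$, far below machine precision already for moderate $N$:

```python
import numpy as np

N = 60
A0 = np.diag(np.ones(N - 1), k=1)    # N x N nilpotent Jordan block, spectrum {0}
z = 0.5                              # distance 1/2 from the spectrum

# Smallest singular value of A0 - z, i.e. 1 / ||(A0 - z)^{-1}||.
s_min = np.linalg.svd(A0 - z * np.eye(N), compute_uv=False)[-1]

# The true value is of order |z|^N, here about 1e-18: the resolvent is
# enormous even though z is far from the spectrum (spectral instability).
print(s_min)
```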
In view of this it is very natural to add \nsmall random perturbations.\n\\par\nOne line of recent research concerns the case of elliptic (pseudo)\\-differential \noperators subject to small random perturbations, cf. \n\\cite{BM,Ha06,Ha06b,HaSj08,SjAX1002,Vo14}.\n\\subsection{Perturbations of Jordan blocks}\nIn this paper we shall study the spectrum of a random perturbation of\nthe large Jordan block $A_0$ :\n\\begin{equation}\\label{jb}\nA_0=\\begin{pmatrix}0 &1 &0 &0 &...&0\\\\\n 0 &0 &1 &0 &...&0\\\\\n 0 &0 &0 &1 &...&0\\\\\n . &. &. &. &...&.\\\\\n 0 &0 &0 &0 &...&1\\\\\n 0 &0 &0 &0 &...&0\n\\end{pmatrix}: {\\mathds{C}}^N\\to {\\mathds{C}}^N.\n\\end{equation}\nPerturbations of a large Jordan block have already been studied, \ncf. \\cite{SjZw07,Zw02,DaHa09,GuMaZe14}.\n\\begin{itemize}\n\\item M.~Zworski \\cite{Zw02} noticed that for every $z\\in D(0,1)$,\n there are associated exponentially accurate quasi-modes when $N\\to\n \\infty $. Hence the open unit disc is a region of spectral \n instability.\n\\item We have spectral stability (a good resolvent estimate) in\n${\\mathds{C}}\\setminus \\overline{D(0,1)}$, since $\\| A_0\\| =1$.\n\\item $\\sigma (A_0)=\\{0 \\}$. \n\\end{itemize}\nThus, if $A_\\delta =A_0+\\delta Q$ is a small (random) perturbation of\n$A_0$ we expect the eigenvalues to move inside a small neighborhood of $\\overline{D(0,1)}$.\n\\par\nIn the special case when $Qu=(u|e_1)e_N$, where $(e_j)_1^N$ is the canonical\nbasis in ${\\mathds{C}}^N$, the eigenvalues of $A_\\delta $ are of the form \n$$\n\\delta ^{1\/N}e^{2\\pi ik\/N},\\ k\\in {\\mathds{Z}}\/N{\\mathds{Z}},\n$$\nso if we fix $0<\\delta \\ll 1$ and let $N\\to \\infty $, the spectrum\n``will converge to a uniform distribution on $S^1$''.\n\\par E.B.~Davies and M.~Hager \\cite{DaHa09} studied random\nperturbations of $A_0$. 
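The rank-one example above is easy to verify numerically (our own sketch, not part of the paper): with $Qu=(u|e_1)e_N$ the characteristic polynomial of $A_\delta$ is $\lambda^N-\delta$, so the spectrum consists of the $N$-th roots of $\delta$, equally spaced on the circle of radius $\delta^{1/N}$:

```python
import numpy as np

N, delta = 20, 1e-3
A0 = np.diag(np.ones(N - 1), k=1)        # Jordan block
Q = np.zeros((N, N))
Q[-1, 0] = 1.0                           # Qu = (u|e_1) e_N
w = np.linalg.eigvals(A0 + delta * Q)    # roots of lambda^N = delta

radii = np.abs(w)                        # all equal to delta**(1/N) ~ 0.708
angles = np.sort(np.mod(np.angle(w), 2 * np.pi))  # equally spaced by 2*pi/N
print(radii.min(), radii.max())
```

Fixing $\delta$ and letting $N\to\infty$, the radius $\delta^{1/N}\to 1$, which is the sense in which the spectrum converges to a uniform distribution on $S^1$.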
They showed that with probability close to 1,\nmost of the eigenvalues are close to a circle:\n\\begin{thm}\\label{th1}\nLet $A=A_0+\\delta Q$, $Q=(q_{j,k}(\\omega ))$ where $q_{j,k}$ are\nindependent and identically distributed random variables \n$\\sim {\\mathcal{N}}_{{\\mathds{C}}}(0,1)$.\nIf $0<\\delta \\le N^{-7}$, $R=\\delta\n^{1\/N}$, $\\sigma >0$, then with probability $\\ge 1-2N^{-2}$, we have\n$\\sigma (A_\\delta )\\subset D(0,RN^{3\/N})$ and\n$$\n\\# (\\sigma (A_\\delta )\\cap D(0,Re^{-\\sigma }))\\le \\frac{2}{\\sigma\n}+\\frac{4}{\\sigma }\\ln N.\n$$\n\\end{thm}\n A recent result by A.~Guionnet, P.~Matchett Wood and\n O.~Zeitouni \\cite{GuMaZe14} implies that when $\\delta $ is\n bounded from above by $N^{-\\kappa -1\/2}$ for some $\\kappa >0$ and\n from below by some negative power of $N$, then \n %\n$$\n\\frac{1}{N}\\sum_{\\mu \\in \\sigma (A_{\\delta})}\\delta (z-\\mu )\\to \\hbox{the\n uniform measure on }S^1,\n$$\nweakly in probability.\n \\\\\n \\par\n The main purpose of this paper is to obtain, for a small coupling \n constant $\\delta$, more information about the distribution of \n eigenvalues of $A_\\delta$ in the interior of a disc, where the \n result of Davies and Hager only yields a logarithmic upper bound \n on the number of eigenvalues; see Theorem \\ref{ed5} below.\n %\n \\par\n %\n In order to obtain more information in this\n region, we will study the expected eigenvalue density, adapting the\n approach of \\cite{Vo14}. (For random polynomials and Gaussian analytic \n functions such results are more classical, \n \\cite{Kac43,SZ03,HoKrPeVi09,So00,ShZe98,Sh08}.) \n \\\\\n \\par \n \\textit{Acknowledgments.}\nWe would like to thank St\\'ephane Nonnenmacher for his observation \non the relation of the density in Theorem \\ref{ed5} with the Poincar\\'e \nmetric.
The first author was partially supported by the project ANR \nNOSEVOL $2011$ BS $010119$ $01$.\n\\section{Main result}\\label{mres}\nLet $0< \\delta \\ll 1$ and consider the following random \nperturbation of $A_0$ as in \\eqref{jb}: \n\\begin{equation}\\label{jb.p}\n A_{\\delta} = A_0 + \\delta Q, \\quad Q=(q_{j,k})_{1 \\leq j,k\\leq N},\n\\end{equation}\nwhere $q_{j,k}$ are independent and identically distributed \ncomplex random variables, following the complex Gaussian law \n$\\mathcal{N}_{\\mathds{C}}(0,1)$.\n\\par\nIt has been observed by Bordeaux-Montrieux \\cite{BM} that we have the following \nresult.\n\\begin{prop}\nThere exists a $C_0>0$ such that the following holds: Let \n$X_j\\sim \\mathcal{N}_{\\mathds{C}}(0,\\sigma_j^2)$, $1\\leq j\\leq N <\\infty$ be \nindependent complex Gaussian random \nvariables. Put $s_1=\\max \\sigma_j^2$. Then, for every $x>0$, we \nhave \n %\n \\begin{equation*}\n \\mathds{P}\\left[\\sum_{j=1}^N |X_j|^2 \\geq x\\right]\n \\leq \\exp\\left( \n \\frac{C_0}{2s_1} \\sum_{j=1}^N \\sigma_j^2 - \\frac{x}{2s_1}\n \\right).\n \\end{equation*}\n\\end{prop}\nAccording to this result we have \n$$\nP(\\Vert Q\\Vert_\\mathrm{HS}^2\\ge x)\\le \\exp \\left(\\frac{C_0}{2}N^2-\\frac{x}{2} \\right)\n$$\nand hence if $C_1>0$ is large enough,\n\\begin{equation}\\label{pj.28}\n\\Vert Q\\Vert_\\mathrm{HS}^2\\le C_1^2N^2,\\hbox{ with probability }\\ge 1-e^{-N^2}.\n\\end{equation}\nIn particular (\\ref{pj.28}) holds for the ordinary operator norm of $Q$. \nWe now state the principal result of this work.\n\\begin{thm}\\label{ed5}\nLet $A_{\\delta}$ be the $N\\times N$-matrix in \\eqref{jb.p} and restrict the\nattention to the parameter range\n$e^{-N\/{\\mathcal{O}}(1)}\\le \\delta \\ll 1$, $N\\gg 1$.
Let $r_0$ belong to a\nparameter range,\n$$\n\\frac{1}{{\\mathcal{O}}(1)}\\le r_0\\le 1-\\frac{1}{N},\n$$\n\\begin{equation}\\label{ed.60}\n\\frac{r_0^{N-1}N}{\\delta}(1-r_0)^2+\\delta N^3\\ll 1,\n\\end{equation}\nso that $\\delta \\ll N^{-3}$. Then, for all $\\varphi\\in\\mathcal{C}_0(D(0,r_0-1\/N))$\n %\n \\begin{equation*}\\label{ed.61}\n \\mathds{E}\\left[ \n \\mathds{1}_{B_{\\mathds{C}^{N^2}}(0,C_1N)}(Q) \n \\sum_{\\lambda\\in\\sigma(A_{\\delta})}\n \\varphi(\\lambda)\n \\right]\n =\n \\frac{1}{2\\pi}\\int\\varphi(z)\\Xi(z) L(dz),\n \\end{equation*}\n %\n where \n %\n \\begin{equation*}\n \\Xi(z) = \\frac{4}{(1-|z|^2)^2}\n \\left(\n 1 + \n \\mathcal{O}\\!\\left(\\frac{|z|^{N-1}N}{\\delta}(1-|z|)^2\n\t +\\delta N^3\\right)\n \\right).\n \\end{equation*}\nis a continuous function independent of $r_0$. \n$C_1>0$ is the constant in (\\ref{pj.28}).\n\\end{thm}\nCondition \\eqref{ed.60} is equivalent to\n\\begin{equation*}\n r_0^{N-1}(1-r_0)^2 \\ll \\frac{\\delta}{N}\\left( 1 - \\delta N^3\\right).\n\\end{equation*}\nIt is necessary that $r_0 < 1 - 2(N+1)^{-1}$ for this inequality to \nbe satisfied. For such $r_0$ the function $[0,r_0]\\ni r \\mapsto r^{N-1}(1-r)^2$ \nis increasing, and so inequality \\eqref{ed.60} is preserved if we replace $r_0$ by \n$|z|\\leq r_0$.\n\\\\\n\\par\nThe leading contribution of the density $\\Xi(z)$ is \nindependent of $N$ and is equal to the Lebesgue density \nof the volume form induced by the Poincar\\'e metric on the \ndisc $D(0,1)$. This yields a very small density of \neigenvalues close to the center of the disc $D(0,1)$ which is, \nhowever, growing towards the boundary of $D(0,1)$.\n\\\\\n\\par\nA similar result has been obtained by M. Sodin and B. 
Tsirelson \nin \\cite{SoTs04} for the distribution of zeros of a certain class \nof random analytic functions with domain $D(0,1)$ linking the \nfact that the density is given by the volume form induced by \nthe Poincar\\'e metric on $D(0,1)$ to its invariance under \nthe action of $SL_2(\\mathds{R})$.\n\\subsection{Numerical Simulations}\nTo illustrate the result of Theorem \\ref{ed5}, we present the \nfollowing numerical calculations (Figure \\ref{fig:e5_e4} and \n\\ref{fig:e3_e2}) for the eigenvalues of the \n$N\\times N$-matrix in \\eqref{jb.p}, where $N=500$ and the coupling \nconstant $\\delta$ varies from $10^{-5}$ to $10^{-2}$. \n\\begin{figure}[ht]\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig_e5.pdf}\n \\end{minipage}\n \\hspace{0cm}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering \n \\includegraphics[width=\\textwidth]{fig_e4.pdf}\n \\end{minipage}\n \\caption{On the left hand side $\\delta=10^{-5}$ and \n on the right hand side $\\delta=10^{-4}$.}\n \\label{fig:e5_e4}\n\\end{figure}\n\\begin{figure}[ht]\n \\begin{minipage}[b]{0.44\\linewidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig_e3.pdf}\n \\end{minipage}\n \\hspace{0cm}\n \\begin{minipage}[b]{0.54\\linewidth}\n \\centering \n \\includegraphics[width=\\textwidth]{fig_e2.pdf}\n \\end{minipage}\n \\caption{On the left hand side $\\delta=10^{-3}$ and \n on the right hand side $\\delta= 10^{-2}$.}\n \\label{fig:e3_e2}\n\\end{figure}\n\\section{A general formula}\n\\setcounter{equation}{0}\nTo start with, we shall obtain a general formula (due to \\cite{Vo14}\nin a similar context). 
Our treatment is slightly different in that we\navoid the use of approximations of the delta function and also that we\nhave more holomorphy available.\n\\par\nLet $g(z,Q)$ be a holomorphic function on $\\Omega \\times W\\subset {\\mathds{C}}\n\\times {\\mathds{C}}^{N^2}$, where $\\Omega\\subset\\mathds{C}$, $W\\subset {\\mathds{C}}^{N^2}$ \nare open bounded and connected. Assume that\n\\begin{equation}\\label{ed.1} \\hbox{for every }Q\\in W,\\\n g(\\cdot ,Q)\\not\\equiv 0.\n\\end{equation}\nTo start with, we also assume that \n\\begin{equation}\\label{ed.2} \n\\hbox{for almost all }Q\\in W,\\ g(\\cdot ,Q) \\hbox{ has only simple zeros.}\n\\end{equation}\nLet $\\phi \\in C_0^\\infty (\\Omega )$ and let $m\\in C_0(W)$. We are interested in \n\\begin{equation}\\label{ed.3}\nK_\\phi =\\int \\left(\\sum_{z;\\, g(z,Q)=0} \\phi (z)\\right) m(Q)L(dQ),\n\\end{equation}\nwhere we frequently identify the Lebesgue measure with a differential\nform,\n\\begin{equation*}\nL(dQ)\\simeq (2i)^{-{N^2}}d\\overline{Q}_1\\wedge dQ_1\\wedge ... \\wedge\nd\\overline{Q}_{N^2}\\wedge dQ_{N^2} =:(2i)^{-{N^2}}d\\overline{Q}\\wedge dQ.\n\\end{equation*}\nIn (\\ref{ed.3}) we\ncount the zeros of $g(\\cdot ,Q)$ with their multiplicity and notice\nthat the integral is finite: For every compact set $K\\subset W$ the\nnumber of zeros of $g(\\cdot ,Q)$ in $\\mathrm{supp\\,}\\phi $, counted \nwith their multiplicity, is uniformly bounded, for $Q\\in K$. 
This \nfollows from Jensen's formula.\n\\par \nNow assume,\n\\begin{equation}\\label{ed.4}\ng(z,Q)=0 \\Rightarrow d_Qg\\ne 0.\n\\end{equation}\nThen \n$$\n\\Sigma :=\\{(z,Q)\\in \\Omega \\times W;\\, g(z,Q)=0 \\}\n$$ \nis a smooth complex hypersurface in $\\Omega \\times W$ and from\n(\\ref{ed.2}) we see that \n\\begin{equation}\\label{ed.5}\nK_\\phi = \\int_\\Sigma \\phi (z)m(Q)(2i)^{-{N^2}}d\\overline{Q}\\wedge dQ,\n\\end{equation}\nwhere we view $(2i)^{-{N^2}}d\\overline{Q}\\wedge dQ$ as a complex $({N^2},{N^2})$-form on\n$\\Omega \\times W$, restricted to $\\Sigma $, which yields a non-negative differential form \nof maximal degree on $\\Sigma$. \n\\par \nBefore continuing, let us eliminate the assumption\n(\\ref{ed.2}). Without that assumption, the integral in (\\ref{ed.3}) is\nstill well-defined. It suffices to show \\eqref{ed.5} for all \n$\\phi\\in\\mathcal{C}_0^{\\infty}(\\Omega_0\\times W_0)$ when $\\Omega_0\\times W_0$ \nis a sufficiently small open neighborhood of any given point \n$(z_0,Q_0)\\in \\Omega\\times W$. When $g(z_0,Q_0)\\neq 0$ or \n$\\partial_z g(z_0,Q_0)\\neq 0$ we already know that this holds, so we \nassume that for some $m\\geq 2$, $\\partial_z^kg(z_0,Q_0)=0$ for $0\\leq k \\leq m-1$, \n$\\partial_z^mg(z_0,Q_0)\\neq 0$. \n\\par\nPut $g_{\\varepsilon}(z,Q)=g(z,Q)+\\varepsilon$, $\\varepsilon\\in\\mathrm{neigh}(0,\\mathds{C})$. \nBy Weierstrass' preparation theorem, if $\\Omega_0,W_0$ and $r>0$ are small \nenough,\n\\begin{equation*}\n g_{\\varepsilon}(z,Q) = k(z,Q,\\varepsilon)p(z,Q,\\varepsilon) \\quad \n \\text{in } \\Omega_0\\times W_0\\times D(0,r),\n\\end{equation*}\nwhere $k$ is holomorphic and non-vanishing, and \n\\begin{equation*}\n p(z,Q,\\varepsilon)= z^m + p_1(Q,\\varepsilon)z^{m-1} + \\dots \n\t\t + p_m(Q,\\varepsilon).\n\\end{equation*}\nHere, $p_j(Q,\\varepsilon)$ are holomorphic, and $p_j(0,0)=0$.
\n\\par\nThe discriminant $D(Q,\\varepsilon)$ of the polynomial $p(\\cdot,Q,\\varepsilon)$ \nis holomorphic on $W_0\\times D(0,r)$. It vanishes precisely when \n$p(\\cdot,Q,\\varepsilon)$ - or equivalently $g_{\\varepsilon}(\\cdot,Q)$ - has a \nmultiple root in $\\Omega_0$.\n\\par\nNow for $0 < |\\varepsilon| \\ll 1$, the $m$ roots of $g_{\\varepsilon}(\\cdot,Q_0)$ \nare simple, so $D(Q_0,\\varepsilon)\\neq 0$. Thus, $D(\\cdot,\\varepsilon)$ is not \nidentically zero, so the zero set of $D(\\cdot,\\varepsilon)$ in $W_0$ is of measure \n$0$ (assuming that we have chosen $W_0$ connected). This means that for \n$0<|\\varepsilon|\\ll 1$, the function $g_{\\varepsilon}(\\cdot,Q)$ has only simple \nroots in $\\Omega$ for almost all $Q\\in W_0$.\n\\par\nLet $\\Sigma _\\epsilon $ be the zero set of\n$g_\\epsilon $, so that $\\Sigma _\\epsilon \\to \\Sigma $ in the natural\nsense. We have \n$$\n\\int \\left(\\sum_{z;\\, g_\\epsilon (z,Q)=0 }\\phi\n(z)\\right) m(Q)(2i)^{-{N^2}}d\\overline{Q}\\wedge dQ=\\int_{\\Sigma _\\epsilon }\\phi\n(z)m(Q)(2i)^{-{N^2}}d\\overline{Q}\\wedge dQ\n$$ \nfor $\\phi\\in\\mathcal{C}_0^{\\infty}(\\Omega_0\\times W_0)$, \nwhen $\\epsilon >0$ is small enough, depending on $\\phi $, $m$. Passing\nto the limit $\\epsilon =0$ we get (\\ref{ed.5}) under the assumptions\n(\\ref{ed.1}), (\\ref{ed.4}), first for \n$\\phi\\in\\mathcal{C}_0^{\\infty}(\\Omega_0\\times W_0)$, and then by partition \nof unity for all $\\phi\\in\\mathcal{C}_0^{\\infty}(\\Omega\\times W)$. 
Notice that \nthe result remains valid if we replace $m(Q)$ by $m(Q)1_B(Q)$ where $B$ is \na ball in $W$.\n\\par\nNow we strengthen the assumption (\\ref{ed.4}) by assuming that we have\na non-zero $Z(z)\\in {\\mathds{C}}^{N^2}$ depending smoothly on $z\\in \\Omega $\n(the dependence will actually be holomorphic in the application below) \nsuch that\n\\begin{equation}\\label{ed.6}\ng(z,Q)=0 \\Rightarrow \\left(\\overline{Z}(z)\\cdot \\partial _Q\n\\right)g(z,Q)\\ne 0.\n\\end{equation}\nWe have the corresponding orthogonal decomposition\n$$\nQ=Q(\\alpha )=\\alpha _1\\overline{Z}(z)+\\alpha ',\\quad \\alpha '\\in\n\\overline{Z}(z) ^\\perp , \\ \\alpha_1\\in\\mathds{C},\n$$ \nand if we identify unitarily $\\overline{Z}(z)^\\perp$ with ${\\mathds{C}}^{{N^2}-1}$ \nby means of an orthonormal basis $e_2(z),...,e_{N^2}(z)$, so\nthat $\\alpha '=\\sum_2^{N^2} \\alpha _je_j(z)$ we get global coordinates\n$\\alpha _1,\\alpha _2,...,\\alpha _{N^2}$ on $Q$-space.\n\\par \nBy the implicit function theorem, at least locally near any given\npoint in $\\Sigma $, we can represent $\\Sigma $ by $\\alpha\n_1=f(z,\\alpha ')$, $\\alpha '\\in \\overline{Z}(z)^\\perp\\simeq {\\mathds{C}}^{{N^2}-1}$, where $f$ is\nsmooth. (In the specific situation below, this will be valid\nglobally.) \nClearly, since $z,\\alpha _2,...,\\alpha _{N^2}$ are complex coordinates on\n$\\Sigma $, we have on $\\Sigma$ that\n\\begin{equation*}\n \\frac{1}{(2i)^{N^2}}d\\overline{Q}\\wedge dQ\n =\n J(f)\\frac{d\\overline{z}\\wedge dz}{2i} \n (2i)^{1-N^2}d\\overline{\\alpha}_2\\wedge d\\alpha_2 \\wedge ... \n \\wedge d\\overline{\\alpha}_{N^2}\\wedge d\\alpha_{N^2}\n\\end{equation*}\nwith the convention that \n\\begin{equation*}\n J(f)\\frac{d\\overline{z}\\wedge dz}{2i} \\geq 0, \\quad\n (2i)^{1-N^2}d\\overline{\\alpha}_2\\wedge d\\alpha_2 \\wedge ... 
\n\\wedge d\\overline{\\alpha}_{N^2}\\wedge d\\alpha_{N^2} >0.\n\\end{equation*}\nThus \n\\begin{equation}\\label{ed.7}\n\\begin{split}\nK_\\phi =\\int \\phi (z)m\\left(f(z,\\alpha ')\\overline{Z}(z)+\\alpha\n '\\right)J(f)(z,\\alpha _2,...,\\alpha _{N^2}) \\times \\\\\n(2i)^{-{N^2}}d\\overline{z}\\wedge dz\\wedge d\\overline{\\alpha}_2\\wedge d\n\\alpha_2 \\wedge ... \\wedge d\\overline{\\alpha}_{N^2}\\wedge\nd\\alpha_{N^2} .\n\\end{split}\n\\end{equation}\nThe Jacobian $J(f)$ is invariant under any $z$-dependent unitary change\nof variables, $\\alpha _2,...,\\alpha _{N^2}\\mapsto \\widetilde{\\alpha\n}_2,...,\\widetilde{\\alpha }_{N^2}$, so for the calculation of $J(f)$ at a\ngiven point $(z_0,\\alpha _0')$, we are free to choose the most\nappropriate orthonormal basis $e_2(z),...,e_{N^2}(z)$ in\n$\\overline{Z}(z)^\\perp$ depending smoothly on $z$. We write\n(\\ref{ed.7}) as \n\\begin{equation}\\label{ed.8}\nK_\\phi =\\int \\phi (z) \\Xi (z) \\frac{d\\overline{z}\\wedge dz}{2i},\n\\end{equation}\nwhere the density $\\Xi (z)$ is given by \n\\begin{equation}\\label{ed.9}\\begin{split}\n\\Xi (z)=\\int_{\\alpha '=\\sum_2^{N^2} \\alpha _je_j(z)}m(f(z,\\alpha\n')\\overline{Z}(z)+\\alpha ')J(f)(z,\\alpha _2,...,\\alpha _{N^2})\\times \\\\\n(2i)^{1-{N^2}}d\\overline{\\alpha}_2\\wedge d\\alpha_2\\wedge\n... \\wedge d\\overline{\\alpha}_{N^2}\\wedge d\\alpha_{N^2}.\n\\end{split}\n\\end{equation}\nBefore continuing, let us give a brief overview of the organization \nof the following sections: \n\\par\nIn Section \\ref{gru} we will set up an auxiliary Grushin problem \nyielding the effective function $g$ as above. Section \\ref{chvar} \ndeals with the appropriate choice of coordinates $Q$ and the \ncalculation of the Jacobian $J(f)$.
Finally, in Section \\ref{pfThm} \nwe complete the proof of Theorem \\ref{ed5}.\n\n\\section{Grushin problem for the perturbed Jordan block}\\label{gru}\n\\subsection{Setting up an auxiliary problem}\nFollowing \\cite{SjZw07}, we introduce an auxiliary Grushin problem.\nDefine $R_+:{\\mathds{C}}^N\\to {\\mathds{C}}$ by\n\\begin{equation}\\label{pj.5}\nR_+u=u_1,\\ u=(u_1\\ ...\\ u_N)^\\mathrm{t}\\in {\\mathds{C}}^N. \n\\end{equation} \nLet $R_-:\\, {\\mathds{C}}\\to {\\mathds{C}}^N$ be defined by\n\\begin{equation}\\label{pj.6}\nR_-u_-=(0\\ 0\\ ...\\ u_-)^\\mathrm{t}\\in {\\mathds{C}}^N.\n\\end{equation}\nHere, we identify vectors in ${\\mathds{C}}^N$ with column matrices. Then\nfor $|z|<1$, the operator\n\\begin{equation}\\label{pj.7}\n{\\mathcal{A}}_0=\\begin{pmatrix}A_0-z &R_-\\\\ R_+ &0\\end{pmatrix}: \n{\\mathds{C}}^{N+1}\\to {\\mathds{C}}^{N+1}\n\\end{equation}\nis bijective. In fact, identifying \n\\begin{equation*}\n{\\mathds{C}}^{N+1}\\simeq\n\\ell^2([1,2,...,N+1])\\simeq\\ell^2({\\mathds{Z}}\/(N+1){\\mathds{Z}}), \n\\end{equation*}\nwe have\n${\\mathcal{A}}_0=\\tau ^{-1}-z\\Pi _N$, where $\\tau u(j)=u(j-1)$ (translation\nby 1 step to the right) and $\\Pi _Nu=1_{[1,N]}u$. Then ${\\mathcal{A}}_0=\\tau\n^{-1}(1-z\\tau \\Pi _N)$, $(\\tau \\Pi _N)^{N+1}=0$,\n$$\n{\\mathcal{A}}_0^{-1}=(1+z\\tau \\Pi _N+(z\\tau \\Pi _N)^2+...+(z\\tau \\Pi\n_N)^N)\\circ \\tau .\n$$\nWrite\n$$\n{\\mathcal{E}}_0:={\\mathcal{A}}_0^{-1}=:\\begin{pmatrix}E^0 &E_+^0\\\\\nE_-^0 &E_{-+}^0\n\\end{pmatrix}.\n$$\nThen \n\\begin{equation}\\label{pj.8}\nE^0\\simeq \\Pi _N(1+z\\tau \\Pi _N+...(z\\tau \\Pi _N)^{N-1})\\tau \\Pi _N,\n\\end{equation}\n\\begin{equation}\\label{pj.9}\nE_+^0=\\begin{pmatrix}1\\\\z\\\\ ..\\\\z^{N-1}\\end{pmatrix},\\\nE_-^0=\\begin{pmatrix}z^{N-1} & z^{N-2} &... 
&1\\end{pmatrix},\n\\end{equation}\n\\begin{equation}\\label{pj.10}\nE_{-+}^0=z^N.\n\\end{equation}\nA quick way to check \\eqref{pj.9}, \\eqref{pj.10} is to write \n$\\mathcal{A}_0$ as an $(N+1)\\times (N+1)$-matrix where we moved \nthe last line to the top, with the lines labeled from \n$0$ ($\\equiv N+1 \\mod (N+1)\\mathds{Z}$) to $N$ and the columns from \n$1$ to $N+1$.\n\\par\nContinuing, we see that\n\\begin{equation}\\label{pj.10.2}\n\\Vert E^0\\Vert\\le G(|z|),\\ \\Vert E_\\pm^0\\Vert\\le\nG(|z|)^{\\frac{1}{2}},\\ \\Vert E_{-+}^0\\Vert\\le 1 ,\n\\end{equation}\nwhere $\\Vert \\cdot \\Vert$ denote the natural operator norms and\n\\begin{equation}\\label{pj.10.4}\nG(|z|):=\\min \\left( N,\\frac{1}{1-|z|} \\right)\\asymp\n1+|z|+|z|^2+...+|z|^{N-1} .\n\\end{equation}\nNext, consider the natural Grushin problem for $A_\\delta $. If\n$\\delta \\Vert Q\\Vert G(|z|)<1$, we see that\n\\begin{equation}\\label{pj.11}\n{\\mathcal{A}}_\\delta =\\begin{pmatrix}A_\\delta -z &R_-\\\\ R_+ &0\\end{pmatrix}\n\\end{equation}\nis bijective with inverse $${\\mathcal{E}}_\\delta =\\begin{pmatrix}E^\\delta\n & E_+^\\delta \\\\ E_-^\\delta &E_{-+}^\\delta \\end{pmatrix},$$\nwhere\n\\begin{equation}\\label{pj.11.5}\n\\begin{split}\n E^\\delta =&E^0-E^0\\delta QE^0+E^0(\\delta QE^0)^2-...=E^0(1+\\delta\n QE^0)^{-1},\\\\\n E_+^\\delta =&E_+^0-E^0\\delta QE_+^0+(E^0\\delta\n Q)^2E_+^0-...=(1+E^0\\delta\n Q)^{-1}E^0_+,\\\\\n E_-^\\delta =&E_-^0-E_-^0\\delta QE^0+E_-^0(\\delta\n QE^0)^2-...=E_-^0(1+\\delta\n QE^0)^{-1},\\\\\n E^\\delta _{-+}=&E^0_{-+}-E_-^0\\delta QE_+^0+E_-^0\\delta QE^0\\delta\n QE_+^0-...\\\\\n &=E_{-+}^0-E_-^0\\delta Q(1+E^0\\delta Q)^{-1}E_+^0.\n\\end{split}\n\\end{equation}\nWe get\n\\begin{equation}\\label{pj.12}\n\\begin{split}\n&\\Vert E^\\delta \\Vert\\le \\frac{G(|z|)}{1-\\delta \\Vert Q\\Vert G(|z|)},\\\n\\Vert E_\\pm^\\delta \\Vert\\le \\frac{G(|z|)^{\\frac{1}{2}}}{1-\\delta \\Vert\n Q\\Vert G(|z|)},\\\\\n&| E_{-+}^\\delta - E_{-+}^0 |\\le \\frac{\\delta 
\\Vert Q\\Vert G(|z|)}{1-\\delta \\Vert Q\\Vert G(|z|)}.\n\\end{split}\n\\end{equation}\nIndicating derivatives with respect to $\\delta $ with dots and\nomitting sometimes the super\/sub-script $\\delta $, we have\n\\begin{equation}\\label{pj.13}\n\\dot{{\\mathcal{E}}}=-{\\mathcal{E}}\\dot{{\\mathcal{A}}}{\\mathcal{E}}=-\\begin{pmatrix}EQE &\n EQE_+\\\\ E_-QE &E_-QE_+.\n\\end{pmatrix}\n\\end{equation}\nIntegrating this from 0 to $\\delta $ yields\n\\begin{equation}\\label{pj.14}\n\\Vert E^\\delta -E^0\\Vert \\le \\frac{G(|z|)^2 \\delta \\Vert Q \\Vert}{(1-\\delta \\Vert Q\\Vert\n G(|z|))^2} ,\\quad \n \\Vert E_\\pm^\\delta -E_\\pm^0\\Vert \\le \n \\frac{G(|z|)^{\\frac{3}{2}}\\delta \\Vert Q \\Vert}{(1-\\delta \\Vert Q\\Vert G(|z|))^2}.\n\\end{equation}\nWe now sharpen the assumption that $\\delta \\Vert Q\\Vert G(|z|)<1$ to \n\\begin{equation}\\label{pj.15}\n\\delta \\Vert Q\\Vert G(|z|)<1\/2. \n\\end{equation}\nThen \n\\begin{equation}\\label{pj.16}\n\\begin{split}\n&\\Vert E^\\delta \\Vert\\le 2G(|z|),\\\n\\Vert E_\\pm^\\delta \\Vert\\le 2 G(|z|)^{\\frac{1}{2}},\\\\\n&| E_{-+}^\\delta - E_{-+}^0 |\\le 2\\delta \\Vert Q\\Vert G(|z|).\n\\end{split}\n\\end{equation}\nCombining this with the identity $\\dot{E}_{-+}=-E_-QE_+$ that follows\nfrom (\\ref{pj.13}), we get\n\\begin{equation}\\label{pj.17}\n\\Vert \\dot{E}_{-+}+E_-^0QE_+^0\\Vert \\le 16 G(|z|)^2\\delta\n \\Vert Q\\Vert ^2,\n\\end{equation}\nand after integration from $0$ to $\\delta $,\n\\begin{equation}\\label{pj.18}\nE_{-+}^\\delta =E_{-+}^0-\\delta E_-^0QE_+^0 +{\\mathcal{O}}(1)G(|z|)^2(\\delta\n\\Vert Q\\Vert)^2 . 
\n\\end{equation}\nUsing (\\ref{pj.9}), (\\ref{pj.10}) we get with $Q=(q_{j,k})$,\n\\begin{equation}\\label{pj.19}\nE_{-+}^\\delta =z^N-\\delta \\sum_{j,k=1}^N q_{j,k}z^{N-j+k-1}\n +{\\mathcal{O}}(1)G(|z|)^2(\\delta\n\\Vert Q\\Vert)^2,\n\\end{equation}\nstill under the assumption (\\ref{pj.15}).\n\\subsection{Estimates for the effective Hamiltonian}\n\n\\par We now consider the situation at the beginning of Section\n\\ref{mres}:\n$$\nA_\\delta =A_0+\\delta Q,\\ Q=(q_{j,k}(\\omega ))_{j,k=1}^N,\\\nq_{j,k}(\\omega )\\sim {\\mathcal{N}}_{\\mathds{C}}(0,1)\\hbox{ independent.}\n$$\nIn the following, we often write $|\\cdot|$ for the Hilbert-Schmidt norm \n$\\lVert\\cdot\\rVert_{\\mathrm{HS}}$. As we recalled in (\\ref{pj.28}), we have \n\\begin{equation}\\label{ed.10}\n|Q|\\le C_1N\\hbox{ with probability }\\ge 1-e^{-N^2},\n\\end{equation}\nand we shall work under the assumption that $|Q|\\le C_1N$.\nWe let $|z|<1$ and assume:\n\\begin{equation}\\label{ed.11}\n\\delta NG(|z|)\\ll 1.\n\\end{equation}\nThen with probability $\\ge 1-e^{-N^2}$, we have (\\ref{pj.15}),\n(\\ref{pj.19}) which give for $g(z,Q):=E_{-+}^\\delta $, \n\\begin{equation}\\label{ed.12}\ng(z,Q)=z^N+\\delta (Q|\\overline{Z}(z))+{\\mathcal{O}}(1)(G(|z|)\\delta N)^2.\n\\end{equation}\nHere, $Z$ is given by\n\\begin{equation}\\label{ed.13}\nZ=\\left( z^{N-j+k-1}\\right)_{j,k=1}^N.\n\\end{equation}\nA straight forward calculation shows that\n\\begin{equation}\\label{ed.14}\n | Z | =\\sum_0^{N-1}|z|^{2\\nu }=\\frac{1-|z|^{2N}}{1-|z|^2}=\n\\frac{1-|z|^N}{1-|z|}\\,\\,\\frac{1+|z|^N}{1+|z|},\n\\end{equation}\nand in particular,\n\\begin{equation}\\label{ed.15}\n\\frac{G(|z|)}{2}\\le | Z| \\le G(|z|).\n\\end{equation}\n\\par \nThe middle term in (\\ref{ed.12}) is bounded in modulus by $\\delta\n|Q||Z|\\le \\delta C_1NG(|z|)$ and we assume that $|z|^N$ is much\nsmaller than this bound:\n\\begin{equation}\\label{ed.16}\n|z|^N\\ll \\delta C_1NG(|z|).\n\\end{equation}\nMore precisely, we work in a disc $D(0,r_0)$, 
where \n\\begin{equation}\\label{ed.25}\nr_0^N\\le C^{-1}\\delta C_1NG(r_0)\\le C^{-2},\\quad r_0\\le 1-N^{-1}\n\\end{equation}\nand $C\\gg 1$. In fact, the first inequality in \\eqref{ed.25} \ncan be written $m(r_0) \\leq C^{-1}\\delta C_1 N$ and \n$m(r)= r^N(1-r)$ is increasing on $[0,1-N^{-1}]$ so the inequality \nis preserved if we replace $r_0$ by $|z|\\leq r_0$. Similarly, the \nsecond inequality holds after the same replacement since $G$ is \nincreasing. \n\\par \nIn view of (\\ref{ed.11}), we see that\n$$\n\\left(G(|z|)\\delta N \\right)^2\\ll \\delta G(|z|)N\n$$\nis also much smaller than the upper bound on the middle term. \n\n\\par By the Cauchy inequalities,\n\\begin{equation}\\label{ed.17}\nd_Qg=\\delta Z\\cdot dQ+{\\mathcal{O}}(1)G(|z|)^2\\delta ^2N.\n\\end{equation}\nThe norm of the first term is $\\asymp \\delta G\\gg G^2\\delta ^2N$,\nsince $G\\delta N\\ll 1$. (When applying the Cauchy inequalities, we\nshould shrink the radius $R=C_1N$ by a factor $\\theta <1$, but we have\nroom for that, if we let $C_1$ be a little larger than necessary to\nstart with.)\n\nWriting \n$$\nQ=\\alpha _1\\overline{Z}(z)+\\alpha ',\\ \\alpha '\\in\n\\overline{Z}(z)^\\perp\\simeq {\\mathds{C}}^{{N^2}-1},\n$$\nwe identify $g(z,Q)$ with a function $\\widetilde{g}(z,\\alpha )$ which is\nholomorphic in $\\alpha $ for every fixed $z$ and satisfies\n\\begin{equation}\\label{ed.18}\n\\widetilde{g}(z,\\alpha )=z^N+\\delta |Z(z)|^2\\alpha _1+{\\mathcal{O}}(1)\nG(|z|)^2\\delta ^2N^2,\n\\end{equation}\nwhile (\\ref{ed.17}) gives\n\\begin{equation}\\label{ed.19}\n\\partial _{\\alpha _1}\\widetilde{g}(z,\\alpha )=\\delta |Z(z)|^2\n+{\\mathcal{O}}(1)G(|z|)^3\\delta ^2N,\n\\end{equation}\nand in particular,\n$$\n\\left| \\partial _{\\alpha _1}\\widetilde{g} \\right|\\asymp \\delta G(|z|)^2.\n$$\nThis derivative does not depend on the choice of unitary\nidentification $\\overline{Z}^\\perp\\simeq {\\mathds{C}}^{{N^2}-1}$. 
Notice that\nthe remainder in (\\ref{ed.18}) is the same as in (\\ref{ed.12}) and\nhence a holomorphic function of $(z,Q)$. In particular it is a\nholomorphic function of $\\alpha _1,...,\\alpha _{N^2}$ for every fixed $z$\nand we can also get (\\ref{ed.19}) from this and the Cauchy\ninequalities. In the same way, we get from (\\ref{ed.18}) that\n\\begin{equation}\\label{ed.19.5}\n\\partial _{\\alpha _j}\\widetilde{g}(z,\\alpha )={\\mathcal{O}}(1)G(|z|)^2\\delta ^2N,\\ j=2,...,{N^2}.\n\\end{equation}\n\\par \nThe Cauchy inequalities applied to (\\ref{ed.12}) give,\n\\begin{equation}\\label{ed.26}\n\\partial _zg(z,Q)=Nz^{N-1}+\\delta Q\\cdot \\partial _zZ(z)\n+{\\mathcal{O}}(1)\\frac{(G(|z|)\\delta N)^2}{r_0-|z|}.\n\\end{equation}\nThen, for $\\widetilde{g}(z,\\alpha _1,\\alpha ')=g(z,\\alpha\n_1\\overline{Z}(z)+\\alpha ')$, $\\alpha '= \\sum_2^{N^2} \\alpha _je_j$ \nwe shall see that\n\\begin{equation}\\label{ed.27}\\begin{split}\n\\partial _z\\widetilde{g}=\n Nz^{N-1}+\\delta \\alpha _1\\partial _z\\left( |Z|^2 \\right)\n +{\\mathcal{O}}(1)\\frac{(G\\delta N)^2}{r_0-|z|}+{\\mathcal{O}}(1)G^2\\delta\n^2N\\left| \\sum_2^{N^2} \\alpha _j\\partial _ze_j \\right|,\n\\end{split}\n\\end{equation}\n\\begin{equation}\\label{ed.28}\\begin{split}\n\\partial _{\\overline{z}}\\widetilde{g}\n=& \\delta \\alpha _1\\partial _{\\overline{z}}\\left( |Z|^2 \\right) \n+{\\mathcal{O}}(1)G^2\\delta ^2N\\left| \\alpha _1\\overline{\\partial\n _zZ}+\\sum_2^{N^2} \\alpha _j\\partial _{\\overline{z}}e_j \\right| .\n\\end{split}\\end{equation}\n\\par \nThe leading terms in (\\ref{ed.27}), (\\ref{ed.28}) can be obtained\nformally from (\\ref{ed.18}) by applying $\\partial _z$, $\\partial\n_{\\overline{z}}$ and we also notice that \n$$\n\\partial _z|Z|^2=\\overline{Z}\\cdot \\partial _zZ,\\ \\\n\\partial _{\\overline{z}}|Z|^2=Z\\cdot \\overline{\\partial _zZ }.\n$$\nHowever it is not clear how to handle the remainder in (\\ref{ed.18}),\nso we verify (\\ref{ed.27}), (\\ref{ed.28}), using 
(\\ref{ed.17}),\n(\\ref{ed.26}):\n\\[\n\\begin{split}\n&\\partial _z\\widetilde{g}=\\partial _zg+d_Qg\\cdot \\sum_2^{N^2} \\alpha\n_j\\partial _ze_j=\\\\\n&Nz^{N-1}+\\delta Q\\cdot \\partial _zZ+{\\mathcal{O}}(1)\\frac{(G\\delta\n N)^2}{r_0-|z|}+(\\delta Z\\cdot dQ +{\\mathcal{O}}(1)G^2\\delta ^2N)\\cdot\n\\sum_2^{N^2} \\alpha _j\\partial _ze_j\\\\\n&= Nz^{N-1}+\\delta \\alpha _1\\partial _z\\left(|Z|^2 \\right) +\\delta\n\\sum_2^{N^2} \\alpha _je_j\\cdot \\partial _zZ +\\delta Z\\cdot \\sum_2^{N^2}\\alpha\n_j\\partial _ze_j\\\\\n&\\hskip 5cm +\\hbox{ the remainders in (\\ref{ed.27})}.\n\\end{split}\n\\]\nThe 3d and the 4th terms in the last expression add up to \n$$\n\\delta \\partial _z\\left(\\sum_2^{N^2} \\alpha _je_j\\cdot Z\n\\right)=\\delta \\partial _z(0)=0,\n$$\nand we get (\\ref{ed.27}).\n\n\\par Similarly,\n\\[\n\\begin{split}\n&\\partial _{\\overline{z}}\\widetilde{g}=d_Qg\\cdot \\left(\\alpha\n _1\\overline{\\partial _zZ}+\\sum_2^{N^2} \\alpha _j\\partial\n _{\\overline{z}}e_j \\right)\\\\\n&=\\left(\\delta Z\\cdot dQ+{\\mathcal{O}}(1)G^2\\delta ^2N \\right)\\cdot \\left(\\alpha\n_1\\overline{\\partial _zZ}+\\sum_2^{N^2} \\alpha _j\\partial\n_{\\overline{z}}e_j\\right) .\n\\end{split}\n\\]\nUp to remainders as in (\\ref{ed.28}), this is equal to \n\\begin{align*}\n\\delta \\alpha _1Z\\cdot \\overline{\\partial _zZ}+\\delta \\sum_2^{N^2} \\alpha\n_jZ\\cdot \\partial _{\\overline{z}}e_j &=\n\\delta \\alpha _1\\partial _{\\overline{z}}\\left(|Z|^2 \\right) +\\delta\n\\sum_2^{N^2} \\alpha _j\\partial _{\\overline{z}}\\left(Z\\cdot e_j\n\\right)\n \\notag \\\\\n&=\\delta \\alpha _1 \\partial _{\\overline{z}}\\left(|Z|^2 \\right).\n\\end{align*}\n\\par \nHere, we know that \n$$\n|Z(z)|=\\sum_0^{N-1}(z\\overline{z})^\\nu =:K(z\\overline{z}),\n$$\n\\begin{equation}\\label{ed.29}\n\\begin{split}\n\\partial _z\\left(|Z(z)|^2 \\right)&=2KK'\\overline{z},\\\\\n\\partial _{\\overline{z}}\\left(|Z(z)|^2 \\right)&=2KK'z.\n\\end{split}\n\\end{equation}\nObserve also 
that $K(t)\\asymp G(t)$ and that $G(|z|)\\asymp G(|z|^2)$.\n\nThe following result implies that $K'(t)$ and $K(t)^2$ are of the same\norder of magnitude.\n\\begin{prop}\\label{ed1}\n For $k\\in {\\mathds{N}}$, $2\\le N\\in {\\mathds{N}}\\cup \\{+\\infty \\}$, $0\\le t<\n 1$, we put\n\\begin{equation}\\label{ed.29.1}\nM_{N,k}(t)=\\sum _{\\nu =1}^{N-1} \\nu ^kt^\\nu ,\n\\end{equation}\nso that $K(t)=K_N(t)=M_{N,0}(t)+1$, $K'(t)\\asymp M_{N-1,1}(t)+1$. For each fixed $k\\in {\\mathds{N}}$, we\nhave uniformly with respect to $N$, $t$:\n\\begin{equation}\\label{ed.29.2}\nM_{\\infty ,k}(t)\\asymp \\frac{t}{(1-t)^{k+1}},\n\\end{equation}\n\\begin{equation}\\label{ed.29.3}\nM_{\\infty ,k}(t)-M_{N,k}(t)\\asymp \\frac{t^N}{1-t}\\left(N+\\frac{1}{1-t} \\right)^k.\n\\end{equation}\nFor all fixed $C>0$ and $k\\in {\\mathds{N}}$, we have uniformly,\n\\begin{equation}\\label{ed.29.4}\nM_{N,k}(t)\\asymp M_{\\infty ,k}(t),\\hbox{ for }0\\le t\\le\n1-\\frac{1}{CN},\\ N\\ge 2.\n\\end{equation}\n\\end{prop}\n\nNotice that under the assumption in (\\ref{ed.29.4}), the estimate\n(\\ref{ed.29.3}) becomes\n$$\nM_{\\infty ,k}(t)-M_{N,k}(t)\\asymp \\frac{t^NN^k}{1-t}.\n$$\nWe also see that in any region $1-{\\mathcal{O}}(1)\/N\\le t<1$, we have\n$$\nM_{N,k}(t)\\asymp N^{k+1},\n$$\nso together with (\\ref{ed.29.4}), \\eqref{ed.29.2}, this shows that\n\\begin{equation}\\label{ed.29.4.5}\nM_{N,k}(t)\\asymp t\\min \\left(\\frac{1}{1-t},N \\right)^{k+1}.\n\\end{equation}\n\\begin{proof}\nThe statements are easy to verify when $0\\le t\\le 1-1\/{\\mathcal{O}}(1)$ and the\n$N$-dependent statements (\\ref{ed.29.3}), (\\ref{ed.29.4}) are clearly\ntrue when $N\\le {\\mathcal{O}}(1)$. Thus we can assume that $1\/2\\le t<1$ and\n$N\\gg 1$. 
\n\n\par Write $t=e^{-s}$ so that $0<s\le \log 2$.\n\nLet $\mathcal{M}$ be a smooth manifold equipped with a Riemannian metric $M(x,t)$, which defines a smoothly varying inner product $\langle \cdot,\cdot \rangle_x$ on the tangent space $T_x \mathcal{M}$ at every point $x$.\nThe metric $M(x,t)$ defines local geometric notions such as angles, length, and orthogonality.\nThe directional derivative of the metric $M(x,t)$ along a vector $v$ is $\partial_v M = \sum_i\frac{\partial M}{\partial x_i} v_i$.\nA parameterized differentiable curve $c: \left[0,1\right] \rightarrow \mathcal{M}$ is regular if $\frac{\partial c}{\partial s} = c_s \neq 0$ $\forall s \in \left[0,1\right]$.\nLet $\Upsilon(p,q)$ be the family of curves connecting $p,~q\in \mathcal{M}$; then a \emph{geodesic} $\gamma: \left[0,1\right] \rightarrow \mathcal{M}$ is an extremum of the energy functional\n\begin{equation*}\n \gamma(s) = \underset{c(s)\in\Upsilon(p,q)}{\argmin}~ E(c,t) = \int_0^1 c_s^\top M(c,t) c_s ds,\n\end{equation*}\nwhere $E$ is the Riemannian energy. \nIf the manifold $\mathcal{M}$ is a complete metric space, such as $\mathbb{R}^n$, the $n$-sphere $\mathbb{S}^n$, or any of their respective closed subsets, then a geodesic is guaranteed to exist by the Hopf-Rinow theorem \cite{carmo1992riemannian}.\nIn the sequel, the time argument in $M(c,t)$ and $E(c,t)$ is dropped for clarity.\n\n\n\section{Adaptive Safety}\n\label{sec:acbf}\n\subsection{Background}\nFirst consider the nominal dynamics of \cref{eq:unc_dyn}, i.e., $\Delta(x) = 0$.\nLet a closed convex set $\mathcal{C} \subset \mathbb{R}^n$ be the 0-superlevel set of a continuously differentiable function $h: \mathbb{R}^n \rightarrow \mathbb{R}$ where\n\begin{equation*}\n \begin{aligned}\n \mathcal{C} & = \left\{ x \in \mathbb{R}^n : h(x) \geq 0 \right\} \\ \n \partial \mathcal{C} & = \left\{ x \in \mathbb{R}^n : h(x) = 0 \right\} \\ \n \text{Int}\left(\mathcal{C}\right) & = \left\{ x \in \mathbb{R}^n : h(x) > 0 \right\}.\n \end{aligned}\n\end{equation*}\nIf the nominal dynamics are locally Lipschitz, then given an initial condition 
$x_0$, there exists a maximum time interval $I(x_0) = [t_0,~T)$ such that $x(t)$ is a unique solution on $I(x_0)$. \nThe following definitions are largely taken from \cite{ames2016control,ames2019control}.\n\begin{definition}\n\label{def:fi}\nThe set $\mathcal{C}$ is \emph{forward invariant} if for every $x_0\in \mathcal{C}$, $x(t) \in \mathcal{C}$ for all $t\in I(x_0)$.\n\end{definition}\n\n\begin{definition}\n\label{def:safety}\nThe nominal system is \emph{safe} with respect to set $\mathcal{C}$ if the set $\mathcal{C}$ is forward invariant.\n\end{definition}\n\n\begin{definition}\nA continuous function $\alpha : \mathbb{R} \rightarrow \mathbb{R}$ is an \emph{extended class $\mathcal{K}_\infty$ function} if it is strictly increasing, $\alpha(0) = 0$, and is defined on the entire real line.\n\end{definition}\n\n\begin{definition}\nLet $\mathcal{C}$ be a 0-superlevel set for a continuously differentiable function $h:\mathbb{R}^n\rightarrow\mathbb{R}$; then $h$ is a \emph{control barrier function} if there exists an extended class $\mathcal{K}_\infty$ function $\alpha$ such that\n\begin{equation}\n\label{eq:cbf}\n \underset{u \in \mathcal{U}}{\text{sup}}~ \left[\frac{\partial h}{\partial x}(x)\left(f(x) + B(x) u\right)\right] \geq - \alpha(h(x)).\n\end{equation}\n\end{definition}\n\n\begin{theorem}\nLet $\mathcal{C}\subset\mathbb{R}^n$ be a 0-superlevel set of a continuously differentiable function $h:\mathbb{R}^n\rightarrow\mathbb{R}$. If $h$ is a CBF on $\mathcal{C}$, then any locally Lipschitz continuous controller satisfying \cref{eq:cbf} renders the set $\mathcal{C}$ safe for the nominal system. 
\n\end{theorem}\n\n\subsection{Adaptive CBFs}\nAdaptive CBFs (aCBFs) \cite{taylor2019adaptive} provide a general framework to guarantee safety through parameter adaptation for systems with structured uncertainties.\nThe notion of safety for uncertain systems must be extended to a family of safe sets $\mathcal{C}_{\theta}$ parameterized by $\theta$. \nMore precisely, the family of safe sets consists of the 0-superlevel sets of a continuously differentiable function $h_a: \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}$.\nIf the uncertain dynamics in \cref{eq:unc_dyn} are locally Lipschitz, then the definitions of forward invariance and safety can be directly extended to $\mathcal{C}_\theta$.\n\n\begin{definition}[\cite{taylor2019adaptive}]\n\label{def:acbf}\nLet $\mathcal{C}_\theta$ be a family of 0-superlevel sets parameterized by $\theta$ for a continuously differentiable function $h_a:\mathbb{R}^n\times \mathbb{R}^p\rightarrow\mathbb{R}$; then $h_a$ is an \emph{adaptive control barrier function} if for all $\theta$\n\begin{equation}\n\label{eq:acbf}\n \underset{u \in \mathcal{U}}{\text{sup}}~ \left[ \frac{\partial h_a}{\partial x}(x,\theta)\left( f(x) - \Delta(x)^\top \Lambda(x,\theta) + B(x) u\right)\right] \geq 0,\n\end{equation}\nwhere $\Lambda(x,\theta) := \theta - \Gamma \left(\frac{\partial h_a}{\partial \theta}(x,\theta)\right)^\top$ and $\Gamma \in \mathcal{S}^p_+$ is a symmetric positive definite matrix.\n\end{definition}\n\nA controller that satisfies \cref{eq:acbf} can be combined with an adaptation law to render the uncertain system safe with respect to $\mathcal{C}_{{\theta}}$ \cite{taylor2019adaptive}.\nHowever, \cref{eq:acbf} makes the level sets of $h_a$ forward invariant, so it is a much stricter condition than \cref{eq:cbf}.\nMore precisely, the distance to the boundary of the safe set must monotonically increase, i.e., $\dot{h}_a(x,\theta) \geq 0$ for all time 
(\\cref{fig:acbf}).\nThis can lead to extremely conservative behavior as the system only operates in a set that is monotonically shrinking.\nIn \\cite{taylor2019adaptive}, a modified aCBF was proposed\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:acbf_m}\n \\bar{h}_a(x,\\theta) = \\begin{cases} \n \\sigma^2 ~ &\\text{if} ~ h_a(x,\\theta)\\geq\\sigma \\\\\n \\sigma^2 - (h_a(x,\\theta)-\\sigma)^2 ~~ &\\text{otherwise},\n \\end{cases}\n\\end{aligned}\n\\end{equation}\nwhich satisfies \\cref{eq:acbf} if $h_a$ is a valid aCBF.\nThis modification expands the set of allowable states but the resulting controller is not necessarily Lipschitz and can exhibit high-frequency oscillations in closed-loop, as shown in \\cref{ex:lipschitz}. \n\n\\begin{example}\n\\label{ex:lipschitz}\nConsider the uncertain system $\\dot{x} = - \\theta + u$ with $\\theta>0$ and aCBF $h_a(x) = x - \\ubar{x}$.\nLet the controller $\\kappa$ be the solution to $\\kappa = {\\argmin}~\\frac{1}{2} u^2$ subject to $\\dot{\\bar{h}}_a(x) \\geq 0$.\nThen\n\\begin{equation}\n\\label{eq:u_ex}\n\\begin{aligned}\n \\kappa = \\begin{cases}\n 0 ~ &\\text{if} ~ h_a(x)\\geq\\sigma \\\\\n \\mathrm{max}(0,\\hat{\\theta}) ~~ &\\text{otherwise},\n \\end{cases}\n\\end{aligned}\n\\end{equation}\nwhere $\\hat{\\theta}$ is the estimate of $\\theta$ and is modified based on \\cite{taylor2019adaptive}\n\\begin{equation}\n\\label{eq:ha_ex}\n\\begin{aligned}\n \\dot{\\hat{\\theta}} = \\begin{cases}\n 0 ~ &\\text{if} ~ h_a(x)\\geq\\sigma \\\\\n - \\Gamma \\left[h_a(x)-\\sigma\\right] \\frac{\\partial h_a}{\\partial x} ~~ &\\text{otherwise},\n \\end{cases}\n\\end{aligned}\n\\end{equation}\nwith $\\Gamma > 0$. 
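The closed-loop behavior of \cref{eq:u_ex} and \cref{eq:ha_ex} can be previewed with a direct forward-Euler simulation; all constants below ($\theta = 1$, $\ubar{x} = 0$, $\sigma = 0.5$, $\Gamma = 10$, the step size, and the horizon) are hypothetical illustrative choices, not values from the text:

```python
# Forward-Euler simulation of the scalar system xdot = -theta + u with the
# modified-aCBF controller (eq:u_ex) and adaptation law (eq:ha_ex).
# All constants are illustrative, hypothetical choices.
theta, x_lb, sigma, Gamma = 1.0, 0.0, 0.5, 10.0   # true parameter, bound, margin, gain
dt, steps = 1e-3, 5000

x, theta_hat = 1.0, 0.0        # start above the boundary with theta_hat(0) <= 0
u_prev, switches = 0.0, 0
for _ in range(steps):
    h_a = x - x_lb             # aCBF h_a(x) = x - x_lb, so dh_a/dx = 1
    if h_a >= sigma:           # barrier condition trivially satisfied
        u, theta_hat_dot = 0.0, 0.0
    else:                      # barrier active: eq:u_ex and eq:ha_ex
        u = max(0.0, theta_hat)
        theta_hat_dot = -Gamma * (h_a - sigma)
    if (u > 0.0) != (u_prev > 0.0):
        switches += 1          # count on/off switching of the control input
    x += dt * (-theta + u)     # plant: xdot = -theta + u
    theta_hat += dt * theta_hat_dot
    u_prev = u
print(switches)                # large count: kappa chatters near h_a = sigma
```

Counting the on/off switches of $\kappa$ makes the chatter visible: once the state reaches a neighborhood of $h_a = \sigma$ with $\hat{\theta} > \theta$, the input toggles between $0$ and $\hat{\theta}$ at nearly every integration step.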
\nFor $\hat{\theta}(0) = \hat{\theta}_0 \leq 0$, $\kappa = 0$, so the closed-loop response is $x_{cl}(t) = - \theta (t-t_0) + x_0$ where $x_{cl}(t_0) = x_0 > \ubar{x}$ and $t \geq t_0$.\nFrom the adaptation law \cref{eq:ha_ex}, it is easy to see that $\dot{\hat{\theta}} \geq 0$, which will necessarily lead to $\hat{\theta} > 0$ since $h_a \leq \sigma$ until $\kappa > \theta$.\nFor $\hat{\theta} > 0$, the closed-loop response becomes\n\begin{equation}\n\label{eq:cl_ex}\n \begin{aligned}\n x_{cl}(t) = \begin{cases}\n -\theta (t-t_0) + x_0 ~ &\text{if} ~ h_a(x)\geq\sigma \\ \n \hspace{0.28cm}\tilde{\theta} (t-t_0') + x_0' ~~ &\text{otherwise},\n \end{cases}\n \end{aligned}\n\end{equation}\nwhere $\tilde{\theta}:=\hat{\theta} - \theta$ and $x_{cl}(t_0') = x_0'$.\nSince $\tilde{\theta} > 0$, \cref{eq:cl_ex} will continuously switch between its two solutions.\nThe control policy $\kappa$ must then also switch between $0$ and $\hat{\theta}$ based on \cref{eq:u_ex}.\nFurthermore, $\kappa$ is not locally Lipschitz continuous and will exhibit high-frequency oscillations of magnitude $\hat{\theta}$ in closed-loop.\n\end{example}\n\nThe intuition behind \cref{ex:lipschitz} is that chatter arises due to the barrier condition switching between being trivially satisfied, i.e., $\dot{\bar{h}}_a = 0 \geq 0$ for all $u$, and being satisfied only for a particular $u$, i.e., a $u$ so that $\dot{\bar{h}}_a \geq 0$.\nThe approach developed in this work addresses the conservatism of aCBFs and results in a locally Lipschitz continuous controller.\n\n\n\subsection{Robust aCBFs}\n\label{sub:racbf}\nThis section will show that a \emph{tightened} set can be made forward invariant if the unknown model parameters are bounded and the parameter adaptation rate is an \emph{admissible} (to be defined) symmetric positive definite matrix.\n\n\begin{assumption}\nThe unknown parameters $\theta$ belong to a known closed convex set 
$\\Theta$.\nThe parameter estimation error $\\tilde{\\theta}:= \\hat{\\theta} - \\theta$ then also belongs to a known closed convex set $\\tilde{\\Theta}$ and the maximum possible parameter error is $\\tilde{\\vartheta}$.\n\\end{assumption}\n\nLet $\\mathcal{C}^r_\\theta$ be a family of superlevel sets parameterized by $\\theta$ for a continuously differentiable function ${h}_r: \\mathbb{R}^n \\times \\mathbb{R}^p \\rightarrow \\mathbb{R}$ \n\\begin{equation*}\n \\begin{aligned}\n \\mathcal{C}^r_\\theta & = \\left\\{ x \\in \\mathbb{R}^n : {h}_r(x,\\theta) \\geq \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right\\} \\\\\n \\partial \\mathcal{C}^r_\\theta & = \\left\\{ x \\in \\mathbb{R}^n : {h}_r(x,\\theta) = \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right\\} \\\\\n \\text{Int}\\left(\\mathcal{C}^r_\\theta\\right) & = \\left\\{ x \\in \\mathbb{R}^n : {h}_r(x,\\theta) > \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right\\},\n \\end{aligned}\n\\end{equation*}\nwhere $\\Gamma \\in \\mathscr{S}^p_+ \\subset \\mathcal{S}^p_+$ is an admissible symmetric positive definite matrix that will dictate the parameter adaptation rate.\nThe set $\\mathcal{C}^r_\\theta$ can be viewed as a \\emph{tightened} set with respect to $\\mathcal{C}_\\theta$, i.e., $\\mathcal{C}^r_\\theta \\subset \\mathcal{C}_\\theta$, shown in \\cref{fig:racbf}.\nOne can select the desired subset $\\mathcal{C}^r_\\theta$ to be made forward invariant \\textit{a priori} by choosing $h_r(x_r,\\theta_r) >0 $ for appropriate $x_r,~\\theta_r$ so ${h}_r(x_r,\\theta_r) = \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta}$.\nTo reduce conservatism, one can either 1) have fast parameter adaptation or 2) reduce model error.\nThe first scenario can lead to well-known undesirable effects in practice so the second scenario is the most viable, and will be explored more in \\cref{sec:smid}.\n\\cref{eq:unc_dyn} is 
again assumed to be locally Lipschitz so \cref{def:fi,def:safety} hold.\n\n\begin{definition}\n\label{definition:racbf}\nLet $\mathcal{C}^r_\theta$ be a family of superlevel sets parameterized by $\theta$ for a continuously differentiable function $h_r:\mathbb{R}^n\times \mathbb{R}^p\rightarrow\mathbb{R}$; then $h_r$ is a \emph{robust adaptive control barrier function} if there exists an extended class $\mathcal{K}_\infty$ function $\alpha$ such that for all $\theta\in\Theta$ \n\begin{equation}\n\label{eq:racbf}\n\begin{aligned}\n &\underset{u \in \mathcal{U}}{\text{sup}}~ \left[ \frac{\partial h_r}{\partial x}(x,\theta)\left[ f(x) - \Delta(x)^\top \Lambda(x,\theta) + B(x) u\right]\right] \\ \n \n & \hspace{3cm} \geq -\alpha\left(h_r(x,\theta) - \frac{1}{2}\tilde{\vartheta}^\top \Gamma^{-1} \tilde{\vartheta}\right),\n\end{aligned}\n\end{equation}\nwhere $\Lambda(x,\theta) := \theta - \Gamma \left(\frac{\partial h_r}{\partial \theta}(x,\theta)\right)^\top$, $\tilde{\vartheta}$ is the maximum possible parameter error, and $\Gamma \in \mathscr{S}^p_+ \subset \mathcal{S}^p_+$ is an admissible symmetric positive definite matrix.\n\end{definition}\n\nThe invariance condition \cref{eq:racbf} is reminiscent of that in \cref{eq:cbf} and is less conservative than that in \cref{eq:acbf} because the system is allowed to approach the boundary of $\mathcal{C}^r_\theta$.\n\cref{thm:racbf} shows that the existence of a RaCBF, coupled with an adaptation law, renders the set $\mathcal{C}^r_\theta$ forward invariant and hence safe.\n\n\begin{theorem}\n\label{thm:racbf}\nLet $\mathcal{C}^r_{\hat{\theta}}\subset\mathbb{R}^n$ be a superlevel set of a continuously differentiable function $h_r:\mathbb{R}^n \times \mathbb{R}^p\rightarrow\mathbb{R}$. If $h_r$ is a RaCBF on $\mathcal{C}^r_{\hat{\theta}}$, then any locally Lipschitz continuous controller satisfying \cref{eq:racbf} renders the set 
$\mathcal{C}^r_{\hat{\theta}}$ safe for the uncertain system with adaptation law and adaptation gain\n\begin{equation*}\n\dot{\hat{\theta}} = \Gamma \Delta(x) \left(\frac{\partial h_r}{\partial x}(x,\hat{\theta}) \right)^\top, ~~ \lambda_{\min}(\Gamma) \geq \frac{\|\tilde{\vartheta}\|^2}{2h_r(x_r,{\theta}_r)},\n\end{equation*}\nwhere $\tilde{\vartheta}$ is the maximum possible parameter error, $h_r(x_r,\theta_r)>0$ can be chosen freely based on the desired conservatism, and $\Gamma \in \mathscr{S}^p_+ \subset \mathcal{S}^p_+$ is an admissible symmetric positive definite matrix.\nFurthermore, the original set $\mathcal{C}_{\hat{\theta}}$ is also safe for the uncertain system.\n\end{theorem}\n\n\n\n\begin{remark}\n\label{remark:proj}\nThe projection operator \cite{slotine1991applied} can be used to enforce parameter bounds by modifying the above adaptation law as opposed to capturing them explicitly with $h_r$. \nThis can simplify the design of $h_r$ without forfeiting safety.\nThe proof is omitted but one can show that a positive semi-definite term appears in the same composite candidate CBF used in \cref{thm:racbf} when adaptation is temporarily stopped.\n\end{remark}\n\n\n\begin{figure}[t!]\n\vskip 0.1in\n \begin{subfigure}{.48\columnwidth}\n \centering\n \includegraphics[trim=100 0 100 0, clip,width=1\linewidth]{figures\/acbf.pdf}\n \caption{{Safe set with adaptive control barrier functions (aCBFs).}}\n \label{fig:acbf}\n \end{subfigure}\n \hspace{0.3em}\n \begin{subfigure}{.48\columnwidth}\n \centering\n \includegraphics[trim=100 0 100 0, clip,width=1\linewidth]{figures\/racbf.pdf}\n \caption{{Safe set with robust adaptive control barrier functions (RaCBFs).}}\n \label{fig:racbf}\n \end{subfigure}\n \caption{Visual comparison of safe sets with adaptive and robust adaptive control barrier functions. (a): System is restricted to level sets (black dashed lines) of aCBF $h_a$. 
(b): System allowed to operate in larger set with RaCBF $h_r$ reducing conservatism.}\n \\vskip -0.25in\n\\end{figure}\n\nSeveral remarks can be made about \\cref{thm:racbf}.\nFirst, safety is \\emph{guaranteed} for all possible parameter realizations through adaptation with minimal conservatism.\nHence, RaCBFs expand and improve the \\emph{adaptive safety} paradigm.\nSecond, the minimum eigenvalue condition for the adaptation rate depends on the desired conservatism, i.e., the degree of tightening by choice of $h_r(x_r,\\theta_r)$.\nFor low conservatism, i.e., a small $h_r(x_r,\\theta_r)$ value, the adaptation rate must be large so the parameter estimates can change quickly to ensure forward invariance of $\\mathcal{C}^r_\\theta$.\nThere is thus a fundamental trade-off between conservatism and parameter adaptation rate that must be weighed carefully given the well-known undesirable effects of high-gain adaptation.\nThird, the RaCBF condition in \\cref{eq:racbf} can be used as a safety filter for an existing tracking controller or as a constraint within an optimization.\n\\cref{sec:result} will show the latter but with a contraction-based controller.\nLastly, if the adaptation gain must be small (or the maximum parameter error is large) then RaCBFs can be conservative albeit not to the same extent as aCBFs.\nBetter performance can be obtained if the model parameters can be robustly and accurately estimated.\nInstead of obtaining a point-estimate of the parameters, this work will instead identify the \\emph{set} of possible parameter values.\n\n\\subsection{RaCBFs with Set Membership Identification}\n\\label{sec:smid}\nSet membership identification (SMID) is a model estimation technique that constructs an unfalsified set of model parameters.\nSMID was originally developed to identify transfer functions for uncertain linear systems \\cite{pararrieter1992set}, but has been more recently applied to linear \\cite{tanaskovic2014adaptive,lorenzen2017adaptive} and 
nonlinear adaptive MPC \cite{lopez2019adaptive}.\nAssume that the true parameters $\theta^*$ belong to an initial set of possible parameters $\Theta^0$, i.e., $\theta^* \in \Theta^0$.\nGiven $k$ state, input, and rate measurements (denoted as $x_{1:k}$ and so forth), a set $\Xi$ can be constructed such that\n\begin{equation*}\n \n \n\Xi = \left\{ \varrho : |\dot{x}_{1:k} - f_{1:k} + \Delta_{1:k}^\top \varrho - B_{1:k}u_{1:k}| \leq D \right\},\n\end{equation*}\nwhere $D$ can be treated as a tuning parameter that dictates the conservativeness of SMID. \nIt can also represent a disturbance or noise bound \cite{tanaskovic2014adaptive,lopez2019adaptive}.\nThe set of possible parameter values can then be updated via $\Theta^{j+1} = \Theta^j \cap \Xi$ for all $j\geq 0$.\nIn practice, $\Xi$ can be found by solving a linear program, and set intersection can be efficiently done through a combination of min and max operations.\nRestricting $\hat{\theta} \in \Theta^j$ ensures $\tilde{\theta} \in \Tilde{\Theta}^j$, where $\Tilde{\Theta}^j$ is the set of possible parameter errors.\nThe following lemma shows the advantage of performing set identification over point-estimation techniques.\n\begin{lemma}\n\label{lemma:smid}\nModel uncertainty monotonically decreases with set membership identification, i.e., $\tilde{\Theta}^{j+1} \subseteq \tilde{\Theta}^j$ for all $j\geq 0$.\n\end{lemma}\n\begin{proof}\nSince $\Theta^{j+1} = \Theta^{j} \cap \Xi$, every element of $\Theta^{j+1}$ also belongs to $\Theta^j$, so $\Theta^{j+1}\subseteq\Theta^j$ and hence $\tilde{\Theta}^{j+1} \subseteq \tilde{\Theta}^j$.\n\end{proof}\n\nThe motivation to combine SMID with RaCBFs is to enlarge the tightened set $\mathcal{C}_\theta^r$.\nTo do so, one must ensure $\mathcal{C}^r_\theta$ remains forward invariant as the set of model parameters is updated.\nIn general this is non-trivial to prove since the maximum possible parameter error 
is now time varying.\nHowever, \\cref{thm:racbf_smid} shows that safety is maintained if the model uncertainty monotonically decreases.\n\n\\begin{theorem}\n\\label{thm:racbf_smid}\nLet $\\mathcal{C}^r_\\theta$ be a superlevel set of a continuously differentiable function ${h}_r : \\mathbb{R}^n\\times \\mathbb{R}^p \\rightarrow \\mathbb{R}$. If the system is safe on $\\mathcal{C}^r_\\theta$ then it remains safe if the maximum allowable model parameter error $\\tilde{\\vartheta}$ monotonically decreases. \nMoreover, the tightened set $\\mathcal{C}^r_\\theta$ converges to $\\mathcal{C}_\\theta$ monotonically.\n\\end{theorem}\n\n\nCombining RaCBFs and SMID provides a mechanism to 1) modify parameters via adaptation to achieve safety and 2) update the model to reduce uncertainty and conservatism.\nSafety is \\emph{guaranteed} even as the model parameters are modified online, and the system's performance will only improve as more data is collected.\nThis \\emph{adaptive data-driven safety} paradigm can be merged with a stabilizing adaptive controller for safe reference tracking.\nTo maximize the generality of the proposed unification, the adaptive controller must be applicable to a broad class of nonlinear systems.\n\n\\section{Adaptive Control with Contraction Metrics}\n\\label{sec:accm}\nSeveral adaptive control techniques have been proposed for nonlinear systems, including methods based on feedback linearization, variable structure, and backstepping (see \\cite{slotine1991applied,miroslav1995nonlinear}).\nThese methods are limited to certain classes of systems because they rely on \\emph{explicitly} constructing a control Lyapunov function (CLF) to prove stability.\nThis work will instead utilize a \\emph{differential} approach based on contraction analysis that can be applied to a broad class of systems.\n\n\\begin{figure}[t!]\n\\vskip 0.05in\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/geodesic.pdf}\n 
\caption{{Geodesic and arbitrary curve connecting current and desired state.}}\n \label{fig:geodesig}\n \end{subfigure}\n \hspace{0.6em}\n \begin{subfigure}{.47\columnwidth}\n \centering\n \includegraphics[width=1\linewidth]{figures\/manifold.pdf}\n \caption{{Differential CLFs along geodesic connecting current and desired state.}}\n \label{fig:manifold}\n \end{subfigure}\n \caption{Geodesic and differential CLF visualization. (a): Geodesic (grey) connecting current $x$ (red) and desired $x_d$ (blue) state. (b): Differential CLFs are integrated along geodesic to achieve exponential convergence.}\n \label{fig:dclf}\n \vskip -0.2in\n\end{figure}\n\n\subsection{Contraction Metrics}\nThe nominal differential dynamics of \cref{eq:unc_dyn} are $\dot{\delta}_x = A(x,u)\delta_x + B(x)\delta_u$ where $A(x,u) = \frac{\partial f}{\partial x} + \sum_{i=1}^m \frac{\partial b_i}{\partial x}u_i$.\nContraction analysis searches for a \emph{control contraction metric} (CCM) $M(x)$ such that a \emph{differential} CLF $\delta V = \delta_x^\top M(x) \delta_x$ satisfies $\delta \dot{V}\leq - 2 \lambda \delta V$ for all $x$.\nA global CLF can be obtained by integrating along a geodesic $\gamma(s)$, illustrated in \cref{fig:dclf}, with $\gamma(0) = x_d$ and $\gamma(1) = x$ where $x_d$ and $x$ are the desired and current state.\nThe Riemannian energy and tracking error both converge to zero exponentially, i.e., $\dot{E} \leq - 2 \lambda E$.\nLet $W(x) = M(x)^{-1}$; then $M(x)$ is a CCM if \cite{manchester2017control}\n\begin{gather}\n B^\top_{\perp}\left( W A ^\top + A W - \dot{W} + 2 \lambda W \right)B_{\perp} \preceq 0 \tag{C1} \label{eq:ccm_c1} \\ \n \partial_{b_i}W - W \frac{\partial b_i}{\partial x}^\top - \frac{\partial b_i}{\partial x} W = 0 ~~ i=1,\dots,m \tag{C2} \label{eq:ccm_c2}\n\end{gather}\nwhere $B_\perp(x)$ is the annihilator matrix of $B(x)$, i.e., $B^\top_\perp B = 0$.\n\cref{eq:ccm_c1} ensures the 
dynamics orthogonal to $u$ are contracting and is a stabilizability condition.\n\cref{eq:ccm_c2} requires that the column vectors of $B(x)$ form a Killing vector for the dual metric $W(x)$, leading to simpler controllers \cite{manchester2017control}. \n\cref{eq:ccm_c1} and \cref{eq:ccm_c2} will be referred to as the \emph{strong CCM conditions}.\n\n\n\n\n\n\subsection{Adaptive Control \& Contraction}\nA novel adaptive control method was developed in \cite{lopez2019contraction} for closed-loop contracting systems with \emph{extended matched uncertainties}, i.e., $\Delta(x)^\top \theta \in \text{span}\{B,ad_fB\}$ where $ad_fB$ is the Lie bracket of the vector fields $f(x)$ and $B(x)$.\nTo stabilize such systems, the \emph{parameter-dependent} metric $M(x,\hat{\theta})$ was introduced and must satisfy the strong CCM conditions \emph{for all} possible $\hat{\theta}$.\nThis led to the following result.\n\n\begin{theorem}[\cite{lopez2019contraction}]\n\label{thm:exMatched}\nIf a parameter-dependent metric can be computed for \cref{eq:unc_dyn} with extended matched uncertainties, then the closed-loop system is asymptotically stable with\n\begin{equation}\n \dot{\hat{\theta}} = - \Gamma \Delta(x) M(x,\hat{\theta})\gamma_s(1)\n \label{eq:accm}\n\end{equation}\nwhere $\gamma_s(s):=\frac{\partial \gamma}{\partial s}$ is the geodesic speed and $\Gamma \in \mathcal{S}^p_+$ is a symmetric positive definite matrix.\n\end{theorem}\n\n\begin{remark}\nFor \emph{matched} uncertainties, the metric is \emph{independent} of the unknown parameters $\hat{\theta}$ \cite[Lemma 1]{lopez2019contraction}, simplifying its computation.\nOtherwise, sum-of-squares or robust optimization must be utilized to compute $M(x,\hat{\theta})$.\n\end{remark}\n\n\begin{remark}\nSeveral modifications can be made to \cref{eq:accm} that improve transients or robustness, including the projection operator discussed in \cref{sub:racbf} (see 
\cite{lopez2019contraction}).\n\end{remark} \n\n\n\subsection{Offline Design \& Online Computation}\n\label{sub:computation}\nA contraction metric is computed offline via sum-of-squares programming for polynomial systems \cite{manchester2017control} or by imposing \cref{eq:ccm_c1,eq:ccm_c2} at sampled points in the state space, a process called gridding.\nGeodesics are computed online at each time step by solving a nonlinear program (NLP) with the current state.\nGeodesics are often guaranteed to exist by Hopf-Rinow and are less expensive to compute than solving nonlinear MPC.\nGiven a geodesic $\gamma(s)$, the Riemannian energy can be interpreted as a CLF, so a pointwise min-norm controller similar to that in \cite{primbs2000receding} can be found by solving the QP\n\begin{equation*}\n \begin{aligned}\n & u^* = ~\underset{u\in \mathcal{U}}\argmin ~\frac{1}{2} u^\top u \\ \n & \text{s.t.} ~\gamma_s(1)^\top M(x,\hat{\theta})\dot{\hat{x}} - \gamma_s(0)^\top M(x_d,\hat{\theta})\dot{x}_d \leq - \lambda E(\gamma(s),\hat{\theta}) \n \end{aligned}\n\end{equation*}\nwhere $\hat{\theta}$ is the current parameter estimate, $\gamma_s$ is the geodesic speed, $\dot{\hat{x}}$ is \cref{eq:unc_dyn} but with $\hat{\theta}$, and $\dot{x}_d$ is the desired dynamics for desired state $x_d$.\nThe stability constraint imposes $\dot{E} \leq - 2 \lambda E$ where $\dot{E}$ is the first variation of the Riemannian energy, which has a known form \cite{carmo1992riemannian}, and results in exponential convergence of the tracking error.\nThis is more general than traditional Lyapunov stability analysis, as a distance-like function does not need to be computed explicitly.\nSafety can be directly embedded in the above QP, resulting in a single optimization for a safe stabilizing controller.\n\n\n\begin{figure*}[t]\n\vskip 0.1in\n \begin{subfigure}{.66\columnwidth}\n \centering\n \includegraphics[trim=150 30 150 30, 
clip,width=1\textwidth]{figures\/q_c_i.pdf}\n \caption{Pitch rate $q$.}\n \label{fig:q_i}\n \end{subfigure}\n \hfill{}\n \begin{subfigure}{.66\columnwidth}\n \centering\n \includegraphics[trim=150 30 150 30,clip, width=1\textwidth]{figures\/u_c_i.pdf}\n \caption{Control input $u$.}\n \label{fig:u_i}\n \end{subfigure}\n \hfill{}\n \begin{subfigure}{.66\columnwidth}\n \centering\n \includegraphics[trim=150 30 150 30,clip, width=1\textwidth]{figures\/h_c_i.pdf}\n \caption{Barrier function $h$.}\n \label{fig:h_i}\n \end{subfigure}\n \caption{Comparison of modified aCBFs \cref{eq:acbf_m}, RaCBFs, and RaCBFs \& SMID for desired terminal state $x_d = [180^\circ~0~0]^\top$ (Immelmann turn) and maximum pitch rate $q_m$. (a): Pitch rate tracking where RaCBFs and the modified aCBFs exhibit similar conservatism due to model error.\n RaCBF \& SMID allows the aircraft to utilize 97.9\% of the maximum allowable pitch rate.\n (b): Control chatter is observed with the modified aCBFs while RaCBFs generate continuous control inputs. (c): Safety is maintained but RaCBFs and the modified aCBFs are conservative due to potential model error. 
RaCBFs \\& SMID permit states closer to the boundary of the safe set without losing safety guarantees.\n For tests $k_q^* = 0.2$, $\\ell^*_\\alpha = -1$, $\\Gamma_B = 20$, $\\Gamma_C = 50$, $\\alpha(r) = 10r$, and $D = 0.1$.}\n \\label{fig:results_immelmann}\n\\vskip -0.2in\n\\end{figure*}\n\n\\section{Adaptive \\& Data-Driven Safety}\n\\label{sec:result}\nA safe and stabilizing controller can be computed by unifying RaCBFs, SMID, and adaptive control with contraction.\nThe individual components of the controller are summarized below with their respective computational complexity.\n\\\\[4pt]\n\\noindent \\textbf{1) Compute geodesic (NLP)}\n\\begin{equation*}\n \\gamma(s) = \\underset{c(s)\\in\\Upsilon(x,x_d)}{\\argmin}~ E(c,\\hat{\\theta}_C) = \\int_0^1 c_s^\\top M(c,\\hat{\\theta}_C) c_s ds\n\\end{equation*}\n\n\\noindent \\textbf{2) Compute controller (QP \\& Quadrature)}\n\\begin{align*}\n & \\kappa = ~\\underset{u \\in \\mathcal{U}}{\\argmin} ~ \\frac{1}{2} u^\\top u + r \\epsilon^2 \\\\\n &\\text{s.t.} ~\\gamma_s(1)^\\top M(x,\\hat{\\theta}_C)\\dot{\\hat{x}} - \\gamma_s(0)^\\top M(x_d,\\hat{\\theta}_C)\\dot{x}_d \\\\\n & \\hspace{4.6cm} \\leq - \\lambda E(\\gamma(s),\\hat{\\theta}_C)+ \\epsilon \\nonumber \\\\\n & \\hspace{.4cm} \\frac{\\partial h_r}{\\partial x}(x,\\hat{\\theta}_B)\\left[ f(x) - \\Delta(x)^\\top \\Lambda(x,\\hat{\\theta}_B) + B(x) u\\right] \\\\\n & \\hspace{3.1cm} \\geq -\\alpha\\left({h}_r(x,\\hat{\\theta}_B) - \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta}\\right) \\nonumber\n\\end{align*}\n\\noindent \\textbf{3) Update parameters (Quadrature)} \\label{item:quad}\n\\begin{equation*}\n\\begin{aligned}\n & \\dot{\\hat{\\theta}}_C = - \\Gamma_C \\Delta(x) M(x,\\hat{\\theta}_C) \\gamma_s(1) \\\\\n & \\dot{\\hat{\\theta}}_B = \\Gamma_B \\Delta(x) \\left(\\frac{\\partial h_r}{\\partial x}(x,\\hat{\\theta}_B)\\right)^\\top\n\\end{aligned}\n\\end{equation*}\n\\vskip -.03in\n\\noindent \\textbf{4) Update parameter error 
bounds (LP)} \\label{item:lp}\n\\begin{align*}\n & \\Xi = \\left\\{ \\varrho : ~ |\\dot{x}_{1:k} - f_{1:k} + \\Delta_{1:k}^\\top \\varrho - B_{1:k}\\kappa_{1:k}| \\leq D \\right\\} \\\\\n &\\Theta^{j+1} = \\Theta^{j} \\cap \\Xi, ~~ \\tilde{\\vartheta} = \\underset{\\varrho_i,\\forall i}{\\text{sup}} ~\\Theta^{j+1} - \\underset{\\varrho_i,\\forall i}{\\vphantom{\\text{sup}}\\text{inf}}~ \\Theta^{j+1} \\nonumber\n\\end{align*}\n\\vskip -.02in\n\nThe NLP in Step~1) can be efficiently solved by parameterizing geodesics with a set of polynomial basis functions. \nWe adopt the same strategy as in \\cite{leung2017nonlinear} and utilize the Chebyshev pseudospectral method and Clenshaw-Curtis quadrature to compute a geodesic at each time step.\nUsing the geodesic computed in Step~1), the QP in Step~2) is solved to generate a safe and stabilizing controller $\\kappa$.\nThe QP is similar to that in \\cite{ames2016control} but the stability constraint is replaced with the first variation of the Riemannian energy \\cite{carmo1992riemannian}.\nUnder the premise that \\cref{eq:unc_dyn} is locally Lipschitz, one can show that $\\kappa$ is guaranteed to be locally Lipschitz from \\cite[Theorem 3]{ames2016control} as both the geodesic speed $\\gamma_s$ and metric $M(x)$ are also locally Lipschitz from their definitions.\nNote that $\\dot{x}_d$ is the desired dynamics and $\\dot{\\hat{x}}$ is \\cref{eq:unc_dyn} but with $\\hat{\\theta}_C$.\nStep~3) is simple quadrature and is not computationally expensive.\nNote that the parameter adaptation $\\dot{\\hat{\\theta}}_C$ for the controller should be temporarily stopped when the safety constraint is active to prevent undesirable transients.\nOtherwise, the parameter estimates will wind up as the tracking error may increase to ensure safety.\n\nThe LP in Step~4) has $2k$ constraints and is solved $2p$ times at every time step for the upper and lower bound of each parameter. 
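When each measured state channel excites only one parameter, the per-parameter bound update in Step~4) reduces to interval arithmetic. A minimal sketch of one such update for a single scalar parameter (a simplification of the LP above; variable names are illustrative):

```python
def smid_update(lo, hi, xdot, f, delta, Bu, D):
    """One set membership identification (SMID) step for a single parameter
    theta entering the dynamics as xdot = f - delta*theta + Bu.
    Intersects the current interval [lo, hi] with the measurement-consistent
    set {rho : |xdot - f + delta*rho - Bu| <= D}."""
    r = xdot - f - Bu              # residual excluding the parameter term
    if delta == 0.0:               # regressor carries no information
        return lo, hi
    a, b = (-D - r) / delta, (D - r) / delta
    if delta < 0.0:                # dividing by a negative flips the interval
        a, b = b, a
    return max(lo, a), min(hi, b)
```

Iterating this over measurements implements $\Theta^{j+1} = \Theta^{j} \cap \Xi$: the bounds can only shrink, and the true parameter remains inside the interval whenever the disturbance bound $D$ is valid.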
\nSet intersection is done by taking the appropriate minimum or maximum of the newest and current bounds.\nThe complexity of Step~4) can be bounded by either removing redundant constraints or terminating when $\\tilde{\\vartheta} \\leq \\varepsilon$ where $\\varepsilon$ is a predefined threshold.\nStep~4) can also be done outside the control loop since stability and safety do not rely on real-time updates of the parameter bounds, although real-time bounds are desirable to quickly eliminate conservatism.\nConsequently, non-causal filtering can be used to accurately estimate $\\dot{x}$ if necessary.\nMoreover, the right hand side of the inequality can be replaced by $D + \\mathcal{E}$ where $\\mathcal{E}$ is the maximum estimation error of the rate vector, i.e., $|\\dot{\\hat{x}}| \\leq \\mathcal{E}$.\nThe proposed method was tested in MATLAB R2018b with the built-in solvers without any code optimization on a 1.6GHz Intel i5 processor.\nThe NLP was initialized with a linear curve $c(s) = (x-x_d)s+x_d$ at each time step.\n\n\n\n\\section{Illustrative Example}\n\\label{sec:example}\nConsider the simplified pitch dynamics of an aircraft \\cite{mccormick1995aerodynamics}\n\\begin{equation*}\n \\left[ \\begin{array}{c} \\dot{\\theta} \\\\ \\dot{\\alpha} \\\\ \\dot{q} \\end{array} \\right] = \\left[ \\begin{array}{c} q \\\\ q - \\bar{L}(\\alpha) \\\\ -k_q q + \\bar{M}(\\alpha) \\end{array} \\right] + \\left[ \\begin{array}{c} 0 \\\\ 0 \\\\ 1 \\end{array} \\right] u,\n\\end{equation*}\nwhere $\\theta$, $\\alpha$, and $q$ are the pitch angle, angle of attack, and pitch rate.\n$\\bar{L}(\\alpha)$ and $\\bar{M}(\\alpha)$ are the aerodynamic lift and moment.\nThe system is not feedback linearizable as the controllability matrix drops rank at $\\bar{L}'(\\alpha)=0$ and is not in strict-feedback form.\nUtilizing flat plate theory \\cite{mccormick1995aerodynamics}, the aerodynamics of a high-performance aircraft are approximately $\\bar{L}(\\alpha) = 0.8\\mathrm{sin}(2\\alpha)$ and 
$\\bar{M}(\\alpha) = -\\ell_{\\alpha} \\bar{L}(\\alpha)$.\nThe parameters $k_q$ and $\\ell_\\alpha$ are unknown but $k_q \\in [0.1~0.8]$ and $\\ell_\\alpha \\in [-3~1]$.\nA metric quadratic in $\\alpha$ was synthesized via gridding for $\\alpha \\in [-5^{\\circ}~50{^\\circ}]$ and $q \\in [-10^{\\circ\/s}~50^{\\circ\/s}]$.\nNote that $\\bar{L}'(\\alpha)=0$ is in the chosen grid range.\nThe function $h_r(q) = q_m - q$ where $q_m = 50^{\\circ\/s}$ can be easily shown to be a valid RaCBF that enforces $q \\leq q_m$.\n\n\nThe desired terminal state $x_d = [180^\\circ~0~0]^\\top$ corresponds to the first portion of the aerobatic maneuver known as the Immelmann turn.\nThe vehicle executes a half loop before executing a half roll (not considered here) resulting in level flight in the opposite direction.\n\\cref{fig:q_i} shows modified aCBFs and RaCBFs exhibit similar behavior in terms of conservatism as they do not utilize the maximum allowable pitch rate.\nHowever, modified aCBFs exhibit high-frequency oscillations due to the chatter in the control input, seen in \\cref{fig:u_i}; a result of the formulation as the safety constraint continuously switches between active and inactive. 
\nHigh-frequency oscillations are also seen in the barrier function in \\cref{fig:h_i} but are absent with RaCBFs.\n\\cref{fig:q_i} shows that RaCBFs with SMID result in less conservatism as 97.9\\% of the maximum allowable pitch rate is utilized.\nMoreover, the set of allowable states is considerably larger as evidenced by the small value of the barrier function in \\cref{fig:h_i}.\nThe parameter error bounds, shown in \\cref{fig:smid_i}, were reduced by 63.0\\% for $k_q$ and 90.5\\% for ${\\ell}_\\alpha$.\n\\cref{fig:comp_time_i} shows the computation time is within real-time constraints and can be easily reduced by utilizing faster solvers or a lower-level programming language.\n\n\\begin{figure}[t!]\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/smid_i.png}\n \\caption{{Parameter bounds.}}\n \\label{fig:smid_i}\n \\end{subfigure}\n \\hspace{0.6em}\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/comp_time_i.png}\n \\caption{{Computation time.}}\n \\label{fig:comp_time_i}\n \\end{subfigure}\n \\caption{Parameter bounds and computation time. (a): Parameter bounds monotonically approach the true parameter values $\\ell_\\alpha^*$ and $k_q^*$. 
(b): Computation time for the proposed controller is well within real-time constraints.}\n \\label{fig:results_comp_i}\n \\vskip -0.2in\n\\end{figure}\n\n\\begin{figure*}[t]\n\\vskip 0.1in\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30, clip,width=1\\textwidth]{figures\/q_c.pdf}\n \\caption{Pitch rate $q$.}\n \\label{fig:q}\n \\end{subfigure}\n \\hfill{}\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30,clip, width=1\\textwidth]{figures\/u_c.pdf}\n \\caption{Control input $u$.}\n \\label{fig:u}\n \\end{subfigure}\n \\hfill{}\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30,clip, width=1\\textwidth]{figures\/h_c.pdf}\n \\caption{Barrier function $h$.}\n \\label{fig:h}\n \\end{subfigure}\n \\caption{Comparison of modified aCBFs \\cref{eq:acbf_m}, RaCBFs, and RaCBFs \\& SMID with a full desired trajectory corresponding to $\\theta_d = -20^\\circ \\mathrm{cos}(t)$. (a): Pitch rate tracking where RaCBFs and aCBFs exhibit similar conservativeness due to model error.\n RaCBF \\& SMID achieves the best performance because model uncertainty is reduced via online estimation. \n (b): Control chatter is observed with aCBFs while RaCBFs generate continuous control inputs. (c): Safety maintained but RaCBFs and aCBFs are conservative due to potential model error. 
RaCBFs \\& SMID are the least conservative since the model is estimated online.\n For tests $k_q^* = 0.2$, $\\ell^*_\\alpha = -1$, $\\Gamma_B = 20$, $\\Gamma_C = 50$, $\\alpha(r) = 10r$, and $D = 0.1$.}\n \\label{fig:results}\n\\vskip -0.2in\n\\end{figure*}\n\nNow consider the scenario where a full desired trajectory described by $\\theta_d = - 20^\\circ \\mathrm{cos}(t)$ is available.\nA metric was synthesized for a new grid range $\\alpha \\in [-60^{\\circ}~60{^\\circ}]$ and $q \\in [-20^{\\circ\/s}~20^{\\circ\/s}]$; a metric quadratic in $\\alpha$ was again found to be valid over the grid range.\nThe function $h_r(q) = 1 - (\\nicefrac{q}{q_m})^2$ where $q_m = 20^{\\circ\/s}$ is a valid RaCBF that enforces $|q| \\leq q_m$.\nThe results in \\cref{fig:results} show the same behavior as in \\cref{fig:results_immelmann}: control input chattering occurs with the modified aCBFs \\cref{fig:u} resulting in high-frequency oscillations in both the pitch rate \\cref{fig:q} and barrier function \\cref{fig:h}. \nChattering does not occur with RaCBFs.\nAdditionally, RaCBFs with SMID again have the best tracking performance in \\cref{fig:q} and are the least conservative in \\cref{fig:h}.\nThe parameter bounds at different time instances are shown in \\cref{fig:smid_sine}.\nThe bounds again monotonically decrease resulting in a reduction of 16.5\\% and 77.3\\% for ${k}_q$ and ${\\ell}_\\alpha$, respectively.\nThe reduction is less than that in the Immelmann turn due to the trajectory not sufficiently exciting $q$ relative to $\\bar{L}(\\alpha)$.\nThe largest reduction occurred at $t=3s$ which is when the barrier function in \\cref{fig:h} becomes less conservative.\nThe computation time shown in \\cref{fig:comp_time_sine} is comparable to that in \\cref{fig:comp_time_i} with the NLP solve time being slightly less as the linear geodesic initialization is a better initial guess for small tracking error. 
\nThis further confirms that the proposed approach can be run in real-time.\n\n\\begin{figure}[t!]\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/smid_sine.png}\n \\caption{{Parameter bounds.}}\n \\label{fig:smid_sine}\n \\end{subfigure}\n \\hspace{0.6em}\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/comp_time_sine.png}\n \\caption{{Computation time.}}\n \\label{fig:comp_time_sine}\n \\end{subfigure}\n \\caption{Parameter bounds and computation time for desired trajectory. (a): Parameter bounds monotonically approach the true parameter values $\\ell_\\alpha^*$ and $k_q^*$. (b): Computation time for the proposed controller is well within real-time constraints.}\n \\label{fig:results_comp_sine}\n \\vskip -0.2in\n\\end{figure}\n\n\\section{Conclusion}\nThis work presented a framework that guarantees safety for uncertain nonlinear systems through parameter adaptation and data-driven model estimation.\nThe unification with a contraction-based adaptive controller allows the approach to be applied to a broad class of systems.\nExtending to systems with probabilistic model bounds, non-parametric uncertainties, and external disturbances is future work.\n\n\\section*{Appendix}\n\\label{sec:appendix}\n\n\\begin{proof}[Proof of \\cref{thm:racbf}]\nConsider the composite candidate CBF $h = h_r(x,\\hat{\\theta}) - \\frac{1}{2} \\tilde{\\theta}^\\top\\Gamma^{-1}\\tilde{\\theta}$,\nwhere the minimum eigenvalue of $\\Gamma$ must satisfy $\\lambda_{\\min}(\\Gamma) \\geq \\frac{\\|\\tilde{\\vartheta}\\|^2}{2h_r(x_r,{\\theta}_r)}$ for any $h_r(x_r,\\theta_r)>0$.\nDifferentiating $h$ with respect to \\cref{eq:unc_dyn},\n\\begin{equation*}\n\\begin{aligned}\n \\dot{h} & = \\dot{h}_r(x,\\hat{\\theta}) - \\tilde{\\theta}^\\top\\Gamma^{-1}\\dot{\\hat{\\theta}} \\\\\n &= \\frac{\\partial h_r}{\\partial x}\\left[ f(x) - \\Delta(x)^\\top \\theta + B(x) u\\right] + \\frac{\\partial 
h_r}{\\partial \\hat{\\theta}} \\dot{\\hat{\\theta}} - \\tilde{\\theta}^\\top\\Gamma^{-1}\\dot{\\hat{\\theta}}.\n\\end{aligned}\n\\end{equation*}\nAdding and subtracting $\\frac{\\partial h_r}{\\partial x}\\Delta(x)^\\top\\left[\\hat{\\theta} - \\Gamma \\left(\\frac{\\partial h_r}{\\partial \\hat{\\theta}}\\right)^\\top \\right]$ and using the definition of $\\Lambda(x,\\hat{\\theta})$,\n\n\\begin{equation*}\n\\begin{aligned}\n \\dot{h} & = \\frac{\\partial h_r}{\\partial x}\\left[ f(x) - \\Delta(x)^\\top \\Lambda(x,\\hat{\\theta}) + B(x) u\\right] + \\frac{\\partial h_r}{\\partial \\hat{\\theta}}\\dot{\\hat{\\theta}} \\\\\n & \\hspace{1.5cm} - \\tilde{\\theta}^\\top\\Gamma^{-1}\\dot{\\hat{\\theta}} + \\frac{\\partial h_r}{\\partial x}\\Delta(x)^\\top\\left[\\tilde{\\theta} - \\Gamma \\left(\\frac{\\partial h_r}{\\partial \\hat{\\theta}}\\right)^\\top \\right].\n\\end{aligned}\n\\end{equation*}\nChoosing $\\dot{\\hat{\\theta}} = \\Gamma \\Delta(x) \\left(\\frac{\\partial h_r}{\\partial x}\\right)^\\top$, then\n\\begin{equation*}\n\\begin{aligned}\n \\dot{h} &= \\frac{\\partial h_r}{\\partial x}\\left[ f(x) - \\Delta(x)^\\top \\Lambda(x,\\hat{\\theta}) + B(x) u\\right] \\\\\n & \\geq - \\alpha\\left( h_r - \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right) \\geq -\\alpha(h),\n\\end{aligned}\n\\end{equation*}\nwhere the first inequality is obtained via the definition of a RaCBF and the second by noting $|\\tilde{\\theta}| \\leq \\tilde{\\vartheta}$ so $h = h_r - \\frac{1}{2}\\tilde{\\theta}^\\top\\Gamma^{-1}\\tilde{\\theta} \\geq h_r - \\frac{1}{2}\\tilde{\\vartheta}^\\top\\Gamma^{-1}\\tilde{\\vartheta}$. 
\nSince $h\\geq0$ and $h_r \\geq h$ $\\forall t$, then $h_r \\geq \\frac{1}{2}\\tilde{\\vartheta}^\\top\\Gamma^{-1}\\tilde{\\vartheta} \\geq 0$ and $\\mathcal{C}^r_\\theta$ is forward invariant.\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{thm:racbf_smid}]\nSince the model uncertainty is changing via estimation, the maximum allowable parameter error is time varying, i.e., $\\tilde{\\vartheta}(t)$.\nFrom \\cref{lemma:smid}, $\\tilde{\\Theta}$ monotonically decreases so $\\dot{\\tilde{\\vartheta}} \\leq 0$. \nLet $h_r$ be a candidate RaCBF, then $\\dot{{h}} = \\dot{h}_{r} - \\tilde{\\vartheta}^\\top\\Gamma^{-1}\\dot{\\tilde{\\vartheta}} \\geq \\dot{h}_{r}$ since $\\dot{\\tilde{\\vartheta}} \\leq 0$ for all $t$.\nInequality \\cref{eq:racbf} in Definition~\\ref{definition:racbf} is then still satisfied for $\\dot{\\tilde{\\vartheta}} \\leq 0$.\nUsing the steps in \\cref{thm:racbf}, the system is safe with respect to $\\mathcal{C}^r_{\\hat{\\theta}}$.\nMoreover, $\\dot{\\tilde{\\vartheta}} \\leq 0$, so as $\\tilde{\\vartheta} \\rightarrow 0$ the set $\\mathcal{C}^r_{\\hat{\\theta}} \\rightarrow \\mathcal{C}_{\\hat{\\theta}}$.\n\\end{proof}\n\n\n\n\n\\noindent \\textbf{Acknowledgements} \nWe thank David Fan for stimulating discussions.\nThis work was supported by the NSF Graduate Research Fellowship Grant No. 
1122374.\n\n\n\\balance\n\\bibliographystyle{ieeetr}\n\n\\section{Introduction}\nState and actuator constraints are often encountered in real-world systems, but systematic feedback controller design remains challenging.\nThe main difficulty arises from needing to \\emph{predict} whether the system will remain in the feasible set when selecting a control input.\nRepeatedly solving a constrained finite-horizon optimal control problem (i.e., model predictive control) is one way to ensure feasibility, \nbut solving a nonlinear optimization in real-time can be difficult.\nAlternatively, one can avoid trajectory optimization entirely by constructing safe invariant sets, i.e., a set of states that guarantee feasibility indefinitely.\nThrough the development of control barrier functions (CBFs)~\\cite{ames2016control}, safe stabilizing controllers can be synthesized by simply solving a quadratic program (QP), an approach that has recently been used in several applications~\\cite{ames2019control}.\nHowever, model error can significantly degrade the performance of these controllers to the extent that safety may no longer be guaranteed.\nWe develop a general framework that guarantees safety through parameter adaptation and online model estimation for uncertain nonlinear systems.\n\nControl barrier functions heavily rely on a model, so it is critical to develop methodologies that maintain safety for uncertain systems.\nIn~\\cite{xu2015robustness} the so-called zeroing CBFs were shown to be Input-to-State stable, a property that was used to prove a superset of a safe set is forward invariant.\nThe size of the superset was characterized in~\\cite{kolathaya2018input} by introducing Input-to-State safety.\nStronger safety guarantees can be obtained via robust optimization as demonstrated in~\\cite{gurriet2018towards}.\nLearning-based methods~\\cite{fan2019bayesian,taylor2019learning} have been developed to address the conservatism of the robust strategies, but these can require extensive offline 
training to substantially improve the model.\nAdaptive CBFs (aCBFs)~\\cite{taylor2019adaptive} use ideas from adaptive control theory to ensure a safe set is forward invariant with online parameter adaptation.\nHowever, aCBFs have a much more restrictive invariance condition that limits the system to remain within level sets of the barrier function, ultimately leading to conservative behavior.\nTo address conservatism, \\cite{taylor2019adaptive} limited parameter adaptation to a region near the boundary of the safe set.\nHowever, the resulting safety controller is not necessarily Lipschitz and can exhibit closed-loop chattering.\n\n\nThe contributions of this work are threefold.\nFirst, robust aCBFs (RaCBFs) are defined and shown to guarantee safety for uncertain nonlinear systems.\nWhen combined with parameter adaptation, RaCBFs ensure forward invariance of a \\emph{tightened set} where the degree of tightening can be selected based on the desired conservatism.\nRaCBFs are far less conservative than aCBFs and result in a locally Lipschitz safety controller.\nSecond, RaCBFs are combined with the data-driven method of set membership identification \\cite{tanaskovic2014adaptive} to safely reduce modeling error and expand the set of allowable states. \nThis is the first work to utilize both parameter adaptation and data-driven model estimation within the context of safety-critical control. 
\nAnd third, RaCBFs are merged with a recent direct adaptive controller \\cite{lopez2019contraction} based on the contraction metric framework \\cite{lohmiller1998contraction,manchester2017control}.\nContraction expresses distance-like functions \\emph{differentially} rather than explicitly, and the existence of a metric only requires stabilizability -- a far weaker condition than those needed for feedback linearization or backstepping -- making the unification more general than existing methods.\nThe approach is demonstrated on the pitch dynamics of an aircraft with uncertain nonlinear aerodynamics.\nThe system is non-invertible, not in strict-feedback form, and has non-polynomial dynamics highlighting the generality of the proposed method.\n\n\\section{Problem Formulation \\& Preliminaries}\nConsider the nonlinear system\n\\protect\\begin{equation}\n\\dot{x} = f(x) - \\Delta(x)^\\top \\theta + B(x)u,\n\\label{eq:unc_dyn}\n\\end{equation}\nwith unknown parameters $\\theta \\in \\mathbb{R}^p$ and known dynamics $\\Delta : \\mathbb{R}^n \\rightarrow \\mathbb{R}^{p\\times n} $, state $x \\in \\mathbb{R}^n$, control input $u \\in \\mathbb{R}^m$, nominal dynamics $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}^{n}$, and control input matrix $B : \\mathbb{R}^n \\rightarrow \\mathbb{R}^{n\\times m}$ with columns $b_i(x)$ $i=1,\\dots,m$.\nLet $\\mathcal{S}^n_+$ be the set of all $n\\times n$ symmetric positive definite matrices. 
\nA smooth Riemannian manifold $\\mathcal{M}$ is equipped with a smooth Riemannian metric $M: \\mathbb{R}^n\\times\\mathbb{R} \\rightarrow \\mathcal{S}^n_+$ that defines an inner product $\\left< \\cdot, \\cdot \\right>_x$ on the tangent space $T_x \\mathcal{M}$ at every point $x$.\nThe metric $M(x,t)$ defines local geometric notions such as angles, length, and orthogonality.\nThe directional derivative of metric $M(x,t)$ along vector $v$ is $\\partial_v M = \\sum_i\\frac{\\partial M}{\\partial x_i} v_i$.\nA parameterized differentiable curve $c: \\left[0,1\\right] \\rightarrow \\mathcal{M}$ is regular if $\\frac{\\partial c}{\\partial s} = c_s \\neq 0$ $\\forall s \\in \\left[0~1\\right]$.\nLet $\\Upsilon(p,q)$ be the family of curves connecting $p,~q\\in \\mathcal{M}$, then a \\emph{geodesic} $\\gamma: \\left[0~1\\right] \\rightarrow \\mathcal{M}$ is the extremum of the energy functional\n\\begin{equation*}\n \\gamma(s) = \\underset{c(s)\\in\\Upsilon(p,q)}{\\argmin}~ E(c,t) = \\int_0^1 c_s^\\top M(c,t) c_s ds,\n\\end{equation*}\nwhere $E$ is the Riemannian energy. 
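The energy functional above can be approximated by simple quadrature once the curve is discretized (the computations later in the paper use the Chebyshev pseudospectral method; the midpoint-rule sketch below, with a diagonal metric and illustrative names, is only meant to convey the idea):

```python
def riemannian_energy(curve, metric, n=200):
    """Approximate E(c) = int_0^1 c_s^T M(c) c_s ds by the midpoint rule.
    `curve` maps s in [0, 1] to a point (list of floats); `metric` maps a
    point to the diagonal of M. Finite differences approximate c_s."""
    energy, ds = 0.0, 1.0 / n
    for i in range(n):
        p0, p1 = curve(i * ds), curve((i + 1) * ds)
        mid = [(a + b) / 2.0 for a, b in zip(p0, p1)]
        cs = [(b - a) / ds for a, b in zip(p0, p1)]   # curve speed c_s
        energy += sum(m * v * v for m, v in zip(metric(mid), cs)) * ds
    return energy
```

For a constant (flat) metric, the straight line between endpoints attains the minimum energy, consistent with geodesics reducing to straight lines in flat space.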
\nIf the manifold $\\mathcal{M}$ is a complete metric space, such as $\\mathbb{R}^n$, $n$-sphere $\\mathbb{S}^n$, or any of their respective closed subsets, then a geodesic is guaranteed to exist by the Hopf-Rinow theorem \\cite{carmo1992riemannian}.\nIn the sequel, the time argument in $M(c,t)$ and $E(c,t)$ is dropped for clarity.\n\n\n\\section{Adaptive Safety}\n\\label{sec:acbf}\n\\subsection{Background}\nFirst consider the nominal dynamics of \\cref{eq:unc_dyn}, i.e., $\\Delta(x) = 0$.\nLet a closed convex set $\\mathcal{C} \\subset \\mathbb{R}^n$ be a 0-superlevel set of a continuously differentiable function $h: \\mathbb{R}^n \\rightarrow \\mathbb{R}$ where\n\\begin{equation*}\n \\begin{aligned}\n \\mathcal{C} & = \\left\\{ x \\in \\mathbb{R}^n : h(x) \\geq 0 \\right\\} \\\\\n \\partial \\mathcal{C} & = \\left\\{ x \\in \\mathbb{R}^n : h(x) = 0 \\right\\} \\\\\n \\text{Int}\\left(\\mathcal{C}\\right) & = \\left\\{ x \\in \\mathbb{R}^n : h(x) > 0 \\right\\}.\n \\end{aligned}\n\\end{equation*}\nIf the nominal dynamics are locally Lipschitz, then given an initial condition $x_0$, there exists a maximum time interval $I(x_0) = [t_0,~T)$ such that $x(t)$ is a unique solution on $I(x_0)$. 
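The mechanics reviewed in this subsection can be seen in a one-dimensional toy problem (an illustration, not from the paper): for $\dot{x} = -1 + u$ with $h(x) = x$ and $\alpha(r) = r$, the min-norm controller satisfying $\dot{h} \geq -\alpha(h)$ has the closed form $u^* = \max(0, 1 - x)$, and the 0-superlevel set $\{x \geq 0\}$ is rendered forward invariant:

```python
def min_norm_safe_u(x, alpha=1.0):
    """Closed-form CBF-QP:  min (1/2) u^2  s.t.  -1 + u >= -alpha*h(x)
    for the scalar toy system xdot = -1 + u with h(x) = x, i.e. u >= 1 - alpha*x."""
    return max(0.0, 1.0 - alpha * x)

def simulate(x0, dt=0.001, steps=5000):
    """Explicit-Euler rollout of the toy system under the safe controller."""
    traj, x = [x0], x0
    for _ in range(steps):
        x += dt * (-1.0 + min_norm_safe_u(x))
        traj.append(x)
    return traj
```

The resulting policy is continuous and piecewise linear in $x$, hence locally Lipschitz, which is why the min-norm QP yields a well-posed closed loop.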
\nThe following definitions are largely taken from \\cite{ames2016control,ames2019control}.\n\\begin{definition}\n\\label{def:fi}\nThe set $\\mathcal{C}$ is \\emph{forward invariant} if for every $x_0\\in \\mathcal{C}$, $x(t) \\in \\mathcal{C}$ for all $t\\in I(x_0)$.\n\\end{definition}\n\n\\begin{definition}\n\\label{def:safety}\nThe nominal system is \\emph{safe} with respect to set $\\mathcal{C}$ if the set $\\mathcal{C}$ is forward invariant.\n\\end{definition}\n\n\\begin{definition}\nA continuous function $\\alpha : \\mathbb{R} \\rightarrow \\mathbb{R}$ is an \\emph{extended class $\\mathcal{K}_\\infty$ function} if it is strictly increasing, $\\alpha(0) = 0$, and is defined on the entire real line.\n\\end{definition}\n\n\\begin{definition}\nLet $\\mathcal{C}$ be a 0-superlevel set for a continuously differentiable function $h:\\mathbb{R}^n\\rightarrow\\mathbb{R}$, then $h$ is a \\emph{control barrier function} if there exists an extended class $\\mathcal{K}_\\infty$ function $\\alpha$ such that\n\\begin{equation}\n\\label{eq:cbf}\n \\underset{u \\in \\mathcal{U}}{\\text{sup}}~ \\left[\\frac{\\partial h}{\\partial x}(x)\\left(f(x) + B(x) u\\right)\\right] \\geq - \\alpha(h(x)).\n\\end{equation}\n\\end{definition}\n\n\\begin{theorem}\nLet $\\mathcal{C}\\subset\\mathbb{R}^n$ be a 0-superlevel set of a continuously differentiable function $h:\\mathbb{R}^n\\rightarrow\\mathbb{R}$, if $h$ is a CBF on $\\mathcal{C}$ then any locally Lipschitz continuous controller satisfying \\cref{eq:cbf} renders the set $\\mathcal{C}$ safe for the nominal system. \n\\end{theorem}\n\n\\subsection{Adaptive CBFs}\nAdaptive CBFs (aCBFs) \\cite{taylor2019adaptive} provide a general framework to guarantee safety through parameter adaptation for systems with structured uncertainties.\nThe notion of safety for uncertain systems must be extended to a family of safe sets $\\mathcal{C}_{\\theta}$ parameterized by $\\theta$. 
\nMore precisely, the safe sets in the family are 0-superlevel sets of a continuously differentiable function $h_a: \\mathbb{R}^n \\times \\mathbb{R}^p \\rightarrow \\mathbb{R}$.\nIf the uncertain dynamics in \\cref{eq:unc_dyn} are locally Lipschitz then the definitions of forward invariance and safety can be directly extended to $\\mathcal{C}_\\theta$.\n\n\\begin{definition}[\\cite{taylor2019adaptive}]\n\\label{def:acbf}\nLet $\\mathcal{C}_\\theta$ be a family of 0-superlevel sets parameterized by $\\theta$ for a continuously differentiable function $h_a:\\mathbb{R}^n\\times \\mathbb{R}^p\\rightarrow\\mathbb{R}$, then $h_a$ is an \\emph{adaptive control barrier function} if for all $\\theta$\n\\begin{equation}\n\\label{eq:acbf}\n \\underset{u \\in \\mathcal{U}}{\\text{sup}}~ \\left[ \\frac{\\partial h_a}{\\partial x}(x,\\theta)\\left( f(x) - \\Delta(x)^\\top \\Lambda(x,\\theta) + B(x) u\\right)\\right] \\geq 0,\n\\end{equation}\nwhere $\\Lambda(x,\\theta) := \\theta - \\Gamma \\left(\\frac{\\partial h_a}{\\partial \\theta}(x,\\theta)\\right)^\\top$ and $\\Gamma \\in \\mathcal{S}^p_+$ is a symmetric positive definite matrix.\n\\end{definition}\n\nA controller that satisfies \\cref{eq:acbf} can be combined with an adaptation law to render the uncertain system safe with respect to $\\mathcal{C}_{{\\theta}}$ \\cite{taylor2019adaptive}.\nHowever, \\cref{eq:acbf} makes the level sets of $h_a$ forward invariant, so it is a much stricter condition than \\cref{eq:cbf}.\nMore precisely, the distance to the boundary of the safe set cannot decrease, i.e., $\\dot{h}_a(x,\\theta) \\geq 0$ for all time (\\cref{fig:acbf}).\nThis can lead to extremely conservative behavior as the system only operates in a set that is monotonically shrinking.\nIn \\cite{taylor2019adaptive}, a modified aCBF was proposed\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:acbf_m}\n \\bar{h}_a(x,\\theta) = \\begin{cases} \n \\sigma^2 ~ &\\text{if} ~ h_a(x,\\theta)\\geq\\sigma \\\\\n \\sigma^2 - 
(h_a(x,\\theta)-\\sigma)^2 ~~ &\\text{otherwise},\n \\end{cases}\n\\end{aligned}\n\\end{equation}\nwhich satisfies \\cref{eq:acbf} if $h_a$ is a valid aCBF.\nThis modification expands the set of allowable states but the resulting controller is not necessarily Lipschitz and can exhibit high-frequency oscillations in closed-loop, as shown in \\cref{ex:lipschitz}. \n\n\\begin{example}\n\\label{ex:lipschitz}\nConsider the uncertain system $\\dot{x} = - \\theta + u$ with $\\theta>0$ and aCBF $h_a(x) = x - \\ubar{x}$.\nLet the controller $\\kappa$ be the solution to $\\kappa = {\\argmin}~\\frac{1}{2} u^2$ subject to $\\dot{\\bar{h}}_a(x) \\geq 0$.\nThen\n\\begin{equation}\n\\label{eq:u_ex}\n\\begin{aligned}\n \\kappa = \\begin{cases}\n 0 ~ &\\text{if} ~ h_a(x)\\geq\\sigma \\\\\n \\mathrm{max}(0,\\hat{\\theta}) ~~ &\\text{otherwise},\n \\end{cases}\n\\end{aligned}\n\\end{equation}\nwhere $\\hat{\\theta}$ is the estimate of $\\theta$ and is modified based on \\cite{taylor2019adaptive}\n\\begin{equation}\n\\label{eq:ha_ex}\n\\begin{aligned}\n \\dot{\\hat{\\theta}} = \\begin{cases}\n 0 ~ &\\text{if} ~ h_a(x)\\geq\\sigma \\\\\n - \\Gamma \\left[h_a(x)-\\sigma\\right] \\frac{\\partial h_a}{\\partial x} ~~ &\\text{otherwise},\n \\end{cases}\n\\end{aligned}\n\\end{equation}\nwith $\\Gamma > 0$. 
\nFor $\\hat{\\theta}(0) = \\hat{\\theta}_0 \\leq 0$ then $\\kappa = 0$ so the closed-loop response is $x_{cl}(t) = - \\theta (t-t_0) + x_0$ where $x_{cl}(t_0) = x_0 > \\ubar{x}$ and $t \\geq t_0$.\nFrom the adaptation law \\cref{eq:ha_ex}, it is easy to see that $\\dot{\\hat{\\theta}} \\geq 0 $ which will necessarily lead to $\\hat{\\theta} > 0$ since $h_a \\leq \\sigma$ until $\\kappa > \\theta$.\nFor $\\hat{\\theta} > 0$, the closed-loop response becomes\n\\begin{equation}\n\\label{eq:cl_ex}\n \\begin{aligned}\n x_{cl}(t) = \\begin{cases}\n -\\theta (t-t_0) + x_0 ~ &\\text{if} ~ h_a(x)\\geq\\sigma \\\\\n \\hspace{0.28cm}\\tilde{\\theta} (t-t_0') + x_0' ~~ &\\text{otherwise},\n \\end{cases}\n \\end{aligned}\n\\end{equation}\nwhere $\\tilde{\\theta}:=\\hat{\\theta} - \\theta$ and $x_{cl}(t_0') = x_0'$.\nSince $\\tilde{\\theta} > 0$, \\cref{eq:cl_ex} will continuously switch between its two solutions.\nThe control policy $\\kappa$ must then also switch between $0$ and $\\hat{\\theta}$ based on \\cref{eq:u_ex}.\nFurthermore, $\\kappa$ is not locally Lipschitz continuous and will exhibit high-frequency oscillations of magnitude $\\hat{\\theta}$ in closed-loop.\n\\end{example}\n\nThe intuition behind \\cref{ex:lipschitz} is that chatter arises due to the barrier condition switching between being trivially satisfied, i.e., $\\dot{\\bar{h}}_a = 0 \\geq 0$ for all $u$, to satisfied only for a particular $u$, i.e., a $u$ so that $\\dot{\\bar{h}}_a \\geq 0$.\nThe approach developed in this work addresses the conservatism of aCBFs and results in a locally Lipschitz continuous controller.\n\n\n\\subsection{Robust aCBFs}\n\\label{sub:racbf}\nThis section will show that a \\emph{tightened} set can be made forward invariant if the unknown model parameters are bounded and the parameter adaptation rate is an \\emph{admissible} (to be defined) symmetric positive definite matrix.\n\n\\begin{assumption}\nThe unknown parameters $\\theta$ belong to a known closed convex set 
$\\Theta$.\nThe parameter estimation error $\\tilde{\\theta}:= \\hat{\\theta} - \\theta$ then also belongs to a known closed convex set $\\tilde{\\Theta}$ and the maximum possible parameter error is $\\tilde{\\vartheta}$.\n\\end{assumption}\n\nLet $\\mathcal{C}^r_\\theta$ be a family of superlevel sets parameterized by $\\theta$ for a continuously differentiable function ${h}_r: \\mathbb{R}^n \\times \\mathbb{R}^p \\rightarrow \\mathbb{R}$ \n\\begin{equation*}\n \\begin{aligned}\n \\mathcal{C}^r_\\theta & = \\left\\{ x \\in \\mathbb{R}^n : {h}_r(x,\\theta) \\geq \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right\\} \\\\\n \\partial \\mathcal{C}^r_\\theta & = \\left\\{ x \\in \\mathbb{R}^n : {h}_r(x,\\theta) = \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right\\} \\\\\n \\text{Int}\\left(\\mathcal{C}^r_\\theta\\right) & = \\left\\{ x \\in \\mathbb{R}^n : {h}_r(x,\\theta) > \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right\\},\n \\end{aligned}\n\\end{equation*}\nwhere $\\Gamma \\in \\mathscr{S}^p_+ \\subset \\mathcal{S}^p_+$ is an admissible symmetric positive definite matrix that will dictate the parameter adaptation rate.\nThe set $\\mathcal{C}^r_\\theta$ can be viewed as a \\emph{tightened} set with respect to $\\mathcal{C}_\\theta$, i.e., $\\mathcal{C}^r_\\theta \\subset \\mathcal{C}_\\theta$, shown in \\cref{fig:racbf}.\nOne can select the desired subset $\\mathcal{C}^r_\\theta$ to be made forward invariant \\textit{a priori} by choosing $h_r(x_r,\\theta_r) >0 $ for appropriate $x_r,~\\theta_r$ so ${h}_r(x_r,\\theta_r) = \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta}$.\nTo reduce conservatism, one can either 1) have fast parameter adaptation or 2) reduce model error.\nThe first scenario can lead to well-known undesirable effects in practice so the second scenario is the most viable, and will be explored more in \\cref{sec:smid}.\n\\cref{eq:unc_dyn} is 
again assumed to be locally Lipschitz so \\cref{def:fi,def:safety} hold.\n\n\\begin{definition}\n\\label{definition:racbf}\nLet $\\mathcal{C}^r_\\theta$ be a family of superlevel sets parameterized by $\\theta$ for a continuously differentiable function $h_r:\\mathbb{R}^n\\times \\mathbb{R}^p\\rightarrow\\mathbb{R}$, then $h_r$ is a \\emph{robust adaptive control barrier function} if there exists an extended class $\\mathcal{K}_\\infty$ function $\\alpha$ such that for all $\\theta\\in\\Theta$ \n\\begin{equation}\n\\label{eq:racbf}\n\\begin{aligned}\n &\\underset{u \\in \\mathcal{U}}{\\text{sup}}~ \\left[ \\frac{\\partial h_r}{\\partial x}(x,\\theta)\\left[ f(x) - \\Delta(x)^\\top \\Lambda(x,\\theta) + B(x) u\\right]\\right] \\\\ \n \n & \\hspace{3cm} \\geq -\\alpha\\left(h_r(x,\\theta) - \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta}\\right),\n\\end{aligned}\n\\end{equation}\nwhere $\\Lambda(x,\\theta) := \\theta - \\Gamma \\left(\\frac{\\partial h_r}{\\partial \\theta}(x,\\theta)\\right)^\\top$, $\\tilde{\\vartheta}$ is the maximum possible parameter error, and $\\Gamma \\in \\mathscr{S}^p_+ \\subset \\mathcal{S}^p_+$ is an admissible symmetric positive definite matrix.\n\\end{definition}\n\nThe invariance condition \\cref{eq:racbf} is reminiscent of that in \\cref{eq:cbf} and is less conservative than that in \\cref{eq:acbf} because the system is allowed to approach the boundary of $\\mathcal{C}^r_\\theta$.\n\\cref{thm:racbf} shows that the existence of a RaCBF, coupled with an adaptation law, renders the set $\\mathcal{C}^r_\\theta$ forward invariant and hence safe.\n\n\\begin{theorem}\n\\label{thm:racbf}\nLet $\\mathcal{C}^r_{\\hat{\\theta}}\\subset\\mathbb{R}^n$ be a superlevel set of a continuously differentiable function $h_r:\\mathbb{R}^n \\times \\mathbb{R}^p\\rightarrow\\mathbb{R}$. If $h_r$ is a RaCBF on $\\mathcal{C}^r_{\\hat{\\theta}}$, then any locally Lipschitz continuous controller satisfying \\cref{eq:racbf} renders the set 
$\\mathcal{C}^r_{\\hat{\\theta}}$ safe for the uncertain system with adaptation law and adaptation gain\n\\begin{equation*}\n\\dot{\\hat{\\theta}} = \\Gamma \\Delta(x) \\left(\\frac{\\partial h_r}{\\partial x}(x,\\hat{\\theta}) \\right)^\\top, ~~ \\lambda_{\\min}(\\Gamma) \\geq \\frac{\\|\\tilde{\\vartheta}\\|^2}{2h_r(x_r,{\\theta}_r)},\n\\end{equation*}\nwhere $\\tilde{\\vartheta}$ is the maximum possible parameter error, $h_r(x_r,\\theta_r)>0$ can be chosen freely based on the desired conservatism, and $\\Gamma \\in \\mathscr{S}^p_+ \\subset \\mathcal{S}^p_+$ is an admissible symmetric positive definite matrix.\nFurthermore, the original set $\\mathcal{C}_{\\hat{\\theta}}$ is also safe for the uncertain system.\n\\end{theorem}\n\n\n\n\\begin{remark}\n\\label{remark:proj}\nThe projection operator \\cite{slotine1991applied} can be used to enforce parameter bounds by modifying the above adaptation law as opposed to capturing them explicitly with $h_r$. \nThis can simplify the design of $h_r$ without forfeiting safety.\nThe proof is omitted but one can show that a positive semi-definite term appears in the same composite candidate CBF used in \\cref{thm:racbf} when adaptation is temporarily stopped.\n\\end{remark}\n\n\n\\begin{figure}[t!]\n\\vskip 0.1in\n \\begin{subfigure}{.48\\columnwidth}\n \\centering\n \\includegraphics[trim=100 0 100 0, clip,width=1\\linewidth]{figures\/acbf.pdf}\n \\caption{{Safe set with adaptive control barrier functions (aCBFs).}}\n \\label{fig:acbf}\n \\end{subfigure}\n \\hspace{0.3em}\n \\begin{subfigure}{.48\\columnwidth}\n \\centering\n \\includegraphics[trim=100 0 100 0, clip,width=1\\linewidth]{figures\/racbf.pdf}\n \\caption{{Safe set with robust adaptive control barrier functions (RaCBFs).}}\n \\label{fig:racbf}\n \\end{subfigure}\n \\caption{Visual comparison of safe sets with adaptive and robust adaptive control barrier functions. (a): System is restricted to level sets (black dashed lines) of aCBF $h_a$. 
(b): System allowed to operate in larger set with RaCBF $h_r$ reducing conservatism.}\n \\vskip -0.25in\n\\end{figure}\n\nSeveral remarks can be made about \\cref{thm:racbf}.\nFirst, safety is \\emph{guaranteed} for all possible parameter realizations through adaptation with minimal conservatism.\nHence, RaCBFs expand and improve the \\emph{adaptive safety} paradigm.\nSecond, the minimum eigenvalue condition for the adaptation rate depends on the desired conservatism, i.e., the degree of tightening by choice of $h_r(x_r,\\theta_r)$.\nFor low conservatism, i.e., a small $h_r(x_r,\\theta_r)$ value, the adaptation rate must be large so the parameter estimates can change quickly to ensure forward invariance of $\\mathcal{C}^r_\\theta$.\nThere is thus a fundamental trade-off between conservatism and parameter adaptation rate that must be weighed carefully given the well-known undesirable effects of high-gain adaptation.\nThird, the RaCBF condition in \\cref{eq:racbf} can be used as a safety filter for an existing tracking controller or as a constraint within an optimization.\n\\cref{sec:result} will show the latter but with a contraction-based controller.\nLastly, if the adaptation gain must be small (or the maximum parameter error is large) then RaCBFs can be conservative albeit not to the same extent as aCBFs.\nBetter performance can be obtained if the model parameters can be robustly and accurately estimated.\nInstead of obtaining a point-estimate of the parameters, this work will instead identify the \\emph{set} of possible parameter values.\n\n\\subsection{RaCBFs with Set Membership Identification}\n\\label{sec:smid}\nSet membership identification (SMID) is a model estimation technique that constructs an unfalsified set of model parameters.\nSMID was originally developed to identify transfer functions for uncertain linear systems \\cite{pararrieter1992set}, but has been more recently applied to linear \\cite{tanaskovic2014adaptive,lorenzen2017adaptive} and 
nonlinear adaptive MPC \\cite{lopez2019adaptive}.\nAssume that the true parameters $\\theta^*$ belong to an initial set of possible parameters $\\Theta^0$, i.e., $\\theta^* \\in \\Theta^0$.\nGiven $k$ state, input, and rate measurements (denoted as $x_{1:k}$ and so forth), a set $\\Xi$ can be constructed such that\n\\begin{equation*}\n \n \n\\Xi = \\left\\{ \\varrho : |\\dot{x}_{1:k} - f_{1:k} + \\Delta_{1:k}^\\top \\varrho - B_{1:k}u_{1:k}| \\leq D \\right\\},\n\\end{equation*}\nwhere $D$ can be treated as a tuning parameter that dictates the conservativeness of SMID. \nIt can also represent a disturbance or noise bound \\cite{tanaskovic2014adaptive,lopez2019adaptive}.\nThe set of possible parameter values can then be updated via $\\Theta^{j+1} = \\Theta^j \\cap \\Xi$ for all $j\\geq 0$.\nIn practice, $\\Xi$ can be found by solving a linear program and set intersection can be efficiently done through a combination of min and max operations.\nRestricting $\\hat{\\theta} \\in \\Theta^j$ ensures $\\tilde{\\theta} \\in \\Tilde{\\Theta}^j$ where $\\Tilde{\\Theta}^j$ is the set of possible parameter errors.\nThe following lemma shows the advantage of performing set identification over point-estimation techniques.\n\\begin{lemma}\n\\label{lemma:smid}\nModel uncertainty monotonically decreases with set membership identification, i.e., $\\tilde{\\Theta}^{j+1} \\subseteq \\tilde{\\Theta}^j$ for all $j\\geq 0$.\n\\end{lemma}\n\\begin{proof}\nSince $\\Theta^{j+1} = \\Theta^{j} \\cap \\Xi$, every element of $\\Theta^{j+1}$ also belongs to $\\Theta^{j}$, so $\\Theta^{j+1}\\subseteq\\Theta^j$ and hence $\\tilde{\\Theta}^{j+1} \\subseteq \\tilde{\\Theta}^j$.\n\\end{proof}\n\nThe motivation to combine SMID with RaCBFs is to enlarge the tightened set $\\mathcal{C}_\\theta^r$.\nTo do so, one must ensure $\\mathcal{C}^r_\\theta$ remains forward invariant as the set of model parameters is updated.\nIn general this is non-trivial to prove since the maximum possible parameter error 
is now time varying.\nHowever, \\cref{thm:racbf_smid} shows that safety is maintained if the model uncertainty monotonically decreases.\n\n\\begin{theorem}\n\\label{thm:racbf_smid}\nLet $\\mathcal{C}^r_\\theta$ be a superlevel set of a continuously differentiable function ${h}_r : \\mathbb{R}^n\\times \\mathbb{R}^p \\rightarrow \\mathbb{R}$. If the system is safe on $\\mathcal{C}^r_\\theta$ then it remains safe if the maximum allowable model parameter error $\\tilde{\\vartheta}$ monotonically decreases. \nMoreover, the tightened set $\\mathcal{C}^r_\\theta$ converges to $\\mathcal{C}_\\theta$ monotonically.\n\\end{theorem}\n\n\nCombining RaCBFs and SMID provides a mechanism to 1) modify parameters via adaptation to achieve safety and 2) update the model to reduce uncertainty and conservatism.\nSafety is \\emph{guaranteed} even as the model parameters are modified online, and the system's performance will only improve as more data is collected.\nThis \\emph{adaptive data-driven safety} paradigm can be merged with a stabilizing adaptive controller for safe reference tracking.\nTo maximize the generality of the proposed unification, the adaptive controller must be applicable to a broad class of nonlinear systems.\n\n\\section{Adaptive Control with Contraction Metrics}\n\\label{sec:accm}\nSeveral adaptive control techniques have been proposed for nonlinear systems, including methods based on feedback linearization, variable structure, and backstepping (see \\cite{slotine1991applied,miroslav1995nonlinear}).\nThese methods are limited to certain classes of systems because they rely on \\emph{explicitly} constructing a control Lyapunov function (CLF) to prove stability.\nThis work will instead utilize a \\emph{differential} approach based on contraction analysis that can be applied to a broad class of systems.\n\n\\begin{figure}[t!]\n\\vskip 0.05in\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/geodesic.pdf}\n 
\\caption{{Geodesic and arbitrary curve connecting current and desired state.}}\n \\label{fig:geodesig}\n \\end{subfigure}\n \\hspace{0.6em}\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/manifold.pdf}\n \\caption{{Differential CLFs along geodesic connecting current and desired state.}}\n \\label{fig:manifold}\n \\end{subfigure}\n \\caption{Geodesic and differential CLF visualization. (a): Geodesic (grey) connecting current $x$ (red) and desired $x_d$ (blue) state. (b): Differential CLFs are integrated along geodesic to achieve exponential convergence.}\n \\label{fig:dclf}\n \\vskip -0.2in\n\\end{figure}\n\n\\subsection{Contraction Metrics}\nThe nominal differential dynamics of \\cref{eq:unc_dyn} are $\\dot{\\delta}_x = A(x,u)\\delta_x + B(x)\\delta_u$ where $A(x,u) = \\frac{\\partial f}{\\partial x} + \\sum_{i=1}^m \\frac{\\partial b_i}{\\partial x}u_i$.\nContraction analysis searches for a \\emph{control contraction metric} (CCM) $M(x)$ such that a \\emph{differential} CLF $\\delta V = \\delta_x^\\top M(x) \\delta_x$ satisfies $\\delta \\dot{V}\\leq - 2 \\lambda \\delta V$ for all $x$.\nA global CLF can be obtained by integrating along a geodesic $\\gamma(s)$, illustrated in \\cref{fig:dclf}, with $\\gamma(0) = x_d$ and $\\gamma(1) = x$ where $x_d$ and $x$ are the desired and current state.\nThe Riemannian energy and tracking error both converge to zero exponentially, i.e., $\\dot{E} \\leq - 2 \\lambda E$.\nLet $W(x) = M(x)^{-1}$; then $M(x)$ is a CCM if \\cite{manchester2017control}\n\\begin{gather}\n B^\\top_{\\perp}\\left( W A ^\\top + A W - \\dot{W} + 2 \\lambda W \\right)B_{\\perp} \\preceq 0 \\tag{C1} \\label{eq:ccm_c1} \\\\\n \\partial_{b_i}W - W \\frac{\\partial b_i}{\\partial x}^\\top - \\frac{\\partial b_i}{\\partial x} W = 0 ~~ i=1,\\dots,m \\tag{C2} \\label{eq:ccm_c2}\n\\end{gather}\nwhere $B_\\perp(x)$ is the annihilator matrix of $B(x)$, i.e., $B^\\top_\\perp B = 0$.\n\\cref{eq:ccm_c1} ensures the 
dynamics orthogonal to $u$ are contracting and is a stabilizability condition.\n\\cref{eq:ccm_c2} requires that the column vectors of $B(x)$ form a Killing vector for the dual metric $W(x)$ leading to simpler controllers \\cite{manchester2017control}. \n\\cref{eq:ccm_c1} and \\cref{eq:ccm_c2} will be referred to as the \\emph{strong CCM conditions}.\n\n\n\n\n\n\\subsection{Adaptive Control \\& Contraction}\nA novel adaptive control method was developed in \\cite{lopez2019contraction} for closed-loop contracting systems with \\emph{extended matched uncertainties}, i.e., $\\Delta(x)^\\top \\theta \\in \\text{span}\\{B,ad_fB\\}$ where $ad_fB$ is the Lie bracket of the vector fields $f(x)$ and $B(x)$.\nTo stabilize such systems, the \\emph{parameter-dependent} metric $M(x,\\hat{\\theta})$ was introduced and must satisfy the strong CCM conditions \\emph{for all} possible $\\hat{\\theta}$.\nThis led to the following result.\n\n\\begin{theorem}[\\cite{lopez2019contraction}]\n\\label{thm:exMatched}\nIf a parameter-dependent metric can be computed for \\cref{eq:unc_dyn} with extended matched uncertainties, then the closed-loop system is asymptotically stable with\n\\begin{equation}\n \\dot{\\hat{\\theta}} = - \\Gamma \\Delta(x) M(x,\\hat{\\theta})\\gamma_s(1)\n \\label{eq:accm}\n\\end{equation}\nwhere $\\gamma_s(s):=\\frac{\\partial \\gamma}{\\partial s}$ is the geodesic speed and $\\Gamma \\in \\mathcal{S}^p_+$ is a symmetric positive definite matrix.\n\\end{theorem}\n\n\\begin{remark}\nFor \\emph{matched} uncertainties, the metric is \\emph{independent} of the unknown parameters $\\hat{\\theta}$ \\cite[Lemma 1]{lopez2019contraction}, simplifying its computation.\nOtherwise, sum-of-squares or robust optimization must be utilized to compute $M(x,\\hat{\\theta})$.\n\\end{remark}\n\n\\begin{remark}\nSeveral modifications can be made to \\cref{eq:accm} that improve transients or robustness including the projection operator discussed in \\cref{sub:racbf} (see 
\\cite{lopez2019contraction}).\n\\end{remark} \n\n\n\\subsection{Offline Design \\& Online Computation}\n\\label{sub:computation}\nA contraction metric is computed offline via sum-of-squares programming for polynomial systems \\cite{manchester2017control} or by imposing \\cref{eq:ccm_c1,eq:ccm_c2} at sampled points in the state space; a process called gridding.\nGeodesics are computed online at each time step by solving a nonlinear program (NLP) with the current state.\nGeodesics are often guaranteed to exist by the Hopf-Rinow theorem and are less expensive to compute than solving nonlinear MPC.\nGiven a geodesic $\\gamma(s)$, the Riemannian energy can be interpreted as a CLF so a pointwise min-norm controller similar to that in \\cite{primbs2000receding} can be found by solving the QP\n\\begin{equation*}\n \\begin{aligned}\n & u^* = ~\\underset{u\\in \\mathcal{U}}\\argmin ~\\frac{1}{2} u^\\top u \\\\\n & \\text{s.t.} ~\\gamma_s(1)^\\top M(x,\\hat{\\theta})\\dot{\\hat{x}} - \\gamma_s(0)^\\top M(x_d,\\hat{\\theta})\\dot{x}_d \\leq - \\lambda E(\\gamma(s),\\hat{\\theta}) \n \\end{aligned}\n\\end{equation*}\nwhere $\\hat{\\theta}$ is the current parameter estimate, $\\gamma_s$ is the geodesic speed, $\\dot{\\hat{x}}$ is \\cref{eq:unc_dyn} but with $\\hat{\\theta}$, and $\\dot{x}_d$ is the desired dynamics for the desired state $x_d$.\nThe stability constraint imposes $\\dot{E} \\leq - 2 \\lambda E$ where $\\dot{E}$ is the first variation of the Riemannian energy, which has a known form \\cite{carmo1992riemannian}, and results in exponential convergence of the tracking error.\nThis is more general than traditional Lyapunov stability analysis as a distance-like function does not need to be computed explicitly.\nSafety can be directly embedded in the above QP, resulting in a single optimization for a safe stabilizing controller.\n\n\n\\begin{figure*}[t]\n\\vskip 0.1in\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30, 
clip,width=1\\textwidth]{figures\/q_c_i.pdf}\n \\caption{Pitch rate $q$.}\n \\label{fig:q_i}\n \\end{subfigure}\n \\hfill{}\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30,clip, width=1\\textwidth]{figures\/u_c_i.pdf}\n \\caption{Control input $u$.}\n \\label{fig:u_i}\n \\end{subfigure}\n \\hfill{}\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30,clip, width=1\\textwidth]{figures\/h_c_i.pdf}\n \\caption{Barrier function $h$.}\n \\label{fig:h_i}\n \\end{subfigure}\n \\caption{Comparison of modified aCBFs \\cref{eq:acbf_m}, RaCBFs, and RaCBFs \\& SMID for desired terminal state $x_d = [180^\\circ~0~0]^\\top$ (Immelmann turn) and maximum pitch rate $q_m$. (a): Pitch rate tracking where RaCBFs and the modified aCBFs exhibit similar conservatism due to model error.\n RaCBF \\& SMID allows the aircraft to utilize 97.9\\% of the maximum allowable pitch rate.\n (b): Control chatter is observed with the modified aCBFs while RaCBFs generate continuous control inputs. (c): Safety is maintained but RaCBFs and the modified aCBFs are conservative due to potential model error. 
RaCBFs \\& SMID permit states closer to the boundary of the safe set without losing safety guarantees.\n For tests $k_q^* = 0.2$, $\\ell^*_\\alpha = -1$, $\\Gamma_B = 20$, $\\Gamma_C = 50$, $\\alpha(r) = 10r$, and $D = 0.1$.}\n \\label{fig:results_immelmann}\n\\vskip -0.2in\n\\end{figure*}\n\n\\section{Adaptive \\& Data-Driven Safety}\n\\label{sec:result}\nA safe and stabilizing controller can be computed by unifying RaCBFs, SMID, and adaptive control with contraction.\nThe individual components of the controller are summarized below with their respective computational complexity.\n\\\\[4pt]\n\\noindent \\textbf{1) Compute geodesic (NLP)}\n\\begin{equation*}\n \n \\gamma(s) = \\underset{c(s)\\in\\Upsilon(x,x_d)}{\\argmin}~ E(c,\\hat{\\theta}_C) = \\int_0^1 c_s^\\top M(c,\\hat{\\theta}_C) c_s ds\n\\end{equation*}\n\n\\noindent \\textbf{2) Compute controller (QP \\& Quadrature)}\n\\begin{align*}\n & \\kappa = ~\\underset{u \\in \\mathcal{U}}{\\argmin} ~ \\frac{1}{2} u^\\top u + r \\epsilon^2 \\\\\n &\\text{s.t.} ~\\gamma_s(1)^\\top M(x,\\hat{\\theta}_C)\\dot{\\hat{x}} - \\gamma_s(0)^\\top M(x_d,\\hat{\\theta}_C)\\dot{x}_d \\\\\n & \\hspace{4.6cm} \\leq - \\lambda E(\\gamma(s),\\hat{\\theta}_C)+ \\epsilon \\nonumber \\\\\n & \\hspace{.4cm} \\frac{\\partial h_r}{\\partial x}(x,\\hat{\\theta}_B)\\left[ f(x) - \\Delta(x)^\\top \\Lambda(x,\\hat{\\theta}_B) + B(x) u\\right] \\\\\n & \\hspace{3.1cm} \\geq -\\alpha\\left({h}_r(x,\\hat{\\theta}_B) - \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta}\\right) \\nonumber\n\\end{align*}\n\\noindent \\textbf{3) Update parameters (Quadrature)} \\label{item:quad}\n\\begin{equation*}\n\\begin{aligned}\n & \\dot{\\hat{\\theta}}_C = - \\Gamma_C \\Delta(x) M(x,\\hat{\\theta}_C) \\gamma_s(1) \\\\\n & \\dot{\\hat{\\theta}}_B = \\Gamma_B \\Delta(x) \\left(\\frac{\\partial h_r}{\\partial x}(x,\\hat{\\theta}_B)\\right)^\\top\n\\end{aligned}\n\\end{equation*}\n\\vskip -.03in\n\\noindent \\textbf{4) Update parameter error 
bounds (LP)} \\label{item:lp}\n\\begin{align*}\n & \\Xi = \\left\\{ \\varrho : ~ |\\dot{x}_{1:k} - f_{1:k} + \\Delta_{1:k}^\\top \\varrho - B_{1:k}\\kappa_{1:k}| \\leq D \\right\\} \\\\\n &\\Theta^{j+1} = \\Theta^{j} \\cap \\Xi, ~~ \\tilde{\\vartheta} = \\underset{\\varrho_i,\\forall i}{\\text{sup}} ~\\Theta^{j+1} - \\underset{\\varrho_i,\\forall i}{\\vphantom{\\text{sup}}\\text{inf}}~ \\Theta^{j+1} \\nonumber\n\\end{align*}\n\\vskip -.02in\n\nThe NLP in Step~1) can be efficiently solved by parameterizing geodesics with a set of polynomial basis functions. \nWe adopt the same strategy as in \\cite{leung2017nonlinear} and utilize the Chebyshev pseudospectral method and Clenshaw-Curtis quadrature to compute a geodesic at each time step.\nUsing the geodesic computed in Step~1), the QP in Step~2) is solved to generate a safe and stabilizing controller $\\kappa$.\nThe QP is similar to that in \\cite{ames2016control} but the stability constraint is replaced with the first variation of the Riemannian energy \\cite{carmo1992riemannian}.\nUnder the premise that \\cref{eq:unc_dyn} is locally Lipschitz, one can show that $\\kappa$ is guaranteed to be locally Lipschitz from \\cite[Theorem 3]{ames2016control} as both the geodesic speed $\\gamma_s$ and metric $M(x)$ are also locally Lipschitz from their definitions.\nNote that $\\dot{x}_d$ is the desired dynamics and $\\dot{\\hat{x}}$ is \\cref{eq:unc_dyn} but with $\\hat{\\theta}_C$.\nStep~3) is simple quadrature and is not computationally expensive.\nNote that the parameter adaptation $\\dot{\\hat{\\theta}}_C$ for the controller should be temporarily stopped when the safety constraint is active to prevent undesirable transients.\nOtherwise, the parameter estimates will wind up as the tracking error may increase to ensure safety.\n\nThe LP in Step~4) has $2k$ constraints and is solved $2p$ times at every time step for the upper and lower bound of each parameter. 
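For intuition, when there is a single unknown parameter the LP in Step~4) collapses to interval arithmetic. The sketch below (NumPy; the function name, argument layout, and scalar-parameter simplification are illustrative assumptions, not the paper's implementation) performs one such bound update and intersection:

```python
import numpy as np

def smid_interval_update(theta_lo, theta_hi, x_dot, f, Delta, B, u, D):
    """One SMID update for x_dot = f(x) - Delta(x)^T theta + B(x) u with a
    single scalar parameter, so the LP reduces to interval arithmetic.
    Assumes Delta_i != 0 for every measurement; all names illustrative."""
    # Each measurement i gives |x_dot_i - f_i + Delta_i*theta - B_i*u_i| <= D,
    # i.e. an interval for theta whose orientation depends on sign(Delta_i).
    r = x_dot - f - B * u
    lo = (-D - r) / Delta
    hi = (D - r) / Delta
    neg = Delta < 0
    lo[neg], hi[neg] = hi[neg], lo[neg]      # flip bounds where Delta_i < 0
    # Theta^{j+1} = Theta^j \cap Xi via elementwise max/min of the bounds.
    return max(theta_lo, lo.max()), min(theta_hi, hi.min())
```

Each measurement contributes one interval for the parameter; intersecting all of them with the current bounds realizes $\Theta^{j+1} = \Theta^j \cap \Xi$ with only min and max operations.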
\nSet intersection is done by taking the appropriate minimum or maximum of the newest and current bounds.\nThe complexity of Step~4) can be bounded by either removing redundant constraints or terminating when $\\tilde{\\vartheta} \\leq \\varepsilon$ where $\\varepsilon$ is a predefined threshold.\nStep~4) can also be done outside the control loop since stability and safety do not rely on real-time updates of the parameter bounds; although real-time bounds are desirable to quickly eliminate conservatism.\nConsequently, non-causal filtering can be used to accurately estimate $\\dot{x}$ if necessary.\nMoreover, the right-hand side of the inequality can be replaced by $D + \\mathcal{E}$ where $\\mathcal{E}$ is the maximum estimation error of the rate vector, i.e., $|\\dot{\\hat{x}}| \\leq \\mathcal{E}$.\nThe proposed method was tested in MATLAB R2018B with the built-in solvers without any code optimization on a 1.6GHz Intel i5 processor.\nThe NLP was initialized with a linear curve $c(s) = (x-x_d)s+x_d$ at each time step.\n\n\n\n\\section{Illustrative Example}\n\\label{sec:example}\nConsider the simplified pitch dynamics of an aircraft \\cite{mccormick1995aerodynamics}\n\\begin{equation*}\n \\left[ \\begin{array}{c} \\dot{\\theta} \\\\ \\dot{\\alpha} \\\\ \\dot{q} \\end{array} \\right] = \\left[ \\begin{array}{c} q \\\\ q - \\bar{L}(\\alpha) \\\\ -k_q q + \\bar{M}(\\alpha) \\end{array} \\right] + \\left[ \\begin{array}{c} 0 \\\\ 0 \\\\ 1 \\end{array} \\right] u,\n\\end{equation*}\nwhere $\\theta$, $\\alpha$, and $q$ are the pitch angle, angle of attack, and pitch rate.\n$\\bar{L}(\\alpha)$ and $\\bar{M}(\\alpha)$ are the aerodynamic lift and moment.\nThe system is not feedback linearizable as the controllability matrix drops rank at $\\bar{L}'(\\alpha)=0$ and is not in strict-feedback form.\nUtilizing flat plate theory \\cite{mccormick1995aerodynamics}, the aerodynamics of a high-performance aircraft are approximately $\\bar{L}(\\alpha) = 0.8\\mathrm{sin}(2\\alpha)$ and 
$\\bar{M}(\\alpha) = -\\ell_{\\alpha} \\bar{L}(\\alpha)$.\nThe parameters $k_q$ and $\\ell_\\alpha$ are unknown but $k_q \\in [0.1~0.8]$ and $\\ell_\\alpha \\in [-3~1]$.\nA metric quadratic in $\\alpha$ was synthesized via gridding for $\\alpha \\in [-5^{\\circ}~50{^\\circ}]$ and $q \\in [-10^{\\circ\/s}~50^{\\circ\/s}]$.\nNote that $\\bar{L}'(\\alpha)=0$ is in the chosen grid range.\nThe function $h_r(q) = q_m - q$ where $q_m = 50^{\\circ\/s}$ can be easily shown to be a valid RaCBF that enforces $q \\leq q_m$.\n\n\nThe desired terminal state $x_d = [180^\\circ~0~0]^\\top$ corresponds to the first portion of the aerobatic maneuver known as the Immelmann turn.\nThe vehicle executes a half loop before executing a half roll (not considered here) resulting in level flight in the opposite direction.\n\\cref{fig:q_i} shows modified aCBFs and RaCBFs exhibit similar behavior in terms of conservatism as they do not utilize the maximum allowable pitch rate.\nHowever, modified aCBFs exhibit high-frequency oscillations due to the chatter in the control input, seen in \\cref{fig:u_i}; a result of the formulation as the safety constraint continuously switches between active and inactive. 
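The chatter mechanism is the same one exhibited by the scalar example earlier: the closed loop alternates between the two solutions of \cref{eq:cl_ex}. A forward-Euler sketch (all constants are illustrative choices, not the aircraft model) reproduces this rapid mode switching once the state reaches the constraint boundary:

```python
# Switching closed loop from the scalar example: xdot = -theta while the
# barrier constraint is inactive, xdot = theta_hat - theta > 0 otherwise.
# Constants below are illustrative and not taken from the paper.
theta, theta_hat = 1.0, 1.5      # true and (over-)estimated parameter
boundary = 0.0                   # state value where the constraint activates
x, dt = 1.0, 1e-3
switches, prev_inactive = 0, None
for _ in range(5000):
    inactive = x > boundary      # constraint inactive: kappa = 0
    x += (-theta if inactive else (theta_hat - theta)) * dt
    if prev_inactive is not None and inactive != prev_inactive:
        switches += 1            # count constraint activation flips
    prev_inactive = inactive
# After reaching the boundary, the mode flips nearly every step: chatter.
```

The state slides along the boundary while the discontinuous policy toggles at the integration rate, mirroring the high-frequency oscillations observed with the modified aCBFs.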
\nHigh-frequency oscillations are also seen in the barrier function in \\cref{fig:h_i} but are absent with RaCBFs.\n\\cref{fig:q_i} shows RaCBFs with SMID results in less conservatism as 97.9\\% of the maximum allowable pitch rate is utilized.\nMoreover, the set of allowable states is considerably larger as is evident by the small value of the barrier function in \\cref{fig:h_i}.\nThe parameter error bounds, shown in \\cref{fig:smid_i}, were reduced by 63.0\\% for $k_q$ and 90.5\\% for ${\\ell}_\\alpha$.\n\\cref{fig:comp_time_i} shows the computation time is within real-time constraints and can be easily reduced by utilizing faster solvers or a lighter programming language.\n\n\\begin{figure}[t!]\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/smid_i.png}\n \\caption{{Parameter bounds.}}\n \\label{fig:smid_i}\n \\end{subfigure}\n \\hspace{0.6em}\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/comp_time_i.png}\n \\caption{{Computation time.}}\n \\label{fig:comp_time_i}\n \\end{subfigure}\n \\caption{Parameter bounds and computation time. (a): Parameter bounds monotonically approach the true parameter values $\\ell_\\alpha^*$ and $k_q^*$. 
(b): Computation time for the proposed controller is well within real-time constraints.}\n \\label{fig:results_comp_i}\n \\vskip -0.2in\n\\end{figure}\n\n\\begin{figure*}[t]\n\\vskip 0.1in\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30, clip,width=1\\textwidth]{figures\/q_c.pdf}\n \\caption{Pitch rate $q$.}\n \\label{fig:q}\n \\end{subfigure}\n \\hfill{}\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30,clip, width=1\\textwidth]{figures\/u_c.pdf}\n \\caption{Control input $u$.}\n \\label{fig:u}\n \\end{subfigure}\n \\hfill{}\n \\begin{subfigure}{.66\\columnwidth}\n \\centering\n \\includegraphics[trim=150 30 150 30,clip, width=1\\textwidth]{figures\/h_c.pdf}\n \\caption{Barrier function $h$.}\n \\label{fig:h}\n \\end{subfigure}\n \\caption{Comparison of modified aCBFs \\cref{eq:acbf_m}, RaCBFs, and RaCBFs \\& SMID with a full desired trajectory corresponding to $\\theta_d = -20^\\circ \\mathrm{cos}(t)$. (a): Pitch rate tracking where RaCBFs and aCBFs exhibit similar conservativeness due to model error.\n RaCBF \\& SMID achieves the best performance because model uncertainty is reduced via online estimation. \n (b): Control chatter is observed with aCBFs while RaCBFs generate continuous control inputs. (c): Safety maintained but RaCBFs and aCBFs are conservative due to potential model error. 
RaCBFs \\& SMID is the least conservative since model is estimated online.\n For tests $k^* = 0.2$, $\\ell^*_\\alpha = -1$, $\\Gamma_B = 20$, $\\Gamma_C = 50$, $\\alpha(r) = 10r$, and $D = 0.1$.}\n \\label{fig:results}\n\\vskip -0.2in\n\\end{figure*}\n\nNow consider the scenario where a full desired trajectory described by $\\theta_d = - 20^\\circ \\mathrm{cos}(t)$ is available.\nA metric was synthesized for a new grid range $\\alpha \\in [-60^{\\circ}~60{^\\circ}]$ and $q \\in [-20^{\\circ\/s}~20^{\\circ\/s}]$; a metric quadratic in $\\alpha$ was again found to be valid over the grid range.\nThe function $h_r(q) = 1 - (\\nicefrac{q}{q_m})^2$ where $q_m = 20^{\\circ\/s}$ is a valid RaCBF that enforces $|q| \\leq q_m$.\nThe results in \\cref{fig:results} show the same exact behavior as in \\cref{fig:results_immelmann}: control input chattering occurs with the modified aCBFs \\cref{fig:u} resulting in high-frequency oscillations in both the pitch rate \\cref{fig:q} and barrier function \\cref{fig:h}. \nChattering does not occur with RaCBFs.\nAdditionally, RaCBFs with SMID again has the best tracking performance in \\cref{fig:q} and is the least conservative in \\cref{fig:h}.\nThe parameter bounds at different time instances are shown in \\cref{fig:smid_sine}.\nThe bounds again monotonically decrease resulting in a reduction of 16.5\\% and 77.3\\% for ${k}_q$ and ${\\ell}_\\alpha$, respectively.\nThe reduction is less than that in the Immelmann turn due to the trajectory not sufficiently exciting $q$ relative to $\\bar{L}(\\alpha)$.\nThe largest reduction occurred at $t=3s$ which is when the barrier function in \\cref{fig:h} becomes less conservative.\nThe computation time shown in \\cref{fig:comp_time_sine} is comparable to that in \\cref{fig:comp_time_i} with the NLP solve time being slightly less as the linear geodesic initialization is a better initial guess for small tracking error. 
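As a sanity check, the energy of the linear geodesic initialization can be evaluated directly. The sketch below uses simple midpoint quadrature rather than the Clenshaw-Curtis rule used in the paper, and the metric callable `M` and function name are illustrative assumptions:

```python
import numpy as np

def energy_linear_init(x, x_d, M, n=50):
    """Riemannian energy of the linear initial curve c(s) = (x - x_d)s + x_d
    via midpoint quadrature on [0, 1].  M is a callable returning the metric
    at a state (illustrative API, not the paper's implementation)."""
    s = (np.arange(n) + 0.5) / n          # midpoint quadrature nodes
    c_s = x - x_d                         # curve speed is constant in s
    vals = [c_s @ M(c_s * si + x_d) @ c_s for si in s]
    return float(np.mean(vals))
```

With the identity metric this returns $\|x - x_d\|^2$, the squared Euclidean distance, which is why the linear curve is a good warm start when the tracking error is small.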
\nThis further confirms that the proposed approach can be run in real-time.\n\n\\begin{figure}[t!]\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/smid_sine.png}\n \\caption{{Parameter bounds.}}\n \\label{fig:smid_sine}\n \\end{subfigure}\n \\hspace{0.6em}\n \\begin{subfigure}{.47\\columnwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/comp_time_sine.png}\n \\caption{{Computation time.}}\n \\label{fig:comp_time_sine}\n \\end{subfigure}\n \\caption{Parameter bounds and computation time for desired trajectory. (a): Parameter bounds monotonically approach the true parameter values $\\ell_\\alpha^*$ and $k_q^*$. (b): Computation time for the proposed controller is well within real-time constraints.}\n \\label{fig:results_comp_sine}\n \\vskip -0.2in\n\\end{figure}\n\n\\section{Conclusion}\nThis work presented a framework that guarantees safety for uncertain nonlinear systems through parameter adaptation and data-driven model estimation.\nThe unification with a contraction-based adaptive controller allows the approach to be applied to a broad class of systems.\nExtending to systems with probabilistic model bounds, non-parametric uncertainties, and external disturbances is future work.\n\\section*{Appendix}\n\\label{sec:appendix}\n\n\\begin{proof}[Proof of \\cref{thm:racbf}]\nConsider the composite candidate CBF $h = h_r(x,\\hat{\\theta}) - \\frac{1}{2} \\tilde{\\theta}^\\top\\Gamma^{-1}\\tilde{\\theta}$,\nwhere the minimum eigenvalue of $\\Gamma$ must satisfy $\\lambda_{\\min}(\\Gamma) \\geq \\frac{\\|\\tilde{\\vartheta}\\|^2}{2h_r(x_r,{\\theta}_r)}$ for any $h_r(x_r,\\theta_r)>0$.\nDifferentiating $h$ along \\cref{eq:unc_dyn},\n\\begin{equation*}\n\\begin{aligned}\n \\dot{h} & = \\dot{h}_r(x,\\hat{\\theta}) - \\tilde{\\theta}^\\top\\Gamma^{-1}\\dot{\\hat{\\theta}} \\\\\n &= \\frac{\\partial h_r}{\\partial x}\\left[ f(x) - \\Delta(x)^\\top \\theta + B(x) u\\right] + \\frac{\\partial 
h_r}{\\partial \\hat{\\theta}} \\dot{\\hat{\\theta}} - \\tilde{\\theta}^\\top\\Gamma^{-1}\\dot{\\hat{\\theta}} \\\\\n \n \n\\end{aligned}\n\\end{equation*}\nAdding and subtracting $\\frac{\\partial h_r}{\\partial x}\\Delta(x)^\\top\\left[\\hat{\\theta} - \\Gamma \\left(\\frac{\\partial h_r}{\\partial \\hat{\\theta}}\\right)^\\top \\right]$ and using the definition of $\\Lambda(x,\\hat{\\theta})$,\n\n\\begin{equation*}\n\\begin{aligned}\n \\dot{h} & = \\frac{\\partial h_r}{\\partial x}\\left[ f(x) - \\Delta(x)^\\top \\Lambda(x,\\hat{\\theta}) + B(x) u\\right] + \\frac{\\partial h_r}{\\partial \\hat{\\theta}}\\dot{\\hat{\\theta}} \\\\\n & \\hspace{1.5cm} - \\tilde{\\theta}^\\top\\Gamma^{-1}\\dot{\\hat{\\theta}} + \\frac{\\partial h_r}{\\partial x}\\Delta(x)^\\top\\left[\\tilde{\\theta} - \\Gamma \\left(\\frac{\\partial h_r}{\\partial \\hat{\\theta}}\\right)^\\top \\right].\n\\end{aligned}\n\\end{equation*}\nChoosing $\\dot{\\hat{\\theta}} = \\Gamma \\Delta(x) \\left(\\frac{\\partial h_r}{\\partial x}\\right)^\\top$, then\n\\begin{equation*}\n\\begin{aligned}\n \\dot{h} &= \\frac{\\partial h_r}{\\partial x}\\left[ f(x) - \\Delta(x)^\\top \\Lambda(x,\\hat{\\theta}) + B(x) u\\right] \\\\\n & \\geq - \\alpha\\left( h_r - \\frac{1}{2}\\tilde{\\vartheta}^\\top \\Gamma^{-1} \\tilde{\\vartheta} \\right) \\geq -\\alpha(h),\n\\end{aligned}\n\\end{equation*}\nwhere the first inequality is obtained via the definition of a RaCBF and the second by noting $|\\tilde{\\theta}| \\leq \\tilde{\\vartheta}$ so $h = h_r - \\frac{1}{2}\\tilde{\\theta^\\top}\\Gamma^{-1}\\tilde{\\theta} \\geq h_r - \\frac{1}{2}\\tilde{\\vartheta^\\top}\\Gamma^{-1}\\tilde{\\vartheta}$. 
\nSince $h\geq0$ and $h_r \geq h$ $\forall t$, then $h_r \geq \frac{1}{2}\tilde{\vartheta}^\top\Gamma^{-1}\tilde{\vartheta} \geq 0$ and $\mathcal{C}^r_\theta$ is forward invariant.\n\end{proof}\n\n\begin{proof}[Proof of \cref{thm:racbf_smid}]\nSince the model uncertainty is changing via estimation, the maximum allowable parameter error is time varying, i.e., $\tilde{\vartheta}(t)$.\nFrom \cref{lemma:smid}, $\tilde{\Theta}$ monotonically decreases so $\dot{\tilde{\vartheta}} \leq 0$. \nLet $h_r$ be a candidate RaCBF, then $\dot{{h}} = \dot{h}_{r} - \tilde{\vartheta}^\top\Gamma^{-1}\dot{\tilde{\vartheta}} \geq \dot{h}_{r}$ since $\dot{\tilde{\vartheta}} \leq 0$ for all $t$.\nInequality \cref{eq:racbf} in Definition~\ref{definition:racbf} is then still satisfied for $\dot{\tilde{\vartheta}} \leq 0$.\nUsing the steps in \cref{thm:racbf}, the system is safe with respect to $\mathcal{C}^r_{\hat{\theta}}$.\nMoreover, since $\dot{\tilde{\vartheta}} \leq 0$ then $\tilde{\vartheta} \rightarrow 0 $ so $\mathcal{C}^r_{\hat{\theta}} \rightarrow \mathcal{C}_{\hat{\theta}}$.\n\end{proof}\n\n\n\n\n\noindent \textbf{Acknowledgements} \nWe thank David Fan for stimulating discussions.\nThis work was supported by the NSF Graduate Research Fellowship Grant No. 1122374.\n\n\n\balance\n\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nTwo-dimensional (2D) superconductivity in thin films and Josephson junction arrays (JJAs) reveals complex classical and quantum phase transitions and rich dynamics that depend on competing energy scales and coherence, including the extensively studied superconductor-insulator transition (SIT) \cite{Jaeger1989, Fisher1991, Lee1990, Goldman2010, Gantmakher2010, Vladimir2012}, which fits into a framework of quantum critical phenomena exhibiting universal scaling \cite{Mason1999, Yazdani1995, Steiner2005, Bollinger2011, Schneider2012}. 
New classes of materials have extended SIT studies to include strictly 2D materials \cite{Allain2012,Tamir2019,Fatemi2018}, hybrid super-semi heterostructures \cite{Boettcher2018,Vaitieknas2020,Tosato.2022,Boettcher.2022}, and high-temperature superconductors \cite{Yang2019}, including a transition to a topological insulating phase \cite{Wu2018}. An apparent metallic phase, with saturating low-temperature finite resistance, is observed in many of these systems \cite{Kapitulnik2019}, which is incompatible with simple universal scaling.\n\nJJAs enrich the landscape by allowing controlled Coulomb interaction and frustration due to magnetic flux commensuration \cite{Newrock1999, Fazio2001}. Coulomb charging of Josephson-coupled islands makes the classical XY spin system into a quantum problem, with phase and charge on the islands acting as conjugate variables with an uncertainty relation. Charging also introduces a new type of disorder in the form of a random charge offset on each island. The periodicity of the array adds complexity in the form of a Hofstadter-like spectrum \cite{Teitel.1983, Lankhorst.2018b}, and possible spin glass phases \cite{Vinokur.1987,Spivak.1991,Phillips.2003}. The discrete structure of a JJA also results in periodic pinning of vortices and antivortices, whose unbinding and free motion at finite temperature is described at zero magnetic field by a Berezinskii-Kosterlitz-Thouless (BKT) phase transition \cite{Halperin1979, Lobb.1983, Newrock1999, Fazio2001, Gantmakher2010}, as investigated recently in this system \cite{Boettcher.2022}. \n\n Collective pinning of vortices resulting in a zero resistance state, which has been mapped onto a Mott insulator of frozen vortices \cite{Nelson1993}, was investigated experimentally \cite{Poccia2015, Lankhorst2018, Mironov.2020, Rezvani.2020, Rezvani.2020b, Pei.2022}, showing good agreement with theory, including scaling \cite{Granato.2018,Granato.2019}. 
In this picture, the zero-resistance state of the JJAs near nonzero integer and half integer flux quanta per plaquette, denoted frustration, $f$, is described as vortices pinned by a combination of the periodic array potential and, importantly, {\it collective} pinning due to other vortices. Individual (noncollective) vortex pinning in the binding potential of the array has been thoroughly modelled in metallic JJAs \cite{Lobb.1983, Rzchowski1990} and for intentional pinning \cite{Berdiyorov.2005} and antipinning sites \cite{Berdiyorov.2008}, patterned to prevent dissipation in superconducting films \cite{Eley.2021}. We note that the Mott insulator in JJAs is closely related to the theoretical Mott insulator state at weak disorder in the 2D Bose Hubbard model. For stronger disorder the Bose Hubbard model shows a glass phase associated with individual vortex pinning \cite{POLLET2013,Yao2014}.\n\nHere, we investigate a dynamic transition from the superconducting to the resistive state driven by an applied dc current, including scaling analysis of differential resistance at the transition. Previous experiments~\cite{Poccia2015, Lankhorst2018, Mironov.2020, Rezvani.2020, Rezvani.2020b, Pei.2022} and theory \cite{Granato.2018,Granato.2019} interpreted the flux-dependent dynamic transition in terms of a dynamic Mott transition based on a scaling analysis that yielded exponents consistent with a Mott transition. For the gate-tuned semiconductor-based JJA investigated here, both the Josephson coupling, $E_J$, and the vortex pinning potential, $E_B$, are tuned by a gate voltage \cite{Boettcher2018, Boettcher.2022}. \n\nExamining these transitions as a dynamical phase transition, we find reasonable scaling, though with nonuniversal scaling exponents that differ from previous experiments \cite{Poccia2015, Lankhorst2018, Pei.2022}. We also find different exponents on the low-field and high-field sides of the transitions. This is discussed in Sec.~\ref{scaling}. 
We conclude that our transition is not a simple Mott transition in the same universality class as previously reported. This may be due to the geometry of our structure or weaker vortex pinning in the InAs\/Al system. We note that earlier studies of similar dynamic transitions were interpreted in terms of depinning of vortices from the array potential rather than as a Mott transition \\cite{Benz.1990, Jiang.2004}. \n\nBeyond scaling, several new features of dynamical vortex transitions are presented, elucidating the underlying vortex structure. First, the dip-to-peak transition from the fully superconducting state is {\\it split} at zero magnetic field, $f=0$, unlike the cases of $f=$~1\/2, 1, and other integer $f$. We interpret the split peak as reflecting absent or sparse vortices at $f=0$, unlike at nonzero $f$, consistent with a BKT picture at $f=0$ where vortex-antivortex pairs annihilate at low temperature. \n\nImportantly, we find that when the system is tuned (by gate voltage) into the anomalous metal phase \\cite{Boettcher2018}, an {\\it unsplit} peak at $f=0$ is instead observed. This suggests that remnant unpaired vortices and antivortices, absent in the low-temperature low-current superconducting state, are abundant at $f=0$ in the anomalous metal under otherwise similar conditions. The size of the splitting at $f=0$ in the superconducting state is found to depend on $I_{\\rm dc}$, reflecting the vortex density needed to support a dynamical transition. This is discussed in Sec.~\\ref{split}. \n\nSecond, the evolution of the dip-to-peak transition with increasing $I_{\\rm dc}$ is found to be symmetric about $f=1\/2$ but highly asymmetric about $f=1$ and larger integers. The right-left symmetry around $f=1\/2$ follows from the underlying checkerboard vortex configuration \\cite{Franz.1995, Lankhorst.2018b}: excess vortices are attracted to unfilled sites while deficit vortices, equivalent to antivortices, are attracted to filled sites. 
The situation at $f=1$ is quite different and naturally asymmetric: the array is full, containing one vortex per site. Any excess vortex is repelled at each site while any deficit, or antivortex, is attracted to each site, where it can annihilate. Higher integer $f$ values mirror the asymmetry at $f=1$ about the intervening half-integer symmetry point. This is discussed in Sec.~\ref{evenodd}. \n\n\section{Hybrid Array and Dynamical Vortex Transition}\n\label{device}\n\n\begin{figure*}[t]\n \centering\n\t\includegraphics[width= 6.4 in]{Figure1.pdf}\n\t\caption{\textbf{Hybrid array and frustration-dependent dynamical vortex transitions}. \textbf{a,b)} Schematic of device with $W=100$ by $L=400$ epitaxial $a = 1\, \mu$m Al squares (gray), separated by $b=350$~nm strips (blue) of exposed InAs heterostructure, with ac + dc current bias, $I$, and voltage $V$ measured using side probes. False-color micrograph taken before top gate was deposited. Gate voltage, $V_g$, controls carrier density in InAs strips between Al squares \cite{Boettcher2018}. \textbf{c)} Sheet resistance, $R_s \equiv (W\/L) V\/I$, as a function of dc current, $I_{\rm dc}$, and perpendicular magnetic field, $B_\perp$, shows $R_{s}=0$ for small $I_{\rm dc}$ with enhanced critical current, $I_{0}$, at $f=$~0, 1\/4, 1\/3, 1\/2, 2\/3, and 1. \textbf{d)} Line cuts of \textbf{c} show dips in $R_s$ at $f=$~0, 1\/2, and 1, going to $R_{s} \sim 0$ for low $I_{\rm dc}$. \textbf{e-f)} Evolution of dips in differential resistance, $dV\/dI_{s} \equiv (W\/L)\ dV_{ac}\/dI_{\rm ac}$, as a function of $B_{\perp}$ into a split peak at $f=0$, a symmetric peak at $f=1\/2$, and an asymmetric peak at $f=1$. Each case is discussed in the text. 
}\n\t\\label{fig1}\n\\end{figure*}\n\nThe device we investigated, shown in Fig.~\\ref{fig1}, is based on an epitaxial Al\/InAs heterostructure patterned by wet etching to form a square array of $1 \\times 1\\: \\mu {\\rm m}^{2}$ islands separated by 350~nm strips of exposed semiconductor \\cite{Boettcher2018}. A Ti\/Au top gate, separated from the array by 40 nm of atomic-layer deposited HfO$_2$ insulator, was used to control the carrier density in the strips between islands. The gated Hall bar has four side-probes for voltage measurements and two wide contacts at the ends for applying current, and is $W = 100$ islands wide with $L = 400$ between voltage probes. The sample is measured in a dilution refrigerator with a base temperature of 30 mK. All lines are filtered using QDevil rf and low-pass filters. \n\n\n\nThe top gate voltage, $V_g$, controls the Josephson coupling, $E_J$, between islands, driving the system from a superconducting state at $V_g \\sim -3$~V to an insulating state at $V_g \\sim -4$~V \\cite{Boettcher2018}. The top gate also controls the barrier height, $E_B$, (energy saddle) between vortex pinning sites (energy minima) at the corners of the square islands. Numerical studies for metallic islands suggest $E_B \\propto E_J$ \\cite{Rzchowski1990}. Similar numerical studies have not been done for the semiconducting junctions. At intermediate gate voltages, an anomalous metallic phase was previously investigated, showing saturating gate-dependent sheet resistance at low temperature \\cite{Boettcher2018, Boettcher.2022}. 
In this work, $V_g$ is set to yield a superconducting state at base temperature in the absence of applied dc current and was only modified slightly toward the anomalous metal phase where indicated.\n\n We probe the device in the nonlinear regime by measuring both the ac and dc parts of the total voltage, $V$, in a pair of side-probe contacts, when a total current $I$, consisting of a varied dc part, $I_{\rm dc}$, and an ac part, $I_{\rm ac}=5$~nA, was applied to the end contacts. A magnetic field, $B_\perp$, was applied perpendicular to the plane of the array using an external solenoid controlled by a Keithley 2400 source-measurement unit. The area, $A=(a+b)^2$, of one plaquette of the array, with $a=1\,\mu$m and $b=350$~nm, gives a characteristic magnetic field $\Phi_0\/A = 1.4$~mT, where $\Phi_0 = h\/2e$, corresponding to frustration $f=1$. Features associated with integer $f$ values are seen in Figs.~\ref{fig1}(c,d).\n\nFor small dc currents ($I_{\rm dc}\lesssim 2\, \mu$A), large dips in both sheet resistance, $R_s \equiv (W\/L)\,V\/I$, and differential sheet resistance, $dV\/dI_s \equiv (W\/L)\,dV\/dI$, reach zero at $f=$~0, 1\/2, and 1, with moderate dips at $f=$~1\/4, 1\/3 and 2\/3, as seen in Fig.~\ref{fig1}(e). Increasing $I_{\rm dc}$ beyond an $f$-dependent critical value, $I_0(f)$, results in dip-to-peak transitions in $dV\/dI_s$ while minima in $R_s$ remain minima, consistent with previous experiments~\cite{Poccia2015, Lankhorst2018, Mironov.2020, Rezvani.2020, Rezvani.2020b, Pei.2022}. The dip-to-peak transition in $dV\/dI_s$, visible at $f=1\/2$ and $f=1$, marks the onset of differential vortex motion without complete dissipative vortex flow. \n\n\section{Zero-field transitions from \\\ superconductor and anomalous metal}\n\label{split}\n\nAt zero magnetic field, $f=0$, the dynamical transition appears split, so that $dV\/dI_s$ remains a minimum as a function of $B_\perp$, unlike $f=1\/2$ and $f=1$, as seen in Figs.~\ref{fig1}(e,f). 
The absence of a dip-to-peak transition at $f=0$, unlike for $f=$~1\/2 and 1, is consistent with some previous results \\cite{Poccia2015, Pei.2022} but not others \\cite{Jiang.2004, Mironov.2020, Rezvani.2020b}, as discussed below.\n\n\nWe interpret the split peak as indicating that vortices are absent or sparse at $f=0$, consistent with a BKT picture in which unbound vortices pair and annihilate below a critical temperature, $T_{\\rm BKT}$. \nThis suggests that the zero-resistance state at $f=0$, where vortices are absent or sparse, is qualitatively different from the zero-resistance states at $f=1\/2$ and $f=1$, where vortices of a single sign are abundant but frozen. \n\nThe splitting of the transition around $f=0$ is found to increase at lower $I_{\\rm dc}$, as seen in Figs.~\\ref{fig1}(e,f). This is seen most clearly as the downward orientation of the bright features on either side of zero field in Fig.~\\ref{fig1}(e), indicating that $I_0$ decreases with increasing vortex density, a signature of the role of vortex interaction in the dynamical transition.\nThis dependence is similar the vortex-density ($\\propto f$) dependence of the vortex melting temperature \\cite{Obaidat.2008}.\n\nTuning the gate voltage from $V_g=-2.00$~V to $V_g=-3.23$~V drives the system from the superconducting state, where $R_{s}$ falls below measurement resolution, $\\lesssim 0.1 \\Omega$, at low temperature, to the anomalous metal phase, where $R_s$ saturates at a gate-voltage dependent value, up to $\\sim h\/4e^{2}$, at low temperature \\cite{Boettcher2018, Boettcher.2022}. 
\n\nAs shown in Fig.~\ref{fig2}, in the anomalous metal phase a dip-to-peak transition of $dV\/dI_s$ as a function of $I_{\rm dc}$ {\it is} observed at $f=0$, while $R_s$ remains a minimum, similar to the dynamical transition observed at $f=1\/2$ and other integer $f$ values.\n\nThe {\it unsplit} transition in the anomalous metal phase suggests that vortices and antivortices (in equal number) are present at $f=0$, unlike in the superconducting phase. The nonvanishing $R_s$, which characterizes the anomalous metal phase, further suggests that at least some of the remnant vortices and antivortices are mobile even at $I_{\rm dc} < I_0$ and the lowest temperatures. \n\nA picture of residual unpaired vortices at $f=0$ in the anomalous metal but not the superconductor is consistent with the observed vanishing of $T_{\rm BKT}$ in the anomalous metal \cite{Boettcher.2022}. One would anticipate that in the superconducting phase at higher temperatures, $T>T_{\rm BKT}$, an unsplit transition would occur at $f=0$. Reported dynamical transitions at $f=0$ are either at elevated temperatures \cite{Jiang.2004, Rezvani.2020b} or in the anomalous metal phase \cite{Mironov.2020}, consistent with our observations.\n\n\n\begin{figure}[t]\n\t\includegraphics[width= 2.6 in]{Figure2.pdf}\n\t\t\caption{\textbf{Zero-field dynamical transition from the anomalous metal phase}. \textbf{a)} Sheet resistance, $R_{s}$, as a function of perpendicular magnetic field, $B_\perp$, over a range of dc currents, $I_{\rm dc}$. For all dc currents, zero field remains a resistance minimum, similar to the superconducting regime, Fig.~\ref{fig1}(d). \textbf{b)} Dip-to-peak transition at $f=0$ over the same range of dc currents. This behavior contrasts with the superconducting regime in Fig.~\ref{fig1}(f), which shows a split peak at $B_\perp = 0$. 
The dip-to-peak transition suggests that vortices and antivortices are absent at $f=0$ in the superconducting phase but present in the anomalous metal phase.\n\t\t\t}\n\t\\label{fig2}\n\\end{figure}\n\n\\section{Scaling and critical exponents}\n\\label{scaling}\n\nThe dip-to-peak transition at commensurate frustration can be described as melting of a frozen vortex lattice and has been analyzed as a dynamical Mott transition~\\cite{Nelson1993, Poccia2015, Lankhorst2018, Granato.2018, Granato.2019, Mironov.2020, Rezvani.2020, Rezvani.2020b, Pei.2022}. Within this interpretation, one expects scaling at the transition among relevant parameters controlling the transition, namely the dc current bias, $I_{\\rm dc}$, and the distance, $b \\equiv f-f_{c}$, from the commensurate frustration values, in this case $f_{c}=$~1\/2 or 1. \n\n\n\nFollowing Refs.~\\cite{Poccia2015, Lankhorst2018}, we introduce a scaling form for differential sheet resistance across the dip-to-peak transition, \n\\begin{align}\ndV(f, I)\/dI_s - dV(f, I)\/dI_s|_{I=I_{0}} = \\mathcal{F}(|I - I_0|\/|b|^{\\varepsilon}),\n\\end{align}\n\\noindent where $\\mathcal{F}(x)$ is a function of the scaled variable $ x \\equiv |I- I_0|\/|b|^{\\varepsilon}$ and $\\varepsilon$ is the scaling exponent. \n\nA scaling exponent $\\varepsilon \\sim 2\/3$ was measured at $f=1$ \\cite{Poccia2015, Lankhorst2018} and $f=2$ \\cite{Poccia2015}, in a square array of Nb islands on Au, consistent with theory for a dynamical Mott transition. At $f=1\/2$ in the same system, the value $\\varepsilon \\sim 1\/2$ was found experimentally \\cite{Poccia2015, Lankhorst2018}, and later confirmed numerically ~\\cite{Granato.2018, Granato.2019}. Other experiments investigating the dynamic transition at commensurate $f$ found different exponents~\\cite{Pei.2022} in the triangular lattice, or did not pursue scaling analysis~\\cite{Mironov.2020, Rezvani.2020, Rezvani.2020b}. 
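The collapse implied by this scaling form can be illustrated numerically. The sketch below is a hedged toy check, not the measured data: the shape $\mathcal{F}=\tanh$, the exponent value, the separatrix level, and the grids are all assumptions, chosen only to show how curves at different $b$ collapse once the separatrix is subtracted and the current axis is rescaled by $|b|^{\varepsilon}$.

```python
import numpy as np

# Illustrative sketch (not the measured data): synthetic differential-resistance
# curves built to obey the scaling ansatz quoted in the text,
#   dV/dI_s(f, I) - dV/dI_s(f, I)|_{I=I_0} = F(|I - I_0| / |b|^eps),
# with b = f - f_c. The shape F = tanh, eps = 1.5, the separatrix value,
# and all grids are assumptions.

def dVdI(I, b, I0=1.0, eps=1.5, sep=10.0):
    x = np.abs(I - I0) / np.abs(b) ** eps
    return sep + np.tanh(x)  # separatrix value plus scaling function F(x)

I = np.linspace(1.0, 2.0, 401)  # right branch, I >= I_0 (arbitrary units)
I0, eps, sep = 1.0, 1.5, 10.0

# Subtract the separatrix and rescale the current axis by |b|^eps:
# curves for different b then collapse onto the single function F(x).
x_common = np.linspace(0.0, 10.0, 200)
collapsed = [np.interp(x_common, (I - I0) / b ** eps, dVdI(I, b) - sep)
             for b in (0.1, 0.15, 0.2)]

spread = max(np.max(np.abs(c - collapsed[0])) for c in collapsed[1:])
assert spread < 1e-3  # collapse is essentially perfect by construction
```

In an analysis of real data, the separatrix value and $\varepsilon$ are not known in advance; $\varepsilon$ is adjusted until curves at different $b$ collapse, which is the procedure quantified in the Methods.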
\n\nThe dip-to-peak transition at $f=1\/2$ was found to be right-left symmetric, while the transition at $f=1$ was asymmetric, as discussed in Sec.~\ref{evenodd}. A consequence of the asymmetry is that the value of the commensurate frustration, $f_{c}$, depended on $I_{\rm dc}$ [dots in Figs.~\ref{fig3}(a,b)]. \n The asymmetry at $f=1$ also yielded different separatrices on the left and right, $I_0^L = 2.5\: \mu\text{A}$, $I_0^R=2.35 \: \mu\text{A}$, while a single separatrix value was found for $f=1\/2$ (solid and dashed curves in Figs.~\ref{fig3}(a,b)). Similar asymmetries are also found at $f=$~2, 3 and 4, as seen in Fig.~\ref{fig4}. \n \n \begin{figure*}\n\t\includegraphics[width= 6 in]{Figure3.pdf}\n\t\t\caption{\textbf{Dynamical transitions, scaling, and critical exponents}. \textbf{a-b)} Differential sheet resistance, $dV\/dI_s$, shows a dip-to-peak transition as a function of dc current, $I_{\rm dc}$, at half ($f=1\/2$) and full ($f=1$) commensurate frustration. Insets: the transitions on each side, denoted as the left and right branches of $f=1\/2$ and $f=1$, are represented as superconductor-insulator transitions: down-bending curves are transitions to the pinned vortex state, while upward-bending curves are transitions to a state of vortex flow. The horizontal field-independent curves separating each state are marked as separatrices used in the scaling analysis (Left: black solid line. Right: black dotted line). Scaling plots of $f=1\/2$ (\textbf{c-d}) and $f=1$ (\textbf{e-f}), showing that left and right branches yield different scaling exponents, i.e., there is an asymmetry around each critical frustration field. 
Exponents extracted for $f=1\/2$ are $\varepsilon =2.1$ (left) and 1.2 (right), while at $f=1$ we extract $\varepsilon = 1.5$ (left) and 2.5 (right).\n\t\t\t}\n\t\label{fig3}\n\end{figure*}\n\nScaling exponents $\varepsilon$ were obtained by fitting the slope on a log-log plot of differential resistance after subtracting the separatrix curve, $\frac{d}{dI}[dV\/dI_{s}-dV\/dI_{s}|_{I_0^{L\/R}}]$ versus $1\/b$ (see Methods). Scaled data collapse reasonably well, as seen in Figs.~\ref{fig3}(c-f), but yield different exponents on the left and right sides of the peaks, $\varepsilon = 2.1$ (left) and $\varepsilon =1.2$ (right) for $f=1\/2$, and $\varepsilon = 1.5$ (left) and $\varepsilon =2.5$ (right) for $f=1$, inconsistent with \cite{Poccia2015, Lankhorst2018}.\n\n\nWe speculate that different values for $\varepsilon$ could be caused by a stronger pinning potential, $E_{B}$, in the Nb devices in \cite{Poccia2015,Lankhorst2018} compared to the Al array studied here, leading to a different regime where dynamics may be influenced by competing effects associated with binding potential and vortex-vortex interactions. A model of vortex pinning in metallic arrays yielded $E_B = 0.2\,E_J$, where $E_J=(\hbar\/2e)\,i_c$ is the Josephson coupling, and $i_c$ is the single-island critical current \cite{Rzchowski1990}. Taking $i_c\sim 5 \,\mu$A from \cite{Poccia2015} compared to $i_{c}\sim 0.5\,\mu$A in our Al array suggests roughly an order of magnitude difference in $E_B$.\n\n\n\begin{figure}\n\t\includegraphics[width= 2.6 in]{Figure4.pdf}\n\t\t\caption{\textbf{Even, odd and zero flux states}. \textbf{a)} Differential sheet resistance curves for different values of dc current as a function of perpendicular magnetic field, $B_\perp$, at gate voltage $V_g=-2.8$~V, showing the evolution of vortex states for half-integer frustration $f=1\/2$ and integers $f=1,2,3$ and 4. 
Odd minima (black dots) move down in field with increasing dc current, while even minima (green dots) move up in field. The minimum at $f=1\/2$ is roughly insensitive to dc current, consistent with the overall symmetry in $f$ of the $f=1\/2$ transition. \textbf{b)} Differential sheet resistance (color) as a function of $B_\perp$ and $I_{\rm dc}$ shows commensurate features at fractional and integer frustration, $f$. Bright features at the tips of the zero-resistance spikes are the dip-to-peak signature of the dynamical transition. Note the alternating, even-odd asymmetric bright features at integer $f$ values, the symmetric bright feature at $f=\pm 1\/2$, and the vanishing differential sheet resistance remaining at large $I_{\rm dc}$ at $f=0$.\n\t\t\t}\n\t\label{fig4}\n\end{figure}\n\n\section{Skewed transitions and Even-odd structure}\n\label{evenodd}\nFigures \ref{fig1}(e) and \ref{fig2} reveal a striking difference between the dip-to-peak transitions at $f=1\/2$ and $f=1$. Throughout the transition at $f=1\/2$, each curve, representing a different $I_{\rm dc}$ value, is symmetric in $f$ about the transition point. In contrast, for the $f=1$ transition, the curves are highly asymmetric, and in fact, in the crossover from dip to peak, are nearly antisymmetric, lying above the high-current saturation on the right side and below it on the left side. This difference can also be seen in Fig.~\ref{fig1}(e). The bright feature at the top of the $f=1\/2$ peak, corresponding to the peak in $dV(f)\/dI_{s}$, is flat, while the bright feature at $f = 1$ is tilted, indicating that the peak in $dV(f)\/dI_{s}$ occurs at lower $I_{\rm dc}$ at higher $f$. \n\nSymmetry in $f$ around $f=1\/2$ follows from the presumed checkerboard vortex configuration near $f=1\/2$ \cite{Teitel.1983}, which is symmetric with respect to the addition or subtraction of vortices. 
Excess or deficit vortices slightly above or below $f=1\/2$ are expected to form a low-density superlattice on top of a base checkerboard \cite{Franz.1995, Lankhorst.2018b}. The symmetry between excess and deficit vortices should persist for weak disorder, which can then pin the superlattice. In contrast, $f=1$ is not symmetric with respect to the addition or subtraction of dilute vortices. Excess vortices above $f=1$ are repelled by each site in the full lattice, while deficit vortices, or antivortices, are attracted to each site, where they can annihilate with a vortex. Within this picture, excess vortices should be weakly pinned and contribute to low-current melting \cite{Franz.1995}, while vacancies resulting from annihilation with deficit vortices should more readily pin, lowering vortex mobility. The asymmetry between excess and deficit at $f=1$ is a signature of vortex interaction. \n\nLooking at integer transitions above $f=1$, an even-odd structure is evident in Fig.~\ref{fig4}, where the $f=3$ transition is skewed in the same direction as $f=1$, while transitions at $f=2$ and $f=4$ are skewed in the opposite direction. Even-odd behavior is also visible below the transition, in the current-dependent position of the minima of $dV\/dI_{s}$, marked as dots in Fig.~\ref{fig4}(a). Some understanding of this structure follows from the arguments above, that half-filling of the array is a symmetry point, so one might expect $f=2$ to show a reflection symmetry of the $f=1$ behavior about $f=3\/2$, continuing by reflection about half-integers to higher integers. \n\nWhile the overall even-odd pattern at integer $f$ presumably reflects the square potential of the array, the difference between $f=1\/2$ (symmetric) and $f=1$ (asymmetric) is also seen in triangular arrays \cite{Pei.2022}, and reflects a more basic difference between half and full filling. Even-odd structure of the dynamical transition in triangular lattices has not been reported. 
It will be interesting to investigate vortex filling in various lattices experimentally and numerically.\n\n\n\section{Discussion}\n\label{discussion}\n\nWe have investigated a dynamical transition from frozen to mobile vortices in a gate-tunable superconductor-semiconductor Josephson junction array. Tuning the gate into the superconducting phase, where a zero-resistance state is observed at low temperature and current, we see dip-to-peak transitions in differential resistance near frustration $f=$~1\/2, 1, 2, 3, and 4, similar to previous studies in metallic arrays. Motivated by the mapping of this transition to a Mott melting transition of frozen vortices, we found good scaling at the transitions but not the Mott exponent found previously, perhaps due to a weaker binding potential in our Al arrays. \n\nThe split transition at $f=0$ in the superconducting phase suggests that vortices are absent, not frozen, at $f=0$, consistent with a BKT model in which vortices and antivortices annihilate at $f=0$ below a critical temperature. When the array is tuned to the anomalous metal phase, a simple unsplit transition is observed at $f=0$, suggesting that frozen and perhaps some unfrozen vortices are present. These observations are consistent with previously reported experiments.\n\nThe transition at $f=1\/2$ is symmetric in $f$ around the transition, reflecting the symmetry of the underlying half-filled checkerboard lattice. At $f = 1$, on the other hand, the transition is strongly asymmetric, suggesting that excess vortices on top of an underlying full lattice melt easily, but deficit vortices (antivortices) do not. We find that the asymmetry of the $f=1$ transition persists to higher integers, mirrored about half integers, giving an overall even-odd pattern to transitions at higher integers. Further work is needed to understand this asymmetry, and how it depends on island and lattice geometry. 
This could also depend on the energetics of vortex configurations \cite{Berdiyorov.2005, Berdiyorov.2008} and binding multiple or giant vortices in the potential minima \cite{Baelus2002,Baelus2006}.\n\nWe emphasize a general connection between these highly tunable Josephson arrays and Bose-Hubbard systems \cite{Bruder.2005, Arovas.2021}. In these arrays, complexity is controlled by frustration and quantum effects by the charging energy of the islands, relevant as the superconductor-insulator transition is approached. \n\n\emph{Acknowledgments} \nWe thank A. Kapitulnik, S. Kivelson, B. Spivak, and V. Vinokur for useful discussions. \nResearch supported by Microsoft Station Q, the Danish National Research Foundation, and a research grant (Project 43951) from VILLUM FONDEN.\n\n\section*{Methods}\n\label{methods}\n\subsection*{Extracting scaling exponents}\nThis subsection provides details of the scaling analysis shown in Fig.~\ref{fig3}. The scaling exponent, $\varepsilon$, defined in Eq.~1 was extracted from the slope of a log-log plot of $d\/dI(dV\/dI_s-dV\/dI_s|_{I=I_0})$ versus $1\/b$, allowing separate left and right critical currents, $I^L_0$ and $I^R_0$. \n\nFigure~\ref{figS} shows the spread of measured values. For each value of $b$ a mean of all data was calculated. A curve through the means is shown as a solid black curve. Then, a line with offset was fit to the means. The linear fit is shown as a green dashed line. The slope of the linear fit yields $\varepsilon$. The process is repeated for each value of $f$ and on the left and right sides for $f=1\/2$ and $f=1$.\nThe value of $b$ used in the fit takes into account the dependence of $f_c$ on $I_{\rm dc}$. Values of $f_c(I_{\rm dc})$ are shown as green and black dots in Fig.~\ref{fig4}. \n\nWe note that only points near the separatrix (above and below) were included in the extraction of the scaling exponent for the right branch of $f_c=1$, due to the large asymmetry; see Fig.~\ref{figS}(d). 
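The exponent-extraction procedure just described can be sketched numerically. In the toy example below, synthetic curves obey the scaling form with a known exponent (the $\tanh$ profile, $\varepsilon_{\rm true}=1.5$, and the grids are illustrative assumptions, not fit results), and the slope of the log-log plot of $d\/dI[dV\/dI_{s}-{\rm separatrix}]$ near $I_0$ versus $1\/b$ recovers it.

```python
import numpy as np

# Sketch of the Methods procedure under stated assumptions: for synthetic
# curves obeying the scaling form with a known exponent eps_true, the slope
# of log d/dI[dV/dI_s - separatrix] (near I_0) versus log(1/b) recovers eps.
# The tanh profile, eps_true = 1.5, and the grids are illustrative only.

eps_true, I0, sep = 1.5, 1.0, 10.0
I = np.linspace(1.0, 1.5, 2001)  # right branch, arbitrary units

log_slope, log_invb = [], []
for b in (0.05, 0.1, 0.2, 0.4):
    y = sep + np.tanh((I - I0) / b ** eps_true)  # obeys the scaling form
    dydI = np.gradient(y - sep, I)               # d/dI of separatrix-subtracted curve
    log_slope.append(np.log(dydI[0]))            # slope evaluated near I = I_0
    log_invb.append(np.log(1.0 / b))

eps_fit = np.polyfit(log_invb, log_slope, 1)[0]  # slope of the linear fit
assert abs(eps_fit - eps_true) < 0.02
```

For real data, the averaging over the measured spread at each $b$ (solid black curve in Fig.~\ref{figS}) replaces the single synthetic curve used here.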
\n\n\\setcounter{figure}{0}\n\\makeatletter \n\\renewcommand{\\thefigure}{S\\@arabic\\c@figure}\n\\makeatother\n\\begin{figure}[t]\n\t\\includegraphics[width= 3.4 in]{supplemental.pdf}\n\t\t\\caption{\\textbf{Extracting scaling exponents}. \\textbf{a-d)} Log-log plots of the slope of the differential resistance, with the separatrix subtracted, constructed to extract of scaling exponents, $\\varepsilon$, separately for left and right branches around $f=1\/2$ and $f=1$. Means of logs are calculated for each value of $b$ (solid black curve) then means fit to a linear function (dashed green line). }\n\t\\label{figS}\n\\end{figure}\n\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzmluf b/data_all_eng_slimpj/shuffled/split2/finalzzmluf new file mode 100644 index 0000000000000000000000000000000000000000..019b71bd6e5e8d86d1e0836ae7b3db589d5168c6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzmluf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\vspace{1mm}\n\\noindent\n\nThe spin properties of hadrons inclusively produced in high energy \ninteractions are related to the fundamental properties of quarks and \ngluons and to their elementary interactions in a much more subtle way \nthan unpolarized quantities. They test unusual basic dynamical \nproperties and reveal how the usual models for quark distribution and\nhadronization -- successful in predicting unpolarized cross-sections -- may \nnot be adequate to describe spin effects. We consider here such two cases,\nsingle spin asymmetries in Deep Inelastic Scattering and the polarization\nof mesons produced in the fragmentation of polarized quarks at LEP.\n\n\\section{Single spin asymmetries in DIS}\n\n\\vspace{1mm}\n\\noindent\n\nWe discuss first single spin asymmetries in DIS processes. 
\nWe start by recalling that single spin asymmetries in large $p_{_T}$ inclusive \nhadronic reactions are forbidden in leading-twist perturbative QCD, \nreflecting the fact that single spin asymmetries are zero at the partonic \nlevel and that collinear parton configurations inside hadrons do not allow \nsingle spin dependences. However, experiments tell us in several cases \n\cite{ada1,ada2} that single spin asymmetries are large and indeed \nnon-negligible.\n\nThe usual arguments to explain this apparent disagreement between pQCD \nand experiment invoke the moderate $p_{_T}$ values of the data -- a few \nGeV, not quite yet in the true perturbative regime -- and the importance \nof higher-twist effects. Several phenomenological models have recently\nattempted to explain the large single spin asymmetries observed in\n$p^\uparrow p \to \pi X$ as twist-3 effects which might be due to intrinsic \npartonic $\mbox{\boldmath $k$}_\perp$ in the fragmentation \cite{col1} and\/or distribution \nfunctions \cite{siv1}-\cite{noi}. \n\nLet us consider a process in which one has convincing evidence \nthat partons and perturbative QCD work well and successfully describe the \nunpolarized and leading-twist spin data, namely Deep Inelastic Scattering \n(DIS). 
In particular we shall discuss single spin asymmetries in the inclusive,\n$\\ell N^\\uparrow \\to \\ell + jets$ and $\\ell N^\\uparrow \\to hX$, \nreactions looking at possible origins of such asymmetries and \ndevising strategies to isolate and discriminate among them \\cite{alm}.\n\nAccording to the QCD hard scattering picture and the factorization theorem \n\\cite{col1,col2,col3} the cross-section for the $\\ell N^\\uparrow \\to hX$ \nreaction is given by\n\\begin{eqnarray}\n& & \\frac{E_h \\, d^3\\sigma^{\\ell + N,S \\to h + X}} {d^{3} \\mbox{\\boldmath $p$}_h} = \n\\sum_{q; \\lambda^{\\,}_{q^{\\prime}}, \\lambda^{\\prime}_{q^{\\prime}}, \\lambda^{\\,}_h} \n\\int \\frac {dx \\, d^2\\mbox{\\boldmath $k$}_\\perp d^2\\mbox{\\boldmath $k$}_\\perp^\\prime} {\\pi z} \\label{gen} \\\\ \n& & \\tilde f_{q\/N}^{N,S}(x, \\mbox{\\boldmath $k$}_\\perp) \\>\n\\frac{d\\hat\\sigma^{q,P_q}}{d\\hat t}(x, \\mbox{\\boldmath $k$}_\\perp, \\mbox{\\boldmath $k$}_\\perp^\\prime) \\>\n\\rho^{q^{\\prime}}_{\\lambda^{\\,}_{q^{\\prime}}, \\lambda^{\\prime}_{q^{\\prime}}}\n(x, \\mbox{\\boldmath $k$}_\\perp, \\mbox{\\boldmath $k$}_\\perp^\\prime) \\> \\widetilde \nD_{\\lambda^{\\,}_h, \\lambda^{\\,}_h}^{\\lambda^{\\,}_{q^{\\prime}}, \\lambda^{\\prime}_{q^{\\prime}}}\n(z, \\mbox{\\boldmath $k$}_\\perp^\\prime) \\,. \\nonumber\n\\end{eqnarray}\n\nLet us briefly discuss the different quantities appearing in the above \nequation; more details can be found in Ref. \\cite{alm}.\n$\\tilde f_{q\/N}^{N,S}(x, \\mbox{\\boldmath $k$}_\\perp)$ is the quark distribution function,\nthat is the total number density of quarks $q$ with momentum fraction \n$x$ and intrinsic transverse momentum $\\mbox{\\boldmath $k$}_\\perp$ inside a polarized nucleon\n$N$ with spin four-vector $S$. 
\n\n$d\\hat\\sigma^{q,P_q} \/ d\\hat t$ is the cross-section for the $\\ell \nq^\\uparrow \\to \\ell q$ process, with an unpolarized lepton and an initial \nquark with polarization $P_q$, while the final quark and lepton polarization\nare summed over. Notice that for helicity conserving elementary \ninteractions $d\\hat\\sigma^{q,P_q} \/ d\\hat t$ equals the unpolarized \ncross-section $d\\hat\\sigma^q \/ d\\hat t$.\n$\\rho^{q^\\prime}$ is the helicity density matrix of the final quark \nproduced in the $\\ell q^\\uparrow$ interaction and \n\\begin{equation}\n\\widetilde D^h_{q, s_q}(z, \\mbox{\\boldmath $k$}_\\perp) = \\sum_{\\lambda^{\\,}_q, \\lambda^{\\prime}_q} \n\\rho^q_{\\lambda^{\\,}_q \\lambda^{\\prime}_q} \\> \\widetilde D_{\\lambda^{\\,}_h, \n\\lambda^{\\,}_h}^{\\lambda^{\\,}_q, \\lambda^{\\prime}_q}(z, \\mbox{\\boldmath $k$}_\\perp)\n\\end{equation}\ndescribes the fragmentation process of a polarized quark $q$ \nwith spin $s_q$ into a hadron $h$ with helicity $\\lambda^{\\,}_h$, \nmomentum fraction $z$ and intrinsic transverse momentum $\\mbox{\\boldmath $k$}_\\perp$ with \nrespect to the jet axis. It is simply the inclusive cross-section for\nthe $q^\\uparrow \\to hX$ fragmentation process. As it will be shown in the \nsecond part we can safely neglect the coherent interactions of the \nfragmenting quark, as we are not looking at the spin state of the final \nhadron. 
The usual unpolarized fragmentation function is given by\n\\begin{equation}\nD^h_q(z) = {1\\over 2} \\sum_{\\lambda^{\\,}_q, \\lambda^{\\,}_h}\n\\int d^2\\mbox{\\boldmath $k$}_\\perp \\>\n\\widetilde D^{\\lambda^{\\,}_q,\\lambda^{\\,}_q}_{\\lambda^{\\,}_h,\\lambda^{\\,}_h}\n(z, \\mbox{\\boldmath $k$}_\\perp) \\,.\n\\label{fr}\n\\end{equation}\n\nSimilar formulae hold also when the elementary interaction\nis $\\ell q \\to \\ell q g$ rather than $\\ell q \\to \\ell q$: in the latter \ncase two jets are observed in the final state -- the target jet and the \ncurrent quark jet -- and in the former case three -- the target jet and \n$q$ + $g$ current jets.\n\nIn Eq. (\\ref{gen}) we have taken into account intrinsic transverse momenta \nboth in the distribution and the fragmentation process;\nthe $\\mbox{\\boldmath $k$}_\\perp$ dependences are expected to have negligible effects on \nunpolarized variables, for which they are indeed usually neglected, but \nthey can be of crucial importance for some single spin observables, as \ndiscussed in Refs. \\cite{col1}-\\cite{noi}.\n\nWe discuss now possible sources of single spin effects in Eq. (\\ref{gen}).\n\n\\goodbreak\n\\vskip 12pt\n\\noindent\n{\\mbox{\\boldmath $k_\\perp$}} {\\bf effects in fragmentation process}\n\\cite{col1}\n\\vskip 6pt\n\\nobreak\n \nThe fragmentation process of a transversely polarized quark into a hadron \n$h$ (whose polarization is not observed) with fixed $z$ and $\\mbox{\\boldmath $k$}_\\perp$ \nmay depend on the quark spin orientation, provided the quark spin \n$s_q$ has a component perpendicular to the hadron-quark plane (otherwise \nany spin dependence would be forbidden by parity conservation). 
That is, \nthere might be a non zero {\it quark analysing power} \\cite{col1}:\n\\begin{equation}\nA_q^h \\equiv {\\widetilde D^h_{q, s_q}(z, \\mbox{\\boldmath $k$}_\\perp) \n- \\widetilde D^h_{q,-s_q}(z, \\mbox{\\boldmath $k$}_\\perp) \\over \\widetilde \nD^h_{q, s_q}(z, \\mbox{\\boldmath $k$}_\\perp) + \\widetilde D^h_{q,-s_q}(z,\\mbox{\\boldmath $k$}_\\perp)} \\,\\cdot\n\\label{qap}\n\\end{equation}\nBy rotational invariance $\\widetilde D^h_{q,-s_q}(z,\\mbox{\\boldmath $k$}_\\perp) =\n\\widetilde D^h_{q, s_q}(z,-\\mbox{\\boldmath $k$}_\\perp)$, which shows immediately how the \nquark analysing power vanishes for $\\mbox{\\boldmath $k$}_\\perp = 0$. \n \n\\goodbreak\n\\vskip 12pt\n\\noindent\n{\\mbox{\\boldmath $k_\\perp$}} {\\bf effects in distribution functions}\n\\cite{siv1}-\\cite{noi}\n\\vskip 6pt\n\\nobreak\n\nA similar idea had been previously proposed in Refs. \\cite{siv1, siv2}\nand later rediscovered in Ref. \\cite{noi}, concerning the distribution\nfunctions: that is, the number of quarks (whose spin is not observed)\nwith fixed $x$ and $\\mbox{\\boldmath $k$}_\\perp$ inside a transversely polarized nucleon\nmay depend on the nucleon spin orientation and the function\n\\begin{equation}\n\\Delta\\tilde f_{q\/N}^{N^\\uparrow}(x, \\mbox{\\boldmath $k$}_\\perp) \\equiv\n\\tilde f_{q\/N}^{N^\\uparrow}(x, \\mbox{\\boldmath $k$}_\\perp) - \n\\tilde f_{q\/N}^{N^\\downarrow}(x, \\mbox{\\boldmath $k$}_\\perp)\n= \\tilde f_{q\/N}^{N^\\uparrow}(x, \\mbox{\\boldmath $k$}_\\perp) - \n\\tilde f_{q\/N}^{N^\\uparrow}(x,-\\mbox{\\boldmath $k$}_\\perp)\n\\label{nap}\n\\end{equation}\nmay be different from zero. $\\uparrow$ and $\\downarrow$ refer to the \nnucleon spin up or down with respect to the quark-nucleon plane. 
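Parity invariance constrains the vector structure of such a correlation. A simple way of exhibiting the two properties just mentioned -- the expression below is only an illustrative parametrization, with a conventional normalization -- is to write\n\\begin{equation}\n\\Delta\\tilde f_{q\/N}^{N^\\uparrow}(x, \\mbox{\\boldmath $k$}_\\perp) \\propto\n\\mbox{\\boldmath $S$}_{_T} \\cdot \\left( \\hat{\\mbox{\\boldmath $p$}} \\times \\mbox{\\boldmath $k$}_\\perp \\right)\n\\, h(x, k_\\perp) \\,,\n\\end{equation}\nwhere $\\hat{\\mbox{\\boldmath $p$}}$ is the nucleon momentum direction, $\\mbox{\\boldmath $S$}_{_T}$ the transverse \nnucleon spin and $h$ an even function of $k_\\perp = |\\mbox{\\boldmath $k$}_\\perp|$: such a structure \nis manifestly odd under $\\mbox{\\boldmath $k$}_\\perp \\to -\\mbox{\\boldmath $k$}_\\perp$ and vanishes for \n$\\mbox{\\boldmath $k$}_\\perp = 0$.\n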
\n\nIn terms of the usual light-cone operator definition of structure \nfunctions one has \\cite{col1,dra}\n\\begin{eqnarray}\n\\Delta\\tilde f_{q\/N}^{N^\\uparrow}\n&=& 2 \\, {\\rm Im} \\int {dy^- d\\mbox{\\boldmath $y$}_\\perp \\over(2 \\pi)^3}\ne^{-i x p^+ y^- + i \\mbox{\\boldmath $k$}_\\perp\\cdot\\mbox{\\boldmath $y$}_\\perp} \\nonumber\\\\\n&&\\quad\\langle p, -|\\bar\\psi_a(0,y^-,y_\\perp)\n{\\gamma^+\\over 2}\\psi_a(0)|p, + \\rangle \\,. \n\\label{del+-}\n\\end{eqnarray}\n\nIn Ref. \\cite{col1} it is argued that such off-diagonal (in the helicity\nbasis) matrix elements are zero due to the time-reversal invariance of QCD, \nand indeed this is proven by exploiting the time-reversal and parity \ntransformation properties of free Dirac spinors. However, in Ref. \\cite{dra} \nit has been shown that this need not be so in chiral models with quarks moving \nin a background of chiral fields.\n\nBoth $\\widetilde D^h_{q^\\uparrow}(z, \\mbox{\\boldmath $k$}_\\perp) \n- \\widetilde D^h_{q^\\downarrow}(z, \\mbox{\\boldmath $k$}_\\perp)$ and \n$\\tilde f_{q\/N}^{N^\\uparrow}(x, \\mbox{\\boldmath $k$}_\\perp) - \n\\tilde f_{q\/N}^{N^\\downarrow}(x, \\mbox{\\boldmath $k$}_\\perp)$ can be considered as \nnew fundamental spin and $\\mbox{\\boldmath $k$}_\\perp$ dependent non perturbative \nfunctions describing respectively quark fragmentation and distribution\nproperties. In the sequel we shall devise strategies to test their\nrelevance. \n\n\\vskip 12pt\n\\noindent\n{\\bf Single spin effects in the elementary interactions} \n\\vskip 6pt\n\nAs we already mentioned, both perturbative QED and QCD at high energy \ndo not allow single helicity flips in the $\\ell q$ interactions, so that \nthere cannot be any dependence on the quark polarization in \n$d\\hat\\sigma^{q,P_q}\/d\\hat t$. Similarly, the perturbative QCD evolution\nof the distribution and fragmentation functions is not expected to\nintroduce any single spin dependence. 
We must conclude that the hard \nelementary interactions are unlikely to introduce any single\nspin effect: however, this basic QED and QCD property should also\nbe tested. \n\n\\vskip 6pt\nLet us now describe a set of possible measurements which could single out\nsome of the above mechanisms and test them. \n\n\\goodbreak\n\\vskip 6pt\n\\noindent\n$a) \\> \\ell N^\\uparrow \\to \\ell + 2\\,jets$\n\\vskip 4pt\n\\nobreak\n\nHere one avoids any fragmentation effect by looking at the fully \ninclusive cross-section for the process $\\ell N^\\uparrow \\to \\ell + 2\\, jets$,\nthe 2 jets being the target and current ones; this is the usual DIS, \nand Eq. (\\ref{gen}) becomes \n\\begin{equation}\n\\frac{d^2\\sigma^{\\ell + N,S \\to \\ell + X}} {dx \\, dQ^2} = \\sum_q\n\\int d^2\\mbox{\\boldmath $k$}_\\perp \\> \\tilde f_{q\/N}^{N,S}(x, \\mbox{\\boldmath $k$}_\\perp) \\>\n\\frac{d\\hat\\sigma^{q,P_q}}{d\\hat t}(x, \\mbox{\\boldmath $k$}_\\perp) \\,. \n\\label{gena}\n\\end{equation}\n\nIn this case the elementary interaction is a pure QED, \nhelicity conserving one, $\\ell q \\to \\ell q$, and $d\\hat\\sigma^{q,P_q}\/d\\hat t$\ncannot depend on the quark polarization. Some spin dependence might only\nremain in the distribution function, due to intrinsic $\\mbox{\\boldmath $k$}_\\perp$ effects\n\\cite{siv1}-\\cite{noi}, \\cite{dra} and we have\n\\begin{eqnarray}\n\\frac{d^2\\sigma^{\\ell N^\\uparrow \\to \\ell + X}} {dx \\, dQ^2} \n&-& \\frac{d^2\\sigma^{\\ell N^\\downarrow \\to \\ell + X}} {dx \\, dQ^2} \n= \\sum_q \\int d^2\\mbox{\\boldmath $k$}_\\perp \\nonumber \\\\\n&\\times& \\Delta\\tilde f_{q\/N}^{N^\\uparrow}(x, \\mbox{\\boldmath $k$}_\\perp) \\>\\>\n\\frac{d\\hat\\sigma^q}{d\\hat t}(x, \\mbox{\\boldmath $k$}_\\perp) \\,. 
\n\\label{asymg}\n\\end{eqnarray}\nDespite the fact that $\\Delta\\tilde f_{q\/N}^{N^\\uparrow}$ is an odd\nfunction of $\\mbox{\\boldmath $k$}_\\perp$ a non zero value of the above difference \n-- of ${\\cal O}(k_\\perp\/\\sqrt {Q^2})$, twist 3 -- \nmight remain even after integration on $d^2\\mbox{\\boldmath $k$}_\\perp$ because of\nthe $\\mbox{\\boldmath $k$}_\\perp$ dependence of $d\\hat\\sigma^q\/d\\hat t$, similarly \nto what happens in $pp^\\uparrow \\to \\pi X$ \\cite{noi}. The observation\nof a non vanishing value of the single spin effect of Eq. (\\ref{asymg})\nwould be a decisive test in favour of the mechanism suggested in \nRefs. \\cite{siv1}-\\cite{noi} and would allow an estimate of the\nnew function (\\ref{nap}).\n\n\\vskip 6pt\n\\noindent\n$b) \\> \\ell N^\\uparrow \\to h + X \\> (2\\,jets, \\> \\mbox{\\boldmath $k$}_\\perp \\not= 0)$\n\\vskip 4pt\n\nOne looks for a hadron $h$, with transverse momentum $\\mbox{\\boldmath $k$}_\\perp$, inside \nthe quark current jet; the final lepton may or may not be observed.\nThe elementary subprocess is $\\ell q \\to \\ell q$ and Eq. 
(\\ref{gen}) yields\n\\begin{eqnarray}\n& & \\frac{E_h \\, d^5\\sigma^{\\ell + N^\\uparrow \\to h + X}} \n{d^{3} \\mbox{\\boldmath $p$}_h d^2 \\mbox{\\boldmath $k$}_\\perp} \n- \\frac{E_h \\, d^5\\sigma^{\\ell + N^\\downarrow \\to h + X}} \n{d^{3} \\mbox{\\boldmath $p$}_h d^2 \\mbox{\\boldmath $k$}_\\perp} \\label{coll} \\\\\n&=& \\sum_q \\int \\frac {dx} {\\pi z} \\> \n\\Delta_{_T} q(x) \\> \\Delta_{_N} \\hat\\sigma^q (x, \\mbox{\\boldmath $k$}_\\perp) \\,\n\\left[ \\tilde D^h_{q^\\uparrow}(z, \\mbox{\\boldmath $k$}_\\perp)\n- \\tilde D^h_{q^\\uparrow}(z, - \\mbox{\\boldmath $k$}_\\perp) \\right]\n\\nonumber\n\\end{eqnarray}\nwhere $\\Delta_{_T}q$ is the polarized number density for transversely spinning \nquarks $q$ and $\\Delta_{_N} \\hat\\sigma^q$ is the elementary cross-section \ndouble spin asymmetry\n\\begin{equation}\n\\Delta_{_N} \\hat\\sigma^q = {d\\hat \\sigma^{\\ell q^\\uparrow \\to \n\\ell q^\\uparrow} \\over d\\hat t} - {d\\hat \\sigma^{\\ell q^\\uparrow \\to \n\\ell q^\\downarrow} \\over d\\hat t} \\,\\cdot\n\\label{del}\n\\end{equation}\n\nIn Eq. (\\ref{coll}) we have neglected the $\\mbox{\\boldmath $k$}_\\perp$ effect in the \ndistribution function, which can be done once the asymmetry discussed\nin $a)$ turns out to be negligible. We are then testing directly the \nmechanism suggested in Ref. \\cite{col1} and a non zero value of\nthe l.h.s. of Eq. (\\ref{coll}) would be a decisive test in its favour \nand would allow an estimate of the new function appearing in the\nnumerator of Eq. (\\ref{qap}). Notice again that even upon integration over\n$d^2\\mbox{\\boldmath $k$}_\\perp$ the spin asymmetry of Eq. 
(\\ref{coll}) might survive,\nat higher twist order $k_\\perp\/p_{_T}$, due to some $\\mbox{\\boldmath $k$}_\\perp$ dependence \nin $\\Delta_{_N} \\hat\\sigma^q$.\n\n\\goodbreak\n\\vskip 6pt\n\\noindent\n$c) \\> \\ell N^\\uparrow \\to h + X \\> (2\\,jets, \\> \\mbox{\\boldmath $k$}_\\perp = 0)$\n\\vskip 4pt\n\\nobreak\n\nBy selecting events with the final hadron collinear to the jet axis\n($\\mbox{\\boldmath $k$}_\\perp = 0$) one forbids any single spin effect in the fragmentation\nprocess. As in the fully inclusive case $a)$ the observation of a\nsingle spin asymmetry in such a case would imply single, $\\mbox{\\boldmath $k$}_\\perp$\ndependent, spin effects in the distribution functions. \n\n\\vskip 6pt\n\\noindent\n$d) \\> \\ell N^\\uparrow \\to h + X \\> (3\\,jets, \\> \\mbox{\\boldmath $k$}_\\perp \\not= 0)$\n\\vskip 4pt\n \nThe elementary process is now $\\ell q \\to \\ell qg$ and one looks at \nhadrons with $\\mbox{\\boldmath $k$}_\\perp \\not= 0$ inside the $q$ current jet. Single\nspin asymmetries can originate either from single spin effects in the \nfragmentation process or distribution functions. One should not, in \nprinciple, forget possible spin effects in the elementary QCD interaction.\n\n\\vskip 6pt\n\\noindent\n$e) \\> \\ell N^\\uparrow \\to \\ell + 3\\,jets$ or \n$\\ell N^\\uparrow \\to h + X \\> (3\\,jets, \\> \\mbox{\\boldmath $k$}_\\perp = 0)$\n\\vskip 4pt\n\nThese cases are analogous to $a)$ and $c)$ respectively: the measurement \neliminates spin effects arising from the fragmentation functions. \nThe only possible origin of a single spin asymmetry would reside \nin the distribution function. However, if no effect is observed in cases\n$a)$ and $c)$, but some effect is observed here, then one has to \nconclude that there must be some single spin effect in the elementary \nQCD interaction. 
However unexpected, such a result would call into question the\nvalidity of quark helicity conservation, a fundamental property of\npQCD which has never been directly tested. \n\n\\vskip 6pt\nIn summary, a study of single transverse spin asymmetries in DIS could\nprovide a series of profound tests of our understanding of large $p_{_T}$\nQCD-controlled reactions.\n\n\\section{{\\mbox{\\boldmath $\\rho^{\\,}_{1,-1}(V)$}} in the process \n{\\mbox{\\boldmath $e^- e^+ \\to q\\bar q \\to V + X$}}}\n\n\\vspace{1mm}\n\\noindent\n\nWe consider now the spin properties of hadrons produced at LEP. It was pointed \nout in Refs. \\cite{akp} and \\cite{aamr} that final state interactions \nbetween the $q$ and $\\bar q$ created in $e^+ e^-$ annihilations \n-- usually neglected, but indeed necessary -- might give rise to non \nzero spin observables which would otherwise be forced to vanish: \nthe off-diagonal element $\\rho^{\\,}_{1,-1}$ of the helicity density matrix of \nvector mesons may be sizeably different from zero \\cite{akp} due to a \ncoherent fragmentation process which takes into account $q \\bar q$ \ninteractions. The incoherent fragmentation of a single independent quark \nleads instead to zero values for such off-diagonal elements. \n\nWe present predictions \\cite{abmq} for $\\rho^{\\,}_{1,-1}$ of several vector \nmesons $V$, provided they are produced in two jet events, carry\na large momentum or energy fraction $z=2E_{_V}\/\\sqrt s$, and have a small\ntransverse momentum $p_{_T}$ inside the jet. Our estimates are in agreement\nwith the existing data and are crucially related both to the presence \nof final state interactions and to the Standard Model couplings of the \nelementary $e^- e^+ \\to q \\bar q$ interaction. 
\n\nThe helicity density matrix of a hadron $h$ inclusively produced in the \ntwo jet event $e^- e^+ \\to q\\bar q \\to h + X$ can be written as \n\\cite{akp, aamr}\n\\begin{equation}\n\\rho^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\prime}_h}(h) \n= {1\\over N_h} \\sum_{q,X,\\lambda^{\\,}_X,\\lambda^{\\,}_q,\\lambda^{\\,}_{\\bar q},\n\\lambda^{\\prime}_q,\\lambda^{\\prime}_{\\bar q}} \nD^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\,}_X; \\lambda^{\\,}_q,\\lambda^{\\,}_{\\bar q}} \\>\\>\n\\rho^{\\,}_{\\lambda^{\\,}_q,\\lambda^{\\,}_{\\bar q};\n\\lambda^{\\prime}_q,\\lambda^{\\prime}_{\\bar q}}\\,(q\\bar q) \\>\\> \nD^*_{\\lambda^{\\prime}_h \\lambda^{\\,}_X; \\lambda^{\\prime}_q,\\lambda^{\\prime}_{\\bar q}} \\,,\n\\label{rhoh}\n\\end{equation}\nwhere $\\rho(q\\bar q)$ is the helicity density matrix of the $q\\bar q$ state \ncreated in the annihilation of the unpolarized $e^+$ and $e^-$,\n\\begin{equation}\n\\rho^{\\,}_{\\lambda^{\\,}_q,\\lambda^{\\,}_{\\bar q};\n\\lambda^{\\prime}_q,\\lambda^{\\prime}_{\\bar q}}\\,(q\\bar q)\n= {1\\over 4N_{q\\bar q}} \\sum_{\\lambda^{\\,}_{-}, \\lambda^{\\,}_{+}}\nM^{\\,}_{\\lambda^{\\,}_q \\lambda^{\\,}_{\\bar q};\\lambda^{\\,}_{-} \\lambda^{\\,}_{+}} \\>\nM^*_{\\lambda^{\\prime}_q \\lambda^{\\prime}_{\\bar q}; \\lambda^{\\,}_{-} \\lambda^{\\,}_{+}} \\,.\n\\label{rhoqq}\n\\end{equation}\nThe $M$'s are the helicity amplitudes for the $e^-e^+ \\to q\\bar q$ process and\nthe $D$'s are the fragmentation amplitudes, {\\it i.e.} the helicity\namplitudes for the process $q\\bar q \\to h+X$; the $\\sum_{X,\\lambda_X}$ stands \nfor the phase space integration and the sum over spins of all the unobserved \nparticles, grouped into a state $X$. 
The normalization factors $N_h$ and\n$N_{q\\bar q}$ are given by:\n\\begin{equation}\nN_h = \\sum_{q,X; \\lambda^{\\,}_h, \\lambda^{\\,}_X, \\lambda^{\\,}_q, \\lambda^{\\,}_{\\bar q},\n\\lambda^{\\prime}_q, \\lambda^{\\prime}_{\\bar q}} \nD^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\,}_X; \\lambda^{\\,}_q,\\lambda^{\\,}_{\\bar q}} \\>\\>\n\\rho^{\\,}_{\\lambda^{\\,}_q,\\lambda^{\\,}_{\\bar q};\n\\lambda^{\\prime}_q,\\lambda^{\\prime}_{\\bar q}}\\,(q\\bar q) \\>\\> \nD^*_{\\lambda^{\\,}_h \\lambda^{\\,}_X; \\lambda^{\\prime}_q,\\lambda^{\\prime}_{\\bar q}} \\,\n= \\sum_q D^h_q \\,,\n\\label{nh}\n\\end{equation}\nwhere $D^h_q$ is the usual fragmentation function of quark $q$ into \nhadron $h$, and \n\\begin{equation}\nN_{q\\bar q} = {1\\over 4} \n\\sum_{\\lambda^{\\,}_q, \\lambda^{\\,}_{\\bar q}; \\lambda^{\\,}_{-}, \\lambda^{\\,}_{+}} \\vert \nM^{\\,}_{\\lambda^{\\,}_q \\lambda^{\\,}_{\\bar q}; \\lambda^{\\,}_{-} \\lambda^{\\,}_{+}} \\vert^2 \\,.\n\\label{nqq}\n\\end{equation}\n\nThe helicity density matrix for the $q\\bar q$ state can be computed \nin the Standard Model and its non zero elements are given by\n\\begin{eqnarray}\n\\rho^{\\,}_{+-;+-}(q\\bar q) &=& 1 - \\rho^{\\,}_{-+;-+}(q\\bar q) \\> \\simeq \\> \n{1\\over 2}\\,{(g_{_V} - g_{_A})^2_q \\over (g^2_{_V} + g^2_{_A})_q}\n\\label{rhoqqd} \\\\\n\\rho^{\\,}_{+-;-+}(q\\bar q) &=& \\rho^*_{+-;-+}(q\\bar q) \\> \\simeq \\> \n{1\\over 2}\\,{(g^2_{_V} - g^2_{_A})_q \\over (g^2_{_V} + g^2_{_A})_q} \n\\, {\\sin^2\\theta \\over 1+ \\cos^2\\theta} \\, \\cdot\n\\label{rhoqqap}\n\\end{eqnarray}\nThese expressions are simple but approximate and hold at the $Z_0$ pole, \nneglecting electromagnetic contributions, masses and terms proportional \nto $g_{_V}^l$; the full correct expressions can be found in Ref. 
\\cite{abmq}.\n\nNotice that, inserting the values of the coupling constants\n\\begin{equation}\ng_{_V}^{u,c,t} = \\>\\> {1\\over 2} - {4\\over 3}\\sin^2\\theta_{_W} \\quad\\quad\ng_{_V}^{d,s,b} = -{1\\over 2} + {2\\over 3}\\sin^2\\theta_{_W} \\quad\\quad\ng_{_A}^{u,c,t} = - g_{_A}^{d,s,b} = {1\\over 2} \\label{cc}\n\\nonumber\n\\end{equation} \none has\n\\begin{equation}\n\\rho^{\\,}_{+-;-+}(u\\bar u, c\\bar c, t\\bar t) \n\\simeq -0.36 {\\sin^2\\theta \\over 1 + \\cos^2\\theta} \n\\quad\\quad \n\\rho^{\\,}_{+-;-+}(d\\bar d, s\\bar s, b\\bar b) \n\\simeq -0.17 {\\sin^2\\theta \\over 1 + \\cos^2\\theta} \n\\, \\cdot \\label{rho+-ap}\n\\end{equation}\n\nEq. (\\ref{rho+-ap}) clearly shows the $\\theta$ dependence of \n$\\rho^{\\,}_{+-;-+}(q\\bar q)$. In case of pure electromagnetic interactions\n($\\sqrt s \\ll M_{_Z}$) one has exactly:\n\\begin{equation}\n\\rho^{\\gamma}_{+-;-+}(q\\bar q) = {1\\over 2}\n\\,{\\sin^2\\theta \\over 1+ \\cos^2\\theta} \\, \\cdot\n\\label{rhoqqelm}\n\\end{equation}\nNotice that Eqs. (\\ref{rho+-ap}) and (\\ref{rhoqqelm}) have the same\nangular dependence, but a different sign for the coefficient in front,\nwhich is negative for the $Z$ contribution.\n\nBy using the above equations for $\\rho(q\\bar q)$ into Eq. (\\ref{rhoh}) one \nobtains the most general expression of $\\rho(h)$ in terms of the $q\\bar q$ spin \nstate and the unknown fragmentation amplitudes.\n\nDespite the ignorance of the fragmentation process some predictions\ncan be made \\cite{abmq} by considering the production of hadrons almost \ncollinear with the parent jet: the $q \\bar q \\to h + X$ fragmentation is \nthen essentially a c.m. 
forward process and the unknown $D$ amplitudes must \nsatisfy the angular momentum conservation relation \\cite{bls}\n\\begin{equation}\nD^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\,}_X; \\lambda^{\\,}_q,\\lambda^{\\,}_{\\bar q}} \n\\sim \\left( \\sin{\\theta_h\\over 2} \\right)^{\n\\vert \\lambda^{\\,}_h - \\lambda^{\\,}_X - \\lambda^{\\,}_q + \\lambda^{\\,}_{\\bar q} \\vert} \n\\simeq \\left( {p_{_T}\\over z\\sqrt s} \\right)^{\n\\vert \\lambda^{\\,}_h - \\lambda^{\\,}_X - \\lambda^{\\,}_q + \\lambda^{\\,}_{\\bar q} \\vert} \\,,\n\\label{frd}\n\\end{equation}\nwith $\\theta_h$ the angle between the hadron momentum, \n$\\mbox{\\boldmath $h$} = z \\mbox{\\boldmath $q$} + \\mbox{\\boldmath $p$}_{_T}$, and the quark momentum $\\mbox{\\boldmath $q$}$.\n\nThe bilinear combinations of fragmentation amplitudes contributing to\n$\\rho(h)$ are then not suppressed by powers of $(p_{_T}\/z\\sqrt s)$\nonly if the exponent in Eq. (\\ref{frd}) is zero, which greatly reduces the \nnumber of relevant helicity configurations.\n\nThe fragmentation process is a parity conserving one and the fragmentation\namplitudes must then also satisfy the forward parity relationship\n\\begin{equation}\nD^{\\,}_{-\\lambda^{\\,}_h -\\lambda^{\\,}_X; -+} = (-1)^{S^{\\,}_h + S^{\\,}_X + \n\\lambda^{\\,}_h - \\lambda^{\\,}_X} \\> \nD^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\,}_X; +-} \\,.\n\\label{par}\n\\end{equation}\n\nBefore presenting analytical and numerical results for the coherent quark \nfragmentation let us remember that in case of incoherent single quark \nfragmentation Eq. 
(\\ref{rhoh}) becomes\n\\begin{equation}\n\\rho^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\prime}_h}(h) \n= {1\\over N_h} \\sum_{q,X,\\lambda^{\\,}_X,\\lambda^{\\,}_q,\\lambda^{\\prime}_q}\nD^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\,}_X; \\lambda^{\\,}_q} \\>\\>\n\\rho^{\\,}_{\\lambda^{\\,}_q \\lambda^{\\prime}_q} \\>\\>\nD^*_{\\lambda^{\\,}_h \\lambda^{\\,}_X; \\lambda^{\\,}_q} \\,,\n\\label{rhohp1}\n\\end{equation}\nwhere $\\rho(q)$ is the quark $q$ helicity density matrix related to\n$\\rho(q\\bar q)$ by\n\\begin{equation}\n\\rho^{\\,}_{\\lambda^{\\,}_q \\lambda^{\\prime}_q} = \\sum_{\\lambda^{\\,}_{\\bar q}}\n\\rho^{\\,}_{\\lambda^{\\,}_q, \\lambda^{\\,}_{\\bar q}; \\lambda^{\\prime}_q,\n\\lambda^{\\,}_{\\bar q}} (q\\bar q) \\,.\n\\end{equation}\n\nIn such a case angular momentum conservation for the collinear quark \nfragmentation requires $\\lambda^{\\,}_q = \\lambda^{\\,}_h + \\lambda^{\\,}_X$;\nthe Standard Model computation of $\\rho(q)$ gives only diagonal\nterms [$\\rho^{\\,}_{++}(q) = \\rho^{\\,}_{+-;+-}(q\\bar q)$, \n$\\rho^{\\,}_{--}(q) = \\rho^{\\,}_{-+;-+}(q\\bar q)$], and one ends up\nwith the usual probabilistic expression\n\\begin{equation}\n\\rho^{\\,}_{\\lambda^{\\,}_h \\lambda^{\\,}_h}(h) \n= {1\\over N_h} \\sum_{q, \\lambda^{\\,}_q}\n\\rho^{\\,}_{\\lambda^{\\,}_q \\lambda^{\\,}_q} \\>\\>\nD_{q,\\lambda^{\\,}_q}^{h, \\lambda^{\\,}_h} \\,,\n\\label{rhohp2}\n\\end{equation}\nwhere $D_{q,\\lambda^{\\,}_q}^{h, \\lambda^{\\,}_h}$ is the polarized fragmentation \nfunction of a $q$ with helicity $\\lambda^{\\,}_q$ into a hadron $h$ with \nhelicity $\\lambda^{\\,}_h$. Off-diagonal elements of $\\rho(h)$ are all zero.\n\n\\goodbreak\n\\vskip 12pt\n\\noindent\n{\\mbox{\\boldmath $e^- e^+ \\to BX, \\> (S_{_B} = 1\/2, \\> p_{_T}\/\\sqrt s \\to 0)$}}\n\\vskip 6pt\n\\nobreak\nLet us consider first the case in which $h$ is a spin 1\/2 baryon. It was\nshown in Ref. 
\\cite{aamr} that in such a case the coherent quark \nfragmentation only induces small corrections to the usual incoherent \ndescription \n\\begin{eqnarray}\n\\rho^{\\,}_{++}(B) &=& {1 \\over N_{_B}} \\sum_q \n\\left[ \\rho^{\\,}_{+-;+-}(q\\bar q) \\> D_{q,+}^{B,+} + \\rho^{\\,}_{-+;-+}(q\\bar q) \\> \nD_{q,-}^{B,+} \\right] \\\\\n\\rho^{\\,}_{+-}(B) &=& {\\cal O} \\left[ \\left(\n{p_{_T} \\over z \\sqrt s} \\right) \\right] \\label{rho+-b}\\,.\n\\end{eqnarray}\n\nThat is, the diagonal elements of $\\rho(B)$ are the same as those given by \nthe usual probabilistic formula (\\ref{rhohp2}), with small corrections\nof the order of $(p_{_T}\/z\\sqrt s)^2$, while off-diagonal elements are \nof the order $(p_{_T}\/z\\sqrt s)$ and vanish in the $p_{_T}\/\\sqrt s \\to 0$\nlimit.\n\nThe matrix elements of $\\rho(B)$ are related to the longitudinal ($P_z$)\nand transverse ($P_y$) polarization of the baryon:\n\\begin{equation}\nP_z = 2\\rho_{++} - 1, \\quad\\quad\\quad\\quad P_y = -2\\,{\\rm Im} \\rho_{+-} \\,.\n\\end{equation}\nSome data are available on $\\Lambda$ polarization, both longitudinal and\ntransverse, from ALEPH Collaboration \\cite{aleph} and they do agree with the\nabove equations. In particular the transverse polarization, at \n$\\sqrt s = M_{_Z}$, $p_{_T} \\simeq 0.5$ GeV\/$c$ and $z \\simeq 0.5$ is \nindeed of the order 1\\%, as expected from Eq. 
(\\ref{rho+-b}).\n\n\\goodbreak\n\\vskip 12pt\n\\noindent\n{\\mbox{\\boldmath $e^- e^+ \\to VX, \\> (S_{_V} = 1, \\> p_{_T}\/\\sqrt s \\to 0)$}}\n\\vskip 6pt\n\\nobreak\nIn case of final spin 1 vector mesons one has, always in the limit of small\n$p_{_T}$ \\cite{akp}, \\cite{abmq} \n\\begin{eqnarray}\n\\rho^{\\,}_{00}(V) &=& {1 \\over N_{_V}} \\sum_q D_{q,+}^{V,0} \\\\\n\\rho^{\\,}_{11}(V) &=& {1 \\over N_{_V}} \\sum_q\n\\left[ \\rho_{+-;+-}(q\\bar q) D_{q,+}^{V,1} + \\rho_{-+;-+}(q\\bar q) D_{q,-}^{V,1}\n\\right] \\\\\n\\rho^{\\,}_{1,-1}(V) &=& {1 \\over N_{_V}} \\sum_{q,X}\nD^{\\,}_{10;+-} \\> D^*_{-10;-+} \\> \\rho^{\\,}_{+-;+-}(q\\bar q) \\,.\n\\end{eqnarray}\n\nAgain, the diagonal elements have the usual probabilistic expression; \nhowever, there is now an off-diagonal element, $\\rho^{\\,}_{1,-1}$, \nwhich may survive even in the $p_{_T}\/\\sqrt s \\to 0$ limit. In the sequel\nwe shall concentrate on it. Let us first notice that, in the collinear limit,\none has\n\\begin{eqnarray}\nD^{V,0}_{q,+} &=& \\sum_X \\vert D^{\\,}_{0-1;+-} \\vert^2 = D^{V,0}_{q,-} \\\\\nD^{V,1}_{q,+} &=& \\sum_X \\vert D^{\\,}_{10;+-} \\vert^2 = D^{V,-1}_{q,-} \\\\\nD^{V,1}_{q,-} &=& \\sum_X \\vert D^{\\,}_{12;+-} \\vert^2 = D^{V,-1}_{q,+} \\,,\n\\end{eqnarray}\nwith $ D_q^V = D^{V,0}_{q,+} + D^{V,1}_{q,+} + D^{V,-1}_{q,+}$ and \n$N_{_V} = \\sum_q D_q^V$. We also notice that the two fragmentation \namplitudes appearing in Eq. (30) are related by parity and their product\nis always real. $\\rho^{\\,}_{00}$ and $\\rho^{\\,}_{1,-1}$ can be measured\nthrough the angular distribution of two body decays of $V$. 
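For instance, for a vector meson decaying into two pseudoscalar mesons \n($K^{*0} \\to K\\pi$, $\\phi \\to K\\bar K$, $D^* \\to D\\pi$) the standard spin \ndensity matrix formalism gives, in the helicity frame of the $V$ and keeping \nonly the matrix elements which survive in our case, the decay angular \ndistribution\n\\begin{equation}\nW(\\theta^*,\\phi^*) = {3\\over 4\\pi} \\left[ {1\\over 2}\\left(1-\\rho^{\\,}_{00}\\right)\n+ {1\\over 2}\\left(3\\rho^{\\,}_{00}-1\\right)\\cos^2\\theta^*\n- \\rho^{\\,}_{1,-1}\\,\\sin^2\\theta^* \\cos 2\\phi^* \\right] \\,,\n\\end{equation}\nso that $\\rho^{\\,}_{00}$ can be extracted from the $\\cos\\theta^*$ dependence \nand $\\rho^{\\,}_{1,-1}$ from the $\\cos 2\\phi^*$ modulation.\n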
\n\nIn order to give numerical estimates of $\\rho^{\\,}_{1,-1}$ we make some\nplausible assumptions\n\\begin{equation}\nD^{h,1}_{q,-} = D^{h,-1}_{q,+} = 0 \\quad\\quad\\quad\nD^{h,0}_{q,+} = \\alpha^V_q \\> D^{h,1}_{q,+} \\label{ass} \\,.\n\\end{equation}\nThe first of these assumptions simply means that quarks with helicity\n1\/2 ($-1\/2$) cannot fragment into vector mesons with helicity $-1$ ($+1$).\nThis is true for valence quarks assuming vector meson wave functions \nwith no orbital angular momentum, like in $SU(6)$. The second assumption \nis also true in $SU(6)$ with $\\alpha^V_q = 1\/2$ for \nany valence $q$ and $V$. Rather than taking \n$\\alpha^V_q = 1\/2$ we prefer to relate the value of $\\alpha^V_q$ to the \nvalue of $\\rho^{\\,}_{00}(V)$ which can be or has been measured.\nIn fact, always in the $p_{_T} \\to 0$ limit, one has \\cite{abmq}\n\\begin{equation}\n\\rho^{\\,}_{00}(V) = {\\sum_q \\alpha^V_q \\, D^{h,1}_{q,+}\n\\over \\sum_q \\> (1+\\alpha^V_q) \\, D^{h,1}_{q,+}} \\,\\cdot\n\\label{rho00}\n\\end{equation}\nIf $\\alpha^V_q$ is the same for all valence quarks in $V$ \n($\\alpha^V_q = \\alpha^V$) \none has, for the valence quark contribution:\n\\begin{equation}\n\\alpha^V = {\\rho^{\\,}_{00}(V) \\over 1 - \\rho^{\\,}_{00}(V)} \\,\\cdot\n\\label{alrho}\n\\end{equation}\n\nFinally, one obtains \\cite{abmq}\n\\begin{equation}\n\\rho^{\\,}_{1,-1}(V) \\simeq [1 - \\rho^{\\,}_{0,0}(V)] \\,\n{\\sum_q \\, D^{V,1}_{q,+} \\> \\rho_{+-;-+}(q\\bar q) \n\\over \\sum_q \\, D^{V,1}_{q,+}} \\,\\cdot\n\\label{rho1-1tss}\n\\end{equation}\n\nWe shall now consider some specific cases in which we expect \nEq. 
(\\ref{rho1-1tss}) to hold; let us remind once more that our \nconclusions apply to spin 1 vector mesons produced in \n$e^- e^+ \\to q \\bar q \\to V+X$ processes in the limit of small $p_{_T}$\nand large $z$, {\\it i.e.}, to vector mesons produced in two jet events\n($e^- e^+ \\to q\\bar q$) and collinear with one of them ($p_{_T} = 0$), \nwhich is the jet generated by a quark which is a valence quark for the\nobserved vector meson (large $z$). These conditions should be met \nmore easily in the production of heavy vector mesons. \n\nAmong other results one obtains \\cite{abmq}:\n\\begin{eqnarray}\n\\rho^{\\,}_{1,-1}(D^{*}) &\\simeq& [1 - \\rho^{\\,}_{0,0}(D^{*})] \\>\n\\rho_{+-;-+}(c \\bar c) \\label{rhoda} \\\\\n\\rho^{\\,}_{1,-1}(\\phi) &\\simeq& [1 - \\rho^{\\,}_{0,0}(\\phi)] \\>\n\\rho_{+-;-+}(s\\bar s) \\label{rhopa} \\\\\n\\rho^{\\,}_{1,-1}(K^{*0}) &\\simeq& {1\\over 2} \\> [1 - \\rho^{\\,}_{0,0}(K^{*0})] \n\\> [\\rho_{+-;-+}(d\\bar d) + \\rho_{+-;-+}(s\\bar s)] \\,. \\label{rhok0a} \n\\end{eqnarray}\n\nEqs. (\\ref{rhoda})-(\\ref{rhok0a}) show how the value of $\\rho^{\\,}_{1,-1}(V)$\nare simply related to the off-diagonal helicity density matrix element \n$\\rho_{+-;-+}(q\\bar q)$ of the $q\\bar q$ pair created in the elementary \n$e^- e^+ \\to q\\bar q$ process; such off-diagonal elements would not appear \nin the incoherent independent fragmentation of a single quark, yielding \n$\\rho^{\\,}_{1,-1}(V)=0$.\n\nBy inserting into the above equations the value of $\\rho^{\\,}_{00}$ when\navailable \\cite{opal} and the expressions of $\\rho^{\\,}_{+-;-+}$,\nEq. 
(18), one has:\n\\begin{eqnarray}\n\\rho^{\\,}_{1,-1}(D^{*}) &\\simeq& -(0.216 \\pm 0.007) \\ \n{\\sin^2\\theta \\over 1 + \\cos^2\\theta} \\\\\n\\rho^{\\,}_{1,-1}(\\phi) &\\simeq& -(0.078 \\pm 0.014) \\ \n{\\sin^2\\theta \\over 1 + \\cos^2\\theta} \\\\\n\\rho^{\\,}_{1,-1}(K^{*0}) &\\simeq& -0.170 \\ \n[1- \\rho^{\\,}_{0,0}(K^{*0})] \\ {\\sin^2\\theta \\over 1 + \\cos^2\\theta} \\,\\cdot\n\\end{eqnarray}\nFinally, in case one collects all mesons produced at different angles in\nthe full available $\\theta$ range (say $\\alpha < \\theta < \\pi -\\alpha, \n\\> |\\cos\\theta| < \\cos\\alpha$) an average should be taken in $\\theta$, \nweighting the different values of $\\rho^{\\,}_{1,-1}(\\theta)$ with the \ncross-section for the $e^-e^+ \\to V+X$ process; this gives \\cite{abmq}:\n\\begin{eqnarray}\n\\langle \\rho^{\\,}_{1,-1}(D^{*}) \\rangle_{[\\alpha, \\pi-\\alpha]}\n&\\simeq& -(0.216 \\pm 0.007) \\ \n{3 - \\cos^2\\alpha \\over 3 + \\cos^2\\alpha} \\label{rhodn} \\\\\n\\langle \\rho^{\\,}_{1,-1}(\\phi) \\rangle_{[\\alpha, \\pi-\\alpha]}\n&\\simeq& -(0.078 \\pm 0.014) \\ \n{3 - \\cos^2\\alpha \\over 3 + \\cos^2\\alpha} \\label{rhopn} \\\\\n\\langle \\rho^{\\,}_{1,-1}(K^{*0}) \\rangle_{[\\alpha, \\pi-\\alpha]}\n&\\simeq& -0.170 \\ [1- \\rho^{\\,}_{0,0}(K^*)] \\ \n{3 - \\cos^2\\alpha \\over 3 + \\cos^2\\alpha} \\,\\cdot \\label{rhok0n} \n\\end{eqnarray}\n\nThese results have to be compared with data \\cite{opal}\n\\begin{eqnarray}\n\\rho^{\\,}_{1,-1}(D^*) &=& -0.039 \\pm 0.016 \\quad\\quad {\\rm for}\n\\quad\\quad z > 0.5 \\quad\\quad \\cos\\alpha = 0.9 \\\\ \n\\rho^{\\,}_{1,-1}(\\phi) &=& -0.110 \\pm 0.070 \\quad\\quad {\\rm for}\n\\quad\\quad z > 0.7 \\quad\\quad \\cos\\alpha = 0.9 \\\\ \n\\rho^{\\,}_{1,-1}(K^{*0}) &=& -0.090 \\pm 0.030 \\quad\\quad {\\rm for}\n\\quad\\quad z > 0.3 \\quad\\quad \\cos\\alpha = 0.9 \n\\end{eqnarray}\nwhich show a good qualitative agreement with the theoretical\npredictions. 
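As a simple arithmetic cross-check of the numbers used above (taking \n$\\sin^2\\theta_{_W} \\simeq 0.223$, an effective value of our choice, in Eq. (\\ref{cc})): \none finds $g_{_V}^{u} \\simeq 0.203$ and \n$(g^2_{_V} - g^2_{_A})_u \/ [2(g^2_{_V}+g^2_{_A})_u] \\simeq -0.36$, and similarly \n$\\simeq -0.17$ for down-type quarks, in agreement with Eq. (\\ref{rho+-ap}); \nfor $\\cos\\alpha = 0.9$ the angular average factor is \n$(3-\\cos^2\\alpha)\/(3+\\cos^2\\alpha) = 2.19\/3.81 \\simeq 0.57$, so that, for \ninstance, $\\langle \\rho^{\\,}_{1,-1}(\\phi) \\rangle \\simeq -0.045$, compatible \nwithin errors with the measured $-0.110 \\pm 0.070$.\n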
We notice that while the mere fact that $\\rho_{1,-1}$ differs from zero is due to a coherent fragmentation of the $q\\bar q$ pair, the actual numerical values depend on the Standard Model coupling constants; for example, $\\rho_{1,-1}$ would be positive at smaller energies, at which one-photon exchange dominates, while it is negative at LEP energy, where one-$Z$ exchange dominates. $\\rho_{1,-1}$ also has a peculiar dependence on the meson production angle, being small at small and large angles and maximal at $\\theta = \\pi\/2$. Such angular dependence has been tested in the case of $K^{*0}$ production, assuming no dependence of $\\rho_{00}$ on $\\theta$, and indeed one has \\cite{opal}, in agreement with Eqs. (\\ref{rhok0a}) and (\\ref{rho+-ap}),\n\\begin{equation}\n\\left[ {\\rho^{\\,}_{1,-1} \\over 1- \\rho^{\\,}_{00}} \\right]_{|\\cos\\theta|<0.5} \n\\cdot \n\\left[ {\\rho^{\\,}_{1,-1} \\over 1- \\rho^{\\,}_{00}} \\right]^{-1}\n_{|\\cos\\theta|>0.5} = 1.5 \\pm 0.7\n\\end{equation}\n\nThese results are encouraging; it would be interesting to have more, and more detailed, data, possibly with a selection of final hadrons with the required features for our results to hold. It would also be interesting to test the coherent fragmentation of quarks in other processes, like $\\gamma\\gamma \\to VX$, $pp \\to D^*X$ and $\\ell p \\to VX$. The first two processes are similar to $e^-e^+ \\to VX$ in that a $q\\bar q$ pair is created which then fragments coherently into the observed vector meson; one assumes that the dominating elementary process in $pp \\to D^*X$ is $gg \\to c \\bar c$. In both these cases one has for $\\rho^{\\,}_{+-;-+}(q\\bar q)$ the same value as in Eq. (\\ref{rhoqqelm}), so that one expects a {\\it positive} value of $\\rho^{\\,}_{1,-1}(V)$. 
\n\nIn the last process, the production of vector mesons in DIS, the quark fragmentation is in general a more complicated interaction of the quark with the remnants of the proton, and it might be more difficult to obtain numerical predictions. However, if one observes $D^*$ mesons one can assume, or select kinematical regions in which, the underlying elementary interaction is $\\gamma^* g \\to c\\bar c$: again, one would have the same $\\rho^{\\,}_{+-;-+}(c\\bar c)$ as in Eq. (\\ref{rhoqqelm}), and one would expect a positive value of $\\rho^{\\,}_{1,-1}(D^*)$. It would indeed be interesting to perform these simple tests of coherent fragmentation effects.\n\n\\vskip 18pt\n\\noindent\n{\\bf Acknowledgements}\n\\vskip 6pt\n\\noindent\nI would like to thank the organizers of the Workshop and DESY for financial support.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\n\\bigskip\n\nGiven an open bounded connected domain $\\Omega \\subset \\mathbb{R}^{N}$ with a sufficiently regular (locally Lipschitz) boundary $\\partial \\Omega $, let us consider the integral\n\\begin{equation}\nI\\left( u\\right) :=\\int_{\\Omega }W\\left( \\nabla u\\left( x\\right) \\right) \\,dx\n\\label{Functional}\n\\end{equation}\nto be minimized on a class of Sobolev functions $u:\\Omega \\rightarrow \\mathbb{R}^{d}$ subject to boundary conditions of a kind to be described later. Throughout the paper we assume the integrand $W:\\mathbb{R}^{d\\times N}\\rightarrow \\mathbb{R}\\cup \\left\\{ +\\infty \\right\\} $ to be \\emph{polyconvex}. 
This means that the representation\n\\begin{equation*}\nW\\left( \\xi \\right) =g\\left( \\mathbb{T}\\left( \\xi \\right) \\right),\\;\\; \\xi \\in \\mathbb{R}^{d\\times N},\n\\end{equation*}\nholds for some convex function $g:\\mathbb{R}^{\\tau \\left( d,N\\right) }\\rightarrow \\mathbb{R}\\cup \\left\\{ +\\infty \\right\\} $ with\n\\begin{equation*}\n\\tau ( d,N) :=\\underset{s=1}{\\overset{d\\wedge N}{\\dsum }}\\varkappa (s) ,\\;\\; \\varkappa (s) :=\\dbinom{d}{s}\\dbinom{N}{s}=\\frac{d!N!}{(s!)^{2}( d-s)!( N-s)!},\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\mathbb{T}\\left(\\xi \\right) :=\\left( \\mathrm{Adj}_{1}\\xi ,\\mathrm{Adj}_{2}\\xi ,\\mathrm{Adj}_{3}\\xi ,\\dots ,\\mathrm{Adj}_{d\\wedge N}\\xi \\right),\\;\\; \\xi \\in \\mathbb{R}^{d\\times N},\n\\end{equation*}\nand $\\mathrm{Adj}_{k}\\xi $ is the vector of all \\emph{minors} of the matrix $\\xi $ of order $k=1,2,\\dots ,d\\wedge N$, respectively. In particular, $\\mathrm{Adj}_{1}\\xi =\\xi $ and $\\mathrm{Adj}_{d}\\xi =\\det \\,\\xi $ whenever $d=N$.\n\nIt is known that, under strong coercivity assumptions on $W$ ensuring weak convergence of the minors of the gradients along a minimizing sequence, the functional $I$ attains its minimum on $\\bar{u}\\left( \\cdot \\right) +\\mathbf{W}_{0}^{1,p}\\left( \\Omega ;\\mathbb{R}^{d}\\right) $, $p\\geq 1$. We refer to the fundamental work by J. 
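To make the count of minors concrete, here is a small Python check (my illustration, not from the paper) that the binomial and factorial forms of $\\varkappa(s)$ agree and that, e.g., $\\tau(3,3)=9+9+1=19$ (the $9$ matrix entries, the $9$ cofactors and the determinant):

```python
from math import comb, factorial

def tau(d, n):
    # number of minors of all orders of a d x n matrix:
    # sum over s of C(d, s) * C(n, s)
    return sum(comb(d, s) * comb(n, s) for s in range(1, min(d, n) + 1))

def tau_factorial(d, n):
    # the equivalent factorial form of varkappa(s), summed over s
    total = 0
    for s in range(1, min(d, n) + 1):
        total += factorial(d) * factorial(n) // (
            factorial(s) ** 2 * factorial(d - s) * factorial(n - s)
        )
    return total

assert tau(3, 3) == tau_factorial(3, 3) == 19
```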
Ball \\cite{B}, motivated by problems coming from nonlinear elasticity, and to \\cite{ADM, MQY, My} for further improvements.\n\nThe lower semicontinuity of general polyconvex integrands with respect to weak convergence in $W^{1,p}(\\Omega;\\mathbb{R}^{d}),$ $\\Omega\\subset \\mathbb{R}^{N},$ has been the subject of many investigations. Namely, Marcellini showed in \\cite{M} that this property holds whenever $p\\geq N-1$, while Mal\\'{y} in \\cite{My} exhibited a counterexample for $p<N-1$. The constraint $\\det \\,\\nabla u\\left( x\\right) >0$ means that the minimum is searched among the deformations preserving orientation, while $\\det \\,\\nabla u\\left( x\\right) =1$ refers to the case of an incompressible elastic body.\n\nOne of the possible applications of the above variational problem concerns plastic surgery, namely breast reduction, where we deal with a very elastic and soft tissue. Some recent publications (see, e.g., \\cite{Ay, D, S1, PY, S2}) were devoted to the mathematical setting of the related problems and to their numerical simulations. Medical examinations allow one to consider the involved tissue as a neo-Hookean compressible material (see \\cite{Z}). We have a more precise model when the strain energy is defined by the integral (\\ref{Functional}) with the density $W:\\mathbb{R}^{3\\times 3}\\rightarrow \\mathbb{R}$,\n\\begin{eqnarray}\n&&W\\left( \\xi \\right) :=\\mu \\left( \\limfunc{tr}\\left( \\xi \\cdot \\xi ^{T}\\right) -3-2\\ln \\left( \\det \\xi \\right) \\right) \\notag \\\\\n&&+\\lambda \\left( \\det \\xi -1\\right) ^{2}+\\beta \\limfunc{tr}\\,\\left( \\mathrm{Adj\\,}\\xi \\cdot \\mathrm{Adj\\,}\\xi ^{T}\\right) \\,,\n\\label{Int_ex}\n\\end{eqnarray}\nwhere \"$\\limfunc{tr}$\" means the trace of a matrix, $\\mathrm{Adj\\,}\\xi :=\\mathrm{Adj}_{2}\\xi $, and the symbol \"$T$\" stands for the matrix transposition. 
One of the steps of the (breast reduction) surgery is the suturing, which mathematically can be seen as an identification of points of some surface piece $\\Gamma ^{+}\\subset \\partial \\Omega $ with points of another one, $\\Gamma ^{-}\\subset \\partial \\Omega $. Denoting the respective correspondence between the points of $\\Gamma ^{+}$ and $\\Gamma ^{-}$ by $\\sigma $, we are led to a new type of constraint,\n\\begin{equation}\nu(x) =(u\\circ \\sigma)(x), \\;x\\in \\Gamma ^{+}, \\label{sutur_cond}\n\\end{equation}\ncalled the \\emph{knitting boundary condition}. Let us note that the one-to-one mapping $\\sigma $ is not \\emph{a priori} given and should be chosen so as to guarantee the minimum value of the functional (\\ref{Functional}). In other words, a minimizer of (\\ref{Functional}) (if any) should be a pair $\\left( u,\\sigma \\right) $ where $u\\in \\mathbf{W}^{1,p}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $, $p\\geq 1$, and $\\sigma :\\Gamma ^{+}\\rightarrow \\Gamma ^{-}$ is sufficiently regular. We set the natural hypothesis that $\\sigma $ and its inverse $\\sigma ^{-1}$ are Lipschitz transformations (with the same Lipschitz constant $L>0$). Practically this means that the sutured tissue cannot be stretched or compressed too much.\n\nMotivated by the problem coming from plastic surgery, we will consider just the case $p=2$ and $d=N=3$, although the results remain true in the case $p>2$ and arbitrary $d=N\\geq 2$ as well.\n\nThe paper is organized as follows. In the next section we give the exact setting of the variational problem together with the main hypotheses on the integrand $W$. For ease of reference we also collect here some important facts regarding Sobolev functions. In Section 3 we first justify the well-posedness of the problem by showing that the composed function from the knitting condition (\\ref{sutur_cond}) belongs to the respective Lebesgue class. 
Afterwards, we prove existence of a minimizer as an accumulation point of an arbitrary minimizing sequence (the so-called \\emph{direct method}, see \\cite{Dac}). The paper is concluded with a necessary optimality condition for the given problem (see Section 4), allowing one to construct effective numerical algorithms, which can be successfully applied in medical practice.\n\n\\bigskip\n\n\\section{\\protect\\bigskip Main hypotheses and auxiliary results}\n\nIn what follows we fix a nonempty open bounded and connected set $\\Omega \\subset \\mathbb{R}^{3}$ whose boundary $\\partial \\Omega $ is assumed to be locally Lipschitz (see, e.g., \\cite[p. 354]{L}). By the symbol $\\mathcal{L}^{m}$ ($dx$) we denote the Lebesgue measure in the space $\\mathbb{R}^{m}$, $m=2,3$, while $\\mathcal{H}^{2}$ means the two-dimensional \\emph{Hausdorff measure} (see, e.g., \\cite{F}).\n\nLet us divide the surface $\\partial \\Omega $ into several parts $\\Gamma _{i}$, $i=1,2,3,4$, in such a way that $\\mathcal{H}^{2}\\left( \\Gamma _{i}\\cap \\Gamma _{j}\\right) =0$ for $i\\neq j$. Moreover, we set $\\Gamma _{4}:=\\Gamma ^{+}\\cup \\Gamma ^{-}$, where $\\Gamma ^{\\pm }\\subset \\partial \\Omega $ with $\\mathcal{H}^{2}\\left( \\Gamma ^{\\pm }\\right) >0$ and $\\mathcal{H}^{2}\\left( \\Gamma ^{+}\\cap \\Gamma ^{-}\\right) =0$ are also given.\n\nSuppose that $W:\\mathbb{R}^{3\\times 3}\\rightarrow \\mathbb{R}$ is a \\emph{polyconvex function} satisfying the \\emph{growth assumption}\n\\begin{equation}\nW(\\xi) \\geq c_{0}+c_{1}\\left\\vert \\xi \\right\\vert ^{2}+c_{2}\\left\\vert \\mathrm{Adj\\,}\\xi \\right\\vert ^{2}+c_{3}\\left(\\det \\xi \\right)^{2}, \\;\\; \\xi \\in \\mathbb{R}^{3\\times 3},\n\\label{growth_cond}\n\\end{equation}\nwhere $c_{0}\\in \\mathbb{R}$ and $c_{i}>0$, $i=1,2,3$, are some given constants. 
Here and in what follows, by $\\left\\vert \\cdot \\right\\vert $ we denote the norm of both a vector in $\\mathbb{R}^{n}$ and a $3\\times 3$-matrix.\n\nTaking into account that $\\limfunc{tr}\\left( \\xi \\cdot \\xi ^{T}\\right) =\\left\\vert \\xi \\right\\vert ^{2}$ for each matrix $\\xi \\in \\mathbb{R}^{3\\times 3}$, we see that the integrand (\\ref{Int_ex}) satisfies the above properties. Indeed, it is convex as a function of $\\mathbb{T}\\left( \\xi \\right) $, being represented as a sum of three terms, which are convex w.r.t. $\\xi $, $\\det \\xi $ and $\\mathrm{Adj\\,}\\xi $, respectively. Furthermore,\n\\begin{equation*}\nW(\\xi) =-3\\mu +\\lambda +\\mu \\left\\vert \\xi \\right\\vert ^{2}+\\beta \\left\\vert \\mathrm{Adj\\,}\\xi \\right\\vert ^{2}+f( \\det \\xi), \\;\\xi \\in \\mathbb{R}^{3\\times 3},\n\\end{equation*}\nwhere the function\n\\begin{equation*}\nf\\left( t\\right) :=\\lambda t^{2}-2\\lambda t-2\\mu \\ln t, \\;t>0,\n\\end{equation*}\nis bounded from below by some (negative) constant.\n\nSince on various pieces of the surface $\\partial \\Omega $ the boundary conditions are structurally different (some part of $\\partial \\Omega $ can even be left free), to set the problem we use the notion of the \\emph{trace operator}, which associates to each $u\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ a function $\\limfunc{Tr}u$ defined on the boundary $\\partial \\Omega,$ which can be interpreted as the \"\\emph{boundary values}\" of $u$. We refer to \\cite[pp. 465-474]{L}, where the existence and uniqueness of the trace operator were proved for scalar Sobolev functions $u \\in \\mathbf{W}^{1,p}(\\Omega), p>1.$ For vector-valued functions $u: \\Omega\\rightarrow \\mathbb{R}^{3}$, instead, we can argue componentwise. 
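The decomposition above is easy to verify numerically; the following sketch (my illustration, with arbitrary placeholder constants $\\mu,\\lambda,\\beta$) represents $\\mathrm{Adj}_2\\xi$ through the cofactor matrix $\\det(\\xi)\\,\\xi^{-T}$, which has the same Frobenius norm as the matrix of $2\\times 2$ minors, and compares the density (\\ref{Int_ex}) with the split form:

```python
import numpy as np

mu, lam, beta = 1.3, 0.7, 0.5  # placeholder material constants

def adj2(xi):
    # cofactor matrix det(xi) * inv(xi).T; same Frobenius norm
    # as the matrix of 2x2 minors of an invertible xi
    return np.linalg.det(xi) * np.linalg.inv(xi).T

def W(xi):
    # neo-Hookean density (Int_ex)
    t = np.linalg.det(xi)
    return (mu * (np.trace(xi @ xi.T) - 3 - 2 * np.log(t))
            + lam * (t - 1) ** 2
            + beta * np.trace(adj2(xi) @ adj2(xi).T))

def W_split(xi):
    # decomposition W = -3*mu + lam + mu*|xi|^2 + beta*|Adj xi|^2 + f(det xi)
    t = np.linalg.det(xi)
    f = lam * t ** 2 - 2 * lam * t - 2 * mu * np.log(t)
    return (-3 * mu + lam + mu * np.linalg.norm(xi) ** 2
            + beta * np.linalg.norm(adj2(xi)) ** 2 + f)

rng = np.random.default_rng(0)
xi = np.eye(3) + 0.05 * rng.standard_normal((3, 3))  # small perturbation, det > 0
assert abs(W(xi) - W_split(xi)) < 1e-10
```

The two forms agree term by term: $\\lambda(\\det\\xi-1)^2$ contributes the constant $\\lambda$ plus $\\lambda t^2-2\\lambda t$ inside $f$.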
So, applying \\cite[Theorem 15.23]{L}, we define the trace as the linear and bounded operator $\\limfunc{Tr}:\\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) \\rightarrow \\mathbf{L}^{2}(\\partial \\Omega ;\\mathbb{R}^{3})$ satisfying the following properties:\n\n\\begin{enumerate}\n\\item $\\limfunc{Tr}u\\left( x\\right) =u\\left( x\\right) $, $x\\in \\partial \\Omega $, whenever $u\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) \\cap \\mathbf{C}\\left( \\overline{\\Omega };\\mathbb{R}^{3}\\right) $;\n\n\\item for each $u\\in \\mathbf{W}^{1,2}(\\Omega;\\mathbb{R}^3)$ and any test function $\\varphi \\in \\mathbf{C}^{1}(\\overline{\\Omega };\\mathbb{R}^{3})$ the equalities\n\\begin{equation*}\n\\int_{\\Omega}u_{j}\\frac{\\partial \\varphi_{j}}{\\partial x_{i}}dx=-\\int_{\\Omega}\\varphi_{j}\\frac{\\partial u_{j}}{\\partial x_{i}}dx+\\int_{\\partial \\Omega }\\varphi_{j}\\limfunc{Tr}(u_j)\\nu_i d\\mathcal{H}^{2}\n\\end{equation*}\nhold for each $i, j=1,2,3$, where $\\nu:=(\\nu_1,\\nu_2,\\nu_3)^{T}$ means the unit outward normal to $\\partial \\Omega $.\n\\end{enumerate}\n \nIn addition to the properties above, observe that the trace operator gives a compact embedding into the space $\\mathbf{L}^{2}\\left( \\partial \\Omega ;\\mathbb{R}^{3}\\right)$, which will be crucial to obtain the main result in Section 3. Namely, the following proposition holds.\n\n\\begin{proposition}\n\\label{traceconv}Let $\\Omega \\subset \\mathbb{R}^{3}$ be an open bounded set with locally Lipschitz boundary. 
Then for each $\\left\\{ u_{n}\\right\\} \\subset \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ converging to $u$ weakly in $\\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ the sequence of traces $\\left\\{ \\limfunc{Tr}u_{n}\\right\\} $ converges to $\\limfunc{Tr}u$ strongly in $\\mathbf{L}^{2}\\left( \\partial \\Omega ;\\mathbb{R}^{3}\\right)$.\n\\end{proposition}\n\nThe proof is based essentially on the following lemma giving a nice estimate for the surface integral of the trace operator.\n\n\\begin{lemma}\n\\label{Lemmatrace}Let $\\Omega \\subset \\mathbb{R}^{3}$ be as in Proposition \\ref{traceconv}. Then there exists a constant $C>0$ such that \n\\begin{equation}\n\\int_{\\partial \\Omega }\\left\\vert \\limfunc{Tr}u\\right\\vert ^{2}d\\mathcal{H}^{2}\\leq C\\left( \\frac{1}{\\varepsilon }\\int_{\\Omega }\\left\\vert u\\right\\vert ^{2}dx+\\varepsilon \\int_{\\Omega }\\left\\vert \\nabla u\\right\\vert ^{2}dx\\right) \\label{Estimate_tr}\n\\end{equation}\nfor any $\\varepsilon >0$ and any $u\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) .$\n\\end{lemma}\n\n\\begin{proof}\nGiven $x\\in \\partial \\Omega ,$ due to the Lipschitz hypothesis there exists a neighborhood $U_{x}$ of $x$ such that $U_{x}\\cap \\partial \\Omega $ can be represented as the graph of a Lipschitz function w.r.t. some (local) coordinates. Without loss of generality, we may assume that \n\\begin{equation*}\nU_{x}\\cap \\Omega =\\left\\{ \\left( y^{\\prime },y_{3}\\right) :f_{x}\\left( y^{\\prime }\\right) <y_{3}<f_{x}\\left( y^{\\prime }\\right) +\\delta _{x},\\; y^{\\prime }\\in G_{x}\\right\\},\n\\end{equation*}\nwhere $\\delta _{x}>0,$ $G_{x}\\subset \\mathbb{R}^{2}$ is an open set in the space of the first two coordinates and $f_{x}:G_{x}\\rightarrow \\mathbb{R}$ is a Lipschitz function. 
By compactness there exists a finite number of points $x^{i}\\in \\partial \\Omega \\,,~i=1,\\dots ,q,$ such that \n\\begin{equation*}\n\\partial \\Omega =\\tbigcup_{i=1}^{q}\\left( U_{x^{i}}\\cap \\partial \\Omega \\right) .\n\\end{equation*}\nSet $\\delta _{i}:=\\delta _{x^{i}},~U_{i}:=U_{x^{i}},~G_{i}:=G_{x^{i}}$ and $f_{i}:=f_{x^{i}}$, $i=1,\\dots ,q$. Denote by $L>0$ the biggest Lipschitz constant of the functions $f_{i}.$\n\nLet us choose $\\varepsilon <\\min \\left\\{ \\delta _{i}:i=1,\\dots ,q\\right\\} $ and consider first a function $u\\in \\mathbf{C}^{1}\\left( \\overline{\\Omega }\\right) \\cap \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) .$ Given $i\\in \\left\\{ 1,\\dots ,q\\right\\} ,$ by the Newton-Leibniz formula, for each $x^{\\prime }\\in G_{i}$ and $x_{3}\\in \\mathbb{R}$ with $f_{i}\\left( x^{\\prime }\\right) \\leq x_{3}<f_{i}\\left( x^{\\prime }\\right) +\\varepsilon $ one estimates the boundary value $\\left\\vert u\\left( x^{\\prime },f_{i}\\left( x^{\\prime }\\right) \\right) \\right\\vert ^{2}$ through the values of $u$ and $\\nabla u$ along the third coordinate; integrating over $G_{i}$, summing over the covering and passing to a general $u$ by density yields (\\ref{Estimate_tr}).\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{traceconv}]\nSince the sequence $\\left\\{ u_{n}\\right\\} $ converges weakly, it is bounded in $\\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $, so there exists $M>0$ with \n\\begin{equation*}\n\\int_{\\Omega }\\left\\vert \\nabla u_{n}\\left( x\\right) \\right\\vert ^{2}dx\\leq M\n\\end{equation*}\nfor all $n\\in \\mathbb{N}.$ Then, by the \\emph{Rellich-Kondrachov theorem} (see \\cite[Theorem 11.21, p. 
326]{L}), $u_{n}\\rightarrow u$ strongly in $\\mathbf{L}^{2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) .$ Applying Lemma \\ref{Lemmatrace} to $\\limfunc{Tr}\\left( u_{n}-u\\right) =\\limfunc{Tr}u_{n}-\\limfunc{Tr}u,$ we have\n\\begin{align*}\n\\int_{\\partial \\Omega }\\vert \\limfunc{Tr}u_{n}-\\limfunc{Tr}u\\vert ^{2}d\\mathcal{H}^{2}&\\leq C\\left(\\frac{1}{\\varepsilon }\\int_{\\Omega }\\vert u_{n}-u\\vert ^{2}dx+\\varepsilon \\int_{\\Omega }\\vert \\nabla u_{n}-\\nabla u\\vert ^{2}dx\\right)\\\\\n&\\leq C\\left( \\frac{1}{\\varepsilon }\\int_{\\Omega }\\vert u_{n}-u\\vert ^{2}dx+4\\varepsilon M\\right) .\n\\end{align*}\nHence\n\\begin{equation*}\n\\underset{n\\rightarrow \\infty }{\\lim \\sup }\\int_{\\partial \\Omega }\\left\\vert \\limfunc{Tr}u_{n}-\\limfunc{Tr}u\\right\\vert ^{2}d\\mathcal{H}^{2}\\leq 4\\varepsilon CM.\n\\end{equation*}\nLetting $\\varepsilon \\rightarrow 0^{+}$ concludes the proof.\\medskip \n\\end{proof}\n\nWe will also use the so-called \\emph{generalized Poincar\\'{e} inequality} (see \\cite[Theorem 6.1-8, p. 
281]{C}).\n\n\\begin{proposition}\n\\label{poincare}Given an open bounded domain $\\Omega \\subset \\mathbb{R}^{3}$ with locally Lipschitz boundary and a measurable subset $\\Gamma \\subset \\partial \\Omega $ with $\\mathcal{H}^{2}\\left( \\Gamma \\right) >0$, there exists a constant $C>0$ such that \n\\begin{equation}\n\\underset{\\Omega }{\\dint }\\left\\vert u\\left( x\\right) \\right\\vert ^{2}dx\\leq C\\left[ \\underset{\\Omega }{\\dint }\\left\\vert \\nabla u\\left( x\\right) \\right\\vert ^{2}\\,dx+\\left\\vert \\underset{\\Gamma }{\\dint }\\limfunc{Tr}u\\left( x\\right) \\,d\\mathcal{H}^{2}\\left( x\\right) \\right\\vert ^{2}\\right] \\label{poincare_ineq}\n\\end{equation}\nfor each $u\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $.\n\\end{proposition}\n\n\\medskip\nLet us now formulate the boundary conditions in terms of the trace operator. Consider first a surface $\\mathcal{S}\\subset \\mathbb{R}^{3}$ defined by some continuous function $h:\\mathbb{R}^{3}\\rightarrow \\mathbb{R}$,\n\\begin{equation*}\n\\mathcal{S}:=\\left\\{ u\\in \\mathbb{R}^{3}:h\\left( u\\right) =0\\right\\} .\n\\end{equation*}\nThen, given $L\\geq 1$, we denote by $\\Sigma _{L}\\left( \\Gamma ^{+};\\Gamma ^{-}\\right) $ the set of all functions $\\sigma :\\Gamma ^{+}\\rightarrow \\Gamma ^{-}$ satisfying the inequalities\n\\begin{equation}\n\\frac{1}{L}\\left\\vert x-y\\right\\vert \\leq \\left\\vert \\sigma \\left( x\\right) -\\sigma \\left( y\\right) \\right\\vert \\leq L\\left\\vert x-y\\right\\vert \\label{Lipschitz}\n\\end{equation}\nfor all $x,y\\in \\Gamma ^{+}$, and introduce the set $\\mathcal{W}_{L}\\subset \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) \\times \\Sigma _{L}\\left( \\Gamma ^{+};\\Gamma ^{-}\\right) $ of all pairs $\\left( u,\\sigma \\right) $ satisfying the (boundary) conditions:\n\n\\begin{enumerate}\n\\item[(C$_{1}$)] $\\limfunc{Tr}u\\left( x\\right) =x$ for $\\mathcal{H}^{2}$-a.e. 
$x\\in \\Gamma _{1}$;\n\n\\item[(C$_{2}$)] $h\\left( \\limfunc{Tr}u\\left( x\\right) \\right) =0$ for $\\mathcal{H}^{2}$-a.e. $x\\in \\Gamma _{2}$;\n\n\\item[(C$_{3}$)] $\\limfunc{Tr}u\\left( x\\right) =\\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) $ for $\\mathcal{H}^{2}$-a.e. $x\\in \\Gamma ^{+}$.\n\\end{enumerate}\nThus, we can write the \\emph{knitting variational problem} in the form\n\\begin{equation}\n\\min \\left\\{ \\int_{\\Omega }W\\left( \\nabla u\\left( x\\right) \\right)dx:\\left( u,\\sigma \\right) \\in \\mathcal{W}_{L}\\right\\}. \\label{varproblem}\n\\end{equation}\n\n\\begin{figure}[htb]\t\n\\begin{center}\n\\begin{tikzpicture}\n\\node[label=below:~] (x1) at (2,2) {$\\bullet$};\n\\node[label=below:~ ] (x2) at (2,0) {$\\bullet$};\n\\node[label=below:~ ] (x3) at (0,0) {$\\bullet$};\n\\node[label=below:~ ] (x4) at (1,1) {$\\bullet$};\n \\node[label=below:~ ] (x5) at (0,1) {$\\bullet$};\n\n \\draw (x1) .. controls (3,1) .. (x2);\n \\draw (x2) .. controls (1,-0.3) .. (x3);\n \\draw (x3) -- (x4);\n \\draw (x4) -- (x5);\n \\draw (x5) .. controls (1,3) .. (x1);\n \n \n\\node[label=below:~] (z1) at (8,2) {$\\bullet$};\n\\node[label=below:~ ] (z2) at (8,0) {$\\bullet$};\n\\node[label=below:~ ] (z3) at (6,0) {$\\bullet$};\n\\node[label=below:~ ] (z4) at (7,1) {$\\bullet$};\n\n\n \\draw (z1) .. controls (9,1) .. (z2);\n \\draw (z2) -- (z3);\n \\draw[very thick] (z3) -- (z4);\n\n \\draw (z3) .. controls (6,2) .. 
(z1); \n \n\\draw[->] (3.5,1) -- (5.3,1); \n \n\\node [above] at (4.4,1.2) {$u$};\n\n\\node [above] at (3,1.2) {$\\Gamma_1$};\n\\node [above] at (10,1.2) {$\\Gamma_1=u(\\Gamma_1)$};\n\n\\node [below] at (1,-0.3) {$\\Gamma_2$};\n\\node [below] at (7.5,-0.3) {$u(\\Gamma_2),\\;\\; h(x)=0$};\n\n\\node [above] at (0.2,1.8) {$\\Gamma_3$};\n\\node [above] at (6.2,1.9) {$u(\\Gamma_3)$};\n\n\\node [above] at (0.8,1.1) {$\\Gamma_+$};\n\n\n\\node [below] at (0.8,0.6) {$\\Gamma_-$};\n\\node [below] at (7,0.65) {$u(\\Gamma_{\\pm})$};\n\n\\draw[->] (0.2,0.9) -- (0.3,0.4); \n\\node [below] at (0,0.8) {$\\sigma$};\n\n\\node [below] at (2,1.3) {$\\Omega$};\n\\node [below] at (8,1.3) {$u ({\\Omega})$};\n \n\\end{tikzpicture}\n\\end{center}\n\\caption{General scheme of plastic surgery.}\n\\end{figure}\n\nLet us emphasize the difference between the boundary conditions (C$_{1}$)--(C$_{3}$) above (see Fig. 1). The first condition, (C$_{1}$), means that the surface $\\Gamma _{1}$ is a part of the elastic body (breast) that remains fixed. 
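The bi-Lipschitz requirement (\\ref{Lipschitz}) on the suturing map $\\sigma$ can also be probed numerically on sampled boundary points; a minimal sketch (illustrative only, with a hypothetical rigid rotation standing in for $\\sigma$):

```python
import itertools
import math

def bilipschitz_constant(points, sigma):
    # smallest L >= 1 with |x-y|/L <= |sigma(x)-sigma(y)| <= L|x-y|
    # over all sampled pairs of points
    L = 1.0
    for x, y in itertools.combinations(points, 2):
        d = math.dist(x, y)
        ds = math.dist(sigma(x), sigma(y))
        if d > 0 and ds > 0:
            L = max(L, ds / d, d / ds)
    return L

# hypothetical suturing map: a rigid rotation by 90 degrees about the z-axis
rot = lambda p: (-p[1], p[0], p[2])
pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0)]
assert abs(bilipschitz_constant(pts, rot) - 1.0) < 1e-12  # isometries give L = 1
```

A sampled estimate of this kind only bounds $L$ from below, but it is useful for rejecting candidate maps that stretch or compress the sutured tissue too much.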
The condition (C$_{2}$), instead, can be interpreted as the knitting of a part of the incised breast (surface $\\Gamma _{2}$) to the fixed surface $\\mathcal{S}$ of the woman's chest, while (C$_{3}$) means the knitting of the cut breast surface $\\Gamma _{4}:=\\Gamma ^{+}\\cup \\Gamma ^{-}$, i.e., of $\\Gamma ^{+}$ onto $\\Gamma ^{-}$. Finally, the piece $\\Gamma _{3}$ of the boundary $\\partial \\Omega $ remains free and admits an arbitrary configuration depending on the knitting process.\n\n\\bigskip\n\n\\section{\\protect\\bigskip Existence of minimizers}\n\nBefore proving the main existence theorem let us justify that for each $\\sigma \\in \\Sigma _{L}\\left( \\Gamma ^{+};\\Gamma ^{-}\\right) $ the boundary condition (C$_{3}$) makes sense.\n\n\\begin{lemma}\n\\label{measurability_comp}Let $\\Omega \\subset \\mathbb{R}^{3}$ be an open bounded connected domain with locally Lipschitz boundary and let $\\Gamma ^{\\pm }\\subset \\partial \\Omega $ be $\\mathcal{H}^{2}$-measurable sets with $\\mathcal{H}^{2}\\left( \\Gamma ^{\\pm }\\right) >0$ and $\\mathcal{H}^{2}\\left( \\Gamma ^{+}\\cap \\Gamma ^{-}\\right) =0$. Then for each $u\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ and $\\sigma \\in \\Sigma _{L}\\left( \\Gamma ^{+};\\Gamma ^{-}\\right) $, $L\\geq 1$, the composed function $\\limfunc{Tr}u\\circ \\sigma :\\Gamma ^{+}\\rightarrow \\mathbb{R}^{3}$ is measurable w.r.t. the measure $\\mathcal{H}^{2}$ on $\\Gamma ^{+}$.\n\\end{lemma}\n\n\\begin{proof}\nGiven $u\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $, by the density argument there exists a sequence of continuous functions $v_{n}:\\overline{\\Omega }\\rightarrow \\mathbb{R}^{3}$ converging to $u$ in the space $\\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $. 
Consequently (see the properties $1$ and $2$ of traces), $v_{n}=\\limfunc{Tr}v_{n}\\rightarrow \\limfunc{Tr}u$, as $n\\rightarrow \\infty $, in $\\mathbf{L}^{2}\\left( \\partial \\Omega ;\\mathbb{R}^{3}\\right) $. Then, up to a subsequence, we have\n\\begin{equation}\nv_{n}\\left( y\\right) \\rightarrow \\limfunc{Tr}u\\left( y\\right) \\;\\; \\forall y\\in \\Gamma ^{-}\\setminus E_{0}^{-}, \\label{appr_cont_a.e.}\n\\end{equation}\nwhere $E_{0}^{-}\\subset \\Gamma ^{-}$ is some set with null Hausdorff measure. So, it remains to prove that\n\\begin{equation}\n\\mathcal{H}^{2}\\left( \\sigma ^{-1}\\left( E_{0}^{-}\\right) \\right) =0,\n\\label{measure_0}\n\\end{equation}\nbecause in such a case we deduce from (\\ref{appr_cont_a.e.}) that the sequence of (continuous) functions $\\left\\{ v_{n}\\left( \\sigma \\left( x\\right) \\right) \\right\\} $ converges to $\\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) $ for all $x\\in \\Gamma ^{+}$ up to the negligible set of points $E_{0}^{+}:=\\sigma ^{-1}\\left( E_{0}^{-}\\right) $.\n\nOn the other hand, (\\ref{measure_0}) follows easily from the left inequality in (\\ref{Lipschitz}) and from the definition of the \\emph{Hausdorff measure} (see \\cite[p. 171]{F}):\n\\begin{equation*}\n\\mathcal{H}^{2}\\left( E\\right) :=\\frac{\\pi }{4}\\lim_{\\varepsilon \\rightarrow 0}\\,\\inf \\,\\sum_{i=1}^{\\infty }\\left( \\limfunc{diam}A_{i}\\right) ^{2},\n\\end{equation*}\nwhere the infimum is taken over all coverings $\\left\\{ A_{i}\\right\\} _{i=1}^{\\infty }$ of $E$ with the diameters $\\limfunc{diam}A_{i}\\leq \\varepsilon $. 
In fact, given $\\eta >0$ we find $\\varepsilon >0$ and a family $\\left\\{ A_{i}\\right\\} _{i=1}^{\\infty }$ with $\\tbigcup_{i=1}^{\\infty }A_{i}\\supset E_{0}^{-}$, $\\limfunc{diam}A_{i}\\leq \\varepsilon \/L$ and\n\\begin{equation*}\n\\sum_{i=1}^{\\infty }\\left( \\limfunc{diam}A_{i}\\right) ^{2}\\leq \\frac{\\eta }{L^{2}}.\n\\end{equation*}\nSince due to (\\ref{Lipschitz}) obviously $\\limfunc{diam}\\,\\left( \\sigma ^{-1}\\left( A_{i}\\right) \\right) \\leq L\\limfunc{diam}\\,A_{i}\\leq \\varepsilon $, $i=1,2,\\dots $, the family $\\left\\{ \\sigma ^{-1}\\left( A_{i}\\right) \\right\\} _{i=1}^{\\infty }$ is a covering of $E_{0}^{+}$ and\n\\begin{equation*}\n\\sum_{i=1}^{\\infty }\\left( \\limfunc{diam}\\,\\left( \\sigma ^{-1}\\left( A_{i}\\right) \\right) \\right) ^{2}\\leq L^{2}\\sum_{i=1}^{\\infty }\\left( \\limfunc{diam}A_{i}\\right) ^{2}\\leq \\eta ,\n\\end{equation*}\nwe conclude that $\\mathcal{H}^{2}\\left( E_{0}^{+}\\right) =0$, and the $\\mathcal{H}^{2}$-measurability of $x\\mapsto \\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) $ on $\\Gamma ^{+}$ follows.\n\\end{proof}\n\nLet us now give an \\emph{a priori} estimate for the \"weighted\" integral\n\\begin{equation*}\n\\dint\\limits_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) ,\n\\end{equation*}\nimplying, in particular, that the composed function $\\limfunc{Tr}u\\circ \\sigma $ belongs to the class $\\mathbf{L}^{2}\\left( \\partial \\Omega ;\\mathbb{R}^{3}\\right) $.\n\n\\begin{lemma}\n\\label{estimate_w}Let $\\Omega \\subset \\mathbb{R}^{3}$ and $\\Gamma ^{\\pm }\\subset \\partial \\Omega $ be such as in Lemma \\ref{measurability_comp}. 
Then, given $L\\geq 1$, there exists a constant $\\mathfrak{L}_{L}>0$ such that the inequality\n\\begin{equation}\n\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) \\leq \\mathfrak{L}_{L}\\int_{\\Gamma ^{-}}\\left\\vert \\limfunc{Tr}u\\left( y\\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( y\\right) \\label{estimate_traces}\n\\end{equation}\nholds whenever $u\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ and $\\sigma \\in \\Sigma _{L}\\left( \\Gamma ^{+};\\Gamma ^{-}\\right) $.\n\\end{lemma}\n\n\\begin{proof}\nTo prove the estimate (\\ref{estimate_traces}) we employ the local Lipschitz continuity of the surfaces $\\Gamma ^{\\pm }$. Namely, given $y\\in \\Gamma ^{-}$ we choose an (open) ball $B\\left( y,\\varepsilon _{y}\\right) $, $\\varepsilon _{y}>0$, such that $\\Gamma ^{-}\\cap B\\left( y,\\varepsilon _{y}\\right) $ can be represented as the graph of a Lipschitz continuous function with respect to some (local) coordinates. Without loss of generality we can suppose that this function (say $f_{y}^{-}$) is defined on an open set $D_{y}^{-}$ from the cartesian product of the first two components $x^{\\prime }:=\\left( x_{1},x_{2}\\right) $ and admits as values the component $x_{3}$, i.e., \n\\begin{equation}\n\\label{Gamma_-cap}\n\\Gamma ^{-}\\cap B\\left( y,\\varepsilon _{y}\\right) =\\left\\{ \\left( z^{\\prime },f_{y}^{-}\\left( z^{\\prime }\\right) \\right) :z^{\\prime }\\in D_{y}^{-}\\right\\}.\n\\end{equation}\nBy the compactness of $\\Gamma ^{-}$ one can find a finite number of points $y^{1},\\dots ,y^{q}\\in \\Gamma ^{-}$ with \n\\begin{equation}\n\\Gamma ^{-}=\\Gamma ^{-}\\cap \\tbigcup\\limits_{j=1}^{q}B\\left( y^{j},\\frac{\\varepsilon _{y^{j}}}{2}\\right) . 
\\label{repres.Gamma_-}\n\\end{equation}\nSet $\\varepsilon _{j}:=\\varepsilon _{y^{j}}$, $D_{j}^{-}:=D_{y^{j}}^{-}$ and $f_{j}^{-}\\left( z^{\\prime }\\right) :=f_{y^{j}}^{-}\\left( z^{\\prime }\\right) $, $z^{\\prime }\\in D_{j}^{-}$, $j=1,\\dots ,q$.\n\nSimilarly, for any $x\\in \\Gamma ^{+}$ there exist $\\delta _{x}>0$, an open domain $D_{x}^{+}\\subset \\mathbb{R}^{2}$ and a Lipschitz function $f_{x}^{+}:D_{x}^{+}\\rightarrow \\mathbb{R}$ such that \n\\begin{equation*}\n\\Gamma ^{+}\\cap B\\left( x,\\delta _{x}\\right) =\\left\\{ \\left( z^{\\prime },f_{x}^{+}\\left( z^{\\prime }\\right) \\right) :z^{\\prime }\\in D_{x}^{+}\\right\\} .\n\\end{equation*}\nWe do not lose generality by assuming that the value of $f_{x}^{+}$ is the last component of the vector $z\\in \\Gamma ^{+}$ (as in (\\ref{Gamma_-cap})). Again, due to the compactness argument there exists a finite number of points $x^{1},\\dots ,x^{r}\\in \\Gamma ^{+}$ such that\n\\begin{equation*}\n\\Gamma ^{+}=\\Gamma ^{+}\\cap \\tbigcup\\limits_{i=1}^{r}B\\left( x^{i},\\delta _{i}\\right) ,\n\\end{equation*}\nwhere\n\\begin{equation}\n\\delta _{i}:=\\underset{1\\leq j\\leq q}{\\min }\\left( \\delta _{x^{i}},\\tfrac{\\varepsilon _{j}}{2L}\\right) ,\\;\\; i=1,\\dots ,r.\n\\label{def_delta}\n\\end{equation}\nSet also $D_{i}^{+}:=D_{x^{i}}^{+}$ and $f_{i}^{+}\\left( z^{\\prime }\\right) :=f_{x^{i}}^{+}\\left( z^{\\prime }\\right) $, $z^{\\prime }\\in D_{i}^{+}$, $i=1,\\dots ,r$, and denote by $L_{\\Gamma }$ the maximal Lipschitz constant of the functions $f_{j}^{-}:D_{j}^{-}\\rightarrow \\mathbb{R}$, $j=1,\\dots ,q$, and $f_{i}^{+}:D_{i}^{+}\\rightarrow \\mathbb{R}$, $i=1,\\dots ,r$.\n\nWe claim that given $i=1,\\dots ,r$ and $\\sigma \\in \\Sigma _{L}$, the image $\\sigma \\left( \\Gamma ^{+}\\cap B\\left( x^{i},\\delta _{i}\\right) \\right) $ is contained in $\\Gamma ^{-}\\cap B\\left( y^{j},\\varepsilon _{j}\\right) $ for some $j=1,\\dots ,q$. 
Indeed, let us choose $j$ such that $\\sigma \\left( x^{i}\\right) \\in B\\left( y^{j},\\varepsilon _{j}\/2\\right) $ (see (\\ref{repres.Gamma_-})). Taking an arbitrary $z\\in \\Gamma ^{+}\\cap B\\left( x^{i},\\delta _{i}\\right) $ we, in particular, have $\\left\\vert z-x^{i}\\right\\vert <\\frac{\\varepsilon _{j}}{2L}$ (see (\\ref{def_delta})) and, by (\\ref{Lipschitz}), $\\left\\vert \\sigma \\left( z\\right) -\\sigma \\left( x^{i}\\right) \\right\\vert <\\varepsilon _{j}\/2$. However, assuming that $\\sigma \\left( z\\right) \\notin \\Gamma ^{-}\\cap B\\left( y^{j},\\varepsilon _{j}\\right) $ we have $\\left\\vert \\sigma \\left( z\\right) -y^{j}\\right\\vert \\geq \\varepsilon _{j}$ and hence\n\\begin{equation*}\n\\left\\vert \\sigma \\left( z\\right) -\\sigma \\left( x^{i}\\right) \\right\\vert \\geq \\left\\vert \\sigma \\left( z\\right) -y^{j}\\right\\vert -\\left\\vert \\sigma \\left( x^{i}\\right) -y^{j}\\right\\vert \\geq \\varepsilon _{j}-\\frac{\\varepsilon _{j}}{2}=\\frac{\\varepsilon _{j}}{2},\n\\end{equation*}\nwhich is a contradiction. In what follows we associate to each $\\sigma \\in \\Sigma _{L}$ and to each $i=1,\\dots ,r$ an index $j=j\\left( \\sigma ,i\\right) \\in \\left\\{ 1,\\dots ,q\\right\\} $ such that\n\\begin{equation*}\n\\sigma \\left( \\Gamma ^{+}\\cap B\\left( x^{i},\\delta _{i}\\right) \\right) \\subset \\Gamma ^{-}\\cap B\\left( y^{j},\\varepsilon _{j}\\right) .\n\\end{equation*}\n\nThe latter inclusion allows us to correctly define the (injective) mapping $\\psi _{\\sigma }^{i}:D_{i}^{+}\\rightarrow D_{j\\left( \\sigma ,i\\right) }^{-}$ such that\n\\begin{equation}\n\\sigma \\left( x\\right) =\\sigma \\left( x^{\\prime },f_{i}^{+}\\left( x^{\\prime }\\right) \\right) =\\left( \\psi _{\\sigma }^{i}\\left( x^{\\prime }\\right) ,f_{j}^{-}\\left( \\psi _{\\sigma }^{i}\\left( x^{\\prime }\\right) \\right) \\right) ,\\;\\; x^{\\prime }\\in D_{i}^{+}. 
\\label{def_psi}\n\\end{equation}\nFrom (\\ref{Lipschitz}) it follows immediately that $\\psi _{\\sigma }^{i}$ is Lipschitz:\n\\begin{eqnarray}\n\\left\\vert \\psi _{\\sigma }^{i}\\left( x^{\\prime }\\right) -\\psi _{\\sigma }^{i}\\left( z^{\\prime }\\right) \\right\\vert &\\leq &\\left\\vert \\sigma \\left( x^{\\prime },f_{i}^{+}\\left( x^{\\prime }\\right) \\right) -\\sigma \\left( z^{\\prime },f_{i}^{+}\\left( z^{\\prime }\\right) \\right) \\right\\vert \\notag \\\\\n&\\leq &L\\left( \\left\\vert x^{\\prime }-z^{\\prime }\\right\\vert ^{2}+\\left( f_{i}^{+}\\left( x^{\\prime }\\right) -f_{i}^{+}\\left( z^{\\prime }\\right) \\right) ^{2}\\right) ^{1\/2} \\label{Lip_psi} \\\\\n&\\leq &L\\sqrt{1+L_{\\Gamma }^{2}}\\left\\vert x^{\\prime }-z^{\\prime }\\right\\vert ,\\;\\; x^{\\prime },z^{\\prime }\\in D_{i}^{+}. \\notag\n\\end{eqnarray}\nSo, by \\emph{Rademacher's Theorem,} $\\psi _{\\sigma }^{i}$ is $\\mathcal{L}^{2}$-a.e. differentiable on $D_{i}^{+}$ with $\\mathcal{L}^{2}$-measurable gradient, and the inequality\n\\begin{equation*}\n\\left\\vert \\nabla \\psi _{\\sigma }^{i}\\left( x^{\\prime }\\right) \\right\\vert \\leq M:=L\\sqrt{1+L_{\\Gamma }^{2}} \\label{bound_grad_psi}\n\\end{equation*}\nholds for a.e. $x^{\\prime }\\in D_{i}^{+}$. 
Notice that the inverse mapping $\\left( \\psi _{\\sigma }^{i}\\right) ^{-1}$ is well defined on $G_{\\sigma }^{i}:=\\psi _{\\sigma }^{i}\\left( D_{i}^{+}\\right) \\subset D_{j\\left( \\sigma ,i\\right) }^{-}$ by a formula similar to (\\ref{def_psi}), namely\n\\begin{equation*}\n\\sigma ^{-1}\\left( y^{\\prime },f_{j}^{-}\\left( y^{\\prime }\\right) \\right) =\\left( \\left( \\psi _{\\sigma }^{i}\\right) ^{-1}\\left( y^{\\prime }\\right) ,f_{i}^{+}\\left( \\left( \\psi _{\\sigma }^{i}\\right) ^{-1}\\left( y^{\\prime }\\right) \\right) \\right) ,\\quad y^{\\prime }\\in G_{\\sigma }^{i}.\n\\end{equation*}\nIn the same way as (\\ref{Lip_psi}) we deduce, from (\\ref{Lipschitz}), that $\\left( \\psi _{\\sigma }^{i}\\right) ^{-1}$ is Lipschitz on the (open) set $G_{\\sigma }^{i}$ and the estimate\n\\begin{equation}\n\\left\\vert \\nabla \\left( \\psi _{\\sigma }^{i}\\right) ^{-1}\\left( y^{\\prime }\\right) \\right\\vert \\leq M \\label{bound_grad_psi_inv}\n\\end{equation}\nholds for $\\mathcal{L}^{2}$-a.e. 
$y^{\\prime }\\in G_{\\sigma }^{i}$.\n\nIntegrating the function $\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) \\right\\vert ^{2}$ on the surface piece $\\Gamma ^{+}\\cap B\\left( x^{i},\\delta _{i}\\right) $ we pass first to the double integral\n\\begin{eqnarray}\n&&\\underset{\\Gamma ^{+}\\cap B\\left( x^{i},\\delta _{i}\\right) }{\\int }\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) \\notag \\\\\n&=&\\diint\\limits_{D_{i}^{+}}\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x^{\\prime },f_{i}^{+}\\left( x^{\\prime }\\right) \\right) \\right) \\right\\vert ^{2}\\sqrt{1+\\left\\vert \\nabla f_{i}^{+}\\left( x^{\\prime }\\right) \\right\\vert ^{2}}\\,dx^{\\prime } \\label{int_change_var} \\\\\n&\\leq &\\sqrt{1+L_{\\Gamma }^{2}}\\diint\\limits_{D_{i}^{+}}\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x^{\\prime },f_{i}^{+}\\left( x^{\\prime }\\right) \\right) \\right) \\right\\vert ^{2}\\,dx^{\\prime }. 
\\notag\n\\end{eqnarray}\nDue to the representation (\\ref{def_psi}) we can make the change of variables $y^{\\prime }=\\psi _{\\sigma }^{i}\\left( x^{\\prime }\\right) $, $x^{\\prime }\\in D_{i}^{+}$, in the integral (\\ref{int_change_var}), and returning then to the surface integral on a piece of $\\Gamma ^{-}$, we have\n\\begin{eqnarray}\n&&\\diint\\limits_{D_{i}^{+}}\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x^{\\prime },f_{i}^{+}\\left( x^{\\prime }\\right) \\right) \\right) \\right\\vert ^{2}\\,dx^{\\prime } \\notag \\\\\n&=&\\diint\\limits_{G_{\\sigma }^{i}}\\left\\vert \\limfunc{Tr}u\\left( y^{\\prime },f_{j}^{-}\\left( y^{\\prime }\\right) \\right) \\right\\vert ^{2}\\left\\vert \\det \\nabla \\left( \\psi _{\\sigma }^{i}\\right) ^{-1}\\left( y^{\\prime }\\right) \\right\\vert \\,dy^{\\prime } \\notag \\\\\n&\\leq &6M^{3}\\diint\\limits_{G_{\\sigma }^{i}}\\left\\vert \\limfunc{Tr}u\\left( y^{\\prime },f_{j}^{-}\\left( y^{\\prime }\\right) \\right) \\right\\vert ^{2}\\sqrt{1+\\left\\vert \\nabla f_{j}^{-}\\left( y^{\\prime }\\right) \\right\\vert ^{2}}\\,dy^{\\prime } \\label{change_var1} \\\\\n&\\leq &6M^{3}\\underset{\\Gamma ^{-}}{\\int }\\left\\vert \\limfunc{Tr}u\\left( y\\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( y\\right) . \\notag\n\\end{eqnarray}\nHere we used the estimate (\\ref{bound_grad_psi_inv}) and the obvious inequality $\\left\\vert \\det A\\right\\vert \\leq 6\\left\\Vert A\\right\\Vert ^{3}$ ($A$ is an arbitrary $3\\times 3$-matrix). 
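The determinant bound just used can be verified in one line (a sketch; we assume here that $\\left\\Vert \\cdot \\right\\Vert $ is any matrix norm dominating the maximal absolute value of the entries, e.g., the Euclidean one): by the Leibniz formula, summing over the six permutations $\\pi \\in S_{3}$,\n\\begin{equation*}\n\\left\\vert \\det A\\right\\vert =\\Big\\vert \\sum\\limits_{\\pi \\in S_{3}}\\mathrm{sgn}\\left( \\pi \\right) \\,a_{1\\pi \\left( 1\\right) }a_{2\\pi \\left( 2\\right) }a_{3\\pi \\left( 3\\right) }\\Big\\vert \\leq 6\\,\\max_{i,j}\\left\\vert a_{ij}\\right\\vert ^{3}\\leq 6\\left\\Vert A\\right\\Vert ^{3}.\n\\end{equation*}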
Since the sets $\\Gamma ^{+}\\cap B\\left( x^{i},\\delta _{i}\\right) $, $i=1,\\dots ,r$, cover the surface $\\Gamma ^{+}$, taking into account (\\ref{int_change_var}) and (\\ref{change_var1}) we conclude that\n\\begin{eqnarray*}\n\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) &\\leq &\\underset{i=1}{\\overset{r}{\\dsum }}\\underset{\\Gamma ^{+}\\cap B\\left( x^{i},\\delta _{i}\\right) }{\\int }\\left\\vert \\limfunc{Tr}u\\left( \\sigma \\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) \\\\\n&\\leq &\\mathfrak{L}\\underset{\\Gamma ^{-}}{\\int }\\left\\vert \\limfunc{Tr}u\\left( y\\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( y\\right) ,\n\\end{eqnarray*}\nwhere $\\mathfrak{L}:=6rM^{3}\\sqrt{1+L_{\\Gamma }^{2}}>0$ depends only on the Lipschitz constant $L\\geq 1$ and on the properties of the domain $\\Omega $ (namely, of its boundary).\\bigskip \n\\end{proof}\n\n\\medskip\n\nIn the proof of the existence theorem we pay the main attention to the validity of the boundary condition (C$_{3}$), where Lemma \\ref{estimate_w} is crucial.\n\n\\begin{theorem}\n\\label{Th_exist} Let $W:\\mathbb{R}^{3\\times 3}\\rightarrow \\mathbb{R}\\cup \\left\\{ +\\infty \\right\\} $ be a polyconvex function satisfying the growth assumption (\\ref{growth_cond}). 
Then problem (\\ref{varproblem}) admits a minimizer whenever there exists at least one pair $\\omega :=\\left( u,\\sigma \\right) \\in \\mathcal{W}_{L}$ with\n\\begin{equation*}\nI\\left( \\omega \\right) :=\\dint_{\\Omega }W\\left( \\nabla u\\left( x\\right) \\right) \\,\\,dx<+\\infty .\n\\end{equation*}\n\\end{theorem}\n\n\\begin{proof}\nLet us consider a minimizing sequence $\\left\\{ \\left( u_{n},\\sigma _{n}\\right) \\right\\} \\subset \\mathcal{W}_{L}$ of the functional (\\ref{Functional}), i.e., such that\n\\begin{equation}\n\\dint_{\\Omega }W\\left( \\nabla u_{n}\\left( x\\right) \\right) \\,dx\\leq \\inf \\left\\{ \\dint_{\\Omega }W\\left( \\nabla u\\left( x\\right) \\right) \\,dx:\\left( u,\\sigma \\right) \\in \\mathcal{W}_{L}\\right\\} +\\frac{1}{n}<+\\infty .\n\\label{min_seq}\n\\end{equation}\nTaking into account the estimate (\\ref{growth_cond}) we deduce from (\\ref{min_seq}) that the sequences $\\left\\{ \\nabla u_{n}\\right\\} $, $\\left\\{ \\mathrm{Adj\\,}\\nabla u_{n}\\right\\} $ and $\\left\\{ \\det \\,\\nabla u_{n}\\right\\} $ are bounded in $\\mathbf{L}^{2}\\left( \\Omega ;\\mathbb{R}^{3\\times 3}\\right) $ and in $\\mathbf{L}^{2}\\left( \\Omega ;\\mathbb{R}\\right) $, respectively. Applying Proposition \\ref{poincare} and the boundary condition (C$_{1}$) we find a constant $C>0$ such that the inequality\n\\begin{equation*}\n\\underset{\\Omega }{\\dint }\\left\\vert u_{n}\\left( x\\right) \\right\\vert ^{2}dx\\leq C\\left[ \\underset{\\Omega }{\\dint }\\left\\vert \\nabla u_{n}\\left( x\\right) \\right\\vert ^{2}\\,dx+\\left\\vert \\dint\\limits_{\\Gamma _{1}}x\\,d\\mathcal{H}^{2}\\left( x\\right) \\right\\vert \\right] \n\\end{equation*}\nholds for each $n\\geq 1$. 
So, the sequence $\\left\\{ u_{n}\\right\\} $ is bounded in $\\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ and, by the \\emph{Banach-Alaoglu theorem}, up to a subsequence, converges weakly to some function $\\bar{u}\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $. Without loss of generality, we can also assume that $\\left\\{ \\mathrm{Adj\\,}\\nabla u_{n}\\right\\} $ and $\\left\\{ \\mathrm{\\det \\,}\\nabla u_{n}\\right\\} $ converge weakly to some functions $\\xi \\in \\mathbf{L}^{2}\\left( \\Omega ;\\mathbb{R}^{3\\times 3}\\right) $ and $\\eta \\in \\mathbf{L}^{2}\\left( \\Omega ;\\mathbb{R}\\right) $, respectively. Now, by Theorem 8.20 in \\cite[pp. 395-396]{Dac}, due to the uniqueness of the limit we deduce that $\\xi \\left( x\\right) =\\mathrm{Adj\\,}\\nabla \\bar{u}\\left( x\\right) $ and $\\eta \\left( x\\right) =\\mathrm{\\det \\,}\\nabla \\bar{u}\\left( x\\right) $ for almost all $x\\in \\Omega $. Thus we have the weak convergence of the sequence $\\left\\{ \\mathbb{T}\\left( \\nabla u_{n}\\right) \\right\\} $ to the vector-function $\\mathbb{T}\\left( \\nabla \\bar{u}\\right) $ in the space $\\mathbf{L}^{2}\\left( \\Omega ;\\mathbb{R}^{\\tau \\left( 3,3\\right) }\\right) $.\n\nOn the other hand, recalling that $\\left\\{ \\sigma _{n}\\right\\} \\subset \\Sigma _{L}\\left( \\Gamma ^{+};\\Gamma ^{-}\\right) $ (see (\\ref{Lipschitz})), by \\emph{Ascoli's theorem}, up to a subsequence, not relabeled, $\\left\\{ \\sigma _{n}\\right\\} $ converges uniformly to $\\overline{\\sigma }\\in \\Sigma _{L}\\left( \\Gamma ^{+};\\Gamma ^{-}\\right) $.\n\nSince the integrand $W$ is polyconvex, it can be represented as $W\\left( \\xi \\right) =g\\left( \\mathbb{T}\\left( \\xi \\right) \\right) $, $\\xi \\in \\mathbb{R}^{3\\times 3}$, with some convex function $g:\\mathbb{R}^{\\tau \\left( 3,3\\right) }\\rightarrow \\mathbb{R}$, and, therefore,\n\\begin{eqnarray*}\n\\underset{\\Omega }{\\dint }W\\left( \\nabla \\overline{u}\\left( x\\right) \\right) \\,\\,dx &=&\\underset{\\Omega }{\\dint }g\\left( \\mathbb{T}\\left( \\nabla \\overline{u}\\left( x\\right) \\right) \\right) \\,\\,dx\\leq \\underset{n\\rightarrow \\infty }{\\lim \\inf }\\,\\underset{\\Omega }{\\dint }g\\left( \\mathbb{T}\\left( \\nabla u_{n}\\left( x\\right) \\right) \\right) \\,\\,dx \\\\\n&\\leq &\\inf \\left\\{ \\underset{\\Omega }{\\dint }W\\left( \\nabla u\\left( x\\right) \\right) \\,\\,dx:\\left( u,\\sigma \\right) \\in \\mathcal{W}_{L}\\right\\} .\n\\end{eqnarray*}\nThus, it remains just to prove that $\\bar{\\omega}:=\\left( \\overline{u},\\overline{\\sigma }\\right) \\in \\mathcal{W}_{L}$ (i.e., that the Sobolev function $\\overline{u}$ satisfies the boundary conditions (C$_{1}$)$-$(C$_{3}$) above with the transformation $\\overline{\\sigma }$). The validity of (C$_{1}$) and (C$_{2}$) follows immediately from Proposition \\ref{traceconv}. In fact, the weak convergence of $\\left\\{ u_{n}\\right\\} $ in $\\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ implies the strong convergence of the traces $\\left\\{ \\limfunc{Tr}u_{n}\\right\\} $ in $\\mathbf{L}^{2}\\left( \\partial \\Omega ;\\mathbb{R}^{3}\\right) $. So, up to a subsequence, $\\limfunc{Tr}u_{n}\\left( x\\right) \\rightarrow \\limfunc{Tr}\\bar{u}\\left( x\\right) $ for $\\mathcal{H}^{2}$-a.e. $x\\in \\partial \\Omega $. In particular, $\\limfunc{Tr}\\bar{u}\\left( x\\right) =x$ and $h\\left( \\limfunc{Tr}\\bar{u}\\left( x\\right) \\right) =0$ almost everywhere on $\\Gamma _{1}$ and on $\\Gamma _{2}$, respectively (w.r.t. the Hausdorff measure).\n\nIn order to verify the condition (C$_{3}$) we observe first that\n\\begin{equation}\n\\limfunc{Tr}u_{n}\\left( x\\right) =\\limfunc{Tr}u_{n}\\left( \\sigma _{n}\\left( x\\right) \\right) ,\\quad n=1,2,\\dots ,\n\\label{cond_C3_n}\n\\end{equation}\nfor $\\mathcal{H}^{2}$-a.e. 
$x\\in \\Gamma ^{+}$, and consider the surface integral\n\\begin{equation*}\n\\mathcal{J}:=\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}\\bar{u}\\left( x\\right) -\\limfunc{Tr}\\bar{u}\\left( \\bar{\\sigma}\\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) .\n\\end{equation*}\nBy Minkowski's inequality we have\n\\begin{equation}\n\\mathcal{J}^{1\/2}\\leq \\left( \\mathcal{J}_{1}^{n}\\right) ^{1\/2}+\\left( \\mathcal{J}_{2}^{n}\\right) ^{1\/2}+\\left( \\mathcal{J}_{3}^{n}\\right) ^{1\/2},\n\\label{estimate_int}\n\\end{equation}\nwhere\n\\begin{eqnarray*}\n&&\\mathcal{J}_{1}^{n}:=\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}\\bar{u}\\left( x\\right) -\\limfunc{Tr}u_{n}\\left( \\sigma _{n}\\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) ; \\\\\n&&\\mathcal{J}_{2}^{n}:=\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}u_{n}\\left( \\sigma _{n}\\left( x\\right) \\right) -\\limfunc{Tr}\\bar{u}\\left( \\sigma _{n}\\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) ; \\\\\n&&\\mathcal{J}_{3}^{n}:=\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}\\bar{u}\\left( \\sigma _{n}\\left( x\\right) \\right) -\\limfunc{Tr}\\bar{u}\\left( \\bar{\\sigma}\\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) .\n\\end{eqnarray*}\n\nTaking into account the equalities (\\ref{cond_C3_n}), by using Proposition \\ref{traceconv} we immediately obtain that $\\mathcal{J}_{1}^{n}\\rightarrow 0$ as $n\\rightarrow \\infty $.\n\nDue to the linearity of the trace operator, applying Lemma \\ref{estimate_w} and again Proposition \\ref{traceconv} we arrive at\n\\begin{equation*}\n\\mathcal{J}_{2}^{n}\\leq \\mathfrak{L}_{L}\\int_{\\Gamma ^{-}}\\left\\vert \\limfunc{Tr}\\left( u_{n}-\\bar{u}\\right) \\left( x\\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) \\rightarrow 0\\quad \\mbox{as}\\quad n\\rightarrow \\infty .\n\\end{equation*}\n\nLet us approximate now 
$\\bar{u}\\in \\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $ by a sequence of continuous functions $v_{k}:\\overline{\\Omega }\\rightarrow \\mathbb{R}^{3}$, $k=1,2,\\dots $ (with respect to the norm of $\\mathbf{W}^{1,2}\\left( \\Omega ;\\mathbb{R}^{3}\\right) $). Then (see Proposition \\ref{traceconv}) $v_{k}=\\limfunc{Tr}v_{k}\\rightarrow \\limfunc{Tr}\\bar{u}$ as $k\\rightarrow \\infty $ in $\\mathbf{L}^{2}\\left( \\partial \\Omega ;\\mathbb{R}^{3}\\right) $. In particular, given $\\varepsilon >0$ there exists an index $k^{\\ast }\\geq 1$ such that\n\\begin{equation*}\n\\int_{\\Gamma ^{-}}\\left\\vert \\limfunc{Tr}\\bar{u}\\left( y\\right) -v_{k^{\\ast }}\\left( y\\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( y\\right) \\leq \\varepsilon .\n\\end{equation*}\nBy using Lemma \\ref{estimate_w}, similarly as was done to estimate the integral $\\mathcal{J}_{2}^{n}$, we have\n\\begin{eqnarray}\n&&\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}\\bar{u}\\left( \\sigma _{n}\\left( x\\right) \\right) -v_{k^{\\ast }}\\left( \\sigma _{n}\\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) \\notag \\\\\n&\\leq &\\mathfrak{L}_{L}\\int_{\\Gamma ^{-}}\\left\\vert \\limfunc{Tr}\\bar{u}\\left( y\\right) -v_{k^{\\ast }}\\left( y\\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( y\\right) \\leq \\mathfrak{L}_{L}\\varepsilon ,\\;\\; n=1,2,\\dots , \\label{Int_3_1}\n\\end{eqnarray}\nand, similarly,\n\\begin{equation}\n\\int_{\\Gamma ^{+}}\\left\\vert \\limfunc{Tr}\\bar{u}\\left( \\bar{\\sigma}\\left( x\\right) \\right) -v_{k^{\\ast }}\\left( \\bar{\\sigma}\\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) \\leq \\mathfrak{L}_{L}\\varepsilon . 
\\label{Int_3_2}\n\\end{equation}\nOn the other hand, by the uniform continuity of $v_{k^{\\ast }}$ and the uniform convergence $\\sigma _{n}\\rightarrow \\overline{\\sigma }$ as $n\\rightarrow \\infty $, we find a number $n^{\\ast }\\geq 1$ such that\n\\begin{equation*}\n\\left\\vert v_{k^{\\ast }}\\left( \\sigma _{n}\\left( x\\right) \\right) -v_{k^{\\ast }}\\left( \\bar{\\sigma}\\left( x\\right) \\right) \\right\\vert \\leq \\varepsilon \n\\end{equation*}\nfor all $n\\geq n^{\\ast }$ and all $x\\in \\Gamma ^{+}$, and, consequently,\n\\begin{equation}\n\\int_{\\Gamma ^{+}}\\left\\vert v_{k^{\\ast }}\\left( \\sigma _{n}\\left( x\\right) \\right) -v_{k^{\\ast }}\\left( \\bar{\\sigma}\\left( x\\right) \\right) \\right\\vert ^{2}d\\mathcal{H}^{2}\\left( x\\right) \\leq \\mathcal{H}^{2}\\left( \\Gamma ^{+}\\right) \\varepsilon ^{2},\\quad n\\geq n^{\\ast }. \\label{Int_3_3}\n\\end{equation}\nJoining together the inequalities (\\ref{Int_3_1}), (\\ref{Int_3_2}) and (\\ref{Int_3_3}) we obtain that\n\\begin{equation*}\n\\left( \\mathcal{J}_{3}^{n}\\right) ^{1\/2}\\leq \\left( \\mathfrak{L}_{L}\\varepsilon \\right) ^{1\/2}+\\left( \\mathfrak{L}_{L}\\varepsilon \\right) ^{1\/2}+\\left( \\mathcal{H}^{2}\\left( \\Gamma ^{+}\\right) \\varepsilon ^{2}\\right) ^{1\/2},\\quad n\\geq n^{\\ast }.\n\\end{equation*}\nSince $\\varepsilon >0$ is arbitrary and the constant $\\mathfrak{L}_{L}$ does not depend on $n=1,2,\\dots $, we conclude that all three integrals on the right-hand side of (\\ref{estimate_int}) tend to zero as $n\\rightarrow \\infty $. Thus $\\mathcal{J}=0$, or, in other words, $\\limfunc{Tr}\\bar{u}\\left( x\\right) -\\limfunc{Tr}\\bar{u}\\left( \\bar{\\sigma}\\left( x\\right) \\right) =0$ for $\\mathcal{H}^{2}$-a.e. 
$x\\in \\Gamma ^{+}$, and the theorem is proved.\n\\end{proof}\n\n\n\\section{Necessary conditions of optimality}\n\nIn this section, under some additional hypotheses, we deduce necessary conditions of optimality for problem (\\ref{varproblem}).\n\nTo simplify, assume that the function $W$ is twice continuously differentiable and $h$ is continuously differentiable. Moreover, suppose that the surfaces $\\Gamma_1, \\Gamma_2, \\Gamma^+, \\Gamma^-, \\Gamma_4$ are sufficiently smooth. \n\nGiven $\\Gamma \\subset \\partial\\Omega,$ with $\\mathcal{H}^{2}(\\Gamma)>0,$ in what follows we denote by $\\mathbf{C}^{1}(\\Gamma;\\mathbb{R}^{3})$ the family of restrictions to $\\Gamma$ of all functions $u:\\Omega\\rightarrow \\mathbb{R}^{3}$ whose gradients are continuous up to the boundary. Let us supply $\\mathbf{C}^{1}(\\Omega;\\mathbb{R}^{3})$ with the natural sup-norm. \n\nWe consider the problem (\\ref{varproblem}) defined in the space $\\mathbf{C}^{1}(\\Omega;\\mathbb{R}^{3})\\times \\mathbf{C}^{1}(\\Gamma^+;\\mathbb{R}^{3})$.\n\n\\begin{theorem}\nLet $(\\bar{u},\\bar{\\sigma})\\in \\mathbf{C}^{1}({\\Omega};\\mathbb{R}^{3})\\times \\mathbf{C}^{1}({\\Gamma}^+;\\mathbb{R}^{3})$ be a minimizer of problem (\\ref{varproblem}). Assume that $\\nabla h(\\bar{u}(x))\\neq 0$, $x\\in \\Gamma_2$, and ${\\rm det}\\,\\nabla\\bar{u}(\\bar{\\sigma}(x))\\neq 0$, $x\\in\\Gamma^{+}$. Then the following conditions are satisfied:\n\\begin{eqnarray}\n&& {\\rm Div}(\\nabla W)(\\nabla\\bar{u}(x))=0,\\;\\; x\\in\\Omega; \\label{f1}\\\\\n\\smallskip\n&& \\nabla W(\\nabla\\bar{u}(x))\\nu(x)=0,\\;\\; x\\in \\Gamma_3; \\label{f2}\\\\\n\\smallskip\n&& \\nabla W(\\nabla\\bar{u}(x))\\nu(x)\\times\\nabla h(\\bar{u}(x))=0,\\;\\; x\\in \\Gamma_2; \\label{f3}\\\\\n\\smallskip\n&&\\nabla W(\\nabla\\bar{u}(x))\\nu(x)=0,\\;\\; x\\in \\Gamma^{\\pm}. 
\\label{f4}\n\\end{eqnarray}\n\\end{theorem}\n\n\\begin{proof} Let us write the constraints in the minimization problem (\\ref{varproblem}) as $F(u,\\sigma)=0$, where the map $F:\\mathbf{C}^{1}({\\Omega};\\mathbb{R}^{3})\\times \\mathbf{C}^{1}({\\Gamma}^+;\\mathbb{R}^{3})\\rightarrow \\mathbf{C}^{1}({\\Gamma}_1;\\mathbb{R}^{3})\\times \\mathbf{C}^{1}({\\Gamma}_2;\\mathbb{R})\\times \\mathbf{C}^{1}({\\Gamma}^+;\\mathbb{R}^{3})$ is given by\n$$\nF(u,\\sigma ):=(u(x)-x,h(u(x)),u(x)-u(\\sigma(x))).\n$$\nUnder our assumptions the map $F$ and the functional $I$ are both Fr\\'{e}chet differentiable. In particular, for the (Fr\\'{e}chet) derivative of $F$ at the point $(\\bar{u},\\bar{\\sigma})$ we have\n$$\nDF(\\bar{u},\\bar{\\sigma})(\\tilde{u},\\tilde{\\sigma})(x)=\n\\left(\n\\begin{array}{c}\n\\tilde{u}(x)\\\\\n\\langle\\nabla h(\\bar{u}(x)),\\tilde{u}(x)\\rangle\\\\\n\\tilde{u}(x)-\\tilde{u}(\\bar{\\sigma}(x))-\\nabla \\bar{u}(\\bar{\\sigma}(x))\\tilde{\\sigma}(x)\n\\end{array}\n\\right).\n$$\nHere, and in what follows, $\\langle\\cdot,\\cdot\\rangle$ denotes the inner product in $\\mathbb{R}^{3}.$ Taking into account that $\\nabla h(\\bar{u}(x))\\neq0$ on $\\Gamma_2$ and that the Jacobian matrix $\\nabla\\bar{u}(\\bar{\\sigma}(x))$ is nondegenerate, we conclude that the linear operator $DF(\\bar{u},\\bar{\\sigma})$ is onto the space $\\mathbf{C}^{1}({\\Gamma}_1;\\mathbb{R}^{3})\\times \\mathbf{C}^{1}(\\Gamma_2;\\mathbb{R})\\times \\mathbf{C}^{1}(\\Gamma^+;\\mathbb{R}^{3}).$\nBy the Lagrange multipliers rule (see, e.g., \\cite{IT}) there exist continuous linear functionals $\\lambda_1,\\;\\lambda_2,\\;\\lambda^+$ on $\\mathbf{C}^{1}(\\Gamma_1;\\mathbb{R}^{3})$,\\; $\\mathbf{C}^{1}(\\Gamma_2;\\mathbb{R})$ and $\\mathbf{C}^{1}(\\Gamma^+;\\mathbb{R}^{3}),$ respectively, such that\n\\begin{equation*}\n\\int_{\\Omega}\\sum_{i,j=1}^{3}\\frac{\\partial W(\\nabla\\bar{u}(x))} {\\partial\\xi_{ij}}\\frac{\\partial\\tilde{u}_i(x)}{\\partial x_j}dx+\\lambda_1(\\tilde{u})+\\lambda_2(\\langle\\nabla h(\\bar{u}),\\tilde{u}\\rangle)+\\lambda^+(\\tilde{u}-\\tilde{u}(\\bar{\\sigma})-\\nabla \\bar{u}(\\bar{\\sigma})\\tilde{\\sigma})=0.\n\\end{equation*}\n\n\\noindent Applying the Divergence theorem we get\n\\begin{eqnarray}\n&&-\\int_{\\Omega}\\left\\langle {\\rm Div}{(\\nabla W(\\nabla\\bar{u}(x)))},\\tilde{u}(x)\\right\\rangle dx+\\int_{\\partial\\Omega}\\left\\langle{\\nabla W(\\nabla\\bar{u}(x))\\nu(x),\\tilde{u}(x)}\\right\\rangle d\\mathcal{H}^{2}(x) \\nonumber \\\\\n&&+\\lambda_1(\\tilde{u})+\\lambda_2(\\langle\\nabla h(\\bar{u}),\\tilde{u}\\rangle)+\\lambda^+(\\tilde{u}-\\tilde{u}(\\bar{\\sigma})-\\nabla \\bar{u}(\\bar{\\sigma})\\tilde{\\sigma})=0. \\label{f5} \n\\end{eqnarray}\nHere $\\nu(x)$ is the unit outer normal to the boundary. Varying $\\tilde{u}$ in (\\ref{f5}) such that $\\tilde{u}(x)=0$ on $\\partial{\\Omega}$, we obtain (\\ref{f1}). Taking then $\\tilde{u} \\in \\mathbf{C}^{1}(\\Omega;\\mathbb{R}^{3})$ with $\\tilde{u}(x)=0$ on $\\partial\\Omega\\setminus \\Gamma_3$ we arrive at (\\ref{f2}).\nFurthermore, choosing appropriate functions $\\tilde{u}$ in (\\ref{f5}) we obtain\n\\begin{equation}\n\\label{f6}\n\\int_{\\Gamma_2}\\left\\langle\\nabla W(\\nabla\\bar{u}(x))\\nu (x),\\tilde{u}(x)\\right\\rangle d\\mathcal{H}^{2}(x)+\\lambda_2(\\langle\\nabla h(\\bar{u}),\\tilde{u}\\rangle)=0 \n\\end{equation}\nwhenever $\\tilde{u}(x)=0,\\; x\\in\\partial\\Omega\\setminus \\Gamma_2$, and\n\\begin{equation}\n\\label{f7}\n\\int_{\\Gamma^+\\cup\\Gamma^-}\\left\\langle{\\nabla W(\\nabla\\bar{u}(x)) }\\nu (x),\\tilde{u}(x)\\right\\rangle d\\mathcal{H}^{2}(x)+\\lambda^+(\\tilde{u}-\\tilde{u}(\\bar{\\sigma})-\\nabla \\bar{u}(\\bar{\\sigma})\\tilde{\\sigma})=0\\;\\; \n\\end{equation}\nwhenever $\\tilde{u}(x)=0,\\;\\; x\\in\\partial\\Omega\\setminus (\\Gamma^+\\cup\\Gamma^-)$.\n\n\nDenote by $\\Gamma_2^0$ the part of $\\Gamma_2$ where the vectors $a(x):=\\nabla W(\\nabla\\bar{u}(x)) \\nu (x)$ 
and $b(x):=\\nabla h(\\bar{u}(x))$ are collinear. Taking an arbitrary $c \\in \\mathbf{C}^{1}(\\Gamma_2;\\mathbb{R})$ such that $c(x)=0$ and $\\nabla c(x)=0$ on $\\Gamma_{2}^0$, let us define\n$$\n\\hat{u}(x):=\n\\left\\{\n\\begin{array}{cl}\n\\frac{a(x)\\langle a(x),b(x)\\rangle-b(x)|a(x)|^2}{\\langle a(x),b(x)\\rangle^2-|a(x)|^2|b(x)|^2}c(x),& x\\in\\Gamma_2\\setminus\\Gamma_2^0,\\\\\n\\smallskip\n0, & x\\in\\Gamma_2^0. \n\\end{array}\n\\right.\n$$\nObviously, $\\hat{u} \\in \\mathbf{C}^{1}(\\Gamma_2;\\mathbb{R}^3)$, $\\langle\\hat{u}(x), a(x)\\rangle=0$ and $\\langle\\hat{u}(x),b(x)\\rangle=c(x)$ for $x\\in \\Gamma_2$. From (\\ref{f6}) we get $\\lambda_2(c)=\\lambda_2(\\langle b,\\hat{u}\\rangle)=0$. Hence, varying $\\tilde{u} \\in \\mathbf{C}^{1}(\\Gamma_2;\\mathbb{R}^3)$ in (\\ref{f6}) in a suitable way (in particular, setting $\\hat u(x)=0$ on $\\Gamma_2^0$) we get $a(x)=0$ on $\\Gamma_2\\setminus\\Gamma_2^0$. Thus the equality (\\ref{f3}) follows.\n\n\nFinally, taking $\\tilde{u}=0$, from (\\ref{f7}) we get $\\lambda^+=0$ and, as a consequence, \n$$\n\\int_{\\Gamma^-}\\left\\langle\\nabla W (\\nabla\\bar{u}(x))\\nu (x),\\tilde{u}(x)\\right\\rangle d\\mathcal{H}^2(x)+\\int_{\\Gamma^+}\\left\\langle\\nabla W (\\nabla\\bar{u}(x))\\nu (x),\\tilde{u}(x)\\right\\rangle d\\mathcal{H}^2(x)=0,\n$$\nwhich implies (\\ref{f4}). 
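Incidentally, the two properties of $\\hat{u}$ claimed above can be checked directly (a brief verification; set $D(x):=\\langle a(x),b(x)\\rangle^2-|a(x)|^2|b(x)|^2$, which is nonzero on $\\Gamma_2\\setminus\\Gamma_2^0$ by the Cauchy-Schwarz inequality, since there $a$ and $b$ are not collinear):\n$$\n\\langle\\hat{u},a\\rangle=\\frac{\\langle a,b\\rangle|a|^2-|a|^2\\langle b,a\\rangle}{D}\\,c=0,\\qquad \\langle\\hat{u},b\\rangle=\\frac{\\langle a,b\\rangle^2-|a|^2|b|^2}{D}\\,c=c.\n$$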
\n\\end{proof}\n\n\\bigskip\n\n\\bigskip\n\n\\noindent {\\bf Acknowledgements}\n\n\\bigskip \n\nThe authors are grateful to Hor\\'{a}cio Costa and Augusta Cardoso for fruitful discussion \nof medical aspects of the problem and also to Giovanni Leoni, who kindly communicate the proof of Lemma \\ref\n{Lemmatrace}.\n\nThis research was supported by Funda\\c{c}\\~{a}o para a Ci\\^{e}ncia e\nTecnologia (FCT), Portuguese Operational Programme for Competitiveness\nFactors (COMPETE), Portuguese National Strategic Reference Framework (QREN)\nand European Regional Development Fund (FEDER) through Project VAPS\n(EXPL\/MAT-NAN\/0606\/2013).\n\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nAlthough we have known since 2012 that there exists a neutral spin-0 resonance with properties (mass and couplings) that are compatible within the experimental error, with those of the scalar SM-Higgs boson~ \\cite{Aad:2012tfa, Chatrchyan:2012xdj}, these data do not exclude the existence of more fields of this sort.\nIn fact, almost all the extensions of the standard model (SM) include extra scalar \nmultiplets: complex~\\cite{Chikashige:1980ui} or real singlets~\\cite{Hill:1987ea,Davoudiasl:2004be,vanderBij:2006ne}, two~\\cite{Branco:2011iw} or more doublets~\\cite{Machado:2012ed}, and Hermitian~\\cite{Brdar:2013iea} and\/or non-Hermitian triplets~\\cite{Konetschny:1977bn,Magg:1980ut,Cheng:1980qt,Escobar:1982dp}. Moreover, extra scalar multiplets usually are introduced in a given model just in order to give masses to the neutrino and\/or charged leptons. Hence, they may have small vacuum expectation values (VEV). \nHowever, this usually implies that there might be light neutral scalars which can be easily ruled out by phenomenology. In two-Higgs doublets this is not the case when there is a positive quadratic term $\\mu^2>0$, which behaves like a positive mass square term in the scalar potential. 
In this case such parameters may dominate the contributions to the masses of the multiplets' members, which are then almost mass degenerate, as happens, e.g., in the context of models with one or more inert doublets; see~\\cite{Machado:2012ed} and \\cite{Ma:2006km}. \n\nThe 3-3-1 models are intrinsically multi-Higgs models. For instance, in the minimal 3-3-1 model (here denoted by m331 for short), the charged leptons gain mass from a triplet and a sextet, and the neutrinos gain Majorana masses only through the sextet~\\cite{Pisano:1991ee,Foot:1992rh,Frampton:1992wt}. On the other hand, in the model with heavy charged~\\cite{Pleitez:1992xh} or neutral leptons~\\cite{Montero:1992jk}, only the triplets are needed, if right-handed neutrinos are introduced and the type-I seesaw mechanism is implemented. \n\nBecause of the sextet, the scalar potential in the m331 becomes more complicated; for this reason it was pointed out in Ref.~\\cite{Montero:2001tq} that the sextet can be omitted if a dimension-five effective operator, involving only triplets, is in charge of the mass generation of the charged leptons and neutrinos. The m331 model without the sextet was called the ``reduced'' m331 model in Ref.~\\cite{Ferreira:2011hm} because only the triplets $\\rho$ and $\\chi$ are introduced. However, there are important differences between our model and those in Refs.~\\cite{Montero:2001tq,Ferreira:2011hm}, as we will discuss in Sec.~\\ref{sec:con}. Moreover, as in the case of the SM, the question now is how this effective operator arises at tree and\/or loop level~\\cite{Ma:1998dn,Bonnet:2012kz,Sierra:2014rxa}. In the context of the m331 model, mechanisms for generating effective dimension-five operators for the case of the neutrino masses were given in Ref.~\\cite{Montero:2001ts}. However, in those works the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix was not considered. 
It is far from obvious that the same parameters that allow to obtain the correct lepton masses also accommodate a realistic PMNS matrix. We show that this is possible in the m331 model with a heavy sextet which implements a sort of type-II seesawlike mechanism in the charged lepton sector and, by introducing right-handed neutrinos, neutrino masses arise from a type-I seesaw mechanism. \n\nIn fact, we show that the sextet is just a way to generate, at the tree level, the effective five-dimensional operator proposed in Ref.~\\cite{Montero:2001tq} in order to give the charged leptons their correct masses. This happens if all fields in this multiplet are heavy and its neutral components gain a small ($s^0_2$) or a zero ($s^0_1$) VEV. We also study in this model the conditions upon the dimensionless coupling constants that imply a scalar potential bounded from below, with a global minimum as well. We obtain a realistic PMNS mixing matrix as well. \n\nThe outline of this paper is the following. In Sec.~\\ref{sec:scalars} we give the scalar representation content of the m331 model and the scalar potential of the model. In Sec.~\\ref{sec:masses} we obtain the scalar mass spectra of the model under the conditions of $Z_7\\otimes Z_3$ discrete symmetries. In Sec.~\\ref{sec:pmns} we obtain the charged lepton and neutrino masses and the corresponding PMNS matrix.\nOur conclusions appear in Sec.~\\ref{sec:con}. The conditions for a stable minimum, at tree level, of the scalar potential are obtained in Appendix~\\ref{sec:appendixa}. In Appendix B we consider the Goldstone bosons in the model with exact mass matrices. \n\n\\section{The scalar sector in the m331 model}\n\\label{sec:scalars}\n\nThe scalar potential in several 3-3-1 models was considered in Refs.~\\cite{Diaz:2003dk,Nguyen:1998ui,Giraldo:2011gd,Hernandez:2014vta}. 
Here, however, we will study only the m331 model in the situation in which the sextet gains a small VEV, the extra scalars in the sextet are heavy, and there is no explicit total lepton number violation in the scalar potential, which is avoided by an appropriate discrete symmetry.\n\nIn the m331 model the scalar sector is composed of a sextet $S\\sim(\\textbf{6},0)$ and three triplets: $\\eta=(\\eta^0\\,\\;-\\!\\eta^{-}_1\\,\\eta^+_2)^T\\sim({\\bf3},0)$, $\\rho=(\\rho^+\\,\\rho^0\\,\\rho^{++})^T\\sim({\\bf3},1)$, and $\\chi=(\\chi^-\\,\\chi^{--}\\,\\chi^0)^T\\sim({\\bf3},-1)$, where $(x,y)$ refer to the $(SU(3)_L,U(1)_X)$ transformations. Only the triplet $\\eta$ and the sextet~$S$,\n\\begin{equation}\nS=\\left(\n\\begin{array}{ccc}\ns^0_1& \\frac{s^-_1}{\\sqrt2} & \\frac{s^+_2}{\\sqrt2}\\\\\n\\frac{s^-_1}{\\sqrt2}& S^{--}_1&\\frac{s^0_2}{\\sqrt2}\\\\\n\\frac{s^+_2}{\\sqrt2}&\\frac{s^0_2}{\\sqrt2}&S^{++}_2\n\\end{array}\n\\right),\n\\label{sextet1}\n\\end{equation}\ncouple to the leptons through the Yukawa interactions $\\overline{(\\Psi_L)^c}\\Psi_LS^*$ and $\\overline{(\\Psi_L)^c}\\Psi_L\\eta$. \n\nWe can write the $SU(3)$ multiplets above in terms of the $SU(2)$ ones. 
For the triplets we write\n\\begin{equation}\n\\eta=\n\\left(\n\\begin{array}{c}\n\\Phi_\\eta \\\\ \\eta^+_2\\end{array}\\right)\\sim({\\bf3},0),\\quad\n\\rho=\\left(\\begin{array}{c}\n\\Phi_\\rho \\\\ \\rho^{++}\\end{array} \\right)\\sim({\\bf3},1),\\quad\n\\chi=\\left(\n\\begin{array}{c}\n\\Phi_\\chi\\\\ \\chi^0\\end{array} \\right)\\sim({\\bf3},-1).\n\\label{tripletos}\n\\end{equation}\nThe sextet in Eq.~(\\ref{sextet1}) can be written as\n\\begin{equation}\nS=\\left(\n\\begin{array}{cc}\nT & \\frac{\\Phi_s}{\\sqrt2} \\\\\n\\frac{\\Phi^t_s}{\\sqrt2} & S^{--}_2\n\\end{array}\n\\right),\\quad S^*=\\left(\n\\begin{array}{cc}\nT^* & \\frac{\\Phi^*_s}{\\sqrt2} \\\\\n\\frac{\\Phi^\\dagger_s}{\\sqrt2} & S^{++}_2\n\\end{array}\n\\right),\n\\label{sextet2}\n\\end{equation}\nwhere $\\Phi^t_s$ means the transpose of the doublet $\\Phi_s$. Under the $SU(2)\\otimes U(1)_Y$ group the multiplets $\\Phi_{\\eta,\\rho,\\chi,s}$ in Eqs.~(\\ref{tripletos}) and (\\ref{sextet2}) transform as\n\\begin{equation}\n\\Phi_\\eta=\\left(\n\\begin{array}{c}\n\\eta^0\\\\ -\\eta^-_1\n\\end{array}\n\\right),\\; \\Phi_\\rho=\\left(\n\\begin{array}{c}\n\\rho^+\\\\ \\rho^0\n\\end{array}\n\\right),\\; \\Phi_\\chi=\\left(\n\\begin{array}{c}\n\\chi^-\\\\ \\chi^{--}\n\\end{array}\n\\right), \\; \\Phi_s=\\left(\n\\begin{array}{c}\ns^+_2\\\\ s^0_2\n\\end{array}\n\\right),\n\\label{dubletos}\n\\end{equation}\nwhere these are doublets with weak hypercharge $Y=-1,+1,-3,+1$, and $T$ in Eq.~(\\ref{sextet2})\n\\begin{equation}\nT=\\left( \\begin{array}{cc}\ns^0_1 &\\frac{s^+_1}{\\sqrt2} \\\\\n\\frac{s^+_1}{\\sqrt2} & S^{--}_1\n\\end{array}\n\\right),\n\\label{triplet}\n\\end{equation}\nis a triplet with $Y=2$. The $SU(2)$ singlets $\\eta^+_2,\\rho^{++},\\chi^0,S^{--}_2$ have $Y=+2,+4,0,+4$, respectively. 
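The electric charges implicit in the component names above follow from the standard relation $Q=T_{3}+Y\/2$. The following minimal script (an illustrative consistency check only, not part of the model construction; the heavy sextet components $S^{\\pm\\pm}_{1,2}$ are left out) verifies the doublet and singlet assignments:

```python
# Consistency check of Q = T3 + Y/2 for the SU(2) x U(1)_Y decomposition
# quoted in the text (doublets Phi_eta, Phi_rho, Phi_chi, Phi_s and the
# SU(2) singlets eta_2^+, rho^{++}, chi^0).
# Each entry: (field, T3, Y, electric charge read off from the superscript).
states = [
    ("eta^0",    +0.5, -1,  0),   # upper component of Phi_eta (Y = -1)
    ("eta_1^-",  -0.5, -1, -1),
    ("rho^+",    +0.5, +1, +1),   # Phi_rho (Y = +1)
    ("rho^0",    -0.5, +1,  0),
    ("chi^-",    +0.5, -3, -1),   # Phi_chi (Y = -3)
    ("chi^--",   -0.5, -3, -2),
    ("s_2^+",    +0.5, +1, +1),   # Phi_s (Y = +1)
    ("s_2^0",    -0.5, +1,  0),
    ("eta_2^+",   0.0, +2, +1),   # SU(2) singlets (T3 = 0)
    ("rho^++",    0.0, +4, +2),
    ("chi^0",     0.0,  0,  0),
]

for name, t3, y, q in states:
    assert t3 + y / 2 == q, f"charge mismatch for {name}"
print("all listed charge assignments are consistent")
```

Running it confirms that the hypercharges $Y=-1,+1,-3,+1$ quoted for $\\Phi_{\\eta,\\rho,\\chi,s}$, together with $Y=+2,+4,0$ for the singlets $\\eta^+_2,\\rho^{++},\\chi^0$, reproduce the electric charges indicated by the field names.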
\n\nThe total lepton number assignment in the scalar sector is~\\cite{Liu:1993gy}\n\\begin{equation} \nL(T^*,\\eta^-_2,\\Phi_\\chi,\\rho^{--},S^{--}_2)=+2,\\quad L(\\Phi_{\\eta,\\rho,s},\\chi^0)=0.\n\\label{ln}\n\\end{equation}\nNotice that the only scalar doublet carrying lepton number is $\\Phi_\\chi$, and both members of the doublet have electric charge; for this reason, $\\langle \\Phi_\\chi\\rangle=0$ always. The existence of scalars carrying lepton number implies the possibility of explicit breaking of this quantum number in the scalar potential. It is possible to avoid such terms by imposing an appropriate discrete symmetry. We show one possibility in Table~\\ref{z7}.\nIn the table, $Q_{1,2}$ denote the quark triplets and $Q_3$, the quark antitriplet, $j_{mR}$ and $J$ are exotic quarks carrying electric charge of -4\/3 and 5\/3, in units of the positron charge $e$. For more details, see Ref.~\\cite{Machado:2013jca}. \n\nSince the complex triplet $T$ and the singlet (under $SU(2)$) $S^{++}_2$ carry lepton number they do not mix with $\\Phi_{\\eta,\\rho,\\chi}$ if there are no lepton number violating terms in the scalar potential. As we will show below, there is some range of the parameter space that allows $\\langle s^0_1\\rangle=0$ and $\\langle s^0_2\\rangle\/v_W\\ll1$, where $v_W=246$ GeV is the electroweak energy scale. In this situation the neutral scalar $s^0_1$ does not participate in the spontaneous symmetry breaking and $s^0_2$ has a small effect on the vector and charged lepton masses. At this stage, active neutrinos are massless and the charged leptons gain a rather small mass. However, these scalar fields are heavy and the charged leptons gain the appropriate mass through the interaction with the triplet $\\eta$ and an effective interaction involving the triplets $\\rho$ and $\\chi$. 
As in the standard model, a non-Hermitian scalar triplet can generate the neutrino masses at tree level through the effective interaction $(1\/\\Lambda)\\phi^0\\phi^0\\nu\\nu$, via the exchange of a complex triplet~\\cite{Ma:1998dn} (see Fig.~\\ref{fig1}).\n\n\\begin{table}\n\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\\hline\\hline\n\t\t& $Q_{(1,2)L}$ & $Q_{3L}$ & $U_{aR}$ & $D_{aR}$ & $\\Psi_{aL}$ & $\\nu_{aR}$ & $\\eta$ & $\\chi$ & $\\rho$ & $S$ & $j_{mR}$ & $J_{R}$ \\\\ \\hline\n\t\t$Z_{7}$ & 1 & $\\omega^6$ & $\\omega^5$ & $\\omega^3$ & $\\omega^2$ & $\\omega^6$ & $\\omega^3$ & $\\omega^6$\n\t\t& $\\omega^5$ & $\\omega^4$ & $\\omega^6$ & $\\omega^2$ \\\\ \\hline\n\t\t$Z_{3}$ & 1 & w$^2$ & w & w & w & w & w & 1 \n\t\t& w & w & 1 & w \\\\ \\hline \\hline\n\t\\end{tabular}\n\t\\caption{Transformation properties of the fermion and scalar fields under $Z_{7} \\otimes Z_{3} $. Here $\\omega=e^{i2\\pi\/7}$ and w$=e^{i2\\pi\/3}$. } \n\t\\label{z7}\n\n\\end{table}\n\nThe most general scalar potential involving the three triplets and the sextet is~\\cite{Liu:1993gy}\n\\begin{equation}\nV(\\eta,\\rho,\\chi,S)=V^{(2)}+V^{(3)}+V^{(4a)}+\\cdots +V^{(4e)},\n\\label{potential}\n\\end{equation}\nwhere \n\\begin{eqnarray}\nV^{(2)}&=&\\sum_{X=\\eta,\\rho,\\chi,S}\\mu^2_X\\textrm{Tr}(X^\\dagger X),\\nonumber \\\\\nV^{(3)}&=& \\frac{1}{3!} \\,f_1\\epsilon_{ijk} \\eta_i\\rho_j\\chi_k+f_2 (\\chi^T S^* \\rho+\\rho^T S^*\\chi) +f_3 \\eta^T S^* \\eta+\n\\frac{f_4}{3!}\\,\\epsilon_{ijk}\\epsilon_{mnl}\\;S^*_{im}S^*_{jn}S^*_{kl} ,\n\\nonumber \\\\\nV^{(4a)}&=&a_1(\\eta^\\dagger \\eta)^2+a_2(\\rho^\\dagger\\rho)^2+ a_3(\\chi^\\dagger \\chi)^2+\n\\chi^\\dagger\\chi(a_4\\eta^\\dagger\\eta+a_5\\rho^\\dagger\\rho)+a_6(\\eta^\\dagger\\eta)(\\rho^\\dagger \\rho)\\nonumber\\\\&& \n+a_7(\\chi^\\dagger \\eta)(\\eta^\\dagger\\chi)+a_8(\\chi^\\dagger\\rho)(\\rho^\\dagger\\chi)+\na_9(\\eta^\\dagger\\rho)(\\rho^\\dagger\\eta)+[a_{10}(\\chi^\\dagger\\eta)(\\rho^\\dagger\\eta)+H.c.],\\nonumber 
\\\nV^{(4b)}&=& b_1\\chi^\\dagger S\\hat{\\chi}\\eta+b_2\\rho^\\dagger S\\hat{\\rho}\\eta+b_3\\eta^\\dagger S[\\hat{\\chi}\\rho-\\hat{\\rho}\\chi]+H.c.,\\nonumber \\\\\nV^{(4c)}&=& c_1\\textrm{Tr}[\\hat{\\eta}S\\hat{\\eta}S]+c_2\\textrm{Tr}[\\hat{\\rho}S\\hat{\\chi}S]+H.c.,\\nonumber \\\\\nV^{(4d)}&=&d_1(\\chi^\\dagger \\chi)\\textrm{Tr}S S^*+d_2 [(\\chi^\\dagger S)(S^* \\chi)]+d_3(\\eta^\\dagger\\eta)\\textrm{Tr}(S S^*)+d_4\\textrm{Tr}[(S^*\\eta)(\\eta^\\dagger S)]\\nonumber \\\\\n&&+d_5(\\rho^\\dagger\\rho)\\textrm{Tr}S S^*+d_6 \\textrm{Tr}[(S^* \\rho)(\\rho^\\dagger S)],\\nonumber \\\\\nV^{(4e)}&=& e_1(\\textrm{Tr}S S^*)^2+e_2\\textrm{Tr}(S S^*S S^*),\n\\label{potential2}\n\\end{eqnarray} \nand we have defined in the $V^{(4b)}$ and $V^{(4c)}$ terms $\\hat{x}_{ij}= \\epsilon_{ijk}x_k$, with $x=\\eta,\\rho,\\chi$. Notice also that $S^\\dagger=S^*$ since $S$ is a symmetric matrix. The conditions for having a potential bounded from below in Eq.~(\\ref{potential2}), under the conditions in Table~\\ref{z7}, are given in Appendix~\\ref{sec:appendixa}.\n\nConcerning the vacuum alignment and the conservation of the lepton number $L$, five possibilities can be considered (see also Ref.~\\cite{Liu:1993gy}): \n\\begin{enumerate}\n\\item[a)] Explicit $L$ violation and $\\langle s^0_{1,2}\\rangle\\not=0$ and arbitrary. This is the most general case and it has not been considered in the literature. \n\\item[b)] Explicit lepton number violation in the scalar potential and $\\langle s^0_1\\rangle = 0$ and $\\langle s^0_2\\rangle\\not=0$ at tree level, but $\\langle s^0_1\\rangle\\not=0$ by loop corrections \\cite{Pleitez:1992xh,Frampton:1993wu}.\n\\item[c)] No explicit $L$ violation and $\\langle s^0_1\\rangle=0$ and $\\langle s^0_2\\rangle=0$. Notwithstanding, the latter condition is not stable under radiative corrections unless a fine-tuning\nis imposed. \n \n\\item[d)] No explicit lepton number violation but $\\langle s^0_1\\rangle\\not=0$ and $\\langle s^0_2\\rangle\\not=0$. 
In this case there is a triplet Majoron that has been ruled out by the $Z$ invisible width~\\cite{GonzalezGarcia:1989zh}.\n\n\\item[e)] No explicit $L$ violation and $\\langle s^ 0_1\\rangle=0$ but $\\langle s^0_2\\rangle\\not=0$. Although $L$ is conserved, there is violation of the family numbers $L_{e,\\mu,\\tau}$. In this case $\\langle s^0_1\\rangle=0$ is stable at tree and higher-order level~\\cite{Foot:1992rh,Pleitez:1992xh}. \n\\end{enumerate}\n\nHere we will consider the last case, (e), with $\\langle s^0_1\\rangle=0$ and $\\langle s^0_2\\rangle\/v_W \\ll 1$. \nThis case occurs if the constraint $v^2_W=\\sum v^2_i=(246\\,\\textrm{GeV})^2$ (note that $i = \\rho, \\eta, s_1, s_2$) is saturated with the $v^2_\\eta$ and $v^2_\\rho$ as in Ref.~\\cite{Machado:2013jca}. \nMoreover, as we said before, in order to simplify the scalar potential, we impose a $Z_7$ discrete symmetry which forbids the $L$ violating terms, $f_3,f_4,a_{10},b_3,c_2=0$, but also the terms $c_1$ and $b_{1,2}$ are forbidden if we impose an additional discrete $Z_3$ symmetry. See Table~\\ref{z7}.\n\n\\section{Scalar mass spectra in the model}\n\\label{sec:masses}\n \nLet us consider the scalar potential in Eq.~(\\ref{potential2}) with the $Z_{7}\\otimes Z_3$ symmetries given in Table~\\ref{z7}. 
As usual, we write $y^0=(1\/\\sqrt{2})(v_y+X^0_y+iI^0_y)$, where $y=\\eta,\\rho,\\chi$ and $s_2$.\nThe constraint equations, obtained by imposing that $\\partial V\/\\partial v_y=0$, where $V$ is the potential in (\\ref{potential2}), under the conditions of item e) above and considering all VEVs real, are given by\n\\begin{eqnarray}\n&& v_\\eta\\left[\\mu^2_\\eta+a_1v^2_\\eta+\\frac{a_6}{2}v^2_\\rho+\\frac{a_4}{2}v^2_\\chi+ d_3v^2_{s_2}+\\frac{f_1v_\\rho v_\\chi}{2\\sqrt{2}v_\\eta}\\right]=0,\\nonumber \\\\&&\nv_\\rho\\left[\\mu^2_\\rho+\\frac{a_6}{2}v^2_\\eta+ a_2v^2_\\rho+\\frac{a_5}{2}v^2_\\chi+ \\frac{d_{56}}{2} v^2_{s_2}+\\frac{(f_1v_\\eta+f_2v_{s_2})v_\\chi} {2\\sqrt{2}v_\\rho}\\right]=0,\\nonumber \\\\&&\nv_\\chi\\left[\\mu^2_\\chi+\\frac{a_4}{2}v^2_\\eta+\\frac{a_5}{2}v^2_\\rho+a_3v^2_\\chi+ d_{12}v^2_{s_2}+\\frac{(f_1v_\\eta+f_2v_{s_2})v_\\rho}{2\\sqrt{2}v_\\chi}\\right]=0,\\nonumber \\\\ && \nv_{s_2}\\left[2\\mu^2_S+d_3 v^2_\\eta+\\frac{d_{56}}{2} v^2_\\rho+\\frac{d_{12}}{2}v^2_\\chi+2e_{12}v^2_{s_2}+\\frac{f_2v_\\rho v_\\chi}{2\\sqrt{2}v_{s_2}}\\right]=0,\n\\label{ce}\n\\end{eqnarray} \nwhere we have defined $d_{56}=2d_5+d_6$, $d_{12}=2d_1+d_2$, and $e_{12}=2e_1+e_2$.\nNotice that no VEV can be zero unless $f_1=f_2=0$. However, if this were the case, the scalar potential would have a non-Abelian symmetry larger than that of the rest of the Lagrangian. Hence, $f_1,f_2\\not=0$ in order that the gauge symmetry of the scalar potential is the same as that of the other terms of the Lagrangian. \nIn the case of the sextet, even if we had begun with $\\mu^2_S>0$ and $v_{s_2}=0$, the term $f_2\\not=0$ induces a tadpole which implies a counterterm leaving this VEV arbitrary. Hence, we assume $\\mu_S^2<0$ and $v_{s_2}\\not=0$ but small in the sense that $v_{s_2}\/v_W\\ll1$. In Appendix~\\ref{sec:appendixb} we show explicitly the Goldstone bosons. 
\n\nHere, we will consider the mass matrices of the $C\\!P$-even scalars and the other mass matrices given in Appendix~\\ref{sec:appendixb}, assuming that $v_{s_2}\/v_\\chi\\ll1$. To further simplify the matrices, we also disregard some additional off-diagonal terms, assuming that the respective diagonal elements are much larger. In this approximation it is possible to obtain exact eigenvectors, but the case of the $CP$-even neutral scalars is more complicated and we will not consider it here in detail. We show only that at least the two neutral scalars in the sextet are heavy. Analytical expressions, within the approximation above, of the masses and the eigenstates of the neutral $C\\!P$-odd and the charged sectors are given.\n \n\\subsection{Neutral $C\\!P$-even scalars}\n\\label{subsec:cpeven2}\n\nIn this sector the mass matrix $m^2$ is $5\\times5$ and decomposes, in the approximation used, into a $4\\times4$ plus a $1\\times1$ block, where \nthe $4\\times4$ matrix, in the basis $(X_\\eta^0,X_\\rho^0, X_\\chi^0, X_{s2}^0 )$, is given by\n\\begin{equation}\n\\left(\n\\begin{array}{cccc}\n2 a_1 v_\\eta^2-\\frac{f_1 v_\\rho v_\\chi}{\\sqrt{2} v_\\eta} & \t a_6v_\\eta v_\\rho+\\frac{f_1 v_\\chi}{\\sqrt{2}} & \\frac{f_1 v_\\rho}{\\sqrt{2}}+a_4 v_\\eta v_\\chi & d_3 v_\\eta v_{s_2} \\\\\n& \\frac{4 a_2 v_\\rho^3-(\\sqrt{2} f_1 v_\\eta -f_2 v_{s_2}) v_\\chi}{2 v_\\rho} & \\frac{f_1 v_\\eta}{\\sqrt{2}}+\\frac{f_2 v_{s_2}}{2}+a_5 v_\\rho v_\\chi & \\frac{1}{2} [(2 d_5+d_6) v_\\rho v_{s_2}+f_2 v_\\chi] \\\\\n& & \\frac{4 a_3 v_\\chi^3-\\sqrt{2} f_1 v_\\eta v_\\rho-f_2 v_\\rho v_{s_2}}{2 v_\\chi} & \\frac{1}{2} [f_2 v_\\rho+(2 d_1+d_2) v_{s_2} v_\\chi] \\\\\n& & & (2 e_1+e_2) v_{s_2}^2-\\frac{f_2 v_\\rho v_\\chi}{2 v_{s_2}} \\\\\n\\end{array}\n\\right),\n\\label{cpeven1}\n\\end{equation}\nand the $1\\times1$ part implies the eigenvalue is $m^2_5=\\frac{(d_2 v_\\chi^2+2 d_4 v_\\eta^2 -d_6 v_\\rho^2 -2 e_2 v_{s_2}^2)v_{s_2}-2 f_2 v_\\rho v_\\chi}{4 v_{s_2}}>0$, which 
implies\n$d_2v^2_\\chi\/4-f_2 v_\\rho v_\\chi\/(2v_{s_2})>-(2d_4v^2_\\eta- d_6 v_\\rho^2 -2 e_2 v_{s_2}^2)\/4$. Recall that $f_2<0$.\nHere the mass eigenvalues are denoted by $m^2_i,\\;i=1,\\cdots,5$.\nIn fact, we can see that $m^2_5\\approx (1\/4)d_2v^2_\\chi-f_2v_\\rho v_\\chi\/(2v_{s_2})$; hence, this is a large mass. One of them must correspond mainly to the 125 GeV scalar discovered at the LHC. Although we have not given details, according to the results in Ref.~\\cite{Machado:2013jca}, if $\\textrm{Re}\\,\\rho^0=0.42 h+\\cdots$, this scalar has a coupling to the top quark numerically equal to that of the Higgs boson in the SM. \n\n\\subsection{Neutral CP-odd scalars}\n\\label{subsec:cpodd2}\n\nThe mass matrix in Eq.~(\\ref{i2}), in the approximation of subsection~\\ref{subsec:cpeven2},\nalso decomposes into $3\\times3$ and diagonal $2\\times2$ matrices. The $3\\times3$ matrix,\nin the basis $(I_\\eta^0,I_\\rho^0, I_\\chi^0) $, is\n\\begin{equation}\nM^2\\approx \\frac{1}{2} \\left(\n\\begin{array}{ccc}\n\\frac{f_1 v_\\rho v_\\chi}{\\sqrt{2} v_\\eta} & \\frac{f_1 v_\\chi}{\\sqrt{2}} & \\frac{f_1 v_\\rho}{\\sqrt{2}} \\\\\n\\frac{f_1 v_\\chi}{\\sqrt{2}} & \\frac{f_1 v_\\eta v_\\chi}{\\sqrt{2} v_\\rho} & \\frac{f_1 v_\\eta}{\\sqrt{2}} \\\\\n\\frac{f_1 v_\\rho}{\\sqrt{2}} & \\frac{f_1 v_\\eta}{\\sqrt{2}} & \\frac{f_1 v_\\eta v_\\rho}{\\sqrt{2} v_\\chi} \\\\\n\\end{array}\n\\right).\n\\label{i1}\n\\end{equation}\n\nThis matrix has two zero eigenvalues and another nonzero one: \n\\begin{equation}\n\\begin{array}{cl}\nM^2_1&=M^2_2=0 \\\\\nM^2_3&= -\\frac{f_1 \\left(v_\\eta^2 \\left(v_\\rho^2+v_\\chi^2\\right)+v_\\rho^2 v_\\chi^2\\right)}{\\sqrt{2} v_\\eta v_\\rho v_\\chi}\\approx -\\frac{f_1v_\\eta v_\\chi}{\\sqrt{2}v_\\rho}>0,\\;\\;f_1<0,\n\\label{a1}\n\\end{array}\n\\end{equation}\nwith respective eigenvectors given by the columns of the matrix:\n\\begin{equation}\n\\left(\n\\begin{array}{ccc}\n-\\frac{v_\\eta}{\\sqrt{v_\\eta^2+v_\\chi^2}} & -\\frac{v_\\eta 
v_\\chi^2}{\\sqrt{\\left(v_\\eta^2+v_\\chi^2\\right) \\left(\\left(v_\\rho^2+v_\\chi^2\\right) v_\\eta^2+v_\\rho^2 v_\\chi^2\\right)}} & \\frac{v_\\chi}{v_\\eta \\sqrt{\\left(\\frac{1}{v_\\rho^2}+\\frac{1}{v_\\eta^2}\\right) v_\\chi^2+1}} \\\\\n0 & v_\\rho \\sqrt{\\frac{v_\\eta^2+v_\\chi^2}{\\left(v_\\rho^2+v_\\chi^2\\right) v_\\eta^2+v_\\rho^2 v_\\chi^2}} & \\frac{v_\\eta v_\\chi}{\\sqrt{\\left(v_\\rho^2+v_\\chi^2\\right) v_\\eta^2+v_\\rho^2 v_\\chi^2}} \\\\\n\\frac{v_\\chi}{\\sqrt{v_\\eta^2+v_\\chi^2}} & -\\frac{v_\\eta^2 v_\\chi}{\\sqrt{\\left(v_\\eta^2+v_\\chi^2\\right) \\left(\\left(v_\\rho^2+v_\\chi^2\\right) v_\\eta^2+v_\\rho^2 v_\\chi^2\\right)}} & \\frac{v_\\eta v_\\rho}{\\sqrt{\\left(v_\\rho^2+v_\\chi^2\\right) v_\\eta^2+v_\\rho^2 v_\\chi^2}} \\\\\n\\end{array}\n\\right).\n\\label{e1}\n\\end{equation}\n\nThe eigenvalues for the $2\\times2$ part are $M^2_4$ and $M^2_5$:\n\\begin{equation}\n\\begin{array}{cl}\nM^2_4&=-\\frac{f_2 v_\\rho v_\\chi}{2 v_{s_2}}>0,\\;\\;f_2<0 \\\\\nM^2_5&=-\\frac{d_2 v_{s_2} v_\\chi^2+2 f_2 v_\\rho v_\\chi}{4 v_{s_2}}>0,\\;\\; 2\\vert f_2\\vert v_\\rho>d_2v_{s_2}.\n\\label{a2} \n\\end{array}\n\\end{equation}\nWe see from Eqs.~(\\ref{a1}) and (\\ref{a2}), that the three physical pseudoscalar fields are heavy and also induce the type-II seesawlike mechanism in the charged lepton sector.\n\n\\subsection{Singly charged scalars 1}\n\\label{subsec:charged12}\n\nIf the lepton number is conserved in the scalar potential (\\ref{potential}) under the conditions in Table~\\ref{z7}, as we are assuming, the charged scalar mass matrix splits in two sectors $(\\rho^+,\\eta_1^+,h^+)$ and $(\\chi^+, \\eta_2^+, h_2^+)$. 
\n\nIn the basis $(\\rho^+,\\eta_1^+,h^+)$, the mass matrix is\n\\begin{equation}\nm^2_+\\approx \\frac{1}{2}\\left(\n\\begin{array}{ccc}\n-\\frac{2 \\sqrt{2} f_1 v_\\eta v_\\chi-2 a_9 v_\\eta^2 v_\\rho}{4 v_\\rho} & \\frac{f_1 v_\\chi}{\\sqrt{2}}-\\frac{a_9 v_\\eta v_\\rho}{2} & 0 \\\\\n\\frac{f_1 v_\\chi}{\\sqrt{2}}-\\frac{a_9 v_\\eta v_\\rho}{2} & \\frac{1}{4} \\left(2 a_9 v_\\rho^2-\\frac{2 \\sqrt{2} f_1 v_\\rho v_\\chi}{v_\\eta}\\right) & 0 \\\\\n0 & 0 & -\\frac{f_2 v_\\rho v_\\chi}{2 v_{s_2}} \\\\\n\\end{array}\n\\right),\n\\end{equation}\nwith the following eigenvalues,\n\\begin{equation}\n\\begin{array}{cl}\nm^2_{+1}&=0 \\\\\nm^2_{+2}&= -\\frac{f_2 v_\\rho v_\\chi}{2 v_{s_2}}>0,\\;\\;f_2<0 \\\\\nm^2_{+3}&= \\frac{\\left(v_\\eta^2+v_\\rho^2\\right) \\left(a_9 v_\\eta v_\\rho-\\sqrt{2} f_1 v_\\chi\\right)}{2 v_\\eta v_\\rho}\\approx -\\frac{1}{\\sqrt2}\\frac{f_1 v_\\eta v_\\chi}{v_\\rho}>0,\n\\end{array}\n\\label{a3}\n\\end{equation}\nand the eigenvectors are given by the columns of the matrix\n\\begin{equation}\n\\left(\n\\begin{array}{ccc}\n\\frac{v_\\rho}{\\sqrt{v_\\eta^2+v_\\rho^2}} & 0 & -\\frac{v_\\eta}{\\sqrt{v_\\eta^2+v_\\rho^2}} \\\\\n\\frac{v_\\eta}{\\sqrt{v_\\eta^2+v_\\rho^2}} & 0 & \\frac{v_\\rho}{\\sqrt{v_\\eta^2+v_\\rho^2}} \\\\\n0 & 1 & 0 \\\\\n\\end{array}\n\\right).\n\\end{equation}\nAccording to (\\ref{a3}), both physical charged scalars in this sector are very heavy. 
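As a quick sanity check on this sector, the upper $2\times2$ block of $m^2_+$ has a vanishing determinant, and the massless combination is the first column of the eigenvector matrix above, $(v_\rho,v_\eta)/\sqrt{v_\eta^2+v_\rho^2}$. The sketch below is ours, with arbitrary trial values for the VEVs and couplings.

```python
import numpy as np

# Upper 2x2 block of m^2_+ in the (rho^+, eta_1^+) basis; trial values only.
ve, vr, vc, f1, a9 = 240.0, 54.0, 2000.0, -0.5, 0.3
r2 = np.sqrt(2.0)
m2 = 0.5 * np.array([
    [-(2*r2*f1*ve*vc - 2*a9*ve**2*vr) / (4*vr), f1*vc/r2 - a9*ve*vr/2],
    [f1*vc/r2 - a9*ve*vr/2, (2*a9*vr**2 - 2*r2*f1*vr*vc/ve) / 4],
])

# A vanishing determinant signals the massless charged mode.
assert abs(np.linalg.det(m2)) < 1.0
# Its eigenvector is (v_rho, v_eta), the first column of the matrix in the text.
g = np.array([vr, ve]) / np.hypot(vr, ve)
assert np.abs(m2 @ g).max() < 1e-5
print("massless charged mode confirmed")
```

Again, the check only uses the block as printed, so it holds independently of overall normalization conventions.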
\n\n\\subsection{Singly charged scalars 2}\n\\label{subsec:charged22}\n\nIn the other singly charged sector the mass matrix in the basis\n$(\\chi^+, \\eta_2^+, h_2^+)$ is \n\\begin{equation}\nM_+^2 \\approx \\frac{1}{2} \\left(\n\\begin{array}{ccc}\n-\\frac{2 \\sqrt{2} f_1 v_\\eta v_\\rho-2 a_7 v_\\eta^2 v_\\chi}{4 v_\\chi} & \\frac{1}{2} \\left(a_7 v_\\eta v_\\chi-\\sqrt{2} f_1 v_\\rho\\right) & 0 \\\\\n\\frac{1}{2} \\left(a_7 v_\\eta v_\\chi-\\sqrt{2} f_1 v_\\rho\\right) & \\frac{1}{2} v_\\chi \\left(a_7 v_\\chi-\\frac{\\sqrt{2} f_1 v_\\rho}{v_\\eta}\\right) & 0 \\\\\n0 & 0 & -\\frac{v_\\chi (2 f_2 v_\\rho+d_2 v_{s_2} v_\\chi)}{4 v_{s_2}} \\\\\n\\end{array}\n\\right),\n\\end{equation}\nand the mass eigenvalues are\n\\begin{equation}\n\\begin{array}{cl}\nM^2_{+1}&=0 \\\\\nM^2_{+2}&= -\\frac{v_\\chi (d_2 v_{s_2} v_\\chi+2 f_2 v_\\rho)}{4 v_{s_2}}>0, \\;\\;2\\vert f_2\\vert v_\\rho>d_2v_{s_2}v_\\chi\\\\\nM^2_{+3}&= \\frac{\\left(v_\\eta^2+v_\\chi^2\\right) \\left(a_7 v_\\eta v_\\chi-\\sqrt{2} f_1 v_\\rho\\right)}{2 v_\\eta v_\\chi}>0,\n\\label{a4}\n\\end{array}\n\\end{equation}\nwith the following eigenvectors:\n\\begin{equation}\n\\left(\n\\begin{array}{ccc}\n-\\frac{v_\\chi}{\\sqrt{v_\\eta^2+v_\\chi^2}} & 0 & \\frac{v_\\eta}{\\sqrt{v_\\eta^2+v_\\chi^2}} \\\\\n\\frac{v_\\eta}{\\sqrt{v_\\eta^2+v_\\chi^2}} & 0 & \\frac{v_\\chi}{\\sqrt{v_\\eta^2+v_\\chi^2}} \\\\\n0 & 1 & 0 \\\\\n\\end{array}\n\\right).\n\\end{equation}\nAgain, the charged scalar masses in this sector might be heavy [see Eq.~(\\ref{a4})].\n\n\\subsection{Doubly charged scalars}\n\\label{subsec:2charges}\n\nThe mass matrix in the basis $(\\chi^{++},\\rho^{++},S_1^{++},S_2^{++})$ is \n\\begin{equation}\nM_{++}^2 \\approx \\frac{1}{2}\\left(\n\\begin{array}{cccc}\n\\frac{v_\\rho \\left(a_8 v_\\rho v_\\chi-\\sqrt{2} f_1 v_\\eta\\right)}{2 v_\\chi} & \\frac{1}{2} \\left(a_8 v_\\rho v_\\chi-\\sqrt{2} f_1 v_\\eta\\right) & 0 & 0 \\\\\n\\frac{1}{2} \\left(a_8 v_\\rho v_\\chi-\\sqrt{2} f_1 v_\\eta\\right) & 
\\frac{v_\\chi \\left(a_8 v_\\rho v_\\chi-\\sqrt{2} f_1 v_\\eta\\right)}{2 v_\\rho} & 0 & 0 \\\\\n0 & 0 & -\\frac{v_\\chi (2 f_2 v_\\rho+d_2 v_{s_2} v_\\chi)}{4 v_{s_2}} & 0 \\\\\n0 & 0 & 0 & \\frac{v_\\chi (d_2 v_{s_2} v_\\chi-2 f_2 v_\\rho)}{4 v_{s_2}} \\\\\n\\end{array}\n\\right),\n\\end{equation}\nand the masses squared of the fields are,\n\n\\begin{equation}\n\\begin{array}{cl}\nM^2_{++1}&=0 \\\\\nM^2_{++2}&= \\frac{v_\\chi (d_2 v_{s_2} v_\\chi-2 f_2 v_\\rho)}{4 v_{s_2}}>0,\\;\\; \\\\\nM^2_{++3} &= -\\frac{v_\\chi (d_2 v_{s_2} v_\\chi+2 f_2 v_\\rho)}{4 v_{s_2}}>0 \\\\\nM^2_{++4}&= \\frac{\\left(v_\\rho^2+v_\\chi^2\\right) \\left(a_8 v_\\rho v_\\chi-\\sqrt{2} f_1 v_\\eta\\right)}{2 v_\\rho v_\\chi}>0,\n\\end{array}\n\\end{equation}\nwith the respective eigenvectors:\n\\begin{equation}\n\\left(\n\\begin{array}{cccc}\n-\\frac{v_\\chi}{\\sqrt{v_\\rho^2+v_\\chi^2}} & 0 & 0 & \\frac{v_\\rho}{\\sqrt{v_\\rho^2+v_\\chi^2}} \\\\\n\\frac{v_\\rho}{\\sqrt{v_\\rho^2+v_\\chi^2}} & 0 & 0 & \\frac{v_\\chi}{\\sqrt{v_\\rho^2+v_\\chi^2}} \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n\\end{array}\n\\right).\n\\end{equation}\nIn this doubly charged sector, all physical scalars might be heavy. \n\n\\section{The lepton masses and the PMNS matrix}\n\\label{sec:pmns}\n\nThe problem of the leptonic mixing matrix has already been considered in the context of some 331 models without quarks with exotic charge, right-handed neutrinos transforming nontrivially under $SU(3)_L$, three right-handed Majorana leptons, and with flavor symmetries like $T_7$ \\cite{Hernandez:2015cra}, $A_4$~\\cite{Hernandez:2015tna} and extra scalars transforming as singlets under $SU(3)$. In these sort of models an $S_3$~\\cite{Hernandez:2014lpa} symmetry is also used. Here, however, we will consider the m331 using only the gauge symmetries and also right-handed neutrinos as the extra degrees of freedom. 
\n\nAs said before, the possibility that the lepton masses are generated by an effective higher-dimensional operator was presented in Ref.~\\cite{Montero:2001tq}. However, here we will consider the case when the sextet is introduced but its degrees of freedom are heavy enough to generate an effective nonrenormalizable interaction. This implements a type-II seesaw mechanism in the charged lepton sector. The latter situation arises when the members of the sextet are very heavy and one of the neutral scalars gains a zero VEV and the other one a small VEV. \n\nThe Yukawa interactions are given by\n\\begin{eqnarray}\n-\\mathcal{L}^{lep}_1&=&-\\frac{1}{2}\\epsilon_{ijk}\\,\\overline{(\\Psi_{ia})^c}G^\\eta_{ab} \\Psi_{jb}\\eta_k+\\frac{1}{2\\Lambda_l}\\,\n\\overline{(\\Psi_{a})^c} \\tilde{G}^s_{ab} (\\chi^*\\rho^\\dagger+ \\rho^*\\chi^\\dagger)\\Psi_{b} \\nonumber \\\\ &+&\n\\overline{(\\Psi_{aL})}(G^\\nu)_{ab}\\nu_{bR}\\eta+ \\overline{(\\nu_{aR})^c}(M_R)_{ab}\\nu_{bR}+\\overline{(\\Psi_{ia})^c}G^s_{ab} \\Psi_{jb}S_{ij}+H.c.,\n\\label{effective1}\n\\end{eqnarray}\nwhere $\\Lambda_l$ is a mass scale related to the origin of the dimension-five interaction.\nThe second term in the first line of Eq.~(\\ref{effective1}), the dimension-five operator, is generated by the loop in Fig.~\\ref{fig1}. Notice that in Eq.~(\\ref{effective1}) the interactions with the sextet appear and, although they do not contribute significantly to the charged lepton masses, the degrees of freedom in this multiplet might be excited at high energies.\n\nWe will assume, for the sake of simplicity, that $M_R$ is diagonal and that $m_{3R}\\equiv M>m_{R1},m_{R2}$, and $M^{-1}_R=(1\/M)\\bar{M}_R$, where \n$\\bar{M}_R=\\textrm{diag}(r_{1},r_{2},1)$ and $r_1\\equiv M\/m_{R1},r_2\\equiv M\/m_{R2}$. 
In this case we have the mass matrices in the lepton sector\n\\begin{equation}\nM^\\nu\\approx -\\frac{v^2_\\eta}{2}G^\\nu\\frac{\\bar{M}_R}{M}G^{\\nu T},\\quad\nM^l_{ab}=G^\\eta_{ab}\\frac{v_\\eta}{\\sqrt2}+\\frac{1}{\\Lambda_l}\\tilde{G}^s_{ab}v_\\rho v_\\chi.\n\\label{mmassa3}\n\\end{equation}\nIf this effective interaction is generated by the sextet, through the interaction $\\overline{(\\Psi_{ia})^c} G^s_{ab} \\Psi_{jb}S_{ij}$ and the $f_2$ trilinear term in the scalar potential involving $\\rho,\\chi$ and $S$, then we have $1\/\\Lambda_l=f_2\/m^2_{s_2}$ and\n$\\tilde{G}^s=G^s$. \n\nThese mass matrices are diagonalized as follows: \n\\begin{equation} \n\\hat{M}^\\nu=V^{\\nu T}_LM^\\nu V^\\nu_L,\\;\\; \\hat{M}^l=V^{l\\dagger}_L M^l V^l_R,\n\\label{def}\n\\end{equation} \nwhere $\\hat{M}^\\nu = \\textrm{diag} (m_1, m_2, m_3),\\;\\hat{M}^l = \\textrm{diag} (m_e, m_\\mu, m_\\tau)$. The relations between symmetry eigenstates (primed) and mass eigenstates (unprimed) are $l^\\prime_{L,R}=V^l_{L,R}l_{L,R}$\nand $\\nu^\\prime_L=V^\\nu_L \\nu_L$, where\n$l^\\prime_{L,R}=(e^\\prime,\\mu^\\prime,\\tau^\\prime)^T_{L,R}$, $l_{L,R}=(e,\\mu,\\tau)^T_{L,R}$ and\n$\\nu ^\\prime_L=(\\nu_e\\,\\nu_\\mu\\,\\nu_\\tau)^T_L$\nand $\\nu_L=(\\nu_1\\,\\nu_2\\,\\nu_3)_L$. \n\nIn the following we assume $v_\\chi\\approx \\Lambda_l$ and, as in Ref.~\\cite{Machado:2013jca}, $v_\\rho\\sim 54$ GeV and $v_\\eta\\sim 240$ GeV. \nThe neutrino mass matrix is as in Eq.~(\\ref{mmassa3}). 
Solving simultaneously the following equations:\n\\begin{equation}\n\\hat{M}^\\nu=V^{\\nu T}_L M^\\nu V^\\nu_L,\\quad \nV^{l\\dagger}_L M^l M^{l\\dagger}V^l_L=V^{l\\dagger}_R M^{l\\dagger} M^lV^l_R=(\\hat{M}^l)^2,\\quad V_{PMNS}=V^{l\\dagger}_LV^\\nu_L,\n\\end{equation}\nwhere $M^\\nu$ and $M^l$ are defined in Eq.~(\\ref{mmassa3}), and $V_{PMNS}$ is the mixing matrix in the lepton sector (PMNS), the values obtained for the charged lepton masses are (in MeV) $(m_e,m_\\mu,m_\\tau)=(0.509648,105.541,1775.87)$\nand for the neutrino masses (in eV) $(m_1,m_2,m_3)=(0.051,-0.0194,0.0174)$, which are consistent with \n$\\Delta m^2_{23}=2.219\\times10^{-3}\\,(\\textrm{eV})^2$ and $\\Delta m^2_{21}=7.5\\times 10^{-5}\\,(\\textrm{eV})^2$. \nThese values for the masses arise from the following values for the Yukawa matrices: $v^2_\\eta\/M=0.33$ eV, \n$G^\\nu_{11}=0.109$, $G^\\nu_{12}=0.097$, \n$G^\\nu_{13}=0.101$, $G^\\nu_{22}=0.09$, $G^\\nu_{23}=-0.02$, $G^\\nu_{33}=0.0106$ in the neutrino sector; and \n$G^s_{11}=-0.0453,G^s_{12}=-0.0076,G^s_{13}=-0.0008,G^s_{22}=0.0015,\nG^s_{23}=0.0001,G^s_{33}=1.84\\times10^{-5}$,\n$G^\\eta_{12}=G^\\eta_{13}=G^\\eta_{23}=-0.00001$ in the charged lepton sector. 
The only way to avoid the latter fine-tuning is \nto consider $v_\\eta$ smaller, but in the context of Ref.~\\cite{Machado:2013jca} this VEV is already fixed.\n\nWe obtain for the diagonalization matrices\n\\begin{equation}\nV^\\nu_L\\approx\\left(\\begin{array}{ccc}\n-0.24825& -0.57732& 0.77786\\\\\n0.73980 &-0.40539 &0.53698 \\\\\n-0.62535 &-0.70877 &0.32647 \\\\\n\\end{array}\\right)\n\\label{lep1}\n\\end{equation}\nand\n\\begin{equation}\nV^l_L\\approx\\left(\\begin{array}{ccc}\n-0.00985 &0.01457 &-0.99984 \\\\\n-0.31848& -0.94787 &-0.01067 \\\\\n0.94788&-0.31833 & -0.01398\\\\\n\\end{array}\\right),\\quad \nV^l_R\\approx\\left(\\begin{array}{ccc}\n0.00501&0.00716 & 0.99996\\\\\n0.00261&0.9910 & -0.00717\\\\\n0.99998 &-0.00265 & -0.00499\\\\\n\\end{array}\\right).\n\\label{lep2}\n\\end{equation}\n\nNotice that we have defined the lepton mixing matrix as $V_{PMNS}=V^{l\\dagger}_LV^\\nu_L$, which means that this matrix appears in the charged currents coupled to $W^-$. We obtain from Eqs.~(\\ref{lep1}) and (\\ref{lep2}) the following values for the PMNS matrix:\n\\begin{equation}\n\\vert V_{PMNS}\\vert \\approx\\left(\\begin{array}{ccc}\n0.826& 0.548 & 0.130\\\\\n0.506&0.618 &0.602 \\\\\n0.249 &0.563 &0.788 \\\\\n\\end{array}\\right),\n\\label{pmns}\n\\end{equation}\nwhich is in agreement, within 3$\\sigma$, with the experimental data given in Ref.~\\cite{GonzalezGarcia:2012sz},\n\\begin{equation}\n\\vert V_{PMNS}\\vert \\approx\\left(\\begin{array}{ccc}\n0.795-0.846& 0.513-0.585 & 0.126-0.178\\\\\n0.205-0.543&0.416-0.730 &0.579 - 0.808 \\\\\n0.215 - 0.548 &0.409 - 0.725 &0.567 -0.800 \\\\\n\\end{array}\\right),\n\\label{pmnsexp}\n\\end{equation}\nand we see that it is possible to accommodate all lepton masses and the PMNS matrix. 
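The numerical consistency of Eqs.~(\ref{lep1})--(\ref{pmnsexp}) can be verified directly. The sketch below (ours) recomputes $|V_{PMNS}|=|V^{l\dagger}_LV^\nu_L|$ from the printed matrices; some entries agree with Eq.~(\ref{pmns}) only at the level of a few times $10^{-2}$ (presumably due to rounding of the printed matrices), but every entry falls inside the 3$\sigma$ ranges of Eq.~(\ref{pmnsexp}), and the quoted neutrino masses reproduce the quoted squared-mass splittings at the percent level.

```python
import numpy as np

# Diagonalization matrices of Eqs. (lep1) and (lep2), copied verbatim.
VnuL = np.array([[-0.24825, -0.57732,  0.77786],
                 [ 0.73980, -0.40539,  0.53698],
                 [-0.62535, -0.70877,  0.32647]])
VlL  = np.array([[-0.00985,  0.01457, -0.99984],
                 [-0.31848, -0.94787, -0.01067],
                 [ 0.94788, -0.31833, -0.01398]])

P = np.abs(VlL.T @ VnuL)     # |V_PMNS| = |V_L^l dagger V_L^nu| (real matrices)

P_quoted = np.array([[0.826, 0.548, 0.130],
                     [0.506, 0.618, 0.602],
                     [0.249, 0.563, 0.788]])
lo = np.array([[0.795, 0.513, 0.126],
               [0.205, 0.416, 0.579],
               [0.215, 0.409, 0.567]])
hi = np.array([[0.846, 0.585, 0.178],
               [0.543, 0.730, 0.808],
               [0.548, 0.725, 0.800]])

assert np.all(np.abs(P - P_quoted) < 0.035)   # close to Eq. (pmns)
assert np.all((lo <= P) & (P <= hi))          # inside the 3-sigma ranges

# Squared-mass splittings from (m_1, m_2, m_3) = (0.051, -0.0194, 0.0174) eV.
m = np.array([0.051, -0.0194, 0.0174])
d = np.sort([abs(m[i]**2 - m[j]**2) for i, j in [(0, 1), (1, 2), (0, 2)]])
assert abs(d[0] - 7.5e-5) < 3e-6              # solar-like splitting
assert abs(d[1] - 2.219e-3) < 1e-5            # atmospheric-like splitting
print("PMNS entries and mass splittings reproduced")
```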
We do not consider $CP$ violation here.\n\n\\section{Conclusions}\n\\label{sec:con}\n\nWe have shown that even if we introduce the sextet in such a way that it practically does not contribute to the lepton masses because of the small VEVs, and since its components are very heavy, it might generate the dimension-five operator involving only the triplets $\\rho$ and $\\chi$ in Eq.~(\\ref{effective1}) through a process like the one shown in Fig.~\\ref{fig1}. A similar operator can be obtained for the neutrinos as in Ref.~\\cite{Montero:2001tq}, but here we prefer to introduce right-handed neutrinos in order to implement a type-I seesaw mechanism. Moreover, in the charged lepton sector the mass generation is similar to the type-II seesaw mechanism inducing small masses for neutrinos through the exchange of a heavy non-Hermitian triplet~\\cite{Cheng:1980qt,Konetschny:1977bn,Magg:1980ut}.\nThe existence of several mechanisms to generate this interaction in the context of the standard model has been shown in the literature~\\cite{Ma:1998dn,Bonnet:2012kz,Sierra:2014rxa}. Notwithstanding, the effective operator in Eq.~(\\ref{effective1}) can also originate from the effects of higher-dimension operators. \n\nIn our case, the sextet is introduced as in the m331 model, and through its interactions with the other scalars in the scalar potential and with leptons in the Yukawa interactions, its degrees of freedom can be excited at high energies, mainly in lepton colliders.\nNotice that all extra scalars in the model are heavy except for two neutral scalars that correspond to the fields in a two-Higgs-doublet extension of the SM.\nWe also show the conditions under which the scalar potential satisfies weak copositivity, ensuring vacuum stability and a global minimum at tree level. It is interesting to study the same problem at the one-loop level. 
For instance, in a model with two doublets (one of them inert), taking into account a neutral scalar with a mass of 125 GeV, the stability of the vacuum was shown in Ref.~\\cite{Goudelis:2013uca}; however, such an analysis is beyond the scope of our paper.\n\nIn order to obtain the correct mass for the charged leptons, besides the effective interaction, it is necessary to consider the interactions with the triplet $\\eta$. For neutrinos, as we are considering that $v_{s_1}=0$, we have introduced right-handed components to generate the type-I seesaw mechanism. With the unitary (orthogonal if we neglect phases) matrices that diagonalize the mass matrices in the lepton sector, it is possible to accommodate a realistic Pontecorvo-Maki-Nakagawa-Sakata matrix. The constraints on the masses of the extra particles in the m331 model coming from lepton violation processes will be considered elsewhere.\nIn the present context we recall that it is the neutral scalar $\\rho^0$ which has the largest projection on the neutral scalar with a mass near 125 GeV. For details see Ref.~\\cite{Machado:2013jca}.\n\nFinally, we would like to discuss the differences between our present model and those in Refs.~\\cite{Montero:2001tq,Ferreira:2011hm}. Our model is the usual minimal 3-3-1 model (in the sense that the lepton sector consists only of the known leptons) with four scalar multiplets (three triplets and one sextet) \\textit{plus} right-handed neutrinos. Although the sextet has interactions mediated by scalars [see Eq.~(\\ref{effective1})], its neutral components do not contribute significantly to the lepton masses.\nThe degrees of freedom in the sextet decouple at low energies; however, its interactions have to be taken into account for some phenomenology at sufficiently high energy. For instance, if this mechanism is implemented at the 100 GeV--1 TeV scale, the interactions of the charged leptons with the left-handed neutrinos could have some signature at the LHC or other colliders. 
(See \\cite{Ng:2015hba} and references therein.)\n\nIn Ref.~\\cite{Montero:2001tq}, only the three triplets $\\eta,\\rho$ and $\\chi$ were considered, avoiding the sextet altogether, so that the charged leptons and neutrinos gain mass only through nonrenormalizable interactions. The model of Ref.~\\cite{Ferreira:2011hm} is the so-called ``reduced'' 3-3-1 model, with only the triplets $\\rho$ and $\\chi$ (with no $\\eta$ and $S$ at all). Therefore, to generate all the fermion masses, besides the usual Yukawa interactions with these triplets, they need nonrenormalizable interactions involving the same triplets. Although this is an interesting situation, the model with only the triplets $\\rho$ and $\\chi$ has experimental troubles if we accept the existence of the Landau-like pole. See Ref.~\\cite{Dong:2014bha} for a discussion of the 3-3-1 models with only two triplets, in particular the troubles with the ``reduced'' minimal 3-3-1 model~\\cite{Ferreira:2011hm}. \n\nThus, in the limit of a heavy sextet, our model is a 3-3-1 model with three triplets and the sextet, but the latter does not contribute to the lepton masses and contributes almost nothing to the spontaneous symmetry breaking, since its VEVs are zero or very small in the sense of $v_{s_2}\/v_W\\ll1$. The lepton interactions with the sextet and the interactions among all the scalars \nbecome important only at high energies. 
Our case is more similar to the type-II seesaw mechanism in which a complex heavy triplet is responsible for generating the neutrino masses~\\cite{Ma:1998dn}.\n\n\n\\acknowledgments\n\nThe authors would like to thank CNPq (GDC) and CAPES (ACBM) for full support, and CNPq (VP) for partial\nsupport.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWe consider the problem of online learning in Markov decision processes (MDPs), where a learner sequentially interacts with an environment \nby repeatedly taking actions that influence the future states of the environment while incurring some immediate costs. The goal of the \nlearner is to choose its actions in such a way that the accumulated costs are as small as possible. Several variants of this problem have been \nwell-studied in the literature, primarily in the case where the costs are assumed to be independent and identically distributed \n\\citep{sutton,Puterman1994,ndp,Sze10}. In the current paper, we consider the case where the costs are generated by an arbitrary external \nprocess and the learner aims to minimize its total loss during the learning procedure---conforming to the learning paradigm known as \n\\emph{online learning} \\citep{CBLu06:Book,SS12}. In the online-learning framework, the performance of the learner is measured in terms of \nthe \\emph{regret}, defined as the gap between the total costs incurred by the learner and the total costs of the best comparator chosen\nfrom a pre-specified class of strategies. In the case of online learning in MDPs, a natural class of strategies is the set of all \nstate-feedback policies: several works studied minimizing regret against this class both in the \nstationary-cost~\\citep{bartlett09regal,jaksch10ucrl,AYSze11} and the non-stochastic \nsetting~\\citep{even-dar09OnlineMDP,yu09ArbitraryRewards,neu10o-ssp,neu12ssp-trans,ZiNe13,DGS14,neu14o-mdp-full,AYBK14}. 
In the \nnon-stochastic setting, most works \nconsider MDPs with unstructured, finite state spaces and guarantee that the regret increases no faster than~$O(\\sqrt{T})$ as the number of \ninteraction rounds $T$ grows large. A notable exception is the work of \\citet{AYBK14}, who consider the special \ncase of (continuous-state) linear-quadratic control with arbitrarily changing target states, and propose an algorithm that guarantees a \nregret bound of $O(\\log^2 T)$.\n\nIn the present paper, we study another special class of MDPs that turns out to allow fast rates. Specifically, we consider the class of \nso-called \\emph{linearly solvable MDPs} (in short, LMDPs), first proposed and named by \\citet{Tod06}.\nThis class takes its name \nfrom the special property that the Bellman optimality equations characterizing\nthe optimal behavior policy take the form of a system of linear equations,\nwhich makes optimization remarkably straightforward in such problems. The\ncontinuous formulation (in both space and time) was discovered independently\nby~\\citet{Kap05} and is known as \\emph{path integral control}.\nLMDPs have many interesting properties. For example, optimal control laws for\nLMDPs can be linearly combined to derive composite optimal control laws\nefficiently~\\citep{Tod09}. Also, the inverse optimal control problem in LMDPs can be expressed as a convex optimization problem~\\citep{dvijotham2010inverse}.\nLMDPs generalize an existing duality between optimal control computation and Bayesian inference~\\citep{todorov2008general}.\nIndeed, the popular belief propagation algorithm used in dynamic probabilistic graphical models is\nequivalent to the power iteration method used to solve LMDPs~\\citep{KGO12}.\n\nThe LMDP framework has found applications in robotics~\\citep{latentkl,ariki}, crowdsourcing~\\citep{Abbasi}, and controlling the growth \ndynamics of complex networks~\\citep{klnetwork}. 
The related path integral control framework of \\citet{Kap05} has been applied in several \nreal-world tasks, including robot navigation~\\citep{KinjoFN2013}, motor skill \nreinforcement learning~\\citep{TheodorouJMLR10a,rombokas2013reinforcement,pireps}, aggressive car maneuvering~\\citep{aggressive}, and \nautonomous flight of teams of quadrotors~\\citep{pi_uavs}.\n\n\nIn the present paper, we show that besides the aforementioned properties, the structure of LMDPs also enables constructing efficient online \nlearning procedures with very low regret. In particular, we show that, under some mild assumptions on the structure of the LMDP, the \n(conceptually) simplest online learning strategy of \\emph{following the leader} guarantees a regret of order $\\log^2 T$, vastly improving \nover the best known previous result by \\citet*{GRW12}, who prove a regret bound of order $T^{3\/4+\\epsilon}$ for arbitrarily small \n$\\epsilon>0$ under the same assumptions. Our approach is based on the observation that the optimal control law arising from the LMDP \nstructure is a smooth function of the underlying cost function, enabling rapid learning without any regularization whatsoever.\n\nThe rest of the paper is organized as follows. Section~\\ref{sec:bck} introduces the formalism of LMDPs and summarizes some basic facts that \nour technical content is going to rely on. Section~\\ref{sec:online} describes our online learning model. Our learning algorithm is \ndescribed in Section~\\ref{sec:ftl} and analyzed in Section~\\ref{sec:analysis}. Finally, we draw conclusions in Section~\\ref{sec:discussion}.\n\n\\paragraph{Notation.} We will consider several real-valued functions over a finite state-space $\\mathcal{X}$, and we will often treat these functions \nas finite-dimensional (column) vectors endowed with the usual definitions of the $\\ell_p$ norms. The set of probability distributions over \n$\\mathcal{X}$ will be denoted as $\\Delta(\\mathcal{X})$. 
Indefinite sums with running variables $x,y$ or $s$ are understood to run through all $\\mathcal{X}$.\n\n\\section{Background on linearly solvable MDPs}\\label{sec:bck}\nThis section serves as a quick introduction into the formalism of linearly solvable MDPs (LMDPs, \\cite{Tod06}). These decision processes \nare defined by the tuple $\\ev{\\mathcal{X},P,c}$, where $\\mathcal{X}$ is a finite set of \\emph{states}, $P:\\mathcal{X} \\rightarrow \\Delta(\\mathcal{X})$ is a transition kernel called \nthe \\emph{passive dynamics} (with $P(x'|x)$ being the probability of the process moving to state $x'$ given the previous state $x$) and \n$c:\\mathcal{X}\\rightarrow[0,1]$ is the \\emph{state-cost function}. Our Markov decision process is a sequential decision-making problem where the initial \nstate $X_0$ is drawn from some distribution $\\mu_0$, and the following steps are repeated for an indefinite number of rounds $t=1,2,\\dots$:\n\\begin{enumerate}[leftmargin=.7cm]\n \\item The learner chooses a transition kernel $Q_t:\\mathcal{X}\\rightarrow \\Delta(\\mathcal{X})$ satisfying $\\mathop{supp}Q_t(\\cdot|x) \\subseteq \n\\mathop{supp}P(\\cdot|x)$ \nfor all $x\\in\\mathcal{X}$.\n \\item The learner observes $X_t\\in\\mathcal{X}$ and draws the next state $X_{t+1}\\sim Q(\\cdot|X_t)$.\n \\item The learner incurs the cost\n \\[\n \\ell(X_t,Q_t) = c(X_t) + \\kl{Q_t(\\cdot|X_t)}{P(\\cdot|X_t)},\n \\]\n where $\\kl{q}{p}$ is the relative entropy (or Kullback-Leibler divergence) between the probability distributions $p$ \nand $q$ defined as $\\kl{q}{p}=\\sum_x q(x)\\log \\frac{q(x)}{p(x)}$.\n\\end{enumerate}\n\nThe state-cost function $c$ should be thought of as specifying the objective for the learner in the MDP, while the relative-entropy term \ngoverns the costs associated with significant deviations from the passive dynamics. Accordingly, we refer to this component as the \n\\emph{control \ncost}. 
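As a purely illustrative numerical sketch (the dynamics, kernel, and costs below are made-up toy values, not part of the model specification), the per-step loss defined above, a state cost plus a relative-entropy control cost, can be computed as follows:

```python
import numpy as np

def step_loss(c, P, Q, x):
    """Per-step loss: ell(x, Q) = c(x) + KL(Q(.|x) || P(.|x)).

    c is the state-cost vector, P and Q are row-stochastic matrices
    (rows indexed by the current state), and x is the current state.
    Assumes supp Q(.|x) is contained in supp P(.|x).
    """
    q, p = Q[x], P[x]
    mask = q > 0  # terms with q(x') = 0 contribute 0 by convention
    return float(c[x] + np.sum(q[mask] * np.log(q[mask] / p[mask])))

# Illustrative two-state example (all numbers made up):
P = np.array([[0.5, 0.5], [0.5, 0.5]])   # passive dynamics
Q = np.array([[0.9, 0.1], [0.5, 0.5]])   # chosen kernel deviates in state 0
c = np.array([0.2, 1.0])                 # state costs in [0, 1]
loss = step_loss(c, P, Q, 0)             # state cost plus positive control cost
```

In particular, choosing $Q = P$ incurs no control cost, while any deviation from the passive dynamics adds a strictly positive penalty.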
A central question in the theory of Markov decision problems is finding a behavior policy that minimizes (some notion of) the \nlong-term total costs. In this paper, we consider the problem of \\emph{minimizing the long-term average cost-per-stage} \n$\\limsup_{T\\rightarrow\\infty} \\frac 1T \\sum_{t=1}^T \\ell(X_t,Q_t)$. Assuming that the passive dynamics $P$ is aperiodic and irreducible, this \nlimit is minimized by a \\emph{stationary} policy $Q$ (see, e.g., \\citealt[Sec.~8.4.4]{Puterman1994}). Below, we provide two distinct \nderivations for the optimal stationary policy that minimizes the average costs under this assumption.\n\n\\subsection{The Bellman equations}\\label{sec:bellman}\nWe first take an approach rooted in dynamic programming \\citep{Ber07:DPbookVol2}, following \\citet{Tod06}. Under our assumptions, the \noptimal stationary policy minimizing the average cost is given by finding the solution to the Bellman optimality equation\n\\begin{equation}\\label{eq:bellman_V}\n v(x) = c(x) - \\lambda + \\min_{q\\in\\Delta(\\mathcal{X})} \\ev{\\kl{q}{P(\\cdot|x)} + \\sum_{x'} q(x') v(x')}\n\\end{equation}\nfor all $x\\in\\mathcal{X}$, where $v$ is called the \\emph{optimal value function} and $\\lambda\\in\\mathbb{R}$ is the average cost associated with the \noptimal policy\\footnote{This solution is guaranteed to be unique up to a constant shift of the values: if $v$ is a solution, then so is \n$v + a$ for any $a\\in \\mathbb{R}$. Unless stated otherwise, we will assume that $v$ is such that $v(x_0) = 0$ holds for a fixed state \n$x_0\\in\\mathcal{X}$.}. Linearly solvable MDPs get their name from the fact that the Bellman optimality \nequation can be rewritten in a simple \nlinear form. 
To see this, observe that by elementary calculations involving Lagrange multipliers, we have\n\\begin{align*}\n \\min_{q\\in\\Delta(\\mathcal{X})} \\ev{\\kl{q}{P(\\cdot|x)} + \\sum_{x'} q(x') v(x')} =& -\\log \\sum_{x'} P(x'|x) e^{-v(x')},\n\\end{align*}\nso, after defining the exponentiated value function $z(x) = e^{-v(x)}$ for all $x$, plugging into Equation~\\eqref{eq:bellman_V} and \nexponentiating both sides gives\n\\begin{equation}\\label{eq:bellman_Z}\n z(x) = e^{\\lambda - c(x)} \\sum_{x'} P(x'|x) z(x').\n\\end{equation}\nRewriting the above set of equations in matrix form, we obtain the linear equations\n\\[\n e^{-\\lambda} z = GPz,\n\\]\nwhere $G$ is a diagonal matrix with $G_{ii} = e^{-c(i)}$. By the \\emph{Perron-Frobenius theorem} (see, e.g., Chapter~8 of \\cite{Mey00}) \nconcerning positive matrices, the above system of linear equations has a unique\\footnote{As in the case of the Bellman equations, this \nsolution is unique up to a \\emph{scaling} of $z$.} solution satisfying $z(x)\\ge 0$ for all $x$, and this \neigenvector corresponds to the largest eigenvalue $e^{-\\lambda}$ of $GP$. Since the solution of the Bellman optimality equation \n\\eqref{eq:bellman_V} is unique (up to a constant shift corresponding to a constant scaling of $z$), we obtain that $\\lambda$ is the average \ncost of the optimal policy. In summary, the Bellman optimality \nequation takes the form of a \\emph{Perron--Frobenius eigenvalue problem}, which can be efficiently solved by iterative methods such as the \nwell-known power method for finding top eigenvectors. 
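As a hedged, self-contained sketch of this computation (the two-state passive dynamics and costs below are toy assumptions of our own, and the plain power method stands in for any top-eigenvector solver):

```python
import numpy as np

def solve_lmdp(c, P, iters=2000):
    """Solve (GP) z = e^{-lambda} z with G = diag(exp(-c)) by power
    iteration, returning the (unit-norm, positive) exponentiated value
    function z and the optimal average cost lambda = -log gamma."""
    M = np.diag(np.exp(-c)) @ P
    z = np.ones(len(c))
    for _ in range(iters):
        z = M @ z
        z /= np.linalg.norm(z)    # renormalize to avoid under/overflow
    gamma = z @ (M @ z)           # since M z = gamma z at convergence
    return z, -np.log(gamma)

# Illustrative two-state instance (not from the paper):
P = np.array([[0.8, 0.2], [0.3, 0.7]])
c = np.array([0.0, 1.0])
z, lam = solve_lmdp(c, P)
```

Here $z$ approximates the positive Perron eigenvector of $GP$ and the optimal average cost is recovered from the top eigenvalue $e^{-\\lambda}$.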
Finally, getting back to the basic form~\\eqref{eq:bellman_V} of the Bellman equations, \nwe can conclude after simple calculations that the optimal policy can be computed for all $x,x'$ as\n\\[\n Q(x'|x) = \\frac{P(x'|x) z(x')}{\\sum_y P(y|x) z(y)}.\n\\]\n\n\\subsection{The convex optimization view}\\label{sec:convex}\nWe also provide an alternative (and, to our knowledge, yet unpublished) view of the optimal control problem in LMDPs, based on convex \noptimization. For the purposes of this paper, we find this form to be more insightful, as it enables us to study our \nlearning problem in the framework of online convex optimization \\citep{Haz11,Haz16,SS12}. To derive this form, observe that under \nour assumptions, every feasible policy $Q$ induces a stationary distribution $\\mu_Q$ over the state space $\\mathcal{X}$ satisfying $\\mu_Q^\\mathsf{\\scriptscriptstyle T} \n= \\mu_Q^\\mathsf{\\scriptscriptstyle T} Q$. This stationary distribution and the policy together induce a distribution $\\pi_Q$ over $\\mathcal{X}^2$ defined for all $x,x'$ \nas $\\pi_Q(x,x') = \\mu_Q(x) Q(x'|x)$. We will call $\\pi_Q$ the \\emph{stationary transition measure} induced by $Q$; this name is motivated by \nthe observation that $\\pi_Q(x,x')$ corresponds to the probability of observing the transition $x\\rightarrow x'$ in the equilibrium state: \n$\\pi_Q(x,x') = \\lim_{T\\rightarrow \\infty} \\frac 1T \\sum_{t=1}^T \\PP{X_t = x, X_{t+1}=x'}$. 
Notice that, with this notation, the average \ncost-per-stage of policy $Q$ can be rewritten in the form\n\\[\n\\begin{split}\n &\\lim_{T\\rightarrow\\infty} \\frac 1T \\sum_{t=1}^T \\EE{\\ell(X_t,Q)} = \\sum_{x} \\mu_Q(x) \\Bpa{c(x) + \\kl{Q(\\cdot|x)}{P(\\cdot|x)}}\n \\\\\n &\\qquad\\qquad= \\sum_{x,x'} \\pi_Q(x,x') \\pa{c(x) + \\log\\frac{\\pi_Q(x,x')}{P(x'|x)\\sum_y \\pi_Q(x,y)}}\n \\\\\n &\\qquad\\qquad= \\sum_{x,x'} \\pi_Q(x,x') \\log\\frac{\\pi_Q(x,x')}{\\sum_y \\pi_Q(x,y)} + \\sum_{x,x'} \\pi_Q(x,x') \\pa{c(x) - \\log\\pa{P(x'|x)}}.\n\\end{split}\n\\]\nThe first term in the final expression above is the \\emph{negative conditional entropy} of $X'$ relative to $X$, where $(X,X')$ is a pair \nof random states drawn from $\\pi_Q$. Since the negative conditional entropy is convex in $\\pi_Q$ (for a proof, see \nAppendix~\\ref{app:convexity}) and the second term in the expression is linear in $\\pi_Q$, we can see that $\\lambda$ is a convex function of \n$\\pi_Q$. This suggests that we can \nview the optimal control problem as having to find a feasible stationary transition measure $\\pi$ that minimizes the expected costs. \nIn short, defining \n\\begin{equation}\\label{eq:statcost}\n f(\\pi;c) = \\sum_{x,x'} \\pi(x,x') \\pa{c(x) + \\log\\frac{\\pi(x,x')}{P(x'|x)\\sum_y \\pi(x,y)}}\n\\end{equation}\nand $\\Delta(M)$ as the (convex) set of feasible stationary transition measures $\\pi$ satisfying\n\\begin{equation}\\label{eq:statdist}\n\\begin{split}\n \\sum_{x'} \\pi(x,x') &= \\sum_{x''} \\pi(x'',x) \\;\\;\\;\\;\\;\\,\\qquad (\\forall x),\n \\\\\n \\sum_{x,x'} \\pi(x,x')&=1,\n \\\\\n \\pi(x,x') &\\ge 0 \\qquad\\qquad\\qquad\\qquad (\\forall x,x'),\n \\\\\n \\pi(x,x') &= 0 \\qquad\\qquad\\qquad\\qquad (\\forall x,x': P(x'|x)=0),\n\\end{split}\n\\end{equation}\nthe optimization problem can be succinctly written as $\\min_{\\pi\\in\\Delta(M)} f(\\pi;c)$. 
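To make the objective concrete, the following sketch (with illustrative dynamics and costs of our own choosing) evaluates $f(\\pi;c)$ for the transition measure induced by a policy; for $Q = P$ the control cost vanishes and $f$ reduces to the expected state cost:

```python
import numpy as np

def f(pi, c, P):
    """The objective f(pi; c): expected state cost under mu(x) = sum_x' pi(x, x')
    plus the relative-entropy control cost, as in the display above."""
    mu = pi.sum(axis=1)
    mask = pi > 0
    denom = (mu[:, None] * P)[mask]
    return float(mu @ c + np.sum(pi[mask] * np.log(pi[mask] / denom)))

def induced_measure(Q):
    """pi_Q(x, x') = mu_Q(x) Q(x'|x), with mu_Q^T = mu_Q^T Q (small chains)."""
    n = Q.shape[0]
    # Solve the stationarity equations together with sum(mu) = 1:
    A = np.vstack([Q.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    mu = np.linalg.lstsq(A, b, rcond=None)[0]
    return mu[:, None] * Q

# Illustrative instance (made-up numbers):
P = np.array([[0.7, 0.3], [0.4, 0.6]])
c = np.array([0.5, 0.1])
pi_passive = induced_measure(P)                               # Q = P
pi_deviating = induced_measure(np.array([[0.9, 0.1], [0.4, 0.6]]))
```

Any feasible $\\pi$ built this way automatically satisfies the constraints of Equation~\\eqref{eq:statdist}.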
In Appendix~\\ref{app:dual}, we provide a \nderivation of the optimal control given by Equation~\\eqref{eq:bellman_Z} starting from the formulation given above. We also remark that our \nanalysis will heavily rely on the fact that $f(\\pi;c)$ is affine in $c$. \n\n\n\n\n\n\n\\section{Online learning in linearly solvable MDPs}\\label{sec:online}\nWe now present the precise learning setting that we consider in the present paper.\nWe will study an online learning scheme where for each round $t=1,2,\\dots,T$, the following steps are repeated:\n\\begin{enumerate}[leftmargin=.7cm]\n \\item The learner chooses a transition kernel $Q_t:\\mathcal{X}\\rightarrow \\Delta(\\mathcal{X})$ satisfying $\\mathop{supp}Q_t(\\cdot|x) \\subseteq \n\\mathop{supp}P(\\cdot|x)$ for all $x\\in\\mathcal{X}$.\n \\item The learner observes $X_t\\in\\mathcal{X}$ and draws the next state $X_{t+1}\\sim Q_t(\\cdot|X_t)$.\n \\item Obliviously to the learner's choice, the environment chooses state-cost function $c_t:\\mathcal{X}\\rightarrow[0,1]$.\n \\item The learner incurs the cost\n \\[\n \\ell_t(X_t,Q_t) = c_t(X_t) + \\kl{Q_t(\\cdot|X_t)}{P(\\cdot|X_t)}.\n \\]\n \\item The environment reveals the state-cost function $c_t$.\n\\end{enumerate}\nThe key change from the stationary setting described in the previous section is that the state-cost function now may \\emph{change \narbitrarily} between each round, and the learner is only allowed to observe the costs \\emph{after it has made its decision}. We stress \nthat we assume that the learner \\emph{fully knows} the passive dynamics, so the only difficulty comes from having to deal with the \nchanging costs. As usual in the online-learning literature, our goal is to do nearly as well as the best \\emph{stationary} policy \nchosen in hindsight after observing the entire sequence of cost functions. 
To define our precise performance measure, we first define the \ntotal expected cost of a policy $Q$ as\n\\[\n \\mathcal{L}_T(Q) = \\EE{\\sum_{t=1}^T \\ell_t(X_t',Q)},\n\\]\nwhere the state trajectory $X_t'$ is generated sequentially as $X_t'\\sim Q(\\cdot|X_{t-1}')$ and the expectation integrates over the \nrandomness of the \ntransitions. Having this definition in place, we can specify the best stationary policy\\footnote{The existence of the minimum is warranted \nby the fact that $\\mathcal{L}_T$ is a continuous function bounded from below on its compact domain.} $Q^*_T = \\mathop{\\mbox{ \\rm arg\\,min}}_{Q} \\mathcal{L}_T(Q)$ and define \nour \nperformance measure as the (total expected) \\emph{regret} against $Q^*_T$:\n\\[\n R_T = \\EE{\\sum_{t=1}^T \\ell_t(X_t,Q_t)} - \\mathcal{L}_T(Q^*_T),\n\\]\nwhere the expectation integrates over both the randomness of the state transitions and the potential randomization used by the \nlearning algorithm. Equipped with this definition, we can now formally state the goal of the learner: to come up \nwith a sequence of policies $Q_1,Q_2,\\dots$ that guarantees that the total regret grows sublinearly, that is, that the average per-round \nregret asymptotically converges to zero.\n\nFor our analysis, it will be useful to define an idealized version of the above online optimization problem, where the learner is \nallowed to \\emph{immediately switch} between the stationary distributions of the chosen policies. 
By making use of the convex-optimization \nview given in Section~\\ref{sec:convex}, we define an auxiliary online convex optimization (or, in short, OCO, see, e.g., \n\\citealp{Haz11,SS12}) problem called the \n\\emph{idealized OCO problem} where in each round $t$, the following steps are repeated:\n\\begin{enumerate}\n \\item The learner chooses the stationary transition measure $\\pi_t\\in\\Delta(M)$.\n \\item Obliviously to the learner's choice, the environment chooses the loss function $\\widetilde{\\ell}_t = f(\\cdot;c_t)$.\n \\item The learner incurs a loss of $\\widetilde{\\ell}_t(\\pi_t)$. \n \\item The environment reveals the loss function $\\widetilde{\\ell}_t$.\n\\end{enumerate}\nThe performance of the learner \nin this setting is measured by the \\emph{idealized regret}\n\\[\n \\overline{R}_T = \\sum_{t=1}^T \\widetilde{\\ell}_t(\\pi_t) - \\min_{\\pi\\in\\Delta(M)}\\sum_{t=1}^T \\widetilde{\\ell}_t(\\pi).\n\\]\n\nThroughout the paper, we will consider \\emph{oblivious environments} that choose the sequence of state-cost functions without taking into \naccount the states visited by the learner. This assumption will enable us to simultaneously reason about the expected costs under any \nsequence of state distributions, and thus to make a connection between the idealized regret $\\overline{R}_T$ and the true regret $R_T$. \nThis technique was first used by \\citet{even-dar09OnlineMDP} and was shown to be essentially inevitable by \\citet{yu09ArbitraryRewards}: As \ndiscussed in their Section~3.1, no learning algorithm can avoid linear regret if the environment is not oblivious.\n\n\n\n\\section{Algorithm and main result}\\label{sec:ftl}\nIn this section, we propose a simple algorithm for online learning in LMDPs based on the ``follow-the-leader'' (FTL) strategy. On a high \nlevel, the idea of this algorithm is greedily betting on the policy that seems to have been optimal for the total costs observed so \nfar. 
While this strategy is known to fail catastrophically in several simple learning problems (see, e.g., \\citealt{CBLu06:Book}), it is \nknown to perform well in several important scenarios, such as sequential prediction under the logarithmic loss \\citep{MF92} or prediction \nwith expert advice under bounded, stationary losses \\citep{Kot16}, and it often serves as a strong benchmark \nstrategy \\citep{REGK14,SNL14}. In our learning problem, following the leader is a very natural choice of algorithm, as the convex \nformulation of Section~\\ref{sec:convex} suggests that we can effectively build on the analysis of Follow-the-Regularized-Leader-type \nalgorithms without having to explicitly regularize the objective.\n\nIn precise terms, our algorithm computes the sequence of policies $Q_1,Q_2,\\dots,Q_T$ by running FTL \\emph{in the idealized setting}: in \nround $t$, the algorithm chooses the stationary transition measure \n\\[\n\\begin{split}\n \\pi_t &= \\mathop{\\mbox{ \\rm arg\\,min}}_{\\pi\\in\\Delta(M)} \\sum_{s=1}^{t-1} \\widetilde{\\ell}_s(\\pi) = \\mathop{\\mbox{ \\rm arg\\,min}}_{\\pi\\in\\Delta(M)} \\sum_{s=1}^{t-1} f(\\pi;c_s)\n \\\\\n &= \\mathop{\\mbox{ \\rm arg\\,min}}_{\\pi\\in\\Delta(M)} (t-1)\\cdot f\\pa{\\pi;\\frac{1}{t-1}\\sum_{s=1}^{t-1}c_s} = \\mathop{\\mbox{ \\rm arg\\,min}}_{\\pi\\in\\Delta(M)} \nf\\pa{\\pi;\\overline{c}_t},\n\\end{split}\n\\]\nwhere the third equality uses the fact that $f$ is affine in its second argument and the last step introduces the average state-cost \nfunction $\\overline{c}_t = \\frac {1}{t-1} \\sum_{s=1}^{t-1} c_s$. This form implies that $\\pi_t$ can be computed as the optimal control for the \nstate-cost function $\\overline{c}_t$, which can be done by following the procedure described in Section~\\ref{sec:bellman}. 
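As a hedged illustration of this follow-the-leader update (the dynamics and cost sequence below are our own toy assumptions, and a dense eigensolver stands in for the power method):

```python
import numpy as np

def ftl_policy(cbar, P):
    """One FTL step: the optimal policy for the running average cost cbar.
    Solves the Perron-Frobenius problem for G P with G = diag(exp(-cbar)),
    then sets Q(x'|x) proportional to P(x'|x) z(x')."""
    M = np.diag(np.exp(-cbar)) @ P
    vals, vecs = np.linalg.eig(M)
    z = np.abs(vecs[:, np.argmax(vals.real)].real)  # Perron vector, taken positive
    Q = P * z[None, :]
    return Q / Q.sum(axis=1, keepdims=True)

# Illustrative run on a made-up cost sequence (state 1 is cheaper):
P = np.array([[0.8, 0.2], [0.3, 0.7]])
costs = [np.array([1.0, 0.0]), np.array([0.8, 0.1])]
cbar = np.zeros(2)                     # the initial average cost is zero
for t, c in enumerate(costs, start=1):
    Q = ftl_policy(cbar, P)            # play the leader for the current average
    cbar = ((t - 1) * cbar + c) / t    # then fold in the newly revealed cost
```

As expected, the resulting policy shifts probability mass toward the cheaper state relative to the passive dynamics.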
Precisely, we define the \ndiagonal matrix $G_t$ with its $i$\\th~diagonal element $e^{-\\overline{c}_t(i)}$, let $\\gamma_t$ be the largest eigenvalue of $G_tP$ and $z_t$ be the \ncorresponding (unit-norm) right eigenvector. Also, let $v_t = -\\log z_t$ and $\\lambda_t = -\\log\\gamma_t$, and note that $\\lambda_t = \nf(\\pi_t;\\overline{c}_t)$ is the optimal average-cost-per-stage of $\\pi_t$ given the cost function $\\overline{c}_t$.\nFinally, we define the policy used in round $t$ as\n\\begin{equation}\\label{eq:optQ}\n Q_t(x'|x) = \\frac{P(x'|x) z_t(x')}{\\sum_y P(y|x) z_t(y)}\n\\end{equation}\nfor all $x'$ and $x$. We denote the induced stationary distribution by $\\mu_t$. The algorithm is presented as Algorithm~\\ref{alg:ftl}.\n\n\\begin{algorithm}\n \\textbf{Input:} Passive dynamics $P$.\n \\\\\\textbf{Initialization:} $\\overline{c}_1(x) = 0$ for all $x\\in\\mathcal{X}$.\n \\\\\\textbf{For $t=1,2,\\dots,T$, repeat}\n \\begin{enumerate}[leftmargin=.7cm]\n \\item Construct $G_t = \\left[\\mbox{diag}(e^{-\\overline{c}_t})\\right]$.\n \\item Find the right eigenvector $z_t$ of $G_t P$ corresponding to the largest eigenvalue.\n \\item Compute the policy\n \\[\n Q_t(x'|x) = \\frac{P(x'|x) z_t(x')}{\\sum_y P(y|x) z_t(y)}.\n\\]\n \\item Observe state $X_{t}$ and draw $X_{t+1}\\sim Q_t(\\cdot|X_{t})$.\n \\item Observe state-cost function $c_t$ and update $\\overline{c}_{t+1} = \\frac{\\pa{t-1}\\overline{c}_t + c_t}{t}$.\n \\end{enumerate}\n \\caption{Follow The Leader in LMDPs} \\label{alg:ftl}\n\\end{algorithm}\n\n\nNow we present our main result. First, we state two key assumptions about the underlying passive dynamics; both of these assumptions are \nalso made by \\citet{GRW12}.\n\\begin{assumption}\\label{ass:irred}\n The passive dynamics $P$ is irreducible and aperiodic. In particular, there exists a natural number $H>0$ such that $\\pa{P^n}(y|x)>0$ for \nall $n\\ge H$ and all $x,y\\in\\mathcal{X}$. 
We will refer to $H$ as the (worst-case) \\emph{hitting time}.\n\\end{assumption}\n\\begin{assumption}\\label{ass:ergod}\n The passive dynamics $P$ is ergodic in the sense that its Markov--Dobrushin ergodicity coefficient is strictly less than $1$:\n \\[\n \\alpha(P) = \\max_{x,y\\in\\mathcal{X}} \\onenorm{P(\\cdot|x)-P(\\cdot|y)} < 1.\n \\]\n\\end{assumption}\nA standard consequence (see, e.g., \\citealt{Sen2006}) of Assumption~\\ref{ass:ergod} is that the passive dynamics mixes quickly: for any \ndistributions $\\mu,\\mu'\\in\\Delta(\\mathcal{X})$, we have\n\\[\n \\onenorm{\\pa{\\mu-\\mu'}^\\mathsf{\\scriptscriptstyle T} P}\\le \\alpha(P)\\onenorm{\\mu-\\mu'}.\n\\]\nWe will sometimes refer to $\\tau(P) = \\pa{\\log\\pa{1\/\\alpha(P)}}^{-1}$ as the \\emph{mixing time} associated with $P$. \nNow we are ready to state our main result:\n\\begin{theorem}\\label{thm:main}\n Suppose that the passive dynamics satisfies Assumptions~\\ref{ass:irred} and~\\ref{ass:ergod}. Then, the regret of Algorithm~\\ref{alg:ftl} \nsatisfies $R_T = O(\\log^2 T)$.\n\\end{theorem}\nThe asymptotic notation used in the theorem hides a number of factors that depend only on the \npassive dynamics $P$. In particular, the bound scales polynomially with the worst-case mixing time $\\tau$ of \nany optimal policy, and shows no \\emph{explicit} dependence on the number of states.\\footnote{Of course, the mixing time does depend \non the size of the state space in general.} We explicitly state the bound as Equation~\\eqref{eq:fullbound} at the end of the proof \npresented in the next section, once all terms are formally defined.\n\n\\section{Analysis}\\label{sec:analysis}\nIn this section, we provide a series of lemmas paving the way towards proving Theorem~\\ref{thm:main}. The attentive reader may find some of \nthese lemmas familiar from related work: indeed, we build on several technical results from \\citet{even-dar09OnlineMDP,neu14o-mdp-full} and \n\\citet{GRW12}. 
Our main technical contribution is an efficient combination of these tools that enables us to go well beyond the \nbest known bounds for our problem, proved by \\citet{GRW12}. Throughout the section, we will assume that the conditions of \nTheorem~\\ref{thm:main} are satisfied.\n \nBefore diving into the analysis, we state some technical results that we will use several times. We defer all proofs to \nAppendix~\\ref{sec:app}. First, we present some important facts regarding LMDPs with bounded state-costs. In particular, we define $Q^*(c)$ \nas the optimal policy with respect to an arbitrary state-cost function $c$ and let $\\mathcal{C}$ be the set of all state-costs bounded \nin $[0,1]$. We define $\\mathcal{Q}^*$ as the set of optimal policies induced by state-cost functions in $\\mathcal{C}$: $\\mathcal{Q}^* = \nQ^*(\\mathcal{C})$. Observe that $Q_t\\in\\mathcal{Q}^*$ for all $t$, as $Q_t = Q^*(\\overline{c}_t)$ and $\\overline{c}_t \\in \\mathcal{C}$ for all $t$. Below, we \ngive several useful results concerning policies in $\\mathcal{Q}^*$.\nFor stating these results, let $c\\in\\mathcal{C}$ and $Q = Q^*(c)$. We first note that the average cost $\\lambda$ of $Q$ is bounded in \n$[0,1]$: By the Perron-Frobenius theorem (see, e.g., \\citealp[Chapter~8]{Mey00}), we have that the largest eigenvalue of $GP$ is \nbounded between the minimal and maximal row sums of $GP$: $e^{-\\lambda}\\in[e^{-\\max_x c(x)},e^{-\\min_x c(x)}]$, which translates to \nhaving $\\lambda\\in[0,1]$ under our assumptions. The next key result bounds the value functions and the control costs in terms of the \nhitting time:\n\\begin{lemma}\\label{lem:vbound}\n For all $x,y$ and $t$, the value functions satisfy $v_t(x)-v_t(y) \\le H$. 
Furthermore, all policies $Q\\in\\mathcal{Q}^*$ satisfy\n\\[\n \\max_x \\kl{Q(\\cdot|x)}{P(\\cdot|x)} \\le H+1.\n\\]\n\\end{lemma}\nThe proof is loosely based on ideas from \\citet{bartlett09regal}.\nThe second statement guarantees that the mixing time $\\tau(Q) = \\pa{\\log(1\/\\alpha(Q))}^{-1}$ is finite for all policies in $\\mathcal{Q}^*$:\n\\begin{lemma}\\label{lem:mixing}\n The Markov--Dobrushin coefficient $\\alpha(Q)$ of any policy $Q\\in\\mathcal{Q}^*$ is bounded as\n \\[\n \\alpha(Q) \\le \\alpha(P) + \\pa{1-\\alpha(P)} \\pa{1-e^{-H-2}}<1.\n \\]\n\\end{lemma}\nThe proof builds on the previous lemma and uses standard ideas from Markov-chain theory.\nIn what follows, we will use $\\tau = \\max_{Q\\in\\mathcal{Q}^*} \\tau(Q)$ and $\\alpha = \\max_{Q\\in\\mathcal{Q}^*} \\alpha(Q)$ to denote the worst-case \nmixing time and ergodicity coefficient, respectively. With this notation, we can state the following lemma, which establishes that the value \nfunctions are $2\\tau$-Lipschitz with respect to the state-cost function. To state and prove this result, it is useful to define \nthe \\emph{span seminorm} $\\spannorm{c} = \\max_x c(x) - \\min_y c(y)$. Note \nthat it is easy to show that $\\spannorm{\\cdot}$ is indeed a seminorm, as it satisfies all the requirements to be a norm except that it maps \nall constant vectors (and not just zero) to zero. \n\\begin{lemma}\\label{lem:c_to_v}\n Let $c_1$ and $c_2$ be two state-cost functions taking values in the interval $[0,1]$ and let $v_{c_1}$ and $v_{c_2}$ be the corresponding optimal \nvalue \nfunctions. Then,\n\\[\n \\spannorm{v_{c_1} - v_{c_2}} \\le 2\\tau \\infnorm{c_1-c_2}.\n\\]\n\\end{lemma}\n The proof roughly follows the proof of Proposition~3 of \\citet{GRW12}, with the slight difference that we make the constant factor in the \nbound explicit. 
A consequence of this result is our final key lemma in this section, the one that actually makes our fast rates possible: a bound on \nthe change rate of the policies chosen by the algorithm.\n\\begin{lemma}\\label{lem:change} \n$\\max_x \\onenorm{Q_t(\\cdot|x) - Q_{t+1}(\\cdot|x)} \\le \\frac{\\tau}{t}$.\n\\end{lemma}\nThe proof is based on ideas by \\citet{GRW12}. As for the proof of Theorem~\\ref{thm:main}, we follow the path of \n\\citet{even-dar09OnlineMDP,neu14o-mdp-full,GRW12}, and first analyze the idealized setting where the learner is allowed to directly pick \nstationary distributions instead of policies. Then, we show how to relate the idealized regret of FTL to its true regret in the original \nproblem.\n\n\\subsection{Regret in the idealized OCO problem}\nLet us now consider the idealized online convex optimization problem described at the end of Section~\\ref{sec:online}. \nIn this setting, our algorithm can be formally stated as choosing the stationary transition measure $\\pi_t = \\mathop{\\mbox{ \\rm arg\\,min}}_{\\pi\\in\\Delta(M)} \nf(\\pi;\\overline{c}_t)$. This view enables us to follow a standard proof technique for analyzing online convex optimization algorithms, going back to \nat least \\citet{MF92}.\nThe first ingredient of our proof is the so-called ``follow-the-leader\/be-the-leader'' lemma of \\citet[Lemma~3.1]{CBLu06:Book}:\n\\begin{lemma} \\label{lem:btl}\n$\\sum_{t=1}^T \\widetilde{\\ell}_t(\\pi_{t+1}) \\le \\min_\\pi \\sum_{t=1}^T \\widetilde{\\ell}_t(\\pi)$.\n\\end{lemma}\nThe second step exploits the bound on the change rate of the policies to show that looking one step into the future does not buy much \nadvantage. 
Note however that controlling the change rate is not sufficient by itself, as our loss functions are effectively unbounded.\n\\begin{lemma}\\label{lem:price}\n$\\sum_{t=1}^T \\pa{\\widetilde{\\ell}_t(\\pi_{t}) - \\widetilde{\\ell}_t(\\pi_{t+1})} \\le 2 \\pa{\\tau^2+1} \\pa{1+\\log T}$.\n\\end{lemma}\nIn the interest of space, we only provide a proof sketch here and defer the full proof to Appendix~\\ref{app:price}.\n\\begin{proofsketch}\n Let us define $\\Delta_t = \\overline{c}_{t+1} - \\overline{c}_{t}$. By exploiting the affinity of $f$ in its second argument, we can start by proving\n$\\lambda_t - \\lambda_{t+1} \\le \\infnorm{\\Delta_t}$. Furthermore, by using the form of the optimal policy $Q_t$ given in Eq.~\\eqref{eq:optQ} \nand the form of $f$ given in Eq.~\\eqref{eq:statcost}, we can obtain\n\\begin{align*}\n \\widetilde{\\ell}_t(\\pi_{t}) - \\widetilde{\\ell}_t(\\pi_{t+1}) &= \\pa{\\mu_{t} - \\mu_{t+1}}^\\mathsf{\\scriptscriptstyle T} \\pa{c_t + \\overline{c}_{t}} + \\mu_{t+1}^\\mathsf{\\scriptscriptstyle T} \\pa{\\overline{c}_{t} - \n\\overline{c}_{t+1}}\n + \\lambda_{t} - \\lambda_{t+1}\n \\\\\n &\\le 2 \\onenorm{\\mu_{t+1} - \\mu_t} + 2\\infnorm{\\Delta_t}.\n\\end{align*}\nThe first term can be bounded by a simple argument (see, e.g., Lemma~4 of \\citealt{neu14o-mdp-full}) that leads to\n\\[\n\\onenorm{\\mu_{t+1} - \\mu_t} \\le \\max\\ev{\\tau(Q_t),\\tau(Q_{t+1})} \\max_x \\onenorm{Q_{t+1}(\\cdot|x)-Q_t(\\cdot|x)}.\n\\]\nNow, the first factor can be bounded by $\\tau$ and the second by appealing to Lemma~\\ref{lem:change}. 
The proof is \nconcluded by plugging the above bounds into Equation~\\eqref{eq:lpidiff}, using $\\infnorm{\\Delta_t} \\le 1\/t$, summing up both sides, and \nnoting that $\\sum_{t=1}^T 1\/t \\le 1 + \\log T$.\n\\end{proofsketch}\nPutting Lemmas~\\ref{lem:btl} and~\\ref{lem:price} together, we obtain the following bound on the idealized regret of FTL:\n\\begin{lemma}\\label{lem:ideal}\n$\\overline{R}_T \\le 2\\pa{\\tau^2+1}\\pa{1+\\log T}$.\n\\end{lemma}\n\n\n\\subsection{Regret in the reactive setting}\nWe first show that the advantage of the true best policy $Q^*_T$ over our final policy $Q_{T+1}$ is bounded. \n\\begin{lemma} Let $p^* = \\min_{x,x':P(x'|x)>0} P(x'|x)$ be the smallest non-zero transition probability under the passive dynamics and $B = \n-\\log p^*$. Then,\n$ \\sum_{t=1}^T \\overline{\\loss}_t(\\pi_{T+1}) - \\mathcal{L}_T(Q_T^*) \\le \\pa{2\\tau + 2}\\pa{B+1}$.\n\\end{lemma}\nThe proof follows from applying Lemma~1 from \\citet{neu14o-mdp-full} and observing that $\\ell_t(X_t,Q_T^*) \\le B+1$ holds for all $t$.\nIt remains to relate the total cost of FTL to the total idealized cost of the algorithm. This is done in the following lemma:\n\\begin{lemma} \\label{lem:trueloss}\n$ \\sum_{t=1}^T \\pa{\\EE{\\ell_t(Q_t,X_t)} - \\overline{\\loss}_t(\\pi_{t})} \\le \\pa{\\tau+1}^3 \\pa{1+\\log T}^2 + 2\\pa{\\tau + 1}\\pa{3+\\log T}$.\n\\end{lemma}\n\\begin{proof}\n Let $p_t(x) = \\PP{X_t = x}$. 
Similarly to the proof of Lemma~\\ref{lem:price}, we rewrite $\\overline{\\loss}_t(\\pi_t)$ using \nEquation~\\eqref{eq:lpiform} to obtain\n \\[\n \\begin{split}\n \\EE{\\ell_t(Q_t,X_t) - \\overline{\\loss}_t(\\pi_{t})}\n &=\n \\sum_{x} \\pa{p_t(x) - \\mu_t(x)} \\pa{c_t(x) + v_t(x) + \\lambda_{t} - \\overline{c}_{t}(x) - \\sum_{x'} Q_t(x'|x) v_t(x')}\n \\\\\n &\\le \\sum_{x} p_t(x) \\pa{v_t(x) - \\sum_{x'} Q_t(x'|x) v_{t}(x')} + \\onenorm{p_t- \\mu_t},\n \\end{split}\n \\]\n where the last step uses $\\sum_{x} \\mu_t(x) Q_t(x'|x) = \\mu_t(x')$ and $\\infnorm{c_t - \\overline{c}_t}\\le 1$.\n Now, noticing that $\\sum_{x} p_t(x) Q_t(x'|x) = p_{t+1}(x')$, we obtain\n \\[\n \\begin{split}\n &\\sum_{t=1}^T \\EE{\\ell_t(Q_t,X_t) - \\overline{\\loss}_t(\\pi_{t})}\n \\le \\sum_{t=1}^T \\pa{p_t-p_{t+1}}^\\mathsf{\\scriptscriptstyle T} v_t + \\sum_{t=1}^T \\onenorm{\\mu_t - p_t}\n \\\\\n &\\quad= \\sum_{t=1}^{T} p_t^\\mathsf{\\scriptscriptstyle T} \\pa{v_t - v_{t-1}} + \\sum_{t=1}^T \\onenorm{\\mu_t - p_t} - p_{T+1}^\\mathsf{\\scriptscriptstyle T} v_T\n \\le \\sum_{t=1}^T \\frac{2\\tau}{t} + \\sum_{t=1}^T \\onenorm{\\mu_t - p_t} - p_{T+1}^\\mathsf{\\scriptscriptstyle T} v_T,\n \\end{split}\n \\]\n where the last inequality uses Lemma~\\ref{lem:c_to_v} and $\\infnorm{\\overline{c}_{t} - \\overline{c}_{t-1}}\\le 1\/t$ to bound the first term. 
By \nLemma~\\ref{lem:c_to_v}, this last term can be bounded by $\\norm{v_T}_s = \\norm{v_T - v_0}_s\\le 2\\tau\\infnorm{\\overline{c}_T} \\le 2\\tau$, where $v_0$ \nis the value function corresponding to the all-zero state-cost function.\n \n In the rest of the proof, we are going to prove the inequality\n \\begin{equation}\\label{eq:pmudiff}\n \\onenorm{\\mu_t - p_t} \\le 2 e^{-\\pa{t-1}\/\\tau} + \\frac{2(\\tau+1)^3\\pa{1+ \\log t}}{t}.\n \\end{equation}\n It is easy to see that this trivially holds for $\\pa{2 \\tau \\log t}\/t \\ge \n1$, so we will assume that the contrary holds in the following derivations.\nTo prove Equation~\\eqref{eq:pmudiff} for larger values of $t$, we can follow the proofs \nof Lemma~5 of \\citet{neu14o-mdp-full} or Lemma~5.2 of \\citet{even-dar09OnlineMDP} to obtain\n\\begin{equation}\\label{eq:mu_to_p}\n \\onenorm{\\mu_t - p_t} \\le 2 e^{-\\pa{t-1}\/\\tau} + \\tau \\pa{\\tau+1} \\sum_{n=1}^{t-1} \\frac{e^{-(t-n)\/\\tau}}{n}.\n\\end{equation}\nFor completeness, we include a proof in Appendix~\\ref{app:mu_to_p}.\nFor bounding the last term, we split the sum at $B=\\left\\lfloor t - \\tau\\log t\\right\\rfloor$:\n\\[\n \\begin{split}\n \\sum_{n=1}^{t-1} \\frac{e^{-(t-n)\/\\tau}}{n} &= \n \\sum_{n=1}^{B} \\frac{e^{-(t-n)\/\\tau}}{n} + \\sum_{n=B+1}^{t-1} \\frac{e^{-(t-n)\/\\tau}}{n}\n \\\\\n &= e^{-(t-B)\/\\tau} \\sum_{n=1}^{B} \\frac{e^{-(B-n)\/\\tau}}{n} + \\sum_{n=B+1}^{t} \\frac{e^{-(t-n)\/\\tau}}{n}\n \\\\\n &\\le \\frac{1}{t} \\cdot \\frac{1}{1-e^{-1\/\\tau}} + \\frac{\\tau\\log t}{t-\\tau\\log t} \\le \n \\frac{\\tau }{t} + \\frac{\\tau\\log t}{t} \\cdot \\frac{1}{1-\\pa{\\tau\\log t}\/ t}\n \\\\\n &\\le \\frac{\\tau}{t} + \\frac{2\\tau \\log t}{t} \\le \\frac{2\\tau\\pa{1 + \\log t}}{t},\n \\end{split}\n\\]\nwhere the first inequality follows from bounding the $1\/n$ factors by $1$ and $1\/B$, respectively, and bounding the sums by \nthe full geometric sums. 
The second-to-last inequality follows from our assumption that $(2\\tau\\log t)\/t \\le 1$. That is, we have \nsuccessfully proved Equation~\\eqref{eq:pmudiff}. Now the statement of the lemma follows from summing up for all $t$ and noting that \n$\\sum_{t=1}^T \\frac{1}{t} \\le 1 + \\log T$ and $\\sum_{t=1}^T e^{-\\pa{t-1}\/\\tau} \\le \\tau + 1$.\n\\end{proof}\nNow the proof of Theorem~\\ref{thm:main} follows easily from combining the bounds of Lemmas~\\ref{lem:ideal}--\\ref{lem:trueloss}. The result \nis\n\\begin{equation}\\label{eq:fullbound}\n R_T \\le \n 2\\pa{\\tau+1}^3 \\pa{1+\\log T}^2 + 2\\pa{\\tau^2 + \\tau + 2} (3+\\log T) + \\pa{2\\tau+2}\\pa{B+2}.\n\\end{equation}\nThus, we can see that the bound indeed demonstrates a polynomial dependence on the mixing time $\\tau$, \nand depends logarithmically on the smallest non-zero transition probability $p^*$ via $B = -\\log p^*$.\n\n\n\n\\section{Discussion}\\label{sec:discussion}\nIn this paper, we have shown that, besides the well-established computational advantages, linearly solvable MDPs also admit a remarkable\ninformation-theoretic advantage: fast learnability in the online setting. In particular, we show that a regret of $O(\\log^2 T)$ \nis achievable by the simple algorithm of following the leader, thus greatly improving on the best previously known regret bounds of \n$O(T^{3\/4})$. At first sight, our improvement may appear dramatic: in their paper, \\citet{GRW12} pose the possibility of improving their \nbounds to $O(\\sqrt{T})$ as an important open question (Sec.~VII.). In light of our results, these conjectured improvements are also grossly \nsuboptimal. On the other hand, our new results can also be seen to complement well-known results on fast rates in online learning (see, \ne.g., \\citealt{EGMRW15} for an excellent summary). 
Indeed, our learning setting can be seen as a generalized variant of sequential \nprediction under the relative-entropy loss (see, e.g., \\citealp[Sec.~3.6]{CBLu06:Book}), which is known to be \\emph{exp-concave}. Such \nexp-concave losses are well-studied in the online learning literature, and are known to allow logarithmic regret bounds \\citep{KW99,HAK07}.\n\nInspired by these related results, we ask the question: Is the loss function $f$ defined in Section~\\ref{sec:convex} exp-concave? While our \nderivations in Appendix~\\ref{app:convexity} indicate that $f$ has curvature in certain directions, we were not able to prove its \nexp-concavity. Similarly to the approach of \\citet{MF92}, our analysis in the current paper merely exploits the Lipschitzness of the optimal \npolicies with respect to the cost functions, but otherwise does not explicitly make use of the curvature of $f$. We hope that the work \npresented in this paper will inspire future studies that will clarify the exact role of the LMDP structure in efficient online learnability, \npotentially also leading to a better understanding of policy gradient algorithms for LMDPs \\citep{Tod10}.\n\nFinally, let us comment on the tightness of our bounds. Regardless of whether the loss function $f$ is exp-concave or not, \nwe are almost certain that our rates can be improved to at least $O(\\log T)$ by using a more sophisticated algorithm. While our focus in \nthis paper was on improving the asymptotic regret guarantees, we also slightly improve on the results of \\citet{GRW12} in that we make the \nleading constants more explicit. However, we expect that the dependence on these constants may also be improved in future work. 
\nNote however that the potential \nlooseness of our bounds does not impact the performance of the algorithm itself, as it never makes use of any problem-dependent constants.\n\n\\paragraph{Acknowledgements}\nThis work was supported by the UPFellows Fellowship (Marie Curie COFUND program n${^\\circ}$ \n600387) and the Ramon y Cajal program RYC-2015-18878 (AEI\/MINEICO\/FSE, UE). \nThe authors wish to thank the three anonymous reviewers for their valuable comments that helped to improve the paper.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDeriving the light hadron spectrum from the first principles of\nQCD has been a major subject of lattice QCD\nsimulations\\cite{ref:reviews}.\nA precise determination of the known hadron spectrum\nwould lead us to a fundamental verification of QCD.\nWe should also clarify the nature of observed hadrons,\nprovide predictions for hadrons not in the quark model,\nand give informations for quantities of phenomenological\nimportance.\n\nIn order to achieve these goals, understanding and control\nof various systematic errors are required.\nOne of major sources of systematic errors is that of\na finite lattice spacing.\nRecent progress in reducing this systematic error\nhas been made in two ways.\nFor the quenched QCD spectrum,\ndevelopment of computer power has enabled to push simulations \ntoward smaller lattice spacings on physically larger lattices with \nhigher statistics than the previous attempts.\nAs a result we are now in a status to discuss the problem of how well\nquenched QCD describes the experimental spectrum.\nAnother progress in reducing scaling violation is brought \nwith the use of improved quark actions.\nTests of improvement, previously made mainly in quenched QCD, \nhave been extended this year to full QCD.\n\nFinite size effects and chiral extrapolations have been\nstudied extensively in the past.\nSeveral studies to investigate these systematic errors were \nalso 
reported at the Symposium.\n\nIn this review we attempt to describe the present status of\nspectroscopic studies.\nProgress in quenched QCD spectrum is summarized in\nsec.~\\ref{sec:progress}, emphasizing results in the continuum limit.\nDiscussions on several issues in spectroscopic studies follow\nin sec.~\\ref{sec:issues},\nwhich include study of finite size effects, chiral extrapolations,\nand quenching error in meson decay constants.\nAfter discussions on improvement of quark actions\nin sec.~\\ref{sec:improve},\nattempts toward a realistic calculation in full QCD are presented\nin sec.~\\ref{sec:fullQCD}.\nSec.~\\ref{sec:other} is devoted to results for masses of glueballs\nand exotics. Our conclusions are given in sec.~\\ref{sec:conclusions}.\n\n\n\\section{Progress in Quenched QCD Spectrum}\\label{sec:progress}\n\\subsection{major simulations}\n\nRecent quenched simulations made with the plaquette gauge action\nare compiled in Table \\ref{tab:table-q}. \nSee sec.~\\ref{sec:improve} for those with improved \ngauge actions. \n\nDeriving precise quenched results in the continuum limit \nis a first step toward understanding the light hadron spectrum. \nThe GF11 collaboration\\cite{ref:GF11mass} carried out\nthe first systematic effort to achieve this goal \nwith the Wilson quark action \nusing three lattices with $a^{-1} = 1.4-2.8$ GeV and \nthe spatial size $La \\approx 2.3$ fm.\n \nThis year the CP-PACS collaboration reported further effort \nin this direction\\cite{ref:CPPACS}.\nThey made high statistics simulations\non four lattices with $a^{-1} = 2.0 - 4.2$ GeV and $La \\approx 3$ fm.\nHadron masses are calculated for five quark masses\ncorresponding to $m_\\pi\/m_\\rho$ = 0.75, 0.7, 0.6, 0.5\nand 0.4, the last point being closer to the chiral limit\nthan ever attempted for the Wilson action. 
\nThey reported continuum values of hadron masses with a statistical \nerror of 0.5 \\% for mesons and 1--3 \\% for baryons.\n\nAnother trend in this year's simulations is a pursuit \nof reduction of scaling violation with the use of \nthe Sheikholeslami-Wohlert\\cite{ref:SW} or clover action.\nEfforts in this direction were made by \nthe UKQCD\\cite{ref:UKQCDlat96,ref:UKQCDb57,ref:UKQCDlat97} \nand JLQCD \\cite{ref:JLQCD-fB} collaborations\nfor the tadpole-improved\\cite{ref:LM} clover action\nand by the UKQCD, QCDSF\\cite{ref:QCDSFlatest} and APETOV\\cite{ref:APETOV}\ncollaborations for the non-perturbatively $O(a)$-improved\\cite{ref:NPI} \nclover action (see also Ref.\\cite{ref:Wittig} on this subject).\nThese studies have not yet reached the level of simulations with the Wilson \naction, being restricted to the parameter range \n$m_\\pi\/m_\\rho \\rlap{\\lower 3.5 pt\\hbox{$\\mathchar \\sim$}}\\raise 1pt \\hbox {$>$} 0.5$, $a^{-1} \\rlap{\\lower 3.5 pt\\hbox{$\\mathchar \\sim$}}\\raise 1pt \\hbox {$<$} 3$ GeV, and\n$La \\rlap{\\lower 3.5 pt\\hbox{$\\mathchar \\sim$}}\\raise 1pt \\hbox {$<$} 2.0$ fm.\n\nFor the Kogut-Susskind (KS) quark action, \nthe MILC collaboration\\cite{ref:MILClat96} last year \nreported a result of nucleon mass in the continuum limit based on\nsimulations on four lattices with $a^{-1}=0.6 - 2.4$ GeV and \n$La \\approx 2.7$ fm.\nNot much progress has been made this \nyear\\cite{ref:MILC-Tsukuba-Gottlieb,ref:KimOhta}.\n\n\\begin{table*}[t]\n\\caption{Recent spectrum runs in quenched QCD\nwith the standard gauge action. \nNew results since Lattice 96 are marked by double asterisks and\nthose with increased statistics by asterisks.\nQuark actions are denoted in parentheses by\nW: Wilson, C: clover, and KS: Kogut-Susskind. 
\nClover coefficients are denoted by 1: tree level, TP: tadpole improved,\nTP1: one-loop tadpole improved, and NP: non-perturbatively improved.}\n\\label{tab:table-q}\n\\begin{center}\n\\begin{tabular}{lrrrrrrr}\n & $\\beta$ & size & (fm) & \\#conf. & \\ \\ \\ $m_\\pi\/m_\\rho$ & \\#m & ref.\\\\\n\\hline\n\\hline\nMILC (W)** & 5.70 & $(12-24)^3\\times48$ & 1.7--3.4 & 404-170 & 0.90-0.50 & 6 & \n\\cite{ref:MILC-Tsukuba-Gottlieb,ref:MILClat97} \\\\\n\\hline\nCP-PACS (W)** & 5.90 & $32^3\\times56$ & 3.21 & 800 & 0.75-0.40 & 5 & \\cite{ref:CPPACS}\\\\\nCP-PACS (W)** & 6.10 & $40^3\\times70$ & 3.04 & 600 & 0.75-0.40 & 5 & \\cite{ref:CPPACS}\\\\\nCP-PACS (W)** & 6.25 & $48^3\\times84$ & 3.03 & 420 & 0.75-0.40 & 5 & \\cite{ref:CPPACS}\\\\\nCP-PACS (W)** & 6.47 & $64^3\\times112$& 3.03 & 91 & 0.75-0.40 & 5 & \\cite{ref:CPPACS}\\\\\n\\hline\n\\hline\nUKQCD (C=TP) & 5.70 & $(12,16)^3\\times24$ & (2.1,2.8) & (482,145) & 0.78,0.65 & 2 & \n\\cite{ref:UKQCDlat96,ref:UKQCDb57} \\\\\nUKQCD (C=TP) & 6.00 & $16^3\\times48$ & 1.6 & 499 & 0.76-0.62 & 3 & \n\\cite{ref:UKQCDlat96,ref:UKQCDlat97}\\\\\nUKQCD (C=TP)* & 6.20 & $24^3\\times48$ & 1.8 & 218 & 0.75-0.49 & 3 & \n\\cite{ref:UKQCDlat96,ref:UKQCDlat97} \\\\\nUKQCD (C=NP)** & 6.00 & $(16,32)^3\\times48$ & (1.7,3.3) & (497,70) & 0.77-0.50 & 3 & \n\\cite{ref:UKQCDlat97}\\\\\nUKQCD (C=NP)** & 6.20 & $24^3\\times48$ & 1.7 & 251 & 0.71-0.54 & 3 & \n\\cite{ref:UKQCDlat97} \\\\\n\\hline\nQCDSF (C=1)** & 5.70& $16^3\\times32$ & 2.4 & & 0.66-0.44 & 3 & \n\\cite{ref:QCDSFlatest} \\\\\nQCDSF (C=NP)** & 5.70& $16^3\\times32$ & 3.3& & 0.77-0.56 & 6 & \n\\cite{ref:QCDSFlatest} \\\\\nQCDSF (W)* & 6.00& $(16,24)^3\\times32$ & (1.4,2.0) & O(5000,100)& 0.93-0.50 \n& (4,3) & \\cite{ref:QCDSFlatest,ref:QCDSFb60}\\\\\nQCDSF (C=NP)* & 6.00& $(16,24)^3\\times32$ & (1.7,2.6) & O(1000,200) & 0.90-0.41 & \n(6,3) & \\cite{ref:QCDSFlatest,ref:QCDSFb60} \\\\\nQCDSF (W)** & 6.20& $24^3\\times48$ & 1.6 & O(100) & 0.94-0.61 & 5 & 
\n\\cite{ref:QCDSFlatest}\\\\\nQCDSF (C=NP)** & 6.20& $24^3\\times48$ & 1.8 & O(300) & 0.90-0.59 & 5 & \n\\cite{ref:QCDSFlatest}\\\\\nQCDSF (C=NP)** & 6.20& $32^3\\times64$ & 2.4 & O(40) & 0.55-0.39 & 3 &\n\\cite{ref:QCDSFlat97-2} \\\\\n\\hline\nAPETOV (W)**& 6.20 & $24^3\\times48$ & 1.7 & 50 & & 7 & \n\\cite{ref:APETOV} \\\\\nAPETOV (C=NP)**&6.20 & $24^3\\times48$ & 1.9 & 50 & 0.98-0.56 & 7 & \n\\cite{ref:APETOV} \\\\\n\\hline\nJLQCD (C=TP1)** & 5.90& $16^3\\times40$ & 2.0 & 400 & 0.76-0.56 & 4 & \n\\cite{ref:JLQCD-fB}\\\\\nJLQCD (C=TP1)** & 6.10& $24^3\\times64$ & 2.1 & 200 & 0.77-0.50 & 4 & \n\\cite{ref:JLQCD-fB}\\\\\nJLQCD (C=TP1)** & 6.30& $32^3\\times80$ & 2.2 & 100 & 0.81-0.52 & 4 & \n\\cite{ref:JLQCD-fB}\\\\\n\\hline\n\\hline\nKim-Ohta (KS)* & 6.50 & $48^3\\times64$ & 2.6 & 350 & 0.65-0.28& 4 & \n\\cite{ref:KimOhta}\\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{table*}\n\n\n\\subsection{quenched spectrum in the continuum limit}\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.5cm \\epsfbox{spectrum.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{Quenched light hadron spectrum in the continuum limit \nreported by GF11\\protect\\cite{ref:GF11mass} and\nCP-PACS\\protect\\cite{ref:CPPACS} as compared to experiment.}\n\\label{fig:spectrum}\n\\vspace{-7mm}\n\\end{figure}\n\nIn Fig.~\\ref{fig:spectrum} we plot the result for the quenched \nlight hadron spectrum reported by the CP-PACS collaboration \nas compared to the GF11 result and experiment. \nThe quenched spectrum depends on the choice of hadron masses to set \nthe lattice scale and light quark masses. 
Results for two choices are shown \nin Fig.~\\ref{fig:spectrum}, one employing $m_\\pi, m_\\rho$ and $m_K$ and \nthe other replacing $m_K$ with $m_\\phi$.\nThe disagreement of about 5--10\\%\nobserved for strange hadrons between the two choices represents a \nmanifestation of quenching error.\n\nThe GF11 result, albeit not covering the entire spectrum, \nshowed agreement with experiment within the quoted error \nof 2\\% for mesons and 4--8\\% for baryons.\nComparing their result with the CP-PACS result obtained with \nthe same input (filled circles), \none finds a sizable difference for $K^*, \\phi, \\Xi^*$ and $\\Omega$. \nIn fact the CP-PACS result with significantly reduced errors \nexhibits a clear systematic deviation from experiment both for \nmesons and baryons. \n\n\\subsection{meson spectrum}\n\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.5cm \\epsfbox{Kstar-phi.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{$m_{K^{*}}$ and $m_\\phi$ with $m_K$ as input\nfor the Wilson (filled symbols) and clover (open symbols) \nactions. Physical lattice sizes in fm are given in parentheses in the legends. \nLines are the continuum extrapolations adopted by GF11 and CP-PACS. \nLeft-most triangles are GF11 estimates for infinite volume.\n}\n\\label{fig:Kstar-phi}\n\\vspace{-7mm}\n\\end{figure}\n\nThe CP-PACS result in the continuum shows that the value of $m_{K^*}$ is \n3\\% (6$\\sigma$) smaller than experiment and that of $m_\\phi$ by \n5\\% (7$\\sigma$) if $m_K$ is used as input. Alternatively, with \n$m_\\phi$ as input, they find that $m_{K^{*}}$ agrees with\nexperiment to 0.6\\%, but $m_K$ is larger by 9\\% (7$\\sigma$).\nThis means that a small value of the hyperfine splitting, previously\nobserved at finite lattice spacings\\cite{ref:GF11mass,ref:LANLmass},\nremains in the continuum limit, which is different from the conclusion \nof the GF11 collaboration after the continuum extrapolation. 
\n\nThe origin of the discrepancy is clearly seen in Fig.\\ref{fig:Kstar-phi}\nwhere the continuum extrapolations of $m_{K^{*}}$ and $m_\\phi$ are plotted.\nThe CP-PACS data (filled circles) show very small scaling violation, \nin contrast to an increase exhibited by the GF11 results. \nThe continuum extrapolation of GF11 strongly\ndepends on the small values of results at $\\beta=5.7$ obtained on a \nlattice of size $La \\approx 2.3$~fm ($L=16$). Their additional \nresults for a larger lattice with $La \\approx 3.4$~fm ($L=24$), also \nshown in Fig.~\\ref{fig:Kstar-phi}, are higher \nby 2--3\\%, and are more compatible with the CP-PACS results. \nWhether one can attribute the difference of the GF11 results \nbetween $L=16$ and 24 to finite-size effects \nis not clear since values of the two groups \nfor smaller lattice spacings are consistent.\n\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.0cm \\epsfbox{hyperfine.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{Meson hyperfine splitting obtained with $m_K$ as input. }\n\\label{fig:hyperfine}\n\\vspace{-7mm}\n\\end{figure}\n\nIn Fig.~\\ref{fig:hyperfine} we plot the meson hyperfine splitting as \na function of the pseudo-scalar meson mass squared where $m_K$ is used \nas input. \nThe CP-PACS data at four values of $\\beta$ (filled symbols) \nscale well and do not reproduce the experimental \nvalue of $K$--$K^{*}$ mass splitting.\n\nIn Figs.~\\ref{fig:Kstar-phi} and \\ref{fig:hyperfine}, the clover results \nhave also been plotted with open symbols. \nWe observe that they lie slightly above the Wilson results.\nThis agrees with the expectation that the clover term should increase \nthe hyperfine splitting compared to that of the Wilson action. \nHowever, there is a problematical feature that the difference \nof results for the two actions increases toward the continuum \nlimit rather than decreasing as $O(a)$. 
\nIn fact the UKQCD collaboration\\cite{ref:UKQCDlat97}\nconcluded this year \nthat $m_{K^{*}}$ linearly extrapolated to the continuum limit is\nconsistent with experiment using either $m_K$ or $m_\\phi$ as input. \n\nWe should emphasize that the difference of meson masses for the two actions\nis tiny (1--2\\%) and no more than a 3$\\sigma$ effect at finite $\\beta$.\nLattice sizes of $La\\rlap{\\lower 3.5 pt\\hbox{$\\mathchar \\sim$}}\\raise 1pt \\hbox {$<$} 2$~fm employed in the clover studies may be \ntoo small to avoid finite-size errors at this level of precision.\nStatistical errors of the clover results, \nwhich are larger by a factor of 2--3 compared to those of the Wilson action, \nalso need to be reduced to resolve the discrepancy. \n\nWe compile results for the $J$ parameter\\cite{ref:J} in Fig.~\\ref{fig:J}. \nAs is well known, results for the Wilson action and its improved ones\nconsistently lie below the experimental value for a wide range\nof lattice spacings. Results for the KS action\\cite{ref:JLQCD-BKKS}\nalso converge to a similar value from above.\n \nA small value of $J$ is equivalent to a small hyperfine splitting if the \nlatter is a linear function of the quark mass.\nThis correspondence is satisfied for the Wilson results, \nwhile it is apparently not for the clover case. This represents \nanother problem which needs to be understood in the quenched meson \nspectrum.\n\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.5cm \\epsfbox{J.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{Results for the $J$ parameter. 
\nData are taken from CP-PACS\\protect\\cite{ref:CPPACS},\nJLQCD\\protect\\cite{ref:JLQCD1000,ref:JLQCD-BKKS}\nfor the Wilson and KS actions, respectively,\nLANL\\protect\\cite{ref:LANLmass},\nSCRI\\protect\\cite{ref:SCRIlat96},\nAlford {\\it et al.}\\protect\\cite{ref:Alfordlat95,ref:Alfordlat96}\nfor the D234 and D234(2\/3) actions, respectively.\nLines are fits to the CP-PACS results and the KS results.}\n\\label{fig:J}\n\\vspace{-7mm}\n\\end{figure}\n\n\\subsection{baryon spectrum}\n\nIn Fig.~\\ref{fig:ndE} we plot the continuum extrapolation of\nrepresentative baryon masses reported by the GF11 and CP-PACS \ncollaborations. \nThe quenched value of the nucleon mass has been a long-debated issue. \nPrevious high statistics results\\cite{ref:APE-OLD,ref:QCDPAX,ref:LANLmass}\n(see also Ref.~\\cite{ref:UKQCDlat97}) at $\\beta\\approx 5.7-6.2$ \nobtained by a chiral extrapolation \nfrom $m_\\pi\/m_\\rho\\rlap{\\lower 3.5 pt\\hbox{$\\mathchar \\sim$}}\\raise 1pt \\hbox {$>$} 0.5$ yielded a value higher than \nexperiment. The GF11 results also shared this feature, and agreement \nwith experiment in the continuum limit was obtained only after \na finite-size correction.\n\nThe CP-PACS data down to $m_\\pi\/m_\\rho\\rlap{\\lower 3.5 pt\\hbox{$\\mathchar \\sim$}}\\raise 1pt \\hbox {$>$} 0.4$ show \nthat the nucleon and $\\Lambda$ masses have a negative\ncurvature in terms of $1\/K$ toward the chiral limit. 
\nThe bending significantly lowers the nucleon mass even at finite $\\beta$\nas shown in Fig.~\\ref{fig:ndE}, and a linear continuum extrapolation \nleads to a value 2.3\\% lower than experiment, albeit consistent \nwithin a 3\\% statistical error.\nThe nucleon mass for the KS action from the MILC \ncollaboration\\cite{ref:MILC-Tsukuba-Gottlieb,ref:MILClat96}\nis also consistent with experiment.\nSee Sec.~\\ref{sec:chiral-N} for further discussion on the chiral \nextrapolation.\n\nFor $\\Delta$ and $\\Omega$ masses, the GF11 and CP-PACS results \nare reasonably consistent at similar lattice spacings. \nThe continuum extrapolation is different, especially for $\\Omega$, \nwith the GF11 case strongly \naffected by the results at $\\beta=5.7$ on an $L=16$ lattice. \n\nIn the continuum limit, the CP-PACS results \nshow a systematic deviation from experiment. \nFor the octet, the non-strange nucleon mass is consistent with \nexperiment, while strange baryon masses are lower \nby 5--8\\% (3--5\\%) with $m_K$ ($m_\\phi$) as input.\nHowever, the Gell-Mann-Okubo (GMO) relation is well\nsatisfied at a 1\\% level.\n\nThe GMO relation is also well satisfied for the decuplet, where \nit takes the form of an equal spacing rule, \nwith at most 10\\% deviations.\nHowever, the average spacing is too small by 30\\% (20\\%) with\n$m_K$ ($m_\\phi$) as input.\n\nBaryon mass splittings were extensively studied at $\\beta=6.0$ on \na $32^3\\times 64$ lattice in Ref.~\\cite{ref:LANLmass}, which \nreported the validity of the GMO relations and \nthe smallness of the decuplet mass splitting. \nThe CP-PACS data confirm these results and extend them as the \nproperty of the quenched baryon spectrum in the continuum.\n\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\vspace*{-8mm}\n\\epsfxsize=7.8cm \\epsfbox{ndE.ps}\n\\end{center}\n\\vspace{-15mm}\n\\caption{Continuum extrapolations of baryon masses\nfrom CP-PACS (filled symbols) and GF11 (open symbols). 
} \n\\label{fig:ndE}\n\\vspace{-6mm}\n\\end{figure}\n\n\\subsection{quark mass for the Wilson action}\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.5cm \\epsfbox{mstrange.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{Comparison of strange quark masses obtained \nfrom the Ward identity and perturbation.\nMasses are in $\\overline {\\rm MS}$ scheme at $\\mu=2$ GeV.}\n\\label{fig:strange-mass}\n\\vspace{-8mm}\n\\end{figure}\n\nThe Wilson action explicitly breaks chiral symmetry at finite lattice spacing.\nOne of its manifestations is that quark mass $m_q^{WI}$ defined \nby the Ward identity \\cite{ref:Bo,ref:Itoh86,ref:MM} \ndoes not agree with quark mass $m_q^P$ defined perturbatively\nat finite lattice spacings\\cite{ref:Itoh86,ref:LANLmass}.\n\nThis problem was examined by four groups this year. \nThe CP-PACS collaboration compared the two definitions\nfor the Wilson action, and reported that they linearly extrapolate to \na consistent value in the continuum limit\\cite{ref:CPPACS}. \nThe JLQCD collaboration employed an extended current and found indications\nthat scaling violation for $m_q^{WI}$ becomes smaller than that for the local \ncurrent\\cite{ref:JLQCD-Kura}. \nThe QCDSF collaboration\\cite{ref:QCDSFlatest} reported that\nthe two definitions give consistent results in the continuum limit\nalso for the non-perturbatively $O(a)$ improved clover action.\nThe Ape collaboration\\cite{ref:Ape-QM} reported that $m_q^{WI}$\nare compatible with $m_q^{P}$ at each $\\beta$ when\nrenormalization factors determined non-perturbatively are used.\n\nWe summarize results for the strange quark mass in \nFig.~\\ref{fig:strange-mass}. 
\nThe agreement of $m_q^{WI}$ with $m_q^{P}$ \nin the continuum limit supports our expectation that \nchiral symmetry of the Wilson and clover actions is recovered\nin the continuum limit.\nThe disagreement of the values \n$m_s\\approx 135$ MeV obtained with $m_\\phi$ as input and \n$m_s\\approx 110$ MeV found with $m_K$ as input originates \nfrom the small meson hyperfine splitting, and hence represents a \nquenching uncertainty.\nFurther results on quark masses are reviewed in Ref.~\\cite{ref:Gupta}.\n\n\\section{Issues in Spectroscopic Studies}\\label{sec:issues}\n\n\n\\subsection{finite size effects in quenched QCD}\\label{sec:FS}\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=6.5cm \\epsfbox{finite.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{Nucleon mass for various source\/sink and lattice sizes \nat $\\beta=5.7$ and $m_\\pi\/m_\\rho \\approx 0.5$.\nData are slightly shifted along the horizontal axis for clarity.\n}\n\\label{fig:finite}\n\\vspace{-7mm}\n\\end{figure}\n\nIn quenched QCD finite-size effects on hadron masses are expected to \nbe smaller than in full QCD due to Z(3) symmetry. \nFor the nucleon mass with the KS action, the magnitude has been \nestimated to be less than 2\\% at $m_\\pi\/m_\\rho \\approx 0.5$\nfor $La\\rlap{\\lower 3.5 pt\\hbox{$\\mathchar \\sim$}}\\raise 1pt \\hbox {$>$} 2$ fm\\cite{ref:Aoki-FS,ref:MILC-FS-Q}.\nOn the other hand, the GF11 result\\cite{ref:GF11mass} for the Wilson \naction at $\\beta=5.7$ showed \na larger effect of 5\\% between the sizes $L=16$ (2.3~fm) and 24 (3.4~fm).\n \nThe MILC collaboration carried out extensive runs \nat $\\beta=5.7$ with the Wilson action for the sizes $L=12-24$, \nand we reproduce their results for \nthe nucleon mass\\cite{ref:MILC-Tsukuba-Gottlieb,ref:MILClat97} \ntogether with those of GF11 in Fig.~\\ref{fig:finite}.\n\nThe GF11 result for $L=16$ significantly depends on the source\/sink size, \nwith the value for source size 4 consistent with those for $L=24$. 
\nThe MILC results for $L=16$ \ndo not show a source size dependence. Their values for the sizes \n$L=12-24$ mutually agree within the statistical error of about 2\\%, \nand are also consistent with the GF11 results for $L=24$.\n\nThese comparisons strongly suggest that the finite-size effect at \n$La\\approx 2$~fm is already 2\\% or less for the Wilson action as well, \nrather than the 5\\% estimated by GF11. This implies that finite-size \neffects are negligible for $La \\approx 3$ fm as employed by \nthe CP-PACS collaboration.\n\n\n\\subsection{chiral extrapolation of nucleon mass}\\label{sec:chiral-N}\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.5cm \\epsfbox{chiral.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{(a) Chiral extrapolations of the CP-PACS nucleon masses \nat $\\beta=5.9$. (b) Continuum extrapolations of the nucleon masses\nobtained by various chiral extrapolations.\n}\n\\label{fig:chiral}\n\\vspace{-7mm}\n\\end{figure}\n\nLast year the MILC \ncollaboration\\cite{ref:MILC-Tsukuba-Gottlieb,ref:MILClat96} \nemphasized the difficulties in a reliable chiral extrapolation of the \nnucleon mass using their high precision data with the \nKS action. 
The results obtained for light quarks down to \n$m_\\pi\/m_\\rho \\approx 0.3-0.4$ exhibit a negative curvature, \nand the mass in the chiral limit is sensitive to the choice of \nfitting functions.\n \nThe CP-PACS data for the nucleon mass for the Wilson action measured \ndown to $m_\\pi\/m_\\rho \\approx 0.4$ also show a negative curvature.\nThey tried to fit their data using four fitting\nfunctions: a cubic function in quark mass,\na form predicted by chiral perturbation theory ($\\chi$PT)\nin full QCD\\cite{ref:xPT} given by \n$m_N = c_0 + c_1 m_\\pi^2 + c_2 m_\\pi^3$, and two forms \nin quenched QCD (Q$\\chi$PT)\\cite{ref:QxPT-S,ref:QxPT-BG,ref:LSc1} given by \n$m_N = c_0 + c_1 m_\\pi + c_2 m_\\pi^2$ and \n$m_N = c_0 - 0.53 m_\\pi + c_1 m_\\pi^2 + c_2 m_\\pi^3$,\nwhere in the latter the coefficient of the linear term \nis fixed to a value estimated from experiment. \nAs shown in Fig.~\\ref{fig:chiral}(a),\nthe four fitting functions describe the data equally well, but deviate \nsignificantly toward the chiral limit. \n\n\\begin{figure}[t]\n\\vspace*{-3mm}\n\\begin{center} \\leavevmode\n\\epsfxsize=7.0cm \\epsfbox{decay-ratio.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{$f_K\/f_\\pi-1$. \nData are from CP-PACS\\protect\\cite{ref:CPPACS},\nGF11\\protect\\cite{ref:GF11decay},\nLANL\\protect\\cite{ref:LANLdecay},\nUKQCD\\protect\\cite{ref:UKQCDlat97}, \nQCDSF\\protect\\cite{ref:QCDSFlatest},\nJLQCD\\protect\\cite{ref:JLQCD-fB}, and\nOSU\\protect\\cite{ref:OSU}.}\n\\label{fig:decay-ratio}\n\\vspace{-7mm}\n\\end{figure}\n\nIn Fig.~\\ref{fig:chiral}(b) we show how the choice of chiral extrapolations \naffects the nucleon mass in the continuum limit. \nHaving precision results \ndown to $m_\\pi\/m_\\rho=0.4$ at each $\\beta$ helped to constrain the \nuncertainty in the continuum limit almost within the statistical error \nof 3\\%. 
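The ansatz dependence described above is easy to reproduce numerically. The sketch below is not the CP-PACS analysis: it fits synthetic $(m_\pi, m_N)$ points, generated from arbitrarily chosen coefficients, with two of the quoted forms and compares the resulting chiral-limit intercepts $c_0$.

```python
import numpy as np

# Synthetic (m_pi, m_N) points in lattice units -- arbitrary stand-ins for
# real data, generated exactly from the full-QCD ChPT form
#   m_N = c0 + c1*m_pi^2 + c2*m_pi^3.
c_true = (0.50, 0.90, -0.30)
m_pi = np.array([0.30, 0.40, 0.55, 0.70, 0.85])
m_N = c_true[0] + c_true[1] * m_pi**2 + c_true[2] * m_pi**3

def chiral_fit(powers, m_pi, m_N):
    """Least-squares fit of m_N = sum_k c_k * m_pi**powers[k]."""
    design = np.column_stack([m_pi**p for p in powers])
    coeffs, *_ = np.linalg.lstsq(design, m_N, rcond=None)
    return coeffs

# Full-QCD ChPT ansatz:  c0 + c1*m_pi^2 + c2*m_pi^3 (recovers c0 exactly here)
c_chpt = chiral_fit((0, 2, 3), m_pi, m_N)
# Quenched ChPT ansatz:  c0 + c1*m_pi + c2*m_pi^2 (different chiral limit)
c_qchpt = chiral_fit((0, 1, 2), m_pi, m_N)

print("chiral-limit m_N, full-QCD ChPT ansatz: %.4f" % c_chpt[0])
print("chiral-limit m_N, QChPT ansatz:         %.4f" % c_qchpt[0])
```

Both forms describe the synthetic points well, yet the extrapolated intercepts differ at the percent level, which is the effect Fig.~\ref{fig:chiral}(b) illustrates for the real data.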
\n\nA major difficulty in exploring the chiral limit in quenched QCD simulations \nis the presence of exceptional configurations.\nA method has recently been proposed to avoid this \ndifficulty\\cite{ref:Eichten}. It would be very interesting to see \nif the method allows to obtain reliable results near the chiral limit as \nclose as $m_\\pi\/m_\\rho \\approx 0.2$, \nwhich would be needed to control the chiral extrapolation at a few \\% \nprecision level.\n\n\nThe APETOV collaboration\\cite{ref:APETOV} studied quark mass\ndependence of octet baryon masses for the non-perturbatively $O(a)$ \nimproved action\nfor the range of $m_\\pi\/m_\\rho = 0.98-0.56$.\nThey found that linearity is better \nif one includes the $O(m_qa)$ improvement term in the \ndefinition of quark mass.\n\n\n\n\\subsection{decay constants and quenching error}\n\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.0cm \\epsfbox{decay.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{Results for $f_\\pi$.\nSee the caption of Fig.~\\protect\\ref{fig:decay-ratio} for\nreferences except JLQCD\\protect\\cite{ref:JLQCD-Kura}\nfor the Wilson action.}\n\\label{fig:decay}\n\\vspace{-7mm}\n\\end{figure}\n\nIt has been observed for the Wilson action that\n$f_K\/f_\\pi-1$ in quenched QCD is much smaller than experiment,\nwhich is considered to be a quenching error \n(see Ref.~\\cite{ref:Sharpelat96} for a recent review).\nIn Fig.~\\ref{fig:decay-ratio} we compile recent results for the \nratio. Small values in the range 0.1--0.15 are also obtained for the clover \nand KS actions. A discrepancy of 30--40\\% with experiment \nroughly agrees with estimates based on quenched chiral perturbation \ntheory\\cite{ref:QxPT-BG}.\n\nIn Fig.~\\ref{fig:decay} we summarize the status with the determination \nof the pion decay constant. 
Continuum values for the Wilson action \nreported by various groups are consistent with each other, and \nare slightly smaller than experiment, while the situation with the clover \nresults is very unsatisfactory, suffering from a large discrepancy among \ngroups. \n\n\\section{Improvement of Quark Actions}\\label{sec:improve}\n\\begin{table*}[t]\n\\caption{Tests of improved quark actions with improved gauge actions.\nAbbreviations for gauge actions in brackets are \nTILW: tadpole-improved \nL\\\"uscher-Weisz\\protect\\cite{ref:TILW-LW,ref:LM,ref:TILW-Al},\nTISY: tadpole-improved Symanzik\\protect\\cite{ref:Symanzik,ref:LM},\nSY: Symanzik\\protect\\cite{ref:Symanzik}.}\n\\label{tab:table-I}\n\\begin{center}\n\\begin{tabular}{lrrrrrrr}\n & $\\beta_{pl}$ & size & (fm) & \\#conf. & \\ \\ \\ $m_\\pi\/m_\\rho$ & \\#m \\ \n& ref. \\\\\n\\hline\n\\hline\nSCRI (C=NP)[TILW] & 7.75-12 & $8^3\\times15$ & & O(1000) & & 1 & \\cite{ref:SCRIlat97} \\\\\n\\hline\nAlford {\\it et al.} (D234c,C)[TISY] & 1.157 & $5^3\\times18$ & 2.0 & & 0.76,0.70 & 2 & \n\\cite{ref:Alfordlat97}\\\\\nAlford {\\it et al.} (D234c,C)[TISY] & 1.719 & $8^3\\times20$ & 2.0 & & 0.76,0.70 & 2 & \n\\cite{ref:Alfordlat97}\\\\\n\\hline\nDeGrand & \\multicolumn{6}{c}{Fixed point actions} & \\cite{ref:DeGrand} \\\\\n\\hline\n\\hline\nMILC (KS,Naik)[TILW] & 7.60 & $16^3\\times32$ & & 100 &0.82-0.3 & 5 & \n\\cite{ref:MILClat97}\\\\\nMILC (KS,Naik)[TILW] & 7.75 & $16^3\\times32$ & & 200 &0.76-0.33 & 5 & \n\\cite{ref:MILClat97}\\\\\nMILC (KS,Naik)[TILW] & 7.90 & $16^3\\times32$ & & 200 &0.80-0.27 & 6 & \n\\cite{ref:MILClat97}\\\\\n\\hline\nBielefeld (fat)[SY] & 4.1 & $16^3\\times30$ & & 57 & $\\approx 0.65$ & & \n\\cite{ref:Bielefeld}\\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-7mm}\n\\end{table*}\n\n\\begin{figure}[t]\n\\begin{center} \\leavevmode\n\\epsfxsize=7.5cm \\epsfbox{rho07-Q.ps}\n\\end{center}\n\\vspace{-13mm}\n\\caption{Comparison of $m_N\/m_\\rho$ at $m_\\pi\/m_\\rho=0.7$\nfor various 
quark actions.\nC-ML and D234c-ML employ the mean link for the tadpole factor. \nGauge actions are denoted in brackets.\nLattice spacings are set with the string tension \n($\protect\sqrt\sigma=427$ MeV)\nexcept for results with the TISY gauge action, which use \nthe charmonium spectrum.} \n\label{fig:Rho07-Q}\n\vspace{-7mm}\n\end{figure}\n\n\begin{figure}[t]\n\vspace*{-3mm}\n\begin{center} \leavevmode\n\epsfxsize=7.5cm \epsfbox{mv07.ps}\n\end{center}\n\vspace{-13mm}\n\caption{$m_V$ at $m_\pi\/m_\rho=0.7$. Symbols are the \nsame as in Fig.~\protect\ref{fig:Rho07-Q}.\nSolid lines are extrapolations of SCRI \ndata\protect\cite{ref:SCRIlat96}.\nLattice spacings for the C-ML and D234c-ML actions are \nrecalibrated by us to those given by $\protect\sqrt\sigma$.\n}\n\label{fig:mv07}\n\vspace{-7mm}\n\end{figure}\n\nSeveral groups have been testing improved quark actions with improved \ngauge actions. In this section we discuss quenched results in this category. \nNew simulations since Lattice 96 are listed in Table \ref{tab:table-I}.\n\n\subsection{improvement of the Wilson action}\n\nImprovement of the Wilson quark action by adding the clover term \nhas been extensively investigated both\nwith the standard gauge \naction\cite{ref:UKQCDlat96,ref:UKQCDb57,ref:UKQCDlat97,ref:QCDSFlatest,ref:QCDSFb60,ref:QCDSFlat97-2,ref:APETOV,ref:JLQCD-fB}\nand with improved gauge \nactions\cite{ref:Bock,ref:SCRIlat96,ref:Alfordlat96,ref:Alfordlat97}.\nWe plot in Fig.~\ref{fig:Rho07-Q} the mass ratio $m_N\/m_\rho$ at \n$m_\pi\/m_\rho =0.7$. We clearly observe that the clover term \nsignificantly reduces scaling violation so that the ratio agrees \nwith the phenomenological value\cite{ref:Ono} within 5\% \nalready at $a\approx 0.4$~fm.\n\nThe D234 action\cite{ref:D234} is designed to achieve improvement \nbeyond the clover action. 
\nResults\cite{ref:Alfordlat95,ref:Alfordlat96,ref:Alfordlat97,ref:Bock}\nfor a class of D234 actions, however, do not show clear improvement\nfor the mass ratio compared with those for the clover action.\n\n\n\nA scaling test of hadron masses themselves at a fixed $m_\pi\/m_\rho$\nis useful to examine the functional dependence of scaling violation \non the lattice spacing.\nUsing the tadpole-improved L\\\"uscher-Weisz (TILW) gauge action \nfor which we expect only small scaling violation, the \nSCRI group\cite{ref:SCRIlat96} showed last year that mass results for\nthe tadpole-improved clover action are consistent with an \n$O(a^2)$ scaling behavior, \nwhile Wilson data need both $O(a)$ and $O(a^2)$ terms.\n \nIn Fig.~\ref{fig:mv07} we reproduce their figure for the vector \nmeson mass at $m_\pi\/m_\rho=0.7$,\nadding new results for the Wilson\cite{ref:CPPACS} (open circles) \nand clover\cite{ref:UKQCDlat97} (filled circles) actions on\nthe standard plaquette gauge action.\nThe results for the two actions lie on the respective \nextrapolation curves of the SCRI results, showing a reduction of \nscaling violation with the clover action also for the plaquette \ngauge action. \n\nThe Cornell group\cite{ref:Alfordlat97}\ntested improvement using the mean value of the link in the Landau \ngauge rather than the plaquette for the tadpole factor (right triangles). 
\nThey reported that the mean link is superior to the plaquette in reducing scaling\nviolation effects.\n\nLet us also mention that\nnon-perturbative determinations of\nthe clover coefficient with improved gauge actions \nhave been attempted\cite{ref:SCRIlat97,ref:Klassen}.\nSpectrum calculations are in progress.\n\n\n\n\n\n\subsection{improvement of the KS action}\n\nThe MILC\ncollaboration\cite{ref:MILClat96}\nstudied the KS and Naik\cite{ref:Naik} three-link actions\nusing the TILW gauge action, and compared them with those for\nthe KS action on the standard gauge action.\nThey found that $m_N\/m_\rho$ is improved by the use of the\nimproved gauge action, but the Naik improvement has a relatively\nsmall effect on the mass ratio. \nPushing the calculation toward higher $\beta$\cite{ref:MILClat97}, \nthey found little difference between the Naik and KS actions.\n\nAnother direction of improvement tested by the MILC \ncollaboration\cite{ref:MILCfat} \nis the use of fat links, \nin which one replaces a link variable with a weighted sum of \nthe link and its staples. \nThis is expected to improve flavor symmetry,\nand indeed they found a substantial reduction \nin the mass difference between the Goldstone and non-Goldstone pions. \n\nThe Bielefeld group\cite{ref:Bielefeld} studied the fat link \nimprovement with the Symanzik gauge action.\nThey also observed improvement of flavor symmetry for this quark \naction, while $O(p^2)$ and $O(p^4)$ improved actions \nwhich include many link paths do not show any significant \nimprovement of flavor symmetry. \n\n\n\section{Toward Full QCD Spectrum}\label{sec:fullQCD}\n\begin{table*}[t]\n\caption{Recent spectrum runs in full QCD for $N_f$=2.\nNew results since Lattice 96 are marked by double asterisks and\nthose with increased statistics by asterisks.}\n\label{tab:table-F}\n\begin{center}\n\begin{tabular}{lrrrrrrr}\n & $\beta$ & size & (fm) & traj. 
& $m_\\pi\/m_\\rho$ & \\#m & ref.\\\\\n\\hline\n\\hline\nSESAM (W)* & 5.6 & $16^3\\times32$ & 1.4 & $200\\times 25$ & 0.84-0.7 & 3 &\n\\cite{ref:SESAMlat96,ref:Hoeber}\\\\\nT$\\chi$L (W)* & 5.6 & $24^3\\times40$ & 2.0 & O(3000) & 0.7,0.55 & 2 &\n\\cite{ref:TxLlat96,ref:Hoeber} \\\\\n\\hline\nUKQCD (C=1.76)** & 5.2 & $12^3\\times24$ & & 50 conf. & 0.85-0.75 & 4 &\n\\cite{ref:Talevi}\\\\\n\\hline\nCP-PACS (W,C=1,TP)** & & $(12,16)^3\\times32$ & \\multicolumn{4}{c}\n{study of action improvement} & \\cite{ref:CPPACS-F} \\\\\n\\hline\n\\hline\nMILC (KS) & 5.30 & $12^3\\times32$ & 3.7 &1000-5000 & 0.8-0.3 & 8 &\n\\cite{ref:MILClat96} \\\\\nMILC (KS)* & 5.415& $16^3\\times32$ & 3.2 &1000-2000 & 0.77-0.44 & 6 &\n\\cite{ref:MILClat96,ref:MILClat97-F}\\\\\nMILC (KS)* & 5.415& $12^3\\times24$ & 2.4 &2000 & 0.46 & 1 &\n\\cite{ref:MILClat96,ref:MILClat97-F}\\\\\nMILC (KS) & 5.50 & $24^3\\times64$ & 3.6 &1000-2000 & 0.69-0.63 & 2 &\n\\cite{ref:MILClat96}\\\\\nMILC (KS) & 5.50 & $20^3\\times48$ & 3.0 &2000 & 0.56-0.48 & 2 &\n\\cite{ref:MILClat96}\\\\\nMILC (KS)* & 5.60 & $24^3\\times64$ & 2.6 &1500-2000 & 0.75-0.53 & 4 &\n\\cite{ref:MILClat96,ref:MILClat97-F}\\\\\n\\hline\nColumbia(KS,$N_f$=2)*& 5.70 & $16^3\\times32(40)$& 1.5 & 1400-4900 &0.70-0.57& 4 & \n\\cite{ref:Columbialat96,ref:Columbialat97} \\\\\nColumbia(KS,$N_f$=4)*& 5.40 & $16^3\\times32$& 1.5 & 2700-4500&0.72-0.67& 2 &\n \\cite{ref:Columbialat96,ref:Columbialat97} \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace*{-3mm}\n\\end{table*}\n\nWith progress of our understanding of the quenched spectrum, \nincreasingly larger efforts are beginning to be spent in simulations of \nfull QCD. 
Here we summarize recent work listed in \nTable \ref{tab:table-F}.\n\n\subsection{progress with the KS action}\n\begin{figure}[t]\n\begin{center} \leavevmode\n\epsfxsize=7.0cm \epsfbox{edplot.ps}\n\end{center}\n\vspace{-13mm}\n\caption{Edinburgh plot for $N_f$=2 KS quarks reported by \nMILC\protect\cite{ref:MILClat97-F} together with those \nfrom HEMCGC\protect\cite{ref:HEMCGC},\nColumbia\protect\cite{ref:Columbialat97,ref:Columbia32},\nand Kyoto-Tsukuba\protect\cite{ref:K-T}.\nIn parentheses are lattice sizes.}\n\label{fig:Ed-KS-Nf2}\n\vspace{-7mm}\n\end{figure}\n\nThe MILC collaboration\cite{ref:MILClat96} continued their study \nof the $N_f=2$ KS spectrum for $\beta=5.3-5.6$ employing large lattices \nof a size $La \rlap{\lower 3.5 pt\hbox{$\mathchar \sim$}}\raise 1pt \hbox {$>$} 2.6$~fm. In Fig.~\ref{fig:Ed-KS-Nf2} we show their \nresults in the Edinburgh plot together with those of previous \nstudies\cite{ref:Columbialat97,ref:HEMCGC,ref:Columbia32,ref:K-T}. \n\nThe ratio $m_N\/m_\rho$ decreases toward weak coupling. \nTaking advantage of the improved precision of their results, \nas is clear from Fig.~\ref{fig:Ed-KS-Nf2}, \nthe MILC collaboration attempted a continuum extrapolation of \n$m_N\/m_\rho$ for a fixed value of $m_\pi\/m_\rho$. They find \n$m_N\/m_\rho=1.252(37)$ at the physical point in the continuum limit. \n\nAlso of interest is the problem of\nhow the KS spectrum depends on the number of dynamical \nquark flavors.\nThe Columbia group\cite{ref:Columbialat97} showed that\nthe four-flavor hadron spectrum is nearly parity doubled on \na $16^3\times 32$ lattice at $\beta=5.4$.\nChiral symmetry breaking effects are smaller for four\nflavors than for two or zero flavors.\n\n\subsection{progress with the Wilson action}\nSimulations of full QCD with the Wilson quark action for $N_f$=2\nhave been pushed forward by the SESAM\cite{ref:SESAMlat96,ref:Hoeber}\nand T$\chi$L\cite{ref:TxLlat96,ref:Hoeber} collaborations. 
\nSimulations were initially made at $\beta=5.6$ on a $16^3$ \nspatial lattice ($La \approx 1.4$ fm) for $m_\pi\/m_\rho=0.85-0.7$ (SESAM), \nwhich have been extended to those on a larger lattice $24^3$ \n($La \approx 2.0$ fm) and closer to the chiral limit with \n$m_\pi\/m_\rho =$0.7 and 0.55 (T$\chi$L).\n\nAn important aspect of their study is a careful examination of \nvarious algorithmic issues of full QCD simulation, including\nthe development and tuning of an efficient Wilson matrix \ninverter\cite{ref:Frommer}\nand a detailed autocorrelation study. \n\nFor the spectrum, they observed 3\% (5\%) finite-size effects\nfor the $\rho$-meson (nucleon) at $m_\pi\/m_\rho \approx 0.7$.\nThe magnitude is comparable to that for the KS \naction\cite{ref:K-T,ref:MILC-FS-F}.\nThey estimated strange hadron masses,\ntreating the strange quark as a valence quark\nin the presence of two light dynamical quarks.\nThe $K-K^{*}$ mass splitting is smaller than experiment by \n15\%, contrary to the expectation that dynamical sea quark \neffects alleviate the small hyperfine splitting of quenched \nQCD. It is possible that the dynamical quarks employed are still \ntoo heavy to improve the splitting significantly.\n\nSESAM and T$\chi$L also studied the static potential and several \nhadron matrix elements to explore effects of sea quarks. See \nRef.~\cite{ref:Gusken} for a review.\n\n\subsection{full QCD with improved actions}\n\nUntil last year there were only sporadic attempts toward full \nQCD simulations of the light hadron spectrum with improved \nactions\cite{ref:SCRIlat96-F}.\nThis year the CP-PACS collaboration\cite{ref:CPPACS-F} \nand the UKQCD collaboration\cite{ref:Talevi} presented \npreliminary results of a systematic attempt in this direction. 
\n\nThe CP-PACS collaboration made a comparative study of improvement \non coarse lattices with $a^{-1} \approx 0.9-1.5$ GeV employing \nthe plaquette and an RG-improved action\cite{ref:Iwasaki} for gluons and\nthe Wilson and tadpole-improved clover action for quarks.\nFor one action combination, they also explored the chiral limit \ndown to $m_\pi\/m_\rho\approx 0.4$ with simulations on a $16^3\times 32$ \nlattice.\nThe UKQCD collaboration employed the plaquette action at $\beta=5.2$ and \nthe clover action with a clover coefficient of 1.76.\nSimulations were made for four values of the sea quark mass\nand the spectrum was calculated for four values of the valence quark \nmass on each dynamical quark ensemble.\n\n\begin{figure}[t]\n\begin{center} \leavevmode\n\epsfxsize=7.5cm \epsfbox{rho07.ps}\n\end{center}\n\vspace{-13mm}\n\caption{$m_N\/m_\rho$ in full QCD with $N_f=2$ as a function of \n$m_\rho a$, both calculated at $m_\pi\/m_\rho =0.7$. \nAbbreviations for gauge actions are\nP: plaquette and R: RG-improved\protect\cite{ref:Iwasaki}, and \nfor quark actions W: Wilson and C: clover.\nData are taken from CP-PACS\protect\cite{ref:CPPACS-F,ref:CPPACS},\nSCRI\protect\cite{ref:SCRI-W2}, and\nSESAM\protect\cite{ref:Hoeber}.\n}\n\label{fig:rho07}\n\vspace{-7mm}\n\end{figure}\n\nIn Fig.~\ref{fig:rho07} we compile full QCD results for $m_N\/m_\rho$ \nas a function of $m_\rho a$, both calculated at $m_\pi\/m_\rho =$ 0.7.\nResults for the Wilson quark action have large scaling violation \nand approximately lie on a single \ncurve, irrespective of the choice of gauge actions. \nIn contrast, the lattice spacing dependence is much \nweaker for the clover actions, again irrespective of the gauge action, \nand the value of the ratio is close to a phenomenological estimate\neven on a very coarse lattice of $a^{-1} \approx 1.0$ GeV. 
\nThese results show that the significant improvement of $m_N\/m_\rho$ \ndue to the clover term observed for the quenched case also \nholds in full QCD. \n\nAnother interesting question in full QCD is to what extent \nthe lattice scale obtained from the hadron spectrum agrees with \nthat from the static potential.\nThe clover term is important also in this regard. \nA mismatch of the scale determined from $m_\rho$ in the chiral limit \nand that with the string tension observed for the Wilson action\nat $a^{-1} \approx 1.0$ GeV is much reduced by the use of the clover \naction\cite{ref:CPPACS-F}.\nThe UKQCD collaboration reported that the scale determined from $m_{K^{*}}$ \napproximately agrees with that from $r_0$ for each value of the \ndynamical quark mass.\n\nAs for effects of improving the gauge action, \nrotational symmetry of the potential is improved to a great extent\nalso in full QCD\cite{ref:CPPACS-F}.\n\nThe effects of improvement summarized here are parallel to those \nobserved in quenched QCD, and come mainly from valence quarks rather than \ndynamical sea quarks. \nNevertheless, they are important since they show that \nrealistic full QCD simulations are possible without having \nto reduce the lattice spacing below $a^{-1}\approx 2$ GeV, which is \nneeded with the standard action.\n\n\section{Other Topics}\label{sec:other}\nCalculation of glueball masses in quenched QCD\nhas reached the stage of pinpointing the mass\nranges, at least for the scalar glueball. 
\nThe GF11 collaboration\cite{ref:GF11lat97} \nreported $m_{0^{++}}=1710(63)$ MeV as the infinite volume value \nin the continuum from a reanalysis of their data\cite{ref:GF11gb}.\nThis value is consistent with, or slightly higher than, the previous\nresults by other groups\cite{ref:UKQCDgb,ref:MPn,ref:Luo}.\n\nThe central effort of the GF11 collaboration has been a \ncalculation of the mass of the $s\bar s$ scalar \nmeson\cite{ref:GF11lat97,ref:GF11lat96}, for which they \nfound values $m_{s\bar s} < 1500$ MeV.\nThey conclude that the observed meson $f_J(1710)$ is mainly a\nscalar glueball, while $f_0(1500)$ is mainly an $s\bar s$\nquarkonium. \n\nThe SESAM collaboration\cite{ref:SESAMgb}\nmade a glueball mass measurement with their \nfull QCD runs.\nNo clear dynamical quark effects are seen in the glueball\nmasses. Instead, they observed strong finite size effects\nin the scalar glueball mass, which may be an indication of\nthe presence of mixing between the glueball and \nthe $s\bar s$ scalar meson. \n\nTwo groups presented results for spin-exotic meson masses. \nThe UKQCD collaboration\cite{ref:UKQCDlat97-ex} increased\nstatistics since last year. \nCalculating masses at one combination of $\beta$\nand the quark mass and employing a model to estimate masses at\nthe strange quark mass, \nthey obtained $m_{1^{-+}}(s\bar s) =2000(200)$ MeV.\nThe MILC collaboration\cite{ref:MILClat97-ex}\nmade simulations at $\beta$=5.85 and 6.15. 
\nExtrapolation to the strange quark mass was made to \nobtain $m_{1^{-+}}(s\bar s)=2170(80)$ MeV.\nThe two results are consistent within 10\%.\n\n\n\section{Conclusions}\n\label{sec:conclusions}\nA number of interesting studies have been made this year, taking a step\nforward toward a precise determination of the light hadron spectrum.\n\nFor the quenched spectrum, a systematic deviation from experiment\nhas been uncovered in both the meson and baryon sectors.\nQuantitative results have been accumulated with improved actions \nboth for quenched and full QCD, clarifying to what extent improving \nthe actions reduces scaling violations\nin the light hadron spectrum.\nQuenched clover simulations are moving toward a high precision determination \nof physical quantities exploiting the improved scaling behavior, and \nsimilar effort should be pursued with other improved actions.\n\nAnd finally, attempts toward a realistic simulation in full QCD have begun.\nIn my opinion, there is real hope that such a calculation could be achieved \nwith the current generation of computers through the application\nof improved actions.\n\n\ \\\nI am deeply indebted to all the colleagues who made their results available\nto me before the conference.\nI would also like to thank Y.~Iwasaki and A.~Ukawa for \ncritical comments and suggestions on the manuscript.\nThis work is in part supported by the Grant-in-Aid of the Ministry of\nEducation, Science and Culture (Nos. 08NP0101 and 09304029). 
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzznpmg b/data_all_eng_slimpj/shuffled/split2/finalzznpmg new file mode 100644 index 0000000000000000000000000000000000000000..3d5513a74586cc1705ac73add8aa2b768d7fc468 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzznpmg @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n It is fair to say that tetrad calculations are generally considered superior \nto classical coordinate methods for the calculation of curvature in spacetime. \nExperiments by Campbell and Wainwright \\cite{campbell\/wainwright:1977}\nnow dating back many years showed that tetrad methods are faster than \ncoordinate methods by factors of $2 \\sim 4$. \nEven larger factors have been obtained by MacCallum \\cite{maccallum:1989}\nin a Euclidean context. \nWithin the well known system SHEEP, for example, the advice to the \nbeginner is to always use frame versions of the metric (e.g. MacCallum\nand Skea \\cite{maccallum\/skea:1994} p. 23). On the commercial side, within the\nsystem MACSYMA2 \\cite{macsyma} the demonstration \nCTENSOR4 begins with an explanation that ``frame fields'' (orthonormal bases) \nallow the computations to run much more quickly. The demonstration calculates \nthe bases components of the Ricci tensor for the Kerr-Newman spacetime in \nBoyer-Lindquist coordinates, and is a good place to begin our discussion.\\\\ \n\nIn Table 1 we have reproduced this demonstration within the system \nGRTensorII \\cite{musgrave\/pollney\/lake:1994}\\cite{grhome} running under\nMapleV Release 3 \\cite{maple}, and have included the \ncalculation of the Weyl tensor.\nThe Table demonstrates some interesting \nproperties. The theoretical advantage of the frame approach is clearly \ndemonstrated in the Boyer-Lindquist coordinates (Column BKN). 
However, \nunder the elementary coordinate transformation $u=a \\cos\\theta$ this advantage \nfails to deliver superior performance (Column BKNU). The importance of\nstrategic application of simplification at intermediate steps is\nillustrated in Column BKNS. For this test simplification of components\nhas been carried out only after the components of the Ricci and Weyl tensors\nare calculated. It is worth noting that without some optimization in the \nsimplification strategy (e.g. post-calculation simplification only) this \ncalculation cannot be executed in MapleV on a 32 bit machine.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrr}\n\\hline\\hline\nCalculation & BKN & BKNU & BKNS \\\\ \\hline\n$R_{(a)(b)}$ & 6.7 & 5.6 & 30.0 \\\\\n$C_{(a)(b)(c)(d)}$ & 1.4 & 0.8 & 9.5 \\\\ \\hline\nTotal & 8.1 & 6.4 & 39.5 \\\\ \\hline\n & & & \\\\\n$R_{ab}$ & 8.4 & 2.1 & 18.9 \\\\\n$C_{abcd}$ & 21.1 & 4.3 & 67.5 \\\\ \\hline\nTotal & 29.5 & 6.4 & 86.4 \\\\ \n\\hline\\hline\n\\end{tabular}\\\\\n\\vspace{\\baselineskip}\n\\begin{minipage}{.9\\linewidth}\nTable~1: Average CPU time\\footnotemark~in seconds for the calculation and \nsimplification of the bases components of the Ricci and Weyl tensors\n($R_{(a)(b)}$ and $C_{(a)(b)(c)(d)}$) compared to the same for the coordinate \ncomponents ($R_{ab}$ and $C_{abcd}$). BKN refers to the Kerr-Newman\nspacetime in Boyer-Lindquist coordinates, and BKNU the same but with the \ntransformation $u=a\\cos\\theta$. For both BKN and BKNU the \nsimplification procedures have been structured for optimum \nperformance\\footnotemark $^,$\\footnotemark. 
For the Column BKNS the tetrad\ncomponents of BKN are used, but simplification procedures are not applied\nto intermediate calculation steps, only to the final results.\n\\addtocounter{footnote}{-2}\n\\end{minipage}\\\\\n\\end{table}\nClearly one could dismiss the findings in Table 1 if the implementation \nof the bases algorithms in GRTensorII were particularly inefficient compared \nto coordinate methods. We do not believe this to be the case\n\\cite{lake\/musgrave\/pollney:1995b}. Rather, as we attempt to show in what\nfollows, we believe that Table 1 reflects the \nfact that bases methods are not fundamentally superior to classical \ncoordinate methods. We find that the most important criterion for speed is \nthe minimization of the time spent on simplification. This underlines the \nimportance of the choice of coordinates or tetrad in a computer algebra \ncalculation and, more importantly, points out the fact that the user must be \nable to select the style of simplification which is most appropriate for the \nparticular problem.\n\\footnotetext{All times are in seconds as\nreturned by the MapleV {\\tt status} function and are\nthe average of four runs on a Sun Sparc 5 (see Section \\ref{sec:2.4}). 
The\nmaximum deviation from the average is less than 5\\% for times exceeding \n2 seconds and about 10\\% for shorter times.}\n\\addtocounter{footnote}{1}\n\\footnotetext{We consider a worksheet (a sequence of calculation \nand simplification procedures) to be optimized when the execution time\nhas reached a minimum.}\n\\addtocounter{footnote}{1}\n\\footnotetext{Due to their length, the complete text\nof worksheets used for these tests have not been included in this\nreport (except for an example in Appendix \\ref{app:C}), however they\nhave been made publicly available \\cite{worksheets}.}\n\n\\section{Protocol for comparisons}\n\\subsection{Choice of algorithms}\\label{sec:2.1}\n Within the framework of tetrad methods, the formalism of Newman and Penrose \n\\cite{newman\/penrose:1962} has proven most useful for calculations in spacetime\n(see e.g. \\cite{campbell\/wainwright:1977}). Some \nof the earliest applications of computer algebra to relativity stressed the \nefficiency of this formalism (e.g. \\cite{campbell\/wainwright:1977}). \nMcLenaghan \\cite{mclenaghan:1994} (see also Allen et al. \\cite{allenetal:1994})\nhas emphasized two distinct approaches within this formalism.\nThese are distinguished as the methods of {\\em Cartan} and {\\em Debever} in \n\\cite{mclenaghan:1994}, and as Methods A and B in \\cite{allenetal:1994}, a \nnotation which we adopt here. The methods are outlined in\n\\cite{mclenaghan:1994} and \\cite{allenetal:1994} with references and we do not\nrepeat this material here\\footnote{\nThe curvature component $\\Phi_{12}$ is consistently incorrect in\n\\cite{mclenaghan:1994} and \\cite{allenetal:1994}. In particular, the\ncoefficients of $\\mu\\tau$ and $\\nu\\sigma$ are -1 and 1 respectively,\n{\\em not} -2 and +2. 
This error is also present in the {\tt debever} package\nin MapleV Releases 2 and 3.}.\nWe simply note that Method A uses the definitions of Newman and \nPenrose explicitly, while Method B essentially uses definitions constructed \nso as to avoid inversion of coordinate indices. \n\nIn this paper we compare these two approaches to classical coordinate methods\n(suitably optimized).\n\n\subsection{Basis for comparison}\nThe null tetrad formalism is sufficiently distinct from the classical \ncoordinate approach that a basis for the comparison of the two methods is \nnot clearly defined. In the classical approach the ``curvature'' of a spacetime\nis usually considered evaluated when the coordinate components of the Ricci \nand Weyl tensors have been computed. In the Newman-Penrose (NP)\nformalism it is the tetrad\ncomponents of these tensors (the $\Phi$s and $\Psi$s) that display the \n``curvature''. The complication that arises in a comparison of such different\nmethods is the fact that the natural output of each method is distinct. Now\ngiven the coordinate components, and the null tetrad, the tetrad components \nfollow in the usual way \cite{newman\/penrose:1962}. One could then form a\nbasis for comparison \nby defining the ``curvature'' as the tetrad components of the Ricci and Weyl \ntensors. This is the comparison used in \cite{campbell\/wainwright:1977}.\nNaturally, this puts the \nclassical component method at a disadvantage since the extra sums involved \nare not a natural part of the method. 
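\nFor reference we note, in the conventions of \cite{newman\/penrose:1962} (the overall signs of the curvature projections depend on the signature and curvature conventions adopted), the contractions relating the two sets of quantities, e.g.\n\begin{eqnarray}\ng_{ab} &=& l_a n_b + n_a l_b - m_a \bar m_b - \bar m_a m_b , \nonumber \\\n\Phi_{00} &\propto& R_{ab}\, l^a l^b , \qquad \Psi_0 \propto C_{abcd}\, l^a m^b l^c m^d . \nonumber\n\end{eqnarray}\nIt is these extra contractions that the coordinate approach must perform before its output can be compared directly with that of the NP methods.\n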
In this paper we have tried to cover \nall possibilities by having both NP approaches output the tetrad components \nof the Ricci and Weyl tensors, and the coordinate approach output both the \ncoordinate and tetrad components of these tensors.\n\n\\subsection{Choice of spacetimes}\n To compare the performance of algorithms for spacetime calculations it is \nessential that a variety of spacetimes be considered, and that within a given\nspacetime different tetrads (coordinates) be examined. Campbell and \nWainwright \\cite{campbell\/wainwright:1977} chose to examine the spacetimes of \nGriffiths \\cite{griffiths:1975}, Lewis-Papapetrou \\cite{ernst:1968},\nBondi \\cite{bondi\/vanderburg\/metzner:1962}, and Debever \n\\cite{debever\/mclenaghan\/tariq:1979}.\nMore recently, the examination by Allen et al. \\cite{allenetal:1994} \n(which is a comparison of the NP approaches) included these spacetimes \n(with a more general form of the Debever metric, the \nDebever-McLenaghan-Tariq metric \\cite{debever\/mclenaghan\/tariq:1979}) and\nalso the plane wave, 2x2 decomposable, and static spherically symmetric\nspacetimes \\cite{krameretal:1980}. We have found these last \nthree spacetimes to be too simple since the associated calculation times\nare too short to form a reliable basis for comparison. They also include \none form of the Kerr-Newman metric\\footnote{\nCampbell and Wainwright \\cite{campbell\/wainwright:1977} also examined the\nKerr-Newman metric. However, their description involves intermediate hand\nsimplification in this case and does not include comparison to the component\nmethod. In this Section we take the position that a fair comparison of computer\nalgorithm efficiency can only be carried out if no hand simplification of\ntensor components is permitted.} and a\ngeneral tetrad (their Case 9). Here we examine the Kerr-Newman\nmetric in a variety of forms. 
We do not include tetrad 9 given in \n\cite{allenetal:1994} since it does not conform to the requirements of a null\ntetrad in the Newman-Penrose formalism \cite{krameretal:1980}.\n\n\subsection{Method of comparison}\label{sec:2.4}\n The comparisons were made by way of 26 re-executable Maple worksheets, and \nthe calculations were performed with GRTensorII\n\cite{musgrave\/pollney\/lake:1994} under MapleV \nRelease 3 \cite{maple} (in the X-Windows interface) with patchlevel 3 on a \nSun SPARC5 running SunOS 4.1.4\nand equipped with a 75MHz CPU and 64Mb of RAM\footnote{This configuration was\nchosen for its reliability and reproducibility of CPU times, not its speed. \nBy way of comparison, a Pentium 133 is about 1.5 times faster; however, the \nDOS\/Windows implementation of MapleV Release 3 reports only integer CPU\ntimes.}. \n\nThe Maple worksheets and associated input files used in these tests are\nsummarized \nin Appendix \ref{app:A}. The associated tetrads and line elements are shown\nin Appendix \ref{app:B}. The reproduction of tetrads is prone to errors, and\nreferences \cite{campbell\/wainwright:1977}, \cite{mclenaghan:1994} and \n\cite{allenetal:1994} all contain misprints in the tetrad components. Appendix \n\ref{app:B} has been produced directly from the input files and so an error\nconstitutes not a misprint, but an actual error in the input which would \ninvalidate our conclusions in the case concerned. The worksheets are \navailable for detailed examination and execution \cite{worksheets}.\n\nThe worksheets are constructed in the following way. Either the contravariant\nor covariant tetrad is loaded into GRTensorII from an input file. The choice \ndetermines the NP algorithm to be used for the tetrad part of the calculation. 
\nThe tetrad components of the Ricci and Weyl tensors are then evaluated, \nusually in simplified form\\footnote{A tensor component is considered to\nbe fully simplified when its size, as measured in Maple `words', reaches\na minimum.} (see Section \\ref{sec:3}). The metric is then \ngenerated from the tetrad, and the covariant coordinate components of the\nRicci and Weyl tensors are evaluated and simplified when necessary (again\nsee Section \\ref{sec:3}). We have been careful to \nensure that the metric, though generated from the tetrad, is presented in \noptimal form. Assuming that the tetrads are given, the worksheets then \nevaluate the tetrad components from the covariant coordinate \ncomponents. These are then simplified to the exact form of the tetrad \ncalculation.\n\nWe believe that each worksheet has been fully optimized\\footnotemark[2].\nThat is, the procedures (e.g. precalculation of the spin coefficients) and\nsimplification procedures (type and order of simplification) at each step \nof the calculation have been constructed so as to present each approach in \nits best performance mode. It should be pointed out that partial optimization \nof a worksheet is straightforward (see Appendix \\ref{app:D}), but the full \noptimization of a worksheet is a somewhat involved task. Full optimization is\nnecessary if a real comparison of approaches is to be given.\n\nA sample worksheet is given in Appendix \\ref{app:C} along with the\nassociated input file. In \nGRTensorII the input files contain no information beyond the components of the\nbasis or metric. In particular, no simplification information is contained in\nthe input file. The simplification strategy used is read from the worksheet.\n\n\\section{Comparisons}\\label{sec:3}\n\nSummarized in Table 2 are the results of our tests\\footnotemark[1]\nand the results\navailable from the previous tests in \\cite{campbell\/wainwright:1977} and \n\\cite{allenetal:1994}. 
Before we discuss the \ncomparisons it is appropriate to emphasize the importance of simplification \nprocedures. The fastest calculation in the Kerr-Newman metric is the \ncovariant NP tetrad calculation (Method B) in Row 9. If even only part of the \nsimplification strategy is altered (e.g. the background simplification\nprocedure used before the components are more fully simplified) the \nexecution time can increase by a factor well over two orders of magnitude. \nThe global simplification strategy is of paramount importance.\n\n\\begin{table}\n\\vspace*{-2\\baselineskip}\n\\centering\n\\begin{tabular}{rlrrrrrrrrrrrrr}\\hline\\hline\n1. & Coord's & $A$ & $B$ & $C$ & $D$ & $E$ & $F$ & $A\/D$\n & \\cite{allenetal:1994} & $A\/B$ & $A\/C$ & \n\\cite{campbell\/wainwright:1977} & $D\/E$ & $D\/F$ \\\\ \\hline\n 2. & {\\bf Grif} & 2.0 & 1.9 & 2.9 & 1.6 & 2.3 & 3.5 & 1.25 & 1.65\n & 1.05 & 0.69 & 0.39 & 0.70 & 0.46 \\\\\n 3. & {\\bf L-P} & 1.9 & 2.8 & 4.9 & 2.6 & 2.9 & 6.1 & 0.73 & 1.44\n & 0.68 & 0.39 & 0.40 & 0.90 & 0.43 \\\\\n 4. & {\\bf Bondi1} & 3.5 & 6.2 & 10.5 & 4.4 & 6.2 & 10.4 & 0.80 & 0.83\n & 0.56 & 0.33 & 0.27 & 0.71 & 0.42 \\\\\n 5. & {\\bf Bondi2} & 1.5 & 1.8 & 4.7 & 2.6 & 1.8 & 4.6 & 0.58 & ? \n & 0.83 & 0.32 & ? & 1.44 & 0.57 \\\\\n 6. & {\\bf Deb} &11.1 &13.3 & 39.8 &24.0 &13.6 & 39.7 & 0.46 & \n & 0.84 & 0.28 & & 1.76 & 0.60 \\\\\n 7. & {\\bf DMT1} &10.9 &13.5 & 40.6 &22.3 &13.5 & 40.7 & 0.49 & \n & 0.81 & 0.27 & & 1.65 & 0.55 \\\\\n 8. & {\\bf DMT2} &35.3 &32.1 &109.0 &55.0 &32.6 &105.0 & 0.64 &0.93 \n & 1.10 & 0.32 & & 1.69 & 0.52 \\\\\n 9. & {\\bf KN-Euc1} &22.7 &13.1 & 19.9 & 6.3 &13.7 & 20.0 & 3.60 &5.80 \n & 1.68 & 1.14 & & 0.46 & 0.32 \\\\\n10. & {\\bf KN-Euc2} &54.4 &13.2 & 17.2 &24.4 &13.1 & 16.2 & 2.23 & \n & 4.12 & 3.16 & & 1.86 & 1.51 \\\\\n11. & {\\bf KN-BL1} &26.6 &27.5 & 38.4 &30.0 &30.0 & 34.6 & 0.89 & \n & 0.97 & 0.69 & & 1.00 & 0.87 \\\\\n12. 
& {\\bf KN-EF1} &32.5 &22.7 & 30.2 & 7.8 &22.7 & 25.4 & 4.17 & \n & 1.43 & 1.08 & & 0.34 & 0.31 \\\\\n13. & {\\bf KN-BL2} &12.6 & 6.7 & 14.9 &18.2 & 9.1 & 15.8 & 0.69 & \n & 1.88 & 0.84 & & 2.00 & 1.15 \\\\\n14. & {\\bf KN-EF2} &41.0 &22.9 & 38.9 &20.8 &22.8 & 26.0 & 1.97 & \n & 1.79 & 1.05 & & 0.91 & 0.80 \\\\\n\\hline\\hline\n\\end{tabular}\\\\\n\\vspace{\\baselineskip}\n\\begin{minipage}{.9\\linewidth}\nTable 2: CPU times and comparisons for optimized calculations. Columns $A$\nthrough $F$ give CPU times in seconds\\protect\\footnotemark[1].\\\\\n\\vspace*{-2\\baselineskip}\n\\begin{center}\n\\begin{tabular}{cl}\nColumn & \\\\\n$A$ & -- total CPU time to generate and simplify the curvature \ncomponents ($\\Phi$s and $\\Psi$s)\\\\\n& \\hspace*{5mm}from a contravariant tetrad using the standard NP approach \n(`Method A').\\\\\n$B$ & -- time for the simplified (covariant) coordinate components of\nthe Ricci and Weyl\\\\\n& \\hspace*{5mm}tensors for the metric generated from the tetrad $A$.\\\\\n$C$ & -- time (including $B$) to generate and simplify the tetrad\ncurvature components from\\\\\n& \\hspace*{5mm}the coordinate components calculated in $B$, given the tetrad \n$A$.\\\\\n$D$ & -- the time to generate and simplify the curvature components from\na covariant\\\\\n& \\hspace*{5mm}tetrad using the modified NP approach of \\cite{allenetal:1994} \n(`Method B'). \\\\\n$E$ & -- time for the simplified (covariant) coordinate components of the\nRicci and Weyl\\\\\n& \\hspace*{5mm}tensors for the metric generated from the covariant tetrad \n$D$.\\\\\n$F$ & -- time (including $E$) to generate and simplify the tetrad curvature\ncomponents from\\\\\n& \\hspace*{5mm}the coordinate components calculated in $E$ given the tetrad \n$D$.\\\\\n\\end{tabular}\n\\end{center}\nThe differences in Columns $B$ and $E$ are due to differences in the exact\nform of the metric generated from the tetrad. 
The spacetimes and form of the\noutput are distinguished by the following abbreviations:\n\\begin{center}\n\\begin{tabular}{ll}\n{\\bf Grif} & Griffiths metric \\cite{griffiths:1975}. Output in factored form.\\\\\n{\\bf L-P} & Lewis-Papapetrou metric \\cite{ernst:1968}. Output in factored \n\tform.\\\\\n{\\bf Bondi1} & Bondi metric \\cite{bondi\/vanderburg\/metzner:1962}. Output in \n\tfactored form.\\\\\n{\\bf Bondi2} & Bondi metric. Output in expanded form.\\\\\n{\\bf Deb} & Debever metric \\cite{debever\/mclenaghan\/tariq:1979}. Output in\n\tnormal form (does not simplify further).\\\\\n{\\bf DMT1} & Debever-McLenaghan-Tariq metric \n\t\\cite{debever\/mclenaghan\/tariq:1979}. Output in normal form (does not\\\\\n\t& simplify further).\\\\\n{\\bf DMT2} & Debever-McLenaghan-Tariq metric in general form\n\t\\cite{debever\/mclenaghan\/tariq:1979}. Output in normal\\\\\n\t& form (does not simplify further).\\\\\n{\\bf KN-Euc1} & Kerr-Newman metric \\cite{allenetal:1994}. Output in factored\n\tform.\\\\\n{\\bf KN-Euc2} & Modified form of 9. Output in factored form.\\\\\n{\\bf KN-BL1} & Kerr-Newman metric in Boyer-Lindquist coordinates\n\t\\cite{boyer\/lindquist:1967}. Output in\\\\\n\t& factored form.\\\\\n{\\bf KN-EF1} & Kerr-Newman metric in advanced Eddington-Finkelstein coordinates\n\t\\cite{mtw}.\\\\\n\t& Output in factored form.\\\\\n{\\bf KN-BL2} & Kerr-Newman metric in modified Boyer-Lindquist coordinates\n\tusing\\\\\n\t& $u=a\\cos\\theta$. Output in factored form.\\\\\n{\\bf KN-EF2} & Modified form of 12. Output in factored form.\\\\\n\\end{tabular}\n\\end{center}\n\\end{minipage}\\\\\n\\vspace{\\baselineskip}\n\\end{table}\n\nA number of interesting points emerge from Table 2.\n\n{\\bf i)} It is appropriate to begin by comparing our results with previously \npublished tests. 
Starting with the work of Campbell and Wainwright \n\\cite{campbell\/wainwright:1977}, although the exact\nform of their output is unavailable, we observe notable agreement between\nthe ratio $A\/C$ and their results for the Lewis-Papapetrou and Bondi metrics\nas shown in Table 2. For the Griffiths metric our component times \nare somewhat faster\\footnote{The fourth component of $m^a$ for this metric\ngiven in \\cite{campbell\/wainwright:1977} is wrong. We \nbelieve this to be a misprint which would not alter the time reported.}.\nQuite naturally, a concern at the time was the storage requirements for the \ncalculations. They report storage requirements for the component method a \nfactor of $2 \\sim 5$ times that of the tetrad approach. Whereas storage is no \nlonger the concern that it once was, we note that we have observed storage \nrequirements for the component calculations only about $1.1 \\sim 1.5$ times\nthat of the tetrad method.\n\nThe paper by Allen et al. \\cite{allenetal:1994} is concerned with the ratio\n$A\/D$, \nthat is, the relative performance of the two null tetrad methods. The exact \nform of their output is unavailable. In general we find that the performance\nof the {\\em Debever} approach (Method B) is overestimated in \n\\cite{allenetal:1994}\\footnote{The errors in $\\Phi_{12}$ within the MapleV\n{\\tt debever} package affect only their result for the \nDebever-McLenaghan-Tariq metric (See footnote 4).}.\nAlthough the central thesis of \\cite{allenetal:1994} is the superiority of the \n{\\em Debever} approach, this rests principally on their analysis of the \nKerr-Newman metric. 
It is clear from Table 2 that this superior\nperformance is tetrad (coordinate) dependent.\n\n{\\bf ii)} For metrics of a general type (Rows 3 through 8 in Table\n2) we find that the standard NP approach is faster than the\nalternative proposed in \\cite{mclenaghan:1994} and \\cite{allenetal:1994}.\nWe find that this superior performance is not uniformly maintained \nif the general functions are replaced by specific ones. We have used the \nKerr-Newman metric as an example, and as can be seen from Rows 9 through 14 \nof Table 2, the relative performance of the two tetrad approaches\nis highly dependent on the tetrads (coordinates). However, whereas the\n{\\em Debever} approach can significantly outpace the NP approach (Rows 9 \nand 12), it is never far behind.\n\n{\\bf iii)} For metrics of a general type (Rows 3 through 8 in Table\n2) both tetrad approaches are superior to a calculation of the\ntetrad components from the coordinate components. This is exactly as one\nwould expect. The extra sums involved slow the coordinate approach down \n(compare Columns $B$ and $C$ as well as $E$ and $F$ ). Again, however, this \nsuperior performance is not \nuniformly maintained if the general functions are replaced by specific ones. \nColumns $A\/C$ and $D\/F$ for Rows 9 through 14 show that the classical approach \ncan rival the tetrad approaches even for a calculation of the tetrad \ncomponents.\n\n{\\bf iv)} It could be argued that the natural output of the coordinate\napproach, as \nregards the calculation of curvature, is simply the coordinate components of \nthe Ricci and Weyl tensors. For metrics of a general type the standard NP \napproach retains its superiority over the coordinate calculation (Column \n$A\/B$). This is not true for the modified tetrad approach (Column $D\/E$). 
\nColumns $A\/B$ and $D\/E$ indicate that at least in the Kerr-Newman metric the \ncalculation of the coordinate components of the Ricci and Weyl tensors is \nusually faster than the calculation of the tetrad components.\n\n\\section{Discussion}\nOur central conclusion is that the best algorithm, as regards speed, for the\ncomputer algebra calculation of curvature in spacetime is the one that \nminimizes the amount of time spent on simplification. This underlines the \nimportance of the careful choice of coordinates or tetrad in a computer \ncalculation and, more importantly, demonstrates that the user must be able to\nstyle the global simplification strategy in a manner most appropriate for the \nparticular problem being studied. An appropriate simplification strategy \ncan change an intractable problem into one which can be solved essentially \ninstantaneously\\footnote{Some simplification strategies appropriate to \nGRTensorII in MapleV are given in Appendix \\ref{app:D}.}. Our comparisons \n(Table 2) indicate that there is no\nuniformly superior algorithm. 
In the development of these comparisons we\nobserved that the differences between procedures optimized with respect to the\nglobal simplification strategy for the procedure were less than the variations\nwithin a given procedure for different simplification strategies.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrrr}\\hline\\hline\nCalculation & {\\bf Mix} & {\\bf Mix1} & {\\bf Mix2} & {\\bf Mix3}\\\\ \\hline\n$R_{(a)(b)}$ & 8.7 & 7.5 & 4.2 & 2.5 \\\\\n$C_{(a)(b)(c)(d)}$ & 8.5 & 1.1 & 7.4 & 1.1 \\\\ \\hline\nTotal &17.2 & 8.6 &11.6 & 3.6 \\\\ \\hline\n & & & & \\\\\n$R_{ab}$ & 8.5 & 8.5 & 8.5 & 8.5 \\\\\n$C_{abcd}$ &52.5 &13.7 &52.5 &13.7 \\\\ \\hline\nTotal &61.0 &22.2 &61.0 &22.2 \\\\\n\\hline\\hline\n\\end{tabular}\\\\\n\\vspace{\\baselineskip}\n\\begin{minipage}{.9\\linewidth}\nTable 3: Average CPU time in seconds for the calculation and \nsimplification of the bases components of the Ricci and Weyl tensors\n($R_{(a)(b)}$ and $C_{(a)(b)(c)(d)}$) compared to the same for the coordinate\ncomponents ($R_{ab}$ and $C_{abcd}$). {\\bf Mix} refers to the standard\n1-forms with trigonometric functions \\cite{mtw2} and a time dependent basis\ninner product. The same inner product is used for {\\bf Mix1} but the \ntrigonometric functions have been transformed away. For {\\bf Mix2} a\nconstant basis inner product has been used,\nand in {\\bf Mix3} the trigonometric functions have been transformed away. \nIn all cases the simplification procedures have been structured for optimum \nperformance.\n\\end{minipage}\n\\end{table}\nAlthough we have not found any algorithm to be uniformly superior, there\ncertainly are cases where the appropriate algorithm stands out. A case in\npoint is the mixmaster spacetime \\cite{mtw}\\footnote{This\nexample was suggested to us by Prof. C. W. Misner.}.\nHere an appropriate choice of basis\nremoves any angular dependence in the bases components of both the Ricci and\nWeyl tensors. 
The tetrad approach (not a null tetrad in this case) would\ncertainly be expected to outperform a coordinate calculation in this case.\nThis is confirmed in Table 3. We have considered both a constant\nbasis inner product (for {\\bf Mix1} and {\\bf Mix3}) and a time dependent one\n(for {\\bf Mix} and {\\bf Mix2}). The coordinate transformations are simply \n$\\Theta=\\cos\\theta$ and \n$\\Psi=\\sin\\psi$. It is clear that the classical coordinate calculation is no \nmatch for the basis approach exactly as one would guess. Interestingly, it is \nthe Weyl tensor calculation that improves under coordinate transformation, \nand the Ricci tensor under change in the basis inner product.\n\n\\subsection*{Acknowledgements}\nIt is a pleasure to thank Malcolm MacCallum for a number of useful\ndiscussions and suggestions. This work was supported in part by grants\n(to KL) from the Natural Sciences and Engineering Research Council of Canada\nand the Advisory Research Committee of Queen's University. PM and DP would\nlike to thank the Ministry of Colleges and Universities of Ontario for\nfinancial support through Ontario Graduate Scholarships.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\begin{figure}[bth]\n\\centering\n\\includegraphics[width=56mm,clip]{Fig\/phen-pot_new.eps}\n\\caption{Three examples of the modern $NN$ potential in $^1S_0$ (spin singlet and $S$-wave) channel: Bonn\\cite{Bonn}, Reid93\\cite{Reid93} and AV18\\cite{AV18}. }\n\\label{fig:potential}\n\\end{figure}\nIn 1935, in order to explain the origin of the nuclear force which binds protons and neutrons (nucleons) inside nuclei, Yukawa introduced virtual particles, $\\pi$ mesons, an exchange of which between nucleons produces the famous Yukawa potential\\cite{Yukawa}. 
Since then, both theoretically and experimentally, enormous efforts have been devoted to understanding the nucleon-nucleon ($NN$) potential, recent examples of which are displayed in Fig.\\ref{fig:potential}.\nThese modern $NN$ potentials are characterized as follows\\cite{Taketani, Machleidt}.\nAt long distances ($r\\ge 2$ fm) there exists weak attraction generated by the one-pion exchange potential (OPEP),\nand contributions from the exchange of multi-pions and\/or heavy mesons such as $\\rho$ make the overall attraction a little stronger at medium distances ( 1 fm $\\le r \\le $ 2 fm). \nAt short distances ($r \\le$ 1 fm), on the other hand, attraction turns into repulsion, and it becomes stronger as $r$ becomes smaller, forming the strong repulsive core\\cite{Jastrow}.\nAlthough the repulsive core is essential not only for describing the $NN$ scattering data but also for the stability of atomic nuclei, its origin has remained one of the most fundamental problems in nuclear physics for a long time\\cite{OSY}. \n\\begin{figure}[bt]\n\\centering\n\\includegraphics[width=50mm,angle=270, clip]{Fig\/Fig-potential.ps} \n\\caption{The central (effective central) $NN$ potential (MeV) as a function of $r$ (fm) for the singlet (triplet) at $m_\\pi\\simeq 530$ MeV in quenched QCD\\cite{IAH1}. }\n\\label{fig:potential_zero}\n\\end{figure}\n\nIn a recent paper\\cite{IAH1}, using lattice QCD simulations, three of the present authors have calculated the $NN$ potential, which possesses the above three features of the modern $NN$ potentials, as shown in Fig.\\ref{fig:potential_zero}. 
This result has received general recognition\\cite{nature}.\n\nThe above potentials have been extracted from the Schr\\\"odinger equation as\n\\begin{eqnarray}\nV_E({\\bf x} ) \\varphi_E(x) &=& \\left( E +\\frac{\\nabla^2}{2m}\\right)\\varphi_E({\\bf x})\n\\end{eqnarray}\nwith the reduced mass $m = m_N\/2$,\nusing the equal-time Bethe-Salpeter wave function $\\varphi_E({\\bf x})$, defined by\n\\begin{eqnarray}\n\\varphi_E({\\bf x}) &=& \\langle 0 \\vert N({\\bf x},0) N({\\bf 0}, 0)\\vert NN; E\\rangle ,\n\\end{eqnarray}\nwhere $ \\vert NN; E\\rangle $ is an eigen-state of two nucleons with energy $E$ and $N({\\bf x},t)$ is an interpolating operator for the nucleon. The potentials in Fig.\\ref{fig:potential_zero} are obtained at $E\\simeq 0$.\nFrom this definition it is clear that the potential $V_E({\\bf x})$ may depend on the value of energy $E$ and\/or the choice of the operator $N({\\bf x},t)$. In this talk, we focus on the energy dependence of the potential $V_E({\\bf x})$. In Sect.\\ref{sec:Ising}, $V_E({\\bf x})$ from an integrable model in 2 dimensions is considered\\cite{ABW}. The $NN$ potential calculated at $E\\not= 0$ in quenched QCD is presented in Sect.\\ref{sec:QCD}. Our discussions are given in Sect.\\ref{sec:discussion}. 
\n \n\\section{Potentials from an integrable model }\n\\label{sec:Ising}\nIn this section we consider the Ising field theory in 2 dimensions, where \nthe one-particle state of mass $M$ and the rapidity $\\theta$, denoted by $\\vert \\theta \\rangle$,\nhas momentum ${\\bf p}= M(\\cosh\\theta, \\sinh\\theta)$ with state normalization\n\\begin{eqnarray}\n\\langle \\theta^\\prime \\vert \\theta \\rangle &=& 4\\pi \\delta(\\theta - \\theta^\\prime) .\n\\end{eqnarray}\nThe Bethe-Salpeter wave function is defined by\n\\begin{eqnarray}\n\\Psi (r, \\theta) &=& i \\langle 0 \\vert \\sigma(x,0)\\sigma(0,0)\\vert \\theta, -\\theta\\rangle^{\\rm in}\n\\end{eqnarray}\nwhere $ \\vert \\theta, -\\theta \\rangle^{\\rm in}$ is the 2-particle in-state and $r = M x$. The spin field $\\sigma({\\bf x})$ is normalized as\n\\begin{eqnarray}\n\\langle 0 \\vert \\sigma({\\bf x})\\vert \\theta \\rangle = e^{-i {\\bf p}\\cdot{\\bf x}}.\n\\end{eqnarray}\nThe explicit form of this wave function has been calculated by Fonseca and Zamolodchikov\\cite{FZ} as\n\\begin{eqnarray}\n\\Psi(r,\\theta) &=& \\frac{e^{\\chi(r)\/2}}{\\cosh\\theta}\\left[\n\\Phi_+(r,\\theta)^2 \\cosh\\left(\\frac{\\varphi(r)}{2} -\\theta\\right) -\\Phi_-(r,\\theta)^2\\cosh\\left(\\frac{\\varphi(r)}{2}+\\theta\\right)\\right]\n\\end{eqnarray}\nwhere $\\Phi_\\pm$, $\\varphi$ and $\\chi$ satisfy\n\\begin{eqnarray}\n\\Phi_\\pm^\\prime (r,\\theta) &=& \\frac{1}{2}\\sinh\\left(\\varphi(r) \\pm\\theta\\right) \\Phi_\\mp(r,\\theta) , \\label{eq:Phi}\\\\\n\\frac{1}{r}\\left[ r\\varphi^\\prime(r)\\right]^\\prime &=& \\frac{1}{2}\\sinh\\left( 2\\varphi(r)\\right) ,\n\\label{eq:phi} \\\\\n\\frac{1}{r}\\left[ r\\chi^\\prime(r)\\right]^\\prime &=& \\frac{1}{2}\\left[1-\\cosh\\left( 2\\varphi(r)\\right)\\right] . 
\\label{eq:chi}\n\\end{eqnarray}\nIn the limit that $r\\rightarrow 0$, \nthe wave function has the expansion\n\\begin{eqnarray}\n\\Psi(r,\\theta) \\sim C r^{3\/4} \\sinh(\\theta) + O(r^{7\/4}),\n\\label{eq:short}\n\\end{eqnarray}\nwhich is expected from the operator product expansion (OPE),\n\\begin{eqnarray}\n\\sigma(x,0) \\sigma(0,0) \\sim G(r){\\bf 1} + c r^{3\/4}{\\cal E}(0) + \\cdots,\n\\end{eqnarray}\nwhere ${\\cal E}(x)$ is the mass operator of dimension 1.\n\nWe can solve the coupled equations (\\ref{eq:Phi}), (\\ref{eq:phi}) and (\\ref{eq:chi}) for $\\Phi_\\pm, \\varphi, \\chi$ numerically with their boundary conditions at $r=0$\\cite{ABW}. From the wave function, a rapidity-dependent potential can be obtained by\n\\begin{eqnarray}\nV_\\theta (r) &=& \\frac{\\Psi^{\\prime\\prime}(r,\\theta) + \\sinh^2\\theta\\, \\Psi(r,\\theta)}{\\Psi(r,\\theta)} .\n\\end{eqnarray}\nAs $r\\rightarrow 0$, however, from (\\ref{eq:short}),\nthe potential $V_\\theta$ becomes rapidity independent: \n\\begin{eqnarray}\nV_\\theta(r,\\theta) &\\sim& -\\frac{3}{16}\\frac{1}{r^2},\n\\end{eqnarray}\nwhere not only the power of $r$ is universally -2 but also \nthe overall coefficient is determined as $ 3\/4 \\times (3\/4-1) $ from the $r^{3\/4}$ behaviour of the wave function.\nIn Fig.\\ref{fig:potential_ising}, $r^2 V_\\theta(r)$, the potential multiplied by $r^2$, is plotted as a function of $r$ for several values of $\\theta$.\nWe observe that an energy(rapidity)-dependence of potentials is small at $\\theta \\le 0.6$\\footnote{\nNote that the singularity of the potential for $\\theta=1.0$ is caused by the vanishing of the corresponding wave function at this point.}. In particular, potentials are almost identical between $\\theta=0$ and $\\theta=0.3$. The energy dependence of the Ising potential seems weak at low energy. Although the physics in the Ising model is vastly different from QCD,\nwe hope that a similar property holds for the $NN$ potential. 
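As an independent numerical illustration of this short-distance universality (our own check, not part of the original analysis; the grid, the sample rapidities and the tolerance are arbitrary choices), one can tabulate the leading form $\\Psi \\sim C r^{3\/4}\\sinh\\theta$ and recover $V_\\theta$ by finite differences; $r^2 V_\\theta(r)$ then approaches $-3\/16$ for any $\\theta \\ne 0$:

```python
import numpy as np

def extract_potential(psi, r, theta):
    # V_theta(r) = (Psi'' + sinh^2(theta) * Psi) / Psi, with Psi'' from central differences
    d2psi = np.gradient(np.gradient(psi, r), r)
    return (d2psi + np.sinh(theta) ** 2 * psi) / psi

r = np.linspace(1e-3, 1e-2, 1001)
for theta in (0.3, 1.0):
    psi = r ** 0.75 * np.sinh(theta)      # leading short-distance behaviour of the wave function
    v = extract_potential(psi, r, theta)
    print(theta, r[500] ** 2 * v[500])    # both values lie close to -3/16 = -0.1875
```

The overall constant $C$ cancels in the ratio, so the recovered $r^2 V_\\theta$ is rapidity independent up to the subleading $\\sinh^2\\theta\\, r^2$ term, exactly as stated above.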
In the next section we investigate an energy dependence of the $NN$ potentials in quenched QCD.\n\\begin{figure}[bt]\n\\centering\n\\includegraphics[width=56mm, angle=270, clip]{Fig\/Ising_pot1.eps} \n\\caption{The Ising potential (multiplied by $r^2$) $r^2 V_\\theta (r)$ for $\\theta=1.0$ (dotted), $\\theta=0.6$ (dot-dashed), $\\theta = 0.3$ (dashed) and $\\theta = 0$ (solid). }\n\\label{fig:potential_ising}\n\\end{figure} \n\n\\section{Nucleon-nucleon potentials at non-zero energy in quenched QCD}\n\\label{sec:QCD}\n\\begin{figure}[bth]\n\\centering\n\\includegraphics[width=65mm, clip]{Fig\/W1S0_APBC_PBC.t11.r13_16.eps} \n\\caption{ The $NN$ wave function with APBC (red bars) at $t=11 a$, together with the fit by the Green's function at large distances (green crosses).}\n\\label{fig:wave_APBC}\n\\end{figure} \n\nWe follow the strategy in Ref.\\cite{IAH1,IAH2} to define the wave function and to calculate the potential through it. Let us explain our set-up of numerical simulations. Gauge configurations are generated in quenched QCD on a $32^3\\times 48$ lattice with the plaquette gauge action at $\\beta = 5.7$, which corresponds to $a\\simeq 0.137$ fm. We employ the Wilson quark action with anti-periodic boundary condition (APBC) in space at the hopping parameter $K=0.1665$, corresponding to $m_\\pi\\simeq 530$ MeV and $m_N \\simeq 1330$ MeV. The minimum momentum is given by ${\\bf p}_{\\rm min} = \\displaystyle\\frac{\\pi}{32a}(1,1,1)$, which leads to $\\vert {\\bf p}_{\\rm min}\\vert \\simeq 240$ MeV and\n$E =\\displaystyle \\frac{k^2}{m_N} \\simeq 50$ MeV, where $E$ is the non-relativistic energy in the center of mass system. 
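The quoted momentum and energy scales can be reproduced with a quick back-of-the-envelope computation (ours, not the paper's; the conversion constant $\\hbar c \\simeq 197.327$ MeV fm is an assumed input):

```python
import math

hbar_c = 197.327   # MeV * fm (conversion constant; assumed value)
a = 0.137          # lattice spacing in fm
L = 32             # spatial lattice extent in lattice units
m_N = 1330.0       # nucleon mass in MeV

# APBC shifts each momentum component by half a unit, so the minimum momentum
# is (pi / (L a)) per direction, i.e. p_min = (pi / (L a)) * (1, 1, 1).
p_min = math.sqrt(3.0) * math.pi / (L * a) * hbar_c   # |p_min| in MeV, close to the quoted ~240 MeV
E = p_min ** 2 / m_N                                  # non-relativistic CM energy E = k^2 / m_N, ~50 MeV
```

This merely confirms that the stated $\\vert {\\bf p}_{\\rm min}\\vert \\simeq 240$ MeV and $E \\simeq 50$ MeV are mutually consistent for the given lattice parameters.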
2000 configurations are accumulated to obtain our result.\n\nWe first determine the value of $k^2$ by fitting the wave function at large distances ($13 a \\le \\vert {\\bf x} \\vert \\le 16 a$) \nwith the Green's function of the Helmholtz equation on an $L^3$ box, given by \n\\begin{eqnarray}\nG({\\bf x}; k^2) &=& \\frac{1}{L^{3}}\\sum_{{\\bf n}\\in \\Gamma} \\frac{e^{i(2\\pi\/L) {\\bf n}\\cdot{\\bf x}}}{(2\\pi\/L)^2 {\\bf n}^2 -k^2}, \\quad\n\\Gamma =\\left\\{ \\left. \\left(n_x+\\frac{1}{2},n_y+\\frac{1}{2},n_z+\\frac{1}{2}\\right)\\right\\vert n_x,n_y,n_z\\in {\\bf Z}\\right\\},\n\\end{eqnarray}\nas plotted in Fig.\\ref{fig:wave_APBC}. The fit gives %\n$ k^2 a^2=0.030(4)$ with %\n$\\chi^2\/{\\rm dof} \\simeq 1.8$ at $t=11 a$, \nwhich corresponds to $E\\simeq 50 $ MeV in physical units.\n\n\\begin{figure}[bt]\n\\centering\n\\includegraphics[width=63mm,clip]{Fig\/V1S0_APBC_PBC.t9.r13_16.eps} \n\\includegraphics[width=63mm,clip]{Fig\/V1S0_APBC_PBC.t9.r13_16.s.eps} \n\\caption{ Left: The central $NN$ potentials for the $^1S_0$ state with APBC (red bars) and PBC (blue crosses) in quenched QCD at $t=9a$.\nRight: Its zoom-in.} \n\\label{fig:potential_APBC}\n\\end{figure} \n\nIn Fig.\\ref{fig:potential_APBC}, the central $NN$ potential for the $^1S_0$ state with APBC ( $E\\simeq 50$ MeV) is plotted as a function of $r$ at $t=9a$, together with the one with PBC ( $E\\simeq 0$ ). \nFluctuations of data with APBC at large distances ( $ r\\ge 1.5$ fm ) are mainly caused by \ncontamination from excited states, together with statistical noise. Data at larger $t$ are needed to reduce such contamination from excited states, though statistical errors also become larger.\nThe non-trivial part of the potential at $ r < 1.5$ fm, on the other hand, is less affected by such contamination.\nAs seen from Fig. 
\\ref{fig:potential_APBC}, the $NN$ potentials are almost identical between $E\\simeq 0$ and $E\\simeq 50$ MeV.\n\n\n\\section{Discussion}\n\\label{sec:discussion}\nAs discussed in the introduction, the potential defined from the Bethe-Salpeter wave function depends on the energy:\n\\begin{eqnarray}\nV_E({\\bf x})\\varphi_E({\\bf x}) &=& \\left( E + \\frac{\\nabla^2}{2m}\\right) \\varphi_E({\\bf x}) .\n\\end{eqnarray}\nIn \\cite{IAH2,Aoki1}, it is shown that the energy-dependent potential can be converted to the energy-independent but non-local potential as\n\\begin{eqnarray}\n\\int d^3 y\\, U({\\bf x},{\\bf y}) \\varphi_E({\\bf y}) &=& \\left( E + \\frac{\\nabla^2}{2m}\\right) \\varphi_E({\\bf x}) = V_E({\\bf x})\\varphi_E({\\bf x}) .\n\\end{eqnarray}\nWe then apply the derivative expansion to this non-local potential\\cite{IAH2} as\n\\begin{eqnarray}\nU({\\bf x}, {\\bf y} ) &=& V({\\bf x}, \\nabla)\\delta({\\bf x} - {\\bf y}) \\\\\nV({\\bf x}, \\nabla) &=& V_0(r) + V_{\\sigma}(r) ({\\bf \\sigma}_1\\cdot {\\bf \\sigma}_2) + V_T(r) \\, S_{12} + O(\\nabla)\n\\end{eqnarray}\nwhere $r = \\vert {\\bf x}\\vert$, $\\sigma_{1,2}$ represents the spin of nucleons, and \n\\begin{eqnarray}\nS_{12} &=&\\frac{3}{r^2} ({\\bf\\sigma}_1\\cdot{\\bf x}) ({\\bf\\sigma}_2\\cdot{\\bf x})\n-({\\bf \\sigma}_1\\cdot {\\bf \\sigma}_2) \n\\end{eqnarray}\nis the tensor operator. Our result in the previous section indicates that non-locality is very weak.\n\nThe analysis for the potentials in the Ising field theory in 2 dimensions suggests an interesting possibility that the universality of potentials at short distance can be understood from a point of view of the operator product expansion (OPE). If this is the case, the origin of the repulsive core might be explained by the OPE. We are currently working on this problem. \n\nBefore closing this talk,\nwe consider an alternative possibility to construct the energy-independent local potential\\cite{Aoki1}. 
The inverse scattering theory suggests that there exists a unique energy-independent potential, which gives the correct phase shift at all energies.\nHere we propose a new method to construct the energy-independent local potential from $V_E$.\nFor simplicity, the one-dimensional case is considered. Supposing that $\\Phi(x) \\equiv \\Lambda_E(x) \\varphi_E(x)$ satisfies the Schr\\\"odinger equation with the energy-independent local potential $V(x)$,\n\\begin{eqnarray}\n\\left( -\\frac{d^2}{d x^2} + V(x)\\right) ( \\Lambda_E(x) \\varphi_E(x) ) &=& E (\\Lambda_E (x) \\varphi_E(x) ),\n\\end{eqnarray}\nwe obtain the following differential equation,\n\\begin{eqnarray}\nV(x) \\Lambda_E(x) &=& V_E (x)\\Lambda_E(x) + \\Lambda_E^{\\prime\\prime}(x) +2\\Lambda_E^\\prime(x)(\\log\\varphi_E(x) )^\\prime .\n\\end{eqnarray}\n(Here we set $2m = 1$.)\nIf $V(x)$ is given, $\\Lambda_E(x)$ can be easily obtained from this equation. \nWe first consider a finite box with size $L$, which allows only discrete momenta, $k_n \\simeq 2\\pi n\/L$, $n=0,1,2,\\cdots .$ Once $E_n = k_n^2$ is given, $\\varphi_E(x) $ becomes zero\nat $n+1$ points $\\Omega_n =\\{x_0,x_1, \\cdots, x_n \\} $. We then have\n\\begin{eqnarray}\n0 &=& K_E (x_i) \\Lambda_E (x_i) + 2 \\Lambda_E^\\prime (x_i)\\varphi_E^\\prime(x_i)\n\\end{eqnarray}\nfor $x_i \\in \\Omega_n$, where $K_E(x) \\equiv (E + d^2\/d x^2 ) \\varphi_E(x) $.\nSince $\\Omega_n$ becomes dense in $[0,L]$ in the $n\\rightarrow\\infty$ limit, $V(x)$ can be constructed as\n\\begin{eqnarray}\nV(x) &=& \\lim_{E\\rightarrow\\infty} \\left\\{ V_E(x) - 2 X_E(x) (\\log\\varphi_E(x) )^\\prime - X_E^\\prime(x) + X_E(x)^2\\right\\}\n\\label{eq:local}\n\\end{eqnarray}\nwhere $X_E(x)$ is an interpolation of $X_E(x_i) =\\displaystyle\\frac{K_E(x_i)}{2\\varphi_E^\\prime(x_i)}$\nwith $x_i\\in \\Omega_n$. 
If the limit (\\ref{eq:local}) exists,\nthe energy-independent local potential $V(x)$ can be obtained.\nIn the 3 dimensional case, we first introduce the polar coordinate, and then apply the above procedure in 1 dimension to the radial variable $r$ with the fixed angular momentum $l$.\n\n\\section*{ Acknowledgements }\nOur simulations have been performed with IBM Blue Gene\/L at KEK under a support of its Large Scale simulation Program, Nos. 06-21, 07-07, 08-19.\nWe are grateful for authors and maintainers of {\\tt CPS++}\\cite{cps},\nof which a modified version is used for measurement done in this work.\nJ.B. and S.A. are grateful to the Max-Planck-Institut f\\\"ur Physik for its hospitality.\nThis work was supported in part by the Hungarian National Science Fund OTKA (under T049495) and the Grant-in-Aid of the Japanese Ministry of Education, Science, Sports and Culture (Nos. 18540253, 19540261, 20028013, 20340047 ). \n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{introduction}\n\nLeveraging feedback to obtain the channel state information at the transmitter (CSIT) enables a wireless system to adapt its transmission\nstrategy to the varying wireless environment. The growing number of wireless users, as well as their increasing demands for higher data rate\nservices impose a significant burden on the feedback link. In particular for OFDMA systems, which have emerged as the core technology in 4G and\nfuture wireless systems, full CSIT feedback may become prohibitive because of the large number of resource blocks. This motivates more efficient\nfeedback design approaches in order to achieve performance comparable to a full CSIT system with reduced feedback. In the recent years,\nconsiderable work and effort has been focused on limited or partial feedback design, e.g., see \\cite{love08} and the references therein. 
To the\nbest of our knowledge, most of the existing partial feedback designs are homogeneous, i.e., users' feedback consumption does not adapt to the\nunderlying channel statistics. In this paper, we propose and analyze a heterogeneous feedback design, which aligns users' feedback needs to the\nstatistical properties of their wireless environments.\n\nCurrent homogeneous feedback design in OFDMA systems groups the resource blocks into subbands \\cite{zhu09}, which form the basic scheduling and\nfeedback units. Since the subband granularity is determined by the frequency selectivity, or the coherence bandwidth of the underlying channel,\nit would be beneficial to adjust the subband size of different users according to their channel statistics. Empirical measurements and analysis\nfrom the channel modeling field have shown that the root mean square (RMS) delay spread, which is closely related to the coherence bandwidth, is\nboth location and environment dependent \\cite{asplund06, huang12}. The typical RMS delay spread for an indoor environment in WLAN does not\nexceed hundreds of nanoseconds; whereas in the outdoor environment of a cellular system, it can be up to several microseconds. Intuitively,\nusers with lower RMS delay spread could model their channel with a larger subband size and require less feedback resource than the users with\nhigher RMS delay spread. Herein, we investigate this heterogeneous feedback design in a multiuser opportunistic scheduling framework where the\nsystem favors the user with the best channel condition to exploit multiuser diversity \\cite{knopp95, viswanath02}. There are two major existing\npartial feedback strategies for opportunistic scheduling: one is based on thresholding, where each user provides one bit of feedback per subband\nto indicate whether or not the particular channel gain exceeds a predetermined or optimized threshold \\cite{sanayei07, hassel07, chen08,\npugh10}. 
The other promising strategy currently considered in practical systems such as LTE \\cite{sesia11} is the best-M strategy, where the\nreceivers order and convey the M best channels \\cite{jung07, ko07, choi07, choi08, pedersen09, leinonen09, donthi11, hur11}. The best-M partial\nfeedback strategy is embedded in the proposed heterogeneous feedback framework. Apart from the requirement of partial feedback to save feedback\nresource, the study of imperfections is also important to understand the effect of channel estimation error and feedback delay on the\nheterogeneous feedback framework. These imperfections are also considered in our work.\n\n\\subsection{Focus and Contributions of the Paper}\nAn important step towards heterogeneous feedback design is leveraging the ``match\" among coherence bandwidth, subband size and partial feedback.\nUnder a given amount of partial feedback, if the subband size is much larger than the coherence bandwidth, then multiple independent channels\ncould exist within a subband and the subband-based feedback could only be a coarse representative of the channels. On the other hand, if the\nsubband size is much smaller than the coherence bandwidth, then channels in adjacent subbands are likely to be highly correlated and requiring\nfeedback on adjacent subbands could be a waste of resource; or a small amount of subband-based partial feedback may not be enough to reflect the\nchannel quality. In order to support this heterogeneous framework, we first consider the scenario of a general correlated channel model with one\ncluster of users with the same coherence bandwidth. The subband size is adjustable and each user employs the best-M partial feedback strategy to\nconvey the M best channel quality information (CQI), which is defined to be the subband average rate. The simulation results show that a\nsuitably chosen subband size yields higher average sum rate under partial feedback, confirming the aforementioned intuition. 
This motivates the\ndesign of heterogeneous feedback to ``match\" the subband size to the coherence bandwidth. The above-mentioned study, though closely reflects the\nrelevant mechanism, is not analytically tractable due to two main reasons. Firstly, the general correlated channel model complicates the\nstatistical analysis of the CQI. Secondly, the use of subband average rate as CQI makes it difficult to analyze the multi-cluster scenario.\nTherefore, a simplified generic channel model is needed that balances the competing needs of analytical tractability and practical relevance.\n\nIn order to facilitate analysis, a subband fading channel model is developed that generalizes the widely used frequency domain block fading\nchannel model. The subband fading model is suited for the multi-cluster analysis. According to the subband fading model, the channel frequency\nselectivity is flat within each subband, and independent across subbands. Since the subband sizes are different across different clusters, the\nnumber of independent channels are heterogeneous across clusters and this yields heterogeneous partial feedback design. Another benefit of the\nsubband fading model is that the CQI becomes the channel gain and thus facilitate further statistical analysis. Under the multi-cluster subband\nfading model\\footnote[1]{An initial treatment of a two-cluster scenario was first presented in \\cite{huang11}.} and the assumption of perfect\nfeedback, we derive a closed form expression for the average sum rate. Additionally, we approximate the sum rate ratio for heterogeneous design,\ni.e., the ratio of the average sum rate obtained by a partial feedback scheme to that achieved by a full feedback scheme, in order to choose\ndifferent best-M for users with different coherence bandwidth. 
We also compare and demonstrate the potential of the proposed heterogeneous\nfeedback design against the homogeneous case under the same feedback constraint in our simulation study.\n\nThe average sum rate helps in understanding the system performance with perfect feedback. In practical feedback systems, imperfections such as\nchannel estimation error and feedback delay occur. These inevitable factors degrade the system performance by causing outage\n\\cite{piantanida09, isukapalli10}. Therefore, rather than using the average sum rate as the performance metric, we employ the notion of average\ngoodput \\cite{lau08, wu10, akoum10} to incorporate the outage probability. Under the multi-cluster subband fading model, we perform analysis on the\naverage goodput and the average outage probability with heterogeneous partial feedback. In addition to examining the impact of\nimperfect feedback on multiuser diversity \\cite{ma05, kuhne08}, we also investigate how to adapt and optimize the average goodput in the presence of\nthese imperfections. We consider both the fixed rate and the variable rate scenarios, and utilize bounding techniques and an efficient\napproximation to derive near-optimal strategies.\n\nTo summarize, the contributions of this paper are threefold: a conceptual heterogeneous feedback design framework to adapt the feedback amount to\nthe underlying channel statistics, a thorough analysis of both perfect and imperfect feedback systems under the multi-cluster subband fading\nmodel, and the development of approximations and near-optimal approaches to adapt and optimize the system performance. The rest of the paper is\norganized as follows. The motivation under the general correlated channel model and the development of the system model are presented in Section\n\\ref{system}. Section \\ref{perfect} deals with perfect feedback, and Section \\ref{imperfect} examines imperfect feedback due to channel\nestimation error and feedback delay. 
Numerical results are presented in Section \\ref{numerical}. Finally, Section \\ref{conclusion} concludes the\npaper.\n\n\n\\section{System Model}\\label{system}\n\n\\subsection{Motivation for Heterogeneous Partial Feedback}\\label{motivation}\nThis part provides justification for the adaptation of subband size with one cluster of users under the general correlated channel model, and\nmotivates the design of heterogeneous partial feedback for the multi-cluster scenario in Section \\ref{multicluster}. Consider a downlink\nmultiuser OFDMA system with one base station and $K$ users. One cluster of users is assumed in this part, and users in this cluster are assumed to\nexperience the same frequency selectivity. The system consists of $N_\\mathsf{c}$ subcarriers. $H_{k,n}$, the frequency domain channel transfer\nfunction between the transmitter and user $k$ at subcarrier $n$, can be written as:\n\\begin{equation} \\label{system:eq_1}\nH_{k,n}=\\sum_{l=0}^{L-1}\\sigma_lF_{k,l}\\exp\\left(-\\frac{j2\\pi ln}{N_\\mathsf{c}}\\right),\n\\end{equation}\nwhere $L$ is the number of channel taps, $\\sigma_l$ for $l=0,\\ldots,L-1$ represents the channel power delay profile and is normalized, i.e.,\n$\\sum_{l=0}^{L-1}\\sigma_l^2=1$, and $F_{k,l}$ denotes the discrete time channel impulse response, which is modeled as a complex Gaussian\nrandom process with zero mean and unit variance, i.e., $\\mathcal{CN}(0,1)$, i.i.d. across $k$ and $l$. Only the fast fading effect is considered in\nthis paper, i.e., the effects of path loss and shadowing are assumed to be ideally compensated by power control\\footnote[2]{This assumption has\nbeen employed in \\cite{kuhne08, leinonen09, pugh10} to simplify the scheduling policy. With the same average SNR, the opportunistic scheduling\npolicy is also long-term fair. When different average SNRs are assumed, the proportional-fair scheduling policy \\cite{viswanath02} can be\nutilized.}. 
The received signal of user $k$ at subcarrier $n$ can be written as:\n\\begin{equation} \\label{system:eq_2}\nu_{k,n}=\\sqrt{P_{\\mathsf{c}}}H_{k,n}s_{k,n}+v_{k,n},\n\\end{equation}\nwhere $P_{\\mathsf{c}}$ is the average received power per subcarrier, $s_{k,n}$ is the transmitted symbol and $v_{k,n}$ is the additive white\nnoise distributed as $\\mathcal{CN}(0,\\sigma_{n_{\\mathsf{c}}}^2)$. From (\\ref{system:eq_1}), it can be shown that $H_{k,n}$ is distributed as\n$\\mathcal{CN}(0,1)$. The channels at different subcarriers are correlated, and the correlation coefficient between subcarriers $n_1$ and $n_2$\ncan be described as follows:\n\\begin{equation} \\label{system:eq_3}\n\\mathrm{cov}(H_{k,n_1},H_{k,n_2})=\\sum_{l=0}^{L-1}\\sigma_l^2\\exp\\left(-\\frac{j2\\pi l(n_2-n_1)}{N_\\mathsf{c}}\\right).\n\\end{equation}\n\nIn general, adjacent subcarriers are highly correlated. In order to reduce feedback needs, $R_\\mathsf{c}$ subcarriers are formed as one resource\nblock, and $\\eta$ resource blocks are grouped into one subband\\footnote[3]{E.g., in LTE, one resource block consists of $12$ subcarriers, and\none subband can contain $1$ to $8$ resource blocks \\cite{dahlman11}.}. Thus, there are $N=\\frac{N_\\mathsf{c}}{R_\\mathsf{c}}$ resource blocks and\n$\\frac{N}{\\eta}$ subbands\\footnote[4]{Throughout the paper, $N_\\mathsf{c}$, $N$ and $\\eta$ are assumed to be powers of $2$. A more general\ntreatment is possible but this will result in edge effects making for more complex notation without much insight.}. In this manner, each user\nperforms subband-based feedback to enable opportunistic scheduling at the transmitter. Since the channels are correlated and there is one CQI to\nrepresent a given subband, the CQI is a function of all the individual channels within that subband. 
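The correlated model (\ref{system:eq_1}) and its frequency correlation (\ref{system:eq_3}) are easy to exercise numerically. The following is a minimal sketch (all parameter values are illustrative, not those used in the paper):

```python
import numpy as np

# Illustrative parameters (not the paper's): Nc subcarriers, L taps, K users
rng = np.random.default_rng(0)
Nc, L, K = 64, 8, 4
sigma2 = np.full(L, 1.0 / L)   # normalized power delay profile: sum sigma_l^2 = 1
sigma = np.sqrt(sigma2)

# Taps F_{k,l} ~ CN(0,1), i.i.d. across users k and taps l
F = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)

# Frequency response H_{k,n} = sum_l sigma_l F_{k,l} exp(-j 2 pi l n / Nc), as in (1)
l, n = np.arange(L), np.arange(Nc)
H = (sigma * F) @ np.exp(-1j * 2 * np.pi * np.outer(l, n) / Nc)   # K x Nc

# Theoretical correlation between subcarriers at spacing d, as in (3)
def cov_theory(d):
    return np.sum(sigma2 * np.exp(-1j * 2 * np.pi * l * d / Nc))
```

By construction `cov_theory(0)` equals one (unit variance), and adjacent subcarriers are much more strongly correlated than widely spaced ones, which is the effect the resource-block and subband grouping exploits.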
Herein, we employ the following subband\n(aggregate) average rate $S_{k,r}$ as the functional form\\footnote[5]{This functional form employs the capacity formula and the resulting\neffective SNR has a geometric mean interpretation. Other functional forms of the CQI exist in practical systems such as exponential effective\nSNR mapping (EESM) \\cite{ericsson03, song11, donthi11j} and mutual information per bit (MMIB) \\cite{wan06, fan11} to map the effective SNR to\nthe block-error-rate (BLER) curve. The intuitions are similar: to obtain a representative CQI as a single performance measure corresponding to\nthe rate performance.} \\cite{forney98, al96} of the CQI for user $k$ at subband $r$:\n\\begin{equation} \\label{system:eq_4}\nS_{k,r}\\triangleq\\frac{1}{\\eta R_{\\mathsf{c}}}\\sum_{n=(r-1)\\eta R_{\\mathsf{c}}+1}^{r\\eta\nR_{\\mathsf{c}}}\\log_2\\left(1+\\frac{P_{\\mathsf{c}}|H_{k,n}|^2}{\\sigma_{n_{\\mathsf{c}}}^2}\\right).\n\\end{equation}\n\nEach user employs the best-M partial feedback strategy and conveys back the $M$ best CQI values selected from $S_{k,r}, 1\\leq r\\leq\n\\frac{N}{\\eta}$. A detailed description of the best-M strategy can be found in \\cite{choi08, leinonen09, hur11}. After the base station receives\nfeedback, it performs opportunistic scheduling and selects the user $k$ for transmission at subband $r$ if user $k$ has the largest CQI at\nsubband $r$. Also, it is assumed that if no user reports CQI for a certain subband, scheduling outage happens and the transmitter does not\nutilize it for transmission.\n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.55\\linewidth]{Figure\/fig1.eps}\n\\caption{Comparison of average sum rate for different subband sizes ($\\eta=1,2,4$) and partial feedback ($M=2,4$) with respect to the number of\nusers. A general correlated channel model is assumed with an exponential power delay profile. 
($N_{\\mathsf{c}}=256$, $N=32$, $L=16$, $\\delta=4$,\n$\\frac{P_{\\mathsf{c}}}{\\sigma_{n_{\\mathsf{c}}}^2}=10$ dB)} \\label{fig_1}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.8\\linewidth]{Figure\/fig2.eps}\n\\caption{Illustration of the multi-cluster subband fading channel model for two different clusters with $16$ resource blocks. The subband sizes\nequal $2$ and $4$ for the two different clusters respectively. According to the subband fading model, the channel frequency selectivity is flat\nwithin each subband, and independent across subbands. The subband sizes can be heterogeneous across clusters, and this leads to heterogeneous\nchannel frequency selectivity across clusters. The subband fading model approximates the general correlated channel model, and is useful for\nstatistical analysis.} \\label{fig_2}\n\\end{figure}\n\nNow we demonstrate the need to adapt the subband size to achieve the potential ``match\" among coherence bandwidth, subband size and partial\nfeedback through a simulation example. 
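The CQI computation in (\ref{system:eq_4}), the best-M selection, and the scheduling rule described above can be sketched as follows (a minimal sketch; the parameter values and per-subcarrier gains are illustrative, not those of the simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
K, n_subbands, n_sc, M = 6, 8, 4, 2   # users, subbands, subcarriers/subband, best-M
snr = 10.0                            # P_c / sigma^2 in linear scale (10 dB)

# Illustrative per-subcarrier channel gains |H|^2 (Rayleigh fading -> exponential)
gains = rng.exponential(scale=1.0, size=(K, n_subbands, n_sc))

# CQI: subband average rate over the subcarriers of each subband, as in (4)
cqi = np.mean(np.log2(1.0 + snr * gains), axis=2)         # K x n_subbands

# Best-M partial feedback: each user reports only its M largest CQIs
reported = np.full((K, n_subbands), -np.inf)
for k in range(K):
    best = np.argsort(cqi[k])[-M:]
    reported[k, best] = cqi[k, best]

# Opportunistic scheduling: per subband, serve the user with the largest reported
# CQI; a subband that nobody reported is in scheduling outage (None)
scheduled = [None if np.all(np.isneginf(reported[:, r]))
             else int(np.argmax(reported[:, r])) for r in range(n_subbands)]
```

Sweeping the subband size while holding the total feedback fixed in such a loop is the experiment behind the comparison in Fig. \ref{fig_1}.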
The channel is modeled according to the exponential power delay profile \\cite{weinfurtner02, mckay08,\neslami11}: $\\sigma_l^2=\\frac{1-\\exp(-1\/\\delta)}{1-\\exp(-L\/\\delta)}\\exp\\left(-\\frac{l}{\\delta}\\right)$ for $0\\leq l\\leq L-1$. For any $\\epsilon>0$, it is now shown that:\n\\begin{align}\n\\mathbb{P}\\left(\\left|\\frac{s(\\check{\\chi}_b)}{s(\\mathbb{E}[\\check{\\chi}_b])}-1\\right|\\geq\\epsilon\\right)=&\\mathbb{P}\\left(\\left|\\frac{s(\\check{\\chi}_b)-s(\\mathbb{E}[\\check{\\chi}_b])}{s(\\mathbb{E}[\\check{\\chi}_b])}\\right|\\geq\\epsilon\\right)\\notag\\\\\n\\label{appendix:eq_6}&\\mathop{\\leq}\\limits^{(a)}\\mathbb{P}\\left(\\frac{s(|\\check{\\chi}_b-\\mathbb{E}[\\check{\\chi}_b]|)}{s(\\mathbb{E}[\\check{\\chi}_b])}\\geq\\epsilon\\right)\\mathop{\\rightarrow}\\limits^{(b)}0,\n\\end{align}\nwhere (a) follows from the concave and monotonically increasing property of $s(\\cdot)$: $|s(x)-s(y)|\\leq s(|x-y|)$ for $x,y\\geq a\\geq0$ \\cite{simon02}; (b) follows\nfrom applying L'Hospital's rule. Therefore, there exists a unique global optimal $\\beta_1$ which maximizes\n$\\mathcal{I}_3^{\\mathrm{A}}(\\beta_1,K)$.\n\n\n\n\n\n\\section*{Acknowledgment}\nThe authors want to express their deep appreciation to the anonymous reviewers and the Associate Editor for their many valuable comments and\nsuggestions, which have greatly helped to improve this paper.\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Strongly Stable Matching}\n\nA matching $M$ of men and women is {\\em strongly stable} if there is no blocking pair $(m,w)$ such that they are not married in $M$\nbut either (1) both of them prefer each other to their partners in $M$, or (2) one of them prefers the other to his\/her partner in $M$ and the other one is\nindifferent.\nFormally, \na pair of man and woman $(m,w)$ is {\\em blocking for a strongly stable matching} $M$ if they are\nnot matched in 
$M$ and\\\\\n\n\\ifdefined\\ISBOOK\n$((mrank[m][w] \\leq mrank[m][M(m)]) \\wedge $\n$ (wrank[w][m] < wrank[w][M(w)]))$\\\\\n$ \\vee ((mrank[m][w] < mrank[m][M(m)]) \\wedge $\n$(wrank[w][m] \\leq wrank[w][M(w)])). $\n\\else\n\\hspace*{0.2in} $((mrank[m][w] \\leq mrank[m][M(m)]) \\wedge $\\\\\n\\hspace*{0.2in} $ (wrank[w][m] < wrank[w][M(w)]))$\\\\ \n$ \\vee ((mrank[m][w] < mrank[m][M(m)]) \\wedge $\\\\\n\\hspace*{0.2in} $(wrank[w][m] \\leq wrank[w][M(w)])). $\n\\fi\n\n\nAs in the superstable matching algorithm, we let $mpref[i][k]$ denote the set of women ranked $k$ by man $i$.\nAs before, we will use $G[i]$ to denote the $mrank$ that the man $i$ is currently considering. Initially, $G[i]$ is $1$ for all $i$, i.e.,\neach man proposes to all his top choices. We define a bipartite graph $Y(G)$ on the set of men and women with respect to any $G$ as follows.\nIf a woman does not get any proposal in $G$, then she is unmatched. If she receives multiple proposals, then there is an edge\nfrom that woman to all men in the most preferred rank. For a superstable matching, we required $Y(G)$ to be a perfect matching.\nFor a strongly stable matching, we only require $Y(G)$ to contain a perfect matching.\n\nWe first note that a strongly stable matching may not exist.\nThe following example is taken from \\cite{IRVING1994261}. \\\\\n\\\\\n$m1: w1, w2$\\\\\n$m2:$ both choices are ties\\\\\n\\\\\n$w1: m2, m1$\\\\\n$w2: m2, m1$\n\n\nThe matching $\\{(m1, w1), (m2, w2)\\}$ is blocked by the pair $(m2, w1)$: $w1$ strictly prefers $m2$ and $m2$ is indifferent between $w1$ and $w2$.\nThe only other matching is $\\{(m1, w2), (m2, w1)\\}$. 
This matching is blocked by $(m2, w2)$: $w2$ strictly prefers $m2$ and $m2$ is indifferent between\n$w1$ and $w2$.\n\n\\remove{\nFor strongly stable matchings, we will focus only on finding the least proposal vector $G$ such that $Y(G)$ contains a perfect matching.\nTo that end, we define the predicate $B(G)$ to be true if $G$ is the least vector such that $Y(G)$ has a perfect matching.\nWe show that whenever there is a perfect matching in $G_1$ and $G_2$ and there is no perfect matching in $G_1 \\cap G_2$, then\nthere exists $G_3 < G_1 \\cap G_2$ that has a perfect matching.\n\n}\n\nConsider any bipartite graph with an equal number of men and women. If there is no perfect matching in the graph,\nthen by Hall's theorem there exists a set of men of size $r$ who collectively are adjacent to fewer than $r$ women.\nWe define the {\\em deficiency} of a subset $Z$ of men as $|Z| - |N(Z)|$ where $N(Z)$ is the {\\em neighborhood} of $Z$ (the set of vertices that are adjacent to\nat least one vertex in $Z$). The deficiency $\\delta(G)$ is the maximum deficiency taken over all subsets of men.\nWe call a subset of men $Z$ {\\em critical} if it is maximally deficient and does not contain any maximally deficient proper subset.\nOur algorithm to find a strongly stable matching is simple. \nWe start with $G$ as the global state vector with top choices for all men.\nIf $Y(G)$ has a perfect matching, \nwe are done. The perfect matching in $Y(G)$ is a strongly stable matching.\nOtherwise, there must be a critical subset of men with maximum deficiency. 
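By the defect version of Hall's theorem, the maximum deficiency equals the number of men minus the size of a maximum matching, so it can be computed with standard augmenting paths. A minimal sketch (the adjacency representation is illustrative):

```python
# adj[m] is the set of women adjacent to man m (illustrative representation)

def max_matching(adj):
    """Kuhn's augmenting-path algorithm; returns the maximum matching size."""
    match_w = {}                      # woman -> man currently matched to her
    def augment(m, seen):
        for w in adj[m]:
            if w not in seen:
                seen.add(w)
                if w not in match_w or augment(match_w[w], seen):
                    match_w[w] = m
                    return True
        return False
    return sum(augment(m, set()) for m in range(len(adj)))

def deficiency(adj):
    """max over subsets Z of |Z| - |N(Z)|, the defect form of Hall's theorem."""
    return len(adj) - max_matching(adj)

# Men 0 and 1 compete for woman 0 alone: the set {0, 1} has deficiency 1
assert deficiency([{0}, {0}, {1}]) == 1
```

One maximally deficient set consists of the men reachable from unmatched men by alternating paths; extracting a critical (minimal) such set takes more care and is omitted here.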
This set of men must then advance their proposal numbers, if possible.\nIf these men cannot advance, then there does not exist a strongly stable marriage and the algorithm terminates.\n\n\\begin{algorithm}\n \\SetAlgoRefName{LLP-ManOptimalStronglyStableMarriage}\n$P_j$: Code for thread $j$\\\\\n {\\bf input}: $mpref[i,k]$: set of int for all $i,k$; $wrank[k][i]$: int for all $k,i$;\\\\\n {\\bf init}: $G[j] := 1$;\\\\\n{\\bf always}: $Y(j) = mpref[j][G[j]];$\\\\\n \\BlankLine\n {\\bf forbidden($j$)}:\\\\\n \\hspace*{0.2in} $j$ is a member of the critical subset of men in the graph $Y(G)$\\\\\n \\hspace*{0.2in} {\\bf advance}: $G[j] := G[j]+1; $\n\\caption{A Parallel Algorithm for Man-Optimal Strongly Stable Matching \\label{fig:strongly-stable}}\n\\end{algorithm}\n\n\\ref{fig:strongly-stable} is the LLP version of the algorithm proposed by Irving, and the interested reader is referred to \n\\cite{IRVING1994261} for the details and the proof of correctness.\nAs with superstable marriages, \nwe also get the following result.\n\\begin{theorem}\nThe set of strongly stable marriages, $L_{stronglystable}$, is a sublattice of the lattice $L$. \n\\end{theorem}\nObserve that each element in $L_{stronglystable}$ is not a single marriage but a set of marriages. 
This is in contrast to \n$L_{superstable}$, where each element corresponds to a single marriage.\n\n\\remove {\n\\begin{algorithm}\n \\SetAlgoRefName{LLP-ManOptimalStronglyStableMarriage}\n {\\bf input}: $mpref[i,k]$: set of int for all $i,k$; $rank[k][i]$: int for all $k,i$;\\\\\n {\\bf init}: $\\forall j: G[j] := 1$;\\\\\n{\\bf always}: \\\\\n\\> Y(G) = bipartite graph between men and women such that there is an edge between a man $m$ and a woman $w$\\\\ if\n$w$ is one of the top choices for $m$ according to $G$ and $m$ is one of the top choices for $w$ of all proposals received in $G$\\\\\n\\> $\\delta(G)$ = maximum deficiency of any subset of men in $Y(G)$\\\\\n\\> \\> $maximallyDeficient(J) \\equiv |J - N(J)| = \\delta(G)$\\\\\n\\> \\> $critical(J) \\equiv maximallyDeficient(J) \\wedge \\forall K \\subsetneq J: \\neg maximallyDeficient(K)$\\\\\n\\\\\n\\> \\myblue{\\bf forbidden(j)}: $\\exists J: critical(J) \\wedge (j \\in J) $\\\\\n\\> \\myblue{\\bf advance}: $G[j] := G[j]+1; $\\\\\n\\caption{A Parallel Algorithm for Man-Optimal Strongly Stable Matching \\label{fig:strong-stable}}\n\\end{algorithm}\n\n}\n\n\n\n\n\\remove {\n\n\n\\begin{theorem}\nThere is a strongly stable matching with $G$ as the proposal vector iff $Y(G)$ has a perfect matching.\n\\end{theorem}\n\\begin{proof}\nIt is clear that if $Y(G)$ does not have a perfect matching, then there cannot be a strongly stable matching corresponding to $G$.\nWe show the converse: \nif $Y(G)$ has a perfect matching then\nthere is a strongly stable matching with $G$ as the proposal vector. \nIf not, there must be a pair $(m,w)$ such that\nat least one of them can improve their partner while the other one either improves the partner or is indifferent.\nSuppose that man $m$ can improve his partner. This means that man $m$ proposed to $w$ in the proposal vector\nstrictly less than $G$. 
Since $G$ has a perfect matching, the man $m$ has proposed to a less preferable woman $w'$.\nThe woman $w$ is matched to a man $m'$ in the perfect matching in $Y(G)$.\nSince the choices for women can only improve or stay the same as the proposal vector increases, we know that\n$rank[w][m'] \\leq rank[w][m]$.\nIf $rank[w][m'] < rank[w][m]$, then $(m,w)$ is not a blocking pair because the woman prefers $m'$ to $m$.\n\nBy induction, we can assume that there is no strongly stable matching in any vector less than $G$.\nSince all men have women in their top rank, they cannot improve their rank. \nSince women have edges only for their most preferred rank, \nthey can also not improve their rank by going with an edge that in not in the matching.\n\\end{proof}\n\nWe now claim that the predicate $B(G) \\equiv Y(G) ~ \\mbox{contains a perfect matching}$ is a lattice-linear predicate.\n\n\\begin{lemma}\nIf $Y(G)$ in not a perfect matching, then at least one index in $G$ is forbidden.\n \\end{lemma}\n\\begin{proof}\nIf there is no perfect matching in $Y(G)$, then there must be a minimal set of \nmen, $Z$\nsuch that the set of neighbors of $Z$ is strictly smaller in size than $Z$. We claim that every man $i \\in Z$ is forbidden.\nConsider any $H \\geq G$ such that $G[i]$ equals $H[i]$. Suppose, if possible $H$ has a strongly stable matching $M_H$.\nIn $M_H$, man $i$ is matched with a woman in $w \\in mpref[i][H[i]]$. The woman must rank man $i$ as the most preferred until $G$; otherwise, there\nwould not be any edge between $w$ and $i$ in $G$ and therefore in $H$. There has to be at least one more edge from $w$ to some other man in $Y(G)$; otherwise,\nwe can remove man $i$ from $Z$ and $w$ from $N(Z)$ and we have a set $Z'$ strictly smaller than $Z$ such that the set of neighbors of $Z'$ is smaller than $Z'$, contradicting\nthat $Z$ is a {\\em minimal} underdemanded set.\nSuppose $w$ has an edge to man $j$ in $Y(G)$. 
\n\nIf $H[j] > G[j]$, then $(j,w)$ is a blocking pair to the matching in $H$ because man $j$ prefers $w$ to his match in $H$ and $w$ is indifferent between\n$i$ and $j$. \n\nIf $H[j] = G[j]$, then we do a similar analysis for $j$. The woman who is matched with man $j$ must have at least one edge outside of $\\{i,j\\}$; otherwise, \nwe can remove both $i$ and $j$ and their neighbors from $Z$ and get a smaller underdemanded set. Continuing in this manner, we either find a blocking pair for $H$ or\ncontradict the minimality of $Z$.\n\\end{proof}\n\n\n}\n\n\n\n\n\n\\section{Superstable Matching}\n\nIn many applications, agents (men and women for the stable marriage problem) may not totally order all their choices. Instead, they\nmay be indifferent to some choices \\cite{IRVING1994261,manlove2002structure}.\nWe generalize $mpref[i][k]$\n to a set of women instead of a single woman. Therefore, the\n $mrank$ function is not 1-1 anymore. Multiple women may have the same rank.\nSimilarly, the $wrank$ function is not 1-1 anymore. Multiple men may have the same rank.\nWe now define the notion of blocking pairs for a matching $M$ with ties \\cite{IRVING1994261}.\nWe let $M(m)$ denote the woman matched with the man $m$ and $M(w)$ denote the man\nmatched with the woman $w$.\nIn the first version, called a {\\em weakly stable} matching $M$, there is no blocking pair of \nman and woman $(m,w)$ who are not married in $M$ but strictly prefer\neach other to their partners in $M$. Formally,\na pair of man and woman $(m,w)$ is {\\em blocking for a weakly stable matching} $M$ if they are\nnot matched in $M$ and\\\\\n\\ifdefined\\ISBOOK\n$(mrank[m][w] < mrank[m][M(m)]) \\wedge$\n$ (wrank[w][m] < wrank[w][M(w)]). $\n\\else\n$(mrank[m][w] < mrank[m][M(m)]) \\wedge$\\\\\n$ (wrank[w][m] < wrank[w][M(w)]). 
$\n\\fi\n\nFor the weakly stable matching, ties can be broken arbitrarily and any matching that is stable in the resulting instance is also\nweakly stable for the original problem.\nTherefore, the Gale-Shapley algorithm is applicable for the weakly stable matching \\cite{IRVING1994261}.\nWe focus on other forms of stable matching --- superstable and strongly stable matchings.\n\nA matching $M$ of men and women is {\\em superstable} if there is no blocking pair $(m,w)$ such that they are not married in $M$\nbut they either prefer each other to their partners in $M$ or are indifferent with their partners in $M$.\nFormally, \na pair of man and woman $(m,w)$ is {\\em blocking for a super stable matching} $M$ if they are\nnot matched in $M$ and\\\\\n\\ifdefined\\ISBOOK\n$(mrank[m][w] \\leq mrank[m][M(m)]) \\wedge$\n$ (wrank[w][m] \\leq wrank[w][M(w)]). $\n\\else\n$(mrank[m][w] \\leq mrank[m][M(m)]) \\wedge$\\\\\n$ (wrank[w][m] \\leq wrank[w][M(w)]). $\n\\fi\n\n\nAlgorithms for superstable marriage have been proposed in \\cite{IRVING1994261,manlove2002structure}.\nOur goal is to show that the LLP algorithm is applicable to this problem as well.\nAs before, we will use $G[i]$ to denote the $mrank$ that the man $i$ is currently considering. Initially, $G[i]$ is $1$ for all $i$, i.e.,\neach man proposes to all his top choices. We say that $G$ has a superstable matching if there exist $n$ women $w_1, w_2, \\ldots w_n$ such that\n$\\forall i: w_i \\in mpref[i][G[i]]$ and the set of pairs $(m_i, w_i)$ is a superstable matching.\n\nWe define a bipartite graph $Y(G)$ on the set of men and women with respect to any $G$ as follows.\nIf a woman does not get any proposal in $G$, then she is unmatched. If she receives multiple proposals, then there is an edge\nfrom that woman to all men in the most preferred rank. 
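The construction of $Y(G)$ just described can be written down directly. A minimal sketch (the data layout is illustrative: $mpref[j]$ maps ranks, starting at $1$, to sets of women, and a smaller $wrank$ means more preferred):

```python
def build_Y(mpref, wrank, G):
    """Y(G): each woman keeps edges only to her most preferred rank of proposers."""
    proposals = {}                            # woman -> men proposing to her in G
    for j in range(len(mpref)):
        for w in mpref[j][G[j]]:              # man j proposes to mpref[j][G[j]]
            proposals.setdefault(w, set()).add(j)
    Y = {}
    for w, men in proposals.items():
        best = min(wrank[w][m] for m in men)
        Y[w] = {m for m in men if wrank[w][m] == best}
    return Y

# Two men, two women; woman 0 is indifferent between her two proposers
mpref = [{1: {0, 1}}, {1: {0}}]               # rank-1 sets for men 0 and 1
wrank = [{0: 1, 1: 1}, {0: 1, 1: 2}]
Y = build_Y(mpref, wrank, [1, 1])
```

In this example woman 0 retains edges to both men, so $Y(G)$ contains a perfect matching but is not itself one; that gap is exactly the distinction between the superstable and strongly stable requirements.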
We say that $Y(G)$ is a perfect matching if \nevery man and woman has exactly one adjacent edge in $Y(G)$.\n\nWe claim \n\\begin{lemma}\nIf $Y(G)$ is not a perfect matching, then\nthere is no superstable matching with $G$ as the proposal vector. \n\\end{lemma}\n\\begin{proof}\nIf there is a man with no adjacent edge in $Y(G)$, then it is clear that $G$ cannot have a superstable matching.\nNow consider the case when a man has at least two adjacent edges. If all the adjacent women for this man have degree one, then\nexactly one of them can be matched with this man and the other women will remain unmatched.\nTherefore, there is at least one woman $w$ who is also adjacent to another man $m'$. If $w$ is matched with $m$, then $(m',w)$ is a blocking pair.\nIf $w$ is matched with $m'$, then $(m,w)$ is a blocking pair.\n\\end{proof}\n\nWe now claim that the predicate $B(G) \\equiv Y(G) ~ \\mbox{is a perfect matching}$ is a lattice-linear predicate.\n\n\\begin{lemma}\\label{lem:superstable}\nIf $Y(G)$ is not a perfect matching, then at least one index in $G$ is forbidden.\n \\end{lemma}\n \\begin{proof}\nConsider any man $i$ such that there is no edge adjacent to $i$ in $Y(G)$. This happens when all women that man $i$ has proposed to in state $G$ have rejected him.\nConsider any $H$ such that\n$H[i]$ equals $G[i]$. All the women had rejected man $i$ in $G$. As $H$ is greater than $G$, these women can only have\nmore choices and will reject man $i$ in $H$ as well.\n\nNow suppose that every man has at least one adjacent edge. \nLet $Z(G)$ be the set of women with degree exactly one. \nIf every woman is in $Z(G)$, then we have that $Y(G)$ is a perfect matching because every man has at least one adjacent edge.\nIf not, consider any man $i$ who is not matched to a woman in $Z(G)$. This means that all the women he is adjacent to have\ndegrees strictly greater than one. In $H$ all these women would have either better ranked proposals or equally ranked proposals. 
In either case,\nman $i$ would not be matched with any of these women. Hence, $i$ is forbidden.\n\\end{proof}\n\nWe are now ready to present \\ref{fig:super-stable}. \nIn \\ref{fig:super-stable}, we start with the proposal vector $G$ with all components $G[j]$ equal to $1$. \nWhenever a woman receives multiple proposals, she rejects proposals by men who are ranked lower than anyone who has proposed to her.\nWe say that a man $j$ is forbidden in $G$ if\nevery woman $z$ that man $j$ proposes to in $G$ is either engaged to or proposed to by someone whom she prefers to $j$ or regards as tied with $j$.\n \\ref{fig:super-stable} is a parallel algorithm because all processes $P_j$ such that forbidden($j$) is true can advance in parallel.\n\n\n\n\\begin{algorithm}\n \\SetAlgoRefName{LLP-ManOptimalSuperStableMarriage}\n$P_j$: Code for thread $j$\\\\\n {\\bf input}: $mpref[i,k]$: set of int for all $i,k$; $wrank[k][i]$: int for all $k,i$;\\\\\n {\\bf init}: $G[j] := 1$;\\\\\n{\\bf always}: $Y(j) = mpref[j][G[j]];$\\\\\n \\BlankLine\n {\\bf forbidden($j$)}:\\\\\n \\hspace*{0.2in} $\\forall z \\in Y(j): \\exists i \\neq j: \\exists k \\leq G[i]: (z \\in mpref[i][k]) \\wedge (wrank[z][i] \\leq wrank[z][j])$\\\\\n \/\/ every woman $z$ in the current proposals from $j$ has been proposed to by someone whom she prefers to $j$ or regards as tied with $j$.\\\\\n \\hspace*{0.2in} {\\bf advance}: $G[j] := G[j]+1; $\n\\caption{A Parallel Algorithm for Man-Optimal Super Stable Matching \\label{fig:super-stable}}\n\\end{algorithm}\n\nLet us verify that this algorithm indeed generalizes the standard stable marriage algorithm. For the standard stable marriage problem, $mpref[i,k]$ is a singleton for all\n$i$ and $k$. Hence, $Y(j)$ is also a singleton. 
Using $z$ for the singleton value in $Y(j)$, we get the expression\n$ \\exists i \\neq j: \\exists k \\leq G[i]: (z = mpref[i][k]) \\wedge (wrank[z][i] < wrank[z][j])$ which is identical to the stable marriage problem once we\nsubstitute $<$ for $\\leq$ when comparing the $wrank$ of man $i$ and man $j$.\n\nWhen the preference list has a singleton element for each rank as in the classical stable marriage problem, we know that there always exists at least one stable marriage. However, in the presence of ties there is no guarantee of the existence of a superstable marriage.\nConsider the case with two men and two women where none of them has any strict preference.\nClearly, for this case there is no superstable marriage.\n\nBy symmetry of the problem, one can also get a woman-optimal superstable marriage by switching the roles of men and women.\nLet $mpref[i].length()$ denote the number of equivalence classes of women for man $i$. When all women are tied for man $i$, the number of equivalence classes is equal to $1$, and when there\nare no ties then it is equal to $n$. \nConsider the distributive lattice $L$ defined as the cross product of $mpref[i]$ for each $i$. \nWe now have the following result.\n\\begin{theorem}\nThe set of superstable marriages, $L_{superstable}$, is a sublattice of the lattice $L$. 
\n\\end{theorem}\n\\begin{proof}\nFrom Lemma \\ref{lem:superstable}, the set of superstable marriages is closed under meet.\nBy symmetry of men and women, the set is also closed under join.\n\\end{proof}\n\nIt is already known that the set of superstable marriages forms a distributive lattice \\cite{spieker1995set}.\nThe set of join-irreducible elements of the lattice $L_{superstable}$ forms a partial order\n(analogous to the rotation poset \\cite{gusfield1989stable}) that can be used to generate all superstable marriages.\nVarious posets to generate all superstable marriages are discussed in \\cite{scott2005study,DBLP:conf\/wads\/HuG21}\n\n\nWe note that the algorithm \\ref{fig:super-stable} can also be used to find the constrained superstable marriage. In particular, the following\npredicates are lattice-linear:\n\\begin{enumerate}\n\\item\nRegret of man $i$ is at most regret of man $j$.\n\\item\nThe proposal vector is at least $I$.\n\\end{enumerate}\n\n\\remove{\n\n\\begin{figure}[htbp]\\begin{center}\n\\begin{tabbing}\nx\\=xxx\\=xxx\\=xxx\\=xxx\\=xxx\\= \\kill\n${\\bf P_i}$:: \/\/ Process for Man $i$\\\\\n\\> {\\bf input}\\\\\n\\> \\> $mpref$: array[$1$..$n$] of set of $1..n;$ \\\\\n\\> \\> \/\/ men's preferences: each a set of women\\\\\n\\> {\\bf var}\\\\\n\\> \\> $G_i:1..n$ initially $1$; \/\/ proposal number by $P_i$ \\\\\n\\> \\> $numrejects: 0..n$ initially $0$; \/\/ number of women who have rejected this man\\\\\n\\\\\n\\> Upon receiving a message ``initiate'' from environment;\\\\\n\\> \\> $acceptedProps := \\emptyset;$\\\\\n\\> \\> $rejectedProps := \\emptyset;$\\\\\n\\> \\> for each $w \\in mpref[G_i]$\\\\\n\\> \\> \\> send $(``proposal\", i)$ to woman $w$;\\\\\n\\\\\n\\> Upon receiving a message $(``reject\", j)$:\\\\\n\\> \\> {\\bf if} $(j \\in mpref[G_i])$ {\\bf then} \/\/ rejected by one of the current partners\\\\\n\\> \\> \\>$rejectedProps := rejectedProps \\cup \\{j\\}$;\\\\\n\\> \\> \\> {\\bf if} $|rejectedProps| = mpref[G_i]$ {\\bf 
then}\\\\\n\\> \\> \\> \\> {\\bf if} $(G_i = mpref.length)$ \/\/no more choices left\\\\\n\\> \\> \\> \\> \\> announce ``no super stable marriage possible\" ;\\\\\n\\> \\> \\> \\> {\\bf else } \/\/ move to the next ranked women\\\\\n\\> \\> \\> \\> \\> $G_i := G_i+1$;\\\\\n\\> \\> \\> \\> \\> $acceptedProps := \\emptyset;$\\\\\n\\> \\> \\> \\> \\> $rejectedProps := \\emptyset;$\\\\\n\\> \\> \\> \\> for each $w \\in mpref[G_i]$\\\\\n\\> \\> \\> \\> \\> send $(``proposal\", i)$ to woman $w$;\\\\\n\\\\\n\\> Upon receiving a message $(``accept\", q)$:\\\\\n\\> \\> {\\bf if} $(q \\in mpref[G_i])$\\\\\n\\> \\> $acceptedProps := acceptedProps \\cup \\{q\\};$\\\\\n\\\\\n${\\bf Q_i}$:: \/\/ Process for Woman $i$\\\\\n\\> {\\bf input}\\\\\n\\> \\> $wrank$: array[$1$..$n$] of $1..n;$ \/\/ rank of each man by the woman \\\\\n\\> {\\bf var}\\\\\n\\> \\> $partner$: $0..n;$ initially $0$ \/\/ current partner \\\\\n\\\\\n\\> Upon receiving a message $(``proposal'', j)$:\\\\\n\\> \\> {\\bf if} $(partner = 0)$ {\\bf then}\\\\\n\\> \\> \\> $partner := j$;\\\\\n\\> \\> {\\bf else if} $(wrank[j] < wrank[partner]) $ {\\bf then}\\\\\n\\> \\> \\> send $(``reject\", i)$ to $P_{partner}$;\\\\\n\\> \\> \\> $partner := j$;\\\\\n\\> \\> {\\bf else} \/\/ $wrank[j] \\geq wrank[partner]\n\\\\\n\\\\\n{\\bf Environment}::\\\\\n Process that (1) initiates the diffusing computation and \\\\\n (2) detects Termination\\\\\n \\\\\n \\> send ``initiate'' message to all $P_i$\\\\\n\\> Upon Detecting Termination of Diffusing Computation\\\\\n \\> \\> Announce the current assignment as a stable marriage \\\\\n \\> \\> satisfying external constraints. 
Halt\\\\\n\\end{tabbing}\n\n\\end{center}\n\\caption{{A diffusing distributed computation algorithm for constrained SMP} for men $P_i$ and women $Q_i$\\label{fig:CSMP}}\n\\end{figure}\n\n\n}\n\n\n\\section{Introduction}\nThe Lattice-Linear Predicate (LLP) algorithm \\cite{DBLP:conf\/spaa\/Garg20} is a general technique for designing parallel algorithms for combinatorial\noptimization problems. In \\cite{DBLP:conf\/spaa\/Garg20}, it is shown that the stable marriage problem, the shortest path problem in a graph, and \nthe assignment problem can all be solved using the LLP algorithm.\nIt has further been shown that many dynamic programming problems \\cite{Garg:ICDCN22}, the housing problem \\cite{DBLP:conf\/sss\/Garg21}, and the minimum spanning tree problem \\cite{AlvGar22} can be solved using the LLP\nalgorithm.\nIn \\cite{DBLP:conf\/sss\/GuptaK21}, Gupta and Kulkarni extend LLP algorithms for deriving self-stabilizing algorithms.\nIn this paper, we show that many generalizations of the stable matching problem can also be solved using\nthe LLP algorithm. A forthcoming book on parallel algorithms \\cite{Garg23} gives a uniform description of these and other problems\nthat can be solved using the LLP algorithm.\n\nThe Stable Matching Problem (SMP) \\cite{gale1962college} has wide applications in economics, distributed computing, resource allocation and many other fields \\cite{maggs2015algorithmic,iwama2008survey}.\nIn the standard SMP, there are $n$ men and $n$ women, each with their totally ordered preference list. \nThe goal is to find \na matching between men and women such that there is no instability, i.e., there is no pair of a woman and a man such that they are not married to each other but\nprefer each other over their partners. \nIn this paper, we show that the LLP algorithm can be used to derive solutions to a more general problem than SMP, called {\\em constrained SMP}. 
In our formulation, in addition to men's preferences and women's preferences, there may be a set of\n{\\em lattice-linear} \nconstraints \non the set of marriages consistent with men's preferences. \nFor example, we may state that Peter's regret \\cite{gusfield1989stable} should be less than that of Paul,\nwhere the {\\em regret} of a man in a matching is the choice number he is assigned. As another example, we may require that\nthe matching contain some pairs called {\\em forced pairs}, or not contain some pairs called {\\em forbidden pairs} \\cite{Dias2003}.\nWe call such constraints {\\em external} constraints. Any algorithm to solve constrained SMP can solve standard SMP by setting (external) constraints to the empty set.\n\nIn this paper, \nwe also present a distributed algorithm to solve the constrained SMP in an asynchronous system. One of the goals is to show how a parallel LLP algorithm\ncan be converted into a distributed asynchronous algorithm.\nOur distributed algorithm uses a diffusing computation whose\ntermination is detected using a standard algorithm such as the Dijkstra-Scholten algorithm. The algorithm uses\n$O(n^2)$ messages each of size $O(\\log n)$.\nKipnis and Patt-Shamir \\cite{KipnisP10} have given a distributed algorithm for stable matching in a synchronous system. There are several differences from their work. First, they do not\nconsider external constraints, and their work is not easily extended to incorporate external constraints.\nSecond, for termination detection, they require each rejected node to broadcast the fact that the protocol has not terminated on a shortest-path tree. This step\nrequires the assumption of synchrony for termination detection and incurs additional message overhead. Our algorithm avoids such broadcasts and works for asynchronous systems.\nTheir paper suggests the use of an $\\alpha$ synchronizer \\cite{JACM::Awerbuch1985} for simulation in asynchronous systems. 
However, the\n$\\alpha$ synchronizer adds $O(n^2)$ messages per round. Thus, our algorithm not only solves a more general problem but is also more efficient for running the traditional SMP in an asynchronous system.\n\nWe also consider generalizations of the stable matching problem to the case when the preference lists may have ties.\nThe problem of stable marriage with ties is clearly more general than the standard stable matching problem and has also been\nextensively studied \\cite{IRVING1994261,gusfield1989stable,david2013algorithmics}. \nWe consider three versions of matching with ties. \nIn the first version, called {\\em weakly stable} matching, we require that there is no blocking pair of \nman and woman $(m,w)$ who are not married in $M$ but strictly prefer\neach other to their partners in $M$. In the second version, called {\\em superstable} matching $M$, we require\nthat there is no blocking pair of man and woman $(m,w)$ who are not married in $M$\nbut either (1) both of them prefer each other to their partners in $M$, or (2) one of them prefers the other over his\/her partner in $M$ and the other one is\nindifferent, or (3) both of them are indifferent to their spouses.\nIn the third version, called {\\em strongly stable} matching, we require that\nthere is no blocking pair $(m,w)$ such that they are not married in $M$\nbut either (1) both of them prefer each other to their partners in $M$, or (2) one of them prefers the other over his\/her partner in $M$ and the other one is\nindifferent. 
Algorithms for these problems are well-known; our goal is to \npresent LLP algorithms for these problems.\n\n\n\n\n\n\\section{Background: Lattice-Linear Predicate Detection Algorithm}\nIn this section, we give a self-contained description of the Lattice-Linear Predicate detection algorithm.\nThe reader should consult \\cite{DBLP:conf\/spaa\/Garg20} for more details.\nLet ${L}$ be the lattice of all $n$-dimensional vectors of reals greater than or equal to the zero vector and less than or equal to a given vector $T$,\nwhere the order on the vectors is defined by the component-wise natural $\\leq$.\nThe lattice is used to model the search space of the combinatorial optimization problem.\nThe combinatorial optimization problem is modeled as finding the minimum element in ${L}$ that satisfies a boolean {\\em predicate} $B$, where\n$B$ models {\\em feasible} (or acceptable) solutions.\nWe are interested in parallel algorithms to solve the combinatorial optimization problem with $n$ processes.\nWe will assume that the system maintains as its state the current candidate vector $G \\in {L}$ in the search lattice, \nwhere $G[ i]$ is maintained at process $i$. We call $G$ the global state and $G[ i]$ the state of process $i$.\n\nFig. 
\\ref{fig:poset-lattice} shows a finite poset corresponding to $n$ processes ($n$ equals two in the figure), and the corresponding lattice of all eleven global states.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=2.5in,height=1.0in]{lattice2-eps-converted-to.pdf}\n\\caption{A poset and its corresponding distributive lattice $L$ \\label{fig:poset-lattice}}\n\\end{center}\n\\end{figure}\n\nFinding an element in the lattice that satisfies a given predicate $B$ is called the {\\em predicate detection} problem.\nFinding the {\\em minimum} element that satisfies $B$ (whenever it exists) is the combinatorial optimization problem.\nA key concept in deriving an efficient predicate detection algorithm is that of a {\\em\nforbidden} state. \nGiven a predicate $B$, and a vector $G \\in {L}$, a state $G[j]$ is {\\em forbidden} (or equivalently, the index $j$ is forbidden) if \nfor any vector $H \\in {L}$, where $G \\leq H$, if $H[j]$ equals $G[ j]$, then $B$ is false for $H$.\nFormally,\n\\begin{definition}[Forbidden State \\cite{chase1998detection}]\n Given any distributive lattice ${L}$ of $n$-dimensional vectors of $\\mathbf{R}_{\\ge 0}$, and a predicate $B$, we define\n $ \\forbidden(G,j,B) \\equiv \\forall H \\in {L} : G \\leq H : (G[j] = H[j]) \\Rightarrow\n \\neg B(H).$\n\\end{definition}\n\nWe define a predicate $B$ to\nbe {\\em lattice-linear} with respect to a lattice ${L}$\n if, for any global state $G$, $B$ being false in $G$ implies that $G$ contains a\n{\\em forbidden state}. 
Formally,\n\\begin{definition}[lattice-linear Predicate \\cite{chase1998detection}]\nA boolean predicate $B$ is {\\em {lattice-linear}} with respect to a lattice ${L}$\niff\n$\\forall G \\in {L}: \\neg B(G) \\Rightarrow (\\exists j: \\forbidden(G,j,B))$.\n\\end{definition}\n\n\nOnce we determine $j$ such that $forbidden(G,j,B)$, \nwe also need to determine how to advance along index $j$.\nTo that end, we extend the definition of forbidden as follows.\n\\begin{definition}[$\\alpha$-forbidden]\n Let $B$ be any boolean predicate on the lattice ${L}$ of all assignment vectors.\n For any $G$, $j$ and positive real $\\alpha > G[ j]$, we define $\\mbox{forbidden}(G,j, B, \\alpha)$ iff\n $$ \\forall H \\in {L}:H \\geq G: (H[j] < \\alpha) \\Rightarrow \\neg B(H). $$\n\\end{definition}\n\nGiven any lattice-linear predicate $B$, suppose $\\neg B(G)$. This means that $G$ must be advanced on all\nindices $j$ such that $\\forbidden(G,j,B)$. We use a function $\\alpha(G,j, B)$ such that $\\forbidden(G, j, B, \\alpha(G,j, B))$ holds\nwhenever $\\forbidden(G,j,B)$ is true. With the notion of $\\alpha(G, j, B)$, we have the Algorithm $LLP$.\nThe algorithm $LLP$ has two inputs --- the predicate $B$ and the top element of the lattice $T$. It returns the least vector $G$ which is less than or equal to $T$\nand satisfies $B$ (if it exists). Whenever $B$ is not true in the current vector $G$, the algorithm advances on all forbidden indices $j$\nin parallel. 
This simple parallel algorithm can be used to solve a large variety of combinatorial optimization problems\nby instantiating different $\\forbidden(G,j,B)$ and $\\alpha(G,j,B)$.\n\n\\begin{algorithm}\n\\SetAlgoRefName{LLP}\n vector {\\bf function} getLeastFeasible($T$: vector, $B$: predicate)\\\\\n {\\bf var} $G$: vector of reals initially $\\forall i: G[ i] = 0$;\\\\\n {\\bf while} $\\exists j: \\forbidden(G,j,B)$ {\\bf do}\\\\\n \\hspace*{0.2in} {\\bf for all} $j$ such that $\\forbidden(G,j,B)$ {\\bf in parallel}:\\\\\n \\hspace*{0.2in}\\h {\\bf if} $(\\alpha(G,j,B) > T[j])$ then return null; \\\\\n \n \\hspace*{0.2in}\\h {\\bf else} $G[ j] := \\alpha(G,j,B)$;\\\\\n {\\bf endwhile};\\\\\n {\\bf return} $G$; \/\/ the optimal solution\n\\caption{Find the minimum vector at most $T$ that satisfies $B$\\label{fig:alg-llp}}\n\\end{algorithm}\n\nThe following Lemma is useful in proving lattice-linearity of predicates.\n\\begin{lemma}\\label{lem:basic-LLP} \\cite{DBLP:conf\/spaa\/Garg20,chase1998detection}\nLet $B$ be any boolean predicate defined on a lattice ${L}$ of vectors. \\\\\n(a) Let $f:{L} \\rightarrow \\mathbf{R}_{\\ge 0}$ be any monotone function defined on the lattice ${L}$ of vectors of $\\mathbf{R}_{\\ge 0}$.\nConsider the predicate\n$B \\equiv G[ i] \\geq f(G)$ for some fixed $i$. Then, $B$ is lattice-linear.\\\\\n(b) If $B_1$ and $B_2$ are lattice-linear then $B_1 \\wedge B_2$ is also lattice-linear.\n\\end{lemma}\n\nWe now give an example of lattice-linear predicates for scheduling of $n$ jobs. Each job $j$ requires time $t_j$ for completion and has a set of\n prerequisite jobs, denoted by $pre(j)$, such that it can be started only after all its prerequisite jobs\nhave been completed. Our goal is to find the minimum completion time for each job.\nWe let our lattice ${L}$ be the set of all possible completion times. 
A completion vector $G \\in {L}$ is feasible iff $B_{jobs}(G)$ holds where\n$B_{jobs}(G) \\equiv \\forall j: (G[ j] \\geq t_j) \\wedge (\\forall i \\in pre(j): G[ j] \\geq G[ i] + t_j)$.\n$B_{jobs}$ is lattice-linear because if it is false, then there exists $j$ such that \neither $G[ j] < t_j$ or $\\exists i \\in pre(j): G[ j] < G[ i]+t_j$. We claim that $\\forbidden(G, j, B_{jobs})$. Indeed, any vector $H \\geq G$ cannot\nbe feasible with $G[ j]$ equal to $H[j]$. The minimum of all vectors that satisfy feasibility corresponds to the minimum completion time.\n\nAs an example of a predicate that is not lattice-linear, consider the predicate $B \\equiv \\sum_j G[ j] \\geq 1$ defined on the space of \ntwo-dimensional vectors. Consider the vector $G$ equal to $(0,0)$. The vector $G$ does not satisfy $B$. For $B$ to be lattice-linear\neither the first index or the second index should be forbidden. However, \nnone of the indices are\nforbidden in $(0,0)$. The index $0$ is not\nforbidden because the vector $H = (0,1)$ is greater than $G$, has $H[0]$ equal to $G[ 0]$ but it still satisfies $B$.\nThe index $1$ is also not forbidden\nbecause $H =(1,0)$ is greater than $G$, has $H[1]$ equal to $G[ 1]$ but it satisfies $B$.\n\n \\label{sec:prog}\n \n We now go over the notation used in the description of our parallel algorithms.\nFig. \\ref{fig:examples} shows parallel algorithms for the job-scheduling problem and the shortest path problem.\n\nThe {\\bf var} section gives the variables of the problem.\nWe have a single variable $G$ in the example shown in Fig. \\ref{fig:examples}. \n\n $G$ is an array of objects such that\n $G[j]$ is the state of thread $j$ for a parallel program.\n\nThe {\\bf input} section gives all the inputs to the problem. These inputs are constant in the program and do not change during execution.\n\n The {\\bf init} section is used to initialize the state of the program.\n All the parts of the program\n are applicable to all values of $j$. 
For example, the {\\em init} section of the job scheduling program in Fig. \\ref{fig:examples}\nspecifies that $G[j]$ is initially $t[j]$. Every thread $j$ would initialize $G[j]$.\n\n The {\\bf always} section defines additional variables which are derived from $G$.\n The actual implementation of these variables is left to the system. They can be viewed as\n macros. We will show their use later.\n\n\n\n The LLP algorithm gives the desirable predicate either by using the {\\bf forbidden} predicate or the {\\bf ensure} predicate.\n The {\\em forbidden} predicate has an associated {\\em advance} clause that specifies how $G[j]$ must be advanced\n whenever the forbidden predicate is true.\n For many problems, it is more convenient to use the complement of the forbidden predicate.\n The {\\em ensure} section specifies the desirable predicates of the form $(G[j] \\geq expr)$ or \n $(G[j] \\leq expr)$. \n\n\n\n\n The statement {\\em ensure} $G[j] \\geq expr$ simply means that whenever thread $j$ finds $G[j]$ to be less than \n $expr$, it can advance $G[j]$ to $expr$.\n\n\n\n Since $expr$ may refer to $G$, just by setting $G[j]$ equal to $expr$, there is no guarantee \n that $G[j]$ continues to be equal to $expr$ --- the value of $expr$ may change because of changes in other components.\n We use the {\\em ensure} statement whenever $expr$ is a monotonic function of $G$ and therefore the predicate\n is lattice-linear. 
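The forbidden/ensure loops above can be simulated sequentially. The sketch below (a minimal illustration, not the paper's code; 0-based indices, and the function names are our own) implements the ensure clauses of the job-scheduling and shortest-path programs by repeatedly advancing any forbidden index until a fixed point is reached.

```python
# Sequential sketch of the "ensure"-style LLP loop, assuming 0-based
# job/node indices. Both functions iterate until no index is forbidden.

def llp_job_scheduling(t, pre):
    """Least G with ensure: G[j] >= max(G[i] + t[j] for i in pre[j])."""
    n = len(t)
    G = list(t)                          # init: G[j] := t[j]
    changed = True
    while changed:                       # repeat until no index is forbidden
        changed = False
        for j in range(n):
            bound = max((G[i] + t[j] for i in pre[j]), default=t[j])
            if G[j] < bound:             # index j is forbidden: advance
                G[j], changed = bound, True
    return G

def llp_shortest_path(n, s, pre, w):
    """Dual ensure: G[j] <= min(G[i] + w[i][j] for i in pre[j])
    (the parallel Bellman-Ford program, started from the top element)."""
    INF = float('inf')
    G = [0 if j == s else INF for j in range(n)]   # init: maxint except source
    changed = True
    while changed:
        changed = False
        for j in range(n):
            for i in pre[j]:
                if G[j] > G[i] + w[i][j]:          # forbidden: decrease G[j]
                    G[j], changed = G[i] + w[i][j], True
    return G

# Jobs 0 and 1 have no prerequisites; job 2 needs both.
print(llp_job_scheduling([2, 3, 1], [[], [], [0, 1]]))  # -> [2, 3, 4]
```

Because the predicates are lattice-linear, the order in which forbidden indices are advanced does not affect the final (least) vector.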
\n\n\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\small {\n\\fbox{\\begin{minipage} {\\textwidth}\\sf\n\\begin{tabbing}\nxxxx\\=xxxx\\=xxxx\\=xxxx\\=xxxx\\=xxxx\\= \\kill\n$P_j$: Code for thread $j$\\\\\n{\\bf var} $G$: array[$1$..$n$] of $0..maxint$;\/\/ shared among all threads\\\\\n{\\bf input}: $t[j]: int$, $pre(j)$: list of $1..n$;\\\\\n{\\bf init}: $G[j] := t[j]$;\\\\\n\\\\\n{\\bf \\underline{job-scheduling}}:\\\\\n\\> {\\bf forbidden}: $G[j] < \\max \\{G[i] + t[j] ~|~ i \\in pre(j)\\}$;\\\\\n\\> \\> {\\bf advance}: $G[j] := \\max \\{G[i] + t[j] ~|~ i \\in pre(j)\\}$;\\\\\n\\\\\n{\\bf \\underline{job-scheduling}}:\\\\\n\\> {\\bf ensure}: $G[j] \\geq \\max \\{G[i] + t[j] ~|~ i \\in pre(j)\\}$;\\\\\n\\\\\n{\\bf \\underline{shortest path from node $s$: Parallel Bellman-Ford}} \\\\\n\\> {\\bf input}: $pre(j)$: list of $1..n$; $w[i,j]$: int for all $i \\in pre(j)$\\\\\n\\> {\\bf init}: if $(j=s)$ then $G[j] := 0$ else $G[j]$ := maxint;\\\\\n\\> {\\bf ensure}: $ G[j] \\leq \\min \\{ G[i] + w[i,j] ~|~ i \\in pre(j) \\}$\n\\end{tabbing}\n\\end{minipage}\n }\n }\n\\caption{LLP Parallel Program for (a) job scheduling problem using forbidden predicate (b) job scheduling problem using ensure clause and (c) the shortest path problem\n\\label{fig:examples}}\n\\end{center}\n\\vspace*{-0.2in}\n\\end{figure}\n\n\\section{A Parallel Algorithm for the Constrained Stable Matching Problem}\n We now derive the algorithm for the stable matching problem using Lattice-Linear Predicates \\cite{DBLP:conf\/wdag\/Garg17}.\nWe let $G[i]$ be the choice number that man $i$ has proposed to. Initially, $G[i]$ is $1$ for all men. \n \n \n \\begin{definition}\n An assignment $G$ is feasible for the stable marriage problem if (1) it corresponds to a perfect matching (all men are paired with different women) \n and (2) it has no blocking pairs. 
\n \\end{definition}\n \n The predicate ``$G$ is a stable marriage'' is a lattice-linear predicate \\cite{DBLP:conf\/spaa\/Garg20} which\nimmediately gives us Algorithm \\ref{fig:GS}.\n The {\\bf always} section defines variables which are derived from $G$.\n These variables can be viewed as\nmacros. For example, for any thread $j$, $z = mpref[j][G[j]]$. This means that\nwhenever $G[j]$ changes, so does $z$.\nIf man $j$ is forbidden, it is clear that any vector in which man $j$ is matched with $z$ and another man $i$ is matched\n with his current or a worse choice can never be a stable marriage. Thus, it is safe for man $j$ to advance to the next choice.\n \\begin{algorithm}\n \\SetAlgoRefName{LLP-ManOptimalStableMarriage}\n$P_j$: Code for thread $j$\\\\\n {\\bf input}: $mpref[i,k]$: int for all $i,k$; $wrank[k][i]$: int for all $k,i$;\\\\\n {\\bf init}: $G[j] := 1$;\\\\\n{\\bf always}: $z = mpref[j][G[j]];$\\\\\n {\\bf forbidden}: \\\\\n $\\exists i:\\exists k \\leq G[i]:(z = mpref[i][k]) \\wedge (wrank[z][i] < wrank[z][j])$\\\\\n \\hspace*{0.2in} {\\bf advance}: $G[j] := G[j]+1; $\n\\caption{A Parallel Algorithm for Stable Matching \\label{fig:GS}}\n\\end{algorithm}\n\nWe now generalize Algorithm \\ref{fig:GS} to solve the constrained stable marriage problem.\nIn the standard stable matching problem, there are no constraints\non the order of proposals made by different men. \nLet $E$ be the set\nof proposals made by men to women.\nWe also call these proposals {\\em events}\nwhich are executed by\n$n$ processes corresponding to $n$ men denoted by $\\{P_1 \\ldots P_n\\}$.\nEach of the events can be characterized by a tuple $(i,j)$ that corresponds to\nthe proposal made by man $i$ to woman $j$.\nWe impose a partial order $\\rightarrow_p$ on this set of events to model the order in which\nthese proposals can be made. In the standard SMP, every man $P_i$ has his preference list $mpref[i]$ such that\n$mpref[i][k]$ gives the $k^{th}$ most preferred woman for $P_i$. 
We model $mpref$ using\n$\\rightarrow_p$; if $P_i$ prefers woman $j$ to woman $k$, then\nthere is an edge from the event $(i,j)$ to the event $(i,k)$. As in SMP, we assume that\nevery man gives a total order on all women.\nEach process makes proposals to women in the decreasing order of preferences (similar to the Gale-Shapley algorithm).\n\nIn the standard stable matching problem, there are no constraints\non the order of proposals made by different men, and $\\rightarrow_p$ can be visualized as a partial order $(E, \\rightarrow_p)$ with\n$n$ disjoint chains.\nWe generalize the SMP problem to include external constraints\non the set of proposals.\nIn the constrained SMP, $\\rightarrow_p$ can relate proposals made by different men\nand therefore $\\rightarrow_p$ forms a general poset $(E, \\rightarrow_p)$.\nFor example, the constraint that Peter's regret is less than or equal to that of John can be modeled by adding $\\rightarrow_p$ edges\nas follows. For any regret $r$, we add a $\\rightarrow_p$ edge from the proposal by John with regret $r$ to the\nproposal by Peter with regret $r$.\nWe draw $\\rightarrow_p$ edges as solid \nedges, as shown in Fig. \\ref{fig:csmp-model}.\n\n\n\n\nLet $G \\subseteq E$ denote the global state of the system. A global state $G$ is simply the subset of events executed in the computation\nsuch that it preserves the order of events within each $P_i$. \nSince all events executed by a process $P_i$ are totally ordered,\n it is sufficient to record the number of events executed \nby each process in a global state. \nLet $G[i]$ be the number of proposals made by $P_i$. \nInitially, $G[i]$ is $1$ for all men.\nIf $P_i$ has made $G[i]>0$ proposals, then \n$mpref[i][G[i]]$ gives the identity of the woman last proposed to by $P_i$. 
\nWe let $event(i, G[i])$ denote the event in which $P_i$ makes a proposal to $mpref[i][G[i]]$.\nWe also use $succ(event(i, G[i]))$ to denote the next proposal made by $P_i$, if any.\n\n\n\nFor the constrained SMP, we have $\\rightarrow_p$ edges that relate proposals of different processes.\nThe graph in Fig. \\ref{fig:csmp-model} shows an example of using $\\rightarrow_p$ edges in the constrained SMP.\nFor this problem, we work with {\\em consistent global states} (or order ideals \\cite{davey,Gar:2015:bk}).\nA global state $G \\subseteq E$ is {\\em consistent} if\n$ \\forall e,f \\in E: (e \\rightarrow_p f) \\wedge (f \\in G) \\Rightarrow (e \\in G).$\nIn the context of constrained SMP, it is easy to verify that $G$ is consistent iff \nfor all $j$, there does not exist $i$ such that $$succ(event(j, G[j])) \\rightarrow_p event(i, G[i]).$$\nIt is well known that the set of all consistent global states of a finite poset forms a finite\ndistributive lattice \\cite{davey,Gar:2015:bk}. We use the lattice of all consistent global states as ${L}$ for\nthe predicate detection.\n\nIn the standard SMP, women's preferences\nare specified by preference lists $wpref$ such that $wpref[i][k]$ gives the $k^{th}$ most preferred man for woman $i$.\nIt is also convenient to define $wrank$ such that $wrank[i][j]$ gives the choice number $k$ for which $wpref[i][k]$ equals $j$, i.e.,\n$wpref[i][k] = j$ iff $wrank[i][j] = k$.\nWe model these preferences using edges on the computation graph as follows. If an event $e$ \ncorresponds to a proposal by $P_i$ to woman $q$ and she prefers $P_j$, then we add a dashed\n edge\nfrom $e$ to the event $f$ that corresponds to $P_j$ proposing to woman $q$.\nThe set $E$ along with the dashed edges also forms a partial order $(E, \\rightarrow_w)$ where \n$e \\rightarrow_w f$ iff both proposals are to the same woman and that woman prefers the proposal\n$f$ to $e$. 
\nWith $((E, \\rightarrow_p), \\rightarrow_w)$ we can model any SMP specified using $mpref$ and $wpref$.\n\nFigure \\ref{fig:cuts} shows the standard SMP instance of Fig. \\ref{fig:SMP} in our model. To avoid cluttering the\nfigure, we have shown preferences of all men but preferences of only two of the women.\nFig. \\ref{fig:csmp-model} gives an example of a constrained SMP. Since both $\\rightarrow_p$ and $\\rightarrow_w$ are\ntransitive relations, we draw only the transitively reduced diagrams.\n\n\\begin{figure}\n\\begin{tabular}{l | l l l l | l l l | l l l l |}\nmpref & & & & & & & wpref \\\\\nP1 & w4 & w1 & w2 & w3 & & & w1 & P4 & P1 & P3 & P2\\\\\nP2 & w2 & w3 & w1 & w4 & & & w2 & P1 & P4 & P2 & P3\\\\\nP3 & w3 & w1 & w4 & w2 & & & w3 & P1 & P2 & P4 & P3\\\\\nP4 & w2 & w4 & w3 & w1 & & & w4 & P3 & P1 & P4 & P2\\\\\n\\end{tabular}\n\\caption{\\label{fig:SMP} Stable Matching Problem specified using the men's preference lists (mpref) and\nthe women's preference lists (wpref).}\n\\end{figure}\n\n\n\n\n\n\n\\begin{figure}[htbp]\n\\vspace*{-0.3in}\n\\begin{center}\n\\includegraphics[height=3.5in]{mexample2.pdf}\n\\caption{\\label{fig:cuts} Men's preferences are shown in blue solid edges. 
Preferences of women 1 and 2 are shown in dashed green edges.\nIn the standard SMP graph, there are no blue edges from any event in $P_i$ to any event in $P_j$ for distinct $i$ and $j$.}\n\\end{center}\n\\vspace*{-0.2in}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\vspace*{-0.3in}\n\\begin{center}\n\\includegraphics[height=3.5in]{mCS.pdf}\n\\caption{\\label{fig:csmp-model} Constrained SMP Graph corresponding to the constraint that the {\\em regret} for $P_2$ is less than or equal to that of $P_1$.\nIt also shows the preference of $w_3$ for $P_4$ over $P_3$.}\n\\end{center}\n\\vspace*{-0.2in}\n\\end{figure}\n\n\n\nThe above discussion motivates the following definition.\n\\begin{definition}[Constrained SMP Graph]\nLet $E = \\{(i,j) | i \\in [1..n] \\mbox{ and } j \\in [1..n] \\}$.\nA Constrained SMP Graph $((E, \\rightarrow_p), \\rightarrow_w)$ is a directed graph on $E$ with two sets of edges $\\rightarrow_p$ and $\\rightarrow_w$ with the following\nproperties: (1) $(E, \\rightarrow_p)$ is a poset such that the set $P_i = \\{ (i,j) | j \\in [1..n] \\}$ is a chain for all $i$, and\n(2) $(E, \\rightarrow_w)$ is a poset such that the set $Q_j = \\{ (i,j) | i \\in [1..n] \\}$ is a chain for all $j$ and there is no $\\rightarrow_w$ edge\nbetween proposals to different women, i.e.,\nfor all $i,j,k,l: (i,j) \\rightarrow_w (k,l) \\Rightarrow (j=l)$.\n\\end{definition}\n\nGiven a global state $G$, we define the {\\em frontier} of $G$ as the set of maximal events executed by \nany process. The frontier includes only the last event executed by $P_i$ (if any). 
Formally,\n$\\frontier(G) = \\{ e \\in G ~|~ \\forall f \\in G: (f \\neq e) \\wedge (f \\mbox{ and } e \\mbox{ are executed by the same process}) \\Rightarrow (f \\rightarrow_p e) \\}$.\nWe call the events in $G$ that are not in $\\frontier(G)$ {\\em pre-frontier} events.\n\nWe now define the feasible predicate on global states as follows.\n\\begin{definition}[feasibility for marriage]\nA global state $G$ is feasible for marriage iff (1) $G$ is a consistent global state, and (2) there is no dashed edge ($\\rightarrow_w$) from a frontier event to any event of $G$ (frontier or pre-frontier).\nFormally,\n$B_{marriage}(G) \\equiv$\\\\\n$ consistent(G) \\wedge (\\forall e \\in \\frontier(G), \\forall g \\in G: \\neg (e \\rightarrow_w g)).$\n\n\n\\end{definition}\n\\remove{\nA green edge from a frontier event $(i,j)$ to another frontier event $(k,l)$ would imply that $j$ is equal to $l$\nand both men $i$ and $k$ are assigned the same woman, which violates the matching property.\nFor example, in Fig. \\ref{fig:csmp-model}, $(P_3,w_3)$ and $(P_4, w_3)$ cannot both be frontier events of a feasible global state.\nA green edge from a frontier event $(i,j)$ to a pre-frontier event $(k,l)$ in a global state $G$ implies that woman $j$ prefers man $k$ to her partner $i$ and man $k$ prefers $j$ to his partner in $G$.\nFor example, in Fig. 
\\ref{fig:csmp-model}, $(P_3,w_3)$ and $(P_4, w_1)$ cannot both be frontier events of a feasible global state because there is\na green edge from $(P_3, w_3)$ to a pre-frontier event $(P_4, w_3)$.\n}\n\n\n\nIt is easy to verify that the problem of finding a stable matching is the same as finding a global state that \nsatisfies the predicate $B_{marriage}$, which is defined purely in graph-theoretic terms on the\nconstrained SMP graph.\nThe next task is to show that $B_{marriage}$ is lattice-linear.\n\n\n\n\\begin{theorem}\\label{lem:CSMP-forbid}\nFor any global state $G$ that is not a constrained stable matching,\nthere exists $i$ such that $\\forbidden(G,i,B_{marriage})$.\n\\end{theorem}\n\\begin{proof}\n\nFirst suppose that $G$ is not consistent, i.e., there exists $f \\in G$ such that\nthere exists $e \\not \\in G$ and $e \\rightarrow_p f$. Suppose that $e$ is on $P_i$. Then, \n$\\forbidden(G,i,B)$ holds because any global state $H$ that is greater than $G$ cannot be consistent\nunless $e$ is included.\n\nNext, suppose that $G$ is a consistent global state but the assignment for $G$ is not a matching.\nThis means that for some distinct $i$ and $j$, both $G[i]$ and $G[j]$ refer to the same woman, say $w$.\nSuppose that $w$ prefers $j$ to $i$; then we claim $\\forbidden(G, i, B)$.\nConsider any $H$ such that $H[i] = G[i]$ and $H[j] \\geq G[j]$.\nFirst consider the case $H[j] = G[j]$. In this case, the same woman $w$ is still assigned to \ntwo men and hence $H$ is not a stable matching. \nNow consider the case $H[j] > G[j]$. In this case, \nthe woman $w$ prefers man $j$ to $i$,\nand the man $j$ prefers $w$ to the woman assigned in $H[j]$, violating stability.\n\nNow suppose that the assignment for $G$ is a constrained matching but not stable. \nSuppose that $(j,w)$ is a blocking pair in $G$. 
Let $i$ be the man assigned to $w$ in $G$\n (i.e., the woman corresponding to $G[i]$ prefers man $j$ to $i$,\nand the man $j$ also prefers her to his assignment).\nWe claim that $\\forbidden(G, i, B)$.\nConsider any $H$ such that $H[i] = G[i]$ and $H[j] \\geq G[j]$.\nIn this case, $(j,w)$ continues to be blocking in $H$.\nThe woman $w$ prefers man $j$ to $i$,\nand the man $j$ prefers $w$ to the woman assigned in $H[j]$.\n\n\\end{proof}\n\n\n\n\n\n\n\n\nWe now apply the detection of lattice-linear global predicates to the constrained stable matching problem. \n\n\n\n \\begin{algorithm}\n \\SetAlgoRefName{LLP-ConstrainedStableMarriage}\n$P_j$: Code for thread $j$\\\\\n {\\bf input}: $mpref[i,k]$: int for all $i,k$; $wrank[k][i]$: int for all $k,i$;\\\\\n {\\bf init}: $G[j] := 1$;\\\\\n{\\bf always}: $z = mpref[j][G[j]];$\\\\\n {\\bf forbidden}: \\\\\n $\\exists i: \\exists k \\leq G[i]: (z = mpref[i][k]) \\wedge (wrank[z][i] < wrank[z][j])$ $\\vee (\\exists i: succ(event(j, G[j])) \\rightarrow_p event(i, G[i]))$ \\\\\n\\hspace*{0.2in} {\\bf advance}: {\\bf if} $(G[j] < n)$ then $G[j] := G[j]+1; $\\\\\n\\hspace*{0.2in} \\hspace*{0.2in} {\\bf else} print(``no constrained stable marriage'')\n\\caption{A Parallel Algorithm for the Constrained Stable Matching \\label{fig:alg}}\n\\end{algorithm}\n\n\\remove {\n\\begin{figure}[htb]\n\\begin{center}\n\\fbox{\\begin{minipage} {\\textwidth}\\sf\n\\begin{tabbing}\nxx\\=xxxx\\=xxxx\\=xxxx\\=xxxx\\=xxxx\\= \\kill\n{\\bf Algorithm Constrained-Stable-Matching}:\nUse Algorithm $LLP$ where\\\\\n$T$ = $(n,n,...,n)$; \/\/maximum number of proposals at $P_i$\\\\\n$z = mpref[j][G[j]]$; \/\/current woman assigned to man $j$\\\\\n$\\forbidden(G, j, B_{marriage}) \\equiv (G[j] = 0)$ \\\\\n \\> $\\vee (\\exists i: \\exists k \\leq G[i]: (z = mpref[i][k]) \\wedge (wrank[z][i] < wrank[z][j]))$\\\\\n \\> $\\vee (\\exists i: succ(event(j, G[j])) \\rightarrow_p event(i, G[i]))$ \\\\\n\n$\\alpha(G,j,B_{marriage}) = 
(G[j]+1)$;\n\\end{tabbing}\n\\end{minipage}\n}\n\\end{center}\n\\vspace*{-0.2in}\n\\caption{An efficient algorithm to find the man-optimal constrained stable matching \\label{fig:alg}}\n\\vspace*{-0.1in}\n\\end{figure}\n}\n\n\nThe algorithm to find the man-optimal constrained stable marriage is shown in Fig. \\ref{fig:alg}.\nFrom the proof of Theorem \\ref{lem:CSMP-forbid}, we get the\nfollowing implementation of $\\forbidden(G, j, B_{marriage})$ in Fig. \\ref{fig:alg}.\nThe first disjunct holds when the woman $z$ assigned to man $j$ is such that there exists a man $i$\nwho is either (1) currently assigned to $z$ and woman $z$ prefers man $i$ to $j$, or (2) currently assigned\nto another woman but he prefers $z$ to the current assignment. The first case holds when $k=G[i]$ and the\nsecond case holds when $k < G[i]$.\nThe first case is equivalent to checking if a dashed edge exists from $(j, z)$ to a frontier event.\nThe second case is equivalent to checking if a dashed edge exists to a pre-frontier event.\nThe second disjunct checks whether the assignment for $G$ violates an external constraint with respect to $j$.\n\nOur algorithm generalizes the Gale-Shapley algorithm in that it allows specification of external constraints.\n\n\n\nWe now show an execution of the algorithm on the CSMP in Fig. \\ref{fig:csmp-model}.\nSince every $P_i$ must make at least one proposal, we start with \nthe first proposal for every $P_i$. The corresponding assignment is\n$[w_4, w_2, w_3, w_2]$, i.e., $P_1$ is assigned $w_4$, $P_2$ is assigned $w_2$, and so on.\nIn this global state $G$, the second component is forbidden. This is because\n$w_2$ prefers $P_4$ over $P_2$. We advance on $P_2$ to get the global state\n$[w_4, w_3, w_3, w_2]$. Now, \nbecause $w_3$ prefers $P_2$ over $P_3$, $P_3$ must advance.\nWe get the global state\n$[w_4, w_3, w_1, w_2]$. This is a stable matching. 
However, it does not satisfy\n the constraint that the regret of $P_2$ is less than or equal to that of $P_1$.\n Here, $P_1$ is forbidden and $P_1$ must advance. We now get the global state\n$[w_1, w_3, w_1, w_2]$ which is not a matching. \nSince $w_1$ prefers $P_1$ over $P_3$, $P_3$ must advance.\nWe reach the global state\n$[w_1, w_3, w_4, w_2]$, which is a constrained stable matching.\n\n\n\n\n\nWe have discussed the man-oriented constrained stable marriage problem. \nOne can also get an LLP algorithm for the woman-oriented constrained stable marriage problem.\nThe paper \\cite{GarHu20} gives an algorithm $\\beta$ that performs a downward traversal in the proposal lattice in search of a stable marriage.\nWhen the numbers of men and women are equal, such a traversal\ncan be accomplished by switching the roles of men and women. However, in \\cite{GarHu20} it is assumed that\nthe number of men $n_m$ may be much smaller than the number of women $n_w$. It has\ntime complexity $O(n_m^2 + n_w)$. Switching the roles of men and women is not feasible\nwithout increasing the complexity of the algorithm. \n\n\n\\section{A Distributed Algorithm for the Constrained Stable Matching Problem}\nWhile the standard SMP has been studied in a distributed system setting (e.g., \\cite{brito2005distributed,kipnis2009note}), \nwe study the more general constrained SMP in such a setting. \nOur goal is to show how a parallel LLP algorithm can be converted to a distributed program.\nWe assume an asynchronous system in which all channels are FIFO and reliable and\nthat processes do not crash.\n\n\nWe assume that each man and woman knows only his or her preference list. 
\n$P_i$ corresponds to the computation at man $i$ and $Q_i$ corresponds to the computation at woman $i$.\nEach process $P_i$ is responsible for updating its own component $G[i]$.\nFor the LLP algorithm, we will assume that the only variable at $P_i$ is $G$ and\nall other variables such as $mpref$ are constants.\nIn addition, each man is given\na list of prerequisite proposals for each of the women that he can propose to.\nIn terms of the constrained-SMP graph, this corresponds to every man knowing the\nincoming solid\nedges for the chain that corresponds to that man in the graph.\nFrom $mpref$, one can also derive $mrank$, the rank $P_i$ assigns to each woman.\n\nThe process $Q_i$ has $wpref$, the preferences of woman $i$. However, it is more convenient to\nkeep $wrank$, the rank $Q_i$ assigns to each man. This information is input to $Q_i$.\nThe only variable a woman $Q_i$\nmaintains is {\\em partner}. Note that given $G$, the partner for each woman can be derived.\nHowever, in a distributed system setting it is more efficient to maintain the partner at each woman.\n\nWhenever $G[i]$ is updated by $P_i$, we will assume that $P_i$ sends a message\nto other relevant processes informing them about the update. Each process keeps\nenough information to be able to evaluate its forbidden predicate. Since the message\ntransfer takes time, the data structures are not necessarily up to date at each process.\nIn particular, $P_j$ may have an old value of $G[i]$ maintained at $P_i$.\nWe show that the LLP algorithm has the advantage that it works correctly\ndespite the fact that processes use old values of $G$. Each process evaluates\nits forbidden predicate and advances its state whenever the forbidden predicate is true.\nThe algorithm terminates when no process is forbidden. 
In a distributed system setting, \nwe need some process to determine that the system has reached such a state.\nA possible solution for running LLP algorithms in a distributed environment is to \nrun it as a diffusing computation \\cite{dijkstra1980termination} and use a termination detection algorithm\nalong with the LLP algorithm.\n\nWe now present a diffusing computation for\nsolving the constrained SMP.\nWe adopt the standard rules of a diffusing computation.\nA {\\em passive} process can become {\\em active} only on receiving messages, and only an active process can send a message.\nWe assume the existence of a process called environment that starts the algorithm by\nsending {\\em initiate} messages to all men. Our algorithm is shown in Fig. \\ref{fig:CSMP}.\n\nThere are four types of messages used in the algorithm. There are exactly $n$ {\\em initiate} messages sent\nby the environment to all men. Each man can send two types of messages. He sends {\\em propose} messages\nto women one at a time in the order given by $mpref$. These messages are sent whenever the current state of the man\nis forbidden and he needs to advance to the next woman.\nA man may sometimes skip proposing to some women, as\nexplained later. A man also sends {\\em advance} messages to other men, which may force them to skip\ncertain proposals to satisfy external constraints. \n\nA woman acts only when she receives a {\\em propose} message \nfrom a man $j$.\nOn receiving a {\\em propose} message, if she is currently not engaged, she \ngets engaged to man $j$. If she is engaged to a man and the new proposal is preferable to her current\npartner, then she sends a {\\em reject} message to the current partner. If the new proposal is less preferable, then\nshe sends a {\\em reject} message to the proposer. 
The variable {\\em partner} indicates her partner at any point.\nIf the value of {\\em partner} is zero, then that woman is free; otherwise, she is engaged.\nNote that a woman never sends any {\\em accept} message.\nThe algorithm is based on the assumption that if a woman has received a proposal and not rejected it, then\nshe has accepted the message (the algorithm assumes that no messages are lost).\n\nWe now explain the behavior of men for each message type he receives as shown in Fig. \\ref{fig:CSMP}.\nOn receiving an {\\em initiate} message from the environment, we know that any assignment must have\nat least one proposal from that man. To satisfy external constraints, all proposals that are prerequisite must also be made.\nHence, the man sends an {\\em advance} message to all men with prerequisite proposals. He then sends a proposal to his top choice.\nOn receiving a {\\em reject} message, he first checks if the {\\em reject} message is from his current partner. Since a man\nmay have advanced to a different proposal, there is no need for any action if the {\\em reject} message is from an earlier proposal.\nIf the {\\em reject} message is for the current proposal, then\nthe man knows that he must make another proposal. If he is out of proposals, then\nhe announces that there is no stable marriage with external constraints. Otherwise, he moves on to the next best proposal after\nsending out {\\em advance} messages to all men with prerequisite proposals.\nOn receiving an {\\em advance} message with woman $w$, the man must ensure that he has made a proposal to woman $w$.\nIf he has already made a proposal to $w$, then there is nothing to be done; otherwise, \nhe skips all proposals till\nhe gets to his choice which corresponds to $w$. 
Next, he makes a proposal to $w$ thereby satisfying external constraints.\n\n\n\n\n\n\\begin{figure}[htbp]\\begin{center}\n\\begin{tabbing}\nx\\=xxx\\=xxx\\=xxx\\=xxx\\=xxx\\= \\kill\n${\\bf P_i}$:: \/\/ Process for Man $i$\\\\\n\\> {\\bf input}\\\\\n\\> \\> $mpref$: array[$1$..$n$] of $1..n;$ \/\/ men's preferences\\\\\n\\> \\> $mrank$: array[$1$..$n$] of $1..n;$ \/\/ rank of each of the women by man\\\\\n\\> \\> \/\/ $mrank$ can be derived from $mpref$\\\\\n\\> \\> $prerequisite$: array[$1$..$n$] of list of proposals; \\\\\n\\> \\> \/\/ list of proposals that must be executed before $mpref[i]$\\\\ \n\\> {\\bf var}\\\\\n\\> \\> $G_i:1..n$ initially $1$; \/\/ proposal number by $P_i$ \\\\\n\\\\\n\\> Upon receiving a message ``initiate'' from environment;\\\\\n\\> \\> for each $(m,w) \\in prerequisite[G_i]$\\\\\n\\> \\> \\> send $(``advance\" , w)$ to $P_m$;\\\\\n\\> \\> send $(``proposal\", i)$ to woman $mpref[G_i]$;\\\\\n\\\\\n\\> Upon receiving a message $(``reject\", j)$:\\\\\n\\> \\> {\\bf if} $(mpref[G_i] = j)$ {\\bf then} \/\/ rejected by current partner\\\\\n\\> \\> \\> {\\bf if} $(G_i = n)$ {\\bf then}\\\\\n\\> \\> \\> \\> Announce ``no constrained stable marriage possible\" ;\\\\\n\\> \\> \\> {\\bf else} \\\\\n\\> \\> \\> \\> $G_i := G_i+1$;\\\\\n\\> \\> \\> \\> for each $(m,w) \\in prerequisite[G_i]$\\\\\n\\> \\> \\> \\> \\> send $(``advance\" , w)$ to $P_m$;\\\\\n\\> \\> \\> \\> send $(``proposal\", i)$ to woman $mpref[G_i]$;\\\\\n\n\\\\\n\\> Upon receiving a message $(``advance\", q)$:\\\\\n\\> \\> {\\bf while} $(mrank[q] > G_i)$ \\\\\n\\> \\> \\> $G_i := G_i+1$\\\\\n\\> \\> \\> for each $(m,w) \\in prerequisite[G_i]$\\\\\n\\> \\> \\> \\> send $(``advance\" , w)$ to $P_m$;\\\\\n\\> \\> {\\bf endwhile};\\\\\n\\> \\> send $(``proposal\", i)$ to woman $mpref[G_i]$;\\\\\n\\\\\n${\\bf Q_i}$:: \/\/ Process for Woman $i$\\\\\n\\> {\\bf input}\\\\\n\\> \\> $wrank$: array[$1$..$n$] of $1..n;$ \/\/ rank of each man by the woman \\\\\n\\> {\\bf var}\\\\\n\\> \\> 
$partner$: $0..n;$ initially $0$ \/\/ current partner \\\\\n\\\\\n\\> Upon receiving a message $(``proposal'', j)$:\\\\\n\\> \\> {\\bf if} $(partner = 0)$ {\\bf then}\\\\\n\\> \\> \\> $partner := j$;\\\\\n\\> \\> {\\bf else if} $(wrank[j] < wrank[partner]) $ {\\bf then}\\\\\n\\> \\> \\> send $(``reject\", i)$ to $P_{partner}$;\\\\\n\\> \\> \\> $partner := j$;\\\\\n\\> \\> {\\bf else}\\\\\n\\> \\> \\> send $(``reject\", i)$ to $P_{j}$;\\\\\n\\\\\n\\\\\n{\\bf Environment}::\\\\\n Process that (1) initiates the diffusing computation and \\\\\n (2) detects Termination\\\\\n \\\\\n \\> send ``initiate'' message to all $P_i$\\\\\n\\> Upon Detecting Termination of Diffusing Computation\\\\\n \\> \\> Announce the current assignment as a stable marriage \\\\\n \\> \\> satisfying external constraints. Halt\\\\\n\\end{tabbing}\n\n\\end{center}\n\\caption{{A diffusing distributed computation algorithm for constrained SMP} for men $P_i$ and women $Q_i$\\label{fig:CSMP}}\n\\end{figure}\n\n\n\nObserve that when a man $P_i$ advances, he does not inform his existing partner, if any. Since the numbers of men and women are the same, his partner\nwill eventually get a proposal from someone whom she prefers to $P_i$ if there exists a constrained stable matching.\nHis partner $q$ can never be matched with $P_j$ such that $q$ prefers $P_i$ over $P_j$. Otherwise, \nwe have a blocking pair: both $q$ and $P_i$ prefer each other over their partners.\n\nIf there are no external constraints, then there are no {\\em advance} messages, and the algorithm is \na distributed version of the Gale-Shapley algorithm. \nEven in the presence of external constraints, the algorithm shares the following properties with the Gale-Shapley\nalgorithm. 
As the algorithm progresses, the partner for a man can only get worse and the partner for a woman can only get better.\nBoth these properties are direct results of the way men send their proposals and the way women respond to proposals.\n\nThere are also some crucial differences from the Gale-Shapley algorithm.\nIn the Gale-Shapley algorithm, once a woman is engaged she continues to be engaged.\nFor any woman $w$, the predicate that there exists a man such that he is assigned to $w$ is a stable predicate.\nAs a result, the termination of Gale-Shapley (sequential or distributed version) is easy to detect. When all women have been proposed to, the system\nhas reached a stable matching. However, due to external constraints, it is not true in CSMP that once a woman is engaged she\ncontinues to stay engaged. The man to whom she was engaged may be required to advance on receiving an {\\em advance} message,\nand then that woman is no longer logically assigned to that man. For the constrained SMP algorithm, we need additional messages to detect termination.\nIt is the environment process that initiates the computation and detects termination \nof the computation. We assume that a termination detection algorithm such as that of Dijkstra and Scholten \\cite{dijkstra1980termination} \nis running in conjunction with the CSMP algorithm. Termination in a diffusing computation corresponds to the condition \nthat all processes are passive and there are no messages in transit. \n\nWe now show that the algorithm in Fig. 
\\ref{fig:CSMP} correctly finds the least assignment (or man-optimal) constrained stable matching whenever it exists.\nThe correctness follows from the following invariants.\n\\begin{lemma}\nAny assignment $M$ in which $M[i] < G_i$ for any $P_i$ cannot be a constrained stable marriage.\n\\end{lemma}\n\\begin{proof}\nInitially, the invariant is true because $G_i$ is initialized to $1$ and $M[i]<1$ implies that $P_i$ has not proposed to anyone.\nThere are only two reasons the $G_i$ variable is incremented.\nEither the woman corresponding to the current proposal has sent a {\\em reject} or\na man has sent a message to {\\em advance} beyond the current woman.\nWe first consider the case when the current proposal was rejected by the woman $q$.\nIt is sufficient to show that any assignment in which this man is assigned $q$ cannot be a stable marriage.\nSuppose $q$ rejected $P_i$ in favor of $P_j$. If $P_j$ is also assigned to $q$ in $G$, then\nit is not a matching. If $P_j$ is assigned to a woman that he proposes to later, then $q$, who is assigned to\n$P_i$, prefers $P_j$, and $P_j$ prefers $q$ to the woman to whom he is assigned; thus $(P_j, q)$ is a blocking pair.\nIf $G_i$ is advanced because of an {\\em advance} message from $P_j$, then any assignment in which $M[i] < G_i$ does\nnot satisfy prerequisite constraints due to $\\rightarrow_p$.\n\\end{proof}\n\nTo show that the algorithm gives a stable matching on termination, if it exists, we show that\nthe number of successful proposals is equal to $n$ on termination. \nA proposal is defined to be successful\nif it is neither rejected by a woman nor advanced over (and thereby abandoned) by the man.\nWe start the algorithm by each process sending out a proposal. Thus, there are $n$ proposals to start with.\nAny proposal that is rejected by a woman leads to another proposal if the reject message is not in transit.\nAny proposal that is skipped due to prerequisite constraints also leads to another proposal. 
\nThus, either some man\nruns out of proposals, or the computation does not terminate until every man has made a successful proposal.\nThis assertion gives us\n\\begin{lemma}\nIf the algorithm announces that the current assignment denotes a stable marriage, then the assignment given by $G$\nis a stable matching satisfying external constraints, i.e., if $P_i$ is paired with $mpref[i][G_i]$, then \nthe assignment is a constrained stable matching.\n\\end{lemma}\n\\begin{proof}\nSince there are no {\\em reject} messages, {\\em advance} messages, or {\\em propose} messages in transit, we know that\nthere are $n$ successful proposals. Each successful proposal has the property that\nthe current proposal of $P_i$ is to woman $j$ iff the value of {\\em partner} at $Q_j$ equals $i$.\nSince any proposal that violates stability is rejected and any proposal that violates external constraints is advanced over,\nwe get that the assignment on termination is a stable matching satisfying external constraints.\n\\end{proof}\n\n\n\n\n\nWe now analyze the message complexity of the algorithm.\nSuppose that there are $e$ external constraints, $n$ men, $n$ women,\nand $m$ unsuccessful proposals.\nThere are $n$ initiate messages.\nFor every unsuccessful proposal, the algorithm uses at most one {\\em reject} message.\nThere are exactly $n$ final successful proposals resulting in one message per proposal in the\ndiffusing computation.\nIf there are $e$ external constraints (solid edges across processes), then there are at most $e$ advance messages.\nThus, the messages in the diffusing computation are at most\n$n$ {\\em initiate} messages, $m$ unsuccessful {\\em propose} messages, $m$ {\\em reject} messages, $n$ successful {\\em propose} messages,\nand $e$ {\\em advance} messages.\nThus, the total number of messages in the diffusing computation is at most\n$2m+2n+e$.\n\nTermination detection algorithms such as Dijkstra and Scholten's require as many messages as the\napplication messages in the worst case 
giving us the overall message complexity of $4m+4n+2e$ messages.\nWe note here that this message complexity can be reduced by various optimizations such as \ncombining the {\\em signal\/ack} messages of Dijkstra and Scholten's algorithm with application messages.\nFor example, a {\\em reject} message can also serve as an {\\em ack} message for a {\\em propose} message.\nFor simplicity, we do not consider these optimizations in the paper.\nSince both $m$ and $e$ are $O(n^2)$, we get $O(n^2)$ overall message complexity.\nAlthough the number of unsuccessful proposals can be $O(n^2)$ in the worst case,\nit is $O(n \\log n)$ on average for the standard SMP \\cite{knuth1997stable}.\nNote that each message carries only $O(\\log n)$ bits.\n\n\\remove {\n\\section{A Token-based Algorithm for Constrained SMP}\n\n\\label{sec:vdist}\nThe diffusing computation based distributed algorithm requires $4(m+n)+e$ messages of size $O(\\log n)$ bits.\nWe now show a token-based algorithm that requires $2(m+n)$ messages although each message is of size $O(n \\log n)$ bits.\n\n\n\nThe distributed WCP detection algorithm uses a unique token. The\ntoken contains two vectors. The first vector is labeled {$G$}.\nThis vector defines the current candidate cut. If {$G[i]$} has\nthe value $k$, then the proposal $k$ from process $P_i$ is part of the\ncurrent assignment. Note that the current assignment may not be a matching, i.e., a woman may be assigned to multiple men.\nAlso, the current assignment may not satisfy the external constraints.\nThe token is initialized with $\\forall i : G[i] = 0$.\n\nThe second vector is labeled $color$, where $color[i]$\nindicates the color for the candidate proposal from process\n$P_i$. 
The color of a state can be $red$, $green$ or $yellow$.\nIf $color[i]$ equals $red$, then the proposal $(i,G[i])$ and all its\npredecessors have been eliminated and can never satisfy the constrained stable matching.\nIf $color[i] = green$, then there is no state in $G$\nsuch that $(i,G[i])$ happened before that state and the woman assigned to $P_i$ has not been \nassigned to any other man. If $color[i] = yellow$, then the proposal $(i,G[i])$ satisfies external constraints but the woman \nassigned to $i$ is assigned to another man $j$. In this case, the woman would decide if $P_i$ is preferable to $P_j$ or not.\nWhen the token goes to that woman, she would convert the $yellow$ color to either $red$ or $green$ depending upon her \npreference.\n The token is\ninitialized with $\\forall i : color[i] = red$. Whenever the token satisfies is green for all $P_i$, the algorithm terminates and\nthe constrained stable matching is given by $G$. If a process $P_i$ finds that its color is red in the token and it has run out of proposals,\nthen the algorithm terminates with the messages that ``no constrained stable marriage exists.''\n\n\nThe token is sent to process $P_i$ only when $color[i] = red$.\nWhen it receives the token, $P_i$ moves to the next proposal in its $mpref[i]$ and then checks for violations of\nconsistency conditions with this new proposal. \nThis activity is\nrepeated until the candidate proposal satisfies all external constraints.\nIt then checks if its proposal is to a woman who is already engaged.\nIn that case, the color is labeled yellow; otherwise, \nit can be labeled green. \n\nNext, $P_i$ examines the token to see if any other man violates\nexternal constraints. If it finds any $j$ such that $(j, G[j])$ happened\nbefore $(i, G[i])$, then it makes $color[j]$ red. 
\n\nNow $P_i$ is ready to ship the token.\nIf all states in\n$G$ are green, that is, $G$ satisfies all external constraints, no woman\nis assigned to multiple men, and all men are assigned, then $P_i$\nhas detected the constrained stable matching. Otherwise, if its color is yellow, the token is sent to\nthe corresponding woman. If the color is green, then the token is sent to a man whose color is red.\n\nThe algorithm for these actions\nis given in Fig. \\ref{f:monitor-dwcp}. Note that the token can start on any\nprocess.\n\n\\begin{figure}[htbp]\\begin{center}\n\\fbox{\\begin{minipage} {\\textwidth}\\sf\n\\begin{tabbing}\nx\\=xxxx\\=xxxx\\=xxxx\\=xxxx\\=xxxx\\= \\kill\n$P_i$:\\\\\n\\> {\\bf var}\\\\\n\\> \\> \/\/ vector clock from the candidate state\\\\\n\\> \\> $candidate$: array[$1$..$n$] of integer initially $0$; \\\\\n\\\\\n\\> Upon receiving the token $(G, D, color)$ \\\\\n\\> \\> {\\bf while} $(color[i] = red)$ {\\bf do}\\\\\n\\> \\> \\> {\\bf if} $(G[i] < n) G[i]++$; \/\/ move to the next proposal if any;\\\\\n\\> \\> \\> else { announce ``no constrained stable marriage''; halt;} \\\\\n\\> \\> \\> $D := max (D, G[i].v);$\\\\\n\\> \\> \\> {\\bf if} $(D[i] < G[i])$ {\\bf then}\\\\\n\\> \\> \\> \\> $color[i]:=green;$\\\\\n\\> \\> {\\bf endwhile};\\\\\n\\> \\> {\\bf for} $j := 1$ {\\bf to } $n$, $(j \\neq i)$ {\\bf do}\\\\\n\\> \\> \\> {\\bf if} $(D[j] >= G[j])$ {\\bf then}\\\\\n\\> \\> \\> \\> $color[j] := red$;\\\\\n\\> \\> {\\bf endfor}\\\\\n\\> \\> {\\bf for} $j := 1$ {\\bf to } $n$, $(j \\neq i)$ {\\bf do}\\\\\n\\> \\> \\> {\\bf if} $(color[j] = green) \\wedge (G[i].w = G[j].w)$ {\\bf then}\\\\\n\\> \\> \\> \\> $color[i] := yellow$;\\\\\n\\> \\> {\\bf endfor}\\\\\n\\> \\> {\\bf if} ($\\exists j: color[j] = yellow)$ {\\bf then} send token\nto woman $G[i].w$;\\\\\n\\> \\> {\\bf else if} ($\\exists j: color[j] = red)$ {\\bf then} send token\nto woman $P_j$;\\\\\n\\> \\> {\\bf else} {announce \"constrained stable marriage found\"; return 
$G$;}\\\\\n\\\\\n\\\\\nWoman $wi$:\\\\\n\\> Upon receiving the token $(G, D, color)$ from $P_j$\\\\\n\\> $cur$ := man $k \\neq j$ such that $(G[k].w = i) $ and $(G[k].color = green)$;\\\\\n\\> \\> {\\bf if} $(wrank[i][j] > wrank[i][cur])$ \\\\\n\\> \\> \\> $color[j] := red; $ send token to $P_j$\\\\\n\\> \\> {\\bf else}\\\\\n\\> \\> \\> $color[cur] := red; $ send token to $P_{cur}$\n\\end{tabbing}\n\\end{minipage}\n}\n\\end{center}\n\\caption{Monitor process algorithm \\label{f:monitor-dwcp}}\n\\end{figure}\n\n\nWe first analyze the time complexity for computation. It is easy to\nsee that whenever a man receives the token, it advances by at least one proposal.\n Every time a proposal is advanced, $O(n)$ work\nis performed by the process with the token. There are at most\n$n^2$ proposals; therefore, the total computation time for all processes\nis $O(n^3)$. The work for any process in the distributed\nalgorithm is at most $O(n^2)$. Upon receiving a token, either that man sends the token to another man\nor the woman who he proposes next. A woman can send token only to a man.\nTherefore, the total number of messages is at most twice the number of proposals\nexplored before the algorithm terminates. Each message carries $O(n)$ integers (assuming that an integer is sufficient to encode $n$).\n}\n\n\n\\input{bxx-superStable.tex}\n\n\\input{bxx-strongStable.tex}\n\n\\section{Conclusions and Future Work}\nWe have shown that the Lattice-Linear Parallel Algorithm can solve many problems in the stable marriage literature.\nWe have shown that the LLP Algorithm can also be converted into an asynchronous distributed algorithm.\n\n\nIn the constrained SMP formulation, we have assumed that $(E, \\rightarrow_p)$ is a poset for simplicity.\nOur algorithms are applicable when $(E, \\rightarrow_p)$ may have cycles. For the general graph $(E, \\rightarrow_p)$ we can consider\nthe graph on strongly connected components which is guaranteed to be acyclic. 
By viewing each strongly\nconnected component as a super-proposal in which multiple proposals are made simultaneously, the same\nanalysis and algorithms can be applied.\n\nWe have also derived parallel LLP algorithms for stable matching problems with ties.\nOur technique gives an easy derivation of algorithms to find the man-optimal matchings as well as the\nsublattice representation of superstable and strongly stable matchings.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\subsection{Basic notation}\n\\label{subsec-1.1}\nWe use the same basic notation as in \\cite{Hed3}. So, we\nwrite $\\mathbb{R}$ for the real line, $\\mathbb{C}$ for the complex plane, \n$\\mathbb{C}_\\infty:=\\mathbb{C}\\cup\\{\\infty\\}$ for the extended complex plane \n(the Riemann sphere). We use $\\mathrm{d} s$ for normalized arc length\nmeasure, and $\\mathrm{d} A$ for normalized area measure, while $\\partial_z$,\n$\\bar\\partial_z$ are the standard Wirtinger derivatives, and\n$\\hDelta_z=\\partial_z\\bar\\partial_z$. \nWe let $\\mathbb{D}$ denote the open unit disk, $\\mathbb{T}:=\\partial\\mathbb{D}$ the unit circle, \nand $\\mathbb{D}_e$ the exterior disk. We use the sesquilinear forms $\\langle\\cdot,\n\\cdot\\rangle_\\mathbb{T}$ and $\\langle \\cdot,\\cdot\\rangle_\\mathbb{D}$ where the measure is\n$\\mathrm{d} s$ and $\\mathrm{d} A$, respectively.\nWe write $1_E$ for the indicator function of a subset $E$. \n\n\n\\subsection{The Bloch space and the Bloch seminorm}\nThe \\emph{Bloch space} consists of those holomorphic functions \n$g:\\mathbb{D}\\to\\mathbb{C}$ that are subject to the seminorm boundedness condition\n\\begin{equation}\n\\|g\\|_{\\mathcal{B}(\\mathbb{D})}:=\\sup_{z\\in\\mathbb{D}}(1-|z|^2)|g'(z)|<+\\infty.\n\\label{eq-Blochnorm}\n\\end{equation}\nLet $\\mathrm{aut}(\\mathbb{D})$ denote the group of sense-preserving M\\\"obius \nautomorphism of $\\mathbb{D}$. 
By direct calculation,\n\\[\n\\|g\\circ\\gamma\\|_{\\mathcal{B}(\\mathbb{D})}=\\|g\\|_{\\mathcal{B}(\\mathbb{D})},\\qquad\n\\gamma\\in\\mathrm{aut}(\\mathbb{D}),\n\\] \nwhich says that the Bloch seminorm is invariant under all M\\\"obius \nautomorphisms of $\\mathbb{D}$.\nAn immediate observation we can make at this point is that provided that \n$g(0)=0$, we have the estimate\n\\begin{equation*}\n|g(z)|\\le\\|g\\|_{\\mathcal{B}(\\mathbb{D})}\\int_0^{|z|}\\frac{\\mathrm{d} t}{1-t^2}\n=\\frac12\\,\\|g\\|_{\\mathcal{B}(\\mathbb{D})}\\log\\frac{1+|z|}{1-|z|},\\qquad z\\in\\mathbb{D},\n\\end{equation*}\nwhich is sharp pointwise. This estimate is related to the interpretation\nof the functions $f\\in\\mathcal{B}(\\mathbb{D})$ as hyperbolically Lipschitz continuous functions.\n\n\n\\subsection{The Bergman projection of bounded functions}\n\\label{subsec-PLinfty}\n\nFor $\\mu\\in L^1(\\mathbb{D})$, let \n\\[\n\\mathbf{P} \\mu(z):=\\int_\\mathbb{D}\\frac{\\mu(w)}{(1-z\\bar w)^2}\\,\\mathrm{d} A(w),\\qquad z\\in\\mathbb{D},\n\\]\nbe its \\emph{Bergman projection}. Restricted to $L^2(\\mathbb{D})$, it is the \northogonal projection onto the subspace of holomorphic functions. In addition, \nit acts boundedly on $L^p(\\mathbb{D})$ for each $p$ in the interval $1<p<+\\infty$. In what follows, we use the coordinates $z=\\mathrm{e}^{\\mathrm{i}\\pi\\zeta}$, where $\\mathrm{Im}\\,\\zeta>0$ and\n$\\mathrm{Re}\\, \\zeta\\in\\mathbb{R}\/2\\mathbb{Z}$. The boxes are now just rectangles that form a grid. 
The Bergman \nprojection may be expressed in these coordinates, where $\\mathcal{D}$ is the \nfundamental domain in the upper half-plane with\nreal part between $-1$ and $1$:\n\\[\n\\mathbf{P} \\mu (\\mathrm{e}^{\\mathrm{i}\\pi\\zeta})=\\int_{\\mathcal{D}}\n\\frac{1}{(1-\\mathrm{e}^{\\mathrm{i}\\pi(\\zeta-\\bar\\xi)})^2}\\mu(\\mathrm{e}^{\\mathrm{i}\\pi\\xi})\\,\\mathrm{e}^{-2\\pi\\mathrm{Im} \\xi}\\mathrm{d} A(\\xi).\n\\]\nSimilarly, we find that\n\\begin{multline}\n(1-|z|^2)(\\mathbf{P}\\mu)'(z)\\big|_{z:=\\mathrm{e}^{\\mathrm{i}\\pi\\zeta}}\n\\\\=\n2(1-\\mathrm{e}^{-2\\pi\\mathrm{Im}\\zeta})\\int_{\\mathcal{D}}\n\\frac{\\mathrm{e}^{-\\mathrm{i}\\pi\\bar\\xi}}{(1-\\mathrm{e}^{\\mathrm{i}\\pi(\\zeta-\\bar\\xi)})^3}\\,\n\\mu(\\mathrm{e}^{\\mathrm{i}\\pi\\xi})\\,\\mathrm{e}^{-2\\pi\\mathrm{Im} \\xi}\\mathrm{d} A(\\xi),\n\\label{eq:Ngbasic}\n\\end{multline}\nand we observe that for $\\zeta,\\xi\\in\\mathcal{D}$, the formula\n\\begin{multline}\n\\label{eq:singres0}\n\\frac{1}{(\\mathrm{e}^{\\mathrm{i}\\pi(\\zeta-\\bar\\xi)}-1)^3}=\n\\\\\n\\sum_{j=-1}^1\\bigg\\{(\\mathrm{i}\\pi(\\zeta-\\bar\\xi+2j))^{-3}-\n\\frac32(\\mathrm{i}\\pi(\\zeta-\\bar\\xi+2j))^{-2}+(\\mathrm{i}\\pi(\\zeta-\\bar\\xi+2j))^{-1}\\bigg\\}+\\mathrm{O}(1)\n\\end{multline}\nallows us to control the singularity. The sum over $j\\in\\{-1,0,1\\}$ is due to the fact that the\nleft-hand side expression is $2$-periodic in the variable $\\zeta-\\bar\\xi$. But we may as well \nfocus our attention on the\ncontribution with $j=0$, and observe that the contributions associated with \nthe second and third terms on the right-hand side of \n\\eqref{eq:singres0} are negligible, i.e., they correspond to asymptotically vanishing contributions\nin \\eqref{eq:Ngbasic} as $\\mathrm{Im}\\,\\zeta\\to0$ (i.e., as $|z|\\to1$). 
Similarly,\n\\[\n\\mathrm{e}^{-\\mathrm{i}\\pi\\bar\\xi}\\,\\mathrm{e}^{-2\\pi\\mathrm{Im}\\, \\xi}=1-\\mathrm{i}\\pi\\bar\\xi -2\\pi\\mathrm{Im}\\,\\xi+\\mathrm{O}(|\\xi|^2),\n\\]\nwhere only the constant $1$ makes a significant contribution. For $\\zeta\\in\\mathcal{D}$ with\n$|\\zeta|\\le\\frac12$, we see from\n\\eqref{eq:Ngbasic} that\n\\begin{multline}\n(1-|z|^2)(\\mathbf{P}\\mu)'(z)\\big|_{z:=\\mathrm{e}^{\\mathrm{i}\\pi\\zeta}}\n\\\\=\n-4\\pi(\\mathrm{Im}\\,\\zeta+\\mathrm{O}(\\mathrm{Im}\\,\\zeta)^2)\\int_{\\mathcal{D}\\cap\\mathbb{D}}\n\\frac{1}{{(\\mathrm{i}\\pi(\\zeta-\\bar\\xi)})^3}\\,\n\\mu(\\mathrm{e}^{\\mathrm{i}\\pi\\xi})\\,\\mathrm{d} A(\\xi)+\\mathrm{o}(1),\n\\label{eq:Ngbasic2}\n\\end{multline}\nas $\\mathrm{Im}\\,\\zeta\\to0$.\nEffectively this reduces to the study of the derivative of the Bergman projection on \nthe upper half-plane $\\mathbb{H}$, since the main kernel in \\eqref{eq:Ngbasic2} appears\nthis way. Moreover, in $\\mathbb{H}$, we have access to both translation and dilatation \ninvariance. This is key to making the argument with boxes effective.\n\nSome clarifying words should perhaps be added regarding the boxes $(1-\\epsilon)Q$ compared\nwith $Q$. More precisely, how should the size reduction be made? It is easier to explain these\nmatters in the context of the upper half-plane. We work with the hyperbolic metric \n$\\mathrm{d} s_{\\mathbb{H}}(\\zeta)=(\\mathrm{Im} \\zeta)^{-1}|\\mathrm{d} \\zeta|$ and weighted area element \n$\\mathrm{d} A_{-1}(\\zeta)=(\\mathrm{Im}\\zeta)^{-1}\\mathrm{d} A(\\zeta)$. A prototypical box $Q$ of size $L$ would look like \nthis:\n\\[\nQ=\\big\\{\\zeta\\in\\mathbb{H}: \\,0<\\mathrm{Re}\\,\\zeta<L,\\;\\mathrm{e}^{-L}<\\mathrm{Im}\\,\\zeta<1\\big\\}.\n\\]\nThe Schwarzschild criterion gives\n\\begin{equation}\n\\bigg(\\frac{\\partial \\rho}{\\partial s}\\bigg)_{P}\\frac{ds}{dr} > 0\n\\label{eq:buoyancy}\n\\end{equation}\nas condition for convective instability, i.e., a blob that is displaced radially outward will\nfind itself in a medium of higher density and continue to rise to larger radii. 
Since\n$({\\partial \\rho}\/{\\partial s})_P=-\\rho^2 (\\partial T \/ \\partial P)_s<0$ (material\nis heated upon adiabatic compression), Equation~\\ref{eq:buoyancy} simply\nreads that \\emph{a radially decreasing entropy profile is convective unstable}.\n\nAn ideal gas has an entropy of\n\\begin{equation}\nS = \\nu R \\left( \\frac{3}{2} \\ln T - \\ln \\rho + C\\right)\n\\end{equation}\nwhere $\\nu$ is the number of moles, $R$ is the gas constant, and $C$ is a constant.\nIn astrophysical applications, it is customary \\citep[e.g.][]{cavagnolo2009} to use a definition\nof entropy that is related to the thermodynamic entropy by an operation of\nexponential and a constant offset,\n\\begin{equation}\nS = \\frac{kT}{n_e^{2\/3}},\n\\label{eq:entropy}\n\\end{equation}\nThe entropy $S$ defined by Equation~\\ref{eq:entropy} has\nunits\nof keV cm$^{2}$, and it is required to be radially increasing to maintain convective equilibrium.\nNumerical simulations\nindicate that entropy outside the core is predicted to increase with radius approximately \nas $r^{1.1}$ or $r^{1.2}$ \\citep{voit2005,tozzi2001}.\nIn Figure~\\ref{fig:entropy-profile} we show the radial profile of the entropy out to\nthe outer radius of 10 arcmin, with a significant decrease at large radii that indicates\nan incompatibility of the best-fit model with convective equilibrium. For comparison,\nwe also show the entropy profile measured using the modelling of the data\nout to only $r_{500}$, as described in Sec.~\\ref{sec:r500}. This entropy profile\nuses the shallower temperature profile of Figure~\\ref{fig:kT-0-330}, and its\nextrapolation to larger radii remains non-decreasing, i.e., marginally consistent\nwith convective equilibrium.\n\nThe Schwarzschild criterion \ndoes not apply in the presence of a magnetic field. 
For typical\nvalues of the thermodynamic quantities of the ICM, the electron and ion gyroradii are\nseveral orders of magnitude smaller than the mean free path for Coulomb collisions\n\\citep[e.g.][]{sarazin1988}, even for a magnetic field of order 1 $\\mu G$, and therefore\ndiffusion takes place primarily along field lines \\citep[e.g.][]{chandran2007}.\nThere is strong evidence of magnetic\nfields in the central regions of clusters \\citep[e.g., radio halos, ][]{venturi2008,cassano2006},\nthough it is not clear whether magnetic fields are ubiquitous\nnear the virial radius, as in the case of Abell~3376 \\citep{bagchi2006}.\nIn the presence of magnetic fields, \\cite{chandran2007} has shown that the\ncondition for convective instability is simply $dT\/dR<0$.\n\n \nThe \\it Chandra\\rm\\ data out to the virial radius therefore indicate\nthat the ICM is convectively unstable, regardless of the\npresence of a magnetic field. In fact, in the absence of magnetic\nfields near the virial radius, Figure~\\ref{fig:entropy-profile} shows that \\emph{Abell~1835}\\\nfails the standard Schwarzschild criterion, i.e., the entropy decreases with radius;\nin the presence of magnetic fields, the negative gradient in the temperature profile alone\nis sufficient for the onset of convective instability \n\\citep[e.g., as discussed by ][]{chandran2007}.\n Convective instabilities would carry hotter\ngas from the inner regions towards the outer region within a few sound crossing\ntimes. As shown by \\cite{sarazin1988}, the sound crossing time for a 10~keV\ngas is $\\sim 0.7$~Gyr for a 1~Mpc distance,\nand an unstable temperature gradient such as that of Figure~\\ref{fig:kt-0-600-fit}\nwould be flattened by convection within a few Gyrs.\nConvection could in principle also result in an additional pressure gradient\ndue to the flow of hot plasma to large radii, which can in turn help support the gas\nagainst gravitational forces. 
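For reference, the sound-crossing estimate quoted above can be reproduced directly (a back-of-the-envelope check; the adiabatic index $\gamma=5/3$ and mean molecular weight $\mu\simeq0.6$ are assumed values, not quantities given in the text):

```latex
c_s=\left(\frac{\gamma kT}{\mu m_p}\right)^{1/2}
\simeq 1.6\times 10^{3}\,\mathrm{km\,s^{-1}}
\quad \mbox{for } kT=10\,\mathrm{keV},
\qquad
t_{sc}=\frac{1\,\mathrm{Mpc}}{c_s}\simeq 0.6\,\mathrm{Gyr},
```

consistent with the $\sim 0.7$~Gyr figure quoted from \cite{sarazin1988}.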
\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=2.3in, angle=-90]{vikh_temp_0-330chain_extrapolated_to_600_90CI.ps}\n\\includegraphics[width=2.3in, angle=-90]{fgas_profile.ps}\n\\caption{\nTemperature and gas mass fraction profiles measured from a fit to the \\it Chandra\\rm\\ data out to 330\", and extrapolation of the\nbest-fit model out to 600\".}\n\\label{fig:kT-0-330}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=2.3in, angle=-90]{entropy_0-600.ps}\n\\includegraphics[width=2.3in, angle=-90]{entropy_0-330.ps}\n\\caption{\nDeprojected entropy profiles using the full \\it Chandra\\rm\\ data out to 600\" (left, see Section~\\ref{sec:hse}),\nand using only data out to $r_{500}$\\ (right, see Section~\\ref{sec:r500}).}\n\\label{fig:entropy-profile}\n\\end{figure*}\n\n\n\\section{Discussion and interpretation}\nIn this paper we have reported the detection of X-ray emission in \\emph{Abell~1835}\\ with \\it Chandra\\rm\\ that extends out to approximately\nthe cluster's virial radius. The emission can be explained by the presence of a cooler\nphase of the plasma that is dominant at large radii, possibly linked to the infall\nof gas from large-scale filamentary structures. 
We also investigate the effects of clumping of the gas \nat large radii, and conclude that in principle a radial gradient in the clumping factor of the hot ICM\ncan explain the apparent flattening of the entropy profile and the turn-over of the mass profile.\n\\subsection{Detection of X-ray emission out to the virial radius}\nThe detection of X-ray emission out to a radial distance of 10 arcmin, or approximately 2.4~Mpc,\nindicates the presence of diffuse gas out to the cluster's virial radius.\nThis is the first detection of gas out to the virial radius with \\it Chandra\\rm, matching\nother detections obtained with \\it Suzaku\\rm\\ for nearby clusters \n\\citep[e.g.][]{akamatsu2011,walker2012a,walker2012b,simionescu2011,burns2010,kawaharada2010,\nbautz2009,george2009}.\nDespite its higher background, \\it Chandra\\rm\\ provides a superior angular resolution to image and remove emission from unrelated sources.\nAs can be seen from Figure~\\ref{fig:a1835}, there are approximately 100 point-like sources that were automatically\ndetected and removed, and we were also able to identify two low-mass clusters that are likely associated with \\emph{Abell~1835}.\n\\it Chandra\\rm\\ therefore has the ability to constrain the emission of clusters to the virial radius, especially for higher-redshift\ncool-core clusters for which the \\it Suzaku\\rm\\ point-spread function would cause significant contamination from the \ncentral signal to large radii.\n\n\nIt is not easy to interpret the emission at the outskirts as an extension \nof the hot gas detected at radii $\\leq$~$r_{500}$. In fact, as shown in Section~\\ref{sec:hse}, the steepening of the\ntemperature profile is incompatible with the assumption of\nhydrostatic equilibrium at large radii. \nWe also showed in Section~\\ref{sec:entropy} that\nthe gas has a negative entropy gradient beyond this radius, rendering it convectively unstable. 
\nTherefore, if the temperature profile of Figure~\\ref{fig:kt-0-600-fit} originates from\na single phase of the ICM, convection would transport hotter gas towards the outskirts, flattening\nthe temperature profile within a few Gyrs. Cooling of the gas by thermal radiation cannot be\nresponsible for off-setting the heating by convection, since the cooling time\n ($t_{cool} \\sim kT^{1\/2} n_e^{-1}$) is longer at the outskirts than in the peak-temperature regions\ndue to the lower density.\n\n\\subsection{Warm-hot gas towards the cluster outskirts}\nA possible interpretation for the detection of emission near the virial radius and its\nsteep temperature profile is the presence of a separate phase at the cluster outskirts\nthat is not in hydrostatic equilibrium with the cluster's potential.\nIn this case, cooler gas may be the result of infall from filamentary structures that\nfeed gas into the cluster, and the temperature of this \\emph{warm-hot} gas may in fact be\nlower than that shown in Figure~\\ref{fig:kt-0-600-fit} \n(i.e., $kT \\sim 1.25$~keV for the region $\\geq 450$\") if\nthis gas lies in projection against the tail end of the hotter ICM.\n\nWe estimate the mass of this putative warm-hot gas assuming that all of the\nemission from the outermost region is from a uniform density gas\nseen in projection. This assumption may result in an overestimate\nof the emission measure; in fact, the extrapolation of the gas density profile \nin the hydrostatic or convective scenarios may yield a significant amount of\nemission in the last radial bin. 
\nWe were unable to perform a self-consistent modelling\nof the emission in the full radial range, since the low signal-to-noise\nratio does not allow a two-phase modelling in the last radial bin.\nIn this simple uniform density warm-hot gas scenario, \nthe gas is in a filamentary structure\nof length $L$ and area $A=\\pi(R_{out}^2-R_{in}^2)$, where\n$R_{out}=600$\" and $R_{in}=450\"$; this is the same model\nalso considered in \\cite{bonamente2005} for the cluster \\emph{Abell~S1101}.\nSince the length $L$ of the filament along the sightline is unknown,\nwe must either assume $L$ or the electron density $n_e$, and \nestimate the mass implied by the detected emission.\nThe emission integral for this region is proportional to\n\\begin{equation}\nK = \\frac{10^{-14}}{4 \\pi D_A^2 (1+z)^2} n_e^2 V,\n\\end{equation}\nwhere $K$ is measured in XSPEC from a fit to the spectrum, $D_A$ is\nthe angular distance in cm, $z$ is the cluster redshift, and \nthe volume is $V=A \\times L$. For this estimate we assume \nfor simplicity that\nthe mean atomic weights of hydrogen and of the electrons are\nthe same, $\\mu_e=\\mu_H$.\nUsing the best-fit spectral model with $kT=1.26\\pm0.16$ keV,\nwe measure $K=(1.05\\pm 0.13) \\times 10^{-4}$. If we assume a filament of\nlength $L=10$~Mpc, then the average density is $n_e=(2.4\\pm0.3)\\times 10^{-5}$~cm$^{-3}$,\nand the filament mass is $(4.6\\pm0.6) \\times 10^{13}$~$M_{\\odot}$.\nAlternatively, a more diffuse filament gas of $n_e=10^{-5}$~cm$^{-3}$\nwould require a filament of length $L=58\\pm8$~Mpc, with\na mass of $(1.1\\pm0.2)\\times 10^{14}$~$M_{\\odot}$, comparable to the\nentire hot gas mass within $r_{200}$. 
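The inversion of the emission integral can be sketched numerically. In the snippet below the angular-diameter distance ($D_A \\approx 825$~Mpc, inferred from the 10~arcmin~$\\approx 2.4$~Mpc scale) and the redshift $z \\approx 0.253$ are assumptions for illustration, and the mass is taken as $n_e m_H V$ following the $\\mu_e=\\mu_H$ simplification.

```python
import math

MPC = 3.086e24    # cm per Mpc
M_H = 1.673e-24   # hydrogen mass in g
M_SUN = 1.989e33  # g

# Assumed geometry for Abell 1835 (illustrative, not quoted in the text):
# 10 arcmin ~ 2.4 Mpc implies an angular-diameter distance D_A ~ 825 Mpc; z ~ 0.253.
D_A = 825.0 * MPC
Z = 0.253
R_OUT, R_IN = 2.4, 1.8                 # Mpc, projected radii of the 450"-600" annulus
AREA = math.pi * (R_OUT**2 - R_IN**2)  # Mpc^2

def filament_density_and_mass(K, length_mpc):
    """Invert K = 1e-14 n_e^2 V / (4 pi D_A^2 (1+z)^2), then M ~ n_e m_H V."""
    V = AREA * length_mpc * MPC**3     # cm^3
    n_e = math.sqrt(K * 4.0 * math.pi * D_A**2 * (1.0 + Z)**2 / (1e-14 * V))
    mass = n_e * M_H * V / M_SUN       # solar masses, mu_e = mu_H simplification
    return n_e, mass

n10, m10 = filament_density_and_mass(1.05e-4, 10.0)  # ~2.4e-5 cm^-3, ~4.7e13 Msun
n40, m40 = filament_density_and_mass(1.05e-4, 40.0)  # n_e halves, mass doubles
```

Quadrupling $L$ halves $n_e$ but doubles the mass, reproducing the $n_e \\propto L^{-1\/2}$ and $M \\propto L^{1\/2}$ scaling.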
A lower density gas\nyields a higher mass because, for a measured value of $K$,\nwe obtain $n_e \\propto L^{-1\/2}$, and therefore the mass is proportional to $L^{1\/2}$.\nFor comparison, the gas mass for this shell inferred from the standard\nanalysis, i.e., assuming that the gas is in the shell itself,\nis $\\sim 3\\times 10^{13}$~$M_{\\odot}$, as can also be seen from Table~\\ref{tab:vikh-masses}.\n\nIf the gas is cooler, then the mass budget would increase further.\nIn fact, the bulk of the emission from cooler gas falls outside of the \\it Chandra\\rm\\ bandpass,\nand for a fixed number of detected counts the required emission integral increases.\nWe illustrate this situation by fitting the annulus to an emission\nmodel with a fixed value of $kT=0.5$~keV, which results in a value\nof $K=(1.88\\pm 0.24) \\times 10^{-4}$ (the fit is significantly poorer, with\n$\\Delta \\chi^2=+10$ for one fewer degree of freedom). \nAccordingly, the filament mass estimates would be increased \napproximately by a factor of two. \n\n\nA warm-hot phase at $T\\leq 10^7$~K is expected to be a significant reservoir of baryons\nin the universe \\citep[e.g.][]{cen1999,dave2001}. Using the \\it ROSAT\\rm\\ soft X-ray \nPosition Sensitive Proportional Counter (PSPC) detector -- better suited to\ndetect the emission from sub-keV plasma -- we\nhave already reported \nthe detection of a large-scale halo of emission around the \\emph{Coma} cluster out to $\\sim$~5 Mpc, well beyond the\ncluster's virial radius \\citep{bonamente2003,bonamente2009}. 
\nIt is possible to speculate that the high mass of \\emph{Abell~1835}, one\nof the most luminous and massive clusters in the \\emph{Bright Cluster Survey} sample \\citep{ebeling1998},\nis responsible for the heating of the infalling gas to temperatures that make it\ndetectable by \\it Chandra\\rm, and that other massive clusters may therefore provide\nevidence of emission to the virial radius with the \\it Chandra\\rm\\ ACIS detectors. \nThe infall scenario is supported by the \\emph{Herschel} observations\nof \\cite{pereira2010}, who measure a galaxy velocity distribution for \\emph{Abell~1835}\\\nthat does not appear to decline at large radii as in most of the other clusters\nin their sample. A possible interpretation for their data is the presence of a \nsurrounding filamentary structure that is infalling into the cluster.\n\n\\subsection{Effects of gas clumping at large radii}\nMasses and entropy measured in this paper assume that the gas has a uniform density\nat each radius. To quantify the effect of departures from uniform density, we\n define the clumping factor $C$ \nas the ratio of density averages over a large region,\n\\begin{align}\nC & = \\frac{\\langle n_e^2 \\rangle}{\\langle n_e \\rangle^2}\n\\end{align}\nwith $C \\geqslant 1$.\nClumped gas emits more efficiently than gas of uniform\ndensity, \nand the same surface brightness $I$ results in a lower estimate for the gas density and mass,\n\\begin{equation}\nI \\propto \\int \\langle n_e^2 \\rangle \\, dl = \\int \\langle n_e \\rangle^2 C \\, dl,\n\\end{equation}\nwhere $l$ is a distance along the sightline.\nFrom Figure~\\ref{fig:entropy-profile} we see that the entropy drop from\napproximately 400\" to 600\" would be offset by a decrease in $n_e^{2\/3}$ by a factor\nof 3, or a decrease in $n_e$ by a factor of 5. 
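The arithmetic of this last step can be made explicit, using the entropy scaling $S \\propto kT\\, n_e^{-2\/3}$ at fixed temperature; only the factor-of-3 entropy drop is read off the profile, the rest is arithmetic.

```python
# Inferred density n_hat relates to the true density via <n_e^2> = C <n_e>^2,
# i.e. n_hat = sqrt(C) * n_true. Entropy S ~ kT * n_e^(-2/3) at fixed kT, so
# offsetting an entropy drop by a factor f needs the true n_e lower by f^(3/2).
f = 3.0                    # entropy drop from ~400" to 600", read off the profile
density_factor = f ** 1.5  # ~5.2: the "factor of 5" decrease in n_e
C = density_factor ** 2    # ~27, i.e. a clumping factor C ~ 25
```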
We therefore suggest that a clumping\nfactor of $C \\simeq 25$ at 600\" would in principle be able to provide\na flat entropy profile, and even higher clumping factors would provide\nan increasing entropy profile in better agreement with theory \\citep[e.g.][]{voit2005,tozzi2001}.\nNumerical simulations by \\cite{nagai2011} suggest values of the clumping factor\n$C \\leq 3$ near $r_{200}$, with significantly higher clumping possible at larger radii. \nUse of the \\cite{nagai2011} model in the analysis of a large sample of galaxy clusters by \\cite{eckert2012}\nresults in better agreement of observations with numerical simulations. \n\nClumping can also affect the measurement of hydrostatic masses.\nIn particular, gas with an increasing radial profile of the clumping factor\ncould result in a steeper gradient of the density profile, when compared with what is measured assuming\na uniform density. According to Equation~\\ref{eq:hse}, this \nwould result in larger estimates of the hydrostatic mass, in principle able to reduce or entirely\noffset the apparent decrease of $M(r)$ reported in Figure~\\ref{fig:mass-0-600}.\nWe therefore conclude that a radial increase in the clumping of the gas can in principle\naccount for the apparent decrease of the mass profile and of the entropy profile\nreported in this paper (Figures~\\ref{fig:mass-0-600} and \\ref{fig:entropy-profile}), and therefore\nit is a viable scenario to interpret our \\it Chandra\\rm\\ observations.\nClumping of the gas at large radii has also been suggested based on \\it Suzaku\\rm\\ observations\n\\citep[e.g.,][]{simionescu2011}.\n\n\\section{Conclusions}\nIn this paper we have reported the detection of emission from \\emph{Abell~1835}\\ with \\it Chandra\\rm\\\nout to the cluster's virial radius. The cluster's surface brightness\nis significantly above the background level out to a radius of\napproximately 10 arcminutes, which corresponds to $\\sim$2.4 Mpc at the\ncluster's redshift. 
We have investigated several sources of systematic\nerrors in the background subtraction process, and determined that the\nsignificance of the detection in the outer region (450-600\") is\n$\\geq 4.7$~$\\sigma$, and the emission cannot be explained\nby fluctuations in the background. Detection out to the virial\nradius is also implied by the \\it XMM-Newton\\rm\\ temperature profile\nreported by \\cite{snowden2008}.\n\nThe \\it Chandra\\rm\\ superior angular resolution made it straightforward to\nidentify and subtract sources of X-ray emission that are unrelated to the cluster. \nIn addition to a large number of point sources, we have identified X-ray emission\nfrom two low-mass clusters that were selected from the SDSS data,\nMAXBCG J210.31728+02.75364 \\citep{koester2007}\nand WHL J140031.8+025443 \\citep{wen2009}.\nThe two clusters have photometric and spectroscopic redshifts that make them\nlikely associated with \\emph{Abell~1835}. These are the only two\nSDSS-selected clusters that are in the vicinity of \\emph{Abell~1835}.\n\n\nThe outer regions of the \\emph{Abell~1835}\\ cluster have a sharp drop in the temperature\nprofile, a factor of about ten from the peak temperature. The sharp drop\nin temperature implies that the hot gas cannot be in hydrostatic equilibrium, and\nthat the hot gas would be convectively unstable. A possible scenario to\nexplain the observations is the presence of \\emph{warm-hot} gas\nnear the virial radius that is not in hydrostatic equilibrium with\nthe cluster's potential, and with a mass budget comparable to that\nof the entire ICM. 
The data are also consistent with an alternative scenario\nin which a significant clumping of the gas at large radii is responsible\nfor the apparent negative gradients of the mass and entropy profiles\nat large radii.\n\n\n\n\n\\bibliographystyle{mn2e}\n\\bibliographystyle{apj}\n\\input{ms.bbl}\n\n\n\n\\label{lastpage}\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe advent of the {\\it Einstein} observatory changed the belief that\nearly-type galaxies contain little interstellar gas by revealing hot\nX-ray emitting halos associated with many of them (e.g. Forman et\nal. 1979). Subsequent X-ray observations led to the conclusion that\nthese galaxies can retain large amounts (up to $\\sim 10 ^{11}\nM_{\\odot}$) of hot ($T \\sim 10^{7}$ K) interstellar medium\n(Forman, Jones \\& Tucker 1985; Trinchieri \\& Fabbiano 1985; Canizares,\nFabbiano \\& Trinchieri 1987).\n\n\\begin{figure*}\n\\begin{center}\n \\leavevmode\n \\epsfxsize 0.6\\hsize\n \\epsffile{xray_fig.eps}\n \\caption{Grey-scale {\\it ROSAT} HRI image of the core of the cluster\nAbell~2634. The image has been smoothed using a Gaussian kernel with a\ndispersion of 8 arcsec. The positions of the cluster galaxies with\nmeasured redshift are marked with crosses.} \n\\end{center}\n\\label{grayscale}\n\\end{figure*}\n\n\nHowever, this picture might be expected to be different for galaxies\nthat reside near the centres of rich clusters of galaxies, since their\nproperties must be affected by their dense environment. For example,\nwe might expect interstellar medium (ISM) gas to be stripped from the\ngalaxy by the ram pressure resulting from the passage of the galaxy\nthrough the intracluster medium (ICM) (Gunn \\& Gott 1972). Stripping\nof the ISM can also result from tidal interactions with other nearby\ngalaxies (Richstone 1975; Merritt 1983, 1984). 
The most dramatic and\nwell-studied example of a galaxy which appears to be in the process of\nbeing stripped of its ISM is the elliptical galaxy M86 in the Virgo\ncluster, which shows a `plume' of X-ray emission emanating from it\n(Forman et al. 1979; White et al. 1991; Rangarajan et al. 1995).\n\nIn addition to the mechanisms which remove the ISM of a galaxy, gas\ncan also be replenished. The gravitational pull of a galaxy attracts\nthe surrounding ICM. This gas ends up being concentrated in or behind\nthe galaxy, depending on the velocity of the galaxy relative to the\nICM (see, for example, Sakelliou, Merrifield \\& McHardy 1996). Stellar\nwinds can also replenish the hot gas in a galaxy's ISM.\n\nAll the processes mentioned above take place simultaneously. The\nrelative importance of each process depends on: the galaxies'\nvelocities; the local density of the ICM; the number density of\ngalaxies; their orbits in the cluster; and the gravitational potential\nof each galaxy. It is therefore {\\it a priori} difficult to say which\nmechanism dominates in the cores of rich clusters of galaxies, and\nhence whether cluster galaxies are surrounded by the extensive X-ray\nemitting halos that we see associated with galaxies in the field.\n\nUnfortunately, X-ray observations of rich clusters have generally not\nbeen of high enough quality to answer this question, since any\nemission from the galaxies is hard to detect against the high X-ray\nbackground produced by the cluster's ICM (Canizares \\& Blizzard 1991;\nVikhlinin, Forman \\& Jones 1994; Bechtold et al. 1983; Grebenev et\nal. 1995; Soltan \\& Fabricant 1990; Mahdavi et al. 1996). 
In the cases\nwhere galaxy X-ray emission has been reported, the studies have been\nrestricted to a few bright cluster galaxies, and it has not proved\npossible to investigate the general galaxy population in a\nstatistically complete manner.\n\nIn order to search for X-ray emission from galaxies in a moderately\nrich environment, we obtained a deep {\\it ROSAT} HRI observation of\nthe core of the rich cluster Abell~2634, which is a nearby (z=0.0312)\ncentrally-concentrated cluster of richness class I. In \\S2.1 we\ndescribe the analysis by which the X-ray emission from the galaxies in\nthis cluster was detected. In \\S2.2 we explore the properties of this\nX-ray emission, and show that the galaxies in this cluster lack the\nextensive gaseous halos of similar galaxies in poorer environments.\nIn \\S3, we show how this difference can be attributed to ram\npressure stripping.\n\n\n\\section{X-ray observations and analysis}\n\nThe core of Abell~2634 was observed with the {\\it ROSAT} HRI in two\npointings, in January and June 1995, for a total of $62.5\\,{\\rm\nksec}$. The analysis of these data was performed with the IRAF\/PROS\nsoftware.\n\nInspection of the emission from the cD galaxy and other bright X-ray\nsources in the images from the two separate observations indicates\nthat the two sets of observations do not register exactly and that a\ncorrection to the nominal {\\it ROSAT} pointing position is\nrequired. Therefore, the second set of observations was shifted by\n$\\sim$ 2.0 arcsec to the east and $\\sim$ 0.8 arcsec to the south; such\na displacement is consistent with typical {\\it ROSAT} pointing\nuncertainties (Briel et al. 1996). Both images were then registered\nwith the optical reference frame to better than an arcsecond. A\ngrey-scale image of the total exposure is shown in Fig. 1. 
The image\nhas been smoothed with a Gaussian kernel of 8 arcseconds dispersion.\nAt the distance of Abell~2634, $1\\,{\\rm arcsec}$ corresponds to\n$900\\,{\\rm pc}$.\\footnote{Here, as throughout this paper, we have\nadopted a Hubble constant of $H_{0}=50\\; {\\rm km\\:s^{-1}\\:Mpc^{-1}}$.}\n\n\\begin{table}\n \\caption{Bright Sources}\n \\begin{tabular}{cccc}\n\\hline \\hline\nSource & $\\alpha$(J2000) & $\\delta$(J2000) & ID\/notes \\\\\n & $^{\\rm h} \\; ^{\\rm m} \\; ^{\\rm s} $ & $\\degr \\; \\arcmin \\;\n\\arcsec$ & \\\\\n\\hline\n1 & 23 38 29.1 & 27 01 53.5 & cD galaxy \\\\\n2 & 23 37 56.1 & 27 11 31.3 & cluster \\\\\n3 & 23 39 01.6 & 27 05 35.9 & star \\\\\n4 & 23 39 00.5 & 27 00 27.9 & star\\\\\n5 & 23 38 31.7 & 27 00 30.5 & nothing \\\\\n6 & 23 38 41.5 & 26 48 04.1 & star \\\\\n7 & 23 38 19.8 & 26 56 41.5 & ? \\\\\n8 & 23 38 07.4 & 26 55 52.8 & star \\\\\n9 & 23 37 57.5 & 26 57 30.1 & galaxy ?\\\\\n10 & 23 37 45.3 & 26 57 53.1 & two objects \\\\\n11 & 23 37 26.2 & 27 08 14.6 & ? \\\\\n\n\\hline\n \\end{tabular}\n\\end{table}\n\nThis deep image of Abell~2634 reveals the large-scale X-ray emission\nfrom the hot ICM of the cluster and a few bright sources, which are\nnumbered on Fig.~1. Source 1 is the cD galaxy NGC~7720, located near\nthe centre of Abell~2634. It hosts the prototype wide-angle tailed\nradio source 3C~465 (e.g.~Eilek et al. 1984). Source 2 is a\nbackground cluster at a redshift of $cz \\simeq 37,000 \\ {\\rm km \\\ns^{-1}}$ (Pinkney et al. 1993; Scodeggio et al. 1995). For the rest\nof the X-ray bright sources, the Automatic Plate Measuring \nmachine, run by the Royal Greenwich Observatory in Cambridge, was used\nto obtain optical identifications. Table 1 gives the positions of\nthese sources as determined from the X-ray image, and the class of\ntheir optical counterparts. 
The position of source 7 coincides with a\nfaint object in the Palomar sky survey, but there is also a nearby\nstar, and source 11 does not seem to have a discernible optical\ncounterpart. All these sources were masked out in the subsequent\nanalysis. \n\nThe positions of galaxies that are members of Abell~2634 are also\nindicated on Fig.~1. Pinkney et al.\\ (1993) collected the redshifts\nof $\\sim$150 galaxies that are probable members of Abell~2634 (on the\nbasis that their redshifts lie in the range $6,000 < cz < 14,000\\,\n{\\rm km\\,s^{-1}}$), and Scodeggio et al. (1995) have increased the\nnumber of galaxies whose redshifts confirm that they are cluster\nmembers up to $\\sim 200$. The sample of redshifts is complete to a\nmagnitude limit of 16.5, and from this magnitude-limited sample we\nhave selected those galaxies that appear projected within a circle of\n15 arcmin radius, centered on the cD galaxy. This selection yields 62\ngalaxies, of which the vast majority are of type E and S0 -- only 10\nare classified as spirals or irregular. The positions of these\ngalaxies are taken from the CCD photometry of Pinkney (1995) and\nScodeggio et al. (1995), and are accurate to $\\sim 1\\,{\\rm arcsec}$.\nThey are marked as crosses on Fig.~1.\n\nInspection of Fig.~1 reveals several cases where the location of a\ngalaxy seems to coincide with an enhancement in the cluster's X-ray\nemission, and it is tempting to interpret such enhancements as the\nemission from the galaxy's ISM. However, it is also clear from Fig.~1\nthat the X-ray emission in this cluster contains significant\nsmall-scale fluctuations and non-uniformities. We must therefore\nconsider the possibility that the apparent associations between galaxy\nlocations and local excesses in the X-ray emission may be\nchance superpositions. 
We therefore now present a more objective\napproach to searching for the X-ray emission from cluster galaxies.\n\n\n\n\\subsection{Detection of the cluster galaxies}\n\nBefore adopting an approach to detecting the emission from cluster\ngalaxies, we must first have some notion as to how bright we might\nexpect the emission to appear in this deep HRI image. Previous X-ray\nobservations have shown that the X-ray luminosities of E and S0\ngalaxies in the 0.2-3.5 keV energy band range from $\\sim10^{39}$ to\n$\\sim 10^{42} \\ {\\rm erg \\ s^{-1}}$ (Kim, Fabbiano \\& Trinchieri\n1992a, b; Forman et al. 1985). These limits at the distance of\nAbell~2634 correspond to fluxes of $5 \\times 10^{-16}$ to $5 \\times\n10^{-13} \\ {\\rm erg \\ s^{-1} \\ cm^{-2}}$. We have used the PIMMS\nsoftware to convert these limits to count rates for the {\\it ROSAT}\nHRI detector. The emission from the galaxies was modeled by a\nRaymond-Smith plasma (Raymond \\& Smith 1977) with a temperature\n$kT=0.862$ keV and a metal abundance of 25\\% solar; these quantities\nare consistent with the values previously found from observations of\nearly-type galaxies (Kim et al. 1992a; Matsushita et al. 1994; Awaki\net al. 1994). The absorption by the galactic hydrogen was also taken\ninto account by using the column density given by Stark et al. (1992)\nfor the direction of Abell~2634 ($N_{\\rm H}= 4.94 \\times 10^{20} \\\n{\\rm cm^{-2}}$). These calculations predict that the $62.5\\,{\\rm\nksec}$ HRI observation of this cluster should yield somewhere between\n$\\sim1$ and $\\sim1200$ counts from each galaxy. Motivated by this\nprediction of a respectable, but not huge, number of counts per\ngalaxy, we set out to detect emission associated with cluster\ngalaxies.\n\nWe are trying to detect this fairly modest amount of emission against\nthe bright background of the ICM emission. 
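A rough version of the counts estimate above can be sketched as follows. The energy-to-count-rate conversion factor below is an assumed round number for illustration only; the values in the text were derived with PIMMS for the absorbed Raymond-Smith model described above.

```python
# Expected HRI counts in a 62.5 ks exposure for the quoted flux range.
# CONV (erg cm^-2 s^-1 per count s^-1) is an assumed illustrative value;
# the paper's numbers come from PIMMS with an absorbed Raymond-Smith model.
EXPOSURE_S = 62.5e3
CONV = 2.4e-11

def expected_counts(flux_cgs):
    return flux_cgs / CONV * EXPOSURE_S

lo = expected_counts(5e-16)  # of order one count for the faintest galaxies
hi = expected_counts(5e-13)  # of order a thousand counts for the brightest
```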
We therefore seek to\nimprove the statistics by stacking together the X-ray images in the\nvicinity of the 40 E and S0 galaxies marked in Fig.~1.\nFig. 2. presents a contour plot of the combined image, which covers a\nregion of 1 arcmin radius around the stacked galaxies. The centre of\nthe plot coincides with the optical centres of the individual\ngalaxies. Clearly, there appears to be X-ray emission associated with\nthe cluster galaxies, and it is centered at their optical positions.\nThis coincidence provides us with some confidence that the X-ray and\noptical frames are correctly registered. We have also constructed a\ncomposite brightness profile for the 40 galaxies by adding the\nunsmoothed counts detected in concentric annuli centered on each\ngalaxy. The width of each annulus in this profile was set at 6 arcsec\nand the local background, as measured in an annulus between 1.0 and\n2.0 arcmin around each galaxy, was subtracted. The resulting profile\nis presented in Fig. 3. Once again, the excess of emission in the\nvicinity of the cluster galaxies is apparent.\n\nIn order to assess the significance of this detection, we generated\n100 sets of simulated data from randomly selected points on the image.\nThe diffuse emission from the ICM varies systematically with radius,\nand so we might expect the probability that a galaxy is coincidentally\naligned with a clump in the ICM emission to vary systematically with\nradius. Further, the sensitivity of the HRI varies with radius, and\nso the detectability of the emission from a single galaxy will vary\nwith radius as well. We therefore constructed the simulated data sets\nby extracting counts from the HRI image at the same radii as the true\ngalaxy locations, but at randomized azimuthal angles. The mean\nprofile and the RMS fluctuations amongst the simulated data sets are\nshown in Fig.~3. 
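The azimuthal randomization scheme can be sketched with a toy version on a synthetic image; the image size, the flat noise background, and the galaxy positions below are all assumptions for illustration, not the actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy background-subtracted counts image: zero-mean noise stands in for the
# residual ICM fluctuations; all sizes and positions here are assumptions.
N = 512
image = rng.normal(0.0, 1.0, size=(N, N))
centre = np.array([N / 2.0, N / 2.0])

def counts_at(pos, half=3):
    """Sum the image in a small box around a (row, col) position."""
    r, c = int(pos[0]), int(pos[1])
    return image[r - half:r + half + 1, c - half:c + half + 1].sum()

# Galaxy positions -> cluster-centric radii (kept fixed in the simulations).
galaxies = rng.uniform(100.0, 400.0, size=(40, 2))
radii = np.hypot(*(galaxies - centre).T)

def simulated_total():
    """One fake data set: same radii as the real galaxies, random azimuths."""
    phi = rng.uniform(0.0, 2.0 * np.pi, size=radii.size)
    offsets = np.stack([radii * np.cos(phi), radii * np.sin(phi)], axis=1)
    return sum(counts_at(p) for p in centre + offsets)

sims = np.array([simulated_total() for _ in range(100)])
# The observed stacked counts would then be compared with sims.mean() +/- sims.std().
```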
As might be expected, the average number of counts\nin these random data sets is zero; the larger RMS error bars at small\nradii reflect the smaller sizes of these annuli. From a $\\chi^{2}$\ncomparison between the observed galaxy profile and the simulated\nprofile, we can conclude that there is less than 0.1\\% probability\nthat the apparent peak in the galaxy emission is produced by chance.\nThus, the detection of emission from the galaxies in Abell~2634 is\nhighly statistically significant.\n\n\\begin{figure}\n\\begin{center}\n \\leavevmode\n \\epsfxsize 0.9\\hsize\n \\epsffile{contour.ps}\n \\caption{Contour plot of the combined image of all the early-type\ngalaxies that belong to Abell~2634. The pixel size of the image is 2\narcsec and it has been smoothed with a Gaussian kernel of 2\npixels. The center of the plot coincides with the optical centres of\nthe galaxies. The contour lines are from 20 to 100 per cent the peak\nvalue and are spaced linearly in intervals of 5 per cent.}\n\\end{center}\n\\label{contour}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\begin{center}\n \\leavevmode\n \\epsfxsize 0.9\\hsize\n \\epsffile{40gal_simul.ps}\n \\caption{The combined surface brightness profile of all the\nearly-type galaxies (filled squares) normalized to one galaxy. Open squares\nrepresent the average profile from the simulations.} \n\\end{center}\n\\label{earlysimul}\n\\end{figure}\n\n\\subsection{Origin of the X-ray emission}\n\nAs mentioned in the introduction, early-type galaxies have been found\nto retain large amounts of hot gas, which extends far beyond the\noptical limits of the galaxies. 
X-ray binaries also contribute to the\ntotal emission, and they become more dominant in X-ray faint galaxies.\nWe might also expect some of the emission to originate from faint\nactive galactic nuclei (AGNs) in the cores of these galaxies.\nAlthough none of the galaxies in our sample has been reported as an\nactive galaxy, there is increasing dynamical evidence that the vast\nmajority of elliptical galaxies contain central massive black holes\n(van der Marel et al. 1997; Kormendy et al. 1996a, 1996b; for a\nreview Kormendy \\& Richstone 1995), and so we might expect some\ncontribution from low-level activity in such systems. We therefore\nnow see what constraints the observed X-ray properties of the galaxies\nin Abell~2634 can place on the origins of the emission.\n\n\n\\subsubsection{The extent of the X-ray emission}\n\nOne diagnostic of the origins of the X-ray emission is the measurement\nof its spatial extent. AGN emission should be unresolved by the HRI,\nwhile emission from X-ray binaries should be spread over a similar\nspatial scale as the optical emission, and halos of hot gas should be\nstill more extended.\n\n\\begin{figure*}\n\\begin{center}\n \\leavevmode\n \\epsfxsize 0.45\\hsize\n \\epsffile{cDplotno2.ps}\n \\epsfxsize 0.45\\hsize\n \\epsffile{s3plotno2.ps}\n\\end{center}\n\\begin{center}\n \\leavevmode \n \\epsfxsize 0.45\\hsize\n \\epsffile{s4plotno2.ps}\n \\epsfxsize 0.45\\hsize\n \\epsffile{s8plotno2.ps}\n\\end{center}\n\\caption{Surface brightness distribution of the bright sources in the\nHRI image (s1, s3, s4, and s8). Source 1 is the central cD\ngalaxy. The profile is fitted by the appropriate HRI PSF for the\ndistance of the point source from the centre of the image (dashed line)\nand a Gaussian (solid line). The calculated width ($\\sigma$) of the\nbest-fit Gaussian is also given.}\n\\label{sources}\n\\end{figure*}\n \nIn order to assess the spatial extent of the X-ray emission, we need\nto characterize the PSF in this HRI observation. 
The point sources\ndetected in these data are more extended than the model PSF for the\nHRI detector given by Briel et al. (1996) as is seen in Fig.~4, where\nfour of the sources are fitted by this model PSF (dashed line). This\ndiscrepancy can be attributed to residual errors in the reconstruction\nof {\\it ROSAT}'s attitude, which broaden the PSF in long integrations.\nWe have therefore empirically determined the PSF that is appropriate\nfor this observation by fitting the profiles of the point sources with\na Gaussian PSF model (Fig.~4, solid line). Only sources 1, 3, 4, 8 are\nused for the determination of the width of the Gaussian. Source 6 is\nvery elongated and cannot be represented by a symmetrical\nfunction. The mean dispersion of the best-fit model was found to be $(4.1\n\\pm 0.1)$ arcsec. All of the point sources detected in this image\nhave widths consistent with this value, and so there is no evidence\nthat the PSF varies with radius. We therefore adopt this PSF for the\nemission from all the galaxies in the observation.\n\n\nFigure~5 shows the comparison between the adopted PSF and the emission\nfrom the cluster galaxies. The emission appears to be more extended\nthan the PSF; fitting the data to the PSF yields a $\\chi^{2}$ value of\n14.2 with 9 degrees of freedom, which is marginally consistent with\nthe emission being unresolved. We can obtain a better fit by modeling\nthe radial profile of the emission using a Gaussian, which we convolve\nwith the PSF to model the observed profile. Fitting this model to the\nobservations, we find that the intrinsic width of the X-ray emission\nis $4.3^{+2.2}_{-2.8}$ arcsec. The best-fit model is also shown in\nFig.~5.\n\n\n\n\\begin{figure}\n\\begin{center}\n \\leavevmode\n \\epsfxsize 0.9\\hsize\n \\epsffile{40gal_gauss_2psfs.ps}\n\\caption{The combined surface brightness distribution of the\n40 early-type galaxies, normalized to one galaxy. 
The profile is fitted\nby the measured HRI PSF (dashed line) and the spatially-extended model \n(solid line).}\n\\end{center}\n\\label{earlyprof}\n\\end{figure}\n\nThe radius of the X-ray halos of early-type galaxies with optical\nluminosities comparable to those in this cluster has been shown to be\n$\\sim 20$ -- 60 kpc (e.g.~Fabbiano et al. 1992), with the lower values\ncharacterizing optically fainter galaxies. At the distance of\nAbell~2634 these values correspond to $\\sim 20$ -- 60 arcsec, much\nlarger than the upper limit of $\\sim$ 7 arcsec we found for the extent\nof the galactic X-ray emission. Thus, the X-ray emission from the\ncluster galaxies, although apparently extended, clearly does not\noriginate from the large halos of hot gas found around comparable\ngalaxies in poorer environments.\n\nOne possible explanation for the spatial extent of the emission from\nthese galaxies is that it could arise from errors in the adopted\npositions for the galaxies. Such errors would broaden the\ndistribution of X-rays when the data from different galaxies are\nco-added even if the individual sources are unresolved. However, the\nzero-point of the X-ray reference frame is well tied-down by\nthe detected point sources in the field. Further, the optical\nlocations of the galaxies come from CCD photometry with positional\nerrors of less than an arcsecond. It therefore cannot explain the\n$\\sim 4$ arcsec extent of the observed X-ray emission.\n\nWe therefore now turn to the extent of the X-ray emission that we\nmight expect from X-ray binaries. Nearly half of the early-type\ngalaxies that we use for our analysis have been imaged in the I-band\nby Scodeggio, Giovanelli \\& Haynes (1997). 
They have fitted the\noptical galaxy profile with a de Vaucouleurs law, and found a mean\nvalue for their effective radii of $\\sim$8 arcsec, with only 4\ngalaxies smaller than 3 arcsec and another 3 larger than 13 arcsec.\nThese values are directly comparable to the spatial extent of the\nX-ray emission derived above. Thus, it would appear that the\nobservations are consistent with what we would expect if the X-ray\nemission from the galaxies in Abell~2634 originates from X-ray\nbinaries in these systems, although we have not ruled out the\npossibility that some fraction of the emission comes from AGN.\n\n\\subsubsection{The luminosity of the X-ray emission}\n\nA further test of the origins of the X-ray emission in the cluster\ngalaxies comes from its luminosity. It has been found that the blue\nluminosities of galaxies correlate with their X-ray luminosities,\nwith the optically brighter galaxies being more luminous in X-rays\n(e.g.~Forman et al. 1985; Fabbiano et al. 1992). This correlation\nfor the early-type galaxies in the Virgo cluster is presented in\nFig. 6. The optical and X-ray luminosities of these galaxies are taken\nfrom Fabbiano et al. (1992). The line in this plot divides the\n$L_{\\rm B} - L_{\\rm X}$ plane into two distinct galaxy types (Fabbiano\n\\& Schweizer 1995). In addition to the differences in the ratio of\nX-ray-to-optical luminosities, galaxies in these two regions have been\nshown to possess different spectral properties. The spectra of the\nX-ray bright galaxies [group (I)] are well fitted by Raymond-Smith\nmodels of 1 keV temperature, and it is believed that these galaxies\nretain large amounts of hot ISM. 
In the spectra of the X-ray faint\ngalaxies of group (II), on the other hand, a hard component is\npresent; X-ray binaries are believed to be the major source of the\nX-rays in these galaxies.\n\n\n\\begin{figure}\n\\begin{center}\n \\leavevmode\n \\epsfxsize 0.9\\hsize\n \\epsffile{fabb.ps}\n\\caption{X-ray luminosity versus blue luminosity for the early-type\ngalaxies. The line delineates the boundary between the locations of\ngalaxies where the hot ISM makes a significant contribution to the\ntotal emission [region (I)], and the locations of galaxies where the\nentire emission can be ascribed to X-ray binaries [region (II)]. The\nlocations in this plane of galaxies that belong to the Virgo cluster\nare marked by filled circles. The average properties of galaxies in\nAbell~2634 lying in\ndifferent optical luminosity ranges are indicated by crosses.}\n\\end{center} \n\\label{LxLB}\n\\end{figure}\n\n\nIn order to see where the galaxies of Abell~2634 lie in this plot, we\nmust calculate their optical and X-ray luminosities. Butcher \\&\nOemler (1985) have measured {\\it J} and {\\it F} optical magnitudes for\na large number of galaxies in Abell~2634. We have converted these\nmagnitudes to the blue band by applying the colour relations provided\nby Oemler (1974) and Butcher \\& Oemler (1985), correcting for galactic\nextinction, and using the appropriate K-correction. We find that the\nabsolute blue magnitudes of the galaxies in the HRI image lie in the\nrange from $-18.5$ to $-21.4$. We have divided these galaxies into\nthree groups according to their optical luminosities: group A\n$(0.4-2.0) \\times 10^{10} \\ {\\rm L_{\\sun}}$ with 17 galaxies; group B\n$(2.1-3.7) \\times 10^{10} \\ {\\rm L_{\\sun}}$ with 8 galaxies; and group\nC $(3.8-5.5) \\times 10^{10} \\ {\\rm L_{\\sun}}$ with only 2 galaxies.\n\nThe X-ray luminosity of each group was obtained by repeating the\nanalysis of \\S2.1 using just the galaxies in each sub-sample. 
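The magnitude-to-luminosity conversion above can be sanity-checked numerically. The sketch below assumes a solar absolute blue magnitude of $M_{B,\sun} \approx 5.48$ (a standard value, not quoted in the text); with it, the quoted range $-18.5$ to $-21.4$ reproduces the $(0.4-5.5)\times 10^{10}\,{\rm L_{\sun}}$ span of groups A--C.

```python
# Cross-check of the absolute-magnitude -> blue-luminosity conversion:
#   L_B / L_sun = 10^(-0.4 * (M_B - M_B_sun))
# M_B_SUN is an assumed standard value, not taken from the paper.
M_B_SUN = 5.48

def blue_luminosity(abs_mag_B):
    """Blue-band luminosity in units of L_sun for absolute magnitude M_B."""
    return 10 ** (-0.4 * (abs_mag_B - M_B_SUN))

# The HRI-field galaxies span M_B = -18.5 (faint end) to -21.4 (bright end)
faint = blue_luminosity(-18.5)   # ~0.4e10 L_sun, lower edge of group A
bright = blue_luminosity(-21.4)  # ~5.5e10 L_sun, upper edge of group C
```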
Using\nthe PIMMS software we converted the observed count rate from the HRI\nimage to X-ray luminosity in the energy range 0.2-3.5 keV. The thermal\nmodel used for the conversion is the same as that used by Fabbiano et\nal. (1992) to derive the plot shown in Fig. 6, and is discussed in\n\\S2.1. \n\nThe resulting values for optical and X-ray luminosities in each\nsub-sample are shown in Fig.~6. The horizontal error bars represent\nthe width of each optical luminosity bin and the vertical ones show\nthe errors in the measured X-ray luminosities. This plot shows that\nthe galaxies in our sample follow the established correlation: the\noptically-brighter galaxies are also more luminous in the X-rays. The\nexistence of this correlation also implies that the detected X-ray\nflux from the galaxies in Abell~2634 is not dominated by a few bright\ngalaxies, but that the optically-fainter galaxies also contribute to\nthe detected X-ray emission.\n\nThe galaxies in Abell~2634 probe the fainter end of the $L_{\\rm B}$ --\n$L_{\\rm X}$ relation as covered by Virgo galaxies. It should be borne\nin mind that there is a bias in the Virgo data which means that the\ntwo data sets in Fig.~6 are not strictly comparable. At the lower\nflux levels, a large number of Virgo galaxies have not been detected\nin X-rays, and so this plot preferentially picks out any X-ray-bright\nVirgo galaxies. For the Abell~2634 data, on the other hand, the\nco-addition of data from all the galaxies in a complete sample means\nthat the data points represent a true average flux. However, it is\nclear that the X-ray fluxes from galaxies in these two clusters are\ncomparable.\n\nThe similarity between the X-ray properties of galaxies in these two\nclusters is of particular interest because their environments differ\nsignificantly. The galaxies from the Virgo cluster shown in Fig.~6\nlie in a region between 360 kpc and 2 Mpc from the centre of the\ncluster. 
Recent {\\it ROSAT} PSPC observations have shown that the\nnumber density of the hot ICM of this cluster drops from $3 \\times\n10^{-4}$ to $3 \\times 10^{-5} \\ {\\rm cm^{-3}}$ in this region\n(Nulsen \\& B\\\"{o}hringer 1995). The galaxies from Abell~2634 that\nhave gone into this plot lie in the inner 0.8 Mpc of Abell~2634, and\nin this region the number density of the ICM varies between $1\n\\times 10^{-3}$ and $2 \\times 10^{-4} \\ {\\rm cm^{-3}}$ (Sakelliou \\&\nMerrifield 1997). Thus, the galaxies in the current analysis come from\na region in which the intracluster gas density is, on average, an order of\nmagnitude higher than that surrounding the Virgo cluster galaxies.\n\nThe location of the galaxies in region (II) of Fig.~6 adds weight to\nthe tentative conclusion of the previous section that the X-ray\nemission from these galaxies can be explained by their X-ray binary\npopulations, since any significant ISM contribution would place them in\nregion (I). Similarly, the low X-ray fluxes of these galaxies leave\nlittle room for a significant contribution from weak AGN. If the\nX-ray binary populations are comparable to those assumed by Fabbiano\n\\& Schweizer (1995) in calculating the dividing line in Fig.~6, then\nessentially all the X-ray emission from these galaxies can be\nattributed to the X-ray binaries. Thus, any average AGN emission\nbrighter than a few times $10^{40} \\ {\\rm erg \\ s^{-1}}$ can be\nexcluded, as such emission would also move the galaxies into region\n(I) of the $L_{\\rm B}$ -- $L_{\\rm X}$ plane.\n\n\n\\subsection{Spiral galaxies}\n\n\\begin{figure}\n\\begin{center}\n \\leavevmode\n \\epsfxsize 0.9\\hsize\n \\epsffile{spiral_sim.ps}\n \\caption{The combined surface brightness profile of the spiral \ngalaxies that lie in the field of view of the HRI (filled squares)\nnormalized to one galaxy. 
Open squares \nrepresent the average profile from the simulations.} \n\\end{center}\n\\label{spiral}\n\\end{figure}\n\nHaving discussed the X-ray properties of the early-type galaxies in\nAbell~2634 at some length, we now turn briefly to the properties of\nthe spiral galaxies in the cluster. Abell~2634 is a reasonably rich\nsystem, and we therefore do not expect to find many spiral galaxies\nwithin it. Indeed, only 7 of the 62 galaxies whose redshifts place\nthem at the distance of Abell~2634, and which lie within the field of\nthe HRI, have been classified as spirals. The statistics are\ncorrespondingly poor when the X-ray emission around these galaxies is\nco-added: the combined profile is shown in Fig.~7, together with the\nresults from the control simulations (see \\S 2.1 for details). It is\nclear from this figure that the spirals have not been detected in this\nobservation, and a $\\chi^2$ fit confirms this impression.\n\nThe failure to detect these galaxies is not surprising. Not only are\nthere relatively few of them, but their X-ray luminosities are lower\nthan those of early-type galaxies. In the {\\it Einstein} energy band\n(0.2 - 3.5 keV), their luminosities have been found to lie in the\nrange $\\sim 10^{38}$ to $\\sim 10^{41} \\ {\\rm erg \\ s^{-1}}$ (Fabbiano\n1989). Modeling this emission using a Raymond-Smith model with a\nhigher temperature than for the early-type galaxies, as appropriate\nfor spiral galaxies (Kim et al. 1992a), we find that the expected\ncount rate for these galaxies is a factor of $\\sim 40$ lower than for\nthe ellipticals in the cluster. It is therefore unsurprising that we\nfail to detect the small number of spiral galaxies present in the\ncluster. \n\n\n\n\\section{Discussion}\n\nIn this paper, we have detected the X-ray emission from the normal\nelliptical galaxies in Abell~2634. 
The limited spatial extent of this\nemission coupled with its low luminosity is consistent with it\noriginating from normal X-ray binaries in the galaxies' stellar\npopulations. These galaxies do not seem to have the extended\nhot ISM found around galaxies that reside in poorer cluster\nenvironments. We therefore now discuss whether this difference can be\nunderstood in terms of the physical processes outlined in the\nintroduction.\n\nIntuitively, the simplest explanation for the absence of an extensive\nhalo around a cluster galaxy is that it has been removed by ram\npressure stripping as the galaxy travels through the ICM. A simple\ncriterion for the efficiency of this process can be obtained by\ncomparing the gravitational force that holds the gas within the\ngalaxy to the force due to the ram pressure, which tries to remove it\n(Gunn \\& Gott 1972).\n\nThe gravitational force is given by:\n\\begin{equation}\nF_{\\rm GR} \\sim G \\ \\frac{M_{\\rm gal} \\ M_{\\rm gas}}{R_{\\rm gal}^{2}}\n\\end{equation}\nwhere $M_{\\rm gal}$ is the total mass of the galaxy, $M_{\\rm gas}$ is\nthe mass of the X-ray emitting gas, and $R_{\\rm gal}$ is the radius of\nthe galaxy's X-ray halo. For typical values for the masses of the\ngalaxy and the gas of $10^{12} \\ M_{\\sun}$ (Forman et al.\\ 1985) and\n$5 \\times 10^{9} \\ M_{\\sun}$ (e.g.~Canizares et al.\\ 1987)\nrespectively, and a mean value for $R_{\\rm gal}$ of $40 \\ {\\rm kpc}$\n(Canizares et al. 
1986), which is a representative value for galaxies\nof the same optical luminosity as the galaxies in Abell~2634,\nequation~(1) implies that $F_{\\rm GR} \\sim 1 \\times 10^{30} \\ {\\rm\nN}$.\n \nThe force due to ram pressure is described by:\n\\begin{equation}\nF_{\\rm RP}=\\rho_{\\rm ICM} \\ v_{\\rm gal}^{2} \\ \\pi R_{\\rm gal}^{2} = \\mu\n\\ m_{\\rm p} \\ n\\ v_{\\rm gal}^{2} \\ \\pi R_{\\rm gal}^{2}\n\\end{equation}\nwhere $\\rho_{\\rm ICM}$ is the density of the ICM, $\\mu$ is the mean\nmolecular weight, $m_{\\rm p}$ is the proton mass, $n$ is the number\ndensity of the ICM, and $v_{\\rm gal}$ is the galaxy velocity. From\nthe velocity dispersion profile of Abell~2634 presented by den Hartog\n\\& Katgert (1996) we find that the velocity dispersion, $\\sigma_{\\rm\nv}$, in the inner 15 arcmin of this system is $ \\approx 710 \\ {\\rm km\n\\ s^{-1}}$. Assuming an isotropic velocity field, the characteristic\nthree-dimensional velocity of each galaxy is hence $v_{\\rm gal}=\\surd\n3 \\ \\sigma_{\\rm v} \\approx 1230 \\ {\\rm km \\ s^{-1}}$. The number\ndensity of the ICM in the same inner region has been derived from\nrecent {\\it ROSAT} PSPC data (Schindler \\& Prieto 1997) and this HRI\nobservation (Sakelliou \\& Merrifield 1997), and is found to vary from\n$1 \\times 10^{-3} \\ {\\rm cm^{-3}}$ down to $2 \\times 10^{-4} \\ {\\rm\ncm^{-3}}$, consistent with previous {\\it Einstein} observations (Jones\n\\& Forman 1984; Eilek et al. 1984). Inserting these values into\nequation~(2), we find $F_{\\rm RP} \\sim (1 - 10) \\times 10^{30} \\ {\\rm\nN}$.\n\nThus, the ram pressure force exerted on the galaxies in Abell~2634 is\nfound to be larger than the force of gravity, and so we might expect\nram pressure stripping to be an effective mechanism for removing the\nISM from these galaxies. In poorer environments, the density of the\nICM is likely to be at least a factor of ten lower, and the velocities\nof galaxies will be a factor of $\\sim 3$ smaller. 
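As a sketch of the arithmetic behind equations (1) and (2), the following reproduces the quoted order-of-magnitude forces; the gravitational constant, solar mass, proton mass, and a mean molecular weight of $\mu \approx 0.6$ for the ionized ICM are assumed standard values not given in the text.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m
M_P = 1.673e-27        # kg, proton mass
MU = 0.6               # assumed mean molecular weight of the ionized ICM

M_gal = 1e12 * M_SUN   # total galaxy mass (Forman et al. 1985)
M_gas = 5e9 * M_SUN    # X-ray emitting gas mass (Canizares et al. 1987)
R_gal = 40 * KPC       # radius of the X-ray halo

# Equation (1): gravitational restoring force, ~1e30 N
F_GR = G * M_gal * M_gas / R_gal**2

# Equation (2): ram-pressure force for the quoted ICM density range
v_gal = math.sqrt(3) * 710e3             # m/s, isotropic 3D galaxy velocity

def ram_pressure_force(n_cm3):
    """F_RP = mu m_p n v^2 pi R^2 for an ICM number density in cm^-3."""
    n_m3 = n_cm3 * 1e6                   # cm^-3 -> m^-3
    return MU * M_P * n_m3 * v_gal**2 * math.pi * R_gal**2

F_RP_lo = ram_pressure_force(2e-4)       # outer region, ~1e30 N
F_RP_hi = ram_pressure_force(1e-3)       # inner region, ~1e31 N
```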
We might therefore\nexpect $F_{\\rm RP}$ to be a factor of $\\sim 100$ lower in poor\nenvironments. Since such a change would make $F_{\\rm RP} < F_{\\rm\nGR}$, it is not surprising that galaxies in poor environments manage\nto retain their extensive halos.\n\nThe absence of extensive X-ray halos around the galaxies in Abell~2634\nimplies that ram pressure stripping dominates the processes of\naccretion and stellar mass loss which can replenish the ISM. By\ncarrying out similar deep X-ray observations of clusters spanning a\nwide range of ICM properties, it will be interesting to discover more\nprecisely what sets of physical conditions can lead to the efficient\nISM stripping that we have witnessed in Abell~2634.\n\n \n\n\\section*{ACKNOWLEDGEMENTS} \n \nWe are indebted to the referee, Alastair Edge, for a very insightful\nreport on the first incarnation of this paper. We thank Jason Pinkney\nfor providing us with the positions and redshifts of the galaxies and\nRob Olling for helpful discussions. This research has made use of the\nNASA\/IPAC Extragalactic Database (NED) which is operated by the Jet\nPropulsion Laboratory, California Institute of Technology, under\ncontract with the National Aeronautics and Space Administration. Much\nof the analysis was performed using {\\sc iraf}, which is distributed\nby NOAO, using computing resources provided by STARLINK. MRM is\nsupported by a PPARC Advanced Fellowship (B\/94\/AF\/1840).\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\noindent \n\n\n In Si-based semiconductor technology Sb is considered an important \ndopant for its role in the development of field effect transistors \nand infrared detectors \\cite{temp}. Ion implantation is a useful \ntechnique for fabricating such devices as it produces buried layers \nwith well-defined interfaces, expanding the possibility of designing \nnovel structures. 
The increased density in VLSI circuits also makes \nthe technological applications of ion implantation, especially \nin the MeV energy range, increasingly important. MeV implantation, however, \ncan also produce severe modifications in the material depending on \nthe nature and the energy of the impinging ion, and the implantation \ndose \\cite{tam1}.\nExtensive usage of ion implantation in device fabrication and the \ncontinued miniaturization of device structures has brought the \nissue of surface modifications, via ion implantations, to the \nforefront. However, the factors responsible for such modifications \nand the surface morphology after ion implantation, have received \nlittle attention \\cite{car}.\n\n The atomic force microscope (AFM) is a very effective tool for \nexamining surface modifications and surface structures. However,\nthere are very few studies in the literature that have investigated \nthe morphological changes of the ion implanted surfaces by AFM. \nFurthermore, most of these surface studies are performed after keV \nimplantations \\cite{cou,pia,wan}, or at low fluences for individual \ncascade studies \\cite{wil}. Surface modifications after high energy,\n100 MeV, ion irradiation have also been investigated \\cite{sin}. \nHowever, the influence of MeV ion implantation \non surface topography remains poorly understood. In the present \nstudy we have made a detailed investigation of Si(100) surfaces after \n1.5~MeV Sb implantation. The technique of AFM has been applied to \nunderstand the modification in roughness and morphology of silicon \nsurfaces upon ion implantation. We also investigate here the formation\n of nano-sized defect zones on Si(100) surfaces after MeV implantations.\n The results of a shape transition in these nanostructures, from being\n elliptical at low fluences to becoming deformed circular at high fluences,\n will also be discussed here. 
The experimental procedure is described in section~2\n and the results are discussed in section~3. Conclusions are presented in section~4.\n \n\\section {Experimental}\n\\noindent \n\nA mirror-polished (100)-oriented Si single crystal (p-type) \nwafer was used in the\npresent study. The samples were implanted at room temperature\nwith a scanned beam of 1.5 MeV Sb$^{2+}$ ions at various fluences\nranging from 1$\\times 10^{11}$ to 5$\\times10^{14} ions\/cm^2$.\n The implantations were performed with\nthe samples oriented 7$^o$ off-normal to the incident beam to\navoid channeling effects. \n\nAFM Nanoscope E and Nanoscope III from Veeco were used to image\nthe implanted silicon sample surfaces. Images have been acquired\nin contact and tapping modes. Images ranging from 0.2 to 10 $\\mu{m}$ square \nwere obtained.\n\n\n\\section {Results and Discussion}\n\nFigure~1 shows \nthe $1 \\times 1\\mu{m^2}$ and $200 \\times 200 {nm^2}$ \n3D-images of the virgin silicon surface. It is observed that the\n virgin Si surface is smooth. 1.5 MeV implantation was carried out at \n various fluences and several $1 \\times 1\\mu{m^2}$ and $200 \\times 200 {nm^2}$ \nimages were taken at all the fluences. The $1 \\times 1\\mu{m^2}$ images \nwere utilized for measuring the rms surface roughness of Si(100) surfaces\n after implantation.\nThe average roughness at each fluence is plotted \nin Figure~2. The rms roughness for a virgin Si(100) surface, \nmeasured to be 0.234~nm, is also marked. It can be clearly \nseen from Fig.~2 that the rms surface roughness exhibits \nthree prominent behaviors as a function of fluence. For \nlow fluences, up to $1\\times10^{13} ions\/cm^{2}$, the \nroughness is small and does not increase much compared to \nthe virgin surface roughness. Beyond this fluence an enhanced \nsurface roughness, increasing at a much steeper rate, is observed. 
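For reference, the rms roughness quoted from the $1 \times 1\mu{m^2}$ images is simply the standard deviation of the AFM height map. A minimal sketch follows, with a synthetic random surface standing in for real AFM data (any tilt or plane correction applied to real scans is omitted here):

```python
import numpy as np

def rms_roughness(z):
    """Rq = sqrt(<(z - <z>)^2>) for a 2-D array of surface heights (nm)."""
    return np.sqrt(np.mean((z - z.mean()) ** 2))

# Synthetic stand-in for an AFM height map whose true rms equals the
# virgin-surface value of 0.234 nm quoted in the text.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 0.234, size=(512, 512))   # heights in nm
rq = rms_roughness(z)                         # ~0.234 nm
```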
\nThis trend continues up to the fluence of \n$1\\times10^{14} ions\/cm^{2}$ where a high roughness of 0.296~nm \nis measured. A saturation in surface roughness with a slight \ndecrease in the roughness is observed beyond this fluence. \nThe decrease in surface roughness, at \n$5\\times10^{14} ions\/cm^{2}$, seems reasonable in view of the high \nlevel of amorphicity at this dose \\cite{sdey2}, as beyond a certain \nhigh amorphicity, further higher levels of amorphization should \ntend to make the surface more homogeneous. A similar decrease \nin surface roughness with increasing fluence, beyond a critical \nfluence, has been observed for keV implantations of P and As \nin amorphous films \\cite{edr}.\n\n\n Our earlier RBS\/C and Raman scattering results \n\\cite{sdey2,sdey3} show that Si lattice disorder also displays\nthree similar behaviours as a function of ion fluence. Initially low lattice \ndamage, due to simple defects, is seen up to the fluence of \n$1\\times10^{13}ions\/cm^{2}$. The disorder becomes larger with the\nonset of the crystalline\/amorphous (c\/a) transition in the Si-bulk \nat $1\\times10^{13}ions\/cm^{2}$. Finally the disorder \nsaturates with the Si-lattice as well as Si-surface becoming \namorphised at $5\\times10^{15}ions\/cm^{2}$.\nThe roughness on the Si\nsurface will be determined by several roughening and smoothening\nprocesses that occur on an ion-implanted surface. Nuclear energy loss\neffects are also crucial. In addition, lattice disorder and the\nassociated stress will also be important in the evolution of the\nion implanted surface.\n\n\n High resolution $200\\times200nm^{2}$ \nimages of the Si-surfaces were acquired for all the fluences\nand are shown in Figure~3 for two representative Sb fluences\nof 1$\\times10^{13}$ and 5$\\times10^{14}ions\/cm^2$. 
\nThe images of the Si surface acquired up to the fluence of \n1$\\times10^{12}ions\/cm^2$ (not shown) are similar to the \nvirgin surface (of Fig.~1b) and their surface roughness \nis also similar (Fig.~2). However, after a \nfluence of 1$\\times10^{13}ions\/cm^2$, several nanostructures\ncan be seen on the Si-surface (see Fig.~3a). Fig.~4a is the same \nas Fig.~3a and shows the approximate outlines of some of the \nnanostructures. The nanostructures represent the damage due to \nion implantation and are roughly of elliptical \nshape. For a quantitative analysis of \nthese nanostructures, the two axial lengths \nwere measured and the mean lengths of the minor and the major \naxes are found to be 11.6 and 23.0~nm respectively. The mean lengths \nof the two axes and the mean areas of the surface features \nare tabulated in Table~1 for various incident ion fluences. \n\nFor a Sb fluence of 5$\\times10^{13}ions\/cm^2$, although the \nsilicon surface is again found to be decorated with the \nelliptical nanostructures, the features have expanded\nalong both the axial directions (with the mean lengths \nof axes being 14.5 and 26.1~nm respectively). The average \narea of the nanostructures at this stage is calculated to be \n$297~nm^2$, which is about 41\\% higher than that at \n1$\\times10^{13}ions\/cm^2$. For a fluence of \n1$\\times10^{14}ions\/cm^2$, the area of these nanostructures\nfurther inflates to $325\\pm31~nm^2$. Although the length of \nthe minor axis has not changed much at this fluence compared \nto that at 5$\\times10^{13}ions\/cm^2$, the major axis is elongated \nand has an average value of 31.6~nm. Up to this stage the \neccentricity of the elliptical structures, for all fluences \nis found to be $\\sim$ 0.85$\\pm$ 0.4. The eccentricity of the \nelliptical structures, at each fluence is listed in Table~1. 
\nInterestingly, after a fluence of 5$\\times10^{14}ions\/cm^2$, the \nsurface structures undergo a shape transition with the \nnanostructures having axial lengths of $30.1\\pm4.4~nm$ and \n$30.7\\pm2.4~nm$, respectively (see Fig.~3b). Fig.~4b is the same \nas Fig.~3b and shows the approximate outlines of some of the \nnanostructures. The nanostructures have become much bigger in \nsize and appear somewhat circular in shape. The nanostructures \nare not fully circular and have eccentricity $\\sim$ 0.19$\\pm$ 0.05 \n(the eccentricity of a circle is 0). However, the eccentricity of these \nnanostructures is much reduced compared to those at lower fluences \n(where eccentricity $\\sim$ 0.85$\\pm$ 0.4). Hence, we refer to these \nnanostructures as {\\it approximately circular}. An explosion in \nsize ($\\sim$ 120\\%) of these features compared to that at \n1$\\times10^{14}ions\/cm^2$ suggests a tremendous modification in \nsurface morphology at this stage. Our results are in contrast to \nthe keV implantation study of Sb in Si where, for doses lower \nthan 1$\\times10^{14}ions\/cm^2$, no change in the surface topography \nwas observed \\cite{cou}.\n\nThe random arrival of ions on the surface constitutes the stochastic\nsurface roughening. Surface diffusion, viscous flow, and surface sputtering\netc. contribute towards the smoothening of the surface \\cite{eklund}.\nThe mechanism for the formation of surface damage is also postulated as a\nresult of cascade collisions due to nuclear energy loss ($S_n$). In the \npresent study also, $S_n$ seems to be the dominating factor in the creation of\nthe nanostructures after Sb implantation. In addition, several factors such as the\nc\/a transition in the Si lattice, the strain in the surface and in bulk, defect \nand disorder in the medium etc. may also be responsible for the \nstructure formation at the surface. For Sb implantation in Si we observe \nthe formation of nanostructures at the Si(100) surface only after the \nfluence of 1$\\times10^{13}ions\/cm^2$. 
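The quoted eccentricities and area growth follow directly from the mean axial lengths in Table~1, treating each feature as an ellipse with the measured values as full axis lengths (an assumption; the text does not state whether semi- or full axes are tabulated, but the ratio-based quantities below are unaffected by that choice):

```python
import math

def ellipse(minor, major):
    """Area (nm^2) and eccentricity of an ellipse from its full axial lengths."""
    area = math.pi * minor * major / 4.0          # pi * a * b with semi-axes
    ecc = math.sqrt(1.0 - (minor / major) ** 2)   # 0 for a circle
    return area, ecc

a13, e13 = ellipse(11.6, 23.0)       # 1e13 ions/cm^2, elongated
a5e13, _ = ellipse(14.5, 26.1)       # 5e13 ions/cm^2, ~297 nm^2
a5e14, e5e14 = ellipse(30.1, 30.7)   # 5e14 ions/cm^2, near-circular

growth_41 = a5e13 / a13 - 1.0        # ~41% growth from 1e13 to 5e13
growth_120 = a5e14 / 325.0 - 1.0     # ~120% vs the quoted 325 nm^2 at 1e14
```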
These nanostructures inflate in size \nwith increasing fluence. The size inflation may also be related to the increased \ndisorder \\cite{sdey2,sdey3} in the Si lattice. The shape transition of \nnanostructures, from being elliptical at lower fluence to deformed circular at \n5$\\times10^{14} ions\/cm^2$, may be caused by the increase in the density\nof the electronic excitations. Our earlier studies \\cite{sdey2,sdey3} \nshow that the amorphization of Si-surface at this stage also leads to stress\nrelaxations on the ion implanted surface.\n\n\n\\section{Summary and conclusions}\n\\noindent\n\nIn the present study we have investigated the modifications in \nthe morphology of the Si(100) surfaces after 1.5~MeV Sb implantation. \nWe observe the presence of nano-sized defect zones on the Si surfaces \nfor the Sb fluences of 1$\\times10^{13} ions\/cm^2$ and higher.\nThese nanostructures are elliptical in shape and their size \nincreases with fluence. We observe an abrupt\nincrease in size of nanostructures accompanied by a shape\ntransition after the fluence of 5$\\times10^{14}ions\/cm^2$. \nThe nanostructures become approximately circular at this stage. \nWe have also investigated the modifications in the surface\nroughness of the ion implanted Si surfaces and find that\nsurface roughness demonstrates three different stages as a function \nof fluence. \n\n\\section{Acknowledgments}\n\\noindent\n\nThis work is partly supported by ONR grant no. N00014-97-1-0991.\nWe would like to thank A.M. Srivastava for very useful comments \nand suggestions. We would also like to thank Puhup Manas\nfor his help with the figures.\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nVarious types of oscillating stars reside in binary systems and therefore a precise determination of their system and stellar parameters can be performed. 
In particular the combined photometric-spectroscopic analysis of eclipsing binaries (EBs hereafter) leads to the determination of absolute masses and radii of the components in a direct way \\citep[e.g.][]{2013A&A...557A..79L, 2013A&A...556A.138F}. Thanks to spectral disentangling techniques \\citep[e.g.][]{1994A&A...281..286S, 1995A&AS..114..393H, 2001LNP...573..269I, 2019A&A...623A..31S}, faint components giving rise to a contribution of a few per cent can be detected in high-resolution spectra \\citep[e.g.][]{2013A&A...557A..79L, 2014MNRAS.438.3093T, 2014MNRAS.443.3068B}. Such techniques ensure the determination of precise (up to one per cent) model-independent dynamical masses of stars, which can further be confronted with asteroseismic values provided at least one of the binary components pulsates. This also means that the results can be used to calibrate the asteroseismic mass determination that becomes more and more important, in particular for planet hosting stars \\citep[e.g.][]{2007ApJ...670L..37H, 2008JPhCS.118a2016H, 2012A&A...543A..98H}.\n\nAlgol-type systems (Algols, hereafter) are semi-detached, interacting EBs consisting of a main-sequence star of spectral type B-A (primary, hereafter) and an evolved F-K type companion (secondary, hereafter). As a consequence of the above-mentioned facts, studying pulsating Algols from complementary spectroscopic and photometric data provides a valuable test of stellar evolutionary models. The group of oscillating eclipsing Algol stars (oEAs, hereafter) \\citep{2002ASPC..259...96M} consists of eclipsing Algols with mass transfer where the mass-accreting primary shows $\\delta$\\,Sct-like oscillations. These stars are extraordinary objects for asteroseismic studies because they allow us to investigate short-term dynamical stellar evolution during mass-transfer episodes, most probably caused by the magnetic activity cycle of the less massive secondary. 
Basic principles of the interaction between the magnetic cycle of the cool secondary, the occurrence of rapid mass-transfer episodes, the dynamical behaviour of the system, and the excitation of different non-radial pulsation (NRP hereafter) modes of the mass-gaining primary can be studied in great detail. \\citet{2018MNRAS.475.4745M} showed for the first time that mass transfer and accretion influences amplitudes and frequencies of NRP modes of the oEA star RZ\\,Cas, where amplitude changes are caused by the sensitivity of the mode selection mechanism to conditions in the outer envelope \\citep[e.g.][]{1998A&A...333..141P} and frequency variations by the acceleration of the outer layers by mass and angular momentum transfer.\n\nThe details of angular momentum exchange in Algols determining their evolution are still not completely understood. This is in particular valid for systems like the so-called R\\,CMa stars \\citep[e.g.][]{2011MNRAS.418.1764B, 2013A&A...557A..79L, 2018A&A...615A.131L} that include companions of extremely low masses. But also for the oEA star RZ\\,Cas, calculations showed that its actual configuration cannot be explained when assuming a purely conservative mass-transfer scenario in the past \\citep{2008A&A...486..919M}. In several Algols, such as RZ\\,Cas \\citep{2008A&A...480..247L} or TW Dra \\citep{2008A&A...489..321Z, 2010AJ....139.1327T}, orbital period variations were observed that can be attributed to periods of rapid mass exchange. The gas stream hits the equatorial zones of the atmosphere of the pulsating star, transfers an essential amount of angular momentum, and forces the acceleration of its outermost surface layers, thus causing strong differential rotation. 
While rotation alters the frequencies of the individual axisymmetric modes \\citep[e.g.][]{1981ApJ...244..299S, 2008ApJ...679.1499L}, it also lifts the degeneracy in frequency for the non-axisymmetric modes as observed for some $\\beta$\\,Cep variables \\citep[e.g.][]{2004A&A...415..241A, 2005MNRAS.360..619J}. This means that NRP modes are sensitive to the acceleration of the surface layers and that it is possible to probe the acceleration via the rotational splitting effect, as suggested by \\citet{2018MNRAS.475.4745M} for RZ\\,Cas. The study of the corresponding frequency shifts, together with the direct measurement of changes in the projected equatorial velocity ($v\\sin{i}$, hereafter) of the primary from its spectral line profiles leads to an estimation of the amount of matter transferred to the primary. The results can be compared to those obtained from 3D hydrodynamical calculations, assuming different rates of mass transfer \\citep[e.g.][]{2007ASPC..370..194M}, and finally explain the kind of mass and angular momentum transfer of the Algols.\n\nOne further advantage of oEAs is that the so-called spatial filtration or eclipse mapping effect, occurring as a result of the obscuration of parts of the stellar disc of the oscillating primary by the secondary during eclipses, simplifies the identification of the pulsation modes. The effect was predicted and found in photometric \\citep[e.g.][]{2003ASPC..292..369G, 2004A&A...419.1015M, 2005ApJ...634..602R} and spectroscopic \\citep{2018A&A...615A.131L} observations. The dynamic eclipse mapping method was introduced by \\citet{2011MNRAS.416.1601B} with the aim of NRP mode identification by reconstructing the surface intensity patterns on EBs. 
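The rotational-splitting diagnostic mentioned above can be sketched with the standard first-order expression $\nu_{nlm} = \nu_{nl} + m(1-C_{nl})\,\Omega/2\pi$; the Ledoux constant $C_{nl}$ and the rotation period used below are illustrative assumptions, not values from this paper.

```python
def split_frequency(nu0_cd, m, p_rot_days, c_nl=0.0):
    """First-order rotationally split frequency (in c/d) of the m-component
    of a mode with unperturbed frequency nu0_cd, for a star with rotation
    period p_rot_days; c_nl is the Ledoux constant (~0 for high-order p modes)."""
    return nu0_cd + m * (1.0 - c_nl) * (1.0 / p_rot_days)

# Illustration: a 64.2 c/d mode on a star with an assumed ~1.2 d rotation
# period. Spinning up the surface layers shortens p_rot and widens the
# m = +/-1 splitting, which is what probes the transferred angular momentum.
f_plus = split_frequency(64.2, +1, 1.2)
f_minus = split_frequency(64.2, -1, 1.2)
```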
With modern methods such as the least-squares deconvolution (LSD) technique \\citep{1997MNRAS.291..658D} and the pixel-by-pixel method of the FAMIAS program \\citep{2008CoAst.157..387Z}, high-$l$-degree NRP modes can also be detected and identified from the signal-to-noise ratio (S\/N hereafter) enhanced line profiles; a unique identification however is difficult \\citep[e.g.][]{2009CoAst.159...45L}. \n\nRZ\\,Cas (spectral type A3\\,V\\,+\\,K0\\,IV) is a short-period ($P$\\,=\\,1.1953\\,d) Algol and one of the best studied oEA stars. A partial eclipse is observed during primary minimum \\citep{1994AJ....107.1141N}. The primary was found by \\citet{1998IBVS.4581....1O, 2001AJ....122..418O} to exhibit short-period light variability with a dominant oscillation mode of a frequency of 64.2\\,c\\,d$^{-1}$. This finding was later confirmed from dedicated multi-site photometric campaigns by both \\citet{2003ASPC..292..113M} and \\citet{2004MNRAS.347.1317R}. The latter authors obtained simultaneous Str\\\"omgren uvby light curves of RZ\\,Cas. These authors present a detailed photometric analysis for both binarity and pulsation, deriving WD \\citep{1971ApJ...166..605W} solutions in the four mentioned passbands as well as absolute parameters of both components, and confirming the pulsational behaviour of the primary component as found by \\citet{1998IBVS.4581....1O}. We use the radii and flux ratios between the components derived by \\citet{2004MNRAS.347.1317R} as a reference in our spectroscopic investigation. A comprehensive overview on the observations and analysis of RZ\\,Cas can be found in \\citet{2018MNRAS.475.4745M}. \n\nStarting our spectroscopic investigation of RZ\\,Cas in 2001, we were the first to detect rapid oscillations in its spectra \\citep{2004A&A...413..293L}, and later on in spectra taken in 2006 \\citep{2008A&A...480..247L, 2009A&A...504..991T}. 
From the different amplitudes of the Rossiter-McLaughlin effect \\citep[][RME hereafter]{1924ApJ....60...15R, 1924ApJ....60...22M} observed during the primary eclipses in different seasons and the modelling of line profiles over the full orbital cycle using the Shellspec07\\_inverse program, we deduced that RZ\\,Cas was in an active phase of mass transfer in 2001, whereas in 2006 it was in a quiet state. To model the surface intensity distribution of the secondary of RZ\\,Cas, we had to include a large cool spot facing the primary, presumably originating from a cooling mechanism by the enthalpy transport via the inner Lagrangian point as suggested by \\citet{1994PASJ...46..613U}. Comparing the rapid oscillations found in the radial velocities (RVs hereafter) from 2001 and 2006 also with those derived from the light curves of RZ\\,Cas taken over many years \\citep[see][]{2018MNRAS.475.4745M}, we found that the NRP pattern of RZ\\,Cas changed from season to season. Different NRP modes have been excited with different amplitudes in different years and also frequency variations of single modes were observed. \\citet{2018MNRAS.475.4745M} suggested that these frequency variations could be caused by a temporary acceleration of the outer layers of the primary owing to angular momentum exchange by mass-transfer effects. \n\nAfter observing RZ\\,Cas in 2001 and 2006, we took new time series of high-resolution spectra in 2008 and 2009. The fact that we found a typical timescale of about nine years from the behaviour of the pulsation amplitudes but also from light-curve analysis \\citep[see][]{2018MNRAS.475.4745M} forced us to start a spectroscopic monitoring of the star covering the years 2013 to 2017. 
We now investigate the complete data set with the aim of using the spectra and the observed variations in RVs and line profiles (LPV hereafter) to deduce stellar and system parameters, check for temporally variable NRP patterns, and try to correlate all observed variations with the occurrence of quiet and active phases of the Algol system. Observations are described in Sect.\\,\\ref{Sect2}. The spectra taken with the HERMES spectrograph are used for a detailed analysis with the aim of deriving precise atmospheric parameters for both components of RZ\\,Cas (Sect.\\,\\ref{Sect3}). The extraction of RVs and the calculation of mean, high-S\/N line profiles with the newly developed LSDbinary program are described in Sect.\\,\\ref{Sect4}. Using different methods, we measure the projected equatorial rotation velocity of the primary (Sect.\\,\\ref{Sect5}). In Sect.\\,\\ref{Sect6} we use the O-C values collected from the literature to compute the orbital period variations of RZ\\,Cas over the last decades. We check for non-Keplerian effects in the orbital RV curves in Sect.\\,\\ref{Sect7.2} and try to model them using the PHOEBE \\citep{2005ApJ...628..426P} program. The results are discussed in Sect.\\,\\ref{Sect8}, followed by concluding remarks in Sect.\\,\\ref{Sect9}.\n\nOur investigation of NRP is based on high-frequency oscillations in the RVs and in the LPV. Applied methods and results will be described in a forthcoming article (Paper\\,II) and discussed together with the results presented in this work.\n\n\\begin{table} \\centering\n \\tabcolsep 1.8mm\n \\caption{Journal of observations listing the instrument, its spectral resolving power, year of observation, and mean Julian date. 
The last four columns give the number of spectra, total time span of observations in days, number of observed nights, and number of groups of observations.}\\label{Tab01}\n \\begin{tabular}{lclcrrrr}\n \\toprule\n Source & $R$ & Year & mean JD & $s$ & $t$ & $n$ & $g$\\\\\n \\midrule\n TCES & 32\\,000 & 2001 & 2\\,452\\,190 & 962 & 15 &13&1\\\\\n TCES & 32\\,000 & 2006 & 2\\,453\\,800 & 517 & 150 &7 &3\\\\\n TCES & 32\\,000 & 2008 & 2\\,454\\,717 & 94 & 27 &5 &2\\\\\n HERMES & 85\\,000 & 2009 & 2\\,455\\,156 & 228 & 5 &5 &1\\\\\n TCES & 32\\,000 & 2013 & 2\\,456\\,600 & 835 & 157 &21&3\\\\\n TCES & 32\\,000 & 2014 & 2\\,456\\,938 & 696 & 13 &8 &2\\\\ \n TCES & 58\\,000 & 2015 & 2\\,457\\,300 & 998 & 152 &31&4\\\\\n TCES & 58\\,000 & 2016 & 2\\,457\\,647 & 586 & 14 &10&1\\\\\n TCES & 58\\,000 & 2017 & 2\\,458\\,097 & ~~43 & 4 &3 &1\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\section{Observations}\\label{Sect2}\n\nSpectra were taken over a total time span of 16 years with the TCES spectrograph at the 2 m Alfred Jensch Telescope of the Th\\\"uringer Landessternwarte (TLS) Tautenburg and over one year with the HERMES spectrograph \\citep{2011A&A...526A..69R} at the 1.25 m Mercator Telescope on La Palma. The TCES instrument is an echelle spectrograph in Coude focus and covers the wavelength range 4450-7550\\,\\AA. A 2015 upgrade resulted in improved efficiency so that this spectrograph could be used with higher spectral resolution (narrower entrance slit of one instead of two arcsec width). Table\\,\\ref{Tab01} gives the journal of observations. \n\nThe TCES spectra were reduced using standard MIDAS packages for Echelle spectrum reduction and HERMES spectra were reduced using the HERMES spectrum reduction pipeline. We used our own routines for the normalisation of the spectra to the local continuum. 
Instrumental shifts were corrected using an additional calibration based on a large number of telluric O$_2$ absorption lines.\n\n\\section{Spectrum analysis}\\label{Sect3}\n\n\\subsection{Spectra}\n\nThe HERMES spectra taken in 2009 comprise the H$_\\epsilon$ to H$_\\alpha$ range, whereas the TLS spectra only include H$_\\beta$ and H$_\\alpha$. Moreover, the HERMES spectra provide the higher spectral resolution. That is why we decided to use them for spectrum analysis. We were able to decompose the spectra into the spectra of the components using the KOREL program \\citep{1995A&AS..114..393H}. Analysing the decomposed spectra, we faced problems with the continuum normalisation. As a result of the very faint contribution of the secondary, small deviations of the continuum of the composite spectrum from the true continuum lead to large deviations in the continuum of the decomposed spectrum of the secondary. A first normalisation was applied during spectrum reduction by fitting higher-order polynomials to nodes assumed to represent the local continuum in each of the extracted echelle orders. This approach is not accurate enough when the contribution of the secondary is as faint as in the present case. Corrections to the local continuum have to be applied during spectrum analysis by a comparison with the continua of synthetic spectra. The spectra are then renormalised by multiplying them with the local ratio of the continuum of the synthetic spectrum to that of the reduced spectrum. The spectrum decomposition with KOREL, on the other hand, is based on the Fourier transform and introduces undulations in the continuum that are additive and have to be subtracted. This introduces an ambiguity between these additive corrections and the multiplicative corrections described above that we could not resolve with the required accuracy. Instead, we used the average of nine composite HERMES spectra of RZ\\,Cas taken at maximum separation of the components and having about the same RV. 
In this case, we have to apply multiplicative continuum corrections only, for which we used spline functions. Because the signal of the secondary gets fainter for smaller wavelengths and because of the occurrence of telluric lines in the red part of the spectra we limited the spectral range to 4000-5550\\,\\AA, including the three Balmer lines H$_\\beta$, H$_\\gamma$, and H$_\\delta$. \n\n\\subsection{Methods}\n\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{Fig01.png}\n \\caption{Continuum flux ratios between the components vs. wavelength. The dotted lines are calculated from synthetic continuum spectra with $T_{\\rm eff}$$_1$ of 8700\\,K and $T_{\\rm eff}$$_2$ of 3800\\,K to 5000\\,K. The best agreement with the flux ratio from photometry (red line) is obtained for $T_{\\rm eff}$$_2$\\,=\\,4400\\,K (black solid line). The upper green line represents $T_{\\rm eff}$$_1$\\,=\\,9000\\,K, $T_{\\rm eff}$$_2$\\,=\\,4500\\,K, the lower green line for $T_{\\rm eff}$$_1$\\,=\\,8400\\,K, $T_{\\rm eff}$$_2$\\,=\\,4300\\,K.}\n \\label{Fig01}\n\\end{figure}\n\n\\begin{table*}\n \\tabcolsep 4mm\n \\caption{Atmospheric parameters of primary and secondary of RZ\\,Cas derived with multiple methods.}\\label{Tab02}\n \\begin{tabular}{lllllll}\n \\toprule\n & \\multicolumn{2}{c}{Method a)} & \\multicolumn{2}{c}{Method b)} & \\multicolumn{2}{c}{Method c)}\\\\\n\\midrule\n$T_{\\rm eff}$\\ (K) & ~8650$\\pm$60 & ~4860$_{-150}^{+190}$ & ~8643$\\pm$57 & ~4474$\\pm$83 & ~8635$\\pm$49 & ~4800$_{-120}^{+130}$ \\vspace{1mm}\\\\\n$\\log{g}$\\ (dex) & ~~4.41$\\pm$0.09 & ~~3.7 fix & ~~4.42$\\pm$0.07 & ~~3.7 fix & ~~4.41$\\pm$0.06 & ~~3.7 fix \\vspace{1mm}\\\\\n$v_{\\rm turb}$\\ (km\\,s$^{-1}$)& ~~3.60$_{-0.15}^{+0.32}$ & ~~1.42$\\pm$0.80 & ~~~3.61$_{-0.13}^{+0.17}$ & ~~1.03$_{-0.96}^{+0.63}$ & ~~3.59$\\pm$0.13 & ~~1.83$\\pm$0.75 \\vspace{1mm}\\\\ \n$[$Fe\/H$]$ & $-$0.43$_{-0.06}^{+0.01}$ & $-$0.50$\\pm$0.20 & $-$0.42$\\pm$0.03 & $-$0.49$\\pm$0.20 & $-$0.43$\\pm$0.02 & $-$0.38$\\pm$0.18 
\\vspace{1mm}\\\\\n$[$C\/H$]$ & $-$0.82$_{-0.20}^{+0.14}$ & $-$0.53$_{-0.36}^{+0.27}$& $-$0.80$_{-0.18}^{+0.13}$ & $-$0.63$_{-0.42}^{+0.29}$& $-$0.82$_{-0.18}^{+0.13}$ & $-$0.24$_{-0.46}^{+0.18}$\\vspace{1mm}\\\\\n$v\\sin{i}$\\ (km\\,s$^{-1}$)& 65.40$\\pm$0.95 & ~~~84.5$_{-7.9}^{+9.3}$ & 65.65$\\pm$0.93 & ~~~92.3$_{-9}^{+12}$ & 65.74$\\pm$0.88 & ~~~81.5$_{-7.6}^{+8.7}$ \\vspace{1mm}\\\\\n$F_2\/F_1$ & \\multicolumn{2}{c}{free} & \\multicolumn{2}{c}{free} & \\multicolumn{2}{c}{taken from photometry} \\\\\n$(R_2\/R_1)^{\\rm calc}$ & \\multicolumn{2}{c}{0.926$\\pm$0.008} & \\multicolumn{2}{c}{~~~~~~~1.17 fixed} & \\multicolumn{2}{c}{0.870} \\\\\n$(R_2\/R_1)^{\\rm adj}$ & \\multicolumn{2}{c}{0.780} & \\multicolumn{2}{c}{1.065} & \\multicolumn{2}{c}{0.870} \\\\\nrms (line depth) & \\multicolumn{2}{c}{0.004499} & \\multicolumn{2}{c}{0.004696} & \\multicolumn{2}{c}{0.004525} \\\\ \n \\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\\begin{figure*}[hbt!]\n \\includegraphics[width=\\linewidth]{Fig02.png}\n \\caption{Results of spectrum analysis using method a) (cf. Sect.\\,\\ref{Sect3.3}, Table\\,\\ref{Tab02}), shown for the H$_{\\delta}$-H$_{\\gamma}$ region (left) and the region around the Mg\\,I\\,b triplet (right). First row: Observed, continuum adjusted composite spectrum (black), best-fitting synthetic spectrum (red), and shifted difference spectrum (green). The inverse of the applied continuum correction is shown in blue. Second row: The same for the secondary. Third row: Flux ratio of the secondary to primary from photometry (black) from spectrum analysis based on $R_2\/R_1$=0.926 (red) and assuming the adjusted ratio of 0.78 (green).}\n \\label{Fig02}\n \\vspace{4mm}\n \\includegraphics[width=\\linewidth]{Fig03.png}\n \\caption{As Fig.\\,\\ref{Fig02}, but using method b). 
Third row: Flux ratio from photometry (black) and from spectrum analysis based on $R_2\/R_1$=1.17 (red) and on the calculated ratio of 1.065 (green).}\n \\label{Fig03}\n \\vspace{4mm}\n \\includegraphics[width=\\linewidth]{Fig04.png}\n \\caption{As Fig.\\,\\ref{Fig02} but using method c). Third row: Flux ratio from photometry (black) from spectrum analysis based on the photometric flux ratio assuming $R_2\/R_1=1.17$ (red) and on the adjusted ratio of $R_2\/R_1=0.87$ (green). Bottom row: Ratio of the photometric flux ratio to the flux ratio obtained from spectrum analysis based on $R_2\/R_1=1.17$.}\n \\label{Fig04}\n\\end{figure*}\n\nWe used the GSSP program \\citep{2015A&A...581A.129T} to derive the atmospheric parameters of the components of RZ\\,Cas. It is based on the spectrum synthesis method and performs a grid search in stellar parameters. Synthetic spectra were computed with SynthV \\citep{1996ASPC..108..198T}, based on a library of atmosphere models computed with LLmodels \\citep{2004A&A...428..993S} for the hot primary and on MARCS models \\citep{2008A&A...486..951G} for the cool secondary. Atomic data were taken from the VALD database \\citep{2000BaltA...9..590K}.\n\nBesides the atmospheric parameters, GSSP also has to solve for the a priori unknown flux ratio of the components. In the following, we use the continuum flux ratio of the secondary (component\\,2) to primary (component\\,1) for comparison. It is interpolated from the $uvby$ luminosities provided by \\citet{2004MNRAS.347.1317R} and shown by the red line in Fig.\\,\\ref{Fig01}. To get an impression of the influence of the $T_{\\rm eff}$\\ of primary and secondary on the resulting flux ratios with wavelength, we computed synthetic continuum spectra for different $T_{\\rm eff}$\\ of the components, assuming $\\log{g}$$_2$\\,=\\,3.7 \\citep{2004MNRAS.347.1317R}, and $\\log{g}$$_1$\\,=\\,4.4 and [Fe\/H]=$-$0.42 as derived in Sect.\\,\\ref{Sect3.3}. 
Synthetic flux ratios were computed from the ratio of the continuum spectra assuming a radii ratio of $R_2\/R_1$\\,=\\,1.17, also taken from \\cite{2004MNRAS.347.1317R}. The best representative theoretical curve is shown by the black line in Fig.\\,\\ref{Fig01} and corresponds to $T_{\\rm eff1}$\\,=\\,8700\\,K, $T_{\\rm eff2}$\\,=\\,4400\\,K. From the green lines, we see that a variation of $T_{\\rm eff}$$_2$ by 100\\,K requires a variation of $T_{\\rm eff}$$_1$ by 300\\,K to approximately fit the photometric flux ratio.\n\nWe applied three different methods (labelled a, b, and c in what follows) to determine the atmospheric parameters together with the radii ratio and continuum flux ratio of the secondary to primary.\n\na) We use the standard method of the GSSP program that takes the wavelength dependence of the flux ratio from the ratio of the synthetic continuum spectra calculated with SynthV and scales it with the square of the radii ratio. The latter is obtained from comparing the observed with the synthetic normalised spectra on a grid of atmospheric parameters. To obtain the atmospheric parameters together with the radii ratio, we minimise\n\\begin{equation}\n \\chi^2 = \\sum_\\lambda\\left(\\frac{\\displaystyle o(\\lambda)-s(\\lambda)}{\\displaystyle\\sigma}\\right)^2,\n \\label{GSSP1}\n\\end{equation}\n where $o(\\lambda)$ and $s(\\lambda)$ are the observed and synthetic composite spectra, respectively, and $\\sigma$ is the estimated mean error of $o(\\lambda)$. 
The synthetic spectrum is computed from\n\\begin{equation}\n s(\\lambda) = \\frac{\\displaystyle s_1(\\lambda,v_1)+V_F(\\lambda)\\,s_2(\\lambda,v_2)}{\\displaystyle 1+V_F(\\lambda)},\n \\label{GSSP2}\n\\end{equation}\nwhere $s_1$ and $s_2$ are the synthetic spectra of the components shifted by their RVs $v_1, v_2$ (all spectra in line depths), and the flux ratio\n\\begin{equation}\n V_F(\\lambda) = \\left(\\frac{\\displaystyle R_2}{\\displaystyle R_1}\\right)^2\\frac{\\displaystyle C_2(\\lambda)}{\\displaystyle C_1(\\lambda)}\\label{GSSP3} \n\\end{equation}\n is the product of the squared radii ratio and the ratio of the continuum fluxes $C_2$ and $C_1$ per unit surface, determined with SynthV.\n\nb) We use Equations\\,\\ref{GSSP1} to \\ref{GSSP3} as before, but fix the radii ratio to 1.17 as obtained from the $uvby$ photometry.\n\nc) We replace the flux ratio $V_F$ computed so far from Eq.\\,\\ref{GSSP3} by the flux ratio obtained from the $uvby$ photometry. No synthetic spectra are needed in this case. The best-fitting radii ratio can then be determined from Eq.\\,\\ref{GSSP3} using a simple least-squares fit.\n\nBecause of the known degeneracy between various free parameters, we reduced their number. The spectrum of the secondary, with its small contribution to the total light, suffers from low S\/N, and we fixed its $\\log{g}$\\ to 3.7, as derived from the light curve analysis \\citep{2004MNRAS.347.1317R}. Furthermore, we used the same values for its elemental abundances as derived for the primary, except for the iron and carbon abundances, which we determined separately. For the primary, we basically know the photometric value of $\\log{g}$\\,=\\,4.33(3) from \\citet{2004MNRAS.347.1317R}. Because of the good S\/N of the spectra and the fact that the Balmer line shapes are very sensitive to $\\log{g}$\\ in this temperature range, we decided to treat $\\log{g}$\\ of the primary as a free parameter and to compare the result with the photometric value. 
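To make the role of the flux ratio in Eqs.\,\ref{GSSP1} to \ref{GSSP3} concrete, the composite-spectrum model and the least-squares step of method c) can be sketched as follows. This is a toy illustration with synthetic arrays, not the GSSP implementation; all numerical inputs are assumptions chosen only for demonstration:

```python
import numpy as np

# Toy inputs (illustrative only, not GSSP data): wavelength grid, component
# spectra in line depths, and a synthetic continuum flux ratio per unit surface.
lam = np.linspace(4000.0, 5550.0, 2000)                # wavelength (Angstrom)
s1 = 0.3 * np.exp(-0.5 * ((lam - 4861.0) / 5.0) ** 2)  # primary, line depths
s2 = 0.5 * np.exp(-0.5 * ((lam - 5170.0) / 3.0) ** 2)  # secondary, line depths
C2_over_C1 = 0.02 + 1.5e-5 * (lam - 4000.0)            # C2(lambda) / C1(lambda)

def composite(r21):
    """Composite spectrum in line depths, Eqs. (2)-(3): V_F = (R2/R1)^2 C2/C1."""
    VF = r21 ** 2 * C2_over_C1
    return (s1 + VF * s2) / (1.0 + VF)

# Method c): given a photometric flux ratio V_F^phot(lambda), the squared radii
# ratio follows from a linear least-squares fit of x * C2/C1 to V_F^phot.
VF_phot = 1.17 ** 2 * C2_over_C1        # here constructed to correspond to 1.17
r21_sq = np.sum(VF_phot * C2_over_C1) / np.sum(C2_over_C1 ** 2)
r21 = np.sqrt(r21_sq)                   # recovers 1.17 by construction
```

The closed-form least-squares solution for the single scale factor $(R_2/R_1)^2$ is what makes method c) cheap: no synthetic spectra are needed once the photometric flux ratio is adopted.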
\n\nThe GSSP program allows us to iterate atmospheric parameters such as $T_{\\rm eff}$, $\\log{g}$, and microturbulent velocity $v_{\\rm turb}$\\ together with $v\\sin{i}$\\ and the surface abundance of one chemical element at the same time. In this way, we started by optimising [Fe\/H] together with the other parameters, first for the primary, then for the secondary, and repeated this until the change in parameters and $\\chi^2$ became marginal. In the next step, we fixed all parameters to the values obtained so far and optimised the elemental abundances of all other chemical elements for which significant contributions in the spectra could be found.\n\n\\subsection{Results}\\label{Sect3.3}\n\nTables\\,\\ref{Tab02} and \\ref{Tab03} list the results. $(R_2\/R_1)^{\\rm calc}$ in Table\\,\\ref{Tab02} is the radii ratio obtained from the analysis using methods a), b), or c). $(R_2\/R_1)^{\\rm adj}$ is the adjusted radii ratio that follows when we apply the least-squares fit based on Eq.\\,\\ref{GSSP3}, as used in method c), to the two other methods as well. This means that, keeping all other parameters obtained with the various methods, the radii ratio has to be changed from $(R_2\/R_1)^{\\rm calc}$ to $(R_2\/R_1)^{\\rm adj}$ to give the best agreement with the photometric flux ratio. For method c), both radii ratios are identical. The quantity $(R_2\/R_1)^{\\rm adj}$ is used in the following as a measure of the agreement of the spectroscopic with the photometric results. It should be equal to 1.17 in the optimum case.\n\nFigures\\,\\ref{Fig02} to \\ref{Fig04} compare the results from the three methods. These figures show in the first row the observed spectrum together with the best-fitting synthetic composite spectrum. The second row compares the ``observed'' spectrum of the secondary with the best-fitting synthetic spectrum found for the secondary. 
The ``observed'' spectrum of the secondary was computed for this purpose by subtracting the best-fitting spectrum of the primary from the observed composite spectrum. These spectra are rescaled according to the obtained flux ratio, and the observed spectrum of the secondary is correspondingly noisy. The bottom row compares the obtained flux ratio as a function of wavelength with that obtained from photometry. The differences seen by eye are marginal. \n\nFrom Table\\,\\ref{Tab02} and Figures\\,\\ref{Fig02} to \\ref{Fig04} we conclude as follows:\nFirst, there are only small differences in the atmospheric parameters and elemental abundances derived with the different methods for the primary.\nSecond, methods a) and c) give values of $T_{\\rm eff}$\\ of the secondary that agree with each other within the 1\\,$\\sigma$ error bars, but both are distinctly higher than the value expected from photometry. The $T_{\\rm eff}$\\ derived from method b), on the other hand, is in agreement with that value.\nThird, the iron and carbon abundances of the secondary cannot be distinguished from those of the primary within the 1$\\sigma$ errors (Table\\,\\ref{Tab02}), but all three methods yield a distinctly lower [C\/H] of the primary component compared to the solar value. \nFourth, all three methods deliver flux ratios higher than the photometric ratio, as can be seen from the bottom panels in Figures\\,\\ref{Fig02} to \\ref{Fig04}. Based on the atmospheric parameters derived with methods a) and c), RZ\\,Cas should have much smaller radii ratios (the $(R_2\/R_1)^{\\rm adj}$ in Table\\,\\ref{Tab02}) than computed. From the results of the light curve analysis by \\citet{2004MNRAS.347.1317R}, however, we expect a radii ratio of 1.17 and can exclude that the radius of the secondary is larger than that of the primary. 
From method b), on the other hand, we obtain a value of $(R_2\/R_1)^{\\rm adj}$ that is larger than unity and not so far from 1.17.\n\n\\begin{table}\n \\tabcolsep 3.3mm\n \\caption{Elemental abundances. We list the solar values based on \\citet{2009ARA&A..47..481A}, given as 12\\,+\\,log{(E\/H)}, and the abundances of the primary of RZ\\,Cas relative to the solar values, measured with the multiple methods.}\\label{Tab03}\n\\begin{tabular}{lcccccc}\n\\toprule\n & Solar & Method a) & Method b) & Method c) \\\\\n\\midrule\nC &8.43 & $-0.82_{-0.20}^{+0.14}$ & $-0.80_{-0.18}^{+0.13}$ & $-0.82_{-0.18}^{+0.13}$\\vspace{1mm}\\\\\nO &8.69 & $-0.11_{-0.34}^{+0.21}$ & $-0.22_{-0.48}^{+0.24}$ & $-0.17_{-0.39}^{+0.22}$\\vspace{1mm}\\\\\nMg &7.60 & $-0.14_{-0.05}^{+0.05}$ & $-0.17_{-0.05}^{+0.05}$ & $-0.17_{-0.05}^{+0.05}$\\vspace{1mm}\\\\\nSi &7.51 & $-0.20_{-0.14}^{+0.12}$ & $-0.18_{-0.14}^{+0.13}$ & $-0.19_{-0.14}^{+0.13}$\\vspace{1mm}\\\\\nCa &6.34 & $-0.39_{-0.08}^{+0.07}$ & $-0.38_{-0.08}^{+0.08}$ & $-0.39_{-0.07}^{+0.07}$\\vspace{1mm}\\\\\nSc &3.15 & $-0.23_{-0.09}^{+0.09}$ & $-0.21_{-0.09}^{+0.09}$ & $-0.24_{-0.09}^{+0.09}$\\vspace{1mm}\\\\\nTi &4.95 & $-0.18_{-0.04}^{+0.04}$ & $-0.20_{-0.04}^{+0.04}$ & $-0.21_{-0.04}^{+0.04}$\\vspace{1mm}\\\\\nV &3.93 & $-0.06_{-0.20}^{+0.15}$ & $-0.11_{-0.22}^{+0.16}$ & $-0.07_{-0.19}^{+0.15}$\\vspace{1mm}\\\\\nCr &5.64 & $-0.35_{-0.07}^{+0.06}$ & $-0.36_{-0.06}^{+0.06}$ & $-0.36_{-0.06}^{+0.06}$\\vspace{1mm}\\\\\nMn &5.43 & $-0.45_{-0.13}^{+0.11}$ & $-0.47_{-0.14}^{+0.12}$ & $-0.46_{-0.13}^{+0.11}$\\vspace{1mm}\\\\\nFe &7.50 & $-0.43_{-0.06}^{+0.01}$ & $-0.42_{-0.04}^{+0.02}$ & $-0.43_{-0.02}^{+0.03}$\\vspace{1mm}\\\\\nNi &6.22 & $-0.41_{-0.15}^{+0.12}$ & $-0.41_{-0.15}^{+0.12}$ & $-0.43_{-0.15}^{+0.12}$\\vspace{1mm}\\\\\nY &2.21 & $-0.32_{-0.26}^{+0.19}$ & $-0.32_{-0.26}^{+0.20}$ & $-0.32_{-0.26}^{+0.19}$\\vspace{1mm}\\\\\nBa &2.18 & $-0.45_{-0.18}^{+0.17}$ & $-0.42_{-0.18}^{+0.17}$ & \n$-0.45_{-0.17}^{+0.17}$\\\\ \n \\bottomrule \n 
\\end{tabular} \n\\end{table}\n\nThere is a large degeneracy between various parameters such as $T_{\\rm eff}$, $\\log{g}$, $v_{\\rm turb}$, [Fe\/H] (both components), [Mg\/H] (cool component), and the radii ratio. This explains why we end up, for both methods a) and c), with extraordinarily small radii ratios at the cost of a higher $T_{\\rm eff}$\\ of the secondary (much too high when comparing with Fig.\\,\\ref{Fig01}). We assume that the number of degrees of freedom is too high when trying to optimise the radii ratio together with the other parameters; the signal from the faint companion in our composite spectra is simply too low for that. We therefore conclude, although it does not give the smallest root mean square (rms) of the O-C residuals, that the most reliable results are obtained when fixing the radii ratio to the photometric value, as we did in method b). \n\n\\section{Radial velocities and LSD profiles with LSDbinary}\\label{Sect4}\n\n\\begin{figure}\\centering\n \\includegraphics[width=\\linewidth]{Fig05.png}\n \\caption{LSD profiles computed with LSDbinary from spectra taken during primary eclipse for the primary (left) and secondary (right) of RZ\\,Cas. Phase zero corresponds to Min\\,I.}\n \\label{Fig05}\n\\end{figure}\n\nThe classical method of LSD \\citep[][]{1997MNRAS.291..658D} is based on using one line mask as a template. As a result, a strong deconvolved line profile is obtained for the star that best matches this template, whereas the contribution of the other component is more or less suppressed. \\citet{2013A&A...560A..37T} generalised the method so that it can simultaneously compute an arbitrary number of LSD profiles from an arbitrary number of line masks. In this work, we used the LSDbinary program written by V. Tsymbal, which computes separated LSD profiles for the two stellar components using as templates two synthetic spectra that are based on two different atmosphere models. 
The program also delivers optimised values of the RVs of the components and their radii ratio. A first successful application of the program to the short-period Algol \\object{R CMa} is presented in \\cite{2018A&A...615A.131L}; this work also includes a short description of the program algorithms and shows the advantages of LSDbinary against TODCOR \\citep{1994ApJ...420..806Z} in the case of very small flux ratios between the components of binary stars.\n\nWe applied LSDbinary to the spectra of RZ\\,Cas to obtain the separated LSD profiles and RVs of its components. Figure\\,\\ref{Fig05} shows LSD profiles computed from RZ\\,Cas spectra taken during the primary eclipse in 2016 as an example. The RME can clearly be seen in the profiles of the primary, as well as the RV variation due to orbital motion in the profiles of both components. We will use the obtained RVs and LSD profiles for a detailed investigation of the stellar and system parameters of RZ\\,Cas in this first part and of the pulsations of its primary component in Paper\\,II.\n\n\\section{Rotation velocity of the primary}\\label{Sect5}\n\nWe applied three different methods to determine $v\\sin{i}$\\ of the primary of RZ\\,Cas. In all cases, we only use spectra taken in out-of-eclipse phases.\n\n\\subsection{Fourier method}\\label{Sect5.1}\nFirst, we applied the Fourier method \\citep{1933MNRAS..93..478C, 1976PASP...88..809S, 2005oasp.book.....G} to the calculated LSD profiles. This method (DFT hereafter) is based on the determination of the rotation-broadening related zero points in the Fourier power spectra of the profiles. We assumed a linear limb darkening law with a limb darkening coefficient $\\beta$\\,=\\,1.5 (or $b$\\,=\\,0.6 with $b=\\beta\/(1+\\beta)$). Varying the value of $\\beta$ leads to slight systematic effects in $v\\sin{i}$, but changes were minor compared to the differences to the results from the two other methods described below. 
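The principle behind the DFT method, namely that the first zero $\sigma_1$ of the Fourier transform of the rotational broadening kernel scales inversely with $v\sin{i}$ ($\sigma_1\,v\sin{i}\approx 0.660$ for $b$\,=\,0.6), can be sketched as follows. This is a synthetic illustration with an assumed input velocity, not our reduction code:

```python
import numpy as np

vsini_in = 65.0   # km/s, assumed input value for this illustration
b = 0.6           # linear limb-darkening coefficient, b = beta / (1 + beta)

# Rotational broadening kernel for linear limb darkening, sampled in velocity
v = np.linspace(-vsini_in, vsini_in, 1301)
x = v / vsini_in
G = 2.0 * (1.0 - b) * np.sqrt(1.0 - x ** 2) + 0.5 * np.pi * b * (1.0 - x ** 2)

# Zero-padded Fourier amplitude spectrum; sigma in cycles per (km/s)
n = 2 ** 18
amp = np.abs(np.fft.rfft(G, n))
sigma = np.fft.rfftfreq(n, d=v[1] - v[0])

# The first zero of the transform appears as the first local minimum of |FT|
i0 = np.argmax((amp[1:-1] < amp[:-2]) & (amp[1:-1] < amp[2:])) + 1
vsini_rec = 0.660 / sigma[i0]     # recovers approximately vsini_in
```

In practice the observed profile is the convolution of this kernel with the intrinsic profile, so the low-frequency zero positions are still dominated by rotation, which is what the method exploits.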
The selection of zero points was based on a 3$\\sigma$ clipping, comparing the $v\\sin{i}$\\ from single zero points with the mean values from all zero points.\n\n\\noindent As a result, we obtained $v\\sin{i}$\\ values strongly varying with orbital phase. Figure\\,\\ref{Fig06} shows an example. We assume that this variation comes from Algol-typical effects such as an inhomogeneous circum-primary gas-density distribution and\/or surface structures caused by the influence of mass transfer and the gas stream from the secondary. In the next step, we removed all the variations found in the line profiles with the pixel-by-pixel method of FAMIAS \\citep{2008CoAst.157..387Z} by subtracting all frequency contributions computed with the mentioned program from the profiles. This method will be explained in detail in Paper\\,II. For each season, we considered a certain pixel of all LSD profiles as part of one time series from which we subtracted the found contributions, and did this pixel by pixel to build the undistorted profiles. We did this for all seasons except 2008, for which we did not have enough data to apply the pixel-by-pixel method. The subtraction of the high-amplitude, low-frequency contributions found with FAMIAS (most of them are harmonics of the orbital frequency) is responsible for the cleaning, not the faint high-frequency oscillations due to pulsation. The resulting $v\\sin{i}$\\ values are shown in Fig.\\,\\ref{Fig06} in red. The distribution is now much flatter, and we used it to calculate the $v\\sin{i}$\\ of the corresponding season as its arithmetic mean. Results are listed in the third column of Table\\,\\ref{Tab04}.\n\n\\subsection{Single spectral line} \nNext, we looked for stronger spectral lines of the primary in the composite spectra that are free of blends from the cool companion. We found only one line, consisting of the Fe\\,II doublet 5316.61\/5316.78\\,\\AA, that fulfils that condition. 
For each season, we fitted the line profiles by synthetic spectra computed with the parameters listed in Table\\,\\ref{Tab02}, method b), to determine the best-fitting $v\\sin{i}$\\ and its error. This means that we fixed all atmospheric parameters except for $v\\sin{i}$\\ to the solution determined from the spectra observed in 2009, when the star was in a relatively quiet phase. However, this approach does not account for the effects in active phases of RZ\\,Cas, such as the attenuation of light by circumstellar material as found for the year 2001, for example \\citep{2009A&A...504..991T}. Thus we used two more free parameters in our fit, a correction factor $a$ accounting for different line depths caused by the presumed effects, and a factor $b$ to correct the continuum of the observed spectrum in the vicinity of the Fe\\,II line. Both were determined from a least-squares fit\n\\begin{equation}\n \\{1-[1-a\\,P_s(v\\sin{i})]-b\\,P_o\\}^2~\\longrightarrow~\\min,\n\\end{equation}\nwhere $P_s$ is the synthetic and $P_o$ the observed line profile. Finally, we computed the mean $v\\sin{i}$\\ per season as the weighted mean of data points selected using 3$\\sigma$ clipping. Figure\\,\\ref{Fig07} shows two examples: one for 2001, in which the star was in an active phase, and one for 2014, in which the $v\\sin{i}$\\ shows a much smoother behaviour with orbital phase. The results are listed in the fourth column of Table\\,\\ref{Tab04}.\n\n\\begin{figure}\\centering\n \\includegraphics[angle=-90, width=\\linewidth]{Fig06.png}\n \\caption{Values of $v\\sin{i}$\\ obtained from DFT vs. out-of-eclipse phases measured from the LSD profiles observed in 2016 (black) and from the same profiles corrected for the LPV found with FAMIAS (red).}\n \\label{Fig06}\n\\end{figure}\n\n\\begin{figure}\\centering\n \\includegraphics[width=.82\\linewidth]{Fig07.png}\n \\caption{Values of $v\\sin{i}$\\ determined from the Fe\\,II 5317\\,\\AA\\ line vs. orbital phase, shown for 2001 and 2014. 
The mean values were built from the values indicated in green.}\n \\label{Fig07}\n\\end{figure}\n\n\\subsection{Using FAMIAS}\nIn Paper\\,II, we will use the moment methods \\citep{1992A&A...266..294A} of the FAMIAS program for the identification of low-$l$-degree modes. For that, we cleaned the observed LSD profiles of all low-frequency contributions in the same way as described in Sect.\\,\\ref{Sect5.1}. One free parameter in applying the moment method is the $v\\sin{i}$\\ of the primary, whose optimum value and 1$\\sigma$ error we obtained from the resulting $\\chi^2$-distribution. The values are listed in the last column of Table\\,\\ref{Tab04}. The number of observations in 2008 was not sufficient to apply this method.\n\n\\subsection{Comparison}\\label{Sect5.4}\nThe $v\\sin{i}$\\ values determined with the three different methods are shown in Fig.\\,\\ref{Fig08}. The values obtained from DFT and FAMIAS agree in most cases within the 1$\\sigma$ error bars; there is a larger difference for 2006. A systematic offset can be observed for the $v\\sin{i}$\\ values obtained from the Fe\\,II line. This is in particular the case for 2001, when RZ\\,Cas was in an active phase. We assume that the offset comes from the cleaning procedures that we applied when using the other two methods. In these cases, we removed from the LSD profiles the low-frequency contributions found with the pixel-by-pixel method of FAMIAS, thereby also removing the line-broadening effects due to orbital-phase-dependent dilution by circumstellar material. \n\n\\begin{table}\n\\tabcolsep 1.4mm\n\\caption{Values of $v\\sin{i}$\\ obtained from the three methods in multiple years. JD is JD\\,2\\,450\\,000+. The last column gives the rotation-to-orbit synchronisation factor. 
Here and in the following, values in parentheses are the errors in units of the last digits.}\\label{Tab04}\n\\begin{tabular}{cccccc}\n\\toprule\nYear & JD & $v\\sin{i}$$_{\\rm DFT}$ &$v\\sin{i}$$_{\\rm FeII}$ & $v\\sin{i}$$_{\\rm FAMIAS}$ & $F_1$ \\\\\n & & (km\\,s$^{-1}$) & (km\\,s$^{-1}$) & (km\\,s$^{-1}$) & \\\\\n\\midrule\n2001 & 2192 & 68.9(1.2) & 74.1(2.4) & 69.47(39) & 0.978(15)\\\\\n2006 & 3768 & 64.33(74) & 67.0(1.6) & 65.95(28) & 0.921(12)\\\\\n2008 & 4717 & 63.39(27) & 63.6(1.0) & -- & 0.896(11)\\\\\n2009 & 5156 & 63.91(45) & 64.49(59) & 62.99(25) & 0.897(11)\\\\\n2013 & 6635 & 65.16(87) & 66.9(1.3) & 64.43(35) & 0.916(13)\\\\\n2014 & 6936 & 64.33(61) & 65.3(1.1) & 64.94(26) & 0.914(12)\\\\\n2015 & 7278 & 64.31(67) & 66.11(91) & 64.08(23) & 0.908(12)\\\\\n2016 & 7646 & 63.62(65) & 64.6(1.0) & 64.87(29) & 0.909(12)\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}\\centering\n\\includegraphics[angle=-90, width=\\linewidth]{Fig08.png}\n\\caption{Values of $v\\sin{i}$\\ measured from a) DFT (black crosses), b) the Fe\\,II line (red squares), and c) FAMIAS (green plus signs). The solid and dotted lines show possible sinusoidal variations (see text).}\n\\label{Fig08}\n\\end{figure}\n\nFinally, we built a spline fit through the averaged values obtained from DFT and FAMIAS and interpreted this as the typical variation of $v\\sin{i}$\\ over the epoch of our observations.\nWe see that $v\\sin{i}$\\ was distinctly larger in 2001 compared to the other seasons. We assume that its value increased during the active phase of RZ\\,Cas close to 2001 because of the acceleration of the outer layers of the primary by angular momentum transport via mass transfer from the cool component (see Sect.\\,\\ref{Sect8} for a more detailed discussion). A period search in the values averaged from DFT and FAMIAS gives a period of 9.6\\,yr, overlaid on a long-term trend (the solid line in Fig.\\,\\ref{Fig08}). 
A fit based on the 9.0\,yr period as discussed in Sect.\,\ref{Sect7.2.3} is shown by the dotted line. The difference is marginal. The last column in Table\,\ref{Tab04} lists the rotation-to-orbit synchronisation factor of the primary, $F_1$, calculated from its radius, taken from \citet{2004MNRAS.347.1317R} as 1.67\,$\pm$\,0.02\,R$_\odot$, and the arithmetic mean of the $v\sin{i}$\ measured with DFT and FAMIAS. The result shows that the primary rotates sub-synchronously; only in 2001 does it almost reach synchronous rotation velocity.\n\n\section{Orbital period changes}\label{Sect6}\n\nIt is hard to search for orbital period changes in the range of a few seconds when the orbital RV curves are heavily distorted by non-Keplerian effects, as in the case of RZ\,Cas (see Fig.\,\ref{Fig13}). Table\,\ref{Tab05} lists the orbital periods and times of minimum ($T_{\rm Min}$ hereafter) derived from the RVs in different seasons using the method of differential corrections \citep{1910PAllO...1...33S, 1941PNAS...27..175S}. The 1$\sigma$ errors of the period are of the order of two seconds or more, which is larger than the expected period changes. Even the inclusion of effects such as the Roche geometry of the components and spots on the stellar surfaces in the PHOEBE calculations (Sect.\,\ref{Sect7.2}) did not help us to reach the desired accuracy in orbital period. 
The $T_{\\rm Min}$, on the other hand, could be very precisely determined.\n\n\\begin{table}\\centering\n\\tabcolsep 1.3mm\n\\caption{Orbital periods $P_{RV}$ and times of minimum $T_{\\rm RV}$ derived from the RVs from single seasons, and period changes $\\triangle P_{phot}$ and periods $P_{phot}$ derived from the photometric $T_{\\rm Min}$.}\n\\label{Tab05}\n\\begin{tabular}{cllrc}\n\\toprule\nYear & \\multicolumn{1}{c}{$P_{RV}$} & \\multicolumn{1}{c}{$T_{\\rm RV}$} \n & \\multicolumn{1}{c}{$\\triangle P_{phot}$} & \\multicolumn{1}{c}{$P_{phot}$}\\\\\n & \\multicolumn{1}{c}{(d)} & \\multicolumn{1}{c}{BJD\\,2450000+}\n & (sec) & (d) \\\\\n\\midrule\n2001 & 1.19572(30) & 2190.9954063(32) & 1.07 & 1.1952626\\\\\n2006 & 1.195248(20) & 3775.9076591(23) & $-$0.15 & 1.1952486\\\\\n2008 & 1.19525(26) & 4717.7661978(51) & $-$0.03 & 1.1952500\\\\\n2009 & 1.1954(13) & 5156.4217537(36) & 0.40 & 1.1952549\\\\\n2013 & 1.195307(65) & 6663.6333727(22) & 0.31 & 1.1952539\\\\\n2014 & 1.19533(28) & 6936.1537115(17) & 0.45 & 1.1952554\\\\\n2015 & 1.195278(17) & 7279.1944163(12) & 0.70 & 1.1952583\\\\\n2016 & 1.19497(26) & 7644.9432256(20) & 0.67 & 1.1952580\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}\n\\includegraphics[angle=-90, width=.9\\linewidth]{Fig09.png} \n\\caption{Radial velocities of the primary in 2006 (black) and 2001 (red) vs. orbital phase. Phase is calculated from $P$ and $T_{\\rm min}$ as derived from the RVs in 2006.}\n\\label{Fig09}\n\\end{figure}\n\nWhen we plot the RVs versus orbital phase where the latter is calculated from $T_{\\rm min}$ and $P$ of one single season, we see a clear phase shift in RV compared to other seasons (Fig.\\,\\ref{Fig09}). Since we do not have any information about the behaviour of the system in between, we cannot deduce period shifts from our RVs alone, however. To fix the problem, we used the photometric $T_{\\rm Min}$ from literature. 
We collected data from the O-C Gateway\footnote{http:\/\/var2.astro.cz\/ocgate\/index.php?lang=en}, Bob Nelson's Data Base of O-C Values\footnote{w.aavso.org\/bob-nelsons-o-c-files}, $T_{\rm Min}$ kindly provided by J.\,Kreiner\footnote{https:\/\/www.as.up.krakow.pl\/o-c\/} \citep[also see][]{2004AcA....54..207K}, and unpublished data from D.\,Mkrtichian. Cross-checking for duplicates and rejecting all visual observations, we ended up with 605 $T_{\rm Min}$ covering the time span from 1896 to 2019, to which we added the eight $T_{\rm Min}$ derived from our spectra. We converted all dates given as HJD to BJD based on terrestrial time TT. Then we computed the overall best-fitting period from a least-squares fit of $T_{\rm Min}$ versus epoch number $E$, yielding $P_0$\,=\,1.195250392(61)\,d and $T_0$\,=\,2\,453\,775.89453(79). The resulting values\n \begin{equation}\n O-C = T_E-T_0-P_0E\n \end{equation}\nare shown in Fig.\,\ref{Fig10}. \n\nThere exist different approaches to determine the local period from an O-C diagram. The classic method is to fit segments by linear or parabolic functions to calculate constant periods or linearly changing periods per segment, respectively. But there are also approaches that assume a continuous change of the orbital period, such as that of \citet{1994A&A...282..775K}; see \citet{2001OAP....14...91R} for an overview of other methods. We applied a procedure similar to that used in \citet{2018MNRAS.475.4745M}, interpolating the O-C data to a grid of step width one in epoch $E$, using spline fits together with 3$\sigma$-clipping and computing the period change from the local slope of the resulting fit. This gives\n \begin{equation}\label{deltaP}\n (O-C)_E-(O-C)_{E-1} = T_E-T_{E-1}-P_0\n \end{equation}\nwhere $T_E$\,$-$\,$T_{E-1}$ is the local period and thus Eq.\,\ref{deltaP} describes the local difference $\triangle P$\,=\,$P$\,$-$\,$P_0$. 
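To make the two relations concrete, the following is a minimal numerical sketch of Eqs. (1) and (2); the synthetic times of minimum and the plain finite difference are illustrative stand-ins for the collected $T_{\rm Min}$ and for the slope of the smoothing-spline fit:

```python
import numpy as np

# Illustrative stand-in data: times of minimum with a slow sinusoidal period
# drift (the real analysis uses the collected literature T_Min plus our
# eight spectroscopic ones).
E = np.arange(0, 4000, 40)                    # epoch numbers
T_min = 0.37 + 1.1952504 * E + 2e-4 * np.sin(2 * np.pi * E / 3000)

# Overall best-fitting linear ephemeris T_E ~ T0 + P0*E (least squares)
P0, T0 = np.polyfit(E, T_min, 1)
OC = T_min - T0 - P0 * E                      # Eq. (1): the O-C values

# Eq. (2): the local slope of O-C gives Delta P = P - P0; here a plain
# finite difference stands in for the slope of the spline fit.
dP = np.diff(OC) / np.diff(E)                 # in days per cycle
dP_sec = dP * 86400.0                         # in seconds
```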
A necessary precondition for our method is that the observed data points are sufficiently dense so that the spline fit gives a reliable prediction of the behaviour between these points. For that reason, we only considered the $T_{\rm Min}$ observed after BJD 2\,437\,000.\n\n\begin{figure}\n\includegraphics[angle=-90, width=\linewidth]{Fig10.png}\n\caption{O-C values calculated from the corrected $T_{\rm Min}$, full range in JD.}\n\label{Fig10}\n\end{figure}\n \n\begin{figure}\n\includegraphics[width=\linewidth]{Fig11.png}\n\caption{O-C values and derived period changes. Top: O-C values (filled black circles) fitted by splines (red line). Values considered as outliers are shown by open circles and those derived from our spectra in green. Bottom: Period changes calculated from the local slope of the spline fit (see text). Open circles indicate the seasons of our spectroscopic observations.}\n\label{Fig11}\n\end{figure}\n\nFigure\,\ref{Fig11} shows in its top panel the O-C values together with the spline fit. The $T_{\rm Min}$ obtained from our RVs ($T_{\rm RV}$ in Table\,\ref{Tab05}) are shown in green and fit very well. The bottom panel shows the resulting period changes as the solid line, for which we assumed a ``best-selected'' smoothness parameter for the underlying spline fit. To give an impression of the influence of the smoothness parameter, we also show (dotted line) the period changes resulting from a much larger parameter, leading to a more relaxed spline fit. The positions of our spectroscopic observations are indicated. The obtained period changes and periods are listed in Table\,\ref{Tab05} as $\triangle P_{phot}$ and $P_{phot}$. \n\n\citet{2018MNRAS.475.4745M} derived typical timescales of 4.8, 6.1, and 9.2\,yr from the O-C variations. We did a frequency search in the obtained period changes using the PERIOD04 program \citep{2005CoAst.146...53L} and found the six periods listed in Table\,\ref{Tab06}. 
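As a rough illustration of such a frequency search, a simple discrete-Fourier amplitude spectrum (standing in for PERIOD04) applied to a toy series with a single 8.6\,yr cycle of 0.37\,s amplitude recovers the input period:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the derived period changes: one 8.6-yr cycle of 0.37 s
# amplitude plus noise, unevenly sampled over a 60-yr baseline.
t = np.sort(rng.uniform(0.0, 60.0, 300))              # time in years
y = 0.37 * np.sin(2 * np.pi * t / 8.6) + 0.05 * rng.normal(size=t.size)

# Discrete-Fourier amplitude spectrum on a grid of trial frequencies (c/yr)
freqs = np.linspace(0.02, 0.5, 4000)
amp = np.array([2.0 * np.hypot((y * np.cos(2 * np.pi * f * t)).mean(),
                               (y * np.sin(2 * np.pi * f * t)).mean())
                for f in freqs])
best_period = 1.0 / freqs[np.argmax(amp)]             # close to 8.6 yr
```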
We do not assume that the observed variation can be described as strictly cyclic but consider the detected periods as typical timescales that describe the behaviour in certain seasons. Figure\,\ref{Fig12} illustrates this. It shows the calculated period changes together with the best fit of the six periods and also single contributions from four of the six periods. The longest period of 52.7\,yr describes a long-term trend in the data. The 6.3\,yr period describes the variations before BJD 2\,442\,560 (segment A), and the 8.6\,yr period the behaviour between 2\,445\,240 and 2\,456\,070 (segment B). The 14.5\,yr period is responsible for an amplitude variation over a longer timescale. The remaining two periods of 6.9\,yr and 10.8\,yr cannot directly be linked to the variations in this way, but improve the fit by accounting for the non-cyclic behaviour of the variations. When doing a separate frequency search in segments A and B, we find dominant periods of 5.9\,yr and 8.4\,yr, respectively. This shows that we can only estimate the order of magnitude of the timescales underlying the orbital period variations but cannot use the errors obtained from the frequency search as a measure of the accuracy of the obtained values.\n\n\begin{figure}\centering\n\includegraphics[angle=-90, width=\linewidth]{Fig12.png}\n\caption{Calculated period changes (solid line) fitted by six frequencies (dashed line). The dotted lines show (from top to bottom) the contributions of the 52.7, 6.3, 8.6, and 14.5\,yr periods.}\n\label{Fig12}\n\end{figure}\n\n\begin{table}\n\tabcolsep 2.65mm\n\caption{Timescales and amplitudes of orbital period variation.}\n\label{Tab06}\n\begin{tabular}{lrrrrrr}\n\toprule\n$P$ (yr) & 6.32 & 6.94 & 8.60 & 10.78 & 14.55 & 52.70\\\n$A$ (sec)& 0.34 & 0.37 & 0.37 & 0.24 & 0.26 & 0.45\\\n\bottomrule\n\end{tabular}\n\end{table}\n\n\begin{figure*}\centering\n\includegraphics[width=0.76\textwidth]{Fig13.png}\n\caption{Radial velocity residuals. 
Left: the Rossiter-McLaughlin effect in the RVs of the primary in different years (red) after subtracting the best-fitting Keplerian orbits. For comparison, the RVs from 2006 are shown in black. In 2013, the RVs are divided into BJD\\,<\\,2\\,456\\,600 (2013a, red) and BJD\\,>\\,2\\,456\\,600 (2013b, green), and in 2015 into BJD\\,<\\,2\\,457\\,250 (2015a, red) and BJD\\,>\\,2\\,457\\,250 (2015b, green). Right: The same for the RVs of the secondary, shown over a larger range in orbital phase. Phase zero corresponds to Min\\,I in each case.}\n\\label{Fig13}\n\\end{figure*}\n\n\\section{Radial velocity variations with orbital phase}\n\\subsection{Keplerian approach}\\label{Sect7.1}\n\nFigure\\,\\ref{Fig13} shows the RVs folded with the orbital period after subtracting the best-fitting Keplerian orbits computed with the method of differential corrections as mentioned in Sect.\\,\\ref{Sect6}. We use the observations from 2006 when RZ\\,Cas was in a quiet state for comparison, shown by black dots. The deviations from a straight line (pure Keplerian motion) of the RVs of the secondary seen in 2006 are due to its non-spherical shape and inhomogeneous surface intensity distribution as discussed in \\citet{2009A&A...504..991T}, which we investigate in detail in Sect.\\,\\ref{Sect7.2}. More or less strong deviations from the behaviour in 2006 can be seen in different years. \n\nLooking at the behaviour of the RVs of the secondary, we find the strongest deviations in 2001 where we see large variations with orbital phase. Almost no differences to 2006 are observed in 2009, 2015, and 2016, whereas moderate differences occur in 2013 and 2014. 
Interpreting the strength of the deviations as an activity indicator, we conclude that we observed RZ\,Cas in or just after a mass-transfer episode in 2001 and in a quiet state in 2006 (as already stated by \citet{2008A&A...480..247L} and \citet{2009A&A...504..991T}), and that this quiet phase continued until 2009, followed by a slightly more active phase around 2013 and 2014, before falling back into a quiet state in 2015 and 2016.\n\nLooking at the behaviour of the RVs of the primary, in particular at the amplitude and shape of the RME, we see a different picture. The amplitude of the RME in 2001 is much larger and its shape is strongly asymmetric. In all the other years, however, we see almost no difference from 2006, except for three nights of observation covering the ingress of the eclipses: one in 2009, one in 2013, and one in 2015. \n\n\subsection{Analysis with PHOEBE}\label{Sect7.2}\n\nRZ Cas is a semi-detached Algol-type binary system and its cool component fills its critical Roche lobe. Tidal distortions occur and lead to non-spherical shapes. According to \citet{2013MNRAS.431.2024S}, non-negligible effects on RVs occur for $a<20\,(R_1+R_2)$, that is when the separation of the components is smaller than 20 times the sum of their radii. Thus, the a priori assumption of a Keplerian orbit, which takes the mass centre of the stellar disc to coincide with its intensity centre, is no longer valid. As already mentioned in previous sections, RZ Cas is undergoing stages of active mass transfer. A hot region around the equatorial belt has long been known \citep{1982ApJ...259..702O}. Moreover, the recent comprehensive tomographic study of \citet{2014ApJ...795..160R} revealed clear indicators of mass-stream activity in several short-period Algols, including RZ\,Cas. 
\citet{1994PASJ...46..613U} found that the secondary of RZ\,Cas can be modelled only when assuming an unusually large value of the gravity darkening exponent of 0.53 and inferred that dark spots are present on the front and back sides of the secondary with respect to the primary. These authors supposed that quasi-radial flow in the sub-adiabatic stellar envelope from the deep interior is the cause of the darkening. \citet{2008ysc..conf...33T, 2009A&A...504..991T} confirmed the finding of the two spots, adopting this interpretation. \citet{2018MNRAS.481.5660D} and \citet{2018A&A...611A..69B}, on the other hand, gave an alternative explanation, showing for the short-period Algols \object{$\delta$~Lib} and \object{$\lambda$~Tau} that the mass stream can produce a light-scattering cloud in front of the surface of the Roche-lobe filling secondary facing the primary.\n\n\subsubsection{Method}\n\nTo account for all these effects, we need sophisticated models that simulate the RV changes over the whole orbital motion by integrating the surface intensities of both components. For this purpose, we used the well-known Wilson-Devinney code \citep{1971ApJ...166..605W, 2005Ap&SS.296..121V} through the PHOEBE interface \citep{2005ApJ...628..426P}. The WD code consists of a light and\/or RV curve synthesiser (LC) and a parameter optimiser (DC) for fitting purposes. As we describe later, our attempts to optimise the multi-parameter fit failed with DC, which is expected for such complex configurations. Thus we used LC via our Markov chain Monte Carlo (MCMC hereafter) optimiser written as a PHOEBE-scripter extension, to both optimise the parameters and explore the parameter space \citep[see][]{2018MNRAS.481.5660D}.\n\nIn MCMC runs, we start from a random initial parameter set and add accepted parameters to the Markov chain. We need to run many chains (or \"walkers\") to avoid a single Markov chain becoming stuck in a local $\chi^2$ valley. 
That is why it is advisable to start the simulation with a number of walkers at least twice the number of parameters to be optimised \citep[see][]{2013PASP..125..306F}. For our final sets of simulations, we set the number of walkers to > 20, and the iteration numbers were dynamically increased to fulfil the condition of \citet{2013PASP..125..306F} that the number of iterations should be at least ten times the auto-correlation time (i.e. typically 500,000 iterations per season). \n\n\subsubsection{Application}\n\nSpots are described in WD by two coordinates, co-latitude $\delta$ and longitude $\lambda$, and two physical parameters, the temperature ratio $T$ with respect to the normal surface and the radius $R$. In our initial run, we modelled the RVs of the cool component by taking into account one spot (Spot1) on the side of the cool secondary facing the primary, mimicking a scattering cloud or diffuse material between the components. All of our models converged to the spot position $\delta\approx 90^\circ$, $\lambda\approx 0^\circ$, that is the region that faces the primary. However, we found a strong correlation between spot size and temperature ratio of the form $R^2T^4$\,=\,constant, balancing a lower temperature ratio by a more extended spot. Thus, we could only fit the \"strength\" or \"contrast\" of the spot with respect to the stellar surface, fixing the temperature ratio to a reasonable lower limit of 0.76. As mentioned at the beginning of Sect.\,\ref{Sect7.2}, \citet{1994PASJ...46..613U} suggested dark spots at the front and back sides of the cool secondary towards the primary caused by mass transfer. 
We therefore implemented a second spot (Spot2) on the opposite side of the surface of the secondary to check whether this offers a further improvement.\n\nTo check whether we can detect a variation of the filling factor between active and inactive phases of RZ\,Cas, we used the \"unconstrained mode\" of WD and treated the surface potential of the secondary as a free parameter. In all of our runs, however, the filling factor of the cool component converged to unity. So for the final analysis, we fixed it to unity and used the \"semi-detached mode\". Eccentricity was set to zero and the orbital inclination to 82$^\circ$. The temperature ratio was fixed to 0.76 for both spots on the secondary and, as already mentioned, the position of Spot1 to $\delta_1$\,=\,90$^\circ$, $\lambda_1$\,=\,0$^\circ$. The location of Spot2 converged to $\delta_2$\,=\,90$^\circ$ without showing large scatter, so we fixed $\delta_2$ as well.\n\n\subsubsection{Results}\label{Sect7.2.3}\n\nFree parameters were the synchronisation parameter for the primary, $F_1$, the radii of Spot1 and Spot2 on the secondary, $R_1^{sec}$ and $R_2^{sec}$, the longitude of Spot2, $\lambda_2^{sec}$, and the systemic velocity $V_\gamma$. The RV semi-amplitudes were allowed to vary within the error bars derived in previous attempts; this led to the values listed in Table\,\ref{Tab07}. 
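The MCMC optimisation described in the Method subsection can be illustrated with a toy, numpy-only Metropolis sampler; the 2-parameter Gaussian log-posterior below is a stand-in for the PHOEBE $\chi^2$, and the actual runs use the affine-invariant ensemble sampler of \citet{2013PASP..125..306F} via the PHOEBE scripter:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # Toy log-posterior (standard 2D Gaussian) standing in for -chi^2/2
    return -0.5 * np.sum(theta**2, axis=-1)

ndim, n_walkers, n_steps = 2, 20, 2000        # walkers >= 2x free parameters
theta = rng.normal(size=(n_walkers, ndim))    # random initial parameter sets
lp = log_post(theta)
chains = np.empty((n_walkers, n_steps, ndim))
for i in range(n_steps):
    prop = theta + 0.5 * rng.normal(size=theta.shape)   # Metropolis proposal
    lp_prop = log_post(prop)
    accept = np.log(rng.random(n_walkers)) < lp_prop - lp
    theta[accept], lp[accept] = prop[accept], lp_prop[accept]
    chains[:, i] = theta

# Discard the first half as burn-in; the pooled samples yield parameter
# estimates with uncertainties and mutual correlations (cf. Fig. A1).
flat = chains[:, n_steps // 2:].reshape(-1, ndim)
```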
\n\n\\begin{table}\\centering\n\\tabcolsep 4.1mm\n\\caption{Basic absolute values derived with PHOEBE.} \n \\label{Tab07}\n \\begin{tabular}{llll}\n \\toprule \n $q$ & 0.350782(47) & $M_1$ (M$_\\sun$) & 1.9507(54)\\\\\n $a$ (R$_\\sun$) & 6.5464(54) & $M_2$ (M$_\\sun$) & 0.6843(13) \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\tabcolsep 1.45mm\n\\caption{Synchronisation factor of the primary, radii of both spots on the secondary, longitude of Spot2 on the secondary, and mean scatter of the RV residuals.}\\label{Tab08}\n\\begin{tabular}{lccccc}\n\\toprule\nSeason & $F_1$ & $R_1^{sec}$ & $R_2^{sec}$ & $\\lambda_2^{sec}$ & rms\\\\\n & & \\small (deg)& \\small (deg)& \\small (deg) & \\small (km\\,s$^{-1}$)\\\\\n\\midrule\n2001 & 1.217(20) & $23.09_{-1.40}^{+0.80}$ & $22.8_{-2.2}^{+2.1}$ & $136.7_{-4.1}^{+3.9}$& 5.31\\vspace{1mm}\\\\\n2006 & 0.924(13) & $41.04_{-0.92}^{+0.92}$ & $10.6_{-2.5}^{+1.9}$ & $150 _{-20 }^{+26 }$& 2.47\\vspace{1mm}\\\\\n2009 & 1.006(11) & $36.55_{-0.74}^{+0.74}$ & $10.8_{-1.7}^{+1.4}$ & $244.4_{-6.1}^{+6.6}$& 1.62\\vspace{1mm}\\\\ \n2013a & 0.897(08) & $28.69_{-1.08}^{+0.64}$ & $24.0_{-1.0}^{+0.9}$ & $180.0_{-2.4}^{+2.4}$& 3.46\\vspace{1mm}\\\\\n2013b & 0.991(14) & $28.01_{-1.09}^{+0.99}$ & $23.7_{-1.2}^{+1.2}$ & $218.4_{-2.9}^{+3.1}$& 3.60\\vspace{1mm}\\\\ \n2014 & 0.812(12) & $33.16_{-0.84}^{+0.96}$ & $21.3_{-1.2}^{+1.2}$ & $209.4_{-3.1}^{+3.3}$& 2.92\\vspace{1mm}\\\\ \n2015a & 1.057(13) & $42.51_{-0.72}^{+0.66}$ & $19.2_{-1.9}^{+1.8}$ & $194.4_{-3.2}^{+3.6}$& 2.71\\vspace{1mm}\\\\ \n2015b & 0.782(11) & $41.53_{-0.54}^{+0.69}$ & $14.3_{-1.0}^{+0.9}$ & $180.0_{-3.0}^{+3.2}$& 2.46\\vspace{1mm}\\\\\n2016 & 0.826(10) & $42.71_{-0.73}^{+0.63}$ & $16.2_{-1.1}^{+1.1}$ & $124.5_{-3.3}^{+3.0}$& 2.26\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}\\centering\n\\includegraphics[width=\\linewidth]{Fig14.png}\n\\caption{Residuals of the RVs of the secondary after subtracting the best-fitting PHOEBE solutions 
without including spots (black), including one spot (red), and including two spots (green) on the secondary, obtained (from top to bottom) for seasons 2013a, 2015b, and 2016.}\label{Fig14}\n\end{figure}\n\n\begin{table}\centering\n\caption{Absolute values of positive and negative RV deviations due to RME.}\n\begin{tabular}{lccc}\n \toprule\nSeason& JD & $K_p$ (km\,s$^{-1}$)& $K_n$ (km\,s$^{-1}$)\\\n \midrule\n2001 & 2\,452\,192 & 26.5 & 32.4\\\n2006 & 2\,453\,768 & 19.9 & 22.4\\\n2009 & 2\,455\,156 & 24.7 & 21.0\\\n2013a & 2\,456\,584 & 22.6 & 21.7\\\n2013b & 2\,456\,672 & 26.4 & 20.0\\\n2014 & 2\,456\,936 & 19.3 & 19.7\\\ \n2015a & 2\,457\,238 & 24.6 & 19.8\\\n2015b & 2\,457\,293 & 17.5 & 19.8\\\n2016 & 2\,457\,646 & 19.1 & 19.0\\\n\bottomrule\n \end{tabular}\n \label{Tab09}\n\end{table}\n\nIn Fig.\,\ref{FigA1}, we show the corner plot for season 2009. This shows that MCMC not only yields the optimal parameter set but also provides an error estimate for each parameter and reveals possible correlations between the parameters. The finally obtained values are listed in Table\,\ref{Tab08}, in which we also included the mean scatter calculated from the RV residuals after subtracting the PHOEBE solutions obtained for each season. These residuals are shown in Fig.\,\ref{FigA2}. \n\nFigure\,\ref{Fig14} shows the influence of including spots on the secondary in the model. We selected three seasons as examples. Including spots on the secondary did not affect the modelled RVs of the primary and vice versa. Thus, we only show the residuals of the RVs of the secondary. The black dots correspond to the best-fitting solutions without spots, that is when only the non-spherical shapes of the components (in particular of the secondary) are taken into account. The RVs show a systematic deviation around Min\,II that is distinctly reduced when including Spot1, which faces the primary, resulting in the red dots. 
The RVs show smaller, non-systematic deviations around Min\,I, which are clearly present in the data from 2013a, small in 2016, and almost invisible in 2015b. Adding the second spot on the opposite side reduces this scatter, as shown by the green dots, but in some cases the reduction is marginal.\n\nOne explanation for the asymmetry observed in the RME in the years 2009, 2013b, and 2015a (see Fig.\,\ref{Fig13}) could be the impact of a hot spot on the primary on its RVs. For comparison, we therefore additionally included a hot spot on the primary for\nthese seasons, taking three more free parameters (temperature ratio, radius, and longitude) into account. This did not improve the solutions, however. \n\nTable\,\ref{Tab09} lists the absolute values of the positive and negative amplitudes of the RME (maximum deviations in RV during ingress and egress, respectively) obtained after subtracting the best-fitting PHOEBE solution for $F_1$\,=\,0, that is for a non-rotating primary, from the observed RVs. Panels h) and i) in Fig.\,\ref{Fig15} show that the positive amplitudes reveal a behaviour similar to that of $F_1$ or $R_2$, whereas the negative amplitudes show a systematically decreasing trend with time (see the next section for a discussion). \n\n\begin{figure}\n\includegraphics[width=\linewidth]{Fig15.png}\n\caption{Time variations of different parameters (see text). The solid lines are calculated from the best-fitting periods and long-term trends. The dotted lines show possible sinusoidal variations with a period of 9\,yr plus a long-term trend, except for panel f), which shows an 18\,yr variation. 
Values from 2013b and 2015a were considered as outliers.}\\label{Fig15}\n\\end{figure}\n\n\\begin{table}\\centering\n\\caption{Timescales in years determined from the seasonal variations of different parameters.}\n\\begin{tabular}{ccccccccc}\n\\toprule\nd$P$ & $v\\sin{i}$ & $F_1$ & $R_1^{sec}$ & $R_2^{sec}$ & $\\lambda_2^{sec}$ & rms & $K_p$ & $K_n$\\\\\n\\midrule\n8.6 & 9.6 & 8.4 & 9.4 & 8.8 & 16.7 & 9.3 & 8.5 & 9.6\\\\\n\\bottomrule\n\\end{tabular}\\label{Tab10}\n\\end{table}\n\nFigure\\,\\ref{Fig15} shows the seasonal variations of all investigated parameters. In panel a) we added the period change derived from the O-C values (cf. Sect.\\,\\ref{Sect6}), in panel b) the $v\\sin{i}$\\ values from the fit determined in Sect.\\,\\ref{Sect5.4}, and in panel g) the mean scatter of the RV residuals after subtracting the best-fitting PHOEBE solutions from the input data (see Fig.\\,\\ref{FigA2}). All variations can be described by a sinusoid plus a long-term trend. Because the seasonal sampling was not sufficient to determine a second frequency describing the long-term trend, we fixed it to $10^{-6}$\\,c\\,d$^{-1}$\\ and determined only amplitudes and phases. Table\\,\\ref{Tab10} lists the timescales derived from the best-fitting sinusoids.\n\nThe best-fitting curves are shown in Fig.\\,\\ref{Fig15} by solid lines. We see that $F_1$, $R_2$, rms, and $K_p$ vary almost in phase, whereas $R_1$ varies in anti-phase. Moreover, $v\\sin{i}$\\ and $K_p$ show almost identical shapes of variation. Searching for a possible common period that explains the variations of all parameters, we found a period of 9 years that best fits, together with the mentioned long-term trend, the variations of all parameters except for $\\lambda_2^{sec}$ that can be fitted by a period of 18\\,years, twice the period of 9\\,years. For d$P$ we had to add a third (optimised) period of 6.3\\,yr. The resulting fits are shown in Fig.\\,\\ref{Fig15} by dotted lines. 
In all cases, the quality is comparable to that obtained from the optimised values listed in Table\\,\\ref{Tab10}.\n\n\\section{Discussion}\\label{Sect8}\n\nOur spectral analysis yields atmospheric parameters of the components that are in agreement with the results of light curve analysis by \\citet{2004MNRAS.347.1317R}. For the first time, we determine the elemental surface abundances. For both components, we find [Fe\/H] close to $-0.4$, as for Ca, Cr, Mn, and Ni of the primary, whereas O, Mg, Si, Sc, Ti, and V show relative abundances between $-0.2$ and $-0.1$. For the carbon abundance of the primary, we find [C\/H]\\,=\\,$-0.80^{+0.13}_{-0.18}$, which is remarkably different from the other elements. We think that the difference is significant. First, the considered spectral range shows a sufficiently large number of C\\,I lines of the primary (SynthV computes line depths >5\\% for 32 rotationally unbroadened C\\,I lines). Second, differences between the fits with [C\/H] of $-0.4$ and $-0.8$ can clearly be seen by eye. Figure\\,\\ref{Fig16} shows this for the strongest carbon lines in the H$\\beta$ region of the spectrum of the primary (observed composite spectrum after subtracting the synthetic spectrum of the secondary). In the spectrum of the cool secondary, carbon is mainly present in form of the CH molecule bands. Compared to the primary, the signal is very weak and we can only say that [C\/H] is below solar abundance, between $-0.3$ and $-1.0$ within the 1$\\sigma$ error bars. \n\n\\citet{2008A&A...486..919M} tried to model the binary evolution of RZ\\,Cas and found consistent solutions only for an initial mass ratio of $q$\\,$\\approx$\\,3, which is about the inverse of the actual ratio. 
From the large initial mass of the donor in that case we can assume that the primary has switched during its evolution from pp-chain to CNO-cycle hydrogen burning, resulting in depleted carbon and enhanced nitrogen abundances in its core \\citep[e.g.][]{2010A&A...517A..38P}. From the reversal of mass ratio we conclude that the donor was stripped by mass loss down to its core so that the gainer accreted CNO-cycled, carbon-deficient material in the late phase of its evolution. This material has then mixed with the surface layers of the gainer, leading to the observed carbon abundance. In that case, the surface carbon abundance of the gainer cannot be lower than that of the donor star, however. Multiple authors have used the sketched scenario to explain the surface abundance anomalies observed in several other Algol-type stars; these include \\citet{2012MNRAS.419.1472I} for \\object{GT~Cep}, \\object{AU~Mon}, and \\object{TU~Mon}, \\citet{2014MNRAS.444.3118K} for \\object{u~Her}, and \\citet{2018MNRAS.481.5660D} for $\\delta$\\,Lib.\n\n\\begin{figure}\n\\includegraphics[angle=-90, width=\\linewidth]{Fig16.png}\n\\caption{Fit of the observed spectrum of the primary (black) in the H$_\\beta$\\,region by synthetic spectra with [C\/H] of 0.0 (blue), $-0.4$ (green), and $-0.8$ (red).}\n\\label{Fig16}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[width=.5\\linewidth]{Fig17a.png}\n\\includegraphics[width=.5\\linewidth]{Fig17b.png}\n\\caption{Logarithmic plots of gas density distributions (in $10^{11}$\\,cm$^{-3}$) obtained from 3D hydrodynamic simulations of the RZ\\,Cas system (Nazarenko \\& Mkrtichian, priv. comm.) for mass-transfer rates of 1\\,$\\times$\\,$10^{-9}$\\,M$_\\odot$y$^{-1}$ (left) and 6\\,$\\times$\\,$10^{-8}$\\,M$_\\odot$y$^{-1}$ (right). 
The viewing angles at first ($\\phi$\\,=\\,$-0.1$) and last ($\\phi$\\,=\\,$+0.1$) contact are indicated.}\n\\label{Fig17}\n\\end{figure*}\n\nOur further investigation was based on the separated LSD profiles and RVs of the two components of RZ\\,Cas computed with LSDbinary, and on the times of minimum (O-C values) taken from literature. The change in orbital period computed from the slope of the O-C diagram cannot be characterised by one single timescale, as already found by \\citet{2018MNRAS.475.4745M}. These authors derived timescales of 4.8, 6.1, and 9.2 years. We observe from our slightly extended data set different timescales of variation in different seasons, such as 6.3\\,yr around JD\\,2\\,440\\,000, and 8.6\\,yr from 2\\,450\\,000 to 2\\,455\\,000 (which is about the time span of our spectroscopic observations), overlaid by longer periods. From our previous analysis using the Shellspec07\\_inverse code \\citep{2009A&A...504..991T}, we know that RZ\\,Cas was in an active state of mass transfer in 2001 and in a quiet state in 2006. Comparison of the RV residuals after subtracting a pure Keplerian orbit with those from 2006 gave us initial hints pointing to further activity periods (Fig.\\,\\ref{Fig13}). From the RVs of the primary we see that the RME was distinctly enhanced only in 2001. A comparison of the RV residuals of the secondary with those from 2006 shows the largest deviation in 2001, weaker deviations in 2013 and 2014, and almost no deviations in 2009, 2015, and 2016. The amount of scatter found in the RV residuals after subtracting the best-fitting PHOEBE solutions is strongly correlated with this finding. Thus, we conclude that no further mass-transfer episode as strong as in or close to 2001 occurred in RZ\\,Cas.\n\nThe modelling of the observed RVs of both components with MCMC-PHOEBE gave the best results when adding two dark spots on the surface of the cool companion: Spot1 facing the primary, Spot2 on the opposite side. 
Spot1 was already found from the RVs from 2001 and 2006 by \citet{2009A&A...504..991T}, whereas \citet{1994PASJ...46..613U} included two spots in their model, explaining the observed anomalous gravity darkening by a cooling mechanism based on enthalpy transport due to mass outflow, which leads to a reverse process of gravitational contraction. \n\nWe monitored both spots for the first time over decades.\nWe find that Spot1 always points exactly towards the primary, with radii (a synonym for strength or contrast, as mentioned before) between 23$^\circ$ and 43$^\circ$. Spot2, on the other hand, is much weaker, shows a variation in its position, and induces only a second-order improvement in some of the observed seasons (cf. Fig.\,\ref{Fig14}). The main findings are that the strengths of the two spots vary in anti-phase and that Spot2 shows different positions in longitude, varying around the longitude of the L2 point of 180$^\circ$.\n\nThe fact that the strength of Spot1 is largest when the star is in a quiet phase, between 2006 and 2009 (Fig.\,\ref{Fig15}), argues against the cooling-by-mass-outflow mechanism. Instead, we argue that the variations of the strengths of both spots can be explained by magnetic activity of the cool companion, assuming an activity cycle of 9 years, based on an 18-year cycle of magnetic field change, including a reversal of the magnetic poles. The 9-year cycle can be found in the variations of all investigated parameters except for the longitude of Spot2, as we showed in the last section, and was also found by \citet{2018MNRAS.475.4745M}. Spot2 shows a migration in longitude, returning after an 18-year cycle to its position from 2001 (cf. Fig.\,\ref{Fig15}). We assume that we observe surface structures on the cool secondary of RZ\,Cas similar to those on cool, rapidly rotating RS\,CVn binaries or on single rapidly rotating variables of FK Com and BY Dra type. Long-lived active regions have been observed on opposite sides of these stars. 
The spots are of different intensity and also show switching of activity from one spot to the other on timescales of years or decades, which is known as the flip-flop effect \\citep[see][and references therein]{2018A&A...613A...7Y}.\n\nWe believe that there is a direct analogy with sunspot activity. Sunspots show a saucer-shaped depression in the photosphere caused by the Lorentz force of the strong magnetic field within the spot, the so-called Wilson depression \\citep{1774RSPT...64....1W}. This means that inside the spot the level of $\\tau$\\,=\\,1 is located below the level of the photosphere outside the spot. For the Sun, the geometric depth of the depression is of the order of 600\\,km \\citep[e.g.][]{1972SoPh...26...52G, 2018A&A...619A..42L}. For the Roche-lobe-filling donor, the existence of such an atmospheric depression close to the L1 point means that L1 is \"fed\" by atmospheric layers of lower density; the deeper the depression, the more the mass transfer is suppressed. The local magnetic field strength controls the strength (depression) of Spot1 and the height of the atmospheric layers feeding the L1 point, and in this way the mass-transfer rate. In perfect agreement with this scenario, RZ\\,Cas showed a high mass-transfer rate in 2001 (and possibly in 2011, when observations are missing), when the Spot1 size was around 20 degrees, and a low activity state in 2006-2007 and in 2015-2016, when the spot size was about 40 degrees.\n\nThe explanation for the asymmetry and different amplitudes observed in the RME in the years 2001, 2009, 2013b, and 2015a (see Fig.\\,\\ref{Fig13}) could be the combined effect of the acceleration of the photospheric layers in 2001 caused by the high mass-transfer rate and of the screening of the surface of the primary by the dense gas stream \\citep{2008A&A...480..247L}. The amplitudes of the RME in RZ\\,Cas in general show very different behaviour during ingress ($K_p$) and egress ($K_n$) of primary eclipse. 
The variation of $K_p$ is similar to those of $R_2$ and rms, whereas the shape of the $K_n$ variation perfectly matches that of $v\\sin{i}$\\ (see Fig.\\,\\ref{Fig15}). We assume that $K_n$ is related to the true $v\\sin{i}$\\ seen outside the eclipses and $K_p$ is strongly influenced by mass-transfer effects. This can be explained from Fig.\\,\\ref{Fig17}, showing the gas density distribution around RZ\\,Cas for two different mass-transfer rates, calculated from 3D-hydrodynamic simulations by Nazarenko \\& Mkrtichian (priv. comm.). It can be seen that the equatorial zone of the surface of the primary is masked during ingress (at orbital phases $\\phi$\\,=\\,$-0.1..0$) by the optically thick gas stream from the secondary, whereas we directly see the complete disc of the primary during egress ($\\phi$\\,=\\,$0..0.1$), only very slightly hampered by optically thin circumbinary matter. In consequence, $K_p$ is affected by the seasonally varying density of the gas stream (or mass-transfer rate) and $K_n$ is correlated with the surface rotation velocity of the primary. All the variability seen in the RME in the years from 2006 on is mainly related to the variations of $K_p$, forced by changes in gas-stream density and variable screening and attenuation effects, while the rotation speed was nearly constant. \n\nThe synchronisation factor derived with PHOEBE is mainly based on the shape of the RME, and this shape is strongly influenced by time-varying Algol-typical effects (see paragraph below). Our MCMC simulation did not account for the observed asymmetry in the RME either (i.e. the fact that the amplitudes $K_p$ and $K_n$ are different from each other), and so the synchronisation factor $F_1$ has to be considered as some kind of mean value. The shape of its time variation resembles the mean of the shapes of the $K_p$ and $K_n$ variations (cf. Fig.\\,\\ref{Fig15}). 
The results obtained for the synchronisation factor in Sect.\\,\\ref{Sect5.4} from radius and $v\\sin{i}$\\ of the primary, on the other hand, give sub-synchronous rotation of the primary. Its outer layers are only accelerated to almost synchronous rotation during the active phase\nin 2001. The finding of sub-synchronous rotation is surprising. It was suggested by \\citet{2014A&A...570A..25D} to occur during the rapid phase of mass exchange in Algols, when the donor star is spun down on a timescale shorter than the tidal synchronisation timescale and material leaving the inner-Lagrangian point is accreted back onto the donor, enhancing orbital shrinkage. But these authors state that once the mass-transfer rate decelerates and convection develops in the surface layers, tides should be effective enough to re-synchronise the primary. To our knowledge, sub-synchronous rotation was found so far in only one short-period Algol, namely TV\\,Cas, by \\cite{1992A&A...257..199K}, in which the authors found synchronisation factors of 0.85 for the primary and 0.65 for the secondary component.\n\nThe RV residuals after subtracting the solutions obtained with our MCMC simulations (Fig.\\,\\ref{FigA2}) finally show that our model distinctly reduces the scatter compared to the residuals obtained from pure Keplerian motion. On the other hand, we can still recognise many of the signs of activity that we discussed in connection with Fig.\\,\\ref{Fig13}. We assume that these still unexplained features are due to the distribution and density of circumbinary matter along the line of sight in different orbital phases, varying between different seasons according to the varying activity of the star. Finally, we can conclude from Fig.\\,\\ref{FigA2} in the same way as from Fig.\\,\\ref{Fig13} that RZ\\,Cas showed an extraordinary phase of activity around 2001 followed by a quiet phase in 2006 to 2009, slightly enhanced activity in 2013\/2014, followed by a quiet phase again. 
The calculated fits on the variation of the radii of the two spots and the RME amplitude $K_p$ based on the nine-year cycle, on the other hand, show extrema of the same amplitude as in 2001 for the time around 2010-2011. Therefore, because of missing data in this period, we cannot exclude the possibility that a second mass-transfer episode of comparable strength to that in 2001 occurred close to 2010-2011.\n\nThe orbital period was at maximum shortly after 2001, dropped down steeply until 2006, and was then more or less continuously rising. From Eq.\\,8 in \\citet{1973A&A....27..249B}, assuming conservative mass transfer and conservation of orbital angular momentum, we obtain a mass-transfer rate of 1.5\\,$\\times\\,10^{-6}$\\,M$_\\odot$\\,yr$^{-1}$. This is a typical value observed for Algol-type stars \\citep[e.g.][]{1976IAUS...73..283H} and also agrees with \\citet{1976AcA....26..239H}, who calculate, from an O-C analysis, a mean mass-transfer rate of RZ\\,Cas of 1.0\\,$\\times\\,10^{-6}$\\,M$_\\odot$\\,yr$^{-1}$, averaged over several episodes of period change. As mentioned in the introduction, evolutionary scenarios for RZ\\,Cas have to account for mass loss from the system \\citep{2008A&A...486..919M}, which may be the case for Algols in earlier stages of their evolution in general \\citep[e.g.][]{2006MNRAS.373..435I, 2013A&A...557A..40D, 2015A&A...577A..55D}. Thus, conservative mass transfer is not necessarily justified, and the assumption of mass conservation can only lead to a rough estimate of the amount of mass transferred during the active phase of RZ\\,Cas in 2001.\n\nWe assume that the enhanced value of $v\\sin{i}$\\ in 2001 points to an acceleration of the outer layers of the primary of RZ\\,Cas due to the mass-transfer episode occurring close to that year. 
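The conservative mass-transfer estimate above can be sketched as follows. This is not the authors' code: the relation $\dot P/P = 3|\dot M_2|(M_1-M_2)/(M_1 M_2)$ follows from conservative mass transfer with conserved orbital angular momentum, and the adopted value of $\dot P/P$ below is an illustrative assumption, not a number taken from the paper.

```python
# Sketch (not the authors' code): |dM2/dt| from the relative period
# change, assuming conservative mass transfer and conservation of
# orbital angular momentum.  pdot_over_p is a hypothetical example.

def mass_transfer_rate(m1, m2, pdot_over_p):
    """|dM2/dt| in M_sun/yr for conservative transfer from the less
    massive donor m2 to the gainer m1 (masses in M_sun, pdot_over_p
    in 1/yr), using Pdot/P = 3*|Mdot2|*(m1 - m2)/(m1*m2)."""
    return pdot_over_p * m1 * m2 / (3.0 * (m1 - m2))

# RZ Cas masses from the paper; Pdot/P is an assumed example value.
m1, m2 = 1.951, 0.684
pdot_over_p = 4.3e-6          # assumed relative period change per year
mdot = mass_transfer_rate(m1, m2, pdot_over_p)
print(f"{mdot:.2e} M_sun/yr")   # prints 1.51e-06 M_sun/yr
```

With this assumed period change the formula reproduces the order of magnitude ($\sim$10$^{-6}$\,M$_\odot$\,yr$^{-1}$) quoted in the text.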
This then dropped down by about 5\\,km\\,s$^{-1}$\\ and it could be that its slight increase after 2009 is correlated with a slightly increased activity as indicated by the variation of $F_1$ and rms.\n\n\\section{Conclusions}\\label{Sect9}\n\nIn this first of three articles related to spectroscopic long-term monitoring of RZ\\,Cas, we investigated high-resolution spectra with respect to stellar and system parameters based on the RVs and LSD profiles of its components calculated with the LSDbinary program. The main goal was to search for further episodes of enhanced mass transfer occurring after 2001 and for a general timescale of variations possibly caused by the magnetic cycle of the cool companion.\n\nFrom spectrum analysis we determined precise atmospheric parameters, among them low metal surface abundances, in particular [Fe\/H] of $-0.42$ and [C\/H] of $-0.80$ for the primary and [Fe\/H] of the same order for the secondary. The carbon deficiency observed for the primary gives evidence that the outer layers of the cool secondary have been stripped in the fast mass-transfer phase down to its core so that CNO-cycled material was transferred to the outer layers of the primary in later stages of evolution. The derived $T_{\\rm eff}$\\ of the components and the $\\log{g}$\\ of the primary agree within the 1$\\sigma$ error bars with the results from LC analysis by \\citet{2004MNRAS.347.1317R}. From the RV analysis with PHOEBE, we derived very precise masses and separation of the components of $M_1$\\,=\\,1.951(5)\\,M$_\\odot$, $M_2$\\,=\\,0.684(1)\\,M$_\\odot$, and $a$\\,=\\,6.546(5)\\,R$_\\odot$.\n\nFrom several of the investigated parameters that show seasonal variations, such as orbital period, $v\\sin{i}$, strength of the spots on the secondary, synchronisation factor calculated with PHOEBE, and rms of the RV residuals after subtracting the PHOEBE solutions, we deduce a common time scale of the order of 9 years. 
The variation of the orbital period is complex and can be described in detail only when adding further periods. We conclude that we see the effects of a 9-year magnetic activity cycle of the cool companion of RZ\\,Cas, caused by an 18-year dynamo cycle that includes a reversal of polarity. This conclusion is strongly supported by the behaviour of the two dark spots on the surface of the secondary that show the flip-flop effect in their strengths and one spot that shows an 18-year periodicity in longitudinal migration.\n\nFrom the variations of orbital period and $v\\sin{i}$\\ around 2001, we conclude that the determined $v\\sin{i}$\\ is the projected equatorial rotation velocity of the outer layers of the primary accelerated by transferred matter and does not stand for the rotation velocity of the star as a whole. In all other seasons, the measured $v\\sin{i}$\\ point to a sub-synchronous rotation of the gainer. At the present stage, we cannot give a physical explanation for this new and interesting finding, however.\n\nBased on our available data, we conclude that RZ\\,Cas was undergoing an episode of high mass transfer in 2001, in a quiet phase in 2006 and 2009, followed by a slightly more active phase in 2013 and 2014, and again in a quiet phase in 2015 and 2016. Because we did not observe the star in 2010 and 2011, we cannot exclude that a second episode of high mass transfer occurred in these years, which would agree with the derived magnetic activity cycle of nine years.\n\nThe results of our investigation of high-frequency oscillations of the primary of RZ\\,Cas will be presented in Part II of this article. In a third article, we will investigate the accretion-induced variability of He\\,I lines detected in the spectra.\n\n\\begin{acknowledgements}\nHL and FP acknowledge support from DFG grant LE 1102\/3-1. VT acknowledges support from RFBR grant No. 15-52-12371. 
AD is financially supported by the Croatian Science Foundation through grant IP 2014-09-8656 and Erciyes University Scientific Research Projects Coordination Unit under grant number MAP-2020-9749. The research leading to these results has (partially) received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement N$^\\circ$670519: MAMSIE), from the KU~Leuven Research Council (grant C16\/18\/005: PARADISE), from the Research Foundation Flanders (FWO) under grant agreement G0H5416N (ERC Runner Up Project), as well as from the BELgian federal Science Policy Office (BELSPO) through PRODEX grant PLATO. Results are partly based on observations obtained with the HERMES spectrograph, which is supported by the Research Foundation - Flanders (FWO), Belgium, the Research Council of KU Leuven, Belgium, the Fonds National de la Recherche Scientifique (F.R.S.-FNRS), Belgium, the Royal Observatory of Belgium, the Observatoire de Gen\\`eve, Switzerland and the Th\\\"uringer Landessternwarte Tautenburg, Germany.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nRecently much attention has been given to connections between\ncriticality and self-organized criticality (SOC) and evolutionary\nphenomenon, particularly punctuated equilibrium, and to\nconnections between SOC and synchronization. \nWe describe a model \nwhich we hope draws some connection between these 2 ideas.\nSOC has been proposed to describe out of equilibrium systems\nthat are critical, that self-organize into a scale invariant\ncritical state without tuning of a control\nparameter and show fractal time series.\\cite{rf:52} \nEvolution SOC type models \\cite{rf:60,rf:68,rf:45}\nhave been proposed to explain \npunctuated equilibrium. 
\\cite{rf:66,rf:67}\nPunctuated equilibrium is the phenomenon observed\nin the fossil record where long periods of stasis are interrupted\nby sudden bursts of evolutionary change.\nKauffman and Johnsen \\cite{rf:64} have modeled co-evolution,\nwhere agents live on a coupled fitness landscape and walk around by\nrandom mutation; only moves to higher fitnesses are allowed.\nOnce at a local maximum the walk stops until moves by another\nagent deform the landscape so the agent is no longer at a maximum.\nKauffman \\cite{rf:64} has linked this to SOC. Bak,\nSneppen and Flyvbjerg \\cite{rf:60,rf:68} have taken a similar\napproach. They define a species as a barrier to increasing fitness and choose the\nleast fit, then randomly change its barrier and the barriers of\nother agents. The system evolves to a critical state with a self-organized\nfitness threshold.\nSOC has also been linked to periodic\nbehaviour.\\cite{rf:50,rf:56,rf:58,rf:70}\nA.~Corral et al. and Bottani \\cite{rf:56,rf:50} say there is a close relationship between\nSOC and synchronization: SOC appears when a system that would\notherwise synchronise totally or partially is perturbed.\nThe perturbation may be open boundary conditions rather than\nperiodic ones,\\cite{rf:50,rf:70} randomness present in the initial conditions which is preserved by\nthe dynamic, or the addition of noise.\nThe model we study appears to link these 2 ideas with the\nemergence of avalanches of partial synchronization on all\nscales. However, all these models are real-space models, whereas\nours is a mean-field model with no spatial dimension. Our model\nis entirely deterministic: the critical state is produced by\ncertain initial conditions; indeed, other initial conditions\nproduce completely periodic states.\nOur model is also not, strictly speaking, an SOC model, since\ncritical behaviour only occurs for a certain range of the\nparameter and then only for certain initial conditions. 
A complete analysis of the initial conditions is outside the scope of this paper.\n\nOur model was originally\nmotivated as a model of the behaviour of speculating traders in a\nfinancial market in the spirit of co-evolution.\nRecent results have shown stock price time\nseries to be fractal with Hurst exponent different from 0.5,\n\\cite{rf:72} and with positive Lyapunov exponent.\n\\cite{rf:33,rf:34,rf:39}\nScrambling daily returns changes the Hurst exponent back to 0.5.\nLarge crashes have been supposed\nto be due to exogenous shocks, where information enters the\nmarket randomly.\nHowever, large crashes interspersed with periods of slow\ngrowth are strongly reminiscent of punctuated equilibrium. Indeed,\nMandelbrot has noted that large changes of cotton prices occur in\noscillatory groups and that the movement in tranquil periods is smoother than\npredicted.\\cite{rf:77}\nScaling behavior has been noted in a financial\nindex and in the size of companies.\\cite{rf:100,rf:101} Stanley et\nal. have noted `scaling laws used to describe complex systems\ncomprised of many interacting inanimate particles (as in many\nphysical systems) may be usefully extended to describe complex\nsystems comprised of many interacting animate subsystems (as in\neconomics).'\n\nVarious models have been proposed to explain\nmarket movements.\\cite{rf:36,rf:38,rf:6}\nSato and Takayasu have proposed a threshold-type\nmodel.\\cite{rf:73,rf:73a}\nSince critical states can produce avalanches on all scales,\nwithout the need for exogenous shocks, we believe critical-type dynamics\nare present in financial market dynamics. \n\n\\section{Model}\n\\noindent\nWe hope to model co-evolutionary phenomena where the micro-level\nitself defines the macro-level but is also slaved to the\nmacro-level. 
This is very evident in speculative financial market dynamics,\nwhere a collection of individuals (micro) trade, thereby creating a price time\nseries (macro), but determine their trading behaviour by reference to\nthis same price series and other macro variables.\nWe desired to make a model in analogy to this phenomenon. \n\nThis is a highly stylized toy model of a stock market. There are\n$N$ agents which are represented by spins $s_{i}(t)$, where\n$s_{i}(t)=1$ means agent $i$ owns the stock and\n$s_{i}(t)=-1$ means it does not own the stock at time $t$. Each agent also has an\nabsolute fitness $F_{i}(t)$ and a relative fitness\n$f_{i}(t)=F_{i}(t)-F(t)$, where the mean-fitness\n$F(t)=\\frac{1}{N}\\sum_{i=1}^{N}F_{i}(t)$.\nWe believe speculative traders are part of 2 crowds, bulls and bears, and our\nmacrovariable is `groupthink' $G(t)$, defined by\n\\begin{equation}\nG(t)=\\Delta P(t)=\\frac{1}{N}\\sum_{i=1}^{N}s_{i}(t)\n\\end{equation}\nThe dynamic is:\n\\begin{equation}\n\\Delta s_{i}(t)=s_{i}(t+1)-s_{i}(t)=\\left\\{\\begin{array}{ll}\n-2s_{i}(t) & f_{i}(t)\\leq 0\\\\\n0 & f_{i}(t)>0\n\\end{array}\n\\right.\n\\end{equation}\n\\begin{equation}\n\\Delta F_{i}(t)=F_{i}(t+1)-F_{i}(t)=-\\frac{1}{2}\\Delta s_{i}(t)G(t)+\\frac{1}{2}|\\Delta s_{i}(t)|c\n\\end{equation}\n\nThe dynamic is synchronous and deterministic. First $G(t)$ and\n$F(t)$ are calculated, then all agents are updated according to\n(2) and (3). The price $P(t)$ is defined by (1) and\n$P(0)=0$. Initially the $s_{i}(0)$ are chosen randomly with\nprobability 1\/2 and the $F_{i}(0)$ are chosen randomly from the\ninterval $[-1,1]$.\n\n$G(t)$ measures the bullishness or bearishness of the crowd.\nAlthough their usage differs from ours, Callan and Shapiro mention groupthink\nin their Theory of Social Imitation,\\cite{rf:75} and Vaga's\\cite{rf:76,rf:72} Coherent\nMarket Hypothesis explicitly includes a variable called\n{\\it groupthink}. 
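The update rules (1)-(3) can be sketched in a few lines of code. This is an illustrative reimplementation under the stated rules, not the authors' program; the system size, cost $c$, and run length are arbitrary choices.

```python
# Illustrative sketch (an assumption, not the authors' program) of
# the dynamic of Eqs. (1)-(3): all agents with relative fitness
# f_i <= 0 flip their spin; flipping against groupthink G raises the
# fitness, and the small cost c drives unfit agents slowly upward.
import random

def simulate(n=80, c=0.01, steps=1000, seed=1):
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]       # spins s_i(0)
    F = [rng.uniform(-1.0, 1.0) for _ in range(n)]    # fitnesses F_i(0)
    price, returns = 0.0, []
    for _ in range(steps):
        G = sum(s) / n                 # Eq. (1): groupthink = Delta P
        Fbar = sum(F) / n              # mean fitness F(t)
        for i in range(n):             # synchronous update, Eqs. (2), (3)
            if F[i] - Fbar <= 0:       # relative fitness f_i <= 0
                ds = -2 * s[i]         # flip the spin
                s[i] += ds
                F[i] += -0.5 * ds * G + 0.5 * abs(ds) * c
        price += G                     # P(t) accumulates the returns
        returns.append(G)
    return price, returns

price, returns = simulate()
```

The `returns` series of such a run can then be inspected for the calm anti-phase periods and volatility bursts described in the Results section.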
\n\nWe believe speculative agents determine their spin state\ndepending on whether they believe the market will move in their favour in the future.\nTherefore our agents have an absolute fitness $F_{i}(t)$ which measures their\nperception of whether they are in a good position with respect\nto the future. If $F_{i}(t)$ is relatively high their state will\nbe stable, and if $F_{i}(t)$ is relatively low they will want to\nchange their current state. Many ways to define $F_{i}(t)$ are\npossible. In this model we define it by analogy to Plummer. Agents\nconsider the market to be `overbought' or `oversold'.\nIn our simplistic model this is measured by $G(t)$. An agent is fit if it is in the minority\ngroup. According to Plummer, when most agents\nare in one position then there must be less buying into this\nposition (because there are only a finite number of agents) and\ntherefore the market will eventually correct itself (change\ndirection) because its growth will not be sustainable. It is always profitable, then,\nto be in the minority group before a correction. At a correction\nthe dominant crowd breaks, the macro position dissolves, the market\nmay crash, and subsequently bull and bear crowds will begin to reform.\nIn fact at these times the agents may trade in\n2 macro-clusters or chaotically, with the market attaining high\nvolatility which persists for some time.\nThis type of trader has been called a {\\it sheep trader} \\cite{rf:36,rf:38}, in\ncontrast to fundamentalist speculators and\nnon-speculators.\nTherefore in this model an agent's fitness is increased if it\nchanges from the majority group to the minority group, with the increase\nproportional to the size of the majority. The opposite is\napplied if it changes the other way.\nIf an agent doesn't change its state then its absolute fitness\n$F_{i}(t)$ is not changed, regardless of whether $G(t)$ changes.\nAn agent also has a relative fitness $f_{i}(t)$. 
The $f_{i}(t)$ are the\nbehaviour-controlling variables in this model. They may change\nin 2 ways. Firstly, an agent $\\alpha$ may change its state $s_{\\alpha}(t)$, thereby\ndirectly changing $F_{\\alpha}(t)$ and $f_{\\alpha}(t)$. This is similar to\na single adaptive move on a fitness landscape by an individual optimizing\nagent. Secondly, co-evolution may occur. Here an agent's\nrelative fitness $f_{\\beta}(t)$ may change due to changes in the\nother agents' fitnesses $F_{i}(t)$ changing $F(t)$ while\n$F_{\\beta}(t)$ remains constant.\n\nTo model evolution, then, we follow natural selection by analogy\nand mutate unfit agents and leave fit agents unchanged (although\ntheir relative fitnesses may change), as in Kauffman\\cite{rf:64} and Bak et al.\\cite{rf:60,rf:68}\nMutation is considered to be a state change, and this changes an agent's fitness according\nto (3). In this model, since there are only 2 possible states, this means we simply flip the\nstate. (In a more extensive model this would correspond to\nchanges to an ownership portfolio vector.) To decide which spins flip we could\ncompare pairs of fitnesses and change the least fit.\nThat is, we could choose 2 agents $\\alpha$ and $\\beta$ and let\nthem compete, so that if\n$F_{\\alpha}>F_{\\beta}$ then we set\n$s_{\\alpha}(t+1)=s_{\\alpha}(t)$ and\n$s_{\\beta}(t+1)=-s_{\\beta}(t)$.\nHowever, in this paper we simply\ntake a mean-fitness approach. That is, all agents $i$ whose\nfitnesses $F_{i}(t)$ fulfill $F_{i}(t)\\leq F(t)$, i.e.\n$f_{i}(t)\\leq 0$,\nflip their spins, and their fitnesses change according to (3).\nAll other agents' states and absolute fitnesses $F_{i}(t)$ do\nnot change, although their relative fitnesses $f_{i}(t)$ of\ncourse do.\nTherefore fit agents, which could be considered to be at a local\nmaximum, do not change their states until the mean-fitness $F(t)$ has\nbecome equal to their fitnesses $F_{i}(t)$.\n\nThis means the fitnesses are all internally defined emergent\nproperties, as in co-evolution. 
Of course, if there is no overall crowd\npolarisation then changing state does not change fitness.\n\nTherefore the fitness update rule (3) can be seen as the\nadaptive-walk part, and this is the reason why we do not simply\nset $F_{i}(t)=-s_{i}(t)G(t)$ or $\\Delta F_{i}(t)=-s_{i}(t)G(t)$\ncontinuously for all agents. We\nhope agents will take time to walk out of unfit states and that\nfit maxima will be created which persist for some time. Our\nabsolute fitness is therefore cumulative and is only changed for unfit\nagents. More realistically, we could\nthink of agents imperfectly sampling the market, i.e. $G(t)$, at a series of\ntimes to determine their current absolute fitness. In\nfact we see that the concepts of relative fitness and absolute\nfitness are very similar to the concept of `bounded\nrationality.' An agent's rationality is bounded because he only\nmakes local adaptive moves and can perceive only his absolute\nfitness but not the overall mean-fitness or his relative fitness.\n\nThis fitness of the position is natural in the sense that it can be seen as\na kind of potential for future profit, usually termed `utility'\nin economics. The fitter an agent is,\nthe less likely it will want to change its state, and the more\nstable it is, because it believes the market to be oversold in\nits favour.\n\nSince $G(t)$ will on average be $0$,\nwe additionally include in equation (3) a very small control\nparameter $c$ which controls the driving rate. We add this to\nall fitnesses below the mean so that unfit agents on\naverage will become fitter and interact with the fit\nagents. It is a general characteristic of evolutionary systems\nthat single-entity moves on a fitness landscape should be on\naverage uphill. \n\nOur market price $P(t)$ is defined by $\\Delta P(t)=G(t)$, i.e.\nthe price increases while more people own the share than don't own\nit, and the price is theoretically unbounded, as it should\nbe. Positive groupthink means a positive increase, and\nvice versa. 
This is similar to the way prices are usually\ndefined by $\\Delta p(t)\\propto Z(t)$, where $Z(t)$\nis the excess demand for something.\nThis model does not include a fixed number of\nshares. Indeed, any trader may independently buy or sell a share\nwithout the notion of swapping. This reflects the fact that this\nis a model of only speculative behaviour, and part of a much\nlarger pool of shares. However, a more realistic model should\ninclude a fixed number of shares.\n\nThis model is intended to be a suggestive\nillustration rather than a realistic stock market. \n\n\\section{Results}\n\nShown in Fig.1a is a time series of the fitnesses $F_{i}(t)$\nfor an $N=80$ system for $c=0.01$. Punctuated equilibrium behaviour is\nclearly visible, with periods of relative stasis interspersed\nwith sudden jumps.\nAlthough not shown, the mean-fitness time series $F(t)$\nshows changes on all scales, similar to a devil's staircase. Also\nshown in Fig.1b is the corresponding daily returns time series\n$\\Delta P(t)=G(t)$;\nthis also shows calm periods and sudden bursts of high\nvolatility. In fact this behaviour is a kind of intermittent\npartial synchronization. Shown in Fig.2 is the same time\nseries but with a small portion magnified. Macroscopic\nsynchronization can be seen. Partial synchronizations show various\ndifferent periods and complexities, and persist for various\nlengths of time.\nClustering allows\nsynchronized spins to trade in phase with groupthink $G(t)$, thereby\nrapidly increasing their fitnesses, or out of phase, thereby\nbecoming less fit; this is the origin of the sudden large\nchanges in fitness.\nAlso at these times the fitness deviation suddenly\nincreases (not shown). 
Between periods of large-scale partial\nsynchronization with high volatility, periods\nof calm are characterized by a small even number of spins\nflipping in anti-phase; they therefore increase their $F_{i}$\nonly slowly due to the driving parameter $c$, the returns $G(t)$\nremaining roughly constant during these periods with the fitness deviation\ndecreasing. (Of course, anti-phase flipping with no average\nincrease in fitness is prevented in a real market by a\nfixed transaction cost. A more realistic model must include this.)\nWhen the mean-fitness $F(t)$, which is usually increasing,\ncrosses some non-flipping $F_{i}$, this spin flips and may cause the\n$F(t)$ to cross some more $F_{i}$, possibly starting an\navalanche. This only happens when the total fitness deviation is\nsmall.\n\nShown in Fig.3 are two price $P(t)$ time series for\n$c=0.013$. Their fractal, slightly repetitive pattern is highly\nreminiscent of real financial time series.\n\nSince this model is deterministic, completely periodic\nstates are also possible. Shown in Fig.4 is the time average $<R>$,\nwhere $R(t)=\\sum_{i=1}^{N}|\\Delta s_{i}(t)|$\nis the number of spins which flip at any time and $<\\ldots >$\ndenotes time averaging. In Fig.4a are time series for $c=0.0113$,\nwhile Fig.4b is for $c=0.01$. \n\nFig.4a shows that one time series finds\nthe 2-cluster periodic state, where 2 groups alternately topple.\nHere 59 time series are included in the non-periodic state. If\nthis is a transient, it is extremely long even for the moderate size\n$N=200$. Fig.4b shows that at more regular $c$ values the system\ncan find periodic states with larger numbers of\nclusters. Roughly half of the 60 series investigated become\nperiodic by $t=2.5\\times 10^{7}$.\nShown in Fig.5a is $<R>$ plotted against $c$. 
\nIn fact, to construct this plot we\ndiscarded $3\\times 10^{7}$ time steps and then averaged over the next\n$20,000$; each point represents a different initial condition\nand there are $8$ for each cost $c=0.001x+0.000138$, where $x$ is an integer.\nFor small cost, critical-type behaviour is evident,\nwith a sudden phase transition at $c=0$. This is of course\nbecause at negative $c$ less fit spins continuously flip and\nnever interact with spins at greater than mean fitness. In fact\nthe system divides into a frozen solid fit component and an\nunfit gaseous-type component for negative $c$. The size of the\nfrozen component depends on the initial conditions, as can be seen from the points\nat negative cost. For larger positive cost an upper branch of\nperiodic attractors at\n$<R>=100$, half the system size, is evident; the\nsystem has settled into 2 alternately toppling clusters which\ninterleave the mean-fitness $F(t)$. The lower branch is\ncharacterised by the punctuated equilibrium state shown in Fig.1.\nFig.5b shows the time average $<S>$ of an entropy-type\nquantity of the fitness distribution, $S(t)$, given by\n$S(t)=-\\sum_{i=1}^{N}\\frac{|f_{i}(t)|}{f(t)}\\ln\\frac{|f_{i}(t)|}{f(t)}$,\nwhere $f_{i}(t)$ is the fitness deviation and\n$f(t)=\\sum_{i=1}^{N}|f_{i}(t)|$. The averaging is the same as\nfor $<R>$.\nFor this $N=200$ system the maximum is $S=\\ln 200\\approx\n5.3$, and the periodic points at positive and negative cost are\nvery near this.\nThe punctuated equilibrium state which exists near\nthe transition is more ordered, at lower entropy. \n\nThis is our first evidence of critical behaviour for small $c$. \nA second piece of evidence is obtained by\nlooking at the distribution of avalanches. In the punctuated\nequilibrium state the system finds a state characterised by\nfluctuations on all scales. Shown in Fig.6 is the\ndistribution $P(R)$ against $R$, where $P(R)$ is the\nprobability of an avalanche of size $R$. 
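The kind of power-law fit used for the avalanche-size distribution $P(R)$ can be sketched as follows. This is not the authors' analysis code; the sample below is synthetic, with an exponent of $-2$ chosen purely for illustration.

```python
# Sketch (not the authors' analysis code): estimating a power-law
# exponent alpha from a sample of avalanche sizes by a least-squares
# fit of log10 P(R) against log10 R.  All numbers are illustrative.
import math
import random
from collections import Counter

def powerlaw_exponent(sizes, min_count=10):
    """Slope of log10 P(R) vs log10 R over well-sampled sizes."""
    counts = Counter(sizes)
    total = len(sizes)
    pts = [(math.log10(r), math.log10(c / total))
           for r, c in counts.items() if c >= min_count]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

# Synthetic avalanche sizes with P(R) ~ R^-2 via inverse transform.
rng = random.Random(0)
sizes = [max(1, int(1.0 / rng.random())) for _ in range(100_000)]
alpha = powerlaw_exponent(sizes)
print(alpha)   # roughly -2 for this synthetic sample
```

Restricting the fit to well-sampled sizes (here, counts of at least 10) mimics cutting the fit off before the finite-size peak and tail described below.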
These are distributions\nof avalanches for a single time series for 3 different system\nsizes. They are not ensembles of time series; this distribution\nis independent of the initial conditions, and any non-periodic\ntime series contains all avalanche sizes. \n\nThe time series\nwere of length $T=16,000,000$, near the transition point at\n$c=0.0113$.\nThe distribution shows scale invariance, $P(R)\\sim R^{\\alpha}$, up\nto about half the system size. At half the system size there is a peak\nwhere the system almost finds the periodic attractor and spends\nmore time in these states.\nAfter this the distribution continues\nto the cutoff near the system size. The scaling exponent $\\alpha$ taken\nfrom the $N=3000$ distribution is $\\alpha =-1.085\\pm 0.002$.\nAlso shown in Fig.7 is the distribution of the magnitude of\nchanges in mean fitness, $\\Delta F(t)=|F(t+1)-F(t)|$, the steps in the devil's\nstaircase.\nThe time series are\nthe same as in Fig.6 for 2 different system sizes; there is\nno ensemble averaging. In fact the two distributions for\n$N=1500,923$ are almost identical; if we were to superpose them,\nonly one could be seen. This is also true for other system\nsizes. Peaks appear at $\\Delta F\n\\approx 0.12,0.5,0.75$. Between the peaks we see scaling\nregimes. Here we see at least 2\nscaling regimes, $P(\\Delta F)\\sim \\Delta F^{\\beta}$, where for\n$\\Delta F \\le 0.1$, $\\beta =-1.25 \\pm 0.003$, and for $0.1\\le\n\\Delta F \\le 0.4$, $\\beta =-1.39 \\pm 0.02$. Possibly there is\nanother scaling regime for $0.55\\le \\Delta F \\le 0.7$.\n\\section{Conclusion}\n\\noindent\nThis model illustrates an interesting relation between critical\nphenomena and punctuated equilibrium on the one hand, and\nbetween partial synchronization and punctuated equilibrium on\nthe other hand. The system synchronizes for certain cost $c$ and\ncertain initial conditions; otherwise it shows critical\nbehaviour, similar to the SOC models cited in the introduction. 
We\nbelieve this deserves further\ninvestigation. We also find an interesting phase transition.\n\nSome typical behaviour of\nmoney markets is present here, especially the periods of low\nvolatility, where the price is relatively stable and the fitness\ngrows slowly while the fitness deviation decreases slowly,\ninterrupted by shorter periods of persistent high volatility and macroscopic\noscillations which are observed in real time series. We wonder whether,\nas in earthquake dynamics, which are often modelled by SOC\ndynamics, a large crash in a real financial market is preceded\nby some smaller self-reinforcing oscillatory pre-shock, as is\nseen in our dynamics here. Also the price time series is\nhighly suggestive of real time series, with\nformations similar to `double tops' and `rebounds' described in\nquantitative analysis, produced by the\nnear periodic macro behaviour which can appear. The\nslightly repetitive self-similarity reminds us of financial time series.\n\nMany possible models of financial market dynamics can be\nplausibly suggested, including many exhibiting threshold\ndynamics, since data concerning the micro behaviour of\nindividual traders is not available.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzoivt b/data_all_eng_slimpj/shuffled/split2/finalzzoivt new file mode 100644 index 0000000000000000000000000000000000000000..48406328ed4a36b7afb77e763b4e382b3f5ad06c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzoivt @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\label{sec:intro}\n\n\nSo far, the LHC has not observed any particles beyond those of the\nStandard Model (SM). Complementary to direct high-energy searches at\nthe LHC, there is a continuous effort in indirect searches for new\nphysics (NP). 
In this respect, a promising approach is the search for\nprocesses which are absent -- or extremely suppressed -- in the SM\nsuch as lepton flavour violation (LFV) which is forbidden in the SM in\nthe limit of vanishing neutrino masses. The experimental sensitivity\nfor rare LFV processes such as $\\ell\\rightarrow \\ell^{\\prime}\\gamma$,\n$\\mu \\to e$ conversion in nuclei and $\\ell\\rightarrow \\ell^{\\prime}\n\\mu^+\\mu^-$ or $\\ell\\rightarrow \\ell^{\\prime} e^+e^-$ will improve\nsignificantly in the near future, probing scales well beyond those\naccessible at foreseeable colliders. Furthermore, the discovery of\nthe 125 \\ensuremath{\\,\\mbox{GeV}}{} Higgs boson $h$ \\cite{Aad:2012tfa,Chatrchyan:2012ufa}\nhas triggered an enormous experimental effort in measuring its\nproperties, including studies of its LFV decays. The most recent\nexperimental limits {on the LFV processes} are given in\nTable~\\ref{tab:leplim} {in Sec.~\\ref{sec:pheno}}.\n\nMany studies of LFV processes within the MSSM (and possible extensions\nof it) exist (see e.g. Refs.\\cite{Borzumati:1986qx, Casas:2001sr,\n Masina:2002mv, Brignole:2004ah, Paradisi:2005fk, Fukuyama:2005bh,\n Paradisi:2005tk, Dedes:2006ni, Dedes:2007ef, Antusch:2007dj,\n Arganda:2008jj, Ilakovac:2009jf, Calibbi:2009ja,\n Altmannshofer:2009ne, Calibbi:2006nq, Calibbi:2009wk, Hisano:2009ae,\n Girrbach:2009uy, Biggio:2010me, Esteves:2010ff, Ilakovac:2012sh,\n Arana-Catania:2013ggc, Goto:2014vga, Abada:2014kba, Vicente:2015cka,\n Bonilla:2016fqd, Lindner:2016bgg} and Ref.~\\cite{Calibbi:2017uvl}\nfor a recent review). In this article we revisit this subject in the\nlight of the new calculational methods which have been recently\ndeveloped~\\cite{Dedes:2015twa, Rosiek:2015jua}. These methods allow\nfor a systematic expansion of the amplitudes of the LFV processes in\nterms of mass insertions (MI), i.e. in terms of off-diagonal elements\nof the mass matrices. 
We show that a transparent qualitative\nbehaviour of the amplitudes of the LFV processes is obtained by\nexpanding them not only in the flavour-violating off-diagonal terms in\nthe sfermion mass matrices but also in the flavour conserving but\nchirality violating entries related to the tri-linear $A$-terms as\nwell as in the off-diagonal terms of the gaugino and higgsino mass\nmatrices. This procedure is useful because in the MI approximation we\nwork directly with the parameters of the Lagrangian and can therefore\neasily put experimental bounds on them. We compare the results of the\ncalculations performed in the mass eigenbasis (i.e. using a numerical\ndiagonalization of the slepton mass matrices) with those obtained at\nleading non-vanishing order of the MI approximation, in different\nregions of the supersymmetric parameter space and considering various\ndecoupling limits. Of course, the MI\napproximation~\\cite{Gabbiani:1996hi, Misiak:1997ei} has already been\nexplored for many years as a very useful tool in flavour physics.\nHowever, a detailed comparison between the full calculation and the MI\napproximation is still lacking, partly because a fully systematic\ndiscussion of the MI approximation~\\cite{Dedes:2015twa} to any order\nand the technical tools facilitating it~\\cite{Rosiek:2015jua} have not\nbeen available until recently.\n\nConcerning the phenomenology, we summarise and update the bounds on\nthe flavour violating SUSY parameters, show their complementarity and\nexamine the impact of the anticipated increase in the experimental\nsensitivity. 
We investigate in detail the decay $h \to \mu \tau$,\nshowing the results in various decoupling limits, and analyse the role\nof the so-called non-holomorphic $A$-terms~\cite{deWit:1996kc,\n  Matone:1996bj, Bellisai:1997ck, Dine:1997nq, ArkaniHamed:1998wc,\n  GonzalezRey:1998gz, Buchbinder:1998qd, Martin:1999hc}, which are\nusually neglected in the literature.\nWe also avoid simplifying assumptions on the sparticle spectrum and\nassume neither degeneracies nor hierarchies among the supersymmetric\nparticles.\n\nThis article is structured as follows: in Sec.~\ref{sec:efflag} we\nestablish our conventions and present the results for the 2-point,\n3-point, and 4-point functions related to flavour violating charged\nlepton interactions in the mass eigenbasis, i.e. expressed in terms of\nrotation matrices and physical masses. Sec.~\ref{sec:lfv} contains\nthe formulae for the decay rates of the processes under investigation.\nIn Sec.~\ref{sec:memi} we discuss the MI expansion and summarise\nimportant properties of the decoupling limits $M_{\rm SUSY}\to\infty$\nand $M_A\to \infty$. In Sec.~\ref{sec:pheno} we present the numerical\nbounds on LFV parameters obtained from current experimental\nmeasurements and discuss the dependence of the results on the SUSY\nspectrum. We also discuss the correlations between the radiative\ndecays and the 3-body decays of charged leptons as well as the\nnon-decoupling effects in LFV neutral Higgs decays. Finally we\nconclude in Sec.~\ref{sec:conclusion}. All required Feynman rules\nused in our calculations are collected in appendix~\ref{app:lagr}. The\ndefinitions of loop integrals can be found in appendix~\ref{app:loop}.\nIn appendix~\ref{app:ddiff} we explain the notation for the ``divided\ndifferences'' of the loop functions used in the expanded form of the\namplitudes. 
The expression for the 4-lepton box diagrams and for the\nMI-expanded expression of the amplitudes are given in the\nappendices~\\ref{app:fullbox} and ~\\ref{app:miexp}, respectively.\n\n\n\\section{Effective LFV interactions}\n\\label{sec:efflag}\n\nIn this Section we collect the analytical formula in the mass\neigenbasis for flavour violating interactions generated at the\none-loop level\\footnote{Note that these expressions are not valid in\n the flavour conserving case where additional terms should be\n included and renormalization is required.}. We use the notation and\nconventions for the MSSM as given in Ref.~\\cite{Rosiek:1989rs,\n Rosiek:1995kg}\\footnote{The conventions of~\\cite{Rosiek:1989rs,\n Rosiek:1995kg} are very similar to the later introduced and now\n widely accepted SLHA2~\\cite{Allanach:2008qq} notation, up to the\n minor differences summarised in the Appendix~\\ref{app:lagr}.}.\n\n\nIn our analysis, we include the so-called non-holomorphic trilinear\nsoft SUSY breaking terms:\n\\begin{eqnarray}\nL_{nh} = \\sum_{I,J=1}^3\\sum_{i=1}^2 \\left(A_l^{'IJ} H_i^{2\\star} L_i^I\nR^J+ A_d^{'IJ} H_i^{2\\star} Q_i^I D^J + A_u^{'IJ} H_i^{1\\star} Q_i^I\nU^J + \\mathrm{H.c.} \\right)\\,,\n\\end{eqnarray}\nwhich couple up(down)-sfermions to the down(up)-type Higgs doublets.\nHere, {as throughout the rest of the paper}, capital letters\n$I,J=1,2,3$ denote flavour indices and the small letters $i=1,2$ are\n$SU(2)_L$ indices.\n\n\n\\subsection{$\\gamma-\\ell-\\ell^\\prime$ interactions}\n\nWe define the effective Lagrangian for flavour violating couplings of\nleptons to on-shell photons as\n\\begin{eqnarray}\nL_{\\ell \\gamma }^{} = - e\\sum\\limits_{I,J} {\\left(\n {F_{\\gamma}^{JI}{{\\bar \\ell }^J}{\\sigma _{\\mu \\nu }}{P_L}{\\ell ^I} +\n F_{\\gamma}^{IJ*}{{\\bar \\ell }^J}{\\sigma _{\\mu \\nu }}{P_R}{\\ell\n ^I}} \\right)} {F^{\\mu \\nu 
}}\\,,\n\\label{eq:lgdef}\n\\end{eqnarray}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{pl_llg-eps-converted-to.pdf}\n\\caption{\\small One-loop supersymmetric contributions to the LF\n violating effective lepton-photon interaction (mirror-reflected\n self-energy diagram not shown).\\label{fig:llg}}\n\\end{center}\n\\end{figure}\n\nThe SM contribution to $F_{\\gamma}^{JI}$ is suppressed by powers of\n$m_{\\nu}^2\/M_W^2$ and thus completely negligible. In the mass\neigenbasis the supersymmetric contributions to $F_{\\gamma}^{JI}$ come\nfrom the diagrams displayed in Fig.~\\ref{fig:llg}. Let us decompose\n$F_{\\gamma}$ in the following way\n\\begin{eqnarray}\nF_{\\gamma}^{JI} = F_{\\gamma A}^{JI} - m_J F_{\\gamma L B}^{JI} - m_I\nF_{\\gamma R B}^{JI}\\,,\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n(4\\pi)^2 F_{\\gamma A}^{JI} &=& \\sum_{K=1}^3 \\sum_{n=1}^2\nV_{\\ell\\tilde\\nu C, R}^{JKn*} V_{\\ell\\tilde\\nu C, L}^{IKn} \\; m_{C_n}\nC_{11}(m_{C_n},m_{\\tilde\\nu_K}) \\nonumber\\\\\n&-&\\frac{1}{2} \\sum_{k=1}^6\\sum_{n=1}^4 V_{\\ell \\tilde L N,R}^{Jkn*}\nV_{\\ell \\tilde L N,L}^{Ikn} \\; m_{N_n} C_{12}(m_{\\tilde\n L_k},m_{N_n})\\,,\\nonumber\\\\[3mm]\n(4\\pi)^2 F_{\\gamma L B}^{JI} &=& - \\sum_{K=1}^3 \\sum_{n=1}^2\nV_{\\ell\\tilde\\nu C, L}^{JKn*} V_{\\ell\\tilde\\nu C, L}^{IKn} \\;\nC_{23}(m_{C_n},m_{\\tilde\\nu_K}) \\nonumber\\\\\n&+& \\frac{1}{2} \\sum_{k=1}^6\\sum_{n=1}^4 V_{\\ell \\tilde L N,L}^{Jkn*}\nV_{\\ell \\tilde L N,L}^{Ikn} C_{23}(m_{\\tilde L_k},m_{N_n})\\,.\n\\label{eq:fgamab}\n\\end{eqnarray}\nHere, $V$ abbreviates the tree-level lepton-slepton-neutrino and\nlepton-sneutrino-chargino vertices, i.e. the subscripts of $V$ stand\nfor the interacting particles and the chirality of the lepton\ninvolved. The super-scripts refer to the lepton or slepton flavour as\nwell as to the chargino and neutralino involved. 
The specific form of\nthe chargino and neutralino vertices $V_{L(R)}$ is defined in\nAppendix~\ref{app:lagr} and the 3-point loop functions $C_{ij}$ are\ngiven in Appendix~\ref{app:loop}. $F_{\gamma A}$ ($F_{\gamma L B}$)\ndenotes the part of the amplitude which is (not) proportional to the\nmasses of the fermions exchanged in the loop. $F_{\gamma R B}$ can be\nobtained from $F_{\gamma L B}$ by exchanging $L\leftrightarrow R$ on\nthe RHS of~\eq{eq:fgamab}.\n\nGauge invariance requires that LFV (axial) vectorial photon couplings\nvanish for on-shell external particles. However, off-shell photon\ncontributions are necessary to calculate three-body decays of charged\nleptons. The vectorial part of the amplitude for the $\gamma\ell\ell'$\nvertex can be written as\n\begin{eqnarray}\ni A_{\gamma}^{JI\;\mu} = ie q^2 \bar u_J (p_J) \left(\Gamma_{\gamma\n  L}^{JI} P_L + \Gamma_{\gamma R}^{JI} P_R \right) \gamma^{\mu}\nu_I(p_I)\;,\n\label{eq:phvect}\n\end{eqnarray}\nwhere $q=p_I-p_J$ and $\Gamma_{\gamma L}^{JI}$ is, at leading order\nin $p^2\/M_{\rm SUSY}^2$, momentum independent and reads\n\begin{eqnarray}\n\Gamma_{\gamma L}^{JI} &=& \sum_{K=1}^3 \sum_{n=1}^2 V_{\ell\tilde\nu C,\n  L}^{JKn*} V_{\ell\tilde\nu C, L}^{IKn} \;\nC_{01}(m_{C_n},m_{\tilde\nu_K}) \nonumber\\ &-& \sum_{k=1}^6\sum_{n=1}^4\nV_{\ell \tilde L N,L}^{Jkn*} V_{\ell \tilde L N,L}^{Ikn} \;\nC_{02}(m_{N_n},m_{\tilde L_k})\,.\n\label{eq:llgv}\n\end{eqnarray}\n$\Gamma_{\gamma R}^{JI}$ can be obtained by replacing\n$L\leftrightarrow R$. 
Again, the loop functions $C_{01},C_{02}$ are\ndefined in Appendix~\ref{app:loop}.\n\nFinally, one should note that for a heavy MSSM spectrum the 2-loop\nBarr-Zee diagrams~\cite{Barr:1990vd} involving the non-decoupling LFV\nHiggs interactions (see Sec.~\ref{sec:hllndec}) are important and have\nto be included~\cite{Chang:1993kw,Hisano:2006mj, Jung:2013hka,\n  Abe:2013qla, Ilisie:2015tra,Crivellin:2015hha}.\n\n\n\subsection{$Z-\ell-\ell^\prime$ interactions}\n\nIn order to calculate the three-body decays of charged leptons to be\nconsidered in Sec.~\ref{sec:llll} it is sufficient to calculate the\neffective $Z-\ell-\ell^\prime$ interactions in the limit of vanishing\nexternal momenta. The Wilson coefficients of the effective Lagrangian\nfor the $Z$ coupling to charged leptons are generated at one-loop\nlevel by the diagrams shown in Fig.~\ref{fig:zll} and can be written\nas\n\begin{eqnarray}\nL_{\ell Z}^{JI} = \left(F_{ZL}^{JI}\bar\ell^J\gamma_\mu P_L \ell^I +\nF_{Z R}^{JI}\bar\ell^J\gamma_\mu P_R \ell^I\right) Z^\mu\,,\n\label{eq:lzdef}\n\end{eqnarray}\nwith\n\begin{eqnarray}\nF_{ZL}^{JI} & = & \Gamma_{ZL}^{JI} - \frac{e(1 - 2 s_W^2)}{2 s_W\n  c_W}\; \Sigma_{VL}^{JI}(0)\,,\nonumber\\\nF_{ZR}^{JI} & = & \Gamma_{ZR}^{JI} + \frac{e s_W}{c_W}\;\n\Sigma_{VR}^{JI}(0)\,.\n\end{eqnarray}\nHere, $\Gamma_{ZL(R)}$ denote the contributions originating from the\none-particle irreducible (1PI) vertex diagrams and $\Sigma_{VL(R)}$ is\nthe left-(right-)handed part of the lepton self-energy defined as\n\begin{eqnarray}\n\Sigma^{JI}(p^2) = \Sigma_{VL}^{JI}(p^2)\, \slashed{p}\,\nP_L+\Sigma_{VR}^{JI}(p^2) \,\slashed{p}\, P_R+\Sigma_{mL}^{JI}(p^2)\n\,P_L+\Sigma_{mR}^{JI}(p^2) \, P_R\;.\n\label{eq:sigdef}\n\end{eqnarray}\n\n\begin{figure}[tb]\n\begin{center}\n\includegraphics[width=0.8\textwidth]{pl_zll-eps-converted-to.pdf}\n\caption{\small One-loop supersymmetric contributions to the LFV\n  effective lepton-$Z^0$ 
interaction (the mirror-reflected self-energy\n diagram not shown).\\label{fig:zll}}\n\\end{center}\n\\end{figure}\n\nContrary to the left- and right-handed magnetic photon-lepton\ncouplings, which change chirality, the $Z\\bar\\ell^I\\ell^J$ coupling is\nchirality conserving. Therefore, the Wilson coefficients of the\nleft-handed and right-handed couplings are not related to each other\nbut rather satisfy $F_{Z L(R)}^{IJ} = F_{Z L(R)}^{JI*}$. In the mass\neigenbasis the vectorial part of the lepton self-energy and the 1PI\ntriangle diagrams are given by (see Appendix~\\ref{app:lagr} for\ndefinitions of vertices $V$)\n\\begin{eqnarray}\n(4\\pi)^2 \\Sigma_{VL}^{JI}(p^2) &=& \\sum_{i=1}^2\\sum_{K=1}^3\nV_{\\ell\\tilde\\nu C,L}^{IKi} V_{\\ell\\tilde\\nu C,L}^{JKi\\,*}\nB_1(p,m_{\\tilde\\nu_K},m_{C_i})\\nonumber\\\\ &+& \\sum_{i=1}^4\\sum_{j=1}^6 V_{\\ell\n \\tilde L N,L}^{Iji} V_{\\ell \\tilde L N,L}^{Jji\\,*}\nB_1(p,m_{L_j},m_{N_i})\\,,\n\\label{eq:sigmaV}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n(4\\pi)^2 \\Gamma_{ZL}^{JI} &=& \\frac{1}{2} \\sum_{i,j=1}^2\\sum_{K=1}^3\nV_{\\ell\\tilde\\nu C,L}^{IKi} V_{\\ell\\tilde\\nu C,L}^{JKj*} \\, \\left(\nV_{CCZ,L}^{ij} C_2(m_{\\tilde\\nu_K},m_{C_i},m_{C_j})\\right.\\nonumber\\\\\n&-&\\left. 2 V_{CCZ,R}^{ij} m_{C_i} m_{C_j}\nC_0(m_{\\tilde\\nu_K},m_{C_i},m_{C_j}) \\right)\\nonumber\\\\\n&+&\\frac{e}{4 s_W c_W} \\sum_{i=1}^2\\sum_{K=1}^3 V_{\\ell\\tilde\\nu\n C,L}^{IKi} V_{\\ell\\tilde\\nu C,L}^{JKi*} \\,\nC_2(m_{\\tilde\\nu_K},m_{\\tilde\\nu_K},m_{C_i}) \\nonumber\\\\\n&+& \\frac{1}{2} \\sum_{j=1}^6\\sum_{i,k=1}^4 V_{\\ell \\tilde L N,L}^{Iji}\nV_{\\ell \\tilde L N,L}^{Jjk\\,*}\\,\\left(V_{NNZ,L}^{ik}\nC_2(m_{L_j},m_{N_i},m_{N_k}) \\right. \\nonumber\\\\\n&-&\\left. 
2 V_{NNZ,R}^{ik} m_{N_i} m_{N_k}\nC_0(m_{L_j},m_{N_i},m_{N_k})\\right)\\nonumber\\\\\n&-& \\frac{1}{2} \\sum_{j,k=1}^6\\sum_{i=1}^4 V_{\\ell \\tilde L N,L}^{Iji}\nV_{\\ell \\tilde L N,L}^{Jki\\,*}\\,V_{LLZ}^{jk}\nC_2(m_{L_j},m_{L_k},m_{N_i})\\,,\n\\label{eq:VZ}\n\\end{eqnarray}\nat vanishing external momenta with obvious replacements\n$L\\leftrightarrow R$ for $\\Sigma_{VR}^{JI},\\, \\Gamma_{ZR}^{JI}$.\n\n\n\\subsection{LFV Higgs interactions}\n\nTo compactify the notation, we denote the CP-even Higgs boson decays\nby $H_0^K\\to \\bar\\ell^I \\ell^J$, where, following again the notation\nof~\\cite{Rosiek:1989rs,Rosiek:1995kg}, $H\\equiv H_0^1, h\\equiv\nH_0^2$. As usual, we denote CP-odd neutral Higgs boson by $A_0$.\n\nIn order to study $h\\to\\ell\\ell^\\prime$ decays precisely, we keep the\nterms depending on the external Higgs mass. Therefore, we assume the\nfollowing effective action governing the LFV Higgs-lepton interaction:\n\\begin{eqnarray}\nA_{H{\\rm eff}}^{\\ell} &=& \\bar \\ell^J(k_J) (F_{h\\ell}^{JIK}(k_J,k_I)\nP_L + F_{h\\ell}^{IJK*}(k_J,k_I) P_R )\\ell^I(k_I) H_0^K(k_I - k_J)\\nonumber\\\\\n& +& \\bar \\ell^J(k_J) (F_{A\\ell}^{JI}(k_J,k_I) P_L +\nF_{A\\ell}^{IJ*}(k_J,k_I) P_R )\\ell^I(k_I) A_0(k_I - k_J) \\,.\n\\end{eqnarray}\nIn addition, to calculate the $\\mu\\to e$ conversion rate one needs to\ninclude the effective Higgs-quark couplings. For this purpose, one can\nset all external momenta to zero and consider the effective Lagrangian\n\\begin{eqnarray}\nL_{H{\\rm eff}}^q &=& \\bar u^J (F_{hu}^{JIK} P_L + F_{hu}^{IJK*} P_R\n)u^I H_0^K + \\bar d^J (F_{hd}^{JIK} P_L + F_{hd}^{IJK*} P_R ) d^I\nH_0^K\\,.\n\\label{eq:lhdef}\n\\end{eqnarray}\nHowever, in this article we consider only the lepton sector and\ntherefore do not give the explicit forms of Higgs quark couplings. 
The\nrelevant 1-loop expressions in the same notation as used in the\ncurrent paper are given in Ref.~\cite{Buras:2002vd}, and the formulae\nthat also take into account non-decoupling chirally enhanced\ncorrections and 2-loop QCD corrections in the general MSSM can be\nfound in Refs.~\cite{Crivellin:2010er, Crivellin:2011jt,\n  Crivellin:2012zz}\footnote{Earlier accounts of chiral resummation\n  can be found in Refs.~\cite{Hall:1993gn, Carena:1994bv,\n    Carena:1999py, Bobeth:2001sq, Babu:1999hn, Isidori:2001fv,\n    Dedes:2002er,Hofer:2009xb,Noth:2010jy}.}.\n\n\n\begin{figure}[tb]\n\begin{center}\n\includegraphics[width=0.8\textwidth]{pl_hll-eps-converted-to.pdf}\n\caption{\small Slepton-neutralino diagrams contributing to the\n  $H_0^K\to \ell^I \bar \ell^J$ {and $A_0\to \ell^I \bar \ell^J$}\n  decays in the MSSM (the mirror-reflected self-energy diagram is\n  omitted).\label{fig:diag}}\n\end{center}\n\end{figure}\n\nAt the 1-loop level there are eight diagrams contributing to the\neffective lepton Yukawa couplings. The ones with slepton and\nneutralino exchange are displayed in Fig.~\ref{fig:diag}, while\ndiagrams with chargino exchange can be obtained by the obvious\nreplacements $N\to C, L\to\tilde\nu$.\n\nThe expressions for $F_h$ are obtained from the 1PI triangle diagrams and\nthe scalar part of the lepton self-energies (see \eq{eq:sigdef}), while the\nchirality conserving parts of the self-energies are absorbed by a\nfield rotation required to go to the physical basis with a diagonal\nlepton mass matrix. 
Therefore,\n\begin{eqnarray}\nF_h^{JIK}(k_J,k_I) &=& \Gamma_{h}^{JIK}(k_J,k_I) -\n\frac{Z_R^{1K}}{v_1} \,\Sigma_{mL}^{JI}(0)\;,\nonumber\\\nF_A^{JI}(k_J,k_I) &=& \Gamma_{A}^{JI}(k_J,k_I) - \frac{i\n  \sin\beta}{v_1} \,\Sigma_{mL}^{JI}(0)\;,\n\label{eq:yukeff}\n\end{eqnarray}\nwhere $Z_R$ denotes the CP-even Higgs mixing matrix (see\nAppendix~\ref{app:lagr}) and the scalar self-energy contributions are\nevaluated at zero momentum transfer and given by:\n\begin{eqnarray}\n(4\pi)^2 \Sigma_{mL}^{JI}(0) & = & \sum_{i=1}^2\sum_{L=1}^3 m_{C_i}\nV_{\ell\tilde\nu C,L}^{ILi} V_{\ell\tilde\nu C,R}^{JLi\,*}\n\,\,B_0\,(0,m_{\tilde\nu_L},m_{C_i})\nonumber\\\n&+& \sum_{i=1}^4\sum_{j=1}^6 m_{N_i} V_{\ell \tilde L N,L}^{Iji}\nV_{\ell \tilde L N,R}^{Jji\,*}\,\,B_0\,(0,m_{L_j},m_{N_i})\,.\n\label{eq:sigmaLR}\n\end{eqnarray}\nThe neutralino-slepton contributions to the 1PI vertex diagrams can be\nwritten as (the symbols in square brackets denote common arguments of\nthe 3-point functions)\footnote{As we shall see later using MI\n  expanded formulae (see Appendix~\ref{app:lhmi}), due to strong\n  cancellations the leading order terms in Eqs.~(\ref{eq:sigmaLR},\n  \ref{eq:hnn}) are suppressed by the ratios $m_\ell\/M_W$ or\n  $A'_l\/M_{SUSY}$. Additional terms linear in $m_\ell\/M_W$, not\n  included in~\eq{eq:hnn}, appear in 1PI vertex diagrams when external\n  lepton masses are not neglected. We calculated such terms and proved\n  explicitly that after performing the MI expansion they are\n  suppressed by additional powers of $v^2\/M_{SUSY}^2$ and therefore,\n  {\em a posteriori}, negligible. 
Thus, we do not display such terms\n in~\\eq{eq:hnn}.} {\n\\begin{eqnarray}\n(4\\pi)^2 \\Gamma_h^{JIK}(k_J,k_I) &=& - \\sum_{n=1}^4 \\sum_{l,m=1}^6\nV_{\\ell\\tilde L N,L}^{Jmn*} V_{\\ell\\tilde L N,L}^{Iln} V_{H\\tilde L\n \\tilde L}^{Klm} m_{N_n} C_0 [k_J,k_I-k_J,m_{N_n},m_{\\tilde\n L_m},m_{\\tilde L_l}]\\nonumber\\\\\n&& \\hspace{-38mm} - \\sum_{l,n=1}^4 \\sum_{m=1}^6 V_{\\ell\\tilde L\n N,R}^{Jnm*} V_{\\ell\\tilde LN,L}^{Inl} (V_{NHN,R}^{lKm} C_2 +\nV_{NHN,L}^{lKm} m_{N_l} m_{N_m} C_0 )[k_J,k_I-k_J,m_{\\tilde\n L_n},m_{N_m},m_{N_l}]\\;,\\nonumber\\\\\n(4\\pi)^2 \\Gamma_A^{JI}(k_J,k_I) &=& - \\sum_{n=1}^4 \\sum_{l,m=1}^6\nV_{\\ell\\tilde L N,L}^{Jmn*} V_{\\ell\\tilde L N,L}^{Iln} V_{A\\tilde L\n \\tilde L}^{1lm} m_{N_n} C_0 [k_J,k_I-k_J,m_{N_n},m_{\\tilde\n L_m},m_{\\tilde L_l}]\\nonumber\\\\\n&& \\hspace{-38mm} - \\sum_{l,n=1}^4 \\sum_{m=1}^6 V_{\\ell\\tilde L\n N,R}^{Jnm*} V_{\\ell\\tilde LN,L}^{Inl} (V_{NAN,R}^{l1m} C_2 +\nV_{NAN,L}^{l1m} m_{N_l} m_{N_m} C_0 )[k_J,k_I-k_J,m_{\\tilde\n L_n},m_{N_m},m_{N_l}]\\;,\\nonumber\\\\\n\\label{eq:hnn} \n\\end{eqnarray}\n}\nwhile the chargino-sneutrino triangle diagram is obtained by replacing\n$\\tilde L\\to\\tilde\\nu, N\\to C$ and adjusting the summation limits\nappropriately in vertex factors $V_{\\ldots}^{\\ldots}$ (see\nAppendix~\\ref{app:lagr}).\n\n\\subsection{Box contributions}\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{pl_box-eps-converted-to.pdf}\n\\caption{\\small Box diagrams with external charged leptons or\n quarks\\label{fig:box}}\n\\end{center}\n\\end{figure}\n\n4-fermion interactions are also generated by box diagrams. The\ncorresponding conventions for incoming and outgoing particles are\nshown in Fig.~\\ref{fig:box}. We calculate all box diagrams in the\napproximation of vanishing external momenta. 
The effective Lagrangian\nfor the 4-lepton interactions involves the quadrilinear operators\n\\begin{eqnarray}\nO_{VXY}^{JIKL} & = & (\\bar\\ell^{J}\\gamma^{\\mu}P_X \\ell^I) \\times\n(\\bar\\ell^K\\gamma_{\\mu}P_Y \\ell^L)\\,,\\nonumber\\\\\nO_{SXY}^{JIKL} & = & (\\bar\\ell^{J}P_X \\ell^I) \\times (\\bar\\ell^K P_Y\n\\ell^L)\\,,\\nonumber\\\\\nO_{TX}^{JIKL} & = & (\\bar\\ell^{J}\\sigma^{\\mu\\nu}\\ell^I) \\times\n(\\bar\\ell^K\\sigma_{\\mu\\nu}P_X \\ell^L)\\,,\n\\label{eq:lboxdec}\n\\end{eqnarray}\nwhere $X,Y$ stands for the chirality $L$ or $R$\\footnote{Recall that\n $(\\bar\\ell^{J}\\sigma^{\\mu\\nu}P_L\\ell^I) \\times\n (\\bar\\ell^K\\sigma_{\\mu\\nu}P_R \\ell^L)=0$.}. The Wilson coefficients\nof these operators are calculated from the box diagrams in\nFig.~\\ref{fig:box} and are denoted by $B_{NXY}^{JIKL}$ with $N=V$,$S$,\nor $B_{TX}^{JIKL}$.\n\nThe operator basis in \\eq{eq:lboxdec} is redundant. First, we note\nthat\n\\begin{eqnarray}\nO_{NXY}^{JIKL} &=& O_{NYX}^{KLJI}\\quad\\mathrm{for}~N=V,S,\\nonumber\\\\\nO_{TX}^{JIKL} &=& O_{TX}^{KLJI}.\n\\label{eq:flip}\n\\end{eqnarray}\nSecond, there are Fierz relations among different operators:\n\\begin{eqnarray}\nO_{VXX}^{JIKL} &=& O_{VXX}^{KIJL}, \\nonumber\\\\\nO_{VXY}^{JIKL} &=& -\\, 2\\, O_{SXY}^{KIJL} \\quad \\mathrm{for}~X\\neq Y,\n\\nonumber\\\\\nO_{TX}^{JIKL} &=& \\frac{1}{2} O_{TX}^{KIJL} -\\, 6\\, O_{SXX}^{KIJL},\\nonumber\\\\\nO_{SXX} ^{JIKL} &=& -\\frac{1}{2} O_{SXX}^{KIJL} - \\frac{1}{8}\nO_{TX}^{KIJL}.\n\\label{eq:fierz}\n\\end{eqnarray}\nFurthermore, we have \n\\begin{eqnarray}\nO_{VXY}^{JIKL\\,\\dagger} &=& O_{VXY}^{IJLK}, \\qquad\\qquad\nO_{SLL}^{JIKL\\,\\dagger} \\;=\\; O_{SRR}^{IJLK}, \\nonumber\\\\\nO_{SLR}^{JIKL\\,\\dagger} &=& O_{SRL}^{IJLK}, \\qquad\\qquad\nO_{TL}^{JIKL\\,\\dagger} \\;=\\; O_{TR}^{JILK}.\n\\label{eq:herm}\n\\end{eqnarray}\nEqs.~(\\ref{eq:flip}) to (\\ref{eq:herm}) must be taken into account\nwhen deriving the effective Lagrangian.\n\n\n\\subsubsection{Leptonic operators with 
$\\mathbf{J\\neq K}$ and\n $\\mathbf{I\\neq L}$\\label{sec:jnotk}}\n\n\nThe case with both $J\\neq K$ and $I\\neq L$ covers the decays\n$\\tau^{\\mp} \\to \\mu^{\\mp} e^{\\mp}\\ell^{\\pm} $ with $\\ell=e$ or $\\mu$,\nbut does not appear in $\\mu^{\\mp}$ decays. We can therefore specify\nto $I=3$ for the effective Lagrangian. Furthermore, we can choose\neither $(J,K)=(1,2)$ or $(J,K)=(2,1)$ without the need to sum over\nboth cases: The Fierz identities in \\eq{eq:fierz} permit to bring all\noperators into the form $(\\overline{e}\\ldots\\tau) \\times (\\overline{\\mu}\\ldots\n\\ell)$ (corresponding to the case $(J,K)=(1,2)$) or into an\nalternative form with $e$ interchanged with $\\mu$. Thus we have\n\\begin{eqnarray}\nL_{4\\ell}^{J3KL} &=&\n\\sum_{L=1,2} \\left[ \\sum_{\\stackrel{N=V,S}{X,Y=L,R}} B_{NXY}^{J3KL}\n O_{NXY}^{J3KL} \\, +\\! \\sum_{X=L,R} B_{TX}^{J3KL} O_{TX}^{J3KL}\n \\right]\n\\;+\\; \\mbox{h.c.}\\nonumber\\\\ &&\\qquad\\qquad\\qquad \\qquad\\mbox{with }J\\neq\nK\\mbox{ and }J,K,L \\leq 2,\n \\label{eq:lboxbasis}\n\\end{eqnarray}\nas the four-lepton interaction in the Lagrangian. Note that the\n``$+$h.c.'' piece of $L_{4\\ell}^{JK}$ describes $\\tau^+$ decays.\n\nThe Wilson coefficients $B_{NXY}^{J3KL}$ and $ B_{TX}^{J3KL}$ in\n\\eq{eq:lboxbasis} are simply identical to the results of the sum of\nall contributing box diagrams to the decay amplitude. The latter is\ngiven in \\eq{eq:lllla0} with the coefficients of the spinor structure\nin the right column of Tab.~\\ref{tab:lbox}. The relation to the\nanalytic expressions in Eqs.~(\\ref{eq:lboxa}) to (\\ref{eq:lboxd}) is\n\\begin{eqnarray}\nB_{NXY}^{JIKL} &=& B_{A\\, NXY}^{JIKL} + B_{B\\, NXY}^{JIKL} + \n B_{C\\, NXY}^{JIKL} + B_{D\\, NXY}^{JIKL},\\qquad\\mbox{for }N=V,S \n \\label{eq:boxsum}\n\\end{eqnarray}\nand an analogous expression for $B_{TX}^{JIKL}$. 
\n\n\\subsubsection{Leptonic operators with $\\mathbf{J= K}$ and $\\mathbf{I\\neq L}$}\n\nThe case $J=K$ occurs for the decays $\\mu^\\pm \\to e^\\pm e^\\pm e^\\mp$\nand $\\tau^\\pm \\to \\ell^\\pm \\ell^\\pm \\ell^{\\mp\\prime}$ with\n$\\ell,\\ell^\\prime=e,\\mu$. Thanks to the Fierz identities in\n\\eq{eq:fierz} we may restrict the operator basis to\n\\begin{eqnarray}\nO_{VXX}^{JIJL},\\qquad\\quad \nO_{VXY}^{JIJL}\\;=\\; -2 O_{SXY}^{JIJL},&& \\qquad \nO_{SXX}^{JIJL}\\;=\\; -\\frac{1}{12} O_{TX}^{JIJL},\\nonumber\\\\\n&& \n\\qquad \\quad\\mbox{with }X,Y=L,R\\mbox{ and }X\\neq Y.\n\\label{eq:jkbasis}\n\\end{eqnarray}\nThe four-lepton piece of the effective Lagrangian for the decay\n$\\ell^{I\\mp} \\to \\ell^{J\\mp} \\ell^{J\\mp} \\ell^{L\\pm} $ reads:\n\\begin{eqnarray}\nL_{4\\ell}^{JIJL} &=& \\sum_{L=1,2} \\left[ \\sum_{X,Y=L,R}\n \\widetilde{C}_{VXY}^{JIJL} O_{VXY}^{JIJL} \\, +\\! \\sum_{X=L,R}\n \\widetilde{C}_{SXX}^{JIJL} O_{SXX}^{JIJL} \\right] \\;+\\;\n\\mbox{h.c.}\\nonumber\\\\ &&\\qquad\\qquad\\qquad \\qquad\\mbox{with }L,J